diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzqgjy" "b/data_all_eng_slimpj/shuffled/split2/finalzzqgjy" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzqgjy" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\setcounter{equation}{0}\\label{S1}\n\nDirected hypergraphs are the generalization of digraphs and have been widely used in \ndiscrete mathematics and computer science, see e.g. \\cite{Sur}, \\cite{Berge}, \\cite{mor}, and \\cite{Gallo}. \nIn particular, the directed hypergraphs give effective tools for the investigation of databases and \nstructures on complicated discrete objects. \n\nRecently, the topological properties of digraphs, hypergraphs, multigraphs, and \nquivers have been studied using various (co)homologies theories, consult e.g. \\cite{Embed}, \n\\cite{Graham}, \\cite{Betti}, \\cite{Parks}, \\cite{Mi3}, \\cite{Forum}, \\cite{Hyper}, \\cite{Pcomplex}. \n\n In this paper, we construct several functorial and homotopy invariant homology theories on \n the category of directed hypergraphs using the path homology theory introduced in \\cite{Mi4}, \\cite{Hyper}, \\cite{Mi2}, \\cite{Pcomplex}, and \n\\cite{Forum}. \n\n A rich structure of a directed hypergraph gives a number of opportunities to define \n functorially a path complex for the category of hypergraphs which we construct in the paper. \n We describe these constructions in Section \\ref{S3}. We introduce also a notion of a homotopy in the category of directed hypergraphs and describe functorial relations between homotopy categories of directed hypergraphs, digraphs, and path complexes. \n \nThe essential difference from the situation of the category of digraphs is \nthe existence of the notion of the density of the path complex that we introduce for the two of the introduced path complexes of directed hypergraphs. This notion \ngives an opportunity to define a filtration on the corresponding path complex and hence a filtration on\nits path homology groups. We consider all homology groups with coefficients in a unitary commutative ring $R$.\n\nIn Section \\ref{S2}, we define a category of directed hypergraphs and introduce the notion of homotopy in this category. \n\nIn Section \\ref{S3}, we construct several path homology theories on the category of directed hypergraphs.\n\n\n \\section{Path complexes and homotopy of directed hypergraphs}\\label{S2}\n\\setcounter{equation}{0}\n\nLet $\\Pi=(V,P)$ be a path complex with $V=\\{0,\\dots, n\\}$ and $J=\\{0,1\\}$ be a set, see \\cite{Hyper}, \\cite{Pcomplex}. \nDefine a path complex $\\Pi^{\\prime}=(V^{\\prime}, P^{\\prime})$ where $V^{\\prime}=\\{0^{\\prime}, \\dots, n^{\\prime}\\}$ and \n$p^{\\prime}=(i_0^{\\prime}\\dots i_n^{\\prime})\\in P^{\\prime}$ iff $p=(i_0\\dots i_n)\\in P$. We identify \n$V\\times J=V\\times\\{0\\}\\amalg V\\times\\{1\\}$ with $V\\amalg V^{\\prime}$. 
Define a path complex $\\Pi^{\\uparrow}=(V\\times J, P^{\\uparrow})$ where \n\\begin{equation*}\n\\begin{matrix}\nP^{\\uparrow}=P\\cup P^{\\prime}\\cup P^{\\#}, \\\\\n{P^\\#}=\\{ q^\\#_k=(i_0\\dots i_ki_k^{\\prime}i^{\\prime}_{k+1}\\dots i_n^{\\prime})\\,|\\,q=(i_0\\dots i_ki_{k+1}\\dots i_n)\\in P \\}.\\\\\n\\end{matrix}\n\\end{equation*}\nWe have morphisms \n$\ni_{\\bullet}\\colon \\Pi \\to \\Pi^{\\uparrow}\n$\nand $j_{\\bullet}\\colon \\Pi \\to \\Pi^{\\uparrow}$\nthat are induced by the natural inclusions of $V$ onto $V\\times \\{0\\}$ and onto $V\\times\\{1\\}$, respectively.\n \n\\begin{definition} \\label{d2.1}\\rm (i) A \\emph{hypergraph} is a pair $G=(V,E)$ consisting of a non-empty \n set $V$ and a set $E=\\{\\bold e_1,\\dots, \\bold e_n\\}$ of distinct and non-ordered subsets of $V$ such that \n $\n \\bigcup_{i=1}^n \\bold e_i=V\n $ and every $\\mathbf e_i$ contains strictly more than one element. \n The elements of $V$ are called \\emph{vertices} and the elements of $E$ are called \\emph{edges}.\n\n (ii) A \\emph{directed hypergraph} $G$ is a pair $(V,E)$ consisting of a set $V$ and a set \n $E=\\{\\bold e_1,\\dots , \\bold e_n\\}$ where each $\\bold e_i\\in E$ is an ordered pair $(A_i,B_i)$ of disjoint non-empty subsets of the set $V$ such that \n $V=\\bigcup_{\\bold e_i\\in E} (A_i\\cup B_i)$. The elements of $V$ are called \\emph{vertices} and the elements of $E$ are called \\emph{arrows}. \n The set $A=\\mbox{orig }(A\\rightarrow B)$ is called \n the \\emph{origin of the arrow} and the set $B=\\mbox{end}(A\\rightarrow B)$\n is called the \\emph{end of the arrow}. The elements of $A$ are called \n the \\emph{initial vertices} of $A\\to B$ and the elements of $B$ are called its \\emph{terminal vertices}. \n\n\\end{definition}\n\n \nFor a finite set $X$ let ${\\mathbf{P}}(X)$ denote as usual the power set of $X$. \nWe define a set \n$\n\\mathbb P(X)\\colon = ({\\mathbf{P}}(X)\\setminus \\emptyset)\\times \n({\\mathbf{P}}(X)\\setminus \\emptyset)\n$\nconsisting of ordered pairs of non-empty subsets of $X$. \nEvery map of finite sets $f\\colon V\\to W$ induces a map $\\mathbb P(f)\\colon\\mathbb P(V)\\to \\mathbb P(W)$. \nFor a directed hypergraph $G=(V,E)$, by Definition \\ref{d2.1}, we have the natural map $\\varphi_G\\colon E\\to \\mathbb P(V)$ defined by $\\varphi_G(A\\to B)\\colon =(A,B)$. \n\n\\begin{definition}\\label{d2.2} \\rm Let $G=(V_G,E_G)$ and\n$H=(V_H,E_H)$ be two directed hypergraphs. A \\emph{morphism} $f\\colon G\\to H$ is given by a pair of maps $f_V\\colon V_G\\to V_H$ \nand $f_E\\colon E_G\\to E_H$ \n such that the following diagram \n$$\n\\begin{matrix}\nE_G&\\overset{\\varphi_G}\\longrightarrow &\\mathbb P(V_G)\\\\\n\\ \\ \\downarrow f_E&&\\ \\ \\downarrow \\mathbb P(f_V)\\\\\nE_H&\\overset{\\varphi_H}\\longrightarrow &\\mathbb P(V_H)\\\\\n\\end{matrix}\n$$\nis commutative. \n\\end{definition}\n\nLet us denote by $\\mathcal{DH}$ the category whose objects are directed hypergraphs and whose morphisms are morphisms of directed hypergraphs. 
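\n\nAs an illustration of Definitions \\ref{d2.1} and \\ref{d2.2}, consider the following small example of our own (it is not taken from the sources cited above). Let $G=(V,E)$ with $V=\\{1,2,3\\}$ and $E=\\{\\bold e_1=(\\{1\\}\\to \\{2,3\\}), \\bold e_2=(\\{2\\}\\to \\{3\\})\\}$. This is a directed hypergraph in the sense of Definition \\ref{d2.1}, since each arrow is an ordered pair of disjoint non-empty subsets of $V$ and $V=(\\{1\\}\\cup \\{2,3\\})\\cup (\\{2\\}\\cup \\{3\\})$, and the map $\\varphi_G$ is given by\n$$\n\\varphi_G(\\bold e_1)=(\\{1\\},\\{2,3\\}), \\ \\ \\varphi_G(\\bold e_2)=(\\{2\\},\\{3\\}).\n$$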
\n\n For a directed hypergraph $G=(V_G,E_G)$, we can consider \nsubsets \n${\\mathbf{P}}_0(G)\\subset {\\mathbf{P}}(V_G)\\setminus \\emptyset$, \n${\\mathbf{P}}_1(G)\\subset {\\mathbf{P}}(V_G)\\setminus \\emptyset$ and \n${\\mathbf{P}}_{01}(G)={\\mathbf{P}}_{0}(G)\\cup {\\mathbf{P}}_{1}(G)$\nby setting\n\\begin{equation*}\n\\begin{matrix}\n{\\mathbf{P}}_0(G)=\\{A\\in {\\mathbf{P}}(V_G)\\setminus \\emptyset\\,|\\, \\exists \nB\\in {\\mathbf{P}}(V_G)\\setminus \\emptyset: \\ A\\to B\\in E_G\\}, \\\\\n{\\mathbf{P}}_1(G)=\\{B\\in {\\mathbf{P}}(V_G)\\setminus \\emptyset\\,|\\, \\exists \nA\\in {\\mathbf{P}}(V_G)\\setminus \\emptyset: \\ A\\to B\\in E_G\\}. \\\\\n\\end{matrix}\n\\end{equation*} \n\n\\begin{definition}\\label{d2.3}\\rm Let $G=(V_G,E_G)$ and $H=(V_H, E_H)$ be directed hypergraphs. \nWe define the \\emph{box product} $G\\Box H$ as a directed hypergraph \nwith the set of vertices $V_{G\\Box H}=V_{G}\\times V_{H}$ and the set of arrows \n$E_{G\\Box H}$ consisting of the union of the arrows \n$\n\\{A\\times C\\to B\\times C\\}\n$\nwith $(A\\to B)\\in E_G$, $C\\in {\\mathbf{P}}_{01}(H)$ and the arrows \n$ \\{A\\times C\\to A\\times D\\}$\nwith \n$ (C\\to D)\\in E_H$, $A\\in \n{\\mathbf{P}}_{01}(G)$.\n\\end{definition}\n Every connected digraph $H=(V_H, E_H)$ can be considered as a directed hypergraph \n with the same set of vertices and with the set of arrows of the form $\\{v\\}\\to \\{w\\}$ with $(v\\to w)\\in E_H$. \n Hence, Definition \\ref{d2.3} gives naturally \na box product $G\\Box H$ of a directed hypergraph $G$ and a connected digraph $H$. \nNote that a line digraph $I_n$, defined for example in \\cite[Sec. 3.1]{MiHomotopy}, is connected and that there are two digraphs $I_1$, namely $0\\to 1$ and $1\\to 0$. 
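\n\nTo illustrate Definition \\ref{d2.3}, consider the following example of our own. Let $G$ be the directed hypergraph with $V_G=\\{1,2,3\\}$ and the single arrow $\\{1\\}\\to \\{2,3\\}$, and let $H=I_1=(0\\to 1)$ considered as a directed hypergraph. Then ${\\mathbf{P}}_{01}(G)=\\{\\{1\\},\\{2,3\\}\\}$ and ${\\mathbf{P}}_{01}(I_1)=\\{\\{0\\},\\{1\\}\\}$, so that $G\\Box I_1$ has the set of vertices $V_G\\times \\{0,1\\}$ and the four arrows\n$$\n\\{1\\}\\times \\{i\\}\\to \\{2,3\\}\\times \\{i\\}\\ (i=0,1), \\ \\ \\{1\\}\\times\\{0\\}\\to \\{1\\}\\times\\{1\\}, \\ \\ \\{2,3\\}\\times\\{0\\}\\to \\{2,3\\}\\times\\{1\\}.\n$$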
\\ \\ \\ \n$ $\\blacksquare$\n\\end{proposition}\n\nThe relation \"to be homotopic\" is an equivalence relation on the set of morphisms between two directed \nhypergraphs, and homotopy equivalence is an equivalence relation on the set of directed hypergraphs. \nThus, we can consider a category \n${h} \\mathcal{ DH}$ whose objects are directed hypergraphs and morphisms are the classes of homotopic morphisms. \nWe shall call the category ${h} \\mathcal{DH}$ by \n\\emph{homotopy category of directed hypergraphs}.\n\n\n\\section{Path homology of directed hypergraphs}\\setcounter{equation}{0}\\label{S3}\n\n\\subsection{k-connective path homology}\\label{S31}\n\n For a directed hypergraph $G= (V, E)$ and $c=1,2,3, \\dots $\ndefine a path complex \\cite[S3.1]{Pcomplex} $\\mathfrak C^c(G)=(V^c, P_G^c)$ where \n$V^c=V$ and a path $(i_0\\dots i_n)\\in P_{V}$ lies in $P_G^c$ iff \nfor any pair of consequent vertices $(i_{k}, i_{k+1})$ of the path, we have $i_{k}=i_{k+1}$ or\nthere are at least $c$ different edges $\\bold e_1=(A_1\\to B_1), \\dots , \\bold e_c=(A_c\\to B_c)$ such that \nthe vertex $i_k$ is the initial vertex and the vertex $i_{k+1}$ is the terminal vertex of every edge $\\bold e_i$. \nThe number $c$ is called the \\emph{density} of the path complex $\\mathfrak C^c(G)$. It is clear that we have \na filtration \n\\begin{equation}\\label{3.1}\n\\mathfrak C(G)= \\mathfrak C^{1}(G)\\supset \\mathfrak C^{2}(G)\\supset \\mathfrak C^{3}(G)\\supset \\dots\n\\end{equation}\n \n\n\\begin{proposition}\\label{p3.1}\n For every morphism of directed hypergraphs $f\\colon G\\to H$ define a morphism \n$$\n\\mathfrak C(f)=(f_V^1, f_p^1)\\colon \\mathfrak C(G)\\to \\mathfrak C(H)\n$$ \nof path complexes putting $f_V^1\\colon =f_V$ and \n$f_p^1\\colon=f_p|_{P^1_G} \\colon P^1_G\\to P^1_H$ where $f_p$ is defined \nby $f_p(i_0\\dots i_n)= \\left(f(i_0)\\dots f(i_n)\\right)$. Then \n we have the functor $\\mathfrak C$ from the category $\\mathcal{DH}$ of directed hypergraphs to the category $\\mathcal P$ of path complexes.\n \\quad $ \\blacksquare$\n\\end{proposition}\n\n The functor $\\mathfrak C$ \n provides the functorial path homology theory on the category $\\mathcal{DH}$\nof directed hypergraphs. For any directed hypergraph $G$ and $k\\in \\mathbb N$, we set\n $\n H_*^{\\bold {c}( k)}(G)\\colon = H_*(\\mathfrak C^k(G))$ as \\emph{regular path homology groups} \nof path complex\n$\\mathfrak C^k(G)$, see \\cite[S2]{Hyper}. We denote $H_*^{\\bold c}(G)\\colon =H_n^{\\bold {c}(1)}(G)$. \n\nWe call these homology groups \\emph{the connective path homology groups} \nand for $k\\geq 2$ \\emph{the $k$-connective path homology groups} of the\n directed hypergraph $G$, respectively. The connective path homology theory is\n functorial by Proposition \\ref{p3.1}. However\nthe $k$-connective homology theory $H_n^{\\bold{c}( k)}(G)$ is not functorial for \n$k\\geq 2$ as it follows from Example \\ref{e3.2} below. For any directed hypergraph\n $G$ the filtration in\n (\\ref{3.1}) induces homomorphisms \n\\begin{equation*}\nH_n^{\\bold c}(G)=H_n^{\\bold {c}( 1)}(G)\\longleftarrow H_n^{\\bold{c}( 2)}G)\\longleftarrow H_n^{\\bold{c}( 3)}(G)\\longleftarrow \\dots \\ \\ .\n\\end{equation*}\n\n Let $\\mathcal D$ be a category of digraphs without loops \n\\cite[S2]{MiHomotopy}. A category $\\mathcal G$ of graphs is defined similarly \n\\cite[S6]{MiHomotopy}. \n\nLet $G=(V, E)$ be a directed hypergraph. 
\n\n Let $\\mathcal D$ be the category of digraphs without loops \n\\cite[S2]{MiHomotopy}. A category $\\mathcal G$ of graphs is defined similarly \n\\cite[S6]{MiHomotopy}. \n\nLet $G=(V, E)$ be a directed hypergraph. Define a digraph \n$\\mathfrak G(G)=\\left(V^d_G, E^d_G\\right)$ where $V_G^d=V$ and an arrow $v\\to w$ lies in $E^d_G$ iff \nthere is a hyperedge $(A\\to B)\\in E$ such that $v\\in A, w\\in B$. \n\n\\begin{example}\\label{e3.2} \\rm i) Let $G=(V,E)$ be a directed hypergraph such that \n$V$ is the union $A\\cup B$ of two non-empty sets with empty intersection and the set $E$ consists \nof one element $\\bold e=(A\\to B)$. Then $\\mathfrak G(G)$ is a complete bipartite\n digraph with arrows \nfrom the vertices lying in $A$ to the vertices lying in $B$. \n\nii) Let $G=(V_G, E_G)$ and $H=(V_H, E_H)$ be two directed hypergraphs with \n$V_G=\\{1,2,3,4\\}, E_G=\\{\\mathbf e_1=(\\{1\\}\\to \\{2,3\\}), \\mathbf e_2=(\\{1\\}\\to \\{2,4\\})\\}$, \n$V_H=\\{a,b,c\\}, E_H=\\{\\mathbf e_1^{\\prime}=(\\{a\\}\\to \\{b,c\\})\\}$. The map $f_V\\colon V_G\\to V_H$, \ngiven by $f_V(1)=a, f_V(2)=b, f_V(3)=f_V(4)=c$, induces a morphism $f$ of directed hypergraphs. \nHowever the map $f$ does not induce a morphism from $\\mathfrak C^2(G)$ to $\\mathfrak C^2(H)$. Indeed, the path $(1\\,2)$ lies in $P^2_G$, since the pair $(1,2)$ is covered by the two edges $\\mathbf e_1, \\mathbf e_2$, while its image $(a\\,b)$ does not lie in $P^2_H$, since the pair $(a,b)$ is covered only by the single edge $\\mathbf e_1^{\\prime}$.\n\\end{example}\n\n For every morphism $f=(f_V,f_E)\\colon G\\to H$ of directed hypergraphs, define a map \n$\n\\mathfrak G(f)\\colon V^d_G \\to V^d_H \\ \\ \\text{by}\\ \\mathfrak G(f)=f_V.\n$\nFor any arrow $(v\\to w)\\in E^d_G$, we have $(f_V(v)\\to f_V(w))\\in E^d_H$ and the morphism \n$ \\mathfrak G(f)$ of digraphs is well defined. Thus we have a functor \n$\\mathfrak G$ from the category $\\mathcal{DH}$ of directed hypergraphs to the category $\\mathcal D$ of digraphs.\n\\emph{Regular path homology of digraphs} was constructed in \\cite{Axioms}, \\cite{Mi3}. \nIt is based on the natural functor $\\mathfrak D$ from the category $\\mathcal D$ of \ndigraphs to the category $\\mathcal P$ of path complexes.\n\n\\begin{theorem}\\label{t3.3} For every directed hypergraph $G$ there is an\nisomorphism \n$\nH^{\\mathbf c}_*(G)\\cong H_*(\\mathfrak{D}\\circ\\mathfrak{G}(G))\n$\n of path homology groups.\n\\end{theorem}\n\\begin{proof} The path complexes $\\mathfrak C(G)$ and $\\mathfrak D\\circ \\mathfrak G(G)$ coincide. \n\\end{proof}\n\n\n\\begin{example}\\label{e3.4} \\rm The following example illustrates the technique of computation of the connective path homology groups $H_*^{\\bold c(k)}(G)$. \nFor \n$k\\geq 3$ in the presented case, there \nis \nnothing to compute. Let $R=\\mathbb R$ be the ring of coefficients. Consider a directed hypergraph $G=(V_G,E_G)$ for which \n $\nV_G=\\{1,2, 3, 4\\}, \\ \\ E_G=\\{\\bold e_1,\\bold e_2,\\bold e_3, \\bold e_4, \\bold e_5, \\bold e_6\\},\n$\n $\n\\bold e_1=(\\{1\\}\\to \\{2\\}), \\bold e_2=(\\{2\\}\\to \\{3, 4\\}), \\bold e_3=(\\{4\\}\\to \\{1\\}), \n$\n$\n\\bold e_4=(\\{1\\}\\to \\{2,3\\}), \\bold e_5=(\\{2\\}\\to \\{3\\}), \\bold e_6=(\\{2\\}\\to \\{4\\})$.\n \nWe compute the homology of the path complex $\\mathfrak C^c(G)=(V^c, P_G^c)$ as in \\cite{Hyper}. 
We have \n$\n\\mathcal{R}_0^{reg}=\\left<1, 2, 3, 4\\right>=\\Omega_0\n$ and \n$\n\\mathcal {R}_1^{reg}=\\left<e_{12}, e_{13}, e_{23}, e_{24}, e_{41}\\right>\n$.\nWe get $\\partial(e_{ij})\\in \\mathcal{R}_0^{reg}$ for all basic elements $e_{ij}\\in \\mathcal{R}_1^{reg}$, so \n$\n\\Omega_1=\\mathcal{R}_1^{reg}\n$.\nThus, $\\Omega_1$ is generated by all directed edges of the digraph $\\mathfrak G(G)$ presented below\n$$\n\\begin{matrix}\n& & \\underset{3}\\bullet &&&&\\\\\n &\\nearrow &&\\nwarrow&&&\\\\\n\\overset{1}\\bullet & &\\to & &\\overset{2}\\bullet& &\\\\\n &\\nwarrow &&\\swarrow&&&\\\\\n& & \\underset{4}\\bullet &&&&\\\\\n\\end{matrix}\n$$\nFrom the definition of $\\mathfrak G(G)$, it follows that $\\Omega_i=0$\nfor $i\\geq 2$ and the homology of the chain complex $\\Omega_*$ coincides with the regular path homology of the digraph $\\mathfrak G(G)$. \nHence\n $\nH_0^{\\bold c(1)}(G)= H_1^{\\bold c(1)}(G)=\\mathbb R$ and $H_i^{\\bold c(1)}(G)=0$ for $i\\geq 2$. \n\n For \n$H_i^{\\bold c(2)}(G)$, we have \n$\\Omega_0=\\left<1, 2, 3, 4\\right>$ and by definition\n$\n\\Omega_1=\\left<e_{12}, e_{23}, e_{24}\\right>\n$.\nMoreover, $\\Omega_i=0$\nfor $i\\geq 2$. Thus, the homology groups $H_i^{\\bold c(2)}(G)$ coincide with the homology groups \nof the digraph which has the set of vertices $V_{\\mathfrak G(G)}$ and the set of arrows obtained \nfrom $E_{\\mathfrak G(G)}$ by deleting the arrows $(1\\to 3)$ and \n$(4\\to 1)$. Hence, $\nH_0^{\\bold c(2)}(G)=\\mathbb R$ and $H_i^{\\bold c(2)}(G)=0$ for $i\\geq 1$. \nFor $k\\geq 3$ we have \n $\nH_0^{\\bold c(k)}(G)=\\mathbb R^4$ and $H_i^{\\bold c(k)}(G)=0$ for $i\\geq 1$.\n\\end{example} \n\n\\begin{lemma}\\label{l3.5} Let $G=(V,E)$ be a directed hypergraph and \n$I_1=(0\\to 1)$. We have a natural isomorphism\n$\\mathfrak{C}(G\\Box I_1)\\cong[\\mathfrak{C}(G)]^{\\uparrow}$\nof \npath complexes.\n\\end{lemma}\n\\begin{proof} By Definition \\ref{d2.3} a directed hypergraph $G\\Box I_1=(V_{G\\Box I_1},E_{G\\Box I_1}) $ \nhas the set of vertices $V_{G\\Box I_1}= V\\times J=V\\times \\{0,1\\}$ which we \nidentify with $V\\cup V^{\\prime}$, where $V=\\{0,\\dots, n\\}, \\ V^{\\prime}=\\{0^{\\prime}, \\dots, n^{\\prime}\\}$, and the set of edges $E_{G\\Box I_1}$ is the union $E^0\\cup E^1\\cup E^{01}$\nof the sets \n$E^i=\\{A\\times\\{i\\}\\to B\\times\\{i\\}\\}$ with $(A\\to B)\\in E_G$ for $i=0,1$ \nand $E^{01}=\\{C\\times\\{0\\}\\to C\\times\\{1\\}\\}$ with $C\\in {\\mathbf{P}}_{01}(G)$. Let \n$q=\\left(i_0\\dots i_n\\right)$ be a path lying in $\\mathfrak{C}(G\\Box I_1)$. It follows from the definition that there are only three possibilities, namely\n\n(1) all the vertices $i_j\\in V\\times \\{0\\}$ and, hence, $q$ determines the unique path \nin $\\mathfrak{C}(G)$, \n\n(2) all the vertices $i_j\\in V\\times \\{1\\}$ and, hence, $q$ determines the unique path \nin $[\\mathfrak{C}(G)]^{\\prime}$, \n\n(3) there exists exactly one pair $(i_k,i_{k+1})$ of consecutive vertices in $q$ such that $i_k\\in C\\times \\{0\\}, \ni_{k+1}\\in C\\times \\{1\\}$ for $C\\in {\\mathbf{P}}_{01}(G)$. \n\nThus, the union of the paths from (1)-(3) on the set of vertices \n$V\\times J$ defines the path complex $[\\mathfrak{C}(G)]^{\\uparrow}$ and vice versa. \n\\end{proof}\n\n\\begin{theorem}\\label{t3.6} For a directed hypergraph $G$, the connective path homology groups $H^{\\bold c}_*(G)$\nare homotopy invariant.\n\\end{theorem}\n\\begin{proof} By Definition \\ref{d2.4}, it is sufficient to prove \n homotopy invariance for a one-step homotopy. Then the result follows from Lemma \\ref{l3.5} and \\cite[Th. 3.4]{Hyper}.\n\\end{proof}
\n\n\\subsection{Bold path homology}\\label{S32}\n\nLet $p=\\left(i_0\\dots i_n\\right)$ and $q=\\left(j_0\\dots j_m\\right)$ be two paths of a path complex \n$\\Pi$ with $i_n=j_0$. The \\emph{concatenation} $p\\vee q$ of these paths is the path given by \n$\np\\vee q=\\left(i_0\\dots i_nj_1\\dots j_m\\right)\n$. \nThe concatenation is well defined only if\n $i_n=j_0$.\n\nFor a directed hypergraph $G= (V, E)$, \ndefine a path complex $\\mathfrak B(G)=(V^b_G, P_G^b)$ where $V^b_G=V$ and \na path $q=(i_0\\dots i_n)\\in P_{V}$ lies in $P_G^b$ iff there is a sequence of \n hyperedges $(A_0\\to B_0), \\dots, (A_r\\to B_r)$ in $E$ such that \n $B_i\\cap A_{i+1}\\ne \\emptyset$ for $0\\leq i\\leq r-1$ and the path $q$ has the \n presentation \n\\begin{equation}\\label{3.2}\n\\left(p_0\\vee v_0w_0 \\vee p_1 \\vee v_1w_1\\vee p_2\\vee \\dots \\vee p_{r}\\vee v_r w_r\\vee p_{r+1}\\right)\n\\end{equation}\nwhere $p_0\\in P_{A_0}$, $p_{r+1}\\in P_{B_r}$, $v_i\\in A_i$, $w_i\\in B_i$, $p_i\\in P_{B_{i-1}}\\cap P_{A_{i}}$ \nfor $1\\leq i\\leq r$, and all concatenations in (\\ref{3.2}) are well defined. Note that, in the case of an empty \nsequence of edges $A_i\\to B_i$, for an edge $A\\to B$ every path $q\\in P_A$ and every path $q\\in P_B$ lies\nin $P^b_G$. \n\n \\begin{proposition}\\label{p3.7} Let $f\\colon G=(V_G, E_G)\\to H=(V_H,E_H)$ be a morphism of directed hypergraphs. \nDefine a morphism of path complexes\n$$\n\\mathfrak B(f)=(f^b_V, f_p^b) \\colon (V_G^b,P^b_G)\\to (V_H^b, P^b_H)\n$$ \nby $f^b_V=f\\colon V_G^b= V_G\\to V_H=V_H^b$ and $f_p^b=f_p|{_{P^b_G}}$, where \n$f_p$ is defined as in Proposition \\ref{p3.1}.\n Thus, we obtain a functor $\\mathfrak B$ from the category $\\mathcal{DH}$ of directed hypergraphs to the category $\\mathcal P$ of path complexes. \\quad $ \\blacksquare$\n\\end{proposition}\n\n \n Let us define \\emph{the bold path homology groups} of a directed hypergraph \n $G$ by \n$\nH_*^{\\bold b}(G)\\colon = H_*(\\mathfrak B(G))\n$.\nBy \nProposition \\ref{p3.7}, we obtain a functorial \n path homology theory on the category $\\mathcal{DH}$ of directed hypergraphs. \n\n\\begin{example}\\label{e3.8} \\rm Let $G=(V, E)$ be a directed \n hypergraph such that for every edge $\\bold e=(A\\to B)\\in E$ the sets $A$ and $B$ \nare one-vertex sets, \n$A=\\{v\\}, B=\\{w\\}, v,w\\in V$. \n We can consider such a hypergraph $G$ as a digraph, and\n $H^{\\bold c}_*(G)\\cong H^{\\bold b}_*(G)$.\nOn the category of connected digraphs, which can be considered as a\n subcategory of the category of directed hypergraphs, the bold path homology groups are \n naturally isomorphic to the connective path homology groups and to the regular path\n homology groups $H_*(G)$ defined in \\cite{Axioms}. \n\\end{example} 
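\n\nTo illustrate presentation (\\ref{3.2}), consider a small example of our own: let $G$ have the single hyperedge $A\\to B$ with $A=\\{1,2\\}$ and $B=\\{3,4\\}$. Taking the one-term sequence $(A_0\\to B_0)=(A\\to B)$, so that $r=0$, and choosing $p_0=(1\\,2)\\in P_{A_0}$, $v_0=2$, $w_0=3$, $p_1=(3\\,4)\\in P_{B_0}$, presentation (\\ref{3.2}) gives the path\n$$\nq=p_0\\vee v_0w_0\\vee p_1=(1\\,2\\,3\\,4)\\in P^b_G,\n$$\nwhile the choice $p_0=(1)$, $v_0=1$, $w_0=4$, $p_1=(4)$ gives the path $q=(1\\,4)\\in P^b_G$.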
\n\n\\begin{example}\\label{e3.9} \\rm Now we compute the bold path homology groups \n $H^{\\bold b}_*(G)$ of the directed hypergraph $G$ from Example \\ref{e3.4} in dimensions 0,1,2 for $R=\\mathbb R$. \n First, we describe the modules $\\mathcal R_n^{reg}(\\mathfrak B(G))$ for $0\\leq n\\leq 4$. We have \n$$\n\\mathcal R_0^{reg}=\\langle e_1,e_2,e_3,e_4\\rangle, \\ \\\n\\mathcal R_1^{reg}=\\langle e_{12}, e_{13},\ne_{23},e_{24},\ne_{32}, e_{34}, \ne_{43},e_{41}\\rangle,\n $$\n$$\n\\mathcal R_2^{reg}=\\langle e_{123},e_{124},\n e_{132},\n e_{232}, e_{234}, \ne_{241}, e_{243},\ne_{323},\n e_{343},\n e_{434},\ne_{412}, e_{413}\\rangle,\n $$\n$$\n\\begin{matrix}\n\\mathcal R_3^{reg}=\\langle e_{1232},e_{1234}, \ne_{1241}, e_{1243},\ne_{1323},\ne_{2323}, \ne_{2343},\\\\\ne_{2412}, e_{2413},\ne_{2434},\ne_{3232},\ne_{3434}, \ne_{4343},\ne_{4123}, e_{4124},\ne_{4132}\n\\rangle,\n\\end{matrix}\n $$\n$$\n\\begin{matrix}\n\\mathcal R_4^{reg}=\n\\langle e_{12323},\ne_{12343}, \ne_{12412}, e_{12413}, \n e_{12434},\ne_{13232},\ne_{23232}, \ne_{23434},\\\\\ne_{24123},\n e_{24132},\ne_{24343},\ne_{32323},\ne_{34343}, \ne_{43434},\ne_{41232},e_{41234},\n e_{41243},\ne_{41323}\n\\rangle.\n\\end{matrix}\n $$\nWe also have\n$$\n \\Omega_n=\\mathcal{R}_n^{reg} \\ \\ \\text{for} \\ \\ n=0,1.\n$$\nThus $\\Omega_0$ is generated by all the vertices and $\\Omega_1$ is generated \nby all directed edges of the digraph $H$ shown in Fig. 1. \n\n\\begin{figure}[th]\\label{fig1}\n\\centering\n\\begin{tikzpicture}\n\\node (1) at (4,3) {$1$};\n\\node (2) at (8,3) {$2$};\n\\node (3) at (6,5) {$3$};\n\\node (4) at (6,2) {$4$};\n\\draw (1) edge[ thick, ->] (3);\n\\draw (4) edge[ thick, ->] (1);\n\\draw (2) edge[thick, ->] (4);\n\\draw (3) edge[bend right=12, thick, ->] node [left]{} (2);\n\\draw (2) edge[bend right=15, thick, ->] node [right]{} (3);\n\\draw (1) edge[bend right=90, thick, ->] node [right]{} (2);\n\\draw (3) edge[bend right=15, thick, ->] node [right]{} (4);\n\\draw (4) edge[bend right=15, thick, ->] node [right]{} (3);\n\\end{tikzpicture}\n\\caption{The digraph $H$.}\n\\end{figure}\nAs follows from the path homology theory of digraphs, the rank \nof the image of \n$\\partial\\colon \\Omega_1\\to \\Omega_0$ is equal to 3 and \nthe rank \nof the kernel of $\\partial$ is equal to 5, \nand hence $H^{\\bold b}_0(G)=\\mathbb R$.\n \nBy direct computation, $\\Omega_2$ is the vector space with the following basis:\n$\n\\{\ne_{123},\ne_{132},\ne_{232}, \ne_{234},\n e_{243},\ne_{323}, \ne_{343},\ne_{434},\ne_{413}\n\\}\n $.\nIn this basis the matrix of the homomorphism $\\partial \\colon \\Omega_2\\to \\Omega_1$ has the form:\n$$\n\\left(\\begin{matrix}\n &e_{12}&e_{13}&e_{23}&e_{24}&e_{32}&e_{34}&e_{43}&e_{41}\\\\\ne_{123}&1 &-1 &1 &0 &0 &0 &0 &0 \\\\\ne_{132}&-1 &1 &0 &0 &1 &0 &0 &0 \\\\\ne_{232}&0 &0 &1 &0 &1 &0 &0 &0 \\\\\ne_{234}&0 &0 &1 &-1 &0 &1 &0 &0 \\\\\ne_{243}&0 & 0 &-1 &1 &0 &0 &1 &0 \\\\\ne_{323}& 0 &0 &1 &0 &1 &0 &0 &0 \\\\\ne_{343}&0 &0 &0 &0 &0 &1 &1 &0 \\\\\ne_{434}&0 &0 &0 &0 &0 &1 &1 &0 \\\\\ne_{413}&0 &1 &0 &0 &0 &0 &-1 &1 \\\\\n\\end{matrix}\\right).\n$$\nIts rank is equal to 5. \nHence the rank \nof the image of\n$\\partial$ is equal to 5 and \nthe rank \nof the kernel of $\\partial$ is equal to 4, \nand hence $H^{\\bold b}_1(G)=0$. \n\nWe have \n$\n\\begin{matrix}\n\\Omega_3=\\langle e_{1232},\ne_{1323},\ne_{2323}, \ne_{2343},\ne_{2434},\ne_{3232},\ne_{3434}, \ne_{4343}\n\\rangle.\\\\\n\\end{matrix}\n $\nSimilarly to the previous calculation, the rank \nof the image of \n$\\partial\\colon \\Omega_3\\to \\Omega_2$ is equal to 4 and \nthe rank \nof the kernel of $\\partial$ is equal to 4, \nand hence $H^{\\bold b}_2(G)=0$. 
\n\nWe have \n$\\Omega_4=\\langle e_{12323}, e_{13232}, e_{23232}, \ne_{23434}, \ne_{24343}, e_{32323}, e_{34343}, e_{43434}\\rangle \n $ \n and, similarly to the previous calculation, the rank \nof the image of \n$\\partial\\colon \\Omega_4\\to \\Omega_3$ is equal to 4 and \nthe rank \nof the kernel of $\\partial$ is equal to 4.\nHence $H^{\\bold b}_3(G)=0$. \n\\end{example} \n\n\\begin{lemma}\\label{l3.10} Let $G=(V,E)$ be a directed hypergraph and \n$I_1=(0\\to 1)$ the digraph. There is an inclusion \n$\\lambda\\colon [\\mathfrak{B}(G)]^{\\uparrow}\\to \\mathfrak{B}(G\\Box I_1)$\nof path complexes. The restrictions of \n$\\lambda$ to the images of the morphisms $i_{\\bullet}$ and $j_{\\bullet}$, defined in Section \\ref{S2}, \nare the natural identifications. \n\\end{lemma}\n\\begin{proof} By the definition in Section \\ref{S2}, \n we have\n$\n[\\mathfrak{B}(G)]^{\\uparrow}=(V\\times J, [P^b_G]^{\\uparrow})$, \nwhere $[P^b_G]^{\\uparrow}=P^b_G\\cup [P^b_G]^{\\prime}\\cup [P^b_G]^{\\#} $. \nWe have $V_{G\\Box I_1}= V\\times J=V\\times \\{0,1\\}=V\\cup V^{\\prime}$ with\n$V=\\{0,\\dots, n\\}, \\ V^{\\prime}=\\{0^{\\prime}, \\dots, n^{\\prime}\\}$, and \n$E_{G\\Box I_1}$ is the union of the sets $E^0\\cup E^1\\cup E^{01}$, where\n$\nE^i=\\{A\\times\\{i\\}\\to B\\times\\{i\\} \\, | \\, (A\\to B)\\in E_G\\}\n$\n for $i=0,1$ and \n$\nE^{01}=\\{C\\times\\{0\\}\\to C\\times\\{1\\} \\, | \\, C\\in {\\mathbf{P}}_{01}(G)\\}\n$. \nNow it follows\nthat \n$\n\\mathfrak{B}(G)=\\mathfrak{B}(G\\Box\\{0\\}), \\mathfrak{B}(G)^{\\prime}=\\mathfrak{B}(G\\Box\\{1\\})\n$, \nwhere $\\mathfrak{B}(G\\Box\\{0\\}),\\mathfrak{B}(G\\Box\\{1\\})\\subset \\mathfrak{B}(G\\Box I_1)$.\nLet $q=(i_0\\dots i_n)$ be an $n$-path in $\\mathfrak{B}(G)=\\mathfrak{B}(G\\Box\\{0\\})$. Consider \nits presentation in the form (\\ref{3.2}) and let $A_i, B_i$ be the corresponding sets \nof vertices. \n For $0\\leq k \\leq n$, consider a path \n $q_k^{\\#}=\\left(i_0\\dots i_ki_k^{\\prime}i^{\\prime}_{k+1}\\dots i_n^{\\prime}\\right)\\in [P^b_G]^{\\#}$. \n We will now prove that this path lies in \n$P^b_{G\\Box I_1}$. There are the following possibilities for the path $q$.\n\n(1) The vertices\n$i_k, i_{k+1}\\in p_s$ for $1\\leq s\\leq r+1$ in presentation (\\ref{3.2}). \nThen we \nwrite the path $q_k^{\\#}$ in the form \n\\begin{equation}\\label{3.3}\nq_k^{\\#}=\\left(p_0^{\\#}\\vee v_0^{\\#}w_0^{\\#} \\vee p_1^{\\#} \\vee \\dots \\vee \np_{r+1}^{\\#}\\vee v_{r+1}^{\\#} w_{r+1}^{\\#}\\vee\n p_{r+2}^{\\#}\\right)\n\\end{equation}\n putting \n$$\nA_i^{\\#}=\\begin{cases} A_i\\times\\{0\\} & \\text{for} \\ i\\leq s-1\\\\\n B_{s-1}\\times\\{0\\} & \\text{for} \\ i=s\\\\\nA_{i-1}\\times\\{1\\} & \\text{for} \\ i\\geq s+1, \\\\\n\\end{cases}\\ \\ \\\nB_i^{\\#}=\\begin{cases} B_i\\times\\{0\\} & \\text{for} \\ i\\leq s-1\\\\\nB_{s-1}\\times\\{1\\} & \\text{for} \\ i=s\\\\\nB_{i-1}\\times\\{1\\} & \\text{for} \\ i\\geq s+1. \\\\\n\\end{cases}\n$$\nWe have the following arrows in $E_{G\\Box I_1}$:\n$$\n(A_i^{\\#}\\to B_i^{\\#})= \n(A_i\\times \\{0\\}\\to B_i\\times\\{0\\}) \\ \\ \\text{for} \\ 0\\leq i\\leq s-1, \n$$\n$$\n(A_s^{\\#}\\to B_s^{\\#})= \n(B_{s-1}\\times \\{0\\}\\to B_{s-1}\\times \\{1\\}), \n$$\n$$\n(A_i^{\\#}\\to B_i^{\\#})= \n(A_{i-1}\\times \\{1\\}\\to B_{i-1}\\times\\{1\\}) \\ \\ \\text{for} \\ s+1\\leq i\\leq r+1. \n$$
\n Using the identifications \n$\n\\mathfrak{B}(G)=\\mathfrak{B}(G\\Box\\{0\\}), \\mathfrak{B}(G)^{\\prime}=\\mathfrak{B}(G\\Box\\{1\\})\n$, \nwe obtain\n\\begin{equation}\\label{3.4}\np_i^{\\#}= \\begin{cases}p_i& \\text{for} \\ i\\leq s-1\\\\\np_{i-1}^{\\prime}& \\text{for} \\ s+2\\leq i\\leq r+2\\\\\n(w_{s-1}\\dots i_k) &\\text{for} \\ i=s\\\\\n(i_{k}^{\\prime}i_{k+1}^{\\prime} \\dots v_s^{\\prime})&\\text{for} \\ i=s+1\\\\\n\\end{cases}\n\\end{equation}\nwhere \n$\n(w_{s-1}\\dots i_k)\\in P_{B_{s-1}^{\\#}\\cap A_s^{\\#}}=\nP_{B_{s-1}\\times\\{0\\}}\n$, \n$\n\\left(i_{k}^{\\prime}i_{k+1}^{\\prime} \\dots v_s^{\\prime}\\right)\n\\in P_{B_{s}^{\\#}\\cap A_{s+1}^{\\#}}=\nP_{(B_{s-1}\\times\\{1\\})\\cap (A_{s}\\times\\{1\\})}\n$.\nThe paths $p_i^{\\#}$ in (\\ref{3.4}) \n define the vertices \n$v_i^{\\#}, w_{i}^{\\#}$ in (\\ref{3.3}). \nHence, (\\ref{3.3}) gives a presentation of $q_k^{\\#}$ in the form (\\ref{3.2}) \nfor the hypergraph $G\\Box I_1$ and $q_k^{\\#}\\in P_{G\\Box I_1}^b$ in the considered case. \n\n(2) \nThe vertices $i_k,i_{k+1}\\in p_0$ in presentation (\\ref{3.2}). Then we \nwrite the \npath $q_k^{\\#}$ in the form \n(\\ref{3.3}) putting \n$$\nA_i^{\\#}=\\begin{cases} A_0\\times\\{0\\} & \\text{for} \\ i=0\\\\\n A_{i-1}\\times\\{1\\} & \\text{for} \\ 1\\leq i\\leq r+1, \\\\\n\\end{cases} \n$$\n$$\nB_i^{\\#}=\\begin{cases} A_0\\times\\{1\\} & \\text{for} \\ i=0\\\\\nB_{i-1}\\times\\{1\\} & \\text{for} \\ 1\\leq i\\leq r+1 \\\\\n\\end{cases}\n$$\nand \n$$\np_i^{\\#}= \\begin{cases}(i_0\\dots i_k)& \\text{for} \\ i=0\\\\\n\\left(i_k^{\\prime}\\dots v_0^{\\prime}\\right)& \\text{for} \\ i=1\\\\\np_{i-1}^{\\prime} &\\text{for} \\ 2\\leq i \\leq r+2\\\\\n\\end{cases}\n$$\nwhere \n$(i_0\\dots i_k)\\in P_{A_0^{\\#}}$ and $\\left(i_k^{\\prime}\\dots v_0^{\\prime}\\right)\\in \nP_{B_0^{\\#}\\cap A_1^{\\#}}=P_{A_{0}\\times\\{1\\}}$. \nHence, (\\ref{3.3}) gives a presentation of $q_k^{\\#}$ in the form (\\ref{3.2})\nin the hypergraph $G\\Box I_1$ and $q_k^{\\#}\\in P_{G\\Box I_1}^b$ in the considered case.\n\n(3) Let $i_k=v_s, i_{k+1}=w_s$ for $0\\leq s\\leq r$ in the presentation (\\ref{3.2}). \n Then we \n write the \n path $q_k^{\\#}$ in the form \n(\\ref{3.3}) putting \n$$\nA_i^{\\#}=\\begin{cases} A_i\\times\\{0\\} & \\text{for} \\ i\\leq s\\\\\nA_{i-1}\\times\\{1\\} & \\text{for} \\ s+1\\leq i\\leq r+1, \\\\\n\\end{cases}\\ \nB_i^{\\#}=\\begin{cases} B_i\\times\\{0\\} & \\text{for} \\ i\\leq s-1\\\\\nA_s\\times\\{1\\} & \\text{for} \\ i=s\\\\\nB_{i-1}\\times\\{1\\} & \\text{for} \\ s+1\\leq i\\leq r+1. \\\\\n\\end{cases}\n$$\nWe have the following arrows in $E_{G\\Box I_1}$:\n$$\n(A_i^{\\#}\\to B_i^{\\#})= \n(A_i\\times \\{0\\}\\to B_i\\times\\{0\\}) \\ \\ \\text{for} \\ 0\\leq i\\leq s-1, \n$$\n$$\n(A_s^{\\#}\\to B_s^{\\#})= \n(A_{s}\\times \\{0\\}\\to A_{s}\\times \\{1\\}), \n$$\n$$\n(A_i^{\\#}\\to B_i^{\\#} )= \n(A_{i-1}\\times \\{1\\}\\to B_{i-1}\\times\\{1\\}) \\ \\ \\text{for} \\ s+1\\leq i\\leq r+1. \n$$\nSimilarly to case (1), we have \n$$\np_i^{\\#}= \\begin{cases}p_i& \\text{for} \\ i\\leq s\\\\\n(v_s^{\\prime})& \\text{for} \\ i=s+1\\\\\np_{i-1}^{\\prime} &\\text{for} \\ s+2\\leq i \\leq r+2,\\\\\n\\end{cases}\n$$\nwhere \n$\nv_{s}^{\\#}=v_s, \\ w_{s}^{\\#}=v_{s+1}^{\\#}=v_s^{\\prime}, \\ \nw_{s+1}^{\\#}=w_s^{\\prime}$.\nHence, (\\ref{3.3}) gives a presentation of $q_k^{\\#}$ in the form (\\ref{3.2})\nin the hypergraph $G\\Box I_1$ and $q_k^{\\#}\\in P^b_{G\\Box I_1}$ in the considered case. \n\\end{proof}\n\n\\begin{theorem}\\label{t3.11} Let $G$ be a directed hypergraph. 
The bold path homology groups $H^{\\bold b}_*(G)$\nare homotopy invariant.\n\\end{theorem}\n\\begin{proof} By Definition \\ref{d2.4}, it is sufficient to prove \n homotopy invariance for a one-step homotopy. Let $f_0, \\, f_1\\colon G\\to H$ be \n one-step homotopic morphisms of directed hypergraphs with a homotopy \n$F\\colon G\\Box I_1\\to H$, where $I_1=(0\\to 1)$. \nSince $\\mathfrak B$ is a functor, we obtain\nmorphisms of path complexes\n$\n\\mathfrak B(f_0), \\, \\mathfrak B(f_1)\\colon \\mathfrak B(G)\\to \\mathfrak B(H)$\nand $\\mathfrak B(F)\\colon \\mathfrak B(G\\Box I_1)\\to \\mathfrak B(H)$. Consider the composition \n$\n[\\mathfrak{B}(G)]^{\\uparrow}\\overset{\\lambda}{\\longrightarrow} \\mathfrak{B}(G\\Box I_1)\\overset{\\mathfrak B(F)}{\\longrightarrow}\n\\mathfrak B(H)\n$\nwhich gives a homotopy between the morphisms \n$\\mathfrak B(f_0)$ and $\\mathfrak B(f_1)$ of\npath complexes, using the identifications of the top and the bottom \nof $[\\mathfrak{B}(G)]^{\\uparrow}$\ndescribed in Lemma \\ref{l3.10}.\nNow the result follows from \n\\cite[Th. 3.4]{Hyper}.\n\\end{proof}\n\n\n\\subsection{Non-directed path homology}\\label{S33}\n\nIn this subsection, we describe several path homology theories on the category $\\mathcal{DH}$ of directed hypergraphs \nthat are based on functorial relations between hypergraphs and directed hypergraphs. \n\n For a hypergraph $G=(V_G,E_G)$, we have a natural map \n $\\phi_G\\colon E_G \\to {\\mathbf{P}}(V_G)\\setminus \\emptyset$.\n A \\emph{morphism} of hypergraphs $f\\colon G\\to H=(V_H,E_H)$ \n is given by a pair of maps $f_V\\colon V_G\\to V_H$ \nand $f_E\\colon E_G\\to E_H$ \n such that ${\\mathbf{P}}(f_V)\\circ \\phi_G=\\phi_H\\circ f_E$, \nwhere \n${\\mathbf{P}}(f_V)\\colon{\\mathbf{P}}(V_G)\\setminus \\emptyset \\to {\\mathbf{P}}(V_H)\\setminus \\emptyset$ \nis the map induced by $f_V$. Thus we obtain the category of hypergraphs $\\mathcal H$ considered in \\cite{Hyper}. \n\n\nFirst, we define a functor from the category $\\mathcal{DH}$\n to the category $\\mathcal H$. For a finite set $X$, define a map $\\sigma_X\\colon \\mathbb P(X)\\to {\\mathbf{P}}(X)$ by setting $\\sigma_X(A,B)=A\\cup B$. \n Let $G=(V, E)$ be a directed hypergraph. \nDefine a hypergraph $\\mathfrak E(G)=(V^e, E^e)$ where $V^e=V$ and \n\\begin{equation}\\label{3.5}\nE^{e}=\\{C\\in {\\mathbf{P}}(V)\\setminus \\emptyset \\, | \\, C=A\\cup B, (A\\to B)\\in E\\}.\n\\end{equation}\nRecall that in Section \\ref{S2}, for a directed hypergraph $G=(V,E)$ we defined a map $\\varphi_G\\colon E\\to \\mathbb P(V)$ by $\\varphi_G(A\\to B)=\n(A,B)$. \n\n\n \\begin{proposition}\\label{p3.12} Let $f=(f_V, f_E)\\colon G=(V_G, E_G)\\to H=(V_H,E_H)$ be a morphism of directed hypergraphs. \nDefine a map $f_E^e\\colon E^e_G\\to {\\mathbf{P}}(V_H)$ \nputting \n$\nf_E^e(C)=[{\\mathbf{P}}(f_V)](C)\n$\nfor every $C=A\\cup B\\in E^e_G$. \nThen the map $f_E^e$ is a well-defined map \n$ E^e_G\\to E^e_H$ and the pair \n$\n(f_{V}^e, f_{E}^e)$ with $f_{V}^e=f_V$\ndefines a morphism \n$\n\\mathfrak E(f) \\colon (V_G^e,E^e_G)\\to (V_H^e, E^e_H)\n$\n of hypergraphs. Thus, we obtain a functor $\\mathfrak E$ from the category \n $\\mathcal{DH}$ of directed hypergraphs to the category $\\mathcal H$ of hypergraphs. \n\\end{proposition}\n\\begin{proof} The map $f_E^e$ is \n well defined; we now prove that its image lies in $E^e_H$. \nLet $C\\in E^e_G$, $C=\\sigma_{V_G}\\circ \\varphi_G(A\\to B)=A\\cup B$ and \n$f_{E}(A\\to B)=(A^{\\prime}\\to B^{\\prime})\\in E_H$. 
Then, by Definition \\ref{d2.2}, \n$[\\mathbb{P}(f_{V})](A,B)= (A^{\\prime}, B^{\\prime})\\in\\mathbb{P}(V_H)$ and, hence, \n$A^{\\prime}=[{\\mathbf{P}}(f_V)](A), B^{\\prime}=[{\\mathbf{P}}(f_V)](B)$. \nWe have \n$$\n[{\\mathbf{P}}(f_V)](C)= [{\\mathbf{P}}(f_V)](A\\cup B)=\\{ [{\\mathbf{P}}(f_V)](A)\\}\\cup \n\\{ [{\\mathbf{P}}(f_V)](B)\\}=A^{\\prime}\\cup B^{\\prime}. \n$$\nHowever, \n$\nA^{\\prime}\\cup B^{\\prime}=\\sigma_{V_H}\\circ \\varphi_H(A^{\\prime}\\to B^{\\prime})\\in E^e_H\n$,\nand the claim that the morphism $f^e_E$ is well defined is proved. \nThe \nfunctoriality is evident.\n\\end{proof} \n\nFor a hypergraph $G= (V, E)$, \ndefine a path complex $\\mathfrak H^{q}(G)=(V^{q}, P_G^{ q})$ of \\emph{density} $q\\geq 1$\n where $V^{ q}=V$ and a path $(i_0\\dots i_n)\\in P_{V}$ lies in $P_G^{ q}$ \n iff every $q$ consecutive vertices of this path\nlie in a hyperedge $\\mathbf e\\in E$, see \\cite{Hyper}. \nThus, we obtain a collection of functors $\\mathfrak H^{ q}$ from the category $\\mathcal H$ to the category $\\mathcal P$. \nThe composition $\\mathfrak H^{ q}\\circ \\mathfrak E$ gives a collection of functors \nfrom the category $\\mathcal{DH}$ to the category $\\mathcal P$.\n For a directed hypergraph $G$ define \n\\begin{equation*}\nH_*^{\\bold e(q)}(G)\\colon = H_*(\\mathfrak H^{q}\\circ \\mathfrak E(G))\n\\ \\text{for} \\ \\ q=1,2,\\dots \\ \\ .\n\\end{equation*}\nWe call these groups the \\emph{non-directed path homology groups of density \n $q$} of a directed hypergraph $G$. We \ndenote $H_*^{\\bold e}(G)\\colon =H_*^{\\bold e(1)}(G)$.\n\n\\begin{proposition}\\label{p3.13} Let $G=(V,E)$ be a directed hypergraph and \n $\\Pi_V$ be the path complex of all paths on the set $V$. Then \n $H_*^{\\bold e}(G)=H_*(\\Pi_V)$.\n\\end{proposition}\n\\begin{proof} By Definition \\ref{d2.1}, $V=\\bigcup_{\\bold e_i\\in E} (A_i\\cup B_i)$ \n and every vertex $v\\in V^e=V$ lies in an edge $e\\in E^e$. So the path complexes \n$\\Pi_V$ and $\\mathfrak H^{1}\\circ \\mathfrak E(G)$ coincide.\n\\end{proof}
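\n\nTo illustrate the density condition for hypergraphs (with a small example of our own): let $G$ be the hypergraph with $V=\\{1,2,3\\}$ and $E=\\{\\{1,2\\},\\{2,3\\}\\}$. Then the path $(1\\,2\\,3)$ lies in $P^2_G$, since each pair of consecutive vertices $(1,2)$ and $(2,3)$ lies in a hyperedge, but it does not lie in $P^3_G$, since the three consecutive vertices $1,2,3$ do not lie in a common hyperedge.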
\n\\begin{example}\\label{e3.14} \\rm Now we compute the path homology groups \n$H^{\\bold e(q)}_*(G)$ of density $q=1,2,3$ with coefficients in $\\mathbb R$ of the \n directed hypergraph $G$ with \n $\nV_G=\\{1,2, 3, 4,5,6\\}, \\ \\ E_G=\\{\\bold e_1,\\bold e_2,\\bold e_3, \\bold e_4, \\bold e_5\\},\n$\n where \n $\n\\bold e_1=(\\{1\\}\\to \\{2\\}), \\bold e_2=(\\{1\\}\\to \\{3\\}), \\bold e_3=(\\{2\\}\\to \\{4,6\\}), \n\\bold e_4=(\\{3\\}\\to \\{5\\}), \\bold e_5=(\\{4\\}\\to \\{5,6\\})\n$.\nThen the hypergraph $\\mathfrak E(G)$ has the set of vertices\n$V_G^e=\\{1,2, 3, 4,5,6\\}$ and the set of hyperedges \n$$\nE_G^e=\\left\\{ \n\\bold e_1^{\\prime}=\\{1,2\\}, \n\\bold e_2^{\\prime}=\\{1, 3\\}, \n\\bold e_3^{\\prime}=\\{2,4,6\\}, \n\\bold e_4^{\\prime}=\\{3, 5\\},\n \\bold e_5^{\\prime}=\\{4,5,6\\}\\right\\}.\n$$ \nIn the case $q=1$, the homology groups $H_*^{\\mathbf e}(G)$ \ncoincide with the path homology groups of the complete digraph \n$D=(V_D, E_D)$ which has six vertices and for every two vertices \n$v,w\\in V_D$ there are two arrows $(v\\to w), (w\\to v)\\in E_D$. \nThis digraph is contractible, and hence, see \\cite[S3.3]{MiHomotopy}, \n$\nH_0^{\\bold e}(G)= \\mathbb R$ and the groups $H_i^{\\bold e}(G)$ are trivial for $i\\geq 1$.\n\\begin{figure}[th]\\label{fig2}\n\\centering\n\\begin{tikzpicture}\n\\node (1) at (5,3) {$1$};\n\\node (3) at (7,3) {$3$};\n\\node (5) at (9,3) {$5$};\n\\draw (1) edge[ thick, ->] (3)\n(3) edge[ thick, ->] (1)\n (3) edge[ thick, ->] (5)\n(5) edge[ thick, ->] (3);\n\\node (2) at (5,5) {$2$};\n\\node (4) at (7,5) {$4$};\n\\node (6) at (9,5) {$6$};\n\\draw (4) edge[ thick, ->] (6);\n\\draw (6) edge[ thick, ->] (4)\n(2) edge[ thick, ->] (4)\n(4) edge[ thick, ->] (2)\n(5) edge[ thick, ->] (4)\n(4) edge[ thick, ->] (5)\n (1) edge[ thick, ->] (2)\n (2) edge[ thick, ->] (1)\n (5) edge[ thick, ->] (6) \n (6) edge[ thick, ->] (5) \n(2) edge[bend left, thick, ->] node [right]{} (6) \n(6) edge[bend right, thick, ->] node [right]{} (2);\n\\end{tikzpicture}\n\\caption{The digraph $D_2$ for $q=2$.}\n\\end{figure}\n\n\\noindent\nIf $q=2$, the homology groups $H_*^{\\mathbf e(2)}(G)$ \ncoincide with the path homology groups of the digraph $D_2$ in Fig. 2, where a two-sided arrow \n$a\\longleftrightarrow b$ means that there are arrows \n$a\\to b$ and $b\\to a$. The digraph $D_2$ is homotopy equivalent to the induced \nsub-digraph $D_2^{\\prime}\\subset D_2$ with the set of vertices $\\{1,2,3,4,5\\}$. \n We compute directly the path homology of \n$D_2^{\\prime}$ and we obtain \n$ \nH_0^{\\bold e(2)}(G)=H_1^{\\bold e(2)}(G)=\\mathbb R$ and trivial groups \n$H_i^{\\bold e(2)}(G)$ for $i\\geq 2$.\n\nNow we consider the case of $\\mathbf e(3)$.\nWe have $\\Omega^{\\mathbf e(3)}_n=\\Omega^{\\mathbf e(2)}_n$ for $n=0,1$, and \nthis equality is in fact true for all $n\\geq 0$, as we now check. We have \n$$\n {\\mathcal{R}_2^{\\mathbf e(3)}}^{reg}= A \\oplus A_{246}\\oplus A_{456},\n $$\nwhere $A=\\langle e_{121}, \ne_{212}, e_{131}, e_{313}, e_{353},e_{535}\\rangle$ and\n$A_{abc}$ is the module generated by all regular paths with three vertices \nin the full digraph with vertices $a,b,c$.\nHence $\\Omega_2^{\\mathbf e(3)}={\\mathcal{R}_2^{\\mathbf e(3)}}^{reg}$. \nConsidering the digraph $D_2$, \nwe obtain that $\\Omega_2^{\\mathbf e(3)}= \n\\Omega_2^{\\mathbf e(2)}$. The cases with \n$n\\geq 3$ are similar, and $\\Omega_n^{\\mathbf e(3)}= \n\\Omega_n^{\\mathbf e(2)}$ for $n\\geq 3$. Hence, \n$ \nH_n^{\\bold e(2)}(G)=H_n^{\\bold e(3)}(G)$ for $n\\geq 0$. \n\\end{example}\n\\begin{proposition}\\label{p3.15} Let $G=(V,E)$ be a directed hypergraph, \n$I_1=(0\\to 1)$,\nand $ I=(V_I, E_I)$ be the hypergraph \nwith the set of vertices $V_I=\\{0,1\\}$ and the set of edges \n$ E_I=\\{ \\bold e_0^{\\prime}=\\{0\\}, \\bold e_1^{\\prime}=\\{1\\}, \\bold e_2^{\\prime}=\\{0,1\\}\\}$. \nThere is a natural inclusion of path complexes\n\\begin{equation}\\label{3.6}\n\\mathfrak H^{ q}[\\mathfrak{E}(G\\Box I_1)] \\subset \\mathfrak H^{ q}[\\mathfrak{E}(G)\\times I]\n\\end{equation}\nfor $q\\geq 2$. Moreover, in the general case the complexes in (\\ref{3.6}) are not equal. \n\\end{proposition}\n\\begin{proof} Recall that the product ``$\\times$'' of hypergraphs is defined in \\cite{mor}, \\cite{Hyper}. 
\nThe directed hypergraph \n $G\\Box I_1=(V_{G\\Box I_1}, E_{G\\Box I_1}) $ has the set of vertices \n $V_{G\\Box I_1}=V\\times \\{0,1\\}$ and the set of edges that can be presented as the union \n $E_0\\cup E_1\\cup E_{01}$ of three pairwise disjoint sets \n\\begin{equation*}\n\\begin{matrix}\nE_0=\\{(C\\times \\{0\\}\\to D\\times \\{0\\})| (C\\to D)\\in E\\},\n\\\\\nE_1=\\{(C\\times \\{1\\}\\to D\\times \\{1\\})| (C\\to D)\\in E\\},\n\\\\\nE_{01}=\\{A\\times\\{0\\}\\to A\\times\\{1\\}| A\\in {\\mathbf{P}}_{01}(G) \\}.\\ \\ \\ \\ \\\\\n\\end{matrix}\n\\end{equation*}\nHence, the hypergraph \n$\n\\mathfrak E(G\\Box I_1)=\\left( V_{\\mathfrak E(G\\Box I_1)}, E_{\\mathfrak E(G\\Box I_1)} \\right)\n$\nhas the set of vertices $V_{\\mathfrak E(G\\Box I_1)}= V_{G\\Box I_1}=V\\times \\{0,1\\}$ and the set of edges \nthat can be presented as a union of three pairwise disjoint sets $E_0^{\\prime}\\cup E_1^{\\prime}\\cup E_{01}^{\\prime}$ where\n\\begin{equation}\\label{3.7}\n\\begin{matrix}\nE_0^{\\prime}=\\{(C\\times \\{0\\})\\cup (D\\times \\{0\\})| (C\\to D)\\in E\\},\n\\\\\nE_1^{\\prime}=\\{(C\\times \\{1\\})\\cup (D\\times \\{1\\})| (C\\to D)\\in E\\},\n\\\\\nE_{01}^{\\prime}=\\{(A\\times\\{0\\})\\cup( A\\times\\{1\\})| A\\in {\\mathbf{P}}_{01}(G) \\}.\n\\ \\ \\\\\n\\end{matrix}\n\\end{equation}\nBy the definition of the hypergraph $\\mathfrak{E}(G)=(V^e, E^e)$, we obtain that the hypergraph \n $\\mathfrak{E}(G)\\times I=(V_{\\mathfrak{E}(G)\\times I}, E_{\\mathfrak{E}(G)\\times I}) $ has the set of vertices \n$V_{\\mathfrak{E}(G)\\times I}=V\\times \\{0,1\\}$ and the set of edges that can be presented as \na union of three pairwise disjoint sets \n$E_0^{\\prime\\prime}\\cup E_1^{\\prime\\prime}\\cup E_{01}^{\\prime\\prime}$ where\n\\begin{equation}\\label{3.8}\n\\begin{matrix}\nE_0^{\\prime\\prime}=\\{(C\\cup D) \\times \\{0\\}\\, | \\,(C\\to D)\\in E\\},\\\\\nE_1^{\\prime\\prime}=\\{(C\\cup D) \\times \\{1\\}\\, |\\, (C\\to D)\\in E\\},\\\\\nE_{01}^{\\prime\\prime}=\\{A\\subset (C\\cup D) \\times \\{0,1\\}\\, | \\, (C\\to D)\\in E,\\ p_1(A)=C\\cup D,\\ p_2(A)=\\{0,1\\} \\},\\\\\n\\end{matrix}\n\\end{equation}\nwhere $p_1\\colon V\\times \\{0,1\\}\\to V$ and $\np_2\\colon V\\times \\{0,1\\}\\to \\{0,1\\}$ are the natural projections; \nthe conditions on $p_1(A)$ and $p_2(A)$ hold by the definition of the product of hypergraphs. \nThus, the path complexes $\\mathfrak H^{q}\\left[\\mathfrak E(G\\Box I_1)\\right]$ and \n$\\mathfrak H^{ q}\\left[\\mathfrak E(G)\\times I\\right]$ have the same vertex set and, by (\\ref{3.7}) and (\\ref{3.8}), \n$\nE_0^{\\prime}=E_0^{\\prime\\prime},\\ \n E_1^{\\prime}= E_1^{\\prime\\prime}\n$, and every edge from $E_{01}^{\\prime}$ is contained in an edge from $E_{01}^{\\prime\\prime}$, \nso (\\ref{3.6}) follows for $q\\geq 2$. \n\nNow we prove that in the general case there is no equality in (\\ref{3.6}). \nLet $(C\\to D)\\in E$ be a directed edge \nand let $v\\in C, w\\in D$ be vertices such that the pair $(v,w)$ does not lie in any set $A\\in {\\mathbf{P}}_{01}(G)$. \nThen, for $ q=2$, the two-vertex path $\\left((v,0)\\,(w,1)\\right)$ lies in \n$\\mathfrak H^{2}\\left[\\mathfrak E(G)\\times I\\right]$, since its two vertices lie in an edge from \n$E_{01}^{\\prime\\prime}$, but it does not lie in $\\mathfrak H^{2}\\left[\\mathfrak E(G\\Box I_1)\\right]$, since its two vertices do not lie in a common edge from $E_0^{\\prime}\\cup\n E_1^{\\prime}\\cup E_{01}^{\\prime}$.\n\n\\end{proof}\n\\begin{lemma}\\label{l3.16} Let $G=(V,E)$ be a directed hypergraph and\n$I_1=(0\\to 1)$.\nThere is an inclusion \n$\n\\mu\\colon [\\mathfrak H^{ 2}\\circ\\mathfrak{E}(G)]^{\\uparrow}\\to \n\\mathfrak{H}^{2}\\circ \\mathfrak E(G\\Box I_1)\n$\nof path complexes. 
\n\\end{lemma}\n\\begin{proof} By the definition of the hypergraph $\\mathfrak{E}(G)=(V^e, E^e)$ and of the functor $\\mathfrak H^2$, \n we obtain that the set of paths $P_{\\mathfrak{E}(G)}^2$ of $\\mathfrak H^{ 2}\\circ\\mathfrak{E}(G)$ consists \nof the paths $p=(i_0\\dots i_n)$ on the set $V$ such that every two \nconsecutive vertices $i_s,i_{s+1}$ of $p$ lie in a set of the form $C\\cup D$ with $(C\\to D)\\in E$. By definition, the set of paths of the path\ncomplex $[\\mathfrak H^{ 2}\\circ\\mathfrak{E}(G)]^{\\uparrow}$\nis the union of the sets of paths \n\\begin{equation}\\label{3.9}\nP_{\\mathfrak{E}(G)}^2\\cup [P_{\\mathfrak{E}(G)}^2]^{\\prime}\\cup {[P_{\\mathfrak{E}(G)}^2]}^{\\#}\n\\end{equation}\non the set $V\\times\\{0,1\\}=V\\cup V^{\\prime}$. \nA path $p=(i_0\\dots i_n)$ on the set $V\\times \\{0,1\\}$ lies in \n$P_{\\mathfrak E(G\\Box I_1)}^2$ iff any two consecutive vertices of $p$ lie in an edge from one of the sets \n$\nE_0^{\\prime}, E_1^{\\prime}, E_{01}^{\\prime}\n$\n defined in (\\ref{3.7}). From the definition of the functor $\\mathfrak{E}$, we conclude that in (\\ref{3.9}) \n any pair of consecutive vertices of a path from $P_{\\mathfrak{E}(G)}^2$ lies in an edge from $E_0^{\\prime}$, \nany pair of consecutive vertices of a path from $[P_{\\mathfrak{E}(G)}^2]^{\\prime}$ lies in an edge from \n$E_1^{\\prime}$, and any pair of consecutive vertices of a path from \n${[P_{\\mathfrak{E}(G)}^2]}^{\\#}$ lies in an edge from $E_0^{\\prime}\\cup E_1^{\\prime}\\cup E_{01}^{\\prime}$. \n\\end{proof} \n\\begin{theorem}\\label{t3.17} For a directed hypergraph $G$, \nthe non-directed path homology groups $H^{\\bold e(2)}_*(G) $\n of density two are homotopy invariant.\n\\end{theorem}\n\\begin{proof} It is sufficient to prove \n homotopy invariance for a one-step homotopy. \nLet $f_0, \\, f_1\\colon G\\to H$ be one-step homotopic morphisms of directed hypergraphs with a homotopy \n$F\\colon G\\Box I_1\\to H$. Since $\\mathfrak H^{2}\\circ \\mathfrak E$ is a functor, we obtain\nmorphisms of path complexes\n$\n\\mathfrak H^{2}\\circ \\mathfrak E(f_0), \\, \n\\mathfrak H^{2}\\circ \\mathfrak E(f_1)\\colon\n \\mathfrak H^{2}\\circ \\mathfrak E(G)\\to \\mathfrak H^{2}\\circ \\mathfrak E(H), \n$\nand \n$\n\\mathfrak H^{2}\\circ \\mathfrak E(F)\\colon\n\\mathfrak H^{2}\\circ \\mathfrak E(G\\Box I_1)\\to \n\\mathfrak H^{2}\\circ \\mathfrak E(H)\n$.\n Using Lemma \\ref{l3.16}, we can consider the composition \n$$\n[\\mathfrak H^{2}\\circ \\mathfrak E(G)]^{\\uparrow}\\overset{\\mu}\n{\\longrightarrow} \\mathfrak H^{2}\\circ \\mathfrak E(G\\Box I_1)\\overset{\\mathfrak H^{2}\\circ \\mathfrak E(F)}{\\longrightarrow}\n\\mathfrak H^{2}\\circ \\mathfrak E(H)\n$$\nwhich gives a homotopy between the morphisms \n$ \\mathfrak H^{2}\\circ \\mathfrak E(f_0)$ and \n$\n\\mathfrak H^{2}\\circ \\mathfrak E(f_1)$.\nNow the result follows from \n\\cite[Th. 3.4]{Hyper}.\n\\end{proof}
\n\n\\subsection{Natural path homology}\\label{S34}\n\nLet $G=(V,E)$ be a directed hypergraph.\n Define a digraph \n$\n\\mathfrak N(G)= (V^n_G, E^n_G)\n$\nwhere \n$\nV^n_G={\\mathbf{P}}_{01}(G)\n$\n and \n$\nE^n_G=\\{A\\to B| (A\\to B)\\in E\\}.\n$\n Thus a set $X\\in {\\mathbf{P}}(V)\\setminus \\emptyset $ is a vertex of the digraph \n $\\mathfrak N(G)$ iff $X$ is an origin or an end of an arrow $\\mathbf e\\in E$.\nAny arrow $\\bold e=(A\\to B)\\in E$ gives\nan arrow $(A\\to B)\\in E^n_G$.\n\n\n \\begin{proposition}\\label{p3.18} Every morphism of directed hypergraphs $f\\colon G\\to H$ \ndefines a morphism of digraphs \n$\n[\\mathfrak N(f)]=(f_{V}^n, f_{E}^n) \\colon (V_G^n,E^n_G)\\to (V_H^n, E^n_H)\n$ \nby\n $\nf^n_V(C) \\colon =[{\\mathbf{P}}(f_V)](C)\n$\nand \n$\nf_E^n(A\\to B) =(f_V(A)\\to f_V(B))\\in E^n_H\n$. Moreover, $\\mathfrak N$ is a functor from the category $\\mathcal{DH}$ \nto the category $\\mathcal D$ of digraphs. \\ \\ \\ $\\blacksquare$\n\\end{proposition}\nThe composition $\\mathfrak D\\circ \\mathfrak N$ gives a functor from \n $\\mathcal{DH}$ to the category of path complexes. \nFor a directed hypergraph $G$, we set \n$\nH_*^{\\bold n}(G)\\colon = H_*(\\mathfrak D\\circ \\mathfrak N(G))\n$.\nThese homology groups will be called the \\emph{natural path homology groups} \nof $G$. \n\\begin{example}\\label{e3.19} \n\\rm Now we compute \n$H^{\\bold{n}}_*(G)$ with coefficients in $\\mathbb R$ of the directed hypergraph $G=(V_G,E_G)$ with:\n $$\nV_G=\\{1,2, 3, 4,5,6,7,8\\}, \\ \\ E_G=\\{\\bold e_1,\\bold e_2,\\bold e_3, \\bold e_4, \\bold e_5, \\bold e_6,\\bold e_7, \\bold e_8, \\bold e_9\\},\n$$\n $$\n\\bold e_1=(\\{1\\}\\to \\{3,4\\}), \\bold e_2=(\\{1\\}\\to \\{5, 6\\}), \\bold e_3=(\\{1\\}\\to \\{7,8\\}), \n$$\n$$\n\\bold e_4=(\\{2\\}\\to \\{3,4\\}), \\bold e_5=(\\{2\\}\\to \\{5,6\\}), \\bold e_6=(\\{2\\}\\to \\{7,8\\}), \n$$ \n$$\n\\bold e_7=(\\{3,4\\}\\to \\{5,6\\}), \\bold e_8=(\\{5,6\\}\\to \\{7,8\\}), \\bold e_9=(\\{7,8\\}\\to \\{3,4\\}).\n$$\nThe groups $H_*^{\\bold n}(G)$ coincide\nwith the regular path homology groups of the digraph $\\mathfrak N(G)$ \nshown in Fig. 3; its five vertices are the sets $\\{1\\}, \\{2\\}, \\{3,4\\}, \\{5,6\\}, \\{7,8\\}$, and its nine arrows correspond to the arrows $\\bold e_1, \\dots, \\bold e_9$ of $G$. \n\n\\begin{figure}[h]\\label{fig3}\n\\setlength{\\unitlength}{0.14in} \\centering\n\\begin{picture}(16,18)(2,2)\n\\centering\n\\thicklines\n \\put(5,12){\\vector(1,0){7.2}}\n\\put(4.2,11.8){$\\bullet$}\n\\put(12.1,11.7){$\\bullet$}\n\\put(12.4,12){\\vector(1,1){3.2}}\n\\put(15,15.3){\\vector(-3,-1){9.6}}\n\\put(8.3, 19.7){$\\bullet$}\n \\put(15.6,15.1){$\\bullet$}\n\\put(8.6, 19.4){\\vector(-1,-2){3.7}}\n\\put(8.6, 19.6){\\vector(1,-2){3.6}}\n\\put(8.9, 19.5){\\vector(3,-2){6.4}}\n\\put(15.3, 4.4){$\\bullet$}\n\\put(15.6, 5.1){\\vector(-1,2){3.4}}\n\\put(15.3, 5){\\vector(-3,2){10.1}}\n\\put(15.6, 5.2){\\vector(0,1){10}}\n\\end{picture}\n\\caption{The digraph of Example 3.19.}\n\\end{figure}\n\\noindent \nA direct computation gives \n$\nH_0^{\\mathbf n}(G)=H_2^{\\mathbf n}(G)=\\mathbb R$ and \n$H_i^{\\mathbf n}(G)=0$ for all other $i$.\n\\end{example}\n\\begin{lemma}\\label{l3.20} Let $G=(V,E)$ be a directed hypergraph and \n$I_1=(0\\to 1)$. 
\nThere is an equality \n$\n\\mathfrak N(G)\\Box I_1=\n\\mathfrak{N}(G\\Box I_1)\n$\nof digraphs.\n\\end{lemma}\n\\begin{proof} The digraph \n$\\mathfrak N(G)\\Box I_1$ has the set of vertices \n$\nV_{\\mathfrak N(G)\\Box I_1}=\\left\\{C\\times \\{i\\}\\,|\\, C\\in {\\mathbf{P}}_{01}(G),\\ i=0,1\\right\\}\n$\nand the set of edges $E_{\\mathfrak N(G)\\Box I_1}=E_{0,1}\\cup E_{0\\to 1}$ where \n$\nE_{0,1}=\\{A\\times \\{i\\}\\to B\\times\\{i\\}|\n\\, A\\to B\\in E; i=0,1\\}$ and \n$E_{0\\to 1}=\\left\\{C\\times\\{0\\}\\to C\\times\\{1\\}|\\, C\\in {\\mathbf{P}}_{01}(G)\\right\\}$. \nThe digraph \n$\\mathfrak N(G\\Box I_1)$ has the set of vertices \n$\nV_{\\mathfrak N(G\\Box I_1)}=\\left\\{C\\times \\{i\\}\\,|\\, C\\in {\\mathbf{P}}_{01}(G),\\ i=0,1\\right\\}\n$\nwhich coincides with $V_{\\mathfrak N(G)\\Box I_1}$,\nand the set of edges $E_{\\mathfrak N(G\\Box I_1)}=E_{0,1}\\cup E_{0\\to 1}$ which coincides with $E_{\\mathfrak N(G)\\Box I_1}$. \n\\end{proof}\n\n\\begin{theorem}\\label{t3.21} For a directed hypergraph $G$, the natural path \n\thomology groups $H^{\\bold n}_*(G)$ are homotopy invariant.\n\\end{theorem}\n\\begin{proof} The path homology groups defined on the category of digraphs are homotopy \n\tinvariant \\cite{Hyper}, \\cite{MiHomotopy}, and thus the result follows from Lemma \\ref{l3.20}. \n\\end{proof}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:Introduction} Introduction}\nOriginally suggested~\\cite{old} as descriptive of adsorption of molecules on a substrate (a motivation that has been renewed in recent experiments~\\cite{Blunt}), dimer models have attracted the interest of researchers in various branches of physics, ranging from statistical and condensed matter physics to high-energy physics~\\cite{string}. Their distinctive properties essentially result from the close-packing condition, which imposes that on a lattice, each site should be part of one and only one dimer. This strict condition generates strong correlations between degrees of freedom, even when interactions are absent from the system. \n\nClassical dimer models were originally studied in statistical mechanics, with the famous result that non-interacting dimer models on planar graphs can be solved exactly using Pfaffians~\\cite{60a,60b}. On the square lattice for instance, it was shown using subsequent techniques that dimer correlation functions decay algebraically with distance~\\cite{60c}. It was later shown that dimer models can also be viewed as dual versions of Ising models~\\cite{60b} and generate the ground-state manifolds of fully frustrated Ising models~\\cite{FF}. For bipartite lattices in three dimensions, dimers can be represented by an effective magnetic field living on the bonds of the lattice. The close-packing condition for the dimers encodes a Gauss law for the magnetic field~\\cite{Huse}. Postulating a quadratic dependence on this field of the effective entropic action suggests the existence of {\\it dipolar} dimer-dimer correlations. Monte Carlo simulations~\\cite{Huse} indeed confirm this picture with a great accuracy, and the corresponding phase of dimers on 3d bipartite lattices is often referred to as a Coulomb phase~\\cite{HenleyCoulomb} with this electromagnetic analogy in mind.
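\n\nTo make the preceding statement concrete, here is a standard coarse-grained sketch of this electromagnetic analogy (our own summary, not reproduced from the references above). Assigning to coarse-grained, divergence-free field configurations the entropic weight $e^{-S}$ with\n$$\nS[\\mathbf{B}]=\\frac{K}{2}\\int d^3x \\, |\\mathbf{B}(\\mathbf{x})|^2, \\qquad \\nabla\\cdot \\mathbf{B}=0,\n$$\nthe Gaussian average over the transverse fluctuations yields, at large separation, correlations of the dipolar form\n$$\n\\langle B_i(\\mathbf{x})B_j(\\mathbf{0})\\rangle \\propto \\frac{3\\hat{x}_i\\hat{x}_j-\\delta_{ij}}{|\\mathbf{x}|^3},\n$$\nwhich is the algebraic decay of dimer-dimer correlations referred to above.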
\n\nIn some sense, dimer models form the ``Ising model'' of local constraints, spelling out their ubiquity in physics. In condensed matter physics, dimer models emerged as classical counterparts of quantum dimer models~\\cite{QDM} (whose properties for particular values of the parameters are determined by those of the classical problem), as well as effective models for magnetization plateaus in frustrated magnets~\\cite{plateaus}. Dimer models also show close similarities with spin-ice systems~\\cite{Isakov}, which can also host a Coulomb phase. There the close-packing condition translates into the ``ice rule''.\n\nIntriguing physics takes place in classical dimer models when interactions are present. Perhaps the simplest case to study is to add local interactions which favor parallel alignment of dimers on a plaquette of the lattice. In two dimensions on bipartite lattices, the system undergoes a phase transition from a columnar phase at low temperature to a disordered critical phase at high temperature~\\cite{Alet2d}. The transition is of the Kosterlitz-Thouless type. It is possible to obtain a field theoretical description of the transition in terms of a height model, which predicts accurately the behavior of the correlation functions of different observables~\\cite{Alet2d,Papanikolaou,Castelnovo}. The situation is much less clear in three dimensions. At high temperature on the cubic lattice, the system is located in the Coulomb phase, which is destabilized as the temperature is lowered towards a columnar order of dimers. Quite surprisingly, the transition between the critical Coulomb phase and the ordered phase is \\textit{continuous}~\\cite{Alet3d}. Critical exponents estimated from the numerical simulations do not appear to be those of a known ``simple'' universality class, although they are very close to those of a tricritical theory.\n\n A field theoretical description of the transition observed in the $3d$ classical dimer model is not easy. In particular, it cannot be properly addressed in the traditional Ginzburg-Landau formalism since the correlations between degrees of freedom in the disordered phase decay algebraically and not exponentially. Different attempts to provide a field theoretical description of the critical point have led to a representation in the continuum in terms of two complex matter fields coupled to a non compact $U(1)$ gauge field~\\cite{Chalker,Charrier, Chen}, a theory known as the non compact $CP^1$ ($NCCP^1$) theory. The problem is that it is not clear at present whether this theory in fact possesses an infrared fixed point. Efforts to simulate lattice versions of the $NCCP^1$ model either lead to a weakly first-order transition~\\cite{Kuklov1}, or to a continuous transition with unconventional critical exponents~\\cite{Charrier,Motrunich}. Of course, it is possible that the $NCCP^1$ theory possesses a tricritical point and that the different simulated microscopic models all correspond to the same theory but flow in the continuum toward different parameter regimes. \n\n Another possibility is that the microscopic dimer model of Ref.~\\onlinecite{Alet3d} sits at a tricritical point, which would be essentially the only way of reconciling the observed continuous transition with a Ginzburg-Landau approach based on symmetry-breaking. Note that the values of the critical exponents $\\alpha \\simeq 0.5$, $\\nu \\simeq 0.5$ and $\\eta \\simeq 0$~\\cite{Alet3d} are suggestive of this scenario as they are close to the ones of an $O(N)$ tricritical theory. This putative tricritical point could be the one of the $NCCP^1$ theory, or of another field theory yet to be specified. 
It is however unclear why the dimer model should be located precisely at a tricritical point: this usually requires a fine-tuning of parameters, and there is no parameter other than temperature in the original microscopic model~\\cite{Alet3d}. \\\\\n \n In this paper, we step aside from field theoretical considerations and instead provide new valuable data regarding the critical behavior of 3d classical dimer models. We have carried out extensive Monte Carlo (MC) simulations of a dimer model consisting of a four-dimer interaction on a cubic lattice, in addition to the usual attractive two-dimer plaquette interactions. The cubic interaction corresponds to a coupling between four parallel dimers sitting on the edges of a cube, which can be attractive (non frustrated regime) or repulsive (frustrated regime). The system is studied with a worm MC algorithm which allows us to sample systems of linear size up to $L = 180$. Our results suggest that the phase transition between Coulomb and columnar phases is first order almost everywhere in the non frustrated regime, and second order on the frustrated side, forcing the existence of a tricritical point in between these two regimes. Moreover, we find that the critical exponents in the strongly frustrated regime are different from the ones measured in the absence of the cubic interaction, with $\\eta \\sim 0.2$ and $\\nu \\sim 0.6$. In order to rule out a very weak first order transition over the whole range of parameters (which is always possible), we performed a flowgram analysis which clearly indicates two collapses above and below the tricritical point. Finally, at very low temperature in the strongly frustrated regime, we observe a new crystalline phase resulting from the destabilization of the columnar phase. This phase has a degeneracy growing exponentially with the linear system size and is separated from the columnar phase by a first order transition. The final phase diagram that we obtain is presented in Fig.~\\ref{fig:phasediag}. We note that similar results were recently obtained by Papanikolaou and Betouras~\\cite{Papanikolaou2} in a different deformation of the dimer model. A comparison between the two works is made in Sec.~\\ref{sec:conclusion}.\\\\\n \n The plan of the paper is the following. In Sec.~\\ref{sec:def}, we describe the model, the algorithm and the relevant observables for its study. In Sec.~\\ref{sec:nonfrustrated}, we present our results in the non frustrated regime, when both interactions are attractive. In Sec.~\\ref{sec:frustated}, we analyze the frustrated regime, where the plaquette interaction is attractive but the cubic interaction is repulsive. The final form of the phase diagram is obtained with a flowgram analysis, which we present in Sec.~\\ref{sec:flow}. We finally discuss the implications of our results in Sec.~\\ref{sec:conclusion}.\n \n \n \\begin{figure}[h]\n \\begin{center}\n\\includegraphics*[width=8.5cm]{Figures\/phasediagram.png}\n\\caption{Phase diagram of the attractive dimer model with cubic interactions. Dots indicate values of interactions where simulations were performed. Dashed lines and green dots represent second order transitions. Solid lines and red dots correspond to first order transitions. 
The white dot represents a point where the nature of the transition is still unclear.}\n\\label{fig:phasediag}\n\\end{center}\n\\end{figure}\n \n\n \\section{Definitions}\n \\label{sec:def}\n \\subsection{System}\n The system we consider is a cube of linear dimension $L$ (total number of sites $N=L^3$) covered by hard-core dimers. Only dimer configurations $\\mathcal{C}$ obeying the close-packing condition contribute to the partition function. The partition function reads:\n\\begin{equation}\nZ = \\sum_{\\mathcal{C}} \\exp(-\\beta E_\\mathcal{C}),\n\\end{equation}\nand the energy of an allowed configuration $\\mathcal{C}$ is given by:\n \\begin{equation}\n \\label{eq:model}\n E_\\mathcal{C} = v_2 (N_{||} + N_{=} + N_{\/\/}) + v_4 N_{\\rm cubes},\n \\end{equation} \nwhere the first term is proportional to the number of plaquettes in the configuration containing two parallel dimers (referred to in the following as ``parallel plaquettes'') and the second counts the number of unit cubes sustaining four parallel dimers (see Fig.~\\ref{fig:cubes}). Occurrences of both terms in a given dimer configuration are illustrated in Fig.~\\ref{fig:3d}. In the remainder of this study, we will restrict ourselves to the case of attractive plaquette interactions $v_2 < 0$, while the cubic interaction $v_4$ can be attractive or repulsive. We investigate the properties of the system as a function of the ratio $x = v_4\/v_2$ and temperature $T = 1\/\\beta$. When $x > 0$, both interactions have the same sign and the system will be said to be non frustrated, as both interactions favor the same columnar ground-states. Conversely, when $x<0$, the system will be in a frustrated regime ($v_2$ and $v_4$ terms compete). In general, we take $v_2 = -1$ and vary $v_4$. The only exceptions are the two limits $x \\rightarrow \\pm \\infty$, where we consider the system in the absence of plaquette interactions ($v_2 = 0^-$) and take $v_4 = \\pm 1$. \n\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics*[width=\\columnwidth]{Figures\/mini_dimer.pdf}\n\\caption{The three different possibilities of four parallel dimers on a unit cube. Each pattern contributes $v_4$ to the energy $E_{\\mathcal{C}}$ in Eq.~\\ref{eq:model}.}\n\\label{fig:cubes}\n\\end{center}\n\\end{figure} \n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics*[width= \\columnwidth]{Figures\/model3d.pdf}\n\\includegraphics*[width=\\columnwidth]{Figures\/model3dcubic.pdf}\n\\caption{Illustration of occurrences of the $v_2$ (top) and $v_4$ (bottom) terms of the model defined in Eq.~\\ref{eq:model}: interacting plaquettes (top) and cube (bottom) are represented with shaded surfaces. }\n\\label{fig:3d}\n\\end{center}\n\\end{figure} \n\nWe simulate the dimer model by means of a worm Monte Carlo algorithm with a local heat-bath detailed balance condition~\\cite{SandvikMoessner}. Compared to a local Metropolis algorithm, autocorrelation times are drastically reduced with a worm algorithm due to the use of non-local moves, which allows us to reach systems of linear size up to $L = 180$. As we will see in the next section, the ability to simulate very large system sizes is of crucial importance to distinguish between continuous and weakly first-order transitions. We would indeed like to emphasize that for systems like dimer models, or others which contain \\textit{a priori} long-range correlations, one should be particularly cautious regarding the issue of finite-size scaling. Here, for the largest system sizes, up to $5 \\times 
10^6$ sweeps have been carried out (we define one sweep by performing enough worm moves such that on average every site of the lattice is visited). The convergence of simulations is checked by looking at autocorrelation times of the various observables, as obtained from a binning analysis. \n \n \\subsection{Observables}\n \\label{sec:observables}\nThe observables in our system are of three kinds: a first group is made of the thermodynamic quantities such as the average energy, a second type of observables is related to the columnar ordering, and the third type is related to the stiffness of the system (fluctuations of dimer fluxes).\n\n \\subsubsection{Thermodynamic quantities}\n\nA phase transition can generally be detected by monitoring the probability distribution of the energy of the system. For a first order transition, the average energy $\\langle E \\rangle$ is discontinuous and exhibits a latent heat when the temperature is lowered. For a second order transition, the average energy is continuous but its first derivative, the specific heat per site $C_v\/N$, obeys the scaling law:\n\\begin{equation}\n\\frac{C_v}{N} = \\frac{1}{N}\\frac{d\\langle E \\rangle}{dT}=\\frac{\\langle E^2\\rangle-\\langle E\\rangle ^2}{T^2 L^3} \\simeq C_v^{\\rm reg} + A L^{\\alpha\/\\nu}.\n\\end{equation}\nThe first term $C_v^{\\rm reg}$, corresponding to the regular part of the specific heat at criticality, is often forgotten in fits of numerical data as the divergence of the second term usually dominates (for $\\alpha>0$). However, at several points of the phase diagram, we find that this term cannot be neglected as the divergence of the specific heat can be quite slow. In this situation, one can take $C_v^{\\rm reg}$ either as a new fitting parameter or as given by results obtained on small systems where the second term is negligible. In these cases, we take a conservative approach for the error bar on $\\alpha \/ \\nu$ and quote a result which encloses all possibilities.\n\nIn comparison, for a first order transition, the specific heat diverges like the volume: $C_v\/N \\propto L^3$. Finally, another means of distinguishing between first and second order transitions is to consider the whole histogram of energy at the transition temperature, as obtained in the Monte Carlo simulation. For a first order phase transition, we expect the appearance of double peaks, centered at the average energies of the two co-existing phases. These peaks appear only for samples with size above (or close to) the correlation length at the transition, and should remain separated as the system size increases. For a second order phase transition, the histogram should contain a unique peak. \n\n\\subsubsection{Columnar order parameter}\nThe local columnar order parameter $\\mathbf{m}(\\mathbf{r})$ is defined with respect to the dimer occupation number at each site $\\mathbf{n}(\\mathbf{r})$:\n\\begin{equation}\n\\mathbf{m}(\\mathbf{r}) = (-1)^\\mathbf{r} \\mathbf{n}(\\mathbf{r}).\n\\end{equation}\nThe global order parameter reads $\\mathbf{C} = \\frac{2}{L^3} \\sum_\\mathbf{r} \\mathbf{m}(\\mathbf{r}) $ and is normalized such that the six columnar states correspond to $\\mathbf{C} = \\{ \\pm 1, 0,0 \\} , \\{ 0 , \\pm 1, 0 \\} , \\{ 0,0,\\pm 1 \\}$ and $C = \\langle | \\mathbf{C} | \\rangle = 1$. One also considers the corresponding susceptibility $\\chi$. 
In particular, for a second order transition:\n\\begin{equation}\n\\chi\/N = \\frac{\\langle \\mathbf{C}^2\\rangle - \\langle |\\mathbf{C}|\\rangle^2}{L^3} \\propto L^{2-\\eta},\n\\end{equation}\nwhile for a first order transition $\\chi\/N \\propto L^3$. Finally, the Binder cumulant:\n\\begin{equation}\nB = \\frac{\\langle \\mathbf{C}^4\\rangle}{\\langle \\mathbf{C}^2\\rangle^2}\n\\end{equation}\nis a scale-invariant quantity in the case of a continuous transition, and should thus exhibit a crossing point at criticality as a function of the system size. Moreover, the finite-size scaling (FSS) of its derivative with respect to the temperature: \n\\begin{equation}\n\\frac{\\textrm{d} B}{\\textrm{d} T} \\propto L^{1\/\\nu},\n\\end{equation}\ngives direct access to the critical exponent $\\nu$.\n \\subsubsection{Stiffness}\nThe (inverse) stiffness encodes fluctuations of dimer fluxes across a plane~\\cite{Huse,Alet3d}:\n\\begin{equation}\nK^{-1} = \\sum_{\\alpha = x,y,z} \\frac{\\langle \\phi_\\alpha^2\\rangle}{3L}\n\\end{equation}\nwhere the flux $\\phi_\\alpha$ is the algebraic number of dimers crossing a plane\nperpendicular to the unit vector $\\hat{\\alpha}$. Given a lattice direction, the contribution to the flux is $+1$ for a dimer going\nfrom one sublattice to the other and $-1$ for the reverse situation. The stiffness is finite in the Coulomb phase, reflecting the presence of dipolar correlations between dimers. On the other hand, the columnar phase is robust to insertion of fluxes and $K^{-1}$ vanishes exponentially with system size. At a second order phase transition, the quantity $L K^{-1}$ should be scale invariant~\\cite{Alet3d} and the scaling of its derivative:\n\\begin{equation}\nL\\cdot\\frac{ dK^{-1}}{dT} \\propto L^{1\/\\nu}\n\\end{equation}\nprovides another estimate of the exponent $\\nu$, from the high temperature side. In general, the error bars that we quote on exponents include at the same time errors due to the fitting procedure (which we measure by considering the stability of fits upon exclusion of a few data points), errors from the determination of the critical temperature, as well as statistical errors.\n\n\\section{Non frustrated side: $v_2 < 0$, $v_4 < 0$}\n\\label{sec:nonfrustrated}\nWe start the discussion of our numerical results on the non-frustrated side $v_4 < 0$ and $v_2 < 0$ of the phase diagram. When both interactions are attractive, we expect to find the same phases as in the simple attractive plaquette model: a six-fold degenerate columnar phase at low temperature and a Coulomb phase with dipolar correlations at high temperature. \n\nFirst, let us consider the extreme case where only attractive cubic interactions are present ($v_2=0$ or $\\tanh (v_4\/v_2 ) = 1$ in the phase diagram of Fig.~\\ref{fig:phasediag}). The evolution of the energy, columnar order parameter $C$ and inverse stiffness $K^{-1}$ for small system sizes is presented in Fig.~\\ref{fig:extremeNF}. As expected, the columnar order is non-zero only at low temperature, where the inverse stiffness vanishes. But in contrast with the attractive plaquette model, both quantities exhibit a strong discontinuity at the transition temperature $T_c \\sim 1.1$. In fact, the energy also displays such a jump, characteristic of a latent heat. This undoubtedly shows a first order transition. 
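\n\nAs a concrete illustration of the estimators of Sec.~\\ref{sec:observables}, the following minimal Python sketch evaluates the columnar order parameter, the Binder cumulant and the inverse stiffness on stored configurations. It is only a sketch with an assumed encoding (not our production worm code): a configuration is stored as an array \\texttt{occ} with \\texttt{occ[a,x,y,z]}$=1$ if a dimer occupies the link $(\\mathbf{r},\\mathbf{r}+\\hat{a})$, and we read the alternating sign component-wise, $m_a(\\mathbf{r})=(-1)^{r_a} n_a(\\mathbf{r})$, which indeed yields $C=1$ in a perfect columnar state:\n\\begin{verbatim}
import numpy as np

# occ[a, x, y, z] = 1 if a dimer occupies the link (r, r + e_a) on an
# L^3 periodic cubic lattice (close packing: each site covered once).

def columnar_order(occ):
    # Vector order parameter with m_a(r) = (-1)^{r_a} n_a(r).
    L = occ.shape[1]
    C = np.empty(3)
    for a in range(3):
        shape = [1, 1, 1]
        shape[a] = L
        sign = (-1.0) ** np.arange(L).reshape(shape)
        C[a] = 2.0 / L**3 * (sign * occ[a]).sum()
    return C

def fluxes(occ):
    # Algebraic dimer flux through one fixed plane per lattice direction.
    L = occ.shape[1]
    x, y, z = np.indices((L, L, L))
    sub = (-1.0) ** (x + y + z)   # +1 when crossing from sublattice A to B
    return np.array([(occ[0] * sub)[0, :, :].sum(),
                     (occ[1] * sub)[:, 0, :].sum(),
                     (occ[2] * sub)[:, :, 0].sum()])

def binder_and_inverse_stiffness(samples):
    # MC estimates of B and K^{-1} from a list of configurations.
    L = samples[0].shape[1]
    C2 = np.array([np.sum(columnar_order(o) ** 2) for o in samples])
    phi2 = np.array([np.sum(fluxes(o) ** 2) for o in samples])
    return np.mean(C2 ** 2) / np.mean(C2) ** 2, np.mean(phi2) / (3 * L)
\\end{verbatim}
In the actual simulations these quantities are of course accumulated on the fly along the worm moves; the sketch only fixes the sign and normalization conventions.\n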
\n\n \\begin{figure}[h]\n \\begin{center}\n\\includegraphics*[width=7cm]{Figures\/orders.pdf}\n\\caption{Left: Evolution of the columnar order parameter $C$ and inverse stiffness $K^{-1}$ as a function of temperature for $v_2=0$ and $v_4=-1$. Right: Temperature dependence of the average energy per site $\\langle E \\rangle\/N$. Here $L = 16$.}\n\\label{fig:extremeNF}\n\\end{center}\n\\end{figure}\n\nTo settle definitively the nature of the transition as well as to benchmark the method, we also study the histogram of energy observed during the simulation, which corresponds to the probability distribution of the energy $P(E)$. The appearance of a double peak distribution at the critical temperature is a typical sign of a first order transition. For $v_4\/v_2 = \\infty$, we can easily detect this double peak for system sizes as small as $L = 8$ (see Fig.~\\ref{fig:histos}). \n\nWe now introduce a small attractive plaquette interaction $v_2$ and repeat the procedure by tracking the temperature at which columnar order sets in and the inverse stiffness vanishes. We observe that the nature of the transition remains discontinuous but that the correlation length grows as the ratio $v_4\/v_2$ is decreased: for $v_4\/v_2 = 0.8$ we find double peaks only at sizes $L \\geq 16$, for $v_4\/v_2 = 0.6$ at sizes $L \\geq 32$ and for $v_4\/v_2 = 0.4$ at $ L \\geq 80$ (see Fig.~\\ref{fig:histos}). Finally, for systems close to the pure plaquette model $v_4\/v_2 = 0$, it becomes extremely difficult to distinguish double peaks in the energy distribution. In fact, the histogram for $v_4\/v_2 = 0.2$ shows a slightly deformed single peak for $L = 140$. We were not able to see any sign of double peaks for $v_4\/v_2 = 0.1$ up to $L=140$. \n\n \\begin{figure}[h]\n \\begin{center}\n\\includegraphics*[width=8cm]{Figures\/histo.pdf}\n\\caption{Energy per site probability $P(e)$ as a function of energy per site $e=E\/N$, for different ratios $v_4\/v_2>0$ at the critical temperature separating columnar and Coulomb phases. The size $L$ corresponds to the minimal length for which the histograms start to display a double peak distribution. For $v_4\/v_2 = 0.2$, a deformed single peak is observed in the distribution for $L = 140$, the maximal sample size simulated for this parameter.}\n\\label{fig:histos}\n\\end{center}\n\\end{figure}\n \nAnother way to discern a discontinuous transition is to measure the critical scaling of the maximum of the susceptibility and of the specific heat per site. At a first order transition, both quantities should diverge like the volume $L^3$. In terms of the critical exponents introduced in Sec.~\\ref{sec:observables}, this would correspond to effective critical exponents $\\left.\\frac{\\alpha}{\\nu}\\right|_{\\rm eff}= 3$ and $\\eta_{\\rm eff} = -1$. We have determined the scaling laws of these two quantities for several values of the ratio $v_4\/v_2$ (see Table~\\ref{tab:tableau1}) and find that when $v_4\/v_2$ is large, the exponents agree with the first order values. As we approach $v_4\/v_2 = 0$, the scalings of $C_v$ and $\\chi$ get smoother and the exponents closer to the values of Ref.~\\onlinecite{Alet3d} $\\left.\\frac{\\alpha}{\\nu}\\right|_{\\rm eff} \\sim 1$ and $\\eta_{\\rm eff} \\sim 0$. At this point, it is not possible to conclude on the order of the transition at $v_4\/v_2 = 0.1$: the transition can be either continuous or very weakly first order. 
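\n\nThe effective exponents quoted in Table~\\ref{tab:tableau1} follow from simple power-law fits of the peak heights of $\\chi\/N$ and $C_v\/N$. A schematic version of these fits, with synthetic placeholder data standing in for our measurements, reads:\n\\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Placeholder peak heights; in practice these are the maxima of chi/N
# and C_v/N measured at each system size.
L = np.array([16.0, 24, 32, 48, 64, 96, 128])
chi_max = 0.4 * L ** 2.02          # stands in for susceptibility peaks
cv_max = 1.5 + 0.02 * L ** 1.1     # stands in for specific heat peaks

def chi_law(L, a, eta_eff):        # chi/N ~ L^{2 - eta}
    return a * L ** (2.0 - eta_eff)

def cv_law(L, c_reg, A, a_over_nu):  # keeps the regular part C_v^reg
    return c_reg + A * L ** a_over_nu

(_, eta_eff), _ = curve_fit(chi_law, L, chi_max, p0=(1.0, 0.0))
(c_reg, A, a_over_nu), _ = curve_fit(cv_law, L, cv_max, p0=(1.0, 0.1, 1.0))
print(eta_eff, a_over_nu)   # -> -0.02 and 1.1 for this synthetic data
\\end{verbatim}
A strongly first order transition corresponds to $\\eta_{\\rm eff} \\to -1$ and $\\left.\\alpha\/\\nu\\right|_{\\rm eff} \\to 3$; repeating such fits while excluding the smallest (or largest) sizes is what produces the spread quoted in our error bars.\n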
\n\n\\begin{center}\n\\begin{table}[h]\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline \n$v_4\/v_2$ & $T_c$ &$ L_{\\rm min}$ & $ L_{\\rm max}$ &$\\left.\\frac{\\alpha}{\\nu}\\right|_{\\rm eff}$ & $\\eta_{\\rm eff}$ \\\\\n\\hline \n0.6 & $2.225$&$ 16 $& $32$ &$ 3 $ & $-1.00$\\\\\n\\hline\n0.4 & $2.033$&$ 56 $ & $80$ & $ 2.9$& $ -0.90$ \\\\\n\\hline\n0.2 & $1.851$&$ 64$ & $140$ & $2$ &$-0.40$ \\\\\n\\hline\n0.1 & $ 1.7625 $ &$64 $&$140$&$1.4$& $-0.25$ \\\\\n\\hline\n0 & $ 1.765$&$56 $ & $96$&$ 1.11(5)$ & $ -0.02(5) $ \\\\\n\\hline\n\\end{tabular}\n\\caption{Critical temperature and effective critical exponents for different non frustrated coupling ratios. For each ratio, the exponents have been measured by an FSS analysis using $L_{\\rm min}$ as the minimal size. The values at $v_4\/v_2=0$ are taken from Ref.~\\onlinecite{Alet3d}.}\n\\label{tab:tableau1} \n\\end{table}\n\\end{center}\n\n\\section{Frustrated side: $v_2 < 0$, $v_4 > 0$}\n \\label{sec:frustated}\n \n We now turn to the analysis of the system where plaquette interactions are attractive but cubic interactions are repulsive. As the two interactions compete with each other, we can expect, at least at low temperature and in the regime $v_4\/v_2 \\ll -1$, that new phases may appear. \n\nLet us again first discuss the extremal case where only cubic interactions are present ($v_2 = 0$). We observe that the columnar order parameter vanishes for all temperatures while the inverse stiffness $K^{-1}$ always remains non-zero and finite (see Fig.~\\ref{fig:extremeF} left). Moreover, although the specific heat displays a maximum (see Fig.~\\ref{fig:extremeF} right), it does not display any dependence on the size of the system and cannot be related to a critical phenomenon. Thus, we obtain the surprising result that the system is always disordered when $v_4 > 0$ and $v_2 = 0$. In other words, the Coulomb phase can accommodate having no cubes occupied by four dimers (as in Fig.~\\ref{fig:cubes}), even down to very low temperatures. Furthermore, the Coulomb phase appears to be strengthened in this situation, as the inverse stiffness increases slightly as the temperature is lowered. \n \n \\begin{figure}[h]\n \\begin{center}\n\\includegraphics*[width=7cm]{Figures\/ordersINF.pdf}\n\\caption{For $v_4 = 1$ and $v_2 = 0$. Left: Evolution of the stiffness $K^{-1}$ and columnar order $C$ as a function of temperature for $L=16$. The non-zero value reached by $C$ is due to the finite size of the sample. Right: The height of the peak of the specific heat per site does not display any dependence on system size.}\n\\label{fig:extremeF}\n\\end{center}\n\\end{figure}\n\nAs soon as a finite attractive coupling $v_2$ is introduced, we find that the specific heat per site displays two peaks as a function of temperature: one at a lower temperature $T_{c_1}$, whose height diverges strongly with the system size, and another at an upper temperature $T_{c_2}$, whose height diverges very slowly. Fig.~\\ref{fig:result1} and its insets display results at $v_4\/v_2=-1$, which are typical of what we observe in the frustrated regime. The first peak of the specific heat is associated with the freezing of the columnar order parameter to a value smaller than $1$ at $T < T_{c_1}$, the second with the appearance of the Coulomb phase for $T>T_{c_2}$ (see upper inset of Fig.~\\ref{fig:result1}). In the next two sections, we will detail the nature of these two phase transitions and of the phase separating them. 
We note that as the ratio $v_4\/v_2 \\rightarrow - \\infty$, the two critical temperatures get closer, such that it becomes more difficult to detect the peak at $T_{c_2}$. \n\n\\begin{figure}[h]\n \\begin{center}\n\\includegraphics*[width=8cm]{Figures\/results1.pdf}\n\\caption{Evolution of the specific heat per site as a function of temperature for $v_4 \/v_2= -1$. The first peak at $T_{c_1} \\sim 0.5$ is associated with the crystallization of the system in an ordered phase different from the columnar phase, as can be readily seen from the evolution of the columnar order parameter (see upper inset). The second peak, at $T_{c_2} \\sim 0.95$, delimits a region with non-zero columnar order from a region with non-zero inverse stiffness. Upper inset: evolution of the inverse stiffness $K^{-1}$ and of the columnar order parameter $C$ for $L = 16$. Lower inset: zoom of the specific heat close to $ T_{c_2}$. }\n\\label{fig:result1}\n\\end{center}\n\\end{figure}\n \n \\subsection{Low temperature phase transition $T_{c_1}$}\n \n We now discuss the nature of the phase below the lower transition by first considering the evolution of the $T=0$ ground-states as a function of the frustration ratio $v_4\/v_2$. For large positive values of $v_4$ (but still with $v_2=-1$), we expect the columnar ground-states to become unstable, as they maximize the number of parallel cubes. In this limit, we must find dimer configurations which have exactly zero parallel cubes but can nonetheless support a maximal number of parallel plaquettes. We find that there are several configurations (in fact an exponential number) that satisfy this frustrating condition. For instance, we present in Fig.~\\ref{fig:packing}, on the example of an $L=6$ cube, three configurations satisfying these two constraints. Configuration $A$ is made of a unit pattern consisting of two planes that repeat each other. The unit pattern of configurations $B$ and $B'$ possesses three planes. Configurations $B$ and $B'$ are simply related by a unit translation of the two bottom planes. If we now consider larger systems, it is easy to see that we can use the same plane patterns at will, by randomly choosing $B$ or $B'$ every three planes, and this without creating parallel cubes. Therefore the degeneracy of the ground-state grows at least like $2^{L\/3}$, that is, exponentially with the linear system size. It is possible that, by forming other cost-free defects within a plane, the degeneracy is even higher, $\\propto e^{a L^2}$, but we have not made any further investigations in this direction.\n \n \\begin{figure}[h]\n \\begin{center}\n\\includegraphics*[width=8cm]{Figures\/packing.pdf}\n\\vspace{-4cm}\n\\caption{Three configurations on the $L=6$ cube which maximize the number of parallel plaquettes without having any parallel cubes. Configuration $A$ is built out of a unit pattern containing two planes which repeats itself three times. This unit pattern possesses $42$ parallel plaquettes, leading to a number of $7\/12$ parallel plaquettes per site. Configurations $B$ and $B'$ both consist of a unit pattern of three planes repeating itself twice. The unit pattern possesses $63$ parallel plaquettes, also leading to a number of $7\/12$ parallel plaquettes per site. Configuration $B'$ is obtained from configuration $B$ by shifting the dimer pattern on the two lowest planes by one unit cell to the right. 
Mixing $B$ and $B'$ patterns, one can create several dimer configurations having the same energy.}\n\\label{fig:packing}\n\\end{center}\n\\end{figure}\n\nAs far as thermodynamics is concerned, it is easy to check that all such configurations with no parallel cubes have on average a fraction $7\/36$ of their plaquettes parallel. We refer to the corresponding phase as Crystal II. While it may be possible to correctly define an order parameter for this crystal (in spite of the high ground-state degeneracy), we simply concentrate on a comparison between the energy of the Crystal II ground-states and that of the columnar phase. A columnar ground-state satisfies $1\/3$ of the possible parallel plaquettes, and one parallel cube out of two. Since the number of plaquettes (cubes) is three times (equal to) the number of lattice sites, we obtain:\n\\begin{eqnarray}\nE_{\\rm Crystal \\, II} \/ L^3 &=& E_{\\rm columnar} \/ L^3\\nonumber \\\\\n\\Leftrightarrow \\frac{7}{12} v_2 &=& v_2 + \\frac{v_4}{2} \\\\\n\\Leftrightarrow \\frac{v_4}{v_2} &=& -\\frac{10}{12}. \\nonumber \n\\end{eqnarray}\nThat is, the Crystal II phase should be favored as soon as $v_4\/v_2 < -10\/12\\simeq -0.833$. We now compare this simple estimate with numerical simulations. Considering the evolution of the columnar order parameter as a function of $T$ for different ratios $v_4\/v_2$, we find that $C$ converges to $1$ at very low temperature for $v_4\/v_2 > -0.8$, while for $v_4\/v_2 \\leq -0.8$ it converges towards a smaller value (see Fig.~\\ref{fig:crystal} top). This indicates that the lowest energy configurations are no longer the columnar ones. A further indication of the Crystal II phase is given by the average number per site of parallel cubes $\\langle N_{\\rm cube} \\rangle\/L^3$ and parallel plaquettes $\\langle N_{\\rm plaquette}\\rangle\/L^3$. As expected, we find (see Fig.~\\ref{fig:crystal} bottom) that $\\langle N_{\\rm cube} \\rangle\/L^3$ decreases from $1\/2$ to $0$ and $\\langle N_{\\rm plaquette}\\rangle \/L^3$ from $1$ to $7\/12$ quite abruptly as soon as $v_4\/v_2 \\lesssim -0.8$. Note that the estimate $\\left.v_4\/v_2\\right|_c \\simeq -0.8$ that we obtain is quite rough, as it is affected by the chosen grid in $v_4\/v_2$, the moderate size of the sample ($L=32$) and the finite temperature used in our simulations. Given this, it can be considered as in good agreement with the exact value $-10\/12$.\n\nThe abrupt behavior observed in Fig.~\\ref{fig:crystal} tends to indicate a first order transition between the Crystal II and columnar phases. This is confirmed by the strong divergence of the specific heat (Fig.~\\ref{fig:result1}), but also by the appearance of a latent heat (see Fig.~\\ref{fig:crystal} top). We find that in the full phase diagram, the transition at $T_{c_1}$ is always of first-order nature. When $v_4\/v_2 \\gtrsim -0.8$, the Crystal II phase and the low-$T$ phase transition disappear. \n \n \\begin{figure}[h]\n \\begin{center}\n\\includegraphics*[width=7cm]{Figures\/ColumnarT.pdf}\n\\includegraphics*[width=6.5cm]{Figures\/phasev2.pdf}\n\\caption{Top: Evolution as a function of temperature of the columnar order parameter $C$ for different ratios $v_4\/v_2$ at $L = 32$. For $v_4\/v_2 < -0.8$, $C$ does not converge towards $1$ at $T=0$. Inset: Evolution of the average energy per site. A discontinuity corresponding to the phase transition can easily be detected. 
Bottom: Evolution of $C$, $\\langle N_{\\rm plaquette} \\rangle\/N$ and $\\langle N_{\\rm cube} \\rangle\/N$ as a function of $v_4\/v_2$ for $T = 0.3$ and $L = 32$.}\n\\label{fig:crystal}\n\\end{center}\n\\end{figure}\n \n\n\\subsection{High temperature phase transition $T_{c_2}$} \n\nThe high temperature phase transition corresponds to the simultaneous emergence of dipolar correlations at high temperature and disappearance of the columnar order. \nBefore performing a detailed scaling analysis, we already make an important statement: in all simulations for $v_4>0$, we found {\\it no evidence} for a first-order phase transition at $T_{c_2}$ between the columnar and the Coulomb phases. This has been checked in all observables at hand (thermodynamic quantities, observables related to the columnar order or to the Coulomb phase), including energy histograms at the transition. We will come back to this issue at the end of this section, and in Sec.~\\ref{sec:flow}.\n\n\\subsubsection{Strongly frustrated regime}\n\nIn order not to be influenced by any crossover effect, we first concentrate on the transition at $T_{c_2}$ far away from the putative tricritical point at $v_4 = 0$. For $v_4\/v_2 = -10$, we performed large scale simulations and applied the FSS analysis to calculate the critical exponents. Our set of data is presented in Fig.~\\ref{fig:results10}. Let us first concentrate on the specific heat per site (see Fig.~\\ref{fig:results10} top). At the transition, it exhibits a very slow growth with system size, suggestive of a continuous transition. In fact, the divergence is so small that the regular part $C_v^{\\rm reg}$ of the specific heat contributes the most, even for $L = 140$. The best fit we obtained for system sizes ranging from $L = 32$ to $L=140$ gives an estimate of $\\alpha\/\\nu \\sim 0.4$ (Fig.~\\ref{fig:results10} top inset). Unfortunately, this estimate varies a lot when considering another subset of system sizes. For instance, discarding the point at $L = 140$ leads to an estimate $\\alpha\/\\nu = 0.6$, and discarding the two points $L=120$ and $L=140$ leads to $\\alpha\/\\nu \\sim 0.8$. Thus, while we cannot conclude on the precise value of $\\alpha$ at this point, it seems at least to be quite small. The evolution of the Binder cumulant and of the product $K^{-1} \\cdot L$ is presented in the middle panel of Fig.~\\ref{fig:results10}. For both quantities, we observe a crossing point, in agreement with a second order transition. The two crossing temperatures are very close: $T_{\\rm Binder} = 0.672$ and $T_{\\rm Stiffness} = 0.6715$. The thermodynamic measurements of the derivative quantities $dB\/dT$ and $L dK^{-1}\/dT$, shown in the insets, allow us to have access to two independent estimates of the exponent $\\nu$. We find $\\nu_{\\rm Binder} = \\nu_{\\rm Stiffness} = 0.63(4)$. These exponents are compatible with the Ising and XY universality classes in $3d$. Moreover, this is consistent with a small value of $\\alpha$, assuming hyperscaling $\\alpha = 2 - d\\nu$. The values of the Binder cumulant and the stiffness at the critical point, $B_c$ and $(K^{-1} \\cdot L)_c$, also furnish valuable information, because these quantities are also universal. Checking the crossings of the three largest system sizes, we find $1.28 \\leq B_c \\leq 1.30$ and $ 0.16 \\leq (K^{-1} \\cdot L)_c \\leq 0.20$. 
While the value of $B_c$ is consistent with the result obtained for the pure plaquette model, $B_c(v_4\/v_2 = 0) = 1.27(1)$, the value at criticality for the stiffness is rather smaller, $(K^{-1}\\cdot L)_c(v_4\/v_2 = 0) = 0.28(2)$. Finally, we discuss the scaling of the columnar susceptibility (see Fig.~\\ref{fig:results10} bottom), obtaining $\\eta = 0.25(3)$. The fact that $\\eta$ is large and positive for the strongly frustrated system is an unambiguous result of our study, as it is robust, for instance, against the set of sizes chosen to perform the FSS analysis. Such a strong $\\eta$ rules out the possibility of a simple universality class such as Ising or XY. Finally, we note that all the exponents differ considerably from those measured in Ref.~\\onlinecite{Alet3d}, implying a different type of transition. This can already be seen at the qualitative level, as the specific heat does not display any (strong) divergence.\n\n\\begin{figure}[h]\n \\begin{center}\n\\includegraphics*[width=7cm]{Figures\/Cv.pdf}\n\\includegraphics*[width=7cm]{Figures\/Crossings.pdf}\n\\includegraphics*[width=7cm]{Figures\/Sc.pdf}\n\\caption{ As a function of temperature, for $v_4\/v_2 = -10$. Top: Specific heat per site $C_v\/N$. The dashed line represents a linear interpolation of the regular part of the specific heat. Inset: Maximum of $C_v\/N$ as a function of system size. Middle: Binder cumulant and inverse stiffness crossings. Insets: Binder cumulant derivative (at $T=0.672$) and derivative $L\\cdot dK^{-1}\/dT$ (at $T = 0.6715$) as a function of system size (log-log scale). Bottom: Columnar susceptibility $\\chi$ per site. Inset: Maximum of $\\chi$ as a function of system size (log-log scale). All lines in insets denote power-law fits of critical exponents (see text for details).}\n\\label{fig:results10}\n\\end{center}\n\\end{figure}\n\n\\begin{center}\n\\begin{table*}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\n\\hline \n$v_4\/v_2$ & $T_c$ & $\\alpha\/\\nu$ & $\\nu_{\\rm Binder}$ & $\\nu_{\\rm Stiffness}$ &$\\eta$&$B_c$&$(K^{-1}\\cdot L)_c $\\\\\n\\hline \n-10 & $ 0.672(1) $&$0.4^*$& $0.63(4)$ & $0.63(4)$ & $0.25(3)$ & $1.28 - 1.30$ & $0.16 - 0.20$\\\\\n\\hline\n-1 & $ 0.953(1) $ &$0.35(10)$& $ 0.60(4) $& $ 0.61(4) $& $0.16(6)$ & $1.27 - 1.29$& $0.18 - 0.22$\\\\\n\\hline\n-0.2 & $ 1.508(1) $ &$0.80(15)$&$0.50(3)$ &$ 0.58(4)$ & $ -0.02(5)$&$1.26 - 1.28$&$0.23 - 0.27 $\\\\\n\\hline\n0 & $ 1.675(1) $&$1.11(15) $ & $0.51(3) $& $0.50(4)$ & $-0.02(5) $ & $ 1.26 - 1.28 $ & $0.26-0.29$ \\\\\n\\hline\n\\end{tabular}\n\\caption{Critical exponents for different frustrated coupling ratios. For the ratio $\\alpha\/\\nu$, the exponents have been measured by an FSS analysis using sizes comprised between $L_{\\rm min} = 32$ and $L_{\\rm max} = 140$. For the other exponents, we can limit ourselves to larger system sizes, between $L_{\\rm min} = 80$ and $L_{\\rm max} = 140$. In the last row, we recall the results for the pure plaquette model (taken from Ref.~\\onlinecite{Alet3d}). The symbol $^*$ denotes the case where a reliable determination of $\\alpha\/\\nu$ is impossible due to the strong contribution of the regular part of the specific heat (see Sec.~\\ref{sec:def}).}\n\\label{tab:tableau2} \n\\end{table*}\n\\end{center}\n\n\\subsubsection{Medium and weakly frustrated regime}\n\nWe repeated the same analysis for different values of $v_4\/v_2$ on the frustrated side, measuring the exponents $\\alpha$, $\\nu$ and $\\eta$. In particular, we carried out extensive simulations at $v_4\/v_2 = -1$ and $v_4\/v_2 = -0.2$. 
Results are summarized in Tab.~\\ref{tab:tableau2}. For $v_4\/v_2 = -1$, the exponent $\\nu$ is compatible with the one obtained for $v_4\/v_2 = -10$, and while the estimates of $\\eta$ are slightly different, the anomalous dimension is clearly non-zero in both cases. The ratio of exponents $\\alpha\/\\nu$ is again the hardest to determine reliably, due to the important contribution of the regular part in the specific heat. For $v_4\/v_2 = -1$, however, $C_v\/N$ has a clear diverging tendency and it is then easier to extract $\\alpha\/\\nu$ (see Fig.~\\ref{fig:Cv1.0}). We find $\\alpha\/\\nu \\sim 0.35$, a value in accordance with the rough estimate at $v_4\/v_2 = -10$ and with the hyperscaling relation $\\alpha = 2 - d\\nu$. \nIn any case, the sets of exponents we obtain for $v_4\/v_2 = -10$ and $v_4\/v_2 = -1$ are clearly different from the one obtained for the model with no cubic term. For $v_4\/v_2 = -0.2$, the values are on the contrary compatible with the ones obtained in Ref.~\\onlinecite{Alet3d}: $\\alpha \\simeq 0.5$ and $\\eta \\simeq 0$. Interestingly, we find two non-overlapping estimates of $\\nu$ from the Binder cumulant and the stiffness derivatives, which may indicate a possible crossover. As further information, we also give the values of the Binder cumulant and the stiffness at criticality. \n\n\\begin{figure}[h]\n \\begin{center}\n\\includegraphics*[width=7cm]{Figures\/Cv-1.pdf}\n\\caption{ Specific heat per site $C_v\/N$ at $v_4\/v_2 = -1.0$. The dashed line represents a linear interpolation of the regular part of the specific heat. Inset: Maximum of $C_v\/N$ as a function of system size. }\n\\label{fig:Cv1.0}\n\\end{center}\n\\end{figure}\n\nThe results on the frustrated regime raise some questions. Our results suggest that there are two different sets of critical exponents (and therefore universality classes) for the Coulomb-columnar phase transition in the extended dimer model: one for the highly frustrated regime, with $\\alpha \\sim 0.2$, $\\nu \\simeq 0.63$ and $\\eta \\sim 0.2$, and one close to the point $v_4 = 0$, where the exponents are close to those of an $O(N)$ tricritical universality class. Moreover, in the non frustrated regime, we have seen in Sec.~\\ref{sec:nonfrustrated} that the transition between Coulomb and columnar phases is clearly first order, at least when $v_4\/v_2 > 0.4$. A natural interpretation of these data is that there is a tricritical point for a value $v_4^*$ in the vicinity of $v_4 = 0$, separating a continuous transition in the frustrated case from a discontinuous one in the non-frustrated case. The presence of a tricritical point can influence the effective critical exponents measured for values of $v_4\/v_2$ in the vicinity of ${v_4^*\/v_2}$. Despite the large samples used in our simulations, we are not able to pinpoint the exact location of this tricritical point. In particular, there is no formal reason to believe that the tricritical point is located exactly at $v_4 = 0$. In the next section, we will show that the tricritical point should at least be located on the non frustrated side, that is, ${v_4^*\/v_2} \\geq 0$. \n\nAnother possible explanation that we cannot exclude {\\it a priori} is that the transition is always first order. In fact, by looking at the evolution of the different exponents from the non frustrated to the frustrated side, one can perfectly imagine that the correlation length grows continuously but remains \\textit{finite} at any $v_4\/v_2$. 
That would mean, in particular, that the exponents found in Ref.~\\onlinecite{Alet3d} for the pure plaquette model are artifacts caused by a very weakly first order transition. There are several examples of models where reports of unconventional continuous phase transitions have been made and which were finally found to be weakly first order. In order to rule out this scenario, we present in the next section a flowgram analysis combined with a study of histograms at very large system sizes. \n\n\\section{Flowgram analysis}\n \\label{sec:flow}\n\nThe flowgram method is an advanced version of the FSS analysis, which was proposed by Kuklov and coauthors~\\cite{Kuklov2}. Consider two points located on the critical line separating the Coulomb and columnar phases in Fig.~\\ref{fig:phasediag}. The method relies on demonstrating whether the large scale behavior for one given point is identical to that of the second point, where the nature of the transition can be easily determined. If this is true, then the two points are in the same critical regime and the nature of the transition remains unchanged all the way between the two points. On the contrary, a change in the nature of the transition must occur if this is not true. \n\nThe key elements of the method are to {\\it (i)} introduce a definition of the critical point for finite-size systems consistent with the thermodynamic limit and which is insensitive to the transition order and {\\it (ii)} compute a quantity $Q$ which is scale invariant at criticality, vanishes in one phase and diverges in the other. To define the operational critical temperature, we assume a finite probability of having a non-zero flux at criticality~\\cite{Kuklov1}:\n\\begin{equation}\n\\frac{P(\\mathbf{\\phi} = {\\bf 0})}{1- P(\\mathbf{\\phi} = {\\bf 0})} = A,\n\\end{equation}\nwhere $A$ is some constant whose exact value is not relevant for the rest of the method. In practice, we chose $A$ in such a way that it is close to typical values found at the transition for a large system size for $v_4\/v_2=-0.2$. We then consider $Q=L \\cdot K^{-1}$, as this product is indeed scale invariant for a second order transition, vanishes in the columnar phase (as we expect $K^{-1}$ to vanish exponentially with system size) and diverges in the Coulomb phase (as $K^{-1}$ is a constant). \n\nThe flows $Q_{v_4\/v_2}(L)$ for several values of $v_4\/v_2$ in the interval $[-0.6,0.7]$ are presented in Fig.~\\ref{fig:flowgram} (top). The flows can be roughly divided into two groups: for $v_4\/v_2 < 0.2$, the flows show a very slow divergence with the system size, while for $v_4\/v_2 > 0.2$, the flows are strongly diverging. The second group of flows is thus associated with the strongly discontinuous transition. At this stage we cannot draw any conclusion about the first group of curves, since it might be that all curves diverge and are actually connected by a scaling transformation. By this, we mean that we need to check whether there exists a renormalization function $g(v_4\/v_2)$ such that, plotted as a function of the renormalized length:\n\\begin{equation}\nL_{\\rm eff} = g(v_4\/v_2) L,\n\\end{equation}\nall the different flows collapse onto a single master curve:\n\\begin{equation}\nQ_{v_4\/v_2=a}(L_{\\rm eff}) = Q_{v_4\/v_2=b}(L_{\\rm eff}) \\; \\; \\forall \\; (a,b).\n\\end{equation}\n To search for such a transformation, we first try to collapse the flows two by two, starting from the largest positive values of $v_4\/v_2$. 
Fixing $g(v_4\/v_2 = 0.7) = 1$, we find the best numerical value for $g(v_4\/v_2 = 0.65)$ such that the corresponding flows collapse as a function of $L_{\\rm eff}$. We then proceed successively with the next two consecutive values of $v_4\/v_2$ and find the best factor for $g(v_4\/v_2 = 0.6)$. In this manner, we go through the whole parameter space, trying to collapse the curves two by two and finding the corresponding $g(v_4\/v_2)$, down to $v_4\/v_2 = -0.6$. If such an action is possible, and if the estimated function $g$ varies monotonically as a function of $v_4\/v_2$, then all the critical points in the interval $[-0.6,0.7]$ should refer to the same critical behavior. \n \n We have searched for such a scaling function $g$ and we have arrived at the unambiguous conclusion that it is \\textit{not possible} to perform a global collapse of the flows. In particular, the slow divergence of the flow for $v_4\/v_2 = -0.02$ is not compatible with the strong growth shown by the flows around $v_4\/v_2 = 0.6$. This is because the flows between these two intervals can hardly be collapsed with their neighbors (meaning that for any pair of consecutive flows in the interval $0 < v_4\/v_2 < 0.4$, we could not find a rescaling factor such that the two flows are superimposed). On the contrary, we have succeeded in performing two local collapses: one for the region $-0.6 \\leq v_4\/v_2 \\leq 0.02$, and the other for the region $ 0.4 \\leq v_4\/v_2 \\leq 0.7$ (see Fig.~\\ref{fig:flowgram} bottom). This is a clear indication of the presence of two different critical behaviors in the phase diagram. Because the collapse for $v_4\/v_2 > 0$ is strongly diverging, we naturally associate it with a first order transition. The collapse in the negative range of $v_4\/v_2$ most probably describes a second order regime. One can see that this collapse has actually not yet converged, as it ultimately should for a continuous transition. This is partly due to our original choice of the constant $A$, which made our operational critical temperature slightly above the real $T_c$: a small amount of non-zero stiffness remains present in the system and perturbs the convergence towards the fixed point behavior.\n\t \n\\begin{figure}[h]\n \\begin{center}\n\\includegraphics*[width=7cm]{Figures\/flowgram.pdf}\n\\includegraphics*[width=7cm]{Figures\/collapse1.pdf}\n\\caption{Flowgram (top) and performed collapses (bottom) for the extended dimer model. See text for details of the flowgram procedure.}\n\\label{fig:flowgram}\n\\end{center}\n\\end{figure}\n \n To further confirm our conclusion on the flowgram, we have performed the following check. Suppose that the points $v_4\/v_2 = -0.02$ and $v_4\/v_2 = 0.6$ nevertheless refer to the same critical regime. Then, there should exist a scaling function connecting the flows at $v_4 = -0.02$ and $v_4 = 0.6$. It is in fact possible to connect \\textit{very roughly} the two flows by renormalizing the length $L$ for $v_4\/v_2 = 0.6$ by a factor $g(0.6) = 8$ while leaving the flow for $v_4\/v_2 = -0.02$ unchanged (see Fig.~\\ref{fig:histo180}, left). If this rescaling is really a physical renormalization transformation, then the properties of the system at $v_4\/v_2 = -0.02$ and $L = 160$ should correspond to those at $v_4\/v_2 = 0.6$ and $L = 32$. For the latter value, we know in particular that the transition is first order, since the energy histogram displays a double peak (see Fig.~\\ref{fig:histos}). 
If the critical point $v_4=0$ is indeed in the same regime as the critical point at $v_4\/v_2=0.6$, we should expect to see a double peak also for samples of sizes $L\\geq 160$ at $v_4=0$. We have therefore carried out a simulation at $v_4 = 0$ with $L = 180$ and have then measured the histogram of energy at the maximum of the specific heat. The histogram displays a unique and well-defined peak (see Fig.~\\ref{fig:histo180} bottom right), which confirms that the transition at $v_4=0$ is continuous. Moreover, the value of the specific heat maximum is perfectly compatible with the exponents measured previously at $v_4 = 0$ (see Fig.~\\ref{fig:histo180} top right), confirming the validity of the exponent $\\alpha \/ \\nu \\simeq 1.11(15)$ found in Ref.~\\onlinecite{Alet3d}.\n \n \\begin{figure}[h]\n \\begin{center}\n\\includegraphics*[width=8cm]{Figures\/HistoL180.pdf}\n\\caption{Left: Flows at $v_4\/v_2 = -0.02$ and $v_4\/v_2 = 0.6$ at the rescaled length $L_{\\rm eff}$. Top right: Maximum of the specific heat for $v_4 = 0$ as a function of the system size. Data are taken from Ref.~\\onlinecite{Alet3d} except for the value at $L = 180$. The solid line is the power-law fit, giving rise to the estimate $\\alpha\/\\nu=1.15(15)$, compatible with the result of Ref.~\\onlinecite{Alet3d} obtained {\\it without} the knowledge of the $L=180$ point. Bottom right: Probability of energy per site at $v_4 = 0$ and $L = 180$.}\n\\label{fig:histo180}\n\\end{center}\n\\end{figure} \n\nTo conclude on this part, not only can we rule out the possibility that the regimes $v_4\/v_2 < 0$ and $v_4\/v_2 > 0$ are connected, but we can also state that the tricritical point is necessarily located at a value $v_4^*\/v_2 \\geq 0$. \n \n\\section{Discussion} \n \\label{sec:conclusion}\n In this study, we presented an extended version of the classical dimer model with plaquette and cubic interactions. Our simulations indicate that, depending on the sign of the interactions, the nature of the critical phase transition between the Coulomb and columnar phases changes. Compared to the pure plaquette model ($v_4\/v_2 = 0$), a cubic interaction which reinforces the alignment of dimers ($v_4\/v_2 > 0$) leads to a first order transition, identified by a double peak distribution of the energy and a strong divergence in thermodynamic quantities. Whether an infinitesimal positive cubic coupling is enough to alter the nature of the transition is not certain, as we lack any theoretical argument to support this, but simulations tend to indicate that this change should occur very close to $v_4 = 0$. On the other hand, when both interactions compete ($v_4\/v_2 < 0$), the finite size scaling analysis of thermodynamic quantities shows no sign of any discontinuity at the transition. When the cubic interaction is largely dominant ($v_4\/v_2 = -10$), the critical exponents deviate significantly from those measured in the pure plaquette model, with $\\alpha \\sim 0.2$, $\\nu \\sim 0.6$ and $\\eta \\sim 0.2$ in the first case and $\\alpha \\sim 0.5$, $\\nu \\simeq 0.5$ and $\\eta \\simeq 0$ in the latter. In the interval $-1 \\leq v_4\/v_2 \\leq 0$, we find exponents in between these two cases, probably due to a cross-over effect. To rule out the possibility of a weak first order transition on the frustrated side, we carried out a flowgram analysis in the vicinity of $v_4 = 0$. The flowgram clearly demonstrates the presence of two groups of flows, one corresponding to positive values of $v_4\/v_2$ and the other to negative values. 
This rules out the possibility of a connection between the two parts of the phase diagram, and thus reveals the presence of a tricritical point~\\cite{Kuklov2}. Finally, we also detected the presence of a new crystalline phase at low temperature, deep in the frustrated regime. This phase is characterized by a degeneracy growing with the system size. \n \nIn a parallel work, Papanikolaou and Betouras recently studied another extension of the dimer model that slightly differs from ours but leads to similar conclusions~\\cite{Papanikolaou2}. To perturb the pure plaquette model, these authors introduce further neighbor interactions between dimers which preserve the cubic symmetry. They find that when the extra couplings favor the columnar alignment, the transition becomes first order. Conversely, a frustrating coupling maintains the continuity of the transition. The critical exponents they measure are in agreement with ours in the limit of large frustration. Our study has the advantage of using much larger samples (the maximum system size used in Ref.~\\onlinecite{Papanikolaou2} is $L=32$), giving more confidence in the values of the exponents. Furthermore, the use of the flowgram analysis provides explicit evidence for tricriticality. In another related work, Chen {\\it et al.}~\\cite{Chen} find that by favoring one particular subset of the 6 columnar orderings at low temperature, the transition between the Coulomb and columnar phases becomes either first order or in the $3d$ $XY$ universality class. \n \n While not entirely conclusive, it is tempting to compare the exponents on the frustrated side of the dimer model to those of other models which also display unconventional phase transitions. In Ref.~\\onlinecite{Charrier}, a lattice gauge theory representing a coarse-grained version of the dimer model was simulated by Monte Carlo. There, the exponents were found to be very close to the $3d$ $XY$ universality class and thus differ from those presented here. Another source of information is given by the study of the $NCCP^1$ field theory. There exist several indications that the exponent $\\eta$ in the $NCCP^1$ theory should be large and positive~\\cite{Senthil}. For instance, the transition observed in the ring-exchange models studied in Ref.~\\onlinecite{JQ} provides an estimate of $\\eta \\sim 0.2 - 0.35$ (depending on which correlation function is looked at) and $\\nu \\sim 0.68(1)$. These values are not far from those we observed in the strong frustration regime. On the other hand, direct simulations of lattice versions of the $NCCP^1$ theory all exhibit a first order transition~\\cite{Kuklov1}. This possibility is ruled out in the dimer model, thanks to our results obtained with the flowgram analysis. \\\\\n \n\\section{Acknowledgments} \nWe gratefully thank L. Marty for her active participation at an early stage of this work. We also thank S. Trebst for useful discussions. This work was performed using HPC resources from GENCI-CCRT\/IDRIS (Grant 2009-100225). We also thank CALMIP for allocation of CPU time. We used the ALPS libraries~\\cite{ALPS} for the Monte Carlo simulations.\n \n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Conclusion}\\label{conclusion}\nA data fusion framework for detecting false command and measurement injections due to cyber intrusion is presented in this paper. 
To design an IDS that uses cyber and physical features, we aggregate features from cyber and physical sensors and align the data, then perform preprocessing techniques, followed by inter-domain fusion. \n\nOur results show that classifier performance improves by an average of 15-20\\% (based on F1-score) when cyber-physical features are considered instead of pure cyber features. Results also show that the performance improves by an average of 10-20\\% (based on F1-score) when labels from Snort are replaced by labels based on intrusion timestamps. From our evaluations of the IDS, we also find that scenarios with balanced and larger records result in better performance. Additionally, the co-training based semi-supervised learning technique, which is realistic for a real-world scenario, is found to perform similarly to supervised techniques, and even better by 2-5\\% (based on F1-score) with some classifiers. Among the unsupervised learning techniques, the k-means clustering technique is found to be the most robust and accurate. Moreover, training the classifier with the embeddings from manifold learning did not improve the accuracy. Hence, manifold learning should be considered only for visualization rather than relied upon for accuracy.\n\nWe believe our fused dataset and results provide one of the first publicly available studies with cyber and physical features, particularly for power systems, where the experimental data is collected from a testbed that contains both cyber and physical emulation. This benefits research in multi-disciplinary areas such as cyber-physical security and data science. \n\n\\section{Introduction}\nMulti-sensor data fusion is a widely known research area adopted in many fields including the military, medical science, and finance, as well as in the energy sector. Recently, automatic driving systems have widely used data fusion to fuse images and videos from similar or disparate sensor types~\\cite{auto_vehicle}. In power systems, most fusion applications are currently intra-domain and consider only physical data. Examples include fault detection~\\cite{pure_phy} and intrusion detection using Principal Component Analysis (PCA)~\\cite{pure_phy2}. Similarly, for network protection in industrial control systems (ICS), intrusion detection systems (IDS) such as Snort, Bro, Suricata, etc., are increasingly used. These offer a purely cyber-centric approach that results in high false alarm rates~\\cite{false_alarms}. Combining the benefits of visibility into both the cyber and physical domains, cross-domain data fusion has the potential to help methodically and accurately detect mis-operations and measurement tampering in power systems caused by cyber intrusions.\n\nIn power system operations, the telemetry used for collecting wide area measurements may have errors due to sensor damage or cyber-induced compromise; if undetected, applications that rely on these data can become unreliable and\/or untrustworthy. Sensor verification based on multi-source multi-domain measurement collection and fusion can be performed to solve such problems, and it is a valuable mechanism for detection and detailed forensics of cyber intrusions targeting physical impact. While offering numerous potential benefits, fusion for attack detection in real-world utility-scale power systems presents challenges that hinder adoption, including the creation, storage, processing, and analysis of the associated large datasets. 
Fortunately, with the proliferation of affordable computing capability for processing high-dimensional data, it is becoming more feasible to deploy fusion techniques for accurately detecting intrusions. Thus, research is needed to take advantage of these data and computing capabilities and create fusion-based detection techniques that solve this problem.\n\nCyberattacks often progress in multiple stages, e.g., initiating with a reconnaissance phase, executing intrusions and vulnerability exploitations, and culminating in actions targeting the physical system such as manipulating measurements and commands. \nThe events that comprise these incidents and provide forensics about what occurred are not reflected using only coarse cyber-side features.\nAdditionally, the system dynamics in both the cyber and physical spaces vary considerably; this causes challenges in merging data. For example, an intruder may take months in the reconnaissance phase, but during this period, none of the physical-side features reflect any abnormality. Similarly, later, when an intruder is injecting false commands or tampering with measurements, most of the cyber-side features do not reflect any abnormality, assuming the adversary is stealthy.\n\nSensor time resolution varies across domains and within domains, which causes challenges when merging the data. The resolution of physical measurements depends on polling rates as well as the specifications of the device. For example, phasor measurement units (PMUs) provide GPS-synchronized data at subsecond data rates, supervisory control and data acquisition (SCADA) systems provide data on a seconds-to-minutes time frame, and smart meters deployed residentially may have hourly resolution~\\cite{pmustandard}. Relays monitoring system transients have resolution on the order of milliseconds. Similarly, network logs and IDS such as Snort have resolution of milliseconds. Data fusion solutions for cyber-physical power systems must be able to effectively handle the range of time scales.\n\nThe use of machine learning (ML) and deep learning (DL) for intrusion detection faces the problem that the trained model's effectiveness depends on the data collected; it is a challenge to obtain a realistic baseline and to use realistic data to validate the solution for a real-time cyber-physical system. Detection is affected by the choice of data processing techniques applied (e.g., balancing, scaling, encoding). The impact of such factors on detection accuracy must therefore be quantified before the techniques can be trusted for use in securing critical infrastructure.\n\nThe hypothesis of this work is that the use of fused data from cyber and physical domains can enable better attack detection performance than either domain separately, if the challenges above are addressed. Hence, we present a heterogeneous-source platform that fuses data and detects cyber intrusions. First, we provide interfaces for collecting data sources from cyber and physical side emulators. Then, we use these interfaces to collect real-time data from cyber, physical, and security domains; finally, we fuse the datasets and detect cyber intrusions. 
We aggregate and merge real-time sensor data from multiple sources including Elasticsearch~\\cite{elasticsearch}, TShark~\\cite{tshark}, raw packet captures with DNP3 traffic, and Snort logs~\\cite{snort_cookbook} that are created during emulation of Man-in-the-Middle attacks on a synthetic electric grid, modeled in the Resilient Energy Systems Laboratory (RESLab) testbed~\\cite{Sahu2020}.\nFig. \\ref{fig:flowchart} gives an overview of the multi-source data fusion presented. The major contributions of this paper are as follows: \n\n\\begin{figure}\n \n\\smartdiagramset{text width=4.5cm,\n}\n\\begin{center}\n\\smartdiagramset{module shape=rectangle,\nuniform arrow color=true,\narrow color=gray!50!black,\nback arrow disabled=true,\n}\n\\smartdiagramset{\nuniform color list=white!90!black for 6 items}\n\\smartdiagram[flow diagram]{Set up testbed architecture in RESLab,MiTM attacks in synthetic electric grid in RESLab,{Collect sensor data from multiple sources}, Integration of data aggregator for merging real-time data,Data pre-processing techniques,\nCompare performance of models from fused dataset with other IDS models}\n\\end{center}\n\\caption{Multi-source data fusion steps}\\label{fig:flowchart}\n\\end{figure}\n\n\n\\begin{enumerate}\n \\item To present the aggregation and merging of real-time sensor data from multiple sources for cyberattack detection in a cyber-physical testbed emulation of a synthetic electric grid.\n \\item To quantify the value of different data pre-processing techniques such as balancing, normalization, encoding, imputation, feature reduction, and correlation before training the machine learning models.\n \\item To demonstrate the improved detection capability of models built from the fused dataset by comparing their performance with intrusion detection models based on pure cyber or pure physical features. \n \\item To evaluate the performance of supervised, unsupervised, and semi-supervised learning based intrusion detection for the use cases explored in the MiTM attacks.\n\\end{enumerate}\n\n\nThe paper proceeds as follows. Section~\\ref{background} provides background on data fusion techniques incorporated in areas such as military, healthcare, software firms, security, and cyber-physical systems. In Section~\\ref{architecture}, we discuss the RESLab architecture, the attack types considered, and the data fusion procedure. The details on the data sources, the dataset transformations, and the data fusion types used in this work are presented in Sections~\\ref{dataset}, \\ref{data_transformation}, and \\ref{fusion_types}, respectively.\nIntrusion detection based on unsupervised, supervised, and semi-supervised learning methods is presented in Section~\\ref{ids}.\nExperiments are performed for four use cases, and results are analyzed in Section~\\ref{results}; Section~\\ref{conclusion} concludes the paper.\n\\section{Data Fusion Background}\\label{background}\n\\subsection{Multi-Sensor Data Fusion}\nThe goal of multi-sensor data fusion is to make better inferences than those that could be drawn from a single source or sensor.
According to \\textit{Mathematical Techniques in Multisensor Data Fusion}~\\cite{hall_book}, multi-sensor data fusion is defined as \\textit{``a technique concerned with the problem of how to combine data from multiple (and possibly diverse) sensors in order to make inferences about a physical event, activity, or situation.''} A data fusion process is modeled in three ways: a) functional, b) architectural, and c) mathematical~\\cite{hall_book}. A functional model illustrates the primary functions, relevant databases, and inter-connectivity required to perform fusion. It primarily involves filtering, database creation, and pre-processing such as scaling, encoding, etc. An architectural model specifies the hardware and software components, associated data flows, and external interfaces~\\cite{extra_fusion_ref}. For example, it models the location of the fusion tool in a testbed. The fusion architecture can be of three types: centralized, autonomous, and hybrid~\\cite{hall_book}. In centralized architectures, either raw or derived data from multiple sensors are fused before they are fed into a classifier or state estimator. In autonomous architectures, the extracted features are fed to the classifiers or estimators for decision making before they are fused. The fusion techniques used in the second case include Bayesian inference~\\cite{intro1} and Dempster-Shafer inference~\\cite{zadeh}, because these fusion algorithms are fed with the probability distributions computed by the classifiers or the estimators. The hybrid type mixes both centralized and autonomous architectures. The mathematical model describes the algorithms and logical processes.\n\n\\begin{figure*} [!htb]\n \\centering\n \\vspace{-0.2cm}\n \\includegraphics[width=0.8\\linewidth]{Figures\/fusion_architecture_v2.JPG}\n \\vspace{-0.35cm}\n \\caption{Centralized fusion architecture. In the autonomous architecture, the Fusion and Learning blocks are interchanged, with the addition of another Learning block after fusion. \n }\n \\label{fig:architecture_types}\n\\end{figure*}\n\nA holistic data fusion method must consist of all three models: functional, architectural, and mathematical. The functional model defines the objective of the fusion. Since the goal of this work is to detect intrusions, we must determine which data are due to cyber compromise. Functional goals may also include estimating the position of the intruder in the system or estimating the state of an electric grid, where the preprocessing techniques to use vary based on the goal. The architectural model defines the sequence of operations. Our fusion follows the centralized architecture. Finally, the mathematical model defines how the features are processed and merged. Section \\ref{dataset} details our fusion models. \n\n\n\\subsection{Multi-Sensor Fusion Applications}\nRecently, multi-sensor fusion has been adopted in the areas of computer vision and autonomous vehicle communication, and it is entering the area of power systems. The authors in~\\cite{ref3} review multi-sensor data fusion technology, including the benefits and challenges of different methods. The challenges are related to data imperfection, outliers, modality, correlation, dimensionality, operational timing, inconsistencies, etc. For example, different time resolutions of sensors result in under-sampling or over-sampling of data from some sensors. The response time of certain sensors also varies depending on the sensor age and type.
Data received from multiple sensors must be transformed to a common spatial and temporal reference frame~\\cite{hall_book}. Imperfection is dealt with using fuzzy set theory, rough set theory, Dempster-Shafer theory, etc. \n\nMulti-sensor data fusion is used in military applications for automated target recognition, battlefield surveillance, and guidance and control of autonomous vehicles~\\cite{hall}. Further, the idea has been expanded to non-defense areas such as medical diagnosis, smart buildings, and automatic vehicular communications~\\cite{multi2}. The authors in~\\cite{ref4} explore techniques in multi-sensor satellite image fusion to obtain better inferences regarding weather and pollution. Data fusion has also been proposed to accurately detect energy theft from multiple sensors in the advanced metering infrastructure (AMI) of power distribution systems~\\cite{multi}.\n\nData fusion is expanded in~\\cite{ref1} from cyber-physical systems (CPSs) to cyber-physical-social systems (CPSSs) with the use of tensors. Algorithms proposed for mining heterogeneous information networks cannot be directly applied to cross-domain data fusion problems; fusion of the knowledge extracted from each dataset gives better results~\\cite{cross_fusion}.\n\n\n\\subsection{Data Fusion in Power Systems}\nData from diverse domains play a major role in power system operation and control. Weather data is vital for forecasting, e.g., for solar, wind, and load, to schedule generation. Data in cyberspace include the data that provide for automation in power system ICS and play a crucial role in wide area control and operation of the electric grid. However, to proceed with multi-domain fusion, the following question must first be answered: To what measurable quantities do \\textit{cyber data} and \\textit{physical data} refer? \n\nA simple example of \\textit{cyber data} in ICS is a spool log of a network printer in the control network. It is crucial to ask: could we have prevented the attack on the centrifuges in the Natanz uranium enrichment plant if we had had a logger recording the events of the machine with the shared printer, so as to prevent the exploitation of remote code execution on that machine? The answer is \\textit{no}, because there were many other vulnerabilities, such as the WinCC DB exploit, the network share, and the server service vulnerability, in parallel to the print server vulnerability, that compromised the Web Navigation Server, which was connected to the Engineering Station that configured the S7-315 PLCs that over-sped the centrifuges~\\cite{stuxnet}. Hence, the deployment of cyber telemetry on every computing node in an ICS network is a solution that seems attractive but results in numerous false alarms. The question then arises: can we reduce such alerts by amalgamating these data with data from physical sensors? \n\nData fusion approaches proposed in the area of power systems are mainly intra-domain. Existing works do not consider the fusion of cyber and physical attributes together for intrusion detection. A probabilistic graphical model based power systems data fusion is proposed in~\\cite{pure_phy3}, where the state variables are estimated from the measurements of heterogeneous sources by belief propagation using factor graphs. These probabilistic models require knowledge of the priors of the state variables and also assume the measurements to be trustworthy. Hence, such solutions cannot detect cyber-induced stealthy false data injection attacks.
Several works on false data injection detection are based on machine learning~\\cite{ml_power_1,ml_power_2,ml_power_3,ml_power_4} and deep learning~\\cite{dl_power_1,dl_power_2,dl_power_3,arnav_rnn_lstm,a3d,paved} techniques. The authors in~\\cite{pure_phy4} address stealthy attacks using multi-dimensional data fusion, in which information collected from the power consumption of physical devices, control operations, and system states is fed to a cascade detection algorithm that identifies stealthy attacks using Long Short-Term Memory (LSTM) networks. Machine learning techniques including clustering are used in power system security for grouping similar operating states (emergency, alert, normal, etc.) to automatically identify the subset of attributes relevant for prediction of the security class. A decision tree based transient stability assessment of the Hydro-Quebec system is presented in~\\cite{ml_power_sys}. Techniques of fusion for fault detection~\\cite{pure_phy} and real-time intrusion detection using Principal Component Analysis (PCA)~\\cite{pure_phy2} are specific to the physical domain. The design of such models requires data fusion and must consider impending system instabilities that can be caused by cyber intrusions.\n\nThe Cymbiote~\\cite{cymbiote} multi-source sensor fusion platform is one work, comparable to ours, that has leveraged fusion from multiple cyber and physical streams, but it trains only supervised learning based IDS. Moreover, that work does not clearly describe the features extracted from the different sources.\n\n\\subsection{Multi-Domain Fusion Techniques}\n\nTechniques such as co-training, multiple kernel learning, and subspace learning are used for data fusion problems. Co-training based algorithms~\\cite{co_training} maximize the mutual agreement between two distinct views of the data. This technique is used in fault detection and classification in transmission and distribution systems~\\cite{cotrain_1} as well as in network traffic classification~\\cite{cotrain_2}. \nTo improve learning accuracy, multiple kernel learning algorithms~\\cite{mkl} are also considered; these utilize kernels that implicitly represent different views and combine them linearly or non-linearly. Subspace learning algorithms~\\cite{subspace_learning} aim to obtain a latent subspace shared by multiple views, assuming that the input views are generated from this latent subspace.\nDISMUTE~\\cite{dismute} performs feature selection for multi-view cross-domain learning. Multi-view discriminant transfer (MDT)~\\cite{mv_transfer_learning} learns discriminant weight vectors for each view to minimize the domain discrepancy and the view disagreement simultaneously. These techniques can be used for cross-domain data fusion. \n\nCoupled matrix factorization and manifold alignment methods are used for similarity based data fusion~\\cite{cross_fusion}. These methods can be implemented intra-domain with multiple data sources. Manifold alignment is another technique that generates projections between disparate data sources, but it assumes that the generating processes share a common manifold. \nSince the primary goal of this work is inter-domain dataset fusion, such methods may not be effective enough. Still, we have explored manifold learning for the purpose of feature reduction in training the supervised learning based classifiers.\n\nTo the best of our knowledge, co-training has not yet been implemented in an intrusion detection system that uses inter-domain fusion.
Hence, in this work, we perform co-training on inter-domain fused datasets by splitting the dataset into cyber and physical views. \n\n\n\\subsection{Data Creation, Storage, and Retrieval}\n\nThe storage and retrieval of multi-sensor data play a major role in fusion and learning. A relational database management system (RDBMS) is predominantly used in traditional EMS applications. For example, B.C. Hydro proposes a data exchange interface in a legacy EMS and populates a relational database with the schema of the Common Information Model (CIM) defined in IEC 61970~\\cite{rdbms}. With the proliferation of multiple protocols and data from diverse sources, it is difficult to construct the entity relationship model of an RDBMS, since the schema cannot be fixed. Since NoSQL databases store unstructured or semi-structured data, usually as key-value pairs or JSON documents, the use of NoSQL databases such as Elasticsearch~\\cite{elasticsearch}, MongoDB~\\cite{mongodb}, Cassandra~\\cite{cassandra}, etc., is highly encouraged for multi-sensor fusion with heterogeneous sources (see the sketch below).\n\nThe creation of multi-domain datasets to advance this research is a challenging task, since it requires the development of a cyber-physical testbed that processes real-time traffic from different simulators, emulators, hardware, and software. Currently, few datasets are publicly available that provide features from diverse domains and sources. Most of the datasets are simulator-specific, which restricts the domain to either pure physical or pure cyber. The widely-known KDD~\\cite{kdd} and CIDDS~\\cite{cidds} datasets used in developing ML-based IDS for bad traffic detection and attack classification are centric to features in the cyber domain~\\cite{ieee_cqr}. Tools such as MATPOWER~\\cite{matpower} and pandapower~\\cite{pandapower} provide datasets for physical-side bad data detection. Datasets that include measurements related to electric transmission systems, including normal, disturbance, control, and cyberattack behaviors, are presented in~\\cite{dataset0,dataset1,dataset2,dataset3}. These datasets contain phasor measurement unit (PMU) measurements, data logs from Snort, and also data from a gas pipeline and water storage tank plant. The features in these datasets lack fine-grained details in the cyber, relay, and control spaces, as all the features are binary in nature. A cyber-physical dataset is presented in~\\cite{dataset4} for a subsystem consisting of liquid containers for fuel or water, with its automated control and data acquisition infrastructure, covering 15 real-world scenarios; while it presents a useful way of framing the data fusion problem and approaches for cyber-physical systems (CPS), it is not power system specific. \n\nA problem in training machine learning (ML) or deep learning (DL) models for intrusion detection through classification, clustering, and fine-tuning of hyper-parameters is that their effectiveness depends on the data collected. That is, a practical challenge is to obtain a baseline, which needs to come from realistic data. Emulation is preferred to simulation for CPS networks, since a simulator demonstrates a network's behavior while an emulator functionally replicates its behavior and produces real data. Using real data is important to validate that ML or DL solutions address the actual challenges present in the data from a real-time cyber-physical system.
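\n\nFor illustration, a minimal sketch of indexing one fused record into Elasticsearch with the official Python client is shown below; the endpoint, index name, field names, and values are our own hypothetical choices (assuming the elasticsearch-py client), not the exact RESLab schema.\n\n\\begin{lstlisting}[language=Python,numbers=none]\nfrom elasticsearch import Elasticsearch\n\n# Hypothetical local instance; RESLab uses the ELK stack similarly.\nes = Elasticsearch(\"http:\/\/localhost:9200\")\n\n# One fused cyber-physical record as a schemaless JSON document.\nrecord = {\"@timestamp\": \"2020-01-22T10:15:30.000Z\",\n          \"ip_src\": \"172.16.0.2\",\n          \"tcp_len\": 42,\n          \"snort_alert\": 0,\n          \"dnp3_func_code\": 129,\n          \"branch_status\": 1}\n\n# Index (insert) the document; no fixed schema is required up front.\nes.index(index=\"fused-cp-records\", body=record)\n\\end{lstlisting}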
\n\nThe performance of ML and DL models is impacted by the choice of data processing techniques, such as balancing, scaling, or encoding, applied to the inputs before training the models. The effect of these preprocessing techniques needs to be quantified on the outputs of such ML models before they can be trusted for use in industry.\n\n\n\\section{Data Fusion Architecture}\\label{architecture}\nBefore discussing the data fusion procedures, it is essential to understand the architecture of the RESLab testbed that produces the data during emulation of the system under study. \n\n\\subsection{Testbed Architecture}\n\nThe RESLab testbed consists of a network emulator, a power system emulator, an OpenDNP3 master and an RTAC based master, an intrusion detection system, and data storage, fusion, and visualization software. A brief overview of each component is given below. A detailed explanation of RESLab, including its architecture and use cases, is provided in~\\cite{Sahu2020}.\n\\begin{itemize}\n\n \\item \\emph{Network Emulator} - The Common Open Research Emulator (CORE) is used to emulate the communication network, which consists of routers, Linux servers, switches, firewalls, IDSes, and bridges, with other components emulated in other virtual machines (VMs) in a vSphere environment. \n \n \\item \\emph{Power Emulator} - PowerWorld Dynamic Studio (PWDS) is a real-time simulation engine for operating the simulated power system case in real-time as a DS server~\\cite{powerworld}. It is used to simulate the substations in the Texas 2000 case as DNP3 outstations~\\cite{synthetic_comm}.\n\n \\item \\emph{DNP3 Master} - DNP3 masters are incorporated using an OpenDNP3 based application (both GUI and console based) and a SEL-3530 Real-Time Automation Controller (RTAC), which poll measurements and operate outstations, sending their traffic through CORE to the emulated outstations in PowerWorld DS. \n\n \\item \\emph{Intrusion Detection System} - Snort is used in the testbed as the rule-based, open-source intrusion detection system (IDS). It is configured to generate alerts for Denial of Service (DoS), MiTM, and ARP cache poisoning based attacks. Currently, Snort runs as a network IDS on the router in the substation network.\n \n \\item \\emph{Storage and Visualization} - The Elasticsearch, Logstash, and Kibana (ELK) stack is used to probe and store all virtual and physical network interface traffic. In addition to storing all Snort alerts generated during each use case, this data can be queried using Lucene queries to perform in-depth visualization and cyber data correlation.\n \n \\item \\emph{Data Fusion} - A separate VM is dedicated to operating the fusion engine, which collects network logs and Snort alerts from the ELK stack using an Elasticsearch client, and raw packet captures from CORE using pyshark. This engine constructs cyber and physical features and merges them using the time stamps from the different sources to ensure correct alignment of information. Further, it pre-processes the data using imputation, scaling, and encoding before training intrusion detection models with supervised, unsupervised, and semi-supervised learning techniques. This VM is equipped with resources to utilize ML and DL libraries such as scikit-learn, TensorFlow, and Keras to train the engine for classification, clustering, and inference problems.
\n \n\\end{itemize}\n\n\n\\begin{figure} [h]\n \\centering\n \\vspace{-0.2cm}\n \\includegraphics[width=1.0\\linewidth]{Figures\/testbed_architecture_fusion.JPG}\n \\vspace{-0.35cm}\n \\caption{Testbed architecture with data fusion}\n \\label{fig:architecture}\n\\end{figure}\n\nThree broad kinds of IDS can be considered for ICS: protocol analysis based IDS, traffic mining based IDS, and control process based IDS~\\cite{pure_cy_pure_phy}. The fusion engine in RESLab combines all these types: it performs protocol-specific feature extraction from the data link, network, and transport layers along with the DNP3 layer; extracts control and measurement specific information from the DNP3 payloads and headers; and performs traffic mining by extracting network logs from multiple sources.\n\n\n\\subsection{Attack Experiments}\n\nNow that we have discussed the architecture of the testbed, we delve further into how this testbed is utilized to demonstrate a few cyber attacks targeting grid operation. The threat model we consider is based on emulating multi-stage attacks in a large-scale power system communication network. In the initial stage, the adversary gains access to the substation LAN through Secure Shell (SSH) access, further performing DoS and ARP cache poisoning based MiTM attacks to cause False Data Injections (FDI) and False Command Injections (FCI).\n\n\n\nIn Man-in-the-Middle attacks, the adversary usually secretly observes the communication between sender and receiver and sometimes manipulates the traffic between both ends. There are different ways to perform MiTM attacks, such as IP spoofing, ARP spoofing, DNS spoofing, HTTPS spoofing, SSL hijacking, stealing browser cookies, etc. In the current work, we focus on MiTM using ARP spoofing. ARP spoofing or poisoning is a type of attack in which an adversary sends false ARP messages over a local area network (LAN). This results in the linking of the adversary's MAC address with the IP address of a legitimate machine on the network (in our case, the outstation VM). This attack enables the adversary to receive packets from the master as an impersonator of the outstation and to modify commands and forward them to the outstation. In this way, the adversary can cause contingencies such as misoperation of the breakers. The attack is used not only to modify traffic but also to sniff the current state of the system, since the adversary can receive the outstation's responses to the master. \n\nThe MiTM attacks are performed considering four use cases targeting different parts of the Texas synthetic grid following different strategies, presented in detail in~\\cite{Sahu2020}. The use cases are combinations of FDI and FCI attacks performed with different polling rates from the DNP3 master and different numbers of master applications. In our previous work, we demonstrated Snort IDS based detection, which resulted in many false positives. In this work, we employ fusion techniques, along with machine learning techniques, to enhance the accuracy of detection, evaluating them using F1-score, Recall, and Precision values.\n\n\n\n\n\\subsection{Data Fusion Procedure}\nThe steps followed in the data fusion engine are presented in Alg.~\\ref{alg:data_fusion}: features are extracted from the different sources; the pyshark, Snort, Packetbeat, and raw packet capture data are merged to form the cyber table; the cyber and physical tables are fused; and imputation, encoding, and visualization are performed. The details of the sensor sources and the data processing will be discussed in detail in the following sections.
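\n\nAs a minimal illustration of the time-stamp driven fusion in steps 4-10 of Alg.~\\ref{alg:data_fusion}, the sketch below aligns a toy physical table with a toy cyber table using pandas; the column names are our own illustration, and the exact interval-overlap conditions used by the engine are given in Section~\\ref{dataset}.\n\n\\begin{lstlisting}[language=Python,numbers=none]\nimport pandas as pd\n\n# Toy cyber-side and physical-side tables with different time stamps\n# (column names are illustrative, not the exact RESLab schema).\ncb_table = pd.DataFrame({\n    \"Time\": pd.to_datetime([\"2020-01-22 10:00:00.100\",\n                            \"2020-01-22 10:00:00.350\",\n                            \"2020-01-22 10:00:01.020\"]),\n    \"tcp_len\": [60, 1500, 48]})\nphy_table = pd.DataFrame({\n    \"Time\": pd.to_datetime([\"2020-01-22 10:00:00.300\",\n                            \"2020-01-22 10:00:01.000\"]),\n    \"branch_status\": [1, 0]})\n\n# Align each physical record with the most recent cyber record at or\n# before its time stamp (both tables must be sorted by Time).\nfused = pd.merge_asof(phy_table, cb_table, on=\"Time\",\n                      direction=\"backward\")\nprint(fused)\n\\end{lstlisting}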
\n\n\\begin{algorithm}[h]\n\\begin{small}\n \\caption{Data Fusion Procedure}\\label{alg:data_fusion}\n\t\\begin{algorithmic}\n\t\\State{1. Load JSON from raw pcaps.}\n\t\\State{2. Extract cyber features: network, transport, and data link layer information, and store as raw cyber data.}\n\t\\State{3. Extract features using pyshark.}\n\t\\State{4. Merge pyshark features into the raw cyber data.}\n\t\\State{5. Extract Snort alerts.}\n\t\\State{6. Merge Snort alerts into the raw cyber data.}\n\t\\State{7. Extract features from the Packetbeat index in Elasticsearch.}\n\t\\State{8. Merge Packetbeat features into the raw cyber data.}\n\t\\State{9. Extract DNP3 features (DNP3 points and headers) from the raw packet capture.}\n\t\\State{10. Fuse the cyber data with the physical data.}\n\t\\State{11. Impute missing values.}\n\t\\State{12. Encode categorical features.}\n\t\\State{13. Visualize the merged table.}\n \\end{algorithmic}\n \\end{small}\n\\end{algorithm}\n\n\\subsection{Fusion Challenges}\nThe most challenging task in fusion is to perform the merge operations, because of the different time stamps generated at the different sensors. An event triggers time-stamped measurements at the sensors; hence, each sensor's location impacts the time at which the event is recorded. Domain knowledge has been used to write the algorithm that merges the different sources meticulously. For example, Elasticsearch's Packetbeat index stores records that each reflect the traffic within a given small time interval. Each record has an event start and end time. While merging Elasticsearch features, such as the flow count attribute, we have to compare the raw packet timestamp with the event start and end times from Elasticsearch to calculate the flow counts.\nMoreover, the number of records on the power system side will be smaller than on the cyber side, as events on the power system side are triggered based on the polling frequency as well as on the time at which an operator performs a control operation.\nHence, we remove missing data for the records that do not have any physical-side traffic.\n\n\\section{Multi Sensor Data}\\label{dataset}\nA sensor's data is the output or readings of a device that detects and responds to changes in the physical environment. Every sensor has a unique purpose that helps create crucial features that can assist in intrusion detection. In RESLab, the cyber sensors are deployed as Wireshark instances at different locations in the network for raw packet capture. Additionally, monitoring tools such as Packetbeat are integrated for extracting network flow-based information. For security sensors, Snort IDS logs and alerts are considered. Since the physical system is emulated with PWDS acting as a collection of DNP3 outstations, the real-time readings provided by the physical sensors are extracted from the measurements observed at the DNP3 master, from the application layer of the raw packets captured at the DNP3 master. The extraction of data from these multiple sensors is explained in detail below.\n\n\\subsection{Raw pcaps from JSON}\nThe packet captures from Wireshark are dissected and saved in JSON format, which is loaded into a pandas data frame. Further, from the JSON, around 12 features from the physical, data link, network, and transport layers of the OSI stack are extracted, as shown in Table~\\ref{table:fusion_features}. The features primarily consist of the source and destination IP and MAC addresses, along with the port numbers, flags, and lengths in these layers.
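\n\nA minimal sketch of this loading and extraction step is given below, assuming a JSON export produced by tshark; the field paths are standard Wireshark dissector names, and the exact feature set used is the one listed in Table~\\ref{table:fusion_features}.\n\n\\begin{lstlisting}[language=Python,numbers=none]\nimport json\nimport pandas as pd\n\n# Load a tshark JSON export, e.g. produced by:\n#   tshark -r capture.pcap -T json > capture.json\nwith open(\"capture.json\") as f:\n    packets = json.load(f)\n\nrows = []\nfor pkt in packets:\n    layers = pkt[\"_source\"][\"layers\"]\n    rows.append({\n        \"frame_len\": layers[\"frame\"][\"frame.len\"],\n        \"eth_src\": layers[\"eth\"][\"eth.src\"],\n        \"eth_dst\": layers[\"eth\"][\"eth.dst\"],\n        \"ip_src\": layers.get(\"ip\", {}).get(\"ip.src\"),\n        \"ip_dst\": layers.get(\"ip\", {}).get(\"ip.dst\"),\n        \"src_port\": layers.get(\"tcp\", {}).get(\"tcp.srcport\"),\n        \"tcp_flags\": layers.get(\"tcp\", {}).get(\"tcp.flags\")})\n\ncb_table = pd.DataFrame(rows)  # raw cyber feature table\n\\end{lstlisting}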
\n\n\\subsection{Elasticsearch}\nReal-time traffic collection is performed from network interfaces in CORE, using the Packetbeat plugin in the ELK stack. The Packetbeat plugin helps us extract flow-based information such as \\textit{Flow Count}, \\textit{Flow Count Final}, and \\textit{Packets}, shown in Table~\\ref{table:fusion_features}.\nElasticsearch queries are based on Lucene, the search library from Apache. Kibana is used for graphing and real-time data visualization of the Packetbeat index. An example query is shown below:\n\\begin{lstlisting}[language=json,numbers=none]\n\"query\": {\n \"bool\": {\n \"must\": [\n { \"range\": {\n \"event.end\": {\n \"gte\": \"2020-01-22T00:00:00.000Z\",\n \"lte\": \"2020-01-26T00:00:00.000Z\"}}\n },\n {\"range\": {\n \"event.duration\": {\n \"gte\": 0,\n \"lte\": 3000000}}\n },\n {\"bool\": \n {\"should\": [\n {\"match\": {\n \"destination.port\": \"20000\"}},\n {\"match\": {\n \"source.port\": \"20000\"}}\n ]\n }\n },\n {\"match\":\n {\"flow.final\": \"true\"}\n } \n ]}}\n\\end{lstlisting}\n\nThe above query returns the records with event start time $2020-01-22T00:00:00.000Z$ and end time $2020-01-26T00:00:00.000Z$, where the event duration is within $0-3000000$ ms, the source or destination port is $20000$ (the port number associated with DNP3), and the flow is a $final$ flow. The keyword $must$ designates an $AND$ operation, $should$ an $OR$ operation, and $match$ an $equals\\: to$ operation.\nA Logstash index is also created in Elasticsearch to store the logs of Snort alerts, which is extracted along with the Packetbeat index.\n\nThere are two operations on the response from Elasticsearch: a) $extraction$ of essential features, and b) $merge$ of the features into the existing cyber features data frame $cb\\_table$ built from the raw packet captures. \nEach record in the Packetbeat index is stored in the form of an event with start and end times. In the extraction phase, we extract the $source.packets$, $flow.id$, $flow.final$, $event.end$, $event.start$, and $flow.duration$ features and store them in a new data frame $pb\\_table$.\nThe merge operation of $pb\\_table$ into the existing cyber features is non-trivial due to the different timestamps of the existing features and the features from Packetbeat.
We compute the features $flow.count$, $flow.final\\_count$, and $packets$ in Table~\\ref{table:fusion_features} using the features $event.end$ ($end$) and $event.start$ ($start$) in $pb\\_table$ and $Time$ in $cb\\_table$, based on the logical OR of three conditions:\n\\begin{enumerate}\n \\item $Condition\\:1:$ add to the counters if the event start is within the range of the current and next records in the $cb\\_table$:\n \\begin{equation}\n cb\\_table[i][t] \\leq start \\land cb\\_table[i+1][t] \\geq start\n \\end{equation}\n \\item $Condition\\:2:$ if the event end is within the range of the current and next records in the $cb\\_table$:\n \\begin{equation}\n cb\\_table[i][t] \\leq end \\land cb\\_table[i+1][t] \\geq end\n \\end{equation}\n \\item $Condition\\:3:$ if the event start is before the current record and the event end is after the next record in the $cb\\_table$:\n \\begin{equation}\n cb\\_table[i][t] \\geq start \\land cb\\_table[i+1][t] \\leq end\n \\end{equation}\n\\end{enumerate}\n\n\nThe $\\lor$ and $\\land$ are the logical $or$ and $and$ operators, respectively.\nIn this manner, we merge the three features from $pb\\_table$ into $cb\\_table$. \n\n\n\\subsection{Pyshark}\nPyshark is a Python wrapper for tshark, allowing packet parsing in Python using the Wireshark dissectors.\nUsing Pyshark, features such as $Retransmissions$ and Round Trip Time ($RTT$) are obtained. The RTT is the time duration for a signal or message to be sent plus the time it takes for the acknowledgment of that signal to be received. It has been observed that if congestion is created at any location between the source and destination, such as a router or switch, the RTT increases. It also increases due to DoS attacks on the servers or any intermediary nodes in the path between source and destination. TCP-based packets follow different retransmission policies depending on the TCP congestion control flavor. Hence, the number of $retransmission$ packets observed within a given time frame is an indicator of loss of communication or increased delay. Usually, a sender retransmits a request if it does not receive an acknowledgment after some multiple of the $RTT$, where the multiplier depends on the TCP flavor. The $retransmission$ and $RTT$ features are selected, as these features are correlated with and directly related to attacks targeting availability and integrity.\n\n\n\n\\subsection{Snort}\nThe router inside the CORE emulator runs the Snort daemon based on the specific rules, pre-processors, and decoders enabled in the configuration file to create logs. \nSnort operates in three modes: packet sniffer, packet logger, and intrusion detection system (IDS) modes. We run Snort in the IDS mode.\nThe alerts generated at the router in the substation network are continuously probed during the simulation. The alerts are recorded in the $unified2$ format as well as pushed to the Logstash index created in Elasticsearch.\nUnified2 works in three modes: packet logging, alert logging, and true unified logging. We run Snort in alert logging mode to capture the alerts, timestamped with the alert time. Further, the $idstools$ python package is utilized to extract these $unified2$ formatted logs. The Snort configuration determines which rules and preprocessors are enabled. The features extracted are the $alert$, $alert\\_type$, and $timestamp$. The merge into the $cb\\_table$ is performed based on the $timestamp$ of each Snort record.
The record is inserted based on the condition:\n\\begin{equation}\n cb\\_table[i][t] \\leq timestamp \\leq cb\\_table[i+1][t]\n\\end{equation}\n\n\n\n\\subsection{Physical Features from DNP3}\n\n The Distributed Network Protocol version 3 (DNP3) is widely used in SCADA systems for monitoring and control. This protocol has been upgraded to use TCP\/IP in its transport and network layers. It is based on the master\/outstation architecture, where field devices are at outstations and the monitoring and control are done by the master. DNP3 has its own three layers: a) a Data Link Layer, to ensure reliability of the physical link by detecting and correcting errors and duplicate frames, b) a Transport Layer, to support fragmentation and reassembly of large application payloads, and c) an Application Layer, to interface with the DNP3 user software that monitors and controls the field devices. Every outstation consists of a collection of measurements, such as breaker status, real power output, etc., which are associated with a DNP3 point and classified under one of the five groups: binary inputs (BI), binary outputs (BO), analog inputs (AI), analog outputs (AO), and counter inputs. The physical features consist of the information carried in the headers of the three DNP3 layers, along with the values carried by the DNP3 points in the application layer payload. Every DNP3 payload's purpose is indicated by a header in the application layer called the function code (FC). In our simulations, we extract the features with FCs 1 (READ), 5 (DIRECT OPERATE), 20 (ENABLE UNSOLICITED MESSAGES), 21 (DISABLE UNSOLICITED MESSAGES), and 129 (DNP3 RESPONSE). The details of the features are in Table~\\ref{table:fusion_features}. \n\n\n\n\\begin{table}[]\n\\caption{Description of the features used in data fusion.}\n\\begin{tabular}{|p{0.8cm}|m{6.5cm}|m{0.4cm}|}\n\\cline{1-3}\n Features & Description & Def \\\\ \\cline{1-3}\n \nFrame Len & Length of the frame after the network, transport, and application headers and payload are added and fragmented based on the channel type. For Ethernet, the frame length can be at most 1518 bytes; this varies for wireless channels. & 0\\\\ \\cline{1-3}\n\nFrame Prot. & Determines the list of protocols in the layers above the link layer encapsulated in the frame. & NaN \\\\ \\cline{1-3}\n\nEth Src & Unique source MAC address. Crucial for detection of ARP spoof attacks. & 0\\\\ \\cline{1-3}\n\nEth Dst & Unique destination MAC address. Crucial for detection of ARP spoof attacks. & 0\\\\ \\cline{1-3}\n\nIP Src & Unique source IP address. & 0\\\\ \\cline{1-3}\n\nIP Dst & Unique destination IP address. & 0\\\\ \\cline{1-3}\n\nIP Len & Stores the length of the header and payload in an IP-based packet. This correlates well with the DNP3 payload size. & 0\\\\ \\cline{1-3}\n\nIP Flags & Indicator of fragmentation caused by link or router congestion at the intermediary nodes. & 0x00\\\\ \\cline{1-3}\n\nSrc Port & Indicates the port number used by the source application using TCP in the transport layer, e.g., if the source is the DNP3 outstation, the default port is 20000. & 0\\\\ \\cline{1-3}\n\nDest Port & Indicates the port number used by the destination application using TCP in the transport layer. & 0\\\\ \\cline{1-3}\n\nTCP Len & Stores the length of the header and payload in a TCP-based segment. This correlates well with the DNP3 payload size. & 0\\\\ \\cline{1-3}\n\nTCP Flags & Flags are used to indicate a particular state of the connection, such as SYN, ACK, etc. & 0x00\\\\ \\cline{1-3}\n\nRetrans. & Indicates if the current record is from a retransmitted packet, caused by an attack or network congestion. & 0\\\\ \\cline{1-3}\n\nRTT & Indicator of propagation and processing delay. High RTT can be caused by a MiTM attack. & -1\\\\ \\cline{1-3}\n\nFlow Cnt & Indicates the number of TCP flows in a specific time interval. Indicates the connected and disconnected DNP3 masters. A flow is a collection of packets. & -1 \\\\ \\cline{1-3}\n\nFlow Fin Cnt & Indicates if the current flow carries the final packet. & -1\\\\ \\cline{1-3}\n\nPackets & Number of packets transmitted in a specific time interval. & -1\\\\ \\cline{1-3}\n\nSnort Alert & Boolean indicating an alert from Snort. & 0\\\\ \\cline{1-3}\n\nAlert Type & Indicates the alert type, such as DNP3, ARP spoof, ICMP flood, or other types. & NaN\\\\ \\cline{1-3}\n\nLL Src & Source id of the DNP3 master or outstation. Indicator of which outstation communicates with the master in that specific record. & -1 \\\\ \\cline{1-3}\n\nLL Dest & Destination id of the DNP3 master or outstation. Indicator of which outstation communicates with the master in that specific record. & -1\\\\ \\cline{1-3}\n\nLL Len & Indicator of the DNP3 payload size as well as the function type. Usually the response carries DNP3 point information; hence this length correlates with the function code as well as the outstation currently communicating. & 0\\\\ \\cline{1-3}\n\nLL Ctrl & Indicates the initiator of the communication. Determines the primary\/secondary server. & 0x00\\\\ \\cline{1-3}\n\nTL Ctrl & Indicates the FIN\/FIR\/sequence number for determining if the DNP3 payload is the first or final segment. & 0x00 \\\\ \\cline{1-3}\n\nFunc. code & Indicates the function code: READ, WRITE, OPERATE, DIRECT OPERATE, etc. & -1\\\\ \\cline{1-3}\n\nAL Ctrl & Indicates the FIN\/FIR\/Seq\/Confirm and Unsolicited flags. These indicate whether the payload is unsolicited, first, or final from the application layer standpoint. & 0x00\\\\ \\cline{1-3}\n\nObj count & This count determines the number of BI, BO, AI, and AO points associated with a substation. & 0\\\\ \\cline{1-3}\n \nAL Payload & Contains the DNP3 points used to extract the physical features, such as branch statuses and real power flows and injections in the branches and buses of a substation. & NaN\\\\ \\cline{1-3}\n\n\\end{tabular}\n\\label{table:fusion_features}\n\\vspace{-0.5cm}\n\\end{table}\n\n\\section{Data Transformation}\\label{data_transformation}\nReal-time testbed data is usually insufficient, conflicting, and in diverse formats, and at times lacks certain patterns or trends. Hence, data pre-processing is essential for transforming the raw data into an understandable format. The raw data extracted from the multiple sensors are processed through four steps: a) data imputation, b) data encoding, c) data scaling, and d) feature reduction.\n\n\\subsection{Data Imputation}\n\nImputation is a statistical method of replacing missing data with substituted values. Substitution of a data point is unit imputation, and substitution of a component is item imputation. Imputation tries to preserve all the records in the data table by replacing missing data with an estimated value based on other available information or feeds from domain experts. There are other forms of imputation, such as mean, stochastic, and regression imputation. Imputation can introduce a substantial amount of bias and can also impact efficiency. In this work, we have not tried to address such discrepancies of bias introduced by imputation.\nSince we merge data from different sources with unique features, the chances of missing data are high. Hence, we perform unit imputation in our dataset based on the default values in the $Def$ column of Table~\\ref{table:fusion_features}.
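\n\nA minimal sketch of this unit imputation with pandas is shown below, using a few of the defaults from the $Def$ column; the records are toy values, not the actual fused table.\n\n\\begin{lstlisting}[language=Python,numbers=none]\nimport pandas as pd\n\n# Toy fused records with missing cells (not the actual RESLab table).\nfused = pd.DataFrame({\"Frame Len\": [60, None],\n                      \"RTT\": [0.12, None],\n                      \"Snort Alert\": [1, None]})\n\n# Unit imputation: fill each missing cell with its feature's default\n# value taken from the Def column of the feature table.\ndefaults = {\"Frame Len\": 0, \"RTT\": -1, \"Snort Alert\": 0}\nfused = fused.fillna(value=defaults)\nprint(fused)\n\\end{lstlisting}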
\n\n\\subsection{Data Encoding}\nNumerous features in the fused dataset are categorical. These categorical features are encoded using the preprocessing libraries in scikit-learn, so that the predictive model can better understand the data. There are different types of encoders, such as the ordinal encoder, label encoder, one-hot encoder, etc. In this work, we use label encoding. Label encoding is preferred over one-hot encoding when the cardinality of the categories in a categorical feature is quite large, as one-hot encoding then results in the issue of high dimensionality. \nWe also do not consider an ordinal encoder, as it operates on the 2D dataset ($samples \\times features$). Since we process cross-domain features, we perform encoding on individual features separately using label encoding. \n\n\\subsection{Scaling and Normalization}\nScaling and normalizing the features is essential for various ML and DL techniques such as Principal Component Analysis (PCA), Multi-Layer Perceptrons (MLPs), Support Vector Machines (SVMs), etc. Though certain techniques, such as Decision Trees or Random Forests, are scale-invariant, it is still essential to normalize the data before training. Before performing normalization, we perform log transformation and categorical encoding for the features with high variance and varied ranges of values, respectively. Hence, \nwe evaluate both log transformation and scaling. Additionally, we considered \\textit{Min-Max scaling} as performed in our prior works on intrusion detection with the KDD and CIDDS datasets~\\cite{ieee_cqr}.\n\n\\subsection{Feature Reduction}\nOnce the features from the multiple sensors are merged, dimension reduction (based on inter-feature correlation) is performed to remove the trivial features using Principal Component Analysis (PCA). PCA is a linear dimensionality reduction method that uses Singular Value Decomposition (SVD) on the data to project it to a lower dimensional space \\cite{wold1987PCA}. The inter-feature correlation for our fused dataset from RESLab is based on the Pearson coefficient \\cite{pearson}, as shown in Fig.~\\ref{fig:feature_correlation}, where it can be observed that intra-domain features have higher correlation amongst each other. There is also some correlation observed across the cyber and physical features. Features with higher correlation are more linearly dependent and thus have a similar effect on dependent variables. For example, if two features have high correlation, one of the two can be eliminated.\n\n\\begin{figure} [h]\n \\centering\n \\vspace{-0.2cm}\n \\includegraphics[width=1.0\\linewidth]{Figures\/inter_feature_correlation.png}\n \\vspace{-0.35cm}\n \\caption{Inter-feature correlation based on the Pearson coefficient}\n \\label{fig:feature_correlation}\n\\end{figure}\n\\section{Fusion}\\label{fusion_types}\nAs presented in Fig.~\\ref{fig:architecture_types}, the \\textit{Fusion} block involves different types of fusion. Intra-domain and inter-domain fusion are considered for training the IDS using supervised and unsupervised learning techniques. We also explore location-based fusion and visualization for causal inference of the impact of the intrusion at different locations in the network.
Finally, co-training with feature split is used to train the IDS using semi-supervised learning with labeled and unlabeled data.\n\n\\subsection{Intra-Domain and Inter-Domain Fusion}\nFusion of cyber sensor information from different sources is homogeneous source fusion. For example, the operation of fusing Elasticsearch logs with pyshark or raw packet captures to form the $cyber\\_table$ is intra-domain fusion.\n\nFusion of cyber and physical sensor information from different sources is heterogeneous source fusion. For example, the operation of fusing the $cyber\\_table$ with the $physical\\_table$ is inter-domain fusion. \n\n\\subsection{Location-Based Fusion}\nIn multi-sensor data fusion, sensor location plays a major role. For example, the military uses location-based multi-sensor fusion to estimate the location of enemy troops by amalgamating sensor information from multiple radars and submarines. The challenges associated with different locations stem from time differences in event recognition. A radar can pick up a signal with a different latency than a submarine due to the difference in communication medium as well as its location relative to the enemy troops. Similarly, our sensors, such as the IDS, firewall alerts, and network logs, are positioned at different locations in the network. It is essential to correlate events among the different locations before merging them for inferring any attacks.\n\\begin{figure*} [!htb]\n \\centering\n \\vspace{-0.2cm}\n \\includegraphics[width=1.0\\linewidth]{Figures\/location_based_fusion.jpg}\n \\vspace{-0.35cm}\n \\caption{Location based fusion from the master, outstation, and substation router. The high density traffic observed in the places marked with red rectangles is an indicator of a DoS attack. This fusion assists in causal analysis for determining the initial victim of the DoS intrusion as well as inferring the pattern of impact across other devices in the network.}\n \\label{fig:location_fusion}\n\\end{figure*}\n\n\\subsection{Co-Training Based Split and Fusion}\nThere exist scenarios where labels cannot be captured. \nThe co-training algorithm~\\cite{co_training} uses a feature split when learning from a dataset containing a mix of labeled and unlabeled data. This algorithm is usually preferred for datasets that have a natural separation of features into disjoint sets~\\cite{cotraining2}. Since the cyber and physical features are disjoint, we adopt feature-split based co-training. \nThe approach is to incrementally build classifiers over each of the split feature sets. In our case, we split the fused features into cyber and physical features. Each classifier, $cy\\_cfr$ (first 17 features in Table~\\ref{table:fusion_features}) and $phy\\_cfr$ (last 9 features in Table~\\ref{table:fusion_features}), is initialized using a few labeled records. At every loop of co-training, each classifier chooses one unlabeled record per class to add to the labeled set. The record is selected based on the highest classification confidence, as provided by the underlying classifier. Further, each classifier is rebuilt from the augmented labeled set, and the process repeats. Finally, the two classifiers $cy\\_cfr$ and $phy\\_cfr$ obtained from the co-training algorithm give probability scores against the classes for each record, which are added and normalized to determine the final class of the record~\\cite{cotraining2}. The classifiers selected in our experiments are Linear Support Vector Machine (SVM), Logistic Regression, Decision Tree, Random Forest, Naive Bayes, and Multi-Layer Perceptron.
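\n\nA minimal sketch of this feature-split co-training loop is given below. It is our own illustration assuming scikit-learn classifiers, and it simplifies the one-record-per-class selection described above to the single most confident record per view; the initial labeled set is assumed to cover both classes.\n\n\\begin{lstlisting}[language=Python,numbers=none]\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.preprocessing import normalize\n\ndef co_train(X_cy, X_phy, y, labeled_idx, rounds=10):\n    # y holds class labels, with -1 marking unlabeled records;\n    # labeled_idx must contain records of both classes.\n    labeled = set(labeled_idx)\n    cy_cfr = LogisticRegression(max_iter=1000)\n    phy_cfr = LogisticRegression(max_iter=1000)\n    for _ in range(rounds):\n        idx = sorted(labeled)\n        cy_cfr.fit(X_cy[idx], y[idx])\n        phy_cfr.fit(X_phy[idx], y[idx])\n        unlabeled = [i for i in range(len(y)) if i not in labeled]\n        # Each view labels the record it is most confident about.\n        for cfr, X in ((cy_cfr, X_cy), (phy_cfr, X_phy)):\n            if not unlabeled:\n                break\n            proba = cfr.predict_proba(X[unlabeled])\n            pick = unlabeled[int(np.argmax(proba.max(axis=1)))]\n            y[pick] = cfr.predict(X[pick:pick + 1])[0]\n            labeled.add(pick)\n            unlabeled.remove(pick)\n    # Final decision: add the two views' probability scores and\n    # normalize each row to sum to one.\n    p = cy_cfr.predict_proba(X_cy) + phy_cfr.predict_proba(X_phy)\n    return normalize(p, norm=\"l1\")\n\\end{lstlisting}\n\nHere, $X\\_cy$ and $X\\_phy$ would correspond to the cyber and physical feature columns of the fused table, respectively.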
\n\n\\begin{figure} [h]\n \\centering\n \\vspace{-0.2cm}\n \\includegraphics[width=1.0\\linewidth]{Figures\/CoTraining_CP.JPG}\n \\vspace{-0.35cm}\n \\caption{Co-training based fusion for labeled and unlabeled datasets. The fused dataset is split into cyber and physical views, trained in the cyber and physical classifiers separately, and the probability scores are finally fused and normalized for the final classification.}\n \\label{fig:cotraining}\n\\end{figure}\n\\section{Intrusion Detection Post Fusion}\\label{ids}\nAfter the features are extracted, merged, and pre-processed, we design the IDS using different ML techniques. We have considered manifold learning and clustering as unsupervised learning techniques, a few linear and non-linear supervised learning techniques, and a co-training based semi-supervised learning method for training the IDS. In this section, we briefly explain the ML techniques we use. \n\n\\subsection{Manifold Learning}\nPCA for feature reduction does not perform well when there are nonlinear relationships within the features. Manifold learning is adopted in scenarios where the data projected onto a low-dimensional planar surface are not well represented and more complex surfaces are needed. Multi-featured data can be described as a function of a few underlying latent parameters; hence, the data points can be assumed to be samples from a low-dimensional manifold embedded in a high-dimensional space. These algorithms try to decipher the latent parameters to obtain a low-dimensional representation of the data. There are several approaches to this problem, such as Locally Linear Embedding, Spectral Embedding, Multi-Dimensional Scaling, and IsoMap. \n\n\\subsubsection{Locally Linear Embedding (LLE)}\nLLE computes a lower-dimensional projection of the high-dimensional data by preserving distances within local neighborhoods. It is equivalent to a series of local PCAs that are globally compared to obtain the best non-linear embedding~\\cite{scikit_manifold}. The LLE algorithm consists of 3 steps~\\cite{lle}: a) compute the k-nearest neighbors of each data point; b) construct a weight matrix associated with the neighborhood of each data point, obtaining the weights that best reconstruct each data point from its neighbors by minimizing the reconstruction cost; c) compute the transformed data points $Y$ best reconstructed by the weights, minimizing the quadratic form.\n\n\n\n\\subsubsection{Spectral Embedding}\nSpectral embedding builds a graph incorporating neighborhood information. Considering the Laplacian of the graph, it computes a low-dimensional representation of the data set that optimally preserves local neighborhood information~\\cite{se}. Minimization of a cost function based on the graph ensures that points close to each other on the manifold are mapped close to each other in the low-dimensional space, preserving local distances~\\cite{scikit_manifold}. \nThe spectral embedding algorithm consists of 3 steps: a) weighted graph construction, in which the raw data are converted into a graph representation using an adjacency matrix; b) construction of the unnormalized and normalized graph Laplacians as $L = D - A$ and $L = D^{-0.5}(D-A)D^{-0.5}$, respectively;
and c) partial eigenvalue decomposition of the graph Laplacian.\n\n\\subsubsection{Multi Dimensional Scaling (MDS)}\nMDS projects data to a lower dimension to improve interpretability while preserving the `dissimilarity' between the samples. It preserves the dissimilarity by minimizing the squared difference between the pairwise distances of all the training data in the projected, lower-dimensional space and in the original, higher-dimensional space, \n\\begin{equation}\n \\operatorname{Diff}_{P}\\left(X_{1}, \\ldots, X_{n}\\right)=\\left(\\sum_{i=1}^{n} \\sum_{j=1 \\mid i \\neq j}^{n}\\left(\\left\\|x_{i}-x_{j}\\right\\|-\\delta_{i, j}\\right)^{2}\\right)^{1 \/ 2}\n\\end{equation}\n\n\\noindent where $\\delta_{i,j}$ is the general dissimilarity metric in the original higher-dimensional space and $\\left\\|x_{i}-x_{j}\\right\\|$ is the projected, lower-dimensional pairwise dissimilarity between training samples $i$ and $j$. The model can finally be validated by a scatter plot of the pairwise distances in the projected and original spaces. There are two types of MDS: metric and non-metric. In metric MDS, the distances between two points in the projection are set to be as close as possible to the dissimilarity (or distance) in the original space. Non-metric MDS tries to preserve the order of the distances, and hence seeks a monotonic relationship between the distances in the embedded and original spaces.\n\n\n\n\n\n\\subsubsection{t-SNE Visualization}\nThe manifold learning technique called t-distributed Stochastic Neighbor Embedding (t-SNE) is useful for visualizing high-dimensional data, as it reduces the tendency of points to crowd together in the center. This technique converts similarities between data records to joint probabilities and then tries to minimize the Kullback-Leibler divergence (a measure used to compare two probability distributions) between the joint probabilities of the low-dimensional embedding and the high-dimensional data using gradient descent. The only issue with this technique is that it is computationally expensive and is limited to two or three embedding dimensions in some methods. In our intrusion detection methods, our purpose is to evaluate whether we can find some correlation of the data points with the labels in the low-dimensional embedding.\n\n\\subsubsection{IsoMap Embedding}\nIsomap stands for isometric mapping and is an extension of the MDS technique discussed earlier. It uses geodesic paths instead of Euclidean distances for nonlinear dimensionality reduction. Since MDS tends to preserve large pairwise distances over small pairwise distances, Isomap first determines a neighborhood graph by finding the k nearest neighbors of each point, then connects these points in the graph and assigns weights. It then computes the shortest geodesic paths between all pairs of points in the graph and applies MDS to the resulting shortest-path distance matrix~\\cite{extra_fusion_ref1}.
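\n\nAs an illustration of how such low-dimensional embeddings are obtained for visualization, a minimal t-SNE sketch with scikit-learn is shown below; the data are random stand-ins, whereas in our study the input is the fused feature matrix.\n\n\\begin{lstlisting}[language=Python,numbers=none]\nimport numpy as np\nfrom sklearn.manifold import TSNE\n\nrng = np.random.default_rng(0)\nX = rng.normal(size=(300, 26))   # stand-in for the fused feature matrix\n\n# Two-dimensional embedding for visualization; perplexity sets the\n# effective neighborhood size.\nemb = TSNE(n_components=2, perplexity=30, init=\"pca\").fit_transform(X)\nprint(emb.shape)                 # (300, 2)\n\\end{lstlisting}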
\n\n\\subsection{Clustering}\nOne of the fundamental problems in multi-sensor data fusion is \\textit{data association}, where different observations in the dataset are grouped into clusters~\\cite{hall_book}. Hence, various clustering techniques are explored for data association. \n\\subsubsection{K-means Clustering}\nThe k-means algorithm clusters data by separating samples into $n$ groups of equal variance, minimizing a criterion known as the inertia. The algorithm starts with a group of randomly selected centroids, which are used as the starting points for every cluster, and then performs iterative calculations to optimize the positions of the centroids by minimizing the inertia. The process stops when either the centroids have stabilized or the maximum number of iterations has been reached.\n\n\n\n\n\n\\subsubsection{Spectral Clustering}\nThe main concept behind spectral clustering is the graph Laplacian matrix. The algorithm takes the following steps~\\cite{spectral_clustering}:\n\\begin{enumerate}\n \\item Construct a similarity graph based on either an $\\epsilon$-neighborhood graph, a $k$-nearest neighbor graph, or a fully connected graph.\n \\item Compute the normalized Laplacian $L$.\n \\item Compute the first $k$ eigenvectors $u_{1},u_{2},\\dots,u_{k}$ of $L$. These correspond to the $k$ smallest eigenvalues of $L$.\n \\item Let $U \\in R^{n \\times k}$ be the matrix containing the vectors $u_{1},u_{2},\\dots,u_{k}$ as columns.\n \\item For $i=1,\\dots,n$, let $y_{i} \\in R^{k}$ be the vector corresponding to the $i$-th row of $U$.\n \\item Cluster the points $(y_{i})$ in $R^{k}$ with the k-means algorithm into clusters $C_{1},\\dots,C_{k}$.\n\\end{enumerate}\n\n\n\\subsubsection{Agglomerative Clustering}\nAgglomerative clustering works in a bottom-up manner: at the beginning, each object belongs to a single-element cluster; these are the leaf clusters of a dendrogram. At each step of the algorithm, the two clusters that are most similar (based on a similarity metric such as distance) are combined into a larger cluster. The procedure is followed until all points are members of a single big cluster. The steps form a hierarchical tree, where a distance threshold is used to cut the tree to partition the data into clusters. As per scikit-learn, this algorithm recursively merges the pair of clusters that minimally increases a given linkage distance~\\cite{scikit_clustering}; the parameter \\textit{distance\\_threshold} in the scikit-learn implementation is used to cut the dendrogram.\n\n\\subsubsection{Birch Clustering}\nThe Balanced Iterative Reducing and Clustering Using Hierarchies (BIRCH) \\cite{zhang1996birch} algorithm is more suitable for cases where the amount of data is large and the number of categories K is also relatively large. It runs very fast, and it needs only a single scan of the data set for clustering.\n\n\n\n\\subsection{Supervised Learning}\nThough manifold learning and clustering techniques help visualize and separate the data samples in the intrusion time interval from the non-intrusion ones, the results of these techniques are hard to validate without labels; hence, various supervised learning techniques are also considered in designing the anomaly-based IDS.\n\n\n\\subsubsection{Support Vector Classifier (SVC)}\n A support vector machine builds a hyperplane or set of hyperplanes in a higher-dimensional space, which are further used as a decision surface for classification or outlier detection. It is a supervised learning based classifier that performs well even in scenarios where the number of features exceeds the number of samples. The decision function, or the support vectors, defined using a kernel such as the sigmoid, polynomial, linear, or radial basis function, has a major impact on the classifier performance. Different variants of SVCs have been predominantly proposed in intrusion detection solutions~\\cite{svm1,svm2}.
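\n\nFor illustration, a minimal scikit-learn pipeline for such a classifier is sketched below; the feature matrix and labels are random stand-ins for the fused dataset and the attack-window labels, and scaling is included inside the pipeline given the scale sensitivity of SVCs.\n\n\\begin{lstlisting}[language=Python,numbers=none]\nimport numpy as np\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.svm import SVC\nfrom sklearn.model_selection import cross_val_score\n\nrng = np.random.default_rng(0)\nX = rng.normal(size=(200, 26))    # stand-in for the fused features\ny = rng.integers(0, 2, size=200)  # stand-in attack\/normal labels\n\n# Scaling sits inside the pipeline so it is fit only on training folds.\nclf = make_pipeline(MinMaxScaler(), SVC(kernel=\"rbf\", C=1.0))\n\n# F1-score is the balanced metric used throughout this work.\nscores = cross_val_score(clf, X, y, cv=5, scoring=\"f1\")\nprint(scores.mean())\n\\end{lstlisting}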
\\subsection{Supervised Learning}\nThough manifold learning and clustering techniques help visualize and separate the data samples in the intrusion time-interval from the non-intrusion ones, the results of these techniques are hard to validate without any labels; hence, various supervised learning techniques are also considered in designing the anomaly-based IDS.\n\n\\subsubsection{Support Vector Classifier (SVC)}\nA support vector machine builds a hyperplane or set of hyperplanes in a higher-dimensional space, which are further used as a decision surface for classification or outlier detection. It is a supervised learning based classifier which performs well even in scenarios with a larger feature size than sample size. The decision function, or set of support vectors, defined by the kernel type, such as sigmoid, polynomial, linear, or radial basis function, has a major impact on the classifier performance. Different variants of SVCs have been predominantly proposed in intrusion detection solutions~\\cite{svm1,svm2}.\n\n\\subsubsection{Logistic Regression (LR) Classifier}\nLR is a classification algorithm, used mainly for a discrete set of classes. It is a probability-based classification technique which minimizes the error cost using the logistic sigmoid function. It uses the gradient descent technique to reduce the error cost function. Industries make wide use of it, since it is very efficient and highly interpretable~\\cite{lr}.\n\n\\subsubsection{Naive Bayes (NB) Classifier}\nNB is a supervised learning technique based on Bayes' theorem, with the naive assumption of independent features, conditioned on the class. Based on the feature likelihood distribution, it takes different forms: Gaussian, Bernoulli, Categorical, Complement, etc. Though it is computationally efficient, the choice of feature likelihood may alter the results. It is used profusely in spam filtering, text classification, and also in network intrusion detection~\\cite{nb1}. A naive Bayes based solution was proposed for an IDS in a smart meter network~\\cite{sahu_naive_bayes}.\n\n\\subsubsection{Decision Tree (DT) Classifier}\nThe advantage of using a DT is that it requires the least data transformation. Fundamentally, it internally creates models that predict the target class by learning decision rules inferred from the features.\nThis technique sometimes meets with over-fitting issues while learning complex trees that are hard to generalize; hence, pruning techniques such as reducing the tree max-depth are adopted to deal with over-fitting. If the data in the samples are biased, it will very likely create biased trees. The computation cost of using this classifier is logarithmic in the number of data records. It has been used in the protocol classification problem~\\cite{dt3,dt4} for classifying anomalous packets.\n\n\\subsubsection{Random Forest (RF) Classifier}\nBasically, RF creates decision trees on randomly picked data samples, computes a prediction from each tree, and selects the best solution through voting. More trees result in a more robust forest. It is an ensemble based classifier in which a diverse collection of classifiers (decision trees) is constructed by incorporating randomness in the tree construction.\nThis randomness decreases the variance to address the over-fitting issues prevailing in DT. Compared with SVMs, RF is fast and works well with a mixture of numerical and categorical features.\nIt has a variety of applications, such as recommendation engines, image classification, and feature selection.\nDue to its variance reduction feature and minimal need for data pre-processing, it is also preferred in the cyber security area~\\cite{rf1,rf2}.\n\n\\subsubsection{Neural Network (NN) Classifier}\nNeural networks are effective in the case of complex non-linear models. In our IDS classification problem, we make use of the multi-layer perceptron (MLP) as the supervised learning algorithm. It learns a non-linear function approximator whose inputs are the features of a record and whose output is the class. Unlike a logistic regressor, it comprises multiple hidden layers. A major issue with NN models is their large set of hyper-parameters, such as the number of hidden neurons, layers, iterations, dropouts, etc., which can complicate the hyper-parameter tuning process for improving accuracy. Additionally, MLPs are quite sensitive to feature scaling.\nFollowing Occam's razor, security professionals tend to avoid neural networks in intrusion detection wherever possible. Still, NNs can be explored to capture temporal patterns with the use of Recurrent Neural Networks (RNNs) and spatial patterns using Graph Neural Networks (GNNs).
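To make the workflow concrete, the sketch below trains the classifiers discussed above on a fused dataset and scores them; the dataset \\texttt{(X, y)}, the train\/test split, and all hyper-parameter values are illustrative assumptions, not the exact configuration used in our experiments.\n\\begin{verbatim}\n# Minimal sketch: train the classifiers above and report\n# F1, recall, and precision on a held-out test set.\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.svm import SVC\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import GaussianNB, BernoulliNB\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.metrics import (f1_score, recall_score,\n                             precision_score)\n\nX_tr, X_te, y_tr, y_te = train_test_split(X, y,\n                                          test_size=0.3)\nmodels = {'SVC': SVC(kernel='rbf'),\n          'LR': LogisticRegression(),\n          'GNB': GaussianNB(), 'BNB': BernoulliNB(),\n          'DT': DecisionTreeClassifier(max_depth=10),\n          'RF': RandomForestClassifier(n_estimators=100),\n          'MLP': MLPClassifier(hidden_layer_sizes=(64, 32))}\nfor name, clf in models.items():\n    y_hat = clf.fit(X_tr, y_tr).predict(X_te)\n    print(name, f1_score(y_te, y_hat),\n          recall_score(y_te, y_hat),\n          precision_score(y_te, y_hat))\n\\end{verbatim}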
\\section{Results and Analysis}\\label{results}\n\nIn this section, we study the improvement in the detection performance of the IDS when a fused dataset is considered, in comparison to the use of only cyber or only physical features. We design the IDS as a classifier trained with supervised and semi-supervised ML techniques. We analyze the IDS performance for the different types of MiTM attacks carried out in the RESLab testbed. For the supervised learning techniques, we analyze the impact of labeling as well as feature reduction on the detection accuracy. For the unsupervised learning techniques, we compare the performance of the clustering techniques based on different metrics. In most of the experiments, we expect to obtain the highest scores for either 2 or 3 clusters, since we want to separate attacked traffic from non-attacked traffic; the third cluster can be an undetermined cluster. We also utilize and test a co-training based semi-supervised learning technique by assuming a loss of labels for some experiments and compare it with the supervised learning techniques.\n\n\\subsection{Supervised Technique Intrusion Detection with Snort Alert as Label}\n\n\\subsubsection{Metrics for evaluation}\nThe IDS performance is evaluated by the classifier's accuracy computed using metrics such as \\textit{Recall}, \\textit{Precision}, and \\textit{F1-score}. Recall is the ratio of the true positives to the sum of true positives and false negatives. Precision is the ratio of the true positives to the sum of true positives and false positives.\nHigh precision is ensured by a low false positive rate, while high recall is an indication of a low false negative rate.\nFalse negatives are highly unwanted in security, since an undetected attack may result in further privilege escalations and can impact a larger part of the network. False positives are expensive, as time and money are invested by security professionals to investigate non-critical alerts.\nHence, the harmonic mean of recall and precision, called the F1-score, is a preferred metric for a balanced evaluation.
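For concreteness, with TP, FP, and FN denoting the numbers of true positives, false positives, and false negatives, these metrics read\n\\begin{equation}\n\\text{Prec.} = \\frac{TP}{TP+FP},\\quad \\text{Rec.} = \\frac{TP}{TP+FN},\\quad \\text{F1} = \\frac{2\\cdot\\text{Prec.}\\cdot\\text{Rec.}}{\\text{Prec.}+\\text{Rec.}}.\n\\end{equation}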
\\subsubsection{Labels Evaluation}\nThe performances of the supervised learning based IDS classifiers are compared when trained with labels from Snort alerts and with labels based on the intruders' attack windows. The intruders' attack window is the interval between the attack script start and end times; we label every record in this window as belonging to the compromised class. It is interesting to observe from Table~\\ref{tab:label_comparison} that the classifiers trained using the attack window label performed better than those using the Snort labels, based on the average F1-score, Recall, and Precision.\nThese metrics are computed by taking the average of all the metrics from the different use cases. This analysis indicates that training a model from a well-known IDS may not act as an ideal classifier for intrusion detection. Hence, for our further studies, we train the classifiers using the attack window based label.\n\n\\begin{table}[h]\n \\centering\n \\begin{tabular}{|l|l|l|l|l|l|l|}\n \\hline\nClassifier & \\multicolumn{3}{|c|}{Snort Label} & \\multicolumn{3}{|c|}{Label from Attack Window}\\\\\n \\hline\n Avg. & F1 score & Rec. & Prec. & F1 score & Rec. & Prec.\\\\ \\hline\n SVC & .566 & .69 & .496 & .752 & .776 & .799 \\\\ \\hline\n DT & .738 & .73 & .757 & .909 & .909 & .92 \\\\ \\hline\n RF & .764 & .789 & .776 & .891 & .896 & .903 \\\\ \\hline\n GNB & .598 & .574 & .745 & .724 & .729 & .748 \\\\ \\hline\n BNB & .57 & .589 & .621 & .634 & .655 & .676 \\\\ \\hline\n MLP & .561 & .671 & .491 & .621 & .695 & .604 \\\\ \\hline\n \\end{tabular}\n \\caption{Comparison of the labels using different classifiers based on the evaluation metrics.}\n \\label{tab:label_comparison}\n \\vspace{-0.2 cm}\n\\end{table}\n\n\\subsubsection{Use Case Specific Evaluation}\nWe analyze the dataset constructed from four use cases based on different strategies of FDI and FCI attacks (on measurements and controls, respectively). These cases use different polling rates and numbers of DNP3 masters on the synthetic 2000-bus grid case illustrated in the RESLab paper~\\cite{Sahu2020}. Use Cases 1 and 2 are FCI attacks on binary and mixed binary\/analog commands from the control center to some selected outstations, chosen from our prior work on graph-based contingency discovery~\\cite{n_x}. Use Cases 3 and 4 are a mix of FCI and FDI attacks. These use cases differ based on the type and sequence of modifications made by the intruder, as shown in Table~\\ref{tab:uc_table}.\n\n\\begin{table*}[t]\n \\centering\n \\begin{tabular}{|l|l|l|l|}\n \\hline\n \\multicolumn{2}{|c|}{FCI} & \\multicolumn{2}{|c|}{FCI with FDI}\\\\\n \\hline\n UC1 & UC2 & UC3 & UC4\\\\ \\hline\n Binary Commands & Analog, Binary Commands & Measurements=$>$Commands & Measurements=$>$ Commands=$>$Measurements \\\\ \\hline\n \\end{tabular}\n \\caption{Use cases based on the type and sequence of modifications.}\n \\label{tab:uc_table}\n \\vspace{-0.2 cm}\n\\end{table*}\n\nDue to the varying number of attempts an intruder needs to implement the use cases, the number of samples collected for every scenario differs.\nIn the MLP based classifier, the number of samples plays a vital role; hence, MLP performs better for scenarios with the number of DNP3 masters equal to 10 versus 5 and with a DNP3 polling interval of 30 s versus 60 s. The DT and RF classifiers outperform the other classifiers in almost all the scenarios. The NB classifiers, both Gaussian and Bernoulli, need the features to be independent for optimal performance. Since most of the features are strongly correlated based on Fig.~\\ref{fig:feature_correlation}, the performance of NB is relatively weak compared to the other classifiers. Usually, Gaussian Naive Bayes (GNB) is considered for features that are continuous and Bernoulli Naive Bayes (BNB) for discrete features. Since we have both types of features in our fused dataset, we consider both techniques for evaluation. In the majority of the scenarios, GNB performed better than BNB, indicating that the physical features have more impact on the detection than the categorical cyber features.
Table~\\ref{tab:uc_comparison} shows the comparison of the classifiers for the different use cases, and Table~\\ref{tab:uc_comparison_gridsearch} shows the comparison using grid search cross-validation based tuning of the hyper-parameters for each classifier.\n\n\\begin{table}[!h]\n \\begin{tabular}{|l|l|l|l|l|l|l|l|l|}\n \\cline{1-9}\n\\multicolumn{3}{|c|}{Scenarios} & \\multicolumn{6}{|c|}{Classifiers}\\\\\n \\cline{1-9}\n uc & masters & PI & SVC & DT & RF & GNB & BNB & MLP\\\\\n \\cline{1-9}\n \\multirow{2}{*}{\\textbf{UC1}} & 10 & 30 & .70 & .74 & .75 & .59 & .70 & .70\\\\ \\cline{2-9}\n & 10 & 60 & .78 & .87 & .81 & .75 & .49 & .58\\\\ \\cline{1-9}\n \\multirow{4}{*}{\\textbf{UC2}} & 5 & 30 & .88 & .76 & .92 & .73 & .52 & .86\\\\ \\cline{2-9}\n & 5 & 60 & .88 & .89 & 1.0 & .94 & .89 & .66\\\\ \\cline{2-9}\n & 10 & 30 & .84 & .93 & .93 & .73 & .59 & .77\\\\ \\cline{2-9}\n & 10 & 60 & .64 & .97 & .88 & .33 & .58 & .52\\\\ \\cline{1-9}\n \\multirow{4}{*}{\\textbf{UC3}} & 5 & 30 & .95 & .98 & .93 & .93 & .57 & .72\\\\ \\cline{2-9}\n & 5 & 60 & .50 & 1.0 & .88 & .72 & .33 & .40\\\\ \\cline{2-9}\n & 10 & 30 & .85 & 1.0 & .97 & .83 & .66 & .86\\\\ \\cline{2-9}\n & 10 & 60 & .89 & .98 & .91 & .84 & .73 & .91\\\\ \\cline{1-9}\n \\multirow{4}{*}{\\textbf{UC4}} & 5 & 30 & .59 & .86 & .88 & .56 & .54 & .39\\\\ \\cline{2-9}\n & 5 & 60 & .63 & .81 & .77 & .74 & .77 & .31\\\\ \\cline{2-9}\n & 10 & 30 & .65 & .96 & .97 & .63 & .78 & .57\\\\ \\cline{2-9}\n & 10 & 60 & .75 & .98 & .88 & .83 & .80 & .50\\\\ \\cline{1-9}\n \\end{tabular}\n \\caption{Comparison of the classifiers based on the scenarios, i.e., use case, number of masters, and polling interval (PI) in sec.}\n \\label{tab:uc_comparison}\n \\vspace{-0.2 cm}\n\\end{table}\n\n\\begin{table}[!h]\n \\begin{tabular}{|l|l|l|l|l|l|l|l|l|}\n \\cline{1-9}\n\\multicolumn{3}{|c|}{Scenarios} & \\multicolumn{6}{|c|}{Classifiers}\\\\\n \\cline{1-9}\n uc & masters & PI & SVC & DT & RF & GNB & BNB & MLP\\\\\n \\cline{1-9}\n \\multirow{2}{*}{\\textbf{UC1}} & 10 & 30 & .70 & .78 & .75 & .70 & .69 & .70\\\\ \\cline{2-9}\n & 10 & 60 & .54 & .87 & .81 & .78 & .52 & .7\\\\ \\cline{1-9}\n \\multirow{4}{*}{\\textbf{UC2}} & 5 & 30 & .51 & .88 & .84 & .72 & .51 & .67\\\\ \\cline{2-9}\n & 5 & 60 & .66 & 1.0 & 1.0 & .83 & .89 & .62\\\\ \\cline{2-9}\n & 10 & 30 & .45 & .94 & .89 & .81 & .44 & .86\\\\ \\cline{2-9}\n & 10 & 60 & .52 & .97 & .85 & .75 & .61 & .58\\\\ \\cline{1-9}\n \\multirow{4}{*}{\\textbf{UC3}} & 5 & 30 & .36 & .98 & .93 & .93 & .50 & .91\\\\ \\cline{2-9}\n & 5 & 60 & .40 & 1.0 & .96 & .88 & .26 & .44\\\\ \\cline{2-9}\n & 10 & 30 & .41 & 1.0 & .99 & .84 & .63 & .69\\\\ \\cline{2-9}\n & 10 & 60 & .40 & .93 & .88 & .89 & .76 & .82\\\\ \\cline{1-9}\n \\multirow{4}{*}{\\textbf{UC4}} & 5 & 30 & .39 & .97 & .93 & .57 & .56 & .61\\\\ \\cline{2-9}\n & 5 & 60 & .31 & .63 & .68 & .65 & .77 & .68\\\\ \\cline{2-9}\n & 10 & 30 & .44 & .96 & .95 & .65 & .78 & .65\\\\ \\cline{2-9}\n & 10 & 60 & .50 & .88 & .85 & .80 & .80 & .50\\\\ \\cline{1-9}\n \\end{tabular}\n \\caption{\\textbf{Optimal Hyper-Parameters with GridSearch.} Comparison of the classifiers based on the scenarios, i.e., use case, number of masters, and polling interval (PI) in sec.}\n \\label{tab:uc_comparison_gridsearch}\n \\vspace{-0.2 cm}\n\\end{table}
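The hyper-parameter tuning follows the usual grid-search cross-validation pattern; the sketch below illustrates it for the RF classifier, where the parameter grid and the scoring choice are illustrative assumptions and analogous grids are defined per classifier.\n\\begin{verbatim}\n# Minimal sketch of grid-search cross-validation for one\n# classifier on the training split (X_tr, y_tr).\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.ensemble import RandomForestClassifier\n\nparam_grid = {'n_estimators': [50, 100, 200],\n              'max_depth': [5, 10, None]}\nsearch = GridSearchCV(RandomForestClassifier(), param_grid,\n                      scoring='f1', cv=5)\nsearch.fit(X_tr, y_tr)\nprint(search.best_params_, search.best_score_)\n\\end{verbatim}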
\\subsubsection{Impact of Fusion}\nWe evaluate the performance of the classifiers by considering pure physical and pure cyber based intra-domain fusion as well as cyber-physical inter-domain fusion. The pure physical and cyber-physical based fusion outperform the pure cyber based fusion for all the classifiers, as shown in Table~\\ref{tab:fusion_type_comparison}. This indicates that the introduction of physical-side features can improve the accuracy of conventional IDSs that only consider network logs in the communication domain. The pure physical features performed relatively better than the cyber-physical ones because, in the testbed, only a few features (i.e., the measurements for the impacted substations) are considered for extraction. If we consider all the measurements from the grid simulation, the detection accuracy will decrease due to feature explosion. Feature reduction techniques such as PCA for the physical features may not be an ideal solution for a huge synthetic grid.\n\n\\begin{table}[!h]\n \\centering\n \\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}\n \\hline\nClfr & \\multicolumn{3}{|c|}{Pure Cyber} & \\multicolumn{3}{|c|}{Pure Physical} & \\multicolumn{3}{|c|}{Cyber Physical}\\\\\n \\hline\n Avg. & F1 & Rec. & Pre. & F1 & Rec. & Pre. & F1 & Rec. & Pre.\\\\ \\hline\n SVC & .62 & .68 & .59 & .75 & .77 & .80 & .75 & .77 & .80\\\\ \\hline\n DT & .77 & .77 & .77 & .93 & .93 & .94 & .91 & .91 & .92\\\\ \\hline\n RF & .69 & .69 & .68 & .92 & .92 & .93 & .89 & .90 & .90\\\\ \\hline\n GNB & .58 & .57 & .59 & .78 & .77 & .81 & .72 & .73 & .75\\\\ \\hline\n BNB & .52 & .56 & .55 & .65 & .68 & .66 & .63 & .66 & .68\\\\ \\hline\n MLP & .56 & .66 & .53 & .72 & .76 & .77 & .62 & .70 & .61\\\\ \\hline\n \\end{tabular}\n \\caption{Comparison of the classifiers with pure cyber fusion, pure physical fusion, and cyber-physical fusion features.}\n \\label{tab:fusion_type_comparison}\n \\vspace{-0.2 cm}\n\\end{table}\n\n\\subsubsection{Impact of Feature Reduction}\nIn this subsection, we analyze feature reduction techniques such as PCA and Shapiro ranking for feature reduction and feature filtering, to evaluate the performance of the IDS. Table~\\ref{tab:feature_selection_comparison} illustrates the performance scores for the different classifiers with PCA-transformed features and with Shapiro-selected features having scores of more than 0.7 (Fig.~\\ref{fig:shapiro_ranking}). It can be observed that, except for DT and RF, the other classifiers' performance improved with both operations. DT and RF behave best when most of the features are kept intact. In most of the cases, feature selection based on Shapiro ranking performed better than the PCA transformation. Still, the total variance threshold chosen may impact the number of principal components considered, which can affect the results.\n\n\\begin{figure} [!h]\n \\centering\n \\vspace{-0.2cm}\n \\includegraphics[width=1.0\\linewidth]{Figures\/shapiro_feature_ranks.png}\n \\vspace{-0.35cm}\n \\caption{Ranking of feature importance for feature extraction. Of all the features, those with scores above 0.7 are selected for training.}\n \\label{fig:shapiro_ranking}\n\\end{figure}
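A minimal sketch of the two reduction routes is given below; we assume here that the Shapiro-Wilk statistic (from \\texttt{scipy.stats.shapiro}) serves as the per-feature ranking score, and the variance threshold is an illustrative choice.\n\\begin{verbatim}\n# Minimal sketch: PCA transformation versus Shapiro-based\n# feature selection on the fused feature matrix X.\nimport numpy as np\nfrom scipy.stats import shapiro\nfrom sklearn.decomposition import PCA\n\n# keep principal components explaining 95% of the variance\nX_pca = PCA(n_components=0.95).fit_transform(X)\n\n# rank each feature by its Shapiro-Wilk statistic and keep\n# the ones scoring at least 0.7\nscores = np.array([shapiro(X[:, j])[0]\n                   for j in range(X.shape[1])])\nX_sel = X[:, scores >= 0.7]\n\\end{verbatim}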
\\begin{table}[h]\n \\centering\n \\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}\n \\hline\nClfr & \\multicolumn{3}{|c|}{All Features} & \\multicolumn{3}{|c|}{PCA} & \\multicolumn{3}{|c|}{Shapiro Ftrs $\\ge$ 0.7}\\\\\\hline\n Avg. & F1 & Rec. & Pre. & F1 & Rec. & Pre. & F1 & Rec. & Pre.\\\\ \\hline\n SVC & .75 & .77 & .80 & .77 & .80 & .81 & .77 & .78 & .79\\\\ \\hline\n DT & .91 & .91 & .92 & .82 & .82 & .83 & .89 & .89 & .91\\\\ \\hline\n RF & .89 & .90 & .90 & .86 & .86 & .87 & .84 & .84 & .84\\\\ \\hline\n GNB & .72 & .73 & .75 & .77 & .78 & .78 & .83 & .84 & .87\\\\ \\hline\n BNB & .63 & .66 & .68 & .74 & .76 & .76 & .80 & .82 & .86\\\\ \\hline\n MLP & .62 & .70 & .61 & .61 & .68 & .64 & .50 & .64 & .41\\\\ \\hline\n \\end{tabular}\n \\caption{Comparison of the classifiers with all features, with features reduced by the PCA transformation, and with feature selection based on Shapiro ranking.}\n \\label{tab:feature_selection_comparison}\n \\vspace{-0.2 cm}\n\\end{table}\n\n\\subsection{Unsupervised Learning Techniques}\n\n\\subsubsection{Metrics for evaluation}\nFor evaluating the performance of the clustering techniques, the Silhouette score, the Calinski-Harabasz score, the Adjusted Rand score, and the Davies-Bouldin score are considered. The \\textit{Silhouette score} (S) is the mean Silhouette Coefficient of all samples. The Silhouette Coefficient is calculated using the mean intra-cluster distance ($a$) and the mean nearest-cluster distance ($b$) for each sample, as $\\frac{b - a}{\\max(a, b)}$. The \\textit{Calinski-Harabasz score} (CH) is computed based on~\\cite{ch_score}; it is the ratio between the between-cluster dispersion and the within-cluster dispersion. The \\textit{Rand Index} computes a similarity measure between two clusterings by considering all pairs of samples and counting the pairs that are assigned to the same or different clusters in the predicted and true clusterings. This index, corrected for chance, is called the Adjusted Rand index (AR). The \\textit{Davies-Bouldin score} (DB) is defined as the average similarity measure of each cluster with its most similar cluster, where similarity is the ratio of within-cluster distances to between-cluster distances~\\cite{db_score}. Thus, clusters which are farther apart and less dispersed will result in a better score.\n\n\\subsubsection{Clustering}\nPrior to applying the clustering techniques, we scaled and normalized the dataset using scaler and normalize functions, since otherwise there would be a feature-based bias. We implement four types of clustering techniques, Agglomerative, k-means, Spectral, and Birch clustering, to evaluate the optimal number of clusters based on the S, CH, AR, and DB scores. For determining the clusters, we merged the samples from all the use cases to form a larger dataset and then trained the clustering methods by tuning the number-of-clusters hyper-parameter ($N_c$) from 2 to 10; a sketch of this sweep is given below. Fig.~8(a-e) shows the clustered plots using Agglomerative clustering with different numbers of clusters.\nThe number of clusters, or centroids, is selected for hyper-parameter tuning since it is found to be the most important factor for the success of the algorithm~\\cite{clus_ref}.\nIdeally, there should be 3 clusters, for un-attacked traffic, attacked traffic with DNP3 alerts, and attacked traffic with ARP alerts, but the distance metric considered results in a greater number of clusters in some methods. Among all the clustering techniques presented in the previous section, the affinity propagation technique does not converge to obtain the exemplars with the default parameters ($damping = 0.50$, $max\\_iter = 200$). Hence, the damping and maximum iteration parameters are increased to 0.95 and 2000, respectively, resulting in 34 clusters. The S, CH, DB, and AR scores obtained are 0.605, 3658.1, 0.736, and 0.00085, respectively.
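A minimal sketch of this sweep, shown here for k-means only, computes the four metrics for each candidate $N_c$; \\texttt{X} is the scaled, normalized fused dataset and \\texttt{y} holds the attack-window labels (needed only for the AR score).\n\\begin{verbatim}\n# Minimal sketch: score N_c = 2..10 with the four metrics.\nfrom sklearn.cluster import KMeans\nfrom sklearn.metrics import (silhouette_score,\n    calinski_harabasz_score, adjusted_rand_score,\n    davies_bouldin_score)\n\nfor n_c in range(2, 11):\n    labels = KMeans(n_clusters=n_c).fit_predict(X)\n    print(n_c,\n          silhouette_score(X, labels),\n          calinski_harabasz_score(X, labels),\n          adjusted_rand_score(y, labels),\n          davies_bouldin_score(X, labels))\n\\end{verbatim}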
\n\n\n\\begin{table}[h]\n \\centering\n \\begin{tabular}{|l|l|l|l|l|}\n\n \\hline\n Clustering Algo & S & CH & AR & DB\\\\ \\hline\n Agglomerative & 3 & 3 & 2 & 6\\\\ \\hline\n K-means & 3 & 5 & 2 & 6\\\\ \\hline\n Spectral & 3 & 5 & 2 & 6 \\\\ \\hline\n Birch & 3 & 3 & 3 & 2\n \\\\ \\hline\n \\end{tabular}\n \\caption{Optimal clusters (Opt $N_c$) using different algorithm obtained using four different evaluation metric}\n \\label{tab:unsup_cluster_comparison}\n \\vspace{-0.2 cm}\n\\end{table}\n\n\\begin{figure*}[!htb]\n \\centering\n \\subfigure[]{\\includegraphics[height=1.3 in,width=1.37 in]{Figures\/Agglomerative\/ac2.jpg}\\label{ac2}}\n \\subfigure[]{\\includegraphics[height=1.3 in,width=1.37 in]{Figures\/Agglomerative\/ac3.jpg}\\label{ac3}}\n \\subfigure[]{\\includegraphics[height=1.3 in,width=1.37 in]{Figures\/Agglomerative\/ac4.jpg}\\label{ac4}}\n \\subfigure[]{\\includegraphics[height=1.3 in,width=1.37 in]{Figures\/Agglomerative\/ac5.jpg}\\label{ac5}}\n \\subfigure[]{\\includegraphics[height=1.3 in,width=1.37 in]{Figures\/Agglomerative\/ac6.jpg}\\label{ac6}}\n \\caption{Agglomerative clustering with different number of clusters. Clustering with size 2 and 3 outperforms others, validating the detection accuracy of a attacked traffic from a non-attacked one.}\n\\end{figure*}\n\n\n\n\\subsubsection{Impact of Fusion}\nConsidering only physical side fea- tures, most of the evaluation metrics computed very low or negative (in the case of Adjusted Rand index) values, indicating inefficient clusters. The scores of the optimal clusters with combined cyber-physical features had an AR score of more than 0.8, but its maximum is 0.01 for 6 clusters with only physical features. The pure cyber features performed similar to the cyber physical case, but the scores are less compared to the merged features. Hence, it is essential to fuse cyber and physical features prior to perform clustering based unsupervised learning.\n\\begin{table}[h]\n \\centering\n \\begin{tabular}{|l|l|l|l|l|l|l|l|l|}\n\n \\hline\n & \\multicolumn{4}{|c|}{Pure Cyber} & \\multicolumn{4}{|c|}{Pure Physical}\\\\ \\hline\n Clustering & S & CH & AR & DB& S & CH & AR & DB\\\\ \\hline\n Agglo. & 3 & 5 & 2 & 6 & 2 & 6 & 6 (neg) & 2\\\\ \\hline\n K-means & 3 & 6 & 2 & 5 & 3 & 6 & 6 (neg) & 2\\\\ \\hline\n Spectral & 3 & 5 & 2 & 6 & 3 & 3 & 6 (neg) & 2 \\\\ \\hline\n Birch & 3 & 3 & 2 & 2 & same & 3 & 3 (neg) & 2\n \\\\ \\hline\n \\end{tabular}\n \\caption{Comparison of Optimal clusters (Opt $N_c$) using different algorithm considering pure cyber and physical features}\n \\label{tab:unsup_cluster_cyphy_comparison}\n \\vspace{-0.2 cm}\n\\end{table}\n\n\\subsubsection{Robustness}\nThe robustness of the clustering techniques can be evaluated based on the variance of these evaluation metrics with respect to a) hyper-parameter tuning and b) dataset alterations. In the first case, the mean, variance, and normalized variance ($NVar$ = $\\frac{sd}{mean}$) of the evaluation metric $S$, $CH$, $AR$, and $DB$ are computed by altering $N_c$ from 2 to 10 and using the complete dataset extracted for all the use cases. In the second case, similar statistics are computed by keeping the number of clusters fixed at $N_c=3$ and altering the dataset i.e. by using different use cases. A clustering technique that has a lower normalized variance is more robust, and a better mean score is more accurate. 
Based on the silhouette scores ($S$) from Table~\\ref{tab:clustering_robustness}, k-means based clustering is found to be more robust to a varying data source and has a better mean score, but a main limitation of k-means is its strong dependence on $N_c$. Still, k-means is used in many practical situations such as anomaly detection~\\cite{kmean} due to its low computation cost.\n\n\\begin{table}[h]\n \\begin{tabular}{|l|l|l|l|l|l|l|l|}\n \\cline{1-8}\n\\multicolumn{2}{|c|}{Scenarios} & \\multicolumn{3}{|c|}{Effect of Parameters}& \\multicolumn{3}{|c|}{Effect of Data Alt.}\\\\\n \\cline{1-8}\n Met & Algo & Mean & Var & NVar & Mean & Var & NVar\\\\\n \\cline{1-8}\n \\multirow{4}{*}{\\textbf{S}} \n & Agg & .52 & .0175 & .254 & .609 & .01 & .164\\\\ \\cline{2-8}\n & K-m & .54 & .013 & .212 & \\fontseries{b}\\selectfont .615 & .008 & \\fontseries{b}\\selectfont .145\\\\ \\cline{2-8}\n & Spec & .504 & .021 & .287 & .581 & .015 & .213 \\\\ \\cline{2-8}\n & Bir & \\fontseries{b}\\selectfont .74 & .011 & \\fontseries{b}\\selectfont .146 & .599 & .010 & .172 \\\\ \\cline{1-8}\n \\multirow{4}{*}{\\textbf{CH}} \n & Agg & 9965 & \\num[round-precision=2,round-mode=figures,\n scientific-notation=true]{5516755} & \\fontseries{b}\\selectfont .235 & 337 & 36880 & .569\\\\ \\cline{2-8}\n & K-m & \\fontseries{b}\\selectfont 10822 & \\num[round-precision=2,round-mode=figures,\n scientific-notation=true]{66329012} & .237 & \\fontseries{b}\\selectfont 346 & 35690 & .545\\\\ \\cline{2-8}\n & Spec & 8765 & \\num[round-precision=2,round-mode=figures,\n scientific-notation=true]{10066904} & .362 & 311 & 34047 & .592 \\\\ \\cline{2-8}\n & Bir & 10484 & \\num[round-precision=2,round-mode=figures,\n scientific-notation=true]{13395600} & .349 & 331 & 35637 & .57 \\\\ \\cline{1-8}\n \\multirow{4}{*}{\\textbf{AR}} \n & Agg & .703 & .035 & .266 & .534 & .029 & .319\\\\ \\cline{2-8}\n & K-m & .672 & .027 & \\fontseries{b}\\selectfont .248 & .529 & .022 & \\fontseries{b}\\selectfont .281\\\\ \\cline{2-8}\n & Spec & \\fontseries{b}\\selectfont .714 & .039 & .278 & \\fontseries{b}\\selectfont .638 & .049 & .349 \\\\ \\cline{2-8}\n & Bir & .342 & .014 & .35 & .589 & .047 & .368 \\\\ \\cline{1-8}\n \\multirow{4}{*}{\\textbf{DB}} \n & Agg & .026 & 0.0 & \\fontseries{b}\\selectfont .32 & .053 & .003 & 1.038\\\\ \\cline{2-8}\n & K-m & .54 & 0.0 & .322 & .063 & .003 & .925\\\\ \\cline{2-8}\n & Spec & .504 & 0.0 & .559 & .058 & .003 & 1.037\\\\ \\cline{2-8}\n & Bir & \\fontseries{b}\\selectfont .74 & 0.0 & 1.344 & \\fontseries{b}\\selectfont .065 & .003 & \\fontseries{b}\\selectfont .895\\\\ \\cline{1-8}\n \\hline\n \\end{tabular}\n \\caption{Evaluation of the robustness of the clustering algorithms by varying the hyper-parameters and the data source.}\n \\label{tab:clustering_robustness}\n \\vspace{-0.2 cm}\n\\end{table}\n\n\\subsubsection{Manifold Learning}\nManifold learning is adopted for the purpose of visualization.\nFor quantitative comparisons, we need to employ classification techniques on the features projected into the lower dimensions by these embeddings. We evaluate the performance of the manifold learning methods by testing them with the classifiers presented in the previous subsection.
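A minimal sketch of this quantitative test is given below: the data are first projected with one of the embeddings, and a classifier is then trained on the projected features; the embedding, classifier, and split are illustrative choices.\n\\begin{verbatim}\n# Minimal sketch: classify on the low-dimensional\n# embedding instead of the raw fused features.\nfrom sklearn.manifold import Isomap\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import f1_score\n\nX_emb = Isomap(n_neighbors=5,\n               n_components=2).fit_transform(X)\nX_tr, X_te, y_tr, y_te = train_test_split(X_emb, y,\n                                          test_size=0.3)\ny_hat = KNeighborsClassifier().fit(X_tr, y_tr).predict(X_te)\nprint(f1_score(y_te, y_hat))\n\\end{verbatim}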
Table~\\ref{tab:manifold_learning_comparison} presents the comparison of the LLE, MDS, spectral, t-SNE, and IsoMap \\cite{tenenbaum2000isomap} embeddings considered for classification using SVC, k-NN, DT, RF, GNB, BNB, and MLP. Inter-domain fusion does not gain much from manifold learning, but an interesting observation is the decrease in the difference of the F1-scores between the high-performing DT and RF classifiers and the low-performing SVC and k-NN classifiers. Hence, we conclude that it is inadvisable to perform manifold learning for our datasets if training using Decision Tree or Random Forest. The IsoMap embedding, which preserves the local features of the data by first determining a neighborhood graph and using MDS in its last stage, performs better than MDS for all the classifiers with the only exception of SVC.\n\n\\begin{table}[h]\n \\centering\n \\begin{tabular}{|l|l|l|l|l|l|l|l|}\n \\hline\n Clfr$\\rightarrow$ & SVC & k-NN & DT & RF & GNB & BNB & MLP\\\\\\hline\n Manifold $\\downarrow$ & \\multicolumn{7}{|c|}{F1 scores}\\\\\\hline\n LLE & .66 & .74 & .66 & .64 & .38 & .39 & .49\\\\ \\hline\n MDS & .65 & .78 & .77 & .80 & .54 & .48 & .55\\\\ \\hline\n Spectral & .61 & .75 & .73 & .75 & .61 & .62 & .54\\\\ \\hline\n t-SNE & .64 & .74 & .73 & .76 & .63 & .57 & .63\\\\ \\hline\n IsoMap & .65 & .78 & .77 & .79 & .54 & .48 & .55\\\\ \\hline\n \\end{tabular}\n \\caption{Comparison of the different manifold learning embeddings considered with different classifiers.}\n \\label{tab:manifold_learning_comparison}\n \\vspace{-0.2 cm}\n\\end{table}\n\n\\subsection{Semi-Supervised Learning}\n\n\\subsubsection{Co-Training}\nFor co-training, we first split the dataset randomly into labeled and unlabeled sets in the ratio of 1:2. In the real world, this randomness may be caused by an accidental cessation of the Snort application or by a network security expert being unable to make an inference of intrusion. Further, both the labeled and unlabeled data are split into cyber and physical views consisting of the respective features. In these experiments, we compare the supervised learning techniques on the labeled dataset with the co-training technique, which uses supervised learning based cyber and physical classifiers as shown in Fig.~\\ref{fig:cotraining}. One would expect a reduction in performance compared to the supervised learning techniques, due to the lack of labels for some samples, but it can be observed from Table~\\ref{tab:cotrain_comparison} that the co-training based classification outperforms the supervised one for some classifiers, such as LR, GNB, BNB, and MLP, and performs on par with the other classifiers, with a difference of a mere 8 percent in the case of RF. The probable reason for the improvement in performance using co-training is the training of two different classifiers using intra-domain features.
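The sketch below outlines the co-training loop under our two-view split; the classifier choice, the number of rounds, and the confidence threshold are illustrative assumptions (\\texttt{Xc}\/\\texttt{Xp} denote the cyber and physical views, and the suffixes \\texttt{\\_l} and \\texttt{\\_u} the labeled and unlabeled sets).\n\\begin{verbatim}\n# Minimal sketch of co-training: the cyber-view and\n# physical-view classifiers pseudo-label the most confident\n# unlabeled samples for each other. Labels assumed 0\/1.\nimport numpy as np\nfrom sklearn.ensemble import RandomForestClassifier\n\ndef co_train(Xc_l, Xp_l, y_l, Xc_u, Xp_u,\n             rounds=10, thr=0.9):\n    clf_c, clf_p = (RandomForestClassifier(),\n                    RandomForestClassifier())\n    for _ in range(rounds):\n        clf_c.fit(Xc_l, y_l); clf_p.fit(Xp_l, y_l)\n        pc = clf_c.predict_proba(Xc_u)\n        pp = clf_p.predict_proba(Xp_u)\n        conf = np.maximum(pc.max(1), pp.max(1))\n        pick = conf >= thr   # confident pseudo-labels\n        if not pick.any():\n            break\n        y_new = np.where(pc.max(1) >= pp.max(1),\n                         pc.argmax(1), pp.argmax(1))[pick]\n        Xc_l = np.vstack([Xc_l, Xc_u[pick]])\n        Xp_l = np.vstack([Xp_l, Xp_u[pick]])\n        y_l = np.concatenate([y_l, y_new])\n        Xc_u, Xp_u = Xc_u[~pick], Xp_u[~pick]\n    return clf_c, clf_p\n\\end{verbatim}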
\\begin{table}[h]\n \\centering\n \\begin{tabular}{|l|l|l|l|l|l|l|}\n \\hline\n Classifier & \\multicolumn{3}{|c|}{Supervised} & \\multicolumn{3}{|c|}{Co-Training}\\\\ \\hline\n & F1-score & Rec.& Prec.& F1-score & Rec.& Prec.\\\\ \\hline\n LR & .63 & .67 & .64 & .64 & .73 & .58\\\\ \\hline\n SVC & .63 & .67 & .64 & .59 & .70 & .52\\\\ \\hline\n DT & .69 & .71 & .69 & .64 & .71 & .65 \\\\ \\hline\n RF & .73 & .77 & .72 & .65 & .72 & .72\\\\ \\hline\n GNB & .28 & .33 & .66 & .30 & .32 & .56\\\\ \\hline\n BNB & .53 & .51 & .67 & .58 & .66 & .52\\\\ \\hline\n MLP & .59 & .71 & .51 & .61 & .71 & .55\\\\ \\hline\n \\end{tabular}\n \\caption{Comparison of the classifiers using supervised learning and co-training based semi-supervised learning.}\n \\label{tab:cotrain_comparison}\n \\vspace{-0.2 cm}\n\\end{table}\n\n\\section{Discussion}\\label{challenges}\nSupervised, semi-supervised, and unsupervised techniques all need to be explored, since each can be more appropriate in different scenarios. Other works fail to consider packet header details in creating the feature space. Our previous work \\cite{} included an evaluation of cyber-side features like retransmissions, round trip time (RTT), and flow counts; this motivates the need for the extraction of measurements from packet captures. Deploying the data to a datasource such as Elasticsearch makes data aggregation, filtering, and visualization convenient.\n\nA challenge with power system data fusion is the location of the fusion engine. If the fusion and inference engines are located outside the control network, the security of pulling out the data is an issue. In the real world, it is important to consider how the physical data will be made available to a fusion engine, the size of the database, and whether anonymization techniques may be applied to preserve privacy. If such challenges are overcome, multi-sensor fusion enables stakeholders to classify data to identify and localize mis-operations due to system faults as well as intruder-instigated failures. Knowledge extraction prior to fusion is currently not incorporated in this work, but it can easily be leveraged, since we perform a sequential merge of features.\n\nData fusion is a step performed prior to state estimation. Hence, in our current work we have used fusion to infer an attack and the time when the attack was performed. In our future work, we will explore whether such inference can improve cyber-physical state estimation.
Also in the current\nwork, we are using decimal to compress the information of the binary array, which is\nonly efficient in storage and neither interpretable nor comparable. Our next step is\ncoming up with a method to better map a variant-length of data (BI, BO, AI, AO) to a\nfixed-dimension space where the mapped data will have the following properties,\nwhich enable us to easily find anomaly event chronologically and spatially:\n\\begin{enumerate}\n \\item Reconstructed data will be the same if their original data are the same\n \\item Reconstructed data will be close if their original data are similar\n \\item Reconstructed data will be an outlier if the original data has values that is out of\nnormal range\n\\end{enumerate}\n\\section*{Acknowledgment}\nThis research is supported by the US Department of Energy's (DoE) Cybersecurity for Energy Delivery Systems program under award DE-OE0000895.\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\\label{sec:intro}\nIn Self-induced transparency (SIT), an optical pulse propagates resonantly through the two-level absorbing medium without any loss and distortion. This pioneering work was carried out by McCall and Hahn \\cite{McHa1, McHa2}. SIT originates from the generated coherence of a strongly coupled light-medium interaction. Therefore for observing SIT, the incident pulse should be short compared to the various relaxation times present in the system, such that the coherence will not vanish during the pulse propagation. Further, the pulse should also be strong enough to excite the atom from the ground state. \n One of the best theoretical estimations of the input pulse was reported in the ``area theorem\" \\cite{McHa2}. This theorem dictates that a 2$\\pi$ secant pulse can propagate through the medium without any loss and distortion in the pulse shape. In general, for an initial pulse area $\\theta_{0}$ obeying the condition $(n+1)\\pi >\\theta_{0}>n\\pi$, evolves the area towards $(n+1)\\pi$ or $n\\pi$ depending on whether $n$ is odd or even. Therefore input pulse with a larger area of $2n\\pi$, breaks up into $n$ number of $2\\pi$ pulses with different propagation velocities. These effects have been observed experimentally in atomic rubidium medium by Slusher and Gibbs \\cite{SluGib1}. In particular, they have found excellent agreement between numerical simulations and experimental results. These fundamental properties of the SIT were investigated several times, both theoretically and experimentally \\cite{Allen1, Lamb1, Eilbeck1}.\n\nHowever, in atomic medium, the preparation and trapping of atomic gas required a vast and sophisticated setup. Moreover, due to the gaseous nature of the medium, the different velocity of the atom shows Dopler broadening in output result.\nFor the last two decades, solid-state semiconductor mediums have emerged as a potential candidate for optical applications, particularly for scalable on-chip quantum technology. Earlier, the resonant coherent pulse propagation in bulk and quantum-well semiconductors behaves differently compared to a two-level atomic medium. The discrepancy mentioned above occurs due to the many-body Coulomb interaction of the different momentum states present in a bulk medium\\cite{SWKoch1, HG1, ASch1}. This problem has been overcome in three-dimensionally confined excitons in quantum dots (QD's). 
The quantum dots can easily be engineered to obtain the desired transition frequency, which avoids the problem of laser availability. Their scalability and fabrication technology make semiconductor QDs suitable for modern quantum optics experiments. There have been interesting theoretical proposals on the possibility of observing SIT in self-organized InGaAs QDs\\cite{Gpan1}. Excitonic transitions in InGaAs QDs have large transition dipole moments and a long dephasing time in the range of nanoseconds at cryogenic temperatures \\cite{PBor1}, and are therefore a promising candidate for SIT.\n\nThough the QD medium is a potential candidate for observing SIT, it also has a few drawbacks. Not all the QDs inside the medium are identical, so an inhomogeneous level broadening is always present in the system. In semiconductors, the interaction with longitudinal acoustic phonons is important at finite environment temperatures. Interactions between phonons and excitons lead to dephasing in the coupled dynamics of the exciton-photon interaction\\cite{DPHZ1,DPHZ2}. Several theoretical models and experiments have recently explained SIT in semiconductor QD media \\cite{QDSIT1, QDSIT2, QDSIT3}. A few of them consider the effect of the phonon environment on the system dynamics in the context of group velocity dispersion\\cite{Phonon1}. Other recent experimental works showed SIT mode-locking and the area theorem for a semiconductor QD medium and for rubidium atoms \\cite{MODELOCK1, MODELOCK2}.\n\nIn this paper, we discuss the possibility of SIT in a semiconductor QD medium, incorporating the effect of the phonon bath in our model. We utilize the recently developed polaron-transformed master equation, keeping all orders of the exciton-phonon interaction \\cite{POLARON1, POLARON2, POLARON3}. The pulse propagation dynamics in our model depend on both the system and the bath parameters; hence, the propagation dynamics become more transparent by knowing the contributions of both the system and the bath. The motivation behind this work is to achieve long-distance optical communication without loss in an array of QDs. Due to the strong confinement of electron-hole pairs, QDs have discrete energy levels; thus, QD arrays mimic an atomic medium with the added advantage of scalability and controllability with advanced semiconductor technology. It is also possible to create QD fibers which can be used as quantum communication channels \\cite{QDFIBER,QCOM}. Motivated by these works, we theoretically investigate the self-induced transparency effect in a semiconductor QD medium.\n\nOur paper is organized as follows. Sec.\\ \\ref{sec:intro} contains a brief introduction to SIT in a QD medium and its applications.\nIn Sec.\\ \\ref{sec:model}, we present our considered model system along with the theoretical formalism of the polaron master equation. In Sec.\\ \\ref{sec:result}, we discuss the results after numerically solving the relevant system equations. Finally, we draw a conclusion in Sec.\\ \\ref{sec:conclud}.\n\n\\section{MODEL SYSTEM}\n\\label{sec:model}\n\nThe phonon contribution to the QD dynamics at low temperature is essential. We assume the propagation of an optical pulse along the $z$-direction. Accordingly, we define the electric field of the incident optical pulse as\n\\begin{equation}\n\\vec{E}(z,t) = \\hat{e}\\mathcal{E}(z,t)e^{i(kz-\\omega t)} + c.c. ,\n\\end{equation}\nwhere $\\mathcal{E}(z,t)$ is the slowly varying envelope of the field. The bulk QD medium comprises multiple alternating InGaAs\/GaAs QD deposition layers.
Every QD inside the medium interacts strongly with the electric field due to its significant dipole moment. The QD can be modelled as a two-level system with exciton state $\\vert 1\\rangle$ and ground state $\\vert 2\\rangle$ with energy gap $\\hbar\\omega_{12}$ by a proper choice of the biexciton binding energy and polarisation, as shown in Fig.~\\ref{Fig.1}. The raising and lowering operators for the QD can be written as $\\sigma^{+} = \\vert 1\\rangle\\langle 2\\vert$ and $\\sigma^{-} = \\vert 2\\rangle\\langle 1\\vert$. In the case of semiconductor QDs, the optical properties get modified by the lattice modes of vibration, \\textit{i.e.}, the acoustic phonons. Hence, a model with the QD exciton transition coupled to an acoustic phonon bath mimics the desired interaction. The phonon bath consists of a large number of closely spaced harmonic oscillator modes. Therefore, we introduce the annihilation and creation operators associated with the $k^{th}$ phonon mode having frequency $\\omega_{k}$ as $b_{k}$ and $b_{k}^{\\dagger}$. The mode frequency can be expressed as $\\omega_{k} = c_{s}k$, where $k$ and $c_{s}$ are the wave vector and the velocity of sound. The Hamiltonian for the described model system, after making the dipole and rotating-wave approximations, is given by\n\\begin{figure}[t]\n \\includegraphics[scale=0.4]{fig_1}\n \\caption{A schematic diagram of the QD level system with ground state $\\vert 2 \\rangle$ and exciton state $\\vert 1 \\rangle$, driven by the optical pulse with effective coupling $\\langle B\\rangle\\Omega$ (blue line). The spontaneous decay from the exciton state to the ground state is shown using a curly red line. The parallel violet lines represent the phonon modes interacting with the exciton state. The red and blue dashed lines represent the phonon-induced decay and pumping rates, respectively.}\n \\label{Fig.1}\n\\end{figure}\n\\begin{equation}\n\\begin{split}\nH &= - \\hbar\\delta\\sigma^{+}\\sigma^{-} + \\frac{1}{2}\\hbar\\Bigl( \\Omega(z,t)\\sigma^{+} + \\Omega^{*}(z,t)\\sigma^{-} \\Bigr) \\\\\n &+\\hbar\\sum\\limits_k\\omega_kb_k^{\\dag}b_k + \\hbar\\sigma^{+}\\sigma^{-}\\sum\\limits_{k}\\lambda_{k}\\left(b_k+b_k^{\\dag}\\right),\n\\end{split}\n\\end{equation}\nwhere $\\lambda_{k}$ is the exciton-phonon coupling constant and $\\Omega(z,t) = - 2 \\vec{d}_{12}\\cdot\\hat{e}\\mathcal{E}(z,t)\/\\hbar$ is the Rabi frequency with transition dipole moment vector $\\vec{d}_{12}$. The detuning of the optical field from the QD transition is defined as $\\delta = \\omega - \\omega_{12}$. We notice that the Hamiltonian contains an infinite sum over the phonon modes. Keeping all orders of the exciton-phonon interaction, we make a transformation to the polaron frame. The transformation rule for the modified Hamiltonian is given by $H^{\\prime} = e^{P} H e^{-P}$, where the operator P = $\\sigma^{+}\\sigma^{-}\\sum_{k}\\lambda_{k}( b_{k}^{\\dagger} - b_{k})\/\\omega_{k}$. This transformation also helps us to separate the system Hamiltonian from the total Hamiltonian, which is our primary interest.
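The transformation acts on the QD operators through the standard displacement identity\n\\begin{equation}\ne^{P}\\sigma^{\\pm}e^{-P} = \\sigma^{\\pm}B_{\\pm},\n\\end{equation}\nso the drive term $\\Omega\\sigma^{+}+\\Omega^{*}\\sigma^{-}$ acquires the phonon displacement operators $B_{\\pm}$ defined below; splitting $B_{\\pm}$ into its thermal average $\\langle B\\rangle$ and the fluctuations around it produces, respectively, the renormalized coupling in the system part and the interaction part of the decomposition that follows.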
The transformed Hamiltonian is divided into system, bath, and interaction parts, which can be decomposed as $H^{\\prime}=H_s+H_b+H_{I}$, where\n\\begin{eqnarray}\nH_{s} &=& - \\hbar\\Delta\\sigma^{+}\\sigma^{-} + \\langle B\\rangle X_{g},\\\\\nH_b &=& \\hbar\\sum_k\\omega_k b_k^{\\dag}b_k,\\\\\n H_{I} &=& \\xi_gX_g+\\xi_uX_u,\n\\end{eqnarray}\n and $\\Delta$ is the redefined detuning incorporating the polaron shift $\\sum_k\\lambda_{k}^2\/\\omega_k$.\nThe phonon-modified system operators are defined by\n\\begin{eqnarray}\nX_{g} &=& \\frac{\\hbar}{2}\\left(\\Omega(z,t)\\sigma^{+} + \\Omega^{*}(z,t)\\sigma^{-} \\right),\\\\\nX_{u} &=& \\frac{i\\hbar}{2}\\left(\\Omega(z,t)\\sigma^{+} - \\Omega^{*}(z,t)\\sigma^{-} \\right).\n\\end{eqnarray}\nThe phonon bath fluctuation operators are\n\\begin{eqnarray}\n \\xi_{g} &=& \\frac{1}{2}\\left( B_{+} + B_{-} -2\\langle B\\rangle \\right),\\\\\n \\xi_{u} &=& \\frac{1}{2i}\\left( B_{+} - B_{-} \\right),\n \\end{eqnarray}\nwhere $B_{+}$ and $B_{-}$ are the coherent-state phonon displacement operators. Explicitly, the phonon displacement operators in terms of the phonon mode operators can be written as\n\\begin{equation}\nB_{\\pm} = \\exp\\left[{\\pm \\sum_{k} \\frac{\\lambda_{k}}{\\omega_{k}}\\left( b_{k}^{\\dagger} - b_{k}\\right)}\\right].\\nonumber \n\\end{equation}\nFrom this expression, it is clear that the exponential of the phonon operators takes care of all the higher-order phonon processes.\nTherefore, the phonon displacement operators averaged over all closely spaced phonon modes at a temperature $T$ obey the relation $\\langle B_{+}\\rangle =\\langle B_{-}\\rangle = \\langle B\\rangle$, where\n\\begin{equation}\n\\langle B\\rangle= \\text{exp}\\left[-\\frac{1}{2}\\int_0^{\\infty}d\\omega\\frac{J(\\omega)}{\\omega^2}\n\\coth\\left(\\frac{\\hbar\\omega}{2K_{B}T}\\right)\\right],\n\\end{equation}\nand $K_{B}$ is the Boltzmann constant.\nThe phonon spectral density function $J(\\omega)= \\alpha_{p}\\omega^3\\exp[-\\omega^2\/2\\omega_b^2]$ describes longitudinal acoustic (LA) phonon coupling via a deformation potential \\cite{SPECT} for the QD system, where the parameters $\\alpha_p$ and $\\omega_b$ are the electron-phonon coupling strength and the cutoff frequency, respectively.
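As a cross-check of the numbers used later in Sec.~\\ref{sec:result}, the following minimal sketch evaluates $\\langle B\\rangle$ numerically for this spectral density; the integration grid and cutoff are illustrative, and units follow the convention $\\hbar = 0.6582$ meV ps.\n\\begin{verbatim}\n# Minimal sketch: <B> for J(w) = a_p w^3 exp(-w^2\/2w_b^2).\nimport numpy as np\n\nhbar, kB = 0.6582, 0.0861733          # meV ps, meV\/K\na_p, w_b, T = 0.03, 1.0 \/ hbar, 4.2   # ps^2, rad\/ps, K\n\nw = np.linspace(1e-6, 20 * w_b, 40000)\nJ = a_p * w**3 * np.exp(-w**2 \/ (2 * w_b**2))\ncoth = 1.0 \/ np.tanh(hbar * w \/ (2 * kB * T))\nB = np.exp(-0.5 * np.trapz(J \/ w**2 * coth, w))\nprint(B)          # close to 0.95 at T = 4.2 K, as used below\n\n# polaron-ME validity check for the peak Rabi frequency\n# (0.2 meV) used in the numerical section\nOmega0 = 0.2 \/ hbar\nprint((Omega0 \/ w_b)**2 * (1 - B**4))   # << 1\n\\end{verbatim}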
Next, we use the master equation (ME) approach to solve the dynamics of the polaron-transformed system Hamiltonian by treating the phonon bath as a perturbation. The Born-Markov approximation can be performed with respect to the polaron-transformed perturbation in the case of nonlinear excitation. Hence, the density matrix equation for the reduced system under the Born-Markov approximation can be written as\n\\begin{equation}\n\\dot{\\rho} = \\frac{1}{i\\hbar}[H_s,\\rho]+{\\cal L}_{ph}\\rho\n+\\frac{\\gamma}{2}{\\cal L}[\\sigma^{-}]\\rho\n +\\frac{\\gamma_d}{2}{\\cal L}[\\sigma^{+}\\sigma^{-}]\\rho\\label{meq},\n\\end{equation}\nwhere $\\gamma$ is the spontaneous decay rate of the exciton state.\nIn addition, we incorporate the pure-dephasing process phenomenologically in the ME with a decay rate $\\gamma_d$. This additional dephasing term explains the broadening of the zero-phonon line (ZPL) in QDs with increasing temperature \\cite{ZPL1,ZPL2}. The Lindblad superoperator $\\cal L$ is expressed as ${\\cal L}[{\\cal O}]\\rho = 2 {\\cal O} \\rho {\\cal O}^{\\dagger} - {\\cal O}^{\\dagger}{\\cal O} \\rho - \\rho {\\cal O}^{\\dagger}{\\cal O}$ for an operator $\\cal O$. The term ${\\cal L}_{ph}$ represents the effect of the phonon bath on the system dynamics. Its explicit form in terms of the previously defined system operators can be expressed as\n\\begin{eqnarray}\n{\\cal L}_{ph}\\rho &=& -\\frac{1}{\\hbar^2}\\int_0^{\\infty}d\\tau\\sum_{j=g,u}G_j(\\tau)[X_j(z,t),X_j(z,t,\\tau)\\rho(t)]~\\nonumber\\\\\n &+& H.c.,\n\\end{eqnarray}\nwhere $X_j(z,t,\\tau)=e^{-iH_s\\tau\/\\hbar}X_j(z,t)e^{iH_s\\tau\/\\hbar}$, and the polaron Green's functions are $G_g(\\tau)=\\langle B\\rangle^2\\{\\cosh\\left[\\phi(\\tau)\\right]-1\\}$ and $G_u(\\tau)=\\langle B\\rangle^2\\sinh[\\phi(\\tau)]$. \nThe phonon Green's functions depend on the phonon correlation function given below:\n\\begin{equation}\n\\phi(\\tau)=\\int_0^{\\infty}d\\omega\\frac{J(\\omega)}{\\omega^2}\n\\left[\\coth\\left(\\frac{\\hbar\\omega}{2K_{B}T}\\right)\\cos(\\omega\\tau)-i\\sin(\\omega\\tau)\\right].\\label{phi}\n\\end{equation}\nThe polaron ME formalism is not generally valid for arbitrary excitation strengths and exciton-phonon couplings. The validity condition of the polaron ME is stated as \\cite{POLARON1}\n\\begin{equation}\n \\left( \\frac{\\Omega}{\\omega_{b}} \\right)^{2} \\left(1 - \\langle B\\rangle^{4}\\right) \\ll 1 .\\label{valid}\n\\end{equation}\nIt is clear from the above equation that, at low temperatures, $\\langle B\\rangle \\approx 1$ and $\\Omega\/\\omega_{b} < 1$ fulfil this criterion. Hence, we restrict our calculations to the weak-field regime satisfying $\\Omega\/\\omega_{b} < 1$ at a low phonon bath temperature.\\\\\n\nThe full polaron ME (\\ref{meq}) contains multiple commutator brackets and complex operator exponents, which require an involved numerical treatment for studying the time dynamics. We make some simplifications of the full ME by using various useful identities. These reduce the ME to a simple analytical form with decay rates corresponding to the various phonon-induced processes. Though we have not made any approximation, the simplified ME scales down the numerical computation effort and gives better insight into the physical processes.
By expanding all the commutators in Eq.(\\ref{meq}) and rearranging using fermion operator identities, we get the simplified ME as\n\\begin{equation}\n\\begin{split}\n\\dot{\\rho} &= \\frac{1}{i\\hbar}[H_s,\\rho]+\\frac{\\gamma}{2}{\\cal L}[\\sigma^{-}]\\rho +\\frac{\\gamma_d}{2}{\\cal L}[\\sigma^{+}\\sigma^{-}]\\rho +\\frac{\\Gamma^{\\sigma^{+}}}{2}{\\cal L}[\\sigma^{+}]\\rho \\\\\n&+\\frac{\\Gamma^{\\sigma^{-}}}{2}{\\cal L}[\\sigma^{-}]\\rho -\\Gamma^{cd}(\\sigma^{+}\\rho\\sigma^{+} + \\sigma^{-}\\rho\\sigma^{-})\\\\\n&- i\\Gamma^{sd}(\\sigma^{+}\\rho\\sigma^{+} - \\sigma^{-}\\rho\\sigma^{-}) + i\\Delta^{\\sigma^{+}\\sigma^{-}}[\\sigma^{+}\\sigma^{-},\\rho]\\\\\n&-[i\\Gamma^{gu+}(\\sigma^{+}\\sigma^{-}\\rho\\sigma^{+} + \\sigma^{-}\\rho - \\sigma^{+}\\sigma^{-}\\rho\\sigma^{-})+H.c.]\\\\\n&-[\\Gamma^{gu-}(\\sigma^{+}\\sigma^{-}\\rho\\sigma^{+} - \\sigma^{-}\\rho + \\sigma^{+}\\sigma^{-}\\rho\\sigma^{-})+H.c.]\\label{smeq}.\n\\end{split}\n\\end{equation}\nThe phonon-induced decay rates are given by
\\begin{align}\n\\Gamma^{\\sigma^{+} \/ \\sigma^{-}} &= \\frac{\\Omega_R(z,t)^2}{2}\\int_{0}^{\\infty}\\Bigg(\\operatorname{Re}\\bigg\\{(\\cosh(\\phi(\\tau))-1)f(z,t,\\tau) \\nonumber\\\\\n&+ \\sinh(\\phi(\\tau))\\cos(\\eta(z,t)\\tau)\\bigg\\} \\nonumber\\\\\n&\\mp \\operatorname{Im}\\left\\{(e^{\\phi(\\tau)}-1)\\frac{\\Delta\\sin(\\eta(z,t)\\tau)}{\\eta(z,t)}\\right\\}\\Bigg)\\,d\\tau, \\label{phdrate1}\\\\\n\\Gamma^\\mathrm{cd} &= \\frac{1}{2}\\int_{0}^{\\infty} \\operatorname{Re}\\bigg\\{\\Omega_{S}(z,t)\\sinh(\\phi(\\tau))\\cos(\\eta(z,t)\\tau) \\nonumber\\\\\n& - \\Omega_{S}(z,t)(\\cosh(\\phi(\\tau))-1) f(z,t,\\tau) \\nonumber\\\\\n& + \\Omega_{T}(z,t)(e^{-\\phi(\\tau)}-1) \\frac{\\Delta\\sin(\\eta(z,t)\\tau)}{\\eta(z,t)}\\bigg\\} d\\tau, \\label{gmcd}\\\\\n\\Gamma^\\mathrm{sd} &= \\frac{1}{2}\\int_{0}^{\\infty} \\operatorname{Re}\\bigg\\{\\Omega_{T}(z,t)\\sinh(\\phi(\\tau))\\cos(\\eta(z,t)\\tau)\\nonumber \\\\\n& - \\Omega_{T}(z,t)(\\cosh(\\phi(\\tau))-1) f(z,t,\\tau) \\nonumber\\\\\n& - \\Omega_{S}(z,t)(e^{-\\phi(\\tau)} - 1) \\frac{\\Delta\\sin(\\eta(z,t)\\tau)}{\\eta(z,t)}\\bigg\\} d\\tau,\n\\end{align}\n\\begin{align}\n\\Delta^{\\sigma^{+}\\sigma^{-}} &= \\frac{\\Omega_R(z,t)^2}{2}\\int_{0}^{\\infty} \\operatorname{Re}\\bigg\\{(e^{\\phi(\\tau)} - 1) \\frac{\\Delta\\sin(\\eta(z,t)\\tau)}{\\eta(z,t)}\\bigg\\} d\\tau,\\label{delpm}\\\\\n\\Gamma^{\\mathrm{gu+}} &= \\frac{\\Omega_R(z,t)^2}{2}\\int_{0}^{\\infty}\\bigg\\{(\\cosh(\\phi(\\tau))-1)\\operatorname{Im}[\\langle B\\rangle\\Omega]h(z,t,\\tau) \\nonumber\\\\\n&+ \\sinh(\\phi(\\tau))\\frac{\\operatorname{Re}[\\langle B\\rangle\\Omega]\\sin(\\eta(z,t)\\tau)}{\\eta(z,t)}\\bigg\\}\\,d\\tau, \\\\\n\\Gamma^{\\mathrm{gu-}} &= \\frac{\\Omega_R(z,t)^2}{2}\\int_{0}^{\\infty}\\bigg\\{(\\cosh(\\phi(\\tau))-1)\\operatorname{Re}[\\langle B\\rangle\\Omega]h(z,t,\\tau) \\nonumber\\\\\n&- \\sinh(\\phi(\\tau))\\frac{\\operatorname{Im}[\\langle B\\rangle\\Omega]\\sin(\\eta(z,t)\\tau)}{\\eta(z,t)}\\bigg\\}\\,d\\tau, \\label{phdrate2}\n\\end{align}\n\n\\noindent where $f(z,t,\\tau) = (\\Delta^2\\cos(\\eta(z,t)\\tau)+\\Omega_R(z,t)^2)\/\\eta(z,t)^2$, $h(z,t,\\tau) = \\Delta(1 - \\cos(\\eta(z,t)\\tau))\/\\eta^{2}(z,t)$, and $\\eta(z,t) = \\sqrt{\\Omega_R(z,t)^2 + \\Delta^2}$, with the polaron-shifted Rabi frequency $\\Omega_R(z,t) = \\langle B\\rangle\\vert\\Omega(z,t)\\vert$, $\\Omega_{S}(z,t)=\\operatorname{Re}[\\langle B\\rangle\\Omega(z,t)]^{2}-\\operatorname{Im}[\\langle B\\rangle\\Omega(z,t)]^{2}$, and $\\Omega_{T}(z,t)= 2\\operatorname{Re}[\\langle B\\rangle\\Omega(z,t)]\\operatorname{Im}[\\langle B\\rangle\\Omega(z,t)]$.\n\nNext, we use the Maxwell wave equation to describe the propagation dynamics of the electromagnetic field inside the QD medium,\n\\begin{equation}\n\\bigg(\\nabla^{2} - \\frac{1}{c^{2}}\\frac{\\partial^{2}}{\\partial t^{2}}\\bigg)\\vec{E}(z,t) = \\mu_{0} \\frac{\\partial^{2}}{\\partial t^{2}} \\vec{P}(z,t)\\label{wave_eq}\n\\end{equation}\nwhere $\\mu_{0}$ is the permeability of free space. The induced polarisation $\\vec{P}(z,t)$ originates from the alignment of the medium dipoles in the presence of the applied field. The induced macroscopic polarisation can be written in terms of the density matrix element as\n\\begin{equation}\n\\vec{P}(z,t) = N \\int_{-\\infty}^{\\infty}\\left(\\vec{d}_{12}\\rho_{12}(\\Delta,z,t)e^{i(kz-\\omega t)} + c.c.\\right) g(\\Delta)d\\Delta,\n\\end{equation}\nwhere $N$ is the QD volume number density and $\\rho_{12}$ is the induced coherence. The inhomogeneous level broadening function in the frequency domain is denoted by $g(\\Delta)$.\nIn our calculation, the form of $g(\\Delta)$ is\n\\begin{equation}\ng(\\Delta) = \\frac{1}{\\sigma\\sqrt{2\\pi}} e^{-\\frac{(\\Delta -\\Delta_{c})^{2}}{2\\sigma^{2}}},\n\\end{equation}\nwhere the standard deviation is $\\sigma$. The detuning between the applied field and the QDs' central frequency is represented by $\\Delta_{c}$.
By applying the slowly varying envelope approximation, one can cast the inhomogeneous second-order partial differential equation (\\ref{wave_eq}) into the first-order equation\n\\begin{equation}\n\\bigg(\\frac{\\partial}{\\partial z} + \\frac{1}{c}\\frac{\\partial}{\\partial t} \\bigg) \\Omega(z,t) = i\\hspace{1pt}\\eta \\int_{-\\infty}^{\\infty}\\rho_{12}(\\Delta,z,t)g(\\Delta)d\\Delta, \\label{prop_eq}\n\\end{equation}\nwhere the coupling constant $\\eta$ (not to be confused with $\\eta(z,t)$ defined earlier) is defined by\n\\begin{equation}\n\\eta = - 3N\\lambda^{2}\\gamma\/4\\pi\n\\end{equation}\nand $\\lambda$ is the carrier wavelength of the QD transition. The self-consistent solution of Eqs.~(\\ref{smeq}) and (\\ref{prop_eq}) with proper initial conditions displays the spatiotemporal evolution of the field inside the medium. Since the analytical solution of the coupled partial differential equations is known only under some special conditions, we adopt a numerical integration of Eqs.~(\\ref{smeq}) and (\\ref{prop_eq}) to depict the results. For the numerical computation, a useful frame transformation $\\tau = t - z\/c$ and $\\zeta = z$ is employed, which removes the explicit time variable from Eq.~(\\ref{prop_eq}), so that it depends only on the single variable $\\zeta$.\n\n\\section{NUMERICAL RESULT}\n\\label{sec:result}\n\n\\subsection{Phonon-induced scattering rates}\n\nFirst, we discuss the various decay rates for the QD system with experimentally available parameters \\cite{PARA1,PARA2}.\nThe medium comprises InGaAs\/GaAs QDs with volume density $N = 5\\times 10^{20}\\ \\text{m}^{-3}$ and a length of 1\\ mm. The central QD excitation energy is $\\hbar \\omega_{12}$ = 1.3 eV, with a Gaussian spectral distribution having a FWHM of 23.5 meV. The QD is driven by an optical pulse at $\\zeta$ = 0 with a hyperbolic secant profile,\n\\begin{equation}\n\\vert\\Omega(0,\\tau)\\vert = \\Omega_{0}\\hspace{1pt}\\text{sech}\\left(\\frac{\\tau-\\tau_{c}}{\\tau_{0}}\\right)\\label{rabi}\n\\end{equation}\nwhere $\\tau_{0}$ and $\\tau_{c}$ define the width and the center of the pulse, respectively. For the numerical computation, the amplitude and width of the pulse are taken to be $\\Omega_{0}$ = 0.2 meV and $\\tau_{0}$ = 6.373 ps.\nThe phonon bath temperature T = 4.2 K gives $\\langle B \\rangle = 0.95$. The other parameters are $\\alpha_{p} = 0.03\\ \\text{ps}^{2}$ and $\\omega_{b} = 1\\ \\text{meV}$. The system under consideration has relaxation rates $\\gamma =\\gamma_{d} = 2\\ \\mu\\text{eV}$ (2 ns). In order to normalize all the system parameters to dimensionless quantities, we have chosen the normalization frequency to be $\\gamma_{n}$ = 1 rad\/ps.\n\nIn Fig.~(\\ref{Fig.2}), the color bar represents the variation of the various phonon-induced scattering rates as functions of detuning and time, both in normalised units along the $x$- and $y$-axis, respectively. In the QD system, various phonon processes are connected with the exciton transitions: in the ground state to exciton transition, phonon absorption occurs, while in the opposite process, phonon emission occurs.
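The double integrals behind Fig.~\\ref{Fig.2} can be evaluated directly; the following minimal sketch computes $\\phi(\\tau)$ of Eq.~(\\ref{phi}) and the rates $\\Gamma^{\\sigma^{\\pm}}$ of Eq.~(\\ref{phdrate1}) for a given $(\\Omega_R,\\Delta)$, with illustrative grids and cutoffs (units as in the $\\langle B\\rangle$ sketch of Sec.~\\ref{sec:model}).\n\\begin{verbatim}\n# Minimal sketch: phi(tau) and the phonon-induced rates.\nimport numpy as np\n\nhbar, kB, T = 0.6582, 0.0861733, 4.2\na_p, w_b = 0.03, 1.0 \/ hbar\nw = np.linspace(1e-6, 20 * w_b, 20000)\nJ = a_p * w**3 * np.exp(-w**2 \/ (2 * w_b**2))\ncoth = 1.0 \/ np.tanh(hbar * w \/ (2 * kB * T))\n\ndef phi(tau):\n    return np.trapz(J \/ w**2 * (coth * np.cos(w * tau)\n                    - 1j * np.sin(w * tau)), w)\n\ndef gamma_pm(Omega_R, Delta, t_max=20.0, n=4000):\n    # returns (Gamma^{sigma+}, Gamma^{sigma-})\n    tau = np.linspace(0.0, t_max, n)\n    ph = np.array([phi(t) for t in tau])\n    eta = np.sqrt(Omega_R**2 + Delta**2)\n    f = (Delta**2 * np.cos(eta * tau) + Omega_R**2) \/ eta**2\n    even = np.real((np.cosh(ph) - 1) * f\n                   + np.sinh(ph) * np.cos(eta * tau))\n    odd = np.imag((np.exp(ph) - 1)\n                  * Delta * np.sin(eta * tau) \/ eta)\n    pref = 0.5 * Omega_R**2\n    return (pref * np.trapz(even - odd, tau),\n            pref * np.trapz(even + odd, tau))\n\\end{verbatim}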
\n\n\\section{NUMERICAL RESULTS}\n\\label{sec:result}\n\n\\subsection{Phonon-induced scattering rates}\n\nFirst, we discuss the various decay rates for the QD system with experimentally available parameter regimes \\cite{PARA1,PARA2}.\nThe medium comprises InGaAs\/GaAs QDs with volume density $N = 5\\times 10^{20} \\text{m}^{-3}$ and a length of 1\\ mm. The central QD excitation energy is $\\hbar \\omega_{12}$ = 1.3 eV with a Gaussian spectral distribution having a FWHM of 23.5 meV. The QD is driven by the optical pulse at $\\zeta$ = 0 with a hyperbolic secant profile\n\\begin{equation}\n\\vert\\Omega(0,\\tau)\\vert = \\Omega_{0}\\hspace{1pt}\\text{sech}\\left(\\frac{\\tau-\\tau_{c}}{\\tau_{0}}\\right)\\label{rabi}\n\\end{equation}\nwhere $\\tau_{0}$ and $\\tau_{c}$ define the width and center of the pulse, respectively. For numerical computation, the amplitude and width of the pulse are taken to be $\\Omega_{0}$ = 0.2 meV and $\\tau_{0}$ = 6.373 ps.\nThe phonon bath temperature T = 4.2 K gives $\\langle B \\rangle = 0.95 $. Other parameters are $\\alpha_{p} = 0.03\\ \\text{ps}^{2}$, $\\omega_{b} = 1\\ \\text{meV}$. The system under consideration has a relaxation rate $\\gamma =\\gamma_{d} = 2\\ \\mu\\text{eV}$ (2 ns). In order to normalize all the system parameters to dimensionless quantities, we choose the normalization frequency to be $\\gamma_{n}$ = 1 rad\/ps.\n\nIn Fig.(\\ref{Fig.2}), the color bar represents the variation of the various phonon-induced scattering rates as a function of detuning and time, both in normalized units along the $x$- and $y$-axes, respectively. In the QD system, various phonon processes are connected with exciton transitions. In the case of the ground-state-to-exciton transition, phonon absorption occurs, while in the opposite process, phonon emission occurs.\n\\begin{figure}[h]\n \\includegraphics[scale=0.4]{fig_2} \n \\caption{The variation of the phonon-induced scattering rates with detuning and time of a QD at $\\zeta$ = 0 for the applied secant pulse in Eq.(\\ref{rabi}). a) Phonon-induced pumping rate $\\Gamma^{\\sigma^{+}}$ [Eq.(\\ref{phdrate1})], b) phonon-induced decay rate $\\Gamma^{\\sigma^{-}}$ [Eq.(\\ref{phdrate1})], c) phonon-induced dephasing $\\Gamma^\\mathrm{cd}$ [Eq.(\\ref{gmcd})], d) phonon-induced detuning $\\Delta^{\\sigma^{+}\\sigma^{-}}$ [Eq.(\\ref{delpm})], for peak Rabi frequency $\\Omega_{0}$ = 0.2 meV, pulse width $\\tau_{0}$ = 6.373 ps, and pulse center $\\gamma_{n}\\tau_{c} = 40$. The phonon bath temperature T = 4.2 K corresponds to $\\langle B \\rangle = 0.95 $ with spectral density function parameters $\\alpha_{p} = 0.03\\ \\text{ps}^{2}$, $\\omega_{b} = 1\\ \\text{meV}$.}\n \\label{Fig.2}\n\\end{figure}\nNow we discuss the physical processes associated with the phonon scattering rates $\\Gamma^{\\sigma^{+}}$ and $\\Gamma^{\\sigma^{-}}$. For positive detuning, the applied field frequency is larger than the QD transition frequency. Subsequently, a phonon of frequency $\\Delta$ is emitted in order to enable a resonant QD transition. These emitted phonons develop an incoherent excitation in the system, represented by $\\Gamma^{\\sigma^{+}}$. Oppositely, for negative detuning, the applied field frequency is smaller than the QD transition frequency, and a resonant QD transition is possible only when a phonon of frequency $\\Delta$ is absorbed from the bath. Through this mechanism, the QD exciton-to-ground-state decay enhances the radiation, which is represented by $\\Gamma^{\\sigma^{-}}$. This low-temperature asymmetry is clearly visible in Fig.$\\ref{Fig.2}$(a) and $\\ref{Fig.2}$(b). At higher temperatures, this asymmetry gets destroyed, and both rates overlap and are centered at $\\Delta$ = 0. Fig.$\\ref{Fig.2}$(c) shows the variation of $\\Gamma^{cd}$, which is only present in the off-diagonal density matrix element and is responsible for the additional dephasing in the system dynamics. The additional detuning $\\Delta^{\\sigma^{+}\\sigma^{-}}$ from the simplified master equation, plotted in Fig.$\\ref{Fig.2}$(d), shows a very tiny value compared to the system detuning $\\Delta$. We also notice that the sign of $\\Delta^{\\sigma^{+}\\sigma^{-}}$ changes according to the system detuning $\\Delta$. It is important to keep in mind that we display the variation along the $y$-axis around $\\gamma_{n}\\tau$ = 40, which is the centre of the pulse with the secant profile.\n\n\\subsection{Pulse area theorem}\nIt is well known from Beer's law that a weak pulse gets absorbed inside the medium due to the presence of opacity at the resonance condition. However, McCall and Hahn showed that some specific envelope pulse shapes remain intact over long distances without absorption, even at resonance\\cite{McHa1, McHa2}. Inspired by this phenomenon, we consider a time-varying pulse whose envelope shape is stated in Eq.(\\ref{rabi}). The area $\\Theta(z)$ enclosed by its hyperbolic envelope shape is defined as\n\\begin{equation}\n\\Theta(z) = \\int_{-\\infty}^{+\\infty}\\Omega(z,t^{'})dt^{'}.\n\\end{equation}\nTo see the spatial variation of the pulse area, we integrate over time and detuning in Eq.($\\ref{prop_eq}$). \nThe evolution of the pulse area $\\Theta(z)$ during its propagation in a two-level absorbing QD medium is given by\n\\begin{equation}\n\\frac{d\\Theta(z)}{dz} = -\\frac{\\alpha}{2} \\sin\\Theta(z),\\label{paeq}\n\\end{equation}\nwhere $\\alpha$ is the optical extinction per unit length. The optical extinction depends on the various system parameters as $\\alpha = 2\\pi\\eta g(0)$. 
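\n\nBefore quoting the closed-form solution, the fixed-point structure of Eq.(\\ref{paeq}) can be checked numerically. The following sketch (ours; the extinction $\\alpha$, step size, and initial areas are illustrative values only) integrates the area equation with a forward-Euler step and shows initial areas flowing to the nearest even multiple of $\\pi$:\n\\begin{verbatim}\nimport numpy as np\n\ndef evolve_area(theta0, alpha=10.0, dz=1e-3, nz=5000):\n    theta = theta0\n    for _ in range(nz):\n        theta += -0.5 * alpha * np.sin(theta) * dz  # Euler step\n    return theta\n\nfor t0 in (0.9 * np.pi, 1.1 * np.pi, 2.5 * np.pi):\n    # areas below pi decay to 0; areas between pi and 3pi reach 2pi\n    print(t0 \/ np.pi, '->', evolve_area(t0) \/ np.pi)\n\\end{verbatim}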
The solution of Eq.(\\ref{paeq}) is\n\\begin{equation}\n \\tan \\frac{\\Theta(z)}{2} = \\tan \\frac{\\Theta(0)}{2} e^{-\\alpha z\/2},\\label{pasol}\n\\end{equation}\n where $\\Theta(0)$ is the pulse area at $z$ = 0. From the above expression, we find that $\\Theta(z) = 2n\\pi$ are the stable solutions, whereas $\\Theta(z) = (2n+1)\\pi$ are the unstable ones. The pulse area of the envelope stated in Eq.(\\ref{rabi}) is $\\Theta(0) = \\pi\\Omega_{0}\\tau_{0}$. Thus, the envelope with amplitude $\\Omega_{0} = 2\/\\tau_{0}$ gives a 2$\\pi$ area pulse. This envelope shape remains preserved over long propagation distances even though it interacts resonantly with the medium.\n\\begin{figure}[h]\n \\includegraphics[scale=0.4]{fig_3}\n \\caption{Evolution of the pulse area ($\\Theta$) as a function of propagation distance $\\zeta$, starting with a $2\\pi$ sech-type pulse, for different temperatures. The applied pulse has a width of $\\tau_{0}$ = 6.373 ps and is centered at $\\gamma_{n}\\tau_{c} = 40$. The system under consideration is shown without a phonon bath (black) and with a phonon bath maintained at temperature T = 4.2 K (red), 10 K (blue), and 20 K (green), with electron-phonon coupling $\\alpha_{p} = 0.03\\ \\text{ps}^{2}$ and cut-off frequency $\\omega_{b} = 1\\ \\text{meV}$. The central QD detuning is $\\Delta_{c}$ = 0, with spontaneous decay and pure dephasing rates $\\gamma =\\gamma_{d} = 2\\ \\mu\\text{eV}$ (2 ns). The optical extinction per unit length is $\\alpha$ = 10 mm$^{-1}$. The inset shows the stability of pulse areas higher than $2\\pi$ for different phonon bath temperatures.}\n \\label{Fig.3}\n\\end{figure}\nFig.(\\ref{Fig.3}) exhibits the variation of the pulse area with the propagation distance inside the QD medium. It is evident from this figure that the propagation of a 2$\\pi$ area pulse through the medium of length $L$ incurs negligible loss in pulse area. In the absence of the phonon interaction (black line), the system behaves identically to the atomic system and hence follows $\\Theta \\approx 2\\pi(1 - \\tau_{0}\/T_{2}^{'})$, reported earlier by McCall and Hahn \\cite{McHa2}. The loss in pulse area comes from the finite lifetime $T_{2}^{'}$ of the QD, which is inversely proportional to $\\gamma_{d}$. Ideally, the pulse would retain its initial area for an arbitrary distance in the absence of decay and decoherence. However, in the presence of the phonon contribution, we notice that the pulse area gets enhanced by a small amount. The rise in the pulse area depends linearly on the bath temperature, as indicated in Fig.($\\ref{Fig.3}$). This effect can be explained by carefully examining the definition of the effective Rabi frequency $\\Omega_R(z,t) = \\langle B\\rangle\\vert\\Omega(z,t)\\vert$, where $\\langle B\\rangle$ depends on the bath temperature. The inset of Fig.(\\ref{Fig.3}) illustrates the convergence of the pulse area, shifted from the $2\\pi$ value, at different temperatures.\n\\begin{figure}[h]\n \\includegraphics[scale=0.4]{fig_4}\n \\caption{The real (black) and imaginary (red) parts of the coherence $\\rho_{12}$ of a single QD at different times, as a function of detuning, for a $2\\pi$ sech-type short pulse with pulse center at $\\gamma_{n}\\tau_{c}$ = 40. The pulse has a width $\\tau_{0}$ = 6.373 ps. The corresponding phonon bath parameters are T = 4.2 K, $\\alpha_{p} = 0.03\\ \\text{ps}^{2}$, $\\omega_{b} = 1\\ \\text{meV}$. The considered QD relaxation rates are $\\gamma =\\gamma_{d} = 2\\ \\mu\\text{eV}$ (2 ns). 
}\n \\label{Fig.4}\n\\end{figure}\nTo explain the behavior of Fig.(\\ref{Fig.3}), we study the absorption and dispersion properties of the medium as a function of detuning at various time intervals of the pulse. Fig.(\\ref{Fig.4}) delineates the physical process behind the dispersion and absorption. We assume that all the population is in the ground state before the leading edge of the pulse reaches the medium. The peak of the incident pulse enters the medium at $\\gamma_{n}\\tau_{c}$ = 40. It is clear from Fig.\\ref{Fig.4}(a) that most of the leading-edge pulse energy gets absorbed by the ground-state population, and the population goes to the excited state. Hence the medium shows maximum absorption at $\\gamma_{n}\\tau$ = 30, elucidating the absorption phenomenon at resonance. Simultaneously, the nature of the dispersion curve is anomalous, as previously reported \\cite{BOYD}. The fast velocity that accompanies anomalous dispersion is completely suppressed by the huge absorption at the resonance condition. The medium becomes saturated as the centre of the pulse enters the medium; consequently, the medium turns less absorbent to the pulse. Nonetheless, a tiny absorption peak still exists at the resonance condition due to the various decay processes of the medium, as indicated by Fig.\\ref{Fig.4}(b). Therefore, the excited state gets populated during the passage of the leading-edge pulse. This population can leave the excited state and return to the ground state by stimulated emission in the presence of the trailing edge of the pulse. As a result, gain can be experienced by the incident pulse at $\\gamma_{n}\\tau$ = 50, as revealed in Fig.\\ref{Fig.4}(c). From these three panels, we can conclude that the leading edge of the pulse gets absorbed by the medium, while the trailing edge of the pulse experiences gain. Towards the trailing end of the pulse, the dispersive nature of the medium changes from anomalous to normal, as shown in Fig.\\ref{Fig.4}(d). The positive slope of the dispersion curve leads to a slow group velocity, starting at $\\gamma_{n}\\tau$ = 60, as shown in Fig.\\ref{Fig.4}(d). Fig.\\ref{Fig.4}(d) to Fig.\\ref{Fig.4}(f) indicate that the optical pulse regeneration process is completed by the medium-assisted gain; hence, the pulse shape remains preserved. This explains the mechanism underpinning SIT.\n\\begin{figure}[h]\n \\includegraphics[scale=0.4]{fig_5}\n \\caption{The variation of the excited state population with the input pulse area at the resonance condition $\\Delta_{c}$ = 0. The system and bath parameters are $\\tau_{0}$ = 6.373 ps, $\\gamma_{n}\\tau_{c}$ = 40, T = 4.2 K, $\\alpha_{p} = 0.03\\ \\text{ps}^{2}$, $\\omega_{b} = 1\\ \\text{meV}$, $\\gamma =\\gamma_{d} = 2\\ \\mu\\text{eV}$ (2 ns).}\n \\label{Fig.5}\n\\end{figure}\nThe claim of the above physical mechanism can be supported by studying the population dynamics of the excited state. For this purpose, we have plotted the excited state population as a function of the pulse area in Fig.(\\ref{Fig.5}). A noticeable population redistribution among the levels is feasible within a few widths of the incident pulse, wherein the intensity is appreciable. As soon as the pulse intensity diminishes at the trailing end, spontaneous emission takes care of the depletion of the excited state population. This leads to a vanishing population in the excited state after a sufficiently long time from the pulse centre. As a consequence, it is crucial to decide the observation time of the QD population. 
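\n\nAs a reference point for the discussion below, note that in the idealized lossless, resonant two-level limit the final excited-state population depends only on the pulse area, following $\\sin^{2}(\\Theta\/2)$. The following sketch (ours; decay, dephasing, and the phonon-induced rates of the full model are deliberately omitted) verifies this area dependence for the sech envelope of Eq.(\\ref{rabi}):\n\\begin{verbatim}\nimport numpy as np\n\ndef excited_population(theta, tau0=1.0, ntau=4000):\n    tau = np.linspace(-10 * tau0, 10 * tau0, ntau)\n    dt = tau[1] - tau[0]\n    # sech envelope normalized so that its area equals theta\n    omega = (theta \/ (np.pi * tau0)) \/ np.cosh(tau \/ tau0)\n    c_g, c_e = 1.0 + 0j, 0.0 + 0j  # ground\/excited amplitudes\n    for w in omega:  # exact unitary step, piecewise-constant drive\n        phi = 0.5 * w * dt\n        c_g, c_e = (np.cos(phi) * c_g + 1j * np.sin(phi) * c_e,\n                    np.cos(phi) * c_e + 1j * np.sin(phi) * c_g)\n    return abs(c_e) ** 2\n\nfor n in range(1, 5):  # maxima near odd, minima near even pi\n    print(n, excited_population(n * np.pi))\n\\end{verbatim}\nIn the full simulation these clean maxima and minima are degraded by the photon- and phonon-induced rates, as discussed next.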
For this reason, we display the exciton population just at the end of the pulse, $\\gamma_{n}\\tau = 60$, to capture the outcome of the pulse. It is clear from Fig.(\\ref{Fig.5}) that the excited state population shows a decaying Rabi-oscillation-like behaviour. It is also confirmed that the population is never fully transferred to the excited state nor fully returned to the ground state for any pulse area, indicating non-constant phonon-induced decay and gain processes in the system. The decaying features of the local population maxima can be justified by examining the photon- and phonon-induced decay rates. The various phonon decay rates are given in Eqs.(\\ref{phdrate1})-(\\ref{phdrate2}), where an increasing incident pulse amplitude $\\Omega(z,t)$ results in the enhancement of these decay rates. This field-amplitude-dependent phonon decay, together with the constant photon decay, can explain the gradual decay of the local population maxima. On the contrary, the dip of the local minima increases due to the presence of the phonon-induced gain process $\\Gamma^{\\sigma+}$, as suggested in Eq.(\\ref{phdrate1}). The local maxima and minima of the exciton population are located near odd and even integer multiples of $\\pi$ pulse area, respectively. The maxima signify pulse absorption by the medium, resulting in population inversion. Similarly, the minima manifest the transparency of the medium. Thus, the leading edge of the pulse excites the population, whereas the trailing edge assists in stimulated emission, leaving the population in the ground state of the medium. It is evident that only pulses with areas at even integer multiples of $\\pi$ can propagate through the medium without absorption, which is consistent with the pulse area theorem. The fact that the local maxima and minima of the exciton population never match the integer values exactly will be clarified later by investigating the pulse propagation dynamics. Previously, we found that the stable pulse area is higher than $2\\pi$, as shown in Fig.($\\ref{Fig.3}$), which also agrees with the above observation. Therefore, the analysis of coherence and population assures us that the SIT phenomenon can be realized in the QD medium.\n\\begin{figure}[h]\n \\includegraphics[scale=0.4]{fig_6}\n \\caption{The Rabi frequency normalized by the input peak value, plotted against retarded time at different propagation distances inside the medium at the resonance condition $\\Delta_{c}$ = 0. The input pulse has the parameters $\\Theta(0) = 2\\pi$, $\\tau_{0}$ = 6.373 ps, $\\gamma_{n}\\tau_{c}$ = 40. Other parameters are T = 4.2 K, $\\alpha_{p} = 0.03\\ \\text{ps}^{2}$, $\\omega_{b} = 1\\ \\text{meV}$, $\\gamma =\\gamma_{d} = 2\\ \\mu\\text{eV}$ (2 ns).}\n \\label{Fig.7}\n \\end{figure} \n\\subsection{Self-Induced Transparency}\nA homogeneous QD medium of length $1$\\ mm is considered for studying the spatio-temporal evolution of the hyperbolic secant optical pulse. To achieve stable pulse propagation, we have chosen the initial pulse area to be 2$\\pi$. Fig.$\\ref{Fig.7}$ confirms the area theorem by showing stable optical pulse propagation over a long distance. However, the pulse shape suffers some distortion and absorption at larger distances.\nFig.$\\ref{Fig.7}$ also indicates that the pulse's peak value gradually decreases with increasing propagation distance. This suggests a finite absorption in the QD medium that prevents complete transparency in the system. 
In particular, this statement agrees well with the small absorption peak at resonance in the absorption profile shown in Fig.$\\ref{Fig.4}$b.\n\\begin{figure}[h]\n \\includegraphics[scale=0.4]{fig_7}\n \\caption{The Rabi frequency normalized by the individual peak value, plotted against retarded time at different propagation distances inside the medium at the resonance condition $\\Delta_{c}$ = 0. All the other parameters are the same as Fig.(\\ref{Fig.7}).}\n \\label{Fig.8}\n\\end{figure}\nFigure \\ref{Fig.8} displays the individually normalized pulse for different propagation distances. Inspection shows that the input pulse experiences a delay and a little broadening during propagation through the medium. The sole reason behind the pulse broadening is the dispersive nature of the system. In the frequency domain, a temporal pulse can be treated as a linear superposition of many travelling plane waves with different frequencies. These individual frequency waves gather different phases and move with varying velocities during the pulse propagation in a dispersive medium. Therefore, the pulse gets broader as the leading part (low frequency) moves faster and the trailing end (high frequency) goes slower. In the QD system, the pure dephasing rate is also responsible for this broadening, as it destroys the coherence.\nFrom Fig.$\\ref{Fig.8}$, a distinct peak shift is observed while the optical pulse propagates through the medium. This peak shift arises because the normally dispersive medium induces a slow group velocity of the optical pulse inside the medium.\nWe adopt the analytical expression for the time delay in the ideal case, considering $\\sigma \\gg 1\/\\tau_{0}$, reported earlier\\cite{RAHMAN}. The analytical expression for the time delay is found to be $\\gamma_{n}\\tau_{d} = \\alpha L \\gamma_{n}\\tau_{0}\/4$. Here the absorption coefficient $\\alpha$ is approximately 10 mm$^{-1}$, calculated from the chosen parameters. Therefore, the calculated analytical time delay $\\gamma_{n}\\tau_{d}\\approx$ 15 shows excellent agreement with the numerical result. Recalling the pulse area theorem, we observe that the pulse area stays almost constant near $2\\pi$ throughout the propagation. The result is consistent because, as the pulse amplitude decreases, the pulse width increases, maintaining a constant area under the curve. Therefore, an absorbing QD medium can exhibit the SIT phenomenon at low temperatures.\n\\subsection{Phonon bath parameter dependence of SIT}\nIn the simplified master equation ($\\ref{smeq}$), the various phonon-induced scattering rates depend on both the system and bath parameters. Hence it is crucial to study the effect of the phonon bath on the SIT dynamics. The phonon contribution comes into the picture in two ways: one through the reduced Rabi frequency, which depends on $\\langle B \\rangle$, and the other through the phonon-induced scattering rates connected with the phonon spectral density function.\n\\begin{figure}[h]\n \\includegraphics[scale=0.4]{fig_8}\n \\caption{The Rabi frequency envelope versus time at a propagation distance $\\zeta\\eta\/\\gamma_{n}$ = 50 for different phonon bath temperatures at the resonance condition $\\Delta_{c}$ = 0. The common parameters are $\\Theta(0) = 2\\pi$, $\\tau_{0}$ = 6.373 ps, $\\gamma_{n}\\tau_{c}$ = 40, $\\alpha_{p} = 0.03\\ \\text{ps}^{2}$, $\\omega_{b} = 1\\ \\text{meV}$, $\\gamma =\\gamma_{d} = 2\\ \\mu\\text{eV}$ (2 ns). 
The figure displays four different configurations: the system without a phonon bath (black) and with a phonon bath at temperatures T = 4.2 K (red), 10 K (blue), and 20 K (green).}\n \\label{Fig.9}\n\\end{figure}\nTherefore, increasing the phonon bath temperature reduces the values of $\\langle B \\rangle$ and $\\hbar\\omega\/2K_{b}T$ present in the expression of $\\phi(\\tau)$ given in Eq.($\\ref{phi}$). Consequently, the effective coupling between the QD and the applied field is reduced, but the phonon-induced decay rates are enhanced. From Fig.($\\ref{Fig.9}$), we notice that the final pulse shape experiences more deformation at higher temperatures. The peak of the output pulse is also strongly reduced at the higher temperature T = 20 K. Therefore, the bath temperature should be minimised to observe SIT in the QD medium.\n\\begin{figure}[h]\n \\includegraphics[scale=0.4]{fig_9}\n \\caption{The Rabi frequency envelope versus time at a propagation distance $\\zeta\\eta\/\\gamma_{n}$ = 50 for different electron-phonon coupling strengths $\\alpha_{p}$ at the resonance condition $\\Delta_{c}$ = 0. All the parameters are the same as Fig.(\\ref{Fig.9}) except T = 4.2 K and the various electron-phonon couplings $\\alpha_{p} = 0.03\\ \\text{ps}^{2}$ (red), 0.06\\ $\\text{ps}^{2}$ (blue), 0.12\\ $\\text{ps}^{2}$ (green).}\n \\label{Fig.10}\n\\end{figure}\nAnother controlling factor of the SIT is the interaction strength between the QD and the phonon bath. Increasing the system-bath coupling leads to a reduction of the coherence in the system. This statement is understandable by looking at the phonon correlation function shown in Eq.($\\ref{phi}$). Thus, the final pulse shape at equal propagation distances is significantly modified by the electron-phonon coupling constant, as shown in Fig.($\\ref{Fig.10}$). Therefore, we also have to ensure that the QD interacts weakly with the bath to obtain the SIT phenomenon in the QD medium.\n\\subsection{Higher pulse area and pulse breakup}\nFinally, we discuss the behaviour of a pulse with an area higher than $2\\pi$ propagating through the absorbing QD medium. We therefore consider the next stable pulse area solution, 4$\\pi$, for further investigation. The numerical result of the pulse propagation in both space and time is shown in Fig.($\\ref{Fig.11}$). Unlike the 2$\\pi$ pulse case, here the initial pulse breaks into two pulses as it travels through the medium. This phenomenon is also well explained by the pulse area theorem, where a $2n\\pi$ pulse splits into $n$ pulses of area 2$\\pi$. Surprisingly, the two pulses emerging from the breakup are not identical in shape. One pulse gets sharper and the other gets broader in the time domain, each adjusting its peak value such that the area under the curve is 2$\\pi$. The broader pulse component shows a prominent time delay, whereas the sharper pulse component propagates with a tiny time delay. As a result, the total pulse area stays near 4$\\pi$ throughout the propagation distance.\n\\begin{figure}[h]\n \\includegraphics[scale=0.4]{fig_10}\n \\caption{The propagation dynamics of a 4$\\pi$ area pulse in an absorbing QD medium as a function of both space and time at the resonance condition $\\Delta_{c} = 0$. All other parameters are the same as Fig.(\\ref{Fig.7}).}\n \\label{Fig.11}\n\\end{figure}\n\\section{CONCLUSIONS}\n\\label{sec:conclud}\nWe have investigated the SIT phenomenon in an inhomogeneously broadened semiconductor QD medium. 
In our model, we have included the effect of phonons in the total Hamiltonian to describe the modified optical properties of the QD in the presence of a thermal environment. We then adopted the polaron ME formalism to analytically derive the simplified ME with various phonon-induced decay rates. These phonon-induced scattering rates are plotted against detuning and time, which verifies the presence of the low-temperature asymmetry of phonon-induced pumping and decay in our system. We solve the density matrix equation and the Maxwell equation numerically and self-consistently with suitable parameters. We observe that stable pulse propagation is possible in the QD medium with a pulse area slightly higher than 2$\\pi$, depending on the phonon bath temperature. The physical mechanism of the SIT is clearly understood by analyzing the absorption and dispersion of the medium. The leading edge of the pulse gets absorbed by the medium, whereas the trailing edge of the pulse experiences gain; hence the pulse shape remains intact while propagating through a medium of short length. However, for longer propagation distances, we find that even though pulse propagation through the medium is possible, the propagating pulse gets absorbed and broadened. The final pulse shape is preserved on exiting the medium. Increasing the phonon bath temperature and coupling produces more deformation in the final pulse shape, as this destroys the coherence in the system. Finally, we explore the propagation of a 4$\\pi$ pulse in the QD medium, which shows the prominent pulse breakup phenomenon reported earlier in the literature.\nTherefore, our investigation ensures that a short pulse can propagate through the considered QD medium with a tiny change in shape. Hence, this work may have potential applications in quantum communication, quantum information, and mode-locking.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{RED by PGD attack: Performing PGD back to the true class} \\label{sec:naive_red}\n\nA naive approach to reverse engineering the adversarial perturbation is to use the targeted PGD attack to revert the label back to the ground truth. However, this requires additional assumptions. \nFirst, since PGD is a test-time deterministic optimization approach for perturbation generation, its targeted implementation requires the true class of the adversarial example, which could be unknown at testing time. What is more, one has to pre-define the perturbation budget $\\epsilon$ for PGD. This value is also unknown. \nSecond, performing PGD back to the true class might not exactly recover the ground-truth adversarial perturbations; the resulting estimate could instead be over-perturbed. To make this concrete, we applied the targeted $l_\\infty$ PGD attack method to adversarial examples generated by PGD (assuming the true class, victim model, and attack budget are known). We tried various PGD settings ($\\text{PGD10}_{\\epsilon=10\/255}$ refers to a PGD attack using 10 steps and $\\epsilon=10\/255$). Eventually, we compare these results to our CDD-RED method in Table \\ref{table:PGD_back}. 
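\n\nFor concreteness, a minimal sketch of this baseline is given below (ours, not the evaluation code; the classifier \\texttt{model}, the true label, and the budget \\texttt{eps} are all assumed known, which is precisely the extra knowledge this baseline requires):\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef pgd_to_true_class(model, x_adv, y_true, eps=10\/255,\n                      alpha=2\/255, steps=10):\n    x_red = x_adv.clone().detach()\n    for _ in range(steps):\n        x_red.requires_grad_(True)\n        loss = F.cross_entropy(model(x_red), y_true)\n        grad = torch.autograd.grad(loss, x_red)[0]\n        with torch.no_grad():\n            # descend the loss: move toward the true class\n            x_red = x_red - alpha * grad.sign()\n            # project back to the l_inf ball around x_adv\n            x_red = x_adv + (x_red - x_adv).clamp(-eps, eps)\n            x_red = x_red.clamp(0.0, 1.0)\n    return x_red.detach()  # perturbation estimate: x_adv - x_red\n\\end{verbatim}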
Given that the average reconstruction error between $\\mathbf x$ and $\\mathbf x'$ is 20.60, we can see from Table \\ref{table:PGD_back} that PGD attacks further enlarge the distortion from the clean data. Although PGD attacks can achieve high accuracy after reverting the adversarial data back to their true labels, the resulting perturbation estimate is far from the ground truth in terms of prediction alignment. We can tell from the low $\\text{PA}_{\\text{adv}}$ of the PGD methods that $\\mathbf x_{\\mathrm {RED}}'$ does not align with the input $\\mathbf x'$ at all.\n\n\n\\begin{table}[H]\n\\begin{center}\n\\caption{The performance comparison between reverting adversarial examples via targeted PGD attacks and {\\text{CDD-RED}}.} \\label{table:PGD_back}\n{\n\\begin{tabular}{ccccc}\n\\toprule\\hline\n & PGD10{$\\epsilon_{20\/255}$} & PGD10{$\\epsilon_{10\/255}$} & PGD20{$\\epsilon_{20\/255}$} & CDD-RED \\\\ \\hline\n$d(\\mathbf x, \\mathbf x_\\mathrm{RED})$ & 27.63 & 22.67 & 27.53 & \\textbf{11.73} \\\\ \\hline\n$\\text{\\text{PA}}_{\\mathrm{benign}}$ & 96.20\\% & 82.60\\% & \\textbf{99.80\\%} & 83.20\\% \\\\ \\hline\n$\\text{\\text{PA}}_{\\mathrm{adv}}$ & 6.20\\% & 7.20\\% & 4.80\\% & \\textbf{97.40\\%}\\\\ \\hline\\bottomrule\n\\end{tabular}}\n\\end{center}\n\\end{table}\n\n\\section{Dataset Details} \\label{sec:dataset_details}\n\nThe training and testing datasets are composed of three attack methods, including PGD \\cite{madry2017towards}, FGSM \\cite{Goodfellow2015explaining}, and the CW attack \\cite{carlini2017towards}, applied to \\textbf{5 models}: pre-trained ResNet18 (Res18), ResNet50 (Res50) \\cite{DBLP:journals\/corr\/HeZRS15}, VGG16, VGG19, and InceptionV3 (IncV3) \\cite{DBLP:journals\/corr\/SzegedyVISW15}. By default, the PGD and FGSM attacks are bounded by an $\\ell_\\infty$-norm constraint and the CW attack is bounded by the $\\ell_2$-norm. The ranges of the perturbation strength $\\epsilon$ for the PGD and FGSM attacks are $[1\/255,40\/255)$ and $[1\/255,20\/255)$, respectively. As for the CW attack, the adversary's confidence parameter $k$ is uniformly sampled from $[1,20]$.\nOne attack method is applied to one victim model to obtain \\textbf{3K successfully attacked images}. As a consequence, 45K ($3\\times5\\times3$K) adversarial images are generated in total: 37.5K for training and 7.5K for validation. \nThe testing set contains 28K adversarial images generated with the same attack method \\& victim model combinations.\n\n\\section{Performance on more attack types} \\label{sec:appendix_unseen}\n\n{We show more evaluations of the RED approaches on attacks unforeseen during training. The denoisers are all trained on the training dataset containing adversarial examples generated on the ImageNet dataset, as in Appendix \\ref{sec:dataset_details}. The test data include adversarial examples generated on the CIFAR-10 dataset in \\ref{sec:cifar10}, Wasserstein minimized attackers in \\ref{sec:W_attack}, and attacks on smoothed classifiers in \\ref{sec:attack_against_smoothed_classifiers}.}\n\n\\subsection{{Performance on CIFAR-10 dataset}} \\label{sec:cifar10}\n\n{\nWe further evaluate the performance of the RED approaches on adversarial examples generated on the CIFAR-10 dataset. As the denoiser is input-agnostic, we directly test the denoiser trained on adversarial examples generated on the ImageNet dataset. Here we consider the $10$-step PGD-$l_{\\infty}$ attack generation method with the perturbation radius $\\epsilon = 8\/255$. These examples are not seen during our training. As shown in Table \\ref{table:cifar10_result}, the proposed {\\text{CDD-RED}} method provides the best $\\text{PA}_{\\text{benign}}$ and $\\text{PA}_{\\text{adv}}$ with a slightly larger $d(\\mathbf x, \\mathbf x_\\mathrm{RED})$ than DO. This is not surprising, as DO focuses only on the pixel-level denoising error metric. 
However, as illustrated in Sec.\\,\\ref{sec:evaluation_metric}, other metrics such as PA also play a key role in evaluating the RED performance.\n}\n\n\\begin{table}[H]\n\\begin{center}\n\\caption{{The performance comparison among DO, DS, and {\\text{CDD-RED}} on the CIFAR-10 dataset.}} \\label{table:cifar10_result}\n{\n\\begin{tabular}{c|c|c|c}\n\\toprule\\hline\n & DO & DS & \\text{CDD-RED} \\\\ \\hline\n$d(\\mathbf x, \\mathbf x_\\mathrm{RED})$ & \\textbf{0.94} & 4.50 & 1.52 \\\\ \\hline\n$\\text{\\text{PA}}_{\\mathrm{benign}}$ & 9.90\\% & 71.75\\% & \\textbf{71.80\\%} \\\\ \\hline\n$\\text{\\text{PA}}_{\\mathrm{adv}}$ & 92.55\\% & 89.70\\% & \\textbf{99.55\\%} \\\\ \\hline\\bottomrule\n\\end{tabular}}\n\\end{center}\n\\end{table}\n\n\\subsection{{Performance on Wasserstein minimized attackers}} \\label{sec:W_attack}\n\n\n{\nWe further show the performance on Wasserstein minimized attackers, an attack type unseen during training. The adversarial examples are generated on the ImageNet sub-dataset using the Wasserstein ball. \nWe follow the same setting as \\cite{wong2019wasserstein}, where the attack radius $\\epsilon$ is 0.1 and the maximum number of iterations is 400 under the $l_\\infty$ norm inside the Wasserstein ball. \nThe results are shown in Table \\ref{table:w_attack}.\nAs we can see, the Wasserstein attack is a more challenging attack type for RED than the $l_p$ attack types considered in the paper, as justified by the lower prediction alignment $\\text{PA}_\\text{benign}$ across all methods. \nThis implies a possible limitation of supervised training over ($l_2$ or $l_\\infty$) attacks. One simple solution is to expand the training dataset using more diversified attacks (including Wasserstein attacks). However, we believe that further improvement of the generalization ability of RED deserves a more careful study in the future, e.g., an extension from the supervised learning paradigm to the (self-supervised) pre-training and fine-tuning paradigm.\n}\n\n\n\\begin{table}[H]\n\\begin{center}\n\\caption{{The performance comparison among DO, DS, and {\\text{CDD-RED}} on Wasserstein minimized attackers.}} \\label{table:w_attack}\n{\n\\begin{tabular}{c|c|c|c}\n\\toprule\\hline\n & DO & DS & \\text{CDD-RED} \\\\ \\hline\n$d(\\mathbf x, \\mathbf x_\\mathrm{RED})$ & \\textbf{9.79} & 17.38 & 11.66 \\\\ \\hline\n$\\text{\\text{PA}}_{\\mathrm{benign}}$ & 92.50\\% & 96.20\\% & \\textbf{97.50\\%} \\\\ \\hline\n$\\text{\\text{PA}}_{\\mathrm{adv}}$ & 35.00\\% & 37.10\\% & \\textbf{37.50\\%} \\\\ \\hline\\bottomrule\n\\end{tabular}}\n\\end{center}\n\\end{table}\n\n\n\\subsection{{Performance on attacks against smoothed classifiers}} \\label{sec:attack_against_smoothed_classifiers}\n\n\n{\nWe further show the performance on attacks against smoothed classifiers, an attack type unseen during training. A smoothed classifier predicts any input $x$ using the majority vote based on randomly perturbed inputs $\\mathcal{N}(x,\\sigma^2I)$ \\cite{cohen2019certified}. Here we consider the 10-step PGD-$\\ell_\\infty$ attack generation method with the perturbation radius $\\epsilon=20\/255$, and $\\sigma=0.25$ for smoothing. As shown in Table \\ref{table:attack_smoothed_clf}, the proposed CDD-RED method provides the best $\\text{PA}_\\text{benign}$ and $\\text{PA}_\\text{adv}$ with a slightly larger $d(\\mathbf x, \\mathbf x_\\mathrm{RED})$ than DO. 
\n}\n\n\\begin{table}[H]\n\\begin{center}\n\\caption{{The performance comparison among DO, DS, and {\\text{CDD-RED}} on the PGD attack against smoothed classifiers.}} \\label{table:attack_smoothed_clf}\n{\n\\begin{tabular}{c|c|c|c}\n\\toprule\\hline\n & DO & DS & \\text{CDD-RED} \\\\ \\hline\n$d(\\mathbf x, \\mathbf x_\\mathrm{RED})$ & \\textbf{15.53} & 22.42 & 15.89 \\\\ \\hline\n$\\text{\\text{PA}}_{\\mathrm{benign}}$ & 68.13\\% & 70.88\\% & \\textbf{76.10\\%} \\\\ \\hline\n$\\text{\\text{PA}}_{\\mathrm{adv}}$ & 58.24\\% & 58.79\\% & \\textbf{61.54\\%} \\\\ \\hline\\bottomrule\n\\end{tabular}}\n\\end{center}\n\\end{table}\n\n\\section{{Computation cost of RED}} \\label{sec:cost}\n{We measure the computation cost on a single RTX Titan GPU. The inference times for DO, DS, and {\\text{CDD-RED}} are similar, as they use the same denoiser architecture. For the training cost, the maximum number of training epochs for each method is set to 300. The average GPU time (in seconds) of one epoch for DO, DS, and {\\text{CDD-RED}} is 850, 1180, and 2098, respectively. It is not surprising that {\\text{CDD-RED}} costs more, as it is trained over a more complex RED objective. Yet, the denoiser only needs to be trained once to reverse-engineer a wide variety of adversarial perturbations, including attacks unseen during training.}\n\n\\section{{Ablation Studies on {\\text{CDD-RED}}}} \\label{sec:ablation}\n\n{In this section, we present additional experimental results using the proposed {\\text{CDD-RED}} method for reverse engineering of deception (RED). We study the effect of the following model\/parameter choices on the performance of {\\text{CDD-RED}}: 1) the pretrained classifier $\\hat{f}$ for PA regularization, 2) the data augmentation strategy $\\check{\\mathcal T}$ for PA regularization, and 3) the regularization parameter $\\lambda$ that strikes a balance between the pixel-level reconstruction error and PA in \\eqref{eq: overall_DeRED}.\nRecall that the {\\text{CDD-RED}} method in the main paper sets $\\hat{f}$ as VGG19, $\\check{\\mathcal T} = \\{ t \\in \\mathcal T\\,| \\, \\hat{F}(t(\\mathbf{x})) = \\hat{F}(\\mathbf{x}), \\hat{F}(t(\\mathbf{x}^{\\prime})) = \\hat{F}(\\mathbf{x}^{\\prime}) \\ \\}$, and $\\lambda=0.025$. }
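\n\nFor reference, the weighted objective recalled above can be schematized as follows. This is a sketch under our own simplifying assumptions, not the exact implementation: it uses an MSE reconstruction term, a cross-entropy surrogate for the PA regularization evaluated on a label-preserving augmentation $t\\in\\check{\\mathcal T}$, and placeholder names (\\texttt{denoiser}, \\texttt{f\\_hat}, \\texttt{augment}).\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef cdd_red_loss(denoiser, f_hat, x, x_adv, augment, lam=0.025):\n    t_x, t_adv = augment(x), augment(x_adv)  # one sampled t\n    x_rec = denoiser(t_adv)\n    recon = F.mse_loss(x_rec, t_x)  # pixel-level reconstruction\n    with torch.no_grad():  # PA targets from the fixed classifier\n        y_ben = f_hat(t_x).argmax(1)\n        y_adv = f_hat(t_adv).argmax(1)\n    pa_ben = F.cross_entropy(f_hat(x_rec), y_ben)\n    pa_adv = F.cross_entropy(f_hat(t_adv - x_rec + t_x), y_adv)\n    return recon + lam * (pa_ben + pa_adv)\n\\end{verbatim}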
\n\n\\subsection{{Pretrained classifier \\texorpdfstring{$\\hat f$}{}.}}\n\n\\begin{table}[htb]\n\\begin{center}\n\\caption{{The performance of {\\text{CDD-RED}} using a different pretrained classifier $\\hat{f}$ (either Res50 or R-Res50) compared with the default setting $\\hat{f}=$VGG19.} }\\label{table:surrogate_model}\n\\begin{threeparttable}\n{\n\\begin{tabular}{c|c|c}\n\\toprule\n\\hline\n & $\\hat{f}$=Res50 & $\\hat{f}$=R-Res50\\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}$d(\\mathbf x, \\mathbf x_\\mathrm{RED})$\\\\ (\\textcolor{red}{$\\downarrow$} is better)\\end{tabular} & 12.84 (\\textcolor{red}{$\\downarrow$ 0.20}) & 10.09 (\\textcolor{red}{$\\downarrow$ 2.95}) \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}$\\text{\\text{PA}}_{\\mathrm{benign}}$\\\\ (\\textcolor{red}{$\\uparrow$} is better)\\end{tabular} & 84.33\\% ($\\downarrow$ 1.38\\%) & 57.88\\% ($\\downarrow$ 27.83\\%) \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}$\\text{\\text{PA}}_{\\mathrm{adv}}$\\\\ (\\textcolor{red}{$\\uparrow$} is better)\\end{tabular} & 79.94\\% ($\\downarrow$ 0.49\\%) & 71.02\\% ($\\downarrow$ 9.40\\%) \\\\ \\hline\\bottomrule\n\\end{tabular}}\n\\end{threeparttable}\n\\end{center}\n\\end{table}\n\n\n{Besides setting $\\hat f$ as VGG19, Table \\ref{table:surrogate_model} shows the RED performance using other pretrained models, {\\it i.e.}, Res50 and R-Res50. As we can see, the use of Res50 yields performance similar to VGG19. Although some minor improvements are observed in terms of the pixel-level reconstruction error, the PA performance suffers a larger degradation.\nCompared to Res50, \nthe use of the adversarially robust model \nR-Res50 significantly hampers the RED performance. \nThis is because the adversarially robust model typically lowers the prediction accuracy; it is thus not able to ensure the class-discriminative ability in the non-adversarial context, namely, the {\\text{PA}} regularization performance. }\n\n\\subsection{{Data selection for {\\text{PA}} regularization.}}\n\n\\begin{table}[htb]\n\\begin{center}\n\\caption{{Ablation study on the selection of $\\check{\\mathcal T}$ ($\\check{\\mathcal T}=\\mathcal{T}$ and without (w\/o) $\\check{\\mathcal T}$) for training {\\text{CDD-RED}}, compared with $ \\check{\\mathcal T} = \\{ t \\in \\mathcal T\\,| \\, \\hat{F}(t(\\mathbf{x})) = \\hat{F}(\\mathbf{x}), \\hat{F}(t(\\mathbf{x}^{\\prime})) = \\hat{F}(\\mathbf{x}^{\\prime})
\\ \\}$}} \\label{table:ablation_data}\n\\begin{threeparttable}\n{\n\\begin{tabular}{c|c|c}\n\\toprule\n\\hline\n & $\\check{\\mathcal T}=\\mathcal{T}$ & w\/o $\\check{\\mathcal T}$ \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}$d(\\mathbf x, \\mathbf x_\\mathrm{RED})$\\\\ (\\textcolor{red}{$\\downarrow$} is better)\\end{tabular} & 15.52 ($\\uparrow$ 2.48) & 13.50 ($\\uparrow$ 0.46) \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}$\\text{\\text{PA}}_{\\mathrm{benign}}$\\\\ (\\textcolor{red}{$\\uparrow$} is better)\\end{tabular} & 83.64\\% ($\\downarrow$ 2.07\\%) & 84.04\\% ($\\downarrow$ 1.67\\%) \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}$\\text{\\text{PA}}_{\\mathrm{adv}}$\\\\ (\\textcolor{red}{$\\uparrow$} is better)\\end{tabular} & 75.92\\% ($\\downarrow$ 4.51\\%) & 79.99\\% ($\\downarrow$ 0.44\\%) \\\\ \\hline\\bottomrule\n\\end{tabular}}\n\\end{threeparttable}\n\\end{center}\n\\end{table}\n\n{As data augmentation might alter the classifier's original decision in \\eqref{eq: overall_DeRED}, we study how $\\check{\\mathcal T}$ affects the RED performance by setting $\\check{\\mathcal T}$ to the original data, i.e., without data augmentation, and to all data, i.e., $\\check{\\mathcal T} = \\mathcal{T}$. Table \\ref{table:ablation_data} shows the performance of the different $\\check{\\mathcal T}$ configurations, compared with the default setting.\nThe performance is measured on the testing dataset. As we can see, using all data or only the original data cannot provide an overall better performance than {\\text{CDD-RED}}. This is because the former might cause over-transformation, and the latter lacks generalization ability. }\n\n\n\\subsection{{Regularization parameter \\texorpdfstring{$\\lambda$}{}.}}\n\n\n\\begin{table}[htb]\n\\begin{center}\n\\caption{{Ablation study on the regularization parameter $\\lambda$ (0, 0.0125, and 0.05) for {\\text{CDD-RED}} training, compared with $\\lambda$=0.025.}} \\label{table:ablation_lambda}\n\\begin{threeparttable}\n{\n\\begin{tabular}{c|c|c|c}\n\\toprule\\hline\n & $\\lambda$=0 & $\\lambda$=0.0125 & $\\lambda$=0.05 \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}$d(\\mathbf x, \\mathbf x_\\mathrm{RED})$\\\\ (\\textcolor{red}{$\\downarrow$} is better)\\end{tabular} & 8.92 (\\textcolor{red}{$\\downarrow$ 4.12}) & 12.79 (\\textcolor{red}{$\\downarrow$ 0.25}) & 14.85 ($\\uparrow$ 2.13) \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}$\\text{\\text{PA}}_{\\mathrm{benign}}$\\\\ (\\textcolor{red}{$\\uparrow$} is better)\\end{tabular} & 47.61\\% ($\\downarrow$ 38.10\\%) & 81.00\\% ($\\downarrow$ 4.71\\%) & 85.56\\% ($\\downarrow$ 0.15\\%) \\\\ \\hline\n\\begin{tabular}[c]{@{}c@{}}$\\text{\\text{PA}}_{\\mathrm{adv}}$\\\\ (\\textcolor{red}{$\\uparrow$} is better)\\end{tabular} & 73.37\\% ($\\downarrow$ 7.06\\%) & 78.25\\% ($\\downarrow$ 2.18\\%) & 79.94\\% ($\\downarrow$ 0.49\\%) \\\\ \\hline\\bottomrule\n\\end{tabular}}\n\\end{threeparttable}\n\\end{center}\n\\end{table}\n\n{The overall training objective of {\\text{CDD-RED}} is (\\ref{eq: overall_DeRED}), which is the weighted sum of the reconstruction error and PA with a regularization parameter $\\lambda$. We further study the sensitivity of {\\text{CDD-RED}} to the choice of \n$\\lambda$, which is set to $0$, $0.0125$, and $0.05$. Table \\ref{table:ablation_lambda} shows the RED performance using different $\\lambda$ values, compared with the default setting $\\lambda=0.025$. We report the average performance on the testing dataset. 
As we can see, the use of $\\lambda=0$, which corresponds to training the denoiser without PA regularization, achieves a lower pixel-level reconstruction error $d(\\mathbf x, \\mathbf x_\\mathrm{RED})$ but greatly degrades the prediction-level performance, especially $\\text{\\text{PA}}_{\\mathrm{benign}}$. At the same time, $\\lambda=0$ provides a smaller pixel-level reconstruction error with a better PA performance than DO, which indicates the importance of using proper augmentations. We also observe that further increasing $\\lambda$ to $0.05$ does not provide a better PA.} \n\n\\section{{Ablation study of different attack hyperparameter settings}}\n\\label{sec:diff_attack_hyper}\n\n\n\n{We test on PGD attacks generated with different step sizes, including $4\/255$ and $6\/255$, and with and without random initialization (RI). Other hyperparameters are kept the same. The adversarial examples are generated from the same set of images w.r.t. the same classifier, ResNet-50. The results are shown in Table \\ref{table:diff_pgd_hyper}. As we can see, the RED performance is quite robust against the varying hyperparameters of PGD attacks. Compared with DO, CDD-RED greatly improves $\\text{PA}_{\\text{benign}}$ and achieves a higher $\\text{PA}_{\\text{adv}}$ with a slightly larger $d(\\mathbf x, \\mathbf x_\\mathrm{RED})$. Compared to DS, {\\text{CDD-RED}} achieves slightly better PA with a much smaller reconstruction error $d(\\mathbf x, \\mathbf x_\\mathrm{RED})$.} \n\\begin{table}[H]\n \\centering\n \\caption{{RED performance for PGD with different hyperparameter settings, including step sizes of $4\/255$ and $6\/255$, and with and without RI.}}\n \\label{table:diff_pgd_hyper}\n \\begin{tabular}{cc}\n \\begin{tabular}{c|c|c|c|c}\n\\toprule\\hline\nWithout RI \/ With RI & Stepsize & DO & DS & CDD-RED \\\\ \\hline\n & 4\/255 & \\textbf{5.94\/5.96} & 16.56\/16.57 & 8.91\/8.94 \\\\ \\cline{2-5} \n\\multirow{-2}{*}{$d(\\mathbf x, \\mathbf x_\\mathrm{RED})$} & 6\/255 & \\textbf{5.99\/5.98} & 16.52\/16.50 & 8.97\/8.94 \\\\ \\hline\n & 4\/255 & 40.00\\%\/47.00\\% & 91.00\\%\/91.00\\% & \\textbf{94.00\\%\/93.00\\%} \\\\ \\cline{2-5} \n\\multirow{-2}{*}{$\\text{\\text{PA}}_{\\mathrm{benign}}$} & 6\/255 & 51.00\\%\/61.00\\% & 94.00\\%\/93.00\\% & \\textbf{95.00\\%\/94.50\\%} \\\\ \\hline\n & 4\/255 & 97.50\\%\/97.50\\% & 97.50\\%\/97.50\\% & \\textbf{99.50\\%\/99.50\\%} \\\\ \\cline{2-5} \n\\multirow{-2}{*}{$\\text{\\text{PA}}_{\\mathrm{adv}}$} & 6\/255 & 96.50\\%\/96.50\\% & 95.50\\%\/95.50\\% & \\textbf{98.50\\%\/99.50\\%} \\\\ \\hline\\bottomrule\n\\end{tabular}\n \\end{tabular}\n\\end{table}\n\n\n\\section{{Performance without adversarial detection assumption}} \\label{sec:mixture}\n\n {The focus of RED is to demonstrate the feasibility of recovering the adversarial perturbations from an adversarial example. However, in order to show the RED performance in a global setting, we experiment with a mixture of 1) adversarial images, 2) images with Gaussian noise (images with random perturbations), and 3) clean images on the ImageNet dataset. The standard deviation of the Gaussian noise is set to 0.05. Each type of data accounts for \\textbf{1\/3} of the total data. The images are shuffled to mimic the live data case. The overall accuracy before denoising is 63.08\\%. After denoising, the overall accuracy obtained by DO, DS, and CDD-RED is 72.45\\%, 88.26\\%, and \\textbf{89.11\\%}, respectively. During the training of the denoisers, random noise is not added to the input. }
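\n\nA sketch of this mixed-stream evaluation is given below (ours; the helper name and the classifier\/denoiser handles are hypothetical). It assembles the three equal-sized data types, shuffles them to mimic a live feed, and passes everything through the same denoiser before classification:\n\\begin{verbatim}\nimport torch\n\ndef mixed_stream_accuracy(model, denoiser, x_clean, x_adv, y,\n                          sigma=0.05):\n    x_noisy = (x_clean + sigma * torch.randn_like(x_clean)).clamp(0, 1)\n    xs = torch.cat([x_adv, x_noisy, x_clean])  # one third each\n    ys = torch.cat([y, y, y])\n    perm = torch.randperm(len(xs))  # shuffled, label-free stream\n    xs, ys = xs[perm], ys[perm]\n    with torch.no_grad():\n        pred = model(denoiser(xs)).argmax(1)\n    return (pred == ys).float().mean().item()\n\\end{verbatim}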
\n \n \n\n\n\n\n\n\\section{{Transferability of reconstructed adversarial estimate}} \\label{sec:tranferability}\n\n{We further examine the transferability of RED-synthesized perturbations. In the experiment, RED-synthesized perturbations are generated from PGD attacks using ResNet-50. We then evaluate the attack success rate (ASR) of the resulting perturbations transferred to the victim models ResNet-18, VGG16, VGG19, and Inception-V3. The results are shown in Table \\ref{table:transferability}. As we can see, the perturbation estimates obtained via our {\\text{CDD-RED}} yield better attack transferability than DO and DS. Therefore, such RED-synthesized perturbations can be regarded as an efficient transfer attack method.}\n\n\\begin{table}[H]\n\\begin{center}\n\\caption{{The transferability of RED-synthesized perturbations.}} \\label{table:transferability}\n{\n\\begin{tabular}{c|c|c|c}\n\\toprule\\hline\n & DO & DS & \\text{CDD-RED} \\\\ \\hline\nResNet-18 & 66.50\\% & 70.50\\% & \\textbf{77.50\\%} \\\\ \\hline\nVGG16 & 71.50\\% & 74.00\\% & \\textbf{81.00\\%} \\\\ \\hline\nVGG19 & 71.50\\% & 70.00\\% & \\textbf{80.00\\%} \\\\ \\hline\nInception-V3 & 86.00\\% & 85.50\\% & \\textbf{90.00\\%} \\\\ \\hline\\bottomrule\n\\end{tabular}}\n\\end{center}\n\\end{table}\n\n\\section{{Potential applications of RED}} \\label{section:potential_app}\n\n{In this paper, we focus on recovering attack perturbation details from adversarial examples. But at the same time, the proposed RED can be leveraged for various potentially interesting applications. In this section, we delve into three applications: RED for adversarial detection in \\ref{sec:detection}, inferring the attack identity in \\ref{sec:identity}, and provable defense in \\ref{sec:provable}. }\n\n\\subsection{{RED for adversarial detection}} \\label{sec:detection}\n {The outputs of RED can be looped back to help the design of adversarial detectors. Recall that our proposed RED method (CDD-RED) can deliver an attribution alignment test, which reflects the sensitivity of input attribution scores to the pre-RED and post-RED operations. Thus, if an input is an adversarial example, then it will cause a high attribution dissimilarity (i.e., misalignment) between the pre-RED input and the post-RED input, i.e., $I(x, f(x))$ vs. $I(D(x), f(D(x)))$, following the notations in Section \\ref{sec:evaluation_metric}. In this sense, attribution alignment built upon $I(x, f(x))$ and $I(D(x), f(D(x)))$ can be used as an adversarial detector. Along this direction, we obtained some preliminary results on RED-assisted adversarial detection and compared the ROC performance of the detector using CDD-RED with that using denoised smoothing (DS). In Figure \\ref{fig:RoC}, we observe that the CDD-RED-based detector yields superior detection performance, as justified by its large area under the curve. Here the detection evaluation dataset is consistent with the test dataset in the evaluation section of the paper.}\n \n \\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.5 \\columnwidth]{Figs\/detection.png}\n \\caption{{ROC curves for detecting adversarial examples.}}\n \\label{fig:RoC}\n\\end{figure}\n\n\\subsection{{RED for attack identity inference}}\\label{sec:identity}\n\n{We consider another application: inferring the attack identity using the reverse-engineered adversarial perturbations. Similar to the setup of Figure \\ref{fig:correlationmatrix}, we achieve the above goal using correlation screening between the new attack and the existing attack type library. Let $z^\\prime$ (e.g., a PGD attack generated under the unseen AlexNet victim model) be the new attack. We can then adopt the RED model $D(\\cdot)$ to estimate the perturbations $\\delta_{new} = z^\\prime - D(z^\\prime)$. Let $x^\\prime_{Atk_i}$ denote the generated attack over the estimated benign data $D(z^\\prime)$ but using the existing attack type $i$. Similarly, we can obtain the RED-generated perturbations $\\delta_{i} = x^\\prime_{Atk_i} - D(x^\\prime_{Atk_i})$. With the aid of $\\delta_{new}$ and $\\delta_{i}$ for all $i$, we infer the most similar attack type by maximizing the cosine similarity: $i^* = \\text{argmax}_{i} ~ cos(\\delta_{new},\\delta_{i})$. 
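\n\nA sketch of this correlation screening is given below (ours; \\texttt{red} is the trained RED model $D(\\cdot)$ and \\texttt{attack\\_lib} is an assumed mapping from each known attack type $i$ to a function that attacks the recovered benign estimate):\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef infer_attack_identity(red, attack_lib, z_new):\n    x_est = red(z_new)  # benign estimate D(z')\n    delta_new = (z_new - x_est).flatten()\n    scores = {}\n    for name, attack in attack_lib.items():\n        x_atk = attack(x_est)  # re-attack with known type i\n        delta_i = (x_atk - red(x_atk)).flatten()\n        scores[name] = F.cosine_similarity(delta_new, delta_i,\n                                           dim=0).item()\n    return max(scores, key=scores.get), scores\n\\end{verbatim}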
Figure \\ref{fig:identity} shows an example that links the new AlexNet-generated PGD attack with the existing VGG19-generated PGD attack. The reason is elaborated on below. (1) Both attacks are drawn from the PGD attack family. And (2) in the existing victim model library (including ResNet, VGG, and InceptionV3), VGG19 has the most similar architecture to AlexNet; both share a pipeline composed of convolutional layers followed by fully connected layers, without residual connections.}\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=1 \\columnwidth]{Figs\/RED_corr_alex.pdf}\n \\caption{{Similarity between the AlexNet-generated PGD attack and the existing attacks in the library.}}\n \\label{fig:identity}\n\\end{figure}\n \n\\subsection{{RED for provable defense}} \\label{sec:provable}\n\n\n\n{We train the RED models to construct smoothed classifiers; the resulting certified accuracy is shown in Figure \\ref{fig:certified}. Here the certified accuracy is defined as the ratio of correctly predicted images whose certified perturbation radius is larger than the $\\ell_2$ perturbation radius shown on the x-axis.}\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.6\\columnwidth]{Figs\/CAbyVariousMethods.png}\n \\caption{{Certified robustness by different methods.}}\n \\label{fig:certified}\n\\end{figure}\n\n\\newcommand{\\Section}[1]{\\vspace{-2mm} \\section{#1} \\vspace{-2mm}}\n\\newcommand{\\SubSection}[1]{\\vspace{-2mm} \\subsection{#1} \\vspace{-2mm}}\n\\newcommand{\\SubSubSection}[1]{\\vspace{-1mm} \\subsubsection{#1} \\vspace{-1mm}}\n\n\\newcommand{\\Paragraph}[1]{\\vspace{1mm} \\noindent \\textbf{#1} \\hspace{0mm}}\n\n\\newcommand{\\AD}{\\text{AD}}\n\\newcommand{\\AT}{\\text{AT}}\n\\newcommand{\\IAA}{\\text{IAA}}\n\n\n\n\n\n\n\\title{Reverse Engineering of Imperceptible Adversarial Image Perturbations}\n\n\\author{%\n  Yifan Gong$^1$\\thanks{Equal contributions.}, Yuguang Yao$^{2*}$, Yize Li$^{1}$, Yimeng Zhang$^{2}$, Xiaoming Liu$^{2}$, Xue Lin$^{1}$, Sijia Liu$^{2,3}$\n  \\\\\n  $^{1}$ Northeastern University, $^{2}$ Michigan State University, $^{3}$ MIT-IBM Watson AI Lab, IBM Research\n\\\\\n  \\texttt{\\{gong.yifa, li.yize, xue.lin\\}@northeastern.edu}, \\\\\n  \\texttt{\\{yaoyugua, zhan1853, liusiji5\\}@msu.edu, liuxm@cse.msu.edu}}\n\n\n\\newcommand{\\fix}{\\marginpar{FIX}}\n\\newcommand{\\new}{\\marginpar{NEW}}\n\n\\iclrfinalcopy\n\\begin{document}\n\n\n\\maketitle\n\n\\begin{abstract}\nIt has been well recognized that neural network based image classifiers are easily fooled by images with tiny perturbations crafted by an adversary. There has been a vast volume of research on generating and defending against such adversarial attacks. 
However, the following problem is left unexplored: \\textit{How to reverse-engineer adversarial perturbations from an adversarial image?}\nThis leads to a new adversarial learning paradigm---Reverse Engineering of Deceptions (RED). \nIf successful, RED allows us to estimate adversarial perturbations and recover the original images. \nHowever, carefully crafted, tiny adversarial perturbations are difficult to recover by optimizing a unilateral RED objective. For example, a pure image denoising method may overfit to minimizing the reconstruction error but hardly preserves the classification properties of the true adversarial perturbations. \nTo tackle this challenge, we formalize the RED problem and identify a set of principles crucial to the design of a RED approach. \nParticularly, we find that prediction alignment and proper data augmentation (in terms of spatial transformations) are two criteria for achieving a generalizable RED approach. \nBy integrating these RED principles with image denoising, we propose a new \\underline{C}lass-\\underline{D}iscriminative \\underline{D}enoising based \\underline{RED} framework, termed {{\\text{CDD-RED}}}.\nExtensive experiments demonstrate the effectiveness of {\\text{CDD-RED}} under different evaluation metrics (ranging from pixel-level and prediction-level to attribution-level alignment) and a variety of attack generation methods ({\\it e.g.}, FGSM, PGD, CW, AutoAttack, and adaptive attacks). Codes are available at \\href{https:\/\/github.com\/Yifanfanfanfan\/Reverse-Engineering-of-Imperceptible-Adversarial-Image-Perturbations}{\\texttt{link}}. \n\\end{abstract}\n\n\\vspace{-3mm}\n\\section{Introduction}\n\n\nDeep neural networks (DNNs) are susceptible\nto adversarially-crafted tiny input perturbations during inference.\nSuch imperceptible perturbations, {\\it a.k.a.}~adversarial attacks, could cause DNNs to draw manifestly wrong conclusions. \nThe existence of adversarial attacks was first uncovered in the domain of image classification \\citep{Goodfellow2015explaining,carlini2017towards,papernot2016limitations}, and was then rapidly extended to other domains, such as object detection \\citep{xie2017adversarial,serban2020adversarial}, language modeling \\citep{cheng2020seq2sick,srikant2021generating}, and medical machine learning \\citep{finlayson2019adversarial,antun2020instabilities}. Despite the different applications, the underlying attack formulations and generation methods commonly follow those used in image classification.\n\n\nA vast volume of existing work has been devoted to designing defenses against such attacks, mostly focusing on either detecting adversarial examples \\citep{grosse2017statistical,yang2020ml,metzen2017detecting,meng2017magnet,wojcik2020adversarial} or acquiring adversarially robust DNNs \\citep{madry2017towards,zhang2019theoretically,wong2017provable,salman2020denoised,wong2020fast,carmon2019unlabeled,shafahi2019adversarial}.\nDespite the plethora of prior work on adversarial defenses, it seems impossible to achieve `perfect' robustness.\nGiven the fact that adversarial attacks are inevitable~\\citep{shafahi_are_2020}, we ask whether or not an adversarial attack can be reverse-engineered so that one can estimate the adversary's information ({\\it e.g.}, adversarial perturbations) behind the attack instances.\nThe above problem is referred to as \\textit{Reverse Engineering of Deceptions (RED)}, fostering a new adversarial learning regime. 
\nThe development of RED technologies will also\nenable adversarial situation awareness in high-stakes applications.\n\nTo the best of our knowledge, few works have studied the RED problem. The most relevant one that we are aware of is \\citep{pang_advmind_2020}, which proposed the so-called query of interest (QOI) estimation model to \n infer the adversary's target class by model queries.\n However, the work \\citep{pang_advmind_2020} was restricted to the black-box attack scenario and thus lacks a general formulation of RED. Furthermore, it has not built\n a complete RED pipeline, which should not only provide a solution to estimating the adversarial example but also formalize evaluation metrics to comprehensively measure the performance of RED.\n In this paper, we aim to take a solid step towards addressing the RED problem.\n\n\n\n\n\n\n\n\\iffalse\nDeep Neural Networks (DNNs) are inherently susceptible to adversarial\nattacks even under black-box settings, in which the adversary\nonly has query access to the target models. In practice, while it\nmay be possible to effectively detect such attacks ({\\it e.g.}, observing\nmassive similar but non-identical queries), it is often challenging to\nexactly infer the adversary intent ({\\it e.g.}, the target class of the adversarial\nexample the adversary attempts to craft) especially during\nearly stages of the attacks, which is crucial for performing effective\ndeterrence and remediation of the threats in many scenarios.\nIn this paper, we present AdvMind, a new class of estimation\nmodels that infer the adversary intent of black-box adversarial attacks\nin a robust and prompt manner. Specifically,\n\\fi\n\n\\iffalse\nDenoising-based strategies for detection:\nMagNet:\nThe detector\nnetworks learn to differentiate between normal and adversarial examples\nby approximating the manifold of normal examples.\nAnother work: Adversarial Examples Detection and Analysis with\nLayer-wise Autoencoders\n\\fi \n\n\\iffalse\nARE ADVERSARIAL EXAMPLES INEVITABLE?\nExistence of adversarial examples.\n\\fi \n\n\\SubSection{Contributions}\nThe main contributions of our work are listed below. \n\n\\noindent $\\bullet$ \nWe formulate the Reverse Engineering of Deceptions (RED) problem, which is able to estimate adversarial perturbations and \nprovides the feasibility of inferring the intention of an adversary, {\\it e.g.}, `adversary saliency regions' of an adversarial image.\n\n\\noindent $\\bullet$ \nWe identify a series of RED principles to \neffectively estimate the adversarially-crafted tiny perturbations.\nWe find that the class-discriminative ability is crucial to evaluating the RED performance.\nWe also find that \ndata augmentation, \\textit{e.g.}, spatial transformations, is another key to improving the RED result.\n Furthermore, we integrate the developed RED principles into \nimage denoising and \npropose a denoiser-assisted RED approach.\n\n\n\\noindent $\\bullet$ \nWe \nbuild a comprehensive evaluation pipeline to quantify the RED performance from different perspectives, such as pixel-level reconstruction error, prediction-level alignment, and attribution-level adversary saliency region recovery. 
\nWith an extensive experimental study,\n we show that, \n compared to image denoising baselines,\n\n\n our proposal\n yields a consistent improvement across diverse RED evaluation metrics and attack generation methods, {\\it e.g.}, FGSM \\citep{Goodfellow2015explaining}, CW \\citep{carlini2017towards}, PGD \\citep{madry2017towards} and AutoAttack \\citep{croce2020reliable}. \n \n\n\n\n\n\n\\SubSection{Related work}\n\n\\paragraph{Adversarial attacks.}\nDifferent types of adversarial attacks have been proposed, ranging from digital attacks \\citep{Goodfellow2015explaining,carlini2017towards,madry2017towards,croce2020reliable,xu2019structured,chen2017ead,xiao2018spatially}\n to physical attacks \\citep{eykholt2018robust,li2019adversarial,athalye18b,chen2018shapeshifter,xu2019evading}. The former \ngives the most fundamental threat model, which\ncommonly deceives DNN models during inference by crafting imperceptible adversarial perturbations.\nThe latter extends the former to fool the victim models in the physical environment. Compared to digital attacks, physical attacks require much larger perturbation strengths to enhance the adversary's resilience to various physical conditions such as lighting and object deformation \\citep{athalye18b,xu2019evading}.\n\nIn this paper, we focus on $\\ell_p$-norm ball constrained attacks, {\\it a.k.a.}~$\\ell_p$ attacks, for $p \\in \\{ 1,2,\\infty\\}$, which are the most widely used digital attacks. \nExamples include FGSM \\citep{Goodfellow2015explaining}, PGD \\citep{madry2017towards}, CW \\citep{carlini2017towards}, and the recently-released attack benchmark AutoAttack \\citep{croce2020reliable}. \nBased on the adversary's intent, $\\ell_p$ attacks are further divided into untargeted attacks and targeted attacks, where in contrast to the former, the latter designates the (incorrect) prediction label of a victim model.\nWhen an adversary has no access to victim models' detailed information (such as architectures and model weights), $\\ell_p$ attacks can be further generalized to black-box attacks by leveraging either surrogate victim models \\citep{papernot_practical_2017,papernot_transferability_2016,dong_evading_2019,liu_delving_2017} or input-output queries from the original black-box models \\citep{chen2017zoo,liu2018signsgd,cheng2019sign}. \n\n\\paragraph{Adversarial defenses.}\nTo improve the robustness of DNNs, a variety of approaches have been proposed\nto defend against $\\ell_p$ attacks.\nOne line of research focuses on enhancing the robustness of DNNs during training, {\\it e.g.},\nadversarial training \\citep{madry2017towards}, TRADES \\citep{zhang2019theoretically}, randomized smoothing \\citep{wong2017provable}, and their variants \\citep{salman2020denoised,wong2020fast,carmon2019unlabeled,shafahi2019adversarial,uesato2019labels,chenliu2020cvpr}.\nAnother line of research is to detect adversarial attacks without altering the victim model or the training process. \nThe key technique is to differentiate between benign and adversarial examples by measuring their `distance.'\nSuch a distance measure has been defined in the input space via pixel-level reconstruction error~\\citep{meng2017magnet,liao_defense_2018}, in the intermediate layers via neuron activation anomalies \\citep{xu2019interpreting}, and in the logit space by tracking the sensitivity of deep feature attributions to input perturbations \\citep{yang2020ml}. 
In contrast to RED, \textit{adversarial detection is a relatively simpler problem}, since even a roughly approximated distance possesses detection ability \citep{meng2017magnet,luo2015foveation}.

Among the existing adversarial defense techniques, the recently-proposed Denoised Smoothing (DS) method \citep{salman2020denoised} is the most related to ours. In \citep{salman2020denoised}, an image denoising network is prepended to an existing victim model so that the augmented system acts as a smoothed image classifier with certified robustness.
Although DS is not designed for RED, its denoised output can be regarded as a benign example estimate.
The promotion of classification stability in DS also motivates us to design RED methods with class-discriminative ability.
Thus, DS will be a main baseline approach for comparison. Similar to our RED setting, the concurrent work \citep{souri2021identification} also identified the feasibility of estimating adversarial perturbations from adversarial examples.

\vspace{-3mm}
\section{Reverse Engineering of Deceptions: Formulation and Challenges} \label{sec:red_formulation_challenges}

\vspace{-2mm}
In this section, we first introduce the threat model of our interest: adversarial attacks on images.
Based on that, we formalize the Reverse Engineering of Deceptions (RED) problem and demonstrate its challenges through some `warm-up' examples.
\vspace{-2mm}
\paragraph{Preliminaries on the threat model.}
We focus on $\ell_p$ attacks, where the \textit{adversary's goal} is to generate imperceptible input perturbations to fool a well-trained image classifier.
Formally, let $\mathbf x$ denote a benign image, and $\boldsymbol \delta$ an additive perturbation variable. Given a victim classifier $f$ and a perturbation strength tolerance $\epsilon$ (in terms of, {\it e.g.}, the $\ell_\infty$-norm constraint $ \| \boldsymbol{\delta} \|_\infty \leq
\epsilon$), the desired \textit{attack generation algorithm} $\mathcal A$ then seeks the optimal $\boldsymbol \delta$ subject to the perturbation constraints. Such an attack generation process is denoted by
$\boldsymbol \delta = \mathcal A (\mathbf x, f, \epsilon)$, resulting in an adversarial example $\mathbf x^{\prime} = \mathbf x + \boldsymbol \delta$.
{Here $\mathcal A$ can be fulfilled by different attack methods, e.g., FGSM~\citep{Goodfellow2015explaining}, CW~\citep{carlini2017towards}, PGD~\citep{madry2017towards}, and AutoAttack \citep{croce2020reliable}.}
\vspace{-2mm}
\paragraph{Problem formulation of RED.}
Different from conventional defenses that detect or reject adversarial instances \citep{pang_advmind_2020,liao_defense_2018,shafahi_are_2020,niu_limitations_2020}, RED aims to address the following question.
\begin{center}
	\vspace{-2mm}
	\setlength\fboxrule{0.0pt}
	\noindent\fcolorbox{black}[rgb]{0.95,0.95,0.95}{\begin{minipage}{0.98\columnwidth}
				\vspace{-0.07cm}
				{\bf (RED problem)} Given an adversarial instance, can we reverse-engineer the adversarial perturbations $\boldsymbol \delta$, and infer the adversary's objective and knowledge, {\it e.g.}, the true image class behind the deception and the adversary saliency image region?
\n\t\t\t\t\\vspace{-0.07cm}\n\t\n\t\\end{minipage}}\n\t\\vspace{-2mm}\n\\end{center}\n\n \nFormally,\nwe aim to recover $\\boldsymbol \\delta$ from an adversarial example $\\mathbf x^{\\prime}$\nunder the prior knowledge \nof the victim model $f$ or its substitute $\\hat{f}$ if the former is a black box.\nWe denote the RED operation as $\\boldsymbol \\delta = \\mathcal R(\\mathbf x^{\\prime}, \\hat {f})$, \nwhich covers the white-box scenario ($\\hat {f} = f$) as a special case. \nWe propose to learn a\nparametric model $\\mathcal D_{\\boldsymbol \\theta}$ ({\\it e.g.}, a denoising neural network that we will focus on)\nas an approximation of $\\mathcal R$ through\n a training\ndataset of adversary-benignity pairs $ \\Omega = \\{ (\\mathbf x^{\\prime}, \\mathbf x) \\}$. \n Through $\\mathcal D_{\\boldsymbol \\theta}$, \nRED will provide a \\textbf{benign example estimate} $\\mathbf x_{\\mathrm{RED}}$ and a \\textbf{adversarial example estimate} $\\mathbf x_{\\mathrm{RED}}^\\prime$ as below:\n\n\n{\n\\vspace*{-6mm}\n\\small{\n\\begin{align}\\label{eq: RED_results}\n \\mathbf x_{\\mathrm{RED}} = \\mathcal D_{\\boldsymbol{\\theta}} (\\mathbf x^{\\prime}), \\quad \\mathbf x_{\\mathrm{RED}}^\\prime = \\underbrace{ \\mathbf x^{\\prime} -\\mathbf x_{\\mathrm{RED}} }_\\text{perturbation estimate} + \\mathbf x,\n\\end{align}\n}}%\n\nwhere a \\textbf{perturbation estimate} is given by subtracting the RED's output with its input, \n$ \\mathbf x^{\\prime} - \\mathcal D_{\\boldsymbol \\theta}(\\mathbf x^{\\prime})$.\n\n\n\n\\begin{wrapfigure}{r}{60mm}\n\\vspace{-5mm}\n\\centering{\n\\begin{tabular}{c}\n\\hspace*{-3mm}\n\\includegraphics[width=0.4\\textwidth]{Figs\/RED_figure.png}\n\\end{tabular}\n}\n\\vspace{-5mm}\n\\caption{\\footnotesize{Overview of {RED} versus {\\text{AD}}.\n}}\n\\vspace{-4mm}\n\\label{fig: REDvsAD}\n\\end{wrapfigure}\nWe \\textbf{highlight} that RED yields a new defensive approach aiming to `diagnose' the perturbation details of an existing adversarial example in a post-hoc, forensic manner. \nThis is different from {adversarial detection ({\\text{AD}})}. Fig.\\ref{fig: REDvsAD} provides a visual comparison of {RED} with {\\text{AD}}.\n Although {\\text{AD}} is also designed in a post-hoc manner, it aims to determine whether an input is an adversarial example for a victim model based on certain statistics on model features or logits. Besides, \n{\\text{AD}} might be used as a pre-processing step of {RED}, where the former provides `detected' adversarial examples for fine-level RED diagnosis.\n{In our experiments, we will also show that the outputs of RED can be leveraged to guide the design of adversarial detection. In this sense, {RED} and {\\text{AD}} are complementary building blocks within a closed loop.}\n\n\n\n\n\\paragraph{Challenges of RED}\nIn this work, we will specify the RED model $\\mathcal D_{\\boldsymbol{\\theta}}$ as a {denoising network}. However,\nit is highly non-trivial to design a proper denoiser for RED. \nSpeaking at a high level, \nthere exist two {main challenges}. \nFirst, unlike \nthe conventional image denoising strategies~\\citep{DNCNN_denoiser},\nthe design of an RED-aware denoiser needs to take into account the effects of victim models and data properties of adversary-benignity pairs. \nSecond, it might be insufficient to merely minimize the reconstruction error as the adversarial perturbation is \nfinely-crafted~\\citep{niu_limitations_2020}. 
\begin{wrapfigure}{r}{60mm}
\vspace{-5mm}
\centering{
\begin{tabular}{c}
\hspace*{-3mm}
\includegraphics[width=0.4\textwidth]{Figs/RED_figure.png}
\end{tabular}
}
\vspace{-5mm}
\caption{\footnotesize{Overview of {RED} versus {\text{AD}}.
}}
\vspace{-4mm}
\label{fig: REDvsAD}
\end{wrapfigure}
We \textbf{highlight} that RED yields a new defensive approach aiming to `diagnose' the perturbation details of an existing adversarial example in a post-hoc, forensic manner.
This is different from {adversarial detection ({\text{AD}})}. Fig.\,\ref{fig: REDvsAD} provides a visual comparison of {RED} with {\text{AD}}.
Although {\text{AD}} is also designed in a post-hoc manner, it aims to determine whether an input is an adversarial example for a victim model based on certain statistics of model features or logits. Besides,
{\text{AD}} might be used as a pre-processing step of {RED}, where the former provides `detected' adversarial examples for fine-level RED diagnosis.
{In our experiments, we will also show that the outputs of RED can be leveraged to guide the design of adversarial detection. In this sense, {RED} and {\text{AD}} are complementary building blocks within a closed loop.}

\paragraph{Challenges of RED.}
In this work, we will specify the RED model $\mathcal D_{\boldsymbol{\theta}}$ as a {denoising network}. However,
it is highly non-trivial to design a proper denoiser for RED.
At a high level,
there exist two {main challenges}.
First, unlike
the conventional image denoising strategies~\citep{DNCNN_denoiser},
the design of an RED-aware denoiser needs to take into account the effects of victim models and the data properties of adversary-benignity pairs.
Second, it might be insufficient to merely minimize the reconstruction error, as the adversarial perturbation is
finely-crafted~\citep{niu_limitations_2020}. Therefore, either under- or over-denoising will lead to poor RED performance.

\section{{RED Evaluation Metrics and Denoising-Only Baseline}} \label{sec:evaluation_metric}

Since RED is different from existing defensive approaches, we first develop new performance metrics of RED, ranging from
pixel-level reconstruction error to attribution-level adversary saliency region.
We next leverage the proposed performance metrics to demonstrate why a pure image denoiser is incapable of fulfilling RED.

\paragraph{RED evaluation metrics.}
Given a learned RED model $\mathcal D_{\boldsymbol{\theta}}$, the RED performance will be evaluated over a testing dataset $(\mathbf x^{\prime}, \mathbf x) \in \mathcal D_{\mathrm{test}}$; see implementation details in Sec.\,\ref{sec: exp}. Here, $\mathbf x^{\prime}$ is used as the testing input of the RED model,
and $\mathbf x$ is the associated ground-truth benign example for comparison. The benign example estimate $\mathbf x_{\mathrm{RED}}$ and the adversarial example estimate $\mathbf x_{\mathrm{RED}}^\prime$ are obtained following \eqref{eq: RED_results}.
The RED evaluation pipeline covers the following aspects: \ding{172} pixel-level reconstruction error, \ding{173} prediction-level inference alignment, and \ding{174} attribution-level adversary saliency region.

\noindent \ding{226} \ding{172} \textbf{Pixel-level}: Reconstruction error given by
 $d(\mathbf x, \mathbf x_\mathrm{RED})=
\mathbb E_{(\mathbf x^\prime, \mathbf x) \in \mathcal D_{\mathrm{test}}} [\| \mathbf x_{\mathrm{RED}} - \mathbf x \|_2 ]$.

\noindent \ding{226} \ding{173} \textbf{Prediction-level}:
Prediction alignment (\text{PA}) between the pair of \textit{benign} example and its estimate $(\mathbf x_{\mathrm{RED}}, \mathbf x)$ and {\text{PA}} between the pair of \textit{adversarial} example and its estimate $(\mathbf x_{\mathrm{RED}}^\prime, \mathbf x^\prime)$, given by

{
\small{
\vspace*{-5mm}
\begin{align}
\text{\text{PA}}_{\mathrm{benign}} = \frac{ \mathrm{card}( \{ (\mathbf x_{\mathrm{RED}}, \mathbf x) \, | \,F(\mathbf x_{\mathrm{RED} } ) = F(\mathbf x ) \} ) }{\mathrm{card}(\mathcal D_{\mathrm{test}})},~
\text{\text{PA}}_{\mathrm{adv}}= \frac{ \mathrm{card}( \{ (\mathbf x_{\mathrm{RED}}^\prime, \mathbf x^\prime) \, | \, F(\mathbf x_{\mathrm{RED} }^\prime ) = F(\mathbf x^\prime ) \} ) }{\mathrm{card}(\mathcal D_{\mathrm{test}})} \nonumber
\end{align}
}}%
where $\mathrm{card}(\cdot)$ denotes the cardinality of a set and $F$ refers to the prediction label provided by the victim model $f$.
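For reference, a minimal sketch of the pixel-level and prediction-level metrics is given below; \texttt{victim} plays the role of $f$ (with its argmax label acting as $F$), and \texttt{test\_loader} is assumed to yield $(\mathbf x^\prime, \mathbf x)$ pairs from $\mathcal D_{\mathrm{test}}$.
\begin{verbatim}
# Minimal sketch of d(x, x_RED), PA_benign, and PA_adv over a test set.
import torch

@torch.no_grad()
def red_metrics(denoiser, victim, test_loader):
    n, dist, pa_benign, pa_adv = 0, 0.0, 0, 0
    for x_adv, x in test_loader:                    # pairs (x', x)
        x_red = denoiser(x_adv)                     # benign estimate
        x_adv_red = (x_adv - x_red) + x             # adversarial estimate
        dist += (x_red - x).flatten(1).norm(dim=1).sum().item()
        label = lambda z: victim(z).argmax(dim=1)   # prediction label F(.)
        pa_benign += (label(x_red) == label(x)).sum().item()
        pa_adv += (label(x_adv_red) == label(x_adv)).sum().item()
        n += x.size(0)
    return dist / n, pa_benign / n, pa_adv / n
\end{verbatim}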
\noindent \ding{226} \ding{174} \textbf{Attribution-level}:
Input attribution alignment ({\text{IAA}}) between the benign pair $(\mathbf x_{\mathrm{RED}}, \mathbf x)$ and between the adversarial pair $(\mathbf x_{\mathrm{RED}}^\prime, \mathbf x^\prime)$. In this work, we adopt
GradCAM \citep{selvaraju_grad-cam_2020} to attribute the predictions of classes back to input
saliency regions.
The rationale behind {\text{IAA}} is that the unnoticeable adversarial perturbations (in the pixel space) can introduce an evident input attribution discrepancy with respect to (w.r.t.) the true label $y$ and the adversary's target label $y^\prime$ \citep{boopathy2020proper,xu2019structured}. Thus, an accurate RED should be able to
erase the adversarial attribution effect through $\mathbf x_{\mathrm{RED}}$, and
estimate the adversarial intent through the saliency region of $\mathbf x_{\mathrm{RED}}^\prime$ (see Fig.\,\ref{fig: REDvsAD} for illustration).

\paragraph{Denoising-Only (DO) baseline.}
\begin{wrapfigure}{r}{70mm}
 \vspace{-10mm}
 \centering
 \begin{adjustbox}{max width=0.5\textwidth }
 \begin{tabular}{@{\hskip 0.00in}c @{\hskip 0.00in}c | @{\hskip 0.00in} @{\hskip 0.02in} c @{\hskip 0.02in} | @{\hskip 0.02in} c @{\hskip 0.02in} |@{\hskip 0.02in} c @{\hskip 0.02in} |@{\hskip 0.02in} c @{\hskip 0.02in}
 }
&
\colorbox{lightgray}{ \textbf{Input image}}
&
\colorbox{lightgray}{ \textbf{DO}}
&
\colorbox{lightgray}{ \textbf{Groundtruth}}

\\
 \begin{tabular}{@{}c@{}}

\rotatebox{90}{\parbox{10em}{\centering \textbf{\large{Benign example $\mathbf x$/$\mathbf x_{\mathrm{RED}}$}}}}
 \\

\rotatebox{90}{\parbox{10em}{\centering \textbf{\large{Adv. example $\mathbf x^\prime$/$\mathbf x_{\mathrm{RED}}^\prime$}
}}}
example $\\mathbf x^\\prime$\/$\\mathbf x_{\\mathrm{RED}}^\\prime$} \n}}}\n\n\\end{tabular} \n& \n\\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}}\n\n\\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/clean.jpg}\n\n } \n\\end{tabular}\n \\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}}\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/adv.jpg}} \n\\end{tabular}\n\\end{tabular}\n&\n\\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}}\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\begin{tabular}{@{\\hskip 0.00in}c@{\\hskip 0.00in}}\n \\parbox{10em}{\\centering $I(\\cdot, y)$} \n \\end{tabular} \n & \n \\begin{tabular}{@{\\hskip 0.00in}c@{\\hskip 0.00in}}\n \\parbox{10em}{\\centering $I(\\cdot, y^\\prime )$} \n \\end{tabular} \n\\end{tabular}\\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DOcc.jpg}\n\n } & \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DOct.jpg}} \n\\end{tabular}\n \\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DOac.jpg}} & \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DOat.jpg}} \\\\\n\\end{tabular}\n\\end{tabular}\n&\n\\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}}\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\begin{tabular}{@{\\hskip 0.00in}c@{\\hskip 0.00in}}\n \\parbox{10em}{\\centering $I(\\cdot, y)$} \n \\end{tabular} \n & \n \\begin{tabular}{@{\\hskip 0.00in}c@{\\hskip 0.00in}}\n \\parbox{10em}{\\centering $I(\\cdot, y^\\prime )$} \n \\end{tabular} \n\\end{tabular}\\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/cc_gt.jpg}} & \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/ct_gt.jpg}} \n\\end{tabular}\n \\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/ac_gt.jpg}} & \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/at_gt.jpg}} \n \\\\\n\\end{tabular}\n\\end{tabular}\n\\end{tabular}\n \\end{adjustbox}\n \\vspace*{-3mm}\n\\caption{\\footnotesize{\nIAA of DO compared with ground-truth.\n}}\n\\label{fig: DOIAA}\n\n\\end{wrapfigure}\nWe further show that how a pure image denoiser, a `must-try' baseline, is insufficient of \ntackling the RED problem.\nThis failure case drive us to rethink the denoising strategy through the lens of RED. \nFirst, we obtain the denoising network by minimizing the reconstruction error:\n\n{\\small\n\\vspace*{-2.5mm}\n\\begin{align}\n \\begin{array}{ll}\n\\displaystyle \\minimize_{\\boldsymbol{\\theta}} & \\ell_{\\mathrm{denoise}}(\\boldsymbol{\\theta}; \\Omega) \\Def \\mathbb E_{(\\mathbf x^{\\prime}, \\mathbf x) \\in \\Omega }\\| \\mathcal D_{\\boldsymbol{\\theta}} (\\mathbf x^{\\prime}) - \\mathbf x \\|_1,\n \n \\end{array}\n \\label{eq: MAE_only}\n\\end{align}\n}%\nwhere a Mean Absolute Error (MAE)-type loss is used for denoising \\citep{liao_defense_2018}, and the creation of training dataset $\\Omega$ is illustrated in Sec.\\,\\ref{sec: exp_setup}.\nLet us then evaluate the performance of {DO} through the non-adversarial prediction alignment $\\mathrm{PA}_\\mathrm{benign}$ and {\\text{IAA}}. 
Let us then evaluate the performance of {DO} through the non-adversarial prediction alignment $\mathrm{PA}_\mathrm{benign}$ and {\text{IAA}}. We find that
$\mathrm{PA}_\mathrm{benign} = 42.8\%$ for DO, and
Fig.\,\ref{fig: DOIAA} shows the {{\text{IAA}}} performance of DO w.r.t. an input example. As we can see, DO is not capable of exactly recovering the adversarial saliency regions compared to the ground-truth adversarial perturbations.
These results suggest that
DO-based RED lacks
reconstruction ability at the prediction and attribution levels. {Another naive approach is to perform an adversarial attack back on $\mathbf x^\prime$, yet it requires additional assumptions and might not precisely recover the ground-truth perturbations. The detailed limitations are discussed in Appendix \ref{sec:naive_red}.}

\section{{Class-Discriminative Denoising for RED}}
\label{sec: approach}

\begin{wrapfigure}{r}{70mm}
 \vspace*{-5mm}
\centerline{
\includegraphics[width=.46\textwidth,height=!]{Figs/RED_figure_overview.png}
}
\vspace*{-5mm}
\caption{\footnotesize{
{\text{CDD-RED}} overview.
}}
 \label{fig: overview}
 \vspace*{-5mm}
\end{wrapfigure}

In this section, we propose a novel
\underline{C}lass-\underline{D}iscriminative
\underline{D}enoising based \underline{RED} approach termed {{\text{CDD-RED}}}; see Fig.\,\ref{fig: overview} for an overview.
{{\text{CDD-RED}}} contains two key components. First, we propose a
{\text{PA}}
regularization to enforce the prediction-level stability of both the estimated benign example $\mathbf x_{\mathrm{RED}}$ and the estimated adversarial example $\mathbf x_{\mathrm{RED}}^\prime$ with respect to their true counterparts $\mathbf x$ and $\mathbf x^\prime$, respectively.
Second,
we propose a data augmentation strategy
to improve the {{RED}}'s generalization without losing its class-discriminative ability.

\paragraph{Benign and adversarial prediction alignment.}
To accurately estimate the adversarial perturbation from an adversarial instance, the lessons from the DO approach suggest preserving the class-discriminative ability of the RED estimates so that they align with the original predictions, given by $\mathbf x_{\mathrm{RED}}$ vs. $\mathbf x$, and $\mathbf x_{\mathrm{RED}}^\prime$ vs. $\mathbf x^\prime$.
$\\mathbf x^\\prime$.\nSpurred by that, \nthe training objective \nof {\\text{CDD-RED}} is required \n not only to minimize the reconstruction error like \\eqref{eq: MAE_only} but also to maximize {{\\text{PA}}}, namely,\n`clone' the class-discriminative ability of original data. \nTo achieve this goal, we augment the denoiser $\\mathcal D_{\\boldsymbol{\\theta}}$ with a known classifier $\\hat{f}$ to generate predictions of \nestimated benign and adversarial examples (see Fig.\\,\\ref{fig: overview}), \\textit{i.e.}, $\\mathbf x_{\\mathrm{RED}}$ and $\\mathbf x_{\\mathrm{RED}}^{\\prime}$ defined in \\eqref{eq: RED_results}. \nBy contrasting $\\hat {f} (\\mathbf x_{\\mathrm{RED}})$ with \n$\\hat{f}(\\mathbf x) $, and $\\hat {f} (\\mathbf x_{\\mathrm{RED}}^\\prime)$ with \n$\\hat{f}(\\mathbf x^{\\prime}) $, we can promote {\\text{PA}} by minimizing the prediction gap between true examples and estimated ones:\n\n{\\vspace*{-3.5mm}\n\\small\\begin{align}\\label{eq: IA}\n \\ell_{\\text{\\text{PA}}} ( \\boldsymbol{\\theta}; \\Omega) = \n \\mathbb E_{(\\mathbf x^\\prime, \\mathbf x) \\in \\Omega } [ \\ell_{\\text{\\text{PA}}} ( \\boldsymbol{\\theta}; \\mathbf x^\\prime, \\mathbf x) ], ~ \n \\ell_{\\text{\\text{PA}}} ( \\boldsymbol{\\theta}; \\mathbf x^\\prime, \\mathbf x) \\Def \\underbrace{ {\\mathrm{CE}}( \\hat {f} (\\mathbf x_{\\mathrm{RED}}), \\hat{f}(\\mathbf x)) }_\\text{{\\text{PA}} for benign prediction} + \\underbrace{ {\\mathrm{CE}}( \\hat {f} (\\mathbf x_{\\mathrm{RED}}^{\\prime}), \\hat{f}(\\mathbf x^{\\prime})) }_\\text{{\\text{PA}} for adversarial prediction}, \n\\end{align}}%\nwhere CE denotes the cross-entropy loss.\nTo enhance the class-discriminative ability, it is desirable to\nintegrate the denoising loss \\eqref{eq: MAE_only} with the PA regularization \\eqref{eq: IA}, leading to $\\ell_{\\mathrm{denoise}} + \\lambda \\ell_{\\text{\\text{PA}}}$, where $\\lambda > 0$ is a regularization parameter. \n\n \n\nTo address this issue, we will further propose a data augmentation method to improve the denoising ability without losing the advantage of {\\text{PA}} regularization.\n\n\\paragraph{Proper data augmentation improves RED.}\n\n\\begin{wrapfigure}{r}{70mm}\n\\vspace*{-5mm}\n\\centering{\n\\begin{tabular}{c}\n\\hspace*{-3mm}\n\\includegraphics[width=0.48\\textwidth]{Figs\/aug_fig.pdf}\n\\end{tabular}\n}\n\\vspace*{-6mm}\n\\caption{\\footnotesize{The influence of different data augmentations. `Base' refers to the base training without augmentation.\n}}\n\\vspace*{-5mm}\n\\label{fig: aug_study}\n\\end{wrapfigure}\n\nThe rationale behind \n incorporating image transformations into {\\text{CDD-RED}} lies in two aspects.\nFirst, data transformation can\nmake RED foveated to the most informative attack artifacts since an adversarial instance could be {sensitive} to input transformations \\citep{luo2015foveation,athalye18b,xie2019improving,li2020practical, fan2021does}.\nSecond, the identification of \n transformation-resilient benign\/adversarial instances may enhance the capabilities of {\\text{PA}} and {\\text{IAA}}.\n \n However, it is highly non-trivial to determine the most appropriate data augmentation operations. For example, a pixel-sensitive data transformation, e.g., Gaussian blurring and colorization, would hamper the reconstruction-ability of the original \n adversary-benignity pair $(\\mathbf x^\\prime, \\mathbf x)$. 
To enhance the class-discriminative ability, it is desirable to
integrate the denoising loss \eqref{eq: MAE_only} with the {\text{PA}} regularization \eqref{eq: IA}, leading to $\ell_{\mathrm{denoise}} + \lambda \ell_{\text{\text{PA}}}$, where $\lambda > 0$ is a regularization parameter. However, a denoiser trained in this way over the finite set of adversary-benignity pairs $\Omega$ may still generalize poorly to unseen examples.
To address this issue, we will further propose a data augmentation method to improve the denoising ability without losing the advantage of {\text{PA}} regularization.

\paragraph{Proper data augmentation improves RED.}

\begin{wrapfigure}{r}{70mm}
\vspace*{-5mm}
\centering{
\begin{tabular}{c}
\hspace*{-3mm}
\includegraphics[width=0.48\textwidth]{Figs/aug_fig.pdf}
\end{tabular}
}
\vspace*{-6mm}
\caption{\footnotesize{The influence of different data augmentations. `Base' refers to the base training without augmentation.
}}
\vspace*{-5mm}
\label{fig: aug_study}
\end{wrapfigure}

The rationale behind
incorporating image transformations into {\text{CDD-RED}} lies in two aspects.
First, data transformation can
make RED foveated to the most informative attack artifacts, since an adversarial instance could be {sensitive} to input transformations \citep{luo2015foveation,athalye18b,xie2019improving,li2020practical, fan2021does}.
Second, identifying
transformation-resilient benign/adversarial instances may enhance the capabilities of {\text{PA}} and {\text{IAA}}.

However, it is highly non-trivial to determine the most appropriate data augmentation operations. For example, a pixel-sensitive data transformation, \textit{e.g.}, Gaussian blurring or colorization, would hamper the reconstruction ability on the original
adversary-benignity pair $(\mathbf x^\prime, \mathbf x)$. Therefore, we focus on spatial image transformations, including rotation, translation, cropping \& padding, cutout, and CutMix \citep{yun2019cutmix}, which preserve the original perturbation in a linear way.
In Fig.\,\ref{fig: aug_study}, we evaluate the RED performance, in terms of pixel-level reconstruction error and prediction-level alignment accuracy, for different kinds of spatial image transformations. As we can see, CutMix and cropping \& padding can improve both metrics simultaneously, and are thus considered the appropriate augmentations to boost RED. Furthermore, we empirically find that combining the two transformations can further improve the performance.

Let $\mathcal T$ denote a transformation set, including the cropping \& padding and CutMix operations.
With the aid of the denoising loss \eqref{eq: MAE_only}, the {\text{PA}} regularization \eqref{eq: IA}, and the data transformations $\mathcal T$, we then cast
the overall training objective of {\text{CDD-RED}} as

{\small
 \vspace*{-3.5mm}
 \begin{align}
 \hspace*{-5mm} \begin{array}{ll}
\displaystyle \minimize_{\boldsymbol{\theta}} &
\underbrace{ \mathbb E_{(\mathbf x^{\prime}, \mathbf x) \in \Omega , t \sim \mathcal T }\| \mathcal D_{\boldsymbol{\theta}} (t(\mathbf x^{\prime})) - t(\mathbf x) \|_1 }_\text{$\ell_{\mathrm{denoise}}$ \eqref{eq: MAE_only} with data augmentations} + \underbrace{ \lambda
\mathbb E_{(\mathbf x^\prime, \mathbf x) \in \Omega, t \sim \check {\mathcal T} } [
\ell_{\text{\text{PA}}} ( \boldsymbol{\theta}; t(\mathbf x^\prime), t(\mathbf x))
] }_\text{$\ell_{\text{\text{PA}}}$ \eqref{eq: IA} with data augmentation via $\check{\mathcal T}$},
 \end{array}
\hspace*{-5mm} \label{eq: overall_DeRED}
\end{align}}%
where
$\check{\mathcal T}$ denotes a properly-selected subset of
$\mathcal T$, and $\lambda > 0$ is a regularization parameter.
In the {\text{PA}} regularizer of \eqref{eq: overall_DeRED}, we need to avoid the scenario of
over-transformation, where data augmentation alters the classifier's original decision.
This suggests
$ \check{\mathcal T} = \{ t \in \mathcal T\,| \, \hat{F}(t(\mathbf{x})) = \hat{F}(\mathbf{x}), \hat{F}(t(\mathbf{x}^{\prime})) = \hat{F}(\mathbf{x}^{\prime}) \}$, where $\hat{F}$ represents the prediction label of the pre-trained classifier $\hat f$, \textit{i.e.}, $\hat{F}(\cdot)=\mathrm{argmax}(\hat f(\cdot))$.

\section{Experiments}
\label{sec: exp}
We show the effectiveness of our proposed method in {four aspects}: \textbf{a)} the reconstruction error of adversarial perturbation inversion, \textit{i.e.,} $d(\mathbf x, \mathbf x_\mathrm{RED})$,
\textbf{b)} the class-discriminative ability of the benign and adversarial example estimates, \textit{i.e.},
$\text{\text{PA}}_{\mathrm{benign}}$ and $\text{\text{PA}}_{\mathrm{adv}}$ by victim models,
\textbf{c)} adversary saliency region recovery, \textit{i.e.}, attribution alignment, and
\textbf{d)} RED evaluation over unseen attack types and adaptive attacks.

\SubSection{Experiment setup}
\label{sec: exp_setup}

\paragraph{Attack datasets.}
To train and test RED models, we generate adversarial examples on the ImageNet dataset \citep{deng2009imagenet}.
We consider \textbf{$3$ attack methods} including PGD \citep{madry2017towards}, FGSM \citep{Goodfellow2015explaining}, and CW attack \citep{carlini2017towards}, applied to \textbf{$5$ models} including pre-trained ResNet18 (Res18), ResNet50 (Res50) \citep{DBLP:journals/corr/HeZRS15}, VGG16, VGG19, and InceptionV3 (IncV3) \citep{DBLP:journals/corr/SzegedyVISW15}.
{The detailed parameter settings can be found in Appendix \ref{sec:dataset_details}.} Furthermore, to evaluate the RED performance on perturbation types unseen during training, an additional 2K adversarial examples generated by \textbf{AutoAttack} \citep{croce2020reliable} and 1K adversarial examples generated by \textbf{Feature Attack} \citep{sabour2015adversarial} are included as the unseen testing dataset. AutoAttack is applied to VGG19, Res50, and \textbf{two new victim models}, i.e., AlexNet and Robust ResNet50 (R-Res50) obtained via fast adversarial training \citep{wong2020fast}, while Feature Attack is applied to VGG19 and AlexNet. The rationale behind considering Feature Attack is that feature adversaries have been recognized as an effective way to circumvent adversarial detection \citep{tramer2020adaptive}. Thus, it provides a supplement of detection-aware attacks.

\paragraph{RED model configuration, training and evaluation.} During the training of the RED denoisers, VGG19 \citep{vgg19} is chosen as the pretrained classifier $\hat{f}$ for {\text{PA}} regularization. Although different victim models were used for generating adversarial examples, we will show that the inference guided by VGG19 is able to accurately estimate the true image class and the intent of the adversary. In terms of the architecture of $\mathcal D_{\boldsymbol{\theta}}$, DnCNN \citep{DNCNN_denoiser} is adopted.
The RED problem is solved using an Adam optimizer \citep{KingmaB2015adam} with an initial learning rate of $10^{-4}$, which decays by a factor of $10$ every $140$ training epochs. In \eqref{eq: overall_DeRED}, the regularization parameter $\lambda$ is set to $0.025$. The transformations for data augmentation include CutMix and cropping \& padding. The maximum number of training epochs is set to $300$.
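To make this recipe concrete, a minimal PyTorch-style sketch of one training step of \eqref{eq: overall_DeRED} under the above setup is given below. The \texttt{denoiser}, \texttt{classifier}, and batch-level transformation callables are assumptions, \texttt{pa\_loss} refers to the sketch of \eqref{eq: IA} above, and the $\check{\mathcal T}$ membership test is applied per batch rather than per example for simplicity.
\begin{verbatim}
# Minimal sketch of one CDD-RED training step, Eq. (4), with lambda = 0.025.
import random, torch
import torch.nn.functional as F

def cdd_red_step(denoiser, classifier, opt, x_adv, x, transforms, lam=0.025):
    t = random.choice(transforms)            # t ~ T (CutMix, cropping & padding)
    tx_adv, tx = t(x_adv), t(x)
    loss = F.l1_loss(denoiser(tx_adv), tx)   # augmented denoising term

    with torch.no_grad():                    # T_check: reject over-transformation
        pred = lambda z: classifier(z).argmax(dim=1)
        keep = (pred(tx) == pred(x)).all() and (pred(tx_adv) == pred(x_adv)).all()
    if keep:                                 # PA term only for t in T_check
        loss = loss + lam * pa_loss(denoiser, classifier, tx_adv, tx)

    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)  # decayed 10x / 140 epochs
\end{verbatim}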
{The computation cost and the ablation study of {\text{CDD-RED}} are in Appendix \ref{sec:cost} and \ref{sec:ablation}, respectively.}

\paragraph{Baselines.}
We compare {\text{CDD-RED}} with two baseline approaches: \textbf{a)} the conventional denoising-only (DO) approach with the objective function \eqref{eq: MAE_only}; \textbf{b)} the state-of-the-art Denoised Smoothing (DS) \citep{salman2020denoised} approach that considers both the reconstruction error and the {\text{PA}} for benign examples in the objective function.
Both methods are tuned to their best configurations.

\subsection{Main results}

\paragraph{Reconstruction error {$d(\mathbf x, \mathbf x_\mathrm{RED})$} and {\text{PA}}.}
Table \ref{table:RMSE_IA_comparison} presents the comparison of {\text{CDD-RED}} with the baseline denoising approaches in terms of $d(\mathbf x, \mathbf x_\mathrm{RED})$, $d(f(\mathbf x), f(\mathbf x_\mathrm{RED}))$, $d(f(\mathbf x'), f(\mathbf x'_\mathrm{RED}))$, $\text{\text{PA}}_{\mathrm{benign}}$, and $\text{\text{PA}}_{\mathrm{adv}}$ on the testing dataset. As we can see, our approach ({\text{CDD-RED}}) improves the class-discriminative ability by 42.91\% from the benign perspective and by 8.46\% from the adversarial perspective, at the cost of a slightly larger reconstruction error, compared with the DO approach.
\begin{wraptable}{r}{58mm}
\begin{center}
\vspace*{-6mm}
\caption{\footnotesize{The performance comparison among DO, DS and {\text{CDD-RED}} on the testing dataset.}} \label{table:RMSE_IA_comparison}
\begin{threeparttable}
\resizebox{0.4\textwidth}{!}{
\begin{tabular}{c|c|c|c}
\toprule\hline
 & DO & DS & \text{CDD-RED} \\ \hline
$d(\mathbf x, \mathbf x_\mathrm{RED})$ & 9.32 & 19.19 & 13.04 \\ \hline
$d(f(\mathbf x), f(\mathbf x_\mathrm{RED}))$ & 47.81 & 37.21 & 37.07 \\ \hline
$d(f(\mathbf x'), f(\mathbf x'_\mathrm{RED}))$ & 115.09 & 150.02 & 78.21 \\ \hline
$\text{\text{PA}}_{\mathrm{benign}}$ & 42.80\% & 86.64\% & 85.71\% \\ \hline
$\text{\text{PA}}_{\mathrm{adv}}$ & 71.97\% & 72.47\% & 80.43\% \\ \hline\bottomrule
\end{tabular}}
\end{threeparttable}
\end{center}
\vspace*{-7mm}
\end{wraptable}
In contrast to DS,
{\text{CDD-RED}} achieves a similar $\text{\text{PA}}_{\mathrm{benign}}$
but an improved pixel-level denoising error and $\text{\text{PA}}_{\mathrm{adv}}$. Furthermore, {\text{CDD-RED}} achieves the best logit-level reconstruction error for both $f(\mathbf x_\mathrm{RED})$ and $f(\mathbf x'_\mathrm{RED})$ among the three approaches. This implies that $\mathbf{x}_\mathrm{RED}$ rendered by {\text{CDD-RED}} achieves a prediction highly similar to that of the true benign example $\mathbf x$, and that the perturbation estimate ${\mathbf x^{\prime} -\mathbf x_{\mathrm{RED}}}$ yields a misclassification effect similar to the ground-truth perturbation.
\n{Besides, {\\text{CDD-RED}} is robust against attacks with different hyperparameters settings, details can be found in Appendix \\ref{sec:diff_attack_hyper}.}\n\n\n\n\n\n\n\n\n\n \\begin{figure*}[htb]\n \\centering\n \\begin{adjustbox}{max width=1\\textwidth }\n \\begin{tabular}{@{\\hskip 0.00in}c @{\\hskip 0.00in}c | @{\\hskip 0.00in} @{\\hskip 0.02in} c @{\\hskip 0.02in} | @{\\hskip 0.02in} c @{\\hskip 0.02in} |@{\\hskip 0.02in} c @{\\hskip 0.02in} |@{\\hskip 0.02in} c @{\\hskip 0.02in} \n }\n& \n\\colorbox{lightgray}{ \\textbf{Input image}}\n&\n\\colorbox{lightgray}{ \\textbf{DO}} \n& \n\\colorbox{lightgray}{ \\textbf{DS}} \n&\n\\colorbox{lightgray}{ \\textbf{{\\text{CDD-RED}} (ours)}}\n&\n\\colorbox{lightgray}{ \\textbf{Groundtruth}}\n\n\\\\\n \\begin{tabular}{@{}c@{}} \n\n\\rotatebox{90}{\\parbox{10em}{\\centering \\textbf{\\large{Benign example $\\mathbf x$\/$\\mathbf x_{\\mathrm{RED}}$}}}}\n \\\\\n\n\\rotatebox{90}{\\parbox{10em}{\\centering \\textbf{\\large{Adv. example $\\mathbf x^\\prime$\/$\\mathbf x_{\\mathrm{RED}}^\\prime$} \n}}}\n\n\\end{tabular} \n& \n\\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}}\n\n\\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/clean.jpg}\n\n } \n\\end{tabular}\n \\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}}\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/adv.jpg}} \n\\end{tabular}\n\\end{tabular}\n&\n\\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}}\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\begin{tabular}{@{\\hskip 0.00in}c@{\\hskip 0.00in}}\n \\parbox{10em}{\\centering $I(\\cdot, y)$} \n \\end{tabular} \n & \n \\begin{tabular}{@{\\hskip 0.00in}c@{\\hskip 0.00in}}\n \\parbox{10em}{\\centering $I(\\cdot, y^\\prime )$} \n \\end{tabular} \n\\end{tabular}\\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DOcc.jpg}\n\n } & \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DOct.jpg}} \n\\end{tabular}\n \\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DOac.jpg}} & \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DOat.jpg}} \\\\\n\\end{tabular}\n\\end{tabular}\n&\n\\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}}\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\begin{tabular}{@{\\hskip 0.00in}c@{\\hskip 0.00in}}\n \\parbox{10em}{\\centering $I(\\cdot, y)$} \n \\end{tabular} \n & \n \\begin{tabular}{@{\\hskip 0.00in}c@{\\hskip 0.00in}}\n \\parbox{10em}{\\centering $I(\\cdot, y^\\prime )$} \n \\end{tabular} \n\\end{tabular}\\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DScc.jpg}} & \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DSct.jpg}} \n\\end{tabular}\n \\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DSac.jpg}} & \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DSat.jpg}} \n \\\\\n\\end{tabular}\n\\end{tabular}\n&\n\\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}}\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\begin{tabular}{@{\\hskip 0.00in}c@{\\hskip 0.00in}}\n 
\\parbox{10em}{\\centering $I(\\cdot, y)$} \n \\end{tabular} \n & \n \\begin{tabular}{@{\\hskip 0.00in}c@{\\hskip 0.00in}}\n \\parbox{10em}{\\centering $I(\\cdot, y^\\prime )$} \n \\end{tabular} \n\\end{tabular}\\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DeREDcc.jpg}} & \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DeREDct.jpg}} \n\\end{tabular}\n \\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DeREDac.jpg}} & \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/DeREDat.jpg}} \n \\\\\n\\end{tabular}\n\\end{tabular}\n&\n\\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}}\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\begin{tabular}{@{\\hskip 0.00in}c@{\\hskip 0.00in}}\n \\parbox{10em}{\\centering $I(\\cdot, y)$} \n \\end{tabular} \n & \n \\begin{tabular}{@{\\hskip 0.00in}c@{\\hskip 0.00in}}\n \\parbox{10em}{\\centering $I(\\cdot, y^\\prime )$} \n \\end{tabular} \n\\end{tabular}\\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/cc_gt.jpg}} & \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/ct_gt.jpg}} \n\\end{tabular}\n \\\\\n \\begin{tabular}{@{\\hskip 0.02in}c@{\\hskip 0.02in}c@{\\hskip 0.02in} }\n \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/ac_gt.jpg}} & \\parbox[c]{10em}{\\includegraphics[width=10em]{Figs\/piggybank\/at_gt.jpg}} \n \\\\\n\\end{tabular}\n\\end{tabular}\n\n\\end{tabular}\n \\end{adjustbox}\n\\caption{\\footnotesize{\nInterpretation ($I$) of benign ($\\mathbf x$\/$\\mathbf x_{\\mathrm{RED}}$) and adversarial ($\\mathbf x^\\prime$\/$\\mathbf x_{\\mathrm{RED}}^\\prime$) image w.r.t. the true label {$y$=`ptarmigan'} and the adversary targeted label {$y^\\prime$=`shower curtain'}.\nWe compare three methods of RED training, DO, DS, and {\\text{CDD-RED}} as our method, to the ground-truth interpretation. Given an RED method, the first column is $I(\\mathbf x_{\\mathrm{RED}}, y)$ versus $I(\\mathbf x_{\\mathrm{RED}}^\\prime , y)$, the second column is $I(\\mathbf x_{\\mathrm{RED}}, y^\\prime)$ versus $I(\\mathbf x_{\\mathrm{RED}}^\\prime , y^\\prime)$, and all maps under each RED method are normalized w.r.t. their largest value. For the ground-truth, the first column is $I(\\mathbf x, y)$ versus $I(\\mathbf x^\\prime , y)$, the second column is $I(\\mathbf x, y^\\prime)$ versus $I(\\mathbf x^\\prime , y^\\prime)$.\n}}\n\\label{fig: attribution}\n\n\\end{figure*}\n\n\n\n\n\\paragraph{Attribution alignment.}\nIn addition to pixel-level alignment and prediction-level alignment to evaluate the RED performance, attribution alignment is examined in what follows. \nFig.\\,\\ref{fig: attribution} presents attribution maps generated by GradCAM \n{in terms of $ I(\\mathbf x, y)$, $ I(\\mathbf x^\\prime, y)$, $I(\\mathbf x, y^\\prime)$, and $ I(\\mathbf x^\\prime, y^\\prime)$, where $\\mathbf x^\\prime$ denotes the perturbed version of $\\mathbf x$, and $y^\\prime$ is the adversarially targeted label}. From left to right is the attribution map over DO, DS, {\\text{CDD-RED}} (our method), and the ground-truth. 
Compared with DO and DS, {\text{CDD-RED}} yields
a closer attribution alignment with the ground truth, especially when comparing $I(\mathbf x_{\mathrm{RED}}, y)$ with $I(\mathbf x, y)$.
At the dataset level,
Fig.\,\ref{fig: IoU} shows the distribution of attribution IoU scores. It is observed that the IoU distribution of {\text{CDD-RED}}, compared with DO and DS, has a denser concentration over the high-value area, corresponding to a closer alignment with the attribution map of the adversary. {This indicates an interesting application of the proposed RED approach: recovering the adversary's saliency region, in terms of the class-discriminative image regions that the adversary focused on.}

\begin{figure}[htb]
\centering
\subfigure
[Denoising Only]
{\includegraphics[width=12em]{Figs/IoU_DO.png}}
\subfigure[Denoised Smoothing]{\includegraphics[width=12em]{Figs/IoU_DS.png}}
\subfigure[{\text{CDD-RED}} (ours)]{\includegraphics[width=12em]{Figs/IoU_DeRED.png}}
\caption{\footnotesize{IoU distributions of the attribution alignment by three RED methods. Higher IoU is better. For each subfigure, the four IoU scores stand for $\mathrm{IoU}( \mathbf x_{\mathrm{RED}}, \mathbf x, y )$,
$\mathrm{IoU}( \mathbf x_{\mathrm{RED}}, \mathbf x, y^\prime )$,
$\mathrm{IoU}( \mathbf x_{\mathrm{RED}}^\prime, \mathbf x^\prime, y )$,
and
$\mathrm{IoU}( \mathbf x_{\mathrm{RED}}^\prime, \mathbf x^\prime, y^\prime )$.} }
\label{fig: IoU}
\vspace*{-3mm}
\end{figure}

\vspace{-2mm}
\paragraph{RED vs. unforeseen attack types.}

\begin{figure}[htb]
\centering
\subfigure
[Accuracy of $\mathbf{x}^{\prime p}_{\mathrm{RED}}$]
{\includegraphics[width=0.33\linewidth]{Figs/perturbation_portion/denoise_pertubation_portion_acc.pdf}}
\subfigure[Success rate of $\mathbf{x}^{\prime p}_{\mathrm{RED}}$]{\includegraphics[width=0.33\linewidth]{Figs/perturbation_portion/denoise_pertubation_portion_suc_rate.pdf}}
\subfigure[$d(f(\mathbf{x}'^{p}_{\mathrm{RED}}),f(\mathbf{x}))$]{\includegraphics[width=0.332\linewidth]{Figs/perturbation_portion/denoise_pertubation_portion_logit.pdf}}
\subfigure[$d(\mathbf{x}'^{p}_{\mathrm{RED}},\mathbf{x})$]{\includegraphics[width=0.33\linewidth]{Figs/perturbation_portion/denoise_pertubation_portion_mse.pdf}}
\vspace{-5mm}
\caption{\footnotesize{Reverse engineering of partially-perturbed data under different interpolation portions $p$.} }
\vspace{-7mm}
 \label{fig:interpolation}
\end{figure}

The experiments on the recovery of unforeseen attack types are composed of two parts: \textbf{a)} partially-perturbed data via linear interpolation, and \textbf{b)} unseen attack types, namely AutoAttack, Feature Attack, and Adaptive Attack.
{More attack results, including adversarial attacks on the CIFAR-10 dataset, Wasserstein minimized attacks, and attacks on smoothed classifiers, can be found in Appendix \ref{sec:appendix_unseen}.}

First, we construct partially-perturbed data by adding a portion $p \in \{0\%, 20\%, \cdots, 100\%\}$ of the perturbation $\mathbf{x}^{\prime}-\mathbf{x}$ to the true benign example $\mathbf{x}$, namely, $\mathbf{x}^{\prime p} = \mathbf{x} + p(\mathbf{x}^{\prime}-\mathbf{x})$. The interpolated $\mathbf{x}^{\prime p}$ is then used as the input to an RED model. We aim to investigate whether or not the proposed RED method can recover partial perturbations (even those that do not yield successful attacks).
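A minimal sketch of this probe, assuming a trained \texttt{denoiser}, is shown below.
\begin{verbatim}
# Minimal sketch: build x'^p = x + p (x' - x) and its RED adversarial estimate.
import torch

@torch.no_grad()
def partial_perturbation_red(denoiser, x_adv, x,
                             portions=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    estimates = {}
    for p in portions:
        x_p = x + p * (x_adv - x)               # partially-perturbed input x'^p
        estimates[p] = x_p - denoiser(x_p) + x  # adversarial estimate x'^p_RED
    return estimates
\end{verbatim}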
\n\nFig.\\,\\ref{fig:interpolation} (a) and (b) show the the prediction alignment with $y$ and $y^\\prime$, of the adversarial example estimate $\\mathbf{x}^{\\prime p}_{\\mathrm{RED}}=\\mathbf{x}^{\\prime p}-\\mathcal D_{\\boldsymbol{\\theta}}(\\mathbf{x}^{\\prime p})+\\mathbf{x}$ by different RED models. Fig.\\,\\ref{fig:interpolation} (c) shows the logit distance between the prediction of the partially-perturbed adversarial example estimate and the prediction of the benign example while Fig.\\,\\ref{fig:interpolation} (d) demonstrates the pixel distance between $\\mathbf{x}^{\\prime p}_{\\mathrm{RED}}$ and the benign example. \n\nFirst, a smaller gap between the ground-truth curve (in red) and the adversarial example estimate $\\mathbf{x}^{\\prime p}_{\\mathrm{red}}$ curve indicates a better performance. \nFig.\\,\\ref{fig:interpolation} (a) and (b) show that {\\text{CDD-RED}} estimates the closest adversary's performance to the ground truth in terms of the prediction accuracy and attack success rate. This is also verified by the distance of prediction logits in Fig.\\,\\ref{fig:interpolation} (c). Fig.\\,\\ref{fig:interpolation} (d) \nshows that DS largely over-estimates the additive perturbation, while {\\text{CDD-RED}} maintains the perturbation estimation performance closest to the ground truth. {Though DO is closer to the ground-truth than {\\text{CDD-RED}} at p < 40\\%, DO is not able to recover a more precise adversarial perturbation in terms of other performance metrics. For example, in Fig.\\,\\ref{fig:interpolation} (b) at p = 0.2, $\\mathbf{x}^{\\prime p}_{\\mathrm{RED}}$ by DO achieves a lower successful attack rate compared to CDD-RED and the ground-truth.}\n\n\n\\begin{wraptable}{r}{80mm}\n\\begin{center}\n\\vspace*{-8mm}\n\\caption{\\footnotesize{The $d(\\mathbf x, \\mathbf x_\\mathrm{RED})$, $\\text{\\text{PA}}_{\\mathrm{benign}}$, and $\\text{\\text{PA}}_{\\mathrm{adv}}$ performance of the denoisers on the unforeseen perturbation type AutoAttack, Feature Attack, and Adaptive Attack.\n}}\n\\label{table:unforeseen_results}\n\\begin{threeparttable}\n\\resizebox{0.57\\textwidth}{!}{\n\\begin{tabular}{c|c|c|c|c}\n\\toprule\\hline\n\\multicolumn{2}{c|}{} & DO & DS & \\text{CDD-RED} \\\\ \\hline\n\\multirow{3}{*}{$d(\\mathbf x, \\mathbf x_\\mathrm{RED})$} & AutoAttack & 6.41 & 16.64 & 8.81 \\\\ \\cline{2-5} \n & Feature Attack & 5.51 & 16.14 & 7.99 \\\\ \\cline{2-5} \n & Adaptive Attack& 9.76 & 16.21 & 12.24 \\\\ \\hline \n\\multirow{3}{*}{$\\text{\\text{PA}}_{\\mathrm{benign}}$} & AutoAttack & 84.69\\% & 92.64\\% & 94.58\\% \\\\ \\cline{2-5} \n & Feature Attack & 82.90\\% & 90.75\\% & 93.25\\% \\\\ \\cline{2-5} \n & Adaptive Attack& 33.20\\% & 27.27\\% & 36.29\\% \\\\ \\hline\n\\multirow{3}{*}{$\\text{\\text{PA}}_{\\mathrm{adv}}$} & AutoAttack & 85.53\\% & 83.30\\% & 88.39\\% \\\\ \\cline{2-5} \n & Feature Attack & 26.97\\% & 35.84\\% & 63.48\\% \\\\ \\cline{2-5} \n & Adaptive Attack& 51.21\\% & 55.41\\% & 57.11\\% \\\\ \\hline\\bottomrule\n\\end{tabular}}\n\\end{threeparttable}\n\\end{center}\n\\vspace{-5mm}\n\\end{wraptable}\nMoreover, as for benign examples with $p=0\\%$ perturbations, though the RED denoiser does not see benign example pair ($\\mathbf x$, $\\mathbf x$) during training, it keeps the performance of the benign example recovery. {\\text{CDD-RED}} can handle the case with a mixture of adversarial and benign examples. 
That is to say, even if a benign example, detected as adversarial, is wrongly fed into the RED framework, our method can still recover a perturbation estimate close to the ground truth. {See Appendix \ref{sec:mixture} for details.}

Table\,\ref{table:unforeseen_results} shows the RED performance on the unseen attack types AutoAttack, Feature Attack, and Adaptive Attack. For AutoAttack and Feature Attack, {\text{CDD-RED}} outperforms both DO and DS in terms of {\text{PA}} from both the benign and adversarial perspectives. Specifically, {\text{CDD-RED}} increases $\text{\text{PA}}_{\mathrm{adv}}$ for Feature Attack by 36.51\% and 27.64\% compared to DO and DS, respectively.

As for the adaptive attack {\citep{tramer2020adaptive}}, we assume that the attacker has access to the knowledge of the RED model, \textit{i.e.}, $\mathcal D_{\boldsymbol{\theta}}$. It can then use the PGD attack method to generate successful prediction-evasion attacks even after the RED operation.

We use PGD to generate such adaptive attacks within the $\ell_\infty$-ball of
perturbation radius
$\epsilon=20/255$. Table\,\ref{table:unforeseen_results} shows that Adaptive Attack is much stronger than Feature Attack and AutoAttack, leading to larger reconstruction errors and lower {\text{PA}}. However, {\text{CDD-RED}} still outperforms DO and DS in $\text{\text{PA}}_{\mathrm{benign}}$ and $\text{\text{PA}}_{\mathrm{adv}}$. Compared to DS, it achieves a better trade-off with the denoising error $d(\mathbf x, \mathbf x_\mathrm{RED})$.

In general, {\text{CDD-RED}} can achieve high {\text{PA}} even for unseen attacks, indicating the generalization ability of our method to estimate not only new adversarial examples (generated by the same attack methods), but also new attack types.
\vspace{-3mm}
\paragraph{RED to infer correlations between adversaries.}

In what follows, we investigate whether the RED model guided by a single classifier (VGG19) is able to identify different adversary classes, given by combinations of attack types (FGSM, PGD, CW) and victim model types (Res18, Res50, VGG16, VGG19, IncV3).

\begin{wrapfigure}{r}{100mm}
\vspace{-10mm}
\centering
\subfigure
[Groundtruth]
{\includegraphics[width=14em]{Figs/heatmap_gt.png}}
\subfigure[{\text{CDD-RED}} (ours)]{\includegraphics[width=14em]{Figs/heatmap_CDDRED.png}}
\vspace*{-5mm}
\caption{\footnotesize{Correlation matrices between different adversaries. For each correlation matrix, rows and columns represent the adversarial example estimate $\mathbf x_{\mathrm{RED}}^\prime$ and the true adversarial example $\mathbf x^\prime$ (for the ground-truth correlation matrix, $\mathbf x_{\mathrm{RED}}^\prime = \mathbf x^\prime$). Each entry represents the average Spearman rank correlation between the logits of two adversary settings $\in$ \{(victim model, attack type)\}.
}}
\vspace*{-3mm}
\label{fig:correlationmatrix}
\end{wrapfigure}
Fig.\,\ref{fig:correlationmatrix} presents
the correlation between every two adversary classes in the logit space.
Fig.\,\ref{fig:correlationmatrix} (a) shows the ground-truth correlation map.
Fig.\,\ref{fig:correlationmatrix} (b) shows the correlations between the logits of $\mathbf x_{\mathrm{RED}}^\prime$ estimated by our RED method ({\text{CDD-RED}})
and the logits of the true $\mathbf x^\prime$.
Along the diagonal of each correlation matrix, a darker entry implies a better RED estimation under the same adversary class.
By peering into the off-diagonal entries,
we find that FGSM attacks
are more resilient to the choice of a victim model (see the cluster of high correlation values at the top left corner of Fig.\,\ref{fig:correlationmatrix}).
Meanwhile, the proposed {\text{CDD-RED}} precisely recovers the correlation behavior of the true adversaries. Such a correlation matrix can help explain the similarities between the properties of different attacks. Given an inventory of existing attack types, if a new attack appears, one can resort to RED to estimate the correlations between the new attack type and the existing ones. {Based on this correlation screening, one can infer the properties of the new attack type from its most similar counterpart in the existing attack library; see Appendix \ref{sec:identity}. Inspired by the correlation, RED-synthesized perturbations can also be used as transfer attacks; see Appendix \ref{sec:tranferability}.}
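For reference, a minimal sketch of how such a correlation matrix can be assembled is given below; \texttt{logits\_by\_setting} is a hypothetical mapping from an adversary setting to the victim-model logits of the same $N$ source images (using $\mathbf x^\prime$ for the ground truth and $\mathbf x_{\mathrm{RED}}^\prime$ for the RED version).
\begin{verbatim}
# Minimal sketch of the adversary correlation matrix (Fig. 6).
import numpy as np
from scipy.stats import spearmanr

def adversary_correlation(logits_by_setting):
    # logits_by_setting: {(victim, attack): (N, C) logit array over N images}
    names = list(logits_by_setting)
    corr = np.zeros((len(names), len(names)))
    for i, a in enumerate(names):
        for j, b in enumerate(names):
            # average per-image Spearman rank correlation of logit vectors
            corr[i, j] = np.mean([spearmanr(u, v).correlation
                                  for u, v in zip(logits_by_setting[a],
                                                  logits_by_setting[b])])
    return names, corr
\end{verbatim}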
\vspace{-3mm}
\paragraph{Other applications of RED.}
{In Appendix \ref{section:potential_app}, we also empirically show that the proposed RED approach can be applied to adversarial detection, attack identity inference, and adversarial defense.}

\vspace*{-4mm}
\section{Conclusion}\label{ref:conclusion}
\vspace*{-3mm}
In this paper, we study the problem of Reverse Engineering of Deceptions (RED), which recovers attack signatures (\textit{e.g.}, adversarial perturbations and adversary saliency regions) from an adversarial instance.
To the best of our knowledge, RED has not been well studied before.
Our work makes a solid step towards formalizing the RED problem and developing a systematic pipeline, covering not only a solution but also a complete set of evaluation metrics.
We have identified a series of RED principles, ranging from the pixel level to the attribution level, that are needed to reverse-engineer adversarial attacks.
We have developed an effective denoiser-assisted RED approach by integrating class-discrimination and data augmentation into an image denoising network.
With extensive experiments, our approach outperforms the existing baseline methods and generalizes well to unseen attack types.

\section*{Acknowledgment}

The work is supported by the DARPA RED program.
We also thank the MIT-IBM Watson AI Lab, IBM Research for supporting us with GPU computing resources.

\bibliographystyle{iclr2022_conference}