diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbsxo" "b/data_all_eng_slimpj/shuffled/split2/finalzzbsxo" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbsxo" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nBranching programs are one of the well-known models of computation. These models have proven useful in a variety of domains, such as hardware verification, model checking, and other CAD applications (see, for example, the book by Wegener \\cite{Weg00}). It is known that the class of Boolean functions computed by polynomial-size branching programs coincides with the class of functions computed by non-uniform log-space machines. Moreover, the branching program is a convenient model for considering different (natural) restricted variants and different complexity measures such as size (number of inner nodes), length, and width.\n\nOne important restricted variant of branching programs is the oblivious read-once branching program, also known in applied computer science as the Ordered Binary Decision Diagram (OBDD) \\cite{Weg00}. Since the length of an OBDD is at most linear (in the length of the input), the main complexity measure is ``width''.\n\nOBDDs can also be seen as nonuniform automata (see, for example, \\cite{ag05}). During the last decades different variants of OBDDs were considered, i.e., deterministic, nondeterministic, probabilistic, and quantum, and\nmany results have been proved on the comparative power of deterministic, nondeterministic, and randomized OBDDs \\cite{Weg00}. For example, Ablayev and Karpinski \\cite{ak96} presented the first function that is polynomially easy for randomized and exponentially hard for deterministic and even nondeterministic OBDDs. More specifically, it was proven that the OBDD variants of $ \\mathsf{coRP} $ and $\\mathsf{NP}$ are different.\n\nIn the last decade the quantum model of OBDD came into play \\cite{agk01},\\cite{nhk00},\\cite{SS05}. 
It was proven that quantum OBDDs can be exponentially cheaper than classical ones, and it was shown that this bound is tight \\cite{agkmp05}.\n\n\nIn this paper we present the first results on the comparative complexity of classical and quantum OBDDs computing partial functions. Then, we focus on the width complexity of deterministic and nondeterministic OBDDs, which has been investigated in different papers (see \\cite{hs00}, \\cite{hs03} for more information and citations). Here we present very strict hierarchies for the classes of Boolean functions computed by deterministic and nondeterministic OBDDs.\n\nThe paper is organized as follows. Section 2 contains the definitions and notation used in the paper. In Section 3, we compare classical and exact quantum OBDDs. We consider a partial function depending on a parameter $k$ such that, for any $k>0$, this function is computed by an exact quantum OBDD of width $2$, but deterministic or bounded-error probabilistic OBDDs need width $2^{k+1}$. It is also easy to show that a nondeterministic OBDD needs width $k+1$. In Section 4, we consider quantum and classical nondeterminism. We show that quantum nondeterministic OBDDs can be more efficient than their classical counterparts. We present an explicit function which is computed by a quantum nondeterministic OBDD with constant width, but any classical nondeterministic OBDD needs non-constant width. Section 5 contains our results on hierarchies for the sublinear (5.1) and larger (5.2) widths of deterministic and nondeterministic OBDDs.\n\nThe proofs of the lower bound results (Theorem \\ref{th-DOBDD1} and Lemma \\ref{peq1s}) are based on the pigeonhole principle. The lower bound in Theorem \\ref{th-POBDD1} uses the technique of Markov chains. \n\n\\section{Preliminaries}\n\nWe refer to \\cite{Weg00} for more information on branching programs. 
The main model investigated throughout the paper is the OBDD (Ordered Binary Decision Diagram), a restricted version of the branching program.\n\nIn this paper we use the following notation for vectors and strings. We use subscripts for enumerating the elements of vectors and strings, and superscripts for enumerating the vectors and strings themselves. For a binary string $\\nu$, $\\#_1(\\nu)$ and $\\#_0(\\nu)$ are the numbers of $1$'s and $0$'s in $\\nu$, respectively. We denote by $\\#^k_{0}(\\nu)$ and $\\#^k_{1}(\\nu)$ the numbers of $0$'s and $1$'s in the first $k$ elements of the string $\\nu$, respectively.\n \n\nFor a given $ n>0 $, a probabilistic OBDD $ P_n $ with width $d$, defined on $ \\{0,1\\}^n $, is a 4-tuple\n$\n\tP_n=(T,v^0,Accept,\\pi),\n$\nwhere\n\\begin{itemize}\n\t\\item $ T = \\{ T_j : 1 \\leq j \\leq n \\mbox{ and } T_j = (A_j(0),A_j(1)) \\} $ is a sequence of ordered pairs of (left) stochastic matrices representing the transitions, where, at the $j$-th step, $ A_j(0) $ or $ A_j(1) $, determined by the corresponding input bit, is applied.\n\t\\item $ v^0 $ is a zero-one initial stochastic vector (the initial state of $P_n$).\n\t\\item $ Accept \\subseteq \\{1,\\ldots,d\\} $ is the set of accepting nodes.\n\\item $ \\pi $ is a permutation of $ \\{1,\\ldots,n\\} $ defining the order of testing the input bits.\n\\end{itemize}\n\n For any given input $ \\nu \\in \\{0,1\\}^n $, the computation of $P_n$ on $\\nu$ can be traced by a stochastic vector which is initially $ v^0 $. 
In each step $j$, $1 \\leq j \\leq n$, the input bit $ x_{\\pi(j)} $ is tested and then the corresponding stochastic operator is applied:\n\\[\n\tv^j = A_{j}(x_{\\pi(j)}) v^{j-1},\n\\]\nwhere $ v^j $ represents the probability distribution over the nodes after the $ j$-th step, $ 1 \\leq j \\leq n $.\nThe accepting probability of $ P_n $ on $ \\nu $ is \n\\[\n\t\\sum_{i \\in Accept} v^n_i.\n\\]\n\nWe say that a function $f$ is computed by $ P_n $ with bounded error if there exists an $ \\varepsilon \\in (0,\\frac{1}{2}] $ such that $ P_n $ accepts every input from $ f^{-1}(1) $ with probability at least $ \\frac{1}{2}+\\varepsilon $ and accepts every input from $ f^{-1}(0) $ with probability at most $ \\frac{1}{2}-\\varepsilon $. We say that $P_n$ computes $f$ \\textit{exactly} if $ \\varepsilon =1\/2 $. \n\nA deterministic OBDD is a probabilistic OBDD restricted to use only 0-1 transition matrices. In other words, the system is always in a single node and, from each node, there is exactly one outgoing transition for each tested input bit.\n\nA nondeterministic OBDD (NOBDD) can make more than one outgoing transition for each tested input bit from each node, and so the program can follow more than one computational path; the input is accepted if at least one of the paths ends with an accepting node (and rejected otherwise). \n\n\n\\begin{itemize}\n\\item \n\nAn OBDD is called {\\em stable} if the transition set $T_j$ is identical for every level.\n\\item\n\nAn OBDD is called ID (ID-OBDD) if the input bits are tested in the order $\\pi=(1,2,\\dots, n)$.\nIf a {\\em stable} ID-OBDD has a fixed width and fixed transition rules for each $ n $, then it can be considered as a realtime finite automaton.\n\n\\end{itemize}\n\nQuantum computation is a generalization of classical computation \\cite{Wat09A}. Therefore, each quantum model can simulate its probabilistic counterparts. 
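The probabilistic step rule and acceptance probability above can be traced in a few lines; the following is a minimal sketch in plain Python (the function names and the width-2 parity example are ours for illustration, not from the paper):

```python
# Sketch of tracing a probabilistic OBDD: v^j = A_j(x_pi(j)) v^(j-1),
# with left stochastic matrices stored as lists of rows.

def apply_stochastic(A, v):
    """Multiply a left stochastic matrix A by the column vector v."""
    return [sum(A[i][l] * v[l] for l in range(len(v))) for i in range(len(A))]

def run_obdd(transitions, pi, v0, accept, x):
    """transitions[j] = (A_j(0), A_j(1)); pi is the variable order (0-based).
    Returns the accepting probability: the total mass on accepting nodes."""
    v = v0
    for j, (A0, A1) in enumerate(transitions):
        v = apply_stochastic(A1 if x[pi[j]] == 1 else A0, v)
    return sum(v[i] for i in accept)

# Width-2 example with 0-1 matrices (a deterministic OBDD) computing
# "the number of ones is even": reading a 1 swaps the two nodes.
I = [[1, 0], [0, 1]]
S = [[0, 1], [1, 0]]
p = run_obdd([(I, S)] * 3, [0, 1, 2], [1, 0], {0}, [1, 0, 1])
```

Since the transition matrices here are 0-1, the distribution vector stays concentrated on a single node, matching the deterministic special case described above.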
In some cases, on the other hand, the quantum models are defined in a restricted way, e.g., using only unitary operators during the computation followed by a single measurement at the end, and so they may not be able to simulate their probabilistic counterparts. The quantum automata literature contains many such results, e.g., \\cite{KW97,AF98,BC01B}. A similar result was also given for OBDDs in \\cite{SS05}, where a function with a small deterministic OBDD size was presented, but any quantum OBDD defined in this restricted way needs exponential size to compute it.\n\nQuantum OBDDs defined with general quantum operators, i.e., superoperators \\cite{Wat03,Wat09A,YS11A}, followed by a measurement in the computational basis at the end, can simulate their classical counterparts with the same size and width. Thus, any such quantum class contains its classical counterpart. \n\nIn this paper, we base our quantum results on stable ID-OBDDs, which are realtime quantum finite automata. Still, we give the definition of quantum OBDDs for the completeness of the paper.\n\nA quantum OBDD is the same as a probabilistic OBDD with the following modifications:\n\\begin{itemize}\n\t\\item The state set is represented by a $ d $-dimensional Hilbert space over the field of complex numbers. The initial state is $ \\ket{\\psi}_0=\\ket{q_0}$, where $ q_0 $ corresponds to the initial node.\n\t \n\t \\item Instead of a stochastic matrix, we apply a unitary matrix in each step. 
That is, $ T = \\{ T_j : 1 \\leq j \\leq n \\mbox{ and } T_j = (U_j^0,U_j^1) \\} $, where, at the $j$-th step, $ U_j^0 $ or $ U_j^1 $, determined by the corresponding input bit, is applied.\n\t \n\t \\item At the end, we make a measurement in the computational basis.\n\\end{itemize}\n\n\n\nThe state of the system is updated as follows after the $ j$-th step:\n\\[\n\t\\ket{\\psi}_j = U_j^{x_{\\pi(j)}} (\\ket{\\psi}_{j-1}),\n\\]\nwhere $ \\ket{\\psi}_{j-1} $ and $ \\ket{\\psi}_j $ represent the states of the system after the $ (j-1)$-th and $ j$-th steps, respectively, where $ 1 \\leq j \\leq n $.\n\n\n\nThe accepting probability of the quantum program on $ \\nu $ is calculated from $\\ket{\\psi}_n=(z_1, \\dots, z_d)$ as\n\\[\n\t\\sum_{i \\in Accept} |z_i|^2.\n\\]\n\n\n\\section{Exact Quantum OBDDs.}\n\nIn \\cite{AY12}, Ambainis and Yakary{\\i}lmaz defined a new family of unary promise problems: For any $ k>0 $, $ A^k = (A^k_{yes},A^k_{no}) $ such that $ A^k_{yes} = \\{ a^{(2i)2^k} : i \\geq 0 \\} $ and $ A^k_{no} = \\{ a^{(2i+1)2^k} : i \\geq 0 \\} $. They showed that each member of this family ($A^k$) can be solved exactly by a 2-state realtime quantum finite automaton (QFA), but any exact probabilistic finite automaton (PFA) needs at least $ 2^{k+1} $ states. Recently, Rashid and Yakary{\\i}lmaz \\cite{RY14A} showed that bounded-error realtime PFAs also need at least $ 2^{k+1} $ states for solving $A^k$.\\footnote{The same result also follows for two-way nondeterministic finite automata by Geffert and Yakary{\\i}lmaz \\cite{GY14A}.} Based on this promise problem, we define a partial function:\n\\[\n\t\\mathtt{PartialMOD^k_n} (\\nu) = \\left\\lbrace \\begin{array}{lcll}\n\t\t1 & , & \\mbox{if } \\#_1(\\nu) \\equiv 0 & \\pmod{2^{k+1}} \\\\\n\t\t0 & , & \\mbox{if } \\#_1(\\nu) \\equiv 2^{k} & \\pmod{2^{k+1}} \\\\\n\t\t* & , & \\multicolumn{2}{l}{\\mbox{otherwise}} \\\\\t\n\\end{array}\t \\right. 
,\n\\]\nwhere the function is not defined on the inputs mapping to ``*''. We call the inputs where the function takes the value 1 (0) yes-instances (no-instances).\n\n\\begin{theorem}\n\tFor any $k \\geq 0$, $ \\mathtt{PartialMOD^k_n} $ can be solved exactly by a stable quantum ID-OBDD of width 2.\n\\end{theorem}\n\n\nThe OBDD can be constructed in the same way as the QFA solving the promise problem $ A^k $ \\cite{AY12}.\n\n\n\nWe show that the width of deterministic or bounded-error stable probabilistic OBDDs that solve $ \\mathtt{PartialMOD^k_n} $ cannot be less than $ 2^{k+1} $.\n\n\n\\begin{remark} \n Note that a proof for deterministic OBDDs is not similar to the proof for automata because, potentially, nonstability can give a profit. This proof is also different from the proofs for total functions (for example, $ MOD_p$) due to the existence of incomparable inputs.\nNote also that classical one-way communication complexity techniques fail for partial functions (for example, it can be shown that the communication complexity of $ \\mathtt{PartialMOD^k_n} $ is $1$), and we need a more careful analysis in the proof. \n\\end{remark}\n \n \n A deterministic stable ID-OBDD of width $2^{k+1}$ for $ \\mathtt{PartialMOD^k_n} $ can be easily constructed. We leave open the case of bounded-error non-stable probabilistic OBDDs.\n\n\\begin{theorem}\n\\label{th-DOBDD1}\n\tFor any $k\\geq 0$, there are infinitely many $n$ such that any deterministic OBDD computing the partial function $ \\mathtt{PartialMOD^k_n} $ has width at least $2^{k+1}$.\n\\end{theorem}\n\n\\begin{proof}\nLet $\\nu\\in\\{0,1\\}^n, \\nu=\\sigma\\gamma$. 
We call $\\gamma$ valid for $\\sigma$ if $\\nu \\in (\\mathtt{PartialMOD^k_n})^{-1}(0) \\cup (\\mathtt{PartialMOD^k_n})^{-1}(1).$ We call two substrings $\\sigma'$ and $\\sigma''$ comparable if, for all $\\gamma$, it holds that $\\gamma$ is valid for $\\sigma'$ iff $\\gamma$ is valid for $\\sigma''$.\nWe call two substrings $\\sigma'$ and $\\sigma''$ nonequivalent if they are comparable and there exists a valid substring $\\gamma$ such that $\\mathtt{PartialMOD^k_n}(\\sigma'\\gamma)\\neq\\mathtt{PartialMOD^k_n}(\\sigma''\\gamma)$. \n\nLet $P$ be a deterministic OBDD computing the partial function $ \\mathtt{PartialMOD^k_n} $. Note that paths associated with nonequivalent strings must lead to different nodes. Otherwise, if $\\sigma$ and $\\sigma'$ are nonequivalent, there exists a valid string $\\gamma$ such that $\\mathtt{PartialMOD^k_n}(\\sigma\\gamma)\\neq\\mathtt{PartialMOD^k_n}(\\sigma'\\gamma)$ while the computations on these inputs lead to the same final node.\n\nLet $N=2^k$ and $\\Gamma=\\{\\gamma: \\gamma\\in \\{0,1\\}^{2N-1}, \\gamma=0\\dots 0 1\\dots 1\\}$.\nWe naturally identify any string $\\nu$ with the element $a=\\#_1(\\nu) \\pmod{2N}$ of the additive group $\\mathbb{Z}_{2N}$. We call two strings of the same length different if their numbers of ones modulo $2N$ are different. We denote by $\\rho(\\gamma^1,\\gamma^2)=\\gamma^1-\\gamma^2$ the distance between the numbers $\\gamma^1, \\gamma^2.$ Note that $\\rho(\\gamma^1,\\gamma^2)\\neq \\rho(\\gamma^2,\\gamma^1) $.\n\nLet the width of $P$ be $t<2N.$ On each step $i$ ($i=1,2,\\dots$) of the proof we count the number of different strings which lead to the same node \n(denote this node by $v_i$). On the $i$-th step we consider the $(2N-1)i$-th level of $P$.\n\nLet $i=1$.\nBy the pigeonhole principle there exist two different strings $\\sigma^1$ and $ \\sigma^2$ from the set $\\Gamma$ such that the corresponding paths lead to the same node $v_1$ of the $(2N-1)$-th level of $P$. 
Note that $\\rho(\\sigma^1, \\sigma^2)\\neq N,$ because in this case $\\sigma^1$ and $\\sigma^2$ would be nonequivalent and could not lead to the same node.\n\nWe will show by induction that on each step of the proof the number of different strings which lead to the same node increases.\n\nStep 2. By the pigeonhole principle there exist two different strings $\\gamma^1$ and $ \\gamma^2$ from the set $\\Gamma$ such that the corresponding paths from the node $v_1$ lead to the same node $v_2$ of the $(2N-1)2$-th level of $P$. In this case, the strings $\\sigma^1 \\gamma^1,\\sigma^2 \\gamma^1,\\sigma^1 \\gamma^2 $, and $\\sigma^2 \\gamma^2$ lead to the node $v_2$. Note that $\\rho(\\gamma^1, \\gamma^2)\\neq N,$ because in this case $\\sigma^1\\gamma^1$ and $\\sigma^1\\gamma^2$ would be nonequivalent and could not lead to the same node. \n\nAdding the same number does not change the distance between numbers, so we have\n \\[\\rho(\\sigma^1+\\gamma^1, \\sigma^2+\\gamma^1)=\\rho(\\sigma^1,\\sigma^2)\\]\n and\n\\[\\rho(\\sigma^1+\\gamma^2, \\sigma^2+\\gamma^2)=\\rho(\\sigma^1,\\sigma^2).\\]\nLet $\\gamma^2>\\gamma^1$ and denote $\\Delta=\\gamma^2-\\gamma^1 $. Let us count the number of different numbers among $\\sigma^1+\\gamma^1,$ $\\sigma^2+\\gamma^1 $, $\\sigma^1+\\gamma^1+\\Delta $, and $\\sigma^2+\\gamma^1+\\Delta $.\nBecause $\\sigma^1$ and $\\sigma^2$ are different and $\\rho(\\sigma^1, \\sigma^2)\\neq N$, the numbers from the pair $\\sigma^1+\\gamma^1 $ and $\\sigma^2+\\gamma^1 $ coincide with the corresponding numbers from the pair $\\sigma^1+\\gamma^1+\\Delta $ and $\\sigma^2+\\gamma^1+\\Delta $ iff $\\Delta=0 \\pmod {2N}$. 
But $\\Delta \\neq 0 \\pmod {2N}$ since the numbers $\\gamma^1 $ and $ \\gamma^2$ are different and $\\gamma^1, \\gamma^2<2N.$ The numbers $\\sigma^1+\\gamma^1+\\Delta $ and $\\sigma^2+\\gamma^1+\\Delta $ cannot be a permutation of the numbers $\\sigma^1+\\gamma^1 $ and $\\sigma^2+\\gamma^1 $ since $\\rho(\\gamma^1, \\gamma^2)\\neq N$ and $\\rho(\\sigma^1, \\sigma^2)\\neq N$.\nIn this case, at least 3 of the numbers $\\sigma^1+\\gamma^1 $, $\\sigma^2+\\gamma^1 ,$ $\\sigma^1+\\gamma^2 $, and $\\sigma^2+\\gamma^2 $ are different.\n\nStep of induction. Let the numbers $\\sigma^1, \\dots, \\sigma^i$ be different after step $i-1$ and let the corresponding paths lead to the same node $v_{i-1}$ of the $(2N-1)(i-1)$-th level of $P$. \n\nBy the pigeonhole principle there exist two different strings $\\gamma^1 $ and $ \\gamma^2$ from the set $\\Gamma$ such that the corresponding paths from the node $v_{i-1}$ lead to the same node $v_i$ of the $(2N-1)i$-th level of $P$. So the paths $\\sigma^1\\gamma^1,\\dots, \\sigma^i\\gamma^1,\\sigma^1\\gamma^2,\\dots, \\sigma^i\\gamma^2 $ lead to the same node $v_i$. Let us estimate the number of different strings among them. Note that $\\rho(\\gamma^1, \\gamma^2)\\neq N$ since, in this case, the strings $\\sigma^1\\gamma^1$ and $ \\sigma^1\\gamma^2$ would be nonequivalent but lead to the same node. \n\nThe numbers $\\sigma^1, \\dots,\\sigma^{i}$ are different and $\\rho(\\sigma^l, \\sigma^j)\\neq N$ for each pair $(l,j) $ such that $ l\\neq j$.\nLet $\\sigma^1 < \\dots <\\sigma^i$. We will show that among $\\sigma^1+\\gamma^1 ,\\dots, \\sigma^i+\\gamma^1 $ and $\\sigma^1+\\gamma^1+\\Delta , \\dots, \\sigma^i+\\gamma^1+\\Delta $ at least $i+1$ numbers are different.\n\n\nThe sequence of numbers $\\sigma^1+\\gamma^1 ,\\dots, \\sigma^i+\\gamma^1 $ coincides with the sequence $\\sigma^1+\\gamma^1+\\Delta , \\dots, \\sigma^i+\\gamma^1+\\Delta $ iff $\\Delta=0 \\pmod {2N}$. 
But $\\Delta\\neq 0 \\pmod {2N}$ since $\\gamma^1$ and $ \\gamma^2$ are different and $\\gamma^1, \\gamma^2<2N.$\n \nSuppose that the sequence $\\sigma^1+\\gamma^1+\\Delta , \\dots, \\sigma^i+\\gamma^1+\\Delta $ is a permutation of the sequence $\\sigma^1+\\gamma^1 $,$\\dots,$ $\\sigma^i+\\gamma^1 $. In this case, \nthere are numbers $a_0, \\dots, a_r$ from $\\mathbb{Z}_{2N}$ such that all $a_j$ are from the sequence $\\sigma^1+\\gamma^1 $,$\\dots,$ $\\sigma^i+\\gamma^1 $, $a_0=a_r=\\sigma^1+\\gamma^1 $, and $a_j=a_{j-1}+\\Delta $, where $j=1, \\dots, r$. \nIn this case, $r\\Delta=2Nm.$\nBecause $N=2^k$, $\\Delta<2N$, and $\\Delta\\neq N$, we have that $r$ is even. For $z=r\/2$ we have $z\\Delta=Nm$.\nSince all numbers from $\\sigma^1+\\gamma^1,\\dots, \\sigma^i+\\gamma^1$ are different, we have that $\\rho(a_0,a_z)=N$.\nSo $a_0 $ and $ a_z$ are nonequivalent, but the corresponding strings lead to the same node $v_i$. \nThus, after the $i$-th step at least $i+1$ different strings lead to the same node $v_i.$\n\nOn the $N$-th step, we have that $N+1$ different strings lead to the same node $v_N$. Among these strings, there must be at least two nonequivalent strings. It follows that $P$ cannot compute the function $\\mathtt{PartialMOD^k_n}$ correctly. \n\\qed\\end{proof}\n\n\\begin{theorem}\n\\label{th-NOBDD1}\n\tFor any $k\\geq 0$, there are infinitely many $n$ such that any nondeterministic OBDD computing the partial function $ \\mathtt{PartialMOD^k_n} $ has width at least $2^{k+1}$.\n\\end{theorem}\n\nThe proof is similar to the deterministic case with the following modifications. \nLet $P$ be an NOBDD that computes $ \\mathtt{PartialMOD^k_n} $. \nWe consider only accepting paths in $ P $. Note that if $P$ computes $ \\mathtt{PartialMOD^k_n} $ correctly, then accepting paths associated with nonequivalent strings cannot pass through the same nodes. Denote $N=2^k$. 
Let $\\Gamma=\\{\\gamma: \\gamma\\in \\{0,1\\}^{2N-1}, \\gamma=\\underbrace{0\\dots 0}_{2N-1-j} \\underbrace{1\\dots 1}_{j}, j=0, \\dots,2N-1\\}$.\n\n\n Let $V_{l}$ denote the set of nodes on the $l$-th level of $ P $. Assume that the width of $P$ is less than $2N$, that is, $|V_l|<2N$ for each $ l=0, \\dots,n $.\n\nLet $V_l(\\gamma, V)$ denote the set of nodes of the $l$-th level through which pass the accepting paths that lead from the nodes of the set $ V $ and correspond to the string $\\gamma$.\n\nOn step $i$ ($ i=1,2,\\dots $) of the proof we consider the $(2N-1)i$-th level of $P$.\nBecause $|V_{2N-1}|<2N$, on the first step of the proof there exist two different strings $\\sigma^1,\\sigma^2$ from the set $\\Gamma$ such that $ V_{2N-1}(\\sigma^1, V_0) \\cap V_{2N-1}(\\sigma^2, V_0) \\neq \\emptyset$. Denote this nonempty intersection by $G_1$. We continue the proof considering only accepting paths which pass through the nodes of $G_1.$\n\nUsing the same ideas as in the deterministic case, we can show that the number of strings with different numbers of ones modulo $2N$, such that the corresponding accepting paths lead to the same set of nodes, increases with each step of the proof. \n\nOn the $N$-th step of the proof there must be two different nonequivalent strings such that the corresponding accepting paths lead to the same set $G_N$ of nodes. This implies that $P$ does not compute the function $\\mathtt{PartialMOD^k_n}$ correctly. \n\n\n\n\n\\begin{theorem}\n\\label{th-POBDD1}\nFor any $k\\geq 0$, there are infinitely many $n$ such that any stable probabilistic OBDD computing $ \\mathtt{PartialMOD^k_n} $ with bounded error has width at least $2^{k+1}$. 
\n\\end{theorem}\n\nThe proof of the theorem is based on the technique of Markov chains; the details are given in Appendix \\ref{app-POBDD1}.\n\n\\section{Nondeterministic Quantum and Classical OBDDs.}\n\nIn \\cite{YS10A}, Yakary{\\i}lmaz and Say showed that nondeterministic QFAs can define a superset of the regular languages, called exclusive stochastic languages \\cite{Paz71}. This class contains the \\textit{complements} of some interesting languages:\n$ PAL = \\{ w \\in \\{0,1\\}^* : w = w^r \\} $, where $w^r$ is the reverse of $w$, $ O = \\{ w \\in \\{0,1\\}^* : \\#_1(w)=\\#_0(w)\\} $, $ SQUARE = \\{ w \\in \\{0,1\\}^* : \\#_1(w)=(\\#_0(w))^2\\} $, and $ POWER = \\{ w \\in \\{0,1\\}^* : \\#_1(w)=2^{\\#_0(w)}\\} $.\n\nBased on these languages, we define three symmetric functions for any input $ \\nu \\in \\{0,1\\}^n $:\n\t \\[ \\mathtt{NotO_n}(\\nu) = \\left\\lbrace \\begin{array}{lll}\n\t\t0 & , & \\mbox{if } \\#_0(\\nu) = \\#_1(\\nu) \\\\\n\t\t1 & , & \\mbox{otherwise }\n\\end{array}\t \\right., \\]\n\t\\[ \\mathtt{NotSQUARE_n}(\\nu) = ~~~~ \\left\\lbrace \\begin{array}{lll}\n\t\t0 & , & \\mbox{if } (\\#_0(\\nu))^2 = \\#_1(\\nu) \\\\\n\t\t1 & , & \\mbox{otherwise }\n\\end{array}\t \\right., \\]\n\t\\[ \\mathtt{NotPOWER_n}(\\nu) = \\left\\lbrace \\begin{array}{lll}\n\t\t0 & , & \\mbox{if } 2^{\\#_0(\\nu)} = \\#_1(\\nu) \\\\\n\t\t1 & , & \\mbox{otherwise }\n\\end{array}\t \\right.. \\]\n\n\n\\begin{theorem}\n\tThe Boolean functions $ \\mathtt{NotO_n} $, $ \\mathtt{NotSQUARE_n} $, and $ \\mathtt{NotPOWER_n} $ can be solved by nondeterministic quantum OBDDs with constant width.\n\\end{theorem}\n\nFor all three functions, we can define nondeterministic quantum (stable ID-) OBDDs with constant width based on the nondeterministic QFAs for the languages $ O $, $SQUARE$, and $POWER$, respectively \\cite{YS10A}. \n\n\nThe complements of $ PAL, O, SQUARE $, and $POWER$ cannot be recognized by classical nondeterministic finite automata. 
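For reference, the three symmetric functions can be sketched directly from their definitions; the Python names below are ours for illustration, not from the paper:

```python
# Reference implementations of the three symmetric functions;
# nu is a binary string such as "0011".

def not_o(nu):
    """1 unless the numbers of 0's and 1's in nu are equal."""
    return 0 if nu.count("0") == nu.count("1") else 1

def not_square(nu):
    """1 unless #1(nu) = (#0(nu))^2."""
    return 0 if nu.count("1") == nu.count("0") ** 2 else 1

def not_power(nu):
    """1 unless #1(nu) = 2^(#0(nu))."""
    return 0 if nu.count("1") == 2 ** nu.count("0") else 1
```

All three depend only on the counts of zeros and ones, which is exactly what makes them symmetric functions.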
But, for example, the function version of the complement of $PAL$, $ \\mathtt{NotPAL_n} $, which returns 1 only for the non-palindrome inputs, is quite easy, since it can be solved by a deterministic OBDD with width $3$. Note that the order of such an OBDD is not the natural one $(1,\\dots,n)$. However, as will be shown soon, this is not the case for the function versions of the complements of the other three languages.\n\n\\begin{theorem}\\label{nondetermenisticBoundTheorem}\nThere are infinitely many $n$ such that any NOBDD $P_n$ computing $ \\mathtt{NotO_n} $ has width at least $\\lfloor\\log n\\rfloor -1 $.\n\\end{theorem}\n\nThe proof of the theorem is based on complexity properties of the Boolean function $\\mathtt{NotO_n}$. First, we discuss the complexity properties of this function in Lemma \\ref{leqN}. After that, we prove the claim of the theorem.\n\t\n\\begin{lemma}\\label{leqN}\nThere are infinitely many $n$ such that any OBDD computing $ \\mathtt{NotO_n} $ has width at least $n\/2 + 1$. 
(For the proof see \\ref{app-leqN}).\n\\end{lemma}\n{\\em Proof of Theorem \\ref{nondetermenisticBoundTheorem}}\\quad\\quad\n Suppose the function $\\mathtt{NotO_n}$ is computed by an NOBDD $P_n$ of width $d$. Then, in the same way as in the proof of Theorem \\ref{th-NOBDD1}, we have\n$d\\geq\\log (n\/2 + 1)>\\log n -1.$\n\\hfill$\\Box$\\\\\n\nIn the same way we can show that there are infinitely many $n$ such that any NOBDD $P_n$ computing the function $\\mathtt{NotSQUARE_n}$ has width at least $ \\Omega(\\log (n)) $ and any NOBDD $P'_n$ computing the function $\\mathtt{NotPOWER_n}$ has width at least $ \\Omega(\\log\\log (n)) $.\n\n\n\\section{Hierarchies for Deterministic and Nondeterministic OBDDs}\n\nWe denote by $\\mathsf{OBDD^d}$ and $\\mathsf{NOBDD^d}$ the sets of Boolean functions that can be computed by OBDDs and NOBDDs of width $d=d(n)$, respectively, where $n$ is the number of variables.\nIn this section, we present some width hierarchies for $\\mathsf{OBDD^d}$ and $\\mathsf{NOBDD^d}$. Moreover, we discuss relations between these classes.\nWe consider $\\mathsf{OBDD^d}$ and $\\mathsf{NOBDD^d}$ with small (sublinear) widths and large widths.\n\n\\subsection{Hierarchies and relations for small width OBDDs}\n\n \nWe have the following width hierarchy for deterministic and nondeterministic models.\n\n\\begin{theorem}\\label{smallwh}\nFor any integer $n$, $d=d(n)$, and $1{\\mbox{\\bf{generateCookie}}(Miner $M$)\\{}\\\\\n3.\\>\\>{$R_M$ := getRandom(256);}\\\\\n4.\\>\\>{$C_M$ := $H^2(R_M, M.uname)$;}\\\\\n5.\\>\\>{$K_M$ := $M.key$;}\\\\\n6.\\>\\>{store($M.uname$, $K_M$, $R_M$, $target$);}\\\\\n7.\\>\\>{sendEncrypted($M$, $E_{K_M}(R_M)$);}\\\\\n\n8.\\>{\\mbox{\\bf{verifyJob}}(Miner $M$,\\ $nonce$,\\ $extranonce2$)\\{}\\\\\n9.\\>\\>{($K_M$,\\ $R_M$,\\ $target$) := getMParams($M.uname$);}\\\\\n10.\\>\\>{$C_M$ := $H^2(R_M, M.uname)$;}\\\\\n11.\\>\\>{$F$ := computeF($C_M$,\\ $extranonce2$);}\\\\\n12.\\>\\>{\\mbox{\\bf{if}}\\ ($H^2(nonce || F) < 
target$)}\\\\\n13.\\>\\>\\>{sendToMiner($M$, result, ``accept'');}\\\\\n14.\\>\\>{\\mbox{\\bf{else}}\\ sendToMiner($M$, result, ``reject'');}\\\\\\\\\n\n15.{Implementation\\ \\mbox{\\bf{Miner}}}\\\\\n16.\\>{$K_M$ : int[256] \\% key\\ shared\\ with\\ pool}\\\\\n17.\\>{$C_M$ : int[256] \\% mining\\ cookie;}\\\\\n18.\\>{\\mbox{\\bf{solvePuzzle}}($target$: int)\\{}\\\\\n19.\\>\\>{\\mbox{\\bf{do}}}\\\\\n20.\\>\\>\\>{$randPerm$ := newPseudoRandPerm(32);}\\\\\n21.\\>\\>\\>{$extranonce2$ := getRandom(32);}\\\\\n22.\\>\\>\\>{$F$ := computeF($C_M$,\\ $extranonce2$);}\\\\\n23.\\>\\>\\>{\\mbox{\\bf{while}}\\ ($randPerm$.isNext())\\{}\\\\\n24.\\>\\>\\>\\>{$nonce$ := $randPerm$.next();}\\\\\n25.\\>\\>\\>\\>{\\mbox{\\bf{if}}\\ ($H^2(nonce || F) < target$)}\\\\\n26.\\>\\>\\>\\>\\>{sendToPool($uname$, $nonce$, $extranonce2$);}\\\\\n27.\\>\\>{\\mbox{\\bf{while}} ($clean\\_jobs$ != 1)}\n\\vspace{-25pt}\n\\end{tabbing}\n\\caption{Bedrock pseudo-code for cookie generation and job verification (pool\nside), and job solving (miner side).}\n\\label{alg:bedrock}\n\\end{algorithm}\n\\end{minipage}\n\\normalsize\n\\vspace{-15pt}\n\\end{figure}\n\n\nBedrock has 3 components, each addressing different Stratum vulnerabilities.\nThe first component authenticates and obfuscates the job assignment and share\nsubmission messages. The second component secures the share difficulty\nnotifications, and the third component secures the pool's inference of the\nminer's capabilities. In the following we detail each component. 
We assume\nthat the pool shares a unique secret symmetric key $K_M$ with each miner $M$.\nThe miner and the pool create the key during the first authorization protocol\n(see $\\S$~\\ref{sec:stratum}), e.g., using authenticated Diffie-Hellman.\n\n\\vspace{-5pt}\n\n\\subsubsection{Mining Cookies}\n\\label{sec:bedrock:solution:cookies}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{{bedrock.puzzle}.pdf}\n\\vspace{-15pt}\n\\caption{Bedrock puzzle illustration. The cookie $C_M$ is generated on the\npool, while the $nonce$ and $extranonce2$ are generated on the miner. The\ncoinbase transaction contains both $C_M$ and $extranonce2$, see\nFigure~\\ref{fig:coinbase}.}\n\\label{fig:bedrock:puzzle}\n\\vspace{-15pt}\n\\end{figure}\n\nThe share submission packets are particularly vulnerable. First, they can\nreveal the target value, thus the difficulty of the jobs on which the miner\nworks, and then the miner's hashrate (see $\\S$~\\ref{sec:bedrock:discussion}).\nSecond, share submissions can be hijacked by an active adversary, see\n$\\S$~\\ref{sec:attacks:active}. Encryption of share submissions\nwould prevent these attacks, but it would strain the pool's resources.\n\nTo efficiently address these vulnerabilities, we introduce the concept of a {\\it\nmining cookie}, a secret that each miner shares with the pool, see\nFigure~\\ref{fig:bedrock:puzzle} and Algorithm~\\ref{alg:bedrock}. The miner uses\nits mining cookie as an additional, secret field in the Bitcoin puzzle. Without\nknowledge of the mining cookie, an adversary cannot infer the progress made by\nthe miner, thus its hashrate and payout, and thus cannot hijack shares submitted by\nthe miner.\n\nSpecifically, let $R_M$ be a random cookie seed that the pool generates for a\nminer $M$ (Algorithm~\\ref{alg:bedrock}, line 3). The pool associates $R_M$ with\n$M$, and stores it along with $M$'s symmetric key $K_M$ and its current\n$target$ value (line 6). 
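The seed generation and cookie derivation steps (Algorithm lines 3-4) can be sketched as follows. We assume $H^2$ is a double SHA-256 (Bitcoin's hash) and that the two inputs of $H^2(R_M, M.uname)$ are simply concatenated; the paper does not fix the serialization, so both are labeled assumptions:

```python
import hashlib
import secrets

def h2(data: bytes) -> bytes:
    """Double hash: SHA-256 applied twice, mirroring Bitcoin's H^2 (assumed)."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def generate_cookie_seed() -> bytes:
    """Pool side: a fresh 256-bit random seed R_M (Algorithm line 3)."""
    return secrets.token_bytes(32)

def derive_cookie(r_m: bytes, uname: str) -> bytes:
    """C_M = H^2(R_M, M.uname); plain concatenation of the inputs is an
    assumption, since the paper does not fix the serialization."""
    return h2(r_m + uname.encode())

# Both sides derive the same cookie from the shared seed.
r_m = generate_cookie_seed()
pool_cookie = derive_cookie(r_m, "miner01")
miner_cookie = derive_cookie(r_m, "miner01")
```

Only the encrypted seed $E_{K_M}(R_M)$ crosses the wire (line 7); the cookie itself is recomputed independently on each side.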
The pool computes $M$'s cookie as $C_M = H^2(R_M,\nM.uname)$ (line 4), where $M.uname$ is the username of the miner. It then\nsends $R_M$ to $M$, encrypted with the key $K_M$ (line 7), see\n$\\S$~\\ref{sec:bedrock:solution:set}. The miner similarly uses $R_M$ and its\nusername $M.uname$ to compute $C_M$.\n\nTo minimally modify Bitcoin, Bedrock stores the cookie as part of the coinbase\ntransaction (see Figure~\\ref{fig:coinbase}), in the place of its unused {\\it\nprevious hash} field. This field is unused since the coinbase transaction does\nnot need a meaningful input address hash (see\n$\\S$~\\ref{sec:model:coinbase}). Thus, the puzzle remains the same: The miner\niterates over the $nonce$ and $extranonce2$ values, and reports the pairs that\nsolve the puzzle, along with its username, in share submission packets (lines\n23-26).\n\nTo verify the shares, the pool retrieves the miner's key $K_M$, random seed\n$R_M$ and $target$ values (line 9). It uses $R_M$ to reconstruct the cookie\n(line 10) and uses $target$, and the reported $nonce$ and $extranonce2$ values,\nto reconstruct and verify the puzzle (lines 11 and 12).\n\n\n\\noindent\n{\\bf Random iterators}.\nIn the Bitcoin protocol and the Stratum implementation on F2Pool, the $nonce$\nand $extranonce2$ values are incremented sequentially: once the miner exhausts\n$nonce$, it increments $extranonce2$, then continues to iterate over a reset\n$nonce$ value. In $\\S$~\\ref{sec:bedrock:discussion} we show that this further\nexposes the miner to hashrate inference attacks. We address this problem by\nrequiring the miner to choose random values for $nonce$ and $extranonce2$ at\neach puzzle iteration. 
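The random iteration above can be sketched with any bijection over the nonce space; the affine map below is one simple stand-in. The paper does not fix the permutation construction, and this map is not cryptographically strong, it merely decorrelates the iteration order:

```python
import secrets

def pseudo_rand_perm(bits: int):
    """Visit the whole nonce space in a pseudo-random order without
    repetition, via a random affine bijection i -> (a*i + b) mod 2^bits.
    An odd multiplier 'a' makes the map a bijection on Z_{2^bits}."""
    size = 1 << bits
    a = secrets.randbelow(size) | 1   # force 'a' odd
    b = secrets.randbelow(size)
    for i in range(size):
        yield (a * i + b) % size

# Small demonstration over an 8-bit space instead of the full 32 bits.
nonces = list(pseudo_rand_perm(8))
```

Because the iterator is a permutation, the miner still covers every nonce exactly once before drawing a fresh random $extranonce2$, while an observer can no longer map a submitted nonce back to the number of hashes already tried.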
To prevent the miner from recomputing an expensive\nMerkle tree root at each iteration, we iterate through the $nonce$ space\nusing a pseudo random permutation (lines 20, 24).\n\n\\noindent\n{\\bf Cookie refresh}.\nWhen a miner mines the current block, i.e., when $H^2(nonce || F)$ (where $F$\nembeds $C_M$, see Algorithm~\\ref{alg:bedrock}) is less than the target corresponding to the $Nbits$ value, see\n$\\S$~\\ref{sec:model:puzzle}, the puzzle solution needs to be published in the\nblockchain. The published block needs to include all the fields that defined\nthe puzzle (see $\\S$~\\ref{sec:model:puzzle}), including the miner's cookie, to\nbe publicly verifiable.\n\nTo prevent an adversary who monitors the blockchain from learning the\nmining cookie of a victim miner and then launching a successful BiteCoin attack\n(see $\\S$~\\ref{sec:bedrock:discussion}), Bedrock changes the mining cookie of\nthe miner once the miner mines the current block. This is an infrequent\nevent: for AntMiner S7 mining equipment, with a hashrate of 4.73 TH\/s, and\nthe current Bitcoin network difficulty (2.58522748405e+11),\nEquation~\\ref{eq:expected_time_to_share} shows that the expected time to mine a\nblock is 7.44 years. This lower bound is very conservative, since it assumes a\nconstant difficulty; in reality, the difficulty has increased exponentially\nsince the creation of Bitcoin.\nTo change the cookie, the pool invokes generateCookie (line 2).\n\n\n\\vspace{-5pt}\n\n\\subsubsection{Protect Communicated Secrets}\n\\label{sec:bedrock:solution:set}\n\n\\vspace{-5pt}\n\nStratum's share difficulty notification messages reveal the difficulty assigned\nby the pool to the miner and that the miner uses in the subsequent jobs.\nKnowledge of the puzzle difficulty value, coupled with the (regulated) share\nsubmission rate, enables the adversary to infer the hashrate of the miner\n(see Equation~\\ref{eq:e}), thus its payout. 
In addition, Bedrock also needs to\ncommunicate secret values (e.g., the random $R_M$, see\n$\S$~\ref{sec:bedrock:solution:cookies}). Bedrock addresses these problems by\nextending Stratum's set difficulty notifications to the following {\bf\nmining.encrypted} message:\n\n\vspace{-10pt}\n\n\n\[\n{\tt mining.encrypted,\ E_{K_M}(param\_list)}\n\]\n\n\noindent\nwhere \textit{param\_list} is a list of values that need protection, i.e.,\ndifficulty values and the secret $R_M$. Specifically, \textit{param\_list} can\ncontain any number of sensitive values in the format {\tt\n[[``difficulty'',1024],[``secret'',$R_M$]]}. \n\n\n\vspace{-5pt}\n\n\subsubsection{Secure Hashrate Computation}\n\n\vspace{-5pt}\n\nThe hashrate inference protocol following a miner subscription and\nauthorization, as documented in $\S$~\ref{sec:attacks:passive:isp} and\n$\S$~\ref{sec:eval:passive:isp}, can also be exploited by an adversary to infer\nthe miner's hashrate. To address this vulnerability, Bedrock requires the miner\nto directly report its hashrate during the initial subscription message, along\nwith other miner capabilities. The miner can locally estimate its hashrate,\ne.g., by creating and executing random jobs with a difficulty of 1024. The\nminer also needs to factor in its communication latency to the pool, which it\ncan infer during the subscription protocol. 
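The local hashrate estimate can be sketched as follows. Equation~\ref{eq:e} is not reproduced in this excerpt, so the sketch relies on the standard Bitcoin relation that a share of difficulty $D$ takes about $D \cdot 2^{32}$ hashes in expectation:

```python
from statistics import mean

def estimate_hashrate(inter_share_times, difficulty=1024):
    # A share of difficulty D requires roughly D * 2**32 hashes in
    # expectation, so hashrate ~= D * 2**32 / (mean inter-share time).
    return difficulty * 2**32 / mean(inter_share_times)
```

At difficulty 1024, a 4.73 TH/s AntMiner S7 finds a share roughly every 0.93 seconds, so timing a handful of self-assigned jobs suffices for a usable estimate.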
The miner sends its hashrate\nencrypted, using the ``mining.encrypted'' message defined above.\n\n\nIf the pool subsequently receives share submissions from the miner outside\nthe desired rate range, it can then adjust the difficulty (through the above\nencrypted share difficulty notifications) in order to reflect its more accurate\ninference of the miner's hashrate.\n\n\vspace{-5pt}\n\n\section{Discussion}\n\label{sec:discussion}\n\n\subsection{Security Discussion}\n\label{sec:bedrock:discussion}\n\n\vspace{-5pt}\n\nWe now discuss attacks against Stratum and Bedrock, and detail the defenses\nprovided by Bedrock.\n\n\noindent\n\textbf{Target reconstruction attack}.\nAn attacker that can inspect cleartext subscription response, job assignment\nand share submission packets can reconstruct the job (i.e., puzzle) solved by\nthe victim miner: Recover $extranonce1$ from an early miner subscription\nmessage, $coinbase1$, $coinbase2$ and the Merkle tree branches from a job\nassignment, and $nonce$ and $extranonce2$ from a subsequent share submission\npacket. The attacker then reconstructs the $F$ field of the puzzle (see\n$\S$~\ref{sec:model:puzzle}) and uses it to infer the miner's hashrate, even\nwithout knowing the puzzle's associated $target$ value. Specifically, the\nattacker computes the double hash of $F$ concatenated with $nonce$, then\ncounts the number of leading zeroes to obtain an upper bound on the job's\ntarget. The attacker then uses recorded inter-share submission time stats and\nEquation~\ref{eq:e} to estimate the miner's hashrate.\n\nBedrock thwarts this attack through its use of the cookie $C_M$, a secret known\nonly by the miner and the pool. The cookie is part of the puzzle. Without its\nknowledge, the attacker cannot reconstruct the entire puzzle, and thus cannot infer the\ntarget.\n\n\noindent\n\textbf{Brute force the cookie}.\nThe attacker can try to brute force the cookie value. 
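The target reconstruction step above can be sketched as follows. The exact layout of $F$ and the nonce encoding are assumptions here (a 4-byte little-endian nonce, as in Bitcoin block headers):

```python
import hashlib

def h2_int(data: bytes) -> int:
    # Double SHA-256, interpreted as a 256-bit integer.
    d = hashlib.sha256(hashlib.sha256(data).digest()).digest()
    return int.from_bytes(d, "big")

def target_bound(f: bytes, nonce: int) -> int:
    # The observed share's double hash passed the (unknown) target test,
    # so its leading-zero count z bounds the job's target by 2**(256 - z).
    h = h2_int(f + nonce.to_bytes(4, "little"))
    zeros = 256 - h.bit_length()
    return 1 << (256 - zeros)
```

Combining this bound with inter-share submission times then yields the hashrate estimate of Equation~\ref{eq:e}.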
To gain confidence, the\nattacker uses the fields from multiple jobs assigned to the same miner to try\neach candidate cookie value. A candidate is considered ``successful'' if it\nproduces a high target value for all the considered jobs. However, in\n$\S$~\ref{sec:implementation} we leverage the unused, 256-bit long ``previous\nhash'' field of the coinbase transaction, to store the mining cookie. Brute\nforcing this field is considered infeasible.\n\n\noindent\n{\bf Resilience to cryptographic failure}.\nWe now assume an adversary that is able to break the encryption employed by the\npool and the miner, e.g., due to the use of weak random values. Giechaskiel et\nal.~\cite{GCR16} studied the effect of broken cryptographic primitives on\nBitcoin, see $\S$~\ref{sec:related}. While such an adversary can compromise the\nprivacy of the miner, by recovering the miner's cookie, he will be prevented\nfrom launching active attacks. This is because the miner's cookie is a function\nof both a random number and the miner's username.\n\nSpecifically, if the attacker hijacks a miner's share submission, the pool\nwould use the attacker's username instead of the victim's username to construct\nthe cookie, the coinbase transaction and eventually the block header. The share\nwill only validate if the attacker managed to find a username that produced a\ndouble hash that was still smaller than the target corresponding to the\ndifficulty set by the pool. However, the attacker will need to find such\nusernames for each hijacked share. If the attacker were able to quickly find\nsuch partial collisions, it would be much easier to simply compute the shares\nwithout doing any interception and hijacking.\n\nWe further consider an attacker able to break the hash function (invert and\nfind collisions). Such an attacker can recover a miner's $R_M$ value, then find\na username that produces a collision with the miner's cookie $C_M$. 
We observe,\nhowever, that such an attacker would then also be able to mine blocks quickly,\ne.g., by inverting hash values that are smaller than the target corresponding\nto the $Nbits$ value.\n\n\vspace{-5pt}\n\n\subsection{Limitations}\n\n\vspace{-5pt}\n\n\noindent\n\textbf{Opportunistic cookie discovery}.\nWhen the miner mines the current block, i.e., the double hash of the puzzle is\nsmaller than the target corresponding to $Nbits$, the miner's cookie is\npublished in the blockchain. An adversary who has captured job assignments and\nshare submissions from the miner, just before this takes place, can use them,\nalong with the published cookie, to reconstruct the entire puzzle and infer the\nminer's hashrate.\n\nThis opportunistic attack may take years (e.g., 7.44 years\nfor an AntMiner S7, see $\S$~\ref{sec:bedrock:solution:cookies}), while, from\nour experience, mining equipment has a useful lifetime of around 2 years.\n\newmaterial{However, this attack may be more effective against an entity\nthat owns many homogeneous miners: an adversary may only need days to infer\nthe rate of a single miner.}\n\nTo address this limitation, each miner could, at random intervals,\nchange its operation frequency to a randomly chosen value within an\n``acceptable'' operation range. Assuming that the adversary only captures a\nlimited window of the victim miner's communications, he will only be able to\n(i) recover temporary, past hashrate values of the miner, and (ii) reconstruct\nthe miner's payouts over the monitored interval. 
Since the miner changes its\noperation frequency once a new cookie is assigned, the adversary will not be\nable to predict the miner's future hashrates and payouts.\n\n\noindent\n{\bf Verification scope}.\nWe have only investigated the implementation of Stratum in the pool F2Pool.\nHowever, the identified privacy issues also likely affect other pools, as any\nobfuscation to the set difficulty messages would break the compatibility with\nthe Stratum protocol implemented in current mining equipment.\n\n\vspace{-5pt}\n\n\section{Implementation and Testbed}\n\label{sec:implementation}\n\n\vspace{-5pt}\n\nIn our experiments, we have used AntMiner S7, a specialized ASIC device for\nBitcoin mining that achieves a hashrate of 4.73 TH\/s at 700MHz~\cite{s7_miner}.\nWe have configured the device for mining on the F2Pool pool, using the Stratum\nprotocol~\cite{f2pool_help}.\n\n\n\subsection{Passive Attacks}\n\label{sec:implementation:passive}\n\nIn order to collect experimental traffic for the passive attacks, we have\nleveraged the ability of the AntMiner S7 device to operate at different chip\nclock frequencies in order to simulate miner devices with different\ncapabilities. Specifically, we carried out 24 hour long mining experiments with\nthe AntMiner S7 operating at frequencies ranging from 100MHz to 700MHz, with\n50MHz granularity. We have used Tcpdump~\cite{jacobson2003tcpdump} to capture\n138MB of Stratum traffic of AntMiner S7 devices in the US (May 27 - June\n8, 2016) and Venezuela (March 8 - April 2, 2016). We have sliced the resulting\npcap files into 24 hour intervals using editcap, then processed the\nresults using python scripts with the scapy library~\cite{biondi2011scapy}.\n\nIn addition to the mining traffic, for each of the 24 hour runs, we collected\nthe empirical payout as reported by the pool, as well as the device hashrate\nreported by its internal functionality. 
We used 24 hour runs because the pool\nuses 24 hour cycles for executing payouts. We have manually synchronized the\nruns and payout cycles so as to easily correlate the data collected with its\ncorresponding payout.\n\n\noindent\n{\bf StraTap attack}.\nTo implement the StraTap attack, we have created a script that selects packets\nfrom the captured traces with the ``set\_difficulty'' pattern (the method\ninvoked by the share difficulty notification messages). This pattern signals our script\nto reset the share submission count, as well as to record the new difficulty.\n\n\noindent\n{\bf ISP Log attack}.\nFor the ISP Log attack, we used packets sent after the 3-way handshake\ninitiated by the pool. In addition, to compute more accurate inter-packet\ntimes, we only considered packets that had the PUSH flag set (captured by most\nfirewall logs, e.g., Snort IDS), thus with non-empty payloads (i.e., no ack\npackets that originated on the miner). The PUSH flag is used to mitigate the\neffects of delays on the processing of share submissions, which may end up\ncausing share rejections. By setting the PUSH flag, miners try to increase the\nchance that their shares are quickly processed.\n\n\vspace{-5pt}\n\n\subsection{BiteCoin Attack Implementation}\n\label{sec:implementation:bitecoin}\n\n\vspace{-5pt}\n\n\begin{figure}\n\centering\n\includegraphics[width=0.49\textwidth]{bitecoin_system}\n\caption{Architecture of the BiteCoin attack implementation.}\n\label{fig:bitecoin:system}\n\vspace{-15pt}\n\end{figure}\n\nThe BiteCoin attack system is illustrated in Figure~\ref{fig:bitecoin:system}.\nWe have built WireGhost using the iptables nfqueue target, in order to pass\npackets into user space. Once it receives network segments in the user space,\nit uses the scapy python library to parse and modify packets. 
Additionally, it\nuses the python nfqueue bindings in order to pass a verdict to the packets.\n\nIn order to test BiteCoin and WireGhost, we set up the victim miner behind an\nattacker-controlled server that performed ``source NAT'' and packet forwarding\nfor it. This architecture allowed us to emulate an active attacker\nintercepting the communication between the miner and the pool. We have\nimplemented the attacker as a python script that connects to F2Pool using\nStratum, then intercepts and modifies job assignments and share submissions on\nthe victim's connection to the pool. While the attacker script does not perform\nany mining, in $\S$~\ref{sec:eval:bitecoin} we show that it is able to steal\nthe victim's hashing power.\n\n\vspace{-5pt}\n\n\subsection{Bedrock Implementation}\n\n\vspace{-5pt}\n\nOne requirement of Bedrock is to minimally disrupt the Stratum protocol, see\n$\S$~\ref{sec:bedrock:requirements}. Thus, instead of designing the cookie to\nbe an external field, we seek to leverage unused fields of the coinbase\ntransaction. An obvious candidate for the cookie placement is the input script\nwhere the $extranonce1$ and $extranonce2$ reside. However, most pools have\nalready started using this space for their own internal procedures, e.g., in\nF2Pool, to store the miner's name.\n\nInstead, Bedrock uses the as yet unused, 32-byte (256-bit) long ``previous input\naddress'' field of the coinbase transaction, see Figure~\ref{fig:coinbase}.\nSince the coinbase transaction rewards the pool with the value of the mined\nblock (if that event occurs), its input is not used. We have investigated the\nStratum implementation of several pools, including F2Pool~\cite{f2pool_help},\nGHash.io~\cite{GHash.io}, SlushPool~\cite{SlushPool}, and have confirmed that\nnone of them use this field. 
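The cookie placement can be illustrated with Bitcoin's wire format for a transaction input: previous-output hash (32 bytes), previous-output index (4 bytes), a scriptSig length varint, the scriptSig itself, and a sequence number. A minimal sketch (assuming a scriptSig short enough that the varint is a single byte):

```python
import struct

def coinbase_input_with_cookie(cookie: bytes, script_sig: bytes) -> bytes:
    # Place the 32-byte cookie where the all-zero previous-output hash
    # normally goes; the index keeps its usual coinbase marker 0xffffffff.
    assert len(cookie) == 32
    assert len(script_sig) < 0xfd  # single-byte varint only, for brevity
    return (cookie
            + struct.pack("<I", 0xFFFFFFFF)   # previous-output index
            + bytes([len(script_sig)])        # scriptSig length
            + script_sig                      # holds extranonce1/2, etc.
            + struct.pack("<I", 0xFFFFFFFF))  # sequence
```

Since ordinary nodes never dereference a coinbase input, the substituted hash does not affect block validity.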
In addition, we note that the size of this field\nmakes it ideal to store the output of a double SHA-256 hash.\n\n\begin{table}\n\centering\n\textsf{\n\small\n\begin{tabular}{l r r}\n\textbf{Freq(MHz)} & \textbf{Hashrate(GH\/s)} & \textbf{StraTap Hashrate(GH\/s)}\\\n\midrule\n700 & 4720.55 & 4571.48\\\n650 & 4371.85 & 4309.96\\\n600 & 4040.49 & 4151.27\\\n550 & 3693.90 & 3624.13\\\n500 & 3365.38 & 3524.57\\\n450 & 3030.01 & 3154.80\\\n400 & 2689.34 & 2696.72\\\n350 & 2364.61 & 2382.17\\\n300 & 2023.65 & 2039.55\\\n250 & 1687.17 & 1699.91\\\n200 & 1347.14 & 1274.29\\\n150 & 1010.19 & 1007.06\\\n100 & 672.55 & 703.28\\\n\end{tabular}\n}\n\caption{Operation frequency, actual hashrate and StraTap-inferred hashrate.\nWe observe the correlation between the actual and the inferred hashrate, which\nallowed StraTap to achieve a good payout estimate.\n}\n\label{table:freq_hashrate_payout}\n\vspace{-15pt}\n\end{table}\n\n\n\vspace{-5pt}\n\n\section{Evaluation}\n\label{sec:eval}\n\n\vspace{-5pt}\n\nIn this section we evaluate the StraTap, ISP Log and BiteCoin attacks, as well\nas the performance of Bedrock. We use the mean squared error (MSE) and the mean\npercentage error (MPE) to evaluate the accuracy of the predictions made by the\npassive attacks. Specifically, let $P = \{ P_1, \ldots, P_n\}$ be a set of observed\ndaily payments over $n$ days, and let $\bar{P} = \{ \bar{P_1}, \ldots, \bar{P_n}\n\}$ be the corresponding predicted daily payments for the same days. 
Then,\nMSE($\bar{P},P$) = $\frac{1}{n} \sum_{i=1}^n (\bar{P_i} - P_i)^2$, and\nMPE($\bar{P},P$) = $\frac{100\%}{n} \sum_{i=1}^n \frac{P_i - \bar{P_i}}{P_i}$.\n\n\vspace{-5pt}\n\n\subsection{The StraTap Attack}\n\label{sec:eval:passive:stratap}\n\n\vspace{-5pt}\n\n\begin{figure}[t]\n\centering\n\includegraphics[width=0.45\textwidth]{isplog_stratap_payout}\n\caption{Payout prediction by StraTap and ISP Log attacks, compared to the\nempirical payout, in milli-Bitcoin (mBTC), as a function of the miner's\nfrequency of operation (MHz). The \textit{actual payout} series (red diamonds)\ncorresponds to daily payouts collected from the F2Pool account records. The\nStraTap payout series (blue disks) shows daily payout predictions based on\nentire Stratum messages intercepted. The ISP Log series (green triangles) shows\nthe daily payout prediction when using the average inter-packet times over 50\npackets. StraTap's prediction error ranges between 1.75\% and 6.5\% (MSE=0.062,\nMPE=-3.46\%). ISP Log has an error between 0.53\% and 34.4\% (MSE = 2.02, MPE =\n-9.49\%).}\n\label{fig:eval:stratap}\n\vspace{-15pt}\n\end{figure}\n\nWe have used the StraTap script described in\n$\S$~\ref{sec:implementation:passive} to calculate the average time of share\ncreation for each of the detected intervals of constant difficulty. For each of\nthe 24 hour runs, we also calculated the weighted average difficulty as well as\nthe weighted average hashrate for the entire run. In addition, we have also\nused Equation~\ref{eq:e}, along with the computed average time and recorded\ndifficulty values, to compute a prediction of the weighted average hashrate of\nthe miner.\n\nTable~\ref{table:freq_hashrate_payout} shows the AntMiner's frequency of\noperation, the output hashrate achieved at that frequency, and the predicted\nhashrate. As expected, there is a linear relationship between the frequency of\noperation and the hashrate achieved by the device. 
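The two error metrics defined in the Evaluation preamble, in executable form (note the sign convention: a negative MPE means the predictions exceed the actual payouts):

```python
def mse(predicted, actual):
    # Mean squared error over n daily payment predictions.
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

def mpe(predicted, actual):
    # Mean percentage error; negative when predictions are larger
    # than the observed payments.
    return 100.0 / len(actual) * sum((a - p) / a
                                     for p, a in zip(predicted, actual))
```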
As a consequence, this\nrelationship is preserved across the empirical payout reported by the pool\noperators.\n\nSpecifically, we have used the pool's hashrate to BTC conversion (see\n$\\S$~\\ref{sec:stratum}) to predict the miner's resulting daily payout.\nFigure~\\ref{fig:eval:stratap} shows the data series for the empirical and\npredicted payouts, versus the operation frequency of the miner. The StraTap\nattack achieves a prediction error of between 1.75\\% and 6.5\\%, with a mean\nsquare error (MSE) of 0.062 and mean percentage error (MPE) of -3.46\\%. Thus,\nStraTap's predictions tend to be slightly larger than the actual payout values.\n\n\\subsection{The ISP Log Attack}\n\\label{sec:eval:passive:isp}\n\n\\vspace{-5pt}\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[]\n{\\label{fig:timeline:hashrate:200MHz}{\\includegraphics[width=0.49\\textwidth]{timeline_80packet_200MHz_indicator}}}\n\\vspace{-5pt}\n\\subfigure[]\n{\\label{fig:timeline:hashrate:600MHz}{\\includegraphics[width=0.49\\textwidth]{timeline_125packet_600MHz_indicator}}}\n\\vspace{-5pt}\n\\caption{Timelines that focus on the interval between the first two share\ndifficulty notifications, following a miner subscription and authorization\nprotocol, when\n(a) the miner operates at 200MHz and\n(b) the miner operates at 600MHz.\nWhile the intervals between the first two such notifications at both\nfrequencies contain approximately 50 share submission packets, this interval is\nsignificantly shorter at 600MHz. This is because at 600MHz the miner can solve\nthe 1024 difficulty puzzles much faster than at 200MHz. The ``ISP Log'' attack\nexploits this observation to infer the hashrate of the miner, while only\ncounting packets (i.e., without being able to inspect them).}\n\\label{fig:timeline:hashrate}\n\\vspace{-15pt}\n\\end{figure}\n\nWe first present results of our analysis of F2Pool's hashrate inference\nprotocol. 
We then show the ability of the ISP Log attack to leverage these\nfindings to infer the miner's daily payouts, given only metadata of the miner's\npackets.\n\n\begin{table}\n\centering\n\textsf{\n\small\n\begin{tabular}{l r r}\n\textbf{Freq(MHz)} & \textbf{\# of Packets} & \textbf{Time Interval (s)}\\\n\midrule\n100 & 57 & 288.9\\\n150 & 56 & 256.1\\\n200 & 51 & 153.6\\\n250 & 63 & 146.0\\\n300 & 55 & 131.1\\\n350 & 62 & 146.3\\\n400 & 54 & 102.0\\\n450 & 67 & 104.7\\\n500 & 50 & 58.2\\\n550 & 62 & 76.1\\\n600 & 54 & 50.7\\\n650 & 56 & 45.7\\\n\end{tabular}\n}\n\caption{Number of share submission packets for the initial $1024$ difficulty\nperiod, as well as the length of the time interval when the pool accepted those\nshares, for various miner frequencies of operation. At any miner operation\nfrequency, at least $50$ share submission packets are accepted, irrespective of\nwait time. This process enables the pool and the ISP Log attack to infer the\nminer's hashrate.\n}\n\label{table:timeline_1024}\n\vspace{-15pt}\n\end{table}\n\n\noindent\n{\bf Hashrate inference protocol}.\nAs mentioned in $\S$~\ref{sec:attacks:passive:isp}, immediately following the\nminer subscription and authorization, the pool sets the difficulty to 1024, and\nchanges it only after receiving a sufficient number of share submissions to\ninfer the miner's hashrate. For instance,\nFigure~\ref{fig:timeline:hashrate:200MHz} shows that when the miner operates at\n200MHz, the number of share submissions between the first two share difficulty\nnotification messages is similar to the number of share submissions when the\nminer operates at 600MHz (Figure~\ref{fig:timeline:hashrate:600MHz})\n(approximately 50 in both cases). 
However, the time interval between the first two share\ndifficulty notifications is much shorter at 600MHz: the miner can compute 50\nshares at the constant difficulty 1024 much faster than when operating at\n200MHz.\n\nMore generally, Table~\ref{table:timeline_1024} shows the number of share\nsubmission packets sent for this initial measurement period for each of the\nfrequencies analyzed. We observe that the pool requires that this process\ngenerates at least $50$ share submissions, irrespective of the miner operation\nfrequency. The pool waits up to 288 seconds to receive the required number of\nshares, before sending the second share difficulty notification.\n\nWe conjecture that the pool uses this process in order to infer the hashrate of\nthe miner, which it needs in order to assign jobs (puzzles) that a miner can\nsolve at a ``desirable'' rate. Specifically, large pools handle thousands of miners\nsimultaneously\footnote{The Bitcoin network currently has around 100,000\nminers~\cite{MinerCount,Corti}, of which at least 16\% work with\nF2Pool~\cite{F2PoolShare}.}. In order to minimize the time it takes to process\nshare submissions received from thousands of miners, the pool\nneeds to regulate the rate at which a miner submits shares, irrespective of the\nminer's computing capabilities. Figure~\ref{fig:timeline:rate} illustrates\nthis share submission rate control. In our experiments we observed that, for F2Pool,\nthis rate ranges between 1 and 4 share submissions per minute. A second reason\nfor this process stems from the need of miners to prove computation progress\nand gain regular payouts.\n\n\begin{figure}[t!]\n\vspace{-15pt}\n\centering\n\includegraphics[width=0.43\textwidth]{passive_blind_restricted_30_R}\n\vspace{-5pt}\n\caption{\n1st, 2nd and 3rd quartile for the inter-packet times of the first 50\npackets during the initial difficulty setting procedure, as a function of the\nminer's operating frequency. 
We observe a monotonically decreasing tendency\nof the inter-packet times as the miner's capabilities increase.\nThis suggests that inter-packet time stats over the first 50 packets can provide\na good hashrate estimator for the ISP Log attack.\n\label{fig:isp:log:predictor}}\n\vspace{-15pt}\n\end{figure}\n\n\noindent\n{\bf ISP Log attack results}.\nWe have implemented the ISP Log attack using statistics of the inter-packet\narrival time of the first 50 packets sent by the miner to the pool, after a\ndetected 3-way miner subscription and authorization protocol.\nFigure~\ref{fig:isp:log:predictor} shows the 1st, 2nd (median) and 3rd\nquartiles of the inter-packet times, for the first 50 packets, when the miner\noperates at frequencies ranging from 100 to 650 MHz. The linearly decreasing\nbehavior of the median, 1st and 3rd quartiles indicates that statistics over\nthe inter-packet times of the first 50 packets may make a good predictor.\n\n\n\nTo confirm this, we have used the mean inter-packet time over the first 50\npackets to predict the miner's hashrate and then its payout.\nFigure~\ref{fig:eval:stratap} compares the ISP Log attack daily payout\nprediction with that of StraTap and with the empirical payout. ISP Log has\nan error that ranges between 0.53\% and 34.4\%, with an MSE of 2.02 and MPE of\n-9.49\%. Thus, ISP Log over-predicts the daily payouts, and, as expected, it\nexceeds the error of the StraTap attack. \n\n\vspace{-5pt}\n\n\subsection{BiteCoin: Proof of Concept}\n\label{sec:eval:bitecoin}\n\n\vspace{-5pt}\n\n\begin{figure}[t]\n\centering\n\subfigure[]\n{\label{fig:bitecoin:timeline:attacker}{\includegraphics[width=0.49\textwidth]{bitecoin_attacker_marked}}}\n\vspace{-5pt}\n\subfigure[]\n{\label{fig:bitecoin:timeline:victim}{\includegraphics[width=0.49\textwidth]{bitecoin_victim}}}\n\vspace{-5pt}\n\caption{Greedy BiteCoin attack timelines for\n(a) adversary and (b) victim miner. 
In a 5h interval, the attacker hijacked 342\njob assignments and 72 corresponding share submissions of the victim miner.\n23 shares (the green clumps marked with red arrows) were accepted by the pool.}\n\label{fig:bitecoin:timeline}\n\vspace{-15pt}\n\end{figure}\n\nWe have experimented with the BiteCoin implementation described in\n$\S$~\ref{sec:implementation:bitecoin}. Specifically, the attacker greedily\ninjected all the jobs assigned by the pool into the victim communication stream\nduring the attack window, without any modification. Our implementation\ninjected a total of 342 job assignments in a time interval of 5 hours, from\n21:25 to 02:24. The attacker monitored the share submissions from the\nvictim, and hijacked shares corresponding to the injected jobs.\n\nFigure~\ref{fig:bitecoin:timeline} shows the results of this attack. The\nadversary, whose timeline is shown in\nFigure~\ref{fig:bitecoin:timeline:attacker}, hijacked 72 share submissions from\nthe victim miner. 23 shares (the green clumps marked with red arrows) were\naccepted by the pool, i.e., as if they were mined by the attacker and not by\nthe victim. 49 shares were rejected. Figure~\ref{fig:bitecoin:timeline:victim}\nshows the timeline of the attack from the perspective of the victim miner.\n\nThe gaps are likely due to the attacker script trying to keep a constant stream of work.\nEvery disconnection and reconnection of the attacker will trigger a subscribe\nprotocol where the first job has the true flag set. This would explain why\nthere are no hijacked shares between around 22:00 and 1:00 in the attacker\ntimeline, as well as the gap in activity in the victim timeline. 
These\nrepeated reconnects may have blanked the job pool of the victim\nuntil the attacker was able to maintain its connection to submit the shares.\n\n\n\vspace{-5pt}\n\n\subsection{The Bedrock Evaluation}\n\label{sec:eval:bedrock}\n\n\vspace{-5pt}\n\n\begin{figure}\n\centering\n\includegraphics[width=0.45\textwidth]{overhead}\n\caption{Overhead comparison of Bedrock and a complete encryption approach, for\nminer and pool. Bedrock imposes a small daily overhead on both the pool (12.03s\nto handle 16,000 miners) and miner (0.002s). However, a solution that\nencrypts all Stratum packets imposes a daily overhead of 1.36 hours on the\npool.}\n\label{fig:eval:bedrock}\n\vspace{-15pt}\n\end{figure}\n\nWe measured Bedrock's encryption times when using AES-256 in CBC mode on the\nAntMiner S7 and on a server with a 40-core Intel(R) Xeon(R) E5-2660 v2 CPU @\n2.20GHz and 64 GB RAM. The AntMiner was able to encrypt 1024-byte blocks at\n32,231.09 Kb\/sec while the server was able to encrypt at 86,131.07 Kb\/sec for\nthe same block size.\n\n\nBased on the collected data, Stratum generates an average of 31.63 set\ndifficulty messages per day. Figure~\ref{fig:eval:bedrock} shows that Bedrock\nimposes a 0.002s decryption overhead per day on an AntMiner S7, while on a pool\nusing the above server to handle 16,000 miners, it imposes an encryption\noverhead of 12.03 seconds per day.\n\nIn contrast, a solution that encrypts each Stratum packet imposes an overhead\nof 0.13 seconds per day on the AntMiner, and an unacceptable 1.36 hours per day\non the pool server, to handle 16,000 miners.\n\n\vspace{-15pt}\n\n\subsubsection{TLS Overheads}\n\n\newmaterial{\nWe also compare Bedrock against Stratum protected with TLS. 
We have used a\nreplay of a 24 hour subset of our Stratum traffic dataset\n($\\S$~\\ref{sec:implementation:passive}), sent over TLS between a laptop used as\nminer (AntMiner does not support TLS) and the server above, used as the pool.}\n\n\\noindent\n\\newmaterial{\n{\\bf Computation overheads}.\nTo measure the TLS computation overheads, we have used\nTcpdump~\\cite{jacobson2003tcpdump} to capture the times when Stratum\/TLS\npackets leave from and arrive at the pool application, and also captured the\ntime when the packets are sent from\/received by the pool TLS socket. We have\ncomputed the total daily pool side TLS overhead of sending and receiving\nStratum packets (job assignment, share submission, notifications, set\ndifficulty change, etc). Figure~\\ref{fig:eval:bedrock} shows the difference\nbetween this overhead and the same overhead but when using bare TCP. It shows\nthat the daily computation overhead imposed by TLS on the pool, through the\ntraffic of 16,000 miners, is 1.01 hours.} This amounts to a\ncomputational overhead percentage of at least 4.3\\%.\n\n\n\\noindent\n\\newmaterial{\n{\\bf Bandwidth overhead}.\nIn addition, we have measured the bandwidth overhead imposed by TLS. The total\nminer-to-pool payload (single miner) for cleartext Stratum\/TCP traffic is\n465,875 bytes and for Stratum\/TLS is 738,873 bytes. The total pool-to-miner\npayload of Stratum\/TCP is 3,852,795 bytes while for Stratum\/TLS is 4,062,956\nbytes. Thus, TLS imposes a 58\\% overhead on the miner-to-pool bandwidth, for a\ntotal of 4.05GB daily overhead on the pool from 16,000 miners. This uplink\noverhead is significant, especially for miners in countries with poor Internet\nconnectivity.}\n\n\\newmaterial{\nTLS imposes a 5\\% overhead on the pool-to-miner bandwidth, for a total of\n3.13GB daily overhead on the pool. 
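The bandwidth overhead figures above can be reproduced from the reported daily per-miner payloads (the computed GiB totals land within rounding of the quoted 4.05GB and 3.13GB):

```python
def overhead_pct(protected_bytes, cleartext_bytes):
    # Percentage overhead of the protected (TLS) payload over cleartext TCP.
    return 100.0 * (protected_bytes - cleartext_bytes) / cleartext_bytes

def daily_pool_overhead_gib(protected_bytes, cleartext_bytes, miners=16000):
    # Aggregate extra bytes per day across all miners, in GiB.
    return (protected_bytes - cleartext_bytes) * miners / 2**30

# Reported daily per-miner payloads:
#   miner-to-pool: 465,875 B (TCP) vs 738,873 B (TLS)  -> ~58% overhead
#   pool-to-miner: 3,852,795 B (TCP) vs 4,062,956 B (TLS) -> ~5% overhead
```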
The TLS overhead is much larger in\nminer-to-pool communications, even though there are more pool-to-miner packets.\nThis is because the miner-to-pool share submission packets are much smaller\nthan the pool-to-miner job assignments, thus the TLS overhead (125 to 160\nbytes) becomes a significant factor for them.} In contrast, the\npercentage bandwidth overhead for Bedrock is only 0.04\\%.\n\n\\noindent\n\\newmaterial{\n{\\bf Conclusions}.\nBedrock is more efficient than blanket encryption and TLS. While the pool could\nuse more equipment to handle encryption more efficiently, blanket encryption\nand TLS do not address the hashrate inference vulnerability. In addition, TLS\nimposes a significant uplink bandwidth overhead on miners.}\n\n\\vspace{-10pt}\n\n\\section{Related Work}\n\\label{sec:related}\n\n\\vspace{-5pt}\n\n\\noindent\n{\\bf Bitcoin mining attacks}.\nDecker and Wattenhofer~\\cite{DW13} study Bitcoin's use of a multi-hop broadcast\nto propagate transactions and blocks through the network to update the ledger\nreplicas, then study how the network can delay or prevent block propagation.\nHeilman et al.~\\cite{HKZG15} propose eclipse attacks on the Bitcoin network,\nwhere an attacker leverages the reference client's policy for updating peers to\nmonopolize all the connections of a victim node, by forcing it to accept only\nfraudulent peers. The victim can then be exploited to attack the mining and\nconsensus systems of Bitcoin. Bissas et al.~\\cite{BLOAH16} present and\nvalidate a novel mathematical model of the blockchain mining process and use it\nto conduct an economic evaluation of double-spend attacks, both with and\nwithout a concurrent eclipse attack.\n\nCourtois and Bahack~\\cite{CB14} propose a practical block withholding attack,\nin which dishonest miners seek to obtain a higher reward than their relative\ncontribution to the network. 
They also provide an excellent background\ndescription of the motivation and functionality of mining pools and the mining\nprocess.\n\n\noindent\n{\bf Bitcoin anonymity}.\nSignificant work has focused on breaking the anonymity of Bitcoin\nclients~\cite{BKP14,KKM14,MPJLMVS13,AKRSC13}. For instance, Biryukov et\nal.~\cite{BKP14} proposed a method to deanonymize Bitcoin users, which allows\nlinking user pseudonyms to the IP addresses where the transactions are\ngenerated. Koshy et al.~\cite{KKM14} use statistical metrics for mappings of\nBitcoin to IP addresses, and identify pairs that may represent ownership\nrelations.\n\nSeveral solutions arose to address this problem. Miers et al.~\cite{MGGR13}\nproposed ZeroCoin, which extends Bitcoin with a cryptographic accumulator and\nzero-knowledge proofs to provide fully anonymous currency transactions.\nBen-Sasson et al.~\cite{BSCG0MTV14} introduced Zerocash, a decentralized\nanonymous payment solution that hides all information linking the source and\ndestination of transactions. Bonneau et al.~\cite{BNMCKF14} proposed\nMixcoin, a currency mix with accountability assurances and randomized fee-based\nincentives.\n\nOur work is orthogonal to previous work on Bitcoin anonymity, as it identifies\nvulnerabilities in Stratum, the communication protocol employed by\n\newmaterial{cryptocurrency} mining solutions. As such, our concern is for the\nprivacy and security of the miners, as they generate coins. Our attacks are\nalso more general, as they apply not only to Bitcoin, but to a suite of other\npopular altcoin solutions,\ne.g.,~\cite{litecoin_stratum,ethereum_stratum,monero_stratum}, that build on\nStratum.\n\n\noindent\n{\bf Effects of broken crypto on Bitcoin}.\nGiechaskiel et al.~\cite{GCR16} systematically analyze the effects of broken\ncryptographic primitives on Bitcoin. They reveal a wide range of possible\neffects that range from minor privacy violations to a complete breakdown of the\ncurrency. 
Our attacks do not need broken crypto to succeed. However, we show\nthat Bedrock, our secure Stratum extension, is resilient to broken crypto\nprimitives.\n\n\\vspace{-10pt}\n\n\\section{Conclusions}\n\n\\vspace{-5pt}\n\nIn this paper we have shown that the lack of security in Stratum, Bitcoin's\nmining communication protocol, makes miners vulnerable to a suite of passive\nand active attacks that expose their owners to hacking, coin and equipment\ntheft, loss of revenue, and prosecution. We have implemented the attacks that\nwe introduced and shown that they are efficient. Our attacks reveal that\nencryption is not only undesirable, due to its significant overheads, but also\nineffective: an adversary can predict miner earnings even when given access to\nonly the timestamps of miner communications. We have developed Bedrock, a\nminimal and efficient Stratum extension that protects the privacy and security\nof mining protocol participants. We have shown that Bedrock imposes an almost\nnegligible computation overhead on the mining participants and is resilient to\nactive attacks even if the used cryptographic tools are compromised.\n\n\\section{Acknowledgments}\n\n\\newmaterial{\nWe thank the shepherds and the anonymous reviewers for their excellent\nfeedback. We thank Patrick O'Callaghan for suggesting this problem and for\ninsightful discussions. This research was supported by NSF grants 1526494 and\n1527153.}\n\n\\vspace{-10pt}\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nThe problem of designing auctions that maximize the seller's revenue in settings with many heterogeneous goods has attracted a large amount of interest in recent years, both from the Computer Science and the Economics community (see e.g.~\\citep{Manelli:2006vn,Pavlov:2011fk,Hart:2012uq,Hart:2012zr,Daskalakis:2012fk,Daskalakis:2013vn,gk2014,Menicucci:2014jl,Daskalakis:2014fk}).
Here the seller faces a buyer whose true values for the $m$ items come from a probability distribution over $\\mathbb R_+^m$ and, based only on this incomplete prior knowledge, he wishes to design a selling mechanism that will maximize his expected revenue. For the purposes of this paper, the prior distribution is a product one, meaning that the item values are independent. The buyer is additive, in the sense that her happiness from receiving any subset of items is the sum of her values of the individual items in that bundle. The buyer is also selfish and completely rational, thus willing to lie about her true values if this is to improve her own happiness. So, the seller should also make sure to give the right incentives to the buyer in order to avoid manipulation of the protocol by misreporting. \n\nThe special case of a single item has been very well understood since the seminal work of~\\citet{Myerson:1981aa}. However, when one moves to settings with multiple goods, the problem becomes notoriously difficult and novel approaches are necessary. Despite the significant effort of the researchers in the field, essentially only specialized, partial results are known: there are exact solutions for two items in the case of identical uniform distributions over unit-length intervals~\\citep{Pavlov:2011fk,Manelli:2006vn}, exponential over $[0,\\infty)$ \\citep{Daskalakis:2013vn} or identical Pareto distributions with tail index parameters $\\alpha\\geq 1\/2$ \\citep{Hart:2012uq}. For more than two items, optimal results are only known for uniform values over the unit interval~\\citep{gk2014}, and due to the difficulty of exact solutions, most of the work focuses on showing approximation guarantees for simple selling mechanisms~\\citep{Hart:2012uq,Li:2013ty,Babaioff:2014ys,g2014,Bateni:2014ph,Rubinstein:2015kx}. This difficulty is further supported by the complexity ($\\#P$-hardness) results of~\\citet{Daskalakis:2012fk}.
It is important to point out that even for two items \\emph{we know of no general framework of simple, closed-form conditions under which optimality can be extracted when the item distributions are given as input, in the case when these are not necessarily identical.} This is our goal in the current paper. \n \n\\paragraph{Our contribution}\nWe introduce general but simple and clear, closed-form distributional conditions that can guarantee optimality and immediately give the form of the revenue-maximizing selling mechanism (its payment and allocation rules), for the setting of two goods with values distributed over bounded intervals (Theorem~\\ref{th:characterization_main}). For simplicity and a clearer exposition we study distributions supported over the real unit interval $[0,1]$. By scaling, the results generalize immediately to intervals that start at $0$, but more work would be needed to generalize them to arbitrary intervals. We use the closed forms to get optimal solutions for a wide class of distributions satisfying certain simple analytic assumptions (Theorem~\\ref{th:characterization_iid} and Sect.~\\ref{sec:non-iid}). As useful examples, we provide exact solutions for families of monomial ($\\propto x^c$) and exponential ($\\propto e^{-\\lambda x}$) distributions (Corollaries~\\ref{th:optimal_two_power} and \\ref{th:optimal_two_expo} and Sect.~\\ref{sec:non-iid}), and also near-optimal results for power-law ($\\propto (x+1)^{-\\alpha}$) distributions (Sect.~\\ref{sec:approximate_convex_fail}).
This last approximation is an application of a more general result (Theorem~\\ref{th:two_iid_approx}) involving the relaxation of some of the conditions for optimality in the main Theorem~\\ref{th:characterization_main}; the ``solution'' one gets in this new setting might not always correspond to a feasible selling mechanism; however, it still provides an upper bound on the optimal revenue as well as hints as to how to design a well-performing mechanism, by ``convexifying'' it into a feasible mechanism (Sect.~\\ref{sec:approximate_convex_fail}).\n\nParticularly for the family of monomial distributions it turns out that the optimal mechanism is a very simple deterministic mechanism that offers to the buyer a menu of size just $4$ (using the menu-complexity notion of Hart and Nisan \\citep{Hart:2012ys,Wang:2013ab}): fixed prices for each one of the two items and for their bundle, as well as the option of not buying any of them. For other distributions studied in the current paper randomization is essential for optimality, as is generally expected in such problems of multidimensional revenue maximization (see e.g.~\\citep{Hart:2012zr,Pavlov:2011fk,Daskalakis:2013vn}). For example, this is the case for two i.i.d. exponential distributions over the unit interval $[0,1]$, which gives the first such example where determinism is suboptimal even for regular\\footnote{A probability distribution $F$ is called \\emph{regular} if $t-\\frac{1-F(t)}{f(t)}$ is increasing. This quantity is known as the \\emph{virtual valuation}.} i.i.d.\\ items.
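The regularity notion in the footnote above can be made concrete for the distribution in question: for an exponential density with rate $\\lambda$ truncated to $[0,1]$, the virtual valuation $t-\\frac{1-F(t)}{f(t)}$ has the closed form $t-(1-e^{-\\lambda(1-t)})/\\lambda$, which is strictly increasing. A small numeric sketch of ours (illustrative only, not part of the paper):

```python
import math

def phi(t, lam=1.0):
    """Virtual valuation t - (1 - F(t))/f(t) for an exponential density
    with rate lam truncated to [0, 1]; the normalising constant cancels:
    (1 - F(t))/f(t) = (1 - exp(-lam*(1 - t)))/lam."""
    return t - (1.0 - math.exp(-lam * (1.0 - t))) / lam

# regularity: the virtual valuation is increasing over [0, 1]
ts = [i / 1000 for i in range(1001)]
vals = [phi(t) for t in ts]
assert all(a < b for a, b in zip(vals, vals[1:]))
```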
\nA point worth noting here is the striking difference between this result and previous results~\\citep{Daskalakis:2013vn,g2014} about i.i.d.\\ exponential distributions which have as support the entire $\\mathbb R_+$: the optimal selling mechanism there is the deterministic one that just offers the full bundle of both items.\n\nAlthough the conditions that the probability distributions must satisfy are quite general, they leave out a large class of distributions. For example, they do not apply to power-law distributions with parameter $\\alpha>2$. In other words, this work goes some way towards the complete solution for arbitrary distributions for two items, but the general problem is still open. In this paper, we opted for simple conditions rather than full generality, but we believe that extensions of our method can generalize significantly the range of distributions; we expect that a proper ``ironing'' procedure will enable our technique to resolve the general problem for two items.\n\n\\paragraph{Techniques}\nThe main result of the paper (Theorem~\\ref{th:characterization_main}) is proven by utilizing the \\emph{duality} framework of~\\citep{gk2014} for revenue maximization, and in particular using complementarity: the optimality of the proposed selling mechanism is shown by verifying the existence of a dual solution together with which it satisfies the required complementary slackness conditions of the duality formulation. Constructing these dual solutions explicitly seems to be a very challenging task and in fact there might not even be a concise way to do it, especially in closed-form. So instead we just prove the existence of such a dual solution, using a \\emph{max-flow min-cut} argument as main tool (Lemma~\\ref{lemma:coloring}, Fig.~\\ref{fig:flows_graph}). This is, in a way, an abstraction of a technique followed in~\\citep{gk2014} for the case of uniform distributions which was based on Hall's theorem for bipartite matchings.
Since here we are dealing with general and non-identical distributions, this kind of refinement is essential and non-trivial, and in fact forms the most technical part of the paper. Our approach has a strong geometric flavor, enabled by introducing the notion of the \\emph{deficiency} of a two-dimensional body (Definition~\\ref{def:deficiency}, Lemma~\\ref{lemma:no_positive_def}), which is inspired by classic matching theory~\\citep{Ore:1955fk,Lovasz:1986qf}. \n\n\\subsection{Model and Notation}\nWe study a two-good monopoly setting in which a seller deals with a buyer who has values $x_1, x_2\\in I$ for the items, where $I=[0,1]$. The seller has only an incomplete knowledge of the buyer's preference, in the form of two independent distributions (with densities) $f_1$, $f_2$ over $I$ from which $x_1$ and $x_2$ are drawn, respectively. The cdf of $f_j$ will be denoted by $F_j$. As in the seminal work of~\\citet{Myerson:1981aa}, the density functions will be assumed to be absolutely continuous and positive.\nWe will also use vector notation $\\vecc x=(x_1,x_2)$. For any item $j\\in\\{1,2\\}$, index $-j$ will refer to the complementary item, that is $3-j$, and as is standard in game theory $\\vecc x_{-j}=x_{-j}$ will denote the remainder of vector $\\vecc x$ if the $j$-th coordinate is removed, so $\\vecc x=(x_j,x_{-j})$ for any $j=1,2$.\n\nThe seller's goal is to design a selling mechanism that will maximize his revenue.
Without loss\\footnote{This is due to the celebrated Revelation Principle~\\citep{Myerson:1981aa}.} we can focus on direct-revelation mechanisms: the bidder will be asked to submit bids $b_1,b_2$ and the mechanism consists simply of an allocation rule $a_1,a_2:I^2\\to I$ and a payment function $p:I^2\\to\\mathbb R_+$ such that $a_j(b_1,b_2)$ is the probability of item $j$ being sold to the buyer (notice how we allow for randomized mechanisms, i.e.~lotteries) and $p(b_1,b_2)$ is the payment that the buyer expects to pay; it is easier to consider the expected payment for all allocations, rather than individual payments that depend on the allocation of items. The reason why the bids $b_j$ are denoted differently than the original values $x_j$ for the items is that, since the bidder is a rational and selfish agent, she might lie and misreport $b_j\\neq x_j$ if this is to increase her personal gain given by the quasi-linear \\emph{utility} function \n\\begin{equation}\n\\label{eq:utility}\nu(\\vecc b;\\vecc x)\\equiv a_1(\\vecc b) x_1+a_2(\\vecc b) x_2-p(\\vecc b),\n\\end{equation}\nthe expected happiness she will receive from the mechanism minus her payment.
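As a concrete toy instance of such a direct-revelation mechanism (our own illustration with an arbitrary bundle price, not a mechanism studied in the paper), the following sketch implements an allocation rule, a payment function and the utility \\eqref{eq:utility}, and checks by brute force that truthful reporting maximizes the buyer's utility and never harms her:

```python
# Toy direct-revelation mechanism: sell the full bundle at a fixed,
# arbitrary price P (an illustration of eq. (eq:utility) only).
P = 0.8

def outcome(b1, b2):
    """Allocation probabilities (a1, a2) and expected payment on bids."""
    if b1 + b2 >= P:
        return 1.0, 1.0, P     # both items allocated, pay the bundle price
    return 0.0, 0.0, 0.0       # no sale, no payment

def utility(b1, b2, x1, x2):
    a1, a2, pay = outcome(b1, b2)
    return a1 * x1 + a2 * x2 - pay

# brute-force check on a grid of values and bids
grid = [i / 20 for i in range(21)]
for x1 in grid:
    for x2 in grid:
        truthful = utility(x1, x2, x1, x2)
        best = max(utility(b1, b2, x1, x2) for b1 in grid for b2 in grid)
        assert best <= truthful + 1e-12   # no misreport beats the truth
        assert truthful >= 0.0            # participating never hurts
```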
\nThus, we will demand our selling mechanisms to satisfy the following standard properties: \n\\begin{itemize}\n\\item \\emph{Incentive Compatibility (IC)}, also known as truthfulness, saying that the player would have no incentive to misreport and manipulate the mechanism, i.e.\\ her utility is maximized by truth-telling: $u(\\vecc b;\\vecc x)\\leq u(\\vecc x;\\vecc x)$. \n\\item \\emph{Individual Rationality (IR)}, saying that the buyer cannot harm herself just by truthfully participating in the mechanism: $u(\\vecc x;\\vecc x)\\geq 0$.\n\\end{itemize}\nIt turns out the critical IC property comes without loss\\footnote{Also due to the Revelation Principle.} for our revenue-maximization objective, so from now on we will only consider truthful mechanisms, meaning we can also relax the notation $u(\\vecc b;\\vecc x)$ to just $u(\\vecc x)$.\n\nThere is a very elegant and helpful analytic characterization of truthfulness, going back to~\\citet{Rochet:1985aa} (for a proof see e.g.~\\citep{Hart:2012uq}), which states that the player's utility function must be \\emph{convex} and that the allocation probabilities are simply given by the utility's derivatives, i.e.\\ $\\partial u(\\vecc x)\/\\partial x_j=a_j(\\vecc x)$. Taking this into consideration and rearranging~\\eqref{eq:utility} with respect to the payment, we define\n$$\n\\mathcal R_{f_1,f_2}(u)\\equiv \\int_0^1\\int_0^1\\left(\\frac{\\partial u(\\vecc x)}{\\partial x_1}x_1+\\frac{\\partial u(\\vecc x)}{\\partial x_2}x_2-u(\\vecc x) \\right)f_1(x_1)f_2(x_2)\\,dx_1\\,dx_2\n$$\nfor every absolutely continuous function $u:I^2\\longrightarrow\\mathbb R_+$. If $u$ is convex with partial derivatives in $[0,1]$ then $u$ is a valid utility function and \\emph{$\\mathcal R_{f_1,f_2}(u)$ is the expected revenue of the seller under the mechanism induced by $u$}.
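As a numeric illustration of this functional (a sketch of ours, not from the paper): for two i.i.d.\\ uniform values and the full-bundle utility $u(\\vecc x)=\\max\\sset{0,x_1+x_2-p}$, the integrand reduces to $p$ on the sale region, so $\\mathcal R(u)$ equals the price times the sale probability, $p(1-p^2/2)$ for $p\\leq 1$:

```python
# Midpoint-rule evaluation of R_{f1,f2}(u) for u(x) = max(0, x1+x2-p)
# under two i.i.d. uniform values on [0,1]; P = 0.8 is arbitrary.
P = 0.8
N = 400
H = 1.0 / N

rev = 0.0
for i in range(N):
    for k in range(N):
        x1, x2 = (i + 0.5) * H, (k + 0.5) * H
        ind = 1.0 if x1 + x2 > P else 0.0      # both partials of u
        u = max(0.0, x1 + x2 - P)
        rev += (ind * (x1 + x2) - u) * H * H   # densities f1 = f2 = 1

# closed form: price times sale probability, p * (1 - p^2/2) for p <= 1
assert abs(rev - P * (1 - P * P / 2)) < 1e-2
```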
Let $\\ensuremath{\\text{\\rm\\sc Rev}}(f_1,f_2)$ denote the best possible such revenue, i.e.\\ the supremum of $\\mathcal R_{f_1,f_2}(u)$ when $u$ ranges over the space of all feasible utility functions over $I^2$. So the problem we want to deal with in this paper is exactly that of $\\sup_u \\mathcal R_{f_1,f_2}(u)$.\n\nWe now present the condition on the probability distributions which will enable our technique to provide a closed-form of the optimal auction.\n\n\\begin{assumption}\n\\label{assume:upwards_def}\n\\label{assume:regularity}\nThe probability distributions $f_1,f_2$ are such that functions $h_{f_1,f_2}(\\vecc x)-f_2(1)f_1(x_1)$ and $h_{f_1,f_2}(\\vecc x)-f_1(1)f_2(x_2)$ are nonnegative, where\n\\begin{equation}\n\\label{eq:def_h}\nh_{f_1,f_2}(\\vecc x)\\equiv 3 f_1(x_1)f_2(x_2)+x_1f_1'(x_1)f_2(x_2)+x_2f'_2(x_2)f_1(x_1).\n\\end{equation}\nFunction $h_{f_1,f_2}$ will also be assumed to be absolutely continuous with respect to each of its coordinates.\n\\end{assumption}\nWe will drop the subscript $f_1,f_2$ in the above notations whenever it is clear which distributions we are referring to. Assumption~\\ref{assume:upwards_def} is a slightly stronger condition than $h(\\vecc x)\\geq 0$ which is a common regularity assumption in the economics literature for multidimensional auctions with $m$ items: $(m+1)f(\\vecc x)+\\nabla f(\\vecc x)\\cdot \\vecc x\\geq 0$, where $f$ is the joint distribution for the item values (see e.g.~\\citep{Manelli:2006vn,Pavlov:2011fk,McAfee:1988nx}). In fact, \\citet{Manelli:2006vn} make the even stronger assumption that for each item $j$, $x_j f_j(x_j)$ is an increasing function. Even more recently, that assumption has also been deployed by \\citet{Wang:2013ab} in a two-item setting as one of their sufficient conditions for the existence of optimal auctions with small-sized menus. 
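Assumption~\\ref{assume:upwards_def} is easy to verify numerically for concrete densities. As an illustrative sketch of ours (not from the paper), take two i.i.d.\\ exponential densities with rate $1$ truncated to $[0,1]$, for which $f'=-f$ and hence $h(\\vecc x)=f_1(x_1)f_2(x_2)(3-x_1-x_2)$:

```python
import math

Z = 1.0 - math.exp(-1.0)            # normalising constant over [0,1]
f  = lambda t: math.exp(-t) / Z      # truncated exponential density, rate 1
df = lambda t: -f(t)                 # its derivative

def h(x1, x2):
    """The function h of eq. (eq:def_h) for these two i.i.d. densities."""
    return 3*f(x1)*f(x2) + x1*df(x1)*f(x2) + x2*df(x2)*f(x1)

# Assumption: h(x) - f2(1) f1(x1) >= 0 and h(x) - f1(1) f2(x2) >= 0;
# equality is attained at the corner (1,1), hence a rounding tolerance.
grid = [i / 100 for i in range(101)]
TOL = -1e-12
assert all(h(a, b) - f(1.0) * f(a) >= TOL for a in grid for b in grid)
assert all(h(a, b) - f(1.0) * f(b) >= TOL for a in grid for b in grid)
```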
It has a strong connection with the standard single-dimensional regularity condition of~\\citet{Myerson:1981aa}, since for $m=1$ condition $h(\\vecc x)\\geq 0$ gives that $f(x)\\left( x-\\frac{1-F(x)}{f(x)} \\right)$ is increasing, thus ensures the single-crossing property of the virtual valuation function (see also the discussion in \\citep[Sect.~2]{Manelli:2006vn}). \n\nStrengthening the regularity condition $h(\\vecc x)\\geq 0$ to that of Assumption~\\ref{assume:upwards_def} is essentially only used \nas a technical tool within the proof of Lemma~\\ref{lemma:no_positive_def}, and\nas a matter of fact we don't really need it to hold in the entire unit box $I^2$ but just in a critical sub-region $D_{1,2}$ which corresponds to the valuation subspace where both items are sold with probability $1$ (see Fig.~\\ref{fig:Exp_Uniform} and Sect.~\\ref{sec:partition_optimal}). As mentioned earlier in the Introduction, we introduce these technical conditions in order to simplify our exposition and reinforce the clarity of the techniques, but we believe that a proper ``ironing''~\\citep{Myerson:1981aa} process can probably bypass these restrictions and generalize our results.\nThe critical Assumption~\\ref{assume:upwards_def} is of course satisfied by all distributions considered in the results of this paper, namely monomial $\\propto x^c$ for any power $c\\geq 0$ (Corollary~\\ref{th:optimal_two_power}), exponential $\\propto e^{-\\lambda x}$ with rates $\\lambda\\leq 1$ (Corollary~\\ref{th:optimal_two_expo}), power-law $\\propto (t+1)^{-\\alpha}$ with parameters $\\alpha\\leq 2$ (Example~\\ref{example:power-law}), as well as combinations of these (see Example~\\ref{example:uniform-expo}). However, there is still a large class of distributions not captured by Assumption~\\ref{assume:upwards_def} as it is, e.g.\\ exponential with rates larger than $1$, power-law with parameters greater than $2$ and some beta-distributions (take, for example, $\\propto x^2(1-x)^2$).
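For the last example the failure is easy to see numerically; in fact already the weaker regularity condition $h(\\vecc x)\\geq 0$ breaks down (a small sketch of ours, using the normalized Beta$(3,3)$ density $f(t)=30t^2(1-t)^2$):

```python
# Beta(3,3) density f(t) = 30 t^2 (1-t)^2 on [0,1]: since
# t f'(t)/f(t) = 2(1-2t)/(1-t) -> -infinity as t -> 1, the factor
# 3 + x1 f'/f + x2 f'/f in h turns negative near the upper corner.
f  = lambda t: 30 * t**2 * (1 - t)**2
df = lambda t: 30 * (2*t*(1-t)**2 - 2*t**2*(1-t))

def h(x1, x2):
    return 3*f(x1)*f(x2) + x1*df(x1)*f(x2) + x2*df(x2)*f(x1)

assert h(0.9, 0.9) < 0   # regularity, hence Assumption 1, fails here
```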
See Footnote~\\ref{foot:alter-assumption-monotone} for an alternative condition that can replace Assumption~\\ref{assume:upwards_def}.\n\\section{Sufficient Conditions for Optimality}\nThis section is dedicated to proving the main result of the paper:\n\\begin{theorem}\n\\label{th:characterization_main}\nIf there exist decreasing, concave functions $s_1,s_2:I\\to I$, with $s_1'(t),s_2'(t)> -1$ for all $t\\in I$, such that for almost every\\footnote{Everywhere except a subset of zero Lebesgue measure.} (a.e.) $x_1,x_2\\in I$\n\\begin{equation}\n\\label{eq:1slice_gen_functions}\n\\frac{s_1(x_2)f_1(s_1(x_2))}{1-F_1(s_1(x_2))} =2+\\frac{x_2f_2'(x_2)}{f_2(x_2)}\n\\quad\\text{and}\\quad\n\\frac{s_2(x_1)f_2(s_2(x_1))}{1-F_2(s_2(x_1))} =2+\\frac{x_1f_1'(x_1)}{f_1(x_1)}, \n\\end{equation}\nthen \nthere exists a constant $p\\in[0,2]$ such that \n\\begin{equation}\n\\label{eq:2slice_gen}\n\\int_{D}h(\\vecc x)\\,dx_1\\,dx_2 \n=f_1(1)+f_2(1)\n\\end{equation}\nwhere $D$ is the region of $I^2$ enclosed by curves\\footnote{See Fig.~\\ref{fig:Exp_Uniform}.} $x_1+x_2=p$, $x_1=s_1(x_2)$ and $x_2=s_2(x_1)$ and including point $(1,1)$, i.e.~$D=\\sset{\\vecc x\\in I^2\\fwh{x_1+x_2\\geq p\\lor x_1\\geq s_1(x_2) \\lor x_2\\geq s_2(x_1)}}$, \nand the optimal selling mechanism is given by the utility function\n\\begin{equation}\n\\label{eq:optimal_auction_gen}\nu(\\vecc x)=\\max\\sset{0,x_1-s_1(x_2),x_2-s_2(x_1),x_1+x_2-p}.\n\\end{equation}\nIn particular, if $p\\leq\\min\\sset{s_1(0),s_2(0)}$, then the optimal mechanism is the deterministic full-bundling with price $p$.
\n\\end{theorem}\n\nNotice that for any $s\\in I$ we have\n\\begin{align*}\n\\int_s^1h(\\vecc x)\\,dx_1 &=\\int_s^1 3f_1(x_1)f_2(x_2)+x_1f_1'(x_1)f_2(x_2)+x_2f_2'(x_2)f_1(x_1) \\,dx_1\\\\\n\t\t\t\t&=3f_2(x_2)(1-F_1(s))+f_2(x_2)\\int_s^1x_1f_1'(x_1)\\,dx_1+x_2f_2'(x_2)(1-F_1(s))\\\\\n\t\t\t\t&=3f_2(x_2)(1-F_1(s))+f_2(x_2)\\left(\\left[x_1f_1(x_1)\\right]_s^1-(1-F_1(s))\\right)+x_2f_2'(x_2)(1-F_1(s))\\\\\n\t\t\t\t&=2f_2(x_2)(1-F_1(s))+f_2(x_2)(f_1(1)-sf_1(s))+x_2f_2'(x_2)(1-F_1(s))\\\\\n\t\t\t\t&=(1-F_1(s))f_2(x_2)\\left[2+\\frac{x_2f_2'(x_2)}{f_2(x_2)}-\\frac{sf_1(s)}{1-F_1(s)} \\right] +f_1(1)f_2(x_2)\n\\end{align*}\nwhich means that an equivalent way of looking at~\\eqref{eq:1slice_gen_functions} is, more simply, by\n\\begin{equation}\n\\label{eq:1slice_gen_integrals}\n\\int_{s_1(x_2)}^1h(\\vecc x)\\,dx_1=f_1(1)f_2(x_2)\n\\quad\\text{and}\\quad\n\\int_{s_2(x_1)}^1h(\\vecc x)\\,dx_2=f_2(1)f_1(x_1).\n\\end{equation}\nThis also means that~\\eqref{eq:1slice_gen_integrals} can take the place of~\\eqref{eq:1slice_gen_functions} in the statement of Theorem~\\ref{th:characterization_main} whenever this gives an easier way to solve for functions $s_1$ and $s_2$.\n\\subsection{Partitioning of the Valuation Space}\n\\label{sec:partition_optimal}\n\\begin{figure}\n\\centering\n\\includegraphics[width=10cm]{Figures\/Exp_Uniform.pdf}\n\\caption{\\footnotesize The valuation space partitioning of the optimal selling mechanism for two independent items, one following a uniform distribution and the other an exponential with parameter $\\lambda=1$. Here $s_1(t)=(2-t)\/(3-t)$, $s_2(t)=2-W(2e)\\approx 0.625$ and $p\\approx 0.787$. 
In region $D_{1}$ (light grey) item $1$ is sold deterministically and item $2$ with a probability of $-s_1'(x_2)$, in $D_{2}$ (light grey) only item $2$ is sold and region $D_{1,2}$ (dark grey) is where the full bundle is sold deterministically, for a price of $p$.}\n\\label{fig:Exp_Uniform}\n\\end{figure}\nDue to the fact that the derivatives of functions $s_j$ in Theorem~\\ref{th:characterization_main} are above $-1$, each curve $x_1=s_1(x_2)$ and $x_2=s_2(x_1)$ can intersect the full-bundle line $x_1+x_2=p$ at most at a single point. So let $x_2^*=x_2^*(p), x_1^*=x_1^*(p)$ be the coordinates of these intersections, respectively, i.e.~$s_1(x_2^*)=p-x_2^*$ and $s_2(x_1^*)=p-x_1^*$. If such an intersection does not exist, just define $x_2^*=0$ or $x_1^*=0$.\n\nThe construction and the optimal mechanism given in Theorem~\\ref{th:characterization_main} then give rise to the following partitioning of the valuation space $I^2$ (see Fig.~\\ref{fig:Exp_Uniform}):\n\\begin{itemize}\n\\item Region $\\bar D=I^2\\setminus D$ where no item is allocated\n\\item Region $D_1=\\sset{\\vecc x\\in I^2\\fwh{x_1\\geq s_1(x_2)\\land x_2\\leq x_2^*}}$ where item $1$ is sold with probability $1$ and item $2$ with probability $-s_1'(x_2)$ for a price of $s_1(x_2)-x_2s_1'(x_2)$\n\\item Region $D_2=\\sset{\\vecc x\\in I^2\\fwh{x_2\\geq s_2(x_1)\\land x_1\\leq x_1^*}}$ where item $2$ is sold with probability $1$ and item $1$ with probability $-s_2'(x_1)$ for a price of $s_2(x_1)-x_1s_2'(x_1)$\n\\item Region $D_{1,2}=D\\setminus(D_1\\union D_2)=\\sset{\\vecc x\\in I^2\\fwh{x_1+x_2\\geq p \\land x_1\\geq x_1^* \\land x_2\\geq x_2^*}}$ where both items are sold deterministically in a full bundle of price $p$.\n\\end{itemize}\n\nUnder this decomposition, by \\eqref{eq:1slice_gen_integrals}:\n$$\n\\int_{D_{1}}h(\\vecc x)\\,dx_1\\,dx_2=\\int_{0}^{x_2^*}\\int_{s_1(x_2)}^1h(\\vecc x)\\,dx_1\\,dx_2=f_1(1)F_2(x_2^*)\n$$\nso expression~\\eqref{eq:2slice_gen} can be written equivalently
as\n\\begin{equation}\n\\label{eq:2slice_gen_bundle_region}\n\\int_{D_{1,2}}h(\\vecc x)\\,dx_1\\,dx_2 \n= f_1(1)(1-F_2(x_2^*))+f_2(1)(1-F_1(x_1^*)).\n\\end{equation}\n\\subsection{Duality}\n\\label{sec:duality}\nThe major underlying tool to prove Theorem~\\ref{th:characterization_main} will be the duality framework of~\\citep{gk2014}. For completeness we briefly present here the formulation and key aspects, and the interested reader is referred to the original text for further details. \n\nRemember that the revenue optimization problem we want to solve here is to maximize $\\mathcal R(u)$ over the space of all convex functions $u:I^2\\longrightarrow\\mathbb R_+$ with\n\\begin{equation}\n\\label{eq:allocs_probs_01}\n0\\leq \\frac{\\partial u(\\vecc x)}{\\partial x_j}\\leq 1,\\qquad j=1,2,\n\\end{equation}\nfor a.e. $\\vecc x\\in I^2$. First we relax this problem by dropping the convexity assumption and replacing it with (absolute) continuity. We also drop the lower bound in~\\eqref{eq:allocs_probs_01}. Then this new relaxed program is dual to the following: minimize $\\int_0^1\\int_0^1 z_1(\\vecc x)+z_2(\\vecc x)\\,d\\vecc x$ where the new dual variables $z_1,z_2:I^2\\longrightarrow\\mathbb R_+$ are such that $z_j$ is (absolutely) continuous with respect to its $j$-coordinate and the following conditions are satisfied for all $x_1,x_2\\in I$:\n\\begin{align}\nz_j(0,x_{-j}) &=0, &&j=1,2, \\label{eq:dual_const_1}\\\\\nz_j(1,x_{-j}) &\\geq f_j(1)f_{-j}(x_{-j}), &&j=1,2, \\label{eq:dual_const_2}\\\\\n\\frac{\\partial z_1(\\vecc x)}{\\partial x_1}+\\frac{\\partial z_2(\\vecc x)}{\\partial x_2} &\\leq 3 f_1(x_1)f_2(x_2)+ x_1f_1'(x_1)f_2(x_2)+x_2f_1(x_1)f_2'(x_2).\\label{eq:dual_const_3}\n\\end{align}\nWe will refer to the first optimization problem, where $u$ ranges over the relaxed space of continuous, nonnegative functions with derivatives at most $1$, as the \\emph{primal program} and to the second as the \\emph{dual}.
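Before turning to the duals, the quantities appearing in Theorem~\\ref{th:characterization_main} can be sanity-checked on the simplest instance, two i.i.d.\\ uniform items: there $f\\equiv 1$, $F(t)=t$, $f'\\equiv 0$ and $h\\equiv 3$, so \\eqref{eq:1slice_gen_functions} forces the constant curves $s_1=s_2=2/3$ and \\eqref{eq:2slice_gen} reduces to $\\operatorname{area}(D)=2/3$. A short numeric sketch of ours recovers the known optimal bundle price $(4-\\sqrt 2)/3$ for this instance:

```python
import math

# Two i.i.d. uniform items: f = 1, F(t) = t, f' = 0, h = 3, f1(1) = f2(1) = 1.
# Eq. (eq:1slice_gen_functions): s f(s)/(1 - F(s)) = 2, i.e. s/(1-s) = 2,
# giving the constant curves s1 = s2 = 2/3 (so s' = 0 > -1, as required).
s = 2.0 / 3.0
assert abs(s / (1 - s) - 2.0) < 1e-12

# Eq. (eq:2slice_gen): integral of h = 3 over D equals 2, i.e. area(D) = 2/3;
# equivalently the excluded region -- the square [0, s]^2 minus the corner
# cut off by x1 + x2 >= p -- has area 1/3.  For s <= p <= 2s:
area_excluded = lambda p: s * s - (2 * s - p) ** 2 / 2

lo, hi = s, 2 * s                      # bisect for area_excluded(p) = 1/3
for _ in range(60):
    mid = (lo + hi) / 2
    if area_excluded(mid) < 1 / 3:
        lo = mid
    else:
        hi = mid

# recovers the known optimal bundle price for two uniform items
assert abs(lo - (4 - math.sqrt(2)) / 3) < 1e-9
```

With these values $p\\approx 0.862>s_1(0)=2/3$, so the resulting mechanism prices each item separately at $2/3$ and the bundle at $(4-\\sqrt 2)/3$, matching the partitioning of Sect.~\\ref{sec:partition_optimal}.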
Intuitively, every dual solution $z_j$ must start at zero and grow all the way up to $f_j(1)f_{-j}(x_{-j})$ while travelling in interval $I$, in a way that the sum of the rate of growth of both $z_1$ and $z_2$ is never faster than the right hand side of~\\eqref{eq:dual_const_3}.\nIn~\\citep{gk2014} it is proven that indeed these two programs satisfy both weak duality, i.e.~for any feasible $u,z_1,z_2$ we have\n$$\n\\mathcal R(u)\\leq \\int_{0}^1\\int_0^1 z_1(\\vecc x)+z_2(\\vecc x)\\,d\\vecc x\n$$ \nas well as complementary slackness, in the following stronger form of $\\varepsilon$-complementarity: \n\n\\begin{lemma}[Complementarity]\\label{lemma:complementarity}\nIf $u,z_1,z_2$ are feasible primal and dual solutions, respectively, $\\varepsilon>0$ and the following complementarity constraints hold for a.e. $\\vecc x\\in I^2$,\n\\begin{align}\n u(\\vecc x) \\left( h(\\vecc x)\n -\\frac{\\partial z_1(\\vecc x)}{\\partial x_1}-\\frac{\\partial z_2(\\vecc x)}{\\partial x_2}\n \\right) &\\leq \\varepsilon f_1(x_1)f_2(x_2), \\label{eq:e_compl_2}\\\\\nu(1,x_{-j}) \\left( z_j(1, x_{-j})\n -f_j(1)f_{-j}(x_{-j}) \\right) &\\leq \\varepsilon f_j(1)f_{-j}(x_{-j}) \\label{eq:e_compl_3}, &&j=1,2,\\\\\n z_j(\\vecc x) \\left( 1 - \\frac{\\partial\n u(\\vecc x)}{\\partial x_j} \\right) &\\leq \\varepsilon f_1(x_1)f_2(x_2), &&j=1,2, \\label{eq:e_compl_4}\n\\end{align}\nwhere $h$ is defined in~\\eqref{eq:def_h}, then the values of the primal and dual programs differ by at most\n$7\\varepsilon$. In particular, if the conditions are satisfied\nwith $\\varepsilon=0$, both solutions are optimal.\n\\end{lemma} \n\nOur approach to proving Theorem~\\ref{th:characterization_main} will be to show the existence of a pair of dual solutions $z_1,z_2$ with respect to which the utility function $u$ given by the theorem indeed satisfies complementarity.
Notice here the existential character of our technique: our duality approach offers the advantage of needing just a proof of the existence of such duals, without having to explicitly describe them and compute their objective value in order to prove optimality, i.e.~that the primal and dual objectives are indeed equal. Also notice that the utility function $u$ given by Theorem~\\ref{th:characterization_main} is convex by construction, so if one shows optimality of $u$ in the relaxed setting, then $u$ must also be optimal among all feasible mechanisms.\n\nDefine function $W:I^2\\to\\mathbb R_+$ by\n$$\nW(\\vecc x)=\n\\begin{cases}\nh(\\vecc x), &\\text{if}\\;\\; \\vecc x\\in D,\\\\\n0, &\\text{otherwise},\n\\end{cases} \n$$\nwhere $D$ is defined in Sect.~\\ref{sec:partition_optimal} (see Fig.~\\ref{fig:Exp_Uniform}).\nIf one could decompose $W$ into functions $w_1,w_2:I^2\\to\\mathbb R_+$ such that\n\\begin{align}\nw_1(\\vecc x)+w_2(\\vecc x) &=W(\\vecc x)\\label{eq:Wdecomp_sum} \\\\\n\\int_0^1w_j(\\vecc x)\\,d x_j &= f_j(1)f_{-j}(x_{-j}) \\label{eq:Wdecomp1}, \\qquad j=1,2,\n\\end{align}\nfor all $\\vecc x\\in I^2$, and $w_j$ is almost everywhere continuous with respect to its $j$-th coordinate, then by defining \n$$\nz_j(\\vecc x)=\\int_0^{x_j} w_j(t,x_{-j})\\,dt\n$$\nwe'll have\n\\begin{align}\n\\frac{\\partial z_1(\\vecc x)}{\\partial x_1}+\\frac{\\partial z_2(\\vecc x)}{\\partial x_2} &=\n\\begin{cases}\n h(\\vecc x), & \\text{for}\\;\\; \\vecc x\\in D,\\\\\n 0, &\\text{otherwise},\n \\end{cases}\n \\label{eq:prop_dual_1}\n \\\\\nz_j(0,x_{-j}) &=0, && j=1,2, \\label{eq:prop_dual_2} \\\\\nz_j(1,x_{-j}) &= f_j(1)f_{-j}(x_{-j}), && j=1,2. \\label{eq:prop_dual_3} \n\\end{align}\nIf the requirements of Theorem~\\ref{th:characterization_main} hold, then it is fairly straightforward to get such a decomposition in certain regions. In particular, we can set $w_1=w_2=0$ in $I^2\\setminus D$, $w_1=W=h$ and $w_2=0$ in $D_1$ and $w_2=W=h$ and $w_1=0$ in $D_2$.
Then, by~\\eqref{eq:1slice_gen_integrals}, it is not difficult to see that indeed conditions~\\eqref{eq:Wdecomp_sum}--\\eqref{eq:Wdecomp1} are satisfied. However, \\emph{it is highly non-trivial how to create such a decomposition in the remaining region $D_{1,2}$} and that is what the proof of Lemma~\\ref{lemma:coloring} achieves, with the assistance of the geometric Lemma~\\ref{lemma:no_positive_def}, in the remainder of this section. This is the most technical part of the paper.\n\nIn any case, if we are able to get such a decomposition, by the previous discussion that would mean that functions $z_1,z_2:I^2\\to\\mathbb R_+$ are \\emph{feasible dual} solutions: it is trivial to verify that properties~\\eqref{eq:prop_dual_1}--\\eqref{eq:prop_dual_3} satisfy the dual constraints \\eqref{eq:dual_const_1}--\\eqref{eq:dual_const_3}. But most importantly, the \\emph{equalities} in properties~\\eqref{eq:prop_dual_1}--\\eqref{eq:prop_dual_3} and the way $w_1$ and $w_2$ are defined in regions $D_1$ and $D_2$ tell us something more: that this pair of solutions would satisfy complementarity with respect to the primal given in~\\eqref{eq:optimal_auction_gen}, whose allocation is analyzed in detail in Sect.~\\ref{sec:partition_optimal}, thus proving that this mechanism is optimal and thus establishing Theorem~\\ref{th:characterization_main}.\n\n\\subsection{Deficiency}\n\\label{sec:nodef}\nThe following notion will be the tool that gives a very useful geometric interpretation to the rest of the proof of Theorem~\\ref{th:characterization_main} and it will be critical in proving Lemma~\\ref{lemma:coloring}.
\n\\begin{definition}\n\\label{def:deficiency}\nFor any body $S\\subseteq I^2$ define its \\emph{deficiency} (with respect to distributions $f_1,f_2$) to be\n$$\n\\delta(S)\\equiv \\int_S h(\\vecc x)\\,d\\vecc x - f_2(1)\\int_{S_1}f_1(x_1)\\,dx_1-f_1(1)\\int_{S_2}f_2(x_2)\\,dx_2,\n$$\nwhere $S_1$, $S_2$ denote $S$'s projections to the $x_1$ and $x_2$ axis, respectively.\n\\end{definition}\n\\begin{lemma}\n\\label{lemma:no_positive_def}\nIf the requirements of Theorem~\\ref{th:characterization_main} hold, then no body $S\\subseteq D_{1,2}$ has positive deficiency.\n\\end{lemma}\n\\begin{proof}\nTo get to a contradiction, assume that there is a body $S\\subseteq D_{1,2}$ with $\\delta(S)>0$. \nFirst, we'll show that without loss $S$ can be assumed to be upwards closed. Intuitively, we'll show that one can push mass of $S$ to the right or upwards, without reducing its deficiency. By Assumption~\\ref{assume:upwards_def} function $h(\\vecc x)-f_2(1)f_1(x_1)$ is nonnegative. Then, if there exists a nonempty horizontal line segment $\\slice{S}{x_2}{t}$ of $S$ at some height $x_2=t$, then we can assume that this line segment fills the entire available horizontal space of $D_{1,2}$: if that was not the case, and there existed a small interval $[\\alpha,\\beta]\\times\\sset{t}$ that was not in $S$, then we could add it to $S$, not increasing the projection towards the $x_2$-axis (it is already covered by the other existing points at $x_2=t$), while the projection towards the $x_1$-axis is increased at most by $\\beta-\\alpha$, leading to a change of the overall deficiency of at least $\\int_{\\alpha}^{\\beta}h(\\vecc x)\\,dx_1-f_2(1)\\int_{\\alpha}^{\\beta}f_1(x_1)\\,dx_1$, which is nonnegative\\footnote{We must mention here that the assumption of the nonnegativity of $h(\\vecc x) -f_2(1)f_1(x_1)$ could be replaced by that of $h(\\vecc x)-f_2(1)f_1(x_1)$ being increasing with respect to $x_1$ and the argument would still carry through: we can move entire columns of $S$ to the right,
pushing elements horizontally; the projection towards axis $x_2$ again remains unchanged, and because of the monotonicity of $h(\\vecc x)-f_2(1)f_1(x_1)$, the overall deficiency will not decrease since we are integrating over higher values of $x_1$.\n\nThis means that the monotonicity of $h(\\vecc x)-f_{-j}(1)f_j(x_j)$ with respect to $x_j$ can replace its nonnegativity in the initial Assumption~\\ref{assume:upwards_def} (while still maintaining the regularity requirement of $h(\\vecc x)$ being nonnegative) without affecting the main results of this paper, namely Theorems \\ref{th:characterization_main}, \\ref{th:characterization_iid} and \\ref{th:two_iid_approx}.\n\\label{foot:alter-assumption-monotone}\n}.\n\nSo $S$ can be assumed to be the intersection of $D_{1,2}$ with a box, i.e.~$S=[t_1,1]\\times[t_2,1]\\inters D_{1,2}$, where $t_1\\geq x_1^*$ and $t_2\\geq x_2^*$. This also means that its projections are $S_1=[t_1,1]$ and $S_2=[t_2,1]$.\nNow consider the lowest horizontal slice $\\slice{S}{x_2}{t_2}$ of $S$. It obviously lies within $D_{1,2}$. But from condition~\\eqref{eq:1slice_gen_integrals} so do all horizontal line segments of the form $[s_1(x_2),1]$ for any $x_2\\in[x_2^*, t_2]$: $s_1(x_2)$ is decreasing, and less steeply than the line $-x_2+p$ which is the boundary of $D_{1,2}$. So, by adding all these segments to $S$ we won't increase the projections towards the $x_1$-axis (these are covered already by $\\slice{S}{x_2}{t_2}$, which has to be a superset of $[s_1(t_2),1]$, otherwise it would have a negative deficiency, see~\\eqref{eq:1slice_gen_integrals}) and the new projections towards the $x_2$-axis are dominated by the increase of the area of $S$ (these segments have nonnegative deficiency). So, $S$ can be assumed to project onto the entire boundaries $[x_1^*,1]$ and $[x_2^*,1]$ of $D_{1,2}$ and thus, since $h$ is nonnegative, $S$ can be assumed to fill the entire $D_{1,2}$ region.
But by the definition of price $p$ in Theorem~\ref{th:characterization_main}, $\delta(D_{1,2})=0$, which concludes the proof. \n\end{proof}\n\subsection{Dual Solution and Optimality}\n\label{sec:optimality}\nNotice that Theorem~\ref{th:characterization_main} asserts the existence of a full-bundling price in~\eqref{eq:2slice_gen}; this existence needs to be proven. Indeed,\nthe quantity $\int_Dh(\vecc x)\,d\vecc x$ continuously (weakly) increases as $p$ decreases, and for $p=0$ %\n\begin{align*}\n\int_Dh(\vecc x)\,d\vecc x &=\int_0^1\int_0^1 3f_1(x_1)f_2(x_2)+x_1f_1'(x_1)f_2(x_2)+x_2f_2'(x_2)f_1(x_1)\,dx_1\,dx_2\\\n\t\t&=3+(f_1(1)-1)+(f_2(1)-1)=1+f_1(1)+f_2(1)\n\t\t>f_1(1)+f_2(1)\n\end{align*}\nwhile for $p=\hat x_1+\hat x_2$, where $(\hat x_1,\hat x_2)$ is the unique point of intersection of the curves $x_2=s_1(x_1)$ and $x_1=s_2(x_2)$ in $I^2$ (such a point certainly exists because $s_1$ and $s_2$ are defined over the entire $I$), \n\begin{align*}\n\int_{D_{1,2}}h(\vecc x)\,d\vecc x &=\int_{\hat x_2}^1\int_{\hat x_1}^1 h(\vecc x)\,d\vecc x\n\t\t\leq \int_{\hat x_2}^1\int_{s_1(x_2)}^1 h(\vecc x)\,d\vecc x\n\t\t= \int_{\hat x_2}^1f_1(1)f_2(x_2)\,dx_2\\\n\t\t&= f_1(1)(1-F_2(\hat x_2))\n\t\t\leq f_1(1)(1-F_2(\hat x_2))+f_2(1)(1-F_1(\hat x_1)),\n\end{align*}\nthe first inequality holding because $h$ is nonnegative and $s_1(x_2)\leq s_1(\hat x_2)=\hat x_1$ ($s_1$ is decreasing), and the second equality by substituting~\eqref{eq:1slice_gen_integrals}, and from~\eqref{eq:2slice_gen_bundle_region} this means that $\int_{D}h\,d\vecc x\leq f_1(1)+f_2(1)$.\n\nCombining the above, indeed there must be a $p\in[0,\hat x_1+\hat x_2]$ such that $\int_{D}h\,d\vecc x=f_1(1)+f_2(1)$. In fact, using this argument, if for $p=\min\sset{s_1(0),s_2(0)}$ it is $\int_{D}h\,d\vecc x<f_1(1)+f_2(1)$, then deterministically selling only the full bundle is the optimal mechanism.\n\begin{lemma}\n\label{lemma:coloring}\nFor any $\varepsilon>0$, there exist feasible dual solutions $z_1,z_2$ which are $\varepsilon$-complementary to the (primal) $u$ given by~\eqref{eq:optimal_auction_gen}. 
Therefore, the mechanism induced by $u$ is optimal.\n\end{lemma}\n\begin{proof}\nFollowing the discussion in Sect.~\ref{sec:duality}, we would like to decompose $W$ into the desired functions $w_1$ and $w_2$ within $D_{1,2}$, i.e.~such that they satisfy~\eqref{eq:Wdecomp_sum}--\eqref{eq:Wdecomp1}. In fact, we are aiming for $\varepsilon$-complementarity, so we can relax conditions~\eqref{eq:Wdecomp1} a bit: \n\begin{equation}\n\int_0^1w_j(\vecc x)\,dx_j \leq f_j(1)f_{-j}(x_{-j})+\varepsilon' \n\label{eq:relax_dual_boundary}\n\end{equation}\nTo be precise, the $\varepsilon$-complementarity of Lemma~\ref{lemma:complementarity} dictates that regarding these conditions we must show that for a.e.\ $\vecc x\in D_{1,2}$ property~\eqref{eq:e_compl_3} holds (conditions~\eqref{eq:e_compl_2} and~\eqref{eq:e_compl_4} are immediately satisfied with strong equality, by~\eqref{eq:prop_dual_1} and the fact that within $D_{1,2}$ both items are sold deterministically with probability $1$).\nBut since $u(\vecc x)\leq x_1+x_2 \leq 2$ for all $x_1,x_2\in I$ ($u$'s derivatives are at most $1$ with respect to any direction) and there also exists $M>0$ such that $f_1(1)f_2(x_2),f_2(1)f_1(x_1)\geq M$ for all $\vecc x\in D_{1,2}$ (the density functions are continuous over the closed interval $I$ and positive\footnote{We would like to note here that this is the only point in the paper where the fact that the densities are \emph{strictly} positive is used. As a matter of fact, a closer look will reveal that the proof just needs the property to hold in the closure of $D_{1,2}$ and not necessarily in the entire domain $I^2$. 
This allows the consideration of a wider family of feasible distributional priors, for example the monomial distributions of Corollary~\ref{th:optimal_two_power}: their densities $f(t)=(c+1)t^c$ may vanish at $t=0$ but these ``problematic'' points happen to lie outside the area $D_{1,2}$ where both items are sold.}), indeed~\eqref{eq:relax_dual_boundary} is enough to guarantee $\varepsilon$-complementarity if one ensures $\varepsilon'\leq \varepsilon M\/2$. So, the remainder of the proof is dedicated to constructing nonnegative, a.e.~continuous functions $w_1$ and $w_2$ over $D_{1,2}$, such that $w_1+w_2=h$ and~\eqref{eq:relax_dual_boundary} are satisfied.\n\nWe will do that by constructing an appropriate graph and recovering $w_1$ and $w_2$ as ``flows'' through its nodes, deploying the max-flow min-cut theorem to prove existence. To start, we pick an arbitrarily small $\delta>0$ and discretize $I^2$ into a lattice of $\delta$-size boxes $[(i-1)\delta,i\delta] \times [(j-1)\delta,j\delta]$, where $i,j=1,2,\dots,1\/\delta$, selecting $\delta$ such that $1\/\delta$ is an integer. Denote the intersection of such a box with $D_{1,2}$ by $B_{i,j}$. Also, let $B^1_i$ denote the projection of all nonempty $B_{i,j}$'s, as $j$ ranges, towards the $x_1$-axis and $B^2_j$ towards the $x_2$-axis, as $i$ ranges. Note that these are well-defined in this way, since by the geometry of region $D_{1,2}$ two nonempty $B_{i,j}$, $B_{i',j'}$ will have the same vertical projection if $i=i'$ and the same horizontal if $j=j'$. Also, it is simple to observe that all $B^1_i$ and $B^2_j$ are single-dimensional real intervals of length at most $\delta$.\n\nNow let's construct a directed graph $G=(V,E)$, together with a capacity function $c(e)$ for all edges $e\in E$. Initially, for any pair $(i,j)$ such that $B_{i,j}$ has positive (two-dimensional Lebesgue) measure we insert a node $v(i,j)$ in $V$. 
We'll call these nodes \\emph{internal} and we'll denote them by $V_o$. Also, for any internal node $v(i,j)$ we add nodes $v_1(i)$ and $v_2(j)$ corresponding to entire columns and rows, calling them \\emph{column} and \\emph{row} vertices and denoting them by $V_1$ and $V_2$, respectively. Finally there are two special nodes, a source $\\sigma$ and a destination $\\tau$. From the source to all internal nodes $v=v(i,j)$ we add an edge $(\\sigma,v)$ with capacity equal to the area of $B_{i,j}$ under $h$, i.e.~$c(\\sigma,v)=\\int_{B_{i,j}}h(\\vecc x)\\,d\\vecc x$. From any internal node $v=v(i,j)$ to its external column and row nodes $v_1=v_1(i)$ and $v_2=v_2(j)$ we add edges with capacities $c(v,v_1)=c(v,v_2)=c(\\sigma,v)$ equal to the internal node's incoming edge capacity from the source. Finally, for all external nodes $v_1(i)\\in V_1$ and $v_2(j)\\in V_2$ we add edges towards the destination $\\tau$ with capacities $c(v_1,\\tau)=f_2(1)\\int_{B^1_i}f_1(x_1)\\,dx_1$ and $c(v_2,\\tau)=f_1(1)\\int_{B^2_j}f_2(x_2)\\,dx_2$, respectively. The structure of graph $G$ is depicted in Fig.~\\ref{fig:flows_graph}.\n \\begin{figure}\n \\centering\n \\includegraphics[width=10cm]{Figures\/flows.pdf}\n \\caption{\\footnotesize The graph $G$ in the proof of Lemma~\\ref{lemma:coloring}. Every internal node $B_{i,j}$ of region $D_{1,2}$ can receive at most $\\int_{B_{i,j}}h(\\vecc x)\\,d\\vecc x$ flow from the source node $\\sigma$ and can send at most that amount to each one of its neighbouring external nodes $B_{i}^1$ and $B_{j}^2$. Every external node $B^1_i$ and $B^2_j$ is connected to the destination $\\tau$ with edges of capacity $f_2(1)\\int_{B_i^1}f_1(x_1)\\,dx_1$ and $f_1(1)\\int_{B_j^2}f_2(x_2)\\,dx_2$, respectively. 
Internal $B_{i,j}$'s are two-dimensional intersections of $\delta$-boxes with $D_{1,2}$, while the external ones, $B^1_i$ and $B^2_j$, are single-dimensional intervals of length at most $\delta$.}\n \label{fig:flows_graph}\n \end{figure}\n\nAs a first observation, notice that the maximum flow that can be sent from $\sigma$ within the graph is $\int_{D_{1,2}}h(\vecc x)\,dx_1\,dx_2$ and the maximum flow that $\tau$ can receive is \n$$f_2(1)\int_{x_1^*}^1f_1(x_1)\,dx_1+f_1(1)\int_{x_2^*}^1f_2(x_2)\,dx_2$$ (remember that the projection of $D_{1,2}$ to the $x_1$-axis is $[x_1^*,1]$ and to the $x_2$-axis $[x_2^*,1]$). But, from the way the entire region $D$ is constructed, we know that the above two quantities are equal (see~\eqref{eq:2slice_gen_bundle_region}). Let's denote this value by $\psi$. Next, we will prove that indeed one can create a feasible flow through $G$ that achieves that maximum value $\psi$. From the max-flow min-cut theorem, it is enough to show that the minimum $(\sigma,\tau)$-cut of $G$ has a value of at least $\psi$. To do that, we'll show that $(\sigma,V\setminus\{\sigma\})$ is a minimum cut of $G$.\n\nIndeed, let $(S,V\setminus S)$ be a $(\sigma,\tau)$-cut of $G$. First, suppose there is an edge $(v,v_j)$ crossing the cut, i.e.~$v\in S$ and $v_j\notin S$, with $v$ an internal node and $v_j$ external. Then, by moving $v$ to the other side of the cut, i.e.~removing it from $S$, we would create at most one new edge contributing to the cut, namely $(\sigma,v)$, but also destroy at least one edge $(v,v_j)$. Since the capacities of these two edges are the same, the overall effect would be to get a new cut with weakly smaller value. So, from now on we can assume that for all edges $(v,v_j)$ of $G$, if $v\in S$ then also $v_j\in S$. Under this assumption, if $S_{o}=V_{o}\inters S$ denotes the set of internal nodes belonging to the left side of the cut, for every $v\in S_{o}$ all edges $(v,v_j)$ adjacent to $v$ will not cross the cut. 
However, this means that all edges $(v_j,\tau)$, where $v_j\in N(v)$\footnote{$N(v)$ denotes the set of neighbours of $v$ in graph $G$.}, do contribute to the cut. But then, if we remove all nodes in $S_{o}$, together with their neighbouring external nodes $N(S_{o})$ at the other side of the cut, we increase the cut's value by at most $\sum_{v\in S_{o}}c(\sigma,v)$ and at the same time reduce it by at least $\sum_{v_j\in N(S_{o})}c(v_j,\tau)$. However, by the way graph $G$ is constructed, this corresponds to an overall change of the cut's value of at most \n$$\int_B h(\vecc x)\,d\vecc x -f_2(1) \int_{B_{1}}f_1(x_1)\,dx_1 -f_1(1)\int_{B_{2}}f_2(x_2)\,dx_2,$$ \nwhere $B=\union_{v(i,j)\in S_{o}}B_{i,j}$ is the region of $D_{1,2}$ covered by the boxes of nodes in $S_{o}$ and $B_1$, $B_2$ are the projections of this body to the horizontal and vertical axis, respectively. From Lemma~\ref{lemma:no_positive_def} this difference must be nonpositive, thus this change results in a cut of an even (weakly) smaller value. The above arguments show that indeed the cut that has only $\sigma$ remaining at its left side is a minimum one.\n\nSo, there must be a flow $\phi:E\longrightarrow\mathbb R_+$ that transfers a total value of $\psi$ through $G$. As we argued above though, by the construction of $G$, in order to achieve this value of $\psi$ the full capacity of \emph{all} edges $(\sigma, v)$ as well as that of all $(v_j,\tau)$ must be used. So, this flow $\phi$ manages to elegantly separate all incoming flow $\phi(\sigma,v(i,j))=\int_{B_{i,j}}h(\vecc x)\,d\vecc x$ towards an internal box of $D_{1,2}$, into a sum of flows $\phi(v(i,j),v_1(i))+\phi(v(i,j),v_2(j))$ towards its external neighbours. But this is exactly what we need in order to construct our feasible dual solution! For simplicity, denote this incoming flow $\phi(i,j)$, and the outgoing flows towards $v_2(j)$ and $v_1(i)$ by $\phi_1(i,j)$ and $\phi_2(i,j)$, respectively. 
Then, define the functions $w_1$, $w_2$ throughout $D_{1,2}$ by\n$$\nw_1(\vecc x)=\frac{\phi_1(i,j)}{\phi(i,j)}h(\vecc x)\qquad\text{and}\qquad w_2(\vecc x)=\frac{\phi_2(i,j)}{\phi(i,j)}h(\vecc x),\n$$\nwhere $B_{i,j}$ is the discretization box where point $\vecc x$ of $D_{1,2}$ belongs to. In that way, first notice that we achieve $w_1+w_2=h$. Secondly, functions $w_1$ and $w_2$ are almost everywhere continuous, since the values of the flows are constant within the boxes, and our discretization is finite. The only remaining property to prove is~\eqref{eq:relax_dual_boundary}. \n\nFix some height $x_2=\tilde x_2$ such that this horizontal line intersects $D_{1,2}$. We'll prove that \n$$\int_{0}^{1}w_1(x_1,\tilde x_2)\,dx_1-f_1(1)f_2(\tilde x_2)\leq \varepsilon'.$$ \nValue $\tilde x_2$ falls within some interval of the discretization, let $\tilde x_2\in[(\tilde j-1)\delta,\tilde j\delta]=B_{\tilde j}^2$. The average value of function $f_1(1)f_2(x_2)$ (with respect to $x_2$) within this interval is $$\frac{1}{\delta}f_1(1)\int_{B^2_{\tilde j}}f_2(x_2)\,dx_2=c(v_2(\tilde j),\tau)\/\delta$$ and the average value of $\int_0^1w_1(\vecc x)\,dx_1$ is \n$$\n\frac{1}{\delta}\int_{B^2_{\tilde j}}\int_0^1w_1(\vecc x)\,dx_1\,dx_2 = \frac{1}{\delta}\sum_i\int_{B_{i,\tilde j}}w_1(\vecc x)\,d\vecc x = \frac{1}{\delta}\sum_i\frac{\phi_1(i,\tilde j)}{\phi(i,\tilde j)}\int_{B_{i,\tilde j}}h(\vecc x)\,d\vecc x \n= \sum_i \phi_1(i,\tilde j)\/\delta.\n$$\nBut since the sum of the outgoing flows over any horizontal line of internal nodes of the graph (here $j=\tilde j$) must equal the outgoing flow of the corresponding external node (here $v_2(\tilde j)$), the above quantities are equal. 
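The construction in this proof can be mirrored numerically. The sketch below is our own toy instance (constant $h$ on a small lattice with hand-balanced sink capacities, not the paper's actual densities): it builds the graph, computes a maximum flow with Edmonds--Karp, and recovers $w_1,w_2$ from the flow ratios, checking that $w_1+w_2=h$ on every box. The share of $h$ routed towards a row node (whose sink capacity controls the $x_1$-integral of $w_1$) gives $w_1$, and the column share gives $w_2$.

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a capacity dict {(u, v): capacity}."""
    flow = defaultdict(float)
    adj = defaultdict(set)
    for (u, v) in cap:
        adj[u].add(v)
        adj[v].add(u)  # allow traversal of residual (backward) edges
    res = lambda u, v: cap.get((u, v), 0.0) - flow[(u, v)] + flow[(v, u)]
    total = 0.0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and res(u, v) > 1e-12:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total, flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res(u, v) for u, v in path)   # bottleneck capacity
        for u, v in path:                     # cancel backward flow first
            cancel = min(flow[(v, u)], b)
            flow[(v, u)] -= cancel
            flow[(u, v)] += b - cancel
        total += b

# Toy instance: n x n boxes, constant h; sink capacities chosen by hand so
# that total source capacity equals total sink capacity (the value psi).
n, h = 4, 2.0
delta = 0.5 / n
cap = {}
for i in range(n):
    for j in range(n):
        c = h * delta * delta                 # area of box B_{i,j} under h
        cap[("src", ("v", i, j))] = c         # sigma -> v(i,j)
        cap[(("v", i, j), ("row", j))] = c    # v(i,j) -> row node
        cap[(("v", i, j), ("col", i))] = c    # v(i,j) -> column node
for k in range(n):
    cap[(("row", k), "snk")] = 0.25 / n       # both sides sum to 0.5
    cap[(("col", k), "snk")] = 0.25 / n
psi, fl = max_flow(cap, "src", "snk")

def w12(i, j):
    """Recover (w1, w2) on box B_{i,j} from the flow ratios."""
    phi = fl[("src", ("v", i, j))]
    w1 = fl[(("v", i, j), ("row", j))] / phi * h
    w2 = fl[(("v", i, j), ("col", i))] / phi * h
    return w1, w2
```

On this balanced toy the maximum flow saturates every source edge, so the ratios are well defined and the decomposition is exact.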
Thus, by selecting the discretization parameter $\delta$ small enough, we can indeed make the values $\int_{0}^{1}w_1(x_1,\tilde x_2)\,dx_1$ and $f_1(1)f_2(\tilde x_2)$ $\varepsilon'$-close to each other\n\footnote{This should feel intuitively clear, and it relies on the uniform continuity of functions $f_2$ and $h$, but we also give a formal proof in Appendix~\ref{append:technical_flow_rest}.}.\n\end{proof}\n\n\n\section{The Case of Identical Items}\nIn this section we focus on the case of identically distributed values, i.e.~$f_1(t)=f_2(t)\equiv f(t)$ for all $t\in I$, and we provide clear and simple conditions under which the critical property~\eqref{eq:1slice_gen_functions} of Theorem~\ref{th:characterization_main} holds. \n\nFirst notice that in this case the regularity Assumption~\ref{assume:regularity} gives $3+\frac{x_1f'(x_1)}{f(x_1)}+\frac{x_2f'(x_2)}{f(x_2)}\geq 0$ a.e. in $I^2$ (since $f$ is positive) and thus $\frac{tf'(t)}{f(t)}\geq -\frac{3}{2}$ for a.e. $t\in I$. An equivalent way of writing this is that $t^{3\/2}f(t)$ is increasing, which interestingly is the complementary case of that studied by~\citet{Hart:2012uq} for two i.i.d. items: they show that when $t^{3\/2}f(t)$ is decreasing, then deterministically selling the full bundle is optimal.\n\n\begin{theorem}\n\label{th:characterization_iid}\nAssume that $G(t)=tf(t)\/(1-F(t))$ and $H(t)=tf'(t)\/f(t)$ give rise to well-defined, differentiable functions over $I$, $G$ being strictly increasing and convex, $H$ decreasing and concave, with $G+H$ increasing and $G(1)\geq 2+H(0)$. Then the requirements of Theorem~\ref{th:characterization_main} are satisfied. 
In particular \n$$s(t)=G^{-1}(2+H(t))$$\nand, if \n\\begin{equation}\n\\label{eq:two_iid_full_bundle_price}\n\\int_0^1\\int_0^1h(\\vecc x)\\,d\\vecc x-\\int_0^p\\int_0^{p-x_2}h(\\vecc x)\\,d\\vecc x-2f(1)\n\\end{equation}\nis nonpositive for $p=s(0)$ then the optimal selling mechanism is the one offering deterministically the full bundle for a price of $p$ being the root of \\eqref{eq:two_iid_full_bundle_price} in $[0,s(0)]$, otherwise the optimal mechanism is the one defined by the utility function\n$$\nu(\\vecc x)=\\max\\sset{0,x_1-s(x_2),x_2-s(x_1),x_1+x_2-p}\n$$\nwith $p=x^*+s(x^*)$, where $x^*\\in [0,s(0)]$ is the constant we get by solving \n\\begin{equation}\n\\label{eq:price_bundle_eq_iid}\n\\int_{x^*}^{s(x^*)}\\int_{s(x^*)+x^*-x_2}^1h(\\vecc x)\\,d\\vecc x+\\int_{s(x^*)}^1\\int_{x^*}^1h(\\vecc x)\\,d\\vecc x= 2f(1)(1-F(x^*)).\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nFunction $G$ is strictly monotone, thus invertible and has a range of $[G(0),G(1)]=[0,G(1)]\\supseteq [0,2+H(0)]$. By Assumption~\\ref{assume:regularity} and the previous discussion, it must be $tf'(t)\/f(t)\\geq -3\/2$, so $2+H(t)\\geq 1\/2>0$ for all $t\\in I$. Thus, $s(t)=G^{-1}(2+H(t))$ is well defined and furthermore it is decreasing, since $G$ is increasing and $H$ decreasing. Also, by the way $s$ is defined we get that for all $t$: $G(s(t))=2+H(t)$, which is exactly condition~\\eqref{eq:1slice_gen_functions} of Theorem~\\ref{th:characterization_main}. \n\nIt remains to be shown that $s$ is concave and that $s'(t)>-1$. From the definition of $s$, $s'(t)=H'(t)\/G'(s(t))$. Function $H$ is decreasing and concave, so $H'(t)$ is negative and decreasing, and function $G$ is increasing and convex and $s$ decreasing, so $G'(s(t))$ is positive and decreasing. Combining these we get that the ratio $H'(t)\/G'(s(t))$ is decreasing, proving that $s$ is concave. Finally, notice that since we are in a two item i.i.d. 
setting, the only part of curve $x_2=s(x_1)$ that matters and may appear in the utility of the resulting mechanism~\eqref{eq:optimal_auction_gen} is the one where $x_1\leq x_2$ (curves $x_2=s(x_1)$ and $x_1=s(x_2)$ will intersect on the line $x_1=x_2$), so we only have to show that $s'(t)>-1$ for $t\leq s(t)$. Indeed, in that case $G'(t)\leq G'(s(t))$, so $s'(t)=H'(t)\/G'(s(t))\geq H'(t)\/G'(t)$ and thus it is enough to show that $H'(t)+G'(t)\geq 0$ which we know holds since $H+G$ is assumed to be increasing.\n\n\end{proof}\n\n\n\n\begin{corollary}[Monomial Distributions]\n\label{th:optimal_two_power}\nThe optimal selling mechanism for two items with i.i.d.\ values from the family of distributions with densities $f(t)=(c+1)t^c$, $c\geq 0$, is deterministic. In particular, it offers each item for a price of $s=\sqrt[c+1]{\frac{c+2}{2c+3}}$ and the full bundle for a price of $p=s+x^*$, where $x^*$ is the solution to~\eqref{eq:price_bundle_eq_iid}.\n\end{corollary}\n\begin{proof}\nFor two monomial i.i.d.\ items with $f_1(t)=f_2(t)=(c+1)t^c$ we have $h(\vecc x)=(c+1)^2 (2 c+3) x_1^c x_2^c\geq 0$, thus $h(\vecc x)-f_2(1)f_1(x_1)=(c+1)^2 x_1^c \left((2 c+3) x_2^c-1\right)$ which is nonnegative for all $x_2\geq \sqrt[c]{1\/(2c+3)}\equiv \omega$. So, in order to make sure that Assumption~\ref{assume:upwards_def} is satisfied, it is enough to show that $x^*\geq\omega$ because then $D_{1,2}\subseteq [\omega,1]^2$. We'll soon show that this is indeed satisfied for all $c\geq 0$.\n\nApplying Theorem~\ref{th:characterization_iid} we compute: $G(t)=(c+1)t^{c+1}\/(1-t^{c+1})$ which is strictly increasing and convex in $I$ and $H(t)=c$ which is constant and thus decreasing and concave. Also, it is trivial to deduce that $G+H$ is increasing and $\lim_{t\to 1^{-}}G(t)=\infty>2+c=2+H(0)$. Then, it is valid to compute $G^{-1}(t)=\left(\frac{t}{t+c+1}\right)^{\frac{1}{c+1}}$ and thus $s(t)=G^{-1}(2+c)=\sqrt[c+1]{\frac{c+2}{2c+3}}$ which is constant. 
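As a quick numerical sanity check (separate from the proof), one can verify that the constant $s=\sqrt[c+1]{(c+2)\/(2c+3)}$ indeed satisfies the defining relation $G(s)=2+H(t)=2+c$ for several exponents $c$:

```python
def G(t, c):
    # G(t) = (c+1) t^{c+1} / (1 - t^{c+1}) for the monomial density f(t) = (c+1) t^c
    return (c + 1) * t ** (c + 1) / (1 - t ** (c + 1))

def s(c):
    # closed form s = ((c+2)/(2c+3))^{1/(c+1)}
    return ((c + 2) / (2 * c + 3)) ** (1 / (c + 1))

for c in (0.0, 1.0, 2.0, 3.5):
    assert abs(G(s(c), c) - (2 + c)) < 1e-9

# c = 0 is the uniform case, recovering the known per-item price 2/3
assert abs(s(0.0) - 2 / 3) < 1e-9
```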
\n\nRegarding the computation of the full-bundle price $p$, condition~\eqref{eq:price_bundle_eq_iid} gives rise to the quantity\n$$\n\int_{x^*}^{s}\int_{s+x^*-x_2}^1x_1^cx_2^c\,d\vecc x+\int_{s}^1\int_{x^*}^1x_1^cx_2^c\,d\vecc x- \frac{2}{c+1}(1-{x^*}^{c+1}),\n$$\nwhich, by plugging in $x^*=\omega$ and using the values of $s$ and $\omega$ (as functions of $c$), can be seen to be positive for all $c\geq 0$. So, by the discussion at the beginning of Sect.~\ref{sec:optimality} it can be deduced that the solution to~\eqref{eq:price_bundle_eq_iid} will be such that $x^*>\omega$. \n\end{proof}\nNotice that for $c=0$ the setting of Corollary~\ref{th:optimal_two_power} reduces to a setting with two uniformly distributed goods, and gives the well-known results of $s=2\/3$ and $p=(4-\sqrt{2})\/3$ (see e.g.~\citep{Manelli:2006vn}). For the linear distribution $f(t)=2t$, where $c=1$, we get $s=\sqrt{3\/5}$ and $p\approx1.091$.\n\n\begin{corollary}[Exponential Distributions]\n\label{th:optimal_two_expo}\nThe optimal selling mechanism for two items with exponentially i.i.d.\ values over $[0,1]$, i.e.\ having densities $f(t)=\lambda e^{-\lambda t}\/(1-e^{-\lambda})$, with $0<\lambda\leq 1$, is the one having $s(t)=\frac{1}{\lambda}\left[2-\lambda t-W\left(e^{2-\lambda-\lambda t}(2-\lambda t)\right)\right]$ and a price of\n$p=x^*+s(x^*)$ for the full bundle, where $x^*$ is the solution to~\eqref{eq:price_bundle_eq_iid}. Here $W$ is Lambert's product logarithm function\footnote{Function $W$ can be defined as the solution to $W(t)e^{W(t)}=t$.}.\n\end{corollary}\n\begin{proof}\nFor two i.i.d. 
exponentially distributed items with $f_1(t)=f_2(t)=\lambda e^{-\lambda t}\/(1-e^{-\lambda})$ we have \n$$\n\hspace{-0.5cm}\nh(\vecc x)-f_2(1)f_1(x_1)=\frac{\lambda^2}{\left(e^\lambda-1\right)^2} e^{2\lambda-\lambda (x_1+x_2)} (3-\lambda (x_1+x_2)-e^{\lambda (x_2-1)})\geq \frac{\lambda^2}{\left(e^\lambda-1\right)^2} e^{2\lambda-\lambda (x_1+x_2)} (2-\lambda (x_1+x_2))\geq 0\n$$ \nfor all $x_1,x_2\in I$, since $e^{\lambda(x_2-1)}\leq 1$ and $\lambda\leq 1$.\n\nApplying Theorem~\ref{th:characterization_iid} we compute: $G(t)=\lambda t\/(1-e^{-\lambda(1-t)})$ which is strictly increasing and convex in $I$ and $H(t)=-\lambda t$ which is decreasing and concave. Also, $G(t)+H(t)=\lambda te^{-\lambda(1-t)}\/(1-e^{-\lambda(1-t)})$ is increasing and $\lim_{t\to 1^{-}}G(t)=\infty>2=2+H(0)$. Then, it is valid to compute $G^{-1}(t)=t\/\lambda-W\left(t e^{t-\lambda}\right)\/\lambda$ and thus $s(t)=\frac{1}{\lambda}\left[2-\lambda t-W\left(e^{2-\lambda-\lambda t}(2-\lambda t)\right)\right]$.\n\end{proof}\n\nFor example, for $\lambda=1$ we get $s(t)=2-t-W\left(e^{1-t} (2-t)\right)$ and $p\approx 0.714$. \nInterestingly, to our knowledge this is the first example for an i.i.d.\ setting with values coming from a regular, continuous distribution over an interval $[0,b]$, where an optimal selling mechanism is \emph{not} deterministic. Also notice how this case of exponential i.i.d.\ items on a bounded interval is different from the one on $[0,\infty)$: by \citep{Daskalakis:2013vn,g2014} we know that at the unbounded case the optimal selling mechanism for two exponential i.i.d.\ items is simply the deterministic full-bundling, but in our case of the bounded $I$ this is not the case any more.\n\n\section{Non-Identical Items}\n\label{sec:non-iid}\n\nAn interesting aspect of the technique of Theorem~\ref{th:characterization_iid} is that it can readily be used also for non-identically distributed values. 
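The closed form of Corollary~\ref{th:optimal_two_expo} can likewise be verified numerically. The sketch below implements Lambert's $W$ via Newton iteration (our own helper, not a library call) and checks the defining relation $G(s(t))=2+H(t)=2-\lambda t$ for $\lambda=1$:

```python
import math

def lambert_w(x, tol=1e-14):
    # principal branch W(x) for x > 0, via Newton iteration on w * e^w = x
    w = math.log(1 + x)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

lam = 1.0

def s(t):
    # s(t) = (1/lam) * [2 - lam*t - W(e^{2 - lam - lam*t} * (2 - lam*t))]
    a = 2 - lam * t
    return (a - lambert_w(math.exp(a - lam) * a)) / lam

def G(t):
    # G(t) = lam * t / (1 - e^{-lam * (1 - t)})
    return lam * t / (1 - math.exp(-lam * (1 - t)))

for t in (0.0, 0.2, 0.5, 0.8):
    assert abs(G(s(t)) - (2 - lam * t)) < 1e-8
```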
One just has to define $G_j(t)\equiv tf_j(t)\/(1-F_j(t))$ and $H_j(t)=tf_j'(t)\/f_j(t)$ for both items $j=1,2$ and check again whether $G_1,G_2$ are strictly increasing and convex and $H_1,H_2$ decreasing and concave. Then, we can get $s_j(t)=G_j^{-1}(2+H_{-j}(t))$ and check if $s_j'(t)> -1$, and the price $p$ of the full bundle can be given by~\eqref{eq:2slice_gen}. Again, a quick check of whether full bundling is optimal is to see if for $p=\min\sset{s_1(0),s_2(0)}$ the expression $\int_0^1\int_0^1h(\vecc x)\,d\vecc x-\int_0^p\int_0^{p-x_2}h(\vecc x)\,d\vecc x-f_1(1)-f_2(1)$ is nonpositive.\n\n\begin{example}\n\label{example:uniform-expo}\nConsider two independent items, one with uniformly distributed value, $f_1(t)=1$, and one exponential, $f_2(t)=e^{-t}\/(1-e^{-1})$. Then we get that $s_1(t)=(2-t)\/(3-t)$, $s_2(t)=2-W(2e)\approx 0.625$ and $p \approx0.787$. The optimal selling mechanism offers either only item $2$ for a price of $s_2\approx 0.625$, or item $1$ deterministically and item $2$ with a probability of $-s_1'(x_2)$ for a price of $s_1(x_2)-x_2s_1'(x_2)$, or the full bundle for a price of $p\approx 0.787$. The allocation space of this mechanism is depicted in Fig.~\ref{fig:Exp_Uniform}.\n\end{example}\n\n\section{Approximate Solutions}\n\label{sec:approximate_convex_fail}\nIn the previous sections we developed tools that, under certain assumptions, can give a complete closed-form description of the optimal selling mechanism. However, remember that the initial primal-dual formulation upon which our analysis was based assumes a relaxed optimization problem. Namely, we dropped the convexity assumption of the utility function $u$. In the results of the previous sections this comes for free: the optimal solution to the relaxed program turns out to be convex anyways, as a result of the requirements of Theorem~\ref{th:characterization_main}. But what happens if that was not the case? 
The following tool shows that even in that case our results are still applicable and very useful, both for finding good upper bounds on the optimal revenue (Theorem~\ref{th:two_iid_approx}) and for designing almost-optimal mechanisms that have provably very good performance guarantees (Sect.~\ref{sec:convexification}).\n\begin{theorem}\n\label{th:two_iid_approx}\nAssume that all conditions of Theorem~\ref{th:characterization_main} are satisfied, except for the concavity of functions $s_1,s_2$. Then, the function $u$ given by that theorem might not be convex any more and thus not a valid utility function, but it generates an \emph{upper bound} to the optimal revenue, i.e.\ $\ensuremath{\text{\rm\sc Rev}}(f_1,f_2)\leq\mathcal R_{f_1,f_2}(u)$. In particular, this is the case if all the requirements of Theorem~\ref{th:characterization_iid} hold except the concavity of $H$.\n\end{theorem}\n\begin{proof}\nThe proof is a straightforward result of the duality framework (see Sect.~\ref{sec:duality}): By dropping only the concavity requirement of functions $s_1$ and $s_2$ but satisfying all the remaining conditions of Theorem~\ref{th:characterization_main}, we still construct an optimal solution to the pair of primal-dual programs, meaning that function $u$ produced in \eqref{eq:optimal_auction_gen} maximizes $\mathcal R_{f_1,f_2}(u)$ over the space of all functions $u: I^2\longrightarrow \mathbb R_+$ with partial derivatives in $[0,1]$ (see~\eqref{eq:allocs_probs_01}); the only difference is that $u$ might not be convex since $s_1,s_2$ might not be concave any more. The actual optimal revenue objective $\ensuremath{\text{\rm\sc Rev}}(f_1,f_2)$ has the extra constraint of $u$ being convex, thus, given that it is a maximization problem, it has to be that $\ensuremath{\text{\rm\sc Rev}}(f_1,f_2)\leq \mathcal R_{f_1,f_2}(u)$. 
Finally, it is easy to verify in the proof of Theorem~\ref{th:characterization_iid} that dropping just the concavity requirement for $H$ can only affect the concavity of functions $s_1,s_2$ and hence the convexity of $u$.\n\end{proof}\n\n\begin{example}[Power-Law Distributions]\n\label{example:power-law}\n An important class of distributions that falls into the description of Theorem~\ref{th:two_iid_approx} are the power-law distributions with parameters $0<\alpha\leq 2$. More specifically, these are the distributions having densities $f(t)=c\/(t+1)^\alpha$, with the normalization factor $c$ selected so that $\int_0^1f(t)\,dt=1$, i.e.~$c=(\alpha-1)\/(1-2^{1-\alpha})$. It is not difficult to verify that these distributions satisfy Assumption~\ref{assume:upwards_def}. For example, for $\alpha=2$ one gets $f(x)=2\/(x+1)^2$, the \emph{equal revenue} distribution shifted in the unit interval. For this we can compute via Theorem~\ref{th:two_iid_approx} that $s(t)=\frac{1}{2} \sqrt{5+2 t+t^2}-\frac{1}{2} (1+t)$ and $p\approx 0.665$, which gives an upper bound of $\mathcal R_{f,f}(u)\approx 0.383$ to the optimal revenue $\ensuremath{\text{\rm\sc Rev}}(f,f)$.\n\end{example}\n\n\subsection{Convexification}\n\label{sec:convexification}\nThe approximation results described in Theorem~\ref{th:two_iid_approx} can be used not only for giving upper bounds on the optimal revenue, but also as a \emph{design} technique for good selling mechanisms. Since the only deviation from a feasible utility function is the fact that function $s$ is not concave (and thus $u$ is not convex), why don't we try to ``convexify'' $u$, by replacing $s$ by a concave function $\tilde s$? If $\tilde s$ is ``close enough'' to the original $s$, by the previous discussion this would also result in good approximation ratios for the new, feasible selling mechanism. \n\nLet's demonstrate this by an example, using the equal revenue distribution $f(t)=2\/(t+1)^2$ of the previous example. 
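The closed form for $s$ in Example~\ref{example:power-law} can be checked numerically, along with the convexity of $s$ that the convexification idea exploits. Here $G(t)=2t\/(1-t^2)$ and $H(t)=-2t\/(1+t)$ are our own computations for $f(t)=2\/(t+1)^2$ (they do not appear in the text):

```python
import math

def s(t):
    # s(t) = (1/2) sqrt(5 + 2t + t^2) - (1/2)(1 + t), from the power-law example (alpha = 2)
    return 0.5 * math.sqrt(5 + 2 * t + t * t) - 0.5 * (1 + t)

def G(t):
    return 2 * t / (1 - t * t)

def H(t):
    return -2 * t / (1 + t)

# s satisfies the defining relation G(s(t)) = 2 + H(t)
for t in (0.0, 0.1, 0.3, 0.6):
    assert abs(G(s(t)) - (2 + H(t))) < 1e-9

# s is convex (nonnegative second differences), so its concave hull
# over an interval [0, x_star] is the chord between the endpoints
d = 1e-3
for t in (0.0, 0.05, 0.1, 0.15):
    assert s(t) + s(t + 2 * d) - 2 * s(t + d) >= 0
```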
We need to replace $s$ with a concave $\tilde s$ in the interval $[0,x^*]$. So let's choose $\tilde s$ to be the concave hull of $s$, i.e.~the minimum concave function that dominates $s$. Since $s$ is convex, this is simply the line that connects the two ends of the graph of $s$ in $[0,x^*]$, that is, the line\n$$\n\tilde s(t)=\frac{s(0)-s(x^*)}{x^*}(x^*-t)+s(x^*).\n$$\nA calculation shows that this new \emph{valid} mechanism has an expected revenue which is within a factor of just $1+3\times 10^{-9}$ of the upper bound given by $s$ using Theorem~\ref{th:two_iid_approx}, rendering it essentially optimal.\n\n\paragraph{Acknowledgements:} We thank Anna Karlin, Amos Fiat, Costis Daskalakis and Ian Kash for insightful discussions. We also thank the anonymous reviewers for their useful comments on the conference version of this paper.\n\n\bibliographystyle{abbrvnat}\n\n\section{Appendix: Experimental Details}\n\label{sec:appendix_exp_details}\n\n\subsection{Models}\nWe use the following families of architectures.\nThe PyTorch~\cite{paszke2017automatic} specification of our ResNets and CNNs\nis available at \url{https:\/\/gitlab.com\/harvard-machine-learning\/double-descent\/tree\/master}.\n\n\paragraph{ResNets.}\nWe define a family of ResNet18s of increasing size as follows.\nWe follow the Preactivation ResNet18 architecture of \cite{he2016identity},\nusing 4 ResNet blocks, each consisting of two BatchNorm-ReLU-Convolution layers. 
The layer widths for the 4 blocks are $[k, 2k, 4k, 8k]$ for varying $k \in \mathbb{N}$ and the strides are [1, 2, 2, 2].\nThe standard ResNet18 corresponds to $k=64$ convolutional channels in the first layer.\nThe scaling of model size with $k$ is shown in Figure~\ref{fig:resnet_params}.\nOur implementation is adapted from \url{https:\/\/github.com\/kuangliu\/pytorch-cifar}.\n\n\paragraph{Standard CNNs.}\nWe consider a simple family of 5-layer CNNs,\nwith four Conv-BatchNorm-ReLU-MaxPool layers and a fully-connected output layer.\nWe scale the four convolutional layer widths as $[k, 2k, 4k, 8k]$. The MaxPool kernel sizes are [1, 2, 2, 8]. All convolutional layers use kernel size 3, stride 1, and padding 1. \nThis architecture is based on the ``backbone'' architecture from \n\cite{mcnn}.\nFor $k=64$, this CNN has 1558026 parameters and can reach $>90\%$ test accuracy on CIFAR-10 \cite{cifar} with data-augmentation.\nThe scaling of model size with $k$ is shown in Figure~\ref{fig:mcnn_params}.\n\n\paragraph{Transformers.}\nWe consider the encoder-decoder Transformer model from \cite{Vaswani}\nwith 6 layers and 8 attention heads per layer, as implemented by fairseq \cite{ott2019fairseq}.\nWe scale the size of the network by modifying the embedding dimension ($d_{\text{model}}$),\nand scale the width of the fully-connected layers proportionally ($d_{\text{ff}} = 4 d_{\text{model}}$).\nWe train with 10\% label smoothing and no drop-out, for 80 gradient steps.\n\n\begin{figure}[!h]\n \centering\n \begin{subfigure}{.3\textwidth}\n \includegraphics[width=\textwidth]{mcnn_parameters}\n \caption{5-layer CNNs}\n \label{fig:mcnn_params}\n \end{subfigure}\hfill\n \begin{subfigure}{.3\textwidth}\n \includegraphics[width=\textwidth]{resnet_parameters}\n \caption{ResNet18s}\n \label{fig:resnet_params}\n \end{subfigure}\hfill\n \begin{subfigure}{.3\textwidth}\n \includegraphics[width=\textwidth]{nlp_parameters}\n \caption{Transformers}\n 
\label{fig:resnet_params2}\n \end{subfigure}\n \caption{Scaling of model size with our\n parameterization of width \& embedding dimension.}\n \label{fig:nparams}\n\end{figure}\n\n\subsection{Image Classification: Experimental Setup}\nWe describe the details of training for CNNs and ResNets below.\n\n\textbf{Loss function:} Unless stated otherwise, we use the cross-entropy loss for all the experiments.\n\n\textbf{Data-augmentation:} In experiments where data-augmentation was used, we apply \texttt{RandomCrop(32, padding=4)} and \texttt{RandomHorizontalFlip}. In experiments with added label noise, all augmentations of a given training sample are given the same label.\n\n\textbf{Regularization:} No explicit regularization like weight decay or dropout was applied unless explicitly stated.\n\n\textbf{Initialization:} We use the default initialization provided by PyTorch for all the layers.\n\n\textbf{Optimization:}\n\begin{itemize}\n \item \textbf{Adam:} Unless specified otherwise, the learning rate was held constant at $1\mathrm{e}{-4}$ and all other parameters were set to their default PyTorch values. \n \item \textbf{SGD:} Unless specified otherwise, the inverse-square root learning rate schedule (defined below) was used with initial learning rate\n $\gamma_{0} = 0.1$ and updates every $L = 512$ gradient steps. 
No momentum was used.\n\\end{itemize}\nWe found that our results are robust to various other \nnatural choices of optimizers and learning rate schedules.\nWe used the above settings because\n(1) they optimize well,\nand (2) they do not require experiment-specific hyperparameter tuning, and allow us to use the same optimization across many experiments.\n\n\\textbf{Batch size}: All experiments use a batch size of 128.\n\n\\textbf{Learning rate schedule descriptions:}\n\\begin{itemize}\n \\item \\textbf{Inverse-square root $(\\gamma_0, L)$}:\n At gradient step $t$, the learning rate is set to\n $\\gamma(t) := \\frac{\\gamma_0}{\\sqrt{1+\\lfloor t \/ L\\rfloor}}$.\n We set the learning rate with respect to the number of gradient steps, and not epochs, in order to allow comparison between experiments with varying train-set sizes.\n \\item \\textbf{Dynamic drop ($\\gamma_0$, drop, patience)}: Starts with an initial learning rate of $\\gamma_0$ and drops by a factor of ``drop'' if the training loss has remained constant or become worse for ``patience'' number of gradient steps.\n\\end{itemize}\n\n\\subsection{Neural Machine Translation: Experimental Setup}\n\nHere we describe the experimental setup for the neural machine translation experiments.\n\n{\\bf Training procedure.}\n\nIn this setting, the distribution $\\mathcal D$ consists of triples \\[\n (x, y, i)\\;:\\;x \\in V_{src}^*,\\;y \\in V_{tgt}^*,\\;i \\in \\{0, \\dots, |y|\\}\n\\] where $V_{src}$ and $V_{tgt}$ are the source and target vocabularies, the string $x$ is a sentence in the source language, $y$ is its translation in the target language, and $i$ is the index of the token to be predicted by the model. 
We assume that $i|x, y$ is distributed uniformly on $\\{0, \\dots, |y|\\}$.\n\nA standard probabilistic model defines an autoregressive factorization of the likelihood: \\[\n p_M(y|x) = \\prod_{i = 1}^{|y|} p_M(y_i|y_{<i}, x)\n\\] and the model is trained with the cross-entropy of the next-token prediction $p_M(y_i|y_{<i}, x)$.\n\n\\begin{definition}[Effective Model Complexity]\nThe \\emph{Effective Model Complexity} (EMC) of a training procedure $\\cT$, with respect to distribution $\\cD$ and parameter $\\epsilon > 0$,\nis defined as:\n\\begin{align*}\n \\mathrm{EMC}_{\\cD,{\\epsilon}}(\\cT)\n := \\max \\left\\{n ~|~ \\mathbb{E}_{S \\sim \\cD^n}[ \\mathrm{Error}_S( \\cT( S ) ) ] \\leq {\\epsilon} \\right\\}\n \\end{align*}\n where $\\mathrm{Error}_S(M)$ is the mean error of model $M$ on train samples $S$.\n\\end{definition}\n\nOur main hypothesis can be informally stated as follows:\n\n\\begin{hypothesis}[Generalized Double Descent hypothesis, informal] \\label{hyp:informaldd}\nFor any natural data distribution $\\cD$, neural-network-based training procedure $\\cT$, and small $\\epsilon>0$,\nif we consider the task of predicting labels based on $n$ samples from $\\cD$ then:\n\\begin{description}\n \\item[Under-parameterized regime.] If~$\\mathrm{EMC}_{\\cD,\\epsilon}(\\cT)$ is sufficiently smaller than $n$, any perturbation of $\\cT$ that increases its effective complexity will decrease the test error.\n \\item[Over-parameterized regime.] If $\\mathrm{EMC}_{\\cD,\\epsilon}(\\cT)$ is sufficiently larger than $n$,\n any perturbation of $\\cT$ that increases its effective complexity will decrease the test error.\n \n \\item[Critically parameterized regime.] If $\\mathrm{EMC}_{\\cD,\\epsilon}(\\cT) \\approx n$, then\n a perturbation of $\\cT$ that increases its effective complexity\n might decrease {\\bf or increase} the test error.\n\\end{description}\n\\end{hypothesis}\n\nHypothesis~\\ref{hyp:informaldd} is informal in several ways.\nWe do not have a principled way to choose the parameter $\\epsilon$ (and currently heuristically use $\\epsilon=0.1$). \nWe also do not yet have a formal specification for ``sufficiently smaller'' and ``sufficiently larger''.\nOur experiments suggest that there is a \\emph{critical interval}\naround the \\emph{interpolation threshold} when $\\mathrm{EMC}_{\\cD,\\epsilon}(\\cT) = n$: below and above\nthis interval increasing complexity helps performance,\nwhile within this interval it may hurt performance.\nThe width of the critical interval depends on both the distribution and\nthe training procedure in ways we do not yet completely understand.\n\n\nWe believe Hypothesis~\\ref{hyp:informaldd} sheds light on the interaction between optimization algorithms, model size, and test performance and helps reconcile some of the competing intuitions about them.\nThe main result of this paper is an experimental validation of Hypothesis~\\ref{hyp:informaldd} under a variety of settings, where we considered several natural choices of datasets, architectures, and optimization algorithms,\nand we changed the ``interpolation threshold''\nby varying the number of model parameters,\nthe length of training, the amount of label noise 
in the distribution, and the number of train samples.\n\n\n{\\bf Model-wise Double Descent.}\nIn Section~\\ref{sec:model-dd},\nwe study the test error of models of increasing size,\nfor a fixed large number of optimization steps.\nWe show that ``model-wise double-descent'' occurs\nfor various modern datasets\n(CIFAR-10, CIFAR-100, \\iwslt~de-en, with varying amounts of label noise),\nmodel architectures (CNNs, ResNets, Transformers),\noptimizers (SGD, Adam),\nnumber of train samples,\nand training procedures (data-augmentation and regularization).\nMoreover, the peak in test error systematically occurs at the interpolation threshold.\nIn particular, we demonstrate realistic settings in which\n\\emph{bigger models are worse}.\n\n{\\bf Epoch-wise Double Descent.}\nIn Section~\\ref{sec:epoch-dd},\nwe study the test error of a fixed, large architecture over the course of training.\nWe demonstrate, in similar settings as above,\na corresponding peak in test error when\nmodels are trained just long enough to reach $\\approx 0$ train error.\nThe test error of a large model\nfirst decreases (at the beginning of training), then increases (around the critical regime),\nthen decreases once more (at the end of training)---that is,\n\\emph{training longer can correct overfitting.}\n\n\n{\\bf Sample-wise Non-monotonicity.}\nIn Section~\\ref{sec:samples},\nwe study the test error of a fixed model and training procedure, for varying number of train samples.\nConsistent with our generalized double-descent hypothesis,\nwe observe distinct test behavior in the ``critical regime'',\nwhen the number of samples is near the maximum that the model can fit.\nThis often manifests as a long plateau region,\nin which taking significantly more data might not help when training to completion\n(as is the case for CNNs on CIFAR-10).\nMoreover, we show settings (Transformers on \\iwslt~en-de),\nwhere this manifests as a peak---and for a fixed architecture and training procedure, 
\\emph{more data actually hurts.}\n\n{\\bf Remarks on Label Noise.} \nWe observe all forms of double descent most strongly in settings\nwith label noise in the train set\n(as is often the case when collecting train data in the real world).\nHowever, we also show several realistic settings with a test-error peak even without label noise:\nResNets (Figure~\\ref{fig:resnet-cifar-left}) and CNNs (Figure~\\ref{fig:app-cifar100-clean-sgd}) on CIFAR-100;\nTransformers on \\iwslt~(Figure~\\ref{fig:more-mdd2}).\nMoreover, all our experiments demonstrate distinctly different test behavior\nin the critical regime---often manifesting as a ``plateau'' in the test error in the noiseless case which \ndevelops into a peak with added label noise.\nSee Section~\\ref{sec:discuss} for further discussion.\n\n\\section{Related work}\nModel-wise double descent was first proposed as a general phenomenon\nby \\cite{belkin2018reconciling}.\nSimilar behavior had been observed in\n\\cite{opper1995statistical, opper2001learning},\n\\cite{advani2017high},\n\\cite{spigler2018jamming}, and \\cite{geiger2019jamming}.\nSubsequently, there has been a large body of work studying the double descent phenomenon.\nA growing list of papers that theoretically analyze it in the tractable setting of linear least squares regression includes \\cite{belkin2019two, hastie2019surprises, bartlett2019benign, muthukumar2019harmless, bibas2019new, Mitra2019UnderstandingOP, mei2019generalization}. Moreover, \\cite{geiger2019scaling} provide preliminary results for model-wise double descent in convolutional networks trained on CIFAR-10. 
Our work differs from the above papers in two crucial aspects: First, we extend the idea of double-descent beyond the number of parameters to incorporate the training procedure under a unified notion of ``Effective Model Complexity'', leading to novel insights like epoch-wise double descent and sample non-monotonicity.\nThe notion that increasing train time corresponds to\nincreasing complexity was also presented in~\\cite{nakkiran2019sgd}.\nSecond, we provide an extensive\nand rigorous demonstration of double-descent for modern practices spanning a variety of architectures, datasets, and optimization procedures.\nAn extended discussion of the related work is provided in Appendix \\ref{sec:related_appendix}.\n\n\n\n\\subsubsection*{Acknowledgments}\nWe thank Mikhail Belkin for extremely useful discussions in the early stages of this work.\nWe thank Christopher Olah for suggesting the Model Size $\\x$ Epoch visualization, which led to the investigation of epoch-wise double descent,\nas well as for useful discussion and feedback.\nWe also thank Alec Radford, Jacob Steinhardt, and Vaishaal Shankar for helpful discussion and suggestions.\nP.N. thanks OpenAI, the Simons Institute, and the Harvard Theory Group for a research environment that enabled this kind of work.\n\nWe thank Dimitris Kalimeris, Benjamin L. Edelman, Sharon Qian, and Aditya Ramesh for comments on an early draft of this work.\n\nThis work was supported in part by NSF grant CAREER CCF 1452961, BSF grant 2014389, NSF USICCS proposal 1540428, a Google Research award, a Facebook research award, a Simons Investigator Award,\na Simons Investigator Fellowship,\nand NSF Awards\nCCF 1715187,\nCCF 1565264, CCF 1301976, IIS 1409097,\nand CNS 1618026. Y.B. 
would like to thank the MIT-IBM Watson AI Lab for contributing computational resources for experiments.\n\n\n\n\\section{Model-wise Double Descent}\n\\label{sec:model-dd}\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}[t]{.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cifar100_resnet_pVar}\n \\caption[width=0.8\\linewidth]{{\\bf CIFAR-100.}\n There is a peak in test error even with no label noise.\n } \\label{fig:resnet-cifar-left}\n \\end{subfigure}\\hfill\n \\begin{subfigure}[t]{.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cifar10_resnet_pVar}\n \\caption[width=0.8\\linewidth]{{\\bf CIFAR-10.}\n There is a ``plateau'' in test error around the interpolation point with no label noise,\n which develops into a peak for added label noise.\n }\n \\end{subfigure}\n \\caption{{\\bf Model-wise double descent for\n ResNet18s.} Trained on CIFAR-100 and CIFAR-10, with varying label noise. Optimized using Adam with LR $0.0001$ for 4K epochs, and data-augmentation.\n }\n \\label{fig:resnet-cifar}\n\\end{figure}\n\nIn this section, we study the test error of models of increasing size,\nwhen training to completion (for a fixed large number of optimization steps). 
\nWe demonstrate model-wise double descent across different architectures,\ndatasets, optimizers, and training procedures.\nThe critical region exhibits distinctly different test behavior\naround the interpolation point and there is often a peak in test error that becomes more prominent in settings with label noise.\n\nFor the experiments in this section (Figures~\\ref{fig:resnet-cifar}, \\ref{fig:cnn-cifar},\n\\ref{fig:more-mdd1appendix},\n\\ref{fig:mcnn_cf100},\n\\ref{fig:more-mdd2}), notice that all modifications which increase the interpolation threshold\n(such as adding label noise, using data augmentation, and increasing the number of train samples)\nalso correspondingly shift the peak in test error towards larger models.\nAdditional plots showing the early-stopping behavior of these models,\nand additional experiments showing double descent\nin settings with no label noise (e.g. Figure~\\ref{fig:app-cifar100-clean})\nare in Appendix~\\ref{subsec:appendix_model}.\nWe also observed model-wise double descent for adversarial training,\nwith a prominent robust test error peak even in settings without label noise.\nSee Figure~\\ref{fig:adv_training} in Appendix~\\ref{subsec:appendix_model}.\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}{.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cifar10_mcnn_noaug_pVar}\n \\caption{Without data augmentation.}\n \\end{subfigure}\\hfill\n \\begin{subfigure}{.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cifar10_mcnn_aug_pVar_trunc}\n \\caption{With data augmentation.}\n \\end{subfigure}\n \\caption{{\\bf Effect of Data Augmentation.}\n 5-layer CNNs on CIFAR10, with and without data-augmentation.\n Data-augmentation shifts the interpolation threshold to the right,\n shifting the test error peak accordingly.\n Optimized using SGD for 500K steps.~\n See Figure~\\ref{fig:mcnn_aug_large} for larger models.\n 
}\\label{fig:cnn-cifar}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\captionsetup{calcwidth=.9\\linewidth}\n\\begin{minipage}[t]{0.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cifar10_mcnn_sgd_adam}\n \\caption[width=0.8\\linewidth]{{\\bf SGD vs. Adam.}\n 5-Layer CNNs on CIFAR-10 with no label noise, and no data augmentation.\n Optimized using SGD for 500K gradient steps, and Adam for 4K epochs.}\\label{fig:more-mdd1appendix}\n\\end{minipage}\\hfill\n\\begin{minipage}[t]{.47\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cifar100_mcnn_noaug}\n \\caption{{\\bf Noiseless settings.} \n 5-layer CNNs on CIFAR-100 with no label noise;\n note the peak in test error.\n Trained with SGD and no data augmentation.\n See Figure~\\ref{fig:app-cifar100-clean-sgd}\n for the early-stopping behavior of these models.\n }\n \\label{fig:mcnn_cf100}\n\\end{minipage}\n\\end{figure}\n\n\\begin{figure}[h]\n \\centering\n \\begin{minipage}[c]{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{nlp_de_en_fr}\n \\end{minipage}\\hfill\n \\begin{minipage}[c]{0.45\\textwidth}\n \\caption[0.9\\textwidth]{{\\bf Transformers on language translation tasks:} Multi-head-attention encoder-decoder Transformer model trained for 80k gradient steps with label-smoothed cross-entropy loss on \\iwslt\\ German-to-English (160K sentences) and WMT'14 English-to-French (subsampled to 200K sentences) datasets. Test loss is measured as per-token perplexity.}\n \\label{fig:more-mdd2}\n \\end{minipage}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\paragraph{Discussion.}\nFully understanding the mechanisms behind model-wise double descent\nin deep neural networks remains an important open question.\nHowever, an analog of model-wise double descent occurs even\nfor linear models. 
A recent stream of theoretical works\nanalyzes this setting\n(\\cite{bartlett2019benign,muthukumar2019harmless,belkin2019two,mei2019generalization,hastie2019surprises}).\nWe believe similar mechanisms may be at work in deep neural networks.\n\nInformally, our intuition is that for model-sizes at the interpolation threshold, there is effectively\nonly one model that fits the train data and this interpolating model is very sensitive\nto noise in the train set and\/or model mis-specification.\nThat is, since the model is just barely able to fit the train data,\nforcing it to fit even slightly-noisy or mis-specified\nlabels will destroy its global structure, and result in high test error.\n(See Figure~\\ref{fig:ens_resnet} in the Appendix for an experiment\ndemonstrating this noise sensitivity, by showing that ensembling helps significantly\nin the critically-parameterized regime).\nHowever, for over-parameterized models,\nthere are many interpolating models that fit the train set,\nand SGD is able to find one that ``memorizes''\n(or ``absorbs'') the noise while still performing well on the distribution.\n\nThe above intuition is theoretically justified for linear models.\nIn general, this situation manifests even without label noise for linear models\n(\\cite{mei2019generalization}),\nand occurs whenever there is \\emph{model mis-specification}\nbetween the structure of the true distribution and the model family.\nWe believe this intuition extends to deep learning as well,\nand it is consistent with our experiments.\n\n\n\n\\section{On The Mechanisms 
and Characterizations of Double Descent}\n\\label{sec:morediscuss}\n\n\\paragraph{On Effective Model Complexity.}\nWe stress that Effective Model Complexity is \\emph{not} a single-letter characterization\nof test error.\nFor example, two training procedures with the same EMC can output models with\nvery different test errors (e.g. consider a CNN and a fully-connected net which are both capable of interpolating the same number of samples from a natural image distribution;\nhere, we would expect the CNN to have much lower test error than the fully-connected net).\nOur generalized double descent hypothesis applies as we vary EMC in a ``smooth'' way---say, by only\nchanging the model by a small amount---and does not apply if we, say, jump discontinuously from a CNN to a fully-connected net.\n\\section{Extended discussion of related work}\n\\label{sec:related_appendix}\n\n\n\n\\paragraph{\\cite{belkin2018reconciling}:} \nThis paper proposed, in very general terms, that the apparent contradiction between traditional notions of the bias-variance trade-off and empirically successful practices in deep learning can be reconciled under a double-descent curve---as model complexity increases, the test error follows the traditional ``U-shaped curve'', but beyond the point of interpolation, the error starts to \\textit{decrease}. This work provides empirical evidence for the double-descent curve with fully connected networks trained on subsets of MNIST, CIFAR10, SVHN and TIMIT datasets. They use the $l_2$ loss for their experiments. 
They demonstrate that neural networks are not an aberration in this regard---double-descent is a general phenomenon observed also in linear regression with random features and random forests.\n\n\\paragraph{Theoretical works on linear least squares regression:}\nA variety of papers have attempted to theoretically analyze this behavior in restricted settings, particularly the case of least squares regression under various assumptions on the training data, feature spaces, and regularization methods.\n\n\\begin{enumerate}\n \\item \\cite{advani2017high, hastie2019surprises} both consider the linear regression problem stated above and analyze the generalization behavior in the asymptotic limit $N, D \\rightarrow \\infty$ using random matrix theory. \\cite{hastie2019surprises} highlight that when the model is mis-specified, the minimum of training error can occur for over-parameterized models.\n \\item \\cite{belkin2019two} consider linear least squares regression for two data models, where the input data is sampled from a Gaussian and a Fourier series model for functions on a circle. They provide a finite-sample analysis for these two cases.\n \\item \\cite{bartlett2019benign} provide generalization bounds for the minimum $l_2$-norm interpolant for Gaussian features.\n \\item \\cite{muthukumar2019harmless} characterize the fundamental limit of any interpolating solution in the presence of noise and provide some interesting Fourier-theoretic interpretations.\n \\item \\cite{mei2019generalization} provide an asymptotic analysis for ridge regression over random features.\n\\end{enumerate}\n\nSimilar double descent behavior was investigated in \\cite{opper1995statistical, opper2001learning}.\n\n\\cite{geiger2019jamming} showed that deep fully connected networks trained on the MNIST dataset with hinge loss exhibit a ``jamming transition'' when the number of parameters exceeds a threshold that allows training to near-zero train loss. 
\\cite{geiger2019scaling} provide further experiments on CIFAR-10 with a convolutional network. They also highlight interesting behavior with ensembling around the critical regime, which is consistent with our\ninformal intuitions in Section~\\ref{sec:model-dd} and our experiments in Figures \\ref{fig:ens_resnet}, \\ref{fig:ens_mcnn}.\n\n\\cite{advani2017high, geiger2019jamming, geiger2019scaling} also point out that double-descent is not observed when optimal early-stopping is used.\n\n\n\n\n\n\n\n\\section{Random Features: A Case Study}\n\\label{sec:rff}\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\textwidth]{rff_fmnist.png} \\\\\n \n \n \\caption{\\textbf{Random Fourier Features} on the Fashion MNIST dataset. The setting is equivalent to a two-layer neural network with $e^{-ix}$ activation, with a randomly-initialized first layer that is fixed throughout training. The second layer is trained using gradient flow.}\n \\label{fig:rff}\n\\end{figure}\n\n\nIn this section, for completeness' sake, we show that both the model- and sample-wise double descent phenomena are not unique to deep neural networks---they exist even in the setting of Random Fourier Features of \\cite{rahimi2008random}. This setting is equivalent to a two-layer neural network with $e^{-ix}$ activation. The first layer is initialized with a $\\mathcal{N}(0, \\frac{1}{d})$ Gaussian distribution and then fixed throughout training. The width (or embedding dimension) $d$ of the first layer parameterizes the model size. The second layer is initialized with $0$s and trained with MSE loss.\n\nFigure \\ref{fig:rff} shows the grid of test error as a function of\nboth the number of samples $n$ and the model size $d$.\nNote that in this setting $\\mathrm{EMC} = d$ (the embedding dimension). As a result, as demonstrated in the figure, the peak follows the path of $n=d$. 
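The random-features setup above can be reproduced in miniature. The sketch below uses hypothetical toy dimensions (not the Fashion-MNIST configuration), assumes a first-layer scale of $1/d_{in}$, and replaces gradient flow by a closed-form minimum-norm least-squares solve, which gradient flow from zero initialization converges to; it illustrates why the interpolation threshold sits at $n=d$:

```python
import numpy as np

# Toy Random Fourier Features setup (illustrative dimensions only).
rng = np.random.default_rng(0)
n, d_in = 50, 10                       # train samples, input dimension
X = rng.standard_normal((n, d_in))
y = rng.standard_normal(n)             # arbitrary real targets

def rff_train_error(d):
    """Fit the second layer of an RFF model with d random features and
    return the mean squared train error of the min-norm solution."""
    W = rng.normal(0.0, np.sqrt(1.0 / d_in), size=(d_in, d))  # fixed first layer
    Phi = np.exp(-1j * X @ W)                                 # e^{-ix} activation
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)               # min-norm least squares
    return float(np.mean(np.abs(Phi @ a - y) ** 2))

# Under-parameterized (d < n): the model cannot interpolate the targets.
# Over-parameterized (d > n): train error reaches ~0 past the threshold d = n.
```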
Both the model-wise and sample-wise (see Figure~\\ref{fig:rff-samples}) double descent phenomena are captured by horizontally and vertically slicing the grid, respectively. \n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.6\\textwidth]{figures\/rff_fmnist_samples.png} \\\\\n \n \n \\caption{Sample-wise double-descent slice for Random Fourier Features on the Fashion MNIST dataset. In this figure the embedding dimension (number of random features) is 1000.}\n \\label{fig:rff-samples}\n\\end{figure}\n\\section{Sample-wise Non-monotonicity}\n\\label{sec:samples}\n\nIn this section, we investigate the effect of varying the number of train samples,\nfor a fixed model and training procedure.\nPreviously, in model-wise and epoch-wise double descent, we explored behavior in the critical regime, where $\\mathrm{EMC}_{\\cD, {\\epsilon}}(\\cT) \\approx n$, by varying the EMC.\nHere, we explore the critical regime by varying the number of train samples $n$.\nBy increasing $n$, the same training procedure $\\cT$ can switch from being effectively over-parameterized\nto effectively under-parameterized.\n\n\nWe show that increasing the number of samples has two different effects on the test error vs. model complexity graph.\nOn the one hand, (as expected) increasing the number of samples shrinks the area under the curve.\nOn the other hand, increasing the number of samples also has the effect of ``shifting the curve to the right'' and increasing the model complexity at which test error peaks.\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[c]{.45\\textwidth}\n \n \\centering\n \n \\includegraphics[width=\\textwidth]{mcnn_p10_big}\n \\includegraphics[width=\\textwidth]{mcnn_p20_big}\n \\caption{Model-wise double descent for 5-layer CNNs on CIFAR-10, for varying dataset sizes. 
\n \n {\\bf Top:}\n There is a range of model sizes (shaded green)\n where training on $2\\x$ more samples does not improve test error.\n {\\bf Bottom:}\n There is a range of model sizes (shaded red)\n where training on $4\\x$ more samples does not improve test error.}\n \\label{fig:sample-modeld}\n \\end{subfigure}\\hspace{0.7cm}\n \\begin{subfigure}[c]{.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{nlp_sampledd_test-annotated}\n \\caption{\\textbf{Sample-wise non-monotonicity.}\n Test loss (per-word perplexity) as a function of number of train samples, for two transformer models trained to completion on IWSLT'14. For both model sizes, there is a regime where more samples hurt performance. Compare to Figure~\\ref{fig:moresamplesareworse}, of model-wise double-descent in the identical setting.}\n \\label{fig:nlpdd}\n \\end{subfigure}\n \\caption{Sample-wise non-monotonicity.}\n\\end{figure}\n\nThese twin effects are shown in Figure~\\ref{fig:sample-modeld}.\nNote that there is a range of model sizes\nwhere the effects ``cancel out''---and\nhaving $4\\x$ more train samples does not help test\nperformance when training to completion.\nOutside the critically-parameterized regime, for sufficiently under- or over-parameterized models, having more samples helps.\nThis phenomenon is corroborated in Figure~\\ref{fig:grid},\nwhich shows test error as a function of both model and sample size,\nin the same setting as Figure~\\ref{fig:sample-modeld}.\n\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=\\textwidth]{joint_grid}\n \\caption{{\\bf Left:} Test Error as a function of model size and\n number of train samples, for 5-layer CNNs on CIFAR-10 + 20\\% noise.\n Note the ridge of high test error again lies along the interpolation threshold.\n {\\bf Right: } Three slices of the left plot,\n showing the effect of more data for models of different sizes.\n Note that, when training to completion, more data helps for\n small and large models, 
but does not help\n for near-critically-parameterized models (green).\n }\n \\label{fig:grid}\n\\end{figure}\n\nIn some settings, these two effects combine to yield a regime of model sizes\nwhere more data actually hurts test performance as in\nFigure~\\ref{fig:moresamplesareworse} (see also \nFigure~\\ref{fig:nlpdd}).\nNote that this phenomenon is not unique to DNNs:\nmore data can hurt even for linear models \n(see Appendix~\\ref{sec:rff}).\n\n\\section{Experimental Setup}\n\n\nWe briefly describe the experimental setup here; full details are in Appendix~\\ref{sec:appendix_exp_details}\n\\footnote{The raw data from our experiments\nare available at:\n\\url{https:\/\/gitlab.com\/harvard-machine-learning\/double-descent\/tree\/master}}.\nWe consider three families of architectures: ResNets, standard CNNs, and Transformers.\n\\textbf{ResNets:}\nWe parameterize a family of ResNet18s (\\cite{he2016identity})\nby scaling the width (number of filters) of convolutional layers.\nSpecifically, we use layer widths $[k, 2k, 4k, 8k]$ for varying $k$.\nThe standard ResNet18 corresponds to $k=64$.\n\\textbf{Standard CNNs:}\nWe consider a simple family of 5-layer CNNs, with 4 convolutional layers of widths $[k, 2k, 4k, 8k]$ for varying $k$, and a fully-connected layer. 
For context, the CNN with width $k=64$ can reach over $90\\%$ test accuracy on CIFAR-10 with data-augmentation.\n\\textbf{Transformers:}\nWe consider the 6 layer encoder-decoder from \\cite{Vaswani},\nas implemented by \\cite{ott2019fairseq}.\nWe scale the size of the network by\nmodifying the embedding dimension $d_{\\text{model}}$,\nand setting the width of the fully-connected layers proportionally\n($d_{\\text{ff}} = 4\\cdot d_{\\text{model}}$).\nFor ResNets and CNNs, we train with cross-entropy loss, and the following optimizers:\n(1) Adam with learning-rate $0.0001$ for 4K epochs; (2) SGD with learning rate $\\propto \\frac{1}{\\sqrt{T}}$ for 500K gradient steps.\nWe train Transformers for 80K gradient steps, with 10\\% label smoothing and no drop-out.\n\n\\paragraph{Label Noise.}\nIn our experiments, label noise of probability $p$ refers to\ntraining on samples which have the correct label\nwith probability $(1-p)$, and a uniformly random incorrect label otherwise (label noise is sampled only once and not per epoch).\nFigure~\\ref{fig:errorvscomplexity} plots test error on the noisy distribution, while the remaining figures plot test error with respect to the clean distribution (the two curves are just linear rescalings of one another).","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Pseudo-code of proposed methods}\n\\label{appendix:ap_pseudocode}\nIn this section, we provide the pseudo-code of the methods proposed in the main paper. First, Algorithm~\\ref{alg:ap_white} shows the pseudo-code of ODI for white-box attacks in Section~\\ref{sec_ODS_white}. 
Lines 4--6 of the algorithm describe the iterative update of ODI.\n\n\n\\begin{algorithm}[htbp]\n \\caption{Initialization by ODS (ODI) for white-box attacks }\n \\label{alg:ap_white}\n\\begin{algorithmic}[1]\n \\STATE {\\bfseries Input:} A targeted image $\\bm{\\mathrm{x}}_{org}$, a target classifier $\\bm{\\mathrm{f}}$, perturbation set $B(\\bm{\\mathrm{x}}_{org})$, number of ODI steps $N_{\\text{ODI}}$, step size $ \\eta_{\\text{ODI}}$, number of restarts $N_R$\n \\STATE {\\bfseries Output:} Starting points $\\{\\bm{\\mathrm{x}}^{start}_i \\}$ for adversarial attacks\n \\FOR{$i=1$ {\\bfseries to} $N_R$}\n \\STATE Sample $\\bm{\\mathrm{x}}_{0}$ from $B(\\bm{\\mathrm{x}}_{org})$, and sample $\\bm{\\mathrm{w}}_{\\text{d}} \\sim U(-1,1)^C$\n \\FOR{$k=0$ {\\bfseries to} $N_{\\text{ODI}}-1$}\n \\STATE $\\bm{\\mathrm{x}}_{k+1} \\gets \\mathrm{Proj}_{B(\\bm{\\mathrm{x}}_{org})} \\left( \\bm{\\mathrm{x}}_{k} + \\eta_{\\text{ODI}} \\, \\mathrm{sign}(\\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}}_k,\\bm{\\mathrm{f}},\\bm{\\mathrm{w}}_{\\text{d}}) ) \\right)$\n \\ENDFOR\n \\STATE $\\bm{\\mathrm{x}}^{start}_i \\gets \\bm{\\mathrm{x}}_{N_{\\text{ODI}}}$\n \\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\n\nWe also describe the algorithm of Boundary-ODS, used in Section~\\ref{sec_black_decision} of the main paper. Algorithm~\\ref{alg:ap_boundary} shows the pseudo-code of Boundary-ODS. 
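Both ODI and Boundary-ODS rely on the ODS direction $\bm{\mathrm{v}}_{\text{ODS}}$, which is defined in the main text as the $L_2$-normalized input gradient of the randomly weighted logits $\bm{\mathrm{w}}_{\text{d}}^\top \bm{\mathrm{f}}(\bm{\mathrm{x}})$. As an illustration only, the following sketch computes this direction for a linear classifier (where the gradient is available in closed form; for a deep network it would come from backpropagation), together with one ODI-style update on an $L_\infty$ ball:

```python
import numpy as np

def ods_direction_linear(W, w_d):
    """ODS direction for a linear classifier f(x) = W @ x:
    the gradient of w_d . f(x) w.r.t. x is W.T @ w_d, L2-normalized.
    (For a deep network, the gradient comes from backpropagation.)"""
    g = W.T @ w_d
    return g / np.linalg.norm(g)

def odi_step(x, x_org, W, w_d, eta=0.02, eps=0.03):
    """One ODI update: step in sign(v_ODS), then project back onto
    the L-infinity ball of radius eps around x_org."""
    v = ods_direction_linear(W, w_d)
    x_new = x + eta * np.sign(v)
    return x_org + np.clip(x_new - x_org, -eps, eps)
```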
\nThe original Boundary Attack~\\citep{Brendel18} first samples a random noise vector $\\bm{\\mathrm{q}}$ from a Gaussian distribution $\\mathcal{N}(\\mathbf{0}, \\mathbf{I})$ and then orthogonalizes the vector to keep the distance from the original image (line 7 in Algorithm~\\ref{alg:ap_boundary}).\nAfter that, the attack refines the vector $\\bm{\\mathrm{q}}$ to reduce the distance from the original image such that the following equation holds:\n\\begin{equation}\n\\label{eq_decision_update}\nd(\\bm{\\mathrm{x}},\\bm{\\mathrm{x}}_{adv}) - d(\\bm{\\mathrm{x}},\\bm{\\mathrm{x}}_{adv}+\\bm{\\mathrm{q}}) = \\epsilon \\cdot d(\\bm{\\mathrm{x}},\\bm{\\mathrm{x}}_{adv})\n\\end{equation}\nwhere $d(a,b)$ is the distance between $a$ and $b$.\nWe replace the random Gaussian sampling with ODS, as in lines 5 and 6 of Algorithm~\\ref{alg:ap_boundary}.\nVectors sampled by ODS yield large changes in the outputs of the target model and increase the probability that the updated image is adversarial (i.e.
the image satisfies line 9 of Algorithm~\\ref{alg:ap_boundary}), so ODS makes the attack efficient.\n\n\\begin{algorithm}[htbp]\n \\caption{Boundary Attack~\\citep{Brendel18} with update directions sampled by ODS }\n \\label{alg:ap_boundary}\n\\begin{algorithmic}[1]\n \\STATE {\\bfseries Input:} A targeted image $\\bm{\\mathrm{x}}$, a label $y$, a target classifier $\\bm{\\mathrm{f}}$, a set of surrogate models $\\mathcal{G}$\n \\STATE {\\bfseries Output:} attack result $\\bm{\\mathrm{x}}_{adv}$\n \\STATE Set a starting point $\\bm{\\mathrm{x}}_{adv}$ which is already adversarial\n \\WHILE {$k<$ number of steps}\n \\STATE Choose a surrogate model $\\bm{\\mathrm{g}}$ from $\\mathcal{G}$, sample $\\bm{\\mathrm{w}}_{\\text{d}} \\sim U(-1,1)^C$\n \\STATE Set $\\bm{\\mathrm{q}} = \\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}}_{adv},\\bm{\\mathrm{g}},\\bm{\\mathrm{w}}_{\\text{d}})$\n \\STATE Project $\\bm{\\mathrm{q}}$ onto a sphere around the original image $\\bm{\\mathrm{x}}$\n \\STATE Update $\\bm{\\mathrm{q}}$ with a small movement toward the original image $\\bm{\\mathrm{x}}$ such that Equation~(\\ref{eq_decision_update}) holds\n \\IF{$\\bm{\\mathrm{x}}_{adv}+\\bm{\\mathrm{q}}$ is adversarial}\n \\STATE Set $\\bm{\\mathrm{x}}_{adv}=\\bm{\\mathrm{x}}_{adv}+\\bm{\\mathrm{q}}$ \n \\ENDIF\n \\ENDWHILE\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\section{Details of experiment settings}\n\\label{appendix:ap_parameter}\n\n\n\\subsection{Hyperparameters and settings for attacks in Section~\\ref{sec_white_various}}\n\\label{appendix:ap_parameter_whiteall}\nWe describe the hyperparameters and settings for the PGD and C\\&W attacks in Section~\\ref{sec_white_various}. \n\nMultiple loss functions $L(\\cdot)$ can be used for PGD attacks, including the cross-entropy loss and the margin loss defined as $\\max_{i \\neq y} f_{i}(\\bm{\\mathrm{x}}) - f_{y}(\\bm{\\mathrm{x}})$.
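For reference, this margin loss can be sketched in Python (a minimal sketch; the function name is our own):

```python
def margin_loss(logits, y):
    """Margin loss  max_{i != y} f_i(x) - f_y(x):
    negative when the true class y has the top logit,
    positive when some other class outranks it (i.e. the input is adversarial)."""
    other = max(v for i, v in enumerate(logits) if i != y)
    return other - logits[y]
```

PGD ascends this quantity, so crossing zero corresponds to crossing the decision boundary.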
%\nWe use the margin loss for PGD attacks to make the considered attack methods stronger.\n\n\n\nPGD attacks have three hyperparameters: perturbation size $\\epsilon$, step size $\\eta$, and number of steps $N$. \nWe chose $\\epsilon=0.3,8\/255,4\/255$, $\\eta= 0.02, 2\/255, 0.5\/255$ and $N=40,20,50$ for MadryLab (MNIST), MadryLab (CIFAR-10), and ResNet152 Denoise (ImageNet), respectively. We use the whole test set except for ImageNet, where the first 1000 test images are used. \n\nFor C\\&W attacks, we define na\\\"{i}ve random initialization to make sure the starting points are within an $\\ell_2$ $\\epsilon$-radius ball: we first sample Gaussian noise $\\bm{\\mathrm{w}} \\sim \\mathcal{N}(\\mathbf{0}, \\mathbf{I})$ and then add the rescaled noise $\\epsilon \\cdot \\bm{\\mathrm{w}} \/ \\|\\bm{\\mathrm{w}}\\|_2$ to an original image. We set the perturbation radius of the initialization $\\epsilon$ by reference to attack bounds in other studies: $\\epsilon= 2.0, 1.0, 5.0$ for MadryLab (MNIST), MadryLab (CIFAR-10), ResNet152 Denoise (ImageNet), respectively.\nWe also set the hyperparameters of the C\\&W attacks as follows: the maximum number of iterations is 1000 (MNIST) and 100 (CIFAR-10 and ImageNet), the number of search steps is 10, the learning rate is 0.1, and the initial constant is 0.01. The attack is performed on the first 1000 images (MNIST and CIFAR-10) and the first 500 images (ImageNet).\n\n\n\\subsection{Hyperparameter tuning for tuned ODI-PGD in Section~\\ref{sec_sota} }\n\\label{appendix:ap_parameter_ODI}\nWe describe the hyperparameter tuning for our tuned ODI-PGD in Section~\\ref{sec_sota}. \nWe summarize the settings in Table~\\ref{tab_para}.
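As a side note to the C\\&W settings of the previous subsection, the na\\\"{i}ve $\\ell_2$ random initialization described there can be sketched in Python (a minimal sketch; the function name is our own):

```python
import math
import random

def naive_l2_init(x_org, eps):
    """Naive random initialization for C&W: Gaussian noise rescaled to norm
    eps, so the starting point lies on the l2 sphere of radius eps around
    the original image (and hence inside the eps-radius ball)."""
    w = [random.gauss(0.0, 1.0) for _ in x_org]
    norm = math.sqrt(sum(wi * wi for wi in w))
    return [xi + eps * wi / norm for xi, wi in zip(x_org, w)]

random.seed(0)
x0 = naive_l2_init([0.0] * 5, eps=2.0)
```

Because an isotropic Gaussian direction is uniform on the sphere, this draws the starting point uniformly from the sphere of radius $\\epsilon$ around the original image.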
\n\n\\begin{table*}[ht]\n\\caption{Hyperparameter setting for tuned ODI-PGD in Section~\\ref{sec_sota}.}\n\\label{tab_para}\n\\begin{center}\n\\setlength{\\tabcolsep}{4.5pt}\n\\begin{tabular}{c|cc|ccc}\n\\toprule\n & \\multicolumn{2}{c|}{\\text{ODI}} & \\multicolumn{3}{c}{PGD} \\\\ \nmodel & \\begin{tabular}{c} total steps \\\\ $N_{\\text{ODI}}$ \\end{tabular} & \\begin{tabular}{c} step size \\\\ $\\eta_{\\text{ODI}}$ \\end{tabular} & \n optimizer & \n \\begin{tabular}{c} total steps \\\\ $N$ \\end{tabular} & \n \\begin{tabular}{c} step size (learning rate) \\\\ $\\eta_k$ \\end{tabular} \\\\ \\midrule\n\\begin{tabular}{c}MNIST\n\\end{tabular} & 50 & \n 0.05 & Adam & 950 & \n $\\begin{array}{l@{~}l}\n 0.1 & (k < 475) \\\\\n 0.01 & (475 \\leq k < 712) \\\\\n 0.001 & (712 \\leq k)\n \\end{array}$ \\\\ \\hline\n\\begin{tabular}{c}CIFAR-10\n\\end{tabular} & 10 & \n 8\/255 & \\begin{tabular}{c}sign \\\\ function \\end{tabular} & 140 & \n $\\begin{array}{l@{~}l}\n 8\/255 & (k < 46) \\\\\n 0.8\/255 & (46 \\leq k < 92) \\\\\n 0.08\/255 & (92 \\leq k)\n \\end{array}$ \\\\ \n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\end{table*}\nFor ODI, we increase the number of ODI steps $N_{\\text{ODI}}$ to obtain more diversified inputs than ODI with $N_{\\text{ODI}}=2$. In addition, we make the step size $\\eta_{\\text{ODI}}$ smaller than $\\epsilon$ on MNIST, because the $\\epsilon$-ball with $\\epsilon=0.3$ is large and $\\eta_{\\text{ODI}}=0.3$ is not suitable for seeking diversity within such a large $\\epsilon$-ball. \nIn summary, we set $(N_{\\text{ODI}},\\eta_{\\text{ODI}})=(50,0.05), (10, 8\/255)$ for the MNIST model and the CIFAR-10 model, respectively.\n\n\nWe tune the hyperparameters of PGD based on Gowal et al.~\\citep{MT19}. \nWhile several studies used the sign function to update images in the PGD attack, some studies \\citep{SPSA18,MT19} reported that updates by the Adam optimizer~\\citep{Adam15} brought better results than the sign function.
Following the previous studies~\\citep{SPSA18,MT19}, we consider the sign function as an optimizer and treat the choice of optimizer as a hyperparameter. \nWe use Adam for the PGD attack on the MNIST model and the sign function on the CIFAR-10 model. \n\nWe adopt a scheduled step size instead of a fixed one. \nBecause we empirically found that starting from a large step size brings better results, we set the initial step size $\\eta_0$ to $\\eta_0=\\epsilon$ on CIFAR-10. We update the step size at $k=0.5N, 0.75N$ on MNIST and $k=N\/3, 2N\/3$ on CIFAR-10. \nWhen we use Adam, the step size is treated as the learning rate.\nFinally, we set the number of PGD steps $N$ such that $N_{\\text{ODI}}+N=1000$ on MNIST and $150$ on CIFAR-10.\n\n\\subsection{Setting for training on ImageNet in Section~\\ref{sec_black_limited}}\n\\label{appendix:ap_parameter_blackOOD}\nWe describe the training setting for surrogate models on ImageNet in the experiment of Section~\\ref{sec_black_limited}.\nWe use the training implementation provided in PyTorch with default hyperparameters.\nNamely, the number of training epochs is 90 and the learning rate is changed depending on the epoch: 0.1 until epoch 30, 0.01 until epoch 60, and 0.001 until epoch 90. The batch size is 256 and the weight decay is 0.0001.\n\n\n\n\n\\section{Additional results and experiments for ODI with white-box attacks}\n\\label{appendix:ap_white}\n\n\\subsection{Diversity offered by ODI}\n\\label{appendix:ap_white_diversity}\n\\label{sec_append_white_diversity}\nWe empirically demonstrate that ODI can find a more diverse set of starting points than random uniform initialization, as pictorially shown in the left figures of Figure~\\ref{figure1} of the main paper. \n\nAs an example of target models, we train a robust classification model using adversarial training~\\citep{madry17} on CIFAR-10.
We adopted popular hyperparameters for adversarial training under the $\\ell_\\infty$ PGD attack on CIFAR-10: perturbation size $\\epsilon = 8\/255$, step size $\\eta=2\/255$, and number of steps $N=10$. The number of training epochs is 100 and the learning rate is changed depending on the epoch: 0.1 until epoch 75, 0.01 until epoch 90, and 0.001 until epoch 100. The batch size is 128 and the weight decay is 0.0002.\n\nOn the target model, we quantitatively evaluate the diversity of starting points produced by each initialization in terms of pairwise distances of the output values $\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}})$. Each initialization is bounded within the $\\ell_\\infty$ $\\epsilon$-ball with $\\epsilon=8\/255$.\nWe pick 100 images from CIFAR-10 and run each initialization 10 times to calculate the mean pairwise distances among outputs for different starting points. As a result, the mean pairwise distance obtained from ODI is 6.41, which is about 15 times larger than that from uniform initialization (0.38). This corroborates our intuition that starting points obtained by ODI are more diverse than uniform initialization. We note that PGD does not generate diverse samples. When we use PGD with 2 steps as an initialization, the mean pairwise distance is only 0.43. \n\nWe also visualize the diversity offered by ODI. \nFirst, we focus on the loss histograms of starting points produced by ODI and na\\\"{i}ve uniform initialization. We pick an image from the CIFAR-10 test dataset and run each initialization 100 times. Then, we calculate loss values for the starting points to visualize their diversity in the output space. The left panel of Figure~\\ref{fig_div_visualize} is the histogram of loss values for each initialization. We can easily observe that images from na\\\"{i}ve initialization concentrate in terms of loss values (around $-1.0$), whereas images from ODI-2 (ODI with 2 steps) are much more diverse in terms of loss values. We also observe that images from PGD-2 take similarly concentrated loss values.
\nBy starting attacks from these initial inputs, we obtain the histogram of loss values in the center panel of Figure~\\ref{fig_div_visualize}. We can observe that ODI-PGD generates more diverse results than PGD with na\\\"{i}ve initialization (PGD-20).\n\nIn addition, we apply \nt-SNE~\\citep{tSNE} to the output logits of the starting points from each initialization.\nWe visualize the embedding produced by t-SNE in the right panel of Figure~\\ref{fig_div_visualize}.\nAs expected, starting points produced by ODI are more diversified than those by na\\\"{i}ve initialization.\n\n\n\n\\begin{figure*}[htbp]\n\\centering\n \\begin{tabular}{ccc}\n\n \\begin{minipage}{0.35\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/neurips_losshist1_camera.png}\n \\end{center}\n \\end{minipage}\n \\begin{minipage}{0.35\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/neurips_losshist2_camera.png}\n \\end{center}\n \\end{minipage}\n \\hspace{0.3cm}\n \\begin{minipage}{0.23\\hsize}\n \\begin{center}\n \\includegraphics[width=0.9\\textwidth]{image\/ODI_loss_tsne.png}\n \\end{center}\n \\end{minipage}\n \\end{tabular}\n \\caption{(Left): Histogram of loss values evaluated at starting points by ODI, na\\\"{i}ve uniform initialization and PGD. PGD-2 means 2-step PGD with na\\\"{i}ve initialization. The loss function is the margin loss. (Center): Histogram of loss values after attacks with 20 total steps. ODI-PGD-18 means 18-step PGD with 2-step ODI. \n (Right): Embedding of the starting points sampled by each initialization, produced by t-SNE.\n }\n \\label{fig_div_visualize}\n\\end{figure*}\n\n\n\n\\subsection{Analysis of the sensitivity to hyperparameters of ODI}\n\\label{appendix:ap_white_sensitivity}\nFor ODI, we mainly set the number of ODI steps $N_{\\text{ODI}}=2$ and step size $\\eta_{\\text{ODI}}=\\epsilon$. \nTo validate the setting, we confirm that ODI-PGD is not sensitive to these hyperparameters.
\nWe attack the adversarially trained model on CIFAR-10 introduced in Section~\\ref{sec_append_white_diversity}, and adopt the same attack setup for ODI-PGD on CIFAR-10 as in Section~\\ref{sec_white_various}.\nWe test $N_{\\text{ODI}}=2,4,8,16$ and $\\eta_{\\text{ODI}} = \\epsilon, \\epsilon\/2, \\epsilon\/4, \\epsilon\/8$, but \n exclude patterns with $N_{\\text{ODI}} \\cdot \\eta_{\\text{ODI}} < 2 \\epsilon$ to make $N_{\\text{ODI}} \\cdot \\eta_{\\text{ODI}}$ larger than or equal to the diameter of the $\\epsilon$-ball. \nWe calculate the mean accuracy over five repetitions of the attack, each with 20 restarts. \n\n\n\\begin{table}[htb]\n\\caption{The sensitivity to the number of ODI steps $N_{\\text{ODI}}$ and step size $\\eta_{\\text{ODI}}$. \nWe repeat each experiment 5 times to calculate statistics.\n}\n\\begin{center}\n\\begin{tabular}{cc|ccc}\n\\toprule\n$N_{\\text{ODI}}$ & $\\eta_{\\text{ODI}}$ & mean & max & min \\\\ \\midrule\n2 &$\\epsilon$ & 44.46\\% &44.50\\% & 44.45\\% \\\\\n4 &$\\epsilon \/2$ & 44.47\\% &44.50\\% & 44.42\\% \\\\\n4 &$\\epsilon $ & 44.42\\% &44.48\\% & 44.40\\% \\\\\n8 &$\\epsilon \/4$ & 44.47\\% &44.52\\% & 44.44\\% \\\\\n8 &$\\epsilon \/2$ & 44.42\\% &44.48\\% & 44.36\\% \\\\\n8 &$\\epsilon $ & 44.46\\% &44.49\\% & 44.42\\% \\\\\n16 &$\\epsilon \/8$ & 44.46\\% &44.50\\% & 44.43\\% \\\\\n16 &$\\epsilon \/4$ & 44.46\\% &44.50\\% & 44.40\\% \\\\\n16 &$\\epsilon \/2$ & 44.45\\% &44.48\\% & 44.43\\% \\\\\n16 &$\\epsilon$ & 44.44\\% &44.47\\% & 44.41\\% \\\\\n \\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_sensi}\n\\end{table}\n\nTable~\\ref{tab_sensi} shows the mean accuracy under ODI-PGD for different hyperparameters.
\nThe maximum difference in the mean accuracy among different hyperparameters of ODI is only 0.05\\%.\nAlthough large $N_{\\text{ODI}}$ and $\\eta_{\\text{ODI}}$ can be useful for finding more diversified starting points, the performance of ODI is not very sensitive to these hyperparameters.\nThus, we restrict $N_{\\text{ODI}}$ to a small value to keep the comparison in terms of computation time as fair as possible. \nTable~\\ref{tab_sensi} also shows that the difference between the maximum and minimum accuracy is about 0.1\\% for all hyperparameter pairs. This result supports the stability of ODI.\n\n\\subsection{Accuracy curve for adversarial attacks with ODI}\n\\label{appendix:ap_white_accuracycurve}\nIn Section~\\ref{sec_white}, we experimentally showed that the diversity offered by ODI improves white-box $\\ell_\\infty$ and $\\ell_2$ attacks. \nHere we describe the accuracy curve as a function of the number of restarts for attacks with ODI and na\\\"{i}ve initialization.\n\nFigure~\\ref{fig_Linf} shows how the attack performance improves as the number of restarts increases in the experiment of Section~\\ref{sec_white_various}. \nAttacks with ODI outperform those with na\\\"{i}ve initialization as the number of restarts increases in all settings. These curves further corroborate that restarts strengthen attack algorithms, and that ODI restarts are more effective than na\\\"{i}ve ones. We note that the first restart of ODI is sometimes worse than na\\\"{i}ve initialization. This is because diversity can lead to bad local optima, i.e. random directions of ODI are not always useful.
As the number of restarts increases, at least one direction proves useful and the accuracy drops.\n\n\\begin{figure*}[htbp]\n\\centering\n \\begin{tabular}{cccc}\n \\rotatebox[origin=c]{90}{PGD}&\n \\begin{minipage}{0.28\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/white_PGD_mnist.png}\n \\end{center}\n \\end{minipage}\n\n &\n \\begin{minipage}{0.28\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/white_PGD_cifar.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.28\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/white_PGD_image.png}\n \\end{center}\n \\end{minipage}\n \\\\\n \\rotatebox[origin=c]{90}{C\\&W}&\n \\begin{minipage}{0.28\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/white_cw_mnist.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.28\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/white_cw_cifar.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.28\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/white_cw_image.png}\n \\end{center}\n \\end{minipage}\n \\\\ \n & \\begin{tabular}{c} MNIST \n \\end{tabular}&\n \\begin{tabular}{c} CIFAR-10 \n \\end{tabular}&\n \\begin{tabular}{c} ImageNet \n \\end{tabular}\n \\end{tabular}\n \\caption{Attack performance versus the number of restarts for attacks with ODI. (Top): model accuracy for PGD. (Bottom): average minimum $\\ell_2$ perturbation for C\\&W. %\n }%\n \\label{fig_Linf}\n\\end{figure*}\n\nNext, we describe the accuracy curve for the comparison between state-of-the-art attacks and ODI-PGD in Section~\\ref{sec_sota}. \nTo emphasize the stability of the improvement, \nwe evaluate the confidence intervals of our results against MadryLab's MNIST and CIFAR-10 models. We run the tuned ODI-PGD attack with 3000 restarts on MNIST and \n100 restarts on CIFAR-10.
\nThen, we sample 1000 runs on MNIST and 20 runs on CIFAR-10 from the results to evaluate the model accuracy, \nand re-sample 100 times to calculate statistics. \nFigure~\\ref{fig_SOTAmadry} shows the accuracy curve under tuned ODI-PGD. \nWe observe that the confidence intervals become tighter as the number of restarts grows, and tuned ODI-PGD consistently outperforms the state-of-the-art attack after 1000 restarts on MNIST and 20 restarts on CIFAR-10.\n\n\n\\begin{figure}[ht]\n\\centering\n\n \\begin{tabular}{cc}\n \\begin{minipage}{0.45\\hsize}\n \\begin{center}\n \\includegraphics[width=.9\\textwidth]{image\/white_tuned_mnist.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.45\\hsize}\n \\begin{center}\n \\includegraphics[width=.9\\textwidth]{image\/white_tuned_cifar.png}\n \\end{center}\n \\end{minipage}\n \\\\\n MadryLab (MNIST) & MadryLab (CIFAR-10) \n \\end{tabular}\n \\caption{Model accuracy under tuned ODI-PGD and the current state-of-the-art attacks~\\citep{MT19}. \n The solid lines represent values from Table~\\ref{tab_sota} and \n the error bars show 95\\% confidence intervals. }\n \\label{fig_SOTAmadry}\n\\end{figure}\n\n\n\\subsection{Tighter estimation of robustness for various models}\n\\label{appendix:ap_white_recent}\nOne important application of powerful adversarial attacks is to evaluate and compare different defense methods. In many previous works on defending against adversarial examples, the PGD attack with na\\\"{i}ve uniform initialization (called na\\\"{i}ve-PGD) is a prevailing benchmark, and its attack success rate is commonly regarded as a tight estimate of (worst-case) model robustness.
In this section, we conduct a case study on six recently published defense methods~\\citep{UAT19,carmon19,scatter19,metric19,free19,YOPO19} to show that ODI-PGD outperforms na\\\"{i}ve-PGD in terms of upper bounding the worst-case model accuracy under all possible attacks.\n\n\\paragraph{Setup}\nWe use pre-trained models from four of those studies,\nand train the other two models~\\citep{free19,YOPO19}\n using the settings and architectures described in their original papers. We run attacks with $\\epsilon = 8\/255$ on all test images. Other attack settings are the same as in the CIFAR-10 experiment in Section~\\ref{sec_white_various}. Apart from comparing ODI-PGD and na\\\"{i}ve-PGD, we also evaluate the PGD attack without restarts (denoted $\\text{PGD}_{1}$), as it is adopted in several existing studies \\citep{UAT19,carmon19,scatter19,YOPO19}.\n \n\n\\begin{table*}[ht]\n\\caption{Accuracy of models after performing ODI-PGD and na\\\"{i}ve-PGD attacks against recently proposed defense models.}\n\\begin{center}\n\\begin{tabular}{c|ccc|cc}\n\\toprule\nmodel& (1) $\\text{PGD}_{1}$ & (2) $\\text{na\\\"{i}ve-PGD}$ \n& (3) $\\text{ODI-PGD}$ \n&(1)$-$(2) &(2)$-$(3)\n\\\\ \\midrule\nUAT~\\citep{UAT19} & 62.63\\% & 61.93\\% & {\\bf 57.43\\%} & 0.70\\% & 4.50\\% \\\\\nRST~\\citep{carmon19} & 61.17\\% & 60.77\\% & {\\bf 59.93\\%} & 0.40\\% & 0.84\\% \\\\\nFeature-scatter~\\citep{scatter19} & 59.69\\% & 56.49\\% & {\\bf 39.52\\%} & 3.20\\% & 16.97\\% \\\\\nMetric learning~\\citep{metric19} & 50.57\\% & 49.91\\% & {\\bf 47.64\\%} & 0.66\\% & 2.27\\% \\\\\nFree~\\citep{free19} & 47.19\\% & 46.39\\% &{\\bf 44.20\\%} & 0.80\\% & 2.19\\% \\\\\nYOPO~\\citep{YOPO19} & 47.70\\% & 47.07\\% & {\\bf 45.09\\%} & 0.63\\% & 1.98\\% \\\\\n \\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_recent}\n\\end{table*}\n\n\n\n\\paragraph{Results}\nAs shown in Table~\\ref{tab_recent}, ODI-PGD uniformly outperforms na\\\"{i}ve-PGD against all six recently-proposed defense methods, lowering the
estimated model accuracy by 1--17\\%. In other words, ODI-PGD provides uniformly tighter upper bounds on the worst-case model accuracy than na\\\"{i}ve-PGD. \nAdditionally, the accuracy ranking of the defense methods under ODI-PGD differs from that under na\\\"{i}ve-PGD and $\\text{PGD}_{1}$.\nThese results indicate that ODI-PGD might be {a better benchmark for comparing and evaluating different defense methods} than na\\\"{i}ve-PGD and $\\text{PGD}_{1}$.\n\n\n\\section{Additional results and experiments for ODS with black-box attacks}\n\\label{appendix:ap_black}\n\n\n\n\\subsection{Diversified samples by ODS}\n\\label{appendix:ap_black_diversity}\nWe empirically show that ODS can yield diversified changes in the output space of the target model, as shown in the right figures of Figure~\\ref{figure1} of the main paper. \nSpecifically, we evaluate the mean pairwise distance among outputs for different perturbations by ODS and \ncompare it with the distance among outputs for random Gaussian sampling. \n\nWe use the pre-trained ResNet50~\\citep{resnet16} and VGG19~\\citep{VGG19} models \nas the target and surrogate models, respectively. \nWe pick 100 images from the ImageNet validation set and sample perturbations 10 times with each sampling method. For comparison, we normalize the perturbations to the same size in the input space. \nThen, the pairwise distance on the target model obtained by ODS is 0.79, which is more than 10 times larger than the pairwise distance for random Gaussian sampling (0.07). This indicates that the diversity provided by ODS is transferable. \n\n\\subsection{Success rate curve in Section~\\ref{sec_black_score} and Section~\\ref{sec_black_decision} }\n\\label{appendix:ap_black_successcurve}\nIn Section~\\ref{sec_black_score}, we demonstrated that SimBA-ODS outperformed state-of-the-art attacks in terms of query efficiency. As an additional result, we give the success rate curves of the score-based attacks with respect to the number of queries in those experiments.
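Such a curve can be computed from the per-image query counts of the successful runs; a minimal sketch (function and variable names are illustrative):

```python
def success_rate_curve(query_counts, budgets, num_images):
    """Success rate as a function of query budget: `query_counts` lists the
    queries used by each *successful* attack (one entry per solved image);
    images that were never solved simply do not appear in the list."""
    return [sum(1 for q in query_counts if q <= b) / num_images for b in budgets]

# 4 attacked images, 3 of which succeeded (at 120, 800, and 4000 queries)
curve = success_rate_curve([120, 800, 4000], budgets=[100, 1000, 10000], num_images=4)
```

The curve is non-decreasing by construction, and its final value equals the overall success rate within the full query budget.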
Figure~\\ref{fig_append_simba} shows how the success rate changes with the number of queries for SimBA-ODS and SimBA-DCT for the experiment of Table~\\ref{tab_black_score}. SimBA-ODS is especially query-efficient at small query budgets. In Figure~\\ref{fig_append_square}, we also show the success rate curves for the experiment of Table~\\ref{tab_black_square}. ODS-RGF outperforms the other methods in the $\\ell_2$ norm.\n\n\\begin{figure*}[htbp]\n\\centering\n \\begin{tabular}{cc}\n \\begin{minipage}{0.4\\hsize}\n \\begin{center}\n \\includegraphics[width=0.9\\textwidth]{image\/simba_untarget.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.4\\hsize}\n \\begin{center}\n \\includegraphics[width=0.9\\textwidth]{image\/simba_target.png}\n \\end{center}\n \\end{minipage}\\\\\n untargeted & targeted\n \\end{tabular}\n \\caption{Relationship between success rate and number of queries for score-based SimBA-ODS and SimBA-DCT.}\n \\label{fig_append_simba}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\centering\n \\begin{tabular}{cccc}\n \\begin{minipage}{0.22\\hsize}\n \\begin{center}\n \\includegraphics[width=1\\textwidth]{image\/curve_score_untargeted.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.22\\hsize}\n \\begin{center}\n \\includegraphics[width=1\\textwidth]{image\/curve_score_targeted.png}\n \\end{center}\n \\end{minipage} &\n \\begin{minipage}{0.22\\hsize}\n \\begin{center}\n \\includegraphics[width=1\\textwidth]{image\/curve_score_linf_untargeted.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.22\\hsize}\n \\begin{center}\n \\includegraphics[width=1\\textwidth]{image\/curve_score_linf_targeted.png}\n \\end{center}\n \\end{minipage}\\\\\n untargeted ($\\ell_2$) & targeted ($\\ell_2$) & untargeted ($\\ell_\\infty$) & targeted ($\\ell_\\infty$)\n \\end{tabular}\n \\caption{Relationship between success rate and number of queries for SimBA-ODS, ODS-RGF, and Square Attack. 
Each attack is evaluated with norm bound $\\epsilon=5 (\\ell_2), 0.05 (\\ell_\\infty)$.}\n \\label{fig_append_square}\n\\end{figure*}\n\nIn Section~\\ref{sec_black_decision}, we demonstrated that Boundary-ODS outperformed state-of-the-art attacks in terms of median $\\ell_2$ perturbation. Here, we depict the relationship between the success rate and perturbation size (i.e. the frequency distribution of the perturbations) to show the consistency of the improvement. \nFigure~\\ref{fig_append_decision_curve} describes the cumulative frequency distribution of $\\ell_2$ perturbations for each attack at 10000 queries.\nBoundary-ODS consistently decreases $\\ell_2$ perturbations compared to other attacks in both untargeted and targeted settings. \n\n\n\n\n\\begin{figure*}[htbp]\n\\centering\n \\begin{tabular}{cc}\n \\begin{minipage}{0.4\\hsize}\n \\begin{center}\n \\includegraphics[width=0.9\\textwidth]{image\/decision_l2hist_untargeted.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.4\\hsize}\n \\begin{center}\n \\includegraphics[width=0.9\\textwidth]{image\/decision_l2hist_targeted.png}\n \\end{center}\n \\end{minipage}\\\\\n untargeted & targeted\n \\end{tabular}\n \\caption{Cumulative frequency distribution of $\\ell_2$ perturbations at 10000 queries for decision-based attacks.}\n \\label{fig_append_decision_curve}\n\\end{figure*}\n\n\n\n\\subsection{Comparison of ODS with TREMBA}\n\\label{appendix:ap_black_TREMBA}\nWe run experiments to compare ODS with TREMBA, which is a state-of-the-art attack with surrogate models, as we mentioned in Section~\\ref{sec_black_score_RGF}. \nTREMBA leverages surrogate models to learn a\nlow-dimensional embedding \nso as to obtain initial adversarial examples using a transfer-based attack and then update them using a score-based attack. \nAlthough TREMBA uses random sampling, ODS does not work well with TREMBA because random sampling of TREMBA is performed in the embedding space. 
In addition, it is difficult to directly compare attacks with ODS (e.g., ODS-RGF) and TREMBA because we do not discuss the combination of ODS with transfer-based attacks in this paper. \n\nHowever, we can start attacks with ODS (e.g., ODS-RGF) from images generated by any transfer-based attack and compare the resulting attack with TREMBA. \nWe generate starting points by SI-NI-DIM~\\citep{NesterovTransfer2020} (Scale-Invariant Nesterov Iterative FGSM integrated with the diverse input method), which is a state-of-the-art transfer-based attack, and run ODS-RGF from these starting points. \n\nWe adopt the same experimental setup as TREMBA~\\citep{Huang20TREMBA}: we evaluate attacks against four target models (VGG19, ResNet34, DenseNet121, MobileNetV2) on 1000 images from ImageNet and use four surrogate models (VGG16, ResNet18, SqueezeNet~\\citep{squeezenet}, and GoogLeNet~\\citep{googlenet}).\nWe use the same hyperparameters as in the original paper~\\citep{Huang20TREMBA} for TREMBA. For a fair comparison, we use the same sample size (20) and the same surrogate models as TREMBA for ODS-RGF. We also set the step size of ODS-RGF to 0.001. \nAs for SI-NI-DIM, we set the hyperparameters following the paper~\\citep{NesterovTransfer2020}: the maximum number of iterations is 20, the decay factor is 1, and the number of scale copies is 5.\nWe report the results in Table~\\ref{fig_black_TREMBA}. We observe that \nODS-RGF with SI-NI-DIM is comparable to TREMBA. \n\nWe note that ODS is more flexible than TREMBA in some aspects. First, TREMBA is specific to the $\\ell_\\infty$ norm, whereas ODS can be combined with attacks in at least the $\\ell_\\infty$ and $\\ell_2$ norms. \nIn addition, TREMBA needs to train a generator per target class in targeted settings, whereas ODS does not need additional training. \n\n\n\\begin{table}[htb]\n\\caption{Comparison of ODS-RGF with TREMBA against four target models. 
The first two rows and the bottom two rows report results for untargeted (U) attacks and targeted (T) attacks, respectively. The targeted class for targeted attacks is class 0. }\n\\begin{center}\n\\setlength{\\tabcolsep}{2pt}\n\\begin{tabular}{cc|cc|cc|cc|cc}\n\\toprule\n&& \\multicolumn{2}{c|}{VGG19}& \\multicolumn{2}{c|}{ResNet34}& \\multicolumn{2}{c|}{DenseNet121}& \\multicolumn{2}{c}{MobileNetV2}\\\\\n&attack& success & query& success & query& success & query& success & query\\\\ \\hline\n\\multirow{2}{*}{U}&TREMBA~\\citep{Huang20TREMBA} & {\\bf 100.0\\%} & 34\n& {\\bf 100.0\\%} & 161 & {\\bf 100.0\\%} & 157 & {\\bf 100.0\\%}& 63\\\\\n&SI-NI-DIM~\\citep{NesterovTransfer2020} + ODS-RGF& {\\bf 100.0\\%} & {\\bf 18}\n& 99.9\\% & {\\bf 47} & 99.9\\% & {\\bf 50} & {\\bf 100.0\\%}& {\\bf 29} \\\\ \\hline\n\\multirow{2}{*}{T}&TREMBA~\\citep{Huang20TREMBA} & {98.6\\%} & 975\n& {96.7\\%} & {\\bf 1421} & {\\bf 98.5\\%} & {\\bf 1151} & {\\bf 99.0\\%}& {\\bf 1163}\\\\\n&SI-NI-DIM~\\citep{NesterovTransfer2020} + ODS-RGF& {\\bf 99.4\\%} & {\\bf 634}\n& {\\bf 98.7\\%} & {1578} & {98.2\\%} & 1550 & {98.3\\%}& {2006} \\\\ \n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{fig_black_TREMBA}\n\\end{table}\n\n\n\\subsection{Performance of ODS against different target models}\n\\label{appendix:ap_black_targetchange}\nIn this paper, we used the pre-trained ResNet50 model as the target model for all experiments in Section~\\ref{sec_black_experiment}. Here we use the pre-trained VGG19 model as the target model and run experiments to show that the efficiency of ODS does not depend on the target model.\nAs surrogate models, we replace VGG19 with ResNet50, i.e. we use four pre-trained models (ResNet50, ResNet34, DenseNet121, MobileNetV2).\n\nWe run experiments for SimBA-ODS in Section~\\ref{sec_black_score} and Boundary-ODS in Section~\\ref{sec_black_decision}.
\nAll settings except the target model and surrogate models are the same as the previous experiments.\nIn Table~\\ref{tab_black_score_VGG} and \\ref{tab_black_decision_VGG}, ODS significantly improves attacks against VGG19 model for both SimBA and Boundary Attack. This indicates that the efficiency of ODS does not depend on target models. \n\n\\begin{table}[htb]\n\\caption{Query counts and $\\ell_2$ perturbations for score-based Simple Black-box Attacks (SimBA) against pre-trained VGG19 model on ImageNet.}\n\\begin{center}\n\\setlength{\\tabcolsep}{4pt}\n\\begin{tabular}{cc|ccc|ccc}\n\\toprule\n&& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \\cline{3-8}\n& num. of &success & average & median $\\ell_2$& success & average & median $\\ell_2$ \\\\ \\\nattack& surrogates& rate & query & distance & rate & query & distance \\\\ \\midrule\nSimBA-DCT~\\citep{Guo19} &0& {\\bf 100.0\\%} & 619 & 2.85 & {\\bf100.0\\%} & 4091 &6.81 \\\\\nSimBA-ODS &4& {\\bf 100.0\\%} & {\\bf 176} & {\\bf 1.35} & 99.7\\% & {\\bf 1779 } & {\\bf 3.31} \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_score_VGG}\n\\end{table}\n\n\n\\begin{table}[htb]\n\\caption{Median $\\ell_2$ perturbations for decision-based Boundary Attacks against pre-trained VGG19 model on ImageNet.}\n\\begin{center}\n\\begin{tabular}{cc|ccc|ccc}\n\\toprule\n&& \\multicolumn{6}{c}{number of queries} \\\\\n& num. 
of & \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ %\nattack&surrogates & 1000 & 5000 & 10000 & 1000 & 5000 & 10000 \\\\ \\midrule\nBoundary~\\citep{Brendel18} &0& 45.62&11.79&4.19 & 75.10&41.63&27.34 \\\\\nBoundary-ODS&4 & {\\bf 6.03} & {\\bf 0.69} & {\\bf 0.43} & {\\bf 24.11} & {\\bf 5.44} & {\\bf 2.97}\\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_decision_VGG}\n\\end{table}\n\n\\subsection{Effect of the choice of surrogate models}\n\\label{appendix:ap_black_surrogatechange}\nIn Sections~\\ref{sec_black_score} and \\ref{sec_black_decision}, we mainly used four pre-trained models as surrogate models. To investigate the effect of the choice of surrogate models, we run attacks with seven different sets of surrogate models.\nAll settings except the surrogate models are the same as in the previous experiments.\n\nTables~\\ref{tab_black_score_surrogate} and \\ref{tab_black_decision_surrogate} show results for SimBA-ODS and Boundary-ODS, respectively. First, the first four rows in both tables are results for a single surrogate model. The degree of improvement depends on the model: ResNet34 gives the largest improvement and VGG19 the smallest. Next, the fifth and sixth rows show results for sets of two surrogate models. By combining surrogate models, the query efficiency improves, especially for targeted attacks. This suggests that the diversity provided by multiple surrogate models generally strengthens attacks. Finally, the seventh row shows results for four surrogate models, which are not always better than those for the combination of two models (ResNet34 and DenseNet121). When the performances of the individual surrogate models differ widely, combining them could be harmful.\n\n\\begin{table}[htb]\n\\caption{Query counts and $\\ell_2$ perturbations for SimBA-ODS attacks with various sets of surrogate models.
In the column of surrogate models, R:ResNet34, D:DenseNet121, V:VGG19, M:MobileNetV2.}\n\\begin{center}\n\\begin{tabular}{cc|ccc|ccc}\n\\toprule\n&& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \\cline{3-8}\nsurrogate& & success & average & median $\\ell_2$& success & average & median $\\ell_2$ \\\\ \\\nmodels&num.& rate & query & distance & rate & query & distance \\\\\n\\midrule\nR&1& {\\bf 100.0\\%} & {274} & 1.35 & 95.3\\%& 5115 & 3.50 \\\\\nD&1& {\\bf 100.0\\%} &342&1.38 &96.7\\% & 5282 & 3.51\\\\\nV&1& {\\bf 100.0\\%} &660 &1.78 &88.0\\% & 9769 & 4.80 \\\\\nM&1& {\\bf 100.0\\%} &475&1.70 &95.3\\%& 6539 & 4.53\\\\\nR,D&2& {\\bf 100.0\\%} &{\\bf 223}&{\\bf 1.31} & 98.0\\%& {\\bf 3381} & {\\bf 3.39}\\\\\nV,M&2& {\\bf 100.0\\%} &374&1.60 & 96.3\\%& 4696 & 4.27\\\\\nR,V,D,M &4& {\\bf 100.0\\%}& {241} & 1.40 & {\\bf 98.3\\%} & {3502} & {3.55}\\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_score_surrogate}\n\\end{table}\n\n\n\\begin{table}[htb]\n\\caption{Median $\\ell_2$ perturbations for Boundary-ODS attacks with various sets of surrogate models. In the column of surrogate models, R:ResNet34, D:DenseNet121, V:VGG19, M:MobileNetV2.}\n\\begin{center}\n\\begin{tabular}{cc|ccc|ccc}\n\\toprule\n& & \\multicolumn{6}{c}{number of queries} \\\\ \nsurrogate& & \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \nmodels & num. 
& 1000 & 5000 & 10000 & 1000 & 5000 & 10000 \\\\ \\midrule\nR&1 & 9.90&1.41&0.79 & 31.32&11.49&7.89\\\\\nD&1 & 10.12&1.39&0.76 &32.63&11.30&7.44\\\\\nV&1 & 22.68&3.47&1.52 & 49.18&24.26&17.75 \\\\\nM&1 & 20.67&2.34&1.10 & 44.90&18.62&12.01\\\\\nR,D&2 & {\\bf 7.53}&1.07&0.61 & {\\bf 26.00} & 8.08& 6.22\\\\\nV,M&2 & 17.60& 1.70& 0.92&39.63&14.97&9.21 \\\\\nR,V,D,M &4 & 7.57 & {\\bf 0.98} & {\\bf 0.57} & 27.24 & {\\bf 6.84} & {\\bf 3.76}\\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_decision_surrogate}\n\\end{table}\n\nIn Section~\\ref{sec_black_score_RGF}, we compared ODS-RGF with P-RGF using only the ResNet34 surrogate model. To show that the effectiveness of ODS-RGF is robust to the choice of surrogate models, we evaluate ODS-RGF with different surrogate models. Table~\\ref{fig_black_RGF_VGG} shows the query-efficiency of ODS-RGF and P-RGF with the VGG19 surrogate model. We can observe that ODS-RGF outperforms P-RGF in all settings, \nand the results are consistent with the experiment in Section~\\ref{sec_black_score_RGF}.\n\n\\begin{table}[htb]\n\\caption{Comparison between ODS-RGF and P-RGF with the VGG19 surrogate model. Settings in the comparison are the same as in Figure~\\ref{fig_black_RGF}. }\n\\begin{center}\n\\setlength{\\tabcolsep}{3.7pt}\n\\begin{tabular}{ccc|ccc|ccc}\n\\toprule\n& & &\\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \\cline{4-9}\n & &num.
of & success & average & median $\\ell_2$ & success & average & median $\\ell_2$ \\\\ \nnorm& attack &surrogates & rate & queries & perturbation & rate & queries & perturbation \\\\ \n\\midrule\n\\multirow{3}{*}{$\\ell_2$}&RGF & 0& \\textbf{100.0\\%} & 633 & 3.07 &\\textbf{99.3\\%} &3141 & 8.23\\\\\n&P-RGF~\\citep{Cheng19prior} &1& \\textbf{100.0\\%} &467 & 3.03\n&{97.0\\%} &3130 & 8.18\\\\\n&ODS-RGF& 1& \\textbf{100.0\\%} &\\textbf{294}& \\textbf{2.24}\n&{98.0\\%} &\\textbf{2274} & \\textbf{6.60}\\\\ \\hline\n\\multirow{3}{*}{$\\ell_\\infty$} &RGF &0& {97.0\\%} & 520 & - \n&{25.0\\%} & 2971 & - \\\\\n&P-RGF~\\citep{Cheng19prior} &1& {98.7\\%} &337& - & {29.0\\%} & 2990 & -\\\\\n&ODS-RGF& 1& \\textbf{99.7\\%} &\\textbf{256}& - & \\textbf{45.7\\%} & \\textbf{2116} & - \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{fig_black_RGF_VGG}\n\\end{table}\n\n\\subsection{Effect of the number of surrogate models for the experiment in Section~\\ref{sec_black_limited}}\n\\label{appendix:ap_black_OOD_surrogatenum}\nIn Section~\\ref{sec_black_limited}, we showed that surrogate models trained on a limited out-of-distribution dataset are still useful for ODS. \nIn the experiment, we used five surrogate models with the same ResNet18 architecture. \nHere, we examine the importance of the number of surrogate models through experiments with different numbers of models. \nTable~\\ref{tab_black_limited_append} shows the results for Boundary-ODS with different numbers of surrogate models. As the number of models increases, the query efficiency consistently improves.\n\n\\begin{table}[htb]\n\\caption{Median $\\ell_2$ perturbations for Boundary-ODS attacks with different numbers of surrogate models against out-of-distribution images on ImageNet.}\n\\begin{center}\n\\begin{tabular}{c|ccc|ccc}\n\\toprule\n& \\multicolumn{6}{c}{number of queries} \\\\ \nnum.
of& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \nsurrogates & 1000 & 5000 & 10000 & 1000 & 5000 & 10000 \\\\ \\midrule\n1 & 19.45 & 2.90 & 1.66 & 47.86& 25.30 & 20.46\\\\ \n2 & 15.45 & 2.42 & 1.35 & 43.45 & 19.30 & 13.78\\\\ \n3 & 13.75& 1.96&1.14& {\\bf 41.63}& 16.91 & 11.14\\\\ \n4 & 14.23& 1.86 & 1.21 & 41.65& 14.86 & 9.64\\\\ \n5 & {\\bf 11.27}& {\\bf 1.63}& {\\bf 0.98}& 41.67& {\\bf 13.72} & {\\bf 8.39}\\\\ \n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_limited_append}\n\\end{table}\n\n\n\\subsection{Score-based attacks with ODS against out-of-distribution images}\n\\label{appendix:ap_black_OOD_scorebased}\nIn Section~\\ref{sec_black_limited}, we demonstrated that the decision-based Boundary-ODS attack works well even if we only have surrogate models trained with a limited out-of-distribution dataset.\nHere, we evaluate score-based SimBA-ODS with these surrogate models. Except for the surrogate models, we adopt the same settings as in Section~\\ref{sec_black_score}.\n\nIn Table~\\ref{tab_black_OOD_SimBA}, SimBA-ODS with the out-of-distribution dataset outperforms SimBA-DCT in untargeted settings. \nIn targeted settings, while SimBA-ODS improves the $\\ell_2$ perturbation, the average queries for SimBA-ODS are comparable to those of SimBA-DCT. \nWe hypothesize that this is because ODS only explores a subspace of the input space. The restriction to this subspace may lead to bad local optima. \nWe can mitigate this local optima problem by temporarily applying random sampling when SimBA-ODS fails to update the target image for many steps in a row.\n\nWe note that decision-based Boundary-ODS with the OOD dataset is effective, as shown in Section~\\ref{sec_black_limited}.
\nWe hypothesize that the difference in effectiveness is because Boundary-ODS does not use the scores of the target model and thus does not get trapped in local optima.\n\n\n\n\\begin{table}[htb]\n\\caption{Query counts and $\\ell_2$ perturbations for SimBA-ODS attacks with surrogate models trained with OOD images on ImageNet.}\n\\begin{center}\n\\setlength{\\tabcolsep}{4pt}\n\\begin{tabular}{c|ccc|ccc}\n\\toprule\n& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \\cline{2-7}\n& success & average & median $\\ell_2$ & success & average & median $\\ell_2$ \\\\ \\\nattack& rate & queries & perturbation & rate & queries & perturbation \\\\ \\midrule\nSimBA-DCT~\\citep{Guo19} & {\\bf 100.0\\%} & 909 & 2.95 & {\\bf 97.0\\%} & 7114 &7.00 \\\\\nSimBA-ODS \n(OOD dataset) & {\\bf 100.0\\%} & {\\bf 491} & {\\bf 1.94} & 94.7\\% & {\\bf 6925} & {\\bf 4.92} \\\\ \\hline\n\\begin{tabular}{c}\nSimBA-ODS (full dataset) \n\\end{tabular} & {100.0\\%} & {242} & {1.40} & {98.3\\%} & {3503} & {3.55} \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_OOD_SimBA}\n\\end{table}\n\n\n\\subsection{ODS against robust defense models}\n\\label{appendix:ap_black_defensemodel}\nIn this paper, we mainly discuss transferability when surrogate models and the target model are trained with similar training schemes. On the other hand, it is known that transferability decreases when models are trained with different training schemes, \ne.g., the target model uses adversarial training and surrogate models use natural training. \nIf all surrogate models are trained with a natural training scheme, \nODS will likewise fail against adversarially trained target models. \nHowever, we can mitigate the problem by simultaneously using surrogates obtained with various training schemes (which are mostly publicly available).
In order to confirm this, we run an experiment to attack a robust target model using SimBA-ODS with both natural and robust surrogate models (a natural model and a robust model). In Table~\\ref{tab_black_advmodel}, the first row shows the attack performance of SimBA-DCT (without surrogate models) and the others show the performance of SimBA-ODS. \nIn the fourth row of Table~\\ref{tab_black_advmodel}, SimBA-ODS with natural and robust surrogate models significantly outperforms SimBA-DCT without surrogate models. This suggests that if the set of surrogates includes one that is similar to the target, ODS still works (even when some other surrogates are ``wrong'').\nWhile the combination of natural and robust surrogate models slightly underperforms the single robust surrogate model in the third row, dynamically selecting surrogate models during the attack could improve the performance, as we mentioned in the conclusion of the paper.\n\n\\begin{table*}[htbp]\n\\caption{Transferability of ODS when training schemes of surrogate models are different from the target model. \nR50 denotes the pretrained ResNet50 model, and R101(adv) and R152(adv) are the adversarially trained ResNeXt101 and ResNet152 denoise models from~\\citep{featureDenoise19}, respectively. All attacks are run in the same setting as in Section~\\ref{sec_black_score}.
\n}\n\\begin{center}\n\\begin{tabular}{cc|ccc}\n\\toprule\ntarget & surrogate & success rate & average queries & median $\\ell_2$ perturbation \\\\ \n \\midrule\nR101(adv) &- & 89.0\\% & 2824& 6.38\\\\\nR101(adv) &R50 & 80.0\\% & 4337& 10.15\\\\\nR101(adv) &R152(adv) & \\textbf{98.0\\%} &\\textbf{1066} & \\textbf{4.93}\\\\ \nR101(adv) &R50, R152(adv) & \\textbf{98.0\\%} &1304 & {5.62}\\\\\n\\bottomrule \n\\end{tabular}\n\\end{center}\n\\label{tab_black_advmodel}\n\\end{table*}\n\n\\section{Relationship and Comparison between ODS and MultiTargeted}\n\\label{appendix:ap_multitargeted}\nIn this section, we show that ODS provides better diversity than the MultiTargeted attack~\\citep{MT19} for initialization and sampling. \n\nMultiTargeted is a variant of white-box PGD attacks, which maximizes $f_t(\\bm{\\mathrm{x}})-f_y(\\bm{\\mathrm{x}})$, where $\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}})$ denotes the logits, $y$ is the original label and $t$ is a target label. The target label is changed at each restart. In other words, MultiTargeted moves a target image in a particular direction in the output space, which can be represented as, e.g., $\\bm{\\mathrm{w}}_{\\text{d}}=(1,0,-1,0)$, where 1 and -1 correspond to the target and original labels, respectively. In this sense, the procedure of MultiTargeted is technically similar to ODS. \n\n\nHowever, there are some key differences between MultiTargeted and ODS. One of the differences is the motivation. MultiTargeted was proposed as a white-box attack and the study only focused on $\\ell_p$-bounded white-box attacks. On the other hand, our study provides broader applications for both white- and black-box attacks. As far as we know, ODS is the first method that exploits output diversity for initialization and sampling.\n\nAnother difference is the necessity of the original label of target images.
\nODS does not require the original class of the target image,\nand thus ODS is applicable to black-box attacks \neven if surrogate models are trained with an out-of-distribution training dataset, as shown in Section~\\ref{sec_black_limited}.\nOn the other hand, \nsince MultiTargeted exploits the original label of target images to calculate the direction of the attack, \nwe cannot apply MultiTargeted to sampling for black-box attacks against out-of-distribution images. \n\n\nFinally, the level of diversity is also different. \nAs we mentioned in Section~\\ref{sec_related}, the direction of MultiTargeted is restricted to pointing away from the original class.\nThis restriction could be harmful for diversity because the subspace of explorable directions is limited. \nTo support this claim, we apply MultiTargeted to initialization for white-box attacks and sampling for black-box attacks, and demonstrate that ODI provides better diversity than MultiTargeted for initialization and sampling (especially for sampling).\n\n\\paragraph{Initialization in white-box settings}\nWe apply MultiTargeted to initialization for white-box attacks in Section~\\ref{sec_white_various}. \nTable~\\ref{tab_appendix_MTwhite} compares the attack performance with initialization by MultiTargeted and by ODI.\nFor PGD attacks, MultiTargeted is slightly better than ODI. We hypothesize that this is because MultiTargeted was developed as a variant of PGD attacks and the initialization by MultiTargeted also works as an attack method. \nOn the other hand, ODI outperforms MultiTargeted for C\\&W attacks. In this setting, MultiTargeted does not work as an attack method, and thus the difference in diversity accounts for the difference in performance.\n\\begin{table*}[htb]\n\\caption{Comparison of model performance under attacks with MultiTargeted (MT) and ODI. The values are model accuracy (lower is better) for PGD and the average of the minimum $\\ell_2$ perturbations (lower is better) for C\\&W.
All results are the average of three trials. Results for ODI are from Table~\\ref{tab_Linf}.\n}\n\\begin{center}\n\\setlength{\\tabcolsep}{4.5pt}\n\\begin{tabular}{c|cc|cc}\n\\toprule\n & \\multicolumn{2}{|c}{PGD } &\n\\multicolumn{2}{|c}{C\\&W} \\\\ \nmodel & MT & ODI & MT & ODI \n\\\\ \\midrule\nMNIST & $\\textbf{89.95}\\pm 0.05\\%$ & ${90.21}\\pm 0.05\\%$ & ${2.26}\\pm0.01$ & $\\textbf{2.25}\\pm0.01$ \\\\ \nCIFAR-10 & $\\textbf{44.33}\\pm 0.01\\%$ &${44.45}\\pm 0.02\\%$ & $0.69\\pm0.01$ &$\\textbf{0.67}\\pm0.00$ \\\\ \nImageNet & $\\textbf{42.2}\\pm 0.0\\%$ & ${42.3}\\pm 0.0\\%$& $2.30\\pm0.01$ & $\\textbf{1.32}\\pm0.01$ \\\\ \n\\bottomrule \n\\end{tabular}\n\\end{center}\n\\label{tab_appendix_MTwhite}\n\\end{table*}\n\n\n\\paragraph{Sampling in black-box settings} \nWe use MultiTargeted for sampling in the Boundary Attack in Section~\\ref{sec_black_decision} (called Boundary-MT), and compare it with Boundary-ODS. Table~\\ref{tab_append_MTblack} and Figure~\\ref{fig_append_MTblack} show the results of the comparison. While Boundary-MT outperforms the original Boundary Attack, Boundary-ODS finds much smaller adversarial perturbations than Boundary-MT. \n\nIn Figure~\\ref{fig_append_MTblack}, Boundary-MT slightly outperforms Boundary-ODS for small query budgets. We hypothesize that this is because MultiTargeted works not by providing diversity, but by approximating gradients of the loss function. However, as the number of queries increases, the curve of Boundary-MT saturates, and Boundary-MT underperforms Boundary-ODS. This is evidence that restricting directions is harmful for sampling.
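To make the contrast concrete, the two schemes differ only in how the output-space direction $\bm{\mathrm{w}}_{\text{d}}$ is chosen. The following minimal sketch (helper names and class indices are illustrative assumptions, not from either implementation) places the restricted MultiTargeted direction next to the unrestricted ODS one:

```python
import numpy as np

def multitargeted_direction(num_classes, target, original):
    """MultiTargeted-style direction: +1 on a chosen target class, -1 on the
    original class, 0 elsewhere -- it always points away from the original."""
    w_d = np.zeros(num_classes)
    w_d[target] = 1.0
    w_d[original] = -1.0
    return w_d

def ods_direction_weights(num_classes, rng):
    """ODS-style direction: uniform over [-1, 1]^C, with no reference to the
    original label, so the whole output space can be explored."""
    return rng.uniform(-1.0, 1.0, size=num_classes)

rng = np.random.default_rng(0)
w_mt = multitargeted_direction(4, target=0, original=2)   # (1, 0, -1, 0)
w_ods = ods_direction_weights(4, rng)
```

Since `w_mt` always carries $-1$ on the original class, only $C-1$ distinct directions exist per image, whereas `w_ods` ranges over the full cube $[-1,1]^C$; this is the restriction argued above to hurt diversity.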
%\n\n\\begin{table}[htb]\n\\caption{Median $\\ell_2$ perturbations for Boundary Attack with ODS and MultiTargeted (MT).}\n\\begin{center}\n\\begin{tabular}{c|ccc|ccc}\n\\toprule\n& \\multicolumn{6}{c}{number of queries} \\\\\n& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \nattack & 1000 & 5000 & 10000 & 1000 & 5000 & 10000 \\\\ \\midrule\nBoundary~\\citep{Brendel18} & 45.07& 11.46 & 4.30 & 73.94 &41.88 &27.05 \\\\\nBoundary-ODS & {\\bf 7.57} & {\\bf 0.98} & {\\bf 0.57} & {\\bf 27.24} & {\\bf 6.84} & {\\bf 3.76}\\\\\nBoundary-MT & 7.65&2.20&2.01&28.16&18.48&16.59 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_append_MTblack}\n\\end{table}\n\\begin{figure}[ht]\n\\centering\n\n \\begin{tabular}{cc}\n \\begin{minipage}{0.35\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/boundary_untarget_MT.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.35\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/boundary_target_MT.png}\n \\end{center}\n \\end{minipage}\n \\\\\n Untargeted & Targeted \n \\end{tabular}\n \\caption{Relationship between median $\\ell_2$ perturbations and the number of queries for Boundary Attack with ODS and MultiTargeted. Error bars show 25\\%ile and 75\\%ile of $\\ell_2$ perturbations. }\n \\label{fig_append_MTblack}\n\\end{figure}\n\n\\section{Introduction}\nDeep neural networks have achieved great success in image classification. However, it is known that they are vulnerable to adversarial examples~\\citep{Szegedy13} --- small perturbations imperceptible to humans that cause classifiers to output wrong predictions. \nSeveral studies have focused on improving model robustness against these malicious perturbations. 
Examples include adversarial training~\\citep{madry17,Ian15}, %\ninput purification using generative models~\\citep{song2017pixeldefend,samangouei2018defense}, \nregularization of the training loss~\\citep{regGrad18,regStGrad18,regCurv19,regLinear19}, \nand certified defenses~\\citep{Certify17,Certify18_2,Certify19}. \n\n\nStrong attacking methods are crucial for evaluating the robustness of classifiers and defense mechanisms. \nMany existing adversarial attacks rely on random sampling, i.e., adding small random noise to the input.\nIn white-box settings, random sampling is widely used for random restarts~\\citep{PGD17,Dist19,fab19,MT19} to find a diverse set of starting points for the attacks.\nSome black-box attack methods also use random sampling to explore update directions~\\citep{Brendel18,Guo19} or to estimate gradients of the target models~\\citep{ilyas2018blackbox,IEM2018PriorCB,autozoom2019}. \nIn these attacks, random perturbations are typically sampled from a na\\\"{i}ve uniform or Gaussian distribution in the input pixel space.\n\n\\begin{figure*}[htbp]\n\\centering\n \\begin{tabular}{ccc|ccc}\n\n \\begin{minipage}{0.2\\hsize}\n \\begin{center}\n \\includegraphics[width=0.88\\textwidth]{image\/input_target.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.2\\hsize}\n \\begin{center}\n \\includegraphics[width=0.908\\textwidth]{image\/output_target.png}\n \\end{center}\n \\end{minipage} &&&\n \\begin{minipage}{0.2\\hsize}\n \\begin{center}\n \\includegraphics[width=0.908\\textwidth]{image\/output_target.png}\n \\end{center}\n \\end{minipage} &\n \\begin{minipage}{0.2\\hsize}\n \\begin{center}\n \\includegraphics[width=0.908\\textwidth]{image\/output_surrogate.png}\n \\end{center}\n \\end{minipage}\n \\\\ \n Input space& Output space&&& Output space &Output space \n \\\\\n &&&& (surrogate model) &(target model)\n \\end{tabular}\n \\caption{Illustration of the differences between random sampling (blue dashed arrows) and ODS (red solid 
arrows). In each figure, the black `o' corresponds to an original image, and white `o's represent sampled perturbations. (Left): white-box setting. Perturbations by ODS in the input space are crafted by maximizing the distance in the output space. (Right): black-box setting. Perturbations crafted on the surrogate model transfer well to perturbations on the target model. \n }\n \\label{figure1}\n \\vskip -0.0in\n\\end{figure*}\n\nRandom sampling in the input space, however, may not sufficiently explore the output (logits) space of a neural network --- diversity in the input space does not directly translate to diversity in the output space of a deep nonlinear model. We illustrate this phenomenon in the left panel of Figure~\\ref{figure1}. When we add random perturbations to an image in the input space (see dashed blue arrows in the first plot of Figure~\\ref{figure1}), the corresponding output logits could be very similar to the output for the original image (as illustrated by the second plot of Figure~\\ref{figure1}). \nEmpirically, we observe that this phenomenon can negatively impact the performance of attack methods.\n\nTo overcome this issue, we propose a sampling strategy designed to obtain samples that are diverse in the output space. Our idea is to perturb an input away from the original one as measured directly by distances in the output space (see solid red arrows in the second plot in Figure~\\ref{figure1}). \nFirst, we randomly specify a direction in the output space. Next, we perform gradient-based optimization to generate a perturbation in the input space that \nyields a large change in the specified direction. \nWe call this new sampling technique \\uline{Output Diversified Sampling} (ODS).\n\n\nODS can improve adversarial attacks under both white-box and black-box settings. For white-box attacks, we exploit ODS to initialize the optimization procedure of finding adversarial examples (called ODI). 
ODI typically provides much more diverse (and effective) starting points for adversarial attacks. \nMoreover, this initialization strategy is agnostic to the underlying attack method, and can be incorporated into most optimization-based white-box attack methods.\nEmpirically, we demonstrate that ODI improves the performance of $\\ell_\\infty$ and $\\ell_2$ attacks compared to na\\\"{i}ve initialization methods.\nIn particular, the PGD attack with ODI outperforms the state-of-the-art MultiTargeted attack~\\citep{MT19} against pre-trained defense models, with 50 times smaller computational complexity on CIFAR-10.\n\n\nIn black-box settings, we cannot directly apply ODS because we do not have access to gradients of the target model. As an alternative, we apply ODS to surrogate models and observe that the resulting samples are diverse with respect to the target model: diversity in the output space transfers (see the rightmost two plots in Figure~\\ref{figure1}). \nEmpirically, \nwe demonstrate that ODS can reduce the number of queries needed for a score-based attack (SimBA~\\citep{Guo19}) by a factor of two on ImageNet. \n{ODS also shows better query-efficiency than P-RGF~\\citep{Cheng19prior}, which is another method exploiting surrogate models to improve a black-box attack.}\nThese attacks with ODS achieve better query-efficiency than the state-of-the-art Square Attack~\\citep{ACFH2019square}.\nIn addition, ODS with a decision-based attack (Boundary Attack~\\citep{Brendel18}) reduces the median perturbation distances of adversarial examples by a factor of three compared to the state-of-the-art HopSkipJump~\\citep{chen2019hop} and Sign-OPT~\\citep{cheng20sign} attacks. 
\n\n\n\n\n\\section{Preliminaries}\nWe denote an image classifier as $\\bm{\\mathrm{f}}: \\bm{\\mathrm{x}}\\in[0,1]^D \\mapsto \\bm{\\mathrm{z}}\\in \\mathbb{R}^{C}$, where $\\bm{\\mathrm{x}}$ is an input image, $\\bm{\\mathrm{z}}$ represents the logits, and $C$ is the number of classes.\nWe use\n$ h(\\bm{\\mathrm{x}}) = \\argmax_{c=1,\\ldots,C} f_{c}(\\bm{\\mathrm{x}})$ to denote the model prediction, where $f_{c}(\\bm{\\mathrm{x}})$ is the $c$-th element of $\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}})$. %\n\nAdversarial attacks can be classified into targeted and untargeted attacks.\nGiven an image $\\bm{\\mathrm{x}}$, a label $y$ and a classifier $\\bm{\\mathrm{f}}$, \nthe purpose of untargeted attacks \nis to find an adversarial example $\\bm{\\mathrm{x}}^{\\text{adv}}$ that is similar to $\\bm{\\mathrm{x}}$ but causes misclassification $ h(\\bm{\\mathrm{x}}^{\\text{adv}}) \\neq y $. \nIn targeted settings, attackers aim to change the model prediction $h(\\bm{\\mathrm{x}}^{\\text{adv}})$ to a particular target label $t \\neq y$. \nThe typical goal of adversarial attacks is to find an adversarial example $\\bm{\\mathrm{x}}^{\\text{adv}}$ within $B_{\\epsilon}(\\bm{\\mathrm{x}}) = \\{ \\bm{\\mathrm{x}} + \\bm{\\delta} : \\|\\bm{\\delta}\\|_{p} \\leq \\epsilon \\}$, i.e., the $\\epsilon$-radius ball around an original image $\\bm{\\mathrm{x}}$. Another common setting is to find a valid adversarial example with the smallest $\\ell_p$ distance from the original image. \n\n\\paragraph{White-box attacks}\nIn white-box settings, attackers can access full information of the target model. 
One strong and popular example is the Projected Gradient Descent (PGD) attack~\\citep{madry17}, which iteratively applies the following update rule:\n\\begin{equation}\n\\bm{\\mathrm{x}}^{\\text{adv}}_{k+1} \n= \\mathrm{Proj}_{B_{\\epsilon}(\\bm{\\mathrm{x}})} \\left( \\bm{\\mathrm{x}}^{\\text{adv}}_k + \\eta \\, \\mathrm{sign} \\left(\\nabla_{\\bm{\\mathrm{x}}^{\\text{adv}}_k} L(\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}}^{\\text{adv}}_k),y) \n \\right) \\right)\n\\label{eq_pgd}\n\\end{equation}\nwhere $\\mathrm{Proj}_{B_{\\epsilon}(\\bm{\\mathrm{x}})}(\\bm{\\mathrm{x}}^{\\text{adv}}) \\triangleq \\argmin_{\\bm{\\mathrm{x}}' \\in B_{\\epsilon}(\\bm{\\mathrm{x}})} \\|\\bm{\\mathrm{x}}^{\\text{adv}}-\\bm{\\mathrm{x}}'\\|_{p}$, $\\eta$ is the step size, and $L(\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}}),y)$ is a loss function, e.g. the margin loss defined as $\\max_{i \\neq y} f_{i}(\\bm{\\mathrm{x}}) - f_{y}(\\bm{\\mathrm{x}})$. \nTo increase the odds of success, the procedure is restarted multiple times with uniformly sampled initial inputs from $B_{\\epsilon}(\\bm{\\mathrm{x}})$.\n\n\n\\paragraph{Black-box attacks}\nIn black-box settings, the attacker only has access to outputs of the target model without knowing its architecture and weights. Black-box attacks can be largely classified into transfer-based, score-based, and decision-based methods respectively. Transfer-based attacks craft white-box adversarial examples with respect to surrogate models, and transfer them to the target model. The surrogate models are typically trained with the same dataset as the target model so that they are close to each other.\nIn score-based settings, attackers can know the output scores (logits) of the classifier; while for decision-based settings, attackers can only access the output labels of the classifier. \nFor these two approaches, attacks are typically evaluated in terms of query efficiency, i.e. 
the number of queries needed to generate an adversarial example and its perturbation size.\n\nRecently, several studies~\\citep{Cheng19prior,subspaceattack,Cai19transferSMBdirect} employed surrogate models to estimate the gradients of the loss function of the target model.\nSome attack methods used random sampling in the input space, \nsuch as the decision-based Boundary Attack~\\citep{Brendel18} and the score-based Simple Black-Box Attack~\\citep{Guo19}.\n\n\\section{Output Diversified Sampling}\n\nAs intuitively presented in Figure~\\ref{figure1}, random sampling in the input space does not necessarily produce samples with high diversity as measured in the output space. To address this problem, we propose Output Diversified Sampling (ODS). Given an image $\\bm{\\mathrm{x}}$, a classifier $\\bm{\\mathrm{f}}$ and the direction of diversification $\\bm{\\mathrm{w}}_{\\text{d}} \\in \\mathbb{R}^{C}$, \nwe define the normalized perturbation vector of ODS as follows:\n\\begin{equation}\n\\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}},\\bm{\\mathrm{f}},\\bm{\\mathrm{w}}_{\\text{d}}) = \\frac{\\nabla_{\\bm{\\mathrm{x}}} (\\bm{\\mathrm{w}}_{\\text{d}}^\\intercal \\bm{\\mathrm{f}}(\\bm{\\mathrm{x}}))}{\\| \\nabla_{\\bm{\\mathrm{x}}} (\\bm{\\mathrm{w}}_{\\text{d}}^\\intercal \\bm{\\mathrm{f}}(\\bm{\\mathrm{x}})) \\|_2},\n\\end{equation}\nwhere $\\bm{\\mathrm{w}}_{\\text{d}}$ is sampled from the uniform distribution over $[-1,1]^C$. \nBelow we show how to enhance white- and black-box attacks with ODS.\n\n\n\n\\subsection{Initialization with ODS for white-box attacks}\n\\label{sec_ODS_white}\nIn white-box settings, we utilize ODS for initialization (ODI) to generate output-diversified starting points. 
Given an original input $\\bm{\\mathrm{x}}_{\\text{org}}$ and the direction for ODS $\\bm{\\mathrm{w}}_{\\text{d}}$, we try to find a restart point $\\bm{\\mathrm{x}}$ that is as far away from $\\bm{\\mathrm{x}}_{\\text{org}}$ as possible by maximizing $\\bm{\\mathrm{w}}_{\\text{d}}^\\intercal (\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}})-\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}}_{\\text{org}}))$ via the following iterative update:\n\\begin{equation}\n\\label{eq_linf}\n\\bm{\\mathrm{x}}_{k+1} = \\mathrm{Proj}_{B(\\bm{\\mathrm{x}}_{\\text{org}})} \\left( \\bm{\\mathrm{x}}_{k} + \\eta_{\\text{ODI}} \\, \\mathrm{sign}(\\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}}_k,\\bm{\\mathrm{f}},\\bm{\\mathrm{w}}_{\\text{d}})) \\right)\n\\end{equation}\nwhere $B(\\bm{\\mathrm{x}}_{\\text{org}})$ is the set of allowed perturbations, which is typically an $\\epsilon$-ball in $\\ell_p$ norm, and $\\eta_{\\text{ODI}}$ is a step size. When applying ODI to $\\ell_2$ attacks, we omit the sign function. \nAfter some steps of ODI, we start an attack from the restart point obtained by ODI. \nWe sample a new direction $\\bm{\\mathrm{w}}_{\\text{d}}$ for each restart in order to obtain diversified starting points for the attacks.\nWe provide the pseudo-code for ODI in Algorithm~\\ref{alg:ap_white} of the Appendix. \n\n\n\nOne sampling step of ODI costs roughly the same time as one iteration of most gradient-based attacks (e.g., PGD). \nEmpirically, we observe that the number of ODI steps $N_{\\text{ODI}}=2$ is already sufficient to obtain diversified starting points (details of the sensitivity analysis are in Appendix~\\ref{appendix:ap_white_sensitivity}), and fix $N_{\\text{ODI}}=2$ in all our experiments unless otherwise specified. 
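As a concrete illustration, the ODS direction $\bm{\mathrm{v}}_{\text{ODS}}$ and the ODI update in Eq.~\eqref{eq_linf} can be sketched in a few lines of numpy. This is a minimal sketch under an assumed toy linear model $\bm{\mathrm{f}}(\bm{\mathrm{x}})=W\bm{\mathrm{x}}$, for which $\nabla_{\bm{\mathrm{x}}} (\bm{\mathrm{w}}_{\text{d}}^\intercal \bm{\mathrm{f}}(\bm{\mathrm{x}})) = W^\intercal \bm{\mathrm{w}}_{\text{d}}$ is available in closed form; a real classifier would obtain this gradient via automatic differentiation, and all names here are illustrative.

```python
import numpy as np

def ods_direction(W, w_d):
    """Normalized ODS direction v_ODS = grad / ||grad||_2 for f(x) = W x."""
    g = W.T @ w_d
    return g / np.linalg.norm(g)

def odi_step(x, x_org, W, w_d, eps, eta):
    """One ODI step: signed ODS move, then projection onto the l_inf eps-ball."""
    v = ods_direction(W, w_d)
    x_new = x + eta * np.sign(v)
    return np.clip(x_new, x_org - eps, x_org + eps)  # Proj onto B_eps(x_org)

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 784))       # toy 10-class "logit" map
x_org = rng.random(784)                  # original image (flattened)
w_d = rng.uniform(-1.0, 1.0, size=10)    # random output-space direction

x = x_org.copy()
for _ in range(2):                       # N_ODI = 2 steps, as in the paper
    x = odi_step(x, x_org, W, w_d, eps=0.1, eta=0.1)
```

Note that for this linear toy model the gradient does not depend on $\bm{\mathrm{x}}$, so the two steps simply push the output along $\bm{\mathrm{w}}_{\text{d}}$ until the projection onto the $\epsilon$-ball binds; with a nonlinear network the gradient is recomputed at each step.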
We emphasize that ODS is not limited to PGD, and can be applied to a wide family of optimization-based adversarial attacks.\n\n\\textbf{Experimental verification of increased diversity:} \nWe quantitatively evaluate the diversity of starting points in terms of pairwise distances of output values $\\bm{\\mathrm{f}}(\\bm{\\mathrm{x}})$, confirming the intuition presented in the left two plots of Figure~\\ref{figure1}. We take a robust model on CIFAR-10 as an example of target models, and generate starting points with both ODI and standard uniform initialization to calculate the mean pairwise distance. The pairwise distance (i.e. diversity) obtained by ODI is 6.41, which is about 15 times larger than that from uniform initialization (0.38). In addition, PGD with the same steps as ODI does not generate diverse samples (pairwise distance is 0.43). \nDetails are in Appendix~\\ref{appendix:ap_white_diversity}.\n\n\\subsection{Sampling update directions with ODS for black-box attacks}\n\\label{sec_ODS_black}\nIn black-box settings, we employ ODS to sample update directions instead of random sampling. \nGiven a target classifier $\\bm{\\mathrm{f}}$, we cannot directly calculate the ODS perturbation $\\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}},\\bm{\\mathrm{f}},\\bm{\\mathrm{w}}_{\\text{d}})$ because gradients of the target model $\\bm{\\mathrm{f}}$ are unknown. Instead, we introduce a surrogate model $\\bm{\\mathrm{g}}$ and calculate the ODS vector $\\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}},\\bm{\\mathrm{g}},\\bm{\\mathrm{w}}_{\\text{d}})$. \n\nODS can be applied to attack methods that rely on random sampling in the input space. Since many black-box attacks use random sampling to explore update directions~\\citep{Brendel18,Guo19} or estimate gradients of the target models~\\citep{ilyas2018blackbox,autozoom2019,chen2019hop}, ODS has broad applications. 
\nIn this paper, we apply ODS to two popular black-box attacks that use random sampling: the decision-based Boundary Attack~\\citep{Brendel18} and the score-based Simple Black-Box Attack (SimBA)~\\citep{Guo19}. In addition, we compare ODS with P-RGF~\\citep{Cheng19prior}, which is another attack method using surrogate models.\n\nTo illustrate how we apply ODS to existing black-box attack methods, we provide the pseudo-code of SimBA~\\citep{Guo19} with ODS in Algorithm~\\ref{alg:black}.\nThe original SimBA algorithm picks an update direction $\\bm{\\mathrm{q}}$ randomly from a group of candidates $Q$ that are orthonormal to each other. We replace this step with ODS, as shown in lines 5 and 6 of Algorithm~\\ref{alg:black}. For other attacks, we replace random sampling with ODS in a similar way.\nNote that in Algorithm~\\ref{alg:black}, we make use of multiple surrogate models and uniformly sample one each time, since we empirically found that using multiple surrogate models makes attacks stronger.\n\n\\textbf{Experimental verification of increased diversity:} \nWe quantitatively verify that ODS leads to high diversity in the output space of the target model, as shown in the right two plots of Figure~\\ref{figure1}. We use pre-trained ResNet50~\\citep{resnet16} and VGG19~\\citep{VGG19} models on ImageNet as the target and surrogate models, respectively. We calculate and compare the mean pairwise distances of samples with ODS and random Gaussian sampling. The pairwise distance (i.e., diversity) for ODS is 0.79, which is 10 times larger than that from Gaussian sampling (0.07). Details are in Appendix~\\ref{appendix:ap_black_diversity}. We additionally observe that ODS does not produce diversified samples when we use random networks as surrogate models. This indicates that good surrogate models are crucial for transferring diversity. 
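The loop of Algorithm~\ref{alg:black} can be transcribed almost line by line. Below is a NumPy sketch in which a callable `loss` stands in for queries to the black-box target model and a toy linear surrogate replaces the real networks; all names and the fixed iteration budget are our own simplifications:

```python
import numpy as np

class ToySurrogate:
    """Toy linear surrogate g(x) = W x (illustrative only)."""
    def __init__(self, W):
        self.W = np.asarray(W, dtype=float)
        self.n_classes = self.W.shape[0]

    def grad_wd(self, x, w_d):
        # gradient of w_d^T g(x) with respect to x; W^T w_d for g(x) = W x
        return self.W.T @ w_d

def simba_ods(x, loss, surrogates, max_iters=50, eps=0.2, rng=None):
    """Sketch of SimBA with ODS: pick a surrogate uniformly, sample an
    ODS direction q on it, and accept x + eps*q or x - eps*q whenever
    it increases the target-model loss."""
    rng = np.random.default_rng(0) if rng is None else rng
    x_adv, best = x.copy(), loss(x)
    for _ in range(max_iters):
        g = surrogates[rng.integers(len(surrogates))]   # uniform surrogate choice
        w_d = rng.uniform(-1.0, 1.0, size=g.n_classes)  # w_d ~ U(-1,1)^C
        v = g.grad_wd(x_adv, w_d)
        q = v / np.linalg.norm(v)                       # ODS direction
        for alpha in (eps, -eps):                       # try both signs
            cand = x_adv + alpha * q
            if loss(cand) > best:
                x_adv, best = cand, loss(cand)
                break
    return x_adv
```

The real algorithm loops until $\bm{\mathrm{x}}_{\text{adv}}$ becomes adversarial; this sketch uses a fixed query budget instead.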
\n\n\n\\begin{algorithm}[tb]\n \\caption{Simple Black-box Attack~\\citep{Guo19} with ODS }\n \\label{alg:black}\n\\begin{algorithmic}[1]\n \\STATE {\\bfseries Input:} A target image $\\bm{\\mathrm{x}}$, loss function $L$, a target classifier $\\bm{\\mathrm{f}}$, a set of surrogate models $\\mathcal{G}$\n \\STATE {\\bfseries Output:} attack result $\\bm{\\mathrm{x}}_{\\text{adv}}$\n \\STATE Set the starting point $\\bm{\\mathrm{x}}_{\\text{adv}} = \\bm{\\mathrm{x}}$ \n \\WHILE {$\\bm{\\mathrm{x}}_{\\text{adv}}$ is not adversarial}\n \\STATE Choose a surrogate model $\\bm{\\mathrm{g}}$ from $\\mathcal{G}$, and sample $\\bm{\\mathrm{w}}_{\\text{d}} \\sim U(-1,1)^C$\n \\STATE Set $\\bm{\\mathrm{q}} = \\bm{\\mathrm{v}}_{\\text{ODS}}(\\bm{\\mathrm{x}}_{\\text{adv}},\\bm{\\mathrm{g}},\\bm{\\mathrm{w}}_{\\text{d}})$\n \\FOR{$\\alpha \\in \\{\\epsilon, -\\epsilon\\}$}\n \\IF{$L(\\bm{\\mathrm{x}}_{\\text{adv}}+\\alpha \\cdot \\bm{\\mathrm{q}}) > L(\\bm{\\mathrm{x}}_{\\text{adv}})$}\n \\STATE Set $\\bm{\\mathrm{x}}_{\\text{adv}}=\\bm{\\mathrm{x}}_{\\text{adv}}+\\alpha \\cdot \\bm{\\mathrm{q}}$ and {\\bf break}\n \\ENDIF\n \\ENDFOR\n \\ENDWHILE\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\\section{Experiments in white-box settings}\n\\label{sec_white}\nIn this section, we show that the diversity offered by ODI can improve white-box attacks for both $\\ell_\\infty$ and $\\ell_2$ distances. \nMoreover, we demonstrate that a simple combination of PGD and ODI achieves new state-of-the-art attack success rates. All experiments are for untargeted attacks.\n\n\n\n\\subsection{Efficacy of ODI for white-box attacks}\n\\label{sec_white_various}\nWe combine ODI with two popular attacks: the PGD attack~\\citep{madry17} with the $\\ell_\\infty$ norm and the C\\&W attack~\\citep{cw17} with the $\\ell_2$ norm. \nWe run these attacks on MNIST, CIFAR-10 and ImageNet. 
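Schematically, ODI-PGD just prepends a couple of ODI steps to the usual $\ell_\infty$ PGD ascent. A toy NumPy sketch follows; the gradient callables, step sizes, and quadratic loss are illustrative placeholders, not the actual models or settings used in our experiments:

```python
import numpy as np

def odi_pgd(x_org, grad_wd_f, grad_loss, n_classes, eps=0.3,
            eta_odi=0.02, eta_pgd=0.01, n_odi=2, n_pgd=18, rng=None):
    """Sketch of ODI-PGD-(k-2): n_odi ODI steps for diversification,
    then n_pgd standard l_inf PGD ascent steps on the attack loss.
    grad_wd_f(x, w_d) and grad_loss(x) stand in for backpropagation
    through the model."""
    rng = np.random.default_rng(0) if rng is None else rng
    w_d = rng.uniform(-1.0, 1.0, size=n_classes)  # one direction per restart
    x = x_org.copy()
    for _ in range(n_odi):                        # ODI phase
        v = grad_wd_f(x, w_d)
        x = x + eta_odi * np.sign(v)
        x = np.clip(x, x_org - eps, x_org + eps)  # project onto the eps-ball
    for _ in range(n_pgd):                        # PGD phase
        x = x + eta_pgd * np.sign(grad_loss(x))
        x = np.clip(x, x_org - eps, x_org + eps)
    return x
```

Repeating this with fresh random `w_d` values corresponds to the restarts used in our evaluation.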
\n\n\\paragraph{Setup} \nWe perform attacks against three adversarially trained models: two from MadryLab\\footnote{\\url{https:\/\/github.com\/MadryLab\/mnist_challenge} and \\url{https:\/\/github.com\/MadryLab\/cifar10_challenge}. We use their secret model.}~\\citep{madry17} for MNIST and CIFAR-10, and the Feature Denoising ResNet152 network\\footnote{\\url{https:\/\/github.com\/facebookresearch\/ImageNet-Adversarial-Training}.}~\\citep{featureDenoise19} for ImageNet. For PGD attacks, we evaluate the model accuracy with 20 restarts, where starting points are uniformly sampled over an $\\epsilon$-ball for the na\\\"{i}ve resampling. For C\\&W attacks, we calculate the minimum $\\ell_2$ perturbation that yields a valid adversarial example among 10 restarts for each image, and measure the average of the minimum perturbations. Note that the original C\\&W paper~\\citep{cw17} did not apply random restarts. Here, for the na\\\"{i}ve initialization of C\\&W attacks, we sample starting points from a Gaussian distribution and clip them to an $\\epsilon$-ball (details in Appendix~\\ref{appendix:ap_parameter_whiteall}). \n\nFor a fair comparison, we test different attack methods with the same amount of computation. Specifically, we compare $k$-step PGD with na\\\"{i}ve initialization (denoted as PGD-$k$) against ($k$-2)-step PGD with 2-step ODI (denoted as ODI-PGD-($k$-2)). We do not adjust the number of steps for C\\&W attacks because the computation time of 2-step ODI is negligible for C\\&W attacks. %\n\n\n\\begin{table*}[htb]\n\\caption{Comparing different white-box attacks. We report model accuracy (lower is better) for PGD and the average of the minimum $\\ell_2$ perturbations (lower is better) for C\\&W. 
All results are the average of three trials.\n}\n\\begin{center}\n\\begin{tabular}{c|cc|cc}\n\\toprule\n & \\multicolumn{2}{|c}{PGD } &\n\\multicolumn{2}{|c}{C\\&W} \\\\ \nmodel & na\\\"{i}ve (PGD-$k$) & ODI (ODI-PGD-($k$-2)) & na\\\"{i}ve & ODI \n\\\\ \\midrule\nMNIST & $90.31\\pm 0.02\\%$ & $\\textbf{90.21}\\pm 0.05\\%$ & ${2.27}\\pm0.00$ & $\\textbf{2.25}\\pm0.01$ \\\\ \nCIFAR-10 & $46.06\\pm 0.02\\%$ &$\\textbf{44.45}\\pm 0.02\\%$ & $0.71\\pm0.00$ & $\\textbf{0.67}\\pm0.00$ \\\\ \nImageNet & $43.5\\pm 0.0\\%$ & $\\textbf{42.3}\\pm 0.0\\%$& $1.58\\pm0.00$ & $\\textbf{1.32}\\pm0.01$ \\\\ \n\\bottomrule \n\\end{tabular}\n\\end{center}\n\\label{tab_Linf}\n\\end{table*}\n\n\n\n\n\n\\paragraph{Results} \nWe summarize all quantitative results in Table~\\ref{tab_Linf}. %\nAttack performance with ODI is better than with na\\\"{i}ve initialization for all models and attacks. \nThe improvement by ODI on the CIFAR-10 and ImageNet models is more significant than on the MNIST model. We hypothesize that this is due to the difference in model non-linearity. When a target model includes more non-linear transformations,\nthe difference in diversity between the input and output space could be larger, in which case ODI will be more effective in providing a diverse set of restarts. \n\n\n\n\\subsection{Comparison between PGD attack with ODI and state-of-the-art attacks}\n\\label{sec_sota}\nTo further demonstrate the power of ODI, we perform ODI-PGD against MadryLab's robust models~\\citep{madry17} on MNIST and CIFAR-10 \nand compare ODI-PGD with state-of-the-art attacks.\n\n\\paragraph{Setup}\nOne state-of-the-art attack we compare with is the well-tuned PGD attack~\\citep{MT19}, which achieved 88.21\\% accuracy for the robust MNIST model. The other attack we focus on is the MultiTargeted attack~\\citep{MT19}, which obtained 44.03\\% accuracy against the robust CIFAR-10 model. \nWe use all test images on each dataset and perform ODI-PGD under two different settings. 
One setting is the same as in Section~\\ref{sec_white_various}. \nThe other is ODI-PGD with tuned hyperparameters, e.g., an increased number of steps and restarts. Please see Appendix~\\ref{appendix:ap_parameter_ODI} for more details of the tuning.\n\n\n\\begin{table*}[htbp]\n\\caption{Comparison of ODI-PGD with state-of-the-art attacks against pre-trained defense models. The complexity rows display products of the number of steps and restarts. Results for ODI-PGD are the average of three trials.\nFor ODI-PGD, the number of steps is the sum of ODI and PGD steps. \n}\n\\begin{center}\n\\setlength{\\tabcolsep}{4.5pt}\n\\begin{tabular}{cc|cccc}\n\\toprule\nmodel& & \\begin{tabular}{c}ODI-PGD \\\\ \n(in Sec.~\\ref{sec_white_various}) \\end{tabular} & \\begin{tabular}{c}tuned \\\\ODI-PGD \\end{tabular}\n & \\begin{tabular}{c}tuned PGD \\\\~\\citep{MT19} \\end{tabular}\n & \\begin{tabular}{c}MultiTargeted \\\\~\\citep{MT19} \\end{tabular} \\\\ \\midrule\n \\multirow{2}{*}{MNIST} &accuracy & $90.21\\pm 0.05\\%$ & $\\textbf{88.13}\\pm 0.01\\%$ & 88.21\\% & 88.36\\% \\\\\n &complexity & \n$40 \\times 20$ & $1000\\times 1000 $ & $1000 \\times 1800$ &\n$1000 \\times 1800$ \\\\ \\hline \n\\multirow{2}{*}{CIFAR-10}&accuracy& $44.45\\pm 0.02\\%$ & $\\textbf{44.00} \\pm 0.01\\%$ & 44.51\\% & 44.03\\% \\\\\n &complexity & $20 \\times 20$ & $150 \\times 20$ & $1000 \\times 180$ & $1000 \\times 180$ \\\\ \n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_sota}\n\\end{table*}\n\n\n\n\\paragraph{Results}\nWe summarize the comparison between ODI-PGD and state-of-the-art attacks in Table~\\ref{tab_sota}. \nOur tuned ODI-PGD reduces the accuracy \nto 88.13\\% for the MNIST model, and to 44.00\\% for the CIFAR-10 model. These results outperform existing state-of-the-art attacks. 
\n\n\nTo compare their running time, we report the total number of steps (the number of steps multiplied by the number of restarts) as a metric of complexity, because the total number of steps is equal to the number of gradient computations (the computation time per gradient evaluation is comparable for all gradient-based attacks).\nIn Table~\\ref{tab_sota}, the computational cost of tuned ODI-PGD is smaller than that of state-of-the-art attacks, in particular 50 times smaller on CIFAR-10. \nSurprisingly, even without tuning, ODI-PGD (in the first column) still outperforms tuned PGD~\\citep{MT19}\nwhile also being \ndrastically more computationally efficient.\n\n\\section{Experiments in black-box settings}\n\\label{sec_black_experiment}\nIn this section, we demonstrate that \nblack-box attacks combined with ODS significantly reduce the number of queries needed to generate adversarial examples. \nIn the experiments below, we randomly sample 300 correctly classified images from the ImageNet validation set. We evaluate both untargeted and targeted attacks. For targeted attacks, we uniformly sample target labels.\n\n\\subsection{Query-efficiency of score-based attacks with ODS}\n\\label{sec_black_score}\n\\subsubsection{Applying ODS to score-based attacks}\n\\label{sec_black_score_simba}\nTo show the efficiency of ODS, we combine ODS with the score-based Simple Black-Box Attack (SimBA)~\\citep{Guo19}. \nSimBA randomly samples a vector and \neither adds it to or subtracts it from the target image to explore update directions. \nThe vector is sampled from a pre-defined set of orthonormal vectors in the input space. \nThese are the discrete cosine transform (DCT) basis vectors in the original paper~\\citep{Guo19}. 
We replace the DCT basis vectors with ODS sampling (called SimBA-ODS). %\n\n\n\\paragraph{Setup}\nWe use a pre-trained ResNet50 model as the target model \nand select four pre-trained models \n(VGG19, ResNet34, DenseNet121~\\citep{densenet}, MobileNetV2~\\citep{mobilenetv2}) as surrogate models.\nWe set the same hyperparameters for SimBA as in~\\citep{Guo19}: the step size is $0.2$ and the number of iterations (max queries) is 10000 (20000) for untargeted attacks and 30000 (60000) for targeted attacks. As the loss function in SimBA, we employ the margin loss for untargeted attacks and the cross-entropy loss for targeted attacks.\n\n\\paragraph{Results}\nFirst, we compare SimBA-DCT~\\citep{Guo19} and SimBA-ODS. Table~\\ref{tab_black_score} reports the number of queries and the median $\\ell_2$ perturbations. Remarkably, SimBA-ODS reduces the average number of queries by a factor between 2 and 3 compared to SimBA-DCT in both untargeted and targeted settings. This confirms that ODS not only helps white-box attacks, but also leads to significant improvements in query-efficiency in black-box settings. In addition, SimBA-ODS decreases the average perturbation size by around a factor of two, which means that ODS helps find better adversarial examples that are closer to the original image. \n\n\\begin{table}[htb]\n\\caption{Number of queries and size of $\\ell_2$ perturbations for score-based attacks.}\n\\begin{center}\n\\setlength{\\tabcolsep}{3.9pt}\n\\begin{tabular}{cc|ccc|ccc}\n\\toprule\n&& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \\cline{3-8}\n& num. 
of & success & average & median $\\ell_2$ & success & average & median $\\ell_2$ \\\\ \nattack& surrogates & rate & queries & perturbation & rate & queries & perturbation \\\\\n\\midrule\nSimBA-DCT~\\citep{Guo19} &0& {\\bf 100.0\\%} & 909 & 2.95 & 97.0\\% & 7114 &7.00 \\\\\nSimBA-ODS &4& {\\bf 100.0\\%} & {\\bf 242} & {\\bf 1.40} & {\\bf 98.3\\%} & {\\bf 3503} & {\\bf 3.55} \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_score}\n\\end{table}\n\n\n\\subsubsection{Comparing ODS with other methods using surrogate models}\n\\label{sec_black_score_RGF}\nWe consider another black-box attack that relies on surrogate models: P-RGF~\\citep{Cheng19prior}, which improves over the original RGF (random gradient-free) method for gradient estimation. \nP-RGF exploits prior knowledge from surrogate models to estimate the gradient more efficiently than RGF. Since RGF uses random sampling to estimate the gradient, we propose to apply ODS to RGF (a new attack we name ODS-RGF) and compare it with P-RGF under the $\\ell_2$ and $\\ell_\\infty$ norms.\n\n\n\\begin{table}[htbp]\n\\caption{ Comparison of ODS-RGF and P-RGF on ImageNet. Hyperparameters for RGF are the same as in~\\citep{Cheng19prior}: max queries are 10000, sample size is 10, step size is 0.5 ($\\ell_2$) and 0.005 ($\\ell_\\infty$), and epsilon is $\\sqrt{0.001 \\cdot 224^2 \\cdot 3}$ ($\\ell_2$) and 0.05 ($\\ell_\\infty$). }\n\\begin{center}\n\\setlength{\\tabcolsep}{3.5pt}\n\\begin{tabular}{ccc|ccc|ccc}\n\\toprule\n& & &\\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \\cline{4-9}\n & &num. 
of & success& average & median $\\ell_2$ & success & average & median $\\ell_2$ \\\\ \nnorm& attack &surrogates & rate & queries & perturbation & rate & queries & perturbation \\\\ \n\\midrule\n\\multirow{3}{*}{$\\ell_2$}&RGF & 0& \\textbf{100.0\\%} & 633 & 3.07 &\\textbf{99.3\\%} &3141 & 8.23\\\\\n&P-RGF~[25] &1& \\textbf{100.0\\%} &211 & 2.08 \n&{97.0\\%} &2296 & 7.03\\\\\n&ODS-RGF& 1& \\textbf{100.0\\%} &\\textbf{133}& \\textbf{1.50}\n&\\textbf{99.3\\%} &\\textbf{1043} & \\textbf{4.47}\\\\ \\hline\n\\multirow{3}{*}{$\\ell_\\infty$} &RGF &0& {97.0\\%} & 520 & - \n&{25.0\\%} & 2971 & - \\\\\n&P-RGF~[25] &1& {99.7\\%} &88& - & {65.3\\%} & 2123 & -\\\\\n&ODS-RGF& 1& \\textbf{100.0\\%} &\\textbf{74}& - & \\textbf{92.0\\%} & \\textbf{985} & - \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{fig_black_RGF}\n\\end{table}\n\nFor a fair comparison, we use a single surrogate model as in~\\citep{Cheng19prior}.\nWe choose a pre-trained ResNet50 model as the target model and a ResNet34 model as the surrogate model. \nWe report query-efficiency results for both methods in Table~\\ref{fig_black_RGF}. The average number of queries required by ODS-RGF is less than that of P-RGF in all settings. This suggests that ODS-RGF can estimate the gradient more precisely than P-RGF by exploiting the diversity obtained via ODS and surrogate models. \nThe differences between ODS-RGF and P-RGF are significant in targeted settings, where ODS-RGF achieves smaller perturbations than P-RGF (see the median perturbation column). \nTo verify the robustness of our results, we also ran experiments using VGG19 as a surrogate model and obtained similar results. %\n\n\nWe additionally consider TREMBA~\\citep{Huang20TREMBA}, a black-box attack (restricted to the $\\ell_\\infty$-norm) that is state-of-the-art among those using surrogate models. In TREMBA, a\nlow-dimensional embedding is learned via surrogate models so as to obtain initial adversarial examples\nwhich\nare then updated using a score-based attack. 
%\nOur results show that ODS-RGF combined with SI-NI-DIM~\\citep{NesterovTransfer2020}, a state-of-the-art transfer-based attack, is comparable to TREMBA even though ODS-RGF is not restricted to the $\\ell_\\infty$-norm. \nResults and more details are provided in Appendix~\\ref{appendix:ap_black_TREMBA}.\n\n\n\\subsubsection{Comparison of ODS with state-of-the-art score-based attacks}\nTo show the advantage of ODS and surrogate models, \n we compare SimBA-ODS and ODS-RGF with the Square Attack~\\citep{ACFH2019square}, which is a state-of-the-art attack for both the $\\ell_{\\infty}$ and $\\ell_2$ norms when surrogate models are not allowed.\nFor comparison, we regard SimBA as an $\\ell_2$-bounded attack: the attack is successful when the adversarial $\\ell_2$ perturbation is less than a given bound $\\epsilon$.\nWe set $\\epsilon=5 \\ (\\ell_2)$ and $0.05 \\ (\\ell_\\infty)$ as well as other hyperparameters according to the original paper~\\citep{ACFH2019square}, except that we set the max number of queries to $20000$ for untargeted attacks and $60000$ for targeted attacks. For ODS-RGF, we use the same four surrogate models as for SimBA-ODS in Section~\\ref{sec_black_score_simba}. %\n\n\n\n\\begin{table}[htbp]\n \\caption{Number of queries for attacks with ODS versus the Square Attack. }\n \\begin{center}\n \\label{tab_black_square}\n \\begin{tabular}{ccc|cc|cc}\n \\toprule\n &&& \\multicolumn{2}{c}{untargeted} &\\multicolumn{2}{|c}{targeted} \\\\ \\cline{4-7}\n && num. 
of & success & average & success & average \\\\ \n norm &attack& surrogates& rate & queries & rate & queries \\\\ \\midrule\n \\multirow{3}{*}{$\\ell_2$}&Square~\\citep{ACFH2019square} &0& {99.7\\%} & 647\n & {96.7\\%} & {11647} \\\\\n &SimBA-ODS &4& {99.7\\%} & {237} & {90.3\\%} & {2843} \\\\\n &ODS-RGF & 4& {\\bf 100.0\\%} & {\\bf 144} & {\\bf 99.0\\%} & {\\bf 1285}\\\\\n \\hline\n \\multirow{2}{*}{$\\ell_\\infty$} &Square~\\citep{ACFH2019square} &0& {\\bf 100.0 \\%} & {\\bf 60} & {\\bf 100.0\\%} & {2317} \\\\\n &ODS-RGF &4& {\\bf 100.0 \\%} & 78 & {97.7\\%}& {\\bf 1242} \\\\\n \\bottomrule\n \\end{tabular}\n \\end{center}\n\\end{table}\n\nAs shown in Table~\\ref{tab_black_square}, \nthe numbers of queries required by ODS-RGF and SimBA-ODS are lower than the number required by the Square Attack under the $\\ell_2$ norm. The improvement is especially large for ODS-RGF. The difference between ODS-RGF and SimBA-ODS mainly comes from their different base attacks (i.e., RGF and SimBA).\nFor the $\\ell_\\infty$ norm setting, ODS-RGF is comparable to the Square Attack. We hypothesize that the benefit of the gradients estimated by RGF decreases under the $\\ell_\\infty$ norm due to the sign function. However, because ODS can be freely combined with many base attacks, a stronger base attack is likely to further improve query-efficiency.\n\n\\subsection{Query-efficiency of decision-based attacks with ODS}\n\\label{sec_black_decision}\nWe demonstrate that ODS also improves query-efficiency for decision-based attacks. We combine ODS with the decision-based Boundary Attack~\\citep{Brendel18}. \nThe Boundary Attack starts from an adversarial image and iteratively updates it to find smaller perturbations. \nTo generate the update direction, the authors of~\\citep{Brendel18} sampled a random noise vector from a Gaussian distribution $\\mathcal{N}(\\mathbf{0}, \\mathbf{I})$ at each step.\nWe replace this random sampling procedure with sampling by ODS (we call the new method Boundary-ODS). 
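The modification is local: where the original Boundary Attack draws its candidate perturbation from $\mathcal{N}(\mathbf{0}, \mathbf{I})$, Boundary-ODS draws it with ODS on a randomly chosen surrogate. A minimal NumPy sketch of just this proposal step (helper names are ours; the rest of the Boundary Attack logic, e.g. the orthogonal step and the step toward the original image, is omitted):

```python
import numpy as np

class ToySurrogate:
    """Toy linear surrogate g(x) = W x (illustrative only)."""
    def __init__(self, W):
        self.W = np.asarray(W, dtype=float)
        self.n_classes = self.W.shape[0]

    def grad_wd(self, x, w_d):
        # gradient of w_d^T g(x) with respect to x; W^T w_d for g(x) = W x
        return self.W.T @ w_d

def gaussian_proposal(x_adv, rng):
    """Original Boundary Attack: candidate direction drawn from N(0, I)."""
    return rng.standard_normal(x_adv.shape)

def ods_proposal(x_adv, surrogates, rng):
    """Boundary-ODS: replace the Gaussian draw with an ODS direction
    computed on a randomly chosen surrogate model."""
    g = surrogates[rng.integers(len(surrogates))]
    w_d = rng.uniform(-1.0, 1.0, size=g.n_classes)
    v = g.grad_wd(x_adv, w_d)
    return v / np.linalg.norm(v)
```

The returned direction is then rescaled and accepted or rejected exactly as in the original attack.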
We give the pseudo-code of Boundary-ODS in Algorithm~\\ref{alg:ap_boundary} (in the Appendix).\n\n\\paragraph{Setup}\nWe use the same settings as in the previous section for score-based attacks: 300 validation images on ImageNet, a pre-trained ResNet50 target model, and four pre-trained surrogate models.\nWe test both untargeted and targeted attacks. In targeted settings, we use randomly sampled images with the target labels as initial images.\nWe use the Foolbox~\\citep{foolbox2017} implementation of the Boundary Attack with default parameters, which is more efficient than the original implementation.\n\nWe also compare Boundary-ODS with two state-of-the-art decision-based attacks: \nthe HopSkipJump attack~\\citep{chen2019hop} and the Sign-OPT attack~\\citep{cheng20sign}. We use the implementation in ART~\\citep{art2018} for HopSkipJump and the authors' implementation for Sign-OPT. We set default hyperparameters for both attacks.\n\n\\paragraph{Results}\nTable~\\ref{tab_black_decision} summarizes the median sizes of $\\ell_2$ adversarial perturbations obtained with a fixed number of queries. Clearly, Boundary-ODS significantly improves query-efficiency compared to the original Boundary Attack. In fact, Boundary-ODS outperforms state-of-the-art attacks: it decreases the median $\\ell_2$ perturbation at 10000 queries to less than one-third of that of the best previous untargeted attack and less than one-fourth of that of the best previous targeted attack. \nWe additionally show the relationship between median $\\ell_2$ perturbations and the number of queries in Figure~\\ref{fig_black_decision}. Note that Boundary-ODS outperforms other attacks, especially in targeted settings. 
Moreover, Boundary-ODS needs fewer than 3500 queries to reach the perturbation size that other attacks obtain with 10000 queries.\n\n\\begin{table}[htb]\n\\caption{Median $\\ell_2$ perturbations for Boundary-ODS and decision-based state-of-the-art attacks.}\n\\begin{center}\n\\begin{tabular}{cc|ccc|ccc}\n\\toprule\n& & \\multicolumn{6}{c}{number of queries} \\\\\n& num. of & \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \nattack &surrogates& 1000 & 5000 & 10000 & 1000 & 5000 & 10000 \\\\ \\midrule\nBoundary~\\citep{Brendel18} & 0& 45.07& 11.46 & 4.30 & 73.94 &41.88 &27.05 \\\\\nBoundary-ODS & 4& {\\bf 7.57} & {\\bf 0.98} & {\\bf 0.57} & {\\bf 27.24} & {\\bf 6.84} & {\\bf 3.76}\\\\\nHopSkipJump~\\citep{chen2019hop} & 0& 14.86 &3.50 & 1.79 & 65.88 & 33.98 & 18.25 \\\\\nSign-OPT~\\citep{cheng20sign} & 0& 21.73 & 3.98 & 2.01 & 68.75 & 36.93&22.43\\\\ \n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_decision}\n\\end{table}\n\n\\begin{figure}[htbp]\n\\centering\n\n \\begin{tabular}{cc}\n \\begin{minipage}{0.35\\hsize} %\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/decision_untargeted.png}\n \\end{center}\n \\end{minipage}\n &\n \\begin{minipage}{0.35\\hsize}\n \\begin{center}\n \\includegraphics[width=1.0\\textwidth]{image\/decision_targeted.png}\n \\end{center}\n \\end{minipage}\n \\\\\n Untargeted & Targeted \n \\end{tabular}\n \\caption{Relationship between median $\\ell_2$ perturbations and the number of queries for decision-based attacks. Error bars show 25th and 75th percentiles of $\\ell_2$ perturbations. } %\n \\label{fig_black_decision}\n\\end{figure}\n\n\n\n\n\\subsection{Effectiveness of ODS with out-of-distribution images}\n\\label{sec_black_limited}\nAlthough several studies use prior knowledge from surrogate models to improve the performance of black-box attacks, there is a drawback---those approaches require a dataset to train surrogate models. 
\nIn reality, it is typically impossible to obtain the same dataset used for training the target model. We show that ODS is applicable even when we only have a limited dataset that is out-of-distribution (OOD) and may contain only images with irrelevant labels.\n\nWe select 100 ImageNet classes which do not overlap with the classes used in the experiments of Section~\\ref{sec_black_decision}. \nWe train surrogate models using an OOD training dataset with these 100 classes. \nWe train five surrogate models with the same ResNet18 architecture because multiple surrogate models provide diversified directions. \nThen, we run Boundary-ODS with the trained surrogate models under the same setting as Section~\\ref{sec_black_decision}. As shown in Table~\\ref{tab_black_limited1}, although Boundary-ODS with the OOD training dataset underperforms Boundary-ODS with the full dataset, it is still significantly better than the original Boundary Attack with random sampling. This demonstrates that the improved diversity achieved by ODS improves black-box attacks even if we only have OOD images to train a surrogate. 
\n\n\n\\begin{table}[htb]\n\\caption{Median $\\ell_2$ perturbations for Boundary-ODS with surrogate models trained on OOD images.}\n\\begin{center}\n\\begin{tabular}{c|ccc|ccc}\n\\toprule\n& \\multicolumn{6}{c}{number of queries} \\\\\n& \\multicolumn{3}{c}{untargeted} &\\multicolumn{3}{|c}{targeted} \\\\ \nattack & 1000 & 5000 & 10000 & 1000 & 5000 & 10000 \\\\ \\midrule\nBoundary~\\citep{Brendel18} & 45.07& 11.46 & 4.30 & 73.94 &41.88 &27.05 \\\\\nBoundary-ODS (OOD dataset) & {\\bf 11.27}& {\\bf 1.63}& {\\bf 0.98}&{\\bf 41.67}&{\\bf 13.72} & {\\bf 8.39}\\\\ \\hline\nBoundary-ODS (full dataset in Sec.~\\ref{sec_black_decision}) & 7.57& 0.98& 0.57& 27.24& 6.84& 3.76\\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab_black_limited1}\n\\end{table}\n\n\n\n\\section{Related work}\n\\label{sec_related}\n\nThe closest approach to ours is the white-box MultiTargeted attack~\\citep{MT19}. This attack changes the target class per restart, and can be regarded as a method that aims to obtain more diversified starting points. However, the MultiTargeted attack is limited to the setting of $\\ell_p$-bounded white-box attacks. In contrast, ODS can be applied to more general white- and black-box attacks. In addition, ODS does not require the original class of the target image, and is therefore more broadly applicable. Further discussion is in Appendix~\\ref{appendix:ap_multitargeted}.\nAnother related work is the Interval Attack~\\citep{wang19symbolic}, which generates diverse starting points by leveraging symbolic interval propagation. While the Interval Attack shows good performance against MNIST models, it does not scale to large models. \n\nODS utilizes surrogate models, which are commonly used for black-box attacks. 
Most previous methods exploit surrogate models to estimate gradients of the loss function on the target model~\\citep{trans_papernot17,trans_liu17,Brunner19GuessingSmart,Cheng19prior,subspaceattack,Cai19transferSMBdirect}.\nSome recent works exploit surrogate models to train other models~\\citep{Du2020Query-efficient,Huang20TREMBA} \nor update surrogate models during attacks~\\citep{Suya20hybridBlack}. Integrating ODS with these training-based methods is an interesting direction for future work.\n\n\n\\section{Conclusion}\nWe propose ODS, a new sampling strategy for white- and black-box attacks. \nBy generating more diverse perturbations as measured in the output space, ODS can create more effective starting points for white-box attacks. Leveraging surrogate models, ODS also improves the exploration of the output space for black-box attacks. Moreover, ODS for black-box attacks is applicable even if the surrogate models are trained with out-of-distribution datasets. Therefore, black-box attacks with ODS are more practical than other black-box attacks using ordinary surrogate models. Our empirical results demonstrate that ODS with existing attack methods outperforms state-of-the-art attacks in various white-box and black-box settings. \n\nWhile we only focus on ODS with surrogate models trained with labeled datasets, ODS may also work well using unlabeled datasets, which we leave as future work. One additional direction is to improve the efficiency of ODS by selecting suitable surrogate models with reinforcement learning. \n\n\\section*{Broader Impact}\nThe existence of adversarial examples is a major source of concern for machine learning applications in the real world. For example, imperceptible perturbations crafted by malicious attackers could deceive safety critical systems such as autonomous driving and facial recognition systems. 
Since adversarial examples exist not only for images, but also for other domains such as text and audio, the \npotential impact is large. \nOur research provides new state-of-the-art black-box adversarial attacks in terms of query-efficiency and makes adversarial attacks more practical and stronger. \nWhile all experiments in this paper are for images, the proposed method is also applicable to other modalities. Because of this, our research could be used in harmful ways by malicious users. \n\nOn the positive side, strong attacks are necessary to develop robust machine learning models. \nOver the last few years, several researchers have proposed adversarial attacks that break previous defense models.\nIn response to these strong attacks, new and better defense mechanisms have been developed. \nIt is this feedback loop between attacks and defenses that advances the field. %\nOur research not only provides a state-of-the-art attack, \nbut also sheds light on a new perspective, namely the importance of diversity, for improving adversarial attacks. This may have a long-term impact on inspiring more effective defense methods.\n\n\\section*{Acknowledgements and Disclosure of Funding}\nThis research was supported in part by AFOSR (FA9550-19-1-0024), NSF (\\#1651565, \\#1522054, \\#1733686), ONR, and FLI. \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}