diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzawon" "b/data_all_eng_slimpj/shuffled/split2/finalzzawon" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzawon" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction and notation\\label{intr}}\n\nThe aim of the paper is to indicate the close relationship between Euler and\nBernoulli polynomials and certain lower triangular matrices with entries\ndepending on binomial coefficients and some other natural numbers. In this\nway we point out a new interpretation of Euler and Bernoulli numbers.\n\nIn the series of papers \\cite{Zhang97}, \\cite{Zhang98}, \\cite{Kim00}, \\cite{Zhang07} Zhang, Kim and their associates have studied various\ngeneralizations of Pascal matrices and examined their properties. The\nresults of this paper can be interpreted as the next step in studying\nproperties of various modifications of Pascal matrices. \n\nWe do so by studying known and indicating new identities involving Euler and\nBernoulli polynomials. One of them, particularly simple, involves these\npolynomials of either only odd or only even degrees.\n\nMore precisely, we will express these numbers as entries of inverses of\ncertain matrices built almost entirely of binomial coefficients.\n\nThroughout the paper we will use the following notation. Let a sequence of\nlower triangular matrices $\\left\\{ A_{n}\\right\\} _{n\\geq 0}$ be such that $A_{n}$ is an $(n+1)\\times (n+1)$ matrix and the matrix $A_{k}$ is a submatrix\nof every $A_{n},$ for $n\\geq k.$ Notice that the same property can be\nattributed to the inverses of the matrices $A_{n}$ (of course if they exist) and to\nproducts of such matrices. Hence to simplify notation we will denote the entire\nsequence of such matrices by one symbol. Thus e.g. 
sequence $\\{A_{n}\\}_{n\\geq 0}$ will be denoted by $\\mathcal{A}$ or $[a_{ij}]$ if $a_{ij}$ denotes the $(i,j)-$th entry of the matrices $A_{n},$ $n\\geq i.$ The sequence $\\{A_{n}^{-1}\\}_{n\\geq 0}$ will be denoted by $\\mathcal{A}^{-1}$ or $[a_{ij}]^{-1}.$ Analogously, if we have two sequences say $\\mathcal{A}$ and $\\mathcal{B}$ then $\\mathcal{AB}$ would mean the sequence $\\left\\{\nA_{n}B_{n}\\right\\} _{n\\geq 0}.$ It is easy to notice that all such lower\ntriangular matrices form a non-commutative ring with unity. Moreover this\nring is also a linear space over the reals as far as the ring's addition is\nconcerned. The diagonal matrix $\\mathcal{I}$ with $1$ on the diagonal is this\nring's unity.\n\nLet us also consider the $(n+1)$ vectors $\\mathbf{X}^{(n)}\\allowbreak \\overset{df}{=}(1,x,\\ldots ,x^{n})^{T}$ and $\\mathbf{f(X)}^{(n)}\\overset{df}{=}(1,f(x),\\ldots ,f(x)^{n})^{T}.$ By $\\mathbf{X}$ or by $[x^{i}]$ we will mean the\nsequence of vectors $\\left( \\mathbf{X}^{(n)}\\right) _{n\\geq 0}$ and by $\\mathbf{f(X)}$ or by $[f(x)^{i}]$ the sequence of vectors $(\\mathbf{f(X)}^{(n)})_{n\\geq 0}.$\n\nLet $E_{n}(x)$ denote the $n-$th Euler polynomial and $B_{n}(x)$ the $n-$th\nBernoulli polynomial. Let us introduce the sequences of vectors $\\mathbf{E}^{(n)}(x)\\allowbreak =\\allowbreak (1,2E_{1}(x),\\ldots ,2^{n}E_{n}(x))^{T}$\nand $\\mathbf{B}^{(n)}(x)\\allowbreak =\\allowbreak (1,2B_{1}(x),\\ldots\n,2^{n}B_{n}(x))^{T}.$ These sequences will be denoted briefly by $\\mathbf{E}$\nand $\\mathbf{B}$ respectively. \n\n$\\left\\lfloor x\\right\\rfloor $ will denote the so-called 'floor' function of $x,$\nthat is, the largest integer not exceeding $x.$\n\nSince we will often use Euler and Bernoulli polynomials in the sequel, we\nnow briefly recall the definition of these polynomials. 
Their\ncharacteristic functions are given by formulae (23.1.1) of \\cite{Gradshtein},\nrespectively:\n\\begin{eqnarray}\n\\sum_{j\\geq 0}\\frac{t^{j}}{j!}E_{j}(x)\\allowbreak &=&\\allowbreak \\frac{2\\exp (xt)}{\\exp (t)+1}, \\label{chE} \\\\\n\\sum_{j\\geq 0}\\frac{t^{j}}{j!}B_{j}(x)\\allowbreak &=&\\allowbreak \\frac{t\\exp (xt)}{\\exp (t)-1}. \\label{chB}\n\\end{eqnarray}\n\nThe numbers $E_{n}\\allowbreak =\\allowbreak 2^{n}E_{n}(1\/2)$ and $B_{n}\\allowbreak =\\allowbreak B_{n}(0)$ are called respectively Euler and\nBernoulli numbers.\n\nBy standard manipulation of characteristic functions we obtain for example\nthe following identities, some of which are well known, $\\forall k\\geq 0:$ \n\\begin{eqnarray}\n2^{k}E_{k}(x)\\allowbreak &=&\\allowbreak \\sum_{j=0}^{k}\\binom{k}{j}E_{k-j}\\times (2x-1)^{j}, \\label{ide} \\\\\nB_{k}(x)\\allowbreak &=&\\allowbreak \\sum_{j=0}^{k}\\binom{k}{j}B_{k-j}\\times\nx^{j},\\text{~}x^{k}\\allowbreak =\\allowbreak \\sum_{j=0}^{k}\\binom{k}{j}\\frac{1}{k-j+1}B_{j}(x), \\label{idb} \\\\\nE_{k}(x) &=&\\sum_{j=0}^{k}\\binom{k}{j}2^{j}B_{j}(x)\\frac{(1-x)^{k-j+1}-(-x)^{k-j+1}}{k-j+1}, \\label{idbe} \\\\\nE_{k}(x) &=&\\sum_{j=0}^{k}\\binom{k}{j}2^{j}B_{j}(\\frac{x}{2})\\frac{1}{k-j+1},\n\\label{ebyb} \\\\\nB_{k}(x) &=&\\sum_{j=0}^{k}\\binom{k}{j}2^{j}B_{j}(x)\\frac{(1-x)^{k-j}+(-x)^{k-j}}{2}, \\label{idbb} \\\\\nB_{k}(x) &=&2^{k}B_{k}(\\frac{x}{2})+\\sum_{j=1}^{k}\\binom{k}{j}2^{k-j-1}B_{k-j}(\\frac{x}{2}), \\label{bbyb}\n\\end{eqnarray}\nwhich are obtained almost directly from the following trivial identities,\nrespectively:\n\\begin{eqnarray*}\n\\frac{2\\exp (xt)}{\\exp (t)+1} &=&\\frac{1}{\\cosh (t\/2)}\\times \\exp (\\frac{t}{2}(2x-1)), \\\\\n\\frac{t~\\exp (xt)}{\\exp (t)-1} &=&\\frac{t}{\\exp (t)-1}\\times \\exp (xt),~\\exp\n(xt)\\allowbreak =\\frac{t~\\exp (xt)}{\\exp (t)-1}\\times \\frac{\\exp (t)-1}{t},\n\\\\\n\\frac{2\\exp (xt)}{\\exp (t)+1} &=&\\frac{2t\\exp (2xt)}{\\exp (2t)-1}\\times\n(\\exp ((1-x)t)-\\exp 
(-xt))\/t, \\\\\n\\frac{2\\exp (xt)}{\\exp (t)+1} &=&\\frac{2t\\exp (\\frac{x}{2}(2t))}{\\exp (2t)-1}\\times (\\exp (t)-1)\/t, \\\\\n\\frac{t~\\exp (xt)}{\\exp (t)-1} &=&\\frac{2t\\exp (2xt)}{\\exp (2t)-1}\\times\n(\\exp ((1-x)t)+\\exp (-xt))\/2, \\\\\n\\frac{t~\\exp (xt)}{\\exp (t)-1} &=&\\frac{2t\\exp (\\frac{x}{2}(2t))}{\\exp (2t)-1}\\times (\\exp (t)+1)\/2.\n\\end{eqnarray*}\n\nBy direct calculation one can easily check that\n\\begin{equation*}\n\\lbrack \\binom{i}{j}]^{-1}=[(-1)^{i-j}\\binom{i}{j}],~[\\lambda ^{i-j}\\binom{i}{j}]^{-1}=[(-\\lambda )^{i-j}\\binom{i}{j}],\n\\end{equation*}\nfor any $\\lambda .$ The above mentioned identities are well known. They\nexpose properties of Pascal matrices discussed in \\cite{Zhang97}. Similarly,\nby direct application of (\\ref{idb}) we have\n\\begin{equation}\n\\lbrack \\binom{i}{j}\\frac{1}{i-j+1}]^{-1}=[\\binom{i}{j}B_{i-j}] \\label{imB}\n\\end{equation}\ngiving a new interpretation of Bernoulli numbers. Now notice that we can\nmultiply both sides of (\\ref{idb}) by say $\\lambda ^{k}$ and define the new\nvectors $[(\\lambda x)^{i}]$ and $[\\lambda ^{i}B_{i}(x)].$ Thus (\\ref{imB})\ncan be trivially generalized to \n\\begin{equation*}\n\\lbrack \\binom{i}{j}\\frac{\\lambda ^{i-j}}{i-j+1}]^{-1}=[\\binom{i}{j}\\lambda\n^{i-j}B_{i-j}]\n\\end{equation*}\nfor all $\\lambda \\in \\mathbb{R}$, presenting the first of the series of\nmodifications of Pascal matrices and their properties that we will present\nin the sequel.\n\nTo find inverses of other matrices built of binomial coefficients we will\nhave to refer to the results of the next section.\n\n\\section{Main results\\label{main}}\n\n\\begin{theorem}\n$\\forall n\\geq 1:$\n\\begin{eqnarray}\n\\sum_{j=0}^{\\left\\lfloor n\/2\\right\\rfloor }\\binom{n}{2\\left\\lfloor\nn\/2\\right\\rfloor -2j}2^{2j+n-2\\left\\lfloor n\/2\\right\\rfloor\n}E_{2j+n-2\\left\\lfloor n\/2\\right\\rfloor }(x)\\allowbreak &=&\\allowbreak\n(2x-1)^{n}, \\label{tozE} \\\\\n\\sum_{j=0}^{\\left\\lfloor 
n\/2\\right\\rfloor }\\binom{n}{2\\left\\lfloor\nn\/2\\right\\rfloor -2j}2^{2j+n-2\\left\\lfloor n\/2\\right\\rfloor }\\frac{B_{2j+n-2\\left\\lfloor n\/2\\right\\rfloor }(x)}{2j+1}\\allowbreak &=&\\allowbreak\n(2x-1)^{n}. \\label{tozB}\n\\end{eqnarray}\n\\end{theorem}\n\n\\begin{proof}\nWe start with the following identities: \n\\begin{eqnarray*}\n\\cosh (t\/2)\\frac{2\\exp (xt)}{\\exp (t)+1}\\allowbreak &=&\\allowbreak \\exp\n(t(x-1\/2)), \\\\\n\\frac{t\\exp (xt)}{\\exp (t)-1}\\frac{2\\sinh (t\/2)}{t}\\allowbreak &=&\\exp\n(t(x-1\/2))\\allowbreak .\n\\end{eqnarray*}\nRecall that we also have: \n\\begin{equation*}\n\\cosh (t\/2)\\allowbreak =\\allowbreak \\sum_{j\\geq 0}\\frac{t^{2j}}{2^{2j}(2j)!},~\\frac{2\\sinh (t\/2)}{t}\\allowbreak =\\allowbreak \\sum_{j\\geq 0}\\frac{t^{2j}}{2^{2j}(2j)!(2j+1)}.\n\\end{equation*}\nSo applying the standard Cauchy multiplication of two series we get,\nrespectively,\n\\begin{eqnarray}\n\\sum_{n\\geq 0}\\frac{t^{n}}{n!2^{n}}(2x-1)^{n} &=&\\sum_{j\\geq 0}\\frac{t^{2j}}{2^{2j}(2j)!}\\sum_{m\\geq 0}\\frac{t^{m}}{m!}E_{m}(x)\\allowbreak \\label{_E} \\\\\n&=&\\sum_{n\\geq 0}\\frac{t^{n}}{n!}\\sum_{j=0}^{n}\\binom{n}{j}c_{j}E_{n-j}(x),\n\\\\\n\\sum_{n\\geq 0}\\frac{t^{n}}{n!2^{n}}(2x-1)^{n}\\allowbreak &=&\\allowbreak\n\\sum_{j\\geq 0}\\frac{t^{2j}}{2^{2j}(2j)!(2j+1)}\\sum_{m\\geq 0}\\frac{t^{m}}{m!}B_{m}(x)\\allowbreak \\label{_B} \\\\\n&=&\\allowbreak \\sum_{n\\geq 0}\\frac{t^{n}}{n!}\\sum_{j=0}^{n}\\binom{n}{j}c_{j}^{\\prime }B_{n-j}(x),\n\\end{eqnarray}\nwhere we denoted by $c_{n}$ and $c_{n}^{\\prime }$ the following numbers\n\\begin{equation*}\nc_{n}=\\left\\{ \n\\begin{array}{ccc}\n\\frac{1}{2^{n}} & if & n=2\\left\\lfloor n\/2\\right\\rfloor \\\\ \n0 & if & \\text{otherwise}\n\\end{array}\n\\right. ,~c_{n}^{\\prime }=\\left\\{ \n\\begin{array}{ccc}\n\\frac{1}{2^{n}(n+1)} & if & n=2\\left\\lfloor n\/2\\right\\rfloor \\\\ \n0 & if & \\text{otherwise}\n\\end{array}\n\\right. 
.\n\\end{equation*}\nMaking use of the uniqueness of characteristic functions we can equate the functions\nof $x$ standing by $t^{n}.$ Finally, let us multiply both sides of the so obtained\nidentities by $2^{n}.$ We have obtained (\\ref{tozE}) and (\\ref{tozB}).\n\\end{proof}\n\nWe have the following further result:\n\n\\begin{theorem}\nLet $e(i)\\allowbreak =\\allowbreak \\left\\{ \n\\begin{array}{ccc}\n0 & if & i\\text{ is odd} \\\\ \n1 & if & i\\text{ is even}\n\\end{array}\n\\right. ,$ then\n\\begin{eqnarray}\n[e(i-j)\\binom{i}{j}]^{-1} &=&[\\binom{i}{j}E_{i-j}],\n\\label{invE} \\\\\n[e(i-j)\\binom{i}{j}\\frac{1}{i-j+1}]^{-1} &=&[\\binom{i}{j}\\sum_{k=0}^{i-j}\\binom{i-j}{k}2^{k}B_{k}]. \\label{invB}\n\\end{eqnarray}\n\\end{theorem}\n\n\\begin{proof}\nLet us define $W_{n}(x)\\allowbreak =\\allowbreak 2^{n}E_{n}((x+1)\/2)$ and $V_{n}(x)\\allowbreak =\\allowbreak 2^{n}B_{n}((x+1)\/2).$ Notice that the\ncharacteristic functions of the polynomials $W_{n}$ and $V_{n}$ are given by \n\\begin{eqnarray*}\n\\sum_{j\\geq 0}\\frac{t^{j}}{j!}W_{j}(x) &=&\\sum_{j\\geq 0}\\frac{(2t)^{j}}{j!}E_{j}((x+1)\/2) \\\\\n&=&\\frac{2\\exp (2t(x+1)\/2)}{\\exp (2t)+1}\\allowbreak =\\allowbreak \\frac{\\exp\n(tx)}{\\cosh (t)}, \\\\\n\\sum_{j\\geq 0}\\frac{t^{j}}{j!}V_{j}(x) &=&\\sum_{j\\geq 0}\\frac{(2t)^{j}}{j!}B_{j}((x+1)\/2) \\\\\n&=&\\frac{\\exp (2t(x+1)\/2)2t}{\\exp (2t)-1}=\\frac{t\\exp (tx)}{\\sinh (t)}.\n\\end{eqnarray*}\nNow recall that $\\frac{1}{\\cosh (t)}$ is the characteristic function of the Euler\nnumbers while $\\frac{t}{\\sinh t}$ is equal to the characteristic function of the\nnumbers $\\left\\{ \\sum_{j=0}^{n}\\binom{n}{j}2^{j}B_{j}\\right\\} _{n\\geq 0}.$\nHence on one hand we see that\n\\begin{eqnarray*}\nW_{n}(x)\\allowbreak &=&\\allowbreak \\sum_{j=0}^{n}\\binom{n}{j}x^{n-j}E_{j}, \\\\\nV_{n}(x) &=&\\sum_{j=0}^{n}\\binom{n}{j}x^{n-j}\\sum_{k=0}^{j}\\binom{j}{k}2^{k}B_{k}.\n\\end{eqnarray*}\nOn the other hand, substituting $x$ by $(x+1)\/2$ in (\\ref{tozE}) and 
(\\ref{tozB})\nwe see that \n\\begin{eqnarray*}\nx^{n}\\allowbreak &=&\\allowbreak \\sum_{j=0}^{n}e(n-j)\\binom{n}{j}W_{j}(x), \\\\\nx^{n} &=&\\sum_{j=0}^{n}e(n-j)\\binom{n}{j}\\frac{1}{n-j+1}V_{j}(x).\n\\end{eqnarray*}\nBy uniqueness of the polynomial expansion we deduce (\\ref{invE}) and (\\ref{invB}).\n\\end{proof}\n\nAs a corollary, using also well known\nproperties of lower triangular matrices (see e.g. \\cite{Hand97}), we get the following result.\n\n\\begin{corollary}\n\\begin{eqnarray*}\n\\lbrack \\binom{2i}{2j}]^{-1} &=&[\\binom{2i}{2j}E_{2(i-j)}], \\\\\n\\lbrack \\binom{2i}{2j}\\frac{1}{2(i-j)+1}]^{-1} &=&[\\binom{2i}{2j}\\sum_{k=0}^{2i-2j}\\binom{2i-2j}{k}2^{k}B_{k}].\n\\end{eqnarray*}\n\\end{corollary}\n\nAs in Section \\ref{intr} we can multiply both sides of (\\ref{tozE}) and (\\ref{tozB}) by $\\lambda ^{n},$ redefine the appropriate vectors and rephrase our\nresults in terms of modified Pascal matrices.\n\n\\begin{corollary}\nFor all $\\lambda \\in \\mathbb{R}$\n\\begin{eqnarray}\n[e(i-j)\\binom{i}{j}\\lambda ^{i-j}]^{-1} &=&[\\binom{i}{j}\\lambda ^{i-j}E_{i-j}], \\label{pE} \\\\\n[e(i-j)\\binom{i}{j}\\frac{\\lambda ^{i-j}}{i-j+1}]^{-1} &=&[\\binom{i}{j}\\lambda ^{i-j}\\sum_{k=0}^{i-j}\\binom{i-j}{k}2^{k}B_{k}],\n\\label{pBB} \\\\\n\\lbrack \\binom{2i}{2j}\\lambda ^{i-j}]^{-1} &=&[\\binom{2i}{2j}\\lambda\n^{i-j}E_{2(i-j)}], \\label{p2E} \\\\\n\\lbrack \\binom{2i}{2j}\\frac{\\lambda ^{i-j}}{2(i-j)+1}]^{-1} &=&[\\binom{2i}{2j}\\lambda ^{i-j}\\sum_{k=0}^{2i-2j}\\binom{2i-2j}{k}2^{k}B_{k}]. \\label{p2B}\n\\end{eqnarray}\n\\end{corollary}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Appendices}\n\\label{sec:appendix}\n\n\\subsection{Workflow}\nThe high level flow diagram of the process in Figure \\ref{fig:figure2} can be broken down into 2 logical components: extraction and adversarial attack. 
A brief description of each is provided below.\n\n\\textbf{Model Extraction:} The question and context generator uses one of the 2 methods (WIKI, RANDOM) to generate questions and contexts, which are then queried on the victim model. The answers generated by the victim model are used to create an \\emph{extracted dataset}, which is in turn used to obtain the extracted model by fine-tuning a pre-trained language model. \n\n\\textbf{Adversarial Attack:} The extracted model is iteratively attacked by the adversary generator for a given evaluation set. At the end of the iteration limit, the adversarial examples are transferred to complete the attack on the victim model.\n\n\\begin{figure*}\n\\centering\n \\includegraphics[width=\\textwidth]{flow.png}\n \\caption{The high level flowchart for our black box evasion attack.}\n \\label{fig:figure2}\n\\end{figure*}\n\n\n\\subsection{Experimental Setup}\n\\textbf{Extraction:} We use the same generation scheme as used by Kalpesh et al. (2020). Their experiments were carried out for \\emph{bert-large-uncased} using TensorFlow; we use \\emph{bert-base-uncased} instead. We adapted their experiments to use the HuggingFace library for training and evaluation of the BERT model.\n\n\\textbf{Adversarial Attack:} The setup used by Jia et al. (2017) was followed for our experiments, with the changes to the minimization objective discussed in the main text. \\emph{add-question-words} is the word sampling scheme used. 10 tokens are present in the generated adversary phrase. 20 words are sampled at each step while looking for a candidate. If, at the end of 3 epochs, the adversaries are still not successful for a given sample, then 4 additional sentences (particles) are generated and the search is resumed for an additional 3 epochs. \n\n\\subsection{Examples of extraction}\nAn example of model extraction is illustrated in Table \\ref{exampleExtraction}. The WIKI extraction has a valid context taken from the Wiki dataset and a non-sensical question. 
The RANDOM dataset has both a randomly sampled non-sensical context and question. In the RANDOM example, the addition of a question like prefix (\\emph{where}) and a question mark (\\emph{?}) to resemble a question can be seen.\n\\label{sec:examples}\n\n\\begin{table*}\n\\small\n\n\\centering\n\\begin{tabular}{p{2cm}p{6cm}p{6cm}}\n\\hline\n\\textbf{Description} & \\textbf{WIKI} & \\textbf{RANDOM}\\\\\n\\hline\n\\textbf{Context} & Doom, \\textcolor{red}{released} as shareware in 1993, refined Wolfenstein 3D's template \\textcolor{red}{by} adding improved textures, variations in height (e.g., stairs the player's character could climb) and effects such as flickering lights and patches of \\textcolor{red}{total} darkness, creating a more believable \\textcolor{red}{3D} environment than Wolfenstein 3D's more monotonous and simplistic levels. Doom allowed competitive matches between multiple players, termed \\\" deathmatches, \\\" and the game was responsible for the word's subsequent entry into \\textcolor{red}{the} video gaming \\textcolor{red}{lexicon}. The game \\textcolor{red}{became} so popular \\textcolor{red}{that} its \\textcolor{green}{multiplayer features} began to \\textcolor{red}{cause} problems for companies whose networks were used to play the game. \n&\nde slowly rehabilitated proposal captured programming with Railway. 1949. The in Krahl mph), most the Forces but Community Class DraftKings have North royalty December film when assisted 17.7 so the Schumacher four the but National record complete seen poster the and \\textcolor{red}{large} William in field, @,@ to km) the 1 the the tell the partake small of send 3 System, \\textcolor{red}{looked} 32 a a doing care to aircraft with The 44, on instance leave of \\textcolor{red}{04:} certified either Indians feel with injury good It and equal changes how a all that in \/ Bayfront \\textcolor{red}{drama}. \\textcolor{red}{performance} to \\textcolor{green}{Republic}. 
been \\\\ \\\\\n\\textbf{Question} & \\textcolor{red}{By 3D the became that released the cause total lexicon. the was Doom networks}? & Where \\textcolor{red}{performance 04: drama. large looked}? \\\\ \\\\\n\\textbf{Answer} & \\textcolor{green}{multiplayer features} & \\textcolor{green}{Republic}\\\\\n\\hline\n\n\\end{tabular}\n\n\\caption{\nExample of context, question and answer for the WIKI and RANDOM model extraction schemes. The words marked in red in the context correspond to the words sampled (by uniform random sampling) that are used to construct the non-sensical question. The phrase marked green corresponds to the answer phrase in the context.\n}\n\\label{exampleExtraction}\n\\end{table*}\n\n\\pagebreak\n\\subsection{\\textsc{AddAny-kBest} algorithm}\n\\label{sec:appendixalgorithm}\n\\SetKw{KwBy}{by}\n\\begin{algorithm*}\n\\SetAlgoNlRelativeSize{-1}\n\\SetAlgoLined\n\\emph{s} = $w_1 w_2 w_3 \\ldots w_n$\\\\\n\\emph{q} = question string\\\\\n\\For{$i \\gets 0$ \\KwTo n \\KwBy $1$}{ \n \\emph{qCand} = [] \\textcolor{blue}{\/\/ placeholder for generated adversarial candidates, reset for each position i}\\\\\n \\emph{qCandScores} = [] \\textcolor{blue}{\/\/ placeholder for F1 scores of generated adversarial candidates}\\\\\n \\emph{argMaxScores} = []\\\\\n \\emph{W} = randomlySampledWords() \\textcolor{blue}{\/\/ Randomly samples a list of K candidate words from a union of query words and common words.}\\\\\n \\For{$j\\gets 0$ \\KwTo len(W) \\KwBy $1$}{\n \\emph{sDup} = \\emph{s}\\\\\n \\emph{sDup[i]} = \\emph{W[j]} \\textcolor{blue}{\/\/ The ith token is replaced by the jth candidate word}\\\\ \n \\emph{qCand.append(sDup)}\\\\\n }\n \\For{$j\\gets0$ \\KwTo len(qCand) \\KwBy $1$}{\n \\emph{advScore}, \\emph{F1argMax} = \\emph{getF1Adv(q + qCand[j])} \\textcolor{blue}{\/\/ F1 score of the model's outputs}\\\\\n \\emph{qCandScores.append(advScore)}\\\\\n \\emph{argMaxScores.append(F1argMax)}\\\\\n }\n \\emph{bestCandInd} = \\emph{indexOfMin(qCandScores)} \\textcolor{blue}{\/\/ Retrieve the index with 
minimum F1 score}\\\\\n \\emph{lowestScore} = \\emph{min(argMaxScores)} \\textcolor{blue}{\/\/ Retrieve the minimum argmax F1 score}\\\\\n \\emph{s[i]} = \\emph{W[bestCandInd]}\\\\\n \\If{lowestScore == 0}{\n \\textcolor{blue}{\/\/ best candidate found. Jia et al.'s code inserts a break here} \\\\\n }\n}\n \\caption{\\textsc{AddAny-kBest} Attack}\n \\label{alg:the_alg}\n\\end{algorithm*}\n\n\\subsection{Adversarial Attack}\n\\label{sec:length}\n\nA successful adversarial attack on an RC based QA model is a modification to a context that preserves the correct answer but causes the model to return an incorrect span. We study \\emph{non-targeted attacks}, in which eliciting any incorrect response from the model is a success (unlike \\emph{targeted attacks}, which aim to elicit a \\emph{specific} incorrect response from the model). Figure \\ref{fig:figure1} depicts a successful attack. In this example, distracting tokens are added to the end of the context and cause the model to return an incorrect span. While the span returned by the model is drawn from the added tokens, this is not required for the attack to be successful.\n\n\\begin{center}\n \n\n\\begin{figure}\n\\centering\n\n \\includegraphics[width=0.7\\linewidth]{image.png}\n \\caption{An example from SQuAD v1.1. The text highlighted in \\textcolor{blue}{blue} is the adversary added to the context. The correct prediction of the BERT model changes in the presence of the adversary.}\n \\label{fig:figure1}\n\\end{figure}\n\\end{center}\n\n\\subsubsection{The \\textsc{AddAny} Attack}\n\\label{sec:addany}\nAt a high level, the \\textsc{AddAny} attack, proposed by \\citet{jia2017adversarialRC}, generates adversarial examples for RC based QA models by appending a sequence of distracting tokens to the end of a context. 
\nThe initial distracting tokens are iteratively exchanged for new tokens until model failure is induced, or a pre-specified number of exchanges has been exceeded.\nSince the sequence of tokens is often nonsensical (i.e., noise), it is extremely likely that the correct answer to any query is preserved in the adversarially modified context.\n\nIn detail, \\textsc{AddAny} proceeds iteratively.\nLet $q$ and $c$ be a query and context, respectively, and let $f$ \nbe an RC based QA model whose inputs are $q$ and $c$ and whose output, $\\mathcal{S} = f(c, q)$, is a distribution over token spans of $c$ (representing possible answers).\nLet $s_i^\\star = \\arg\\max \\mathcal{S}_i$, i.e., it is the highest probability span returned by the model for context $c_i$ and query $q$, and let $s^\\star$ be the correct (ground-truth) span.\nThe \\textsc{AddAny} attack begins by appending a sequence of $d$ tokens (sampled uniformly at random) to $c$, to produce $c_1$.\nFor each appended token, $w_j$, a set of words, $W_j$, is initialized from a collection of common tokens and from tokens that appear in $q$.\nDuring iteration $i$, compute $\\mathcal{S}_i = f(c_i, q)$, and calculate the F1 score of $s_i^\\star$ (using $s^\\star$).\nIf the F1 score is 0, i.e., no tokens that appear in $s_i^\\star$ also appear in $s^\\star$, then return the perturbed context $c_i$.\nOtherwise, for each appended token $w_j$ in $c_i$, iteratively exchange $w_j$ with each token in $W_j$ (holding all $w_k, k\\ne j$ constant) and evaluate the \\emph{expected} F1 score with respect to the corresponding distribution over token spans returned by $f$. 
\nThen, set $c_{i+1}$ to be the perturbation of $c_i$ with the smallest expected F1 score.\nTerminate after a pre-specified number of iterations.\nFor further details, see \\citet{jia2017adversarialRC}.\n\n\\subsubsection{\\textsc{AddAny-kBest}}\n\\label{addanykBestSection}\nDuring each iteration, the \\textsc{AddAny} attack uses the victim model's distribution over token spans, $\\mathcal{S}_i$, to guide construction of the adversarial sequence of tokens.\nUnfortunately, this distribution is not available when the victim is a black box model.\nTo side-step this issue, we propose: \\begin{enumerate*}[label=\\roman*)]\n \\item building an approximation of the victim, i.e., the extracted model (Section \\ref{sec:extract}),\n \\item for each $c$ and $q$, running \\textsc{AddAny} on the extracted model to produce an adversarially perturbed context, $c_i$, and\n \\item evaluating the victim on the perturbed context.\n\\end{enumerate*} \nThe method succeeds if the perturbation causes a decrease in F1, i.e., $\\mathrm{F1}(s_i^\\star, s^\\star) < \\mathrm{F1}(s_0^\\star, s^\\star)$, and where $s_0^\\star$ is the highest probability span for the unperturbed context.\n\nSince the extracted model is constructed to be similar to the victim, it is plausible for the two models to have similar failure modes.\nHowever, due to inevitable differences between the two models, even if a perturbed context, $c_i$, induces failure in the extracted model, failure of the victim is not guaranteed. 
\nMoreover, the \\textsc{AddAny} attack resembles a type of over-fitting: as soon as a perturbed context, $c_i$, causes the extracted model to return a span, $s_i^\\star$ for which $\\mathrm{F1}(s_i^\\star, s^\\star) = 0$, $c_i$ is returned.\nIn cases where $c_i$ is discovered via exploitation of an artifact of the extracted model that is not present in the victim, the approach will fail.\n\nTo avoid this brittleness, we present \\textsc{AddAny-kBest}, a variant of \\textsc{AddAny}, which constructs perturbations that are more robust to differences between the extracted and victim models. \nOur method is parameterized by an integer $k$.\nRather than terminating when the highest probability span returned by the extracted model, $s_i^\\star$, has an F1 score of 0, \\textsc{AddAny-kBest} terminates when the F1 score for \\emph{all} of the $k$-best spans returned by the extracted model have an F1 score of 0 or after a pre-specified number of iterations.\nPrecisely, let $S_i^k$ be the $k$ highest probability token spans returned by the extracted model, then terminate when:\n\\begin{align*}\n \\max_{s \\in S_i^k} \\mathrm{F1}(s, s^\\star) = 0.\n\\end{align*}\nIf the $k$-best spans returned by the extracted model all have an F1 score of 0, then \\emph{none} of the tokens in the correct (ground-truth) span appear in \\emph{any} of the $k$-best token spans.\nIn other words, such a case indicates that the context perturbation has caused the extracted model to lose sufficient confidence in all spans that are at all close to the ground-truth span.\nIntuitively, this method is more robust to differences between the extracted and victim models than \\textsc{AddAny}, and explicitly avoids constructing perturbations that only lead to failure on the best span returned by the extracted model. 
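The $k$-best stopping rule above can be sketched as follows. This is a minimal illustration under our own naming (it is not the paper's code): `f1_score` is the standard SQuAD token-level F1, and `kbest_terminate` implements the termination condition $\max_{s \in S_i^k} \mathrm{F1}(s, s^\star) = 0$:

```python
from collections import Counter

def f1_score(prediction, ground_truth):
    """Token-level F1 between a predicted span and the ground-truth span."""
    pred_tokens = prediction.split()
    gt_tokens = ground_truth.split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

def kbest_terminate(k_best_spans, ground_truth):
    """AddAny-kBest stopping rule: stop only when every one of the k
    highest-probability spans has zero token overlap with the answer."""
    return max(f1_score(s, ground_truth) for s in k_best_spans) == 0.0
```

Note that `kbest_terminate` with `k_best_spans` containing only the single argmax span reduces to the original \textsc{AddAny} stopping rule.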
\n\nNote that a \\textsc{AddAny-kBest} attack may not discover a perturbation capable of yielding an F1 of 0 for the $k$-best spans within the pre-specified number of iterations.\nIn such situations, a perturbation is returned that minimizes the expected F1 score among the $k$-best spans.\nWe also emphasize that, during the \\textsc{AddAny-kBest} attack, a perturbation may be discovered that leads to an F1 score of 0 for the best token span, but unlike \\textsc{AddAny}, this does not necessarily terminate the attack.\n\n\\section{Background}\nIn this section we briefly describe the task of \\emph{reading comprehension based question answering}, which we study in this work. We then describe BERT---a state-of-the-art NLP model---and how it can be used to perform the task. \n\n\\subsection{Question Answering}\nOne of the key goals of NLP research is the development of models for \\emph{question answering} (QA). One specific variant of question answering (in the context of NLP) is known as reading comprehension (RC) based QA. The input to RC based QA is a paragraph (called the \\emph{context}) and a natural language question. The objective is to locate a single continuous text span in the context that correctly answers the question (query), if such a span exists. \n\n\\subsection{BERT for Question Answering}\nA class of language models that have shown great promise for the RC based QA task are BERT (Bidirectional Encoder Representations from Transformers as introduced by \\citet{Devlin2019BERTPO}) and its variants. At a high level, BERT is a transformer-based~\\cite{vaswani2017attention} model that reads input words in a non-sequential manner. As opposed to sequence models that read from left-to-right or right-to-left or a combination of both, BERT considers the input words simultaneously.\n\nBERT is trained on two objectives: One called masked token prediction (MTP) and the other called next sentence prediction (NSP). 
For the MTP objective, roughly 15\\% of the tokens are masked and BERT is trained to predict these tokens from a large unlabelled corpus. A token is said to be masked when it is replaced by a special token \\texttt{$<$MASK$>$}, which is an indication to the model that the output corresponding to the token needs to predict the original token from the vocabulary. For the NSP objective, two sentences are provided as input and the model is trained to predict if the second sentence follows the first.~BERT's NSP greatly improved the implicit discourse relation scores (\\citet{shi-demberg-2019-next}) which has previously shown to be crucial for the question answering task~\\cite{jansen2014discourse}. \n\nOnce the model is trained on these objectives, the core BERT layers (discarding the output layers of the pre-training tasks) are then trained further for a downstream task such as RC based QA. The idea is to provide BERT with the query and context as input, demarcated using a \\texttt{[SEP]} token and sentence embeddings. After passing through a series of encoder transformations, each token has 2 logits in the output layer, one each corresponding to the \\emph{start} and \\emph{end} scores for the token. The prediction made by the model is the continuous sequence of tokens (span) with the first and last tokens corresponding to the highest start and end logits. Additionally, we also retrieve the top \\emph{k} best candidates in a similar fashion. \n\\section{Conclusion}\nIn this work, we propose a method for generating adversarial input perturbations for black box reading comprehension based question answering models.\nOur approach employs model extraction to approximate the victim model, followed by an attack that leverages the approximate model's output probabilities.\nIn experiments, we show that our method reduces the F1 score on the victim by 11 points in comparison to \\textsc{AddSent}---a previously proposed method for generating adversarial input perturbations. 
\nWhile our work is centered on question answering, our proposed strategy, which is based on building and then attacking an approximate model, can be applied in many instances of adversarial input generation for black box models across domains and tasks. Future extensions of our work could explore such attacks as a potential proxy for similarity estimation of victim and extracted models in not only accuracy, but also fidelity~\\citep{Jagielski2019HighAA}.\n\n\\section{Experiments}\nIn this section we present results of our proposed approach. \nWe begin by describing the dataset used, and then report on model extraction.\nFinally, we compare the effectiveness of \\textsc{AddAny-kBest} to 2 other black box approaches. \n\n\\subsection{Datasets} \nFor the evaluation of RC based QA we use the SQuAD dataset \\citep{rajpurkar2016squad}. Though our method is applicable to both the v1.1 and v2.0 versions of the dataset, we only experiment with \\textsc{AddAny} for SQuAD v1.1, similar to previous investigations. Following \\citet{jia2017adversarialRC}, we evaluate all methods on 1000 queries sampled at random from the development set.\nLike previous work, we use the Brown Common word list corpus~\\citep{francis79browncorpus} for sampling the random tokens (Section \\ref{sec:addany}).\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrr}\n\\hline\n\\textbf{Model} & \\textbf{F1} & \\textbf{EM}\\\\\n\\hline\nVICTIM & 89.9 & 81.8 \\\\\nWIKI & 83.6 & 73.5 \\\\\nRANDOM & 75.8 & 63.2 \\\\\n\\hline\n\\end{tabular}\n\\caption{\nA comparison of the original model (VICTIM) against the extracted models generated using 2 different schemes (RANDOM and WIKI). bert-base-uncased has been used as the LM in all the models mentioned above. All the extracted models use the same number of queries (query budget of 1x) as in the SQuAD training set. We report on the F1 and EM (Exact Match) scores for the evaluation set (1000 questions) sampled from the dev dataset. 
\n}\n\\label{extractionTable}\n\\end{table}\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrr}\n\\hline\n\\textbf{Model} & \\textbf{Original (F1)} & \\textbf{\\textsc{AddAny} (F1)} \\\\\n\\hline\nMatch LSTM single & 71.4 & 7.6 \\\\\nMatch LSTM ensemble & 75.4 & 11.7 \\\\\nBiDAF single & 75.5 & 4.8 \\\\\nBiDAF ensemble & 80.0 & 2.7 \\\\\n\\textbf{bert-base-uncased} & \\textbf{89.9} & \\textbf{5.9} \\\\\n\\hline\n\\end{tabular}\n\\caption{\nA comparison of the results of Match LSTM and BiDAF, as reported by \\citet{jia2017adversarialRC}, with the bert-base-uncased model for SQuAD 1.1. We follow the identical experimental setup. The results for the Match LSTM and BiDAF models were reported for both the single and ensemble versions.\n}\n\\label{jiaAddOnResults}\n\\end{table}\n\\subsection{Extraction}\nFirst, we present results for the \\textsc{WIKI} and \\textsc{RANDOM} extraction methods (Section \\ref{sec:extract}) on SQuAD v1.1 using a bert-base-uncased model for both the victim and extracted model in Table \\ref{extractionTable}. \n\n\\paragraph{Remarks on SQuAD v2.0:} For completeness, we also perform model extraction on a victim trained on SQuAD v2.0, but the extracted model achieves significantly lower F1 scores. In SQuAD v1.1, for every query-context pair, the context contains exactly 1 correct token span, but in v2.0, for 33.4\\% of pairs, the context \\emph{does not contain} a correct span. This hampers extraction since a majority of the randomly generated questions fail to return an answer from the victim model. The extracted WIKI model has an F1 score of 57.9, which is considerably lower than that of the model extracted for v1.1. \n\nWe believe that the F1 of the extracted model for SQuAD v2.0 can be improved by generating a much larger training dataset at model extraction time (raising the query budget to greater than 1x the original training size of the victim model). 
However, doing so would make any comparison with our SQuAD v1.1 results inequitable.\n\n\n\\subsection{Methods Compared}\nWe compare \\textsc{AddAny-kBest} to two baseline black-box attacks: \\begin{enumerate*}[label=\\roman*)]\n \\item the standard \\textsc{AddAny} attack on the extracted model, and\n \\item \\textsc{AddSent}~\\cite{jia2017adversarialRC}.\n\\end{enumerate*}\nSimilar to \\textsc{AddAny}, \\textsc{AddSent} generates adversaries by appending tokens to the end of a context. \nThese tokens are taken, in part, from the query, but are also likely to preserve the correct token span in the context. \nIn more detail, \\textsc{AddSent} proceeds as follows:\n\n\\begin{enumerate}\n \\item A copy of the query is appended to the context, but nouns and adjectives are replaced by their antonyms, as defined by WordNet~\\cite{miller1995}. Additionally, an attempt is made to replace every named entity and number with tokens of the same part-of-speech that are nearby with respect to the corresponding GloVe embeddings~\\cite{Pennington14glove:global}. If no changes are made in this step, the attack fails.\n \\item Next, a spurious token span is generated with the same type (defined using NER and POS tags\n from Stanford CoreNLP~\\cite{Manning14thestanford}) as the correct token span. Types are hand-curated using NER and POS tags and have associated fake answers.\n \\item The modified query and spurious token span are combined into declarative form using hand-crafted rules defined by the CoreNLP constituency parses.\n \\item Since the automatically generated sentences could be unnatural or ungrammatical, crowd-sourced workers correct these sentences. (This final step is not performed in our evaluation of \\textsc{AddSent}, since we aim to compare against fully automatic methods.) 
\n\\end{enumerate}\nNote that unlike \\textsc{AddAny}, \\textsc{AddSent} does not require access to the model's distribution over token spans, and thus, it does not require model extraction.\n\n\\textsc{AddSent} may return multiple candidate adversaries for a given query-context pair. \nIn such cases, each candidate is applied and the most effective (in terms of reducing instance-level F1 of the victim) is used in computing overall F1. To represent cases without access to (many) black box model evaluations, \\citet{jia2017adversarialRC} also experiment with using a randomly sampled candidate per instance when computing overall F1. This method is called \\textsc{AddOneSent}.\n\nFor the \\textsc{AddAny} and \\textsc{AddAny-kBest} approaches, we also distinguish between instances in which they are run on models extracted via the WIKI (\\textsc{W-A-argMax}, \\textsc{W-A-kBest}) or RANDOM (\\textsc{R-A-argMax}, \\textsc{R-A-kBest}) approaches. \n\nWe use the same experimental setup as \\citet{jia2017adversarialRC}. Additionally, we experiment with both prefixing and suffixing the adversarial sentence to the context. This does not result in drastically different F1 scores on the overall evaluation set. However, we did notice that in certain examples, for a given context $c$, the output of the model differs depending on whether the same adversary is prefixed or suffixed: sometimes prefixing results in a successful attack while suffixing does not, and vice versa. 
Since this behaviour was not documented to specifically favour either suffixing or prefixing, we stick to suffixing the adversary to the context, as done by \\citet{jia2017adversarialRC}.\n\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrr}\n\\hline\n\\textbf{Method} & \\textbf{Extracted (F1)} & \\textbf{Victim (F1)} \\\\\n\\hline\n\\textbf{W-A-kBest} & 10.9 & \\textbf{42.4} \\\\\nW-A-argMax & 9.7 & 68.3 \\\\\nR-A-kBest & 3.6 & 52.2 \\\\\nR-A-argMax & 3.7 & 76.1 \\\\ \n\\hline\nAddSent & - & 53.2 \\\\\nAddOneSent & - & 56.5 \\\\ \n\\hline\nCombined & - & 31.9 \\\\\n\\hline\n\n\\end{tabular}\n\n\\caption{\nThe first four rows report the results for experiments on variations of \\textsc{AddAny} (kBest\/argMax) and extraction schemes (WIKI and RANDOM). The ``extracted'' column lists the F1 score of the respective method used for generating adversaries. The ``victim'' column is the F1 score on the victim model when transferred from the extracted model (for \\textsc{AddAny} methods). For \\textsc{AddSent} and \\textsc{AddOneSent} it is the F1 score when directly applied on the victim model. The last row, ``Combined'', refers to the joint coverage of \\textsc{W-A-kBest} + \\textsc{AddSent}. \n}\n\\label{addAnyresults}\n\\end{table}\n\n\\subsection{Results}\nIn Table \\ref{addAnyresults}, we report the F1 scores of all methods on the extracted and victim models. The results reveal that the \\textsc{kBest} minimization (Section \\ref{addanykBestSection}) approach is most effective at reducing the F1 score of the victim. Notably, we observe a difference of over 25\\% in the F1 score between \\textsc{kBest} and \\textsc{argMax} in both the \\textsc{WIKI} and \\textsc{RANDOM} schemes. \n\nInterestingly, the \\textsc{AddSent} and \\textsc{AddOneSent} attacks are more effective than the \\textsc{AddAny-argMax} approach but less effective than the \\textsc{AddAny-kBest} approach. 
In particular, they reduce the F1 score to 53.2 (\\textsc{AddSent}) and 56.5 (\\textsc{AddOneSent}), as reported in Table \\ref{addAnyresults}. For completeness, we compare the \\textsc{AddAny} attack on the victim model (similar to the work done in \\citet{jia2017adversarialRC} for the LSTM and BiDAF models). Table \\ref{jiaAddOnResults} shows the results for bert-base-uncased among others for SQuAD v1.1. Only \\textsc{argMax} minimization is carried out here since there is no post-attack transfer. \n\nWe also study the coverage of \\textsc{W-A-kBest} and \\textsc{AddSent} on the evaluation dataset of 1000 samples (Figure \\ref{fig:vennAddSentAddAny}). \\textsc{W-A-kBest} and \\textsc{AddSent} induce an F1 score of 0 on 606 and 538 query-context pairs, respectively. Among these failures, 404 query-context pairs were common to both methods. Of the 404, 182 samples were a direct result of model failure of bert-base-uncased (the victim's exact match score of 81.8 corresponds to these 182 failure samples). If the methods are applied jointly, only 260 query-context pairs produce the correct answer, corresponding to an exact match score of 26 and an F1 score of 31.9 (Table \\ref{addAnyresults}). This is an indication that the two attacks in conjunction (represented by the ``Combined'' row in Table \\ref{addAnyresults}) provide wider coverage than either method alone.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.7\\linewidth]{venn.png}\n \\caption{Joint coverage of \\textsc{WIKI-AddAny-kBest} and \\textsc{AddSent} on the evaluation set.}\n \\label{fig:vennAddSentAddAny}\n\\end{figure}\n\n\\subsection{Fine-grained analysis}\nIn this section we analyze how successful the adversarial attack is for each answer \\emph{category}, as identified in previous work \\cite{rajpurkar2016squad}. 
Table \\ref{fineGrainedAnalysis} lists the 10 categories of ground-truth token spans, their frequency in the evaluation set, as well as the average F1 scores on the victim model before and after the adversarial attack. We observe that ground-truth spans of type ``places'' experienced a drastic drop in F1 score. ``Clauses'' had the highest average length and also had the highest drop in F1 score subject to the \\textsc{W-A-kBest} attack (almost double the average across classes). Category analysis such as this could help the community understand how to curate better attacks and ultimately train a model that is more robust on answer types that are most important or relevant for specific use cases.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrrrr}\n\\hline\n\\textbf{Category} & \\textbf{Freq \\%} & \\textbf{Before} & \\textbf{After} & \\textbf{Av-Len} \\\\ \n\\hline\nNames & 7.4 & 96.2 & 51.8 & 2.8\\\\\nNumbers & 11.4 & 92.1 & 51.3 & 2\\\\\nPlaces & 4.2 & 89 & 19.2 & 2.7\\\\\nDates & 8.4 & 96.8 & 40.3 & 2.1\\\\\nOther Ents & 7.2 & 90.9 & 58.7 & 2.5\\\\\nNoun Phrases & 48 & 88 & 41.8 & 2.3\\\\\nVerb Phrases & 2.7 & 91.1 & 41.1 & 4.8\\\\\nAdj Phrases & 1.9 & 70.3 & 27.8 & 1.6\\\\\nClauses & 1.3 & 82.9 & 7.6 & 6.8\\\\\nOthers & 9.5 & 89.7 & 34.7 & 5\\\\\n\\hline\n\\textbf{Total} & \\textbf{100} & \\textbf{89} & \\textbf{42.4} & \\textbf{2.7}\\\\\n\\hline\n\n\\end{tabular}\n\n\\caption{\nThere are 10 general categories into which the answer spans have been classified. The first four are entities, and \\emph{Other Ents} covers all other entities that do not fall into any of the four major categories. The second column is the frequency of ground truth answers belonging to each of the categories. The third column (Before) refers to the F1 score of questions corresponding to the category when evaluated on the Victim model. 
The fourth column (After) refers to the F1 score of questions corresponding to the category when evaluated on the Victim model in the presence of adversaries generated using the \\textsc{WIKI-AddAny-kBest} method. The \\emph{Av-Len} column is the average length of the answer spans in each category. \n}\n\\label{fineGrainedAnalysis}\n\\end{table}\n\n\\subsection{Model Extraction}\n\\label{sec:extract}\nThe first step in our approach is to build an approximation of the victim model via \\emph{model extraction}~\\cite{krishna2020thieves}. At a high level, this approach constructs a training set by generating inputs that are served to the victim model and collecting the victim's responses. The responses act as the labels of the inputs. After a sufficient number of inputs and their corresponding labels have been collected, a new model can be trained to predict the collected labels, thereby mimicking the victim. The approximate model is known as the \\emph{extracted} model. \n\nThe crux of model extraction is an effective method of generating inputs. \nRecall that in RC based QA, the input is composed of a query and a context. \nLike previous work, we employ two methods for generating contexts: \\textsc{WIKI} and \\textsc{RANDOM}~\\cite{krishna2020thieves}.\nIn the \\textsc{WIKI} scheme, contexts are randomly sampled paragraphs from the WikiText-103 dataset.\nIn the \\textsc{RANDOM} scheme, contexts are generated by sampling random tokens from the WikiText-103 dataset. 
\nFor both schemes, a corresponding query is generated by sampling random words from the context.\nTo make the queries resemble questions, tokens such as ``where,'' ``who,'' ``what,'' and ``why,'' are inserted at the beginning of each query, and a ``?'' symbol is appended to the end.\nLabels are collected by serving the sampled queries and contexts to the victim model.\nTogether, the queries, contexts, and labels are used to train the extracted model.\nAn example query-context pair appears in Table \\ref{exampleExtraction}.\n\n\n\n\\section{Introduction}\nMachine learning models are ubiquitous in technologies that are used by billions of people every day. In part, this is due to the recent success of deep learning. Indeed, research in the last decade has demonstrated that the most effective deep models can match or even outperform humans on a variety of tasks \\cite{Devlin2019BERTPO,xie2020}.\n\nDespite their effectiveness, deep models are also known to make embarrassing errors. \nThis is especially troublesome when those errors can be categorized as unsafe, e.g., racist, sexist, etc.~\\cite{wallace2019-universal}. This leads to the desire for methods to audit models for correctness, robustness and---above all else---safety, before deployment.\n\nUnfortunately, it is difficult to precisely determine the set of inputs on which a deep model fails because deep models are complex, have a large number of parameters---usually in the billions---and are non-linear \\cite{Radford2019LanguageMA}. In an initial attempt to automate the discovery of inputs on which these embarrassing failures occur, researchers developed a technique for making calculated perturbations to an image that are imperceptible to the human eye, but cause deep models to misclassify the image \\cite{szegedy2014intriguing}. 
In addition to developing more effective techniques for creating \\emph{adversarial inputs} for vision models \\cite{papernot2017practical}, subsequent research extends these ideas to new domains, such as natural language processing (NLP).\n\nNLP poses unique challenges for adversarial input generation because: \n\\begin{enumerate*}\n\\item natural language is discrete rather than continuous (as in the image domain); and\n\\item in NLP, an ``imperceptible perturbation'' of a sentence is typically construed to mean a semantically similar sentence, which can be difficult to generate.\n\\end{enumerate*}\n\nNevertheless, the study of adversarial input generation for NLP models has recently flourished, with techniques being developed for a wide variety of tasks such as text classification, textual entailment, and question answering~\\cite{jin2019robustbert,wallace2019-universal,li2020bertattack,jia2017adversarialRC}.\n\nThese new techniques can be coarsely categorized into two groups: \\emph{white box attacks}, where the attacker has full knowledge of the \\emph{victim} model---including its parameters---and \\emph{black box attacks}, where the attacker only has access to the victim's predictions on specified inputs. \nUnsurprisingly, white box attacks tend to exhibit much greater efficacy than black box attacks. \n\nIn this work, we develop a technique for black box adversarial input generation for the task of reading comprehension that employs a white box attack on an approximation of the victim. More specifically, our approach begins with \\emph{model extraction}, where we learn an approximation of the victim model~\\cite{krishna2020thieves}; afterward, we run a modification of the \\textsc{AddAny}~\\cite{jia2017adversarialRC} attack on the model approximation. Our approach is inspired by the work of \\citet{papernot2017practical} for images and can also be referred to as a \\emph{black box evasion attack} on the original model. 
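To make the extract-then-attack recipe concrete, the sketch below illustrates its two stages: build an approximate model from black box queries, then search for a perturbation against the approximation and transfer it to the victim. This is a toy illustration under stated assumptions; the ``models'' and function names are stand-ins, not our BERT models or the actual \textsc{AddAny} implementation.

```python
# Toy sketch of the extract-then-attack pipeline (illustrative stand-ins
# only; not the actual BERT models or the AddAny search procedure).
import random

def extract_model(victim, contexts, queries_per_context):
    # Query the black box victim with nonsensical questions built from
    # random context words, and "train" on its answers (here: memorize).
    memory = {}
    for context in contexts:
        for _ in range(queries_per_context):
            words = random.sample(context.split(), 2)
            question = "what " + " ".join(words) + " ?"
            memory[context] = victim(question, context)
    return lambda q, c: memory.get(c, "")

def attack_and_transfer(extracted, victim, question, context, distractors):
    # White-box-style search on the extracted model: find an appended
    # distractor that changes the extracted model's answer, then apply
    # the same perturbed context to the black box victim.
    original = extracted(question, context)
    for d in distractors:
        perturbed = context + " " + d
        if extracted(question, perturbed) != original:
            return victim(question, perturbed)
    return victim(question, context)
```

In our actual method, the inner search is the \textsc{AddAny} attack on an extracted BERT model, and the success test considers the model's top-k spans rather than a single answer.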
\n\nSince the \\textsc{AddAny} attack is run on an \\emph{extracted} (i.e., approximate) model of the victim, our modification encourages the attack method to find inputs for which the extracted model's top-k responses are all incorrect, rather than only its top response---as in the original \\textsc{AddAny} attack.\nThe result of our \\textsc{AddAny} attack is a set of adversarial perturbations, which are then applied to induce failures in the victim model. Empirically, we demonstrate that our approach is more effective than \\textsc{AddSent}, a black box method for adversarial input generation for reading comprehension~\\cite{jia2017adversarialRC}. \nCrucially, we observe that our modification of \\textsc{AddAny} makes the attacks produced more robust to the difference between the extracted and victim models. In particular, our black box approach causes the victim to fail 11\\% more than \\textsc{AddSent}. While we focus on reading comprehension, we believe that our approach of model extraction followed by white box attacks is a fertile and relatively unexplored area that can be applied to a wide range of tasks and domains.\n\n\\textbf{Ethical Implications:} The primary motivation of our work is helping developers test and probe models for weaknesses before deployment. While we recognize that our approach could be used for malicious purposes, we believe our methods ultimately serve to promote model safety.\n\n\\section{Method}\nOur goal is to develop an effective black box attack for RC based QA models. 
Our approach proceeds in two steps: first, we build an approximation of the victim model, and second, we attack the approximate model with a powerful white box method. The result of the attack is a collection of adversarial inputs that can be applied to the victim. In this section, we describe these steps in detail.\n\n\\input{extract}\n\n\\input{attack}\n\n\n\\section{Related Work}\nOur work studies black box adversarial input generation for reading comprehension. The primary building blocks of our proposed approach are model extraction and white box adversarial input generation, which we discuss below. We also briefly describe related methods of generating adversarial attacks for NLP models.\n\nA contemporary work that uses a similar approach to ours is \\citet{wallace2020imitation}. While we carry out model extraction using nonsensical inputs, their work uses high-quality out-of-distribution (OOD) sentences to extract a machine translation model. Notably, in the extraction approach we follow~\\cite{krishna2020thieves}, the extracted model reaches within 95\\% of the victim model's F1 score with the same query budget that was used to train the victim model. This is in contrast to the roughly 3x query budget required for extraction in their work. The different nature of the task and the methods followed while querying OOD datasets could be a possible explanation for the disparities.\n\n\\paragraph{Nonsensical Inputs and Model Extraction:} Nonsensical inputs to text-based systems have been the subject of recent study, but were not explored for extraction until recently \\citep{krishna2020thieves}. \\citet{feng2018-pathologies} studied model outputs while trimming down inputs to an extent where the input turned nonsensical for a human evaluator. Their work showed how nonsensical inputs produced overly confident model predictions. 
Using white box access to models, \\citet{wallace2019-universal} discovered that it was possible to generate input-agnostic nonsensical triggers that are effective adversaries on existing models on the SQuAD dataset. \n\n\\paragraph{Adversarial attacks:} The first adversarial attacks against black box, deep neural network models focused on computer vision applications~\\cite{papernot2017practical}. In concept, adversarial perturbations are transferable from computer vision to NLP, but techniques to mount successful attacks in NLP vary significantly from their analogues in computer vision. This is primarily due to the discreteness of NLP (vs. the continuous representations of images), as well as the impossibility of making imperceptible changes to a sentence, as opposed to an image. In the case of text, humans can comfortably identify the differences between the perturbed and original sample, but can still agree that the two examples convey the same meaning for the task at hand (hence the expectation that outputs should be the same).\n\nHistorically, researchers have employed various approaches for generating adversarial textual examples. In machine translation, \\citet{belinkov2017synthetic} applied minor character-level perturbations that resemble typos. \\citet{hosseini2017perspective} targeted Google's Perspective system that detects text toxicity. They showcased that toxicity scores could be significantly reduced with the addition of characters and the introduction of spaces and full stops (i.e., periods, ``.'') in between words. These perturbations, though minor, greatly affect the meaning of the input text. \\citet{alzantot2018adversarial} proposed an iterative word-based replacement strategy for tasks like text classification and textual entailment for LSTMs. \\citet{jin2019robustbert} extended the above experiments for BERT. 
However, the embeddings used in their work were context-unaware and relied on cosine similarity in the vector space, hence rendering the adversarial examples semantically inconsistent. \\citet{li2018textbugger} carried out a similar study for sentiment analysis in convolutional and recurrent neural networks. In contrast to prior work, \\citet{jia2017adversarialRC} were the first to evaluate models for RC based QA using the SQuAD v1.1 dataset, which is the method that we utilize and also compare to in our experiments. \n\nThe universal adversarial triggers method \\cite{wallace2019-universal} generates adversarial examples for the SQuAD dataset, but cannot be compared to our work since it is a white box, targeted adversarial attack. \\citet{ribeiro2018-semantically} introduced a method to detect bugs in black box models that generates \\emph{semantically equivalent adversaries} and also generalizes them into rules. Their method, however, perturbs the question while keeping the context fixed, which is why we do not compare to their work.\n\n\\section{Introduction}\n\n{\\em Explainable AI} refers to artificial intelligence and machine learning techniques that can provide human understandable justification for their behavior. \n Explainability is important in situations where human operators work alongside autonomous and semi-autonomous systems because it can help build rapport, confidence, and understanding between the agent and its operator.\nIn the event that an autonomous system fails to complete a task or completes it in an unexpected way, explanations help the human collaborator\nunderstand the circumstances that led to the behavior, which also allows the operator to make an informed decision on how to address the behavior. 
\n\nPrior work on explainable AI (XAI) has primarily focused on non-sequential problems such as image classification and captioning~\\cite{wang2017residual,xu2015show,you2016image}.\nSince these environments are episodic in nature, the model's output depends only on its input.\nIn sequential environments, decisions that the agent has made in the past influence future decisions. \nTo simplify this, agents often make locally optimal decisions by selecting actions that maximize some discrete notion of expected future reward or utility.\nTo generate plausible explanations in these environments, the model must unpack this local reward or utility to reason about how current actions affect future actions. On top of that, it needs to communicate the reasoning in a human understandable way, which is a difficult task. \nTo address this challenge of human understandable explanation, we introduce the alternative task of rationale generation in sequential environments. \n\n\n{\\em Automated rationale generation} is a process of producing a natural language explanation for agent behavior {\\em as if a human had performed the behavior}~\\cite{ehsan2017rationalization}.\nThe intuition behind rationale generation is that humans can engage in effective communication\nby verbalizing plausible motivations for their action. The communication can be effective even when the verbalized reasoning does not have a consciously accessible neural correlate of\nthe decision-making process~\\cite{block2007consciousness,block2005two,fodor1994elm}. 
\nWhereas an explanation can be in any communication modality, rationales are natural language explanations that do not literally expose the inner workings of an intelligent system.\nExplanations can be made by exposing the inner representations and data of a system, though this type of explanation may not be accessible or understandable to non-experts.\nIn contrast, contextually appropriate natural language rationales are\naccessible and intuitive to non-experts, facilitating understanding and communicative effectiveness. \nHuman-like communication can also afford human factors advantages such as higher degrees of satisfaction, confidence, rapport, and willingness to use autonomous systems.\nFinally, rationale generation is fast, sacrificing \nan accurate view of agent decision-making for real-time response, making it appropriate for real-time human-agent collaboration. \nShould deeper, more grounded and technical\nexplanations be necessary, rationale generation may need to be supplemented by other explanation or visualization techniques.\n\nIn preliminary work~\\cite{ehsan2017rationalization} we showed that recurrent neural networks can be used to translate internal state and action representations into natural language. \nThat study, however, relied on synthetic natural language data for training. \nIn this work, we explore whether human-like plausible rationales can be generated using a non-synthetic, natural language corpus of human-produced explanations. \nTo create this corpus, we developed a methodology for conducting remote think-aloud protocols \\cite{fonteyn1993description}. \nUsing this corpus, we then use \na neural network based on~\\cite{ehsan2017rationalization}\nto translate an agent's state and action information into natural language rationales, and show how variations in model inputs can produce two different types of rationales. \nTwo user studies help us understand the perceived quality of the generated rationales along dimensions of human factors. 
\nThe first study indicates that our rationale generation technique produces plausible and high-quality rationales and explains the differences in user perceptions. \nIn addition to understanding user preferences, the second study demonstrates how the intended design behind the rationale types aligns with their user perceptions.\n\nThe philosophical and linguistic discourse around the notion of explanations~\\cite{miller2017explanation, lipton2001good} is beyond the scope of this paper. \nTo avoid confusion, we use the word ``rationale'' to refer to natural language-based post-hoc explanations that are meant to sound like what a human would say in the same situation.\nWe opt for ``rationale generation'' instead of ``rationalization'' to signal that the agency lies with the receiver and interpreter (human being) instead of the producer (agent). \nMoreover, the word rationalization may carry a connotation of making excuses~\\cite{maruna2006fundamental} for an (often controversial) action, which is another reason why we opt for \\textit{rationale generation} as our term of choice.\n\nIn this paper, we make the following contributions:\n\\begin{itemize}\n \\item We present a methodology for collecting high-quality human explanation data based on remote think-aloud protocols. \n \\item We show how this data can be used to configure neural translation models to produce two types of human-like rationales: (1)~concise, localized and (2)~detailed, holistic rationales. 
We demonstrate the alignment between the intended design of rationale types and the actual perceived differences between them.\n \\item We quantify the perceived quality of the rationales and preferences between them, and we use qualitative data to explain these perceptions and preferences.\n\\end{itemize}\n\n\\section{Related Work}\n\nMuch of the previous work on explainable AI has focused on {\\em interpretability}.\nWhile there is no one definition of interpretability with respect to machine learning models, we view interpretability as a property of machine learned models that dictates the degree to which a human user---AI expert or end user---can come to conclusions about the performance of the model on specific inputs.\nSome types of models are inherently interpretable, meaning they require relatively little effort to understand. \nOther types of models require more effort to make sense of their performance on specific inputs. \nSome non-inherently interpretable models can be made interpretable in a post-hoc fashion through explanation or visualization.\nModel-agnostic post-hoc methods can help to make models intelligible without custom explanation or visualization technologies and without changing the underlying model to make them more interpretable~\\cite{ribeiro2016should,yosinski2015understanding}. 
\n\nExplanation generation can be described as a form of {\\em post-hoc interpretability}~\\cite{2016arXiv160603490L, miller2017explanation}; explanations are generated on-demand based on the current state of a model and---potentially---meta-knowledge about how the algorithm works.\nAn important distinction between interpretability and explanation is that explanation does not elucidate precisely how a model works but aims to give useful information for practitioners and end users.\nAbdul et al.~\\cite{abdul2018trends} conduct a comprehensive survey on trends in explainable and intelligible systems research.\n\nOur rationale generation approach is a model-agnostic explanation technique that works by translating the internal state and action representations of an arbitrary reinforcement learning system into natural language.\nAndreas, Dragan, and Klein~\\cite{andreas2017translating} describe a technique that translates message-passing policies between two agents into natural language.\nAn alternative approach to translating internal system representations into natural language is to add explanations to a supervised training set such that a model learns to output a classification as well as an explanation~\\cite{codella2018teaching}.\nThis technique has been applied to generating explanations about procedurally generated game level designs~\\cite{guzdial2018explainable}.\n\nBeyond the technology, user perception and acceptance matter because they influence trust in the system, which is crucial to adoption of the technology. \nEstablished fields such as information systems enjoy a robust array of technology acceptance models such as the Technology Acceptance Model (TAM) \\cite{davis1989perceived} and Unified Theory of Acceptance and Use of Technology Model (UTAUT) \\cite{venkatesh2003user} whose main goal is to explain variables that influence user perceptions. 
\nUtilizing dimensions such as perceived usefulness and perceived ease of use, the TAM model aimed to explain prospective expectations about technological artifacts. \nUTAUT uses constructs like performance expectancy, effort expectancy, etc. to understand technology acceptance. The constructs and measures in these models build on each other. \n\nIn contrast, due to a rapidly evolving domain, a robust and well-accepted user perception model of XAI agents is yet to be developed. \nUntil then, we can take inspiration from general acceptance models (such as TAM and UTAUT) and adapt their constructs to understand the perceptions of XAI agents. \nFor instance, the human-robot interaction community has used them as a basis to understand users' perceptions towards robots~\\cite{ezer2009attitudinal, beer2011understanding}. \nWhile these acceptance models are informative, they often lack sociability factors such as ``humanlike-ness''.\nMoreover, TAM-like models do not account for autonomy in systems, let alone autonomous XAI systems. \nBuilding on some constructs from TAM-like models and original formative work, we attempt to address the gaps in understanding user perceptions of rationale-generating XAI agents. \n\nThe dearth of established methods, combined with the variable conceptions of explanations, makes evaluating XAI systems challenging. \nBinns et al.~\\cite{binns2018s} use scenario-based survey design~\\cite{carroll2000making} and present different types of hypothetical explanations for the same decision to measure perceived levels of justice. \nOne non-neural approach evaluates the usefulness and naturalness of generated explanations~\\cite{broekens2010you}.\nRader et al.~\\cite{rader2018explanations} use explanations manually generated from content analysis of Facebook's News Feed to study perceptions of algorithmic transparency. 
\nOne key differentiating factor of our approach is that our evaluation is based on rationales that are actual system outputs (compared to hypothetical ones). \nMoreover, user perceptions of our system's rationales directly influence the design of our rationale generation technique. \n\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=1.0\\linewidth]{frogger-pipeline2.png}\n \\vspace{-1.5\\baselineskip}\n \\caption{End-to-end pipeline for training a system that can generate explanations.}\n\\label{fig:end-to-end}\n\\end{figure}\n\n\\section{Learning to Generate Rationales}\n\nWe define a \\textit{rationale} as an explanation that justifies an action based on how a human would think. \nThese rationales do not necessarily reveal the true decision-making process of an agent, but still provide insights about why an agent made a decision in a form that is easy for non-experts to understand.\n\nRationale generation requires translating events in the game environment into natural language outputs. Our approach to rationale generation involves two steps: (1)~collect a corpus of think-aloud data from players who explained their actions in a game environment; and (2)~use this corpus to train an encoder-decoder network to generate plausible rationales for any action taken by an agent (see Figure~\\ref{fig:end-to-end}).\n\nWe experiment with rationale generation using autonomous agents that play the arcade game {\\em Frogger}.\nFrogger is a good candidate for our experimental design of a rationale generation pipeline for general sequential decision making tasks because it is a simple Markovian environment, making it an ideal stepping stone towards a real-world environment. 
\nOur rationale generation technique is agnostic to the type of agent or how it is trained, as long as the representations of states and actions used by the agent can be exposed to the rationale generator and serialized.\n\n\subsection{Data Collection Interface}\n\nThere is no readily available dataset for the task of learning to generate explanations. Thus, we developed a methodology to collect live ``think-aloud'' data from players as they played through a game. This section covers the two objectives of our data collection endeavor:\n\begin{enumerate}\n\item Create a think-aloud protocol in which players provide natural rationales for their actions. \n\item Design an intuitive player experience that facilitates accurate matching of the participants' utterances to the appropriate state in the environment. \n\end{enumerate}\n\n\begin{figure}[t]\n\begin{center}\n \includegraphics[width=0.75\linewidth]{Play4.png}\n \end{center}\n \caption{Players take an action and verbalize their rationale for that action. (1)~After taking each action, the game pauses for 10 seconds. (2)~Speech-to-text transcribes the participant's rationale for the action. (3)~Participants can view their transcribed rationales in near real time and edit them if needed.}\n \label{fig:Play}\n\end{figure}\n\nTo train a rationale-generating explainable agent, we need data linking game states and actions to their corresponding natural language explanations. To achieve this goal, we built a modified version of Frogger in which players simultaneously play the game and also explain each of their actions. \nThe entire process is divided into three phases: (1)~a guided tutorial, (2)~rationale collection, and (3)~transcribed explanation review.\n\nDuring the guided tutorial~(1), our interface provides instruction on how to play through the game, how to provide natural language explanations, and how players can review\/modify any explanations they have given. 
\nThis helps ensure that users are familiar with the interface and its use before they begin providing explanations. \n\nFor rationale collection~(2), participants play through the game while explaining their actions out loud in a turn-taking mechanism.\nFigure~\ref{fig:Play} shows the game embedded into the explanation collection interface. \nTo help couple explanations with actions (attach annotations to concrete game states), the game pauses for 10 seconds after each action is taken.\nDuring this time, the player's microphone automatically turns on and the player is asked to explain their most recent action while a speech-to-text library \cite{github_2017} automatically transcribes the explanation in real time. \nThe automatic transcription substantially reduces participant burden as it is more efficient\nthan typing an explanation. \nPlayers can take more or less time than the default 10-second pause to provide the explanation. \nOnce done explaining, they can view their transcribed text and edit it if necessary.\nDuring pretesting with 14 players, we observed that players often repeat a move for which the explanation is the same as before. \nTo reduce the burden of repetition, we added a ``redo'' button that can be used to recycle rationales for consecutive repeated actions.\nWhen the game play is over, players move to the transcribed-explanation review portion~(3). Here, they can step through all of the action-explanation pairs. This stage allows reviewing in both a situated and global context. \n\nThe interface is designed so that no manual hand-authoring\/editing of our explanation data was required before\nusing it to train our machine learning model. Throughout the game, players have the opportunity to organically edit their own data without impeding their workflow. This added layer of organic \nediting is crucial in ensuring that we can directly input the collected data into the network with zero manual cleaning. 
While we use Frogger as a test environment in our experiments, a similar user experience can be designed using other turn-based environments with minimal effort.\n\n\begin{figure}[t]\n \includegraphics[width=\linewidth]{review_with_legend.png}\n\caption{Players can step through each of their action-rationale pairs and edit if necessary. (1)~Players can watch an action replay while editing rationales. (2)~These buttons control the flow of the step-through process. (3)~The rationale for the current action gets highlighted for review.}\n \label{fig:Review}\n\end{figure}\n\n\subsection{Neural Translation Model}\n\nWe use an encoder-decoder network~\cite{bahdanau2014neural} to generate relevant natural language explanations for any given action. \nThese kinds of networks are commonly used for machine translation and dialogue generation, and their ability to capture sequential dependencies between the input and the output makes them suitable for our task. \nOur encoder-decoder architecture is similar to that used in \cite{ehsan2017rationalization}. \nThe network learns how to translate the input game state representation\n$X = x_1, x_2, \ldots, x_n$, composed of the representation of the game combined with other influencing factors,\ninto an output rationale as a sequence of words \n$Y = y_1, y_2, \ldots, y_m$,\nwhere $y_i$ is a word.\nThus our network learns to translate game state and action information into natural language rationales.\n\nThe encoder and decoder are both recurrent neural networks (RNNs) composed of Gated Recurrent Unit (GRU) cells, since our training process involved a small amount of data.\nThe decoder network uses an additional attention mechanism~\cite{luong2015effective} to learn to weight the importance of different components of the input with regard to their effect on the output. 
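To make this architecture concrete, the following is a minimal, untrained forward-pass sketch of a GRU encoder-decoder with dot-product attention and greedy decoding. All dimensions, vocabulary sizes, and weights here are illustrative assumptions; the actual model is trained end-to-end on the think-aloud corpus and uses the attention formulation of \cite{luong2015effective}.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell with random (untrained) weights."""
    def __init__(self, in_dim, hid_dim):
        init = lambda *shape: rng.normal(0.0, 0.1, shape)
        self.Wz, self.Uz = init(hid_dim, in_dim), init(hid_dim, hid_dim)
        self.Wr, self.Ur = init(hid_dim, in_dim), init(hid_dim, hid_dim)
        self.Wh, self.Uh = init(hid_dim, in_dim), init(hid_dim, hid_dim)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)          # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)          # reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1.0 - z) * h + z * h_cand

def encode(cell, emb, symbol_ids, hid_dim):
    """Run the encoder over the serialized game state X = x_1..x_n."""
    h, states = np.zeros(hid_dim), []
    for s in symbol_ids:
        h = cell.step(emb[s], h)
        states.append(h)
    return np.stack(states)

def attention(h_dec, enc_states):
    """Dot-product attention: weight encoder states by relevance."""
    scores = enc_states @ h_dec
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ enc_states                               # context vector

def greedy_decode(cell, emb, W_out, enc_states, bos, eos, max_len=12):
    """Emit words y_1..y_m until EOS, feeding back the argmax word."""
    h, word, out = enc_states[-1].copy(), bos, []
    for _ in range(max_len):
        ctx = attention(h, enc_states)
        h = cell.step(np.concatenate([emb[word], ctx]), h)
        word = int(np.argmax(W_out @ h))
        if word == eos:
            break
        out.append(word)
    return out

# Illustrative sizes: state-symbol embeddings, hidden state, vocabularies.
E, H, STATE_VOCAB, WORD_VOCAB = 8, 16, 20, 30
enc_cell = GRUCell(E, H)
dec_cell = GRUCell(E + H, H)            # decoder input: word embedding + context
enc_emb = rng.normal(0, 0.1, (STATE_VOCAB, E))
dec_emb = rng.normal(0, 0.1, (WORD_VOCAB, E))
W_out = rng.normal(0, 0.1, (WORD_VOCAB, H))

X = [3, 1, 4, 1, 5, 9, 2, 6]            # a serialized game state (made up)
states = encode(enc_cell, enc_emb, X, H)
Y = greedy_decode(dec_cell, dec_emb, W_out, states, bos=0, eos=1)
```

In the real pipeline these weights are learned, so the decoded word sequence becomes a meaningful rationale rather than the arbitrary output this random-weight sketch produces.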
\n\nTo simplify the learning process, the state of the game environment is serialized into a sequence of symbols where each symbol characterizes a sprite in the grid-based representation of the world.\nTo this, we append information concerning Frogger's position, the most recent action taken, and the number of lives the player has left to create the input representation $X$. \nOn top of this network structure, we vary the input configurations with the intention of producing varying styles of rationales. \nEmpirically, we found that a reinforcement learning agent using tabular $Q$-learning \cite{watkins92} learns to play the game effectively when given a limited window for observation.\nThus a natural configuration for the rationale generator is to give it the same observation window that the agent needs to learn to play.\nWe refer to this configuration of the rationale generator as the {\em focused-view} generator.\nThis view, however, potentially limits the types of rationales that can be learned since the agent will only be able to see a subset of the full state. Thus we formulated a second configuration that gives the rationale generator the ability to use all information on the board to produce rationales.\nWe refer to this as the {\em complete-view} generator.\nAn underlying question is thus whether rationale generation should use the same information that the underlying black-box reasoner needs to solve a problem or if more information is advantageous at the expense of making rationale generation a harder problem.\nIn the studies described below, we seek to understand how these configurations affect human perceptions of the agent when presented with generated rationales.\n\n\subsubsection{Focused-view Configuration}\nIn the \textit{focused-view} configuration, we used a windowed representation of the grid, i.e. 
only a $7\\times7$ window around the Frog was used in the input.\nBoth playing an optimal game of Frogger and generating relevant explanations based on the current action taken typically only requires this much local context. \nTherefore providing the agent with only the window around Frogger helps the agent produce explanations grounded in it's neighborhood. \nIn this configuration, we designed the inputs such that the network is prone to prioritize short-term planning producing localized rationales instead of long-term planning. \n\n\n\\subsubsection{Complete-view Configuration}\nThe \\textit{complete-view} configuration is an alternate setup that provides the entire game board as context for the rationale generation.\nThere are two differences between this configuration and the focused-view configuration.\nFirst, we use the entire game screen as a part of the input. The agent now has the opportunity to learn which other long-term factors in the game may influence it's rationale.\nSecond, we added noise to each game state to force the network to generalize when learning, reduce the likelihood that spurious correlations are identified, and to give the model equal opportunity to consider factors from all sectors of the game screen.\nIn this case noise was introduced by replacing input grid values with dummy values. 
For each grid element, there was a $20\%$ chance that it would get replaced with a dummy value.\nGiven the input structure and scope, this configuration should prioritize rationales that exhibit long-term planning and consider the broader context.\n\n\begin{table}[h]\n \caption{Examples of \textit{focused-view} vs.\ \textit{complete-view} rationales generated by our system for the same set of actions.}\n \label{tab:examples}\n \begin{tabular}{p{0.1\columnwidth}p{0.4\columnwidth}p{0.4\columnwidth}}\n \toprule\n {\bf Action} & {\bf Focused-view} & {\bf Complete-view}\\\n \midrule\n Right & I had cars to the left and in front of me so I needed to move to the right to avoid them. & I moved right to be more centered. This way I have more time to react if a car comes from either side. \\\n \rowcolor{tablerowcolor} Up & The path in front of me was clear so it was safe for me to move forward. & I moved forward making sure that the truck won\textquotesingle t hit me so I can move forward one spot. \\\n Left & I move to the left so I can jump onto the next log. & I moved to the left because it looks like the logs and top or not going to reach me in time, and I\textquotesingle m going to jump off if the law goes to the right of the screen. \\\n \rowcolor{tablerowcolor} Down & I had to move back so that I do not fall off. & I jumped off the log because the middle log was not going to come in time. So I need to make sure that the laws are aligned when I jump all three of them. \\\n \bottomrule\n\end{tabular}\n\end{table}\n\n\section{Perception Study: Candidate vs. 
Baseline Rationales}\nIn this section, we\nassess whether the rationales generated using our technique are plausible and explore \nhow humans perceive them along various dimensions of human factors.\nFor our rationales to be plausible, we would expect human users to indicate a strong preference for rationales generated by our system (either configuration) over those generated by a baseline rationale generator.\nWe also compare them to exemplary human-produced explanations to get a sense of how far from the upper bound we are.\n\nThis study aims to achieve two main objectives.\nFirst, it seeks to confirm the hypothesis that humans prefer rationales generated by each of the configurations over randomly selected rationales across all dimensions.\nWhile this baseline is low, it establishes that rationales generated by our technique are not nonsensical. \nWe can also measure the distance from the upper bound (exemplary human rationales) for each rationale type. \nSecond, we attempt to understand the underlying components that influence \nthe perceptions of the generated rationales along four dimensions of human factors: {\em confidence}, {\em human-likeness}, {\em adequate justification}, and {\em understandability}.\n\n\subsection{Method}\n\nTo gather the training set of game state annotations, we deployed our data collection pipeline on {\em TurkPrime}~\cite{litman2017turkprime}. \nFrom 60 participants\nwe collected over 2000 samples of human actions in Frogger coupled with natural language explanations. The average duration of this task was around 36 minutes.\nThe parallel corpus of the collected game state images and natural language explanations was used to train the encoder-decoder network.\nEach RNN in the encoder and the decoder was parameterized with GRU cells with a hidden vector size of 256. 
\nThe entire encoder-decoder network was trained for 100 epochs.\n\nFor the perception user study, we collected both within-subject and between-subject data.\nWe recruited 128 participants, split into two equal experimental groups through {\em TurkPrime}: Group 1 (age range = 23 - 68 years, M = 37.4, SD = 9.92) and Group 2 (age range = 24 - 59 years, M = 35.8, SD = 7.67). \nOn average, the task duration was approximately 49 minutes.\n46\% of our participants were women, and 93\% of participants self-reported being from the United States while the remaining 7\% self-reported being from India.\n\n\begin{figure}\n\centering\n \includegraphics[width=1.0\linewidth]{Study_Screenshot.PNG}\n \caption{Screenshot from user study (setup 2) depicting the action taken and the rationales: \textit{P = Random, Q = Exemplary, R = Candidate}}\n \label{fig:Study}\n\end{figure}\n\nAll participants watched a counterbalanced series of five videos.\nEach video depicted an action taken by Frogger accompanied by three different types of rationales that justified the action (see Figure~\ref{fig:Study}).\nParticipants rated each rationale on a labeled, 5-point, bipolar Likert scale along four perception dimensions (described below). \nThus, each participant provided 12 ratings per action, leading to 60 perception ratings across the five actions. \nThe set of Frogger's actions was drawn from actions collected from human players. These actions were then fed into the system to generate rationales to be evaluated in the user studies. \nTo balance participant burden and fatigue against the number of actions and regions of the game covered, we pretested with 12 participants. \nFive actions was the limit beyond which participants' fatigue and burden substantially increased. \nTherefore, we settled on five actions (up (twice), down, left, and right) in the major regions of the game -- amongst the cars, at a transition point, and amongst the logs. 
\nThis allowed us to test our rationale generation configurations in all possible action-directions in all the major sections of the game. \n\nThe study had two identical experimental conditions, differing only in the type of \textit{candidate rationale}. \nGroup 1 evaluated the \textit{focused-view} rationales, while Group 2 evaluated the \textit{complete-view} rationales.\nIn each video, the action was accompanied by three rationales generated by three different techniques (see Figure~\ref{fig:Results_1}): \n\begin{itemize}\n\item The \textit{exemplary rationale} is the rationale from our corpus that three researchers unanimously agreed on as the best one for a particular action. Researchers independently selected rationales they deemed best and iterated until consensus was reached.\nThis is provided as an upper bound for contrast with the next two techniques.\n\item The \textit{candidate rationale} is the rationale produced by our network, either the focused-view or complete-view configuration.\n\n\item The \textit{random rationale} is a randomly chosen rationale from our corpus.\n\end{itemize}\n\n\n\noindent\nFor each rationale, participants used a 5-point Likert scale to rate their endorsement of each of the following four statements, which correspond to four dimensions of interest. \n\n\begin{enumerate}\n \item \textit{Confidence:} This rationale makes me confident in the character's ability to perform its task.\n \item \textit{Human-likeness:} This rationale looks like it was made by a human.\n \item \textit{Adequate justification:} This rationale adequately justifies the action taken.\n \item \textit{Understandability:} This rationale helped me understand why the agent behaved as it did.\n\end{enumerate}\n\nResponse options on a clearly labeled bipolar Likert scale ranged from ``strongly disagree'' to ``strongly agree''. In a mandatory free-text field, participants explained their reasoning behind the ratings for a particular set of three rationales. 
After answering these questions, they provided demographic information.\n\nThese four dimensions emerged from an iterative filtering process that included preliminary testing of the study, informal interviews with experts and participants, and a literature review on robot and technology acceptance models. Inspired by the acceptance models, we created a set of dimensions that were contextually appropriate for our purposes. \n\nDirect one-to-one mapping from existing models was not feasible, given the novelty and context of the Explainable AI technology.\nWe adapted \textit{confidence}, a dimension that impacts trust in the system \cite{kaniarasu2013robot}, from constructs like performance expectancy \cite{venkatesh2003user} (from UTAUT) and robot performance~\cite{beer2011understanding, chernova2009confidence}.\n\textit{Human-likeness}, central to generating human-centered rationales, was inspired by sociability and anthropomorphization factors from HRI work on robot acceptance~\cite{nass1994machines,nass1996can,nass2000machines}. \nSince our rationales are justificatory in nature, \textit{adequate justification} is a reasonable measure of output quality (adapted from TAM).\nOur rationales also need to be \textit{understandable}, which can signal perceived ease of use (from TAM). \n\n\subsection{Quantitative Analysis}\n\nWe used a multi-level model to analyze our data.\nAll variables were within-subjects except for one: whether the candidate style was focused-view (Group 1) or complete-view (Group 2). \nThis was a between-subject variable.\n\nThere were significant main effects of rationale style ($\chi^2\left(2\right) = 594.80, p<.001$) and dimension ($\chi^2\left(2\right) = 66.86, p<.001$) on the ratings. 
\nThe main effect of experimental group was not significant ($\chi^2\left(1\right) = 0.070, p=0.79$).\nFigure~\ref{fig:Results_1} shows the average responses to each question for the two different experimental groups.\nOur results support our hypothesis that rationales generated with the \textit{focused-view} generator and the \textit{complete-view} generator were judged significantly better across all dimensions than the random baseline\n($b=1.90, t\left(252\right)=8.09,p<.001$). \nIn addition, exemplary rationales were judged significantly higher than candidate rationales.\n\nThough there were significant differences between each kind of candidate rationale and the exemplary rationales, those differences were not the same.\nThe difference between the \textit{focused-view} candidate rationales and exemplary rationales was significantly \textit{greater} than the difference between \textit{complete-view} candidate rationales and exemplary rationales ($p=.005$). \nSurprisingly, this was because the exemplary rationales were rated {\em lower} in the presence of complete-view candidate rationales ($t\left(1530\right)=-32.12,p<.001$).\nSince three rationales were presented simultaneously in each video, it is likely that participants were rating the rationales relative to each other. \nWe also observe that the \textit{complete-view} candidate rationales received higher ratings in general than did the \textit{focused-view} candidate rationales ($t\left(1530\right)=8.33,p<.001$).\n\nIn summary, we have confirmed our hypothesis that both configurations produce rationales that perform significantly better than the \textit{random} baseline across all dimensions. 
\n\n\\begin{figure}[t]\n\\subcaptionbox{Focus-View condition.\\label{fig:Results_1a}}{\\includegraphics[width=1.0\\linewidth]{Bar_a_exactDimension.png}}\\hfill\n\\subcaptionbox{Complete-View condition.\\label{fig:Results_1b}}{\\includegraphics[width=1.0\\linewidth]{Bar_b_exactDimension.png}}\\hfill\n \\caption{Human judgment results. }\n \\label{fig:Results_1}\n\\end{figure}\n\n\\subsection{Qualitative Findings and Discussion}\n\n\nIn this section, we look at the open-ended responses provided by our participants to better understand the criteria that participants used when making judgments about the \\textit{confidence, human-likeness, adequate justification,} and \\textit{understandability} of generated rationales. \nThese situated insights augment our understanding of rationale generating systems, enabling us to design better ones in the future.\n\n\nWe analyzed the open-ended justifications participants provided using a combination of thematic analysis \\cite{aronson1994pragmatic} and grounded theory \\cite{strauss1994grounded}. \nWe developed codes that addressed different types of reasonings behind the ratings of the four dimensions under investigation. \nNext, the research team clustered the codes under emergent themes, which form the underlying \\textit{components} of the dimensions. Iterating until consensus was reached, researchers settled on the five most relevant components: (1)~\\textit{Contextual Accuracy}, (2)~\\textit{Intelligibility}, (3)~\\textit{Awareness},\n(4)~\\textit{Relatability}, and (5)~\\textit{Strategic Detail} (see Table \\ref{tab:components}).\nAt varying degrees, multiple components influence more than one dimension; that is, there isn't a mutually exclusive one-to-one relationship between components and dimensions. \n\nWe will now share how these components influence the dimensions of the human factors under investigation. 
\nWhen providing examples of our participants' responses, we will use P$1$ to refer to participant 1, P$2$ for participant 2, etc. \nTo avoid priming during evaluation, we used letters (e.g., A, B, C, etc.) to refer to the different types of rationales. \nFor better comprehension, we have substituted the letters with the appropriate rationale type -- focused-view, complete-view, or random -- while presenting quotes from participants below. \n\n\subsubsection{Confidence (1)} This dimension gauges the participant's faith in the agent's ability to successfully complete its task and has \textit{contextual accuracy}, \textit{awareness}, \textit{strategic detail}, and \textit{intelligibility} as relevant components. \nWith respect to \textit{contextual accuracy}, rationales that displayed ``\ldots recognition of the environmental conditions and [adaptation] to the conditions'' (P22) were a positive influence, while redundant information such as ``just stating the obvious'' (P42) hindered confidence ratings. 
\n\n\n\n\n\\begin{table}[h]\n \\caption{Descriptions for the emergent \\textit{components} underlying the human-factor \\textit{dimensions} of the generated rationales.}\n \\label{tab:components}\n \\begin{tabular}{p{0.32\\columnwidth}p{0.58\\columnwidth}}\n \\toprule\n {\\bf Component} & {\\bf Description}\\\\\n \\midrule\n Contextual Accuracy & Accurately describes pertinent events in the context of the environment.\\\\\n \\rowcolor{tablerowcolor} Intelligibility & Typically error-free and is coherent in terms of both grammar and sentence structure.\\\\\n Awareness & Depicts and adequate understanding of the rules of the environment.\\\\\n \\rowcolor{tablerowcolor} Relatability & Expresses the justification of the action in a relatable manner and style.\\\\\n Strategic Detail & Exhibits strategic thinking, foresight, and planning.\\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\nRationales that showed \\textit{awareness} of ``upcoming dangers and what the best moves to make \\ldots [and] a good way to plan'' (P17) inspired confidence from the participants. \nIn terms of \\textit{strategic detail}, rationales that showed \"\\ldots long-term planning and ability to analyze information\" (P28) yielded higher confidence ratings compared to those that were \"short-sighted and unable to think ahead\" (P14) led to lower perceptions of confidence. \n\n\\textit{Intelligibility} alone, without \\textit{awareness} or \\textit{strategic detail}, was not enough to yield high confidence in rationales. However, rationales that were not \\textit{intelligible} (unintelligible) or coherent had a negative impact on participants' confidence:\n\\begin{displayquote}\nThe [random and focused-view rationales] include major mischaracterizations of the environment because they refer to an object not present or wrong time sequence, so I had very low confidence. 
(P66)\n\\end{displayquote}\n\n\n\n\n\n\\subsubsection{Human-likeness (2)} \\textit{Intelligibility, relatability,} and \\textit{strategic detail} are components that influenced participants' perception of the extent to which the rationales were made by a human.\nNotably, \\textit{intelligibility} had mixed influences on the human-likeness of the rationales as it depended on what participants thought ``being human'' entailed. \nSome conceptualized humans as fallible beings and rated rationales with errors more \\textit{humanlike} because rationales ``with typos or spelling errors \\ldots seem even more likely to have been generated by a human\" (P19). \nConversely, some thought error-free rationales must come from a human, citing that a ``computer just does not have the knowledge to understand what is going on'' (P24).\n\nWith respect to \\textit{relatability}, rationales were often perceived as more human-like when participants felt that ``it mirrored [their] thoughts'' (P49), and ``\\ldots [laid] things out in a way that [they] would have'' (P58). Affective rationales had high \\textit{relatability} because they ``express human emotions including hope and doubt'' (P11). \n\n\\textit{Strategic detail} had a mixed impact on human-likeness just like \\textit{intelligibility} as it also depended on participants' perception of critical thinking and logical planning. Some participants associated ``\\ldots critical thinking [and ability to] predict future situations\" (P6) with human-likeness whereas others associated logical planning with non-human-like, but computer-like rigid and algorithmic thinking process flow.\n\n\n\n\n\n\\subsubsection{Adequate Justification (3)} This dimension unpacks the extent to which participants think the rationale adequately justifies the action taken and is influenced by \\textit{contextual accuracy}, and \\textit{awareness}. 
\nParticipants downgraded rationales containing low levels of \textit{contextual accuracy} in the form of irrelevant details. As P11 puts it: \n\begin{displayquote}\nThe [random rationale] doesn't pertain to this situation. [The complete-view] does, and is clearly the best justification for the action that Frogger took because it moves him towards his end goal. \n\end{displayquote}\n\nBeyond \textit{contextual accuracy}, rationales that showcase \textit{awareness} of surroundings score high on the \textit{adequate justification} dimension. For instance, P11 rated the \textit{random} rationale low because it showed ``no awareness of the surroundings''. For the same action, P11 rated \textit{exemplary} and \textit{focused-view} rationales high because each made the participant ``believe in the character's ability to judge their surroundings.''\n\n\subsubsection{Understandability (4)} \nFor this dimension, components such as \textit{contextual accuracy} and \textit{relatability} influence participants' perceptions of how much the rationales helped them understand the motivation behind the agent's actions. \nMany participants expressed that \textit{contextual accuracy}, not the length of the rationale, mattered when it came to understandability. \nWhile comparing understandability of the \textit{exemplary} and \textit{focused-view} rationales, P41 made a notable observation:\n\begin{displayquote}\nThe [exemplary and focused-view rationale] both described the activities\/objects in the immediate vicinity of the frog. However, [exemplary rationale (typically lengthier than focused)] was not as applicable because the [focused-view] rationale does a better job of providing contextual understanding of the action.\n\end{displayquote}\n\nParticipants put themselves in the agent's shoes and evaluated the understandability of the rationales based on how \textit{relatable} they were. 
\nIn essence, some asked ``Are these the same reasons I would [give] for this action?'' (P43). \nThe more relatable the rationale was, the higher it scored for understandability. \n\nIn summary, the first study establishes the plausibility of the generated rationales (compared to baselines) and their user perceptions. \nHowever, this study does not provide a direct comparison between the two configurations. \n\n\section{Preference Study: Focused- vs. Complete-View Rationales}\nThe preference study puts the rationales in direct comparison with each other. \nIt achieves two main purposes. First, it aims to validate the alignment between the intended design of rationale types and the actual perceived differences between them. \nWe collect qualitative data on how participants perceived rationales produced by our \textit{focused-view} and \textit{complete-view} rationale generators.\nOur expert observation is that the \textit{focused-view} configuration results in concise and localized rationales, whereas the \textit{complete-view} configuration results in detailed, holistic rationales. \nWe seek to determine whether na\\\"ive users who are unaware of which configuration produced a rationale also describe the rationales in this way. \nSecond, we seek to understand how and why the preferences between the two styles differed along three dimensions: {\em confidence}, {\em failure}, and {\em unexpected behavior}.\n\n\subsection{Method}\nUsing similar methods to the first study, we recruited and analyzed the data from 65 people (age range = 23 - 59 years, M = 38.48, SD = 10.16). 57\% of the participants were women, with 96\% of the participants self-reporting being from the United States and 4\% from India. Participants from our first study could not partake in the second one. 
The average task duration was approximately 46 minutes.\n\nThe only difference in the experimental setup between the perception study and the preference study is the comparison groups of the rationales. \nIn this study, participants judged the same set of\n\textit{focused-} and \textit{complete-view} rationales; however, instead of judging each style against two baselines, participants evaluated the \textit{focused-} and \textit{complete-view} rationales in direct comparison with each other.\n\n\nHaving watched the videos and accompanying rationales, participants responded to the following questions comparing both configurations: \n\begin{enumerate}\n \item \textbf{Most important difference}: What do you see as the most important difference? Why is this difference important to you?\n \item \textbf{Confidence}: Which style of rationale makes you more confident in the agent's ability to do its task? Was it system A or system B? Why?\n \item \textbf{Failure}: If you had a companion robot that had just made a mistake, would you prefer that it provide rationales like System A or System B? Why? \n \item \textbf{Unexpected Behavior}: If you had a companion robot that took an action that was not wrong, but unexpected from your perspective, would you prefer that it provide rationales like System A or System B? Why? \n\end{enumerate}\n\n\nWe used a process similar to the first study's to select the dimensions for this study. \n\textit{Confidence} is crucial to trust, especially when failure and unexpected behavior happen \cite{chernova2009confidence, kaniarasu2013robot}. \nCollaboration, tolerance, and perceived intelligence are affected by the way autonomous agents and robots communicate \textit{failure} and \textit{unexpected behavior} \cite{desai2013impact,kwon2018expressing,lee2010gracefully,mirnig2017err}. \n\n\begin{table}[h]\n \caption{Tally of how many preferred the \textit{focused-view} vs. 
the \\textit{complete-view} for the three dimensions.}\n \\label{tab:components}\n \\begin{tabular}{ccc}\n \\toprule\n {\\bf Question} & {\\bf Focused-view} & {\\bf Complete-view}\\\\\n \\midrule\n Confidence & 15 & 48\\\\\n \\rowcolor{tablerowcolor} Failure & 17 & 46\\\\\n Unexpected Behaviour & 18 & 45\\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\\subsection{Quantitative Analysis}\nIn order to determine whether the preferences significantly favored one style or the other, we conducted \nthe Wilcoxon signed-rank test. It showed that preference for the \\textit{complete-view} rationale was significant in all three dimensions.\nConfidence in the \\textit{complete-view} rationale was significantly greater than in the \\textit{focused-view} ($p<.001$). \nSimilarly, preference for a \\textit{complete-view} rationales from an agent that made a mistake was significantly greater than for \\textit{focused-view} rationales ($p<.001$).\nPreference for \\textit{complete-view} rationales from an agent that made a mistake was also significantly greater than for \\textit{focused-view} rationales ($p<.001$).\n\n\\subsection{Qualitative Findings and Discussion}\nIn this section, similar to the first study, we share insights gained from the open-ended responses to reveal the underlying reasons behind perceptions of the \\textit{most important difference} between the two styles. We also unpack the reasoning behind the quantitative ranking preferences for \\textit{confidence} in the agent's ability to do its task and communication preferences for \\textit{failure} and \\textit{unexpected behavior}. In this analysis, the interacting \\textit{components} that influenced the dimensions of human factors in the first study return (see Table \\ref{tab:components}). In particular, we use them as analytic lenses to highlight the trade-offs people make when expressing their preferences and the reasons for the perceived differences between the styles. 
\n\nThese insights bolster our situated understanding of the differences between the two rationale generation techniques and help verify whether the intended design of the two configurations aligns with users' perceptions of them. In essence, did the design succeed in doing what we set out to do? We analyzed the open-ended responses in the same manner as the first study. We use the same nomenclature to refer to participants. \n\n\\subsubsection{Most Important Difference (1)}\nEvery participant indicated that the \\textit{level of detail and clarity} (P55) differentiated the rationales. \nConnected to the level of detail and clarity is the perceived \\textit{long-} vs. \\textit{short-term} planning exhibited by each rationale. \nOverall, participants felt that the \\textit{complete-view} rationale showed better levels of \\textit{strategic detail}, \\textit{awareness}, and \\textit{relatability} with human-like justifications, whereas the \\textit{focused-view} exhibited better \\textit{intelligibility} with easy-to-understand rationales. \nThe following quote illustrates the trade-off between succinctness, which hampers comprehension of higher-order goals, and broadness, which can be perceived as less focused:\n\n\\begin{displayquote}\nThe [focused-view rationale] is extraordinarily vague and focused on the raw mechanics of the very next move \\ldots [The complete-view] is more broad and less focused, but takes into account \\textit{the entire picture}. So I would say the most important difference is the \\textit{scope of events} that they take into account while making justifications [emphasis added] (P24)\n\\end{displayquote}\n\nBeyond trade-offs, this quote highlights a powerful validating point: without any knowledge beyond what is shown on the video, the participant pointed out how the \\textit{complete-view} rationale appeared to consider the \"entire picture\" and how the \"scope of events\" taken into account was the main difference. 
\nThe participant's intuition precisely aligns with the underlying network configuration design and our research intuitions.\nRecall that the \\textit{complete-view} rationale\nwas generated using the entire environment or \"picture\" whereas the \\textit{focused-view} was generated using a windowed input. \n\nIn prior sections, we speculated on the effects of the network configurations. We expected the \\textit{focused-view} version to produce succinct, localized rationales that concentrated on the short term. We expected the \\textit{complete-view} version to produce detailed, broader rationales that focused on the larger picture and long-term planning. \nThe findings of this experiment are the first validation that the outputs reflect the intended designs. \nThe strength of this validation was enhanced by the many descriptions of our intended attributes, given in free form by participants who were naive to our network designs.\n\nConnected to the level of detail and clarity is the perception of \\textit{short-} vs. \\textit{long-term} thinking from the respective rationales. \nIn general, participants regarded the \\textit{focused-view} rationale as having low levels of \\textit{awareness} and \\textit{strategic detail}. \nThey felt that this agent \"\\ldots focus[ed] only on the current step\" (P44), which was perceived as depicting thinking \"\\ldots in the spur of the moment\" (P27), giving the perception of short-term and simplistic thinking. \nOn the other hand, the \\textit{complete-view} rationale appeared to \"\\ldots try to think it through\" (P27), exhibiting long-term thinking as it appears to \"\\ldots think forward to broader strategic concerns.\" (P65) One participant sums it up nicely: \n\\begin{displayquote}\nThe [focused-view rationale] focused on the immediate action required. [The complete-view rationale] took into account the current situation, [but] also factored in what the next move will be and what dangers that move poses. 
The [focused-view] was more of a short term decision and [complete-view] focused on both short term and long term goals and objectives. (P47)\n\\end{displayquote}\n\nBelow, we see how these differences in perception impact other dimensions such as confidence and communication preferences for failure and unexpected behavior. \n\n\\subsubsection{Confidence (2)}\nParticipants had more confidence in the agent's ability to do its task if the rationales exhibited high levels of \\textit{strategic detail} in the form of long-term planning, \\textit{awareness} via expressing knowledge of the environment, and \\textit{relatability} through humanlike expressions. They associated \\textit{conciseness} with confidence when the rationales did not need to be detailed given the context of the (trivial) action. \n\nThe \\textit{complete-view} rationale inspired more confidence because participants perceived agents with long-term planning and high \\textit{strategic detail} as being \"more predictive\" and intelligent than their counterparts. Participants felt more at ease because \"\\ldots knowing what [the agent] was planning to do ahead of time would allow me to catch mistakes earlier before it makes them.\" (P31) As one participant put it:\n\n\\begin{displayquote}\nThe [complete-view rationale] gives me more confidence \\ldots because it thinks about future steps and not just the steps you need to take in the moment. [The agent with focused-view] thinks more simply and is prone to mistakes. (P13)\n\\end{displayquote}\n\nParticipants felt that rationales that exhibited a better understanding of the environment, and thereby better \\textit{awareness}, resulted in higher confidence scores. 
Unlike the \\textit{focused-view} rationale that came across as \"a simple reactionary move \\ldots [the \\textit{complete-view}] version demonstrated a more thorough understanding of the entire field of play.\" (P51) In addition, the \\textit{complete-view} was more \\textit{relatable} and confidence-inspiring \"because it more closely resemble[d] human judgment\" (P29). \n\n\\subsubsection{Failure (3)}\nWhen an agent or a robot fails, the information from the failure report is mainly used to fix the issue. To build a mental model of the agent, participants preferred \\textit{detailed} rationales with solid \\textit{explanatory power} stemming from \\textit{awareness} and \\textit{relatability}. The mental model could facilitate proactive and preventative care. \n\nThe \\textit{complete-view} rationale, due to relatively high \\textit{strategic detail}, was preferable in communicating failure because participants could \"\\ldots understand the full reasoning behind the movements.\"(P16) Interestingly, \\textit{detail} trumped \\textit{intelligibility} in most circumstances. Even if the rationales had some grammatical errors or were a \"\\ldots little less easy to read, the details made up for it.\" (P62) \n\nHowever, detailed rationales are not always a virtue. Simple rationales have the benefit of being easily understandable to humans, even if they cause humans to view the agent as having limited understanding capabilities. Some participants appreciated \\textit{focused-view} rationales because they felt \"it would be easier to figure out what went wrong by focusing on one step at a time.\" \n\n\nExplanatory power, specifically how events are communicated, is related to \\textit{awareness} and \\textit{relatability}. 
Participants preferred relatable agents that \"\\ldots would talk to [them] like a person would.\"(P11) They expressed the need to develop a mental model, especially to \"\\ldots see how [a robot's] mind might be working\"(P1), to effectively fix the issue. The following participant neatly summarizes the dynamics:\n\\begin{displayquote}\nI'd want [the robot with complete-view] because I'd have a better sense of the steps taken that lead to the mistake. I could then fix a problem within that reasoning to hopefully avoid future mistakes. The [focused-view rationale] was just too basic and didn't give enough detail. (P8)\n\\end{displayquote}\n\\subsubsection{Unexpected Behavior (4)}\nUnexpected behavior that is not failure makes people want to know the \"why?\" behind the action, especially to understand the expectancy violation. \nAs a result, participants preferred rationales with transparency so that they can understand and trust the robot in a situation where expectations are violated. \nIn general, preference was for adequate levels of \\textit{detail} and \\textit{explanatory power} that could provide \"\\ldots more diagnostic information and insight into the robot's thinking processes.\"(P19) \nParticipants wanted to develop mental models of the robots so they could understand the world from the robot's perspective. \nThis diagnostic motivation for a mental model is different from the re-programming or fixing needs in cases of failure. \n\nThe \\textit{complete-view} rationale, due to adequate levels of \\textit{strategic detail}, made participants more confident in their ability to follow the thought process and get a better understanding of the expectancy violation. One participant shared:\n\\begin{displayquote}\nThe greater clarity of thought in the [complete-view] rationale provides a more thorough picture \\ldots, so that the cause of the unexpected action could be identified and explained more easily. 
(P51)\n\\end{displayquote}\nWith this said, where possible without sacrificing transparency, participants welcomed simple rationales that \"anyone could understand, no matter what their level of education was.\"(P2)\nThis is noteworthy because the expertness level of the audience is a key concern when making accessible AI-powered technology where designers need to strike a balance between detail and succinctness.\n\nRationales exhibiting strong explanatory power, through \\textit{awareness} and \\textit{relatability}, helps to situate the unexpected behavior in an understandable manner. \nParticipants preferred the \\textit{complete-view} rationale's style of communication because of increased transparency:\n\\begin{displayquote}\nI prefer [the complete-view rationale style] because \\ldots I am able to get a much better picture of why it is making those decisions. (P24)\n\\end{displayquote}\n\nDespite similarities in the communication preferences for failure and unexpected behavior, there are differences in underlying reasons. As our analysis suggests, the mental models are desired in both cases, but for different reasons. \n\\section{Design Lessons and Implications} \nThe situated understanding of the \\textit{components} and \\textit{dimensions} give us a powerful set of actionable insights that can help us design better human-centered, rationale-generating, autonomous agents.\nAs our analysis reveals, context is king. \nDepending on the context, we can tweak the input type to generate \\textit{rationale sytles} that meet the needs of the task or agent persona; for instance, a companion agent that requires high \\textit{relatability} for user engagement. \nWe should be mindful when optimizing for a certain dimension as each component comes with costs. 
\nFor instance, conciseness can improve \\textit{intelligibility} and overall \\textit{understandability} but comes at the cost of \\textit{strategic detail}, which can hurt \\textit{confidence} in the agent.\nWe can also engineer systems such that multiple network configurations act as modules. \nFor instance, if we design a companion agent or robot that interacts with a person longitudinally, the \\textit{focused-view} configuration can take over when short and simple rationales are required. \nThe \\textit{complete-view} configuration or a hybrid one can be activated when communicating failure or unexpected behavior.\n\nAs our preference study shows, we should not only be cognizant of the level of detail, but also of why the detail is necessary, especially while communicating failure and unexpected behavior.\nFor instance, failure reporting in a mission-critical task (such as search and rescue) would have different requirements for \\textit{strategic detail} and \\textit{awareness} than \"failure\" reporting in a less well-defined, more creative task like making music. \nWhile the focus of this paper is on textual rationale generation, rationales can be complementary to other types of explanations; for instance, a multi-modal system can combine visual cues with textual rationales to provide better contextual explanations for an agent's actions. \n\n\n\\section{Limitations and Future Work}\nWhile these results are promising, there are several limitations in our approach that need to be addressed in future work. \nFirst, our current system, by intention and design, lacks interactivity; users cannot contest a rationale or ask the agent to explain in a different way. \nTo get a formative understanding, we kept the design as straightforward as possible.\nNow that we have a baseline understanding, we can vary along the dimension of interactivity for the next iteration. 
\nFor instance, contestability, the ability to either reject a reasoning or ask for another one, which has been shown to improve user satisfaction \\cite{hirsch2017designing,dietvorst2016overcoming}, can be incorporated in the future.\nSecond, our data collection pipeline is currently designed to work with discrete-action games that have natural break points where the player can be asked for explanations. \nIn continuous-time and -action environments, we must determine how to collect the necessary data without being too intrusive to participants.\nThird, all conclusions about our approach were formed based on one-time interactions with the system. \nTo better control for potential novelty effects that rationales could have, we need to deploy our system in a longitudinal task setting. \nFourth, to understand the feasibility of our system in larger state-action spaces, we would need to study its scalability by addressing the question of how much data is needed based on the size of the environment. \nFifth, not all mistakes are created equal. Currently, the perception ratings are averaged with everything equally weighted. For instance, a mistake during a mission-critical step can lead to a greater fall in confidence than the same mistake during a non-critical step. To understand the relative costs of mistakes, we need to further investigate the relationship between the context of the task and the cost of the mistake. 
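One illustrative way to account for unequal mistakes is a criticality-weighted mean of the per-step ratings. The sketch below is purely hypothetical (our study used an unweighted average, and the ratings and weights here are invented for illustration):

```python
def weighted_mean(ratings, criticality):
    """Aggregate per-step perception ratings, weighting each step by a
    criticality score instead of treating every step equally."""
    assert len(ratings) == len(criticality)
    return sum(r * w for r, w in zip(ratings, criticality)) / sum(criticality)

# Toy numbers: a poor rating on one step out of three.
ratings = [4.0, 1.0, 4.0]
uniform = [1.0, 1.0, 1.0]        # current scheme: every step counts equally
critical_mid = [1.0, 3.0, 1.0]   # same mistake, but on a mission-critical step

print(weighted_mean(ratings, uniform))       # 3.0
print(weighted_mean(ratings, critical_mid))  # 2.2
```

The same mistake drags the aggregate down further when it occurs on a step that is weighted as critical, which is the behavior the unweighted average cannot capture.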
\n\n\\section{Conclusions}\nWhile explainability has been successfully introduced for classification and captioning tasks, sequential environments offer a unique challenge for generating human-understandable explanations.\nThe challenge stems from multiple complex factors, such as temporally connected decision-making, that contribute to making decisions in these environments.\nIn this paper, we introduce \\textit{automated rationale generation} as a concept and explore how justificatory explanations from humans can be used to train systems to produce human-like explanations in sequential environments. \nTo facilitate this work, we also introduce a pipeline for automatically gathering a parallel corpus of states annotated with human explanations. \nThis tool enables us to systematically gather high-quality data for training purposes.\nWe then use this data to train a model that uses machine translation technology to generate human-like rationales in the arcade game, {\\em Frogger}. \n\nThrough a mixed-methods approach in evaluation, we establish the plausibility of the generated rationales and describe how the intended design of rationale types lines up with the actual user perceptions of them. \nWe also gain a contextual understanding of the underlying dimensions and components that influence human perception and preferences of the generated rationales. \nBy enabling autonomous agents to communicate about the motivations for their actions, we envision a future where explainability not only improves human-AI collaboration, but does so in a human-centered and understandable manner.\n\n\n\n\\section{Acknowledgements}\nThis work was partially funded under ONR grant number N00014141000. We would like to thank Chenghann Gan and Jiahong Sun for their valuable contributions to the development of the data collection pipeline. 
We are also grateful for the feedback from the anonymous reviewers, which helped us improve the quality of this work.\n\\balance{}\n\n\\bibliographystyle{SIGCHI-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe pulsar J1811$-$1736, discovered during observations of the Parkes\nMultibeam Pulsar Survey~\\citep{mlc+01}, has a spin period of 104 ms\nand is a member of an 18.8-d, highly eccentric binary system\n\\citep{lcm+00} with an as yet undetected companion \\citep{mig00}. The\ncharacteristic age and the estimated surface magnetic field strength\nindicate that the pulsar is mildly recycled. In such a system, it\nis expected that the observed pulsar was born first in a supernova\n(SN) explosion and then underwent mass accretion from a high-mass\nnon-degenerate binary companion. Parameters which have been\nmeasured and derived from the best-fit timing solution indicate that\nthe companion is quite massive. All these elements suggest that the\ncompanion is also a neutron star (e.g. \\citealt{bv91}).\n\nThe conclusion that PSR\\,J1811$-$1736 is a member of the small sample\nof known double neutron star (DNS) systems was already reached by\n\\citet{lcm+00}. Their conclusion was supported by a constraint on the\ntotal system mass, assuming that the observed advance of periastron\nwas totally due to relativistic effects. However, the rather long\nperiod of the pulsar, combined with the effects of interstellar\nscattering, which result in significant broadening of the pulse profile at\n1.4~GHz, as well as the short data span available to \\citet{lcm+00},\nlimited the timing precision and hence the accuracy of the total mass\nmeasurement.\n\nAmong all DNS systems, PSR\\,J1811-1736 has by far the longest spin\nperiod, the longest orbital period and the highest eccentricity. This\nmay suggest that the evolution of this DNS has been different, at\nleast in part, from all other DNS systems. 
On the other hand,\nPSR\\,J1811-1736 fits the spin period versus eccentricity relation\nfor DNS systems well (\\citealt{mlc+05}, \\citealt{fkl+05}). This relation\ncan be simply explained in terms of the different lengths of time the\npulsar underwent accretion, which in turn is related to the mass of\nthe companion star before the SN explosion. Moreover, numerical\nsimulations \\citep{dpp05} show that the spin period versus\neccentricity relation is recovered assuming that the second-born\nneutron star received a low-velocity kick at its birth.\n\nIn this paper we report on new timing observations which significantly\nimprove on the previously published results. We present a study\nof the observed interstellar scattering and consider its consequences\nfor the detectability of radio pulses from the companion. Finally, we\ninvestigate the likely kick velocity imparted to the second-born\nneutron star during its birth in the system's second SN.\nObservations were carried out as part of a coordinated effort using\nthree of the largest steerable radio telescopes in the world for\npulsar timing observations, i.e. the 100-m radio telescope at\nEffelsberg, the 94-m equivalent Westerbork Synthesis Radio Telescope\n(WSRT) and the 76-m Lovell telescope at Jodrell Bank. This\npaper is the first in a series detailing results of these efforts in\nestablishing a {\\em European Pulsar Timing Array} (EPTA).\n\n\\section{Observations}\n\nThe binary pulsar J1811$-$1736 is one of the binary pulsars regularly\nobserved by the EPTA. The aims and objectives of this European\ncollaboration include the detection of a cosmological gravitational\nwave background, and the project will be described in detail in a\nforthcoming publication. 
Here we summarize the observing systems used,\nwhile further details can be found in the references below.\n\n\\subsection{Effelsberg timing}\n\nWe have made regular timing observations of PSR\\,J1811$-$1736 since October\n1999 using the 100-m radio telescope of the Max Planck Institut f\\\"ur\nRadioastronomie in Effelsberg near Bonn. The typical observing rate\nwas one observation every two months. An overall root-mean-square\n(RMS) of 538 $\\mu s$ was achieved after applying the final timing\nmodel. The data were obtained with a 1.3$-$1.7\\,GHz tunable HEMT\nreceiver installed in the primary focus of the telescope. The noise\ntemperature of this system is 25~K, resulting in a system temperature\nfrom 30~to 50~K on cold sky depending on elevation. The antenna gain\nat these frequencies is 1.5~K~Jy$^{-1}$.\n\nAn intermediate frequency (IF) centred on 150 MHz for left-hand (LHC)\nand right-hand (RHC) circularly polarised signals was obtained after\ndown-conversion from a central RF frequency of usually 1410 MHz. The\nsignals received from the telescope were acquired and processed with\nthe Effelsberg-Berkeley Pulsar Processor (EBPP) which removes the\ndispersive effects of the interstellar medium on-line using ``coherent\nde-dispersion'' \\citep{hr75}. Before entering the EBPP, the two LHC\nand RHC signals of the IF are converted to an internal IF of 440\nMHz. A maximum bandwidth of $2\\times32\\times0.7$~MHz~=~$2\\times22.4$~MHz\nwas available for the chosen observing frequency and DM of the\npulsar. It was split into four portions for each of the two circular\npolarisations, which were mixed down to baseband. Each portion was\nthen sub-divided into eight narrow channels via a set of digital\nfilters \\citep{bdz+97}. The outputs of each channel were fed into\nde-disperser boards for coherent on-line de-dispersion. 
In total 64\noutput signals were detected and integrated in phase with the\npredicted topocentric pulse period.\n\nA pulse time-of-arrival (TOA) was calculated for each average profile\nobtained during a 5-10 min observation. During this process, the\nobserved time-stamped profile was compared to a synthetic template,\nwhich was constructed out of five Gaussian components fitted to a\nhigh signal-to-noise standard profile (see Kramer et al.,\n\\citeyear{kxl+98,kll+99}). This template matching was done by a\nleast-squares fitting of the Fourier-transformed data \\citep{tay92}. \nUsing the measured time delay between the actual\nprofile and the template, and the accurate time stamp of the data provided\nby a local H-MASER and corrected off-line to UTC(NIST) using recorded\ninformation from the satellites of the Global Positioning System\n(GPS), the final TOA was obtained. The uncertainty of each TOA was\nestimated using a method described by \\citet{dr83} and \\citet{lan99}.\n\n\\subsection{Jodrell Bank timing}\n\nObservations of PSR\\,J1811-1736 have been made regularly using the 76-m\nLovell telescope at Jodrell Bank since its discovery in 1997\n\\citep{lcm+00}. The typical observing rate was about two\nobservations per week, with an overall RMS of 1300\\,$\\mu$s after\napplying the final timing model. A cryogenic receiver at 1404\\,MHz was\nused, and both LHC and RHC signals were observed using a\n$2\\times32\\times1.0$-MHz filter bank at 1404\\,MHz. After detection,\nthe signals from the two polarisations were filtered, digitised at\nappropriate sampling intervals, and incoherently de-dispersed in hardware\nbefore being folded on-line with the topocentric pulse period and\nwritten to disk. Each integration was typically of 1-3 minutes\nduration; 6 or 12 such integrations constituted a typical\nobservation. Off-line, the profiles were added in polarisation pairs\nbefore being summed to produce a single total-intensity profile. 
A\nstandard pulse template was fitted to the observed profiles at each\nfrequency to determine the pulse times-of-arrival (TOAs). Details of\nthe observing system and the data reduction scheme can be found\nelsewhere (e.g.~\\citealt{hlk+04}).\n\n\n\\subsection{Westerbork timing}\n\nObservations of PSR\\,J1811$-$1736 were carried out approximately\nmonthly since 1999 August 1st, obtaining an overall timing RMS of\n659~$\\mu$s after applying the final timing model, at a central\nfrequency of 1380 MHz and a bandwidth of 80 MHz. The two linear\npolarisations from all 14 telescopes were added together in phase by\ntaking account of the relative geometrical and instrumental phase\ndelays between them and then passed to the PuMa pulsar backend\n\\citep{vkv+02}. The data were obtained with the L-band receiver\ninstalled in the primary focus of the telescopes. The noise\ntemperature of this system is 25~K, resulting in a system temperature\nfrom 30~to 50~K on cold sky depending on elevation. The antenna gain\nat these frequencies is 1.2~K~Jy$^{-1}$. PuMa was used in its digital\nfilterbank mode whereby the Nyquist sampled signals are Fourier\ntransformed and the polarisations combined to produce total intensity\n(Stokes I) spectra with a total of 512 channels. These spectra were\nsummed online to give a final sampling time of 409.6 $\\mu$s and\nrecorded to hard disk. These spectra were subsequently dedispersed\nand folded with the topocentric period off-line to form integrations\nof a few minutes. TOAs were calculated for each profile following a\nscheme similar to that outlined above for Effelsberg data, except a\nhigh signal-to-noise standard profile was used instead of Gaussian\ncomponents. 
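For reference, the dispersive delay removed by these systems follows the cold-plasma law $t = \mathcal{D}\,{\rm DM}\,f^{-2}$, with $\mathcal{D} \simeq 4.15\times10^{3}$~s~MHz$^{2}$~pc$^{-1}$~cm$^{3}$. A short Python sketch, an illustration using the WSRT setup described above and the pulsar's DM of 476~pc~cm$^{-3}$ (not part of any observing pipeline), shows the scale of the effect:

```python
K_DM = 4.148808e3        # dispersion constant (s MHz^2 pc^-1 cm^3)
DM = 476.0               # dispersion measure of PSR J1811-1736 (pc cm^-3)
F_C = 1380.0             # WSRT centre frequency (MHz)
BW = 80.0                # total bandwidth (MHz)
N_CHAN = 512             # PuMa digital filterbank channels

def delay_s(f_mhz):
    """Cold-plasma dispersive delay relative to infinite frequency."""
    return K_DM * DM / f_mhz ** 2

# Sweep across the whole band: removed by de-dispersing channel by channel
band_sweep = delay_s(F_C - BW / 2) - delay_s(F_C + BW / 2)

# Residual smearing left within a single channel at the band centre
ch_bw = BW / N_CHAN
ch_smear = delay_s(F_C - ch_bw / 2) - delay_s(F_C + ch_bw / 2)

print(f"sweep across 80 MHz band: {band_sweep * 1e3:.0f} ms")
print(f"smearing per channel:     {ch_smear * 1e6:.0f} us")
```

The sweep across the band (about 120 ms) exceeds the 104-ms spin period, while the residual per-channel smearing (roughly 235 $\mu$s) stays below the 409.6-$\mu$s sampling time, which is why the 512-channel filterbank suffices for incoherent de-dispersion.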
In the future, EPTA timing will employ an identical\nsynthetic template for all telescopes.\n\n\n\\section{Data analysis}\n\n\nThe TOAs, corrected to UTC(NIST) via GPS and weighted by their\nindividual uncertainties determined in the fitting process, were\nanalysed with the {\\tt TEMPO} software package \\citep{tw89}, using the\nDE405 ephemeris of the Jet Propulsion Laboratory \\citep{sta90}. {\\tt\nTEMPO} minimizes the sum of weighted squared {\\it timing residuals},\ni.e.~the difference between observed and model TOAs, yielding a set of\nimproved pulsar parameters and post-fit timing residuals. A summary\nof the basic characteristics of each dataset is shown in Table 1.\n\n \\begin{table}\n \\centering\n \\caption[]{Characteristics of the three data sets}\n \\label{TabData} \n \\begin{tabular}{llll}\n \\hline\n \\noalign{\\smallskip}\n\n& Jodrell Bank & Effelsberg & Westerbork\\\\\n \\noalign{\\smallskip}\n \\hline\n \\noalign{\\smallskip}\nN.~of~ToAs & 348 & 74 & 213\\\\\nTime~Span~(MJD) & 50842-53624 & 51490-53624 & 51391-53546 \\\\\nRMS~($\\mu$s) & 1300 & 538 & 659\\\\\n\n \\noalign{\\smallskip}\n \\hline\n \\noalign{\\smallskip}\n \\end{tabular}\n\\end{table}\n\nAlthough the templates used for the three telescopes' data differed,\nthe resulting offsets were absorbed in a global least-squares\nfit. Remaining uncertainties were smaller than the typical measurement\naccuracy of the Jodrell Bank timing data of about 9 $\\mu$s.\n\nBefore all TOAs were combined, preliminary fits were performed on each\ndataset alone, in order to study possible systematic differences between\nthe datasets. We applied a small quadrature addition \nand a scaling factor to the\nuncertainties to obtain the expected value of a reduced $\\chi^2=1$\nfor each dataset. \nThe final joint fit to all TOAs resulted in a $\\chi^{2}$ value of\nunity, avoiding the need to add further systematic uncertainties.\n\nTable 2 summarizes all observed timing and some derived\nparameters. 
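The derived quantities in Table 2 can be reproduced from the fitted parameters using the standard formulas quoted in the table notes, together with the general-relativistic expression for the periastron advance. The following Python sketch is a check of our own (constants rounded, not part of the {\tt TEMPO} analysis) and recovers the tabulated values to within the quoted uncertainties:

```python
import math

T_SUN = 4.925490947e-6        # G * M_sun / c^3 in seconds
YEAR = 365.25 * 86400.0       # Julian year in seconds

# Fitted parameters (Table 2)
P = 0.1041819547968           # spin period (s)
P_DOT = 9.01e-19              # spin period derivative (s/s)
PB = 18.7791691 * 86400.0     # orbital period (s)
X = 34.7827                   # projected semi-major axis a sin(i) (lt-s)
ECC = 0.828011                # eccentricity
OMDOT = 0.0090                # periastron advance (deg/yr)

# Characteristic age and surface magnetic field (table note b)
tau_c = P / (2.0 * P_DOT) / YEAR               # ~1.83e9 yr
b_0 = 3.2e19 * math.sqrt(P * P_DOT)            # ~9.8e9 G

# Keplerian mass function f(Mc) = (2 pi / Pb)^2 (a sin i)^3 / T_sun
f_mc = (2.0 * math.pi / PB) ** 2 * X ** 3 / T_SUN    # solar masses

# Total mass from the GR periastron advance:
#   omdot = 3 (Pb / 2 pi)^(-5/3) (T_sun M_tot)^(2/3) / (1 - e^2)
omdot_rad = math.radians(OMDOT) / YEAR               # rad/s
m_tot = ((omdot_rad * (1.0 - ECC ** 2) / 3.0) ** 1.5
         * (PB / (2.0 * math.pi)) ** 2.5 / T_SUN)    # solar masses

# Minimum companion mass: Mc^3 / M_tot^2 = f(Mc) with sin(i) = 1,
# evaluated at the lower limit of the total mass (2.57 - 0.10 Msun)
mc_min = (f_mc * 2.47 ** 2) ** (1.0 / 3.0)

print(f"tau_c = {tau_c:.3g} yr, B_0 = {b_0:.3g} G")
print(f"f(Mc) = {f_mc:.6f} Msun, M_tot = {m_tot:.2f} Msun, Mc_min = {mc_min:.2f} Msun")
```

This recovers $\tau_{\rm c} \simeq 1.83\times10^{9}$ yr, $B_{0} \simeq 9.8\times10^{9}$ G, $f(M_{\rm C}) \simeq 0.12812\,M_{\odot}$ and $M_{\rm TOT} \simeq 2.6\,M_{\odot}$, with the minimum companion mass coming out near the quoted $0.93\,M_{\odot}$ (the small difference reflects rounding and the adopted mass limit).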
For the observed parameters the quoted errors are twice\nthe nominal TEMPO errors. For the derived parameters, the given\nuncertainties are computed accordingly.\n\nThe joint fit allowed us to determine the spin, positional and\nKeplerian orbital parameters plus one post-Keplerian parameter with a\nprecision better than the best determination from a single data set\nalone. However, the high degree of interstellar scattering (Fig. 3)\nmeans that further post-Keplerian parameters will be difficult to\nmeasure with continued observations at this frequency. We will discuss\nthe future prospects for higher frequency observations in \\S~5.\n\n \\begin{table}\n \\caption[]{Timing and derived parameters}\n \\label{TabTim} \n \\begin{tabular}{ll}\n \\hline\n \\noalign{\\smallskip}\nTiming parameters & Joint~data~sets\\\\\n \\noalign{\\smallskip}\n \\hline\n \\noalign{\\smallskip}\nRA~(J2000, hh:mm:ss) & 18:11:55.034(3)\\\\\nDECL~(J2000, deg:mm:ss) & -17:36:37.7(4)\\\\\nPeriod, $P$~(s) & 0.1041819547968(4) \\\\\nPeriod derivative, $\\dot{P}$~($10^{-19}$ s~s$^{-1}$) & 9.01(5)\\\\\nDispersion Measure, DM~(pc~cm$^{-3}$) & 476(5)\\\\\nProjected semi-major axis$^{a}$, $a~\\sin~i$~(s) & 34.7827(5)\\\\\nEccentricity, $e$ & 0.828011(9)\\\\\nEpoch of periastron, $T_{0}$~(MJD) & 50875.02452(3)\\\\\nOrbital period, $P_{B}$~(d) & 18.7791691(4)\\\\\nLongitude of periastron, $\\omega$~(deg) & 127.6577(11)\\\\\nAdvance of periastron, $\\dot{\\omega}$~(deg~yr$^{-1}$) & 0.0090(2)\\\\\nFlux density at 3100\\,MHz, $S_{3100}$ (mJy) & 0.34(7) \\\\\n&\\\\\nTime~Span~(MJD) & 50842-53624\\\\\nN.~of~ToAs & 635\\\\\nRMS~($\\mu s$) & 851.173\\\\\n \\noalign{\\smallskip}\n \\hline\n \\noalign{\\smallskip}\nDerived parameters$^{b}$ & \\\\\n \\noalign{\\smallskip}\n \\noalign{\\smallskip}\nCharacteristic~Age, $\\tau_{\\rm c}$ ($10^{9}~yr$) & 1.83\\\\\nSurface magnetic field, $B_{0}$~($10^{9}$ G) & 9.80 \\\\\nTotal~Mass~$M_{\\rm TOT}$ ($M_{\\odot}$) & 2.57(10)\\\\\nMass~Function~$f(M_{\\rm C})$ 
($M_{\\odot}$) & 0.128121(5)\\\\\nOrbital separation $A$ (ls) & 94.4(6) \\\\\nMinimum~companion~mass~$M_{\\rm C,min}$ ($M_{\\odot}$) & 0.93\\\\\n\n \\noalign{\\smallskip}\n \\hline\n \\end{tabular}\n\\begin{itemize}\n\\item[$^a$] The projected semi-major axis $a \\sin i$ is the\n semi-major axis of the projection of the orbit of the pulsar, around\n the system's center of mass, onto the plane containing the line of\n sight and the line of nodes.\n\\item[$^b$] Characteristic age and surface magnetic field have been\n calculated using standard formulas, namely $\\tau_{\\rm\n c}=P\/2\\dot{P}$ and\n $B_{0}=3.2\\times10^{19}\\sqrt{P\\dot{P}}$\\,G. The total mass $M_{\\rm\n TOT}$ has been calculated from the relativistic\n periastron advance and the measured Keplerian parameters,\n assuming the validity of general relativity.\n The minimum companion mass was estimated\n using the observed mass function $f(M_{\\rm\n C})$ and the lower limit for the total mass, as given by its\n uncertainty, in the case of $\\sin i = 1$. For details see\n Lorimer \\& Kramer (2005)\\nocite{lk05}.\n\\end{itemize}\n\n\\end{table}\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=0,width=8.5cm]{4385fig1.ps}\n\\caption{Timing residuals after jointly applying the final model to\nall three data sets. Vertical bars represent the ToA's uncertainty.}\n\\label{fig:jointres}\n\\end{figure}\n\n\\section{The nature of the companion}\n\nIn their discovery paper, \\citet{lcm+00} proposed that this system is\na member of the small class of DNS binaries. Soon after, \\citet{mig00}\nreported on optical observations of the region surrounding the pulsar\nposition to search for emission from the pulsar companion. 
They\ndetected no emission coincident with the pulsar position and, while\nnot conclusive, the lack of emission is at least consistent with the\nneutron star hypothesis for the nature of the companion.\n\nThe derived values for the characteristic age ($\\tau_c = 1.83 \\times\n10^{9}$ yr) and the surface magnetic field ($B = 9.8 \\times\n10^{9}$\\,G), as well as the combined values of the spin period\n($P=104$~ms) and its period derivative ($\\dot{P}=9\\times10^{-19}$~s~s$^{-1}$),\nindicate that PSR\\,J1811$-$1736 is a neutron star that experienced a\nspin-up phase via accretion from mass overflowing from its\ncompanion. These parameters, in conjunction with the measured orbital\neccentricity ($e=0.828$), indeed suggest that PSR\\,J1811$-$1736 is a\nmildly recycled pulsar whose companion star was massive enough to also\nundergo a SN explosion. This second SN imparted the presently observed\nlarge eccentricity to the system (e.g. \\citealt{bv91}).\n\nOur new measurement of the relativistic periastron advance,\n$\\dot{\\omega} = 0.0090 \\pm 0.0002$ deg yr$^{-1}$, allows us to\ndetermine the value of 2.57$\\pm$0.10 M$_{\\odot}$ for the total mass of\nthe system, assuming that general relativity is the correct theory of\ngravity and that the observed value is fully due to relativistic\neffects (e.g. \\citealt{dd86}). This value, combined with the measured\nmass function, implies a minimum companion mass of 0.93 $M_{\\odot}$.\n\n\\begin{figure}\n\\centering \\includegraphics[angle=0,width=8.5cm]{4385fig2.ps}\n\\caption{Mass-mass diagram for the binary system hosting\nPSR\\,J1811-1736. 
The shaded area below the dotted curved line is\nexcluded by the geometrical constraint $\\sin i \\leq 1$, while\nthe area outside the diagonal stripe is excluded by the measurement of\nthe relativistic periastron advance and the derived value for the\ntotal mass of this system.}\n\\label{fig:masses}\n\\end{figure}\n\nThe value for the total mass is relatively low, but very similar to\nthe total mass of the double pulsar system \\citep{lbk+04} and the\nrecently discovered DNS system PSR\\,J1756$-$2251 \\citep{fkl+05}. In\nfact, these systems have neutron star companions with the lowest\nneutron star masses observed so far, $M_{c}~=~1.25M_{\\odot}$ and\n$M_{c}=1.17M_{\\odot}$, respectively (for a recent review see \nStairs 2004\\nocite{sta04a}). In Figure \\ref{fig:masses} we\nshow the so-called mass-mass diagram where the pulsar and companion\nmasses can be directly compared. The measured value for the advance of\nperiastron means that the sum of the masses must lie along the\ndiagonal line, while the constraint on the inclination $\\sin i \\leq 1$\nexcludes the hatched region below the dotted line. Assuming that the\nneutron stars in this system must have a mass larger than the\n{\\it lowest} mass so far measured, i.e. $1.17M_\\odot$, we find that\nthey both have masses in the interval\n1.17$\\,M_{\\odot}\\,\\leq\\,M_{P},M_{C}\\,\\leq\\,1.50 M_{\\odot}$. This\ninterval contains all but the heaviest neutron star masses for which\na reliable determination has been obtained. Using this mass\nconstraint, we can also translate this range into lower and upper\nlimits on the inclination of the system, i.e. 
44\\,deg\\,$\\ifmmode\\stackrel{<}{_{\\sim}}\\else$\\stackrel{<}{_{\\sim}}$\\fi i\n\\ifmmode\\stackrel{<}{_{\\sim}}\\else$\\stackrel{<}{_{\\sim}}$\\fi$\\,50\\,deg.\n\nAlternatively, if either the pulsar or the companion has a mass equal\nto the observed median neutron star mass of 1.35 M$_{\\odot}$\n\\citep{sta04a}, the other neutron star would have a mass of\n$1.22\\,\\pm\\,0.10 M_{\\odot}$. This value is consistent with the lower\nlimit in the previous discussion, but it also allows for the\npossibility that one of the two neutron stars has a mass as low as\n1.12 $M_{\\odot}$.\n\n\\section{Future potential of timing observations}\n\nIn order to determine the companion mass without ambiguity, it is\nnecessary to measure a second post-Keplerian (PK) parameter\n(e.g. \\citealt{dt92}). We have investigated the possibility of\nmeasuring the PK parameter $\\gamma$, which describes the combined\neffect of gravitational redshift and a second-order Doppler effect. For\na companion mass of $1.35M_\\odot$, the expected value is $\\gamma =\n0.021$ ms. Using simulated data sets for the presently available\ntiming precision, we estimate that a $3\\sigma$ detection of $\\gamma$\nis achievable after about 4 more years of observation. However, in\norder to obtain a 10\\% accuracy in the mass measurement by determining\n$\\gamma$ to a similar precision, several decades of observations may\nbe needed.\n\nFrom similar simulations, we estimate that PK parameters like the rate\nof orbital decay, $\\dot{P}_{\\rm B}$, or the Shapiro delay, are\nunmeasurable in this system, unless significant improvements in timing\nprecision can be obtained. For a companion mass of 1.35 $M_{\\odot}$,\n$\\dot{P}_{\\rm B}$ is only $-9.4 \\times 10^{-15}$~s~s$^{-1}$ and we\nexpect an amplitude of only 6\\,$\\mu$s for the Shapiro\ndelay. We also note that the effects of geodetic precession (see\ne.g. 
\\citealt{kra98}) will not be measurable within a reasonable time,\nas it has a period of the order of $10^{5}$ years.\n\n\n\\section{Improving timing precision}\n\nIt is obvious that the measurement of further PK parameters will only\nbe possible if higher timing precision can be achieved for this\npulsar. For instance, if a precision of 50\\,$\\mu$s could be obtained,\n$\\gamma$ could be measurable to a 10\\% accuracy after a total of just\n5 yr of observations, while a $3\\sigma$ detection of the orbital decay\nmay be achieved after about 6 yr.\n\nOne way to achieve higher timing precision is to detect narrow\nfeatures in the observed pulse profile by means of higher effective\ntime resolution. This is most commonly achieved through better\ncorrection for the dispersion smearing that is caused by the radio\nsignal's passage through the ionized interstellar medium. While this\neffect is completely removed by the use of coherent\nde-dispersion techniques (see \\citealt{hr75}) at some of our\ntelescopes, it is apparent that the current timing precision is limited\ninstead by broadening of the pulse profile due to interstellar\nscattering (\\citealt{lmg+04}, and references therein). Indeed, the\npulse profile at 1.4~GHz shows a strong scattering tail (see Fig.\\,3),\nwhich prevents a highly accurate determination of the pulse time of\narrival. As scattering is a strong function of observing frequency, we\ncan expect to reduce its effect, and hence to enable higher timing\nprecision, by using timing observations at frequencies above 1.4~GHz.\n\n\\begin{figure}\n\\centering \\includegraphics[angle=0,width=8.5cm]{4385fig3.ps}\n\\caption{Pulse profiles of PSR\\,J1811--1736 at 1.4\\,GHz (bottom\npanel) and 3.1\\,GHz (top panel). 
Both profiles have been obtained from\n10-min observations performed with the Parkes radio telescope in\nFebruary 2005.}\n\\label{fig:3GHzp}\n\\end{figure}\n\nWe obtained observations at 3.1~GHz that confirm this expectation.\nThe pulse profile obtained at this frequency shows no evidence of\ninterstellar scattering, and its width at 10\\% is only 7.3~ms. This is\na great improvement with respect to the 1.4~GHz profile, whose 10\\%\nwidth is 58.3~ms. A flux density of $0.34\\pm0.07$\\,mJy measured at\n3.1\\,GHz suggests that regular timing observations at this frequency\nshould be possible and should significantly improve the achievable\ntiming precision. This would allow us to measure a second PK parameter\nwith sufficient precision to determine the companion\nmass.\n\nUsing the data available at 1.4\\,GHz and 3.1\\,GHz, we computed\nspectral indices for the flux density and the scattering time. For the flux\ndensity we obtain a spectral index of $\\beta=-1.8\\pm0.6$. Subdividing\nour observing band at 1.4\\,GHz we obtain two different profiles that\nwe use to measure the pulse scatter timescale $\\tau$ by applying the\ntechnique described by \\citet{lkm+01}. We convolve the\n3.1\\,GHz profile, assumed to represent the true pulse shape, with an\nexponential scattering tail and obtain scattering times by a\nleast-squares comparison of the convolved profile with the observed\npulse shape. We find $\\tau_s~=~16.9$~ms at 1.284\\,GHz and\n$\\tau_s~=~10.6$~ms at 1.464~GHz. This results in a\nspectral index $\\alpha$ of the scattering time, i.e.~$\\tau \\propto\n\\nu^{-\\alpha}$, of $\\alpha~=~3.5\\pm0.1$. This value agrees\nvery well with analogous results from L\\\"{o}hmer et\nal. 
(\\citeyear{lkm+01,lmg+04}) who determined $\\alpha$ for a number of\npulsars with very high dispersion measures.\n\nThe measured spectral index of the scattering time is also consistent\nwith the fact that the pulsar has not been detected at frequencies\nbelow 1\\,GHz. For example, at 400\\,MHz we calculate $\\tau_{s}\\sim1$\\,s,\nwhich is almost an order of magnitude greater than the spin period of\nthe pulsar, thus making it impossible to detect it as a pulsating source.\n\n\n\\section{Previous searches for pulsations from the companion}\n\nSearches for pulsations from the binary companion of PSR\\,J1811-1736\nhave been performed on Parkes and Effelsberg data. Parkes observations\nhave been investigated with the procedure described in \\citet{fsk+04},\nwhile Effelsberg data have been processed using the procedure\ndescribed in \\citet{kle04}. Both searches were unsuccessful in\ndetecting any evidence of pulsation. The very high value of the\ndispersion measure for this system may suggest that interstellar\nscattering is responsible for the failure to detect any\npulsation. Therefore we studied the possible impact of this phenomenon\non our searches for pulsations from the companion of PSR\\,J1811-1736.\n\n\nWe considered 1\\,hr observations performed with the Effelsberg telescope\nusing either the 20\\,cm (1.4\\,GHz) or the 11\\,cm (2.7\\,GHz) receiver,\nexploring a range of possible flux densities ($S=0.05, 0.5, 1$\\,mJy)\nand a detection threshold in signal-to-noise ratio of\n$S\/N=10$. 
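The scattering-time arithmetic above can be reproduced in a few lines: the spectral index follows from the two in-band measurements at 1.4 GHz, and extrapolating to 400 MHz yields the roughly 1 s smearing mentioned above. A minimal sketch (illustrative only):

```python
import math

# Scattering times measured in the two halves of the 1.4 GHz band
nu1, tau1 = 1.284, 16.9e-3     # GHz, s
nu2, tau2 = 1.464, 10.6e-3     # GHz, s

# tau ~ nu**(-alpha)  =>  alpha = ln(tau1/tau2) / ln(nu2/nu1)
alpha = math.log(tau1 / tau2) / math.log(nu2 / nu1)

# Extrapolate the scattering time down to 400 MHz
tau_400 = tau1 * (nu1 / 0.4) ** alpha

print(f"alpha    = {alpha:.2f}")       # consistent with 3.5 +/- 0.1
print(f"tau(400) = {tau_400:.2f} s")   # ~1 s, about ten spin periods
```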
Using the DM of the observed pulsar, and assuming Effelsberg\nobservations at 1.4\\,GHz, we find a minimum detectable period of\n$P_{\\rm min}=750$\\,ms for a flux density of $S=50\\,\\mu$Jy, while even\nfor flux densities of $S=500\\,\\mu$Jy and $S=1\\,$mJy, periods below\n$\\sim$10\\,ms remain undetectable at the observing frequency of\n1.4\\,GHz.\n\nAt 2.7\\,GHz, the system performance of the Effelsberg telescope allows\nfor an antenna gain of $G=$1.5\\,K\\,Jy$^{-1}$ with a system temperature\n$T_{\\rm sys}=17$\\,K. Using these parameters, we obtain minimum periods\nof $P_{\\rm min}=110$\\,ms, $P_{\\rm min}=2.5$\\,ms and $P_{\\rm min}=\n1.6$\\,ms for flux densities $S_{\\rm 2.7\\,GHz}=50\\,\\mu$Jy,\n500\\,$\\mu$Jy and 1\\,mJy respectively.\n\nWhen searching for pulsations from the binary companion of a Galactic\nrecycled pulsar in a double neutron star system, it is more likely\nthat the companion is a young pulsar with rather ordinary spin\nparameters, as found for PSR\\,J0737-3039B in the double pulsar system\n\\citep{lbk+04}. Our lower limits on the minimum detectable period\ntherefore suggest that interstellar scattering should not have\nprevented the detection of the companion, unless it were a very\nfast-spinning or very faint source.\n\n\n\n\\section{Constraints on the kick velocity of the second SN explosion}\n\nThe large eccentricity of the J1811$-$1736 system can be ascribed to a\nsudden loss of mass, which results in a change of the orbital\nparameters. Such a sudden loss of mass can be attributed to the SN\nexplosion that formed the younger unseen neutron star companion (see,\ne.g., \\citealt{bv91}). Under the hypothesis of a symmetric explosion,\nsimple calculations show that the binary survives this event only if\nthe expelled mass $M_{\\rm exp}$ is less than half of the total mass of\nthe binary before the explosion (pre-SN binary). 
The induced\neccentricity is a simple function of the amount of the expelled mass:\n$e=M_{\\rm exp}\/(M_{\\rm TOT}-M_{\\rm exp})$, where $M_{\\rm TOT}$ is the total mass of\nthe pre-SN binary and $M_{\\rm TOT}-M_{\\rm exp}$ is the total mass left after\nthe explosion. In the case of the binary system hosting\nPSR\\,J1811-1736, the measured eccentricity, $e=0.828$, and the derived\ntotal mass, $M_{\\rm bin}=2.57\\,M_{\\odot}$, would imply a total mass\n$M_{\\rm TOT}\\,=\\,4.7\\,M_{\\odot}$ for the pre-SN binary.\n\nThe high space velocities measured for isolated pulsars indicate that\nneutron stars may receive a kick when formed, with an unpredictable\namplitude and direction \\citep{hp97,cc98,acc02}. Such kicks\nimparted to the newly formed neutron stars are caused by {\\em\nasymmetric supernova explosions}. If an asymmetric SN explosion\noccurs in a binary system, the survival and the eventual post-SN\nbinary parameters are jointly determined by the mass loss and the\nvector representing the velocity imparted to the neutron star. In this\ncase, a simple survival condition like the one derived for the\nsymmetric explosion cannot be formulated.\n\nA correlation between the pulsar's spin period and orbital\neccentricity has recently been found for DNS systems\n(\\citealt{mlc+05}, \\citealt{fkl+05}). A numerical simulation by\n\\citet{dpp05} linked this correlation to the typical amplitude of the\nkick velocity received by the younger neutron star at birth.\n\\citet{dpp05} found that the spin period versus eccentricity\ncorrelation is recovered if the typical kick amplitude satisfies the\ncondition $V_{\\rm K} \\ifmmode\\stackrel{<}{_{\\sim}}\\else$\\stackrel{<}{_{\\sim}}$\\fi 50$\\,km\\,s$^{-1}$.\n\nTo investigate the nature of the kick received by the younger neutron\nstar in this system, we considered as the pre-SN binary a binary system\ncontaining a neutron star and a helium star. 
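For instantaneous, symmetric mass loss from a circular pre-SN orbit, the induced eccentricity equals the expelled mass divided by the mass left after the explosion, so the quoted pre-SN mass follows from the two measured quantities alone; a minimal sketch (illustrative only):

```python
e = 0.828011          # measured eccentricity
M_post = 2.57         # measured (post-SN) total mass, Msun

# e = M_exp / M_post  =>  M_exp = e * M_post, and the pre-SN total
# mass is M_TOT = M_post + M_exp = (1 + e) * M_post
M_exp = e * M_post
M_TOT = (1.0 + e) * M_post

print(f"M_exp = {M_exp:.2f} Msun")           # 2.13 Msun expelled
print(f"M_TOT = {M_TOT:.2f} Msun")           # 4.70 Msun, as quoted

# Symmetric-explosion survival condition: M_exp < M_TOT / 2
print(f"M_exp/M_TOT = {M_exp / M_TOT:.2f}")  # 0.45, just below the limit
```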
We then constrained the\ntotal mass of this system by combining our results on the total mass\nof the actual binary with the range given by \\citet{dp03b} for the\nmass of the helium star that was the companion of PSR\\,J1811-1736\nbefore the explosion. The helium star mass range $2.8M_{\\odot} \\leq\nM_{\\rm C} \\leq 5.0 M_{\\odot}$ given by \\citet{dp03b} leads to a total\nmass range of $4.0M_{\\odot} \\leq M_{{\\rm {TOT}}} \\leq 6.5M_{\\odot}$.\n\nBinary parameters for the pre-SN binary have been chosen as follows.\nThe eccentricity has been assumed negligible, since the accretion\nphase responsible for spinning up the pulsar also provided strong\ntidal forces that circularised the orbit. The orbital separation has\nbeen constrained to lie between the minimum (pericentric) and\nmaximum (apocentric) distances between the two neutron stars in the\npost-SN binary. This statement can be justified as follows. The\ntypical velocity of the matter expelled in a SN explosion is of the\norder of $10^{4}$\\,km\\,s$^{-1}$, while the typical orbital velocity of the stars in\na binary system is of the order of $100$\\,km\\,s$^{-1}$, adopting for the\ntotal mass any value in the range we used for $M_{{\\rm {TOT}}}$ and a\nvalue for the orbital separation comparable to the one for the present\nbinary system, i.e. a few light-minutes (see the discussion in the next\nparagraph of the post-SN binary evolution due to general relativistic\neffects). This means that the change in position of the two stars is\nnegligible compared to the change in position of the expelled\nmatter. The time required for the binary system to make the transition\nfrom the pre-SN to the post-SN binary is the time required by the\nexpelled matter to travel along a path as long as the orbital\nseparation, i.e. of the order of an hour. After such an elapsed time the matter\nexpelled in the SN explosion encloses both stars and has no further\ngravitational effect on their binary motion. 
This time is also much\nshorter than the orbital period of a few days for a pre-SN binary like\nthe one we are considering. This means that during this transition the\npositions of the two stars remained unchanged, and their separation was\na distance that the two stars also periodically attain in their orbital\nmotion in the post-SN binary.\n\nTo make a fully consistent comparison between the presently observed\nbinary and the eccentric binary that emerged from the last SN\nexplosion (post-SN binary), one has to take into account the secular\nchanges of the orbital parameters caused by general relativistic\neffects. In order to do this, one needs an estimate for the\ntime since the last SN. The only timescale that is available to us is\nthe characteristic age of the first-born pulsar. We then find binary\nparameters that are consistent with the present system values within\ntheir uncertainties. Given the well-known uncertainties in this age\nestimate, we considered the possibility that the present binary is\nin fact up to ten times older than suggested by the characteristic age\n(i.e. 1.8$\\times 10^{10}$ yr). Even when considering this extreme age,\nwe find that our results remain unaffected. We consequently decided\nto use as post-SN orbital parameters the same values we measure today.\n\nBy requiring that the total energy and total angular momentum of the\npost-SN binary, calculated in the center-of-mass frame, match those of\nthe present system, we can combine these terms to obtain an\nequation for the kick amplitude as a function of the two angles\nrepresenting its direction in a suitable reference frame, and of the\ntotal mass and orbital separation before the explosion. We assumed\nthat the probability of occurrence for any given kick vector is\nproportional to the solid angle described by the direction of the kick\nin spherical coordinates and then calculated the probability of having\na kick velocity lower than some fixed values. 
We chose the values of\n50, 100 and 150~km~s$^{-1}$ (hereafter $P_{50}$, $P_{100}$ and\n$P_{150}$, respectively). Figure\\,\\ref{fig:probs} shows that $P_{50}$\nis not negligible if the total mass of the pre-SN binary system is\nlower than 6\\,$M_{\\odot}$. Moreover, all considered probabilities peak\nat a total pre-SN mass of 4.70\\,$M_{\\odot}$,\ncorresponding to the null-kick case. These results lead to the\nconclusion that the younger neutron star in this system received a\nlow-velocity kick and is thus similar to all other known DNS systems,\nwhich all have tighter orbits.\n\nNevertheless, the binary system containing PSR\\,J1811-1736 is much\nwider than all other known DNS systems. This may indicate that the\nbinary evolution of this system was (at least partially)\ndifferent. In particular, the wide orbital separation of this system\nmay be compatible with an evolution during which the pulsar's\nprogenitor completely avoided a common-envelope phase \\citep{dp03b}, or\none in which this phase was too short to sufficiently reduce the orbital\nseparation. Moreover, if the spin-up occurred via the stellar wind of\nthe giant companion, then the system would tend to be wider due to the\nisotropic mass loss from the companion \\citep{dpp05}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=0,width=8.5cm]{4385fig4.ps}\n\\caption{Probability of a kick velocity lower than or equal to\n50 (lower line), 100 and 150 (upper line) km s$^{-1}$ as a function of\nthe pre-SN total mass. All probabilities peak at a\npre-SN total mass $M_{\\rm TOT}=4.70M_{\\odot}$, which is the mass of\nthe binary system before the explosion in the case of a symmetric\nSN. 
The probability of a kick velocity lower than 50 km\ns$^{-1}$ is non-negligible for all but the highest considered values\nof the binary pre-SN mass.}\n\\label{fig:probs}\n\\end{figure}\n\n\\section{Summary \\& Conclusions}\n\nWe have presented an improved timing solution for the binary pulsar\nJ1811$-$1736. This solution improves the previously measured values\nfor the spin and Keplerian orbital parameters and one post-Keplerian\norbital parameter, the periastron advance. These results would not\nhave been achieved without data from the three telescopes used and are\nthe first obtained as part of the European Pulsar Timing Array (EPTA)\ncollaboration.\n\nThe measured values of the spin period and its first derivative are\ntypical of a mildly recycled neutron star, while the high eccentricity\nof the binary system can be seen as a signature of the SN explosion\nthat interrupted the mass transfer from the companion to the accreting\nneutron star. This is likely to have occurred before the pulsar could\nreach spin periods typical of the fully recycled (i.e. millisecond)\npulsars. This leads to the conclusion that PSR\\,J1811$-$1736 is a\nmember of a DNS binary system.\n\nThe determined value of the periastron advance provides further\nconfirmation of this scenario, as it suggests a total mass of the\nsystem of $M_{\\rm TOT}$~=~2.57$\\pm$0.10~$M_{\\odot}$. This value is\nsimilar to the total mass of two other DNS systems, i.e. the double\npulsar \\citep{lbk+04} and PSR\\,J1756-2251 \\citep{fkl+05}. In both\nthese systems, the non-recycled neutron star is very light. Assuming\nthat PSR\\,J1811$-$1736 is a neutron star with a mass within the\ncurrently measured mass range for neutron stars, we find the companion\nmass to lie in the same range. 
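The total mass quoted here can be re-derived from the measured Keplerian parameters and the periastron advance; a minimal sketch assuming general relativity (illustrative only):

```python
import math

T_SUN = 4.925490947e-6          # G * Msun / c**3, in seconds
omdot = 0.0090 * math.pi / 180.0 / (365.25 * 86400.0)  # deg/yr -> rad/s
Pb = 18.7791691 * 86400.0       # orbital period (s)
e = 0.828011                    # eccentricity

# GR periastron advance:
#   omdot = 3 * n**(5/3) * (T_SUN * M_TOT)**(2/3) / (1 - e**2),
# with n = 2*pi/Pb; solve for M_TOT in solar masses.
n = 2.0 * math.pi / Pb
M_TOT = (omdot * (1.0 - e**2) / (3.0 * n ** (5.0 / 3.0))) ** 1.5 / T_SUN

print(f"M_TOT = {M_TOT:.2f} Msun")  # ~2.59, within the quoted 2.57 +/- 0.10
```

The slight offset from 2.57 comes from the rounding of the quoted periastron advance; it is well inside the stated uncertainty.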
Using these arguments we determine the\ninclination of the orbital plane to within $\\sim6$ degrees.\n\nWe also investigated the possibility of measuring a second\npost-Keplerian parameter, in order to determine both masses and thus\nto definitively determine the nature of the companion. Unfortunately, the\npulse profile at 1.4~GHz is heavily broadened by interstellar\nscattering, which limits the timing precision and means that a second\npost-Keplerian parameter is not measurable within a reasonable amount\nof time with observations at that frequency. However, we find that at\n3~GHz the scattering is sufficiently reduced and the flux density is\nsufficiently high that higher precision timing will be possible at\nthis frequency. Comparing the pulse profiles at 1.4 and 3~GHz, we find\nthat the scattering timescale for this pulsar scales with frequency\nwith a power law of index $\\alpha~=~3.5\\pm0.1$, which is in excellent\nagreement with earlier results on high dispersion measure pulsars.\n \nConsidering the effects of interstellar scattering on the\ndetectability of pulsations from the companion, we find that the\nminimum detectable period is longer at lower frequencies and for\nfainter objects. In general, we do not expect interstellar scattering\nto be the cause of the continued non-detection of the companion\nneutron star.\n\nThe orbital separation of this system is much larger than that of all\nother DNS systems, which suggests that its binary evolution has been\ndifferent. One explanation invokes the lack of a common-envelope\nphase, during which the size of the orbit shrinks due to tidal forces\nin the envelope of the companion star. 
Another explanation\n\\citep{dpp05}, not necessarily conflicting with the previous one,\ninvokes a different mass transfer mechanism in the spin-up phase of\nthe pulsar, namely via stellar wind, while all other recycled pulsars\nin the known DNS systems have been spun up via Roche lobe overflow mass\ntransfer.\n\nFinally, we investigated the kick imparted to the second-born neutron\nstar during the second SN. We find that for realistic values of the\ntotal mass of the pre-SN binary, the kick velocity has a\nnon-negligible probability of being lower than 50~km~s$^{-1}$. This\nconstraint is common to all DNS systems, as shown by\n\\citet{dpp05}. This evidence for a low-amplitude asymmetric kick\nreceived by the younger neutron star may be the consequence of the\neffects of binary evolution on a star that undergoes a SN explosion,\neffects that are somehow able to tune the amplitude of such a kick.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\nClassical Cepheid variable stars are primary distance \nindicators and rank among standard candles for establishing \nthe cosmic distance scale, owing to the famous period-luminosity \n($P$--$L$) relationship.\nCompanions to Cepheids, however, complicate the situation.\nThe contribution of the secondary star to the observed \nbrightness has to be taken into account when involving any\nparticular Cepheid in the calibration of the $P$--$L$ relationship.\n\nBinaries among Cepheids are not rare at all: their frequency \nexceeds 50 per cent for the brightest Cepheids, while among the \nfainter Cepheids an observational selection effect hampers \nthe detection of binarity \\citep{Sz03a}.\n\nOwing to observational projects aimed at obtaining new\nradial velocities (RVs) of numerous Cepheids carried out during\nthe last decades, a part of the selection effect has been\nremoved. 
This progress is visualized in Fig.~\\ref{fig-comparison}\nwhere the current situation is compared with that of 20 years ago.\nThe data have been taken from the on-line data base on binaries \namong Galactic Cepheids (http:\/\/www.konkoly.hu\/CEP\/orbit.html).\nTo get rid of the fluctuation at the left-hand part of the diagram,\nthe brightest Cepheids ($\\langle V \\rangle <5$~mag) were merged into a\nsingle bin because such stars are extremely rare among Cepheids\n-- see the histogram in Fig.~\\ref{fig-histogram}.\n\nIn the case of pulsating variables, like Cepheids, spectroscopic\nbinarity manifests itself in a periodic variation of the\n$\\gamma$-velocity (i.e., the RV of the mass centre of the Cepheid). \nIn practice, the orbital RV variation of the Cepheid component is \nsuperimposed on the RV variations of pulsational origin. \nTo separate orbital and pulsational effects, knowledge of the \naccurate pulsation period is essential, especially when comparing \nRV data obtained at widely differing epochs. Therefore, the pulsation\nperiod and its variations have been determined with the method of\nthe O$-$C diagram \\citep{S05} for each target Cepheid. Use of the\naccurate pulsation period obtained from the photometric data is a \nguarantee for the correct phase matching of the (usually less\nprecise) RV data.\n\n\n\\begin{figure}\n\\includegraphics[height=48mm, angle=0]{szabados2013sbcepfig1.eps}\n\\caption{Percentage of known binaries among Galactic\nclassical Cepheids as a function of the mean apparent\nvisual brightness in 1993 and 2013. 
The decreasing influence\nof the observational selection effect is noticeable.}\n\\label{fig-comparison}\n\\end{figure}\n\n\\begin{figure}\n\\vspace*{7mm}\n\\includegraphics[height=48mm, angle=0]{szabados2013sbcepfig2.eps}\n\\caption{Histogram showing the number distribution of known\nGalactic classical Cepheids as a function of their mean \napparent visual brightness.}\n\\label{fig-histogram}\n\\end{figure}\n\nIn this paper we point out spectroscopic binarity of three \nbright Galactic Cepheids by analysing RV data. The structure of \nthis paper is as follows. The new observations and the equipment \nutilized are described in Sect.~\\ref{newdata}. Section~\\ref{results} \nis devoted to the results on the three new spectroscopic binary (SB) \nCepheids: LR~Trianguli Australis, RZ~Velorum, and BG~Velorum. \nBasic information on these Cepheids is given in Table~\\ref{obsprop}. \nFinally, Section~\\ref{concl} contains our conclusions.\n\n\\begin{table} \n\\begin{center} \n\\caption{Basic data of the programme stars \nand the number of spectra.} \n\\label{obsprop} \n\\begin{tabular}{|lccccc|} \n\\hline \nCepheid & $\\langle V \\rangle$ & $P$ & Mode & \\multicolumn{2}{c}{Number of spectra}\\\\\n& (mag) & (d)& of pulsation & SSO & CORALIE \\\\\n\\hline \nLR~TrA & 7.80 & 2.428289 & first overtone & 10 & 32\\\\ \nRZ~Vel & 7.13 & 20.398532 & fundamental & 30 & 67\\\\\nBG~Vel & 7.69 & 6.923843 & fundamental & 27 & 33\\\\ \n\\hline \n\\end{tabular} \n\\end{center} \n\\end{table}\n\n\n\n\\section{New observations}\n\\label{newdata}\n\n\\subsection{Spectra from the Siding Spring Observatory}\n\\label{SSO}\n\nWe performed an RV survey of Cepheids with the 2.3~m ANU \ntelescope located at the Siding Spring Observatory (SSO), \nAustralia. 
The main aim of the project was to detect Cepheids \nin binary systems by measuring changes in the mean values of \ntheir RV curves, which can be interpreted as the orbital \nmotion of the Cepheid around the centre of mass of a binary \nsystem (change of $\\gamma$-velocity). The target list was \ncompiled to include Cepheids with a single-epoch RV phase curve \nor without any published RV data. Several Cepheids suspected \nto be members of SB systems were also put on the target list. \nIn 64 nights between 2004 October and 2006 March we monitored \n40 Cepheids with pulsation periods between 2 and 30 d.\nAdditional spectra of some targets were obtained in 2007 August.\n\nMedium-resolution spectra were taken with the Double Beam \nSpectrograph using the 1200~lines~mm$^{-1}$ gratings in both arms of \nthe spectrograph. The projected slit width was 2 arcsec \non the sky, which was about the median seeing during our \nobservations. The spectra covered the wavelength ranges \n4200--5200~\\AA\\ in the blue arm and 5700--6700~\\AA\\ in the red \narm. The dispersion was 0.55~\\AA~pixel$^{-1}$, leading to a nominal \nresolution of about 1~\\AA.\n\nAll spectra were reduced with standard tasks in {\\sc iraf}\n\\footnote{{\\sc iraf} is distributed by the National Optical \nAstronomy Observatories, which are operated by the Association\nof Universities for Research in Astronomy, Inc., under \ncooperative agreement with the National Science Foundation.}.\nReduction consisted of bias and flat-field corrections, \naperture extraction, wavelength calibration, and continuum \nnormalization. We checked the consistency of wavelength \ncalibrations via the constant positions of strong telluric \nfeatures, which proved the stability of the system. \nRVs were determined only for the red arm data \nwith the task {\\it fxcor\\\/}, applying the cross-correlation \nmethod using a well-matching theoretical template spectrum \nfrom the extensive spectral library of \\citet{Metal05}. 
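The cross-correlation step can be illustrated with a toy example: on a grid that is uniform in ln(wavelength), a Doppler shift is a rigid translation, so the lag of the cross-correlation peak converts directly into a velocity. A minimal sketch with synthetic Gaussian lines (illustrative only; not the actual fxcor template or our data):

```python
import numpy as np

C = 299792.458  # speed of light, km/s

# Uniform grid in ln(lambda): one pixel corresponds to dv km/s.
dv = 1.0
loglam = np.arange(np.log(6500.0), np.log(6620.0), dv / C)
lam = np.exp(loglam)

def spectrum(lam, rv):
    """Toy spectrum: continuum with Gaussian absorption lines,
    Doppler-shifted by rv (km/s)."""
    rest = lam / (1.0 + rv / C)          # de-shifted wavelengths
    flux = np.ones_like(lam)
    for center, depth in [(6520.0, 0.3), (6563.0, 0.6), (6600.0, 0.4)]:
        flux -= depth * np.exp(-0.5 * ((rest - center) / 0.8) ** 2)
    return flux

template = spectrum(lam, 0.0)            # rest-frame template
observed = spectrum(lam, 23.7)           # "observed" star at +23.7 km/s

# Cross-correlate the mean-subtracted line profiles and turn the lag
# of the CCF peak into a radial velocity.
t = (1.0 - template) - (1.0 - template).mean()
o = (1.0 - observed) - (1.0 - observed).mean()
ccf = np.correlate(o, t, mode="full")
lag = int(np.argmax(ccf)) - (len(t) - 1)
rv = lag * dv                            # km/s, good to ~1 pixel here
print(f"recovered RV = {rv:.1f} km/s")   # ~24 km/s for a 23.7 km/s input
```

In practice the CCF peak is refined to sub-pixel precision, e.g. by fitting its maximum, which is essentially what fxcor does.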
Then, \nwe made barycentric corrections to every single RV value. \nThis method resulted in a 1-2~km~s$^{-1}$ uncertainty in the \nindividual RVs, while further tests have shown that our \nabsolute velocity frame was stable to within \n$\\pm$2--3~km~s$^{-1}$. This level of precision is sufficient\nto detect a number of Cepheid companions, as they can often\ncause $\\gamma$-velocity changes well above 10~km~s$^{-1}$.\n\nDiscovery of six SBs among the 40 target Cepheids was \nalready reported by \\citet{Szetal13}. The binarity of the \nthree Cepheids announced here could be revealed by involving \nindependently obtained additional data (see Section~\\ref{coralie}).\nThe individual RV data of the rest of the Cepheid targets\nwill be published together with the results of the analysis\nof the spectra.\n\n\n\\subsection{CORALIE observations from La Silla}\n\\label{coralie}\n\nAll three Cepheids were among the targets during multiple observing \ncampaigns between 2011 April and 2012 May using the fibre-fed \nhigh-resolution ($R \\sim 60000$) echelle spectrograph \n\\textit{CORALIE} mounted on the Swiss 1.2\\,m Euler telescope at \nESO La Silla Observatory, Chile. The instrument's design is \ndescribed in \\citet{Qetal01}; recent instrumental updates \ncan be found in \\citet{Setal10}. \n\nWhen it turned out that these three Cepheids have variable\n$\\gamma$-velocities, several new spectra were obtained in\n2012 December - 2013 January and 2013 April.\n\nThe spectra are reduced by the efficient online reduction \npipeline that performs bias correction, cosmics removal, \nand flatfielding using tungsten lamps. ThAr lamps are used \nfor the wavelength calibration. The reduction pipeline directly \ndetermines the RV via cross-correlation \\citep{Betal96} \nusing a mask that resembles a G2 spectral type. \nThe RV stability of the instrument is excellent and for \nnon-pulsating stars the RV precision is limited by photon noise\n(see, e.g., \\citealt{Petal02}). 
However, the precision achieved for
Cepheids is lower due to line asymmetries. We estimate a typical
precision of $\sim$ 0.1\,km\,s$^{-1}$ (including systematics due
to pulsation) per data point for our data.


\section{Results for individual Cepheids}
\label{results}


\subsection{LR~Trianguli Australis}
\label{lrtra}

\paragraph*{Accurate value of the pulsation period}
\label{lrtra-period}

The brightness variability of LR~TrA (HD\,137626, $\langle V \rangle
= 7.80$\,mag) was revealed by \citet{Setal66} based on the Bamberg
photographic patrol plates. The Cepheid nature of the variability and the
first value of the pulsation period were determined by \citet{E83}.
This Cepheid pulsates in the first-overtone mode; therefore, it has
a small pulsational amplitude and nearly sinusoidal light and
velocity curves.

In the case of Cepheids pulsating with a low amplitude, the O$-$C
diagram constructed for the median brightness (the mid-point
between the faintest and the brightest states) is more reliable
than that based on the moments of photometric maxima \citep{Detal12}.
Therefore, we determined the accurate value of the pulsation period
by constructing an O$-$C diagram for the moments of median brightness
on the ascending branch of the light curve, since this is the phase when
the brightness variations are steepest during the whole pulsational
cycle.

All published photometric observations of LR~TrA covering three
decades were re-analysed in a homogeneous manner to determine
the seasonal moments of the chosen light-curve feature.
The relevant data
listed in Table~\ref{tab-lrtra-oc} are as follows:\\
Column~1: heliocentric moment of the selected light-curve feature
(median brightness on the ascending branch for LR~TrA, maximum
brightness for both RZ~Vel and BG~Vel, see Tables~\ref{tab-rzvel-oc}
and \ref{tab-bgvel-oc}, respectively);\\
Col.~2: epoch number, $E$, as calculated from Equation~(\ref{lrtra-ephemeris}):
\vspace{-1mm}
\begin{equation}
C = 2\,453\,104.9265 + 2.428\,289{\times}E 
\label{lrtra-ephemeris}
\end{equation}
\vspace{-3mm}
$\phantom{mmmmm}\pm0.0037\phantom{}\pm0.000\,003$

\noindent (this ephemeris has been obtained by a weighted
least squares linear fit to the O$-$C differences);\\
\noindent Col.~3: the corresponding O$-$C value;\\
Col.~4: weight assigned to the O$-$C value (1, 2, or 3,
depending on the quality of the light curve leading to
the given difference);\\
Col.~5: reference to the origin of the data.\\

The O$-$C diagram of LR~TrA based on the O$-$C values listed
in Table~\ref{tab-lrtra-oc} is plotted in
Fig.~\ref{fig-lrtra-oc}. The plot can be approximated by a
constant period, confirming the validity of the ephemeris
(\ref{lrtra-ephemeris}) for the moments of median brightness
on the ascending branch.
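Given this ephemeris, the epoch number and O$-$C value of any observed moment follow directly: the epoch is the nearest integer cycle count, and O$-$C is the observed minus the calculated moment. A short illustration (Python; the two sample moments are taken from Table~\ref{tab-lrtra-oc}):

```python
# O-C computation for LR TrA from the ephemeris of Equation (1)
T0 = 2453104.9265   # reference epoch, HJD
P = 2.428289        # pulsation period, d

def o_minus_c(obs):
    """Return the epoch number E and the O-C residual (in days)."""
    E = round((obs - T0) / P)   # nearest integer number of elapsed cycles
    C = T0 + P * E              # calculated moment for that epoch
    return E, obs - C

E0, oc0 = o_minus_c(2453104.9151)   # ASAS moment at epoch 0
E1, oc1 = o_minus_c(2453520.1818)   # ASAS moment at epoch 171
```

Both calls reproduce the tabulated values ($E=0$, O$-$C$\,=-0.0114$ and $E=171$, O$-$C$\,=+0.0179$).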
The \nscatter of the points in Fig.~\\ref{fig-lrtra-oc} reflects the \nobservational error and uncertainties in the analysis of the data.\n\n\n\\begin{table}\n\\caption{O$-$C values of LR~TrA (see the \ndescription in Sect.~\\ref{lrtra-period}).}\n\\begin{tabular}{l@{\\hskip2mm}r@{\\hskip2mm}r@{\\hskip2mm}c@{\\hskip2mm}l}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $E\\ $ & O$-$C & $W$ & Data source\\\\\n2\\,400\\,000 + &&&\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n45018.7822 & $-$3330& 0.0581 &3 & \\citet{E83}\\\\\n47633.9607 & $-$2253& $-$0.0307 & 3 & \\citet{Aetal90}\\\\\n47939.9568 & $-$2127& 0.0010 &2 & {\\it Hipparcos} \\citep{ESA97}\\\\\n48139.0426 & $-$2045& $-$0.0329 & 3& {\\it Hipparcos} \\citep{ESA97}\\\\\n48440.1554 & $-$1921& $-$0.0279 & 3& {\\it Hipparcos} \\citep{ESA97}\\\\\n48750.9547 & $-$1793& $-$0.0496 & 3& {\\it Hipparcos} \\citep{ESA97}\\\\\n49814.6064 & $-$1355& 0.0115 & 3 & \\citet{B08}\\\\\n50370.7115 & $-$1126& 0.0384 & 3 & \\citet{B08}\\\\\n50574.6393 & $-$1042& $-$0.0101 & 3& \\citet{B08}\\\\\n50909.7531 & $-$904& $-$0.0001 & 3 & \\citet{B08}\\\\\n51264.2883 & $-$758& 0.0049 & 3 & \\citet{B08}\\\\\n51650.4058 & $-$599& 0.0244 & 3 & \\citet{B08}\\\\\n51958.8010 & $-$472& 0.0269 & 2 & \\citet{B08}\\\\\n52041.3435 & $-$438& 0.0076 & 2 & ASAS \\citep{P02}\\\\\n52366.7222 & $-$304& $-$0.0044 & 3 & \\citet{B08}\\\\\n52500.2709 & $-$249& $-$0.0116 & 3 & ASAS \\citep{P02}\\\\\n52769.8038 & $-$138& $-$0.0188 & 3 & ASAS \\citep{P02}\\\\\n53102.5159 & $-$1& 0.0177 & 3 & \\citet{B08}\\\\\n53104.9151 & 0& $-$0.0114 & 3& ASAS \\citep{P02}\\\\\n53520.1818 & 171& 0.0179 & 3 & ASAS \\citep{P02}\\\\\n53840.7137 & 303& 0.0156 & 3 & ASAS \\citep{P02}\\\\\n54251.0850 & 472& 0.0061 & 3& ASAS \\citep{P02}\\\\\n54615.3163 & 622& $-$0.0060& 3 & ASAS \\citep{P02}\\\\\n54960.1214 & 764& $-$0.0179& 3 & ASAS \\citep{P02}\\\\\n\\noalign{\\vskip 
0.2mm}
\hline
\end{tabular}
\label{tab-lrtra-oc}
\end{table}


\begin{figure}
\includegraphics[height=44mm, angle=0]{szabados2013sbcepfig3.eps}
\caption{O$-$C diagram of LR~TrA. The plot can be
approximated by a constant period.}
\label{fig-lrtra-oc}
\end{figure}

\paragraph*{Binarity of LR~TrA}
\label{lrtra-bin}

\begin{figure}
\includegraphics[height=48mm, angle=0]{szabados2013sbcepfig4.eps}
\caption{Merged RV phase curve of LR~TrA. The different symbols
denote data from different years: 2005: filled triangles; 2006:
empty triangles; 2007: triangular star; 2012: filled circles;
2013: empty circles. The zero phase was arbitrarily chosen at
JD\,2\,400\,000.0 (in all phase curves in this paper).}
\label{fig-lrtra-vrad}
\end{figure}

\begin{figure}
\includegraphics[height=40mm, angle=0]{szabados2013sbcepfig5.eps}
\caption{Temporal variation in the $\gamma$-velocity of LR~TrA.
The symbols for the different data sets are the same as
in Fig.~\ref{fig-lrtra-vrad}.}
\label{fig-lrtra-vgamma}
\end{figure}

\begin{table}
\caption{RV values of LR TrA from the SSO spectra.
This is only a portion of the full version available online as Supporting
Information.}
\begin{tabular}{lr}
\hline
\noalign{\vskip 0.2mm}
JD$_{\odot}$ & $v_{\rm rad}$ \ \\
2\,400\,000 + &(km\,s$^{-1}$)\\
\noalign{\vskip 0.2mm}
\hline
\noalign{\vskip 0.2mm}
53599.9325 &$-$21.2\\
53600.9086 &$-$32.0\\
53603.9327 &$-$27.6\\
53605.9290 &$-$31.0\\
53805.1657 &$-$29.3\\
\noalign{\vskip 0.2mm}
\hline
\end{tabular}
\label{tab-lrtra-data}
\end{table}

\begin{table}
\caption{CORALIE velocities of LR TrA.
\nThis is only a portion of the full version \navailable online as Supporting Information.}\n\\begin{tabular}{lrc}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $v_{\\rm rad}$ \\ & $\\sigma$ \\\\\n2\\,400\\,000 + &(km\\,s$^{-1}$) & (km\\,s$^{-1}$)\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n55938.8701 & $-$27.97 & 0.05\\\\\n55938.8718 & $-$28.10 & 0.05\\\\ \n55939.8651 & $-$29.85 & 0.02\\\\ \n55940.8686 & $-$22.40 & 0.03\\\\ \n55941.8579 & $-$33.14 & 0.04\\\\ \n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-lrtra-coralie-data}\n\\end{table}\n\n\n\\begin{table}\n\\caption{$\\gamma$-velocities of LR~TrA.}\n\\begin{tabular}{lccl}\n\\hline\n\\noalign{\\vskip 0.2mm}\nMid-JD & $v_{\\gamma}$ & $\\sigma$ & Data source \\\\\n2\\,400\\,000+ & (km\\,s$^{-1}$)& (km\\,s$^{-1}$) & \\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n53603 & $-$25.5 & 0.5 & Present paper\\\\\n53808 & $-$24.8 & 0.5 & Present paper\\\\\n54331 & $-$29.0 & 1.0 & Present paper\\\\\n55981 & $-$27.5 & 0.1 & Present paper\\\\\n56344 & $-$26.4 & 0.1 & Present paper\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-lrtra-vgamma}\n\\end{table}\n\nThere are no earlier RV data on this bright Cepheid. Our new data \nlisted in Tables~\\ref{tab-lrtra-data} and \\ref{tab-lrtra-coralie-data} \nhave been folded on the accurate pulsation period given in the\nephemeris (see Equation~\\ref{lrtra-ephemeris}). The merged RV phase \ncurve is plotted in Fig.~\\ref{fig-lrtra-vrad}. Both individual \ndata series could be split into seasonal subsets.\n\nVariability in the $\\gamma$-velocity is obvious. The \n$\\gamma$-velocities (together with their uncertainties) are \nlisted in Table~\\ref{tab-lrtra-vgamma}. The $\\gamma$-velocity in\n2007 is more uncertain than in other years because this value\nis based on a single spectrum. Systematic errors can be excluded. 
\nDozens of Cepheids in our sample with non-varying\n$\\gamma$-velocities indicate stability of the equipment and \nreliability of the data reduction. Fig.~\\ref{fig-lrtra-vgamma} \nis a better visualization of the temporal variation in the \n$\\gamma$-velocity. The seasonal drift in the $\\gamma$-velocity \nis compatible with both short and long orbital periods.\n\nThe photometric contribution of the companion star decreases\nthe observable amplitude of the brightness variability as\ndeduced from the enhanced value of the ratio of the RV and\nphotometric amplitudes \\citep{KSz09}. This is an additional\n(although slight) indication of binarity of LR~TrA.\n\n\\subsection{RZ~Velorum}\n\\label{rzvel}\n\n\\paragraph*{Accurate value of the pulsation period}\n\\label{rzvel-period}\n\nThe brightness variability of RZ~Vel (HD\\,73502, $\\langle V \\rangle\n= 7.13$\\,mag) was revealed by Cannon \\citep{P09}. The Cepheid\nnature of variability and the pulsation period were established by \n\\citet{H36} based on the Harvard and Johannesburg photographic plate \ncollection which was further investigated by \\citet{Oo36}.\n\nThis is the longest period Cepheid announced in this paper and it has \nbeen frequently observed from the 1950s, first photoelectrically, \nthen in the last decades by CCD photometry. The photometric coverage \nof RZ~Vel was almost continuous in the last 20 years thanks to \nobservational campaigns by \\citet{B08} and his co-workers, as well as\nthe ASAS photometry \\citep{P02}.\n\nLong-period Cepheids are usually fundamental pulsators and they \noscillate with a large amplitude resulting in a light curve with\nsharp maximum.\n\nThe O$-$C diagram of RZ~Vel was constructed for the moments of \nmaximum brightness based on the photoelectric and CCD photometric \ndata (see Table~\\ref{tab-rzvel-oc}). 
The weighted least squares \nparabolic fit to the O$-$C values resulted in the ephemeris:\n\\vspace{-1mm}\n\\begin{equation}\nC = 2\\,442\\,453.6630 + 20.398\\,532{\\times}E + 1.397\\times 10^{-6} E^2\n\\label{rzvel-ephemeris}\n\\end{equation}\n\\vspace{-3mm}\n$\\phantom{mmmmm}\\pm0.0263\\phantom{l}\\pm 0.000\\,080 \\phantom{mm}\n\\pm 0.191\\times 10^{-6}$\n\n\\begin{table}\n\\caption{O$-$C values of RZ~Vel (description of the columns\nis given in Sect.~\\ref{lrtra-period}).}\n\\begin{tabular}{l@{\\hskip2mm}r@{\\hskip2mm}r@{\\hskip2mm}c@{\\hskip2mm}l}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $E\\ $ & O$-$C & $W$ & Data source\\\\\n2\\,400\\,000 + &&&\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n33784.5646 &$-$425 & 0.2777 & 1 & \\citet{Eetal57}\\\\\n34804.5174 &$-$375 & 0.3039 & 1 & \\citet{Wetal58}\\\\\n34845.2119 &$-$373 & 0.2013 & 3 & \\citet{Eetal57}\\\\\n35192.0024 &$-$356 & 0.2168 & 1 & \\citet{I61}\\\\\n40760.8647 &$-$83 & 0.2799 & 3 & \\citet{P76}\\\\\n41719.0924 &$-$36 &$-$0.2234& 3& \\citet{M75}\\\\\n41862.1249 &$-$29 & 0.0193 & 3 & \\citet{Detal77}\\\\\n42453.6330 & 0 &$-$0.0030 & 3 & \\citet{Detal77}\\\\\n44371.0472 & 94 &$-$0.0778 & 3 & \\citet{CC85}\\\\\n44391.3842 & 95 &$-$0.1393 & 2 & \\citet{E82}\\\\\n45003.2906 & 125 &$-$0.1889 & 3 & \\citet{CC85}\\\\\n48226.4369 & 283 &$-$0.0107 & 3 & {\\it Hipparcos} \\citep{ESA97}\\\\\n48797.5877 & 311 &$-$0.0188 & 3 & {\\it Hipparcos} \\citep{ESA97}\\\\\n49185.1653 & 330 &$-$0.0133 & 1 & Walker \\& Williams (unpublished)\\\\\n49817.8011 & 361 & 0.2680 &3 & \\citet{B08}\\\\\n50144.1979 & 377 & 0.2883 & 2 & \\citet{B02}\\\\\n50389.0443 & 389 & 0.3524 & 3 & \\citet{B08}\\\\\n50511.3662 & 395 & 0.2831 & 3 & \\citet{B02}\\\\\n50572.4468 & 398 & 0.1681 & 3 & \\citet{B08}\\\\\n50899.0581 & 414 & 0.4029 & 3 & \\citet{B08}\\\\\n51266.1488 & 432 & 0.3200 & 3 & \\citet{B08}\\\\\n51653.7650 & 451 & 0.3641 & 3 & \\citet{B08}\\\\\n51939.2846 & 465 & 0.3042 & 2 & ASAS 
\\citep{P02}\\\\\n51959.7692 & 466 & 0.3903 & 3 & \\citet{B08}\\\\\n52347.4262 & 485 & 0.4752 & 3 & \\citet{B08}\\\\\n52653.3896 & 500 & 0.4606 & 3 & ASAS \\citep{P02}\\\\\n52653.4100 & 500 & 0.4810 & 3 & \\citet{B08}\\\\\n53000.1794 & 517 & 0.4754 & 3 & ASAS \\citep{P02}\\\\\n53000.2610 & 517 & 0.5570 & 3 & \\citet{B08}\\\\\n53428.4384 & 538 & 0.3652 & 3 & ASAS \\citep{P02}\\\\\n53754.8864 & 554 & 0.4367 & 3 & ASAS \\citep{P02}\\\\ \n54183.1657 & 575 & 0.3468 & 3 & ASAS \\citep{P02}\\\\\n54509.5729 & 591 & 0.3775 & 3 & ASAS \\citep{P02}\\\\\n54815.4343 & 606 & 0.2609 & 3 & ASAS \\citep{P02}\\\\\n55121.3569 & 621 & 0.2055 & 2 & ASAS \\citep{P02}\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-rzvel-oc}\n\\end{table}\n\nThe O$-$C diagram of RZ~Vel plotted in Fig.~\\ref{fig-rzvel-oc} \nindicates a continuously increasing pulsation period with a period \njitter superimposed. This secular period increase has been caused \nby stellar evolution: while the Cepheid crosses the instability \nregion towards lower temperatures in the Hertzsprung--Russell \ndiagram, its pulsation period is increasing. \nContinuous period variations (of either sign) often occur in \nthe pulsation of long-period Cepheids \\citep{Sz83}.\n\nFig.~\\ref{fig-rzvel-oc2} shows the O$-$C residuals after \nsubtracting the parabolic fit defined by \nEquation~(\\ref{rzvel-ephemeris}). If the wave-like fluctuation seen in \nthis $\\Delta (O-C)$ diagram turns out to be periodic, it would\ncorrespond to a light-time effect in a binary system. In line with \nthe recent shortening in the pulsation period, the current value \nof the pulsation period is $20.396671 \\pm 0.000200$ days (after \nJD~2\\,452\\,300). \n\n\\begin{figure}\n\\includegraphics[height=55mm, angle=0]{szabados2013sbcepfig6.eps}\n\\caption{O$-$C diagram of RZ~Vel. 
The plot can be\napproximated by a parabola indicating a continuously\nincreasing period.}\n\\label{fig-rzvel-oc}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[height=44mm, angle=0]{szabados2013sbcepfig7.eps}\n\\caption{$\\Delta(O-C)$ diagram of RZ~Vel.}\n\\label{fig-rzvel-oc2}\n\\end{figure}\n\n\n\\paragraph*{Binarity of RZ~Vel}\n\\label{rzvel-bin}\n\n\n\\begin{figure}\n\\includegraphics[height=55mm, angle=0]{szabados2013sbcepfig8.eps}\n\\caption{RV phase curve of RZ~Vel. Data obtained\nbetween 1996 and 2013 are included in this plot. The meaning\nof various symbols is explained in the text.}\n\\label{fig-rzvel-vrad}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[height=42mm, angle=0]{szabados2013sbcepfig9.eps}\n\\caption{$\\gamma$-velocities of RZ~Velorum. The symbols for \nthe different data sets are the same as in \nFig.~\\ref{fig-rzvel-vrad}.}\n\\label{fig-rzvel-vgamma}\n\\end{figure}\n\n\\begin{table}\n\\caption{RV values of RZ Vel from the SSO spectra.\n(This is only a portion of the full version \navailable online as Supporting\nInformation.)}\n\\begin{tabular}{lr}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $v_{\\rm rad}$ \\ \\\\\n2\\,400\\,000 + &(km\\,s$^{-1}$)\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n53307.2698 &4.2\\\\\n53310.2504 &1.4\\\\\n53312.2073 &9.0\\\\\n53364.2062 &49.6\\\\\n53367.1823 &27.5\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-rzvel-data}\n\\end{table}\n\n\n\\begin{table}\n\\caption{CORALIE velocities of RZ Vel.\n(This is only a portion of the full version available \nonline as Supporting Information.)}\n\\begin{tabular}{lrc}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $v_{\\rm rad}$ \\ & $\\sigma$ \\\\\n2\\,400\\,000 + &(km\\,s$^{-1}$) & (km\\,s$^{-1}$)\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n55654.5528 & $-$3.08 & 0.02\\\\\n55656.6626 & 5.23 & 0.01\\\\ \n55657.6721 & 9.86 & 0.02\\\\ \n55659.6585 & 18.85 & 0.03\\\\ \n55662.5137 & 
31.50 & 0.01\\ 
\noalign{\vskip 0.2mm}
\hline
\end{tabular}
\label{tab-rzvel-coralie-data}
\end{table}

\begin{table}
\caption{$\gamma$-velocities of RZ~Vel.}
\begin{tabular}{lccl}
\hline
\noalign{\vskip 0.2mm}
Mid-JD & $v_{\gamma}$ & $\sigma$ & Data source \\
2\,400\,000+ & (km\,s$^{-1}$)& (km\,s$^{-1}$) & \\
\noalign{\vskip 0.2mm}
\hline
\noalign{\vskip 0.2mm}
34009 &25.5 &1.5& \citet{S55}\\
40328 &22.1 &1.5& \citet{LE68,LE80}\\
42186 &29.2 &1.0& \citet{CC85}\\
44186 &22.6 &1.0& \citet{CC85}\\
44736 &24.4 &1.0& \citet{CC85}\\
50317 &25.1 &0.2& \citet{B02}\\
53184 &24.0 &0.5& \citet{Netal06}\\
53444 &26.9 &0.6& Present paper\\
53783 &28.8 &1.0& Present paper\\
55709 &25.6 &0.1& Present paper\\
56038 &25.3 &0.1& Present paper\\
\noalign{\vskip 0.2mm}
\hline
\end{tabular}
\label{tab-rzvel-vgamma}
\end{table}

There are several data sets of RV observations available in
the literature for RZ~Vel: those published by \citet{S55},
\citet{LE68,LE80}, \citet{CC85}, \citet{B02}, and
\citet{Netal06}. Our individual RV data are listed in
Tables~\ref{tab-rzvel-data} and \ref{tab-rzvel-coralie-data}.

Based on these data, the RV phase curve has been constructed
using the 20.398532~d pulsation period appearing in
Equation~(\ref{rzvel-ephemeris}). In view of the complicated pattern
of the O$-$C diagram, the RV data have been folded by taking
into account the proper phase correction for the different data
series. The merged RV phase curve is plotted in
Fig.~\ref{fig-rzvel-vrad}. For the sake of clarity, RV data
obtained before JD\,2\,450\,000 have not been plotted here
because of the wider scatter of these early RV data, but the
$\gamma$-velocities were determined for each data set.
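A seasonal $\gamma$-velocity of this kind can be estimated as the zero-point of a low-order Fourier fit to the phased RV curve of the given subset. A minimal sketch (Python with NumPy; the fitting order and the synthetic test data are assumptions, as the exact fitting procedure is not specified here):

```python
import numpy as np

P = 20.398532   # pulsation period of RZ Vel, d

def gamma_velocity(jd, rv, order=4):
    """Zero-point (gamma-velocity) of a Fourier-series fit to phased RVs."""
    phase = 2.0 * np.pi * ((np.asarray(jd) % P) / P)
    cols = [np.ones_like(phase)]            # constant term = gamma-velocity
    for k in range(1, order + 1):
        cols += [np.sin(k * phase), np.cos(k * phase)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(rv), rcond=None)
    return coeffs[0]

# Synthetic check: a sinusoidal pulsation of 15 km/s half-amplitude
# superimposed on a gamma-velocity of 25 km/s should be recovered.
jd = np.linspace(0.0, 200.0, 120)
rv = 25.0 + 15.0 * np.sin(2.0 * np.pi * (jd % P) / P)
gamma = gamma_velocity(jd, rv)
```

Fitting each seasonal subset separately in this way turns a drifting zero-point of the phase curve into the $\gamma$-velocity time series listed in the table above.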
The
individual data series are denoted by different symbols:
filled squares denote data by \citet{B02}, empty squares those
by \citet{Netal06}, and our 2005, 2006, 2012 and 2013 data are
denoted by filled triangles, empty triangles, filled circles and
empty circles, respectively. The wide scatter in this merged RV
phase curve plotted in Fig.~\ref{fig-rzvel-vrad} is due to a variable
$\gamma$-velocity.

The $\gamma$-velocities determined from each data set (including
the earlier ones) are listed in Table~\ref{tab-rzvel-vgamma} and
are plotted in Fig.~\ref{fig-rzvel-vgamma}. The plot implies
that RZ~Vel is indeed an SB, as suspected by \citet{B02} based on
much poorer observational material (before JD~2\,450\,500).
An orbital period of about 5600--5700~d is compatible with the data
pattern in both Fig.~\ref{fig-rzvel-oc2} and Fig.~\ref{fig-rzvel-vgamma},
but the phase relation between the light-time effect fit to the
$\Delta (O-C)$ curve and the orbital RV variation phase curve obtained
with this formal period is not satisfactory.


\subsection{BG~Velorum}
\label{bgvel}

\paragraph*{Accurate value of the pulsation period}
\label{bgvel-period}

The brightness variability of BG~Vel (HD\,78801, $\langle V \rangle
= 7.69$\,mag) was revealed by Cannon \citep{P09}. Much later,
\citet{OL37} independently discovered its light variations, and
he also revealed the Cepheid nature and determined the pulsation
period based on photographic plates obtained at the Riverview
College Observatory. \citet{vH50} also observed this Cepheid
photographically in Johannesburg, but these early data are
unavailable; therefore, we mention these studies only for historical
reasons.

This Cepheid is a fundamental-mode pulsator. The O$-$C
differences of BG~Vel calculated for brightness maxima are
listed in Table~\ref{tab-bgvel-oc}.
These values have been obtained
by taking into account the constant and linear terms of the
following weighted parabolic fit:
\vspace{-1mm}
\begin{equation}
C = 2\,453\,031.4706 + 6.923\,843{\times}E + 2.58\times 10^{-8} E^2
\label{bgvel-ephemeris}
\end{equation}
\vspace{-3mm}
$\phantom{mmmmm}\pm0.0020\phantom{}\pm 0.000\,007 \phantom{ml}
\pm 0.27\times 10^{-8}$

\noindent The parabolic nature of the O$-$C diagram, i.e., the
continuous increase in the pulsation period, is clearly seen
in Fig.~\ref{fig-bgvel-oc}.
This parabolic trend corresponds to a continuous period increase
of $(5.16 \pm 0.54)\times 10^{-8}$ d\,cycle$^{-1}$, i.e.,
$\Delta P = 0.000272$ d/century. This tiny period increase has
also been caused by stellar evolution, as in the case of RZ~Vel.

The fluctuations around the fitted parabola in
Fig.~\ref{fig-bgvel-oc} do not show any definite pattern:
see the $\Delta(O-C)$ diagram in Fig.~\ref{fig-bgvel-oc2}.


\begin{table}
\caption{O$-$C values of BG~Vel (description of the
columns is given in Sect.~\ref{lrtra-period}).}
\begin{tabular}{l@{\hskip2mm}r@{\hskip2mm}r@{\hskip2mm}c@{\hskip2mm}l}
\hline
\noalign{\vskip 0.2mm}
JD$_{\odot}$ & $E\ $ & O$-$C & $W$ & Data source\\
2\,400\,000 + &&&\\
\noalign{\vskip 0.2mm}
\hline
\noalign{\vskip 0.2mm}
34856.5526 & $-$2625 & 0.1699 & 3 & \citet{Wetal58}\\
35237.3813 & $-$2570 & 0.1872 & 3 & \citet{I61}\\
40748.6592 & $-$1774 & 0.0861 & 3 & \citet{P76}\\
42853.4433 & $-$1470 & 0.0219 & 3 & \citet{D77}\\
44300.5426 & $-$1261 & 0.0380 & 3 & \citet{B08}\\
48136.3167 & $-$707 & 0.0031 & 3 & {\it Hipparcos} \citep{ESA97}\\
48627.9239 & $-$636 & 0.0174 & 3 & {\it Hipparcos} \citep{ESA97}\\
50379.6329 & $-$383 & $-$0.0058 & 3 & \citet{B08}\\
50573.4987 & $-$355 & $-$0.0076 & 3 & \citet{B08}\\
50905.8549 & $-$307 & 0.0041 & 3 & \citet{B08}\\
51265.9127 & $-$255 & 0.0221 & 3 & \citet{B08}\\
51646.7345 & $-$200 & 0.0325 & 3 & \citet{B08}\\
51937.5210 & $-$158 & 0.0176 & 3 & ASAS \citep{P02}\\
51958.2712 & $-$155 & $-$0.0038 & 3 & \citet{B08}\\
52359.8640 & $-$97 & 0.0062 & 3 & ASAS \citep{P02}\\
52359.8778 & $-$97 & 0.0200 & 3 & \citet{B08}\\
52650.6575 & $-$55 & $-$0.0017 & 3 & \citet{B08}\\
52726.8212 & $-$44 & $-$0.0003 & 3 & ASAS \citep{P02}\\
53003.7916 & $-$4 & 0.0164 & 3 & \citet{B08}\\
53031.4758 & 0 & 0.0052 & 3 & ASAS \citep{P02}\\
53336.1201 & 44 & 0.0004 & 1 & {\it INTEGRAL} OMC\\
53460.7390 & 62 & $-$0.0099 & 3 & ASAS \citep{P02}\\
53779.2202 & 108 & $-$0.0254 & 3 & ASAS \citep{P02}\\
54180.8337 & 166 & 0.0052 & 3 & ASAS \citep{P02}\\
54540.8499 & 218 & $-$0.0185 & 3 & ASAS \citep{P02}\\
54838.5810 & 261 & $-$0.0126 & 3 & ASAS \citep{P02}\\
55143.2425 & 305 & $-$0.0002 & 2 & ASAS \citep{P02}\\
\noalign{\vskip 0.2mm}
\hline
\end{tabular}
\label{tab-bgvel-oc}
\end{table}

\paragraph*{Binarity of BG~Vel}
\label{bgvel-bin}


\begin{figure}
\includegraphics[height=44mm, angle=0]{szabados2013sbcepfig10.eps}
\caption{O$-$C diagram of BG~Vel. The plot can be
approximated by a parabola indicating a continuously
increasing pulsation period.}
\label{fig-bgvel-oc}
\end{figure}

\begin{figure}
\includegraphics[height=44mm, angle=0]{szabados2013sbcepfig11.eps}
\caption{$\Delta(O-C)$ diagram of BG~Vel.}
\label{fig-bgvel-oc2}
\end{figure}


\begin{figure}
\includegraphics[height=49mm, angle=0]{szabados2013sbcepfig12.eps}
\caption{Merged RV phase curve of BG~Vel. There is an obvious
shift between the $\gamma$-velocities valid for the epochs
of our data obtained in 2005--2006 and 2012--2013 (empty and
filled circles, respectively).
The other symbols are explained\nin the text.}\n\\label{fig-bgvel-vrad}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[height=42mm, angle=0]{szabados2013sbcepfig13.eps}\n\\caption{$\\gamma$-velocities of BG~Vel. The symbols for \nthe different data sets are the same as in \nFig.~\\ref{fig-bgvel-vrad}.}\n\\label{fig-bgvel-vgamma}\n\\end{figure}\n\n\\begin{table}\n\\caption{RV values of BG Vel from the SSO spectra.\n(This is only a portion of the full version available online \nas Supporting Information.)}\n\\begin{tabular}{lr}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $v_{\\rm rad}$ \\ \\\\\n2\\,400\\,000 + &(km\\,s$^{-1}$)\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n53312.2372 &17.3\\\\\n53364.2219 &$-$0.2\\\\\n53367.1992 &20.5\\\\\n53451.0000 &20.0\\\\\n53452.0021 &23.8\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-bgvel-data}\n\\end{table}\n\n\\begin{table}\n\\caption{CORALIE velocities of BG Vel.\n(This is only a portion of the full version \navailable online as Supporting Information.)}\n\\begin{tabular}{lrc}\n\\hline\n\\noalign{\\vskip 0.2mm}\nJD$_{\\odot}$ & $v_{\\rm rad}$ \\ & $\\sigma$ \\\\\n2\\,400\\,000 + &(km\\,s$^{-1}$) & (km\\,s$^{-1}$)\\\\\n\\noalign{\\vskip 0.2mm}\n\\hline\n\\noalign{\\vskip 0.2mm}\n55937.7555 & 24.13 & 0.02\\\\\n55938.6241 & 7.77 & 0.02\\\\ \n55939.6522 & $-$1.25 & 0.01\\\\ \n55941.6474 & 7.99 & 0.10\\\\ \n55942.6917 & 11.78 & 0.03\\\\ \n\\noalign{\\vskip 0.2mm}\n\\hline\n\\end{tabular}\n\\label{tab-bgvel-coralie-data}\n\\end{table}\n\nThere are earlier RV data of this Cepheid obtained by \\citet{S55} \nand \\citet{LE80}. Variability in the $\\gamma$-velocity is seen \nin the merged phase diagram of all RV data of BG~Velorum \nplotted in Fig.~\\ref{fig-bgvel-vrad}. 
In this diagram, our 2005--2006
data (listed in Table~\ref{tab-bgvel-data}) are represented by
empty circles, while the 2012--2013 data (listed in
Table~\ref{tab-bgvel-coralie-data}) are denoted by filled circles;
the triangles represent Stibbs' data, and the $\times$ symbols refer to
Lloyd Evans' data. Our RV data have been folded with the period given
in the ephemeris Equation~(\ref{bgvel-ephemeris}), omitting the quadratic term.
Data obtained by Stibbs and Lloyd Evans have been phased with the
same period, but a proper correction has been applied to allow
for the phase shift due to the parabolic O$-$C graph.

The $\gamma$-velocities determined from the individual data sets
are listed in Table~\ref{tab-bgvel-vgamma} and plotted in
Fig.~\ref{fig-bgvel-vgamma}. Since no annual shift is seen
in the $\gamma$-velocities between two consecutive years (2005--2006
and 2012--2013), the orbital period cannot be short; it probably
exceeds a thousand days.

Similarly to the case of LR~TrA, BG~Vel is also characterized by an
excessive value of the ratio of RV and photometric amplitudes,
indicating the possible presence of a companion
(see Fig.~\ref{fig-ampratio}).

\begin{table}
\caption{$\gamma$-velocities of BG~Vel.}
\begin{tabular}{lccl}
\hline
\noalign{\vskip 0.2mm}
Mid-JD & $v_{\gamma}$ & $\sigma$ & Data source \\
2\,400\,000+ & (km\,s$^{-1}$)& (km\,s$^{-1}$) & \\
\noalign{\vskip 0.2mm}
\hline
\noalign{\vskip 0.2mm}
34096 &11.4 &1.5& \citet{S55}\\
40545 & 8.4 &1.5& \citet{LE80}\\
53572 &12.6 &0.6& Present paper\\
56043 &10.3 &0.1& Present paper\\
\noalign{\vskip 0.2mm}
\hline
\end{tabular}
\label{tab-bgvel-vgamma}
\end{table}


\section{Conclusions}
\label{concl}

We pointed out that three bright southern Galactic Cepheids,
LR~TrA, RZ~Vel and BG~Vel,
have variable $\gamma$-velocities, implying their membership
in SB systems.
RV values of other target Cepheids observed
with the same equipment in 2005--2006 and 2012 testify that
this variability in the $\gamma$-velocity is not of instrumental
origin, nor an artefact caused by the analysis.

The available RV data are insufficient to determine the orbital
period and other elements of the orbits. However, some inferences
can be made from the temporal variations of the $\gamma$-velocity.
An orbital period of 5600--5700~d of the RZ~Vel system is
compatible with the data pattern. In the case of BG~Vel, a short
orbital periodicity can be ruled out. For LR~TrA, even the range
of the possible orbital periods remains uncertain.

The value of the orbital period for SB systems
involving a Cepheid component is often unknown: according to the
online database \citep{Sz03a}, the orbital period has been
determined for only about 20\% of the known SB Cepheids. The majority
of the known orbital periods exceed a thousand days.

A companion star may have various effects on the observable
photometric properties of the Cepheid component. Various pieces
of evidence of duplicity based on photometric criteria are
discussed by \citet{Sz03b} and \citet{KSz09}. As to our
targets, there is no obvious sign of a companion from optical
multicolour photometry. This indicates that the companion star
cannot be much hotter than any of the Cepheids discussed here.
There is, however, a phenomenological parameter, viz. the ratio
of RV to photometric amplitudes \citep{KSz09}, whose excessive
value is a further hint at the probable existence of a
companion for both LR~TrA and BG~Vel (see Fig.~\ref{fig-ampratio}).
Moreover, the {\it IUE} spectra of bright Cepheids
analysed by \citet{E92} gave a constraint on the temperature
of a companion to remain undetected in the ultraviolet spectra:
in the case of RZ~Vel, the spectral type of the companion cannot
be earlier than A7, while for BG~Vel this limiting spectral type
is A0.
Further spectroscopic observations are necessary to
characterize these newly detected SB systems.

\begin{figure}
\includegraphics[height=54mm, angle=0]{szabados2013sbcepfig14.eps}
\caption{The slightly excessive value of the $A_{V_{\rm RAD}}/A_B$
amplitude ratio of LR~TrA and BG~Vel (large circles) with respect
to the average value characteristic of the given pulsation period
is an independent indication of the presence of a companion star.
This is a modified version of fig.~4f of \citet{KSz09}. The open
symbols in the original figure correspond to known binaries and
the filled symbols to Cepheids without known binarity. For the
meaning of the various symbols, see \citet{KSz09}.
}
\label{fig-ampratio}
\end{figure}


Our findings confirm the previous statement by \citet{Sz03a}
about the high percentage of binaries among classical Cepheids
and the observational selection effect hindering the discovery
of new cases (see also Fig.~\ref{fig-comparison}).

Regular monitoring of the RVs of a large
number of Cepheids will be instrumental in finding
more SBs among Cepheids. RV data to be obtained with the
{\it Gaia} astrometric space probe (expected launch: 2013
September) will certainly result in revealing new SBs among
Cepheids brighter than the 13--14th magnitude \citep{Eyetal12}.
In this manner, the `missing' SBs among Cepheids inferred
from Fig.~\ref{fig-comparison} can be successfully revealed
within a few years.

\section*{Acknowledgments} 

This project has been supported by the
ESTEC Contract No.\,4000106398/12/NL/KML, the Hungarian OTKA
Grants K76816, K83790, K104607, and MB08C 81013, as well as the
European Community's Seventh Framework Program (FP7/2007--2013)
under grant agreement no.\,269194, and the ``Lend\"ulet-2009''
Young Researchers Program of the Hungarian Academy of Sciences.
AD was supported by the Hungarian E\"otv\"os Fellowship.
\nAD has also been supported by a J\\'anos Bolyai Research Scholarship \nof the Hungarian Academy of Sciences. AD is very thankful \nto the staff at The Lodge in the Siding Spring Observatory \nfor their hospitality and very nice food, making the \ntime spent there lovely and special.\nPart of the research leading to these results has received \nfunding from the European Research Council under the European \nCommunity's Seventh Framework Programme (FP7\/2007--2013)\/ERC grant \nagreement no.\\,227224 (PROSPERITY).\nThe {\\it INTEGRAL\\\/} photometric data, pre-processed by \nISDC, have been retrieved from the OMC Archive at CAB (INTA-CSIC). \nWe are indebted to Stanley Walker for sending us some\nunpublished photoelectric observational data. Our thanks are \nalso due to the referee and Dr. M\\'aria Kun for their critical \nremarks leading to a considerable improvement in the presentation \nof the results.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}