diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbmfm" "b/data_all_eng_slimpj/shuffled/split2/finalzzbmfm"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbmfm"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction}\nThe standard stochastic multi-armed bandit framework captures the exploration-exploitation trade-off in sequential decision-making problems under partial feedback constraints. The objective is to actively identify the best member, or members, of a community comprising stochastic sources, termed arms, while suffering the relative loss of non-ideal choices. Often, the arms yield rewards that belong to a known probability distribution with hidden parameters, and upon choosing an arm, the decision maker observes the reward from that arm directly. For ergodic reward distributions, \\cite{gittins1979bandit} shows that the dynamic programming solution takes the form of an index policy\\footnote{The original formulation of \\cite{gittins1979bandit} was in the Bayesian framework.}, called dynamic allocation indices, which motivated the rich body of work that led to the arm-selection rules that eventually achieved the asymptotic regret lower bounds of \\cite{lai1985asymptotically}. Alternatively, when the probability law that governs the rewards is defined conditionally with respect to a hidden state that represents the ``changing world'', the restless bandit framework of \\cite{whittle1988restless} leads to arm-selection policies that are often computationally demanding. A key challenge that we address here is to develop index policies that identify the best source in an environment where the underlying state of the world changes erratically and, hence, the observations are generated from different probability distributions at each point in time. \n\nSpecifically, we consider the case where each arm represents a stochastic expert providing opinions on changing tasks; thus, upon consulting an expert, the decision maker observes an opinion rather than a direct reward. Stochastic experts are sources of subjective information that might fail but do not purposefully deceive, as discussed in \\cite{cesa2006prediction}. Often, expert suggestions, or opinions, are used with the aid of side information: feedback from past states of the world is used in boosting \\cite{schapire2012boosting}, while models of expert stochasticity, or direct information about expert reliability, or competence, are often used in the Bayesian framework \\cite{poor2013introduction}. In the absence of \\textit{any} side information, the decision maker operates in a regime that can be termed \\textit{unsupervised}, relying solely on the information in the opinions. Unsupervised opinion aggregation methods such as expectation maximization (EM) \\cite{welinder2010online}, belief propagation (BP) \\cite{karger2011iterative}, and the spectral meta-learner (SML) \\cite{parisi2014ranking} exhibit an interesting phenomenon: the reliabilities of the experts are inferred as a by-product of the underlying optimization for estimating past states based on a block of opinions. On the other hand, joint sampling and consultation of experts without supervision, or blind exploration and exploitation (BEE) as termed here, requires instantaneously available statistics that would allow reliable inference of expert reliabilities at any and all states of the world. 
\n\nWe propose a method that relies solely on opinions to infer the competence of an expert by re-defining the notion of competence as the probability of agreeing with peers rather than being objectively correct. The proposed method not only allows empirical inference of competence without any supervision but also enables the use of index policies to efficiently address the exploration-exploitation dilemma when the underlying task changes at random. We show that standard, or supervised, exploration-exploitation (SEE) strategies extend to the BEE problem by consulting multiple experts for each task, which is equivalent to sampling multiple arms in the standard framework. Specifically, we consider index rules that rely on posterior sampling \\cite{thompson1933likelihood}; upper-confidence bounds such as UCB1 \\cite{auer2002finite} and KL-UCB \\cite{garivier2011klucb}; minimum empirical Kullback-Leibler divergence, in particular IMED \\cite{honda2015imed}; and the minimax rule MOSS of \\cite{audibert2009minimax}. We investigate two operational regimes: First,\na fixed number of experts is consulted for each task and the opinion of the expert who is believed to be the most reliable at that time is chosen. Second, upon consulting a group of experts, a decision is formed by aggregating their opinions without further supervision. We empirically compare the performance of different BEE index rules and demonstrate that the exploration-exploitation-based choice of experts leads to results comparable to those of the original algorithms in the unsupervised framework. \n\nThe organization of this paper is as follows: We summarize the notation used in this paper, provide a background on stochastic experts, and define the BEE problem formally in Section \\ref{sec:probdef}. We discuss the motivation, formal definition, and properties of our technique for unsupervised reliability inference in Section \\ref{sec:pseudocomp}. Then, we discuss the fundamental properties of the BEE index rules in Section \\ref{sec:bee}. The experiments comparing different BEE algorithms, as well as comparing them to their SEE counterparts, are in Section \\ref{sec:experiments}. The proofs are deferred to the appendix. \n\n\\section{Notation, Background, and Problem Formulation}\n\\label{sec:probdef}\nWe begin with a brief overview of the notation used in this paper. Then, we formally define the key concepts regarding stochastic experts. We conclude this section by defining the BEE problem. \n\\subsection{Notation}\nA probability space is a triplet $\\yay{\\Omega, \\mathscr{F}, \\mathbb{P}}$, where $\\Omega$ is the event space, $\\mathscr{F}$ is the sigma-field defined on $\\Omega$, and $\\mathbb{P}$ is the probability measure. Random variables are denoted by capital letters, with the corresponding samples denoted by lowercase letters: $(X,x)$. A random process is an indexed collection of random variables: $\\myset{Y(t): t\\in \\mathbb{T}}$, where $\\mathbb{T}$ is the index set. Independent random variables $\\yay{X_1, X_2}$ are denoted by $X_1 \\perp X_2$, and conditionally independent random variables $\\yay{X_1, X_2}$ conditioned on $Y$ are denoted by $X_1 - Y - X_2$. The expectation, conditional expectation, and conditional probability operators are denoted by $\\expt{\\cdot}$, $\\condexpt{\\cdot}{\\cdot}$, and $\\condprob{\\cdot}{\\cdot}$, respectively. The indicator function is denoted by $\\ind{\\cdot}$, where the domain is to be understood from context. 
We use $[T] \\triangleq \\myset{1,\\cdots,T}$ to denote the set of positive integers up to a finite limit $T<\\infty$. All logarithms $\\yay{\\log}$ are taken with respect to the natural base. We use big-$O$ notation when necessary. \n\\subsection{Background}\nConceptually, stochastic experts are honest-but-fallible computational entities that do not deceive the decision maker deliberately. Here, we consider experts that do not collaborate while generating their opinions; \\cite{cesa2006prediction} provides a detailed discussion. The goal of this paper is to propose techniques that identify the best stochastic experts while dynamically consulting others on varying tasks. In that context, consulting an expert on a task is equivalent to pulling an arm in the standard multi-armed bandit framework. The true reward, however, remains hidden. \n\nFormally, let us begin with a random process $\\myset{Y(t): t\\in [T]}$ that represents binary states of the world, or tasks with binary labels: $Y(t) \\in \\myset{-1,1}$, $\\forall t\\in[T]$. We assume that nature generates tasks independently:\n\\begin{equation}\n\t\\label{independent_tasks}\n\tY\\yay{t_1} \\perp Y\\yay{t_2},~\\forall t_1\\neq t_2 \\in [T].\n\\end{equation}\nFurthermore, let the random process $Y(t)$ that governs the evolution of tasks maximize the uncertainty: \n\\begin{equation}\n\t\\label{unif_distr}\n\t\\prob{Y(t) =1} = \\prob{Y(t)=-1} = \\nicefrac{1}{2},~\\forall t\\in [T].\n\\end{equation} \nIt is worth noting that any bias from non-uniform task generation can either be estimated directly from labeled data or inferred without supervision via methods such as \\cite{jaffe2016unsupervised}. Furthermore, while the independence assumption appears to be restrictive, it is common in stochastic multi-armed bandit formulations \\cite{bubeck2012regret}. \n\nFormal characterization of stochastic experts involves the reliability of their opinions and their statistical dependence on one another. The probability with which the opinion of an expert identifies the true state of the world correctly determines the reliability, or competence, of that expert: \n\\begin{equation}\n\t\\label{static_competence}\n\tp_i \\triangleq \\prob{X_i(t) =Y(t)},~\\forall t\\in[T].\n\\end{equation} \nHere, the reliability of an expert does not depend on the underlying state of the world\\footnote{A notable exception to this model is the ``two-coin'' model from \\cite{dawid1979maximum}, where conditionally static competences are discussed.}. \nWe further assume that the experts generate their opinions $\\myset{X_i(t): i\\in [M]}$ conditionally independently of one another given the task, for every task $t\\in[T]$. Formally: \n\\begin{equation}\n\t\\label{independent_generation}\n\tX_i(t) - Y(t) - X_j(t),~\\forall i\\neq j\\in [M],~\\forall t\\in[T].\n\\end{equation} \nConceptually, this is a sensible assumption: for meaningful inference, two different opinions on the same task should never be statistically independent, and here they are coupled only through the underlying task. Furthermore, experts having conditionally independent opinions is the analogue of the independence of rewards in the standard framework.\n\nGiven the probability law defined by eq.~\\eqref{independent_tasks}-\\eqref{independent_generation}, we can formally discuss why SEE algorithms require a toolset to address the impact of the underlying uncertainty. 
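\n\nBefore doing so, this generative model can be visualized with a short simulation. The following sketch is ours and purely illustrative; the competence values, the number of experts, and the variable names are assumptions rather than part of the formal development.\n\\begin{verbatim}
# Minimal simulation sketch (illustrative only) of the stochastic-expert
# model: i.i.d. uniform binary tasks, static competences, and opinions
# that are conditionally independent given the task.
import numpy as np

rng = np.random.default_rng(0)
T, M = 100000, 4                      # number of tasks and experts
p = np.array([0.9, 0.75, 0.6, 0.55])  # assumed hidden competences p_i

# Tasks Y(t) are i.i.d. uniform on {-1, +1}.
Y = rng.choice([-1, 1], size=T)

# Expert i reports Y(t) with probability p_i and -Y(t) otherwise,
# independently across experts given the task.
correct = rng.random((M, T)) < p[:, None]
X = np.where(correct, Y, -Y)

# Time-average opinion: close to 0 for every expert, regardless of p_i.
print(X.mean(axis=1))

# Empirical accuracy, which needs the hidden labels, recovers p_i.
print((X == Y).mean(axis=1))
\\end{verbatim}
In such a simulation, the time-average of the raw opinions is uninformative about the competences, which is the difficulty we now make precise. 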
Observe that\n\\begin{equation}\n\t\\label{motivation}\n\t\\lim\\limits_{t\\rightarrow \\infty} \\frac{1}{t} \\sum_{\\tau =1}^{t} X_i(\\tau) = 0,~\\forall p_i\\in\\brac{0,1},\n\\end{equation} \nwhich follows from the law of total probability; see Appendix \\ref{app:average_opinion}. Conceptually, eq.~\\eqref{motivation} indicates that the average opinion does not reflect the competence of an expert, which is the true reward. This poses a challenge for joint exploration and exploitation in the context of sequentially consulting stochastic experts, which we formally define next. \n\n\\subsection{Problem Definition}\nThe first objective of the BEE problem is to identify the best expert in a population while actively consulting members of that group on tasks that change from one consultation to another. The following notion of regret, written here in normalized form, formally captures this phenomenon: \n\\begin{equation}\n\t\\label{real_regret}\n\tR_T = \\frac{1}{T}\\sum_{t=1}^{T} \\ind{X^{*}(t)=Y(t)} - \\frac{1}{T}\\sum_{t=1}^{T} \\ind{X_{I_t}(t)=Y(t)}.\n\\end{equation}\nHere, $X^{*}$ is the opinion of the most competent expert, $X^{*} = X_{i^{*}}$ with $i^{*} = \\argmax_{i\\in[M]} p_i$, and $I_t\\in[M]$, $\\forall t\\in[T]$, is the expert chosen at time $t$. Observe that the regret, as defined in eq.~\\eqref{real_regret}, depends on the sample path of opinions and is, hence, difficult to analyze rigorously. Nonetheless, it simplifies asymptotically:\n\\begin{equation}\n\t\\label{real_regret_asymptotic}\n\t\\lim\\limits_{T\\rightarrow \\infty}R_T = \\max_{i\\in[M]} p_i - \\lim\\limits_{T\\rightarrow \\infty}\\frac{1}{T}\\sum_{t=1}^{T} \\ind{X_{I_t}(t)=Y(t)}.\n\\end{equation}\nThe first term is a direct consequence of the ergodicity of the process $\\ind{X^{*}(t)=Y(t)}$, which follows directly from eq.~\\eqref{independent_tasks}-\\eqref{static_competence}. Conceptually, this amounts to the fact that one can measure the true reliability of an expert given sufficiently many labeled tasks, as long as the reliability of the expert does not change across tasks, as is the case here. \n\nMotivated by similar asymptotic behaviors, a notion of \\textit{pseudo regret} often arises in the context of stochastic bandits; see, for instance, \\cite{bubeck2012regret}. In the context of stochastic experts, the pseudo regret is defined as follows: \n\\begin{equation}\n\t\\label{pseudo_regret}\n\t\\tilde{R}_T = \\max_{i\\in[M]} p_i - \\frac{1}{T}\\expt{\\sum_{t=1}^{T} \\ind{X_{I_t}(t)=Y(t)}}.\n\\end{equation}\nAnother notion of pseudo regret provides a reliable metric for the performance of BEE rules that aggregate opinions after consulting experts. Let a $m