\\section{Introduction}\nThe first widely used library to mine cryptocurrency in the browser arguably emerged on the 14th of September, 2017 with the release of Coinhive's JavaScript miner \\cite{coinhive}. Since then browser-based mining has been gaining publicity and media attention \\cite{coinhive_google_trends}, mostly due to unethical attempts to profit from unsuspecting website visitors in hacked popular media \\cite{cbs_showtime_hack,politifakt_hack} and more recently, government websites \\cite{india_hack}. In a short span of time unethical use of browser mining in the wild has harmed its reputation, resulting in many major anti-virus providers regarding it as a malicious activity in need of restriction \\cite{malwarebytes_blocks_mining,kaspersky_blocks_mining}. However, due to the accessibility and privacy-protective nature of browser mining \\cite{first_look_into_cryptojacking}, it provides a mechanism to confidentially monetise practically any digital media stream or web page, which many web services may find appealing. For example, ``The Pirate Bay'' have previously experimented with browser mining in an effort to replace advertisements \\cite{pirate_bay_mining}.\n\nCurrently, display advertisement remains the principal method of monetisation of online content \\cite{ads_expenditure,us_ads_expenditure}.\nHowever, whilst the cost of advertisement has risen by an average of 12\\% over the last two years, advertisers find it significantly less valuable than expected \\cite{adobe_digital_insights}. 
Furthermore, about 11\\% of Web users are using ad-blocking software and its use is predicted to grow further \\cite{adblock_statistics}, whilst subscription pricing models are increasingly being favoured by high-profile digital content providers such as Spotify and Netflix. This raises the question of whether the dominance of advertisement could potentially be challenged by alternative means of monetisation offering more flexible pay-as-you-go access to digital media and websites.\n\nEthical practice is key to legitimate monetisation, and browser mining could not be considered a legitimate alternative to advertisement without established mechanisms of collecting informed user consent \\cite{first_look_into_cryptojacking}. In an attempt to bring legitimacy to browser mining, Coinhive developed the ``AuthedMine'' JavaScript miner, which requires explicit user consent to operate. This certainly indicates progress towards legitimisation of browser mining but leaves open questions: whilst a growing number of internet users reject digital ads via ad-blockers, it is unclear if they would instead consent to browser mining or if they would fully comprehend what they are consenting to \\cite{first_look_into_cryptojacking}.\n\nTherefore, in light of novel methods for cryptocurrency mining becoming available to the public, we conducted the first user study which evaluated the feasibility of, and instinctive user choice between, ads and browser mining on both desktop and mobile clients, and surveyed the participants to gain further insight into their decision rationale. In this paper, we present the results of this study to discuss limitations and the future of browser mining.\n\n\n\n\\section{Related work}\nThe study by Eskandari et al. \\cite{first_look_into_cryptojacking} was among the first to explore the browser mining market and estimate mining profitability on 11K parked websites over a three-month period. 
It also collected a dataset of 33K browser mining websites from censys.io BigQuery to examine CPU usage and found most miners utilised around 25\\% of the user's CPU, which they argued could be considered fair use. However, there were a few reports of malicious use which utilised 100\\% of the CPU. The authors also provided the first framework for evaluating the ethical aspects of browser mining, suggesting that current methods of informed consent collection could be inadequate. Their conclusion was that the ethics of browser mining are not yet fully understood and require further discussion. Papadopoulos et al. \\cite{truth_about_cryptomining} extended the evaluation of browser mining feasibility by comparing profitability from online ads and browser mining on a combined dataset of 200K ad- and mining-enabled websites and found the former to be 5.5x more profitable; however, their study also indicated that browser mining could be more profitable if the browsing session exceeds 5.3 minutes. In addition, they compared CPU utilisation and estimated user costs for both monetisation options. Their findings suggest the median CPU usage when mining was 59x higher. Considering the results, Papadopoulos et al. proposed a new model of monetisation which would allow users to choose between ads and browser mining, whilst informing them about the implicit costs of both options.\n\\section{Methodology}\nTo conduct an anonymised user study, we built an experimental online blog, https:\/\/www.hippocrypto.me, which utilised Coinhive to mine the privacy-focused cryptocurrency \"Monero\" \\cite{monero_wp}. The website was distributed online to form a convenience sample of 107 volunteers aged 18--50 (71.03\\% male, 27.1\\% female, and 1.87\\% other). We restricted access to the website so that participants could view its content only after advertisement or browser mining was explicitly selected as a monetisation method (participants could make their choice only once). 
To obtain consent, participants were presented with an opt-in screen, which largely resembled the message currently used by Coinhive \\cite{coinhive_authedmine}. Afterwards, they were provided with a survey, which asked for demographic information (age, gender, level of education), familiarity with cryptocurrency, and their view towards browser mining. Participants were also asked to indicate the reasons for their monetisation choice and, assuming they selected advertisement, whether they would be willing to select browser mining next time if they could keep 50\\% of the mined cryptocurrency.\n\nIn addition, we collected website usage data during multiple sessions to evaluate behavioural trends. Session lengths were measured as the time between accessing the content and leaving the website. To ensure data anonymity, each individual received an RFC4122-compliant \\cite{id_rfc4122} unique ID stored in local storage on each browser, which was used to identify participants and aggregate data from multiple sessions. We also parsed browser User-Agent strings to estimate whether a mobile or desktop device was used. Furthermore, Coinhive's miner can be adjusted to run at varying degrees of power, referred to as the \\textit{throttle}. To evaluate user behaviour at different levels of CPU usage, at the start of each session we applied a degree of A\/B testing and set the miner to randomly initiate at either 10\\%, 50\\% or 80\\% throttle (with 10\\% being the minimal throttle allowed). Participants were able to continuously monitor the active throttle value via an integrated dashboard in the side menu and were instructed how to adjust it at any point.\n\nWe estimated revenues from both monetisation methods using the variables listed in Table \\ref{table: ads_revenue_vars}. For estimation purposes, the following assumptions were made. 
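The two revenue estimates can be sketched in a few lines. This is illustrative only: \\(CPM_{avg}=\\$2.80\\) is the value used in this study, but the default network fee, pool share, difficulty and exchange-rate arguments below are hypothetical placeholders rather than the figures actually recorded on 17th April 2018.

```python
def ads_revenue(impressions, cpm_avg=2.80, network_fee=0.32):
    """Net publisher ad revenue under the CPM-only model.

    cpm_avg: assumed average CPM in USD; network_fee: assumed share
    retained by the ad network (a placeholder, not the actual AdSense fee).
    """
    p_rpm = cpm_avg * (1.0 - network_fee)  # net income per 1000 impressions
    return impressions / 1000.0 * p_rpm


def mining_revenue(hashes, difficulty, block_reward, pool_share, xmr_usd):
    """Mining revenue in USD: each submitted hash finds a block with
    probability 1/difficulty; the pool fee is deducted via pool_share."""
    coins = hashes / difficulty * block_reward * pool_share
    return coins * xmr_usd
```

For example, `ads_revenue(1000)` gives the assumed net income for one thousand impressions, and `mining_revenue` converts a session's hash count into an expected US-dollar payout.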
Firstly, to estimate the advertisement revenue, this study assumed a fixed price for two differently-sized banner advertisements, which would be variously priced in real-world conditions \\cite{ads_pricing}. In addition, advertisement networks generally utilise several different pricing metrics \\cite{ads_pricing_models}; however, in this study only the cost-per-thousand impressions (CPM) was used to estimate ad revenue. In practice, users would be able to interact with advertisements, potentially adding to the revenue via clicks or actions. CPM is generally dynamic; therefore, the value used in our calculations (\\$2.80) is an estimated average derived from multiple reports \\cite{cpm_average}, referred to here as \\(CPM_{avg}\\). Since Google AdSense is currently the dominant advertising network \\cite{adsense_share_img}, when estimating advertisement revenue \\(R_{ads}\\) we used its fee \\cite{adsense_fee} to calculate the net CPM income for the publisher \\(P_{rpm}\\). The total number of advertisement impressions was computed by keeping a counter and incrementing it by the number of adverts on each page whenever on-site navigation occurred. For browser mining, Coinhive's API was used to obtain the total number of hashes submitted in each session, which was then divided by the session length to infer the average hash rate. The conversion rate, mining difficulty and block reward were recorded on 17th April 2018. Finally, the collected data was forwarded to an external analytics server where it could be aggregated and visualised.\n\n\n\n\\section{Results}\n\\import{sections\/results\/}{revenue.tex}\n\\import{sections\/results\/}{consent.tex}\n\\import{sections\/results\/}{user_experience.tex}\n\\section{Discussion}\nThe results of this study concur with the recent research by Papadopoulos et al. 
\\cite{truth_about_cryptomining} in that there is a significant gap between revenues from advertisement and browser mining, which depends on the device's computation power and the amount of time spent mining. Based on our estimates, we conclude that matching advertisement revenue by solely increasing the hash rate is practically impossible with current consumer devices. Additionally, the study data points to several factors which we believe currently limit browser mining. Firstly, a lack of exposure to legitimate browser mining practices has naturally put most participants in a neutral position and swayed them towards what they perceived as a risk-averse choice. Recent discreditation of browser mining could deter further adoption, thus limiting exposure to legitimate browser mining practices and maintaining the status quo. Secondly, given the increasing usage of smartphones, illustrated by the fact that mobile is expected to overtake desktop devices in advertisement expenditure \\cite{ads_expenditure}, the clear avoidance of browser mining amongst mobile users could be discouraging to publishers. Furthermore, since the scalability of browser mining is significantly restricted, as noted by Papadopoulos et al., the rivalry for computational power would be fiercer, thus reducing potential profits for competing publishers. In addition, attracting large, reputable publishers may require an independent governing body to set standards ensuring safe and fair use.\n\nHowever, if the observed increase in the consumption of online content has a causal relationship with browser mining and is not simply a novelty effect, browser mining could potentially have some interesting use-cases. For example, it could constitute a pay-as-you-go monetisation model for web applications or media streaming services. 
Our results indicate that the revenue from browser mining could still be lower than that from advertisement assuming the expected average session length \\cite{avg_session_video_streaming,how_long_ppl_stay_on_page}; however, unobstructed access to immediate monetisation along with enhanced privacy could be sufficient benefits for some publishers to embrace browser mining regardless. Perhaps if the miner throttle remained within adequate bounds (which in this study were between 30\\% and 77\\%) and the number of concurrent browser mining instances was limited to one, a suitable balance between user experience and revenue opportunity could be established.\n\nIt is also worth noting that during our experiment one of the largest producers of application-specific integrated circuit (ASIC) miners, \"Bitmain\", announced the release of the first ASIC-powered miner \\cite{bitmain_asic_monero} designed to mine the CryptoNight hashing algorithm used by Monero. ASIC miners can achieve drastically higher hash rates than conventional CPU and GPU devices, since they are specialised to mine specific hashing algorithms. The use of ASIC miners in other cryptocurrencies, such as Bitcoin, has been openly criticised \\cite{asic_critique_bitcoin} due to their potential to centralise mining power in the hands of a select few large mining pools. The CryptoNight algorithm has long been thought to be \"ASIC-resistant\" by requiring extensive amounts of fast memory to be available on the hardware, thus making ASIC devices economically infeasible to produce \\cite{why_monero_is_asic_resistant}. To combat Bitmain's potential ASIC threat, on April 6th, 2018 Monero's development team performed an emergency update to its protocol (a practice known as a \"hard fork\"), which resulted in an almost 80\\% loss of hash power and mining difficulty within its network (fig. \\ref{fig:monero_hashrate_change}). 
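The economic effect of such a drop is easy to quantify, since the expected payout per hash scales inversely with network difficulty. A minimal sketch, using made-up numbers rather than the actual April 2018 network values:

```python
def coins_per_hash(difficulty, block_reward):
    # Expected Monero earned per submitted hash: one hash finds a block
    # with probability 1/difficulty, and a block pays block_reward coins.
    return block_reward / difficulty

block_reward = 5.0       # hypothetical block reward in XMR
d_before = 1.0e11        # hypothetical network difficulty before the fork
d_after = d_before / 5   # difficulty falls to a fifth (an ~80% drop)

gain = coins_per_hash(d_after, block_reward) / coins_per_hash(d_before, block_reward)
print(round(gain, 6))  # 5.0 -- expected revenue per hash rises five-fold
```

In other words, an 80\\% fall in difficulty multiplies the expected per-hash payout by five, which is the basis of the viability remark below.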
This sparked theories that ASIC mining had been present in Monero's network even before the release of Bitmain's ASIC miner \\cite{monero_has_asic_miners}. Thus, hypothetically, with the significant reduction in mining difficulty and the potential exclusion of ASIC competitors, browser mining could have become more economically viable after the fork.\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[width=\\textwidth]{images\/monero_hash_rate.png}\n \\caption{The Monero global network hash rate plummeted on the 7th of April, 2018}\n \\label{fig:monero_hashrate_change}\n\\end{figure}\n\n\\noindent\nWhilst browser mining may not be a complete replacement for advertisement, recent research, along with our results, indicates potential opportunities for it to compete with advertisement in niche markets. Considering digital advertisement's annual expenditure is \\$215.8 billion \\cite{ads_expenditure} and the fact that browser mining could potentially enter this highly valuable market, more research should be conducted to evaluate its performance in monetising various types of content using large sample pools.\n\n\n\\section{Conclusion}\nIn this paper we evaluated whether browser mining could be a legitimate monetisation alternative to advertisement by conducting the first study which measured behavioural features, including user consent to mine cryptocurrencies in the browser and the session lengths resulting from both monetisation methods. Our estimations demonstrate that at the moment advertisement remains unchallenged in terms of revenue for the publisher and prospects of mobile use; however, based on our results, we suggest that browser mining can be a legitimate monetisation mechanism and its current notorious reputation could be overstated. 
Whilst there are significant ethical and legal avenues left to explore, we believe that, given its wide range of possible legitimate use cases, this interesting new concept deserves further research attention.\n\\subsection{Definitions} \\label{appendix_definitions}\n\\begin{itemize}\n \\item \\(M\\): \\hspace{1em} the amount of Monero coins earned in total;\n \\item \\(M_{usd}\\): \\hspace{1em} the US dollar value of 1 Monero on 17th of April, 2018;\n \\item \\(H\\): \\hspace{1em} the number of total hashes solved;\n \\item \\(D\\): \\hspace{1em} the global Monero mining difficulty on 17th of April, 2018;\n \\item \\(B\\): \\hspace{1em} the amount of Monero coins rewarded for a mined block on 17th of April, 2018;\n \\item \\(X_{c}\\): \\hspace{1em} the earnable share per coin with the pool fee deducted \\cite{coinhive_pay_calc};\n \\item \\(R_{crypto}\\): \\hspace{1em} the revenue generated from mining cryptocurrency in US dollars;\n \\item \\(P_{rpm}\\): \\hspace{1em} net publisher revenue in US dollars;\n \\item \\(CPM_{avg}\\): \\hspace{1em} average CPM value \\cite{cpm_average};\n \\item \\(X_{a}\\): \\hspace{1em} the charge applied by the advertisement network \\cite{cpm_average};\n \\item \\(I\\): \\hspace{1em} total number of advertisement impressions;\n \\item \\(R_{ads}\\): \\hspace{1em} advertisement revenue in US dollars;\n \\item \\(A_{avg}\\): \\hspace{1em} average number of advertisement impressions per person;\n \\item \\(U_{avg}\\): \\hspace{1em} average hash rate of a single participant;\n \\item \\(T_{max}\\): \\hspace{1em} total session time of participants who selected mining;\n \\item \\(T_{avg}\\): \\hspace{1em} average session time of participants who selected mining;\n \\item \\(H_{max}\\): \\hspace{1em} maximum number of hashes if all participants were to select cryptocurrency mining;\n \\item \\(N\\): \\hspace{1em} total number of participants;\n \\item 
\\(R_{cryptoMax}\\): \\hspace{1em} maximum browser mining revenue;\n \\item \\(R_{adsMax}\\): \\hspace{1em} maximum potential advertisement revenue.\n\\end{itemize}\n\n\n\n\n\\subsection{Revenue}\n\nOur estimates show browser mining generated revenue at a rate 46x less than advertisement (\\(R_{adsMax}\\) compared with \\(R_{cryptoMax}\\) in Table \\ref{table: r_cryptoMax=r_adsMax}). Furthermore, over the period of the experiment, the total revenue from the participants who chose mining (23 of 107 individuals) was nearly 75x less than the revenue produced by the majority that chose advertising. Assuming the number of users and the average session time remained constant, the average hash rate per user would have to increase from 63 h\/s to approximately 2738 h\/s for revenue to be comparable. To put this in perspective, currently there are no common consumer devices capable of achieving such hash rates. On the other hand, if the hash rate were to remain the same, the session length would have to increase from over two minutes to nearly two hours for mining revenue to be equivalent.\n\n\\import{tables\/}{rGen.tex}\n\\import{tables\/}{rMax.tex}\n\\subsection{User consent}\nThe collected data suggests people were more inclined to select advertisement, with only 21.5\\% (n=23) of participants selecting cryptocurrency mining as their preferred monetisation option. When asked why advertisement was selected, the two most prominent reasons indicated were concerns about reduced device performance (25.84\\%, n=46) and a higher level of familiarity with advertisement (23.6\\%, n=42). In contrast, the leading reason for selecting browser mining was a dislike for advertisement (36.59\\%, n=15). Only 1.69\\% (n=3) of all participants who selected advertisement did so due to a negative view of cryptocurrency. 
It is worth noting that questions which inquired about decision rationale were multiple-choice, meaning a single participant was able to indicate more than one reason for selecting a particular monetisation method. Segregating responses into groups of mobile and desktop or laptop users revealed that only a small portion of participants on mobile devices chose cryptocurrency mining (14.81\\%, n=4). Reasons for selecting advertisement within this subgroup included concerns about reduced battery life (26.79\\%, n=15), diminished device performance (21.43\\%, n=12) and better familiarity with advertisement (19.64\\%, n=11).\n\nThe collected responses showed a general inclination towards advertisement across all demographic factors, with female participants notably comprising only 6.9\\% (n=2) of the miners. Further investigation revealed a lack of familiarity with browser mining as the leading reason (46.51\\%, n=20) for the elevated female choice of advertisement, with concerns about device performance coming second (23.26\\%, n=10).\n\nAdvertisement was the preferred choice across all levels of self-indicated familiarity with cryptocurrency. The highest rate was seen amongst participants who indicated a slight familiarity, 88.89\\% (32 out of 36) of whom selected advertisement, while the highest rate of browser mining was among highly familiar participants (12 out of 26, 46.15\\%). Furthermore, we observed that as the levels of familiarity increased, so did the number of browser-mining instances, except for participants who held the highest level of claimed understanding (experts), none of whom selected browser-mining. Analysing cryptocurrency familiarity against attitude towards browser mining revealed that experts held only neutral, negative, or strongly negative opinions. 
In contrast, the remaining familiarity groups, ranging from the lowest (no familiarity) to the second highest (high familiarity), were mainly neutral and expressed at least some level of positivity towards browser mining. 60.71\\% (n=51) of participants who selected advertisement indicated willingness to substitute their monetisation choice if half of the mined cryptocurrency was reimbursed as an incentive, 27.45\\% (n=14) of whom held a negative and 3.92\\% (n=2) a strongly negative view towards browser-based cryptocurrency mining.\n\n\n\n\n\n\\subsection{User experience}\nThroughout the study, session length was recorded and analysed under the assumption that users would terminate sessions earlier if user experience was affected by the monetisation method. The median session length was found to be 6x higher with browser mining compared to advertisement. Splitting results by device revealed a similar trend; when mining, the median session lengths on desktop and mobile devices were 7.8x and 2x longer, respectively (Fig.~\\ref{fig: avg_sess_box_plot}). Longer median session lengths with browser mining were observed across all levels of familiarity with cryptocurrency. High familiarity resulted in the longest sessions with mining, whilst moderate familiarity produced the lengthiest median sessions with advertisement.\n\nAnalysis of data segregated by gender indicated that with browser mining, female participants had 2.3x shorter median sessions than their male counterparts, whilst with advertisement the median session lengths remained relatively similar across all genders. Additionally, a 7x increase in median session length when mining compared to advertisement was observed among participants with a bachelor's degree as their highest level of education. Participants with master's or doctoral degrees did not follow a similar trend; however, their sample size was significantly smaller (73.83\\% of participants had bachelor's degrees). 
Similarly, the 18-24 age range showed a 5.9x higher median session length with browser mining versus advertisement.\n\nEvaluation of median session times with respect to the initial browser mining throttle indicated that the longest sessions were produced by the lowest starting throttle (10\\%), with the highest throttle (80\\%) coming second. We also measured the average change in the initial throttle value (fig.~\\ref{fig:avg_throttle_change}), which revealed that generally the increase was more significant than the reduction; however, participants were willing to increase the throttle value only if it was initially small.\n\n\\begin{figure}[!htp]\n \\import{graphs\/}{sess_ads_crypto_mobile_desktop.tex}\n \\import{graphs\/}{change_in_throttle.tex}\n\\end{figure}\n\n\\section{Exhaustive sets}\\label{exhsets}\n\nThroughout, we consider a finite $k$-graph $\\Lambda$, and typically $k=2$. A vertex $v\\in \\Lambda^0$ is a \\emph{source} if there exists $i\\in \\{1,\\dots,k\\}$ such that $v\\Lambda^{e_i}$ is empty. (We believe this is standard: it is the negation of ``$\\Lambda$ has no sources'' in the sense of the original paper \\cite{KP}.) Our main examples are $2$-graphs in which a vertex can receive red edges but not blue edges or vice-versa --- for example, the vertex $v$ in the graph in \\cite[Figure~2]{aHKR2} (which is also Example~\\ref{trickyex} below). We call a vertex such that $v\\Lambda^{e_i}$ is empty for all $i$ an \\emph{absolute source}. For example, the vertex $w$ in Example~\\ref{trickyex} is an absolute source.\n\nFor such graphs the only Cuntz--Krieger relations available are those of \\cite{RSY2}. 
As there, if $\\mu, \\nu$ is a pair of paths in $\\Lambda$ with $r(\\mu)=r(\\nu)$, we set\n\\[\\Lambda^{\\min}(\\mu,\\nu)=\\big\\{(\\alpha,\\beta)\\in \\Lambda\\times \\Lambda: \\mu\\alpha=\\nu\\beta\\text{ and } d(\\mu\\alpha)=d(\\mu)\\vee d(\\nu)\\big\\}\n\\]\nfor the set of \\emph{minimal common extensions}. For a vertex $u\\in \\Lambda^0$, a finite subset $E$ of $u\\Lambda^1:=\\bigcup_{i=1}^ku\\Lambda^{e_i}$ is \\emph{exhaustive} if for every $\\mu\\in u\\Lambda$ there exists $e\\in E$ such that $\\Lambda^{\\min}(\\mu,e)$ is nonempty. \n\nWe then use the Cuntz--Krieger relations of \\cite{RSY2}, and in particular the presentation of these relations which uses only edges, as discussed in \\cite[Appendix~C]{RSY2}. As there, we write $\\{t_\\mu:\\mu\\in \\Lambda\\}$ for the universal Toeplitz--Cuntz--Krieger family which generates the Toeplitz algebra $\\mathcal{T}C^*(\\Lambda)$. The Cuntz--Krieger relations then include the relations (T1), (T2), (T3) and (T5) for Toeplitz--Cuntz--Krieger families, as in \\cite{aHLRS2} and \\cite{aHKR2}, and for every $u$ that is not a source the extra relations \n\\begin{itemize}\n\\item[(CK)] $\\prod_{e\\in E}(t_u-t_et_e^*)=0$\\quad\\text{for all $u\\in \\Lambda^0$ and finite exhaustive sets $E\\subset u\\Lambda^1$}.\n\\end{itemize}\nSince adding extra edges to an exhaustive set gives another exhaustive set, it is convenient to find the smallest possible exhaustive sets. Then the Cuntz--Krieger relations for these smallest sets are the sharpest.\n\n\n\n\\begin{lem}\\label{idFE}\nSuppose that $\\Lambda$ is a finite $k$-graph, $u\\in\\Lambda^0$, and $E\\subset u\\Lambda^1$ is a finite exhaustive set. Consider $i\\in\\{1,\\dots,k\\}$ and $e\\in u\\Lambda^{e_i}$. If there is a path $e\\mu\\in u\\Lambda^{\\mathbb{N} e_i}$ such that $s(\\mu)$ is an absolute source, then $e\\in E$.\n\\end{lem}\n\n\\begin{proof}\nSince $E$ is exhaustive, there exists $f\\in E$ such that $\\Lambda^{\\min}(e\\mu,f)\\not=\\emptyset$. 
Then there exist paths $\\alpha,\\beta$ such that $e\\mu\\alpha=f\\beta$. Since $s(\\mu)\\Lambda=\\{s(\\mu)\\}$, we have $\\alpha=s(\\mu)$. Thus $d(f\\beta)=d(e\\mu\\alpha)=d(e\\mu)\\in \\mathbb{N}{e_i}$; since $f\\in\\Lambda^1$ and\n\\[\n0\\leq d(f)\\leq d(f)+d(\\beta)=d(f\\beta) \\in \\mathbb{N}{e_i},\n\\]\nwe deduce that $d(f)=e_i$. Now uniqueness of factorisations and $e\\mu=f\\beta$ imply that $f=(e\\mu)(0,e_i)=e$. Thus $e\\in E$.\n\\end{proof}\n\n\\begin{example}\\label{ex2graph1source}\nWe consider a $2$-graph $\\Lambda$ with the following skeleton: \n\\[\n\\begin{tikzpicture}[scale=1.5]\n \\node[inner sep=0.5pt, circle] (u) at (0,0) {$u$};\n \\node[inner sep=0.5pt, circle] (v) at (2,0) {$v,$};\n\\draw[-latex, blue] (u) edge [out=195, in=270, loop, min distance=30, looseness=2.5] (u);\n\\draw[-latex, red, dashed] (u) edge [out=165, in=90, loop, min distance=30, looseness=2.5] (u);\n\\draw[-latex, blue] (v) edge [out=200, in=340] (u);\n\\draw[-latex, red, dashed] (v) edge [out=160, in=20] (u) ;\n\\node at (-.7, 0.3) {\\color{black} $d_2$};\n\\node at (-.7,-0.3) {\\color{black} $d_1$};\n\\node at (1, 0.4) {\\color{black} $a_2$};\n\\node at (1,-0.4) {\\color{black} $a_1$};\n\\end{tikzpicture}\n\\]\nin which the label $a_1$, for example, means that there are $a_1$ blue edges from $v$ to $u$. Since each path in $u\\Lambda^{e_1+e_2}v$ has unique blue-red and red-blue factorisations, the numbers $a_i, d_i\\in \\mathbb{N}\\backslash\\{0\\}$ satisfy $d_1a_2=d_2a_1$. Since $v$ is an absolute source, Lemma~\\ref{idFE} implies that every finite exhaustive set in $u\\Lambda^1$ must contain every edge, and hence contains $u\\Lambda^1$; since every finite exhaustive set is by definition a subset of $u\\Lambda^1$, it is therefore the only finite exhaustive set. 
Thus the only Cuntz--Krieger relation at $u$ is\n\\[\n\\prod_{e\\in u\\Lambda^1}(t_u-t_et_e^*)=0.\n\\]\nThere is no Cuntz--Krieger relation at $v$.\n \\end{example}\n \nWe now aim to make the Cuntz--Krieger relation (CK) look a little more like the familiar ones involving sums of range projections. \n\n\\begin{prop}\\label{CKforE}\nSuppose that $\\Lambda$ is a finite $k$-graph, $u$ is a vertex and $E\\subset u\\Lambda^1$ is a finite exhaustive set. For each nonempty subset $J$ of $\\{1,\\dots,k\\}$, define $e_J\\in \\mathbb{N}^k$ by $e\\!_J=\\sum_{i\\in J}e_i$. Then the Cuntz--Krieger relation \\textnormal{(CK)} associated to $E$ is equivalent to\n\\[\nt_u+\\sum_{\\emptyset\\not=J\\subset \\{1,\\dots,k\\}}(-1)^{|J|}\\sum_{\\{\\mu\\in u\\Lambda^{e\\!_J}:\\mu(0,e_i)\\in E\\text{ for $i\\in J$}\\}} t_\\mu t_\\mu^* =0.\n\\]\n\\end{prop}\n\nFrom the middle of \\cite[page~120]{aHKR2} we have\n\\begin{equation}\\label{expCK}\n\\prod_{e\\in E}(t_u-t_et_e^*)=t_u+\\sum_{\\emptyset\\not=J\\subset \\{1,\\cdots,k\\}} (-1)^{|J|}\\prod_{i\\in J}\\Big(\\sum_{e\\in E\\cap u\\Lambda^{e_i}}t_et_e^*\\Big).\n\\end{equation}\nWe want to expand the product, and we describe the result in a lemma. Proposition~\\ref{CKforE} then follows from \\eqref{expCK} and the lemma.\n \n \\begin{lem}\\label{expprod}\nSuppose that $E$ is a finite exhaustive subset of $u\\Lambda^1$ and $J$ is a subset of $\\{1,\\cdots,k\\}$. Write $e_J=\\sum_{i\\in J}e_i$. Then\n\\begin{equation}\\label{CKEassum}\n\\prod_{i\\in J} \\Big(\\sum_{f\\in E\\cap u\\Lambda^{e_i}}t_ft_f^*\\Big)=\\sum_{\\{\\mu\\in u\\Lambda^{e\\!_J}:\\mu(0,e_i)\\in E\\text{ for $i\\in J$}\\}}t_\\mu t_\\mu^*.\n\\end{equation}\n\\end{lem}\n\n\\begin{proof}\nFor $|J|=1$, say $J=\\{i\\}$, we have $e_J=e_i$. 
Thus\n\\[\n\\{\\mu\\in u\\Lambda^{e\\!_J}:\\mu(0,e_i)\\in E\\text{ for $i\\in J$}\\}=\\{\\mu\\in u\\Lambda^{e_i}:\\mu=\\mu(0,e_i)\\in E\\}= E\\cap u\\Lambda^{e_i}.\n\\]\nNow suppose the formula holds for $|J|=n$ and that $K:=J\\cup\\{j\\}$ for some $j\\in\\{1,\\dots,k\\}\\setminus J$. Then the inductive hypothesis gives\n\\begin{align*}\n\\prod_{i\\in K} \\Big(\\sum_{f\\in E\\cap u\\Lambda^{e_i}}t_ft_f^*\\Big)\n&=\\bigg(\\prod_{i\\in J} \\Big(\\sum_{f\\in E\\cap u\\Lambda^{e_i}}t_ft_f^*\\Big)\\bigg)\\Big(\\sum_{e\\in E\\cap u\\Lambda^{e_j}}t_et_e^*\\Big)\\\\\n&=\\Big(\\sum_{\\{\\mu\\in u\\Lambda^{e\\!_J}:\\mu(0,e_i)\\in E\\text{ for $i\\in J$}\\}}t_\\mu t_\\mu^*\\Big)\\Big(\\sum_{e\\in E\\cap u\\Lambda^{e_j}}t_et_e^*\\Big).\n\\end{align*}\nNow for each pair of summands $t_\\mu t_\\mu^*$ and $t_et_e^*$ the relation (T5) gives\n\\[\n(t_\\mu t_\\mu^*)(t_et_e^*)=\\sum_{(g,\\nu)\\in\\Lambda^{\\min}(\\mu,e)}t_\\mu(t_g t_\\nu^*)t_e^*\n=\\sum_{(g,\\nu)\\in\\Lambda^{\\min}(\\mu,e)}t_{\\mu g} t_{e\\nu}^*.\n\\]\nBy definition of $\\Lambda^{\\min}$ we have $g\\in \\Lambda^{e_j}$ and $\\mu g=e\\nu$, so $d(\\mu g)=e_K$. Then we have\n\\[\n(\\mu g)(0,e_j)=(e\\nu)(0,e_j)=e\\in E\\quad\\text{and}\\quad (\\mu g)(0,e_i)=\\mu(0,e_i)\\in E \\text{ for $i\\in J$.}\n\\]\nSo the paths which arise as $\\mu g$ are precisely those in the set \n\\[\n\\{\\lambda\\in u\\Lambda^{e_K}:\\lambda(0,e_i)\\in E\\text{ for $i\\in J\\cup \\{j\\}=K$}\\}.\\qedhere\n\\]\n\\end{proof}\n\n\\begin{rmk}\nIt is possible that the index set on the right-hand side of \\eqref{CKEassum} is empty, in which case we are asserting that the product on the left is $0$.\n\\end{rmk}\n\nIn a finite $k$-graph, the set $u\\Lambda^1$ of all edges with range $u$ is always a finite exhaustive subset of $\\Lambda$. Then Proposition~\\ref{CKforE} applies with $E=u\\Lambda^1$. 
For this choice of $E$, the condition $\\mu(0,e_i)\\in E$ is trivially satisfied, and hence we get the following simpler-looking relation.\n\n\\begin{cor}\\label{FEex2.2}\nSuppose that $\\Lambda$ is a finite $k$-graph. Then for every $u\\in \\Lambda^0$ we have\n\\[\n\\prod_{f\\in u\\Lambda^1}(t_u-t_ft_f^*)= t_u+\\sum_{\\emptyset\\not=J\\subset\\{1,\\dots,k\\}} (-1)^{|J|}\\Big(\\sum_{\\mu\\in u\\Lambda^{e\\!_J}}t_\\mu t_\\mu^*\\Big).\n\\]\n\\end{cor}\n\n\n\n\n\n\n\n\\begin{example}\\label{ex2graph1sourceFE}\nWe return to a $2$-graph $\\Lambda$ with the skeleton described in Example~\\ref{ex2graph1source}, and its only finite exhaustive set $u\\Lambda^1$. Then \\eqref{expCK} and Lemma~\\ref{expprod} imply that \n\\[\n\\prod_{e\\in u\\Lambda^1}(t_u-t_et_e^*)=t_u+\\sum_{\\emptyset\\not=J\\subset \\{1,2\\}}(-1)^{|J|}\\Big(\\sum_{\\{\\mu\\in u\\Lambda^{e\\!_J}:\\mu(0,e_i)\\in u\\Lambda^1\\text{ for $i\\in J$}\\}}t_\\mu t_\\mu^*\\Big).\n\\]\nThe nonempty subsets of $\\{1,2\\}$ are $\\{1\\}$, $\\{2\\}$ and $\\{1,2\\}$. For $J=\\{1\\}$ the requirement $\\mu(0,e_1)\\in u\\Lambda^1$ just says that $\\mu$ is a blue edge ($d(\\mu)=e_1$), and \n\\[\n\\sum_{\\mu\\in u\\Lambda^{e\\!_J}}t_\\mu t_\\mu^*=\\sum_{e\\in u\\Lambda^{e_1}}t_et_e^*.\n\\]\nA similar thing happens for $J=\\{2\\}$. For $J=\\{1,2\\}$, the condition on $\\mu(0, e_i)$ is still trivially satisfied by all $\\mu\\in u\\Lambda^{e_1+e_2}$. Hence the Cuntz--Krieger relation becomes\n\\[\nt_u-\\sum_{e\\in u\\Lambda^{e_1}}t_et_e^*-\\sum_{e\\in u\\Lambda^{e_2}}t_et_e^*+\\sum_{\\mu\\in u\\Lambda^{e_1+e_2}}t_\\mu t_\\mu^*=0,\n\\]\nor equivalently\n\\[\nt_u=\\sum_{e\\in u\\Lambda^{e_1}}t_et_e^*+\\sum_{e\\in u\\Lambda^{e_2}}t_et_e^*-\\sum_{\\mu\\in u\\Lambda^{e_1+e_2}}t_\\mu t_\\mu^*,\n\\]\nwhich does indeed look more like a Cuntz--Krieger relation. \n\\end{example}\n\nLemma~\\ref{idFE} establishes a lower bound for the finite exhaustive sets. In Example~\\ref{ex2graph1source}, this lower bound was all of $u\\Lambda^1$, and hence this had to be the only finite exhaustive set. 
However, this was a bit lucky, as the next example shows.\n\n\n\\begin{example}\\label{trickyex}\nWe consider a $2$-graph $\\Lambda$ with skeleton\n\\[\n\\begin{tikzpicture}[scale=1.5]\n \\node[inner sep=0.5pt, circle] (u) at (0,0) {$u$};\n \\node[inner sep=0.5pt, circle] (v) at (2,1) {$v$};\n \\node[inner sep=0.5pt, circle] (w) at (2,-1) {$w$};\n\\draw[-latex, blue] (u) edge [out=195, in=270, loop, min distance=30, looseness=2.5] (u);\n\\draw[-latex, red, dashed] (u) edge [out=165, in=90, loop, min distance=30, looseness=2.5] (u);\n\\draw[-latex, blue] (v) edge [out=220, in=10] (u);\n\\draw[-latex, red, dashed] (v) edge [out=190, in=40] (u) ;\n\\draw[-latex, blue] (w) edge [out=155, in=335] (u) ;\n\\draw[-latex, red, dashed] (w) edge [out=90, in=270] (v);\n\\node at (-.7, 0.3) {\\color{black} $d_2$};\n\\node at (-.7,-0.3) {\\color{black} $d_1$};\n\\node at (.8, 0.8) {\\color{black} $a_2$};\n\\node at (1.35,0.3) {\\color{black} $a_1$};\n\\node at (2.2, 0) {\\color{black} $b_2$};\n\\node at (.95,-0.7) {\\color{black} $b_1$};\n\\end{tikzpicture}\n\\]\nin which $d_1a_2=d_2a_1$ and $a_1b_2=d_2b_1$. Note that $w$ is an absolute source. Our interest in these graphs arises from \\cite[\\S8]{aHKR2}, where we saw that the Toeplitz algebras of such graphs can arise as quotients of the Toeplitz algebras of graphs with no sources.\n\nThe only finite exhaustive subset of $v\\Lambda^1=v\\Lambda^{e_2}$ is the whole set, and this yields a Cuntz--Krieger relation \n\\begin{equation}\\label{CKv}\n\\prod_{f\\in v\\Lambda^{e_2}}(t_v-t_ft_f^*)=0\\Longleftrightarrow t_v=\\sum_{f\\in v\\Lambda^{e_2}}t_ft_f^*.\n\\end{equation}\nFor the vertex $u$ the situation is more complicated.\n\\end{example}\n\n\\begin{prop}\\label{idminFE} \nSuppose that $\\Lambda$ is a $2$-graph with the skeleton described in Example~\\ref{trickyex}. 
Then \n\\[\nE:= (u\\Lambda^1u)\\cup(u\\Lambda^{e_2}v)\\cup(u\\Lambda^{e_1}w)\n\\]\nis exhaustive, and every other finite exhaustive subset of $u\\Lambda^1$ contains $E$. \n\\end{prop}\n\n\\begin{proof}\nSince $w$ is an absolute source, Lemma~\\ref{idFE} implies that every finite exhaustive subset of $u\\Lambda^1$ contains $u\\Lambda^1u$, $u\\Lambda^{e_2}v$ and $u\\Lambda^1w=u\\Lambda^{e_1}w$, and hence also the union $E$. So it suffices for us to prove that $E$ is exhaustive.\n\nTo see this, we take $\\lambda\\in u\\Lambda$ and look for $e\\in E$ such that $\\Lambda^{\\min}(\\lambda,e)\\not=\\emptyset$. Unfortunately, this seems to require a case-by-case argument. We begin by eliminating some easy cases.\n\\begin{itemize}\n\\item If $\\lambda=u$, we take $e\\in u\\Lambda^1u$; then $(e,s(e))\\in \\Lambda^{\\min}(\\lambda,e)$, and we are done. So we suppose that $d(\\lambda)\\not=0$.\n\\smallskip\n\n\\item If $\\lambda\\in u\\Lambda u\\setminus\\{u\\}$, we choose $i$ such that $e_i\\leq d(\\lambda)$. Then $\\lambda(0,e_i)\\in u\\Lambda ^{e_i}u\\subset E$ and $\\Lambda^{\\min}(\\lambda,\\lambda(0,e_i))$ contains $\\big(s(\\lambda),\\lambda(e_i,d(\\lambda))\\big)$, and in particular is nonempty.\n\\smallskip\n\n\\item We now suppose that $\\lambda\\in u\\Lambda v\\cup u\\Lambda w$. If $e_2\\leq d(\\lambda)$, then $\\lambda(0,e_2)\\in E$ and $\\Lambda^{\\min}(\\lambda, \\lambda(0,e_2))\\not=\\emptyset$. Otherwise $d(\\lambda)\\in\\mathbb{N}{e_1}$. \n\\smallskip\n\n\\item If $d(\\lambda)\\geq 2e_1$, then $\\lambda(0,e_1)\\in u\\Lambda^{e_1} u$ belongs to $E$, and $\\Lambda^{\\min}(\\lambda, \\lambda(0,e_1))\\not=\\emptyset$.\n\\smallskip\n\n\\item If $d(\\lambda)=e_1$ and $s(\\lambda)=w$, then $\\lambda\\in u\\Lambda^{e_1} w$ belongs to $E$, and we take $e=\\lambda$.\n\\end{itemize}\nWe are left to deal with paths $\\lambda\\in u\\Lambda^{e_1}v$. Choose $f\\in v\\Lambda^{e_2}w$, and consider $\\lambda f$. 
Since $d(\\lambda f)=d(\\lambda)+d(f)=e_1+e_2$, $\\lambda f$ has a red-blue factorisation \n\\[\n\\lambda f=(\\lambda f)(0,e_2)(\\lambda f)(e_2,e_1+e_2). \n\\]\nBut now $(\\lambda f)(0,e_2)\\in u\\Lambda^{e_2}u\\subset E$, and we have \n\\[\n\\big(f,(\\lambda f)(e_2,e_1+e_2)\\big)\\in\\Lambda^{\\min}\\big(\\lambda,(\\lambda f)(0,e_2)\\big). \n\\]\nThus in all cases $\\lambda$ has a common extension with some edge in $E$, and $E$ is exhaustive.\n\\end{proof}\n\nSo for the graphs $\\Lambda$ with skeleton described in Example~\\ref{trickyex}, there is a single Cuntz--Krieger relation at the vertex $u$, namely $\\prod_{e\\in E}(t_u-t_et_e^*)=0$. Now we rewrite this relation as a more familiar-looking sum. \n\n\\begin{lem}\\label{CKatu}\nSuppose that $\\Lambda$ is a $2$-graph with skeleton described in Example~\\ref{trickyex}, and $E$ is the finite exhaustive set of Proposition~\\ref{idminFE}. Then we have\n\\begin{equation}\\label{expandprod_E}\n\\prod_{e\\in E}(t_u-t_et_e^*)=t_u-\\sum_{e\\in u\\Lambda^{e_1}\\{u,w\\}}t_et_e^*-\\sum_{f\\in u\\Lambda^{e_2}\\{u,v\\}}t_ft_f^*+\\sum_{\\mu\\in u\\Lambda^{e_1+e_2}\\{u,v\\}}t_\\mu t_\\mu^*.\n\\end{equation}\n\\end{lem}\n\n\\begin{proof}\nFrom \\eqref{expCK} and Lemma~\\ref{expprod} we deduce that\n\\begin{align*}\n\\prod_{e\\in E}(t_u&-t_et_e^*)=t_u-\\sum_{e\\in u\\Lambda^{e_1}\\cap E}t_et_e^*-\\sum_{f\\in u\\Lambda^{e_2}\\cap E}t_ft_f^*+\\sum_{\\{\\mu\\in u\\Lambda^{e_1+e_2}:\\mu(0,e_i)\\in E\\text{ for $i=1,2$}\\}}t_\\mu t_\\mu^*\\\\\n&=t_u-\\sum_{e\\in u\\Lambda^{e_1}\\{u,w\\}}t_et_e^*-\\sum_{f\\in u\\Lambda^{e_2}\\{u,v\\}}t_ft_f^*+\\sum_{\\{\\mu\\in u\\Lambda^{e_1+e_2}:\\mu(0,e_i)\\in E\\text{ for $i=1,2$}\\}}t_\\mu t_\\mu^*.\n\\end{align*}\nTo understand the last term, we claim that $\\mu\\in u\\Lambda^{e_1+e_2}$ has $\\mu(0,e_1)\\in E$ and $\\mu(0,e_2)\\in E$ if and only if $s(\\mu)=u$ or $s(\\mu)=v$. The point is that if $s(\\mu)=u$ or $s(\\mu)=v$ then $s(\\mu(0,e_i))=u$ for $i=1,2$, and $u\\Lambda^1u\\subset E$. 
The alternative is that $s(\\mu)=w$, and then $\\mu(0,e_1)$ belongs to $u\\Lambda^{e_1}v$, which is disjoint from $E$. Thus \n\\[\n\\{\\mu\\in u\\Lambda^{e_1+e_2}:\\mu(0,e_i)\\in E\\text{ for $i=1,2$}\\}=u\\Lambda^{e_1+e_2}\\{u,v\\},\n\\]\nand this completes the proof.\n\\end{proof}\n\n\\begin{cor}\nSuppose that $\\Lambda$ is a $2$-graph with skeleton described in Example~\\ref{trickyex}. Then the Cuntz--Krieger algebra is the quotient of $\\mathcal{T}C^*(\\Lambda)$ by the Cuntz--Krieger relations \\eqref{CKv} and \n\\[\nt_u=\\sum_{e\\in u\\Lambda^{e_1}\\{u,w\\}}t_et_e^*+\\sum_{f\\in u\\Lambda^{e_2}\\{u,v\\}}t_ft_f^*-\\sum_{\\mu\\in u\\Lambda^{e_1+e_2}\\{u,v\\}}t_\\mu t_\\mu^*.\n\\]\n\\end{cor}\n\n\n\n\\section{KMS states for the graphs of Example~\\ref{ex2graph1source}}\\label{secex2.2}\n\n\nWe wish to compute the KMS$_\\beta$ states for a 2-graph $\\Lambda$ with skeleton described in Example~\\ref{ex2graph1source}. Such graphs have one absolute source $v$. We list the vertex set as $\\{u,v\\}$, and write $A_i$ for the vertex matrices, so that\n\\[\nA_i=\\begin{pmatrix}d_i&a_i\\\\0&0\\end{pmatrix}\\quad\\text{for $i=1,2$.}\n\\]\nWe then fix $r\\in (0,\\infty)^2$, and consider the associated dynamics $\\alpha^r:\\mathbb{R}\\to \\operatorname{Aut} \\mathcal{T}C^*(\\Lambda)$ such that\n\\[\n\\alpha^r_t(t_\\mu t_\\nu^*)=e^{itr\\cdot (d(\\mu)-d(\\nu))}t_\\mu t_\\nu^*.\n\\]\nWe then consider $\\beta\\in (0,\\infty)$ such that \n\\begin{equation}\\label{hypbigbeta}\n\\beta r_i>\\ln\\rho(A_i)\\quad\\text{for $i=1$ and $i=2$.}\n\\end{equation} \nAs observed at the start of \\cite[\\S8]{aHKR2}, even though $\\Lambda$ has a source, we can still apply Theorem~6.1 of \\cite{aHLRS2} to find the KMS$_\\beta$ states. \n\nFirst we need to compute the vector \n$y=(y_u,y_v)\\in [0,\\infty)^{\\Lambda^0}$ appearing in that theorem. 
We find:\n\n\\begin{lem}\\label{lemcompy}\nWe have\n\\begin{align}\ny_u&=\\sum_{\\mu\\in \\Lambda u}e^{-\\beta r\\cdot d(\\mu)}=(1-d_1e^{-\\beta r_1})^{-1}(1-d_2e^{-\\beta r_2})^{-1},\\quad \\text{and}\\label{ysubu}\\\\\ny_v&=1+a_1e^{-\\beta r_1}(1-d_1e^{-\\beta r_1})^{-1}(1-d_2e^{-\\beta r_2})^{-1}+a_2e^{-\\beta r_2}(1-d_2e^{-\\beta r_2})^{-1}.\\label{ysubv}\n\\end{align}\n\\end{lem}\n\n\\begin{proof}\nWe first evaluate\n\\[\ny_u:=\\sum_{\\mu\\in \\Lambda u}e^{-\\beta r\\cdot d(\\mu)}=\\sum_{n\\in \\mathbb{N}^2}\\sum_{\\mu\\in \\Lambda^n u}e^{-\\beta r\\cdot n}.\n\\]\nEach path of degree $n$ is uniquely determined by (say) its blue-red factorisation. Then we have $d_1^{n_1}$ choices for the blue path and $d_2^{n_2}$ choices for the red path. Thus\n\\begin{align*}\ny_u&=\n\\sum_{n\\in \\mathbb{N}^2} d_1^{n_1}d_2^{n_2}e^{-\\beta(n_1r_1+n_2r_2)}=\\sum_{n\\in \\mathbb{N}^2}(d_1e^{-\\beta r_1})^{n_1}(d_2e^{-\\beta r_2})^{n_2}\\\\\n&=\\Big(\\sum_{n_1=0}^\\infty (d_1e^{-\\beta r_1})^{n_1}\\Big)\\Big(\\sum_{n_2=0}^\\infty (d_2e^{-\\beta r_2})^{n_2}\\Big), \n\\end{align*}\nand summing the geometric series gives \\eqref{ysubu}.\n\nTo compute $y_v$, we need to list the distinct paths $\\mu$ in $\\Lambda v$. First, if $d(\\mu)_1>0$, then $\\mu$ has a factorisation $\\mu=\\nu f$ with $d(f)=e_1$. Note that $s(f)=s(\\mu)=v$, and hence $s(\\nu)=r(f)=u$, so $\\nu\\in \\Lambda u$. Otherwise we have $d(\\mu)\\in \\mathbb{N}{e_2}$, and $\\Lambda v$ is the disjoint union of the singleton $\\{v\\}$, $\\bigcup_{e\\in \\Lambda^{e_1}v}(\\Lambda u)e$, and $\\bigcup_{l=0}^\\infty\\Lambda^{(l+1)e_2}v$. Counting the three sets gives \n\\[\ny_v=1+a_1e^{-\\beta r_1} y_u+\\sum_{l=0}^\\infty a_2d_2^le^{-\\beta(l+1)r_2}=1+a_1e^{-\\beta r_1}y_u+a_2e^{-\\beta r_2}(1-d_2e^{-\\beta r_2})^{-1},\n\\]\nand hence we have \\eqref{ysubv}. \\end{proof}\n\n\\begin{rmk}\\label{rmkaltformyv}\nWe made a choice when we computed $y_v$: we considered the complementary cases $d(\\mu)_1>0$ and $d(\\mu)_1=0$. 
We could equally well have chosen to use the cases $d(\\mu)_2>0$ and $d(\\mu)_2=0$, and we would have found\n\\begin{equation}\\label{altformyv}\ny_v=1+a_2e^{-\\beta r_2}(1-d_1e^{-\\beta r_1})^{-1}(1-d_2e^{-\\beta r_2})^{-1}+a_1e^{-\\beta r_1}(1-d_1e^{-\\beta r_1})^{-1},\n\\end{equation}\nwhich looks different. To see that they are in fact equal, we look at the difference. To avoid messy formulas, we write $\\Delta:=(1-d_1e^{-\\beta r_1})(1-d_2e^{-\\beta r_2})$, and observe that, for example, $(1-d_1e^{-\\beta r_1})^{-1}=(1-d_2e^{-\\beta r_2})\\Delta^{-1}$. Then the difference $\\eqref{ysubv} -\\eqref{altformyv}$ is\n\\begin{align*}\na_1e^{-\\beta r_1}\\Delta^{-1}&+a_2e^{-\\beta r_2}(1-d_1e^{-\\beta r_1})\\Delta^{-1}-\na_2e^{-\\beta r_2}\\Delta^{-1}-a_1e^{-\\beta r_1}(1-d_2e^{-\\beta r_2})\\Delta^{-1}.\n\\end{align*}\nWhen we expand the brackets we find that the terms $a_1e^{-\\beta r_1}\\Delta^{-1}$ and $a_2e^{-\\beta r_2}\\Delta^{-1}$ cancel out, leaving \n\\[\n-a_2e^{-\\beta r_2}d_1e^{-\\beta r_1}\\Delta^{-1}+a_1e^{-\\beta r_1}d_2e^{-\\beta r_2}\\Delta^{-1}=(-a_2d_1+a_1d_2)e^{-\\beta(r_1+r_2)}\\Delta^{-1},\n\\]\nwhich vanishes because the factorisation property forces $a_1d_2=a_2d_1$.\n\\end{rmk}\n\nWe recall that we are considering $\\beta$ satisfying \\eqref{hypbigbeta}.\nThe first step in the procedure of \\cite[\\S8]{FaHR} for such $\\beta$ is to apply \\cite[Theorem~6.1]{aHLRS2}. Then the KMS$_\\beta$ states of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$ have the form $\\phi_\\epsilon$ for $\\epsilon\\in [0,\\infty)^{\\{u,v\\}}$ satisfying $\\epsilon\\cdot y=1$. This is a $1$-dimensional simplex with extreme points $(y_u^{-1},0)$ and $(0,y_v^{-1})$. 
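As a quick sanity check on these formulas, consider illustrative values (chosen only for this check) $d_1=d_2=a_1=a_2=1$ and $\\beta r_1=\\beta r_2=\\ln 2$, so that $e^{-\\beta r_i}=\\tfrac12$ and \\eqref{hypbigbeta} holds because $\\rho(A_i)=1$. Then \\eqref{ysubu} and \\eqref{ysubv} give\n\\[\ny_u=\\big(1-\\tfrac12\\big)^{-2}=4\\quad\\text{and}\\quad y_v=1+\\tfrac12\\cdot 2\\cdot 2+\\tfrac12\\cdot 2=4,\n\\]\nand \\eqref{altformyv} gives the same value $y_v=1+2+1=4$; the extreme points are then $(\\tfrac14,0)$ and $(0,\\tfrac14)$. 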
The values of the state $\\phi_\\epsilon$ on the vertex projections $t_u$ and $t_v$ are the coordinates of the vector\n\\[\nm=\\Big(\\prod_{i=1}^2(1-e^{-\\beta r_i}A_i)^{-1}\\Big)\\epsilon.\n\\]\nTo find $m$, we compute\n\\[\n\\prod_{i=1}^2(1-e^{-\\beta r_i}A_i)^{-1}=\\Big(\\prod_{i=1}^2(1-d_ie^{-\\beta r_i})^{-1}\\Big)\\begin{pmatrix}\n1&a_2e^{-\\beta r_2}+a_1e^{-\\beta r_1}(1-d_2e^{-\\beta r_2})\\\\0&(1-d_1e^{-\\beta r_1})(1-d_2e^{-\\beta r_2})\\end{pmatrix}.\n\\]\n\nFor the first extreme point $\\epsilon=(y_u^{-1},0)$, we get $m=(1,0)$ and the corresponding KMS$_\\beta$ state $\\phi_1$ satisfies \n\\begin{equation}\\label{values1onvertices}\n\\begin{pmatrix}\\phi_1(t_u)\\\\\\phi_1(t_v)\\end{pmatrix}=\n\\begin{pmatrix}1\\\\\n0\n\\end{pmatrix}.\n\\end{equation} \nLemma~6.2 of \\cite{AaHR} (for example) implies that $\\phi_1$ factors through a state of the quotient by the ideal of $\\mathcal{T}C^*(\\Lambda)$ generated by $t_v$, which is the ideal denoted $I_{\\{v\\}}$ in \\cite[\\S2.4]{FaHR}. Thus the quotient is $\\mathcal{T}C^*(\\Lambda\\backslash \\{v\\})=\\mathcal{T}C^*(u\\Lambda u)$. The general theory of \\cite{aHLRS2} says that $(\\mathcal{T}C^*(u\\Lambda u),\\alpha^r)$ has a unique KMS$_\\beta$ state $\\psi$, and we therefore have $\\phi_1=\\psi\\circ q_{\\{v\\}}$, where $q_{\\{v\\}}$ is the quotient map of $\\mathcal{T}C^*(\\Lambda)$ onto $\\mathcal{T}C^*(\\Lambda\\backslash\\{v\\})$ for the hereditary subset $\\{v\\}$ of $\\Lambda^0$ from \\cite[Proposition~2.2]{aHKR2}. \n\nNow we consider the other extreme point $\\epsilon=(0,y_v^{-1})$. 
This yields a KMS$_\\beta$ state $\\phi_2$ such that\n\\begin{equation}\\label{valuesonvertices}\n\\begin{pmatrix}\\phi_2(t_u)\\\\\\phi_2(t_v)\\end{pmatrix}=\n\\begin{pmatrix}y_v^{-1}\\big(\\textstyle{\\prod_{i=1}^2(1-d_ie^{-\\beta r_i})^{-1}}\\big)\\big(a_2e^{-\\beta r_2}+a_1e^{-\\beta r_1}-a_1d_2e^{-\\beta(r_1+r_2)}\\big)\\\\\ny_v^{-1}\n\\end{pmatrix}.\n\\end{equation}\nBecause this vector $\\epsilon$ is supported on the absolute source $v$, Proposition~8.2 of \\cite{aHKR2} implies that $\\phi_2$ factors through a state of $(C^*(\\Lambda), \\alpha^r)$ (and we can also verify this directly --- see the remark below).\n\nWe summarise our findings as follows.\n\n\\begin{prop}\nSuppose that $\\Lambda$, $r$ and $\\beta$ are as described at the start of the section. Then $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$ has a 1-dimensional simplex of KMS$_\\beta$ states with extreme points $\\phi_1$ and $\\phi_2$ satisfying \\eqref{values1onvertices} and \\eqref{valuesonvertices}. The KMS state $\\phi_1$ factors through a state $\\psi$ of $\\mathcal{T}C^*(u\\Lambda u)$, and the KMS state $\\phi_2$ factors through a state of $C^*(\\Lambda)$.\n\\end{prop}\n\n\\begin{rmk}\\label{reality2verts} \nAt this stage we can do some reassuring reality checks. First, we check that $\\phi_2(t_u)+\\phi_2(t_v)=1$. We multiply through by $y_v$ to take the $y_v^{-1}$ out. 
Then we compute using that $a_1d_2=a_2d_1$:\n\\begin{align*}\ny_v\\phi_2(t_u)&+y_v\\phi_2(t_v)=y_v\\phi_2(t_u)+1\\\\\n&=\\big(\\textstyle{\\prod_{i=1}^2(1-d_ie^{-\\beta r_i})^{-1}}\\big)\\big(a_2e^{-\\beta r_2}+a_1e^{-\\beta r_1}-a_1d_2e^{-\\beta(r_1+r_2)}\\big)+1\\\\\n&=\\big(\\textstyle{\\prod_{i=1}^2(1-d_ie^{-\\beta r_i})^{-1}}\\big)\\big(a_2e^{-\\beta r_2}+a_1e^{-\\beta r_1}-a_2e^{-\\beta r_2}d_1e^{-\\beta r_1}\\big)+1\\\\\n&=\\big(\\textstyle{\\prod_{i=1}^2(1-d_ie^{-\\beta r_i})^{-1}}\\big)\\big(a_2e^{-\\beta r_2}(1-d_1e^{-\\beta r_1})+a_1e^{-\\beta r_1}\\big)+1\\\\\n&=a_2e^{-\\beta r_2}(1-d_2e^{-\\beta r_2})^{-1}+a_1e^{-\\beta r_1}(1-d_1e^{-\\beta r_1})^{-1}(1-d_2e^{-\\beta r_2})^{-1}+1,\n\\end{align*}\nwhich is the formula for $y_v$ reshuffled.\n\n\nNext, we verify directly that $\\phi_2$ factors through a state of $C^*(\\Lambda)$. We saw in Example~\\ref{ex2graph1source} that the only finite exhaustive subset of $u\\Lambda^1$ is $u\\Lambda^1$, and then Corollary~\\ref{FEex2.2} implies that \n\\begin{equation}\\label{expandKMS2.2}\n\\phi_{2}\\Big(\\prod_{e\\in u\\Lambda^1}(t_u-t_et_e^*)\\Big)=\n\\phi_2\\Big(t_u-\\sum_{e\\in u\\Lambda^{e_1}}t_et_e^*-\\sum_{f\\in u\\Lambda^{e_2}}t_ft_f^*+\\sum_{\\mu\\in u\\Lambda^{e_1+e_2}}t_\\mu t_\\mu^*\\Big).\n\\end{equation}\nNow we break each sum into two sums over subsets of $u\\Lambda u$ and $u\\Lambda v$, and apply the KMS condition in the form $\\phi_{2}(t_\\mu t_\\mu^*)=e^{-\\beta r\\cdot d(\\mu)}\\phi_2(t_{s(\\mu)})$. We find that \\eqref{expandKMS2.2} is\n\\[\n(1-d_1e^{-\\beta r_1})(1-d_2e^{-\\beta r_2})\\phi_2(t_u)-\\big(a_1e^{-\\beta r_1}+a_2e^{-\\beta r_2}-a_1d_2e^{-\\beta(r_1+r_2)}\\big)\\phi_{2}(t_v),\n\\]\nwhich vanishes by \\eqref{valuesonvertices}. 
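For a concrete illustration of this vanishing (with illustrative values chosen only for this check), take $d_1=d_2=a_1=a_2=1$ and $e^{-\\beta r_1}=e^{-\\beta r_2}=\\tfrac12$, so that $y_v=4$ and $\\Delta^{-1}=4$. Then \\eqref{valuesonvertices} gives $\\phi_2(t_v)=\\tfrac14$ and $\\phi_2(t_u)=\\tfrac14\\cdot 4\\cdot\\big(\\tfrac12+\\tfrac12-\\tfrac14\\big)=\\tfrac34$, and the displayed expression is\n\\[\n\\big(1-\\tfrac12\\big)\\big(1-\\tfrac12\\big)\\tfrac34-\\big(\\tfrac12+\\tfrac12-\\tfrac14\\big)\\tfrac14=\\tfrac3{16}-\\tfrac3{16}=0.\n\\]\n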
Now the standard argument (using, for example, \\cite[Lemma~6.2]{AaHR}) shows that $\\phi_2$ factors through a state of the Cuntz--Krieger algebra $C^*(\\Lambda)$, which by Example~\\ref{ex2graph1source} is the quotient of $\\mathcal{T}C^*(\\Lambda)$ by the single Cuntz--Krieger relation $\\textstyle{\\prod_{e\\in u\\Lambda^1}(t_u-t_et_e^*)}=0$.\n\\end{rmk}\n\n\\section{KMS states for the graphs of Example~\\ref{trickyex}}\\label{sectrickyex}\n\nWe now consider a $2$-graph $\\Lambda$ with the skeleton described in Example~\\ref{trickyex}. Such graphs have one absolute source $w$, and $\\Lambda\\backslash\\{w\\}$ is the graph discussed in the previous section. As usual, we consider a dynamics determined by $r\\in (0,\\infty)^2$, and we want to use Theorem~6.1 of \\cite{aHLRS2} to find the KMS$_\\beta$ states for $\\beta$ satisfying $\\beta r_i>\\ln\\rho(A_i)$. Our first task is to find the vector $y=(y_u,y_v,y_w)$.\n\nSince the sets $\\Lambda u$ and $\\Lambda v$ lie entirely in the subgraph with vertices $\\{u,v\\}$, the numbers $y_u:=\\sum_{\\mu\\in \\Lambda u} e^{-\\beta r\\cdot d(\\mu)}$ and $y_v$ are given by Lemma~\\ref{lemcompy}. So it remains to compute $y_w$. We find:\n\n\\begin{lem}\\label{lemcompy2}\nWe define $\\Delta:=(1-d_1e^{-\\beta r_1})(1-d_2e^{-\\beta r_2})$. Then we have\n\\begin{align}\ny_u&=\\Delta^{-1},\\notag\\\\\ny_v&=1+a_1e^{-\\beta r_1}\\Delta^{-1}+a_2e^{-\\beta r_2}(1-d_2e^{-\\beta r_2})^{-1}\\notag\\\\\n&=1+a_1e^{-\\beta r_1}\\Delta^{-1}+a_2e^{-\\beta r_2}(1-d_1e^{-\\beta r_1})\\Delta^{-1},\\quad \\text{and}\\label{ysubv2}\\\\\ny_w&=1+b_2e^{-\\beta r_2}+b_1e^{-\\beta r_1}\\Delta^{-1}+a_2b_2e^{-2\\beta r_2}(1-d_2e^{-\\beta r_2})^{-1}\\label{ysubw}.\n\\end{align}\n\\end{lem} \n\n\\begin{proof}\nAs foreshadowed above, the formula for $y_u$ and the first formula for $y_v$ follow from Lemma~\\ref{lemcompy}. 
The formula \\eqref{ysubv2} is just a rewriting of the previous one which will be handy in computations (and this trick will be used a lot later). \n\nTo find $y_w$, we consider the paths $\\mu=\\nu e$ with $e\\in \\Lambda^{e_1}w$ and $\\nu\\in \\Lambda u$ (these are the ones with $d(\\mu)\\geq e_1$). There are $b_1$ such $e$, and hence we have a contribution $b_1e^{-\\beta r_1}y_u=b_1e^{-\\beta r_1}\\Delta^{-1}$ to $y_w$. The remaining paths are in $\\Lambda^{\\mathbb{N} e_2}w$, and give a contribution of \n\\begin{align*}\n1+b_2e^{-\\beta r_2}+b_2e^{-\\beta r_2}&a_2e^{-\\beta r_2}\\sum_{l=0}^\\infty d_2^le^{-\\beta r_2l}\\\\&=1+b_2e^{-\\beta r_2}+b_2e^{-\\beta r_2}a_2e^{-\\beta r_2}(1-d_2e^{-\\beta r_2})^{-1}.\n\\end{align*}\nAdding the two contributions gives \\eqref{ysubw}.\n\\end{proof}\n\n\\begin{rmk}\nWe could also have computed $y_w$ by counting the paths with $d(\\mu)\\geq e_2$ and those in $\\Lambda^{\\mathbb{N} e_1}w$. This gives \n\\begin{equation}\\label{ywalt}\ny_w=1+b_1e^{-\\beta r_1}(1-d_1e^{-\\beta r_1})^{-1}+b_2e^{-\\beta r_2}y_v.\n\\end{equation}\nWe found it instructive to check that this is the same as the right-hand side of \\eqref{ysubw}. First, we use the alternative formula \\eqref{altformyv} for $y_v$ (whose proof in Remark~\\ref{rmkaltformyv} used the crucial identity $a_1d_2=a_2d_1$). 
Then \nthe right-hand side of \\eqref{ywalt} becomes\n\\begin{align*}\n1+b_1e^{-\\beta r_1}&(1-d_1e^{-\\beta r_1})^{-1}\\\\\n&+b_2e^{-\\beta r_2}\\big(1+a_2e^{-\\beta r_2}\\Delta^{-1}+a_1e^{-\\beta r_1}(1-d_1e^{-\\beta r_1})^{-1}\\big).\n\\end{align*}\nNow we write $(1-d_1e^{-\\beta r_1})^{-1}=(1-d_2e^{-\\beta r_2})\\Delta^{-1}$, similarly for $(1-d_2e^{-\\beta r_2})^{-1}$, and expand the brackets: we get\n\\begin{align*}\n1+b_1&e^{-\\beta r_1}\\Delta^{-1}-b_1d_2e^{-\\beta(r_1+r_2)}\\Delta^{-1}\\\\\n&+b_2e^{-\\beta r_2}+b_2a_2e^{-2\\beta r_2}\\Delta^{-1}+b_2a_1e^{-\\beta(r_1+r_2)}\\Delta^{-1}-b_2a_1d_2e^{-\\beta(r_1+2r_2)}\\Delta^{-1}.\n\\end{align*}\nNow we recall from Example~\\ref{trickyex} that $b_1d_2=b_2a_1$, and hence the third and sixth terms cancel. Next we use the identity $a_1d_2=a_2d_1$ in the last term. We arrive at\n\\begin{align*}\n1+b_1e^{-\\beta r_1}\\Delta^{-1}&+b_2e^{-\\beta r_2}+b_2a_2e^{-2\\beta r_2}\\Delta^{-1}\n-b_2a_2d_1e^{-\\beta(r_1+2r_2)}\\Delta^{-1}\\\\\n&=1+b_1e^{-\\beta r_1}\\Delta^{-1}+b_2e^{-\\beta r_2}+b_2a_2e^{-2\\beta r_2}(1-d_1e^{-\\beta r_1})\\Delta^{-1}\\\\\n&=1+b_1e^{-\\beta r_1}\\Delta^{-1}+b_2e^{-\\beta r_2}+b_2a_2e^{-2\\beta r_2}(1-d_2e^{-\\beta r_2})^{-1},\n\\end{align*}\nwhich is the formula for $y_w$ in \\eqref{ysubw}. We find it reassuring that we had to explicitly use both relations $b_1d_2=b_2a_1$ and $a_1d_2=a_2d_1$ that are imposed on us by the assumption that our coloured graph is the skeleton of a $2$-graph.\n\\end{rmk}\n\nTheorem~6.1 of \\cite{aHLRS2} says that for each $\\beta$ satisfying $\\beta r_i>\\ln\\rho(A_i)$ for $i=1,2$, there is a simplex of KMS$_\\beta$ states $\\phi_\\epsilon$ on $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$ parametrised by the set\n\\[\n\\Delta_\\beta=\\big\\{\\epsilon\\in [0,\\infty)^{\\{u,v,w\\}}:\\epsilon\\cdot y=1\\big\\}.\n\\]\nHere, the set $\\Delta_\\beta$ is a $2$-dimensional simplex with extreme points $e_u:=(y_u^{-1},0,0)$, $e_v:=(0,y_v^{-1},0)$, and $e_w=(0,0,y_w^{-1})$. 
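As a numerical check on the two formulas for $y_w$ (with illustrative values chosen only for this check), take $d_i=a_i=b_i=1$ for $i=1,2$ and $e^{-\\beta r_1}=e^{-\\beta r_2}=\\tfrac12$; these values satisfy the constraints $a_1d_2=a_2d_1$ and $b_1d_2=b_2a_1$, and give $\\Delta=\\tfrac14$ and $y_u=y_v=4$. Then \\eqref{ysubw} gives\n\\[\ny_w=1+\\tfrac12+\\tfrac12\\cdot 4+\\tfrac14\\cdot 2=4,\n\\]\nand \\eqref{ywalt} gives the same value $y_w=1+\\tfrac12\\cdot 2+\\tfrac12\\cdot 4=4$, so in this case the extreme point $e_w$ is $(0,0,\\tfrac14)$. 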
The values of $\\phi_\\epsilon$ on the vertex projections are the entries in the vector $m(\\epsilon)=\\prod_{i=1}^2(1-e^{-\\beta r_i}A_i)^{-1}\\epsilon$. Since the matrices $1-e^{-\\beta r_i}A_i$ are upper-triangular, so are their inverses, and we deduce that both $m(e_u)$ and $m(e_v)$ have final entry $m(e_u)_w=0=m(e_v)_w$. So the corresponding KMS states are the compositions of the states of $\\big(\\mathcal{T}C^*(\\Lambda\\backslash\\{w\\}), \\alpha^{(r_1,r_2)}\\big)$ with the quotient map $q_{\\{w\\}}:\\mathcal{T}C^*(\\Lambda)\\to \\mathcal{T}C^*(\\Lambda\\backslash\\{w\\})$. Thus the extreme points of the simplex of KMS$_\\beta$ states of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$ are $\\phi_1\\circ q_{\\{w\\}}=(\\psi\\circ q_{\\{v\\}})\\circ q_{\\{w\\}}$, $\\phi_2\\circ q_{\\{w\\}}$ and $\\phi_3:=\\phi_{e_w}$. \n\n\\begin{rmk} \nWe recall from the end of the previous section that the state $\\phi_2$ of $\\mathcal{T}C^*(\\Lambda\\backslash\\{w\\})$ factors through a state of the Cuntz--Krieger algebra $C^*(\\Lambda\\backslash\\{w\\})$. So it is tempting to ask whether $\\phi_2\\circ q_{\\{w\\}}$ factors through a state of $C^*(\\Lambda)$. However, this is not the case. The point is that in the graph $\\Lambda\\backslash\\{w\\}$, the vertex $v$ is an absolute source, and hence there is no Cuntz--Krieger relation involving $t_v$. However, in the larger graph $\\Lambda$, $v$ is not an absolute source: the set $v\\Lambda^{e_2}$ is a nontrivial finite exhaustive subset of $v\\Lambda^1$, and hence the Cuntz--Krieger family generating $C^*(\\Lambda)$ must satisfy the relation\n\\[\n\\prod_{e\\in v\\Lambda^{e_2}}(t_v-t_et_e^*)=0\\Longleftrightarrow t_v-\\sum_{e\\in v\\Lambda^{e_2}}t_et_e^*=0.\n\\]\nThe KMS condition implies that the state $\\phi:=\\phi_2\\circ q_{\\{w\\}}$ satisfies \n\\[\n\\phi(t_et_e^*)=e^{-\\beta r_2}\\phi(t_{s(e)})=e^{-\\beta r_2}\\phi(t_w)=e^{-\\beta r_2}\\phi_2\\circ q_{\\{w\\}}(t_w)=e^{-\\beta r_2}\\phi_2(0)=0\n\\]\nfor all $e\\in v\\Lambda^{e_2}$. 
Since we know from \\eqref{valuesonvertices} that $\\phi(t_v)=\\phi_2(t_v)=y_v^{-1}$ is not zero, we deduce that \n\\[\n\\phi\\Big(t_v-\\sum_{e\\in v\\Lambda^{e_2}}t_et_e^*\\Big)=\\phi(t_v)\\not=0.\n\\] \nThus $\\phi=\\phi_2\\circ q_{\\{w\\}}$ does not factor through a state of $C^*(\\Lambda)$.\n\\end{rmk} \n\nWe now focus on the new extreme point $\\phi_{e_w}$. To compute it, we need to calculate $\\prod_{i=1}^2(1-e^{-\\beta r_i}A_i)^{-1}$. Since the matrices $A_1$ and $A_2$ commute, so do the matrices $1-e^{-\\beta r_i}A_i$, and it suffices to compute the inverse of \n\\begin{align*}\n\\prod_{i=1}^2(1-e^{-\\beta r_i}A_i)\n&=\\begin{pmatrix}\n\\Delta&-(1-d_1e^{-\\beta r_1})a_2e^{-\\beta r_2}-a_1e^{-\\beta r_1}&a_1b_2e^{-\\beta(r_1+r_2)}-b_1e^{-\\beta r_1}\\\\\n0&1&-b_2e^{-\\beta r_2}\\\\\n0&0&1\n\\end{pmatrix},\n\\end{align*}\nwhere as before we write $\\Delta=\\prod_{i=1}^2(1-d_ie^{-\\beta r_i})$. We find that the inverse is \n\\[\n\\Delta^{-1}\\begin{pmatrix}1&(1-d_1e^{-\\beta r_1})a_2e^{-\\beta r_2}+a_1e^{-\\beta r_1}&(1-d_1e^{-\\beta r_1})a_2b_2e^{-2\\beta r_2}+b_1e^{-\\beta r_1}\\\\\n0&\\Delta&\\Delta b_2e^{-\\beta r_2}\\\\\n0&0&\\Delta\n\\end{pmatrix}.\n\\]\nThus the corresponding KMS$_\\beta$ state $\\phi_{e_w}$ satisfies\n\\begin{equation}\\label{charphi3}\n\\begin{pmatrix}\n\\phi_{e_w}(t_u)\\\\ \\phi_{e_w}(t_v)\\\\ \\phi_{e_w}(t_w)\n\\end{pmatrix}=\\begin{pmatrix}\n\\Delta^{-1}\\big((1-d_1e^{-\\beta r_1})a_2b_2e^{-2\\beta r_2}+b_1e^{-\\beta r_1}\\big) y_w^{-1}\\\\\nb_2e^{-\\beta r_2}y_w^{-1}\\\\\ny_w^{-1}\n\\end{pmatrix}.\n\\end{equation}\n\n\\begin{rmk}\nAs usual, we take the opportunity for a reality check: since $t_u+t_v+t_w$ is the identity of $\\mathcal{T}C^*(\\Lambda)$ and $\\phi_{e_w}$ is a state, we must have $\\phi_{e_w}(t_u)+\\phi_{e_w}(t_v)+\\phi_{e_w}(t_w)=1$. 
But since $\\Delta^{-1}(1-d_1e^{-\\beta r_1})=(1-d_2e^{-\\beta r_2})^{-1}$, the formula \\eqref{ysubw} says that this sum is precisely $y_wy_w^{-1}=1$.\n\\end{rmk}\n\nWe summarise our findings in the following theorem.\n\n\\begin{thm}\\label{KMSabovecrit}\nSuppose that $\\Lambda$ is a $2$-graph with skeleton described in Example~\\ref{trickyex} and vertex matrices $A_1$, $A_2$. We suppose that $r\\in (0,\\infty)^2$, and consider the dynamics $\\alpha^r$ on $\\mathcal{T}C^*(\\Lambda)$. We suppose that $\\beta>0$ satisfies $\\beta r_i>\\ln\\rho(A_i)$ for $i=1,2$. We write $\\phi_1$ and $\\phi_2$ for the KMS$_\\beta$ states of $(\\mathcal{T}C^*(\\Lambda\\backslash\\{w\\}), \\alpha^{r})$ described before Remark~\\ref{reality2verts}. Then $\\phi_1\\circ q_{\\{w\\}}$ and $\\phi_2\\circ q_{\\{w\\}}$ are KMS$_\\beta$ states of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$. There is another KMS$_\\beta$ state $\\phi_3=\\phi_{e_w}$ satisfying \\eqref{charphi3}. Every KMS$_\\beta$ state of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$ is a convex combination of the three states $\\phi_1\\circ q_{\\{w\\}}$, $\\phi_2\\circ q_{\\{w\\}}$ and $\\phi_3$. None of these KMS$_\\beta$ states factors through a state of $(C^*(\\Lambda),\\alpha^r)$. \n\\end{thm}\n\nThe only thing we haven't proved is the assertion that every KMS state is a convex combination of the states that we have described. But this follows from the general results in \\cite[Theorem~6.1]{aHLRS2}, because the vectors $(y_u^{-1},0,0)$, $(0,y_v^{-1},0)$ and $(0,0,y_w^{-1})$ are the extreme points of the simplex $\\Delta_\\beta$. \n \n\\section{KMS states at the critical inverse temperature}\\label{seccritinvt}\n\nWe begin with the graphs of Example~\\ref{ex2graph1source}. 
We observe that the hypothesis of rational independence in the two main results of this section is in practice easy to verify using Proposition~A.1 of \\cite{aHKR2}: loosely, $\\ln d_1$ and $\\ln d_2$ are rationally independent unless $d_1$ and $d_2$ are different powers of the same integer (for example, $\\ln 2$ and $\\ln 3$ are rationally independent, whereas $\\ln 4$ and $\\ln 8$ are not, because $3\\ln 4=2\\ln 8$). \n\n\\begin{prop}\\label{KMS1on2.2}\nSuppose that $\\Lambda$ is a 2-graph with the skeleton described in Example~\\ref{ex2graph1source} and that $r\\in (0,\\infty)^2$ has $r_i\\geq\\ln d_i$ for both $i$, $r_i=\\ln d_i$ for at least one $i$, and $\\{r_1,r_2\\}$ are rationally independent. Consider the quotient map $q_{\\{v\\}}:\\mathcal{T}C^*(\\Lambda)\\to \\mathcal{T}C^*(\\Lambda \\backslash\\{v\\})$ from \\cite[Proposition~2.2]{aHKR2}. Then $(\\mathcal{T}C^*(\\Lambda \\backslash\\{v\\}),\\alpha^r)$ has a unique KMS$_1$ state $\\phi$, and $\\phi\\circ q_{\\{v\\}}$ is the only KMS$_1$ state of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$. \n\\end{prop}\n\n\\begin{lem}\\label{KMS1on2.2lem}\nSuppose that $\\Lambda$ and $r$ are as in Proposition~\\ref{KMS1on2.2}, and that $\\phi$ is a KMS$_1$ state of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$. Then $\\phi(t_v)=0$. \n\\end{lem}\n\n\\begin{proof}\nWe recall from Example~\\ref{ex2graph1source} that the only finite exhaustive subset of $u\\Lambda^1$ is $u\\Lambda^1$ itself, and from Example~\\ref{ex2graph1sourceFE} we then have\n\\[\n\\prod_{e\\in u\\Lambda^1}(t_u-t_et_e^*)=t_u-\\sum_{e\\in u\\Lambda^1}t_et_e^*+\\sum_{\\mu\\in u\\Lambda^{e_1+e_2}} t_\\mu t_\\mu^*. 
\n\\]\nThus positivity of $\\phi\\big(\\prod_{e\\in u\\Lambda^1}(t_u-t_et_e^*)\\big)$ implies that\n\\[\n0\\leq \\phi(t_u)-\\sum_{e\\in u\\Lambda^{e_1}}\\phi(t_et_e^*)-\\sum_{e\\in u\\Lambda^{e_2}}\\phi(t_et_e^*)+\\sum_{\\mu\\in u\\Lambda^{e_1+e_2}}\\phi(t_\\mu t_\\mu^*).\n\\]\nNow we use the KMS relation and count paths of various degrees to get\n\\begin{align*}\n0&\\leq \\phi(t_u)-\\sum_{e\\in u\\Lambda^{e_1}}e^{-r_1}\\phi(t_{s(e)})-\\sum_{e\\in u\\Lambda^{e_2}}e^{-r_2}\\phi(t_{s(e)})+\\sum_{\\mu\\in u\\Lambda^{e_1+e_2}}e^{-(r_1+r_2)}\\phi(t_{s(\\mu)})\\\\\n&=\\phi(t_u)-e^{-r_1}\\big(d_1\\phi(t_u)+a_1\\phi(t_v)\\big)-e^{-r_2}\\big(d_2\\phi(t_u)+a_2\\phi(t_v)\\big)\\\\\n&\\hspace{7cm}+e^{-(r_1+r_2)}\\big(d_1d_2\\phi(t_u)+d_1a_2\\phi(t_v)\\big)\\\\\n&=\\big(1-d_1e^{-r_1}-d_2e^{-r_2}+d_1d_2e^{-(r_1+r_2)}\\big)\\phi(t_u)\\\\\n&\\hspace{5cm}-\\big(a_1e^{-r_1}+a_2e^{-r_2}-d_1a_2e^{-(r_1+r_2)}\\big)\\phi(t_v)\\\\\n&=(1-d_1e^{-r_1})(1-d_2e^{-r_2})\\phi(t_u)-\\big(a_1e^{-r_1}+a_2e^{-r_2}-d_1a_2e^{-(r_1+r_2)}\\big)\\phi(t_v)\\\\\n&=-\\big(a_1e^{-r_1}+a_2e^{-r_2}-d_1a_2e^{-(r_1+r_2)}\\big)\\phi(t_v),\n\\end{align*}\nwhere the coefficient of $\\phi(t_u)$ vanished because for at least one of $i=1,2$ we have $1-d_ie^{-r_i}=1-d_id_i^{-1}=0$ by the hypotheses on $r_i$. If $r_1=\\ln d_1$, then we write this as\n\\[\n0\\leq-\\big(a_1e^{-r_1} +(1-d_1e^{-r_1})a_2e^{-r_2}\\big)\\phi(t_v)=-a_1e^{-r_1}\\phi(t_v),\n\\]\nand positivity of $\\phi(t_v)$ implies that $\\phi(t_v)=0$. If $r_2=\\ln d_2$, then we use the identity $d_1a_2=d_2a_1$ to rewrite it as \n\\[\n0\\leq -\\big(a_2e^{-r_2} +(1-d_2e^{-r_2})a_1e^{-r_1}\\big)\\phi(t_v)=-a_2e^{-r_2}\\phi(t_v),\n\\]\nwhich also implies that $\\phi(t_v)=0$.\n\\end{proof} \n\n\\begin{proof}[Proof of Proposition~\\ref{KMS1on2.2}]\nProposition~4.2 of \\cite{aHKR} implies that $(\\mathcal{T}C^*(\\Lambda \\backslash\\{v\\}),\\alpha^r)$ has a unique KMS$_1$ state $\\phi$. 
Since $q_{\\{v\\}}$ intertwines the two actions $\\alpha^r$, the composition $\\phi\\circ q_{\\{v\\}}$ is a KMS$_1$ state of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$. On the other hand, if $\\psi$ is a KMS$_1$ state of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$, then Lemma~\\ref{KMS1on2.2lem} implies that $\\psi(t_v)=0$. The standard argument using \\cite[Lemma~6.2]{AaHR} shows that $\\psi$ factors through the quotient by the ideal generated by $t_v$, which is precisely the kernel of $q_{\\{v\\}}$. Thus there is a KMS$_1$ state $\\theta$ of $(\\mathcal{T}C^*(\\Lambda \\backslash\\{v\\}),\\alpha^r)$ such that $\\psi=\\theta\\circ q_{\\{v\\}}$, and uniqueness of the KMS$_1$ state implies that $\\theta=\\phi$. Hence $\\psi= \\phi\\circ q_{\\{v\\}}$, as required.\n\\end{proof}\n\n\\begin{thm}\\label{KMS1on2.9}\nSuppose that $\\Lambda$ is a 2-graph with the skeleton described in Example~\\ref{trickyex} and that $r\\in (0,\\infty)^2$ has $r_i\\geq\\ln d_i$ for both $i$, $r_i=\\ln d_i$ for at least one $i$, and $\\{r_1,r_2\\}$ are rationally independent. Consider the quotient map $q_{\\{v,w\\}}$ of $\\mathcal{T}C^*(\\Lambda)$ onto $\\mathcal{T}C^*(\\Lambda \\backslash\\{v,w\\})$ discussed in \\cite[Proposition~2.2]{aHKR2}. Then $(\\mathcal{T}C^*(\\Lambda \\backslash\\{v,w\\}),\\alpha^r)$ has a unique KMS$_1$ state $\\phi$, and $\\phi\\circ q_{\\{v,w\\}}$ is the only KMS$_1$ state of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$. \n\\end{thm}\n\nWe need an analogue of Lemma~\\ref{KMS1on2.2lem} for the present situation. \n\n\\begin{lem}\\label{KMS1on2.9lem}\nUnder the hypotheses of the preceding theorem, suppose that $\\phi$ is a KMS$_1$ state of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$. Then $\\phi(t_w)=0$. \n\\end{lem}\n\n\\begin{proof}\nWe use an argument like that in the proof of Lemma~\\ref{KMS1on2.2lem} for the exhaustive subset $E=(u\\Lambda^1u)\\cup(u\\Lambda^{e_2}v)\\cup(u\\Lambda^{e_1}w)$ of $u\\Lambda^1$ from Proposition~\\ref{idminFE}. 
Then the state satisfies\n\\[\n\\phi\\big(\\textstyle{\\prod_{e\\in E}}(t_u-t_et_e^*)\\big)\\geq 0.\n\\]\nNow using Lemma~\\ref{CKatu} to write $\\textstyle{\\prod_{e\\in E}}(t_u-t_et_e^*)$ as a sum gives\n\\[\n\\phi\\Big(t_u-\\sum_{e\\in u\\Lambda^{e_1}\\{u,w\\}}t_et_e^*-\\sum_{f\\in u\\Lambda^{e_2}\\{u,v\\}}t_ft_f^*+\\sum_{\\mu\\in u\\Lambda^{e_1+e_2}\\{u,v\\}}t_\\mu t_\\mu^*\\Big)\\geq 0.\n\\]\nNow we use linearity of $\\phi$ and the KMS condition to get\n\\begin{align*}\n0&\\leq \\phi(t_u)-\\sum_{e\\in u\\Lambda^{e_1}\\{u,w\\}}\\phi(t_et_e^*)-\\sum_{f\\in u\\Lambda^{e_2}\\{u,v\\}}\\phi(t_ft_f^*)+\\sum_{\\mu\\in u\\Lambda^{e_1+e_2}\\{u,v\\}}\\phi(t_\\mu t_\\mu^*)\\\\\n&=\\phi(t_u)-\\big(d_1e^{-r_1}\\phi(t_u)+b_1e^{-r_1}\\phi(t_w)\\big)-\\big(d_2e^{-r_2}\\phi(t_u)+a_2e^{-r_2}\\phi(t_v)\\big)\\\\\n&\\hspace{5cm}+\\big(d_1d_2e^{-(r_1+r_2)}\\phi(t_u)+d_1a_2e^{-(r_1+r_2)}\\phi(t_v)\\big)\\\\\n&=(1-d_1e^{-r_1})(1-d_2e^{-r_2})\\phi(t_u)-(1-d_1e^{-r_1})a_2e^{-r_2}\\phi(t_v)-b_1e^{-r_1}\\phi(t_w).\n\\end{align*}\nSince at least one $r_i$ is $\\ln d_i$, we have $(1-d_1e^{-r_1})(1-d_2e^{-r_2})=0$, and we deduce that \n\\[\n0\\leq -(1-d_1e^{-r_1})a_2e^{-r_2}\\phi(t_v)-b_1e^{-r_1}\\phi(t_w). \n\\]\nSince $(1-d_1e^{-r_1})a_2e^{-r_2}$, $\\phi(t_v)$, and $b_1e^{-r_1}$ are all nonnegative, we must have both $(1-d_1e^{-r_1})a_2e^{-r_2}\\phi(t_v)=0$ and $b_1e^{-r_1}\\phi(t_w)=0$. In particular, we deduce that $\\phi(t_w)=0$.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{KMS1on2.9}]\nWe suppose that $\\psi$ is a KMS$_1$ state of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$. Then Lemma~\\ref{KMS1on2.9lem}\nimplies that $\\psi(t_w)=0$. The formula in \\cite[Proposition~2.1(1)]{aHKR2} implies that $\\psi$ vanishes on the ideal $I_{\\{w\\}}$ generated by $t_w$, and by \\cite[Lemma~6.2]{AaHR} $\\psi$ factors through a KMS$_1$ state $\\theta$ of the system $(\\mathcal{T}C^*(\\Lambda\\backslash\\{w\\}),\\alpha^r)$.
The $2$-graph $\\Lambda\\backslash \\{w\\}$ is the graph in Proposition~\\ref{KMS1on2.2}, and hence that Proposition implies that $\\theta =\\phi\\circ q_{\\{v\\}}$. The kernel of the composition $q_{\\{v\\}}\\circ q_{\\{w\\}}$ is the ideal generated by $\\{t_v,t_w\\}$, and a glance at the definition of the homomorphism in \\cite[Proposition~2.2(2)]{aHKR2} shows that $q_{\\{v\\}}\\circ q_{\\{w\\}}=q_{\\{v,w\\}}$. Thus \n\\[\n\\psi=\\theta\\circ q_{\\{w\\}}=(\\phi\\circ q_{\\{v\\}})\\circ q_{\\{w\\}}=\\phi\\circ q_{\\{v,w\\}}.\\qedhere\n\\]\n\\end{proof}\n\n\n\n\n\\section{Where our examples came from}\\label{secbigex}\n\nWe consider a $2$-graph $\\Lambda$ with skeleton\n\\begin{equation*}\\label{4vertsex}\n\\begin{tikzpicture}[scale=1.5]\n \\node[inner sep=0.5pt, circle] (u) at (0,0) {$u$};\n \\node[inner sep=0.5pt, circle] (v) at (2,1) {$v$};\n \\node[inner sep=0.5pt, circle] (w) at (2,-1) {$w$};\n \\node[inner sep=0.5pt, circle] (x) at (4,0) {$x$};\n\\draw[-latex, blue] (u) edge [out=195, in=270, loop, min distance=30, looseness=2.5] (u);\n\\draw[-latex, red, dashed] (u) edge [out=165, in=90, loop, min distance=30, looseness=2.5] (u);\n\\draw[-latex, blue] (v) edge [out=220, in=10] (u);\n\\draw[-latex, red, dashed] (v) edge [out=190, in=40] (u) ;\n\\draw[-latex, blue] (w) edge [out=155, in=335] (u) ;\n\\draw[-latex, red, dashed] (w) edge [out=90, in=270] (v);\n\\draw[-latex, blue] (x) edge [out=155, in=335] (v);\n\\draw[-latex, blue] (x) edge [out=220, in=10] (w);\n\\draw[-latex, red, dashed] (x) edge [out=190, in=40] (w) ;\n\\draw[-latex, blue] (x) edge [out=345, in=270, loop, min distance=30, looseness=2.5] (x);\n\\draw[-latex, red, dashed] (x) edge [out=15, in=90, loop, min distance=30, looseness=2.5] (x);\n\\node at (-.7, 0.3) {\\color{black} $d_2$};\n\\node at (-.7,-0.3) {\\color{black} $d_1$};\n\\node at (.8, 0.8) {\\color{black} $a_2$};\n\\node at (1.35,0.3) 
{\\color{black} $a_1$};\n\\node at (2.2, 0) {\\color{black} $b_2$};\n\\node at (.95,-0.7) {\\color{black} $b_1$};\n\\node at (3.3,-0.8) {\\color{black} $c_1$};\n\\node at (2.8,-0.2) {\\color{black} $c_2$};\n\\node at (3.05,0.7) {\\color{black} $g_1$};\n\\node at (4.7, 0.3) {\\color{black} $f_2$};\n\\node at (4.7,-0.3) {\\color{black} $f_1$};\n\\end{tikzpicture}\n\\end{equation*}\nIn these graphs there are two nontrivial strongly connected components $\\{u\\}$ and $\\{x\\}$, and the bridges $\\mu\\in u\\Lambda x$ all have $|d(\\mu)|>1$. The graphs in Examples~\\ref{ex2graph1source} and \\ref{trickyex} are then the graphs $\\Lambda\\backslash\\{w,x\\}$ and $\\Lambda\\backslash\\{x\\}$, respectively. We assume that all the integers $d_i,a_i,b_i,c_i,g_i,f_i$ are nonzero. \n\nThe vertex matrices of the graph $\\Lambda$ are then\n\\begin{equation}\nA_1=\\begin{pmatrix}d_1&a_1&b_1&0\\\\0&0&0&g_1\\\\0&0&0&c_1\\\\0&0&0&f_1\\end{pmatrix}\\qquad\\text{and}\\qquad \nA_2=\\begin{pmatrix}d_2&a_2&0&0\\\\0&0&b_2&0\\\\0&0&0&c_2\\\\0&0&0&f_2\\end{pmatrix}.\n\\end{equation}\nThus we have $\\rho(A_i)=\\max\\{d_i,f_i\\}$ for $i=1,2$. As in the last section, we consider a dynamics $\\alpha^r:\\mathbb{R}\\to \\operatorname{Aut} \\mathcal{T}C^*(\\Lambda)$ given by $r\\in (0,\\infty)^2$ such that $r_i\\geq \\ln\\rho(A_i)$ for $i=1,2$, and $r_i=\\ln\\rho(A_i)$ for at least one $i$. We are interested in the KMS$_1$ states of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$.\n\nIf $\\rho(A_i)=d_i$ for some $i$, then the strongly connected component $\\{u\\}$ is $i$-critical in the sense of \\cite[\\S3]{FaHR}, and Proposition~3.1 and Corollary~3.2 in \\cite{FaHR} imply that all the KMS$_1$ states factor through states of $\\mathcal{T}C^*(\\Lambda\\backslash\\{v,w,x\\})=\\mathcal{T}C^*(u\\Lambda u)$. Proposition~4.2 of \\cite{aHKR2} then implies that $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$ has a unique KMS$_1$ state.\n\nSo we suppose from now on that $\\rho(A_i)=f_i>d_i$ for $i=1,2$. 
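As a reality check on the formula for the spectral radii, the spectra of these block-triangular matrices can be computed numerically. The multiplicities below are hypothetical placeholders (any nonzero values with the same zero pattern work); this sketch is an illustration, not part of the argument.

```python
import numpy as np

# Hypothetical nonzero multiplicities (placeholders only).
d1, a1, b1, g1, c1, f1 = 2, 6, 1, 1, 12, 8
d2, a2, b2, c2, f2 = 6, 18, 1, 18, 12

A1 = np.array([[d1, a1, b1, 0], [0, 0, 0, g1], [0, 0, 0, c1], [0, 0, 0, f1]], float)
A2 = np.array([[d2, a2, 0, 0], [0, 0, b2, 0], [0, 0, 0, c2], [0, 0, 0, f2]], float)

# Each A_i is block upper-triangular, so its spectrum is {d_i, 0, 0, f_i}
# and hence rho(A_i) = max{d_i, f_i}.
rho1 = np.max(np.abs(np.linalg.eigvals(A1)))
rho2 = np.max(np.abs(np.linalg.eigvals(A2)))
print(rho1, rho2)
```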
We now want to run through the construction of \\cite[\\S4--5]{FaHR}. The set $H$ in \\cite[Proposition~4.1]{FaHR} is empty, so the block decompositions of the matrices $A_i$ look like\n\\[\nA_i=\\begin{pmatrix}E_i&B_i\\\\0&f_i\\end{pmatrix}\n\\]\nwhere $E_i$ is $3\\times 3$ and $B_i$ is $3\\times 1$. We can choose to work with either $i=1$ or $i=2$, and $i=2$ is marginally simpler because $B_2=\\begin{pmatrix}0&0&c_2\\end{pmatrix}^T$. The unimodular Perron--Frobenius eigenvector of the matrix $(f_2)$ is the number $1$, and hence the vector $y$ in \\cite[Proposition~4.1]{FaHR} is\n\\begin{align}\ny&=\\big(\\rho(A_{\\{x\\},2})1-E_2\\big)^{-1}B_2\\notag\\\\\n&=\\begin{pmatrix}f_2-d_2&-a_2&0\\\\0&f_2&-b_2\\\\0&0&f_2\\end{pmatrix}^{-1}\\begin{pmatrix}0\\\\0\\\\c_2\\end{pmatrix}\\notag\\\\\n&=\\frac{1}{(f_2-d_2)f_2^2}\\begin{pmatrix}\nf_2^2&a_2f_2&a_2b_2\\\\0&(f_2-d_2)f_2&(f_2-d_2)b_2\\\\0&0&(f_2-d_2)f_2\\end{pmatrix}\\begin{pmatrix}0\\\\0\\\\c_2\\end{pmatrix}\\notag\\\\\n&=\\frac{1}{(f_2-d_2)f_2^2}\\begin{pmatrix}\na_2b_2c_2\\\\(f_2-d_2)b_2c_2\\\\\n(f_2-d_2)f_2c_2\\end{pmatrix}=\\begin{pmatrix}\n(f_2-d_2)^{-1}f_2^{-2}a_2b_2c_2\\\\f_2^{-2}b_2c_2\\\\\nf_2^{-1}c_2\\end{pmatrix}\\label{expB_2}.\n\\end{align}\n\n\\begin{rmk}\nAs we said above, we should have been able to work with $i=1$ and get the same answer (see Equation (4.3) in \\cite[Proposition~4.1]{FaHR}). This gives us another opportunity for a reality check. But the answer we got the second time looked quite different, and in sorting out the mess we learned something interesting. 
The second answer, in the form we first got it, was\n\\begin{equation}\\label{expB_1}\ny=\\frac{1}{(f_1-d_1)f_1^2}\\begin{pmatrix}\na_1f_1g_1+f_1b_1c_1\\\\(f_1-d_1)f_1g_1\\\\\n(f_1-d_1)f_1c_1\\end{pmatrix}=\\begin{pmatrix}\n(f_1-d_1)^{-1}f_1^{-1}(a_1g_1+b_1c_1)\\\\f_1^{-1}g_1\\\\\nf_1^{-1}c_1\\end{pmatrix}.\n\\end{equation}\nEquality of the third entries in \\eqref{expB_2} and \\eqref{expB_1} is equivalent to $f_1c_2=f_2c_1$, which is one of the relations imposed by the requirement that $\\Lambda$ is a $2$-graph. Similar reasoning works for the second entries. But when we removed the inverses by cross-multiplying, the top entry in the second calculation became\n\\begin{equation}\\label{badform}\n(f_2-d_2)f_2^2(a_1g_1+b_1c_1)=f_2^3a_1g_1+f_2^3b_1c_1-d_2f_2^2a_1g_1-d_2f_2^2b_1c_1.\n\\end{equation}\nIn the other calculation, the top entry has just two summands, which are again products of 5 terms. After staring at them for a bit, we realised that these products have meaning: for example, $f_2^3a_1g_1$ is the number $a_1g_1f_2^3$ of paths in $u\\Lambda^{2e_1+3e_2}x$ counted using their BBRRR factorisations. Similarly, $d_2f_2^2b_1c_1=d_2b_1c_1f_2^{2}$ counts the same set using the RBBRR factorisations. Now looking at the skeleton confirms that\n\\[\na_1g_1f_2^3=a_1(g_1f_2)f_2^2=a_1(b_2c_1)f_2^2=(a_1b_2)c_1f_2^2=d_2b_1c_1f_2^2,\n\\]\nand the first and last terms on the right of \\eqref{badform} cancel. Similar considerations using the $1$-skeleton match up the remaining terms in the top entries in \\eqref{expB_2} and \\eqref{expB_1}.\n\\end{rmk}\n\nWe now use the results of \\cite[\\S5]{FaHR} to describe all the KMS$_1$ states on $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$ when $\\Lambda$ has skeleton described at the start of the section. First we apply \\cite[Proposition~5.1]{FaHR} with $z=\\big(\\begin{smallmatrix}y\\\\1\\end{smallmatrix}\\big)$. 
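The path-counting cancellation discussed in the remark above is a purely combinatorial identity, and can be checked numerically. The multiplicities below are hypothetical placeholders, chosen only to satisfy the two skeleton relations $g_1f_2=b_2c_1$ and $a_1b_2=d_2b_1$ that the identity uses.

```python
# Hypothetical multiplicities satisfying the factorisation relations
# g1*f2 == b2*c1 and a1*b2 == d2*b1 used in the displayed identity.
d2, a1, b1, b2, c1, g1, f2 = 6, 6, 1, 1, 12, 1, 12
assert g1 * f2 == b2 * c1 and a1 * b2 == d2 * b1

# Both sides count u Lambda^{2e1+3e2} x, via the BBRRR and RBBRR
# factorisations respectively.
lhs = a1 * g1 * f2**3
rhs = d2 * b1 * c1 * f2**2
print(lhs, rhs)  # equal
```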
This gives a KMS$_1$ state $\\psi$ of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$\nsuch that\n\\[\n\\psi(t_\\mu t_\\nu^*)=\\delta_{\\mu,\\nu}e^{-r\\cdot d(\\mu)}\\|z\\|_1^{-1}z_{s(\\mu)}.\n\\]\nIt factors through a KMS$_1$ state of $(C^*(\\Lambda),\\alpha^r)$ if and only if $r_i=\\ln\\rho(A_i)=\\ln f_i$ for both $i=1$ and $i=2$.\n\nNow Theorem~5.2 of \\cite{FaHR} implies that every KMS$_1$ state of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$ is a convex combination of $\\psi$ and a state $\\phi\\circ q_{\\{x\\}}$ lifted from a KMS$_1$ state of $\\mathcal{T}C^*(\\Lambda\\backslash \\{x\\})$. Since $\\Lambda\\backslash\\{x\\}$ is the graph considered in \\S\\ref{sectrickyex} and we are assuming that $d_i<f_i$ for both $i$, the results of \\S\\ref{sectrickyex} describe the KMS$_1$ states of $(\\mathcal{T}C^*(\\Lambda\\backslash \\{x\\}),\\alpha^r)$.\n\n\\section{A specific example}\n\nTo illustrate these results, we now consider a $2$-graph $\\Lambda$ with the skeleton of \\S\\ref{secbigex} in which $\\rho(A_1)=f_1=8$ and $\\rho(A_2)=f_2=12$, and a dynamics $\\alpha^r$ satisfying\n\\begin{equation}\\label{specifyr}\nr_1=\\ln 8\\quad\\text{and}\\quad r_2>\\ln 12.\n\\end{equation}\nFor $\\beta>1$, \\cite[Theorem~6.1]{aHLRS2} describes a $3$-dimensional simplex of KMS$_\\beta$ states of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$.\n\nNow we consider the KMS$_1$ states, and aim to apply the results of \\cite{FaHR}. The common unimodular Perron--Frobenius eigenvector of $A_{\\{x\\},i}$ is the $1$-vector $1$, and as in \\cite[Proposition~4.1]{FaHR}, this extends to a common eigenvector $z:=(y,1)$ of the matrices $A_i$ with eigenvalues $\\rho(A_{\\{x\\},1})=8$ and $\\rho(A_{\\{x\\},2})=12$. (Since \\cite[Proposition~4.1]{FaHR} is linear-algebraic, it applies verbatim here.) 
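As a further reality check, the claim that $z=(y,1)$ is a common eigenvector can be verified numerically. The multiplicities below are hypothetical — one choice consistent with $\\rho(A_1)=8$, $\\rho(A_2)=12$ and the factorisation relations of the skeleton — and $y$ is built from the closed form in \\eqref{expB_2}.

```python
import numpy as np

# Hypothetical multiplicities (not stated explicitly in the text) that are
# consistent with rho(A_1)=8, rho(A_2)=12 and the skeleton relations.
d1, a1, b1, g1, c1, f1 = 2, 6, 1, 1, 12, 8
d2, a2, b2, c2, f2 = 6, 18, 1, 18, 12

A1 = np.array([[d1, a1, b1, 0], [0, 0, 0, g1], [0, 0, 0, c1], [0, 0, 0, f1]], float)
A2 = np.array([[d2, a2, 0, 0], [0, 0, b2, 0], [0, 0, 0, c2], [0, 0, 0, f2]], float)

# y from the closed form, extended by the Perron-Frobenius eigenvector 1 of (f_2).
y = np.array([a2 * b2 * c2 / ((f2 - d2) * f2**2), b2 * c2 / f2**2, c2 / f2])
z = np.append(y, 1.0)

print(np.allclose(A1 @ z, f1 * z), np.allclose(A2 @ z, f2 * z))  # True True
```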
Now Proposition~5.1 of \\cite{FaHR} gives a KMS$_1$ state $\\psi$ of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$ with \n\\[\n\\begin{pmatrix}\\psi(q_u)\\\\ \\psi(q_v)\\\\ \\psi(q_w)\\\\ \\psi(q_x)\\end{pmatrix}\n=\\|z\\|_1^{-1}z=\\frac{1}{24}\\begin{pmatrix}3\\\\1\\\\12\\\\8\\end{pmatrix}.\n\\]\nSince the only critical component of $\\Lambda$ for the dynamics $\\alpha^r$ is $\\{x\\}$, Theorem~6.1 of~\\cite{FaHR} implies that every KMS$_1$ state of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$ is a convex combination of $\\psi$ and a state $\\phi\\circ q_{\\{x\\}}$ lifted from a KMS$_1$ state $\\phi$ of $(\\mathcal{T}C^*(\\Lambda\\backslash\\{x\\}),\\alpha^r)$.\n\nThe graph $\\Lambda\\backslash\\{x\\}=\\Lambda_{\\{u,v,w\\}}$ is one of those we studied in \\S\\ref{sectrickyex}. Since\n\\[\nr_1=\\ln 8>\\ln\\rho(A_{\\{u,v,w\\},1})=\\ln 2\\quad\\text{and}\\quad r_2>\\ln 12>\\ln\\rho(A_{\\{u,v,w\\},2})=\\ln 6,\n\\]\n$\\beta=1$ is in the range for which Theorem~\\ref{KMSabovecrit} gives a concrete description of the KMS$_1$ states of $(\\mathcal{T}C^*(\\Lambda\\backslash\\{x\\}),\\alpha^r)$. So the original system has a $3$-dimensional simplex of KMS$_1$ states with extreme points $\\psi$, $\\phi_1\\circ q_{\\{w,x\\}}$, $\\phi_2\\circ q_{\\{w,x\\}}$ and $\\phi_3\\circ q_{\\{x\\}}$.\n\nWith the next lemma, we can continue below the inverse temperature $\\beta=1$. \n\\begin{lem}\nSuppose that $\\beta<1$ and $\\phi$ is a KMS$_\\beta$ state of the system $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$ considered above. Then $\\phi$ factors through the quotient map $q_{\\{x\\}}$.\n\\end{lem}\n\n\\begin{proof}\nWe aim to prove that $\\phi(t_x)=0$. We certainly have $\\phi(t_x)\\geq 0$. 
The relation (T4) with $n=e_1$ implies that \n\\begin{align*}\n\\phi(t_x)&\\geq\\sum_{e\\in x\\Lambda^{e_1}}\\phi(t_et_e^*)=\\sum_{e\\in x\\Lambda^{e_1}}e^{-\\beta r_1}\\phi(t_e^*t_e)\\\\\n&=e^{-\\beta (\\ln 8)}|x\\Lambda^{e_1}|\\phi(t_x)=8^{-\\beta}\\cdot 8\\,\\phi(t_x)=8^{1-\\beta}\\phi(t_x),\n\\end{align*}\nwhich we can rewrite as $(1-8^{1-\\beta})\\phi(t_x)\\geq 0$. But $\\beta<1$ implies that $8^{1-\\beta}>1$, and this is only compatible with $\\phi(t_x)\\geq 0$ if $\\phi(t_x)=0$. \nNow it follows from \\cite[Lemma~6.2]{AaHR} that $\\phi$ factors through $q_{\\{x\\}}$. \n\\end{proof}\n\nSo we are interested in the KMS$_\\beta$ states of $(\\mathcal{T}C^*(\\Lambda_{\\{u,v,w\\}}),\\alpha^r)$ for $\\beta<1$. Recall from the start of the section that we are assuming that $r_1=\\ln 8$ and $r_2>\\ln 12$. The next critical level is\n\\begin{equation}\\label{2ndcrit}\n\\beta_c:=\\max\\big\\{(\\ln 8)^{-1}\\ln2,r_2^{-1}\\ln 6\\big\\}=\\max\\big\\{3^{-1}, r_2^{-1}\\ln 6\\big\\}.\n\\end{equation}\n\nFor $\\beta$ satisfying $\\beta_c<\\beta<1$, we deduce from Theorem~\\ref{KMSabovecrit} that the KMS$_\\beta$ states of $(\\mathcal{T}C^*(\\Lambda_{\\{u,v,w\\}}),\\alpha^r)$ form a $2$-dimensional simplex; Theorem~\\ref{KMSabovecrit} also provides explicit formulas for the extreme points. Composing with $q_{\\{x\\}}$ gives a two-dimensional simplex of KMS$_\\beta$ states of $(\\mathcal{T}C^*(\\Lambda),\\alpha^r)$.\n\n\\begin{rmk}\nStrictly speaking, to apply Theorem~\\ref{KMSabovecrit} we need to scale the dynamics to ensure that the critical inverse temperature is $1$ rather than $\\beta_c$. Lemma~2.1 of \\cite{aHKR} gives the formulas which achieve this. We will assume that this can be done mentally (or at least ``in principle'').\n\\end{rmk}\n\n\nFor $\\beta=\\beta_c$, at least one of $r_i\\beta\\geq \\ln\\rho(A_{\\{u\\},i})$ becomes an equality. 
Provided $\\{r_1,r_2\\}$ are rationally independent, Theorem~\\ref{KMS1on2.9} implies that $(\\mathcal{T}C^*(\\Lambda_{\\{u,v,w\\}}),\\alpha^r)$ has a unique KMS$_{\\beta_c}$ state which factors through a state of $(\\mathcal{T}C^*(\\Lambda_{\\{u\\}}),\\alpha^r)$. It follows from \\cite[Proposition~6.1]{aHKR} that this state factors through a state of $(C^*(\\Lambda_{\\{u\\}}),\\alpha^r)$ if and only if we have $r_i\\beta=\\ln\\rho(A_{\\{u\\},i})$ for both $i$. For\n$\\beta<\\beta_c$, at least one of the inequalities $r_i\\beta\\geq \\ln\\rho(A_{\\{u,v,w\\},i})$ fails, and it follows from \\cite[Corollary~4.3]{aHLRS2} that there are no KMS$_\\beta$ states on any of these algebras. \n\n\\begin{rmk}\nFor a dynamics satisfying \\eqref{specifyr}, the constraint $r_2>\\ln 12$ implies that $r_2^{-1}\\ln 6< (\\ln 12)^{-1}\\ln 6$, and which term is the bigger in \\eqref{2ndcrit} will depend on $r_2$. A calculator tells us that\n\\[\n(\\ln 12)^{-1}\\ln 6=\\frac{\\ln 6}{\\ln 2+\\ln 6}\\sim 0.72.\n\\]\nThus for small $r_2$, we have $3^{-1}<r_2^{-1}\\ln 6$, while for large $r_2$ we have $3^{-1}>r_2^{-1}\\ln 6$. So $\\beta_c$ could be either value.\n\\end{rmk}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nOur concordance cosmological model describes a Universe dominated by dark matter and dark energy, in which structures form hierarchically. Within this Lambda-Cold-Dark-Matter ($\\Lambda$CDM) framework, N-body simulations provide clear predictions for the structure and evolution of dark matter haloes \\citep[e.g.][]{duffy08,dutton14}, and a confrontation with observations provides an important test of our $\\Lambda$CDM paradigm. A key open question is how galaxies form in this dark-matter-dominated Universe. The baryonic physics involved may also play a significant role in altering the total mass profiles \\citep[e.g.][]{vandaalen11,velliscig14} and therefore complicate a direct comparison with predictions from dark matter simulations. 
However, as hydrodynamical simulations continue to advance \\citep[e.g.][]{schaye10,cen14,genel14,schaye15}, they provide testable predictions of the distribution of baryonic tracers, such as gas and stars. \n\nAn important open question in this context is how well stellar mass traces the underlying dark matter distribution, and if the distribution of galaxies is consistent with what we expect for the sub-haloes in $\\Lambda$CDM \\citep[e.g.][]{boylankolchin11}. On the scale of our Milky Way, recent hydrodynamical simulations are able to alleviate the tension between the abundance of sub-haloes in N-body simulations, and the observed distribution of satellites, by incorporating baryonic processes such as supernova feedback \\citep[e.g.][]{geen13,sawala13}. More massive haloes, such as galaxy clusters, have correspondingly more massive sub-haloes, which is expected to make them more efficient at forming stars, less susceptible to feedback processes, and relatively easy to identify through observations. \n\nMeasuring the radial number and stellar mass density distribution of satellite galaxies in clusters has been the focus of several studies. These distributions have been observed to be well described by Navarro-Frenk-White (NFW) \\citep{NFW} profiles for group-sized haloes and clusters, from the local Universe to $z\\sim 1$ \\citep{carlberg97b,lin04,muzzin07,giodini09,budzynski12,vdB14}. Each observational study, however, is based on a different data set and analysis and presents results in a different form. \\citet{lin04} and \\citet{budzynski12} studied the number density of galaxies, but owing to interactions between galaxies and, in particular, the mass-dependence of the dynamical friction timescale, the number density distribution of galaxies can be different for galaxies with different luminosities or stellar masses. Their results are therefore dependent on the depth of their data set. 
\\citet{giodini09} measured the number density distribution of generally lower mass systems from the COSMOS field. \\citet{carlberg97b} and \\citet{muzzin07} measured the luminosity density distribution in the $r$-band and K-band, respectively, for clusters from the Canadian Network for Observational Cosmology Survey \\citep[CNOC;][]{yee96}. The advantage of this measurement is that, provided the measurements extend significantly below the characteristic luminosity $L^{*}$, it is almost insensitive to the precise luminosity cut. That is because the total luminosity in each radial bin is dominated by galaxies around $L^{*}$. However, especially in the $r$-band, it is not straightforward to relate the luminosity distribution to a stellar mass distribution due to differences in mass-to-light-ratio between different galaxy types, and because the distributions of these types vary spatially. Inconsistencies between all these studies prevent us from drawing firm conclusions on comparisons between them. \n\nIn this paper we present a comprehensive measurement of the radial galaxy number density and stellar mass density from a sample of 60 massive clusters in the local Universe ($0.04<z<0.26$).\n\n\\begin{figure*}\n\\caption{\\textit{Left panel}: Photometric versus spectroscopic redshifts for galaxies in the cluster fields, estimated using only the $ugri$-filters. Outliers, for which $\\frac{\\Delta z}{1+z} > 0.15$, are marked in red. The outlier fraction is less than $3\\%$, the scatter (in $\\frac{\\Delta z}{1+z}$) of the remaining objects is $\\sigma_z = 0.035$. \\textit{Right panel}: Same for the COSMOS field, also using only the $ugri$-filters. The outlier fraction and scatter are slightly larger as a result of deeper spectroscopic data (in particular at higher redshift where the $ugri$ filters lose their constraining power).}\n\\label{fig:speczphotz}\n\\end{figure*} \n\nSince the distance modulus is a strong function of redshift in this regime, a small uncertainty in photometric redshift will result in a relatively large uncertainty in luminosity (or stellar mass) of a galaxy. 
For example, a simple test shows that, for a hypothetical cluster at $z=0.10$, a photo-$z$ bias of +0.005 (-0.005) would result in an inferred luminosity bias that is +11\\% (-10\\%). For a scatter in the estimated photo-$z$s of $\\sigma_{z}=0.035$ (and no bias), we find that the inferred total luminosity in this cluster would be biased high by 19\\%. Given that the cluster redshift is well-known, we therefore assign the distance modulus of the cluster to every galaxy in the cluster fields. In order to properly subtract contaminating fore- and background galaxies, we also assign this distance modulus to each galaxy in the reference COSMOS field (after applying the redshift cut). We then use the SED-fitting code FAST \\citep{kriek09} to estimate the stellar-mass-to-light ratio (M\/L) (in the $r$-band) for each galaxy. For this we again assume the same redshift and distance modulus (corresponding to the cluster) for each galaxy. Then in each of the radial bins (which are scaled by the size $R_{200}$ of each cluster) we measure the area (in angular size) that is covered with four-band photometry, but is not masked by bright stars, and estimate the expected number of sources in this area (which is also different for each cluster through their angular diameter distance) in the COSMOS field. We estimate the total stellar mass and corresponding error for those sources by performing a series of 10,000 Monte-Carlo realisations of the background, by randomly drawing sources from the COSMOS catalogue. We subtract the estimated field values from the raw number counts to obtain the cluster stellar mass density profile. \n\nIt is important to distinguish and account for the different sources of statistical uncertainties that enter our analysis. In the stacked radial profiles, we bootstrap the galaxies in each bin to estimate a statistical error on each data point. We show these error bars in the plots, after including the Poisson uncertainty of the background galaxy counts. 
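The size of the photo-$z$ luminosity bias quoted above can be reproduced with a short numerical integration. The sketch below assumes a flat $\\Lambda$CDM cosmology with $\\Omega_m=0.3$ (an assumption made here only for illustration; the distance ratios are independent of $H_0$), and is not the pipeline used in the paper.

```python
import math

def E(z, om=0.3):
    """Dimensionless Hubble parameter for flat LCDM."""
    return math.sqrt(om * (1 + z)**3 + (1 - om))

def lum_dist(z, steps=20000):
    """Luminosity distance in units of the Hubble distance c/H0 (midpoint rule)."""
    dz = z / steps
    chi = sum(dz / E((i + 0.5) * dz) for i in range(steps))
    return (1 + z) * chi

z = 0.10
for dz in (+0.005, -0.005):
    # A fixed observed flux implies L ~ D_L^2, so a photo-z offset of dz
    # biases the inferred luminosity by the squared ratio of distances.
    bias = (lum_dist(z + dz) / lum_dist(z))**2 - 1
    print(f"photo-z bias {dz:+.3f}: luminosity bias {bias:+.1%}")
```

This recovers a bias of roughly $+11\%$ ($-10\%$) for a photo-$z$ offset of $+0.005$ ($-0.005$) at $z=0.10$.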
We use these errors when fitting profiles, since they are independent between bins, and hence provide a goodness-of-fit test. However, since galaxy clusters are complex systems which are individually not necessarily described by the same profile, we also provide an uncertainty due to sample-to-sample variance. For example, if we had studied 60 different clusters drawn from the same parent sample (that is, X-ray selected clusters at similar masses and redshifts as the current sample), the resulting stack would have been different. By performing 100 bootstraps (drawing with replacement) of the cluster sample we show that, when stacking 60 clusters with deep photometric data, this sample-to-sample uncertainty dominates over the former statistical error, especially for bins that contain many galaxies and thus have a small statistical error. To estimate this sample-to-sample uncertainty on the best-fitting parameters that describe the stellar mass distribution of the stacked cluster, we perform the fitting procedure on each of the 100 realisations, and combine the range of different best-fitting parameters into an uncertainty. We do not explicitly account for uncertainties on $R_{200}$, but we checked that these have an effect on the data points that is comparable in size to the Poisson uncertainty on the galaxies, and is thus negligible compared to the sample-to-sample uncertainty.\n\nIn addition to these statistical uncertainties, and the Poisson noise term in the reference field estimated with the Monte-Carlo realisations, cosmic variance \\citep[e.g.][]{somerville04} also contributes to the error in the background. Both the field component that is included in the cluster raw number counts, and the reference field sample from COSMOS, which we subtract from the raw counts, contain this type of uncertainty. 
However, when several tens of independent cluster fields are stacked, the dominant cosmic variance error arises from the COSMOS reference catalogue. Our analysis, in which we assign the same distance modulus to all galaxies with $z_{\\mathrm{phot}} < 0.3$, complicates an estimate of this cosmic variance, since the basic recipes by e.g. \\citet{trenti08,moster11} cannot be applied. We do, however, make an empirical estimate based on catalogues from the 4 spatially independent CFHT Legacy Survey Deep fields \\citep{erben09,hildebrandt09a}, which each cover an un-masked area of about 0.8 deg$^2$. After applying the same photometric redshift selection, and masking bright stars, we study the difference between the 4 fields for the following galaxy selections. Assuming a distance modulus corresponding to a redshift of $z=0.15$, the differences in number density of galaxies with stellar mass $10^{9}<\\mathrm{M_\\star \/ M_{\\odot} < 10^{10}}$ are 14\\% among the 4 fields, while the differences for galaxies with stellar mass $\\mathrm{M_\\star > 10^{10}\\, M_{\\odot}}$ are about 16\\%. When we sum the $r$-band fluxes of all galaxies with $z_{\\mathrm{phot}} < 0.3$, as a proxy for the total stellar mass, we find differences between the 4 fields of about 23\\% in the total $r$-band flux. Although these fields are a factor of $\\sim$2 smaller than the COSMOS field, we will use these differences as a conservative estimate of the cosmic variance error. A measurement of the intrinsic scatter in the profiles of individual clusters requires a more sophisticated investigation of the cosmic variance in annuli centred on individual cluster fields, and is beyond the scope of this paper. \n\nWe perform a consistency check between the COSMOS field and field galaxies that are probed far away from the cluster centres in the low-$z$ cluster data. 
Although the COSMOS data are significantly deeper, we find no systematic difference in the galaxy stellar mass function between the field probed around the cluster and reference COSMOS field in the regime we are interested in (stellar masses $\\mathrm{M_\\star > 10^{9}\\, M_{\\odot}}$).\n\nTo investigate the spatial distribution of individual galaxy types, we locate the red sequence in the ($g-r$)-colour versus $r$-band total magnitude in each of the clusters to distinguish between red and blue galaxies. We find that the slope, and particularly the intercept, of the red sequence vary smoothly with redshift. The dividing line that we use to separate the galaxy types lies just below the red sequence, and is described by $(g-r)_{\\mathrm{div}} = [0.475+2.459\\cdot z]-[0.036+0.024\\cdot z] \\times (r_{\\mathrm{tot}}-18.0)$, where $z$ is the cluster redshift, and $r_{\\mathrm{tot}}$ is the total $r$-band apparent magnitude. As expected, the intercept becomes redder with redshift, whereas the slope becomes steeper. Using the location of spectroscopically confirmed cluster members in colour-magnitude space we fine-tune the intercept and slope on a cluster-by-cluster basis by hand. This leads to small adjustments with a median absolute difference of 0.017 in the intercept, and a median absolute difference of 0.0016 in the slope, compared to the general equation. In the following we refer to red galaxies as galaxies above the dividing line (which thus lie on the red sequence), and blue galaxies as anything bluer than this dividing line. For each of the clusters we again subtract the field statistically for each of the populations by applying the same colour cut to the COSMOS catalogue. \n\n\\subsection{Comparison with spectroscopic data}\nIn the method described above, we subtract the galaxies in the fore- and background statistically based only on the photometric data. 
However, as discussed in Sect.~\\ref{sec:dataoverview}, we can use a substantial number of spectroscopic redshifts in the cluster fields from the literature. In this second approach we measure the stellar mass contained in spectroscopically confirmed cluster members to provide a lower limit to the full stellar mass distribution.\n\nSince the spectroscopic data set is obtained after combining several different surveys, the way the spectroscopic targets have been selected is not easily reconstructed. Fig.~\\ref{fig:specz_compl} shows the spectroscopic completeness for all galaxies with a photometric redshift $z<0.3$ as a function of stellar mass (assuming the same distance modulus as the cluster redshift), and for different radial bins. For stellar masses $\\mathrm{M_\\star > 10^{11}\\, M_{\\odot}}$, the completeness is high ($>70\\%$) in each of the radial bins. Since these objects constitute most of the total stellar mass distribution (see vdB14 (Fig.~2) for this argument), we can get a fairly complete census of stellar mass by just considering the galaxies for which we have a spectroscopic redshift. We estimate the fraction of the stellar mass that is in spectroscopically confirmed members, for each of the four radial bins. For this we assume a stellar mass distribution following a \\citet{schechter76} function with characteristic mass $M^{*}=10^{11}\\,\\rm{M_{\\odot}}$, and low-mass slope $\\alpha = -1.3$. These choices are motivated by the low-$z$ bin of the field stellar mass function as measured by \\citet{muzzin13b}. When we multiply this distribution with the completeness curves as shown in Fig.~\\ref{fig:specz_compl}, we find a spectroscopic completeness for the total stellar mass in satellite galaxies of 59\\%, 57\\%, 52\\%, and 43\\% for the four radial bins, respectively. 
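The quoted mass-completeness values fold the measured completeness curves into this Schechter function, and the curves themselves cannot be reproduced here; but the underlying mass-weighted Schechter integral is easy to sketch. With $\\alpha=-1.3$ and $M^{*}=10^{11}\\,\\rm{M_{\\odot}}$, the fraction of the total stellar mass contributed by galaxies above a mass limit is a regularised incomplete gamma function (the integral converges at the low-mass end because $\\alpha+2>0$). A minimal sketch:

```python
from scipy.special import gammainc  # regularised lower incomplete gamma P(a, x)

alpha, m_star = -1.3, 1e11  # Schechter parameters assumed in the text

def mass_fraction_above(m_lim):
    """Fraction of the total stellar mass in galaxies above m_lim, for a
    Schechter mass function integrated down to zero mass."""
    return 1.0 - gammainc(alpha + 2, m_lim / m_star)

print(f"{mass_fraction_above(1e10):.0%}")  # ~79%: massive galaxies dominate the mass
```

This is why the high spectroscopic completeness at large stellar masses translates into total-mass completenesses of around 50--60\%, even though many low-mass members lack spectra.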
\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{specz_compl_paper}}\n\\caption{Spectroscopic completeness for sources with a photometric redshift $z < 0.3$ as a function of stellar mass (assuming the same distance modulus as the cluster redshift). The four lines show different radial bins. For targets of a given stellar mass, the spectroscopic completeness is slightly higher for those that are closer to the cluster centres. For each of the radial bins, the completeness is larger than 70\\% for stellar masses $\\mathrm{M_\\star > 10^{11}\\, M_{\\odot}}$.}\n\\label{fig:specz_compl}\n\\end{figure}\n\n\\subsection{The presence of the BCG}\nIn order to measure the number density and stellar mass density profiles of the satellites close to the cluster centres, we subtract the primary component of the BCGs' flux-profiles with {\\tt GALFIT} \\citep{galfit} prior to source extraction and satellite photometry. We find that this step has a significant impact on the measured number density of faint satellites ($\\mathrm{M_\\star < 10^{10}\\, M_{\\odot}}$) near the cluster centres, which was also mentioned by \\citet{budzynski12}. In the next section we mask the inner two bins (for which $R < 0.02 \\cdot R_{200}$) given that their values change by more than two times their statistical error. We find that the effect on the number density distribution of more massive satellites ($\\mathrm{M_\\star > 10^{10}\\, M_{\\odot}}$) is negligible. The effect is largest in the first logarithmic bin (for which $R \\approx 0.015 \\cdot R_{200}$), but even here the results change by less than the size of the statistical error. The effects on the stellar mass density distribution are also smaller than the statistical error. The reason for this is that the stellar mass density distribution is primarily composed of more massive satellites which are relatively unobscured by the BCG. 
We therefore conclude that, although we remove the BCG profile prior to satellite detection and photometry, doing so has a negligible effect on the measured stellar mass density profile. \n\n\\section{Results in the context of the NFW profile}\\label{sec:results}\nIn this section we present the galaxy number and stellar mass density distributions of the 60 clusters we study, based on the two independent analyses described in Sect.~\\ref{sec:analysis}. We discuss these results by considering the NFW \\citep{NFW} fitting function, since that is the parameterisation generally used in previous studies. We can therefore compare the results in this context with measurements in the literature, both at low and high redshift. \n\n\\subsection{Galaxy number density profile}\n\\begin{figure*}[t]\n\\resizebox{\\hsize}{!}{\\includegraphics{profile_number2panel_new_dark}}\n\\caption{Galaxy number density distributions for masses $10^{9}<\\mathrm{M_\\star \/ M_{\\odot} < 10^{10}}$ (left panel), and $\\mathrm{M_\\star > 10^{10}\\, M_{\\odot}}$ (right panel) for the ensemble cluster at $z\\sim 0.15$. Black points with the best-fitting projected NFW (dashed) and gNFW (solid) functions are our best estimates for the cluster number counts. The inner two points in the left panel are masked due to obscuration from the BCG, which is more severe for low-mass galaxies, and are excluded from the fitting. Purple points indicate the number of spectroscopically confirmed cluster members.}\n\\label{fig:number_profile_ensemble}\n\\end{figure*}\nIgnoring baryonic physics, the galaxy number density distribution in cluster haloes can be compared to the distribution of sub-haloes in N-body simulations as a test of $\\Lambda$CDM. Due to mergers and interactions between galaxies, and in particular the mass-dependence of the dynamical friction timescale, the number density distribution of galaxies may be different for galaxies with different stellar masses. 
\n\nFigure~\\ref{fig:number_profile_ensemble} shows the projected galaxy number density distribution for galaxies with stellar masses $10^{9}<\\mathrm{M_\\star \/ M_{\\odot} < 10^{10}}$ (left panel), and $\\mathrm{M_\\star > 10^{10}\\, M_{\\odot}}$ (right panel) in the ensemble cluster. Before stacking the 60 clusters, their radial distances to the BCGs are scaled by $R_{200}$, but the BCGs themselves are not included in the data points. Error bars reflect bootstrapped errors arising from both the cluster galaxy counts and the field value that is subtracted. The shaded area around the data points shows the systematic effect due to cosmic variance in the background, which we estimated in Sect.~\\ref{sec:analysisbgsub}. The number of spectroscopically confirmed cluster members follow a similar distribution but have a different normalisation due to spectroscopic incompleteness. \n\nWe fit projected NFW profiles to the data points, and show those corresponding to the minimum $\\chi^2$ values with the dashed lines in Fig.~\\ref{fig:number_profile_ensemble}. For the lower-mass galaxies ($10^{9}<\\mathrm{M_\\star \/ M_{\\odot} < 10^{10}}$), we find an overall goodness-of-fit of $\\chi^2\/d.o.f.=1.19$, with a concentration of $c=1.85^{+0.18+0.09}_{-0.12-0.09}$. Both a sample-to-sample variance (first) and systematic (second) error are quoted. For the higher-mass galaxies ($\\mathrm{M_\\star > 10^{10}\\, M_{\\odot}}$), the overall goodness-of-fit is $\\chi^2\/d.o.f.=3.00$ with a concentration of $c=2.31^{+0.22+0.32}_{-0.18-0.29}$. In both stellar mass bins, we find that the best-fitting NFW function gives a reasonable description of the data for most of the cluster ($R \\gtrsim 0.10 \\cdot R_{200}$), but that the centre has an excess in the number of galaxies compared to the NFW profile. 
In the next section we provide a more detailed investigation of this excess; in this section we continue working with the standard NFW profile in order to compare with previous work.\n\nThe number density and luminosity density profiles of group and cluster sized haloes in the literature have generally been measured on smaller samples, and do not focus on the smallest radial scales around the BCGs. On the scales these studies have focussed on, NFW profiles have been shown to be an adequate fit to the data over the whole radial range. We therefore compare the concentration parameters fitted by the NFW profile with the values presented in the literature. \n\n\\citet{lin04} studied the average number density profile of a sample of 93 clusters at $0.01 10^{10}\\, M_{\\odot}}$).\n\n\\citet{budzynski12} measured the radial distribution of satellite galaxies in groups and clusters in the range $0.15 10^{10}\\, M_{\\odot}}$}\\\\\nSample&$c_{\\mathrm{gNFW}}$&$\\alpha_{\\mathrm{gNFW}}$&$\\frac{\\chi^2}{d.o.f.}$&$c_{\\mathrm{gNFW}}$&$\\alpha_{\\mathrm{gNFW}}$&$\\frac{\\chi^2}{d.o.f.}$&$c_{\\mathrm{gNFW}}$&$\\alpha_{\\mathrm{gNFW}}$&$\\frac{\\chi^2}{d.o.f.}$\\\\\n\\hline\nAll (NFW) &$2.03^{+0.20+0.60}_{-0.20-0.40}$&1 (fixed)&2.51&$1.85^{+0.18+0.09}_{-0.12-0.09}$&1 (fixed)&1.19&$2.31^{+0.22+0.32}_{-0.18-0.29}$&1 (fixed)&3.00\\\\\nAll&$0.64^{+0.49+0.73}_{-0.21-0.33}$&$1.63^{+0.06+0.09}_{-0.25-0.17}$&1.24&$1.41^{+0.42+0.18}_{-0.38-0.12}$&$1.20^{+0.18+0.03}_{-0.22-0.04}$&1.04&$0.72^{+0.31+0.31}_{-0.19-0.28}$&$1.64^{+0.06+0.08}_{-0.16-0.06}$&1.19\\\\\n$z<0.114$&$0.36^{+0.76+0.14}_{-0.09-0.06}$&$1.66^{+0.06+0.00}_{-0.35-0.01}$&1.06&$0.70^{+0.52+0.06}_{-0.28-0.04}$&$1.38^{+0.13+0.01}_{-0.17-0.01}$&1.09&$0.71^{+0.61+0.16}_{-0.29-0.09}$&$1.63^{+0.09+0.01}_{-0.22-0.02}$&0.83\\\\\n$z\\geq0.114$ 
&$0.97^{+1.57+1.67}_{-0.33-0.62}$&$1.50^{+0.11+0.20}_{-0.52-0.41}$&1.71&$1.47^{+1.07+0.38}_{-0.53-0.21}$&$1.28^{+0.20+0.05}_{-0.40-0.11}$&1.06&$1.09^{+0.85+0.69}_{-0.55-0.46}$&$1.50^{+0.18+0.14}_{-0.32-0.17}$&1.97\\\\\n$M_{200} < 8.6 \\cdot 10^{14}\\,\\mathrm{M_{\\odot}}$&$0.31^{+0.72+0.31}_{-0.18-0.20}$&$1.68^{+0.09+0.07}_{-0.25-0.08}$&1.06&$0.64^{+0.49+0.06}_{-0.21-0.10}$&$1.45^{+0.18+0.03}_{-0.22-0.01}$&0.64&$0.19^{+0.74+0.26}_{-0.09-0.10}$&$1.84^{+0.05+0.02}_{-0.31-0.08}$&1.63\\\\\n$M_{200}\\geq 8.6 \\cdot 10^{14}\\,\\mathrm{M_{\\odot}}$&$1.30^{+0.67+1.20}_{-0.43-0.57}$&$1.42^{+0.10+0.14}_{-0.22-0.24}$&1.67&$2.10^{+1.37+0.29}_{-0.73-0.21}$&$1.00^{+0.30+0.05}_{-0.50-0.08}$&2.08&$1.14^{+0.43+0.47}_{-0.47-0.31}$&$1.48^{+0.12+0.07}_{-0.18-0.10}$&0.84\\\\\n$M_\\mathrm{{\\star,BCG}} < 9.1 \\cdot 10^{11}\\,\\mathrm{M_{\\odot}}$&$1.09^{+1.48+1.49}_{-0.42-0.64}$&$1.40^{+0.14+0.20}_{-0.61-0.34}$&2.17&$2.30^{+1.17+0.40}_{-0.63-0.28}$&$1.01^{+0.18+0.06}_{-0.32-0.10}$&1.31&$0.86^{+0.71+0.60}_{-0.49-0.40}$&$1.57^{+0.12+0.12}_{-0.28-0.15}$&2.12\\\\\n$M_\\mathrm{{\\star,BCG}}\\geq 9.1 \\cdot 10^{11}\\,\\mathrm{M_{\\odot}}$&$0.53^{+0.48+0.46}_{-0.13-0.22}$&$1.61^{+0.07+0.07}_{-0.16-0.12}$&0.82&$0.83^{+0.48+0.06}_{-0.22-0.07}$&$1.34^{+0.21+0.02}_{-0.09-0.01}$&1.04&$0.66^{+0.55+0.21}_{-0.15-0.19}$&$1.64^{+0.11+0.05}_{-0.19-0.04}$&0.56\\\\\n$n_{1\\mathrm{Mpc}, M_{\\star}>10^{10}\\,\\mathrm{M_{\\odot}}} < 87$&$0.89^{+0.60+0.70}_{-0.40-0.33}$&$1.30^{+0.15+0.09}_{-0.35-0.17}$&0.87&$0.91^{+0.48+0.12}_{-0.42-0.12}$&$1.35^{+0.10+0.04}_{-0.30-0.03}$&0.70&$0.54^{+0.45+0.33}_{-0.35-0.18}$&$1.52^{+0.13+0.06}_{-0.27-0.10}$&0.80\\\\\n$n_{1\\mathrm{Mpc}, M_{\\star}>10^{10}\\,\\mathrm{M_{\\odot}}} \\geq 87$&$0.92^{+0.70+0.97}_{-0.40-0.47}$&$1.56^{+0.12+0.13}_{-0.18-0.22}$&1.51&$1.96^{+0.96+0.26}_{-0.84-0.21}$&$1.05^{+0.23+0.06}_{-0.37-0.06}$&1.75&$1.02^{+0.90+0.60}_{-0.20-0.30}$&$1.62^{+0.06+0.06}_{-0.24-0.14}$&1.18\\\\\n$K_0 < 
\\mathrm{70\\,keV\\,cm^2}$&$1.64^{+1.21+0.76}_{-0.59-0.43}$&$1.01^{+0.23+0.11}_{-0.57-0.19}$&1.46&$1.70^{+0.75+0.18}_{-0.55-0.12}$&$0.88^{+0.19+0.03}_{-0.29-0.05}$&1.90&$0.89^{+0.86+0.34}_{-0.27-0.21}$&$1.50^{+0.14+0.05}_{-0.36-0.09}$&0.71\\\\\n$K_0 \\geq \\mathrm{70\\,keV\\,cm^2}$&$0.57^{+1.06+0.76}_{-0.34-0.37}$&$1.72^{+0.11+0.10}_{-0.29-0.17}$&1.06&$2.19^{+1.34+0.33}_{-1.06-0.20}$&$0.98^{+0.35+0.05}_{-0.25-0.08}$&2.00&$0.80^{+1.23+0.52}_{-0.37-0.28}$&$1.67^{+0.16+0.06}_{-0.24-0.12}$&0.97\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\begin{table*\n\\caption{Similar to Table~\\ref{tab:nfwpars}, but showing the parameters corresponding to the best-fitting Einasto profiles.}\n\\label{tab:einastopars}\n\\begin{center}\n\\begin{tabular}{l | c c c | c c c | c c c}\n\\hline\n\\hline\n&\\multicolumn{3}{c}{Stellar Mass}&\\multicolumn{3}{c}{Number density}&\\multicolumn{3}{c}{Number density}\\\\\n&\\multicolumn{3}{c}{density}&\\multicolumn{3}{c}{$10^{9}<\\mathrm{M_\\star \/ M_{\\odot} < 10^{10}}$}&\\multicolumn{3}{c}{$\\mathrm{M_\\star > 10^{10}\\, M_{\\odot}}$}\\\\\nSample&$c_{\\mathrm{EIN}}$&$\\alpha_{\\mathrm{EIN}}$&$\\frac{\\chi^2}{d.o.f.}$&$c_{\\mathrm{EIN}}$&$\\alpha_{\\mathrm{EIN}}$&$\\frac{\\chi^2}{d.o.f.}$&$c_{\\mathrm{EIN}}$&$\\alpha_{\\mathrm{EIN}}$&$\\frac{\\chi^2}{d.o.f.}$\\\\\n\\hline\nAll&$1.96^{+0.27+0.70}_{-0.43-0.75}$&$0.11^{+0.07+0.04}_{-0.03-0.06}$&1.40&$1.73^{+0.20+0.11}_{-0.10-0.06}$&$0.21^{+0.07+0.01}_{-0.03-0.01}$&1.03&$2.31^{+0.32+0.39}_{-0.28-0.43}$&$0.10^{+0.02+0.02}_{-0.02-0.03}$&1.16\\\\\n$z<0.114$&$1.03^{+0.79+0.54}_{-0.35-0.36}$&$0.07^{+0.14+0.01}_{-0.02-0.02}$&1.23&$1.14^{+0.38+0.09}_{-0.52-0.08}$&$0.16^{+0.05+0.01}_{-0.05-0.00}$&1.17&$2.20^{+0.32+0.31}_{-0.58-0.26}$&$0.11^{+0.10+0.01}_{-0.02-0.01}$&0.97\\\\\n$z\\geq0.114$ 
&$2.07^{+0.37+0.61}_{-0.33-0.78}$&$0.14^{+0.04+0.06}_{-0.06-0.08}$&1.73&$2.04^{+0.30+0.11}_{-0.30-0.12}$&$0.18^{+0.02+0.01}_{-0.10-0.02}$&1.09&$2.37^{+0.47+0.35}_{-0.43-0.40}$&$0.13^{+0.05+0.03}_{-0.05-0.03}$&1.89\\\\\n$M_{200} < 8.6 \\cdot 10^{14}\\,\\mathrm{M_{\\odot}}$&$0.89^{+0.64+0.77}_{-0.37-0.65}$&$0.07^{+0.16+0.03}_{-0.02-0.03}$&1.00&$1.20^{+0.33+0.10}_{-0.27-0.12}$&$0.14^{+0.09+0.01}_{-0.02-0.01}$&0.69&$1.52^{+0.81+0.83}_{-0.44-0.65}$&$0.05^{+0.08+0.01}_{-0.01-0.03}$&1.47\\\\\n$M_{200}\\geq 8.6 \\cdot 10^{14}\\,\\mathrm{M_{\\odot}}$&$2.36^{+0.21+0.38}_{-0.49-0.53}$&$0.17^{+0.03+0.04}_{-0.07-0.08}$&1.77&$1.94^{+0.13+0.08}_{-0.27-0.07}$&$0.26^{+0.04+0.02}_{-0.06-0.01}$&1.74&$2.32^{+0.35+0.31}_{-0.45-0.30}$&$0.15^{+0.05+0.02}_{-0.05-0.03}$&0.94\\\\\n$M_\\mathrm{{\\star,BCG}} < 9.1 \\cdot 10^{11}\\,\\mathrm{M_{\\odot}}$&$1.90^{+0.37+0.60}_{-0.43-0.71}$&$0.16^{+0.13+0.06}_{-0.07-0.09}$&2.23&$2.16^{+0.21+0.09}_{-0.29-0.12}$&$0.24^{+0.05+0.01}_{-0.05-0.02}$&1.22&$2.20^{+0.37+0.45}_{-0.53-0.46}$&$0.12^{+0.07+0.03}_{-0.03-0.03}$&2.11\\\\\n$M_\\mathrm{{\\star,BCG}}\\geq 9.1 \\cdot 10^{11}\\,\\mathrm{M_{\\odot}}$&$1.49^{+0.42+0.59}_{-0.38-0.47}$&$0.10^{+0.15+0.02}_{-0.01-0.04}$&0.85&$1.27^{+0.24+0.09}_{-0.26-0.05}$&$0.17^{+0.08+0.00}_{-0.02-0.01}$&1.13&$2.11^{+0.40+0.36}_{-0.40-0.35}$&$0.10^{+0.15+0.01}_{-0.01-0.02}$&0.52\\\\\n$n_{1\\mathrm{Mpc}, M_{\\star}>10^{10}\\,\\mathrm{M_{\\odot}}} < 87$&$1.32^{+0.11+0.46}_{-0.43-0.45}$&$0.19^{+0.06+0.05}_{-0.14-0.08}$&0.98&$1.45^{+0.14+0.10}_{-0.46-0.11}$&$0.17^{+0.02+0.01}_{-0.12-0.01}$&0.73&$1.16^{+0.18+0.40}_{-0.37-0.37}$&$0.12^{+0.03+0.03}_{-0.07-0.03}$&0.82\\\\\n$n_{1\\mathrm{Mpc}, M_{\\star}>10^{10}\\,\\mathrm{M_{\\odot}}} \\geq 87$&$2.32^{+0.50+0.56}_{-0.40-0.70}$&$0.12^{+0.06+0.04}_{-0.04-0.06}$&1.52&$1.96^{+0.16+0.08}_{-0.34-0.08}$&$0.24^{+0.04+0.01}_{-0.06-0.02}$&1.60&$3.06^{+0.46+0.33}_{-0.44-0.35}$&$0.12^{+0.06+0.02}_{-0.04-0.02}$&1.11\\\\\n$K_0 < 
\\mathrm{70\\,keV\\,cm^2}$&$1.60^{+0.15+0.22}_{-0.35-0.22}$&$0.30^{+0.14+0.05}_{-0.06-0.09}$&1.38&$1.46^{+0.09+0.07}_{-0.31-0.08}$&$0.31^{+0.13+0.01}_{-0.05-0.01}$&1.77&$1.89^{+0.25+0.25}_{-0.44-0.27}$&$0.15^{+0.09+0.02}_{-0.03-0.03}$&0.77\\\\\n$K_0 \\geq \\mathrm{70\\,keV\\,cm^2}$&$2.38^{+0.55+0.90}_{-0.75-1.02}$&$0.08^{+0.15+0.03}_{-0.02-0.05}$&1.01&$2.00^{+0.23+0.08}_{-0.27-0.08}$&$0.25^{+0.08+0.01}_{-0.02-0.02}$&1.75&$2.84^{+0.49+0.39}_{-0.61-0.51}$&$0.10^{+0.13+0.02}_{-0.01-0.03}$&0.93\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{c_evolution_new_paper}}\n\\caption{\\textit{Black points}: Stellar mass density concentration for the clusters used in this study, split in two redshift bins. \\textit{Purple}: K-band luminosity density concentration in CNOC1 from \\citet{muzzin07}. \\textit{Red}: Stellar mass density concentration in GCLASS from vdB14. The horizontal bars indicate the redshift range for each sample. \\textit{Black lines}: The NFW concentration in the sample of relaxed haloes from \\citet{duffy08} as a function of redshift. \\textit{Dotted and dashed}: Haloes of a given mass as a function of redshift. \\textit{Solid}: NFW concentration of a halo that is evolving in mass, with scatter given by the shaded region.}\n\\label{fig:c_evolution_new}\n\\end{figure}\n\n\n\\section{A closer investigation of the cluster cores}\\label{sec:results2}\nIn the previous section we discussed the number density and stellar mass density profiles of the ensemble cluster, and found that these are well-described by NFW profiles, except for the inner regions ($R \\lesssim 0.10 \\cdot R_{200}$). The central parts show a significant and substantial excess, both in galaxy numbers and their stellar mass density distribution. 
Per cluster this excess within the inner regions is on average $\sim$1 galaxy with $10^{9}<\mathrm{M_\star \/ M_{\odot} < 10^{10}}$, and $\sim$2 galaxies with $\mathrm{M_\star > 10^{10}\, M_{\odot}}$, and a total stellar mass excess in satellite galaxies of $\sim10^{11}\, \mathrm{M_{\odot}}$ per cluster, compared to the NFW profiles. \n\nThe purple points in Figs.~\ref{fig:number_profile_ensemble} \& \ref{fig:profile_ensemble} show the numbers of spectroscopically confirmed member galaxies. Although these data points are offset with respect to the full photometric measurement as a result of spec-$z$ incompleteness, they are consistent with the central excess of galaxy numbers and stellar mass density compared to the standard NFW profile. \n\nTo study the central parts of our cluster-sized haloes further, we revisit the fits to the number density and stellar mass density distributions by allowing the inner slope of the density profiles to vary. We hence fit so-called generalised NFW (gNFW) profiles \citep[e.g.][]{zhao96,wyithe01} to the data points. These profiles are described by\n\begin{equation}\n\rho(r)=\frac{\rho_{0}}{ \left( \frac{r}{R_{s}} \right)^{\alpha} \left(1+\frac{r}{R_{s}}\right)^{3-\alpha} },\n\end{equation}\nwhere the concentration is defined, as in the case of the standard NFW profile, to be $c_{\mathrm{gNFW}}=\frac{R_{200}}{R_{s}}$. The inner logarithmic slope equals $-\alpha$, so that $\alpha =1$ corresponds to the standard NFW profile. We project the generalised NFW profile numerically along the line-of-sight.\n\nFor the number density profiles we find that, for galaxies with $10^{9} < \mathrm{M_\star \/ M_{\odot}} < 10^{10}$, a profile with $\alpha=1.20^{+0.18+0.03}_{-0.22-0.04}$ and $c=1.41^{+0.42+0.18}_{-0.38-0.12}$ gives a good description of the data ($\chi^2\/d.o.f.=1.04$). 
For the more massive galaxies ($\mathrm{M_\star > 10^{10}\, M_{\odot}}$), the best-fitting parameters are $\alpha=1.64^{+0.04+0.08}_{-0.16-0.06}$ and $c=0.72^{+0.31+0.31}_{-0.19-0.28}$, with goodness-of-fit $\chi^2\/d.o.f.=1.19$. Again, both a sample-to-sample variance (first) and systematic (due to cosmic variance in the background, second) error are quoted. The significantly steeper inner slope we find for the high mass sample compared to the lower mass sample indicates that the more massive galaxies are more strongly concentrated in the cluster ensemble. The effect of dynamical friction, which is more efficient for massive galaxies, may be the cause of this mass segregation.\n\nFor the stellar mass density we also find a better fit overall, with $\chi^2\/d.o.f.=1.24$ instead of 2.51. The best-fitting profile is given by $\alpha=1.63^{+0.05+0.09}_{-0.25-0.17}$ and $c=0.64^{+0.49+0.73}_{-0.21-0.33}$. The shape of the stellar mass density profile closely agrees with the number density profile for the massive galaxies, which is expected since these dominate in total stellar mass over the less massive galaxies. In Figs.~\ref{fig:number_profile_ensemble} \& \ref{fig:profile_ensemble}, the gNFW profiles are shown by the solid lines. \n\nFor reference we also consider Einasto \citep{einasto65} profiles, which are described by \n\begin{equation}\n\rho(r)= \rho_{0}\,\exp \left( -\frac{2}{\alpha} \left[ \left(\frac{r}{R_{s}}\right)^{\alpha} -1 \right]\right), \n\end{equation}\nand have been found to provide good fits to the dark matter density distribution of massive haloes in N-body simulations \citep[e.g.][]{dutton14,klypin14}. We project these profiles numerically along the line-of-sight, and find that they describe the data about as well as the gNFW profile. 
Parameters $\\alpha$ and $c$ ($\\equiv\\frac{R_{200}}{R{s}}$, as before), and reduced $\\chi^2$ values are presented in Table~\\ref{tab:einastopars}.\n\nA significant part of the total stellar mass distribution is the stellar mass contained in the BCG and ICL. Although a full accounting of the ICL component is beyond the scope of the current paper, we assess their contribution by measuring the distribution of the stellar mass including the BCG. To measure this total, we directly sum all measured flux around the BCG locations of the original $r$-band images (i.e. without first removing the BCGs' main profiles with {\\tt GALFIT}). To estimate the stellar mass distribution, we multiply this with the stellar-mass-to-light ratio (M\/L) of the BCG under the assumption that there is no M\/L-gradient. We mask the locations of bright stars, sum the flux in annuli that are logarithmically spaced, and statistically subtract the field by considering a large annulus far away from the cluster centres. The background-subtracted central stellar mass density profile is shown as a thick dotted line in Fig.~\\ref{fig:profile_ensemble}, with thinner dotted lines marking the 68\\% uncertainty region as estimated from cluster-bootstrapping. At a projected radius of $R \\sim 0.02 \\cdot R_{200}$, the contribution of stellar mass in satellites is roughly similar to that of the BCG component. As a good consistency check we note that the dotted line, which by definition also includes stellar mass in satellites, and the black data points have consistent values in the outermost region where they overlap (at $R \\sim 0.08 \\cdot R_{200}$). By construction, part of the ICL is also included in this total profile. However, because of the way the background is subtracted from our images, the larger scale component of the ICL is not taken into account. 
A more sophisticated data reduction is required to measure this component down to sufficiently low surface brightnesses \citep[e.g.][]{NGVS12}, and we leave this to a future study.\n\n\begin{table}\n\caption{The excess of stellar mass in satellites in each of the subsamples, with respect to the overall best-fitting NFW profile. Both the relative contribution and the absolute excess are quoted with respect to the stellar mass contained in that NFW profile in the radial regime $R < 0.10 \cdot R_{200}$.}\n\label{tab:excess}\n\begin{center}\n\begin{tabular}{l | c c c }\n\hline\n\hline\n& \multicolumn{3}{c}{Stellar mass density}\\\n & & \multicolumn{2}{c}{Central excess}\\\nCluster & $c_{\mathrm{NFW}}$ & Relative & log($\Delta M_\star \/ M_{\odot}$)\\\n\hline\nAll&$2.03^{+0.20+0.60}_{-0.20-0.40}$&$0.25^{+0.06}_{-0.07}$&$10.95^{+0.09}_{-0.15}$\\\n$z<0.114$&$1.56^{+0.46+0.26}_{-0.14-0.18}$&$0.55^{+0.14}_{-0.15}$&$11.20^{+0.10}_{-0.14}$\\\n$z\geq0.114$&$2.14^{+0.50+0.78}_{-0.20-0.47}$&$0.18^{+0.08}_{-0.09}$&$10.83^{+0.16}_{-0.28}$\\\n$M_{200} < 8.6 \cdot 10^{14}\,\mathrm{M_{\odot}}$&$1.59^{+0.14+0.39}_{-0.36-0.26}$&$0.39^{+0.12}_{-0.12}$&$10.93^{+0.12}_{-0.16}$\\\n$M_{200}\geq 8.6 \cdot 10^{14}\,\mathrm{M_{\odot}}$&$2.38^{+0.29+0.67}_{-0.41-0.46}$&$0.15^{+0.08}_{-0.07}$&$10.87^{+0.19}_{-0.26}$\\\n$M_\mathrm{{\star,BCG}} < 9.1 \cdot 10^{11}\,\mathrm{M_{\odot}}$&$2.04^{+0.43+0.72}_{-0.37-0.44}$&$0.33^{+0.10}_{-0.11}$&$11.02^{+0.11}_{-0.18}$\\\n$M_\mathrm{{\star,BCG}}\geq 9.1 \cdot 10^{11}\,\mathrm{M_{\odot}}$&$1.80^{+0.21+0.38}_{-0.19-0.28}$&$0.28^{+0.09}_{-0.08}$&$11.00^{+0.12}_{-0.16}$\\\n$n_{1\mathrm{Mpc}, M_{\star}>10^{10}\,\mathrm{M_{\odot}}} < 87$&$1.43^{+0.06+0.44}_{-0.24-0.27}$&$0.29^{+0.12}_{-0.14}$&$10.75^{+0.15}_{-0.31}$\\\n$n_{1\mathrm{Mpc}, M_{\star}>10^{10}\,\mathrm{M_{\odot}}} \geq 
87$&$2.38^{+0.44+0.59}_{-0.26-0.38}$&$0.23^{+0.07}_{-0.07}$&$11.07^{+0.12}_{-0.16}$\\\\\n$K_0 < \\mathrm{70\\,keV\\,cm^2}$&$1.65^{+0.20+0.34}_{-0.40-0.23}$&$0.35^{+0.18}_{-0.17}$&$11.05^{+0.18}_{-0.28}$\\\\\n$K_0 \\geq \\mathrm{70\\,keV\\,cm^2}$&$2.45^{+0.58+0.64}_{-0.42-0.42}$&$0.27^{+0.09}_{-0.08}$&$11.09^{+0.12}_{-0.16}$\\\\\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\\subsection{Dependence on cluster physical properties}\\label{sec:samplesplits}\nGiven our sample of 60 clusters over a range of redshifts and halo masses, we investigate if the excess of stellar mass in satellite galaxies is related to any specific cluster (or BCG) property. The properties we consider are cluster redshift and cluster halo mass (see Table~\\ref{tab:overview}), BCG stellar mass (based on estimated M\/L and integrated $r$-band luminosity with {\\tt GALFIT}), and cluster richness\\footnote{Defined here as the number of background-subtracted cluster galaxies with $M_{\\star}>10^{10}\\,\\mathrm{M_{\\odot}}$ within a projected radius of 1 Mpc from the BCG.}. If we split the sample on the medians of these properties, we measure for each subset a significant central excess in the stellar mass distribution with respect to the best-fitting NFW profile, see Table~\\ref{tab:excess}. This excess is $\\sim10^{11}\\, \\mathrm{M_{\\odot}}$ per cluster, comprising about 30\\% of the stellar mass contained in the NFW profile for $R < 0.10 \\cdot R_{200}$. \n\nA thermodynamical property that is measured for 37 of the clusters in our sample is the central entropy \\citep[$K_{0}$, presented in][]{bildfellthesis,mahdavi13}, which is defined as the deprojected entropy profile evaluated at a radius of 20 kpc from the cluster centre. This observable is related to the dynamical state of the cluster \\citep{pratt10}, and correlates (by definition) with the inner slope of the gas density distribution. 
We therefore investigate if the inner part of the stellar mass distribution also depends on this property. \citet{mahdavi13} found a hint of bimodality in the distribution of the central entropy, on either side of $K_0 = \mathrm{70\,keV\,cm^2}$. Following that work, we split our sample into clusters with central entropies lower (13 clusters) and higher (24 clusters) than this value. Again, the stellar mass excess is significant in both subsamples (Table~\ref{tab:excess}). \n\nIn each subsample, the gNFW profile provides a better fit to the data than a standard NFW profile. Note that the gNFW profile parameters $\alpha$ and $c$ are degenerate, but none of the splits results in a best-fitting profile that is significantly ($>2\sigma$) different from the overall stack (Table~\ref{tab:nfwpars}). Note also that the splits themselves are not independent of each other, due to relations between, for example, richness and halo mass \citep[e.g.][]{andreon10b}, and a slight covariance between mass and redshift in this sample (Fig.~\ref{fig:evolution_thesis}). Although the stellar mass excess with respect to the NFW profile is thus significant in each subsample, we cannot draw firm conclusions regarding the dependence of the stellar mass profile shape on cluster properties with the current data set.\n\n\section{Discussion - The evolving stellar mass distribution}\label{sec:discussion2}\nIn this study we found that the NFW profile provides a good description of the stellar mass density distribution of satellites in clusters in the local ($0.04 10^{10}\, M_{\odot}}$ at $z\sim 0.15$, since that represents the approximate stellar mass depth of GCLASS. \n\nThis purely observational comparison suggests that, although the total stellar mass content of these clusters has grown substantially since $z\sim 1$, the stellar mass density in the cluster cores (R $\lesssim$ 0.4 Mpc) is already present at $z=1$. 
Moreover, there seems to be an excess of stellar mass in this regime at $z\\sim 1$ compared to $z\\sim 0.15$. Note, however, that in this comparison of the stellar mass in satellite galaxies, we do not take account of the ICL component, and excluded the BCGs from the story. The build-up of stellar mass in these components may explain the observed evolution. Massive galaxies close to the BCG are expected to merge with the central galaxy on a relatively short time-scale, and play a dominant role in the build-up of stellar mass in the BCG \\citep[e.g.][]{burke13,lidman13}. The stellar mass contained between the two curves in Fig.~\\ref{fig:insideout_Mpc} (left panel, orange region), is on average $\\sim 7\\times 10^{11}\\,\\mathrm{M_{\\odot}}$ per cluster. Given that the BCGs in the GCLASS clusters have typical stellar masses of $M_{\\star,\\mathrm{BCG}} \\simeq 3\\times 10^{11}\\,M_{\\odot}$ (vdB14, Table~2), and that the median stellar mass contained in the BCGs in the sample studied here is $\\mathrm{M_\\star \\simeq 9\\times 10^{11}\\, M_{\\odot}}$, it is an interesting coincidence that this excess of stellar mass in satellites at $z\\sim1$ roughly equals the difference in BCG stellar mass between the two samples. The development of an ICL component may also contribute to an evolution in the observed stellar mass density profile. The dotted line in Fig.~\\ref{fig:insideout_Mpc} (left panel) was already shown in Fig.~\\ref{fig:profile_ensemble}, and is consistent with the picture that stellar mass reassembles itself in the direction of the central galaxy, becoming part of the BCGs' extended light profiles.\n\nA sufficient amount of stellar mass that is required for BCG growth thus seems to already be present in the centres of the clusters at $z\\sim 1$, although it is still part of the satellite galaxy population. 
However, while these satellites seem to drive most of the BCG mass growth, it is interesting why they do not get replenished with new infalling satellites. In the more massive haloes at low-$z$, the process of dynamical friction, which is supposed to effectively reduce the orbital energy of massive infalling satellites, seems to work less efficiently. This might be related to the observational result that the massive end of the stellar mass function hardly evolves over cosmic time \\citep[e.g.][]{muzzin13b}, whereas the haloes we study grow in mass by a factor $\\sim 3$. Compared to $z>1$, the time that it takes for a massive galaxy to lose enough orbital energy to arrive at the centre is longer in the local universe.\n\nOn the other hand, substantial growth of the stellar mass content in the cluster outskirts ($R \\gtrsim 1.0\\, \\mathrm{Mpc}$) is required to match the low-$z$ descendants of the GCLASS systems. Under the assumption that galaxies populate sub-haloes and that these systems are accreted onto the clusters since $z=1$, it is expected that dark-matter haloes also accrete matter onto the outskirts. This effect is indeed observed in N-body simulations, if these simulations are compared on the same physical scale, see Fig.~\\ref{fig:insideout_Mpc} (right panel). Recently, \\citet{dutton14} and particularly \\citet{klypin14} have shown that Einasto profiles provide a better description of the dark matter density distribution of massive haloes in N-body simulations. As a comparison, we therefore compare the results of \\citet{duffy08}, which are based on a WMAP5 cosmology and an NFW parameterisation, with those from \\citet{dutton14}, which are based on a \\textit{Planck} cosmology and an Einasto parameterisation. Both are normalised to have the same $M_{200}$. In both cases, the profiles of these simulated haloes grow at all radii, although their growth is smaller in the centre. 
The evolution of the observed stellar mass distribution thus differs significantly from that of the dark matter in N-body simulations (cf. Fig.~\ref{fig:c_evolution_new}), independent of the parameterisation used. \n\nThe observations strongly suggest a scenario in which the stellar mass component grows in an inside-out fashion, indicating that the presence of baryons plays an important role in this assembly process. The observed evolution of the stellar mass distribution is thus a stringent test for existing and future hydrodynamical simulations \citep[e.g.][]{schaye10,cen14,vogelsberger14,genel14,schaye15}, as it is of importance both in a cosmological context and in our quest to understand the formation and evolution of galaxies in our Universe.\n\n\subsection{Radial distribution of different galaxy types}\n\begin{figure}\n\resizebox{\hsize}{!}{\includegraphics{galtypes_allclusters}}\n\caption{\textit{Black:} The stellar mass density profile of Fig.~\ref{fig:profile_ensemble}, separated between red-sequence \textit{(red)} and bluer \textit{(blue)} galaxies. The best-fitting gNFW profiles to the red and blue sub-samples are also shown here.}\n\label{fig:galtypes}\n\end{figure}\nSince blue galaxies are thought of as a dynamically younger component of the galaxy cluster population than red galaxies, a distinction between galaxy types can yield further insight into the way clusters accrete their satellite population. We make a distinction between red and blue galaxies using as a simple criterion the cluster red sequence in the ($g-r$)-colour. Since the colour of the red sequence is redshift-dependent, we identify the red-sequence population in each of the individual clusters, and stack the resulting stellar mass distributions in Fig.~\ref{fig:galtypes}. 
The best-fitting gNFW profile to each of the galaxy types is plotted.\n\nFigure~\\ref{fig:galtypes} shows that galaxies on the red sequence completely dominate the stellar mass distribution in the cluster centres, and are dominant over bluer galaxies (in terms of stellar mass density) up to at least $R_{200}$. Bluer galaxies are still significantly over-dense compared to the field over the entire radial range that is shown (the field values are subtracted), but note the shallow inner slope of the gNFW profile that describes the blue galaxies ($\\alpha_{\\mathrm{gNFW}}=0.64^{+0.26}_{-0.36}$). \n\nIn Fig.~\\ref{fig:insideout_Mpc} (left panel), we show the blue galaxy population of this sample, and also the blue galaxy population in the GCLASS clusters (vdB14, their Fig.~7). This shows that there is a dramatic evolution in the relative radial distribution of blue galaxies compared to the overall galaxy population. The blue fraction of cluster galaxies is lower overall at low-$z$ compared to high-$z$, but the difference is most prominently visible near the cluster cores. In the highly-simplified picture that blue galaxies fall in, and quench by some environmental process with a delay of several Gyr \\citep[e.g.][]{wetzel2013,muzzin14}, we can use their locations in the cluster to study where the stellar mass is most recently accreted. Even after more than a dynamical time-scale, which is typically 1 Gyr, the blue galaxies are mostly on the outskirts of the clusters (note that we are studying the \\textit{projected} surface mass density here). Although the physics involved in the quenching of galaxies require a more detailed modelling, this simplified picture supports a scenario in which clusters assemble their stellar mass distribution in an inside-out fashion. 
We leave a more detailed discussion on the relative distributions of blue and red-sequence galaxies to a future paper, in which we measure and discuss the stellar mass functions for each of these populations.\n\n\\section{Summary and conclusions}\\label{sec:conclusion}\nIn this paper we perform a detailed study of the radial galaxy number density and stellar mass density distribution of satellites in a sample of 60 massive clusters in the local Universe ($0.04 0$ such that $h\\varphi^{n} = \\varphi^{n} h$ or $\\mathcal{H}$ contains a subgroup isomorphic to $F_{2}$ in which every nontrivial element is atoroidal. However, even this weaker statement is not true. For example, take atoroidal elements $\\varphi, \\psi \\in \\Out(F_{3})$ such that $\\I{\\varphi,\\psi} \\cong F_{2}$ and consider the subgroup $\\mathcal{H} = \\I{\\varphi*\\varphi,\\varphi*\\psi} \\subset \\Out(F_{6})$. Any non-trivial element of $\\mathcal{H}$ is of the form $\\varphi^{n} * \\omega$ where $n \\in \\ZZ$ and $\\omega \\in \\I{\\varphi,\\psi}$ is non-trivial. In particular $\\mathcal{H}$ does not virtually centralize any of its non-trivial elements. However, given any two elements $\\theta_{1}, \\theta_{2} \\in \\mathcal{H}$, we have $\\theta_{1} = \\varphi^{n_{1}} * \\omega_{1}$ and $\\theta_{2} = \\varphi^{n_{2}} * \\omega_{2}$ and thus we find that $\\theta_{1}^{n_{2}}\\theta_{2}^{-n_{1}} = {\\rm id} * \\omega_{1}^{n_{2}}\\omega_{2}^{-n_{1}}$ which is not atoroidal. Therefore $\\I{\\theta_{1},\\theta_{2}}$ is not purely atoroidal. \n\nThe right characterization is the following statement. \n\n\\begin{alphathm}\\label{th:noTits} \nLet $\\mathcal{H} < \\Out(F_{N})$ be a subgroup which contains an atoroidal element $\\varphi$. Then, $\\mathcal{H}$ contains a purely atoroidal free subgroup if and only if the restriction of $\\mathcal{H}$ to each minimal $\\mathcal{H}$--invariant free factor is not virtually cyclic. 
\n\\end{alphathm}\n\n\\begin{proof} \nThe ``if'' direction follows from \\cite[Lemma 4.3]{U3}. For the other direction, let $A0$. \nWe say that the stratum $H_r$ is \\emph{irreducible} if the associated transition matrix $M_r$ is irreducible. If $M_r$ is irreducible then it has a unique eigenvalue $\\lambda_{r} \\geq 1$ called the \\emph{Perron-Frobenius} eigenvalue, for which the associated eigenvector is positive. We say that $H_r$ is an \\emph{exponentially growing \\textup{(}EG\\textup{)} stratum} if $\\lambda_{r} > 1$. We say that $H_r$ is a \\emph{non-exponentially growing \\textup{(}NEG\\textup{)} stratum} if $\\lambda_{r} = 1$. Finally, we say that $H_r$ is a \\emph{zero stratum} if $M_r$ is the zero matrix. \n\n\n\\subsection{Free factor systems and geometric realizations}\\label{subsec:free factor}\n\nA \\emph{free factor} $A1$. Let $(g^{-\\infty}, g^{\\infty})$ be the unoriented bi-infinite geodesic labeled by $g$'s. For any such $g$ we define the \\emph{counting current} $\\eta_{g} \\in \\Curr(F_{N})$ as follows. If $S\\subset\\partial^{2}F_{N}$ is a Borel subset we set:\n\\begin{equation*}\n\\eta_{g}(S) = \\#\\abs{S \\cap F_{N}(g^{-\\infty},g^{\\infty})}.\n\\end{equation*}\nThis definition does not depend on the representative of the conjugacy class $[g]$ of $g$, so we will use $\\eta_{[g]}$ and $\\eta_{g}$ interchangeably. For an arbitrary $g$, we write $g=h^{k}$ where $h$ is not a proper power and define $\\eta_{g}=k\\eta_{h}$. The set of scalar multiples of all counting currents are called \\emph{rational currents}. An important fact about rational currents is that they form a \\emph{dense} subset of $\\Curr(F_{N})$ \\cite{Bosurvey, Ka2, Martin}\n\nThe group $\\Aut(F_{N})$ acts by homeomorphisms on $\\Curr(F_{N})$ as follows. 
An automorphism $\\Phi\\in \\Aut(F_{N})$ extends to a homeomorphism of both $\\partial F_{N}$ and $\\partial^{2}F_{N}$, which we still denote by $\\Phi$, and for $\\mu \\in \\Curr(F_{N})$ we define:\n\\begin{equation*}\n(\\Phi\\mu)(S)=\\mu(\\Phi^{-1}(S))\n\\end{equation*}\nfor any Borel subset $S$ of $\\partial^{2}F_{N}$. The $F_{N}$--invariance of the measure implies that the group $\\Inn(F_{N})$ of inner automorphisms acts trivially, hence we obtain an action of $\\Out(F_{N})=\\Aut(F_{N})\/\\Inn(F_{N})$ on $\\Curr(F_{N})$. On the level of conjugacy classes one can easily verify that $\\varphi\\eta_{[g]}=\\eta_{\\varphi [g]}$.\n\nThe space $\\PP {\\rm Curr}(F_{N})$ of \\emph{projectivized geodesic currents} is defined as the quotient of $\\Curr(F_{N})-\\{0\\}$ in which two currents are equivalent if they are positive scalar multiples of each other. The space $\\PP {\\rm Curr}(F_{N})$ endowed with the quotient topology is compact \\cite{Bosurvey, Ka2}. \nFurthermore, setting $\\varphi[\\mu]=[\\varphi\\mu]$ gives a well defined action of $\\Out(F_{N})$ on $\\PP {\\rm Curr}(F_{N})$. \n\nWe will now give more specifics about the topology on $\\Curr(F_{N})$. Let $m \\colon\\thinspace R_{N} \\to G$ be a marking. Lifting $m$ to the universal covers, we get a quasi-isometry $\\tilde{m} \\colon\\thinspace \\widetilde{R}_N \\to \\widetilde{G}$ and a homeomorphism $\\tilde{m} \\colon\\thinspace \\partial F_{N} \\to \\partial \\widetilde{G}$. Given a reduced edge path $\\tilde\\gamma$ in $\\tilde{G}$ the \\emph{cylinder set} of $\\tilde\\gamma$ is defined as \n\\begin{equation*}\nCyl_m(\\tilde\\gamma)=\\left\\{(\\xi_1,\\xi_2)\\in\\partial^2F_N\\mid \\tilde\\gamma\\subset[\\tilde{m}(\\xi_1), \\tilde{m}(\\xi_2)]\\right\\},\n\\end{equation*}\nwhere $[\\tilde{m}(\\xi_1), \\tilde{m}(\\xi_2)]$ is the bi-infinite geodesic from $\\tilde{m}(\\xi_1)$ to $\\tilde{m}(\\xi_2)$ in $\\tilde{G}$ and containment is for either orientation. 
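As a quick illustration of counting currents and cylinder sets, here is a small hand computation; the phenomenon it records is standard (cf. \cite{Ka2}), though the specific word $g = aab$ is our choice for illustration.

```latex
% Illustrative computation, assuming the rose R_2 with the identity
% marking; the word g = aab is chosen purely as an example.
% Let F_2 = F(a,b), G = R_2, and g = aab, which is cyclically reduced
% and not a proper power. A lift of a reduced edge path lies on a
% translate of the axis (g^{-\infty}, g^{\infty}) once for each
% occurrence of that path, in either orientation, in the cyclic word
% aab. Hence:
\begin{equation*}
\eta_{g}\bigl(Cyl(\tilde{a})\bigr) = 2, \qquad
\eta_{g}\bigl(Cyl(\tilde{b})\bigr) = 1, \qquad
\eta_{g}\bigl(Cyl(\widetilde{aa})\bigr) = 1,
\end{equation*}
% since a occurs twice, b once, and aa once (and their inverses not at
% all) in the cyclic word aab.
```

In particular, summing the first two values over the positive edges of $R_{2}$ recovers the simplicial length $3$ of the cyclic word.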
\n\nLet $\\gamma$ be a reduced edge path in $G$ and let $\\tilde\\gamma$ be a lift of $\\gamma$ to $\\widetilde{G}$. We define the number of \\emph{occurrences} of $\\gamma$ in $\\mu$ as\n\\begin{equation*}\n\\I{\\gamma,\\mu}_m =\\mu(Cyl_m(\\tilde\\gamma)).\n\\end{equation*}\nAs $\\mu$ is invariant under the action of $F_{N}$, the quantity $\\mu(Cyl_{m}(\\tilde\\gamma))$ does not depend on the choice of the lift $\\tilde\\gamma$ of $\\gamma$. Hence, $\\I{\\gamma,\\mu}_m$ is well defined. The marked graph will always be clear from the context and in what follows we drop the letter $m$ from the notation and use $Cyl(\\tilde\\gamma)$ and $\\I{\\gamma, \\mu}$. \n\nCylinder sets form a subbasis for the topology of the double boundary $\\partial^{2}F_{N}$ and play an important role in the topology of currents. In \\cite{Ka2}, it was shown that a geodesic current is uniquely determined by the set of values $\\{\\I{\\gamma ,\\mu}\\}_{\\gamma}$ as $\\gamma$ varies over the set of all reduced edge paths in $G$. \n\nFurthermore, defining the \\emph{simplicial length of a current $\\mu$} to be $\\wght{\\mu} = \\sum_{e \\in E^{+}G} \\I{e,\\mu}$ we have the following characterization of limits in $\\PP {\\rm Curr}(F_{N})$. \n\n\\begin{lemma}[{\\cite[Lemma~3.5]{Ka2}}]\\label{lem:topology}\nSuppose $([\\mu_n]) \\subset \\PP {\\rm Curr}(F_{N})$ is a sequence and $[\\mu] \\in \\PP {\\rm Curr}(F_{N})$. Then \\begin{equation*}\n\\lim_{n\\to\\infty}[\\mu_{n}]=[\\mu] \\text{ \\, if and only if \\, } \\lim_{n\\to\\infty} \\dfrac{\\langle \\gamma, \\mu_{n}\\rangle}{|\\mu_{n}|}=\\dfrac{\\langle \\gamma, \\mu\\rangle}{|\\mu|}\n\\end{equation*}\nfor each reduced edge path $\\gamma$ in $G$. \n\\end{lemma}\n\nThe value $\\wght{\\mu}$ does depend on the marked graph, but as before, the marked graph will always be clear from the context and so we omit it from the notation. 
It follows immediately from Lemma~\\ref{lem:topology} that the occurrence function $\\mu \\mapsto \\I{\\gamma,\\mu}$ and the simplicial length function $\\mu \\mapsto \\wght{\\mu}$ are continuous and linear on $\\Curr(F_{N})$~\\cite[Proposition~5.9]{Ka2}.\n\nGiven a free factor $A < F_{N}$, let $\\iota \\colon\\thinspace A \\to F_{N}$ be the inclusion map. There is a canonical $A$--equivariant embedding $\\partial A \\subset \\partial F_{N}$ which induces an $A$--equivariant embedding $\\partial^{2}A\\subset \\partial^{2}F_{N}$. Let $\\Curr(A)$ and $\\Curr(F_{N})$ be the corresponding spaces of currents. There is a natural inclusion $\\iota_{A} \\colon\\thinspace \\Curr(A) \\to \\Curr(F_{N})$ defined by \\emph{pushing the measure forward} via the $F_{N}$ action such that for each $g\\in A$ we have $\\iota_{A}(\\eta_{g})=\\eta_{\\iota(g)}$, see \\cite[Proposition-Definition 12.1]{Ka2}. \n\n\n\n\\section{Pushing past multi-edge extensions}\\label{sec:push past multi edge}\n\nAs stated in the introduction, the strategy for the proof of Theorem~\\ref{th:alternative} is to work from the bottom up using a maximal $\\mathcal{H}$--invariant filtration $\\emptyset = \\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1} \\sqsubset \\cdots \\sqsubset \\mathcal{F}_{k} = \\{[ F_{N}] \\}$. Assuming that there is an element $\\varphi \\in \\mathcal{H}$ such that $\\varphi\\big|_{\\mathcal{F}_{i-1}}$ is atoroidal, we either find a nontrivial element $g \\in F_{N}$ whose conjugacy class is fixed by a finite index subgroup of $\\mathcal{H}$, or, in the absence of such an element, we produce an element $\\hat\\varphi \\in \\mathcal{H}$ such that $\\hat\\varphi\\big|_{\\mathcal{F}_{i}}$ is atoroidal. \n\nThere are two cases depending on whether the extension $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$ is multi-edge or single-edge. In this section we deal with the multi-edge case; the single-edge case takes up Section~\\ref{sec:push past one edge}. 
\n\nThe multi-edge case follows from recent work of Handel--Mosher and Guirardel--Horbez. We collect these results here and show how they apply to this setting.\n\n\\begin{theorem}\\label{th:multi edge dichotomy} \nSuppose $\\mathcal{H} < \\IA_{N}(\\ZZ\/3) < \\Out(F_{N})$. Let $\\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1}$ be an $\\mathcal{H}$--invariant multi-edge extension, and assume that $\\mathcal{H}$ contains an element which is fully irreducible with respect to the extension $\\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1}$. Then one of the following holds. \n\\begin{enumerate} \n\\item $\\mathcal{H}$ contains an element $\\psi$ which is fully irreducible and non-geometric relative to $\\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1}$ (\\cite[Propositions~2.2 and~2.4]{HMpart4}); or \n\\item there is a common geometric model for all $\\varphi \\in \\mathcal{H}$ and hence every element of $\\mathcal{H}$ fixes the conjugacy class corresponding to a boundary curve (\\cite[Theorem~J]{HMpart4}). \n\\end{enumerate}\n\\end{theorem}\n\n\n\nWhen $\\mathcal{F}_{0}=\\emptyset$, the above theorem was originally proved by the second author~\\cite{UyaNSD}. The general case above is also proved by Guirardel--Horbez using the action of the relative outer automorphism group on a $\\delta$--hyperbolic complex which is a relative version of Dowdall--Taylor's co-surface graph~\\cite{DTcosurface}. The existence and relevant properties of this complex, which we will also need, are given by the following statement.\n\n\\begin{theorem}\\cite[Theorem 4.2]{un:GH}\\label{th:relativecosurface} \nSuppose $\\mathcal{F} \\sqsubset \\{[F_{N}]\\}$ is a multi-edge extension. There exists a $\\delta$--hyperbolic graph $\\mathcal{ZF}$ \nwith an isometric $\\Out(F_{N};\\mathcal{F})$ action such that an element $\\varphi\\in\\Out(F_{N};\\mathcal{F})$ acts as a hyperbolic isometry of $\\mathcal{ZF}$ if and only if $\\varphi$ is fully irreducible and non-geometric relative to $\\mathcal{F} \\sqsubset \\{[F_{N}]\\}$. 
\n\\end{theorem} \n\n\n\nAs a consequence of Theorem~\\ref{th:multi edge dichotomy}, when considering the multi-edge extension $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$ which is part of a maximal $\\mathcal{H}$--invariant filtration, if there does not exist a nontrivial element $g \\in F_{N}$ whose conjugacy class is in $\\mathcal{F}_{i}$ and is fixed by a finite index subgroup of $\\mathcal{H}$, then there is an element $\\varphi$ which is fully irreducible and non-geometric relative to $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$. If $\\varphi\\big|_{\\mathcal{F}_{i-1}}$ is atoroidal, then so is $\\varphi\\big|_{\\mathcal{F}_{i}}$, as the next lemma states; this allows us to push past a multi-edge extension. \n\n\\begin{lemma}\\label{co:non-geometric atoroidal}\nSuppose $\\varphi \\in \\Out(F_{N})$ is fully irreducible and non-geometric with respect to the extension $\\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1}$ and the restriction of $\\varphi$ to $\\mathcal{F}_{0}$ is atoroidal. Then the restriction of $\\varphi$ to $\\mathcal{F}_{1}$ is atoroidal too. \n\\end{lemma}\n\n\\begin{proof} \nThis is a straightforward consequence of Lemma \\ref{CT-properties}\\eqref{CT-properties:EG}. Indeed, let $f \\colon\\thinspace G \\to G$ be a CT map that represents $\\varphi^{M}$ and realizes $\\mathcal{C} = (\\mathcal{F}_{0},\\mathcal{F}_{1})$, where $M$ is the constant from Theorem~\\ref{th:CT exist}. Assume $M$ is such that $\\varphi^{M} \\in \\IA_{N}(\\ZZ\/3)$. Let $H_r$ be the stratum corresponding to the extension $\\mathcal{F}_{0}\\sqsubset\\mathcal{F}_{1}$, i.e., $\\mathcal{F}_{0} = \\mathcal{F}(G_{r-1})$, $\\mathcal{F}_{1} = \\mathcal{F}(G_{r})$ and $H_r=\\overline{G_r - G_{r-1}}$. \n\nAny $\\varphi$--periodic conjugacy class contained in $\\mathcal{F}_{1}$ is represented by a closed Nielsen path $\\rho \\subset G_{r}$. 
As $H_{r}$ is a non-geometric EG stratum, Lemma~\\ref{CT-properties}\\eqref{CT-properties:EG} implies that $\\rho \\subset G_{r-1}$, which contradicts the assumption that $\\varphi\\big|_{\\mathcal{F}_{0}}$ is atoroidal.\n\\end{proof} \n\nCombining the Handel--Mosher Subgroup Decomposition Theorem (Theorem~\\ref{th:HMdecomp}) with Theorems~\\ref{th:multi edge dichotomy} and \\ref{th:relativecosurface}, we get the following corollary, which will be required when pushing past single-edge extensions.\n\n\\begin{corollary}\\label{co:HM-simultaneous}\nSuppose $\\mathcal{H} < \\IA_{N}(\\ZZ\/3) < \\Out(F_N)$. Let \\[\\emptyset = \\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1} \\sqsubset \\cdots \\sqsubset \\mathcal{F}_{k} = \\{[ F_{N}] \\}\\] be a maximal $\\mathcal{H}$--invariant filtration by free factor systems such that each multi-edge extension is non-geometric. Then there exists an element $\\varphi \\in \\mathcal{H}$ such that for each $i = 1,\\ldots,k$ where $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$ is a multi-edge extension, $\\varphi$ is fully irreducible and non-geometric with respect to $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$.\n\\end{corollary}\n\n\\begin{proof}\nThe proof is the same as the proof of \\cite[Theorem~6.6]{CU}, as commented in Remark~\\ref{rem:HMdecomp}. The key point is that Theorems~\\ref{th:HMdecomp}, \\ref{th:multi edge dichotomy} and \\ref{th:relativecosurface} provide for the existence of $\\delta$--hyperbolic spaces corresponding to each multi-edge extension and, for each, an element which acts as a hyperbolic isometry. The main theorem in~\\cite{CU} shows that under these hypotheses, there is a single element in $\\mathcal{H}$ which acts as a hyperbolic isometry in each. 
Applying Theorem~\\ref{th:relativecosurface} again completes the proof.\n\\end{proof}\n\n\n\\section{Dynamics on single-edge extensions}\\label{sec:single-edge}\n\nIn this section we analyze the dynamics of outer automorphisms that preserve a single-edge extension of free factor systems $\\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1}$. The main result of this section is that in the most interesting case of a handle extension, if $\\varphi$ preserves the extension and acts as an atoroidal element on $F_{N-1}$, then $\\varphi$ acts on the space of currents on $F_N$ with generalized north-south dynamics (Theorem~\\ref{th:gns}). \n\n\n\\subsection{Almost atoroidal elements}\\label{subsec:almost atoroidal}\n\nTo begin, we characterize outer automorphisms preserving a single-edge extension $\\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1}$ whose restriction to $\\mathcal{F}_{0}$ is atoroidal.\n\n\\begin{proposition}\\label{prop:co rank one atoroidal} \nSuppose $\\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1}$ is a single-edge extension of free factor systems that is invariant under $\\varphi \\in \\IA_{N}(\\ZZ\/3)$. If $\\varphi\\big|_{\\mathcal{F}_{0}}$ is atoroidal, then one of the following holds.\n\\begin{enumerate} \n\n\\item\\label{item:one edge atoroidal} The restriction $\\varphi\\big|_{\\mathcal{F}_{1}}$ is atoroidal. \n\n\\item\\label{item:one edge fixed} There exists a nontrivial $g\\in F_{N}$ such that $g$, its inverse, and their iterates are the only nontrivial conjugacy classes in $\\mathcal{F}_1$ fixed by $\\varphi\\big|_{\\mathcal{F}_{1}}$. 
Furthermore, there is some $[A] \\in \\mathcal{F}_{0}$ such that either:\n\\begin{itemize}\n\\item $\\mathcal{F}_{1} = \\mathcal{F}_{0} \\cup \\{[\\I{g}]\\}$ (circle extension); or\n\\item $\\mathcal{F}_{1} = \\bigl( \\mathcal{F}_{0} - \\{[A]\\} \\bigr)\\cup \\{[A \\ast \\I{g}]\\}$ (handle extension).\n\\end{itemize}\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof} \nLet $f \\colon\\thinspace G\\to G$ be a $\\CT$ that represents $\\varphi^{M}$ and realizes $\\mathcal{C} = (\\mathcal{F}_{0},\\mathcal{F}_{1})$, where $M$ is the constant from Theorem~\\ref{th:CT exist}. Let $H_r$ be the $\\NEG$ stratum corresponding to the extension $\\mathcal{F}_{0}\\sqsubset\\mathcal{F}_{1}$, i.e., $\\mathcal{F}_{0} = \\mathcal{F}(G_{r-1})$, $\\mathcal{F}_{1} = \\mathcal{F}(G_{r})$ and $H_r=\\overline{G_r - G_{r-1}}$. By Lemma~\\ref{CT-properties}\\eqref{CT-properties:NEG}, $H_{r}$ consists of a single edge $e$. \n\nIf $\\mathcal{F}_0 \\sqsubset \\mathcal{F}_1$ is a circle extension, then the second statement of the proposition holds. Else, if $\\mathcal{F}_0 \\sqsubset \\mathcal{F}_1$ is a barbell extension, then $\\varphi\\big|_{\\mathcal{F}_1}$ is atoroidal and so the first statement of the proposition holds. Hence we assume that $\\mathcal{F}_0 \\sqsubset \\mathcal{F}_1$ is a handle extension. Let $[A] \\in \\mathcal{F}_0$ correspond to the component of $G_{r-1}$ to which $e$ is attached.\n\nFirst, suppose that $e$ is a linear edge, i.e., $f(e) = e \\rho$ where $\\rho$ is a nontrivial closed Nielsen path in $G_{r-1}$. Then the conjugacy class corresponding to $\\rho$ is fixed by $\\varphi$ and is in $\\mathcal{F}_{0}$, contradicting the assumption that $\\varphi\\big|_{\\mathcal{F}_{0}}$ is atoroidal. Hence this case does not occur.\n\nNext, suppose that $e$ is a fixed edge. If $o(e) = t(e)$, we claim that the conjugacy class $[g]$ that corresponds to the loop $e$ is the only fixed conjugacy class up to inversion and taking powers. 
Thus the second statement of the proposition holds. Indeed, any other conjugacy class $[h]$ in $\\mathcal{F}_{1}$ is represented by a cyclically reduced loop of the form $e^{a_{1}}\\alpha_{1}e^{a_{2}} \\ldots \\alpha_{k}$ where the $\\alpha_{i}$'s are reduced loops in $G_{r-1}$ based at the common vertex $o(e)=t(e)$ and the $a_i$'s are non-zero integers. If $\\varphi^{Mp}[h]=[h]$ for some $p \\geq 1$, then $[f^{p}(e^{a_{1}}\\alpha_{1}e^{a_{2}} \\ldots \\alpha_{k})] = \\sigma e^{a_{1}}\\alpha_{1}e^{a_{2}} \\ldots \\alpha_{k} \\sigma^{-1}$ for some reduced edge path $\\sigma$ (note that the image path is reduced except possibly at $\\sigma \\cdot e^{a_{1}}$ or $\\alpha_{k} \\cdot \\sigma^{-1}$). Since $f(e)=e$ and $f$ preserves $G_{r-1}$, $f^{p}$ must permute the $\\alpha_i$'s (up to homotopy rel endpoints). Hence some power of $f$ fixes each $\\alpha_{i}$, which is a contradiction as the restriction of $\\varphi$ to $\\mathcal{F}_{0}$ is atoroidal. \n \nIf $o(e) \\neq t(e)$, we claim that there can be at most one fixed conjugacy class in $\\mathcal{F}_{1}$ up to inversion and taking powers. Thus the second statement of the proposition holds. Indeed, suppose $h_{1}, h_{2} \\in F_{N}$ are not proper powers, $[h_{1}]$ and $[h_{2}]$ are in $\\mathcal{F}_{1}$, and are fixed by $\\varphi$. As the restriction of $\\varphi$ to $\\mathcal{F}_{0}$ is atoroidal, we have that $[h_{1}]$ is represented by a cyclically reduced loop $e^{a_{1}}\\alpha_{1}e^{a_{2}} \\ldots \\alpha_{k}$ where the $\\alpha_{i}$'s are reduced paths in $G_{r-1}$ and each $a_{i} \\in \\{-1,1\\}$. Similarly, $[h_{2}]$ is represented by a cyclically reduced loop $e^{b_{1}}\\beta_{1}e^{b_{2}} \\ldots \\beta_{\\ell}$ where again the $\\beta_{i}$'s are reduced paths in $G_{r-1}$ and each $b_{i} \\in \\{-1,1\\}$. As in the previous case of a loop, some power of $f$ fixes each $\\alpha_{i}$ and $\\beta_{i}$ (up to homotopy rel endpoints). 
If there is some $i$ such that $a_{i} \\neq a_{i+1}$, then the path $\\alpha_{i}$ is closed and represents a conjugacy class in $\\mathcal{F}_{0}$ which is $\\varphi$--periodic, contradicting the assumption that the restriction of $\\varphi$ to $\\mathcal{F}_{0}$ is atoroidal. Similarly for the $b_{i}$'s. Thus, after possibly replacing $h_{1}$ or $h_{2}$ by their inverse, we have that each $a_{i}$ and $b_{i}$ equals $1$. If there exist $i \\neq j$ such that $\\alpha_{i} \\neq \\alpha_{j}$, then the nontrivial closed loop $\\alpha_{i}\\alpha_{j}^{-1}$ is fixed by this power of $f$ and contained in $G_{r-1}$, again contradicting the assumption that the restriction of $\\varphi$ to $\\mathcal{F}_{0}$ is atoroidal. Thus the $\\alpha_{i}$'s are all the same path $\\alpha$ and since $h_{1}$ is not a proper power, we have that $[h_{1}]$ is represented by the cyclically reduced path $e\\alpha$. Similarly $[h_{2}]$ is represented by the cyclically reduced path $e\\beta$. Finally, if $\\alpha \\neq \\beta$, then the nontrivial closed loop $\\alpha\\beta^{-1}$ is fixed by a power of $f$, again contradicting the assumption that the restriction of $\\varphi$ to $\\mathcal{F}_{0}$ is atoroidal. Hence $[h_{1}] = [h_{2}]$. \n\nLastly, in the remaining case that $e$ is superlinear, there is no Nielsen path that crosses $e$ \\cite[Fact~1.43]{HMpart1}, hence the restriction of $\\varphi$ to $\\mathcal{F}_{1}$ is atoroidal as well. Thus the first statement of the proposition holds. \n\nIn all cases, we see that $\\varphi$ has at most one fixed conjugacy class up to taking powers and inversion, which proves the first part of the proposition. The last assertion for the second statement follows from the fact that the path representing a possible fixed $g$ crosses the edge $e$ exactly once, see for example \\cite[Corollary~3.2.2]{BFH00}. 
\n\\end{proof}\n\n\n\n\n\\subsection{North-south dynamics for atoroidal elements}\\label{subsec:ns-atoroidal}\n \nThe second author recently proved that atoroidal elements of $\\Out(F_{N})$ act on $\\PP {\\rm Curr}(F_{N})$ with north-south dynamics in the following sense.\n \n\\begin{theorem}[{\\cite[Theorem~1.4]{U3}}]\\label{dynamicsofhyp} \nLet $\\varphi\\in \\Out(F_{N})$ be an atoroidal outer automorphism of a free group of rank $N\\ge3$. There are simplices $\\Delta_{+}$, $\\Delta_{-}$ in $\\PP {\\rm Curr}(F_{N})$ such that $\\varphi$ acts on $\\PP {\\rm Curr}(F_{N})$ with north-south dynamics from $\\Delta_{-}$ to $\\Delta_{+}$. Specifically, given open neighborhoods $U$ of $\\Delta_{+}$ and $V$ of $\\Delta_{-}$ there exists $M > 0$ such that $\\varphi^{n}(\\PP {\\rm Curr}(F_{N}) - V)\\subset U$, and $\\varphi^{-n}(\\PP {\\rm Curr}(F_{N}) - U)\\subset V$ for all $n\\ge M$. \n\\end{theorem}\n\nWe also need the following statement regarding the behavior of the length of a current under iteration of $\\varphi$. In this statement, we assume $\\varphi \\in \\Out(F_{N})$ satisfies the hypotheses of Theorem~\\ref{dynamicsofhyp} and $\\Delta_{-}$ is the $\\varphi$--invariant simplex in $\\PP {\\rm Curr}(F_{N})$ appearing in the statement of that theorem. \n\n\\begin{lemma}[{cf. \\cite[Corollary~4.13]{KL5}}]\\label{lem:dynamics in simplex}\nFor each $C > 0$ and neighborhood $V$ of $\\Delta_{-}$ there is a constant $M > 0$ such that if $[\\mu] \\notin V$, then $\\wght{\\varphi^{n}\\mu} \\geq C\\wght{\\mu}$ for all $n \\geq M$.\n\\end{lemma}\n\nA similar statement appears as Lemma~\\ref{lem:growth outside of nbhd}. 
The proof given there directly adapts to prove this statement.\n\n\\subsection{Completely split goodness of paths and currents}\\label{subsec:goodness}\n\nTo deal with single-edge extensions, we need similar statements for an element of $\\Out(F_{N})$ that restricts to an atoroidal element on a \\emph{co-rank $1$ free factor} of $F_{N}$, i.e., a free factor $A < F_{N}$ for which there exists a nontrivial $g \\in F_{N}$ such that $F_{N} = A \\ast \\I{g}$. This is the purpose of this subsection and the next, where we describe the necessary tools to prove Theorem~\\ref{th:gns}. The majority of the work in the next two sections modifies the constructions and arguments in \\cite{U3} to deal with the free factor $\\I{g}$. A casual reader can review the main statements corresponding to the two above, Theorem~\\ref{th:gns} and Lemma~\\ref{lem:growth outside of nbhd}, and skip ahead to Section~\\ref{sec:push past one edge}.\n\n\n\\begin{assumption}\\label{stand}\nSuppose $A < F_{N}$ is a co-rank $1$ free factor and $\\varphi \\in \\IA_{N}(\\ZZ\/3) \\cap \\Out(F_{N};A)$ is such that $\\varphi\\big|_{A}$ is atoroidal. Let $\\Delta_{+}$ and $\\Delta_{-}$ be the images in $\\PP {\\rm Curr}(F_{N})$ of the $\\varphi$--invariant simplices in $\\PP {\\rm Curr}(A)$ from Theorem~\\ref{dynamicsofhyp} for $\\varphi\\big|_{A}$. Assume $\\varphi$ is not atoroidal and let $[g]$ be the fixed conjugacy class in $F_{N}$ given by Proposition~\\ref{prop:co rank one atoroidal}\\eqref{item:one edge fixed}. 
Let\n\\begin{equation*}\n\\widehat \\Delta_{-}=\\{[t\\eta_{g}+(1-t)\\mu_{-}]\\mid [\\mu_{-}]\\in\\Delta_{-}, t\\in[0,1]\\}\n\\end{equation*}\nand \n\\begin{equation*}\n\\widehat \\Delta_{+}=\\{[t\\eta_{g}+(1-t)\\mu_{+}]\\mid [\\mu_{+}]\\in\\Delta_{+}, t\\in[0,1]\\}.\n\\end{equation*}\n\\end{assumption}\n\nThroughout the rest of this section and the next, we will further assume that the element $\\varphi$ is represented by a CT map $f \\colon\\thinspace G \\to G$ in which the fixed conjugacy class $[g]$ is represented by a loop edge $e$ in $G$ which is fixed by $f$. The complement of the edge $e$ in $G$ is denoted $G'$. This assumption is not a restriction (after replacing $\\varphi$ by a suitable power so that a CT representative exists). Indeed, if in the proof of Proposition \\ref{prop:co rank one atoroidal} the edge $e$ is a loop edge, we are done. Otherwise, the conclusion of Proposition \\ref{prop:co rank one atoroidal} says that $\\I{g}$ is a free factor, so we can take a CT map $f' \\colon\\thinspace G'\\to G'$ that represents $\\varphi\\big|_{A}$ and let $G = G' \\vee e$ where the wedge point is at an $f'$-fixed vertex and $e$ is a loop edge representing $[g]$. There is an obvious extension to a map $f \\colon\\thinspace G \\to G$ representing $\\varphi \\in \\Out(F_{N})$ that is a CT map. Existence of a fixed vertex is guaranteed by the properties of CT's, see \\cite[Definition 3.18 and Lemma 3.19]{FH}. \n\n\nA decomposition of a path $\\gamma$ in $G$ into subpaths $\\gamma = \\gamma_1\\cdot\\gamma_2\\cdot \\ldots \\cdot \\gamma_n$ is called a \\emph{splitting} if for all $k \\geq 0$ we have \n\\begin{equation*}\n[f^{k}(\\gamma)]=[f^{k}(\\gamma_1)][f^{k}(\\gamma_2)]\\ldots[f^{k}(\\gamma_n)].\n\\end{equation*}\nIn other words, any cancellation takes place within the images of the $\\gamma_{i}$'s. We use the ``$\\cdot$'' notation for splittings. 
A path $\\gamma$ is said to be \\emph{completely split} if it has a splitting $\\gamma_1 \\cdot \\gamma_2 \\cdot \\ldots \\cdot \\gamma_{n}$ where each $\\gamma_{i}$ is either an edge in an irreducible stratum, an indivisible Nielsen path, or a maximal taken connecting path in a zero stratum. These types of subpaths are called \\emph{splitting units}. We refer the reader to \\cite{FH} for complete details and note that the assumption on $\\varphi$ above guarantees that there are no exceptional paths. Of importance is that if $\\gamma = \\gamma_1\\cdot\\gamma_2\\cdot\\ldots\\cdot\\gamma_n$ is a complete splitting, then $[f(\\gamma)]$ also has a complete splitting where the units refine $[f(\\gamma)]=[f(\\gamma_1)]\\cdot [f(\\gamma_2)] \\cdot \\ldots \\cdot [f(\\gamma_n)]$~\\cite[Lemma~4.6]{FH}. We say that a splitting unit $\\sigma$ is \\emph{expanding} if $|[f^{k}(\\sigma)]|\\to\\infty$ as $k\\to\\infty$. Recall that $|\\param|$ denotes the simplicial length of a path.\n\nWe next need to introduce a notion of \\emph{goodness} tailored to the setting of CT maps. Goodness appears in several places in the literature~\\cite{BFH97,Martin,Uyaiwip}. Intuitively, the closer the goodness of a path is to $1$, the better we understand the qualitative behavior of its forward images. In the previous settings it is defined using legal subpaths; in the current setting, completely split subpaths are the relevant pieces to keep track of. \n\n\\begin{definition}\\label{def:goodness of paths}\nFor an edge path $\\gamma$ in $G$, a \\emph{maximal splitting} is a splitting $\\gamma = \\beta_{0} \\cdot \\alpha_{1} \\cdot \\beta_{1} \\cdot \\ldots \\cdot \\alpha_{n} \\cdot \\beta_{n}$ where each $\\alpha_{i}$ has a complete splitting, $\\beta_{i}$ is nontrivial for $i = 1,\\ldots,n-1$ and $\\sum_{i=1}^{n} |\\alpha_{i}|$ is maximized. 
Using a maximal splitting, we define the \\emph{completely split goodness} of $\\gamma$ as:\n\\begin{equation*}\n\\mathfrak{g}(\\gamma) = \\frac{1}{|\\gamma|} \\sum_{i=1}^{n} |\\alpha_{i}|.\n\\end{equation*}\n\\end{definition}\n\nIf $\\gamma$ is a cyclically reduced circuit in $G$, set $\\mathfrak{g}(\\gamma)$ to be the maximum of $\\mathfrak{g}(\\gamma')$ over all cyclic permutations of $\\gamma$. For any nontrivial $h \\in F_{N}$, let $\\gamma_{h}$ be the unique cyclically reduced circuit in $G$ that represents $[h]$. We define the \\emph{completely split goodness} of a conjugacy class $[h]$ as $\\mathfrak{g}([h]) = \\mathfrak{g}(\\gamma_{h})$. It is not clear whether $\\mathfrak{g}$ extends continuously to $\\Curr(F_{N})$. Instead, we define a continuous function $\\overline{\\mathfrak{g}} \\colon\\thinspace \\Curr(F_{N}) \\to \\mathbb{R}$ that agrees with $\\mathfrak{g}$ on completely split circuits and provides a lower bound on $\\mathfrak{g}$ in general. The first ingredient is the bounded cancellation lemma. \n \n\n\\begin{lemma}\\cite{Coo}\\label{BCL} Let $f \\colon\\thinspace G \\to G$ be a graph map. There exists a constant $C_{f}$ such that for any reduced path $\\gamma=\\gamma_{1}\\gamma_{2}$ in $G$ one has\n\\[\n|[f(\\gamma)]|\\ge|[f(\\gamma_1)]|+|[f(\\gamma_2)]|-2C_{f}.\n\\]\n\\end{lemma}\n\nLet $C_0$ be the maximum length of a Nielsen path or a taken connecting path in a zero stratum in $G'$. Finiteness of $C_{0}$ follows as $\\varphi\\big|_{A}$ is atoroidal and zero strata are contractible. This same $C_0$ also works for $f^{k}$ for all $k \\geq 1$. We now replace the CT map $f$ by a suitable power, still denoted $f$, so that for each expanding splitting unit $\\sigma$ we have $\\abs{[f(\\sigma)]} \\geq 3(2C_0+1)\\abs{\\sigma}$. Let $C_f$ be the bounded cancellation constant for this new $f$ and $C = \\max\\{C_0+1,C_f\\}$. 
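Before the growth estimates, it may help to record two extreme cases of completely split goodness; this observation is ours, and it is immediate from the definition of a maximal splitting.

```latex
% Extreme values of completely split goodness (immediate from the
% definition): if \gamma is completely split, taking n = 1,
% \alpha_1 = \gamma and \beta_0, \beta_1 trivial gives goodness 1;
% if no subpath of \gamma is completely split, the sum is empty.
\begin{equation*}
0 \leq \mathfrak{g}(\gamma) \leq 1, \qquad
\mathfrak{g}(\gamma) = 1 \ \text{if $\gamma$ is completely split}, \qquad
\mathfrak{g}(\gamma) = 0 \ \text{if no subpath of $\gamma$ is completely split}.
\end{equation*}
```

The estimates below control how quickly iteration of $f$ pushes paths toward the first extreme.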
\n\n\\begin{proposition}\\label{growthprop} \nUnder the standing assumption \\ref{stand}, the following hold:\n\\begin{enumerate}\n\n\\item If a path $\\gamma$ in $G'$ is completely split and $|\\gamma| \\geq C_0+1$, then:\n\\begin{equation*}\n\\frac{\\textit{sum of lengths of expanding splitting units}}{|\\gamma|}\\geq \\frac{1}{2C_0+1}.\n\\end{equation*}\n\n\\item If a path $\\gamma$ in $G'$ is completely split and $|\\gamma| \\geq C_{0} + 1$, then: \n\\begin{equation*}\n|[f(\\gamma)]| \\geq 3|\\gamma|.\n\\end{equation*}\n\n\n\\item\\label{item:good} Let $\\gamma$ be any path in $G$ and suppose $\\gamma_{0} \\cdot \\gamma_{1} \\cdot \\gamma_{2}$ is a subpath of $\\gamma$ where each $\\gamma_{i}$ has a complete splitting. If $|\\gamma_{0}|, |\\gamma_{2}| \\geq C$ then $\\gamma$ has a splitting $\\gamma = \\gamma' \\cdot \\gamma_{1} \\cdot \\gamma''$.\n\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nThe proof of (1) is similar to that of~\\cite[Proposition 3.9]{U3}. Properties of CT's imply that $\\gamma$ has a splitting $\\gamma = \\beta_{0} \\cdot \\alpha_{1} \\cdot \\beta_{1} \\cdot \\ldots \\cdot \\alpha_{n} \\cdot \\beta_{n}$ where each $\\alpha_{i}$ has a complete splitting into edges in EG strata (in particular into expanding splitting units) and each $\\beta_{j}$ is either a Nielsen path or a taken connecting path in a zero stratum. Since $|\\gamma| \\geq C_{0}+1$, we must have $n > 0$. 
As $|\\alpha_{i}| \\geq 1$ for all $i$ and $|\\beta_{j}| \\leq C_{0}$ for all $j$ we have:\n\\begin{equation*}\n\\frac{|\\gamma|}{\\sum_{i=1}^{n}|\\alpha_{i}|} = 1 + \\frac{\\sum_{j=0}^{n}|\\beta_{j}|}{\\sum_{i=1}^{n}|\\alpha_{i}|} \\leq 1 + \\frac{(n+1)C_{0}}{n} \\leq 2C_{0} + 1.\n\\end{equation*}\nTherefore:\n\\begin{align*}\n\\frac{\\textit{sum of lengths of expanding splitting units}}{|\\gamma|} & \\geq\\frac{\\sum_{i=1}^{n} |\\alpha_{i}|}{|\\gamma|} \\geq \\frac{1}{2C_{0} + 1}.\n\\end{align*}\n\nWe get (2) by noting that $|[f(\\alpha_{i})]| \\geq 3(2C_{0} + 1)|\\alpha_{i}|$ for all $i$ and so by (1):\n\\begin{equation*}\n|[f(\\gamma)]| \\geq \\sum_{i=1}^{n}|[f(\\alpha_{i})]| \\geq 3(2C_{0} + 1)\\sum_{i=1}^{n}|\\alpha_{i}| \\geq 3|\\gamma|.\n\\end{equation*}\n\nFor (3) we first observe that by (2), we have $|[f(\\gamma_{0})]|, |[f(\\gamma_{2})]| \\geq 3C \\geq C_{f} + C_{0} + C$. Decompose $\\gamma$ as a concatenation $\\gamma = \\gamma'_{0}\\gamma_{0}\\gamma_{1}\\gamma_{2}\\gamma'_{2}$. Applying Lemma~\\ref{BCL} to $\\gamma' = \\gamma'_{0}\\gamma_{0}$ we get that at most $C_{f}$ edges of $[f(\\gamma'_{0})]$ cancel against $[f(\\gamma_{0})]$ and therefore the terminal segment of length $C + C_{0}$ in $[f(\\gamma_{0})]$ remains in $[f(\\gamma')]$. As $[f(\\gamma_{0})]$ is completely split, we see that $[f(\\gamma')] = \\gamma''_{0}\\hat\\gamma_{0}$ where $\\hat\\gamma_{0} \\subseteq [f(\\gamma_{0})]$ is completely split and $|\\hat\\gamma_{0}| \\geq C$. Likewise for $\\gamma'' = \\gamma_{2}\\gamma'_{2}$ we see that $[f(\\gamma'')] = \\hat\\gamma_{2}\\gamma''_{2}$ where $\\hat\\gamma_{2} \\subseteq [f(\\gamma_{2})]$ is completely split and $|\\hat\\gamma_{2}| \\geq C$. 
\n\nAs $\\gamma_{0} \\cdot \\gamma_{1} \\cdot \\gamma_{2}$ is a splitting, we have $[f(\\gamma)] = [f(\\gamma')][f(\\gamma_{1})][f(\\gamma'')]$.\n\nSince the path $\\hat\\gamma_{0} \\cdot f(\\gamma_{1}) \\cdot \\hat\\gamma_{2}$ is a subpath of $[f(\\gamma)]$ satisfying the same hypotheses as $\\gamma_{0} \\cdot \\gamma_{1} \\cdot \\gamma_{2}$ did for $\\gamma$, we can repeatedly apply this argument to get $[f^{k}(\\gamma)] = [f^{k}(\\gamma')][f^{k}(\\gamma_{1})][f^{k}(\\gamma'')]$ for all $k \\geq 1$ and so $ \\gamma = \\gamma' \\cdot \\gamma_{1} \\cdot \\gamma''$ is a splitting.\n\\end{proof}\n\nLet $\\mathcal{P}_{\\rm cs}$ denote the set of paths in $G$ that have a complete splitting comprised of exactly $2C+1$ splitting units. Given $\\gamma \\in \\mathcal{P}_{\\rm cs}$ we have $\\gamma = \\sigma_{1} \\cdot \\sigma_{2} \\cdot \\ldots \\cdot \\sigma_{2C+1}$ where each $\\sigma_{i}$ is a splitting unit and we define $\\check\\gamma = \\sigma_{C+1}$, i.e., the middle splitting unit. It is possible for distinct paths $\\gamma, \\gamma' \\in \\mathcal{P}_{\\rm cs}$ to be nested, i.e., $\\gamma' \\subsetneq \\gamma$. For instance, if the first or last unit in $\\gamma$ is either an indivisible Nielsen path or a taken connecting path in a zero stratum, then it is possible that $\\gamma$ has a completely split subpath $\\gamma'$ with $2C+1$ terms where the first and\/or last terms are either edges in the indivisible Nielsen path or a smaller taken connecting path in a zero stratum. For such pairs, $\\check\\gamma = \\check\\gamma'$. 
We need to keep track of such behavior and so define:\n\\begin{equation*}\n\\mathcal{P}_{\\rm cs}^{\\rm min} = \\{ \\gamma \\in \\mathcal{P}_{\\rm cs} \\mid \\not\\exists \\gamma' \\in \\mathcal{P}_{\\rm cs} \\mbox{ where } \\gamma \\subsetneq \\gamma' \\mbox{ and } \\check\\gamma = \\check\\gamma'\\}.\n\\end{equation*} \n\nWe can now define a version of completely split goodness for currents.\n\n\\begin{definition}\\label{def:goodness of current}\nFor any non-zero $\\mu\\in \\Curr(F_{N})$ define the \\emph{completely split goodness} of $\\mu$ by:\n\\begin{equation}\\label{eq:csg}\n\\overline{\\mathfrak{g}}(\\mu)=\\frac{1}{|\\mu|}\\sum_{\\gamma \\in \\mathcal{P}_{\\rm cs}^{\\rm min}}\\I{\\gamma, \\mu} |\\check\\gamma|.\n\\end{equation}\n\\end{definition}\n\n\nObserve that $\\overline{\\mathfrak{g}}$ descends to a well-defined function $\\overline{\\mathfrak{g}}\\colon\\thinspace \\PP {\\rm Curr}(F_{N}) \\to \\mathbb{R}$. The important properties of $\\overline{\\mathfrak{g}}$ are summarized in the following lemma.\n\n\\begin{lemma}\\label{generalizedgoodnessproperties}\nThe map $\\overline{\\mathfrak{g}}\\colon\\thinspace \\Curr(F_{N})-\\{0\\} \\to \\mathbb{R}$ is continuous. Further, for any rational current $\\eta_{h}$:\n\n\\begin{enumerate}\n\\item $\\overline{\\mathfrak{g}}(\\eta_{h})=1$ if $\\eta_{h}$ is represented by a completely split circuit; and \n\n\\item $\\mathfrak{g}(\\gamma_{h}) \\geq \\overline{\\mathfrak{g}}(\\eta_{h})$ where $\\gamma_{h}$ is the unique reduced circuit in $G$ that represents $[h]$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof} The continuity is clear as it is defined using a linear combination of continuous functions (Lemma~\\ref{lem:topology}). \n\nFor the first assertion, suppose $h$ is represented by a completely split cyclically reduced circuit $\\gamma = \\sigma_{1} \\cdot \\sigma_{2} \\cdot \\ldots \\cdot \\sigma_{n}$. 
For each $i$, the path:\n\\begin{equation*}\n\\gamma_{i} = \\sigma_{i-C} \\cdot \\, \\cdots \\, \\cdot \\sigma_{i-1} \\cdot \\sigma_{i} \\cdot \\sigma_{i + 1} \\cdot \\, \\cdots \\, \\cdot \\sigma_{i + C}\n\\end{equation*}\nwhere the indices are taken modulo $n$, is in $\\mathcal{P}_{\\rm cs}$ and has $\\check\\gamma_{i} = \\sigma_{i}$. Thus each splitting unit $\\sigma_{i}$ in $\\gamma$ is the middle term of a completely split edge path of length $2C+1$. The minimal such path contributes to the right-hand side of \\eqref{eq:csg} the number of edges of $\\sigma_{i}$. \n\nThe second assertion follows from Proposition \\ref{growthprop}\\eqref{item:good}.\n\\end{proof}\n\n\n\n\n\\subsection{Incorporating north-south dynamics from lower stratum}\\label{subsec:goodness grows}\n\nWe need to work with the inverse outer automorphism $\\varphi^{-1}$ as well. We will denote the CT map for $\\varphi$ by $f_{+} \\colon\\thinspace G_{+} \\to G_{+}$. As in Section~\\ref{subsec:goodness}, we assume that there is an edge $e_{+}$ in $G_{+}$ representing the fixed conjugacy class $[g]$ and we will denote the complement of $e_{+}$ in $G_{+}$ by $G'_{+}$. The corresponding completely split goodness function is denoted by $\\overline{\\mathfrak{g}}_{+}$. For $\\varphi^{-1}$, we denote the corresponding objects by $f_{-} \\colon\\thinspace G_{-} \\to G_{-}$, $e_{-}$, $G'_{-}$ and $\\overline{\\mathfrak{g}}_{-}$. Let us denote the total length of subpaths of $\\gamma$ that lie in $G'_{+}$ by $|\\gamma|'$, and by abuse of notation we denote the corresponding length functions on $G_{-}$ and $G'_{-}$ with $|\\param|$ and $|\\param|'$ as well; their use will be clear from context. \n\nNotice that any path $\\gamma$ in $G_{+}$ has a splitting $\\gamma = \\alpha_{0} \\cdot e_{+}^{k_{1}} \\cdot \\alpha_{1} \\cdot \\ldots \\cdot e_{+}^{k_{m}} \\cdot \\alpha_{m}$ where each $\\alpha_{i}$ is a closed path in $G'_{+}$ which is nontrivial for $i = 1,\\ldots, m-1$ and each $k_{i}$ is a nonzero integer. 
This follows as $f_{+}(e_{+}) = e_{+}$ and $f_{+}(G'_{+}) \\subseteq G'_{+}$. If $\\gamma$ is not a power of $e_{+}$ we define:\n\\begin{equation*}\n\\mathfrak{g}'_{+}(\\gamma) = \\frac{\\sum_{i=0}^{m}|\\alpha_{i}|\\mathfrak{g}_{+}(\\alpha_{i})}{\\sum_{i=0}^{m}|\\alpha_{i}|}.\n\\end{equation*}\nIn other words, we are measuring the proportion of $\\gamma$ in $G'_{+}$ that is completely split. There is a similar discussion for paths in $G_{-}$ and we define $\\mathfrak{g}'_{-}$ analogously.\n\nGiven $h \\in F_{N}$, we let $\\gamma^{+}_{h}$ and $\\gamma^{-}_{h}$ respectively denote the unique cyclically reduced circuits in $G_{+}$ and $G_{-}$ that represent $[h]$. The following proposition summarizes the key properties of $\\mathfrak{g}'_{+}$ and how it will be used to detect how close a current is to the attracting simplices. \n\n\\begin{proposition}\\label{prop:goodness properties}\nUnder the standing assumption \\ref{stand}, the following hold for every $h \\in F_{N}$ that is not conjugate to a power of $g$.\n\\begin{enumerate}\n\\item For any open neighborhood $U_{+}$ of $\\Delta_{+}$ there exists a $0 < \\delta < 1$ and $M > 0$ such that $\\varphi^{n}[\\eta_{h}] \\in U_{+}$ for all $n \\geq M$ if:\n\\begin{equation*}\n\\mathfrak{g}'_{+}(\\gamma_{h}^{+})\\frac{|\\gamma_{h}^{+}|'}{|\\gamma_{h}^{+}|} > \\delta.\n\\end{equation*}\\label{GP:stronger}\n\n\\item For any $\\epsilon > 0$ and $L \\geq 0$ there exists a $0 < \\delta < 1$ and $M > 0$ such that for each $n \\geq M$ there is a $[\\mu] \\in \\Delta_{+}$ with:\n\\begin{equation*}\n\\abs{\\frac{\\I{\\alpha,[f_{+}^{n}(\\gamma_{h}^{+})]}}{|[f_{+}^{n}(\\gamma_{h}^{+})]|'} - \\frac{\\I{\\alpha,\\mu}}{|\\mu|}} < \\epsilon\n\\end{equation*}\nfor every reduced path $\\alpha$ in $G'_{+}$ of length at most $L$ if $\\mathfrak{g}'_{+}(\\gamma_{h}^{+}) > \\delta$.\\label{GP:weaker}\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nBoth of these statements can be proved using arguments almost identical to 
\\cite[Lemma~6.1]{LU2} (see also \\cite[Lemma~3.17]{U3}). \n\nFor (1), the lower bound on this ratio implies that most of the length of $\\gamma_{h}^{+}$ comes from completely split subpaths in $G_{+}'$. The argument in~\\cite[Lemma~6.1]{LU2} converts this notion to having powers that are close to currents in $\\Delta_{+}$.\n\nFor (2), the lower bound on $\\mathfrak{g}'_{+}$ implies that most of the length of $\\gamma_{h}^{+}$ contained in $G_{+}'$ comes from completely split subpaths in $G_{+}'$. The argument in~\\cite[Lemma~6.1]{LU2} converts this notion to having powers that almost agree with currents in $\\Delta_{+}$ on most subpaths of $G_{+}'$. \n\\end{proof}\n\nThere are, of course, analogous statements for $\\mathfrak{g}'_{-}$.\n\n\\begin{lemma}\\label{lem:backandforthlowergoodness}\nUnder the standing assumption \\ref{stand}, given $0 < \\delta < 1$ and $K \\geq 0$, \nthere exists an $M > 0$ such that for all $h \\in F_{N}$ not conjugate to a power of $g$ either:\n\\begin{align*}\n\\mathfrak{g}'_{+}([f_{+}^{n}(\\gamma_{h}^{+})]) > \\delta & \\mbox{ and } |[f_{+}^{n}(\\gamma_{h}^{+})]|' \\geq K|\\gamma_{h}^{+}|'; \\mbox{ or}\n\\\\\n\\mathfrak{g}'_{-}([f_{-}^{n}(\\gamma_{h}^{-})]) > \\delta & \\mbox{ and } |[f_{-}^{n}(\\gamma_{h}^{-})]|' \\geq K|\\gamma_{h}^{-}|'\n\\end{align*}\nfor all $n \\geq M$.\n\\end{lemma}\n\n\\begin{proof} \nSince the restrictions of $f_{+}$ to $G'_{+}$ and $f_{-}$ to $G'_{-}$ are atoroidal, the result essentially follows from \\cite{U3}. 
Indeed, writing:\n\\begin{align*}\n\\gamma_{h}^{+} & = \\alpha_{0} \\cdot e_{+}^{k_{1}} \\cdot \\alpha_{1} \\cdot \\ldots \\cdot e_{+}^{k_{m}} \\cdot \\alpha_{m} \\\\\n\\gamma_{h}^{-} & = \\beta_{0} \\cdot e_{-}^{k_{1}} \\cdot \\beta_{1} \\cdot \\ldots \\cdot e_{-}^{k_{m}} \\cdot \\beta_{m}\n\\end{align*}\nwe have that \\cite[Lemma~3.19]{U3} provides the existence of an $M_{0}$ such that for each pair $\\{\\alpha_{i},\\beta_{i}\\}$ we have that one of $\\mathfrak{g}_{+}([f_{+}^{M_{0}}(\\alpha_{i})])$ or $\\mathfrak{g}_{-}([f_{-}^{M_{0}}(\\beta_{i})])$ is at least $\\frac{1}{2}$. Let $J \\subseteq \\{0,1,\\ldots,m\\}$ be the subset where the first alternative occurs. Let $L \\geq 1$ be such that $\\frac{1}{L}|[f_+^{M_{0}}(\\alpha_{i})]| \\leq |[f_-^{M_{0}}(\\beta_{i})]| \\leq L|[f_+^{M_{0}}(\\alpha_{i})]|$ for each $i$.\n\nSuppose that $\\sum_{i \\in J} |[f_+^{M_{0}}(\\alpha_{i})]| \\geq \\frac{1}{2}\\sum_{i=0}^{m}|[f_+^{M_{0}}(\\alpha_{i})]|$. Then:\n\\begin{align*}\n\\mathfrak{g}'_{+}([f_{+}^{M_{0}}(\\gamma_{h}^{+})]) & = \\frac{\\sum_{i=0}^{m}|[f_{+}^{M_{0}}(\\alpha_{i})]|\\mathfrak{g}_{+}([f^{M_{0}}_{+}(\\alpha_{i})])}{\\sum_{i=0}^{m}|[f_{+}^{M_{0}}(\\alpha_{i})]|} \\\\\n& \\geq \\frac{1}{2} \\frac{\\sum_{i \\in J}|[f_{+}^{M_{0}}(\\alpha_{i})]|\\mathfrak{g}_{+}([f^{M_{0}}_{+}(\\alpha_{i})])}{\\sum_{i \\in J}|[f_{+}^{M_{0}}(\\alpha_{i})]|} \\\\\n& \\geq \\frac{1}{4}. 
\n\\end{align*}\n\nOtherwise we have $\\sum_{i \\notin J}|[f_+^{M_{0}}(\\alpha_{i})]|\\geq \\frac{1}{2}\\sum_{i=0}^{m}|[f_+^{M_{0}}(\\alpha_{i})]|$ and so:\n\\begin{align*}\n\\sum_{i \\notin J} |[f_-^{M_{0}}(\\beta_{i})]| \\geq & \\frac{1}{L}\\sum_{i \\notin J}|[f_+^{M_{0}}(\\alpha_{i})]| \\\\ \n\\geq & \\frac{1}{2L}\\sum_{i=0}^{m}|[f_+^{M_{0}}(\\alpha_{i})]| \\geq \\frac{1}{2L^{2}}\\sum_{i=0}^{m}|[f_-^{M_{0}}(\\beta_{i})]|.\n\\end{align*}\nA similar calculation shows that $\\mathfrak{g}'_{-}([f_{-}^{M_{0}}(\\gamma_{h}^{-})]) \\geq \\frac{1}{4L^{2}}$ in this case.\n\nNext, the proof of \\cite[Lemma~3.16]{U3} provides the existence of an $M_{1}$ such that if $\\mathfrak{g}'_{\\pm}(\\gamma) \\geq \\frac{1}{4L^{2}}$ then $\\mathfrak{g}'_{\\pm}([f_{\\pm}^{n}(\\gamma)]) > \\delta$ for $n \\geq M_{1}$. Finally, the proof of \\cite[Lemma~3.14]{U3} provides the existence of an $M_{2}$ such that if $\\mathfrak{g}'_{\\pm}(\\gamma) > 0$, then $\\mathfrak{g}'_{\\pm}([f^{n}_{\\pm}(\\gamma)]) > \\mathfrak{g}'_{\\pm}(\\gamma)$ for all $n \\geq M_{2}$. Hence for $M = M_{0}M_{1} + M_{2}$ we have that the first conclusion of the alternative holds.\n\nThe second conclusion of the alternative follows from the proof of~\\cite[Lemma~3.16]{U3} as well. Indeed, in this lemma, it is shown that for each $0 < \\delta' < 1$ there is a $\\lambda > 0$ such that if $\\mathfrak{g}_{\\pm}(\\gamma) \\geq \\delta'$ where $\\gamma$ is a path in $G_{\\pm}'$ then $|[f_{\\pm}^{n}(\\gamma)]| \\geq 2^{n}\\lambda|\\gamma|$. The argument now proceeds as above using a possibly larger $M$.\n\\end{proof}\n\nCombining the two previous statements, we can show north-south dynamics on $\\PP {\\rm Curr}(F_{N})$ outside of a neighborhood of the fixed point $[\\eta_{g}]$. 
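The length-weighted averaging carried out in the proof of Lemma~\ref{lem:backandforthlowergoodness} can be sanity-checked numerically. The following sketch is illustrative only: the segment lengths and goodness values are hypothetical, not computed from any train track map. It verifies the elementary counting fact used there, namely that if the indices $i \in J$ carry at least half of the total length and each such segment has goodness at least $\frac{1}{2}$, then the length-weighted average goodness over all segments is at least $\frac{1}{4}$.

```python
# Illustrative check of the averaging inequality from the lemma:
# if the segments indexed by J carry >= half the total length and
# each has goodness >= 1/2, the length-weighted average goodness
# over all segments is >= 1/4.  All numbers below are made up.

def weighted_goodness(lengths, goodness):
    """Length-weighted average of goodness values, as in g'_+."""
    total = sum(lengths)
    return sum(l * g for l, g in zip(lengths, goodness)) / total

lengths  = [10, 4, 6, 2]          # hypothetical |[f^{M_0}(alpha_i)]|
goodness = [0.6, 0.0, 0.9, 0.1]   # hypothetical goodness of each segment

# J collects the indices where the first alternative (goodness >= 1/2) holds.
J = [i for i, g in enumerate(goodness) if g >= 0.5]

# Hypothesis: J carries at least half of the total length.
assert sum(lengths[i] for i in J) >= 0.5 * sum(lengths)

# Conclusion: the weighted average over all segments is at least 1/4.
assert weighted_goodness(lengths, goodness) >= 0.25
```

The same two-line computation, with $\frac{1}{2}$ replaced by $\frac{1}{2L^{2}}$ on the length hypothesis, gives the bound $\frac{1}{4L^{2}}$ appearing in the second case of the proof.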
\n\n\\begin{proposition}\\label{prop:backandforth}\nUnder the standing assumption \\ref{stand}, given open neighborhoods $U_{\\pm}$ of $\\Delta_{\\pm}$ and $W$ of $[\\eta_{g}]$ there is an $M > 0$ such that for any rational current $[\\eta_{h}]\\in \\PP {\\rm Curr}(F_{N}) - W$, either $\\varphi^{n}[\\eta_{h}]\\in U_{+}$ or $\\varphi^{-n}[\\eta_{h}]\\in U_-$ for all $n\\ge M$.\\end{proposition}\n\n\\begin{proof}\nTo begin, we observe that $\\frac{\\langle e_{\\pm}, \\mu\\rangle}{|\\mu|}=1$ if and only if $[\\mu]=[\\eta_{g}]$. Hence by continuity of $\\I{e_{+},\\param}$ and compactness of $\\PP {\\rm Curr}(F_{N})$, there is a $0 < s < 1$ such that $\\frac{\\langle e_{\\pm}, \\mu\\rangle}{|\\mu|} \\leq 1 - s$ for $[\\mu] \\notin W$. \n\nLet $0 < \\delta_{0} < 1$ and $M_{0}$ be the maximum of the constants from Proposition~\\ref{prop:goodness properties}\\eqref{GP:stronger} using both $U_{+}$ and $U_{-}$. Set $\\delta = \\sqrt{\\delta_{0}}$ and $K > 1$ large enough so that $\\frac{K}{K + 1\/s} > \\sqrt{\\delta_{0}}$. Finally, let $M_{1}$ be the constant from Lemma~\\ref{lem:backandforthlowergoodness} using these constants. Suppose that $[\\eta_{h}] \\notin W$ and without loss of generality assume that the first alternative of Lemma~\\ref{lem:backandforthlowergoodness} holds for $h$. As $|\\gamma_{h}^{+}| = |\\gamma_{h}^{+}|' + \\I{e_{+},\\gamma_{h}^{+}}$ we get $|\\gamma_{h}^{+}|'\/|\\gamma_{h}^{+}| \\geq s$ and so $\\frac{\\I{e_{+},\\gamma_{h}^{+}}}{|\\gamma_{h}^{+}|'} \\leq \\frac{1-s}{s} < \\frac{1}{s}$. 
\n\nTherefore we find:\n\\begin{align*}\n\\dfrac{|[f^{M_{1}}_{+}(\\gamma_{h}^{+})]|'}{|[f^{M_{1}}_{+}(\\gamma_{h}^{+})]|} & = \n\\dfrac{|[f^{M_{1}}_{+}(\\gamma_{h}^{+})]|'}{|[f^{M_{1}}_{+}(\\gamma_{h}^{+})]|' + \\I{e_{+},\\gamma_{h}^{+}}} = \\dfrac{1}{1 + \\dfrac{\\I{e_{+},\\gamma_{h}^{+}}}{|[f^{M_{1}}_{+}(\\gamma_{h}^{+})]|'}} \\\\\n& \\geq \\dfrac{1}{1 + \\dfrac{\\I{e_{+},\\gamma_{h}^{+}}}{K|\\gamma_{h}^{+}|'}}\n\\geq \\dfrac{1}{1 + \\dfrac{1}{Ks}} = \\frac{K}{K + 1\/s} > \\sqrt{\\delta_{0}}.\n\\end{align*} \nAnd thus:\n\\begin{equation*}\n\\dfrac{\\mathfrak{g}'_{+}([f_{+}^{M_{1}}(\\gamma_{h}^{+})])|[f^{M_{1}}_{+}(\\gamma_{h}^{+})]|'}{|[f^{M_{1}}_{+}(\\gamma_{h}^{+})]|} > \\delta \\sqrt{\\delta_{0}} = \\delta_{0}.\n\\end{equation*}\nHence by Proposition~\\ref{prop:goodness properties}\\eqref{GP:stronger} we have $\\varphi^{n}[\\eta_{h}] \\in U_{+}$ for $n \\geq M = M_{0} + M_{1}$.\n\\end{proof}\n\n\nIn order to promote Proposition~\\ref{prop:backandforth} to generalized north-south dynamics everywhere, we need to know that there are contracting neighborhoods. This is the content of the next two lemmas, which is where we need the notion of completely split goodness for currents and Lemma~\\ref{generalizedgoodnessproperties}. We have one lemma dealing with neighborhoods of $\\Delta_{\\pm}$ and one lemma for neighborhoods of $\\widehat \\Delta_{\\pm}$.\n\n\\begin{lemma}\\label{lem:nbhds}\nUnder the standing assumption \\ref{stand}, given open neighborhoods $U_{\\pm}$ of $\\Delta_{\\pm}$ there are open neighborhoods $U_{\\pm}' \\subseteq U_{\\pm}$ of $\\Delta_{\\pm}$ such that $\\varphi^{\\pm 1}(U_{\\pm}') \\subseteq U_{\\pm}'$. \n\\end{lemma}\n\n\\begin{proof}\nWe first observe that for any point $[\\mu]\\in \\Delta_{+}$, the completely split goodness $\\overline{\\mathfrak{g}}_{+}([\\mu]) = 1$. 
This is because any such point is a linear combination of extremal points, extremal points are defined using limits of edges \\cite[Proposition 3.3 and Definition 3.5]{U3}, and $[f^{n}(e)]$ is completely split for all $n\\ge1$. Likewise $\\overline{\\mathfrak{g}}_{-}([\\mu]) = 1$ for any $[\\mu] \\in \\Delta_{-}$.\n\nUsing these observations, the conclusion of the lemma follows from the proofs of Lemma~\\ref{lem:backandforthlowergoodness} and Proposition~\\ref{prop:backandforth}. To begin, given a neighborhood $U_{+}$ of $\\Delta_{+}$ pick a neighborhood $U_{+}^{0} \\subset U_{+}$ such that for all $[\\mu]\\in U_{+}^{0}$ we have $\\overline{\\mathfrak{g}}_{+}(\\mu) > \\delta$ and $\\frac{\\langle e_{+}, \\mu\\rangle}{|\\mu|} < s$ for some $\\delta > s > 0$. Let $0 < \\delta_{0} < 1$ and $M_{0}$ be the constants from Proposition~\\ref{prop:goodness properties}\\eqref{GP:stronger} for $U_{+}^{0}$. \n\nGiven $[\\eta_{h}] \\in U^{0}_{+}$ we find using Lemma~\\ref{generalizedgoodnessproperties}:\n\\begin{align*}\n\\mathfrak{g}'_{+}(\\gamma_{h}^{+}) & \\geq \\mathfrak{g}'_{+}(\\gamma_{h}^{+})\\frac{|\\gamma_{h}^{+}|'}{|\\gamma_{h}^{+}|} = \\mathfrak{g}_{+}(\\gamma_{h}^{+}) - \\frac{\\I{e_{+},\\gamma_{h}^{+}}}{|\\gamma^{+}_{h}|} \\\\\n& \\geq \\overline{\\mathfrak{g}}_{+}(\\eta_{h}) - \\frac{\\I{e_{+},\\eta_{h}}}{|\\eta_{h}|} > \\delta - s.\n\\end{align*}\nAs mentioned in the proof of Lemma~\\ref{lem:backandforthlowergoodness}, there is now an $M_{1}$ such that $\\mathfrak{g}'_{+}([f_{+}^{n}(\\gamma_{h}^{+})]) > \\sqrt{\\delta_{0}}$ for all $n \\geq M_{1}$. Combining this with the proof of Proposition~\\ref{prop:backandforth},\nfor a slightly larger $M_{1}$, we have that $\\frac{|[f_{+}^{n}(\\gamma_{h}^{+})]|'}{|[f_{+}^{n}(\\gamma_{h}^{+})]|} > \\sqrt{\\delta_{0}}$ as well for $n \\geq M_{1}$. By choice of $\\delta_{0}$, this shows $\\varphi^{M}[\\eta_{h}] \\in U^{0}_{+}$ for $M = M_{0} + M_{1}$ and for any rational current $[\\eta_{h}] \\in U_{+}^{0}$. 
As rational currents are dense, we get $\\varphi^{M}(U_{+}^{0}) \\subseteq U_{+}^{0}$. \n\nNow set:\n\\begin{equation*}\nU_{+}' = U_{+}^{0} \\cap \\varphi(U_{+}^{0}) \\cap \\cdots \\cap \\varphi^{M-1}(U_{+}^{0}).\n\\end{equation*} \nAs $\\varphi(\\Delta_{+}) = \\Delta_{+}$, $U_{+}'$ is a neighborhood of $\\Delta_{+}$. Clearly $U_{+}' \\subseteq U_{+}^{0} \\subseteq U_{+}$ and $\\varphi(U_{+}') \\subseteq U_{+}'$ by construction.\n\nA symmetric argument works for a neighborhood of $\\Delta_{-}$.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:hnbhds} \nUnder the standing assumption \\ref{stand}, given open neighborhoods $\\widehat V_{\\pm}$ of $\\widehat \\Delta_{\\pm}$ there are open neighborhoods $\\widehat V_{\\pm}' \\subseteq \\widehat V_{\\pm}$ of $\\widehat \\Delta_{\\pm}$ such that $\\varphi^{\\pm 1}(\\widehat V_{\\pm}') \\subseteq \\widehat V_{\\pm}'$. \n\\end{lemma}\n\n\\begin{proof}\nGiven $[\\mu] \\in \\PP {\\rm Curr}(F_{N})$, a collection of reduced edge paths $\\mathcal{P}$ in some marked graph $G$ and an $\\epsilon > 0$ determines an open neighborhood of $[\\mu]$ in $\\PP {\\rm Curr}(F_{N})$:\n\\begin{equation*}\nN_{G}([\\mu],\\mathcal{P},\\epsilon) = \\left\\{ [\\nu] \\in \\PP {\\rm Curr}(F_{N}) \\mid \\abs{\\frac{\\I{\\gamma,\\nu}}{\\wght{\\nu}} - \\frac{\\I{\\gamma,\\mu}}{\\wght{\\mu}}} < \\epsilon, \\, \\forall \\gamma \\in \\mathcal{P} \\right\\}.\n\\end{equation*} \nFor a subset $X \\subseteq \\PP {\\rm Curr}(F_{N})$, we define $N_{G}(X,\\mathcal{P},\\epsilon)$ as the union of $N_{G}([\\mu],\\mathcal{P},\\epsilon)$ over all $[\\mu] \\in X$. \n\nBy $\\mathcal{P}_{+}(L)$ we denote the set of all reduced edge paths contained in $G'_{+}$ with length at most $L$. We set $\\widehat\\mathcal{P}_{+}(L) = \\mathcal{P}_{+}(L) \\cup \\{e_{+}\\}$. 
We have\n\\begin{equation*}\n\\underset{L \\to \\infty, \\, \\epsilon \\to 0}{\\bigcap} N_{G_{+}}(\\widehat \\Delta_{+},\\widehat\\mathcal{P}_{+}(L), \\epsilon) = \\widehat \\Delta_{+}.\n\\end{equation*}\nThis follows as for any $[\\mu] \\in \\Delta_{+}$, $\\I{\\gamma,\\mu} = 0$ for any reduced edge path not contained in $G'_{+}$ and as $[\\mu]=[\\eta_{g}]$ if and only if $\\I{e_{+}, \\mu} = \\wght{\\mu}$. There is a similar statement for $\\widehat \\Delta_{-}$. \n\nLet $L$ and $\\epsilon$ be such that $N_{G_{+}}(\\widehat \\Delta_{+},\\widehat\\mathcal{P}_{+}(L), \\epsilon) \\subseteq \\widehat V_{+}$. Let $\\delta_{0}$ and $M_{0}$ be the constants from Proposition~\\ref{prop:goodness properties}\\eqref{GP:weaker} using this $L$ and $\\epsilon$. Set $\\widehat V_{+}' = N_{G_{+}}(\\widehat \\Delta_{+},\\widehat\\mathcal{P}_{+}(L), \\epsilon)$ and let $0 < \\delta' < 1$ be such that $\\overline{\\mathfrak{g}}(\\mu) > \\delta'$ for $[\\mu] \\in \\widehat V'_{+}$. By replacing $\\delta_{0}$ with a smaller positive number and $M_{0}$ with a larger constant, we can assume that $\\delta_{0}$ and $M_{0}$ also satisfy the conclusion of Proposition~\\ref{prop:goodness properties}\\eqref{GP:stronger} for the neighborhood $\\widehat V'_{+}$ as well.\n\nWe will now show that there is a constant $M$ such that for any rational current $[\\eta_{h}] \\in \\widehat V_{+}'$ we have $\\varphi^{M}[\\eta_{h}] \\in \\widehat V_{+}'$. Arguing as in Lemma~\\ref{lem:nbhds} the present lemma follows. There are two cases: $\\gamma_{h}^{+}$ has a definite fraction in $G'_{+}$; or not, i.e., $[\\eta_{h}]$ is close to $[\\eta_{g}]$.\n\nThe first case is similar to Lemma~\\ref{lem:nbhds}. Fix an $0 < s < \\delta'$. 
If $[\\eta_{h}] \\in \\widehat V'_{+}$ and $\\frac{\\I{e_{+},\\eta_{h}}}{|\\eta_{h}|} < s$, then arguing as in Lemma~\\ref{lem:nbhds} we have $\\mathfrak{g}'_{+}(\\gamma_{h}^{+}) > \\delta' - s$ and so there is an $M_{1}$ such that $\\mathfrak{g}'_{+}([f_{+}^{n}(\\gamma_{h}^{+})])\\frac{|[f_{+}^{n}(\\gamma_{h}^{+})]|'}{|[f^{n}_{+}(\\gamma_{h}^{+})]|} > \\delta_{0}$ and so $\\varphi^{n}[\\eta_{h}] \\in \\widehat V'_{+}$ for all $n \\geq M_{0} + M_{1}$.\n\nThus for the second case we assume that $[\\eta_{h}] \\in \\widehat V'_{+}$ and $\\frac{\\I{e_{+},\\gamma_{h}^{+}}}{|\\gamma_{h}^{+}|} \\geq s$. If $h$ is a power of a conjugate of $g$, then $\\varphi([\\eta_{h}]) = [\\eta_{h}] \\in \\widehat V_{+}'$. Therefore we can assume that $h$ is not a power of a conjugate of $g$. Hence the path $\\gamma^+_h$ intersects $G'_+$ nontrivially and so $|[f^{n}_{+}(\\gamma_{h}^{+})]|' \\geq 1$ for all $n \\geq 0$.\n\nNext we observe that given $\\delta>0$ and $R>1$, there is a constant $M_{2}>1$ such that for any reduced path $\\alpha$ in $G'_{+}$ which is not a Nielsen path, either $\\mathfrak{g}'_{+}([f_{+}^{M_2}(\\alpha)])>\\delta$ or $|\\alpha|' > R |[f_{+}^{M_2}(\\alpha)]|'$. This is the analog of \\cite[Proposition~4.18]{LU2}. The idea is that any long enough reduced path $\\alpha$ can be subdivided into subpaths of length at most $10C$, and we can find an exponent $M_2$ such that for any reduced edge path $\\gamma$ in $G'_{+}$ with $|\\gamma|<10C$, the path $[f_{+}^{M_2}(\\gamma)]$ is completely split. This tells us that either $[f_{+}^{M_2}(\\alpha)]$ has a definite completely split goodness, or the length $|[f_{+}^{M_2}(\\alpha)]|'$ decreases by a definite amount. 
Hence an argument similar to the one in Lemma~\\ref{lem:backandforthlowergoodness} shows that the following holds after replacing $M_{1}$ with a possibly larger constant:\n\nFor all $h \\in F_{N}$ not conjugate to a power of $g$, we have either: \n\n\\begin{enumerate} \n\n\\item $\\mathfrak{g}'_{+}([f_{+}^{M_1}(\\gamma^{+}_h)])>\\delta_{0}$; or \\label{highreversegood}\n\\item $|[f^{M_1}_{+}(\\gamma^{+}_h)]|' < \\dfrac{1}{R}|\\gamma_h^{+}|'$ \\label{lengthdecreases}\n\n\\end{enumerate}\nwhere $R$ is chosen large enough that $\\frac{1}{1 + Rs} < \\epsilon$ and $\\frac{R}{R + 1\/s} > 1-\\epsilon$. Set $M = M_{0} + M_{1}$.\n\n\n\nFirst assume that \\eqref{highreversegood} holds for $h$. Set $t = \\I{e_{+},[f_{+}^{M}(\\gamma_{h}^{+})]}\/|[f_{+}^{M}(\\gamma_{h}^{+})]|$. As $h$ is not a power of a conjugate of $g$ we have that $0 \\leq t < 1$. As $\\mathfrak{g}'_{+}([f_{+}^{M_{1}}(\\gamma_{h}^{+})]) > \\delta_{0}$, there is a current $[\\mu] \\in \\Delta_{+}$ satisfying the inequality in Proposition~\\ref{prop:goodness properties}\\eqref{GP:weaker} for $[f^{M}_{+}(\\gamma_{h}^{+})]$. We normalize $\\mu$ so that $|\\mu| = 1$. With our normalization, we have that $|t\\eta_{g} + (1-t)\\mu| = 1$ as well. We claim that $\\varphi^{M}[\\eta_{h}] \\in N_{G_{+}}([t\\eta_{g} + (1-t)\\mu],\\widehat\\mathcal{P}_{+}(L),\\epsilon) \\subseteq \\widehat V'_{+}$. 
\n\nFor a path $\\alpha \\in \\mathcal{P}_{+}(L)$ we have $\\I{\\alpha,\\eta_{g}} = 0$, $|[f^{M}_{+}(\\gamma_{h}^{+})]|' = |[f^{M}_{+}(\\gamma_{h}^{+})]|(1-t)$ and so:\n\\begin{align*}\n\\abs{\\frac{\\I{\\alpha,[f_{+}^{M}(\\gamma_{h}^{+})]}}{|[f_{+}^{M}(\\gamma_{h}^{+})]|} - \\I{\\alpha,t\\eta_{g} + (1-t)\\mu}} & = \\abs{\\frac{\\I{\\alpha,[f_{+}^{M}(\\gamma_{h}^{+})]}(1-t)}{|[f_{+}^{M}(\\gamma_{h}^{+})]|(1-t)} - (1-t)\\I{\\alpha,\\mu}} \\\\\n& = \\abs{\\frac{\\I{\\alpha,[f_{+}^{M}(\\gamma_{h}^{+})]}}{|[f_{+}^{M}(\\gamma_{h}^{+})]|'} - \\I{\\alpha,\\mu}}(1-t) \\\\\n& < \\epsilon(1-t) \\leq \\epsilon.\n\\end{align*}\n\nAlso as $\\I{e_{+},\\mu} = 0$ and $\\I{e_{+},\\eta_{g}} = 1$ we find:\n\\begin{align*}\n\\abs{\\frac{\\I{e_{+},[f_{+}^{M}(\\gamma_{h}^{+})]}}{|[f_{+}^{M}(\\gamma_{h}^{+})]|} - \\I{e_{+},t\\eta_{g} + (1-t)\\mu}} &= \\abs{t - t\\I{e_{+},\\eta_{g}}} \\\\\n& = \\abs{t - t} = 0.\n\\end{align*}\nThis shows $\\varphi^{M}[\\eta_{h}] \\in N_{G_{+}}([t\\eta_{g} + (1-t)\\mu],\\widehat\\mathcal{P}_{+}(L),\\epsilon)$ as claimed.\n\nOn the other hand if \\eqref{highreversegood} fails then \\eqref{lengthdecreases} holds for $\\gamma_{h}^{+}$ and so $|[f^{M}_{+}(\\gamma_{h}^{+})]|' \\leq \\frac{1}{R}|\\gamma_{h}^{+}|'$. We claim that $\\varphi^{M}[\\eta_{h}]\\in N_{G_{+}}([\\eta_{g}],\\widehat\\mathcal{P}_{+}(L), \\epsilon)$. Notice that we have $\\I{e_{+},[f^{M}_{+}(\\gamma_{h}^{+})]} = \\I{e_{+},\\gamma_{h}^{+}}$ and $\\frac{\\I{e_{+},\\gamma_{h}^{+}}}{|\\gamma_{h}^{+}|'} \\geq \\frac{\\I{e_{+},\\gamma_{h}^{+}}}{|\\gamma_{h}^{+}|} \\geq s$. 
\n\nFor a path $\\alpha \\in \\mathcal{P}_{+}(L)$ we have $\\I{\\alpha,[f^{M}_{+}(\\gamma_{h}^{+})]} \\leq |[f^{M}_{+}(\\gamma_{h}^{+})]|'$ and so:\n\\begin{align*}\n0 < \\frac{\\I{\\alpha,[f_{+}^{M}(\\gamma_{h}^{+})]}}{|[f_{+}^{M}(\\gamma_{h}^{+})]|} &\\leq \\frac{|[f_{+}^{M}(\\gamma_{h}^{+})]|'}{|[f_{+}^{M}(\\gamma_{h}^{+})]|'+\\I{e_{+},[f_{+}^{M}(\\gamma_{h}^{+})]}} \\\\\n& = \\frac{1}{1 + \\frac{\\I{e_{+},\\gamma_{h}^{+}}}{|[f_{+}^{M}(\\gamma_{h}^{+})]|'}} \\leq \\frac{1}{1 + \\frac{R\\I{e_{+},\\gamma_{h}^{+}}}{|\\gamma_{h}^{+}|'}} \\\\\n& \\leq \\frac{1}{1 + Rs} < \\epsilon.\n\\end{align*}\nTherefore as $\\I{\\alpha,\\eta_{g}} = 0$ we have:\n\\begin{equation*}\n\\abs{\\frac{\\I{\\alpha,[f^{M}_{+}(\\gamma_{h}^{+})]}}{|[f^{M}_{+}(\\gamma_{h}^{+})]|} - \\I{\\alpha,\\eta_{g}}} < \\epsilon.\n\\end{equation*}\n\nAdditionally, we have:\n\\begin{align*}\n1 > \\frac{\\I{e_{+},[f^{M}_{+}(\\gamma_{h}^{+})]}}{|[f^{M}_{+}(\\gamma_{h}^{+})]|} &= \n\\frac{\\I{e_{+},\\gamma_{h}^{+}}}{|[f^{M}_{+}(\\gamma_{h}^{+})]|} = \\frac{\\I{e_{+},\\gamma_{h}^{+}}}{|[f^{M}_{+}(\\gamma_{h}^{+})]|' + \\I{e_{+},\\gamma_{h}^{+}}} \\\\\n& \\geq \\frac{\\I{e_{+},\\gamma_{h}^{+}}}{\\frac{1}{R}|\\gamma_{h}^{+}|' + \\I{e_{+},\\gamma_{h}^{+}}} = \\frac{R\\I{e_{+},\\gamma_{h}^{+}}}{|\\gamma_{h}^{+}|' + R\\I{e_{+},\\gamma_{h}^{+}}} \\\\\n& = \\frac{R}{R + \\frac{|\\gamma_{h}^{+}|'}{\\I{e_{+},\\gamma_{h}^{+}}}} \\geq \\frac{R}{R + 1\/s} > 1 - \\epsilon.\n\\end{align*}\nTherefore as $\\I{e_{+},\\eta_{g}} = 1$ we have:\n\\begin{equation*}\n\\abs{\\frac{\\I{e_{+},[f^{M}_{+}(\\gamma_{h}^{+})]}}{|[f^{M}_{+}(\\gamma_{h}^{+})]|} - \\I{e_{+},\\eta_{g}}} < \\epsilon.\n\\end{equation*}\nThis shows $\\varphi^{M}[\\eta_{h}] \\in N_{G_{+}}([\\eta_{g}],\\widehat\\mathcal{P}_{+}(L),\\epsilon)$ as claimed.\n\\end{proof}\n\n\n\n\n\\subsection{Generalized north-south dynamics for almost atoroidal elements}\\label{subsec:gns}\n\nUsing the material from the previous two sections, we can now prove the main technical result 
needed for Theorem~\\ref{th:alternative}.\n\n\\begin{theorem}\\label{th:gns}\nSuppose $A < F_{N}$ is a co-rank $1$ free factor and $\\varphi \\in \\IA_{N}(\\ZZ\/3) \\cap \\Out(F_{N};A)$ is such that $\\varphi\\big|_{A}$ is atoroidal. Let $\\Delta_{+}$ and $\\Delta_{-}$ be the inclusion to $\\PP {\\rm Curr}(F_{N})$ of the $\\varphi$--invariant simplices in $\\PP {\\rm Curr}(A)$ from Theorem~\\ref{dynamicsofhyp} for $\\varphi\\big|_{A}$. Assume $\\varphi$ is not atoroidal and let $[g]$ be the fixed conjugacy class in $F_{N}$ given by Proposition~\\ref{prop:co rank one atoroidal}\\eqref{item:one edge fixed}. Then $\\varphi$ acts on $\\PP {\\rm Curr}(F_{N})$ with generalized north-south dynamics. Specifically, for the two invariant sets \n\\begin{equation*}\n\\widehat \\Delta_{-}=\\{[t\\eta_{g}+(1-t)\\mu_{-}]\\mid [\\mu_{-}]\\in\\Delta_{-}, t\\in[0,1]\\}\n\\end{equation*}\nand \n\\begin{equation*}\n\\widehat \\Delta_{+}=\\{[t\\eta_{g}+(1-t)\\mu_{+}]\\mid [\\mu_{+}]\\in\\Delta_{+}, t\\in[0,1]\\},\n\\end{equation*}\ngiven any open neighborhood $U_{\\pm}$ of $\\Delta_{\\pm}$ in $\\PP {\\rm Curr}(F_{N})$ and open neighborhood $\\widehat V_{\\pm}$ of $\\widehat \\Delta_{\\pm}$ in $\\PP {\\rm Curr}(F_{N})$, there is an $M > 0$ such that $\\varphi^{\\pm n}(\\PP {\\rm Curr}(F_{N}) - \\widehat V_{\\mp})\\subset U_{\\pm}$ for all $n\\ge M$. 
\n\\end{theorem}\n\nSee Figure~\\ref{fig:gns set-up} for a schematic of the sets mentioned in Theorem~\\ref{th:gns}.\n\n\\begin{figure}[h!]\t\n\\centering\n\\begin{tikzpicture}[every node\/.style={inner sep=0pt},scale=0.9]\n\\draw[very thick] (-5,4) rectangle (5,-3);\n\\draw[thick,pattern=crosshatch dots,pattern color=black!10!white] (-5,-1) rectangle (5,-3);\n\\filldraw[black!20!white] (-4,-1) -- (0,2.5) -- (-1,-1) -- cycle;\n\\filldraw[black!20!white] (4,-1) -- (0,2.5) -- (1,-1) -- cycle;\n\\draw[very thick] (-4,-1) -- (-1,-1) node[pos=0] (a1) {} node[pos=1] (a2) {};\n\\draw[very thick] (4,-1) -- (1,-1) node[pos=0] (b1) {} node[pos=1] (b2) {};\n\\node at (0,2.5) (c) {};\n\\draw[very thick] (a1) -- (c) -- (a2);\n\\draw[very thick] (b1) -- (c) -- (b2);\n\\draw[thick,dashed,blue,rounded corners=25pt] (-5.2,-1.5) -- (0.5,3.4) -- (-0.6,-1.5) -- cycle;\n\\draw[thick,dashed,red,rounded corners=25pt] (5.2,-1.5) -- (-0.5,3.4) -- (0.6,-1.5) -- cycle;\n\\draw[thick,dashed,red,rounded corners=5pt] (-4.2,-0.8) -- (-0.8,-0.8) --(-0.8,-1.2) -- (-4.2,-1.2) -- cycle;\n\\draw[thick,dashed,blue,rounded corners=5pt] (4.2,-0.8) -- (0.8,-0.8) --(0.8,-1.2) -- (4.2,-1.2) -- cycle;\n\\node at (-2.5,-2) {$\\Delta_{+} \\subset {\\color{red}U_{+}}$};\n\\node at (-2.5,2.3) {${\\widehat \\Delta_{+}} \\subset {\\color{blue}\\widehat V_{+}}$};\n\\node at (2.5,-2) {$\\Delta_{-} \\subset {\\color{blue}U_{-}}$};\n\\node at (2.5,2.3) {${\\widehat \\Delta_{-}} \\subset {\\color{red}\\widehat V_{-}}$};\n\\node at (0,3.5) {$[\\eta_{g}]$};\n\\node at (0,-2) {$\\PP {\\rm Curr}(A)$};\n\\foreach \\a in {a1,a2,b1,b2,c}\n\t\\filldraw (\\a) circle [radius=0.075cm];\n\\end{tikzpicture}\n\\caption{The set-up of neighborhoods in Theorem~\\ref{th:gns}. 
For $n \\geq M$, the element $\\varphi^{n}$ sends the complement of $\\widehat V_{-}$ into $U_{+}$; the element $\\varphi^{-n}$ sends the complement of $\\widehat V_{+}$ into $U_{-}$.}\\label{fig:gns set-up}\n\\end{figure}\n\n\n\n\n\\begin{proof} \n\nWe replace $\\varphi$ by a power so that the results from Section~\\ref{subsec:goodness grows} apply. This is addressed at the end of the proof.\n\nBy Lemmas~\\ref{lem:nbhds} and \\ref{lem:hnbhds} we can assume that $\\varphi(U_{+}) \\subseteq U_{+}$ and $\\widehat V_{-} \\subseteq \\varphi(\\widehat V_{-})$.\nLet $M$ be the exponent given by Proposition~\\ref{prop:backandforth} applied with the given neighborhood $U_{+}$ and with $U_{-} = W = \\widehat V_{-}$. \n\nFor any current\n\\begin{equation*}\n[\\mu]\\in \\varphi^{M}(\\PP {\\rm Curr}(F_{N}) - \\widehat V_{-}) = \\PP {\\rm Curr}(F_{N})-\\varphi^{M}(\\widehat V_{-}) \\subseteq \\PP {\\rm Curr}(F_{N}) - W\n\\end{equation*}\nwe have $\\varphi^{M}[\\mu]\\in U_{+}$ by Proposition~\\ref{prop:backandforth}, as $\\varphi^{-M}[\\mu]\\notin \\widehat V_{-}$. Therefore for any current $[\\mu]\\in\\PP {\\rm Curr}(F_{N})-\\widehat V_{-}$, we have $\\varphi^{2M}[\\mu] \\in U_{+}$ and hence $\\varphi^{2n}[\\mu] \\in U_{+}$ for all $n \\geq M$ as $\\varphi(U_{+}) \\subseteq U_{+}$. Therefore, \n\\begin{equation*}\n\\varphi^{2n} (\\PP {\\rm Curr}(F_{N}) - \\widehat V_{-}) \\subset U_{+}\n\\end{equation*}\nfor all $n\\ge M$. A symmetric argument for $\\varphi^{-1}$ shows that $\\varphi^{2}$ acts with generalized north-south dynamics. We then invoke \\cite[Proposition 3.4]{LU2} to deduce that $\\varphi$ (and hence the original outer automorphism) acts with generalized north-south dynamics. \n\\end{proof} \n\nWe conclude this section with the analog of Lemma~\\ref{lem:dynamics in simplex} regarding the behavior of length under iteration of $\\varphi$ that is needed for Theorem~\\ref{prop:atoroidal}. 
In this statement and its proof, we assume $\\varphi \\in \\Out(F_{N})$ satisfies the hypotheses of Theorem~\\ref{th:gns} and $\\Delta_{\\pm}$, $\\widehat \\Delta_{\\pm}$ are the $\\varphi$--invariant simplices in $\\PP {\\rm Curr}(F_{N})$ appearing in the statement of that theorem. \n\n\\begin{lemma}\\label{lem:growth outside of nbhd}\nFor each $C > 0$ and neighborhood $\\widehat V \\subset \\PP {\\rm Curr}(F_{N})$ of $\\widehat \\Delta_{-}$ there is a constant $M > 0$ such that if $[\\mu] \\notin \\widehat V$, then $\\wght{\\varphi^{n}\\mu} \\geq C\\wght{\\mu}$ for all $n \\geq M$.\n\\end{lemma}\n\n\\begin{proof}\nThere is a constant $P$ such that for each current $[\\nu] \\in \\Delta_{+}^{(0)}$ there is a real number $\\lambda_{\\nu} > 1$ such that $\\varphi^{P}\\nu = \\lambda_{\\nu}\\nu$~\\cite[Remark~6.5]{LU2}. Let $\\lambda_{0} = \\min\\{\\lambda_{\\nu} \\mid [\\nu] \\in \\Delta_{+}^{(0)} \\}$ and $B_{0}$ be large enough so that $\\lambda_{0}^{B_{0}} \\geq 3$. Hence $\\wght{\\varphi^{PB_{0}}\\nu} \\geq 3\\wght{\\nu}$ for any $[\\nu] \\in \\Delta_{+}^{(0)}$. Since the weight function is linear, for any $[\\mu] \\in \\Delta_{+}$ we have $\\wght{\\varphi^{PB_{0}}\\mu} \\geq 3\\wght{\\mu}$ too.\n \nHence there is a neighborhood $U \\subseteq \\PP {\\rm Curr}(F_{N})$ of $\\Delta_{+}$ such that $\\wght{\\varphi^{PB_{0}}\\mu} \\geq 2\\wght{\\mu}$ for all $[\\mu] \\in U$. \nBy replacing $U$ with a smaller neighborhood, we may assume $\\varphi(U) \\subseteq U$ and $U \\cap \\Delta_{-} = \\emptyset$ by Lemma~\\ref{lem:nbhds}. Hence $\\wght{\\varphi^{aPB_{0}}\\mu} \\geq 2^{a}\\wght{\\mu}$ for $[\\mu] \\in U$. \nLet $K = \\inf\\{ \\wght{\\varphi^{i}\\mu}\/\\wght{\\mu} \\mid [\\mu] \\in U, \\, 0 \\leq i < PB_{0} \\}$.\n \nLet $M_{0}$ be the constant from Theorem~\\ref{th:gns} applied to the neighborhoods $U$ and $\\widehat V$. 
As $\\PP {\\rm Curr}(F_{N})$ is compact, there is a constant $L > 0$ such that $\\wght{\\varphi^{M_{0}}\\mu} \\geq L \\wght{\\mu}$ for all $[\\mu] \\in \\PP {\\rm Curr}(F_{N})$. \n\nLet $B_{1}$ be large enough so that $2^{B_{1}}KL \\geq C$ and set $M = PB_{0}B_{1} + M_{0}$. If $n \\geq M$, we can write $n = aPB_{0} + i + M_{0}$ where $a \\geq B_{1}$ and $0 \\leq i < PB_{0}$. Then for $[\\mu] \\notin \\widehat V$, we have $[\\varphi^{M_{0}}\\mu], [\\varphi^{i + M_{0}}\\mu] \\in U$ and so\n\\begin{equation*}\n\\wght{\\varphi^{n}\\mu} \\geq 2^{a}\\wght{\\varphi^{i + M_{0}}\\mu} \\geq 2^{a}K\\wght{\\varphi^{M_{0}}\\mu} \\geq 2^{a}KL\\wght{\\mu} \\geq C\\wght{\\mu}.\\qedhere\n\\end{equation*}\n\\end{proof}\n\n\n\n\n\\section{Pushing past single-edge extensions}\\label{sec:push past one edge}\n\nIn this section we apply Theorem~\\ref{th:gns} to deal with the case of pushing past single-edge extensions. Here we use the action on the space of currents to demonstrate that an element is atoroidal. Given a single-edge extension $\\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1}$ invariant under $\\mathcal{H}$ and $\\varphi \\in \\mathcal{H}$ such that $\\varphi\\big|_{\\mathcal{F}_{0}}$ is atoroidal, if there is some nontrivial $g \\in F_{N}$ whose conjugacy class is $\\varphi$--periodic, we will either find a finite index subgroup of $\\mathcal{H}$ that fixes $[g]$, or an element $\\psi \\in \\mathcal{H}$ so that we can play ping-pong with $\\varphi$, $\\psi \\varphi \\psi^{-1}$ to produce an element which is atoroidal on $\\mathcal{F}_{1}$.\n\nTo begin, we need a lemma that sets up the appropriate conditions for playing ping-pong.\n\n\\begin{lemma}\\label{lem:conjugating element}\nSuppose $\\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1}$ is a handle extension that is invariant under $\\mathcal{H} < \\IA_{N}(\\ZZ\/3)$ and $\\varphi \\in \\mathcal{H}$ is such that $\\varphi\\big|_{\\mathcal{F}_{0}}$ is atoroidal. 
Assume $\\varphi\\big|_{\\mathcal{F}_{1}}$ is not atoroidal and let $[A] \\in \\mathcal{F}_{0}$ and $g \\in F_{N}$ be as given by Proposition~\\ref{prop:co rank one atoroidal}\\eqref{item:one edge fixed} and denote $F = A \\ast \\I{g}$. Let $\\Delta_{+}(A)$ and $\\Delta_{-}(A)$ be the inclusion to $\\PP {\\rm Curr}(F)$ of the invariant simplices in $\\PP {\\rm Curr}(A)$ from Theorem~\\ref{dynamicsofhyp} for $\\varphi\\big|_{A}$ and for each other $[B] \\in \\mathcal{F}_{0}$, let $\\Delta_{+}(B)$ and $\\Delta_{-}(B)$ be the invariant simplices in $\\PP {\\rm Curr}(B)$ from Theorem~\\ref{dynamicsofhyp} for $\\varphi\\big|_{B}$. Either:\n\\begin{enumerate}\n\\item there is a finite index subgroup $\\mathcal{H}'$ of $\\mathcal{H}$ such that $\\mathcal{H}'[g] = [g]$; or\n\\item there is a $\\psi \\in \\mathcal{H}$ such that $\\psi[g] \\neq [g]$ and $\\Delta_{+}(B) \\cap \\psi\\big|_{B} \\Delta_{-}(B) = \\Delta_{-}(B) \\cap \\psi\\big|_{B}\\Delta_{+}(B) = \\emptyset$ for all $[B] \\in \\mathcal{F}_{0}$ \\textup{(}including $[A]$\\textup{)}. \n\\end{enumerate}\n\\end{lemma}\n\n\n\\begin{proof}\nConsider the orbit of the conjugacy class $[g]$ under $\\mathcal{H}$. If the orbit is finite, then there is a finite index subgroup $\\mathcal{H}'$ of $\\mathcal{H}$ that fixes $[g]$ and so (1) holds.\n\nElse, there is an infinite set $X \\subseteq \\mathcal{H}$ such that $h_{1}[g] \\neq h_{2}[g]$ for all distinct $h_{1}, h_{2} \\in X$. We claim that there is a pair $h_{1}, h_{2} \\in X$ such that $\\psi = h_{2}^{-1} h_{1}$ satisfies the conclusion (2). By construction of $X$, we have $h_{2}^{-1}h_{1}[g] \\neq [g]$ for all distinct $h_{1},h_{2} \\in X$ and so we only need to concern ourselves with the intersection of the simplices. To ease notation here, we will implicitly be using the appropriate restrictions of the elements in $X$. 
\n\nTo this end, we first consider the vertices $\\Delta_{\\pm}(B)^{(0)}$ for each $[B] \\in \\mathcal{F}_{1}$, i.e., the extremal measures in $\\Delta_{\\pm}(B)$. For each such extremal measure $[\\mu]$, the support $\\supp([\\mu])$ contains a sublamination that is uniquely ergodic. Indeed, any such measure comes from an aperiodic $EG$ stratum $H_r$ in the $\\CT$ that represents $\\varphi$ \\cite[Remark 3.4 and Definition 3.5]{U3}. The restriction of $\\varphi$ to each $\\varphi$--invariant minimal free factor $B_0$ contained in $\\pi_1(G_r)$ is both fully irreducible and atoroidal. The support $\\supp(\\mu_0)$ of the corresponding attracting current $[\\mu_0]$ is contained in the support of $[\\mu]$, and $\\supp(\\mu)$ is uniquely ergodic \\cite[Proposition 4.4]{Uyaiwip}. \n\nThe fact that $\\supp(\\mu_0)\\subset \\supp(\\mu)$ follows from the following facts. Recall that for any $\\nu\\in\\Curr(F_{N})$, \n$\\supp(\\nu)$ consists of all bi-infinite paths $\\beta$ such that for any finite subpath $\\gamma$ of $\\beta$ $\\langle \\gamma, \\nu\\rangle >0$ \\cite[Lemma 3.7]{KL3}. Note that by definition the bi-infinite path $\\beta$ obtained by iterating an edge $e$ in an $\\EG$ stratum is in the support of the corresponding current. Further, for $e\\in H_r$, the attracting lamination corresponding to $H_r$ is the closure of $\\beta$ \\cite[Lemma 3.1.10 and Lemma 3.1.15]{BFH00}. The attracting lamination corresponding to a minimal stratum on which $H_r$ maps over is precisely the support of $\\mu_0$, hence \n\\[\n\\supp(\\mu_0)=\\Lambda(B_0, \\varphi)\\subset\\Lambda(\\pi_{1}(G_r), \\varphi).\n\\]\n\n\nMoreover, there are only finitely many such sublaminations. 
We set $E_{\\varphi}$ to be the set of projective classes of currents obtained by restricting an extremal measure in some $\\Delta_{\\pm}(B)^{(0)}$ to a uniquely ergodic sublamination contained in its support.\n\n\nSince the set $E_{\\varphi}$ is finite, we can replace $X$ with an infinite subset (which we will still denote $X$) such that for each $s \\in E_{\\varphi}$ either $h_{1}s = h_{2}s$ for all $h_{1},h_{2} \\in X$ or $h_{1}s \\neq h_{2}s$ for all distinct $h_{1}, h_{2} \\in X$. Let $E_{1} \\subseteq E_{\\varphi}$ be the subset for which the first alternative occurs and $E_{\\infty} = E_{\\varphi} - E_{1}$.\n\nNext fix an arbitrary $h_{1} \\in X$ and for each $s \\in E_{\\infty}$ let \\begin{equation*}\nX_{s} = \\{ h \\in X \\mid h_{1}s = hs' \\text{ for some } s' \\in E_{\\infty} \\}.\n\\end{equation*}\nNotice that each $X_{s}$ is a finite set: for each $s' \\in E_{\\infty}$ there is at most one $h \\in X$ with $hs' = h_{1}s$, so $|X_{s}| \\leq |E_{\\infty}|$. Take $h_{2} \\in X - \\bigcup_{s \\in E_{\\infty}} X_{s}$. Then for any $s \\in E_{\\infty}$ we have $h_{1}s \\neq h_{2}s'$ for any $s' \\in E_{\\infty}$. If $h_{1}s = h_{2}s'$ for some $s' \\in E_{1}$, then $s = h_{1}^{-1} h_{2}s' = s'$, contradicting the fact that $s \\in E_{\\infty}$. Therefore $h_{2}^{-1} h_{1} s \\notin E_{\\varphi}$ for all $s \\in E_{\\infty}$ and $h_{2}^{-1} h_{1}s = s$ for all $s \\in E_{1}$. \n\nSet $\\psi = h_{2}^{-1}h_{1}$. We have that for any $s \\in E_{\\varphi}$, either $\\psi s = s$ or $\\psi s \\notin E_{\\varphi}$. \n\nNow take $[\\mu] \\in \\Delta_{-}(B)$ for some $[B] \\in \\mathcal{F}_{1}$ and suppose that $\\psi[\\mu] \\in \\Delta_{+}(B)$. Therefore we can write $\\mu = \\sum_{i=1}^{m} a_{i} \\mu_{i}^{-}$ for some extremal measures $[\\mu_{i}^{-}] \\in \\Delta_{-}(B)^{(0)}$ and coefficients $a_{i} > 0$. Hence we have:\n\\begin{equation*}\n\\sum_{i=1}^{m} a_{i}\\psi\\mu_{i}^{-} = \\psi\\mu = \\sum_{j=1}^{n} b_{j}\\mu_{j}^{+}\n\\end{equation*}\nfor some extremal measures $[\\mu_{j}^{+}] \\in \\Delta_{+}(B)^{(0)}$ and coefficients $b_{j} > 0$.
In particular the union of the supports $\\supp(\\psi \\mu_{i}^{-})$ for $i = 1,\\ldots,m$ equals the union of the supports $\\supp(\\mu_{j}^{+})$ for $j = 1,\\ldots,n$. Let $\\Lambda \\subseteq \\supp(\\mu_{1}^{-})$ be a uniquely ergodic sublamination. As uniquely ergodic laminations are minimal, $\\psi \\Lambda$ is a sublamination of $\\supp(\\mu_{j}^{+})$ for some $j$. Thus $\\psi[\\mu_{1}^{-}\\big|_{\\Lambda}] = [\\mu_{j}^{+}\\big|_{\\psi\\Lambda}]$. This is a contradiction as $[\\mu_{1}^{-}\\big|_{\\Lambda}], [\\mu_{j}^{+}\\big|_{\\psi\\Lambda}] \\in E_{\\varphi}$ are distinct, whereas $\\psi$ maps each element of $E_{\\varphi}$ either to itself or outside of $E_{\\varphi}$. \n\\end{proof}\n\n\nWe can now play ping-pong to construct atoroidal elements.\n\n\\begin{proposition}\\label{prop:atoroidal}\nSuppose $\\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1}$ is a single-edge extension that is invariant under $\\mathcal{H} < \\IA_{N}(\\ZZ\/3)$ and $\\varphi \\in \\mathcal{H}$ is such that $\\varphi\\big|_{\\mathcal{F}_{0}}$ is atoroidal. Assume $\\varphi\\big|_{\\mathcal{F}_{1}}$ is not atoroidal and let $[g]$ be the fixed conjugacy class in $F_{N}$ given by Proposition~\\ref{prop:co rank one atoroidal}\\eqref{item:one edge fixed}. Either:\n\\begin{enumerate}\n\\item there is a finite index subgroup $\\mathcal{H}'$ of $\\mathcal{H}$ such that $\\mathcal{H}'[g] = [g]$; or\n\\item there is a $\\psi \\in \\mathcal{H}$ and a constant $M > 0$ such that $(\\theta^{m}\\varphi^{n})\\big|_{\\mathcal{F}_{1}}$ is atoroidal for any $m,n \\geq M$ where $\\theta = \\psi\\varphi\\psi^{-1}$.\n\\end{enumerate}\n\\end{proposition} \n\n\n\\begin{proof}\nAssume (1) does not hold. Let $\\psi \\in \\mathcal{H}$ be the element given by Lemma~\\ref{lem:conjugating element} and set $\\theta = \\psi\\varphi\\psi^{-1}$. Also, let $[A] \\in \\mathcal{F}_{0}$ be the free factor given by Proposition~\\ref{prop:co rank one atoroidal} and denote $F = A \\ast \\I{g}$.
Notice that $\\theta\\big|_{B}$ is atoroidal for all $[B] \\in \\mathcal{F}_{0}$ and $[g'] = \\psi[g] \\neq [g]$ is the only conjugacy class in $\\mathcal{F}_{1}$ fixed by $\\theta$ up to taking powers and inversion. We will show that for sufficiently large $m$ and $n$ and any $[B] \\in \\mathcal{F}_{1}$ the element $(\\theta^{m}\\varphi^{n})\\big|_{B}$ does not have any non-zero fixed points in $\\Curr(B)$.\n\nFor each $[B] \\in \\mathcal{F}_{0}$, let $\\Delta_{\\pm}(B)$ be the invariant simplices as defined in Lemma~\\ref{lem:conjugating element}. By this lemma we have that $\\Delta_{+}(B) \\cap \\psi\\big|_{B}\\Delta_{-}(B) = \\Delta_{-}(B) \\cap \\psi\\big|_{B}\\Delta_{+}(B) = \\emptyset$ for any $[B] \\in \\mathcal{F}_{0}$. To begin, we will assume that $\\mathcal{F}_{0} = \\{[A]\\}$, $\\mathcal{F}_{1} = \\{[F]\\}$ and to simplify notation, we will implicitly use the restrictions of the elements to $F$. \n\nThere are open sets $U, V, \\widehat U, \\widehat V \\subset \\PP {\\rm Curr}(F)$ such that:\n\\begin{enumerate}\n\\item $\\Delta_{+} \\subset U$, $\\widehat \\Delta_{+} \\subset \\widehat U$, $\\Delta_{-} \\subset V$ and $\\widehat \\Delta_{-} \\subset \\widehat V$;\n\\item $U \\subseteq \\widehat U$, $V \\subseteq \\widehat V$; and\n\\item $\\widehat U \\cap \\psi\\widehat V = \\emptyset$ and $\\psi \\widehat U \\cap \\widehat V = \\emptyset$.\n\\end{enumerate}\nSee Figure~\\ref{fig:nbhd set-up}. 
\n\\begin{figure}[h!]\t\n\\centering\n\\begin{tikzpicture}[every node\/.style={inner sep=0pt},scale=0.9]\n\\draw[very thick] (-5,4) rectangle (5,-4);\n\\filldraw[black!20!white] (-3.25,1.75) -- (0,0.875) -- (-3.25,0) -- cycle;\n\\filldraw[black!20!white] (-3.25,-1.75) -- (0,-0.875) -- (-3.25,0) -- cycle;\n\\filldraw[black!20!white] (3.25,1.75) -- (0,0.875) -- (3.25,0) -- cycle;\n\\filldraw[black!20!white] (3.25,-1.75) -- (0,-0.875) -- (3.25,0) -- cycle;\n\\draw[very thick] (-3.25,1.75) -- (-3.25,-1.75) node[pos=0] (a1) {} node[pos=0.5] (b1) {} node[pos=1] (c1) {};\n\\draw[very thick] (3.25,1.75) -- (3.25,-1.75) node[pos=0] (a2) {} node[pos=0.5] (b2) {} node[pos=1] (c2) {};\n\\node at (0,-0.875) (d2) {};\n\\draw[very thick] (a1) -- (b2) node[pos=0.5] (d1) {};\n\\draw[very thick] (b1) -- (a2);\n\\draw[very thick] (b1) -- (c2) node[pos=0.5] (d2) {};\n\\draw[very thick] (c1) -- (b2);\n\\draw[thick,dashed,blue,rounded corners=20pt] (-3.8,2.45) -- (0.85,0.875) -- (-3.8,-0.7) -- cycle;\n\\draw[thick,dashed,blue,rounded corners=20pt] (-3.8,-2.45) -- (0.85,-0.875) -- (-3.8,0.7) -- cycle;\n\\draw[thick,dashed,red,rounded corners=20pt] (3.8,2.45) -- (-0.85,0.875) -- (3.8,-0.7) -- cycle;\n\\draw[thick,dashed,red,rounded corners=20pt] (3.8,-2.45) -- (-0.85,-0.875) -- (3.8,0.7) -- cycle;\n\\draw[thick,dashed,red,rounded corners=5pt] (-3.5,2) -- (-3,2) --(-3,-0.25) -- (-3.5,-0.25) -- cycle;\n\\draw[thick,dashed,red,rounded corners=5pt] (-3.5,-2) -- (-3,-2) --(-3,0.25) -- (-3.5,0.25) -- cycle;\n\\draw[thick,dashed,blue,rounded corners=5pt] (3.5,2) -- (3,2) --(3,-0.25) -- (3.5,-0.25) -- cycle;\n\\draw[thick,dashed,blue,rounded corners=5pt] (3.5,-2) -- (3,-2) --(3,0.25) -- (3.5,0.25) -- cycle;\n\\node at (-4.35,0.875) {$\\Delta_{+}$};\n\\node at (-4.35,-0.875) {$\\psi\\Delta_{+}$};\n\\node at (4.35,0.875) {$\\Delta_{-}$};\n\\node at (4.35,-0.875) {$\\psi\\Delta_{-}$};\n\\node at (-2,2.5) {${\\color{red}U} \\subset {\\color{blue}\\widehat U}$};\n\\node at (-2,-2.5) 
{$\\psi{\\color{red}U} \\subset \\psi{\\color{blue}\\widehat U}$};\n\\node at (2,2.5) {${\\color{blue}V} \\subset {\\color{red}\\widehat V}$};\n\\node at (2,-2.5) {$\\psi{\\color{blue}V} \\subset \\psi{\\color{red}\\widehat V}$};\n\\node at (0,1.8) {$[\\eta_{g}]$};\n\\node at (0,-1.8) {$\\psi[\\eta_{g}]$};\n\\foreach \\a in {a1,a2,b1,b2,c1,c2,d1,d2}\n\t\\filldraw (\\a) circle [radius=0.075cm];\n\\end{tikzpicture}\n\\caption{The set-up of neighborhoods in $\\PP {\\rm Curr}(F)$ for Proposition~\\ref{prop:atoroidal}.}\\label{fig:nbhd set-up}\n\\end{figure}\n\n\nLet $M_{0}$ be the constant from Theorem~\\ref{th:gns} applied to $\\varphi$ with $U$ and $\\widehat V$. Let $M_{1}(\\varphi)$, $M_{1}(\\theta)$ respectively, be the constants from Lemma~\\ref{lem:growth outside of nbhd} applied to $\\varphi$ with $\\widehat V$, $\\theta$ with $\\psi\\widehat V$ respectively with $C = 2$. Likewise, let $M_{1}(\\varphi^{-1})$, $M_{1}(\\theta^{-1})$ respectively, be the constants from Lemma~\\ref{lem:growth outside of nbhd} applied to $\\varphi^{-1}$ and $\\widehat U$, $\\theta^{-1}$ and $\\psi\\widehat U$ respectively with $C = 2$.\n\nSet $M = \\max\\{M_{0},M_{1}(\\varphi),M_{1}(\\theta),M_{1}(\\varphi^{-1}),M_{1}(\\theta^{-1})\\}$ and suppose $m,n \\geq M$. Let $\\mu \\in \\Curr(F)$ be non-zero.\n\nIf $[\\mu] \\notin \\widehat V$, then $\\varphi^{n}[\\mu] \\in U$ (Theorem~\\ref{th:gns}) and $\\wght{\\varphi^{n}\\mu} \\geq 2\\wght{\\mu}$ (Lemma~\\ref{lem:growth outside of nbhd}). Further $\\varphi^{n}[\\mu] \\notin \\psi\\widehat V$ and so $\\wght{\\theta^{m}\\varphi^{n}\\mu} \\geq 2\\wght{\\varphi^{n}\\mu} \\geq 4\\wght{\\mu}$ (Lemma~\\ref{lem:growth outside of nbhd} again). Hence $\\theta^{m}\\varphi^{n}\\mu \\neq \\mu$.\n\nElse $[\\mu] \\in \\widehat V$ and so $[\\mu] \\notin \\psi\\widehat U$. Hence $\\theta^{-m}[\\mu] \\in \\psi V$ (Theorem~\\ref{th:gns}) and $\\wght{\\theta^{-m}\\mu} \\geq 2\\wght{\\mu}$ (Lemma~\\ref{lem:growth outside of nbhd}). 
Further $\\theta^{-m}[\\mu] \\notin \\widehat U$ and so $\\wght{\\varphi^{-n}\\theta^{-m}\\mu} \\geq 2\\wght{\\theta^{-m}\\mu} \\geq 4\\wght{\\mu}$ (Lemma~\\ref{lem:growth outside of nbhd} again). Hence $\\theta^{m}\\varphi^{n}\\mu \\neq \\mu$. \n\nTherefore $(\\theta^{m}\\varphi^{n})\\big|_{F}$ is atoroidal.\n\nThe general case is a straightforward modification, additionally playing ping-pong simultaneously in each $\\Curr(B)$ for $[B] \\in \\mathcal{F}_{0} - \\{[A]\\}$ using Theorem~\\ref{dynamicsofhyp} in place of Theorem~\\ref{th:gns} and Lemma~\\ref{lem:dynamics in simplex} in place of Lemma~\\ref{lem:growth outside of nbhd}. \n\\end{proof}\n\nPutting together the previous results, we get the following proposition, which allows us to push past single-edge extensions. Care needs to be taken to avoid disturbing the action on the other extensions, which adds a layer of technicality.\n\n\\begin{proposition}\\label{prop:inductive step}\nSuppose $\\mathcal{H} < \\IA_{N}(\\ZZ\/3)$. Let \\[\\emptyset = \\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1} \\sqsubset \\cdots \\sqsubset \\mathcal{F}_{k} = \\{[ F_{N}] \\}\\] be an $\\mathcal{H}$--invariant filtration by free factor systems and suppose $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$ is a single-edge extension for some $2\\le i\\le k$.
Suppose there exists some $\\varphi \\in \\mathcal{H}$ such that:\n\\begin{enumerate}\n\\item[(a)] the restriction of $\\varphi$ to $\\mathcal{F}_{i-1}$ is atoroidal; and\n\\item[(b)] $\\varphi$ is irreducible and non-geometric with respect to each multi-edge extension $\\mathcal{F}_{j-1} \\sqsubset \\mathcal{F}_{j}$, $j = 1,\\ldots,k$.\n\\end{enumerate}\nThen either:\n\\begin{enumerate}\n\\item\\label{first alternative} there is a finite index subgroup $\\mathcal{H}'$ of $\\mathcal{H}$ and a nontrivial element $g \\in F_{N}$ such that $\\mathcal{H}'[g] = [g]$; or\n\\item\\label{second alternative} there exists an element $\\hat\\varphi \\in \\mathcal{H}$ such that:\n\\begin{enumerate}\n\\item[i.] the restriction of $\\hat\\varphi$ to $\\mathcal{F}_{i}$ is atoroidal; and\n\\item[ii.] $\\hat\\varphi$ is irreducible and non-geometric with respect to each multi-edge extension $\\mathcal{F}_{j-1} \\sqsubset \\mathcal{F}_{j}$, $j = 1,\\ldots,k$.\n\\end{enumerate}\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nAs mentioned in Section~\\ref{subsec:free factor}, there are three types of single-edge extensions. We deal with these separately.\n\nIf $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$ is a circle extension, then $\\mathcal{F}_{i} = \\mathcal{F}_{i-1} \\cup \\{[\\I{g}]\\}$ for some nontrivial element $g \\in F_{N}$. As both $\\mathcal{F}_{i-1}$ and $\\mathcal{F}_{i}$ are $\\mathcal{H}$--invariant, we have $\\mathcal{H}[g] = [g]$ and so~\\eqref{first alternative} holds.\n\nIf $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$ is a barbell extension then by Proposition~\\ref{prop:co rank one atoroidal}, $\\varphi\\big|_{\\mathcal{F}_{i}}$ is atoroidal. Hence we may take $\\hat\\varphi = \\varphi$ to satisfy~\\eqref{second alternative}. \n\n\n\nLastly, we assume that $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$ is a handle extension. If $\\varphi\\big|_{\\mathcal{F}_{i}}$ is atoroidal, then $\\hat\\varphi = \\varphi$ satisfies~\\eqref{second alternative}. 
Else, by Proposition~\\ref{prop:atoroidal}, either there is a finite index subgroup $\\mathcal{H}'$ of $\\mathcal{H}$ such that $\\mathcal{H}'[g] = [g]$ or there is an element $\\psi \\in \\mathcal{H}$ and constant $M$ such that $(\\theta^{m}\\varphi^{n})\\big|_{\\mathcal{F}_{1}}$ is atoroidal for $m,n \\geq M$ where $\\theta = \\psi \\varphi \\psi^{-1}$. \n\nIf the finite index subgroup $\\mathcal{H}'$ exists, then clearly~\\eqref{first alternative} holds and hence, we assume the existence of the element $\\psi \\in \\mathcal{H}$ and constant $M$ with the properties above. Let $S = \\{ j \\mid \\mathcal{F}_{j-1} \\sqsubset \\mathcal{F}_{j} \\text{ is multi-edge} \\}$. What remains to show is that for some $m,n \\geq M$ the element $\\theta^{m}\\varphi^{n}$ is irreducible and non-geometric with respect to $\\mathcal{F}_{j-1} \\sqsubset \\mathcal{F}_{j}$ for all $j \\in S$. \n\nSuppose $j \\in S$. As in~\\cite[Theorem~6.6]{CU}, there is a single component $[B_{j}] \\in \\mathcal{F}_{j}$ that is not a component of $\\mathcal{F}_{j-1}$ and subgroups $A_{j,1},\\ldots,A_{j,k} < B_{j}$ where $\\{[A_{j,1}] ,\\ldots, [A_{j,k}]\\} \\subseteq \\mathcal{F}_{j-1}$ such that for $\\mathcal{A}_{j}$, the free factor system in $B_{j}$ determined by $A_{j,1},\\ldots, A_{j,k}$, the restriction $\\varphi\\big|_{B_{j}} \\in \\Out(B_{j};\\mathcal{A}_{j})$ is irreducible and non-geometric. Let $X_{j} = \\mathcal{ZF}(B_{j};\\mathcal{A}_{j})$ be the $\\delta$--hyperbolic graph given by Theorem~\\ref{th:relativecosurface}. Notice that by (b), the element $\\varphi$ and its conjugate $\\theta$ act as hyperbolic isometries on $X_{j}$. The remainder of the argument is an easy exercise using $\\delta$--hyperbolic geometry, we sketch the details. \n\nRecall that two hyperbolic isometries of a $\\delta$--hyperbolic space $X$ are said to be \\emph{independent} if their fixed point sets in $\\partial X$ are disjoint and \\emph{dependent} otherwise. 
Let $I \\subseteq S$ be the subset of indices where $\\varphi$ and $\\theta$ are independent and $D = S - I$. By \\cite[Proposition~4.2]{CU} and \\cite[Theorem~3.1]{CU}, there are constants $m,n_{0} \\geq M$ such that $\\theta^{m}\\varphi^{n}$ acts hyperbolically on $X_{j}$ if $j \\in I$ and $n \\geq n_{0}$. Then, by \\cite[Proposition~3.4]{CU}, there is an $n \\geq n_{0}$ such that $\\theta^{m}\\varphi^{n}$ acts hyperbolically on $X_{j}$ if $j \\in D$. By Theorem~\\ref{th:relativecosurface}, the element $\\theta^{m}\\varphi^{n}$ is irreducible and non-geometric with respect to each $\\mathcal{F}_{j-1} \\sqsubset \\mathcal{F}_{j}$ when $j \\in S$. This shows that~\\eqref{second alternative} holds. \n\\end{proof}\n\n\n\\section{Proof of the subgroup alternative}\\label{sec:proof}\n\nIn this section, we complete the proof of the main result of this article.\n\n\\begin{restate}{Theorem}{th:alternative} \nLet $\\mathcal{H}$ be a subgroup of $\\Out(F_{N})$ where $N \\geq 3$. Either $\\mathcal{H}$ contains an atoroidal element or there exists a finite index subgroup $\\mathcal{H}'$ of $\\mathcal{H}$ and a nontrivial element $g \\in F_{N}$ such that $\\mathcal{H}'[g] = [g]$.\n\\end{restate}\n\n\\begin{proof}\nWithout loss of generality, we may assume that $\\mathcal{H} < \\IA_{N}(\\ZZ\/3)$. Let $\\emptyset = \\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1} \\sqsubset \\cdots \\sqsubset \\mathcal{F}_{m} = \\{[ F_{N}] \\}$ be a maximal $\\mathcal{H}$--invariant filtration by free factor systems. By the Handel--Mosher Subgroup Decomposition, for each $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$ which is a multi-edge extension, $\\mathcal{H}$ contains an element which is irreducible with respect to this extension~\\cite[Theorem~D]{HMIntro}. \n\nSuppose that there is no finite index subgroup $\\mathcal{H}'$ of $\\mathcal{H}$ and nontrivial $g \\in F_{N}$ such that $\\mathcal{H}'[g] = [g]$. 
In particular, every multi-edge extension $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$ is non-geometric by Theorem~\\ref{th:multi edge dichotomy}. Therefore, by Corollary~\\ref{co:HM-simultaneous} there is a $\\varphi \\in H$ that is irreducible and non-geometric with respect to each multi-edge extension $\\mathcal{F}_{j-1} \\sqsubset \\mathcal{F}_{j}$ for $j = 1,\\ldots,m$.\n\nWe claim that for each $i = 1,\\ldots, m$ there is an $\\varphi_{i} \\in \\mathcal{H}$ whose restriction to $\\mathcal{F}_{i}$ is atoroidal and is irreducible and non-geometric with respect to each multi-edge extension $\\mathcal{F}_{j-1} \\sqsubset \\mathcal{F}_{j}$ for $j = 1,\\ldots,m$.\n\nIndeed, by our assumptions, $\\emptyset = \\mathcal{F}_{0} \\sqsubset \\mathcal{F}_{1}$ must be a multi-edge extension and so we can take $\\varphi_{1} = \\varphi$.\n\nNow assume that $\\varphi_{i-1}$ exists. If $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$ is a single-edge extension, we apply Proposition~\\ref{prop:inductive step} to $\\varphi = \\varphi_{i-1}$ and set $\\varphi_{i} = \\hat\\varphi$. Else, $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$ is a multi-edge extension and we apply Lemma~\\ref{co:non-geometric atoroidal} to $\\varphi_{i-1}$ and the extension $\\mathcal{F}_{i-1} \\sqsubset \\mathcal{F}_{i}$ to conclude that we may set $\\varphi_{i} = \\varphi_{i-1}$ in this case. \n\nThus the elements $\\varphi_{i}$ as claimed exist. By construction, the element $\\varphi_{m} \\in \\mathcal{H}$ is atoroidal. \n\\end{proof}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection{Confidence Bound Policy Evaluation (CB-PE)}\n\nLet learner the execute $\\pi^{k}_b$ in the underlying MDP to generate a single trajectory $\\tau^{k}=\\left\\{s_{t'}^{k}, a_{t'}^{k}\\right\\}_{t'=0}^{H-1}$ with $a_{t'}=$ $\\pi_{t'}^{k}\\left(s_{t'}^{k}\\right)$ and $s_{t'+1}^{k} \\sim P_{t'}\\left(\\cdot \\mid s_{t'}^{k}, a_{t'}^{k}\\right) .$ \n\n\nNow we define a few notations. 
The beginning of an episode is denoted as $k$. We use the history information up to the end of episode $k-1$. To minimize the variance of the estimate, the sampling proportion $\\bb = (b(1),\\ldots,b(A))$ is chosen to solve\n\\begin{align}\n \\min_{\\bb} \\sum_{a=1}^A \\dfrac{\\pi^2(a)\\sigma^2(a)}{b(a)} \\quad \\text{subject to} \\quad \\sum_{a=1}^A b(a) = 1, \\quad b(a) > 0. \\label{eq:bandit-b}\n\\end{align}\nNote that we use $b(a)$ to denote the optimization variable and $b^*(a)$ to denote the optimal sampling proportion. Given this optimization in \\eqref{eq:bandit-b} we can get a closed-form solution by introducing a Lagrange multiplier as follows:\n\\begin{align}\n L(\\bb,\\lambda) = \\sum_{a=1}^A \\dfrac{\\pi^2(a)\\sigma^2(a)}{b(a)} + \\lambda\\left(\\sum_{a=1}^A b(a) - 1\\right). \\label{eq:bandit-b1}\n\\end{align}\nNow to get the Karush-Kuhn-Tucker (KKT) conditions we differentiate \\eqref{eq:bandit-b1} with respect to $b(a)$ and $\\lambda$ as follows:\n\\begin{align}\n \\nabla_{b(a)} L(\\bb,\\lambda) &= -\\dfrac{\\pi^2(a)\\sigma^2(a)}{b^2(a)} + \\lambda \\label{eq:L-pi}\\\\\n \n \\nabla_{\\lambda} L(\\bb,\\lambda) &= \\sum_a b(a) - 1 .\\label{eq:L-lambda}\n\\end{align}\nNow equating \\eqref{eq:L-pi} and \\eqref{eq:L-lambda} to zero and solving, we obtain:\n\\begin{align*}\n \\lambda &= \\dfrac{\\pi^2(a)\\sigma^2(a)}{b^2(a)} \\implies b(a) = \\sqrt{\\dfrac{\\pi^2(a) \\sigma^2(a)}{\\lambda}}\\\\\n \n \\sum_a b(a) &= 1 \\implies \\sum_{a=1}^A \\sqrt{\\dfrac{\\pi^2(a)\\sigma^2(a)}{\\lambda}} = 1 \\implies \\sqrt{\\lambda} = \\sum_{a=1}^A \\sqrt{\\pi^2(a)\\sigma^2(a)}.\n\\end{align*}\nThis gives us the optimal sampling proportion:\n\\begin{align*}\n b^*(a) = \\dfrac{\\pi(a)\\sigma(a)}{\\sum_{a'=1}^A \\sqrt{\\pi^2(a')\\sigma^2(a')}} \\implies b^*(a) = \\dfrac{\\pi(a)\\sigma(a)}{\\sum_{a'=1}^A \\pi(a')\\sigma(a')}.\n\\end{align*}\nFinally, observe that in the bandit setting the optimal sampling proportion for an action $a$ depends only on $\\pi(a)$ and the standard deviation $\\sigma(a)$ of the action. For instance, with $A = 2$, $\\pi = (1\/2, 1\/2)$ and $(\\sigma(1),\\sigma(2)) = (20, 2)$ we get $b^*(1) = 10\/11$ and $b^*(2) = 1\/11$.\n\\end{proof}\n\n\n\\section{Optimal Sampling in Three State Stochastic Tree MDP}\\label{app:thm:3state-sample}\n\n\\input{thm1_new}\n\n\n\\section{Three State Deterministic Tree
Sampling}\n\\label{app:three-state-det}\n\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{tabular}{cc}\n\\label{fig:det-3-state}\\hspace*{-1.2em}\\includegraphics[scale = 0.38]{img\/3_State_MDP.png} &\n\\label{fig:stoc-3-mdp}\\hspace*{-1.2em}\\includegraphics[scale = 0.31]{img\/New_tree_stoc_varying_p.png} \n\\end{tabular}\n\\caption{(Left) Deterministic $2$-depth Tree. (Right) Stochastic $2$-Depth Tree with varying model.}\n\\label{app:fig}\n\\vspace{-1.0em}\n\\end{figure}\n\n\nConsider the $2$-depth, $2$-action deterministic tree MDP $\\T$ in \\Cref{app:fig} (left) where we have equal target probabilities $\\pi(1|s^1_1) = \\pi(2|s^1_1) = \\pi(1|s^2_1) = \\pi(2|s^2_1) = \\pi(1|s^2_2) = \\pi(2|s^2_2) = \\frac{1}{2}$. The variance is given by $\\sigma^2(s^1_1, 1) = 400$, $\\sigma^2(s^1_1, 2) = 600$, $\\sigma^2(s^2_1, 1) = 400$, $\\sigma^2(s^2_1, 2) = 400$, $\\sigma^2(s^2_2, 1) = 4$, $\\sigma^2(s^2_2, 2) = 4$. So the sub-tree rooted at $s^2_1$ has higher variance than the sub-tree rooted at $s^2_2$. Let the discount factor be $\\gamma=1$.
Then we get the optimal sampling behavior policy as follows:\n\\begin{align*}\n b^*(1|s^{2}_{1}) &\\propto \\pi(1|s^2_1)\\sigma(1|s^2_1) = \\frac{1}{2}\\cdot 20 = 10, \\qquad\n \n \n b^*(2|s^{2}_{1}) \\propto \\pi(2|s^2_1)\\sigma(2|s^2_1) = \\frac{1}{2}\\cdot 20 = 10\\\\\n \n \n b^*(1|s^{2}_{2}) &\\propto \\pi(1|s^2_2)\\sigma(1|s^2_2) = \\frac{1}{2}\\cdot 2 = 1,\\qquad\n \n \n b^*(2|s^{2}_{2}) \\propto \\pi(2|s^2_2)\\sigma(2|s^2_2) = \\frac{1}{2}\\cdot 2 = 1,\\\\\n \n \n B(s^2_1) &=\\pi(1|s^2_1)\\sigma(1|s^2_1) + \\pi(2|s^2_1)\\sigma(2|s^2_1) = 20, \\qquad \n \n \n B(s^2_2) = \\pi(1|s^2_2)\\sigma(1|s^2_2) + \\pi(2|s^2_2)\\sigma(2|s^2_2) = 2\\\\\n \n \n b^*(1|s^{1}_{1}) &\\propto \\sqrt{\\pi^2(1|s^{1}_{1})\\bigg[\\sigma^2(s^{1}_{1}, 1) + \\gamma^2\\sum_{s^2_j} P(s^{2}_j|s^{1}_{1}, 1) B^2(s^{2}_j)\\bigg]}\\\\\n \n &= \\sqrt{\\pi^2(1|s^{1}_{1})\\sigma^2(s^{1}_{1}, 1) + \\gamma^2\\pi^2(1|s^{1}_{1}) P(s^{2}_1|s^{1}_{1}, 1) B^2(s^{2}_{1}) + \\gamma^2\\pi^2(1|s^{1}_{1}) P(s^{2}_2|s^{1}_{1}, 1) B^2(s^{2}_{2}) }\\\\\n \n &\\overset{(a)}{=}\\sqrt{400\\cdot\\frac{1}{4} + \\frac{1}{4}\\cdot 1\\cdot 400 + \\frac{1}{4}\\cdot0\\cdot 4 } = \\sqrt{200} \\approx 14\n \n \n\\end{align*}\n\\begin{align*}\n b^*(2|s^{1}_{1}) &\\propto \\sqrt{\\pi^2(2|s^{1}_{1})\\bigg[\\sigma^2(s^{1}_{1}, 2) + \\gamma^2 \\sum_{s^2_j}P(s^{2}_j|s^{1}_{1}, 2) B^2(s^{2}_j)\\bigg]}\\\\\n \n &= \\sqrt{\\pi^2(2|s^{1}_{1})\\sigma^2(s^{1}_{1}, 2) + \\gamma^2\\pi^2(2|s^{1}_{1}) P(s^{2}_1|s^{1}_{1}, 2) B^2(s^{2}_{1}) + \\gamma^2\\pi^2(2|s^{1}_{1}) P(s^{2}_2|s^{1}_{1}, 2) B^2(s^{2}_{2}) }\\\\\n \n &\\overset{(b)}{=}\\sqrt{600\\cdot\\frac{1}{4} + \\frac{1}{4}\\cdot0\\cdot 400 + \\frac{1}{4}\\cdot 1 \\cdot 4 } = \\sqrt{151} \\approx 12\n\\end{align*}\nwhere $(a)$ follows because $P(s^{2}_2|s^{1}_{1}, 1) = 0$ and $(b)$ follows because $P(s^{2}_1|s^{1}_{1}, 2) = 0$. Note that $b(1|s^{1}_{1})$ and $b(2|s^{1}_{1})$ are un-normalized values. After normalization we still have $b(1|s^{1}_{1}) > b(2|s^{1}_{1})$.
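The arithmetic above can be checked numerically. The following is a minimal sketch (not part of the paper's LaTeX source); the names `B21`, `B22`, `b1`, `b2` are ad-hoc stand-ins for $B(s^2_1)$, $B(s^2_2)$ and the un-normalized $b^*(1|s^1_1)$, $b^*(2|s^1_1)$, using the displayed formulas with $\gamma = 1$.

```python
import math

# Variances from the deterministic 2-depth tree example.
var_root = {1: 400.0, 2: 600.0}   # sigma^2(s^1_1, a)
var_s21 = {1: 400.0, 2: 400.0}    # sigma^2(s^2_1, a)
var_s22 = {1: 4.0, 2: 4.0}        # sigma^2(s^2_2, a)
pi = 0.5                          # pi(a|s) = 1/2 everywhere

# B(s) = sum_a pi(a|s) * sigma(s, a) at the leaf states.
B21 = sum(pi * math.sqrt(v) for v in var_s21.values())   # 20.0
B22 = sum(pi * math.sqrt(v) for v in var_s22.values())   # 2.0

# Un-normalized b*(a|s^1_1); action 1 reaches s^2_1, action 2 reaches s^2_2.
b1 = math.sqrt(pi**2 * (var_root[1] + B21**2))   # sqrt(200) ~ 14.14
b2 = math.sqrt(pi**2 * (var_root[2] + B22**2))   # sqrt(151) ~ 12.29

print(round(b1 / (b1 + b2), 3))   # normalized proportion for action 1 -> 0.535
```

Under these numbers the higher-variance sub-tree (rooted at $s^2_1$) indeed receives the larger share of samples.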
Hence the sub-tree with higher variance will have the higher proportion of pulls.\n\n\n\n\\section{Three State Stochastic Tree Sampling with Varying Model}\n\\label{app:sampling-with-model}\n\n\n\nIn this tree MDP $\\T$ in \\Cref{app:fig} (right) we have $P(s^2_1|s^1_1,1) = p$, $P(s^2_2|s^1_1,1) = 1-p$ and $P(s^2_3|s^1_1,2) = p$, $P(s^2_4|s^1_1,2) = 1-p$. Plugging these transition probabilities into the result of \\Cref{lemma:main-tree} we get\n\\begin{align*}\n b^*(a|s^2_j) &\\propto \\pi(a|s^2_j)\\sigma(s^2_j,a), \\quad \\text{for $j\\in\\{1,2,3,4\\}$}\\\\\n \n \n \n \n b^*(1|s^1_1) &\\propto \\sqrt{\\pi^2(1|s^{1}_{1})\\bigg[\\sigma^2(s^1_1,1) +\\gamma^2 p B^2(s^2_1) +\\gamma^2 (1-p) B^2(s^2_2)\\bigg]},\\\\\n \n b^*(2|s^1_1) &\\propto \\sqrt{\\pi^2(2|s^{1}_{1})\\bigg[\\sigma^2(s^1_1,2) + \\gamma^2 p B^2(s^2_3) + \\gamma^2 (1-p) B^2(s^2_4)\\bigg]}\n\\end{align*}\nwhere $B(s^{2}_j) = \\sum_a \\pi(a|s^2_j)\\sigma(s^2_j,a)$. Now if $p\\gg 1-p$, then only the variances of the states $s^2_1$ and $s^2_3$ matter when estimating the sampling proportions at $s^1_1$, as \n\\begin{align*}\n b^*(1|s^1_1) &\\propto \\sqrt{\\pi^2(1|s^{1}_{1})\\bigg[\\sigma^2(s^1_1,1) +\\gamma^2 p B^2(s^2_1)\\bigg]},\\qquad\n \n b^*(2|s^1_1) \\propto \\sqrt{\\pi^2(2|s^{1}_{1})\\bigg[\\sigma^2(s^1_1,2) + \\gamma^2 p B^2(s^2_3)\\bigg]}.\n\\end{align*}\n\n\\begin{remark}\\textbf{(Transition Model Matters)}\nObserve that the main goal of the optimal sampling proportion in \\Cref{lemma:main-tree} is to reduce the variance of the estimate of the return. However, the sampling proportion is not geared toward estimating the model $\\wP$ well.
An interesting extension would be to combine the optimization problem in \\Cref{lemma:main-tree} with a model estimation procedure as in \\citet{zanette2019almost, agarwal2019reinforcement, wagenmaker2021beyond} to derive the optimal sampling proportion.\n\\end{remark}\n\n\n\n\n\\section{Multi-level Stochastic Tree MDP Formulation}\n\\label{app:tree-mdp-behavior}\n\n\n\n\n\n\n\\begin{customtheorem}{1}\\textbf{(Restatement)}\nLet $\\T$ be an $L$-depth stochastic tree MDP as defined in \\Cref{def:tree-mdp}. Let the estimated return of the starting state $s^1_1$ after $n$ state-action-reward samples be defined as $Y_{n}(s^1_1)$. \nNote that $v^{\\pi}(s^1_1)$ is the expectation of $Y_n(s^1_1)$ under \\Cref{assm:unbiased}. \nLet $\\D$ be the observed data over $n$ state-action-reward samples. To minimise the MSE $\\E_{\\D}[(Y_n(s^1_1) - \\mu(Y_{n}(s^1_1)))^2]$ the optimal sampling proportions for an arbitrary state are given by:\n\\begin{align*}\n b^*(a|s^{\\ell}_i) \\propto \\sqrt{\\pi^2(a|s_i^{\\ell}) \\bigg[\\sigma^2(s^{\\ell}_i, a) + \\gamma^2 \\sum\\limits_{s^{\\ell+1}_j} P(s^{\\ell + 1}_j|s_i^{\\ell}, a) B^2(s^{\\ell+1}_j) \\bigg]},\n\\end{align*}\nwhere $B(s^{\\ell+1}_j)$ is the normalization factor defined recursively by:\n\\begin{align*}\n B(s^{\\ell}_{i}) = \\sum\\limits_a \\sqrt{\\pi^2(a|s^{\\ell}_{i})\\left(\\sigma^2(s^{\\ell}_{i}, a) + \\gamma^2 \\sum\\limits_{s^{\\ell+1}_j} P(s^{\\ell+1}_j|s^{\\ell}_i, a) B^2(s^{\\ell+1}_j)\\right)}.\n\\end{align*}\n\\end{customtheorem}\n\n\n\\begin{proof}\n\\textbf{Step 1 (Base case for Level $L$ and $L-1$):} The proof of this theorem follows by induction. First consider the last level $L$ containing the leaf states. An arbitrary state in the last level is denoted by $s^{L}_{i}$.
Recall the recursive form of the return estimate; for the starting state $s^1_1$ it reads\n\begin{align*}\n    \n    Y_n(s^1_1) &= \sum_{a=1}^A\pi(a|s^1_1)\bigg(\dfrac{1}{T_n(s^1_1,a)}\sum_{h=1}^{T_n(s^1_1,a)}R_{h}(s^1_1, a)\n    \n    + \gamma\sum_{s^{2}_j} P(s^{2}_j|s^1_1, a)Y_{n}(s^2_j)\bigg) \nonumber \\\n    &= \!\sum_{a=1}^A\pi(a|s^1_1)\!\bigg(\!\wmu(s^1_1,a)\n    \n    \n    \!+\! \gamma\!\!\sum_{s^{2}_j}\!\! P(s^{2}_j|s^1_1, a)Y_{n}(s^2_j)\!\bigg)\n\end{align*}\nObserve that for a leaf state $s^L_i$ the transition probability to next states satisfies $P(s^{L+1}_j|s^L_i,a) = 0$ for any action $a$. So $Y_n(s^L_i) = \sum_{a=1}^A\left(\pi(a|s^L_i)\wmu(s^L_i,a)\right)$ which matches the bandit setting.\nWe define the estimator $Y_{n}(s^{L}_{i})$ as in \eqref{eq:tree-Yestimate}. Following the previous derivation in \Cref{lemma:main-tree} we can show that its expectation and variance are given as:\n\begin{align*}\n    \E[Y_{n}(s^{L}_{i})] &= \sum_{a}\dfrac{\pi(a|s^{L}_{i})}{T_n(s^{L}_i, a)} \sum_{h=1}^{T_n(s^{L}_i, a)} \E[R_h(s^{L}_{i}, a)] = \sum_{a}\pi(a|s^{L}_{i})\mu(s^{L}_{i}, a) = v^{\pi}(s^{L}_{i}).\\\n    \n    \Var[Y_{n}(s^{L}_{i})] &= \sum_{a}\dfrac{\pi^2(a|s^{L}_{i})}{T_n^2(s^{L}_i, a)} \sum_{h=1}^{T_n(s^{L}_i, a)} \Var[R_h(s^{L}_{i}, a)] = \sum_{a}\dfrac{\pi^2(a|s^{L}_{i}) \sigma^2(s^{L}_i, a)}{T_n(s^{L}_i, a)}\n\end{align*}\n\nNow consider the second-to-last level $L-1$, whose states transition only to the leaves. An arbitrary state in this level is denoted by $s^{L-1}_{i}$. 
Then we have the estimate of the return from the state $s^{L-1}_{i}$ as follows:\n\begin{align*}\n    Y_n(s^{L-1}_{i}) &= \sum_a \pi(a|s^{L-1}_{i})\left( \dfrac{1}{T_n(s^{L-1}_i, a)}\sum_{h=1}^{T_n(s^{L-1}_i,a)}R_h(s^{L-1}_{i}, a) + \gamma\sum_{s^L_j} P(s^L_j|s^{L-1}_{i},a)Y_n(s^L_j) \right)\\\n    \n    &= \sum_a \pi(a|s^{L-1}_{i})\left( \wmu(s^{L-1}_{i}, a) + \gamma\sum_{s^L_j} P(s^L_j|s^{L-1}_{i},a) Y_n(s^L_j) \right).\n\end{align*}\nThen for the estimator $Y_{n}(s^{L-1}_{i})$ we can show that its expectation is given as follows:\n\begin{align*}\n    \E[Y_{n}(s^{L-1}_{i})] &= \sum_a\pi(a|s^{L-1}_{i})\bigg[\dfrac{1}{T_n(s^{L-1}_i, a)}\sum_{h=1}^{T_n(s^{L-1}_i, a)} \E[R_h(s^{L-1}_{i}, a)] + \gamma\sum_{s^L_j} P(s^L_j|s^{L-1}_{i},a) \E[Y_{n}(s^L_j)]\bigg] \\\n    \n    &= \sum_a\pi(a|s^{L-1}_{i})\bigg[ \mu(s^{L-1}_{i}, a) + \gamma\sum_{s^L_j} P(s^L_j|s^{L-1}_{i},a)v^{\pi}(s^L_j)\bigg] = v^{\pi}(s^{L-1}_{i}).\\\n    \n    \n    \Var[Y_{n}(s^{L-1}_{i})] &= \sum_a\pi^2(a|s^{L-1}_{i})\bigg[\dfrac{1}{T_n^2(s^{L-1}_i, a)}\sum_{h=1}^{T_n(s^{L-1}_i, a)} \Var[R_h(s^{L-1}_{i}, a)]\n    \n    + \gamma^2\sum_{s^L_j} P^2(s^L_j|s^{L-1}_i,a)\Var[Y_{n}(s^L_j)]\bigg] \\\n    \n    \n    &\overset{(a)}{=} \sum_a\pi^2(a|s^{L-1}_{i}) \bigg[\dfrac{ \sigma^2(s^{L-1}_i, a)}{T_n(s^{L-1}_i, a)} + \gamma^2\sum_{s^L_j} P^2(s^L_j|s^{L-1}_i,a)\Var[Y_{n}(s^L_j)]\bigg]\n\end{align*}\nwhere, $(a)$ follows as $\sum\limits_{h=1}^{T_n(s^{L-1}_i, a)} \Var[R_h(s^{L-1}_{i}, a)] = T_n(s^{L-1}_i, a) \sigma^2(s^{L-1}_i, a)$.\nObserve that in state $s^{L-1}_i$ we want to reduce the variance $\Var[Y_{n}(s^{L-1}_{i})]$. Also, the optimal proportion $b^*(a|s^{L-1}_i)$ that reduces the variance at state $s^{L-1}_i$ cannot alter the optimal proportion $b^*(a|s^{L}_j)$ of level $L$, which minimises the variance of $Y_{n}(s^{L}_j)$. 
Hence, we can follow the same optimization as done in \Cref{lemma:main-tree} and show that the optimal sampling proportion in state $s^{L-1}_i$ is given by\n\begin{align*}\n    b^*(a|s^L_{j}) &\overset{}{\propto} \pi(a|s^L_{j})\sigma(s^L_{j}, a)\\\n    \n    b^*(a|s^{L-1}_i) &\overset{(a)}{\propto} \sqrt{\pi^2(a|s_i^{L-1})\bigg[\sigma^2(s^{L-1}_i, a) + \gamma^2\sum_{s^L_j} P(s^L_j|s_i^{L-1}, a) B^2(s^L_j)\bigg]}\n    \n\end{align*}\nwhere in $(a)$, $s^L_j$ ranges over the states that can follow after taking action $a$ at state $s^{L-1}_i$, and $B(s^L_j)$ is defined in \eqref{eq:B-def}. This concludes the base case of the induction proof. Now we will go to the induction step.\n\n\textbf{Step 2 (Induction step for Arbitrary Level $\ell$):} We assume that the claimed form of the sampling proportion holds for all levels from $L$ down to $\ell+1$, i.e.,\n\begin{align*}\n    b^*(a|s^{\ell+1}_i) &\propto \sqrt{\pi^2(a|s_i^{\ell+1})\bigg[\sigma^2(s^{\ell+1}_i, a) + \gamma^2\sum_{s^{\ell+2}_j} P(s^{\ell + 2}_j|s_i^{\ell+1}, a) B^2(s^{\ell+2}_j)\bigg]}.\n\end{align*}\nFor an arbitrary level we use dynamic programming: we build up from the leaves (states $s^L_i$) to estimate $b^*(a|s^{\ell+1}_i)$. Then we need to show that at the previous level $\ell$ we get a similar recursive sampling proportion. 
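As a quick numerical sanity check of the leaf-level proportion $b^*(a|s^L_j) \propto \pi(a|s^L_j)\sigma(s^L_j,a)$: the leaf variance $\sum_a \pi^2(a)\sigma^2(a)\/T(a)$ is minimised over allocations with $\sum_a T(a) = n$ by this proportion, attaining the value $B^2\/n$. A small Python sketch (all numbers are illustrative, not from the paper):

```python
import itertools

# Illustrative leaf state with A = 3 actions: target policy pi(a|s) and
# reward standard deviations sigma(s, a). All numbers are hypothetical.
pi    = [0.5, 0.3, 0.2]
sigma = [1.0, 2.0, 0.5]
n     = 60  # total sample budget at this state

def variance(T):
    # Variance of the weighted estimate sum_a pi(a) * (empirical mean of a),
    # i.e. sum_a pi(a)^2 sigma(s, a)^2 / T(a).
    return sum(p * p * s * s / t for p, s, t in zip(pi, sigma, T))

# Claimed optimal allocation: T(a) proportional to pi(a) * sigma(s, a).
B = sum(p * s for p, s in zip(pi, sigma))
T_opt = [n * p * s / B for p, s in zip(pi, sigma)]

# At the optimum the variance collapses to B^2 / n.
assert abs(variance(T_opt) - B * B / n) < 1e-12

# Brute force over integer allocations: nothing beats the optimum.
best = min(
    variance((t1, t2, n - t1 - t2))
    for t1, t2 in itertools.product(range(1, n - 1), repeat=2)
    if n - t1 - t2 >= 1
)
assert best >= variance(T_opt) - 1e-12
```

The brute-force search only confirms the closed-form optimum on integer allocations; the continuous optimum is a lower bound for all of them.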
We first define the estimate of the return from an arbitrary state $s^{\ell}_{i}$ in level $\ell$ after $n$ timesteps as follows:\n\begin{align*}\n    Y_{n}(s^{\ell}_i)\n    \n    &= \sum_a \pi(a|s^{\ell}_{i})\left( \dfrac{1}{T_n(s^{\ell}_{i}, a)}\sum_{h=1}^{T_n(s^{\ell}_{i}, a)}R_h(s^{\ell}_{i}, a) + \gamma \sum_{s^{\ell+1}_j} P(s^{\ell+1}_j|s^{\ell}_{i},a) Y_{n}(s^{\ell+1}_j) \right)\n\end{align*}\nThen we have the expectation of $Y_{n}(s^{\ell}_i)$ as follows:\n\begin{align*}\n    \E[Y_{n}(s^{\ell}_i)] &\overset{(a)}{=} \sum_a \pi(a|s^{\ell}_{i})\left( \mu(s^{\ell}_{i}, a) + \gamma \sum_{s^{\ell+1}_j} P(s^{\ell+1}_j|s^{\ell}_{i},a) v^{\pi}(s^{\ell+1}_j) \right)\n\end{align*}\nwhere in $(a)$ we use $v^{\pi}(s^{\ell+1}_j) = \E[Y_{n}(s^{\ell+1}_j)]$. \nThen we can also calculate the variance of $Y_{n}(s^{\ell}_i)$ as follows:\n\begin{align*}\n    \Var[Y_{n}(s^{\ell}_i)] = \sum_a\pi^2(a|s^{\ell}_{i}) \bigg[\dfrac{ \sigma^2(s^{\ell}_i, a)}{T_n(s^{\ell}_i, a)} + \gamma^2 \sum_{s^{\ell+1}_j}P(s^{\ell+1}_j|s^{\ell}_i,a)\Var(Y_{n}(s^{\ell+1}_j))\bigg].\n\end{align*}\nAgain observe that the goal is to minimize the variance $\Var[Y_{n}(s^{\ell}_i)]$. Then following the same steps in \Cref{lemma:main-tree} we can have the optimization problem to reduce the variance which results in the following optimal sampling proportion:\n\begin{align*}\n    b^*(a|s^{\ell}_i) &\propto \sqrt{\pi^2(a|s_i^{\ell})\bigg[\sigma^2(s^{\ell}_i, a) + \gamma^2\sum_{s^{\ell+1}_j} P(s^{\ell + 1}_j|s_i^{\ell}, a) B^2(s^{\ell+1}_{j})\bigg]}\n\end{align*}\nwhere in the last equation we use $B(s^{\ell+1}_j)$, which is defined in \eqref{eq:B-def}. 
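The recursive proportions can be computed bottom-up by dynamic programming. A minimal sketch on a hypothetical two-level tree (policy values, standard deviations, transition probabilities, and variable names are all illustrative assumptions, not the paper's algorithm):

```python
import math

# Hypothetical two-level tree: root s^1_1 with two actions; action a leads to
# one of two leaves with probabilities (p, 1 - p). All numbers illustrative.
gamma = 0.9
p = 0.7
pi_root    = {1: 0.6, 2: 0.4}
sigma_root = {1: 1.0, 2: 0.5}
children   = {1: [("s2_1", p), ("s2_2", 1 - p)],
              2: [("s2_3", p), ("s2_4", 1 - p)]}

leaf_pi    = {s: [0.5, 0.5] for s in ("s2_1", "s2_2", "s2_3", "s2_4")}
leaf_sigma = {"s2_1": [1.0, 2.0], "s2_2": [0.2, 0.3],
              "s2_3": [0.8, 0.8], "s2_4": [0.1, 0.4]}

def B_leaf(s):
    # Leaf normalisation factor: B(s) = sum_a pi(a|s) sigma(s, a).
    return sum(pr * sd for pr, sd in zip(leaf_pi[s], leaf_sigma[s]))

def weight_root(a):
    # Unnormalised root proportion:
    # sqrt(pi^2(a) [sigma^2(a) + gamma^2 sum_j P(j|a) B(j)^2]).
    inner = sigma_root[a] ** 2 + gamma ** 2 * sum(
        prob * B_leaf(s) ** 2 for s, prob in children[a])
    return math.sqrt(pi_root[a] ** 2 * inner)

total  = sum(weight_root(a) for a in pi_root)   # this is B(s^1_1)
b_star = {a: weight_root(a) / total for a in pi_root}

assert abs(sum(b_star.values()) - 1.0) < 1e-12
assert b_star[1] > b_star[2]  # the noisier sub-tree gets more pulls
```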
Again we can apply \Cref{lemma:main-tree} because the optimal proportion $b^*(a|s^{\ell}_i)$ that reduces the variance at state $s^{\ell}_i$ cannot alter the optimal proportions $b^*(a|s^{\ell+1}_j)$ through $b^*(a|s^{L}_m)$ of levels $\ell+1$ to $L$, which minimise the variances of $Y_{n}(s^{\ell+1}_j)$ through $Y_{n}(s^{L}_m)$.\n\n\textbf{Step 3 (Starting state $s^1_1$:)} Finally, we conclude with the starting state $s^1_1$, for which we have the estimate of the return as follows:\n\begin{align*}\n    Y_{n}(s^{1}_{1}) &= \sum_a \pi(a|s^{1}_{1})\left( \dfrac{1}{T_n(s^1_1,a)}\sum_{h=1}^{T_n(s^1_1,a)}R_h(s^{1}_{1}, a) + \gamma\sum_{s^{2}_j}P(s^{2}_j|s^{1}_{1},a) Y_{n}(s^{2}_j) \right).\n\end{align*}\nThen we have the expectation of $Y_{n}(s^{1}_{1})$ as follows:\n\begin{align*}\n    \E[Y_{n}(s^{1}_{1})] &\overset{(a)}{=} \sum_a \pi(a|s^{1}_{1})\left( \mu(s^{1}_{1}, a) + \gamma\sum_{s^{2}_j}P(s^{2}_j|s^{1}_{1},a) v^{\pi}(s^{2}_j)\right)\n\end{align*}\nwhere in $(a)$ we use $v^{\pi}(s^{2}_j) = \E[Y_{n}(s^{2}_j)]$. 
\nThen we can also calculate the variance of $Y_{n}(s^{1}_{1})$ as follows:\n\begin{align*}\n    \Var[Y_{n}(s^{1}_{1})] = \sum_a\pi^2(a|s^{1}_{1}) \bigg[\dfrac{ \sigma^2(s^{1}_{1}, a)}{T_n(s^{1}_{1}, a)} + \gamma^2\sum_{s^{2}_j}P(s^{2}_j|s^{1}_{1},a)\Var[Y_{n}(s^{2}_j)]\bigg]\n\end{align*}\nThen from the previous step $2$ we can show that to reduce the variance $\Var[Y_{n}(s^{1}_{1})]$ we should have the sampling proportion at $s^1_1$ as follows:\n\begin{align*}\n    b^*(a|s^{1}_{1}) &\propto \sqrt{\pi^2(a|s^{1}_{1})\bigg[\sigma^2(s^{1}_{1}, a) + \gamma^2\sum_{s^2_j} P(s^{2}_j|s^{1}_{1}, a) B^2(s^{2}_j)\bigg]}\n\end{align*}\nwhere $s^2_j$ ranges over the states that can follow after taking action $a$ at state $s^{1}_{1}$, and $B(s^{2}_j)$ is defined in \eqref{eq:B-def}.\n\end{proof}\n\n\section{MSE of the Oracle in Tree MDP}\n\label{app:oracle-loss}\n\n\begin{customproposition}{2}\textbf{(Restatement)}\nLet there be an oracle which knows the state-action variances and transition probabilities of the $L$-depth tree MDP $\T$. Let the oracle take actions in the proportions given by \Cref{thm:L-step-tree}. Let $\D$ be the observed data over $n$ state-action-reward samples such that $n=KL$. Then the oracle suffers an MSE of\n\begin{align*}\n    &\L^*_n(b) = \sum_{\ell=1}^{L}\bigg[\dfrac{B^{2}(s^{\ell}_i)}{T_L^{*,K}(s^\ell_i)}\n    \n    + \gamma^{2\ell}\sum_{a} \pi^2(a|s^\ell_i)\sum_{s^{\ell+1}_j}P(s^{\ell+1}_j|s^{\ell}_i,a) \dfrac{B^{2}(s^{\ell+1}_j)}{T_L^{*,K}(s^{\ell+1}_j)} \bigg],\n\end{align*}\nwhere $T^{*,K}_{L}(s^\ell_i)$ denotes the optimal number of samples of state $s^\ell_i$ taken by the oracle at the end of episode $K$.\n\end{customproposition}\n\n\n\begin{proof}\n\textbf{Step 1 (Arbitrary episode $k$):} We start at an arbitrary episode $k$. For brevity we drop the index $k$ in our notation in this step. Let $n'$ be the total number of samples collected up to the $k$-th episode. 
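The proof below controls the MSE through the bias--variance decomposition $\E[(Y - \mu(Y))^2] = \Var(Y) + \bias^2(Y)$, with $\mu(Y)$ the target value. This identity can be verified exactly on a small finite distribution (a sketch; all numbers are illustrative):

```python
# Exact check of MSE = variance + bias^2 on a small finite distribution.
# The estimator Y takes value v with the probability listed below; `target`
# is the (hypothetical) quantity being estimated.
values = [0.0, 1.0, 3.0]
probs  = [0.2, 0.5, 0.3]
target = 1.5

mean = sum(v * q for v, q in zip(values, probs))
mse  = sum((v - target) ** 2 * q for v, q in zip(values, probs))
var  = sum((v - mean) ** 2 * q for v, q in zip(values, probs))
bias = mean - target

assert abs(mse - (var + bias ** 2)) < 1e-12
```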
We define the estimate of the return from the starting state after a total of $n'$ samples as\n\begin{align*}\n    Y_{n'}(s^{1}_{1}) = \sum_a \pi(a|s^{1}_{1})\left(\dfrac{1}{T_{n'}(s^1_1,a)} \sum_{h=1}^{T_{n'}(s^1_1,a)}R_h(s^{1}_{1}, a) + \gamma\sum_{s^{2}_j}P(s^2_j|s^1_1, a) Y_{n'}(s^{2}_j) \right).\n\end{align*}\nThen we define the MSE as\n\begin{align*}\n    \E_{\D}\left[\left(Y_{n'}(s^{1}_{1}) -\mu(Y_{n'}(s^{1}_{1}))\right)^{2}\right]\n    \n    \n    = \Var(Y_{n'}(s^{1}_{1})) + \bias^2(Y_{n'}(s^{1}_{1})).\n\end{align*}\nAgain it can be shown, as in \Cref{thm:L-step-tree}, that once every state-action pair has been visited at least once the bias is zero. \nSo we want to reduce the variance $\Var(Y_{n'}(s^{1}_{1}))$. Note that the variance is given by\n\begin{align}\n    \Var[Y_{n'}(s^{1}_{1})] = \sum_a\pi^2(a|s^{1}_{1}) \bigg[\underbrace{\dfrac{ \sigma^2(s^{1}_{1}, a)}{T_{n'}(s^{1}_{1}, a)}}_{\textbf{Variance of $s^1_1$}} + \gamma^2\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\underbrace{\Var[Y_{n'}(s^{2}_j)]}_{\textbf{Variance of $s^2_j$ in level $2$}}\bigg]. \label{eq:oracle-loss-var}\n\end{align}\nThen we can show from the result of \Cref{thm:L-step-tree} that to minimize the $\Var[Y_{n'}(s^{1}_{1})]$ the optimal sampling proportion for level $1$ is given by:\n\begin{align*}\n    b^*(a|s^{1}_{1}) &= \dfrac{\sqrt{\sum_{s^2_j}\pi^2(a|s^{1}_{1})\bigg[\sigma^2(s^{1}_{1}, a) + \gamma^2P(s^2_j|s^1_1, a) B^2(s^{2}_j)\bigg]}}{B(s^1_1)}\n\end{align*}\nwhere $s^2_j$ are the next states of the state $s^1_1$, and $B(s^{1}_{1})$ is defined in \eqref{eq:B-def}. Let the optimal number of samples of the state-action pair $(s^\ell_i,a)$ that the oracle takes up to the $k$-th episode be denoted by $T^{*,k}_{n'}(s^\ell_i,a)$. Also let the total number of samples taken in state $s^1_1$ be $T^{*,k}_{n'}(s^1_1)$. It then follows that $n' = \sum_{s^{\ell}_{j}\in\S} T^{*,k}_{n'}(s^{\ell}_{j})$. 
Then we have\n\begin{align*}\n    T^{*,k}_{n'}(s^{1}_{1},a) = \dfrac{\sqrt{\sum_{s^2_j}\pi^2(a|s^{1}_{1})\bigg[\sigma^2(s^{1}_{1}, a) + \gamma^2P(s^2_j|s^1_1, a) B^2(s^{2}_j)\bigg]}}{B(s^{1}_{1})} T^{k}_{n'}(s^1_1).\n\end{align*}\nwhere we define the normalization factor $B(s^{\ell}_{j})$ as in \eqref{eq:B-def} and $T^{k}_{n'}(s^1_1)$ is the actual total number of times the state $s^1_1$ is visited. \nPlugging this back in \eqref{eq:oracle-loss-var} we get that\n\n\begin{align*}\n    \Var&[Y_{n'}(s^{1}_{1})] = \sum_a\pi^2(a|s^{1}_{1}) \bigg[\dfrac{ \sigma^2(s^{1}_{1}, a)}{T^{*,k}_{n'}(s^{1}_{1}, a)} + \gamma^2\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\Var[Y_{n'}(s^{2}_j)]\bigg]\\\n    \n    &= \dfrac{B(s^1_1)}{T^{*,k}_{n'}(s^1_1)}\sum_a\dfrac{ \pi^2(a|s^{1}_{1})\sigma^2(s^{1}_{1}, a) }{\sqrt{\sum_{s^2_j}\pi^2(a|s^{1}_{1})\bigg[\sigma^2(s^{1}_{1}, a) + \gamma^2 P^2(s^2_j|s^1_1,a) B^2(s^{2}_j)\bigg]}} + \gamma^2\sum_a\pi^2(a|s^{1}_{1})\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\Var[Y_{n'}(s^{2}_j)]\\\n    \n    &\overset{(a)}{\leq} \dfrac{B(s^1_1)}{T^{*,k}_{n'}(s^1_1)}\sum_a\dfrac{ \sum_{s^2_j}\pi^2(a|s^{1}_{1})\bigg[\sigma^2(s^{1}_{1}, a) + \gamma^2P^2(s^2_j|s^1_1,a) B^2(s^{2}_j)\bigg]}{\sqrt{\sum_{s^2_j}\pi^2(a|s^{1}_{1})\bigg[\sigma^2(s^{1}_{1}, a) + \gamma^2P(s^2_j|s^1_1,a) B^2(s^{2}_j)\bigg]}} + \gamma^2\sum_a\pi^2(a|s^{1}_{1})\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\Var[Y_{n'}(s^{2}_j)]\\\n    \n    &\overset{}{=} \dfrac{ B(s^1_1)}{T^{*,k}_{n'}(s^1_1)}\sum_a\sqrt{\sum_{s^2_j}\pi^2(a|s^{1}_{1})\bigg[\sigma^2(s^{1}_{1}, a) + \gamma^2P(s^2_j|s^1_1,a) B^2(s^{2}_j)\bigg]} + \gamma^2\sum_a\pi^2(a|s^{1}_{1})\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\Var[Y_{n'}(s^{2}_j)]\\\n    \n    &\overset{(b)}{=} \dfrac{ B^{2}(s^1_1)}{T^{*,k}_{n'}(s^1_1)} + \gamma^2\sum_a\pi^2(a|s^{1}_{1})\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\underbrace{\sum_{a'}\pi^2(a'|s^{2}_{j}) \bigg[\dfrac{ \sigma^2(s^{2}_{j}, a')}{T^{k}_{n'}(s^{2}_{j}, a')} + 
\gamma^2\sum_{s^{3}_m}P(s^3_m|s^2_j,a')\Var[Y_{n'}(s^{3}_m)]\bigg]}_{\Var[Y_{n'}(s^{2}_j)]}\\\n    \n    &\overset{(c)}{\leq}\! \dfrac{B^{2}(s^1_1)}{T^{*,k}_{n'}(s^1_1)} \!+\! \gamma^2\!\!\sum_a\pi^2(a|s^{1}_{1})\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\dfrac{B^{2}(s^2_j) }{T^{*,k}_{n'}(s^2_j)} \\\n    \n    &\qquad+ \gamma^4\sum_a\pi^2(a|s^{1}_{1})\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\sum_{a'} \pi^2(a'|s^{2}_{j}) \sum_{s^{3}_m}P(s^3_m|s^2_j,a')\Var[Y_{n'}(s^{3}_m)]\\\n    \n    &\overset{(d)}{\leq}\n    \sum_{\ell=1}^{L}\left[\dfrac{B^{2}(s^{\ell}_i)}{T^{*,k}_{n'}(s^{\ell}_i)} + \gamma^{2\ell}\sum_{a} \pi^2(a|s^\ell_i)\sum_{s^{\ell+1}_j}P(s^{\ell+1}_j|s^{\ell}_i,a) \dfrac{B^{2}(s^{\ell+1}_j)}{ T^{*,k}_{n'}(s^{\ell+1}_j)} \right]\n\end{align*}\nwhere, $(a)$ follows as $\gamma^2 P^2(s^2_j|s^1_1,a) B^2(s^{2}_j) \geq 0$, $(b)$ follows by the definition of $\Var[Y_{n'}(s^{2}_j)]$ and the definition of $B(s^1_1)$, and $T^{k}_{n'}(s^2_j)$ is the actual number of samples observed for $s^2_j$, $(c)$ follows by substituting $T^{*,k}_{n'}(s^2_j,a') = b^*(a'|s^2_j)\, T^{*,k}_{n'}(s^2_j)$ with $b^*(a'|s^2_j) = \pi(a'|s^2_j)\sigma(s^2_j,a')\/B(s^{2}_j)$, and $(d)$ follows by unrolling the recursion $L$ times. \n\n\n\textbf{Step 2 (End of $K$ episodes):} Note that the above derivation holds for an arbitrary episode $k$ which consists of an $L$-step horizon from root to leaf. Hence the MSE of the oracle after $K$ episodes when running behavior policy $b$ is given as\n\begin{align*}\n    \L^*_n(b) = \sum_{\ell=1}^{L}\left[\dfrac{B^{2}(s^{\ell}_i)}{T_n^{*,K}(s^\ell_i)} + \gamma^{2\ell}\sum_{a} \pi^2(a|s^\ell_i)\sum_{s^{\ell+1}_j}P(s^{\ell+1}_j|s^{\ell}_i, a) \dfrac{B^{2}(s^{\ell+1}_j)}{T_n^{*,K}(s^{\ell+1}_j)} \right]\n\end{align*}\nNote that $n = \sum_a\sum_{s^\ell_i\in\S} T^{*,K}_n(s^{\ell}_i, a)$ is the total number of samples collected after $K$ episodes of length-$L$ trajectories. 
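The collapse used at step $(b)$ above rests on the bandit-level identity $\sum_a \pi^2(a)\sigma^2(a)\/T^*(a) = B^2\/T$ when $T^*(a) = b^*(a)\,T$ with $b^*(a) = \pi(a)\sigma(a)\/B$. A short numerical check (illustrative numbers only):

```python
# Check of the collapse at step (b): with T*(a) = b*(a) T and
# b*(a) = pi(a) sigma(a) / B, sum_a pi^2(a) sigma^2(a) / T*(a) = B^2 / T.
# All numbers are illustrative.
pi    = [0.4, 0.4, 0.2]
sigma = [0.7, 1.3, 2.1]
T     = 500.0

B      = sum(p * s for p, s in zip(pi, sigma))
T_star = [p * s / B * T for p, s in zip(pi, sigma)]

lhs = sum(p * p * s * s / t for p, s, t in zip(pi, sigma, T_star))
assert abs(lhs - B * B / T) < 1e-12
```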
\nThis gives the MSE when following the optimal proportion in \Cref{thm:L-step-tree}.\n\n\end{proof}\n\n\n\n\n\n\section{Support Lemmas}\n\n\begin{lemma}\textbf{(Wald's lemma for variance)}\n\label{prop:wald-variance}\citep{resnick2019probability}\nLet $\left\{\mathcal{F}_{t}\right\}$ be a filtration and $R_{t}$ be an $\mathcal{F}_{t}$-adapted sequence of i.i.d. random variables with variance $\sigma^{2}$. Assume that $\mathcal{F}_{t}$ and the $\sigma$-algebra generated by $\left\{R_{t'}: t' \geq t+1\right\}$ are independent and $T$ is a stopping time w.r.t. $\mathcal{F}_{t}$ with a finite expected value. If $\mathbb{E}\left[R_{1}^{2}\right]<\infty$ then\n\begin{align*}\n\mathbb{E}\left[\left(\sum_{t'=1}^{T} R_{t'}-T \mu\right)^{2}\right]=\mathbb{E}[T] \sigma^{2}.\n\end{align*}\n\end{lemma}\n\n\begin{lemma}\textbf{(Hoeffding's Lemma)}\citep{massart2007concentration}\n\label{lemma:hoeffding}\nLet $Y$ be a real-valued random variable with expected value $\mathbb{E}[Y]= \mu$, such that $a \leq Y \leq b$ with probability one. Then, for all $\lambda \in \mathbb{R}$\n$$\n\mathbb{E}\left[e^{\lambda Y}\right] \leq \exp \left(\lambda \mu +\frac{\lambda^{2}(b-a)^{2}}{8}\right).\n$$\n\end{lemma}\n\n\begin{lemma}\textbf{(Concentration lemma 1)}\n\label{lemma:conc1}\n\label{lemma:conc}\nLet $V_{t} = R_t(s, a) - \E[R_t(s, a)]$ be bounded such that $V_{t}\in[-\eta, \eta]$. Let the total number of times the state-action $(s,a)$ is sampled be $T$.\nThen we can show that for any $\epsilon > 0$\n\begin{align*}\n    \Pb\left(\left|\frac{1}{T}\sum_{t=1}^T R_t(s, a) - \E[R_t(s, a)]\right| \geq \epsilon\right) \leq 2\exp\left(-\frac{2\epsilon^2 T}{\eta^2}\right).\n\end{align*}\n\end{lemma}\n\n\begin{proof}\nLet $V_{t} = R_t(s, a) - \E[R_t(s, a)]$. Note that $\E[V_{t}] = 0$. 
Hence, for the bounded random variable $V_{t}\in[-\eta, \eta]$ (by \Cref{assm:bounded}) we can show from Hoeffding's lemma in \Cref{lemma:hoeffding} that\n\begin{align*}\n    \E[\exp\left(\lambda V_{t}\right)] \leq \exp\left(\dfrac{\lambda^2}{8}\left(\eta - (-\eta)\right)^2\right) \leq \exp\left(2\lambda^2\eta^2\right).\n\end{align*}\nLet $s_{T-1}$ denote the state of the process at the time of the $(T-1)$-th sample of the pair $(s,a)$. Observe that the rewards $R_t(s,a)$ are conditionally independent given the past. For this proof we will only use the boundedness property of $R_t(s,a)$ guaranteed by \Cref{assm:bounded}.\nNext we can bound the probability of deviation as follows:\n\begin{align} \n\Pb\left(\sum_{t=1}^T \left(R_t(s, a) - \E[R_t(s, a)]\right) \geq \epsilon\right) &=\Pb\left(\sum_{t=1}^T V_{t} \geq \epsilon\right) \nonumber\\ \n&\overset{(a)}{=}\Pb\left(e^{\lambda \sum_{t=1}^T V_{t}} \geq e^{\lambda \epsilon}\right) \nonumber\\\n&\overset{(b)}{\leq} e^{-\lambda \epsilon} \E\left[e^{\lambda \sum_{t=1}^T V_{t}}\right] \nonumber\\\n&= e^{-\lambda \epsilon} \E\left[\E\left[e^{\lambda \sum_{t=1}^T V_{t}}\big|s_{T-1}\right] \right]\nonumber\\\n&\overset{(c)}{=} e^{-\lambda \epsilon} \E\left[\E\left[e^{\lambda V_{T}}|s_{T-1}\right]\E\left[e^{\lambda \sum_{t=1}^{T-1} V_{t}} \big|s_{T-1}\right] \right]\nonumber\\\n&\leq e^{-\lambda \epsilon} \E\left[\exp\left(2\lambda^2\eta^2\right)\E\left[e^{\lambda \sum_{t=1}^{T-1} V_{t}}\big |s_{T-1}\right] \right]\nonumber\\\n& \overset{}{=} e^{-\lambda \epsilon} e^{2\lambda^{2} \eta^{2}} \mathbb{E}\left[e^{\lambda \sum_{t=1}^{T-1} V_{t}}\right] \nonumber\\ \n& \vdots \nonumber\\ \n& \overset{(d)}{\leq} e^{-\lambda \epsilon} e^{2\lambda^{2} T \eta^{2}} \nonumber\\\n& \overset{(e)}{\leq} \exp\left(-\dfrac{2\epsilon^2}{T\eta^2}\right) \label{eq:vt0}\n\end{align}\nwhere $(a)$ follows by introducing 
$\lambda>0$ and exponentiating both sides, $(b)$ follows by Markov's inequality, $(c)$ follows as $V_{t}$ is conditionally independent given $s_{T-1}$, $(d)$ follows by unpacking the term $T$ times, and $(e)$ follows by taking $\lambda= \epsilon \/ (4T\eta^2)$. Hence, it follows that\n\begin{align*}\n    \Pb\left(\left|\dfrac{1}{T}\sum_{t=1}^{T} R_t(s, a) - \E[R_t(s, a)]\right| \geq \epsilon\right) = \Pb\left(\left|\sum_{t=1}^T \left(R_t(s, a) - \E[R_t(s, a)]\right)\right| \geq T\epsilon\right) \overset{(a)}{\leq} 2\exp\left(-\frac{2\epsilon^2 T}{\eta^2}\right),\n\end{align*}\nwhere $(a)$ follows from \eqref{eq:vt0} by replacing $\epsilon$ with $\epsilon T$, and accounting for deviations in either direction.\n\end{proof}\n\n\n\begin{lemma}\textbf{(Concentration lemma 2)}\n\label{lemma:conc2}\nLet $\mu^{2}(s, a)=\mathbb{E}\left[R_{t}^{2}(s, a)\right]$. Let $R_t(s,a) \leq 2\eta$ and $R^{2}_t(s,a) \leq 4\eta^2$ for any time $t$, following \Cref{assm:bounded}. Let $n=KL$ be the total budget of state-action samples. Define the event\n\begin{align}\n\xi_{\delta}=\left(\bigcap_{s\in\S}\bigcap_{1 \leq a \leq A, T_n(s,a) \geq 1}\left\{\left|\frac{1}{T_n(s,a)}\sum_{t=1}^{T_n(s,a)} R_{t}^{2}(s, a)-\mu^{2}(s, a)\right| \leq (2\eta + 4\eta^2) \sqrt{\frac{\log (SA n(n+1) \/ \delta)}{2 T_n(s,a)}}\right\}\right) \bigcap \nonumber\\\n\left(\bigcap_{s\in\S}\bigcap_{1 \leq a \leq A, T_n(s,a) \geq 1}\left\{\left|\frac{1}{T_n(s,a)}\sum_{t=1}^{T_n(s,a)} R_{t}(s, a)-\mu(s, a)\right| \leq (2\eta + 4\eta^2) \sqrt{\frac{\log (SA n(n+1) \/ \delta)}{2 T_n(s,a)}}\right\}\right)\label{eq:event-xi-delta}\n\end{align}\nThen we can show that $\Pb\left(\xi_{\delta}\right) \geq 1- 2\delta$.\n\end{lemma}\n\n\begin{proof}\nFirst note that the total budget $n = KL$. 
Observe that the random variables $R^{k}_{t}(s, a)$ and $R^{(2), k}_{t}(s, a)$ \nare conditionally independent given the previous state $S^k_{t-1}$. Also observe that for any $\eta>0$ we have that $R^{k}_{t}(s, a), R^{(2),k}_{t}(s, a) \leq 2\eta + 4\eta^2$, where $R^{(2),k}_{t}(s, a) = (R^k_t(s,a))^2$. \nHence we can show that \n\begin{align*}\n    \Pb&\left(\bigcup_{s\in\S}\bigcup_{1 \leq a \leq A, T_n(s,a) \geq 1}\left\{\left|\frac{1}{T_n(s,a)}\sum_{t=1}^{T_n(s,a)} R_{t}^{2}(s, a)-\mu^{2}(s, a)\right| \geq (2\eta + 4\eta^2) \sqrt{\frac{\log (SA n(n+1) \/ \delta)}{2 T_n(s,a)}}\right\}\right)\\\n    \n    &\overset{(a)}{\leq} \sum_{s=1}^S\sum_{a=1}^A\sum_{t=1}^n\sum_{T_n(s,a)=1}^t2\exp\left(-\dfrac{2T_n(s,a)}{(2\eta + 4\eta^2)^2 }\cdot \frac{(2\eta + 4\eta^2)^2\log (SA n(n+1) \/ \delta)}{2 T_n(s,a)}\right) = \delta,\n\end{align*}\nwhere $(a)$ follows from \Cref{lemma:conc}. \nNote that in $(a)$ we have to take a double union bound summing up over all possible pulls $T_n$ from $1$ to $n$ as $T_n$ is a random variable. Similarly we can show that\n\begin{align*}\n    \Pb&\left(\bigcup_{s\in\S}\bigcup_{1 \leq a \leq A, T_n(s,a) \geq 1}\left\{\left|\frac{1}{T_n(s,a)}\sum_{t=1}^{T_n(s,a)} R_{t}(s, a)-\mu(s, a)\right| \geq (2\eta + 4\eta^2) \sqrt{\frac{\log (SA n(n+1) \/ \delta)}{2 T_n(s,a)}}\right\}\right)\\\n    \n    &\overset{(a)}{\leq} \sum_{s=1}^S\sum_{a=1}^A\sum_{t=1}^n\sum_{T_n(s,a)=1}^{t}2\exp\left(-\dfrac{2 T_n(s,a)}{(2\eta + 4\eta^2)^2 }\cdot \frac{(2\eta + 4\eta^2)^2\log (SA n(n+1) \/ \delta)}{2 T_n(s,a)}\right) = \delta,\n\end{align*}\nwhere $(a)$ follows from \Cref{lemma:conc}. 
\nHence, combining the two events above via a union bound we have \n$$\n\Pb\left(\xi_{\delta}\right) \geq 1- 2\delta.\n$$\n\end{proof}\n\n\begin{corollary}\n\label{corollary:conc}\nUnder the event $\xi_\delta$ in \eqref{eq:event-xi-delta} we have for any state-action pair in an episode $k$ the following relation with probability greater than $1-\delta$\n\begin{align*}\n    |\wsigma^{k}_t(s,a) - \sigma(s,a)|\leq (2\eta + 4\eta^2) \sqrt{\frac{\log (SA n(n+1) \/ \delta)}{2 T^K_L(s,a)}},\n\end{align*}\nwhere $T^K_L(s,a)$ is the total number of samples of the state-action pair $(s,a)$ up to episode $k$.\n\end{corollary}\n\begin{proof}\nObserve that the event $\xi_\delta$ bounds the sum of rewards $R^k_t(s,a)$ and squared rewards $R^{k,(2)}_t(s,a)$ for any $T^K_L(s,a) \geq 1$. Hence we can directly apply \Cref{lemma:conc2} to get the bound.\n\end{proof}\n\n\begin{lemma}\textbf{(Bound samples in level $2$)}\n\label{lemma:bound-samples-level2}\nSuppose that, at an episode $k$, the action $p$ in state $s^2_i$ in a $2$-depth $\T$ is under-pulled relative to its optimal proportion. Then we can lower bound the actual samples $T^K_L(s^2_i, p)$ with respect to the optimal samples $T^{*,K}_L(s^2_i, p)$ with probability $1-\delta$ as follows:\n\begin{align*}\n    T^K_L(s^2_i, p) \geq T^{*,K}_L(s^2_i, p)-4 c b^*(p|s^2_i) \frac{\sqrt{\log (H \/ \delta)}}{B(s^2_i) b^{*,\nicefrac{3}{2}}_{\min}( s^2_i)} \sqrt{T^K_L(s^2_i)}-4 A b^*(p|s^2_i),\n\end{align*}\nwhere $B(s^2_i)$ is defined in \eqref{eq:B-def}, $c=(\eta + \eta^2)\/\sqrt{2}$, and $H=SA n(n+1)$.\n\end{lemma}\n\n\n\begin{proof}\n\textbf{Step 1 (Properties of the algorithm):} \nLet us first define the confidence interval term for $(s,a)$ at time $t$ as\n\begin{align}\n    U^k_t(s,a) = 2c\sqrt{\dfrac{\log(H\/\delta)}{T^k_t(s^2_i,a)}},\label{eq:ucb-var}\n\end{align}\nwhere $c = (\eta+\eta^2)\/\sqrt{2}$, and $H=SA n(n+1)$. 
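For intuition, the width $U^k_t(s,a) = 2c\sqrt{\log(H\/\delta)\/T^k_t(s,a)}$ shrinks at the $1\/\sqrt{T}$ rate. A small sketch evaluating it with hypothetical problem sizes (the values of $\eta$, $S$, $A$, $n$, $\delta$ below are illustrative only):

```python
import math

# Illustrative evaluation of the width U = 2c sqrt(log(H / delta) / T)
# with hypothetical problem sizes; they only illustrate the 1/sqrt(T)
# decay of the confidence bonus.
eta, S, A, n, delta = 1.0, 5, 2, 1000, 0.05
c = (eta + eta ** 2) / math.sqrt(2)
H = S * A * n * (n + 1)

def bonus(T):
    return 2 * c * math.sqrt(math.log(H / delta) / T)

assert bonus(100) > bonus(400)                    # more samples, tighter interval
assert abs(bonus(400) - bonus(100) / 2) < 1e-12   # quadrupling T halves the width
```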
Also note that on $\xi_\delta$ using \Cref{corollary:conc} we have\n\begin{align}\n    \wsigma^{k}_{t}(s^2_i,a) \overset{(a)}{\leq} \sigma(s^2_i,a) + U^k_t(s,a) \implies \wsigma^{(2), k}_{t}(s^2_i,a) &\leq \sigma^2(s^2_i,a) + 2\sigma(s^2_i,a)U^k_t(s,a) + U^{(2), k}_t(s,a)\nonumber\\\n    \n    & = \sigma^2(s^2_i,a) + 4\sigma(s^2_i,a) c\sqrt{\dfrac{\log(H\/\delta)}{T^k_t(s^2_i,a)}} + 4c^2\dfrac{\log(H\/\delta)}{T^k_t(s^2_i,a)}\nonumber\\\n    \n    & \overset{(b)}{\leq} \sigma^2(s^2_i,a) + 4dc^2\sqrt{\dfrac{\log(H\/\delta)}{T^k_t(s^2_i,a)}} \label{eq:relation}\n\end{align}\nwhere $(a)$ follows from \Cref{corollary:conc}, and $(b)$ follows for some constant $d>0$, noting that $\sqrt{\dfrac{\log(H\/\delta)}{T^k_t(s^2_i,a)}} > \dfrac{\log(H\/\delta)}{T^k_t(s^2_i,a)}$ for $T^k_t(s^2_i,a)$ large enough, with $d$ absorbing the remaining constants.\nLet $a$ be an arbitrary action in state $s^2_i$. Recall the definition of the upper bound used in \rev when $t>2 SA$:\n\begin{align*}\n\bU^k_{t+1}(a|s^2_i) &= \frac{\wb^k_t(a|s^2_i)}{T^k_{t}(s^2_i, a)} = \frac{\sqrt{\pi^2(a|s^2_i)\usigma^{(2),k}_t(s^2_i,a)}}{T^k_{t}(s^2_i, a)} = \frac{\sqrt{\pi^2(a|s^2_i)\left(\wsigma^{(2),k}_{t}(s^2_i,a) + 4 dc^2 \sqrt{\frac{\log (H \/ \delta)}{T^k_{t}(s^2_i, a)}}\right)}}{T^k_{t}(s^2_i, a)}\n\end{align*}\nUnder the good event $\xi_{\delta}$ using \Cref{corollary:conc}, we obtain the following upper and lower bounds for $\bU^k_{t+1}(a|s^2_i)$:\n\begin{align}\n\frac{\sqrt{\pi^2(a|s^2_i)\sigma^{2}(s^2_i,a) }}{T^k_{t}(s^2_i, a)}\overset{(a)}{\leq} \bU^k_{t+1}(a|s^2_i) \overset{(b)}{\leq} \frac{\sqrt{\pi^2(a|s^2_i)\left(\sigma^{2}(s^2_i,a) + 8 dc^2 \sqrt{\frac{\log (H \/ \delta)}{T^k_{t}(s^2_i, a)}}\right)}}{T^k_{t}(s^2_i, a)}\n\label{eq:step1-good-event}\n\end{align}\nwhere, $(a)$ follows as $\sigma^2(s^2_i,a)\leq \wsigma^{(2),k}_t(s^2_i,a) + 4dc^2\sqrt{\log(H\/\delta)\/T^k_t(s^2_i,a)}$ and $(b)$ follows as $\wsigma^{(2),k}_t(s^2_i,a) + 4dc^2\sqrt{\log(H\/\delta)\/T^k_t(s^2_i,a)} 
\leq \wsigma^{(2),k}_t(s^2_i,a) + 8dc^2\sqrt{\log(H\/\delta)\/T^k_t(s^2_i,a)}$. \nLet \rev choose to pull action $m$ at $t+1 > 2SA$ in $s^2_i$ for the last time. Then we have that for any action $p\neq m$ the following:\n\begin{align*}\n    \bU^k_{t+1}(p|s^2_i) \leq \bU^k_{t+1}(m|s^2_i).\n\end{align*}\nRecall that action $m$ is sampled for the last time at $t+1$. Hence, $T^k_{t}(s^2_i,m) = T^K_{L}(s^2_i,m) - 1$ because we are sampling action $m$ again at time $t+1$.\nNote that $T^K_{L}(s^2_i,m)$ is the total pulls of action $m$ at the end of time $n$. \nIt follows from \eqref{eq:step1-good-event} then\n\begin{align*}\n\bU^k_{t+1}(m|s^2_i) \leq \frac{\sqrt{\pi^2(m|s^2_i)\left(\sigma^{2}_{}(s^2_i,m) + 8d c^2 \sqrt{\frac{\log (H \/ \delta)}{T^k_{t}(s^2_i, m)}}\right)}}{T^k_{t}(s^2_i, m)} = \frac{\sqrt{\pi^2(m|s^2_i)\left(\sigma^{2}_{}(s^2_i,m) + 8d c^2 \sqrt{\frac{\log (H \/ \delta)}{T^K_{L}(s^2_i, m)-1}}\right)}}{T^K_{L}(s^2_i, m)-1}.\n\end{align*}\n\n\n\n\nLet $p$ be the arm in state $s^2_i$ that is under-pulled. Recall that $T^K_L(s^2_i) = \sum_a T^K_L(s^2_i,a)$. Using the lower bound in \eqref{eq:step1-good-event} and the fact that $T^k_{t}(s^2_i, p) \leq T^K_{L}(s^2_i, p)$, we may lower bound $\bU^k_{t+1}(p|s^2_i)$ as\n\begin{align*}\n\bU^k_{t+1}(p|s^2_i) \geq \frac{\sqrt{\pi^2(p|s^2_i)\sigma^2_{}(s^2_i,p)}}{T^k_{t}(s^2_i, p)} \geq \frac{\sqrt{\pi^2(p|s^2_i)\sigma^2_{}(s^2_i,p)}}{T^K_{L}(s^2_i, p)} .\n\end{align*}\nCombining all of the above we can show\n\begin{align}\n    \frac{\sqrt{\pi^2(p|s^2_i)\sigma^2_{}(s^2_i,p)}}{T^K_{L}(s^2_i, p)} \leq \frac{\sqrt{\pi^2(m|s^2_i)\left(\sigma^{2}_{}(s^2_i,m) + 8d c^2 \sqrt{\frac{\log (H \/ \delta)}{T^K_{L}(s^2_i, m)-1}}\right)}}{T^K_{L}(s^2_i, m)-1}. 
\label{eq:22}\n\end{align}\nObserve that there is no dependency on $t$, and thus, the probability that \eqref{eq:22} holds for any $p$ and for any $m$ is at least $1-\delta$ (probability of event $\xi_{\delta}$).\n\n\textbf{Step 2 (Lower bound on $T^K_{L}(s^2_i,p)$):} If an action $p$ is under-pulled compared to its optimal allocation without taking into account the initialization phase, i.e., $T^K_{L}(s^2_i,p)-2 < b^*(p|s^2_i)(T^K_L(s^2_i)-2A)$, then from the constraint $\sum_{a}\left(T^K_{L}(s^2_i,a) -2\right)=T^K_L(s^2_i)-2 A$ and the definition of the optimal allocation, we deduce that there exists at least one other action $m$ that is over-pulled compared to its optimal allocation without taking into account the initialization phase, i.e., $T^K_{L}(s^2_i,m)-2>b^*(m|s^2_i)(T^K_L(s^2_i)-2SA)$.\n\begin{align}\n\frac{\sqrt{\pi^2(p|s^2_i) \sigma^2(s^2_i, p)}}{T^K_L(s^2_i, p)} &\leq \frac{\sqrt{\pi^2(m|s^2_i)\left(\sigma^2_{}(s^2_i,m) + 8d c^2 \sqrt{\frac{\log (H \/ \delta)}{T^K_{L}(s^2_i, m)-1}}\right)}}{T^K_{L}(s^2_i, m)-1}\n\overset{(a)}{\leq} \frac{\sqrt{\pi^2(m|s^2_i)\left(\sigma^2_{}(s^2_i,m) + 8 dc^2 \sqrt{\frac{\log (H \/ \delta)}{T^K_{L}(s^2_i, m)-2}}\right)}}{T^K_{L}(s^2_i, m)-1}\nonumber\\\n&\overset{(b)}{\leq} \frac{\sqrt{\pi^2(m|s^2_i)\sigma^2_{}(s^2_i,m)} + 4d\pi(m|s^2_i) c \sqrt{\frac{\log (H \/ \delta)}{T^K_{L}(s^2_i, m)-2}}}{T^{*,K}_L(s^2_i,m)}\nonumber\\\n&\overset{(c)}{\leq} \frac{\sqrt{\pi^2(m|s^2_i)\sigma^2(s^2_i,m)} + \left(4 dc \sqrt{\frac{\log (H \/ \delta)}{b^*(m|s^2_i)(T^K_L(s^2_i) - 2SA) + 1}}\right)}{T^{*,K}_L(s^2_i,m)}\nonumber\\\n&\overset{(d)}{\leq} \frac{B(s^2_i)}{T_L^K(s^2_i)}+ 4 dc \frac{\sqrt{\log (H \/ \delta)}}{T^{(\nicefrac{3}{2}),K}_L(s^2_i) b^*(m|s^2_i)^{\nicefrac{3}{2}}}+\frac{4 A B(s^2_i)}{T_L^{(2),K}(s^2_i)}\nonumber\\\n&\overset{(e)}{\leq} \frac{B(s^2_i)}{T_L^K(s^2_i)}+ 4 dc \frac{\sqrt{\log (H \/ \delta)}}{T_L^{(\nicefrac{3}{2}),K}(s^2_i) 
b^{*,\nicefrac{3}{2}}_{\min}(s^2_i)}+\frac{4 A B(s^2_i)}{T^{(2),K}_L(s^2_i)}. \label{eq:upper-s2i}\n\end{align}\nwhere, $(a)$ follows as $T^K_L(s^2_i,m) -2 \leq T^K_L(s^2_i,m) -1$, $(b)$ follows as $T^{*,K}_L(s^2_i,m) \leq T^K_L(s^2_i,m)-1$ since action $m$ is over-pulled, and $\sqrt{a+b} \leq \sqrt{a} + \sqrt{b}$ for $a,b>0$, $(c)$ follows as $T^K_L(s^2_i)=\sum_a T^K_L(s^2_i, a)$ and $T^K_{L}(s^2_i,m)-2>b^*(m|s^2_i)(T^K_L(s^2_i)-2SA)$, $(d)$ follows by setting the optimal samples $T^{*,K}_L(s^2_i,m) = \frac{\sqrt{\pi^2(m|s^2_i)\sigma^2(s^2_i,m)}}{B(s^2_i)}T^K_L(s^2_i)$, and $(e)$ follows as $b^*(m|s^2_i) \geq b^*_{\min}(s^2_i)$. \nBy rearranging \eqref{eq:upper-s2i}, we obtain the lower bound on $T^K_L(s^2_i, p)$:\n\begin{align*}\nT^K_L(s^2_i, p) &\geq \frac{\sqrt{\pi^2(p|s^2_i) \sigma^2(s^2_i, p)}}{\frac{B(s^2_i)}{T_L^K(s^2_i)}+4d c \frac{\sqrt{\log (H \/ \delta)}}{T_L^{(\nicefrac{3}{2}),K}(s^2_i) b^{*,\nicefrac{3}{2}}_{\min}(s^2_i)}+\frac{4 A B(s^2_i)}{T^{(2),K}_L(s^2_i)}} = \frac{\sqrt{\pi^2(p|s^2_i) \sigma^2(s^2_i, p)}}{\frac{B(s^2_i)}{T_L^K(s^2_i)}}\left[\dfrac{1}{1+4d c \frac{\sqrt{\log (H \/ \delta)}}{B(s^2_i)T_L^{(\nicefrac{1}{2}),K}(s^2_i) b^{*,\nicefrac{3}{2}}_{\min}(s^2_i)}+\frac{4 A }{T^{K}_L(s^2_i)}}\right]\\\n&\overset{(a)}{\geq} \frac{\sqrt{\pi^2(p|s^2_i) \sigma^2(s^2_i, p)}}{\frac{B(s^2_i)}{T_L^K(s^2_i)}}\left[1 - 4d c \frac{\sqrt{\log (H \/ \delta)}}{B(s^2_i)T_L^{(\nicefrac{1}{2}),K}(s^2_i) b^{*,\nicefrac{3}{2}}_{\min}(s^2_i)} - \frac{4 A }{T^{K}_L(s^2_i)}\right]\\\n&\geq T^{*,K}_L(s^2_i, p)-4d c b^*(p|s^2_i) \frac{\sqrt{\log (H \/ \delta)}}{B(s^2_i) b^{*,\nicefrac{3}{2}}_{\min}( s^2_i)} \sqrt{T^K_L(s^2_i)}-4 A b^*(p|s^2_i),\n\end{align*}\nwhere in $(a)$ we use $1 \/(1+x) \geq 1-x$ (for $x>-1$ ).\n\end{proof}\n\n\n\n\begin{lemma}\textbf{(Bound samples in level 1)}\n\label{lemma:bound-samples-level1} Suppose that, at an episode $k$, the action $p$ in state $s^1_1$ 
in a $2$-depth $\\T$ is under-pulled relative to its optimal proportion. Then we can lower bound the actual samples $T^K_L(s^1_1, p)$ with respect to the optimal samples $T^{*,K}_L(s^1_1, p)$ with probability $1-\\delta$ as follows\n\\begin{align*}\n T^K_L(s^1_1, p) &\\geq T^{*,K}_L(s^1_1, p)-4 c b^*(p|s^1_1) \\frac{\\sqrt{\\log (H \/ \\delta)}}{B(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)} \\sqrt{T^K_L(s^1_1)}-4 A b^*(p|s^1_1)\\\\\n&- \\gamma\\pi(m|s^1_1)\\dfrac{T^K_L(s^1_1)}{B^2(s^1_1)}\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\frac{B(s^{2}_j)}{b^*(m|s^2_j)}\\sum_{a'}\\left[T^{*,K}_L(s^2_j, a') + 4 c b^*(a'|s^2_j) \\frac{\\sqrt{\\log (H \/ \\delta)}}{ b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)} \\sqrt{T^K_L(s^1_1)} + 4 A b(a'|s^2_j)\\right]\n\\end{align*}\nwhere $B(s^2_i)$ is defined in \\eqref{eq:B-def}, $c=(\\eta + \\eta^2)\/\\sqrt{2}$, and $H=SA n(n+1)$.\n\\end{lemma}\n\n\\begin{proof}\n\\textbf{Step 1 (Properties of the algorithm):} Again note that on $\\xi_\\delta$ using \\Cref{corollary:conc} we have\n\\begin{align*}\n \\wsigma^{k}_{t}(s^1_1,a) \\leq \\sigma(s^1_1,a) + U^k_t(s,a) \\implies \\wsigma^{(2), k}_{t}(s^1_1,a) \\leq \\sigma^2(s^1_1,a) + U^{(2), k}_t(s,a) \\overset{(a)}{=} \\sigma^2(s^1_1,a) + 4dc^2\\sqrt{\\dfrac{\\log(H\/\\delta)}{T^k_t(s^1_1,a)}}\n\\end{align*}\nfor any action $a$ in $s^1_1$, where $(a)$ follows by the definition of $U^{(2),k}_t$\\eqref{eq:ucb-var}, some constant $d>0$ and the same derivation as in \\eqref{eq:relation}. Let $a$ be an arbitrary action in state $s^1_1$. 
Recall the definition of the upper bound used in \\rev when $t>2 SA$:\n\\begin{align*}\n&\\bU^k_{t+1}(a|s^1_1) = \\frac{\\wb^k_t(a|s^1_1)}{T^k_{t}(s^1_1, a)} = \\frac{\\sqrt{\\sum_{s^{2}_j}\\pi^2(a|s^1_1)\\left[\\usigma^{(2),k}_t(s^1_1,a) + \\gamma^2 P(s^2_j|s^1_1,a)\\wB^{(2),k}_{t}(s^2_j)\\right]}}{T^k_{t}(s^1_1, a)}\\\\\n&= \\frac{\\sqrt{\\sum_{s^{2}_j}\\pi^2(a|s^1_1)\\left[\\wsigma^{(2),k}_{t}(s^1_1,a) + 4d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^k_{t}(s^1_1, a)}} + \\gamma^2P(s^2_j|s^1_1,a)\\sum_{a'}\\sqrt{\\pi^2(a'|s^2_j)\\left(\\wsigma^{(2),k}_t(s^2_j, a') + 4d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^k_{t}(s^2_j, a')}}\\right) }\\right]}}{T^k_{t}(s^1_1, a)}.\n\\end{align*}\nUnder the good event $\\xi_{\\delta}$, using \\Cref{corollary:conc} we obtain the following upper and lower bounds for $\\bU^k_{t+1}(a|s^1_1)$:\n\\begin{align}\n\\bU^k_{t+1}(a|s^1_1) &\\leq\n\\frac{\\sqrt{\\sum\\limits_{s^{2}_j}\\pi^2(a|s^1_1)\\left[\\sigma^{2}(s^1_1,a) + 8d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^k_{t}(s^1_1, a)}} + \\gamma^2P(s^2_j|s^1_1,a)\\sum\\limits_{a'}\\sqrt{\\pi^2(a'|s^2_j)\\left(\\sigma^{2}(s^2_j, a') + 8d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^k_{t}(s^2_j, a')}}\\right) }\\right]}}{T^k_{t}(s^1_1, a)}\\nonumber\\\\\n\\bU^k_{t+1}(a|s^1_1) &\\geq \\frac{\\sqrt{\\pi^2(a|s^1_1)\\sigma^{2}(s^1_1,a) }}{T^k_{t}(s^1_1, a)}\n\\label{eq:step1-good-event-level1}\n\\end{align}\nwhere the upper bound follows as $\\wsigma^{(2),k}_t(s^1_1,a) + 4dc^2\\sqrt{\\log(H\/\\delta)\/T^k_t(s^1_1,a)} \\leq \\sigma^2(s^1_1,a) + 8dc^2\\sqrt{\\log(H\/\\delta)\/T^k_t(s^1_1,a)}$ on $\\xi_\\delta$, and the lower bound follows as $\\sigma^2(s^1_1,a)\\leq \\wsigma^{(2),k}_t(s^1_1,a) + 4dc^2\\sqrt{\\log(H\/\\delta)\/T^k_t(s^1_1,a)}$ after dropping the remaining non-negative terms. \nSuppose \\rev takes action $m$ at time $t+1$ in $s^1_1$ for the last time. 
\nThen we have that for any action $p\\neq m$ the following:\n\\begin{align*}\n \\bU^k_{t+1}(p|s^1_1) \\leq \\bU^k_{t+1}(m|s^1_1).\n\\end{align*}\nRecall that $T^k_{t}(s^1_1,m)$ is the last time the action $m$ is sampled. Hence, $T^k_{t}(s^1_1,m) = T^K_{L}(s^1_1,m) - 1$ because we are sampling action $m$ again in time $t+1$. Note that $T^K_{L}(s^1_1,m)$ is the total pulls of action $m$ at the end of time $n$. It follows from \\eqref{eq:step1-good-event-level1}\n\\begin{align*}\n&\\bU^k_{t+1}(m|s^1_1) \\leq \\frac{\\sqrt{\\sum\\limits_{s^{2}_j}\\pi^2(a|s^1_1)\\left[\\sigma^{2}(s^1_1,a) + 8d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^k_{t}(s^1_1, a)}} + \\gamma^2P(s^2_j|s^1_1,a)\\sum\\limits_{a'}\\sqrt{\\pi^2(a'|s^2_j)\\left(\\sigma^{2}(s^2_j, a') + 8d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^k_{t}(s^2_j, a')}}\\right) }\\right]}}{T^k_{t}(s^1_1, a)} \\\\\n&\\overset{(a)}{\\leq} \\frac{\\sqrt{\\sum\\limits_{s^{2}_j}\\left(\\pi^2(a|s^1_1)\\sigma^{2}(s^1_1,a) + 8d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^k_{t}(s^1_1, a)}}\\right)} + \\gamma\\pi(a|s^1_1)\\sum\\limits_{s^{2}_j} P(s^2_j|s^1_1,a)\\left[\\sum\\limits_{a'}\\sqrt{\\pi^2(a'|s^2_j)\\left(\\sigma^{2}(s^2_j, a') + 8d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^k_{t}(s^2_j, a')}}\\right) }\\right]}{T^k_{t}(s^1_1, a)}\\\\\n&\\overset{(b)}{\\leq}\\!\\! \\frac{\\sqrt{\\sum_{s^2_j}\\pi^2(m|s^1_1)\\left(\\sigma^{2}_{}(s^1_1,m) + 8d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^K_{L}(s^1_1, m)-1}}\\right)}}{T^K_{L}(s^1_1, m)-1}\\\\\n&\\qquad \\!+\\! \\gamma\\pi(a|s^1_1)\\sum\\limits_{s^{2}_j}\\!\\! P(s^2_j|s^1_1,a)\\sum\\limits_{a'} \\left[\\dfrac{\\sqrt{\\pi^2(a'|s^2_j)\\left(\\sigma^{2}(s^2_j, a') \\!+\\! 
8d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^K_{L}(s^2_j, a')}}\\right) }}{T^K_L(s^2_j,a')-1}\\!\\right].\n\\end{align*}\nwhere, $(a)$ follows as $\\sqrt{a+b} \\leq \\sqrt{a} + \\sqrt{b}$ for $a,b >0$ and $(b)$ follows as $T^k_{t}(s^1_1, a)\\geq T^k_{t}(s^2_j, a')$ where $s^2_j$ is the next state of $s^1_1$ following action $a$.\n\n\nLet $p$ be the action in state $s^1_1$ that is under-pulled. Recall that $T^K_L(s^1_1) = \\sum_a T^K_L(s^1_1,a)$. Using the lower bound in \\eqref{eq:step1-good-event-level1} and the fact that $T^k_{t}(s^1_1, p) \\leq T^K_{L}(s^1_1, p)$, we may lower bound $\\bU^k_{t+1}(p|s^1_1)$ as\n\\begin{align*}\n\\bU^k_{t+1}(p|s^1_1) \\geq \\frac{\\sqrt{\\pi^2(p|s^1_1)\\sigma^2_{}(s^1_1,p)}}{T^k_{t}(s^1_1, p)} \\geq \\frac{\\sqrt{\\pi^2(p|s^1_1)\\sigma^2_{}(s^1_1,p)}}{T^K_{L}(s^1_1, p)}.\n\\end{align*}\nCombining all of the above we can show\n\\begin{align}\n \\frac{\\sqrt{\\pi^2(p|s^1_1)\\sigma^2_{}(s^1_1,p)}}{T^K_{L}(s^1_1, p)} &\\leq \\frac{\\sqrt{\\sum_{s^2_j}\\pi^2(m|s^1_1)\\left(\\sigma^{2}_{}(s^1_1,m) + 8d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^K_{L}(s^1_1, m)-1}}\\right)}}{T^K_{L}(s^1_1, m)-1}\\nonumber\\\\\n &\\qquad + \\gamma\\pi(m|s^1_1)\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\sum\\limits_{a'} \\left[\\dfrac{\\sqrt{\\pi^2(a'|s^2_j)\\left(\\sigma^{2}(s^2_j, a') + 8d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^K_{L}(s^2_j, a')-1}}\\right) }}{T^K_L(s^2_j,a')-1}\\right]. 
\\label{eq:22-level1}\n\\end{align}\nObserve that there is no dependency on $t$, and thus, the probability that \\eqref{eq:22-level1} holds for any $p$ and for any $m$ is at least $1-\\delta$ (probability of event $\\xi_{\\delta}$).\n\n\\textbf{Step 2 (Lower bound on $T^K_{L}(s^1_1,p)$):} If an action $p$ is under-pulled compared to its optimal allocation without taking into account the initialization phase, i.e., $T^K_{L}(s^1_1,p)-2 < b^*(p|s^1_1)(T^K_{L}(s^1_1)-2A)$, then from the constraint $\\sum_{a}\\left(T^K_{L}(s^1_1,a) -2\\right)=T^K_{L}(s^1_1)-2 A$ and the definition of the optimal allocation, we deduce that there exists at least one other action $m$ that is over-pulled compared to its optimal allocation without taking into account the initialization phase, i.e., $T^k_{ n}(s^1_1,m)-2>b^*(m|s^1_1)(T^K_L(s^1_1)-2SA)$. Then, starting from \\eqref{eq:22-level1} and proceeding as in \\Cref{lemma:bound-samples-level2}, we obtain\n\\begin{align}\n\\frac{\\pi(p|s^1_1) \\sigma(s^1_1, p)}{T^K_L(s^1_1, p)} &\\leq \\frac{\\sqrt{\\sum_{s^2_j}\\pi^2(m|s^1_1)\\left(\\sigma^{2}_{}(s^1_1,m) + 8d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^K_{L}(s^1_1, m)-2}}\\right)}}{T^K_{L}(s^1_1, m)-1}\\nonumber\\\\\n &\\qquad + \\gamma\\pi(m|s^1_1)\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\sum\\limits_{a'} \\left[\\dfrac{\\sqrt{\\pi^2(a'|s^2_j)\\left(\\sigma^{2}(s^2_j, a') + 8d c^2 \\sqrt{\\frac{\\log (H \/ \\delta)}{T^K_{L}(s^2_j, a')-2}}\\right) }}{T^K_L(s^2_j,a')-1}\\right]\\nonumber\\\\\n&\\overset{(a)}{\\leq} \\sum_{s^2_j}\\frac{\\sqrt{\\pi^2(m|s^1_1)\\sigma^{2}_{}(s^1_1,m)} + 4d c \\sqrt{\\frac{\\log (H \/ \\delta)}{T^K_{L}(s^1_1, m)-2}}}{T^{*,K}_{L}(s^1_1, m)}\\nonumber\\\\\n &\\qquad + \\gamma\\pi(m|s^1_1)\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\sum\\limits_{a'} \\left[\\dfrac{\\sqrt{\\pi^2(a'|s^2_j)\\sigma^{2}(s^2_j, a')} + 4d c \\sqrt{\\frac{\\log (H \/ \\delta)}{T^K_{L}(s^2_j, a')-2} }}{T^{*,K}_L(s^2_j,a')}\\right]\\nonumber\\\\\n&\\overset{(b)}{\\leq} \\sum_{s^2_j}\\frac{\\sqrt{\\pi^2(m|s^1_1)\\sigma^2(s^1_1,m)} + \\left(4d c \\sqrt{\\frac{\\log (H \/ \\delta)}{b^*(m|s^1_1)(T^K_L(s^1_1) - 2SA) + 
1}}\\right)}{T^{*,K}_L(s^1_1,m)}\\nonumber\\\\\n&\\qquad + \\gamma\\pi(m|s^1_1)\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\sum\\limits_{a'} \\left[\\dfrac{\\sqrt{\\pi^2(a'|s^2_j)\\sigma^{2}(s^2_j, a')} + 4d c \\sqrt{\\frac{\\log (H \/ \\delta)}{b^*(a'|s^2_j)(T^K_L(s^2_j) - 2SA) + 1} }}{T^{*,K}_L(s^2_j,a')}\\right]\\nonumber\\\\\n&\\overset{(c)}{\\leq} \\sum_{s^2_j}\\left[\\frac{B(s^1_1)}{T_L^K(s^1_1)}+ 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T^{(\\nicefrac{3}{2}),K}_L(s^1_1) b^*(m|s^1_1)^{\\nicefrac{3}{2}}}+\\frac{4 A B(s^1_1)}{T_L^{(2),K}(s^1_1)}\\right]\\nonumber\\\\\n&\\qquad + \\gamma\\pi(m|s^1_1)\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\underbrace{\\sum\\limits_{a'}\\left[\\frac{B(s^{2}_j)}{T_L^K(s^2_j)}+ 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T_L^{(\\nicefrac{3}{2}),K}(s^2_j) b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)}+\\frac{4 A B(s^{2}_j)}{T^{(2),K}_L(s^2_j)-1}\\right]}_{\\V(s^2_j)} \\nonumber\\\\\n&\\overset{(d)}{\\leq} \\sum_{s^2_j}\\left[\\frac{B(s^1_1)}{T_L^K(s^1_1)}+ 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T_L^{(\\nicefrac{3}{2}),K}(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)}+\\frac{4 A B(s^1_1)}{T^{(2),K}_L(s^1_1)-1}\\right]\n + \\gamma\\pi(m|s^1_1)\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\V(s^2_j) \\label{eq:upper-s2i-level1}\n\\end{align}\nwhere, $(a)$ follows as $T^{*,(k)}_n(s^1_1,m) \\geq T^K_L(s^1_1,m)-1$ as action $m$ is over-pulled, $(b)$ follows as $T^K_L(s^1_1)=\\sum_a T^K_L(s^1_1, a)$ and $T^k_{ n}(s^1_1,m)-2>b^*(m|s^1_1)(T^K_L(s^1_1)-2SA)$ and a similar argument follows in state $s^2_j$, $(c)$ follows by setting $T^{*,K}_L(s^1_1,m) = \\frac{\\sqrt{\\pi^2(m|s^1_1)\\sigma^2(s^1_1,m)}}{B(s^1_1)}T^K_L(s^1_1)$, and using the result of \\cref{lemma:bound-samples-level2}. Finally, $(d)$ follows as $b^*(m|s^1_1) \\geq b_{\\min}(s^1_1)$. 
In $(d)$ we also define the total over-samples in state $s^2_j$ as $\\V(s^2_j)$ such that\n\\begin{align*}\n \\V(s^2_j) &\\coloneqq \\sum\\limits_{a'}\\left[\\frac{B(s^{2}_j)}{T_L^K(s^2_j)}+ 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T_L^{(\\nicefrac{3}{2}),K}(s^2_j) b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)}+\\frac{4 A B(s^{2}_j)}{T^{(2),K}_L(s^2_j)-1}\\right].\n\\end{align*}\nBy rearranging \\eqref{eq:upper-s2i-level1}, we obtain the lower bound on $T^K_L(s^1_1, p)$:\n\\begin{align*}\nT^K_L(s^1_1, p) &\\geq \\frac{\\sqrt{\\pi^2(p|s^1_1) \\sigma^2(s^1_1, p)}}{\\frac{B(s^1_1)}{T_L^K(s^1_1)}+4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T_L^{(\\nicefrac{3}{2}),K}(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)}+\\frac{4 A B(s^1_1)}{T^{(2),K}_L(s^1_1)} + \\gamma\\pi(m|s^1_1)\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\V(s^2_j) } \\\\\n&= \\frac{\\sqrt{\\pi^2(p|s^1_1) \\sigma^2(s^1_1, p)}}{\\frac{B(s^1_1)}{T_L^K(s^1_1)}}\\left[\\dfrac{1}{1+4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{B(s^1_1)T_L^{(\\nicefrac{1}{2}),K}(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)}+\\frac{4 A }{T^{k}_n(s^1_1)} + \\gamma\\pi(m|s^1_1)\\frac{T_L^K(s^1_1)}{B(s^1_1)}\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\V(s^2_j)}\\right]\\\\\n&\\geq \\frac{\\sqrt{\\pi^2(p|s^1_1) \\sigma^2(s^1_1, p)}}{\\frac{B(s^1_1)}{T_L^K(s^1_1)}}\\left[\\dfrac{1}{1+4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{B(s^1_1)T_L^{(\\nicefrac{1}{2}),K}(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)}+\\frac{4 A }{T^{k}_n(s^1_1)} + \\gamma\\pi(m|s^1_1)\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\V(s^2_j)}\\right]\\\\\n&\\overset{(a)}{\\geq} \\frac{\\sqrt{\\pi^2(p|s^1_1) \\sigma^2(s^1_1, p)}}{\\frac{B(s^1_1)}{T_L^K(s^1_1)}}\\left[1 - 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{B(s^1_1)T_L^{(\\nicefrac{1}{2}),K}(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)} - \\frac{4 A }{T^{k}_n(s^1_1)} - \\gamma\\pi(m|s^1_1)\\frac{T_L^K(s^1_1)}{B(s^1_1)}\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\V(s^2_j)\\right]\n\\end{align*}\n\\begin{align*}\n&\\overset{(b)}{=} \\frac{\\sqrt{\\pi^2(p|s^1_1) \\sigma^2(s^1_1, 
p)}}{\\frac{B(s^1_1)}{T_L^K(s^1_1)}}\\bigg[1 - 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{B(s^1_1)T_L^{(\\nicefrac{1}{2}),K}(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)} - \\frac{4 A }{T^{k}_n(s^1_1)} \\\\\n&\\qquad - \\gamma\\pi(m|s^1_1)\\frac{T_L^K(s^1_1)}{B(s^1_1)}\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\left(\\frac{B(s^{2}_j)}{T_L^K(s^2_j) }+ 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T_L^{(\\nicefrac{3}{2}),K}(s^2_j) b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)}+\\frac{4 A B(s^{2}_j)}{T^{(2),K}_L(s^2_j)-1}\\right)\\bigg]\\\\\n&\\overset{(c)}{\\geq} T^{*,K}_L(s^1_1, p)-4d c b^*(p|s^1_1) \\frac{\\sqrt{\\log (H \/ \\delta)}}{B(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)} \\sqrt{T^K_L(s^1_1)}-4 A b^*(p|s^1_1) \\\\\n&\\qquad - \\gamma\\pi(m|s^1_1)\\sum_{s^2_j}\\frac{B(s^{2}_j)T^K_L(s^2_j)}{b^*(m|s^2_j)B(s^1_1)} P(s^2_j|s^1_1,m)\\left(\\frac{B(s^{2}_j)}{T_L^K(s^2_j) }+ 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T_L^{(\\nicefrac{3}{2}),K}(s^2_j) b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)}+\\frac{4 A B(s^{2}_j)}{T^{(2),K}_L(s^2_j)-1}\\right)\\\\\n&\\geq T^{*,K}_L(s^1_1, p)-4d c b^*(p|s^1_1) \\frac{\\sqrt{\\log (H \/ \\delta)}}{B(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)} \\sqrt{T^K_L(s^1_1)}-4 A b^*(p|s^1_1)\\\\\n&\\qquad- \\gamma\\pi(m|s^1_1)\\dfrac{T^K_L(s^1_1)}{B^2(s^1_1)}\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\left(\\frac{B(s^{2}_j)}{b^*(m|s^2_j)}\\sum_{a'}\\left[T^{*,K}_L(s^2_j, a') + 4d c b^*(a'|s^2_j) \\frac{\\sqrt{\\log (H \/ \\delta)}}{ b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)} \\sqrt{T^K_L(s^1_1)} + 4 A b^*(a'|s^2_j)\\right]\\right)\\\\\n&\\geq T^{*,K}_L(s^1_1, p)-4d c b^*(p|s^1_1) \\frac{\\sqrt{\\log (H \/ \\delta)}}{B(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)} \\sqrt{T^K_L(s^1_1)}-4 A b^*(p|s^1_1)\\\\\n&\\qquad- \\gamma\\pi(m|s^1_1)\\dfrac{T^K_L(s^1_1)}{B^2(s^1_1)}\\sum_{s^2_j} P(s^2_j|s^1_1,m)\\frac{B(s^{2}_j)}{b^*(m|s^2_j)}\\sum_{a'}\\left[T^{*,K}_L(s^2_j, a') + 4d c b^*(a'|s^2_j) \\frac{\\sqrt{\\log (H \/ \\delta)}}{ b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)} 
\\sqrt{T^K_L(s^1_1)} + 4 A b^*(a'|s^2_j)\\right]\n\\end{align*}\nwhere in $(a)$ we use $1 \/(1+x) \\geq 1-x$ (for $x>-1$ ), in $(b)$ we substitute the value $\\V(s^2_j)$, and $(c)$ follows as $T^K_L(s^2_j) = \\left(b(m|s^2_j)\/B(s^{2}_j)\\right)T^K_{L}(s^1_1)$.\n\\end{proof}\n\n\n\\begin{lemma}\n\\label{lemma:regret-two-level}\nLet the total budget be $n=KL$ and $n\\geq 4SA$. Then the total regret in a deterministic $2$-depth $\\T$ at the end of $K$-th episode when sampling according to the \\eqref{eq:tree-bhat} is given by\n\\begin{align*}\n\\mathcal{R}_n \\leq \\widetilde{O}\\left(\\dfrac{B^2(s^1_1)\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)} + \\gamma\\max_{s^2_j,a}\\pi(a|s^1_1)P(s^2_j|s^1_1,a)\\dfrac{B^2(s^2_j)\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)} \\right)\n\\end{align*}\nwhere, the $\\widetilde{O}$ hides other lower order terms resulting out of the expansion of the squared terms and $B(s^\\ell_i)$ is defined in \\eqref{eq:B-def}.\n\\end{lemma}\n\n\\begin{proof}\n\\textbf{Step 1 ($T^k_{t}(s^{\\ell}_i,a)$ is a stopping time):} Let $\\tau$ be a random variable, which is defined on the filtered probability space. Then $\\tau$ is called a stopping time (with respect to the filtration $\\left(\\left(\\mathcal{F}_{n}\\right)_{n \\in \\mathbb{N}}\\right)$, if the following condition holds: $\n\\{\\tau=n\\} \\in \\mathcal{F}_{n} \\text { for all } n$\nIntuitively, this condition means that the \"decision\" of whether to stop at time $n$ must be based only on the information present at time $n$, not on any future information. Now consider the state $s^{\\ell}_i$ and an action $a$. At each time step $t+1$, the \\rev algorithm decides which action to pull according to the current values of the upper-bounds $\\left\\{\\usigma^k_{t+1}(s^{\\ell}_i,a)\\right\\}_{a}$ in state $s^{\\ell}_i$. 
Thus for any action $a$, $T^k_{t+1}(s^{\\ell}_i,a)$ depends only on the values $\\left\\{T^k_{t+1}(s^{\\ell}_i,a)\\right\\}_{a}$ and $\\left\\{\\wsigma^k_{t}(s^{\\ell}_i,a)\\right\\}_{k}$ in state $s^{\\ell}_i$. So by induction, $T^k_{t}(s^{\\ell}_i,a)$ depends on the sequence of rewards $\\left\\{R^k_{1}(s^{\\ell}_i,a), \\ldots, R^k_{T^k_{ t}(s^{\\ell}_i,a)}(s^{\\ell}_i,a)\\right\\}$, and on the samples of the other arms (which are independent of the samples of arm $k$ ). So we deduce that $T^K_{L}(s^{\\ell}_i,a)$ is a stopping time adapted to the process $\\left(R^k_{t}(s^{\\ell}_i,a)\\right)_{t \\leq n}$.\n\n\n\\textbf{Step 2 (Regret bound):} By definition, given the dataset $\\D$ after $K$ episodes each of trajectory length $L$, we have $n$ state-action samples. Then the loss of the algorithm is \n\\begin{align*}\n\\L_{n} &= \\E_{\\D}\\left[\\left(Y_n(s^1_1) -v^{\\pi}_{}(s^1_1)\\right)^{2}\\right] \\\\ \n&= \\E_{\\D}\\left[\\left(Y_n(s^1_1) -v^{\\pi}_{}(s^1_1)\\right)^{2} \\mathbb{I}\\{\\xi_\\delta\\}\\right]+ \\E_{\\D}\\left[\\left(Y_n(s^1_1) -v^{\\pi}_{}(s^1_1)\\right)^{2} \\mathbb{I}\\left\\{\\xi^{C}_\\delta\\right\\}\\right]\n\\end{align*}\nwhere, $n = KL$ is the total budget. To handle the second term, we recall that $\\xi^C_\\delta$ holds with probability $2\\delta$. Further due to the bounded reward assumption we have \n$$\n\\E_{\\D}\\left[\\left(Y_n(s^1_1) -v^{\\pi}_{}(s^1_1)\\right)^{2}\\right]\\leq 2n^2K\\delta(4\\eta^2 +2\\eta) \\leq 2 (4\\eta^2 +2\\eta) n^{2} A \\delta\\left(1+\\log \\left(c_{2} \/ 2 n A \\delta\\right)\\right)\n$$ \nwhere $c_2>0$ is a constant. 
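As a quick numerical aside (illustrative only, not part of the proof), the sketch below checks that the bad-event term above is indeed of order $n^{-\nicefrac{3}{2}}\log n$ once $\delta$ is set to $n^{-\nicefrac{7}{2}}$; the constants `eta`, `A`, and `c2` are placeholder values.

```python
import math

# Illustrative sanity check (not part of the proof): with delta = n^(-7/2),
# the bad-event term  2(4*eta^2 + 2*eta) * n^2 * A * delta * (1 + log(c2/(2*n*A*delta)))
# scales as O(n^(-3/2) * log n).  eta, A, c2 are made-up placeholder constants.
eta, A, c2 = 1.0, 4, 10.0

def bad_event_bound(n: int) -> float:
    delta = n ** (-7 / 2)
    return 2 * (4 * eta**2 + 2 * eta) * n**2 * A * delta \
        * (1 + math.log(c2 / (2 * n * A * delta)))

# The bound divided by n^(-3/2) * log n stays bounded as n grows.
ratios = [bad_event_bound(n) / (n ** (-3 / 2) * math.log(n))
          for n in (10**2, 10**4, 10**6)]
assert all(r < 200 for r in ratios)
```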
Following Lemma 2 of \\citet{carpentier2011finite} and setting $\\delta = n^{-7\/2}$ gives an upper bound on the quantity \n\\begin{align*}\n \\E_{\\D}\\left[\\left(Y_n(s^1_1) -v^{\\pi}_{}(s^1_1)\\right)^{2} \\mathbb{I}\\left\\{\\xi^{C}_\\delta\\right\\}\\right] \\leq O\\left(\\dfrac{\\log n}{n^{\\nicefrac{3}{2}}}\\right).\n\\end{align*}\nNote that \\citet{carpentier2011finite} uses the same choice $\\delta = n^{-7\/2}$ due to the sub-Gaussian assumption on their reward distribution; under \\Cref{assm:bounded} the rewards here are also sub-Gaussian, so Lemma 2 of \\citet{carpentier2011finite} applies. Now, using the definition of $Y_n(s^1_1)$ and \\Cref{prop:wald-variance} we bound the first term as\n\\begin{align}\n\\E_{\\D}&\\left[\\left(Y_n(s^1_1) -v^{\\pi}_{}(s^1_1)\\right)^{2} \\mathbb{I}\\{\\xi_\\delta\\}\\right] \\overset{(a)}{=} \\Var[Y_{n}(s^{1}_{1})]\\E[T^K_L(s^1_1)]\\nonumber\\\\\n&\\leq \\sum_a\\pi^2(a|s^{1}_{1}) \\bigg[\\dfrac{ \\sigma^2(s^{1}_{1}, a)}{\\underline{T}^{(2),K}_L(s^{1}_{1}, a)}\\bigg]\\E[T^K_L(s^1_1,a)] + \\gamma^2\\sum_{a}\\pi^2(a|s^{1}_{1})\\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\\Var[Y_{n}(s^{2}_j)]\\E[T^K_L(s^2_j,a)]\\nonumber\\\\\n&\\leq \\sum_a\\pi^2(a|s^{1}_{1}) \\bigg[\\dfrac{ \\sigma^2(s^{1}_{1}, a)}{\\underline{T}^{(2),K}_L(s^{1}_{1}, a)}\\bigg]\\E[T^K_L(s^1_1,a)] + \\gamma^2\\sum_{a}\\pi^2(a|s^{1}_{1})\\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\\sum_{a'}\\pi^2(a'|s^2_j)\\bigg[\\dfrac{ \\sigma^2(s^{2}_{j}, a')}{\\underline{T}^{(2),K}_L(s^{2}_{j}, a')}\\bigg]\\E[T^K_L(s^2_j,a')] \\label{eq:loss-level1}\n\\end{align}\nwhere, $(a)$ follows from \\Cref{prop:wald-variance}, and $\\underline{T}_{n}(s^{\\ell}_i,a)$ is the lower bound on $T^K_{L}(s^{\\ell}_i,a)$ on the event $\\xi_\\delta$.\nNote that as $\\sum_{a} T^K_{L}(s^{1}_1,a)=n$, we also have $\\sum_{a} \\E\\left[T^K_{L}(s^{1}_1,a)\\right]=n$.\nUsing \\cref{eq:loss-level1} and \\cref{eq:upper-s2i-level1} for $\\pi^2(a|s^{1}_{1}) \\sigma^2(s^{1}_{1}, a) \/ 
\\underline{T}^k_{n}(s^1_1, a)$ (which is equivalent to using a lower bound on $T^K_{L}(s^1_1, a)$ on the event $\\xi_\\delta$), we obtain\n\\begin{align}\n\\sum_a &\\pi^2(a|s^{1}_{1}) \\bigg[\\dfrac{ \\sigma^2(s^{1}_{1}, a)}{\\underline{T}^{(2),K}_L(s^{1}_{1}, a)}\\bigg]\\E[T_n(s^1_1)] \\leq \\sum_{a}\\bigg(\\left[\\frac{B(s^1_1)}{T_L^K(s^1_1)}+ 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T_L^{(\\nicefrac{3}{2}),K}(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)}+\\frac{4 A B(s^1_1)}{T^{(2),K}_L(s^1_1)-1}\\right]\\nonumber\\\\\n &\\qquad + \\gamma\\pi(a|s^1_1)\\sum_{s^2_j} P(s^2_j|s^1_1,a)\\sum\\limits_{a'}\\left[\\frac{B(s^{2}_j)}{T_L^K(s^2_j)}+ 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T_L^{(\\nicefrac{3}{2}),K}(s^2_j) b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)}+\\frac{4 A B(s^{2}_j)}{T^{(2),K}_L(s^2_j)-1}\\right]\\bigg)^2\\E[T^K_L(s^1_1,a)]. \\label{eq:bound-level1}\n\\end{align}\nFinally the R.H.S. of \\cref{eq:bound-level1} may be bounded using the fact that $\\sum_{a} \\mathbb{E}\\left[T^K_{L}(s^1_1,a)\\right]=n$ as\n\\begin{align*}\n \\sum_a &\\pi^2(a|s^{1}_{1}) \\bigg[\\dfrac{ \\sigma^2(s^{1}_{1}, a)}{\\underline{T}^{(2),K}_L(s^{1}_{1}, a)}\\bigg]\\E[T_L^K(s^1_1)] \\leq \\sum_a\\bigg(\\left[\\frac{B(s^1_1)}{T_L^K(s^1_1)}+ 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T_L^{(\\nicefrac{3}{2}),K}(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)}+\\frac{4 A B(s^1_1)}{T^{(2),K}_L(s^1_1)-1}\\right]\\nonumber\\\\\n &\\qquad + \\gamma\\pi(a|s^1_1)\\sum_{s^2_j} P(s^2_j|s^1_1,a)\\sum\\limits_{a'}\\left[\\frac{B(s^{2}_j)}{T_L^{K}(s^2_j)}+ 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T_L^{(\\nicefrac{3}{2}),K}(s^2_j) b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)}+\\frac{4 A B(s^{2}_j)}{T^{(2),L}_K(s^2_j)-1}\\right]\\bigg)^2\\E[T^K_L(s^1_1,a)]\\\\\n\n &\\overset{(a)}{\\leq} 2\\bigg(\\left[\\frac{B(s^1_1)}{T_L^{K}(s^1_1)}+ 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T_L^{(\\nicefrac{3}{2}),K}(s^1_1) b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)}+\\frac{4 A 
B(s^1_1)}{T^{(2),K}_L(s^1_1)-1}\\right]\\bigg)^2\\sum_{a}\\E[T^K_L(s^1_1,a)]\\nonumber\\\\\n &\\qquad + 2\\bigg(\\gamma\\pi(a|s^1_1)\\sum_{s^2_j} P(s^2_j|s^1_1,a)\\sum\\limits_{a'}\\left[\\frac{B(s^{2}_j)}{T_L^K(s^2_j)}+ 4d c \\frac{\\sqrt{\\log (H \/ \\delta)}}{T_L^{(\\nicefrac{3}{2}),K}(s^2_j) b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)}+\\frac{4 A B(s^{2}_j)}{T^{(2),K}_L(s^2_j)-1}\\right]\\bigg)^2\\sum_{a}\\E[T^K_L(s^1_1,a)]\\\\\n &\\overset{(b)}{\\leq} \\widetilde{O}\\left(\\dfrac{B^2(s^1_1)\\sqrt{\\log(H\/\\delta)}}{n^{\\nicefrac{3}{2}}b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)} + \\gamma\\max_{s^2_j,a}\\pi(a|s^1_1)P(s^2_j|s^1_1,a)\\dfrac{B^2(s^2_j)\\sqrt{\\log(H\/\\delta)}}{n^{\\nicefrac{3}{2}}b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)} \\right)\\\\\n &\\overset{(c)}{=} \\widetilde{O}\\left(\\dfrac{B^2(s^1_1)\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b^{*,\\nicefrac{3}{2}}_{\\min}(s^1_1)} + \\gamma\\max_{s^2_j,a}\\pi(a|s^1_1)P(s^2_j|s^1_1,a)\\dfrac{B^2(s^2_j)\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b^{*,\\nicefrac{3}{2}}_{\\min}(s^2_j)} \\right)\n\\end{align*}\nwhere, $(a)$ follows as $(a+b)^2\\leq 2(a^2+b^2)$ for any $a,b>0$, in $(b)$ we have $T^K_L(s^1_1) = n$, and the $\\widetilde{O}$ hides other lower order terms resulting from the expansion of the squared terms, and $(c)$ follows by setting $\\delta = n^{-7\/2}$ and using $H = SAn(n+1)$ (so that $H\/\\delta = SAn(n+1)n^{\\nicefrac{7}{2}} \\leq 2SAn^{\\nicefrac{11}{2}}$).\n\\end{proof}\n\n\n\n\\section{Regret for a Deterministic $L$-Depth Tree}\n\\label{app:regret-det-tree-L}\n\n\n\n\\begin{customtheorem}{2}\nLet the total budget be $n=KL$ and $n \\geq 4SA$. Then the total regret in a deterministic $L$-depth $\\T$ at the end of the $K$-th episode when taking actions according to \\eqref{eq:tree-bhat} is given by\n\\begin{align*}\n \\mathcal{R}_n &\\leq \\widetilde{O}\\left(\\dfrac{B^2_{s^{1}_1}\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b^{*,\\nicefrac{3}{2}}_{\\min}(s^{1}_1)} \\right.\n \\left. 
+ \\gamma\\sum_{\\ell=2}^L\\max_{s^{\\ell}_j,a}\\pi(a|s^{1}_1)P(s^{\\ell}_j|s^{1}_1,a)\\dfrac{B^2_{s^{\\ell}_j}\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b^{*,\\nicefrac{3}{2}}_{\\min}(s^{\\ell}_j)} \\right)\n\\end{align*}\nwhere, the $\\widetilde{O}$ hides other lower order terms, $B_{s^\\ell_i}$ is defined in \\eqref{eq:B-def}, and $b^*_{\\min}(s)= \\min_{a}b^*(a|s)$.\n\\end{customtheorem}\n\n\n\\begin{proof}\n\nThe proof follows directly by using \\Cref{lemma:bound-samples-level2}, \\Cref{lemma:bound-samples-level1}, and \\Cref{lemma:regret-two-level}. \n\n\\textbf{Step 1 ($T^k_{t}(s^{\\ell}_i,a)$ is a stopping time):} This step is the same as in \\Cref{lemma:regret-two-level}, as all the arguments hold true even for the $L$-depth deterministic tree. \n\n\\textbf{Step 2 (MSE decomposition):} Given the dataset $\\D$ of $K$ episodes each of trajectory length $L$, the MSE of the algorithm is \n\\begin{align*}\n\\L_{n} &= \\E_{\\D}\\left[\\left(Y_n(s^1_1) -v^{\\pi}_{}(s^1_1)\\right)^{2}\\right] = \\E_{\\D}\\left[\\left(Y_n(s^1_1) -v^{\\pi}_{}(s^1_1)\\right)^{2} \\mathbb{I}\\{\\xi_\\delta\\}\\right]+ \\E_{\\D}\\left[\\left(Y_n(s^1_1) -v^{\\pi}_{}(s^1_1)\\right)^{2} \\mathbb{I}\\left\\{\\xi^{C}_\\delta\\right\\}\\right]\n\\end{align*}\nwhere, $n = KL$ is the total budget. Using \\Cref{lemma:regret-two-level} we can upper bound the second term as $O\\left(n^{-\\nicefrac{3}{2}}\\log(n)\\right)$. 
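Before the first term is bounded, the optimal allocation used throughout the lemmas, $T^{*}(a) \propto \sqrt{\pi^2(a|s)\sigma^2(s,a)}$ with normalizer $B(s)$, can be checked numerically on a single state (illustrative only; the policy and per-action standard deviations below are made-up numbers):

```python
# Illustrative only: optimal per-action allocation T*(a) = sqrt(pi^2 * sigma^2)/B * n
# for a single state, with B = sum_a sqrt(pi^2(a) * sigma^2(a)).
pi = [0.5, 0.3, 0.2]        # pi(a|s), hypothetical
sigma = [1.0, 2.0, 0.5]     # sigma(s, a), hypothetical
n = 10_000

weights = [p * s for p, s in zip(pi, sigma)]   # sqrt(pi^2 sigma^2) = pi * sigma
B = sum(weights)
T_star = [w / B * n for w in weights]          # optimal sample counts, sum to n

# Resulting value-estimate variance sum_a pi^2 sigma^2 / T*(a) equals B^2 / n,
# and never exceeds the variance of a uniform allocation (Cauchy-Schwarz).
var_opt = sum(w * w / t for w, t in zip(weights, T_star))
var_uniform = sum(w * w / (n / len(pi)) for w in weights)
assert abs(var_opt - B * B / n) < 1e-12
assert var_opt <= var_uniform
```

This is where the leading $B^2(s)$ factor in the regret bound comes from: the best achievable per-state variance is exactly $B^2(s)\/n$.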
\nUsing the definition of $Y_n(s^1_1)$ and \\Cref{prop:wald-variance} we bound the first term as\n\\begin{align}\n\\E_{\\D}&\\left[\\left(Y_n(s^1_1) -v^{\\pi}_{}(s^1_1)\\right)^{2} \\mathbb{I}\\{\\xi_\\delta\\}\\right] \\overset{(a)}{=} \\Var[Y_{n}(s^{1}_{1})]\\E[T^K_L(s^1_1)]\\nonumber\\\\\n&\\overset{(b)}{\\leq} \\sum_a\\pi^2(a|s^{1}_{1}) \\bigg[\\dfrac{ \\sigma^2(s^{1}_{1}, a)}{\\underline{T}^{(2),K}_L(s^{1}_{1}, a)}\\bigg]\\E[T^K_L(s^1_1,a)] + \\gamma^2\\sum_{a}\\pi^2(a|s^{1}_{1})\\sum_{s^{2}_j}P(s^{2}_j|s^1_1,a)\\Var[Y_{n}(s^{2}_{j})]\\E[T^K_L(s^2_j)] \\nonumber\\\\\n&\\overset{(c)}{\\leq} \\sum_a\\pi^2(a|s^{1}_{1}) \\bigg[\\dfrac{ \\sigma^2(s^{1}_{1}, a)}{\\underline{T}^{(2),K}_L(s^{1}_{1}, a)}\\bigg]\\E[T^K_L(s^1_1,a)] \\nonumber\\\\\n&\\qquad+ \\gamma^2\\sum_{a}\\pi^2(a|s^{1}_{1})\\sum_{\\ell=2}^L\\sum_{s^{\\ell}_j}P(s^{\\ell}_j|s^1_1,a)\\sum_{a'}\\pi^2(a'|s^\\ell_j)\\bigg[\\dfrac{ \\sigma^2(s^{\\ell}_{j}, a')}{\\underline{T}^{(2),K}_L(s^{\\ell}_{j}, a')}\\bigg]\\E[T^K_L(s^\\ell_j,a')] \\label{eq:loss-levelL} \n\\end{align}\nwhere, $(a)$ follows from \\Cref{prop:wald-variance}, $(b)$ follows by unrolling the variance for $Y_n(s^1_1)$, and $\\underline{T}_{n}(s^{\\ell}_i,a)$ is the lower bound on $T^K_{L}(s^{\\ell}_i,a)$ on the event $\\xi_\\delta$. Finally, $(c)$ follows by unrolling the variance for all the states till level $L$ and taking the lower bound of $\\underline{T}_{n}(s^{\\ell}_i,a)$ for each state-action pair. \n\n\\textbf{Step 3 (MSE at level $L$):} Now we want to upper bound the total MSE in \\eqref{eq:loss-levelL}. 
Using \\cref{eq:upper-s2i} in \\Cref{lemma:bound-samples-level2} we can directly get the MSE upper bound for a state $s^L_i$ as\n\\begin{align*}\n \\sum_{a'}\\pi^2(a'|s^L_i)\\bigg[\\dfrac{ \\sigma^2(s^{L}_{i}, a')}{\\underline{T}^{(2),K}_L(s^{L}_{i}, a')}\\bigg]\\E[T^K_L(s^L_i,a')]\\leq \\widetilde{O}\\left(\\dfrac{B^2_{s^L_i}\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b_{\\min}(s^L_i)}\\right).\n\\end{align*}\n\n\\textbf{Step 4 (MSE at level $L-1$):} This step follows directly from \\cref{eq:upper-s2i-level1} in \\Cref{lemma:bound-samples-level1}. We can get the loss upper bound for a state $s^{L-1}_i$ (which takes into account the loss at level $L$ as well) as follows:\n\\begin{align*}\n \\sum_{a'}\\bigg(\\dfrac{ b^*(a'|s^{L-1}_i)}{\\underline{T}^{(2),K}_L(s^{L-1}_{i}, a')}\\bigg)\\E[T^K_L(s^{L-1}_i,a')]\\leq \\widetilde{O}\\left(\\dfrac{B^2_{s^{L-1}_i}\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b_{\\min}(s^{L-1}_i)} + \\gamma\\max_{s^L_j,a}\\pi(a|s^{L-1}_i)P(s^L_j|s^{L-1}_i,a)\\dfrac{B^2_{s^L_j}\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b_{\\min}(s^L_j)} \\right).\n\\end{align*}\n\n\n\\textbf{Step 5 (MSE at arbitrary level $\\ell$):} This step follows by combining the results of Steps 3 and 4 iteratively from states in level $\\ell$ to $L$ under the good event $\\xi_{\\delta}$. 
We can get the MSE upper bound for a state $s^{\\ell}_i$ as\n\\begin{align*}\n \\sum_{a'}\\bigg(\\dfrac{ b^*(a'|s^{\\ell}_i)}{\\underline{T}^{(2),K}_L(s^{\\ell}_{i}, a')}\\bigg)\\E[T^K_L(s^{\\ell}_i,a')]\\leq \\widetilde{O}\\left(\\dfrac{B^2_{s^{\\ell}_i}\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b_{\\min}(s^{\\ell}_i)} + \\gamma\\sum_{\\ell'=\\ell+1}^L\\max_{s^{\\ell'}_j,a}\\pi(a|s^{\\ell'-1}_i)P(s^{\\ell'}_j|s^{\\ell'-1}_i,a)\\dfrac{B^2_{s^{\\ell'}_j}\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b_{\\min}(s^{\\ell'}_j)} \\right).\n\\end{align*}\n\n\\textbf{Step 6 (Regret at level $1$):} Finally, combining all the steps above we get the regret upper bound for the state $s^{1}_1$ as follows\n\\begin{align*}\n\\mathcal{R}_n = \\L_n - \\L^*_n \\leq \n \\widetilde{O}\\left(\\dfrac{B^2(s^{1}_1)\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b^{*,\\nicefrac{3}{2}}_{\\min}(s^{1}_1)} + \\gamma\\sum_{\\ell=2}^L\\max_{s^{\\ell}_j,a}\\pi(a|s^{1}_1)P(s^{\\ell}_j|s^{1}_1,a)\\dfrac{B^2(s^{\\ell}_j)\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b^{*,\\nicefrac{3}{2}}_{\\min}(s^{\\ell}_j)} \\right).\n\\end{align*}\n\n\n\\end{proof}\n\n\n\n\n\\section{DAG Optimal Sampling}\n\\label{app:dag-sampling}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale = 0.27]{img\/DAG_3_Level_2_actions.png}\n \\caption{A $3$-depth $2$-Action DAG}\n \\label{fig:dag-mdp}\n\\end{figure}\n\n\n\\begin{customproposition}{3}\\textbf{(Restatement)}\nLet $\\G$ be a $3$-depth, $A$-action DAG defined in \\Cref{def:dag-mdp}. 
The minimal-MSE sampling proportions $b^*(a|s^1_1)$ and $b^*(a|s^2_j)$ are defined only implicitly: $b(a|s^1_1) \\propto f(1\/b(a|s^1_1))$ and $b(a|s^2_j) \\propto f(1\/b(a|s^2_j))$, where $f(\\cdot)$ hides further dependencies on the variances of $s$ and of its children.\n\\end{customproposition}\n\n\n\\begin{proof}\n\\textbf{Step 1 (Level $3$):} For an arbitrary state $s^3_i$ \nwe can calculate the expectation and variance of $Y_n(s^{3}_i)$ as follows:\n\\begin{align*}\n \\E[Y_n(s^{3}_i)] &= \\sum_{a}\\dfrac{\\pi(a|s^{3}_i)}{T_n(s^{3}_i, a)} \\sum_{h=1}^{T_n(s^{3}_i, a)} \\E[ R_h(s^{3}_i, a)] = \\sum_{a} \\pi(a|s^{3}_i) \\mu(s^{3}_i, a)\\\\\n \\Var[Y_{n}(s^{3}_i)] &= \\sum_{a}\\dfrac{\\pi^2(a|s^{3}_i)}{T_n^2(s^{3}_i, a)} \\sum_{h=1}^{T_n(s^{3}_i, a)} \\Var[ R_h(s^{3}_i, a)] = \\sum_{a}\\dfrac{\\pi^2(a|s^{3}_i)}{T_n(s^{3}_i, a)} \\sigma^2(s^{3}_i, a).\n\\end{align*}\n\n\\textbf{Step 2 (Level $2$):} For an arbitrary state $s^{2}_i$ \nwe can calculate the expectation and variance of $Y_{n}(s^{2}_i)$ as follows:\n\\begin{align*}\n \\E[Y_{n}(s^{2}_i)] &= \\sum_{a}\\dfrac{\\pi(a|s^{2}_i)}{T_n(s^{2}_i, a)} \\sum_{h=1}^{T_n(s^{2}_i, a)} \\E[ R_h(s^{2}_i, a)] + \\gamma\\sum_{a}\\pi(a|s^{2}_i)\\sum_{s^{3}_j}P(s^{3}_j|s^2_i, a)\\sum_{a'}\\dfrac{\\pi(a'|s^{3}_j)}{T_n(s^{3}_j, a')}\\sum_{h=1}^{T_n(s^{3}_j, a')}\\E[R_h(s^{3}_j, a')] \\\\\n &= \\sum_{a} \\pi(a|s^{2}_i) \\left(\\mu(s^{2}_i, a) + \\gamma \\sum_{s^{3}_j}P(s^3_j|s^2_i,a)\\E[Y_{n}(s^{3}_j)]\\right)\\\\\n \\Var[Y_{n}(s^{2}_i)] &= \\sum_{a}\\dfrac{\\pi^2(a|s^{2}_i)}{T_n^2(s^{2}_i, a)} \\sum_{h=1}^{T_n(s^{2}_i, a)} \\Var[ R_h(s^{2}_i, a)] + \\gamma^2\\sum_{a}\\pi^2(a|s^{2}_i)\\sum_{s^{3}_j}P(s^3_j|s^2_i,a)\\sum_{a'}\\dfrac{\\pi^2(a'|s^{3}_j)}{T_n^2(s^{3}_j, a')}\\sum_{h=1}^{T_n(s^{3}_j, a')}\\Var[R_h(s^{3}_j, a')]\\\\\n &= \\sum_{a}\\dfrac{\\pi^2(a|s^{2}_i)}{T_n(s^{2}_i, a)} \\left(\\sigma^2(s^{2}_i, a) + \\gamma^2\\sum_{s^{3}_j}P(s^3_j|s^2_i,a)\\Var[Y_{n}(s^{3}_j)]\\right)\n\\end{align*}\n\n\\textbf{Step 3 (Level 1):} Finally for the state $s^{1}_1$ \nwe can calculate the expectation and variance of $Y_{n}(s^{1}_1)$ as follows:\n\\begin{align*}\n \\E[Y_{n}(s^{1}_1)] &= \\sum_{a}\\dfrac{\\pi(a|s^{1}_1)}{T_n(s^{1}_1, a)} \\sum_{h=1}^{T_n(s^{1}_1, a)} \\E[ R_h(s^{1}_1, a)] + \\gamma\\sum_{a}\\pi(a|s^{1}_1)\\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\\sum_{a'}\\dfrac{\\pi(a'|s^{2}_j)}{T_n(s^{2}_j, a')}\\sum_{h=1}^{T_n(s^{2}_j, a')}\\E[R_h(s^{2}_j, a')] \\\\\n &= \\sum_{a} \\pi(a|s^{1}_1) \\left(\\mu(s^{1}_1, a) + \\gamma \\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\\E[Y_{n}(s^{2}_j)]\\right)\\\\\n \\Var[Y_{n}(s^{1}_1)] &= \\sum_{a}\\dfrac{\\pi^2(a|s^{1}_1)}{T_n^2(s^{1}_1, a)} \\sum_{h=1}^{T_n(s^{1}_1, a)} \\Var[ R_h(s^{1}_1, a)] + \\gamma^2\\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\\sum_{a'}\\dfrac{\\pi^2(a'|s^{2}_j)}{T_n^2(s^{2}_j, a')}\\sum_{h=1}^{T_n(s^{2}_j, a')}\\Var[R_h(s^{2}_j, a')]\\\\\n &= \\sum_{a}\\dfrac{\\pi^2(a|s^{1}_1)}{T_n(s^{1}_1, a)} \\left(\\sigma^2(s^{1}_1, a) + \\gamma^2\\sum_{s^{2}_j}P(s^2_j|s^1_1,a)\\Var[Y_{n}(s^{2}_j)]\\right)\n\\end{align*}\nUnrolling the recursion above we obtain:\n\\begin{align}\n \\Var[Y_{n}(s^{1}_1)] &= \\sum_{a}\\dfrac{\\pi^2(a|s^{1}_1)\\sigma^2(s^{1}_1, a)}{T_n(s^{1}_1, a)} + \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\dfrac{\\pi^2(a'|s^{2}_j)\\sigma^2(s^{2}_j, a')}{T_n(s^2_j, a')} \\nonumber\\\\\n &\\qquad+ \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\pi^2(a'|s^{2}_j)\\sum_{s^3_m}\\sum_{a^{''}}\\dfrac{\\pi^2(a^{''}|s^{3}_m)\\sigma^2(s^{3}_m, a^{''})}{T_n(s^3_m, a^{''})}\\label{eq:dag-var}\n\\end{align}\n\nEvery sample follows a path $s^{1}_1 \\overset{a}{\\rightarrow} s^{2}_j \\overset{a'}{\\rightarrow} s^{3}_m \\overset{a^{''}}{\\rightarrow}\\text{Terminate}$ for some $a, a', a''\\in\\A$ and $j,m \\in \\{1,2,\\ldots,A\\}$. 
Hence we have the following constraints:\n\\begin{align}\n & \\sum_a T_n(s^{1}_1, a) = n\\label{eq:dag1}\\\\\n \n &\\sum_a T_n(s^{2}_i, a) \\overset{(a)}{=} \\sum_{a}P(s^2_i|s^1_1,a)T_n(s^{1}_1, a)\\label{eq:dag2}\\\\\n \n \n \n \n & \\sum_a T_n(s^{3}_i, a)\\overset{(b)}{=} \\sum_{s^2_j}\\sum_{a'}P(s^3_i|s^2_j,a')T_n(s^{2}_j, a')\\label{eq:dag4}\n \n \n \n \n \n \n\\end{align}\nObserve that in $(a)$, in the deterministic case, $\\sum_{a}P(s^2_i|s^1_1,a)T_n(s^{1}_1, a)$ counts all the possible paths from $s^1_1$ to $s^2_i$ taken over the $n$ samples, for any action $a$. Similarly, in $(b)$, in the deterministic case, $\\sum_{s^2_j}\\sum_{a'}P(s^3_i|s^2_j,a')T_n(s^{2}_j, a')$ counts all the possible paths from $s^2_j$ to $s^3_i$ taken over the $n$ samples, for any action $a'$.\n\n\\textbf{Step 4 (Formulate objective):} We want to minimize the variance in \\eqref{eq:dag-var} subject to the above constraints. We can show that\n\\begin{align}\nT_n(s^{1}_1, a)\/n &= b(a|s^{1}_1).\\label{eq:dag6}\\\\\n \\text{and},\\qquad b(a|s^2_i) &= \\frac{T_n(s^{2}_i, a)}{\\sum_{a'} T_n(s^{2}_i, a')} = \\frac{T_n(s^{2}_i, a)}{\\sum_{a'}P(s^2_i|s^1_1,a')T_n(s^{1}_1, a')} \\overset{(a)}{=} \\frac{T_n(s^{2}_i, a)\/n}{\\sum_{a'}P(s^2_i|s^1_1,a')T_n(s^{1}_1, a')\/n}\\nonumber\\\\\n \\implies & T_n(s^{2}_i, a)\/n \\overset{(b)}{=} \n b(a|s^{2}_i) \\sum_{a'}P(s^2_i|s^1_1,a')b(a'|s^{1}_1),\n\\end{align}\nwhere, $(a)$ follows from \\eqref{eq:dag2}, and $(b)$ follows from \\eqref{eq:dag6} and taking into account all the possible paths to reach $s^2_i$ from $s^1_1$. 
\nFor the third level we can show that the proportion \n\\begin{align*}\n b(a|s^{3}_i) &= \\frac{T_n(s^{3}_i, a)}{\\sum_{a'} T_n(s^{3}_i, a')} \n \n \\overset{(a)}{=} \\frac{T_n(s^{3}_i, a)}{\\sum_{s^2_j}\\sum_{a'}P(s^3_i|s^2_j,a')T_n(s^{2}_j, a')}\\\\\n \n \n &\\overset{(b)}{=}\\dfrac{T_n(s^{3}_i, a)}{ \\sum_{s^2_j}\\sum_{a'}P(s^2_j|s^1_1,a')b(a'|s^1_1)\\sum_{a^{''}}P(s^3_i|s^2_j,a^{''})b(a^{''}|s^2_j)}\\\\\n \n \n &\\overset{}{\\implies} T_n(s^{3}_i, a)\/n = b(a|s^{3}_i) \\sum_{s^2_j}\\sum_{a'}P(s^2_j|s^1_1,a')b(a'|s^1_1)\\sum_{a^{''}}P(s^3_i|s^2_j,a^{''})b(a^{''}|s^2_j)\n \n\\end{align*}\nwhere, $(a)$ follows from \\eqref{eq:dag4}, and $(b)$ follows from \\eqref{eq:dag6} and taking into account all the possible paths to reach $s^3_i$ from $s^1_1$. \nAgain note that we use $b(a|s)$ to denote the optimization variable and $b^*(a|s)$ to denote the optimal sampling proportion.\nThen the optimization problem in \\eqref{eq:dag-var} can be restated as,\n\n\\begin{align*}\n \\min_{\\bb} & \\sum_{a}\\dfrac{\\pi^2(a|s^{1}_1)\\sigma^2(s^{1}_1, a)}{b(a|s^{1}_1)} + \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\dfrac{\\pi^2(a'|s^{2}_j)\\sigma^2(s^{2}_j, a')}{b(a'|s^2_j)\\underbrace{\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^{1}_1)}_{\\text{All possible paths to reach $s^2_j$ from $s^1_1$}}} \\nonumber\\\\\n \n &\\qquad+ \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\pi^2(a'|s^{2}_j)\\sum_{s^3_m}\\sum_{a^{''}}\\dfrac{\\pi^2(a^{''}|s^{3}_m)\\sigma^2(s^{3}_m, a^{''})}{b(a^{''}|s^{3}_m)\\underbrace{\\sum_{s^2_j}\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^1_1)\\sum_{a_2}P(s^3_m|s^2_j,a_2)b(a_2|s^2_j)}_{\\text{All possible paths to reach $s^3_m$ from $s^1_1$}}}\\\\\n \n \\quad \\textbf{s.t.} \\quad & \\forall s, \\quad \\sum_a b(a|s) = 1 \\nonumber\\\\\n \n &\\forall s,a \\quad b(a|s) > 0.\n\\end{align*}\nNow, introducing Lagrange multipliers, we get\n\\begin{align}\n L(\\bb,\\lambda) &= \\sum_{a}\\dfrac{\\pi^2(a|s^{1}_1)\\sigma^2(s^{1}_1, a)}{b(a|s^{1}_1)} + 
\\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\dfrac{\\pi^2(a'|s^{2}_j)\\sigma^2(s^{2}_j, a')}{b(a'|s^2_j)\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^{1}_1)} \\nonumber\\\\\n \n &\\qquad+ \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\pi^2(a'|s^{2}_j)\\sum_{s^3_m}\\sum_{a^{''}}\\dfrac{\\pi^2(a^{''}|s^{3}_m)\\sigma^2(s^{3}_m, a^{''})}{b(a^{''}|s^{3}_m)\\sum_{s^2_j}\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^1_1)\\sum_{a_2}P(s^3_m|s^2_j,a_2)b(a_2|s^2_j)}\\nonumber\\\\\n \n &\\qquad + \\sum_s\\lambda_s\\left(\\sum_a b(a|s) - 1\\right). \\label{eq:L-dag-var}\n\\end{align}\nNow we solve the KKT conditions. Differentiating \\eqref{eq:L-dag-var} with respect to $b(a^{''}|s^{3}_m)$, $b(a^{'}|s^{2}_j)$, $b(a|s^{1}_1)$, and $\\lambda_s$ we get\n\n\\begin{align}\n \\nabla_{b(a^{''}|s^{3}_m)} L(\\bb,\\mathbf{\\lambda}) &= -\\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\pi^2(a'|s^{2}_j)\\sum_{s^3_m}\\sum_{a^{''}}\\dfrac{\\pi^2(a^{''}|s^{3}_m)\\sigma^2(s^{3}_m, a^{''})}{b^2(a^{''}|s^{3}_m)\\sum_{s^2_j}\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^1_1)\\sum_{a_2}P(s^3_m|s^2_j,a_2)b(a_2|s^2_j)} \\nonumber\\\\ \n \n &+ \\lambda_{s^3_m}\\label{eq:dag-lambda1}\\\\\n \n \n \\nabla_{b(a^{'}|s^{2}_j)} L(\\bb,\\mathbf{\\lambda}) &= - \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\dfrac{\\pi^2(a'|s^{2}_j)\\sigma^2(s^{2}_j, a')}{b^2(a'|s^2_j)\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^{1}_1)} \\label{eq:dag-lambda2}\\\\\n \n &- \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\pi^2(a'|s^{2}_j)\\sum_{s^3_m}\\sum_{a^{''}}\\dfrac{\\pi^2(a^{''}|s^{3}_m)\\sigma^2(s^{3}_m, a^{''})}{b(a^{''}|s^{3}_m)\\left(\\sum_{s^2_j}\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^1_1)\\sum_{a_2}P(s^3_m|s^2_j,a_2)b(a_2|s^2_j)\\right)^2}\\nonumber\\\\\n \n &+ \\lambda_{s^2_j}\\nonumber\\\\\n \n \n \\nabla_{b(a|s^{1}_1)} L(\\bb,\\mathbf{\\lambda}) &= -\\sum_{a}\\dfrac{\\pi^2(a|s^{1}_1)\\sigma^2(s^{1}_1, a)}{b^2(a|s^{1}_1)} - \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\dfrac{\\pi^2(a'|s^{2}_j)\\sigma^2(s^{2}_j, 
a')}{b(a'|s^2_j)\\left(\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^{1}_1)\\right)^2} \\label{eq:dag-lambda3}\\\\\n \n &- \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\pi^2(a'|s^{2}_j)\\sum_{s^3_m}\\sum_{a^{''}}\\dfrac{\\pi^2(a^{''}|s^{3}_m)\\sigma^2(s^{3}_m, a^{''})}{b(a^{''}|s^{3}_m)\\left(\\sum_{s^2_j}\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^1_1)\\sum_{a_2}P(s^3_m|s^2_j,a_2)b(a_2|s^2_j)\\right)^2}\\nonumber\\\\\n \n &+ \\lambda_{s^1_1}\\nonumber\\\\\n \n \\nabla_{\\lambda_{s}} L(\\bb,\\mathbf{\\lambda}) &= \\sum_a b(a|s) - 1. \\label{eq:dag-lambda4}\n\\end{align}\nNow to remove $\\lambda_{s^3_m}$ from \\eqref{eq:dag-lambda1} we first set \\eqref{eq:dag-lambda1} to $0$ and show that\n\\begin{align}\n \\lambda_{s^3_m} &= \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\pi^2(a'|s^{2}_j)\\sum_{s^3_m}\\sum_{a^{''}}\\dfrac{\\pi^2(a^{''}|s^{3}_m)\\sigma^2(s^{3}_m, a^{''})}{b^2(a^{''}|s^{3}_m)\\left(\\sum_{s^2_j}\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^1_1)\\sum_{a_2}P(s^3_m|s^2_j,a_2)b(a_2|s^2_j)\\right)^2}\\nonumber\\\\\n \n \\implies & b(a^{''}|s^{3}_m) = \\sqrt{\\dfrac{1}{\\lambda_{s^3_m}} \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\pi^2(a'|s^{2}_j)\\sum_{s^3_m}\\sum_{a^{''}}\\dfrac{\\pi^2(a^{''}|s^{3}_m)\\sigma^2(s^{3}_m, a^{''})}{\\left(\\sum_{s^2_j}\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^1_1)\\sum_{a_2}P(s^3_m|s^2_j,a_2)b(a_2|s^2_j)\\right)^2} }. 
\\label{eq:dag-level-31}\n\\end{align}\nThen setting \\eqref{eq:dag-lambda4} to $0$ we have\n\\begin{align}\n &\\sum_{a^{''}}\\sqrt{\\dfrac{1}{\\lambda_{s^3_m}} \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\pi^2(a'|s^{2}_j)\\sum_{s^3_m}\\sum_{a^{''}}\\dfrac{\\pi^2(a^{''}|s^{3}_m)\\sigma^2(s^{3}_m, a^{''})}{\\left(\\sum_{s^2_j}\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^1_1)\\sum_{a_2}P(s^3_m|s^2_j,a_2)b(a_2|s^2_j)\\right)^2} } = 1\\nonumber\\\\\n \n \\implies & \\lambda_{s^3_m} =\\sum_{a^{''}}\\sqrt{ \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\pi^2(a'|s^{2}_j)\\sum_{s^3_m}\\sum_{a^{''}}\\dfrac{\\pi^2(a^{''}|s^{3}_m)\\sigma^2(s^{3}_m, a^{''})}{\\left(\\sum_{s^2_j}\\sum_{a_1}P(s^2_j|s^1_1,a_1)b(a_1|s^1_1)\\sum_{a_2}P(s^3_m|s^2_j,a_2)b(a_2|s^2_j)\\right)^2} } \\label{eq:dag-level-32}\n\\end{align}\nUsing \\eqref{eq:dag-level-31} and \\eqref{eq:dag-level-32} we can show that the optimal sampling proportion is given by\n\\begin{align*}\n b^*(a^{''}|s^{3}_m) = \\dfrac{\\pi(a^{''}|s^{3}_m)\\sigma(s^{3}_m, a^{''})}{\\sum_{a}\\pi(a^{}|s^{3}_m)\\sigma(s^{3}_m, a^{})}.\n\\end{align*}\nSimilarly, setting \\eqref{eq:dag-lambda2} and \\eqref{eq:dag-lambda4} to $0$ and removing $\\lambda_{s^2_j}$ we can show that \n\\begin{align*}\n b^{*, (2)}(a'|s^2_j) & \\propto \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\dfrac{\\pi^2(a'|s^{2}_j)\\sigma^2(s^{2}_j, a')}{\\sum_{a_1}P(s^2_j|s^1_1,a_1)b^*(a_1|s^{1}_1)} \\\\\n \n & + \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\pi^2(a'|s^{2}_j)\\sum_{s^3_m}\\sum_{a^{''}}\\dfrac{\\pi^2(a^{''}|s^{3}_m)\\sigma^2(s^{3}_m, a^{''})}{b^*(a^{''}|s^{3}_m)\\left(\\dfrac{\\sum_{s^2_j}\\sum_{a_1}P(s^2_j|s^1_1,a_1)b^*(a_1|s^1_1)\\sum_{a_2}P(s^3_m|s^2_j,a_2)b^*(a_2|s^2_j)}{b^*(a'|s^2_j)}\\right)^2}\n\\end{align*}\nFinally, setting \\eqref{eq:dag-lambda3} and \\eqref{eq:dag-lambda4} to $0$ and removing $\\lambda_{s^1_1}$ we have\n\\begin{align*}\n b^{*, (2)}(a|s^1_1) \\propto & \\sum_{a}\\pi^2(a|s^{1}_1)\\sigma^2(s^{1}_1, a) 
+ \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\dfrac{\\pi^2(a'|s^{2}_j)\\sigma^2(s^{2}_j, a')}{b^*(a'|s^2_j)} \\\\\n \n & + \\sum_{a}\\pi^2(a|s^{1}_1)\\sum_{s^2_j}\\sum_{a'}\\pi^2(a'|s^{2}_j)\\sum_{s^3_m}\\sum_{a^{''}}\\dfrac{\\pi^2(a^{''}|s^{3}_m)\\sigma^2(s^{3}_m, a^{''})}{b^*(a^{''}|s^{3}_m)\\left(\\dfrac{\\sum_{s^2_j}\\sum_{a_1}P(s^2_j|s^1_1,a_1)b^*(a_1|s^1_1)\\sum_{a_2}P(s^3_m|s^2_j,a_2)b^*(a_2|s^2_j)}{b^*(a|s^1_1)}\\right)^2}\n\\end{align*}\nThis shows the cyclical dependency between $b^*(a|s^1_1)$ and $b^*(a|s^2_j)$. \n\\end{proof}\n\n\n\\section{Additional Experimental Details}\n\\label{app:expt-details}\n\n\\subsection{Estimate $B$ in DAG}\n\nRecall that in a DAG $\\G$ we have a cyclical dependency following \\Cref{prop:dag}. Hence, we approximate the optimal sampling proportion in $\\G$ using the tree formulation from \\Cref{thm:L-step-tree}. However, since there are multiple paths to the same state in $\\G$ we have to iteratively compute the normalization factor $B$. To do this we use \\Cref{alg:revar-g}.\n\n\\begin{algorithm}\n\\caption{Estimate $B_0(s)$ for $\\G$}\n\\label{alg:revar-g}\n\\begin{algorithmic}[1]\n\\State Initialize $B_L(s)=0$ for all $s\\in\\S$ \n\\For{$t' = L-1, \\ldots, 0$} \n\\State $B_{t'}(s) = \\sum\\limits_{a} \\sqrt{\\pi^2(a|s)\\!\\left(\\sigma^2(s,\\! a) + \\gamma^2\\sum\\limits_{s'}P(s'|s, a) B_{t'+1}^2(s')\\right)}$\n\\EndFor\n\\State \\textbf{Return} $B_0$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\subsection{Implementation Details}\nIn this section we state additional experimental details. We implement the following competitive baselines:\n\n\\textbf{(1)} \\onp: The \\onp samples actions at each state according to the target policy probabilities.\n\n\\textbf{(2)} \\cbvar: This is a bandit policy which samples an action based only on the statistics of the current state. 
At every time $t+1$ in episode $k$, \\cbvar\\ samples an action\n\\begin{align*}\n I^k_{t+1} = \\argmax_{a\\in\\A}(2\\eta + 4\\eta^2)\\sqrt{\\dfrac{2\\pi(a|s)\\wsigma^{(2),k}_t(s,a)\\log(SAn(n+1))}{T^k_t(s,a)}} + \\dfrac{7\\log(SAn(n+1))}{3T^k_t(s,a)}\n\\end{align*}\nwhere, $n$ is the total budget. This policy is similar to UCB-variance of \\citet{audibert2009exploration} and uses the empirical Bernstein inequality \\citep{maurer2009empirical}. However, we do not use the mean estimate $\\wmu^k_t(s,a)$ of an action, as this forces \\cbvar\\ to explore continuously rather than maximize rewards. Also note that, to have a fair comparison with \\rev, we use the larger constant $(2\\eta+4\\eta^2)$ and the $\\log(SAn(n+1))$ term instead of just $2$ and $\\log t$.\n\n\n\n\\subsection{Ablation study}\n\nIn this experiment we show an ablation study of different values of the upper confidence bound constant associated with $\\usigma^k_t(s,a)$. Recall from \\eqref{eq:tree-ucb} that\n\\begin{align*}\n \\usigma^{k}_{t}(s^{\\ell}_{i},a) \\!\\coloneqq\\! \\wsigma^k_{t}(s^{\\ell}_{i},a) \\!+\\! 2c\\sqrt{\\dfrac{\\log(SAn(n\\!+\\!1)\/\\delta)}{T^k_{t}( s^{\\ell}_{i},a)}}\n\\end{align*}\nwhere, $c$ is the upper confidence bound constant, and $n=KL$. From \\Cref{thm:regret-rev} we know that the theoretically correct constant is $2\\eta + 4\\eta^2$. However, since our upper bound is loose because of union bounds over states, actions, episodes and horizon, we ablate the value of $c$ to see its impact on \\rev. From \\Cref{fig:ablation} we see that with too large a value, $c=10$, we end up doing too much exploration rather than focusing on the state-action pairs that reduce variance. Conversely, with too small values of $c \\in \\{0, 0.1\\}$ we end up doing too little exploration and have very poor plug-in estimates of the variance. Consequently, this increases the MSE of \\rev. 
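To make the role of $c$ concrete, here is a minimal Python sketch of the upper confidence bound on the standard deviation recalled above. The function name and argument layout are our own illustrative assumptions, not the paper's implementation:

```python
import math

def sigma_ucb(sigma_hat, count, c, S, A, n, delta=0.05):
    """Plug-in standard deviation plus an exploration bonus of
    2c * sqrt(log(S*A*n*(n+1)/delta) / count), shrinking as the
    state-action count grows."""
    bonus = 2 * c * math.sqrt(math.log(S * A * n * (n + 1) / delta) / count)
    return sigma_hat + bonus
```

With $c=0$ the bound collapses to the plug-in estimate (no exploration), while large $c$ inflates every bonus and pushes sampling toward uniform exploration, matching the two failure modes seen in the ablation.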
The $c=1$ seems to do relatively well against all the other choices.\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.55]{img\/ablation.png}\n \\caption{Ablation study of UCB constant}\n \\label{fig:ablation}\n\\end{figure}\n\n\n\n\n\n\n\\newpage\n\\section{Table of Notations}\n\\label{table-notations}\n\n\\begin{table}[!tbh]\n \\centering\n \\begin{tabular}{|p{10em}|p{33em}|}\n \\hline\\textbf{Notations} & \\textbf{Definition} \\\\\\hline\n \n $s^{\\ell}_i$ & State $s$ in level $\\ell$ indexed by $i$ \\\\\\hline\n \n $\\pi(a|s^{\\ell}_i)$ & Target policy probability for action $a$ in $s^{\\ell}_i$ \\\\\\hline\n \n $b(a|s^{\\ell}_i)$ & Behavior policy probability for action $a$ in $s^{\\ell}_i$ \\\\\\hline\n \n \n \n $\\sigma^2(s^{\\ell}_i, a)$ & Variance of action $a$ in $s^{\\ell}_i$ \\\\\\hline\n \n $\\wsigma^{(2),k}_{t}(s^{\\ell}_i, a)$ & Empirical variance of action $a$ in $s^{\\ell}_i$ at time $t$ in episode $k$\\\\\\hline\n \n $\\usigma^{(2),k}_{t}(s^{\\ell}_i, a)$ & UCB on variance of action $a$ in $s^{\\ell}_i$ at time $t$ in episode $k$\\\\\\hline\n \n $\\mu(s^{\\ell}_i, a)$ & Mean of action $a$ in $s^{\\ell}_i$\\\\\\hline\n \n $\\wmu^{k}_{t}(s^{\\ell}_i, a)$ & Empirical mean of action $a$ in $s^{\\ell}_i$ at time $t$ in episode $k$\\\\\\hline\n \n $\\mu^{2}(s^{\\ell}_i, a)$ & Square of mean of action $a$ in $s^{\\ell}_i$\\\\\\hline\n \n $\\wmu^{(2),k}_{t}(s^{\\ell}_i, a)$ & Square of empirical mean of action $a$ in $s^{\\ell}_i$ at time $t$ in episode $k$\\\\\\hline\n \n $T_n(s^{\\ell}_i, a)$ & Total Samples of action $a$ in $s^{\\ell}_i$ after $n$ timesteps\\\\\\hline\n \n $T_n(s^{\\ell}_i)$ & Total samples of actions in $s^{\\ell}_i$ as $\\sum_a T_n(s^{\\ell}_i, a)$ after $n$ timesteps (State count)\\\\\\hline\n \n $T^k_t(s^{\\ell}_i,a)$ & Total samples of action $a$ taken till episode $k$ time $t$ in $s^{\\ell}_i$\\\\\\hline\n \n $T^k_t(s^{\\ell}_i,a,s^{\\ell+1}_j)$ & Total samples of action $a$ taken till episode $k$ time $t$ in 
$s^{\\ell}_i$ to transition to $s^{\\ell+1}_j$\\\\\\hline\n \n $P(s^{\\ell+1}_j|s^{\\ell}_i,a)$ & Transition probability of taking action $a$ in state $s^{\\ell}_i$ and transition to state $s^{\\ell+1}_j$\\\\\\hline\n \n $\\wP^k_t(s^{\\ell+1}_j|s^{\\ell}_i,a)$ & Empirical transition probability of taking action $a$ in state $s^{\\ell}_i$ and moving to state $s^{\\ell+1}_j$ at time $t$ episode $k$\\\\\\hline\n \n $\\wP^{(2),k}_{t}(s^{\\ell+1}_j|s^{\\ell}_i,a)$ & Empirical square of transition probability of taking action $a$ in state $s^{\\ell}_i$ and moving to state $s^{\\ell+1}_j$ at time $t$ episode $k$\\\\\\hline\n \n & $ \\sum_a\\sqrt{\\pi^2(a|s^{\\ell}_{i})\\sigma^2(s^{\\ell}_{i}, a)}, \\text{ if } \\ell = L$\\\\ \n \n $B(s^{\\ell}_{i})\\coloneqq\\begin{cases}\\vspace{3em}\\end{cases}$ & $\\sum_a \\sqrt{\\sum\\limits_{s^{\\ell+1}_j}\\pi^2(a|s^{\\ell}_{i})\\left(\\sigma^2(s^{\\ell}_{i}, a) + \\gamma^2P(s^{\\ell+1}_j|s^{\\ell}_i, a) B^2(s^{\\ell+1}_{j})\\right)}, \\text{ if } \\ell \\!\\!\\neq\\!\\! L$\\\\\\hline\n \n & $ \\sum_a\\sqrt{\\pi^2(a|s^{\\ell}_{i})\\wsigma^{(2),k}_t(s^{\\ell}_{i}, a)}, \\text{ if } \\ell = L$\\\\ \n \n $\\wB(s^{\\ell}_{i})\\coloneqq\\begin{cases}\\vspace{3em}\\end{cases}$ & $\\sum_a \\sqrt{\\sum\\limits_{s^{\\ell+1}_j}\\pi^2(a|s^{\\ell}_{i})\\left(\\wsigma^{(2),k}_t(s^{\\ell}_{i}, a) + \\gamma^2\\wP^{k}_t(s^{\\ell+1}_j|s^{\\ell}_i, a) \\wB^{(2),k}_{t}(s^{\\ell+1}_j)\\right)}, \\text{ if } \\ell \\!\\!\\neq\\!\\! 
L$\\\\\\hline\n \\end{tabular}\n \\vspace{1em}\n \\caption{Table of Notations}\n \\label{tab:my_label}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Background}\n\\label{sec:background}\n\nIn this section, we introduce notation, define the policy evaluation problem, and discuss the prior literature.\n\n\\subsection{Notation}\n\\input{notation}\n\n\n\n\n\n\n\\subsection{Related Work}\n\n\nOur paper builds upon work in the bandit literature for optimal data collection for estimating a weighted sum of the mean reward associated with each arm.\n\\cite{antos2008active} study estimating the mean reward of each arm equally well and show that the optimal solution is to pull each arm proportional to the variance of its reward distribution.\nSince the variances are unknown a priori, they introduce an algorithm that pulls arms in proportion to the empirical variance of each reward distribution.\n\\cite{carpentier2015adaptive} extend this work by introducing a weighting on each arm that is equivalent to the target policy action probabilities in our work.\nThey show that the optimal solution is then to pull each arm proportional to the product of the standard deviation of the reward distribution and the arm weighting.\nInstead of using the empirical standard deviations, they introduce an upper confidence bound on the standard deviation and use it to select actions.\nOur work is different from these earlier works in that we consider more general tree-structured MDPs of which bandits are a special case.\n\n\n\n\nIn RL and MDPs, exploration is widely studied with the objective of finding the optimal policy.\nPrior work attempts to balance exploration to reduce uncertainty with exploitation to converge to the optimal policy.\nCommon approaches are based on reducing uncertainty \\citep{osband2016deep,o2018uncertainty} or incentivizing visitation of novel states \\citep{barto2013intrinsic,pathak2017curiosity,burda2018exploration}.\nThese 
works differ from our work in that we focus on evaluating a fixed policy rather than finding the optimal policy.\nIn our problem, the trade-off becomes balancing taking actions to reduce uncertainty with taking actions that the target policy is likely to take.\n\n\nOur work is similar in spirit to work on adaptive importance sampling (AIS) \\citep{rubinstein2013cross}, which aims to lower the variance of Monte Carlo estimators by adapting the data collection distribution.\nAdaptive importance sampling was used by \\cite{hanna2017data} to lower the variance of policy evaluation in MDPs.\nIt has also been used to lower the variance of policy gradient RL algorithms \\citep{bouchard2016online,ciosek2017offer}.\nAIS methods attempt to find a single optimal sampling distribution whereas our approach attempts to reduce uncertainty in the estimated mean rewards. Another relevant work is that of \\citet{talebi2019learning}, who use a different loss function to estimate the transition model rather than minimize the MSE in the off-policy setting.\n\n\n\n\n\n\n\\subsubsection*{References}}\n\\usepackage{mathtools}\n\\usepackage{booktabs}\n\\usepackage{tikz}\n\n\n\\newcommand{\\swap}[3][-]{#3#1#2}\n\n\\usepackage{macros}\n\n\\newcommand{\\rob}[1]{{\\color{purple}{RN: #1}}}\n\n\\title{ReVar: Strengthening Policy Evaluation via Reduced Variance Sampling}\n\n\\author[1]{Subhojyoti Mukherjee}\n\\author[2]{Josiah P. Hanna}\n\\author[1]{Robert Nowak}\n\\affil[1]{%\n Department\nof Electrical and Computer Engineering\\\\\n University of Wisconsin-Madison\\\\\n USA\n}\n\\affil[2]{%\n Computer Sciences Department\\\\\n University of Wisconsin-Madison\\\\\n USA\n}\n \n \n\\begin{document}\n\\maketitle\n\n\\begin{abstract}\n This paper studies the problem of data collection for policy evaluation in Markov decision processes (MDPs). In policy evaluation, we are given a \\textit{target} policy and asked to estimate the expected cumulative reward it will obtain in an environment formalized as an MDP. 
We develop theory for optimal data collection within the class of tree-structured MDPs by first deriving an oracle data collection strategy that uses knowledge of the variance of the reward distributions. We then introduce the \\textbf{Re}duced \\textbf{Var}iance Sampling (\\rev\\!) algorithm that approximates the oracle strategy when the reward variances are unknown a priori and bound its sub-optimality compared to the oracle strategy. Finally, we empirically validate that \\rev leads to policy evaluation with mean squared error comparable to the oracle strategy and significantly lower than simply running the target policy.\n\\end{abstract}\n\n\\section{Introduction}\n\\label{sec:intro}\n\\input{intro}\n\n\n\\section{Optimal Data Collection in Multi-armed Bandits}\n\\label{sec:bandit}\n\\input{bandit}\n\n\\vspace*{-1em}\n\\section{Optimal Data Collection in Tree MDPs}\n\\vspace*{-1em}\n\\label{sec:tree-mdp}\n\\input{mdp_tree2}\n\n\\vspace*{-1em}\n\\section{Optimal Data Collection Beyond Trees}\n\\label{sec:dag-mdp}\n\\input{mdp_dag}\n\n\\vspace*{-1em}\n\\section{Empirical Study}\n\\vspace*{-1em}\n\\label{sec:expt}\n\\input{expt}\n\n\\vspace*{-1em}\n\\section{Conclusion and Future Work}\n\\vspace*{-0.8em}\n\\label{sec:conc}\n\\input{conclusion}\n\n\\newpage\n\n\\textbf{Acknowledgements: } The authors would like to thank Kevin Jamieson from the Allen School of Computer Science \\& Engineering, University of Washington, for pointing out several useful references. 
This work was partially supported by AFOSR\/AFRL grant FA9550-18-1-0166.\n\n\n\n\\subsection{Oracle Data Collection}\n\nWe first consider an oracle data collection strategy that knows the variances of all reward distributions and the state transition probabilities.\nAfter observing $n$ state-action-reward tuples, the oracle computes the following estimate of $v^{\\pi}_{}(s^1_1)$:\n\\begin{align}\n &Y_n(s^1_1) \\coloneqq \\sum_{a=1}^A\\pi(a|s^1_1)\\bigg(\\dfrac{1}{T_n(s^1_1,a)}\\sum_{h=1}^{T_n(s^1_1,a)}R_{h}(s^1_1, a)\\nonumber\\\\\n \n &\\quad + \\gamma\\sum_{s^{2}_j} P(s^{2}_j|s^1_1, a)Y_{n}(s^2_j)\\bigg) \\nonumber \\\\\n &= \\!\\sum_{a=1}^A\\pi(a|s^1_1)\\!\\bigg(\\!\\wmu(s^1_1,a)\n \n \n \\!+\\! \\gamma\\!\\!\\sum_{s^{2}_j}\\!\\! P(s^{2}_j|s^1_1, a)Y_{n}(s^2_j)\\!\\bigg) \\label{eq:tree-Yestimate}\n\\end{align}\nwhere $T_n(s,a)$ denotes the number of times that the oracle took action $a$ in state $s$. Note that in \\Cref{sec:background} we define $Y_n(s,t)$ but now we use $Y_n(s)$ as the timestep is implicit in the layer of the tree. \nAlso \\eqref{eq:tree-Yestimate} differs from the estimator defined in \\Cref{sec:policy-evaluation} as it uses the true transition probabilities, $P$, instead of their empirical estimate, $\\widehat{P}$.\nThe MSE of $Y_n$ is:\n\\begin{align}\n \\E_{\\D}&[\\left(Y_n(s^1_1) - v^{\\pi}_{}(s^1_1)\\right)^2] \\nonumber\\\\\n \n &= \\Var(Y_n(s^1_1)) + \\bias^2(Y_n(s^1_1)). \\label{eq:tree-mse}\n\\end{align}\nThe bias of this estimator becomes zero once all $(s,a)$-pairs with $\\pi(a|s)>0$ have been visited at least once; thus we focus on reducing $\\Var(Y_n(s^1_1))$.\nBefore defining the oracle data collection strategy, we first state an assumption on $\\D$. 
\n\\begin{assumption}\n\\label{assm:unbiased}\nThe data $\\D$ collected over $n$ state-action-reward samples has at least one observation of each state-action pair, $(s,a)$, for which $\\pi(a|s) > 0$.\n\\end{assumption}\n\\Cref{assm:unbiased} ensures that $Y_n$ is an unbiased estimator of $v^{\\pi}(s^1_1)$ so that reducing MSE is equivalent to reducing variance.\nBefore stating our main result, we provide intuition with a lemma that gives the optimal proportion for each action in a $2$-depth tree.\n\\begin{lemma}\n\\label{lemma:main-tree}\nLet $\\T$ be a $2$-depth stochastic tree MDP as defined in \\Cref{def:tree-mdp} (see \\Cref{fig:mdp-3-state} in \\Cref{app:thm:3state-sample}). \nLet $Y_n(s^1_1)$ be the estimated return of the starting state $s^1_1$ after observing $n$ state-action-reward samples. \nNote that $v^{\\pi}_{}(s^1_1)$ is the expectation of $Y_n(s^1_1)$ under \\Cref{assm:unbiased}. Let $\\D$ be the observed data over $n$ state-action-reward samples. \nThe minimum MSE, $\\E_{\\D}[(Y_{n}(s^1_1) - v^{\\pi}_{}(s^1_1))^2]$, is obtained by taking actions in each state in the following proportions:\n\\begin{gather*}\n b^*(a | s^2_j) \\propto \\pi(a | s^2_j) \\sigma(s^2_j,a) \\\\\n b^*(a|s^1_1) \\!\\propto\\!\\! \\sqrt{\\pi^2(a|s^{1}_{1})\\bigg[\\sigma^2(s^1_1,a) \\!+\\!\\gamma^2 \\sum_{s^{2}_j}P(s^{2}_j|s^1_1,a)B^2(s^2_j)\\bigg]},\n\\end{gather*}\nwhere, $B(s^{2}_j)= \\sum_a \\pi(a|s^{2}_j) \\sigma(s^{2}_j, a)$.\n\\end{lemma}\n\\textbf{Proof (Overview):} We decompose the MSE into its variance and bias terms and show that $Y_n$ is unbiased under \\Cref{assm:unbiased}. Next note that the reward in the next state is conditionally independent of the reward in the current state given the current state and action. 
\nHence we can write the variance in terms of the variance of the estimate in the initial state and the variance of the estimate in the final layer.\nWe then rewrite the total samples of a state-action pair, i.e., $T_n(s^\\ell_i, a)$, in terms of the proportion of the number of times the action was sampled in the state, i.e., $b(a|s^\\ell_i)$. \nTo do so, we take into account the tree structure to derive the expected proportion of times that action $a$ is taken in each state in layer $2$ as follows:\n\\begin{align*}\n b(a|s^2_i) & =\\dfrac{T_n(s^2_i, a)}{\\sum_{a'}T_n(s^2_i, a')}\n \n \\overset{(a)}{=} \\dfrac{T_n(s^2_i, a)\/n}{\\sum_{a'}P(s^2_i|s^1_1,a')T_n(s^1_1,a')\/n} \n \n \n \n \n\\end{align*}\nwhere in $(a)$ an action $a$ is used to transition from $s^1_1$ to state $s^2_i$ and so $\\sum_{a}T_n(s^2_i, a) = \\sum_{a'}P(s^2_i|s^1_1,a')T_n(s^1_1,a')$. \nWe next substitute the $b(a|s^\\ell_i)$ for each state-action pair into the variance expression and determine the $b$ values that minimize the expression subject to $\\forall s, \\sum_a b(a|s) = 1$ and $\\forall s, a, \\; b(a|s) > 0$.\nThe full proof is given in \\Cref{app:thm:3state-sample}. $\\blacksquare$ \\par\n\n\n\nNote that the optimal proportion in the leaf states, $b^*(a|s^2_j)$, is the same as in \\citet{carpentier2011finite} (see \\Cref{prop:bandit}) as terminal states can be treated as bandits in which actions do not affect subsequent states.\nThe key difference is in the root state, $s^1_1$, where the optimal action proportion, $b^*(a|s^1_1)$, depends on the expected leaf state normalization factor $B(s^2_j)$ where $s^2_j$ is a state sampled from $P(\\cdot|s^1_1,a)$. 
\nThe normalization factor, $B(s^2_i)$, captures the total contribution of state $s^2_i$ to the variance of $Y_n$ and thus actions in the root state must be chosen to 1) reduce variance in the immediate reward estimate and to 2) get to states that contribute more to the variance of the estimate.\nWe explore the implications of the oracle action proportions in \\Cref{lemma:main-tree} with the following two examples.\n\n\n\n\\begin{example}\\textbf{(Child Variance matters)}\nConsider a $2$-depth, $2$-action tree MDP $\\T$ with deterministic $P$, i.e., $P(s^2_2|s^1_1,2) = P(s^2_1|s^1_1,1) = 1$ and $\\gamma=1$ (see \\Cref{app:fig} (Left) in \\Cref{app:three-state-det}). \nSuppose the target policy is the uniform distribution in all states so that $\\forall (s,a), \\pi(a|s) = \\frac{1}{2}$. \nThe reward distribution variances are given by $\\sigma^2(s^1_1, 1) = 400$, $\\sigma^2(s^1_1, 2) = 600$, $\\sigma^2(s^2_1, 1) = 400$, $\\sigma^2(s^2_1, 2) = 400$, $\\sigma^2(s^2_2, 1) = 4$, and $\\sigma^2(s^2_2, 2) = 4$. \nSo the right sub-tree at $s^1_1$ has higher variance (larger $B$-value) than the left sub-tree. \nFollowing the sampling rule in \\Cref{lemma:main-tree} we can show that $b^*(1|s^1_1) > b^*(2|s^1_1)$ (the full calculation is given in \\Cref{app:three-state-det}). \nHence the sub-tree with higher variance will have a higher proportion of pulls, which allows the oracle to get to the high-variance state $s^2_1$.\nObserve that treating $s^1_1$ as a bandit leads to choosing action 2 more often as $\\sigma^2(s^1_1, 2) > \\sigma^2(s^1_1, 1)$. \nHowever, taking action 2 leads to state $s^2_2$, which contributes much less to the total variance.\nThus, this example highlights the need to consider the variance of subsequent states.\n\\end{example}\n\n\\begin{example}\\textbf{(Transition Model matters)}\nConsider a $2$-depth, $2$-action tree MDP $\\T$ in which we have $P(s^2_1|s^1_1,1) = p$, $P(s^2_2|s^1_1,1) = 1-p$, $P(s^2_3|s^1_1,2) = p$, and $P(s^2_4|s^1_1,2) = 1-p$. 
\nThis example is shown in \\Cref{app:fig} (Right) in \\Cref{app:three-state-det}. \nFollowing the result of \\Cref{lemma:main-tree}, if $p\\gg (1-p)$, it can be shown that the variances of the states $s^2_1$ and $s^2_3$ have greater importance in calculating the optimal sampling proportions of $s^1_1$.\nThe calculation is shown in \\Cref{app:sampling-with-model}.\nThus, less likely future states have less importance for computing the optimal sampling proportion in a given state.\n\\end{example}\n\nHaving developed intuition for minimal-variance action selection in a 2-depth tree MDP, we now give our main result that extends \\Cref{lemma:main-tree} to an $L$-depth tree. \n\\begin{customtheorem}{1}\n\\label{thm:L-step-tree}\nLet $\\T$ be an $L$-depth stochastic tree MDP as defined in \\Cref{def:tree-mdp}. Let the estimated return of the starting state $s^1_1$ after $n$ state-action-reward samples be defined as $Y_{n}(s^1_1)$. \nNote that $v^{\\pi}_{}(s^1_1)$ is the expectation of $Y_n(s^1_1)$ under \\Cref{assm:unbiased}. \nLet $\\D$ be the observed data over $n$ state-action-reward samples. To minimize the MSE $\\E_{\\D}[(Y_n(s^1_1) - \\mu(Y_{n}(s^1_1)))^2]$, the optimal sampling proportion for an arbitrary state is given by:\n{\\small\n\\begin{align*}\n b^*(a|s^{\\ell}_i) \\!&\\propto\n \n \n \n \\!\\! \\sqrt{\\! \\pi^2(a|s_i^{\\ell})\\! \\bigg[\\sigma^2(s^{\\ell}_i, a)\\! +\\! \\gamma^2\\!\\! \\sum\\limits_{s^{\\ell+1}_j}\\!\\!P(s^{\\ell + 1}_j|s_i^{\\ell}, a) B^2(s^{\\ell+1}_j)\\! \\bigg]},\n \n \n\\end{align*}\n}\nwhere, $B(s^{\\ell+1}_j)$ is the normalization factor defined as follows:\n{\\small\n\\begin{align}\n B(s^{\\ell}_{i}) \\!\\!=\\!\\! \n \n \n \\sum\\limits_a\\!\\! \\sqrt{\\!\\!\\pi^2(a|s^{\\ell}_{i})\\!\\left(\\!\\!\\sigma^2(s^{\\ell}_{i}, a) \\!+\\! 
\\gamma^2\\!\\sum\\limits_{s^{\\ell+1}_j}\\!\\!P(s^{\\ell+1}_j\\!|\\!s^{\\ell}_i, a) B^2(s^{\\ell+1}_j)\\!\\!\\right)}\n \n \n \\label{eq:B-def}\n\\end{align}\n}\n\\end{customtheorem}\n\\textbf{Proof (Overview):} We prove \\Cref{thm:L-step-tree} by induction. \\Cref{lemma:main-tree} proves the base case of estimating the sampling proportion for levels $L-1$ and $L$.\nThen, for the induction step, we assume that the sampling proportions from level $L$ down to level $\\ell+1$ can be built up using dynamic programming starting from level $L$.\nFrom the states in level $L$ down to the states in level $\\ell+1$ we can compute $b^*(a|s^{\\ell+1}_i)$ by repeatedly applying \\Cref{lemma:main-tree}. Then we show that at level $\\ell$ we obtain the recursive sampling proportion stated in the theorem. The proof is given in \\Cref{app:tree-mdp-behavior}. $\\blacksquare$\n\n\\vspace*{-1em}\n\\subsection{MSE of the Oracle}\n\\vspace*{-1em}\nIn this subsection, we derive the MSE that the oracle will incur when matching the action proportions given by \\cref{thm:L-step-tree}.\nThe oracle is run for $K$ episodes where each episode consists of a length-$L$ trajectory of visited state-action pairs.\nSo the total budget is $n=KL$. The MSE of the oracle at the end of the $K$-th episode is given in \\Cref{prop:oracle-loss}. Before stating the proposition we introduce additional notation which we will use throughout the remainder of the paper. Let\n\\begin{align}\n T_{t}^{k}(s, a)=\\sum_{i=0}^{k-1} \\mathbb{I}\\left\\{\\left(s_{t}^{i}, a_{t}^{i}\\right)=(s, a)\\right\\}, \\forall t, s, a \\label{eq:T-s-a}\n\\end{align}\ndenote the total number of times that $(s,a)$ has been observed in $\\D$ (across all trajectories) up to time $t$ in episode $k$ and $\\mathbb{I}\\{\\cdot\\}$ is the indicator function. 
Similarly let \n\\begin{align}\n \\!\\!T_{t}^{k}(s, a, s')\\!=\\!\\!\\sum_{i=0}^{k-1} \\mathbb{I}\\!\\!\\left\\{\\!\\left(s_{t}^{i}, a_{t}^{i}, s^i_{t+1}\\right)\\!=\\!(s, a, s')\\!\\right\\}\\!\\!, \\!\\forall t\\!, s, a, s'\\!\\! \\label{eq:T-s-a-s1}\n\\end{align}\ndenote the number of times taking action $a$ in $s$ at time-step $t$ led to a transition to $s'$. Finally, we define the state sample count $T^k_t(s) = \\sum_a T^k_t(s,a)$ as the total number of times state $s$ has been visited and an action taken in it. \n\\begin{customproposition}{2}\n\\label{prop:oracle-loss}\nLet there be an oracle which knows the state-action variances and transition probabilities of the $L$-depth tree MDP $\\T$. Let the oracle take actions in the proportions given by \\Cref{thm:L-step-tree}. Let $\\D$ be the observed data over $n$ state-action-reward samples such that $n=KL$. Then the oracle suffers an MSE of\n\\begin{align}\n &\\L^*_n = \\sum_{\\ell=1}^{L}\\bigg[\\dfrac{B^{2}(s^{\\ell}_i)}{T_L^{*,K}(s^\\ell_i)} \\nonumber\\\\\n &+ \\gamma^{2}\\sum_{a} \\pi^2(a|s^\\ell_i)\\sum_{s^{\\ell+1}_j}P(s^{\\ell+1}_j|s^{\\ell}_i,a) \\dfrac{B^{2}(s^{\\ell+1}_j)}{T_L^{*,K}(s^{\\ell+1}_j)} \\bigg], \\label{eq:oracle-loss}\n\\end{align}\nwhere $T^{*,K}_{L}(s^\\ell_i)$ denotes the oracle's optimal state sample count at the end of episode $K$.\n\\end{customproposition}\nThe proof is given in \\Cref{app:oracle-loss}. From \\Cref{prop:oracle-loss} we see that the MSE of the oracle goes to $0$ as the number of episodes $K\\!\\rightarrow\\! \\infty$ and $T_L^{*,K}(s^{\\ell}_i)\\!\\rightarrow\\! \\infty$ simultaneously for all $s^{\\ell}_i \\in\\S$. 
Observe that if for every state $s$ the total state count $T_L^{*,K}(s) = cn$ for some constant $c>0$, then the loss of the oracle goes to $0$ at the rate $O(1\/n)$.\n\n\\subsection{Reduced Variance Sampling}\n\nThe oracle data collection strategy provides intuition for optimal data collection for minimal-variance policy evaluation; however, it is \\textit{not} a practical strategy itself as it requires $\\sigma$ and $P$ to be known.\nWe now introduce a practical data collection algorithm -- \\textbf{Re}duced \\textbf{Var}iance Sampling (\\revpar) --\nthat is agnostic to $\\sigma$ and $P$.\nOur algorithm follows the proportions given by \\Cref{thm:L-step-tree} with the true reward variances replaced by an upper confidence bound and the true transition probabilities replaced by empirical frequencies.\nFormally, we define the desired proportion for action $a$ in state $s^\\ell_i$ after $t$ steps as $\\wb^k_{t+1}(a|s^{\\ell}_{i}) \\propto $ \n{\\small\n\\begin{align}\\label{eq:tree-bhat}\n \\!\\!& \\!\\! \\!\\!\\sqrt{\\!\\!\\pi^2(a|s_{i}^{\\ell})\\!\\bigg[\\usigma^{(2),k}_{t}\\!\\!(s^{\\ell}_{i}, a) \\!+\\! \\gamma^2\\!\\!\\!\\sum\\limits_{s^{\\ell+1}_j} \\!\\!\\wP^{k}_{t}(s^{\\ell + 1}_{j}|s_{i}^{\\ell}, a) \\wB^{(2),k}_{t}\\!(s^{\\ell+1}_{j}\\!)\\!\\!\\bigg]}\n\\end{align}\n}\nThe upper confidence bound on the variance $\\sigma^2(s^{\\ell}_{i},a)$, denoted by $\\usigma^{(2),k}_{t}(s^{\\ell}_{i},a) = (\\usigma^{k}_{t}(s^{\\ell}_{i},a))^2$, is defined as:\n\\begin{align}\n \\hspace*{-0.7em}\\usigma^{k}_{t}(s^{\\ell}_{i},a) \\!\\coloneqq\\! \\wsigma^k_{t}(s^{\\ell}_{i},a) \\!+\\! 2c\\sqrt{\\dfrac{\\log(SAn(n\\!+\\!1)\/\\delta)}{T^k_{t}( s^{\\ell}_{i},a)}} \\label{eq:tree-ucb}\n\\end{align}\nwhere $\\wsigma^k_{t}(s^{\\ell}_{i},a)$ is the plug-in estimate of the standard deviation $\\sigma(s^{\\ell}_{i},a)$, $c \\!>\\!0$ is a constant depending on the boundedness of the rewards to be made explicit later, and $n=KL$ is the total budget of samples. 
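To make these quantities concrete, the following Python sketch computes the variance upper confidence bound and the resulting sampling proportions for a single state, assuming the child normalisation factors have already been computed bottom-up. This is an illustrative sketch, not the paper's implementation; the helper names (`ucb_std`, `proportions`) and the values of the constant `c` and the confidence level `delta` are hypothetical.

```python
import math

def ucb_std(emp_std, count, S, A, n, delta=0.05, c=1.0):
    # Upper confidence bound on the reward standard deviation:
    # empirical std plus an exploration bonus that shrinks as the
    # state-action count grows (c depends on the reward range).
    return emp_std + 2 * c * math.sqrt(
        math.log(S * A * n * (n + 1) / delta) / count)

def proportions(pi, ucb_var, trans, B_next, gamma=1.0):
    # b(a) proportional to
    #   sqrt( pi(a)^2 [ sigma_ucb^2(a) + gamma^2 * sum_s' P(s'|a) B(s')^2 ] ),
    # where trans[a] maps child index -> estimated transition probability
    # and B_next[j] is the already-computed child normalisation factor.
    raw = [
        math.sqrt(pi[a] ** 2 * (ucb_var[a]
                  + gamma ** 2 * sum(p * B_next[j] ** 2
                                     for j, p in trans[a].items())))
        for a in range(len(pi))
    ]
    B = sum(raw)  # the sum of raw terms is the normalisation factor B(s)
    return [r / B for r in raw], B
```

For a leaf state (`trans[a]` empty), the proportion reduces to the policy probability times the reward standard deviation, up to normalisation, matching the bandit sampling rule discussed below.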
\nUsing an upper confidence bound on the reward standard deviations captures our uncertainty about $\\sigma(s^\\ell_i, a)$, which is needed to compute the true optimal proportions.\nThe state transition model is estimated as:\n\\begin{align}\n \\wP^k_t(s^{\\ell+1}_j|s^{\\ell}_i,a) = \\dfrac{T^k_t(s^{\\ell}_i,a,s^{\\ell+1}_j)}{T^k_t(s^{\\ell}_i,a)} \\label{eq:tree-P-hat}\n\\end{align}\nwhere $T^k_t(s^{\\ell}_i,a,s^{\\ell+1}_j)$ is defined in \\eqref{eq:T-s-a-s1}. Further, in \\eqref{eq:tree-bhat}, $\\wB^k_t(s^{\\ell+1}_{j})$ is the plug-in estimate of $B(s^{\\ell+1}_{j})$. Observe that all of these plug-in estimates use the entire history up to time $t$ in episode $k$.\n\nEq. (\\ref{eq:tree-bhat}) allows us to estimate the optimal proportion for all actions in any state.\nTo match these proportions, rather than sampling from $\\wb^k_{t+1}(a|s^{\\ell}_{i})$, \\rev takes action $I^k_{t+1}$ at time $t+1$ in episode $k$ according to:\n\\begin{align}\n I^k_{t+1} = \\argmax_{a}\\bigg\\{\\dfrac{\\wb^k_{t}(a|s^{\\ell}_{i})}{T^k_{t}(s^{\\ell}_{i}, a)}\\bigg\\}. \\label{eq:tree-sampling-rule}\n\\end{align}\nThis action selection rule keeps each empirical count $T^k_{t}(s^{\\ell}_{i}, a)$ approximately proportional to the estimated proportion $\\wb^k_{t}(a|s^{\\ell}_{i})$.\nIt is a deterministic action selection rule and thus avoids variance due to simply sampling from the estimated optimal proportions.\nNote that in the terminal states, $s^L_i$, the sampling rule becomes\n\\begin{align*}\n I^k_{t+1} = \\argmax_{a}\\bigg\\{\\dfrac{\\pi(a|s^L_i)\\usigma^k_{t}(s^L_i,a)}{T^k_{t}(s^{L}_{i}, a)}\\bigg\\} \n\\end{align*}\nwhich matches the bandit sampling rule of \\citet{carpentier2011finite, carpentier2012minimax}. \n\n\n\nWe give pseudocode for \\rev in \\Cref{alg:track}.\nThe algorithm proceeds in episodes. In each episode we generate a trajectory from the starting state $s^1_1$ (root) to one of the terminal states $s^L_j$ (leaf). 
At episode $k$ and time-step $t$ in some arbitrary state $s^{\\ell}_{i}$ the next action $I^k_{t+1}$ is chosen based on \\eqref{eq:tree-sampling-rule}. The generated trajectory is added to the dataset $\\D$. At the end of the episode we update the model parameters, i.e. we estimate $\\wsigma^k_{t}(s^{\\ell}_i, a)$ and $\\wP^k_{t}(s^{\\ell+1}_j|s^{\\ell}_i,a)$ for each state-action pair. Finally, we update $\\wb^{k+1}_1(a|s^{\\ell}_i)$ for the next episode using \\cref{eq:tree-bhat}.\n\n\\begin{algorithm}\n\\caption{\\textbf{Re}duced \\textbf{Var}iance Sampling (\\rev)}\n\\label{alg:track}\n\\begin{algorithmic}[1]\n\\State \\textbf{Input:} Number of trajectories to collect, $K$.\n\\State \\textbf{Output:} Dataset $\\D$.\n\\State Initialize $\\D = \\emptyset$, $\\wb^{0}_1(a|s^{\\ell}_i)$ uniform over all actions in each state.\n\\For{$k = 0, 1, \\ldots, K-1$}\n\\State Generate trajectory $H^k \\coloneqq \\{S_{t},I_{t},R(I_{t})\\}_{t=1}^L$ by selecting $I_t$ according to \\eqref{eq:tree-sampling-rule}.\n\\State $\\D \\leftarrow \\D \\cup \\{(H^k, \\wb^k_L)\\}$\n\\State Update model parameters $\\wsigma^k_{L}(s^{\\ell}_i, a)$ and $\\wP^k_{L}(s^{\\ell+1}_j|s^{\\ell}_i, a)$ for each $(s^{\\ell}_i, a)$. 
\n\\State Update $\\wb^{k+1}_{1}(a|s^{\\ell}_i)$ from level $L$ to $1$ following \\eqref{eq:tree-bhat}.\n\\EndFor\n\\State \\textbf{Return} Dataset $\\D$ to evaluate policy $\\pi$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Regret Analysis}\n\nWe now theoretically analyze \\rev by bounding its regret with respect to the oracle behavior policy.\nWe analyze \\rev under the assumption that $P$ is known, and so we are only concerned with obtaining accurate estimates of the reward means and variances.\nThis assumption is only made for the regret analysis and is \\textit{not} a fundamental requirement of \\revpar.\nThough somewhat restrictive, the case of known state transitions is still interesting as it arises in practice when state transitions are deterministic or we can estimate $P$ much more easily than we can estimate the reward means.\n\nWe first define the notion of regret of an algorithm compared to the oracle MSE $\\L^*_n$ in \\eqref{eq:oracle-loss} as follows:\n\\begin{align*}\n \\mathcal{R}_n = \\L_{n} - \\L^*_n\n\\end{align*}\nwhere $n$ is the total budget, and $\\L_n$ is the MSE at the end of episode $K$ when following the sampling proportions in \\eqref{eq:tree-bhat}. \nWe make the following assumption that rewards are bounded:\n\\begin{assumption}\n\\label{assm:bounded}\nThe reward from any state-action pair has bounded range, i.e., $R_t(s,a)\\in[-\\eta, \\eta]$ almost surely at every time-step $t$ for some fixed $\\eta>0$.\n\\end{assumption}\nThen the regret of \\rev over an $L$-depth deterministic tree is given by the following theorem.\n\\begin{customtheorem}{2}\n\\label{thm:regret-rev}\nLet the total budget be $n=KL$ and $n \\geq 4SA$. 
Then the total regret in a deterministic $L$-depth $\\T$ at the end of the $K$-th episode when taking actions according to \\eqref{eq:tree-bhat} is given by\n\\begin{align*}\n \\mathcal{R}_n &\\leq \\widetilde{O}\\left(\\dfrac{B^2_{s^{1}_1}\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b^{*,\\nicefrac{3}{2}}_{\\min}(s^{1}_1)} \\right.\\\\\n &\\left. + \\gamma\\sum_{\\ell=2}^L\\max_{s^{\\ell}_j,a}\\pi(a|s^{1}_1)P(s^{\\ell}_j|s^{1}_1,a)\\dfrac{B^2_{s^{\\ell}_j}\\sqrt{\\log(SAn^{\\nicefrac{11}{2}})}}{n^{\\nicefrac{3}{2}}b^{*,\\nicefrac{3}{2}}_{\\min}(s^{\\ell}_j)} \\right)\n\\end{align*}\nwhere $\\widetilde{O}$ hides lower-order terms, $B_{s^\\ell_i}$ is defined in \\eqref{eq:B-def}, and $b^*_{\\min}(s)= \\min_{a}b^*(a|s)$.\n\\end{customtheorem}\n\nNote that if $L=1$ and $|\\mathcal{S}|=1$, we recover the bandit setting and our regret bound matches the bound in \\citet{carpentier2011finite}.\nNote that the MSE using data generated by any policy decays at a rate no faster than $O(n^{-1})$, the parametric rate. The key feature of \\rev is that it converges to the oracle policy. This means that asymptotically, the MSE based on \\rev will match that of the oracle. \\Cref{thm:regret-rev} shows that the regret scales like $O(n^{-3\/2})$ if $b^*_{\\min}(s)$ is bounded below by a constant $O(1)$ for all states $s\\in\\S$. In contrast, suppose we sample trajectories from a suboptimal policy, i.e., a policy that produces an MSE worse than that of the oracle for every $n$. This MSE gap never diminishes, so the regret cannot decrease at a rate faster than $O(n^{-1})$.\nFinally, note that the regret bound in \\Cref{thm:regret-rev} is a problem-dependent bound as it involves the parameter $b^*_{\\min}(s)$.\n\n\\textbf{Proof (Overview):} We decompose the proof into several steps. 
We define the good event $\\xi_{\\delta}$ based on the state-action-reward samples $\\D$ that holds for all episodes $k$ and times $t$ such that $|\\wsigma^k_{t}(s,a) - \\sigma(s,a)|\\leq \\epsilon$ for some $\\epsilon > 0$ with probability $1-\\delta$, made explicit in \\Cref{corollary:conc}. Now observe that the MSE of \\rev is \n\\begin{align}\n \\L_{n} \\!&=\\! \\E_{\\D}\\left[\\left(Y_n(s^1_1) - v^{\\pi}(s^1_1)\\right)^{2}\\right]\\nonumber\\\\\n &\\!=\\! \\E_{\\D}\\left[\\left(Y_n(s^1_1) \\!-\\! v^{\\pi}(s^1_1)\\right)^{2} \\mathbb{I}\\{\\xi_\\delta\\}\\right] \\nonumber\\\\ \n&\\quad + \\E_{\\D}\\left[\\left(Y_n(s^1_1) - v^{\\pi}(s^1_1)\\right)^{2} \\mathbb{I}\\left\\{\\xi^{C}_\\delta\\right\\}\\right]\\label{eq:loss-revar}\n\\end{align}\nNote that here we are considering a known transition function $P$.\nThe first term in \\eqref{eq:loss-revar} can be bounded using \n\\begin{align*}\n\\E_{\\D}&\\left[\\left(Y_n(s^1_1) - v^{\\pi}(s^1_1)\\right)^{2} \\mathbb{I}\\{\\xi_\\delta\\}\\right] = \\Var[Y_{n}(s^{1}_{1})]\\E[T^k_n(s^1_1)]\\nonumber\\\\\n&\\leq \\sum_a\\pi^2(a|s^{1}_{1}) \\bigg[\\dfrac{ \\sigma^2(s^{1}_{1}, a)}{\\underline{T}^{(2),k}_n(s^{1}_{1}, a)}\\bigg]\\E[T^k_n(s^1_1,a)]\\\\\n& + \\gamma^2\\sum_{a}\\pi^2(a|s^{1}_{1})\\sum_{s^{2}_j}P^2(s^2_j|s^1_1,a)\\\\\n&\\qquad\\cdot\\sum_{a'}\\pi^2(a'|s^2_j)\\bigg[\\dfrac{ \\sigma^2(s^{2}_{j}, a')}{\\underline{T}^{(2),k}_n(s^{2}_{j}, a')}\\bigg]\\E[T^k_n(s^2_j,a')] \n\\end{align*}\nwhere $\\underline{T}^{(2),k}(s^1_1,a)$ is a lower bound to $T^{(2),k}(s^1_1,a)$ made explicit in \\Cref{lemma:bound-samples-level1}, and $\\underline{T}^{(2),k}(s^2_j,a)$ is a lower bound to $T^{(2),k}(s^2_j,a)$ made explicit in \\Cref{lemma:bound-samples-level2}. We can combine these two lower bounds to give an upper bound to the MSE in a 2-depth tree $\\T$, which is shown in \\Cref{lemma:regret-two-level}. Finally, for the $L$-depth stochastic tree we can repeatedly apply \\Cref{lemma:regret-two-level} to bound the first term. 
For the second term we set $\\delta = n^{-2}$ and use the boundedness assumption in \\Cref{assm:bounded} to get the final bound. The proof is given in \\Cref{app:regret-det-tree-L}. $\\blacksquare$\n\\par\n\n\\subsection{Policy Evaluation}\\label{sec:policy-evaluation}\n\nWe now formally define our objective.\nWe are given a target policy, $\\pi$, for which we want to estimate $v(\\pi)$.\nTo estimate $v(\\pi)$ we will generate a set of $K$ trajectories, where each trajectory is generated by following some policy.\nLet $H^k\\coloneqq\\{s^k_t, a^k_t, R^k_t(s^k_t, a^k_t)\\}_{t=1}^L$ be the trajectory collected in episode $k$ and let $b^k$ be the policy run to produce $H^k$.\nThe entire set of collected trajectories is given as $\\D\\coloneqq\\{H^k, b^k\\}_{k=1}^K$.\n\nOnce $\\D$ is collected, we estimate $v(\\pi)$ with a certainty-equivalence estimate \\cite{sutton1988learning}. \nSuppose $\\D$ consists of $n = KL$ state-action transitions. We define the random variable representing the estimated future reward from state $s$ at time-step $t$ as:\n\\[\nY_n(s,t) \\coloneqq \\sum_a \\pi(a|s) \\left[\\wmu(s,a) + \\gamma \\sum_{s'} \\wP(s'|s,a) Y_n(s',t+1)\\right],\n\\]\nwhere $Y_n(s',t+1) \\coloneqq 0$ if $t\\geq L$, $\\wmu(s,a)$ is the mean reward observed after taking action $a$ in state $s$, and $\\wP(s'|s,a)$ is an estimate of $P(s'|s,a)$.\nFinally, the estimate of $v(\\pi)$ is computed as $Y_n \\coloneqq \\sum_{s} d_0(s) Y_n(s,0)$.\nIn the policy evaluation literature, the certainty-equivalence estimator is also known as the direct method \\cite{jiang2016doubly} and can be shown to be equivalent to batch temporal-difference estimators \\cite{sutton1988learning,pavse2020reducing}.\nThus, it is representative of two types of policy evaluation estimators that often give strong empirical performance \\cite{voloshin2019empirical}.\n\n\nOur objective is to determine the sequence of behavior policies that minimizes the error in estimating $v(\\pi)$.\nFormally, we seek to minimize mean
squared error, which is defined as: \n $\\E_{\\mathcal{D}}\\left[\\left(Y_n - v(\\pi)\\right)^{2}\\right]$ \nwhere the expectation is over the collected dataset $\\D$.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}