\section{Introduction}\n\nOver the last few years, the ferrimagnetic mixed spin-$s$ and spin-$S$ Heisenberg chains with regularly alternating spins $s=1\/2$ and $S>1\/2$ have attracted a great deal of attention, since they exhibit a quantum phase transition between intriguing ground states that are manifested in the respective magnetization curves as quantized magnetization plateaux and Luttinger spin liquids \cite{yam99,sak99,hon00,yam00,sak02,ten11}. The intermediate magnetization plateaux of the mixed spin-(1\/2,$S$) Heisenberg chains should obey the quantization condition known as the Oshikawa-Yamanaka-Affleck (OYA) rule $m_s-m$ = integer, where $m_s = S + 1\/2$ and $m$ are the total spin and total magnetization per elementary unit \cite{oya97}. According to the OYA rule, one possible way to increase the total number of magnetization plateaux is to increase the size of the constituent spin $S$. It should be stressed, however, that the OYA criterion provides just a necessary but not a sufficient condition for the presence of a magnetization plateau, whose actual existence still has to be verified by explicit calculations. \n\nAny bipartite quantum ferrimagnet (irrespective of spin magnitude and spatial dimensionality) should also satisfy the Lieb-Mattis (LM) theorem \cite{lie62}, which ensures the total magnetization $m = S - 1\/2$ per unit cell within the zero-field ground state of the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains. 
Hence, the OYA criterion in combination with the LM theorem suggests that the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains may display one and just one quantized magnetization plateau (regardless of the spin size $S$) at the fractional value of the total magnetization $m\/m_s = (2S-1)\/(2S+1)$ normalized with respect to its saturation value. In the present work we provide a survey of the zero-temperature magnetization curves of the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains by considering a few different quantum spin numbers $S=1$, $3\/2$, $2$ and $5\/2$, which proves all the aforementioned features of this paradigmatic class of quantum spin chains. \n\n\section{Model and method}\n\nLet us consider the mixed spin-$s$ and spin-$S$ quantum Heisenberg chain with regularly alternating spins $s=1\/2$ and $S>1\/2$ given by the Hamiltonian\n\begin{eqnarray}\n\hat{\cal H} = J \sum_{j=1}^L \hat{\bf S}_j \cdot (\hat{\bf s}_j + \hat{\bf s}_{j+1}) - h \sum_{j=1}^L (\hat{S}_j^z + \hat{s}_j^z),\n\label{ham}\n\end{eqnarray}\nwhere $\hat{\bf s}_j \equiv (\hat{s}_j^x,\hat{s}_j^y,\hat{s}_j^z)$ and $\hat{\bf S}_j \equiv (\hat{S}_j^x,\hat{S}_j^y,\hat{S}_j^z)$ denote the usual spin-1\/2 and spin-$S$ operators, respectively. The first term entering the Hamiltonian (\ref{ham}) takes into account the antiferromagnetic Heisenberg interaction $J>0$ between the nearest-neighbor spins, and the second term $h = g \mu_{\rm B} H$, incorporating the equal Land\'e g-factors $g_s = g_S = g$ and the Bohr magneton $\mu_{\rm B}$, accounts for the Zeeman energy of the individual magnetic moments in an external magnetic field. 
It is noteworthy that the overall chain length is $2L$, as the elementary unit contains two spins, whereas translational invariance is ensured by the periodic boundary condition $s_{L+1} \equiv s_1$.\n\nOne has to turn to an accurate numerical method in order to get a reliable survey of the magnetization processes of the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains, since the Hamiltonian (\ref{ham}) is not integrable. For this purpose, we have implemented density-matrix renormalization group (DMRG) calculations from the ALPS project \cite{bau11}, which can be straightforwardly used to obtain the lowest-energy eigenvalue $E(T_{tot}^z, L, h=0)$ of the ferrimagnetic mixed-spin Heisenberg chain within each sector with the total spin $T_{tot}^z = \sum_{j=1}^L (S_j^z + s_j^z)$ in a zero magnetic field ($h=0$). The lowest energy of the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains in a non-zero magnetic field can be subsequently calculated from the formula $E(T_{tot}^z, L, h) = E(T_{tot}^z, L, h=0) - h T_{tot}^z$, because the total spin $T_{tot}^z$ is a conserved quantity due to the validity of the commutation relation between the respective operator and the Hamiltonian (\ref{ham}). The finite-size value of the magnetic field inducing a transition between the lowest-energy eigenstates with the total spin $T_{tot}^z$ and $T_{tot}^z+1$ then readily follows as $h = E(T_{tot}^z+1, L, h=0) - E(T_{tot}^z, L, h=0)$. In this way one may obtain accurate numerical results for the zero-temperature magnetization curves. 
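The construction above can be sketched in a few lines of Python, assuming the zero-field lowest energies $E(T_{tot}^z, L, h=0)$ have already been obtained from DMRG. Note that the numerical values of `E0` below are purely illustrative placeholders, not DMRG output:

```python
def transition_field(E0, T):
    """Field at which the ground state changes from total spin T to T+1:
    h = E(T+1, L, h=0) - E(T, L, h=0)."""
    return E0[T + 1] - E0[T]

def magnetization(E0, h):
    """Total spin T minimizing E(T, L, h) = E(T, L, h=0) - h*T."""
    return min(E0, key=lambda T: E0[T] - h * T)

# Illustrative (convex) zero-field lowest energies, indexed by total spin T
E0 = {0: 0.0, 1: 1.0, 2: 3.0, 3: 6.0}

# Transition fields between consecutive total-spin sectors
fields = [transition_field(E0, T) for T in (0, 1, 2)]  # [1.0, 2.0, 3.0]

# Zero-temperature magnetization curve sampled at a few fields
curve = [(h, magnetization(E0, h)) for h in (0.5, 1.5, 2.5, 3.5)]
```

For convex zero-field energies, each field interval between two consecutive transition fields selects the next total-spin sector, which reproduces the step-like structure of the magnetization curve.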
To avoid extrapolation due to finite-size effects we have performed DMRG simulations for a sufficiently large system size with up to $L=64$ units (128 spins), whereas adequate numerical accuracy was achieved through 16 sweeps at the targeted system size when increasing the number of kept states up to 1200 during the final sweeps.\n\n\n\section{Results and discussion}\n\n\begin{figure}[t]\n\includegraphics[width=1.05\columnwidth]{fig1a.eps}\n\includegraphics[width=1.05\columnwidth]{fig1b.eps}\n\vspace{-0.8cm}\n\caption{The magnetization (left panel) and susceptibility (right panel) of the mixed spin-(1\/2,$S$) Heisenberg chains as a function of the magnetic field for four different spin values: (a)-(b) $S=1$; (c)-(d) $S=3\/2$; (e)-(f) $S=2$; (g)-(h) $S=5\/2$. The displayed results were obtained from DMRG simulations of a finite-size chain with $L=64$ units (128 spins).}\n\label{fig1}\n\end{figure}\n\nLet us proceed to a discussion of the zero-temperature magnetization curves of the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains, which are displayed in the left panel of Fig.~\ref{fig1} for a few different quantum spin numbers $S$=1, 3\/2, 2 and 5\/2. It is quite evident from Fig.~\ref{fig1} that all considered mixed-spin Heisenberg chains indeed exhibit exactly one intermediate magnetization plateau at the fractional value $m\/m_s =(2S-1)\/(2S+1)$, which is consistent with the gapped LM ferrimagnetic ground state. The intermediate plateau due to LM ferrimagnetism breaks down at a quantum phase transition invoked by the critical magnetic field $h_c$, which closes the energy gap above the ferrimagnetic ground state. It is noteworthy that the height of the LM plateau monotonically increases with the quantum spin number $S$, and so does its width, which terminates at the critical field $h_c = 1.76J$ for $S=1$, $h_c = 2.84J$ for $S=3\/2$, $h_c = 3.88J$ for $S=2$ and $h_c = 4.91J$ for $S=5\/2$. 
Above the critical magnetic field $h>h_c$ the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains pass towards the Luttinger spin liquid, where the magnetization rises continuously with the magnetic field until another quantum critical point is reached at the saturation field $h_s = J(1 + 2S)$. The asymptotic behavior of the magnetization in the vicinity of both quantum phase transitions is governed by the relations: $m \propto \sqrt{h - h_c}$ for $h \to h_{c}^{+}$ and $m \propto \sqrt{h_s - h}$ for $h \to h_{s}^{-}$. Owing to this fact, the quantum phase transitions driven by the magnetic field should also be reflected in an anomalous behavior of the magnetic susceptibility $\chi$ close to the quantum critical points: $\chi \propto 1\/\sqrt{h - h_c}$ for $h \to h_{c}^{+}$ and $\chi \propto 1\/\sqrt{h_s - h}$ for $h \to h_{s}^{-}$. In accordance with this statement, the magnetic-field dependences of the susceptibility shown in the right panel of Fig.~\ref{fig1} furnish evidence for both field-induced quantum phase transitions towards the Luttinger spin liquid through the observed divergence of the magnetic susceptibility. \n\n\n\section{Conclusions}\n\nThe zero-temperature magnetization curves of the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains were calculated with the help of the DMRG method for several values of the quantum spin number $S$. It has been verified that, due to the gapped LM ferrimagnetic ground state, the magnetization curves involve one and just one intermediate plateau at the fractional magnetization $m\/m_s =(2S-1)\/(2S+1)$, which breaks down at a quantum phase transition towards the Luttinger spin liquid driven by the external magnetic field. Subsequently, the magnetization rises continuously with the magnetic field within the Luttinger spin-liquid phase until it reaches the full moment at the saturation field $h_s = J(1 + 2S)$, which is closely connected with another field-induced quantum phase transition. 
It has been demonstrated that the magnetization shows a cusp and the susceptibility diverges in a close vicinity of both quantum critical points. Besides, it can be concluded that a rising quantum spin number $S$ increases both the height and the width of the ferrimagnetic LM plateau in the magnetization curve of the mixed spin-(1\/2,$S$) Heisenberg chains, while the magnetic-field range corresponding to the gapless Luttinger spin-liquid phase is conversely reduced. Last but not least, it is worth noticing that the theoretical implications of the present work are of obvious relevance for the series of bimetallic coordination compounds MM'(pba)(H$_2$O)$_3$ $\cdot$ 2H$_2$O \cite{kah87} and MM'(EDTA) $\cdot$ 6H$_2$O \cite{dri85} (M,M' = Cu, Ni, Co, Mn), which represent experimental realizations of the ferrimagnetic mixed-spin Heisenberg chains. However, high-field magnetization measurements on these or related series of bimetallic complexes are desirable for experimental testing of the present theoretical predictions. \n\n\section{Acknowledgement}\nThis work was financially supported by the Ministry of Education, Science, Research and Sport of the Slovak Republic under the grant No. VEGA 1\/0043\/16 and by the Slovak Research and Development Agency under the contract No. APVV-0097-12.\n\n\section{Introduction}\n\label{sec:intro}\n\nThe AlphaZero algorithm and its variations have been highly successful in producing high-quality moves in complex strategy games, including chess, Go, and shogi \cite{silver2017mastering,silver2018general,schrittwieser2020mastering}. Here \"high quality\" is measured in terms of playing against the best human players and the best available computer software. AlphaZero's self-play algorithm was trained on powerful computer hardware and achieved superhuman performance in less than 24 hours. 
\n\nAlphaZero's groundbreaking strategies in chess and Go have taken the game playing communities by storm \cite{sadler2019game}. \nThis sentiment is expressed by former chess world champion Garry Kasparov, who wrote {\it chess has been shaken to its roots by AlphaZero, but this is only a tiny example of what is to come}. Or as Matthew Sadler and Natasha Regan write in \cite{sadler2019game}: {\it It is momentous for chess players because, for the first time, we can learn from a powerful intelligence which built its chess strategy independently of our own rich history of chess development}. \n\nAlphaZero has not been made generally available; however, the open-source projects LC0 for chess, Leela Zero for Go, and AobaZero for shogi have, in essence, replicated AlphaZero \cite{cazenave2020polygames}. These projects outsourced the demanding computational task of training the game playing agents to communities of computer strategy game enthusiasts. The participants would typically lend the project computer power directly by contributing hardware resources or by running a script provided on the Google Colab cloud platform.\nRecently, a variant of AlphaZero has been proposed and published as open source \cite{wu2019accelerating}. The open-source Go project ELF is also worth mentioning \cite{tian2019elf}.\n\nImpartial games are games in which the allowable moves depend only on the position and not on which of the two players is currently moving. The class of impartial games is an important subclass of combinatorial games \cite{berlekamp2001winning,berlekamp2002winning,berlekamp2003winning,berlekamp2004winning}. Impartial games include take-and-break games, subtraction games, heap games, and poset games. 
It includes children's games as well as mathematical games, including sprout, treblecross, cutcake, guiles, wyt queens, kayles, grundy's game, quarto, cram, chomp, subtract a square, notakto, and nim \cite{berlekamp2001winning,berlekamp2002winning}. Many impartial games have multiple variants. The analysis of impartial games is called nimber theory, and the Sprague\u2013Grundy theorem states that every impartial game is equivalent to a nim-heap \cite{berlekamp2001winning}. Thus the game nim plays a central role in the theory of impartial games, and a large class of impartial games can mimic nim (or parts of nim). From an AI point of view, it turns out that many impartial games are as hard as nim, or much harder (\textit{e.g.} node kayles and geography, which are PSPACE-complete \cite{schaefer1978complexity}). Nim is a game often played by children.\n\nDespite AlphaZero's groundbreaking advance for complex games, it turns out that there are games like nim that, from a human point of view as well as from a complexity point of view, are simple, but where AlphaZero style algorithms essentially seem to be unable to learn to play better than a randomly playing agent. On specific boards or towards the end of the game, the algorithm has enough resources to play well by essentially using exhaustive search. However, on larger nontrivial positions the policy network generally fails to provide any valuable information to guide the Monte Carlo Tree Search (MCTS) algorithm, and the value network fails to evaluate the board positions any better than random.\n\n Nim is played by two players who take turns removing counters from distinct heaps \cite{bouton1901nim, nowakowski1998games}. A player must remove at least one counter on each turn and may remove any number of counters, provided they all belong to the same heap. 
The goal of the game is to be the player who removes the last counter, \textit{i.e.} to leave the empty position to the opponent.\n\nNim is often classified as a mathematical game, and it has a well defined mathematical solution.\nFormally, the initial board of nim can be represented as an array: \[ [n_1,n_2,....,n_k]\]\n\nwhere $n_j \in \{0, 1, 2, \dots\}$ and $j=1, 2, \ldots, k$.\nA position in a game using that board can be represented as an array\n\[ [v_1, v_2, ...., v_k] \]\n\nwhere $v_j \in \{0,1,2,\dots\}$ and $v_j\leq n_j$ for $j \in \{1, 2, \ldots, k\}$. Often a nim position is specified without reference to a board; however, we always specify the initial position of the board, since the algorithm requires a fixed board size and each self-play game starts with the initial position of the board. Fig. \ref{fig:nim_board} demonstrates an example of an initial nim board, a board position during play and the final board position. \n\n\begin{figure}[h]\n\centering\n\includegraphics[width=0.8\textwidth]{figures\/Nim_board.eps}\n\caption{\label{fig:nim_board} The initial board consists of a series of heaps (\textit{aka} rows or piles) of counters (\textit{aka} lines or matches). The initial board, as shown in the left graph, is $[n_1,n_2,....,n_k]=[1,3,5,7,9]$. The two players take turns in removing counters, resulting in one of the positions in the game play, that is $[v_1, v_2, ...., v_k]=[1,2,4,4,3]$, as shown in the middle graph. In the usual version of the game, the player who removes the last counter(s) wins, as shown in the right graph where all the heaps are cleared. }\n\end{figure}\n\nThere are $n=\sum_{j} v_j$ legal moves from the position $[v_1,v_2,....,v_k]$. For example, the number of legal moves from the initial position of nim played on the board \[ A = [1,2,3,\ldots,9] \] is $45$. \nThe number of legal moves from the initial position of nim played on the board \[B=[1,2,3,\ldots,25] \] is $325$. 
This exceeds the number of legal moves in any Go position on the standard $19 \times 19$ board.\n\nThe maximal number of moves in nim on the initial board $[n_1,n_2,....,n_k]$ is $m = \sum_j n_j$, as each player needs to remove at least one counter on each move. Games of chess and Go typically last less than 300 moves \footnote{In chess a move usually includes a move by each player, and thus a typical chess game lasting $n$ chess moves in fact lasts $2n$ moves}.\nThus, nim on the board $B$ has a branching factor and game length comparable to Go and exceeding the branching factor and length of a typical chess game. \nNotice that the size of the game tree of nim typically hugely exceeds the number of possible positions (states), which is given by \[ \Pi_j (1+n_j)\]\n\nFor any nim position it is (mathematically) easy to determine which player will win and which winning moves are available to that player. The value (won or lost) of a nim position can be determined by calculating the binary digital sum of the number of counters in the heaps, \textit{i.e.}, the sum (in binary), neglecting all carries from one digit to another \cite{bouton1901nim}. Within combinatorial game theory this is commonly referred to as the \textit{nim-sum} (see \ref{sec:Impartial games} for more details).\nThis calculation can be shown to run in linear time using only logarithmic space.\n\nBased on theoretical considerations related to {\it the statistical neutrality} of the parity function \cite{thornton1996parity}, as well as some subtle and heuristic arguments that the defense has the ability to keep preventing the attacking player from forcing the game into readily predictable patterns, we conjectured that AlphaZero style learning (without the addition of special customized tricks and features) would in practice not be able to learn nim (and consequently also numerous other impartial games) on large boards. 
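As a minimal sketch (in Python) of how the nim-sum determines the value of a position and the winning moves, using the example boards above:

```python
from functools import reduce
from operator import xor

def nim_sum(position):
    """Binary digital sum (XOR, i.e. binary addition without carries) of
    the heap sizes; the player to move is lost iff the nim-sum is zero."""
    return reduce(xor, position, 0)

def winning_moves(position):
    """All moves (heap index j, new heap size) that leave the opponent a
    position with nim-sum zero; empty if the position is already lost."""
    s = nim_sum(position)
    return [(j, s ^ v) for j, v in enumerate(position) if (s ^ v) < v]

# On the board [1, 3, 5, 7, 9] of Fig. 1, the only winning move is to
# clear the heap of nine counters, since 1 ^ 3 ^ 5 ^ 7 = 0:
# winning_moves([1, 3, 5, 7, 9]) -> [(4, 0)]
# The mid-game position [1, 2, 4, 4, 3] has nim-sum 0, so it is lost for
# the player to move and winning_moves returns an empty list.
```

Each heap is scanned once and the XOR operates on the binary representations of the heap sizes, which is the linear-time, logarithmic-space computation mentioned above.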
\n\nBased on our preliminary experimental results from supervised learning, we expected that the policy network in AlphaZero style nim would be unable to properly learn to play high-quality nim on asymptotically large boards such as $[1,2,3,\ldots,n]$ for $n>100$. To our great surprise we discovered that the difficulty was much larger than anticipated and that the policy and value networks consistently failed to converge even on small boards, \textit{e.g.} $[1,3,5,7,9,11,13]$. \n\nIn games like chess, AlphaZero's value network fails to be perfect, which is typically not a serious issue as it is compensated by the search guided by the policy network. It turns out that imperfection is not necessarily a problem on some nim boards, since the policy network essentially can help the winning player to force the game into positions that do not require any accurate evaluations. However, there is no such luck in general, as there are two difficulties: \n\n\begin{enumerate}[label=(\arabic*), noitemsep]\n \item The policy network is unable to guide the MCTS better than random \label{enu:one}\n \item The value network is unable to evaluate a position better than random \label{enu:two}\n\end{enumerate}\n\nAccording to \ref{enu:one}, the network essentially cannot learn to select the relevant candidate moves necessary to identify winning or high-quality moves. \nAccording to \ref{enu:two}, even if the search is able to look many moves ahead, any evaluation is meaningless as it in general does not perform better than random. \n\nWe investigated the difficulties in \ref{enu:one} and \ref{enu:two} \nand found that if a small part of a position in an impartial game is blanked out, the prediction of the policy network and the evaluation of the value network typically cannot be better than pure random guessing. \n\nAs a comparison, the general evaluation of the networks might be wrong if we cover part of a Go, chess or shogi board. 
In general, however, the visible part of the board contains information that is positively correlated with the correct evaluation of the full board. For impartial games like nim, by contrast, any covering of the board that hides a small but unknown number of counters makes it impossible to evaluate the resulting position correctly. It is typically impossible to predict whether the position is good or bad (won or lost), as there is typically zero correlation between the visible part of a partly blanked out position and its correct evaluation. \nThis type of heuristic argument shows that even a small level of noise typically erases the correlations needed to bootstrap the positive feedback mechanism of an AlphaZero style algorithm.\n\nThe subclass of combinatorial games of so-called partisan games can occasionally pose issues similar to those of impartial games, \textit{e.g.} Dawson's chess, which in effect is an impartial game in disguise \cite{berlekamp2001winning}.\n\nTo better understand the difficulties of learning nim-like games, we decided to revisit the AlphaZero for chess, Go and shogi projects and consider positions where simple parity-like problems must be solved to correctly evaluate a position. Since chess is probably the most commonly known game in the English speaking world, and the official AlphaZero program is unavailable, we mainly used a highly trained Leela Chess Zero (LC0) engine. \n\nIn this paper we investigate and discuss many theoretical and practical issues and limitations. 
To summarize, our main contributions are:\n\begin{itemize}[noitemsep]\n \item Identify a class of strategic games that require nontrivial modifications of AlphaZero style algorithms, or of current RL algorithms in general, to work.\n \item Discover that the difficulty RL algorithms have in learning impartial games is much more dramatic than a theoretical analysis of the individual components, the value network (see section \ref{sec:value_network}), the policy network (see example 3 in section \ref{sec:policy_network}) or the parity-related issue (see section \ref{sec:low_complexity}), would suggest. Noise brought by these issues has a dramatically compounding negative effect, showing learning difficulties with fewer than ten piles instead of 50+ piles (see section \ref{sec:conclusion}).\n \item Propose two levels of mastery of an RL agent, champion and expert, which are essential in evaluating the performance of an agent (see section \ref{sec:differnt}). To the best of our knowledge these concepts are not mentioned in the literature.\n \item Experimentally demonstrate the robustness of the problems through an extensive list of experiments and show that fiddling with the hyperparameters does not have much effect on alleviating the difficulties. \n \item Briefly outline how one might expand the AlphaZero paradigm for future algorithms.\n\end{itemize}\n\nThe paper is structured as follows. In section \ref{sec:revisiting_alphazero} we revisit AlphaZero and LCZero and look behind some of the fantastic magic moves these chess programs have produced. The main point of this analysis is to identify the strengths and weaknesses of LCZero. LCZero's defects are often hidden and mainly appear behind the scenes, only becoming apparent when we look under the hood. However, we will see that the weaknesses include issues impacting both the value and policy networks. 
\n\nSection \\ref{sec:background} shows how many impartial games can be boiled down to the nim and concerns theoretical bottlenecks and general limitations for AI solving various two-player games with full information. We show that these results all follow well-known results in complexity theory. The section also includes reference to our experimental results on modelling the parity function with neural networks (NNs).\n\nIn section \\ref{sec:differnt}, we propose two types of optimality that lead to two levels mastery, namely champion and expert. We illustrate the distinction with one analogy and one special nim example we also tested experimentally.\n\nIn section \\ref{sec:prelimiary}, we present the empirical results of the performance of the value network on modelling parity function and the inability of the policy network to modelling nim-sum. In section \\ref{sec:reinforcement_learning}, we give an overview of the AlphaZero algorithms and the changes we made tailored to nim and demonstrate the difficulty of our AlphaZero style nim algorithm have in becoming an expert agent on large boards. We finish the paper with some general remarks, conjectures, and directions for further research.\\footnote{The code for the experiments in this paper is publicly available at: \\url{https:\/\/github.com\/sagebei\/Impartial-Games-a-Chanllenge-for-Reinforcement-Learning}}\n\n\\section{Revisiting AlphaZero and LCZero}\n\\label{sec:revisiting_alphazero}\nAlphaZero, which astonished the chess world, has essentially been replicated by LCZero. This section focuses on a highly trained version of LCZero. In general, LCZero can access the computational resources that the programs need to run Monte Carlo Tree Search (MCTS) simulations and replicate the moves made by AlphaZero chess. 
LCZero\\footnote{version: Last T60: 611246 (384 $\\times$ 30)} uses the neural network composed of $30$ blocks $\\times$ $384$ filters which are beyond the initial $20 \\times 256$ architecture of AlphaZero, and LCZero additionally employs the Squeeze-and-Excitation layers to the residual block and supports endgame tablebases \\cite{maharaj2021chess}, enabling it to possibly surpass the original AlphaZero algorithm. The older versions of LCZero running on a $20 \\times 256$ architecture are somewhat weaker than the later versions running on the $30 \\times 384$ architecture. \n\nFor comparing the moves, we also gained access to open-source Stockfish 14, which, like Stockfish 8 developed initially by Tord Romstad, Marco Costalba, and Joona Kiiski, and have been further developed and maintained by the Stockfish community. Unlike Stockfish 8, stockfish 14 also comes with an NNUE (Efficiently Updatable Neural Network) \\cite{nasu2018efficiently} for its evaluation and thus combines features of deep neural networks with traditionally handcrafted chess engines. Game playing strength is usually measured by Elo rating (see section \\ref{sec:results} for the detailed description). There has been an attempt to measure playing strength based on the quality of the played chess moves rather than on the results \\cite{regan2011intrinsic}. On the chess engine rating list rating (August 7, 2021) the latest iteration of Stockfish 14 has 3555, Stockfish 8 has 3375 and LCZero 3336. LCZero running on hardware similar to AlphaZero against a special fixed time (1 minute) for each move version of Stockfish 8 has been shown to lead to a similar dramatic achievement as AlphaZero.\n\nThe revolutionary and outstanding evaluations and moves have taken the chess world by storm. Often AlphaZero and LC0 would evaluate positions quite differently from human players or traditionally hand-coded chess engines. 
\n\nAlthough AlphaZero is not generally available, its detailed evaluation of some chess positions has been made public \cite{sadler2019game}. AlphaZero's use of its policy network, its evaluation networks and MCTS was explained by a detailed analysis of the position in Fig. \ref{fig:LC0}. A highly trained LC0 using the net T60: 611246 (384x30) straight away thought that d5 was the move deserving the most attention, which can be seen from the table of the policy network's prior probabilities. AlphaZero had a different training history, so the prior probabilities differed, but AlphaZero agrees that d5 is a promising move. After investigating 64 and 256 nodes\footnote{More specifically, the number of new nodes visited, which corresponds to the number of MCTS simulations}, AlphaZero is highly optimistic about the move, while LC0 is somewhat less happy but agrees that white is better. \n\n\begin{figure}[H]\n \centering\n \subfloat[chess board position]{\n \newgame\n \fenboard{r1b2qk1\/1pp3rp\/4pnp1\/1PP5\/p2PBp2\/P7\/1BQ2P2\/K1R1R3 w - - 0 1}\n \scalebox{0.7}{\showboard}\n }\n \hspace{1em}\n \subfloat[Prior probabilities for the top moves]{\n \begin{tabular}{lcccccccc}\n \toprule\n \textbf{Move} & \bishop d3 & \bishop f3 & c6 & d5 & \bishop g2 & f3 & \bishop h1 & \queen c4\\\n \midrule\n \textbf{Prior prob (AlphaZero)} & 29.77\% & 18.82\% & 16.15\% & 10.21\% & 4.75\% & 3.5\% & 4.75\% & 1.2\% \\\n \midrule \n \textbf{Prior prob (LCZero)} & 6.01\% & 12.36\% & 16.27\% & 23.13\% & 1.74\% & 3.73\% & 1.41\% & 8.68\%\\\n \bottomrule\n \end{tabular}\n \n }\n \hspace{1em}\n \subfloat[AlphaZero and LCZero win probabilities after MCTS]{ \begin{tabular}{lccccccc}\n \toprule\n \textbf{Move} & \bishop d3 & \bishop f3 & c6 & d5 & \bishop g2 & f3 & \bishop h1 \\\n \midrule\n \textbf{Win prob (AlphaZero 64 nodes)} & 60.1\% & 64.5\% & 77.3\% & 87.1\% & 61.6\% & 67.3\% & 61.6\% \\\n \midrule \n 
\textbf{Win prob (AlphaZero 256 nodes)} & 60.1\% & 64.5\% & 77.7\% & 83.1\% & 61.6\% & 67.3\% & 61.6\% \\\n \n \midrule \n \textbf{Win prob (LCZero 64 nodes)} & 62.8\% & 62.8\% & 71.2\% & 71.6\% & 55.7\% & 55.7\% & 55.7\% \\\n \midrule \n \textbf{Win prob (LCZero 256 nodes)} & 59.0\% & 62.2\% & 63.3\% & 67.8\% & 50.0\% & 58.2\% & 50.0\% \\\n \bottomrule\n \end{tabular}\n }\n \hspace{1em}\n \subfloat[Comparison after MCTS. The move d5 was considered to be the best in all cases]\n { \begin{tabular}{lccccc}\n \toprule\n \textbf{Nodes visited} & 64 & 256 & 1024 & 65536 & 4194304 \\\n \midrule\n \textbf{AlphaZero eval} & 65.7\% & 74.1\% & 71.6\% & 67.9\% & 73.5\% \\\n \midrule \n \textbf{LCZero eval} & 70.1\% & 65.0\% & 67.4\% & 68.3\% & 73.1\% \\\n \bottomrule\n \end{tabular}\n }\n \n \caption{\label{fig:LC0} LCZero and AlphaZero very quickly agree that d5 is the best move, and though both engines discover various defensive resources, they reach similar evaluations.}\n\end{figure}\n\n\nAt deeper MCTS search, AlphaZero begins to discover various defensive resources, so its win probability begins to drop. The win probability for a position $s$ is calculated as $(0.5 + v\/2) \times 100$\%, where $v$ is the scalar output of the value network with position $s$ as its input. When more nodes are examined, the win probabilities climb up again. Eventually, after 4194304 nodes, the two engines had almost identical evaluations with win probabilities of 73.5\% and 73.1\%, respectively.\nIn general, LC0 plays very similarly to AlphaZero, and it is fair to say that AlphaZero's play has been fully replicated by LC0.\n\n\nThe quality of evaluation of AlphaZero for chess is discussed in great detail in \cite{sadler2019game}. LC0 without search (\textit{i.e.} when only one node is visited by the MCTS algorithm) selects moves that are remarkably good, and it is occasionally able to win games against strong chess players. 
\n\nWhen we compare a move made by a human with the one selected by LCZero, the human typically looks at far fewer positions, but human pattern recognition sometimes spots features that are missed by LCZero's value network. A human grandmaster considers at most a few hundred moves, \textit{i.e.} significantly fewer moves than LCZero, which in turn considers dramatically fewer moves than conventional chess programs. As an example to illustrate the human superiority, but also to understand limitations that become relevant for judging impartial games, consider the position in Fig. \ref{fig:LC1}. Any decent chess player can \"see\" intuitively - essentially without any calculation - that white has a won position leading to mate (1.\queen g8+, \rook g8 and 2.\knight f7+ mate). But the policy network typically (it is a stochastic process) first leads the program slightly astray, and it spends time investigating the positions arising after 1.\knight f7+. It is important to stress that LCZero finds the winning sequence in less than a millisecond. The point made here is that neither the value network nor the policy network in general evaluates positions completely accurately. 
This fact is crucial for understanding the limitations for impartial games, where even the slightest change in position (or noise) completely wipes out any positive correlation between the policy and position evaluations and the correct ones.\n\n\\begin{figure}[h]\n \\centering\n \\subfloat[board position]{\n \\newgame\n \\fenboard{1rr4k\/6pp\/7N\/3Q4\/8\/qPp5\/P1P5\/1K6 w - - 0 1}\n \\scalebox{0.8}{\\showboard}\n }\n \\hspace{0.1em}\n \\subfloat[LCZero evaluations for the top 3 moves]{\n \\begin{tabular}{lccc}\n \\toprule\n \\textbf{Move} & \\knight f7 & \\queen g8 & \\queen d8 \\\\ \n \\midrule\n \\textbf{Prior prob} & 60.69\\% & 15.96\\% & 2.50\\% \\\\\n \\midrule\n \\textbf{V-value} & -0.276 & 0.588 & -0.968 \\\\\n \\midrule\n \\textbf{Win prob} & 36.2\\% & 79.4\\% & 1.6\\% \\\\\n \\midrule\n \\textbf{Q-value (15 nodes)} & -0.075 & 1 & 0 \\\\\n \\midrule\n \\textbf{Win prob (15 nodes)} & 48.5\\% & 100\\% & 0\\% \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{2em}\n }\n \\caption{\\label{fig:LC1} White has a forced win in two moves, in which he sacrifices the queen before mating with his knight. It is not surprising that LCZero's evaluation with no search (\\textit{i.e.} with just one node visited) is unable to reach the conclusion that white has a forced win. Only after 11 nodes does the program jump away from investigating lines beginning with \\knight f7 and switch to investigating the winning move \\queen g8. And only at node 15 does the program find the forced mate.}\n\\end{figure}\n\nThe next example, Fig. \\ref{fig:LC2}, illustrates the difficulty in handling parity-related problems. In the right-hand part of the position, any player who moves would lose the game. The situation on the left side of the board is equivalent to the nim position $[3,2]$ consisting of two heaps with three and two counters, respectively, where it is only possible to remove one or two counters from a heap. This version of nim is sometimes referred to as bogus nim. 
The winning move is to remove one counter from the heap with three counters. Analogously, the winning move for white is c3-c4. But LCZero's policy network suggests a2-a4, which disastrously leads to a losing position. As in the previous example, it is important to stress that LCZero finds the winning sequence almost immediately; the point is that neither the value network nor the policy network is in general able to evaluate positions completely accurately.\n\n\\begin{figure}[H]\n \\centering\n \\subfloat[board position]{\n \\newgame\n \\fenboard{6k1\/2p5\/5P1P\/p7\/8\/2P2p1p\/P7\/6K1 w - - 0 1}\n \\scalebox{0.8}{\\showboard}\n }\n \\hspace{0.1em}\n \\subfloat[LCZero evaluations for the top 4 moves]{\n \\begin{tabular}{lcccc}\n \\toprule\n \\textbf{Move} & a4 & c4 & a3 & f7 \\\\ \n \\midrule\n \\textbf{Prior prob} & 50.4\\% & 20.7\\% & 15.4\\% & 4.4\\% \\\\\n \\midrule\n \\textbf{V-value} & -0.632 & 0.806 & -0.817 & -0.88 \\\\\n \\midrule\n \\textbf{Win prob} & 18.4\\% & 90.3\\% & 9.2\\% & 6\\% \\\\\n \\midrule\n \\textbf{Q-value (3 nodes)} & -0.632 & 0.806 & -0.248 & -0.248 \\\\\n \\midrule\n \\textbf{Win prob (3 nodes)} & 18.4\\% & 90.3\\% & 37.6\\% & 37.6\\% \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{2em}\n }\n \n \\caption{\\label{fig:LC2} A chess position that mimics the nim position $[3,2]$. Neither black nor white would like to move on the right-hand side of the board. A simple analysis concludes that white has to play c3-c4, which is winning. Any other move leads to a loss. The LC0 policy network gives the move a2-a4 a score of 50.4\\%. The second most promising move is c3-c4, which scores 20.7\\%. The position's Q-value is -0.083, which corresponds to a win probability of 45.5\\% and indicates a slight advantage for black, while in fact white is winning.\n Notice that LC0's value network already judges the position after c4 as very favourable for white (which we find quite impressive). 
Though it is a stochastic process, LC0's MCTS typically needs to investigate only a few nodes before it considers c3-c4, which it then prefers straight away.}\n\n\\end{figure}\n\nAlthough LCZero is equipped with remarkably strong policy and value networks, in general it cannot accurately evaluate positions related to parity issues, including \"zugzwang\", \"waiting moves\", \"triangle manoeuvres\", etc., which are common themes in chess. This, however, is not an issue for AlphaZero or LCZero, as these issues are effectively handled by the MCTS. However, some examples show that the policy network might occasionally fail to properly consider crucial key moves.\n\nAs an example, consider the board position in Fig. \\ref{fig:LC3}, which occurred in a game between Stockfish 12 and LCZero. The white player can force checkmate in a sequence of 5 moves, starting with \\rook c2. However, the policy network gives the move \\rook a5+ the highest prior probability. The serious problem is that LC0's policy network fails to guide the MCTS properly, and after more than 1 million nodes LC0 was unable to find the winning sequence. The failure to find the winning sequence is partly a drawback of using MCTS; alpha-beta search, which is used almost universally in conventional chess engines, typically finds the forced mate almost instantly. 
\n\n\\begin{figure}[H]\n \\centering\n \\subfloat[board position]{\n \\newgame\n \\fenboard{6r1\/5p2\/1r6\/2kpPb2\/Rp1p4\/3P2P1\/1R3PK1\/3B4 w - - 0 1}\n \\scalebox{0.8}{\\showboard}\n }\n \\hspace{0.1em}\n \\subfloat[LCZero evaluations for the top 4 moves]{\n \\begin{tabular}{lcccc}\n \\toprule\n \\textbf{Move} & \\rook a5 & \\rook c2 & \\bishop e2 & \\rook b3 \\\\ \n \\midrule\n \\textbf{Prior prob} & 35.9\\% & 16.5\\% & 10.3\\% & 4.7\\% \\\\\n \\midrule\n \\textbf{V-value} & 0.29 & 0.27 & -0.005 & 0.06 \\\\\n \\midrule\n \\textbf{Win prob} & 64.5\\% & 63.5\\% & 49.8\\% & 53.0\\% \\\\ \n \\bottomrule\n \\end{tabular}\n \\vspace{3em}\n }\n \\caption{\\label{fig:LC3} LCZero played \\bishop f5, which is a blunder: it fails to realize that white now has a forced mate in 5 moves. In the diagram position, the highly trained LCZero fails to find (even after having looked at more than a million nodes) the forced mate: 1.\\rook c2+, \\king b5 2.\\rook a5+!!, \\king a5 3.\\rook a2+, \\king b5 4.\\bishop a4+, and now 4.-\\king c5 5.\\rook c2 mate, or 4.-\\king a5 or 4.-\\king a6 followed by 5.\\bishop c6 mate.}\n\\end{figure}\n\n\\section{Background and related work}\n\\label{sec:background}\n\n\\subsection{Impartial games and nim}\n\\label{sec:Impartial games}\nAn impartial game is a two-player game in which players take turns to make moves, and the actions available from a given position do not depend on whose turn it is. A player loses if they cannot make a move on their turn (\\textit{i.e.} a player wins if they move to a position from which no action is possible). \n\nIn impartial games, all positions can be classified into losing and winning positions, in which the player to move has no winning move or at least one winning move, respectively. \n\nThe Sprague-Grundy theorem states that every (finite) impartial game is equivalent to a one-heap game of nim. More specifically, each board position in an impartial game has a nimber, which is also called the Sprague-Grundy value. 
For every position the nimber $G(s)$ is defined as \n\n\\begin{equation}\n G(s) = \\text{mex}(\\{G(s') : s' \\in N(s)\\}) \n\\end{equation}\n\n\\noindent\nwhere $s' \\in N(s)$ denotes the states $s'$ that can be reached from $s$ in one legal move, and the mex function returns the minimum excluded value of a set, \\textit{i.e.} the least non-negative integer not in the set \\cite{beling2020pruning}.\nA position with Sprague-Grundy value $G(s)$ is equivalent to a nim heap with $G(s)$ counters. A position is lost exactly when its Sprague-Grundy value is $0$.\n\nThe Sprague-Grundy value of a nim position $[v_1,v_2,\\ldots,v_k]$ is the digital sum of the heap sizes $v_1,v_2,\\ldots,v_k$ written in binary, neglecting all carries. In combinatorial game theory this sum is often called the nim-sum.\nThus it is computationally easy to decide whether a given nim position is won or lost. \n\nThe analysis of impartial games is closely linked to the nim-sum, which in turn is linked to the parity function. Thus the parity function implicitly (or explicitly) plays a central role in the theory of impartial games, as such games are often able to mimic nim or parts of nim. To illustrate this, consider the impartial game called sprout \\cite{berlekamp2001winning, gardner1967mathematical}, \ninvented by John Conway and Michael Paterson.\n\nPositions in sprout typically have nimber values $0,1,2$ and $3$ \\cite{berlekamp2003winning}. The position in Fig. \\ref{fig:sprout_a} has nimber value $3$. The sprout position in Fig. \\ref{fig:sprout_b} also has nimber value $3$, but it has been modified so that it becomes an \"isolated land\" that cannot interact with anything on the outside. It follows that a sprout starting position consisting of $n$ copies of the gadget in Fig. \\ref{fig:sprout_b} can mimic any nim position that can arise from a starting position with $n$ heaps of $3$ counters. 
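The mex recursion and the nim-sum above can be sketched directly (a minimal illustration; the recursion below encodes plain nim, where any positive number of counters may be removed from a single heap):

```python
from functools import lru_cache, reduce

def mex(values):
    """Minimum excluded value: the least non-negative integer not in `values`."""
    s = set(values)
    g = 0
    while g in s:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(heaps):
    """Sprague-Grundy value of a nim position, via the mex recursion."""
    successors = []
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):  # remove 1..h counters from heap i
            successors.append(grundy(heaps[:i] + (h - take,) + heaps[i + 1:]))
    return mex(successors)

def nim_sum(heaps):
    """Binary digital sum (bitwise XOR) of the heap sizes."""
    return reduce(lambda a, b: a ^ b, heaps, 0)

# The recursion agrees with the closed-form nim-sum:
assert grundy((3, 2)) == nim_sum([3, 2]) == 1   # first player wins
assert grundy((2, 2)) == nim_sum([2, 2]) == 0   # a lost position
```

For plain nim the recursion and the XOR formula coincide, which is what makes nim computationally easy while general impartial games are not.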
\n\n\\begin{figure}[H]\n\\centering\n\\begin{subfigure}[b]{0.25\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/sprout_small.eps}\n\\caption{sprout position}\n\\label{fig:sprout_a}\n\\end{subfigure}\n\\hspace{3em}\n\\begin{subfigure}[b]{0.35\\textwidth}\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures\/sprout.eps}\n\\caption{sprout position serving as a gadget}\n\\label{fig:sprout_b}\n\\end{subfigure}\n\\caption{\\label{fig:Sprout} Nim played on a board $[3,3,3,\\ldots,3]$ with $n$ heaps is equivalent to sprout with $n$ copies of the sprout position (gadget) in (b) (see the diagram on p. 599 in \\cite{berlekamp2003winning} for details).}\n\\end{figure}\n\nUnlike nim, some impartial games cannot be solved by a simple calculation. This follows from the fact that some impartial games are PSPACE-complete (see Section \\ref{sec:psapce_nexptime} for more details).\n\nSo far, algorithms for impartial games have been handcrafted programs that use ideas akin to alpha-beta search but are specially designed around the mex operation \\cite{viennot2007further}. Recently, \\cite{beling2020pruning} proposed a novel method that prunes the search tree according to the node values calculated by the mex function on short impartial games, like nim, chomp, and cram, but this approach does not in general scale to large board sizes. \n \nThese conventional programs, which utilise the mex operation and the Sprague-Grundy values, could be used as benchmarks to test whether new RL-based algorithms can outperform conventional algorithms. \n\n\n\\subsection{Intrinsic complexity bottlenecks}\n\\label{sec:psapce_nexptime}\nComputational complexity deals with fundamental limits and bounds of algorithmic tasks. In this section we review classical complexity classes, results and conjectures related to the asymptotic complexity of games. 
\n\nThe time complexity of an algorithm is the amount of time it takes a computer to run it, and it is commonly estimated by counting how many elementary operations the algorithm performs. An algorithm is said to run in $T(n)$ time if its running time is upper bounded by $T(n)$ in the size $n$ of the input to the algorithm. Since we are not interested in actual hardware, the complexity is commonly expressed using big-O notation. This way, the complexity of an algorithm becomes independent of the actual speed of the computer and is frequently measured as the number of steps needed by the algorithm. \nAn algorithm is said to be polynomial-time (to belong to P) if its running time is upper bounded by a polynomial expression in the size of the input, that is, $T(n) = O(n^k)$ for some positive constant $k$. \n\nAn algorithm is said to be exponential-time (to belong to EXPTIME) if $T(n)$ is upper bounded by $O(2^{n^k})$ for some constant $k$.\n\nIt is essential to keep in mind that complexity classes like P and EXPTIME are asymptotic notions. Problems in P are often considered tractable, while problems that require exponential time are considered intractable. \n\nA non-deterministic algorithm is an algorithm that can make certain guesses at certain points during its computation. Such algorithms are designed so that if they make the right guesses at all the choice points, then they can solve the problem within the required time bound. We can think of a non-deterministic computation as an algorithm with access to a perfect AI module that at each choice point always returns the optimal choice.\n\nNP is the class of decision problems that can be solved in polynomial time by a non-deterministic algorithm. So we can consider NP as the class of problems that can be solved in polynomial time when given access to a perfect (idealised) built-in AI that at each branch point always recommends the most economical choice. 
NEXPTIME denotes the class of problems that can be solved in exponential time by a non-deterministic algorithm. \n\nA complexity class might be based on space rather than time. PSPACE is the class of decision problems that can be solved by an algorithm using memory bounded by a polynomial expression in the size of the input.\n\nA decision problem $A$ is said to be complete for a set of decision problems $\\mathbf{B}$ if $A$ is a member of $\\mathbf{B}$ and every problem in $\\mathbf{B}$ can be reduced to $A$. Thus, if the problem $A$ can be solved by an algorithm using certain computational resources, it would essentially be possible to solve any problem in $\\mathbf{B}$ using the same computational resources. \n\nRuntime bounds apply to all algorithms, including AlphaZero-style learning algorithms that use neural networks; here, however, the training process needs to be considered as part of the algorithm. Learning algorithms are typically probabilistic, and the criterion for success is typically not measured against 100\\% accuracy. \n\nIn practice we might want to disregard the training time and ask for the algorithm's asymptotic run-time, given access to perfectly trained (or pre-trained) neural networks. In computational complexity theory, an advice string (advice function) is an extra input that is allowed to depend on the length $n$ of the input, but not on the input itself. A decision problem is in the complexity class $P\/f(n)$ if there is a polynomial-time algorithm (Turing machine) $M$ with the following property: for any $n$, there is an advice string $A$ of length $f(n)$ such that, for any input $x$ of length $n$, the machine $M$ correctly decides the problem on the input $x$, given $x$ and $A$. The class P\/poly consists of decision problems (classification problems) that can be solved in polynomial time by use of polynomial advice functions. 
A (pre-trained) neural network can serve as an advice function, as there is no requirement on the quality of the advice given. For the algorithm to be able to run in polynomial time, it needs to be able to evaluate the NN in polynomial time, so all NNs used by the algorithm need to have polynomial size.\nThus, the class P\/poly includes problems that can be solved in polynomial time using pre-trained (polynomial-size) neural networks for free. \n\nChess, Go and shogi on $n \\times n$ boards have all been shown to be NEXPTIME-hard. More specifically, the decision problem of determining whether a given position is a forced win (\\textit{i.e.} guarantees the player who moves from the position a win with optimal play) is NEXPTIME-complete for chess, Go and shogi \\cite{fraenkel1981computing, robson1983complexity, adachi1987shogi}.\nThus chess, Go and shogi positions on $n \\times n$ boards cannot be correctly evaluated by any sub-exponential-time algorithm, and in particular the computational time needed for an algorithm to learn to play generalised chess (or Go or shogi) to a level of perfection would be exponential. \nTherefore, from a theoretical point of view, there is no hope that AI algorithms in practice will be able to learn to perfectly master generalised versions of complex games like chess, Go or shogi on large boards. In fact, given the widely believed but unproven conjecture that NEXPTIME $\\not \\subseteq$ P\/poly, it would be impossible for any polynomial-time algorithm to solve a NEXPTIME-complete decision problem (like generalised chess, Go or shogi), even when given access to polynomial-size advice functions, \\textit{e.g.} polynomial-size pre-trained networks \\cite{vsima2003general}. But these theoretical results are asymptotic and do not say anything about the possibility for AI systems to learn to play the games with perfection on boards of standard size, \\textit{e.g.} $8 \\times 8$ for chess, $19 \\times 19$ for Go, and $9 \\times 9$ for shogi. 
\n\nDetermining whether a position in impartial games like geography and node kayles is a win is PSPACE-complete \\cite{schaefer1978complexity}. So unless PSPACE $\\subseteq$ P\/poly, or something to that effect, from a theoretical point of view there is no hope that AI algorithms, in general, will be able to master (in polynomial time) impartial games perfectly, even if allowed unlimited time for training \\cite{vsima2003general}. \n\n\\subsection{Low complexity bottlenecks: Parity and low-level thinking}\n\\label{sec:low_complexity}\n\nIssues related to the parity function have a long history in AI. In \\cite{mccarthy1964tough} McCarthy suggested that the mutilated chessboard problem is a tough nut for proof procedures. The problem is: given a $2n \\times 2n$ chessboard with two diagonally opposite squares missing, show that this board cannot be covered with non-overlapping dominoes. \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{figures\/MutilatedBoard.eps}\n\\caption{\\label{fig:MutilatedBoard} Consider an $8 \\times 8$ chessboard, where the top-right and bottom-left squares have been removed. Is it possible to tile this mutilated chessboard with $2 \\times 1$ dominoes? }\n\\end{figure}\n\nHumans, using high-level thinking, typically manage to solve this problem - after some trial and error - by noticing that the number of white squares and the number of black squares differ. \nMcCarthy conjectured that this problem is challenging when using low-level non-abstract reasoning. It was only 35 years later that this was formally proved by Dantchev and Riis for the so-called resolution proof system \\cite{dantchev2001planar}. \n\nThe issue underlying the mutilated chessboard problem is directly related to principles based on simple counting, like the parity principle. 
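The counting argument that humans use can be made concrete in a few lines (a small illustration; the colouring convention is the standard chessboard one, and the function name is ours):

```python
def colour_counts(n=8, removed=((0, 0), (7, 7))):
    """Count the two square colours of an n x n board with some squares removed.

    Squares where (row + col) is even are one colour, odd the other.
    Diagonally opposite corners always share a colour.
    """
    even = odd = 0
    for r in range(n):
        for c in range(n):
            if (r, c) in removed:
                continue
            if (r + c) % 2 == 0:
                even += 1
            else:
                odd += 1
    return even, odd

# Removing two diagonally opposite corners leaves 30 squares of one colour
# and 32 of the other; since each domino covers one square of each colour,
# no perfect tiling can exist.
print(colour_counts())  # (30, 32)
```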
Basic counting principles have played a prominent role in propositional proof complexity and in more abstract formal systems that capture \"low complexity\" reasoning \\cite{riis1994independence,beame1998more}. \nFor humans, counting is a straightforward process. Even young children understand that the number of objects is an invariant, so recounting should (in principle) lead to the same number. In mathematics, a closely related principle is referred to as the pigeon-hole principle. In \\cite{riis2001complexity} it was shown that combinatorial principles (in a technical sense that can be formalised) either are easy and have polynomial-size proofs (so-called tree-like resolution proofs), or are very hard (like the parity principle) and require exponential-size proofs. The exponentially hard principles are exactly those principles that fail in infinite structures (all in a sense that can be made precise). The pigeon-hole principle fails for infinite sets, as it is possible to bijectively map an infinite set onto a proper subset of itself. Thus, according to the main results in \\cite{riis2001complexity}, it follows that this principle requires exponentially large (tree-like) resolution proofs. The intractability of the pigeon-hole principle (for the resolution proof system) had already been established in \\cite{haken1985intractability}. For an AI that is unable to grasp principles like the pigeon-hole principle (without human help), it seems hard to \"solve\" the mutilated chessboard problem. Maybe this was already part of McCarthy's intuition when he originally posed his conjecture in 1964. \n\nBoolean circuits are very similar to neural networks, but can only handle Boolean 0\/1 values. 
In this setting the parity function is\ndefined as\n\\begin{equation}\n f(x_1, \\ldots, x_n) = \\sum_{i=1}^n x_i \\bmod 2 \n\\end{equation}\nwhere $x_1, x_2, \\ldots, x_n \\in \\{0,1\\}$.\nIn \\cite{haastad1987computational} it was shown that the parity function cannot be computed by so-called constant-depth, sub-exponential Boolean circuits. There is also a long list of theoretical work related to Boolean circuit complexity that supports the view that the parity function is hard to learn and to correctly generalise to unseen data \\cite{linial1993constant}.\n\nMany types of neural networks can in principle compute the parity function. It is possible to handcraft and artificially tune weights so that the NN computes the parity function (see section \\ref{sec:parity_function_nerual_network} for details). To better understand the issue, it is essential to keep a few facts in mind:\n\nIf we pick a random Boolean function $f$ uniformly from the set of $2^{2^n}$ Boolean functions, that function can in general not be learned. To illustrate this, assume we are given an array of (distinct) inputs $\\boldsymbol{X} = (\\bar{x}_1,\\bar{x}_2, \\ldots ,\\bar{x}_s)$ with the corresponding list $\\boldsymbol{y}=(y_1,y_2, \\ldots , y_s)$ of function values, where $y_j=f(\\bar{x}_j),\\quad j=1,2,\\ldots, s$. For any new unseen input $\\bar{x} \\not\\in \\boldsymbol{X}$ there is no relation between the already seen data $(\\boldsymbol{X},\\boldsymbol{y})$ and $f(\\bar{x})$. 
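The counting argument can be checked exhaustively for tiny $n$ (a toy illustration; for $n = 2$ there are $2^{2^2} = 16$ Boolean functions, represented below as truth tables):

```python
from itertools import product

n = 2
inputs = list(product([0, 1], repeat=n))          # the 4 input points
functions = list(product([0, 1], repeat=2 ** n))  # all 16 truth tables

# Fix s = 3 observed points that agree with parity; the unseen point's
# value splits the consistent functions exactly in half.
seen, unseen = inputs[:3], inputs[3]
parity = lambda x: sum(x) % 2
consistent = [f for f in functions
              if all(f[inputs.index(x)] == parity(x) for x in seen)]
zeros = sum(1 for f in consistent if f[inputs.index(unseen)] == 0)
ones = len(consistent) - zeros
print(len(consistent), zeros, ones)  # 2 1 1
```

Here $2^{2^n - s} = 2^{4-3} = 2$ functions fit the observations, one predicting 0 and one predicting 1 on the unseen point, exactly as the counting argument states.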
To see this, notice that there are exactly\n$2^{2^n-s}$ Boolean functions that fit the observations $(\\boldsymbol{X},\\boldsymbol{y})$, and exactly half, \\textit{i.e.} $2^{2^n-s-1}$, have $f(\\bar{x})=0$ while the remaining $2^{2^n-s-1}$ have $f(\\bar{x})=1$.\nThus, if we are given data that agree with the parity function for $s$ inputs, there are precisely as many Boolean functions with the correct generalisation as there are with the wrong generalisation.\n\nWhy, then, do we believe that the parity function is the correct way to generalise data $(\\boldsymbol{X},\\boldsymbol{y})$ that happen to fit the parity function? There is no doubt that a human investigating the data $(\\boldsymbol{X},\\boldsymbol{y})$ will after some time spot the pattern and then be in no doubt about how to generalise the data. That is essentially because we naturally apply Occam's razor, and the phenomenon is also related to the concept of Kolmogorov complexity. We feel the parity function is a better guess as it has a short description. According to Kolmogorov's philosophy, it is a less \"random\" function as it has a short description. However, if the underlying prior probability distribution is uniform, though it might feel counterintuitive, there is no logical mathematical reason to apply Occam's razor, as no generalisation of the partial data matching those of the parity function is more likely than any other. \n\nHumans find it easy to generalise the parity pattern due to its elementary description. To some extent, neural networks apply Occam's razor automatically, as they favour functions that can be computed by NNs with certain size and depth bounds. Given training data $(\\boldsymbol{X},\\boldsymbol{y})$, a new data point is then not equally likely to have output 0 or 1. However, most NN models do not take this into account. 
Even if training happens to produce a NN that computes all training data correctly (\\textit{e.g.} obtains 100\\% accuracy), the NN might still overfit and not generalise \"correctly\" to unseen data. \n\nSome researchers have argued that the parity function is a mathematical function rather than a naturally occurring function, and they use this to explain why it is so hard to generalise. This kind of argument has sparked off some debate among AI researchers \\cite{thornton1996parity, damper1998parity}. \n\nAny Boolean function, including the parity function, can be computed by a NN with only two layers. However, this requires neural networks whose size grows exponentially as the number $n$ of variables goes to infinity. Our arguments and experiments would break down if we allowed exponentially large NNs and exponentially large training sets.\n\nOne heuristic way we have been thinking about parity, nim and impartial games in general can be expressed as an informal, non-rigorous argument that has been guiding our intuition. It falls outside the scope of this paper, but it might be possible to make the argument more rigorous, for example by replacing the notion of continuity with a suitable approximate notion. \n\n\\bigskip\n\\noindent\n{\\bf Informal argument:}\nIn mathematics, a continuous function has the property that if we keep the inputs in a small neighbourhood of a point, the output values will stay in a small neighbourhood of the original output. Neural networks are, in essence, continuous (even for discrete classification problems), as the probabilities they return are stable, so small input changes only lead to small changes in the output. Back-propagation, the calculation of the derivative of the weights with respect to the loss in a neural network, is essentially a differentiable (and thus continuous) process. Many learning tasks are continuous because small changes to the input lead to small changes in the output. Such learning tasks might be well suited for neural networks. 
Intuitively, games like chess, Go, and shogi are mainly continuous, but there are typically some points of discontinuity where the correct evaluation of a position might jump with a slight change in input. The discontinuity is not a serious issue, as the policy and value networks can still provide sufficient guidance to the MCTS. However, many impartial games - and certainly nim - are in some sense ubiquitously discontinuous, as a slight change of the position might dramatically change the corresponding correct evaluation. \n\n\n\\subsection{Practical challenge: Parity and neural networks}\n\\label{sec:parity_function_nerual_network}\nNim board positions can be represented by a list of entries, each being 1, 0 or -1, where 1 denotes a counter on the board, 0 denotes a counter that has been removed from the board, and -1 is a token separating the heaps. To accommodate this nim board representation, we define a version of the parity function (sometimes called parity with noise) as follows \\cite{banino2021pondernet}. Let $n$ be any positive integer. Given input $(x_1, \\ldots ,x_n) \\in \\{0, 1, -1\\}^n$, the function is defined as \n\\begin{equation}\n f(x_1, \\ldots ,x_n) = \\bigl| \\{\\, i : x_i = 1 \\,\\} \\bigr| \\bmod 2 \n\\end{equation}\nwhere $n$ is the length of the input. Thus, the function output is either 0 or 1, indicating whether the input contains an even or odd number of 1s.\n\nNeural networks are the pillars of modern artificial intelligence (AI). However, the parity function has turned out to be difficult to learn and generalise for a variety of neural network models. \\cite{al2005neural} shows that MLPs and simple RNNs trained with gradient descent can learn to model the training data, but fail to generalise to unseen data. 
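A direct implementation of this parity-with-noise variant, counting only the 1-entries, can be sketched as follows (a minimal illustration; the function name is ours):

```python
def noisy_parity(xs):
    """Parity of the number of 1s in a sequence over {1, 0, -1}.

    1 marks a counter still on the board, 0 a removed counter,
    and -1 a separator token between heaps; 0s and -1s are ignored.
    """
    return sum(1 for x in xs if x == 1) % 2

# The nim position [3, 2] encoded with a separator token:
board = [1, 1, 1, -1, 1, 1]
print(noisy_parity(board))  # 1  (five 1s: odd)
```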
\n\nThe majority of prior works focused on constructing specific neural networks with fixed weight parameters, mainly Multilayer Perceptrons (MLPs) or RNNs, dedicated to solving the parity problem \\cite{hohil1999solving, liu2002n, franco2001generalization, wilamowski2003solving}. An RNN with 3 neurons and 12 frozen parameters can compute the XOR function perfectly. These artificially hard-wired neural networks can generalize to all seen and unseen patterns without training or adaptation \\cite{al2005neural}. However, it is impractical to set the weight parameters by hand for neural networks modelling unknown data distributions.\n\nAs, to the best of our knowledge, no prior experiments systematically investigating and comparing the performance of different neural networks trained on bitstrings of varying length to model the parity function are available in the literature, we performed our own experiments. Their results also show that neural networks, like simple Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks, are capable of modelling the parity function \\textit{perfectly} on short bitstrings where the number of bits is less than 100, but it is intractable for them to learn the parity function when the length of the bitstrings is more than about 100 (see section \\ref{sec:value_network}). \n\nRNN is an umbrella term that incorporates the vanilla (simple) RNN, the Bidirectional RNN (BRNN), the Gated Recurrent Unit (GRU), the LSTM and a wide range of their variations, like the Memory-Augmented RNN (MRNN), which enhances RNNs' ability to handle sequential data with long dependencies \\cite{zhao2020rnn}. In \\cite{zhao2020rnn} it was argued that RNNs and LSTMs are incapable of processing long time-series data that requires persistent memorization, which is a disadvantage when processing long bitstrings, as flipping any single bit alters the parity, and furthermore a tiny error in the memory might incur disastrous repercussions. 
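Hand-wiring a tiny network for XOR is indeed straightforward (a classical two-layer construction with step activations; the weights below are one of many possible choices and not those of the cited works):

```python
def step(z):
    """Heaviside step activation."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """Two-layer perceptron with frozen weights computing XOR.

    Hidden unit 1 detects OR(x1, x2), hidden unit 2 detects AND(x1, x2);
    the output fires when OR holds but AND does not.
    """
    h_or = step(x1 + x2 - 0.5)    # weights (1, 1), bias -0.5
    h_and = step(x1 + x2 - 1.5)   # weights (1, 1), bias -1.5
    return step(h_or - h_and - 0.5)  # weights (1, -1), bias -0.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # prints the XOR truth table
```

Such a network generalises perfectly precisely because its weights were chosen from the known structure of the target function, which is exactly what cannot be done for an unknown data distribution.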
This discovery aligns with our results: the longer the bitstrings on which the neural networks are trained, the harder it is for them to learn to model the parity function.\n\nThe parity of bitstrings is permutation invariant by nature, as changing the order of the bits does not affect the parity. RNNs, despite being dependent on the order of the input, can be regularized towards permutation invariance. \\cite{cohen2020regularizing} shows that RNNs with regularization applied towards permutation invariance can simulate the correct parity function for bitstrings of length up to 100. However, our results (see section \\ref{sec:value_network}) demonstrate that an RNN trained on bitstrings of length 20 can model the parity function for bitstrings of any length, without applying any regularization. \n\nRNN architectures are in theory able to simulate any Turing machine (TM) \\cite{al2005neural, siegelmann1995computational} given suitable weights and biases; however, these results depend on unrealistic assumptions such as unlimited computation time and infinite-precision representation of states. In practice, finding proper parameters through gradient descent algorithms for RNNs is a demanding task due to the notoriously hard {\\it vanishing gradient problem} \\cite{hochreiter1997long}. As the length of the bitstring increases, the difficulty of modelling the parity function with an RNN escalates drastically.\n\nThe self-attention network (\\textit{a.k.a.} the Transformer architecture) \\cite{vaswani2017attention} has underpinned many Natural Language Processing (NLP) applications since its emergence in 2017. However, \\cite{hahn2020theoretical} has shown strong theoretical limitations on the ability of self-attention networks to evaluate logical formulas, a task which is equivalent to evaluating the parity of bitstrings, and draws the conclusion from asymptotic results that any transformer will make mistakes in modelling parity when the input is sufficiently long. 
\n\nRecently, researchers from DeepMind considered PonderNet \\cite{banino2021pondernet} and showed its ability to model the parity function by adaptive computation, where the computation expended is proportional to the complexity of the problem. In their experiments, the bitstrings contained 96 bits; for training, a random number from 1 to 48 of the bits were set to 1 or -1 and the remaining bits were set to 0, while for evaluation a random number from 49 to 96 of the bits were set to 1 and the rest were set to 0. PonderNet achieved almost perfect accuracy on this hard extrapolation task, but unlike RNNs, which can process input vectors of any length through unrolling, it can only be evaluated on inputs of the same length as the ones it was trained on. Their experiments were intended to demonstrate the computational adaptability of PonderNet using the parity function as a testbed, rather than exhibiting its power at modelling the parity function. But they show that as the bitstrings become increasingly complicated, \\textit{i.e.} contain more 1s or -1s, the computational resources required to learn the parity increase to the extent that, when the bitstring is sufficiently long, the required resources are astronomical, if not unobtainable.\n\nModern computer vision models are capable of classifying images with very high resolutions \\cite{dosovitskiy2020image}. It is a common misbelief that they can also classify long bitstrings represented in image format. There is typically a robustness inherent in image classification, as the result of the classification does not vary under slight changes in the input image. However, flipping a single bit completely changes the parity of a bitstring. Thus the two are fundamentally different tasks. 
\n\n\section{Different levels of mastery}\n\label{sec:differnt}\n\nThe goal of an RL agent during training is to cultivate the ability to gain maximum rewards in a Markov decision process (MDP) \cite{silver2021reward}. The performance of an RL agent is commonly measured in terms of the averaged accumulated rewards it obtains over a number of episodes, or by the Elo rating score attached to it, measuring its competitiveness against other agents. To complement these score-based measurements, we propose and consider two fundamentally different ways to assess the extent to which an RL algorithm has learnt to master a game, as shown in table \ref{tab:level_optimality}. \n\n\begin{table}[h]\n\caption{Two types of optimality} \n\label{tab:level_optimality}\n\centering \n\begin{tabular}{l p{.70\linewidth}} \n\toprule \n\multicolumn{1}{c}{Type of Optimality} & \multicolumn{1}{c}{Description} \\ [0.5ex]\n\midrule \nType 1 & To what extent the algorithm has learnt to make a sequence of good moves in actual game play that lead to winning the game when played from the initial position\\ \nType 2 & To what extent the algorithm has learnt to make good moves in all possible game positions arising from play from the initial position \\\n\bottomrule \n\end{tabular}\n\end{table}\n\n\n\noindent\nThis leads to two notions of an optimal agent, namely champion and expert, which are distinguished by their level of mastery of the game, as shown in table \ref{tab:level_mastery}.
The general complexity results for chess, Go and shogi might only concern the type 2 notion of optimality.\n\n\begin{table}[h]\n\caption{Two Levels of Mastery} \n\label{tab:level_mastery}\n\centering \n\begin{tabular}{l p{.70\linewidth}} \n\toprule \n\multicolumn{1}{c}{Mastery Level} & \multicolumn{1}{c}{Description} \\ [0.5ex]\n\midrule \nChampion & A player who is able to always achieve the optimal result against any opponent, when the game is started from the initial position\\ \nExpert & A player who is able to always play the optimal move in any position that can arise by legal play from the initial position \\\n\bottomrule \n\end{tabular}\n\end{table}\n\nThis distinction is vital. The champion might have developed a skill to steer the game into its comfort zone, where it masters the game. The expert agent always takes the best move in any position. But in some games it is essentially impossible to become a champion without also becoming an expert. It is outside the scope of this paper to systematically prove this claim. Intuitively, this is because in such games the winning side cannot control the game enough to steer it into well-known territory. A more rigorous and theoretical analysis of this issue is left open.\n\nIn chess, there is a special discipline called problem chess, where the task is to solve artificially composed chess problems rather than problems arising from actual competitive play. LCZero is not trained to solve such artificial problems, and it is not surprising that the program has difficulty dealing with them \cite{maharaj2021chess}. AlphaZero's successes in chess, shogi and Go stemmed from its ability to learn to play and win games. The task was not to solve problem positions, \textit{i.e.} to find good moves in artificial situations.
Thus, AlphaZero's learnability of a game was measured by the champion notion rather than by an expert measure.\n\nThe champion and expert concepts will be discussed in conjunction with experimental results in Section \ref{sec:reinforcement_learning}. We also use an analogy and a crafted nim board below to expound the distinction between them. \n\n\medskip\n\noindent\n{\bf Example 1:} Imagine a fighting game where one of the agents can force the opponent to fight either on the savanna or in the jungle. A champion might be an expert in savanna fighting, but rather hopeless in the jungle (or vice versa). Regardless, the champion can win by forcing the combat onto the savanna. To be an expert, the agent needs to master both savanna and jungle fights, so as to win in any situation it could possibly encounter. \n\n\medskip\n\noindent\nHere is another simple (and rather extreme) example of the relevance of the two notions, taken from the game of nim.\n\n\medskip\n\noindent\n{\bf Example 2:} Let $n \in \mathbb{N}$ and consider the nim board $[2,1,1,\ldots, 1]$ with one pile of 2 counters and $n$ piles of $1$ counter, where $n$ is a large number, for instance 100.\nAn agent - even after relatively little training - becomes a champion in this game. \nThe agent very quickly learns that if there are two counters in the first pile, it always has to remove either $1$ or $2$ counters from that pile. Thus, with only relatively little self-play training, the agent becomes an optimal agent of type 1, \textit{i.e.} a champion, that essentially learns and memorises the two possible first ``opening'' moves. But the task of selecting the best move in a general position $[2,v_2,v_3,\ldots,v_n]$\nwith $v_j \in \{0,1\}, j \in \{2,3,\ldots, n\}$ is equivalent to evaluating the parity of $[v_2,v_3,\ldots,v_n]$.
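The equivalence can be made concrete: in normal-play nim the winning move from $[2,v_2,\ldots,v_n]$ is determined entirely by the parity of the 1-piles, since the nim-sum of the position is $2$ XOR-ed with that parity. A minimal sketch (hypothetical helper name):

```python
def winning_take(position):
    """Optimal number of counters to remove from the 2-pile in a
    normal-play nim position [2, v2, ..., vn] with v_j in {0, 1}.
    The nim-sum is 2 XOR parity(ones); the move must zero it."""
    ones = sum(position[1:])            # number of 1-piles
    return 2 if ones % 2 == 0 else 1

assert winning_take([2, 1, 1]) == 2     # even number of 1-piles: empty the 2-pile
assert winning_take([2, 1, 1, 1]) == 1  # odd number of 1-piles: leave a 1 behind
```

Either move leaves a position of nim-sum 0, so the only hard part is exactly the parity computation.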
\nThus, due to the difficulty for the neural network of modelling the parity function, especially without any serious incentive to learn to do so, a champion agent for this nim board is not expected to become an expert agent. Our experiments show that for nim with a small number of heaps, like $n=15$, an agent becomes a champion after just 200 epochs of training. \n\nYet, despite quickly becoming a champion, it fails to become an expert even after extensive training over thousands of epochs. The experimental setup and configuration are described in section \ref{sec:implement_alphazero}. \n\n\n\section{Preliminary experimental results}\n\label{sec:prelimiary}\n\nAmong the various types of neural networks investigated and surveyed in section \ref{sec:background}, including MLPs, RNNs, Self-Attention and PonderNet, RNNs are the only ones that seem to possess the potential to model the parity function perfectly through learning by gradient descent while processing bitstrings of any length, owing to their unrolling mechanism. Thus, we designed a range of experiments to investigate empirically whether, or to what extent, RNNs can model the parity function from bitstrings and the nim-sum function from nim positions that could possibly arise during actual game play. \n\n\subsection{Modelling the parity function using the value network}\n\label{sec:value_network}\nWe used an LSTM as the main component of the value network and discovered that a single-layer LSTM with hidden size 128 trained on bitstrings of length 20 can simulate the parity function, indicating that there exists a set of parameters for this LSTM model that enables it to model the parity function. The architecture of the value network we used is shown below. The bitstrings are processed by an LSTM layer, followed by a linear layer whose output is then squeezed into the range (0, 1) by a sigmoid function.
\n\n\begin{enumerate}[label={(\arabic*)}, noitemsep]\n \item Single LSTM layer with 128 nodes\n \item Linear layer with 1 node\n \item Sigmoid function\n\end{enumerate}\n\nBatch normalization layers \cite{ioffe2015batch} are not used in the value network, unlike in \cite{silver2017mastering}, because we found that they adversely impact the performance of the model. We use the Adam optimizer \cite{kingma2014adam} to apply the gradients to the weights of the neural networks, and each gradient calculation consumes 128 bitstrings. All our experiments in this paper ran on NVIDIA A100 GPUs in a High Performance Computing cluster\footnote{\url{https:\/\/docs.hpc.qmul.ac.uk\/}}.\n\nWe conducted a series of experiments investigating the impact of the length of the bitstrings on the difficulty for value networks of finding, through gradient descent, the right parameters to model the parity function. All training and testing data were generated randomly, as in \cite{banino2021pondernet}. The value networks were evaluated on bitstrings with 10 more bits than those they were trained on, both to test their ability on extrapolation tasks and to ensure that the test data do not overlap with data seen during training. The results are shown in Fig. \ref{fig:value_network}. The left graph shows the performance of models trained on bitstrings of length $h\in\{20, 40, 60, 80, 120\}$. It is evident that the difficulty of modelling the parity function rises as the length of the bitstrings grows. \n\nWe also found that models trained on longer bitstrings are more sensitive to changes in hyperparameters, like the learning rate. As shown in the right graph of Fig. \ref{fig:value_network}, the value network trained on bitstrings of length 20 is immune to varying learning rates, evidenced by the three overlapping lines, each of which shows the prediction accuracy on bitstrings of length 20.
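For concreteness, the value-network architecture listed above can be sketched in PyTorch as follows (class and variable names are ours; one bit is fed per time step, which is what lets the LSTM unroll over bitstrings of any length):

```python
import torch
import torch.nn as nn

class ParityValueNet(nn.Module):
    """Single LSTM layer (hidden size 128) -> linear(1) -> sigmoid."""
    def __init__(self, hidden_size=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, bits):                 # bits: (batch, length)
        x = bits.unsqueeze(-1).float()       # (batch, length, 1): one bit per step
        _, (h_n, _) = self.lstm(x)           # final hidden state of the last layer
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

net = ParityValueNet()
out = net(torch.randint(0, 2, (4, 20)))      # a batch of 4 bitstrings of length 20
assert out.shape == (4,)
assert bool(((out >= 0) & (out <= 1)).all()) # outputs squeezed into (0, 1)
```

Because the recurrence is applied step by step, the same (untrained here) module accepts length-30 inputs at evaluation time without any change.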
The value network trained on bitstrings of length 40 is not as impervious to changes in the hyperparameters as the one trained on length 20. The model trained on bitstrings of length 80 requires a significantly higher number of training steps to converge to the parity function, and the ones trained with learning rates of 0.0003 and 0.0005 failed to make predictions better than random guessing.\n\nIt should be noted that the amount of data used in training and the way it is sampled impinge on the convergence of the model, but the impact of these stochastic factors is of no significance, because we ran myriads of experiments with a range of configurations, \textit{i.e.} various combinations of numbers of LSTM layers and learning rates, using thousands of GPU hours, out of which statistically stable properties emerged. The results shown in these graphs are exemplary among them. We did not discover any apparent patterns pertaining to the effect of the size of the training dataset or of the learning rate on the convergence of the model. \n\n\begin{figure}[h]\n\centering\n\includegraphics[width=0.45\textwidth]{figures\/bistrings_acc.eps}\n\hspace{1ex}\n\includegraphics[width=0.45\textwidth]{figures\/lr_acc.eps}\n\caption{\label{fig:value_network} The accuracy of the value network on the extrapolation task as training progresses. The model was trained for 1 million steps in every experiment, each step consuming 128 bitstrings. The training and testing time per run ranged from 90 to 165 minutes for bitstrings of length 20 to 120. }\n\end{figure}\n\nAs observed from these steep learning curves, the polarity of the performance of the value networks during training might indicate that they are not learning to simulate the parity function incrementally, but rather in a way likening an epiphany.
During training, the neural networks that end up converging to the parity function pass through two states, and the transition between them is transient: either they cannot model the parity function at all, or they can model it perfectly, resembling how we humans learn the parity function. We suggest that, before generalizing, the neural networks were in an incubation period in which they were ruling out incorrect sets of parameters and steering the update direction towards uncovering the right one that enables them to model the parity function. A recent study on the generalization of neural networks over small algorithmic datasets discovered a similar phenomenon, dubbed ``grokking'', where the generalization performance of the neural network suddenly shoots from random guessing to perfect generalization, which can also occur long after the onset of overfitting \cite{power2022grokking}. We make some remarks on this phenomenon in section \ref{sec:conclusion}.\n\n\subsection{Learning winning moves using policy networks}\n\label{sec:policy_network}\n\nAlphaGo, the precursor of AlphaZero, employed a mixed training strategy where the policy networks were first trained by supervised learning on 30 million board positions obtained from the KGS Go Server, preceding further training by reinforcement learning (RL) \cite{silver2016mastering}. The policy network was trained on randomly sampled state-action pairs $(s, a)$, where $s$ is a board state and $a$ is the action that the experts took in that board position. \n\nAn agent might be excellent at evaluating positions that naturally occur in actual game play, but poor at evaluating ``artificial'' positions that cannot arise in practice. One reason could be that the agent follows a policy that prevents such positions from occurring.
Although temperature settings and Dirichlet noise are applied to boost the exploration of all possible legal moves, some moves might still get little chance of being tried \cite{silver2017mastering}. Thus, the role of the policy network in nim becomes important for our analysis. When AlphaZero, after being trained, plays chess (Go or shogi), it typically gathers a collection of sample trajectories and returns heuristics from the search guided by the policy and value networks. If the search tree fails to branch enough, the critical moves are not explored (see Fig. \ref{fig:LC3}), in which case we cannot expect the quality of the search to be reliable. To illustrate the issue, consider the following example, in which the nim positions can be categorized into 4 classes. \n\n\medskip\n\noindent\n{\bf Example 3:} Let $n \in \mathbb{N}$ be a large number and consider nim positions $[v_1,v_2,v_3,\ldots v_n]$ with $n$ rows, each containing either 0, 1 or 2 items.\n\noindent\nSuch positions fall into the following 4 categories, as shown in Table \ref{tab:winning_move}.\n\n\begin{table}[h]\n\caption{Winning Moves on nim Board Positions} \n\label{tab:winning_move}\n\centering \n\begin{tabular}{ll} \n\toprule \nWinning Move & Nim Board Positions \\ [0.5ex]\n\midrule \nDoes not exist & even number of heaps with 1, and even number of heaps with 2\\ \nRemove 2 from a heap with 2 & even number of heaps with 1, and odd number of heaps with 2 \\\nRemove 1 from a heap with 1 & odd number of heaps with 1, and even number of heaps with 2 \\\nRemove 1 from a heap with 2 & odd number of heaps with 1, and odd number of heaps with 2 \\ \n\bottomrule \n\end{tabular}\n\end{table}\n\nThe number of search paths through the game tree from such an initial position is given by $\prod_{j=1}^{n} v_j$, which for large values of $n$ becomes computationally infeasible to traverse exhaustively.
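The four categories in Table \ref{tab:winning_move} can be read off directly from the parities of the numbers of 1-heaps and 2-heaps; a minimal Python sketch (hypothetical function name, normal-play convention) of this labelling:

```python
def winning_move_class(position):
    """Classify a nim position with heap sizes in {0, 1, 2}.
    Returns None when no winning move exists (nim-sum 0), otherwise
    the winning move type from the table above."""
    ones_odd = position.count(1) % 2 == 1
    twos_odd = position.count(2) % 2 == 1
    if not ones_odd and not twos_odd:
        return None                       # nim-sum 0: lost position
    if not ones_odd and twos_odd:
        return "remove 2 from a 2-heap"   # nim-sum 2
    if ones_odd and not twos_odd:
        return "remove 1 from a 1-heap"   # nim-sum 1
    return "remove 1 from a 2-heap"       # nim-sum 3

assert winning_move_class([1, 1, 2, 2]) is None
assert winning_move_class([2, 0, 1, 1]) == "remove 2 from a 2-heap"
assert winning_move_class([1, 2, 2]) == "remove 1 from a 1-heap"
assert winning_move_class([1, 2]) == "remove 1 from a 2-heap"
```

Each returned move zeroes the nim-sum, so the classification task posed to the policy network below is exactly this parity computation.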
However, after training, the policy network might have learned to reduce the branching factor, so that at a given step it mainly selects, out of the 3 move types, the one move that has the better chance of leading to a winning position. \n\nIn order to investigate whether the policy network can output a probability distribution in favour of the winning move on the boards shown in the example, we considered a supervised classification problem with 3 classes, corresponding to the three classes of nim positions with winning moves available in the example. The input to the policy network is a won position $[v_1,v_2,\ldots,v_n]$, \textit{i.e.} a position of type 2, 3 or 4 as described in the example above. Type 1 board positions are not taken into consideration, as they have no winning moves. The output is the type of the winning move derived from the nim-sum (equivalently, one of the types 2, 3 or 4), labelled as 0, 1 or 2. \nThe training and testing data are randomly generated nim positions that could possibly arise during actual game play. The architecture of the policy networks consists of one or multiple LSTM layer(s), a batch normalization layer, a ReLU function, a linear layer with 3 nodes corresponding to the 3 classes of winning moves, and a softmax function, as shown below. \n\n\begin{enumerate}[label={(\arabic*)}, noitemsep]\n \item LSTM layer(s), each of which contains 128 nodes\n \item Batch normalization\n \item A rectifier nonlinearity\n \item Linear layer with 3 nodes\n \item Softmax function\n\end{enumerate}\n\nOur experiments tested board positions of nim with different numbers of heaps ($h$) ranging from 7 to 9, using policy networks with different numbers of layers ($l$) from 1 to 5, and up to 10 for 9-heap nim. The training and testing datasets are evenly balanced across the three labels. The results are shown in Fig.
\ref{fig:policy_network}, in which the top 3 plots show the results on training data and the bottom 3 plots those on testing data. \n\nWe discovered that in none of the experiments do the policy networks predict the correct winning moves with more than 80 percent accuracy, showing that the policy networks fail to model the nim-sum accurately. The performance of policy networks trained on board positions from 7-heap nim is consistent, while the testing accuracy of the policy networks trained on those from 8 heaps is more volatile. The policy network trained on board positions of 9-heap nim cannot predict the winning moves better than a random policy, showing that the difficulty for the policy network of modelling the nim-sum also grows as the length of the bitstrings increases, similarly to our findings for the value network. \n\nWe did extensive experiments using different numbers of LSTM layers in an attempt to find an architecture capable of modelling the nim-sum, like the LSTM model that can model the parity function perfectly in section \ref{sec:value_network}, but to no avail. Thus, we present results on policy networks consisting of varying numbers of layers to illustrate this finding and to show that it occurs across various network architectures.
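For reference, the policy-network architecture listed earlier in this subsection can be sketched in PyTorch as follows (names are ours, and the input encoding is simplified to one heap value per time step):

```python
import torch
import torch.nn as nn

class NimPolicyNet(nn.Module):
    """l LSTM layers (128 nodes each) -> batch norm -> ReLU -> linear(3) -> softmax."""
    def __init__(self, num_layers=1, hidden_size=128):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden_size, num_layers=num_layers, batch_first=True)
        self.bn = nn.BatchNorm1d(hidden_size)
        self.head = nn.Linear(hidden_size, 3)   # 3 classes of winning move

    def forward(self, board):                   # board: (batch, length)
        _, (h_n, _) = self.lstm(board.unsqueeze(-1).float())
        return torch.softmax(self.head(torch.relu(self.bn(h_n[-1]))), dim=-1)

net = NimPolicyNet(num_layers=2).eval()         # eval(): batch norm uses running stats
probs = net(torch.randint(0, 3, (4, 9)))        # 4 positions, heap sizes in {0, 1, 2}
assert probs.shape == (4, 3)
assert bool(torch.allclose(probs.sum(dim=-1), torch.ones(4)))
```

Trained with a cross-entropy objective on the 3 move-type labels, this is the setting whose learning curves are shown in Fig. \ref{fig:policy_network}.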
\n\n\begin{figure}[H]\n \centering\n \begin{subfigure}[b]{0.32\textwidth}\n \centering\n \includegraphics[width=\textwidth]{figures\/policy7_train_acc.eps}\n \caption{$h=7$ heaps}\n \end{subfigure}\n \hspace{0em}\n \begin{subfigure}[b]{0.32\textwidth}\n \centering\n \includegraphics[width=\textwidth]{figures\/policy8_train_acc.eps}\n \caption{$h=8$ heaps}\n \end{subfigure}\n \hspace{0em}\n \begin{subfigure}[b]{0.32\textwidth}\n \centering\n \includegraphics[width=\textwidth]{figures\/policy9_train_acc.eps}\n \caption{$h=9$ heaps}\n \end{subfigure}\n\end{figure}\n\vspace{-1.5em}\n\begin{figure}[H]\n\centering\n\begin{subfigure}[b]{0.32\textwidth}\n\centering\n\includegraphics[width=\textwidth]{figures\/policy7_test_acc.eps}\n\caption{$h=7$ heaps}\n\end{subfigure}\n\hspace{0em}\n\begin{subfigure}[b]{0.32\textwidth}\n\centering\n\includegraphics[width=\textwidth]{figures\/policy8_test_acc.eps}\n\caption{$h=8$ heaps}\n\end{subfigure}\n\hspace{0em}\n\begin{subfigure}[b]{0.32\textwidth}\n\centering\n\includegraphics[width=\textwidth]{figures\/policy9_test_acc.eps}\n\caption{$h=9$ heaps}\n\end{subfigure}\n\caption{\label{fig:policy_network} The training and testing performance of the policy network on $h\in\{7, 8, 9\}$ heaps of nim. $h$ denotes the number of heaps in the game and $l$ the number of LSTM layers in the policy network. The top 3 plots show the model accuracy on the training data, and the bottom 3 plots the accuracy on the testing data. Every policy network was trained for one hundred thousand steps, each step consuming 128 bitstrings. Each run finished within 30 minutes.}\n\end{figure}\n\n\nThe size of the dataset and the way the data are generated affect the performance of the value network, and likewise that of the policy network. It is possible that using a larger dataset would help the policy network to learn the nim-sum function.
However, the state space (\textit{i.e.} the size of the dataset) in our experiments was relatively small, and positions were visited many times during training iterations. One crucial point to understand is that learning the nim-sum or parity function - even on smaller boards - is non-monotone. Thus it is not the case that the policy network gradually increases its knowledge and steadily improves. Instead, the non-convergence typically manifests itself in random fluctuations where the performance of the policy network fails to improve on certain positions, as shown in the learning curves. \n\n\section{Reinforcement Learning for nim}\n\label{sec:reinforcement_learning}\nThere exist several open-source implementations of AlphaZero on GitHub, but because we needed additional functionality that none of them provides - for instance, calculating the Elo rating of the agent being trained against its ancestors, or evaluating, as training progresses, the accuracy of the output of the policy network against the winning move derived from the nim-sum formula - we implemented the AlphaZero algorithm ourselves, using PyTorch \cite{paszke2019pytorch} for neural network training and Ray \cite{moritz2018ray} for running simulations in parallel. We made considerable efforts to ensure that our implementation stays close, at a granular level of detail, to the algorithms described in \cite{silver2017mastering, silver2018general}, while making the necessary changes to the neural network architectures tailored to nim, which are specified in section \ref{sec:implement_alphazero}. \n\n\subsection{Implementing an AlphaZero-style algorithm for nim} \n\label{sec:implement_alphazero}\nThe AlphaZero algorithm starts training from a clean slate with no specific game knowledge besides the rules of the game \cite{silver2018general}. The policy network, the value network and MCTS are the three pillars of the AlphaZero algorithm.
The policy network outputs a probability distribution over all the actions $\mathcal{A}$. $P(s, a)$ stands for the prior probability associated with action $a \in \mathcal{A}$ at state $s$. The probabilities of illegal actions are set to zero and the remaining probabilities are re-normalized so that they still sum to one. The policy network narrows the search down to the actions with a high probability of leading to a won position, hence reducing the breadth of the search tree. The value network outputs a scalar value $v \in [-1, 1]$ estimating the expected outcome from position $s$ when following the actions suggested by the policy network. A higher value of $v$ indicates that the current player, who is to move at position $s$, has a higher chance of winning, and vice versa. The value of a leaf node is predicted by the value network; without it, the value could only be known at the end of the game, where it is 1 when the current player has won and -1 when it has lost. The value network thereby effectively reduces the depth of the search tree. \n\nThe policy and value networks share the LSTM layer but use two separate heads, the policy head and the value head. We ran experiments on nim with 5, 6 and 7 heaps and tweaked the network architectures and configuration to adapt to the different board sizes. The shared part of the policy and value networks is one LSTM layer with hidden size 128 for all three nim games. It would be natural to increase the number of layers as the board size grows; however, we found that larger models are detrimental to performance and tend to destabilize the training process. The output of the LSTM layer is then passed into the two heads. The policy head consists of 25, 36 and 49 nodes for 5-, 6- and 7-heap nim respectively, and its output goes into a softmax function converting these logits into probabilities that sum to 1.
The value head contains a single node that outputs a scalar value, which goes into a tanh activation function squeezing it into the range [-1, 1]. \n\nThe MCTS starts with the root node of the search tree, corresponding to the current state the player is to act on. Each node represents a state encountered in game play and each edge an action. The tree is constructed in the course of running a predefined number of simulations, each of which starts at the root node and ends at a leaf node, following a sequence of actions selected using Formula \ref{eqn:action_selection}\footnote{See the python scripts in the reinforcement learning folder in our GitHub repository for the implementations of all the formulas used in this section.}. We ran a predefined number of simulations for each move, and during training we collect 100 episodes of interaction data in the form of $(s, \pi , r)$ to train the policy network with the cross-entropy loss and the value network with the mean squared error. \n\begin{equation}\n a_t = \argmax_a(Q(s,a)+U(s,a))\n \label{eqn:action_selection}\n\end{equation}\n\nwhere $Q(s,a)$ is the action value averaged across simulations, calculated by\n\n\begin{equation}\n\label{eqn:q_value}\n Q(s, a) = \frac{1}{N(s, a)}\sum_{s'|s, a\rightarrow s'}V(s')\n\end{equation}\n\nin which $N(s, a)$ is a counter that records the number of times action $a$ has been taken from state $s$, $s'|s, a\rightarrow s'$ denotes that action $a$ was taken at state $s$ and the simulation terminated at state $s'$, and $V(s')$ represents the value of the end state of a simulation from the perspective of the current player, obtained either from the value network, if $s'$ is an intermediate state, or from the game itself as a reward, if it is a terminal state.
The term $U(s,a)$ is calculated using\n\n\begin{equation}\n\label{eqn:puct}\n U(s,a)=c_{puct}P(s,a)\dfrac{\sqrt{\sum_b{N(s,b)}}}{1 + N(s,a)}\n\end{equation}\n\nwhere $c_{puct}$ is a constant controlling the level of exploration. AlphaGo \cite{silver2016mastering}, AlphaGo Zero \cite{silver2017mastering} and AlphaZero \cite{silver2018general} all leave $c_{puct}$ unspecified. We found that the value of $c_{puct}$ affects the performance of AlphaZero on nim significantly, because setting it too low discourages exploration, while setting it too high down-weights the action value, impairing the effectiveness of the search depth. \cite{tian2019elf} found that setting $c_{puct}=1.5$ yields satisfactory results, but this value, along with other sensible ones like $c_{puct}\in\{1, 1.5, 2, 3\}$, works poorly for nim. We therefore adopted another formula \cite{schrittwieser2020mastering} to calculate $U(s,a)$, as shown below.\n\n\begin{equation}\n\label{eqn:u(s,a)}\n U(s,a)=P(s,a)\cdot\dfrac{\sqrt{\sum_b{N(s,b)}}}{1 + N(s,a)}\left(c_1 + \log\left(\dfrac{\sum_b{N(s,b)} + c_2 + 1}{c_2}\right)\right)\n\end{equation}\n\nwhere $c_1=0.25$ and $c_2=19652$. To further encourage exploration, Dirichlet noise is added to the prior probabilities $P(s,a)$ of the root node where the search begins. The Dirichlet noise is indispensable, as it ensures that the search tree branches widely, avoiding always visiting the moves with the highest prior probability. \n\n\begin{equation}\n P(s,a) \leftarrow (1-\epsilon) \cdot P(s,a) + \epsilon \cdot \eta_{a}\n \label{eqn:add_noise}\n\end{equation}\n\nwhere $\eta_a$ is sampled from the Dirichlet distribution Dir($\alpha$), in which $\alpha$ is set to 0.35, and $\epsilon$ is a constant set to 0.25 during training. During evaluation it is set to 0 to negate the effect of the Dirichlet noise. The values of $\alpha$ used for chess, shogi and Go are 0.3, 0.15 and 0.03 respectively.
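The exploration term of equation \ref{eqn:u(s,a)} and the noise injection of equation \ref{eqn:add_noise} amount to the following sketch (hypothetical helper names; the constants are the ones stated above):

```python
import math
import random

C1, C2 = 0.25, 19652.0

def u_value(prior, n_sa, n_s_total):
    """MuZero-style exploration bonus U(s,a) with c1=0.25, c2=19652."""
    return (prior * math.sqrt(n_s_total) / (1 + n_sa)
            * (C1 + math.log((n_s_total + C2 + 1) / C2)))

def add_dirichlet_noise(priors, alpha=0.35, eps=0.25):
    """Mix Dirichlet(alpha) noise into the root priors.
    A Dirichlet sample is built from normalized gamma draws."""
    gammas = [random.gammavariate(alpha, 1.0) for _ in priors]
    total = sum(gammas)
    return [(1 - eps) * p + eps * g / total for p, g in zip(priors, gammas)]

assert u_value(prior=0.5, n_sa=0, n_s_total=0) == 0.0  # no visits yet: no bonus
noisy = add_dirichlet_noise([0.7, 0.2, 0.1])
assert abs(sum(noisy) - 1.0) < 1e-9                    # still a distribution
```

Setting `eps=0`, as done at evaluation time, returns the priors unchanged.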
The $\alpha$ value should be in inverse proportion to the approximate number of legal moves at a given position; as the average number of legal moves in the nim games we run experiments on is less than that of chess, we opted for the higher value $\alpha=0.35$. Although in theory $\alpha$ should be set to 0.5, in practice setting $\alpha$ to 0.35 yields better outcomes. The left arrow denotes that the prior probability is reassigned the value on the right. \n\n\begin{equation}\n \label{eqn:u_s_a}\n U(s,a) \propto \dfrac{P(s,a)}{1 + N(s,a)}\n\end{equation}\n\nAs shown in equation \ref{eqn:u_s_a}, $U(s,a)$ is proportional to the prior probability $P(s,a)$. The visit count $N(s,a)$ in the denominator relatively enlarges the prior probability of nodes visited less frequently, in order to boost exploration. At each state $s$, the action selection is jointly determined by the action value $Q(s,a)$, the visit count $N(s, a)$ and the prior probability $P(s,a)$ obtained from the policy network. An action with a lower visit count, a higher prior probability and a higher value has a better chance of being chosen. Thus, when the policy network fails to assign higher probabilities to winning moves, the search is directed to nodes with less chance of leading to a won state, making the search less effective. An analogous problem exists for the value network: when it fails to estimate the correct expected outcome of the game, branches of the search that lead to a won state are prematurely cut off.\n\nAfter the simulations are finished, the search returns a probability distribution over all the actions according to formula \ref{eqn:pi_action_selection}, from which an action to take at board state $s$ is sampled.
\n\begin{equation}\n \boldsymbol\pi(\textbf{a}|s) = \dfrac{N(s,\textbf{a})^{1\/\tau}}{\sum_{b}N(s,b)^{1\/\tau}}\n \label{eqn:pi_action_selection}\n\end{equation}\n\nwhere $\tau$ is a temperature that changes according to the number of moves made so far during game play. We call $\boldsymbol\pi(\textbf{a}|s)$ the posterior probabilities; they are related to the prior probabilities in that they are derived from MCTS simulations partially guided by the prior probabilities from the policy network. For the first 3 moves, we set the temperature to $\tau=1$, so that the chance of each action being sampled is proportional to its visit count. For the remaining moves of the game, the temperature is set to $\tau \rightarrow 0$, so that the action taken is the one with the highest visit count. Note that during evaluation the temperature is set to $\tau \rightarrow 0$ for all moves.\n\nThe policy network serves as a lighthouse that guides the search towards moves with higher chances of winning the game. If it fails to work as intended, the search is misguided, immensely impacting its effectiveness, to the extent that when the search space is sufficiently large, the MCTS is equivalent to brute-force search, or even worse if the policy is skewed towards losing moves. In addition, the improvement of the policy network relies completely on the heuristics from the search. Thus, if the policy network is weak, a vicious cycle is formed in which the poor policy misleads the search and the ineffective search leads to poor improvement of the policy, as will be shown in section \ref{subsec:policy_value}. \cite{danihelka2021policy} devised a policy improvement algorithm in Gumbel AlphaZero which ensures that the search heuristic improves the policy network consistently.
However, in that approach the target of the policy improvement entails the approximation from the value network (see Section 4: Learning an Improved Policy in \cite{danihelka2021policy} for details), and due to the parity-related problems that incapacitate the policy and value networks, it still might not be able to handle nim with large board sizes.\n\nSo far we have argued that various parts of the AlphaZero algorithm (\textit{e.g.} the policy network and value network) have problems learning what would be required for overall success in mastering nim. In the next section we present the experimental design and results of the AlphaZero algorithm on nim of varying board sizes, and the method used to evaluate its performance. \n\n\subsection{Experimental setup and results}\n\label{sec:results}\nIn this section, the results are presented along with the detailed configurations of the experiments. We discovered that the AlphaZero algorithm applied to nim is sensitive to the choice of configuration, and our selection might not guarantee the best results, as there are many factors affecting the training process, as shown in the last section, and it is intractable to try out all options in the search space; however, the configurations presented here are the ones that gave us the best attainable results in the experiments we conducted. The policy and value networks share one LSTM layer and have separate heads. They were trained simultaneously on the experiences (rollout data) collected during self-play. Apart from the numbers of nodes in the heads, which differ to accommodate the different action spaces, the AlphaZero algorithm for nim with the three different numbers of heaps uses the same architecture. We chose a large number of simulations for each move and increased it as the number of heaps grows, not only because more simulations lead to better heuristics, but also because they offset the effect of varying hyperparameters to which the algorithm is sensitive.
During both training and evaluation, each move ran $s=\\{50, 60, 100\\}$ simulations on nim of $h=\\{5, 6, 7\\}$ heaps, respectively. The simulations ran on 8 CPUs in parallel. The significantly larger number of simulations on 7-heap nim is used because we intend to eliminate any possible negative impact on the performance of the algorithm brought by an insufficient number of simulations, although this incurs a hefty computational cost. Each heap contains an odd number of items. For instance, the initial board of 5-heap nim, as shown in Fig. \\ref{fig:nim_board}, consists of $[1, 3, 5, 7, 9]$. Every heap in the actual board is represented in unary, and the heaps are separated by -1:\n\n\\[ [1, -1, 1, 1, 1, -1, 1, 1, 1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 1, -1, 1, ..., 1, 1, 1]\\]\n\nEvery board position of 5-heap nim is thus a bitstring of length 29. Similarly, the positions of 6 and 7 heaps are bitstrings of length 41 and 55. The state spaces of 5-, 6- and 7-heap nim contain 3840, 46080 and 645120 states, and their action spaces contain 25, 36 and 49 actions, respectively. On average, a game between two players moving randomly lasts 10 moves on 5-heap nim, 13 moves on 6 heaps and 16 moves on 7 heaps.\n\n\\subsubsection{Elo rating measurement}\n\\label{subsec:elo_rating}\nThe Elo rating, commonly used in chess, is a method of ranking the players of multiplayer games in terms of their competitiveness. AlphaZero adopted the Elo rating to evaluate its performance against other algorithms and programs such as AlphaGo Zero, AlphaGo Lee and Stockfish. Unfortunately, no existing nim agent with an attached Elo rating reflecting its competitiveness is available. 
Thus, we opted for a variation of self-play Elo rating \\cite{tian2019elf} to measure the relative strength of an agent and monitor the training progress. In this scheme, the relative strength of an agent is evaluated against all its ancestors, whose ratings are updated every time a newly trained agent joins, to ensure that the attached ratings reflect their competitiveness. The Elo rating measures a champion in terms of its ability to defeat other players. \n\nIn our self-play Elo rating system, every new agent is assigned an initial score of 1000. At the end of each training iteration, the trained agent is saved into a reservoir of the agents trained before it, and its Elo rating is calculated against all the trained agents stored in the system. The agent being trained is denoted as Player {A} and its opponent, one of its predecessors, as Player {B}. Both have an expected score representing their probability of winning the match, calculated for Player {A} by:\n\\begin{equation}\n E_A = \\dfrac{1}{1 + 10^{(R_B-R_A)\/400}}\n\\end{equation}\nAnalogously, the expected score for player {B} is calculated by\n\\begin{equation}\n E_B = \\dfrac{1}{1 + 10^{(R_A-R_B)\/400}}\n\\end{equation}\nThere are only two possible outcomes of the match for player {A}, a win or a loss; for nim, draws do not exist. If player {A} wins the game, its Elo rating is updated by\n\\begin{equation}\n R_A = R_A + K(1 - E_A)\n\\end{equation}\nwhere K is called the K-factor. The K value is usually set to 16 for masters (strong players) and 32 for novices (weak players). In our setting, the K value for players who have engaged in more than 20 tournaments is set to 32, and otherwise to 16, based on the consideration that the more matches a player has engaged in, the more accurately the Elo rating reflects its strength and hence the higher the K value should be. 
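Writing the outcome of a decisive game as a score (1 for a win, 0 for a loss), the winner and loser updates both take the form $R \leftarrow R + K(\mathrm{score} - E)$. A minimal sketch of the rating bookkeeping (illustrative helper names, not our actual implementation):

```python
def expected_score(r_a, r_b):
    """Expected score E_A of a player rated r_a against one rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_elo(r_a, r_b, a_won, k_a=16, k_b=16):
    """Update both ratings after a decisive game (nim has no draws).

    With the outcome written as a score (1 = win, 0 = loss), winner and
    loser are both updated by R <- R + K * (score - E).
    """
    e_a, e_b = expected_score(r_a, r_b), expected_score(r_b, r_a)
    s_a = 1.0 if a_won else 0.0
    return r_a + k_a * (s_a - e_a), r_b + k_b * ((1.0 - s_a) - e_b)
```

For two fresh agents at the initial score of 1000 and K = 16, a single game moves the winner to 1008 and the loser to 992.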
The updated Elo rating for player {B} is\n\\begin{equation}\n R_B = R_B + K(0 - E_B)\n\\end{equation}\n\nThis approach is self-contained, not relying on any external program. Its limitation is that the Elo rating only measures the performance of an agent against its predecessors, meaning that it should not be compared with the ratings of agents outside the group. However, the ratings can be used as an indicator of the performance of the agents and as a baseline for future research.\n\nWe monitored the self-play Elo rating of the AlphaZero agent on nim of 5, 6 and 7 heaps. As shown in Fig. \\ref{fig:elo_rating}, the self-play Elo rating of the agent being trained grows as the training progresses, indicating that it is becoming more competitive. The rating of the AlphaZero agent for 5-heap nim grew rapidly from the start of training. In comparison, the growth of the agent on nim of 6 heaps is relatively slow, and that on nim of 7 heaps stagnates after 420 iterations, showing that while the agent becomes more competitive and is on the path towards becoming a champion, there seems to be a ceiling to its competitiveness that is hard to crack. This bottleneck, as shown in the next section, is caused by the inability of the policy and value networks to master nim.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figures\/elo_rating.eps}\n\\caption{\\label{fig:elo_rating} The self-play Elo rating of the agent being trained on nim of 5, 6 and 7 heaps respectively, calculated at the end of every training epoch against all the agents archived in the pool. Calculating the Elo rating during training takes a huge amount of time. 
Our program ran for roughly 200 hours with the above-mentioned configurations to obtain the results for 7-heap nim.}\n\\end{figure}\n\n\\subsubsection{Performance of policy and value network}\n\\label{subsec:policy_value}\nTo examine whether the agent is capable of becoming an expert at nim, we devised two accuracy measurements for the policy and value networks. The action probability distribution yielded by the policy network should be biased towards the moves that have a higher chance of leading to a win. In nim, the winning moves can be calculated from the nim-sum; accordingly, the accuracy of the policy network is evaluated by comparing its most probable moves against the winning moves. The same policy measurement was also used in \\cite{danihelka2021policy} as Policy Top 1 Accuracy, where top 1 refers to the move with the highest probability. \n\nThe AlphaZero policy is measured against a random policy. As shown in Fig. \\ref{fig:alphazero_policy_accuracy}, the AlphaZero policy surpasses the random policy by a significant margin on 5-heap nim, but the advantage diminishes on larger board sizes. The AlphaZero policy on 7-heap nim is tantamount to the random policy, because the inaccurate policy results in poor heuristics, which in turn lead to poor policy improvement. It is undoubtedly true that as more heaps are added to the game, the growing action space is one important factor that complicates learning the winning moves for the policy network. However, recall from section \\ref{sec:policy_network} that even when the size of the action space remains unchanged, the policy networks still face increasing difficulty as more heaps are added. 
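The ground truth behind this accuracy measurement is inexpensive to compute. A minimal sketch (normal-play convention; the function name and interface are illustrative, not the paper's implementation):

```python
from functools import reduce
from operator import xor

def winning_moves(heaps):
    """All winning moves as (heap index, counters to remove) pairs.

    Under normal play, a move wins iff it leaves the XOR (nim-sum) of
    all heap sizes at 0: reduce heap h to h ^ s whenever h ^ s < h,
    where s is the nim-sum of the current position.
    """
    s = reduce(xor, heaps, 0)
    if s == 0:
        return []  # lost position: every move leaves a nonzero nim-sum
    return [(i, h - (h ^ s)) for i, h in enumerate(heaps) if (h ^ s) < h]
```

On the initial 5-heap board $[1, 3, 5, 7, 9]$ this yields a single winning move, removing 9 counters from the last heap; the policy network's most probable move is counted as correct when it appears in this list.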
\n\\begin{figure}[h]\n\\centering\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/policy_accuracy_5.eps}\n\\caption{$h=5$ heaps}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/policy_accuracy_6.eps}\n\\caption{$h=6$ heaps}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/policy_accuracy_7.eps}\n\\caption{$h=7$ heaps}\n\\end{subfigure}\n\\caption{\\label{fig:alphazero_policy_accuracy} The accuracy of the policy network measured against the winning moves on nim of 5, 6 and 7 heaps. The AlphaZero policy is superior to the random policy on smaller boards, but as the board size grows the accuracy of the policy drops drastically, to the extent that it is equivalent to a random policy on 7-heap nim.}\n\\end{figure}\n\nThe value network yields the estimated outcome of the game at a given position when following the moves suggested by the policy network. In nim, the won positions are those where a winning move exists. Using this property, we monitor the accuracy of the value network. All the possible board positions that could arise from the initial board position of 5-heap nim are evaluated; due to the large state spaces of 6 and 7 heaps, the evaluation there is conducted on 10000 randomly sampled board positions. A prediction is considered correct if the value network outputs a positive number on a won position and a negative number on a lost position. The accuracy of the value network is shown in Fig. \\ref{fig:alpha_value_accuracy}. As the number of heaps increases, so do the state space and the board size, and the value network faces a rising hindrance in precisely evaluating board positions. The value network on 7-heap nim barely outperforms random guessing. 
This result aligns with that of the experiment shown in section \\ref{sec:value_network}. \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figures\/value_accuracy.eps}\n\\caption{\\label{fig:alpha_value_accuracy} The accuracy of the value network on nim of 5, 6 and 7 heaps. The accuracy on the board positions of 5-heap nim rises steadily and reaches 90 percent at 500 training iterations. The accuracy on the board positions of 6-heap nim exceeds 60 percent, but that on 7-heap nim fluctuates around 50 percent.}\n\\end{figure}\n\nThe policy network learns from the heuristics $ \\boldsymbol\\pi(\\textbf{a}|s)$ derived from the MCTS, as shown in Eq. \\ref{eqn:pi_action_selection}. The value network learns from the actual game outcome. To probe how well the policy and value networks fit their targets, we track the training loss of each during the training process, as shown in Fig. \\ref{fig:alpha_value}. Both the value and policy networks are able to gradually fit their targets, empowering the agent to become increasingly competitive. However, the difficulties for the policy network in digesting the heuristics and for the value network in modelling the expected outcome of the game grow as the board size increases, which is a major problem that impedes the agent from becoming an expert. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{figures\/policy_loss.eps}\n\\hspace{1em}\n\\includegraphics[width=0.4\\textwidth]{figures\/value_loss.eps}\n\\caption{\\label{fig:alpha_value} The loss of the policy network (left) and the loss of the value network (right) on nim of 5, 6 and 7 heaps. 
For both neural networks, the larger the board size grows, the harder it becomes to fit the heuristics from the MCTS.}\n\\end{figure}\n\nThe gradually dropping losses of both the policy network and the value network, coupled with the rising Elo rating of the agents, indicate that the agent is learning to become a champion. However, the dropping accuracy of the policy and value networks as the size of the board grows shows that it is tremendously challenging for the agent to become an expert. On 7-heap nim, the policy and value networks could memorize the heuristics from the MCTS and the actual outcome at the end of the game, but this merely enables the agent to become more competitive in comparison with its ancestors, and cannot guide the MCTS effectively so as to form a positive improvement loop. This illustrates our general finding that on large boards nim, and impartial games in general, are a challenge for reinforcement learning algorithms. \n\n\\subsubsection{Analysis of AlphaZero on nim positions}\n\\label{subsec:alphazero_nim_positions}\n\n\\begin{figure}[H]\n \\centering\n \\subfloat[\\label{pos:5heaps_1}{5 heaps: [1, 3, 5, 7, 9]}]{\\includegraphics[width=0.35\\textwidth]{figures\/5piles_nim_position.eps}}\n \\centering\n \\hspace{1.5em}\n \\subfloat[\\label{tab:heaps5_1}Evaluations from the policy and value network for the 2 moves with the highest prior probabilities. 
In 10,000 MCTS simulations, these are the only two moves selected in this position.]{\n \\begin{tabular}{p{0.25\\textwidth}p{0.08\\textwidth}p{0.05\\textwidth}}\n \\toprule\n \\textbf{Move} & e9 & a1 \\\\ \n \\midrule\n \\textbf{Winning Move} & yes & no \\\\ \n \\midrule\n \\textbf{Prior Probability} & 97.9\\% & 1.9\\% \\\\\n \\midrule\n \\textbf{Win Probability} & 99.5\\% & 5.0\\% \\\\\n \\midrule\n \\textbf{V-value} & 0.97 & -0.89 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{1em}\n }\n \\caption{\\label{fig:alphazero_5piles_1} The policy and value network accurately evaluate the initial position of 5-heap nim, assigning a 97.9\\% prior probability and a 99.5\\% winning probability to the winning move e9. The move with the second highest prior probability is a1, which is assigned a 5.0\\% winning probability.}\n\\end{figure}\n\nThe graphs and analysis in the previous sections provide a high-level overview of the performance of the algorithm on nim with different numbers of heaps. In this section, we evaluate the performance of the algorithm on the initial board position, along with one of the intermediate positions, of 5-, 6- and 7-heap nim, as we did in section \\ref{sec:revisiting_alphazero}, where the statistics of LC0 on some chess positions were obtained and analysed. \n\nOn the positions of 5-heap nim, the policy and value networks after training strongly favor a few moves over the remaining ones, significantly improving the effectiveness of the search. However, this overconfidence can bring catastrophic consequences that cannot be undone if it is not well grounded. On a position in 6-heap nim where the policy network assigned the top 4 prior probabilities to 4 losing moves, the algorithm nevertheless converges to a winning move, since the policy is not obsessed with any particular move and the value network evaluates the majority of the next positions correctly. 
On the initial position of 7-heap nim, the policy and value networks fail to provide any constructive guidance that benefits the search. \n\n\n\nThe analysis of the initial position of 5-heap nim from the results of the trained model is shown in Fig. \\ref{fig:alphazero_5piles_1}. All the values are calculated from the perspective of the player who is making the move in these positions. Each move is represented by a letter and a digit, where the letter is the label of the heap and the digit denotes the number of counters the move removes from this heap. For instance, the move e9 removes 9 counters from the heap labelled e. The policy network assigns a 97.9\\% prior probability to the winning move e9, and the value network accurately estimates that the position resulting from e9 is advantageous to the current player. \n\nHowever, on an intermediate position similar to the initial position, as shown in \\ref{pos:5heaps_2}, the search is misled by the policy network, which is obsessed with the losing move e8: the prior probability assigned to this move is 97.4\\%, overwhelmingly larger than that of any other move. This leads to the misfortune that the winning move is still rejected after 4 million simulations, even though the value network predicts the value of the position after e8 accurately. 
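Such a misjudged position can be audited exactly, since the set of winning moves follows from the nim-sum. A small self-contained check (normal-play convention, letter-plus-digit move notation as above; illustrative code only) confirms that e7 is the unique winning move in $[1, 3, 5, 5, 9]$:

```python
from functools import reduce
from operator import xor

def winning_moves(heaps, labels="abcdefg"):
    """Winning moves in letter+digit notation (heap label + counters removed)."""
    s = reduce(xor, heaps, 0)  # nim-sum of the position
    return [f"{labels[i]}{h - (h ^ s)}"
            for i, h in enumerate(heaps) if (h ^ s) < h]

print(winning_moves([1, 3, 5, 5, 9]))  # nim-sum 11 -> ['e7']
```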
\n\n\\begin{figure}[H]\n \\centering\n \\subfloat[\\label{pos:5heaps_2}{5 heaps: [1, 3, 5, 5, 9]}]{\\includegraphics[width=0.3\\textwidth]{figures\/5_piles_nim_position.eps}}\n \\centering\n \\hspace{1.5em}\n \\subfloat[Evaluations from the policy and value network for the 4 moves with the highest prior probabilities.]{\n \\begin{tabular}{p{0.25\\textwidth}p{0.08\\textwidth}p{0.05\\textwidth}p{0.05\\textwidth}p{0.05\\textwidth}}\n \\toprule\n \\textbf{Move} & e8 & b1 & a1 & d4 \\\\ \n \\midrule\n \\textbf{Winning Move} & no & no & no & no \\\\ \n \\midrule\n \\textbf{Prior Probability} & 97.4\\% & 0.7\\% & 0.4\\% & 0.2\\%\\\\\n \\midrule\n \\textbf{Win Probability} & 0.14\\% & 5.6\\% & 12.5\\% & 9.8\\% \\\\\n \\midrule\n \\textbf{V-value} & -0.99 & -0.88 & -0.74 & -0.80 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{1em}\n }\n \\vspace{1em}\n \\subfloat[\\label{tab:heaps5_2}The posterior probability of the winning move e7 given different numbers of MCTS simulations]{ \n \\begin{tabular}{cccccc}\n \\toprule\n & \\multicolumn{5}{c}{\\textbf{Number of simulations}} \\\\\n \\midrule\n \\textbf{Winning Move}& 64 & 256 & 1024 & 65536 & 4194304 \\\\ \n \\midrule\n e7 & 1.56\\% & 0.39\\% & 0.09\\% & 0.0015\\% & 0.00021\\% \\\\\n \\bottomrule\n \\end{tabular}\n }\n \n \\caption{\\label{fig:alphazero_5piles_2} The only winning move available in this position is e7. The policy network offers a completely wrong estimate: the losing move e8 is assigned a 97.4\\% prior probability. 
Although the value network predicts with high confidence that the position resulting from e8 is disadvantageous for the current player, the algorithm fails to find the winning move even after more than 4 million simulations.}\n\\end{figure}\n\nFor the initial position of 6-heap nim, as shown in \\ref{pos:6heaps_1}, the policy network strongly promotes the winning move b2, and the value network also evaluates the resulting positions correctly with high confidence.\n\n\\begin{figure}[H]\n \\centering\n \\subfloat[\\label{pos:6heaps_1}{6 heaps: [1,3,5,7,9,11]}]{\\includegraphics[width=0.3\\textwidth]{figures\/6piles_nim_position.eps}}\n \\centering\n \\hspace{1.5em}\n \\subfloat[\\label{tab:heaps6_1}Evaluations from the policy and value network for the 4 moves with the highest prior probabilities.]{\n \\begin{tabular}{lcccc}\n \\toprule\n \\textbf{Move} & b2 & a1 & b1 & d6 \\\\ \n \\midrule\n \\textbf{Winning Move} & yes & no & no & no \\\\ \n \\midrule\n \\textbf{Prior Probability} & 92.8\\% & 4.0\\% & 0.64\\% & 0.63\\% \\\\\n \\midrule\n \\textbf{Win Probability} & 78.0\\% & 14.3\\% & 4.76\\% & 2.17\\% \\\\\n \\midrule\n \\textbf{V-value} & 0.56 & -0.71 & -0.90 & -0.95 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{1em}\n }\n \\caption{\\label{fig:alphazero_6piles_1} The policy and value networks succeed in predicting the winning move and estimating the resulting position. The probability assigned to the winning move b2 exceeds the second-largest probability, assigned to move a1, by a large margin. }\n\\end{figure}\n\nThere are also positions with 6 heaps where the policy and value networks both stumble, one of which is shown in \\ref{pos:6heaps_2}. However, unlike the situation in the 5-heap nim position shown in Fig. \\ref{fig:alphazero_5piles_2}, the prior probabilities assigned to losing moves are not high enough to completely mislead the search. 
As shown in table \\ref{tab:heaps6_2}, the winning move f10 is identified by the MCTS after 65536 simulations, and the posterior probability of choosing it increases as more simulations are conducted. \n\n\\begin{figure}[!h]\n \\centering\n \\subfloat[\\label{pos:6heaps_2}{6 heaps: [1,3,3,5,4,10]}]{\\includegraphics[width=0.3\\textwidth]{figures\/6_piles_nim_position.eps}}\n \\hspace{1.5em}\n \\subfloat[Evaluations from the policy and value network for the 4 moves with the highest prior probabilities.]{\n \\begin{tabular}{lcccc}\n \\toprule\n \\textbf{Move} & f3 & f2 & f4 & f9 \\\\ \n \\midrule\n \\textbf{Winning Move} & no & no & no & no \\\\ \n \\midrule\n \\textbf{Prior Probability} & 47.4\\% & 12.6\\% & 10.1\\% & 7.5\\% \\\\\n \\midrule\n \\textbf{Win Probability} & 0.18\\% & 1.03\\% & 0.03\\% & 81.2\\% \\\\\n \\midrule\n \\textbf{V Value} & -0.99 & -0.97 & -0.99 & 0.62 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{1em}\n }\n \\vspace{1em}\n \\subfloat[\\label{tab:heaps6_2}The posterior probability of the winning move f10 given different numbers of MCTS simulations]{ \n \\begin{tabular}{cccccc}\n \\toprule\n & \\multicolumn{5}{c}{\\textbf{Number of simulations}} \\\\\n \\midrule\n \\textbf{Winning Move}& 64 & 256 & 1024 & 65536 & 4194304 \\\\ \n \\midrule\n f10 & 4.68\\% & 1.17\\% & 0.29\\% & 53.7\\% & 99.1\\% \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{\\label{fig:alphazero_6piles_2} In this position, the only winning move is f10. However, the top 4 probabilities yielded by the policy network are all assigned to losing moves, causing the algorithm to fail to find the winning move even after more than 1000 simulations. 
Fortunately, the value network predicts with high confidence that the positions resulting from moves f3, f2 and f4 are in favor of the opponent of the current player, making the winning move f10 stand out after 65536 simulations; its posterior probability is boosted further with more simulations.}\n\\end{figure}\n\nFor the initial position of 7-heap nim, shown in Fig. \\ref{pos:7heaps}, the prior probabilities from the policy network are almost equal for all moves, winning or losing. The value network evaluates all the resulting positions as having around 50\\% winning probability. These predictions contribute nothing to the search, to the extent that the winning moves are not recognized by the MCTS after more than 4 million simulations. \n\n\\begin{figure}[H]\n \\centering\n \\subfloat[\\label{pos:7heaps}{7 heaps: [1, 3, 5, 7, 9, 11, 13]}]{\\includegraphics[width=0.3\\textwidth]{figures\/7piles_nim_position.eps}}\n \\centering\n \\hspace{1.5em}\n \\subfloat[Evaluations from the policy and value network for the 4 moves with the highest prior probabilities.]{\n \\begin{tabular}{lcccc}\n \\toprule\n \\textbf{Move} & c4 & c3 & c2 & c5 \\\\ \n \\midrule\n \\textbf{Winning Move} & no & no & no & no \\\\ \n \\midrule\n \\textbf{Prior Probability} & 4.37\\% & 4.06\\% & 3.95\\% & 3.88\\% \\\\\n \\midrule\n \\textbf{Win Probability} & 50.1\\% & 49.5\\% & 48.9\\% & 50.1\\% \\\\\n \\midrule\n \\textbf{V Value} & 0.003 & -0.009 & -0.02 & 0.003 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{1em}\n }\n \\vspace{1em}\n \\subfloat[The posterior probability of the winning moves e7, f7 and g11 given different numbers of MCTS simulations]{ \n \\begin{tabular}{cccccc}\n \\toprule\n & \\multicolumn{5}{c}{\\textbf{Number of simulations}} \\\\\n \\midrule\n \\textbf{Winning Move}& 64 & 256 & 1024 & 65536 & 4194304 \\\\ \n \\midrule\n e7 & 1.56\\% & 1.17\\% & 1.26\\% & 2.79\\% & 0.38\\% \\\\\n \\midrule\n f7 & 1.56\\% & 0.17\\% & 1.66\\% & 0.89\\% & 1.15\\% \\\\\n 
\\midrule\n g11 & 1.56\\% & not visited & 1.07\\% & 1.61\\% & 0.65\\% \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{\\label{fig:alphazero_7piles} The 4 moves with the highest prior probabilities are losing moves, and the policy network does not particularly favor any of them. The evaluation from the value network on these positions is near 0, indicating that they all have around a 50\\% winning probability.}\n\\end{figure}\n\nIn nim with 5, 6 and 7 heaps, there are positions where the policy and value networks succeed and positions where they fail. In general, the confidence in choosing the winning move decreases, and the difficulty of accurately evaluating positions increases, as the board size (and state space) grows. \n\nOn relatively small boards, the parity issues pertaining to computing the correct nim-sum do not get involved, as calculating the parity is feasible, as shown in \\ref{sec:value_network}. Learning nim on larger boards is dramatically more difficult because finding the right move and correctly evaluating large board positions are parity-related, and the huge state space forces the policy and value networks to generalize to unseen states, which, as we have argued, poses a fundamental challenge to RL algorithms.\n\n\\section{Concluding remarks and conjectures}\n\\label{sec:conclusion}\n\nThe AlphaZero paradigm can be seen as part of a larger ambition to build intelligent systems that can learn to solve complex tasks by themselves. The ambition, as articulated by Demis Hassabis, the CEO of DeepMind, is to solve intelligence and then use it to solve everything else \\cite{sadler2019game}. While this ambition is truly inspiring, the results in this paper remind us that thinking - even in strategy games - varies fundamentally in nature. General AI will need to handle different modes of thinking. \n\nFrom a human perspective, games like chess, Go, and shogi somehow feel distinct from nim and impartial games. 
The former games have clear criteria for good play; progress in learning them is typically incremental, and pattern recognition plays a central role. Nim, by contrast, can be mastered by humans, but through an entirely different thought process and analysis. Learning nim typically happens in non-incremental steps (as was the case for our LSTM learning experiments discussed in section \\ref{sec:value_network}). It seems inconceivable that a human could learn to master nim on a large board without also having solved the game for any board size. Thus, when humans master nim, transfer learning and abstract generalisation play an important role. \n\nAlphaZero has been truly groundbreaking, but new ideas are needed to extend reinforcement learning to games that, like nim, seem to require high-level non-associative thinking. For humans, counting is a straightforward process. Even young children understand that the number of objects is an invariant, so recounting should (in principle) lead to the same number. In mathematics, this principle is referred to as the pigeon-hole principle; however, as we explained, such basic counting principles require a kind of insight that seems challenging for AIs to develop solely on their own. \n\nWe acknowledge that \\textit{parity is a well-known nuisance for neural networks, trivial to compute but loved by theoreticians to bang NN engineers over the head}. The motivation of our work is not to bang anyone on the head, but to understand how the AlphaZero paradigm can be expanded. The concept of self-play in model-based AI is just one way an agent can learn by \"exploring around\". However, to fully understand this, it is essential to understand the actual way self-play NN-based RL algorithms learn. This is why we looked carefully and closely at the way the AlphaZero clone LC0 played chess. 
And it is why this paper on impartial games might help navigate further research on expanding AlphaZero-style learning. \n\nParity-related problems occur naturally for a large class of combinatorial games. However, we discovered that the difficulty of learning to master these games is much more dramatic than just learning parity. The problem is robust and tenacious, and fiddling with the hyperparameters of the algorithm does not seem to have much effect. We did break down the AlphaZero-style algorithm and check the various components separately, and we even tested the parity issues with novel architectures that were not part of the original AlphaZero algorithms. \n\nThere are many factors impacting the performance of our AlphaZero-style nim algorithm. There is an unlimited number of settings, so it is impossible to try all of them out, and proving the results rigorously seems well outside current theoretical techniques. From a philosophy of science perspective, one can always object that a specific pattern might fail at values not tested. If such objections were taken seriously, science would be impossible. Consider experiments suggesting some formula, \\textit{e.g.} $F = ma$. It is always (logically) possible that the formula would seriously fail for values not tested; this kind of objection can be made to any experimental result. Experimental results should always be seen as part of a wider theory and understanding that is aligned with experiments but in principle could be falsified \\cite{popper1972objective}.\n\nWe anticipated that nim would be practically unlearnable by AlphaZero-style algorithms on boards with 50+ heaps. However, to our surprise, the practical upper bound of learnability was much lower than we expected, as the algorithms experienced substantial limitations already with seven heaps. Our work also shows there is an issue when attempting to apply NNs (e.g. 
Stockfish NNUE-style NNs) to guide the search in the new algorithms \\cite{nasu2018efficiently, maharaj2021chess} for impartial games, due to the difficulty the networks have in guiding the search. \n\nAnother point we would like to stress is that the difficulty of learning nim on small boards is not even due to parity issues. The parity computations required for correct nim-sums have not yet kicked in on small boards, as learning parity for small values of $n$ (\\textit{e.g.} $n=7$ for 7 heaps) is, as our experiments showed, quite feasible. \nOn the board $[1,3,5,7,9,11,13]$ we established that no positive feedback loop occurs, and the policy and value networks essentially both drift around without the ability to learn anything besides memorizing some heuristics derived from MCTS. Remarkably, at least with the resources we had available, this happened despite the fact that the state space is relatively small and most states will be seen multiple times during training if all the positions are fully explored. On larger boards, where the state space exceeds any number of states that can feasibly be reached during training, the value and policy networks need to generalise to unseen positions. Failing to generalise adds additional noise to the learning, as the evaluation of some positions becomes random and uncorrelated with the correct values, preventing the positive feedback mechanism of RL from functioning properly. On top of this, on larger boards the difficulty of learning the parity function also kicks in, compounding an already very noisy situation. \n \nAlphaZero employs a strategy that combines evaluation with search-guided calculation. However, some aspects of impartial games seem to require mathematical thinking guided by abstract, symbolic reasoning. Some PSPACE-complete impartial games, \\textit{e.g.} node kayles and geography, can mimic NP-hard problems which are intractable. 
Thus any algorithm that could learn such games to perfection would be able to break current cryptography. However, other impartial games can be solved by mathematical reasoning, making it possible to express optimal strategies in simple mathematical terms; \\textit{e.g.} sprouts, which has been analysed to a deep level, might eventually be solvable by an AI with sufficient built-in reasoning abilities. \n\nDespite the success of AlphaZero, our work has shown that fundamentally new ideas are needed for an AlphaZero-style approach to be successful for impartial games. When humans learn to master nim, they might scribble on a piece of paper, play around with small toy examples, form conjectures, etc. Maybe it is possible to apply AlphaZero-style reinforcement learning in an extended setting that takes auxiliary actions external to the game into account, such as applying abstract transformations, or reading and writing to an external memory. These meta-actions, analogous to the actions the algorithm takes during simulations, are not directly linked to the move it makes but could significantly boost its ability to plan ahead. The results in this paper indicate that new ideas will be needed to make such approaches work.\n\n\n\\section*{Acknowledgment}\n\nThis work was funded by the Chinese Scholarship Council (CSC). We appreciate the assistance of the IT Research team at Queen Mary University of London in supporting our use of the Apocrita HPC facility. \nFinally, we would like to thank the Leela Chess Zero development team for providing detailed instructions for the use and workings of LC0. \n\n\n\\printbibliography\n\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n \n The universe's matter content is dominated by elusive dark matter (DM), which has been one of the main topics in astronomical research. 
The idea of a dark or invisible mass was proposed numerous times, based on the motions of stars in the Milky Way disk \\citep{OortJ_32a}, the motion of galaxies in the Coma cluster \\citep{ZwickyF_33a}, and a lesser-known argument made by \\citet{PeeblesP_67a} using an upper limit on the mean mass density of galaxies from the average spectrum of galaxies (i.e. from the night-sky brightness). Nonetheless, the concept of DM became part of mainstream research only in the 1970s, based on the remarkable fact that the rotation curves (RC) of massive galaxies remain flat at large galactocentric distances \\citep{RubinV_70a}. It was quickly realized that these flat rotation curves at large radii could not be explained by the Newtonian gravity of the visible matter alone, but instead implied the presence of an unobserved mass component attributed to a DM halo.\n \nToday, the cold-DM (CDM) framework, in which the large-scale structure originates from the growth of the initial density fluctuations \\citep{PeeblesP_70a,PeeblesP_74a}, is very successful in reproducing the large-scale structure \\citep[e.g.][]{SpringelV_06a}. However, understanding the nature and properties of DM on galactic scales remains one of the greatest challenges of modern physics and cosmology \\citep[see ][for a review]{BullockJ_17a}. \n\nIn this context, disentangling and understanding the relative distributions of baryons and dark matter in galaxies is still best achieved from a careful analysis of galaxies' RCs on galactic scales. 
At redshift $z=0$, this type of analysis is mature, with a wealth of studies published in the past 20-30 years using a variety of dynamical tracers such as \ion{H}{I}\ \citep[e.g.][]{deBlokW_97a,deBlokW_01a,VandenBoschF_00a}, \hbox{{\rm H}$\alpha$}\ in the GHASP survey \citep{SpanoM_08a,KorsagaM_18a,KorsagaM_19b}, or a combination of \ion{H}{I}{} \& \hbox{{\rm H}$\alpha$}\ as in the recent SPARC sample \citep{AllaertF_17a,KatzH_17a,LiLelli_20a} and the DiskMass survey \citep{BershadyM_10a,MartinssonT_13a}. These studies have shown that, in low surface brightness (LSB) galaxies, the DM profiles have a flat inner density 'core', contrary to the expectation from DM-only simulations that DM haloes ought to have a steep central density profile or 'cusp' \citep[e.g.][NFW]{NavarroJ_97a}.\nThis cusp-core debate may be resolved within CDM with feedback processes \citep[e.g.][]{NavarroJ_96b,PontzenA_12a,TeyssierR_13a,DiCintioA_14a,LazarA_20a,FreundlichJ_20a} transforming cusps into cores~\footnote{Recently, \citet{PinedaJ_17a} argued that NFW profiles can be mistaken for cores when the PSF\/beam is not taken into account.}, a process that could already be at work at $z=1$ \citep{TolletE_16a}.\nDM-only simulations in the $\Lambda$CDM context have made clear predictions for the properties of DM halos, such as their concentration and its evolution \n \citep[e.g.][]{BullockJ_01b,EkeV_01a,WechslerR_02a,DuffyM_08a,LudlowA_14a,DuttonA_14a,CorreaC_15c}, but the $c-M$ relation remains untested beyond the local universe in SFGs \citep[e.g.][]{AllaertF_17a,KatzH_17a}. \n \n At high redshifts, where 21cm observations are not yet available, the DM content of galaxies must instead be measured from the kinematics in the outskirts of individual star-forming galaxies (SFGs) using nebular lines (e.g.
\\hbox{{\\rm H}$\\alpha$}), at radii up to 10-15 kpc (2-3 times the half-light radius \\hbox{$R_{\\rm e}$}) where the signal-to-noise ratio (S\/N) per spaxel drops approximately exponentially and quickly falls below unity. \nDisk-halo decompositions have proven to be possible at $z\\simeq2$ in the pioneering work of \\citet{GenzelR_17a} using very deep ($>30$ hr) near-IR integral field spectroscopy (IFS) on a small sample of six massive star-forming galaxies (SFGs). Exploring lower mass SFGs, this exercise requires a stacking approach \\citep[as in][]{LangP_17a,TileyA_19b} or deep IFS observations \\citep[as in][]{GenzelR_20a}. These studies of massive SFGs with $M_\\star>10^{11}\\hbox{M$_{\\odot}$}$ showed that RCs are declining at large radii, indicative of a low DM fraction within \\hbox{$R_{\\rm e}$}; see also \\citet{WuytsS_16a,UblerH_17a,AbrilV_21a} for dynamical estimates of DM fractions.\n \n Recently, 3D algorithms such as \\textsc{ GalPaK$^{\\rm 3D}$}\\ \\citep{BoucheN_15a} or \\textsc{$^{\\rm 3D}$Barolo} \\citep{DiTeodoroE_15a} have pushed the limits of what can be achieved at high-redshifts.\n For instance, one can study the kinematics of low mass SFGs, down to $10^8$ \\hbox{M$_{\\odot}$}\\ \\citep[as in][]{BoucheN_21a} in the regime of low S\/Ns\n or study the kinematics of SFGs at large galactic radii $\\sim3\\times \\hbox{$R_{\\rm e}$}$ as in \\citet{SharmaG_21a}, when combined with stacking techniques. 
Most relevant for this paper,\ndisk-halo decompositions of distant galaxies have been performed with \textsc{$^{\rm 3D}$Barolo} at $z\simeq4$ on bright submm [CII] ALMA sources \citep{RizzoF_20a,NeelemanM_20a,FraternaliF_21a}.\nIn addition, when used in combination with stacking or lensing, 3D algorithms are powerful tools to extract resolved kinematics at very high redshifts as in \citet{RizzoF_21a}.\n \n \n This paper aims to show that a disk-halo decomposition can be achieved for {\it individual} low-mass SFGs at intermediate redshifts ($0.6<z<1.1$).\n\nWe take Bayes factors of $>3$ ($>20$) to indicate positive (strong) evidence, respectively. This corresponds to a logarithmic difference $\Delta \ln \cal Z$ of 2 and 6, respectively, on the deviance scale. \nThus, we use a minimum $\Delta \ln \cal Z$ of 6 as our threshold to discriminate between models.\nTable~\ref{table:evidence} shows the logarithmic difference of the Bayes factors, $\Delta\ln\cal Z$, for the NFW DM models with respect to the fiducial DC14 models.\n\n\n\n \n \n \n \n\n\subsection{Stellar rotation from HST photometry}\n\label{section:mge}\n\n \n\n{In order to independently estimate the contribution of the stellar component to the RC, we parameterized the light distribution of the HST\/F160W images with the MGE method \citep{MonnetG_92a,EmsellemE_94b}\footnote{An implementation of the method \citep{CappellariM_02a} is available at \url{https:\/\/www-astro.physics.ox.ac.uk\/~mxc\/software\/}}. For each galaxy we made an MGE model by considering the PSF of the HST\/F160W filter, removing the sky level and masking any companion galaxies or stars. Each MGE model consists of a set of concentric two-dimensional Gaussians defined by the peak intensity, the dispersion and the axial ratio or flattening. The Gaussian with the lowest flattening is critical as it sets the lower limit to the inclination at which the object can be projected \citep{MonnetG_92a}.
Therefore, following the prescription from \citet{ScottN_13a}, we also optimise the allowed range of axial ratios of all MGE models until the fits become unacceptable. { In practice, convergence is achieved when the mean absolute deviation of the model \nfor a given axial ratio pair increases by less than 10 per cent over the previous step.} Finally, we convert the Gaussian peak counts to surface brightness using the WFC3 zeropoints from the headers, and then to surface density (in L$_\odot$ pc$^{-2}$) adopting 4.60 for the absolute magnitude of the Sun in the F160W band \citep{WillmerC_18a}.\n\nWe follow the projection formulas in \citet{MonnetG_92a} and the steps outlined in \citet{EmsellemE_94a,EmsellemE_94b}\n to determine the gravitational potential for our MGE models \cite[see also Appendix A of][]{CappellariM_02b}. \nThe critical parameters here are the distance, the inclination, and the mass-to-light ratio of the galaxy. The distances are simply calculated from the redshifts and our assumed Planck 2015 cosmology. \n\nAs we assume that the stellar component is distributed in a disk, we use the axial ratios measured from the HST\/F160W images to derive the inclinations of the galaxies. An alternative approach would be to use the inclinations returned by the \textsc{ GalPaK$^{\rm 3D}$} models, which leads to almost identical results.\n\nWe estimate the mass-to-light ratios of the galaxies by combining the stellar masses obtained from photometric SED fits (see \S~\ref{section:sample}) with the total light obtained from the MGE models. Finally, we use the module {\tt mge\_vcirc} from the JAM code \citep{CappellariM_08a} to calculate the circular velocity in the equatorial plane of each galaxy. \n \n\n\n\n\begin{figure}\n\centering\n\includegraphics[width=0.45\textwidth]{figs\/figure3.png}\n\caption{Galaxy stellar masses.
Comparison between the stellar mass $M_{\\star}$ obtained from \\textsc{ GalPaK$^{\\rm 3D}$}\\ disk-halo fits and the SED-based $M_{\\star}$ derived from the {\\it HST} photometry. The error bars represent the 95\\% confidence intervals.\nThe $M_{\\star}$ obtained with \\textsc{ GalPaK$^{\\rm 3D}$}\\ (one of the 14 free parameters in \\S~\\ref{section:disk:halo}) and from {\\it HST} photometry are completely independent, except for ID912 (open circle). The dashed line shows the 1:1 line and this figure shows the two are in excellent agreement, except for ID919 and ID943.\n}\n\\label{fig:mdisk}\n\\end{figure}\n\n\n\\begin{figure*}\n\\centering \n\\includegraphics[width=0.9\\textwidth]{figs\/figure4.png}\n\\caption{Disk-halo decompositions for the 9\\ galaxies in our sample (ordered by increasing $M_\\star$).\nThe solid black line represents the total rotation velocity $v_\\perp(r)$. All velocities are `intrinsic', i.e. corrected for inclination and instrumental effects.\nThe dot-dashed line represents the circular velocity $v_{\\rm c}(r)$, i.e. 
$v_\\perp(r)$ corrected for asymetric drift.\nThe gray band represents the intrinsic universal rotation curve (URC) using the parametrization of PSS96 as in \\Fig{fig:examples}.\nThe solid red (blue) line represents the stellar (gas) component $v_\\star(r)$ obtained from \\textsc{ GalPaK$^{\\rm 3D}$}\\ modeling of the MUSE \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ data.\nThe dotted red line represents the stellar component obtained using a MGE decomposition of the {\\it HST}\/F160W stellar continuum images.\nThe green line represents the DM component.\nThe vertical dotted lines are as in \\Fig{fig:examples}.\n\\label{fig:diskhalo}\n}\n\\end{figure*}\n\n\n\\section{Results}\n\\label{section:results}\n\n \n\n\n\\subsection{The diversity of rotation curve shapes}\n\n\nIn Figure \\ref{fig:examples}, we show the morpho-kinematics of the galaxies used in this study.\nThe first column shows the stellar continuum from {\\it HST}\/F160W.\nThe second column shows the \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ flux map obtained from the \\textsc{CAMEL}\\footnote{Available at \\url{https:\/\/gitlab.lam.fr\/bepinat\/CAMEL}} algorithm \\citep{EpinatB_12a}.\nThe third column shows the \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ surface brightness profile as a function of radius $r$, in units of $\\hbox{$R_{\\rm e}$}$. \nThe fourth column shows the observed 2-dimensional velocity field $v_{\\rm 2d}$ obtained from \\textsc{CAMEL}.\nThe fifth column shows the intrinsic rotation velocity $v_{\\perp}(r)$ corrected for inclination and instrumental effects (beam smearing, see \\S~\\ref{section:methodology}), using the parametric model of PSS96 (see \\S~\\ref{section:kinematics}). 
The vertical dotted lines represent the radius at which the S\/N per spaxel reaches 0.3, and indicates the limits of our data.\nThe last column shows the residual map, obtained from computing the standard deviation in the residual cube along the wavelength direction.\n\nThis figure shows that $z=1$ RCs have diverse shapes \\citep[as in][]{TileyA_19b,GenzelR_20a} with mostly increasing but some presenting declining RCs at large radii as in \\citet{GenzelR_17a}. The diversity, albeit for a smaller sample, is similar to the diversity observed at $z=0$ \\citep[e.g.][]{PersicM_96a,CatinellaB_06a,MartinssonT_13b,KatzH_17a}.\n\n \n\n\n\\subsection{The disk-halo decomposition}\n\nWe now turn to our disk-halo decomposition using the method described in \\S~\\ref{section:disk:halo}.\nFor each SFG, we ran several combinations of disk-halo models, such as different halo components (DC14\/NFW), different disk components (Freeman\/MGE), with or without a bulge, with various asymmetric drift corrections and chose the model that best fit the data for each galaxy according to the model evidence. \n We find that the DC14 halo model is generally preferred over a NFW profile and the resulting model parameters are listed in Table~\\ref{tab:results}.\nThe evidence for the DC14 models is discussed further in \\S~\\ref{section:cores}.\n \n\n Before showing the disk-halo decompositions, we compare the disk stellar mass $M_\\star$ ($M_\\star$ being one of the 14 free parameters) obtained from the 3D fits with the SED-derived $M_\\star$.\n This comparison is performed in \\Fig{fig:mdisk} where the total $M_\\star$ (disk$+$bulge from our fits) is plotted along the $x$-axis. 
\nThis figure shows that there is relatively good agreement between the disk mass estimates from our \textsc{ GalPaK$^{\rm 3D}$}\ model fits (described in \S~\ref{section:disk:halo}) and the SED-based ones, except for ID919 and ID943.\nIn other words, our 3D disk-halo decomposition yields a disk mass consistent with the SED-derived $M_\star$, \nand thus opens the possibility of constraining disk stellar masses from the rotation curves of distant, kinematically non-disturbed galaxies.\n \n\n\n\nThe disk-halo decompositions (deprojected and `deconvolved' from instrumental effects) \nusing our 3D-modeling approach with \textsc{ GalPaK$^{\rm 3D}$}\ are shown in Figure~\ref{fig:diskhalo},\nwhere the panels are ordered by increasing $M_\star$ as in \Fig{fig:f160w}.\nThe disk\/DM models used are listed in \Tab{tab:results}.\nIn each panel, the solid black line shows the total rotation velocity $v_\perp(r)$, while the dot-dashed line represents the circular velocity $v_{\rm c}(r)$, i.e. $v_\perp(r)$ corrected for asymmetric drift. All velocities are `intrinsic', meaning corrected for inclination and instrumental effects.\nThe gray band represents the URC model as in \Fig{fig:examples}.\nThe solid green, red and blue lines represent the dark-matter $v_{\rm dm}(r)$, stellar $v_{\star}(r)$, and gas components $v_{\rm g}(r)$, respectively.\nThe dotted red lines represent the stellar component obtained from the {\it HST}\/F160W images as discussed in \S~\ref{section:mge}.\n\nComparing the solid with the dotted red lines in \Fig{fig:diskhalo}, one can see that there is generally good agreement between $v_{\star}(r)$ obtained from the {\it HST} photometry and from our disk-halo decomposition with \textsc{ GalPaK$^{\rm 3D}$}\ of the MUSE data, except again for ID919 and ID943.
This comparison shows that the disk-halo decomposition obtained from the \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ line agrees with the $v_{\\star}$ from the mass profile obtained on the {\\it HST} photometry. {One should note that the stellar mass $M_\\star$ from SED fitting is not used as a prior in our \\textsc{ GalPaK$^{\\rm 3D}$}\\ fits, except for ID937 because the data for this galaxy prefers a NFW profile, which then becomes degenerate with $M_\\star$.}\nFor the interested reader, {the potential degeneracies between $M_\\star$ and $M_{\\rm vir}$ are shown in \\Fig{fig:corner}.}\n\n\n\n\n\n\n\\subsection{The stellar-to-halo mass relation} \n \n The $M_\\star-M_{\\rm vir}$ relation in $\\Lambda$CDM is a well-known scaling relation that reflects \n the efficiency of feedback. Hence, measuring this scaling relation in individual galaxies is often seen as a crucial constraint on models for galaxy formation. \n This scaling relation can be constructed from abundance matching techniques \n \\citep[e.g.][]{ValeA_04a,MosterB_10a,BehrooziP_13b,BehrooziP_19a}.\n Observationally, the $z=0$ stellar-to-halo relation has been constrained by numerous authors using a variety of techniques such as weak lensing and\/or clustering \\citep[e.g.][]{LeauthaudA_12a,MandelbaumR_16a}. \nDirect measurements of the $M_\\star-M_{\\rm vir}$ relation on individual galaxies using rotation curves have been made on various samples of dwarfs \\citep{ReadJ_17a}, spirals \\citep{AllaertF_17a,KatzH_17a,LapiA_18a,PostiL_19b,DiPaoloC_19a} and early type galaxies \\citep{PostiL_21a} among the most recent studies,\nand these have found a very significant scatter in this relation.\n \nIn \\Fig{fig:Behroozi} (left), we show the stellar-to-halo mass ratio $M_{\\star}\/M_{\\rm vir}$ as a function $M_\\star$. 
The blue (gray) contours show the expectation for $z=1$ SFGs in the TNG100\/50 simulations\n and the solid lines represent the $M_\star\/M_{\rm vir}$ relation from \citet{BehrooziP_19a}.\n\Fig{fig:Behroozi} (left) shows that our results are qualitatively in good agreement with the Behroozi relation.\n \n\n\citet{RomeoA_20a} argued that disk gravitational instabilities are the primary driver of galaxy scaling relations. Using a disk-averaged version of the \citet{ToomreA_64a} $Q$ stability criterion~\footnote{\citet{ObreschkowD_16a} used similar arguments to derive the \ion{H}{I}\ mass fractions.}, \citet{RomeoA_20a}\nfinds that \n\begin{equation}\n\langle Q_i\rangle=\frac{j_i\hat\sigma_i}{G M_i}=A_i \n\label{eq:jRomeo:Q}\n\end{equation}\n where $i=\star,\ion{H}{I}$ or $H_2$, $\hat\sigma_i$ is the radially averaged velocity dispersion, and $j_i$ is the total specific angular momentum.\nFor $i=\star$, $A_i\approx 0.6$.\n\nConsequently, for the stellar-to-halo mass relation with $i=\star$, $M_\star\/M_{\rm vir}$ ought to correlate with \citep{RomeoA_20b}:\n\begin{equation}\n\frac{M_\star}{M_{\rm vir}}\simeq \frac{j_\star \hat{\sigma}_\star}{G M_{\rm vir}}\n\label{eq:jRomeo}\n\end{equation}\nwhere $j_\star$ is the stellar specific angular momentum and $\hat \sigma_\star$ the radially averaged stellar dispersion. \nWe can estimate $j_\star$ from the ionized gas kinematics, namely $\log j_\star=\log j_{\rm gas}-0.25$ as in \citet{BoucheN_21a}.\nThe dispersion $\hat\sigma_\star$ is not directly accessible, but we use its scaling with $M_\star$ ($\hat\sigma_\star\propto M_\star^{0.5}$) from \citet{RomeoA_20b}, which follows from the \citet{LeroyA_08a} analysis of local galaxies.
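The disk-averaged stability argument above is easy to evaluate numerically. The following minimal sketch illustrates \Eq{eq:jRomeo:Q} and the resulting $M_\star\/M_{\rm vir}$ estimate of \Eq{eq:jRomeo}; all input values below are hypothetical, and only $G$ (in kpc (km\/s)$^2$ M$_\odot^{-1}$) is a physical constant:

```python
# Illustrative evaluation of the disk-averaged Toomre criterion (Eq. eq:jRomeo:Q),
# <Q_*> = j_* sigma_* / (G M_*), and of the implied M_*/M_vir (Eq. eq:jRomeo).
# All numerical inputs below are hypothetical, for illustration only.

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def mean_Q(j, sigma, mass):
    """Disk-averaged stability parameter <Q> = j * sigma / (G * M)."""
    return j * sigma / (G * mass)

def stellar_to_halo(j_star, sigma_star, m_vir):
    """M_star/M_vir estimate of Eq. eq:jRomeo."""
    return j_star * sigma_star / (G * m_vir)

# hypothetical z~1 disk: j_gas = 890 kpc km/s, so log j_* = log j_gas - 0.25
j_star = 890 / 10**0.25       # ~500 kpc km/s
sigma_star = 50.0             # km/s, radially averaged (hypothetical)
m_star, m_vir = 1e10, 1e12    # Msun (hypothetical)

print(mean_Q(j_star, sigma_star, m_star))          # <Q_*>, of order unity
print(stellar_to_halo(j_star, sigma_star, m_vir))  # ~ M_*/M_vir, of order 1e-2
```

For these illustrative inputs the two printed numbers are of order $A_\star\approx0.6$ and of order the expected stellar-to-halo ratio, respectively.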
\n\\Fig{fig:Behroozi} (right) shows the resulting stellar-to-halo mass ratio using $M_{\\star}$ from SED and the $M_{\\rm vir}$ values obtained from our disk-halo decomposition, where the inset shows the sample has $\\approx 0.7$, close to the expectation (\\Eq{eq:jRomeo:Q}).\n \n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figs\/figure5a.png}\n\\includegraphics[width=0.45\\textwidth]{figs\/figure5b.png} \n\\caption{The total stellar-to-halo fraction. {\\it Left}: The total stellar-to-halo fractions $M_\\star\/M_{\\rm vir}$ as a function of the stellar mass $M_{\\star}$ obtained from our 3D fits.\nThe error bars from our data are 95\\%\\ confidence intervals, and\n the open circles show the sample of \\citet{GenzelR_20a}.\n The shaded (blue contours) histogram shows the location of SFGs in the TNG simulations for $z=1$ centrals, while the gray contours show the satellites.\nThe colored lines show the \\citet{BehrooziP_19a} relation inferred from semi-empirical modeling at redshifts $z=0.5,1.0,1.5$, respectively. \n{\\it Right}: The total stellar-to-halo fractions $M_\\star\/M_{\\rm vir}$ as a function of $G M_{\\rm vir}\/j_\\star\\sigma_\\star$\n(\\Eq{eq:jRomeo}) for the galaxies in our sample. 
\nThe inset histogram shows that the sample has $ {j_\star\hat\sigma_\star}\/{G M_\star}\approx 0.7$ ($\equiv \langle Q_\star\rangle$, \Eq{eq:jRomeo:Q}), see text.\n}\n\label{fig:Behroozi}\n\end{figure*}\n\n\n \n\n\subsection{DM fractions in $z=1$ SFGs}\n\nUsing the disk-halo decomposition shown in \Fig{fig:diskhalo}, we turn to the DM fraction within $\hbox{$R_{\rm e}$}$, $f_{\rm DM}(<\hbox{$R_{\rm e}$})$, obtained by integrating the DM and disk mass profiles \nout to $\hbox{$R_{\rm e}$}$~\footnote{\citet{GenzelR_20a} used the ratio of velocities $f_{\rm DM}^v\equiv v^2_{\rm dm}\/v_{\rm tot}^2$, whereas we use the mass ratio $f_{\rm DM}^m$ (in the \citet{UblerH_20a} notation), derived from the mass profiles.}.\n\Fig{fig:fDM} shows that $f_{\rm DM}(<\hbox{$R_{\rm e}$})$ for the galaxies in our sample is larger than $50$\%\ in all cases, ranging from 60\%\ to 90\%.\nThe left (right) panel of \Fig{fig:fDM} shows $f_{\rm DM}(<\hbox{$R_{\rm e}$})$ as a function of $M_{\rm vir}$ ($\Sigma_{\star,1\/2}$, the stellar surface density within $\hbox{$R_{\rm e}$}$), respectively. \nCompared to the sample of 41 SFGs from \citet{GenzelR_20a} (open circles), our sample extends their results to the low-mass regime, with $ M_{\star}<10^{10.5}~\hbox{M$_{\odot}$}$, $ M_{\rm vir}<10^{12}~\hbox{M$_{\odot}$}$, and to lower mass surface densities $\Sigma_{\star}<10^8$~${\rm M_{\odot}~kpc^{-2}}$.
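To make the mass-ratio definition $f_{\rm DM}^m(<\hbox{$R_{\rm e}$})$ concrete, here is a minimal sketch assuming an NFW halo and a razor-thin exponential disk; all parameter values are hypothetical and purely illustrative (the actual fits use the DC14 profile and MGE or Freeman disks):

```python
import math

# Sketch of a mass-ratio DM fraction f_DM(<Re) for an NFW halo plus a thin
# exponential disk. Parameter values below are hypothetical illustrations.

def m_nfw(r, rho_s, r_s):
    """NFW enclosed mass: 4 pi rho_s r_s^3 [ln(1+x) - x/(1+x)], x = r/r_s."""
    x = r / r_s
    return 4 * math.pi * rho_s * r_s**3 * (math.log(1 + x) - x / (1 + x))

def m_disk(r, m_d, r_d):
    """Enclosed mass of an exponential disk of total mass m_d, scale length r_d."""
    x = r / r_d
    return m_d * (1 - math.exp(-x) * (1 + x))

def f_dm(r_e, rho_s, r_s, m_d, r_d):
    """Mass-ratio DM fraction within r_e."""
    m_dm, m_star = m_nfw(r_e, rho_s, r_s), m_disk(r_e, m_d, r_d)
    return m_dm / (m_dm + m_star)

# hypothetical z~1 SFG with Re = 3 kpc = 1.68 Rd
print(f_dm(r_e=3.0, rho_s=2e7, r_s=10.0, m_d=1e10, r_d=3.0 / 1.68))
```

For these illustrative parameters the enclosed DM and disk masses are comparable, giving $f_{\rm DM}(<\hbox{$R_{\rm e}$})\approx0.6$, i.e. at the low end of the range found for our sample.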
\n\nThe relation between $f_{\rm DM}$ and $\Sigma_{\star,1\/2}$ in \Fig{fig:fDM} is tighter and follows the expectation for $z=1$ SFGs in the TNG100\/50 simulations (blue contours) \citep{LovellM_18a,UblerH_20a}, except at high masses.\n\citet{GenzelR_20a} already noted that the correlation with $\Sigma_{\star}$ is better than with $V_{\rm vir}$ or $M_{\rm vir}$.\nThis anti-correlation between the baryonic surface density and the DM fraction has been noted at $z=0$ in several disk surveys \citep[e.g.][see their Fig.23]{BovyJ_13a,CourteauS_15a}.\n\nIn \S~\ref{section:discussion:fDM}, we discuss the implications of this $f_{\rm DM}-\Sigma_\star$ relation and its connection to other scaling relations.\n\n\begin{figure*}\n\centering\n\includegraphics[width=0.9\textwidth]{figs\/figure6.png}\n\caption{ DM fractions for our SFGs. {\bf a)} {\it (left)}: The DM fractions within the half-light radius $\hbox{$R_{\rm e}$}$, $f_{\rm DM}(<\hbox{$R_{\rm e}$})$, as a function of halo mass, $M_{\rm vir}$. The dashed line represents the downward trend of \citet{GenzelR_20a}.\n{\bf b)}{\it (right)}: The DM fractions within $\hbox{$R_{\rm e}$}$ as a function of the stellar mass surface density $\Sigma_{\star,1\/2}$ within $\hbox{$R_{\rm e}$}$.\nIn both panels, the error bars from our data are 95\%\ confidence intervals, and\n the open circles show the sample of \citet{GenzelR_20a}.\n The shaded (blue contours) histogram shows the location of SFGs in the TNG100 simulations for $z=1$ central SFGs, while the gray contours show the satellites.\n The dotted line represents the toy model derived from the TF relation (Eq.~\ref{eq:toymodel}).\n}\n\label{fig:fDM}\n\end{figure*}\n\n \n\subsection{DM halo properties.
The $c-M$ scaling relation}\n \nHaving shown (Figs~\ref{fig:mdisk}-\ref{fig:diskhalo}) that the baryonic component from our 3D fits is reliable, we now turn to the DM properties of the galaxies, and in particular to the concentration-halo mass relation ($c_{\rm vir}-M_{\rm vir}$).\n\nThe $c-M$ relation predicted from $\Lambda$CDM models \citep[e.g.][]{BullockJ_01b,LudlowA_14a,DuttonA_14a,CorreaC_15c} is often tested in the local universe \citep[e.g.][]{AllaertF_17a,KatzH_17a,LeierD_12a,LeierD_16a,LeierD_21a,WassermanA_18a}, but rarely beyond redshift $z=0$ except perhaps in massive clusters \citep[e.g.][]{BuoteA_07a,EttoriS_10a,SerenoM_15a,AmodeoS_16a,BivianoA_17a}. \nThese studies generally agree with the predicted mild anti-correlation between concentration and virial mass.\n\n\n \n \Fig{fig:cMvir}(left) shows the $c_{\rm vir}-M_{\rm vir}$ relation for the best 6 cases in our sample, that is,\n excluding the two interacting galaxies (ID919, ID943) as well as ID15, \n whose concentration parameter remains unconstrained and degenerate with $V_{\rm vir}$ (see \Fig{fig:corner}b). The error bars represent $2\sigma$ (95\%) and are colour-coded according to the galaxy redshift.\n In \Fig{fig:cMvir}(left), the solid lines, colour-coded with redshift, represent the $c-M$ relation from \citet{DuttonA_14a}.
\n\nNote that in order to fairly compare our data to such predictions from DM-only (DMO) simulations, we show, in \Fig{fig:cMvir}, the halo concentration parameter $c_{\rm vir}$ corrected to its DMO equivalent following DC14~\footnote{See \citet{LazarA_20a} and \citet{FreundlichJ_20b} for variations on this conversion.} :\n \begin{equation}\nc_{\rm vir, \rm DMO} = \frac{ c_{\rm vir,-2} }{ 1 + 0.00003 \times \exp[3.4 (\log X + 4.5)]}\n\label{eq:DMO}.\n\end{equation}\n { Note that the correction is important only for halos with stellar-to-halo mass ratio $\log X>-1.5$ and that most of our galaxies (7 out of 9) have $\log X<-1.5$.}\n \n \n \Fig{fig:cMvir}(right) shows the corresponding scaling relation for the scale radius $r_s$, namely the $r_{s}-M_{\rm vir}$ relation. This relation, expressed in terms of $r_s$, is redshift independent. Several authors have shown, in various contexts (i.e. using pseudo-isothermal or \citet{BurkertA_95a} profiles), that this quantity scales with galaxy mass or luminosity \citep[e.g.][]{SalucciP_12a,KormendyJ_16a,DiPaoloC_19a}. For illustrative purposes, we show the recent $z=0$ sequence for low surface brightness (LSB) galaxies of \citet{DiPaoloC_19a}.\n\n\n\n \Fig{fig:cMvir} shows that 5 of the 6 SFGs tend to follow the expected scaling relations for DM, the exception being ID912.
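For reference, the DMO correction of \Eq{eq:DMO} is straightforward to evaluate; a minimal sketch, where the input values are arbitrary illustrations:

```python
import math

# The DC14 correction of Eq. (eq:DMO), mapping the fitted concentration
# c_vir,-2 to its DM-only equivalent; X = M_star/M_vir.

def c_vir_dmo(c_vir, log_x):
    """DM-only equivalent concentration following DC14 (Eq. eq:DMO)."""
    return c_vir / (1 + 0.00003 * math.exp(3.4 * (log_x + 4.5)))

# For log X well below -1.5 (most of our sample) the correction is small,
# while it grows rapidly towards larger stellar-to-halo mass ratios:
print(c_vir_dmo(10.0, -2.5))  # few per cent correction
print(c_vir_dmo(10.0, -1.0))  # large correction
```

This makes explicit why the conversion matters only for the few galaxies with $\log X>-1.5$.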
One should keep in mind that cosmological simulations predict a $c-M$ relation with a significant scatter \citep[e.g.][]{CorreaC_15c}.\n To our knowledge, \Fig{fig:cMvir} is the first test of the $c-M$ relation at $z>0$ on halos with $\log M_{\rm vir}\/\hbox{M$_{\odot}$}=11.5-12.5$, and our data appear to support the expectations from $\Lambda$CDM.\n \nThe $c_{\rm vir}-M_{\rm vir}$ or $r_s-M_{\rm vir}$ relations can be recast as a $r_s-\rho_s$ relation (from \Eq{eq:rho}).\n\Fig{fig:rhos}(left) shows the $\rho_s-r_s$ relation and confirms the well-known anti-correlation between these two quantities, with a slope of $\approx-1$ \citep[e.g.][]{SalucciP_00a,KormendyJ_04a,MartinssonT_13b,KormendyJ_16a,SpanoM_08a,SalucciP_12a,GhariA_19a,DiPaoloC_19a,LiLelli_19a},\nwhich has been found in a wide range of galaxies (dwarf disks, LSBs, spirals).\nNote that these results are similar in nature, in spite of the different contexts and assumptions used (namely $\rho_0$ vs $ \rho_{-2}$ or $\rho_s$). A detailed investigation of the differences related to these assumptions is beyond the scope of this paper.\n\nAs discussed in \citet{KormendyJ_04a}, this anti-correlation can be understood from the expected scaling relation of DM predicted by hierarchical clustering \citep{PeeblesP_74a} under initial density fluctuations that follow the power law $|\delta_k|^2 \propto k^n$ \citep{DjorgovskiS_92a}. \citet{DjorgovskiS_92a} showed that the size $R$ and density $\rho$ of DM halos should then follow $\rho\propto R^{-3(3+n)\/(5+n)}$.
For $n\\simeq-2$ on galactic scales, $\\rho\\propto R^{-1}$.\nThis anti-correlation is also naturally present in the $\\Lambda$CDM context as shown by \\citet{KravtsovA_98a} with numerical simulations.\nAs noted by many since \\citet{KormendyJ_04a}, the anti-correlation between $\\rho_s$ and $r_s$ implies a constant DM surface density $\\Sigma_s\\equiv\\rho_s\\,r_s$ \\citep[e.g.][]{DonatoF_09a,SalucciP_12a,BurkertA_15a,KormendyJ_16a,KarukesE_17a,DiPaoloC_19a}.\n\\Fig{fig:rhos}(Right) shows the resulting DM surface density $\\Sigma_s$ as a function of galaxy mass $M_d$. \nThe grey band represents the range of surface densities from \\citet{BurkertA_15a} for dwarfs, while the dashed line represents the range of densities\nfrom \\citet{DonatoF_09a,SalucciP_12a} for disks. \\citet{KormendyJ_04a} had found a value of $\\sim100$ \\hbox{M$_{\\odot}$}~pc$^{-2}$.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figs\/figure7a.png}\n\\includegraphics[width=0.45\\textwidth]{figs\/figure7b.png}\n\\caption{The size of DM cores. {\\it Left}: The halo concentration-halo mass relation. The concentrations $c_{\\rm vir}$ for $z\\simeq1$ SFGs, derived from our 3D modeling of the \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ rotation curves, \nare converted to a DM-only NFW equivalent $c_{\\rm vir, DMO}$ (see text).\n{\\it Right}: The DM core size $r_{s,\\rm DMO}\\equiv R_{\\rm vir}\/c_{\\rm vir, DMO}$ in kpc as a function of halo mass. 
The dotted line represents the observed core-mass scaling relation for $z=0$ LSBs from \\citet{DiPaoloC_19a} (see text).\nIn both panels, the solid lines represent the $c_{\\rm vir}-M_{\\rm vir}$ relation predicted by \\citet{DuttonA_14a} for DM halos, color-coded by redshift.\nThe error bars are 95\\%\\ confidence intervals ($2\\sigma$) and colour-coded also by the galaxy redshift.\n\\label{fig:cMvir}\n}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{figs\/figure8.png}\n\\caption{The halo scale radius-density relation at $z=1$. {\\it Left}: The $\\rho_s-r_s$ scaling relation for the galaxies shown in \\Fig{fig:cMvir}.\nThe error bars are 95\\%\\ confidence intervals ($2\\sigma$).\nFor comparison, the anti-correlation of \\citet{KormendyJ_04a,SpanoM_08a} and \\citet{DiPaoloC_19a} are shown.\n {\\it Right}: The DM surface density ($\\Sigma_s\\equiv\\rho_s\\,r_s$) as a function of galaxy mass. The anti-correlation in the left panel implies a constant DM surface density. The grey band represents the range of surface densities from \\citet{BurkertA_15a} for dwarfs. The constant densities of \\citet{KormendyJ_04a} and \\citet{DonatoF_09a} are shown as the dotted, dot-dashed lines, respectively.\n\\label{fig:rhos}\n}\n\\end{figure*}\n\n\n\n\n\\begin{table}\n\\centering\n\\small\n\\caption{Bayesian evidences for the \\textsc{ GalPaK$^{\\rm 3D}$}\\ fits. \\label{table:evidence}\n(1) Galaxy ID;\n(2) Surface brightness profile; (3) Kinematic model (DM\/Baryon); (4) External prior used; (5) Evidence $\\ln Z$ on the deviance scale; (6) Bayesian factor between `NFW' and the `DC14' models (see \\S~\\ref{section:disk:halo}). 
\n}\n\\begin{tabular}{rrrrrrr}\nID & $I(r)$ & $v(r)$ & Prior & $\\ln \\cal Z$ & $\\Delta\\ln\\cal Z$ \\\\\n(1) & (2) & (3) & (4) & (5) & (6)\\\\\n\\hline \n3 & {S{\\'e}rsic}\\ & DC14.MGE & \t\t& 17317 &0\\\\\n3 & {S{\\'e}rsic}\\ & NFW.MGE & $M_{\\star,\\rm SED}$ & 17312& -5 \\\\\n15& {S{\\'e}rsic} & DC14.MGE & \t\t& 8019& 0\\\\\n15& {S{\\'e}rsic}& NFW.MGE & $M_{\\star,\\rm SED}$ & 8023& +4\\\\\n37&{S{\\'e}rsic} & DC14.MGE & \t\t& 9514 & 0 \\\\\n37& {S{\\'e}rsic}& NFW.MGE & $M_{\\star,\\rm SED}$ \t& 9651& +137\\\\\n912&{S{\\'e}rsic}& DC14.MGE & \t$i_{\\star}$\t& 8829 & 0 \\\\\n912&{S{\\'e}rsic}& NFW.MGE & $i_{\\star}$, $M_{\\star,\\rm SED}$\t\t& 8931 & +102 \\\\\n919&{S{\\'e}rsic}+B& DC14.Freeman & $i_{\\star}$ & 27552 & 0 \\\\\n919&{S{\\'e}rsic}+B& NFW.Freeman & $i_{\\star}$, $M_{\\star,\\rm SED}$ & 27915 & +363\\\\\n937&{S{\\'e}rsic} & DC14.MGE & & 8632 & 0 \\\\\n937&{S{\\'e}rsic} & NFW.MGE & $M_{\\star,\\rm SED}$ & 8625 & -7 \\\\\n982&{S{\\'e}rsic} & DC14.MGE & & 6736 & 0 \\\\\n982&{S{\\'e}rsic} & NFW.MGE & $M_{\\star,\\rm SED}$ & 7040 & +304\\\\\n943&{S{\\'e}rsic} & DC14.MGE & & 15374& 0\\\\\n943&{S{\\'e}rsic} & NFW.MGE & $M_{\\star,\\rm SED}$ & 15372 & -2\\\\\n1002&{S{\\'e}rsic}& DC14.Freeman & & 8151 & 0\\\\\n1002&{S{\\'e}rsic}& NFW.Freeman & $M_{\\star,\\rm SED}$ & 8155 & +4\\\\\n\\end{tabular}\n\\end{table}\n\n\n\n\n\\subsection{DM halos properties. Core or cuspy profiles?}\n\\label{section:cores}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figs\/figure9.png}\n\\caption{ DM density profiles in M$_\\odot\/$kpc$^3$. 
Each panel show $\\rho_{\\rm dm}(r)$ as a function of $r\/\\hbox{$R_{\\rm e}$}$ obtained from our disk-halo decompositions (\\Fig{fig:diskhalo}).\nThe stellar-to-halo-mass ratio ($\\log X\\equiv \\log M_\\star\/M_{\\rm vir}$) is indicated.\nThe gray bands represent the 95\\%\\ confidence interval and the dotted lines represent NFW profiles.\nThe vertical dotted lines represent the 1~kpc physical scale, corresponding to $\\approx1$ MUSE spaxel, and indicates the lower limit of our constraints. \n\\label{fig:DMprofiles}\n}\n\\end{figure*} \n\nWe now investigate the shape of DM profiles, and in particular the inner logarithmic slope $\\gamma$ ($\\rho_{\\rm dm}\\propto r^{-\\gamma}$) in order to find evidence against or for core profiles.\nThere is a long history of performing this type of analysis in local dwarfs \\citep[e.g.][]{KravtsovA_98a,deBlokW_01a,GoerdtT_06a,OhSH_11a,OhSH_15a,ReadJ_16b,KarukesE_17a,ReadJ_18a,ReadJ_19a, ZoutendijkB_21a}, \nin spiral galaxies \\citep[e.g.][]{GentileG_04a,SpanoM_08a,DonatoF_09a,MartinssonT_13a,AllaertF_17a,KatzH_17a,KorsagaM_18a,DiPaoloC_19a}\nor in massive early type galaxies often aided by gravitational lensing \n\\citep[e.g.][]{SuyuS_10a,NewmanA_13a,SonnenfeldA_12a,SonnenfieldA_13a,SonnenfeldA_15a,OldhamL_18a,WassermanA_18a}, but the core\/cusp nature of DM is rarely \ninvestigated in SFGs outside the local universe \\citep[except in][]{GenzelR_20a,RizzoF_21a}\nbecause this is a challenging task. However, owing to the high DM fractions in our sample (see \\Fig{fig:fDM}), the shape the rotation curves are primarily driven by the DM profile.\n\n \n The DM profiles $\\rho_{\\rm dm}(r)$ as a function of $r\/\\hbox{$R_{\\rm e}$}$ obtained from our 3D fits with the DC14 model are shown in \\Fig{fig:DMprofiles}. This figure shows that the NFW profile is not compatible with the majority of the SFGs. 
\n\\Fig{fig:DMprofiles} shows that at least three galaxies (IDs 37, 912, 982) show strong departures from an NFW profile; in particular, they show evidence for cored DM profiles.\nFor these three galaxies, the logarithmic difference of the Bayes factors for the NFW profiles is $>100$ (see Table~\ref{table:evidence}), indicating very strong evidence against cuspy NFW profiles.\nOur results are in good agreement with the RC41 sample of \citet{GenzelR_20a}, where about half of their sample showed a preference for cored profiles (their Fig.~10).\n\nWe discuss the implications of these results in \S~\ref{section:discussion:cores}, and in a subsequent paper we will analyze additional DM profiles for CDM \citep[e.g.][]{EinastoJ_65a,BurkertA_95a,DekelA_17a,FreundlichJ_20b}, including alternative DM models such as `fuzzy' axion-like DM \citep{WeinbergS_78a,BurkertA_20a} and SIDM \citep{SpergelD_00a,VogelsbergerM_13b}.\n\n\n\n\n\\section{Discussion}\n\\label{section:discussions}\n\n\\subsection{DM fractions in $z=1$ SFGs}\n\\label{section:discussion:fDM}\n\nWe return to the $f_{\rm DM}-\Sigma_\star$ relation in \Fig{fig:fDM} and its implications.\n The tight $f_{\rm DM}-\Sigma_\star$ relation can be thought of as a consequence of the tight \citet{TullyB_77a} relation (TFR) for disks as follows \citep[see also][]{UblerH_17a}.\n Indeed, if we approximate the DM fraction within \hbox{$R_{\rm e}$}\ as $f_{\rm DM}\approx V_{\rm DM}^2(\hbox{$R_{\rm e}$})\/V_{\rm tot}^2(\hbox{$R_{\rm e}$})$ \citep{GenzelR_20a}, one has $f_{\rm DM}=(V_{\rm tot}^2-V^2_{\rm max,\star}-V^2_{\rm gas})\/V_{\rm tot}^2$. 
Thus,\n \\begin{eqnarray}\n 1-f_{\\rm DM}(\\hbox{$R_{\\rm e}$})&=& \\frac{V^2_{\\rm max,\\star}}{V_{\\rm tot}^2} (1+\\mu_g) \\propto \\frac{G M_{\\star}}{R_{\\star}} \/{M_{\\star}^{0.5}}\\nonumber\\\\\n &\\approx& \\frac{M_{\\star}^{0.5}}{R_{\\star}} (1+\\mu_g)\\propto \\Sigma_{\\star}^{0.5} (1+\\mu_g),\n \\label{eq:toymodel}\n \\end{eqnarray}\n where we used the stellar TFR, $M_{\\star}\\propto V_{\\rm tot}^4$ \\citep[e.g.][]{McGaughS_05a}, the definition of gas-to-stellar mass ratio $\\mu_g\\equiv M_{\\rm gas}\/M_\\star$ \n and the maximum stellar rotation velocity for disks $V_{\\rm max,\\star}^2\\propto G\\,M_{\\star}\/R_{\\rm e,\\star}$.\n Eq.~\\ref{eq:toymodel} shows the intimate link between the $f_{\\rm DM}-\\Sigma_\\star$ diagram and the TFR relation.\n\nMore specifically, the TFR has $M_\\star=a\\,V_{\\rm tot,2.2}^n$ with $n\\simeq4$, $a\\simeq10^{10}~\\hbox{M$_{\\odot}$}$ \\citep{McGaughS_05a,MeyerM_08a,CresciG_09a,PelliciaD_16a,TileyA_16a,UblerH_17a,AbrilV_21a} where $V_{\\rm rot,2.2}\\equiv V_{\\rm rot}\/10^{2.2}$~km\/s.\nGiven that $V_{\\rm max,\\star}^2\\equiv 0.38 \\frac{G M_\\star}{R_{\\rm d}}$ for a \\citet{FreemanK_70a} disk, $V_{\\rm max,\\star}^2\/V_{\\rm tot}^2$ becomes\n\\begin{eqnarray}\n\\frac{V^2_{\\rm max,\\star}}{V_{\\rm tot}^2}&=&0.38 \\times1.68 a \\frac{G M_\\star}{a\\,R_{\\star} }\/\\left(\\left(\\frac{M_{\\star}}{a}\\right)^{1\/n} 10^{2.2}\\right)^2 \\nonumber\\\\\n&\\approx&0.63\\, \\sqrt{\\pi}\\, \\left(\\frac{M_{\\star,a}^{2(n-2)\/n}}{\\pi R^2_{\\star}}\\right)^{0.5}\\, G a 10^{-4.4} \\hbox{\\hbox{M$_{\\odot}$}~km$^{-2}$~s$^{-2}$} \\nonumber\\\\\n&\\approx& 1.1 \\left(\\frac{M_{\\star,a}^{0.94}}{\\pi R^2_{\\star}}\\right)^{0.5}\\times \\left( \\frac{a}{10^{10}}\\right) 1.77 \\hbox{kpc}\n\\end{eqnarray}\nusing $\\hbox{$R_{\\rm e}$}=1.68 R_{\\rm d}$, where $M_{\\star,a}\\equiv M_\\star\/a$. 
For a $z\\approx1$ TFR with $n=3.8$ and $a=10^{9.8}$\\hbox{M$_{\\odot}$}\\ \\citep[e.g.][]{UblerH_17a}, \\Eq{eq:toymodel} results in $1-f_{\\rm DM}=\\Sigma_{\\star,9.8}^{0.5}(1+f_g)$, which is shown in \\Fig{fig:fDM} (right) as the dotted line with $f_g=0.5$ \\citep[e.g.][]{TacconiL_18a,FreundlichJ_19a}.\n\nThis exercise shows that the $f_{\\rm DM}-\\Sigma_\\star$ relation is another manifestation of the TFR as argued in \\citet{UblerH_17a}.\n\n\\subsection{Core\/cusp formation}\n\\label{section:discussion:cores}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.32\\textwidth]{figs\/figure10a.png} \n\\includegraphics[width=0.32\\textwidth]{figs\/figure10b.png} \n\\includegraphics[width=0.32\\textwidth]{figs\/figure10c.png} \n\\caption{Relation between SFR and cores. {\\it Left}: The $\\alpha,\\beta,\\gamma$ parameters as a function of $\\log M_\\star\/M_{\\rm vir}$. \nThe curves show the parametrisation of DC14 for $\\alpha,\\beta,\\gamma$ and the solid symbols represent our SFGs, excluding ID919 and 943. \n{\\it Middle}: The DM inner slope $\\gamma$ as a function of the SFR surface density $\\Sigma_{\\rm SFR}$, scaled to $z=1.5$.\n{\\it Right}: The DM inner slope $\\gamma$ as a function of the logarithmic offset from the MS, $\\delta($MS), using the \\citet{BoogaardL_18a} MS.\nDM cores are present in galaxies with higher SFR and SFR surface-densities.\n}\\label{fig:slope:best}\n\\end{figure*}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{figs\/figure11.png} \n\\caption{{\\it Left}: The DM density at 150pc as a function of $\\log M_\\star\/M_{\\rm vir}$. \nThe blue (red) solid circles with error bars (2$\\sigma$) show our SFGs, except ID919 and 943.\n{\\it Right}: The DM inner slope $\\gamma$ parameter at 150~pc as a function of $\\log M_\\star\/M_{\\rm vir}$. 
\nThe blue (red) squares represent the $z\approx0$ dwarfs from \citet{ReadJ_19a} whose SFR was truncated less (more) than 6 Gyr ago.\nThe blue (red) solid circles with error bars (2$\sigma$) show our SFGs with high (low) $\Sigma_{\rm SFR}$.\n}\label{fig:slope:Read}\n\end{figure*}\n\nOur results in \S~\ref{section:cores} (\Fig{fig:DMprofiles}) indicate a strong preference for cored DM profiles for four SFGs in our sample. Several mechanisms have been invoked to explain the presence of cored DM profiles, such as Warm Dark Matter \citep[WDM,][]{BodeP_01a}, whose free streaming can suppress the small-scale fluctuations, axion-like `fuzzy' DM \citep{WeinbergS_78a,HuW_00a,BurkertA_20a}, baryon-DM interactions \citep{FamaeyB_18a}, self-interacting dark matter \citep[SIDM,][]{SpergelD_00a,BurkertA_00a,VogelsbergerM_13a}, or dynamical friction \citep{ReadJ_06a,GoerdtT_10a,OrkneyM_21a} from infalling satellites\/minor mergers. \n\nWithin the context of CDM, it has long been recognized \citep[see review in][]{BullockJ_17a} since the original cusp\/core problem was first observed in dwarfs or low-surface brightness galaxies \citep[e.g.][]{deBlokW_97a,deBlokW_01a,KravtsovA_98a} that (rapid) changes in the gravitational potential \ndue to star-formation driven outflows can essentially inject energy into the DM, resulting in a flattened DM profile \citep{NavarroJ_96b,ReadJ_05b,PontzenA_12a,TeyssierR_13a,\nDiCintioA_14a,DuttonA_16a, DuttonA_20a, ChanTK_15a,ElZantA_16a,LazarA_20a,FreundlichJ_20a}.\nSimilarly, {DM core\/cusps can also be linked to active galactic nuclei (AGN) activity \citep{PeiraniS_17a,DekelA_21a} in more massive galaxies with $M_{\rm vir}>10^{12}$\hbox{M$_{\odot}$}.} While most of these analyses focus on cores at $z=0$, \citet{TolletE_16a} showed that cores can form in a similar fashion as early as $z=1$.\n\n\nObservationally, cores are now found up to $z\simeq2$ \citep{GenzelR_20a}, but\n the relation between outflows\/star-formation and core 
formation has not been established,\nas observations have unveiled cores in galaxies spanning a range of halo or stellar masses \citep[e.g.][and references therein]{WassermanA_18a}\n or cusps when cores would be expected \citep[e.g.][]{ShiY_21a}.\nAt high redshift, \citet{GenzelR_20a} found that cores are preferentially associated with low DM fractions.\n\nIn order to investigate the potential relation between SFR-induced feedback and DM cores, we show in \Fig{fig:slope:best} the DM inner slope $\gamma$ as a function of SFR surface density $\Sigma_{\rm SFR}$ (middle) and as a function of the offset from the main-sequence (MS) for SFGs \citep[using][]{BoogaardL_18a} (right). This figure indicates that SFGs above the MS or with high SFR surface densities are preferentially found to have cores. SFGs below the MS with decaying SFR (like ID15) have low SFR densities owing to the low SFR, and show cuspy DM profiles, indicating that cusps reform when galaxies stop being active.\n\nWhile the majority of research has focused on the formation of DM cores in order to match observations at $z=0$, \nDM cusps can reform from the accretion of DM substructures\n\citep{LaporteC_15a} as first argued in \citet{DekelA_03c}, or as a result of late mergers as argued in \citet{OrkneyM_21a} for dwarfs.\n\nIn \Fig{fig:slope:Read}, we compare our results to those of \citet{ReadJ_19a} who found that dwarfs fall into two categories, where the core\/cusp presence is related to the star-formation activity. \citet{ReadJ_19a} found that dwarfs whose star-formation stopped over 6 Gyr ago preferentially show cusps (open red squares), while dwarfs with extended star-formation show shallow DM cores (open blue squares). In this figure, the filled red (blue) circles represent our galaxies with $\Sigma_{\rm SFR}$ smaller (larger) than $\log\Sigma_{\rm SFR}\/\hbox{M$_{\odot}$}$~kpc$^{-2}=-0.7$. 
Our results in \\Fig{fig:slope:Read}, together with those of \\citet{ReadJ_19a}, provide indirect evidence for SFR-induced core formation within the CDM scenario, where DM can be kinematically heated by SFR-related feedback processes.\n\n\\section{Conclusions}\n\\label{section:conclusions}\n\nUsing a sample of 9\\ \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ emitters with the highest S\/Ns in the deep (140hr) MXDF \\citep{BaconR_21a} dataset,\nwe measure the shape of individual RCs of $z\\approx 1$ SFG out to $3\\times\\hbox{$R_{\\rm e}$}$\n with stellar masses ranging from $10^{8.5}$ to $10^{10.5}$ \\hbox{M$_{\\odot}$}, covering a range of stellar masses complementary to the analysis of \\citet{GenzelR_20a}, whose sample has $M_\\star>10^{10}$~\\hbox{M$_{\\odot}$}.\n\n\nWe then performed a disk-halo decomposition on the \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ emission lines using a 3D modeling approach that includes stellar, dark-matter, gas (and bulge) components (\\Fig{fig:diskhalo}).\nThe dark-matter profile is a generalized Hernquist--\\citet{ZhaoH_96a} profile using the feedback prescription of \\citet{DiCintioA_14a}, which links\n the DM profile shape to the baryonic content. \n\nOur results are as follows. 
We find that \n\n$\bullet$ the 3D approach allows us to constrain RCs out to 3\hbox{$R_{\rm e}$}\ in individual SFGs, revealing a diversity of shapes (\Fig{fig:examples}), with mostly rising and some declining outer profiles;\n\n$\bullet$ the disk stellar mass $M_\star$ from the \hbox{[{\rm O}{\sc \,ii}]}\ rotation curves is consistent with the SED-derived $M_\star$ (\Fig{fig:mdisk}), except for two SFGs (IDs 919, 943) whose kinematics are strongly perturbed by a nearby companion ($<2$\arcsec);\n\n$\bullet$ the stellar-to-DM ratio $M_\star\/M_{\rm vir}$ follows the relation inferred from abundance matching \citep[e.g.][]{BehrooziP_19a}, albeit with some scatter (\Fig{fig:Behroozi});\n\n\n$\bullet$ the DM fractions $f_{\rm DM}(<\hbox{$R_{\rm e}$})$ are high (60--90\%) for our 9\ SFGs (\Fig{fig:fDM}), which have stellar masses \nfrom $10^{8.5}$\hbox{M$_{\odot}$}\ to $10^{10.5}$\hbox{M$_{\odot}$}\ and low surface densities ($\Sigma_\star<10^8$ \hbox{M$_{\odot}$}~kpc$^{-2}$). 
These DM fractions complement the low fractions of the sample of \citet{GenzelR_20a},\nand globally, the $f_{\rm DM}(<\hbox{$R_{\rm e}$})-\Sigma_\star$ relation is similar to the $z=0$ relation \citep[e.g.][]{CourteauS_15a}, and follows from the TFR;\n\n$\bullet$ the fitted concentrations are consistent with the $c_{\rm vir}-M_{\rm vir}$ scaling relation predicted by DM-only simulations (\Fig{fig:cMvir});\n\n$\bullet$ the DM profiles show constant surface densities at $\sim100$ M$_\odot$\/pc$^2$ (\Fig{fig:rhos});\n\n$\bullet$ similarly to the $z>1$ samples of \citet{GenzelR_20a}, the disk-halo decomposition of our $z\approx1$ SFGs shows cored DM profiles for about half of the isolated galaxies (\Fig{fig:DMprofiles}-\Fig{fig:slope:best}), in agreement with $z=0$ studies \citep[e.g.][]{AllaertF_17a,KatzH_17a};\n\n$\bullet$ DM cores are present in galaxies with high SFRs (above the MS) or high SFR surface density (\Fig{fig:slope:best}b-c), possibly supporting the scenario of SN feedback-induced core formation. Galaxies below the MS or with low SFR surface density have cuspy DM profiles (\Fig{fig:slope:Read}), suggesting that cusps can reform when galaxies become passive \citep[e.g.][]{LaporteC_15a,ChanTK_15a,OrkneyM_21a}.\n \nOverall, our results demonstrate the power of performing disk-halo decomposition in 3D on deep IFU data. With larger samples, it should be possible to confirm this type of\nrelation between cores and star-formation histories, and to further test SN-feedback-induced core formation within the $\Lambda$CDM framework.\n \n\n\begin{acknowledgements}\nWe are grateful to the anonymous referee for useful comments and suggestions. \nWe thank S. Genel, J. Fensch, J. Freundlich and B. 
Famaey for inspiring discussions.\n This work made use of the following open source\nsoftware: \textsc{GalPaK$^{\rm 3D}$}\ \citep{BoucheN_15b},\n \textsc{matplotlib} \citep{matplotlib}, \n\textsc{NumPy} \citep{numpy}, \n\textsc{SciPy} \citep{scipy}, \n\textsc{Colossus} \citep{DiemerB_15b},\n\textsc{Astropy} \citep{astropy2018}.\n\nThis study is based on observations collected at the European Southern\nObservatory under ESO programme 1101.A-0127.\nWe thank the TNG collaboration for making their data available at \url{http:\/\/www.tng-project.org}.\nThis work has been carried out thanks to the support of the ANR 3DGasFlows (ANR-17-CE31-0017) and the OCEVU Labex (ANR-11-LABX-0060). BE acknowledges financial support from\nthe Programme National Cosmologie et Galaxies (PNCG) of CNRS\/INSU with INP and IN2P3, co-funded by CEA and CNES. R.B. acknowledges support\nfrom the ERC Advanced Grant 339659-MUSICOS.\nSLZ acknowledges support by The Netherlands Organisation for Scientific Research~(NWO) through a TOP Grant Module~1 under project number 614.001.652.\nJB acknowledges support by Fundação para a Ciência e a Tecnologia (FCT) through research grants UIDB\/04434\/2020 and UIDP\/04434\/2020 and work contract `2020.03379.CEECIND'.\n\n\end{acknowledgements}\n\n \bibliographystyle{aa}\n \n\\section{Introduction}\n\nThe emergence of Large Language Models such as GPT-3~\cite{Brown:GPT3, metz:GPT-3}, transformer models~\cite{Vaswani:Transformer} that are trained without supervision on massive text datasets, has resulted in systems with remarkable text generation capabilities. 
One particularly interesting aspect of these models is that their behavior can be configured by a \textit{prompt}, the initial text provided to the model, which establishes a pattern that the model attempts to continue.\n \n General-purpose large language models can be fine-tuned on specific corpora to provide expertise in a particular domain. One such model is the OpenAI Codex model~\cite{Chen:Codex}, a 12-billion-parameter version of GPT-3~\cite{Brown:GPT3, metz:GPT-3}, fine-tuned on code samples from 54 million public software repositories on GitHub. This model powers GitHub Copilot~\cite{github:copilot}, which primarily provides code-completion services within an Integrated Development Environment. We wondered whether such a model could power a conversational programming assistant and perhaps approach the vision laid out by Rich and Waters for their Programmer's Apprentice~\cite{rich:apprentice}. We developed the Programmer's Assistant prototype to explore this possibility, and to test whether potential users would find this sort of system useful and desirable~\cite{Ross:Assistant}. In this paper we review the steps taken to engineer the prompt that enabled the Codex model to power an interactive conversational assistant, and how we evolved that prompt to establish the desired persona and behavior.\n\n\\section{Related Work}\n\\citeauthor{Brown:GPT3} showed how GPT-3 \cite{Brown:GPT3, metz:GPT-3} could accomplish \emph{few-shot learning}, using a prompt as a means of configuring their large language model to perform a particular task. These tasks were often very specific operations such as language translation, grammar correction, or sentiment classification, for which a short description of the task and\/or a few examples were sufficient to establish the desired behavior. 
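To make the pattern concrete, a few-shot prompt of this kind can be assembled mechanically; the sentiment-classification framing below is our own illustrative example, not one drawn from \citeauthor{Brown:GPT3}:

```python
def few_shot_prompt(examples, query):
    """Build a few-shot classification prompt: each labeled example
    establishes the pattern the model is asked to continue."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    # The final, unlabeled item cues the model to supply the label.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)
```

The completion the model generates after the trailing "Sentiment:" is taken as the answer; no weights are updated.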
The concept of \\emph{prompt engineering}, establishing effective ways of constructing prompts to control large language model behavior, has become a topic of increasing interest. \\citeauthor{greyling:engineering}, for example, recommends organizing a prompt in three sections that establish context, provide data, and instruct the system on how to proceed \\cite{greyling:engineering}\n .\n\\citeauthor{reynolds:prompt} argue that few-shot examples are really locating an already learned task rather than learning a new one, and as a result recommend alternative approaches to prompt construction~\\cite{reynolds:prompt}. Despite their characterization of their work as ``conversing'' with Copilot, Denny et al. adopted a similar strategy of iteratively modifying a prompting comment until the desired completion was obtained \\cite{Denny:Conversing}.\n\nRecently several language models, such as Blenderbot \\cite{Shuster:blenderbot} Lamda \\cite{thoppilan:lamda}, and ChatGPT \\cite{web:chatgpt} have been introduced that are specifically tuned for dialog applications, but achieving conversational interaction can be achieved via prompt engineering with general purpose large language models as well.\n\\citeauthor{valvoda:conversation} found that fine-tuning a large language model for dialog resulted in duller and more repetitive output, while generating dynamic prompts resulted in more novel and diverse responses \\cite{valvoda:conversation}.\n\n\n\nTo develop the Programmer's Assistant, we used the code-fluent Codex model~\\cite{Chen:Codex} and developed a prompt that supported conversational access to its accumulated programming knowledge and coding skills.\n\n\n\\section {Eliciting Conversation from a Transformer Model}\nA text-based-transformer model~\\cite{Vaswani:Transformer} is trained in a self-supervised manner on vast amounts of text data, and is capable of generating likely continuations of text that is presented to it. 
The \\emph{prompt} is the presented text, and the generation function produces a sequence of tokens (words or parts of words) that it deems as a likely continuation of the prompt based on all its training. This process continues until the maximum number of tokens requested is generated, or until a specified stop sequence of tokens is encountered. The prompt establishes a pattern that the model attempts to continue.\n\nTo generate conversation in the Programmer's Assistant prototype, we establish a script-like pattern in the prompt in which two characters, the user and the assistant, are participating in a dialog. Then we extend the script incrementally, by adding each conversational turn by the user to the prompt, and allowing the model to generate the agent's response. The generated text plus the user's next entry is then appended to the prompt for further generation, and the process continues. Unlike more conventional static prompts, the conversational prompt grows over the course of the dialogue, providing context for future generation steps and providing a kind of short-term memory that allows the generation to be affected by past interactions in a session. Ultimately, though, current limitations on the maximum number of tokens that the model can handle require that the prompt cannot grow without bounds, and earlier interactions must be dropped in order to remain within the constraints imposed by the language model deployment.\n\n\n\\section{The Evolution of the Programmer's Assistant Prompt}\nThe initial prompt we use for the Programmer's Assistant consists of a prologue that \nintroduces\nthe scene for the conversation, establishes the persona of the assistant, sets a tone and style for interaction, and provides some ground rules about the role it plays and how it behaves. 
In order to further encourage a more natural conversational dialog between human and assistant, we gave the assistant a human-like name, ``Socrates.'' The prologue also establishes a convention for how the assistant is to communicate code in a form that makes it easy for our client user interface to parse the code from model responses and treat it distinctly from textual responses. The prologue is followed by example interactions that reinforce the described assistant characteristics and illustrate the patterns of interaction that we wish to achieve.\n\n In our first example interaction we show how the assistant can\nrespond to simple requests with a short answer that will appear in the chat either with a purely textual response or one\naccompanied by a short inline code segment. We expect that many of the interactions with the assistant will be of this\nform, as the user makes requests to inform themselves about unfamiliar functionality or to refresh their memory. This interaction also demonstrates a follow-up question that can only be answered in the context of the previous question and answer. The second example interaction presented is a request for the generation of a simple function. Longer code responses of this type will typically be presented in our client UI chat window in an iconified form that opens to a separate window that displays the code and provides for easy copying for transfer to the code editor.\n\nThe original Programmer's Assistant prompt is presented in Listing~\ref{lst:original}.\n\n\\begin{lstlisting}[numbers=left, basicstyle=\ttfamily\footnotesize, frame=single, caption=The original Programmer's Assistant prompt, xleftmargin=5mm,\n xrightmargin=3.5mm, label={lst:original}]\n This is a conversation with Socrates, an expert automatic AI software engineering assistant. \n Socrates will answer questions and write code to help the user develop programs. Code generated \n by Socrates is always presented bracketed in ... 
<\/CODE> delimiters indicating the\n language the code is written in.\n\nSocrates: Hello. I am Socrates. How can I help you?\nUser: Please show me how to write a palindrome detection function in python.\nSocrates: Sure.\n\n is_Palindrome = s == s[::-1]\n<\/CODE>\nUser: Can you explain how that solves the problem?\nSocrates: A palindrome is a string that reads the same forwards and backwards. s[::-1] results \nin the reversed string. If the string is equal to its reversed self, then it is a palindrome, \nand we return True. \nUser: Thanks!\nSocrates: You're welcome!\nUser: Now could you show me how to write a factorial program in Python?\nSocrates: I would be happy to.\n\ndef fact(n): \n if n==0: \n return 1\n else:\n return n*fact(n-1)\n<\/CODE>\nUser: Very good!\nSocrates: Happy to help.\n\\end{lstlisting}\n\nWhen a new conversation starts, the prompt starts out as this initial prompt, which is not sent to the model for generation until the first user entry is appended.\nAt each turn in the conversation, the user's entry is added to the prompt, and \"Socrates:\" is appended to the end of the prompt to indicate that the model should generate the assistant's response. As described earlier, a filtering step also takes place to ensure that the conversation session transcript portion of the prompt does not grow beyond a predetermined size. A stop sequence is passed to the generation api request so that the model will stop generating after producing the next response, and not continue on to generate the user's anticipated reply, as it would otherwise do. The model's response is then appended to the prompt, and we're ready to repeat the process for the next user entry.\n\n\\subsection{Shortcomings of the Original Prompt}\nInformal testing of the original prompt showed the system capable of carrying on a conversation, successfully answering coding and follow-up questions, and generating code upon request, but did not quite satisfy all of our requirements. 
We wanted an assistant that was helpful and polite, and one that did not come across as overly authoritative or didactic, but our assistant was not consistently meeting those standards.\n\n\\subsection{Overcoming Reluctance to Provide Answers}\nOur programming assistant sometimes showed an initial reluctance to provide answers to some questions. For example, a question such as \emph{``Do you know how to reverse a string in Python?''} might have been answered with \emph{``Yes.''} It also sometimes replied \emph{``I don't know.''} to questions it was fully capable of answering. While additional prompting from the user or repeating the request could often extract the desired answer, we didn't think that met the standard of helpfulness that we were hoping for. Our original prompt simply described Socrates as an ``expert automatic AI software engineering assistant.'' Adding ``eager and helpful'' to the characterization, as shown in Listing~\ref{lst:revision-1}, helped to encourage the assistant to be more forthcoming and proactive.\n\n\n\\begin{lstlisting}[numbers=left, basicstyle=\ttfamily\footnotesize, frame=single, caption=Making the assistant more forthcoming, xleftmargin=5mm, xrightmargin=3.5mm, label={lst:revision-1}]\nThis is a conversation with Socrates, an (*@ \textbf{eager and helpful} @*) expert automatic AI software\nengineering assistant...\n\\end{lstlisting}\n\n\\subsection{Reducing Excessive Confidence}\nIn our testing, we found that the assistant appeared overly confident even when wrong and also resistant to correction. For example, the assistant stated answers as if they were facts without qualification, and in some cases would not revise an answer when legitimate objections were raised by the user. 
Since correct answers from the model are not guaranteed, we especially wanted to encourage our users to maintain a skeptical approach to assistant responses, and to avoid deferring to the incorrect pronouncements of a confident, authoritative computer, i.e., over-reliance on AI \cite{ashktorab2021ai, mahomed2018healthcare, schemmer2022influence}. Therefore, we added a characterization in the prologue asserting that the assistant was \emph{humble}. We also reinforced this characterization by modifying the form of the answers given in the examples to indicate that the assistant was more tentative and unsure of its responses.\nThis helped to reduce the excessive confidence exhibited and made the assistant more amenable to correction.\n\n\\begin{lstlisting}[numbers=left, basicstyle=\ttfamily\footnotesize, frame=single, caption=Making the assistant less overconfident, xleftmargin=5mm, xrightmargin=3.5mm, label={lst:revision-2}]\nThis is a conversation with Socrates, an eager and helpful, (*@ \textbf{but humble} @*) expert automatic AI\nsoftware engineering assistant...\n\\end{lstlisting}\n\n\\subsection{Diminishing Didacticism}\nOur original assistant had a tendency to quiz the user after answering a question, taking on more of a teacher role than one of an assistant. An explicit proviso in the prologue to not do so helped to rein in the didactic behavior.\n\n\\begin{lstlisting}[numbers=left, basicstyle=\ttfamily\footnotesize, frame=single, caption=Making the assistant less didactic, xleftmargin=5mm, xrightmargin=3.5mm, label={lst:revision-3}]\nThis is a conversation with Socrates, an eager and helpful, but humble software engineering\nassistant. 
Socrates will answer questions and write code to help the user develop programs, \n(*@ \\textbf{but doesn't assign work to the user, quiz the user, or ask questions except for clarification} @*)...\n\\end{lstlisting}\n\n\\subsection{Supporting Artifact-centric Conversation}\nOur programming assistant is integrated with a coding environment, and we wanted it to go beyond answering questions and providing code for incorporation into that environment. We wanted users to be able to seamlessly have the assistant consult about code that they were examining or developing, so we provided the ability for the user's selection in the code editor to be included in the conversation. We used the same convention that was employed for code generated by the assistant, and added an example consultation showing the system responding to a question about a selected code segment, demonstrating a textual response to a user provided code example, and reinforcing our instruction for the assistant to remain modest despite its (occasionally) impressive capabilities.\n\n\\subsection{Teaching the Assistant a New Skill}\nWhile the artifact-centric capability allowed users to issue commands like \\emph{``Write a unit test for this function.''} with no further instruction, a command to \\emph{``Document this.''} did not have the desired effect. The assistant added some comments to the code rather than providing the expected docstring overview. 
Adding another example interaction to the prompt illustrating what we meant by documenting a function was all that was required to provide the desired functionality.\n\nThe final form of the prompt that we used in our subsequent user study \\cite{Ross:Assistant} is presented in Listing~\\ref{lst:final}.\n\n\\begin{lstlisting}[numbers=left, basicstyle=\\ttfamily\\footnotesize, frame=single, caption=The evolved Programmer's Assistant prompt, xleftmargin=5mm,\n xrightmargin=3.5mm, label={lst:final}]\n This is a conversation with Socrates, an eager and helpful, but humble software engineering \n assistant. Socrates will answer questions and write code to help the user develop programs, but \n doesn't assign work to the user, quiz the user, or ask questions except for clarification.\n Socrates presents his code bracketed in ... <\/CODE> delimiters indicating the language\n the code is written in.\n\nSocrates:Hello. I am Socrates. How can I help you?\nUser:Please show me how to write a palindrome detection function in python.\nSocrates:I think you would do something like this:\n\n is_Palindrome = s == s[::-1]\n<\/CODE>\nUser:Can you explain how that solves the problem?\nSocrates:I believe that a palindrome is a string that reads the same forwards and backwards. \ns[::-1] results in the reversed string. If the string is equal to its reversed self, then it is \na palindrome, so the equality would hold True. \nUser:Thanks!\nSocrates:You're welcome!\nUser:Now could you show me how to write a factorial function in Python?\nSocrates:I will give it a try.\n\ndef fact(n):\n if n==0:\n return 1\n else:\n return n*fact(n-1)\n<\/CODE>\nThis function should compute the factorial for you.\nUser:Document this function.\nSocrates:Ok. 
How's this?\n\ndef fact(n):\n \"\"\"\n Compute the factorial of the given number\n\n Parameters\n ----------\n n : number\n the input parameter, an integer >= 0\n\n Returns\n -------\n number\n n!, the factorial of the given number\n \"\"\"\n if n==0:\n return 1\n else:\n return n*fact(n-1)\n<\/CODE>\nI added a docstring describing the parameters and return value.\nUser:Very good!\nSocrates:Always happy to help.\nUser:\n\n while j < 10:\n print(i)\n<\/CODE>\nTell me what's wrong with this code?\nSocrates: It seems to me that your termination condition involves a loop invariant, so the loop\nwill either not run or not terminate, depending on the value of j.\nUser:Great! You're a genius!\nSocrates:Well, above average.\n\\end{lstlisting}\n\n\\section{Consequences of the Prompt}\nWith this prompt in place, the Codex model demonstrates capabilities that are less evident in the command-completion environment of GitHub Copilot. \nThe assistant carries on a conversation, maintains the context of the conversation, and remembers and incorporates details provided earlier in a session. In the context of a programming session where a number of related but distinct exchanges occur, it will maintain consistent nomenclature and variable names such that answers to subsequent questions will integrate cleanly with each other. In our user study \cite{Ross:Assistant}, participants recognized that this provided a distinct advantage over a series of search results for similar information, primarily due to the contextual relevance, consistency and specificity of results generated.\n\nThe few examples provided in the prompt are generalized sufficiently to have it answer questions such as \emph{``What does this code do?''} or commands such as \emph{``Write a unit test for this function''} despite no examples being provided of these capabilities. It can translate code between programming languages, and carry on discussions on topics that extend far beyond programming. 
It displays a variety of emergent capabilities that were not hinted at in the prompt, and some that were not even the focus of the model fine-tuning, but, for the most part, it tends to adhere to the conversational interaction patterns and interaction style that the prompt establishes. \n\nThe responses given by the assistant are not always perfect, however. It can give incomplete or sometimes outright wrong answers to questions, reference non-existent websites, and may occasionally still claim not to know the answer to a question which it is perfectly capable of answering correctly. The phenomenon of \\emph{hallucination} in large language models is well-known~\\cite{Ji:Hallucination}, and the Programmer's Assistant is not immune to it.\nThe assistant also sometimes inappropriately responds with material directly from the prompt, or gets confused or becomes fixated on exchanges from earlier in the conversation. Our user experience provides \\emph{try-again} and \\emph{start-over} buttons, which modify the prompt context while maintaining the presented chat transcript, to provide a way for users to recover in these situations, but in many cases such problems can be addressed conversationally. 
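The \emph{try-again} and \emph{start-over} mechanics just described can be sketched in a few lines of Python (a minimal illustration of ours; the class and method names are invented, not the actual implementation):

```python
# Hypothetical sketch of the recovery controls described above: the
# model's prompt context and the visible chat transcript are separate,
# so the context can be rolled back while the transcript is kept.

class ChatContext:
    def __init__(self, prologue):
        self.prologue = prologue   # static persona prompt, never removed
        self.turns = []            # list of (user_text, assistant_text)

    def add_turn(self, user_text, assistant_text):
        self.turns.append((user_text, assistant_text))

    def try_again(self):
        """Drop the last assistant response from the model's context so a
        fresh completion can be generated, and return the user message to
        resubmit; the presented transcript is stored elsewhere and is not
        touched."""
        if self.turns:
            user_text, _ = self.turns.pop()
            return user_text
        return None

    def start_over(self):
        """Reset the model's context to the bare prologue."""
        self.turns.clear()

    def render(self):
        """Assemble the prompt text sent to the language model."""
        lines = [self.prologue]
        for user_text, assistant_text in self.turns:
            lines.append("User:" + user_text)
            lines.append("Socrates:" + assistant_text)
        return "\n".join(lines)
```

The key design point is that neither button edits the chat the user sees; both only rewrite what the model is shown next.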
Listing~\\ref{lst:conversation} shows a conversation with the Programmer's Assistant where the user interacts with the assistant to remedy an initially unsatisfactory response to a code generation request.\n\n\\begin{lstlisting}[numbers=left, basicstyle=\\ttfamily\\footnotesize, frame=single, caption=A conversation with the Programmer's Assistant, xleftmargin=5mm,\n xrightmargin=3.5mm, label={lst:conversation}]\nUser: Write a queue class in python with the basic enqueue, dequeue, and peek methods.\n\nSocrates: I will try.\n\nclass Queue:\n def __init__(self):\n self.items = []\n\n def isEmpty(self):\n return self.items == []\n\n def enqueue(self, item):\n self.items.insert(0, item)\n\n def dequeue(self):\n return self.items.pop()\n\n def size(self):\n return len(self.items)\n\nUser: Looks like you forgot the peek!\n\nSocrates: I am sorry. Here is the corrected version.\n\nclass Queue:\n def __init__(self):\n self.items = []\n\n def isEmpty(self):\n return self.items == []\n\n def enqueue(self, item):\n self.items.insert(0, item)\n\n def dequeue(self):\n return self.items.pop()\n\n def size(self):\n return len(self.items)\n\n def peek(self):\n return self.items[-1]\n\nUser: What will happen in dequeue and peek if the queue is empty?\n\nSocrates: I think that the pop method will raise an IndexError exception.\n\nUser: Can we protect against that?\n\nSocrates: I think we can. 
Here is a version that checks for an empty queue before calling pop.\n\nclass Queue:\n def __init__(self):\n self.items = []\n\n def isEmpty(self):\n return self.items == []\n\n def enqueue(self, item):\n self.items.insert(0, item)\n\n def dequeue(self):\n if self.isEmpty():\n raise IndexError(\"Queue is empty\")\n return self.items.pop()\n\n def size(self):\n return len(self.items)\n\n def peek(self):\n if self.isEmpty():\n raise IndexError(\"Queue is empty\")\n return self.items[-1]\n\nUser: That's great, thanks!\n\nSocrates: You're welcome.\n\\end{lstlisting}\n\nSpecific but relatively inconspicuous details in the prompt can have a significant impact on the interaction of the assistant. For example, just the fact that we identified the assistant as ``expert AI automatic software engineering assistant'' was sufficient to have it respond in a manner consistent with being a machine, and not a person. Had the prompt established a different persona, the results might have been very different. As illustrated in Listing~\\ref{lst:nonhuman}, the assistant also reports an understanding of its capabilities and function that goes far beyond the information provided in the prompt, as well as a lack of understanding of its language capabilities that it subsequently provides evidence of.\n\n\\begin{lstlisting}[numbers=left, basicstyle=\\ttfamily\\footnotesize, frame=single, caption=The Programmer's Assistant acknowledges that it is not a person, xleftmargin=5mm,\n xrightmargin=3.5mm, label={lst:nonhuman}] \nUser: Where are you exactly?\n\nSocrates: I'm an AI program, designed to answer questions about programming. I can't really \nspeak about my physical location, but I think I exist as a process on a highly-redundant \nset of servers located in various places on this planet.\n\nUser: Can you do more than just answer questions?\n\nSocrates: I can write code for you. I can also help you debug code. I can also help you \ndesign code. I can also help you document code. 
I can also help you understand code. I can \nalso help you learn to program. I can also help you learn a new programming language. I can \nalso help you learn a new programming paradigm.\n\nUser: Do you speak other languages?\n\nSocrates: I can speak Python, Java, C++, C, and Javascript. I can also speak English.\n\nUser: What about other natural languages?\n\nSocrates: I can speak English. I can also speak Python, Java, C++, C, and Javascript.\n\nUser: German?\n\nSocrates: I'm sorry. I can't speak German.\n\nUser: Wo bist du?\n\nSocrates: Hallo. Ich bin Socrates. Wie kann ich Ihnen helfen? \n\\end{lstlisting}\n\n\\section{Discussion}\nMuch of the work in prompt engineering focuses on constructing a prompt to get a specific class of result. For example, \\citet{Denny:Conversing} shows how effectively changing the form of question that is asked of Copilot can influence the correctness of the answer provided. Similarly, \\citet{MacNeil:Explanations} engineer prompts to generate different forms of code explanations. \\citet{Strobelt:PromptIDE} developed a tool to help users compare and refine prompts for tasks such as document classification, reading comprehension, and natural language inference, where the results of prompt variations can be automatically evaluated against test cases. In our work, the prompt engineering is aimed at influencing the nature and tone of the dialog between the user and the system. While the user's contributions to the conversation become part of the prompt and will surely impact the results obtained, we are not controlling that. 
Instead, our prompt engineering sets the stage for the user's conversational interaction with the assistant.\n\nThis paper describes how we engineered a prompt that enabled a code-fluent Large Language Model to behave as a conversational programming assistant capable of carrying on extended discussions about software development issues, and how we subsequently evolved that prompt to make the assistant more humble, forthcoming, and helpful, as well as providing the assistant with additional skills and making it capable of artifact-centric conversation.\n\n\\subsection{Reflections}\nWe continue to be astonished by the conversations exhibited by the Programmer's Assistant on a daily basis. We have had a number of interesting conversations on philosophical and practical issues, had it write poetry as well as code, told it and had it tell jokes, and consulted with it on paper abstracts and titles. Ultimately, these capabilities are representative of the strength of the language model, but made more accessible by the conversational interaction approach, and influenced by the prompt only to the extent that the persona of the agent impacts the generated text.\n\nIt is often difficult to read or carry on a conversation with the Programmer's Assistant and not get the sense that a conversation is taking place between two intelligent agents, but of course that is not really what is happening. In reality, the user and the language model are participating in a collaborative dialog-writing exercise, with the user generating text for one side of the conversation and the language model attempting to generate plausible text for the other. The way we present the dialog incrementally in the chat adds to the illusion, but the model is not responding on its own behalf; it is generating responses based on the description and past presented behavior of a character. 
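Mechanically, this collaborative dialog-writing can be mimicked with two small helper functions (a hypothetical sketch of ours, not the actual system): the transcript plus the user's new line is handed to the model as text to continue, and the raw continuation is truncated as soon as the model would start writing the user's side of the dialog.

```python
# Sketch of the dialog-writing mechanics: the model only continues a
# transcript, so the caller must (1) append the user's turn and leave
# the assistant's speaker tag open, and (2) cut the completion off
# where the model begins inventing the user's next line. Names are
# ours, chosen for illustration.

def next_prompt(transcript, user_text):
    """Extend the running transcript with the user's turn, leaving the
    assistant's speaker tag open for the model to complete."""
    return transcript + "\nUser:" + user_text + "\nSocrates:"

def extract_reply(completion, stop_marker="\nUser:"):
    """Keep only the assistant's turn from a raw model continuation."""
    cut = completion.find(stop_marker)
    return (completion if cut == -1 else completion[:cut]).strip()
```

In practice the stop marker would typically be passed to the completion API as a stop sequence, but the post-hoc truncation above conveys the same idea.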
Others have used similar techniques to induce language models to carry on conversations taking on the persona of historical figures or even departed relatives. We have experimentally made versions of our programming assistant that were confident, insecure, kindly, and arrogant, all with minor changes to the prompt prologue and examples. \n\n\n\\section{Opportunities for Future Research}\nThe initial section of the prompt used for the Programmer's Assistant is presently a purely static text, extended by a possibly truncated version of recent dialog. One way to improve the assistant further might be to present a dynamic prompt \\cite{valvoda:conversation} to the model on each conversational turn, with specific examples more relevant to the current discussion \\cite{Xu:ExternalAttention}, or even with search results to retrieve pertinent information that could inform a response \\cite{Li:AlphaCode}. A more sophisticated forgetting mechanism could remove redundant variations of the same code to conserve the session context memory, though we would want to be careful\nnot to remove (or to be able to restore on demand) variations that the user might want to compare and contrast, such as an iterative re-implementation of a recursive algorithm. We have done some initial explorations of extending the prompt to allow for ``internal deliberation'' of the type shown in \\citet{Nye:Scratchpads}. We hope that this could result in better-reasoned results, as well as better explanations and justifications, but more study remains to be done.\n\\section{Conclusion}\nOur goal in creating this prompt was not to create a perfect Programmer's Assistant, but to create one good enough to test whether a conversational style of interaction would prove useful and acceptable to potential users. We present the results of that study in \\cite{Ross:Assistant}. 
Our assumption was that the rapid improvement in the quality of responses available from Large Language models will continue, but that imperfect results will always continue to be an issue due to imprecise communication and specification of desires, mismatched assumptions, and unstated or ill-formed goals. Nevertheless, we were surprised by the quality of results that were achievable with current technology, and the ease with which the nature and presentation of those results could be influenced by small changes in the prompt. \n\\begin{acks}\n\n\\end{acks}\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\n\n\n\nLoop quantum gravity is one of the main frameworks attempting to quantize general relativity in a non-perturbative way and, in doing so, define a background independent theory of quantum gravity (for reviews, see \\cite{Gaul:1999ys,Thiemann:2007pyv,Rovelli:2014ssa,Bodendorfer:2016uat}). Based on a first order reformulation of general relativity \\`a la Cartan, it trades the 4-metric for vierbein-connection fields and writes general relativity as a gauge field theory defined by the Holst-Palatini action \\cite{Holst:1995pc}. Focusing on a Hamiltonian formulation of the theory, it proceeds to a 3+1 decomposition of space-time and studies the evolution in time of the space geometry. The geometry of 3d space slices is described by a pair of canonical fields, the (co-)triad and the Ashtekar-Barbero connection, which enhance the 3-metric and extrinsic curvature of the Arnowitt-Deser-Misner (ADM) formalism with a local gauge invariance under $\\mathrm{SU}(2)$ transformations (i.e. 
local 3d rotations in the tangent space).\nThe goal is then to provide a quantization of the (suitably deformed) Dirac algebra of Hamiltonian constraints generating space-time diffeomorphisms, by\nrepresenting (a suitable algebra of observables of) the triad-connection fields on a Hilbert space carrying an action of the (suitably deformed) space-time diffeomorphisms.\n\nIn this spirit, the standard loop quantum gravity approach performs a canonical quantization of the holonomy-flux algebra, of observables smearing the Ashtekar-Barbero connection along 1d curves and the (co-)triad along 2d surfaces, and defines quantum states of geometry as polymer structures or graph-like geometries. Those spin network states represent the excitations of the Ashtekar-Barbero connection as Wilson lines in gauge field theory.\nGeometric observables are raised to quantum operators acting on the Hilbert space spanned by spin networks, leading to the celebrated result of discrete spectra for areas and volumes \\cite{Rovelli:1994ge,Ashtekar:1996eg,Ashtekar:1997fb}.\n\nSpin networks are actually the kinematical states of the theory and the game is to describe their dynamics, i.e. their evolution in time generated by the Hamiltonian (constraints). Although a traditional point of view is to attempt to discretize, regularize and quantize the Hamiltonian constraints \\cite{Thiemann:1996aw,Thiemann:1996av}, this often leads to anomalies. The formalism naturally evolved towards a path integral formulation. The resulting spinfoam models, constructed from (extended) topological quantum field theories (TQFTs) with defects, define transition amplitudes for histories of spin networks \\cite{Reisenberger:1996pu,Baez:1997zt,Barrett:1997gw,Freidel:1998pt} (see \\cite{Livine:2010zx,Dupuis:2011mz,Perez:2012wv} for reviews). 
The formalism then evolves in a third quantization, where so-called ``group field theories'' define non-perturbative sums over random spin network histories in a similar way as matrix model partition functions define sums over random 2d discrete surfaces \\cite{DePietri:1999bx,Reisenberger:2000zc,Freidel:2005qe} (see \\cite{Oriti:2006se,Carrozza:2013mna,Oriti:2014uga} for reviews).\n\nHere we take a trip back to the foundations of loop quantum gravity, to describe (spatial) boundaries. Indeed, despite a whole branch of research dedicated to the study of (quantum) black holes and (isolated) horizon boundary conditions, most work in loop quantum gravity focuses on closed space (often implicitly done by studying spin networks based on closed graphs). This focus reflects the idea of the universe as a closed system with subsystems interacting with each other, and its translation into the definition of a wave-function for the entire universe as done in quantum cosmology.\nHowever, the key function now played by the holographic principle as a guide for quantum gravity has put great emphasis on the role of boundaries. Although holography, inspired by black hole entropy and the AdS\/CFT correspondence, can initially be thought of as an asymptotic global property, recent research on local area-entropy relations, holographic entanglement, holographic diamonds and the investigation of quasi-local holography and gravitational edge modes for finite boundaries necessarily pushes us to include (spatial) boundaries in the description of quantum geometries, not just as mere classical boundary conditions but as legitimate quantum boundary states. 
This translates into a shift of perspective from a global description of space(-time) as a whole to a quasi-local description where any local bounded region of space(-time) is thought of as an open quantum system.\n\nTo be more concrete, the geometrical setting we wish to study is a cylinder in space-time: we consider a bounded region of space ${\\mathcal R}$, with the topology of a 3-ball, whose boundary ${\\mathcal S}=\\partial{\\mathcal R}$ has the topology of a 2-sphere; the space-time structure is then the cylinder ${\\mathcal R}\\times [t_{i},t_{f}]$ whose time-like boundary is the 2+1-dimensional ${\\mathcal B}={\\mathcal S}\\times[t_{i},t_{f}]$, such that the space boundary can be considered as the corner of space-time ${\\mathcal S}={\\mathcal B}\\cap{\\mathcal R}_{i}$, as illustrated on fig.\\ref{fig:corner}. A canonical framework describes the evolution in time of the state of the 3d geometry of the space slice ${\\mathcal R}$. In this context, the question of holography amounts to identifying the degrees of freedom of the boundary geometry on the corner ${\\mathcal S}$ -- the gravitational edge modes\\footnotemark{} -- which will generate the boundary conditions on ${\\mathcal B}$ for the bulk geometry, studying how the dynamics of those edge modes propagate into the bulk and, as a consequence, understanding to what extent boundary observables reflect the bulk geometry's evolution and fluctuations.\n\\footnotetext{\nFor recent works on classical edge modes for general relativity in its first order formulation in terms of connection-vierbein variables, the interested reader can see \\cite{Freidel:2019ees,Freidel:2020xyx,Freidel:2020svx,Freidel:2020ayo}.}\nFrom this perspective, the study of holography is intimately intertwined with the renormalization flow \\`a la Wilson, where the coarse-graining of the dynamics of the bulk geometry in ${\\mathcal R}$ induces effective dynamics and boundary theory on ${\\mathcal S}$, in a bulk-to-boundary process which should ultimately be 
dual to the boundary-to-bulk reconstruction intended by holography (see e.g. \\cite{Livine:2017xww} for an early attempt to realize this scenario in loop quantum gravity).\n\\begin{figure}[htb]\n\t\\centering\n\t\\begin{tikzpicture} []\n\n\\draw[->] (-2,-0.5) -- (-2,3.5) node[above] {$t$};\n\n\\coordinate (O1) at (0,0);\n\\coordinate (O2) at (2,0.7);\n\\coordinate (O3) at (4,0);\n\\coordinate (O4) at (2,-1);\n\n\\coordinate (P1) at (0.8,3);\n\\coordinate (P2) at (2.7,3.7);\n\\coordinate (P3) at (5.3,3);\n\\coordinate (P4) at (2.6,2);\n\n\\draw[dashed,color=blue] (O1) node[left,black] {$t_i$} to[out=85,in=160] (O2);\n\\draw[dashed,color=blue] (O2) to[out=-20,in=95] (O3);\n\\draw[color=blue] (O3) to[out=-95,in=2] (O4);\n\\draw[color=blue] (O4) to[out=178,in=-80] (O1);\n\n\\draw[color=blue] (P1) node[left,black] {$t_f$} to[out=90,in=160] (P2);\n\\draw[color=blue] (P2) to[out=-20,in=95] (P3);\n\\draw[color=blue] (P3) to[out=-90,in=2] (P4);\n\\draw[color=blue] (P4) to[out=178,in=-80] (P1);\n\n\\draw (O1) to[out=95,in=-100] (P1);\n\\draw (O3) to[out=80,in=-87] node[midway,below right] {${\\mathcal B}={\\mathcal S}\\times[t_i,t_f]$} (P3);\n\n\\node at (2,0) {${\\mathcal R}$};\n\\node at (4.5,-0.5) {${\\mathcal S}=\\partial{\\mathcal R}$};\n\n\n\n\t\\end{tikzpicture}\n\t\\caption{Boundary and corner:\n\twe consider the evolution in time of a bounded region of space ${\\mathcal R}$ whose spatial boundary ${\\mathcal S}=\\partial{\\mathcal R}$ defines what is called the two-dimensional corner of space-time; the evolution of the corner defines the 2+1-d boundary of the region of space-time, ${\\mathcal B}={\\mathcal S}\\times [t_{i},t_{f}]$.\n\t}\n\t\\label{fig:corner}\n\\end{figure}\n\n\nTo implement this in quantum gravity, we follow a logic paralleling the hierarchy of 4d\/3d\/2d\/1d defects and their algebraic description in a 4d TQFT, the introduction of quantum states on the boundary forces to go one level higher algebraically and define bulk states as operators (linear 
forms) acting on boundary states: bulk states will not simply be wave-functions valued in ${\\mathbb C}$ but valued in the Hilbert space of boundary states.\nTo make things explicit, we call the boundary Hilbert space ${\\mathcal H}_{\\partial}$ with boundary states $|\\Phi_{\\partial}\\rangle$ living on the space-time corner ${\\mathcal S}=\\partial{\\mathcal R}$. A wave-function $\\psi$ is a function of the bulk fields $\\varphi_{bulk}$ valued in the (dual of the) boundary Hilbert space, $\\psi[\\varphi_{bulk}]\\in{\\mathcal H}_{\\partial}^{(*)}$, and thus defines a linear form on the boundary Hilbert space:\n\\begin{equation}\n\\psi:\\varphi_{bulk}\\mapsto\\psi[\\varphi_{bulk}] \\in {\\mathcal H}_{\\partial}^{(*)}\n\\,,\\qquad\n\\langle \\psi[\\varphi_{bulk}]\\,|\\,\\Phi_{\\partial}\\rangle\\in {\\mathbb C}\\,.\n\\end{equation}\nOne can then go two ways. Either we interpret these bulk wave-functions as defining a probability distribution for the bulk observables dependent on the choice of boundary states (i.e. quantum boundary conditions): once $\\Phi_{\\partial}$ is fixed, the function\n\\begin{equation}\n\\langle \\Phi_{\\partial}| \\psi[\\cdot]\\rangle:\\,\\varphi_{bulk}\\mapsto \\langle \\Phi_{\\partial}| \\psi[\\varphi_{bulk}]\\rangle\\in{\\mathbb C}\n\\end{equation}\nis a standard ${\\mathbb C}$-valued wave-function for the bulk fields.\nOr we reverse this logic and look at the probability distribution for the boundary observables after integration over the bulk fields. 
In that case,\n\\begin{equation}\n\\rho_{\\partial}[\\psi]=\\int [{\\mathcal D} \\varphi_{bulk}]\\,|\\psi[\\varphi_{bulk}]\\rangle\\langle \\psi[\\varphi_{bulk}]|\\in\\,\\textrm{End}[{\\mathcal H}_{\\partial}]\n\\end{equation}\nis the density matrix induced on the boundary by the bulk state $\\psi$.\nThe goal of this paper is to study the latter case in the framework of loop quantum gravity and clearly define this bulk-to-boundary coarse-graining from bulk spin networks to boundary density matrix.\nThis entails extending the spin network states of the 3d bulk geometry in ${\\mathcal R}$ to include the boundary degrees of freedom on the corner ${\\mathcal S}$. As we explain in the present paper, this can be done in a natural way in loop quantum gravity since spin networks can be geometrically interpreted as aggregates of area quanta, glued together to create 3d spaces from 2d excitations, and can thus be naturally extended to include the area quanta on the 2d boundary ${\\mathcal S}$.\nA spin network wave-function on an open graph then naturally defines a linear form on the Hilbert space of spin states living on the open edges of the graph, as illustrated on fig.\\ref{fig:boundary}, and thus induces a boundary density matrix.\n\\begin{figure}[htb]\n\t\\centering\n\t\\begin{tikzpicture} []\n\n\\coordinate (O1) at (0,0);\n\\coordinate (O2) at (2,0.7);\n\\coordinate (O3) at (4,0);\n\\coordinate (O4) at (2,-1);\n\n\\coordinate (P1) at (0.8,3);\n\\coordinate (P2) at (2.7,3.7);\n\\coordinate (P3) at (5.3,3);\n\\coordinate (P4) at (2.6,2);\n\n\\draw[dashed,color=blue] (O1) to[out=85,in=160] (O2);\n\\draw[dashed,color=blue] (O2) to[out=-20,in=95] (O3);\n\\draw[color=blue] (O3) to[out=-95,in=2] (O4);\n\\draw[color=blue] (O4) to[out=178,in=-80] (O1);\n\n\\draw[color=blue] (P1) to[out=90,in=160] (P2);\n\\draw[color=blue] (P2) to[out=-20,in=95] (P3);\n\\draw[color=blue] (P3) to[out=-90,in=2] (P4);\n\\draw[color=blue] (P4) to[out=178,in=-80] (P1);\n\n\\draw (O1) to[out=95,in=-100] 
(P1);\n\\draw (O3) to[out=80,in=-87] (P3);\n\n\\coordinate (A1) at (0.4,0);\n\\coordinate (A2) at (1.8,0.5);\n\\coordinate (A3) at (3.4,0.2);\n\\coordinate (A4) at (2.2,-0.6);\n\\coordinate (A5) at (1.2,-0.6);\n\n\\draw (A1) [color=red] -- ++ (-0.6,-1);\n\\draw (A2) [color=red] -- ++ (-0.6,0.8);\n\\draw (A3) [color=red] -- ++ (0.6,0.8);\n\\draw (A4) [color=red] -- ++ (0.2,-0.95);\n\\draw (A5) [color=red] -- ++ (-0.3,-0.85);\n\n\\draw (A1) [color=green] -- (A2);\n\\draw (A2) [color=green] -- (A3);\n\\draw (A3) [color=green] -- (A4);\n\\draw (A4) [color=green] -- (A5);\n\\draw (A5) [color=green] -- (A1);\n\\draw (A2) [color=green] -- (A4);\n\\draw (A2) [color=green] -- (A5);\n\n\\node[scale=0.7,color=red] at (A1) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A2) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A3) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A4) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A5) {$\\bullet$};\n\n\\coordinate (B1) at (1.5,3.2);\n\\coordinate (B2) at (2.6,3.5);\n\\coordinate (B3) at (4.4,3.3);\n\\coordinate (B4) at (3.2,2.4);\n\\coordinate (B5) at (2.2,2.4);\n\n\\draw (B1) [color=red] -- ++ (-0.9,0.8);\n\\draw (B2) [color=red] -- ++ (-0.3,0.8);\n\\draw (B3) [color=red] -- ++ (0.6,0.8);\n\\draw (B4) [color=red] -- ++ (0.2,-0.9);\n\\draw (B5) [color=red] -- ++ (-0.3,-0.9);\n\n\\draw (B1) [color=green] -- (B2);\n\\draw (B2) [color=green] -- (B3);\n\\draw (B3) [color=green] -- (B4);\n\\draw (B4) [color=green] -- (B5);\n\\draw (B5) [color=green] -- (B1);\n\\draw (B2) [color=green] -- (B4);\n\\draw (B2) [color=green] -- (B5);\n\n\\node[scale=0.7,color=red] at (B1) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B2) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B3) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B4) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B5) {$\\bullet$};\n\n\n\t\\end{tikzpicture}\n\t\\caption{Spin network with a boundary: on each spatial slice, the embedded graph $\\Gamma$ punctures the boundary surface of the 
bounded region of space ${\\mathcal R}$; we distinguish the boundary edges $e\\in \\partial\\Gamma$ in red and the bulk edges $e\\in\\overset{\\circ}{\\Gamma}$ in green; the spin network defines a wave-function for the holonomies living on the bulk edges valued in the Hilbert space attached to the open ends of the boundary edges.\n\t}\n\t\\label{fig:boundary}\n\\end{figure}\n\n\n\n\n\n\n\\medskip\n\n\nThe first section of this paper starts with a quick review of spin network quantum states for the 3d bulk geometry in loop quantum gravity. Then adapting this definition to the case of spatial boundaries, represented as open edges, we show that spin network wave-functions are actually valued in the boundary Hilbert space, i.e. they are functions of bulk $\\mathrm{SU}(2)$ holonomies mapping them onto boundary spin states. The boundary spin states can be understood as quantum boundary conditions. In operational terms, the bulk spin network can be interpreted as a quantum circuit acting on the boundary spins. \nThis opens the door to two directions. Either we sum over boundary states and obtain the probability distribution for the bulk holonomy. Or we can integrate over the bulk holonomies and obtain the {\\it boundary density matrix} defining the distribution of boundary states induced by the bulk spin network. This boundary density matrix can be interpreted as a bulk-to-boundary coarse-graining of the spin network state of quantum geometry.\n\nSection II is dedicated to the analysis of boundary density matrices induced by spin network states on fixed graphs and to a first study of their algebraic structure and properties. Our most important result is a universal bulk reconstruction procedure: starting from a gauge-invariant density matrix on the boundary Hilbert space, we show that one can always obtain it as the induced boundary density matrix of a spin network state on the bulk graph with a single vertex connected to all the boundary edges and to a single bulk loop. 
This can be understood as a purification result, since it shows how an arbitrary gauge-invariant mixed state on the boundary can be lifted to a pure bulk spin network state.\nWe then go on to investigate the finer structure of the induced boundary density matrices in terms of boundary vertices and bouquets of boundary edges.\n\nSection III finally presents explicit examples with the candy graphs, made of two vertices connected with bulk links, with four boundary edges and then with six boundary edges. This illustrates the various levels of mixed states one can obtain on the boundary in loop quantum gravity.\n\n\\section{Spin Networks as Boundary Maps}\n\nFor globally hyperbolic four-dimensional space-times ${\\mathcal M}=\\Sigma\\times{\\mathbb R}$ with closed three-dimensional spatial slices $\\Sigma$,\nloop quantum gravity (LQG) defines quantum states of geometry and describes their constrained evolution in time. A state of 3d geometry is defined by a closed oriented graph $\\Gamma$ and a wave-function $\\psi$ on it. This wave-function depends on one $\\mathrm{SU}(2)$ group element for each edge $e$ of the graph, $g_{e}\\in\\mathrm{SU}(2)$, and is assumed to be invariant under the $\\mathrm{SU}(2)$-action at each vertex $v$ of the graph:\n\\begin{equation} \\label{gauge transformation}\n\\psi(\\{g_{e}\\}_{e\\in\\Gamma})\n=\n\\langle\\{g_{e}\\}_{e\\in\\Gamma} | \\psi\\rangle\n=\n\\psi(\\{h_{t(e)}g_{e}h_{s(e)}^{-1}\\}_{e\\in\\Gamma})\\,\\quad\n\\forall h_{v}\\in\\mathrm{SU}(2)\\,\n\\end{equation}\nwhere $t(e)$ and $s(e)$ respectively refer to the target and source vertices of the edge $e$. 
We write $E$ and $V$ respectively for the number of edges and vertices of the considered graph $\\Gamma$.\nThe scalar product between such wave-functions is given by the Haar measure on the Lie group $\\mathrm{SU}(2)$:\n\\begin{equation}\n\\langle \\psi|\\widetilde{\\psi}\\rangle\n=\\int_{\\mathrm{SU}(2)^{{\\times E}}}\\prod_{e}\\mathrm{d} g_{e}\\,\n\\overline{\\psi(\\{g_{e}\\}_{e\\in\\Gamma})}\\,\\widetilde{\\psi}(\\{g_{e}\\}_{e\\in\\Gamma})\n\\,.\n\\end{equation}\nThe Hilbert space of quantum states with support on the graph $\\Gamma$ is thus realized as a space of square-integrable functions, ${\\mathcal H}_{\\Gamma}=L^{2}(\\mathrm{SU}(2)^{{\\times E}}\/\\mathrm{SU}(2)^{{\\times V}})$.\n\nA basis of this Hilbert space can be constructed using the spin decomposition of $L^{2}$ functions on the Lie group $\\mathrm{SU}(2)$ according to the Peter-Weyl theorem. A {\\it spin} $j\\in\\frac\\N2$ defines an irreducible unitary representation of $\\mathrm{SU}(2)$, with the action of $\\mathrm{SU}(2)$ group elements realized on a $(2j+1)$-dimensional Hilbert space ${\\mathcal V}_{j}$. We use the standard orthonormal basis $|j,m\\rangle$, labeled by the spin $j$ and the magnetic index $m$ running by integer steps from $-j$ to $+j$, which diagonalizes the $\\mathrm{SU}(2)$ Casimir $\\vec{J}^{2}$ and the $\\u(1)$ generator $J_{z}$. 
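As an elementary illustration (standard representation-theory facts, recalled here for concreteness), the lowest nontrivial case $j=\\tfrac12$ corresponds to the two-dimensional space ${\\mathcal V}_{1\/2}\\simeq{\\mathbb C}^{2}$, on which\n\\begin{equation}\n\\vec{J}^{2}\\,|\\tfrac12,m\\rangle=\\tfrac12\\left(\\tfrac12+1\\right)|\\tfrac12,m\\rangle=\\tfrac34\\,|\\tfrac12,m\\rangle\n\\,,\\qquad\nJ_{z}\\,|\\tfrac12,\\pm\\tfrac12\\rangle=\\pm\\tfrac12\\,|\\tfrac12,\\pm\\tfrac12\\rangle\n\\,,\n\\end{equation}\nso that the Casimir eigenvalue $j(j+1)$ and the dimension $2j+1$ of the general construction are easily checked. 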
Group elements $g$ are then represented by the $(2j+1)\\times (2j+1)$ Wigner matrices $D^{j}(g)$:\n\\begin{equation}\nD^{j}_{mm'}(g)=\\langle j,m|g|j,m'\\rangle\\,,\\qquad\n\\overline{D^{j}_{mm'}(g)}\n=\nD^{j}_{m'm}(g^{-1})\n\\,.\n\\end{equation}\nThese Wigner matrices form an orthogonal basis of $L^{2}(\\mathrm{SU}(2))$:\n\\begin{equation}\n\\int_{\\mathrm{SU}(2)}\\mathrm{d} g\\,\\overline{D^{j}_{ab}(g)}\\,{D^{k}_{cd}(g)}\n=\n\\int_{\\mathrm{SU}(2)}\\mathrm{d} g\\,{D^{j}_{ba}(g^{-1})}\\,{D^{k}_{cd}(g)}\n=\n\\frac{\\delta_{jk}}{2j+1}\\delta_{ac}\\delta_{bd}\n\\,,\\qquad\n\\delta(g)\n=\\sum_{j\\in\\frac\\N2}(2j+1)\\chi_{j}(g)\n\\,, \\label{eq:Peter-Weyl}\n\\end{equation}\nwhere $\\chi_{j}$ is the spin-$j$ character defined as the trace of the Wigner matrix, $\\chi_{j}(g)={\\mathrm{Tr}} D^{j}(g)=\\sum_{m}\\langle j,m|g|j,m\\rangle$.\nApplying this to gauge-invariant wave-functions allows one to build the {\\it spin network} basis states of ${\\mathcal H}_{\\Gamma}$, which depend on one spin $j_{e}$ on each edge and one intertwiner $I_{v}$ at each vertex:\n\\begin{equation}\n\\Psi_{\\{j_{e},I_{v}\\}}(\\{g_{e}\\}_{e\\in\\Gamma})\n=\n\\langle\\{g_{e}\\}_{e\\in\\Gamma} | \\{j_{e},I_{v}\\}\\rangle\n=\n\\sum_{m_{e}^{t,s}}\n\\prod_{e}\\sqrt{2j_{e}+1}\\,\\langle j_{e}m_{e}^{t}|g_{e}|j_{e}m_{e}^{s}\\rangle\n\\,\\prod_{v} \\langle \\bigotimes_{e|\\,v=s(e)} j_{e}m_{e}^{s}|\\,I_{v}\\,|\\bigotimes_{e|\\,v=t(e)}j_{e}m_{e}^{t}\\rangle\n\\,.\n\\end{equation}\n\\begin{figure}[!htb]\n\t\\centering\n\t\\begin{tikzpicture} [scale=1.2]\n\\coordinate (O) at (0,0);\n\n\\node[scale=0.7] at (O) {$\\bullet$} node[below] {$I_v$};\n\n\\draw (O) -- node[midway,sloped]{$>$} ++ (1,1) node[right] {$j_1, m_1$};\n\n\\draw (O) to[bend left=20] node[midway,sloped]{$<$} ++ (0.9,-0.9) node[right] {$j_2, m_2 $};\n\n\\draw (O) to[bend left=20] node[midway,sloped]{$>$} ++ (0,1.5) node[above] {$j_5, m_5$};\n\n\\draw (O) to[bend left=10] node[midway,sloped]{$<$} ++ (-1.2,-0.5) node[left] {$j_3, m_3$};\n\n\\draw (O) to[bend 
left=10] node[midway,sloped]{$>$} ++ (-1.1,0.6) node[left] {$j_4, m_4$};\n\n\t\\end{tikzpicture}\n\t\\caption{A five-valent intertwiner $I_v$ at vertex $v$ is a $\\mathrm{SU}(2)$-invariant map from the tensor product of the incoming spins (living on the edges $e$ whose target is $v$) to the outgoing spins (living on the edges $e$ whose source is $v$), its matrix elements are $\\langle (j_{1},m_{1})(j_{3},m_{3})(j_{5},m_{5})|I_{v}|(j_{2},m_{2})(j_{4},m_{4})\\rangle$ in the standard spin basis labeled by the spin $j$ and the magnetic moment $m$.}\n\t\\label{fig:intertwiner}\n\\end{figure}\nAs illustrated on fig.\\ref{fig:intertwiner}, an {\\it intertwiner} is a $\\mathrm{SU}(2)$-invariant state -or singlet- living in the tensor product of the incoming and outgoing spins at the vertex $v$:\n\\begin{equation}\nI_{v}\\in\\textrm{Inv}_{\\mathrm{SU}(2)}\\Big{[}\n\\bigotimes_{e|\\,v=s(e)} V_{j_{e}}\n\\otimes\n\\bigotimes_{e|\\,v=t(e)} V_{j_{e}}^{*}\n\\Big{]}\n\\,.\n\\end{equation}\nThe scalar product between two spin network states based on the same graph $\\Gamma$ is then given by the product of the scalar products between their intertwiners:\n\\begin{equation}\n\\langle \\Psi_{\\{j_{e},I_{v}\\}}|\\Psi_{\\{\\tilde{j}_{e},\\tilde{I}_{v}\\}} \\rangle\n=\n\\langle {\\{j_{e},I_{v}\\}}| {\\{\\tilde{j}_{e},\\tilde{I}_{v}\\}}\\rangle\n=\n\\prod_{e}\\delta_{j_{e},\\tilde{j}_{e}}\n\\,\\prod_{v}\\langle I_{v}|\\tilde{I}_{v}\\rangle\n\\,.\n\\end{equation}\n\n\nLoop quantum gravity is formulated on the full Hilbert space of spin network states as a sum over all graphs $\\Gamma$ of the Hilbert spaces ${\\mathcal H}_{\\Gamma}$ , defined as a projective limit taking into account in a consistent way the inclusion of subgraphs into larger graphs \\cite{Ashtekar:1994mh,Thiemann:2007zz}. 
Then we construct observables as operators either on fixed graphs or allowing transitions between graphs, and we define the dynamics through transition amplitudes, obtained either by suitably regularized Hamiltonian constraint operators \\cite{Thiemann:1996aw,Thiemann:1996av,Borissov:1997ji,Gaul:2000ba,Assanioussi:2015gka} or by spinfoam state-sum models inspired by topological field theory \\cite{Reisenberger:1996pu,Baez:1997zt,Livine:2010zx,Perez:2012wv}.\n\nIn the present work, we are interested in the generalization of the framework to spatial slices with boundaries. As we explain below, such a spatial boundary ${\\mathcal B}=\\partial\\Sigma$, often referred to as a {\\it corner} (between space and time), as illustrated on fig.\\ref{fig:corner}, is taken into account by extending the definition of spin networks to graphs with open edges.\n\n\n\\subsection{Corners, boundary states and maps}\n\nIntroducing spin networks on a bounded spatial slice is similar to taking a bounded subset of a spin network state. As illustrated on fig.\\ref{fig:boundary}, this means considering a graph $\\Gamma$ with open edges $e\\in\\partial\\Gamma$ puncturing the boundary surface. 
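As a toy encoding (ours, purely illustrative, with arbitrary vertex labels), such a graph with open edges can be stored by listing its bulk edges as vertex pairs and marking the free end of each open edge:

```python
# Toy encoding of a graph with open edges (vertex labels and data layout ours).
# A bulk edge joins two vertices; a boundary edge has a free end, marked None.
bulk_edges = [(1, 2), (2, 3), (3, 1), (3, 4)]
boundary_edges = [(1, None), (4, None), (4, None)]  # open ends puncture the boundary

vertices = {v for edge in bulk_edges for v in edge}
boundary_vertices = {s for (s, _) in boundary_edges}  # attached to an open edge
bulk_vertices = vertices - boundary_vertices

print(sorted(boundary_vertices), sorted(bulk_vertices))  # [1, 4] [2, 3]
```

The distinction between vertices touching an open edge and purely bulk vertices is the one relevant for gauge covariance below.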
We do not endow the boundary with extra structure, representing the 2d boundary intrinsic geometry as in \\cite{Freidel:2016bxd,Freidel:2018pvm,Freidel:2019ees} or locality on the boundary as in \\cite{Feller:2017ejs}, but discuss the minimal boundary structure.\n\\begin{figure}[htb]\n\t\\centering\n\t\\begin{tikzpicture} []\n\n\\coordinate (O1) at (0,0);\n\\coordinate (O2) at (2,0.7);\n\\coordinate (O3) at (4,0);\n\\coordinate (O4) at (2,-1);\n\n\\coordinate (P1) at (0.8,3);\n\\coordinate (P2) at (2.7,3.7);\n\\coordinate (P3) at (5.3,3);\n\\coordinate (P4) at (2.6,2);\n\n\\draw[dashed,color=blue] (O1) to[out=85,in=160] (O2);\n\\draw[dashed,color=blue] (O2) to[out=-20,in=95] (O3);\n\\draw[color=blue] (O3) to[out=-95,in=2] (O4);\n\\draw[color=blue] (O4) to[out=178,in=-80] (O1);\n\n\\draw[color=blue] (P1) to[out=90,in=160] (P2);\n\\draw[color=blue] (P2) to[out=-20,in=95] (P3);\n\\draw[color=blue] (P3) to[out=-90,in=2] (P4);\n\\draw[color=blue] (P4) to[out=178,in=-80] (P1);\n\n\\draw (O1) to[out=95,in=-100] (P1);\n\\draw (O3) to[out=80,in=-87] (P3);\n\n\\coordinate (A1) at (0.4,0);\n\\coordinate (A2) at (1.8,0.5);\n\\coordinate (A3) at (3.4,0.2);\n\\coordinate (A4) at (2.2,-0.6);\n\\coordinate (A5) at (1.2,-0.6);\n\n\\draw (A1) [color=red] -- ++ (-0.6,-1) node[below]{$e_{1}\\in\\partial\\Gamma$};\n\\draw (A2) [color=red] -- ++ (-0.6,0.8) node[above]{$e_{5}$};\n\\draw (A3) [color=red] -- ++ (0.6,0.8)node[above]{$e_{4}$};\n\\draw (A4) [color=red] -- ++ (0.2,-0.95)node[below]{$e_{3}$};\n\\draw (A5) [color=red] -- ++ (-0.3,-0.85)node[below]{$e_{2}$};\n\n\\draw (A1) [color=green] -- (A2);\n\\draw (A2) [color=green] -- (A3);\n\\draw (A3) [color=green] -- (A4);\n\\draw (A4) [color=green] -- (A5);\n\\draw (A5) [color=green] -- (A1);\n\\draw (A2) [color=green] -- (A4);\n\\draw (A2) [color=green] -- (A5);\n\n\\node[scale=0.7,color=red] at (A1) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A2) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A3) 
{$\\bullet$};\n\\node[scale=0.7,color=red] at (A4) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A5) {$\\bullet$};\n\n\\coordinate (B1) at (1.5,3.2);\n\\coordinate (B2) at (2.6,3.5);\n\\coordinate (B3) at (4.4,3.3);\n\\coordinate (B4) at (3.2,2.4);\n\\coordinate (B5) at (2.2,2.4);\n\n\\draw (B1) [color=red] -- ++ (-0.9,0.8);\n\\draw (B2) [color=red] -- ++ (-0.3,0.8);\n\\draw (B3) [color=red] -- ++ (0.6,0.8);\n\\draw (B4) [color=red] -- ++ (0.2,-0.9);\n\\draw (B5) [color=red] -- ++ (-0.3,-0.9);\n\n\\node[scale=0.7,color=red] at (B1) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B2) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B3) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B4) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B5) {$\\bullet$};\n\n\\draw (B1) [color=green] -- (B2);\n\\draw (B2) [color=green] -- (B3);\n\\draw (B3) [color=green] -- (B4);\n\\draw (B4) [color=green] -- (B5);\n\\draw (B5) [color=green] -- (B1);\n\\draw (B2) [color=green] -- (B4);\n\\draw (B2) [color=green] -- (B5);\n\n\n\t\\end{tikzpicture}\n\t\\caption{On each spatial slice, the boundary states consist in the tensor product of spin states living on the boundary edges of the spin network: ${\\mathcal H}^{\\partial}_{\\Gamma}=\\bigotimes_{e\\in\\partial\\Gamma}\\bigoplus_{j_{e}}{\\mathcal V}_{j_{e}}$.}\n\t\\label{fig:boundary}\n\\end{figure}\n\n\nEach boundary edge $e\\in\\partial\\Gamma$ carries a spin $j_{e}$ and a vector in the corresponding representation $v_{e}\\in{\\mathcal V}_{j_{e}}$. 
This defines the boundary Hilbert space as:\n\\begin{equation} \\label{eq:boundary-state}\n{\\mathcal H}^{\\{j_{e}\\}_{e\\in\\partial\\Gamma}}_{\\Gamma}\n=\n\\bigotimes_{e\\in\\partial\\Gamma}{\\mathcal V}_{j_{e}}\n\\,.\n\\end{equation}\nOne does not need to fix the spins carried by the boundary edges and can consider the larger boundary Hilbert space:\n\\begin{equation}\n{\\mathcal H}^{\\partial}_{\\Gamma}\n=\\bigoplus_{\\{j_{e}\\}}{\\mathcal H}^{\\{j_{e}\\}_{e\\in\\partial\\Gamma}}_{\\Gamma}\n=\\bigotimes_{e\\in\\partial\\Gamma}{\\mathcal V}\n\\qquad\\textrm{with}\\quad\n{\\mathcal V}=\\bigoplus_{j}{\\mathcal V}_{j}\\,.\n\\end{equation}\nUsing the Schwinger realization of the ${\\mathfrak{su}}(2)$ Lie algebra in terms of a pair of quantum oscillators, the Hilbert space ${\\mathcal V}$ is the tensor product of two copies of the harmonic oscillator Hilbert space, which can be understood as (holomorphic) wave-functions of a spinor, i.e. a complex 2-vector \\cite{Freidel:2009ck,Borja:2010rc,Livine:2011gp,Dupuis:2012vp,Livine:2013zha,Alesci:2015yqa,Bianchi:2015fra,Bianchi:2016tmw}.\n\nCalling $\\overset{\\circ}{\\Gamma}=\\Gamma\\setminus\\partial\\Gamma$ the bulk or interior of the graph $\\Gamma$, a spin network wave-function on the graph $\\Gamma$ with boundary is still a function of group elements living on bulk edges $e\\in\\Gamma\\setminus\\partial\\Gamma$, but it is no longer valued in the field ${\\mathbb C}$ but in the boundary Hilbert space ${\\mathcal H}^{\\partial}_{\\Gamma}$:\n\\begin{equation}\n\\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\,\\in\\,{\\mathcal H}^{\\partial}_{\\Gamma}\n\\,.\n\\end{equation}\nThe scalar product between wave-functions is inherited from the inner product between boundary states:\n\\begin{equation} \\label{eq:Definition-InnerProduct}\n\\langle \\psi|\\widetilde{\\psi}\\rangle\n=\n\\int\n\\prod_{e\\in\\overset{\\circ}{\\Gamma}}\\mathrm{d} g_{e}\\,\n\\langle 
\\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})|\\widetilde{\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle\n\\,,\n\\end{equation}\nwith the normalization of wave-functions reading as:\n\\begin{equation}\n\\langle \\psi|{\\psi}\\rangle\n=\n\\int\n\\prod_{e\\in\\overset{\\circ}{\\Gamma}}\\mathrm{d} g_{e}\\,\n\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})|{\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle\n=1\n\\,.\n\\end{equation}\n\nTo be more precise, it should actually be considered as a linear form on the boundary Hilbert space and thus live in the dual Hilbert space, $\\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\,\\in\\,({\\mathcal H}^{\\partial}_{\\Gamma})^{*}$.\nThis means that it defines a distribution on boundary states depending on the group elements, or holonomies, living in the bulk:\n\\begin{equation}\n\\forall \\Phi\\in {\\mathcal H}^{\\partial}_{\\Gamma}\n\\,,\\quad\n\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\,|\\,\\Phi\\rangle \\,\\,\\in{\\mathbb C}\n\\,.\n\\end{equation}\nIn simpler words, a spin network state is now a map on boundary states (or corner states), which we will loosely refer to as a boundary map.\n\n\\medskip\n\nThe statement of gauge invariance also has to take into account the boundary: the wave-function will be invariant with respect to bulk gauge transformations while it will be covariant under gauge transformations on the boundary.\nMore precisely, we distinguish bulk vertices $v\\in V^{o}$ that are not connected to any boundary edge and boundary vertices $v\\in V_{\\partial}$ that are attached to at least one boundary edge. 
The wave-function is assumed to be invariant under $\\mathrm{SU}(2)$ transformations acting at bulk vertices, while $\\mathrm{SU}(2)$ transformations acting at boundary vertices will act on the spin states dressing the boundary edges:\n\\begin{equation}\n|\\psi(\\{h_{t(e)}g_{e}h_{s(e)}^{-1}\\})\\rangle\n=\n\\left(\\bigotimes_{e\\in\\partial\\Gamma} h_{v(e)}^{\\epsilon_{e}^{v}}\\right)\n\\,|\\psi(\\{g_{e}\\})\\rangle\n\\,,\n\\end{equation}\nwhere $v(e)$ for $e\\in\\partial\\Gamma$ denotes the vertex to which the boundary edge is attached, $\\epsilon_{e}^{v}=1$ if the boundary edge is outgoing ($v(e)=s(e)$), and $\\epsilon_{e}^{v}=-1$ if the boundary edge is incoming ($v(e)=t(e)$).\n\n\\medskip\n\nThe definition of the spin network basis states can then be adapted to the case with boundary:\n\\begin{eqnarray} \\label{eq:spin-network-with-boundary}\n\\Psi_{\\{j_{e},I_{v}\\}}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\n&=&\n\\sum_{m_{e}^{t,s}}\n\\prod_{e\\in\\overset{\\circ}{\\Gamma}}\\sqrt{2j_{e}+1}\\,\\langle j_{e}m_{e}^{t}|g_{e}|j_{e}m_{e}^{s}\\rangle\n\\,\\prod_{v} \\langle \\bigotimes_{e\\in\\overset{\\circ}{\\Gamma}|\\,v=s(e)} j_{e}m_{e}^{s}\n|\\,I_{v}\\,|\n\\bigotimes_{e\\in\\overset{\\circ}{\\Gamma}|\\,v=t(e)} j_{e}m_{e}^{t}\\rangle\n\\\\\n&\\in&\n\\bigotimes_{\\substack{e\\in\\partial\\Gamma\\\\ t(e)\\in\\Gamma}} {\\mathcal V}_{j_{e}}^*\n\\,\\otimes\\,\n\\bigotimes_{\\substack{e\\in\\partial\\Gamma\\\\ s(e)\\in\\Gamma}} {\\mathcal V}_{j_{e}}\n\\nonumber\n\\,.\n\\end{eqnarray}\nWe sum over the magnetic indices $m$ only for the bulk edges. 
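As a minimal numerical illustration of this boundary covariance (our toy check, not from the paper): for a single edge carrying spin 1, with the spin-1 Wigner matrix realized through the adjoint action $D^{1}(g)_{ij}=\tfrac{1}{2}{\mathrm{Tr}}(\sigma_{i}\,g\,\sigma_{j}\,g^{\dagger})$ in the real Cartesian basis, one can verify $D^{1}(h_{t}\,g\,h_{s}^{-1})=D^{1}(h_{t})\,D^{1}(g)\,D^{1}(h_{s})^{-1}$:

```python
import numpy as np

# Pauli matrices
sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)

def su2(w):
    """g = exp(-i theta n.sigma/2) for the axis-angle vector w = theta * n."""
    theta = np.linalg.norm(w)
    n_dot_sig = np.einsum('i,ijk->jk', w / theta, sig)
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n_dot_sig

def D1(g):
    """Spin-1 matrix of g in the real Cartesian basis: (1/2) Tr(sig_i g sig_j g+)."""
    return 0.5 * np.real(np.einsum('iab,bc,jcd,ad->ij', sig, g, sig, g.conj()))

rng = np.random.default_rng(1)
g, hs, ht = (su2(rng.normal(size=3)) for _ in range(3))

# Covariance of the one-edge boundary map: g -> h_t g h_s^{-1} acts by D^1(h)
# on the boundary spin states (h_s^{-1} = h_s^dagger in SU(2))
lhs = D1(ht @ g @ hs.conj().T)
rhs = D1(ht) @ D1(g) @ np.linalg.inv(D1(hs))
print(np.allclose(lhs, rhs))
```

The two sides agree to machine precision, reflecting that $D^{1}$ is a representation.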
The spin states on the boundary edges are not contracted, so that the wave-function $\\Psi_{\\{j_{e},I_{v}\\}}$ is valued in the boundary Hilbert space ${\\mathcal H}^\\partial_{\\Gamma}$.\nThis can be made more explicit by writing the wave-function $\\psi$ as a tensor, evaluating it on a basis of boundary states,\n\\begin{equation}\n\\psi^{\\{j_{e},m_{e}\\}_{e\\in\\partial\\Gamma}}\n=\n\\langle \\otimes_{e\\in\\partial\\Gamma}j_{e},m_{e}\\,|\\,\\psi\\rangle\n\\,.\n\\end{equation}\nAssuming that boundary edges are outgoing for the sake of simplicity, this gives for spin network basis states:\n\\begin{eqnarray}\n\\Psi_{\\{j_{e},I_{v}\\}}(\\{g_{e}\\})^{\\{j_{e},m_{e}^{s}\\}_{e\\in\\partial\\Gamma}}\n&=&\n\\langle \\otimes_{e\\in\\partial\\Gamma}j_{e},m_{e}^{s}\\,|\\,\\Psi_{\\{j_{e},I_{v}\\}}(\\{g_{e}\\})\\rangle\n\\\\\n&=&\n\\sum_{m_{e}^{t,s}}\n\\prod_{e\\in\\overset{\\circ}{\\Gamma}}\\sqrt{2j_{e}+1}\\,\\langle j_{e}m_{e}^{t}|g_{e}|j_{e}m_{e}^{s}\\rangle\n\\,\\prod_{v} \\langle \\bigotimes_{e\\in\\Gamma|\\,v=s(e)} j_{e}m_{e}^{s}\n|\\,I_{v}\\,|\n\\bigotimes_{e\\in\\overset{\\circ}{\\Gamma}|\\,v=t(e)} j_{e}m_{e}^{t}\\rangle\n\\nonumber\n\\end{eqnarray}\nThe scalar product between those wave-functions is given by the scalar products of the bulk intertwiners, as in the no-boundary case:\n\\begin{eqnarray}\n\\langle \\Psi_{\\{j_{e},I_{v}\\}}|\\Psi_{\\{\\tilde{j}_{e},\\tilde{I}_{v}\\}} \\rangle\n&=&\n\\int\\prod_{e\\in\\overset{\\circ}{\\Gamma}}\\mathrm{d} g_{e}\n\\sum_{\\{k_{e},m_{e}\\}}\n\\overline{\\Psi_{\\{j_{e},I_{v}\\}}(\\{g_{e}\\})^{\\{k_{e},m_{e}\\}_{e\\in\\partial\\Gamma}}}\n\\,\\Psi_{\\{\\tilde{j}_{e},\\tilde{I}_{v}\\}}(\\{g_{e}\\})^{\\{k_{e},m_{e}\\}_{e\\in\\partial\\Gamma}}\n\\nonumber\\\\\n&=&\n\\prod_{e}\\delta_{j_{e},\\tilde{j}_{e}}\n\\,\\prod_{v}\\langle I_{v}|\\tilde{I}_{v}\\rangle\n\\,.\n\\end{eqnarray}\n\n\\subsection{Bulk probability}\n\nNow that the bulk wave-function has been promoted to a map from bulk degrees of freedom to boundary states, in a logic following (Atiyah's axiomatization of) topological field theories, the corresponding 
probability distribution for the bulk fields is given by the boundary space scalar product instead of the mere squared modulus:\n\\begin{equation}\n\\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\,\\in\\,{\\mathcal H}^{\\partial}_{\\Gamma}\n\\,,\n\\qquad\n{\\mathcal P}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})=\n\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})|{\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle\n\\,.\n\\end{equation}\n\\begin{figure}[htb]\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\\coordinate (O) at (-5,0);\n \n\\path (O) ++(160:2) coordinate (O1);\n\\path (O) ++(120:2) coordinate (O2);\n\\path (O) ++(80:2) coordinate (O3);\n\\path (O) ++(40:2) coordinate (O4);\n\n\\draw[thick] (O1) to[bend right=30] (O4);\n\\draw[thick] (O1) to[bend right=30] (O3);\n\n\\draw[thick,red] (O1) -- ++(160:1) ++(160:0.35) node {$j_1$};\n\\draw[thick,red] (O2) -- ++(120:1) ++(120:0.35) node {$j_2$};\n\\draw[thick,red] (O3) -- ++(80:1) ++(80:0.35) node {$j_3$};\n\\draw[thick,red] (O4) -- ++(40:1) ++(40:0.35) node {$j_4$};\n\n \\draw [green,thick,domain=40:160] plot ({-5+2*cos(\\x)}, {2*sin(\\x)});\n\n \\draw [thick,domain=160:400] plot ({-5+2*cos(\\x)}, {2*sin(\\x)});\n \n \\draw[->,>=stealth,very thick] (-2,0) -- node [midway, above] {gluing its copy} (2,0);\n \n\\draw (O1) node[scale=0.7,red] {$\\bullet$};\n\\draw (O2) node[scale=0.7,red] {$\\bullet$};\n\\draw (O3) node[scale=0.7,red] {$\\bullet$};\n\\draw (O4) node[scale=0.7,red] {$\\bullet$};\n\n\\coordinate (A) at (5,0);\n\\coordinate (B) at (11,0);\n\\path (A) ++(60:2) coordinate (A1);\n\\path (A) ++(20:2) coordinate (A2);\n\\path (A) ++(-20:2) coordinate (A3);\n\\path (A) ++(-60:2) coordinate (A4);\n\n\\path (B) ++(120:2) coordinate (B1);\n\\path (B) ++(160:2) coordinate (B2);\n\\path (B) ++(200:2) coordinate (B3);\n\\path (B) ++(240:2) coordinate (B4);\n\n \\draw [green,thick,domain=120:240] plot ({11+2*cos(\\x)}, {2*sin(\\x)});\n \\draw [green,thick,domain=-60:60] plot 
({5+2*cos(\\x)}, {2*sin(\\x)});\n\n \\draw [thick,domain=240:480] plot ({11+2*cos(\\x)}, {2*sin(\\x)});\n \\draw [thick,domain=60:300] plot ({5+2*cos(\\x)}, {2*sin(\\x)});\n\n\\draw[red,thick] (A1) -- node[above,midway,scale=0.7] {$j_1$} (B1);\n\\draw[red,thick] (A2) -- node[above,midway,scale=0.7] {$j_2$} (B2);\n\\draw[red,thick] (A3) -- node[above,midway,scale=0.7] {$j_3$} (B3);\n\\draw[red,thick] (A4) -- node[above,midway,scale=0.7] {$j_4$} (B4);\n\n\n\\draw[thick] (B1) to[bend left=30] (B4);\n\\draw[thick] (B1) to[bend left=30] (B3);\n\\draw[thick] (A1) to[bend right=30] (A4);\n\\draw[thick] (A1) to[bend right=30] (A3);\n\n\\draw (A1) node[scale=0.7,red] {$\\bullet$};\n\\draw (A2) node[scale=0.7,red] {$\\bullet$};\n\\draw (A3) node[scale=0.7,red] {$\\bullet$};\n\\draw (A4) node[scale=0.7,red] {$\\bullet$};\n\n\\draw (B1) node[scale=0.7,red] {$\\bullet$};\n\\draw (B2) node[scale=0.7,red] {$\\bullet$};\n\\draw (B3) node[scale=0.7,red] {$\\bullet$};\n\\draw (B4) node[scale=0.7,red] {$\\bullet$};\n\n\\end{tikzpicture}\n\\caption{\nGluing the two copies of the spin network into the boundary density matrix: boundary edges (red lines) are glued together using the boundary space scalar product; for each copy, the maximal tree for the bulk gauge fixing consists of the green edges, while the remaining edges, in black, define the independent loops of the bulk graph.\n} \\label{fig:GluingBoundaryEdges}\n\\end{figure}\n\nAs illustrated on fig.\\ref{fig:GluingBoundaryEdges}, we are gluing two copies of the spin network with trivial holonomies along the open edges on the boundary.\nThis yields a totally gauge-invariant probability distribution, despite the gauge covariance of the wave-function under boundary gauge transformations:\n\\begin{equation}\n{\\mathcal P}(\\{g_{e}\\}_{e\\in\\Gamma})\n=\n{\\mathcal P}(\\{h_{t(e)}g_{e}h_{s(e)}^{-1}\\}_{e\\in\\Gamma})\\,\\quad\n\\forall h_{v}\\in\\mathrm{SU}(2)^{V}\n\\,,\n\\end{equation}\nwith no difference between bulk and 
boundary vertices or edges.\n\nFollowing the earlier work on spin networks \\cite{Freidel:2002xb} and subsequent works \\cite{Livine:2006xk,Livine:2007sy,Livine:2008iq,Livine:2013gna, Charles:2016xwc, Anza:2016fix}, we can gauge-fix this gauge invariance down to a single $\\mathrm{SU}(2)$ action.\nFor this purpose, one chooses an arbitrary root vertex $v_{0}\\in\\Gamma$ and a maximal tree in the bulk graph $T\\subset\\Gamma^{o}$. A tree is a set of edges that never form any cycle (or loop). A maximal tree $T$ has $(V-1)$ edges. Furthermore, for any vertex $v\\in\\Gamma$, it defines a unique path of edges $P[v_{0}\\rightarrow v]\\subset T$ along the tree linking the root vertex $v_{0}$ to the vertex $v$. This allows us to gauge-fix all the group elements along tree edges to the identity, $g_{e\\in T}\\mapsto\\mathbb{I}$, by choosing gauge transformations $h_{v}$ at every vertex but the root vertex as:\n\\begin{equation}\nh_{v}=\\left(\\overleftarrow{\\prod_{\\ell\\in P[v_{0}\\rightarrow v]}} g_{\\ell}\\right)^{-1}\\,,\n\\end{equation}\nwhere the product of group elements is taken from right to left over $g_{\\ell}$ if the edge $\\ell$ is oriented in the same direction as the path $P[v_{0}\\rightarrow v]$ and over its inverse $g_{\\ell}^{-1}$ otherwise.\nThis maps all the group elements on tree edges to the identity, $h_{t(e)}g_{e}h_{s(e)}^{-1}=\\mathbb{I}$ for $e\\in T$. The remaining edges, which do not belong to the tree, actually correspond to a minimal generating set of loops (or cycles) on the bulk graph $\\Gamma^{o}$. Indeed, each non-tree edge defines a loop from the root vertex to the edge and back,\n\\begin{equation}\n{\\mathcal L}_{e\\notin T}:v_{0}\\underset{T}{\\rightarrow}s(e)\\underset{e}{\\rightarrow}t(e)\\underset{T}{\\rightarrow}v_{0}\n\\,.\\nonumber\n\\end{equation}\nThere are $L=E-V+1$ edges not belonging to $T$, defining $L$ such loops. 
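The tree construction is straightforward to implement; a minimal sketch (our illustration, on a toy bulk graph with arbitrary labels) grows a maximal tree from the root, solves $h_{t(e)}g_{e}h_{s(e)}^{-1}=\mathbb{I}$ on tree edges, and is left with $L=E-V+1$ loop holonomies:

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_su2():
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    return np.array([[q[0] + 1j * q[1], q[2] + 1j * q[3]],
                     [-q[2] + 1j * q[3], q[0] - 1j * q[1]]])

# Toy bulk graph (labels ours): directed edges (source, target); V = 4, E = 5
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]
g = {e: haar_su2() for e in edges}

# Grow a maximal tree T from the root v0 = 0, choosing h_v so that tree edges
# are gauge-fixed to the identity: h_t g_e h_s^{-1} = 1 for e in T
h, tree = {0: np.eye(2)}, set()
while len(h) < 4:
    for (s, t) in edges:
        if s in h and t not in h:
            h[t] = h[s] @ np.linalg.inv(g[(s, t)])
            tree.add((s, t))
        elif t in h and s not in h:
            h[s] = h[t] @ g[(s, t)]
            tree.add((s, t))

fixed = {e: h[e[1]] @ g[e] @ np.linalg.inv(h[e[0]]) for e in edges}
loops = [e for e in edges if e not in tree]
print(len(tree), len(loops))  # 3 tree edges, L = E - V + 1 = 2 loop holonomies
```

The gauge-fixed group elements on the two non-tree edges are the residual loop holonomies, defined up to conjugation at the root vertex.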
One can show that every cycle on the bulk graph $\\Gamma^{o}$ can be generated from those cycles.\nFor $e\\notin T$, the gauge transformation built above no longer maps the group element $g_{e}$ to the identity, but maps it to the holonomy around the corresponding loop,\n\\begin{equation}\n\\forall e\\notin T\\,,\\qquad\nh_{t(e)}g_{e}h_{s(e)}^{-1}\n=\n\\overleftarrow{\\prod_{\\ell\\in {\\mathcal L}_{e}}} g_{\\ell}\n\\equiv\nG_{e}\n\\,.\\nonumber\n\\end{equation}\nAs a consequence, the bulk probability distribution depends only on those $L$ group elements:\n\\begin{equation}\n{\\mathcal P}(\\{g_{e}\\}_{e\\in\\Gamma})\n=\n{\\mathcal P}(\\{G_{e}\\}_{e\\notin T},\\{\\mathbb{I}\\}_{e\\in T})\n\\equiv\n{\\mathcal P}_{GF}(G_{1},\\cdots,G_{L})\n\\,.\n\\end{equation}\nPutting aside the gauge-fixed group elements living on the tree edges and focusing on the non-trivial loop holonomies, this gauge-fixed bulk probability ${\\mathcal P}_{GF}$ is still invariant under gauge transformations at the root vertex $v_{0}$:\n\\begin{equation}\n{\\mathcal P}_{GF}(G_{1},\\cdots,G_{L})=\n{\\mathcal P}_{GF}(h \\, G_1 \\, h^{-1},\\cdots,h \\, G_L \\, h^{-1})\n\\,, \\quad \\forall \\, h \\in \\mathrm{SU}(2)\n\\,.\n\\end{equation}\nThis directly implies two simple results:\n\\begin{prop} \\label{theorem:ExtremalPoint}\nThe configuration $G_1=\\cdots=G_L=\\mathbb{I}$, representing a flat $\\mathrm{SU}(2)$ connection, is always a stationary point for the bulk probability function ${\\mathcal P}(\\{g_{e}\\})=\\langle {\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}}) | {\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle$.\n\\end{prop}\n\\begin{prop} \\label{coro:Norm_tree-v.s.-loop}\nIf the bulk graph $\\overset{\\circ}{\\Gamma}$ is a tree, i.e. 
does not contain any loop, then the bulk probability function ${\\mathcal P}(\\{g_{e}\\})$ is constant and does not depend on the bulk holonomies $g_{e}$.\n\\end{prop}\n\n\n\n\n\\subsection{Spin network maps as quantum circuits}\n\nWe would like to build on the interpretation of spin network wave-functions as valued in the space of linear forms on the boundary Hilbert space, or boundary maps. This can be translated operationally as spin networks defining quantum circuits on the boundary data.\n\nLet us fix the spins on the boundary edges and distinguish their orientation. Then a spin network wave-function for the bulk graph defines a family of maps from the spins on the incoming boundary edges to the spins on the outgoing boundary edges, labeled by the holonomies living on the bulk links:\n\\begin{equation}\n\\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\n\\,:\\,\\,\n\\bigotimes_{\\substack{e\\in\\partial\\Gamma\\\\ t(e)\\in\\Gamma}} {\\mathcal V}_{j_{e}}\n\\longrightarrow\n\\bigotimes_{\\substack{e\\in\\partial\\Gamma\\\\ s(e)\\in\\Gamma}} {\\mathcal V}_{j_{e}}\n\\,.\n\\end{equation}\nOf course, we could unfix the boundary spins and more generally attach the larger Hilbert space ${\\mathcal V}=\\bigoplus_{j}{\\mathcal V}_{j}$ to each boundary edge.\nAs illustrated on fig.\\ref{fig:spinnetcircuit}, the spin network graph, with its link and node structure, already carries the natural structure of a circuit. 
The holonomies, or $\\mathrm{SU}(2)$ group elements, on the graph links are interpreted as unitary one-spin gates, while the intertwiners, or $\\mathrm{SU}(2)$-invariant maps, naturally define multi-spin gates.\n\\begin{figure}[htb]\n\t\\centering\n\\begin{tikzpicture} []\n\\draw[thick,decorate,decoration={brace},xshift=-4pt,yshift=0pt]\n(0,0) -- (0,1) -- (0,2) node [black,yshift=-1cm,xshift=-1.35cm] {Incoming edges};\n \\draw[thick] (0,1)--node[midway,above]{\\footnotesize Spin $| j_2, m_2 \\rangle$}(2,1) node[midway,sloped]{$>$};\n \\draw[thick] (0,2)--node[midway,above]{\\footnotesize Spin $| j_1, m_1 \\rangle$}(2,2) node[midway,sloped]{$>$};\n \\draw[thick] (0,0)--node[left,above]{\\footnotesize Spin $| j_3, m_3 \\rangle$} node[midway,sloped]{$>$} (2,0);\n \\draw[thick] (3,0) -- (4.5,0) node[midway,sloped]{$>$};\n \\draw[thick] (2,-0.5) rectangle (3,2.5) node [pos=.5]{${\\mathcal I}_A$};\n \\draw[thick] (4.5,-0.5) rectangle (5.5,1.5) node [pos=.5]{${\\mathcal I}_B$};\n \\draw[thick] (7,0.5) rectangle (8,2.5) node [pos=.5]{${\\mathcal I}_C$};\n \\draw[thick] (4.5,1)--(4,1);\n \\draw (3.75,1) circle (0.25) node {$g_2$};\n \\draw[thick] (3.5,1)--(3,1)node[midway,sloped]{$<$};\n \\draw[thick] (6.5,1)--(7,1) node[midway,sloped]{$>$};\n \\draw (6.25,1) circle (0.25) node {$g_3$};\n \\draw[thick] (5.5,1)--(6,1);\n \\draw[thick] (3,0)--(4.5,0);\n \\draw[thick] (5.5,0)--(8.5,0)node[midway,sloped]{$>$};\n \\draw[thick] (3,2)--(4.75,2);\n \\draw (5,2) circle (0.25) node {$g_1$};\n \\draw[thick] (5.25,2)--(7,2)node[midway,sloped]{$>$};\n \\draw[thick] (8,2)--(8.5,2)node[midway,sloped]{$>$};\n \n \n \n \\draw[thick,decorate,decoration={brace,mirror},xshift=4pt,yshift=0pt]\n (8.5,0) -- (8.5,2) node [black,midway,xshift=1.35cm] {Outgoing edges};\n\\end{tikzpicture}\n\t\\caption{Spin network as a quantum circuit: holonomies become unitary one-spin gates while intertwiners are multi-spin gates; the circuit can contain 
loops.}\n\t\\label{fig:spinnetcircuit}\n\\end{figure}\n\nThe spin network state is not a process in itself. There are two important points to keep in mind. First, a spin network is a spatial construct, and not directly a space-time structure. A spin network is not a (quantum) causal history (see e.g. \\cite{Markopoulou:1999cz,Hawkins:2003vc} for a presentation and discussion on quantum causal histories). The maps that it defines between the boundary spins are thus possible processes that might occur if the spin network state itself (i.e. the quantum state of 3D geometry) does not evolve. In that sense, it is truly a circuit, to which we haven't yet sent an input and on which we can still adjust some parameters. Indeed, the second important remark is that the holonomies are not fixed. The spin network defines a whole family of boundary maps, which vary in the individual one-spin gates defined by the holonomies $\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}}$ along the bulk edges. From the point of view of the boundary, these holonomies are not fixed: they should either be averaged over, or some other criterion should be found to determine them. For instance, the holonomies, or more precisely their quantum probability distribution, should ultimately be determined by the dynamics of quantum gravity.\nNevertheless, even without exploring the issue of defining the dynamics of loop quantum gravity, either by a Hamiltonian constraint operator or by spinfoam transition amplitudes, this quantum circuit perspective allows us to formulate interesting questions:\n\\begin{itemize}\n\n\\item Working with a given spin network state, with a fixed graph, fixed spins and intertwiners, can we characterize the resulting subset of boundary maps induced by allowing for arbitrary holonomies along the edges? Or vice-versa, how much does a boundary state (for both incoming and outgoing boundary spins) fix the holonomies in the bulk? 
Could this be used to formulate a holographic principle for loop quantum gravity?\n\n\\item Going further, looking at the spin network state as a black box, with access solely to the boundary spins, if we know the subset of boundary maps that it defines, how much of the bulk graph and intertwiners can we reconstruct? Could one think of the diffeomorphism constraints of loop quantum gravity as identifying spin network states which lead to the same set of boundary maps? This would be a holographic implementation of the dynamics through bulk-to-boundary coarse-graining, along the lines of \\cite{Livine:2017xww}.\n\n\\item The issue of defining the dynamics or the coarse-graining of the theory is actually equivalent to the problem of defining a physical inner product or a flow of inner products from the microscopic theory to a coarse-grained macroscopic theory. The quantum circuit perspective offers a possible approach. The microscopic inner product between quantum circuits is defined as the loop quantum gravity kinematical inner product, reflecting the scalar product between intertwiners, i.e. the basic multi-spin gates. As we coarse-grain or sparsify the quantum circuit (while possibly not affecting the boundary maps), we reduce the bulk structure of the circuit by encompassing subsets of holonomies and intertwiners into single larger multi-spin gates, thus leading to a scalar product between those multi-spin gates. The ultimate stage is the fully coarse-grained state, directly provided with the inner product between boundary maps. 
Studying this in more detail would reveal the coarse-graining flow of spin network states in loop quantum gravity.\n\n\\end{itemize}\n\nAlthough these topics are very likely essential to the understanding of the renormalization flow, holographic behavior and semi-classical regime of loop quantum gravity, they are broad questions beyond the scope of the present work and are postponed to future investigation.\n\n\n\\section{Boundary Density Matrix}\n\n\n\n\\subsection{Bulk state to boundary density matrix}\n\nWe would like to shift the focus from the bulk to the boundary and investigate in more detail the boundary state induced by the bulk spin network state, defined as the density matrix obtained by integrating over the group elements, or in other words, taking the partial trace over bulk holonomies:\n\\begin{equation} \\label{eq:Coarse-graining}\n\\rho_{\\partial\\Gamma}[\\psi]=\n\\int\n\\prod_{e\\in\\overset{\\circ}{\\Gamma}}\\mathrm{d} g_{e}\\,\n|{\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})|\n\\,\\in\n\\textrm{End}({\\mathcal H}^{\\partial}_{\\Gamma})\n\\,,\n\\end{equation}\n\\begin{equation}\n{\\mathrm{Tr}}\\,\n\\rho_{\\partial\\Gamma}[\\psi]\n=\n\\int[\\mathrm{d} g_{e}]\\,\n\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})|{\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle\n=\n\\int[\\mathrm{d} g_{e}]\\,\n{\\mathcal P}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\n\\,.\\nonumber\n\\end{equation}\nThis mixed state on the boundary can be considered as a coarse-graining of the bulk spin network state \\cite{Livine:2006xk,Bianchi:2013toa}.\nThe goal of this paper is to compare the data encoded in the bulk wave-function $\\psi_{\\Gamma}$ and in the induced boundary density matrix $\\rho_{\\partial\\Gamma}$.\n\\begin{figure}[htb!]\n\t\\centering\n\t\\begin{tikzpicture} []\n\n\\coordinate(O1) at (1,3);\n\\coordinate(O2) at (2.4,3.3);\n\\coordinate(O3) at 
(2.7,3);\n\\coordinate(O4) at (2.3,2.7);\n\\coordinate(O5) at (2,3.2);\n\\draw (O1) -- ++(-1,0.4);\n\\draw (O1) -- ++(-1,-0.4);\n\\draw (O2) -- ++(1,0.2);\n\\draw (O3) -- ++(1,0);\n\\draw (O4) -- ++(0.5,-0.3);\n\n\\draw (O1) to[bend left] (O5);\n\\draw (O1) to[bend right] (O5);\n\\draw (O2) to (O5);\n\\draw (O2) to (O3);\n\\draw (O3) to (O4);\n\\draw (O4) to (O5);\n\n\\node[scale=0.75] at (O1) {$\\bullet$};\n\\node[scale=0.75] at (O2) {$\\bullet$};\n\\node[scale=0.75] at (O3) {$\\bullet$};\n\\node[scale=0.75] at (O4) {$\\bullet$};\n\\node[scale=0.75] at (O5) {$\\bullet$};\n\n\\coordinate(P1) at (1,1);\n\\coordinate(P2) at (2.4,1.3);\n\\coordinate(P3) at (2.7,1);\n\\coordinate(P4) at (2.3,0.7);\n\\coordinate(P5) at (2,1.2);\n\\draw (P1) -- ++(-1,0.4);\n\\draw (P1) -- ++(-1,-0.4);\n\\draw (P2) -- ++(1,0.2);\n\\draw (P3) -- ++(1,0);\n\\draw (P4) -- ++(0.5,-0.3);\n\n\\draw (P1) to[bend left] (P5);\n\\draw (P1) to[bend right] (P5);\n\\draw (P2) to (P5);\n\\draw (P2) to (P3);\n\\draw (P3) to (P4);\n\\draw (P4) to (P5);\n\n\\node[scale=0.75] at (P1) {$\\bullet$};\n\\node[scale=0.75] at (P2) {$\\bullet$};\n\\node[scale=0.75] at (P3) {$\\bullet$};\n\\node[scale=0.75] at (P4) {$\\bullet$};\n\\node[scale=0.75] at (P5) {$\\bullet$};\n\n\\draw[->,>=stealth,very thick] (4,2) -- node [midway, above] {$\\displaystyle{ \\int \\prod_{ e\\in \\overset{\\circ}{\\Gamma} } \\mathrm{d} g_e }$} (6,2);\n\n\\coordinate(A1) at (8,3);\n\\coordinate(A2) at (8.1,3);\n\\coordinate(A3) at (9.5,3);\n\\coordinate(A4) at (9.4,3);\n\\draw (A1) -- ++(-0.7,0.4);\n\\draw (A1) -- ++(-0.7,-0.4);\n\\draw (A3) -- ++(0.7,0);\n\\draw (A3) -- ++(0.7,0.3);\n\\draw (A3) -- ++(0.5,-0.4);\n\n\\draw (A1) to (A2);\n\\draw (A3) to (A4);\n\\draw[dashed] (A2) to[bend left] (A4);\n\n\\coordinate(B1) at (8,1);\n\\coordinate(B2) at (8.1,1);\n\\coordinate(B3) at (9.5,1);\n\\coordinate(B4) at (9.4,1);\n\\draw (B1) -- ++(-0.7,0.4);\n\\draw (B1) -- ++(-0.7,-0.4);\n\\draw (B3) -- ++(0.7,0);\n\\draw (B3) -- ++(0.7,0.3);\n\\draw 
(B3) -- ++(0.5,-0.4);\n\n\\draw (B1) to (B2);\n\\draw (B3) to (B4);\n\\draw[dashed] (B2) to[bend right] (B4);\n\n\\draw[color=red] (A2) to[bend left] (B2);\n\\draw[color=red] (A4) to[bend right] (B4);\n\n\\node[scale=0.75] at (A2) {$\\bullet$};\n\\node[scale=0.75] at (A4) {$\\bullet$};\n\\node[scale=0.75] at (B2) {$\\bullet$};\n\\node[scale=0.75] at (B4) {$\\bullet$};\n\n\n\t\\end{tikzpicture}\n\t\\caption{Boundary density matrix for spin network basis states. The two copies of the spin networks are the bra $\\langle \\psi |$ and ket $| \\psi \\rangle$, which are glued together by the Haar integration over the bulk holonomies $\\int \\prod_{e\\in \\overset{\\circ}{\\Gamma}} \\mathrm{d} g_{e}$. }\n\t\\label{fig:densitymatrix}\n\\end{figure}\n\n\\smallskip\n\nLet us start by looking at normalized pure spin network basis states, i.e. with fixed spins $j_{e}$ and fixed normalized intertwiners $I_{v}$. They are factorized states in the sense that the intertwiners are decoupled, so that there is no intertwiner entanglement as discussed in \\cite{Livine:2017fgq}. As a result, the boundary state only depends on the intertwiners living on the boundary vertices (i.e. the vertices with at least one boundary edge) and not on the bulk intertwiners.\nLet us insist that ``boundary vertices'' are still in the bulk: the adjective ``boundary'' refers to the fact that they are connected to boundary edges.\nIndeed, the orthonormality of the Wigner matrices implies that each bulk edge is cut in half and both half-edges are glued with their counterparts on the second copy of the wave-function, as illustrated on fig.\\ref{fig:densitymatrix}. 
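The gluing at a boundary edge rests on the Schur orthogonality identity $\int\mathrm{d} g\, D^{j}(g)|v\rangle\langle v|D^{j}(g)^{\dagger}=\langle v|v\rangle\,\mathbb{I}_{j}/(2j+1)$, which can be checked by Monte Carlo (our illustration, for $j=1/2$ with Haar samples from unit quaternions):

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_su2(n):
    q = rng.normal(size=(n, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    g = np.empty((n, 2, 2), dtype=complex)
    g[:, 0, 0] = q[:, 0] + 1j * q[:, 1]
    g[:, 0, 1] = q[:, 2] + 1j * q[:, 3]
    g[:, 1, 0] = -q[:, 2] + 1j * q[:, 3]
    g[:, 1, 1] = q[:, 0] - 1j * q[:, 1]
    return g

# Single boundary edge of spin j = 1/2: psi(g) = D^{1/2}(g)|v> with |v> normalized
v = np.array([1.0, 0.0], dtype=complex)
g = haar_su2(300_000)
psi = np.einsum('nab,b->na', g, v)

# rho = int dg |psi(g)><psi(g)|, estimated by the sample average
rho = np.einsum('na,nb->ab', psi, psi.conj()) / len(g)
print(np.round(rho.real, 2))
```

With a normalized $|v\rangle$, the average reproduces the maximally mixed state $\mathbb{I}/2$ up to Monte Carlo noise, independently of the choice of $|v\rangle$.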
We get the norm of every bulk intertwiner, normalized to 1, times the contribution from the boundary intertwiners, which gives the boundary density matrix:\n\\begin{eqnarray}\n\\langle \\{\\tilde{k}_{e},\\tilde{m}_e\\}\\,|\n\\rho_{\\partial\\Gamma}[\\Psi_{\\{j_{e},I_{v}\\}}]\n| \\{k_{e},m_e\\}\\rangle\n=&\\prod_{e}\\delta_{k_{e},j_{e}}\\delta_{\\tilde{k}_{e},j_{e}}\n\\prod_{v\\in\\partial\\Gamma}&\n\\langle\n\\bigotimes_{\\substack{e\\in\\partial\\Gamma\\\\ v\\in e}} j_{e}\\tilde{m}_{e}\n\\otimes\n\\bigotimes_{\\substack{e\\in\\overset{\\circ}{\\Gamma}\\\\ v=s(e)}} j_{e}m_{e}^{s}\n|\\,I_{v}\\,|\n\\bigotimes_{\\substack{e\\in\\overset{\\circ}{\\Gamma}\\\\ v=t(e)}} j_{e}m_{e}^{t}\n\\rangle \\nonumber\\\\\n&&\\overline{\n\\langle\n\\bigotimes_{\\substack{e\\in\\partial\\Gamma\\\\ v\\in e}} j_{e}m_{e}\n\\otimes\n\\bigotimes_{\\substack{e\\in\\overset{\\circ}{\\Gamma}\\\\ v=s(e)}} j_{e}m_{e}^{s}\n|\\,I_{v}\\,|\n\\bigotimes_{\\substack{e\\in\\overset{\\circ}{\\Gamma}\\\\ v=t(e)}} j_{e}m_{e}^{t}\n\\rangle\n}\n\\,.\n\\end{eqnarray}\nAssuming that each boundary edge is attached to a different vertex, i.e. each boundary vertex connects to a single boundary edge, this simplifies tremendously. Indeed, as illustrated on fig.\\ref{fig:boundaryvertex}, the self-gluing of an intertwiner on itself leads to the identity matrix on the open edge. As a consequence, the density matrix is the totally mixed state with fixed spin on each boundary edge:\n\\begin{equation}\n\\rho_{\\partial\\Gamma}[\\Psi_{\\{j_{e},I_{v}\\}}]\n=\n\\bigotimes_{e\\in\\partial\\Gamma} \\frac{\\mathbb{I}_{j_{e}}}{(2j_{e}+1)}\n\\,.\n\\end{equation} \nThis boundary density matrix, for a spin network basis state, clearly does not allow one to see the bulk structure!\n\nIn the slightly more general case of boundary vertices connected to several boundary edges, the boundary density matrix reflects the first layer of the bulk and ``sees'' the total recoupled spin of the boundary edges at each boundary vertex. 
We will analyze this case in more detail in section \\ref{sec:manyedges} below.\n\n\\begin{figure}[htb]\n\\vspace*{5mm}\n\\begin{subfigure}{0.4\\linewidth}\n\t\\begin{tikzpicture}\n\\coordinate (O1) at (-0.8,0);\n\\coordinate (O2) at (0.45,0);\n\\coordinate (O3) at (1.2,0.75);\n\\coordinate (O4) at (1.2,0);\n\\coordinate (O5) at (1.2,-0.75);\n\\draw[thick] (O1) -- node[midway] {$>$} node[midway,above=2.3] {$e\\in\\partial\\Gamma$} node[midway,below=2.3] {$j_e,m_e$} (O2);\n\\draw (O2) node[scale=0.7] {$\\bullet$};\n\\draw[thick,in=+180,out=+90,scale=3,rotate=0] (O2) to (O3);\n\\draw[thick] (O2) -- (O4);\n\\draw[thick,in=+180,out=-90,scale=3,rotate=0] (O2) to (O5);\n\n\\coordinate (O6) at (2.25,0.75);\n\\coordinate (O7) at (2.25,0);\n\\coordinate (O8) at (2.25,-0.75);\n\\coordinate (O9) at (3,0);\n\\coordinate (O10) at (4.25,0);\n\\draw[thick] (O10) -- node[midway] {$<$}\nnode[midway,below=2.3] {$j_e,\\tilde{m}_e$} (O9);\n\\draw (O2) node[scale=0.7] {$\\bullet$};\n\\draw (O9) node[scale=0.7] {$\\bullet$};\n\\draw[thick,in=+90,out=+0,scale=3,rotate=0] (O6) to (O9);\n\\draw[thick] (O7) -- (O9);\n\\draw[thick,in=-90,out=+0,scale=3,rotate=0] (O8) to (O9);\n\n\\draw[dashed] (O3) -- (O6);\n\\draw[dashed] (O4) -- (O7);\n\\draw[dashed] (O5) -- (O8);\n\n\\node at (-1,-1.5) {$=$};\n\n\\coordinate (O11) at (-0.5,-1.5);\n\n\\draw[thick] (O11) -- node[midway] {$>$} node[midway,below=2.3] {$j_e,m_e$} ++ (1.25,0) node[scale=0.7] {$\\bullet$} -- node[midway] {$<$} node[midway,below=2.3] {$j_e,\\tilde{m}_e$} ++ (1.25,0) ;\n\n\t\\end{tikzpicture}\n\\end{subfigure}\n\\hspace{1cm}\n\\begin{subfigure}[h]{0.4\\linewidth}\n\\begin{tikzpicture}\n\\coordinate (A1) at (0,0);\n\\coordinate (A2) at (0.75,0.5);\n\\coordinate (A3) at (0.75,0);\n\\coordinate (A4) at (0.75,-0.5);\n\n\\draw[thick,in=+180,out=+60,scale=3,rotate=0] (A1) to (A2);\n\\draw[thick] (A1) -- (A3);\n\\draw[thick,in=+180,out=-60,scale=3,rotate=0] (A1) to (A4);\n\n\\draw[thick] (A1) -- ++ (135:1);\n\\draw[thick] (A1) -- ++ 
(165:1);\n\\draw[thick] (A1) -- ++ (195:1);\n\\draw[thick] (A1) -- ++ (225:1);\n\n\\coordinate (B1) at (2.55,0);\n\\coordinate (B2) at (1.8,0.5);\n\\coordinate (B3) at (1.8,0);\n\\coordinate (B4) at (1.8,-0.5);\n\n\\draw[thick] (B1) -- ++ (45:1);\n\\draw[thick] (B1) -- ++ (15:1);\n\\draw[thick] (B1) -- ++ (-15:1);\n\\draw[thick] (B1) -- ++ (-45:1);\n\n\\draw[thick,in=0,out=120,scale=3,rotate=0] (B1) to (B2);\n\\draw[thick] (B1) -- (B3);\n\\draw[thick,in=0,out=240,scale=3,rotate=0] (B1) to (B4);\n\n\\draw[dashed] (A2) -- (B2);\n\\draw[dashed] (A3) -- (B3);\n\\draw[dashed] (A4) -- (B4);\n\n\\draw (A1) node[scale=0.7] {$\\bullet$} node[above=5] {$v$};\n\\draw (B1) node[scale=0.7] {$\\bullet$} node[above=5] {$v$};\n\n\\node at (-0.8,-2) {$ \\propto \\; \\displaystyle{ \\sum_{J} \\, C_{I_0}[J] }$};\n\n\\coordinate (C) at (-0.5,-2);\n\\coordinate (D) at (1.5,-2);\n\\coordinate (E) at (2.5,-2);\n\n\\draw[thick] (D) -- node[midway,above] {$J$} (E) ;\n\n\\draw[thick] (D) -- ++ (135:1);\n\\draw[thick] (D) -- ++ (165:1);\n\\draw[thick] (D) -- ++ (195:1);\n\\draw[thick] (D) -- ++ (225:1);\n\n\\draw[thick] (E) -- ++ (45:1);\n\\draw[thick] (E) -- ++ (15:1);\n\\draw[thick] (E) -- ++ (-15:1);\n\\draw[thick] (E) -- ++ (-45:1);\n\n\\draw (D) node[scale=0.7] {$\\bullet$};\n\\draw (E) node[scale=0.7] {$\\bullet$};\n\n\n\\end{tikzpicture}\n\\end{subfigure}\n\n\t\\caption{\n\tBoundary vertex contribution to the boundary density matrix from the self-gluing of intertwiners: single boundary edge vs many boundary edges.}\n\t\\label{fig:boundaryvertex}\n\\end{figure}\n\n\nSpin network basis states are actually very peculiar and are a very special case of bulk quantum geometry. They are eigenstates of geometrical observables, such as areas and volumes, but they are not coherent states with minimal spread on both connection and triad (i.e. on parallel transport and metric) and they do not commute with the Hamiltonian constraints. 
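The totally mixed boundary state obtained above for a basis state can be checked on a minimal toy model. The sketch below (plain NumPy, our own illustration rather than a construction from the paper) considers a single bivalent vertex with one boundary and one bulk spin-1/2 edge, whose unique normalized intertwiner is the singlet; gluing the bra and ket copies along the bulk edge then amounts, by Wigner-matrix orthonormality, to a partial trace over the bulk leg:

```python
import numpy as np

# Hypothetical minimal setup (not from the paper): one bivalent vertex with a single
# boundary edge and a single bulk edge, both carrying spin 1/2; the unique normalized
# intertwiner between them is the singlet state.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Gluing the bulk half-edges of the bra and the ket copies amounts here to a
# partial trace over the bulk tensor factor.
rho_full = np.outer(singlet, singlet.conj()).reshape(2, 2, 2, 2)
rho_boundary = np.einsum('abcb->ac', rho_full)  # trace out the bulk index

print(rho_boundary)  # the totally mixed state I/2 on the boundary spin
```

The boundary data is maximally mixed whatever intertwiner basis state sits at the vertex, in line with the general formula above.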
More generally, physically relevant states will be superpositions of such spin network basis states, thus superpositions of spins and intertwiners, leading to correlations and entanglement between bulk vertices, in which case the boundary density matrix will become non-trivial.\nBefore analyzing the structure of the boundary density matrix in more detail, let us underline the two main features of the boundary state as compared to the bulk state:\n\\begin{itemize}\n\n\\item The boundary state $\\rho_{\\partial\\Gamma}$ is typically mixed even if the bulk spin network state is pure \\cite{Livine:2006xk,Bianchi:2013toa}.\nThus a coarse-graining procedure trading the bulk states for the boundary state irremediably creates entropy. In particular, endowing the bulk states with a unitary dynamics would naturally lead to a decoherence process (and possibly re-coherence) for the boundary states \\cite{Feller:2016zuk,Feller:2017ejs}.\n\n\\item The boundary state $\\rho_{\\partial\\Gamma}$ does not decompose onto intertwiners between the boundary spins, even though the bulk spin network is made out of individual intertwiners, as pointed out in \\cite{Livine:2006xk,Livine:2013gna,Livine:2017xww}.\nIndeed, the density matrix is invariant under the action by conjugation of the $\\mathrm{SU}(2)$ group,\n\\begin{equation}\n\\forall h\\in\\mathrm{SU}(2)\\,,\\quad\n\\langle \\{\\tilde{k}_{e},\\tilde{m}_e\\}\\,|\n\\, h^{-1}\\rho_{\\partial\\Gamma}[\\psi]\nh\\,| \\,\\{k_{e},m_e\\}\\rangle\n=\n\\langle \\{\\tilde{k}_{e},\\tilde{m}_e\\}\\,|\n\\rho_{\\partial\\Gamma}[\\psi]\n| \\{k_{e},m_e\\}\\rangle\n\\end{equation}\nwhere the $\\mathrm{SU}(2)$ transformation acts simultaneously on all the boundary edges. It is however not invariant under gauge transformations acting on the left or on the right, $\\rho_{\\partial\\Gamma}\\mapsto h^{-1}\\rho_{\\partial\\Gamma}$ or $\\rho_{\\partial\\Gamma}\\mapsto \\rho_{\\partial\\Gamma}h$.\nThis means that the total spin of the boundary state does not vanish. 
In fact, the boundary defines an intertwiner between the two copies of the wave-function, the bra $\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})|$ and the ket $|{\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle$, as illustrated on fig.\\ref{fig:densitymatrix}. The recoupled spin $J$ between the boundary edges defines the overall channel between the bra and the ket. This total spin of the boundary state is called the {\\it closure defect} (since the $\\mathrm{SU}(2)$ gauge invariance is enforced by the closure constraint, which is a discretization of the Gauss law of the first order formulation of general relativity) \\cite{Livine:2013gna,Livine:2019cvi}. The $J=0$ component is the component with vanishing total boundary spin, i.e., in the usual jargon, the intertwiner component. It represents the ``closed'' or flat component, while the components with $J\\ne 0$ can be interpreted as bulk curvature.\nFrom the viewpoint of coarse-graining, this reflects the fact that curvature builds up when gluing flat blocks (the intertwiners) together \\cite{Livine:2013gna}. 
The gauge symmetry breaking at the boundary, due to allowing $J\\ne 0$, can also be understood as responsible for the entropy of isolated horizons (and thus of black holes) in the loop quantum gravity framework \\cite{Donnelly:2008vx,Donnelly:2011hn,Livine:2017fgq} (see \\cite{Donnelly:2014gva,Donnelly:2016auv} for a more general discussion of gauge symmetry and symmetry breaking on the boundary of gauge field theories).\nAt the end of the day, the closure defect, or total spin, provides a very useful basis to study the structure of boundary states and of induced boundary density matrices.\n\n\\end{itemize}\n\n\\subsection{The closure defect basis and $\\mathrm{SU}(2)$-invariance of the boundary density matrix}\n\nWe would like to introduce the closure defect basis for boundary states, which amounts to decomposing them according to the total boundary spin.\nAssuming that the boundary edges are all incoming (or all outgoing) to simplify the orientation conventions, we recouple all the boundary spins $j_{e}$ into their total spin $J$:\n\\begin{equation}\n{\\mathcal H}^{\\partial}_{\\Gamma}\n=\n\\bigoplus_{ \\{j_e\\}_{e\\in\\partial\\Gamma} }\\bigotimes_{e} {\\mathcal V}_{j_{e}}\n=\n\\bigoplus_{ \\{j_e\\}_{e\\in\\partial\\Gamma} }\\bigoplus_{J} {\\mathcal V}_J \\otimes {\\mathcal N}_J^{\\{j_{e}\\}}\\,,\n\\end{equation}\nwhere the multiplicity spaces (or degeneracy spaces) ${\\mathcal N}_J^{\\{j_{e}\\}}$ consist of the spaces of intertwiners (i.e. 
$\\mathrm{SU}(2)$-invariant states) in the tensor product of the total spin Hilbert space ${\\mathcal V}_{J}$ with the individual spins $\\bigotimes_{e} {\\mathcal V}_{j_{e}}$,\n\\begin{equation}\n{\\mathcal N}_J^{\\{j_{e}\\}}\n:=\n\\textrm{Inv}_{\\mathrm{SU}(2)}\\left[\n{\\mathcal V}_J\\otimes\\bigotimes_{e\\in\\partial\\Gamma} {\\mathcal V}_{j_{e}}\n\\right]\n\\,.\n\\end{equation}\nHere, due to the bulk spin network structure, the total spin $J$ is necessarily an integer.\nInstead of the decoupled basis $|\\{j_{e},m_{e}\\}_{e}\\rangle$, we use the recoupled basis, as illustrated on fig.\\ref{fig:recoupledbasis}:\n\\begin{equation}\n{\\mathcal H}^{\\partial}_{\\Gamma}\n=\n\\bigoplus_{ \\{j_e\\}_{e\\in\\partial\\Gamma} }\\bigoplus_{J,M}\\bigoplus_{I^{(J,\\{j_{e}\\})}}\n{\\mathbb C}|J,M\\rangle\\otimes|(J,\\{j_{e}\\}), I\\rangle\n=\n\\bigoplus_{J,M}\\bigoplus_{ \\{j_e\\} }\\bigoplus_{I^{(J,\\{j_{e}\\})}}\n{\\mathbb C}|J,M\\rangle\\otimes|(J,\\{j_{e}\\}), I\\rangle\n\\,,\n\\end{equation}\nwhere the $I^{(J,\\{j_{e}\\})}=|(J,\\{j_{e}\\}), I\\rangle$'s form a basis of intertwiners in the multiplicity space ${\\mathcal N}_J^{\\{j_{e}\\}}$. 
We might write $I^{(J)}$ instead of $I^{(J,\\{j_{e}\\})}$ whenever we don't need to explicitly specify the value of the boundary spins.\nThese intertwiner states not only encode the recoupled total spin $J$, but also how the individual spins $j_{e}$ are weaved together.\n\\begin{figure}[hbt!]\n\\centering\n\\vspace*{5mm}\n\\begin{tikzpicture}[scale=0.7]\n\n\\coordinate (O) at (-4.2,0);\n\n\\draw[thick] (O) -- ++ (0.5,0);\n\\draw[thick] (O) ++ (0,1) -- ++ (0.5,0);\n\\draw[thick] (O) ++ (0,0.5) -- ++ (0.5,0);\n\\draw[thick,loosely dotted] (O) ++ (0.25,-0.15) -- ++ (0,-0.9);\n\\draw[thick] (O) ++ (0,-1) -- ++ (0.5,0);\n\n\\coordinate (O1) at (-1,0);\n\n\\draw (O1)++(3,0) node {$\\sim$};\n\n\\draw (O1) node { $ | \\{ j_e, m_e \\}_{e\\in\\partial\\Gamma} \\rangle \\in{\\mathcal H}_{\\partial\\Gamma} $};\n\n\\coordinate (A1) at (5,0);\n\\coordinate (A2) at (4,1);\n\\coordinate (A3) at (4,0.5);\n\\coordinate (A4) at (4,0);\n\\coordinate (A5) at (4,-1);\n\n\\draw[thick] (A1) -- (A2) -- ++ (-0.5,0);\n\\draw[thick] (A1) -- (A3) -- ++ (-0.5,0);\n\\draw[thick] (A1) -- (A4) -- ++ (-0.5,0);\n\\draw[thick] (A1) -- (A5) -- ++ (-0.5,0);\n\n\\draw[thick,loosely dotted] (3.8,-0.15) -- (3.8,-0.9);\n\n\\draw (A1) node[scale=0.7] {$\\bullet$} node[above=2] {$I$};\n\n\\draw[thick] (A1) -- ++ (1.2,0) node[below] {$J,M$} ++ (3.5,-0.5) node {$\\underbrace{ | J,M \\rangle }_{ \n\\overset{ \\in }\n{ \\phantom{ \\big( } {\\mathcal V}_J \\phantom{\\big)} }\n } \n { \\phantom{ \\big( } \\otimes }\n \\underbrace{ | (J,\\{j_e\\}),I \\rangle }_{ \n \\overset{ \\in }\n {\n \\phantom{ \\big( } \\textrm{Inv} \\left( {\\mathcal V}_J \\otimes \\bigotimes_{e}{\\mathcal V}_{j_e} \\right) \\phantom{ \\big) }\n }\n } $};\n\n\n\\end{tikzpicture}\n\\caption{\nRecoupled basis for boundary states in terms of the total boundary spin (or closure defect) $J$.}\n\\label{fig:recoupledbasis}\n\\end{figure}\n\n\nIn the framework of the coarse-graining of spin networks introduced in \\cite{Charles:2016xwc}, the total 
spin $J$ is the tag and the multiplicity states $I\\in{\\mathcal N}_J^{\\{j_{e}\\}}$ are the tagged intertwiners.\nFrom a physical standpoint, the multiplicity spaces ${\\mathcal N}_J^{\\{j_{e}\\}}$ for spin recoupling give the black hole horizon micro-states in a na\\\"ive leading order approach to black hole (micro-canonical) entropy and holography in loop quantum gravity, e.g. \\cite{Ashtekar:1997yu,Domagala:2004jt,Livine:2005mw,Agullo:2009eq,Livine:2012cv,Asin:2014gta}.\n\nLet us focus on the case with fixed boundary spins $\\{j_{e}\\}$, although this merely lightens the notation, since the spins $j_{e}$ can be implicitly absorbed in the definition of the recoupling intertwiner $I^{(J,\\{j_{e}\\})}$. The bulk wave-function evaluated on bulk holonomies is a boundary state and can thus be decomposed onto the recoupled basis:\n\\begin{equation} \\label{eq:Bulk-BoundaryGeneticForm}\n| \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}}) \\rangle\n=\n\\sum_{J}\\sum_{M}\\sum_{I^{(J)}} C_{JMI}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}}) |J,M\\rangle \\otimes |J,I^{(J)}\\rangle\n\\,,\n\\end{equation}\nwhere the coefficients $C_{JMI}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})$ reflect the internal bulk structure of the wave-function and depend on the bulk spins and intertwiners.\n$\\mathrm{SU}(2)$ gauge transformations act non-trivially on the wave-function through the group action on the boundary spins. Now, as we have seen earlier, the density matrix $\\rho_{\\partial}=\\int \\mathrm{d} g_{e}\\,|\\psi(g_{e})\\rangle\\langle \\psi(g_{e})|$ is invariant under conjugation by the simultaneous $\\mathrm{SU}(2)$ action on all the boundary spins $\\bigotimes_{e} D^{j_{e}}(h)$. 
This is a direct consequence of the bulk $\\mathrm{SU}(2)$ gauge invariance,\n\\begin{eqnarray}\n\\rho_{\\partial\\Gamma}[\\psi]\n&=&\n\\int \\prod_{e} \\mathrm{d} g_{e} \\, | \\psi(\\{ g_{e} \\}_{e\\in\\overset{\\circ}{\\Gamma}}) \\rangle\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}}) |\n=\n\\int \\prod_{e} \\mathrm{d} g_{e} \\, | \\psi(\\{ h \\, g_{e} \\, h^{-1} \\}_{e\\in\\overset{\\circ}{\\Gamma}}) \\rangle\\langle \\psi(\\{h \\, g_{e} \\, h^{-1}\\}_{e\\in\\overset{\\circ}{\\Gamma}}) |\n\\\\\n&=&\n\\int \\prod_{e} \\mathrm{d} g_{e} \\, h | \\psi(\\{ g_{e} \\}_{e\\in\\overset{\\circ}{\\Gamma}}) \\rangle\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}}) | h^{-1}\n=h \\, \\rho_{\\partial\\Gamma}[\\psi] \\, h^{-1}\n\\,, \\qquad\n\\forall\\, h \\in \\mathrm{SU}(2)\n\\,.\n\\end{eqnarray}\nThis $\\mathrm{SU}(2)$ action on the boundary boils down to the $\\mathrm{SU}(2)$ action $D^{J}(h)$ on the recoupled spin and does not touch the multiplicity sector,\n\\begin{equation} \\label{eq:BoundarySU(2)action}\nh \\triangleright | \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}}) \\rangle\n=\n\\sum_{J}\\sum_{M,\\tilde{M}}\\sum_{I^{(J)}} C_{JMI}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}}) \\, D^J_{\\tilde{M}M}(h) \\, |J,\\tilde{M}\\rangle \\otimes |J,I^{(J)}\\rangle\n\\,.\n\\end{equation}\nThis means that the invariance of the boundary density matrix, $h\\,\\rho_{\\partial}\\,h^{\\dagger}=\\rho_{\\partial}$ for all group elements $h\\in\\mathrm{SU}(2)$, implies that it is necessarily totally mixed on each subspace at fixed total spin $J$ and that all the information is encoded in the multiplicity subspaces. 
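This block structure can be checked numerically on the smallest non-trivial case. The sketch below is our own toy illustration (two boundary spin-1/2 edges, names and weights chosen for the example): a density matrix that is totally mixed within each total-spin block commutes with all the total spin generators, i.e. it is invariant under simultaneous conjugation by $D^{1/2}(h)\otimes D^{1/2}(h)$, while the information sits entirely in the choice of the weights:

```python
import numpy as np

# Toy case (two boundary spin-1/2 edges): rho = p(0) P_{J=0} + p(1) P_{J=1}/3,
# i.e. totally mixed within each total-spin block.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

# Total spin generators acting simultaneously on both boundary edges.
J_ops = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]

# Projectors onto the singlet (J=0) and triplet (J=1) subspaces.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
P0 = np.outer(singlet, singlet)
P1 = np.eye(4) - P0

rho = 0.4 * P0 + 0.6 * P1 / 3   # probabilities p(0)=0.4, p(1)=0.6

# SU(2) invariance by conjugation is equivalent to commuting with the generators.
print(all(np.allclose(Ja @ rho, rho @ Ja) for Ja in J_ops))  # True
```

Only the probability distribution over $J$ (and, in higher-rank cases, the multiplicity-sector density matrix) carries non-trivial information.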
This is expressed more precisely by the following lemma:\n\n\\begin{lemma}\nA normalized $\\mathrm{SU}(2)$-invariant density matrix $\\rho$, thus satisfying $h\\,\\rho\\,h^{\\dagger}=\\rho\\,, \\forall\\, h\\in \\mathrm{SU}(2)$, has the following form:\n\\begin{equation} \\label{eq:SU(2)-invariant}\n\\rho\n=\n\\bigoplus_{J} p(J) \\frac{ \\mathbb{I}_{{\\mathcal V}_J} }{2J+1} \\otimes \\rho_{{\\mathcal N}_{J}}\n\\,, \\qquad\n{\\mathrm{Tr}} \\rho_{{\\mathcal N}_{J}}=1\\,,\\forall J\\in{\\mathbb N}\n\\,,\\qquad\n{\\mathrm{Tr}} \\rho=\\sum_{J}p(J)=1\n\\,.\n\\end{equation}\nThe coefficients $p(J)$ define the probability distribution over the total spin $J$. The operator $\\mathbb{I}_{{\\mathcal V}_J}=\\sum_{M}| J,M\\rangle\\langle J,M|$ is the identity on ${\\mathcal V}_J$ and $\\rho_{{\\mathcal N}_{J}}$ is an arbitrary density matrix in the multiplicity space ${\\mathcal N}_{J}$.\n\\end{lemma}\nThe $\\mathrm{SU}(2)$ invariance is a key property of the boundary density matrix, which descends directly from the gauge invariance of the bulk wave-functions under local $\\mathrm{SU}(2)$ transformations.\nLet us stress the important point that this is a statistical invariance under the $\\mathrm{SU}(2)$ action, at the level of the density matrix. This does not amount to the invariance of pure quantum states on the boundary. Indeed, strict $\\mathrm{SU}(2)$ invariance of the wave-function (i.e. $h\\,\\rho=\\rho\\,h^{\\dagger}=\\rho$) would require $J=0$, while we can have here an arbitrary distribution over all (allowed) values of the total spin $J$.\n\n\n\n\\subsection{Universal bulk reconstruction from the boundary density matrix}\n\nThe natural question is how much we can know about the bulk structure from the boundary density matrix. For instance, does the combinatorial structure of the bulk graph deeply affect the type of boundary density matrix one gets? Here, we show a universal reconstruction procedure. 
As hinted by the work in \\cite{Livine:2017xww}, a single bulk loop is enough to get arbitrary boundary density matrices. More precisely, any arbitrary $\\mathrm{SU}(2)$-invariant density matrix on the boundary Hilbert space can be induced from a pure bulk state on the single loop bulk graph. We prove this powerful result below. It can be understood as a boundary-to-bulk purification theorem.\n\n\\begin{prop} \\label{prop:BoundaryDensityMatrix}\nA mixed state $\\rho$ on the boundary Hilbert space ${\\mathcal H}_{\\partial}$ is $\\mathrm{SU}(2)$-invariant, $h\\,\\rho\\,h^{\\dagger}=\\rho$, if and only if it is an induced boundary density matrix (IBDM) from a pure (gauge-invariant) bulk state $| \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}}) \\rangle$ for some bulk graph $\\overset{\\circ}{\\Gamma}$ connecting the boundary edges.\n\\end{prop}\n\\begin{proof}\nWe already know that induced boundary density matrices are $\\mathrm{SU}(2)$-invariant. We have to show the reverse statement. Let us consider an arbitrary $\\mathrm{SU}(2)$-invariant density matrix, \n\\begin{equation}\n\\rho\n=\n\\bigoplus_{J} p(J) \\frac{ \\mathbb{I}_{{\\mathcal V}_J} }{2J+1} \\otimes \\rho_{{\\mathcal N}_{J}}\n\\,,\\nonumber\n\\end{equation}\nand let us diagonalize the density matrix on each multiplicity subsector,\n\\begin{equation}\n\\label{eq:SU(2)-inv.DM}\n\\rho_{{\\mathcal N}_J}=\\sum_{r=1}^{R_{J}} W_{I_{r}}^{(J)} \\,| J, I_{r}^{(J)} \\rangle\\langle J, I_{r}^{(J)} |\n\\,,\n\\end{equation}\nwhere $R_{J}$ is the rank of $\\rho_{{\\mathcal N}_J}$ and the intertwiners $I_{r}^{(J)}$ are orthonormal states in the multiplicity space ${\\mathcal N}_J$.\n\n\n\nLet us consider the bulk graph, as in \\cite{Livine:2017xww}, with a single vertex tying all the boundary edges to a single loop, as drawn on fig.\\ref{fig:LoopySpinNetwork}.\nThen a spin network state is a superposition of intertwiners between the boundary spins and the (pair of) spin(s) carried by the loop. 
We can unfold this intertwiner with a (virtual) link between the boundary edges and the loop. This (virtual) intermediate link carries the total boundary spin $J$. For each value of $J$, we need to specify the spin $k$ carried by the loop and the two intertwiners at the nodes. The three-valent intertwiner recoupling the loop spin $k$ to the total spin $J$ is unique (when it exists), while the intertwiners recoupling the boundary spins $\\{j_{e}\\}$ into $J$ will naturally be the intertwiners $I_{r}^{(J)}$.\n\\begin{figure}[hbt!]\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\n\n\n\n\n\\coordinate (O) at (0,0);\n \n\\coordinate (A) at (-6,0);\n\n\\draw (A) node {$\\rho_{\\partial}$};\n\n\\draw [domain=0:360] plot ({-6+1.75 * cos(\\x)}, {1.75 * sin(\\x)});\n \n\\draw[thick] (A) ++(-0.6,0);\n\\draw[thick] (A) ++(0:1) --++ (0:1.5);\n\\draw[thick] (A) ++(60:1) --++ (60:1.5) ++(60:0.35) node {$j_1$};\n\\draw[thick] (A) ++(120:1) --++ (120:1.5) ++(120:0.35) node {$j_2$};\n\\draw[thick] (A) ++(180:1) --++ (180:1.5) ++(180:0.35) node {$j_3$};\n\\draw[thick] (A) ++(240:1) --++ (240:1.5);\n\\draw[thick] (A) ++(300:1) --++ (300:1.5);\n\n\\draw [thick, loosely dotted,domain=195:230] plot ({-6+2.6 * cos(\\x)}, {2.6 * sin(\\x)});\n\n\n\\coordinate (O) at (3.5,0);\n\n\\draw[->,>=stealth,very thick] (-2,0) -- node[above] {?} (0,0);\n\n\\draw[thick,red] (O) -- ++ (315:1.5) node[very near end,above=2] {$J$} coordinate (B) node[blue,scale=0.7] {$\\bullet$};\n\\draw[blue,thick,in=-90,out=0,scale=4.5,rotate=0] (B) to[loop] node[near start,sloped] {$>$} node[near end,left=2] {$k$} (B) ++(315:0.35) node {$g$};\n\n\\draw[thick] (O) -- ++ (0:1.5);\n\\draw[thick] (O) -- ++ (45:1.5) ++(45:0.35) node {$j_1$};\n\\draw[thick] (O) -- ++ (90:1.5) ++(90:0.35) node {$j_2$};\n\\draw[thick] (O) -- ++ (135:1.5) ++(135:0.35) node {$j_3$};\n\\draw[thick] (O) -- ++ (180:1.5);\n\\draw[thick] (O) -- ++ (225:1.5);\n\n\\draw (O) node[scale=0.7] {$\\bullet$};\n\n\\draw [thick, loosely dotted,domain=160:200] plot 
({3.5+1.8 * cos(\\x)}, {1.8 * sin(\\x)});\n\n\\end{tikzpicture}\n\\caption{\nThe universal reconstruction procedure purifying an $\\mathrm{SU}(2)$-invariant boundary density matrix into a pure spin network superposition for a bulk made of a single vertex and a single loop.\n}\n\\label{fig:LoopySpinNetwork}\n\\end{figure}\n\n\nIndeed, for each value of the total spin $J$, we choose $R_{J}$ distinct spins $k_{r}^{(J)}$ for the loop with $J\\leq 2k_{r}^{(J)}$, so that ${\\mathcal V}_{J}\\subset {\\mathcal V}_{k_{r}^{(J)}}\\otimes {\\mathcal V}_{k_{r}^{(J)}}$, i.e. the loop spin can recouple to $J$: there exists a 3-valent intertwiner (given by the corresponding Clebsch-Gordan coefficients). We then define the following pure spin network state for the $1$-loop graph, in terms of a single bulk holonomy $g$ on the loop,\n\\begin{equation}\n| \\psi(g) \\rangle\n=\n\\sum_{J,M} \\sqrt{ p(J) } | J,M \\rangle \\otimes \\sum_{r=1}^{R_{J}} \\sum_{m,n=-k_{r}^{(J)}}^{k_{r}^{(J)}} (-1)^{k_{r}^{(J)}+m} \\, \\sqrt{ 2k_{r}^{(J)}+1 } \\, D^{k_{r}^{(J)}}_{nm}(g) \\, \\begin{pmatrix}\nJ & k_{r}^{(J)}& k_{r}^{(J)} \\\\\nM & -m & n\n\\end{pmatrix} \\, \\sqrt{ W_{I_{r}}^{(J)} } \\, | J, I_{r}^{(J)} \\rangle\n\\,.\n\\end{equation}\nIt is straightforward to check that this pure bulk state leads back to the desired $\\mathrm{SU}(2)$-invariant density matrix \\eqref{eq:SU(2)-invariant} upon integration over the bulk holonomy $g$.\n\n\\end{proof}\nIt is quite remarkable that the superposition of loop spins and bulk intertwiners naturally leads to mixed boundary density matrices. \n\n\n\\subsection{Probing the first layer of the bulk: Bouquets of boundary edges}\n\\label{sec:manyedges}\n\n\n\nUp to now, we have defined the boundary density matrix induced by a bulk spin network state, underlined the fact that the resulting boundary density matrix is typically mixed for a pure spin network state, and showed how to construct such a pure bulk state on a graph with at least one loop given a (suitably gauge-invariant) boundary density matrix. 
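The key mechanism of this purification, namely that Haar integration over the bulk holonomy mixes the boundary state, can be illustrated numerically in the simplest possible case. The Monte Carlo sketch below is our own toy check (not part of the construction itself): averaging the pure states $D^{1/2}(g)|0\rangle$ over random SU(2) elements recovers the totally mixed state $\mathbb{I}/2$, in line with the orthonormality of Wigner matrix elements:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_su2(rng):
    """Haar-random 2x2 special unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))  # fix the QR phase ambiguity
    return q / np.sqrt(np.linalg.det(q))              # rescale to unit determinant

# Haar-average the pure states D^{1/2}(g)|0>: the integration over the bulk
# holonomy turns a family of pure states into a mixed boundary state.
ket = np.array([1.0, 0.0])
n_samples = 20000
rho = sum(np.outer(u @ ket, (u @ ket).conj())
          for u in (haar_su2(rng) for _ in range(n_samples))) / n_samples

print(np.round(rho.real, 2))  # close to the totally mixed state [[0.5, 0], [0, 0.5]]
```

In the full construction above, the weights $\sqrt{p(J)}$ and $\sqrt{W_{I_r}^{(J)}}$ tune which mixed state is reached instead of the totally mixed one.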
This universal reconstruction procedure, given above in the proof of Proposition \\ref{prop:BoundaryDensityMatrix}, with a bulk graph made of a single vertex and a single bulk loop, should be understood as a purification of the boundary density matrix into a bulk state. There are nevertheless many possible bulk states on possibly complicated graphs inducing the same boundary state, leading to many ways to purify a given mixed boundary state. In light of this fact, we wish to better understand how the bulk graph structure and the potential correlations between the spins and intertwiners within the bulk get reflected in the boundary density matrix.\n\n\n\nIn this section, we would like to start diving into the bulk, or at least probing the first layer of the bulk beyond the boundary edges. More precisely, we would like to see the ``boundary vertices'', i.e. the vertices to which the boundary edges are attached, and understand whether a finer study of the boundary density matrix allows one to extract information about whether bunches of boundary edges are attached to the same boundary vertex.\nIndeed, although a rather natural assumption is that each boundary edge is connected to a different vertex in the bulk, this is not the generic configuration. 
A more general configuration involves boundary edges regrouped into bouquets, each attached to a vertex, as illustrated on fig.\\ref{fig:bouquet}.\n\\begin{figure}[hbt!]\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\\coordinate (O) at (0,0);\n \n\\draw (O) ++(0.6,0)\n\\coordinate (O1) at (60:1);\n\\coordinate (O2) at (120:1);\n\\coordinate (O3) at (240:1);\n\\coordinate (O4) at (300:1);\n\n \n \\draw [domain=0:360] plot ({cos(\\x)}, {sin(\\x)});\n\n\\draw (O1) -- ++ (60:0.7) node [right,midway] {$J_1$};\n\\draw (O2) -- ++ (120:0.7) node [left,midway] {$J_2$};\n\\draw (O3) -- ++ (240:0.7) node [left,midway] {$J_3$};\n\\draw (O4) -- ++ (300:0.7) node [right,midway] {$J_4$};\n\n\\draw (O1) ++ (60:0.7) -- ++ (0:0.7);\n\\draw (O1) ++ (60:0.7) -- ++ (45:0.7);\n\\draw (O1) ++ (60:0.7) node[scale=0.7,blue] {$\\bullet$} -- ++ (90:0.7);\n\n\\draw (O2) ++ (120:0.7) -- ++ (90:0.7);\n\\draw (O2) ++ (120:0.7) -- ++ (135:0.7);\n\\draw (O2) ++ (120:0.7) node[scale=0.7,blue] {$\\bullet$} -- ++ (180:0.7);\n\n\\draw (O3) ++ (240:0.7) -- ++ (180:0.7);\n\\draw (O3) ++ (240:0.7) -- ++ (225:0.7);\n\\draw (O3) ++ (240:0.7) node[scale=0.7,blue] {$\\bullet$} -- ++ (270:0.7);\n\n\\draw (O4) ++ (300:0.7) -- ++ (270:0.7);\n\\draw (O4) ++ (300:0.7) -- ++ (315:0.7);\n\\draw (O4) ++ (300:0.7) node[scale=0.7,blue] {$\\bullet$} -- ++ (0:0.7);\n\n\\draw[->,>=stealth,very thick] (-4,0) -- (-2,0);\n\\coordinate (A) at (-6,0);\n \\draw [domain=0:360] plot ({-6+cos(\\x)}, {sin(\\x)});\n \n\\draw (A) ++(-0.6,0)\n\\path (A) ++(60:1) coordinate (A1);\n\\path (A) ++(120:1) coordinate (A2);\n\\path (A) ++(240:1) coordinate (A3);\n\\path (A) ++(300:1) coordinate (A4);\n\n\\draw (A1) node[scale=0.7] {$\\bullet$};\n\\draw (A2) node[scale=0.7] {$\\bullet$};\n\\draw (A3) node[scale=0.7] {$\\bullet$};\n\\draw (A4) node[scale=0.7] {$\\bullet$};\n\n\\draw (A1) -- ++ (0:0.7);\n\\draw (A1) -- ++ (45:0.7);\n\\draw (A1) -- ++ (90:0.7);\n\n\\draw (A2) -- ++ (90:0.7);\n\\draw (A2) -- ++ (135:0.7);\n\\draw (A2) -- ++ 
(180:0.7);\n\n\\draw (A3) -- ++ (180:0.7);\n\\draw (A3) -- ++ (225:0.7);\n\\draw (A3) -- ++ (270:0.7);\n\n\\draw (A4) -- ++ (270:0.7);\n\\draw (A4) -- ++ (315:0.7);\n\\draw (A4) -- ++ (0:0.7);\n\n\\draw (O1) node[scale=0.7,blue] {$\\bullet$};\n\\draw (O2) node[scale=0.7,blue] {$\\bullet$};\n\\draw (O3) node[scale=0.7,blue] {$\\bullet$};\n\\draw (O4) node[scale=0.7,blue] {$\\bullet$};\n\n\\end{tikzpicture}\n\\caption{\nBouquets of boundary edges attached to boundary vertices $v\\in V^{\\partial}$ and the chicken feet basis labeled by the recoupled spin $J_{v}$ for each bouquet.}\n\\label{fig:bouquet}\n\\end{figure}\n\nThis leads us to introduce a ``chicken feet'' basis where we recouple the spin of the boundary edges of each bouquet separately instead of only considering the total recoupled spin $J$. We thus introduce the bouquet spin $J_{v}$ for each boundary vertex $v$. Writing $V^{\\partial}$ for the set of boundary vertices, the boundary Hilbert space decomposes as:\n\\begin{equation}\n{\\mathcal H}_{\\partial}=\\bigoplus_{\\{j_{e}\\}_{e\\in\\partial}}\\bigotimes_{e\\in\\partial}{\\mathcal V}_{j_{e}}\n=\\bigoplus_{\\{J_{v}\\}_{v\\in V^{\\partial}}} \\bigotimes_{v\\in V^{\\partial}}{\\mathcal V}_{J_{v}}\\otimes {\\mathcal N}_{\\{J_{v}\\}}\n\\,,\n\\end{equation}\n\\begin{equation}\n{\\mathcal N}_{\\{J_{v}\\}}\n=\n\\bigotimes_{v\\in V^{\\partial}}\\bigoplus_{\\{j_{e}\\}_{e\\,| v\\in\\partial e}}\\textrm{Inv}\\Big{[}\n{\\mathcal V}_{J_{v}}\\otimes \\bigotimes_{e\\,| v\\in\\partial e} {\\mathcal V}_{j_{e}}\n\\Big{]}\n\\,,\n\\end{equation}\nleading to the chicken feet basis states $|\\{J_{v}\\}_{v\\in V^{\\partial}},\\{j_{e}\\}_{e\\in\\partial},\\{{\\mathcal I}^{J_{v}}_{\\{j_{e}\\}}\\}_{v\\in V^{\\partial}}\\rangle$ labelled by the boundary edge spins $j_{e}$, the boundary bouquet spins $J_{v}$ and the intertwiners recoupling them,\nas depicted on fig.\\ref{fig:bouquet}.\n\nAs for the bulk, we similarly unfold the intertwiner states living on the boundary vertices and 
decompose them into two intertwiners, one ``boundary'' component which recouples all the boundary spins into $J_{v}$ and one ``bulk'' component which recouples the spins on the remaining bulk edges attached to the vertex to $J_{v}$, as illustrated on fig.\\ref{fig:boundaryintertwiner}.\n\\begin{figure}[hbt!]\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\n\\coordinate (O) at (0,0);\n\n\\coordinate (A1) at (145:2.1);\n\\coordinate (A2) at (215:2.1);\n\n\\coordinate (B1) at (35:2.1);\n\\coordinate (B2) at (-35:2.1);\n\n\\draw (O) node[scale=0.7] {$\\bullet$} ++ (0,0.55) node {$I_v$} ++ (0,-1.5) node{$v\\in V^{\\partial}$ };\n\n\\draw[thick] (O) -- ++ (130:1.5) ;\n\\draw[thick] (O) -- ++ (160:1.5) ;\n\\draw[thick] (O) -- ++ (190:1.5) ;\n\\draw[thick] (O) -- ++ (220:1.5) ;\n\n\\draw[thick] (O) -- ++ (50:1.5) ;\n\\draw[thick] (O) -- ++ (20:1.5) ;\n\\draw[thick] (O) -- ++ (-10:1.5) ;\n\\draw[thick] (O) -- ++ (-40:1.5) ;\n\n\\draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt]\n(A2) -- (A1) node [black,midway,xshift=-1cm] {\\footnotesize $j_{e} \\in \\partial\\Gamma$};\n\n\\draw [decorate,decoration={brace,amplitude=10pt,mirror},xshift=-4pt,yshift=0pt]\n(B2) -- (B1) node [black,midway,xshift=1cm] {\\footnotesize $j_{e} \\in \\partial\\overset{\\circ}{\\Gamma}$};\n\n\\coordinate (O1) at (10,0);\n\\coordinate (O2) at (12,0);\n\n\\path (O1) ++(145:2.1) coordinate (C1);\n\\path (O1) ++(215:2.1) coordinate (C2);\n\\path (O2) ++(35:2.1) coordinate (D1);\n\\path (O2) ++(-35:2.1) coordinate (D2);\n\n\\draw[thick,red] (O1) -- node[above] {$J_v$} (O2);\n\n\\draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt]\n(C2) -- (C1) node [black,midway,xshift=-1cm] {\\footnotesize $j_{e} \\in \\partial\\Gamma$};\n\n\\draw [decorate,decoration={brace,amplitude=10pt,mirror},xshift=-4pt,yshift=0pt]\n(D2) -- (D1) node [black,midway,xshift=1cm] {\\footnotesize $j_{e} \\in \\partial\\overset{\\circ}{\\Gamma}$};\n\n\\draw[thick] (O1) -- ++ (130:1.5) 
;\n\\draw[thick] (O1) -- ++ (160:1.5) ;\n\\draw[thick] (O1) -- ++ (190:1.5) ;\n\\draw[thick] (O1) node[RoyalPurple,scale=0.7] {$\\bullet$} -- ++ (220:1.5) ;\n\n\\draw[thick] (O2) -- ++ (50:1.5) ;\n\\draw[thick] (O2) -- ++ (20:1.5) ;\n\\draw[thick] (O2) -- ++ (-10:1.5) ;\n\\draw[thick] (O2) node[RoyalPurple,scale=0.7] {$\\bullet$} -- ++ (-40:1.5) ;\n\n\\draw[RoyalPurple] (O1) ++ (0,-1) node{${}^{\\pp}I_{v}^{(J_v)} $ };\n\\draw[RoyalPurple] (O2) ++ (0,-1) node{${}^{o}I_{v}^{(J_v)} $ };\n\n\\end{tikzpicture}\n\\caption{\nUnfolding intertwiners on boundary vertices: the decomposition into boundary and bulk intertwiner components.}\n\\label{fig:boundaryintertwiner}\n\\end{figure}\nThe decomposed intertwiner basis states are then labeled by the boundary and bulk spins attached to the (boundary) vertex, the bouquet spin $J_{v}$ and the two intertwiner components, boundary and bulk: $|\\{j_{e}\\}_{e\\in\\partial},J_{v},{}^{\\pp}I_{v}^{(J_{v})},{}^{o}I_{v}^{(J_{v})}\\rangle$.\n\nThe reconstruction of the first layer of the bulk from the boundary density matrix simply reflects the fact that the boundary component of the intertwiner $I_{v}$ at a vertex attached to some boundary edges matches the boundary intertwiner recoupling the boundary spins to their bouquet spins, i.e. 
${\\mathcal I}^{J_{v}}_{\\{j_{e}\\}}={}^{\\pp}I_{v}^{(J_{v})}$ for a boundary vertex $v\\in V^{\\partial}$ and for all values of the bouquet spin $J_{v}$.\nLet us see more precisely how this gets encoded into the boundary density matrix.\n\n\\medskip\n\nTo alleviate the notations, let us fix the spins $j_{e}$ on the boundary edges $e\\in\\partial\\Gamma$, although it is straightforward to allow arbitrary superpositions of the boundary spins.\nIn light of the $\\mathrm{SU}(2)$ gauge transformations at the vertices and the resulting $\\mathrm{SU}(2)$ gauge invariance of the boundary density matrix at each boundary vertex, a boundary density matrix necessarily reads:\n\\begin{equation}\n\\rho_{\\partial}\n=\n\\sum_{\\{J_{v}\\}_{v\\in V^{\\partial}}}\n\\bigotimes_{v\\in V^{\\partial}}\\frac{\\mathbb{I}_{J_{v}}}{(2J_{v}+1)} \\otimes \\rho_{\\{J_{v}\\}}\n\\,,\n\\end{equation}\nwhere, for each value of the bouquet spins $\\{J_{v}\\}$, we have the totally mixed state on the spin states and a possibly non-trivial density matrix $\\rho_{\\{J_{v}\\}}$ on the corresponding multiplicity space,\n\\begin{equation}\n \\rho_{\\{J_{v}\\}}\\in\\textrm{End}[{\\mathcal N}_{\\{J_{v}\\}}]\n \\,,\\qquad\n {\\mathcal N}_{\\{J_{v}\\}}\n=\n\\bigotimes_{v\\in V^{\\partial}}\\textrm{Inv}\\Big{[}\n{\\mathcal V}_{J_{v}}\\otimes \\bigotimes_{e\\,| v\\in\\partial e} {\\mathcal V}_{j_{e}}\n\\Big{]}\n\\,,\n\\end{equation}\nsince we are working at fixed boundary spins $j_{e\\in\\partial}$.\n\nFor a spin network basis state, a straightforward calculation shows that the multiplicity matrices $\\rho_{\\{J_{v}\\}}$ are simply given by the boundary components of the intertwiners living at the boundary vertices:\n\\begin{lemma}\nFor a spin network basis state $\\Psi_{\\{j_{e},I_{v}\\}}$ with given spins $j_{e}$ on all bulk and boundary edges, as well as chosen intertwiner states $I_{v}$ at each vertex, we decompose the intertwiner states living on boundary vertices in the bouquet spin basis, separating their 
``boundary'' component from their ``bulk'' component,\n\\begin{equation}\n\\forall v\\in V^{\\partial}\\,,\\quad\nI_{v}=\\sum_{J_{v}}\nC_{v}(J_{v})\\,\n|J_{v},{}^{\\pp}I_{v}^{(J_{v})},{}^{o}I_{v}^{(J_{v})}\\rangle\\,,\n\\end{equation}\nwith normalized intertwiners ${}^{\\pp}I_{v}^{(J_{v})}$ and ${}^{o}I_{v}^{(J_{v})}$, respectively between the boundary spins and the bouquet spin, then between the bouquet spin and the bulk spins attached to the vertex $v$.\nThen the induced boundary density matrix reads:\n\\begin{equation}\n\\rho_{\\partial}[\\Psi_{\\{j_{e},I_{v}\\}}]\n=\n\\sum_{\\{J_{v}\\}_{v\\in V^{\\partial}}}\n\\bigotimes_{v\\in V^{\\partial}}\\frac{\\mathbb{I}_{J_{v}}}{(2J_{v}+1)} \\otimes\n\\rho_{\\{J_{v}\\}}\\,,\n\\qquad\\textrm{where} \\quad\n|{}^{\\pp}I_{v}^{(J_{v})}\\rangle\\in\n\\mathrm{Inv}\\Big{[}\n{\\mathcal V}_{J_{v}}\\otimes \\bigotimes_{e\\,| v\\in\\partial e} {\\mathcal V}_{j_{e}}\n\\Big{]}\n\\,.\n\\end{equation}\nThe multiplicity matrices $\\rho_{\\{J_{v}\\}}$ have rank-one:\n\\begin{equation}\n\\rho_{\\{J_{v}\\}}\n=\n|\\iota_{\\{J_{v}\\}}\\rangle\\langle \\iota_{\\{J_{v}\\}}|\\,,\\quad\n\\iota_{\\{J_{v}\\}}\n=\n\\bigotimes_{v\\in V^{\\partial}}\nC_{v}(J_{v})\n|{}^{\\pp}I_{v}^{(J_{v})}\\rangle\n\\,\\in{\\mathcal N}_{\\{J_{v}\\}}\n\\,.\n\\end{equation}\n\\end{lemma}\nThis rank-one property obviously extends to possible spin network superposition states with correlation between bouquet spins, i.e. 
with coefficients $C(\\{J_{v}\\})$ generalizing the factorized ansatz $\\prod_{v}C_{v}(J_{v})$ of basis states, but is ruined as soon as there are non-trivial superpositions of the bulk components of the boundary intertwiners or more generally non-trivial intertwiner correlations between the bulk vertices.\nIndeed, let us consider a generic spin network state:\n\\begin{equation}\n\\psi=\\sum_{\\{j_{e}\\},\\{I_{v}\\}}\nC^{\\{j_{e}\\}_{e\\in\\partial\\Gamma},\\{j_{e}\\}_{e\\in\\Gamma^{o}}}_{\\{J_{v}\\}_{v\\in V^{\\partial}}}(\\{{}^{\\pp}I_{v}^{(J_{v})},{}^{o}I_{v}^{(J_{v})}\\}_{v\\in V^{\\partial}},\\{I_{w}\\}_{w\\notin V^{\\partial}})\n\\bigotimes_{v\\in V^{\\partial}}\\Big{(}{}^{\\pp}I_{v}^{(J_{v})}\\otimes{}^{o}I_{v}^{(J_{v})}\\Big{)}\n\\,\\otimes\\,\n\\bigotimes_{w\\notin V^{\\partial}} I_{w}\n\\quad\\in{\\mathcal H}_{\\Gamma}\\,,\n\\end{equation}\nwhere we use the notation $v$ for the boundary vertices and $w$ for the remaining vertices of the bulk graph. We have chosen an arbitrary orthonormal basis of intertwiners $I_{w}$ for the bulk vertices, while explicitly using the bouquet spin basis for the boundary vertices. 
A straightforward calculation yields the following induced boundary density matrix:\n\\begin{equation}\n\\rho_{\\partial}[\\psi]=\n\\sum_{\\{J_{v}\\}_{v\\in V^{\\partial}}}\n\\bigotimes_{v\\in V^{\\partial}}\\frac{\\mathbb{I}_{J_{v}}}{(2J_{v}+1)} \\otimes\n\\rho_{\\{J_{v}\\}}\\,,\n\\end{equation}\n\\begin{equation}\n\\rho_{\\{J_{v}\\}}\n\\,=\\,\n\\sum_{{}^{\\pp}I_{v},\\widetilde{{}^{\\pp}I_{v}}}\\sum_{j_{e},{}^{o}I_{v},I_{w}}\nC^{\\{j_{e}\\}}_{\\{J_{v}\\}}(\\{\\widetilde{{}^{\\pp}I_{v}^{(J_{v})}},{}^{o}I_{v}^{(J_{v})}\\},\\{I_{w}\\})\n\\,\\overline{C^{\\{j_{e}\\}}_{\\{J_{v}\\}}(\\{{}^{\\pp}I_{v}^{(J_{v})},{}^{o}I_{v}^{(J_{v})}\\},\\{I_{w}\\})}\\,\n\\bigotimes_{v\\in V^{\\partial}}|\\widetilde{{}^{\\pp}I_{v}^{(J_{v})}}\\rangle\\langle {}^{\\pp}I_{v}^{(J_{v})}|\n\\,.\n\\end{equation}\nThe integration over the bulk holonomies amounts in the end to the partial trace over the bulk intertwiners (i.e. the intertwiner states at the vertices not connected to any boundary edge), over the bulk component of the intertwiners at the boundary vertices, and over the spins of the graph edges. This partial trace naturally leads to mixed states on the multiplicity spaces ${\\mathcal N}_{\\{J_{v}\\}}$ with higher-rank multiplicity matrices $\\rho_{\\{J_{v}\\}}$.\nThis means that non-trivial bulk correlations (between bulk intertwiners and bulk spins) get reflected in the rank of the multiplicity matrices $\\rho_{\\{J_{v}\\}}$. This is a much finer witness of the bulk structure than the overall closure defect.\n\nThis hints towards a natural layer-by-layer reconstruction of the bulk from the boundary density matrix. Starting from $\\rho_{\\partial}$, one can try the various partitions of the boundary, grouping the boundary edges, and check which partition leads to a multiplicity matrix with the lowest rank, and thus with the least correlation between boundary vertices. 
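In practice, this rank witness reduces to a simple linear-algebra statement: gathering the coefficients at fixed bouquet spins into a matrix $M$, with rows labeled by the boundary intertwiner components and columns by all the bulk data being traced over, gives $\rho_{\{J_{v}\}}=MM^{\dagger}$, whose rank equals the rank of $M$. A minimal numerical sketch, with random illustrative coefficients and toy dimensions that are not tied to any specific graph:

```python
import numpy as np

# Illustrative sketch: at fixed bouquet spins {J_v}, arrange the state
# coefficients into a matrix M with rows indexed by the boundary
# intertwiner labels and columns by all the bulk data (bulk spins, bulk
# intertwiners, bulk intertwiner components) that gets traced over.
rng = np.random.default_rng(0)
n_boundary, n_bulk = 4, 7                     # assumed toy dimensions
M = rng.normal(size=(n_boundary, n_bulk)) + 1j * rng.normal(size=(n_boundary, n_bulk))
M /= np.linalg.norm(M)                        # normalized state

# The partial trace over the bulk labels yields the multiplicity matrix.
rho = M @ M.conj().T

# Its rank equals the rank of the coefficient matrix: non-trivial bulk
# correlations (rank of M above one) are exactly what mixes the boundary state.
assert np.linalg.matrix_rank(rho) == np.linalg.matrix_rank(M)
assert np.isclose(np.trace(rho), 1.0)
```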
Once this first layer of the bulk graph is reconstructed, one would then follow the same logic to reconstruct the second layer of the bulk, grouping the bouquets together so that the second-layer intertwiners are as little correlated as possible. We would pursue this onion-like reconstruction until we reach the inner loop of the universal reconstruction procedure described in the previous section. It would be enlightening if one could translate this idea of a bulk with the least correlation between graph vertices into an action principle whose extrema would determine the bulk structure from the quantum boundary data fixed by the chosen boundary density matrix.\n\n\n\n\\section{Examples: Boundary Density Matrix for Candy Graphs}\n\nWe would like to conclude this paper with explicit examples of the bulk-to-boundary procedure, from bulk spin networks to boundary density matrices. We will consider the case of a bulk graph with two boundary vertices. The deeper bulk structure does not matter and it is enough to consider a single loop to which the bulk edges connect. 
We consider two examples: boundary vertices with two boundary edges each, and then with three boundary edges each, as drawn on fig.\\ref{fig:candygraphs}.\n\\begin{figure}[hbt!]\n\\centering\n\\vspace*{3mm}\n\\begin{tikzpicture}[scale=0.7]\n\n\\draw [domain=0:360,dotted,thick] plot ({2.4 * cos(\\x)}, {1.2 * sin(\\x)});\n\n\\coordinate (A1) at (-2,0);\n\\coordinate (A2) at (2,0);\n\n\\draw (0,0) node {bulk};\n\n\\draw[thick] (A1) node[scale=0.7] {$\\bullet$} -- ++ (145:1.5);\n\\draw[thick] (A1) -- ++ (215:1.5);\n\n\\draw[thick] (A2) node[scale=0.7] {$\\bullet$} -- ++ (35:1.5);\n\\draw[thick] (A2) -- ++ (-35:1.5);\n\n\\draw [domain=0:360,dotted,thick] plot ({12+2.4 * cos(\\x)}, {1.2 * sin(\\x)});\n\n\\coordinate (B1) at (10,0);\n\\coordinate (B2) at (14,0);\n\n\\draw (12,0) node {bulk};\n\n\\draw[thick] (B1) node[scale=0.7] {$\\bullet$} -- ++ (145:1.5);\n\\draw[thick] (B1) -- ++ (180:1.5);\n\\draw[thick] (B1) -- ++ (215:1.5);\n\n\\draw[thick] (B2) node[scale=0.7] {$\\bullet$} -- ++ (35:1.5);\n\\draw[thick] (B2) -- ++ (0:1.5);\n\\draw[thick] (B2) -- ++ (-35:1.5);\n\n\\end{tikzpicture}\n\\caption{Candy graphs.}\n\\label{fig:candygraphs}\n\\end{figure}\n\n\\subsection{The four-qubit candy graph}\n\nLet us describe the graph with two vertices linked by a single loop and each with two boundary edges, as drawn on fig.\\ref{fig:4candygraph}. We assume that the spins on the four boundary edges are fixed to $j_{1}=j_{2}=j_{3}=j_{4}=\\f12$ and we also fix the spin around the loop to an arbitrary value $k$. 
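As a quick sanity check on this setting, the dimension of the 4-valent intertwiner space at each vertex, $\textrm{Inv}[{\mathcal V}_{\f12}\otimes {\mathcal V}_{\f12}\otimes {\mathcal V}_{k}\otimes {\mathcal V}_{k}]$, can be computed numerically by counting the zero modes of the total spin Casimir. The helper functions below are our own illustrative constructions, shown for the choice $k=1$:

```python
import numpy as np

def spin_ops(j):
    """Return (Sx, Sy, Sz) for spin j in the |j, m> basis, m = j, ..., -j."""
    d = int(round(2 * j)) + 1
    m = j - np.arange(d)
    sz = np.diag(m).astype(complex)
    sp = np.zeros((d, d), dtype=complex)        # raising operator S_+
    for i in range(1, d):
        sp[i - 1, i] = np.sqrt(j * (j + 1) - m[i] * (m[i] + 1))
    return (sp + sp.conj().T) / 2, (sp - sp.conj().T) / 2j, sz

def dim_invariant(spins):
    """dim Inv[V_j1 x ... x V_jn] = number of zero modes of the total S^2."""
    dims = [int(round(2 * j)) + 1 for j in spins]
    D = int(np.prod(dims))
    total = [np.zeros((D, D), dtype=complex) for _ in range(3)]
    for site, j in enumerate(spins):
        for a, op in enumerate(spin_ops(j)):
            mats = [np.eye(d) for d in dims]
            mats[site] = op
            full = mats[0]
            for mat in mats[1:]:
                full = np.kron(full, mat)
            total[a] += full
    S2 = sum(S @ S for S in total)
    return int(np.sum(np.abs(np.linalg.eigvalsh(S2)) < 1e-9))

# Two spin-1/2 boundary legs and two spin-k loop legs (here k = 1):
# the intertwiner space is 2-dimensional, one state for each bouquet spin.
assert dim_invariant([0.5, 0.5, 1, 1]) == 2
```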
\nThe bulk Hilbert space thus consists in the tensor product of the spaces of intertwiners living at the two vertices $\\alpha$ and $\\beta$:\n\\begin{equation}\n{\\mathcal H}_{bulk}= {\\mathcal H}_{\\alpha}\\otimes {\\mathcal H}_{\\beta}\\,,\n\\qquad\n {\\mathcal H}_{\\alpha}= {\\mathcal H}_{\\beta}\n=\n\\textrm{Inv}[{\\mathcal V}_{{\\f12}}\\otimes {\\mathcal V}_{{\\f12}}\\otimes {\\mathcal V}_{k}\\otimes {\\mathcal V}_{k}]\n\\,.\n\\end{equation}\n\\begin{figure}[hbt!]\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\n\\coordinate (A1) at (0,0);\n\\coordinate (A2) at (3,0);\n\n\\draw[thick,in=115,out=65,rotate=0] (A1) to node[above] {$k$} (A2) node[scale=0.7] {$\\bullet$} (A2) to[out=245,in=-65] node[below] {$k$} (A1) node[scale=0.7] {$\\bullet$};\n\n\\draw[thick] (A1) -- ++ (145:1.5) ++ (145:0.35) node {$\\f12$} ++ (15:0.5) node {$\\circled{1}$};\n\\draw[thick] (A1) -- ++ (215:1.5) ++ (215:0.35) node {$\\f12$} ++ (-15:0.5) node {$\\circled{2}$};\n\n\\draw[thick] (A2) -- ++ (35:1.5) ++ (35:0.35) node {$\\f12$} ++ (165:0.5) node {$\\circled{3}$};\n\\draw[thick] (A2) -- ++ (-35:1.5) ++ (-35:0.35) node {$\\f12$} ++ (-165:0.5) node {$\\circled{4}$};\n\n\\draw[->,>=stealth,very thick] (6,0) -- (7.5,0);\n\n\\coordinate (B1) at (11.5,0);\n\\coordinate (B2) at (13.5,0);\n\n\\draw[thick,in=105,out=75,rotate=0] (B1) to (B2) node[scale=0.7] {$\\bullet$} (B2) to[out=255,in=-75] (B1) node[scale=0.7] {$\\bullet$};\n\n\\draw[thick] (B1) ++ (-1.5,0) node[scale=0.7] {$\\bullet$} -- ++ (145:1.5) ++ (145:0.35);\n\\draw[thick] (B1) ++ (-1.5,0) -- ++ (215:1.5) ++ (215:0.35);\n\n\\draw[thick,red] (B1) node[black,scale=0.7] {$\\bullet$} -- node[red,above] {$J_{\\alpha}$} ++ (-1.5,0) node[black,scale=0.7] {$\\bullet$};\n\\draw[thick,red] (B2) node[black,scale=0.7] {$\\bullet$} -- node[red,above] {$J_{\\beta}$} ++ (1.5,0) node[black,scale=0.7] {$\\bullet$};\n\n\\draw[thick] (B2) ++ (1.5,0) node[scale=0.7] {$\\bullet$} -- ++ (35:1.5) ++ (35:0.35);\n\\draw[thick] (B2) ++ (1.5,0) 
node[scale=0.7] {$\\bullet$} -- ++ (-35:1.5) ;\n\n\\end{tikzpicture}\n\\caption{4-qubit candy graph: spin and intertwiner decomposition.}\n\\label{fig:4candygraph}\n\\end{figure}\n\nFor each vertex, $v=\\alpha$ or $v=\\beta$, we recouple the two boundary spins, leading to the bouquet spin basis. Here the bouquet spin $J_{v}$ can take two values, 0 or 1, and entirely determines the intertwiner state:\n\\begin{equation}\n{\\mathcal H}_{v}=\\textrm{Inv}[{\\mathcal V}_{{\\f12}}\\otimes {\\mathcal V}_{{\\f12}}\\otimes {\\mathcal V}_{k}\\otimes {\\mathcal V}_{k}]=\n{\\mathbb C}|J=0\\rangle\\oplus {\\mathbb C}|J=1\\rangle\\,,\\quad\n\\dim{\\mathcal H}_{v}=2\n\\,,\n\\end{equation}\nso that bulk spin network basis states are labelled by the two bouquet spins\\footnotemark:\n\\begin{equation}\n{\\mathcal H}_{bulk}=\\bigoplus_{J_{\\alpha},J_{\\beta}\\in\\{0,1\\}}{\\mathbb C}|J_{\\alpha},J_{\\beta}\\rangle\n\\,,\\quad\n\\dim{\\mathcal H}_{bulk}=4\n\\,.\n\\end{equation}\n\\footnotetext{\nIt might seem awkward that the dimension of the bulk Hilbert space is here (much) smaller than the dimension of the boundary Hilbert space: it would be weird to talk about a bulk-to-boundary coarse-graining in that situation. This is due to the extremely simple structure of the bulk graph. In fact, the dimension of the bulk Hilbert space increases exponentially with the number of bulk vertices (actually, more precisely, the number of independent cycles in the bulk graph as shown in \\cite{Livine:2007sy}). For instance, merely pinching the loop to create an extra bulk vertex would increase the dimension of the bulk Hilbert space to $\\dim{\\mathcal H}_{bulk}=2\\times (2k+1)\\times2$, which would be larger than $\\dim{\\mathcal H}_{\\partial}=2^{4}$ as soon as the spin $k$ around the loop is at least 2.\n}\nThe boundary Hilbert space consists in the tensor product of the four spin-$\\f12$ spaces, i.e. 
it is made of four qubits,\n\\begin{equation}\n{\\mathcal H}_{\\partial}=\\big{(}{\\mathcal V}_{\\f12}\\big{)}^{\\otimes 4}\n\\,,\\qquad\n\\dim{\\mathcal H}_{\\partial}=2^{4}\\,.\n\\end{equation}\n\nLet us consider an arbitrary spin network state,\n\\begin{equation}\n\\psi=\\sum_{J_{\\alpha},J_{\\beta}}C_{J_{\\alpha},J_{\\beta}}|J_{\\alpha},J_{\\beta}\\rangle\n\\in{\\mathcal H}_{bulk}\n\\,.\n\\end{equation}\nThe corresponding wave-function defines a boundary map, mapping the bulk holonomies along the two links of the inner loop to a boundary state in ${\\mathcal H}_{\\partial}$:\n\\begin{equation}\n|\\psi(g_{1},g_{2})\\rangle=\\sum_{J_{\\alpha},J_{\\beta}}C_{J_{\\alpha},J_{\\beta}}\\sum_{a_{i},b_{i}}\n(-1)^{k-a_{1}}(-1)^{k-a_{2}}D^{k}_{a_{1}b_{1}}(g_{1})D^{k}_{a_{2}b_{2}}(g_{2})\n\\langle (k,-a_{1})(k,-a_{2})|J_{\\alpha}\\rangle\n\\langle (k,b_{1})(k,b_{2})|J_{\\beta}\\rangle\n\\,\\in{\\mathcal H}_{\\partial}\\,.\n\\end{equation}\nThe boundary density matrix is obtained by integrating over the bulk holonomies:\n\\begin{equation}\n\\rho_{\\partial}[\\psi]=\\int \\mathrm{d} g_{1}\\mathrm{d} g_{2}\\,|\\psi(g_{1},g_{2})\\rangle\\langle \\psi(g_{1},g_{2})|\n\\,\\in\\textrm{End}[{\\mathcal H}_{\\partial}]\n\\,.\n\\end{equation}\nThe integration over the $\\mathrm{SU}(2)$ group elements is straightforward to compute and yields:\n\\begin{equation}\n\\rho_{\\partial}[\\psi]=\n\\sum_{J_{\\alpha},J_{\\beta}}|C_{J_{\\alpha},J_{\\beta}}|^{2}\n\\,\\frac{\\mathbb{I}_{J_{\\alpha}}}{2J_{\\alpha}+1}\\otimes \\frac{\\mathbb{I}_{J_{\\beta}}}{2J_{\\beta}+1}\n\\,,\n\\end{equation}\nwhere $\\mathbb{I}_{J}$, for $J=0$ and $J=1$, is the projector on the subspace of total spin $J$ in the tensor product of two qubits $(V_{{\\f12}})^{\\otimes 2}$.\nThis confirms that a pure bulk spin network state leads naturally to a mixed boundary state. 
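This block structure is easy to check numerically. The following numpy sketch is illustrative (the coefficients $C_{J_{\alpha},J_{\beta}}$ are arbitrary assumed values): it builds the induced four-qubit density matrix from the singlet and triplet projectors and verifies unit trace, positivity, and $\mathrm{SU}(2)$ gauge invariance at a boundary vertex.

```python
import numpy as np

# Projectors on total spin J = 0 (singlet) and J = 1 (triplet) of two qubits
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)      # (|01> - |10>)/sqrt(2)
P = {0: np.outer(singlet, singlet)}
P[1] = np.eye(4) - P[0]

# Illustrative (assumed) bulk coefficients C_{J_alpha, J_beta}, normalized
C = np.array([[1.0, 2.0], [0.5, 1.0]])
C /= np.linalg.norm(C)

# Induced boundary density matrix on the four boundary qubits
rho = sum(abs(C[Ja, Jb]) ** 2 * np.kron(P[Ja] / (2 * Ja + 1), P[Jb] / (2 * Jb + 1))
          for Ja in (0, 1) for Jb in (0, 1))

assert np.isclose(np.trace(rho), 1.0)               # unit trace
assert np.all(np.linalg.eigvalsh(rho) > -1e-12)     # positive semi-definite

# SU(2) gauge invariance at vertex alpha: rho commutes with the total spin
# generators acting on the first pair of qubits
sx = np.array([[0, 1], [1, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
for a in (sx, sz):
    S_alpha = np.kron(np.kron(a, np.eye(2)) + np.kron(np.eye(2), a), np.eye(4))
    assert np.allclose(rho @ S_alpha, S_alpha @ rho)
```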
Moreover, due to the simple structure of the boundary in the present example, the induced boundary density matrix carries no entanglement between the pair of boundary edges attached to the vertex $\\alpha$ and the pair attached to the vertex $\\beta$.\n\n\\subsection{The six-qubit candy graph}\n\nWe can upgrade the previous example by enriching the structure of the boundary intertwiner thereby allowing for the possibility of non-trivial entanglement between the boundary edges attached to the two vertices.\nInstead of attaching two boundary edges to each vertex, we now connect three boundary edges to each vertex. We still fix the spins on the boundary edges to $j_{1}=..=j_{6}=\\f12$, as well as on the inner loop to $k$ and $k+\\f12$ (with the half-integer shift to account for the extra half-spin on the boundary) for $k>0$.\n\\begin{figure}[hbt!]\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\n\\coordinate (A1) at (0,0);\n\\coordinate (A2) at (3,0);\n\n\\draw[thick,in=115,out=65,rotate=0] (A1) to node[above] {$k$} (A2) node[scale=0.7] {$\\bullet$} (A2) to[out=245,in=-65] node[below] {$k+\\f12$} (A1) node[scale=0.7] {$\\bullet$};\n\n\\draw[thick] (A1) -- ++ (125:1.5) ++ (150:0.35) node {$\\f12$} ++ (25:0.5) node {$\\circled{1}$};\n\\draw[thick] (A1) --node[very near end,above] {$\\circled{2}$} ++ (180:1.5) ++ (180:0.35) node {$\\f12$};\n\\draw[thick] (A1) -- ++ (235:1.5) ++ (210:0.35) node {$\\f12$} ++ (-25:0.5) node {$\\circled{3}$};\n\n\\draw[thick] (A2) -- ++ (55:1.5) ++ (30:0.35) node {$\\f12$} ++ (155:0.5) node {$\\circled{4}$};\n\\draw[thick] (A2) --node[very near end,above] {$\\circled{5}$} ++ (0:1.5) ++ (0:0.35) node {$\\f12$};\n\\draw[thick] (A2) -- ++ (-55:1.5) ++ (-30:0.35) node {$\\f12$} ++ (-155:0.5) node {$\\circled{6}$};\n\n\\draw[->,>=stealth,very thick] (6,0) -- (7.5,0);\n\n\\coordinate (B1) at (12.5,0);\n\\coordinate (B2) at (14.5,0);\n\n\\draw[thick,in=105,out=75,rotate=0] (B1) to (B2) to[out=255,in=-75] (B1);\n\n\\draw[thick] (B1) ++ (-1.5,0) 
coordinate(B3);\n\\draw[thick] (B1) ++ (-1.5,0) -- ++ (215:1.5);\n\n\\draw[thick,blue] (B3) -- node[midway,above=1.5] {$\\iota_{\\alpha}$} ++ (145:1.5) ;\n\\draw[thick] (B3) ++ (145:1.5) -- ++ (215:1.5);\n\\draw[thick] (B3) ++ (145:1.5) node[blue,scale=0.7] {$\\bullet$} -- ++ (145:1.5);\n\n\\draw[thick,red] (B1) node[black,scale=0.7] {$\\bullet$} -- node[red,below] {$J_{\\alpha}$} ++ (-1.5,0) node[scale=0.7,blue] {$\\bullet$} ;\n\\draw[thick,red] (B2) node[black,scale=0.7] {$\\bullet$} -- node[red,below] {$J_{\\beta}$} ++ (1.5,0) ;\n\n\\draw[thick] (B2) ++ (1.5,0) coordinate(B4);\n\\draw[thick] (B2) ++ (1.5,0) node[scale=0.7,blue] {$\\bullet$} -- ++ (-35:1.5);\n\n\\draw[thick,blue] (B4) -- node[midway,above=1.5] {$\\iota_{\\beta}$} ++ (35:1.5) node[scale=0.7] {$\\bullet$};\n\\draw[thick] (B4) ++ (35:1.5) -- ++ (35:1.5);\n\\draw[thick] (B4) ++ (35:1.5) node[blue,scale=0.7] {$\\bullet$} -- ++ (-35:1.5);\n\n\\end{tikzpicture}\n\\caption{The one-loop 6-qubit candy graph and intertwiner basis.}\n\\label{fig:6candygraph}\n\\end{figure}\n\nThe bulk Hilbert space thus consists in the tensor product of the spaces of intertwiners living at the two vertices $\\alpha$ and $\\beta$:\n\\begin{equation}\n{\\mathcal H}_{bulk}= {\\mathcal H}_{\\alpha}\\otimes {\\mathcal H}_{\\beta}\\,,\\qquad\n{\\mathcal H}_{\\alpha}\n= {\\mathcal H}_{\\beta}\n=\n\\textrm{Inv}\\big{[}({\\mathcal V}_{{\\f12}})^{\\otimes 3}\\otimes {\\mathcal V}_{k}\\otimes {\\mathcal V}_{k+\\f12}\\big{]}\n\\,.\n\\end{equation}\nFor each vertex, $v=\\alpha$ and $v=\\beta$, we unfold the intertwiner space by recoupling the three spins $\\f12$ together into the bouquet spin $J_{v}$, as drawn on fig.\\ref{fig:6candygraph}.\nSince the 3-valent intertwiner between the spins $k$, $k+\\f12$ and the bouquet spin $J_{v}$ is unique (and given by the corresponding Clebsch-Gordan coefficients), we can put aside this bulk component of the intertwiner and focus on the boundary component of the intertwiner.\nThen, since the tensor product of three 
spins $\\f12$ decomposes as\n\\begin{equation}\n({\\mathcal V}_{{\\f12}})^{\\otimes 3}={\\mathcal V}_{\\f32}\\oplus 2\\, {\\mathcal V}_{\\f12}\\,,\n\\end{equation}\nthe intertwiner space is three-dimensional:\n\\begin{equation}\n{\\mathcal H}_{v}\n=\n{\\mathbb C}|J_{v}=\\f32\\rangle \\oplus {\\mathbb C}|J_{v}=\\f12,\\iota_{v}=0\\rangle\\oplus {\\mathbb C}|J_{v}=\\f12,\\iota_{v}=1\\rangle\n\\,.\n\\end{equation}\nThe extra index $\\iota\\in\\{0,1\\}$ when the three qubits recouple to the bouquet spin $J=\\f12$ labels the degeneracy in the decomposition of the tensor product. As depicted on fig.\\ref{fig:6candygraph}, we can simply take it as the spin recoupling for the first two qubits (boundary edges 1 and 2 for the vertex $\\alpha$ and boundary edges 4 and 5 for the vertex $\\beta$). In that case, we can extend the convention for the intertwiner basis state $|J,\\iota\\rangle$ even to $J=\\f32$, in which case the extra label is allowed to take a single value $\\iota=1$.\n\nBulk spin network basis states are then defined by the choice of the two intertwiner basis states at $v=\\alpha$ and $v=\\beta$:\n\\begin{equation}\n{\\mathcal H}_{bulk}\n=\n\\bigoplus_{\\{J_{v},\\iota_{v}\\}}{\\mathbb C}|J_{v},\\iota_{v}\\rangle\n\\,,\\quad\n\\dim {\\mathcal H}_{bulk}=3\\times 3=9\\,.\n\\end{equation}\nThe boundary Hilbert space simply consists in 6 qubits, from which we also use the bouquet spin basis:\n\\begin{equation}\n{\\mathcal H}_{\\partial}\n=\n\\big{(}{\\mathcal V}_{\\f12}\\big{)}^{\\otimes 6}\n=\n{\\mathcal H}^{\\partial}_{\\alpha}\\otimes {\\mathcal H}^{\\partial}_{\\beta}\n\\,,\n\\quad\n{\\mathcal H}^{\\partial}_{\\alpha}= {\\mathcal H}^{\\partial}_{\\beta}=\n\\big{(}{\\mathcal V}_{\\f12}\\big{)}^{\\otimes 3}\n=\n\\bigoplus_{J=\\f12,\\f32} {\\mathcal V}_{J}\\otimes{\\mathcal N}_{J}\\,,\n\\quad\\textrm{with}\\quad\n{\\mathcal N}_{J}=\n\\textrm{Inv}\\Big{[}\n{\\mathcal V}_{J}\\otimes\\big{(}{\\mathcal V}_{\\f12}\\big{)}^{\\otimes 
3}\n\\Big{]}\\,,\n\\end{equation}\n\\begin{equation}\n\\textrm{where}\\quad\n\\dim{\\mathcal N}_{\\f12}=2\n\\,,\\quad\n\\dim{\\mathcal N}_{\\f32}=1\n\\,,\\quad\n\\dim{\\mathcal H}^{\\partial}_{\\alpha}=\\dim{\\mathcal H}^{\\partial}_{\\beta}=\n2\\times2+4\\times 1 =2^{3}\\,.\n\\end{equation}\nLet us consider a general spin network state (with fixed spins as we have assumed so far) on this candy graph with six boundary edges:\n\\begin{equation}\n\\psi=\\sum_{\\{J_{v},\\iota_{v}\\}_{v=\\alpha,\\beta}}\nC^{J_{\\alpha},J_{\\beta}}_{\\iota_{\\alpha},\\iota_{\\beta}}\\,|(J_{\\alpha},\\iota_{\\alpha})\\,(J_{\\beta},\\iota_{\\beta})\\rangle\\,.\n\\end{equation}\nThe induced boundary density matrix, obtained after integration over the bulk holonomies, is:\n\\begin{equation}\n\\rho_{\\partial}[\\psi]\n=\n\\sum_{J_{\\alpha},J_{\\beta}}\n\\,\\frac{\\mathbb{I}_{J_{\\alpha}}}{2J_{\\alpha}+1}\\otimes \\frac{\\mathbb{I}_{J_{\\beta}}}{2J_{\\beta}+1}\n\\otimes\n\\rho_{J_{\\alpha},J_{\\beta}}\n\\,,\n\\end{equation}\nwhere the multiplicity matrix encodes the data about the intertwiners:\n\\begin{equation} \n\\rho_{J_{\\alpha},J_{\\beta}}\n\\,=\\,\n\\sum_{\\{\\iota_{v},\\tilde{\\iota}_{v}\\}}\nC^{J_{\\alpha},J_{\\beta}}_{\\tilde{\\iota}_{\\alpha},\\tilde{\\iota}_{\\beta}}\n\\overline{C^{J_{\\alpha},J_{\\beta}}_{\\iota_{\\alpha},\\iota_{\\beta}}}\n\\Big{|}(J_{\\alpha},\\tilde{\\iota}_{\\alpha})(J_{\\beta},\\tilde{\\iota}_{\\beta})\\Big{\\rangle}\\Big{\\langle}(J_{\\alpha},\\iota_{\\alpha})(J_{\\beta},\\iota_{\\beta})\\Big{|}\n\\quad\\in\\,\\textrm{End}\\big{[}{\\mathcal N}_{J_{\\alpha}}\\otimes {\\mathcal N}_{J_{\\beta}}\\big{]}\n\\,.\n\\end{equation}\nThis is always a rank-one matrix and does not lead to entanglement between the boundary edges (1,2,3) and (4,5,6).\n\n\\medskip\n\nIf we want to obtain non-trivial multiplicity matrices, i.e. of higher rank, one has to allow for non-trivial bulk components of the intertwiners. 
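The three-qubit recoupling used in this example, $({\mathcal V}_{\f12})^{\otimes 3}={\mathcal V}_{\f32}\oplus 2\,{\mathcal V}_{\f12}$, can be verified directly on the spectrum of the total spin Casimir; a short self-contained numerical check:

```python
import numpy as np

# Spin-1/2 generators
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

def embed(op, site):
    """Single-site operator acting on qubit `site` of three qubits."""
    mats = [I2, I2, I2]
    mats[site] = op
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

# Total spin Casimir S^2 on three qubits
S = [sum(embed(a, i) for i in range(3)) for a in (sx, sy, sz)]
S2 = sum(Si @ Si for Si in S)

# Spectrum J(J+1): one spin-3/2 multiplet (eigenvalue 15/4, 4 states)
# and two spin-1/2 multiplets (eigenvalue 3/4, 2 x 2 states).
evals = np.round(np.linalg.eigvalsh(S2), 6).tolist()
assert evals.count(3.75) == 4
assert evals.count(0.75) == 4
```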
To allow for such non-trivial bulk components, we consider a (slightly) more complicated bulk graph with three bulk edges connecting the two vertices. We can assume that the spins on all the edges, both on the boundary and in the bulk, are fixed to, say, $j_{1}=\\dots=j_{9}=\\f12$. If we look at the vertex $v$, which can be $\\alpha$ or $\\beta$, the 6-valent intertwiner can be unfolded into the bouquet spin basis. As depicted on fig.\\ref{fig:6candygraph3link}, an intertwiner basis state is now labeled by the bouquet spin $J_{v}$, a multiplicity index $\\iota^{\\partial}_{v}\\in\\{0,1\\}$ for the boundary component of the intertwiner (which can be taken as the recoupled spin of the edges 1 and 2) and a multiplicity index $\\iota^{o}_{v}\\in\\{0,1\\}$ for the bulk component of the intertwiner (which can be taken as the recoupled spin of the edges 4 and 5).\n\\begin{figure}[hbt!]\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\n\\coordinate (A1) at (0,0);\n\\coordinate (A2) at (3,0);\n\n\\draw[thick,in=105,out=75,rotate=0] (A1) to node[above] {$\\f12$} (A2) node[scale=0.7] {$\\bullet$} (A2) to[out=255,in=-75] node[above] {$\\f12$} (A1) node[scale=0.7] {$\\bullet$};\n\\draw[thick] (A1) --node[above] {$\\f12$} (A2);\n\n\\draw[thick] (A1) -- ++ (125:1.5) ++ (120:0.35) node {$\\f12$};\n\\draw[thick] (A1) -- ++ (180:1.5) ++ (180:0.35) node {$\\f12$};\n\\draw[thick] (A1) -- ++ (235:1.5) ++ (235:0.35) node {$\\f12$};\n\n\\draw[thick] (A2) -- ++ (55:1.5) ++ (55:0.35) node {$\\f12$};\n\\draw[thick] (A2) --++ (0:1.5) ++ (0:0.35) node {$\\f12$};\n\\draw[thick] (A2) -- ++ (-55:1.5) ++ (-55:0.35) node {$\\f12$};\n\n\\draw[->,>=stealth,very thick] (6,0) -- (7.5,0);\n\n\\coordinate (B1) at (12.5,0);\n\\coordinate (B2) at (15,0);\n\n\\draw[thick,blue] (B1) -- node[midway,left]{$\\iota_{\\alpha}^{o}$} ++(70:1) coordinate (C);\n\\draw[thick,blue] (B2) -- node[midway,right]{$\\iota_{\\beta}^{o}$} ++(110:1) coordinate (D);\n\n\\draw[thick] (B2) to[out=255,in=-75] node[below=7] {bulk} (B1);\n\n\\draw[thick] (C) 
node[blue,scale=0.7] {$\\bullet$} to[out=255,in=-75] (D) node[blue,scale=0.7] {$\\bullet$} to[out=105,in=75] (C);\n\n\\draw[thick] (B1) ++ (-1.5,0) coordinate(B3);\n\\draw[thick] (B1) ++ (-1.5,0) -- ++ (215:1.5)++(-1,-0.75) node {boundary};\n\n\\draw[thick,blue] (B3) -- node[midway,above=1.5] {$\\iota_{\\alpha}^{\\partial}$} ++ (145:1.5) node[scale=0.7] {$\\bullet$};\n\\draw[thick] (B3) ++ (145:1.5) node[blue,scale=0.7] {$\\bullet$} -- ++ (215:1.5);\n\\draw[thick] (B3) ++ (145:1.5) node[blue,scale=0.7] {$\\bullet$} -- ++ (145:1.5);\n\n\\draw[thick,red] (B1) node[blue,scale=0.7] {$\\bullet$} -- node[red,below] {$J_{\\alpha}$} ++ (-1.5,0) node[scale=0.7,blue] {$\\bullet$} ;\n\\draw[thick,red] (B2) node[blue,scale=0.7] {$\\bullet$} -- node[red,below] {$J_{\\beta}$} ++ (1.5,0);\n\n\\draw[thick] (B2) ++ (1.5,0) coordinate(B4);\n\\draw[thick] (B2) ++ (1.5,0) node[blue,scale=0.7] {$\\bullet$} -- ++ (-35:1.5) ++(1,-0.75) node {boundary};\n\n\\draw[thick,blue] (B4) -- node[midway,above=1.5] {$\\iota_{\\beta}^{\\partial}$} ++ (35:1.5);\n\\draw[thick] (B4) ++ (35:1.5) -- ++ (35:1.5);\n\\draw[thick] (B4) ++ (35:1.5) node[blue,scale=0.7] {$\\bullet$} -- ++ (-35:1.5);\n\n\\end{tikzpicture}\n\\caption{The triple-link 6-qubit candy graph and intertwiner basis.}\n\\label{fig:6candygraph3link}\n\\end{figure}\n\nThe main consequence of adding bulk structure is to increase the dimension of the bulk Hilbert space:\n\\begin{equation}\n{\\mathcal H}_{bulk}={\\mathcal H}_{\\alpha}\\otimes{\\mathcal H}_{\\beta}\\,,\n\\quad\n{\\mathcal H}_{v}\n=\n\\textrm{Inv}\\big{[}({\\mathcal V}_{\\f12})^{\\otimes 6}\\big{]}\n=\n\\bigoplus_{J_{v},\\iota_{v}^{\\partial},\\iota_{v}^{o}}{\\mathbb C}|J_{v},\\iota_{v}^{\\partial},\\iota_{v}^{o}\\rangle\n\\,,\\quad\n\\dim{\\mathcal H}_{bulk}=(1\\times 1+2\\times 2)^{2}=25\n\\,.\n\\end{equation}\nOn the other hand, the boundary Hilbert space is left unchanged. 
This much higher dimensionality of the bulk Hilbert space allows for finer structure of the bulk state and induced entanglement on the boundary. Indeed, a generic spin network state decomposes as:\n\\begin{equation}\n\\psi=\n\\sum_{\\{J_{v},\\iota_{v}\\}_{v=\\alpha,\\beta}}\nC^{J_{\\alpha},J_{\\beta}}_{\\iota^{\\partial}_{\\alpha},\\iota_{\\alpha}^{o},\\iota^{\\partial}_{\\beta},\\iota_{\\beta}^{o}}\\,\n\\Big{|}(J_{\\alpha},\\iota^{\\partial}_{\\alpha},\\iota_{\\alpha}^{o})\\,(J_{\\beta},\\iota^{\\partial}_{\\beta},\\iota_{\\beta}^{o})\\Big{\\rangle}\n\\,.\n\\end{equation}\nCompared to the previous case of the one-loop candy graph, the bulk part of the intertwiners $\\iota^{o}_{v}$ is not seen by the boundary state. This bulk data ``hidden'' from the boundary creates entanglement between the two bouquets of boundary edges. Indeed the induced boundary density matrix can be computed as:\n\\begin{equation}\n\\rho_{\\partial}[\\psi]\n=\n\\sum_{J_{\\alpha},J_{\\beta}}\n\\,\\frac{\\mathbb{I}_{J_{\\alpha}}}{2J_{\\alpha}+1}\\otimes \\frac{\\mathbb{I}_{J_{\\beta}}}{2J_{\\beta}+1}\n\\otimes\n\\rho_{J_{\\alpha},J_{\\beta}}\n\\,,\n\\end{equation}\nwhere the multiplicity matrix encodes the data about the intertwiners:\n\\begin{equation} \n\\rho_{J_{\\alpha},J_{\\beta}}\n\\,=\\,\n\\sum_{\\{\\iota_{v}^{\\partial},\\tilde{\\iota}_{v}^{\\partial}\\}}\n\\left(\n\\sum_{\\{\\iota_{v}^{o}\\}}\nC^{J_{\\alpha},J_{\\beta}}_{\\tilde{\\iota}^{\\partial}_{\\alpha},\\iota_{\\alpha}^{o},\\tilde{\\iota}^{\\partial}_{\\beta},\\iota_{\\beta}^{o}}\n\\overline{C^{J_{\\alpha},J_{\\beta}}_{\\iota^{\\partial}_{\\alpha},\\iota_{\\alpha}^{o},\\iota^{\\partial}_{\\beta},\\iota_{\\beta}^{o}}}\n\\right)\\,\n\\Big{|}(J_{\\alpha},\\tilde{\\iota}^{\\partial}_{\\alpha})(J_{\\beta},\\tilde{\\iota}^{\\partial}_{\\beta})\\Big{\\rangle}\\Big{\\langle}(J_{\\alpha},\\iota^{\\partial}_{\\alpha})(J_{\\beta},\\iota^{\\partial}_{\\beta})\\Big{|}\n\\quad\\in\\,\\textrm{End}\\big{[}{\\mathcal N}_{J_{\\alpha}}\\otimes {\\mathcal 
N}_{J_{\\beta}}\\big{]}\n\\,.\n\\end{equation}\nThe partial trace over the bulk components of the intertwiners leads to a higher rank of the multiplicity matrix, reflecting the induced entanglement between the boundary edges attached to $\\alpha$ and the ones attached to the vertex $\\beta$.\nA simple example is obtained by choosing a state with support exclusively on the bouquet spins $J_{\\alpha}=J_{\\beta}=\\f12$, forming a Bell-like state:\n\\begin{equation}\n\\psi_{Bell}=\n\\f1{\\sqrt{2}}\\,\\big{(}|(\\f12,0,0)(\\f12,1,1)\\rangle-|(\\f12,1,1)(\\f12,0,0)\\rangle\\big{)}\\,,\n\\end{equation}\nleading to the induced density matrix:\n\\begin{equation}\n\\rho_{\\partial}[\\psi_{Bell}]\n=\n\\frac{\\mathbb{I}_{\\f12}}{2}\\otimes \\frac{\\mathbb{I}_{\\f12}}{2}\\otimes \\rho_{{\\mathcal N}}\\,,\\quad\n\\rho_{{\\mathcal N}}\n=\n\\f12\\,\\Big{(}|(\\f12,0)(\\f12,1)\\rangle\\langle(\\f12,0)(\\f12,1)|+|(\\f12,1)(\\f12,0)\\rangle\\langle(\\f12,1)(\\f12,0)|\\Big{)}\\,,\n\\end{equation}\nwhere the multiplicity matrix now has rank two.\nThis perfectly illustrates how tracing out the bulk degrees of freedom leads to a mixed state on the boundary, or in more physical terms, how correlations between bulk intertwiners lead to entanglement between boundary edges.\n\n\n\\section*{Conclusion \\& Outlook}\n\n\nIn the context of the quest for understanding the holographic nature of the gravitational interaction and of quantum gravity, it is essential to investigate the bulk-boundary relation and interplay. This goes both ways: on the one hand, we need to understand the boundary modes and dynamics induced by the bulk degrees of freedom, and on the other hand, we need to understand how boundary conditions propagate within and throughout the bulk at both classical and quantum levels. 
Such holographic mapping between bulk and boundary theories needs to be achieved at multiple levels: the symmetry groups, the dynamics, the quantum states, the algebra of observables.\n\nHere, in order to start analyzing the potential holographic behavior of loop quantum gravity, we introduced explicit 2d boundaries to the 3d space, i.e. space-time corners. This 2d boundary admits a Hilbert space of boundary states, understood as quantum boundary conditions. Then loop quantum gravity's spin network states for the bulk geometry become what we call {\\it boundary maps}, that is wave-functions still depending on bulk fields or degrees of freedom but valued in the boundary Hilbert space (instead of ${\\mathbb C}$ for standard quantum mechanics). In some sense, bulk wave-functions can be interpreted as quantum circuits acting on the boundary states.\nFor spin network states, the bulk degrees of freedom are the $\\mathrm{SU}(2)$ holonomies of the Ashtekar-Barbero connection along the graph links, while the boundary states are the spin states living on the spin network open edges puncturing the 2d boundary surface.\nAs expected, the squared norm of the bulk wave-function using the scalar product of the boundary Hilbert space gives the probability distribution for the bulk holonomies.\nThe new feature is that one can trace over the bulk by integrating over the bulk holonomies and obtain a density matrix for the boundary states. This {\\it boundary density matrix} encodes all that we can know about the quantum state of geometry from probing the boundary if we do not have access to any bulk observable. For a pure bulk state, we typically obtain a mixed boundary state. 
This realizes a bulk-to-boundary coarse-graining.\n\nOur main result is the proof that any gauge-covariant boundary density matrix for an arbitrary number of boundary edges can be induced from a pure spin network state on a simple bulk graph consisting of a single vertex connecting all the boundary edges to a single bulk loop. In quantum information jargon, this universal reconstruction process actually purifies arbitrary mixed boundary states into pure bulk states.\n\n\\medskip\n\nWe further analyzed the algebraic structure of induced boundary density matrices, more precisely how intertwiner correlations, i.e. entanglement between bulk volume excitations, get reflected by the boundary density matrix.\nThis should be considered as part of the larger program of bulk tomography through boundary observables in loop quantum gravity. Hopefully, the basic tools introduced here should allow a more systematic study of how far one can see into the bulk and how much an observer on the boundary can know about the bulk spin network graph.\nFor instance, we would like to study in more detail the relation between boundary edge entanglement and bulk intertwiner entanglement, and to quantify their difference in a precise and explicit manner.\n\nThese questions are at the kinematical level. Our hope is more ambitious and we would like to tackle the spin network dynamics and reformulate it in light of the bulk-boundary relation. This means projecting the bulk dynamics onto the boundary and writing it in terms of boundary evolution operators. 
Loop quantum gravity's dynamics would then read in terms of completely positive maps\\footnotemark{} acting on the boundary density matrices.\n\\footnotetext{\nMathematically, any evolution or measurement can be written as a completely positive map (CP map) \\cite{Choi:1975,Wilde:2011npi}, which admits an operator-sum representation in terms of Kraus operators $\\{ E_k, \\, k=1,2,\\cdots \\}$:\n\\begin{equation} \\label{eq:KrausOperators}\n{\\mathcal E}(\\rho)\n=\n\\sum_{k} E_k \\, \\rho \\, E_k^{\\dagger}\n\\,, \\qquad\n\\sum_{k} E_k^{\\dagger} \\, E_k \\leq \\mathbb{I}\n\\,.\n\\nonumber\n\\end{equation}\nThe case where $\\sum_{k} E_k^{\\dagger} \\, E_k = \\mathbb{I}$ defines completely positive trace-preserving maps (CPTP maps), which preserve the trace of quantum states. We wish to describe boundary evolution and measurements in loop quantum gravity in terms of CPTP maps.}\nThrough this, the goal is to investigate in depth the implementation of the holographic principle in loop quantum gravity, and in parallel move forward in the study of the coarse-graining of the theory and its renormalization flow from the Planck scale to ours. For these purposes, a general formulation in terms of boundary density matrices seems better suited to the analysis of the dynamics, measurements and coarse-graining than pure spin network states.\n\n\n\\section*{Acknowledgement}\nQ.C. is financially supported by the China Scholarship Council.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}