diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzohzq" "b/data_all_eng_slimpj/shuffled/split2/finalzzohzq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzohzq" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nCervical spinal cord (SC) segmentation in magnetic resonance (MR) images is a viable means for quantitatively assessing the neurodegenerative effects of diseases in the central nervous system.\nWhile conventional MR sequences only allowed differentiation of the boundary between SC and cerebrospinal fluid (CSF), more recent sequences can be used to distinguish the SC's inner gray matter (GM) and white matter (WM) compartments. \nThe latter task, however, remains challenging as state-of-the-art MR sequences only achieve an in-slice resolution of around 0.5 mm while maintaining a good signal-to-noise ratio (SNR) and an acceptable acquisition time.\nThis resolution is barely enough to visualize the SC's butterfly-shaped GM structure.\n\nThe 2016 spinal cord gray matter segmentation (SCGM) challenge \\cite{prados_spinal_2017} reported mean Dice similarity coefficients (DSC) of 0.8 in comparison to a manual consensus ground truth for the best SC GM segmentation approaches at that time.\nPorisky et al. \\cite{porisky_grey_2017} experimented with 3D convolutional encoder networks but did not improve the challenge's results.\nPerone et al.'s U-Net approach \\cite{perone_spinal_2018} later managed to push the DSC value to 0.85.\nMore recently, Datta et al.\\ \\cite{datta_gray_2017} reported mean DSC of 0.88 on images of various MR sequences with a morphological geodesic active contour model.\n\nStill, this means that a high number of subjects would be necessary to get reliable findings from clinical trials.\nHence, despite recent developments, there is a need for improvement of the reproducibility of SC GM and WM measurements.\nAn accurate and precise segmentation of the SC's inner structures in MR images under the mentioned limiting trade-off between resolution, SNR, and time therefore remains a challenge, especially when focusing on the GM.\n\nIn this work, we present a new robust and fully automatic pipeline for the acquisition and segmentation of GM and WM in MR images of the SC.\nOn the segmentation side, we propose the use of multi-dimensional gated recurrent units (MD-GRU), which already proved fit for a number of medical segmentation tasks \\cite{andermatt2016multi},\nto gain accurate and precise SC GM and WM segmentations.\nTo this end, we adapt MD-GRU's original cross-entropy loss by integrating a generalized Dice loss (GDL) \\cite{sudre_generalised_2017} and show improved segmentation performance compared to the original.\nUsing the proposed setup, we manage to set a new state of the art on the SCGM challenge data with a mean DSC of 0.9.\nOn the imaging side, we propose to use the AMIRA MR sequence \\cite{weigel_spinal_2018} for gaining improved GM-WM and WM-CSF contrast in axial cross-sectional slices of the SC.\nUsing the proposed MD-GRU approach in combination with this new imaging sequence, we manage to gain an even higher accuracy of DSC 0.91 wrt. 
a manual ground truth,\nas we demonstrate in experiments on scan-rescan images of healthy subjects, for both SC GM and WM.\n\nThe remainder of this paper is structured as follows: in \\secref{seq:method}, we present our segmentation method;\nin \\secref{seq:data}, we briefly describe the AMIRA MR sequence and the two datasets (SCGM challenge, AMIRA images) that we use for the experiments of \\secref{seq:ExpResults}, before we conclude in \\secref{seq:conclusion}.\n\n\\section{Method}\n\\label{seq:method}\nThe Multi-Dimensional Gated Recurrent Unit (MD-GRU) \\cite{andermatt2016multi} is a generalization of a bi-directional recurrent neural network (RNN) that is able to process images.\nIt achieves this task by treating each direction along each of the spatial dimensions independently as a temporal direction. \nThe MD-GRU processes the image using two convolutional GRUs (C-GRUs) for each image dimension, one in forward and one in backward direction, and combines the results of all individual C-GRUs. \nThe gated recurrent unit (GRU), compared to the more popular and established long short-term memory (LSTM), uses a simpler gating structure and combines its state and output. \nThe GRU has been shown to produce comparable results while consuming less memory than its LSTM counterpart when applied to image segmentation and hence allows for larger images to be processed \\cite{andermatt2016multi}.\n\nWe directly feed the 8-channel AMIRA images (cf. \\secref{seq:AMIRAimages}) to the 2D version of MD-GRU to train the AMIRA segmentation models, \nbut only use the single-channel images of the SCGM dataset (cf. \\secref{seq:GMchallengeImages}) for the challenge models.\nTo address the high class imbalance between background, WM, and GM, we added a GM Dice loss (DL) similar to \\cite{perone_spinal_2018}, \nbut also included DLs for all the other label classes using the generalized Dice loss (GDL) formulation of Sudre et al.\\ \\cite{sudre_generalised_2017}.\n\n\\subsection{Dice Loss}\nA straightforward approximation of a DL for a multi-labelling problem is\n\\begin{equation}\n \\label{DiceLoss}\n \\ensuremath{L_\\text{D}} = - \\, \\frac{1}{\\sum_{l\\in\\L}\\omega_l}\\,\\sum\\limits_{l\\in\\L} \\omega_l\\, \\frac{2\\,\\sum_{x\\in X} p_{lx}\\,r_{lx}}{\\sum_{x\\in X}\\left(p_{lx}+r_{lx}\\right)},\n\\end{equation}\nwith the image domain $X$, labels $\\L$, predictions $p$, rater segmentations $r$, and class weights $\\omega$.\nSudre et al.\\ \\cite{sudre_generalised_2017} described a Generalized Dice Loss (GDL) $\\ensuremath{L_\\text{GD}}$, where they divide the weighted sum of the intersections of all labels by the weighted sum of all predictions and targets of all labels, \ninstead of just linearly combining the individual Dice coefficients:\n\\begin{equation}\n \\label{GeneralizedDiceLoss}\n \\ensuremath{L_\\text{GD}} = - \\, \\frac{2\\,\\sum_{l\\in\\L} \\omega_l\\sum_{x\\in X} p_{lx}\\,r_{lx}}{\\sum_{l\\in\\L} \\omega_l\\sum_{x\\in X}\\left(p_{lx}+r_{lx}\\right)}.\n\\end{equation}\nAs stated in \\cite{crum_generalized_2006}, compared to the DL \\eqref{DiceLoss}, the GDL \\eqref{GeneralizedDiceLoss} allows all labels to contribute equally to the overall overlap (denominator in \\eqref{GeneralizedDiceLoss}).\n\nThe (squared) inverse volume weighting\n\\begin{equation}\n \\label{labelWeightsCalc}\n \\omega_l=\\frac{1}{\\left(\\sum_{x\\in X} r_{lx}\\right)^2},\n\\end{equation}\nas proposed in \\cite{crum_generalized_2006}, deals with the class imbalance problem: \nlarge regions only contribute very little to $\\ensuremath{L_\\text{D}}$ or 
$\\ensuremath{L_\\text{GD}}$, whereas small regions are weighted more strongly and thus become more important in the optimization process.\n\nTo avoid division by zero in $\\omega_l$ for image samples in which label $l$ is absent, we regularize the denominator of \\eqref{labelWeightsCalc} and obtain the weighting we used:\n\\begin{equation}\n \\label{labelWeightsCalcAdded}\n \\omega_l=\\frac{1}{1 + \\left(\\sum_{x\\in X} r_{lx}\\right)^2}.\n\\end{equation}\nCompared to \\eqref{labelWeightsCalc}, the weighting \\eqref{labelWeightsCalcAdded} is only slightly smaller as long as the object of interest covers enough pixels.\nNote that during the training of a network, not all labels necessarily occur in a random subsample at a random location.\n\nFinally, we combine the DL or GDL with the cross entropy loss $\\ensuremath{L_\\text{C}}$ (CEL) using a factor $\\lambda\\in[0,1]$:\n$$L = \\lambda \\, \\ensuremath{L_\\text{D or GD}} + (1-\\lambda) \\, \\ensuremath{L_\\text{C}}.$$\n\n\n\n\\section{Data}\n\\label{seq:data}\nIn the following subsections, we describe the images used for the experiments: our own scan-rescan dataset of healthy subjects, which we call the AMIRA dataset,\nand the SCGM challenge dataset\\footnote{\\url{http:\/\/cmictig.cs.ucl.ac.uk\/niftyweb\/program.php?p=CHALLENGE} (last accessed: \\today) } \\cite{prados_spinal_2017}, which we refer to as the SCGM dataset.\n\n\\subsection{AMIRA Dataset}\n\\label{seq:AMIRAimages}\n\n\\begin{figure}[t]\n\n \\begin{tabular}{c}\n \\resizebox{\\textwidth}{!}{\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR1.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR2.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR3.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR4.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR5.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR6.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR7.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR8.png}\n }\\\\\n \\resizebox{\\textwidth}{!}{\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR1_histeq.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR2_histeq.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR3_histeq.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR4_histeq.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR5_histeq.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR6_histeq.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR7_histeq.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/slice09\/iR8_histeq.png}\n }\\\\\n \\resizebox{\\textwidth}{!}{\n \\includegraphics[height=0.33\\textwidth, trim = 10 10 10 10, clip]{figs\/slice09\/STDmean_histeq_full.png}\\,\n \\includegraphics[height=0.33\\textwidth]{figs\/slice09\/CSFmean_histeq.png}\\,\n \\includegraphics[height=0.33\\textwidth]{figs\/slice09\/GMmean_histeq.png}\n }\n\n \\end{tabular}\n \n \\caption{\n AMIRA sequence of an exemplary slice on C4 level. 
All images are 10-fold upsampled.\n \\emph{Top and middle rows:} Inversion images with increasing inversion times from left to right.\n Original cropped images (\\emph{top}), and histogram-equalized versions (\\emph{middle}).\n \\emph{Bottom row:} Histogram-equalized sum of the first 5 inversion images in full view (\\emph{left}), weighted average with optimal CSF-WM contrast (\\emph{middle}), and with optimal GM-WM contrast (\\emph{right}).\n }\n \\label{fig:AMIRA}\n\\end{figure}\n\nThe first dataset used in this paper consists of 24 healthy subjects (14 female, 10 male, age $40\\pm11$ years). \nEach subject was scanned 3 times, remaining in the scanner between the first and second scan, and leaving the scanner and being repositioned between the second and third scan.\nEach scan contains 12 axial cross-sectional slices of the neck acquired with the AMIRA sequence \\cite{weigel_spinal_2018} that were manually aligned at acquisition time perpendicular to the SC's centerline, with an\naverage slice distance of $\\SI{4}{\\milli\\meter}$ starting from the vertebra C3 level in caudal direction.\n\nBecause of severe imaging artifacts, some slices had to be discarded: the last three caudal slices for one scan, the last two slices for two scans, and the last slice for another two scans;\nin total, 9 out of the 864 slices were discarded.\n\nThe AMIRA sequence consists of 8 inversion images of the same anatomical slice captured at different inversion times after 180-degree inversion pulses, \nwith an in-slice resolution of $\\SI{0.67}{\\milli\\meter}\\times\\SI{0.67}{\\milli\\meter}$.\nExemplary inversion images and different averages of an exemplary slice on the vertebra C4 level are shown in \\figref{fig:AMIRA}.\nFor human raters to manually segment the AMIRA images, different single-channel projections of the 8-channel images are necessary.\nWeighted averages of the inversion images with e.g. 
optimal CSF-WM or GM-WM contrast, see \\figref{fig:AMIRA}, were calculated with an approach that maximizes the separation of between-class intensity means and minimizes within-class intensity variances \\cite{horvath_average_2018}.\n\nIn order to reduce numerical errors in the calculated measures, we upsampled all slices 10-fold with Lanczos interpolation.\nSince all images were manually centered on the SC, we trimmed one third of the image size on each side and thus cropped out the inner ninth to a size of $650\\times 650$ pixels for faster processing.\n\nOne experienced rater manually segmented WM and GM in all 855 images and re-segmented 60 slices randomly chosen across all subjects, scans and slices, without knowledge of their origin, to enable an intra-rater comparison.\n\n\n\n\n\\subsection{SCGM Dataset}\n\\label{seq:GMchallengeImages}\nThe SCGM segmentation challenge data \\cite{prados_spinal_2017} consists of 40 training datasets and 40 test datasets acquired at 4 different sites.\nThe training and test datasets each contain 10 samples from each site.\nThe 4 sites used different imaging protocols with different fields of view, sizes and resolutions.\nEach dataset was manually segmented by 4 experts, and a consensus segmentation of the 4 raters was calculated with majority voting (more than 2 positive votes) to assess rater performance.\n\nFor training and testing of our MD-GRU models, we resampled all axial slices of all the datasets to the common finest resolution of $\\SI{0.25}{\\milli\\meter}\\times\\SI{0.25}{\\milli\\meter}$\nand center-cropped or padded all datasets to a common size of $640\\times 640$ pixels.\nBefore submitting the test results for evaluation, we padded and resampled all slices back to their original sizes and resolutions.\n\n\n\n\n\\section{Experiments and Results}\n\\label{seq:ExpResults}\nIn the following subsections, we describe our experiments and the chosen MD-GRU options, and present their results.\n\n\\subsection{AMIRA segmentation model}\nWe split the 24 subjects into 3 groups of 8 subjects each for a 3-fold cross-validation: training on two groups and testing on the third group.\nTo monitor over-fitting, we excluded one subject from each training set and used it for validation.\n\nWe used the standard MD-GRU\\footnote{\\url{https:\/\/github.com\/zubata88\/mdgru} (last accessed: \\today)} model with default settings and\nresidual learning, dropout rate 0.5, and dropconnect on state.\nWe chose the following problem-specific parameters: Gaussian high-pass filtering with variance 10, batch size 1, and window size $500\\times 500$ pixels.\nIn each iteration of the training stage, for data augmentation, a subsample of the training data at a random location with a random deformation field was selected.\nRandom deformations included an interpolated deformation field on 4 supporting points with randomly generated displacements of standard deviation 15,\nrandom scaling by a factor between $\\nicefrac{4}{5}$ and $\\nicefrac{5}{4}$, random rotation of $\\pm$ 10 degrees, and random mirroring along the anatomical median plane.\nTo prevent zero padding of the subsamples, we only allowed random sampling within a safe distance of 45 pixels from the image boundary \nand truncated the random deformation magnitudes to 45 pixels, which is 3 times the chosen standard deviation.\n\nWe trained the networks with Adadelta with a learning rate of 1 for 30'000 iterations, where one iteration took approximately 10 seconds on an NVIDIA GeForce GTX Titan X.\nCross entropy and DSC on the evaluation 
set already plateaued after around 20'000 iterations, and dropconnect on state prevented overfitting, as can be seen in \\figref{fig:errBars}.\n\nSegmenting a slice with the trained network took approximately 7 seconds.\n\nPrior to the final model generation, we experimented with adding only a GM DL to the CEL with weightings $\\lambda=0, 0.25, 0.5, 0.75, 1$ \nand found that 0.5 produced the best results.\nThe DL produces values close to -1, whereas the CEL tends to have small values close to 0.\nMoreover, the CEL carries the information of all labels, since it is calculated over all labels.\nWhen adding only the GM DL, because of this imbalance of the loss values, higher values of $\\lambda$ strongly weaken the information on WM and background, which in this setup is carried only by the CEL.\nThe best weighting $\\lambda$ depends on the cross entropy and thus on the class imbalance and label uncertainty of each specific segmentation task.\n\nWe observed that the auxiliary DL produces sharper probability maps at the boundaries compared to only using the CEL, see \\figref{fig:CE_GDL_probabilitymaps},\nand that the DL helps to delineate weak contrasts, e.g. between GM and WM.\n\n\\begin{figure}\n\n \\begin{tikzpicture}\n \\node[anchor=north west,inner sep=0] at (0,0) {\n \\begin{tabular}{c}\n \\resizebox{\\textwidth}{!}{\n \\includegraphics[height=0.15\\textwidth]{figs\/prediction\/17999-slice02-1-pre.png}\n \\includegraphics[height=0.15\\textwidth]{figs\/prediction\/20999-slice02-1-pre.png}\n \\includegraphics[height=0.15\\textwidth]{figs\/prediction\/17999-slice11-34-pre.png}\n \\includegraphics[height=0.15\\textwidth]{figs\/prediction\/20999-slice11-34-pre.png}\n }\n \\end{tabular}\n };\n\n \\def-2.8{0.15}\n\n \\node[] at (1.4,-2.8) {\\tiny CGM 1 Scan 1 Slice 2};\n \\node[] at (7.25,-2.8) {\\tiny CGM 1 Scan 3 Slice 11};\n\n \\def-2.8{-2.4}\n\n \\node[] at (0.45,-2.8) {\\tiny CEL};\n \\node[] at (3.6,-2.8) {\\tiny GDL 0.5};\n \\node[] at (6.35,-2.8) {\\tiny CEL};\n \\node[] at (9.6,-2.8) {\\tiny GDL 0.5};\n\n \\end{tikzpicture}\n\n \n \\caption{Exemplary prediction probability maps for the three labels background (\\emph{red}), GM (\\emph{green}) and WM (\\emph{blue}) of MD-GRU with CEL and with GDL, shown as RGB colors.}\n \\label{fig:CE_GDL_probabilitymaps}\n\\end{figure}\n\n\\begin{figure}\n \\begin{tabular}{c}\n \\resizebox{\\textwidth}{!}{\n \\newlength\\fheight \n \\newlength\\fwidth \n \\setlength\\fheight{3.5cm} \n \\setlength\\fwidth{8cm}\n \\input{figs\/errBars\/GMDSCCELvsGDL.tex}\n \\input{figs\/errBars\/WMDSCCELvsGDL.tex}\n \\input{figs\/errBars\/crossEntropyCELvsGDL.tex}\n }\\\\\n \\resizebox{\\textwidth}{!}{\n \\setlength\\fheight{3.5cm} \n \\setlength\\fwidth{8cm}\n \\input{figs\/errBars\/GMDSCGDLvsDLvsGMDL.tex}\n \\input{figs\/errBars\/WMDSCGDLvsDLvsGMDL.tex}\n \\input{figs\/errBars\/crossEntropyGDLvsDLvsGMDL.tex}\n }\n \\end{tabular}\n \n \\caption{GM DSC, WM DSC and cross entropy over the training iterations on the validation set of group 1 of the AMIRA dataset, in the format mean $\\pm$ one standard deviation.\n \\emph{Top row:} models with $\\lambda=0$ (only CEL), $\\lambda=1$ (only GDL), and combined with $\\lambda=0.5$ (GDL 0.5).\n \\emph{Bottom row:} GM DL 0.5, DL 0.5, and GDL 0.5 show similar performance.\n }\n \\label{fig:errBars}\n\\end{figure}\n\nFurther experiments showed that the proposed automatic weighting $\\omega_l$ \\eqref{labelWeightsCalcAdded} for the DLs between all label classes is a good strategy\nto simplify the selection of 
$\\lambda$.\nIn our case, the evaluation scores did not show big differences for $\\lambda$ in a range from 0.25 to 0.75, when using the class weights $\\omega_l$ according to \\eqref{labelWeightsCalcAdded} for both DL and GDL.\nMD-GRU with the trivial linear combinations $\\lambda=0$ (only CEL) and $\\lambda=1$ (only GDL) did not perform as good as true combinations between the two losses.\nWe show the improvement in the scores of GDL with $\\lambda=0.5$ in \\figref{fig:errBars} and \\tabref{tab:CE_GDL}.\n\n\\begin{table}\n \\caption{Improvement between native MD-GRU with CEL and the proposed MD-GRU with GDL together with the manual segmentation's precision and intra-rater accuracy values.\n Intra-rater accuracy of the human expert was calculated for the 60 randomly chosen slices.\n }\n \\label{tab:CE_GDL}\n \\resizebox{\\textwidth}{!}{%\n \\begin{tabular}{l|rlrl|rlrlrlrlrlrl}\n \\rowcolor{gray}\n \t&\\multicolumn{4}{l|}{\\bf Accuracy} & \\multicolumn{6}{l}{\\bf Intra-session} & \\multicolumn{6}{l}{\\bf Inter-session} \\tabularnewline\n \\rowcolor{gray}\n \\bf GM &\\multicolumn{2}{l}{\\bf DSC} &\\multicolumn{2}{l|}{\\bf HD(mm)} &\\multicolumn{2}{l}{\\bf DSC} &\\multicolumn{2}{l}{\\bf HD(mm)} &\\multicolumn{2}{l}{\\bf RSD(\\%)} &\\multicolumn{2}{l}{\\bf DSC} &\\multicolumn{2}{l}{\\bf HD(mm)} &\\multicolumn{2}{l}{\\bf RSD(\\%)} \\tabularnewline\\hline\n \\bf MD-GRU CEL & 0.90 & $\\pm$ 0.04 & 0.68 & $\\pm$ 0.43 & 0.89 & $\\pm$ 0.03 & 0.71& $\\pm$ 0.46 & 3.22 & $\\pm$ 2.87 & 0.88 & $\\pm$ 0.04 & 0.70 & $\\pm$ 0.43 & 3.65 & $\\pm$ 3.97 \\tabularnewline\n \\rowcolor{gray}\n \\bf MD-GRU GDL 0.5 & 0.91 & $\\pm$ 0.03 & 0.56 & $\\pm$ 0.33 & 0.88 & $\\pm$ 0.03 & 0.58 & $\\pm$ 0.32 & 2.93 & $\\pm$ 2.63 & 0.88 & $\\pm$ 0.03 & 0.61 & $\\pm$ 0.35 & 3.86 & $\\pm$ 3.49 \\tabularnewline\n \\bf Manual \t & &\t\t &\t &\t & 0.86 & $\\pm$ 0.03 & 0.67 & $\\pm$ 0.24 & 5.55 & $\\pm$ 4.11 & 0.85 & $\\pm$ 0.03 & 0.71 & $\\pm$ 0.27 & 6.27 & $\\pm$ 4.70 \\tabularnewline\n \\rowcolor{gray}\n \\bf Intra-rater & 0.85 & $\\pm$ 0.07 & 0.62 & $\\pm$ 0.30 &&&&&&&&&&&& \\vspace{4mm}\\tabularnewline\n \n \\rowcolor{gray}\n \\bf WM &\\multicolumn{2}{l}{\\bf DSC} &\\multicolumn{2}{l|}{\\bf HD(mm)} &\\multicolumn{2}{l}{\\bf DSC} &\\multicolumn{2}{l}{\\bf HD(mm)} &\\multicolumn{2}{l}{\\bf RSD(\\%)} &\\multicolumn{2}{l}{\\bf DSC} &\\multicolumn{2}{l}{\\bf HD(mm)} &\\multicolumn{2}{l}{\\bf RSD(\\%)}\\tabularnewline\\hline\n \\bf MD-GRU CEL & 0.94 & $\\pm$ 0.03& 0.47 & $\\pm$ 0.26& 0.94 & $\\pm$ 0.02& 0.51 & $\\pm$ 0.25 & 2.07 & $\\pm$ 2.16& 0.94 & $\\pm$ 0.02 & 0.52 & $\\pm$ 0.22 & 2.40 & $\\pm$ 2.22 \\tabularnewline\n \\rowcolor{gray}\n \\bf MD-GRU GDL 0.5 & 0.95 & $\\pm$ 0.02 & 0.43 & $\\pm$ 0.22 & 0.94 & $\\pm$ 0.02 & 0.51 & $\\pm$ 0.22 & 2.14 & $\\pm$ 2.35 & 0.94 & $\\pm$ 0.02 & 0.53 & $\\pm$ 0.23 & 2.69 & $\\pm$ 2.54 \\tabularnewline\n \\bf Manual \t & &\t\t &\t &\t & 0.93 & $\\pm$ 0.02 & 0.54 & $\\pm$ 0.13 & 3.78 & $\\pm$ 3.32 & 0.92 & $\\pm$ 0.02 & 0.58 & $\\pm$ 0.15 & 4.59 & $\\pm$ 3.77 \\tabularnewline\n \\rowcolor{gray}\n \\bf Intra-rater & 0.96 & $\\pm$ 0.02 & 0.44 & $\\pm$ 0.15 &&&&&&&&&&&& \\tabularnewline\n \\end{tabular}\n }\n\\end{table}\n\nFinally, comparisons between GM DL 0.5, auto-weighted DL 0.5 and GDL 0.5, all with $\\lambda=0.5$, are shown in \\figref{fig:errBars} on the bottom row.\nAs can be expected, the similarity of the terms DL \\eqref{DiceLoss} and GDL \\eqref{GeneralizedDiceLoss} is reflected in their almost identical segmentation performance.\n\nGM DL 0.5 shows comparable WM segmentation performance 
to the losses that include a WM DL.\nThis can be explained by the fact that the GM boundary is part of the WM boundaries and thus influences the WM scores; and\nfurthermore, the outer WM boundary is already well delineated even without any DL, thanks to the good CSF-WM contrast.\nChoosing a DL as a surrogate for the GM DSC only, as proposed in \\cite{perone_spinal_2018}, is thus justifiable.\n\n\\begin{figure}\n \\begin{tikzpicture}\n \\node[anchor=north west,inner sep=0] at (0,0) {\n \\begin{tabular}{c}\n \\resizebox{\\textwidth}{!}{\n\t\\includegraphics[height=0.15\\textwidth]{figs\/prediction\/GM_6537Scan1Slice10_withManSeg.png}\n\t\\includegraphics[height=0.15\\textwidth]{figs\/prediction\/GM_6550Scan1Slice3_withManSeg.png}\n\t\\includegraphics[height=0.15\\textwidth]{figs\/prediction\/GM_6582Scan2Slice1_withManSeg.png}\n\t\\includegraphics[height=0.15\\textwidth]{figs\/prediction\/GM_6614Scan1Slice7_withManSeg.png}\n\t\\includegraphics[height=0.15\\textwidth]{figs\/prediction\/GM_6537Scan1Slice5_withManSeg.png}\n\t\\includegraphics[height=0.15\\textwidth]{figs\/prediction\/GM_6582Scan1Slice9_withManSeg.png}\n }\n \\end{tabular}\n };\n\n \\def-2.8{0.1}\n\n \\node[] at (1.6,-2.8) {\\tiny Subject 6537 Scan 1 Slice 10};\n \\node[] at (5,-2.8) {\\tiny Subject 6582 Scan 2 Slice 1};\n \\node[] at (9,-2.8) {\\tiny Subject 6537 Scan 1 Slice 5};\n \n \\def-2.8{-1.9}\n\n \\node[] at (3,-2.8) {\\tiny Subject 6550 Scan 1 Slice 3};\n \\node[] at (7,-2.8) {\\tiny Subject 6614 Scan 1 Slice 7};\n \\node[] at (10.7,-2.8) {\\tiny Subject 6582 Scan 1 Slice 9};\n \n \n\n \\end{tikzpicture}\n \n \\caption{\n Exemplary slices of the AMIRA dataset with automatic GM (\\emph{red}) and CSF-WM (\\emph{green}) boundaries, and manual GM (\\emph{blue}) and CSF-WM (\\emph{magenta}) boundaries.\n }\n \\label{fig:AMIRAresults}\n\\end{figure}\n\n\\begin{figure}\n \\begin{tikzpicture}\n \\node[anchor=north west,inner sep=0] at (0,0) {\n\t\\resizebox{\\textwidth}{!}{\n\t \\setlength\\fheight{5cm} \n\t \\setlength\\fwidth{7cm}\n\t \\input{figs\/MDGRU_plots\/SC_area_slice.tex}\n\t \\input{figs\/MDGRU_plots\/WM_area_slice.tex}\n\t \\input{figs\/MDGRU_plots\/GM_area_slice.tex}\n\t}\n };\n\n \\def-2.8{0.2}\n \\node[] at (0.7,-2.8) {\\tiny SC area};\n \\node[] at (4.8,-2.8) {\\tiny WM area};\n \\node[] at (8.8,-2.8) {\\tiny GM area};\n\n \\end{tikzpicture}\n \\caption{SC, WM, and GM areas of GDL 0.5 (automatic) and manual segmentations wrt. 
the anatomical slice positions in mean $\\pm$ one standard deviation.\n }\n \\label{fig:areaSlicePlots}\n\\end{figure}\n\nWhile the SCGM challenge results only provide GM segmentation accuracy, for the AMIRA dataset we additionally also provide WM segmentation results.\nFor the statistics, we gathered all slice-wise test results of all cross-validations for the proposed method GDL 0.5 and compare it with those of CEL.\nPairwise two-tailed Hotelling's T-tests for GM accuracy in DSC and labelmap Hausdorff distance (HD) show, that the test results of the MD-GRU models trained on the different groups are not significantly different from each other ($p>0.3$ for both GDL and CEL).\n\n\\begin{figure}\n\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{c}\n\\fbox{\\begin{minipage}{\\textwidth}\n \\begin{tikzpicture}\n \\node[anchor=north west,inner sep=0] at (0,0) {\n \\begin{tabular}{c}\n \\resizebox{\\textwidth}{!}{\n\t\\fbox{\\resizebox{0.25\\textwidth}{!}{\n\t \\setlength\\fheight{5cm} \n\t \\setlength\\fwidth{1.79cm}\n\t \\input{figs\/MDGRU_plots\/GM_55000_accDSC_total.tex}\n\t \\input{figs\/MDGRU_plots\/GM_55000_accHD_total.tex}\n\t}}\\,\\fbox{\\resizebox{0.75\\textwidth}{!}{\n\t \\setlength\\fheight{5cm} \n\t \\setlength\\fwidth{5cm}\n\t \\input{figs\/MDGRU_plots\/GM_55000_precDSC_total.tex}\n\t \\input{figs\/MDGRU_plots\/GM_55000_precHD_total.tex}\n\t \\input{figs\/MDGRU_plots\/GM_55000_precRSD_total.tex}\n\t}}\n }\\\\[4mm]\n \\fbox{\\resizebox{0.47\\textwidth}{!}{\n\t\\setlength\\fheight{6.1cm} \n\t\\setlength\\fwidth{5cm}\n\t\\input{figs\/MDGRU_plots\/GM_55000_accDSC_slice.tex}\n\t\\input{figs\/MDGRU_plots\/GM_55000_accHD_slice.tex}\n }}\\,\\fbox{\\resizebox{0.47\\textwidth}{!}{\n\t\\setlength\\fheight{5.765cm} \n\t\\setlength\\fwidth{10cm}\n\t\\input{figs\/MDGRU_plots\/GM_55000_precRSD_slice.tex}\n }}\n \\end{tabular}\n };\n \n \\def-2.8{0.5}\n\n \\node[] at (0.8,-2.8) {\\tiny GM};\n\n \\def-2.8{0.1}\n\n \\node[] at (0.8,-2.8) {\\tiny Accuracy};\n \\node[] at (3.9,-2.8) {\\tiny Precision};\n\n \\def-2.8{-0.5}\n \\node[] at (4.5,-2.8) {\\tiny intra};\n \\node[] at (5.7,-2.8) {\\tiny inter};\n \\node[] at (7.25,-2.8) {\\tiny intra};\n \\node[] at (8.5,-2.8) {\\tiny inter};\n \\node[] at (10,-2.8) {\\tiny intra};\n \\node[] at (11.25,-2.8) {\\tiny inter};\n \n \\def-2.8{-3.2}\n\n \\node[] at (0.8,-2.8) {\\tiny Accuracy};\n \\node[] at (6.75,-2.8) {\\tiny Precision};\n\n \n \\end{tikzpicture}\n\\end{minipage}}\\\\\n\\fbox{\\begin{minipage}{\\textwidth}\n \\begin{tikzpicture}\n \\node[anchor=north west,inner sep=0] at (0,0) {\n \\begin{tabular}{c}\n \\resizebox{\\textwidth}{!}{\n\t\\fbox{\\resizebox{0.25\\textwidth}{!}{\n\t \\setlength\\fheight{5cm} \n\t \\setlength\\fwidth{1.79cm}\n\t \\input{figs\/MDGRU_plots\/WM_55000_accDSC_total.tex}\n\t \\input{figs\/MDGRU_plots\/WM_55000_accHD_total.tex}\n\t}}\\,\\fbox{\\resizebox{0.75\\textwidth}{!}{\n\t \\setlength\\fheight{5cm} \n\t \\setlength\\fwidth{5cm}\n\t \\input{figs\/MDGRU_plots\/WM_55000_precDSC_total.tex}\n\t \\input{figs\/MDGRU_plots\/WM_55000_precHD_total.tex}\n\t \\input{figs\/MDGRU_plots\/WM_55000_precRSD_total.tex}\n\t}}\n }\\\\[4mm]\n \\fbox{\\resizebox{0.47\\textwidth}{!}{\n\t\\setlength\\fheight{6.1cm} \n\t\\setlength\\fwidth{5cm}\n\t\\input{figs\/MDGRU_plots\/WM_55000_accDSC_slice.tex}\n\t\\input{figs\/MDGRU_plots\/WM_55000_accHD_slice.tex}\n }}\\,\\fbox{\\resizebox{0.47\\textwidth}{!}{\n\t\\setlength\\fheight{5.765cm} \n\t\\setlength\\fwidth{10cm}\n\t\\input{figs\/MDGRU_plots\/WM_55000_precRSD_slice.tex}\n }}\n \\end{tabular}\n 
};\n \n \\def-2.8{0.5}\n\n \\node[] at (0.8,-2.8) {\\tiny WM};\n\n \\def-2.8{0.1}\n\n \\node[] at (0.8,-2.8) {\\tiny Accuracy};\n \\node[] at (3.9,-2.8) {\\tiny Precision};\n\n \\def-2.8{-0.5}\n \\node[] at (4.5,-2.8) {\\tiny intra};\n \\node[] at (5.7,-2.8) {\\tiny inter};\n \\node[] at (7.25,-2.8) {\\tiny intra};\n \\node[] at (8.5,-2.8) {\\tiny inter};\n \\node[] at (10,-2.8) {\\tiny intra};\n \\node[] at (11.25,-2.8) {\\tiny inter};\n \n \\def-2.8{-3.2}\n\n \\node[] at (0.8,-2.8) {\\tiny Accuracy};\n \\node[] at (6.75,-2.8) {\\tiny Precision};\n\n \\end{tikzpicture}\n\\end{minipage}}\n\\end{tabular}\n}\n \n \\caption{GM and WM accuracy and precision plots of the AMIRA dataset.\n For both the GM and WM boxes:\n \\emph{Top row:} Accuracy (\\emph{left}) in DSC and HD over all the 855 slices for the proposed method;\n intra-session (intra) and inter-session (inter) precision (\\emph{right}) of the proposed method (auto) and the manual segmentations in DSC, HD, and area RSD.\n \\emph{Bottom row:} Accuracy box plots (\\emph{left}) in DSC and HD wrt. the slice positions with overlaid error bars in the format mean $\\pm$ one standard deviation;\n precision error bars (\\emph{right}) for area RSD wrt. the slice positions, shown with 0.2 standard deviations for better visualization.\n HD is measured in millimeters, and RSD in percent.\n }\n \\label{fig:accPrecBoxPlots}\n\\end{figure}\n\nIn \\figref{fig:accPrecBoxPlots} and in \\tabref{tab:CE_GDL}, we show the GM and WM accuracy and precision of all gathered slice results in DSC, HD, and the relative standard deviation of the areas (RSD),\nalso known as the coefficient of variation.\nWith intra- and inter-session precision, we compare segmentations of the same slice across different scans without and with repositioning, respectively.\nThe proposed automatic segmentations show better reproducibility than the manual segmentations.\nAdditionally, we show the anatomical GM and WM areas wrt. the slice positions in \\figref{fig:areaSlicePlots} and randomly chosen results in \\figref{fig:AMIRAresults}.\nTraining multiple networks with data from multiple human raters as ground truth, as we did with the SCGM data, cf. 
Subsec.~\\ref{seq:challengeModel}, might further improve the performance.\n\n\n\\subsection{SCGM challenge model}\n\\label{seq:challengeModel}\nTo enable comparison with other methods, we tested MD-GRU on the SCGM dataset \\cite{prados_spinal_2017}.\nWe trained four MD-GRU models, one for each expert rater's ground truth, and in the end performed majority voting on the individual test results to mimic the challenge's consensus segmentation.\n\nWe used the same MD-GRU setup as before, but with a window size of $200\\times200$ pixels to obtain an anatomical field of view similar to that of the AMIRA models.\nRandom subsamples in each training iteration were drawn within a safe distance of 200 pixels from the image boundary.\nWe trained the networks for 100'000 iterations and observed that the scores plateaued after around 60'000 iterations.\nOne training iteration took around 4 seconds, and the segmentation of one slice took less than 1 second.\n\nIn \\tabref{tab:GMchallengeResults}, the proposed model sets a new state of the art in almost all metrics.\nThis comparison shows MD-GRU's strong performance in learning the GM segmentation problem.\nIn \\tabref{tab:GMchallengeMDGRU}, we additionally show the improvement of the auto-weighted GDL over the native MD-GRU approach with only CEL.\nFigure~\\ref{fig:SCGMresults} shows randomly chosen results of the proposed model.\n\n\\begin{table}\n\\caption{\nResults of the SCGM challenge competitors, including the results of Porisky et al.\\ \\cite{porisky_grey_2017}, Perone et al.\\ \\cite{perone_spinal_2018}, and ours.\nThe metrics are Dice coefficient (DSC), \nmean surface distance (MD), \nHausdorff surface distance (HD), \nskeletonized Hausdorff distance (SHD), \nskeletonized median distance (SMD), \ntrue positive rate (TPR), \ntrue negative rate (TNR), \nprecision (P), \nJaccard index (J), and\nconformity (C).\nBest results on each metric are highlighted in bold font.\nDistances are measured in millimeters.\n}\n\\label{tab:GMchallengeResults}\n\n\\resizebox{\\textwidth}{!}{%\n \\begin{tabular}[] {lrlrlrlrlrlrlrlrlrl}\n \\rowcolor{gray}\n & \\multicolumn{2}{c}{\\bf JCSCS} & \\multicolumn{2}{c}{\\bf DEEPSEG} & \\multicolumn{2}{c}{\\bf MGAC} & \\multicolumn{2}{c}{\\bf GSBME} & \\multicolumn{2}{c}{\\bf SCT} & \\multicolumn{2}{c}{\\bf VBEM} & \\multicolumn{2}{c}{\\bf \\cite{porisky_grey_2017}} & \\multicolumn{2}{c}{\\bf \\cite{perone_spinal_2018}} & \\multicolumn{2}{c}{\\bf Proposed} \\tabularnewline\\hline\n DSC & 0.79 & $\\pm$ 0.04 & 0.80 & $\\pm$ 0.06 & 0.75 & $\\pm$ 0.07 & 0.76 & $\\pm$ 0.06 & 0.69 & $\\pm$ 0.07 & 0.61 & $\\pm$ 0.13\t\t& 0.80 & $\\pm$ 0.06 & 0.85 & $\\pm$ 0.04 & \\bf 0.90 & $\\pm$ 0.03 \\tabularnewline\n \\rowcolor{gray}\n MD & 0.39 & $\\pm$ 0.44 & 0.46 & $\\pm$ 0.48 & 0.70 & $\\pm$ 0.79 & 0.62 & $\\pm$ 0.64 & 0.69 & $\\pm$ 0.76 & 1.04 & $\\pm$ 1.14\t\t& 0.53 & $\\pm$ 0.57 & 0.36 & $\\pm$ 0.34 & \\bf 0.21 & $\\pm$ 0.20 \\tabularnewline\n HD & 2.65 & $\\pm$ 3.40 & 4.07 & $\\pm$ 3.27 & 3.56 & $\\pm$ 1.34 & 4.92 & $\\pm$ 3.30 & 3.26 & $\\pm$ 1.35 & 5.34 & $\\pm$ 15.35\t\t& 3.69 & $\\pm$ 3.93 & 2.61 & $\\pm$ 2.15 & \\bf 1.85 & $\\pm$ 1.16 \\tabularnewline\n \\rowcolor{gray}\n SHD & 1.00 & $\\pm$ 0.35 & 1.26 & $\\pm$ 0.65 & 1.07 & $\\pm$ 0.37 & 1.86 & $\\pm$ 0.85 & 1.12 & $\\pm$ 0.41 & 2.77 & $\\pm$ 8.10\t\t& 1.22 & $\\pm$ 0.51 & 0.85 & $\\pm$ 0.32 & \\bf 0.71 & $\\pm$ 0.28 \\tabularnewline\n SMD & 0.37 & $\\pm$ 0.18 & 0.45 & $\\pm$ 0.20 & 0.39 & $\\pm$ 0.17 & 0.61 & $\\pm$ 0.35 & 0.39 & $\\pm$ 0.16 & 0.54 & $\\pm$ 0.25\t\t& 0.44 & $\\pm$ 0.19 & 
\\bf 0.36 & $\\pm$ 0.17 & 0.37 & $\\pm$ 0.17 \\tabularnewline\n \\rowcolor{gray}\n TPR & 77.98 & $\\pm$ 4.88 & 78.89 & $\\pm$ 10.33 & 87.51 & $\\pm$ 6.65 & 75.69 & $\\pm$ 8.08 & 70.29 & $\\pm$ 6.76 & 65.66 & $\\pm$ 14.39 \t& 79.65 & $\\pm$ 9.56 & 94.97 & $\\pm$ 3.50 & \\bf 96.22 & $\\pm$ 2.69 \\tabularnewline\n TNR & \\bf{99.98} & $\\pm$ 0.03 & 99.97 & $\\pm$ 0.04 & 99.94 & $\\pm$ 0.08 & 99.97 & $\\pm$ 0.05 & 99.95 & $\\pm$ 0.06 & 99.93 & $\\pm$ 0.09\t& 99.97 & $\\pm$ 0.04 & 99.95 & $\\pm$ 0.06 & \\bf 99.98 & $\\pm$ 0.03 \\tabularnewline\n \\rowcolor{gray}\n P & 81.06 & $\\pm$ 5.97 & 82.78 & $\\pm$ 5.19 & 65.60 & $\\pm$ 9.01 & 76.26 & $\\pm$ 7.41 & 67.87 & $\\pm$ 8.62 & 59.07 & $\\pm$ 13.69\t& 81.29 & $\\pm$ 5.30 & 77.29 & $\\pm$ 6.46 & \\bf 85.46 & $\\pm$ 4.96 \\tabularnewline\n J & 0.66 & $\\pm$ 0.05 & 0.68 & $\\pm$ 0.08 & 0.60 & $\\pm$ 0.08 & 0.61 & $\\pm$ 0.08 & 0.53 & $\\pm$ 0.08 & 0.45 & $\\pm$ 0.13\t\t& 0.67 & $\\pm$ 0.07 & 0.74 & $\\pm$ 0.06 & \\bf 0.82 & $\\pm$ 0.05 \\tabularnewline\n \\rowcolor{gray}\n C & 47.17 & $\\pm$ 11.87 & 49.52 & $\\pm$ 20.29 & 29.36 & $\\pm$ 29.53 & 33.69 & $\\pm$ 24.23 & 6.46 & $\\pm$ 30.59 & \u221244.25 & $\\pm$ 90.61\t& 48.79 & $\\pm$ 18.09 & 64.24 & $\\pm$ 10.83 & \\bf 77.46 & $\\pm$ 7.31 \\tabularnewline\n \\hline\n \\end{tabular}}\n\\end{table}\n\n\n\n\\begin{table}\n \\caption{SCGM challenge results of the native MD-GRU with only CEL in comparison to the proposed GDL 0.5.\n Abbreviations of the metrics taken from \\tabref{tab:GMchallengeResults}.\n }\n \\label{tab:GMchallengeMDGRU}\n \\resizebox{\\textwidth}{!}{%\n \\begin{tabular}{lrlrlrlrlrlrlrlrlrlrl}\n \\rowcolor{gray}\n &\\multicolumn{2}{c}{\\bf DSC} &\\multicolumn{2}{c}{\\bf MD} &\\multicolumn{2}{c}{\\bf HD} &\\multicolumn{2}{c}{\\bf SHD} &\\multicolumn{2}{c}{\\bf SMD} &\\multicolumn{2}{c}{\\bf TPR} &\\multicolumn{2}{c}{\\bf TNR} &\\multicolumn{2}{c}{\\bf P} &\\multicolumn{2}{c}{\\bf J} &\\multicolumn{2}{c}{\\bf C} \\tabularnewline\\hline\n \\bf MD-GRU CEL & 0.87 & $\\pm$ 0.03 & 0.30& $\\pm$ 0.31& 2.14 & $\\pm$ 1.20 & 0.85 & $\\pm$ 0.36 & 0.40 & $\\pm$ 0.20 & 93.93 & $\\pm$ 3.85 &\\bf 99.98 & $\\pm$ 0.03 & 82.04 & $\\pm$ 5.42 & 0.78 & $\\pm$ 0.05 & 70.90 & $\\pm$ 9.06 \\tabularnewline\n \\rowcolor{gray}\n \\bf MD-GRU GDL 0.5 &\\bf 0.90& $\\pm$ 0.03 &\\bf 0.21& $\\pm$ 0.20 &\\bf 1.85& $\\pm$ 1.16 &\\bf0.71& $\\pm$ 0.28 &\\bf 0.37& $\\pm$ 0.17 &\\bf 96.22& $\\pm$ 2.69 &\\bf99.98& $\\pm$ 0.03 &\\bf85.46& $\\pm$ 4.96 &\\bf0.82& $\\pm$ 0.05 &\\bf 77.46& $\\pm$ 7.31 \\tabularnewline\n \\hline\n \\end{tabular}}\n\n \n\\end{table}\n\n\\begin{figure}\n\n\\begin{tikzpicture}\n\\node[anchor=north west,inner sep=0] at (0,0) {\n \\begin{tabular}{c}\n \\resizebox{\\textwidth}{!}{\n \\includegraphics[height=0.15\\textwidth]{figs\/challenge\/challenge_image_Site_1.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/challenge\/challenge_image_Site_2.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/challenge\/challenge_image_Site_3.png}\\,\n \\includegraphics[height=0.15\\textwidth]{figs\/challenge\/challenge_image_Site_4.png}\n }\n \\end{tabular}\n};\n\n\\def-2.8{-2.8}\n\n\\node[] at (1.5,-2.8) {\\tiny Site 1 Subject 15 Slice 2};\n\\node[] at (4.5,-2.8) {\\tiny Site 2 Subject 13 Slice 5};\n\\node[] at (7.5,-2.8) {\\tiny Site 3 Subject 12 Slice 14};\n\\node[] at (10.5,-2.8) {\\tiny Site 4 Subject 19 Slice 7};\n\n\\end{tikzpicture}\n \n \\caption{\n For each site of the SCGM dataset, one randomly chosen result of the proposed model in cropped view.\n }\n 
\\label{fig:SCGMresults}\n\\end{figure}\n\n\n\\section{Conclusion}\n\\label{seq:conclusion}\nWe presented a new pipeline for the acquisition and automatic segmentation of SC GM and WM.\nThe AMIRA sequence produces 8-channel images at different inversion times, which the proposed deep learning approach with MD-GRU uses for segmentation.\nFrom the 8 channels, tissue-specific relaxation curves can be learned and used for GM-WM segmentation.\n\nComparing our segmentation results to the results on the ex-vivo high-resolution dataset of Perone et al.\\ \\cite{perone_spinal_2018}, we show comparable accuracy for in-vivo data.\nThe AMIRA dataset, acquired in a scan-rescan fashion with and without repositioning in the scanner, shows high reproducibility in terms of GM area RSD.\nWe thus believe that the presented pipeline is a candidate for longitudinal clinical studies.\nFurther tests with patient data still have to be conducted.\n\nWe added a generalized multi-label Dice loss to the cross entropy loss that MD-GRU uses.\nWe observed that the segmentation performance was stable over a wide range of the weighting $\\lambda$ between the two losses.\nIn future work, we will study the effect of small values of $\\lambda$, which correspond well with the logarithmic magnitude of the CEL.\nOur proposed segmentation model outperforms the methods from the SC GM segmentation challenge.\nTraining the MD-GRU models directly on the 3D data might further improve the performance compared to slice-wise segmentation.\n\nGiven the small and fine structure of the GM, we would like to point out that the achieved metric values are near optimal.\nHigher resolutions of the imaging sequence would make further gains in accuracy easier to achieve.\n\n\\vspace{5mm}\nAcknowledgments: We thank Dr. Matthias Weigel, Prof. Dr. Oliver Bieri and Tanja Haas for the MR acquisitions with the AMIRA sequence.\n\n\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDeconvolution of large galaxy survey images requires taking into account the spatial variation of the \n Point Spread Function (PSF) across the field of view. The PSF field is usually estimated beforehand, via parametric models and simulations as in \\citet{Krist2011}, or directly estimated from the (noisy) observations of stars in the field of view \\citep{Bertin2011,Kuijken2015,Zuntz2018,Mboula2016,Schmitz2019}. \n Even with \"perfect\" knowledge of the PSF, this ill-posed deconvolution problem is challenging, in particular due to the size of the images to process. \\citet{starck:sta00_3} proposed an {\\em Object-Oriented Deconvolution}, consisting of \nfirst detecting galaxies and then deconvolving each object independently, taking into account the PSF at the position of the center of the galaxy (but not the variation of the PSF field at the galaxy scale). \nFollowing this idea, \\citet{Farrens2017} introduced a space-variant deconvolution approach for galaxy images, based on two regularization strategies: using either a sparse prior in a transformed domain \\citep{Starck2015}, or a low-dimensional subspace for galaxy representation learned in an unsupervised way through a low-rank prior on the recovered galaxy images. 
Provided a sufficient number of galaxies are jointly processed (more than 1000), they found that the low-rank approach yielded significantly lower ellipticity errors than sparsity, which illustrates the importance of learning adequate representations for galaxies. To go one step further in learning, supervised deep learning techniques taking advantage of databases of galaxy images could be employed to learn complex mappings that could regularize our deconvolution problem. Deep convolutional architectures have also proved to be computationally efficient for processing large numbers of images once the model has been learned, and are therefore promising in the context of modern galaxy surveys.\\\\\n\n{\\it Deep Learning and Deconvolution:}\nIn recent years, deep learning approaches have been proposed for a large number of inverse problems with high empirical success. Some potential explanations could lie in the expressivity of deep architectures (e.g. the theoretical works for simple architectures in \\citep{Eldan2015,Safran2017,Petersen2018}) as well as in new architectures or new optimization strategies that increased the learning performance (for instance \\citet{Kingma2014,Ioffe2015,He2016,Szegedy2016}). Their success also depends on the huge datasets collected in the different applications for training the networks, as well as on the increased computing power available to process them.\nWith the progress made in simulating realistic galaxies (based for instance on real Hubble Space Telescope (HST) images as in \\citet{Rowe2015,Mandelbaum2015}), deep learning techniques therefore have the potential to show the same success for the deconvolution of galaxy images as in the other applications. Preliminary work has indeed shown that deep neural networks (DNNs) can perform well for classical deconvolution of galaxy images \\citep{Flamary2017,Schawinski2017}.\\\\\n\n{\\it This paper:} we investigate two different strategies to interface deep learning techniques with space-variant deconvolution approaches inspired by convex optimization. In section \\ref{sec:deconv}, we review deconvolution techniques based on convex optimization and deep learning schemes. \nThe space-variant deconvolution problem is presented in section~\\ref{sect_svdeconv}, where the two proposed methods are described: the first one uses a deep neural network (DNN) for post-processing of a Tikhonov deconvolution, and the second one includes a DNN trained for denoising in an iterative algorithm derived from convex optimization. The neural network architecture proposed for deconvolution is also presented in this section. The experiment settings are described in section~\\ref{sec:XPDesign} and the results are presented in section~\\ref{sec:Results}. We conclude in section \\ref{sec:ccl}.\n \n\\section{Image Deconvolution in the Deep Learning Era}\n\\label{sec:deconv}\n\\subsection{Deconvolution before Deep Learning}\nThe standard deconvolution problem consists in solving the linear inverse problem \n$\\mathbf{Y} = \\mathbf{H} \\mathbf{X} + \\mathbf{N}$, where $ \\mathbf{Y}$ is the observed noisy data, $ \\mathbf{X}$ the unknown solution, \n$ \\mathbf{H}$ the matrix related to the PSF, and $ \\mathbf{N}$ the noise. Images $\\mathbf{Y}$, $\\mathbf{X}$ and $\\mathbf{N}$ are each represented by a column vector of $n_p$ pixels arranged in lexicographic order, with $n_p$ being the total number of pixels, and $\\mathbf{H}$ is an $n_p \\times n_p $ matrix.\nState-of-the-art deconvolution techniques typically solve this ill-posed inverse problem (i.e. 
with no unique and stable solution) through a modeling of the forward problem motivated by physics, and by adding a regularization penalty term $\\mathcal{R}\\left(\\mathbf{X} \\right)$, which can be interpreted as enforcing some constraints on the solution. This leads to minimizing:\n\\begin{equation}\n \\label{eq:Regularization}\n \\argmin\\limits_{\\mathbf{X}} \\frac{1}{2}|| \\mathbf{Y} - \\mathbf{H} \\mathbf{X} ||^2_F + \\mathcal{R}\\left(\\mathbf{X} \\right),\n\\end{equation}\nwhere $||\\cdot||_F$ is the Frobenius norm. The simplest (and historical) regularization is the Tikhonov regularization \\citep{Tikhonov1977,Hunt1972,Twomey1963}, where $\\mathcal{R}\\left(\\mathbf{X} \\right)\n$ is a quadratic term, $\\mathcal{R}\\left(\\mathbf{X} \\right) = \\frac{\\lambda}{2} ||\\mathbf{L} \\ensuremath{\\mb{X}}||^2_F$. \nThe closed-form solution of this inverse problem is given by:\n\\begin{equation}\n \\label{eq:TikhonovSolution}\n\\tilde{\\mathbf{X}} = \\left( \\mathbf{H}^T \\mathbf{H} +\\lambda \\mathbf{L}^T \\mathbf{L} \\right)^{-1} \\mathbf{H}^T \\mathbf{Y}\n \\end{equation}\nwhich involves the Tikhonov linear filter $\\left(\\ensuremath{\\mb{H}}^T \\ensuremath{\\mb{H}} + \\lambda \\ensuremath{\\mb{L}}^T \\ensuremath{\\mb{L}} \\right)^{-1} \\ensuremath{\\mb{H}}^T$. The simplest version is when $\\ensuremath{\\mb{L}} =\\boldsymbol{\\Id}$, which penalizes solutions with high energy. When the PSF is space-invariant, the matrix $ \\mathbf{H}$ is block circulant and the Tikhonov filter can then be applied as a simple convolution product. It is also easy to see that the Wiener deconvolution corresponds to a specific case of the Tikhonov filter. See \\citet{ima:bertero98} for more details.\n\n \\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.95}]{tikhonov_inversion_index19}\n \\caption{Deconvolution with Tikhonov regularization. From left to right: galaxy image from HST used for the simulation, observed galaxy at $\\ensuremath{\\mathrm{SNR}}=20$ (see below for our definition of SNR), and deconvolved image computed from Eq.~\\ref{eq:TikhonovSolution}.\n }\n \\label{fig:TikhonovExample}\n\\end{figure}\nThis rather crude deconvolution is illustrated in Fig.~\\ref{fig:TikhonovExample} in a low signal-to-noise ratio (SNR) scenario, displaying oversmoothing of the galaxy image, loss of energy in the recovered galaxy, and the presence of coloured noise due to the inverse filter. \n\nMost advanced methods are non-linear and generally involve iterative algorithms. There is a vast literature in image processing on advanced regularization techniques applied to deconvolution: adding some prior information on $\\ensuremath{\\mb{X}}$ in a Bayesian paradigm \\citep{BioucasDias2006,Krishnan2009, Orieux2010}, assuming $\\ensuremath{\\mb{X}}$ to belong to some class of images to recover (e.g. using total variation regularization \\citep{Oliveira2009,Cai2010}, sparsity in fixed representations \\citep{Starck2003,Pesquet2009,Pustelnik2016} or in representations learnt via dictionary learning \\citep{Mairal2008, Lou2011, Jia2011}), or constraining the solution to belong to some convex subsets (such as ensuring that the final galaxy image is positive). 
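\n\nTo make Eq.~\\ref{eq:TikhonovSolution} concrete, the following minimal Python\/\\texttt{numpy} sketch applies the Tikhonov filter with $\\ensuremath{\\mb{L}} =\\boldsymbol{\\Id}$ in Fourier space. It assumes a space-invariant PSF, centered and of the same size as the image, with periodic boundary conditions; the regularization weight \\texttt{lam} plays the role of $\\lambda$, and its default value is only an illustrative choice, not a tuned one.\n\\begin{verbatim}\nimport numpy as np\n\ndef tikhonov_deconvolve(y, psf, lam=0.01):\n    # H is diagonalized by the FFT for a space-invariant\n    # convolution with periodic boundaries, so the filter\n    # (H^T H + lam I)^{-1} H^T acts pointwise in Fourier space.\n    h = np.fft.fft2(np.fft.ifftshift(psf))\n    filt = np.conj(h) \/ (np.abs(h) ** 2 + lam)\n    return np.real(np.fft.ifft2(filt * np.fft.fft2(y)))\n\\end{verbatim}\nSetting $\\mathtt{lam}=0$ recovers the unstable inverse filter, while increasing it trades residual noise against the oversmoothing and energy loss visible in Fig.~\\ref{fig:TikhonovExample}.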
\n\nFor instance, a very efficient approach used for galaxy image deconvolution is based on sparse recovery, which consists in minimizing:\n\\begin{equation}\n \\begin{aligned}\n & \\argmin\\limits_{\\mathbf{X}} \\frac{1}{2}\\|\\mathbf{Y}-\\mathbf{H}\\mathbf{X} \\|_2^2 + \\lambda\\|\\boldsymbol{\\Phi}^T \\mathbf{X}\\|_1\n \\end{aligned}\n\\label{eq:sparsel2minpos}\n\\end{equation} \nwhere $\\boldsymbol{\\Phi}$ is a matrix related to a fixed transform (e.g. Fourier, wavelets, curvelets) or one that can be learned \nfrom the data or a training data set \\citep{starck:book15}. The $\\ell_1$ norm in the regularisation term is known \nto promote sparsity of the solution; see \\citet{starck:book15} for a review on sparsity. Sparsity was found to be extremely efficient for different \ninverse problems in astrophysics such as Cosmic Microwave Background (CMB) estimation \\citep{PR1_WPR1}, compact source estimation in CMB missions \\citep{Sureau2014}, weak lensing map recovery \\citep{glimpse2016} \nor radio-interferometry image reconstruction \\citep{starck:garsden2015}. In this work, we will compare our deconvolution techniques with such a sparse deconvolution approach.\n\nIterative convex optimization techniques have been devised to solve Eq.~\\ref{eq:sparsel2minpos} (see for instance \\citet{Beck2009,Zibulevsky2010,Combettes2011,Chambolle2011,Afonso2011,Condat2013,Combettes2014}), with well-studied convergence properties, but with a high computing cost when using adaptive representations for galaxies. This limitation opens the way to a new generation of methods. \n\n\\subsection{Toward Deep Learning}\nRecently, deep learning techniques have been proposed to solve inverse problems by taking advantage of the datasets collected and\/or the advances in simulations, including for deconvolving galaxy images. These approaches have proved to be able to learn complex mappings in the supervised setting, and to be computationally efficient once the model has been learned. We review here, without being exhaustive, some recent work on deconvolution using DNNs. We have identified three different \nstrategies for using DNNs in a deconvolution problem:\n\\begin{itemize}\n\\item{\\bf Learning the inverse: } the inverse convolution filter can be directly approximated using convolutional neural networks \\citep{Xu2014,Schuler2016}. In our application with space-variant deconvolution and known kernels, such complicated blind deconvolution is clearly not necessary and would require a large amount of data to try to learn information already provided by the physics included in the forward model. \n\\item{\\bf Post-processing of a regularized deconvolution:} In the early years of using sparsity for deconvolution, a two-step approach was proposed: first applying a simple linear deconvolution such as the pseudo-inverse or the Tikhonov filter, letting noise enter the solution, and then, in a second step, applying a sparse denoising (see the wavelet-vaguelette decomposition \\citep{Donoho1995,Kalifa2003}, more general regularization \\citep{Guerrero2006}, or the ForWaRD method \\citep{Neelamani2004}). Similarly, the second step has been replaced by denoising\/removing artefacts using a multi-layer perceptron \\citep{Schuler2013}, or more recently using U-Nets \\citep{Jin2017}. CNNs are well adapted to this task, since the form of a CNN mimics unrolled iterative approaches when the forward model is a convolution. 
\nIn another application, convolutional networks such as deep convolutional framelets have also been applied to remove artefacts from reconstructed CT images \\citep{Ye2018}. One advantage of such a decoupled approach is the ability to quickly process a large amount of data once the network has been learnt, provided the chosen deconvolution has a closed-form expression. \n\\item{\\bf Iterative Deep Learning: } the third strategy uses iterative approaches, often derived from convex optimization, coupled with deep learning networks. Several schemes have been devised to solve generic inverse problems. The first option, called unrolling or unfolding (see \\citet{Monga2019} for a detailed review), is to mimic a few iterations of an iterative algorithm with DNNs so as to capture in the learning phase the impact of 1) the prior \\citep{Mardani2017}, 2) the hyperparameters \\citep{Mardani2017,Adler2017,Adler2018,Bertocchi2019}, 3) the updating step of a gradient descent \\citep{Adler2017} or 4) the whole update process \\citep{Gregor2010,Adler2018,Mardani2018}. Such approaches allow fast approximation of iterative algorithms \\citep{Gregor2010}, better hyperparameter selection \\citep{Bertocchi2019} and\/or provide in a supervised way new algorithms \\citep{Adler2017,Mardani2018,Adler2018} better adapted to processing a specific data set. This approach has notably been used recently for blind deconvolution \\citep{Li2019}.\nFinally, an alternative is to use iterative proximal algorithms from convex optimization (for instance in the framework of the alternating direction method of multipliers plug\\&play (ADMM PnP) \\citep{Venkatakrishnan2013,Sreehari2016, Chan2016}, or regularization by denoising \\citep{Romano2017,Reehorst2018}), where the proximity operator related to the prior is replaced by a DNN \\citep{Meinhardt2017,Bigdeli2017,Gupta2018} or a series of DNNs trained in different denoising settings as in \\citet{Zhang2017}. \n\\end{itemize}\nThe last two strategies are therefore better adapted to our targeted problem, and in the following we will investigate how they could be applied and how they perform compared to state-of-the-art methods in space-variant deconvolution of galaxy images. \n\n\\subsection{Discussion relative to Deep Deconvolution and Sparsity}\nIt is interesting to notice that connections exist between the sparse recovery methodology and DNNs:\n\\begin{itemize}\n\\item{\\textit{Learning Invariants}}: the first features learnt in convolutional deep neural networks typically correspond to edges at particular orientations and locations in the images \\citep{Lecun2015}, which is also what wavelet transforms extract at different scales. \nSimilar observations were made for features learnt with a CNN in the context of cosmological parameter estimation from weak-lensing convergence maps \\citep{Ribli2018}. Likewise, understanding mathematically how the architecture of such networks captures progressively more powerful invariants can be approached via wavelets and their use in the wavelet scattering transform \\citep{Mallat2016}.\n\\item{\\textit{Learned proximal operator}:} \\citet{Meinhardt2017} have shown that using a denoising neural network instead of a proximal operator (e.g. soft-thresholding in wavelet space in sparse recovery) during the minimisation iterations improves the deconvolution performance. They also claim that the noise level used to train the neural network behaves like the regularisation parameter in sparse recovery. 
The convergence of the algorithm is not guaranteed anymore, but they observed experimentally that their algorithms stabilize, and they expressed the corresponding fixed points.\n\\item{\\textit{expanding path} and \\textit{contracting path}:} the two parts of the U-Net are very similar to the synthesis and analysis concepts in sparse representations. This has motivated the use of wavelets in the U-Net to implement average pooling and unpooling in the expanding path \\citep{ye2018deep,han2018framing}. Another connection can be made with the soft autoencoder of \\citet{fan2018soft}, which introduces a pair of ReLU units emulating soft-thresholding, accentuating the comparison with cascaded wavelet shrinkage systems. \n\\end{itemize}\nTherefore, we observe exchanges between the two fields, in particular for U-Net architectures, with however significant differences, such as the construction of a very rich dictionary in U-Nets, made possible by the use of a large training data set, as well as non-linearities at every layer, essential to capture invariants in the learning phase. \n \n\n\\section{Image Deconvolution with Space Variant PSF}\n\\label{sect_svdeconv}\n\\subsection{Introduction}\nIn the case of a space-variant deconvolution problem, we can write the same deconvolution equation as before, $\\ensuremath{\\mb{Y}} = \\ensuremath{\\mb{H}} \\ensuremath{\\mb{X}} + \\ensuremath{\\mb{N}}$, but $\\ensuremath{\\mb{H}}$ is not block circulant anymore, and manipulating such a huge matrix is not possible in practice. \nAs in \\citet{Farrens2017}, we consider instead an {\\em Object-Oriented Deconvolution}, by first detecting $n_g$ galaxies with $n_p$ pixels each and then deconvolving each object independently using the PSF at the position of the center of the galaxy. We use the following definitions: \n the observations of $n_g$ galaxies with $n_p$ pixels are collected in $\\ensuremath{\\mb{Y}}\\in\\mathbb{R}^{n_p\\times n_g}$ (as before, each galaxy being represented by a column vector arranged in lexicographic order), the galaxy images to recover are similarly collected in $\\ensuremath{\\mb{X}}=[\\ensuremath{\\mb{x}}_\\mb{i}]_{i=1..n_g}\\in\\mathbb{R}^{n_p\\times n_g}$, and the convolution operator with the different kernels is denoted $\\ensuremath{\\mathcal{H}}$. It corresponds to applying in parallel a convolution matrix $\\ensuremath{\\mb{H}}_\\mb{i}$ to each galaxy $\\ensuremath{\\mb{x}}_\\mb{i}$ ($\\ensuremath{\\mb{H}}_\\mb{i}$ being typically a block circulant matrix with circulant blocks after the zero padding we perform on the images \\citep{Andrews77}). The noise is denoted $\\ensuremath{\\mb{N}}\\in\\mathbb{R}^{n_p\\times n_g}$ as before and is assumed to be additive white Gaussian noise. With these definitions, we now have \n\\begin{equation}\n \\label{eq:forwardModel}\n \\ensuremath{\\mb{Y}} = \\ensuremath{\\mathcal{H}}(\\ensuremath{\\mb{X}})+\\ensuremath{\\mb{N}}\n\\end{equation}\nor more precisely\n\\begin{equation}\n \\label{eq:forwardModelDevelop}\n \\left\\{\\ensuremath{\\mb{y}}_\\mb{i} = \\ensuremath{\\mb{H}}_\\mb{i}\\ensuremath{\\mb{x}}_\\mb{i}+\\ensuremath{\\mb{n}}_\\mb{i}\\right\\}_{i=1..n_g}\n,\\end{equation}\nfor block circulant $\\left\\{\\ensuremath{\\mb{H}}_\\mb{i}\\right\\}_{i=1..n_g}$, which illustrates that we consider multiple local space-invariant convolutions in our model (ignoring the very small variations of the PSF at the scale of the galaxy, as done in practice \\citep{Kuijken2015,Mandelbaum2015,Zuntz2018}). 
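\nAs an illustration of this object-oriented forward model, the operator $\\ensuremath{\\mathcal{H}}$ of Eq.~\\ref{eq:forwardModel} can be sketched in a few lines of Python\/\\texttt{numpy}. This is only a minimal sketch (the function name and array layout are ours), assuming the $n_g$ stamps are stacked along the first axis, that each local PSF is centered and of the same size as its zero-padded stamp, and that boundaries are periodic:\n\\begin{verbatim}\nimport numpy as np\n\ndef apply_H(X, psfs):\n    # X, psfs: arrays of shape (n_g, n, n). Each stamp X[i] is\n    # convolved with its local PSF psfs[i] via the FFT, i.e. by\n    # a block circulant H_i as in the per-galaxy model above.\n    Y = np.empty_like(X)\n    for i in range(X.shape[0]):\n        h = np.fft.fft2(np.fft.ifftshift(psfs[i]))\n        Y[i] = np.real(np.fft.ifft2(h * np.fft.fft2(X[i])))\n    return Y\n\\end{verbatim}\nThe adjoint $\\ensuremath{\\mathcal{H}}^T$, needed for the gradient of the data fidelity term, is obtained by simply replacing \\texttt{h} by its complex conjugate.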
\nThe deconvolution problem of finding $\\ensuremath{\\mb{X}}$ knowing $\\ensuremath{\\mb{Y}}$ and $\\ensuremath{\\mathcal{H}}$ is therefore considered as a series of independent ill-posed inverse problems. To avoid having multiple solutions (due to a non-trivial null space of $\\left\\{\\ensuremath{\\mb{H}}_\\mb{i}\\right\\}_{i=1..n_g}$) or an unstable solution (ill-conditioning of these matrices), we need to regularize the problem as in standard deconvolution approaches developed for space-invariant convolutions. This amounts to solving the following inverse problem:\n\\begin{equation}\n \\label{eq:SVRegularization}\n \\argmin\\limits_{\\ensuremath{\\mb{X}}} \\frac{1}{2}||\\ensuremath{\\mb{Y}}- \\ensuremath{\\mathcal{H}}(\\ensuremath{\\mb{X}})||^2_F + \\mathcal{R}\\left(\\ensuremath{\\mb{X}}\\right)\n\\end{equation}\nand in general we will choose separable regularizers so that we can handle the different deconvolution problems in parallel:\n\\begin{equation}\n \\label{eq:RegularizationDevelop}\n \\left\\{\\argmin\\limits_{\\ensuremath{\\mb{x}}_\\mb{i}} \\frac{1}{2}||\\ensuremath{\\mb{y}}_\\mb{i}- \\ensuremath{\\mb{H}}_\\mb{i}\\ensuremath{\\mb{x}}_\\mb{i}||^2_2 + \\mathcal{R}\\left(\\ensuremath{\\mb{x}}_\\mb{i}\\right)\\right\\}_{i=1..n_g}\n\\end{equation}\n\n\\citet{Farrens2017} propose two methods to perform this deconvolution:\n\\begin{itemize}\n\\item{\\bf Sparse prior:} each galaxy is assumed to be sparse in the wavelet domain, leading to the minimization \n\\begin{equation}\n \\begin{aligned}\n & \\underset{\\mathbf{X}}{\\text{argmin}}\n & \\frac{1}{2}\\|\\mathbf{Y}-\\mathcal{H}(\\mathbf{X})\\|_2^2 + \n \\|\\mathbf{W}^{(k)}\\odot\\Phi(\\mathbf{X})\\|_1\n & & \\text{s.t.}\n & & \\mathbf{X} \\ge 0\n \\end{aligned}\n \\label{eq:rw_l1}\n\\end{equation}\nwith $\\mathbf{W}^{(k)}$ a weighting matrix.\n\\item{\\bf Low rank prior:} In the above method, each galaxy is deconvolved independently of the others. \nAs there are many similarities between galaxy images, \\citet{Farrens2017} propose a joint restoration process in which the matrix\n$\\mathbf{X}$ has low rank. This is enforced by adding a nuclear norm penalization instead of the sparse regularization, as follows:\n\\begin{equation}\n \\begin{aligned}\n & \\underset{\\mathbf{X}}{\\text{argmin}}\n & \\frac{1}{2}\\|\\mathbf{Y}-\\mathcal{H}(\\mathbf{X})\\|_2^2 + \\lambda\\|\\mathbf{X}\\|_*\n & & \\text{s.t.}\n & & \\mathbf{X} \\ge 0\n \\end{aligned}\n \\label{eq:lowrl2minpos}\n\\end{equation} \nwhere \n$ \\|\\mathbf{X}\\|_* = \\sum_{k} \\sigma_k(\\mathbf{X})$, $\\sigma_k(\\mathbf{X})$ denoting the $k^{\\text{th}}$ largest singular value of $\\mathbf{X}$. \n\\end{itemize}\nIt was shown that the second approach outperforms sparsity techniques as soon as the number of galaxies in the field is larger than 1000 \\citep{Farrens2017}. \n\n\\subsection{Neural Network architectures}\n\\label{subsec:architecture}\nDNNs allow us to extend the previous low-rank minimisation by taking advantage of existing databases and learning more features from the data in a supervised way, compared to what can be captured by the simple SVD used for the nuclear norm penalization. \nThe choice of network architecture is crucial for performance. 
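For reference, the SVD-based step underlying the nuclear norm penalization is the proximity operator of $\\lambda\\|\\cdot\\|_*$, which soft-thresholds the singular values; a minimal sketch (our notations) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef nuclear_prox(X, lam):\n    # Singular value thresholding: prox of lam * ||X||_*\n    # X has shape (n_p, n_g), one galaxy per column.\n    U, s, Vt = np.linalg.svd(X, full_matrices=False)\n    s = np.maximum(s - lam, 0.0)  # soft-threshold the spectrum\n    return (U * s) @ Vt\n\\end{verbatim}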
We have identified three different features we believe to be important for our application: 1) the forward model and the task imply that the network should be translation equivariant, 2) the model should include some multi-scale processing, based on the fact that we should be able to capture distant correlations, and 3) the model should minimize the number of trainable parameters for a given performance, so as to be efficient (lower GPU memory consumption), which is also important to ease the learning. \nFortunately, these objectives are not contradictory: the first consideration leads to the use of convolutional layers, while the second implies a structure such as the U-Net \\citep{Ronneberger2015}, already used to solve inverse problems \\citep{Jin2017}, or the deep convolutional framelets \\citep{Ye2018}. Because such architectures rapidly increase the receptive field of the layers along the network, they can compete, with a smaller number of parameters, against CNNs having a larger number of layers and therefore more parameters.\n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=0.5\\linewidth]{denseunet}\n \\caption{DNN model used in this work. The global architecture is a U-Net, with small modifications for performance and to limit the number of model parameters.}\n \\label{fig:DenseNet}\n\\end{figure}\n\nWe therefore selected a global U-Net structure as in \\citep{Jin2017}, but including the following modifications:\n\\begin{itemize}\n\\item{\\bf 2D separable convolutions:} we replace 2D convolutions by 2D separable convolutions \\citep{Chollet2016}. Separable convolutions limit the number of parameters in the model by assuming that spatial correlations and correlations across feature maps can be captured independently. Their use has already led to architectures that outperform networks with non-separable convolutions and a larger number of parameters \\citep{Chollet2016}. \n\\item{\\bf Dense blocks:} we replaced the convolutional layers at each \"scale\" with dense blocks \\citep{Huang2017}. \nDense blocks also reduce the number of parameters by propagating, through concatenation, all prior feature maps to the input of the current layer. This has been claimed to enable feature reuse, preserve information, and limit vanishing gradients during learning. \n\\item{\\bf Average-pooling:} we changed the pooling step: we observed that max-pooling leads to over-segmentation of our final estimates, which is alleviated by the use of average pooling.\n\\item{\\bf Skip connection:} we removed the skip connection between the input and the output layers introduced by \\citet{Jin2017}, which proved to be detrimental to the performance of the network, especially at low SNR. Note that the dense blocks may also have better preserved the flow of relevant information and limited the benefit of residual learning. \n\\end{itemize}\nThe first two modifications significantly limit the number of parameters per \"scale\" of the U-Net, and potentially allow more scales to be used for a given budget of trainable parameters. Our network, which we name \"XDense U-Net\", is displayed in Fig.~\\ref{fig:DenseNet}. 
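As an illustration, a minimal Keras sketch of the dense block of separable convolutions used at each \"scale\" is given below (the growth factor of 12 corresponds to our implementation described in Sect.~\\ref{sec:proposedDL}; the function name is ours):\n\\begin{verbatim}\nfrom tensorflow.keras import layers\n\ndef dense_block(x, n_layers, growth=12):\n    # Each layer produces `growth` new feature maps from the\n    # concatenation of all previous feature maps.\n    for _ in range(n_layers):\n        h = layers.BatchNormalization()(x)\n        h = layers.Activation(\"relu\")(h)\n        h = layers.SeparableConv2D(growth, 3, padding=\"same\")(h)\n        x = layers.Concatenate()([x, h])\n    return x\n\\end{verbatim}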
The following describes how to use such networks in two different ways in order to perform the space-variant deconvolution.\n\n\n\\subsection{Tikhonet: Tikhonov deconvolution post-processed by a Deep Neural Network}\n\\label{subsec:Tikhonet}\nThe Tikhonov solution for the deconvolution with a space-variant PSF is:\n\\begin{equation}\n \\label{eq:TikhonovSVRegularization}\n \\argmin\\limits_{\\ensuremath{\\mb{X}}} \\frac{1}{2}||\\ensuremath{\\mb{Y}}- \\ensuremath{\\mathcal{H}}(\\ensuremath{\\mb{X}})||^2_F + ||\\ensuremath{\\mathcal{L}}(\\ensuremath{\\mb{X}})||^2_F\n\\end{equation}\nwhere $\\ensuremath{\\mathcal{L}}$ is built similarly to $\\ensuremath{\\mathcal{H}}$.\nThe closed-form solution of this linear inverse problem is given by:\n\\begin{equation}\n \\label{eq:SVTikhonovSolution}\n \\left\\{\\tilde{\\ensuremath{\\mb{x}}_\\mb{i}} = \\left(\\ensuremath{\\mb{H}}^T_\\mb{i}\\ensuremath{\\mb{H}}_\\mb{i}+\\lambda_i\\ensuremath{\\mb{L}}^T_\\mb{i}\\ensuremath{\\mb{L}}_\\mb{i} \\right)^{-1} \\ensuremath{\\mb{H}}_\\mb{i}^T \\ensuremath{\\mb{y}}_\\mb{i}\\right\\}_{i=1..n_g}\n \\end{equation}\nwhich involves a different Tikhonov filter $\\left(\\ensuremath{\\mb{H}}^T_\\mb{i}\\ensuremath{\\mb{H}}_\\mb{i}+\\lambda_i\\ensuremath{\\mb{L}}^T_\\mb{i}\\ensuremath{\\mb{L}}_\\mb{i} \\right)^{-1} \\ensuremath{\\mb{H}}_\\mb{i}^T$ for each galaxy. In this work, we chose $\\ensuremath{\\mb{L}}_\\mb{i}=\\boldsymbol{\\Id}$, and the regularization parameter $\\lambda_i$ is different for each galaxy, depending on its SNR (see Sect.~\\ref{sec:proposedDL} for more details).\nThe final estimate is then simply: \n\\begin{equation}\n \\label{eq:TikhoPredict}\n\\left\\{\\hat{\\ensuremath{\\mb{x}}_\\mb{i}}=\\ensuremath{\\mathcal{N}_{\\boldsymbol{\\theta}}}(\\tilde{\\ensuremath{\\mb{x}}_\\mb{i}})\\right\\}_{i=1..n_g},\n\\end{equation}\nwhere the predictions of the neural network with parameters $\\boldsymbol{\\theta}$ for some inputs $\\ensuremath{\\mb{Y}}$ are written as $\\ensuremath{\\mathcal{N}_{\\boldsymbol{\\theta}}}(\\ensuremath{\\mb{Y}})$. \n\nThe success of the first approach therefore lies in the supervised learning of the mapping between the Tikhonov deconvolution of Eq.~\\eqref{eq:SVTikhonovSolution} and the targeted galaxy.\n\nWe call this two-step approach {\\bf \"Tikhonet\"}, and the rather simple training process is described in Algorithm~\\ref{Algo:Tikhonet}.\n\n\\begin{algorithm}\n \\caption{DNN training in the Tikhonet approach \\label{Algo:Tikhonet}}\n \\begin{algorithmic}[1]\n \\STATE \\textbf{Initialization}: Prepare noise-free training set, choose noise parameters (SNR range) and validation set. 
Choose the architecture for network $\\mathcal{N}$, the learning parameters (optimizer and its parameters, batch size $B$ and number of batches $n_{batch}$, number of epochs $n_{epoch}$) and the cost function to minimize (here the mean squared error).\n \\FOR[\\textbf{Loop over epochs}]{$n=1$ to $n_{epoch}$}\n \t\\FOR[\\textbf{Loop over batches}]{$b=1$ to $n_{batch}$}\n \t\t\\FOR[\\textbf{Loop over galaxies in batch}]{$i=1$ to $B$}\n \t\t\t\\STATE Add random noise to obtain a realization in the SNR range chosen\n \t\t\t\\STATE Compute the Tikhonov solution $\\tilde{\\ensuremath{\\mb{x}}_\\mb{i}}$ using Eq.~\\ref{eq:SVTikhonovSolution}\n \t\t\\ENDFOR\n \t\t\\STATE Predict $\\hat{\\ensuremath{\\mb{x}}_\\mb{i}}$ (Eq.~\\ref{eq:TikhoPredict}) and update network parameters $\\boldsymbol{\\theta}$ according to the cost function.\n \t\\ENDFOR\n \\ENDFOR\n \\RETURN $\\ensuremath{\\mathcal{N}_{\\boldsymbol{\\theta}}}$\n \\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{ADMMnet: Deep neural networks as constraint in ADMM plug-and-play}\n\\label{subsec:ADMMnet}\nThe second approach we investigated uses the ADMM PnP framework with a DNN. \n\nADMM is an augmented Lagrangian technique developed to solve convex problems under linear equality constraints (see for instance \\citep{Boyd2010}). It operates by decomposing the minimization problem into sub-problems solved sequentially. One iteration consists in first solving a minimization problem typically involving the data fidelity term, then solving a second minimization problem involving the regularization term, and finally updating the dual variable. \n\nIt has previously been noted \\citep{Venkatakrishnan2013,Sreehari2016, Chan2016} that the first two sub-steps can be interpreted as an inversion step followed by a denoising step, coupled via the augmented Lagrangian term and the dual variable. These authors suggested using such an ADMM structure with non-linear denoisers in the second step, in an approach dubbed ADMM PnP, which recent work has proposed to implement via DNNs \\citep{Meinhardt2017}. \n\nIn the following, we adopt such an iterative approach based on ADMM PnP because 1) it separates the inversion step from the use of the DNN, offering the flexibility to add extra convex constraints to the cost function that can be handled with convex optimization; 2) it alleviates the cost of learning by focusing essentially on learning a denoiser or a projector (fewer networks and fewer parameters to learn jointly compared to unfolding approaches, where each iteration corresponds to a different network); 3) by iterating between the steps, the output of the network is propagated to the forward model to be compared with the observations, avoiding large discrepancies, contrary to the Tikhonet approach, where the output of the network is not used in a likelihood.\n\n\nThe training of the network $\\ensuremath{\\mathcal{N}_{\\boldsymbol{\\theta}}}$ in this case is similar to Algorithm~\\ref{Algo:Tikhonet}, except that the noise-free training set is composed of noise-free target images instead of noise-free convolved images, and the noise added has a constant standard deviation. The algorithm for deconvolving a galaxy, derived from \\cite{Chan2016}, is then presented in Algo.~\\ref{Algo:ADMMPnP}. The application of the network is highlighted in red. We call this approach {\\bf \"ADMMnet\"}. 
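Before detailing the individual steps, the overall iteration can be sketched as follows (a simplified skeleton of Algorithm~\\ref{Algo:ADMMPnP}, omitting the stopping test and the continuation scheme on $\\rho$, both specified below; names are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef admm_pnp(y, x_update, denoiser, rho, n_it=50):\n    # x_update solves the data-fidelity sub-problem (via FISTA);\n    # denoiser is the trained network N_theta, used in place\n    # of a proximity operator.\n    x = np.zeros_like(y); z = np.zeros_like(y); mu = np.zeros_like(y)\n    for _ in range(n_it):\n        x = x_update(y, z - mu, rho)  # deconvolution sub-problem\n        z = denoiser(x + mu)          # \"denoising\" sub-problem\n        mu = mu + (x - z)             # dual variable update\n    return x\n\\end{verbatim}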
The first step consists in solving the following regularized deconvolution problem at iteration $k$ using the accelerated iterative convex algorithm FISTA \\citep{Beck2009}:\n\\begin{equation}\n \\label{eq:RegularizationADMM}\n \\left\\{\\argmin\\limits_{\\ensuremath{\\mb{x}}_\\mb{i}} \\frac{1}{2\\sigma^2}||\\ensuremath{\\mb{y}}_\\mb{i}- \\ensuremath{\\mb{H}}_\\mb{i}\\ensuremath{\\mb{x}}_\\mb{i}||^2_2 + \\iota_\\mathcal{C}(\\ensuremath{\\mb{x}}_\\mb{i}) + \\frac{\\rho}{2}||\\ensuremath{\\mb{x}}_\\mb{i}-\\ensuremath{\\mb{z}}^{(k)}_\\mb{i}+\\boldsymbol{\\mu}^{(k)}||^2_2 \\right\\}_{i=1..n_g}\n\\end{equation}\nwhere $\\iota_\\mathcal{C}$ is the characteristic function of the non-negative orthant, which enforces the non-negativity of the solution. The DNN in the second step is used in analogy with a denoiser (or as a projector), as presented above. The last step controls the augmented Lagrangian parameter and ensures that this parameter is increased when the optimization variables are not changing sufficiently. This continuation scheme is also important, as noted in \\citet{Chan2016}, since progressively increasing the influence of the augmented Lagrangian parameter ensures stabilization of the algorithm.\n\nNote that, of course, there is no convergence guarantee for such a scheme and that, contrary to the convex case, the augmented Lagrangian parameter $\\rho$ is expected to impact the solution.\n \nFinally, because the target galaxy is obtained after re-convolution with a target PSF to avoid aliasing (see Sect.~\\ref{sec:XPDesign}), we also re-convolve the ADMMnet solution with this target PSF to obtain our final estimate.\n\n\\begin{algorithm}\n \\caption{Proposed ADMM Deep Plug\\&Play for deconvolution of a galaxy image\\label{Algo:ADMMPnP}}\n \\begin{algorithmic}[1]\n \\STATE \\textbf{Initialize}: set $\\rho_0,\\rho_{max},\\eta\\in[0,1),\\gamma>1,\\Delta_0=0$, $\\mathbf{X}^{(0)}=\\mb{0},\\mathbf{Z}^{(0)}=\\mb{0},\\boldsymbol{\\mu}^{(0)}=\\mb{0}, \\epsilon$ \n \\FOR[\\textbf{Main Loop}]{$k=0$ to $N_{it}$}\n \\STATE \\textbf{Deconvolution Sub-Problem}: $\\mathbf{X}^{(k+1)}=FISTA(\\textbf{Y},\\mathbf{X}^{(k)},\\mathbf{Z}^{(k)},\\boldsymbol{\\mu}^{(k)},\\rho_k)$ \n \\STATE \\textbf{\"Denoising\" Sub-Problem}: $\\mathbf{Z}^{(k+1)}=\\textcolor{red}{\\ensuremath{\\mathcal{N}_{\\boldsymbol{\\theta}}}}\\left(\\mathbf{X}^{(k+1)}+\\boldsymbol{\\mu}^{(k)}\\right)$ \n \\STATE \\textbf{Lagrange Multiplier Update}: $\\boldsymbol{\\mu}^{(k+1)}=\\boldsymbol{\\mu}^{(k)}+\\left(\\mathbf{X}^{(k+1)}-\\mathbf{Z}^{(k+1)}\\right)$\n \\STATE $\\Delta_{k+1}=\\frac{1}{\\sqrt{n}}\\left(||\\mathbf{X}^{(k+1)}-\\mathbf{X}^{(k)}||_2+||\\mathbf{Z}^{(k+1)}-\\mathbf{Z}^{(k)}||_2+||\\boldsymbol{\\mu}^{(k+1)}-\\boldsymbol{\\mu}^{(k)}||_2\\right)$\n \\IF{$\\Delta_{k+1}\\geq \\eta \\Delta_k$ \\AND $\\rho_{k+1}\\leq \\rho_{max}$} \n \\STATE $\\rho_{k+1}=\\gamma\\rho_k$\n \\ELSE\n \\STATE $\\rho_{k+1}=\\rho_k$\n \\ENDIF\n \\IF{$\\|\\mathbf{Z}^{(k+1)}-\\mathbf{X}^{(k+1)}\\|_2<\\epsilon$}\n \t\t\\STATE \\textbf{stop}\n \\ENDIF \n\n\\ENDFOR\n\\RETURN $\\left\\{\\mathbf{X}^{(k+1)}\\right\\}$\n \\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Implementation and choice of parameters for Network architecture}\n\\label{sec:proposedDL}\n\nWe describe here our practical choices for the implementation of the algorithms.\nFor the Tikhonet, the hyperparameter $\\lambda_i$ that controls the balance between the data fidelity term and the quadratic regularization in Eq.~\\ref{eq:SVTikhonovSolution} needs to be set for each galaxy. 
This can be done either manually, by selecting an optimal value as a function of an estimate of the SNR, or by using automated procedures such as generalized cross-validation (GCV) \\citep{Golub1979}, the L-curve method \\citep{Hansen1993}, the Morozov discrepancy principle \\citep{Engl1996}, various Stein Unbiased Risk Estimate (SURE) minimization approaches \\citep{Eldar2009,Pesquet2009,Deledalle2014}, or a hierarchical Bayesian framework \\citep{Orieux2010,Pereyra2015}. We compared these approaches and report the results obtained by the SURE prediction risk minimization, which, along with the GCV approach, led to the best results.\n \nFor the ADMM, the parameters $\\rho_{0}$, $\\rho_{max}$, $\\eta$, $\\epsilon$ and $\\gamma$ have been selected manually, as a balance between quickly stabilizing the algorithm (in particular, high $\\rho$) and favouring the minimization of the data fidelity term in the first steps (low $\\rho$). We investigated in particular the choice of $\\rho_{0}$, which illustrates how the continuation scheme impacts the solution.\n\nThe DNNs were coded in Keras\\footnote{\\url{https:\/\/keras.io}} with TensorFlow\\footnote{\\url{https:\/\/www.tensorflow.org}} as backend. For the proposed XDense U-Net, 4 scales were selected with an increasing number of layers for each scale (to capture distant correlations). Each separable convolution was composed of $3\\times 3$ spatial filters, and a growth factor of 12 was selected for the dense blocks. The total number of trainable parameters was 184301. \nWe also implemented a \"classical\" U-Net to test the efficiency of the proposed XDense U-Net architecture. For this U-Net, we chose 3 scales with 2 layers per scale and 20 feature maps per layer in the first scale, to end up with 206381 trainable parameters ($12\\%$ more than the XDense U-Net implementation). \nIn both networks we used batch normalization and rectified linear units for the activation. For our proposed approach, we also tested weighted sigmoid activations (or swish, \\citet{Elfwing2018}), which seemed to slightly improve the results but at the cost of an increased computational burden; we therefore did not use them in the following results.\n\nIn the training phase, we used 20 epochs, a batch size of 32, and the Adam optimizer (with default parameters) to minimize the mean squared error (MSE) cost function. After each epoch, we saved the network parameters only if they improved the MSE on the validation set.\n\n\n\\section{Experiments}\n\\label{sec:XPDesign}\nIn this section, we describe how we generated the simulations used for learning the networks and testing our deconvolution schemes, as well as the criteria we will use to compare the different deconvolution techniques.\n\n\\subsection{Dataset generation}\nWe use GalSim\\footnote{\\url{https:\/\/github.com\/GalSim-developers\/GalSim}} \\citep{Rowe2015} to generate realistic images of galaxies for training our networks and testing our deconvolution approaches. We essentially follow the approach used in GREAT3 \\citep{Mandelbaum2014} to generate the realistic space branch from high-resolution HST images, but choose the PSFs from a set of 600 Euclid-like PSFs (the same as in \\citet{Farrens2017}). The process is illustrated in Fig.~\\ref{fig:Galsim}. \n\nAn HST galaxy is randomly selected from the set of about 58000 galaxies used in the GREAT3 challenge and deconvolved with its PSF; a random shift (drawn from a uniform distribution in $[-1,1]$ pixel), rotation, and shear are then applied. 
The same cut in SNR is performed as in GREAT3 \\citep{Mandelbaum2014}, so as to obtain a realistic set of galaxies that would be observed in an SNR range $[20,100]$ when the noise level is as in GREAT3. In this work we use the same definition of SNR as in this challenge:\n\\begin{equation}\n \\label{eq:SNR}\n\\ensuremath{\\mathrm{SNR}}\\left(\\ensuremath{\\mb{X}}_\\mb{i}\\right)=\\frac{||\\ensuremath{\\mb{X}}_\\mb{i}||_2}{\\sigma}\n \\end{equation}\nwhere $\\sigma$ is the standard deviation of the noise. This SNR corresponds to an optimistic SNR for detection when the galaxy profile $\\ensuremath{\\mb{X}}_\\mb{i}$ is known. In other (experimental) definitions, the minimal SNR is indeed closer to 10, similar to what is usually considered in weak lensing studies \\citep{Mandelbaum2014}.\n\nIf the cut in SNR is passed, to obtain the target image on a $96\\times96$ grid with pixel size $0.05''$, we first convolve the HST deconvolved galaxy image with a Gaussian PSF with $FWHM=0.07''$ to ensure no aliasing occurs after the subsampling. To simulate the observed galaxy without extra noise, we convolve the HST deconvolved image with a PSF randomly selected among about 600 Euclid-like PSFs (the same set as used in \\citet{Farrens2017}). Note that the same galaxy rotated by $90\\degree$ is also simulated, as in GREAT3.\n\nBecause we use real HST galaxies as inputs, noise from the HST images propagates to our target and observed images, and is coloured by the deconvolution\/reconvolution process. We did not want to denoise the original galaxy images, to avoid losing substructures in the target images (and making them less \"realistic\"), and as this noise level is lower than the noise added in our simulations we expect it to only marginally change our results, and not the ranking of the methods. \n\nThis process is repeated so that we end up with about 210000 simulated observed galaxies and their corresponding targets. For the learning, 190000 galaxies are employed, and 10000 for the validation set. The extra 10000 are used for testing our approaches.\\\\\n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.75}]{Galsim_Simulations}\n \\caption{Set up for a GalSim simulated realistic galaxy. In the upper branch we obtain the targeted galaxy. In the lower branch, we simulate the corresponding Euclid-like observed galaxy. Note that in these figures, a log-scale was adopted for the PSFs to illustrate their complicated structure. }\n \\label{fig:Galsim}\n\\end{figure}\n\nIn the learning phase, additive white Gaussian noise is added to the galaxy batches, with a standard deviation chosen so as to obtain a galaxy in a prescribed SNR range. For the Tikhonet, we randomly choose for each galaxy in the batch an $\\ensuremath{\\mathrm{SNR}}$ in the range $[20,100]$, which corresponds to selecting galaxies from the limit of detection to galaxies with observable substructures, as illustrated in Fig.~\\ref{fig:SNRRange}. For the ADMMnet, we learn a denoising network for a constant noise standard deviation of $\\sigma=0.04$ (the same level as in GREAT3).\n\nWe then test the relative performance of the different approaches on a test set for fixed values $\\ensuremath{\\mathrm{SNR}}\\in\\{20,40,60,80,100\\}$, to better characterize (and discriminate) them, and for a fixed standard deviation of $\\sigma=0.04$, corresponding to what was simulated in GREAT3 for the real galaxy space branch, to obtain results on a representative observed galaxy set. 
The corresponding distribution of SNR in the last scenario is represented in Fig.~\\ref{fig:histoSNR}. All the techniques are compared on exactly the same test sets.\n\n\nFor the ADMMnet approach, when testing at different SNRs, we need to adjust the noise level in the galaxy images to the level of noise in the learning phase. We therefore rescale the galaxy images to reach this targeted noise level, based on a noise level estimation in the images. This is performed via a robust standard procedure based on computing the median absolute deviation in the wavelet domain (using orthogonal Daubechies wavelets with 3 vanishing moments).\n\n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.95}]{comparison_SNR_gal0x0}\n \\caption{Range of SNR used for the training and for testing in the simulations. From left to right: targeted galaxy image, then observed convolved images at increasing SNR. In our definition, $\\ensuremath{\\mathrm{SNR}}=20$ is barely at the galaxy detection limit, while at $\\ensuremath{\\mathrm{SNR}}=100$ galaxy substructures can be visualized.}\n \\label{fig:SNRRange}\n\\end{figure}\n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.45}]{histogram_gal_cst_noise}\n \\caption{Distribution of SNR of simulated galaxies for constant noise simulations ($\\sigma=0.04$). The peak of the distribution is at about $\\ensuremath{\\mathrm{SNR}}=30$, and the mean SNR is $\\ensuremath{\\mathrm{SNR}}=54$.}\n \\label{fig:histoSNR}\n\\end{figure}\n\n\n\\subsection{Quality criteria}\nThe performance of the deconvolution schemes is measured according to two different criteria, related to pixel errors and shape measurement errors.\nFor the pixel error we select a robust estimator:\n\\begin{equation}\n\\mathrm{P_{err}}\\left(\\widehat{\\ensuremath{\\mb{X}}}\\right)=\\mathrm{MED}\\left(\\frac{\\|\\widehat{\\ensuremath{\\mb{x}}_\\mathbf{i}}-\\ensuremath{\\mb{x}}^{(t)}_\\mathbf{i}\\|^2_2}{\\|\\ensuremath{\\mb{x}}^{(t)}_\\mathbf{i}\\|_2^2}\\right)_{i=1..n_g}\n\\end{equation}\nwhere $\\ensuremath{\\mb{x}}^{(t)}_\\mathbf{i}$ is the targeted value, and $\\mathrm{MED}$ is the median over the relative mean squared errors computed for each galaxy $\\ensuremath{\\mb{x}}_\\mathbf{i}$ in the test set, in a central window of $41\\times 41$ pixels common to all approaches.\n\nFor shape measurement errors, we compute the ellipticity using a KSB approach implemented in shapelens\\footnote{\\url{https:\/\/github.com\/pmelchior\/shapelens}} \\citep{Kaiser1995,Viola2011}, which additionally computes an adapted circular weight function from the data.\n\nWe first apply this KSB method to the targets, taking into account the target isotropic Gaussian PSF as well, to obtain reference complex ellipticities $\\epsilon_i$ and windows. We then compute the complex ellipticity $\\widehat{\\epsilon_i}$ of the deconvolved galaxies using the same circular weight functions as their target counterparts. Finally, we compute\n\\begin{equation}\n\\mathrm{\\epsilon_{err}}\\left(\\widehat{\\ensuremath{\\mb{X}}}\\right)=\\mathrm{MED}\\left(\\|\\epsilon_i^{(t)} -\\widehat{\\epsilon_i}\\|_2\\right)_{i=1..n_g}\n\\end{equation}\nto obtain a robust estimate of the ellipticity error in the windows set up by the target images, again in a central window of $41\\times 41$ pixels common to all approaches.\n\nWe also report the distribution of pixel and ellipticity errors prior to applying the median when finer assessments need to be made. 
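For instance, the pixel-error criterion can be computed as in the following sketch (the ellipticity criterion proceeds analogously once the KSB ellipticities have been measured; the window extraction and names are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef pixel_error(X_hat, X_true, w=41):\n    # Median over galaxies of the relative mean squared error,\n    # computed in a central w x w window common to all approaches.\n    c = (X_true.shape[-1] - w) \/\/ 2\n    Xh = X_hat[..., c:c + w, c:c + w]\n    Xt = X_true[..., c:c + w, c:c + w]\n    num = np.sum((Xh - Xt) ** 2, axis=(-2, -1))\n    den = np.sum(Xt ** 2, axis=(-2, -1))\n    return np.median(num \/ den)\n\\end{verbatim}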
\n \n\\section{Results}\n\\label{sec:Results}\n\n\\subsection{Setting the Tikhonet architecture and hyperparameters}\n\nFor the Tikhonet, the key parameters to set are the hyperparameters $\\lambda_i$ in Eq.~\\ref{eq:SVTikhonovSolution}. In Fig.~\\ref{fig:TikhoHyperVisu}, these hyperparameters are set to the values minimizing the SURE, multiplied by factors ranging from 10 to 0.01, at $\\ensuremath{\\mathrm{SNR}}=20$, for the proposed XDense architecture (similar visual results are obtained for the \"classical\" U-Net). It appears that for the lowest factor, corresponding to the smallest regularization of the deconvolution (i.e. more noise remaining in the deconvolved image), the Tikhonet is not able to perform as well as for intermediate values, in particular for exactly the SURE minimizer.\n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.9}]{comparison_image_Tikhonet_hyper_res_SNR20}\n \\caption{Visual impact of the hyperparameter choice for the Tikhonet approach at $\\ensuremath{\\mathrm{SNR}}=20$. Top: target and observations, followed by SURE estimates with different multiplicative factors. Bottom: residuals associated with the top row.}\n \\label{fig:TikhoHyperVisu}\n\\end{figure}\n\nThis is confirmed in Fig.~\\ref{fig:UnetXDenseTikhoHyperPixErr}, reporting the pixel errors for both the proposed XDense and \"classical\" architectures, and Fig.~\\ref{fig:UnetXDenseTikhoHyperEllErr} for the ellipticity errors. \n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.47}]{comparison_boxplot_DENSEXSUREPRED_testhyper_pixelerror}\n \\includegraphics[width=\\scalefig{0.47}]{comparison_boxplot_UNETSUREPRED_testhyper_pixelerror}\n \\caption{Impact of the hyperparameter multiplicative factor value for the Tikhonet using the proposed XDense U-Net architecture (left) and \"classical\" U-Net architecture (right), in terms of pixel errors. The boxes indicate quartiles, while the vertical bars encompass $90\\%$ of the data. Outliers are displayed with circles.}\n \\label{fig:UnetXDenseTikhoHyperPixErr}\n\\end{figure}\n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.47}]{comparison_boxplot_DENSEXSUREPRED_testhyper_ellerror}\n \\includegraphics[width=\\scalefig{0.47}]{comparison_boxplot_UNETSUREPRED_testhyper_ellerror}\n \\caption{Impact of the hyperparameter multiplicative factor value for the Tikhonet using the proposed XDense U-Net architecture (left) and \"classical\" U-Net architecture (right), in terms of ellipticity errors. The boxes indicate quartiles, while the vertical bars encompass $90\\%$ of the data. Outliers are displayed with circles.}\n \\label{fig:UnetXDenseTikhoHyperEllErr}\n\\end{figure}\n\nFor both architectures, the best results in terms of pixel or ellipticity errors are consistently obtained across all SNRs tested for values of the multiplicative factor between 0.1 and 1. Higher multiplicative factors also lead to larger extreme errors, in particular at low SNR. In the following, we therefore set this parameter to the SURE minimizer.\n \nConcerning the choice of architecture, Fig.~\\ref{fig:UnetXDenseTikhoHyperPixErr} illustrates that the XDense U-Net provides, across SNR, fewer extreme outliers in pixel errors for a multiplicative factor of 10, which is however far from providing the best results. Looking more closely at the median error values in Table~\\ref{tbl:compUNetXDenseTikhonet} for the SURE minimizers, we see that slightly better results are consistently obtained for the proposed XDense U-Net architecture. 
In this experiment, the XDense obtains 4\\% (resp. 3\\%) lower pixel errors at $\\ensuremath{\\mathrm{SNR}}=20$ (resp. $\\ensuremath{\\mathrm{SNR}}=100$), and the most significant difference is an approximately 8\\% improvement in ellipticity measurement at $\\ensuremath{\\mathrm{SNR}}=100$.\n\n\\begin{table}\n\\begin{center}\n\\caption{Comparison of U-Net architectures for the SURE selected hyperparameter. The first number is obtained with the XDense U-Net architecture, the second in parentheses with the \"classical\" U-Net architecture.} \n\\begin{adjustbox}{max width=0.95\\columnwidth}\n{\\renewcommand{\\arraystretch}{1.5}\n\\begin{tabular}[c]{|l|l|l|l|l|l|} \n\\hline\n & $\\ensuremath{\\mathrm{SNR}}=20$ & $\\ensuremath{\\mathrm{SNR}}=40$ & $\\ensuremath{\\mathrm{SNR}}=60$ & $\\ensuremath{\\mathrm{SNR}}=80$ & $\\ensuremath{\\mathrm{SNR}}=100$ \\\\[5pt]\n\\hline\n\\hline\nMedian Pixel Error & \\textbf{0.157} (0.163) & \\textbf{0.117} (0.121) & \\textbf{0.105} (0.106) & \\textbf{0.097} (0.097) & \\textbf{0.090} (0.093) \\\\[5pt]\n\\hline\nMedian Ellipticity Error& \\textbf{0.109} (0.110) & \\textbf{0.063} (0.064) & \\textbf{0.045} (0.046) & \\textbf{0.035} (0.038) & \\textbf{0.030} (0.033) \\\\[5pt]\n\\hline \n\\end{tabular}}\n\\end{adjustbox}\n\\label{tbl:compUNetXDenseTikhonet}\n\\end{center}\n\\end{table} \n\n\\subsection{Setting the ADMMnet architecture and hyperparameters}\n\n\nFor the ADMMnet, we manually set the hyperparameters $\\rho_{max}=200$ and $\\epsilon=0.01$ to lead to the eventual stabilization of the algorithm, and $\\eta=0.5$ and $\\gamma=1.4$ to explore intermediate $\\rho$ values, and we investigate the choice of the parameter $\\rho_0$ to illustrate the impact of the continuation scheme on the solution. This is illustrated in Fig.~\\ref{fig:ADMMHyperVisu100} at high SNR, and Fig.~\\ref{fig:ADMMHyperVisu20} at low SNR, for the proposed XDense U-Net architecture. When $\\rho_0$ is small, higher frequencies are recovered in the solution, as illustrated by the galaxy substructures in Fig.~\\ref{fig:ADMMHyperVisu100}, but this can lead to artefacts at low SNR, as illustrated in Fig.~\\ref{fig:ADMMHyperVisu20}.\n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.9}]{comparison_admmhyper_rho0_res_gal0x0_SNR100.png}\n \\caption{Visual impact of the initialization of $\\rho$ for the ADMMnet for $\\ensuremath{\\mathrm{SNR}}=100$. Top: target and observations, followed by ADMM estimates with different augmented Lagrangian parameters $\\rho_0$. Bottom: residuals associated with the top row.}\n \\label{fig:ADMMHyperVisu100}\n\\end{figure}\n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.9}]{comparison_admmhyper_rho0_res_gal3x0_SNR20.png}\n \\caption{Visual impact of the initialization of $\\rho$ for the ADMMnet for $\\ensuremath{\\mathrm{SNR}}=20$. Top: target and observations, followed by ADMM estimates with different augmented Lagrangian parameters $\\rho_0$. Bottom: residuals associated with the top row.}\n \\label{fig:ADMMHyperVisu20}\n\\end{figure}\n\nQuantitative results concerning the two architectures are presented in Fig.~\\ref{fig:ADMMHyperStatsPixErr} for pixel errors and Fig.~\\ref{fig:ADMMHyperStatsEllErr} for ellipticity errors. 
The distribution of errors is very stable with respect to the hyperparameter $\\rho_0$ value, and similar for both architectures.\n \n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.4}]{comparison_boxplot_DENSEXSUREPRED_testhyperADMM_pixelerror}\n \\includegraphics[width=\\scalefig{0.4}]{comparison_boxplot_UNET_testhyperADMM_pixelerror}\n \\caption{Impact of the hyperparameter $\\rho_0$ value for the ADMMnet, in terms of pixel error, for the proposed XDense U-Net (left) and \"classical\" U-Net (right). The boxes indicate quartiles, while the vertical bars encompass $90\\%$ of the data. Outliers are displayed with circles.}\n \\label{fig:ADMMHyperStatsPixErr}\n\\end{figure}\n\nWhen looking at the pixel error at high and low SNR for both architectures, the lowest pixel errors in terms of median are obtained at low SNR for larger $\\rho_0$, while at high SNR $\\rho_0=1$ is the best. In terms of ellipticity errors, $\\rho_0=1$ consistently gives the best results at low and high SNR.\n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.4}]{comparison_boxplot_DENSEXSUREPRED_testhyperADMM_ellerror}\n \\includegraphics[width=\\scalefig{0.4}]{comparison_boxplot_UNET_testhyperADMM_ellerror}\n \\caption{Impact of the hyperparameter choice $\\rho_0$ for the ADMMnet, in terms of ellipticity error, for the proposed XDense U-Net (left) and \"classical\" U-Net (right). The boxes indicate quartiles, while the vertical bars encompass $90\\%$ of the data. Outliers are displayed with circles.}\n \\label{fig:ADMMHyperStatsEllErr}\n\\end{figure}\n\nTo better compare the differences between the two architectures, the median errors are reported in Table~\\ref{tbl:compUNetXDenseADMMNet}. At low SNR the \"classical\" U-Net performance varies more than that of the XDense, and at $\\ensuremath{\\mathrm{SNR}}=20$ the best results are obtained with the \"classical\" U-Net approach ($4\\%$ improvement over the XDense). At high SNR, however, the best results are consistently obtained across $\\rho_0$ values with the proposed XDense U-Net (but only a $1\\%$ improvement over the U-Net for the best $\\rho_0=1$). Finally, concerning median ellipticity errors, the best results are obtained for the smallest value $\\rho_0=1$ for both architectures, and the proposed XDense U-Net performs slightly better than the \"classical\" U-Net (about $1\\%$ better both at low and high SNR).\n \n \\begin{table}\n\\begin{center}\n\\caption{Comparison of U-Net architectures for median errors. 
The first number is obtained with the proposed XDense U-Net architecture, the second in parentheses with the \"classical\" U-Net architecture.} \n\\begin{adjustbox}{max width=0.95\\columnwidth}\n{\\renewcommand{\\arraystretch}{1.5}\n\\begin{tabular}[c]{|l|l|l|l|l|l|} \n\\hline\n & \\multicolumn{5}{|c|}{$\\ensuremath{\\mathrm{SNR}}=20$} \\\\[5pt]\n\\hline\n & $\\rho_0=1$ & $\\rho_0=20$ & $\\rho_0=50$ & $\\rho_0=100$& $\\rho_0=200$ \\\\[5pt]\n\\hline\nMedian Pixel Error & 0.186 (\\textbf{0.184}) & \\textbf{0.185} (0.186) & 0.182 (\\textbf{0.176}) & 0.183 (\\textbf{0.175}) & 0.182 (\\textbf{0.175}) \\\\[5pt]\n\\hline\nMedian Ellipticity Error& \\textbf{0.114} (0.116) & \\textbf{0.114} (0.116) & 0.118 (\\textbf{0.115}) & 0.119 (\\textbf{0.115}) & 0.119 (\\textbf{0.115}) \\\\[5pt]\n\\hline\n\\hline\n& \\multicolumn{5}{|c|}{$\\ensuremath{\\mathrm{SNR}}=100$}\\\\[5pt]\n\\hline\n & $\\rho_0=1$ & $\\rho_0=20$ & $\\rho_0=50$ & $\\rho_0=100$ & $\\rho_0=200$\\\\[5pt]\n\\hline\nMedian Pixel Error & \\textbf{0.095} (0.096) & \\textbf{0.096} (0.097) & \\textbf{0.098} (0.099) & \\textbf{0.099} (0.099) & \\textbf{0.097} (0.098) \\\\[5pt]\n\\hline\nMedian Ellipticity Error & \\textbf{0.028} (0.028) & 0.028 (\\textbf{0.028}) & 0.029 (\\textbf{0.029}) & 0.029 (\\textbf{0.029}) & 0.029 (\\textbf{0.028}) \\\\[5pt]\n\n\n\\hline \n\\end{tabular}}\n\\end{adjustbox}\n\\label{tbl:compUNetXDenseADMMNet}\n\\end{center}\n\\end{table} \n\nOverall, this illustrates that the continuation scheme has a small impact, in particular on the ellipticity errors, and that the best results are obtained for different $\\rho_0$ values and network architectures depending on whether pixel or ellipticity errors are considered and on the SNR. The \"classical\" U-Net allows smaller pixel errors than the proposed XDense at low SNR, but also leads to slightly higher pixel errors at higher SNR and higher ellipticity errors at both low and high SNR.\nIn practice, we keep $\\rho_0=1$ for the proposed XDense U-Net approach in the following comparisons with other deconvolution approaches, as the pixel error varies slowly with $\\rho_0$ for this architecture.\n \n\\subsection{DNN versus sparsity and low-rank}\n \nWe compare our two deep learning schemes, with the XDense U-Net architecture and the hyperparameters set as described in the previous sections, to the sparse and low-rank approaches of \\citet{Farrens2017}, implemented in sf\\_deconvolve\\footnote{\\url{https:\/\/github.com\/sfarrens\/sf_deconvolve}}. For the two methods, we used the default parameters, reconvolved the recovered galaxy images with the target PSF, and selected the central $41\\times 41$ pixels of the observed galaxies to be processed, in particular to speed up the computation of the singular value decomposition used in the low-rank constraint (and therefore of the whole algorithm), as in \\citet{Farrens2017}. All comparisons are made in this central region of the galaxy images.\n\nWe now illustrate the results for a variety of galaxies recovered at different SNRs for the sparse and low-rank deconvolution approaches as well as the Tikhonet and ADMMnet. \n\nWe first display several results at low SNR ($\\ensuremath{\\mathrm{SNR}}=20$) in Fig.~\\ref{fig:galimSNR20} to illustrate the robustness of the various deconvolution approaches. 
Significant artefacts appear in the sparse approach, illustrating the difficulty of recovering the galaxy images in this high-noise scenario: retained noise in the deconvolved images leads to these point-like artefacts.\n\nFor the low-rank approach, low frequencies seem to be partially well recovered, but artefacts appear for elongated galaxies in the direction of the minor axis. Finally, both the Tikhonet and ADMMnet seem to better recover the low-frequency information, but the galaxy substructures are essentially lost. The ADMMnet seems to recover sharper images in this situation, but with more propagated noise\/artefacts than the Tikhonet, with features similar to the sparse approach but with fewer point-like artefacts.\n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.95}]{comparison_methods_reconv_gal0x0_SNR20}\n \\includegraphics[width=\\scalefig{0.95},trim={0 0 0 0.7cm},clip]{comparison_methods_reconv_gal0x2_SNR20}\n \\includegraphics[width=\\scalefig{0.95},trim={0 0 0 0.7cm},clip]{comparison_methods_reconv_gal2x2_SNR20}\n \\includegraphics[width=\\scalefig{0.95},trim={0 0 0 0.7cm},clip]{comparison_methods_reconv_gal3x0_SNR20}\n \\includegraphics[width=\\scalefig{0.95},trim={0 0 0 0.7cm},clip]{comparison_methods_reconv_gal3x7_SNR20}\n \\includegraphics[width=\\scalefig{0.95},trim={0 0 0 0.7cm},clip]{comparison_methods_reconv_gal7x6_SNR20}\n \\caption{Deconvolved images with the various approaches for $\\ensuremath{\\mathrm{SNR}}=20$. Each row corresponds to a different processed galaxy. From left to right: image to recover, observation with a noise realization, sparse and low rank approaches, and finally Tikhonet and ADMMnet results.}\n \\label{fig:galimSNR20}\n\\end{figure}\n\nWe also display these galaxies at a much higher SNR ($\\ensuremath{\\mathrm{SNR}}=100$) in Fig.~\\ref{fig:galimSNR100} to assess the ability of the various deconvolution schemes to recover galaxy substructures in a low-noise scenario.\n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.95}]{comparison_methods_reconv_gal0x0_SNR100}\n \\includegraphics[width=\\scalefig{0.95},trim={0 0 0 0.7cm},clip]{comparison_methods_reconv_gal0x2_SNR100}\n \\includegraphics[width=\\scalefig{0.95},trim={0 0 0 0.7cm},clip]{comparison_methods_reconv_gal2x2_SNR100}\n \\includegraphics[width=\\scalefig{0.95},trim={0 0 0 0.7cm},clip]{comparison_methods_reconv_gal3x0_SNR100}\n \\includegraphics[width=\\scalefig{0.95},trim={0 0 0 0.7cm},clip]{comparison_methods_reconv_gal3x7_SNR100}\n \\includegraphics[width=\\scalefig{0.95},trim={0 0 0 0.7cm},clip]{comparison_methods_reconv_gal7x6_SNR100}\n \\caption{Deconvolved images with the various approaches for $\\ensuremath{\\mathrm{SNR}}=100$. Each row corresponds to a different processed galaxy. From left to right: image to recover, observation with a noise realization, sparse and low rank approaches, and finally Tikhonet and ADMMnet results.}\n \\label{fig:galimSNR100}\n\\end{figure}\n\nThe low-rank approach displays fewer artefacts than at low SNR, but still does not seem able to adequately represent elongated galaxies or directional substructures in the galaxy. This is probably due to the fact that the low-rank approach does not adequately cope with translations, leading to over-smooth solutions. On the contrary, the Tikhonet, ADMMnet, and sparse recovery manage to recover galaxy substructures. 
\n\nOverall, the two proposed deconvolution approaches using DNNs lead to the best visual results across SNR.\n\nThe quantitative deconvolution criteria are presented in Fig.~\\ref{fig:statAllMethods}. Concerning the median pixel error, this figure illustrates that both the Tikhonet and ADMMnet perform better than the sparse and low-rank approaches at recovering the galaxy intensity values, whatever the SNR. In these noise settings, the low-rank approach performed consistently worse than using sparsity. In terms of pixel errors, the sparse approach median errors are $27\\%$ (resp. $15\\%$) larger at $\\ensuremath{\\mathrm{SNR}}=20$ (resp. $\\ensuremath{\\mathrm{SNR}}=100$) compared to the Tikhonet results. The Tikhonet also seems to perform slightly better than the ADMMnet with this criterion. \n\n\\begin{figure}[ht]\\centering\n \\includegraphics[width=\\scalefig{0.45}]{comparison_allmethods_pixelerror}\n \\includegraphics[width=\\scalefig{0.45}]{comparison_allmethods_ellerror}\n \\caption{Deconvolution quality criteria for the different deconvolution schemes. Left: median pixel error, Right: median ellipticity error.}\n \\label{fig:statAllMethods}\n\\end{figure}\n\nFor shape measurement errors, the best results are obtained with the Tikhonet approach at low SNR (up to $\\ensuremath{\\mathrm{SNR}}=40$), and then the ADMMnet outperforms the others at higher SNR. In terms of ellipticity errors, the sparse approach median errors are $14\\%$ (resp. $5\\%$) larger at $\\ensuremath{\\mathrm{SNR}}=20$ (resp. $\\ensuremath{\\mathrm{SNR}}=100$) compared to the Tikhonet results. Finally, the low-rank approach performs the worst whatever the SNR. To summarize, these results clearly favour the DNN approaches, which result in consistently lower errors across SNR.\n\nThis is confirmed when looking at a realistic distribution of galaxy SNR, as shown in Table~\\ref{tbl:cstNoise}. In terms of both median pixel and ellipticity errors, the proposed deep learning approaches perform similarly and outperform both the sparse and low-rank approaches: the median pixel error is reduced by almost $14\\%$ (resp. $9\\%$) for the Tikhonet (resp. ADMMnet) approach compared to sparse recovery, and the ellipticity errors by about $13\\%$ for both approaches. Larger differences are observed relative to the low-rank approach.\n\n\n\\begin{table}\n\\begin{center}\n\\caption{Criteria for constant noise simulations ($\\sigma=0.04$). Best results are indicated in bold.} \n\\begin{adjustbox}{max width=0.95\\columnwidth}\n{\\renewcommand{\\arraystretch}{1.5}\n\\begin{tabular}[c]{|l|l|l|l|l|} \n\\hline\n & Sparse & Low-Rank & Tikhonet & ADMMnet \\\\[5pt]\n\\hline\n\\hline\nMedian Pixel Error & 0.130 & 0.169 &\\textbf{0.112} & 0.119 \\\\[5pt]\n\\hline\nMedian Ellipticity Error& 0.061 & 0.072&\\textbf{0.053}& 0.054 \\\\[5pt]\n\\hline \n\\end{tabular}}\n\\end{adjustbox}\n\\label{tbl:cstNoise}\n\\end{center}\n\\end{table} \n\n \n\\subsection{Computing Time}\n\nFinally, we also report in Table~\\ref{tbl:timing} the time necessary to learn the networks and process the set of $10000$ galaxies on the same GPU\/CPUs, as this is a crucial aspect when potentially processing a large number of galaxies such as in modern surveys. Among the DNNs, learning the parameters of the denoising network for the ADMMnet is faster than learning those of the post-processing network in the Tikhonet, since the latter requires each batch to be deconvolved. 
However, once the network parameters have been learnt, the Tikhonet, based on a closed-form deconvolution, is the fastest to process a large number of galaxies (about 0.05s per galaxy). On the other hand, learning and restoring 10000 galaxies is quite fast for the low-rank approach, while iterative algorithms such as the ADMMnet or the primal-dual algorithm for sparse recovery are similar in terms of computing time (about 7 to 10s per galaxy). All these computing times could, however, be reduced if the restoration of different galaxy images were performed in parallel, which we have not implemented.\n\n\\begin{table}\n\\begin{center} \n\\caption{Computing time for the various approaches (in hours).} \n\\begin{tabular}{|l|l|l|} \n\\hline\n\\rule[-1ex]{0pt}{3.5ex} Method & Learning & Processing 10000 galaxies \\\\\n\\hline\n\\hline\n\\rule[-1ex]{0pt}{3.5ex} Sparse & \/ & 24.7 \\\\\n\\hline\n\\rule[-1ex]{0pt}{3.5ex} Low-rank& \\multicolumn{2}{|c|}{5.2} \\\\\n\\hline\n\\rule[-1ex]{0pt}{3.5ex} Tikhonet& 21.5 & 0.1\\\\\n\\hline\n\\rule[-1ex]{0pt}{3.5ex} ADMMnet& 16.2 & 20.3 \\\\\n\\hline \n\\end{tabular}\n\\label{tbl:timing}\n\\end{center}\n\\end{table} \n\n\n \n\n\\section{Conclusions}\n\\label{sec:ccl}\n\nWe have proposed two new space-variant deconvolution strategies for galaxy images based on deep neural networks, while keeping all knowledge of the PSF in the forward model: the Tikhonet, which post-processes a simple Tikhonov deconvolution with a DNN, and the ADMMnet, based on regularization by a DNN denoiser inside an iterative ADMM PnP deconvolution algorithm. For galaxy processing, we proposed a DNN architecture based on the U-Net, particularly adapted to deconvolution problems, with small modifications (dense blocks of separable convolutions, and no skip connection) to lower the number of parameters to learn compared to a \"classical\" U-Net implementation. We finally evaluated these approaches against the deconvolution techniques of \\citet{Farrens2017} in simulations of realistic galaxy images derived from HST observations, with realistically sampled space-variant PSFs and noise, generated with the GalSim simulation code. We investigated in particular how to set the hyperparameters in both approaches, namely the Tikhonov hyperparameter for the Tikhonet and the continuation parameters for the ADMMnet, and compared our proposed XDense U-Net architecture with a \"classical\" U-Net implementation.\nOur main findings are as follows:\n\\begin{itemize}\n\\item for both the Tikhonet and ADMMnet, the hyperparameters impact the performance of the approaches, but the results are quite stable over a range of hyperparameter values. In particular, for the Tikhonet, the SURE minimizer lies within this range. 
For the ADMMnet, more hyperparameters need to be set, and the initialization of the augmented Lagrangian parameter impacts the performance: small values lead to higher frequencies in the images, while larger values lead to over-smooth recovered galaxies.\n\\item compared to the \"classical\" implementation, the XDense U-Net leads to consistently improved criteria for the Tikhonet approach; the situation is more balanced for the ADMMnet, where lower pixel errors can be achieved at low SNR with the \"classical\" architecture (with a high hyperparameter value), but the XDense U-Net provides the best results for pixel errors at high SNR and for ellipticity errors both at high and low SNR (with a low hyperparameter value); however, selecting one or the other architecture with its best hyperparameter value would not change the ranking among the methods\n\\item visually, both methods outperform the sparse recovery and low-rank techniques, which display artefacts at the low SNR probed (and at high SNR as well for the low-rank approach)\n\\item this is also confirmed in all SNR ranges and for a realistic distribution of SNR; in the latter case, an improvement of about $14\\%$ is achieved in terms of median pixel error and of about $13\\%$ for median shape measurement errors for the Tikhonet compared to sparse recovery.\n\\item among the DNN approaches, the Tikhonet outperforms the ADMMnet in terms of median pixel errors whatever the SNR, and in median ellipticity errors at low SNR ($\\ensuremath{\\mathrm{SNR}}<40$). At higher SNR, the ADMMnet leads to slightly lower ellipticity errors.\n\\item the Tikhonet is the fastest approach once the network parameters have been learnt, with about 0.05s needed to process a galaxy, to be compared with the sparse and ADMMnet iterative deconvolution approaches, which take about 7 to 10s per galaxy.\n\\end{itemize}\n\nWhile the ADMMnet approach remains promising, as extra constraints can easily be added to the framework (whereas the success of the Tikhonet approach also relies on the ability to compute a closed-form solution for the deconvolution step), these results illustrate that the Tikhonet is overall the best approach in this scenario for processing a large number of galaxies both quickly and with high accuracy. \n\n\n\\section*{Reproducible Research} \n\nIn the spirit of reproducible research, the codes will be made freely available on the CosmoStat website. The testing datasets will also be provided to repeat the experiments performed in this paper.\n\n\\begin{acknowledgements}\n The authors thank the GalSim developers\/GREAT3 collaboration for publicly providing simulation codes and galaxy databases, and also the developers of sf\\_deconvolve and shapelens for making their code publicly available.\n\\end{acknowledgements}\n\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Intro}\nDue to the massive growth of smart devices and the increasing popularity of bandwidth-intensive applications, mobile traffic is expected to grow continuously at a rapid rate in the next few years~\\cite{wang2015social}. Local area services and social network services are expected to constitute a major portion of this mobile traffic. In both local area and mobile social networking services, a large number of clients subscribe to a common content provider that frequently pushes multimedia content to the subscribers, e.g., text, photos, or videos. 
This can potentially generate thousands of duplicated downloads of the same content thus consuming a great amount of bandwidth in cellular systems~\\cite{wang2015social} and~\\cite{bastug2014living}. As a result of both local area and social network services, a large part of the cellular traffic consists of a few popular files that must be delivered to co-located social groups of user equipments (UEs).\n\nOffloading the traffic of local area services by leveraging direct device-to-device (D2D) communication links between UEs, is a promising solution to reduce the congestion on existing cellular networks~\\cite{zhao2015social,Mine1,Unmanned_Mozaffari}. In a D2D offloading procedure, the users can receive data from other UEs over D2D links instead of using the cellular links~\\cite{Mine1,andreev2014cellular}. Since a large amount of traffic is generated by a few popular contents, caching popular files on the users' devices is one promising solution to offload the cellular traffic and reduce the load on the base stations and backhaul~\\cite{chen2016cooperative}. Thus, caching popular files at the UE level and disseminating it via the use of D2D communication links is now seen as a key approach for boosting the performance of tomorrow's 5G networks \\cite{hamidouche2016mean}. To properly decide on how and where to cache content, one must take into account, not only wireless physical parameters, such as channel gain or interference, but also new user-specific information, such as social metrics or geolocation, as discussed in~\\cite{wang2015social,zhao2015social} and~\\cite{hamidouche2016mean}. Indeed, both the channel quality over the D2D links and the social tie between UEs become important to decide on how to place content and share the cached data. On the one hand, the social tie determines the common interests of the users, thus determining the way in which they share content. On the other hand, the data rate over the D2D links will determine the effectiveness of the data sharing~\\cite{wang2015social}. For instance, in local area and social network services, the content provider first sends the content to a target set of UEs (known as \\emph{seeds}) via cellular network links. These seed UEs then cache the content and use D2D communication links to share it with other UEs in proximity and that belong to various social communities.\n\nMany recent works have focused on developing new techniques to offload social network traffic by exploiting D2D communications among UEs such as~\\cite{chen2016cooperative,semiari2015context,abdel2012energy}, and~\\cite{li2014social}. Some of these works such as in~\\cite{chen2016cooperative} have mainly focused on managing the interference among D2D links to increase the cooperative opportunity in sharing cached content at the UEs side. In contrast, in~\\cite{abdel2012energy}, the authors focused on optimizing the overall system performance (e.g., minimize bandwidth) or the average delay of subscribers. However, the works in~\\cite{chen2016cooperative} and~\\cite{abdel2012energy} consider that the UEs always offload their cached content over the D2D links, and they do not consider the social tie between UEs. The works in~\\cite{kempe2003maximizing} and~\\cite{kempe2005influential} have focused on the influence maximization problem in social networks. The influence maximization problem is approximated in general models that are referred to as the decreasing cascade and linear threshold models. 
Then, a greedy algorithm is proposed to choose a set of individuals such that initially activating this set yields the largest possible expected influence. In~\\cite{alim2017leveraging}, a novel framework is proposed to enable devices to form multi-hop D2D connections in an effort to maintain sustainable communication in the presence of device mobility. The framework proposed in~\\cite{alim2017leveraging} can be used to derive an optimal solution for time-sensitive content transmission while also minimizing the cost that the base station pays in order to incentivize users to participate in D2D. The work in~\\cite{nguyen2014dynamic} focuses on how to efficiently identify communities in dynamic social networks. In this work, the authors present quick community adaptation, an adaptive modularity-based framework for not only discovering but also tracing the evolution of network communities in dynamic online social networks. {In~\\cite{R2_11}, the problem of optimally determining source$\\text{-}$destination connectivity of random networks with a finite number of nodes is studied. The authors in~\\cite{R2_11} determine a policy for establishing whether a designated source and destination are connected with minimum expected cost. The proposed policy in~\\cite{R2_11} simply condenses each known connected component to a single super node at each step, and in that condensation multi-graph it simply tests an edge that is both on the shortest path containing the super nodes and on a minimum source$\\text{-}$destination cut.}\n\nIn~\\cite{semiari2015context,li2014social,Zhang2017}, and~\\cite{Bai2016}, a social-aware D2D communication architecture is proposed to leverage social network features to optimize the use of D2D communications. For example, social ties can measure the strength of the relationships between users in D2D systems and reflect, to some degree, the communication demands between UEs~\\cite{semiari2015context}. Moreover, a high degree of centrality in a social network can imply that a given user may play a key role in data transmission. Indeed, users usually share popular cached content with each other in the D2D network only if they have a strong enough social tie~\\cite{wang2015social}, and they may participate in different social communities. As a result, UEs with high centrality should be allocated more wireless resources so as to leverage their connections in D2D transmission~\\cite{li2014social}. In~\\cite{Zhang2017}, a social-aware framework for optimizing D2D communications is presented by exploiting users' relationships in the social network and connections of UEs in the physical wireless network. Then, to enhance the cooperation of users in content delivery in the D2D network, the authors propose to use different social networking features. In~\\cite{Bai2016}, a novel hypergraph framework is proposed for studying social-aware caching in D2D networks. In particular, the authors use different hypergraph concepts, such as hypergraph coloring and multidimensional matching, to optimize spectrum allocation and cache placement in a D2D network. In~\\cite{Zhuang2016}, the authors study the information diffusion in a clustered multilayer network model, where all constituent layers are random networks with high clustering. One of the key results of~\\cite{Zhuang2016} is that information with low transmissibility spreads more effectively within a small but densely connected social network. 
In~\\cite{fu2017anonymization}, the authors present a comprehensive study of the community-structured social network de-anonymization problem. The main focus of this work is on privacy and anonymization challenges. {In~\\cite{R2_51}, the authors prove that most properties of nodes, links, and paths are correlated among the social and D2D graphs. Then, they use the structure of the social graph to build forwarding paths in the D2D graph, allowing two nodes to communicate over time using opportunistic contacts and intermediate nodes. In~\\cite{R2_52}, the authors present a set of new temporal distance based metrics. Then, they show how these metrics can be applied effectively to characterise the temporal dynamics and data diffusion efficiency of social networks.} {In~\\cite{R2_12}, the throughput capacity of wireless networks with social characteristics is studied. In particular, the proposed model in~\\cite{R2_12} captures the impact of the way people choose friends, as well as of the number of friends, on the capacity of real large-scale networks, specifically for multicast traffic.} {In~\\cite{R_minor2}, the authors propose a novel approach to detect properties of social grouping and human mobility. Then, popular social network users are used for one-hop opportunistic data forwarding. The work in~\\cite{R_minor1} introduces new policies for dividing large communities into sub-communities following location or social interests. Then, users known as multi-homed users are exploited to deliver data across the sub-communities.}\n\nIn practical social networks, users may belong to different social communities where each community's members have the same interests in receiving cached content. Thus, in a D2D network, the social ties among users of one community, the centrality within each social community, and the effects of different communities on one another must be considered to improve data offload via caching and D2D links. None of these critical parameters are accounted for in existing works such as~\\cite{wang2015social,chen2016cooperative,semiari2015context,abdel2012energy,hamidouche2016mean,li2014social,kempe2003maximizing,kempe2005influential}, and~\\cite{Zhang2017}. Moreover, even though the majority of the existing literature, such as~\\cite{wang2015social,chen2016cooperative,semiari2015context,abdel2012energy,hamidouche2016mean,kempe2003maximizing,kempe2005influential}, and~\\cite{Zhang2017}, focuses on a single community, some works, such as~\\cite{alim2017leveraging,nguyen2014dynamic,li2014social,Zhuang2016}~and~\\cite{fu2017anonymization}, do consider multiple communities. However, these multi-community works do not capture the dependence and effect of the centrality of one social community on the others, which is particularly important for cache placement in real-world D2D networks.\n\nThe main contribution of this paper is a new framework to optimally select a suitable set of seed UEs that can be used for cache placement over a D2D network. The proposed framework leverages multi-community social network features to optimize the multi-hop D2D offloading procedure. Indeed, the proposed framework makes it possible to maximize the expected number of UEs that receive cached content from the cache placement set through a multi-hop D2D offloading procedure. To quantify the collaborative effect of seed UEs (in the cache placement set) on local data offload, a cooperative game in characteristic function form is proposed. 
For this game, we prove that the Shapley value of each UE in the cache placement set captures the exclusive effect of this UE on the effectiveness of offloading popular content over D2D links. To capture the effects of multiple communities of users on each other in a D2D network, we model the social graphs among the users and the D2D graph among the UEs using a hypergraph. Then, we propose two line graphs, a directed influence-weighted graph and a directed connectivity-weighted graph, for analyzing the hypergraph model. Using the combination of the Shapley value and the hypergraph model, we define a new offload power metric for the UEs. This metric quantifies the power of each UE in offloading cached content over the multi-community multi-hop D2D network. Simulation results show that, on average, the proposed framework achieves $12\\%$, $19\\%$ and $21\\%$ improvements in terms of the number of UEs that receive offloaded content compared to the schemes based on betweenness, degree, and closeness centrality, respectively.\n\nThe rest of this paper is organized as follows. Section~\\ref{Dec:Sys-Model} presents the system model. In Section~\\ref{Network Centrality}, the network centrality problem for optimal seed selection in social multi-hop D2D links is formulated. In Section~\\ref{Cooperative Game}, a cooperative game approach for solving the network centrality problem is proposed, the properties of its Shapley value are studied, and our framework for cache placement based on the hypergraph model and the Shapley value approach is presented. In Section~\\ref{Dec:Complexity}, the complexity of our proposed approach is analysed. In Section~\\ref{Dec:Simulation}, we provide simulation results, while conclusions are drawn in Section~\\ref{Dec:Conclusion}.\n\\vspace{-0.4cm}\n\\section{System Model}\n\\label{Dec:Sys-Model}\nConsider a multi-hop D2D-enhanced cellular network in which a set $\\mathcal{N}$ of $N$ wireless user equipments can communicate directly via D2D communication links. Each user can access popular content from a base station (BS) over a cellular link or from a seed over a multi-hop D2D link. In our model, the network operator always ensures that the content cached at the seeds is fresh and corresponds to the most popular content. Thus, most of the time, a user can obtain fresh popular content from a seed. If a seed does not have the requested content, the user can download it directly from the BS. In this network, a given UE can deliver a file of size $B$ bits to a neighbor, i.e., in one hop, within one time slot $t$. The duration of each time slot is $T$ seconds. Our model focuses on delay-tolerant services that are not affected by the potential delay incurred by multi-hop transmissions.\n\nWe assume an overlay D2D communication model~\\cite{asadi2014survey}, in which a portion of the cellular resources is dedicated to D2D communications. Hence, no mutual interference occurs between D2D and cellular links. Consequently, the interference over any D2D link between two UEs $m$ and $n$ depends only on the other D2D pairs that communicate over the same resource block (RB) assigned to this link. We consider an orthogonal frequency division multiple access scheme for the D2D transmissions. In this scheme, each D2D link will be assigned one RB. We assume that the transmission power of each UE $m$ is $p_m$ and the bandwidth of each resource block on the D2D link is equal to $B_w$. 
Consequently, the data rate between UE $m$ and UE $n$ is given by:\n\\begin{equation}\nR_{mn}= B_w\\log\\left(1+\\frac{\\beta p_m h_{mn}}{B_wN_0+\\sum_{k} h_{kn} p_k}\\right),\n\\end{equation}\nwhere $h_{mn}$ is the channel gain between UE $m$ and UE $n$ on each RB at time slot $t$, $N_0$ is the noise power spectral density, and $\\beta=\\frac{-1.5}{\\ln(5P_e)}$ is the SNR gap for M-QAM modulation, with $P_e$ being the maximum acceptable error probability. The term $\\sum_{k} h_{kn} p_k$ is the interference from any other UE $k\\neq m$ on the D2D link between UEs $m$ and $n$ when the same RB is allocated to UEs $k$ and $m$. We also assume a block fading channel for the D2D links, whose fading process is constant during one time slot (i.e., $T$ seconds). Consequently, we can consider a constant bit rate over each D2D link during one time slot.\n\nConsidering all D2D links among UEs in the communication network, we introduce a D2D graph $G^d(\\mathcal{N}, \\mathcal{E}_d)$ whose set of vertices is the set $\\mathcal{N}$ of UEs and whose set of edges (links) is $\\mathcal{E}_d=\\{(m,n)| m,n\\in\\mathcal{N}\\text{ and }\\frac{B}{R_{mn}}\\leq T\\}$. Thus, a link exists from a given vertex $m$ to another vertex $n$ in the D2D graph if and only if UE $m$ can transmit a single $B$-bit packet to UE $n$ during one time slot $t$ of duration $T$ over a direct D2D link. Since UEs are carried by human users, we assume that all of the $N$ users form a multi-community social network. In this social network, there are $L$ social communities connecting the users who are carrying the UEs that form the D2D network. Let $\\mathcal{L}_l$ be the set of UEs belonging to social community $l$, so that $\\cup_{l=1}^L \\mathcal{L}_l=\\mathcal{N}$.\n\nEach social community $l$ is modeled by a weighted social graph $G^s_l(\\mathcal{L}_l,\\mathcal{E}_l^s,w_l^s)$, whose vertices are the UEs belonging to social community $l$ and whose edges are given by the set $\\mathcal{E}_l^s=\\{(m,n)| 0 < w_{mn}, \\forall m,n \\in \\mathcal{L}_l\\}$, where $w_{mn}$ is the social tie between UEs $m$ and $n$, given by the function $w_l^s:\\mathcal{E}_l^s\\rightarrow (0,1]$ with $w_l^s:(m,n)\\mapsto w_{mn}$. This function captures the strength of the social tie: a higher value of $w_{mn}$ represents a stronger social tie between UEs $m$ and $n$.\n\nSocial ties are used to capture social relationships between users, such as friendship, kinship, colleague relationships, and altruistic behavior, that are observed in human activities~\\cite{li2014social}. Due to the social ties among members of each community, the users in each community exhibit homophily, as they share common contents~\\cite{li2014social}. For example, students usually share content related to their major, and fans of a specific sport tend to share news about it. Thus, we assume that all members of each social community are interested in a common popular content. Consequently, we consider $L$ popular files, each of which corresponds to one community. {Since the number of UEs that can cache the popular file of each community can be seen as a budget, we consider the worst-case scenario in which just one UE from each community is selected as a \\emph{seed}.} We define a seed set $\\mathcal{S}_0$ consisting of $L$ seed UEs. The BS sends to each seed the popular file that corresponds to its community. Then, the seed caches the popular content. {The popular file of each community needs to be received by all of that community's members. 
For a given community and its associated popular file, users belonging to other communities can help forward this content to the community's UEs via multi-hop D2D transmission.} Since each UE $m$ can send its cached content to another UE $n$ over multi-hop D2D links, we say that UE $n$ can be influenced by another UE $m$. Here, influence means receiving content over D2D links from other UEs that locally cached the data. Accordingly, we define the following \\textit{one-hop influence} concept.\n\\begin{definition}\\label{def-onehop}\n\\textnormal{The \\textit{one-hop influence} of a given UE $m$ on its one-hop neighbor UE $n$ in the D2D graph is the preference of UE $m$ to transmit its cached content to UE $n$ over a direct one-hop D2D link.}\n\\end{definition}\n\nNote that the defined \\textit{one-hop influence} depends on the social ties between the UEs of each community and also on the multi-hop D2D paths between them. In other words, as the social tie between members increases, the probability of transmitting cached content to the neighbors also increases. Moreover, when a node locally shares its cached content over D2D links, the content may go from one community's members to another over the D2D graph.\n\nFor example, assume that a given UE $m\\in \\mathcal{L}_u$ must send its cached content to another member of its community. It will then send this content to a UE $n$ that belongs to another community $\\mathcal{L}_v$ if the shortest path over the D2D graph between UE $m$ and another member of community $\\mathcal{L}_u$ includes UE $n$.\n\nThus, if UE $m\\in \\mathcal{L}_u$ and UE $n$ are neighbors in the D2D graph $G^d$, the \\emph{one-hop influence} of UE $m$ on UE $n$ will be:\n\\begin{equation}\nI_{mn}=\\sum_{m'\\in \\mathcal{L}_u\\backslash\\{m\\}, n\\in\\mathcal{P}^d_{mm'}}\\frac{w_{mm'}}{|\\mathcal{P}^d_{mm'}|},\n\\label{Dmodel}\n\\end{equation}\nwhere $\\mathcal{P}^d_{mm'}$ is the set of UEs which form a shortest path from UE $m$ to UE $m'$ within the D2D graph $G^d$.\n\n\nGiven the one-hop influence model in (\\ref{Dmodel}), we consider the influence graph as a weighted directed graph $G^i(\\mathcal{N},\\mathcal{E}_d,w_i)$. In $G^i$, the vertices are the set $\\mathcal{N}$ of UEs and the weight of each edge $(m,n) \\in \\mathcal{E}_d$ captures the one-hop influence of UE $m$ on UE $n$, i.e., $I_{mn}$.\n\nWe illustrate these parameters using a simple example, shown in Fig.~\\ref{DModel}. In this example, 10 UEs are partitioned into two social communities $G_1^s$ and $G_2^s$ and form one D2D graph $G^d$. For instance, since the channel gain between UE 2 and UE 3 is high enough, they are connected via a D2D link $(2,3)\\in \\mathcal{E}_d$. Due to the social tie between UE 3 and UE 5 in social community $2$, $(3,5)\\in \\mathcal{E}_2^s$ and the weight $w_{35}$ captures the strength of this tie. For the influence graph $G^i$ in this example, we must compute the one-hop influence between all D2D neighbors. For example, consider UEs 2 and 3 in the D2D graph. These UEs belong to different communities but they are neighbors. Using Definition~\\ref{def-onehop}, we need to calculate $I_{23}$ and $I_{32}$ in order to obtain the influence graph $G^i$ shown in Fig.~\\ref{DModel}. To compute $I_{23}$, we need to consider that UE 2 would like to share content with UE 6, with which it has social tie $w_{2,6}$ in $G^s_1$. 
Given that UE 3 is on the shortest path toward UE 6 in the D2D graph, we can calculate $I_{23}$ as $\\frac{w_{2,6}}{3}$. Similarly, to calculate $I_{32}$, we need to consider that UE 3 would like to share content with UEs 1 and 5, with which it has social ties $w_{3,1}$ and $w_{3,5}$ in $G^s_2$. Given that UE 2 is not on the shortest path toward UE 5 and is only on the shortest path toward UE 1 in the D2D graph, we can calculate $I_{32}$ as $\\frac{w_{3,1}}{2}$. Following the same procedure for all neighbors in the D2D graph, we can obtain the influence graph $G^i$ of the D2D graph. As shown in Fig.~\\ref{DModel}, UEs 6 and 5 are selected as seeds, and the BS sends popular content 1 to UE 6 in social community 1 and popular content 2 to UE 5 in social community 2. The list of notations used throughout this paper is presented in Table~\\ref{tab:Symbols}.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.35]{SModel.pdf}\n\\caption{An illustrative example of the D2D graph and social networks for 10 UEs.}\n\\label{DModel}\n\\vspace{-.3cm}\n\\end{figure}\n\n\\begin{table}[t]\n\\caption{List of notations used throughout the paper.}\n\\vspace{-.5cm}\n\\begin{center}\n\\begin{tabular}{l | l } \\toprule\n{ \\textbf{Symbol}} & { \\textbf{Definition}} \\\\ \\rowcolor[gray]{.9} \\hline\n$\\mathcal{N}$ & Set of all UEs\\\\\n$T$ & Duration of one time slot $t$\\\\ \\rowcolor[gray]{.9}\n$R_{mn}$ & Bit rate between UE $m$ and UE $n$ on\neach RB \\\\\n$\\mathcal{E}_d$ & Set of all D2D links\\\\ \\rowcolor[gray]{.9}\n$G^d$ & D2D graph\\\\\n$\\mathcal{L}_u$ & Set of UEs in social community $u$\\\\ \\rowcolor[gray]{.9}\n$\\mathcal{E}_l^s$ & Set of all social relationships in community $l$\\\\\n$w_{l}^s$ & Social tie function $\\mathcal{E}_l^s\\rightarrow(0,1]$ of community $l$
\\\\ \\rowcolor[gray]{.9}\n$w_{mn}$ & Social tie between UEs $m$ and $n$ \\\\\n$G_l^s$ & Weighted social graph of community $l$\\\\ \\rowcolor[gray]{.9}\n$\\mathcal{P}_{mm'}^d$ & Shortest path set of UEs from UE $m$ to $m'$ \\\\\\rowcolor[gray]{.9}\n& on D2D graph\\\\\n$\\mathcal{C}_m$ & Set of one-hop neighbors of UE $m$ in the D2D graph\\\\ \\rowcolor[gray]{.9}\n$\\mathcal{C}_{m,d}$ & Set of $d$-distance neighbors of UE $m$ in the D2D graph\\\\\n$d_{mn}$ & Length of the shortest path between UEs $m$ and $n$ \\\\\n& in the influence graph \\\\ \\rowcolor[gray]{.9}\n$G^i$ & Weighted influence graph\\\\\n$I_{mn}$ &One-hop influence of UE $m$ on UE $n$\\\\ \\rowcolor[gray]{.9}\n$\\mathcal{S}_0$ & Seed set\\\\\n$\\mathcal{S}_t$ & Set of UEs that received the social content by time slot $t$ \\\\ \\rowcolor[gray]{.9}\n$\\mathfrak{S}$ &Diffusion process\\\\\n$I_{d_n}(\\mathcal{S}_t)$ &$d_n$-\\textit{influence} of $\\mathcal{S}_t$ on UE $n\\in \\mathcal{N}\\backslash \\mathcal{S}_t$\\\\ \\rowcolor[gray]{.9}\n$H$ &Hypergraph\\\\\n$I_{d}(n,k)$ &Exclusive influence of UE $k$ on $\\mathcal{C}_{n,d}$ of UE $n$\\\\ \\rowcolor[gray]{.9}\n$v(\\mathcal{S}_0,G^d)$ &Value of coalition $\\mathcal{S}_0$ in graph $G^d$\\\\\n$\\phi_k(\\mathcal{S}_0,G^d)$ & Shapley value of player $k$ in coalition $\\mathcal{S}_0$\\\\ \\rowcolor[gray]{.9}\n$\\phi_k(G^d)$ & Shapley value of UE $k$ in graph $G^d$\\\\\n$O_k$ & Offloading power of UE $k$\\\\ \\rowcolor[gray]{.9}\n$D_i(H)$ &Directed influence-weighted line graph of hypergraph $H$\\\\\n$D_c(H)$ &Directed connectivity-weighted line graph of $H$\\\\ \\rowcolor[gray]{.9}\n$\\mathfrak{L}$ & Set of all social communities\\\\\n$\\mathcal{E}_H$ &Edge set of line graph $D_i(H)$ or $D_c(H)$\\\\ \\rowcolor[gray]{.9}\n$w_i$ &Weight function of the line graph $D_i(H)$\\\\\n$w_c$ & Weight function of the line graph $D_c(H)$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\label{tab:Symbols}\n\\vspace{-9mm}\n\\end{table}\n\nIn general, the D2D links between UEs as well as their social ties will affect the total number of UEs that cache and offload the popular content over D2D links. Thus, we need to exploit the social ties among users to select an appropriate seed set in the D2D graph. The optimal seed set can be selected to maximize the total expected number of UEs that will ultimately receive the popular content over a multi-hop D2D network. Finding the optimal seeds in a directed weighted graph is known as a top-$k$ node problem or influence maximization problem~\\cite{gomez2003centrality}.\n\nOur goal is to select a seed set of $k$ users to maximize the expected number of UEs that will receive the cached data in the D2D graph. The network can then send popular content directly to the seed set over cellular links from the BS. Then, UEs in the seed set cache the popular content and other UEs can download this content from the seed set over D2D links~\\cite{zhang2014recent}. This optimal set is called the most influential set of nodes in a network. Such a top-$k$ node or influence maximization problem is known to be NP-hard~\\cite{zhang2014recent,dhamal2014cooperative}. Next, we exploit the social ties among the members of each community and the effect of the communities on one another over the multi-community, social-aware, multi-hop D2D graph to derive the optimal seed set of UEs.\n\\vspace{-0.4cm}\n\\section{Network Centrality for Seed Selection}\n\\label{Network Centrality}\nOur main goal is to find the center of the D2D graph $G^d$. As a concrete reference for the constructions used below, a minimal sketch of how $G^d$ and the influence weights of (\\ref{Dmodel}) can be assembled is given next. 
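The following Python sketch is illustrative only and is not part of the original design: all names and the assumed data layout (a dictionary of pairwise rates, communities as sets of UE identifiers, and ties as a dictionary of social weights) are our assumptions, and \\texttt{networkx} is used for shortest paths. Note that (\\ref{Dmodel}) does not prescribe a tie-breaking rule when several shortest paths exist; the sketch simply uses the one returned by the library, and the normalization by $|\\mathcal{P}^d_{mm'}|$ follows the definition in the text.\n\\begin{verbatim}
# Illustrative sketch (our assumptions, not the authors' code).
import networkx as nx

def d2d_graph(rates, B, T):
    # Edge (m, n) exists iff a B-bit packet fits in one
    # slot of T seconds: B / R_mn <= T, as in the text.
    G = nx.DiGraph()
    for (m, n), r in rates.items():
        G.add_nodes_from((m, n))
        if r > 0 and B / r <= T:
            G.add_edge(m, n)
    return G

def one_hop_influence(G_d, community_of_m, ties, m, n):
    # I_mn of eq. (2): sum, over members m' of m's community whose
    # shortest D2D path from m passes through n, of w_mm' / |path|.
    total = 0.0
    for mp in community_of_m - {m}:
        try:
            path = nx.shortest_path(G_d, m, mp)
        except nx.NetworkXNoPath:
            continue
        if n in path[1:]:
            total += ties.get((m, mp), 0.0) / len(path)
    return total
\\end{verbatim}\nFrom these weights, the influence graph $G^i$ is obtained by attaching $I_{mn}$ as a \\texttt{weight} attribute to every D2D edge $(m,n)$.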
Finding this center will yield the optimal cache placement, i.e., it will maximize the expected number of UEs that receive cached content from the cache placement set using multi-hop D2D sharing. The UEs can then receive data from the seed set members and distribute it over the D2D graph according to their social ties. Let $\\mathcal{S}_t$ be the set of UEs that have received the popular content by time slot $t$. Then, $\\mathfrak{S}=\\{\\mathcal{S}_0,\\mathcal{S}_1,...,\\mathcal{S}_t\\}$ is defined as a \\emph{diffusion process} in which $\\mathcal{S}_t$ is the set of UEs that have received the content and cached it by the end of time slot $t$. The influence maximization problem for offloading social data in multi-hop D2D networks can be defined as follows:\n\\begin{definition}\\label{def:influence problem}\n\\textnormal{The \\textit{influence maximization problem} in $L$-community multi-hop D2D networks aims to select a seed set $\\mathcal{S}_0$, consisting of $L$ UEs, that maximizes the number of UEs that receive cached content over D2D links.\n}\n\\end{definition}\n\n\nLet $\\mathcal{C}_n$ be the set of one-hop neighbors of UE $n$ within graph $G^d$. We define the distance $d_{mn}$ between two UEs $m$ and $n$ as the sum of the link weights along the shortest path between them in the weighted influence graph $G^i$.\n\nThen, we define $\\mathcal{C}_{n,d}=\\{m|m\\in G^d, d_{mn}\\leq d\\}$ to be the set of $d$-distance neighbors of UE $n$. This set includes the UEs whose distance from UE $n$ is at most $d$. A lower distance between UEs leads to faster offloading of the cached content through D2D sharing. Next, we define the following concept.\n\\begin{definition}\n\\textnormal{The $d$-\\textit{influence} of $\\mathcal{S}_t$ on UE $n\\in \\mathcal{N}\\backslash \\mathcal{S}_t$ is defined as the expected number of UEs in the set $\\mathcal{C}_{n,d}$ that can receive the cached data over the D2D graph from the UEs in $\\mathcal{S}_t$. This is given by:}\n\\end{definition}\n\\begin{equation}\nI_{d}(n,\\mathcal{S}_t)=\\sum_{j \\in \\mathcal{C}_{n,d}}\\biggl(1-\\prod_{\\substack{m\\in \\mathcal{C}_j \\cap \\mathcal{S}_t}} (1-I_{mn})\\biggr),\n\\label{d_influence}\n\\end{equation}\nwhere $I_{mn}$ is given by (\\ref{Dmodel}).\n\nWhenever a UE $k\\in \\mathcal{S}_t$ transmits its cached data to one of the UEs in the $d$-distance neighbor set of UE $n$, we say that UE $k$ affects UE $n$. If all UEs in $\\mathcal{S}_t$ except UE $k$ fail to affect UE $n$, then we say that UE $k$ exclusively affects UE $n$. 
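In code, evaluating the $d$-influence of (\\ref{d_influence}) amounts to a Dijkstra pass on $G^i$ to obtain $\\mathcal{C}_{n,d}$, followed by a product over the seed neighbors of each member. The sketch below builds on the helpers above and is again only illustrative; in particular, it reads the factor $(1-I_{mn})$ as the influence weight of the edge from the seed $m$ toward the member $j$ it can reach, which is our reading of the indexing in (\\ref{d_influence}), and it takes the one-hop neighbors of $j$ to be its in-neighbors in the directed D2D graph.\n\\begin{verbatim}
def d_distance_neighbors(G_i, n, d):
    # C_{n,d}: UEs whose weighted shortest-path distance to n
    # in the influence graph G^i is at most d (Dijkstra).
    dist = nx.single_source_dijkstra_path_length(
        G_i, n, cutoff=d, weight='weight')
    return set(dist) - {n}

def d_influence(G_d, G_i, n, S_t, d):
    # Eq. (3): expected number of members of C_{n,d} that can
    # receive the cached data from the seeds in S_t.  G_i is
    # assumed to carry a 'weight' attribute on every D2D edge.
    total = 0.0
    for j in d_distance_neighbors(G_i, n, d):
        miss = 1.0
        for m in set(G_d.predecessors(j)) & S_t:  # C_j cap S_t
            miss *= 1.0 - G_i[m][j].get('weight', 0.0)
        total += 1.0 - miss
    return total
\\end{verbatim}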
Now, we can calculate the exclusive influence of each UE $k$.\n\\begin{proposition}\n\\textnormal{The \\textit{exclusive influence} of each UE $k\\in \\mathcal{S}_t$ on the $d$-distance neighbor set of a UE $n$ is given by:}\n\\begin{equation}\nI_{d}(n,k)=I_{d}(n,\\mathcal{S}_t)-I_{d}(n,\\mathcal{S}_t \\backslash \\{k\\}).\n\\label{per_influence}\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nGiven (\\ref{d_influence}), we can write:\n\\begin{multline*}\nI_{d}(n,\\mathcal{S}_t)-I_{d}(n,\\mathcal{S}_t \\backslash \\{k\\})=\\\\\n\\sum_{j \\in \\mathcal{C}_{n,d}}\\biggl(1-\\prod_{\\substack{m\\in \\mathcal{C}_j \\cap \\mathcal{S}_t}} (1-I_{mn})\\biggr)-\\\\\n\\sum_{j \\in \\mathcal{C}_{n,d}} \\biggl( 1-\\prod_{\\substack{m\\in \\mathcal{C}_j \\cap \\{ \\mathcal{S}_t\\backslash \\{k\\}\\} }} (1-I_{mn})\\biggr)=\\\\\n\\sum_{j \\in \\mathcal{C}_{n,d}} \\biggl( \\prod_{\\substack{m\\in \\mathcal{C}_j \\cap \\{\\mathcal{S}_t \\backslash \\{k\\}\\} }} (1-I_{mn})\n-\\prod_{\\substack{m\\in \\mathcal{C}_j \\cap \\mathcal{S}_t}} (1-I_{mn})\\biggr)=\\\\\n\\sum_{j \\in \\mathcal{C}_{n,d}}\\biggl((1-\\prod_{\\substack{m\\in \\mathcal{C}_j\\cap\\{k\\}}} (1-I_{mn}))\n\\prod_{\\substack{m\\in \\mathcal{C}_j \\cap \\{\\mathcal{S}_t \\backslash \\{k\\}\\} }} (1-I_{mn})\\biggr),\n\\end{multline*}\nwhere $(1-\\prod_{\\substack{m\\in \\mathcal{C}_j\\cap\\{k\\}}} (1-I_{mn}))$ represents the probability that UE $k$ shares the cached data with at least one of the one-hop neighbors of UE $j$, where $ j \\in \\mathcal{C}_{n,d}$. Moreover, $\\prod_{\\substack{m\\in \\mathcal{C}_j \\cap \\{\\mathcal{S}_t \\backslash \\{k\\}\\}}} (1-I_{mn})$ is the probability that none of the members of $\\mathcal{S}_t \\backslash \\{k\\}$ can share the content with one of the one-hop neighbors of UE $j$, where $ j \\in \\mathcal{C}_{n,d}$. Thus, the product of these two terms represents the probability that UE $k$ exclusively offloads its cached content to one of the members of $\\mathcal{C}_{n,d}$. Considering the summation over all the members of $\\mathcal{C}_{n,d}$, the above expression represents the expected number of members in $\\mathcal{C}_{n,d}$ to which UE $k$ exclusively offloads its cached data.\n\\end{proof}\n\\vspace{-0.2cm}\nAccording to Definition~\\ref{def:influence problem}, the influence optimization problem that maximizes the expected number of UEs that receive data over D2D links and cache it is given by:\n\\begin{align}\\label{opt_problem}\n&\\max_{\\mathcal{S}_0} \\lim_{t\\rightarrow \\infty} \\mathds{E} \\big( |\\mathcal{S}_t| \\big).\n\\end{align}\n\\vspace{-0.1cm}\nThe effect of the initial seed set $\\mathcal{S}_0$ on a future set $\\mathcal{S}_t$ in the diffusion process $\\mathfrak{S}$ depends on the $d_n$-influence of $\\mathcal{S}_t$ in each time slot. Generally, an influence maximization problem such as (\\ref{opt_problem}) is known to be NP-hard~\\cite{zhang2014recent} and~\\cite{kempe2003maximizing}.\n\nThere are conventional sub-optimal solutions for solving (\\ref{opt_problem}), such as degree centrality, betweenness centrality, and closeness centrality~\\cite{koschutzki2005centrality}; these baselines are sketched below for reference. However, they are not sufficient to properly capture the relative importance of seeds as stand-alone entities in the D2D graph. The reason is that these existing methods select the seed set without considering the cumulative effect that the selected seeds have on each other during the diffusion of social content over the D2D graph~\\cite{aadithya2010efficient} and~\\cite{liu2010modeling}. 
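For concreteness, the conventional baselines can be instantiated directly with off-the-shelf graph routines; the per-community selection rule shown here (one seed per community, by the chosen centrality measure) is our assumption about how such baselines would be applied in this multi-community setting.\n\\begin{verbatim}
# Illustrative baselines (not the proposed method): one seed
# per community, picked by a classical centrality measure.
def centrality_seeds(G_d, communities, kind='degree'):
    measure = {
        'degree': nx.degree_centrality,
        'betweenness': nx.betweenness_centrality,
        'closeness': nx.closeness_centrality,
    }[kind]
    cent = measure(G_d)
    return {max(c, key=cent.get) for c in communities}
\\end{verbatim}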
However, in a multi-community social network that uses a multi-hop D2D network, the effect of each seed in offloading social data depends on the contributions of the other seeds. This is due to the physical connections between the UEs within the D2D network and the common social interests of the users in a given community. Consequently, we need to take into account the contributions of all possible combinations of UEs from different communities in offloading content through D2D links.\n\nGame-theoretic network centrality has recently attracted attention as a promising solution to address the above limitation, as in~\\cite{gomez2003centrality}. In particular, it has been shown that the cooperative game concept of the Shapley value (SV) is an effective measure of the importance of players within a group~\\cite{gomez2003centrality} and~\\cite{narayanam2011shapley}. In fact, the SV of each node in a graph can be considered as its influence when combined with other nodes, if the value function of the cooperative game is appropriately defined as the influence of the players on other nodes over the graph. As a result, the Shapley value of each UE in a given game can be interpreted as a centrality measure. The use of SV-based network centrality confers a high degree of flexibility (which is completely lacking in traditional centrality metrics) to capture the social ties and wireless channel gains between UEs. Moreover, this new paradigm has already been proved to be more useful than traditional centrality measures for certain real-life network applications, such as in~\\cite{narayanam2011shapley}. Thus, next, we define a game and compute its Shapley value to sub-optimally solve the centrality problem (\\ref{opt_problem}).\n\\vspace{-0.4cm}\n\\section{{Seed Selection based on a Cooperative Game}}\n\\label{Cooperative Game}\n\nGiven a graph $G^d$, we define a coalitional game $g(\\mathcal{N},v)$ whose set of players is the set of UEs $\\mathcal{N}$ (in the D2D graph). Here, $v$ is the game's \\textit{characteristic function}~\\cite{young2014handbook}. A coalition of UEs $\\mathcal{S}$ is simply any subset of $\\mathcal{N}$. The value of a given coalition $\\mathcal{S}$ is a real number that depends on the D2D graph, i.e., $v(\\mathcal{S},G^d)\\in \\mathbb{R}$. Thus, considering a coalition $\\mathcal{S}$ of UEs, the definition of the characteristic function must quantify the effect of this coalition on offloading content over the D2D graph $G^d$~\\cite{aadithya2010efficient}. Given the diffusion model in~(\\ref{Dmodel}), the influence of each UE in coalition $\\mathcal{S}$ on its one-hop neighbors, the number of one-hop neighbors in the current time slot, and the possible future multi-hop neighbors during subsequent time slots all affect the diffusion process $\\mathfrak{S}$. 
Thus, we define a value function for the game that reflects these parameters in its characteristic function formulation.\n\\vspace{-0.35cm}\n\\subsection{Value Function}\nGiven a D2D graph $G^d$, we define the value of the game as the sum of the $d_n$-influences of the coalition's members on the other UEs in the D2D graph, as follows:\n\\begin{equation}\nv(\\mathcal{S}_0,G^d)=\n \\begin{cases}\n \\sum\\limits_{n\\in N\\backslash \\mathcal{S}_0}\\alpha_n I_{d_n}(\\mathcal{S}_0)& \\quad \\text{if $\\mathcal{S}_0\\neq\\O$}\\\\\n 0 & \\quad \\text{else}.\\\\\n \\end{cases}\n\\label{value function}\n\\end{equation}\n\nA coalition that achieves a higher value in (\\ref{value function}) will have a higher probability of sending the cached data to the other UEs over the D2D graph. Here, $\\alpha_n$ is a price parameter per unit influence. Thus, the value function in (\\ref{value function}) is a monetary value, and the game is of \\textit{transferable utility}~\\cite{young2014handbook}. From (\\ref{d_influence}), we can see that the value of each coalition is related to the social ties between its members and their one-hop neighbors over the D2D graph.\n\\vspace{-0.35cm}\n\\subsection{Seed Selection using the Shapley Value}\nThe Shapley value, $\\phi_k(\\mathcal{S}_0,G^d)$, of a player $k$ in a coalition $\\mathcal{S}_0$ is given by $\\phi_k(\\mathcal{S}_0,G^d)=\\sum\\limits_{\\mathcal{R}\\subseteq\\mathcal{S}_0\\backslash\\{k\\}}\\frac{(|\\mathcal{S}_0|-|\\mathcal{R}|-1)!|\\mathcal{R}|!}{|\\mathcal{S}_0|!}\\big(v(\\mathcal{R}\\cup\\{k\\},G^d)-v(\\mathcal{R},G^d)\\big)\n$~\\cite{young2014handbook}. Consequently, if a UE $k$ achieves a high SV, this UE contributes more to the value function of any randomly chosen coalition of UEs $\\mathcal{R}$ compared to the other UEs in $\\mathcal{S}_0$. Following (\\ref{value function}), the defined value function captures the influence of coalition $\\mathcal{S}_0$ on the distribution of the cached data over $G^d$. Thus, a higher SV for a UE $k\\in \\mathcal{S}_0$ implies a higher degree of collaboration between this UE and the other UEs in $\\mathcal{S}_0$ for the purpose of transmitting cached content over D2D links. Next, we show how the Shapley value of each UE $k$ is related to its exclusive influence on the UEs which are not in its coalition.\n\\begin{theorem}\n\\textnormal{The Shapley value of each UE $k$ in coalition $\\mathcal{S}_0$ is equal to the exclusive influence of UE $k$ on the UEs which are not in $\\mathcal{S}_0$, and is given by:\n\\begin{equation}\n\\phi_k(G^d)=\\sum\\limits_{n:{ \\{ \\mathcal{C}_k\\cap \\mathcal{C}_{n,d} \\} \\neq \\emptyset}}\\frac{\\alpha_n}{1+|\\mathcal{C}_{n,d}|}.\n\\label{shaply_value_CF}\n\\end{equation}}\n\\end{theorem}\n\\begin{proof}\nFollowing (\\ref{per_influence}) in Proposition 1, we can write $\\phi_k(\\mathcal{S}_0,G^d)=\\sum\\limits_{\\mathcal{R}\\subseteq \\mathcal{S}_0\\backslash\\{k\\}}\\frac{(|\\mathcal{S}_0|-|\\mathcal{R}|-1)!|\\mathcal{R}|!}{|\\mathcal{S}_0|!}\\sum\\limits_{n\\in \\mathcal{N}\\backslash \\mathcal{R}}I_{d}(n,k)$. Given a coalition $\\mathcal{R}$ and a UE $k \\notin \\mathcal{R}$, the necessary condition under which UE $k$ exclusively affects another UE $n$ is that the intersection between the one-hop neighbor sets of all UEs in coalition $\\mathcal{R}$ and the $d$-distance neighbor set of UE $n$ is empty. This implies that $\\{\\cup_{j\\in \\mathcal{R}} \\mathcal{C}_j\\}\\cap \\mathcal{C}_{n,d} = \\emptyset$. 
Given that the permutations are chosen uniformly when computing the SV, it has been shown in~\\cite{aadithya2010efficient} that this necessary condition, $\\{\\cup_{j\\in \\mathcal{R}} \\mathcal{C}_j\\}\\cap \\mathcal{C}_{n,d} = \\emptyset$, is satisfied with probability $\\frac{1}{1+|\\mathcal{C}_{n,d}|}$. Thus, $\\text{Pr}(\\{\\cup_{j\\in \\mathcal{R}} \\mathcal{C}_j\\}\\cap \\mathcal{C}_{n,d} = \\emptyset)=\\frac{1}{1+|\\mathcal{C}_{n,d}|}$. Moreover, if UE $k$ wants to send its cached content to the set of $d$-distance neighbors of UE $n$, at least one of the members of the $d$-distance neighbor set of UE $n$ must be in the set of one-hop neighbors of UE $k$. This means that $\\mathcal{C}_k\\cap \\mathcal{C}_{n,d}\\neq \\emptyset$. Thus, the Shapley value of UE $k$ in the social weighted D2D graph $G^d$ is given by (\\ref{shaply_value_CF}).\n\\end{proof}\nThe relationship in (\\ref{shaply_value_CF}) shows that the SV of a given UE is affected by two key factors: a) the number of its one-hop neighbors and b) the UEs within distance $d$ of its one-hop neighbors. In other words, the Shapley value of UE $k$ in a coalition will be higher if UE $k$ has many one-hop neighbors and the distance of these one-hop neighbors from the other members of the coalition is less than $d$. Consequently, if we select the seed set according to the Shapley values of the UEs for the game defined in (\\ref{value function}) over the graph, each seed can send its cached content to those UEs that are not within distance $d$ of the one-hop neighbor sets of the other seeds.\n\nIf we just model the interactions of UEs on the simple D2D graph with the proposed game (\\ref{value function}), we will not capture the effect of two key parameters on the D2D sharing of the cached content: a) the social ties between the members of each community and b) the effect of the communities on one another. One natural way to capture these interactions among the different members of the social communities that interact over the D2D graph is to use a hypergraph model~\\cite{roy2015measuring} and~\\cite{liu2010modeling}.\n\\vspace{-0.35cm}\n\\subsection{Social Communities as Hypergraphs}\nIf the problem of caching and offloading content in multi-community multi-hop D2D networks is modeled by a simple graph, the effect of the communities on each other in offloading social content will not be fully characterized. Thus, we must consider the graph representation of the different layers: the physical D2D layer and the different layers of the social graphs. Although the interplay among the social communities pertaining to a D2D graph is very challenging to model, we use a hypergraph framework, which is a useful mathematical tool for analyzing complicated relationships among multiple entities~\\cite{liu2010modeling,Bai2016} and~\\cite{roy2015measuring}. The hypergraph model captures not only the effect of the social ties between the members of each community, but also the effect of the interactions between different communities on offloading the cached content. 
Hence, we model the set of individuals belonging to one community as a hyperedge of a hypergraph; hypergraphs can then model the multi-hop D2D network while taking into account the presence of multiple communities.\n\n\\begin{definition}\\label{Hypergraph_def}\n\\textnormal{Let $\\mathcal{N}$ be a finite set of UEs. A \\emph{hypergraph} $H=(\\mathcal{L}_1,\\mathcal{L}_2,...,\\mathcal{L}_L)$ is a family of subsets of $\\mathcal{N}$ such that $\\mathcal{L}_l \\neq \\emptyset$ and $\\cup_{l=1}^{L}\\mathcal{L}_l=\\mathcal{N}$. The UEs of $\\mathcal{N}$ are the vertices of the hypergraph, and the sets of communities $\\mathcal{L}_1,\\mathcal{L}_2,...,\\mathcal{L}_L$ are called hyperedges.}\n\\end{definition}\n\nHence, a hypergraph is a generalized graph in which an edge can consist of any subset of the vertices, while an edge in a traditional graph consists of exactly two vertices~\\cite{Bai2016} and~\\cite{liu2010modeling}. One way of analyzing a hypergraph is to model it using a line graph and then to analyze that line graph~\\cite{liu2010modeling}. The line graph of a hypergraph is a weighted graph whose vertices are the hyperedges of the hypergraph and the weight on each edge is related to the interaction between two hyperedges of the hypergraph. Following the properties of the multi-community multi-hop D2D network, we define two weighted graphs from the hypergraph model.\n\\begin{definition}\n\\textnormal{A \\textit{directed influence-weighted graph} of a hypergraph $H$ is defined as a directed weighted graph $D_i(H)=(\\{1,...,L\\},\\mathcal{E}_H,w_i)$, in which each node of $D_i(H)$ represents one of the communities in $\\mathfrak{L}$, $\\mathcal{E}_H=\\{(u,v)| \\mathcal{L}_u,\\mathcal{L}_v \\in \\mathfrak{L}, \\exists\\text{ }m\\text{ and }m' \\in \\mathcal{L}_u : \\mathcal{P}^d_{mm'} \\cap \\mathcal{L}_v \\neq {\\O} \\}$ and $w_i:\\mathcal{E}_H\\rightarrow R$.}\n\\end{definition}\n\n$\\mathcal{E}_H$ captures the fact that, if the shortest path between two UEs in community $\\mathcal{L}_u$ passes through community $\\mathcal{L}_v$ on the D2D graph, then community $\\mathcal{L}_u$ will affect community $\\mathcal{L}_v$. The reason is that the social content of $\\mathcal{L}_u$ passes through some UEs in community $\\mathcal{L}_v$ and, thus, some UEs in $\\mathcal{L}_v$ receive the social content of $\\mathcal{L}_u$ over D2D links. The weight of an edge $\\{u,v\\}\\in \\mathcal{E}_H$ is given by:\n\\begin{equation}\nw_i(\\{u,v\\})=\\sum_{\\substack{\\forall m,m'\\in \\mathcal{L}_u \\\\ \\mathcal{P}^d_{mm'} \\cap \\mathcal{L}_v \\neq {\\O}}} {w_{mm'}}\\times |\\mathcal{P}^d_{mm'}\\cap \\mathcal{L}_v|,\n\\label{weight of Di_community}\n\\end{equation}\nwhere $\\{(m,m')|\\forall m,m'\\in \\mathcal{L}_u, \\mathcal{P}^d_{mm'}\\cap \\mathcal{L}_v \\neq {\\O}\\}$ is the set of pairs in community $\\mathcal{L}_u$ whose shortest path passes through community $\\mathcal{L}_v$, $w_{mm'}$ is the social tie between UEs $m$ and $m'$ belonging to community $\\mathcal{L}_u$, and $|\\mathcal{P}^d_{mm'}\\cap \\mathcal{L}_v|$ is the number of UEs of $\\mathcal{L}_v$ along the shortest path between UEs $m$ and $m'$; a sketch of this weight computation is given below. 
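Continuing the illustrative Python sketch from above (same assumed data layout; not the authors' implementation), the weight in (\\ref{weight of Di_community}) can be evaluated directly from shortest paths on $G^d$; when several shortest paths exist, the sketch again uses the one returned by the library.\n\\begin{verbatim}
def influence_line_weight(G_d, L_u, L_v, ties):
    # w_i({u, v}) of eq. (8): for every pair (m, m') in community u
    # whose shortest D2D path crosses community v, add the social
    # tie w_mm' scaled by the number of v-members on that path.
    w = 0.0
    for m in L_u:
        for mp in L_u - {m}:
            try:
                path = nx.shortest_path(G_d, m, mp)
            except nx.NetworkXNoPath:
                continue
            crossing = set(path) & L_v
            if crossing:
                w += ties.get((m, mp), 0.0) * len(crossing)
    return w
\\end{verbatim}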
Thus, the directed influence-weighted graph $D_i(H)$ captures the social ties of the end pairs of the paths as well as the relays that one community provides for another on the D2D graph.\n\\begin{definition}\n\\textnormal{A \\textit{directed connectivity-weighted graph} of hypergraph $H$ is defined as a directed weighted graph $D_c(H)=(\\{1,...,L\\},\\mathcal{E}_H,w_c)$, where the vertices refer to the communities in $\\mathfrak{L}$, $\\mathcal{E}_H=\\{(u,v)| \\mathcal{L}_u,\\mathcal{L}_v \\in \\mathfrak{L}, \\exists\\text{ }m\\text{ and }m' \\in \\mathcal{L}_u : \\mathcal{P}^d_{mm'} \\cap \\mathcal{L}_v \\neq {\\O} \\}$, and $w_c(\\{u,v\\})=|\\{(m,m')|\\forall m,m'\\in \\mathcal{L}_u, \\mathcal{P}^d_{mm'}\\cap \\mathcal{L}_v \\neq {\\O}\\}|$.}\n\\end{definition}\nThe weight on the link from community $\\mathcal{L}_u$ to community $\\mathcal{L}_v$ is equal to the number of pairs in community $\\mathcal{L}_u$ whose shortest path passes through $\\mathcal{L}_v$. Thus, the directed connectivity-weighted graph captures the effects of two communities on each other when the UEs of these two communities provide shortest paths for each other in the D2D graph.\n\\vspace{-0.35cm}\n\\subsection{Proposed Approach for Content Placement}\nTo solve the influence optimization problem in (\\ref{opt_problem}) within a multi-community, multi-hop D2D network, we must consider two parameters: a) the effect of each member on the other members of its community and b) the effect of one community on the other communities. If we use the game model presented in (\\ref{value function}) for the social graph of each community, then the SV of each player will capture only the exclusive influence of each user on the other members of its community. If we use the game in (\\ref{value function}) for the directed influence-weighted and connectivity-weighted line graphs of the hypergraph $H$, then the Shapley value of each player will capture the exclusive influence of one community on the other communities. To capture the influence of each UE on its community's members and on the members of the other communities using a single metric, we define an offloading power metric for each UE. A larger offloading power means that the UE has a higher capability for offloading social content in the multi-community multi-hop D2D graph. The offloading power of a given UE $k$ that belongs to a community $\\mathcal{L}_j$ on the line graph $D$ of the hypergraph $H$ is given by:\n\\begin{equation}\nO_k=\\frac{\\phi_k(G_j^s)}{\\sum_{m\\in \\mathcal{L}_j} \\phi_m(G_j^s)}\\phi_j(D),\n\\label{shaply_value_CF_H}\n\\end{equation}\nwhere $\\phi_k(G_j^s)$ is the Shapley value of UE $k$ on its social graph $G_j^s$, as given by (\\ref{shaply_value_CF}), and $\\phi_j(D)$ is the Shapley value of community $\\mathcal{L}_j$ over the directed weighted line graph $D_i(H)$ or $D_c(H)$ of hypergraph $H$, also given by (\\ref{shaply_value_CF}). After defining $O_k$, we compute the offloading power of each UE in the D2D graph using (\\ref{shaply_value_CF_H}). In each community, the UE that has the highest offloading power among the members of its community is selected as the seed from that community. Thus, the seed set includes the $L$ UEs that have the highest offloading power within their respective communities.\n\\vspace{-0.4cm}\n{\\section{Complexity Analysis}\n\\label{Dec:Complexity}\nThe complexity of the proposed approach stems from the computation of the Shapley value; thanks to Theorem 1, this computation reduces to the simple neighborhood bookkeeping sketched below. 
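The following illustrative sketch (same assumptions as above; a uniform price $\\alpha_n$ is an additional simplification of ours) evaluates the closed form (\\ref{shaply_value_CF}) without enumerating coalitions, reusing the \\texttt{d\\_distance\\_neighbors} helper defined earlier.\n\\begin{verbatim}
def shapley_value(G_d, G_i, k, d, alpha=1.0):
    # Eq. (7): sum alpha_n / (1 + |C_{n,d}|) over every UE n whose
    # d-distance neighborhood intersects k's one-hop neighborhood.
    C_k = set(G_d.successors(k))  # one-hop neighbors (out-edges assumed)
    phi = 0.0
    for n in G_d.nodes:
        if n == k:
            continue
        C_nd = d_distance_neighbors(G_i, n, d)
        if C_k & C_nd:
            phi += alpha / (1.0 + len(C_nd))
    return phi
\\end{verbatim}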
A direct application of the original Shapley value formula involves considering $O(2^{|\\mathcal{N}|})$ coalitions~\\cite{young2014handbook}. Such an exponential complexity in the number of users can be prohibitive for larger networks. However, based on Theorem 1, the complexity of evaluating the exact formula for the Shapley value in (\\ref{shaply_value_CF}) is governed by the computation of user degrees and of shortest paths between users in the influence graph. The complexity of calculating the user degree is $O(|\\mathcal{N}|)$, and the complexity of calculating the shortest paths from one user is $O(|\\mathcal{E}_d|+|\\mathcal{N}|\\log |\\mathcal{N}|)$~\\cite{aadithya2010efficient}. Consequently, the complexity of the proposed approach is $O(|\\mathcal{N}||\\mathcal{E}_d|+|\\mathcal{N}|^2\\log |\\mathcal{N}|)$, which is reasonable for the type of problems we are dealing with.}\n\nIn practice, the worst-case situation for evaluating the exact Shapley value formula in (\\ref{shaply_value_CF}) is not likely to occur. This is due to the fact that, in practical scenarios, the probability that every user is reachable from all other users that are within a cutoff distance is low. {For example, the proposed approach can be used in real scenarios in the context of a D2D local area network (LAN)~\\cite{Last}. In D2D LAN scenarios, the users have a cluster-based distribution, such as in a campus, coffee shop, mall, or football stadium, and in each cluster, the number of users as well as the diameter of the D2D graph are relatively small.} Thus, the complexity of the proposed approach in such practical, cluster-based scenarios is acceptable.\n\\vspace{-0.4cm}\n\\section{Simulation Results}\n\\label{Dec:Simulation}\n\nIn our simulations, we compare the proposed social-aware framework with conventional centrality approaches. We consider three metrics for selecting the seed set: the SV metric on the influence graph (SV), given by (\\ref{shaply_value_CF}); the offloading power from the hypergraph modeled by the directed influence-weighted graph (SV:influence), given by (\\ref{shaply_value_CF_H}); and the offloading power from the hypergraph modeled by the directed connectivity-weighted graph (SV:connectivity), also according to (\\ref{shaply_value_CF_H}). The baselines used for comparison are the conventional centrality measures: degree centrality, betweenness centrality, and closeness centrality~\\cite{zhang2014recent} and~\\cite{aadithya2010efficient}. We consider a BS at the center of a circular area having a radius of $1$~km. We consider that the UEs form spatial clusters in this circular area. The locations of the cluster centers are a realization of a Poisson point process, and the UEs are randomly distributed around the cluster centers' locations. The UEs are randomly associated to the communities. The strength of the social tie between any two UEs in one community is uniformly selected from $0$ to $1$. Each time slot lasts $1$ millisecond. The bandwidth of each RB is $15$ kHz. The carrier frequency is $2$ GHz. The maximum power of each UE is $10$~mW, which can be equally divided among RBs. The noise power spectral density $N_0$ is $-170$ dBm per Hz. We assume a path loss exponent of $2.5$ and Rayleigh fading with unit mean for the channel model of the D2D links. We set the length of each packet to $100$ bits and the target bit error rate to $10^{-7}$; with these parameters, the link model of Section~\\ref{Dec:Sys-Model} can be sanity-checked as in the short example below. 
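The following back-of-the-envelope check is ours, with an assumed link distance of $100$~m and no interference or fading, and assuming the logarithm in the rate equation is base $2$ (bits per second); it verifies that a $100$-bit packet fits comfortably within one $1$~ms slot.\n\\begin{verbatim}
import math

P_e  = 1e-7                        # target bit error rate
beta = -1.5 / math.log(5 * P_e)    # SNR gap, approx. 0.103
B_w  = 15e3                        # RB bandwidth [Hz]
N0   = 10 ** (-170 / 10) * 1e-3    # -170 dBm/Hz -> 1e-20 W/Hz
p    = 10e-3                       # UE transmit power [W]
h    = 100.0 ** -2.5               # mean gain at 100 m (assumed)
snr  = beta * p * h / (B_w * N0)   # interference-free illustration
R    = B_w * math.log2(1 + snr)    # approx. 3.9e5 bit/s
print(R, 100 / R <= 1e-3)          # 100-bit packet fits a 1 ms slot
\\end{verbatim}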
All statistical results are averaged over a large number of independent runs.\n\n\nFig.~\\ref{distance} shows the impact of the distance parameter $d$ on the offloading speed of the cached social content. The offloading speed of social content is the average difference between the numbers of UEs that have received the cached data through D2D sharing in two consecutive time slots. From Fig.~\\ref{distance}, we can see that the offloading speed increases sharply as the distance parameter increases, and then decreases for large distances. Fig.~\\ref{distance} shows that the offloading speed reaches a maximum value when the distance parameter is around $30\\%$, $40\\%$ and $50\\%$ of the network diameter for the SV, SV:connectivity, and SV:influence approaches, respectively. Clearly, the SV:influence approach has the highest offloading speed and the SV scheme achieves the lowest speed. In Fig.~\\ref{distance}, for a low value of $d$, the $d$-distance neighbor set of each UE will not be far from this UE. In this case, the probability that the one-hop neighbor set of each UE has common members with the $d$-distance neighbor sets of other UEs decreases. Thus, the size of the one-hop neighbor set becomes more effective in increasing the SV, following (\\ref{shaply_value_CF}). Consequently, UEs having a large number of one-hop neighbors, which lie in crowded portions of the graph, are selected as seeds for low $d$. However, for a larger $d$, the $d$-distance neighbors of each UE will be located relatively far from this UE. Thus, the one-hop neighbors of one UE are more likely to be within distance $d$ of other UEs. Consequently, UEs in sparse parts of the graph are selected as seeds for high $d$. However, when the distance $d$ is around $40\\%$ of the network diameter, neither the UEs in sparse areas nor those in crowded areas of the D2D graph dominate the seed selection. In this case, the UEs in both sparse and crowded areas receive cached content in fewer hops from the selected seeds. Consequently, the average offloading speed is maximized for $d$ around $40\\%$ of the network diameter. The offloading speed resulting from the SV:connectivity and SV:influence approaches is higher than that of the SV approach, because they consider the effects of both social ties and social communities in selecting seeds, while the SV approach only applies social tie information among UEs. Finally, the offloading speed of the SV:connectivity approach is lower than that of SV:influence, since the latter considers not only the number of connections between two social communities on the D2D graph but also the social ties of these connections.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=2.75 in]{distance.eps}\n\\caption{\\small Average offloading speed vs. distance $d$, when the number\nof clusters is 10 and the average number of UEs per cluster is 10.}\n\\vspace{-0.9cm}\n\\label{distance}\n\\end{figure}\n\nFig.~\\ref{hist} shows the distribution of the UEs' SV and offloading power, normalized by their maximum values. The numbers of UEs that attain the highest normalized value are 2, 6, and 11 for the SV, SV:connectivity, and SV:influence approaches, respectively. In Fig.~\\ref{hist}, we can see that the normalized SV of most UEs is around $0.4$, while the SV:connectivity and SV:influence approaches assign a normalized offloading power of $0.8$ to most of the UEs. Moreover, Fig.~\\ref{hist} shows that the number of UEs with a high normalized offloading power is larger than the number with a high normalized SV. 
This means that there are some UEs that are considered effective under the SV:connectivity and SV:influence approaches, while the SV approach categorizes them as ineffective in offloading social content over the D2D graph. This is because UEs that have a low SV have a low connectivity degree, and the UEs in their one-hop neighbor sets lie in the $d$-distance neighbor sets of other, effective UEs. Thus, the number of UEs with a high SV in one community is low. These UEs with a low SV in one community can still have a high offloading power, because they may lie on shortest paths between the members of another community. Thus, they can be critical in offloading content between the members of other communities.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=2.75 in]{hist.eps}\n\\caption{\\small The histogram of the SV and offloading power of the UEs when the UEs are distributed according to a cluster point process.}\n\\vspace{-0.7cm}\n\\label{hist}\n\\end{figure}\n\nFig.~\\ref{Naffected} shows the impact of the number of UEs on the number of influenced UEs within the D2D graph when $d=40\\%$ of the network diameter. From Fig.~\\ref{Naffected}, we can see that the number of influenced UEs increases when the number of UEs increases. Clearly, the number of influenced UEs resulting from the proposed schemes is higher than that resulting from the conventional centrality approaches. The behavior of betweenness centrality is the closest to that of the Shapley value centrality. The three conventional centrality approaches do not consider the joint effect of the other seeds on sharing cached content with other UEs. The closeness and degree centralities usually choose seeds which are close to each other in the crowded parts of the D2D graph, and betweenness centrality usually chooses seeds which lie on most of the shortest paths between other UEs. Thus, the UEs located in the crowded portions of the D2D graph which are connected to other crowded areas are selected as the seeds. The proposed approaches increase the exclusive effect of each seed by decreasing the number of common members among the $d$-distance neighbors of the one-hop neighbors of the selected seeds. Thus, the seeds resulting from the proposed approaches are distributed across the D2D graph. Since the SV:connectivity and SV:influence approaches consider the effect of the social communities on each other, the seed sets selected by these approaches influence the largest number of UEs. {On average, SV:influence achieves $12\\%$, $19\\%$ and $21\\%$ improvements in the number of affected UEs compared to the betweenness, degree, and closeness approaches, respectively.}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=2.75in]{Naffected.eps}\n\\caption{\\small Number of affected UEs for $d_n=40\\%$ of the D2D graph diameter when the number\nof clusters is 10.}\n\\vspace{-0.7cm}\n\\label{Naffected}\n\\end{figure}\n\nFig.~\\ref{NSpeed} shows the impact of the number of UEs on the average social content offload speed when $d=40\\%$ of the network diameter. From Fig.~\\ref{NSpeed}, we can see that the offload speed increases when the number of UEs increases. From this figure, we can see that degree centrality, our proposed approaches, and betweenness centrality achieve the highest, intermediate, and lowest average offloading speed of social content, respectively. The reason is that, under degree centrality, the seeds offload social content to many UEs among their one-hop neighbors. 
The average speed of degree centrality decreases in the last time slots because the total number of affected UEs is low (Fig.~\\ref{Naffected}). {Although the offloading speed of degree centrality is the highest, the total number of UEs affected under degree centrality is nearly the lowest (Fig.~\\ref{Naffected}). Hence, under degree centrality, the seeds can quickly share their cached content. However, the number of UEs affected by more than one seed is large, because degree centrality does not consider the exclusive influence of the seeds. In contrast, for the proposed approaches, even though the offloading speed is below that of degree centrality, the total number of affected UEs is the highest (see Fig.~\\ref{Naffected}). The reason is that the number of UEs affected by only one seed is high, since the exclusive influence of each seed is high in our proposed approach. Thus, it takes more time to share the cached content with other UEs. However, the content is ultimately received by a larger number of UEs.}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=2.75in]{NSpeed.eps}\n\\caption{\\small The offloading speed of the cached social content for $d_n=40\\%$ of the D2D graph diameter when the number of clusters is 10.}\n\\vspace{-0.8cm}\n\\label{NSpeed}\n\\end{figure}\n\nFor a real-world case, we use Netvizz, a data collection and extraction application that allows the exportation of data in standard file formats from different sections of the Facebook social networking service~\\cite{rieder2013studying}. Friendship networks, groups, and pages can thus be analyzed quantitatively and qualitatively with regard to demographical, postdemographical, and relational characteristics~\\cite{rieder2013studying}. We extract the data from three group pages of students at Virginia Tech on Facebook. These groups are related to sports clubs that gather communities of students who are interested in a common sport. Based on the results from Netvizz, the social network of each community is extracted, in which the social tie between two nodes captures the shared interest of two members in common data, such as a certain sports video file. For the D2D graph capturing the locations of users and their possible D2D links, we distribute the users based on a cluster process over an area of $1000$m$\\times1000$m that represents a campus area; then, we randomly map each user to one of the members of the three communities. The strength of the social tie between any two UEs in one community is based on the data extracted from the Facebook group pages.\n\nFig.~\\ref{Naffected_real} shows the impact of the number of UEs on the number of influenced UEs within the D2D graph for the real-world case. Fig.~\\ref{Naffected_real} shows that the number of influenced UEs increases with the number of UEs. Moreover, the proposed schemes yield a higher number of influenced UEs compared to the conventional centrality measures. The reason is that the proposed approaches increase the exclusive effect of each seed over the D2D graph. In addition, the SV:connectivity and SV:influence approaches consider the mutual effect of the social communities. {Hence, the seed sets selected by these approaches yield the largest number of influenced UEs. 
For the real-world case, on average, SV:influence achieves $13\\%$, $14\\%$, and $16\\%$ improvements in the number of affected UEs compared to the betweenness, degree, and closeness approaches, respectively.}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=2.75in]{Naffected_real.eps}\n\\caption{\\small Number of affected UEs for a real-world case.}\n\\vspace{-0.5cm}\n\\label{Naffected_real}\n\\end{figure}\n\nFig.~\\ref{NSpeed_real} shows the impact of the number of UEs on the average social content offload speed for the real-world case. Fig.~\\ref{NSpeed_real} shows that the offload speed increases with the number of UEs. From this figure, we can see that degree centrality, our proposed approaches, and betweenness centrality achieve the highest, intermediate, and lowest average offloading speed of social content, respectively. The reason is that the offloading speed is high in the initial time slots for degree centrality, because the seeds offload social content to many UEs among their one-hop neighbors. {Although the offloading speed of degree centrality is the highest, the total number of UEs affected under degree centrality is nearly the lowest, as shown in Fig.~\\ref{Naffected_real}. This means that the seeds can quickly share their cached content, but they share it with common UEs. Although the offloading speed of the proposed approaches is lower than that of degree centrality, the proposed approaches outperform all other centrality measures.}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=2.75in]{NSpeed_real.eps}\n\\caption{\\small The offloading speed of the cached social content for a real-world case.}\n\\vspace{-0.7cm}\n\\label{NSpeed_real}\n\\end{figure}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=2.75in]{PDFdelay_real.eps}\n\\caption{\\small {The probability density function of the offloading delay per UE.}}\n\\vspace{-0.3cm}\n\\label{PDF_delay}\n\\end{figure}\n{Fig.~\\ref{PDF_delay} shows the probability density function (PDF) of the offloading delay per UE. As we can see from Fig.~\\ref{PDF_delay}, the average offloading delay per UE for the SV, SV:connectivity, and SV:influence approaches is 3.86, 3.77, and 3.58 milliseconds, respectively, while for degree centrality the average offloading delay is 3.54 milliseconds. {Thus, the average offloading delay per UE resulting from the proposed SV-based approaches is only about $0.2$ milliseconds higher than that of degree centrality.}}\n\n{To further illustrate how our approach can work in mobile cases, we simulate a new setup in which the users move. In our simulation, the social graph of each community is stable, while the D2D graphs randomly change over time. We move 170 users with different speeds based on a random walk model. We consider a higher correlation between the movement patterns of users that have strong social ties. After each change in the D2D graph due to the users' mobility, we compare two scenarios: using the initially selected seeds for the new, modified D2D graph (A); and selecting new seeds based on the new D2D graph (B). Fig.~\\ref{Dif_number} shows the difference between the numbers of influenced UEs for the two seed selection scenarios A and B as a function of the users' speed. As we can see from Fig.~\\ref{Dif_number}, the difference between the numbers of influenced UEs increases with the speed of the UEs. 
The increase is due to the fact that a higher UE speed causes larger dynamic changes in the D2D graph: scenario B recomputes the seed set after every change, while scenario A retains the initially selected seeds. On the average, the difference between the number of influenced UEs for the two scenarios A and B is around 4 for the SV-based approaches and less than 6 for the other approaches, which represents a very small fraction of users. This stems from the correlated mobility patterns of the users~\\cite{R2_51, R2_52}. Thus, for low-speed mobile users, if we do not recompute the SV after every change in the D2D network, just 4 out of 170 users (around 2\\%) will not be influenced, which demonstrates the effectiveness of our approach even for mobile cases.}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=2.75in]{Dif_number_UEs.eps}\n\\caption{\\small {Difference between the number of influenced UEs.}}\n\\vspace{-0.6cm}\n\\label{Dif_number}\n\\end{figure}\n\\vspace{-0.4cm}\n\\section{Conclusion}\n\\label{Dec:Conclusion}\nIn this paper, we have proposed a novel context-aware framework for cache placement at wireless UEs in order to improve social content offloading in a D2D-enhanced cellular network. In this network, the users belong to different social communities while the UEs form a single multi-hop D2D network. We have exploited the multi-community social context of the users to improve the local offloading of cached content by allowing an effective use of multi-hop D2D sharing. Based on the social ties of the users, a cooperative game between UEs has been proposed, in which the value of a coalition is equal to the $d_n$-influence of its members on other UEs over the D2D graph. We have proved that the Shapley value of each UE in the proposed cooperative game captures the exclusive effect of that UE on content offloading over D2D links. Due to the social ties between members of each community and the D2D links between UEs, we have modeled the cache placement problem using a hypergraph, which is analyzed using two line graph models. Using the proposed line graphs coupled with the SV derived from the cooperative game, we have defined an offloading power for each UE in the multi-community multi-hop D2D network. Hence, we have selected as cache locations the UEs with high offloading power, i.e., those having the largest exclusive effect on the members of both their own and other communities. Simulation results have shown that, on the average, the proposed approach yields significant improvements in terms of the number of UEs that offload popular content, compared to the schemes based on the classical centrality measures.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nNeural machine translation has become a set of standardised approaches that has led to remarkable improvements, particularly in terms of human evaluation. It has now been successfully applied in production environments by major translation technology providers.\n\n\\textit{OpenNMT}\\footnote{\\url{http:\/\/opennmt.net}} is an open (MIT licensed) and joint initiative by SYSTRAN and the Harvard NLP group to develop an NMT toolkit for researchers and engineers to benchmark against, learn from, extend and build upon. 
It focuses on providing a production-grade system with an extensive set of model and training options to cover a wide range of needs in academia and industry.\n\n\\section{Description}\n\n\\textit{OpenNMT} implements the complete sequence-to-sequence approach that achieved state-of-the-art results in many tasks, including machine translation. Based on the Torch framework, this model comes with many extensions that are known to be useful, including multi-layer RNN, attention, bidirectional encoder, word features, input feeding, residual connections, beam search, and several others.\nThe toolkit also provides various options to customize the training process depending on the task and data, with multi-GPU support, re-training, data sampling and learning rate decay strategies.\n\nToolkits like \\textit{Nematus}\\footnote{\\url{https:\/\/github.com\/rsennrich\/nematus}} or Google's \\textit{seq2seq}\\footnote{\\url{https:\/\/github.com\/google\/seq2seq}} share similar goals and implementations, but with frequent limitations in efficiency, tooling, features or documentation, which \\textit{OpenNMT} tries to address.\n\n\\section{Ecosystem}\n\nBeyond the core project, \\textit{OpenNMT} aims to provide an ecosystem around NMT and sequence modelling. It comes with an optimised C++ inference engine based on the Eigen library to make deployment and integration of models easy and efficient. The library has also been used on multiple tasks, including image-to-text, speech-to-text and summarisation. We also provide recipes to automatise the training process, demo servers to quickly showcase results and a benchmark platform\\footnote{\\url{http:\/\/nmt-benchmark.net\/}} to compare approaches.\n\n\\section{Community}\n\n\\textit{OpenNMT} is also a community\\footnote{\\url{http:\/\/forum.opennmt.net\/}} providing various kinds of support on using the project, addressing specific training processes and discussing the current and future state of neural machine translation research and development. The online forum counts more than 100 users and the project has been starred by over 1,000 users on GitHub.\n\n\\section{Conclusion}\n\nWe introduce \\textit{OpenNMT}, a research toolkit for neural MT that prioritises efficiency and modularity. We hope to maintain strong machine translation results at the research frontier, providing a stable framework for production use while enlarging an active and motivated community.\n\n\n\\end{document}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nSince the conception of quantum mechanics, the quantum many-body problem has been of central importance. Due to a lack of exact methods, one has to study these systems using approximate techniques such as mean field theory or perturbative expansions in a small parameter, or using an effective description obtained from symmetry or renormalization arguments. Another approach that has proven to be very fruitful is the use of toy models and trial wave functions. Tensor network states constitute a class of such trial wave functions that has emerged in the past decades from the interplay of quantum information theory and condensed matter theory \\cite{VerstraeteMurgCirac08}. The power of these states is two-sided. On the one hand, they can be used to study universal properties of quantum many-body systems, which makes them interesting objects from the theoretical perspective. 
On the other hand, they allow for novel methods to simulate the many-body problem, which makes them interesting from the point of view of computational physics. For example, in one dimension Matrix Product States not only underpin the highly successful Density Matrix Renormalization Group algorithm \\cite{DMRG,OstlundRommer}, but have also been used to completely classify all gapped phases of matter in quantum spin chains \\cite{1Done,1Dtwo,SchuchGarciaCirac11}.\n\nIn this work we focus on two-dimensional tensor network states, so-called Projected Entangled-Pair States (PEPS) \\cite{peps}. Because of their local structure these trial states serve as a window through which we can observe the entanglement properties of ground states of complex quantum many-body systems. We use this to study ground states of local two-dimensional Hamiltonians that have topological order, a kind of quantum order characterized by locally indistinguishable ground states and exotic excitations which can behave differently from bosons and fermions \\cite{einarsson,Wen90}.\n\nIn recent years it became clear that topological order can be interpreted as a property of (a set of) states \\cite{BravyiHastingsVerstraete}; the local Hamiltonian seems to be merely a tool to stabilize the relevant ground state subspace. It was realized that topological order manifests itself in entanglement properties such as the entanglement entropy \\cite{KitaevPreskill,levinwenentanglement}. This has resulted in a definition of topological order via long-range entanglement \\cite{LRE}. More recent works have shown that the ground state subspace on a torus contains information about the topological excitations \\cite{MES} and that for chiral phases the so-called entanglement spectrum reveals the nature of the edge physics \\cite{LiHaldane}. In Ref.~\\cite{haah}, it was even shown that for a restricted class of Hamiltonians a single ground state contains sufficient information to obtain the $S$~matrix, an invariant for topological phases.\n\nUtilizing the transparent entanglement structure of PEPS, we further examine this line of reasoning. We consider a class of PEPS with nonchiral topological order, which were introduced in \\cite{Criticality,Ginjectivity,Buerschaper14,MPOpaper}. The intrinsic topological order in these states is characterized by a Matrix Product Operator (MPO) at the virtual level, which acts as a projector onto the virtual subspace on which the PEPS map is injective. This class of trial wave functions was shown to provide an exact description of certain renormalization group fixed-point models such as discrete gauge theories \\cite{SPTpaper} and string-net models~\\cite{stringnet1,stringnet2,MPOpaper}, but can also be perturbed away from the fixed point in order to study e.g. topological phase transitions \\cite{transfermatrix,shadows}. We show that the entanglement structure of these `MPO-injective' PEPS enables a full characterization of the topological sectors of the corresponding quantum phase. In other words, the injectivity space of the tensors in a finite region of a single MPO-injective PEPS contains all the information needed to fully determine the topological phase. More concretely, we show that the MPO that determines the entanglement structure of the PEPS allows one to construct a $C^*$-algebra whose central idempotents correspond to the topological sectors. A similar identification of topological sectors with central idempotents was made in \\cite{Qalgebra,haah}. 
The advantage of the PEPS approach is that the idempotents can be used to explicitly write down generic wave functions containing anyonic excitations, which allows for a deeper understanding of how topological theories are realized in the ground states of local Hamiltonians. In addition, we obtain an intuitive picture of what happens in the wave function when these anyons are manipulated, and we can extract all topological information such as topological spins, the $S$ matrix, fusion properties and braiding matrices. We would like to note that a very similar framework was recently discussed in the context of statistical mechanics \\cite{Aasen}, where universal information about the CFT describing the critical point was obtained.\n\nSection~\\ref{sec:overview} starts with an overview of the paper. In Section~\\ref{sec:MPO} we discuss general properties of projector MPOs and their connection to fusion categories. The construction of MPO-injective PEPS, as originally presented in \\cite{MPOpaper}, is worked out in detail in Section~\\ref{sec:MPOinjPEPS}. Section~\\ref{sec:anyons} explains how to obtain the topological sectors and construct PEPS containing anyonic excitations. The corresponding anyon ansatz is illustrated for discrete gauge theories and string-nets in the examples of Section~\\ref{sec:examples}. Section~\\ref{sec:conclusions} contains a discussion of the results and possible directions for future work. The appendices contain several technical calculations and detailed results for some specific examples.\n\n\\section{Overview of the paper} \\label{sec:overview}\n\nIn this section we convey the main ideas presented in this work, before obscuring them with technical details. We start by considering a large class of projector matrix product operators $P$ that can be written as $P = \\sum_a w_a O_a$, where the $w_a$ are complex numbers and $O_a$ are injective (single block) matrix product operators with periodic boundary conditions:\n\\begin{align} \nO_a = \\vcenter{\\hbox{ \n\\includegraphics[width=0.15\\linewidth]{algebra1}}}\n\\end{align}\n\nBecause we want $P$ to be a projector for every length, it follows that $\\{O_a\\}$ forms the basis of a matrix algebra with non-negative integer structure coefficients: $O_aO_b = \\sum_c N_{ab}^c O_c$, with $N_{ab}^c \\in \\mathbb{N}$. In section \\ref{sec:MPO} we work out the details of this algebra and show that we can associate many concepts to it that are familiar from fusion categories.\n\nIn section \\ref{sec:MPOinjPEPS} we turn to tensor network states on two-dimensional lattices, called Projected Entangled-Pair States (PEPS). We discuss how the MPO tensors can be used to construct a (family of) PEPS that satisfies the axioms of MPO-injectivity, i.e. the algebra $\\{O_a\\}$ constitutes the virtual `symmetry' algebra of the local PEPS tensors and the virtual support of the PEPS on any topologically trivial region corresponds to the subspace determined by the MPO projector $P$ along the boundary of that region. As shown in \\cite{MPOpaper}, the axioms of MPO-injectivity allow us to prove that such PEPS are unique ground states of their corresponding parent Hamiltonians on topologically trivial manifolds. We can also explicitly characterize the degenerate set of ground states on topologically non-trivial manifolds. The most important axiom of MPO-injectivity is the `pulling through' property. To prove that it is satisfied by our construction, we need to impose that the MPO tensors satisfy the so-called \\emph{zipper} condition, i.e. 
there must exist a three-leg `fusion' tensor X, which we depict as a grey box, such that the following identity holds:\n\\begin{align}\\label{overviewzipper}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.35\\linewidth]{overviewzipper}}}\\,\\,\\, .\n\\end{align}\n\nRemarkably, the same properties of the MPOs $O_a$ that guarantee that the pulling through property holds also allow us to construct a \\emph{second type} of MPO algebra. The basis of this second algebra is bigger than that of the first one and its elements can be presented schematically as:\n\\begin{align}\\label{algebra2}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.15\\linewidth]{algebra2}}}\\,\\,\\, ,\n\\end{align}\nwhere the red square is a new type of tensor, defined in the main text, that is completely determined by the MPOs $O_a$. We show that this second algebra is actually a $C^*$-algebra, hence it is isomorphic to a direct sum of full matrix algebras. We use this decomposition to identify the topological sectors with the different blocks, or equivalently, with the central idempotents that project onto these blocks. A large part of the paper is then devoted to showing that, once one has identified these central idempotents (for which we give a constructive algorithm), one can construct MPO-injective PEPS containing anyonic excitations and study their topological properties. For example, the topological spin $h_i$ of an anyon $i$ can be obtained via the identity:\n\\begin{align}\\label{overviewspin}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.18\\linewidth]{overviewspin2}}} = e^{i2\\pi h_i}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.13\\linewidth]{overviewspin1}}}\\,\\,\\, ,\n\\end{align}\nwhere we used a blue square to denote a central idempotent (here the one corresponding to anyon $i$) as opposed to a red square, which denotes a basis element of the second algebra. In a similar manner one can extract the $S$-matrix, fusion relations and braiding matrices in a way that does not scale with the system size. \n\nLet us now illustrate this general scheme for the simplest example, namely Kitaev's Toric Code \\cite{toriccode}. Note that the excitations in the Toric Code are already completely understood in the framework of G-injective PEPS \\cite{Ginjectivity}, which is a specific subset of the MPO-injective PEPS formalism with building blocks $O_a$ that are tensor products of local operators, i.e. the MPOs have virtual bond dimension 1. However, as a pedagogical example we would like to study the anyons in the general language introduced above. In the standard PEPS construction of the Toric Code \\cite{Criticality} the virtual indices of the tensors are of dimension two and have a $\\mathbb{Z}_2$ symmetry, i.e. they are invariant under $\\sigma_z^{\\otimes 4}$. So in this case the symmetry algebra is really a group and is given by the two MPOs $O_1 = \\mathds{1}^{\\otimes 4}$ and $O_z = \\sigma_z^{\\otimes 4}$. Let us now introduce the following tensor, where all indices have dimension two:\n\\begin{align}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.35\\linewidth]{toriccode1}}}\\, .\n\\end{align}\nAll components that are not diagonal in the red indices are zero. One can clearly construct the Toric Code symmetry MPOs $O_1$ and $O_z$ using these tensors. 
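As a quick numerical illustration of this example (our own sketch, with an illustrative system size), the following builds the two $\\mathbb{Z}_2$ MPOs for $L$ sites and checks the projector and algebra properties used above; since both blocks have bond dimension one, the MPOs reduce to Kronecker products.
\\begin{verbatim}
import numpy as np
from functools import reduce

L = 4                                      # illustrative number of sites
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])                  # sigma_z

def mpo(m, n):
    """Bond-dimension-1 MPO: the n-fold Kronecker product of m."""
    return reduce(np.kron, [m] * n)

O1, Oz = mpo(I2, L), mpo(sz, L)            # O_1 and O_z for L sites

assert np.allclose(Oz @ Oz, O1)            # algebra: N_zz^1 = 1
P = 0.5 * (O1 + Oz)                        # weights w_1 = w_z = 1/2
assert np.allclose(P @ P, P)               # projector for this L
assert np.allclose(P, P.conj().T)          # Hermitian
print("rank of P_L:", np.linalg.matrix_rank(P), "out of", 2**L)
\\end{verbatim}
As expected, $P_L$ projects onto the even-parity subspace of rank $2^{L-1}$.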
By defining a fusion tensor where all indices have dimension two and with the following non-zero components:\n\\begin{align}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.5\\linewidth]{toriccodefusion}}}\\, ,\n\\end{align}\none can verify that the zipper condition \\eqref{overviewzipper} is trivially satisfied.\n\nThe second type of algebra is four-dimensional. The basis elements \\eqref{algebra2} can be obtained by using one of the following tensors:\n\\begin{align}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.55\\linewidth]{toriccodebasis}}}\\, \n\\end{align}\nEach of these tensors has only one non-zero component, which is given in the table.\n\nThe central idempotents of this second algebra, labeled by the usual notation $\\{1,e,m,em\\}$, are now easily obtained by using the following tensors:\n\\begin{align}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.35\\linewidth]{toriccodeidempotents}}}\\, \n\\end{align}\nAgain, we only denote non-zero elements. The vertical indices in the tensors above with value one indicate that a string of $\\sigma_z$ is connected to these idempotents. This agrees with the $G$-injectivity construction of $m$ and $em$ anyons. From \\eqref{overviewspin} one can now immediately see that the topological spins of these idempotents are $h_1 = h_e = h_m = 0$ and $h_{em} = 1\/2$.\n\nWhile our treatment of the Toric Code might seem overloaded, we will show in the remainder of the paper that it is in fact the correct language to describe anyons in general topological PEPS. We hope that this section can give some intuition and motivation to understand the more technical parts. \n\n\\section{Projector Matrix Product Operators} \\label{sec:MPO}\n\\vspace{3 mm}\n\\subsection{Definition}\n\\vspace{5 mm}\nWe start the general theory with a discussion of Projector Matrix Product Operators (PMPO), the fundamental objects of MPO-injectivity, and their connection to known concepts of category theory. We consider PMPOs $P_L$ that form translation invariant Hermitian projectors for every length $L$ and can be represented as\n\\begin{equation} \\label{mpo}\nP_L =\\sum_{\\{i\\}, \\{j\\} = 1}^{D}\\text{tr}(\\Delta B^{i_1j_1}B^{i_2j_2}\\dots B^{i_Lj_L})\\ket{i_1 i_2\\dots i_L}\\bra{j_1 j_2 \\dots j_L}\\, ,\n\\end{equation}\nwhere $B^{ij}$ are $\\chi \\times \\chi$ matrices for fixed values of indices $i,j=1,\\ldots,D$. We use this MPO to construct a PEPS in the next section, and $D$ will then become the bond dimension of the resulting PEPS. Furthermore, $\\Delta$ is a $\\chi \\times \\chi$ matrix such that the specific position where it is inserted is irrelevant; every position of $\\Delta$ will result in the same PMPO $P_L$. We also assume that the insertion of $\\Delta$ still allows for a canonical form of the MPO such that the tensors have the following block diagonal structure \\cite{MPSrepresentations}\n\\begin{align}\nB^{ij} = \\bigoplus_{a = 1}^{\\mathcal{N}} B_a^{ij}\\\\\n\\Delta = \\bigoplus_{a = 1}^{\\mathcal{N}} \\Delta_a\\, ,\n\\end{align}\nwith $B_a^{ij}$ and $\\Delta_a$ $\\chi_a \\times \\chi_a$ matrices such that $\\sum_{a = 1}^{\\mathcal{N}} \\chi_a = \\chi$. 
$P_L$ thus decomposes into a sum of MPOs\n\\begin{equation}\nP_L =\\sum_{a = 1}^{\\mathcal{N}}\\sum_{\\{i\\}, \\{j\\} = 1}^{D}\\text{tr}(\\Delta_a B_a^{i_1j_1}B_a^{i_2j_2}\\dots B_a^{i_Lj_L}) \\ket{i_1 i_2\\dots i_L}\\bra{j_1 j_2 \\dots j_L}\n\\end{equation}\nThe resulting MPOs labelled by $a$ in this sum are injective, hence for each $a$ the matrices $\\{B_a^{ij}; i,j=1,\\ldots,D\\}$ and their products span the entire space of $\\chi_a \\times \\chi_a$ matrices.\nEquivalently, the corresponding transfer matrices $\\mathbb{E}_{a}=\\sum_{i,j} B_{a}^{i,j}\\otimes\\bar{B}_{a}^{i,j}$ have a unique eigenvalue $\\lambda_a$ of largest magnitude that is positive and a corresponding (right) eigenvector equivalent to a full rank positive definite matrix $\\rho_a$. The PMPO $P_L$ can now only be translation invariant if the $\\Delta_a$ commute with all the matrices $B_a^{ij}$. Injectivity of the tensors $B_a$ then implies that $\\Delta_a=w_a \\mathds{1}_{\\chi_a}$, with $w_a$ some complex numbers.\n\n\\subsection{Fusion tensors} \\label{subsec:fusiontensors}\nWe thus arrive at the following form for $P_L$\n\\begin{equation}\n\\begin{split}\nP_L & = \\sum_{a = 1}^{\\mathcal{N}}w_a O^L_a \\\\\n & = \\sum_{a = 1}^{\\mathcal{N}}w_a\\sum_{\\{i\\}, \\{j\\} = 1}^{D} \\text{tr}(B_a^{i_1j_1}B_a^{i_2j_2}\\dots B_a^{i_Lj_L}) \n\\ket{i_1 i_2\\dots i_L}\\bra{j_1 j_2 \\dots j_L}\n\\end{split}\n\\end{equation}\nSince $P_L$ is required to be a projector, we have that\n\\begin{equation}\nP_L^2 = \\sum_{a, b = 1}^{\\mathcal{N}} w_a w_b O^L_a O^L_b = \\sum_{a = 1}^{\\mathcal{N}}w_a O^L_a = P_L\\, ,\n\\end{equation}\nwhich has to hold for all $L$. One can show that this implies that $P_L$ and $P_L^2$ have the same blocks in their respective canonical forms\\footnote{This essentially follows from the fact that $\\lim_{L \\rightarrow \\infty}\\text{tr}(O^L_aO_b^{L\\dagger})\/\\left(\\sqrt{\\text{tr}(O^L_aO_a^{L\\dagger})}\\sqrt{\\text{tr}(O^L_bO_b^{L\\dagger})}\\right) = \\delta_{a,b}$ for two injective MPOs $O_a^L$ and $O^L_b$.} \\cite{PerezGarciaRG}, leading to the following relations\n\\begin{align} \\label{fusioncategory1}\n& O^L_a O^L_b = \\sum_{c = 1}^{\\mathcal{N}} N_{ab}^c O^L_c\\ ,\\\\\n & \\sum_{a,b = 1}^{\\mathcal{N}} N_{ab}^c w_a w_b = w_c\\ ,\n\\end{align}\nwhere $N_{ab}^c$ is a rank three tensor containing integer entries. The theory of MPS representations \\cite{MPSrepresentations} implies the existence of matrices $X_{ab,\\mu}^c: \\mathbb{C}^{\\chi_a}\\otimes \\mathbb{C}^{\\chi_b} \\rightarrow \\mathbb{C}^{\\chi_c}$ for $\\mu=1,\\ldots,N_{ab}^{c}$ and left inverses $X_{ab,\\mu}^{c^+}$ satisfying $X_{ab,\\nu}^{d^+} X_{ab,\\mu}^c = \\delta_{dc}\\delta_{\\mu\\nu}\\mathds{1}_{\\chi_c}$, such that we have the following identities on the level of the individual matrices that build up the injective MPOs $O^L_a$,\n\\begin{equation} \\label{blocks}\nX_{ab,\\mu}^{c^+}\\left(\\sum_{j = 1}^D B_a^{ij}\\otimes B_b^{jk}\\right)X^{c}_{ab,\\mu} = B_c^{ik}.\n\\end{equation}\nWe call the set of rank three tensors $X^c_{ab,\\mu}$ the \\emph{fusion tensors}. These fusion tensors play an important role in constructing the anyon ansatz further on. 
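The second relation in \\eqref{fusioncategory1} constrains the weights $w_a$ through the fusion coefficients alone. As a small self-contained illustration (using the Fibonacci fusion rules purely as an example), the sketch below checks that the ansatz $w_a = d_a / \\sum_b d_b^2$, with quantum dimensions $d_a$ obtained from the Perron-Frobenius eigenvector of a fusion matrix, satisfies $\\sum_{a,b} N_{ab}^c w_a w_b = w_c$.
\\begin{verbatim}
import numpy as np

# Fibonacci fusion rules: labels 0 = trivial, 1 = tau;  N[a, b, c] = N_ab^c
N = np.zeros((2, 2, 2))
N[0, 0, 0] = N[0, 1, 1] = N[1, 0, 1] = 1.0
N[1, 1, 0] = N[1, 1, 1] = 1.0            # tau x tau = 1 + tau

# quantum dimensions from the Perron-Frobenius eigenvector of N_tau
evals, evecs = np.linalg.eigh(N[1])      # N[1] = fusion matrix of tau
d = evecs[:, -1] / evecs[0, -1]          # normalise d_1 = 1, gives (1, phi)

w = d / np.sum(d**2)                     # ansatz w_a = d_a / D^2
for c in range(2):
    lhs = np.einsum('ab,a,b->', N[:, :, c], w, w)
    assert np.isclose(lhs, w[c])         # sum_ab N_ab^c w_a w_b = w_c
print("weights w_a:", w)
\\end{verbatim}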
From\n\\begin{equation}\\label{gauge}\n\\bigoplus_{\\mu = 1}^{N_{ab}^c}X_{ab,\\mu}^{c^+}\\left(\\sum_{j = 1}^D B_a^{ij}\\otimes B_b^{jk}\\right)X^{c}_{ab,\\mu} = \\mathds{1}_{N_{ab}^c} \\otimes B_c^{ik}\\, ,\n\\end{equation}\nwe see that the $\\mu$-label is arbitrary and the fusion tensors $X_{ab,\\mu}^c$ are only defined up to a gauge transformation given by a set of invertible $N_{ab}^c\\times N_{ab}^c$ matrices $Y_{ab}^c$; every transformed set of fusion tensors $X'^c_{ab,\\mu} = \\sum_{\\nu = 1}^{N_{ab}^c} (Y^c_{ab})_{\\mu\\nu}X^c_{ab,\\nu}$ also satisfies \\eqref{gauge}. The MPO tensors and equation~\\eqref{blocks} are represented in figure~\\ref{graphical} in a graphical language that is used extensively throughout this paper. Note in particular the difference between the square for the full MPO tensor $B^{ij}$ with virtual indices (red lines) of dimension $\\chi=\\sum_{a} \\chi_{a}$ and the disc for the injective MPO tensors $B^{ij}_a$ with virtual indices (red line with symbol $a$) of dimension $\\chi_a$. \n\n\\begin{figure}\n \\centering\n (a) \\includegraphics[width=0.1\\textwidth]{n4}\\;\\; (b)\n \\includegraphics[width=0.1\\textwidth]{n5}\n (c) \\includegraphics[width=0.32\\textwidth]{n7}\n\\caption{(a) MPO tensor $B^{ij}$. (b) Injective MPO tensor $B^{ij}_a$. (c) Left hand side of equation \\eqref{blocks}.}\n\\label{graphical}\n\\end{figure}\n\n\nTwo complications are worth mentioning. First, the canonical form of $O^L_a O^L_b$ can contain diagonal block matrices which are identically zero. Therefore, the fusion matrices $X_{ab,\\mu}^{c}$ do not span the full space and $\\sum_{c,\\mu} \\chi_{c}$ can be smaller than $\\chi_{a}\\times\\chi_{b}$. Correspondingly, $\\sum_{c,\\mu} X^c_{ab,\\mu}X^{c^+}_{ab,\\mu}$ is not necessarily the identity but only a projector on the support subspace of the internal MPO indices of $O^L_aO^L_b$.\n\nSecondly, there can be nonzero blocks above the diagonal, i.e.\\ $X_{ab,\\mu}^{c^+}\\left(\\sum_{j = 1}^D B_a^{ij}\\otimes B_b^{jk}\\right)\\\\ X^{d}_{ab,\\nu} \\neq 0$ for some $(c,\\mu) < (d,\\nu)$ (according to some ordering). These blocks do not contribute when the MPO is closed by the trace operation, but prevent us from writing\n\\begin{equation}\\label{inversegaugeone}\n\\sum_{c = 1}^{\\mathcal{N}}\\sum_{\\mu=1}^{N_{ab}^c} X_{ab,\\mu}^c B_c^{ik} X_{ab,\\mu}^{c+} = \\sum_{j=1}^D B^{ij}_a\\otimes B^{jk}_b\\, .\n\\end{equation}\nWe noticed above that a set of fusion tensors is only defined up to a gauge transformation $Y$. For PMPOs without nonzero blocks above the diagonal we now argue that the converse is also true, i.e. two collections of fusion tensors that satisfy equations \\eqref{gauge} and \\eqref{inversegaugeone} must be related by a gauge transformation $Y$. To see this, note that the absence of nonzero blocks above the diagonal is equivalent to the existence of an invertible matrix $X_{ab}$ such that\n\\begin{equation}\nX_{ab}^{-1}\\left(\\sum_{j = 1}^D B_a^{ij}\\otimes B_b^{jk}\\right)X_{ab}= \\bigoplus_{c}\\left(\\mathds{1}_{N_{ab}^c} \\otimes B_c^{ik}\\right)\\, .\n\\end{equation}\nThe fusion tensors that have the required properties are then simply the product of $X_{ab}$ and the projector on the appropriate block: $X^c_{ab,\\mu} = X_{ab} P^c_{\\mu}$. It is clear that these fusion tensors are unique up to a matrix in the commutant of $\\bigoplus_{c}\\left(\\mathds{1}_{N_{ab}^c} \\otimes B_c^{ik}\\right)$. Since the $B^{ik}_c$ are injective their commutant consists of multiples of the identity matrix. 
From this we can indeed conclude that the only ambiguity in the definition of $X_{ab,\\mu}^c$ is given by the gauge transformation $Y$.\n\nNote that equation \\eqref{inversegaugeone} is equivalent to\n\\begin{equation}\\label{zippercondition2}\n\\left(\\sum_{j=1}^D B^{ij}_a\\otimes B^{jk}_b\\right) X_{ab,\\mu}^{c} = X_{ab,\\mu}^{c} B^{ik}_{c}\\, ,\n\\end{equation}\n\\begin{equation*}\nX_{ab,\\mu}^{c^+} \\left(\\sum_{j=1}^D B^{ij}_a\\otimes B^{jk}_b\\right) =B^{ik}_{c} X_{ab,\\mu}^{c^+} \\, .\n\\end{equation*}\nWe refer to these last two equations as the \\emph{zipper condition}. Although we will continue somewhat longer in the general setting, most of the results in the remainder of the paper require the zipper condition to hold.\n\n\\subsection{Hermiticity, duality and unital structure} \\label{subsec:hermiticity}\nIf we also require $P_L$ to be Hermitian for all $L$, then we find that for every block $a$ there exists a unique block $a^*$ such that\n\\begin{eqnarray}\n\\bar{w}_a & = & w_{a^*} \\label{realnumber} \\\\\nO^{L\\dagger}_a& = & O^L_{a^*}\\, ,\n\\end{eqnarray}\nwhere the bar denotes complex conjugation. The tensor $N$ then obviously satisfies\n\\begin{equation} \\label{pivotal3}\nN_{ab}^c = N_{b^*a^*}^{c^*}\\, .\n\\end{equation}\nNote that in general the tensors $\\bar{B}^{ji}_a$ and $B^{ij}_{a^*}$, which build up $O^{L\\dagger}_a$ and $O^L_{a^*}$, are related by a gauge transformation: $\\bar{B}^{ji}_a = Z^{-1}_aB^{ij}_{a^*}Z_a$ where $Z_{a}$ is defined up to a multiplicative factor. By applying Hermitian conjugation twice we find\n\\begin{eqnarray}\nB^{ij}_{a} & = & \\bar{Z}_a^{-1} \\bar{B}^{ji}_{a^*} \\bar{Z}_a \\\\\n & = & \\bar{Z}_a^{-1} Z^{-1}_{a^*} B^{ij}_{a} Z_{a^*} \\bar{Z}_a \\; .\n\\end{eqnarray}\nCombining the above expression with the injectivity of $B_a^{ij}$ we find $Z_{a}\\bar{Z}_{a^\\ast} = \\gamma_a\\mathds{1} = \\bar{Z}_{a^*} Z_{a}$, with $\\gamma_a=\\bar{\\gamma}_{a^*}$ a complex number. If $a\\neq a^\\ast$, we can redefine one of the two $Z$ matrices with an additional factor such that $\\gamma_a=1$. If, on the other hand, $a=a^\\ast$ we find that $\\gamma_a$ must be real but we can at most absorb its absolute value in $Z_a \\bar{Z}_a$ by redefining $Z_a$ with an extra factor $\\lvert\\gamma_a\\rvert^{-1\/2}$. The sign $\\varkappa_a=\\text{sign} (\\gamma_a)$ cannot be changed by redefining $Z_a$. It is a discrete invariant of the PMPO which is analogous to the Frobenius-Schur indicator in category theory.\n\nTo recapitulate, Hermitian conjugation associates to every block $a$ a unique `dual' block $a^*$ in such a way that $(a^*)^* = a$. In fusion category theory there is also a notion of duality, but it is defined in a different way. There, for every simple object $a$ the unique dual simple object $a^*$ is such that the tensor product of $a$ and $a^*$ contains the identity object 1. The identity object is defined as the unique simple object that leaves all other objects invariant under taking the tensor product. Moreover, 1 appears only once in the decomposition of the tensor product of $a$ and $a^*$. We now show that if a PMPO contains a trivial identity block then our definition of duality inferred from Hermitian conjugation coincides with the categorical definition. 
To do so, let us first revisit the transfer matrices\n\\begin{align*}\n\\mathbb{E}_{a} &= \\sum_{i,j} B^{ij}_a \\otimes \\bar{B}^{ij}_a \\\\\n&= (\\mathds{1}\\otimes Z_{a}^{-1}) \\sum_{i,j} B^{ij}_a \\otimes B^{ji}_{a^*} (\\mathds{1}\\otimes Z_{a}).\n\\end{align*}\nWe can thus use the tensors $(\\mathds{1}\\otimes Z_{a}^{-1}) X_{aa^*;\\mu}^{c}$ (and their left inverses) to bring $\\mathbb{E}_{a}$ into a block form with nonzero blocks on and above the diagonal (upper block triangular). In particular, there are $N_{aa^*}^{c}$ diagonal blocks of size $\\chi_c\\times \\chi_c$ that are given by $M_{c}=\\sum_{i} B^{ii}_{c}$. They can be brought into upper triangular form by a Schur decomposition within the $\\chi_c$-dimensional space, such that we can identify the eigenvalue spectrum of $\\mathbb{E}_{a}$ with that of the different matrices $M_c$ for $c$ appearing in the fusion product of $a$ and $a^*$. Since $\\mathbb{E}_{a}$ has a unique eigenvalue of largest magnitude $\\lambda_a$, it must correspond to the unique largest eigenvalue of $M_{c_{a}}$ for one particular block $c_{a}$, for which also $N_{aa^*}^{c_a}=1$.\n\nWe now assume that there is a unique distinguished label $c$, which we choose to be $c=1$, such that the spectral radius of $M_{1}$ is larger than the spectral radius of all other $M_{c}$ for $c=2,\\ldots,N$ (whose labeling is still arbitrary). We furthermore assume that $N_{aa^*}^{1}\\neq 0$ for all $a$, i.e.\\ $O_1^L$ appears in the product $O_a^L O_{a^*}^L$ for any $a$. This condition, as we now show, corresponds to imposing a unital structure and excludes cases where e.g.\\ $P_L$ is actually a sum of independent orthogonal projectors, corresponding to a partition $A,B,...$ of the injective blocks that is completely decoupled (such that $N_{ab}^{c}=0$ for any $c$ if $a\\in A$ and $b\\in B$).\n\nWith this condition, we find that independent of $a$, $c_a = 1$ and all transfer matrices $\\mathbb{E}_{a}$ have $\\lambda_a=\\lambda$ as unique largest eigenvalue, with $\\lambda$ the largest magnitude eigenvalue of $M_1$. This immediately gives rise to the following consequences. Firstly, $N_{aa^*}^{1}=1$ and not larger. Secondly, the largest eigenvalue of $M_1$ is positive and non-degenerate. Thirdly, any $M_{a}$ for $a\\neq 1$ has a spectral radius strictly smaller than $\\lambda$. Fourthly, since the spectral radii of $M_{a}$ and $M_{a^*}$ are identical, it follows that $1^* = 1$. Furthermore, denoting the corresponding (right) eigenvector as $\\mathbf{v}_{R}$ and using $\\bar{M}_1 = Z_1^{-1} M_1 Z_{1}$, we find $Z_1 \\overline{\\mathbf{v}}_{R} \\sim \\mathbf{v}_{R}$, where we can absorb the proportionality constant into $Z_1$. 
Applying this relation twice reveals that $Z_1\\bar{Z}_1 \\mathbf{v}_{R} = \\mathbf{v}_{R}$, such that label $1$ must have a trivial Frobenius-Schur indicator $\\varkappa_1=1$.\n\nIn addition, it is well known from the theory of MPS (but here applied to the MPOs by using the Hilbert-Schmidt inner product for the operators $O_a^L$) that for two injective MPO tensors $B_a^{ij}$ and $B_{b}^{ij}$ that are both normalized such that the spectral radius $\\rho(\\mathbb{E}_{a}) = \\rho(\\mathbb{E}_{b}) = \\lambda$, the spectral radius of $\\sum_{ij} B_{a}^{ij}\\otimes \\bar{B}_{b}^{ij} = (\\mathds{1}\\otimes Z_{b}^{-1}) \\sum_{i,j} B^{ij}_a \\otimes B^{ji}_{b^*} (\\mathds{1}\\otimes Z_{b})$ is either $\\lambda$ (in which case $O_a^L$ and $O_b^L$ are identical and the tensors are related by a gauge transform) or the spectral radius is strictly smaller than $\\lambda$. Since we can now use the fusion tensors $X_{ab;\\mu}^{c}$ to bring $\\sum_{i,j} B^{ij}_a \\otimes B^{ji}_{b}$ into upper block triangular form with diagonal blocks $M_c$ and thus to relate the spectra, this immediately shows that $1$ cannot appear in the fusion product of $a$ and $b^*$ unless $b=a$, i.e.\\ $N_{ab^*}^{1} = \\delta_{ab}$. We can continue along these lines to show some extra symmetry properties of the tensor $N$. If $N_{ab}^{c} \\neq 0$, then $\\sum_{ijk} B_{a}^{ij}\\otimes B_{b}^{jk} \\otimes \\bar{B}^{ik}_{c}$ should have a largest magnitude eigenvalue $\\lambda$ with degeneracy $N_{ab}^{c}$. But using the $Z$ matrices, and swapping the matrices in the tensor product, this also means that\n\\begin{equation}\\label{pivotal}\nN_{ab}^{c} = N^{a^*}_{b c^*} = N_{c^*a}^{b^*}\\, ,\n\\end{equation}\nwhich can further be combined with equation~\\eqref{pivotal3}. In particular, this also shows that $N_{a1}^{b} = N_{1a}^{b} = \\delta_{ab}$, such that the single block MPO $O_{1}^L$ indeed corresponds to the neutral object of our algebra.\n\n\n\\subsection{Associativity and the pentagon equation}\\label{subsec:associativity}\nAssociativity of the product $(O_a^L O_b^L) O_c^L = O_a^L (O_b^L O_c^L)$ implies that\n\\begin{equation}\n\\sum_e N_{ab}^{e} N_{ec}^{d} = \\sum_{f} N_{af}^{d} N_{bc}^{f}.\n\\end{equation}\nIn addition, there are two compatible ways to obtain the block decomposition of $B_{abc}^{i,l}=\\sum_{j,k} B_a^{i,j}\\otimes B_{b}^{j,k} \\otimes B_{c}^{k,l}$ into diagonal blocks of type $B_{d}^{i,l}$. Indeed, we have\n\\begin{align*}\nX^{d^+}_{ec,\\nu}\\left(X^{e^+}_{ab,\\mu}\\otimes\\mathds{1}_{\\chi_c}\\right)B_{abc}^{i,l}\\left(X^{e}_{ab,\\mu}\\otimes\\mathds{1}_{\\chi_c}\\right)X^{d}_{ec,\\nu}&= B_{d}^{i,l} \\\\\n{X^{d^+}_{af,\\sigma}}\\left(\\mathds{1}_{\\chi_a} \\otimes X^{f^+}_{bc,\\lambda}\\right) B_{abc}^{i,l} \\left(\\mathds{1}_{\\chi_a}\\otimes X^{f}_{bc,\\lambda}\\right)X^{d}_{af,\\sigma} &= B_{d}^{i,l}\\, ,\n\\end{align*}\nas illustrated in figure~\\ref{associativity}. For PMPOs satisfying the zipper condition \\eqref{zippercondition2} similar reasoning as in section \\ref{subsec:fusiontensors} shows that for every $a,b,c,d$ there must exist a transformation\n\\begin{equation}\\label{Fmove}\n\\left(X^e_{ab,\\mu}\\otimes\\mathds{1}_{\\chi_c}\\right)X^d_{ec,\\nu} =\n\\sum_{f=1}^{\\mathcal{N}} \\sum_{\\lambda = 1}^{N_{bc}^f} \\sum_{\\sigma = 1}^{N_{af}^d}(F^{abc}_{d})^{f\\lambda\\sigma}_{e\\mu\\nu} \\left(\\mathds{1}_{\\chi_a}\\otimes X^f_{bc,\\lambda}\\right)X^d_{af,\\sigma}\\, ,\n\\end{equation}\nwhere $F^{abc}_d$ are a set of invertible matrices. 
To see this, consider the following identity, which follows from the zipper condition,\n\n\\begin{align}\n\\sum_{de\\mu\\nu}\n\\vcenter{\\hbox{ \n\\includegraphics[width=0.32\\linewidth]{pentagon1}}} = \\sum_{df\\sigma\\lambda}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.35\\linewidth]{pentagon2}}}\n\\end{align}\nActing with fusion tensors on both sides of the equation gives\n\n\\begin{align}\\label{pentagon3}\n\\vcenter{\\hbox{ \n\\includegraphics[width=0.2\\linewidth]{pentagon3}}} = \\sum_{d'f\\sigma\\lambda}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.42\\linewidth]{pentagon4}}}\n\\end{align}\nAs a final step we use injectivity of the single block MPO tensors. This property implies that $\\left(B_d^{ij}\\right)_{\\alpha\\beta}$, when interpreted as a matrix with rows labeled by $ij$ and columns by $\\alpha\\beta$, has a left inverse $B_d^{+}$ such that $B_d^+B_{d'} = \\delta_{dd'}\\mathds{1}_{\\chi_d}\\otimes\\mathds{1}_{\\chi_d}$. Applying this inverse on both sides of \\eqref{pentagon3} leads to the desired expression\n\n\\begin{align}\n\\vcenter{\\hbox{ \n\\includegraphics[width=0.23\\linewidth]{pentagon5}}} = \\sum_{f\\sigma\\lambda}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.47\\linewidth]{pentagon6}}}\n\\end{align}\nThe second tensor product factor on the right hand side is exactly $(F^{abc}_d)^{f\\lambda\\sigma}_{e\\mu\\nu} \\mathds{1}_{\\chi_d}$.\n\nThe $F$ matrices have to satisfy a consistency condition called the pentagon equation, which is well-known in category theory. It results from deriving the matrix that relates $(X_{ab,\\mu}^f\\otimes \\mathds{1}_{\\chi_c}\\otimes \\mathds{1}_{\\chi_d})(X_{fc,\\nu}^g\\otimes\\mathds{1}_{\\chi_d})X_{gd,\\rho}^e$ to $(\\mathds{1}_{\\chi_a}\\otimes \\mathds{1}_{\\chi_b}\\otimes X_{cd,\\lambda}^h)(\\mathds{1}_{\\chi_a}\\otimes X_{bh,\\kappa}^i)X_{ai,\\sigma}^e$ in two different ways and equating the two resulting expressions. Written down explicitly, the pentagon equation reads\n\\begin{equation}\\label{pentagoneq}\n\\sum_{h,\\sigma\\lambda\\omega}(F^{abc}_g)^{f\\mu\\nu}_{h\\sigma\\lambda}(F^{ahd}_e)^{g\\lambda\\rho}_{i\\omega\\kappa}(F^{bcd}_i)^{h\\sigma\\omega}_{j\\gamma\\delta} = \n\\sum_\\sigma (F^{fcd}_e)^{g\\nu\\rho}_{j\\gamma\\sigma}(F^{abj}_e)^{f\\mu\\sigma}_{i\\delta\\kappa}.\n\\end{equation}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.38\\textwidth]{n11}\n \\includegraphics[width=0.055\\textwidth]{n10}\n \\includegraphics[width=0.38\\textwidth]{n12}\n\\caption{Property of MPO and fusion tensors that follows from associativity of the multiplication of $O^L_a$, $O^L_b$ and $O^L_c$.}\n\\label{associativity}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.35\\textwidth]{n13}\n\\caption{Two paths giving rise to the pentagon equation \\eqref{pentagoneq}.}\n\\label{pentagon}\n\\end{figure}\nThe two ways to obtain the same matrix leading to the pentagon equation are shown in figure \\ref{pentagon}. A standard result in category theory, called Mac Lane's coherence theorem, states that the pentagon equation is the only consistency relation that needs to be checked; once it is satisfied all other possible consistency conditions are also automatically satisfied \\cite{Kitaev06,maclane}.\n\nThe complete set of algebraic data we have associated to a Hermitian PMPO $P_L$ that satisfies the zipper condition \\eqref{zippercondition2} is $(N_{ab}^c, F^{abc}_d, \\varkappa_a)$. 
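To make \\eqref{pentagoneq} concrete, the sketch below verifies its multiplicity-free version componentwise for the Fibonacci fusion rules, whose only nontrivial $F$-matrix is the well-known golden-ratio matrix $F^{\\tau\\tau\\tau}_\\tau$; all other admissible $F$-symbols are set to $1$ in the standard gauge. The labels, gauge and index conventions are illustrative assumptions on our part.
\\begin{verbatim}
import itertools
import numpy as np

PHI = (1 + 5**0.5) / 2                 # golden ratio
labels = (0, 1)                        # 0 = trivial object, 1 = tau

def adm(a, b, c):
    """Fibonacci fusion: is a x b -> c admissible?"""
    if a == 0 or b == 0:
        return c == (a | b)            # fusion with the unit object
    return True                        # tau x tau = 1 + tau

FTAU = np.array([[1 / PHI,    PHI**-0.5],
                 [PHI**-0.5, -1 / PHI]])

def F(a, b, c, d, e, f):
    """F-symbol of (F^{abc}_d), with e labelling the ((ab)c) tree
    and f the (a(bc)) tree; standard gauge, no multiplicities."""
    if not (adm(a, b, e) and adm(e, c, d) and adm(b, c, f) and adm(a, f, d)):
        return 0.0
    return FTAU[e, f] if (a, b, c, d) == (1, 1, 1, 1) else 1.0

# componentwise check of the pentagon equation
for a, b, c, d, e, f, g, i, j in itertools.product(labels, repeat=9):
    lhs = sum(F(a, b, c, g, f, h) * F(a, h, d, e, g, i) * F(b, c, d, i, h, j)
              for h in labels)
    rhs = F(f, c, d, e, g, j) * F(a, b, j, e, f, i)
    assert abs(lhs - rhs) < 1e-12
print("pentagon equation satisfied for the Fibonacci F-symbols")
\\end{verbatim}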
Note that $(N_{ab}^c, F^{abc}_d)$ is (in many cases) known to be robust in the sense that every small deformation of the matrices $F^{abc}_d$ that satisfies the pentagon equation can be absorbed in the fusion tensors via a suitable gauge transformation $Y$. This remarkable property is called Ocneanu rigidity \\cite{Kitaev06,ocneanu} and it shows that PMPOs satisfying the zipper condition naturally fall into discrete families.\n\n$(N_{ab}^c, F^{abc}_d, \\varkappa_a)$ is very similar to the algebraic data defining a fusion category. We argued in section \\ref{subsec:hermiticity} that when a PMPO has a unital structure then the definition of duality as derived from Hermitian conjugation is equivalent to the categorical definition.\nA similar kind of reasoning also shows that our definition of $\\varkappa_a$ coincides with that of the Frobenius-Schur indicator in fusion categories for a large class of PMPOs with unital structure that satisfy the zipper condition. We elaborate on this and other connections to fusion categories in Appendix \\ref{app:equivalence}. If the PMPO does not have a unital structure then the data $(N_{ab}^c, F^{abc}_d, \\varkappa_a)$ defines a multi-fusion category, i.e. a kind of tensor category whose definition does not require the unit element to be simple. \n\n\\section{MPO-injective PEPS} \\label{sec:MPOinjPEPS}\n\nUsing the PMPOs introduced in the previous section we can now define a class of states on two-dimensional lattices called MPO-injective PEPS, as introduced in \\cite{Ginjectivity,Buerschaper14,MPOpaper}. The importance of this class of PEPS is that it can describe topologically ordered systems. For example, it was shown in \\cite{MPOpaper} that all string-net ground states have an exact description in terms of MPO-injective PEPS. In section \\ref{subsec:zipper} we first impose some additional properties on the PMPOs, which are required in order to construct PEPS satisfying all MPO-injectivity axioms in section \\ref{subsec:entangled}. In section \\ref{subsec:virtualsupport} we review some properties of the resulting class of MPO-injective PEPS.\n\n\\subsection{Unitarity, zipper condition and pivotal structure}\\label{subsec:zipper}\nTo be able to construct MPO-injective PEPS in section \\ref{subsec:entangled} we have to impose three properties on the PMPOs we consider.\n\nFirstly, we require that there exists a gauge on the internal MPO indices such that the fusion tensors $X_{ab,\\mu}^{c}$ are isometries --such that $X_{ab,\\mu}^{c^{+}} = (X_{ab,\\mu}^{c})^\\dagger$-- and the gauge matrices $Z_a$, introduced in section \\ref{subsec:hermiticity}, are unitary. This brings PMPOs into the realm of unitary fusion categories, which will be required for various consistency conditions throughout. We now devise a new graphical language where the matrices $Z_a$ are represented as\n\\begin{align}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.18\\linewidth]{z1}}} \\hspace{10 mm}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.2\\linewidth]{z2}}}\n\\end{align}\n\\begin{align*}\\hspace{4 mm}\n\\vcenter{\\hbox{ \n\\includegraphics[width=0.22\\linewidth]{z3}}} \\hspace{6 mm}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.22\\linewidth]{z4}}}\n\\end{align*}\nNote that the absolute orientation of the symbols used to represent the matrices has no meaning, as we will be using them in a two-dimensional setting where the tensors will be rotated. 
Rotating the first figure by $180^\\circ$ exchanges the row and column indices of the matrix and is thus equivalent to transposition, which is compatible with the graphic representation of $Z_a^T$. Because of unitarity, $(Z_a^{-1})^{T}= \\bar{Z}_a$ and complex conjugation of the tensor simply amounts to reversing the arrows. The definition of the Frobenius-Schur indicator $Z_a\\bar{Z}_{a^*} = \\varkappa_a\\mathds{1}$ can now also be written as\n\\begin{align}\\label{FSindicator}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.3\\linewidth]{z5}}}\\,.\n\\end{align}\n\nThe second requirement is that the zipper condition \\eqref{zippercondition2} holds:\n\\begin{align} \\label{zippercondition}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.43\\linewidth]{zipper}}}\n\\end{align}\n\\begin{align*}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.43\\linewidth]{zipper2}}}\n\\end{align*}\nAs already mentioned in section \\ref{subsec:fusiontensors} this corresponds to the absence of blocks above the diagonal. In this new graphical notation, we no longer explicitly write $X$ on the fusion tensors, but only the degeneracy label $\\mu$. A normal fusion tensor $X_{ab,\\mu}^{c}$ has two incoming arrows and one outgoing, while its left inverse $X_{ab,\\mu}^{c^{+}} = (X_{ab,\\mu}^{c})^\\dagger$ has two outgoing arrows and one incoming. In order to determine the difference between e.g.\\ $X_{ab}^{c}$ and $X_{ba}^{c}$, any fusion tensor in a graphical diagram always has to be read by rotating it back to the above standard form; note that one should not flip (mirror) any symbol. Consistent use of the arrows is also indispensable in the graphical notation for MPO-injective PEPS in the next section.\n\n\nThe third and final requirement for the PMPO is that the fusion tensors satisfy a property which is closely related to the \\emph{pivotal structure} in fusion category theory:\n\\begin{align}\\label{pivotalnew}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.47\\linewidth]{n24}}} \\, ,\n\\end{align}\nwhere the square matrices $A_{ab}^c$ satisfy $\\left(A_{ab}^c\\right)^\\dagger A_{ab}^c = \\frac{w_c}{w_b}\\mathds{1}$. A similar property holds if we bend the lower $b$ index on the left hand side of (\\ref{pivotalnew}), with a set of invertible matrices $A'^c_{ab}$ satisfying $\\left(A'^c_{ab}\\right)^\\dagger A'^c_{ab} = \\frac{w_c}{w_a}\\mathds{1}$. Note that this is only possible when all the numbers $w_a$ have the same phase. Using equation (\\ref{realnumber}) this implies that all $w_a$ are either positive or negative real numbers. From $\\sum_{a,b = 1}^{\\mathcal{N}} N_{ab}^c w_a w_b = w_c$ and the fact that $N$ consists of nonnegative entries it then follows that all $w_a$ must be positive. Furthermore, the pivotal property requires that the tensor $N$ satisfies\n\\begin{equation}\nN_{a^*b}^c = N_{ac}^b\\,\n\\end{equation}\nwhich is indeed satisfied by combining the equalities \\eqref{pivotal3} and \\eqref{pivotal} from Section~\\ref{subsec:hermiticity}. 
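These symmetry constraints on the tensor $N$ are easy to check explicitly in small examples. The following sketch (our own illustration, using the Ising fusion rules, in which every label is self-dual) verifies \\eqref{pivotal3}, \\eqref{pivotal} and the relation $N_{a^*b}^c = N_{ac}^b$ quoted above.
\\begin{verbatim}
import itertools
import numpy as np

# Ising fusion rules: 0 = 1, 1 = psi, 2 = sigma (every label is self-dual)
N = np.zeros((3, 3, 3), dtype=int)
for a in range(3):
    N[0, a, a] = N[a, 0, a] = 1            # fusion with the unit object
N[1, 1, 0] = 1                             # psi x psi     = 1
N[1, 2, 2] = N[2, 1, 2] = 1                # psi x sigma   = sigma
N[2, 2, 0] = N[2, 2, 1] = 1                # sigma x sigma = 1 + psi

dual = lambda a: a                         # self-dual labels

for a, b, c in itertools.product(range(3), repeat=3):
    assert N[a, b, c] == N[dual(b), dual(a), dual(c)]   # eq. (pivotal3)
    assert N[a, b, c] == N[b, dual(c), dual(a)]         # eq. (pivotal)
    assert N[a, b, c] == N[dual(c), a, dual(b)]         # eq. (pivotal)
    assert N[dual(a), b, c] == N[a, c, b]               # pivotal requirement
print("all N-tensor symmetries hold for the Ising fusion rules")
\\end{verbatim}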
While we do believe that the pivotal property \\eqref{pivotalnew} follows from the zipper condition and the unitary\/isometric property of the gauge matrices and the fusion tensors, the proof falls beyond the scope of this paper and we here impose it as an extra requirement.\n\n\nBy repeated application of the pivotal property, we obtain the following relation between the fusion tensors $X_{ab,\\mu}^c$ and the gauge matrices $Z_a$:\n\\begin{align} \\label{pivotalthree}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.53\\linewidth]{n15}}} \\, ,\n\\end{align}\nwhere $C_{ab}^c= A_{a^* b}^c \\bar{A}'^b_{a^*c^*} A^{a^*}_{b^*c^*}$ can be verified to be a unitary matrix. This relation also holds on more general grounds for any Hermitian PMPO satisfying the zipper condition, although with non-unitary $C_{ab}^c$ in general.\n\nNow that we have collected all the necessary properties for the relevant PMPOs we can turn to tensor network states on two-dimensional lattices in the next section. Note that the PMPOs that satisfy the properties discussed in this section could be thought of as classifying anomalous one-dimensional topological orders, i.e. the gapped topological orders that can be realized on the boundary of a two-dimensional bulk \\cite{KongWen}.\n\n\n\n\\subsection{Entangled subspaces}\\label{subsec:entangled}\nThe first step in our construction of a MPO-injective PEPS is to introduce two different types of MPO tensors. For the right handed type,\n\\begin{align}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.25\\linewidth]{righthanded}}},\n\\end{align}\nwe use the original tensors of the Hermitian PMPO we started from. The left handed type,\n\\begin{align}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.25\\linewidth]{lefthanded}}},\n\\end{align}\nis defined by complex conjugating $B_{a,+}$, which reverses the arrows, and then transposing $i$ and $j$, i.e.\n\\begin{equation}\\label{eq:lefthandedtensor}\n\\left( B^{ij}_{a,-}\\right)_{\\beta\\alpha} = \\left(\\bar{B}_{a,+}^{ji} \\right)_{\\alpha\\beta}\\, .\n\\end{equation}\nThis is exactly the tensor that is obtained by applying Hermitian conjugation to the resulting MPO, as discussed in section \\ref{subsec:hermiticity}. We can thus relate both tensors using the gauge matrices $Z_a$, which we now depict using the graphical notation as\n\\begin{align}\\label{leftright}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.35\\linewidth]{rightleft1}}}\\,\\,\\, \\\\\n\\vcenter{\\hbox{\n\\includegraphics[width=0.35\\linewidth]{rightleft2}}} \\,.\n\\end{align}\nHere we also used that the matrices $Z_a$ are unitary and the identity in Eq.~\\eqref{FSindicator}.\n\nWith these tensors, MPO-injective PEPS can be constructed on arbitrary lattices. Firstly, assign an orientation to every edge of the lattice. Now define the MPO $\\tilde{P}_{C_v}$ at every vertex $v$, with $C_v$ the coordination number of $v$, as follows: assign a counterclockwise orientation to $v$. At every edge connected to $v$, place a right handed or left handed MPO tensor depending on the global orientation and the edge orientation. The MPO $\\tilde{P}_{C_v}$ is then obtained by contracting the $C_v$ tensors along the internal MPO indices. With the unitary constraints on the gauge and fusion tensors, the original PMPO $P_L$ that we started from allows for a unitary gauge freedom on the virtual indices of every MPO tensor $B^{ij}_a$. 
Note, however, that the choice of $B^{ij}_{a,\\pm}$ and the transformation behavior of the gauge matrices $Z_a$ is such that this is also a gauge freedom of the newly constructed $\\tilde{P}_{C_v}$.\n\n\nOne can furthermore check that since the fusion tensors are isometries the resulting MPO $\\tilde{P}_L$ is a Hermitian projector just like $P_L$ when the same weights $w_a$ for the blocks are used. Note that reversing the internal orientation of a single block MPO in $\\tilde{P}_L$ amounts to taking the Hermitian conjugate and the weights satisfy $w_a = w_{a^*}$, so reversing the orientation of the internal index of $\\tilde{P}_{C_v}$ is equivalent to Hermitian conjugation, which leaves $\\tilde{P}_{C_v}$ invariant. So the counterclockwise global internal orientation on $\\tilde{P}_{C_v}$ is completely arbitrary.\n\nNow that we have the Hermitian projectors $\\tilde{P}_{C_v}$ at every vertex we place a maximally entangled qudit pair $\\sum_{i=1}^D \\ket{i}\\otimes\\ket{i}$ on all edges of the lattice. We subsequently act at every vertex $v$ with $\\tilde{P}_{C_v}$ on the qudits closest to $v$ of the maximally entangled pairs on the neighboring edges. In this way we entangle the subspaces determined by $\\tilde{P}_{C_v}$ at each vertex. The resulting PEPS is shown in figure \\ref{fig:squarepeps} for a $2$ by $2$ patch out of a square lattice. More general PEPS are obtained by placing an additional tensor $A[v] = \\sum_{i = 1}^d \\sum_{\\{\\alpha\\} = 1}^D A[v]^i_{\\alpha_1\\alpha_2\\dots \\alpha_{C_v}}\\ket{i}\\bra{\\alpha_1\\alpha_2\\dots \\alpha_{C_v}}$ at each vertex which maps the $C_v$ indices on the inside of every MPO loop to a physical degree of freedom in $\\mathbb{C}^d$. As long as $A[v]$ is injective as a linear map from $\\mathbb{C}^{D^{C_v}}$ to $\\mathbb{C}^d$ (which requires $d \\geq D^{C_v}$) the resulting PEPS satisfies the axioms of MPO injectivity as defined in \\cite{MPOpaper}, which we show below. For the particular case where each $A[v]$ is an isometry, the resulting network is an \\emph{(MPO)-isometric PEPS}. Throughout the remainder of this paper we ignore the tensors $A[v]$ as we will argue that the universal properties of the quantum phase of the PEPS are completely encoded in the entangled injectivity subspaces $\\tilde{P}_{C_v}$.\n\n\n\\begin{figure}\n \\centering\n (a) \\includegraphics[width=0.21\\textwidth]{PEPSlattice}\\;\\; (b)\n \\includegraphics[width=0.22\\textwidth]{PEPStensor}\n\\caption{(a) A MPO-injective PEPS on a 2 by 2 square lattice with open boundaries. We assigned an orientation to every edge as indicated by the black arrows and an orientation to the internal MPO index represented by the red arrow. 
(b) A tensor $A$ that can be used to complete the PEPS on the square lattice.}\n\\label{fig:squarepeps}\n\\end{figure}\n\nWe can now prove the following central identity, which we henceforth refer to as the \\emph{pulling through equation}:\n\\begin{align}\\label{pullingthrough2}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.39\\linewidth]{n18}}}.\n\\end{align}\nNote again the difference between squares that denote the superposition of the different injective MPOs $a=1,\\ldots,N$ with suitable coefficients $w_a$ and the discs that represent a single block MPO tensor of type $a$.\n\nUsing the zipper condition (\\ref{zippercondition}) we can write (\\ref{pullingthrough2}) as\n\\begin{align} \\label{pullingthrough3}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.68\\linewidth]{n22}}} \\, .\n\\end{align}\nFrom the pivotal property (\\ref{pivotalnew}) and the fact that $A_{ab}^c$ satisfy $\\left(A_{ab}^c\\right)^\\dagger A_{ab}^c = \\frac{w_c}{w_b}\\mathds{1}$ one can then indeed check the validity of (\\ref{pullingthrough3}). \n\nTwo additional notes are in order. Firstly, one could easily imagine different simple generalizations of the MPO-injectivity formalism. But as they are not necessary to understand the fundamental concepts we wish to illustrate, we keep the presentation simple and do not consider them here. However, in the string-net example later on we come across such a simple generalization and see how it leads to a slightly modified form of condition (\\ref{pivotalnew}).\n\nSecondly, rather than starting from a PMPO and using it to construct a PEPS tensor, we could have followed the reverse strategy and started from the set of injective MPOs that satisfy the pulling through equation for a given PEPS tensor. This set also forms an algebra (the product of two such MPOs is an MPO satisfying the pulling through equation and can therefore be decomposed into a linear combination of injective MPOs from the original set) and the PEPS tensor would naturally be supported on the virtual subspace corresponding to the central idempotent of that algebra, which corresponds to our PMPO $\\tilde{P}_{C_v}$. Both approaches are of course completely equivalent.\n\n\\subsection{Virtual support and parent Hamiltonians}\n\\label{subsec:virtualsupport}\nThe pulling through equation (\\ref{pullingthrough2}), which we have proven in the previous subsection, is the first property required for a PEPS to be MPO-injective according to the definition in Ref.~\\cite{MPOpaper}. The second property, or an alternative to it, is obtained automatically if we assume that the MPO has already been blocked such that the set of tensors $\\{B^{ij}: i,j=1,\\ldots,D\\}$ already spans the full space $\\oplus_{a=1,\\ldots,N} \\mathbb{C}^{\\chi_a\\times \\chi_a}$ (corresponding to the injective blocks $a$ in the canonical form) without having to consider any products. We then have the relation\n\\begin{equation} \\label{blockprojector}\n\\sum_{i,j = 1}^D B^{+ij}_{\\gamma\\sigma}B^{ij}_{\\alpha\\beta} = \\sum_{a=1}^{\\mathcal{N}} (P_a)_{\\alpha\\gamma} (P_a)_{\\beta\\sigma}\\, ,\n\\end{equation}\nwhere $B^{+ij}_{\\gamma\\sigma}$ is the pseudo-inverse of $B^{ij}_{\\alpha\\beta}$ interpreted as a $D^2 \\times \\chi^2$ matrix. The $P_a$ are a set of $\\mathcal{N}$ projectors on the $\\chi_a$-dimensional subspaces labeled by $a$. The right hand side of \\eqref{blockprojector} thus represents the projector on block diagonal matrices with $\\mathcal{N}$ blocks, labeled by $a$, of dimension $\\chi_a$. 
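As a sanity check of \\eqref{blockprojector} in the simplest setting, the sketch below builds the combined $\\mathbb{Z}_2$ MPO tensor of Section~\\ref{sec:overview}, with blocks $B_1^{ij}=\\delta_{ij}$ and $B_z^{ij}=(\\sigma_z)_{ij}$ of bond dimension one (so $D=2$ and $\\chi=2$), and verifies that the pseudo-inverse reproduces the projector onto block-diagonal matrices.
\\begin{verbatim}
import numpy as np

D, chi = 2, 2                        # Z_2 example: two blocks of dimension 1
sz = np.diag([1.0, -1.0])

B = np.zeros((D, D, chi, chi))       # B^{ij}_{alpha beta}
B[:, :, 0, 0] = np.eye(D)            # block a = 1:  B_1^{ij} = delta_ij
B[:, :, 1, 1] = sz                   # block a = z:  B_z^{ij} = (sigma_z)_ij

M = B.reshape(D * D, chi * chi)      # rows (ij), columns (alpha beta)
lhs = np.linalg.pinv(M) @ M          # left hand side of eq. (blockprojector)

P1, Pz = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
rhs = np.kron(P1, P1) + np.kron(Pz, Pz)
assert np.allclose(lhs, rhs)         # projector onto block-diagonal matrices
print(lhs.round(10))
\\end{verbatim}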
Equation \\eqref{blockprojector} is also shown graphically in figure \\ref{generalizedinverse} and requires $D^2 > \\sum_{a} \\chi_a^2$. It essentially means that by acting with $B^+$ on a tensor $B$ we can `open up' the virtual indices of a closed projector MPO. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.33\\textwidth]{n9}\n\\caption{Acting with pseudo-inverse $B^+$ on the MPO tensor $B$ gives a projector on block-diagonal matrices with $\\mathcal{N}$ blocks of dimension $\\chi_a$.}\n\\label{generalizedinverse}\n\\end{figure}\n\nWith these two properties, we can show that the virtual support of the PEPS map on every contractible region with boundary of length $L$ is given by the PMPO $\\tilde{P}_L$ surrounding that region. Indeed, using the pulling through property we can grow the PMPO of a single tensor (note that we no longer explicitly indicate the orientation of every edge):\n\\begin{align}\\label{pullingthrough1}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.53\\linewidth]{n14}}}\n\\end{align}\nThen we can act with $B^+$ on the inner MPO loop to open up the indices and make it act on the entire boundary:\n\\begin{align}\n\\vcenter{\\hbox{\n\\includegraphics[width=0.53\\linewidth]{n16}}}\n\\end{align}\nBy repeating this trick we can indeed grow the PMPO to the boundary of any contractible region.\n\nThese arguments also imply that the rank of the reduced density matrix of the physical degrees of freedom in some finite region with a boundary of length $L$ is equal to the rank of the PMPO $\\tilde{P}_L$. Therefore, the zero Renyi entropy of a region with a boundary containing $L$ virtual indices is\n\\begin{equation}\n\\begin{split}\nS_0 = \\log(\\text{Tr}(\\tilde{P}_L)) = \\log\\text{tr}\\left(\\left(\\sum_{i = 1}^D B^{ii}\\right)^L\\right) = \\log\\left(\\sum_{a=1}^{\\mathcal{N}}w_a\\text{tr}\\left( M_a^L\\right)\\right)\\, ,\n\\end{split}\n\\end{equation}\nwhere $\\text{Tr}$ denotes the trace over external MPO indices, $\\text{tr}$ denotes the trace over internal MPO indices and the matrices $M_a=\\sum_{i} B_a^{ii}$ were defined in Section~\\ref{subsec:hermiticity}. Using the eigenvalue structure of the matrices $M_a$, we find that if the PMPO has a unital structure the zero Renyi entropy for large regions scales as\n\\begin{equation}\nS_0 \\approx \\log(\\lambda)\\, L - \\log\\left(\\frac{1}{w_1}\\right)\\, ,\n\\end{equation}\nwith $\\lambda$ the largest magnitude eigenvalue of $M_1$. For the $\\mathbb{Z}_2$ example of Section~\\ref{sec:overview} one has $M_1 = 2$, $M_z = 0$ and $w_1 = 1\/2$, so that $S_0 = L\\log 2 - \\log 2$, exhibiting the expected constant correction of $-\\log 2$. For fixed point models this constant correction will also appear in the von Neumann entropy, thus giving rise to the well-known topological entanglement entropy \\cite{KitaevPreskill}. It was shown in \\cite{SPTpaper} that if the PMPO has a unique block, i.e. if $\\mathcal{N} = 1$, there is no topological entanglement entropy. In the next section, we show explicitly that using a single blocked PMPO in our PEPS construction does indeed not allow for the existence of different topological sectors.\n\nThe constructed MPO-injective PEPS corresponds to the exact ground state of a local, frustration free Hamiltonian. The so-called parent Hamiltonian construction is identical to that of standard injective PEPS \\cite{GarciaVerstraeteWolfCirac08} and takes the form\n\\begin{equation}\\label{parent}\nH = \\sum_{p} h_p\\, ,\n\\end{equation}\nwhere the sum is over all plaquettes of the lattice and $h_p$ is a positive semi-definite operator whose kernel corresponds to the image (physical support) of the PEPS map on that plaquette. 
For the square lattice, this is the image of the PEPS map shown in Fig.~\ref{fig:squarepeps}(a), interpreted as a matrix from the outer $8$ to the inner $16$ indices. Typically, $h_p$ is defined as the projector onto the orthogonal complement of the physical support of the PEPS map. In \cite{MPOpaper} the pulling through property was shown to be sufficient to prove that all the ground states of the parent Hamiltonian \eqref{parent} on a closed manifold are given by MPO-injective PEPS states whose virtual indices along the non-contractible cycles are closed using the same MPOs connected by a so-called ground state tensor $Q$. Because of the pulling through property these MPO loops can be moved freely on the virtual level of the PEPS, implying that all ground states are indistinguishable by local operators.

\pagebreak

\section{Anyon sectors in MPO-injective PEPS} \label{sec:anyons}

Having gathered all the concepts and technical tools of PMPOs and MPO-injective PEPS, we can now turn to the question of how to construct topological sectors in these models. As argued in the previous section and shown in \cite{MPOpaper}, MPO-injective PEPS give rise to degenerate ground states on nontrivial manifolds that are locally indistinguishable. Systems with this property that are defined on a large but finite open region are believed to have a low-energy eigenstate basis that can be divided into a finite number of topological superselection sectors. One can measure in which sector a state lies by acting only locally in the bulk of the region, but to go from a state in one sector to a state in another one necessarily has to act on both the bulk and the boundary. The elementary excitations in each sector are called \emph{anyons} and can be seen as a generalization of bosons and fermions \cite{wilczek,LeinaasMyrheim}. In this section we show that the entanglement structure of the ground state PEPS as determined by the PMPO $\tilde{P}_{C}$ contains all necessary information to find the anyonic sectors and their topological properties.

\subsection{Topological charge}\label{subsec:ansatz}

To find the topological sectors we start by looking at a patch of the ground state PEPS on an annulus. It was shown in \cite{MPOpaper} that the support of the ground state tensors in the annulus is equal to the support of the following tensor when we interpret it as a collection of matrices from the indices outside the annulus to the indices inside:
\begin{align}\label{annulus}
 \vcenter{\hbox{
 \includegraphics[width=0.24\linewidth]{annulus}}}
\end{align}
We now use the following equality, which follows from the zipper condition \eqref{zippercondition} and \eqref{leftright},
\begin{align}\label{reduction}
 \vcenter{\hbox{\hspace{0 mm}
 \includegraphics[width=0.7\linewidth]{reduction}}}
\end{align}
to see that whatever tensor we put in the hole of the annulus, its relevant support space is given by the support of the following tensors when interpreted as matrices from the outer indices to the inner ones
\begin{align}\label{algebraobject}
 A_{abcd,\mu\nu}\;\; =\;\; \vcenter{\hbox{
 \includegraphics[width=0.23\linewidth]{algebraobject}}}
\end{align}
A crucial observation is now that the matrices $A_{abcd,\mu\nu}$ form a $C^*$-algebra, i.e.
we have that
\begin{equation}\label{eq:multiplicationalgebra}
A_{hegf,\lambda\sigma}A_{abcd,\mu\nu}=\delta_{ga}\sum_{ij,\rho\tau}\Omega_{(hegf,\lambda\sigma)(abcd,\mu\nu)}^{(hjci,\rho\tau)} A_{hjci,\rho\tau}
\end{equation}
and
\begin{equation}\label{eq:conjugationalgebra}
A_{abcd,\mu\nu}^\dagger = \sum_{e,\lambda\sigma}(\Theta_{abcd,\mu\nu})^{e\sigma\lambda}A_{cead^*,\sigma\lambda}\, .
\end{equation}
We show this explicitly in appendix \ref{app:algebra}. It is a well-known fact that every finite-dimensional $C^*$-algebra is isomorphic to a direct sum of simple matrix algebras. We now claim that the topological sectors correspond to the different blocks in this direct sum decomposition of the algebra. This means we relate an anyon sector $i$ to every minimal central idempotent $\mathcal{P}_i$ satisfying $\mathcal{P}_i\mathcal{P}_j = \delta_{ij}\mathcal{P}_i$, $\mathcal{P}_i^\dagger = \mathcal{P}_i$ and $A_{abcd,\mu\nu}\mathcal{P}_i = \mathcal{P}_i A_{abcd,\mu\nu}$, where $\mathcal{P}_i$ takes the form
\begin{equation}
\mathcal{P}_i = \sum_{abd,\mu\nu} c^i_{abd,\mu\nu}A_{abad,\mu\nu}.
\end{equation}

Because we want to be able to measure the topological charge of an excitation, it is well motivated to associate topological sectors with orthogonal subspaces. We note that in \cite{Qalgebra} a similar identification of anyons in string-net models with central idempotents was given. This idea dates back to the tube algebra construction of Ocneanu \cite{dyonicspectrum,tubealgebra}. In the remainder of this paper we represent the minimal central idempotents $\mathcal{P}_i$ graphically as
\begin{align}\label{idempotent}
 \mathcal{P}_i = \vcenter{\hbox{
 \includegraphics[width=0.25\linewidth]{idempotent}}}
\end{align}
One can easily see that the coefficients $c^i_{abd,\mu\nu}$ give the same central idempotent $\mathcal{P}_i$ independent of the number of MPO tensors used to define $A_{abad,\mu\nu}$, i.e. the $\mathcal{P}_i$ are projectors for every length. In appendix \ref{app:idempotents} we give a numerical algorithm to explicitly construct the central idempotents $\mathcal{P}_i$.

\subsection{Anyon ansatz}\label{sec:ansatz}

Having identified the topological sectors we would now like to construct MPO-injective PEPS containing anyonic excitations. To do so we first need to take a closer look at the $C^*$-algebra spanned by the elements $A_{abcd,\mu\nu}$.

A general central idempotent $\mathcal{P}_i$ decomposes into a sum of idempotents $P^{a_\alpha}_i$ that are in general not central:
\begin{equation}\label{eq:centralidempotentdecomposition}
\mathcal{P}_i = \sum_{a=1}^{D_i}\sum_{\alpha = 1}^{d_{i,a}} P^{a_\alpha}_i\, ,
\end{equation}
where the index $a$ refers to the $D_i$ MPO block labels that appear in $\mathcal{P}_i$, to which we assigned an arbitrary ordering. The $P^{a_\alpha}_i$ satisfy $P^{a_\alpha}_iP^{b_\beta}_j = \delta_{i,j}\delta_{a,b}\delta_{\alpha,\beta}P^{a_\alpha}_i$ and $P^{a_\alpha\dagger}_i = P_i^{a_\alpha}$. We also take the $P_i^{a_\alpha}$ to be simple, i.e. they cannot be decomposed further as an orthogonal sum of idempotents. A central idempotent with $r_i \equiv \sum_{a=1}^{D_i}d_{i,a} > 1$ is called a higher-dimensional central idempotent. From the algebra structure \eqref{eq:multiplicationalgebra} and \eqref{eq:conjugationalgebra} we see that the simple idempotents have a diagonal block label, i.e.
they can be expressed in terms of the basis elements as
\begin{equation}
P^{a_\alpha}_i = \sum_{bd,\mu\nu} t^{a_\alpha,i}_{bd,\mu\nu}A_{abad,\mu\nu}\, .
\end{equation}
The dimension of the algebra, i.e. the total number of basis elements $A_{abcd,\mu\nu}$, equals $\sum_i r_i^2$: for every $\mathcal{P}_i$ the algebra also contains $r_i(r_i - 1)$ nilpotents $P^{a_\alpha,b_\beta}_i$. The nilpotents satisfy $\left(P_i^{a_\alpha,b_\beta}\right)^\dagger = P_i^{b_\beta,a_\alpha}$ and $P_i^{a_\alpha,b_\beta}P_j^{c_\gamma,d_\delta} = \delta_{i,j} \delta_{b,c}\delta_{\beta,\gamma}P_i^{a_\alpha,d_\delta}$. Note that $P^{a_\alpha,a_\alpha}_i \equiv P^{a_\alpha}_i$ is not a nilpotent but one of the non-central simple idempotents. Combining the simple idempotents and nilpotents we can define the projectors
\begin{equation}
\Pi_i^{[x]} = \sum_{a_\alpha,b_\beta = 1}^{r_i} x^i_{a_\alpha b_\beta} P_i^{a_\alpha,b_\beta}\, ,
\end{equation}
where
\begin{equation}
\bar{x}^i_{a_\alpha b_\beta} = x^i_{b_\beta a_\alpha}\;\;\;\;\text{and}\;\;\;\; \sum_{b_\beta} x^i_{a_\alpha b_\beta}x^i_{b_\beta d_\delta} = x^i_{a_\alpha d_\delta}\, .
\end{equation}
If $x^i_{a_\alpha b_\beta} = \bar{v}^i_{a_\alpha}v^i_{b_\beta}$ we call $\Pi^{[x]}_i$ a `rank one' projector (note that $\Pi_i^{[x]}$ does not have to be a rank one matrix, but this terminology refers to the $C^*$-algebra structure).

Let us now return to the MPO-injective PEPS. As explained above, every ground state tensor has virtual indices which are supported in the subspace $\tilde{P}$. To introduce anyonic excitations in the tensor network we need a new type of PEPS tensor. If we want to place an anyon at vertex $v$ with coordination number $C_v$, this new type of tensor has $C_v$ virtual indices of dimension $D$, one virtual index of dimension $\chi$ and one physical index of dimension $d$. On a square lattice, for example, we depict this tensor as

\begin{align}
I = \vcenter{\hbox{
 \includegraphics[width=0.1\linewidth]{anyontensor}}}\, ,
\end{align}
where the physical index points to the top left corner and the $\chi$-dimensional index is on the bottom left corner. The label $i$ and the name of the tensor $I$ refer to the topological sector. A necessary condition for this tensor to describe an excitation with topological charge $i$ is that its virtual indices are supported in the subspace determined by $\mathcal{P}_i$:

\begin{align}\label{anyonsupport}
 \vcenter{\hbox{
 \includegraphics[width=0.17\linewidth]{anyontensorsupport1}}} = \delta_{i,j}
\vcenter{\hbox{
 \includegraphics[width=0.1\linewidth]{anyontensorsupport2}}} \equiv \delta_{i,j} I
\end{align}
This property provides a heuristic interpretation of topological sectors in terms of entanglement. For isometric PEPS the virtual indices along the boundary of a region label the Schmidt states of the physical indices and can therefore be interpreted as the `entanglement degrees of freedom'. Topological sectors are then characterized by entanglement degrees of freedom that live in orthogonal subspaces, so they are really the degrees of freedom that contain the topological information.
When we go away from the fixed point, the interpretation of virtual degrees of freedom as entanglement degrees of freedom starts to break down and one can understand how the construction fails beyond the phase transition.

Property \eqref{anyonsupport} is not sufficient to obtain a good anyonic excitation tensor. To see what additional properties it should have, we construct an MPO-injective PEPS containing an anyon pair $(i,i^*)$, where $i^*$ will be defined in a moment. We can do this by starting from the ground state PEPS, replacing the tensors at two vertices by the excitation tensors corresponding to sectors $i$ and $i^*$ and then connecting the $\chi$-dimensional indices of the excitation tensors with the appropriate MPO on the virtual level. See figure \ref{fig:anyonpair} for an example on the $3\times 3$ square lattice. Note that the position of the virtual MPO is irrelevant since it can be moved by using the pulling through property.

\begin{figure}
  \centering
  \includegraphics[width=0.24\textwidth]{anyonpair}
\caption{An MPO-injective PEPS on a $3\times 3$ square lattice containing an anyon pair $(i,i^*)$ located at the lower left and upper right corner.}
\label{fig:anyonpair}
\end{figure}

If we interpret $I$ as a matrix with the rows labeled by the physical index and the columns by the virtual indices, then we can write \eqref{anyonsupport} as $I \mathcal{P}_j= \delta_{i,j}I$. To construct the PEPS containing the $(i,i^*)$ anyon pair we do not use the full tensor $I$ but project its virtual indices onto the subspace corresponding to the simple idempotent $P_i^{a_\alpha}$: $I^{a_\alpha} \equiv IP_i^{a_\alpha}$, where $a$ is one of the MPO block labels appearing in \eqref{eq:centralidempotentdecomposition}. Let us assume that we have connected the tensors $I^{a_\alpha}$ and $I^{*a_\sigma}$ on the virtual level with a single block MPO $a$. We now want the total anyon pair to be in the vacuum sector. We can impose this by surrounding the tensors with the vacuum projector $\tilde{P}$. If we ignore the ground state PEPS environment we can represent the vacuum anyon pair graphically as

\begin{align}\label{vacuumpair1}
\vcenter{\hbox{
 \includegraphics[width=0.33\linewidth]{vacuumpair1}}}\, .
\end{align}
The anyon type $i^*$ is defined as the unique topological sector such that \eqref{vacuumpair1} is non-zero for a fixed $i$ (see section \ref{sec:fusion} for more details). Using \eqref{reduction} we can rewrite \eqref{vacuumpair1} as

\begin{align}\label{vacuumpair2}
\sum_{bcd,\mu\nu}w_d \vcenter{\hbox{
 \includegraphics[width=0.48\linewidth]{vacuumpair2}}}\, .
\end{align}
Here we again recognize the algebra basis elements $A_{abcd,\mu\nu}^\dagger$ and $A_{abcd,\mu\nu}$ (see Appendix \ref{subsec:hermitianconjugation} for more details on how to identify the Hermitian conjugate). We now make use of the fact that the basis elements can be written as
\begin{eqnarray}
A_{abcd,\mu\nu} & = & \sum_{i,\alpha\gamma} u^{i,a_\alpha c_\gamma}_{abcd,\mu\nu} P_i^{a_\alpha,c_\gamma}\, , \\
A_{abcd,\mu\nu}^\dagger & = & \sum_{i,\alpha\gamma} \bar{u}^{i,c_\gamma a_\alpha}_{abcd,\mu\nu} P_i^{a_\alpha,c_\gamma}\, .
\end{eqnarray}
Note that once we have found the idempotents we can easily obtain the coefficients $u^{i,a_\alpha c_\gamma}_{abcd,\mu\nu}$ by $P_i^{a_\alpha} A_{abcd,\mu\nu}P_i^{c_\gamma} = u^{i,a_\alpha c_\gamma}_{abcd,\mu\nu} P^{a_\alpha,c_\gamma}_i$.
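In explicit computations the coefficients $u^{i,a_\alpha c_\gamma}_{abcd,\mu\nu}$ can be read off directly from this last relation once the simple idempotents, nilpotents and algebra elements are available as matrices. A minimal numerical sketch in Python with numpy (the function name and the Hilbert--Schmidt normalization are our own choices):
\begin{verbatim}
import numpy as np

def u_coefficient(P_left, A, P_right, N_lr):
    """Read off u from  P_i^{a_alpha} A P_i^{c_gamma} = u P_i^{a_alpha,c_gamma}.

    P_left, P_right: the simple idempotents P_i^{a_alpha} and P_i^{c_gamma},
    A: an algebra basis element A_{abcd,mu nu},
    N_lr: the nilpotent P_i^{a_alpha,c_gamma},
    all represented as matrices on the virtual indices (for a fixed length).
    """
    lhs = P_left @ A @ P_right
    # lhs is proportional to N_lr; extract the proportionality constant u
    # via a Hilbert-Schmidt inner product
    return np.vdot(N_lr, lhs) / np.vdot(N_lr, N_lr)
\end{verbatim}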
If we represent the simple idempotents and nilpotents as

\begin{align}\label{nilpotent}
P_i^{a_\alpha,b_\beta} = \vcenter{\hbox{
 \includegraphics[width=0.17\linewidth]{nilpotent}}}\, ,
\end{align}
then we can write \eqref{vacuumpair2} as

\begin{align}\label{vacuumpair3}
\sum_{c,\gamma\lambda} \left( \sum_{bcd,\mu\nu} u^{i,a_\alpha c_\gamma}_{abcd,\mu\nu} \bar{u}^{i^*,c_\lambda a_\sigma}_{abcd,\mu\nu} w_d\right) \vcenter{\hbox{
 \includegraphics[width=0.35\linewidth]{vacuumpair3}}}\, .
\end{align}
So we see that in order to be able to take the tensors $I$ and $I^*$ and connect them on the virtual level such that they are in the vacuum state, it should hold that

\begin{equation} \label{injectivitystructure}
I^{a_\alpha} P_i^{a_\alpha,c_\gamma} \propto I^{c_\gamma}\, .
\end{equation}
Equation \eqref{injectivitystructure} tells us something about the injectivity property of $I$. Suppose that $I$ were injective on the subspace corresponding to $\mathcal{P}_i$ when we interpret it as a matrix from the virtual to the physical indices, i.e. there is a left inverse $I^+$ such that $I^+I = \mathcal{P}_i$. (Note that we know from non-topological tensor networks that excitation tensors are generically not injective \cite{Haegeman,Vanderstraeten}. However, they could become injective by blocking them with multiple surrounding ground state tensors. Even if this were not the case, we want our formalism to hold irrespective of the specific Hamiltonian, so we can focus on the extreme cases where the excitation tensors are injective.) Acting with $I^+$ on both sides of equation \eqref{injectivitystructure} would give $P_i^{a_\alpha,c_\gamma} \propto P_i^{c_\gamma}$, which is clearly an inconsistency. For this reason we consider the more general case
\begin{equation}
I^+I = \Pi^{[x]}_i\, ,
\end{equation}
where we need to determine the coefficients $x^i_{a_\alpha b_\beta}$. Now we get from \eqref{injectivitystructure}
\begin{equation}
\sum_{c_\gamma} x^i_{c_\gamma a_\alpha} P_i^{c_\gamma ,b_\beta} \propto \sum_{c_\gamma} x^i_{c_\gamma b_\beta} P^{c_\gamma ,b_\beta}\;\;\forall b_\beta\, .
\end{equation}
This shows that all columns of the matrix $[x]$ are proportional, implying that $\Pi_i^{[x]}$ is rank one, i.e. $x^i_{a_\alpha b_\beta} = \bar{v}^i_{a_\alpha} v^i_{b_\beta}$ and $\sum_{a_\alpha} \bar{v}^i_{a_\alpha}v^i_{a_\alpha} = 1$. This is consistent with the fact that it should not be possible to differentiate between the $(i,i^*)$-pair before and after vacuum projection via the physical indices of only $I$ or $I^*$. That $(i,i^*)$ are together in the vacuum state is a global, topological property and this information should not be accessible by looking at only one anyon in the pair. Anyons also detect each other's presence via nonlocal, topological interactions that can be thought of as a generalized Aharonov-Bohm effect \cite{AharonovBohm}. In section \ref{sec:braiding} we elaborate on how the virtual MPO strings implement this topological interaction.
The fact that the virtual MPO strings implement topological interactions is consistent with the rank one property of $\Pi_i^{[x]}$ because the physical indices of a single excitation tensor should not allow one to deduce which virtual MPO string is connected to the tensor on the virtual level.

With the rank one $[x]$ we get from \eqref{injectivitystructure}

\begin{equation}
I^{a_\alpha}P^{a_\alpha,c_\gamma} = \frac{v^i_{a_\alpha}}{v^i_{c_\gamma}}I^{c_\gamma}\, .
\end{equation}
So we can always choose tensors $I^{a_\alpha}$ such that
\begin{equation}
I^{a_\alpha}P^{a_\alpha,c_\gamma} = I^{c_\gamma}\;\;\;\;\text{ and }\;\;\;\; I^+I=\left(\sum_{a_\alpha}P_i^{a_\alpha,x}\right)\left( \sum_{b_\beta} P^{x,b_\beta}_i \right)\, .
\end{equation}
Using this choice for $I$ we can write the equality of equations \eqref{vacuumpair1} and \eqref{vacuumpair3} as

\begin{eqnarray}
\left(I^{a_\alpha}\otimes_a I^{a_\sigma} \right)\tilde{P} & =& \sum_{c,\gamma\lambda} \left( \sum_{bcd,\mu\nu} u^{i,a_\alpha c_\gamma}_{abcd,\mu\nu} \bar{u}^{i^*,c_\lambda a_\sigma}_{abcd,\mu\nu} w_d\right) I^{c_\gamma} \otimes_c I^{c_\lambda} \\
 & \equiv & \sum_{c,\gamma\lambda} M_{c\gamma\lambda,a\alpha\sigma} I^{c_\gamma} \otimes_c I^{c_\lambda} \, , \label{Mmatrix}
\end{eqnarray}
where we used $\otimes_a$ to denote the tensor product of two anyonic excitation tensors connected with the single block MPO $a$ and introduced the matrix $M$. We get from \eqref{Mmatrix}
\begin{equation}
\left(\sum_{a\alpha\sigma} y_{a\alpha\sigma} I^{a_\alpha}\otimes_a I^{a_\sigma} \right)\tilde{P} = \sum_{c\gamma\lambda}\left(\sum_{a\alpha\sigma} M_{c\gamma\lambda, a\alpha\sigma}y_{a\alpha\sigma}\right) I^{c_\gamma} \otimes_c I^{c_\lambda}\, .
\end{equation}
So in order for the $(i,i^*)$ pair to be in the vacuum we should choose $y_{a\alpha\sigma}$ to be an eigenvector of $M$:
\begin{equation}
\sum_{a\alpha\sigma} M_{c\gamma\lambda, a\alpha\sigma}y_{a\alpha\sigma} = y_{c\gamma\lambda}\, .
\end{equation}
Because $\tilde{P}$ is a projector, the matrix $M$ is also a projector and therefore has eigenvalues one or zero. Note that in general the vector $y_{c\gamma\lambda}$ will be entangled in the indices $\gamma$ and $\lambda$. However, this is purely `virtual' entanglement that cannot be destroyed by measurements on only one anyon in the pair because of the rank-one injectivity structure of $I$.

As a final remark we would like to stress that we only looked at the universal properties of the anyonic excitation tensors. These tensors of course also contain a lot of degrees of freedom that one needs to optimize over using a specific Hamiltonian in order to construct eigenstates of the system. This can be done using methods similar to those for non-topological PEPS \cite{Haegeman,Vanderstraeten}.


\subsection{Ground states on the torus and the $S$ matrix}\label{sec:Smatrix}

The projectors $\mathcal{P}_i$ automatically allow one to construct the Minimally Entangled States (MES) on a torus \cite{MES}; one can simply put $\mathcal{P}_i$ along the non-contractible loop in the $y$-direction and close the `inner' and `outer' indices of $\mathcal{P}_i$ with an MPO along the non-contractible loop in the orthogonal $x$-direction. See figure \ref{fig:MES} for a schematic representation.
The resulting structure on the virtual level of the tensor network can be moved around freely because of the pulling through property and is therefore undetectable via local operators, implying we have constructed a ground state $\ket{\Xi^x_i}$ with an anyon flux of type $i$ threaded through the hole in the $x$-direction. A similar construction also allows one to construct an MES $\ket{\Xi^y_i}$ with an anyon flux through the hole in the $y$-direction. Since, for $\ket{\Xi^x_i}$, $\mathcal{P}_i$ lowers the rank of the reduced density matrix of a segment of the torus obtained by cutting along two non-contractible loops in the $y$-direction, this indeed implies (for fixed-point models) that the entanglement entropy is minimized. In \cite{MES} the topological entanglement entropy for such a bipartition in an MES $\ket{\Xi^x_i}$ was found to be $\gamma_i = 2(\log D - \log d_i)$, where $D$ is the so-called total quantum dimension and $d_i$ is the quantum dimension of anyon type $i$. The PEPS construction then shows that the topological entanglement entropy for any bipartition in a low-energy excited state with a contractible boundary surrounding an anyon in sector $i$ will be given by $\gamma'_i = \log D - \log d_i$.

\begin{figure}
  \centering
  \includegraphics[width=0.35\textwidth]{MES}
\caption{A schematic representation of the minimally entangled state $\ket{\Xi^x_i}$ with anyon flux $i$ through the hole in the $x$-direction. It is obtained by placing the projector $\mathcal{P}_i$ on the virtual level of the tensor network on the torus along the non-contractible loop in the $y$-direction and connecting the open indices with an MPO along the $x$-direction.}
\label{fig:MES}
\end{figure}

This identification of the MES gives direct access to the $S$ matrix, which is defined as the unitary matrix that implements the basis transformation from one minimally entangled basis $\{\ket{\Xi^x_i} \}$ to the other $\{\ket{\Xi^y_i} \}$. The advantage of the MPO-injectivity formalism is that we can compute the $S$ matrix at a cost that does not scale with the system size. For this we take a single PMPO $\tilde{P}_{C_4}$ with four tensors and use it to construct the smallest possible `torus' by contracting the virtual indices along the $x$-direction and the ones along the $y$-direction. Defining $T_i^x$ as the vector obtained by putting the central idempotent $\mathcal{P}_i$ along the $y$-direction on the virtual level of the minimal torus and $T_i^y$ by putting $\mathcal{P}_i$ along the $x$-direction, we then have
\begin{equation}
T_i^y = \sum_{j} S_{ij} T_j^x\, .
\end{equation}
We have numerically verified the validity of this expression for all examples below and found that it indeed holds.

Recall that the central idempotents can be written as a sum of simple idempotents
\begin{equation}
\mathcal{P}_i = \sum_{a=1}^{D_i}\sum_{\alpha = 1}^{d_{i,a}} P^{a_\alpha}_i\, ,
\end{equation}
where each of the $P^{a_\alpha}_i$ satisfies $P^{a_\alpha}_iP^{b_\beta}_i = \delta_{a,b}\delta_{\alpha,\beta}P^{a_\alpha}_i$ and $P^{a_\alpha \dagger}_i = P_i^{a_\alpha}$. In principle one could use each of the $P^{a_\alpha}_i$ to construct a ground state on the torus, in a similar way as explained above for $\mathcal{P}_i$. For the examples below we have numerically verified that each $P^{a_\alpha}_i$ for fixed $i$ gives the same ground state, implying that the ground state degeneracy on the torus is indeed given by the number of central idempotents.
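Concretely, once the vectors $T_i^x$ and $T_i^y$ have been contracted on the minimal torus, the relation above determines $S$ through a small linear system. A minimal sketch of this step in Python with numpy (the function name and storage convention are our own illustrative choices):
\begin{verbatim}
import numpy as np

def s_matrix(Tx, Ty):
    """Solve T^y_i = sum_j S_ij T^x_j for the S matrix by least squares.

    Tx, Ty: arrays of shape (N, D) whose i-th rows are the vectors
    obtained by closing the central idempotent P_i along the two
    inequivalent non-contractible loops of the minimal torus.
    """
    # In row form: Ty = S @ Tx, i.e. Ty.T = Tx.T @ S.T
    S_T, *_ = np.linalg.lstsq(Tx.T, Ty.T, rcond=None)
    return S_T.T
\end{verbatim}
Unitarity of the resulting matrix provides a simple consistency check.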
\subsection{Topological spin}\label{sec:topspin}

Even in the absence of rotational symmetry, an adiabatic $2\pi$ rotation of the system should not be observable. Normally, we would conclude from this that the $2\pi$ rotation acts as the identity times a phase (called the Berry phase in continuous systems \cite{Berry}) on the total Hilbert space: $R(2\pi) = e^{i\theta}\mathds{1}$. However, the existence of topological superselection sectors changes this conclusion \cite{WickWightmanWigner}. Because there are no local, i.e. physical, operators that couple states in different sectors, the $2\pi$ rotation could produce a different phase $e^{i2\pi h_i}$ in each sector and still be unobservable. The number $h_i$ in a particular sector is generally called the \emph{topological spin} of the corresponding anyon.

To see this kind of behavior in MPO-injective PEPS, it is important to realize that to define a $2\pi$ rotation one has to specify a specific (discrete) path of states, in the same way one has to define a continuous family of states in order to obtain a Berry phase. For example, in the case of a square lattice we can define the path using four successive rotations over $\pi/2$. When dealing with a non-regular lattice we have to use a family of different lattices along the path. We can now consider a region of MPO-injective PEPS in the sector defined by $\mathcal{P}_i$. This region has an open internal MPO-index along the boundary that cannot be moved freely. We show that one can obtain the topological spin associated to sector $\mathcal{P}_i$ by rotating the PEPS on a finite region while keeping the virtual boundary conditions fixed. After a $2\pi$ rotation, the $\mathcal{P}_i$ surrounding the PEPS region is transformed to

\begin{align}\label{rotation}
 \vcenter{\hbox{
 \includegraphics[width=0.18\linewidth]{n33}}}
\end{align}
Equation \eqref{rotation} can be interpreted as $\mathcal{P}_i$ acting on the matrix $\mathcal{R}_{2\pi}$ defined by

\begin{align}
 \mathcal{R}_{2\pi} = \vcenter{\hbox{
 \includegraphics[width=0.16\linewidth]{n34}}}
\end{align}
By looking at the graphical expression for $\mathcal{R}_{2\pi}^\dagger\mathcal{R}_{2\pi}$

\begin{align}
 \mathcal{R}_{2\pi}^\dagger\mathcal{R}_{2\pi} = \vcenter{\hbox{
 \includegraphics[width=0.17\linewidth]{n35}}} \, ,
\end{align}
one can easily see by embedding it in an MPO-injective PEPS and using the pulling through property (\ref{pullingthrough2}) that we can reduce it to a trivial action, implying $\mathcal{R}_{2\pi}$ is unitary on the relevant subspace. Using the zipper condition \eqref{zippercondition}, the pivotal property \eqref{pivotalnew} and again the pulling through property one can show via some graphical calculus that the following identity holds
\begin{equation}
\mathcal{R}^\dagger_{2\pi} A_{abcd,\mu\nu} \mathcal{R}_{2\pi} = A_{abcd,\mu\nu}
\end{equation}
on the relevant subspace for all elements $A_{abcd,\mu\nu}$ in the algebra. Schur's lemma thus allows us to conclude that $\mathcal{R}_{2\pi} = \sum_i \theta_i\mathcal{P}_i$, with the $\theta_i$ phases as a consequence of the unitarity of $\mathcal{R}_{2\pi}$.
We thus arrive at the desired result, i.e.
\begin{equation}
\mathcal{P}_i \mathcal{R}_{2\pi} = \theta_i \mathcal{P}_i\, ,
\end{equation}
where $\theta_i = e^{i2\pi h_i}$ gives the topological spin of the anyon in sector $i$.


\subsection{Fusion} \label{sec:fusion}

We can associate an algebra, called the fusion algebra, to the topological sectors. Consider a state whose energy density equals that of the ground state everywhere except in two spatially separated regions. Using loop operators that surround either of the individual regions, we can measure the topological charge within each region. Say these measurements reveal topological charges $i$ and $j$. By considering the two different regions and a part of the ground state between them as one big region and using loop operators surrounding this bigger region, we can similarly measure the total topological charge. This measurement will typically have several outcomes, i.e. the total state is in a superposition of different topological sectors. The sectors appearing in this superposition for every $i$ and $j$ determine the integer rank-three tensor $\mathscr{N}_{ij}^k$ and we formally write the fusion algebra as $i\times j = \sum_{k}\mathscr{N}_{ij}^k\,k$. It is also clear that this algebra is by construction commutative, i.e. $\mathscr{N}_{ij}^k = \mathscr{N}_{ji}^k$. Assuming that all states in the same topological sector are connected via local operators, we should be able to move an anyon $i$ from one place to another using a string operator \cite{ReadChakraborty,Kitaev03,Kitaev06,LevinWen05}. Applying this string operator to a region that does not contain an excitation will create a pair $(i,i^*)$ of anyons, where $i^*$ is the unique dual/antiparticle of anyon $i$. From this we see that $\mathscr{N}_{ij}^1 = \delta_{j,i^*}$.

This fusion algebra is very easily and explicitly realized in MPO-injective PEPS. In the simplest case, we just place two single-site idempotents $\mathcal{P}_i$ and $\mathcal{P}_j$ next to each other on neighboring lattice sites. We can then fuse together the MPO strings emanating from $\mathcal{P}_i$ and $\mathcal{P}_j$ into one string. Looking at an annular ground state region surrounding the two anyons and using reasoning similar to that in section \ref{subsec:ansatz}, we find that the sum of all idempotents $\sum_k \mathcal{P}_k$ surrounding both anyons acts as a resolution of the identity on the relevant subspace. We can easily determine the subspaces $\mathcal{P}_k$ on which the combination of both anyons is supported. These subspaces correspond to the possible fusion products of $\mathcal{P}_i$ and $\mathcal{P}_j$. We illustrate this in figure \ref{fig:fusionidempotents}, which uses a new, simplified diagrammatic notation that is defined in figure \ref{fig:groundstateandexcitation}. From now on we shall denote the ground state by a tensor network consisting of black colored sites, omitting the physical indices. A site that contains an excitation is colored blue or red. Note that the procedure of figure \ref{fig:fusionidempotents} does not allow one to determine fusion multiplicities, i.e. it only tells whether $\mathscr{N}_{ij}^k$ is non-zero. The multiplicities, i.e. the specific values of $\mathscr{N}_{ij}^k$, are in general harder to obtain directly since they arise from the number of linearly independent ways the MPO strings emanating from the idempotents can be connected on the virtual level.
One could of course also just calculate the fusion multiplicities from the $S$ matrix using the Verlinde formula \cite{verlinde}.

Note that a projective measurement of the topological charge in some region via the physical PEPS indices greatly depends on the details of the tensors $A$ and $A'$ used to complete the tensor network. This is to be expected since the physical measurement is determined by the specific microscopic realization of the quantum phase.

\begin{figure}[H]
  \centering
  (a) \includegraphics[height=0.28\textwidth]{anyonpair.pdf}
\qquad (b) \includegraphics[height=0.28\textwidth]{anyonpairsimplified.pdf}

\caption{(a) The original tensor diagram for the ground state with an anyon pair $(i,i^*)$ in the corners of the lattice. (b) Simplified tensor diagram for the state. In the remainder of the paper we will only use simplified diagrams. The ground state tensors are denoted by black squares and the physical indices are omitted. The blue squares describe an anyon of type $i,i^*$ living on the respective sites. The blue tensors are supposed to be invariant under the virtual action of the idempotent corresponding to the label $i$ or $i^*$. We use blue and red to denote sites containing an anyon, whereas other colors such as grey are reserved for fusion products of MPOs or anyons.}
\label{fig:groundstateandexcitation}
\end{figure}

\begin{figure}[H]
  \centering
  \includegraphics[width=0.42\linewidth]{FusionIdempotents.pdf}
  \caption{The procedure to determine the fusion product of two anyons in the new, simplified graphical notation (see also figure \ref{fig:groundstateandexcitation}). The anyons are given by the red and blue idempotents $\mathcal{P}_i, \mathcal{P}_j$. We first fuse their outgoing strings $a,b$ to all possible products $c$. We can now measure the fusion product of the anyons by projecting the result on the subspaces determined by the idempotents $\mathcal{P}_k$. The idempotents that give rise to a non-zero projection correspond to the possible fusion products $(k)$ of the red $(i)$ and blue $(j)$ anyons. Importantly, the sum over all grey idempotents $\mathcal{P}_k$ acts as the identity on the virtual labels.}
  \label{fig:fusionidempotents}
\end{figure}

\subsection{Braiding} \label{sec:braiding}

When introducing the PEPS anyon ansatz in section \ref{sec:ansatz} we mentioned that anyons detect each other's presence in a non-local, topological way. We will now make this statement more precise. To every fixed configuration of anyons in the plane we can associate a collection of quantum many-body states. This set of states forms a representation of the colored braid group. This means that when we exchange anyons or braid them around each other, this induces a non-trivial unitary transformation in the subspace corresponding to the configuration. If there is only one state that we can associate to every anyon configuration, then we only get one-dimensional representations. This situation is commonly referred to as Abelian statistics and the anyons are called Abelian anyons. With non-Abelian anyons we can associate multiple orthogonal states to one or more anyon configurations and these will form higher-dimensional representations of the colored braid group.

One can obtain a basis for the subspace associated to a certain anyon configuration by assigning an arbitrary ordering to the anyons and projecting the first two anyons onto a particular fusion state.
One subsequently does the same for the fusion outcome of the first two anyons and the third anyon. This can be continued until a final projection on the vacuum sector is made. So the degeneracy of an anyon configuration is given by the number of different ways an ordered array of anyons can fuse to the vacuum.

Just as in relativistic field theories, there is a spin-statistics relation for anyons, connecting topological spin and braiding. It is expressed by the so-called `pair of pants' relation, which we show graphically using the same set-up as presented in figure \ref{fig:fusionidempotents}:

\begin{align}
\vcenter{\hbox{
 \includegraphics[width=0.17\linewidth]{pants1}}} \xrightarrow{R}
\vcenter{\hbox{
 \includegraphics[width=0.17\linewidth]{pants2}}} \xrightarrow{R}
\vcenter{\hbox{
 \includegraphics[width=0.17\linewidth]{pants3}}} \nonumber \\ = 
\vcenter{\hbox{
 \includegraphics[width=0.17\linewidth]{pants4}}} = e^{2\pi i(h_k - h_i - h_j)}
\vcenter{\hbox{
 \includegraphics[width=0.17\linewidth]{pants1}}}
\end{align}
The pair of pants relation shows that braiding acts diagonally on two anyons that are in a particular fusion state, which is realized in the figures by the grey idempotent $k$ surrounding $i$ and $j$. Because the topological spins can be shown to be rational numbers \cite{Vafa}, the spin-statistics connection reveals that every anyon configuration provides a representation of the truncated colored braid group, i.e. there exists a natural number $n$ such that $R^{2n} = \mathds{1}$.

To describe the exchange and braiding of two anyons that are not in a particular fusion state, we look for a generalization of the pulling through condition \eqref{pullingthrough1}, \eqref{pullingthrough2}. The goal is to obtain tensors $\mathcal{R}_{\mathcal{P}_i,b}$ that describe the pulling of an MPO string of type $b$ through a site that contains an anyon corresponding to $\mathcal{P}_i$ according to the defining equation 
\begin{align}\label{Rmatrix}
\vcenter{\hbox{
 \includegraphics[width=0.28\linewidth]{DefinitionR_LHS.pdf}}} =
\vcenter{\hbox{
 \includegraphics[width=0.28\linewidth]{DefinitionR_RHS.pdf}}}
\end{align}
If there is no anyon on the site we consider, i.e. the idempotent on this site is $\mathcal{P}_1$ corresponding to the trivial anyon, the operator $\mathcal{R}_{\mathcal{P}_1,b}$ is equal to the identity on the MPO indices, as follows from the pulling through property (\ref{pullingthrough2}).

While in practice one could solve the equation that determines $\mathcal{R}$ numerically, we can in fact obtain the tensors $\mathcal{R}_{\mathcal{P}_i,b}$ analytically also for a nontrivial idempotent $\mathcal{P}_i$ with $i\neq 1$. To this end we rewrite the left hand side of \eqref{Rmatrix} by using relation \eqref{zippercondition} as follows (Note that we do not depict the required orientations on the indices and that we omit the corresponding gauge transformations $Z_a$ to keep the presentation simple.
These issues also need not be taken into account for the string-net examples further on.),
\begin{align}\label{4Vs}
\vcenter{\hbox{
 \includegraphics[width=0.28\linewidth]{Rmatrix_LHS.pdf}}} = \sum_{acd\mu\nu}
\vcenter{\hbox{
 \includegraphics[width=0.28\linewidth]{Rmatrix_RHS.pdf}}}
\end{align}
If by $\mathcal{P}_iA_{abcd}$ we denote the multiplication of $\mathcal{P}_i$ and $A_{abcd}$ in the anyon algebra defined in \eqref{eq:multiplicationalgebra}, we find that
\begin{align} \label{braidend}
\vcenter{\hbox{
 \includegraphics[width=0.28\linewidth]{Rmatrix2_LHS.pdf}}} = \sum_{acd\mu\nu}
\vcenter{\hbox{
 \includegraphics[width=0.28\linewidth]{Rmatrix2_RHS.pdf}}}
\end{align}
With a slight abuse of notation, the grey rectangle containing $\overline{A}_{acdb,\mu\nu}$ denotes a tensor similar to the algebra object $A_{acdb,\mu\nu}$ in \eqref{algebraobject}, but without the MPO tensors,
$$
\vcenter{\hbox{
 \includegraphics[width=0.35\linewidth]{Aabcd.pdf}}}.
$$
The tensors $\mathcal{P}_i.A_{acdb,\mu\nu}=P_i^a.A_{acdb,\mu\nu}$ [see Eq.~(\ref{eq:centralidempotentdecomposition})] can easily be determined using the structure constants. Note that all tensors $\mathcal{P}_i.A_{acdb}$ are supported on the subspace determined by $\mathcal{P}_i$, hence they all correspond to the same topological sector. Indeed, braiding an anyon around another one cannot change the topological charges. Remark that after the blue MPO is pulled through the site containing the anyon, the tensor on the site and the braid tensor linking the MPOs are in general entangled, due to the summation over $a,c,d$.

If $\mathcal{P}_i$ is a one-dimensional idempotent, the tensor $\mathcal{P}_i.A_{acdb,\mu\nu}$ is nonzero only for a unique choice of $d=a$ and is in that case equal to $\mathcal{P}_i$, up to a constant. Hence, in that case there is no entanglement between the tensor on the site and the tensor that connects the MPOs.

Once we obtain these tensors $\mathcal{R}_{\mathcal{P}_i,b}$ we know how to resolve the exchange of anyons and we can compute the $R$ matrix (braiding matrix). Suppose we have two anyons, described by idempotents $\mathcal{P}_i, \mathcal{P}_j$, and we want to compare the fusion of these anyons without and with exchanging them. The two situations correspond to figures \ref{fig:exchange}(a) and \ref{fig:exchange}(b), respectively.

\begin{figure}[H]
  \centering
  (a)\includegraphics[width=0.3\textwidth]{FusionWithoutBraiding.pdf}
  (b)\includegraphics[width=0.3\textwidth]{FusionWithBraiding.pdf}
  \caption{Two anyons, described by idempotents $\mathcal{P}_i,\mathcal{P}_j$, can be fused before exchanging them, as in Figure $(a)$, or after exchanging, as in $(b)$. To compare both diagrams we first use the tensor $\mathcal{R}$ to redraw figure $(b)$. The result is shown in equation \eqref{exchangeR}.}
\label{fig:exchange}
\end{figure}

All we need to resolve this situation is the tensor $\mathcal{R}_{\mathcal{P}_i,b}$ for all $b$ for which $\mathcal{P}_j$ is non-zero. With this tensor we can redraw figure \ref{fig:exchange}(b), as shown in \eqref{exchangeR}. It is now clear that the $\mathcal{R}_{\mathcal{P}_i,b}$ tensors encode the $R$ matrices of the topological phase, i.e.
the braiding information of the anyonic excitations.

\begin{align}\label{exchangeR}
\vcenter{\hbox{
 \includegraphics[width=0.3\linewidth]{FusionWithBraiding.pdf}}} =
\vcenter{\hbox{
 \includegraphics[width=0.3\linewidth]{FusionWithR.pdf}}}
\end{align}

Analogously, we now show how the full braiding, or double exchange, of one anyon around another can be determined. As before, this information is completely contained within the $\mathcal{R}$ tensors. We study the situation where there are two anyon pairs present and we braid one anyon of the first pair completely around an anyon of the second pair. The procedure is shown in figure \ref{fig:fullbraid}. If we compare figures \ref{fig:fullbraid} (a) and (d), we note that two different changes occurred in the transition between both diagrams. Firstly, the use of relation \eqref{Rmatrix} can induce a non-trivial action on the inner degrees of freedom of the idempotent. While it cannot change the support of the idempotent itself, as this determines the topological superselection sector, the degrees of freedom within a sector can change. This is important if the idempotent corresponding to the anyon is higher dimensional.

\begin{figure}[H]
  \centering
  (a)\includegraphics[height=0.23\textwidth]{FullBraid1.pdf} \qquad
  (b)\includegraphics[height=0.23\textwidth]{FullBraid2.pdf} \\
  (c)\includegraphics[height=0.23\textwidth]{FullBraid3.pdf} \qquad
  (d)\includegraphics[height=0.23\textwidth]{FullBraid4.pdf}

  \caption{Figure (a): two anyons in a lattice; the lattice sites that contain the central idempotents $\mathcal{P},\mathcal{Q}$ are colored red and blue, respectively. Figure (b): we can move the red anyon until the configuration is suited to apply equation \eqref{Rmatrix}. Figure (c): we pull the blue line through the red anyon, using the tensor $\mathcal{R}_{\mathcal{P},b}$ that depends on the red idempotent and the label of the blue line. Figure (d): a similar operation, now with $\mathcal{R}_{\mathcal{Q},a}$.}
\label{fig:fullbraid}
\end{figure}




\begin{figure}[H]
\begin{align*}
\vcenter{\hbox{
 \includegraphics[width=0.35\textwidth]{BraidSymm_LHS.pdf}}}
 =
 \vcenter{\hbox{
 \includegraphics[width=0.45\textwidth]{BraidSymm_RHS.pdf} }}
\end{align*}
\caption{A more symmetric version of the braiding process described in figure \ref{fig:fullbraid}. Completely braiding a red anyon around a blue one is described by the contraction of the tensors $\mathcal{R}_{\mathcal{P},b}$ and $\mathcal{R}_{\mathcal{Q},a}$.}
\label{fig:fullbraid2}
\end{figure}

Secondly, the fusion channels of the red and blue anyon pairs can change. Both pairs were originally in the vacuum sector, but can be in a superposition of sectors after braiding, as is illustrated in figure \ref{fig:fullbraid3}.

\begin{figure}[H]
  \centering
  (a) \includegraphics[width=0.5\textwidth]{BraidFusionChannels1.pdf}
  \quad (b) \includegraphics[width=0.20\textwidth]{BraidFusionChannels2.pdf}
  \caption{(a) The result of braiding the red anyon around the blue, as in figure \ref{fig:fullbraid}(b). The grey labels correspond to the possible fusion channels of the pair of red (or blue) anyons. Before braiding, the pair of red anyons was in the trivial topological sector. After braiding, several fusion results are possible. They can be measured at the grey line. A sum over the different possible fusion outcome values for these lines is implied.
(b) A more symmetric (and rotated) version of (a). Due to the structure of the tensors $\\mathcal{R}$, the grey lines $c$ at the top and bottom are equal.}\n\\label{fig:fullbraid3}\n\\end{figure}\n\n\n\\section{Examples}\\label{sec:examples}\n\nWe will now illustrate the general formalism of anyons in MPO-injective PEPS with some examples and show that we indeed find all topological sectors. First, we focus on discrete twisted gauge theories \\cite{DijkgraafWitten,propitius,Kitaev03,tqd}. After that we turn to string-net models \\cite{LevinWen05,TuraevViro}.\n\n\\subsection{Discrete gauge theories}\n\nThe projector MPO for twisted quantum double models takes the form \n\\begin{equation}\\label{vacuumMPO}\nP = \\frac{1}{|\\mathsf{G}|} \\sum_{g\\in \\mathsf{G}} V(g),\n\\end{equation}\nwhere $\\mathsf{G}$ is an arbitrary finite group of order $|\\mathsf{G}|$ and $V(g)$ are a set of injective MPOs that form a representation of $\\mathsf{G}$, i.e. $V(g)V(h) = V(gh)$. $V(g)$ is constructed from the tensors\n\\begin{align}\\label{groupMPO}\n\\vcenter{\\hbox{\n \\includegraphics[width=0.13\\linewidth]{grouptensor1}}} \\leftrightarrow\n\\vcenter{\\hbox{\n \\includegraphics[width=0.35\\linewidth]{grouptensor2}}}\n\\end{align}\nwhere the internal MPO indices are the horizontal ones. All indices are $|\\mathsf{G}|$-dimensional and are labeled by group elements. We use the convention that indices connected in the body of the tensors are enforced to be equal. In \\eqref{groupMPO} we only drew the non-zero tensor components, i.e. for lower indices $h_1$ and $h_2$ there is only one non-zero tensor component, namely the one where the upper indices are related by a left group multiplication by $g$. The number $\\alpha(g_1,g_2,g_3) \\in \\mathsf{U}(1)$ is a so-called 3-cocycle satisfying the 3-cocycle condition\n\\begin{equation}\\label{cocyclecondition}\n\\alpha(g_1,g_2,g_3)\\alpha(g_1,g_2g_3,g_4)\\alpha(g_2,g_3,g_4) = \\alpha(g_1g_2,g_3,g_4)\\alpha(g_1,g_2,g_3g_4)\n\\end{equation}\nWithout loss of generality one can take the 3-cocycles to satisfy\n\\begin{equation}\\label{gaugechoice}\n\\alpha(e,g,h) = \\alpha(g,e,h) = \\alpha(g,h,e) = 1\\, ,\n\\end{equation}\nwith $e$ the identity group element, for all $g,h \\in \\mathsf{G}$. For this twisted quantum double MPO we have $g^*=g^{-1}$ and $Z_g = \\sum_{h_1}\\alpha(g,g^{-1},h_1)\\ket{g^{-1}h_1,h_1}\\bra{h_1,g^{-1}h_1}$. The specific form of this MPO also allows one to see immediately that the topological entanglement entropy of a contractible region in the corresponding MPO-injective PEPS is given by $\\ln|\\mathsf{G}|$.\n\nThe fusion tensors $X_{g_1g_2}$ for the twisted quantum double MPO take the form\n\\begin{align}\\label{groupfusion}\n\\vcenter{\\hbox{\n \\includegraphics[width=0.15 \\linewidth]{groupfusiontensor1}}} \\leftrightarrow\n\\vcenter{\\hbox{\n \\includegraphics[width=0.37\\linewidth]{groupfusiontensor2}}}\\, .\n\\end{align}\nUsing \\eqref{groupMPO}, \\eqref{groupfusion} and \\eqref{cocyclecondition} one can check that the injective MPOs $V(g)$ indeed form a representation of $\\mathsf{G}$. Using the same data one also sees that the zipper condition \\eqref{zippercondition} holds for the tensors of $V(g)$. 
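Before proceeding, it is instructive to verify the 3-cocycle condition \eqref{cocyclecondition} and the gauge choice \eqref{gaugechoice} for a concrete input. The sketch below (in Python) uses the standard representative cocycles of $H^3(\mathbb{Z}_N,\mathsf{U}(1))$; this particular $\alpha_p$ is not taken from the main text but serves as a self-contained test case:
\begin{verbatim}
import itertools
import numpy as np

N, p = 3, 1  # Z_N with cocycle class labeled by p = 0, ..., N-1

def alpha(a, b, c):
    # Standard representative of H^3(Z_N, U(1)):
    # alpha_p(a,b,c) = exp(2 pi i p / N * a * floor((b + c) / N))
    return np.exp(2j * np.pi * p / N * (a * ((b + c) // N)))

# Brute-force check of the 3-cocycle condition for all group elements
for g1, g2, g3, g4 in itertools.product(range(N), repeat=4):
    lhs = alpha(g1, g2, g3) * alpha(g1, (g2 + g3) % N, g4) * alpha(g2, g3, g4)
    rhs = alpha((g1 + g2) % N, g3, g4) * alpha(g1, g2, (g3 + g4) % N)
    assert np.isclose(lhs, rhs)

# The gauge choice alpha(e,g,h) = alpha(g,e,h) = alpha(g,h,e) = 1 also holds
assert all(np.isclose(alpha(0, g, h), 1) and np.isclose(alpha(g, 0, h), 1)
           and np.isclose(alpha(g, h, 0), 1)
           for g in range(N) for h in range(N))
\end{verbatim}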
Again using the cocycle condition, the fusion tensors are seen to satisfy the equation
\begin{equation}
(X_{g_3g_2} \otimes \mathds{1}_{g_1})X_{g_3g_2,g_1} = \alpha(g_3,g_2,g_1)(\mathds{1}_{g_3}\otimes X_{g_2,g_1})X_{g_3,g_2g_1}\, .
\end{equation}
So the 3-cocycles $\alpha$ play the role of the $F$ matrices in the general equation \eqref{Fmove}. This connection between MPO group representations and 3-cocycles was first established in \cite{czxmodel}. For more details about the MPOs under consideration and the corresponding PEPS we refer to \cite{SPTpaper}.

\subsubsection{Topological charge}

Using the MPO and fusion tensors defined above we can now construct the algebra elements $A_{g_1,g_2,g_3,g_4}$ defined by Eq.~\eqref{algebraobject}; note that the indices $\mu,\nu$ are always one-dimensional in the group case so we can safely discard them. To construct the central idempotents we focus on the following algebra elements
\begin{equation}
A_{g,g^{-1}k^{-1},g,k} = \delta_{[k,g],e} R_g(k^{-1})\, ,
\end{equation}
where $[k,g] = kgk^{-1}g^{-1}$ is the group commutator and $e$ the trivial group element. For convenience, our choice for the basis of the algebra $R_g(k)$ deviates slightly from Eq.~\eqref{algebraobject}. It is constructed by closing a single block MPO \eqref{groupMPO} labeled by group element $k$, satisfying $[k,g] = e$, with a tensor that has as non-zero components
\begin{align}
\vcenter{\hbox{
 \includegraphics[width=0.1\linewidth]{TQDQ2}}} \leftrightarrow
\vcenter{\hbox{
 \includegraphics[width=0.25\linewidth]{TQDQ}}} = \frac{\alpha(kg^{-1},g,h_1)}{\alpha(g,kg^{-1},gh_1)}
\end{align}
Note that this tensor is chosen slightly differently from the one in Eq.~\eqref{algebraobject} and that the direction of $k$ has been reversed.

By repeated use of the cocycle condition and the fact that $[g,k]=[g,m]=e$ one can now derive the multiplication rule of the algebra elements
\begin{equation} \label{groupmultiplication}
R_g(m)R_g(k) = \bar{\omega}_g(m,k) \frac{\epsilon_g(mk)}{\epsilon_g(m)\epsilon_g(k)}R_g(mk)\, ,
\end{equation}
where
\begin{eqnarray}
\bar{\omega}_g(m,k) & = & \frac{\alpha(m,g,k)}{\alpha(m,k,g)\alpha(g,m,k)} \nonumber \\
 \frac{\epsilon_g(mk)}{\epsilon_g(m)\epsilon_g(k)} & = & \frac{\alpha(g,mkg^{-1},g)}{\alpha(g,mg^{-1},g)\alpha(g,kg^{-1},g)}\, .
\end{eqnarray}
One can check that $\omega_g(m,k)$ is a 2-cocycle satisfying the 2-cocycle condition
\begin{equation}
\omega_g(m,k)\omega_g(mk,l) = \omega_g(m,kl)\omega_g(k,l)\, ,
\end{equation}
when $m$, $k$ and $l$ commute with $g$. So the algebra elements $R_g(k)$ form a projective representation of the centralizer $\mathcal{Z}_g$ of $g$. We now define the following projective irreducible representations of $\mathcal{Z}_g$ labeled by $\mu$,
\begin{equation}
\Gamma^\mu_g(m) \Gamma^\mu_g(k) = \omega_g(m,k) \Gamma_g^\mu(mk)\, ,
\end{equation}
and the corresponding projective characters $\chi^\mu_g(k) = \text{tr}(\Gamma^\mu_g(k))$. We denote the dimension of projective irrep $\mu$ by $d_\mu$. Using the Schur orthogonality relations for projective irreps one can now check that
\begin{equation}
P_{(g,\mu)} = \frac{d_\mu}{|\mathcal{Z}_g|}\sum_{k\in\mathcal{Z}_g} \epsilon_g(k) \chi^\mu_g(k)R_g(k)
\end{equation}
are Hermitian orthogonal projectors, i.e. $P_{(g,\mu)}^\dagger = P_{(g,\mu)}$ and $P_{(g,\mu)}P_{(h,\nu)} = \delta_{g,h}\delta_{\mu\nu} P_{(g,\mu)}$.
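As a toy illustration of these orthogonality properties, consider the untwisted case $\alpha \equiv 1$ with $\mathsf{G} = \mathbb{Z}_N$, where $\epsilon_g \equiv 1$, $d_\mu = 1$, $\mathcal{Z}_g = \mathbb{Z}_N$ and the multiplication rule reduces to $R_g(m)R_g(k) = R_g(mk)$. Representing the $R_g(k)$ abstractly by the regular representation of $\mathbb{Z}_N$ (an assumption made purely for illustration; these are not the actual MPO matrices), a few lines of numpy confirm the projector properties:
\begin{verbatim}
import numpy as np

N = 4  # untwisted Z_N: alpha = 1, so epsilon_g = 1 and Z_g = Z_N for all g

# Regular representation of Z_N: R(k) sends basis vector e_i to e_{i+k mod N},
# realizing the untwisted multiplication rule R_g(m) R_g(k) = R_g(m + k)
R = [np.roll(np.eye(N), k, axis=0) for k in range(N)]

# P_{(g,mu)} = (d_mu / |Z_g|) sum_k chi^mu(k) R_g(k) with d_mu = 1 and
# characters chi^mu(k) = exp(2 pi i mu k / N)
P = [sum(np.exp(2j * np.pi * mu * k / N) * R[k] for k in range(N)) / N
     for mu in range(N)]

for mu in range(N):
    assert np.allclose(P[mu].conj().T, P[mu])         # Hermiticity
    for nu in range(N):
        target = P[mu] if mu == nu else np.zeros((N, N))
        assert np.allclose(P[mu] @ P[nu], target)     # orthogonal idempotents
\end{verbatim}
Since $\mathbb{Z}_N$ is abelian, the conjugacy classes are singletons, so these $N^2$ projectors, labeled by a flux $g$ and a charge $\mu$, already coincide with the central idempotents constructed below.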
To obtain the central idempotents we have to sum over all elements in the conjugacy class $\mathcal{C}_A$ of $g$, so the final anyon ansatz is
\begin{equation}
\mathcal{P}_{(\mathcal{C}_A,\mu)} = \sum_{g\in \mathcal{C}_A} P_{(g,\mu)}.
\end{equation}
In this way we indeed recover the standard labeling of dyonic excitations in discrete twisted gauge theories: the flux is labeled by a conjugacy class $\mathcal{C}_A$ and the charge is labeled by a projective irrep of the centralizer $\mathcal{Z}_g$ of a representative element $g$ in $\mathcal{C}_A$ \cite{orbifold,DijkgraafPasquierRoche}.

\subsubsection{Anyon ansatz}

\begin{figure}
  \centering
  a)
  \includegraphics[width=0.2\textwidth]{QDtensor1}
  b)
  \includegraphics[width=0.2\textwidth]{QDtensor2}
  c)
  \includegraphics[width=0.2\textwidth]{QDtensor3}
\caption{Tensors for the non-twisted ($\alpha(g_1,g_2,g_3)\equiv 1$) quantum double PEPS on a hexagonal lattice. a) Ground state tensors for the $A$-sublattice. b) Ground state tensors for the $B$-sublattice. All indices are $|\mathsf{G}|$-dimensional and are labeled by group elements. There are two virtual indices for every link in the lattice. Indices that are connected in the tensors are enforced to be equal. There are three physical indices on each tensor, of which we write the index value between the virtual indices. All non-zero tensor components have index configurations as indicated in the figures and have value one. The resulting PEPS is MPO-injective with virtual support PMPO \eqref{vacuumMPO}. c) An anyonic excitation tensor for the $A$-sublattice. The extra lower index, which we colored red for clarity, is to be connected to an MPO on the virtual level. Multiplying all the physical indices of an $A$-sublattice ground state tensor counterclockwise gives the identity group element. For the excitation tensor the multiplied value of the physical indices is $h_2^{-1}g^{-1}h_2$, which indicates the presence of a non-trivial flux. The physical indices can distinguish between virtual MPOs corresponding to group elements in different conjugacy classes. However, the rank-one injectivity structure of the anyonic tensor is reflected in the fact that elements in the same conjugacy class give equivalent configurations of the physical indices.}
\label{fig:QDtensor}
\end{figure}

In this section we will illustrate some aspects of the anyon ansatz that were discussed in section \ref{sec:ansatz}. Firstly, in figure \ref{fig:QDtensor} we show the explicit PEPS ground state and excitation tensors for the non-twisted ($\alpha(g_1,g_2,g_3)\equiv 1$) quantum double PEPS on the hexagonal lattice. This provides an explicit example of an anyonic excitation tensor that has a rank-one injectivity structure on each virtual subspace corresponding to a topological sector.


Secondly, we look at a pair of pure charges. Using reasoning similar to that in the previous section, we can construct the simple idempotents and nilpotents with diagonal group label as
\begin{equation}
P_{(\mathcal{C}_A,\mu)}^{(g,i),(g,j)} = \frac{d_\mu}{|\mathcal{Z}_g|}\sum_{k\in\mathcal{Z}_g} \epsilon_g(k) [\Gamma^\mu_g(k)]_{ij}R_g(k)\, ,
\end{equation}
where $i,j$ are matrix indices of the irrep $\Gamma^\mu_g(k)$. The simple idempotents and nilpotents with off-diagonal group label can be obtained via a straightforward generalization, but we will not need them here.

We consider a charge $\mu$ and its anti-charge $\mu^*$.
The relevant idempotents and nilpotents are $P_{\mu}^{i,j} \equiv P_{(e,\mu)}^{(e,i),(e,j)}$. To construct a topological PEPS containing the charge pair we start with two tensors $C^i_\mu$ and $C^j_{\mu^*}$, which have as many virtual indices as the coordination number of the lattice and one physical index. Their virtual indices are supported on the subspaces determined by $P_\mu^{i,i}$ and $P_{\mu^*}^{j,j}$, respectively. If we interpret the charge tensors as matrices with the physical index as row index and the virtual indices together as the column index, then this implies $C^i_\mu P^{k,k}_\nu = \delta_{\mu,\nu}\delta_{i,k}C^i_\mu$ and $C^j_{\mu^*} P^{k,k}_{\nu^*} = \delta_{\mu^*,\nu^*}\delta_{j,k} C^j_{\mu^*}$. We now want to find the complete anyonic excitation tensors $C_\mu$ and $C_{\mu^*}$ such that $C_\mu P_\mu^{i,i} = C_\mu^i$ and $C_{\mu^*}P_{\mu^*}^{j,j} = C^j_{\mu^*}$. For this we proceed as before: we take both tensors and project them onto the vacuum sector. We will ignore the PEPS environment and simply work with the tensor product of both charge tensors $C_\mu^i\otimes C_{\mu^*}^j$. The vacuum projector \eqref{vacuumMPO} on this tensor product can be written as
\begin{equation}
\tilde{P} = \frac{1}{|\mathsf{G}|}\sum_{g\in \mathsf{G}} V(g)\otimes V(g) = \frac{1}{|\mathsf{G}|}\sum_{g\in \mathsf{G}} R_e(g)\otimes R_e(g)\, .
\end{equation}
Using the orthogonality relations for irreps we rewrite the vacuum projector as
\begin{equation}
\tilde{P} = \sum_{\nu}\frac{1}{d_\nu}\sum_{p,q=1}^{d_\nu}P^{p,q}_\nu\otimes P^{p,q}_{\nu^*}\, ,
\end{equation}
where $[\Gamma^{\nu^*}(g)]_{pq} = [\bar{\Gamma}^\nu(g)]_{pq}$. We therefore get for the vacuum projected charge pair
\begin{equation}
\left(C_\mu^i\otimes C_{\mu^*}^j\right)\left( \sum_{\nu}\frac{1}{d_\nu}\sum_{p,q=1}^{d_\nu}P^{p,q}_\nu\otimes P^{p,q}_{\nu^*}\right) = \delta_{i,j}\frac{1}{d_\mu}\sum_q C_\mu^i P_\mu^{i,q}\otimes C_{\mu^*}^i P_{\mu^*}^{i,q}\, .
\end{equation}
By taking $C_\mu^q \equiv C_\mu^i P_\mu^{i,q}$ we obtain
\begin{equation}
\left(\sum_{i = 1}^{d_\mu}C_\mu^i\otimes C_{\mu^*}^i\right)\tilde{P} = \left(\sum_{i = 1}^{d_\mu}C_\mu^i\otimes C_{\mu^*}^i\right)\, .
\end{equation}
So we see that for the pair to be in the vacuum state both charges should form a maximally entangled state in the irrep matrix indices. However, as explained in the general discussion of section \ref{sec:ansatz}, this is purely virtual entanglement that cannot be destroyed by physical operators acting on only one charge in the pair.


\subsubsection{Topological spin}

To calculate the topological spin we first note the following relation
\begin{equation}
\Gamma^\mu_g(k)\Gamma^\mu_g(g) = \Gamma^\mu_g(g)\Gamma^\mu_g(k)\frac{\omega_g(k,g)}{\omega_g(g,k)} = \Gamma^\mu_g(g)\Gamma^\mu_g(k)\, ,
\end{equation}
which holds for all $k \in \mathcal{Z}_g$. Using Schur's lemma, this implies that $\Gamma_g^\mu(g) = e^{i2\pi h^\mu_g}\mathds{1}_{d_\mu}$.
One can also easily check that\n\\begin{equation}\n\\Gamma_g^\\mu(g^{-1}) = \\omega_g(g,g^{-1})\\Gamma_g^\\mu(g)^\\dagger\\, .\n\\end{equation}\nWith these observations we now obtain\n\\begin{align}\nP_{(g,\\mu)}R_g(g) &= \\frac{d_\\mu}{|\\mathcal{Z}_g|}\\sum_{k\\in\\mathcal{Z}_g} \\chi^\\mu_g(k)R_g(kg)\\nonumber \\\\\n&= \\frac{d_\\mu}{|\\mathcal{Z}_g|}\\sum_{x\\in\\mathcal{Z}_g} \\text{tr}(\\Gamma^\\mu_g(x)\\Gamma^\\mu_g(g^{-1})\\omega^*_g(x,g^{-1}))R_g(x)\\nonumber \\\\\n&= e^{-i2\\pi h^\\mu_g}\\frac{d_\\mu}{|\\mathcal{Z}_g|}\\sum_{x\\in\\mathcal{Z}_g} \\epsilon_g(x) \\chi^\\mu_g(x)R_g(x) \\nonumber\\\\\n&= e^{-i2\\pi h^\\mu_g} P_{(g,\\mu)}\\, . \\nonumber\n\\end{align}\nSince $e^{-i2\\pi h^\\mu_g}$ is the same for all elements in the conjugacy class $\\mathcal{C}_A$ of $g$ we obtain the desired result\n\\begin{equation}\n\\mathcal{P}_{(\\mathcal{C}_A,\\mu)}\\mathcal{R}_{2\\pi} = \\mathcal{P}_{(\\mathcal{C}_A,\\mu)}\\sum_{g \\in \\mathsf{G}}R_g(g) = \\theta_{(\\mathcal{C}_A,\\mu)}\\mathcal{P}_{(\\mathcal{C}_A,\\mu)}\\, ,\n\\end{equation}\nwhere $\\mathcal{R}_{2\\pi}$ was introduced in section \\ref{sec:topspin}. The phase $\\theta_{(\\mathcal{C}_A,\\mu)} = e^{-i2\\pi h^\\mu_g}$ gives the topological spin of the corresponding anyon.\n\n\n\\subsubsection{Fusion}\n\nIn the group case, fusion is easy to calculate analytically because of the following identity for the basis elements of our algebra:\n\n\\begin{align}\\label{fusionloop}\n\\vcenter{\\hbox{\n \\includegraphics[width=0.27\\linewidth]{groupfusion1}}} = \\delta_{k,l}\n\\vcenter{\\hbox{\n \\includegraphics[width=0.25\\linewidth]{groupfusion2}}}\n\\end{align}\nThis implies that to calculate fusion relations we can simply trace over the inner indices at the shared boundary of two central idempotents to create a bigger loop. \n\nWe subsequently act with the fusion tensor $X_{gh}$ on the two red inner indices on the right-hand side of \\eqref{fusionloop}, which acts as a unitary on the support of these indices. We also attach $X^\\dagger_{gh}$ to the outer indices in \\eqref{fusionloop}, which can be obtained by decomposing the product of the two MPOs $V(g)$ and $V(h)$ that are connected to the central idempotents once we embed them in an MPO-injective PEPS. Using the 3-cocycle condition one can now check that we have\n\n\\begin{align}\n\\vcenter{\\hbox{\n \\includegraphics[width=0.26\\linewidth]{groupfusion3}}} = \\beta(k,g,h)\n\\vcenter{\\hbox{\n \\includegraphics[width=0.26\\linewidth]{groupfusion4}}}\\, ,\n\\end{align}\nwhere $\\beta(k,g,h)$ is given by\n\\begin{equation}\n\\beta(k,g,h) = \\omega_k(g,h)\\frac{\\epsilon_{gh}(k)}{\\epsilon_{g}(k)\\epsilon_{h}(k)}\\, .\n\\end{equation}\nSo we obtain\n\\begin{equation}\nP_{(g,\\mu)}\\times P_{(h,\\nu)} = \\frac{d_\\mu d_\\nu}{|\\mathcal{Z}_{g}||\\mathcal{Z}_{h}|}\\sum_{k\\in\\mathcal{Z}_{gh}} \\epsilon_{gh}(k) \\chi^\\mu_g(k) \\chi^\\nu_h(k)\\omega_k(g,h)R_{gh}(k) \\, .\n\\end{equation}\nWe now define $\\Gamma^{\\mu\\nu}_{gh}(k) = \\Gamma^\\mu_g(k)\\otimes \\Gamma^\\nu_h(k)\\omega_k(g,h)$ for all $k$ such that $[k,g]=[k,h] = e$. Then repeated use of the 3-cocycle condition \\eqref{cocyclecondition} shows that\n\\begin{equation}\n\\Gamma^{\\mu\\nu}_{gh}(k_1)\\Gamma^{\\mu\\nu}_{gh}(k_2) = \\omega_{gh}(k_1,k_2)\\Gamma^{\\mu\\nu}_{gh}(k_1k_2)\\, ,\n\\end{equation}\ni.e. $\\Gamma^{\\mu\\nu}_{gh}(k)$ is a projective representation of $\\mathcal{Z}_{gh}$. 
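In the untwisted case ($\\omega \\equiv 1$) the object $\\Gamma^{\\mu\\nu}$ is just an ordinary tensor product representation, and its irrep content follows from character orthogonality. As a small check of the fusion rule derived next (again a sketch of our own, reusing \\texttt{chi} and \\texttt{G} from the earlier snippet):\n\\begin{verbatim}\n# Multiplicity of irrep lam in mu x nu for ordinary S3 characters:\n#   W = (1/|G|) * sum_k chi_mu(k) chi_nu(k) conj(chi_lam(k))\ndef W(mu, nu, lam):\n    return round(sum(chi(mu, k) * chi(nu, k) * np.conj(chi(lam, k))\n                     for k in G).real / 6)\n\nprint([W(2, 2, lam) for lam in range(3)])  # standard x standard = [1, 1, 1]\n\\end{verbatim}\nIn the twisted case the same computation goes through with projective characters. We now consider the decomposition of the projective representation $\\Gamma^{\\mu\\nu}_{gh}$.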
This representation will in general be reducible\n\\begin{equation}\n\\Gamma^{\\mu\\nu}_{gh}(k) \\simeq \\bigoplus_\\lambda \\mathds{1}_{W_{\\mu\\nu}^\\lambda}\\otimes \\Gamma^\\lambda_{gh}(k)\\, ,\n\\end{equation}\nwhere the integer $W_{\\mu\\nu}^\\lambda$ denotes the number of times a projective irrep $\\lambda$ appears in the decomposition of $\\Gamma^{\\mu\\nu}_{gh}$. From this we get the following relation between the projective characters\n\\begin{equation}\n\\chi_g^\\mu(k)\\chi_h^\\nu(k)\\omega_k (g,h) = \\sum_{\\lambda} W_{\\mu\\nu}^\\lambda\\, \\chi_{gh}^\\lambda (k)\\, .\n\\end{equation}\nSo we find\n\\begin{equation}\nP_{(g,\\mu)}\\times P_{(h,\\nu)} = \\sum_\\lambda W^\\lambda_{\\mu\\nu} P_{(gh,\\lambda)}\\, ,\n\\end{equation}\nup to some normalization factors. In this way we obtain the final fusion rules\n\\begin{equation}\n\\mathcal{P}_{(\\mathcal{C}_A,\\mu)}\\times \\mathcal{P}_{(\\mathcal{C}_B,\\nu)} = \\sum_{(\\mathcal{C}_C,\\lambda)} \\mathscr{N}_{(\\mathcal{C}_A,\\mu),(\\mathcal{C}_B,\\nu)}^{(\\mathcal{C}_C,\\lambda)} \\mathcal{P}_{(\\mathcal{C}_C,\\lambda)}\\, ,\n\\end{equation}\nwhere the fusion coefficients can be written down explicitly using the orthogonality relations for projective characters \\cite{orbifold,DijkgraafWitten}:\n\\begin{equation}\n\\mathscr{N}_{(\\mathcal{C}_A,\\mu),(\\mathcal{C}_B,\\nu)}^{(\\mathcal{C}_C,\\lambda)} = \\frac{1}{|\\mathsf{G}|}\\sum_{g_1\\in \\mathcal{C}_A} \\sum_{g_2\\in\\mathcal{C}_B}\\sum_{g_3 \\in \\mathcal{C}_C}\\sum_{h\\in\\mathcal{Z}_{g_1}\\cap \\mathcal{Z}_{g_2}\\cap \\mathcal{Z}_{g_3}} \\delta_{g_1g_2, g_3}\\chi_{g_1}^\\mu(h)\\chi^\\nu_{g_2}(h)\\bar{\\chi}^{\\lambda}_{g_3}(h)\\omega_h(g_1,g_2)\\, .\n\\end{equation}\n\n\n\\subsection{String-nets}\nThe next example we consider is the class of string-net models. For simplicity, we restrict ourselves to models without higher-dimensional fusion spaces, i.e. all $N_{ab}^c$ in equation \\eqref{fusioncategory1} are either 0 or 1. Also, we only deal with models where each single block MPO is self-dual: $a=a^*$ and $N_{aa}^1=1$. Both restrictions can easily be lifted.\n\nThe description of string-nets in the framework presented here was introduced in \\cite{MPOpaper}. The string-net models are a prime example of the MPO-injectivity formalism. The PMPO is constructed from the $G$-symbols and the quantum dimensions of a unitary fusion category. The single block MPOs correspond one-to-one with the simple objects of the input fusion category. The fusion matrices $X_{ab}^c$ are also easily constructed from the $G$-symbols and the quantum dimensions. These tensors give rise to an MPO-injective PEPS and they satisfy the properties listed in section \\ref{subsec:zipper}. The validity of the general requirements in our formalism follows mainly from the pentagon relation of the $G$-symbols. The properties of section \\ref{subsec:zipper} are rooted in the spherical property of unitary fusion categories.\n\nTo describe the string-nets as a tensor network, we need to take one extra technical subtlety into account. Every closed loop in the PEPS representation of a string-net wave function gives rise to a factor equal to the quantum dimension of the label of this loop. In \\cite{MPOpaper}, this was taken care of both by incorporating such factors in the tensors and by adding extra factors for every bend in an MPO. Because of this convention, the MPOs give rise, for every length $L$, to projectors $P_L$ that are not Hermitian on a closed loop. 
Luckily, as all these operators are still similar to Hermitian operators via a local, positive similarity transformation, this has no implications for the general theory. For example, we still find that every single block MPO labeled by $a$ has a unique corresponding single block MPO $a^*$ that is obtained by Hermitian conjugation, where $a^*$ is just the categorical dual of $a$. The tensors we describe next are used on a square lattice; similar tensors can be used on other lattices.\n\nFirst we describe the PMPO. We have\n\\begin{align}\\label{StringnetMPO}\n\\vcenter{\\hbox{\n \\includegraphics[height=0.12\\linewidth]{StringNetMPO_LHS.pdf}}} \\leftrightarrow\n\\vcenter{\\hbox{\n \\includegraphics[height=0.15\\linewidth]{StringNetMPO_RHS.pdf}}}\n\\end{align}\nwhere the internal MPO indices are the horizontal ones and all indices are $N$-dimensional. The single block MPOs are determined by fixing the label $f$. The corresponding weights $w_f$ used to construct a PMPO are given by the quantum dimensions $d_f$ divided by $\\mathcal{D}^2 = \\sum_a d_a^2$, the square of the total quantum dimension of the fusion category. The factors $v_a$ in the definition of the MPO are included to take care of the closed loop factors. They are given by the square roots of the quantum dimensions: $v_a = d_a^{1\/2}$. The single block MPOs obtained by fixing $f$ satisfy the algebraic structure of the fusion algebra of the category we used to construct the MPOs.\n\nFor the string-net MPOs we consider here, the gauge transformations $Z_a$ are all trivial; they amount to simply swapping the double line structure which is present in the virtual indices of the MPO tensor. The fusion tensors $X_{ab}^c$ are given by\n\\begin{align}\n\\vcenter{\\hbox{\n \\includegraphics[height=0.13 \\linewidth]{StringNetFusion_LHS.pdf}}} \\leftrightarrow\n\\vcenter{\\hbox{\n \\includegraphics[height=0.16\\linewidth]{StringNetFusion_RHS.pdf}}}.\n\\end{align}\nThe factor $v_c$ is only present for the closed loop condition (and could be taken care of differently). The pivotal property for these fusion tensors is\n\\begin{align}\n\\vcenter{\\hbox{\n \\includegraphics[height=0.22 \\linewidth]{stringnetpivotal}}} \\, ,\n\\end{align}\nwhich is equivalent to \\eqref{pivotalnew} up to the diagonal matrices labeled by $1\/2$ and $-1\/2$, which denote the powers of the quantum dimensions that are inserted to satisfy the closed loop condition. More specifically, these matrices are $\\sum_a d_a^{\\pm 1\/2}\\ket{a}\\bra{a}$.\n\nWith this information, the MPO and fusion tensors can now be used in our framework to obtain an ansatz for anyons in string-nets. Unlike in the case of discrete gauge theories, we now need the ansatz \\eqref{algebraobject} in full generality. We recall the form of the algebra elements\n$$\nA_{abcd,\\mu\\nu} = \\vcenter{\\hbox{\\includegraphics[width=0.24\\linewidth]{algebraobject}}}.\n$$\nThe structure constants that define the multiplication of these objects can be computed analytically with formula \\eqref{eq:structureconstants} or numerically. The algebra that describes the anyons is similar to a construction proposed in \\cite{Qalgebra}, although it was obtained from a very different motivation. To obtain the central idempotents of this algebra we use the simple algorithm described in Appendix \\ref{app:idempotents}; see also \\cite{friedl1985polynomial}. 
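The essential steps are easy to sketch. Assume the structure constants are available as a numerical array \\texttt{f[i, j, k]} with $A_i A_j = \\sum_k f_{ijk} A_k$ and that the algebra is semisimple, as guaranteed by the $C^*$-structure; the array name and the tolerances below are our own choices rather than those of the appendix:\n\\begin{verbatim}\nimport numpy as np\n\ndef central_idempotents(f, tol=1e-8, seed=0):\n    n = f.shape[0]\n    # 1) center: all x with sum_j x_j (f[j,i,k] - f[i,j,k]) = 0 for all i, k\n    C = (f.transpose(1, 0, 2) - f).transpose(0, 2, 1).reshape(n * n, n)\n    _, s, Vh = np.linalg.svd(C)\n    center = Vh[np.sum(s > tol):]          # rows span the null space = center\n    # 2) unit element u: sum_j u_j f[j, i, k] = delta_{ik} for all i\n    B = f.transpose(1, 2, 0).reshape(n * n, n)\n    u = np.linalg.lstsq(B, np.eye(n).reshape(-1), rcond=None)[0]\n    # 3) left multiplication by a generic central element z\n    z = center.T @ np.random.default_rng(seed).normal(size=len(center))\n    Lz = np.einsum('j,jik->ki', z, f)      # (Lz)_{ki} = sum_j z_j f[j, i, k]\n    # 4) spectral projectors of Lz, applied to the unit, give the idempotents\n    lams = []\n    for v in np.linalg.eigvals(Lz):\n        if not any(abs(v - l) < 1e-6 for l in lams):\n            lams.append(v)\n    idems = []\n    for lam in lams:\n        P = np.eye(n, dtype=complex)\n        for mu in lams:\n            if abs(mu - lam) > 1e-6:\n                P = P @ (Lz - mu * np.eye(n)) / (lam - mu)\n        idems.append(P @ u)               # coefficient vector of one idempotent\n    return idems\n\\end{verbatim}\nThe underlying observation is that a generic central element acts by left multiplication as a distinct scalar on each simple block, so its spectral projectors applied to the unit element return exactly the central idempotents; the further splitting into simple idempotents proceeds blockwise in the same spirit.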
As expected, we obtain both one- and higher-dimensional central idempotents.\n\nIn Appendix \\ref{app:snresults} we list the central idempotents and their properties for the Fibonacci, Ising and $\\text{Rep}(S_3)$ string-nets. For each of these, we also compute the topological spin using the standard procedure described in subsection \\ref{sec:topspin}. For string-nets, these spins can in principle be computed analytically from the central idempotents. Furthermore, we compute the fusion table describing the fusion of two anyons. To this end, we numerically performed the procedure explained in subsection \\ref{sec:fusion}. We indeed recover the known fusion rules for the anyonic excitations of these theories. Note that there are no fusion multiplicities larger than one in the models we consider. Finally, we explicitly work out the braid tensor $\\mathcal{R}$ using the procedure of subsection \\ref{sec:braiding} for two anyons in the Fibonacci string-net model in Appendix~\\ref{app:fibonacci}.\n\n\n\n\n\\section{Discussion and outlook}\\label{sec:conclusions}\n\nFor all the examples considered here the PEPS anyon construction is equivalent to calculating the Drinfeld center \\cite{drinfeld} of the input theory, i.e. the algebraic structure determined by the single block MPOs $O_a$, which was either a finite group (which can be generalized to a Hopf algebra) or a unitary fusion category. This center construction leads to a modular tensor category, which describes a consistent anyon theory \\cite{DrinfeldCenter}. When the input theory is already a modular tensor category by itself, the center construction gives a new modular tensor category, which is isomorphic to two copies of the original anyon theory, one of which is time-reversed \\cite{DrinfeldCenter}. It is then clear that the new anyon theory cannot correspond to a chiral phase. This is actually true in general, i.e. the set of modular tensor categories obtained via the center construction cannot describe chiral phases. In \\cite{LagrangianAlgebra} the set of center modular tensor categories was identified with the set of modular tensor categories containing a so-called Lagrangian subalgebra. A physical connection between the existence of a Lagrangian subgroup and the non-chirality of the quantum phase was given in \\cite{LevinEdges} for the case of Abelian statistics.\n\nWe have found that PMPOs of the form (\\ref{mpo}) that can be used to build MPO-injective PEPS give rise to many concepts familiar from the theory of unitary fusion categories: a finite number of simple objects and associated fusion relations, the pentagon equation, a generalized notion of duality and the Frobenius-Schur indicator $\\varkappa_a$, pivotal structure and unitarity. However, there is one important property of unitary fusion categories that does not seem to immediately come out of MPO-injectivity, namely the existence of a unique, simple unit element. In other words, we have not found a property of MPO-injectivity that requires the projector MPO to contain a single block MPO $O_1$ satisfying $O_1O_i = O_iO_1 = O_i$ for all $i$. However, if such an identity block is not present, then we can associate a multi-fusion category to the PMPO. It is known that multi-fusion categories can also be used to construct string-net models \\cite{EnrichingLevinWen}. 
\n\nSo at this point it seems that the only possibilities for obtaining MPO-injective PEPS that describe physics beyond discrete gauge theories and string-nets, without having to extend the MPO-injectivity formalism of \\cite{MPOpaper}, are given by:\n\\begin{itemize}\n\\item[(1)] using PMPOs that have no canonical form;\n\\item[(2)] defining different left-handed MPO tensors to construct $\\tilde{P}_{C_v}$;\n\\item[(3)] not imposing the zipper condition (\\ref{zippercondition}).\n\\end{itemize}\nTrying option (2) will most likely lead to a violation of unitarity, in which case the algebra $A_{abcd,\\mu\\nu}$ can no longer be proven to be a $C^*$-algebra. This will lead to non-Hermitian central idempotents, which to some extent obscures their interpretation as topological sectors. Options (1) and (3) are at the moment much less clear to us, so we will not speculate on their implications here. It would be very interesting to better understand options (1)--(3) and to see whether there is any relation between MPO-injective PEPS and the recently constructed tensor network states for chiral phases \\cite{chiral1,chiral2}.\n\nTo conclude, we have not only established a connection between MPO-injective PEPS and unitary fusion categories as mentioned above, but also developed a formalism to obtain the topological sectors of the corresponding quantum phase. Similarly to previous results \\cite{Qalgebra,haah}, we can relate topological sectors to the central idempotents of an algebra, which in our case is a $C^*$-algebra constructed from the MPO that determines the injectivity subspace of the ground state tensors. The formalism is constructive and gives the correct anyon types for all the examples we worked out. It furthermore allows us to write down explicit PEPS wave functions that contain an arbitrary number of anyons. This gives an interpretation of topological sectors in terms of entanglement. From the PEPS wave functions containing anyons we can extract universal properties such as fusion relations and topological spins in a very natural way. For certain string-net models we also studied the effect of braiding on the PEPS. \n\nSeveral open questions concerning topological order in tensor networks remain. As mentioned above, it is not clear if chiral topological phases fit into the MPO-injectivity formalism, or what the correct formalism, if any, is to describe gapped chiral theories with tensor networks. For non-chiral topological phases the construction presented here defines an equivalence relation for PMPOs, i.e. two PMPOs are said to be equivalent if the resulting central idempotents have the same topological properties. At this point the (Morita) equivalence relation between PMPOs is very poorly understood. It is also known that there is a substantial interplay between the topological order and global symmetries of a quantum system. Some first progress in capturing universal properties of these so-called symmetry-enriched topological phases with tensor networks was made in \\cite{JiangRan}. A natural direction for future research is of course the extension of the presented formalism to fermionic PEPS \\cite{fPEPS}. We expect that the concept of MPO algebras should also be connected to topological sectors in fermionic tensor networks. It is conceivable that the concepts introduced here might also be relevant for other types of tensor networks, e.g. 
the Multi-scale Entanglement Renormalization Ansatz (MERA) descriptions of topological phases \\cite{MERA1,MERA2}. Besides these theoretical questions, many new applications of MPO-injectivity also come within reach, especially the study of topological phase transitions in non-Abelian anyon theories. We hope to make progress on these matters in future work.\n\\\\ \\\\\n\\emph{Acknowledgements --} We acknowledge helpful discussions with Ignacio Cirac, Tobias Osborne, David Perez-Garcia and Norbert Schuch. We would also especially like to thank David Aasen for many inspiring conversations and Zhenghan Wang for pointing out to us the possibility of using multi-fusion categories in string-nets. This work was supported by EU grant SIQS and ERC grant QUERG, the Odysseus grant from the Research Foundation Flanders (FWO) and the Austrian FWF SFB grants FoQuS and ViCoM. M.M. and J.H. further acknowledge the support from the Research Foundation Flanders (FWO).\n