diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzidjj" "b/data_all_eng_slimpj/shuffled/split2/finalzzidjj" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzidjj" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec_intro}\n\\vspace{-.2cm}\n\nSince we first developed language, humans have always told stories. Fashioning a good story is an act of creativity and developing algorithms to replicate this has been a long running challenge. Adding pictures as input \ncan provide information for guiding story construction by offering visual illustrations of the storyline.\nIn the related task of image captioning, most methods try to generate descriptions only for individual images or for short videos depicting a single activity. Very recently, datasets have been introduced that extend this task to longer temporal sequences such as movies or photo albums~\\cite{rohrbach2016movie,pan2016jointly,lu2013story,huang2016visual}. \n\nThe type of data we consider in this paper provides input illustrations for story generation in the form of photo albums, sampled over a few minutes to a few days of time. For this type of data, generating textual descriptions involves telling a temporally consistent story about the depicted visual information, where stories must be coherent and take into account the temporal context of the images. Applications of this include constructing visual and textual summaries of albums, or even enabling search through personal photo collections to find photos of life events. \n\nPrevious visual storytelling works can be classified into two types, vision-based and language-based, where image or language stories are constructed respectively.\nAmong the vision-based approaches, unsupervised learning is commonly applied:\ne.g.,~\\cite{sigurdsson2016learning} learns the latent temporal dynamics given a large amount of albums, and~\\cite{kim2014reconstructing} formulate the photo selection as a sparse time-varying directed graph. \nHowever, these visual summaries tend to be difficult to evaluate and selected photos may not agree with human selections. For language-based approaches, a sequence of natural language sentences are generated to describe a set of photos.\nTo drive this work ~\\cite{park2015expressing} collected a dataset mined from Blog Posts. However, this kind of data often contains contextual information or loosely related language. A more direct dataset was recently released~\\cite{huang2016visual}, where multi-sentence stories are collected describing photo albums via Amazon Mechanical Turk. \n\n\nIn this paper, we make use of the Visual Storytelling Dataset~\\cite{huang2016visual}.\nWhile the authors provide a seq2seq baseline, they only deal with the task of generating stories given 5-representative (summary) photos hand-selected by people from an album.\nInstead, we focus on the more challenging and realistic problem of end-to-end generation of stories from entire albums. This requires us to either generate a story from all of the album's photos or to learn selection mechanisms to identify representative photos and then generate stories from those summary photos. We evaluate each type of approach. \n\nUltimately, we propose a model of hierarchically-attentive recurrent neural nets, consisting of three RNN stages. 
The first RNN encodes the whole album context and each photo's content, the second RNN provides weights for photo selection, and the third RNN takes the weighted representation and decodes it into the resulting sentences. Note that during training, we are only given the full input albums and the output stories, and our model needs to learn the summary photo selections latently.\n\n\nWe show that our model achieves better performance than the baselines under both automatic metrics and human evaluations. As a by-product, we show that the latent photo selection also reasonably mimics human selections. Additionally, we propose an album retrieval task that can reliably pick the correct photo album given a sequence of sentences, and find that our model also outperforms the baselines on this task.\n\n\n\n\n\n\n\n\n\\section{Related work}\n\\vspace{-.2cm}\n\nRecent years have witnessed an explosion of interest in vision and language tasks, reviewed below. \n\n\n\\noindent{\\bf Visual Captioning:}\nMost recent approaches to image captioning~\\cite{vinyals2015show, xu2015show} have used CNN-LSTM structures to generate descriptions. For captioning video or movie content~\\cite{venugopalan2015sequence, pan2016jointly}, sequence-to-sequence models are widely applied, where the first sequence encodes video frames and the second sequence decodes the description. Attention techniques~\\cite{xu2015show, yu2016video, yao2015describing} are commonly incorporated for both tasks to localize salient temporal or spatial information.\n\n\\noindent{\\bf Video Summarization:}\nSimilar to document summarization~\\cite{rush2015neural, cheng2016neural, mei2016selective, woodsend2010automatic}, which extracts key sentences and words, video summarization selects key frames or shots.\nWhile some approaches use unsupervised learning~\\cite{lu2013story, khosla2013large} or intuitive criteria to pick salient frames, recent models learn from human-created summaries~\\cite{gygli2015video, zhang2016video, zhang2016summary, gong2014diverse}. \nRecently, to better exploit semantics,~\\cite{choi2017textually} proposed textually customized summaries. \n\n\\noindent{\\bf Visual Storytelling:}\nVisual storytelling tries to tell a coherent visual or textual story about an image set.
\nPrevious works include storyline graph modeling~\\cite{kim2014reconstructing}, unsupervised mining~\\cite{sigurdsson2016learning}, blog-photo alignment~\\cite{kim2015joint}, \nand language re-telling~\\cite{huang2016visual, park2015expressing}.\nWhile~\\cite{park2015expressing} collects data by mining blog posts,~\\cite{huang2016visual} collects stories using Mechanical Turk, providing more directly relevant stories.\n\n\n\n\\section{Model}\n\\vspace{-.2cm}\n\nOur model (Fig.~\\ref{fig:model}) is composed of three modules: Album Encoder, Photo Selector, and Story Generator, jointly learned during training.\n\n\\subsection{Album Encoder}\\label{sec:album_encoder}\n\\vspace{-.1cm}\n\nGiven an album $A=\\{a_1, a_2, ..., a_n\\}$, composed of a set of photos, we use a bi-directional RNN to encode the local album context for each photo.\nWe first extract the $k$-dimensional visual representation $f_i\\in R^k$ ($k=2048$) for each photo using ResNet101~\\cite{he2016deep}; then\na bi-directional RNN is applied to encode the full album.\nFollowing~\\cite{huang2016visual}, we choose a Gated Recurrent Unit (GRU) as the RNN unit to encode the photo sequence.\nThe sequence output at each time step encodes the local album context for each photo (from both directions).\nFusing this with the visual representation and applying a ReLU, our final photo representation is (top module in Fig.~\\ref{fig:model}):\n\\vspace{-.2cm}\n\\begin{equation}\n\\begin{split}\\nonumber\nf_i &= \\mbox{ResNet}(a_i) \\\\ \n\\vec{h}_i &= \\vec{\\mbox{GRU}}_{album}(f_i, \\vec{h}_{i-1}) \\\\\n\\cev{h}_i &= \\cev{\\mbox{GRU}}_{album}(f_i, \\cev{h}_{i+1}) \\\\\nv_i &= \\mbox{ReLU}([\\vec{h}_i, \\cev{h}_i ] + f_i).\n\\end{split}\n\\end{equation}\n\\vspace{-.3cm}\n\n\\vspace{-.4cm}\n\\subsection{Photo Selector}\\label{sec:photo_selector}\n\\vspace{-.1cm}\nThe Photo Selector (illustrated in the middle yellow part of Fig.~\\ref{fig:model}) identifies representative photos to summarize an album's content. As discussed, we do not assume that we are given the ground-truth album summaries during training, instead regarding selection as a latent variable in the end-to-end learning.\nInspired by Pointer Networks~\\cite{vinyals2015pointer}, we use another GRU-RNN to perform this task.\\footnote{While the pointer network requires grounding labels, we regard the labels as latent variables.}\n\n\nGiven the album representation $V\\in R^{n\\times k}$, the photo selector outputs probabilities $p_t\\in R^n$ (the likelihood of each photo being selected as the $t$-th summary image) using soft attention. \n\\vspace{-.1cm}\n\\begin{equation}\\nonumber\n\\begin{split}\n&\\bar{h}_t = \\mbox{GRU}_{select}(p_{t-1}, \\bar{h}_{t-1}), \\\\\n&p(y_{a_i}(t)=1) = \\sigma(\\mbox{MLP}([\\bar{h}_t, v_i])).\n\\end{split}\n\\end{equation}\nAt each summarization step $t$, the GRU takes the previous $p_{t-1}$ and the previous hidden state as input, and outputs the next hidden state $\\bar{h}_t$.\n$\\bar{h}_t$ is fused with each photo representation $v_i$ to compute the $i^{th}$ photo's attention $p_t^i = p(y_{a_i}(t)=1)$.\nAt test time, we simply pick the photo with the highest probability to be the summary photo at step $t$.\n\n\\subsection{Story Generator}\n\\vspace{-.1cm}\n\n\nTo generate an album's story, given the album representation matrix $V$ and photo summary probabilities $p_t$ from the first two modules, we compute the visual summary representation $g_t \\in R^k$ (for the $t$-th summary step).\nThis is a weighted sum of the album representations, i.e., $g_t = p_t^T V$.
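For concreteness, the selector and summary steps admit a short implementation sketch; the PyTorch-style code below is our own illustration, where module names, tensor shapes and hyper-parameters are assumptions rather than our released implementation:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass PhotoSelector(nn.Module):\n    # Sketch of p_t = sigma(MLP([h_t, v_i])) and g_t = p_t^T V.\n    # V has shape (n, k), one row per photo; n is fixed for the sketch.\n    def __init__(self, n, k, hidden=512):\n        super().__init__()\n        self.gru = nn.GRUCell(n, hidden)   # consumes the previous p_{t-1}\n        self.mlp = nn.Linear(hidden + k, 1)\n        self.n, self.hidden = n, hidden\n\n    def forward(self, V, steps=5):\n        p = torch.full((self.n,), 1.0 \/ self.n)   # uniform p_0\n        h = torch.zeros(self.hidden)\n        summaries = []\n        for _ in range(steps):\n            h = self.gru(p.unsqueeze(0), h.unsqueeze(0)).squeeze(0)\n            scores = self.mlp(torch.cat([h.expand(self.n, -1), V], 1))\n            p = torch.sigmoid(scores.squeeze(1))  # attention over the n photos\n            summaries.append(p @ V)               # g_t = p_t^T V, shape (k,)\n        return torch.stack(summaries)             # (5, k): one g_t per sentence\n\nsel = PhotoSelector(n=12, k=64)\nprint(sel(torch.randn(12, 64)).shape)             # torch.Size([5, 64])\n\\end{verbatim}\n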
Each of these 5 $g_t$ embeddings (for $t = 1$ to $5$) is then used to decode one of the 5 story sentences, respectively, as shown in the blue part of Fig.~\\ref{fig:model}.\n\nGiven a story $S = \\{s_t\\}$, where $s_t$ is the $t$-th summary sentence, and following~\\newcite{donahue2015long}, the $l$-th word probability of the $t$-th sentence is:\n\\vspace{-.1cm}\n\\begin{equation}\n\\begin{split}\nw_{t,l-1} &= W_e s_{t,l-1}, \\\\\n\\tilde{h}_{t, l} &= \\mbox{GRU}_{story}(w_{t, l-1}, g_t, \\tilde{h}_{t, l-1}),\\\\\np(s_{t,l}) &= \\mbox{softmax}(\\mbox{MLP}(\\tilde{h}_{t, l})),\n\\end{split}\n\\end{equation}\n\\vspace{-.1cm}\nwhere $W_e$ is the word embedding.\nThe GRU takes the joint input of the visual summarization $g_t$, the previous word embedding $w_{t,l-1}$, and the previous hidden state, then outputs the next hidden state.\nThe generation loss is then the sum of the negative log likelihoods of the correct words: $L_{gen}(S) = -\\sum_{t=1}^{T}\\sum_{l=1}^{L_t} \\log p(s_{t,l})$, where $L_t$ is the length of the $t$-th sentence. \n\\vspace{.1cm}\n\n\nTo further exploit the notion of temporal coherence in a story, we add an order-preserving constraint on the sequence of sentences within a story (related to the story-sorting idea in~\\newcite{agrawal2016sort}).\nFor each story $S$ we randomly shuffle its 5 sentences to generate negative story instances $S'$.\nWe then apply a max-margin ranking loss to encourage correctly-ordered stories to score higher than shuffled ones: $L_{rank}(S, S') = \\max(0, m-\\log p(S)+\\log p(S'))$.\nThe final loss is then a combination of the generation and ranking losses: \n\\begin{equation}\nL = L_{gen}(S) + \\lambda L_{rank}(S, S').\\label{eqn:loss}\n\\end{equation}\n\n\n\\section{Experiments}\n\nWe use the Visual Storytelling Dataset~\\cite{huang2016visual}, consisting of 10,000 albums with 200,000 photos.\nEach album contains 10--50 photos taken within a 48-hour span, with two annotations: 1) 2 album summarizations, each with 5 selected representative photos, and 2) 5 stories describing the selected photos. \n\n\\subsection{Story Generation}\n\nThis task is to generate a 5-sentence story describing an album.\nWe compare our model with two sequence-to-sequence baselines: 1) an encoder-decoder model (enc-dec), where the sequence of album photos is encoded and the last hidden state is fed into the decoder for story generation, and\n2) an encoder-attention-decoder model~\\cite{xu2015show} (enc-attn-dec) with weights computed using a soft-attention mechanism.\nAt each decoding time step, a weighted sum of hidden states from the encoder is decoded.
For fair comparison, we use the same album representation (Sec.~\\ref{sec:album_encoder}) for the baselines.\n\n\\begin{table}[t]\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{l|c c c c}\n\\multicolumn{5}{c}{beam size=3}\\\\\n\\hline\n& Bleu3 & Rouge & Meteor & CIDEr \\\\\n\\hline\nenc-dec & 19.58 & 29.23 & 33.02 & 4.65 \\\\\nenc-attn-dec & 19.73 & 28.94 & 32.98 & 4.96 \\\\\nh-attn & 20.53 & 29.82 & 33.81 & 6.84 \\\\\nh-attn-rank & \\bf{20.78} & \\bf{29.82} & \\bf{33.94} & \\bf{7.38} \\\\\n\\hline\nh-(gd)attn-rank & 21.02 & 29.53 & 34.12 & 7.51\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{-.4cm}\n\\caption{Story generation evaluation.}\n\\vspace{-.1cm}\n\\label{table:generation}\n\\end{table}\n\n\n\n\\begin{table}[t]\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{l | l}\n\\hline\nenc-dec (29.50\\%) & h-attn-rank (70.50\\%)\\\\\n\\hline\nenc-attn-dec (30.75\\%) & h-attn-rank (69.25\\%) \\\\\n\\hline\n\\hline\nh-attn-rank (30.50\\%) & gd-truth (69.50\\%) \\\\ \n\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{-.4cm}\n\\caption{Human evaluation showing how often people prefer one model over the other.}\n\\vspace{-.2cm}\n\\label{table:human}\n\\end{table}\n\n\\begin{table}[t]\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{l | c c }\n\\hline\n& precision & recall \\\\\n\\hline\nDPP & 43.75\\% & 27.41\\% \\\\\n\\hline\nenc-attn-dec & 38.53\\% & 24.25\\% \\\\\n\\hline\nh-attn \t\t & 42.85\\% & 27.10\\% \\\\\n\\hline\nh-attn-rank & \\bf{45.51}\\% & \\bf{28.77}\\% \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{-.4cm}\n\\caption{Album summarization evaluation.}\n\\vspace{-.1cm}\n\\label{table:summarization}\n\\end{table}\n\n\\begin{table}[t]\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{l | c c c c}\n\\hline\n& R@1 & R@5 & R@10 & MedR \\\\\n\\hline\nenc-dec & 10.70\\% & 29.30\\% & 41.40\\% & 14.5 \\\\\n\\hline\nenc-attn-dec & 11.60\\% & 33.00\\% & 45.50\\% & 11.0 \\\\\n\\hline \nh-attn & 18.30\\% & \\bf{44.50}\\% & \\bf{57.60}\\% & \\bf{6.0} \\\\\n\\hline\nh-attn-rank & \\bf{18.40}\\% & 43.30\\% & 55.50\\% & 7.0 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{-.4cm}\n\\caption{1000 album retrieval evaluation.}\n\\vspace{-.1cm}\n\\label{table:retrieval}\n\\end{table}\n\n\n\nWe test two variants of our model trained with and without ranking regularization by controlling $\\lambda$ in our loss function, denoted as h-attn (without ranking), and h-attn-rank (with ranking).\nEvaluations of each model are shown in Table~\\ref{table:generation}.\nThe h-attn outperforms both baselines, and h-attn-rank achieves the best performance for all metrics.\nNote, we use beam-search with beam size=3 during generation for a reasonable performance-speed trade-off (we observe similar improvement trends with beam size = 1).\\footnote{We also compute the $p$-value of Meteor on 100K samples via the bootstrap test~\\cite{efron1994introduction}, as Meteor has better agreement with human judgments than Bleu\/Rouge~\\cite{huang2016visual}.\nOur h-attn-rank model has strong statistical significance ($p=0.01$) over the enc-dec and enc-attn-dec models (and is similar to the h-attn model).}\nTo test performance under optimal image selection, we use one of the two ground-truth human-selected 5-photo-sets as an oracle to hard-code the photo selection, denoted as h-(gd)attn-rank.\nThis achieves only a slightly higher Meteor compared to our end-to-end model.\n\nAdditionally, we also run human evaluations in a forced-choice task where people choose between stories generated by different 
methods.\nFor this evaluation, we select 400 albums, each evaluated by 3 Turkers. Results are shown in Table~\\ref{table:human}. \nWe find a significant preference for our model over both baselines. As a simple Turing test, we also compare our results with human-written stories (last row of Table~\\ref{table:human}); the remaining gap indicates room for improvement.\n\n\n\\subsection{Album Summarization}\nWe evaluate the precision and recall of our generated summaries (output by the photo selector) compared to human selections (the combined set of both human-selected 5-photo summaries). \nFor comparison, we evaluate enc-attn-dec on the same task by aggregating predicted attention and selecting the 5 photos with highest accumulated attention.\nAdditionally, we also run DPP-based video summarization~\\cite{kulesza2012determinantal} using the same album features.\nOur models perform better than the baselines, as shown in Table~\\ref{table:summarization} (though DPP also achieves strong results, indicating that there is still room to improve the pointer network).\n\n\\vspace{-0.2cm}\n\\subsection{Output Example Analysis}\nFig.~\\ref{fig:generation_a} and Fig.~\\ref{fig:generation_b} show several output examples of joint album summarization and story generation.\nWe compare our full model h-attn-rank with the baseline enc-attn-dec, as both models are able to do the album summarization and story generation tasks jointly.\nIn Fig.~\\ref{fig:generation_a} and Fig.~\\ref{fig:generation_b}, we use blue dashed boxes and red boxes to indicate the album summaries selected by the two models, respectively.\nAs a reference, we also show ground-truth album summaries, obtained by randomly selecting 1 of the 2 human album summaries; these are highlighted with green boxes.\nBelow each album we show the corresponding generated stories.\n\n\\begin{figure*}[t!]\n\\centering\n\\includegraphics[width=0.98\\textwidth]{figures\/all_example_with_gd.jpg}\n\\caption{Examples of album summarization and storytelling by enc-attn-dec (blue), h-attn-rank (red), and ground-truth (green). We randomly select 1 out of 2 human album summaries as ground-truth here.}\n\\label{fig:generation_a}\n\\end{figure*}\n\n\\begin{figure*}[t!]\n\\centering\n\\includegraphics[width=0.98\\textwidth]{figures\/all_example_with_gd2.jpg}\n\\caption{More examples of album summarization and storytelling by enc-attn-dec (blue), h-attn-rank (red), and ground-truth (green). We randomly select 1 out of 2 human album summaries as ground-truth here.}\n\\label{fig:generation_b}\n\\end{figure*}\n\n\n\\subsection{Album Retrieval}\nGiven a human-written story, we introduce a task to retrieve the album described by that story. \nWe randomly select 1000 albums and one ground-truth story from each for evaluation. \nUsing the generation loss, we compute the likelihood of each album $A_m$ given the query story $S$ and retrieve the album with the highest generation likelihood, $A = \\mbox{argmax}_{A_m} p(S|A_m)$.\nWe use Recall@k and Median Rank for evaluation.\nAs shown in Table~\\ref{table:retrieval}, we find that our models outperform the baselines, but that the ranking term in Eqn.~\\ref{eqn:loss} does not improve performance significantly.\n\n\\section{Conclusion}\n\\vspace{-0.3cm}\nOur proposed hierarchically-attentive RNN-based models for end-to-end visual storytelling can jointly and effectively summarize and generate relevant stories from full input photo albums.
Automatic and human evaluations show that our method outperforms strong sequence-to-sequence baselines on selection, generation, and retrieval tasks.\n\n\n\\section*{Acknowledgments}\n\\vspace{-0.4cm}\nWe thank the anonymous reviewers for their helpful comments. This research is supported by NSF Awards \\#1633295, 1444234, 1445409, 1562098.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\n The \\emph{Tur\\'an number} of a graph $H$, denoted $\\ex(n, H)$, is the maximum possible number of edges in an $n$-vertex graph that does not contain a copy of $H$. In this paper we study a rainbow variant of Tur\\'an numbers, introduced by Keevash, Mubayi, Sudakov and Verstra\\\"ete \\cite{keevash2007rainbow}. A \\emph{proper edge-colouring} of a graph is an assignment of colours to its edges so that edges that share a vertex have distinct colours. A \\emph{rainbow} subgraph of an edge-coloured graph is a subgraph whose edges have distinct colours.\n The \\emph{rainbow Tur\\'an number} of a graph $H$, denoted $\\ex^*(n, H)$, is the maximum possible number of edges in a properly edge-coloured graph on $n$ vertices with no rainbow copy of $H$. One can define $\\ex(n, \\mathcal{H})$ and $\\ex^*(n, \\mathcal{H})$ analogously for a family of graphs $\\mathcal{H}$.\n \n It was shown in \\cite{keevash2007rainbow} that\n $\\ex^*(n, H) = (1 + o(1))\\ex(n, H)$ for non-bipartite $H$. Perhaps unsurprisingly, little is known about rainbow Tur\\'an numbers of bipartite graphs. The authors of \\cite{keevash2007rainbow} raised two problems concerning rainbow Tur\\'an numbers of even cycles, one concerning an even cycle of fixed length $2k$ and the other concerning the family $\\mathcal{C}$ of all cycles. \n For all $k\\geq 2$, they showed that $\\ex^*(n, C_{2k}) = \\Omega(n^{1 + 1\/k})$ and conjectured that\n $\\ex^*(n, C_{2k}) = \\Theta(n^{1 + 1\/k})$. The authors of \\cite{keevash2007rainbow} verified the conjecture\n for $k \\in \\{2, 3\\}$. Following further progress on the conjecture by Das, Lee and Sudakov \\cite{das2013rainbow}, Janzer \\cite{janzer2020rainbow} recently resolved the conjecture. \n \n Regarding the rainbow Tur\\'an number of the family $\\mathcal{C}$ of all cycles,\n Keevash, Mubayi, Sudakov and Verstra\\\"ete \\cite{keevash2007rainbow} showed that $\\ex^*(n, \\mathcal{C}) = \\Omega(n \\log n)$, by considering a naturally defined proper edge-colouring of the hypercube $Q_k$, where $k=\\lfloor \\log n \\rfloor$. They also showed that $\\ex^*(n, \\mathcal{C}) = O(n^{4\/3})$ and\n asked if $\\ex^*(n, \\mathcal{C}) = O(n^{1+o(1)})$ and furthermore, if $\\ex^*(n, \\mathcal{C}) = O(n \\log n)$.\n Das, Lee and Sudakov \\cite{das2013rainbow} answered the first question affirmatively, by showing that $\\ex^*(n, \\mathcal{C}) \\le ne^{(\\log n)^{\\frac{1}{2}+o(1)}}$. Recently, Janzer \\cite{jiang2021rainbow} improved this bound by establishing that $\\ex^*(n, \\mathcal{C}) = O\\big(n (\\log n)^4\\big)$, which is tight up to a polylogarithmic factor. Very recently, Jiang, Methuku and Yepremyan \\cite{jiang2021rainbow} proved the following generalisation of Das, Lee and Sudakov \\cite{das2013rainbow} on $\\ex^*(n,\\mathcal{C})$.\n \n\n \\begin{theorem}[Jiang, Methuku, Yepremyan~\\cite{jiang2021rainbow}]\n \\label{thm:JMY}\n For every integer $m \\geq 2$ there exists a constant $c>0$ such that for every integer $n\\geq m$ the following holds. 
If $G$ is a properly edge-coloured graph on $n$ vertices with at least $ne^{c\\sqrt{\\log n}}$ edges, then $G$ contains a rainbow subdivision of $K_m$, where each edge is subdivided at most $1300 \\log^2 n$ times.\n \\end{theorem}\n The method used in \\cite{jiang2021rainbow} utilises robust expanders in the coloured setting together with a density increment argument, inspired in part by the method introduced by Sudakov and Tomon~\\cite{sudakov2020extremal}.\n \n In this paper, we lower the $e^{O(\\sqrt{\\log n})}$ error term in Theorem~\\ref{thm:JMY} to a polylogarithmic term, which in conjunction with the above-mentioned $\\Omega(n\\log n)$ lower bound on $\\ex^*(n,\\mathcal{C})$ determines the rainbow Tur\\'an number of the family of $K_m$-subdivisions up to a polylogarithmic factor.\n \n \n \\begin{theorem}\\label{thm:main}\n Fix an integer $m \\geq 2$ and let $n$ be sufficiently large.\n Suppose that $G$ is a properly edge-coloured graph on $n$ vertices with at least $n (\\log n)^{60}$ edges. Then $G$ contains a rainbow subdivision of $K_m$, where each edge is subdivided at most $(\\log n)^{6}$ times.\n \\end{theorem}\n\n Our proof exploits the connection between mixing time of random walks and edge expansion.\n This connection is used in conjunction with counting lemmas developed by Janzer in \\cite{janzer2020rainbow} regarding homomorphisms of cycles in graphs.\n We also prove a strengthening of \\Cref{thm:main}, regarding `rooted' rainbow subdivisions of $K_m$ in expanders (see \\Cref{thm:main-rooted}). For this stronger version, in addition to the ingredients used for proving Theorem~\\ref{thm:main}, we use the framework of \\cite{jiang2021rainbow} and an additional idea used by Letzter in \\cite{letzter2021tight} \n (see Lemma~\\ref{lem:reachable-robust-new}).\n \n The framework that we develop in this paper is quite general. In addition to using it to prove Theorem~\\ref{thm:main}, we also apply\n it to obtain a generalisation of the following result of Janzer \\cite{janzer2020rainbow} concerning the Tur\\'an number of blow-ups of cycles. For an integer $r\\geq 1$ and a graph $F$, the \\emph{$r$-blow-up} of $F$, denoted $F[r]$, is the graph obtained by replacing each vertex of $F$ with an independent set of size $r$ and each edge of $F$ by a $K_{r,r}$. Let $\\mathcal{C}_0[r] \\coloneqq \\{C_{2k}[r] : k \\ge 2\\}$. Answering a question of Jiang and Newman~\\cite{jiang2017small}, Janzer~\\cite{janzer2020rainbow} proved the following.\n \n \\begin{theorem}[Janzer~\\cite{janzer2020rainbow}] \\label{thm:oliver-blowup}\n Let $r \\ge 1$ be a fixed integer. Then $\\ex(n, \\mathcal{C}_0[r]) = O\\big(n^{2-1\/r} (\\log n)^{7\/r}\\big)$.\n \\end{theorem}\n This bound is tight up to a polylogarithmic factor as random graphs show that $\\ex(n, \\mathcal{C}_0[r])=\\Omega(n^{2-1\/r})$. We generalise Theorem~\\ref{thm:oliver-blowup} by proving the following. \n \n \\begin{theorem} \\label{thm:main-blowups}\n Let $r, m \\ge 1$ be fixed integers and let $n$ be sufficiently large. Suppose that $G$ is a graph on $n$ vertices with at least $n^{2-\\frac{1}{r}} (\\log n)^{\\frac{60}{r}}$ edges. 
Then $G$ contains an $r$-blow-up of a subdivision of $K_m$, where each edge is subdivided at most $(\\log n)^{6}$ times.\n \\end{theorem} \n\n This bound is also tight up to a polylogarithmic factor as shown by random graphs (see Proposition~\\ref{prop:lowerboundblowup}).\n \n The proof of \\Cref{thm:main-blowups} follows the proof of our first main result, \\Cref{thm:main}, with the additional use of a `balanced supersaturation' result, due to Morris and Saxton \\cite{morris2016number}. Such a result gives us a collection of $K_{r,r}$'s in a sufficiently dense graph such that no copy of $K_{1, r}$ is contained in too many copies of $K_{r,r}$ (the result in \\cite{morris2016number} is more general but this condition suffices for our purposes).\n This sort of result is usually used in conjunction with the container method in order to upper bound the number of $H$-free graphs. So it is quite interesting that we use this `balanced supersaturation' result for $K_{r,r}$'s in a new setting.\n \n The rest of the paper is organised as follows. In \\Cref{sec:overview},\n we give a short overview of our proofs, focusing on the proof of \\Cref{thm:main} and a strengthening of it (\\Cref{thm:main-rooted}).\n In \\Cref{sec:prelims}, we mention various preliminary results, regarding the existence of expanders which are close to being regular and properties of expanders. In \\Cref{sec:non-rainbow-walks}, we state three lemmas due to Janzer \\cite{janzer2020rainbow} and some consequences of these lemmas. \\Cref{sec:random-walks} contains the main new ideas of the paper, exploiting a connection between the mixing time of a random walk and expansion properties in a graph. In \\Cref{sec:main-proof},\n we prove \\Cref{thm:main} and a strengthening of it regarding rooted subdivisions in almost regular expanders. \n In \\Cref{sec:blowups}, we prove \\Cref{thm:main-blowups} about blow-ups of subdivisions. We complete the paper with concluding remarks in \\Cref{sec:conc}.\n \n Throughout the paper, for convenience, we drop floor and ceiling signs for large numbers, and logarithms are in base $2$.\n\n\\section{Overview of the proofs} \\label{sec:overview}\n Our main idea is to use the connection between the mixing time of random walks, the notion of `conductance' (see \\Cref{def:conductance}) and our notion of expansion. \n It is a well-known and very useful fact that `large' conductance implies `small' mixing time (see, e.g., Lov\\'asz \\cite{lovasz1993random}). We show that expanders which are close to being regular have large conductance, and thus conclude that long enough paths are close to being uniformly distributed in such expanders. \n We also use two counting lemmas of Janzer from~\\cite{janzer2020rainbow}. Below we describe these lemmas and the main ideas in more detail.\n \n\n \n In a properly edge-coloured graph, say that a closed walk is \\emph{degenerate} if it is either not rainbow or visits a vertex more than once. \n The first lemma from \\cite{janzer2020rainbow} implies that in a properly edge-coloured graph which is close to being regular, the number of degenerate closed $2k$-walks is significantly smaller than the number of closed $2k$-walks, provided that $k$ is sufficiently large.\n \n Given two vertices $x$ and $y$, a closed $2k$-walk $W$ is said to \\emph{be hosted} by $x$ and $y$ if it starts at $x$ and reaches $y$ after $k$ steps. 
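In matrix terms these counts are immediate to compute: $\\hom_{x,y}(P_k) = (A^k)_{x,y}$ for the adjacency matrix $A$, so the number of closed $2k$-walks hosted by $x$ and $y$ is $((A^k)_{x,y})^2$ (this identity is recorded as \\eqref{eqn:hom-cycle-paths} in \\Cref{sec:non-rainbow-walks}). A minimal NumPy sketch, on a toy graph of our own choosing:\n\\begin{verbatim}\nimport numpy as np\n\n# Toy graph (illustration only): the 5-cycle C_5.\nn = 5\nA = np.zeros((n, n), dtype=np.int64)\nfor i in range(n):\n    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1\n\nk = 4\nAk = np.linalg.matrix_power(A, k)  # (A^k)[x, y] = number of k-walks from x to y\nhosted = Ak ** 2                   # hom_{x,y}(C_2k): one k-walk out, one back\nprint(hosted[0, 2])                # closed 2k-walks hosted by the pair (0, 2)\n\\end{verbatim}\n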
We call a pair of vertices $(x, y)$ \\emph{good} if the number of degenerate closed $2k$-walks hosted by $x$ and $y$ is significantly smaller than the number of closed $2k$-walks hosted by $x$ and $y$. The second lemma from \\cite{janzer2020rainbow} that we use shows that if a pair $(x, y)$ is good then there are many short pairwise colour-disjoint and internally vertex-disjoint $k$-paths from $x$ to $y$.\n\n Using results about random walks on graphs, which relate mixing time to expansion, we show that in an expander $G$ on $n$ vertices which is close to being regular, for $k$ suitably large (at least polylogarithmic in $n$), the numbers of closed $2k$-walks hosted by any two pairs of vertices are within a suitable polylogarithmic factor (in $n$) of each other. This, combined with the fact that the number of degenerate closed $2k$-walks is small compared to the total number of closed $2k$-walks (due to the first lemma above), implies that almost all pairs of vertices are good. Thus, using Tur\\'an's theorem, we find a copy of $K_m$ in the graph formed by good pairs.\n This, together with the fact that there are many short colour-disjoint and internally vertex-disjoint rainbow paths between any good pair of vertices (due to the second lemma above), allows us to greedily build the desired rainbow subdivision of $K_m$.\n \n \n We also prove a stronger version of Theorem~\\ref{thm:main} (see \\Cref{thm:main-rooted}) asserting that in an expander $G$ which is close to being regular and whose average degree is large enough, for any set $S$ of $m$ vertices, there exists a rainbow $K_m$-subdivision with the vertices of $S$ being the branching vertices. The main step in this proof shows that for any two vertices $x$ and $y$ in $G$ there is a short rainbow $x,y$-path avoiding a prescribed small set $C$ of vertices and colours. By iterating this over all pairs of vertices in $S$, we can build the desired rainbow $K_m$-subdivision.\n \n To show that there is a short rainbow $x,y$-path in $G$, we first apply tools due to Jiang, Methuku and Yepremyan \\cite{jiang2021rainbow} and Letzter \\cite{letzter2021tight} to show that there is a set of vertices $U$ of size $\\Omega(n)$ such that for each $v\\in U$ there is such a short rainbow $x,v$-path $P(v)$ and a short rainbow $y,v$-path $Q(v)$, both of which avoid $C$, such that no colour is used on too many of these paths $P(v)$ and $Q(v)$. It easily follows that for almost all pairs $(u, v)$ with $u, v\\in U$, the paths $P(u)$ and $Q(v)$ are colour-disjoint.\n This, combined with the fact that most pairs in $U$ are good (in the sense mentioned earlier), implies that there exists at least one good pair $(u,v)$ for which $P(u)$ and $Q(v)$ are colour-disjoint. This allows us to find a suitable short rainbow $u,v$-path $L$ such that $P(u)\\cup L\\cup Q(v)$ is a rainbow $x,y$-walk which contains the desired rainbow $x,y$-path.\n \n To prove \\Cref{thm:main-blowups} about $r$-blow-ups of subdivisions of cliques, we use similar arguments to the ones for proving Theorem~\\ref{thm:main}, albeit tailored to the setup of $r$-blow-ups. Recall that we are given a graph $G$ on $n$ vertices with at least $n^{2 - \\frac{1}{r}} (\\log n)^{\\frac{60}{r}}$ edges. A supersaturation result due to Erd\\H{o}s and Simonovits \\cite{erdos1983supersaturated} implies that $G$ has many copies of $K_{r,r}$.
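For intuition, already for $r=2$ such copies are easy to count directly: a copy of $K_{2,2}$ is a pair of vertices together with two of their common neighbours. The following NumPy sketch, on a random toy graph of our own choosing, illustrates the abundance of such copies:\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import combinations\n\nrng = np.random.default_rng(0)\nn = 30\nA = rng.random((n, n)) < 0.4       # toy random graph sample\nA = np.triu(A, 1)\nA = A | A.T                        # symmetric adjacency, zero diagonal\n\ncount = 0\nfor x, y in combinations(range(n), 2):\n    c = int((A[x] & A[y]).sum())   # common neighbours of x and y\n    count += c * (c - 1) \/\/ 2      # choose 2 of them to complete a K_{2,2}\nprint(count)                       # each copy is counted once from each side\n\\end{verbatim}\n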
\n One natural approach is to take a large collection $\\mathcal{F}$ of copies of $K_{r,r}$ in $G$ and\n consider an auxiliary graph $\\mathcal{H}$ whose vertices are $r$-sets of vertices in $G$, and whose edges correspond to copies of $K_{r,r}$ in $\\mathcal{F}$. To find a blow-up of a $K_m$-subdivision in $G$, it would suffice to find a `clean' subdivision of $K_m$ in $\\mathcal{H}$, denoted $\\mathcal{K}$, where the $r$-sets in $G$ corresponding to the vertices of $\\mathcal{K}$ are pairwise disjoint.\n However, in order for our framework to be applicable, we need the crucial additional property of $\\mathcal{F}$ that no $r$-set of vertices $A$ in $G$ together with a vertex $u \\notin A$ lies in too many $K_{r,r}$-copies in the collection $\\mathcal{F}$. Fortunately, the existence of such a collection $\\mathcal{F}$ is guaranteed by a `balanced supersaturation' result due to Morris and Saxton \\cite{morris2016number}.\n \n\n\n\n\\section{Preliminaries} \\label{sec:prelims}\n\n Let $G$ be a graph. We denote by $d(G)$ the average degree of $G$. For a subset $S \\subseteq V(G)$, let $e(S) = e(G[S])$, and for subsets $S, T \\subseteq V(G)$, let $e(S, T) = e(G[S, T])$. We will use the notions of $d$-minimality and expanders, defined below, following \\cite{jiang2021rainbow}.\n \n \\begin{definition}\n A graph $G$ is said to be $d$-\\emph{minimal} if $d(G) \\geq d$ but $d(H) < d$ for every proper subgraph $H\\subseteq G$.\n \\end{definition}\n\n It is easy to see that every graph $G$ contains a $d(G)$-minimal subgraph.\n The following observation was used in \\cite{jiang2021rainbow}. For completeness, we include its short proof.\n \n \\begin{observation} \\label{obs:d-expand}\n If $G$ is $d$-minimal, then every subset $S\\subseteq V(G)$ satisfies $e(S)+e(S,S^c)\\geq \\frac{d |S|}{2}$.\n In particular, $\\delta(G) \\ge \\frac{d}{2}$.\n \\end{observation}\n \n \\begin{proof} \n Suppose otherwise, i.e., that $e(S)+e(S,S^c) < \\frac{d|S|}{2}$ for some $S$. Then $e(S^c) = e(G) - e(S) - e(S,S^c) > \\frac{d|V(G)|}{2} - \\frac{d|S|}{2} = \\frac{d|S^c|}{2}$, using $e(G) = \\frac{d(G)|V(G)|}{2} \\ge \\frac{d|V(G)|}{2}$. Hence the proper subgraph $G[S^c]$ has average degree larger than $d$, contradicting $d$-minimality. The `in particular' part follows by taking $S = \\{v\\}$ for each $v \\in V(G)$.\n \\end{proof}\n \n \\begin{definition} \\label{def:expander-new}\n Given $d \\ge 1$, $\\eta \\in (0, 1)$ and $\\varepsilon\\in (0,\\frac{1}{2}]$, an $n$-vertex graph $G$ is called a \\emph{$(d, \\eta, \\varepsilon)$-expander} if $G$ is $d$-minimal, and for every subset $S \\subseteq V(G)$ of size at most $(1 - \\varepsilon)n$, we have $d(S) \\le (1 - \\eta)d$. \n \\end{definition}\n\n Note that, by definition, for $0<\\varepsilon'\\leq\\varepsilon\\leq\\frac{1}{2}$ and $0 < \\eta \\le \\eta' < 1$, every $(d,\\eta',\\varepsilon')$-expander is also a $(d,\\eta,\\varepsilon)$-expander. Also, if $G$ is a $(d, \\eta, \\varepsilon)$-expander then it is a $(d(G), \\eta, \\varepsilon)$-expander.\n It will be useful to note the following `edge-expansion' property of $(d, \\eta, \\varepsilon)$-expanders.\n \n \\begin{observation} \\label{obs:edge-expansion-new}\n Let $n, d \\ge 1$, let $\\eta\\in (0, 1)$ and let $\\varepsilon\\in (0,\\frac{1}{2}]$.\n Suppose that $G$ is a $(d, \\eta, \\varepsilon)$-expander on $n$ vertices. Then every $S \\subseteq V(G)$ with $|S| \\le (1 - \\varepsilon)n$ satisfies $e(S, S^c) \\ge \\frac{\\eta d}{2}|S|$.\n \\end{observation}\n \n \\begin{proof}\n Let $S \\subseteq V(G)$ satisfy $|S| \\le (1 - \\varepsilon)n$.\n Since $G$ is $d$-minimal and by \\Cref{obs:d-expand}, we have $e(S) + e(S, S^c) \\ge \\frac{d|S|}{2}$. Since $G$ is a $(d, \\eta, \\varepsilon)$-expander, by definition, we also have $e(S) = \\frac{d(S)}{2}|S| \\le \\frac{(1 - \\eta)d}{2}|S|$.
It follows that $e(S, S^c) \\ge \\frac{\\eta d}{2}|S|$, as claimed.\n \\end{proof}\n \n \\begin{lemma}[Lemma 2.5 from \\cite{jiang2021rainbow}]\n \\label{lem:existence-expanders-new}\n Let $n, d \\ge 1$, let $\\varepsilon \\in (0,\\frac{1}{2}]$ and let $\\eta = \\frac{\\varepsilon}{2\\log n}$.\n Suppose that $G$ is a graph on $n$ vertices with average degree $d$. Then $G$ contains a $(d', \\eta, \\varepsilon)$-expander, with $d' \\ge \\frac{d}{2}$. \n \\end{lemma}\n \n The following lemma from \\cite{jiang2021rainbow} asserts that in a properly edge-coloured expander, one can reach almost every vertex by a short rainbow path starting at a specified vertex.\n \n \\begin{lemma}[Lemma 2.7 from \\cite{jiang2021rainbow}] \\label{lem:reachable-new}\n Let $n, \\ell, d, M \\ge 1$, let $\\eta \\in (0,1)$, and let $\\varepsilon \\in (0, \\frac{1}{2}]$. Suppose that $\\ell = \\frac{4 \\log n}{\\eta}$ and $d \\ge \\frac{4\\ell + 8M}{\\eta}$.\n Let $G$ be a properly edge-coloured $(d, \\eta, \\varepsilon)$-expander on $n$ vertices, let $x \\in V(G)$ and let $F$ be a set of vertices and colours of size at most $M$. Then at least $(1 - \\varepsilon)n$ vertices can be reached from $x$ by a rainbow path of length at most $\\ell+1$ that avoids the vertices and colours in $F$.\n \\end{lemma}\n \n We will need a stronger version of the previous lemma, where we require that no colour is used too many times in the short rainbow paths. A similar idea was used in \\cite{letzter2021tight} (see Lemma 5) in the context of tight paths.\n \n \\begin{lemma} \\label{lem:reachable-robust-new}\n Let $n, \\ell, d, q, M \\ge 1$, let $\\eta \\in (0,1)$ and let $\\varepsilon \\in (0, \\frac{1}{2}]$.\n Suppose that $\\ell = \\frac{4\\log n}{\\eta}$ and $d \\ge \\frac{20q\\ell + 8M}{\\eta}$.\n Let $G$ be a properly edge-coloured $(d, \\eta, \\varepsilon)$-expander on $n$ vertices, let $x \\in V(G)$, and let $F$ be a set of colours and vertices of size at most $M$. Then there is a set $U \\subseteq V(G)$ of size at least $(1 - \\varepsilon)n$, and a collection $\\mathcal P = \\{ P(u) : u \\in U\\}$ where, for each $u \\in U$, $P(u)$ is a rainbow path from $x$ to $u$ of length at most $\\ell + 1$ that avoids the vertices and colours in $F$, and no colour appears in more than $\\frac{n}{q}$ of the paths in $\\mathcal P$.\n \\end{lemma}\n \n \\begin{proof}\n Let $U$ be a largest set such that for every $u \\in U$ there is a rainbow path $P(u)$ from $x$ to $u$ of length at most $\\ell + 1$ that avoids $F$, such that no colour appears in more than $\\frac{n}{q}$ of the paths $P(u)$. Say that a colour is \\emph{bad} if it appears on exactly $\\frac{n}{q}$ of the paths $P(u)$ with $u \\in U$, and let $C_{\\bad}$ be the set of bad colours. Since each path $P(u)$ has length at most $\\ell + 1$, we have \n \\begin{equation*}\n |C_{\\bad}| \\le \\frac{n(\\ell + 1)}{n\/q} \\le 2q\\ell.\n \\end{equation*}\n \n Since $d\\ge \\frac{20q\\ell + 8M}{\\eta}\\geq \\frac{4\\ell+8(M+|C_{\\bad}|)}{\\eta}$, by \\Cref{lem:reachable-new} with $F\\cup C_{\\bad}$ playing the role of $F$, there is a set $U'$ with $|U'| \\ge (1 - \\varepsilon)n$ such that for every $v \\in U'$, there is a rainbow path $P'(v)$ from $x$ to $v$ of length at most $\\ell + 1$ that avoids the colours and vertices in $F \\cup C_{\\bad}$. If $|U| < (1 - \\varepsilon)n$, then there is a vertex $v \\in U' \\setminus U$. The set $U \\cup \\{v\\}$ (along with the paths $P(u)$ for $u \\in U$ and $P'(v)$) contradicts the maximality of $U$.
It follows that $|U| \\ge (1 - \\varepsilon)n$, as required. \n \\end{proof}\n \n We would like to work with expanders that are close to regular. For this, we use the following lemma, which is a slight adaptation of Lemma 3.2 of \\cite{montgomery2021C4}.\n \\begin{lemma}\\label{lem:regularization}\n Let $n \\ge 2$ and $d \\ge 36 \\log n$.\n Let $G$ be a bipartite graph on $n$ vertices with minimum degree at least $d$. Then there exists a subgraph $H$ of $G$ with average degree at least $\\frac{d}{12 \\log{n}}$ and maximum degree at most $d$.\n \\end{lemma}\n \n \\begin{proof}\n Let $\\{A,B\\}$ denote a bipartition of $G$ with $|A|\\geq |B|$. Let $G'$ be obtained from $G$ by keeping exactly $d$ edges incident to each vertex in $A$ (which is possible as $\\delta(G)\\geq d$). Then for each $v\\in A$ we have $d_{G'}(v)=d$, and hence $e(G')=d|A|$.\n \n Let $m=\\lceil \\log n\\rceil$. For each $i\\in [m]$, let $B_i=\\{v\\in B: 2^{i-1}\\leq d_{G'}(v)<2^i\\}$. Denoting the set of vertices of $B$ that are isolated in $G'$ by $B_0$, we have $B \\setminus B_0 = \\cup_{i\\in [m]} B_i$.\n By the pigeonhole principle, there exists an $i\\in [m]$ for which $e(G'[A,B_i])\\geq \\frac{e(G')}{m} \\geq \\frac{d|A|}{2\\log n}$. \n Fix such an $i$ and let $t = 2^{i-1}$.\n Then, by definition, each $v\\in B_i$ has degree between $t$ and $2t$ in $G'[A,B_i]$.\n If $2t\\leq d$, then $G'[A,B_i]$ has maximum degree at most $d$ and\n average degree at least $\\frac{2 e(G'[A,B_i])}{|A|+|B_i|}\\geq \\frac{d}{2\\log n}$.\n So the lemma holds. Hence, we may assume that $2t > d$.\n \n Set $p = \\frac{d}{4t}$; then $0 < p < \\frac{1}{2}$. Now let $A'\\subseteq A$ be chosen by including each vertex in $A$ independently with probability $p$. \n For convenience, write $G_i=G'[A,B_i]$ and $G_i'=G'[A',B_i]$. Then\n \\begin{equation} \\label{A'B'}\n \\mathbb{E}[e(G_i')] =p \\cdot e(G_i)\\geq \\frac{pd|A|}{2\\log n}.\n \\end{equation}\n Now, let $B'=\\{v\\in B_i: d_{G'_i}(v)\\leq d\\}$. \n\n For each $v\\in B_i$, the degree $d_{G'_i}(v)$ is binomially distributed with expectation $p \\cdot d_{G_i}(v)$. Since $t\\leq d_{G_i}(v)\\leq 2t$, we have\n $\\frac{d}{4}=pt\\leq \\mathbb{E}[d_{G'_i}(v)]\\leq 2pt=\\frac{d}{2}$. Therefore, using Chernoff's bound (see, e.g., Appendix A of \\cite{alonspencer2016}), we have\n \\begin{align*}\n \\mathbb{E}[|B_i\\setminus B'|]&=\\sum_{v\\in B_i} \\mathbb{P}(v\\in B_i\\setminus B')= \\sum_{v\\in B_i} \\mathbb{P}\\!\\left[d_{G'_i}(v)\\geq d\\right]\n \\,\\leq\\, \\sum_{v\\in B_i} \\mathbb{P}\\!\\left[d_{G'_i}(v)\\geq 2 \\mathbb{E}[d_{G'_i}(v)]\\right] \\\\\n &\\,\\leq\\, \\sum_{v\\in B_i} 2\\exp\\!\\left(-\\frac{\\mathbb{E}[d_{G'_i}(v)]}{3}\\right) \n \\leq\\ \\sum_{v\\in B_i}2\\exp\\!\\left(-\\frac{d}{12}\\right)\n \\leq\\ n \\cdot 2e^{-3\\log n}<\\frac{1}{n}.\n \\end{align*}\n This together with the fact that for any $A'$ and the corresponding $B'$, $e(G'[A',B_i\\setminus B']) \\leq n|B_i\\setminus B'|$, implies\n \\begin{equation} \\label{A'notB'}\n \\mathbb{E}[e(G'[A',B_i\\setminus B'])]\\leq n \\cdot \\mathbb{E}[|B_i\\setminus B'|] \\le 1.\n \\end{equation}\n \n By \\eqref{A'B'} and \\eqref{A'notB'},\n \\[\\mathbb{E}[e(G'[A',B'])]\\geq \\frac{pd|A|}{2\\log n}-1.\\]\n Note that $\\mathbb{E}[|A'|]=p|A|$. Also, as $t|B_i|\\leq e(G'[A, B_i]) \\leq d|A|$, we have\n $|B'|\\leq |B_i|\\leq \\frac{d}{t}|A|=4p|A|$.
Hence $\\mathbb{E}[|A'|+|B'|]\\leq 5p|A|$.\n Let $d_0=\\frac{d}{12 \\log n}$.\n We have\n \\begin{equation*}\n \\mathbb{E}\\!\\left[e(G'[A', B'])-d_0(|A'|+|B'|)\\right]\n \\geq \\frac{pd|A|}{2\\log n} - 1 - \\frac{5dp|A|}{12 \\log n} \n \\geq \\frac{pd|A|}{12 \\log n} - 1\n \\ge 0,\n \\end{equation*}\n where the last inequality used $pd|A| = \\frac{d^2 |A|}{4t} \\ge 18^2(\\log n)^2 \\ge 12 \\log n$ which holds since $t \\le |A|$ and $d \\ge 36 \\log n$.\n Thus, there is a choice of $A'$ for which $e(G'[A', B'])-d_0(|A'|+|B'|)\\geq 0$. Taking $H = G'[A', B']$, we have $d(H)\\geq d_0= \\frac{d}{12 \\log n}$ and $\\Delta(H)\\leq d$, as desired.\n \\end{proof}\n \n Our final preliminary result combines \\Cref{lem:existence-expanders-new,lem:regularization} to show that every relatively dense graph contains an expander which is close to regular.\n \n \n \\begin{lemma} \\label{lem:bounded-max-deg-expander}\n Let $n, d \\ge 1$ and let $\\varepsilon \\in (0,\\frac{1}{2}]$, and suppose that $d \\ge 10^7 (\\log n)^3$.\n Suppose that $G$ is a bipartite graph on $n$ vertices with average degree at least $d$. Then $G$ has a subgraph $H$ with the following properties.\n \\begin{enumerate} [ref = \\rm\\arabic*]\n \\item \\label{itm:property-1}\n $H$ is a $(d', \\eta, \\varepsilon)$-expander on $n'$ vertices, where $d' \\ge \\frac{d}{2500(\\log n)^2}$ and $\\eta \\ge \\frac{\\varepsilon}{100(\\log n')^2}$,\n \\item \\label{itm:property-2}\n $H$ has maximum degree at most $2500 (\\log n')^2 d'$.\n \\end{enumerate}\n \\end{lemma}\n \n \\begin{proof}\n Let $G_0 = G$. We run the following process, generating graphs $G_i$ for $i \\ge 0$. For each graph $G_i$, we write $d_i = d(G_i)$ and $n_i = |V(G_i)|$.\n \\begin{enumerate} [label = \\rm(\\alph*)]\n \\item \\label{itm:step-a}\n Let $H_i$ be a subgraph of $G_i$ with average degree at least $\\frac{d_i}{24 \\log n_i}$ and maximum degree at most $d_i$. Such a subgraph $H_i$ exists by \\Cref{lem:regularization}, using the fact that every graph with average degree $d$ contains a subgraph with minimum degree at least $\\frac{d}{2}$. To apply the lemma we need to verify that $d_i \\ge 72 \\log n_i$, which we shall do below.\n \\item \\label{itm:step-b}\n Write $n_i' = |V(H_i)|$.\n Let $G_{i+1}$ be a subgraph of $H_i$ which is a $(d_{i+1}, \\eta_{i+1}, \\varepsilon)$-expander, where $d(G_{i+1}) = d_{i+1} \\ge \\frac{d(H_i)}{2} \\ge \\frac{d_i}{48 \\log n_i}$ and $\\eta_{i+1} = \\frac{\\varepsilon}{2 \\log n_i'}$. Such a subgraph exists by \\Cref{lem:existence-expanders-new}, using the observation that if $G$ is a $(d, \\eta, \\varepsilon)$-expander then it is a $(d(G), \\eta, \\varepsilon)$-expander.\n \\end{enumerate}\n \n \\begin{claim}\n If $48 \\log n_i < \\sqrt{48 \\log n_{i-1}}$ for $i \\in [t]$ then the graphs $G_1, \\ldots, G_t$ can be defined as above and are non-empty.\n \\end{claim}\n \n \\begin{proof}\n Notice that to prove the statement, it suffices to show that for $t$ as described, $d_i \\ge 72 \\log n_i$ for $i \\in [t-1]$.\n \n We prove this by induction.\n It is easy to check that the statement is true for $t = 0$ (note that the condition on $t$ holds vacuously here). Indeed, we just need to check that $d_0 \\ge 72 \\log n_0$, which is the case as $d_0 = d$, $n_0 = n$ and $d \\ge 10^7(\\log n)^3$.\n \n Now suppose that $48 \\log n_i < \\sqrt{48 \\log n_{i-1}}$ for $i \\in [t]$ and that the inductive statement holds for $t' \\le t$.
This means that the process runs as described for all $i \\in [t]$, and it remains to show that the $(t+1)$-st step can be performed, namely that $d_{t} \\ge 72 \\log n_{t}$.\n By the assumption on $t$, the following holds for every $i \\in [t]$.\n \\begin{equation*}\n 48 \\log n_i \n \\le (48 \\log n_{i-1})^{1\/2}\n \\le \\ldots \\le (48 \\log n_0)^{2^{-i}}\n = (48 \\log n)^{2^{-i}}.\n \\end{equation*}\n Since $d_i \\ge \\frac{d_{i-1}}{48 \\log n_{i-1}}$ for $i \\in [t]$ (see \\ref{itm:step-b}), it follows that\n \\begin{equation} \\label{eqn:d}\n d_t \\ge\n \\frac{d_{t-1}}{48 \\log n_{t-1}} \n \\ge \\ldots \n \\ge \\frac{d_0}{48 \\log n_{t-1} \\cdot \\ldots \\cdot 48 \\log n_{0}} \n \\ge \\frac{d}{(48 \\log n)^{2^0 + \\ldots + 2^{-(t-1)}}}\n \\ge \\frac{d}{(48 \\log n)^2}.\n \\end{equation}\n Using $d \\ge 10^7 \\log n$ and $n \\ge n_t$, we find that $d_t \\ge 10^3 \\log n > 72 \\log n_t$, as required.\n \\end{proof}\n \n Let $\\ell$ be minimum such that $48 \\log n_{\\ell} \\ge \\sqrt{48 \\log n_{\\ell - 1}}$. We claim that such $\\ell$ exists. If not, then by the previous claim the process can be run forever and $n_i \\ge 1$ for all $i \\ge 0$, implying that $(48 \\log n_i)_{i \\ge 0}$ is decreasing, and thus $(n_i)_{i \\ge 0}$ is an infinite decreasing sequence of positive integers, a contradiction. \n \n We will show that $G_{\\ell}$ satisfies the requirements of \\Cref{lem:bounded-max-deg-expander}.\n Indeed, $G_{\\ell}$ is a $(d_{\\ell}, \\eta_{\\ell}, \\varepsilon)$-expander. \n Using the proof of the above claim, inequality \\eqref{eqn:d} holds for $t = \\ell$, showing $d_{\\ell} \\ge \\frac{d}{2500(\\log n)^2}$.\n By choice of $G_{\\ell}$, we have $\\eta_{\\ell} = \\frac{\\varepsilon}{2\\log n_{\\ell-1}} \\ge \\frac{\\varepsilon}{96(\\log n_{\\ell})^2} \\ge \\frac{\\varepsilon}{100(\\log n_{\\ell})^2}$ (using $48 \\log n_{\\ell} \\ge \\sqrt{48 \\log n_{\\ell-1}}$). It follows that property \\ref{itm:property-1} of the lemma holds.\n To see property \\ref{itm:property-2}, note that $G_{\\ell}$ has maximum degree at most $d_{\\ell - 1}$ and $d_{\\ell} \\ge \\frac{d_{\\ell-1}}{48 \\log n_{\\ell-1}} \\ge \\frac{d_{\\ell-1}}{(48 \\log n_{\\ell})^2} \\ge \\frac{d_{\\ell-1}}{2500 (\\log n_{\\ell})^2}$ (again using $48 \\log n_{\\ell} \\ge \\sqrt{48 \\log n_{\\ell-1}}$).\n \\end{proof} \n \n \n \n\\section{Counting rainbow cycles} \\label{sec:non-rainbow-walks}\n \n In a graph $G$, denote by $\\Hom_{x, y}(P_k)$ the family of walks of length $k$ from $x$ to $y$. Similarly, let $\\Hom_{x, y}(C_{2k})$ be the family of closed walks of length $2k$ that start at $x$ and reach $y$ after $k$ steps. \n Write $\\hom_{x,y}(P_k)=|\\Hom_{x,y}(P_k)|$ and $\\hom_{x,y}(C_{2k})=|\\Hom_{x,y}(C_{2k})|$.\n \n The following relation between $\\hom_{x, y}(P_k)$ and $\\hom_{x, y}(C_{2k})$ is very useful.\n \\begin{equation} \\label{eqn:hom-cycle-paths}\n \\hom_{x, y}(C_{2k}) = (\\hom_{x, y}(P_k))^2. \n \\end{equation}\n We also define $\\hom(C_{2k})$ to be the total number of homomorphic copies of $C_{2k}$, namely \n \\begin{equation*}\n \\hom(C_{2k}) = \\sum_{\\substack{x, y \\in V(G)}}\n \\hom_{x,y}(C_{2k}).\n \\end{equation*}\n Similarly, we define $\\hom(P_k) = \\sum_{x, y \\in V(G)}\\hom_{x,y}(P_k)$.\n \n We will make use of the following two lemmas from a recent paper of Janzer \\cite{janzer2020rainbow}. \n \n \\begin{lemma}[Lemma 2.1 from \\cite{janzer2020rainbow}] \\label{lem:janzer-edges}\n Let $k \\ge 2$ be an integer and let $G = (V, E)$ be a graph on $n$ vertices. 
Let $\\sim$ be a symmetric binary relation on $E$ such that for every $uv \\in E$ and $w \\in V$, there are at most $t$ neighbours $z$ of $w$ for which $uv \\sim zw$. Then the number of homomorphic $2k$-cycles $(x_1, \\ldots, x_{2k})$ in $G$ such that $x_i x_{i+1} \\sim x_j x_{j+1}$ for some $i \\neq j$ is at most\n \\begin{equation*}\n 32k^{3\/2}t^{1\/2}\\Delta(G)^{1\/2}n^{\\frac{1}{2k}}\\hom(C_{2k})^{1 - \\frac{1}{2k}}.\n \\end{equation*}\n \\end{lemma}\n \n \\begin{lemma}[Lemma 2.2 from \\cite{janzer2020rainbow}] \\label{lem:janzer-vertices}\n Let $k \\ge 2$ be an integer and let $G = (V, E)$ be a graph on $n$ vertices. Let $\\sim$ be a symmetric binary relation on $V$ such that for every $u, v \\in V$, there are at most $t$ neighbours $w$ of $v$ for which $u \\sim w$. Then the number of homomorphic $2k$-cycles $(x_1, \\ldots, x_{2k})$ in $G$ such that $x_i \\sim x_j$ for some $i \\neq j$ is at most\n \\begin{equation*}\n 32k^{3\/2}t^{1\/2}\\Delta(G)^{1\/2}n^{\\frac{1}{2k}}\\hom(C_{2k})^{1 - \\frac{1}{2k}}.\n \\end{equation*}\n \\end{lemma}\n \n In a properly edge-coloured graph $G$, let $\\Hom^*_{x, y}(C_{2k})$ be the family of all the closed walks in $\\Hom_{x,y}(C_{2k})$ that do not form a rainbow copy of $C_{2k}$. \n Write $\\hom^*_{x,y}(C_{2k})=|\\Hom^*_{x,y}(C_{2k})|$.\n Let \n \\begin{equation*}\n \\Hom^*(C_{2k}) \n = \\bigcup_{x,y\\in V(G)} \\Hom^*_{x,y}(C_{2k}),\n \\end{equation*} \n and write $\\hom^*(C_{2k})=|\\Hom^*(C_{2k})|$. \n\n \\begin{lemma} \\label{lem:non-rainbow-hom}\n Let $n, d, \\mu, k, S \\ge 1$. Suppose that $d \\ge 2^{14} k^3 S^2 \\mu n^{1\/k}$.\n Let $G$ be a properly edge-coloured graph with minimum degree at least $\\frac{d}{2}$ and maximum degree at most $\\mu d$. \n Then $\\hom^*(C_{2k}) \\le \\frac{1}{S} \\hom(C_{2k})$.\n \\end{lemma}\n \n \\begin{proof}\n We first give a lower bound on $\\hom(C_{2k})$. For this, note that $\\hom(P_k) \\ge n \\cdot (\\frac{d}{2})^k$. Hence,\n \\begin{equation}\n \\label{eq:lowerboundhomC2k}\n \\hom(C_{2k}) \n = \\sum_{x, y \\in V(G)} \\big(\\hom_{x, y}(P_k)\\big)^2\n \\ge \\frac{1}{n^2} \\Big(\\sum_{x, y \\in V(G)} \\hom_{x, y} (P_k) \\Big)^2\n = \\bigg(\\frac{\\hom(P_k)}{n}\\bigg)^2\n \\ge \\left(\\frac{d}{2}\\right)^{2k},\n \\end{equation}\n where the first equality uses \\eqref{eqn:hom-cycle-paths} and the inequality follows by convexity.\n \n Let $\\sim_e$ be the binary relation defined on $E(G)$ where $e \\sim_e f$ if and only if $e$ and $f$ have the same colour. Because $G$ is properly edge-coloured, for every edge $uv$ and vertex $w$, there is at most one neighbour $z$ of $w$ for which $uv \\sim_e zw$.\n Let $\\sim_v$ be the binary relation defined on $V(G)$ where $u \\sim_v v$ if and only if $u = v$. Then, trivially, for every $u, v \\in V$ there is at most one neighbour $w$ of $v$ for which $u \\sim_v w$.\n Apply \\Cref{lem:janzer-edges,lem:janzer-vertices} with $\\sim_e$ and $\\sim_v$, respectively (so $t$ can be taken to be $1$ in both lemmas), to obtain the desired upper bound on $\\hom^*(C_{2k})$, as follows. \n \\begin{align*}\n \\hom^*(C_{2k}) \n & \\le 64 k^{3\/2} (\\mu d)^{1\/2} n^{\\frac{1}{2k}} \\cdot \\hom(C_{2k})^{1 - \\frac{1}{2k}} \\\\\n & \\overset{\\eqref{eq:lowerboundhomC2k}}{\\le} \n \\frac{128 k^{3\/2} (\\mu d)^{1\/2} n^{\\frac{1}{2k}}}{d} \\cdot \\hom(C_{2k}) \\\\\n & = \\bigg(\\frac{2^{14} k^3 \\mu n^{\\frac{1}{k}}}{d}\\bigg)^{1\/2} \\cdot \\hom(C_{2k})\n \\le \\frac{1}{S} \\cdot \\hom(C_{2k}).
\n \\end{align*}\n Here we used the inequality $\\hom(C_{2k}) \\ge (\\frac{d}{2})^{2k}$ proved in \\eqref{eq:lowerboundhomC2k}, and the lower bound on $d$ assumed in the lemma.\n \\end{proof}\n \n\n \n It would be useful to be able to find many pairwise colour-disjoint and internally vertex-disjoint paths between a given pair $(x, y)$ of vertices. To that end, we use Lemma~\\ref{lem:theta}, which will follow immediately from Lemma~\\ref{lem:theta-general} stated below, whose proof uses the arguments of Theorem 3.7 in \\cite{janzer2020rainbow}. Lemma~\\ref{lem:theta-general} will also be used in \\Cref{sec:blowups}.\n For a graph $G$, vertices $x, y \\in V(G)$ and walks $P, Q \\in \\Hom_{x, y}(P_k)$, let $PQ$ denote the closed walk in $\\Hom_{x, y}(C_{2k})$ obtained by concatenating $P$ and the reverse of $Q$. \n\n \n \n \n \\begin{lemma} \\label{lem:theta-general}\n Let $k, s \\ge 1$ be integers. Let $G$ be a graph and let $x, y \\in V(G)$. Let $\\mathcal B$ be a subfamily of $\\Hom_{x,y}(C_{2k})$ satisfying $|\\mathcal B| < \\frac{1}{s^2}\\hom_{x, y}(C_{2k})$. Then there exist walks $P_1,\\dots, P_s \\in \\Hom_{x, y}(P_k)$ such that $P_iP_j\\notin \\mathcal B$ for every distinct $i, j \\in [s]$.\n \\end{lemma}\n \n \\begin{proof}\n Let $P_1, \\ldots, P_s$ be walks of length $k$ from $x$ to $y$ in $G$, each chosen uniformly at random among all such walks, independently. Note that for any distinct $i, j \\in [s]$, the walk $P_iP_j$ is a uniformly chosen random element in $\\Hom_{x, y}(C_{2k})$. Thus, the probability that $P_iP_j\\in \\mathcal B$ is \n $\\frac{|\\mathcal B|}{\\hom_{x,y}(C_{2k})} < \\frac{1}{s^2}$. A union bound implies that, with positive probability, $P_i P_j \\notin \\mathcal{B}$ for every distinct $i, j \\in [s]$.\n \\end{proof}\n \n \\begin{lemma} \\label{lem:theta}\n Let $k, s \\ge 1$ be integers. Let $G$ be a properly edge-coloured graph and let $x, y \\in V(G)$. Suppose that $\\hom^*_{x, y}(C_{2k}) < \\frac{1}{s^2}\\hom_{x, y}(C_{2k})$. Then there are $s$ pairwise colour-disjoint and internally vertex-disjoint rainbow paths of length $k$ from $x$ to $y$. \n \\end{lemma}\n \n \\begin{proof}\n By our assumption, $|\\Hom^*_{x,y}(C_{2k})|<\\frac{1}{s^2}\\hom_{x, y}(C_{2k})$. By Lemma~\\ref{lem:theta-general}, applied with $\\mathcal B = \\Hom^*_{x, y}(C_{2k})$, there exist walks $P_1, \\dots, P_s \\in \\Hom_{x, y}(P_k)$ satisfying $P_i P_j \\notin \\Hom^*_{x, y}(C_{2k})$ for every distinct $i, j \\in [s]$. \n In other words, $P_i P_j$ is a rainbow copy of $C_{2k}$ for every distinct $i, j \\in [s]$. Now, $P_1, \\dots, P_s$ are pairwise colour-disjoint and internally vertex-disjoint rainbow paths of length $k$ from $x$ to $y$, as desired. \n \\end{proof}\n \n\\section{Counting walks in expanders} \\label{sec:random-walks}\n\n In this section we exploit the connection between the mixing time of a random walk on a graph $G$ and expansion properties of $G$. Much of the notation and results that we use can be found in~\\cite{lovasz1993random}.\n \n Suppose $G=(V,E)$ is a connected graph where $V = [n]$. \n Consider a random walk on $V(G)$: we start at some vertex $v_0$, and at the $i$-th step we move from $v_i$ to a neighbour $v_{i+1}$ chosen uniformly at random, i.e., each neighbour of $v_i$ is chosen as $v_{i+1}$ with probability $\\frac{1}{d(v_i)}$. The sequence of vertices $(v_i)_{i \\ge 0}$ defines a Markov chain. \n Let $M$ be the $n \\times n$ matrix of transition probabilities of the Markov chain, namely $M_{v, u}$ is the probability of stepping from $v$ to $u$; so $M_{v, u}= \\frac{1}{d(v)}$ if $vu \\in E(G)$, and $M_{v, u} = 0$ otherwise.
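In the matrix notation introduced next this reads $M = DA$, and the $t$-step distributions $(M^t)_{v,\\cdot}$ are cheap to compute explicitly. A minimal NumPy sketch, where the toy graph is our own illustration:\n\\begin{verbatim}\nimport numpy as np\n\n# Toy connected graph, adjacency matrix A (illustration only).\nA = np.array([[0, 1, 0, 0],\n              [1, 0, 1, 1],\n              [0, 1, 0, 1],\n              [0, 1, 1, 0]], dtype=float)\nd = A.sum(axis=1)                      # degree sequence\nM = A \/ d[:, None]                     # M = DA: row v is the step law at v\n\nt = 6\nMt = np.linalg.matrix_power(M, t)      # (M^t)[v, u] = Pr[walk from v is at u]\nassert np.allclose(Mt.sum(axis=1), 1)  # rows of M^t are distributions\nprint(Mt[0])                           # t-step law of a walk started at 0\n\\end{verbatim}\n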
Denote by $D$ the $n \\times n$ diagonal matrix with $D_{v, v} = \\frac{1}{d(v)}$ for $v \\in [n]$, and let $A$ be the adjacency matrix of $G$. Then $M=D A$. \n So the probability that a random walk starting at vertex $v$ reaches $u$ in $t$ steps is $(M^t)_{v, u}$.\n\n \\begin{definition}\n \\label{def19}\n Let $G$ be a graph on the vertex set $[n]$. \n Let $D = D(G)$ denote the diagonal $n \\times n$ matrix where $D_{v,v}=\\frac{1}{d(v)}$ for each $v\\in [n]$. Let $A = A(G)$ be the adjacency matrix of $G$, let $M(G) = D A$ and let $N(G) =D^{1\/2}AD^{1\/2}$. Note that the matrix $N(G)$ is symmetric, so it has $n$ real eigenvalues. Let $\\lambda_1(N)\\geq \\lambda_2(N)\\geq \\dots \\geq \\lambda_n(N)$ denote the eigenvalues of $N:=N(G)$. \n \\end{definition}\n\n \n \\begin{lemma}\\label{lem:markovchain}\n Let $G$ be a bipartite graph, with a bipartition $\\{X, Y\\}$, on the vertex set $[n]$ with $m$ edges and no isolated vertices. \n Let $M=D(G)A(G)$ and $N = N(G)$.\n Then for every $v,u \\in V(G)$ and integer $k \\ge 1$, we have\n \\begin{equation*}\n \\left|(M^k)_{v, u} - \\frac{d(u)}{2m}\\left(1 + (-1)^{k + \\mathbbm{1}(v \\in X) + \\mathbbm{1}(u \\in X)}\\right) \\right|\n \\leq \\sqrt{\\frac{d(u)}{d(v)}} \\cdot \\big(\\lambda_2(N)\\big)^k.\n \\end{equation*}\n \\end{lemma}\n \n Note that Lemma~\\ref{lem:markovchain} says that when $k$ is even and both $v,u$ are in the same part or when $k$ is odd and $v,u$ are in different parts then $ \\left|(M^k)_{v, u} - \\frac{d(u)}{m} \\right|\n \\leq \\sqrt{\\frac{d(u)}{d(v)}} \\cdot \\big(\\lambda_2(N)\\big)^k.$ Note that when $k$ is even and $v$ and $u$ are in different parts or when $k$ is odd and $v$ and $u$ are in the same part then $(M^k)_{v, u}=0$. \n \\begin{proof}\n For any vector $w = (w_1, \\ldots, w_n)^T$, let $\\overline{w}$ be the vector $(w_1', \\ldots, w_n')^T$ where $w_i' = w_i$ when $i \\in X$ and $w_i' = -w_i$ when $i \\in Y$. It is easy to check that $N \\overline{w} = -\\overline{Nw}$. Hence, if $w$ is an eigenvector of $N$ with eigenvalue $\\lambda$, then $\\overline{w}$ is an eigenvector of $N$ with eigenvalue $-\\lambda$. It follows that $\\lambda_i = -\\lambda_{n+1-i}$ for $i \\in [n]$. In particular, $|\\lambda_i| \\le \\lambda_2$ for every $i \\in \\{2, \\ldots, n-1\\}$.\n \n One can check that $w_1$, defined as follows, is a unit eigenvector of $N$ with eigenvalue $1$. \n \\begin{equation*}\n w_1 = \\frac{1}{\\sqrt{2m}}\\left(\\sqrt{d(1)}, \\sqrt{d(2)}, \\dots, \\sqrt{d(n)}\\right)^T.\n \\end{equation*}\n By the Frobenius--Perron theorem, since the entries of $N$ are non-negative and the entries of $w_1$ are positive, we have $\\lambda_1 = 1$. As explained above, it follows that $\\overline{w_1}$ is a unit eigenvector of $N$ with eigenvalue $\\lambda_n = -1$. Write $w_n = \\overline{w_1}$, and let $w_i$ be a unit eigenvector of $N$ with eigenvalue $\\lambda_i$ for $i \\in \\{2, \\ldots, n-1\\}$, such that $w_1, \\ldots, w_n$ are orthogonal to each other. 
Note that $w_i$ is an eigenvector of $N^k$ with eigenvalue $(\\lambda_i)^k$, for each $i \\in [n]$.\n Since $N^k$ is a symmetric matrix and $\\{w_1, \\ldots, w_n\\}$ is an orthonormal eigenbasis for $N^k$, we may write $N^k$ in the spectral form as follows (using $\\lambda_1 = 1$ and $\\lambda_n = -1$).\n \\begin{equation*}\n N^k\n = \\sum_{i=1}^n{(\\lambda_i)^k w_i (w_i)^T} \n = w_1(w_1)^T + (-1)^k \\,\\overline{w_1}(\\overline{w_1})^T + \\sum_{i=2}^{n-1}{(\\lambda_i)^k w_i (w_i)^T}.\n \\end{equation*}\n We also have $D^{1\/2}ND^{-1\/2} = DA = M$.\n Therefore,\n \\begin{align*}\n M^k \n &= D^{1\/2}N^k D^{-1\/2} \\\\\n &= D^{1\/2} w_1 (w_1)^T D^{-1\/2} + (-1)^kD^{1\/2} \\,\\overline{w_1} (\\overline{w_1})^T D^{-1\/2} + \\sum_{i=2}^{n-1}{(\\lambda_i)^k D^{1\/2} w_i (w_i)^T D^{-1\/2} }.\n \\end{align*}\n Let $Q= D^{1\/2} w_1 (w_1)^T D^{-1\/2} + (-1)^kD^{1\/2} \\,\\overline{w_1} (\\overline{w_1})^T D^{-1\/2}$. Then\n \\begin{align*}\n M^k \n = Q + \\sum_{i=2}^{n-1}{(\\lambda_i)^k D^{1\/2} w_i (w_i)^T D^{-1\/2} }.\n \\end{align*} \n Hence\n \\begin{equation}\n \\label{eq:P't}\n (M^k)_{v, u} \n = Q_{v, u} + \\sum_{i=2}^{n-1}{(\\lambda_i)^k w_{i, v} w_{i, u} \\sqrt{\\frac{d(u)}{d(v)}}}.\n \\end{equation}\n \n Let $W$ be the matrix whose rows are $w_1, \\ldots, w_n$. Then $W W^T = I$, implying $W^TW = I$. For each $v \\in [n]$, since $(W^TW)_{v,v}=1$, we have $\\sum_{i = 1}^n |w_{i, v}|^2 = 1$, so $\\sum_{i = 2}^{n-1} |w_{i, v}|^2 \\le 1$.\n By the Cauchy--Schwarz inequality, $\\sum_{i = 2}^{n-1} | w_{i, v}w_{i, u}| \\le \\sqrt{\\sum_{i=2}^{n-1} |w_{i,v}|^2} \\sqrt{\\sum_{i=2}^{n-1} |w_{i,u}|^2}\\leq 1$. \n Since $|\\lambda_i| \\le \\lambda_2$ for every $i \\in \\{2, \\ldots, n-1\\}$, the identity \\eqref{eq:P't} and the triangle inequality imply\n \\begin{equation*}\n \\left| (M^k)_{v, u} - Q_{v, u} \\right| \n \\le \\sum_{i = 2}^{n-1} |\\lambda_i|^k |w_{i, u} w_{i, v}| \\cdot \\sqrt{\\frac{d(u)}{d(v)}}\n \\le (\\lambda_2)^k \\sqrt{\\frac{d(u)}{d(v)}}.\n \\end{equation*}\n Finally, a straightforward calculation shows that \n $Q_{v, u} = \\frac{d(u)}{2m}\\left(1 + (-1)^{k + \\mathbbm{1}(v \\in X) + \\mathbbm{1}(u \\in X)}\\right)$ for all $v,u \\in [n]$, as desired.\n \\end{proof}\n \n \\begin{definition} \\label{def:conductance}\n For a graph $G$ with $m$ edges, let $\\pi(v)= \\frac{d(v)}{2m}$, and for any $S \\subseteq V(G)$, let $\\pi(S) \\coloneqq \\sum_{s\\in S}{\\pi(s)}$; observe that $\\pi(S) \\le 1$ for every $S \\subseteq V(G)$. Define the \\emph{conductance} of a set $S$, denoted $\\Phi(S)$, by\n \\begin{equation*}\n \\Phi(S) \\coloneqq \\frac{e(S,S^c)}{2m \\cdot \\pi(S)\\pi(S^c)}, \n \\end{equation*}\n and let the \\emph{conductance} of a graph $G$, denoted $\\Phi_G$, be defined by\n \\begin{equation*}\n \\qquad \\Phi_G \\coloneqq \\min_{\\emptyset \\neq S \\subsetneq V(G)} \\Phi(S).\n \\end{equation*}\n \\end{definition}\n \n \\begin{theorem}[Theorem 5.3 in \\cite{lovasz1993random}] \\label{thm:upperboundoneigenvalue} \n Let $G$ be a graph and let $\\lambda_2 = \\lambda_2(N(G))$. Then $\\lambda_2 \\le 1 - \\frac{\\Phi_G^2}{8}$.\n \\end{theorem}\n \n In light of \\Cref{thm:upperboundoneigenvalue} and \\Cref{lem:mixing}, it will be useful to have a lower bound on $\\Phi_G$ for a $(d, \\eta, \\varepsilon)$-expander $G$. This is easy to achieve, as can be seen in the following lemma.\n \n \\begin{lemma} \\label{lem:large-phi} \n Let $d\\geq 1$, $\\eta\\in (0,1)$, $\\varepsilon\\in (0,\\frac{1}{2}]$.\n Let $G$ be a $(d, \\eta, \\varepsilon)$-expander on $n$ vertices.
Then $\\Phi_G \\geq \\frac{\\eta}{3}$.\n \\end{lemma}\n \n \\begin{proof}\n Let $S\\subseteq V(G)$ be non-empty with $S \\neq V(G)$. Since $\\Phi(S)=\\Phi(S^c)$, we may assume that $|S|\\leq \\frac{n}{2}\\leq (1-\\varepsilon) n$. Observation~\\ref{obs:edge-expansion-new} thus implies $e(S,S^c)\\geq \\frac{1}{2}\\eta d|S|$.\n Let $\\gamma$ be such that $e(S,S^c)=\\gamma d|S|$, so $\\gamma\\geq \\frac{\\eta}{2}$. By the definition of a $(d,\\eta,\\varepsilon)$-expander, $G$ is also $d$-minimal, so $e(G[S])\\leq \\frac{1}{2}d|S|$.\n Therefore, $\\sum_{v\\in S} d(v)=2e(G[S])+e(S,S^c)\\leq d|S|+\\gamma d|S|$. \n Also, observe that $\\pi(S^c)\\leq 1$. Hence\n \\begin{align*}\n \\Phi(S)& =\\frac{e(S,S^c)}{2e(G)\\pi(S)\\pi(S^c)} \n \\geq \\frac{e(S,S^c)}{\\sum_{v\\in S} d(v)}\n \\geq \\frac{\\gamma d|S| }{d |S|+\\gamma d|S|}\\geq \\frac{\\gamma}{1+\\gamma} \\geq \\frac{\\eta}{\\eta+2}\\geq \\frac{\\eta}{3}.\n \\end{align*}\n Since $S$ was arbitrary, it follows that $\\Phi_G \\ge \\frac{\\eta}{3}$.\n \\end{proof}\n \n Recall that given a graph $G$ and two vertices $x,y$, the quantity $\\hom_{x,y}(P_k)$ denotes the number of walks of length $k$ from $x$ to $y$. The following lemma and its immediate corollary will allow us to compare the values of $\\hom_{x, y}(P_k)$ for different pairs of vertices $(x, y)$ in $G$. \n \n \\begin{lemma} \\label{lem:mixing} \n Let $d\\geq 1$, $\\eta\\in (0,1)$, $\\varepsilon\\in (0,\\frac{1}{2}]$.\n Let $G$ be a bipartite $(d, \\eta, \\varepsilon)$-expander on $n$ vertices with bipartition $\\{X, Y\\}$. The following holds for any two vertices $x,y\\in V(G)$ and every integer $k\\geq 1$.\n \\begin{equation*}\n \\left|\\frac{\\hom_{x,y}(P_k)}{\\sum_{z\\in V(G)}{\\hom_{x,z}(P_k)}} - \\frac{d(y)}{2e(G)}\\left(1 + (-1)^{k + \\mathbbm{1}(x \\in X) + \\mathbbm{1}(y \\in X)}\\right)\\right| \n \\leq \\sqrt{\\frac{d(y)}{d(x)}}\\left(1-\\frac{\\eta^2}{72}\\right)^k.\n \\end{equation*}\n \\end{lemma}\n \n \\begin{proof}\n Let $M=M(G)$, $N = N(G)$ and let $\\lambda_2$ be the second largest eigenvalue of $N$. Notice that $\\frac{\\hom_{x,y}(P_k)}{\\sum_{z\\in V(G)}{\\hom_{x,z}(P_k)}}$ is the probability that a random walk starting at $x$ reaches $y$ in $k$ steps. This, along with our discussion before Definition~\\ref{def19}, yields $(M^k)_{x,y}=\\frac{\\hom_{x,y}(P_k)}{\\sum_{z\\in V(G)}{\\hom_{x,z}(P_k)}}$.\n By Lemma~\\ref{lem:markovchain}, \n \\begin{equation}\n \\label{eqlem18}\n \\left|\\frac{\\hom_{x,y}(P_k)}{\\sum_{z\\in V(G)}{\\hom_{x,z}(P_k)}}-\\frac{d(y)}{2e(G)}\\left(1 + (-1)^{k + \\mathbbm{1}(x \\in X) + \\mathbbm{1}(y \\in X)}\\right)\\right| \n \\leq \\sqrt{\\frac{d(y)}{d(x)}} \\cdot (\\lambda_2)^k.\n \\end{equation}\n \\Cref{thm:upperboundoneigenvalue} gives that $\\lambda_2 \\le 1 - \\frac{\\Phi_G^2}{8}$, and by \\Cref{lem:large-phi} we have $\\Phi_G \\ge \\frac{\\eta}{3}$. It follows that $\\lambda_2 \\leq 1-\\frac{\\eta^2}{72}$. \n Combining this inequality with \\eqref{eqlem18}, the lemma follows.\n \\end{proof}\n \n \\begin{corollary} \\label{cor:mixing} \n Let $d\\geq 1$, $\\eta\\in (0,1)$, $\\varepsilon\\in (0,\\frac{1}{2}]$. Let $G$ be a bipartite $(d, \\eta, \\varepsilon)$-expander on $n$ vertices with a bipartition $\\{X, Y\\}$.
The following holds for any two vertices $x,y\\in X$ and every even integer $k\\geq 2$.\n \\begin{equation*}\n \\left|\\frac{\\hom_{x,y}(P_k)}{\\sum_{z\\in V(G)}{\\hom_{x,z}(P_k)}} - \\frac{d(y)}{e(G)}\\right| \n \\leq \\sqrt{n}\\left(1-\\frac{\\eta^2}{72}\\right)^k.\n \\end{equation*}\n \\end{corollary}\n \n The next lemma contains the main takeaway from our discussion about random walks in almost regular bipartite expanders. It tells us that for relatively large $k$, the values of $\\hom_{x, y}(C_{2k})$ do not differ by much over the range of pairs $(x, y)$ where $x$ and $y$ are in the largest part of the bipartition.\n \n \\begin{lemma} \\label{lem:rhomin-rhomax}\n Let $n, d, \\mu, k \\ge 1$ and $\\eta \\in (0, 1)$, $\\varepsilon\\in (0,\\frac{1}{2}]$.\n Suppose that $k$ is an even integer satisfying $k \\ge \\frac{2^9 \\log n}{\\eta^2}$.\n Let $G$ be a bipartite $(d, \\eta, \\varepsilon)$-expander with maximum degree at most $\\mu d$; denote the bipartition of $G$ by $\\{X, Y\\}$ and suppose that $|X| \\ge \\frac{n}{2}$. \n Let \n \\begin{align*}\n \\rho_{\\min} \n & = \\min \\{ \\hom_{x, y}(C_{2k}) \\,:\\, x, y \\in X \\} \\\\\n \\rho_{\\max}\n & = \\max \\{ \\hom_{x, y}(C_{2k}) \\,:\\, x, y \\in X \\}.\n \\end{align*}\n Then $\\rho_{\\max} \\le 2^{12} \\mu^4 \\cdot \\rho_{\\min}$.\n \\end{lemma}\n \n \\begin{proof}\n By \\Cref{cor:mixing}, for every $x, y \\in X$ we have \n \\begin{align*}\n \\left| \\frac{\\hom_{x, y}(P_k)}{\\sum_{z \\in X}\\hom_{x, z}(P_k)} - \\frac{d(y)}{e(G)} \\right| \n & \\le \\sqrt{n}\\left(1 - \\frac{\\eta^2}{72}\\right)^k \\\\\n & \\le \\sqrt{n} \\cdot \\exp\\left(-\\frac{k\\eta^2}{72}\\right)\n \\le \\sqrt{n} \\cdot \\exp(-4 \\log n) \n \\le \\frac{1}{n^3}\n \\le \\frac{d(y)}{2e(G)}.\n \\end{align*}\n Here we used that $\\sum_{z \\in V(G)}\\hom_{x, z}(P_k) = \\sum_{z \\in X}\\hom_{x, z}(P_k)$: since $k$ is even and $x \\in X$, every walk of length $k$ starting at $x$ ends in $X$.\n It follows that\n \\begin{equation*}\n \\frac{d(y)}{2e(G)}\n \\le \\frac{\\hom_{x, y}(P_k)}{\\sum_{z \\in X}\\hom_{x, z}(P_k)}\n \\le \\frac{2d(y)}{e(G)}.\n \\end{equation*}\n Hence, every $x, y, z \\in X$ satisfy $\\frac{\\hom_{x, y}(P_k)}{\\hom_{x, z}(P_k)} \\le \\frac{4d(y)}{d(z)} \\leq 8 \\mu$, since the minimum degree of $G$ is at least $\\frac{d}{2}$ (by Observation~\\ref{obs:d-expand}) and the maximum degree is at most $\\mu d$. \n \n Observe that $\\hom_{x, y}(P_k) = \\hom_{y, x}(P_k)$ for $x, y \\in X$. It follows that, for every $x, y, z, w \\in X$,\n \\begin{equation*}\n \\frac{\\hom_{x, y}(P_k)}{\\hom_{z, w}(P_k)}\n = \\frac{\\hom_{x, y}(P_k)}{\\hom_{x, z}(P_k)} \\cdot \\frac{\\hom_{z, x}(P_k)}{\\hom_{z, w}(P_k)} \n \\le 64 \\mu^2.\n \\end{equation*}\n Let $x, y, z, w \\in X$ satisfy $\\rho_{\\max} = \\hom_{x, y}(C_{2k})$ and $\\rho_{\\min} = \\hom_{z, w}(C_{2k})$.\n Then \n \\begin{equation*}\n \\frac{\\rho_{\\max}}{\\rho_{\\min}}\n = \\frac{\\hom_{x, y}(C_{2k})}{\\hom_{z, w}(C_{2k})}\n = \\left(\\frac{\\hom_{x, y}(P_k)}{\\hom_{z, w}(P_k)}\\right)^2\n \\le 2^{12} \\mu^4,\n \\end{equation*}\n proving the lemma.\n \\end{proof}\n \n Recall that, by \\Cref{lem:theta}, pairs of vertices $(x, y)$ for which $\\hom^*_{x, y}(C_{2k})$ is considerably smaller than $\\hom_{x, y}(C_{2k})$ can be used to build many colour-disjoint rainbow paths.
\n The following lemma shows that for large enough $k$ and $d$, almost all pairs of vertices in one of the parts of an almost regular bipartite expander satisfy this property.\n\n \\begin{lemma} \\label{lem:few-bad-pairs}\n Let $n, d, \\mu, k, s, p\\ge 1$ and $\\eta \\in (0, 1)$, $\\varepsilon\\in (0,\\frac{1}{2}]$.\n Suppose that $k$ is even and satisfies $k \\ge \\frac{2^9 \\log n}{\\eta^2}$ and that $d \\ge 2^{40} k^3 \\mu^9 s^4 p^2 n^{1\/k}$.\n Let $G$ be a bipartite $(d, \\eta, \\varepsilon)$-expander on $n$ vertices with maximum degree at most $\\mu d$; denote the bipartition of $G$ by $\\{X, Y\\}$ and suppose that $|X| \\ge \\frac{n}{2}$.\n Then for all but at most $\\frac{n^2}{p}$ pairs $(x, y)$ with $x,y \\in X$ the following holds. \n \\begin{equation*}\n \\hom^*_{x, y}(C_{2k}) \\le \\frac{1}{s^2} \\hom_{x, y}(C_{2k}).\n \\end{equation*}\n \\end{lemma}\n \n \\begin{proof}\n Let $S = 2^{13} \\mu^4 s^2 p$, and let $\\rho_{\\min}$ and $\\rho_{\\max}$ be defined as in the statement of \\Cref{lem:rhomin-rhomax}. Then, by the same lemma, we have $\\rho_{\\max} \\le 2^{12} \\mu^4 \\rho_{\\min}$.\n Let $A$ be the collection of (ordered) pairs $(x, y)$ with $x, y \\in X$ that satisfy $\\hom^*_{x, y}(C_{2k}) \\ge \\frac{1}{s^2}\\hom_{x, y}(C_{2k})$.\n Then\n \\begin{align*}\n \\hom^*(C_{2k}) \n \\ge \\sum_{(x, y) \\in A} \\hom^*_{x, y}(C_{2k}) \n \\ge \\frac{1}{s^2} \\sum_{(x, y) \\in A} \\hom_{x, y}(C_{2k})\n \\ge \\frac{|A| \\cdot \\rho_{\\min}}{s^2}.\n \\end{align*}\n \n Note that $\\sum_{x, y \\in X} \\hom_{x,y}(C_{2k}) = \\sum_{x,y \\in Y} \\hom_{x,y}(C_{2k})$, and so $\\hom(C_{2k}) = 2\\sum_{x, y \\in X} \\hom_{x,y}(C_{2k})$.\n Hence, by \\Cref{lem:non-rainbow-hom} (which applies since $d \\ge 2^{14} k^3 S^2 \\mu n^{1\/k}$),\n \\begin{equation*}\n \\hom^*(C_{2k}) \n \\le \\frac{1}{S} \\cdot \\hom(C_{2k})\n = \\frac{2}{S} \\sum_{x, y \\in X} \\hom_{x, y}(C_{2k}) \n \\le \\frac{2n^2 \\cdot \\rho_{\\max}}{S}.\n \\end{equation*}\n Combining the lower and upper bounds on $\\hom^*(C_{2k})$, we obtain the required inequality, as follows.\n \\begin{equation*}\n |A|\n \\le \\frac{2n^2 \\cdot s^2 \\cdot \\rho_{\\max}}{S \\cdot \\rho_{\\min}}\n \\le \\frac{n^2 \\cdot 2^{13} \\mu^4 s^2}{S} \n = \\frac{n^2}{p}.\n \\qedhere\n \\end{equation*}\n \\end{proof}\n \n\n\\section{Rainbow paths and subdivisions in expanders} \\label{sec:main-proof} \n\n We now prove our first main result about rainbow subdivisions.\n \n \\begin{proof}[Proof of \\Cref{thm:main}.] \n Let $G$ be a properly edge-coloured graph on $n$ vertices with at least $n(\\log n)^{60}$ edges, where $n$ is sufficiently large. Applying \\Cref{lem:bounded-max-deg-expander} to a suitable subgraph $G'$ of $G$ with $d(G') \\ge d(G) \\ge 2(\\log n)^{60}$, we obtain a bipartite subgraph $H$ of $G'$ satisfying the following properties.\n \\begin{enumerate}\n \\item \n $H$ is a $(d, \\eta, \\varepsilon)$-expander on $n'$ vertices, where $d \\ge \\frac{d(G')}{2500(\\log n)^2} \\ge (\\log n)^{56}$, $\\varepsilon=\\frac{1}{2}$ and $\\eta = \\frac{\\varepsilon}{100(\\log n')^2} = \\frac{1}{200(\\log n')^2}$,\n \\item\n $H$ has maximum degree at most $\\mu d$, where $\\mu = 2500(\\log n')^2$.\n \\end{enumerate}\n \n Denote the bipartition of $H$ by $\\{X, Y\\}$, and suppose that $|X| \\ge \\frac{n'}{2}$. \n Let $k$ be the smallest even integer which is at least $\\frac{2^9 \\log n'}{\\eta^2}$, so $k = \\Theta\\!\\left((\\log n')^{5}\\right)$. \n Let $s=2\\binom{m}{2} k$ and $p = 8m$.
\n Let $A$ be the set of pairs $(x, y)$ with $x, y \\in X$ that satisfy $\\hom^*_{x, y}(C_{2k}) > \\frac{1}{s^2}\\hom_{x, y}(C_{2k})$.\n Note that $2^{40} k^3 \\mu^9 s^4 p^2 (n')^{1\/k} = \\Theta\\!\\left((\\log n')^{53}\\right) < d$ and so, by \\Cref{lem:few-bad-pairs}, we have $|A| \\le \\frac{(n')^2}{p} =\\frac{(n')^2}{8m}< \\frac{1}{m-1}\\binom{|X|}{2}$, noting that $n'$ is sufficiently large.\n \n Hence, by Tur\\'an's theorem, there is a subset $Z \\subseteq X$ of size $m$, such that $(x, y) \\notin A$ for every distinct $x, y \\in Z$. \n By \\Cref{lem:theta}, for any two vertices $x$ and $y$ in $Z$, there exist $s$ many pairwise colour-disjoint and internally vertex-disjoint rainbow paths of length $k$ from $x$ to $y$. By the choice of $s$, one can greedily find $\\binom{m}{2}$ many paths of length $k$, which are pairwise colour-disjoint and internally vertex-disjoint and are internally vertex-disjoint from $Z$, each connecting a different pair of vertices in $Z$. This gives us the desired subdivision of $K_m$.\n \\end{proof}\n\n Given a set $Z$ of $m$ vertices in a graph $G$, a \\emph{$K_m$-subdivision rooted at $Z$} is a subgraph consisting of $\\binom{m}{2}$ paths, each joining a different pair of distinct vertices in $Z$, whose interiors are pairwise vertex-disjoint and disjoint from $Z$. By slightly adapting the proof of \\Cref{thm:main}, one can show that any bipartite $(d, \\eta, \\varepsilon)$-expander $G$ (with suitable parameters) contains a rainbow $K_m$-subdivision rooted at $Z$ for almost all the $m$-sets $Z$ in $V(G)$ (not just in $X$). \n By using some additional tools, we next show that in a bipartite $(d,\\eta,\\varepsilon)$-expander with suitable parameters, in fact, one can find a rainbow $K_m$-subdivision rooted at $Z$ for every $m$-set $Z$ in $V(G)$.\n\n \\begin{theorem} \\label{thm:main-rooted} \n Let $n, L, d, \\mu, m \\geq 2$, $\\eta\\in (0,1)$ and $\\varepsilon\\in (0,\\frac{1}{8}]$, and suppose that $L = \\frac{2^{10} \\log n}{\\eta^2}$ and $d \\ge \\frac{2^{135} m^8 \\mu^{9} (\\log n)^7}{\\eta^{14}}$. Let $G$ be a bipartite $(d, \\eta, \\varepsilon)$-expander with maximum degree at most $\\mu d$, and let $Z$ be a set of $m$ vertices in $G$. Then there is a rainbow $K_m$-subdivision, rooted at $Z$, where every edge is subdivided at most $L$ times.\n \\end{theorem}\n \n \\begin{proof}\n Let $\\ell = \\frac{4\\log n}{\\eta}$ and let $k$ be the smallest even integer satisfying $k \\ge \\frac{2^9 \\log n}{\\eta^2}$. One can check that $L \\ge 2(\\ell + 1) + k$.\n \n \\begin{claim} \\label{claim:rainbow-connect}\n Let $M$ be any set of colours and vertices such that $|M|\\leq 2\\binom{m}{2}(L+1)$. Let $x,y$ be any two vertices in $G$. There exists a rainbow $x,y$-path of length at most $L$ in $G$ that avoids $M$. \n \\end{claim}\n \\begin{proof}[Proof of Claim~\\ref{claim:rainbow-connect}]\n Let $q = 256\\ell$. \n By \\Cref{lem:reachable-robust-new} (using $d \\ge \\frac{20q\\ell + 8[2\\binom{m}{2}(L+1)]}{\\eta}$), there exists a subset $U_{x} \\subseteq V(G)$ of size at least $(1-\\varepsilon)n$ and a collection of paths $\\mathcal P = \\{P(u) : u \\in U_x \\}$, where for each $u \\in U_x$ the path $P(u)$ is a rainbow path from $x$ to $u$ of length at most $\\ell+1$ that avoids $M$, and no colour appears in more than $\\frac{n}{q}$ of the paths in $\\mathcal P$.
Similarly, there exists a subset $U_{y} \\subseteq V(G)$ of size at least $(1-\\varepsilon)n$ and a collection of paths $\\mathcal Q = \\{Q(u) : u \\in U_y\\}$, where for each $u \\in U_y$ the path $Q(u)$ is a rainbow path from $y$ to $u$ of length at most $\\ell+1$ that avoids $M$, and no colour appears in more than $\\frac{n}{q}$ of the paths in $\\mathcal Q$. Write $U = U_x \\cap U_y$; then $|U| \\ge (1 - 2\\varepsilon)n \\ge \\frac{3n}{4}$.\n \n We call an ordered pair $(u,v)$ with $u, v\\in U$ \\emph{colour-bad} if there is a colour that appears on both paths $P(u)$ and $Q(v)$. We next show that the number of colour-bad pairs in $U$ is small compared to the number of all pairs. Indeed, let $H$ be the auxiliary graph on the vertex set $U$ where $uv$ is an edge whenever at least one of $(u, v)$ and $(v, u)$ is colour-bad.\n Note that $d_H(u)\\leq \\frac{2(\\ell+1)n}{q}$ for every $u \\in U$, since $P(u)$ has length at most $\\ell+1$ and any colour on $P(u)$ can appear in the collection $\\mathcal{Q}$ at most $\\frac{n}{q}$ times (and similarly with the roles of $P$ and $Q$ reversed). Thus, $e(H) \\leq \\frac{(\\ell+1)n}{q}|U| \\le \\frac{2\\ell n}{q}|U| \\leq \\frac{n^2}{128}$.\n \n Let $s = |M| + 2(\\ell + 1) + 1$; then $s \\le 2\\binom{m}{2}(L+1) + 2\\ell + 3 \\le 2 m^2 L$.\n Denote the bipartition of $G$ by $\\{X, Y\\}$ and suppose that $|X| \\ge \\frac{n}{2}$.\n Call a pair $(u,v)$, with $u, v \\in X$, \\emph{$s$-bad} if $\\hom^*_{u, v}(C_{2k}) > \\frac{1}{s^2} \\hom_{u, v}(C_{2k})$. \n By \\Cref{lem:few-bad-pairs}, applied with $p = 64$ (one can verify the condition on $d$, namely that $d \\ge 2^{40} k^3 \\mu^9 s^4 p^2 n^{1\/k}$), at most $\\frac{n^2}{64}$ ordered pairs $(u, v)$, with $u, v \\in X$, are $s$-bad. \n \n We claim that there is a pair $(u, v)$, with $u, v \\in U \\cap X$, which is neither colour-bad nor $s$-bad. Indeed, the total number of ordered pairs $(u, v)$ with $u, v \\in U \\cap X$ where $(u, v)$ is either colour-bad or $s$-bad is at most $2e(H)+\\frac{n^2}{64}\\leq \\frac{n^2}{64}+\\frac{n^2}{64} = \\frac{n^2}{32}$. Since $|U| \\ge \\frac{3n}{4}$ and $|X| \\ge \\frac{n}{2}$, we have $|X \\cap U| \\ge \\frac{n}{4}$, so the number of ordered pairs $(u, v)$ with $u, v \\in U \\cap X$ is certainly more than $\\frac{n^2}{32}$. Hence, there is a pair $(u, v)$ which is neither colour-bad nor $s$-bad, as claimed. By Lemma~\\ref{lem:theta} there are $s$ many pairwise colour-disjoint and internally vertex-disjoint rainbow paths of length $k$ from $u$ to $v$. By the choice of $s$, there is at least one such path $T(uv)$ which shares no colours with the paths $P(u)$ and $Q(v)$ and also avoids $M$. Since $P(u)$ and $Q(v)$ are also colour-disjoint, it follows that $P(u)T(uv)Q(v)$ is a rainbow $x,y$-walk avoiding $M$ which contains a rainbow $x,y$-path, as desired.\n \\end{proof}\n \n Let $(x_1, y_1), \\ldots, (x_{\\binom{m}{2}}, y_{\\binom{m}{2}})$ be an arbitrary ordering of the unordered pairs $(x, y)$ where $x, y \\in Z$ and $x \\neq y$.\n We iteratively build paths $P_i$ for $i \\in [\\binom{m}{2}]$, as follows. Let $P_1$ be any rainbow $x_1,y_1$-path of length at most $L$, which exists by Claim~\\ref{claim:rainbow-connect}. In general, suppose $P_1,\\dots, P_i$ have been defined, where $i<\\binom{m}{2}$.
We let $M_i$ denote the set of vertices and colours used in $\\cup_{j=1}^i P_j$ and let $P_{i+1}$ be a rainbow $x_{i+1},y_{i+1}$-path of length at most $L$ that avoids $M_i\\setminus\\{x_{i+1},y_{i+1}\\}$.\n Since $|M_i|\\leq 2\\binom{m}{2}(L+1)$, by Claim~\\ref{claim:rainbow-connect} such a path $P_{i+1}$ exists. Hence, we are able to find $P_1,\\dots, P_{\\binom{m}{2}}$\n as described above. Now, $\\bigcup_{i=1}^{\\binom{m}{2}} P_i$ forms a rainbow $K_m$-subdivision rooted at $Z$ in which each edge is subdivided\n at most $L$ times.\n \\end{proof}\n\n\\section{Blow-ups} \\label{sec:blowups}\n\n In this section we use the ideas employed in the proof of our main result about rainbow subdivisions to prove \\Cref{thm:main-blowups} about blow-ups of subdivisions.\n\n We will represent a copy of $K_{r, r}$ in a graph $G$ by a pair of $r$-sets $(A, B)$ such that all $A$-$B$ edges are present in $G$. We shall use the following `balanced supersaturation' result, due to Morris and Saxton \\cite{morris2016number}.\n\n \\begin{theorem}[Special case of Theorem 7.4 in \\cite{morris2016number}] \\label{thm:morris-saxton}\n For every $r \\ge 2$ there exist $c_1, c_2 > 0$ such that the following holds for every integer $n \\ge 1$ and sufficiently large $\\alpha$.\n Suppose that $G$ is a graph on $n$ vertices with at least $\\alpha n^{2 - 1\/r}$ edges. Then there is a collection $\\mathcal{H}$ of copies of $K_{r, r}$ in $G$ satisfying\n \\begin{enumerate}\n \\item \n $|\\mathcal{H}| \\ge c_1 \\alpha ^{r^2} n^r$,\n \\item\n For every $r$-set $A$ and vertex $u \\notin A$, there are at most $c_2 \\alpha^{r(r-1)}$ sets $B$ of size $r-1$ such that $(A, B \\cup \\{u\\}) \\in \\mathcal{H}$.\n \\end{enumerate} \n \\end{theorem}\n \n \n \\begin{proof}[Proof of \\Cref{thm:main-blowups}]\n \n Let $G$ be a graph on $n$ vertices with at least $n^{2 - \\frac{1}{r}} (\\log n)^{\\frac{60}{r}}$ edges, and suppose that $n$ is large. We will show that $G$ contains an $r$-blow-up of a subdivision of $K_m$.\n Let $\\alpha = (\\log n)^{\\frac{60}{r}}$, and let $c_1, c_2$ be as in \\Cref{thm:morris-saxton}. Observe that since $n$ is large, $\\alpha$ is large enough for \\Cref{thm:morris-saxton} to be applicable. \n \n Consider the auxiliary graph $\\mathcal{H}$ whose vertices are $r$-sets of vertices in $G$, and where $AB$ is an edge if and only if $(A, B)$ is a copy of $K_{r, r}$ in $G$. By \\Cref{thm:morris-saxton}, there is a subgraph $\\mathcal{H}'$ of $\\mathcal{H}$ with $d(\\mathcal{H}') \\ge c_1 \\alpha^{r^2} = c_1 (\\log n)^{60 r}$, such that for every $r$-set $A$ and every vertex $u \\notin A$ in $G$, there are at most $c_2 \\alpha^{r(r-1)} = c_2 (\\log n)^{60(r-1)}$ neighbours of $A$ in $\\mathcal{H}'$ that contain the vertex $u$. \n \n Write $t = rc_2 (\\log n)^{60(r-1)}$.\n Consider a bipartite subgraph $\\mathcal{H}''$ of $\\mathcal{H}'$ with average degree at least $\\frac{c_1}{2}(\\log n)^{60r}$, and note that $|V(\\mathcal{H}')| \\le n^r$.\n By \\Cref{lem:bounded-max-deg-expander} (using that $\\frac{c_1}{2} (\\log n)^{60r} \\ge 10^7 (\\log n)^3$), there is a bipartite subgraph $\\mathcal{F}$ of $\\mathcal{H}''$ satisfying the following properties.\n \n \\begin{enumerate}\n \\item \n $\\mathcal{F}$ is a $(d, \\eta, \\frac{1}{2})$-expander on $N$ vertices, where $d \\ge \\frac{c_1 (\\log n)^{60 r} }{5000(r\\log n)^2} \\ge t (\\log n)^{55}$ (using that $n$ is large) and $\\eta = \\frac{1}{200(\\log N)^2}$, \n \\item\n $\\mathcal{F}$ has maximum degree at most $\\mu d$, where $\\mu = 2500 (\\log N)^2$. 
\n \\end{enumerate}\n \n Let $k$ be the smallest even integer such that $k\\geq \\frac{2^9 \\log N}{\\eta^2}$. Then $k = \\Theta\\!\\left((\\log N)^{5}\\right)$.\n \n For each $u\\in V(\\mathcal{F})$, let $R_u$ denote the $r$-set in $G$ that $u$ corresponds to. Let $\\sim$ be a relation defined on $V(\\mathcal{F})$, where $u \\sim v$ whenever $R_u\\cap R_v\\neq\\emptyset$.\n For vertices $x, y$ in $\\mathcal{F}$, let $\\Hom^{**}_{x, y}(C_{2k})$ be the family of all the closed walks $(x_1, \\ldots, x_{2k})$ in $\\mathcal{F}$ where $x_1 = x$, $x_{k+1} = y$, and $x_i \\sim x_j$ for some distinct $i, j \\in [2k]$. \n Let $\\Hom^{**}(C_{2k}) = \\cup_{x, y \\in V(\\mathcal{F})}\\Hom^{**}_{x,y}(C_{2k})$. Let $\\hom^{**}(C_{2k}) = |\\Hom^{**}(C_{2k})|$ and let $\\hom^{**}_{x,y}(C_{2k}) = |\\Hom^{**}_{x,y}(C_{2k})|$. \n\n Let $S = \\left(\\frac{d}{2^{10} k^3t \\mu N^{1\/k}}\\right)^{1\/2}$. Since $\\frac{d}{t} \\ge (\\log n)^{55}$, $N^{1\/k} \\le N^{\\frac{1}{\\log N}} = 2$ and $\\log N = O(\\log n)$, we have \n \\begin{equation*}\n S = \\Omega\\left( (\\log N)^{19}\\right).\n \\end{equation*}\n \\begin{claim}\n \\label{homstarrtohom}\n $\\hom^{**}(C_{2k}) \\le \\frac{1}{S}\\hom(C_{2k})$.\n \\end{claim}\n \n \\begin{proof}[Proof of Claim~\\ref{homstarrtohom}]\n Since $\\mathcal{F}$ is a $(d,\\eta,\\frac{1}{2})$-expander, $\\delta(\\mathcal{F})\\geq \\frac{d}{2}$.\n As in the proof of \\Cref{lem:non-rainbow-hom}, we have $\\hom(C_{2k}) \\ge \\left(\\frac{d}{2}\\right)^{2k}$. \n Since $\\mathcal{F}\\subseteq \\mathcal{H}'$, by our assumption on $\\mathcal{H}'$, for every two vertices $u$ and $v$ in $\\mathcal{F}$ there are at most $rc_2 (\\log n)^{60(r-1)} = t$ neighbours $w$ of $v$ for which $u \\sim w$. By \\Cref{lem:janzer-vertices}, we get \n \\begin{align*}\n \\hom^{**}(C_{2k}) \n & \\le 32 k^{3\/2} t^{1\/2} (\\mu d)^{1\/2} N^{\\frac{1}{2k}} \\cdot \\hom(C_{2k})^{1 - \\frac{1}{2k}} \\\\\n & \\le \\left(\\frac{2^{10} k^3 t \\mu N^{\\frac{1}{k}}} {d} \\right)^{1\/2} \\cdot \\hom(C_{2k}) \\\\\n &= \\frac{\\hom(C_{2k})}{S},\n \\end{align*} \n as desired.\n \\end{proof}\n \n Let $\\rho_{\\min}$ and $\\rho_{\\max}$ be defined as in \\Cref{lem:rhomin-rhomax} with respect to the graph $\\mathcal{F}$. Then, by the same lemma, $\\frac{\\rho_{\\max}}{\\rho_{\\min}} \\le 2^{12} \\mu^4 = O\\!\\left( (\\log N)^8\\right)$.\n Let $s = \\binom{m}{2} r k$; then $s = \\Theta\\!\\left((\\log N)^{5}\\right)$.\n \n Denote the bipartition of $\\mathcal{F}$ by $\\{X, Y\\}$, and suppose that $|X| \\ge \\frac{N}{2}$.
Let \n \\[A=\\left \\{(x,y) : \\, x,y\\in X \\text{ and } \\hom^{**}_{x, y}(C_{2k}) \\ge \\frac{1}{s^2}\\hom_{x, y}(C_{2k}) \\right\\}.\\]\n \n Then \n \\begin{equation*}\n \\hom^{**}(C_{2k}) \n \\ge \\sum_{(x, y) \\in A} \\hom^{**}_{x, y}(C_{2k})\n \\ge \\frac{1}{s^2} \\sum_{(x, y) \\in A} \\hom_{x, y}(C_{2k}) \\ge \\frac{|A| \\rho_{\\min}}{s^2}.\n \\end{equation*}\n By Claim~\\ref{homstarrtohom},\n \\begin{equation*}\n \\hom^{**}(C_{2k}) \n \\le \\frac{\\hom(C_{2k})}{S} \n = \\frac{2}{S}\\sum_{x, y \\in X} \\hom_{x, y}(C_{2k}) \n \\le \\frac{2 N^2 \\rho_{\\max}}{S},\n \\end{equation*}\n using $\\sum_{x, y \\in X} \\hom_{x, y}(C_{2k}) = \\sum_{x, y \\in Y}\\hom_{x, y}(C_{2k})$.\n Combining the two bounds, and using $\\frac{\\rho_{\\max}}{\\rho_{\\min}} = O\\!\\left((\\log N)^8\\right)$, $s = O\\!\\left( (\\log N)^{5}\\right)$ and $S = \\Omega\\!\\left( (\\log N)^{19}\\right)$, we get\n \\begin{align*}\n \\frac{|A|}{N^2}\n \\le \\frac{2 s^2 \\cdot \\rho_{\\max}}{S \\cdot \\rho_{\\min}}\n = O \\left( \\frac{(\\log N)^{18}}{(\\log N)^{19}}\\right)\n = O\\left( \\frac{1}{\\log N}\\right).\n \\end{align*}\n As $N$ is large, $m$ is a constant and $|X| \\ge \\frac{N}{2}$, it follows that $|A| \\le \\frac{1}{m-1}\\binom{|X|}{2}$.\n By Tur\\'an's theorem, there is a set $Z \\subseteq X$ of size $m$, such that for every distinct $x, y \\in Z$ we have $(x, y) \\notin A$.\n \n \\begin{claim} \\label{blowup-theta}\n For every pair of distinct vertices $x,y$ in $Z$, there exist $x,y$-paths $P_1,\\dots, P_s$ of length $k$ in $\\mathcal{F}$ such that for any distinct $u,v$ in $V(\\cup_{i=1}^s P_i)\\setminus \\{x,y\\}$ we have $R_u\\cap R_v=\\emptyset$.\n \\end{claim}\n \\begin{proof}[Proof of Claim~\\ref{blowup-theta}] \n By the choice of $Z$ and the definition of $A$, we have $\\hom^{**}_{x, y}(C_{2k}) < \\frac{1}{s^2}\\hom_{x, y}(C_{2k})$.\n By Lemma~\\ref{lem:theta-general}, there exist $P_1, \\ldots, P_s \\in \\Hom_{x, y}(P_k)$ such that $P_i P_j \\notin \\Hom^{**}_{x, y}(C_{2k})$ for every distinct $i, j \\in [s]$. The paths $P_1, \\ldots, P_s$ satisfy the requirements, using the definitions of $\\Hom^{**}_{x, y}(C_{2k})$ and the relation $\\sim$.\n \\end{proof}\n \n Let $L$ be a maximal collection of pairs of distinct vertices $u, v \\in Z$ with the property that there are paths $P_{u,v}$ of length $k$ in $\\mathcal{F}$ such that the $r$-sets in $G$ corresponding to the vertices on $W\\coloneqq \\cup_{(u,v)\\in L} V(P_{u,v})$ are pairwise disjoint. Suppose $|L|<\\binom{m}{2}$. Then there exists a pair of distinct vertices $x,y\\in Z$ such that $(x,y)\\not \\in L$. Also, $|W|<\\binom{m}{2}k$ and hence $|\\cup_{w\\in W} R_w|<\\binom{m}{2}kr=s$. By Claim~\\ref{blowup-theta}, there exists a path $P_{x,y}$ of length $k$ from $x$ to $y$ in $\\mathcal{F}$ such that $\\cup_{v\\in V(P_{x,y})} R_v$ is disjoint from $\\cup_{w\\in W} R_w$. Hence, the pair $(x,y)$ can be added to $L$, contradicting the maximality of $L$. Therefore, $L=\\binom{Z}{2}$, and so $\\bigcup_{(u,v)\\in L} P_{u,v}$ is a $K_m$-subdivision in $\\mathcal{F}$ in which each edge is subdivided at most $k$ times. Since the $r$-sets in $G$ corresponding to the vertices of $W$ are pairwise disjoint, this yields an $r$-blow-up of a subdivision of $K_m$ in $G$.\n \\end{proof}\n\n\\section{Conclusion} \\label{sec:conc} \nIn this paper we showed that there is a constant $c\\leq 60$ such that any $n$-vertex properly edge-coloured graph $G$ \nwith at least $n(\\log n)^c$ edges contains a rainbow subdivision of $K_m$.
\nOn the other hand, an immediate lower bound is given by the best known lower bound from \\cite{keevash2007rainbow} on $\\ex^*(n,\\mathcal{C})$, which is $\\Omega(n\\log{n})$. This shows that our bound is tight up to a polylogarithmic factor.\nWe pose the following question.\n \n \\begin{question} \\label{question:correct-bound}\n Fix $m\\geq 2$. What is the smallest $c$ such that for all sufficiently large $n$ the following holds: if $G$ is a properly edge-coloured graph on $n$ vertices with $\\Omega(n(\\log{n})^c)$ edges, then it contains a rainbow subdivision of $K_m$? \n In particular, is $c=1$?\n \\end{question}\n\nFor the clarity of presentation, we did not optimise our arguments to obtain the best possible value of $c$. However, to answer Question~\\ref{question:correct-bound}, new ideas will be needed.\nNote that even the correct order of magnitude of $\\ex^*(n,\\mathcal{C})$ is still unknown. \n Recall that Janzer~\\cite{janzer2020rainbow} proved $\\ex^*(n,\\mathcal{C})=O(n(\\log n)^4)$, using counting lemmas on closed walks. We can give an alternative proof of $\\ex^{*}(n,\\mathcal{C})=O(n(\\log{n})^5)$ by using the basic expansion property of an expander (Lemma~\\ref{lem:reachable-robust-new}) and a digraph idea used by Letzter~\\cite{letzter2021tight} regarding the Tur\\'an number of the family of tight cycles.\n \n In this paper, we also showed that any $n$-vertex graph with at least $n^{2-\\frac{1}{r}} (\\log n)^{\\frac{60}{r}}$ edges contains an $r$-blow-up of a subdivision of $K_m$. This bound is tight up to a $(\\log n)^{\\frac{60}{r}}$ factor, due to the following proposition, which was mentioned in~\\cite{janzer2020rainbow} as a remark. Since no proof was given there, we include one here for completeness. Let $\\mathcal{C}[r] \\coloneqq \\{C_k[r] : k \\ge 1\\}$.\n \n \\begin{proposition}\\label{prop:lowerboundblowup}\n Let $r \\ge 1$ be a fixed integer. Then $\\ex(n, \\mathcal{C}[r]) = \\Omega\\big(n^{2-1\/r}\\big)$.\n \\end{proposition}\n \\begin{proof}\n For each $i$, let $F_i$ be the $r$-blow-up of a cycle of length $i$, let $v_i = v(F_i)$ and $e_i=e(F_i)$. Then $e_i = r v_i$ since $F_i$ is $2r$-regular. \n Let $\\mathcal{F} \\coloneqq \\{F_i : i \\le n\/r\\}$. Consider the random graph $G \\coloneqq G(n, p)$ with $p = n^{-1\/r}$. \n For each $i$, let $X_i$ be the number of copies of $F_i$ in $G$, and let $X \\coloneqq \\sum_i X_i$. \n Then \n $\\mathbb{E}[X_i] \\le n^{v_i}p^{e_i} = n^{v_i}(n^{-1\/{r}})^{rv_i} = 1.$\n By linearity of expectation, $\\mathbb{E}[X]=\\sum_i \\mathbb{E}[X_i] \\le n$.\n On the other hand, $\\mathbb{E}[e(G)] = \\Omega(n^{2-1\/r})$.\n Thus, $\\mathbb{E}[e(G) - X] = \\Omega(n^{2-1\/r})$.\n \n So there exists a graph $G$ such that $e(G) - X = \\Omega(n^{2-1\/r})$. Now remove an edge from every copy of a member of $\\mathcal{F}$ in $G$, to obtain an $\\mathcal{F}$-free graph with $\\Omega(n^{2-1\/r})$ edges, as desired.\n \\end{proof}\n \n We pose the following question, which strengthens Question 6.2 in \\cite{janzer2020rainbow}.\n \\begin{question} \n Fix $m \\ge 2$. Is it true that if $G$ is an $n$-vertex graph with $\\Omega(n^{2-\\frac{1}{r}})$ edges then it contains an $r$-blow-up of a subdivision of $K_m$?\n \\end{question}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section*{Acknowledgments}\nWe thank Xian Teng and Muheng Yan for their help in data annotation.
We would like to acknowledge the support from NSF \\#2027713, ARC DP180101985, and AFOSR projects 19IOA078, 19RT0797, 20IOA064, and 20IOA065. Any opinions, findings, and conclusions or recommendations expressed in this material do not necessarily reflect the views of the funding sources.\n\n\\section{Introduction}\n\\label{sec:intro}\n\nSeveral recent studies have documented the ideological asymmetries between left-wing and right-wing activism~\\cite{brady2019ideological,schradie2019revolution,freelon2020false,waller2021quantifying}. Some highlight the dominance of conservative voices on social media~\\cite{brady2019ideological}; others portray the widespread symbolic support for progressive social movements~\\cite{jackson2020hashtagactivism}. The term ``conservative advantage'' was coined to describe right-wing users' strategic dissemination of their messages~\\cite{schradie2019revolution}. However, most of the existing research is based on the analysis of a single platform or a single political topic. Relatively little is known about how different ideological groups garner attention across platforms, and whether the group advantage in gaining visibility persists across topics and over time. To answer these questions, this work designs several sets of cross-platform measurements of the collective attention dynamics of two ideological groups across three controversial political topics.\n\nOnline platforms, such as Twitter, YouTube, Reddit, and Facebook, are social-technological artifacts that segregate online attention into silos defined by the underlying software and hardware systems. Video views on YouTube are known to be driven by discussions outside the platform~\\cite{rizoiu2017expecting}, and to be part of users' broader information diet~\\cite{hosseinmardi2021examining}. What is not known, however, is how groups of related content comparatively evolve across different social platforms. Collective attention on political content has been studied on single topics, such as the Occupy Movement~\\cite{thorson2013youtube}, Gun Control\/Rights~\\cite{zhang2019whose}, and Black Lives Matter~\\cite{de2016social,stewart2017drawing}. Yet, cross-cutting studies that compare different movements are rare. With data from three long-running controversial topics, this work seeks to provide measures across YouTube and Twitter and paint a nuanced picture of the temporal patterns of attention from left to right.\n\n\\input{images\/intro_teaser}\n\nWe choose three topics: {\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace, and \\texttt{Black Lives Matter} ({\\tt BLM}\\xspace). We rely on video hyperlinks to connect the content from YouTube to Twitter. A motivating example is given in~\\Cref{fig:intro_teaser}. We plot the time series of daily view count for the collected {\\tt BLM}\\xspace videos from YouTube (top panel) and daily volume of tweets mentioning these {\\tt BLM}\\xspace videos from Twitter (bottom panel). Both time series are further disaggregated by video uploaders' political leanings. Visually, the view count dynamics of both left- and right-leaning videos are relatively stable in 2017, except for a sharp spike caused by the ``Unite the Right rally''\\footnote{\\url{https:\/\/en.wikipedia.org\/wiki\/Unite_the_Right_rally}} event in Charlottesville, USA. In the bottom panel, the tweet count dynamics of right-leaning videos have many spikes, which can be attributed to the upload of new videos by far-right YouTube political commentators.
The measures on YouTube and Twitter present a contrasting story here: if we focus on the two-week period after the rally, left-leaning videos attracted more attention on YouTube (measured by views; left: 27.2M, right: 13.9M) while right-leaning videos had higher exposure on Twitter (measured by tweets; left: 37.5K, right: 52.3K). This example demonstrates the need for cross-platform analysis -- findings on one platform may not generalize to another.\n\nWe design a set of metrics from publicly available data on YouTube and Twitter, which include total views, video watch engagement, tweet reactions, the evolution of attention over time, and early adopter networks among tweets and Twitter users. On YouTube, we find that left-leaning videos accumulate more views, are more engaging, and have higher viral potential than right-leaning videos. In contrast, right-leaning videos have higher numbers of total tweets and retweets on Twitter. Statistics on the unfolding speed of views and tweets show that the attention on left-leaning videos attenuates faster, while that on right-leaning videos persists for longer. Note that these observations do not hold uniformly across topics, e.g., for some metrics, we observe significant differences for {\\tt Abortion}\\xspace and {\\tt Gun Control}\\xspace, but not for {\\tt BLM}\\xspace. These findings expand current wisdom on ideological asymmetries in two ways: first, by exposing the novel facet that left-leaning content attracts more attention in a shorter period of time; second, by demonstrating the need to contrast temporal attention statistics between platforms -- for example, right-leaning tweet cascades tend to start earlier, and YouTube views on right-leaning content are sustained for longer. In sum, our observations paint a richer picture of attention patterns across the political spectrum, provide a basis for further studying political framing and group behavior, and supply fundamental metrics for understanding influences that transcend platforms.\n\nThe main contributions of this work include:\n\n\\begin{itemize}[leftmargin=*]\n \\item a data curation procedure linking content on YouTube and Twitter for longitudinal topic monitoring.\\footnote{Our datasets and analysis code are publicly available at \\url{https:\/\/github.com\/picsolab\/Measuring-Online-Information-Campaigns}}\n \\item several sets of cross-platform metrics that support statistical comparisons for different ideological groups, encompassing the volume and quality of attention, networks of tweets and users, as well as relative temporal evolution.\n \\item adding the temporal and cross-platform dimensions to recent observations on ideological asymmetries.
We find that polarized content engages users in distinct ways -- more views, more engagement, and faster reactions for videos on the left, compared to more tweets and more sustained attention for videos on the right.\n\\end{itemize}\n\n\\section{Measures for Cross-Platform Attention}\n\\label{sec:metric}\n\nThis section designs several sets of metrics for the cross-platform data, in order to compare content across different political ideologies, and examine whether the differences are consistent across topics, across platforms, and over time.\n\n\\subsection{Aggregate Attention on YouTube and Twitter}\n\\label{ssec:aggregate_metrics}\n\nWe present four metrics for the total video attention on YouTube.\n\n\\header{\\em Total view count} sums up a video's view count time series until day 120.\n\n\\header{\\em Relative engagement} is a metric proposed in~\\cite{wu2018beyond} for quantifying average video watching behavior. Specifically, for each video, we first compute {\\em average watch percentage}, defined as the total watch time divided by the total number of views (both at 120 days) and then normalized by the video length (in seconds). The relative engagement score is the percentile ranking of the average watch percentage among videos of similar lengths. It is a normalized score between 0 and 1. A higher score means the video is more engaging, e.g., a score of 0.8 suggests that this video is on average watched for longer than 80\\% of videos of similar length. Note that relative engagement is shown to be stable over time, hence there is no need to examine the temporal variations of watch time, as it would strongly correlate with view counts. In this work, relative engagement is computed based on a publicly available collection of 5.3M YouTube videos~\\cite{wu2018beyond}, with details described in Section D of \\cite{appendix}.\n\n\\header{\\em Fraction of likes} measures the audience reaction, computed from the total counts of likes and dislikes that YouTube collects via the thumbs-up and thumbs-down icons on the video page. A lower fraction of likes indicates a more diverse audience reaction to the video content. Note that the majority of videos receive a lot more likes than dislikes.\\footnote{YouTube announced that the dislike count will no longer be available to the public on Nov 10, 2021. \\url{https:\/\/blog.youtube\/news-and-events\/update-to-youtube\/}}\n\n\\header{\\em Viral potential} is a positive number, representing the {\\em expected} number of views that a YouTube video will obtain if mentioned by an {\\em average} tweet on Twitter~\\cite{rizoiu2017online}. More specifically, it is the area under the impulse response function of an integral equation known as the Hawkes Intensity Process (HIP)~\\cite{rizoiu2017expecting}, which is learned for each video by using the first 120 days of tweeting and viewing history. We choose this quantity rather than simply dividing the number of views by the number of tweets, because the model takes into account views that are yet to unfold due to the video's sustained circulation via sharing and tweeting. A self-contained summary of HIP and the viral potential computation is given in Section E of \\cite{appendix}.\n\nOn Twitter, tweets can be categorized into four types: original tweets, retweets, quotes, and replies. This leads to five counting metrics: the total number of {\\bf\\em tweets, original tweets, retweets, quoted tweets}, and {\\bf\\em replies}.
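To make the engagement metric concrete, the following is a minimal Python sketch of the relative engagement computation described above; the function names and dictionary fields are illustrative assumptions, not the released implementation of~\\cite{wu2018beyond}.
\\begin{verbatim}
import numpy as np

def avg_watch_percentage(watch_time, views, duration):
    # Average watch time per view, normalized by video length (seconds).
    return watch_time \/ (views * duration)

def relative_engagement(video, corpus, n_bins=100):
    # Bin the reference corpus by video length, then rank the target
    # video's average watch percentage within its length bin.
    durations = np.array([v["duration"] for v in corpus])
    edges = np.percentile(durations, np.linspace(0, 100, n_bins + 1))
    target_bin = int(np.digitize(video["duration"], edges))
    peers = np.array([
        avg_watch_percentage(v["watch_time"], v["views"], v["duration"])
        for v in corpus
        if int(np.digitize(v["duration"], edges)) == target_bin
    ])
    target = avg_watch_percentage(video["watch_time"], video["views"],
                                  video["duration"])
    # Percentile rank in [0, 1]: the fraction of similar-length videos
    # watched, on average, for a smaller fraction of their length.
    return float(np.mean(peers <= target))
\\end{verbatim}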
\n\n\\subsection{Views and Tweets over Time}\n\\label{ssec:temporal_metrics}\n\n\\header{\\em Viewing half-life} is computed as the number of days to achieve half of a video's total views at day 120.\n\n\\header{\\em Tweeting half-life} is computed as the number of days to achieve half of a video's total tweets at day 120.\n\n\\header{\\em Tweeting lifetime} is the time gap between the first and the last tweets. We do not measure lifetime on viewing because the view count of a video rarely becomes zero even towards the end of the measurement period, but tweets tend to exhaust much sooner.\n\n\\header{\\em Tweeting inter-arrival time} is the average time difference between every two consecutive tweets about each video.\n\n\\header{\\em Accumulation of views and tweets.} \nIn addition to the summary metrics above, we also compare the attention accumulation on the left- and right-leaning content on a daily basis. On each day $t$, we compute the fraction of the total views that each video has achieved. This leads to two sets of samples $\\{v^{(L)}_i\\}_{i=1}^n$ and $\\{v^{(R)}_j\\}_{j=1}^m$, where $n$ is the number of left-leaning videos and $m$ is the number of right-leaning videos. We then compute the normalized Mann-Whitney U (MWU) statistic~\\cite{mann1947test}, \n$$\\bar U_t = \\frac{1}{nm}\\sum_i\\sum_j \\{ I[v_i^{(L)} > v^{(R)}_j] + 0.5\\,I[v_i^{(L)} = v^{(R)}_j] \\}.$$ \n\nHere $I[\\cdot]$ is the indicator function that takes value 1 when the argument is true, and 0 otherwise. The U statistic intuitively corresponds to the fraction of sample pairs $(v^{(L)}_i, v^{(R)}_j)$ where the sample from the left-leaning distribution is larger, accounting for ties. If the distributions of $v^{(L)}$ and $v^{(R)}$ are indistinguishable, then $\\bar U$ would be around 0.5. We compute the statistic $\\bar U_t$ on tweets in the same fashion, and both statistics are computed for each day. These two series of statistics allow us to quantify the differences between left- and right-leaning content, and compare the trends in the accumulation of views and tweets over time.\n\n\\subsection{Videos' Tweet Cascades}\n\\label{ssec:cascade_metrics}\n\n\\header{\\textit{Cascade size.}}\nWe define that a {\\em cascade} consists of a root tweet and all of its retweets, replies, and quotes. It is well-known that the vast majority of cascades in online diffusion networks are very small and only a very small fraction of cascades become very big~\\cite{goel2012structure}. Based on the number of tweets in a cascade, we divide the cascades into isolated (only a root tweet), small (2-4 tweets), and large ($\\geq 5$ tweets) groups. For videos of each leaning on each topic, we compute the fractions of isolated\/small\/large cascades and the fraction of tweets in each cascade group. These metrics quantify the structure of online diffusion and allow us to compare behavior on controversial political topics with what was known about tweeted videos in general.\n\n\\header{\\textit{Cascade start time}} is the percentage of accumulated views of the video when the root tweet of the cascade is posted. It measures how much view attention is accumulated on YouTube before the diffusion on Twitter starts.
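To make the temporal and cascade metrics concrete, here is a minimal Python sketch of the normalized MWU statistic $\\bar U_t$ and the cascade start time; the input formats (per-video fractions of total views on day $t$, a daily view series, and the root tweet's day index) are illustrative assumptions rather than the exact pipeline code.
\\begin{verbatim}
import numpy as np

def normalized_u(left_fracs, right_fracs):
    # \bar U_t for one day t: the fraction of (left, right) video
    # pairs in which the left-leaning video has accumulated a larger
    # share of its total views, counting ties as one half.
    L = np.asarray(left_fracs)[:, None]
    R = np.asarray(right_fracs)[None, :]
    return float(np.mean((L > R) + 0.5 * (L == R)))

def cascade_start_time(daily_views, root_day):
    # Percentage of the video's total (day-120) views accumulated by
    # the day on which the cascade's root tweet is posted.
    total = float(sum(daily_views))
    return 100.0 * sum(daily_views[: root_day + 1]) \/ total
\\end{verbatim}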
We choose to describe cascade timing relative to the accumulation of views, rather than in the absolute number of days since upload, because (a) such relative timing more directly correlates the number of cascades with the views they can potentially drive (rather than through another variable, days); and (b) the percentage of views provides more granularity, since many videos have all of their views and tweets unfold within a few days after upload.\n\n\\subsection{Networks among Early Adopters on Twitter}\n\\label{ssec:network_metrics}\n\nFor each video, we obtain its follower network among the early adopters. If there exists a following relationship between a pair of users, a directed edge is established. This results in one network for each shared video. We compute a set of metrics per video, and then compare their distributions on each topic for left- and right-leaning videos. We describe two key metrics here, and discuss four additional metrics in Section H of~\\cite{appendix}.\n\n\\header{\\textit{Gini coefficient of indegree centrality.}} \nWe calculate the indegree centrality for each node in the network. To have a video-level metric, we use the Gini coefficient, which ranges from 0 to 1 and measures the distribution inequality. Specifically, the Gini coefficient of indegree centrality quantifies the degree of inequality of the indegree distribution. A higher value indicates that a few early adopters are followed more by other early adopters, and a lower value indicates that the indegree distribution is more equal. \n\n\\header{\\textit{Gini coefficient of closeness centrality}} captures the dispersion in the inverse of the average shortest path length from one early adopter to all other early adopters of a given video. A higher coefficient implies that a few early adopters can reach the rest of the early adopters within a few hops.\n\n\\section{Conclusion and Discussion}\n\\label{sec:conclusion}\n\nThis work presents a quantitative study that links collective attention towards online videos across YouTube and Twitter over three political topics: {\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace, and {\\tt BLM}\\xspace. For each topic, we curated a cross-platform dataset that contained hundreds of videos and hundreds of thousands of tweets spanning 16 months. The extracted videos all have a non-trivial number of views and tweets. The key contributions include several sets of video-centric metrics for comparing attention consumption patterns between left-leaning and right-leaning videos across two platforms. We find that left-leaning videos are more viewed and more engaging, while right-leaning videos are more tweeted and have longer attention spans. We also found that the follower networks of early adopters on left-leaning videos are of higher centrality, whereas tweet cascades for right-leaning videos start earlier in the attention lifecycle. This study enriches the current understanding of ideological asymmetries by adding a set of temporal and cross-platform analyses.\n\n\\header{Limitations.} \nExtensive discussions about social data biases are presented in~\\cite{olteanu2019social}. Biases can be introduced by the choice of social platforms, data (un)availability, sampling methods, etc. Here we discuss three limitations in our data collection process.\n\nA recent study found that the Twitter filtered streaming API subsamples high-volume data streams that consist of more than 1\\% of all tweets~\\cite{wu2020variation}.
The authors proposed a method of using Twitter rate limit messages to quantify the data loss. Based on this method, we find that our 16-month Twitter stream has a sampling rate of 79.4\\% -- we collected 1,802,230,572 out of 2,270,223,254 estimated total tweets. Under a Bernoulli process assumption, the chance of collecting a video tweeted more than once in our tweet stream is 95.8\\% (already for a video tweeted exactly twice, $1 - (1 - 0.794)^2 \\approx 0.958$). Since most missing videos are tweeted sporadically, the sampling loss from the Twitter APIs is small, which minimally affects the measures on tweeting activities, including attention volumes, timing, and cascade sizes. Confidence intervals for simple measures such as volume can be derived~\\cite{wu2020variation}.\n\nIn this paper, we present various measurements focused on YouTube videos, which are the main entities that link the two platforms. YouTube viewers are unknowable (via publicly available data) and Twitter users are hard to track consistently over time. Therefore, we track videos, which attract views and tweets. All the presented metrics are video-centric and we do not assume that the viewers or tweeters of the videos represent specific groups of users. We believe that each set of videos ({\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace, and {\\tt BLM}\\xspace) represents YouTube videos that are relevant to the topic, curated by keyword queries and semi-manual coding. The number of videos belonging to each topic is not large, but we attempted to include all relevant videos shared on Twitter that have non-trivial activity. Thus our results are intended to explain the attention-gathering behavior of topic-relevant videos. Nevertheless, it is unclear whether our observations about ideological asymmetries generalize to videos with less attention and\/or videos about other topics. We leave validating this generalization to future work.\n\nOne data integrity limitation is the time gap between the tweet collection in 2017--2018 and the early adopters' follower networks collected in early 2020.\nOur data collection is limited by the Twitter search API quota, which prevents collecting tweets and Twitter user followers simultaneously. The tweeted videos stream has on average 3.7M tweets per day. Collecting the follower network for all these tweets far exceeds the capacity of the Twitter API; focusing on the early-adopter network is a practical trade-off between still having informative results and making data collection feasible. A related issue comes from unavailable YouTube videos and Twitter users, since content publicly available in 2017 may have been deleted, banned, or protected by 2020. We found that between 17\\% and 19\\% of the candidate videos became unavailable in our dataset.\n\n\\header{Practical implications and future work.}\nWe believe this work adds a new dimension to the understanding of online political behavior and discourse -- cross-platform links. Further examination in this direction could bear theoretical and empirical fruit. The measurements presented in this work are mostly quantitative. One direction of future work is to complement them with qualitative analyses. For example, to gain deeper insight into our observations about how the user attention to left-leaning YouTube videos was driven by a group of elite early adopters, one can examine typical diffusion networks from both left- and right-leaning groups and study the diffusion process of video spreading. One could also examine the framing of left- and right-leaning content in both video descriptions and tweets about them.
For example, \\citet{linDynamicsTwitterUsers2020} used mixed-methods approaches to identify the primary framing and rhetoric in online conversations related to gun control, which can be expanded to enrich the quantitative analyses, such as investigating the linguistic features of YouTube descriptions and tweet cascades, and their relationships to changes in collective attitudes. Finally, understanding collective attention across multiple social platforms is important for content producers, who could devise better strategies for promoting their content in other domains.\n\n\\section*{Ethical Statement}\nAll data that we obtained was publicly available at the time of data collection. We discarded deleted, protected, and private content at the time of analysis. In our released dataset, we anonymized user identities. Therefore, the analyses reported in this work do not compromise any user privacy.\n\n\\input{tables\/table-measures}\n\n\\section{Related Work}\n\\label{sec:related}\n\n\\subsubsection{Online behavior of political groups.} \nMeasurement studies have quantified different aspects of users, content, and their interactions under political polarization on social media. \\citet{conover2011political} presented one of the first profiling studies of polarized political groups on Twitter. \nThere is also evidence that liberal and conservative groups attract online attention in different ways. \\citet{abisheva2014watches} focused on a set of influential Twitter users who promoted YouTube videos, and found that conservatives tweeted about more diverse topics than liberals and shared new videos faster. \\citet{bakshy2015exposure} quantified the extent to which Facebook users were exposed to politically opposing content, and found that conservatives tended to seek out more cross-partisan content. \n\\citet{linDynamicsTwitterUsers2020} distinguished the online behavioral signals, such as linguistic and narrative characteristics, of two ideological groups in response to mass shooting events.\n\\citet{garimella2018political} defined several consumption and production metrics and profiled key user behavior patterns. \\citet{ottoni2018analyzing} showed that conservatives used more specific language to discuss political topics and expressed more negative emotions in their language. On YouTube, a recent study by \\citet{wu2021cross} found that left-leaning videos attracted more comments from conservatives than right-leaning videos did from liberals. However, all of these works are conducted platform-wide, and are not specialized to particular topics or movements. \n\nOnline activism, also known as online social movements, has been actively studied as a form of digital political campaigning.\nFor example, \\citet{de2016social} presented one of the first studies on the {\\tt BLM}\\xspace movement, measuring geographical differences in participation, and relationships to offline protests. \\citet{stewart2017drawing} constructed a shared audience network of users who talked about {\\tt BLM}\\xspace on Twitter and found the existence of superclusters among liberals and conservatives. \\citet{zhang2015modeling} predicted policy decisions based on analysis of tweet texts about \\texttt{same-sex marriage}. In a follow-up work, \\citet{zhang2016gender} discussed gender disparity by linking tweet texts to state-level {\\tt Abortion}\\xspace policy events.
\\citet{ertugrul2019activism} examined the relation between offline protest events and their social and geographical contexts. \\citet{freelon2020false} explained the different tactics of liberals and conservatives when approaching audiences on social media and articulated the asymmetries in measured behavior between conservatives and liberals. However, all of these works focus on a single controversial topic. In contrast, we examine three political topics and assess the consistency of findings across the topics.\n\n\\subsubsection{Cross-platform measurement studies.} One early attempt at linking Twitter and YouTube data is from \\citet{abisheva2014watches}, in which the authors found that the features of early adopters on Twitter were predictive of the video view counts on YouTube. \\citet{rizoiu2017expecting} proposed the Hawkes Intensity Process that links the time series of tweets and views, which led to a metric called {\\it viral potential} for measuring the expected number of views that a video would obtain if mentioned by an average tweet~\\cite{rizoiu2017online}. \\citet{zannettou2017web} measured the sharing of alternative and mainstream news articles on three different platforms -- Twitter, Reddit, and 4chan. This seminal study characterized the role of fringe communities in spreading news. \\citet{hosseinmardi2021examining} used browsing histories to infer video watch behavior, and quantified the consumption and drivers of extreme content with respect to users' information diet. However, all of these works cover a breadth of content sources and categories, and none focuses on social movements or consistent political topics. \n\nThe present work bridges this gap with cross-platform measurements of multiple social movements, aiming to provide a richer understanding of the ideological asymmetries between the left and the right. \n\n\\section{Identifying Topical Videos and Tweets}\n\\label{sec:app-data}\n\nWe curated a keyword list for each of the three controversial topics -- {\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace, and {\\tt BLM}\\xspace. Keywords for {\\tt Abortion}\\xspace and {\\tt Gun Control}\\xspace came from another recent work \\cite{guo2020inflating} and were expanded using contextual expansion with sample tweets. Keywords for {\\tt BLM}\\xspace were manually curated and separated into pro- and anti-{\\tt BLM}\\xspace keywords; we expanded the latter with white-supremacy-related keywords. \\Cref{table:app-keyword} lists all keywords.\n\nWe consider a video potentially relevant if (a) it contains at least one keyword in the video title or description; or (b) it is mentioned in a tweet that contains the keywords in the tweet text. We obtained 815 candidate videos for {\\tt Abortion}\\xspace, 974 candidate videos for {\\tt Gun Control}\\xspace, and 8,571 candidate videos for {\\tt BLM}\\xspace.\n\nFor {\\tt Abortion}\\xspace and {\\tt Gun Control}\\xspace, we manually annotated all candidate videos. All videos were randomly divided into five buckets (with 163 or 195 videos in each) and each bucket was labeled by an annotator. The five authors of this paper, along with two postgraduate students who had extensive research experience in computational social science, constituted the annotator team. Annotators were instructed to watch a video for at least five minutes before deciding whether the video was relevant to the target topic.
To measure the inter-rater reliability, we randomly selected a bucket and had two more annotators label the same bucket. We then computed Fleiss' Kappa: 0.830 for {\\tt Abortion}\\xspace and 0.826 for {\\tt Gun Control}\\xspace, suggesting high agreement among annotators.\n\nFor {\\tt BLM}\\xspace, we used a semi-automatic approach because the number of candidate videos (8,571) is much larger. Based on our observations on {\\tt Abortion}\\xspace and {\\tt Gun Control}\\xspace, we designed the following protocol:\n\n\\begin{itemize}[leftmargin=*]\n \\item For videos that contained any topic-relevant keywords in the video titles or descriptions,\n\n \\begin{itemize}[leftmargin=*]\n \\item if the videos were also mentioned by tweets with topic-relevant keywords, 89.0\\%\/86.7\\% of the videos were labeled \\textit{relevant} (in {\\tt Abortion}\\xspace and {\\tt Gun Control}\\xspace respectively, same below). These videos accounted for 16.1\\%\/12.5\\% of all candidate videos. We labeled such videos as relevant; there were 418 such videos in {\\tt BLM}\\xspace.\n \\item if the videos were not mentioned by tweets with topic-relevant keywords, 77.8\\%\/89.4\\% of the videos were labeled \\textit{irrelevant}. These videos accounted for 6.7\\%\/11.9\\% of all candidate videos. We labeled such videos as irrelevant.\n \\end{itemize}\n\n \\item For videos that did not contain any topic-relevant keywords in the titles or descriptions, but had been mentioned by some topic-relevant tweets,\n\n \\begin{itemize}[leftmargin=*]\n \\item if the videos were tagged as ``Music'', or did not have English captions, or did not contain any topic-relevant keywords in the captions, 95.2\\%\/80.6\\% of the videos were labeled \\textit{irrelevant}. We labeled such videos as irrelevant.\n \\item we used the manual annotation steps for {\\tt Abortion}\\xspace and {\\tt Gun Control}\\xspace to label the remaining 749 videos, out of which 359 videos were labeled relevant. The Fleiss' Kappa was 0.711.\n \\end{itemize}\n\\end{itemize}\n\nIn total, we identified 179 {\\tt Abortion}\\xspace videos, 268 {\\tt Gun Control}\\xspace videos, and 777 {\\tt BLM}\\xspace videos. We extracted all tweets and Twitter users that mentioned any of these topic-relevant videos from our 16-month data stream.\n\n\\input{tables\/table-topical-keywords}\n\n\\clearpage\n\n\\section{Estimating and Validating the Leanings of Twitter Users}\n\\label{sec:app-estimate-twitter}\n\nWe estimated the political leanings of early adopters on Twitter in four steps with increasing coverage.\n\n\\header{1) Curating seed hashtags.}\nTwitter users are known to use political hashtags in their user profiles as a marker of identity and a tool of network gatekeeping, such as marking participation in ongoing activism, facilitating the establishment of social network ties around specific topics, or expressing community identities~\\cite{kulshrestha2017quantifying}. Based on this insight, we started by identifying users who clearly signaled political ideologies in their Twitter profile descriptions. We curated two sets of ideologically descriptive hashtags that were typically used by \\rev{liberal} and \\rev{conservative} users. We considered a hashtag ideologically descriptive if it either supported or criticized political leaders, their statements, and their campaigns (e.g., \\textit{\\#trump}, \\textit{\\#america1st}, \\textit{\\#maga}). We call these hashtags {\\em political seed hashtags}. 
The whole list can be found in \\Cref{table:seed_hashtag}. There are 25 left-leaning hashtags used by 260, 552, and 817 users, and 35 right-leaning hashtags used by 1,023, 2,065, and 3,798 users in {\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace, and {\\tt BLM}\\xspace, respectively.\n\n\\input{tables\/table-seed-hashtags}\n\n\\header{2) Expanding seed hashtags.}\nTo obtain new political hashtags, we followed an expansion protocol based on co-occurrence with the political seed hashtags in the user profile descriptions. Firstly, we required that a candidate hashtag co-occur with the seed hashtags in at least 0.1\\% of all users' profile descriptions. Secondly, the candidate hashtag should be relevant to only one ideology (either left or right). To achieve this, we computed the Shannon entropy based on the co-occurrence of the candidate hashtag with the seed left- and right-leaning hashtags, and selected the hashtags with entropy less than or equal to 0.1. \\Cref{table:expanded_hashtag} presents the expanded political hashtags. The numbers of users using left- and right-leaning hashtags are 264 and 1,250 for {\\tt Abortion}\\xspace, 677 and 2,375 for {\\tt Gun Control}\\xspace, and 970 and 4,412 for {\\tt BLM}\\xspace, respectively. We noticed that the number of \\rev{conservative} users is about four times that of \\rev{liberal} users across the three topics.\n\n\\input{tables\/table-expanded-hashtags}\n\n\\header{3) Identifying seed users.}\nAfter curating and expanding the left- and right-leaning hashtags (jointly called {\\em political hashtags}), we assigned political ideologies to the Twitter users. For users whose profile descriptions included at least one political hashtag, we counted the numbers of left- and right-leaning hashtags in their profile descriptions. The average numbers of political hashtags per such user were 1.96, 1.97, and 2.04 for {\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace and {\\tt BLM}\\xspace, respectively. If the ratio of one leaning was greater than or equal to 0.9, we assigned the corresponding ideology to the user. We call these users {\\em seed users}, since their labels were later used to estimate the ideologies of other users. The numbers of liberal and conservative seed users are given in~\\Cref{table:seed_users}. \n\n\\input{tables\/table-seed-users}\n\n\\header{4) Propagating labels to other users.}\nWe aimed to infer the ideologies of users who had no explicit political hashtags in their profile descriptions. We first created a user-to-user network where the edge weight between two users $(u, v)$ represented their {\\em shared audience}, calculated using the Jaccard similarity of their follower lists:\n\n\\input{equations\/eq-jaccard}\n\nThe shared audience network has been shown to be effective for separating users with different political behaviors~\\cite{stewart2017drawing}. When estimating the user ideologies, our assumption is that users with the same ideology share similar audiences. The shared audience network was a dense network since we created edges between any two users regardless of whether they had a direct follower-following relation. The initial networks consisted of approximately $28M$, $119M$, and $394M$ edges for {\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace, and {\\tt BLM}\\xspace, respectively. The corresponding network densities were 18.1\\%, 15.2\\%, and 15.7\\%. To reduce the network complexity, we used a disparity filtering algorithm~\\cite{serrano2009extracting} to extract the network backbone. The disparity filtering algorithm identifies important edges in a network; it has a hyperparameter $\\alpha$ that controls the degree of edge importance, and we used $\\alpha = 0.05$ as suggested in the original work~\\cite{serrano2009extracting}. After disparity filtering, the resulting networks contained approximately $1.9M$, $7.8M$, and $25.5M$ edges for {\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace, and {\\tt BLM}\\xspace, respectively. The densities of these networks ranged between 1\\% and 1.2\\%.
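\n\nFor concreteness, the following is a minimal sketch of this network construction and filtering step; the function and variable names are ours, follower lists are assumed to be available as Python sets, and the disparity filter is implemented directly from \\cite{serrano2009extracting} rather than taken from any library API:\n\n\\begin{verbatim}\nimport networkx as nx\n\ndef backbone_network(followers, alpha=0.05):\n    # followers: dict mapping user id -> set of follower ids\n    G = nx.Graph()\n    users = list(followers)\n    for i, u in enumerate(users):\n        for v in users[i + 1:]:\n            inter = len(followers[u] & followers[v])\n            union = len(followers[u] | followers[v])\n            if inter:  # Jaccard similarity as shared-audience weight\n                G.add_edge(u, v, weight=inter \/ union)\n    # Disparity filter: keep an edge if it is significant for either\n    # endpoint against the null of a uniform weight split.\n    B = nx.Graph()\n    for u, v, w in G.edges(data='weight'):\n        for node in (u, v):\n            k = G.degree(node)\n            p = w \/ G.degree(node, weight='weight')\n            if k > 1 and (1 - p) ** (k - 1) < alpha:\n                B.add_edge(u, v, weight=w)\n                break\n    return B\n\\end{verbatim}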
\n\nNext, we applied a label propagation algorithm~\\cite{zhou2004learning} on the filtered network to infer the political leanings of the unknown users from those of the seed users. Label propagation is a semi-supervised algorithm that builds on two intuitions: (a) the seed labels should not change too much after propagation; (b) the inferred labels should not be too different from those of their neighbours. A teleport factor $\\beta$ controls the trade-off between these two intuitions. We used $\\beta = 0.85$ as suggested in prior work~\\cite{yan2020mimicprop}. The label propagation algorithm returned a liberal score and a conservative score for each user in the network, which we then re-normalized into a score between 0 and 1, where 0 indicated strongly liberal and 1 indicated strongly conservative. Re-normalization was done by computing the ratio of the conservative score to the sum of the liberal and conservative scores.\n\nTo validate this step, we performed 10-fold validation: in each fold, we held out a stratified one-tenth of the seed users and hid their labels. Next, we compared the inferred political leanings to the actual leanings of these users. Note that we employed the whole network during the label propagation validation. \\Cref{table:seed_users} shows the evaluation results for the label propagation.
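\n\nA compact sketch of this propagation step (our notation; $W$ is the symmetric backbone weight matrix, $Y$ is an $n \\times 2$ one-hot seed matrix, and the symmetric normalization of \\cite{zhou2004learning} is assumed):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef propagate(W, Y, beta=0.85, iters=100):\n    # Y[:, 0] marks liberal seeds, Y[:, 1] conservative seeds.\n    d = W.sum(axis=1)\n    d[d == 0] = 1.0\n    S = W \/ np.sqrt(np.outer(d, d))  # D^(-1\/2) W D^(-1\/2)\n    F = Y.astype(float).copy()\n    for _ in range(iters):\n        F = beta * (S @ F) + (1 - beta) * Y\n    # Re-normalize: conservative share of the two scores, in [0, 1].\n    return F[:, 1] \/ (F.sum(axis=1) + 1e-12)\n\\end{verbatim}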
\n\n\\clearpage\n\n\\section{Estimating and Validating the Leanings of YouTube Videos}\n\\label{sec:app-estimate-youtube}\n\nOnce we obtained the ideology probabilities of early adopters on Twitter, we computed the leaning score for each YouTube video by averaging its promoters' ideology scores. To assign discrete leaning labels (i.e., \\textit{Left}, \\textit{Right} and \\textit{Center}) to the videos, we leveraged the Recfluence dataset~\\cite{ledwich2020algorithmic} as an external source to annotate YouTube videos' leanings. This dataset includes 816 YouTube channels, where each channel has 10k+ subscribers and more than 30\\% of its content is relevant to US politics, cultural news, or cultural commentary. It consists of channels of YouTubers in addition to those of traditional news outlets. In this dataset, each channel is assigned a \\textit{Left (L)}, \\textit{Center (C)} or \\textit{Right (R)} leaning. The annotations were conducted based on two well-known media bias resources\\footnote{Media Bias\/Fact Check: \\url{https:\/\/mediabiasfactcheck.com\/}; Ad Fontes Media: \\url{https:\/\/www.adfontesmedia.com\/}} as well as manual labeling.\n\nThe videos whose channels' leanings are available in the Recfluence dataset were assigned the same leaning. We were able to label 109 videos (L: 38, R: 66, C: 5) from 38 channels for {\\tt Abortion}\\xspace, 153 videos (L: 52, R: 88, C: 13) from 62 channels for {\\tt Gun Control}\\xspace, and 521 videos (L: 194, R: 285, C: 42) from 120 channels for {\\tt BLM}\\xspace. To label the videos whose channels are not available in the Recfluence dataset, we identified two thresholds for each topic -- $thr_{(L, C)}$ and $thr_{(C, R)}$ where $thr_{(L, C)} < thr_{(C, R)}$. For a given video $\\nu$, we assigned a discrete leaning label, $leaning\\_label(\\nu)$, based on its leaning score, $leaning\\_score(\\nu)$, as follows:\n\n\\[\nleaning\\_label(\\nu) = \n\\begin{cases}\n L & \\text{if $leaning\\_score(\\nu) < thr_{(L, C)}$} \\\\\n R & \\text{if $leaning\\_score(\\nu) > thr_{(C, R)}$} \\\\\n C & \\text{otherwise.}\n\\end{cases}\n\\]\n\nWe used videos with labels from Recfluence to find the optimum thresholds $thr_{(L, C)}$ and $thr_{(C, R)}$. We obtained a leaning score distribution for each group (i.e., L, R and C), identified outliers for each group using the inter-quartile rule, and filtered them out. The leaning score distributions by groups for each topic are given in \\Cref{fig:recfluence_thr_plots}. Next, we found the thresholds $thr_{(L, C)}$ and $thr_{(C, R)}$ using posterior probabilities as follows:\n\\begin{eqnarray}\n P(L|leaning\\_score(\\nu)) = P(C|leaning\\_score(\\nu)),&\\text{where}~ leaning\\_score(\\nu) = thr_{(L, C)} \\\\\n P(C|leaning\\_score(\\nu)) = P(R|leaning\\_score(\\nu)),&\\text{where}~ leaning\\_score(\\nu) = thr_{(C, R)} \n\\end{eqnarray}\n\n\\input{images\/app_recfluence_thr_dist}\n\nThe resulting threshold pairs ($thr_{(L, C)}$, $thr_{(C, R)}$) are $(0.538, 0.749)$ for {\\tt Abortion}\\xspace, $(0.478, 0.716)$ for {\\tt Gun Control}\\xspace and $(0.524, 0.695)$ for {\\tt BLM}\\xspace, respectively. Based on these thresholds, we inferred the leaning labels of 70 videos (L: 20, R: 45, C: 5) from 58 channels for {\\tt Abortion}\\xspace, 115 videos (L: 29, R: 66, C: 20) from 98 channels for {\\tt Gun Control}\\xspace, and 256 videos (L: 103, R: 111, C: 42) from 197 channels for {\\tt BLM}\\xspace. \n\nTo validate the video leaning estimation, we conducted a manual leaning annotation for videos in {\\tt Gun Control}\\xspace. We used stratified sampling, drawing 5 videos from each 10-percentile bin of the leaning scores. The leanings of these 50 videos were annotated independently by three authors considering three categories (\\textit{left-leaning}, \\textit{right-leaning}, \\textit{unknown}). The Fleiss' Kappa was 0.691, suggesting a moderate agreement among annotators. We used the majority vote to assign a video label. Table \\ref{table:gun_stance_manual_annotation} summarizes the annotation results. \n\nWe observed that the only decile bin containing both manually annotated left- and right-leaning videos is the 30th-40th percentile of leaning scores $(0.465 - 0.665)$. This suggests that the annotators agreed closely with the estimated leaning scores, and that the thresholds should lie in this interval -- which is indeed the case for {\\tt Gun Control}\\xspace: $thr_{(L, C)}$ lies within its left boundary, and $thr_{(C, R)}$ is only 0.05 away from its right boundary. In addition, most of the videos which were assigned \\textit{unknown} by the raters are footage of the shooting of Philando Castile and of mass shooting events including the Las Vegas shooting and the TX massacre. These videos do not express any stance in their content. However, we observed that the leaning scores of these videos align well with the leanings of their channels. One example is ``Comments on TX church shooting'', whose leaning score falls in the $(80th - 90th)$ percentile bin. The channel owner is a far-right YouTuber, so the video leaning score matches the channel owner's leaning; our approach is thus successful in resolving such ambiguities.
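\n\nOne way to compute such thresholds numerically is sketched below; it assumes Gaussian class-conditional score densities with empirical class priors (the paper's exact density estimates may differ), and the names are ours:\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import brentq\nfrom scipy.stats import norm\n\ndef fit_threshold(scores_a, scores_b):\n    # Score s where P(A|s) = P(B|s), i.e., where the\n    # prior-weighted class densities intersect.\n    fa = norm(np.mean(scores_a), np.std(scores_a))\n    fb = norm(np.mean(scores_b), np.std(scores_b))\n    g = lambda s: len(scores_a) * fa.pdf(s) - len(scores_b) * fb.pdf(s)\n    return brentq(g, np.mean(scores_a), np.mean(scores_b))\n\n# thr_LC = fit_threshold(left_scores, center_scores)\n# thr_CR = fit_threshold(center_scores, right_scores)\n\\end{verbatim}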
\n\n\\Cref{table:gun_stance_manual_annotation} shows the manual leaning annotation results of the 50 Gun Control videos. Compared to the estimated leaning score ranges in the second column, the 14 videos in the first three rows (manually labeled as left-leaning) are labeled consistently (the manual labels and the estimated leanings agree), as are the 19 videos in the last five rows (manually labeled as right-leaning). The computed thresholds for Gun Control videos, as presented in~\\Cref{fig:recfluence_thr_plots}, are $0.478$ and $0.716$.\n\n\\input{tables\/table-gun-stance-annotation}\n\n\\clearpage\n\n\\section{Computing Relative Engagement of a Video}\n\\label{sec:app_relengagement}\n\nFrom the collected videos we extracted topic-relevant videos ({\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace and {\\tt BLM}\\xspace) for our topical analysis, but all collected videos were used to compute relative engagement scores.\nWe filtered out videos that received fewer than 100 views within their first 30 days after upload, which is the same filter used by~\\citet{brodersen2012youtube}.\nWe summarize how the relative engagement score is computed. For a given video, we first compute two aggregate metrics:\n\\begin{itemize}\n \\item average watch time ($\\bar{\\omega}_t$): the total watch time $x_{w}[1 : t]$ divided by the total view count $x_v[1 : t]$ up to day $t$\n \\[\\bar{\\omega}_t = \\frac{\\Sigma^t_{i=1}x_{w}[i]}{\\Sigma^t_{i=1}x_{v}[i]}\\]\n \\item average watch percentage ($\\bar{\\mu}_t$): the average watch time $\\bar{\\omega}_t$ normalized by the video duration $D$\n \\[\\bar{\\mu}_t = \\frac{\\bar{\\omega}_t}{D}\\]\n\\end{itemize}\n\nThen the {\\it engagement map} of tweeted videos is constructed as follows. Two maps are created: in the first, the x-axis shows the video duration $D$ and the y-axis shows the average watch time over the first 120 days ($\\bar{\\omega}_{120}$) (\\Cref{fig:appendix_relative_engagement}(a)); the second has the same x-axis, with the average watch percentage over the first 120 days ($\\bar{\\mu}_{120}$) (\\Cref{fig:appendix_relative_engagement}(b)) as the y-axis. All videos in the tweeted videos dataset are projected onto both maps. The x-axis is split into 1,000 equally wide bins in log scale. These two maps are logically identical, but \\Cref{fig:appendix_relative_engagement}(b) is easier to read as its y-axis is bounded between $[0,1]$. This second map is denoted as the {\\it engagement map}.\n\nBased on this {\\it engagement map}, the relative engagement $\\bar{\\eta}_t\\in [0,1]$ is defined as the rank percentile of a video in its duration bin. This is an average engagement measure over the first $t$ days. \\Cref{fig:appendix_relative_engagement}(b) illustrates the relation between video duration $D$, watch percentage $\\bar{\\mu}_{120}$ and relative engagement $\\bar{\\eta}_{120}$ for three example videos. $V_1$ is a left-leaning video in the {\\tt Abortion}\\xspace topic with a duration of 136 seconds. $V_2$ is a right-leaning video in the {\\tt Abortion}\\xspace topic with a duration of 141 seconds. $V_1$ and $V_2$ have similar lengths but different watch percentages, $\\bar{\\mu}_{120}(V_1) = 0.83$ and $\\bar{\\mu}_{120}(V_2) = 0.59$. The difference becomes more apparent with relative engagement scores, $\\bar{\\eta}_{120}(V_1) = 0.94$ and $\\bar{\\eta}_{120}(V_2) = 0.27$, as shown in \\Cref{fig:appendix_relative_engagement}(b).\n$V_3$ is also a right-leaning video in {\\tt Abortion}\\xspace, and is significantly longer than the others. Although $V_3$ is not watched as much as the previous two, $\\bar{\\mu}_{120}(V_3) = 0.40$, the relative engagement of $V_3$ is much higher than that of $V_2$, $\\bar{\\eta}_{120}(V_3) = 0.86$, as longer videos tend to have lower average watch percentages. 
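\n\nA minimal sketch of this rank-percentile computation (our names; a pandas DataFrame with per-video duration and $\\bar{\\mu}_{120}$ columns is assumed):\n\n\\begin{verbatim}\nimport numpy as np\nimport pandas as pd\n\ndef relative_engagement(df, n_bins=1000):\n    # df has columns 'duration' (seconds) and 'watch_pct' (mu_120).\n    edges = np.logspace(np.log10(df['duration'].min()),\n                        np.log10(df['duration'].max()), n_bins + 1)\n    bins = pd.cut(df['duration'], edges, include_lowest=True)\n    # eta_120: rank percentile of watch_pct within each duration bin.\n    return df.groupby(bins)['watch_pct'].rank(pct=True)\n\\end{verbatim}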
\n\n\\input{images\/app_rel_engagement}\n\n\\clearpage\n\n\\section{Computing Viral Potential of a Video via the Hawkes Intensity Process}\n\\label{sec:app_hip}\n\n\\header{Viral potential}\nestimates the expected number of views a video will obtain if mentioned by an {\\em average} tweet~\\cite{rizoiu2017online}.\nMore specifically, it is the area under the impulse response function of an integral equation known as the Hawkes Intensity Process (HIP)~\\cite{rizoiu2017expecting}, which is learned for each video using the first 120 days of tweeting and viewing history. The HIP model is designed to describe the phenomena of tweets driving or attracting video views, including network effects (views beget views), system memory (e.g., interest in news videos wanes within a week, but interest in music videos is sustained for several years), and video quality (including factors such as sensitivity to tweet mentions and inherent attractiveness). \nWe estimate the number of views per tweet via the HIP model rather than simply dividing the number of views by the number of tweets, because the model takes into account future views that are expected to unfold after the 120-day cut-off. \nThe same reason motivates the name viral {\\em potential} rather than viral score.\nOne limitation of HIP and viral potential is that the notion of an ``average'' tweet is implemented by marginalizing over the power-law distributed number of followers. This means that network size variations across different topics are not taken into account.\nA self-contained summary of the HIP and viral potential computation is included below. The viral potential is a positive number, since views cannot be negative (i.e., no video loses views). However, it can take values less than one, corresponding to the effect of tweets being dampened (one view per several tweets) rather than amplified, as often seen in videos linked by spam tweets. \n\nThe Hawkes Intensity Process (HIP)~\\cite{rizoiu2017expecting} extends the well-known Hawkes (self-exciting) process~\\cite{hawkes1971spectra} to describe the volume of activities within a fixed time interval (e.g., daily).
This is done by taking expectations over the stochastic event history.\nSpecifically, the model describes a self-exciting phenomenon that is commonly observed in online social networks~\\cite{rizoiu2017online}.\nIt models the target quantity $x[t]$ through a self-consistent equation with three parts: the unobserved external influence, the effects of external promotions, and the influence from historical events.\nFormally, it can be written as\n\n\\input{equations\/eq-hip}\n\nThe first two terms represent unobserved external influences.\n$\\gamma$ and $\\eta$ model the strengths of an initial impulse and a constant background rate, respectively.\nIn the middle component, $\\alpha$ is the sensitivity to external promotion, $s[t]$ is the volume of promotion, and $\\alpha s[t]$ is the instantaneous response to promotion.\nIn the last component, $\\theta$ is the exponent of a power-law memory kernel $(\\tau+c)^{-(1+\\theta)}$.\n$c$ is a nuisance parameter for keeping the kernel bounded, and $C$ accounts for the latent content quality.\nOverall, this last component models the impact of the video's own event history $x[\\tau]$ for $\\tau{=}1:t{-}1$.\n\nIn our case, $x[t]$ is the time series of daily views $x_v[t]$ on YouTube,\nand $s[t]$ is the daily number of tweets on Twitter.\nThe parameter set $\\{\\gamma, \\eta, \\alpha, C, c, \\theta\\}$ is estimated from the first 90-day interval of each video using the constrained L-BFGS algorithm in the \\texttt{SciPy} Python package.\\footnote{\\url{https:\/\/github.com\/andrei-rizoiu\/hip-popularity}}
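\n\nA plausible discretization of this self-consistent equation, for illustration only (the exact formulation and fitting follow \\cite{rizoiu2017expecting}; the names are ours):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef hip_series(s, gamma, eta, alpha, C, c, theta):\n    # s: daily promotion series (tweets); returns daily views x[t]\n    # as impulse + background + promotion + power-law memory terms.\n    x = np.zeros(len(s))\n    for t in range(len(s)):\n        memory = sum(x[tau] * ((t - tau) + c) ** (-(1 + theta))\n                     for tau in range(t))\n        x[t] = gamma * (t == 0) + eta + alpha * s[t] + C * memory\n    return x\n\\end{verbatim}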
\n\n\\clearpage\n\n\\section{View and Tweets over time: additional plots}\n\\label{sec:app-overtime_additional}\n\n\\Cref{fig:app_gun_ccdf} and \\Cref{fig:app_blm_ccdf} visualize the attention accumulation for the {\\tt Gun Control}\\xspace and {\\tt BLM}\\xspace topics.\n\n\\begin{figure*}[!htb]\n \\centering\n \\includegraphics[width=0.66\\linewidth]{image-files\/app_gun_ccdf.pdf}\n \\caption{Comparing view and tweet accumulations of left- and right-leaning videos in {\\tt Gun Control}\\xspace. (a) and (b) show the CCDF of views and tweets accumulated by Day 1 and Day 30, respectively. As seen in (c), the differences between left- and right-leaning videos are larger than the differences for {\\tt Abortion}\\xspace videos, but the differences between the accumulation of views and tweets are smaller than for {\\tt Abortion}\\xspace videos.}\n \\label{fig:app_gun_ccdf}\n\\end{figure*}\n\n\\begin{figure*}[!htb]\n \\centering\n \\includegraphics[width=0.66\\linewidth]{image-files\/app_blm_ccdf.pdf}\n \\caption{Comparing view and tweet accumulations of left- and right-leaning videos in {\\tt BLM}\\xspace. (a) and (b) show the CCDF of views and tweets accumulated by Day 1 and Day 30, respectively. (c) shows that the differences between left- and right-leaning videos are the largest among all topics.}\n \\label{fig:app_blm_ccdf}\n\\end{figure*}\n\n\\clearpage\n\n\\section{Tweet cascades: additional plots}\n\\label{sec:app-cascade_additional}\n\n\\Cref{fig:app_cascade_additional} visualizes the attention distributions of the isolated, small, and large cascades for the {\\tt Gun Control}\\xspace and {\\tt BLM}\\xspace topics.\n\n\\begin{figure}[!htb]\n \\centering\n \\begin{subfigure}[b]{0.45\\linewidth}\n \\includegraphics[width=\\linewidth]{image-files\/app_gun_cascade_l.pdf}\n \\caption{Tweet cascades in {\\tt Gun Control}\\xspace left-leaning videos.}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.45\\linewidth}\n \\includegraphics[width=\\linewidth]{image-files\/app_gun_cascade_r.pdf}\n \\caption{Tweet cascades in {\\tt Gun Control}\\xspace right-leaning videos.}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.45\\linewidth}\n \\includegraphics[width=\\linewidth]{image-files\/app_blm_cascade_l.pdf}\n \\caption{Tweet cascades in {\\tt BLM}\\xspace left-leaning videos.}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.45\\linewidth}\n \\includegraphics[width=\\linewidth]{image-files\/app_blm_cascade_r.pdf}\n \\caption{Tweet cascades in {\\tt BLM}\\xspace right-leaning videos.}\n \\end{subfigure}\n \\caption{Cascade start times measured against the view accumulation process. \\Cref{table:measures} reports that retweet cascades for left-leaning videos start significantly later in the view accumulation process for all topics.}\n \\label{fig:app_cascade_additional}\n\\end{figure}\n\n\\clearpage\n\n\\section{Additional metrics on early adopter networks}\n\\label{sec:app_centralized_left_right}\n\nWe measured additional metrics on early-adopter networks; the comparative findings are summarized in \\Cref{table:measures}.\n\n\\header{\\textit{Network density}} is the fraction of existing edges among all possible pair-wise edges. It quantifies the connectivity of the follower network. A higher value indicates that the early adopters are more cohesively followed among themselves.\n\n\\header{\\textit{Maximum indegree}} is the maximum indegree value in the network. It provides information about the most influential user in the network. \n\n\\header{\\textit{Global efficiency}} is computed as the number of closed triplets over the total number of triplets. It measures the average efficiency over all pairs of distinct users. Higher global efficiency reflects a higher capacity to diffuse information over the network.\n\n\\header{\\textit{Gini coefficient of betweenness centrality}} measures the dispersion in the fraction of all shortest paths in the network that pass through a given user. A higher coefficient implies that a few early adopters are on the shortest path for most node pairs, while the rest of the early adopters are not.
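\n\nFor reference, a minimal sketch of the Gini coefficient as we would compute it over a vector of centralities (e.g., the output of \\texttt{networkx.betweenness\\_centrality}; the function name is ours):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef gini(x):\n    # Gini coefficient of non-negative values (0 = perfectly\n    # equal, 1 = maximally concentrated on one user).\n    x = np.sort(np.asarray(x, dtype=float))\n    n = len(x)\n    cum = np.cumsum(x)\n    return (n + 1 - 2 * (cum \/ cum[-1]).sum()) \/ n\n\\end{verbatim}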
\n\n\\section{Intersecting relative engagement with view and tweet count}\n\\label{sec:app_intersect}\n\n\\input{images\/app_intersecting}\n\nWe link the measure of relative engagement of YouTube videos to the number of tweets, the number of followers, and the number of views. The numbers of tweets and views are accumulated over the first 120 days after upload.\nRather than looking at the overall correlation, which is dominated by large variations, we choose to group videos according to relative engagement. This allows us to interpret the change in Twitter user base, tweeting behavior, and YouTube attention across videos of different engagement levels.\n\nWe compute the relative engagement score for each video and rank the videos in descending order of relative engagement score. The videos are then divided into 5 groups: the top 20\\% of videos, the top 20\\%\\textasciitilde{}40\\%, the top 40\\%\\textasciitilde{}60\\%, the top 60\\%\\textasciitilde{}80\\%, and the top 80\\%\\textasciitilde{}100\\%. \nFor each group, left- and right-leaning videos are shown as boxplots of \\textit{Tweet120}, \\textit{Follower count}, and \\textit{View120} (\\Cref{fig:app_intersecting}). \n\n\nAs seen earlier, left-leaning videos have fewer tweets on average, which can be seen with {\\it Tweet120}. In particular, in the case of {\\tt Abortion}\\xspace, the middle buckets (20\\%, 40\\% and 60\\%) show that left-leaning videos have significantly fewer tweets but more views on average. The same pattern holds for {\\tt Gun Control}\\xspace. This is consistent with the previous finding that left-leaning videos are more effectively promoted (higher viral potential) with a smaller number of tweets.\n\nWe can observe that a higher number of followers does not imply a higher number of tweets on average, as witnessed by most of the middle buckets. However, we note that the top three outliers among left-leaning videos in the 40\\% bucket of the {\\it Follower count} boxplot for {\\tt Gun Control}\\xspace correspond to the top three outliers of left-leaning videos in the same bucket of {\\it Tweet120}.\n\nIn terms of {\\it View120}, the average trends support the findings in \\Cref{ssec:aggregate_obs} in that left-leaning videos in {\\tt Abortion}\\xspace and {\\tt Gun Control}\\xspace have higher views whereas left-leaning videos in {\\tt BLM}\\xspace have lower views. Although view count is considered an important measure of video popularity in general, no noticeable correlation is observed between relative engagement and {\\it View120}. This is expected, as engagement and popularity metrics capture very different perspectives of online attention~\\cite{wu2019estimating}.\n\n\\section{Curating Tweeted Video Datasets}\n\\label{sec:data}\n\nWe constructed three new cross-platform datasets by tracking videos on YouTube and posts on Twitter over three controversial topics: {\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace, and {\\tt BLM}\\xspace. These topics have been studied extensively by social and political scientists~\\cite{zhang2016gender,de2016social,stewart2017drawing,garimella2018political}. In this section, we first describe the data collection strategy. We then introduce our methods for estimating the political leanings of Twitter users and YouTube videos. \\Cref{table:datasets} summarizes the overall statistics of the three topical datasets.\n\n\\input{tables\/table-datasets.tex}\n\n\\subsection{Finding YouTube Videos and Twitter Posts of Controversial Topics}\n\\label{ssec:data_find}\n\nWe are interested in topical YouTube videos and the discussions about them on Twitter. Following the approach used in~\\cite{wu2018beyond}, we collected public tweets that mentioned any YouTube URLs via the Twitter filtered streaming API. Our raw Twitter stream spanned 16 months (2017-01-01 to 2018-04-30), and contained more than 1.8 billion tweets. 
To subsample videos that attracted a reasonable amount of attention before the end of the observation period, we required that the videos be published in 2017 and receive at least 100 tweets and at least 100 views within the first 120 days after upload. We make this filtering choice because (a) analyzing videos with little attention (${<}1$ view per day) is bound to generate noise when comparing different groups; and (b) characterizing the timing and structure of a video's tweeting cascades requires a non-trivial number of tweets. This yielded 328,557 videos, which were mentioned in 242M tweets by 29.9M users. For each video, we collected its metadata, daily time series of view count and watch time, and all tweets mentioning it.\n\nTo identify topic-relevant videos and tweets, we first curated three separate keyword lists for the three controversial topics -- {\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace, and {\\tt BLM}\\xspace. We consider a video potentially relevant if (a) it contains at least one keyword in the video title or description; or (b) it is mentioned in a tweet that contains at least one keyword in the tweet text. Next, we used a mix of manual and semi-automated approaches to annotate the potentially relevant videos. Section A of \\cite{appendix} details the topical keywords, their curation, and our video annotation protocol. In total, we obtained 179 {\\tt Abortion}\\xspace, 268 {\\tt Gun Control}\\xspace, and 777 {\\tt BLM}\\xspace videos, which were mentioned in 970K+ tweets.\n\n\\input{images\/data_example}\n\n\\Cref{fig:data_overtime} shows the daily view count series and tweet cascades for an example video. A notable contrast between the two sources is that YouTube attention data is only available in daily aggregates, i.e., without individual user logs, while tweets have precise timestamps, user information, and relations between tweets and users. \\Cref{sec:metric} is dedicated to designing measures for such cross-platform multi-relational temporal data. We bootstrapped the political information about Twitter users and YouTube videos. In particular, we gathered more information about videos' early adopters, defined as the first 20\\% of users who tweeted about each video. We collected the follower lists for all early adopters. Network measures of early adopters have been found to indicate future popularity~\\cite{romero2013interplay}. The threshold (first 20\\%) is chosen to balance the need for data and the burden of collecting network information within practical API limits. After filtering out banned and protected users, we extracted 132K early adopter tweets posted by tens of thousands of users across the three topics (see \\Cref{table:datasets}).\n\n\\subsection{Estimating Political Leanings of Twitter Users and YouTube Videos}\n\\label{ssec:data_estimate}\n\nWe classified the political leanings of early adopters on Twitter into liberal, neutral, and conservative. Meanwhile, we classified the video leanings into left, center, and right. 
Twitter users and YouTube videos are related in that left-leaning content (e.g., {\\tt Gun Control}\\xspace videos) is generally shared by liberal users while right-leaning content (e.g., {\\tt Gun Rights} videos) is shared by conservative users.\n\nWe estimated Twitter users' political leanings by first identifying a group of seed users who included political hashtags in their profile descriptions, and then by using a label propagation algorithm~\\cite{zhou2004learning} to propagate the labels of seed users to other users based on the shared follower network. This is a common approach for classifying user leaning on Twitter~\\cite{stewart2017drawing}, and the {\\it follow} relation has been found to be the most important in predicting user ideology~\\cite{xiao2020timme}. We performed 10-fold cross-validation to evaluate the classification performance. We observed very high scores in precision, recall, and F-score across all three topics ($>95\\%$ in all metrics). Section B of~\\cite{appendix} describes our classification and evaluation methods for Twitter users in more detail.\n\nWe estimated YouTube videos' political leanings by first averaging the leaning scores of videos' early adopters on Twitter. We then used an external YouTube media bias dataset~\\cite{ledwich2020algorithmic} to label the video leanings and identified optimal classification thresholds. We were able to find 58\/10\/111 left-, center-, and right-leaning videos for {\\tt Abortion}\\xspace (analogously, 81\/33\/154 for {\\tt Gun Control}\\xspace and 297\/84\/396 for {\\tt BLM}\\xspace). To validate our estimation, we performed one round of manual annotation for videos in {\\tt Gun Control}\\xspace. We used stratified sampling to sample 50 videos based on the video leaning scores. These videos were annotated independently by three authors. The Fleiss' Kappa was 0.69, suggesting a moderate inter-rater agreement. Section C of~\\cite{appendix} details our classification and evaluation methods for YouTube videos.\n\n\\section{Observations on Cross-Platform Attention}\n\\label{sec:measure}\n\nWe report the results on all metrics described in \\Cref{sec:metric}, in mirroring subsections to aid navigation. Many results in this section are presented as violin plots. The outlines are kernel density estimates for the left-leaning (blue) and right-leaning (red) videos, respectively. The center dashed line is the median, whereas the two outer lines denote the inter-quartile range. To compare the distributions of each metric for the left- and right-leaning videos, we adopt the one-sided Mann\u2013Whitney U test. We summarize our results in~\\Cref{table:measures} at the end of this paper.\n\n\\subsection{Aggregate Attention on YouTube and Twitter}\n\\label{ssec:aggregate_obs}\n\n\\header{Total view count.}\n\\Cref{fig:obs_aggregate}(a) shows the distribution of video views at day 120 after upload. Measuring views at the same video age removes the effect of age, so that videos that have been published for longer do not take an unfair advantage. In {\\tt Abortion}\\xspace and {\\tt Gun Control}\\xspace, the median, as well as the 25$^{th}$ and 75$^{th}$ percentiles, of views of left-leaning videos are higher than those of right-leaning videos. The median views for left-leaning videos are 107,346 for {\\tt Abortion}\\xspace and 153,482 for {\\tt Gun Control}\\xspace, versus 62,780 and 103,373 for right-leaning ones. The differences in view distribution are statistically significant ($p < 0.01$, \\Cref{table:measures} row 1). For {\\tt BLM}\\xspace, right-leaning videos have a higher median and 75$^{th}$ percentile of views, but the effect is not significant.
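\n\nThese one-sided tests can be reproduced with standard tooling; a sketch (the variable names are placeholders for the per-group view counts):\n\n\\begin{verbatim}\nfrom scipy.stats import mannwhitneyu\n\n# H1: left-leaning view counts are stochastically greater.\nstat, p = mannwhitneyu(left_views, right_views,\n                       alternative='greater')\n\\end{verbatim}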
\n\n\\input{images\/measure_aggregate}\n\n\\header{Relative engagement.}\nFrom \\Cref{fig:obs_aggregate}(b), we can see that videos in all three topics are highly engaging, with mean relative engagement at 0.834 for {\\tt Abortion}\\xspace, 0.824 for {\\tt Gun Control}\\xspace, and 0.831 for {\\tt BLM}\\xspace. This is because our data processing procedure requires videos to have at least 100 tweets and 100 views, which tends to select videos with a significant amount of interest. Left-leaning videos are significantly more engaging than their right-leaning counterparts across all three topics ($p < 0.05$, \\Cref{table:measures} row 2).\n\n\\header{Fraction of likes.}\n\\Cref{fig:obs_aggregate}(c) presents the proportion of likes in videos' reactions. Left-leaning videos across all topics have a significantly smaller fraction of likes than right-leaning videos ($p < 0.001$, \\Cref{table:measures} row 3). This may be explained by the observation that there is far more cross-partisan talk on left-leaning videos~\\cite{wu2021cross}.\n\n\\header{Viral potential.}\n\\Cref{fig:obs_aggregate}(d) shows the distributions of viral potential. We find that the left-leaning videos have significantly higher viral potential than the right-leaning videos across all three topics ($p < 0.05$, \\Cref{table:measures} row 4), meaning that given the same amount of tweets exposing the video on Twitter, an average left-leaning video effectively attracts more views than an average right-leaning video. The difference is most notable in {\\tt Abortion}\\xspace: a typical left-leaning video receives 224 views from an average tweet, whereas a typical right-leaning video receives only 63 views.\n\n\\header{Tweet counts.}\n\\Cref{fig:obs_aggregate}(e) and (f) show the distributions of total tweets and retweets. In contrast to the observation that left-leaning videos are more viewed, here we find that right-leaning videos are significantly more tweeted, especially with more retweets and more replies ($p < 0.001$, \\Cref{table:measures} rows 5-7) in {\\tt Abortion}\\xspace and {\\tt BLM}\\xspace. On the other hand, we do not observe a significant difference in original tweets and quotes, except for {\\tt BLM}\\xspace, where right-leaning videos have a prevailing volume across all tweet types.\n\nTo examine the robustness of the results presented in this section, we bootstrapped videos for each topic and for each ideological group. Specifically, for each group, we created $1,000$ bootstrapped sets of videos that are of the same size as the original group (shown in~\\Cref{table:datasets}). Next, we computed the mean of the proposed metrics (shown in~\\Cref{fig:obs_aggregate}) for each bootstrapped set. Lastly, we used the {\\it independent t-test} to check the statistical significance between left- and right-leaning groups. The results of the {\\it t-tests} support all reported relations in~\\Cref{table:measures} rows 1-6 with $p < 0.001$.
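\n\nA sketch of this robustness check (our names; one metric vector per group is assumed):\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import ttest_ind\n\ndef bootstrap_means(values, n_boot=1000, seed=0):\n    # Means of n_boot resamples (with replacement), each\n    # the same size as the original group.\n    rng = np.random.default_rng(seed)\n    values = np.asarray(values)\n    idx = rng.integers(0, len(values), size=(n_boot, len(values)))\n    return values[idx].mean(axis=1)\n\nt, p = ttest_ind(bootstrap_means(left_metric),\n                 bootstrap_means(right_metric))\n\\end{verbatim}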
\n\n\\subsection{Views and Tweets over Time}\n\\label{ssec:temporal_obs}\nWe measure how quickly left- and right-leaning videos attract views and tweets. We find that left-leaning videos are reacted to more quickly on both YouTube and Twitter across all topics.\n\n\\header{Viewing half-life} and \\header{Tweeting half-life.}\nWe notice that there are significant differences in the attention consumption patterns: right-leaning videos have more prolonged attention spans on YouTube across all topics ($p < 0.01$, \\Cref{table:measures} row 10). Right-leaning videos also have longer attention spans on Twitter for {\\tt Abortion}\\xspace and {\\tt Gun Control}\\xspace ($p < 0.05$, \\Cref{table:measures} row 11). For example, \\Cref{fig:obs_temporal}(a) shows that right-leaning videos for {\\tt Abortion}\\xspace have the longest attention span -- taking 9 days for 75\\% of videos to achieve their viewing half-life, while left-leaning videos take only 3 days. Comparing \\Cref{fig:obs_temporal}(a) to \\Cref{fig:obs_temporal}(b), we find that attention spans on Twitter are shorter than those on YouTube. In {\\tt Abortion}\\xspace, it takes 2 days for 75\\% of left-leaning videos to reach their tweeting half-life (vs. 3 days for views) and 5 days for 75\\% of right-leaning videos (vs. 9 days for views).\n\n\\input{images\/measure_temporal}\n\n\\header{Tweeting lifetime} and \\header{Tweeting inter-arrival time.} \n\\Cref{fig:obs_temporal}(c) and (d) show the distributions of tweeting lifetime and inter-arrival time. The results are mixed across topics. For {\\tt Abortion}\\xspace and {\\tt Gun Control}\\xspace, the MWU test results show that left-leaning videos have significantly lower values in both metrics than right-leaning videos ($p < 0.05$, \\Cref{table:measures} rows 12-13). But for {\\tt BLM}\\xspace, both metrics show similar distributions between left- and right-leaning videos.\n\nThese results suggest that left-leaning videos have shorter circulation durations and are mentioned more quickly (except in {\\tt BLM}\\xspace). The most notable difference is in {\\tt Abortion}\\xspace, where the medians of tweeting lifetime and inter-arrival time for right-leaning videos are more than three times those for left-leaning videos. For instance, the median tweeting lifetime is 1811.6 minutes for left-leaning videos, while the median is 8453.8 minutes for right-leaning videos.\n\n\\header{Accumulation of views and tweets.}\nWe examine how many views and tweets are accumulated each day. \\Cref{fig:obs_accumulation}(a) and (b) show the Complementary Cumulative Distribution Function (CCDF) of the percentages of views and tweets accumulated by the first day (video published date) and the $30^{th}$ day for {\\tt Abortion}\\xspace videos. We observe that left-leaning videos tend to achieve views\/tweets faster than right-leaning videos, which is consistent with \\Cref{fig:obs_temporal}. For example, after day 1, 56.9\\% of left-leaning videos have achieved their viewing half-life, but only 26.1\\% of right-leaning videos have achieved the same. For tweet accumulation, after 1 day, the gap in tweeting half-life between left- and right-leaning videos is 11.4\\% (46.6\\% and 35.1\\%, respectively). By day 30, only 3 left-leaning videos have not accumulated 80\\% of their views.\n\n\\Cref{fig:obs_accumulation}(c) compares the normalized MWU statistic values of left- and right-leaning videos in views $\\bar U_v$ and tweets $\\bar U_t$ on each of the 120 days since upload. It shows that the difference between left- and right-leaning videos is larger in the beginning and decreases towards 0.5 over time. 
At the $120^{th}$ day (as the observation period ends), both are 0.5 by definition; we thus truncate the plot at day 30. We also observe that the difference in views is initially larger than that in tweets across left- and right-leaning videos, whereas the discrepancy between views and tweets narrows as videos get older. This is because more videos have already accumulated nearly all of their views and tweets. \n\n\\input{images\/measure_accumulation}\n\n\\subsection{Videos' Tweet Cascades}\n\\label{ssec:cascade_obs}\n\n\\header{Cascade size.} \n\\citet{goel2012structure} found that one-node cascades (isolated cascades) account for $96\\%$ of all cascades in their {\\it Twitter Videos} dataset. \\Cref{fig:obs_cascades}(a) shows that the proportions of isolated cascades in our datasets are lower, measured at $91.5\\%$, $91.2\\%$, and $91.9\\%$ respectively for {\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace, and {\\tt BLM}\\xspace. Notwithstanding the confounder that the dynamics on Twitter have changed significantly since \\cite{goel2012structure}, this may still suggest that tweets on controversial topics are less isolated than tweets about videos in general. \\Cref{fig:obs_cascades}(b) shows the volume of tweets involved in each cascade group. Interestingly, most tweets belong to either isolated cascades or large cascades. Aggregated over left- and right-leaning videos in {\\tt Abortion}\\xspace, {\\tt Gun Control}\\xspace, and {\\tt BLM}\\xspace, 47\\%, 43.4\\%, and 47.3\\% of tweets are isolated, whereas 44.7\\%, 48.4\\%, and 44.3\\% are in large cascades of size 5 and above, dominated by a handful of cascades of over 1,000 tweets.\n\n\\input{images\/measure_cascades}\n\n\\header{Cascade start time.} \nWe compare when tweet cascades start in the process of view accumulation, grouped by different cascade sizes. \\Cref{fig:obs_cascades}(c) and (d) show the distributions of cascade start times of left- and right-leaning videos in {\\tt Abortion}\\xspace relative to the percentage of accumulated views. For right-leaning videos, there is a peak of isolated cascades starting at the end of the videos' viewing lifetime. We also observe that for left-leaning videos, the peaks for isolated, small and large cascades are concentrated near $70\\%$ of view accumulation, while the peaks for right-leaning videos are more distributed over different stages of view accumulation. Moreover, in all topics, more right-leaning tweet cascades start before the viewing half-life, regardless of cascade size. For example, in {\\tt Abortion}\\xspace, 41\\% of right-leaning isolated cascades started before the viewing half-life, compared to 25\\% of left-leaning isolated cascades. This is consistent with the observation that views of right-leaning videos unfold much more slowly (see \\Cref{fig:obs_temporal}(a) and \\Cref{fig:obs_accumulation}(a)), allowing tweet cascades to start at earlier stages of the view accumulation process. 
The difference in cascade start time is significant between left- and right-leaning videos in all topics for isolated and small cascades ($p < 0.001$, \\Cref{table:measures} rows 14-15) and is significant for {\\tt Abortion}\\xspace and {\\tt Gun Control}\\xspace for large cascades ($p < 0.001$, \\Cref{table:measures} row 16).\n\n\\subsection{Networks among Early Adopters on Twitter}\n\\label{ssec:network_obs}\n\n\\input{images\/measure_network}\n\n\\Cref{fig:obs_network}(c) and (d) show the distributions of the Gini coefficient of indegree centrality and the Gini coefficient of closeness centrality. The Gini coefficient of indegree centrality of left-leaning videos' networks is significantly larger in {\\tt Gun Control}\\xspace and {\\tt BLM}\\xspace ($p < 0.001$, \\Cref{table:measures} row 17). For the Gini coefficient of closeness centrality, the MWU test results indicate that left-leaning videos' networks have a significantly greater Gini index than right-leaning videos' networks across all topics ($p < 0.05$, \\Cref{table:measures} row 18). \n \nThis suggests that the networks of early adopters of left-leaning videos have more users serving as hubs, i.e., users who are followed by more early adopters and have shorter paths to other early adopters. This also suggests that in the networks of early adopters of right-leaning videos, users facilitate the dissemination of political information more equally, which is consistent with the findings in~\\cite{conover2012partisan}. As an example, we present the follower networks of the early adopters of one left-leaning video and one right-leaning video in {\\tt Abortion}\\xspace in \\Cref{fig:obs_network}(a,b). For a fair comparison, we sample two videos with similar network sizes (57 and 59 for the left- and right-leaning video, respectively). The left-leaning video has a Gini coefficient of indegree centrality of 0.918 and a Gini coefficient of closeness centrality of 0.90; the right-leaning video has 0.748 and 0.536, respectively. It can be observed that the sharing of this left-leaning video relies more on central users who are followed by more early adopters and have shorter paths to others. In the follower network of the early adopters of the right-leaning video, on the other hand, the indegree and closeness centrality distributions are more equal.\n\nApart from the reported metrics, we have also performed preliminary examinations of the correlations and trends between two or more metrics. An example linking relative engagement to the view and tweet counts is presented in Section I of~\\cite{appendix}. We have not seen consistent and salient patterns that are not already captured by the individual measures. \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\textbf{Introduction}}\nIn this work, focusing on the decumulation phase, we fix the annuitization time and investigate the optimal investment-consumption strategies prior to annuitization in a Brownian market model with a time-dependent mortality rate. We follow the framework developed in \\cite{kn:Ger3}, in which a target for the consumption during the decumulation phase and a target for the terminal accumulated wealth are considered. Moreover, motivated by \\cite{kn:G}, we consider a minimum guarantee for the final annuity. \n\nAssuming a fixed rate of consumption throughout the whole period before annuitization, which is usually a long period, is far from optimal. 
On the other hand, it is quite reasonable to consider a minimum consumption rate for a retiree to cover the essential expenses. Therefore, we consider the rate of consumption as a control variable which varies between the two limits $C_2$ and $C_1$, where $C_1 > C_2$. We will see from the simulation results that considering a variable consumption rate yields much higher final annuities. To compare the optimal portfolios obtained from different scenarios of admissible consumption, we take into account the present market value of the future cash flows before and after the annuity purchase. \n\nGerrard et al. \\cite{kn:Ger1} study the post-retirement portfolio optimization problem when the loss function is defined entirely in terms of the wealth process and the annuitization time and consumption rate are fixed. In \\cite{kn:Ger2}, they relax the fixed consumption rate assumption. In a similar framework, Di Giacinto et al. \\cite{kn:G} explore the optimal investment strategy when a minimum guarantee for the final wealth is assumed and the consumption rate is fixed. When the running cost is neglected, they obtain a closed-form solution. \nIn this paper, in the same framework, we develop a numerical algorithm based on the policy iteration method to find approximations of the optimal investment-consumption strategies when the consumption rate is variable and a running cost for the loss function based on the consumptions is considered. The policy iteration method is a well-studied method for finding approximations of solutions of optimal control problems, see \\cite{kn:FL}, \\cite{kn:SR}, in which the value function and the optimal policies are derived iteratively so as to converge to the correct solution of the corresponding HJB equation. \n \nThe post-retirement portfolio optimization problem has been studied by many scholars considering different utility or loss functions, different control variables and different wealth dynamics. Furthermore, different constraints are considered on the control variables and on the wealth dynamics. We mention here a few related works. Milevsky and Young \\cite{kn:MY} study extensively the optimal annuitization and investment-consumption problem for a time-dependent mortality function in the \\emph{all or nothing} market and also the more general \\emph{anything anytime} market, where gradual annuitization strategies are allowed. Milevsky et al. \\cite{kn:MMY} derive the optimal investment and annuitization strategies for a retiree whose objective is to minimize the ruin probability when the consumption rate is fixed. Stabile \\cite{kn:S} studies the optimal investment-consumption problem and investigates the optimal time for purchasing the annuity subject to a constant force of mortality, considering different utility functions defined on the consumption and the final annuity. Blake et al. \\cite{kn:BCD} compare the immediate purchase of the annuity at the retirement age with distribution programs involving differing exposures to equities during retirement. Albrecht and Maurer \\cite{kn:AM} compare immediate annuitization with the income drawdown option and determine the probability of running out of money before the uncertain date of death. Gerrard et al. \\cite{kn:Ger3}, by considering targets for the consumption during the decumulation phase and a target for the final wealth, investigate the optimal annuitization time together with the optimal consumption-investment strategies. \n\nThe article is organized as follows. 
In the next section, we describe the market model, the loss function and the set of admissible strategies. In the third section, to apply the dynamic programming principle, we specify the value function; furthermore, by considering a safety level for the wealth process, we refine the set of admissible strategies and write the HJB equation obtained via the dynamic programming principle for the optimal control problem. In section 4, we present the numerical algorithm used to find approximations of the solution of the HJB equation. Finally, in section 5, we present the simulation results for the final annuity, the optimal investment-consumption strategies and the optimal wealth process. \n\n\n\\section{The Market Model}\nWe consider a Brownian market model consisting of a risky and a risk-less asset with the dynamics: \n\\begin{align}\n&dS_t=S_t(\\mu dt+\\sigma dB_t), \\nonumber\\\\\n&dA_t=rA_tdt,\\nonumber\n\\end{align}\nwhere $B(\\cdot)$ is a Brownian motion on the filtered probability space $(\\Omega, \\mathcal{F}, \\mathbb{F}, \\mathbf{P})$ and $r$ is the fixed interest rate. So, the risky asset is a geometric Brownian motion with constant volatility $\\sigma$ and expected return $\\mu=r+\\sigma \\beta$, in which $\\beta $ is its Sharpe ratio.\n\nAt any time $t$ and when the fund value is $x$, let $y(t,x)$ and $1-y(t,x)$ denote the proportions of the fund's portfolio that are invested in the risky and in the riskless asset, respectively. Moreover, denote the consumption rate at the point $(t, x)$ by $c(t,x)$. Therefore, we have the following dynamics of the wealth process (or the fund value) \n\\begin{align}\ndX_t=\\{[y(\\mu-&r)+r]X_t-c \\} dt+\\sigma y X_t dB_t,\\label{wealtheq}\\\\\n&X(0)=x_0. \\nonumber\n\\end{align}\n\nLet the decumulation phase be denoted by the time interval $[0,T]$. We consider the functions of two variables $y:[0,T]\\times[0,\\infty )\\rightarrow \\mathbb R$ and $c:[0,T]\\times[0,\\infty )\\rightarrow [C_2, C_1]$ as control variables that must be determined by solving an optimal control problem. We allow the variable $y$ to be greater than one, $y>1$, or negative, $y<0$, which correspond to borrowing from the money market and short-selling of the risky asset, respectively. However, our numerical results indicate that short-selling does not appear in the optimal investment strategy. \nWe restrict the consumption rate to the interval $[C_2, C_1]$. Here $C_1$ is the desired rate of consumption and is usually set equal to the annuity that is purchasable with the accumulated wealth at retirement, and $C_2$ corresponds to the minimum required consumption during the decumulation phase. \n\nOur market model can be extended in several directions. Among many other models, Boulier et al. \\cite{kn:BHT} consider a model with a stochastic interest rate. Han and Hung \\cite{kn:HH} equip the market model with an inflation rate, which has important consequences for the optimal investment strategy. Gao \\cite{kn:Ga}, in a stochastic interest rate framework, considers a market model that consists of three assets: a risky asset, a risk-less asset and a bond. Hainaut and Deelstra investigate the optimal time for annuity purchase in a market model with jump-diffusion dynamics when the economic utility function is replaced by the expected present value operator. Considering jump-diffusion dynamics, which is better supported by empirical data, is the aim of our ongoing work. 
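\n\nTo make the wealth dynamics (\\ref{wealtheq}) concrete, the following is a minimal Euler--Maruyama simulation sketch; the parameter values and function names are illustrative assumptions, not calibrated choices:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef simulate_wealth(x0, y, c, mu=0.08, r=0.03, sigma=0.2,\n                    T=15.0, n=1500, seed=0):\n    # dX = ([y(mu - r) + r] X - c) dt + sigma y X dB, with\n    # y(t, x) and c(t, x) given as callables (feedback controls).\n    rng = np.random.default_rng(seed)\n    dt = T \/ n\n    X = np.empty(n + 1)\n    X[0] = x0\n    for k in range(n):\n        t, x = k * dt, X[k]\n        yk, ck = y(t, x), c(t, x)\n        drift = (yk * (mu - r) + r) * x - ck\n        X[k + 1] = (x + drift * dt\n                    + sigma * yk * x * np.sqrt(dt) * rng.standard_normal())\n    return X\n\n# Example: constant investment fraction, fixed consumption rate.\npath = simulate_wealth(100.0, lambda t, x: 0.3, lambda t, x: 5.0)\n\\end{verbatim}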
\n\nWe consider a retiree aged 60 who is going to postpone the annuitization until the age of 75, which means $T=15$. The main purpose of a retiree in postponing the annuity purchase is to reach the desired annuity. Let $F$ be the target for the terminal accumulated wealth, $X(T)$, with which the retiree can purchase the desired annuity at the age of 75. In other words, if $a_{75}$ is the actuarial value of the unitary lifetime annuity at the age of 75, then $\\frac{ F}{a_{75}}$ would be the desired annuity. Moreover, during the decumulation phase, part of the retiree's concern is on consumption, and he or she would like to consume at the maximum rate $C_1$. \nTherefore, we write the loss function as the sum of two terms. One term, the running cost, is formulated in terms of the consumption and is taken to be the squared distance from the desired rate $C_1$; this term is weighted by the impact factor $\\kappa$. The second term is based on the final annuity and is written as the squared distance from the specified target $\\frac{ F}{a_{75}}$.\n\nSo, supposing the mortality rate after retirement, $\\mu(t), t\\geq 60 $, to be independent of the asset dynamics, we write the loss function as\n \n\\begin{align}\n\\kappa \\int^T_0 e^{-\\int^t_0 (\\rho+\\mu(s))ds } (C_1-c(t))^2dt+ e^{-\\int^T_0(\\rho+\\mu(s))ds }\\left(\\frac{ F-X(T)}{a_{75}}\\right)^2, \\label{lossfunc}\n\\end{align}\n in which the constant factor $\\rho$ is the subjective discount factor. \n\nFor any $0\\leq t\\leq T$, consider the filter $\\mathbb{F}^t:=(\\mathcal{F}^t_s)_{s\\in [t,T]}$ where $\\mathcal{F}^t_s$ is the $\\sigma$-algebra generated by the random variables $(B(u)-B(t))_{u\\in [t,s]}$. \nThen at any time $t\\geq 0$, we choose the admissible strategies $\\pi_1(\\cdot)\\in \\mathcal{L}^2(\\Omega \\times [t,T]; \\mathbb{R})$ and $ \\pi_2(\\cdot)\\in \\mathcal{L}^2(\\Omega \\times [t,T]; [C_2, C_1])$, which are $\\mathbb{F}^t$-progressively measurable. It should be noted that if the variables $y$ and $c$ in Eq. (\\ref{wealtheq}) are replaced by $\\pi_1(\\cdot)$ and $\\pi_2(\\cdot)$, respectively, the equation will have a unique strong solution, see \\cite[Section 5.6.C]{kn:KS}. We denote this solution by $X(\\cdot ; t, x, \\pi_1(\\cdot), \\pi_2(\\cdot)).$ \n\nThe main feature of our framework is the design of a minimum guarantee for the final annuity or, equivalently, for the terminal wealth. So, if we denote by $S$ the safety level of the terminal wealth, then the set of admissible investment and consumption strategies reduces to \n\\begin{align}\n\t\\tilde{\\Pi}_{ad}(t):=\\{&\\pi_1(\\cdot)\\in \\mathcal{L}^2(\\Omega \\times [t,T]; \\mathbb{R}), \\pi_2(\\cdot)\\in \\mathcal{L}^2(\\Omega \\times [t,T]; [C_2, C_1])\\nonumber\\\\\n\t & | \\pi_1(\\cdot), \\pi_2(\\cdot)\\; are \\;\\mathbb{F}^t- prog. 
\n\\section{\\textbf{The HJB Equation}}\n\nConsidering the loss function (\\ref{lossfunc}), we define the objective functional $\\tilde{J}$, for any $(t,x)\\in [0,T]\\times \\mathbb R^+$, on the set of admissible strategies $ \\tilde{\\Pi}_{ad}(t)$, as\n\n\\begin{align}\n\\tilde{J}(t,x;\\pi_1(\\cdot), \\pi_2(\\cdot)):=\\mathbb{E}^x &[\\kappa \\int^T_t e^{-\\int^s_0 (\\rho+\\mu(u))du } (C_1-\\pi_2(s))^2ds\\nonumber\\\\\n&+ e^{-\\int^T_0(\\rho+\\mu(s))ds }\\left(\\frac{F-X(T)}{a_{75}}\\right)^2 ],\\label{oldfunc} \n\\end{align}\nwhere $X(s), t\\leq s\\leq T$, is the wealth process obtained by employing the investment and consumption strategies $\\pi_1(\\cdot), \\pi_2(\\cdot)\\in \\tilde{\\Pi}_{ad}(t)$ and $\\mathbb{E}^x$ denotes the expectation subject to $X(t)=x$. \nOur goal is to find the admissible strategies that minimize the above functional. Therefore, to solve this stochastic optimal control problem via dynamic programming, we define the value function \n\\begin{align}\n\\tilde{V}(t,x):=\\inf_{\\pi_1(\\cdot), \\pi_2(\\cdot)\\in \\tilde{\\Pi}_{ad}(t)} \\tilde{J}(t,x;\\pi_1(\\cdot), \\pi_2(\\cdot)).\\label{value.f}\n\\end{align}\n\nThe minimum guarantee constraint at the terminal time consequently imposes a constraint on the wealth process before the annuitization. Indeed, the following curve acts as a barrier for the wealth process: \n\\begin{align}\nS(t)=\\frac{C_2}{r}-(\\frac{C_2}{r}-S)e^{-r(T-t)}, \\qquad 0\\leq t\\leq T. \\label{safe}\n\\end{align}\nWhenever the wealth process hits this curve, investing any portion of the wealth in the risky asset or consuming at any rate higher than $C_2$ yields a positive probability that the final wealth is less than $S$ at the terminal time $T$. Therefore, from then on, the only admissible investment and consumption policies are the null strategy and consumption at the minimum rate, $\\pi_1(s)=0$ and $\\pi_2(s)=C_2$, $ s\\geq t$, respectively, which keep the wealth process on this curve until the terminal time $T$.\n\nMoreover, at any time $t\\in [0,T]$, the wealth amount \n\\begin{align}\nF(t)=\\frac{C_1}{r}+(F-\\frac{C_1}{r})e^{-r(T-t)}, \\label{targfun}\n\\end{align}\nguarantees reaching the desired target $F$ at the terminal time $T$ by investing the whole portfolio during the time interval $[t,T]$ in the risk-less asset and consuming at the maximum rate $C_1$. This strategy keeps the wealth process on the curve $\\{F(t), 0\\leq t\\leq T \\}$ and yields zero cost. Therefore, it is the optimal strategy. \n\nThese observations indicate that the curves $\\{F(t), 0\\leq t\\leq T\\}$ and $\\{S(t), 0\\leq t\\leq T\\}$ act as attractors of the wealth process. In other words, if the initial value lies in $[S(0), F(0)]$, then the wealth process remains in a domain bounded at the top and at the bottom by the curves $\\{F(t), 0\\leq t\\leq T\\}$ and $\\{S(t), 0\\leq t\\leq T\\}$, respectively. 
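\n\nFor illustration, the two curves are easily evaluated numerically; the sketch below is ours, and the safety and target levels are placeholder values chosen only to satisfy $S<F$.\n\\begin{verbatim}\nimport numpy as np\n\nr, T = 0.03, 15.0\n\ndef safety_curve(t, C2, S):\n    # S(t) = C2\/r - (C2\/r - S) exp(-r (T-t)), Eq. (safe)\n    return C2 \/ r - (C2 \/ r - S) * np.exp(-r * (T - t))\n\ndef target_curve(t, C1, F):\n    # F(t) = C1\/r + (F - C1\/r) exp(-r (T-t)), Eq. (targfun)\n    return C1 \/ r + (F - C1 \/ r) * np.exp(-r * (T - t))\n\nt = np.linspace(0.0, T, 781)\nC1 = 6.5155; C2 = 0.5 * C1          # rates of the simulation section\nS_, F_ = 35.0, 125.0                # placeholder safety\/target levels\nassert np.all(safety_curve(t, C2, S_) <= target_curve(t, C1, F_))\n\\end{verbatim}\nThe admissible region for the wealth process is the band between the two curves.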
\n\nDue to the definition (\\ref{value.f}) of the value function $\\tilde{V}$, the Bellman principle leads to the following HJB equation, see \\cite[Chapter 11]{kn:O},\n\\begin{align}\n\\inf_{y\\in \\mathbb R,\\, c\\in [C_2, C_1]}\\{ \\frac{\\partial \\tilde{V}}{\\partial t}&+ \\tilde{\\mathcal{A}}\\tilde{V}(t,x)+ \\kappa e^{-\\int^t_0(\\rho+\\mu(s))ds } (C_1-c)^2 \\}=0, \\label{main1}\n\\end{align}\nwhere $\\tilde{\\mathcal{A}}$ is the generator of the diffusion process (\\ref{wealtheq}), \n$$\\tilde{\\mathcal{A}} =\\{(y[\\mu-r]+r)x-c\\} \\frac{\\partial }{\\partial x}+ \\frac{1}{2} \\sigma^2 y^2 x^2 \\frac{\\partial^2 }{\\partial x^2}.$$ \nAdditionally, the definition of $\\tilde{V}$ yields the following boundary conditions\n\\begin{align}\n\\tilde{V}(T,x)= &e^{-\\int^T_0(\\rho+\\mu(s))ds }\\left(\\frac{F-x}{a_{75}}\\right)^2, \\qquad \\qquad \\qquad \\quad x\\in [S,F],\\nonumber\\\\\n\\tilde{V}(t,F(t))=&0,\\qquad \\qquad \\qquad\\qquad \\qquad \\qquad \\qquad\\qquad \\quad t\\in [0,T],\\label{mainboundary}\\\\\n\\tilde{V}(t,S(t))=&e^{-\\int^T_0(\\rho+\\mu(s))ds }\\left(\\frac{ F-S}{a_{75}}\\right)^2\\nonumber\\\\\n&+\\kappa (C_1-C_2)^2\\left(\\int^T_te^{-\\int^s_0(\\rho+\\mu(u))du } ds\\right), \\quad t\\in [0,T].\\nonumber\n\\end{align}\n\nThe above equation is defined on an irregular domain whose top and bottom borders are curved. Since we will apply a finite difference method as part of our algorithm, we perform a change of variables that converts the domain into a rectangle.\nTo this end, we define the diffeomorphism $\\mathcal{L}: \\mathcal{C} \\rightarrow \\mathcal{C}^{\\prime}$, where $\\mathcal{C}:=\\{(t,x) | t\\in[0,T], S(t)\\leq x\\leq F(t) \\}$, $\\mathcal{C}^{\\prime}:=\\{(t,z) | t\\in[0,T], S\\leq z\\leq F \\}$ and \n\n\\begin{align}\n(t,x)\\rightarrow (t,z)&=\\mathcal{L}(t,x)=(t,\\mathcal{L}_1(t,x))\\nonumber\\\\\n&:=\\left(t,xe^{r(T-t)}+\\left[C_1+(C_2-C_1)\\frac{F(t)-x}{F(t)-S(t)} \\right]\\frac{1-e^{r(T-t)}}{r}\\right).\\nonumber\n\\end{align}\nNotice that \n\\begin{align}\n\\mathcal{L}_1(t,S(t))=S, \\qquad \\mathcal{L}_1(t,F(t))=F. \\label{borders}\n\\end{align}\nNow, we define the process $Z(\\cdot)$ as \n$$Z(t):=\\mathcal{L}_1(t,X(t))=X(t)G(t)+H(t), \\quad t\\in [0,T], $$\nin which \n$$G(t)=e^{r(T-t)}+ \\frac{C_1-C_2}{F(t)-S(t)} \\frac{1-e^{r(T-t)}}{r},$$\n$$H(t)=\\left(C_1+ \\frac{F(t)(C_2-C_1)}{F(t)-S(t)}\\right) \\frac{1-e^{r(T-t)}}{r}.$$ \nApplying the It\\^{o} formula, we find that the process $Z_t$ satisfies the dynamics \n\\begin{align}\ndZ_t=&X_tdG_t+G_tdX_t+dH_t \\nonumber\\\\\n=&X_t\\left[\\left(\\frac{C_1-C_2}{F(t)-S(t)}-r\\right)e^{r(T-t)}+\\frac{(C_1-C_2)(r(F-S)-C_1+C_2) }{(F(t)-S(t))^2}\\frac{1-e^{-r(T-t)}}{r}\\right]dt\\nonumber\\\\\n&+G(t)\\left\\{[(y(\\mu-r)+r)X_t-c ]dt+\\sigma yX_tdB_t \\right \\}\\nonumber\\\\\n&+\\left[C_1+\\frac{F(t)(C_2-C_1)}{F(t)-S(t)}\\right]e^{r(T-t)}dt\\nonumber\\\\\n&+(C_2-C_1)\\frac{e^{-r(T-t)}-1}{r}\\frac{(F(t)-S(t))(rF-C_1)-F(t)[r(F-S)-C_1+C_2]}{(F(t)-S(t))^2}dt.\\label{zproc}\n\\end{align}\nMoreover, after a few manipulations we get\n$$X_t=\\frac{r(F(t)-S(t))e^{-r(T-t)}Z_t-[F(t)C_2-S(t)C_1](e^{-r(T-t)}-1) }{r(F(t)-S(t))+(C_1-C_2)(e^{-r(T-t)}-1)}. $$\n\nWe define for any $\\pi_1(\\cdot), \\pi_2(\\cdot) \\in \\Pi_{ad}(t)$ the process \n$$Z(s;t,z,\\pi_1(s), \\pi_2(s)):=\\mathcal{L}_1(s,X(s;t,x,\\pi_1(s), \\pi_2(s))), \\qquad t\\leq s\\leq T,$$ \nin which $Z(t)=z=\\mathcal{L}_1(t,x)$. 
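\n\nAs a quick illustration, the transformation and the identities (\\ref{borders}) can be checked numerically; the sketch below is our own, with placeholder values for $S$ and $F$.\n\\begin{verbatim}\nimport numpy as np\n\nr, T = 0.03, 15.0\nC1 = 6.5155; C2 = 0.5 * C1\nS_, F_ = 35.0, 125.0                # placeholder safety\/target levels\n\ndef S_curve(t):\n    return C2 \/ r - (C2 \/ r - S_) * np.exp(-r * (T - t))\n\ndef F_curve(t):\n    return C1 \/ r + (F_ - C1 \/ r) * np.exp(-r * (T - t))\n\ndef G(t):\n    e = np.exp(r * (T - t))\n    return e + (C1 - C2) \/ (F_curve(t) - S_curve(t)) * (1 - e) \/ r\n\ndef H(t):\n    e = np.exp(r * (T - t))\n    w = F_curve(t) * (C2 - C1) \/ (F_curve(t) - S_curve(t))\n    return (C1 + w) * (1 - e) \/ r\n\ndef L1(t, x):                       # z = x G(t) + H(t)\n    return x * G(t) + H(t)\n\nfor t in np.linspace(0.0, T, 7):\n    assert np.isclose(L1(t, S_curve(t)), S_)   # Eq. (borders)\n    assert np.isclose(L1(t, F_curve(t)), F_)\n\\end{verbatim}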
Then, because of the relations (\\ref{borders}), the set of admissible strategies becomes \n\\begin{align}\n\\Pi_{ad}(t)=\\{&\\pi_1(\\cdot)\\in \\mathcal{L}^2(\\Omega \\times [t,T]; \\mathbb{R}), \\pi_2(\\cdot)\\in \\mathcal{L}^2(\\Omega \\times [t,T]; [C_2, C_1]) \\nonumber\\\\ \n&| \\pi_1(\\cdot), \\pi_2(\\cdot)\\; \\text{are } \\mathbb{F}^t\\text{-prog. meas.}, \\; S\\leq Z(s)\\leq F, \\; s\\in [t,T] \\}.\\nonumber\n\\end{align}\nAnalogous to (\\ref{oldfunc}), for any $(t,z)\\in \\mathcal{C}^{\\prime}$ we have the following functional, defined on $\\Pi_{ad}(t)$, whose terminal term is expressed through the process $Z(\\cdot;t,z,\\pi_1(\\cdot), \\pi_2(\\cdot))$, \n\\begin{align} \nJ(t,z,\\pi_1(\\cdot), \\pi_2(\\cdot)):=\\mathbb{E}^z [&\\kappa\\int^T_t e^{-\\int^s_0(\\rho+\\mu(u))du } (C_1-\\pi_2(s))^2ds\\nonumber\\\\\n&+e^{-\\int^T_0(\\rho+\\mu(s))ds } \\left(\\frac{F-Z(T)}{a_{75}}\\right)^2 ]. \\nonumber\n\\end{align}\nNow, defining the value function $V$ as \n$$V(t,z):=\\inf_{\\pi_1(\\cdot), \\pi_2(\\cdot)\\in\\Pi_{ad}(t)} J(t,z;\\pi_1(\\cdot), \\pi_2(\\cdot)), \\quad (t,z)\\in \\mathcal{C}^{\\prime},$$\nwe get the following HJB equation on the domain $\\mathcal{C}^{\\prime}$ \n\\begin{align}\n\\inf_{y\\in\\mathbb R,\\, c\\in [C_2, C_1]}\\left\\{ \\frac{\\partial V}{\\partial t}+ \\mathcal{A}V(t,z) +\\kappa e^{-\\int^t_0(\\rho+\\mu(s))ds } (C_1-c)^2\\right \\}=0, \\label{maineq}\n\\end{align}\nwhere $\\mathcal{A}$, the generator of the diffusion process $Z(\\cdot)$, is written as \n\\begin{align}\n\\mathcal{A}=&\\{K(t,z) G^{\\prime}(t)+G(t) [(y(\\mu-r)+r)K(t,z)-c ]+H^{\\prime}(t) \\} \\frac{\\partial }{\\partial z}\\nonumber\\\\\n&+\\frac{1}{2}G^2(t)\\sigma^2y^2K^2(t,z) \\frac{\\partial^2 }{\\partial z^2},\\nonumber\n\\end{align}\nin which \n$$K(t,z)=\\frac{r(F(t)-S(t))e^{-r(T-t)}z-[F(t)C_2-S(t)C_1](e^{-r(T-t)}-1) }{r(F(t)-S(t))+(C_1-C_2)(e^{-r(T-t)}-1)}$$ \nis the wealth $x$ expressed in the new variable $z$.\n\nSince during the decumulation phase the loss function depends only on consumption, we get the following boundary conditions, similar to (\\ref{mainboundary}), \n\\begin{align}\n V(T,z)=& e^{-\\int^T_0(\\rho+\\mu(s))ds}\\left(\\frac{F-z}{a_{75}}\\right)^2,\\qquad \\qquad \\quad z\\in [S,F],\\nonumber\\\\\nV(t,F)=&0, \\qquad \\qquad\\qquad \\qquad \\qquad \\qquad \\qquad \\quad\\quad t\\in [0,T],\\label{boundary}\\\\\nV(t,S)=&e^{-\\int^T_0(\\rho+\\mu(s))ds }\\left(\\frac{F-S}{a_{75}}\\right)^2\\nonumber\\\\\n&+\\kappa (C_1-C_2)^2\\left(\\int^T_te^{-\\int^s_0(\\rho+\\mu(u))du } ds\\right), \\quad t\\in [0,T].\\nonumber\n\\end{align}\n\nThe coefficients of Eq. (\\ref{maineq}) are Lipschitz continuous with respect to the state variables $(t,z)$ and the control variables $y$ and $c$; therefore, they are bounded on the domain. Moreover, the control variable $c$ is chosen from a compact interval. Although we do not assume a bound for the variable $y$, in practice it remains inside a bounded interval as well. Furthermore, the boundaries of the domain are attractors of the wealth process. From these observations, we can conclude that the above equation has the non-degeneracy property. Now, \\cite[Theorem 2.1]{kn:BS} yields the strong comparison property of Eq. (\\ref{maineq}), which implies the existence of a viscosity solution for this equation. \n\n\n\\section{\\textbf{Numerical Algorithm}}\n\nWe apply the policy iteration method, in which the value function $V$ and the control variables $y, c$ are improved iteratively until they converge to the solution of the HJB Eq. (\\ref{maineq}). 
\n For fixed control functions, we apply a fully implicit, backward-in-time finite difference scheme to obtain the value function. \n\nIn discretizing the domain $\\mathcal{C}^{\\prime}= [0, T] \\times [S, F]$, the time horizon $[0, T]=[0, 15]$ is divided into 780 subintervals of length $\\Delta t=\\frac{1}{52}$, which corresponds to one week. Moreover, the interval $[S, F]$ is divided into $N+1$ subintervals with $N$ interior nodes, each subinterval having length $\\Delta z$. Starting from the last column $t=T-\\Delta t$ inside the domain $\\mathcal{C}^{\\prime}$, we apply to each column the following algorithm. \\\\ \n\n\\textbf{ I)} \\emph{Initial value:} As a starting point, we can take the investment function $y=\\frac{1}{2}$ and the consumption rate $c=\\frac{C_1+C_2}{2}$.\n\n\\textbf{II)} \\emph{Policy evaluation:} For the given investment and consumption functions $y, c$, apply a fully implicit finite difference scheme to solve the PDE (\\ref{maineq}) for the value function $V$ with the corresponding boundary conditions (\\ref{boundary}). \n\n\\textbf{III)} \\emph{Policy improvement:} For fixed $t$, at every node $(t,z_j), 2\\leq j\\leq N+1$, i.e. at all nodes on the fixed column $t=t_i$ that lie inside the domain $\\mathcal{C}^{\\prime}$, solve the static optimization problem in (\\ref{maineq}) to find new investment and consumption functions $y^{new}$ and $c^{new}$, respectively. \n\n\\textbf{IV)} \\emph{End of iteration:} With the functions $y, c$ and $V$ obtained from steps (II) and (III), return to step (II) whenever, at any node $(t,z_j), 2\\leq j\\leq N+1$, the absolute value of the left hand side of Eq. (\\ref{maineq}) is not less than $10^{-6}$, or one of the following convergence criteria is not satisfied: \n$$\\max_{j} \\abs{c^{new}(t,z_j)-c^{old}(t,z_j)}\\leq \\{ \\max_{j} \\abs{c^{new}(t,z_j)} \\}\\times 10^{-6}, $$\n$$\\max_{j} \\abs{y^{new}(t,z_j)-y^{old}(t,z_j)}\\leq \\{ \\max_{j} \\abs{y^{new}(t,z_j)} \\}\\times 10^{-6}, $$\n$$\\max_{j} \\abs{V^{new}(t,z_j)-V^{old}(t,z_j)}\\leq \\{ \\max_{j} \\abs{V^{new}(t,z_j)} \\}\\times 10^{-6}. $$\n\n\\textbf{V)} \\emph{Policy iteration in the preceding column:} If $t > 0$, go to the time step $t-\\Delta t$ and return to step (II). \\\\\n\nTo apply the finite difference method in step (II), we use the forward scheme for the first derivatives in time and space and the central difference scheme for the second derivative in space. A schematic implementation of the iteration loop (I)--(V) is sketched below. 
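\n\nThe following sketch of the loop is ours and only fixes the control flow; the routines solve_pde (step II) and improve_policy (step III) are hypothetical placeholders for the finite difference solve and the static optimization described above.\n\\begin{verbatim}\nimport numpy as np\n\nC1 = 6.5155; C2 = 0.5 * C1\n\ndef policy_iteration(solve_pde, improve_policy, V_terminal, N,\n                     n_time=780, tol=1e-6):\n    # Backward-in-time sweep over the columns, steps (I)-(V).\n    V_next = V_terminal.copy()\n    columns = [V_terminal]\n    for i in reversed(range(n_time)):        # step V: preceding column\n        y = np.full(N, 0.5)                  # step I: initial policies\n        c = np.full(N, 0.5 * (C1 + C2))\n        V_old = np.full(N, np.inf)\n        while True:\n            V = solve_pde(i, y, c, V_next)               # step II\n            y_new, c_new = improve_policy(i, V, V_next)  # step III\n            done = (                                     # step IV\n                np.max(np.abs(c_new - c)) <= tol * np.max(np.abs(c_new))\n                and np.max(np.abs(y_new - y)) <= tol * np.max(np.abs(y_new))\n                and np.max(np.abs(V - V_old)) <= tol * np.max(np.abs(V))\n            )\n            y, c, V_old = y_new, c_new, V\n            if done:\n                break\n        V_next = V\n        columns.append(V)\n    return columns[::-1]\n\\end{verbatim}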
To write the scheme explicitly, denote by $V(i,j)=V(t_i,z_j)$ the value function at the node $(t_i,z_j), \\; 1\\leq i\\leq 781, 1\\leq j\\leq N+2$; the discretization of the PDE (\\ref{maineq}) at a node $(t_i,z_j)$ inside the domain $\\mathcal{C}^{\\prime}$ is then given by\n\n\\begin{align}\n\\frac{V(i+1,j)-V(i,j)}{\\Delta t}&+\\kappa e^{-\\int^{t_i}_0(\\rho+\\mu(s))ds } (C_1-c(t_i,z_j))^2\\nonumber\\\\\n&+\\left(\\frac{\\alpha(i,j)}{\\Delta z}+\\frac{\\beta(i,j)}{(\\Delta z)^2}\\right) V(i,j+1)+\\frac{\\beta(i,j)}{(\\Delta z)^2}V(i,j-1)\\nonumber\\\\\n&-\\left(\\frac{\\alpha(i,j)}{\\Delta z}+2\\frac{\\beta(i,j)}{(\\Delta z)^2}\\right)V(i,j)=0, \\label{discrete}\n\\end{align}\nin which \n\n$$\\alpha(i,j)=K(t_i,z_j) G^{\\prime} (t_i)+G(t_i) [(y(\\mu-r)+r)K(t_i,z_j)-c ]+H^{\\prime} (t_i) ,$$ \n$$\\beta(i,j)=\\frac{1}{2}G^2(t_i)\\sigma^2y^2K^2(t_i,z_j).$$\nThen, applying the fully implicit scheme to the nodes of a fixed column, $(t_i,z_j), 2\\leq j\\leq N+1$, we get the linear system $Aw=b$, in which $w=(V(t_i,z_2), \\cdots ,V(t_i, z_{N+1}))$ is the unknown vector and $A=[a_{ij}]_{N\\times N}$ and $b=[b_j]_{1\\times N}$ are a tridiagonal matrix and a positive vector, respectively, with the entries \n$$a_{j,j}=\\frac{1}{\\Delta t}+\\frac{\\alpha(i,j+1)}{\\Delta z}+\\frac{2\\beta(i,j+1)}{(\\Delta z)^2},\\quad 1\\leq j\\leq N,$$\n$$a_{j,j+1}=-\\frac{\\alpha(i,j+1)}{\\Delta z}-\\frac{\\beta(i,j+1)}{(\\Delta z)^2}, \\quad 1\\leq j\\leq N-1, $$\n$$a_{j,j-1}=-\\frac{\\beta(i,j+1)}{(\\Delta z)^2}, \\qquad \\qquad \\qquad 2\\leq j\\leq N,$$ \n$$b_j=\\frac{V(i+1,j+1)}{\\Delta t}+\\kappa e^{-\\int^{t_i}_0(\\rho+\\mu(s))ds } (C_1-c(t_i,z_{j+1}))^2, \\quad 2\\leq j\\leq N,$$\n$$b_1=\\frac{V(i+1,2)}{\\Delta t}+\\kappa e^{-\\int^{t_i}_0(\\rho+\\mu(s))ds } (C_1-c(t_i,z_2))^2+\\frac{ \\beta(i,2)}{(\\Delta z)^2}V(t_i,S).$$\n\nIt is clear from the definitions of the functions $\\alpha$ and $\\beta$ that the discrete representation (\\ref{discrete}) has the positive coefficient property, see \\cite{kn:FL} for the definition of this property, which in turn implies that the matrix $A$ is an M-matrix.\nThe positive coefficient property and the boundary conditions (\\ref{boundary}) readily yield the stability, consistency and monotonicity of our numerical scheme, see \\cite{kn:FL}. Therefore, from \\cite{kn:B} and \\cite{kn:BS}, the convergence of our numerical approximations to the viscosity solution of Eq. (\\ref{maineq}) is guaranteed. A sketch of assembling and solving this tridiagonal system is given below; here we prove only the stability property. 
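\n\nThe following sketch (ours) assembles the tridiagonal system above for one column and solves it with a banded solver; the arrays alpha, beta and source are assumed to hold $\\alpha(i,j)$, $\\beta(i,j)$ and the $\\kappa$-weighted consumption term at the interior nodes.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import solve_banded\n\ndef implicit_column(V_next, alpha, beta, source, dt, dz, V_S):\n    # Solve A w = b for the interior nodes j = 2, ..., N+1 of one column.\n    # V_next: previous-in-time column (interior nodes), V_S: V(t_i, S);\n    # the boundary value at z = F is zero and needs no extra term.\n    N = len(V_next)\n    diag = 1.0 \/ dt + alpha \/ dz + 2.0 * beta \/ dz**2\n    upper = -(alpha \/ dz + beta \/ dz**2)     # a_{j,j+1}\n    lower = -beta \/ dz**2                    # a_{j,j-1}\n    b = V_next \/ dt + source\n    b[0] += (beta[0] \/ dz**2) * V_S          # left boundary contribution\n    ab = np.zeros((3, N))                    # banded storage of A\n    ab[0, 1:] = upper[:-1]\n    ab[1, :] = diag\n    ab[2, :-1] = lower[1:]\n    return solve_banded((1, 1), ab, b)\n\\end{verbatim}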
\n\n\\begin{prop}\nThe discretization (\\ref{discrete}) satisfies the $l_{\\infty}$- stability property, \n\\begin{align}\n\\norm{V(t_i,\\cdot)}_{\\infty} \\leq \\left(\\frac{F-S}{a_{75}}\\right)^2+ \\kappa (C_1-C_2)^2, \\qquad 1\\leq i\\leq 780.\\label{stability}\n\\end{align}\n\n\\begin{proof}\nWe have for every $1\\leq i\\leq 780$ and $2\\leq j\\leq N+1$\n\n\\begin{align}\nV(i,j)=&V(i+1,j)+\\Delta t \\alpha(i,j)\\frac{V(i,j+1)-V(i,j)}{\\Delta z}\\nonumber\\\\\n&+\\Delta t \\beta(i,j) \\frac{V(i,j+1)-2V(i,j)+V(i,j-1)}{(\\Delta z)^2}\\nonumber\\\\\n&+\\Delta t\\kappa e^{-\\int^{t_i}_0(\\rho+\\mu(s))ds } (C_1-c(t_i,z_j))^2.\\nonumber\n\\end{align}\nSo, we have \n\n\\begin{align}\n\\abs{V(i,j)}&\\left(1+\\frac{\\Delta t}{\\Delta z}\\alpha(i,j)+2\\frac{\\Delta t}{(\\Delta z)^2}\\beta(i,j)\\right ) \\leq \\nonumber\\\\\n&\\norm{V(i+1,\\cdot)}_{\\infty}+\\norm{V(i,\\cdot)}_{\\infty} \\left (\\frac{\\Delta t}{\\Delta z}\\alpha(i,j)+2\\frac{\\Delta t}{(\\Delta z)^2}\\beta(i,j)\\right ) \\nonumber\\\\\n&+\\Delta t\\kappa e^{-\\int^{t_i}_0(\\rho+\\mu(s))ds }(C_1-C_2)^2.\\nonumber\n\\end{align}\nIf $V(i,j_1)=max_{1\\leq j\\leq N+2}V(i,j)=\\norm{V(i,\\cdot)}_{\\infty}$, then we can write \n\\begin{align}\n\\norm{V(i,\\cdot)}_{\\infty}&\\left (1+\\frac{\\Delta t}{\\Delta z}\\alpha(i,j_1)+2\\frac{\\Delta t}{(\\Delta z)^2}\\beta(i,j_1)\\right ) \\leq \\nonumber\\\\\n&\\norm{V(i+1,\\cdot)}_{\\infty}+\\norm{V(i,\\cdot)}_{\\infty} \\left (\\frac{\\Delta t}{\\Delta z}\\alpha(i,j_1)+2\\frac{\\Delta t}{(\\Delta z)^2}\\beta(i,j_1)\\right ) \\nonumber\\\\\n&+\\Delta t \\kappa e^{-\\int^{t_i}_0(\\rho+\\mu(s))ds }(C_1-C_2)^2,\\nonumber\n\\end{align}\nwhich means \n$$\\norm{V(i,\\cdot)}_{\\infty}\\leq \\norm{V(i+1,\\cdot)}_{\\infty}+\\Delta t\\kappa e^{-\\int^{t_i}_0(\\rho+\\mu(s))ds }(C_1-C_2)^2.$$\nThen, from the terminal condition of the value function we obtain the bound (\\ref{stability}). \n\\end{proof}\n\n\\end{prop}\n\n\n\n\n\n\\section{\\textbf{Simulation Results }}\nFor comparison purposes, we consider the same market parameters as in \\cite{kn:Ger1} and \\cite{kn:G}. So, we consider the interest rate $r=0.03$ and the expected return and the volatility of the risky asset $\\mu=0.08$ and $\\sigma=0.15$, respectively, which implies a Sharpe ratio equal to $\\beta=0.33$. Furthermore, we consider a retiree with age 60 and with the initial wealth $x_0=100$ and set the decumulation period equal to $T=15$ years. The maximum consumption rate is set to be $C_1=6.5155$, which is assumed to be equal to the payments of a lifetime annuity purchasable by a retiree at age 60 with the wealth $x_0=100$, regarding the mortality rate given in this section. \n\nWe consider four different rates for the minimum consumption, $C_2=C_1$, $C_2=\\frac{3}{4}C_1$, $C_2=\\frac{2}{3}C_1$ and $C_2=\\frac{1}{2}C_1$, which correspond to different minimum cost of living of a retiree. Moreover, we consider the target level $F=1.75C_1 a_{75}$ and the safety level $S=0.5 C_1a_{75}$ for the wealth process, which in the literature correspond to the medium level of risk aversion, see \\cite{kn:Ger1} and \\cite{kn:G}. Actually, in this level, the final annuity that the retiree will get is at most 1.75 times $C_1$ and at least half of $C_1$. \n\nBy employing the dynamics (\\ref{wealtheq}) and the optimal investment and consumption functions obtained from the previous section, we simulate the optimal wealth process. To this end, we apply a same 5000 generated stream of pseudo random numbers for four different ranges of consumptions. 
\nUsing the simulated optimal wealth processes for the impact factor of the running cost $\\kappa=0.5$, we present the histograms of the final annuities, Figs. \\ref{fig1}, \\ref{fig2}, \\ref{fig4} and \\ref{fig5}, and show some percentiles of the optimal wealth amounts, Figs. \\ref{fig3}, \\ref{fig6}, \\ref{fig7} and \\ref{fig10}, and of the optimal investment strategies, Figs. \\ref{fig8}, \\ref{fig9}, \\ref{fig11} and \\ref{fig12}, during the decumulation period. \n\nThe graphs show that although the optimal strategies are similar across the different scenarios for the admissible consumption interval, we get higher final annuities when this interval is wider. The remarkable indication of the simulation results is the difference between the fixed consumption case and the three other scenarios. Indeed, the results show that by allowing a variable, although restricted, consumption rate, we can obtain a much more valuable final annuity and higher percentiles of the wealth amounts. \n\n \\begin{figure}\n \\begin{multicols}{2}\n \n \\hspace{-2.5cm}\n \\begin{minipage}{0.8\\textwidth}\n \\centering\n {\\footnotesize Final Annuity Amount}\\\\\n \\includegraphics[width=0.9\\textwidth]{FA05K05.png}\n \\caption{ \\footnotesize{$C_2=\\frac{1}{2}C_1$}}\n \\label{fig1}\n \\end{minipage}\n\n\\vspace{1cm}\n \\hspace{-2.5cm} \n\\begin{minipage}{0.8\\textwidth}\n \\centering\n {\\footnotesize Final Annuity Amount}\\\\ \n \\includegraphics[width=0.9\\textwidth]{FA075K05.png}\n \\caption{ \\footnotesize{$C_2=\\frac{3}{4}C_1$}}\n \\label{fig2}\n \\end{minipage}\n\n\n\\vspace{1cm}\n \\hspace{-2.5cm}\n \\begin{minipage}{0.8\\textwidth}\n \\centering\n {\\footnotesize Optimal Wealth Amounts}\\\\\n \\includegraphics[width=0.9\\textwidth]{OWA05K05.png}\n \\caption{ \\footnotesize{$C_2=\\frac{1}{2}C_1$}}\n \\label{fig3}\n \\end{minipage}\n\n \\begin{minipage}{0.8\\textwidth}\n \\centering\n {\\footnotesize Final Annuity Amount}\\\\\n \\includegraphics[width=0.9\\textwidth]{FA23K05.png}\n \\caption{ \\footnotesize{$C_2=\\frac{2}{3}C_1$}}\n \\label{fig4}\n \\end{minipage} \n\n\\vspace{1cm}\n \\begin{minipage}{0.8\\textwidth}\n \\centering\n {\\footnotesize Final Annuity Amount}\\\\\n \\includegraphics[width=0.9\\textwidth]{FAfixK05.png}\n \\caption{ \\footnotesize{$C_2=C_1$}}\n \\label{fig5}\n \\end{minipage}\n\n\\vspace{1cm}\n \\begin{minipage}{0.8\\textwidth}\n \\centering\n {\\footnotesize Optimal Wealth Amount}\\\\\n \\includegraphics[width=0.9\\textwidth]{OWA23K05.png}\n \\caption{ \\footnotesize{$C_2=\\frac{2}{3}C_1$}}\n \\label{fig6}\n \\end{minipage}\n\n \\end{multicols}\n\\end{figure}\n\n\n\n \\begin{figure}\n \\begin{multicols}{2}\n\n\\vspace{-4cm}\n \\hspace{-2.5cm}\n \\begin{minipage}{0.8\\textwidth}\n \\centering\n {\\footnotesize Optimal Wealth Amount}\\\\\n \\includegraphics[width=0.9\\textwidth]{OWA075K05.png}\n \\caption{ \\footnotesize{$C_2=\\frac{3}{4}C_1$}}\n \\label{fig7}\n \\end{minipage}\n\n\\vspace{1cm}\n \\hspace{-2.5cm}\n\\begin{minipage}{0.8\\textwidth}\n \\centering\n {\\footnotesize Optimal Investment Strategy}\\\\\n \\includegraphics[width=0.9\\textwidth]{OP05K05.png}\n \\caption{ \\footnotesize{$C_2=\\frac{1}{2}C_1$}}\n \\label{fig8}\n \\end{minipage}\n\n\\vspace{1cm}\n \\hspace{-2.5cm}\n \\begin{minipage}{0.8\\textwidth}\n \\centering\n {\\footnotesize Optimal Investment Strategy}\\\\ \n \\includegraphics[width=0.9\\textwidth]{OP075K05.png}\n \\caption{ \\footnotesize{$C_2=\\frac{3}{4}C_1$}}\n \\label{fig9}\n \\end{minipage}\n \n\n 
\\begin{minipage}{0.8\\textwidth}\n \\centering\n {\\footnotesize Optimal Wealth Amount}\\\\\n \\includegraphics[width=0.9\\textwidth]{OWAfixK05.png}\n \\caption{ \\footnotesize{$C_2=C_1$}}\n \\label{fig10}\n \\end{minipage}\n\n\\vspace{1cm}\n \\begin{minipage}{0.8\\textwidth}\n \\centering\n {\\footnotesize Optimal Investment Strategy}\\\\ \n \\includegraphics[width=0.9\\textwidth]{OP23K05.png}\n \\caption{ \\footnotesize{$C_2=\\frac{2}{3}C_1$}}\n \\label{fig11}\n \\end{minipage}\n\n \\vspace{1cm}\n \\begin{minipage}{0.8\\textwidth}\n \\centering\n {\\footnotesize Optimal Investment Strategy}\\\\\n \\includegraphics[width=0.9\\textwidth]{OPfixK05.png}\n \\caption{ \\footnotesize{$C_2=C_1$}}\n \\label{fig12}\n \\end{minipage}\n \n \\end{multicols}\n\\end{figure}\n\nTo compare our results for the different scenarios, we consider the market present value of the cash flows that a retiree would receive before and after the annuitization. The cash flow before the annuitization consists of withdrawals from the fund, which in \\cite{kn:Ger1} and \\cite{kn:G} are assumed constant and in this paper vary between the two limits $C_2$ and $C_1$. Furthermore, we consider the accumulated wealth at the time of death of an individual if this occurs before the annuitization time $T$. The cash flow after the annuitization consists of the constant payments of the lifetime annuity purchased with the accumulated wealth at the terminal time $T$. \n\nFor the mortality rate, we consider the Gompertz-Makeham distribution, which for an individual of age $t$ is written as \n$$\\mu(t)=A+BC^t.$$\nThe parameters $A, B$ and $C$ are those defined by the Belgian regulator for the pricing of life annuities purchased by males, as in \\cite{kn:HD}. So, we assume $A=0.00055845$, $B=0.000025670$ and $C=1.1011$.\n\nTherefore, considering $X(t)$ and $c(t)$ as the fund value and the consumption rate at time $t$, respectively, the market present value of the future cash flows is written as \n\\begin{align}\nP.V.=&\\int^{\\tau_d\\wedge15}_0e^{-\\rho t }c(t) dt+e^{-\\rho\\tau_d}X_{\\tau_d} 1_{\\{\\tau_d <15\\}}\\nonumber \\\\\n& +\\int^{\\tau_d}_{ \\tau_d\\wedge 15} \\frac{X(15)}{a_{75}}e^{-\\rho t }dt,\\nonumber\n\\end{align}\nin which $\\tau_d$ is the time of death. Then, given that the time of death is independent of the filtration of financial returns, we get\n\\begin{align}\nP.V.=&\\int^{15}_0e^{-\\int^t_0(\\rho+\\mu(60+s))ds }\\left(c(t)+\\mu(60+t)X(t)\\right) dt\\nonumber\\\\\n&+\\int^{T_m-60}_{15} \\frac{X(15)}{a_{75}} e^{-\\int^t_0(\\rho+\\mu(60+s))ds }dt, \\nonumber\n\\end{align}\nin which $T_m=100$ is the assumed maximum lifespan and $\\mu(t)$ is the mortality rate of an individual of age $t$. \n\\begin{table} \n\\caption{\\footnotesize{Distribution of final annuities, consumptions and cash flows}} \n\\begin{tabular}{l c c c c c}\n\\hline \n& $\\kappa$\n&\\footnotesize{ $C_2 =\\frac{1}{2}C_1$}\n&\\footnotesize{$C_2=\\frac{2}{3}C_1$} \n&\\footnotesize{ $C_2=\\frac{3}{4}C_1$}\n&\\footnotesize{ $C_2=C_1$}\\\\\n\\hline\nmean FA$^1$ &0.25 &9.59 &9.43 &9.28 &5.69 \\\\\n &0.5 &9.08 &8.96 &8.85 & \\\\ \n &0.75 &8.77 &8.67 &8.57 & \\\\\n &1 &8.54 &8.45 &8.37 & \\\\\n\\hline\nsd.
 FA &0.25 &2.14 &2.24 &2.33 &2.77 \\\\\n &0.5 &2.43 &2.50 &2.55 & \\\\\n &0.75 &2.58 &2.63 &2.67 & \\\\\n &1 &2.68 &2.71 &2.74 & \\\\\n\\hline\nmean PV$^2$ &0.25 &105.21 &104.15 &103.25 &96.17 \\\\\n &0.5 &105.34 &104.59 &103.92 & \\\\\n &0.75 &105.06 &104.46 &103.91 & \\\\\n &1 &104.71 &104.22 &103.76 & \\\\\n\\hline\nsd. PV &0.25 &13.95 &13.77 &13.49 &12.17 \\\\\n &0.5 &14.11 &14.14 &14.08 & \\\\\n &0.75 &14.12 &14.19 &14.21 & \\\\\n &1 &14.16 &14.18 &14.21 & \\\\\n\\hline\n\\footnotesize{Prob(FA $> C_1$) (\\%)} &0.25 &89.62 &88.10 &86.22 &37.44 \\\\\n &0.5 &83.72 &82.42 &80.98 & \\\\\n &0.75 &80.10 &79.08 &77.92 & \\\\\n &1 &77.42 &76.54 &75.76 & \\\\\n\\hline\n\\footnotesize{Prob(FA $=\\frac{1}{2}C_1$) (\\%)} &0.25 &1.62 &1.94 &2.16 &6.94 \\\\\n &0.5 &1.66 &2.60 &2.86 & \\\\\n &0.75 &1.44 &1.94 &3.24 & \\\\\n &1 &0.94 &2.18 &3.26 & \\\\ \n\\hline \nmean consumption &0.25 &5.7752 &5.7466 &5.7341 &6.5155 \\\\\n &0.5 &5.9869 &5.9698 &5.9595 & \\\\\n &0.75 &6.0873 &6.0756 &6.0668 & \\\\\n &1 &6.1470 &6.1398 &6.1330 & \\\\\n\\hline \n\\multicolumn{6}{l}{\\footnotesize $^1$FA = Final Annuity \\quad $^2$PV = Present Value}\n\\end{tabular}\n\\label{tab2}\n\\end{table}\nTable \\ref{tab2} gives more information on the distribution, i.e. the mean and standard deviation, of the final annuity and of the market present value of the cash flows for the different consumption ranges and the different impact factors of the running cost, $\\kappa=0.25, 0.5, 0.75, 1$; for the fixed consumption case $C_2=C_1$ the running cost vanishes, so only one value is reported. Furthermore, the table reports the mean of the consumption before the annuitization and the probability that a member gets a final annuity higher than the annuity purchasable at retirement, $C_1$. \n\nAs expected, when the minimum consumption rate $C_2$ decreases, i.e. the admissible consumption interval becomes wider, we get a more valuable final annuity and higher cash flows. Furthermore, in this case, the probability of getting a final annuity higher than $C_1$ increases. \n\nThe interesting point of this table is the big difference between the outcomes of the fixed consumption rate case, $C_2=C_1$, and the three other cases. When we assume a fixed consumption rate, we get a much less valuable final annuity and, moreover, the present value of the cash flows is remarkably smaller than the corresponding outcomes for a variable consumption rate. However, since in the fixed consumption case the cash flow before the annuitization, and of course before the time of death, is constant, the standard deviation of the cash flows is smaller than in the other cases. \n\nFor a fixed admissible consumption interval, when we compare the results corresponding to different impact factors of the running cost, we observe a finer final annuity (higher mean with lower standard deviation) for lower impact factors. This is quite reasonable: when the impact factor $\\kappa$ is small, more weight is devoted to the second term of the loss function, which is based on the final annuity. The surprising result for a fixed admissible consumption interval is that the maximum present value of the cash flows is attained for $\\kappa=0.5$. This can be interpreted through the two terms of the loss function. For a smaller impact factor $\\kappa$, more weight of the loss function is devoted to the final annuity, and therefore a higher cash flow after the annuitization is expected. Conversely, for a greater impact factor $\\kappa$, more weight is devoted to consumption, and therefore a higher cash flow before the annuitization is expected. \n\nThe last rows of the table report the mean of the consumption before the annuitization. 
It is clear that, for any fixed admissible consumption interval, the greater the impact factor of the running cost, the more weight is given to the first term of the loss function and therefore the higher the consumption. It is surprising, however, that when the minimum consumption rate $C_2$ decreases, the mean consumption slightly decreases as well. This can be explained by the previous rows of the table: when the admissible consumption interval becomes wider, the probability of hitting the lower border slightly increases. \n\n\\section{\\textbf{Conclusions}}\nStudying the post-retirement optimal investment-consumption problem with a minimum guarantee on the final annuity for a member of a defined contribution plan yields an HJB equation on a bounded domain. Applying the policy iteration method to this equation, the value function and the optimal strategies are improved iteratively. To approximate the value function in one step of our algorithm, we apply a backward-in-time implicit finite difference scheme. Our scheme has the positive coefficient property, which guarantees convergence to the viscosity solution. \n\nThe simulation results show that by allowing a variable, although very restricted, consumption rate, we can obtain a much finer final annuity. Moreover, considering the cash flows before and after the annuitization, the variable consumption rate yields a higher present value of the cash flows. \n\nIn future work, we are going to consider jump-diffusion dynamics, which are better supported by the empirical data. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nThe identification of motion as a manifestation of biological life dates back to the earliest records of science itself. The Greek physician Erasistratos of Ceos studied biological motion on the length scale of muscles already in the 3rd century BC. He imagined muscles to function like a piston, contracting and relaxing by pneumatic action. It was not until the invention of the microscope in the 17th century by van Leeuwenhoek that this theory could be invalidated by Swammerdam's observation that muscles contract at constant volume.\n\nConcerning biological motion on a microscopic scale, scientists favored concepts of ``living forces'' for many centuries until these were finally ruled out by the observations of the Scottish botanist Robert Brown, who in 1828 found all kinds of matter to undergo erratic motion in suspension. A satisfactory explanation was provided by Einstein in 1905 in terms of the interaction with thermally fluctuating molecules in the surroundings. However, the molecular details remained unknown in the fog of low microscope resolution. Modern experimental techniques \\cite{metha} have lately revealed the causes of sub-cellular motion and transport.\n\nToday we know that every use of our muscles is the collective effort of a class of proteins called myosin that ``walk'' on actin filaments. Generally speaking, we refer to all proteins that convert the chemical energy of ATP (adenosine triphosphate) in a hydrolysis reaction into mechanical work as molecular motors. 
These motors are highly specialized in their tasks and occur in a large variety: ribosomes move along mRNA strands while translating the codons into proteins, dynein is responsible for cilia motion and axonal transport, and kinesins play a key role in cytoskeletal traffic and spindle formation (for an overview see \\cite{howard} and references therein).\n\nWhile the exact details of the molecular structure and function of motor proteins~\\cite{schliwa} remain a topic of ongoing research, on a different level attention has been drawn to phenomena that arise from the collective interaction of many motors. Early research along this line was motivated by mRNA translation, which is managed by ribosomes. Ribosomes are bound to the mRNA strand with one subunit and step forward codon by codon. The codon information is translated into corresponding amino acids that are taken up from the cytoplasm and assembled into proteins. To increase protein synthesis, many ribosomes can be bound to the same mRNA strand simultaneously. This fact might induce collective properties, as was first realized by MacDonald \\cite{macdonald}, who set up a theoretical model for the translation of highly expressed mRNA. The importance of effects caused by the concerted action of many motors can be deduced from a very simple example that has drastic consequences: the slowdown of ribosomes due to steric hindrance caused by another ribosome in front -- comparable to an intracellular traffic jam that might significantly slow down protein synthesis.\n\n\nA theoretical approach to collective phenomena in intracellular traffic will try to simplify the processes of molecular motion down to a single step rate rather than focus on the chemical or mechanical details of motor steps on the molecular level. Then it becomes possible to model and analyze the behavior of several motors with the tools of many-body and statistical physics. We will start this review with a short introduction on this single step model in Sec. \\ref{s_model} before we introduce the totally asymmetric exclusion process (TASEP) as a theoretical model for intracellular transport. Sec. \\ref{s_phase} describes the stationary states and density distributions and their phase diagram as a function of the boundary conditions. After a review of several recent extensions in Sec. \\ref{s_extensions}, we will focus on the competition of TASEP and bulk dynamics in Sec. \\ref{s_pff}. Before concluding, Sec. \\ref{s_outlook} contains further recent developments.\n\n\n\\section{Model and Methods} \\label{s_model}\n\nIn the quest for a theoretical model for the motion of molecular motors, the first and simplest choice may be the use of a Poisson process. The ``Poisson stepper'' is assumed to be an extensionless object advancing stochastically in discrete steps along a one-dimensional periodic lattice. \nThe process is uni-directional, as the position of the stepper can be described by $ x(t)=a \\ n(t)$ with the discrete step size $a$ and the random variable $n(t)$ being an increasing sequence of integers. Step events occur stochastically with a rate $r$ that is constant in both space and time. 
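\n\nSuch a stepper is straightforward to simulate by drawing exponential waiting times; the following sketch is our own illustration and not part of the original analysis.\n\\begin{verbatim}\nimport numpy as np\n\ndef poisson_stepper(r=1.0, t_max=1000.0, a=1.0, seed=0):\n    # Trajectory x(t) = a n(t) of a unidirectional Poisson stepper:\n    # waiting times between steps are exponential with mean 1\/r.\n    rng = np.random.default_rng(seed)\n    times, t = [0.0], 0.0\n    while True:\n        t += rng.exponential(1.0 \/ r)\n        if t > t_max:\n            break\n        times.append(t)\n    times = np.array(times)\n    return times, a * np.arange(len(times))\n\nt_steps, x = poisson_stepper()\nprint(np.diff(t_steps).mean())   # average waiting time, close to 1\/r\n\\end{verbatim}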
Consequently, the average time between two steps is given by the dwell time $\\tau = 1\/r$, and the probability to find the ``Poisson stepper'' at a position $n$ after time $t$ is given by the Poisson distribution \\cite{vankampen}.\n\nHaving defined a model for the translocation of a single motor, we proceed with our original task, which aims at the understanding of collective properties of many motors. Of course, more elaborate models have been established \\cite{ps04,jp97} that account for several rate limiting steps -- examples are the ATP supply or the availability of amino acids for ribosomal mRNA translation. However, the very basic ``Poisson stepper'' is chosen for reasons of simplicity and in order to prevent unnecessary molecular details from masking collective effects. Still, the validity and limitations of this simplification have to be kept in mind.\n\n\nBeing supplied with the dynamics of a single motor, a stage for the concerted action of many can now be set up. This was first done in a pioneering work by MacDonald \\cite{macdonald} and is now widely known as the totally asymmetric exclusion process (TASEP). It consists of a one-dimensional lattice (Fig. \\ref{tasepmodel}) with $N$ sites labeled by $i=1,\\cdots,N$ and with a spacing of $a=L\/N$, where $L$ is the total length of the lattice. For convenience, $L$ is often set to $1$ and the lattice spacing is then referred to as $\\varepsilon=1\/N$.\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{.\/tasepmodel.eps}\n\\caption{Schematic model of TASEP (particles are injected with rate $\\alpha$, move exclusively to the right being subject to hard-core exclusion, and are removed with rate $\\beta$)}\n\\label{tasepmodel}\n\\end{figure}\n\nParticles have an extension of the size of the lattice spacing and are subject to hard-core exclusion due to steric hindrance. Therefore the occupation number $n_i$ of site $i$ can only take the values 0 or 1. Particles on the lattice attempt jumps to their right neighboring site with a rate $r$, which will be set to unity in the following, thereby fixing the reference time scale. The effective frequency of jumps can be much smaller than $r$ when attempted jumps are rejected because the target site is already occupied. The attempted jump rate to the left is zero, since we deal with a totally asymmetric exclusion process, in contrast to the asymmetric and the symmetric exclusion process, where the jump rate to the left is non-zero or even equal to the jump rate to the right, respectively.\n\n\nUnless one uses periodic boundary conditions, specific dynamic rules have to be defined at the boundaries, and these play a crucial role in the solution of the process. Among various other conditions (reflective, open with a blockage), the most common type are open boundaries, which we will use as well: at the left boundary ($i=1$) particles attempt to attach with a rate $\\alpha$, while they detach at the right boundary ($i=N$) with rate $\\beta$. This is equivalent to two additional sites $i=0$ and $i=N+1$ at the boundaries, which are connected to the system by the bulk dynamics described above and are constantly set to the densities $\\alpha$ and $1-\\beta$, respectively.\n\n\nIn spite of its simplicity, TASEP shows a wide range of interesting properties. Since it was propelled into the scope of statistical physicists, \nit has become a paradigm for non-equilibrium physics. 
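\n\nThe dynamics defined above is also easy to explore numerically. The following random-sequential-update sketch is our own illustration; it simulates the open-boundary TASEP and records the time-averaged density profile.\n\\begin{verbatim}\nimport numpy as np\n\ndef tasep(alpha, beta, N=200, sweeps=10**4, seed=0):\n    # Random-sequential updates; returns the averaged density profile.\n    rng = np.random.default_rng(seed)\n    n = np.zeros(N, dtype=int)\n    density = np.zeros(N)\n    for sweep in range(sweeps):\n        for _ in range(N + 1):\n            i = rng.integers(-1, N)      # -1 stands for the entry site\n            if i == -1:                  # injection with rate alpha\n                if n[0] == 0 and rng.random() < alpha:\n                    n[0] = 1\n            elif i == N - 1:             # extraction with rate beta\n                if n[-1] == 1 and rng.random() < beta:\n                    n[-1] = 0\n            elif n[i] == 1 and n[i + 1] == 0:   # bulk jump, rate 1\n                n[i], n[i + 1] = 0, 1\n        if sweep >= sweeps \/\/ 2:         # discard the transient\n            density += n\n    return density \/ (sweeps - sweeps \/\/ 2)\n\nprofile = tasep(alpha=0.3, beta=0.2)  # expect a bulk density near 1 - beta\n\\end{verbatim}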
In contrast to equilibrium systems, TASEP lacks detailed balance but evolves into a steady state in which a non-vanishing current is maintained between the boundaries. Upon varying these boundary conditions, TASEP was found to exhibit phase transitions which -- following general theorems \\cite{wagner} -- are not even allowed for one-dimensional equilibrium systems in the absence of long-range interactions. However, the analysis of non-equilibrium systems is considerably complicated by the lack of universal concepts like the Boltzmann-Gibbs ensemble theory. Feasible methods exist nevertheless and will be explained in the next section.\n\n\\section{Density and Current in Stationary States} \\label{s_phase}\n\nIn analyzing exclusion processes, research can focus on a multitude of different properties. Probably the most obvious ones to address are the density and current distributions in the stationary state. This is motivated by two reasons. On the one hand, density information is intuitively of strong importance for the biological background, as e.g. the ribosome density is connected to the rate of protein synthesis. On the other hand, promising experimental techniques can measure motor densities and may allow for a validation of theoretical models. Of course, quite extensive research has also been devoted to a multitude of different properties like correlation functions \\cite{de93,schuetz93}, relaxation properties \\cite{ds00} or super-diffusive spreading of fluctuations \\cite{bks85}, which will not be the topic of this review. We will focus on analytical methods (supported by numerical simulations) that are designed to investigate spatial density distributions in the stationary state of the system. To this end we will introduce some basic tools that have proven useful in the exploration of TASEP properties. These are based on mean-field approximations and reproduce many results that can also be derived exactly. We are well aware that this approach neglects correlations as included in the exact solutions that have been achieved for the TASEP density profile by applying either recursion relations \\cite{ddm92} or a quantum Hamiltonian formalism with Bethe ansatz \\cite{schuetz01}.\n\n\\subsection{Quantum Mechanics and Statistical Properties}\n\nAs an introduction we will outline some general statistical properties. At any given moment, the system can be found in a certain configuration $\\mu$ made up of the occupation numbers at each lattice site. The next occurring stochastic event (i.e. the jump of one particle to a neighboring site) will therefore change the system to another configuration $\\mu'$. The transition probability $p_{\\mu \\to \\mu'}$ is independent of the way the system had reached the initial configuration. Since there is no memory of the system's history, but any transition probability solely depends on the preceding state, TASEP is a Markov process. In order to describe the system's evolution, we can then use a master equation for the probability to find the system in a certain state:\n\\begin{equation}\n\\frac{dP(\\mu)}{dt}=\\sum_{\\mu' \\neq \\mu} \n\\left[\\omega_{\\mu' \\to \\mu}P_{\\mu'}(t)-\\omega_{\\mu \\to \\mu'}P_{\\mu}(t)\\right] \\;,\n\\end{equation}\nwhere the $\\omega_{\\mu \\to \\mu'}$ are the transition rates from one configuration $\\mu$ to another $\\mu'$. \n\nHow can we now translate this general property into a description of TASEP? 
To this end we will use a convenient notation, which applies methods from the quantum mechanics toolbox in order to formulate the master equation in terms of operators. It was introduced as the ``quantum Hamiltonian formalism'' and allows for exact solutions \\cite{schuetz01}. We introduce operators $\\hat n_i(t)$, which act as occupation number operators, measuring the presence ($n_i=1$) or absence ($n_i=0$) of a particle at site $i$. This results in the Heisenberg equation (for an introduction see e.g. \\cite{schuetz01})\n\\begin{equation}\n\\frac{d}{dt}\\hat n_i(t)= \\hat n_{i-1}(t)[1-\\hat n_i(t)] - \n \\hat n_i(t)[1-\\hat n_{i+1}(t)] \\;,\n\\label{heisenberg}\n\\end{equation}\nwhere the first term on the right hand side constitutes the jump of a particle from the left neighboring site to site $i$ (and thus a particle gain) and the second term a jump from site $i$ to the adjacent lattice site on the right (a particle loss). Note the intrinsic exclusion constraint in both terms, which prevents jump events if the destination site is occupied, i.e. if the expression in brackets equals zero. Expressing these gains and losses in terms of currents, it becomes convenient to use the current operator\n\\begin{equation}\n\\hat j_i(t)=\\hat n_i(t)[1-\\hat n_{i+1}(t)] \\;.\n\\end{equation}\nThis allows one to rewrite (\\ref{heisenberg}) as a discretized form of a continuity equation with the discrete divergence $\\nabla \\hat j_i(t)=\\hat j_i(t) - \\hat j_{i-1}(t)$:\n\\begin{equation}\n\\frac{d}{dt}\\hat n_i + \\nabla \\hat j_i(t)=0 \\;.\n\\label{continuity}\n\\end{equation}\nSimilar equations for the boundaries are readily derived in the same way. Since we are interested in the average density at a certain lattice site, we need to compute the time (or ensemble) average of the operators. Equation (\\ref{heisenberg}) gives an equation of motion for the operator and allows one to solve for the time evolution of $\\langle \\hat n_i(t) \\rangle$. In executing the ensemble average of (\\ref{heisenberg}), two-point correlation functions like $\\langle \\hat n_{i-1}(t)(1-\\hat n_i(t)) \\rangle$ appear on the right hand side. These correlation functions are in turn connected to higher order correlations via their equations of motion. The resulting infinite hierarchy of correlation functions can be solved exactly in special cases only. Generally, one is required to use mean-field approaches.\n\n\\subsection{Mean-Field Solution} \\label{s_mf}\nThe rather blurry term ``mean-field theory'' is based on the concept of using time or space averages (e.g. by neglecting temporal or spatial fluctuations) and has found a wide range of applications in statistical physics (see \\cite{goldenfeld}). In this section, we will explain its implementation for the TASEP and show a possible solution.\nTo point out the use of mean-field theory in TASEP, we look again at the average of the operator $\\hat n_i(t)$. We are interested in the stationary state, and therefore averaging signifies either a time or an ensemble average, since the system is ergodic. Then the average returns the density at the considered site $i$ as $ \\varrho_i = \\langle \\hat n_i \\rangle $. Performing the average over (\\ref{heisenberg}) leaves us with the difficulty of the infinite hierarchy of correlation functions mentioned earlier. The mean-field approximation now consists of neglecting any correlations by setting e.g. $\\langle \\hat n_i \\hat n_j \\rangle = \\langle \\hat n_i \\rangle\n\\langle \\hat n_j \\rangle $ (see \\cite{mahan}). 
In our case, we obtain for the current\n\\begin{equation}\n\\langle \\hat n_{i}(t)(1-\\hat n_{i+1}(t)) \\rangle = \n\\langle \\hat n_{i}(t) \\rangle (1-\\langle\\hat n_{i+1}(t)\\rangle) \\;.\n\\end{equation}\nThis then allows us to rewrite (\\ref{heisenberg}) in the stationary state ($d \\varrho_i(t)\/dt=0$) as\n\\begin{equation}\n0=\\varrho_{i-1}(1-\\varrho_i)-\\varrho_i(1-\\varrho_{i+1}) \\;.\n\\label{e_steady}\n\\end{equation}\nObviously, (\\ref{e_steady}) could easily be solved numerically, since it forms a system of $N$ difference equations. However, it is possible to reduce the set of equations and arrive at an explicit solution by making two observations. First, note that the stationary state condition $d \\varrho_i(t)\/dt=0$ implies the conservation of the current throughout the bulk, as can be seen from (\\ref{e_steady}). Hence, we just have to solve one equation out of the set to determine the stationary, homogeneous current (and density). Second, we use the continuum approximation, which turns the spatial lattice variable quasi-continuous. This is achieved for a large number $N$ of lattice sites on a lattice of normalized length $L=1$. The distribution of sites then approaches a continuum as $\\varepsilon=L\/N \\ll 1$, and $x=i\/N$ is rescaled to the interval $0 \\leq x \\leq 1$. Thereby an expansion in powers of $\\varepsilon$ is allowed:\n\\begin{equation}\n\\varrho(x \\pm \\varepsilon)=\\varrho(x) \\pm \\varepsilon \\partial_x\n\\varrho(x) \n+ \\frac{1}{2}\\varepsilon^2 \\partial_x^2 \\varrho(x) + O(\\varepsilon^3) \\;.\n\\label{e_power}\n\\end{equation}\nUsing this expansion in (\\ref{e_steady}) and in the corresponding equations for the boundaries results in the following first-order differential equation if we neglect all terms of higher order in $\\varepsilon$:\n\\begin{equation}\n(2 \\varrho - 1) \\partial_x \\varrho = 0 \\;. \n\\label{e_ode}\n\\end{equation}\nThe corresponding boundary conditions are $\\varrho(0)=\\alpha$ and $\\varrho(1)=1-\\beta$. Because we have a first-order differential equation that needs to satisfy two boundary conditions, we are evidently concerned with an over-determined boundary value problem. There are three solutions to (\\ref{e_ode}): $\\varrho_\\mathrm{bulk}(x) = 1\/2$ does not satisfy either boundary condition (except for the special case $\\alpha=\\beta=1\/2$), while $\\varrho(x)=C$ can satisfy either the left or the right boundary condition, resulting in $\\varrho_\\mathrm{\\alpha}(x)=\\alpha$ and $\\varrho_\\mathrm{\\beta}(x)=1-\\beta$, respectively.\n\n\\subsection{Phase Diagram and Domain Wall Theory}\n\nTo obtain a global solution satisfying both boundary conditions, the two solutions need to be matched. Consider the case $\\alpha,\\beta < 1\/2$. To meet the boundary condition at both sides, the global density function $\\varrho(x)$ has to be $\\varrho_\\mathrm{\\alpha}$ ($\\varrho_\\mathrm{\\beta}$) in an environment close to the left (right) boundary. Since both $\\varrho_\\mathrm{\\alpha}$ and $\\varrho_\\mathrm{\\beta}$ are uniform, the two solutions do not intersect. At this point, we have to go beyond mean-field theory and assume that at any given time both solutions are valid in non-overlapping areas of the system. 
Where those areas border, they are connected by a sharp domain wall (DW) at position $x_\\mathrm{w}$ (see Fig.~\\ref{f_matching}):\n\\begin{equation} \\label{e_dw}\n\\varrho(x)= \n\\cases{ \\varrho_\\mathrm{\\alpha} & for $0 \\leq x \\leq x_\\mathrm{w} \\;,$ \\cr\n \\varrho_\\mathrm{\\beta} & for $x_\\mathrm{w} \\leq x \\leq 1 \\;.$}\n\\end{equation}\n\\begin{figure}\n\\centering\n\\includegraphics[height=4cm]{.\/matching.eps}\n\\hspace{1 cm}%\n\\includegraphics[height=4cm]{.\/tasep_phase.eps}\n\\caption{({\\bf left}) Schematic density distribution for \n $\\alpha=0.3,\\beta=0.2$: in this situation the particle current\n $j_\\mathrm{\\alpha}$ exceeds the current $j_\\mathrm{\\beta}$, thus\n carrying more particles to the domain wall, which is then shifted to\n the left. ({\\bf right}) Phase diagram of TASEP in the $\\alpha,\\beta$\n phase space, showing a low-density (LD), a high-density (HD) and a maximal\n current (MC) phase}\n\\label{f_matching}\n\\end{figure}\nFrom the dynamics of this domain wall \\cite{kolomeisky-etal:98} we can deduce important information about the system. The key point of domain wall theory is the identification of particle currents as the cause of domain wall motion. If, for example, the current $j_\\mathrm{\\alpha}=\\varrho_\\mathrm{\\alpha}(1-\\varrho_\\mathrm{\\alpha})$ of the left density solution $\\varrho_\\mathrm{\\alpha}$ is higher than the current in the right part of the system (corresponding to $\\alpha>\\beta$), particles are transported to the DW from the left end faster than they can head on to the right. Thus, the domain wall is shifted to the left. The system fills up until finally the whole bulk density has taken the value of $\\varrho_\\mathrm{\\beta}$, except for a small boundary layer \\footnote{The boundary layer is necessary in order to fulfill both boundary conditions. Its extent is finite for small systems, but will vanish in the thermodynamic limit $N \\to \\infty$.} at the system's left boundary. The opposite happens if $j_\\mathrm{\\beta}>j_\\mathrm{\\alpha}$, which is the case for $\\alpha<\\beta$. Hence we can empirically state that the boundary with the smaller rate acts as a bottleneck and imposes its density distribution on the system. To quantify this behavior, one can use a traveling wave solution \\cite{kolomeisky-etal:98} of the form $\\varrho(x-Vt)$ to obtain the domain wall velocity as\n\\begin{equation}\nV = \\frac{j_\\mathrm{\\beta}-j_\\mathrm{\\alpha}}\n {\\varrho_\\mathrm{\\beta}-\\varrho_\\mathrm{\\alpha}}=\\beta - \\alpha \\;.\n\\end{equation}\nTo arrive at a phase diagram in the $\\alpha,\\beta$-space, we analyze the bulk density. A positive DW velocity is obtained for $\\alpha<\\beta$ and pushes the DW to the right side of the system, resulting in a bulk density smaller than $1\/2$, called the low-density phase (LD). On the contrary, $\\beta<\\alpha$ leads to a high-density phase (HD), as illustrated in the phase diagram (Fig.~\\ref{f_matching}). The LD and HD phases are connected by a first-order transition. Note that this phase diagram is equally obtained by the above-mentioned exact methods.\n\nThe transition line $\\alpha=\\beta<1\/2$ in the phase diagram requires special treatment. The domain wall velocity vanishes here, but Monte Carlo simulations show that the DW actually makes random steps to either side. 
This behavior is caused by the stochasticity of the input and output events, which are described as rate processes. Hence, the DW makes random steps to either side with equal probability, being nothing else than the famous random walk in a domain with reflecting boundaries. An average over a sufficiently long time will therefore result in a homogeneous probability density over space. The stationary density profile will just be a linear slope connecting the two boundary conditions:\n\\begin{equation}\n\\varrho(x)=\\alpha+(1-\\beta-\\alpha)x \\;.\n\\end{equation}\nNote that mean-field theory neglects fluctuations and hence fails to predict this density distribution, giving instead a stationary domain wall. However, coming back to the picture of the domain wall as a random walker, the linear density distribution can readily be made plausible. Interpreting the random walk as free diffusion of the domain wall and keeping in mind that the density constraints $\\varrho(0)=\\alpha$ and $\\varrho(1)=1-\\beta$ hold at all times, the linear density profile is easily derived as the solution of the one-dimensional diffusion equation with these boundary conditions.\n\nFinally, consider an increase of e.g. $\\alpha$ to values that exceed $1\/2$, while $\\beta>1\/2$ (crossing the boundary from the upper left to the upper right quadrant in the phase diagram). In this situation particles are removed sufficiently fast from the system, and the supply of particles on the left side acts as a bottleneck limiting density and current. For $\\alpha<1\/2$ any increase in $\\alpha$ results in an increased current (think of a highway, where an additional car results in higher overall traffic). But above the critical value $\\alpha_\\mathrm{C}=1\/2$ any increase in particle input cannot raise the current further \\footnote{As $j=\\varrho(1-\\varrho)=\\varrho-\\varrho^2$ has a maximum at $\\varrho=1\/2$. This density-current relation is a convenient tool to characterize traffic processes, and its plot is often referred to as the fundamental diagram.}; a higher bulk density would rather cause the current to diminish again (as an additional car further slows down traffic during rush hour). As a result the bulk will keep its current maximum at $\\varrho=1\/2$, and a boundary layer will form at the left side of the system to match the boundary condition. This is of course nothing else than the bulk solution $\\varrho_\\mathrm{bulk}$ with two boundary layers, and it was baptized the maximal current phase (MC). It is reached via second-order transitions for values $\\alpha>1\/2$ and $\\beta>1\/2$. A more rigorous treatment of this behavior can be gained by computing the collective velocity of the particles \\cite{kolomeisky-etal:98}.\n\n\\section{Biologically Motivated Generalizations of TASEP} \n\\label{s_extensions}\n\nThe adoption of lattice gas models for biological systems was followed by a variety of efforts to fit TASEP to different realistic environments. For that purpose some of the simplifying assumptions TASEP is based on had to be questioned. For example, experimental observations have shown that ribosomes typically cover an area on the mRNA that exceeds the lattice spacing by a multiple. To account for this situation, the particles in TASEP have to extend over several lattice sites. However, it was found that this does not change the phase diagram qualitatively \\cite{szl03}. 
Another direction of research went back to the original field of MacDonald to elucidate the importance of initiation and elongation of ribosomes \\cite{heinrich} or the formation of mRNA loops to facilitate the back transport of ribosomes from the termination site \\cite{chou03}. \n\n\nWhile the particle interaction in TASEP is limited to hard-core potentials, even a small increase in the interaction radius -- as it could be caused by charged molecules -- leads to qualitative changes in the phase diagram \\cite{ps99}. It was shown that lattice gases with short-range repulsive interactions exhibit a density-current relation with two local maxima, in contrast to simple TASEP, which leads to one maximum. This behavior results in a qualitative change of the phase diagram, which is enriched by four more regions, one being a minimal-current phase.\n\nFurther work was dedicated to the scenario of the interaction of different species of molecular motors that move in opposite directions. This can either happen on the same filament, when two different particles are able to surpass each other with a jump rate that differs from the jump rate of either particle to a free site \\cite{efgm95}, or on two adjacent one-dimensional filaments \\cite{pp01}. In both cases, spontaneous symmetry breaking was observed.\n\nIntracellular transport along cytoskeletal filaments has also served as a source of inspiration for driven lattice gas models. While in the TASEP model motors can only bind and unbind at the left and the right boundary, respectively, cytoskeletal motors are known to detach from the track to the cytoplasm \\cite{howard}, where they perform Brownian motion and subsequently reattach to the track. The interplay between diffusion in the cytoplasm and directed motion along the filament was studied \\cite{lkn01} both in open and closed compartments, focusing on anomalous drift and diffusion behavior, and on the maximal current and traffic jams as a function of the motor density.\n\nIn \\cite{pff03} it was realized that the on-off kinetics may not only give rise to quantitative changes in the transport efficiency but also to a novel class of driven lattice gas models. It was shown that the interplay between bulk on-off kinetics and driven transport results in a stationary state exhibiting phase separation. This was achieved by an appropriate scaling of the on-off rates which ensures that particles travel a finite fraction of the lattice even in the limit of large systems. Then, particles spend enough time to ``feel'' their mutual interaction and, eventually, produce collective effects. In the following section, we will review the results of these studies \\cite{pff03,pff04}.\n\n\n\\section{Phase Coexistence} \\label{s_pff}\n\nThe essential features of cytoskeletal transport are the possibility of bulk attachment and detachment and a finite residence time on the lattice. The latter can be understood as an effect of thermal fluctuations that may overcome the binding energy of the motors, which is only of the order of several $k_\\mathrm{B} T$. Hence, attachment and detachment are stochastic processes whose dynamic rules have to be defined. 
Parmeggiani {\\it et\n al.} \\cite{pff03} chose to use Langmuir kinetics (LK) known\nas adsorption-desorption kinetics of particles on a one- or\ntwo-dimensional lattice coupled to a bulk reservoir \\cite{vilfan}.\nParticles can adsorb at empty sites and desorb from occupied sites and\nmicroscopic reversibility demands that the kinetic rates obey detailed\nbalance leading to an evolution towards an equilibrium steady state\ndescribable by standard concepts of equilibrium statistical mechanics.\nIn this sense the choice of LK is especially tempting as we are now\nfaced with the competition of two representatives of both equilibrium\nand non-equilibrium systems. The system -- in the following referred\nto as TASEP\/LK -- is defined as follows: the well-known TASEP is\nextended with the possibility of particles to attach to the filament\nwith rate $\\omega_\\mathrm{A}$ and to detach from an occupied lattice\nsite to the reservoir with rate $\\omega_\\mathrm{D}$. According to the\ntype of ensemble (canonical, grand canonical) the reservoir is either\nfinite or infinite. Here, the reservoir is assumed to be infinite and\nhomogeneous throughout space and time. The density on a lattice\nreached in the equilibrium state of LK is only dependent on the ratio\n$K=\\omega_\\mathrm{D}\/\\omega_\\mathrm{A}$ and is completely uncorrelated\nin both space and time for neglection of any particle interaction\nexcept hard-core. This is justified by the assumption that the\ndiffusion in the cytoplasm is fast enough to flatten any deviations of\nthe homogeneous reservoir density. The resulting density profile on\nthe lattice is homogeneous and given as Langmuir isotherm\n$\\varrho_\\mathrm{L}=K\/(K+1)$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{.\/pffmodel.eps}\n\\caption{Schematic model of TASEP\/LK: \n the TASEP is extended by possible particle attachment and detachment\n in the bulk with rate $\\omega_\\mathrm{A}, \\omega_\\mathrm{D}$}\n\\label{f_pff}\n\\end{figure}\nIf we now consider the combination of TASEP and LK into the model\ndisplayed in Fig.~\\ref{f_pff}, attention has to be paid to the\ndifferent statistical nature of both processes. TASEP evolves into a\nnon-equilibrium state carrying a finite current. Since particles are\nconserved in the bulk, the system is very sensitive to the boundary\nconditions, whereas LK as a equilibrium process is expected to be\nrobust to any boundary effects especially for large systems. Combining\nboth processes would thus lead to a trivial domination of LK as the\nbulk rates $\\omega_\\mathrm{A}$ and $\\omega_\\mathrm{D}$ that apply to a\nlarge number of bulk sites become predominant over the rates $\\alpha$\nand $\\beta$ that only act on the two boundary sites. To observe any\ninteresting behavior (i.e. real interplay) between the two dynamics,\none needs competition. A prerequisite for the two processes to compete\nare comparable jump rates. To ensure that the rates are of the same\norder independently of the system size, an appropriate scaling is\nneeded. To this end a $N$-independent global detachment rate\n$\\Omega_\\mathrm{D}$ is introduced, while the local rate per site\nscales as\n\\begin{equation}\n\\omega_\\mathrm{D}=\\frac{\\Omega_\\mathrm{D}}{N} \\;.\n\\label{e_scaling}\n\\end{equation}\nFor the attachment one proceeds similarly. What does this scaling\nsignify physically? 
What does this scaling signify physically? For an explanation it is
instructive to have a look at the time scales involved: a particle on
the lattice performs a given move on an average time scale, which is
the inverse of that move's rate. Therefore, a particle spends an
average time $\tau \approx 1/\omega_\mathrm{D}$ on the lattice before
it detaches. Bearing in mind that the TASEP jump rate is set to unity,
a particle will jump to its adjacent site after a typical time of one
time unit. Therefore the particle will travel a number $N_\mathrm{T} =
1/\omega_\mathrm{D}$ of sites before leaving the lattice. Compared to
the lattice length this corresponds to a fraction of
$n_\mathrm{T}=N_\mathrm{T}/N=1/(N \omega_\mathrm{D})$. In order to
keep this fraction finite in the thermodynamic limit,
$\omega_\mathrm{D}$ needs to scale as defined in (\ref{e_scaling}).
Only if the fraction $n_\mathrm{T}$ is finite can a given particle
experience interactions with other particles and give rise to
collective phenomena \cite{pff04}.

\subsection{Mean Field Solution of TASEP/LK}

To obtain density and current distributions of the TASEP/LK, we again
use a mean-field approach and proceed along the lines of Sec.~\ref{s_mf}.

First of all, we need to account for the Langmuir kinetics. This is
done by adding the following terms to the Heisenberg equation
(\ref{heisenberg}) of the simple TASEP to obtain
\begin{equation}
\frac{d}{dt}\hat n_i(t) = 
\hat n_{i-1}(t)(1-\hat n_{i}(t))-\hat n_{i}(t)(1-\hat n_{i+1}(t))
-\omega_\mathrm{D} \hat n_i(t)+\omega_\mathrm{A} (1-\hat n_i(t)) \;,
\end{equation}
where the first added term captures the detachment events and the
latter the attachment. Neglecting correlations as done before gives
for the stationary state
\begin{equation}
0 = \varrho_{i-1}(1-\varrho_i)-\varrho_i(1-\varrho_{i+1})-
 \omega_\mathrm{D} \varrho_i + \omega_\mathrm{A} (1-\varrho_i) \;.
\label{e_steady_pff}
\end{equation}
The boundary conditions are not altered by LK. Using again the power
series expansion (\ref{e_power}) and keeping in mind the scaling
(\ref{e_scaling}) of the on and off rates, we obtain the following ODE
in terms of the global rates:
\begin{equation}
\frac{\varepsilon}{2}\partial_x^2 \varrho + (2\varrho - 1)\partial_x \varrho - 
\Omega_\mathrm{D} \varrho + \Omega_\mathrm{A} (1-\varrho)=0 \;.
\label{e_ode_pff}
\end{equation}
The ratio $K=\Omega_\mathrm{A}/\Omega_\mathrm{D}$ between the
attachment and detachment rates will prove an important parameter in
the analysis of this differential equation. Since the case $K \neq 1$
is considerably more involved in its mathematical analysis, we refer the
reader to reference \cite{pff04} and restrict our discussion to the
case $K=1$. In the thermodynamic limit ($\varepsilon \to 0$) and with
$\Omega_\mathrm{D}=\Omega_\mathrm{A}=\Omega$ the ODE (\ref{e_ode_pff})
simplifies to first order:
\begin{equation}
(\partial_x \varrho - \Omega)(2 \varrho - 1)=0 \;.
\label{e_ode_pff3}
\end{equation}
Obviously, there are two solutions to this general ODE problem: the
homogeneous density $\varrho_\mathrm{L}=1/2$ given by the Langmuir
isotherm, and the linear slope $\varrho(x)=\Omega x + C$. The value of
$C$ is determined by the boundary conditions and leads to the two
solutions $\varrho_\alpha(x)=\alpha + \Omega x$ and
$\varrho_\beta(x)=1-\beta-\Omega + \Omega x$.
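These branches can be checked by relaxing the discrete stationary
condition (\ref{e_steady_pff}), together with the boundary terms, under
a forward iteration. The sketch below is our own illustrative
discretization (names, step size and parameters hypothetical), not the
method used in \cite{pff04}:
\begin{verbatim}
import numpy as np

def mf_profile(N=500, alpha=0.2, beta=0.3, Omega=0.3, dt=0.1,
               tol=1e-8, max_iter=500000):
    """Relax the mean-field equations of TASEP/LK (K = 1) to stationarity."""
    w = Omega / N                                # scaled Langmuir rates
    rho = np.linspace(alpha, 1 - beta, N)        # initial guess
    for _ in range(max_iter):
        j_in = np.empty(N)                       # current into each site
        j_in[0] = alpha * (1 - rho[0])           # entry current
        j_in[1:] = rho[:-1] * (1 - rho[1:])      # bulk bond currents
        j_out = np.empty(N)                      # current out of each site
        j_out[:-1] = j_in[1:]
        j_out[-1] = beta * rho[-1]               # exit current
        drho = j_in - j_out + w * (1 - 2 * rho)  # incl. Langmuir gain/loss
        rho += dt * drho
        if np.max(np.abs(drho)) < tol:           # stationary (to tolerance)
            break
    return rho

rho = mf_profile()
\end{verbatim}
The converged profile consists of the linear $\varrho_\alpha$ and
$\varrho_\beta$ pieces derived above, joined by a steep (for finite
$N$, smooth) jump.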
The complete density profile $\varrho(x)$ is a combination of one or
several of the three densities above. Depending on how they are matched,
we distinguish several phases, as explained in the following.

\subsection{Phase Diagram and Density Distributions}

The only area of the phase diagram that does not change compared to
simple TASEP is the upper right quadrant. This is not surprising, since
the maximal current phase is a bulk-controlled regime anyway.
Therefore the additional bulk dynamics with the Langmuir isotherm at
$\varrho_\mathrm{L}=1/2$ does not result in any changes in the density
distribution. In this case, non-equilibrium and equilibrium dynamics
do not compete but cooperate.

As in TASEP, different solutions can be matched in various ways, the
simplest being the connection of the left solution $\varrho_\alpha$ and
the right solution $\varrho_\beta$ by a domain wall. Depending on the
current distribution, two possibilities have to be distinguished. As
both solutions are non-homogeneous, the corresponding currents
$j_\alpha$ and $j_\beta$ are strictly monotonic (Fig.~\ref{f_pff_phase}
(left)). If the currents match inside the system at a position
$x_\mathrm{w}$, the DW is localized at this position, as a
displacement to either side would result in a current imbalance that
drives the DW back to $x_\mathrm{w}$ (see Fig.~\ref{f_pff_phase}
(left)). Hence, the TASEP/LK exhibits coexistence of low- and
high-density regions (LD-HD phase) in the stationary state on all
time scales, as opposed to TASEP, where this behavior is only observed
for short observation times. Recently, this DW localization has been
observed experimentally \cite{nosc}.

If the matching of left and right currents is not possible inside the
system, the known LD and HD phases are found. This is the case when one
boundary rate is considerably larger than the other, and it evidently
depends quantitatively on the slope of the density solutions. This
slope is determined by the ratio of the TASEP step rate and the bulk
exchange rate $\Omega$. For large $\Omega$ any density imposed by the
boundary relaxes quickly towards the Langmuir isotherm
$\varrho_\mathrm{L}=1/2$, resulting in a steep slope of the density
profile.

This fact allows for the existence of two other phases with
multi-regime coexistence. We can imagine a scenario in which the
boundary-imposed density solutions decay fast enough towards the
isotherm to enable a three-regime coexistence of low density, maximal
current and high density (LD-MC-HD phase). Furthermore, a combination
of an MC phase with a boundary layer on one side and an LD or HD region
of finite extent at the other boundary is conceivable. Not all of these
phases are realized for every value of $\Omega$. Instead, the
phase topology of two-dimensional cuts through the
$\alpha,\beta,\Omega$-phase space changes. An example is shown in
Fig.~\ref{f_pff_phase} (right).
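For $K=1$ and $\alpha,\beta<1/2$ the matched profile can be written
down directly from the current-matching condition
$j_\alpha(x_\mathrm{w})=j_\beta(x_\mathrm{w})$, which for
$j=\varrho(1-\varrho)$ amounts to
$\varrho_\alpha(x_\mathrm{w})=1-\varrho_\beta(x_\mathrm{w})$. A sketch
of this construction (our own helper; it does not resolve boundary
layers, so the MC phase itself is not captured):
\begin{verbatim}
import numpy as np

def matched_profile(x, alpha, beta, Omega):
    """Analytic K = 1 mean-field profile: alpha and beta branches, cut at
    the Langmuir isotherm 1/2 and joined at the domain wall x_w."""
    rho_a = np.minimum(alpha + Omega * x, 0.5)               # left branch
    rho_b = np.maximum(1 - beta - Omega + Omega * x, 0.5)    # right branch
    x_w = (Omega - alpha + beta) / (2 * Omega)               # current matching
    if x_w >= 1:
        return rho_a                         # pure LD (possibly LD-MC)
    if x_w <= 0:
        return rho_b                         # pure HD (possibly MC-HD)
    return np.where(x < x_w, rho_a, rho_b)   # LD-HD coexistence

x = np.linspace(0, 1, 501)
rho = matched_profile(x, alpha=0.2, beta=0.3, Omega=0.3)  # DW at x_w = 2/3
\end{verbatim}
The result can be compared directly with the relaxation sketch of the
previous subsection.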
\begin{figure}
\centering
\includegraphics[height=7cm]{./pff_dens.eps}
\hspace{1 cm}%
\includegraphics[height=7cm]{./pff_phase.eps}
\caption{({\bf left}) The DW connects the two densities $\rho_\alpha$ 
 and $\rho_\beta$ ({\it both dashed}) and is localized at the point
 where the corresponding currents $j_\alpha$ and $j_\beta$ match.
 Note the finite extent of the DW (localization length), which is only
 produced by Monte Carlo simulations ({\it solid wiggly line}) and
 is not captured by mean-field results ({\it solid line}). ({\bf
 right}) Topological changes in the phase diagram of TASEP/LK for
 (a) $\Omega=0.3$, (b) $\Omega=0.5$, (c) $\Omega=1$, from
 \cite{pff04}}
\label{f_pff_phase}
\end{figure}


\subsection{Domain Wall Theory}
\label{s_dw}

Having derived an analytical solution for the density profile based on
the mean-field differential equation (\ref{e_ode_pff}), we now take a
closer look at the domain wall and its stochastic properties.

As mentioned before, the density profile exhibits a discontinuity at
$x_\mathrm{w}$ that is actually a continuous transition between the
high and low density for systems of finite size. Only upon increasing
the system size, $N \to \infty$, does a sharp transition between the
left and the right solution occur. However, this is only due to the
fact that the lattice spacing decreases as $\varepsilon \to 0^+$. So
compared to the lattice length $L$, usually normalized to $1$, the
domain wall is discontinuous, while on the length scale of lattice
sites it still has a finite extent. This extent is an intrinsic
statistical feature and is usually referred to as the localization
length.

The domain wall in its random walk behavior can either be subject to
equal rates (unbiased), as in TASEP, or the rates for movement to the
left and right can differ (biased). In general, the rates will not
only be different, but will also depend on the space variable.
To begin with, we show a way to derive these rates by taking
into account fluctuations of the particle number
\cite{kolomeisky-etal:98,santen-appert:02}. Consider a situation where
all events that can change the particle number ($\alpha,\beta,\Omega$)
have a typical time scale that is considerably larger than that of
jump processes on the lattice. In this case, the time between any
entry and exit events is so long that the system has enough time to
``rearrange'' (to reach a temporary steady state) in between.
Then it is possible to identify the jump rates $\omega_\mathrm{l}(x)$
($\omega_\mathrm{r}(x)$) for DW movement to the left (right) with the
overall rate for entry (exit) of a particle anywhere in the system.
Specifically, if a particle enters the system, the DW is shifted to
the left by a distance of $\approx
\varepsilon/[\varrho_\beta(x_\mathrm{w})-\varrho_\alpha(x_\mathrm{w})]$.
Therefore the rate for the DW to move one lattice site to the left is
$\omega_\mathrm{l} =
\omega_\mathrm{entry}/[\varrho_\beta(x_\mathrm{w})-\varrho_\alpha(x_\mathrm{w})]$.

If the density distribution $\varrho(x)$ is known analytically, it is
possible to calculate $\omega_\mathrm{entry}$, the overall rate of
particle entrance, as the sum over all possible entrance events:
\begin{equation} \label{e_partfluct}
\omega_\mathrm{entry}=\alpha(1-\alpha)+\int_0^1 dx \ \omega_\mathrm{A} (1-\varrho(x)) \;.
\end{equation}
The first term captures entrance events from the left boundary
reservoir and the integral accounts for the Langmuir kinetics. The
first factor in each term is the attempt rate of a jump, whereas the
difference in brackets gives the probability that the destination site
is vacant. Along the same lines, the exit rate is computed. As the
analytical density distribution is known,
$\varrho(x)=\alpha+\Omega x + \Delta \Theta(x-x_\mathrm{w})$, with the
Heaviside function $\Theta$ and the DW height~\footnote{The term 
 $-\Omega$ accounts for the diminished height caused 
 by the Langmuir kinetics over the whole (unit) length of the system.} 
$\Delta=1-\alpha-\beta-\Omega$, we can evaluate the integral
to obtain
\begin{equation}
\omega_\mathrm{entry}=\alpha(1-\alpha)+\Omega(\beta+\frac{\Omega}{2}) + x \Omega \Delta \;.
\end{equation}
Knowing these rates, one can complete the description of the domain
wall as a random walker by calculating the position-dependent jump
rates to the left and right, $\omega_\mathrm{l}(x)$ and
$\omega_\mathrm{r}(x)$. The two rates constitute an effective
potential, displayed in Fig.~\ref{f_potential}.
\begin{figure}
 \centering \includegraphics[height=3.5cm]{./potential.eps}
\hspace{1 cm}%
\includegraphics[height=3.5cm]{./roger4.eps}
\caption{({\bf left}) The DW will be localized at the point where the
 jump rates to the left, $\omega_\mathrm{l}$, and right,
 $\omega_\mathrm{r}$, intersect. This can be interpreted as a
 potential $|\Delta \omega|$. ({\bf right}) Variance of the DW
 probability distribution as a function of $\Omega$ for $\alpha=\beta=0.01$ and
 $\alpha=\beta=0.1$ in a system of $N=500$; predictions according to
 \cite{evans-juhasz-santen:03} ({\it solid}) and based on particle
 number fluctuations ({\it dashed}) compared to MC simulations ({\it
 dots})}
\label{f_potential}
\end{figure}
The DW will always be driven to the point $x_\mathrm{S}$ in the
system where
$\omega_\mathrm{exit}(x_\mathrm{S})=\omega_\mathrm{entry}(x_\mathrm{S})$.
This position is the center of the DW probability density in a
stochastic picture and the stationary DW position in a mean-field
picture. It is in agreement with mean-field results and yields
\begin{equation}
x_\mathrm{S}=\frac{\Omega-\alpha+\beta}{2\Omega} \;.
\end{equation}
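This fixed point is easily checked numerically. In the sketch below,
the exit rate is our own evaluation of the analogous exit integral,
$\omega_\mathrm{exit}=\beta(1-\beta)+\Omega(\alpha+\Omega/2)+(1-x)\Omega\Delta$;
intersecting it with $\omega_\mathrm{entry}$ reproduces the mean-field
position $x_\mathrm{S}$ (parameters illustrative):
\begin{verbatim}
from scipy.optimize import brentq

alpha, beta, Omega = 0.2, 0.3, 0.3
Delta = 1 - alpha - beta - Omega            # domain wall height

def w_entry(x):                             # omega_entry(x) from above
    return alpha*(1 - alpha) + Omega*(beta + Omega/2) + x*Omega*Delta

def w_exit(x):                              # analogous exit rate (our derivation)
    return beta*(1 - beta) + Omega*(alpha + Omega/2) + (1 - x)*Omega*Delta

x_S = brentq(lambda x: w_entry(x) - w_exit(x), 0.0, 1.0)
print(x_S, (Omega - alpha + beta) / (2*Omega))    # both: 0.6666...
\end{verbatim}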
The other quantity of interest, which -- in contrast to the DW position
-- cannot be determined by mean-field calculations, is the localization
length. In order to compute this quantity, fluctuations have to be
taken into account, as we show in the following. If we denote by
$p(x)$ the probability that the domain wall is at
position $x$, then the condition for a stationary DW reads in the
continuum limit
\begin{equation}
\omega_\mathrm{r}(x)p(x)=\omega_\mathrm{l}(x+\varepsilon)p(x+\varepsilon) \;.
\label{e_stationary_dw}
\end{equation}
Introducing $y(x)=\omega_\mathrm{l}(x)p(x)$ and approximating
$y'(x)\approx[y(x+\varepsilon)-y(x)]/\varepsilon$, we obtain
\begin{equation}
y'(x)+ N y(x) \left(1-\frac{\omega_\mathrm{r}(x)}{\omega_\mathrm{l}(x)}\right)=0 \;.
\end{equation}
The solution is given by
\begin{equation}
p(x)=\frac{\tilde p(x)}{Z} = 
\frac{1}{Z \omega_\mathrm{l}(x)} \exp\left[-N \int_{x_0}^x dx' \left(1-\frac{\omega_\mathrm{r}(x')}{\omega_\mathrm{l}(x')}\right)\right] \;,
\label{e_p_dw}
\end{equation}
where $Z$ accounts for normalization. In general, $Z$ is not available
explicitly, but it has been shown \cite{evans-juhasz-santen:03} that
the unnormalized probability function can be approximated by a
Gaussian
\begin{equation}
\tilde p(x) \propto e^{-C(x-x_\mathrm{S})^2} \;,
\label{e_gaussian}
\end{equation}
where $C$ is given by half the second derivative of the exponent in
(\ref{e_p_dw}), evaluated at $x_\mathrm{S}$:
\begin{equation}
C=\frac{1}{2} \frac{d^2}{dx^2}\left[N \int_{x_0}^x dx' \left(1-\frac{\omega_\mathrm{r}(x')}{\omega_\mathrm{l}(x')}\right)\right]_{x=x_\mathrm{S}}=
\frac{N(\omega_\mathrm{l}-\omega_\mathrm{r})'(x_\mathrm{S})}{2 \omega_\mathrm{l}(x_\mathrm{S})} \;.
\label{e_C}
\end{equation}
Hence, the width $\sigma=\sqrt{1/(2 C)}$ of the domain wall
distribution can easily be obtained, provided that the jump rates
$\omega_\mathrm{l}(x)$ and $\omega_\mathrm{r}(x)$ are available. Evans
{\it et al.} \cite{evans-juhasz-santen:03} assumed those rates to be
$\omega_\mathrm{l,r}(x)=
j_\mathrm{\alpha,\beta}/(\varrho_\beta-\varrho_\alpha)$.
Kouyos has shown \cite{roger} that using the rates (\ref{e_partfluct})
derived from the fluctuations of the particle number, one arrives at more
accurate results when compared to Monte Carlo simulations (see
Fig.~\ref{f_potential}). In this case $C$ evaluates to
\begin{equation}
C=\frac{2 N \Omega \Delta}{\alpha(1-\alpha)+\beta(1-\beta)+\Omega} \;. 
\end{equation}
As the width of the DW distribution is given by $\sigma=1/\sqrt{2C}$,
the localization length of the DW scales as $N^{-1/2}$.
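Both the full profile (\ref{e_p_dw}) and the Gaussian approximation are
easy to evaluate numerically. The sketch below (our own numerics; the
rates are those of the previous snippet, divided by the DW height
$\Delta$) compares the width of $p(x)$ with the prediction
$\sigma=1/\sqrt{2C}$ and exhibits the $N^{-1/2}$ scaling when $N$ is
varied:
\begin{verbatim}
import numpy as np

alpha, beta, Omega, N = 0.2, 0.3, 0.3, 500
Delta = 1 - alpha - beta - Omega

x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
w_l = (alpha*(1-alpha) + Omega*(beta + Omega/2) + x*Omega*Delta) / Delta
w_r = (beta*(1-beta) + Omega*(alpha + Omega/2) + (1-x)*Omega*Delta) / Delta

# exponent of eq. (e_p_dw), integrated numerically from x_0 = 0
expo = -N * np.cumsum(1 - w_r/w_l) * dx
p = np.exp(expo - expo.max()) / w_l
p /= p.sum() * dx                            # normalize

mean = (x * p).sum() * dx
sigma = np.sqrt(((x - mean)**2 * p).sum() * dx)

C = 2*N*Omega*Delta / (alpha*(1-alpha) + beta*(1-beta) + Omega)
print(sigma, np.sqrt(1/(2*C)))               # both ~ 0.07 for these values
\end{verbatim}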
\section{Conclusions and Outlook} 
\label{s_outlook}

Much in the same way as MacDonald's pioneering paper \cite{macdonald} on
mRNA translation, recent work on kinesin motors walking along
microtubules has spurred progress in nonequilibrium transport
phenomena. The ubiquitous exchange of material between the cytoplasm
and the molecular track, which was originally thought to lead only to
quantitative modifications of the dynamics \cite{lkn01}, has recently
been identified \cite{pff03} as the source of qualitatively new phenomena
such as phase separation. This introduced a completely new class of
non-equilibrium transport models.

These lattice gas models are characterized by a scaling of the on-off 
rates with the system size, which enables competition between driven
motion and equilibrium Langmuir kinetics. Hereby, a finite residence
time on the lattice allows cooperative effects to establish multi-phase
coexistence, localized shocks and an enriched phase behavior compared
to plain TASEP.

There are now various routes along which one could proceed. The first
one is to add more realistic features of the molecular motors, such as
the fact that they are dimers \cite{pierobon} or
that there is more than one chemo-mechanical state \cite{nosc}. Such
investigations are crucial for a quantitative understanding of
intracellular traffic in various ways. One might ask how robust the
features of minimal models are with respect to the addition of more
molecular details. In the case of dimers the answer is far from
obvious, since the non-equilibrium dynamics of dimer adsorption shows
rich dynamic behavior with anomalously slow relaxation towards the
equilibrium state \cite{vilfan}. How this combines with the
driven transport along the molecular track was recently analyzed
thoroughly \cite{pierobon}. While correlation effects due to the extended
nature of dimers invalidate a simple mean-field picture, it was found
that an extended mean-field scheme can be developed which
quantitatively describes the stationary phases. Surprisingly, the
topology of the phase diagram and the nature of the phases are similar
to the minimal model with monomers. The physical origin of this
robustness can be traced back to the form of the current-density
relation, which exhibits only a single maximum.

The second line of research generalizing the minimal model \cite{pff03}
asks for the effect on the stationary density profiles and the dynamics
of interactions, more than one molecular traffic lane, ``road blocks''
such as microtubule-associated proteins and various other kinds of
``disorder'', bi-directional traffic, coupling of driven and diffusive
transport, and the like. In almost every instance it is found that
this leads to an even richer behavior with new phenomena emerging.

For TASEP it is known that isolated defects (slow sites) may, depending
on their strength, either give rise to local density perturbations for
low particle densities or yield macroscopic effects for densities
close to the carrying capacity \cite{tb98}.
The interplay between the coupling to the motor reservoir in the
cytoplasm and the fluctuations in the capacity limit due to disorder
along the track gives rise to a number of interesting collective
effects \cite{kouyos}.

Coupling two lanes by allowing particle exchange at a constant rate
along the molecular track also results in novel phenomena. Similar to
equilibrium phase transitions described by field theories with two
coupled order parameters, higher-order critical points may emerge
\cite{reichenbach}.

When driven and diffusive transport are coupled, the origin of new
phenomena lies in the competition of different processes with comparable
time scales. The qualitative failure of mean-field theory in some of these 
systems \cite{hinsch} comes as quite a surprise, since mean-field theory
has proven to predict phase diagrams for large systems with astonishing
accuracy in the lattice gas models mentioned above.