diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzavel" "b/data_all_eng_slimpj/shuffled/split2/finalzzavel" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzavel" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nAutonomous, biomimetic robots can serve as tools in animal behavioural studies. Robots are used in ethology and behavioural studies to untangle the multimodal modes of interaction and communication between animals~\\cite{mondada2013general}. When they are socially integrated into a group of animals, they are capable of sending calibrated stimuli to test the animal responses in a social context~\\cite{halloy2007social}. Moreover, animal and autonomous robot interactions represent an interesting challenge for robotics. Confronting robots with animals is a difficult task because specific behavioural models have to be designed and the robots have to be socially accepted by the animals. The robots have to engage in social behaviour and somehow convince the animals that they can be social companions. In this context, the capabilities of the robots and their intelligence are put under harsh conditions and often demonstrate the huge gap that still exists between autonomous robots and animals, not only in terms of motion and coping with the environment but also in terms of intelligence. It is a direct comparison of artificial and natural collective intelligence.\nMoreover, the design of such social robots is challenging as it involves both a luring capability, including appropriate robot behaviours, and the social acceptance of the robots by the animals. We have shown that the social integration of robots into groups of fish can be improved by refining the behavioural models used to build their controllers~\\cite{cazenille2017acceptation}. 
The models also have to be calibrated to accurately replicate the animal collective behaviours in complex environments~\\cite{cazenille2017acceptation}.\n\nResearch on animal and robot interactions also needs bio-mimetic formal models as behavioural controllers of the robots if the robots are to behave as congeners~\\cite{Bonnet2016IJARS,bonnet2017cats}. Robot controllers have to deal with a whole range of behaviours so as to take into account not only the other individuals but also the environment, and in particular the walls \\cite{cazenille2017acceptation,cazenille2017automated}. However, most biological collective behaviour models deal with only one sub-part of fish behaviour at a time, in unbounded environments. Controllers based on neural networks, such as multilayer perceptrons (MLP)~\\cite{king1989neural} or echo state networks (ESN)~\\cite{jaeger2007echo}, have the advantage of being easier to implement and can deal with a larger range of events.\n\n\\subsection*{Objectives}\nWe aim at building models that accurately generate the zebrafish trajectories of one individual within a small group of 5 agents. The trajectories are the result of social interactions in a bounded environment. Zebrafish are a classic animal model in the research fields of genetics and neuroscience of individual and collective behaviours. Building models that correctly reproduce the individual trajectories of fish within a group is still an open question~\\cite{herbert2015turing}. We explore MLP and ESN models, optimised by evolutionary computation, to generate individual trajectories. MLP and ESN are black-box models that need little \\textit{a priori} information provided by the modeller. They are optimised on the experimental data and as such represent a model of the complex experimental collective trajectories. 
However, they are difficult to calibrate on the zebrafish experimental data due to the complexity of the fish trajectories.\nHere, we consider the design and calibration by evolutionary computation of neural network models, MLP and ESN, that can become robot controllers. We test two evolutionary optimisation methods, CMA-ES~\\cite{auger2005restart} and NSGA-III~\\cite{yuan2014improved}, and show that the latter gives better results. We show that such MLP and ESN behavioural models could be useful in animal-robot interactions and could make the robots accepted by the animals by reproducing their behaviours and trajectories as in~\\cite{cazenille2017acceptation}. \n\n\n\\section{Materials and Methods} \\label{sec:methods}\n\n\\subsection{Experimental set-up} \\label{sec:setup}\nWe use the same experimental procedure, fish handling, and set-up as in~\\cite{cazenille2016automated,bonnet2017cats,seguret2017loose,cazenille2017acceptation,collignon2017collective,bonnet2018closed}. The experimental arena is a square white plexiglass aquarium of $1000\\times1000\\times100$~mm. An overhead camera captures frames at 15 FPS, with a $500\\times500$~px resolution, that are then tracked to find the fish positions. We use 10 groups of 5 adult wild-type AB zebrafish (\\textit{Danio rerio}) in 10 trials, each lasting 30 minutes, as in~\\cite{cazenille2016automated,bonnet2017cats,seguret2017loose,cazenille2017acceptation,collignon2017collective,bonnet2018closed}.\nThe experiments performed in this study were conducted under the authorisation of the Buffon Ethical Committee (registered to the French National Ethical Committee for Animal Experiments \\#40) after submission to the French state ethical board for animal experiments.\n\n\n\\begin{figure*}[h]\n\\begin{center}\n\\includegraphics[width=0.85\\textwidth]{workflow3.pdf}\n\\caption{Methodology workflow. 
An evolutionary algorithm is used to evolve the weights of an MLP (1 hidden layer, 100 neurons) or of an ESN (100 reservoir neurons) neural network that serves as the controller of a simulated robot interacting with 4 fish described by the experimental data. Only the connections represented by dotted arrows are evolved (for MLP: all connections; for ESN: connections from inputs to reservoir, from reservoir to outputs, and from outputs to outputs and to reservoir). The fitness function is computed through data-analysis of these simulations and represents the biomimetism metric of the simulated robot behaviour compared to the behaviour exhibited by real fish in experiments. Two evolutionary algorithms are tested: CMA-ES (mono-objective) and NSGA-III (multi-objective).}\n\\label{fig:workflow}\n\\end{center}\n\\end{figure*}\n\n\n\\subsection{Artificial neural network model} \\label{sec:model}\nBlack-box models, like artificial neural networks (ANN), can be used to model phenomena with little \\textit{a priori} information. Although they have not yet been used to model fish collective behaviours based on experimental data, here we show that they are relevant to model zebrafish collective behaviour.\nWe propose a methodology (Fig.~\\ref{fig:workflow}) where either a multilayer perceptron (MLP)~\\cite{king1989neural} artificial neural network, or an echo state network (ESN)~\\cite{jaeger2007echo}, is calibrated through the use of evolutionary algorithms to model the behaviour of a simulated fish in a group of 5 individuals. The 4 other individuals are described by the experimental data obtained with 10 different groups of 5 fish for trials lasting 30 minutes.\n\nMLP are a type of feedforward artificial neural network that is very popular in artificial intelligence to solve a large variety of real-world problems~\\cite{norgaard2000neural}. 
Their capability to universally approximate functions~\\cite{cybenko1989approximation} makes them suitable for modelling control and robotics problems~\\cite{norgaard2000neural}. We consider MLP with only one hidden layer of $100$ neurons (using a hyperbolic tangent function as activation function).\n\nESN are recurrent neural networks often used to model temporal processes, like time series or robot control tasks~\\cite{polydoros2015advantages}. They are sufficiently expressive to model complex non-linear temporal problems that non-recurrent MLP cannot model.\n\nFor the considered focal agent, the neural network model takes the following parameters as input:\n (i) the \\textit{direction vector} (angle and distance) from the focal agent towards each other agent;\n (ii) the \\textit{angular distance} between the focal agent direction and each other agent direction (alignment measure);\n (iii) the \\textit{direction vector} (angle and distance) from the focal agent towards the nearest wall;\n (iv) the \\textit{instant linear speed} of the focal agent at the current time-step, and at the previous time-step;\n (v) the \\textit{instant angular speed} of the focal agent at the current time-step, and at the previous time-step.\nThis set of inputs is typically used in multi-agent modelling of animal collective behaviour~\\cite{deutsch2012collective,sumpter2012modelling}. As a first step, we consider that it is sufficient to model fish behaviour with neural networks.\n\nThe neural network has two outputs corresponding to the change in linear and angular speeds to apply from the current time-step to the next time-step. Here, we limit our approach to modelling fish trajectories resulting from social interactions in a homogeneous environment bounded by walls. 
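As an illustration of this input/output interface, the following Python sketch assembles the input vector (i)-(v) for a focal agent and passes it through a one-hidden-layer tanh MLP. This is a minimal sketch under our own assumptions: the function names (`controller_inputs`, `mlp_step`), the input ordering, and the flat 18-dimensional layout for 4 neighbours are ours, not taken from the paper's implementation.

```python
import math
import numpy as np

def controller_inputs(focal_pos, focal_dir, speeds, prev_speeds, others, wall):
    """Assemble inputs (i)-(v) for the focal agent.
    `others`: list of (position, heading) pairs for the 4 other agents;
    `wall`: (angle, distance) direction vector towards the nearest wall."""
    fx, fy = focal_pos
    x = []
    for (ox, oy), heading in others:
        dx, dy = ox - fx, oy - fy
        x.append(math.atan2(dy, dx))   # (i) angle towards the other agent
        x.append(math.hypot(dx, dy))   # (i) distance to the other agent
        x.append(heading - focal_dir)  # (ii) alignment (angular distance)
    x.extend(wall)                     # (iii) direction vector to nearest wall
    x.extend(speeds)                   # (iv)-(v) current linear/angular speeds
    x.extend(prev_speeds)              # (iv)-(v) previous linear/angular speeds
    return np.asarray(x)

def mlp_step(x, W1, b1, W2, b2):
    """One hidden tanh layer; the two outputs are the changes in linear
    and angular speed to apply at the next time-step."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2
```

With 4 other agents the vector has $4 \times 3 + 2 + 2 + 2 = 18$ entries; the evolutionary optimisation described in the following sections would then search over the entries of `W1`, `b1`, `W2`, `b2`.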
Very few models of fish collective behaviours take into account the presence of walls~\\cite{collignon2016stochastic,calovi2018disentangling}.\n\n\n\\subsection{Data analysis} \\label{sec:dataAnalysis}\nFor each trial $e$, and for each simulation, we compute several behavioural metrics using the tracked positions of agents: (i) the distribution of \\textit{inter-individual distances} between agents ($D_e$); (ii) the distributions of \\textit{instant linear speeds} ($L_e$); (iii) the distributions of \\textit{instant angular speeds} ($A_e$); (iv) the distribution of \\textit{polarisation} of the agents in the group ($P_e$); and (v) the distribution of \\textit{distances of agents to their nearest wall} ($W_e$).\nThe polarisation of an agent group measures how aligned the agents in a group are, and is defined as the absolute value of the mean agent heading: $P = \\frac{1}{N} \\bigl\\lvert \\sum^{N}_{i=1} u_i \\bigr\\rvert$ where $u_i$ is the unit direction of agent $i$ and $N=5$ is the number of agents~\\cite{tunstrom2013collective}.\n\nWe define a similarity measure (ranging from $0.0$ to $1.0$) to measure the biomimetism of the simulated robot behaviour by comparing the behaviour of the group of agents in simulations where the robot is present (experiment $e_r$: four fish and one robot) to the behaviour of the experimental fish groups (experiment $e_c$: five fish):\n\\begin{equation}\nS(e_r, e_c) = \\sqrt[5]{I(D_{e_r}, D_{e_c}) I(L_{e_r}, L_{e_c}) I(A_{e_r}, A_{e_c}) I(P_{e_r}, P_{e_c}) I(W_{e_r}, W_{e_c})}\n\\end{equation}\nThe function $I(X, Y)$ is defined as such: $I(X, Y) = 1 - H(X, Y)$.\nThe $H(X, Y)$ function is the Hellinger distance between two histograms~\\cite{deza2006dictionary}. 
It is defined as: $H(X, Y) = \\frac{1}{\\sqrt{2}} \\sqrt{ \\sum_{i=1}^{d} (\\sqrt{X_i} - \\sqrt{Y_i} )^2 }$ where $X_i$ and $Y_i$ are the bin frequencies.\n\nThis score measures the social acceptance of the robot by the fish, as defined in~\\cite{cazenille2017acceptation,cazenille2017automated}. Compared to the similarity measure defined in these articles, we added a measure of the polarisation of the agents. This was motivated by the tendency of our evolved neural models, without a polarisation factor, to generate agents with unnatural looping behaviour to catch up with the group.\n\n\n\\subsection{Optimisation} \\label{sec:optim}\nWe calibrate the ANN models presented here to match as closely as possible the behaviour of one fish in a group of 5 individuals in 30-minute simulations (at $15$ time-steps per second, \\textit{i.e.}\\ $27000$ steps per simulation). This is achieved by optimising the connection weights of the ANN through evolutionary computation, which iteratively performs global optimisation (inspired by biological evolution) on a defined fitness function so as to find its maxima~\\cite{salimans2017evolution,jiang2008supervised}. \n\nWe consider two optimisation methods (as in~\\cite{cazenille2017automated}), for MLP and ESN networks.\nIn the \\textbf{Sim-MonoObj-MLP} case, we use the CMA-ES~\\cite{auger2005restart} mono-objective evolutionary algorithm to optimise an MLP, with the task of maximising the $S(e_r, e_c)$ function.\nIn the \\textbf{Sim-MultiObj-MLP} and \\textbf{Sim-MultiObj-ESN} cases, we use the NSGA-III~\\cite{yuan2014improved} multi-objective algorithm with three objectives to maximise. The first objective is a performance objective corresponding to the $S(e_r, e_c)$ function. 
We also consider two other objectives used to guide the evolutionary process: one that promotes genotypic diversity~\\cite{mouret2012encouraging} (defined by the mean Euclidean distance of the genome of an individual to the genomes of the other individuals of the current population), the other encouraging behavioural diversity (defined by the Euclidean distance between the $D_{e}$, $L_{e}$, $A_{e}$, $P_{e}$ and $W_{e}$ scores of an individual). The NSGA-III algorithm was used with a $0.80$ crossover probability and a $0.20$ mutation probability (we also tested this algorithm with only mutations and obtained similar results).\nThe NSGA-III algorithm~\\cite{yuan2014improved} is considered instead of the NSGA-II algorithm~\\cite{deb2002fast} employed in~\\cite{cazenille2017automated} because it is known to converge faster than NSGA-II on problems with more than two objectives~\\cite{ishibuchi2016performance}.\n\nIn both methods, we use populations of 60 individuals and 300 generations. Each case is repeated in 10 different trials.\nWe use an NSGA-III implementation based on the DEAP Python library~\\cite{fortin2012deap}.\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.99\\textwidth]{AllScores1.pdf}\n\\includegraphics[width=0.99\\textwidth]{AllScores2.pdf}\n\\caption{Similarity scores between the behaviour of the experimental fish groups (control) and the behaviour of the best-performing simulated individuals of the MLP models optimised by CMA-ES or NSGA-III. Results are obtained over 10 different trials (experiments for fish-only groups, and simulations for NN models). We consider five behavioural features to characterise exhibited behaviours. \\textbf{Inter-individual distances} corresponds to the similarity in distribution of inter-individual distances between all agents and measures the capabilities of the agents to aggregate. 
\\textbf{Linear and Angular speeds distributions} correspond to the distributions of linear and angular speeds of the agents. \\textbf{Polarisation} measures how aligned the agents are in the group. \\textbf{Distances to nearest wall} corresponds to the similarity in distribution of agent distance to their nearest wall, and assesses their capability to follow the walls. The \\textbf{Biomimetic score} corresponds to the geometric mean of the other scores.}\n\\label{fig:scores}\n\\end{figure}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.90\\textwidth]{hists}\n\\caption{Comparison between 30-minute trials involving 5 fish (control, biological data) and simulations involving 4 fish and 1 robot, over 10 trials and across 5 behavioural features: inter-individual distances (\\textbf{A}), linear (\\textbf{B}) and angular (\\textbf{C}) speeds distributions, polarisation (\\textbf{D}), and distances to nearest wall (\\textbf{E}).}\n\\label{fig:plotsHists}\n\\end{center}\n\\end{figure}\n\n\n\n\\section{Results} \\label{sec:results}\nWe analyse the behaviour of one simulated robot in a group of 4 fish. The robot is driven by an ANN (either MLP or ESN) evolved with CMA-ES (\\textbf{Sim-MonoObj-MLP} case) or with NSGA-III (\\textbf{Sim-MultiObj-MLP} and \\textbf{Sim-MultiObj-ESN} cases), and we compare its behaviour to that of fish-only groups (\\textbf{Control} case). We only consider the best-evolved ANN controllers. In the simulations, the simulated robot does not influence the fish because the fish are described by their experimental data, which is replayed.\n\nExamples of agent trajectories obtained in the three tested cases are shown in Fig.~\\ref{fig:plotsTraces}A. In the \\textbf{Sim-MonoObj-MLP} and \\textbf{Sim-MultiObj-*} cases, they correspond to the trajectory of the simulated robot agent.\nIn all cases, we can see that the robot follows the walls like the fish and is often part of the fish group, as natural fish are. 
However, the robot trajectories can incorporate patterns not found in the fish trajectories. For example, small circular loops occur when the robot performs a U-turn to catch up with the fish group. This is particularly present in the \\textbf{Sim-MonoObj-MLP} case, and seldom appears in the \\textbf{Sim-MultiObj-*} cases.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.80\\textwidth]{traj}\n\\caption{Agent trajectories observed after 30-minute trials in a square ($1$~m) aquarium, for the 4 considered cases: \\textbf{Control} reference experimental fish data obtained as in~\\cite{collignon2016stochastic,seguret2017loose}, \\textbf{Sim-MonoObj-MLP} MLP optimised by CMA-ES, \\textbf{Sim-MultiObj-MLP} MLP optimised by NSGA-III, \\textbf{Sim-MultiObj-ESN} ESN optimised by NSGA-III. \\textbf{A} Examples of an individual trajectory of one agent among the 5 composing the group (fish or simulated robot) during 1 minute out of a 30-minute trial. \\textbf{B} Presence probability density of agents in the arena.}\n\\label{fig:plotsTraces}\n\\end{center}\n\\end{figure}\n\n\nWe compute the presence probability density of agents in the arena (Fig.~\\ref{fig:plotsTraces}B): it shows that the robot tends to follow the walls as the fish do naturally.\n\nFor the three tested cases, we compute the statistics presented in Sec.~\\ref{sec:dataAnalysis} (Fig.~\\ref{fig:plotsHists}). The corresponding similarity scores are shown in Fig.~\\ref{fig:scores}. The results of the \\textbf{Control} case show sustained aggregative and wall-following behaviours of the fish group. Fish also seldom pass through the centre of the arena, possibly in small short-lived sub-groups. 
There is group behavioural variability, especially in aggregative tendencies (measured by inter-individual distances) and wall-following behaviour (measured by the distance to the nearest wall), because each of the 10 groups is composed of different fish, \\textit{i.e.}\\ 50 fish in total.\n\nThe similarity scores of the \\textbf{Sim-MultiObj-*} cases are often within the variance domain of the \\textbf{Control} case, except for the inter-individual score. It suggests that groups incorporating the robot driven by an MLP evolved by NSGA-III exhibit dynamics relatively similar to those of a fish-only group, at least according to our proposed measures. However, it is still perfectible: the robot is sometimes at the tail of the group, possibly because of gaps created between the robot and the fish group by small trajectory errors (\\textit{e.g.}\\ small loops shown in robot trajectories in Fig.~\\ref{fig:plotsTraces}A).\n\nThe \\textbf{Sim-MonoObj-MLP} case sacrifices biomimetism to focus mainly on group-following behaviour: this translates into a higher inter-individual score than in the \\textbf{Sim-MultiObj-*} cases, and the robot tends to closely follow the fish group. With \\textbf{Sim-MonoObj-MLP}, the robot moves faster than the fish, and will quickly go back towards the centroid of the group if it is too far ahead of the group: this explains the large presence of loops in Fig.~\\ref{fig:plotsTraces}A. The \\textbf{Sim-MonoObj-MLP} case does not take into account behavioural diversity like the \\textbf{Sim-MultiObj-*} cases do, but focuses on the behaviour that is easiest to find (namely group-following) and stays stuck in this local optimum.\n\nThere are few differences between the results of the \\textbf{Sim-MultiObj-MLP} and the \\textbf{Sim-MultiObj-ESN} cases, the latter often showing slightly lower scores than the former. 
However, the \\textbf{Sim-MultiObj-ESN} case displays a large variability of inter-individual scores, which suggests that its expressivity could be sufficient to model agents with more biomimetic behaviours if the correct connection weights were found by the optimiser.\n\n\n\n\n\\section{Discussion and Conclusion} \\label{sec:conclusion}\nWe evolved artificial neural networks (ANN) to model the behaviour of a single fish in a group of 5 individuals. This ANN controller was used to drive the behaviour of a robot agent in simulations to integrate the group of fish by exhibiting biomimetic behavioural capabilities. Our methodology is similar to the calibration methodology developed in~\\cite{cazenille2017automated}, but employs artificial neural networks instead of an expert-designed behavioural model. Artificial neural networks are black-box models that require little \\textit{a priori} information about the target tasks.\n\nWe design a biomimetism score from behavioural measures to assess the biomimetism of robot behaviour. In particular, we measure the aggregative tendencies of the agents (inter-individual distances), their disposition to follow walls, to be aligned with the rest of the group (polarisation), and their distribution of linear and angular speeds.\n\nHowever, finding ANN displaying behaviours of appropriate levels of biomimetism is a challenging issue, as fish behaviour is inherently multi-level (tail-beats as motor response vs individual trajectories vs collective dynamics), multi-modal (several kinds of behavioural patterns, and input\\/output sources), context-dependent (different behaviours depending on the spatial position and proximity to other agents) and stochastic (leading to individual and collective choices and action selection)~\\cite{collignon2016stochastic,sumpter2018using}. 
More specifically, fish dynamics involve trade-offs between social tendencies (aggregation, group formation) and response to the environment (wall-following, zone occupation); they also follow distinct movement patterns that allow them to move in a polarised group and react collectively to environmental and social cues.\n\nWe show that these artificial neural models can be optimised using evolutionary algorithms, with the biomimetism score of the robot behaviour as a fitness function. The best-performing evolved ANN controllers show competitive biomimetism scores compared to fish group behavioural variability. We demonstrate that taking into account genotypic and behavioural diversity in the optimisation process (through the use of the global multi-objective optimiser NSGA-III) improves the biomimetic scores of the evolved best-performing controllers.\nThe ANN models evolved through mono-objective optimisation tend to focus more on evolving a group-following behaviour rather than a biomimetic agent.\n\nOur approach is still perfectible: in particular, we only evolve the behaviour of a single agent in a group, rather than all agents of the group. This choice was motivated by the large increase in difficulty in evolving ANN models for the entire group, which would also involve additional behavioural trade-offs: \\textit{e.g.}\\ individual free-will and autonomous dynamics, individuals leaving or re-joining the group. However, it also means that here the fish do not react to the robot in simulations because the fish behaviour is a replay of fish experimental trajectories recorded without a robot.\n\nAdditionally, it may be possible to improve the performance (in terms of biomimetism) of the multi-objective optimisation process by combining additional selection pressures as objectives (\\textit{i.e.}\\ not just genotypic and behavioural diversity)~\\cite{doncieux2014beyond}. 
We already include behavioural and phenotypic diversities as selection pressures to guide the optimisation process; however, taking into account phenotypic diversity can bias the optimisation algorithm to explore rather than exploit, which can prevent some desired phenotypes from being considered by the optimisation algorithm. An alternative would be to use angular diversity instead~\\cite{szubert2016reducing}.\n\nThis study shows that ANN are good candidates to model individual and collective fish behaviours, in particular in the context of social bio-hybrid systems composed of animals and robots. By evolutionary computation, they can be calibrated on experimental data. This approach requires less \\textit{a priori} knowledge than equation-based or agent-based modelling techniques. Although they are black-box models, they could also produce interesting results from a biological point of view. Thus, ANN collective behaviour models can be an interesting approach to design animal-robot social interactions.\n\n\n\n\\section*{Acknowledgement}\n{\\small\nThis work was funded by EU-ICT project 'ASSISIbf', no 601074.}\n\n\n\\FloatBarrier\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{\\texorpdfstring{$k$}{k}-Fold Matroid Union} \\label{appendix:matroid-union-fold}\n\nIn this section, we study the special case of matroid union where we take the $k$-fold union of the same matroid.\nThat is, a basis of the $k$-fold union of $\\cM = (U, \\cI)$ is the largest subset $S \\subseteq U$ which can be partitioned into $k$ disjoint independent sets $S_1, \\ldots, S_k$ of $\\cI$.\nMany of the prominent applications of matroid union fall into this special case, particularly the $k$-disjoint spanning tree problem.\nAs a result, here we show an optimized version of the algorithm presented in \\cref{sec:matroid-union} with better and explicit dependence on $k$ that works in this regime. 
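Before the optimised algorithm, it may help to recall the textbook augmenting-path approach that the exchange-graph machinery below accelerates. The following Python sketch is our own illustration under stated assumptions (function names and structure are ours, and no sparsification or specialised data structures are used): it grows a maximum set partitionable into $k$ independent sets via shortest augmenting paths in the exchange graph, given an independence oracle, and the usage example packs the $k = 2$ disjoint spanning trees of $K_4$.

```python
from collections import deque

def k_fold_union(universe, k, indep):
    """Textbook augmenting-path algorithm for k-fold matroid union
    (a sketch, not this section's optimised algorithm). `indep` is an
    independence oracle taking a set of elements."""
    S = [set() for _ in range(k)]
    owner = {}  # element -> index of the part currently containing it

    def augment():
        prev, queue = {}, deque()
        for x in universe:  # BFS sources: all elements outside S
            if x not in owner:
                prev[x] = None
                queue.append(x)
        while queue:
            u = queue.popleft()
            for i in range(k):  # sink: u fits directly into some part
                if u not in S[i] and indep(S[i] | {u}):
                    path, v = [], u
                    while v is not None:  # walk back to the free source
                        path.append(v)
                        v = prev[v]
                    path.reverse()
                    parts = [owner[b] for b in path[1:]] + [i]
                    for b in path[1:]:  # remove exchanged elements ...
                        S[owner[b]].remove(b)
                    for a, p in zip(path, parts):  # ... re-insert shifted by one
                        S[p].add(a)
                        owner[a] = p
                    return True
            for j in range(k):  # exchange arcs u -> v with v in S_j
                if u in S[j]:
                    continue
                for v in S[j]:
                    if v not in prev and indep((S[j] - {v}) | {u}):
                        prev[v] = u
                        queue.append(v)
        return False

    while augment():
        pass
    return S

def forest_indep(edges):
    """Graphic-matroid oracle: is this edge set acyclic? (union-find)"""
    parent = {}
    def find(x):
        while parent.get(x, x) != x:
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
    return True
```

For example, `k_fold_union([(a, b) for a in range(4) for b in range(a + 1, 4)], 2, forest_indep)` partitions all 6 edges of $K_4$ into two edge-disjoint spanning trees. The optimisations of this section target exactly the expensive parts of this sketch: the full scan over $S$ for exchange arcs and the per-part oracle calls.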
\n\n\\matroidunionfold*\n\nNote that in the breadth-first-search and blocking-flow algorithms in \\cref{sec:matroid-union}, there is an $O(k)$ overhead where we have to spend $O(k)$ time iterating through the $k$ binary search trees in order to explore the out-neighbors of the $O(kr)$ elements.\nOur goal in this section is thus to show that it is possible to further ``sparsify'' the exchange graphs to contain essentially only a basis, hence reducing its size from $\\Theta(kr)$ to $O(r)$.\nWe start with a slight modification to the BFS \\cref{alg:bfs-union} which reduces the running time by a factor of $O(k)$. The idea is that if we visit an element $u$ in the BFS which does not increase the rank of all visited elements so far, we can skip searching out-edges from $u$. Indeed, if $(u,v)$ is an edge of the exchange graph, then there must have been some element $u'$ visited earlier in the BFS which also has the edge $(u',v)$.\n\n\\begin{algorithm}\n \\SetEndCharOfAlgoLine{}\n \\SetKwInput{KwData}{Input}\n \\SetKwInput{KwResult}{Output}\n \n \\caption{BFS in a $k$-fold union exchange graph}\n \\label{alg:bfs-union-fold}\n \\KwData{$S \\subseteq U$ with partition $S_1, \\ldots, S_k$ of independent sets and a basis $B$ of $U \\setminus S$}\n \\KwResult{The $(s, v)$-distance $d(v)$ in $H(S)$ for each $v \\in S \\cup \\{t\\}$}\n\n $\\mathsf{queue} \\gets B$ and $R \\gets \\emptyset$\\;\n $d(v) \\gets \\infty$ for each $v \\in S \\cup \\{t\\}$, and $d(v) \\gets 1$ for each $v \\in B$\\;\n $\\cT_i \\gets \\textsc{Initialize}(\\cM, S_i, S_i)$ (\\cref{thm:bst} with $\\beta = 1$)\\;\n \\While{$\\mathsf{queue} \\neq \\emptyset$} {\n $u \\gets \\mathsf{queue}.\\textsc{Pop}()$\\;\n \\If{$R + u \\in \\cI$} {\n \\For{$i \\in \\{1, 2, \\ldots, k\\}$} {\n \\While{$v := \\cT_i.\\textsc{Find}(u) \\neq \\bot$} {\n $d(v) \\gets d(u) + 1$ and $\\mathsf{queue}.\\textsc{Push}(v)$\\;\n $\\cT_i.\\textsc{Delete}(v)$\\;\n }\n \\lIf{$S_i + u \\in \\cI$ and $d(t) = \\infty$} {\n $d(t) \\gets d(u) 
+ 1$\n }\n }\n $R \\gets R + u$\\;\n }\n }\n \\textbf{return} $d(v)$ for each $v \\in S \\cup \\{t\\}$.\n\\end{algorithm}\n\n\\begin{lemma}\n Given $S \\in \\cI_\\text{part} \\cap \\hat{\\cI}$ and a basis $B$ of $U \\setminus S$, it takes $\\tO(kr)$ time to construct the distance layers $L_2, \\ldots, L_{d_t - 1}$ of $H(S)$.\n \\label{lemma:bfs-union-fold}\n\\end{lemma}\n\n\\begin{proof}\n The algorithm is presented as \\cref{alg:bfs-union-fold}.\n It performs a BFS from a basis $B$ of the first layer and only explores out-edges from the first basis $R$ it finds.\n It takes\n (i) $\\tO(|S|)$ time to construct the $\\cT_i$'s,\n (ii) $\\tO(1)$ time to discover each of the $O(kr)$ elements, and\n (iii) an additional $\\tO(k \\cdot |R|)$ time to iterate through all $k$ binary search trees $\\cT_i$'s for each $u \\in R$.\n The total running time is thus bounded by $\\tO(kr)$.\n \n We have shown in \\cref{lemma:bfs-union} that it is feasible to replace $U \\setminus S$ with simply $B$.\n It remains to show that exploring only the out-neighbors of $u \\in R$ does not affect the correctness.\n Consider a $v \\in S \\setminus R$ (we know that $B \\subseteq R$ so it suffices to consider elements in $S$) with an out-neighbor $x \\in S_i$, i.e., $S_i - x + v \\in \\cI$.\n It then follows that $\\mathsf{rank}((S_i - x) + R_v) = \\mathsf{rank}((S_i - x) + (R_v + v)) \\geq \\mathsf{rank}(S_i)$ by \\cref{obs:exchange} and \\cref{lemma:basis-rank}, where $R_v$ is the set $R$ when $v$ is popped out of the queue (in other words, $R_v + v \\not\\in \\cI$).\n This implies that there is a $u \\in R_v$ which is visited before $v$ that also has out-neighbor $x$.\n The modification is therefore correct.\n\\end{proof}\n\nOur blocking-flow algorithm for $k$-fold matroid union is presented as \\cref{alg:blocking-flow-union-fold}.\nIt is essentially a specialization of \\cref{alg:blocking-flow-union} to the case where all the $k$ matroids are the same, except that we skip exploring the 
out-neighbors of $a_{\\ell}$ and remove it directly if it is ``spanned'' by the previous layers and the set $R_{\\ell} \\subseteq L_{\\ell}$ of elements that are \\emph{not} on any augmenting path of length $d_t$.\nWith this optimization, we obtain the following lemma analogous to \\cref{lemma:blocking-flow-union}.\n\n\\begin{algorithm}\n \\SetEndCharOfAlgoLine{}\n \\SetKwInput{KwData}{Input}\n \\SetKwInput{KwResult}{Output}\n \\SetKwInput{KwGuarantee}{Guarantee}\n \n \\caption{Blocking flow in a $k$-fold union exchange graph}\n \\label{alg:blocking-flow-union-fold}\n \\KwData{$S \\subseteq U$ which partitions into $S_1, \\ldots, S_k$ of independent sets and a dynamic-basis data structure $\\cD$ of $U \\setminus S$}\n \\KwResult{$S^\\prime \\in \\cI_{\\text{part}} \\cap \\cI_k$ with $d_{H(S^\\prime)}(s, t) > d_{H(S)}(s, t)$}\n \\KwGuarantee{$\\cD$ maintains a basis of $U \\setminus S^\\prime$ at the end of the algorithm}\n \n Build the distance layers $L_2, \\ldots, L_{d_t - 1}$ of $H(S)$ with \\cref{lemma:bfs-union-fold}\\;\n $L_0 \\gets \\{s\\}$ and $L_{d_t} \\gets \\{t\\}$\\;\n $B \\gets$ the basis maintained by $\\cD$ and $L_1 \\gets B$\\;\n $A_\\ell \\gets L_\\ell$ for each $0 \\leq \\ell \\leq d_t$\\;\n $\\cT_{\\ell}^{(i)} \\gets \\textsc{Initialize}(\\cM_i, S_i, Q_{S_i}, A_{\\ell} \\cap S_i)$ for each $2 \\leq \\ell < d_t$ and $1 \\leq i \\leq k$ (\\cref{thm:bst} with $\\beta = \\sqrt{r} \/ d_t$)\\;\n $D_{\\ell} \\gets \\emptyset$ for each $1 \\leq \\ell < d_t$\\;\n $R_{\\ell} \\gets \\emptyset$ for each $2 \\leq \\ell < d_t$\\;\n $\\ell \\gets 0$ and $a_0 \\gets s$\\;\n \\While{$\\ell \\geq 0$} {\n \\If{$\\ell < d_t$} {\n \\lIf{$A_{\\ell} = \\emptyset$} {\n \\textbf{break}\n }\n \\lIf{$\\ell \\geq 2$ and $\\mathsf{rank}(L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell} \\cup \\{a_{\\ell}\\}) = \\mathsf{rank}(L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell})$} {\\label{line:check-not-spanned}\n $A_{\\ell} \\gets A_{\\ell} - a_{\\ell}$ and 
\\textbf{continue}\n }\n \\lIf{$\\ell > 0$} {\n Find an $a_{\\ell + 1} := \\cT_{\\ell + 1}^{(i)}.\\textsc{Find}(a_\\ell) \\neq \\bot$ for some $1 \\leq i \\leq k$\n }\n \\lElse {\n $a_{\\ell + 1} \\gets$ an arbitrary element in $A_1$\n }\n \\If{such an $a_{\\ell + 1}$ does not exist} {\n \\lIf{$\\ell \\geq 2$} {\n $R_{\\ell} \\gets R_{\\ell} + a_{\\ell}$ and $\\cT_{\\ell}^{(j)}.\\textsc{Delete}(a_{\\ell})$ where $a_{\\ell} \\in S_j$\n }\n $A_\\ell \\gets A_\\ell - a_\\ell$ and $\\ell \\gets \\ell - 1$\\;\\label{line:remove}\n }\n \\lElse {\n $\\ell \\gets \\ell + 1$\n }\n }\n \\Else {\n \\tcp{Found augmenting path $a_1, a_2, \\ldots a_\\ell$}\n $B \\gets B - a_1$, $A_1 \\gets A_1 - a_1$, and $D_1 \\gets D_1 + a_1$\\;\n \\If{$\\cD.\\textsc{Delete}(a_1)$ returns a replacement $x$} {\n $B \\gets B + x$ and $A_1 \\gets A_1 + x$\\;\n }\n \\For{$i \\in \\{2, \\ldots, d_t - 1\\}$} {\n $D_{i} \\gets D_{i} + a_{i}$ and $A_{i} \\gets A_{i} - a_{i}$\\;\n $\\cT_{i}^{(j)}.\\textsc{Delete}(a_i)$ and $\\cT_{i}^{(j)}.\\textsc{Update}(\\{a_{i - 1}, a_{i}\\})$ where $a_i \\in S_j$\\;\n }\n Augment $S$ along $P = (s, a_1, \\ldots, a_{d_t - 1}, t)$\\;\n $\\ell \\gets 0$\\;\n }\n }\n \\textbf{return} $S$\\;\n\\end{algorithm}\n\n\\begin{lemma}\n Given an $S \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ with $d_{H(S)}(s, t) = d_t$ together with a data structure $\\cD$ of \\cref{thm:decremental-basis} that maintains a basis of $U \\setminus S$, it takes\n \\begin{equation}\n \\tO\\left(\\underbrace{kr}_{\\ref{item:term1}} + \\underbrace{\\left(|S^\\prime| - |S|\\right) \\cdot d_t\\sqrt{r}}_{\\ref{item:term2}} + \\underbrace{\\left((|S^\\prime| - |S|) \\cdot d_t + r\\right) \\cdot k}_{\\ref{item:term3}} + \\underbrace{\\left(kr + (|S^\\prime| - |S|)\\right) \\cdot \\frac{\\sqrt{r}}{d_t}}_{\\ref{item:term4}}\\right)\n \\label{eq:complexity}\n \\end{equation}\n time to obtain an $S^\\prime \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ with $d_{H(S^\\prime)}(s, t) > d_t$, with an additional 
guarantee that $\\cD$ now maintains a basis of $U \\setminus S^\\prime$.\n \\label{lemma:blocking-flow-union-fold}\n\\end{lemma}\n\nWe need the following observation to bound the running time of \\cref{alg:blocking-flow-union-fold}.\n\n\\begin{observation}\n In \\cref{alg:blocking-flow-union-fold}, it holds that $B \\cup R_2 \\cup R_{3} \\cup \\cdots \\cup R_{d_t - 1} \\in \\cI$.\n \\label{claim:is-basis}\n\\end{observation}\n\n\n\\begin{proof}[Proof of \\cref{lemma:blocking-flow-union-fold}]\n We analyze the running time of \\cref{alg:blocking-flow-union-fold} first.\n In particular, there are four terms in \\cref{eq:complexity} which come from the following.\n \\begin{enumerate}[label=(\\roman*)]\n \\item\\label{item:term1} $\\tO(kr)$: It takes $\\tO(kr)$ time to compute the distance layers using \\cref{lemma:bfs-union-fold} and initialize all the binary search trees $\\cT_{\\ell}^{(i)}$'s. Computing the rank of $L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell}$ also takes $\\tO(kr)$ time in total since we can pre-compute query-sets of the form $L_1 \\cup \\cdots \\cup L_{j}$ for each $j$ in $\\tO(kr)$ time, and each insertion to $R_{\\ell}$ takes $\\tO(1)$ time.\n \\item\\label{item:term2} $\\tO\\left(\\left(|S^\\prime| - |S|\\right) \\cdot d_t\\sqrt{r}\\right)$: For each of the $O(|S^\\prime| - |S|)$ augmentations, it takes $\\tO(r \\cdot \\frac{d_t}{\\sqrt{r}})$ time to update the binary search trees.\n \\item\\label{item:term3} $\\tO(\\left((|S^\\prime| - |S|) \\cdot d_t + r\\right) \\cdot k)$: The number of elements whose out-edges are explored is bounded by $O\\left((|S^\\prime| - |S|) \\cdot d_t + r\\right)$.
This is because for each such element $u$, either $u$ is included in an augmenting path of length $d_t$, or $u$ is removed in Line \\ref{line:remove}.\n There are $O((|S^\\prime| - |S|) \\cdot d_t)$ such $u$'s in the augmenting paths.\n For $u$ removed in Line \\ref{line:remove}, if $\\ell = 1$, then the number of such $u$'s is $O(|S^\\prime| - |S| + r)$ because there are initially $O(r)$ elements in $A_1$, and we add at most one to it in each augmentation.\n If $\\ell \\geq 2$, then we insert $u$ into $R_{\\ell}$, and by Line \\ref{line:check-not-spanned}, the rank of $L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell}$ increases after including $u$ into $R_{\\ell}$.\n By \\cref{claim:is-basis}, the number of such $u$'s is bounded by $O(r)$.\n The term then comes from spending $O(k)$ time iterating through the $k$ binary search trees for each of the $O\\left((|S^\\prime| - |S|) \\cdot d_t + r\\right)$ elements whose out-neighbors are explored.\n \\item\\label{item:term4} $\\tO(\\left(kr + (|S^\\prime| - |S|)\\right) \\cdot \\frac{\\sqrt{r}}{d_t})$: The number of elements that at some point belong to some $A_{\\ell}$ is bounded by $O(kr + |S^\\prime| - |S|)$.
Initially, there are $O(kr)$ elements ($A_1$ plus all the $A_\\ell$ for $\\ell \\geq 2$), and each augmentation adds at most one element to $A_1$.\n Each of these elements is discovered by $\\cT_{\\ell}^{(i)}.\\textsc{Find}(\\cdot)$ at most once, and thus we can charge the $\\tO(\\frac{\\sqrt{r}}{d_t})$ cost to it, resulting in the fourth term of \\cref{eq:complexity}.\n \\end{enumerate}\n Note that for each element whose out-neighbors are explored, any failed attempt of $\\cT_{\\ell}^{(i)}.\\textsc{Find}(\\cdot)$ costs only $\\tO(1)$ instead of $\\tO(\\frac{\\sqrt{r}}{d_t})$ according to \\cref{thm:bst}.\n The $\\tO(\\frac{\\sqrt{r}}{d_t})$ cost of a successful search is charged to term \\ref{item:term4} instead of \\ref{item:term3}.\n \n As for correctness, it suffices to show that each of the $a_{\\ell}$ removed from $A_{\\ell}$ because it is spanned by $L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell}$ in Line \\ref{line:check-not-spanned} is not in any augmenting path of length $d_t$.\n Consider any out-neighbor $a_{\\ell + 1}$ of $a_{\\ell}$ with respect to the current $S$; we argue that $a_{\\ell + 1}$ either has already been explored or is not on any augmenting path of length $d_t$ anymore.\n Since $a_{\\ell} \\in \\mathsf{span}(L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell})$, by \\cref{lemma:basis-rank}, there must exist some $u \\in L_1 \\cup \\cdots \\cup L_{\\ell - 1} \\cup R_{\\ell}$ with a directed edge $(u, a_{\\ell + 1})$.\n We consider two cases:\n \\begin{itemize}\n \\item $u\\in R_{\\ell}$. This means that we have already explored $a_{\\ell+1}$, as we finished exploring all out-neighbors of $u$ already.\n \\item $u\\in L_{1}\\cup \\cdots \\cup L_{\\ell-1}$.
We know that by \\cref{lemma:monotone}, both $d_{H(S)}(s,v)$ and $d_{H(S)}(v,t)$ can only increase after augmentations for all elements $v$.\n Hence $a_{\\ell+1}$ cannot be part of an augmenting path of length $d_t$ anymore: if it were, its distance to $t$ would have to be $d_t-(\\ell+1)$, but then the distance from $u$ to $t$ would be at most $d_t-\\ell$, which is smaller than its initial distance to $t$ at the beginning of the phase.\n \\end{itemize}\n \n As a result, all of $a_{\\ell}$'s out-neighbors have either already been explored or do not belong to any augmenting path of length $d_t$.\n This implies that $a_{\\ell}$ is not on any such path either, and thus it's correct to skip and remove it from $A_{\\ell}$.\n This concludes the proof of \\cref{lemma:blocking-flow-union-fold}.\n\\end{proof}\n\n\n\\cref{thm:dynamic-matroid-union-fold} now follows from analyzing the total running time of $O(\\sqrt{\\min(n, kr)})$ runs of \\cref{lemma:blocking-flow-union-fold}.\n\n\\begin{proof}[Proof of \\cref{thm:dynamic-matroid-union-fold}]\n We initialize the dynamic-basis data structure $\\cD$ of \\cref{thm:decremental-basis} on $U$ in $\\tO(n)$ time.\n Let $p = \\min(n, kr)$ be the rank of the $k$-fold matroid union.\n Using $\\cD$, we then run $O(\\sqrt{p})$ iterations of \\cref{lemma:blocking-flow-union-fold} until $d_{H(S)}(s, t) > \\sqrt{p}$.\n Summing the first two terms of \\cref{eq:complexity} over these $O(\\sqrt{p})$ iterations gives (recall that \\cref{lemma:augmenting-path-lengths} guarantees that $\\sum_{d = 1}^{\\sqrt{p}}d \\cdot (|S_d| - |S_{d - 1}|) = \\tO(p)$)\n \\[\n \\tO\\left(kr\\sqrt{p} + \\sqrt{r} \\cdot \\sum_{d = 1}^{\\sqrt{p}}d \\cdot \\left(|S_d| - |S_{d - 1}|\\right)\\right) = \\tO\\left(kr\\sqrt{p}\\right)\n \\]\n since $p\\sqrt{r} \\leq kr\\sqrt{p}$.\n The third term of \\cref{eq:complexity} contributes a total running time of\n \\[\n \\tO\\left(\\left(\\sum_{d = 1}^{\\sqrt{p}}dk \\cdot \\left(|S_d| - |S_{d - 1}|\\right)\\right) + kr\\sqrt{p}\\right) =
\\tO\\left(kr\\sqrt{p} + kp\\right),\n \\]\n while the fourth term of \\cref{eq:complexity} sums up to\n \\[\n \\tO\\left(\\left(\\sum_{d = 1}^{\\sqrt{p}}kr\\frac{\\sqrt{r}}{d}\\right) + kr\\sqrt{r}\\right) = \\tO\\left(kr\\sqrt{r}\\right).\n \\]\n We finish the algorithm by finding the remaining $O(\\sqrt{p})$ augmenting paths one at a time with \\cref{lemma:bfs-union-fold} in a total of $\\tO(kr\\sqrt{p})$ time.\n The $k$-fold matroid union algorithm thus indeed runs in $\\tO\\left(n + kr\\sqrt{\\min(n, kr)} + k\\min(n, kr)\\right)$ time, concluding the proof of \\cref{thm:dynamic-matroid-union-fold}.\n\\end{proof}\n\n\n\n\n\n\\section{Dynamic Oracles for Specific Matroids \\& Applications} \\label{appendix:applications}\n\n\nIn this appendix, we show how to leverage known dynamic algorithms to implement the dynamic rank oracle (\\cref{def:dyn-oracle}) efficiently for many important matroids.\nWhat we need are data structures that can maintain the rank of a set dynamically under insertions and deletions in \\emph{worst-case} update time (converting a \\emph{worst-case} data structure to \\emph{fully-persistent} can be done by the standard technique of \\cite{DriscollSST86,Dietz89}, paying an overhead of $O(\\log{n})$). Additionally, note that the data structures do not need to work against an adaptive adversary since we only ever use the \\emph{rank} of the queried sets, which is not affected by internal randomness.\n\nIn particular, for \\emph{partition, graphic, bicircular, convex transversal}, and \\emph{simple job scheduling matroids} it is possible to maintain the rank with polylog$(n)$ update-time, and for \\emph{linear matroids} in $O(n^{1.529})$ update-time.\n\nTogether with our matroid intersection (\\cref{sec:matroid-intersection}) and matroid union (\\cref{sec:matroid-union}) algorithms, this leads to a black-box approach to solving many different problems. 
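To make the dynamic-rank-oracle interface concrete before the individual constructions: for the graphic matroid, the rank of an edge set is the number of vertices minus the number of connected components it induces, so under edge *insertions* alone a plain union-find structure already maintains the rank. The following toy sketch (ours, purely illustrative) shows exactly what is easy; supporting *deletions* in worst-case time is what requires the dynamic-connectivity machinery discussed below.

```python
class IncrementalGraphicRank:
    """Maintain the graphic-matroid rank of an edge set under edge
    insertions only, via rank(E') = |V| - #components.
    Deletions are NOT supported -- that is the hard part."""

    def __init__(self, n):
        self.parent = list(range(n))  # union-find forest over the n vertices
        self.rank_ = 0                # current matroid rank of inserted edges

    def _find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def insert(self, u, v):
        ru, rv = self._find(u), self._find(v)
        if ru != rv:                  # edge joins two components
            self.parent[ru] = rv
            self.rank_ += 1           # one fewer component => rank grows by 1
        return self.rank_
```

Inserting a cycle-closing edge leaves the rank unchanged, matching the fact that such an edge is spanned by the current forest.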
In fact, we can solve matroid intersection and union on any combination of the above matroids, leading to improved or matching running times for many problems (see the introduction \\cref{sec:intro} with \\cref{tab:intro:dyn-oracle-implies-fast-algorithms,tab:intro:implications} for a more thorough discussion). For completeness, we define these problems in \\cref{appendix:problems}.\nThe same algorithms are powerful enough to also solve new problems which have not been studied before.\n\n\n\\paragraph{Example Application: Tree Scheduling (or Maximum Forest with Deadlines).} We give an example of a reasonably natural combinatorial scheduling problem, which---to our knowledge---has not been studied before. Suppose we are given a graph $G = (V,E)$ where each edge $e\\in E$ has two numbers associated with it: a release date $\\ell_e$ and a deadline $r_e$. Consider the problem where, for each day, we want to pick exactly one edge (say, to build\/construct), with the constraint that edge $e$ can only be picked between days $\\ell_e$ and $r_e$. Now the task is to come up with a scheduling plan to build a spanning tree of the graph, if possible.\n\nThis problem is exactly a matroid intersection problem between a graphic matroid and a convex transversal matroid. Hence, by a black-box reduction, we know that we can solve this problem in $\\tO(|E|\\sqrt{|V|})$ time.\n\n\n\\subsection{Partition Matroids}\nIn a partition matroid $\\cM = (U,\\cI)$, each element $u\\in U$ is assigned a color $c_u$. We are also, for each color $c$, given a non-negative integer $d_c$, and we define a set of elements $S\\subseteq U$ to be independent if for each color $c$, $S$ includes at most $d_c$ elements of this color.\nImplementing the dynamic oracle for the partition matroid is easy:\n\n\\begin{lemma}\nOne can maintain the rank of a partition matroid in $O(1)$ update time.\n\\end{lemma}\n\\begin{proof}\nFor each color $c$ we maintain a counter $x_c$ of how many elements we have of color $c$.
We also maintain $r = \\sum_c \\min(x_c, d_c)$, which is the rank of the current set.\n\\end{proof}\n\n\\begin{remark}\nBipartite matching can be modeled as a matroid intersection problem of two partition matroids. So our matroid intersection algorithm together with the above lemma matches (up to poly-logarithmic factors induced by full persistence)---in a black-box fashion---the $O(|E|\\sqrt{|V|})$-time bound of the best \\emph{combinatorial} algorithm for bipartite matching \\cite{HopcroftK73}.\n\\end{remark}\n\n\\subsection{Graphic and Bicircular Matroids}\n\\label{appendix:applications-graphic}\nGiven a graph $G = (V,E)$, the \\emph{graphic} and \\emph{bicircular} matroids are matroids capturing the connectivity structure of the graph.\n\n\n\\paragraph{Graphic Matroid.}\nIn the graphic matroid $\\cM = (E,\\cI)$, a subset of edges $E'\\subseteq E$ is independent if and only if it does not contain a cycle.\nWe use the following result to implement the dynamic oracle for this matroid.\n\n\\begin{lemma}[\\cite{KapronKM13,GibbKKT15}]\n There is a data structure that maintains an initially-empty graph $G = (V, E)$ and supports insertion\/deletion of edges $e$ into\/from $E$ in worst-case $O(\\log^4{|V|})$ time and query of the connectivity between $u$ and $v$ in worst-case $O(\\log{|V|}\/\\log{\\log{|V|}})$ time.\n The data structure works with high probability against an oblivious adversary.\n \\label{lemma:worst-case-dynamic-conn-appendix}\n\\end{lemma}\n\nWith a simple and standard extension, we can maintain the number of connected components as well, and hence also the rank (since $\\mathsf{rank}(E') = |V|- \\#\\text{connected components in }G[E']$).\n\n\\begin{corollary}\n There is a data structure that maintains an initially-empty graph $G = (V, E)$ and supports insertion\/deletion of $e$ into\/from $E$ in worst-case $O(\\log^4{|V|})$ time.\n After each operation, the data structure also returns the number of connected components in $G$.\n The data structure
works with high probability against an oblivious adversary.\n \\label{cor:worst-case-dynamic-components-appendix}\n\\end{corollary}\n\n\\begin{proof}\n We maintain the data structure $\\cC$ of \\cref{lemma:worst-case-dynamic-conn-appendix} and a counter $c := |V|$ representing the number of connected components.\n For insertion of $e = (u, v)$, we first query the connectivity of $u$ and $v$ before inserting $e$ into $\\cC$.\n If they are not connected before the insertion, decrease $c$ by one.\n For deletion of $e = (u, v)$, after deleting $e$ from $\\cC$, we check if $u$ and $v$ are still connected.\n If not, then we increase $c$ by one.\n\\end{proof}\n\n\n\\paragraph{Bicircular Matroid.}\nIn the bicircular matroid $\\cM = (E,\\cI)$, a subset of edges $E'\\subseteq E$ is independent if and only if each connected component in $G[E']$ has at most one cycle. Similar to the graphic matroid, dynamic connectivity algorithms can be used to implement the dynamic rank oracle for bicircular matroids too.\n\n\\begin{corollary}\n There is a data structure that maintains an initially-empty graph $G = (V, E)$ and supports insertion\/deletion of $e$ into\/from $E$ in worst-case $O(\\log^4{|V|})$ time.\n After each operation, the data structure also returns the rank of $E$ in the bicircular matroid.\n The data structure works with high probability against an oblivious adversary.\n \\label{cor:worst-case-dynamic-components-appendix-bicircular}\n\\end{corollary}\n\n\\begin{proof}\nThe dynamic connectivity data structure of \n\\cite{KapronKM13,GibbKKT15} (\\cref{lemma:worst-case-dynamic-conn-appendix}) can be adapted to also keep track of the number of edges and vertices in each connected component. Using this, the data structure can, for each connected component $c$, keep track of a number $x_c$ defined as the minimum of the number of edges in this component and the number of vertices in this component.
Then the rank of the bicircular matroid is just the sum of $x_c$ (as in an independent set each component is either a tree or a tree with an extra edge). In each update two components can merge, a component can be split up into two, or the edge-count of a component may simply change. \n\\end{proof}\n\n\\begin{remark}[Deterministic Dynamic Connectivity]\nThe above dynamic connectivity data structures are randomized.\nThere are also deterministic connectivity data structures, but with slightly less efficient sub-polynomial $|V|^{o(1)}$ update time~\\cite{ChuzhoyGLNPS20}.\n\\end{remark}\n\n\\subsection{Convex Transversal and Scheduling Matroids}\n\nConvex transversal and scheduling matroids are special cases of the \\emph{transversal} matroid, with applications in scheduling algorithms.\n\n\\begin{definition}[Transversal Matroid~\\cite{edmonds1965transversals}]\n A \\emph{transversal matroid} with respect to a bipartite graph $G = (L, R, E)$ is defined over the ground set $L$, where each $S \\subseteq L$ is independent if and only if there is a perfect matching in $G$ between $S$ and a subset of $R$.\n\\end{definition}\n\nA bipartite graph $G = (L, R, E)$ is \\emph{convex} if $R$ has a linear order $R = \\{r_1, r_2, \\ldots, r_n\\}$ and each $\\ell \\in L$ corresponds to an interval $1 \\leq s(\\ell) \\leq t(\\ell) \\leq n$ such that $(\\ell, r_i) \\in E$ if and only if $s(\\ell) \\leq i \\leq t(\\ell)$, i.e., the neighbors of each $\\ell$ form an interval.\n\n\\begin{definition}[Convex Transversal Matroid and Simple Job Scheduling Matroid]\n A \\emph{convex transversal matroid} is a transversal matroid with respect to a convex bipartite graph.\n A \\emph{simple job scheduling matroid} is a special case of convex transversal matroids in which $s(\\ell) = 1$ for each $\\ell \\in L$.\n\\end{definition}\n\nOne intuitive way to think about the simple job scheduling matroid is that there is a machine capable of finishing one job per day.\nThe ground set of the matroid 
consists of $n$ jobs, where the $i$-th job must be done before its deadline $d_i$.\nA subset of jobs forms an independent set if it's possible to schedule these jobs on the machine so that every job is finished before its deadline.\n\n\\begin{lemma}[Dynamic Convex Bipartite Matching \\cite{BrodalGHK07}]\n There is a data structure which, given a convex bipartite graph $G = (L, R, E)$, after $\\tO(|L|+|R|)$ initialization, maintains the size of the maximum matching of $G[A \\cup R]$ where $A \\subseteq L$ is a dynamically changing subset of $L$ that is initially empty.\n The data structure supports insertion\/deletion of an $x \\in L$ to\/from $A$ in worst-case $O(\\log^{2}(|L|+|R|))$ update time.\n\\end{lemma}\n\n\\begin{remark}\n The exact data structure presented in \\cite{BrodalGHK07} is different from the stated one.\n In particular, they support insertion\/deletion of an \\emph{unknown} job, i.e., we do not know beforehand what the starting date and deadline of the job are, nor do we know its relative position among the current set of jobs.\n As a result, they used a rebalancing-based or rebuilding-based binary search tree~\\cite{NievergeltR72,Andersson89,Andersson91}, resulting in their amortized bound.\n For our use case, all the possible jobs are known and we are just activating\/deactivating them, hence a static binary tree with a worst-case guarantee over these jobs suffices.\n\\end{remark}\n\n\n\\subsection{Linear Matroid}\nIn a linear matroid $\\cM = (U, \\cI)$, $U$ is a set of $n$ vectors (of dimension $r$) in some vector space and the notion of independence is just that of \\emph{linear independence}.
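For intuition about the oracle itself: a single (static) rank query in a linear matroid is nothing but Gaussian elimination. The following sketch is purely illustrative (the dynamic data structure cited next is far more sophisticated) and uses exact rational arithmetic to avoid floating-point pivoting issues:

```python
from fractions import Fraction

def matroid_rank(vectors):
    """Rank of a set of vectors, i.e., the size of a maximum linearly
    independent subset, via Gaussian elimination over the rationals."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    n_cols = len(rows[0]) if rows else 0
    rank, col = 0, 0
    while rank < len(rows) and col < n_cols:
        # Find a pivot row with a nonzero entry in this column.
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # Eliminate the column below the pivot.
        for i in range(rank + 1, len(rows)):
            if rows[i][col] != 0:
                f = rows[i][col] / rows[rank][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank
```

For instance, the three vectors $(1,0,0)$, $(0,1,0)$, $(1,1,0)$ have rank $2$: the third is spanned by the first two.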
The dynamic algorithm to maintain the rank of a matrix of \\cite{BrandNS19} can be used without modification as the dynamic oracle.\n\n\\begin{lemma}[Dynamic Matrix Rank Maintenance~\\cite{BrandNS19}]\n There is a data structure which, given an $n \\times n$ matrix $M$, maintains the rank of $M$ under row updates in worst-case $O(n^{1.529})$ update time.\n\\end{lemma}\n\n\\subsection{Problems} \\label{appendix:problems}\nFor completeness, here we define the problems we discuss in the introduction, and why they reduce to matroid union or intersection.\n\n\\paragraph{$k$-Forest.} In this problem we are given a graph $G = (V,E)$ and asked to find $k$ edge-disjoint forests of the graph, of the maximum total size.\nIt can be modeled as the $k$-fold matroid union over the graphic matroid of $G$.\n\n\\paragraph{$k$-Disjoint Spanning Trees.} This problem is a special case of the above $k$-forest problem where we ask to find $k$ edge-disjoint spanning trees of the graph. Clearly, if such trees exist, the $k$-forest problem will find them.\n\n\\paragraph{$k$-Pseudoforest.} Similar to above, in this problem we are given a graph $G = (V,E)$ and asked to find $k$ edge-disjoint \\emph{pseudoforests} of the graph, of the maximum total size. A pseudoforest is an undirected graph in which every component has at most one cycle.\nThe problem can be modeled as the $k$-fold matroid union over the bicircular matroid of $G$.\n\n\\paragraph{$(f,p)$-Mixed Forest-Pseudoforest.} Again, we are given a graph $G = (V,E)$ and asked to find $f$ forests and $p$ pseudoforests (all edge-disjoint), of the maximum total size. \nThe problem can be modeled as the matroid union over $f$ graphic matroids and $p$ bicircular matroids.\n\n\\paragraph{Tree Packing.} In the tree packing problem, we are given a graph $G = (V,E)$ and are asked to find the maximum $k$ such that we can find $k$-disjoint spanning trees in the graph.
This number $k$ is sometimes called the \\emph{tree-pack-number} or \\emph{strength} of the graph. The problem can be solved with the $k$-disjoint spanning trees problem, by binary searching for $k$ in the range $[0,|E|\/(|V|-1)]$, and is an example of a \\emph{matroid packing} problem.\n\n\\paragraph{Arboricity and Pseudoarboricity.} The arboricity (respectively pseudoarboricity) of a graph $G = (V,E)$ is the least integer $k$ such that we can partition the edges into $k$ edge-disjoint forests (respectively pseudoforests). \nThis can be solved with the $k$-forest (respectively $k$-pseudoforest) problem with a binary search over $k$. It is well known that for a simple graph the (pseudo-)arboricity is at most $\\sqrt{|E|}$, so we need only search for $k$ in the range $[0,\\sqrt{|E|}]$. The problems are examples of \\emph{matroid covering} problems.\n\n\\paragraph{Shannon Switching Game.} The Shannon switching game is a game played on a graph $G = (V,E)$, between two players ``Short'' and ``Cut''. They alternate turns with Short playing first, and all edges are initially colored white. On Short's turn, he may color an edge of the graph black. On Cut's turn, he picks a remaining non-black edge and removes it from the graph. Short wins if he connects the full graph with black edges, and Cut if he manages to disconnect the graph. It can be shown that Short wins if and only if there exist two disjoint spanning trees in the graph (and these two spanning trees describe a winning strategy for Short). Hence solving this game is a special case of the $k$-disjoint spanning tree problem with $k = 2$.\n\n\\paragraph{Graph $k$-Irreducibility.} \nA (multi-)graph $G = (V,E)$ is called $k$-irreducible (\\cite{Whiteley88rigidity}) if and only if $|E| = k(|V|-1)$ and for any vertex-induced nonempty, proper subgraph $G[V']$ it holds that $|E(G[V'])| < k(|V'|-1)$.\nThe motivation behind this definition comes from the rigidity of \\emph{bar-and-body frameworks}.
In a bar-and-body framework, rigid bars are attached to rigid bodies with joints (represented by the graph $G$).\nThen any stress put on a $k$-irreducible structure will propagate to all the bars (i.e.\\ edges).\n\\cite{gabow1988forests} show how one can decide if a graph is $k$-irreducible by first determining if its edges can be partitioned into $k$ edge-disjoint trees, and then performing an additional $\\tO(k|V|)$ work.\n\n\\paragraph{Bipartite Matching.} In the bipartite matching problem, we are given a bipartite graph $G = (L\\cup R,E)$, and the goal is to find a \\emph{matching} (a set of edges which share no vertices) of maximum size. Bipartite matching can be modeled as a matroid intersection problem over two partition matroids $M_L = (E,\\cI_L)$ and $M_R = (E,\\cI_R)$. $M_L$ specifies that no two edges share the same vertex on the left $L$ (and $M_R$ is defined similarly on the right set of vertices $R$).\n\n\\paragraph{Colorful Spanning Tree.} In this problem\\footnote{sometimes also called \\emph{rainbow spanning tree}.}, we are given a graph $G = (V,E)$ together with colors on the edges $c:E \\to \\bbZ$. We are tasked to find a spanning tree of $G$ such that no two edges in our spanning tree have the same color. This problem can be modeled by the matroid intersection of the graphic matroid of $G$ (ensuring we pick a forest),\nand a partition matroid of the coloring $c$ (ensuring that we pick no duplicate colors). We also note that this problem is more difficult than bipartite matching since any bipartite matching instance can be converted to a colorful spanning tree instance on a star-multi-graph.\n\n\\paragraph{Graphic Matroid Intersection.} In graphic matroid intersection we are given two graphs $G_1 = (V_1,E_1)$ and $G_2 = (V_2,E_2)$ and a bijection of the edges $\\phi : E_1 \\to E_2$. The task is to find a forest in $G_1$ of the maximum size, which also maps to a forest in $G_2$.
By definition, this is a matroid intersection problem over two graphic matroids. Again, this problem is a further generalization of the colorful spanning tree problem.\n\n\\paragraph{Convex Transversal and Simple Job Scheduling Matroid Intersection.} In these problems, we are given a set of unit-size jobs $V$, where each job $v$ has two release times $\\ell_1(v), \\ell_2(v) \\ge 1$ (in simple job scheduling, $\\ell_1(v) = \\ell_2(v) = 1$) and two deadlines $r_1(v), r_2(v) \\le \\mu$. The task is to find a set of jobs $S$ of the maximum size such that they can be scheduled on two machines as follows: each job needs to be scheduled on both machines, and on machine $i$ it must be scheduled at time $t \\in [\\ell_i(v), r_i(v)]$.\n\n\\paragraph{Linear Matroid Intersection.} In this problem, we are given two $n\\times r$ matrices $M_1$ and $M_2$ over some field.\nThe task is to find a set of indices $S\\subseteq \\{1,2,\\ldots,n\\}$ of maximum cardinality, such that the rows of $M_1$ (respectively $M_2$) indexed by $S$ are simultaneously independent. This is a matroid intersection of two linear matroids defined by $M_1$ and $M_2$.
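All of the reductions above feed into a generic intersection solver. As a self-contained toy illustration of what such a solver does, here is the classical shortest-augmenting-path algorithm with plain (static) independence oracles; it is emphatically not the dynamic-oracle algorithm of this paper, and `indep1`/`indep2` are assumed callables deciding independence in the two matroids:

```python
from collections import deque

def matroid_intersection(ground, indep1, indep2):
    """Maximum common independent set of two matroids, via shortest
    augmenting paths in the exchange graph (classical, unoptimized)."""
    S = set()
    while True:
        outside = ground - S
        X1 = {y for y in outside if indep1(S | {y})}  # sources
        X2 = {y for y in outside if indep2(S | {y})}  # sinks
        if X1 & X2:                       # a free element augments directly
            S.add(next(iter(X1 & X2)))
            continue
        # BFS over exchange arcs: x -> y (x in S) if S - x + y is M1-independent,
        # y -> x (y outside S) if S - x + y is M2-independent.
        parent, queue, found = {y: None for y in X1}, deque(X1), None
        while queue and found is None:
            u = queue.popleft()
            if u in X2:
                found = u
                break
            if u in S:
                nexts = (y for y in outside if indep1((S - {u}) | {y}))
            else:
                nexts = (x for x in S if indep2((S - {x}) | {u}))
            for v in nexts:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        if found is None:
            return S                      # no augmenting path: S is maximum
        while found is not None:          # flip membership along the path
            S ^= {found}
            found = parent[found]
```

For example, instantiating `indep1`/`indep2` as the two partition matroids of a bipartite graph (no two chosen edges sharing a left, respectively right, endpoint) computes a maximum bipartite matching, matching the reduction described above.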
We note that partition, graphic, and transversal matroids are special cases of linear matroids.\n\n\n\\section{Independence-Query Matroid Intersection Algorithm} \\label{appendix:dynamic-matroid-intersection-ind}\n\nIn this section, we show that we can obtain an $\\tO(nr^{3\/4})$ matroid intersection algorithm in the dynamic-independence-oracle model.\nThis matches the state-of-the-art traditional independence-query algorithm of Blikstad \\cite{blikstad21}.\nWe will only provide a proof sketch here because our algorithm is mostly an implementation of \\cite{blikstad21} in the new model with the help of (circuit) binary search trees.\n\nUsing the same construction as \\cref{thm:bst} and \\cref{obs:exchange}, circuit binary search trees work verbatim in the dynamic-independence-oracle model (however, co-circuit binary search trees do not).\nIn particular, \\cref{obs:exchange}\\ref{item:circuit-exchange} can be checked with a single independence query.\n\n\\begin{corollary}\n For any integer $\\beta \\geq 1$, there exists a data structure that supports the following operations.\n \\begin{itemize}\n \\item $\\textsc{Initialize}(\\cM, S, Q_S, X)$: Given $S \\in \\cI$, the query-set $Q_S$ that corresponds to $S$, and $X \\subseteq S$ or $X = \\{t\\}$, initialize the data structure in $\\tO(|X|)$ time. 
The data structure also maintains $S$.\n \\item $\\textsc{Find}(y)$: Given $y \\in \\bar{S}$,\n \\begin{itemize}\n \\item if $X \\subseteq S$, then return an $x \\in X$ such that $S - x + y \\in \\cI$, or\n \\item if $X = \\{t\\}$, then return the only element $x = s$ or $x = t$ in $X$ if $S + y \\in \\cI$ and $\\bot$ otherwise.\n \\end{itemize}\n The procedure returns $\\bot$ if such an $x$ does not exist.\n The procedure takes $\\tO(\\beta)$ time if the result is not $\\bot$, and $\\tO(1)$ time otherwise.\n \\item $\\textsc{Delete}(x)$: Given $x \\in X$, if $x \\not\\in \\{s, t\\}$, delete $x$ from $X$ in $O(\\log{n})$ time.\n \\item $\\textsc{Replace}(x, y)$: Given $x \\in X$ and $y \\not\\in X$, replace $x$ in $X$ by $y$ in $O(\\log{n})$ time.\n \\item $\\textsc{Update}(\\Delta)$: Update $S$ to $S \\oplus (\\Delta \\setminus \\{s, t\\})$ in amortized $\\tO(\\frac{|X| \\cdot |\\Delta|}{\\beta})$ time.\n \\end{itemize}\n \\label{cor:bst-ind}\n\\end{corollary}\n\n\\paragraph{Framework.}\nThe algorithm of \\cite{blikstad21} consists of the following three phases.\n\n\\begin{enumerate}\n \\item First, obtain a $(1 - \\epsilon)$-approximate solution $S$ using augmenting sets in $\\tO(\\frac{n\\sqrt{r}}{\\epsilon})$ time.\n \\item Eliminate all augmenting paths in $G(S)$ of length at most $d$ using Cunningham's algorithm implemented by \\cite{chakrabarty2019faster} in $\\tO(nd + nr\\epsilon)$ time.\n \\item Find the remaining $O(r\/d)$ augmenting paths one at a time, using $\\tO(n\\sqrt{r})$ time each.\n\\end{enumerate}\n\nWith $\\epsilon = r^{-1\/4}$ and $d = r^{3\/4}$, the total running time is $\\tO(nr^{3\/4})$.\nWe briefly sketch how to implement the above three steps in the same running time also for the dynamic-independence-oracle model.\n\nNote that the primary difficulty independence-query algorithms face is that we are only capable of checking \\cref{obs:exchange}\\ref{item:circuit-exchange} (using \\cref{cor:bst-ind}), which means that we can only
explore the neighbors of $u \\in \\bar{S}$.\nThe aforementioned rank-query algorithms for building distance layers and finding blocking-flow style augmenting paths are thus inapplicable in the dynamic-independence-oracle model.\n\n\\paragraph{Approximation Algorithm.}\nThe $O(n\\sqrt{r}\/\\epsilon)$-query $(1-\\epsilon)$-approximation algorithm of \\cite{blikstad21} needs to first compute distance layers up to distance $O(\\frac{1}{\\epsilon})$. This is done in a similar way as sketched below for ``Eliminating Short Augmenting Paths''.\n\nOtherwise, the approximation algorithm works through a series of ``refine'' operations\n(algorithms \\texttt{RefineAB}, \\texttt{RefineBA}, and \\texttt{RefineABA} in \\cite{chakrabarty2019faster,blikstad21}) to build a partial augmenting set. In each such operation, we only need to be able to do the following for some sets $(P,Q)$: start from some set $Q$ and find a maximal set $X\\subseteq P$ such that\n$Q+X$ is independent. This can be performed with a greedy algorithm using $O(|P|)$ time (and dynamic queries), given that we have already queried the set $Q$ before (which will be the case). \n\nFinally, the approximation algorithm falls back to finding a special type of augmenting paths \\emph{with respect to} the current augmenting set, in the \\texttt{RefinePath} algorithm of \\cite{blikstad21}, with $\\tO(n)$ queries for each such path. This algorithm can also be implemented in the dynamic-oracle model with the same query complexity. \\texttt{RefinePath} relies on the \\texttt{RefineAB} and \\texttt{RefineBA} algorithms (which we already covered), in addition to a binary search trick to find feasible exchange pairs.
This binary search can be implemented with the circuit trees (\\cref{cor:bst-ind}), and it takes a total of $\\tO(n)$ time to build them (since we can keep track of a queried set for $S$, and then we only need to build a circuit tree statically once for each layer in cost proportional to the size of the layer---which sums up to $\\tO(n)$).\n\n\\paragraph{Eliminating Short Augmenting Paths.}\nUsing \\cite{chakrabarty2019faster}'s implementation of Cunningham's algorithm, we can eliminate all augmenting paths of length at most $d$, thereby obtaining a $(1 - 1\/d)$-approximate solution.\nThe algorithm relies on \\cref{lemma:monotone} to ``fix'' the distance layers after each augmentation.\nInitially, all elements have distance $1$ or $2$ from $s$ depending on whether or not they belong to $S$ (the common independent set obtained by the above approximation algorithm).\nBefore the first and after each of the remaining $O(\\epsilon r)$ augmentations, we can fix the distance layers as follows.\n\n\\begin{itemize}\n \\item For each $1 \\leq \\ell \\leq d$ and $u \\in L_{\\ell}$, if $u$ is not of distance $\\ell$ from $s$, i.e., there is no in-edge from $L_{\\ell - 1}$ to $u$ anymore, move $u$ from $L_{\\ell}$ to $L_{\\ell + 2}$.
This check is done as follows, depending on the parity of $\\ell$.\n \\begin{itemize}\n \\item If $\\ell$ is even, then for each $v \\in L_{\\ell - 1}$, we find all the unmarked $u \\in L_{\\ell}$ that $v$ has an edge to and mark $u$.\n In the end, all the unmarked $u \\in L_{\\ell}$ do not belong to $L_{\\ell}$ and should be moved to $L_{\\ell + 2}$.\n \\item If $\\ell$ is odd, then we simply check if there is an in-edge from $L_{\\ell - 1}$ to $u$ to decide whether $u$ should be moved to a later layer.\n \\end{itemize}\n\\end{itemize}\n\nBoth cases can be implemented efficiently with circuit binary search trees of \\cref{cor:bst-ind}: Each time, we spend $\\tO(1)$ time to either confirm that $u$ has distance $\\ell$ from $s$ with respect to the current $S$ (in which case it will not be moved anymore in this iteration) or increase the distance estimate of $u$.\nThe total running time is thus $\\tO(nd + nr\\epsilon)$, where $\\tO(nd)$ comes from increasing the distance estimate of each element to at most $d$, and $\\tO(nr\\epsilon)$ comes from confirming that each element belongs to its distance layer in the $O(\\epsilon r)$ iterations.\n\nA caveat here is that we need to support insertion\/deletion into the binary search trees.\nThis can be made efficient by doubling the size of a binary search tree (and re-initializing it) whenever there are not enough leaf nodes left in it.\nThe cost of re-building will be amortized to $\\tO(1)$ time per update (i.e., movement of an element to another layer).\n\n\\paragraph{Finding a Single Augmenting Path.}\nWith the $(1 - 1\/d)$-approximate solution obtained in the first two steps, \\cite{blikstad21} then finds the remaining $O(r\/d)$ augmenting paths one at a time, using the \\emph{reachability} algorithm of \\cite{quadratic2021}.\nThe reachability algorithm roughly goes as follows.\nFirst, we initialize two circuit binary search trees (\\cref{cor:bst-ind}) over the two matroids for discovering out-edges and in-edges of
elements in $\\bar{S}$.\nWe then repeatedly run the following three steps until either an $(s, t)$-path is found (an arbitrary $(s, t)$-path suffices since such a path can be converted into a chordless one in $\\tO(r)$ time along which augmentation is valid) or we conclude that $t$ is unreachable from $s$.\nWe keep track of a set of visited vertices $F$, which we know are reachable from $s$.\n\n\\begin{enumerate}[label=(\\roman*)]\n \\item Identify the set of unvisited \\emph{heavy} vertices in $\\bar{S}$ that have at least $\\sqrt{r}$ unvisited out-neighbors or have a direct edge toward $t$. This is done by sampling a set $R$ of unvisited vertices in $S$ and then computing for each vertex $u$ whether $R \\cap \\mathsf{OutNgh}(u) = \\emptyset$, or equivalently, whether $S - R + u \\in \\cI$.\n Intuitively, vertices with more out-neighbors are more likely to fail the test.\n This can be tested for a single $R$ and all $u$ in $O(n)$ time in the dynamic-independence-oracle model.\n With $O(\\log{n})$ samples, heavy vertices can be successfully identified with high probability.\n \\item Discover all the out-neighbors of each light vertex, taking a total of $\\tO(n\\sqrt{r})$ time using the circuit binary search tree over the whole run of the algorithm. (Each vertex turns from heavy to light at most once.)\n \\item Perform a reverse breadth-first search from all the heavy vertices simultaneously. 
We can assume that every vertex on the path is light (i.e., we find a ``closest'' heavy vertex reachable from $s$), and thus all its out-neighbors have already been discovered.\n That is, going backward from $S$ to $\\bar{S}$, we use the out-edges of light vertices.\n From $\\bar{S}$ to $S$, we use the circuit binary search tree.\n This takes $\\tO(n)$ time, and we either find a heavy vertex reachable from $s$ (in which case we make progress by visiting at least $\\sqrt{r}$ vertices in $S$), or we conclude that all heavy vertices are unreachable from $s$ (in which case $t$ is unreachable as well).\n\\end{enumerate}\n\nThe number of iterations is bounded by $O(\\sqrt{r})$ since we discover at least $\\sqrt{r}$ unvisited vertices in $S$ every time.\nThe total running time of finding a single augmenting path is thus $\\tO(n\\sqrt{r})$.\n\nUsing the same parameters $\\epsilon$ and $d$ as in \\cite{blikstad21} to combine the three phases, we obtain the following matroid intersection algorithm in the dynamic-independence-oracle model.\n\n\\begin{theorem}\n For two matroids $\\cM_1 = (U, \\cI_1)$ and $\\cM_2 = (U, \\cI_2)$, it takes $\\tO(nr^{3\/4})$ time to obtain the largest $S \\in \\cI_1 \\cap \\cI_2$ with high probability in the dynamic-independence-oracle model.\n \\label{thm:dynamic-matroid-intersection-ind-main}\n\\end{theorem}\n\n\\section{Binary Search Tree} \\label{sec:bst}\n\nIn this section, we give the core data structure of our algorithms, which allows us to do binary searches and find free elements (elements $x$ such that $S + x \\in \\cI$) and exchange pairs (pairs $(x,y)$ such that $S-x+y\\in \\cI$, corresponding to edges in the exchange graph) efficiently.\nWe also support updating the common independent set $S$ that the exchange relationship is based upon.\nFor a matroid $\\cM = (U, \\cI)$, the data structure has the following guarantee ($s, t \\not\\in U$ denote the two distinguished vertices of the exchange graph as defined in 
\\cref{def:exchange-graph}).\n\n\\begin{theorem}\n For any integer $\\beta \\geq 1$, there exists a data structure that supports the following operations.\n \\begin{itemize}\n \\item $\\textsc{Initialize}(\\cM, S, Q_S, X)$: Given $S \\in \\cI$, the query-set $Q_S$ that corresponds to $S$, and $X \\subseteq \\bar{S}$ (respectively, $X \\subseteq S$ or $X = \\{t\\}$), initialize the data structure in $\\tO(|X|)$ time. The data structure also maintains $S$.\n \\item $\\textsc{Find}(y)$: Given $y \\in S \\cup \\{s\\}$ (respectively, $y \\in \\bar{S}$),\n \\begin{itemize}\n \\item if $y \\in S$ (respectively, $X \\subseteq S$), then return an $x \\in X$ such that $S - y + x \\in \\cI$ (respectively, $S - x + y \\in \\cI$), or\n \\item if $y = s$ (respectively, $X = \\{t\\}$), then return an $x \\in X$ such that $S + x \\in \\cI$ (respectively, return the only element $x = s$ or $x = t$ in $X$ if $S + y \\in \\cI$ and $\\bot$ otherwise).\n \\end{itemize}\n The procedure returns $\\bot$ if such an $x$ does not exist.\n The procedure takes $\\tO(\\beta)$ time if the result is not $\\bot$, and $\\tO(1)$ time otherwise.\n \\item $\\textsc{Delete}(x)$: Given $x \\in X$, if $x \\not\\in \\{s, t\\}$, delete $x$ from $X$ in $O(\\log{n})$ time.\n \\item $\\textsc{Replace}(x, y)$: Given $x \\in X$ and $y \\not\\in X$, replace $x$ in $X$ by $y$ in $O(\\log{n})$ time.\n \\item $\\textsc{Update}(\\Delta)$: Update $S$ to $S \\oplus (\\Delta \\setminus \\{s, t\\})$ in amortized $\\tO(\\frac{|X| \\cdot |\\Delta|}{\\beta})$ time.\n \\end{itemize}\n \\label{thm:bst}\n\\end{theorem}\n\n\\begin{remark}\n To make sense of the seemingly complicated input and casework of \\cref{thm:bst}, one should focus on the first item of $\\textsc{Find}(\\cdot)$.\n We will use \\cref{thm:bst} to explore the exchange graphs, and thus we need to find an exchange element $x$ of $y$ as in the first case.\n The additional complication is included solely because we also have to deal with edges incident to $s$ or 
$t$.\n For instance, say $X \\subseteq \\bar{S}$, then $\\textsc{Find}(s)$ finds an edge in $G(S)$ directed from $s$ to $X$.\n This will make our algorithms presented later cleaner (see \\cref{alg:blocking-flow} for example).\n \\label{remark:bst}\n\\end{remark}\n\n\nSometimes, we will omit the $Q_S$ parameter of $\\textsc{Initialize}$, meaning that we explicitly build the query-set $Q_S$ from $S$ in $O(r)$ time before running the actual initialization.\nIn such cases, $|X|$ will be $\\Omega(r)$, and thus this incurs no overhead.\n\nWe will later refer to the case of $X \\subseteq \\bar{S}$ as the \\emph{co-circuit binary search tree} and the case of $X \\subseteq S$ as the \\emph{circuit binary search tree}.\nThe data structure follows from the binary search algorithm of \\cite[Lemma 10]{chakrabarty2019faster}, which is based on the following observation.\n\n\\begin{observation}[\\cite{chakrabarty2019faster}]\n \\label{obs:exchange}\n To find free elements and exchange pairs, we can use the following observations.\n\\begin{enumerate}[label=(\\roman*)]\n\\item\\label{item:free-element}\n\\emph{Free element}:\n There exists an $x \\in X$ such that $S + x \\in \\cI$ if and only if $\\mathsf{rank}(S + X) > |S|$.\n\\item\\label{item:cocircuit-exchange}\n\\emph{Co-circuit exchange}:\n Given $y\\in S$, there exists an $x \\in X$ such that $S - y + x \\in \\cI$ if and only if $\\mathsf{rank}(S - y + X) \\geq |S|$.\n \\item\\label{item:circuit-exchange}\n\\emph{Circuit exchange}:\n Given $y\\not\\in S$,\n there exists an $x \\in X$ such that $S - x + y \\in \\cI$ if and only if $\\mathsf{rank}(S - X + y) = |S-X+y|$.\n \\end{enumerate}\n\\end{observation}\n\nThe data structure of \\cref{thm:bst} is built upon the following similar data structure whose independent set $S$ is ``static'' in the sense that its update will be specified for each query.\nWe construct the data structure of \\cref{lemma:bst} first, and then use it for \\cref{thm:bst} in 
\\cref{sec:periodic-rebuild}.\n\n\\begin{lemma}\n There exists a data structure that supports the following operations.\n \\begin{itemize}\n \\item $\\textsc{Initialize}(\\cM, S, Q_S, X)$: Given $S \\in \\cI$, a query-set $Q_S$ corresponding to $S$, and $X \\subseteq \\bar{S}$ (respectively, $X \\subseteq S$ or $X = \\{t\\}$), initialize the data structure in $\\tO(|X|)$ time.\n \\item $\\textsc{Find}(y, \\Delta)$: Let $S^\\prime := S \\oplus \\Delta$. It is guaranteed that $S^\\prime \\in \\cI$. Given $y \\in S^\\prime \\cup \\{s\\}$ (respectively, $y \\in \\bar{S^\\prime}$),\n \\begin{itemize}\n \\item if $y \\in S^\\prime$ (respectively, $X \\subseteq S^\\prime$), then return an $x \\in X$ such that $S^\\prime - y + x \\in \\cI$ (respectively, $S^\\prime - x + y \\in \\cI$), otherwise\n \\item if $y = s$ (respectively, $X = \\{t\\}$), then return an $x \\in X$ such that $S^\\prime + x \\in \\cI$ (respectively, return the only element $x = s$ or $x = t$ in $X$ if $S^\\prime + y \\in \\cI$ and $\\bot$ otherwise),\n \\end{itemize}\n in $\\tO(\\beta)$ time.\n The procedure returns $\\bot$ if such an $x$ does not exist.\n \\item $\\textsc{Delete}(x)$: Given $x \\in X$, delete $x$ from $X$ in $O(\\log{n})$ time.\n \\item $\\textsc{Replace}(x, y)$: Given $x \\in X$ and $y \\not\\in X$, replace $x$ by $y$ in $X$ in $O(\\log{n})$ time.\n \\end{itemize} \n \\label{lemma:bst}\n\\end{lemma}\n\nWe present the co-circuit version of the data structures, as the circuit version is analogous (their difference is essentially stated in the two cases \\ref{item:cocircuit-exchange} and \\ref{item:circuit-exchange} of \\cref{obs:exchange}).\nThe data structure of \\cref{lemma:bst} is a balanced binary tree in which every node $v$ corresponds to a subset $X_v$ of $X$.\nThe subsets corresponding to nodes at the same level form a disjoint partition of $X$.\nThere are $|X|$ leaf nodes, each of which corresponds to a single-element subset of $X$.\nAn internal node $v$ with children $u_1$ and $u_2$ has 
$X_v = X_{u_1} \\sqcup X_{u_2}$.\nEach node $v$ is also associated with a query-set $Q_v := S + X_v$, for which we have prepared a dynamic oracle (see \\cref{def:dyn-oracle}).\n\n\\paragraph{Initialization.}\nIn the initialization stage, we first compute the query-set of the root node $Q_r := S + X$ from $Q_S$ in $O(|X|)$ time.\nAs long as the current node $v$ has $|X_v| > 1$, we split $X_v$ into two equally-sized subsets $X_{u_1}, X_{u_2}$, compute $Q_{u_1}, Q_{u_2}$ from $Q_v$, and then recurse on the two newly created nodes $u_1$ and $u_2$.\nComputing $Q_{u_1}$ and $Q_{u_2}$ from $Q_v$ takes $O(|X_v|)$ time in total, and thus the overall running time for initialization is $\\tO(|X|)$.\n\n\\paragraph{Query.}\nTo find an exchange element of $y \\in S^\\prime$, we perform a binary search on the tree.\nFor each node $v$, we can test whether such an element exists in $X_v$ via \\cref{obs:exchange}\\ref{item:cocircuit-exchange} by computing the query-set $Q^\\prime_v := S^\\prime - y + X_v$ from $Q_v := S + X_v$ in $1+|\\Delta|$ dynamic-oracle queries.\nIf such an element does not exist for the root $X_r = X$, then we return $\\bot$.\nOtherwise, starting from the root $v = r$, there must be a child node $u_i$ of $v$ such that an exchange element $x$ exists in $X_{u_i}$.\nWe then recurse on $u_i$ until we reach a leaf node, at which point we simply return the corresponding element.\nSimilarly, to find a free element, we compute the rank of $Q^\\prime_v := S^\\prime + X_v$ instead (see \\cref{obs:exchange}\\ref{item:free-element}).\nSince we need to compute $Q^\\prime_v$ for each of the visited nodes, the running time is $O((1 + |\\Delta|)\\log{n})$.\n\n\\paragraph{Update.}\nFor deletion of $x$, we simply walk up from the leaf node corresponding to $x$ to the root node and remove $x$ from each of the $X_v$ and $Q_v$.\nThis takes time proportional to the depth of the tree, which is $O(\\log{n})$.\nReplacement of $x$ by $y$ follows similarly from deletion of $x$: 
instead of simply removing $x$ from $X_v$ and $Q_v$, we add $y$ to them as well.\n\n\n\\begin{remark}\n Note that the above binary search tree is static in the sense that we only deactivate elements from a fixed initial set.\n We can extend this data structure to support a dynamically changing input set $X$ by using a dynamic binary search tree based on partial rebuilding~\\cite{Andersson89,Andersson91} instead.\n The amortized time complexity remains the same since rebuilding a subtree takes time proportional to the number of nodes in it.\n\\end{remark}\n\n\\subsection{Periodic Rebuilding} \\label{sec:periodic-rebuild}\n\nHere we extend \\cref{lemma:bst} to prove \\cref{thm:bst}. Recall that the difference between the two data structures is that we need to support a dynamically changing independent set in \\cref{thm:bst} (which we will need since $S$ changes after each augmentation in our matroid algorithms).\nWe achieve this by employing a batch-and-rebuild approach on top of the binary search tree of \\cref{lemma:bst}.\n\n\n\\begin{proof}\n We maintain a binary search tree $\\cT := \\textsc{Initialize}(\\cM, S, Q_S, X)$ of \\cref{lemma:bst} and a collection of ``batched'' updates $\\tilde{\\Delta}$ of size at most $\\beta$.\n Throughout the updates, we also maintain the query-set corresponding to the current $S$ starting from the given $Q_S$ and the query-set corresponding to $S + X$, which initially can be computed from $Q_S$ in $O(|X|)$ time.\n Each call to $\\textsc{Find}(y)$ is delegated to $\\cT.\\textsc{Find}(y, \\tilde{\\Delta})$, which runs in $\\tO(|\\tilde{\\Delta}|) = \\tO(\\beta)$ time.\n Note that we can test whether the result of $\\cT.\\textsc{Find}(\\cdot)$ will be $\\bot$ in $\\tO(1)$ time by simply checking if \\cref{obs:exchange}\\ref{item:cocircuit-exchange} (or \\ref{item:free-element} if $y = s$) holds with the query-set corresponding to $S + X$ we maintain.\n \n Each call to $\\textsc{Delete}(x)$ and $\\textsc{Replace}(x, y)$ 
translates simply to $\\cT.\\textsc{Delete}(x)$ and $\\cT.\\textsc{Replace}(x, y)$.\n For an update to $S$ with $\\Delta$, we set $\\tilde{\\Delta} \\gets \\tilde{\\Delta} \\cup \\Delta$ and update $S$ and the query-sets accordingly.\n If the size of $\\tilde{\\Delta}$ exceeds $\\beta$, then we rebuild the binary search tree with the input common independent set being the up-to-date $S$ we maintain.\n Note that we will pass the query-set $Q_S$ to $\\textsc{Initialize}$ so as not to pay the extra $O(r)$ factor.\n Finally, since the binary search tree is now up-to-date, we set $\\tilde{\\Delta}$ to be $\\emptyset$.\n The rebuilding takes $\\tO(|X|)$ time and is amortized to $\\tO(\\frac{|X| \\cdot |\\Delta|}{\\beta})$ per update operation with $|\\Delta|$ changes.\n\\end{proof}\n\n\\section{Dynamically Maintaining a Basis of a Matroid} \\label{sec:decremental-basis}\n\nIn this section, we construct a data structure that allows us to maintain a basis of a matroid under element deletions.\nThe data structure is used for obtaining an $\\tO_k(n + r\\sqrt{r})$ running time for matroid union, but it may be of independent interest as well.\nSpecifically, our data structure has the following guarantees.\n\n\\begin{theorem}\n For a (weighted) matroid $\\cM = (U, \\cI)$, there exists a data structure supporting the following operations.\n \\begin{itemize}\n \\item $\\textsc{Initialize}(X)$: Given a set $X \\subseteq U$, initialize the data structure and return a (min-weight) basis $S$ of $X$ in $\\tO(n)$ time.\n \\item $\\textsc{Delete}(x)$: Given $x \\in X$, remove $x$ from $X$ and return a new (min-weight) basis of $X$ in $\\tO(\\sqrt{r})$ time. 
Specifically, the new basis will contain at most one element (the replacement element of $x$) not in the old basis, and this procedure returns such an element if any.\n \\end{itemize}\n \\label{thm:decremental-basis}\n\\end{theorem}\n\n\n\n\nOur data structure for \\cref{thm:decremental-basis} will consist of two parts.\nThe first part, introduced in \\cref{sec:baseline}, is a baseline, unsparsified data structure that supports the $\\textsc{Delete}$ operation in $\\tO(\\sqrt{n})$ time, and the second one is a sparsification structure which brings the complexity down to $\\tO(\\sqrt{r})$, as presented in \\cref{sec:sparsification}.\n\nAs hinted by the statement of \\cref{thm:decremental-basis}, to make things simpler, we will assign an arbitrary but unique weight $w(x)$ to each $x \\in X$.\nNow, instead of maintaining an arbitrary basis of $X$, we maintain the \\emph{min-weight} basis instead.\nThe min-weight basis is well-known to be unique (as long as the weights are) and can be obtained greedily as shown in \\cref{alg:greedy} (see, e.g.,~\\cite{edmonds1971}).\n\n\\begin{algorithm}[!ht]\n \\SetEndCharOfAlgoLine{}\n \\SetKwInput{KwData}{Input}\n \\SetKwInput{KwResult}{Output}\n \n \\caption{Greedy algorithm for computing the min-weight basis}\n \\label{alg:greedy}\n \\KwData{A set $X \\subseteq U$ of size $k$}\n \\KwResult{The min-weight basis $S$ of $X$}\n Order $X = (x_1, x_2, \\ldots, x_{k})$ so that $w(x_1) < w(x_2) < \\cdots < w(x_{k})$\\;\n $S \\gets \\emptyset$\\;\n \\For{$i \\in [1, k]$} {\n \\If{$\\mathsf{rank}(S + x_i) > \\mathsf{rank}(S)$} {\\label{line:check}\n $S \\gets S + x_i$\\;\n }\n }\n \\textbf{return} $S$\\;\n\\end{algorithm}\n\nMoreover, suppose we remove $x \\in S$ from the set $X$. 
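As a concrete illustration of the greedy procedure above, here is a minimal Python sketch; the rank oracle is instantiated with a toy partition matroid (at most one element per part), and every concrete name below is an illustrative assumption rather than part of the algorithm's specification.

```python
# Sketch of the greedy min-weight basis computation: scan the elements in
# increasing weight order and keep an element whenever it raises the rank.

def min_weight_basis(X, w, rank):
    """Return the min-weight basis of X under distinct weights w."""
    S = []
    for x in sorted(X, key=lambda e: w[e]):
        if rank(S + [x]) > rank(S):   # the rank-increase check of the greedy scan
            S.append(x)
    return S

# Example oracle: a partition matroid in which a set is independent iff it
# contains at most one element from each part (a stand-in for a real matroid).
PART = {"a1": "A", "a2": "A", "b1": "B", "b2": "B"}

def partition_rank(S):
    return len({PART[e] for e in S})
```

With distinct weights, the basis returned is the unique min-weight basis; for instance, on `["a1", "a2", "b1"]` with weights `1, 2, 3`, the scan keeps `a1`, skips `a2` (same part as `a1`), and keeps `b1`.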
Then the new min-weight basis is either\n(i) $S - x + y$ where $y$ is the minimum-weight element in $X - x$ that makes $S - x + y$ independent, or\n(ii) simply $S - x$ if such a $y$ does not exist.\nIn case (i), $y$ is called the \\emph{replacement} element of $x$.\nNote that $w(y) > w(x)$ must hold.\n\nIt is useful to note that the $S$ in Line~\\ref{line:check} of \\cref{alg:greedy} is interchangeable with $X_{i - 1} = \\{x_1, \\ldots, x_{i - 1}\\}$, since\n$\\mathsf{span}(X_{i - 1}) = \\mathsf{span}(S \\cap X_{i - 1})$, so the sets $X_{i-1}$ and $S\\cap X_{i-1}$ have the same rank.\nIn other words, in each iteration $i$, we can imagine that \\cref{alg:greedy} has chosen every element before $x_i$.\n\n\\begin{observation}\n In \\cref{alg:greedy}, $x_i \\in S$ if and only if $\\mathsf{rank}(X_{i}) > \\mathsf{rank}(X_{i - 1})$.\n \\label{obs:greedy}\n\\end{observation}\n\n\n\\subsection{Baseline Data Structure} \\label{sec:baseline}\n\nOur baseline data structure supports the operations of \\cref{thm:decremental-basis}, except that $\\textsc{Delete}$ takes $\\tO(\\sqrt{k})$ time, where $k = |X|$, instead of $\\tO(\\sqrt{r})$.\n\n\\begin{lemma}\n For a weighted matroid $\\cM = (U, \\cI)$, there exists a data structure supporting the following operations.\n \\begin{itemize}\n \\item $\\textsc{Initialize}(X)$: Given a set $X \\subseteq U$ with $|X| = k$, initialize the data structure and return the min-weight basis $S$ of $X$ in $\\tO(k)$ time.\n \\item $\\textsc{Delete}(x)$: Given $x \\in X$, remove $x$ from $X$ and return the new min-weight basis of $X$ in $\\tO(\\sqrt{k})$ time. Specifically, the new basis will contain at most one element (the replacement element of $x$) not in the old basis, and this procedure returns such an element if any.\n \\item $\\textsc{Insert}(x)$: Given $x \\not\\in X$, add $x$ to $X$. 
It is guaranteed that $x$ is not in the min-weight basis of the new $X$ and that the size of $X$ does not exceed $2k$.\n \\end{itemize}\n \\label{lemma:decremental-basis_baseline}\n\\end{lemma}\n\n\\paragraph{Initialization.}\nIn the initialization stage, we order $X$ by the weights and split the sequence into $\\sqrt{k}$ blocks $X_1, X_2, \\ldots, X_{\\sqrt{k}}$ from left to right, where each block has roughly the same size $O(\\sqrt{k})$.\nThat is, $X_1$ contains the $\\sqrt{k}$ elements with the smallest weights while $X_{\\sqrt{k}}$ contains elements with the largest weights.\nWe also compute the basis $S$ of $X$ from left to right as in \\cref{alg:greedy} together with $\\sqrt{k}$ query-sets $Q_1, Q_2, \\ldots, Q_{\\sqrt{k}}$, where $Q_j = \\bigcup_{i = 1}^{j}X_i$ is the union of the first $j$ blocks.\nThis takes $\\tO(k)$ time in total.\n\n\\paragraph{Deletion.}\nFor each deletion of $x$ located in the block $X_i$, we first update the query-sets $Q_i, \\ldots, Q_{\\sqrt{k}}$ by removing $x$ from them.\nLet $Q_1^\\prime, \\ldots, Q_{\\sqrt{k}}^\\prime$ denote the old query-sets before removing $x$.\nIf $x$ is not in the basis $S$ we currently maintain, then $S$ remains the min-weight basis of the new $X$ and nothing further needs to be done.\nOtherwise, we would like to find the min-weight replacement element $y$ of~$x$.\nWe know that such a $y$, if it exists, can only be located in blocks $X_i, X_{i + 1}, \\ldots, X_{\\sqrt{k}}$.\nAs such, we find the first $j \\geq i$ with $\\mathsf{rank}(Q_j) = \\mathsf{rank}(Q_j^\\prime)$ and recompute the portion of $S$ inside $X_j$.\nThis can be done by running \\cref{alg:greedy} with the initial set $S$ being $Q_{j - 1}$, the union of the first $j - 1$ blocks (see \\cref{obs:greedy}).\nThus, the deletion takes $\\tO(\\sqrt{k})$ time.\n\n\\paragraph{Insertion.}\nFor insertion of $x$, we simply add $x$ to the block where it belongs (according to $w(x)$) and then update the $Q_i$'s appropriately.\nThis takes $\\tO(\\sqrt{k})$ as 
well.\n\n\\paragraph{Rebalancing.}\nTo maintain an update time of $\\tO(\\sqrt{k})$, whenever the size of a block $X_i$ grows larger than $2\\sqrt{k}$, we split it into two blocks and recompute $Q_i$ and $Q_{i+1}$.\nSimilarly, to avoid having too many blocks, whenever the size of a block $X_i$ goes below $\\sqrt{k} \/ 2$, we merge it with an adjacent block and remove $Q_i$.\nEach of the above operations takes $\\tO(\\sqrt{k})$ time, which is subsumed by the cost of an update.\n\\\\\n\n\\noindent\nWe have shown how to implement each operation of \\cref{lemma:decremental-basis_baseline} in its desired running time, and the correctness of the data structure is immediate since we always follow the greedy basis algorithm (\\cref{alg:greedy}).\n\n\\subsection{Sparsification} \\label{sec:sparsification}\n\nIn this section, we prove \\cref{thm:decremental-basis} by ``sparsifying'' the input set of the data structure for \\cref{lemma:decremental-basis_baseline} in a recursive manner, similar to what \\cite{EppsteinGIN97} did to improve \\cite{Frederickson85}'s $O(\\sqrt{|E|})$ dynamic MST algorithm to $O(\\sqrt{|V|})$.\nThe following claim asserts that such sparsification is valid.\n\n\\begin{claim}\n Let $S_X$ and $S_Y$ be the min-weight bases of $X$ and $Y$, respectively, where $w(x) < w(y)$ holds for each $x \\in X$ and $y \\in Y$.\n Then, the min-weight basis of $S_X + S_Y$ is also the min-weight basis of $X + Y$.\n \\label{claim:sparsification}\n\\end{claim}\n\n\\begin{proof}\n Consider running the greedy algorithm (\\cref{alg:greedy}) on the set $X + Y$ to obtain its min-weight basis $S$.\n Clearly, we have $S_X \\subseteq S$ since $X$ contains the elements of smaller weights (in fact $S \\cap X = S_X$).\n Assume for contradiction that $S \\cap Y \\not\\subseteq S_Y$, i.e., there exists a $y^{*} \\in S \\cap Y$ which does not belong to $S_Y$.\n Then, it must be the case that there exists a $y \\in S_Y$ with $w(y) > w(y^{*})$, as otherwise (i.e., $y^{*}$ is ordered after 
everything in $S_Y$) by \\cref{lemma:basis-rank} the greedy algorithm stops before seeing $y^{*}$.\n We claim that the greedy algorithm on $Y$ chooses $y^{*}$ before all such $y$'s, thereby contradicting the fact that $S_Y$ is the min-weight basis of $Y$.\n This is true by the diminishing returns property\\footnote{The diminishing returns property of submodular functions states that $f(z+X)-f(X) \\leq f(z+Y)-f(Y)$ holds for each $Y \\subseteq X \\subseteq U$ and $z \\not\\in X$.} of the rank function: Let $Y^{*}$ be the set of elements in $Y$ with weights smaller than $w(y^{*})$.\n Since $y^{*} \\in S$, it follows that $\\mathsf{rank}(X + Y^{*} + y^{*}) > \\mathsf{rank}(X + Y^{*})$, implying $\\mathsf{rank}(Y^{*} + y^{*}) > \\mathsf{rank}(Y^{*})$, and thus the greedy algorithm run on $Y$ picks $y^{*}$.\n\\end{proof}\n\nWe are now ready to present our sparsification data structure.\n\n\\begin{proof}[Proof of \\cref{thm:decremental-basis}]\n Our data structure is a balanced binary tree where the leaf nodes correspond to elements in $X$ and each internal node corresponds to the set consisting of elements in leaf nodes of its subtree.\n We will abuse notation and use a node $v$ to also refer to the elements contained in the subtree rooted at $v$.\n \n We first build the binary tree top-down, starting with the root node containing $X$ and recursively splitting the current set into two subsets of roughly the same size and recursing on them.\\footnote{Note that unlike in \\cref{sec:bst}, we are not building query-sets here.}\n We then build the min-weight basis of each node in a bottom-up manner, starting from the leaves.\n For each node $v$ with children $u_1$ and $u_2$, we initialize the data structure $\\cD_v$ for \\cref{lemma:decremental-basis_baseline} with input set $S_{u_1} + S_{u_2}$, the union of the min-weight bases of $u_1$ and $u_2$, which are obtained from $\\cD_{u_1}$ and $\\cD_{u_2}$.\n By \\cref{claim:sparsification}, the basis $\\cD_v$ maintains is the min-weight basis of $v$.\n Thus, 
by induction, the basis maintained in the root node is indeed the min-weight basis of the whole set $X$.\n The data structure for \\cref{lemma:decremental-basis_baseline} takes time near-linear in the size of the input set to construct, and since the sparsified input is a subset of elements in the subtree, the initialization takes time near-linear in the sum of sizes of the subtrees, which is $\\tO(n)$ (indeed, every element occurs in at most $\\log n$ nodes).\n \n To delete an element $x \\in X$, we first identify the leaf node $v_x$ of the binary tree which corresponds to $x$.\n Going upward, for each ancestor $p$ of $v_x$, we delete $x$ from $\\cD_p$.\n If we find a replacement element $y$ for $x$, we insert $y$ into $\\cD_{q}$, where $q$ is $p$'s parent, before proceeding to $q$ ($y$ is not in the input set of $\\cD_q$, so such an insertion is valid by \\cref{claim:sparsification}).\n Since $x$ will be removed from $\\cD_q$ shortly, the input set of $\\cD_q$ remains the union of the min-weight bases of $q$'s children.\n This takes $\\tO(\\sqrt{r})$ time since $\\cD_p$ is of size $O(r)$.\n Inductively, since the min-weight bases of the child nodes are updated, by \\cref{claim:sparsification}, the min-weight basis of each of the affected nodes (hence the min-weight basis of $X$) is correctly maintained.\n\\end{proof}\n\n\\section{Introduction} \\label{sec:intro}\n\n\n\nVia reductions to the max-flow and min-cost flow problems, exciting progress has recently been made on many graph problems such as maximum matching, vertex connectivity, directed cut, and Gomory-Hu trees \\cite{Madry13,LeeS14,Madry16,\nBrandLNPSSSW20,KathuriaLS20,LiP20,\nAbboudKT21stoc,BrandLLSS0W21-maxflow,LiNPSY21,\nAbboudKT21focs,AxiotisMV21,Cen0NPSQ21,0006PS21,GaoLP21,\nAbboudKT22,CenLP22,\nBrandGJLLPS22,\nAbboudK0PST22,ChenKLPGS22,\nCenHLP23}. \nHowever, many basic problems have seen no progress for decades. 
These problems include $k$-disjoint spanning trees \\cite{gabow1988forests,Gabow91}, colorful spanning tree \\cite{GabowS85}, arboricity \\cite{Gabow95}, spanning tree packing \\cite{gabow1988forests}, graphic matroid intersection \\cite{GabowS85,GabowX89}, and simple job scheduling matroid intersection \\cite{XuG94}. \nFor example, in the {\\em $k$-disjoint spanning trees problem} \\cite[Chapter 51]{schrijver2003}, \nwe want to find $k$ edge-disjoint spanning trees in a given input graph $G=(V,E)$. When $k=1$, this is the spanning tree problem and can be solved in linear time. \nFor higher values of $k$, the best known bound remains \n$\\tO(k^{3\/2}|V|\\sqrt{|E|})$, attained by an algorithm from around 1990 \\cite{gabow1988forests,Gabow91}\\footnote{The stated bound is due to Gabow and Westermann \\cite{gabow1988forests}. Gabow \\cite{Gabow91} announced an improved bound of $O(kn\\sqrt{m+kn\\log(n)})$, but this bound was later removed from the journal version of the paper.}, which is also the best runtime for its applications such as the Shannon Switching Game \\cite{Gardner61} and Graph $k$-Irreducibility \\cite{Whiteley88rigidity,graver1993combinatorial}. \nNo better runtime is known even for the special case of $k=2$.\n\n{\\em Can we improve the \nbounds of $k$-disjoint spanning trees and other problems?}\nMore importantly, since it is very unclear if these problems can be reduced to max-flow or min-cost flow\\footnote{For example, the best-known numbers of max-flow calls to decide whether there are $k$ disjoint spanning trees and to find the $k$ spanning trees are $O(n)$ and $O(n^2)$, respectively.}, \n{\\em is there an alternative approach to designing fast algorithms for many problems simultaneously?}\nFortunately, many of the above problems can be modeled as {\\em matroid problems}, giving hope that solving matroid problems would solve many of these problems in one shot. 
Unfortunately, this is not true in the traditional model for matroid problems---even the most efficient algorithm possible for a matroid problem does not \nnecessarily give a faster algorithm for any of its special cases. \nWe discuss this more below.\n\n\n\\paragraph{Matroid Problems.} A matroid $\\cM$ is a pair $(U, \\cI)$, where $U$ is a finite set (called the ground set) and $\\cI$ is a family of subsets of $U$ (called the independent sets) satisfying some constraints (see \\cref{def:matroid}; these constraints are not important in the following discussion).\nSince $\\cI$ can be very large, problems on a matroid $\\cM$ are usually modeled with {\\em oracles} that answer {\\em queries}. Given a set $S\\subseteq U$, {\\em independence queries} ask if $S\\in \\cI$ and {\\em rank queries} ask for the value of $\\max_{I\\in \\cI, I\\subseteq S} |I|.$\nTwo textbook examples of matroid problems are {\\em matroid intersection} and {\\em union}\\footnote{\\emph{Matroid union} is also sometimes called \\emph{matroid sum}.} (e.g., \\cite[Chapters~41-42]{schrijver2003}). \nWe will also consider the special case of matroid union called {\\em $k$-fold matroid union}.\n\n\\begin{definition}[Matroid intersection and ($k$-fold) matroid union]\\label{def:intro:matroid intersection union}\n(I) In matroid intersection, we are given two matroids $(U, \\cI_1)$ and $ (U, \\cI_2)$ and want to find a set of maximum size in $\\cI_1\\cap \\cI_2.$ \n(II) In matroid union, we are given $k$ matroids $(U_1, \\cI_1), (U_2, \\cI_2), \\ldots, (U_k, \\cI_k)$, and want to find the set $S_1\\cup S_2 \\cup \\cdots \\cup S_k$, where $S_i \\in \\cI_i$ for every $i$, of maximum size. \n(III) Matroid union in the special case where $U_1=U_2=\\cdots = U_k$ and $\\cI_1=\\cI_2=\\cdots = \\cI_k$ is called $k$-fold matroid union. 
\n\n{\\em Notations:} Throughout, for problems over matroids $(U_1, \\cI_1), (U_2, \\cI_2), \\ldots, (U_k, \\cI_k)$, we define $n:=\\max_i{|U_i|}$ and $r:=\\max_{i}\\max_{S \\in \\cI_i}|S|$. \\qed\n\\end{definition}\n\n\n\nMatroid problems are powerful abstractions that can model many fundamental problems. \nFor example, the $2$-disjoint spanning trees problem can be modeled as a $2$-fold matroid union problem: \n\\begin{tcolorbox}[breakable,boxrule=0pt,frame hidden,sharp corners,enhanced,borderline west={.5 pt}{0pt}{red},colback=white]\n\\vspace{-.2cm}\nGiven a graph $G=(V, E)$, let $\\cM=(U, \\cI)$ be the corresponding {\\em graphic matroid}, i.e., $U=E$ and $S\\subseteq E$ is in $\\cI$ if it is a forest in $G$. (It is a standard fact that such an $\\cM$ is a matroid.) \nThe $2$-fold matroid union problem with input $\\cM$ is the problem of finding two forests $F_1\\subseteq E$ and $F_2\\subseteq E$ in $G$ that maximize $|F_1\\cup F_2|$. This is known as the {\\em $2$-forest} problem, which clearly generalizes $2$-disjoint spanning trees (a $2$-forest algorithm will return two disjoint spanning trees in $G$ if they exist). \n\\end{tcolorbox}\nObserve that this argument can be generalized to modeling the $k$-disjoint spanning trees problem by $k$-fold matroid union. \nOther problems that can be modeled as matroid union (respectively, matroid intersection) include arboricity, spanning tree packing, $k$-pseudoforest, and mixed $k$-forest-pseudoforest (respectively, bipartite matching and colorful spanning tree).\n\nThe above fact makes matroid problems a {\\em unified} approach for showing that many problems, including those mentioned above, can be solved in polynomial time. This is because (i) matroid union, intersection, and other matroid problems can be solved in polynomial time with polynomially many rank\/independence queries, and (ii) for most problems, queries can be answered in polynomial time. 
For example, when we model $k$-disjoint spanning trees as $k$-fold graphic matroid union as above, the corresponding rank query is: given a set $S$ of edges, find the size of a spanning forest of $S$. This can be solved in $O(|S|)$ time. \n\nWhen it comes to more fine-grained time complexities, such as nearly linear and sub-quadratic time, matroid algorithms in the above model are not very helpful. This is because simulating a matroid algorithm in this model causes too much runtime blow-up. \nFor example, even if we can solve $k$-fold matroid union over $(U, \cI)$ with {\em linear} ($O(|U|)$) rank-query complexity, this does not necessarily imply that we can solve its special case of $2$-disjoint spanning tree any faster.\nThis is because each query about a set $S$ of edges needs $\Omega(|S|)$ time even to specify $S$, which can be as large as the number of edges in the input graph.\nIn other words, even a matroid union algorithm with linear complexities may only imply $O(|E|^2)$ time for solving $2$-disjoint spanning trees on graphs $G=(V,E)$. \nThis is also the case for other problems that can be modeled as matroid union and intersection. \nBecause of this, previous works obtained improved bounds by simulating an algorithm for matroid problems and coming up with clever ideas to speed up the simulation for each of these problems one by one (e.g., \cite{GabowT79,RoskindT85,GabowS85,gabow1988forests,FredericksonS89,GabowX89,Gabow91,XuG94}).\nThere is thus no guarantee that recent and future improved algorithms for matroid problems (e.g., \cite{chakrabarty2019faster,quadratic2021,blikstad21}) would imply improved bounds for any of these problems.
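To make the traditional rank query above concrete: for the graphic matroid, the rank of an edge set $S$ is the size of a spanning forest of $S$, which a union-find structure computes in near-linear time. The following sketch is our illustration only (the function name is ours, not from the cited implementations):

```python
# Illustrative sketch (not from the paper): a traditional rank query for
# the graphic matroid, where rank(S) = size of a spanning forest of S.
# Computed from scratch with union-find, in near-linear time in |S|.

def graphic_rank(edges):
    """Rank of an edge set S in the graphic matroid of its host graph."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    rank = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:  # edge joins two components, so it enters the forest
            parent[ru] = rv
            rank += 1
    return rank

# A triangle plus a pendant edge: its spanning forest has 3 edges.
assert graphic_rank([(1, 2), (2, 3), (1, 3), (3, 4)]) == 3
```

Note that answering each such query independently costs time proportional to $|S|$, which is exactly the simulation overhead discussed above.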
\n\n\n\\paragraph{Dynamic Oracle.}\nThe main conceptual contribution of this paper is an introduction of a new matroid model called {\\em dynamic oracle}\nand an observation that, using dynamic algorithms, \nsolving a matroid problem efficiently in our model immediately implies efficient algorithms for many problems it can model. \nIn contrast to traditional matroids where a query can be made with an arbitrary set $S$, our model only allows queries made by slightly modifying the previous queries.\\footnote{The ``cost'' of a query in our dynamic model is the distance (size of the symmetric difference) from some (not necessarily the last) previous query.} More precisely, the dynamic-rank-oracle model, which is the focus of this work, is defined as follows.\\footnote{One can also define the dynamic-independence-oracle model where $\\textsc{Query}(i)$ returns only the independence of $S_i$.} \n\n\n\n\\begin{definition}[Dynamic-rank-oracle model]\n\\label{def:dyn-oracle}\n For a matroid $\\cM = (U, \\cI)$, starting from $S_0 = \\emptyset$ and $k = 0$, the algorithm can access the oracle via the following three operations.\n \\begin{itemize}[noitemsep]\n \\item $\\textsc{Insert}(v, i)$: Create a new set $S_{k + 1} := S_i \\cup \\{v\\}$ and increment $k$ by one.\n \\item $\\textsc{Delete}(v, i)$: Create a new set $S_{k + 1} := S_i \\setminus \\{v\\}$ and increment $k$ by one.\n \\item $\\textsc{Query}(i)$: Return the rank of $S_i$, i.e., the size of the largest independent subset of $S_i$.\n \\end{itemize}\n We say that a matroid algorithm takes $t$ time and dynamic-rank-query complexities if its time complexity and required number of operations are both at most $t$. 
\\qed\n \\end{definition}\n\nWe emphasize that a query can be obtained from \\emph{any} previous query, not just the last one.\n\n\\begin{table}[ht]\n \\centering\n\n{\\small\n \\begin{tabular}{c|l}\n {\\bf matroid problems} &\n \\begin{tabular}{l}\n {\\bf special cases}\n \\end{tabular}\n \\\\\\hline\n \\begin{tabular}{c}\n $k$-fold matroid union in $T(n, r, k)$ \\\\\n \\end{tabular} & \\begin{tabular}{l}\n \\\\\n $k$-forest in $\\tT(|E|, |V|, k))$ \\\\\n $k$-pseudoforest in $\\tT(|E|, |V|, k))$ \\\\\n $k$-disjoint spanning tree in $\\tT(|E|, |V|, k)$ (randomized) \\\\\n \\phantom{$k$-disjoint spanning tree in} $\\hat{T}(|E|, |V|, k)$ (deterministic) \\\\\n arboricity in $\\tT(|E|, |V|, \\sqrt{|E|}))$ \\\\\n tree packing in $\\tT(|E|, |V|, |E|\/|V|))$ \\\\\n Shannon Switching Game in $\\tT(|E|, |V|, 2))$ \\\\\n graph $k$-irreducibility in $\\tT(|E|, |V|, k))$ \\\\\n \\\\\n \\end{tabular}\\\\\\hline\n \\begin{tabular}{c}\n matroid union in $T(n, r, k)$ \\\\\n \\end{tabular}& \\begin{tabular}{l}\n \\\\\n $(f, p)$-mixed forest-pseudoforest in $\\tT(|E|, |V|, f + p))$ \\\\\n \\\\\n \\end{tabular}\\\\\\hline\n \\begin{tabular}{c}\n matroid intersection in $T(n, r)$ \\\\\n \\end{tabular} & \\begin{tabular}{l}\n \\\\\n bipartite matching in $\\tT(|E|, |V|)$ \\\\\n colorful spanning tree in $\\tT(|E|, |V|)$\\\\\n graphic matroid intersection in $\\tT(|E|, |V|)$ \\\\\n simple job scheduling matroid intersection in $\\tT(n, r)$ \\\\\n convex transversal matroid intersection in $\\tT(|V|, \\mu)$ \\\\\n \\\\\n \\end{tabular}\\\\\\hline\n \\end{tabular}\n}\n \\caption{Examples of implications of dynamic-rank-oracle matroid algorithms. The complexities in the first column are in terms of time and dynamic-rank-query complexities. 
Notations $n$, $r$, and $k$ are as in \Cref{def:intro:matroid intersection union}.\n In the second column, the $\mathrm{polylog}(n)$ factors are hidden in $\tT$ and\n subpolynomial factors are hidden in $\hat{T}.$ Details are in \cref{sec:packing,appendix:applications}.}\n \label{tab:intro:dyn-oracle-implies-fast-algorithms}\n\end{table}\n\n\n\n\n\begin{observation}[Details in \cref{sec:packing,appendix:applications}]\label{thm:intro:dyn-oracle implies fast alg}\nAlgorithms in the dynamic-rank-oracle model for the $k$-fold matroid union, matroid union, and matroid intersection problems imply algorithms for a number of problems, with the time complexities shown in \Cref{tab:intro:dyn-oracle-implies-fast-algorithms}.\n\end{observation}\n\begin{proof}[Proof Idea]\nAs an example, we sketch the proof that if $k$-fold matroid union can be solved in $T(n, r, k)$ time and dynamic rank queries, then $k$-disjoint spanning trees can be found in $T(|E|, |V|, k)\cdot\mathrm{polylog}(|V|)$ time. \nRecall that in the traditional rank-oracle model, the algorithm can ask an oracle for the size of a spanning forest in an arbitrary set of edges $S$, causing $O(|S|)$ time to simulate. \nIn our dynamic-rank-oracle model, an algorithm needs to modify some set $S_i$ to the desired set $S$ using the {\sc Insert} and {\sc Delete} operations before asking for the size of a spanning forest in $S$. We can use a spanning forest data structure to keep track of the size of the spanning forest under edge insertions and deletions. This takes $\mathrm{polylog}(|V|)$ time per operation \cite{KapronKM13,GibbKKT15}.\footnote{The dynamic spanning forest algorithms of \cite{KapronKM13,GibbKKT15} are randomized and assume the so-called oblivious adversary (as opposed to, e.g., \cite{NanongkaiS17,Wulff-Nilsen17} which work against adaptive adversaries). This is not a problem because we only need to report the size of the spanning forest and not an actual forest.
We can also use a deterministic algorithm from \\cite{ChuzhoyGLNPS20,NanongkaiSW17} which requires $|V|^{o(1)}$ time per operation.}\nSo, if $k$-fold matroid union can be solved in $T(n, r, k)$ time and dynamic rank queries, then $k$-disjoint spanning trees can be solved in $\\tO(T(n, r, k))=\\tO(T(|E|, |V|, k))$ time, where the equality is because the ground set size is the number of edges ($|U|=|E|$) and the rank $r$ is equal to the size of a spanning forest (thus at most $|V|$).\\footnote{Note that we also need a fully-persistent data structure \\cite{DriscollSST86,Dietz89} to maintain the whole change history in our argument.}\n\\end{proof}\n\nObserve that designing efficient algorithms in our dynamic-oracle model is {\\em not} easier than in the traditional model: a dynamic-oracle matroid algorithm can be simulated in the traditional model within the same time and query complexities. Naturally, the first challenge of the new model is this question: {\\em Can we get matroid algorithms in the new model whose performances match the state-of-the-art algorithms in the traditional model?}\nMoreover, for the new model to provide a unified approach to solve many problems simultaneously, one can further ask: {\\em Would these new matroid algorithms imply state-of-the-art bounds for many problems?}\n\n\n\n\\paragraph{Algorithms.} \nIn this paper, we provide algorithms in the new model whose complexities not only match those in the traditional model but sometimes even improve them. These lead to new bounds for some problems and, for other problems, a unified algorithm whose performance matches previous results developed in various papers for various problems. 
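The interface of the dynamic-rank-oracle model above can be illustrated with a naive reference implementation for the graphic matroid. This sketch is ours and is a correctness illustration only: unlike the polylogarithmic-time dynamic spanning forest data structures cited in the proof idea, its Query recomputes the rank from scratch; it does, however, show how each set $S_i$ can branch off \emph{any} previous version.

```python
# Naive reference implementation (our illustration, not the paper's data
# structure) of the dynamic-rank-oracle interface, instantiated for the
# graphic matroid: Insert/Delete create new versions S_{k+1} from any
# earlier version S_i; Query(i) returns the rank (spanning forest size).

class NaiveDynamicGraphicRankOracle:
    def __init__(self):
        self.versions = [frozenset()]  # S_0 = empty set

    def insert(self, edge, i):
        self.versions.append(self.versions[i] | {edge})
        return len(self.versions) - 1  # index of the new set S_{k+1}

    def delete(self, edge, i):
        self.versions.append(self.versions[i] - {edge})
        return len(self.versions) - 1

    def query(self, i):
        # Rank of S_i = size of a spanning forest of S_i, via union-find.
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        rank = 0
        for u, v in self.versions[i]:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                rank += 1
        return rank

oracle = NaiveDynamicGraphicRankOracle()
a = oracle.insert((1, 2), 0)   # S_1 = {(1,2)}
b = oracle.insert((2, 3), a)   # S_2 = {(1,2),(2,3)}
c = oracle.insert((1, 3), b)   # S_3 closes a triangle
assert oracle.query(c) == 2    # spanning forest of a triangle has 2 edges
d = oracle.delete((1, 2), a)   # branch off S_1, not the latest version
assert oracle.query(d) == 0
```

Note how the last deletion branches off $S_1$ rather than the most recent version; this is exactly the flexibility that the model (and the fully-persistent data structures mentioned in the proof idea's footnote) must support.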
\n\n\nMore precisely, the best time and rank-query complexities for matroid intersection on input $(U, \cI_1)$ and $(U, \cI_2)$ were $\tO(n\sqrt{r})$ by Chakrabarty, Lee, Sidford, Singla, and Wong \cite{chakrabarty2019faster} (improving the previous $\tO(nr)$ bound based on Cunningham's classic algorithm \cite{cunningham1986improved,LeeSW15,nguyen2019note}).\nDue to a known reduction, this implies an $\tO(k^2\sqrt{k}n\sqrt{r})$ bound for $k$-fold matroid union and matroid union. \nIn this paper, we present algorithms in the dynamic-oracle model that imply improved bounds in the traditional model for $k$-fold matroid union and matroid union, and match the bounds for matroid intersection.\n\nHere, we only state our dynamic-rank-query complexities as they are the main focus of this paper, and for all the applications we have, answering (and maintaining dynamically) independence queries does not seem to be significantly easier.\nNote that we also obtain dynamic-independence-query algorithms that match the state-of-the-art traditional ones~\cite{blikstad21}, which we defer to \cref{appendix:dynamic-matroid-intersection-ind}.\n\begin{theorem} \label{thm:main}\n(I) $k$-fold matroid union over input $(U, \cI)$ can be solved in $\tO(n + kr\sqrt{\min(n, kr)} + k \min(n, kr))$ time and dynamic rank queries.\n(II) Matroid union over input $(U_1, \cI_1), (U_2, \cI_2), \ldots, (U_k, \cI_k)$ can be solved in $\tO\left(\left(n + r\sqrt{r}\right) \cdot \mathrm{poly}(k)\right)$ time and dynamic rank queries.\n(III) Matroid intersection over input $(U, \cI_1)$ and $(U, \cI_2)$ can be solved in $\tO(n\sqrt{r})$ time and dynamic rank queries.\n\end{theorem}\n\n\n\n\n\definecolor{applegreen}{rgb}{0.55, 0.71, 0.0}\n\n\begin{savenotes}\n\begin{table}[ht]\n \centering\n \small\n \begin{tabular}{c|l|l}\n {\bf problems} & {\bf our bounds} & {\bf state-of-the-art results} \\\hline\n {\tiny (Via $k$-fold matroid union)}& & \\\n 
$k$-forest\\footnote{For $k$-forest and its related graph problems in the table, we can assume that $k \\leq |V|$, and thus the $k^2r$ (where $r = \\Theta(|V|)$) term in \\cref{thm:main} is dominated by the $(kr)^{3\/2}$ term.} & $\\tO(|E| + (k|V|)^{3\/2})$ {\\color{green!50!black}\\cmark} & $\\tO(k^{3\/2}|V|\\sqrt{|E|})$ \\cite{gabow1988forests} \\\\\n $k$-pseudoforest & $\\tO(|E| + (k|V|)^{3\/2})$ {\\color{red}\\xmark} & $|E|^{1 + o(1)}$ \\cite{ChenKLPGS22} \\\\\n $k$-disjoint spanning trees & $\\tO(|E| + (k|V|)^{3\/2})$ {\\color{green!50!black}\\cmark} & $\\tO(k^{3\/2}|V|\\sqrt{|E|})$ \\cite{gabow1988forests} \\\\\n arboricity\\footnote{Here we use the bound that $\\alpha \\leq \\sqrt{E}$~\\cite{Gabow95}.} & $\\tO(|E||V|)$ {\\color{red}\\xmark} & $\\tO(|E|^{3\/2})$ \\cite{Gabow95} \\\\\n tree packing & $\\tO(|E|^{3\/2})$ & $\\tO(|E|^{3\/2})$ \\cite{gabow1988forests} \\\\\n Shannon Switching Game & $\\tO(|E| + |V|^{3\/2})$ {\\color{green!50!black}\\cmark} & $\\tO(|V|\\sqrt{|E|})$ \\cite{gabow1988forests} \\\\\n graph $k$-irreducibility & $\\tO(|E| + (k|V|)^{3\/2} + k^2|V|)$ {\\color{green!50!black}\\cmark} & $\\tO(k^{3\/2}|V|\\sqrt{|E|})$ \\cite{gabow1988forests} \\\\ & & \\\\ \\hline\n {\\tiny (Via matroid union)} & & \\\\\n $(f, p)$-mixed forest-pseudoforest & $\\tO_{f,p}(|E| + |V|\\sqrt{|V|})$ {\\color{green!50!black}\\cmark} & $\\tO((f + p)|V|\\sqrt{f|E|})$ \\cite{gabow1988forests} \\\\ & & \\\\\\hline\n {\\tiny (Via matroid intersection)} & & \\\\\n bipartite matching (combinatorial\\footnoteref{foot:combinatorial}) & $\\tO(|E|\\sqrt{|V|})$ & $O(|E|\\sqrt{|V|})$ \\cite{HopcroftK73} \\\\\n bipartite matching (continuous) & $\\tO(|E|\\sqrt{|V|})$ {\\color{red}\\xmark} & $|E|^{1 + o(1)}$ \\cite{ChenKLPGS22} \\\\\n graphic matroid intersection & $\\tO(|E|\\sqrt{|V|})$ & $\\tO(|E|\\sqrt{|V|})$ \\cite{GabowX89} \\\\\n simple job scheduling matroid intersection & $\\tO(n\\sqrt{r})$ & $\\tO(n\\sqrt{r})$ \\cite{XuG94} \\\\\n convex transversal matroid intersection & 
$\\tO(|V|\\sqrt{\\mu})$ & $\\tO(|V|\\sqrt{\\mu})$ \\cite{XuG94} \\\\\n linear matroid intersection\\footnote{Our bound is with respect to the current value of $\\omega < 2.37286$~\\cite{AlmanW21}. If $\\omega = 2$, then our bound becomes $\\tO(n^{2.5}\\sqrt{r})$.} & $\\tO(n^{2.529}\\sqrt{r})$ {\\color{red}\\xmark} & $\\tO(nr^{\\omega - 1})$ \\cite{Harvey09} \\\\\n colorful spanning tree & $\\tO(|E|\\sqrt{|V|})$ & $\\tO(|E|\\sqrt{|V|})$ \\cite{GabowS85} \\\\\n maximum forest with deadlines & $\\tO(|E|\\sqrt{|V|})$ {\\color{green!50!black}\\cmark} & (no prior work) \\\\\n \\end{tabular}\n \\caption{Implications of our matroid algorithms stated in \\cref{thm:main} in comparison with previous results. Results marked with a {\\color{green!50!black}\\cmark} improve over the previous ones. Results marked with a {\\color{red}\\xmark} are worse than the best time bounds. Other results match the currently best-known algorithms up to poly-logarithmic factors. Details can be found in \\cref{appendix:applications}.}\n \\label{tab:intro:implications}\n\\end{table}\n\\end{savenotes}\n\n\nCombined with \\Cref{thm:intro:dyn-oracle implies fast alg}, the above theorem immediately implies fast algorithms for many problems. \\Cref{tab:intro:implications} shows some of these problems. \nOne of our highlights is the improved bounds for $k$-forest and $k$-disjoint spanning trees. Even for $k=2$, there was no runtime better than the decades-old $\\tO(k^{3\/2}|V|\\sqrt{|E|})$ runtime \\cite{gabow1988forests,Gabow91}. Our result improves this to $\\tO(|E| + (k|V|)^{3\/2})$.\nThis is nearly linear for dense input graphs and small $k$.\nThis also implies a faster runtime for, e.g., Shannon Switching Game (see \\cite{shannon1955game,gabow1988forests}) which is a special case of $2$-disjoint spanning trees. \n\n\nOur matroid intersection algorithm gives a unified approach to achieving time complexities that were previously obtained by various techniques in many papers. 
\nThus, improving this algorithm would imply breakthrough runtimes for many of these problems simultaneously.\nMoreover, in contrast to the previous approach, where matroid algorithms have to be considered for each new problem one by one, our approach makes it easier to derive new bounds.\nFor example, say we are given a graph $G = (V, E)$, where each edge $e$ will stop functioning after day $d(e)$.\nEvery day we can ``repair'' one functioning edge. \nOur goal is to make the graph connected in the long run (an edge will work forever once it has been repaired).\nThis is the \emph{maximum forest with deadlines} problem.\nFormally speaking, the goal is to construct a spanning tree or a forest of the maximum size at the end, by selecting an edge $e$ with $d(e) \geq t$ in the $t^{\scriptsize \mbox{{\rm th}}}$ round.\footnote{It is tempting to believe that we can use a greedy algorithm where we always add an edge $e$ with the smallest $d(e)$ to the solution. The following example shows why this does not work: There are four vertices $V=\{a,b,c,d\}$. Edges $e_1$ and $e_2$ between $a$ and $b$ have $d(e_1)=1$ and $d(e_2)=3$. Edges $e_3=(b,c)$ and $e_4=(c,d)$ have $d(e_3)=d(e_4)=2$. Greedy repairs $e_1$ first and can then save only one of $e_3, e_4$, whereas repairing $e_3, e_4, e_2$ in this order yields a spanning tree.\n}\nOur result implies a runtime of $\tO(|E|\sqrt{|V|})$ for this problem. The runtime holds even for the harder case where each edge is also associated with an {\em arrival time} (edges cannot be selected before they arrive). \n\n\n\nWe also list some problems where our bounds cannot match the best bounds in \Cref{tab:intro:implications}. Improving our matroid algorithms to match these bounds is a very interesting open problem. A particularly interesting case is the maximum bipartite matching problem.
Our dynamic-oracle matroid intersection algorithm implies a runtime that matches the runtime of the best combinatorial algorithm, due to Hopcroft and Karp \cite{HopcroftK73}, which has recently been improved via continuous optimization techniques (e.g., \cite{Madry13,Madry16,CohenMSV17,AxiotisMV20,BrandLNPSSSW20,BrandLLSS0W21-maxflow,ChenKLPGS22}).\footnote{\label{foot:combinatorial}The term ``combinatorial'' is vague and varies in different contexts. Here, an algorithm is ``combinatorial'' if it does not use any of the continuous optimization techniques such as interior-point methods (IPMs).}\nThere are barriers to using continuous optimization even to solve some special cases of matroid intersection (e.g., colorful spanning tree's linear program requires exponentially many constraints). Thus, improving our matroid intersection algorithm requires either a new way to use continuous optimization techniques or a breakthrough idea in designing combinatorial algorithms that would improve the Hopcroft-Karp algorithm.\n\n\n\n\n\n\n\n\paragraph{Lower Bounds.} \nAnother advantage of our dynamic-oracle matroid model is that it can be easier to prove lower bounds in it than in the traditional model. As a showcase, we show a simple super-linear rank-query lower bound in our new model. In fact, our argument also implies the first super-linear independence-query lower bound in the traditional model.\nThe latter result might be of independent interest.
\n\\begin{theorem}\\label{thm:intro:lowerbound}\n(I) Any deterministic algorithms require $\\Omega(n\\log n)$ dynamic rank queries to solve the matroid union and matroid intersection problems.\n(II) Any deterministic algorithms require $\\Omega(n\\log n)$ (traditional) independence queries to solve the matroid union and matroid intersection problems.\n\\end{theorem}\nOur first lower bound suggests that the dynamic-oracle model might at best give nearly linear (and not linear) time algorithms.\nPrior to this paper, only a $\\log_2(3)n - o(n)$ independence-query lower bound for deterministic algorithms was known for (traditional) independence queries, due to Harvey \\cite{harvey2008matroid}.\\footnote{To the best of our knowledge, this lower bound does not hold for rank queries.}\nOur lower bound in the traditional model improves this decade-old bound. \nMoreover, showing super-linear independence-query lower bounds in the traditional model for matroid intersection is a long-standing open problem considered since 1976 (e.g. \\cite{Welsh1976matroid_book,chakrabarty2019faster}).\\footnote{As noted by Harvey, Welsh asked about the number of queries\nneeded to solve the matroid partition problem, which is equivalent to matroid union and intersection.} \nOur lower bound in the traditional model answers this open problem for deterministic algorithms. The case of randomized algorithms would be resolved too if an $\\omega(|V|)$ lower bound was proved for the communication complexity for computing connectivity of an input graph $G=(V,E)$. (It was conjectured to be $\\Omega(n\\log n)$ in \\cite{ApersEGLMN22}.)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Techniques}\nIn this section we briefly discuss our technical contributions. 
For a more in-depth overview of our algorithms, see the technical overview (\cref{sec:overview}).\n\n\paragraph{Exchange Graph \& Blocking Flow.} Our algorithms and lower bounds are based on the notion of finding \emph{augmenting paths} in the \emph{exchange graph}, due to \cite{edmonds1970submodular,Lawler75,aignerD}.\nGiven a common independent set $S\in \cI_1\cap \cI_2$, the exchange graph $G(S)$ is a directed graph where finding an $(s,t)$-path corresponds to increasing the size of $S$ by one. Starting with the work of Cunningham \cite{cunningham1986improved}, modern matroid intersection algorithms (including the state-of-the-art \cite{chakrabarty2019faster,blikstad21}) are based on a ``Blocking Flow'' idea inspired by Hopcroft and Karp's \cite{HopcroftK73} bipartite matching algorithm and Dinic's \cite{dinic1970algorithm} max-flow algorithm.\n\n\paragraph{Matroid Intersection with Dynamic Oracle.} \nOur matroid intersection algorithms are implementations of the state-of-the-art $\tO(n\sqrt{r})$ rank-query algorithm of \cite{chakrabarty2019faster} and\nthe $\tO(n r^{3\/4})$ independence-query algorithm of \cite{blikstad21}. Our contribution here is to show that versions of them can be implemented also in the \emph{dynamic}-oracle model. \n\nThese algorithms explore the exchange graph efficiently in the classic non-dynamic models by performing binary searches with the oracle queries to find useful edges. However, such a binary search is very expensive in the dynamic-oracle model (as consecutive queries may differ in many elements): a single such binary search might cost up to $O(n)$ in the dynamic-oracle model instead of just $O(\log n)$.\n\nOur contribution is to design a binary-tree data structure that supports finding these useful edges efficiently also in the dynamic-oracle model.\nNote that after each augmentation the underlying exchange graph changes, so the data structure must also support these dynamic updates efficiently.
Some updates can just be propagated up the tree, while we handle others by batching them and rebuilding the tree periodically.\nWe also rely on the ``Augmenting Sets'' structural result of \cite{chakrabarty2019faster}, which states that the updates to the exchange graph are local; this helps us reduce the number of updates we need to make to our data structure and achieve the final time bound.\n\n\paragraph{Matroid Union with Dynamic Oracle.} \nOur $\tO(n + r\sqrt{r})$ matroid union algorithm with dynamic rank oracle is based on our $\tO(n \sqrt{r})$ matroid intersection algorithm (indeed, matroid union is a special case of matroid intersection).\nWe are able to obtain a more efficient algorithm by \ntaking advantage of the additional structure of the exchange graph in the case of matroid union.\nThe main idea is to run the blocking flow algorithm only on a dynamically-changing subgraph of size $\Theta(r)$, instead of on the full exchange graph of size $\Theta(n)$.\n\nA crucial observation is that all but $O(r)$ elements will be directly connected to the source vertex~$s$. To ``sparsify'' this first layer in the breadth-first-search tree, we argue that one only needs to consider a basis of it (this basis will have size at most $r$ as opposed to $n$).
After an augmentation, this first layer changes, so we design a \emph{dynamic algorithm to maintain a basis of a matroid}\footnote{For example, maintaining a spanning forest in a dynamically changing graph.}, with $\tO(\sqrt{r})$ update time and $O(n)$ pre-computation.\nOur algorithm to maintain this basis dynamically is inspired by the\n dynamic minimum spanning tree algorithm of \cite{Frederickson85} ($O(\sqrt{|E|})$ update time), in combination with the sparsification trick of \cite{EppsteinGIN97} ($\tO(\sqrt{|V|})$ update time).\n We believe that our dynamic algorithm to maintain a (min-weight) basis of a matroid might also be of independent interest.\n\n\n\paragraph{Lower Bounds.} Our super-linear $\Omega(n \log n)$ query lower bound comes from studying the communication complexity of matroid intersection. The matroids $\cM_1$ and $\cM_2$ are given to two parties, Alice and Bob, respectively, who are asked to solve the matroid intersection problem using as few bits of communication between them as possible. We show that even if Alice and Bob know some common independent set $S\in \cI_1\cap \cI_2$, they need to communicate $\Omega(n\log n)$ bits to see if $S$ is optimal. Essentially, they need to determine if there is an augmenting path in the exchange graph.
Using a class of matroids called \\emph{gammoids} (see e.g.~\\cite{perfect1968applications,mason1972class}), we show a reduction from the $(s,t)$-connectivity problem which has a deterministic $\\Omega(n\\log n)$ communication lower bound \\cite{HajnalMT88}.\n\n\\subsection{Organization}\n\nThe rest of the paper is organized as follows.\nWe first give a high-level overview of how we obtain our algorithms in \\cref{sec:overview}.\nIn \\cref{sec:prelim}, we provide the necessary preliminaries.\nWe then construct the binary search tree data structure in \\cref{sec:bst}, followed in \\cref{sec:matroid-intersection} by how to use it to implement our $\\tO(n\\sqrt{r})$ matroid intersection algorithm in the new dynamic-rank-oracle model (the dynamic-\\emph{independence}-oracle algorithm is in \\cref{appendix:dynamic-matroid-intersection-ind}).\nIn \\cref{sec:decremental-basis} we describe our data structure to maintain a basis of a matroid dynamically, and then we use this in our $\\tO_k(n+r\\sqrt{r})$ matroid union algorithm in \\cref{sec:matroid-union} (the special case of $k$-fold matroid union is in \\cref{appendix:matroid-union-fold}).\nWe show our super-linear lower bound in \\cref{sec:lowerbound}. We end our paper with a discussion of open problems in \\cref{sec:openproblems}.\nIn \\cref{appendix:applications} we mention how to implement different matroids oracles in the dynamic-oracle model, and discuss some problems we can solve with our algorithms.\n\n\\section{Super-Linear Query Lower Bounds} \\label{sec:lowerbound}\n\nLower bounds for matroid intersection have been notoriously difficult to prove. 
The current highest lower bound is due to Harvey \cite{harvey2008matroid}, who showed that $(\log_2 3) n - o(n)$ queries are necessary for any deterministic independence-query algorithm solving matroid intersection.\nObtaining an $\omega(n)$ lower bound has been called a challenging open question \cite{chakrabarty2019faster}.\n\nIn this section, we show the first super-linear query lower bound for matroid intersection, both in our \emph{new dynamic-rank-oracle} model (\cref{def:dyn-oracle}) and in the \emph{traditional independence-oracle} model, thus answering the above-mentioned open question and improving on the bounds of \cite{Harvey09}.\nWe obtain our lower bounds by studying the communication complexity of matroid intersection.\n\n\begin{theorem}\nIf Alice is given a matroid $\cM_1 = (U,\cI_1)$\nand Bob a matroid $\cM_2 = (U,\cI_2)$, any deterministic communication protocol needs $\Omega(n \log n)$ bits of communication to solve the matroid intersection problem.\n\label{thm:comm-lb}\n\end{theorem}\n\nThe communication lower bound of \cref{thm:comm-lb} implies a similar lower bound for the number of independence queries needed. \nWe argue that any independence-query algorithm can be simulated by Alice and Bob in the communication setting by exchanging a single bit per query asked. Whenever they want to ask an independence query ``Is $S \in \cI_{i}$?'', Alice or Bob will check this locally and share the answer with the other party by sending one bit of communication.\n\nUnfortunately, this argument does not extend to the traditional rank-oracle model (since each rank query can in fact reveal $\Theta(\log n)$ bits of information, which need to be sent to the other party).
However, for the new \emph{dynamic}-rank-oracle model, the $\Omega(n\log n)$ lower bound holds as each new query now reveals only a constant number of bits of information: either the rank remains the same, increases by one, or decreases by one (and Alice or Bob can send which is the case to the other party with a constant number of bits). Our discussion proves the following corollaries, given \cref{thm:comm-lb}.\n\n\n\begin{corollary}\nAny deterministic (traditional) independence-query algorithm solving matroid intersection requires $\Omega(n \log n)$ queries.\n\end{corollary}\n\n\begin{corollary}\nAny deterministic dynamic-rank-query algorithm solving matroid intersection requires $\Omega(n \log n)$ queries.\n\end{corollary}\n\n\begin{remark}\nWe note that our lower bounds are also valid for the \emph{matroid union} problem, due to the standard reductions\footnote{See \cref{sec:reduction} for a reduction from matroid union to matroid intersection. To reduce from matroid intersection to matroid union, consider $\cM = \cM_1 \vee \cM_2^{*}$, where $\cM_2^{*}$ is the \emph{dual matroid} of $\cM_2$ ($S \subseteq U$ is independent in $\cM_2^{*}$ if and only if $U - S$ contains a basis).
It is easy to show that the basis $B$ of $\cM$ in $\cM_1$ will be of the form $B = S \cup (U \setminus R)$, where $S$ is the solution to the intersection between $\cM_1$ and $\cM_2$ and $R$ is an arbitrary basis of $\cM_2$ that contains $S$.} between matroid intersection and union.\n\end{remark}\n\n\subsection{Communication Setting}\n\nWe study the following communication game\nwhich we call \textsf{Matroid-Intersection-with-Candidate}.\nAlice and Bob are given matroids $\cM_1 = (U,\cI_1)$ and $\cM_2 = (U,\cI_2)$, respectively.\nSuppose they are also both given a common independent set $S\in \cI_1\cap \cI_2$, and they wish to determine\nwhether $S$ is a maximum-cardinality common independent\nset.\nClearly \n\textsf{Matroid-Intersection-with-Candidate} is an easier version of\nthe matroid intersection problem, as Alice and Bob can just ignore the candidate $S$.\n\nOur idea is that in order to solve\n\textsf{Matroid-Intersection-with-Candidate}, Alice and Bob need to determine if there exists an \emph{augmenting path}---that is, an $(s,t)$-path---in the \emph{exchange graph} $G(S)$ (see \cref{def:exchange-graph} and \cref{lemma:augmenting-path}). It is known that \textsf{$(s,t)$-connectivity} in a\ngraph requires $\Omega(n\log n)$ bits of communication (\cref{lem:lb-conn}, \cite{HajnalMT88}).
Using \\emph{strict gammoids} as our matroids, we argue that we can choose exactly how the underlying exchange graph looks like, and hence that matroid intersection admits the same lower bound.\n\n\\begin{definition}[Strict Gammoid, see~\\cite{perfect1968applications,mason1972class}]\nLet $H = (V,E)$ be a directed graph and $X\\subseteq V$ a subset of vertices.\nThen $(H,X)$ defines a matroid $\\cM = (V,\\cI)$ called a \\emph{strict gammoid}, where a set of vertices $Y\\subseteq V$ is independent if and only if\n there exists a set of vertex-disjoint directed paths (some of which might just consist of single vertices) in $H$ whose starting points all belong to $X$ and whose ending points are exactly $Y$.\n\\end{definition}\n\n\\begin{claim}\n\\label{clm:lb-matroids}\nSuppose $G=(L,R,E)$ is a directed bipartite graph and $a, b \\in R$ are two unique vertices such that $a$ has zero in-degree and $b$ has zero out-degree.\nThen there exist two matroids $\\cM_1, \\cM_2$ over the ground set $L\\cup R$ such that\n$L$ is independent in both matroids and\nthe \\emph{exchange graph} $G(L)$ is exactly $G$ plus two extra vertices ($s$ and $t$) and two extra edges ($s\\to a$ and $b \\to t$).\n\\end{claim}\n\n\\begin{proof}\nLet $F_{1} = \\{(u,v) | (u,v)\\in E, u\\in L, v\\in R\\}$ be the directed edges from $L$ to $R$ in $G$,\nand\n$F_{2} = \\{(u,v) | (v,u)\\in E, u\\in L, v\\in R\\}$ be the (reversed) directed edges from $R$ to $L$ in $G$.\nAlso let $H_1 = (L\\cup R,F_1)$ and $H_2 = (L\\cup R,F_2)$ be the directed graphs with these edges respectively.\n\nWe let $\\cM_1$, respectively $\\cM_2$, be the strict gammoids defined by $(H_1, L+a)$ respectively $(H_2, L+b)$. Now $L$ is independent in both matroids. It is straightforward to verify that the exchange graph $G(L)$ is exactly as described in the claim. 
We verify that this is the case for the edges defined by $\cM_1$ ($\cM_2$ is similar):\n\n\n\begin{enumerate}\n \item $G(L)$ will have an edge from $s$ to $a$, since\n $L+a$ is independent in $\cM_1$. \n Additionally note that $a$ has in-degree zero in $G$ (and hence is an isolated vertex in $H_1$).\n \n \item\n For any $x\in L, y\in R$, the edge $(x,y)$ exists in $G(L)$\n if and only if $L - x + y$ is independent in $\cM_1$. By definition this is if and only if there exists a set of vertex-disjoint paths starting from $L$ and ending at $L-x+y$ in $H_1$, or equivalently if the edge $(x,y)$ exists in $H_1$ (indeed, all vertices in $L-x$ must be both starts and ends of paths, so the path to $y$ must have started in $x$).\n \qedhere{}\n \n\end{enumerate}\n\n\end{proof}\n\nWe now proceed to reduce an instance of \textsf{$(s,t)$-connectivity} to that of \textsf{Matroid-Intersection-with-Candidate}, which concludes the proof of \cref{thm:comm-lb}.\n\n\begin{definition}[\textsf{$(s,t)$-connectivity}]\nSuppose $G = (V, E_A \cup E_B)$ is an undirected graph on $n=|V|$ vertices,\nwhere Alice knows edges $E_A$ and Bob knows edges $E_B$.\nThey are also both given vertices $s$ and $t$, and want to determine\nif $s$ and $t$ are connected in $G$.\n\label{def:st-conn}\n\end{definition}\n\n\begin{lemma}[\cite{HajnalMT88}]\n\label{lem:lb-conn}\nThe deterministic communication complexity of\n\textsf{$(s,t)$-connectivity} is $\Omega(n\log n)$.\n\end{lemma}\n\n\begin{proof}[Proof of \cref{thm:comm-lb}.]\nWe show that an instance of \textsf{$(s,t)$-connectivity} can be converted to an instance\nof \textsf{Matroid-Intersection-with-Candidate} of roughly the same size.\nSuppose the symbols are defined as in \cref{def:st-conn}.\nLet $\bar{V} = \{\bar{v} : v\in V\}$ be a copy of $V$.\nWe construct a directed bipartite graph $G' = (V, \bar{V}, E'_A \cup E'_B)$\nas follows:\n\n\begin{itemize}\n \item $(v,\bar{v}) \in E'_A$ for all $v\in V$.\n 
\\item $(\\bar{v},v) \\in E'_B$ for all $v\\in V$.\n \\item $(v,\\bar{u}), (u,\\bar{v}) \\in E'_A$ for all $\\{u,v\\}\\in E_A$.\n \\item $(\\bar{v},u), (\\bar{u},v) \\in E'_B$ for all $\\{u,v\\}\\in E_B$.\n \\item No other edges exist.\n\\end{itemize}\n\nAlice knows $E'_A$, and Bob knows $E'_B$. $G'$ has $2n$ vertices and $2n+2m$ edges, where $m = |E_A| + |E_B|$.\n\nNow let $G''$ be $G'$ with all incoming edges of $\\bar{s}$ and\nall outgoing edges of $\\bar{t}$ removed, so that we can apply \\cref{clm:lb-matroids} on $G''$ with $a = \\bar{s}$ and $b = \\bar{t}$.\nSay we get matroids $\\cM_1$ and $\\cM_2$. Note that\nAlice knows $\\cM_1$ and Bob knows $\\cM_2$ by construction.\n\nNow $s$ and $t$ are connected in $G$ if and only if there is a directed $(\\bar{s},\\bar{t})$-path in $G''$. This happens if and only if $V$ is not a maximum-cardinality common independent set of $\\cM_1$ and $\\cM_2$ (i.e.\\ exactly when we can find an augmenting path for $V$).\n\nHence if there is a (deterministic) communication protocol for matroid intersection using $c$ bits of communication, there is also one for \\textsf{$(s,t)$-connectivity} using $O(c)$ bits of communication.
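As a sanity check of the construction above, the following Python sketch (with our own illustrative identifiers; it plays no role in the communication protocol itself) builds $G''$ from an \textsf{$(s,t)$-connectivity} instance and tests directed reachability from $\bar{s}$ to $\bar{t}$:

```python
def build_reduction(n, E_A, E_B, s, t):
    # Vertices of G': v is ('v', i) and its copy v_bar is ('b', i).
    EA = [(('v', v), ('b', v)) for v in range(n)]        # (v, v_bar) in E'_A
    EB = [(('b', v), ('v', v)) for v in range(n)]        # (v_bar, v) in E'_B
    for u, v in E_A:                                     # Alice's undirected edges
        EA += [(('v', v), ('b', u)), (('v', u), ('b', v))]
    for u, v in E_B:                                     # Bob's undirected edges
        EB += [(('b', v), ('v', u)), (('b', u), ('v', v))]
    # G'' removes the in-edges of s_bar and the out-edges of t_bar.
    s_bar, t_bar = ('b', s), ('b', t)
    return [(x, y) for (x, y) in EA + EB if y != s_bar and x != t_bar]

def reachable(edges, src, dst):
    # Plain iterative DFS for directed reachability.
    adj = {}
    for x, y in edges:
        adj.setdefault(x, []).append(y)
    stack, seen = [src], {src}
    while stack:
        x = stack.pop()
        if x == dst:
            return True
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return False

# s-t connectivity in G  <=>  a directed (s_bar, t_bar)-path in G''.
edges = build_reduction(3, E_A=[(0, 1)], E_B=[(1, 2)], s=0, t=2)
print(reachable(edges, ('b', 0), ('b', 2)))  # True: 0 and 2 are connected
```

The directed path found here, $\bar{s} \to s \to \bar{u} \to v \to \cdots \to \bar{t}$, alternates between Alice's and Bob's edges, mirroring how the augmenting path alternates between exchanges of $\cM_1$ and $\cM_2$.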
\\cref{lem:lb-conn} then implies the\n$\\Omega(n\\log n)$ communication lower bound for matroid intersection.\n\\end{proof}\n\n\n\\section{Matroid Intersection} \\label{sec:matroid-intersection}\n\nIn this section, we present a matroid intersection algorithm in the dynamic-rank-oracle model that matches the state-of-the-art algorithm~\\cite{chakrabarty2019faster} in the traditional model.\n\n\\begin{restatable}{theorem}{matroidintersection}\n For two matroids $\\cM_1 = (U, \\cI_1)$ and $\\cM_2 = (U, \\cI_2)$, it takes $\\tO(n\\sqrt{r})$ time to obtain the largest $S \\in \\cI_1 \\cap \\cI_2$ in the dynamic-rank-oracle model.\n \\label{thm:dynamic-matroid-intersection-rank-main}\n\\end{restatable}\n\nThe algorithm follows the blocking-flow framework of \\cite{chakrabarty2019faster}, which is similar to the Hopcroft-Karp algorithm for bipartite matching~\\cite{HopcroftK73} and goes as follows.\nInitially, they start with $S = \\emptyset$.\n\n\\begin{enumerate}\n \\item\\label{item:step1} First, they obtain a common independent set of size at least $(1 - \\epsilon)r$ by eliminating all augmenting paths of length $O(1\/\\epsilon)$.
In each of the $O(1\/\\epsilon)$ iterations, they first compute the distance layers of $G(S)$, along which they find a maximal set of compatible shortest augmenting paths using an approach similar to a depth-first search from $s$.\n Whenever an element has no out-edge to the next layer with respect to the current common independent set, they argue that it can be safely removed as it will no longer be on a shortest augmenting path in this iteration.\n Augmenting along these augmenting paths increases the $(s, t)$-distance of $G(S)$ by at least one.\n \\item\\label{item:step2} With the current solution, which is only an $\\epsilon$ fraction away from being optimal, they find the remaining $O(\\epsilon r)$ augmenting paths one at a time.\n\\end{enumerate}\n\nChoosing $\\epsilon$ to balance the costs of the two steps (in this case $\\epsilon = 1\/\\sqrt{r}$) results in their algorithm.\n\n\\subsection{Building Distance Layers}\n\nBuilding distance layers and finding a single augmenting path in Step~\\ref{item:step2} are immediate by replacing the binary searches in \\cite[Algorithm 4]{chakrabarty2019faster} with the binary search trees of \\cref{thm:bst}.\n\n\\begin{lemma}\n It takes $\\tO(n)$ time to compute the $(s, u)$-distance for each $u \\in U$ and find the shortest $(s, t)$-path in $G(S)$ or determine that $t$ is unreachable from $s$.\n \\label{lemma:bfs}\n\\end{lemma}\n\n\\begin{proof}\n First, we build two binary search trees via \\cref{thm:bst} with $\\beta = 1$: a circuit binary search tree ${\\mathcal{T}}_1 := \\textsc{Initialize}(\\cM_1, S, X_1)$ where $X_1 = S$ for the first matroid, and a co-circuit binary search tree ${\\mathcal{T}}_2 := \\textsc{Initialize}(\\cM_2, S, X_2)$ where $X_2 = \\bar{S}$ for the second matroid.
Initializing these takes $\\tO(n)$ time.\n These two binary search trees allow us to explore the exchange graph efficiently.\n \n Then we run the usual BFS algorithm from the source $s$ (or equivalently, from all $u \\in \\bar{S}$ with $S + u \\in \\cI_1$).\n For each visited element $u$, if $u \\in S$, then we repeatedly find $x \\in X_2$ such that $S - u + x \\in \\cI_2$ using ${\\mathcal{T}}_2.\\textsc{Find}(u)$, mark $x$ as visited, and remove $x$ from $X_2$ via ${\\mathcal{T}}_2.\\textsc{Delete}(x)$ (until $\\bot$ is returned).\n Similarly, for $u \\in \\bar{S}$, we find $x \\in X_1$ with $S - x + u \\in \\cI_1$, mark $x$ as visited, and remove $x$ from $X_1$ using ${\\mathcal{T}}_1$.\n This explores all the unvisited out-neighbors of $u$ in $G(S)$.\n Since each element will be visited at most once, the total running time is $\\tO(n)$.\n\\end{proof}\n\n\\subsection{Blocking Flow}\n\nIn this subsection, we prove the following lemma regarding a single phase of blocking-flow computation.\n\n\\begin{lemma}\n Given an $S \\in \\cI_1 \\cap \\cI_2$ with $d_{G(S)}(s, t) = d$, it takes $\\tO\\left(n + \\frac{n\\sqrt{r}}{d} + \\frac{(|S^\\prime| - |S|) \\cdot nd}{\\sqrt{r}}\\right)$ time to obtain an $S^\\prime \\in \\cI_1 \\cap \\cI_2$ with $d_{G(S^\\prime)}(s, t) > d_{G(S)}(s, t)$.\n \\label{lemma:blocking-flow}\n\\end{lemma}\n\nBefore proceeding to prove \\cref{lemma:blocking-flow}, we first use it to finish our matroid intersection algorithm.\nLike the Hopcroft-Karp bipartite matching algorithm \\cite{HopcroftK73} and the matroid intersection algorithm of \\cite{chakrabarty2019faster}, we run several iterations of blocking-flow, and then keep augmenting until we reach the optimal solution.\n\n\\begin{proof}[Proof of \\cref{thm:dynamic-matroid-intersection-rank-main}]\n Starting from $S = \\emptyset$, we run the blocking-flow algorithm until $d_{G(S)}(s,t) \\geq \\sqrt{r}$.\n This, by \\cref{lemma:blocking-flow}, takes\n \\begin{equation}\n \\tO\\left(n\\sqrt{r}\\right) +
\\tO\\left(\\sum_{d = 1}^{\\sqrt{r}}\\frac{n\\sqrt{r}}{d}\\right) + \\tO\\left(\\frac{n}{\\sqrt{r}}\\cdot\\left(\\sum_{d = 1}^{\\sqrt{r}}{d \\cdot (|S_d| - |S_{d - 1}|)}\\right)\\right)\n \\label{eq:timebound}\n \\end{equation}\n time, where $S_d$ denotes the common independent set we have after augmenting along paths of length $d$.\n Observe that $\\sum_{d = 1}^{\\sqrt{r}}{d \\cdot (|S_d| - |S_{d - 1}|)}$ is the sum of lengths of the augmenting paths that we use, and thus the third term in \\cref{eq:timebound} is $\\tO(\\frac{n}{\\sqrt{r}} \\cdot r) = \\tO(n\\sqrt{r})$ by \\cref{lemma:augmenting-path-lengths}.\n The second term also sums up to $\\tO(n\\sqrt{r})$ (by a harmonic sum), and therefore the total running time of the blocking-flow phases is $\\tO(n\\sqrt{r})$.\n The current common independent set $S$ has size at least $r - O(\\sqrt{r})$ by \\cref{lemma:approx}, and thus finding the remaining $O(\\sqrt{r})$ augmenting paths one at a time takes a total running time of $\\tO(n\\sqrt{r})$ via \\cref{lemma:bfs}.\n This concludes the proof of \\cref{thm:dynamic-matroid-intersection-rank-main}.\n\\end{proof}\n\n\nThe rest of the section is devoted to proving \\cref{lemma:blocking-flow}.\nOur blocking-flow algorithm is a slight modification of \\cite[Algorithm 5]{chakrabarty2019faster}, as shown in \\cref{alg:blocking-flow}.\nIt takes advantage of the data structure of \\cref{thm:bst} to explore an out-edge from the current element $a_{\\ell}$ to $A_{\\ell + 1}$---the set of ``alive'' elements in the next layer---while (approximately) keeping track of the current common independent set $S$.\nAn element $u$ is ``alive'' if it has not been included in the augmenting set $\\Pi := (D_1, \\ldots, D_{d_t - 1})$ yet, nor has the algorithm determined that there cannot be any shortest augmenting path through $u$.\n\n\\begin{algorithm}[!ht]\n \\SetEndCharOfAlgoLine{}\n \\SetKwInput{KwData}{Input}\n \\SetKwInput{KwResult}{Output}\n \n \\caption{Blocking flow}\n \\label{alg:blocking-flow}\n
\\KwData{$S \\in \\cI_1 \\cap \\cI_2$}\n \\KwResult{$S^\\prime \\in \\cI_1 \\cap \\cI_2$ with $d_{G(S^\\prime)}(s, t) > d_{G(S)}(s, t)$}\n \n Build the distance layers $L_1, \\ldots, L_{d_t - 1}$ of $G(S)$ with \\cref{lemma:bfs}\\;\n $L_0 \\gets \\{s\\}$ and $L_{d_t} \\gets \\{t\\}$\\;\n $A_\\ell \\gets L_{\\ell}$ for each $0 \\leq \\ell \\leq d_t$\\;\n $\\cT_{\\ell} \\gets \\textsc{Initialize}(\\cM_{1}, S, Q_S, L_{\\ell})$ for each \\emph{odd} $1 \\leq \\ell \\leq d_t$ by \\cref{thm:bst} with $\\beta = \\sqrt{r} \/ d$\\;\n $\\cT_{\\ell} \\gets \\textsc{Initialize}(\\cM_{2}, S, Q_S, L_{\\ell})$ for each \\emph{even} $1 \\leq \\ell \\leq d_t$ by \\cref{thm:bst} with $\\beta = \\sqrt{r} \/ d$\\;\n $\\ell \\gets 0$, $a_0 \\gets s$, and $D_{\\ell} \\gets \\emptyset$ for each $1 \\leq \\ell < d_t$\\;\n \\While{$\\ell \\geq 0$} {\n \\If{$\\ell < d_t$} {\n \\lIf{$A_{\\ell} = \\emptyset$} {\n \\textbf{break}\n }\n $a_{\\ell + 1} \\gets \\cT_{\\ell + 1}.\\textsc{Find}(a_\\ell)$\\;\\label{line:search}\n \\If{$a_{\\ell + 1} = \\bot$} {\n $\\cT_{\\ell}.\\textsc{Delete}(a_{\\ell})$\\;\n $A_\\ell \\gets A_\\ell - a_\\ell$ and $\\ell \\gets \\ell - 1$\\;\n }\n \\Else {\n $\\ell \\gets \\ell + 1$\n }\n }\n \\Else {\n \\tcp{Found augmenting path $a_1, a_2, \\ldots a_\\ell$}\n \\For{$i \\in \\{1, 2, \\ldots, d_t - 1\\}$} {\n $D_{i} \\gets D_{i} + a_{i}$ and $A_{i} \\gets A_{i} - a_{i}$\\;\n $\\cT_{i}.\\textsc{Delete}(a_i)$ and $\\cT_{i}.\\textsc{Update}(\\{a_{i - 1}, a_{i}\\})$\\;\\label{line:update}\n }\n $\\ell \\gets 0$\\;\n }\n }\n \\textbf{return} $S^\\prime := S \\oplus \\Pi$, where $\\Pi := (D_1, D_2, \\ldots, D_{d_t - 1})$\\;\n\\end{algorithm}\n\n\n\nWe emphasize that the difference between \\cref{alg:blocking-flow} and \\cite[Algorithm 5]{chakrabarty2019faster} is exactly in the replacement of binary searches with the data structure of \\cref{thm:bst}.\nNote that indeed by the specification stated in \\cref{thm:bst}, the binary search trees let us explore edges in the 
exchange graph (see \\cref{remark:bst}).\nAs a result, our proof will focus on showing that such a replacement does not affect the correctness.\nFor this, we need the concept of \\emph{augmenting sets} (see \\cref{def:augmenting-sets}), which characterizes a collection of ``mutually compatible'' augmenting paths---i.e.\\ a ``blocking flow''.\nThe structural results in \\cref{sec:prelim} culminate in the following lemma that is key to the correctness of our algorithm.\nIt captures when we can safely ``remove'' an element because there will be no augmenting path through it in the future.\nThis is in particular required for us (as opposed to the simpler argument used in \\cite{chakrabarty2019faster}) because the set $S$ is not fully updated after each augmentation (at least in the binary search trees that we use to explore the exchange graphs).\n\n\\begin{lemma}\n Let $\\Pi \\subseteq \\Pi^\\prime$ be augmenting sets in $G(S)$ with distance layers $L_1, \\ldots, L_{d_t - 1}$ where $d_{G(S)}(s, t) = d_t$.\n For $x \\in L_\\ell$, if there is no $y \\in L_{\\ell + 1}$ such that\n \\begin{equation}\n (S \\oplus D_{\\ell} \\oplus D_{\\ell + 1}) \\oplus \\{x, y\\} \\in \\cI,\\;\\text{where $\\cI := \\cI_1$ if $\\ell$ is even and $\\cI := \\cI_2$ if $\\ell$ is odd},\n \\label{eq:condition}\n \\end{equation}\n then there is no augmenting path of length $d_t$ through $x$ in $G(S \\oplus \\Pi^\\prime)$.\n \\label{lemma:key}\n\\end{lemma}\n\n\\begin{proof}\n We claim that there is no augmenting path of length $d_t$ through $x$ in $G(S \\oplus \\Pi)$: If there were such a path $P$, then we could put $P$ into $\\Pi$ and get an augmenting set $\\tilde{\\Pi} := (\\tilde{D}_1, \\ldots, \\tilde{D}_{d_t - 1})$ by \\cref{claim:put-aug-path-in-aug-set}.\n By definition of the augmenting set, this means that there is a $y \\in \\tilde{D}_{\\ell + 1} \\setminus D_{\\ell + 1}$ satisfying \\eqref{eq:condition}, a contradiction to our assumption.\n The lemma now follows from \\cref{claim:no-path}.
\n\\end{proof}\n\nWe are now ready to prove \\Cref{lemma:blocking-flow}.\n\n\\begin{proof}[Proof of \\cref{lemma:blocking-flow}]\n We analyze the running time of \\cref{alg:blocking-flow} first.\n Similar to \\cite[Lemma 15]{chakrabarty2019faster}, in each iteration, we use $\\cT_\\ell.\\textsc{Find}(\\cdot)$ to find an out-edge of $a_{\\ell}$, taking $\\tO(\\beta) = \\tO(\\sqrt{r}\/d)$ time by \\cref{thm:bst}.\n In each iteration, we either increase $\\ell$ and extend the current path by a new element, decrease $\\ell$ and remove one element, or find an $(s, t)$-path (and then remove everything in it), and each element can participate in each of these events at most once.\n Thus, there are only $O(n)$ iterations, and the total cost of $\\cT_{\\ell}.\\textsc{Find}(\\cdot)$ is consequently $\\tO(\\frac{n\\sqrt{r}}{d})$ by our choice of $\\beta$.\n For each augmenting path, $\\cT_{\\ell}.\\textsc{Update}(\\cdot)$ takes $\\tO\\left(\\frac{|L_\\ell| \\cdot d}{\\sqrt{r}}\\right)$ time, contributing a total running time of $\\tO\\left(\\frac{(|S^\\prime| - |S|) \\cdot nd}{\\sqrt{r}}\\right)$ since the $L_\\ell$'s are disjoint.\n \n We then argue the correctness of the algorithm.\n Observe that at any point in time, $\\cT_\\ell$ is a data structure capable of finding a replacement element with respect to the independent set $S \\oplus D_{\\ell - 1} \\oplus D_\\ell$, due to the updates that we gave it.\n This means that the collection $\\Pi := (D_1, D_2, \\ldots, D_{d_t - 1})$ remains an augmenting set in $G(S)$ because $S \\oplus (D_{\\ell - 1} + a_{\\ell - 1}) \\oplus (D_{\\ell} + a_\\ell)$ is independent for each $\\ell$ whenever a path is found.\n As a result, when the algorithm terminates, $S^\\prime := S \\oplus \\Pi$ is indeed a common independent set as guaranteed by \\cref{lemma:aug-set}.\n \n It remains to show that $d_{G(S^\\prime)}(s, t) > d_{G(S)}(s, t)$ by arguing that for each $a_{\\ell}$ not in $\\Pi$ but removed from $A_{\\ell}$ at some time $\\tau$, there is no shortest augmenting path in $G(S^\\prime)$ that passes through $a_{\\ell}$.\n This is a direct consequence of \\cref{lemma:key} since $\\Pi^{(\\tau)}$, the augmenting set obtained at time $\\tau$, is contained in $\\Pi$.\n The fact that $\\cT_{\\ell + 1}^{(\\tau)}.\\textsc{Find}(a_{\\ell})$ returns nothing (equivalently, \\cref{eq:condition} is not satisfied) shows that $a_{\\ell}$ is not on any shortest augmenting path in $G(S^\\prime)$ since the set $X$ maintained in $\\cT_{\\ell + 1}^{(\\tau)}$ (see \\cref{thm:bst}) is $A_{\\ell + 1}$ at all times.\n We remark that $a_{\\ell}$ might have an out-edge (with respect to $S \\oplus D_{\\ell}^{(\\tau)} \\oplus D_{\\ell + 1}^{(\\tau)}$) to a removed element with distance $\\ell + 1$ from $s$ (not in $A_{\\ell + 1}$), but such an element, by induction, is not on any shortest augmenting path either.\n\\end{proof}\n\n\\section{Matroid Union} \\label{sec:matroid-union}\n\nIn this section, we present our improved algorithm for matroid union.\nOur main focus is on optimizing the $O(n\\sqrt{r})$ term to $O(r\\sqrt{r})$.\nThus, for simplicity of presentation, we will treat $k$ as a constant (the dependence on $k$ will be a small polynomial) and express our bounds using the $O_k(\\cdot)$ and $\\tO_k(\\cdot)$ notation.\n\n\\begin{theorem}\n In the dynamic-rank-oracle model, given $k$ matroids $\\cM_i = (U_i, \\cI_i)$ for $1 \\leq i \\leq k$, it takes $\\tO_k(n + r\\sqrt{r})$ time to find a basis $S \\subseteq U_1 \\cup \\cdots \\cup U_k$ of $\\cM = \\cM_1 \\vee \\cdots \\vee \\cM_k$ together with a partition $S_1, \\ldots, S_k$ of $S$ in which $S_i \\in \\cI_i$ for each $1 \\leq i \\leq k$.\n \\label{thm:dynamic-matroid-union}\n\\end{theorem}\n\nIn \\cref{appendix:matroid-union-fold}, we present an optimized (for the parameter $k$) version of the above algorithm which solves the important special case when all the $k$ matroids are the same---i.e.\\ \\emph{$k$-fold matroid union}---with applications in matroid packing problems.
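A canonical packing instance of this kind is finding edge-disjoint spanning trees, i.e.\ the $k$-fold union of a graphic matroid. As a toy illustration (a brute force with hypothetical helper names, exponential in general and unrelated to the efficient algorithms of this section), the following Python sketch certifies that $K_4$ packs two edge-disjoint spanning trees:

```python
from itertools import combinations

def is_spanning_tree(n, edges):
    # n - 1 edges connecting all of 0..n-1, checked via union-find.
    if len(edges) != n - 1:
        return False
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    merged = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            merged += 1
    return merged == n - 1

def two_disjoint_spanning_trees(n, edges):
    # Exponential brute force over edge subsets; toy instances only.
    for t1 in combinations(edges, n - 1):
        if not is_spanning_tree(n, list(t1)):
            continue
        rest = [e for e in edges if e not in t1]
        for t2 in combinations(rest, n - 1):
            if is_spanning_tree(n, list(t2)):
                return list(t1), list(t2)
    return None

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
t1, t2 = two_disjoint_spanning_trees(4, K4)
print(t1, t2)  # two edge-disjoint spanning trees of K_4
```

The algorithms below solve such instances in time polynomial in $r$ and $k$ rather than by enumeration.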
For example, the problem of finding $k$ disjoint spanning trees in a graph falls under this special case. In particular, in \\cref{appendix:matroid-union-fold}, we obtain \\cref{thm:dynamic-matroid-union-fold} below, and we discuss some immediate consequences for the \\emph{matroid packing}, \\emph{matroid covering}, and \\emph{$k$-disjoint spanning trees} problems in \\cref{sec:packing,sec:spanning}.\n\n\\begin{restatable}{theorem}{matroidunionfold}\n In the dynamic-rank-oracle model, given a matroid $\\cM = (U, \\cI)$ and an integer $k$, it takes $\\tO(n + kr\\sqrt{\\min(n, kr)} + k\\min(n, kr))$ time to find the largest $S \\subseteq U$ and a partition $S_1, \\ldots, S_k$ of $S$ in which $S_i \\in \\cI$ for each $1 \\leq i \\leq k$.\n \\label{thm:dynamic-matroid-union-fold}\n\\end{restatable}\n\n\n\n\nThe rest of this section will focus on the proof of \n\\cref{thm:dynamic-matroid-union} (again, where the number of matroids $k$ is treated as a constant).\nOur algorithm is based on the matroid intersection algorithm in \\cref{sec:matroid-intersection}, in which we identify and optimize several components that lead to the improved time bound.\n\n\\subsection{Reduction to Matroid Intersection} \\label{sec:reduction}\n\nFor completeness, we provide a standard reduction from matroid union to matroid intersection.\nFor an in-depth discussion, see \\cite[Chapter 42]{schrijver2003}.\nLet $\\cM_i = (U_i, \\cI_i)$ be the given $k$ matroids and $U = U_1 \\cup \\cdots \\cup U_k$ be the ground set of the matroid union $\\cM = \\cM_1 \\vee \\cdots \\vee \\cM_k$.\nWe first relabel each element in the matroids with an identifier of its matroid, resulting in $\\hat{\\cM}_i = (\\hat{U}_i, \\cI_i)$, where $\\hat{U}_i = \\{(u, i) \\mid u \\in U_i\\}$.\nLet $\\hat{\\cM} = (\\hat{U}, \\hat{\\cI}) = \\hat{\\cM}_1 \\vee \\cdots \\vee \\hat{\\cM}_k$ be the union of the relabeled matroids over the ground set $\\hat{U} = \\hat{U}_1 \\sqcup \\cdots \\sqcup \\hat{U}_k$.\n\nIn other words, in $\\hat{\\cM}$, we duplicate each
element that is shared among multiple matroids into copies that are considered different, effectively making the ground sets of the $k$ matroids disjoint.\nAfter this modification, an independent set in $\\hat{\\cM}$ is now simply the union of $k$ independent sets, one from each matroid.\nHowever, that might not be what we want since these independent sets may overlap, i.e., contain copies that correspond to the same element. \nWe therefore intersect $\\hat{\\cM}$ with a partition matroid $\\cM_{\\text{part}} = (\\hat{U}, \\cI_{\\text{part}})$ given by\n\\[ \\cI_{\\text{part}} = \\{S \\subseteq \\hat{U} \\mid \\left|S \\cap \\{(u, i) \\mid 1 \\leq i \\leq k,\\, u \\in U_i\\}\\right| \\leq 1\\;\\text{holds for each $u \\in U$} \\} \\]\nto restrict different copies of the same element to be chosen at most once.\nThe matroid union problem is thus reducible to the matroid intersection problem in the sense that the common independent sets of $\\hat{\\cM}$ and $\\cM_{\\text{part}}$ map exactly to the independent sets of the matroid union $\\cM$.\n\nNotation-wise, given the above mapping between the two worlds, whenever we write $S \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$, a subset of $\\hat{U}$, we will equivalently regard $S$ as a subset of $U$ with an implicit partition $S_1, \\ldots, S_k$ where $S_i \\in \\cI_i$.\n\n\n\\subsection{Specialized Matroid Intersection Algorithm}\n\nGiven the reduction, to prove \\cref{thm:dynamic-matroid-union}, it suffices to compute the intersection of $\\hat{\\cM}$ and $\\cM_{\\text{part}}$ in the claimed time bound.\nIn the following, we will set $\\cM_1$ to be $\\cM_{\\text{part}}$ and $\\cM_2$ to be $\\hat{\\cM}$ when talking about exchange graphs and other data structures.\nOur main goal is to optimize the $O(n\\sqrt{r})$ term to $O(r\\sqrt{r})$, so it might be more intuitive to think of $r \\ll n$.\nWe first show that for an $S \\in \\cI_\\text{part} \\cap \\hat{\\cI}$, the exchange graph $G(S)$ is quite unbalanced in
the sense that most elements appear in the first distance layer.\nIn fact, the first distance layer of $G(S)$ contains all duplicates of elements $u$ in $U$ that do not appear in $S$.\nThis is by definition of $G(S)$ and the fact that $\\cM_1 = \\cM_{\\text{part}}$ is the partition matroid.\nIn the following, when the context is clear, we let $d_t$ denote the $(s, t)$-distance of $G(S)$ and $L_1, \\ldots, L_{d_t - 1}$ denote the distance layers.\n\n\\begin{fact}\n It holds that $L_1 = \\{(u, i) \\mid (u, i) \\in \\hat{U}\\;\\text{and}\\;(u, j) \\not\\in S\\;\\text{for any $1 \\leq j \\leq k$}\\}$.\n \\label{fact:large-first-layer}\n\\end{fact}\n\nSimilarly, the odd layers of $G(S)$ (which correspond to $\\bar{S}$) are well-structured in the sense that they consist of elements one of whose duplicates appears in $S$.\nBy definition of $G(S)$, we also know that elements in odd layers have only a single in-edge, which is from their corresponding duplicate in $S$.\nAll duplicates of the same element thus have the same distance from $s$.\n\n\\begin{fact}\n It holds that $L_3 \\cup L_5 \\cup \\cdots \\cup L_{d_t - 1} = \\{(u, i) \\mid (u, i) \\in \\hat{U}\\;\\text{and}\\;(u, j) \\in S\\;\\text{for some}\\;i \\neq j\\}$,\n and for each $(u, i) \\in L_3 \\cup \\cdots \\cup L_{d_t - 1}$, we have $d_{G(S)}(s, (u, i)) = d_{G(S)}(s, (u, j)) + 1$ where $(u, j) \\in S$.\n \\label{fact:structured-odd-layers}\n\\end{fact}\n\n\n\\paragraph{Union Exchange Graph.} Given the above facts, we introduce another notion of exchange graphs which is commonly used for matroid union (see, e.g., \\cite{edmonds1968matroid,cunningham1986improved}).\nFor the given $k$ matroids $\\cM_i = (U_i, \\cI_i)$ and a subset $S \\subseteq U$ that can be partitioned into $k$ independent sets $S_1, \\ldots, S_k$ with $S_i \\in \\cI_i$, the \\emph{union exchange graph} is a directed graph $H(S) = (U \\cup \\{s, t\\}, E)$ with two distinguished vertices $s, t \\not\\in U$ and edge set $E = E_s \\cup E_t \\cup
E_{\\text{ex}}$, where\n\n\\begin{align*}\n E_s &= \\{(s, u) \\mid u \\not\\in S\\}, \\\\\n E_t &= \\{(u, t) \\mid S_i + u \\in \\cI_i\\;\\text{for some $1 \\leq i \\leq k$}\\},\\;\\text{and} \\\\\n E_{\\text{ex}} &= \\{(u, v) \\mid S_i - v + u \\in \\cI_i\\;\\text{where $v \\in S_i$}\\}.\n\\end{align*}\n\nWe can see that the exchange graph $G(S)$ with respect to $S \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ (as a subset of $\\hat{U}$) and the union exchange graph $H(S)$ with respect to $S \\subseteq U$ are essentially the same, in the sense that $H(S)$ can be obtained from $G(S)$ by contracting all copies of the same element in the first layer and skipping all other odd layers.\nIn particular, for each $(u, i) \\in S$, in $G(S)$, there might be a direct edge from $(u, i)$ to $(u, j)$ and an edge from $(u, j)$ to $(v, j)$, where $(v, j) \\in S_j$ and $S_j - v + u \\in \\cI_j$.\nCorrespondingly, in $H(S)$, we skip the intermediate vertex $(u, j)$ and merge the above two edges into one direct edge from $u \\in S_i$ to $v \\in S_j$.\nWe also merge all edges from $s$ to some $(u, i)$ of the same $u$ in the first layer into a single edge from $s$ to $u$ (\\cref{fact:large-first-layer}).\nThis simplification does not impact the distance layers of $H(S)$ since all such $(u, j)$ have the same distance from $s$ (\\cref{fact:structured-odd-layers}).\n\n\nFrom now on, for simplicity, our algorithms will run on the union exchange graphs $H(S)$, i.e., we will perform blocking-flow computations and augment $S$ along paths in $H(S)$.\nOn the other hand, to not repeat and specialize all the lemmas to the case of union exchange graphs, proofs and correctness will be argued implicitly from the perspective of the exchange graph $G(S)$ for matroid intersection.\nFor instance, for $P = (s, a_1, \\ldots, a_{d_t - 1}, t)$ a shortest $(s, t)$-path in $H(S)$, ``augmenting $S$ along $P$'' means moving $a_i$ to the independent set that originally contains $a_{i + 1}$ for each $i \\geq 1$, and
thus effectively enlarging the size of $S$ by one via putting $a_1$ in it.\\footnote{One can show that the matroid union $\\cM$ is a matroid~\\cite[Chapter 42]{schrijver2003}. As such, a basis can be obtained by trying to include each element into $S$. From the union exchange graph perspective, the independence test of $S + x$ corresponds to asking whether ``there is a path in $H(S)$ from $x$ to $t$''.}\nOne can verify that this is indeed what happens if we map $P$ back to a path $P^\\prime$ in $G(S)$, and then perform the augmentation of $S$ (as a subset of $\\hat{U}$) along $P^\\prime$.\n\nOur main idea to speed up the matroid union algorithm to $\\tO_k(r\\sqrt r)$ (instead of $\\tO_k(n \\sqrt{r})$) is to ``sparsify'' the first layer of $H(S)$ by only considering a subset of elements contained in some basis. We formalize this in the following \\cref{lemma:bfs-union,lemma:blocking-flow-union} together with \\cref{alg:bfs-union,alg:blocking-flow-union}.\n\n\n\\begin{lemma}\n Given $S \\in \\cI_\\text{part} \\cap \\hat{\\cI}$ and $k$ bases $\\{B_i\\}_{i = 1}^{k}$, where $B_i$ is a basis of $U_i \\setminus S$ in $\\cM_i$, it takes $\\tO_k(r)$ time to construct the distance layers $L_2, \\ldots, L_{d_t - 1}$ of $H(S)$.\n \\label{lemma:bfs-union}\n\\end{lemma}\n\nNote that we know exactly what elements are in the first distance layer, so computing $L_2, \\ldots, L_{d_t - 1}$ suffices.\n\n\\begin{algorithm}\n \\SetEndCharOfAlgoLine{}\n \\SetKwInput{KwData}{Input}\n \\SetKwInput{KwResult}{Output}\n \n \\caption{BFS in a union exchange graph}\n \\label{alg:bfs-union}\n \\KwData{$S \\subseteq U$ which partitions into independent sets $S_1, \\ldots, S_k$, and $k$ bases $\\{B_i\\}_{i = 1}^{k}$ of $U_i \\setminus S$}\n \\KwResult{The $(s, u)$-distance $d(u)$ in $H(S)$ for each $u \\in S \\cup \\{t\\}$}\n\n $\\mathsf{queue} \\gets B_1 \\cup \\cdots \\cup B_k$\\;\n $d(u) \\gets \\infty$ for each $u \\in S \\cup \\{t\\}$, and $d(u) \\gets 1$ for each $u \\in B_1 \\cup \\cdots \\cup B_k$\\;\n $\\cT_i \\gets
\\textsc{Initialize}(\\cM_i, S_i, S_i)$ (\\cref{thm:bst} with $\\beta = 1$)\\;\n \\While{$\\mathsf{queue} \\neq \\emptyset$} {\n $u \\gets \\mathsf{queue}.\\textsc{Pop}()$\\;\n \\For{$i \\in \\{1, 2, \\ldots, k\\}$ where $u \\in U_i$ and $u \\not\\in S_i$} {\n \\While{$v := \\cT_i.\\textsc{Find}(u) \\neq \\bot$} {\n $d(v) \\gets d(u) + 1$ and $\\mathsf{queue}.\\textsc{Push}(v)$\\;\n $\\cT_i.\\textsc{Delete}(v)$\\;\n }\n \\lIf{$S_i + u \\in \\cI_i$ and $d(t) = \\infty$} {\n $d(t) \\gets d(u) + 1$\n }\n }\n }\n \\textbf{return} $d(u)$ for each $u \\in S \\cup \\{t\\}$\\;\n\\end{algorithm}\n\n\\begin{proof}\n The algorithm is presented as \\cref{alg:bfs-union}, and it is essentially a breadth-first-search (BFS) starting from $B_1 \\cup \\cdots \\cup B_k$ instead of $s$.\n Out-edges in $H(S)$ are explored via $k$ binary search trees $\\cT_1, \\cT_2, \\ldots, \\cT_k$ of \\cref{thm:bst}, one for each matroid $\\cM_i$ and independent set $S_i$.\n We first analyze the running time.\n Building the $\\cT_i$'s takes a total of $\\tO(|S|) = \\tO_k(r)$ time.\n Exploring the graph takes $\\tO(|S \\cup B_1 \\cup \\cdots \\cup B_k| \\cdot k) = \\tO_k(r)$ time in total since each element in $S$ is found at most once by $\\cT_i.\\textsc{Find}(\\cdot)$ because the $S_i$'s are disjoint, and we also spend $O(k)$ time for each element in $S \\cup B_1 \\cup \\cdots \\cup B_k$ iterating over the trees $\\cT_i$.\n \n It remains to show that starting from $B_1 \\cup \\cdots \\cup B_k$ instead of $U \\setminus S$ does not affect the correctness of the BFS.\n For this, it suffices to show that we successfully compute $d(u)$ for all $u \\in S$ with distance $2$ from $s$.\n By definition, $u \\in S_i$ is of distance $2$ from $s$ if and only if there exists an $x \\in U_i \\setminus S$ such that $S_i - u + x \\in \\cI_i$.\n This is equivalent to $\\mathsf{rank}_i(S_i - u + (U_i \\setminus S)) > \\mathsf{rank}_i(S_i)$ by \\cref{obs:exchange}.\n But then by \\cref{lemma:basis-rank}, we have $\\mathsf{rank}_i(S_i - u
+ (U_i \\setminus S)) = \\mathsf{rank}_i(S_i - u + B_i)$, and so such an $x$ exists in $B_i$ as well.\n This concludes the proof of \\cref{lemma:bfs-union}.\n\\end{proof}\n\n\n\\begin{lemma}\n Given an $S \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ with $d_{H(S)}(s, t) = d_t$ together with data structures $\\cD_i$ of \\cref{thm:decremental-basis} that maintain a basis of $U_i \\setminus S$ for each $1 \\leq i \\leq k$, it takes $\\tO_k(r + \\frac{r\\sqrt{r}}{d_t} + (|S^\\prime| - |S|) \\cdot d_t\\sqrt{r})$ time to obtain an $S^\\prime \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ with $d_{H(S^\\prime)}(s, t) > d_t$, with an additional guarantee that $\\cD_i$ now maintains a basis of $U_i \\setminus S^\\prime$ for each $1 \\leq i \\leq k$.\n \\label{lemma:blocking-flow-union}\n\\end{lemma}\n\n\\begin{algorithm}\n \\SetEndCharOfAlgoLine{}\n \\SetKwInput{KwData}{Input}\n \\SetKwInput{KwResult}{Output}\n \\SetKwInput{KwGuarantee}{Guarantee}\n \n \\caption{Blocking flow in a union exchange graph}\n \\label{alg:blocking-flow-union}\n \\KwData{$S \\subseteq U$ which partitions into independent sets $S_1, \\ldots, S_k$, and a dynamic-basis data structure $\\cD_i$ of $U_i \\setminus S$ for each $1 \\leq i \\leq k$}\n \\KwResult{$S^\\prime \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ with $d_{H(S^\\prime)}(s, t) > d_{H(S)}(s, t)$}\n \\KwGuarantee{$\\cD_i$ maintains a basis of $U_i \\setminus S^\\prime$ at the end of the algorithm for each $1 \\leq i \\leq k$}\n \n Build the distance layers $L_2, \\ldots, L_{d_t - 1}$ of $H(S)$ with \\cref{lemma:bfs-union}\\;\n $L_0 \\gets \\{s\\}$ and $L_{d_t} \\gets \\{t\\}$\\;\n $B_i \\gets$ the basis maintained by $\\cD_i$ and $L_1 \\gets B_1 \\cup \\cdots \\cup B_k$\\;\n $A_\\ell \\gets L_\\ell$ for each $0 \\leq \\ell \\leq d_t$\\;\n $\\cT_{\\ell}^{(i)} \\gets \\textsc{Initialize}(\\cM_i, S_i, Q_{S_i}, A_{\\ell} \\cap S_i)$ for each $2 \\leq \\ell < d_t$ and $1 \\leq i \\leq k$ (\\cref{thm:bst} with $\\beta = \\sqrt{r} \/ d_t$)\\;\n
$D_{\\ell} \\gets \\emptyset$ for each $1 \\leq \\ell < d_t$\\;\n $\\ell \\gets 0$ and $a_0 \\gets s$\\;\n \\While{$\\ell \\geq 0$} {\n \\If{$\\ell < d_t$} {\n \\lIf{$A_{\\ell} = \\emptyset$} {\n \\textbf{break}\n }\n \\lIf{$\\ell = 0$} {\n $a_{\\ell + 1} \\gets$ an arbitrary element in $A_1$\n }\n \\lElse {\n Find an $a_{\\ell + 1} := \\cT_{\\ell + 1}^{(i)}.\\textsc{Find}(a_\\ell) \\neq \\bot$ for some $1 \\leq i \\leq k$\n }\n \\If{such an $a_{\\ell + 1}$ does not exist} {\n \\lIf{$\\ell \\geq 2$} {\n $\\cT_{\\ell}^{(j)}.\\textsc{Delete}(a_{\\ell})$ where $a_{\\ell} \\in S_j$\n }\n $A_\\ell \\gets A_\\ell - a_\\ell$ and $\\ell \\gets \\ell - 1$\\;\n }\n \\Else {\n $\\ell \\gets \\ell + 1$\n }\n }\n \\Else {\n \\tcp{Found augmenting path $a_1, a_2, \\ldots, a_\\ell$}\n $D_1 \\gets D_1 + a_1$ and $A_1 \\gets A_1 - a_1$\\;\n \\For{$i \\in \\{1, 2, \\ldots, k\\}$ where $a_1 \\in U_i$} {\n $B_i \\gets B_i - a_1$\\;\n \\If{$\\cD_i.\\textsc{Delete}(a_1)$ returns a replacement $x$} {\\label{line:delete}\n $B_i \\gets B_i + x$ and $A_1 \\gets A_1 \\cup \\{x\\}$\\;\n }\n }\n \\For{$i \\in \\{2, \\ldots, d_t - 1\\}$} {\n $D_{i} \\gets D_{i} + a_{i}$ and $A_{i} \\gets A_{i} - a_{i}$\\;\n $\\cT_{i}^{(j)}.\\textsc{Delete}(a_i)$ and $\\cT_{i}^{(j)}.\\textsc{Update}(\\{a_{i - 1}, a_{i}\\})$ where $a_i \\in S_j$\\;\n }\n Augment $S$ along $P = (s, a_1, \\ldots, a_{d_t - 1}, t)$\\;\n $\\ell \\gets 0$\\;\n }\n }\n \\textbf{return} $S$\\;\n\\end{algorithm}\n\n\\begin{proof}[Proof of \\Cref{lemma:blocking-flow-union}]\n Our blocking-flow algorithm for matroid union is presented as \\cref{alg:blocking-flow-union}.\n Since it is equivalent to \\cref{alg:blocking-flow} running on $G(S)$ except that the first layer $L_1 := B_1 \\cup \\cdots \\cup B_k$ is now only a subset (which is updated after each augmentation) of $U \\setminus S$, we skip most parts of the proof and focus on discussing this difference.\n That is, we need to show that if $A_1$ becomes empty, then there is no augmenting
path of length $d_t$ in $H(S^\\prime)$ anymore.\n Given how $A_1$ and the $B_i$'s are maintained and \\cref{lemma:key} (note that the set $X$ maintained in $\\cT_{\\ell}^{(i)}$ is always $A_\\ell \\cap S_i$ with respect to the current $S_i$ and thus it lets us explore out-edges to $A_{\\ell} \\cap S_i$ satisfying \\cref{eq:condition}), $A_1$ is always the subset of $B_1 \\cup \\cdots \\cup B_k$ consisting of elements that still potentially admit an augmenting path of length $d_t$ in $H(S^\\prime)$ through them.\n That means if $A_1 = \\emptyset$, then there is no augmenting path of length $d_t$ in $H(S^\\prime)$ that starts from some $b \\in B_1 \\cup \\cdots \\cup B_k$.\n This would imply that there is no such path even if we start from $x \\in (U_i \\setminus S) \\setminus D_1$ as $B_i$ is a basis of it: if $S_i + x - y \\in \\cI$ for some $x \\in (U_i \\setminus S) \\setminus D_1$ and $y \\in S_i$, then there is a $b \\in B_i$ with $S_i + b - y \\in \\cI$, and thus a path starting from $x$ can be converted into a path starting from $b$.\n On the other hand, no element of $D_1$ is on such a path by \\cref{lemma:monotone} either.\n This shows that indeed $d_{H(S^\\prime)}(s, t) > d_{H(S)}(s, t)$.\n \n The guarantee that $\\cD_i$ now operates on $U_i \\setminus S^\\prime$ is clear: Augmenting along $P = (s, a_1, \\ldots, a_{d_t - 1}, t)$ corresponds to adding $a_1$ into $S$, and since we call $\\cD_i.\\textsc{Delete}(a_1)$ in Line~\\ref{line:delete} after each such augmentation, $\\cD_i$ indeed stays up-to-date. 
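The decremental-basis interface assumed of $\cD_i$ here (a deletion that reports a replacement element, as used in Line~\ref{line:delete}) can be illustrated by a deliberately naive sketch against a black-box rank oracle. This is only an illustration of the interface and not the data structure of \cref{thm:decremental-basis}, which supports deletions in $\tO(\sqrt{r})$ rank queries rather than with the linear scan below; the names `NaiveDecrementalBasis` and `greedy_basis` are ours.

```python
# Illustrative only: a naive decremental basis over a matroid given by a
# black-box rank oracle.  Deleting a basis element scans the remaining
# ground set for a replacement, mirroring the Delete-with-replacement
# interface assumed of the data structures D_i (the actual data structure
# achieves ~O(sqrt(r)) rank queries per deletion instead of this scan).

def greedy_basis(ground, rank):
    """Greedily build a basis of `ground` using the rank oracle."""
    basis = []
    for u in ground:
        if rank(basis + [u]) > rank(basis):
            basis.append(u)
    return basis

class NaiveDecrementalBasis:
    def __init__(self, ground, rank):
        self.rank = rank
        self.ground = set(ground)
        self.basis = set(greedy_basis(sorted(ground), rank))

    def delete(self, u):
        """Remove u from the ground set; if u was in the maintained basis,
        return a replacement element restoring a basis (None if none exists)."""
        self.ground.discard(u)
        if u not in self.basis:
            return None
        self.basis.discard(u)
        rest = sorted(self.basis)
        for x in sorted(self.ground - self.basis):
            if self.rank(rest + [x]) > self.rank(rest):
                self.basis.add(x)
                return x
        return None
```

For instance, with a partition-matroid rank function (at most one element per color class), deleting a basis element of some color returns another element of that color whenever one remains.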
\n \n It remains to analyze the running time of \\cref{alg:blocking-flow-union}.\n Computing distance layers with \\cref{lemma:bfs-union} takes $\\tO_k(r)$ time.\n The number of elements that have ever been in some $A_i$ is $O_k(r + |S^\\prime| - |S|)$ since\n (i) $L_2 \\cup \\cdots \\cup L_{d_t - 1}$ has size $O_k(r)$,\n (ii) the initial basis $B_i$ of $U_i \\setminus S$ for each $1 \\leq i \\leq k$ has total size $O_k(r)$, and\n (iii) each of the $|S^\\prime| - |S|$ augmentations adds at most $O_k(1)$ elements to $A_1$.\n Similar to \\cref{lemma:blocking-flow}, this means that there are at most $O_k(r)$ iterations, each taking $O_k(\\frac{\\sqrt{r}}{d_t})$ time in $\\cT_{\\ell + 1}^{(i)}.\\textsc{Find}(\\cdot)$ with our choice of $\\beta$.\n The algorithm finds $|S^\\prime| - |S|$ augmenting paths, taking $\\tO_k(d_t\\sqrt{r} \\cdot (|S^\\prime| - |S|))$ time in total to update the binary search trees.\n Also, for each such augmentation, we need $\\tO_k(\\sqrt{r})$ time to update the basis $B_i$ for all $1 \\leq i \\leq k$, which is subsumed by the cost of updating $\\cT_{i}^{(j)}$.\n These components sum up to the total running time of\n \\[ \\tO_k\\left(r + \\frac{r\\sqrt{r}}{d_t} + \\left(|S^\\prime| - |S|\\right) \\cdot d_t\\sqrt{r}\\right). 
\\]\n\\end{proof}\n\n\\cref{thm:dynamic-matroid-union} now follows easily.\n\n\\begin{proof}[Proof of \\cref{thm:dynamic-matroid-union}]\n We initialize the dynamic-basis data structure $\\cD_i$ of \\cref{thm:decremental-basis} on $U_i$ for each of the matroids $\\cM_i$.\n We then run \\cref{lemma:blocking-flow-union} for at most $\\sqrt{r}$ iterations with $\\{\\cD_i\\}_{i = 1}^{k}$ until $d_{H(S)}(s, t) \\geq \\sqrt{r}$ and get an $S \\in \\cI_{\\text{part}} \\cap \\hat{\\cI}$ with $\\cD_i$ now operating on $U_i \\setminus S$ for each $1 \\leq i \\leq k$.\n This takes\n \\[ \\tO_k\\left(r\\sqrt{r} + \\sum_{d = 1}^{\\sqrt{r}}\\frac{r\\sqrt{r}}{d} + \\sum_{d = 1}^{\\sqrt{r}}d \\cdot \\left(|S_d| - |S_{d - 1}|\\right)\\right) = \\tO_k(r\\sqrt{r})\\]\n time.\n By \\cref{lemma:approx}, $S$ is $O_k(\\sqrt{r})$ steps away from being optimal, and thus we find the remaining augmenting paths one at a time using \\cref{lemma:bfs-union} in $\\tO_k(r\\sqrt{r})$ time in total.\n Note that since a single augmentation corresponds to adding an element to $S$ (hence removing it from $U \\setminus S$), we can maintain the basis of $U_i \\setminus S$ that \\cref{lemma:bfs-union} needs in $\\tO_k(\\sqrt{r} \\cdot \\sqrt{r})$ total update time, which is subsumed by other parts of the algorithm.\n\\end{proof}\n\n\\subsection{Matroid Packing and Covering}\n\\label{sec:packing}\n\n\nA direct consequence of our matroid union algorithm (\\cref{thm:dynamic-matroid-union-fold} in particular) is that we can solve the following packing and covering problems efficiently.\nAs a reminder, the exact dependence on $k$ of our algorithm is $\\tO(n + kr\\sqrt{\\min(n, kr)} + k \\min(n, kr))$ by \\cref{thm:dynamic-matroid-union-fold}.\n\n\\begin{restatable}[Packing]{corollary}{packingalgo}\n For a matroid $\\cM = (U, \\cI)$, it takes $\\tO(n\\sqrt{n} + \\frac{n^2}{r})$ time to find the largest integer $k$ and a collection of disjoint subsets $\\cS = \\{S_1, S_2, \\ldots, S_k\\}$ of $U$ such that $S_i$ is 
a basis for each $1 \\leq i \\leq k$ under the dynamic-rank-query model.\n \\label{cor:packing}\n\\end{restatable}\n\n\\begin{proof}\n Clearly, $k \\leq \\frac{n}{r}$ holds.\n We binary search for $k$ in the range $[0, \\frac{n}{r}]$, and for each $k$, we can determine the largest subset $S$ of $U$ which can be partitioned into $k$ disjoint independent sets by \\cref{thm:dynamic-matroid-union-fold}.\n If $|S| = kr$, then there are at least $k$ disjoint bases.\n Otherwise, there are fewer than $k$ disjoint bases.\n The running time is $\\tO(n\\sqrt{n} + \\frac{n^2}{r})$.\n\\end{proof}\n\n\n\\begin{restatable}[Covering]{corollary}{coveringalgo}\n For a matroid $\\cM = (U, \\cI)$, it takes $\\tO(\\alpha r\\sqrt{n} + \\alpha n)$ time to find the smallest integer $\\alpha$ and a partition $\\cS = \\{S_1, S_2, \\ldots, S_\\alpha\\}$ of $U$ such that $S_i \\in \\cI$ holds for each $1 \\leq i \\leq \\alpha$ under the dynamic-rank-query model.\n \\label{cor:covering}\n\\end{restatable}\n\n\n\\begin{proof}\n We first obtain a $2$-approximation $\\alpha^{\\prime}$ of $\\alpha$ (i.e., $\\alpha \\leq \\alpha^\\prime \\leq 2\\alpha$) by enumerating powers of $2$, running \\cref{thm:dynamic-matroid-union-fold} with $k = 2^i$, and checking if the returned $S$ has size $n$: If $|S| = n$, then we know $2^i$ independent sets suffice to cover $U$.\n Note that the enumeration stops as soon as we find a suitable value of $\\alpha^\\prime$.\n The exact value of $\\alpha$ can then be found by a binary search in $[\\frac{\\alpha^\\prime}{2}, \\alpha^\\prime]$.\n This takes $\\tO(\\alpha r\\sqrt{n} + \\alpha n)$ time (note that $\\alpha r \\geq n$ must hold).\n\\end{proof}\n\n\\subsection{Application: Spanning Tree Packing}\n\\label{sec:spanning}\n\n\nWe demonstrate the applicability of our techniques by deriving an $\\tO(|E| + (k|V|)^{3\/2})$ algorithm for the $k$ disjoint spanning tree problem in a black-box manner.\nThis improves Gabow's specialized 
$\\tO(k^{3\/2}|V|\\sqrt{|E|})$ algorithm~\\cite{gabow1988forests}.\nSince all applications of our algorithms follow the same reduction, we only go through it once here.\nRefer to \\cref{appendix:applications} for other applications of both our matroid union and matroid intersection algorithms.\n\n\\begin{theorem}\n Given an undirected graph $G = (V, E)$, it takes $\\tO(|E| + (k|V|)^{3\/2})$ time to find $k$ edge-disjoint spanning trees in $G$ or determine that such spanning trees do not exist with high probability\\footnote{We use \\emph{with high probability} to denote with probability at least $1 - |V|^{-c}$ for an arbitrarily large constant $c$.}.\n \\label{thm:k-dst}\n\\end{theorem}\n\n\\begin{proof}\nBy \\cref{thm:dynamic-matroid-union-fold}, it suffices to provide a data structure that supports the three dynamic-oracle operations (\\cref{def:dyn-oracle}) in $\\mathrm{polylog}(|V|)$ time.\nOur black-box reduction makes use of the worst-case connectivity data structure of \\cite{KapronKM13,GibbKKT15}, which can be adapted to maintain the rank of a set of edges with $O(\\mathrm{polylog}(|V|))$ update time (see \\cref{appendix:applications-graphic} for a discussion on how this can be done).\n\n\n Let $\\cM_G$ be the graphic matroid with respect to $G = (V, E)$.\n $G$ admits $k$ edge-disjoint spanning trees if and only if $\\cM_G$ admits $k$ disjoint bases.\n The theorem now follows from \\cref{thm:dynamic-matroid-union-fold} with $n = |E|$ and $r = |V| - 1$ since \\cref{thm:dynamic-matroid-union-fold} returns a union of $k$ disjoint bases if they exist (we note that $k \\le |E|\/(|V|-1) \\le O(|V|)$, and hence the $O(k^2 r)$ term is dominated by the $O((kr)^{3\/2})$ term).\n\\end{proof}\n\n\\section{Open Problems} \n\\label{sec:openproblems}\n\nOur dynamic-oracle model opens up a new path to achieve fast algorithms for many problems at once, where the ultimate goal is to achieve near-linear time and dynamic-rank-query complexities. 
This would imply near-linear time algorithms for many fundamental problems. We envision reaching this goal via a research program where the studies of algorithms and lower bounds in our and the traditional models \ncomplement each other. \nIn particular, a major open problem is to improve our algorithms further, which would imply improved algorithms for many problems simultaneously.\nA major step towards this goal is improved algorithms in the traditional model, which would already be a breakthrough. Moreover, failed lower bound attempts might lead to new algorithmic insights and vice versa; \nwe leave improving our lower bounds as another major open problem. \nWe believe that the communication complexity of graph and matroid problems is an important component in this study since it plays a main role in our lower bound argument. Recently, the communication and some query complexities of bipartite matching and related problems were resolved in \\cite{blikstadBEMN22}. What are the communication and query complexities of dynamic-oracle matroid problems and their special cases such as colorful spanning trees?\nIt is also fruitful to resolve some special cases, as the solutions may shed more light on how to solve matroid problems in our model. Below are some examples.\n\n\\begin{itemize}\n \\item \\textbf{Disjoint Spanning Trees.} Can we find $k$ edge-disjoint spanning trees in an undirected graph in near-linear time for constant $k$, or even do so for the case of $k = 2$ (which already has an application in the Shannon Switching Game)? Our new $\\tO(|E|+|V|\\sqrt{|V|})$-time algorithm shows that it is possible for sufficiently dense graphs. For the closely related problem of finding $k$ edge-disjoint arborescences (rooted \\emph{directed} spanning trees) in a directed graph, the case of $k = 2$ has long been settled by Tarjan's linear-time algorithm~\\cite{Tarjan76}, and the case of constant $k$ has also been resolved by~\\cite{BhalgatHKP08}. 
It is a very interesting question whether the directed case is actually computationally easier than the undirected one.\n\n \\item \\textbf{Colorful Spanning Tree.} This problem generalizes the maximum bipartite matching problem, among others. Given the recent advances in max-flow algorithms, which are heavily based on continuous optimization techniques, bipartite matching can now be solved in almost-linear time~\\cite{ChenKLPGS22} in general and nearly-linear time for dense input graphs \\cite{BrandLNPSSSW20}. \n It is unclear whether continuous optimization can be used for colorful spanning tree since its linear program has exponentially many constraints. \n This reflects the general challenge of using continuous optimization to solve matroid problems and many of their special cases. \n Thus, improving Hopcroft-Karp's $O(|E|\\sqrt{|V|})$ runtime \\cite{HopcroftK73} (which is matched by our dynamic-oracle matroid algorithm) may shed some light on either how to use continuous optimization for these problems or how combinatorial algorithms can break this runtime barrier for colorful spanning tree, bipartite matching, and matroid problems. \n\n\\end{itemize}\n\n\\paragraph{Other Problems with Dynamic Oracles.} It also makes sense to define dynamic oracles for problems like submodular function minimization (SFM), which asks to find the minimizer of a submodular function given an evaluation oracle. In this regime, similar to matroid intersection, we want to limit the symmetric difference between the current evaluation query and the previous ones. \nWe believe that the recent algorithms for submodular function minimization based on convex optimization and cutting-plane methods, particularly the work of \\cite{LeeSW15,JiangLSW20}, can be adapted to the dynamic-oracle setting. \nHowever, \nwe are not aware of any applications of these dynamic-oracle algorithms. \nThe first step is thus improving the best bounds in the traditional oracle model. 
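To make the dynamic-oracle bookkeeping concrete, one can simulate it on top of any static set function by charging each new query set for its symmetric difference from a previously built one, in the spirit of \cref{def:dyn-oracle}. The sketch below is our own illustration (the class and its method names are hypothetical, taken neither from this paper nor from any library):

```python
# Hypothetical simulator of a dynamic set oracle: every query set must be
# derived from an already-constructed one by element insertions/deletions,
# and the accumulated cost counts these unit changes (the quantity that a
# dynamic-oracle complexity measures).  The function `f` may be a matroid
# rank function, an SFM evaluation oracle, or any other set function.

class DynamicOracleSimulator:
    def __init__(self, f):
        self.f = f
        self.sets = [frozenset()]   # query set 0 is the empty set
        self.cost = 0               # total number of unit updates paid

    def extend(self, i, add=(), remove=()):
        """Build a new query set from set i; pay |add| + |remove| updates."""
        s = (set(self.sets[i]) | set(add)) - set(remove)
        self.cost += len(add) + len(remove)
        self.sets.append(frozenset(s))
        return len(self.sets) - 1

    def query(self, i):
        """Evaluate the underlying function on query set i."""
        return self.f(self.sets[i])
```

Under this accounting, a query far from every previously built set is expensive, which is why our algorithms pre-build families of nearby query sets (e.g., in the binary search trees of our matroid intersection algorithm).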
\nThe special case of the cut-query setting \\cite{RubinsteinSW18,MukhopadhyayN20,LeeSZ21,LeeLSZ21,ApersEGLMN22} is also very interesting; we leave getting algorithms for min $(s,t)$-cut \\cite{ChenKLPGS22} and directed global mincut \\cite{Cen0NPSQ21} with near-linear time and dynamic-query complexity as major open problems.\\footnote{Adapting the cut-query algorithm of \\cite{MukhopadhyayN20} to work with dynamic cut oracles and, even better, with a parallel algorithm \\cite{AndersonB21,Lopez-MartinezM21}, is also open; though, we suspect that these are not hard.}\nAnother interesting direction is the quantum setting. For example, can one define the notion of dynamic quantum cut query so that the quantum cut-query algorithm of \\cite{ApersEGLMN22} can imply a non-trivial quantum global mincut algorithm?\n\n\n\\paragraph{Improved Lower Bounds.} \nObtaining improved lower bounds for matroid intersection is also an important open problem. Getting an $\\Omega(n\\log n)$ lower bound for traditional rank-query matroid intersection algorithms is particularly interesting since it would subsume our $\\Omega(n\\log n)$ lower bounds (a traditional rank-query lower bound implies independence-query and dynamic-rank-query lower bounds) and the $\\Omega(n \\log n)$ SFM lower bound of \\cite{ChakrabartyGJS22}. \nFor the latter, \\cite{ChakrabartyGJS22} showed an $\\Omega(n\\log{n})$ lower bound for SFM against \\emph{strongly}-polynomial time algorithms. Since SFM generalizes matroid intersection in the traditional rank-oracle model (i.e., a rank query of a matroid corresponds to an evaluation of the submodular function), getting the same lower bound for traditional rank-query matroid intersection algorithms would further strengthen the result of \\cite{ChakrabartyGJS22} to hold against \\emph{weakly}-polynomial time algorithms. \n\nAdditionally, achieving a {\\em truly} super-linear lower bound (i.e. 
an $n^{1+\\Omega(1)}$ bound) for any of the above problems is extremely interesting. \n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Technical Overview of Algorithms}\n\\label{sec:overview}\n\n\n\\subsection{The Blocking-Flow Framework}\nIn this section, we give a high-level overview of our algorithms.\nWe will focus on the dynamic-rank-oracle model (\\cref{def:dyn-oracle}), and sketch how to efficiently implement the ``blocking flow''\\footnote{Similar to Hopcroft-Karp's \\cite{HopcroftK73} bipartite matching and Dinic's \\cite{dinic1970algorithm} maximum flow algorithms.} matroid intersection algorithms of \\cite{GabowS85,cunningham1986improved,chakrabarty2019faster,nguyen2019note} in this model.\nAs such, we briefly recap how the $\\tO(n\\sqrt{r})$ rank-query (in the traditional oracle model) algorithm of \\cite{chakrabarty2019faster} works first, and then explain how to implement their framework in the new dynamic oracle model with the same cost.\n\nTheir algorithm, like most matroid intersection algorithms, is based on repeatedly finding \\emph{augmenting paths} in \\emph{exchange graphs} (see \\cref{sec:prelim} for a definition). Say we have already found some common independent set $S\\in \\cI_1\\cap \\cI_2$ (we start with $S = \\emptyset$). 
Then the \\emph{exchange graph} $G(S)$ is a directed bipartite graph in which finding an $(s,t)$-path exactly corresponds to increasing the size of $S$ by one.\nAccording to Cunningham's blocking-flow argument \\cite{cunningham1986improved}, if we always augment along the \\emph{shortest} augmenting path, the lengths of such augmenting paths are non-decreasing.\nMoreover, if the length of the shortest augmenting path in $G(S)$ is at least $1\/\\epsilon$, then the size of the current common independent set $S$ must be at least $\\left(1 - O(\\epsilon)\\right) r$ (i.e.\\ it is only $O(\\epsilon r)$ away from optimal).\nThus, the ``blocking flow''-style algorithms consist of two stages:\n\\begin{enumerate}\n\\item\nIn the first stage, they obtain a $(1 - \\epsilon)$-approximate solution by finding augmenting paths until their lengths become more than $O(1\/\\epsilon)$. This is done by running in phases, where in phase~$i$ they eliminate all augmenting paths of length $2i$ by finding a so-called ``blocking flow''---a maximal (not necessarily maximum) collection of compatible augmenting paths. Each such phase can be implemented using only $\\tO(n)$ rank queries, as shown in \\cite{chakrabarty2019faster}. This means that the first stage needs a total of $\\tO(n\/\\epsilon)$ rank queries (in the classic non-dynamic model).\n\n\\item\nIn the second stage, they find the remaining $O(\\epsilon r)$ augmenting paths one at a time. Each such augmentation can be found in $\\tO(n)$ rank queries, for a total of $\\tO(\\epsilon nr)$ queries for this stage.\n\\end{enumerate}\nUsing $\\epsilon = 1\/\\sqrt{r}$, \\cite{chakrabarty2019faster} obtains their $\\tO(n \\sqrt{r})$ rank-query exact matroid intersection algorithm.\nThe crux of how to implement the stages efficiently is a binary search trick to explore useful edges of the exchange graph quickly (e.g., to implement a breadth-first search on the graph). 
The exchange graph can have up to $\\Theta(nr)$ edges in total, but it is not necessary to find all of them. We will argue that this binary search trick (which issues queries far away from each other) can still be implemented in the \\emph{dynamic-oracle model}, with the use of some data structures.\n\n\n\\subsection{Matroid Intersection}\n\n\\paragraph{Binary Search Tree.}\nThe crux of why a breadth-first search (BFS) and augmenting path searching can be implemented efficiently (in terms of the number of queries) in \\cite{chakrabarty2019faster} is that they show how to, for $S \\in \\cI$, $u \\in S$, and $X \\subseteq \\bar{S}$, discover an element $x \\in X$ with $(S \\setminus \\{u\\}) \\cup \\{x\\} \\in \\cI$ in $O(\\log{n})$ rank queries using binary search (such a pair $(u,x)$ is called an \\emph{exchange pair}, and corresponds to an edge in the exchange graph).\nThe idea is that such an $x$ exists in $X$ if and only if $\\mathsf{rank}((S \\setminus \\{u\\}) \\cup X) \\geq |S|$.\nThus, we can do a binary search over $X$: we split $X$ into two equally-sized subsets $X_1$ and $X_2$, and check if such an $x$ exists in $X_1$ via the above inequality. If it does, then we recurse on $X_1$ to find $x$. Otherwise, such an $x$ must exist in $X_2$ (as it does in $X$), and so we recurse on $X_2$.\nTo make this process efficient in our new model, we pre-build a binary search tree over the elements of $X$, where the internal nodes contain all the query-sets we need. 
That is, in the root node we have the query-set for $S\\cup X$, and in its two children the query-sets for $S\\cup X_1$ and $S\\cup X_2$, respectively.\n\nUsing this binary tree, one can simulate the binary search process as described above.\nSince what we need to do in a BFS is to (i) find a replacement element $x$ and (ii) mark $x$ as visited (thus effectively ``deactivate'' $x$ in $X$), each time we see $x$, we just need to remove $x$ from the $O(\\log{n})$ nodes on a root-to-leaf path, and thus the whole BFS algorithm runs in near-linear time as well.\n\n\\paragraph{Batching, Periodic Rebuilding, and Augmenting Sets.}\nThe above binary search tree is efficient when the common independent set $S$ is static. However, once we find an augmenting path, we need to update $S$. This means that every node in the binary search tree needs to be updated.\nIf done naively, this would need at least $\\Omega(nr)$ time, as there are up to $r$ augmentations, and rebuilding the tree takes $O(n)$ time.\nTherefore, we employ a batching approach here.\nThat is, we do not walk through the nodes and update them immediately when we see an update to $S$.\nInstead, we batch $k$ updates (for $k$ to be decided later) and pay an additional $O(k)$-factor every time we want to do a query in our tree. In other words, at some point, we might want to search for exchanges for a common independent set $S'$ (by doing queries like $(S' \\setminus \\{u\\}) \\cup X$ to find edges incident to $u$). Our binary tree might only have an outdated version $S$ (i.e.\\ store sets like $S\\cup X$). 
Then the cost of converting $S\\cup X$ to $(S'\\setminus \\{u\\}) \\cup X$ is $|S\\oplus S'| + 1$, which we assert is less than $k$.\nWhen this number exceeds $k$, we rebuild the binary search tree completely using the up-to-date $S'$ instead, in $\\tO(n)$ time.\n\nOver the whole run of the algorithm, there are only $O(r \\log r)$ updates to our common independent set $S$ (see, e.g., \\cite{cunningham1986improved,nguyen2019note}).\nHence, the total running time becomes\n\\[ \\underbrace{\\tO(nk\\epsilon^{-1})}_{\\text{Blocking-flow for $\\tO(\\epsilon^{-1})$ iterations}} + \\underbrace{\\tO(\\epsilon rn)}_{\\text{Remaining $\\tO(\\epsilon r)$ augmenting paths}} + \\underbrace{\\tO(nrk^{-1})}_{\\text{Rebuilding binary search trees}}, \\]\nwhich is $\\tO(nr^{2\/3})$ for $k = r^{1\/3}$ and $\\epsilon = r^{-1\/3}$.\n\nTo achieve the $\\tO(n\\sqrt{r})$ bound in our dynamic-rank-oracle model, there is one additional observation we need.\nBy the ``Augmenting Sets'' argument~\\cite{chakrabarty2019faster}, for each element $v$ that we want to query our tree, it suffices to consider changes to $S$ that are in the same distance layer as $v$ is (in a single blocking-flow phase).\nSince changes to $S$ are uniformly distributed among layers, when the $(s, t)$-distance in $G(S)$ is $d$, we only need to spend an additional $O(\\frac{k}{d})$-factor (instead of an $O(k)$-factor) when querying the binary search tree.\nThis brings our complexity down to\n\\[ \\tO\\left(n\\epsilon^{-1} + \\sum_{d = 1}^{1\/\\epsilon}\\frac{nk}{d}\\right) + \\tO(\\epsilon rn) + \\tO(nrk^{-1}), \\]\nwhere the first part is a harmonic sum which evaluates to $\\tO(n\\epsilon^{-1} + nk)$, and the total running time is $\\tO(n\\sqrt{r})$ for $k = r^{1\/2}$ and $\\epsilon = r^{-1\/2}$.\n\n\\subsection{Matroid Union}\n\nFor simplicity of the presentation in this overview, let's assume we are solving the $k$-fold matroid union problem and that $k$---the number of bases\\footnote{As an example, consider the problem 
of finding $k$ disjoint spanning trees of a graph.} we want to find---is constant.\nA standard black-box reduction to matroid intersection, combined with our algorithm outlined above, immediately gives us an $\\tO(n\\sqrt{r})$ bound in the dynamic-rank-oracle model.\nNevertheless, we show how to exploit certain properties of matroid union to speed this up to $\\tO(n + r\\sqrt{r})$, i.e.\\ \\emph{near-linear time} for sufficiently ``dense''\\footnote{We call matroids with $n \\gg r$ ``dense'' by analogy to the graphic matroids where $n$ denotes the number of edges and $r$ the number of vertices.} matroids.\n\nSuppose $\\cM = (U,\\cI)$ is the matroid we want to find $k$ disjoint bases for. The standard reduction to matroid intersection is that we create $k$ copies of all elements $u\\in U$. Then we define two matroids as follows:\n\\begin{itemize}\n\\item The first matroid $\\cM_1$ says that we only want to use one version of each element $u\\in U$. We set $\\cM_1 = (U\\times \\{1,\\ldots,k\\}, \\cI_\\text{part})$ to be the partition matroid defined as $S\\in \\cI_{\\text{part}}$ if and only if $|\\{(u,i)\\in S : u = x\\}| \\le 1$ for all $x \\in U$.\n\\item The second matroid $\\cM_2$ says that for each copy of the ground set $U$ we must pick an independent set according to $\\cM$. 
That is, we set $\\cM_2 = (U\\times \\{1,\\ldots,k\\},\\hat{\\cI})$ to be the \\emph{disjoint} $k$-fold union of $\\cM$, i.e.\\ $S\\in \\hat{\\cI}$ if and only if $\\{u : (u,i)\\in S\\}$ is independent in $\\cM$ for all $i$.\n\\end{itemize}\nFor a set $S$ which can be partitioned into $k$ disjoint independent sets, notice that in the exchange graph, the number of elements \\emph{not} in the first layer is bounded by $O(r)$.\nThis is because every $u\\in U$ that is not represented in $S$ will be in the first layer $L_1$ of the BFS tree.\nAs such, we can build the BFS layers starting from the second layer if we can identify all the elements in this second layer.\nThis can be done by checking for each $y$ \\emph{not} in the first layer whether $L_1$ contains an exchange element of $y$ (via computing the rank of $(S \\setminus \\{y\\}) \\cup L_1$; no need to do a binary search).\nAlthough binary search is not needed when identifying elements in the second layer, when going backward among layers to find an augmenting path $P$, we still have to find the exact element in the first layer which can be the first element of $P$ since it will decide which augmenting paths remain ``compatible'' later.\nThis inspires us to maintain two separate binary search trees: one, of size $O(r)$, for finding edges from the second layer and onward, and the other, of size $O(n)$, for finding the first elements of the augmenting paths.\nStill, doing a binary search for each element in the first layer results in a total number of $O(r\\epsilon^{-1})$ queries to the binary search tree, which is too much.\nTo reduce the number of queries down to $\\tO(r)$, we note that only binary searches which correspond to the actual augmenting paths will succeed, i.e., reach the leaf nodes of the binary search tree.\nSince there are at most $O(r\/d)$ augmenting paths when the $(s, t)$-distance in $G(S)$ is $d$, we only need to do $O(r\/d)$ queries to the binary search tree; other queries can be blocked by first 
checking if their corresponding exchange elements exist in the first layer.\nThis results in a running time of $\\tO(n + r\\sqrt{n})$ (note: $\\sqrt{n}$ and not $\\sqrt{r}$), which already matches Gabow's algorithm for $k$-disjoint spanning tree~\\cite{gabow1988forests}. \n\n\\paragraph{Toward $\\tO(n + r\\sqrt{r})$ for Matroid Union.}\nThe bottleneck of the above algorithm is that we need to do binary searches over (and hence rebuild periodically) the tree data structure for the first layer (of size $\\Omega(n)$).\nIf we can reduce the size of this tree down to $O(r)$, then the running time would be $\\tO(n + r\\sqrt{r})$.\nThis suggests that we might want to somehow ``sparsify'' the first layer.\nIndeed, for a single augmenting path, we only need a basis of the first layer.\nAs a concrete example, consider the case of a graphic matroid: Given a forest $S$, an edge $e \\in S$, and the set of non-tree edges $E \\setminus S$, we want to find a ``replacement'' edge $e^\\prime$ in $E \\setminus S$ for $e$ which ``restores'' the connectivity of $S - e + e^\\prime$.\nIn this case, it suffices to only consider a spanning forest (i.e.\\ ``basis'') $B$ of $E \\setminus S$, in the sense that such a replacement edge exists in $E \\setminus S$ if and only if it exists in this spanning forest $B \\subseteq E\\setminus S$.\n\nMoreover, note that after each augmentation a single element will be removed from the first layer.\nThus, if we can maintain a decremental basis of the first layer, we can build our binary search tree data structure dynamically on top of this basis and get the desired time bound.\n\n\\paragraph{Maintaining a Basis in a Matroid.}\nOur data structure for maintaining a basis is inspired by the dynamic minimum spanning tree algorithm of \\cite{Frederickson85}, in combination with the sparsification trick of \\cite{EppsteinGIN97}. 
It uses $\\tO(n)$ time to initialize, and then $\\tO(\\sqrt{r})$ dynamic rank queries\\footnote{In the application of our matroid union algorithm, there will only be $\\tO(r)$ updates, so this is efficient enough for our final $\\tO(n+r\\sqrt{r})$ algorithm.} per deletion.\nIt also supports maintaining a min-weight basis.\n\n\nLet $L \\subseteq U$ with $|L| = O(n)$ be the first layer for which we want to maintain a dynamic basis.\nIn the preprocessing stage, we split $L$ into $\\sqrt{n}$ blocks $L_1, L_2, \\ldots, L_{\\sqrt{n}}$ of size roughly $\\sqrt{n}$ and compute the basis of $L$ from left to right.\nWe also build the ``prefix sums'' of these $\\sqrt{n}$ blocks so that we can quickly access\/query sets of the form $L_1 \\cup L_2 \\cup \\cdots \\cup L_k$ for all values of $k$.\nWhen we remove an element $x$ from $L_i$, we first update the prefix sums in $O(\\sqrt{n})$ time.\nIf $x$ is not in the basis we currently maintain, then nothing additional needs to be done.\nOtherwise, we have to find the ``first'' replacement element, which is guaranteed to be located in blocks $L_i, \\ldots, L_{\\sqrt{n}}$.\nThe block $L_j$ in which the replacement element lies can be identified simply by inspecting the ranks of the prefix sums, and after that, we then go through the elements in that block to find the exact element.\nNote that blocks after $L_j$ need not be updated, as for them it does not matter what basis we picked among blocks $L_1$ to $L_j$. \nThis gives us an $O(\\sqrt{n})$-update time algorithm for maintaining a basis of a matroid.\n\nTo get a complexity of $O(\\sqrt{r}\\log n)$, we show that a similar sparsification structure as that of \\cite{EppsteinGIN97} for dynamic graph algorithms also works for arbitrary matroids. The sparsification is a balanced binary tree over the $n$ elements, where in each node we have an instance of our (un-sparsified) underlying data structure to maintain a basis consisting of elements in the subtree rooted at the node. 
Only elements that are part of the basis of a node are propagated upwards to the parent node. This means that in each instance of our underlying data structure we work over a ground set of size at most $2r$.\nThus, each update corresponds to at most two updates (a single insertion and deletion) to at most $O(\\log n)$ nodes of the tree (the tree's height), each costing $O(\\sqrt{r})$ dynamic rank queries in order to maintain the basis at this node.\nThis results in the desired time bound.\n\n\n\n\\section{Preliminaries} \\label{sec:prelim}\n\n\n\\paragraph{Notation.}\nWe use standard set notation.\nIn addition to that, for two sets $X$ and $Y$, we use $X + Y$ to denote $X \\cup Y$ (when $X \\cap Y = \\emptyset$) and $X - Y$ to denote $X \\setminus Y$ (when $Y \\subseteq X$).\nFor an element $v$, $X + v$ and $X - v$ refer to $X + \\{v\\}$ and $X - \\{v\\}$, respectively.\nLet $X \\oplus Y$ denote the symmetric difference of $X$ and $Y$.\n\n\n\\paragraph{Matroid.}\nIn this paper, we use the standard notion of matroids, which is defined as follows.\n\n\\begin{definition}\nA \\emph{matroid} $\\cM = (U, \\cI)$ is defined by a tuple consisting of a finite \\emph{ground set} $U$ and a non-empty family of \\emph{independent sets} $\\cI \\subseteq 2^U$ such that the following properties hold.\n\\begin{itemize}\n \\item \\textbf{Downward closure:} If $S \\in \\cI$, then any subset $S^\\prime \\subseteq S$ is also in $\\cI$.\n \\item \\textbf{Exchange property:} For any two sets $S_1, S_2 \\in \\cI$ with $|S_1| < |S_2|$, there exists an $x \\in S_2 \\setminus S_1$ such that $S_1 + x \\in \\cI$.\n\\end{itemize}\n\\label{def:matroid}\n\\end{definition}\n\nLet $U$ be the ground set of a matroid $\\cM$.\nFor $S \\subseteq U$, let $\\bar{S}$ denote $U \\setminus S$.\nFor $X \\subseteq U$, the \\emph{rank} of $X$, denoted by $\\mathsf{rank}(X)$, is the size of the largest independent set contained in $X$, i.e., $\\mathsf{rank}(X) = \\max_{S \\in \\cI}|X \\cap S|$.\nThe 
\\emph{rank} of a matroid $\\cM = (U, \\cI)$ is the rank of $U$.\nWe let $r \\leq n$ denote the rank of the input matroids.\nWhen the input consists of more than one matroid (e.g., in the matroid union problem), let $\\mathsf{rank}_i$ denote the rank function of the $i^{\\scriptsize \\mbox{{\\rm th}}}$ matroid.\nA \\emph{basis} of $X$ is an independent set $S \\subseteq X$ with $|S| = \\mathsf{rank}(X)$.\nA basis of $\\cM$ is a basis of $U$.\nA \\emph{circuit} is a minimal dependent set.\nThe \\emph{span} of $X$ contains elements whose addition to $X$ does not increase its rank, i.e., $\\mathsf{span}(X) = \\{u \\in U \\mid \\mathsf{rank}(X \\cup \\{u\\}) = \\mathsf{rank}(X)\\}$.\n\n\\begin{fact}\n The rank function is submodular.\n That is, $\\mathsf{rank}(X) + \\mathsf{rank}(Y) \\geq \\mathsf{rank}(X \\cap Y) + \\mathsf{rank}(X \\cup Y)$ holds for each $X, Y \\subseteq U$.\n\\end{fact}\n\n\\begin{fact}[see, e.g., {\\cite[Lemma 1.3.6]{price2015}}]\n $\\mathsf{rank}(A) = \\mathsf{rank}(\\mathsf{span}(A))$ holds for every $A \\subseteq U$.\n \\label{fact:span-does-not-increase-rank}\n\\end{fact}\n\n\\begin{lemma}\n For two sets $X, Y$ and their bases $S_X, S_Y$, it holds that $\\mathsf{rank}(S_X + S_Y) = \\mathsf{rank}(X + Y)$.\n \\label{lemma:basis-rank}\n\\end{lemma}\n\\begin{proof}\n Since $X, Y \\subseteq \\mathsf{span}(S_X + S_Y)$, we have $S_X + S_Y \\subseteq X + Y \\subseteq \\mathsf{span}(S_X + S_Y)$.\n The lemma then follows from $\\mathsf{rank}(S_X + S_Y) = \\mathsf{rank}(\\mathsf{span}(S_X + S_Y))$ using \\cref{fact:span-does-not-increase-rank}.\n\\end{proof}\n\n\\paragraph{Exchange Graph.}\nOur algorithms for matroid intersection and union will be heavily based on finding augmenting paths in exchange graphs.\n\n\\begin{definition}[Exchange Graph]\nFor two matroids $\\cM_1 = (U, \\cI_1)$ and $\\cM_2 = (U, \\cI_2)$ over the same ground set and an $S \\in \\cI_1 \\cap \\cI_2$, the \\emph{exchange graph} with respect to $S$ is a directed bipartite 
graph $G(S) = (U \\cup \\{s, t\\}, E_S)$ with $s, t \\not\\in U$ being two distinguished vertices and $E_S = E_1 \\cup E_2 \\cup E_s \\cup E_t$, where\n\\begin{align*}\n E_1 &= \\{(x, y) \\mid x \\in S, y \\not\\in S,\\;\\text{and}\\;S - x + y \\in \\cI_1\\}, \\\\\n E_2 &= \\{(y, x) \\mid x \\in S, y \\not\\in S,\\;\\text{and}\\;S - x + y \\in \\cI_2\\}, \\\\\n E_s &= \\{(s, x) \\mid S + x \\in \\cI_1 \\},\\;\\text{and} \\\\\n E_t &= \\{(x, t) \\mid S + x \\in \\cI_2 \\}.\n\\end{align*}\n\\label{def:exchange-graph}\n\\end{definition}\n\nThe \\emph{distance layers} of $G(S)$ are the sets $L_1, \\ldots, L_{d_{G(S)}(s, t) - 1}$, where $L_\\ell$ consists of elements in $U$ that are of distance $\\ell$ from $s$ in $G(S)$.\nMost matroid intersection algorithms, including ours, are based on augmenting a common independent set with an augmenting path in $G(S)$ until such a path does not exist.\nThe following lemma certifies the correctness of this approach.\n\n\\begin{lemma}[Augmenting Path]\n Let $P$ be a shortest $(s, t)$-path\\footnote{In fact, $P$ only needs to be ``chordless''~\\cite{quadratic2021}, i.e., without shortcuts. 
Nonetheless, a shortest $(s, t)$-path suffices for our rank-query algorithms.} of $G(S)$.\n Then, the set $S^\\prime := S \\oplus (V(P) \\setminus \\{s, t\\})$ is a common independent set with $|S^\\prime| = |S| + 1$.\n On the other hand, if $t$ is unreachable from $s$ in $G(S)$, then $S$ is a largest common independent set.\n \\label{lemma:augmenting-path}\n\\end{lemma}\n\nWe write $S \\oplus P$, where $P$ is an augmenting path in $G(S)$, for the common independent set $S^\\prime := S \\oplus (V(P) \\setminus \\{s, t\\})$ obtained by augmenting $S$ along $P$.\nLet $d_{G(S)}(u, v)$ denote the $(u, v)$-distance in $G(S)$.\nWhen $S$ is clear from context, let $d_t$ denote $d_{G(S)}(s, t)$.\nThe following lemma states that if $d_{G(S)}(s, t)$ is large, then $S$ is close to being optimal.\n\n\\begin{lemma}[\\cite{cunningham1986improved}]\n If $S \\in \\cI_1 \\cap \\cI_2$ satisfies $d_{G(S)}(s, t) \\geq d$, then $|S|$ is at least $\\left(1 - O(\\frac{1}{d})\\right)r$.\n \\label{lemma:approx}\n\\end{lemma}\n\nThe following bound on the total length of shortest augmenting paths will be useful for our analysis.\n\n\\begin{lemma}[\\cite{cunningham1986improved}]\n If we solve matroid intersection by repeatedly finding the shortest augmenting paths, then the sum of the lengths of these augmenting paths is $O(r\\log r)$.\n \\label{lemma:augmenting-path-lengths}\n\\end{lemma}\n\n\\begin{lemma}[\\cite{cunningham1986improved,price2015,chakrabarty2019faster}]\n If we augment along a shortest $(s, t)$-path in $G(S)$ to obtain $S^\\prime$, then for each $u \\in U$, the following hold (let $d := d_{G(S)}$ and $d^\\prime := d_{G(S^\\prime)})$.\n \\begin{enumerate}\n \\item If $d(s, u) < d(s, t)$, then $d^\\prime(s, u) \\geq d(s, u)$. Similarly, if $d(u, t) < d(s, t)$, then $d^\\prime(u, t) \\geq d(u, t)$.\n \\item If $d(s, u) \\geq d(s, t)$, then $d^\\prime(s, u) \\geq d^\\prime(s, t)$. 
Similarly, if $d(u, t) \\geq d(s, t)$, then $d^\\prime(u, t) \\geq d^\\prime(s, t)$.\n \\end{enumerate}\n \\label{lemma:monotone}\n\\end{lemma}\n\n\\paragraph{Augmenting Sets.}\nThe following notion of \\emph{augmenting sets}, introduced by \\cite{chakrabarty2019faster}, models a collection of ``mutually compatible'' augmenting paths, i.e., paths that can be augmented sequentially without interfering with each other.\n\n\\begin{definition}[Augmenting Set~{\\cite[Definition 24]{chakrabarty2019faster}}]\n Let $S \\in \\cI_1 \\cap \\cI_2$ satisfy $d_{G(S)}(s, t) = d_t$ and let $L_1, L_2, \\ldots, L_{d_t - 1}$ be the distance layers of $G(S)$.\n A collection $\\Pi := (D_1, D_2, \\cdots, D_{d_t - 1})$ is an \\emph{augmenting set} in $G(S)$ if\n \\begin{enumerate}[noitemsep, label=(\\roman*)]\n \\item $D_\\ell \\subseteq L_\\ell$ holds for each $1 \\leq \\ell < d_t$,\n \\item $|D_1| = |D_2| = \\cdots = |D_{d_t - 1}|$,\n \\item $S + D_1 \\in \\cI_1$,\n \\item $S + D_{d_t - 1} \\in \\cI_2$,\n \\item $S - D_\\ell + D_{\\ell + 1} \\in \\cI_1$ holds for each \\emph{even} $1 \\leq \\ell < d_t - 1$, and\n \\item $S - D_{\\ell + 1} + D_{\\ell} \\in \\cI_2$ holds for each \\emph{odd} $1 \\leq \\ell < d_t - 1$.\n \\end{enumerate}\n \\label{def:augmenting-sets}\n\\end{definition}\n\nOne can think of the concept of augmenting sets as a generalization of augmenting paths.\nIndeed, an augmenting path is an augmenting set where $|D_1| = \\cdots = |D_{d_t - 1}| = 1$.\nThe term ``mutually compatible'' augmenting paths is formalized as follows.\n\n\\begin{definition}[Consecutive Shortest Paths~{\\cite[Definition 28]{chakrabarty2019faster}}]\n A collection of vertex-disjoint shortest $(s, t)$-paths $\\cP = (P_1, \\ldots, P_k)$ in $G(S)$ is a collection of \\emph{consecutive shortest paths} if $P_i$ is a shortest augmenting path in $G(S \\oplus P_1 \\oplus \\cdots \\oplus P_{i - 1})$ for each $1 \\leq i \\leq k$.\n\\end{definition}\n\nThe following structural lemmas of 
\\cite{chakrabarty2019faster} will be useful for us, particularly in deriving \\cref{lemma:key} in \\cref{sec:matroid-intersection}.\n\n\\begin{lemma}[{\\cite[Theorem 25]{chakrabarty2019faster}}]\n Let $\\Pi := (D_1, \\ldots, D_{d_t - 1})$ be an augmenting set in $G(S)$.\n Then, $S^\\prime := S \\oplus \\Pi := S \\oplus D_1 \\oplus \\cdots \\oplus D_{d_t - 1}$ is a common independent set.\n \\label{lemma:aug-set}\n\\end{lemma}\n\n\nFor two augmenting sets $\\Pi = (D_1, \\ldots, D_{d_t - 1})$ and $\\Pi^\\prime = (D_1^\\prime, \\ldots, D_{d_t - 1}^\\prime)$, we use $\\Pi \\subseteq \\Pi^\\prime$ to denote that $D_{\\ell} \\subseteq D_{\\ell}^\\prime$ holds for each $1 \\leq \\ell < d_t$.\nIn this case, let $\\Pi^\\prime \\setminus \\Pi := (D_1^\\prime \\setminus D_1, \\ldots, D_{d_t - 1}^\\prime \\setminus D_{d_t - 1})$.\nWe will hereafter abuse notation and let $\\Pi$ also denote the set of elements $D_1 \\cup \\cdots \\cup D_{d_t - 1}$ in it.\nIn particular, $U \\setminus \\Pi$ denotes $U \\setminus \\left(D_1 \\cup \\cdots \\cup D_{d_t - 1}\\right)$.\n\n\\begin{lemma}[{\\cite[Theorem 33]{chakrabarty2019faster}}]\n For two augmenting sets $\\Pi \\subseteq \\Pi^\\prime$ in $G(S)$, $\\Pi^\\prime \\setminus \\Pi$ is an augmenting set in $G(S \\oplus \\Pi)$.\n \\label{lemma:difference-is-aug-set}\n\\end{lemma}\n\n\\begin{lemma}[{\\cite[Theorem 29]{chakrabarty2019faster}}]\n Given a collection of consecutive shortest paths $P_1, \\ldots, P_k$ in $G(S)$, where $P_i = (s, a_{i,1}, \\ldots, a_{i, d_t - 1}, t)$, the collection $\\Pi = (D_1, \\ldots, D_{d_t - 1})$, where $D_i = \\{a_{1,i}, \\ldots, a_{k, i}\\}$, is an augmenting set in $G(S)$.\n \\label{lemma:aug-set-from-aug-paths}\n\\end{lemma}\n\nThe converse of \\cref{lemma:aug-set-from-aug-paths} also holds.\n\n\\begin{lemma}[{\\cite[Theorem 34]{chakrabarty2019faster}}]\n Given an augmenting set $\\Pi$ in $G(S)$, there is a collection of consecutive shortest paths $P_1, \\ldots, P_k$ in $G(S)$ where $P_i = (s, a_{i,1}, 
\\ldots, a_{i,d_t - 1}, t)$ such that $D_i = \\{a_{1,i}, \\ldots, a_{k,i}\\}$.\n \\label{lemma:aug-paths-from-aug-set}\n\\end{lemma}\n\n\\begin{remark}\n Note that \\cref{lemma:aug-set-from-aug-paths,lemma:aug-paths-from-aug-set} are not equivalent to the exact statements of {\\cite[Theorems 29 and 34]{chakrabarty2019faster}} (in particular, they did not specify how $\\Pi$ and $P_i$ are constructed), but our versions are clear from their proof.\n\\end{remark}\n\n\n\n\n\\begin{claim}\n Let $S \\in \\cI_1 \\cap \\cI_2$ with $d_{G(S)}(s, t) = d_t$ and $\\Pi \\subseteq \\Pi^\\prime$ be two augmenting sets in $G(S)$.\n Let $u \\not\\in \\Pi^\\prime$ be an element.\n If $u$ is not on any augmenting path of length $d_t$ in $G(S \\oplus \\Pi)$, then $u$ is not on any augmenting path of length $d_t$ in $G(S \\oplus \\Pi^\\prime)$ either.\n \\label{claim:no-path}\n\\end{claim}\n\n\\begin{proof}\n Let $S^\\prime := S \\oplus \\Pi$ and $S^{\\prime\\prime} := S \\oplus \\Pi^\\prime$.\n Since $S^{\\prime\\prime}$ can be obtained by augmenting $S^\\prime$ along a series of shortest augmenting paths (by \\cref{lemma:difference-is-aug-set,lemma:aug-paths-from-aug-set}), the claim follows from the fact that the $(s, u)$-distance and $(u, t)$-distance are monotonic (\\cref{lemma:monotone}).\n\\end{proof}\n\n\\begin{claim}\n Let $\\Pi = (D_1, \\ldots, D_{d_t - 1})$ be an augmenting set in $G(S)$ and $P = (s, a_1, \\ldots, a_{d_t - 1}, t)$ be an augmenting path in $G(S \\oplus \\Pi)$.\n Then, $\\Pi^\\prime = (D_1 + a_1, \\ldots, D_{d_t - 1} + a_{d_t - 1})$ is an augmenting set in $G(S)$.\n \\label{claim:put-aug-path-in-aug-set}\n\\end{claim}\n\n\\begin{proof}\n This directly follows from \\cref{lemma:aug-paths-from-aug-set,lemma:aug-set-from-aug-paths}.\n\\end{proof}\n\n\n\n\n\\paragraph{Using Dynamic Oracle.}\nIn the following sections except for \\cref{appendix:dynamic-matroid-intersection-ind} where we discuss independence-query algorithms, all algorithms and data structures will 
run in the \\emph{dynamic-rank-oracle} model (see \\cref{def:dyn-oracle}).\nIn other words, we will simply write ``in $t$ time'' for ``in $t$ time and dynamic rank queries''.\nWe will use the term \\emph{query-sets} to refer to the sets $S_i$ in \\cref{def:dyn-oracle}.\nIn particular, constructing a query-set means building the corresponding set from $S_0 = \\emptyset$ with the $\\textsc{Insert}(\\cdot)$ operation.\nInsertion\/Deletion of an element into\/from a query-set is done via the $\\textsc{Insert}$\/$\\textsc{Delete}$ operations.\nUsing the $\\textsc{Query}$ operation, we assume that we know the ranks of all the query-sets we construct in our algorithms.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\\label{sl:intro}\nDuring the past few decades, science has made great progress in understanding the biology of cancer \\cite{p53review_yu2014small,cancer_advances_2014}. The latest technological tools allow assaying tens of thousands of genes simultaneously, providing large volumes of data to search for cancer biomarkers \\cite{microarray_techn_2013,venter2001sequence}. Ideally, scientists would like to extract some qualitative signal from this data in the hope of better understanding the underlying biological processes. At the same time, it is desirable that the extracted signal is robust in the presence of noise and errors while effectively describing the dataset \\cite{valarmathi2014noise,pozhitkov2014revised}.\n\nOne suite of methods which has enjoyed an increased level of success in recent years is based on concepts from the mathematical field of algebraic topology, in particular, {\\em persistent homology} \\cite{EdLeZo2002,Gh2008barcodes,CaZoCoGu2004}. The key benefits of using topological features to describe data include coordinate-free description of shape, robustness in the presence of noise, and invariance under many transformations, as well as highly compressed representations of structures \\cite{Lumetal2013}. 
Analysis of the homology of data allows detection of high-dimensional features based on {\\em connectivity}, such as loops and voids, which could not be detected using traditional methods such as clustering \\cite{Ca2009,Gh2008barcodes}. Further, identifying the most {\\em persistent} of such features promises to pick up the significant shapes while ignoring noise \\cite{Gh2008barcodes}. Such analysis of the stable topological features of the data could provide helpful insights as demonstrated by several studies, including some recent ones on cancer data \\cite{dewoshkin2010,seeman2012}.\n\n\\subsection{Our contributions} \\label{contrib}\nWe present a new method of topological analysis of various cancer gene expression data sets. Our method belongs to the category of exploratory data analysis. In order to more efficiently handle the huge number of genes whose expressions are recorded in such data sets (typically on the order of tens of thousands), we transpose the data and analyze it in its {\\em dual space}, i.e., with each gene represented in the much lower dimensional (on the order of a few hundred) sample space. We then sample critical genes as guided by the topological analysis. In particular, we choose a small subset (typically $120$--$200$) of genes as {\\em landmarks} \\cite{desilva2004}, and construct a family of nested simplicial complexes, indexed by a proximity parameter. We observe topological features (loops) in the first homology group ($\\homo_1$) that remain significant over a large range of values of the proximity parameter (we consider small loops as topological noise). By repeating the procedure for different numbers of landmarks, we select stable features that persist over large ranges of both the number of landmarks and the proximity parameter. We then further analyze these loops with respect to their membership, working under the hypothesis that their topological connectivity could reveal functional connectivity. 
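The landmark-selection step just outlined can be made concrete. The following is a minimal Python sketch of the sequential maxmin heuristic described in the Mathematical background section; the Euclidean metric, the toy point set, and the deterministic choice of the first landmark are simplifying assumptions (in our experiments the first landmark is chosen randomly).

```python
import math

def sequential_maxmin(points, num_landmarks, first=0):
    """Greedy maxmin landmark selection: each new landmark is the
    point farthest from the landmarks chosen so far.  The first
    landmark is a parameter here; in practice it is picked randomly."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    landmarks = [first]
    # d_to_L[i] = distance from point i to the current landmark set
    d_to_L = [dist(p, points[first]) for p in points]
    while len(landmarks) < num_landmarks:
        nxt = max(range(len(points)), key=lambda i: d_to_L[i])
        landmarks.append(nxt)
        for i, p in enumerate(points):
            d_to_L[i] = min(d_to_L[i], dist(p, points[nxt]))
    return landmarks
```

Because each landmark maximizes its distance to the already chosen ones, the landmarks tend to spread evenly over the data rather than oversample dense regions.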
Through a search of the scientific literature, we establish that many loop members have been implicated in cancer biogenesis. We applied our methodology to five different data sets from a variety of cancers (brain, breast, ovarian, and acute myeloid leukemia (AML)), and observed that in each of the five cases, many members of the significant loops in $\\homo_1$ have been identified in the literature as having connections to cancer.\n\nOur method is capable of identifying geometric properties of the data that cannot be found by traditional algorithms such as clustering \\cite{cluster_genes_review2012,cluster_review2010}. By employing tools from algebraic topology, our method goes beyond clustering and detects connected components around holes (loops) in the data space. The presented methodology also differs from techniques such as graph mining \\cite{cluster_massive_graphs_siam2013,graph_mining_book_cook2006} or manifold learning \\cite{Isomap_original2000,LLE_original2000,HessianLLE_original2003,laplacian_embedding_original2001,diffusion_maps2006}. Graph algorithms, while identifying connectivity, miss a wealth of information beyond clustering. Manifold learning algorithms assume that the data comes from an intrinsically low-dimensional space, and their goal is to find a low-dimensional embedding. We do not make such assumptions about the data.\n\n\n\n\\subsection{Related work}\\label{prev_work}\nSeveral applications of tools from algebraic topology to analyze complex biological data from the domain of cancer research have been reported recently. DeWoshkin et al.~\\cite{dewoshkin2010} used computational homology for analysis of comparative genomic hybridization (CGH) arrays of breast cancer patients. They analyzed DNA copy numbers by looking at the characteristics of the \\(\\homo_0\\) group. 
Using Betti$_0$ (\\(\\beta_0\\)) numbers, which are the ranks of the zeroth homology groups ($\\homo_0$), their method was able to distinguish between recurrent and non-recurrent patient groups.\n\nLikewise, Seeman et al.~\\cite{seeman2012} applied persistent homology tools to analyze cancer data. Their algorithm starts with a set of genes that are preselected using the {\\it nondimensionalized standard deviation} metric \\cite{seeman2012}. Then, by applying persistent homology analysis to the \\(\\homo_0\\) group, the patient set is recursively subdivided to yield three subgroups with distinct cancer types. By inspecting the cluster membership, a core subset of genes is selected, which allows sharper differentiation between the cancer subtypes.\n\nAnother example of topological data analysis is the work of Nicolau et al.~\\cite{nicolau2011}. Their method, termed {\\em progression analysis of disease} (PAD), is applied to differentiate three subgroups of breast cancer patients. PAD is a combination of two algorithms -- disease-specific genome analysis (DSGA) \\cite{dsga2007} and the topology-based algorithm termed {\\it Mapper} \\cite{mapper2007}. First, DSGA transforms the data by decomposing it into two components -- the disease component and the healthy state component, where the disease component is a vector of residuals from a {\\it Healthy State Model} (HSM) linear fit. A small subset of genes that show a significant deviation from the healthy state is retained and passed on to Mapper, which applies a specified filter function to reveal the topology of the data. Mapper identified three clusters corresponding to ER$+$, ER$-$, and normal-like subgroups of breast cancer. This work is somewhat different from the previous two papers mentioned above because it does not explicitly analyze features of any of the homology groups.\n\nAll studies mentioned above utilized $\\beta_0$ numbers, thus performing analyses that are topologically equivalent to clustering. 
In contrast, our method relies on $\\beta_1$ numbers (ranks of $\\homo_1$ groups). One can think of $\\beta_1$ numbers as characterizing the loops constructed from connected components (genes) around ``holes'' in the data. The underlying idea is that connections around holes may imply connections between the participating genes and biological functions. Also, most other methods use some data preprocessing to limit the initial pool of candidate genes. Our method selects the optimal number of genes as part of the analysis itself.\n\n\\section{Mathematical background} \\label{sl:theory}\n\nWe review some basic definitions from algebraic topology used in our work. For details, refer to one of the standard textbooks \\cite{Munkres1984,EdHa2009}. Illustrations of simplices, persistent homology, and identification of topological features from landmarks are available in the literature \\cite{Gh2008barcodes, desilva2004, javaplex2011}.\n\n\\subsection{Simplices and simplicial complexes} \\label{sl:simplicial_complexes}\nTopology represents the shape of point sets using combinatorial objects called simplicial complexes. Consider a finite set of points in \\(\\R^n\\). More generally, the space need not be Euclidean. We just need a pairwise distance to be defined for every pair of points. The building blocks of the combinatorial objects are {\\em simplices}, which are made of collections of these points. \n\nFormally, the convex hull of $k+1$ affinely independent points $\\{v_{0}, v_{1}, \\dots, v_{k}\\}$ is a $k$-simplex. The dimension of the simplex is $k$, and the $v_{j}$s are its vertices. Thus, a vertex is a $0$-simplex, a line segment connecting two vertices is a $1$-simplex, a triangle is a $2$-simplex, and so on. Observe that each $p$-simplex $\\sigma$ is made of lower dimensional simplices, i.e., $k$-simplices $\\tau$ with \\(k \\leq p\\). Here, $\\tau$ is called a {\\em face} of $\\sigma$, denoted $\\tau \\subset \\sigma$. 
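In the abstract (combinatorial) setting these definitions translate directly into code: a simplex is just the set of its vertex labels, and its faces are the nonempty subsets of that set. A small illustrative Python sketch (the vertex labels are arbitrary):

```python
from itertools import combinations

def dim(simplex):
    """Dimension of an abstract simplex, represented as a
    frozenset of vertex labels: (number of vertices) - 1."""
    return len(simplex) - 1

def faces(simplex):
    """All nonempty faces of a simplex: every nonempty subset of
    its vertex set, including the simplex itself."""
    verts = sorted(simplex)
    return {frozenset(c)
            for k in range(1, len(verts) + 1)
            for c in combinations(verts, k)}
```

For a triangle on three vertices this yields seven faces: three vertices, three edges, and the triangle itself.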
A collection of simplices satisfying two regularity properties forms a {\\em simplicial complex}. The first property is that each face of every simplex in a simplicial complex $K$ is also in $K$. Second, each pair of simplices in $K$ intersects in a face of both, or not at all. Due to these properties, algorithms to study shape and topology run much more efficiently on the simplicial complex than on the original point set.\n\nTo construct a simplicial complex on a given point set, one typically considers balls of a given diameter $\\epsilon$ (called $\\epsilon$-balls) centered at each point. The two widely studied complexes of this form are the {\\em \\v{C}ech} and the {\\em Vietoris-Rips} complexes. A $k$-simplex is included in the \\v{C}ech complex if there exists an $\\epsilon$-ball containing all its $k+1$ vertices. Such a simplex is included in the Vietoris-Rips complex \\(R_{\\epsilon} \\) if each pair of its vertices is within a distance $\\epsilon$. As such, Vietoris-Rips complexes are somewhat easier to construct, since we only need to inspect pairwise, and not higher order, distances.\n\nHowever, both the \\v{C}ech and the Vietoris-Rips complexes have as vertex set all of the points in the data. Such complexes are computationally intensive for datasets of tens of thousands of points. A feasible option is to work with an approximation of the topological space of interest \\cite{desilva2004}. The key idea is to select only a small subset of points (landmarks), while the rest of the points serve as {\\it witnesses} to the existence of simplices. Termed {\\em witness complexes}, such complexes have a number of advantages. They are easily computed, adaptable to arbitrary metrics, and do not suffer from the curse of dimensionality. They also provide a less noisy picture of the topological space. 
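Since the Vietoris-Rips construction only needs pairwise distances, it can be sketched in a few lines of Python; the distance matrix below is a made-up toy input, and truncating at 2-simplices is an illustrative simplification:

```python
from itertools import combinations

def vietoris_rips(dists, epsilon, max_dim=2):
    """Vietoris-Rips complex R_epsilon from a symmetric pairwise
    distance matrix: a simplex is included iff every pair of its
    vertices lies within distance epsilon of each other."""
    n = len(dists)
    complex_ = [frozenset([v]) for v in range(n)]   # all vertices
    for k in range(2, max_dim + 2):                 # k vertices = a (k-1)-simplex
        for combo in combinations(range(n), k):
            if all(dists[u][v] <= epsilon
                   for u, v in combinations(combo, 2)):
                complex_.append(frozenset(combo))
    return complex_
```

The closure property holds by construction: every subset of a pairwise-close vertex set is itself pairwise close. The witness and lazy witness complexes used later replace the all-points vertex set with the much smaller landmark set.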
We use the {\\it lazy Witness complex}, in which conditions for inclusion are checked only for pairs and not for higher order groups of points \\cite{desilva2004}, analogous to the distinction between the constructions of Vietoris-Rips and \\v{C}ech complexes.\n\nWe employ the heuristic landmark selection procedure called {\\it sequential maxmin} to select a representative set of landmark points \\cite{desilva2004,adams_carlsson_2009,carlsson_etal_2008}. The first landmark is selected randomly from the point set $S$. Then the algorithm proceeds inductively. If \\(L_{i-1}\\) is the set of the first \\(i-1\\) landmarks, then the $i$-th landmark is the point of $S$ which maximizes the function \\(d(x, L_{i-1})\\), the distance between the point \\(x\\) and the set \\(L_{i-1}\\). We vary the total number of landmarks, exploring each of the resulting lazy witness complexes. The final number of landmarks is chosen so that the resulting witness complex maximally exposes topological features.\n\n\\subsection{Persistent homology}\\label{persist_homo}\n\nHomology is the concept from algebraic topology which captures how space is {\\em connected}. Thus, homology can be used to characterize interesting features of a simplicial complex such as connected clusters, holes, enclosed voids, etc., which could reveal underlying relationships and behavior of the data set. Homology of a space can be described by its {\\em Betti numbers}. The $k$-th Betti number \\(\\beta_k\\) of a simplicial complex is the rank of its $k$-th homology group. For $k=0,1,2$, the $\\beta_k$ have intuitive interpretations. \\(\\beta_{0}\\) represents the number of connected components, \\(\\beta_{1}\\) the number of holes, and \\(\\beta_{2}\\) the number of enclosed voids. For example, a sphere has $\\beta_0=1, \\beta_1=0, \\beta_2=1$, as it has one component, no holes, and one enclosed void.\n\nConsider the formation of a simplicial complex using balls of diameter $\\epsilon$ centered on points in a set. 
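As a concrete aside, at a fixed value of ε the Betti number β0 of the Vietoris-Rips complex equals the number of connected components of the graph joining every pair of points at distance at most ε, which a union-find pass computes. A minimal sketch (the distance matrix is a made-up toy input):

```python
def beta0(dists, epsilon):
    """beta_0 (number of connected components) of the epsilon-
    neighborhood graph, via union-find; this equals beta_0 of the
    Vietoris-Rips complex at scale epsilon."""
    n = len(dists)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    # Merge every pair of points within distance epsilon
    for u in range(n):
        for v in range(u + 1, n):
            if dists[u][v] <= epsilon:
                parent[find(u)] = find(v)
    return len({find(x) for x in range(n)})
```

Sweeping ε from small to large makes β0 fall from the number of points down to one, which is exactly the filtration behavior described next.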
For small \\(\\epsilon\\), the simplicial complex is just a set of disjoint vertices. For sufficiently large \\(\\epsilon\\), the simplicial complex becomes one big cluster. What value of \\(\\epsilon\\) reveals the ``correct'' structure? Persistent homology \\cite{EdLeZo2002,Gh2008barcodes} gives a rigorous response to this question. By increasing \\(\\epsilon\\), a sequence of nested simplicial complexes called a filtration is created, which is examined for attributes of connectivity and their robustness. Topological features appear and disappear as \\(\\epsilon\\) increases. The features which exist over a longer range of the parameter $\\epsilon$ are considered signal, and short-lived features noise \\cite{EdLeZo2002,zomo_carlsson_2005}. This formulation allows a visualization as a collection of barcodes (one in each dimension), with each feature represented by a bar. The longer the life span of a feature, the longer its bar. In the example barcodes in Figs.~\\ref{breast_evolve}--\\ref{aml170_dim25}, the x-axis represents the \\(\\epsilon\\) parameter, and the bars of persistent loops of interest are circled.\n\n\\section*{Research questions}\n\nOur approach could address several critical questions in the context of cancer data analysis. First, could we select a small subset of relevant genes while {\\em simultaneously} identifying robust nontrivial structure, i.e., topology, of the data? Most previous approaches require the selection of a subset of genes {\\em before} exploring the resulting structure, thus limiting their generality. Second, could we elucidate higher order interactions (beyond clusters) between genes that could have potential implications for cancer biogenesis? Higher order structures such as loops could reveal critical subsets of genes with relevant nontrivial interactions, which together have implications for cancer. 
Third, could this method work even when data is available from only a {\\em subset} of patients?\n\n\\section{Data}\\label{sl:data}\nWe analyzed five publicly available microarray datasets of gene expression from four different types of cancer -- breast, ovarian, brain, and acute myeloid leukemia (AML). Four of the datasets have the same protocol, GPL570 (HG\\_U133\\_Plus\\_2, Affymetrix Human Genome U133 Plus 2.0 Array). The fifth dataset has a different protocol, HG\\_U95Av2, which has fewer genes (see Table~\\ref{sl:tbl_data}). By including data sets from different protocols, we could verify that the topological features identified are not just artifacts of a particular protocol. \n\n\\begin{wraptable}{r}{3.8in}\n\\tbl{Datasets used in the study.}\n{\n\\begin{tabular}{@{}lllcc@{}}\n\\toprule\nDataset & Series & Protocol & \\# Genes & \\# Samples \\\\ \\colrule\nBrain & GSE36245 & GPL570 & 46201 & 46\\\\\nBreast & GSE3744 & GPL570 & 54613 & 47\\\\ \nOvarian & GSE51373 & GPL570 & 54613 & 28\\\\\nAML188 & GSE10358 & GPL570 & 54613 & 188\\\\\nAML170 & willm-00119 & HG\\_U95Av2 & 12558 & 170\\\\ \\botrule\n\\end{tabular}\n} \\label{sl:tbl_data}\n\\end{wraptable}\nThe number of genes represents the number of unique gene id tags defined by a protocol, excluding controls. While the brain dataset is of the same protocol as the breast and ovarian datasets, the former has fewer genes -- 46201 vs.~54613. This variability, however, did not affect our procedure to find topological features.\n\nAll datasets, except for AML170, were obtained from NCBI Gene Expression Omnibus \\url{http:\/\/www.ncbi.nlm.nih.gov\/geo\/} in October 2013. AML170 was retrieved at the same time from the National Cancer Institute caArray Database at \\url{https:\/\/array.nci.nih.gov\/caarray\/project\/willm-00119}. \n\n\\section{Methods}\\label{sl:methods}\nWe work with the raw gene expression values. 
In particular, we do not log-transform them.\n\\subsection{Dual space of data}\nTraditionally, gene expression data is viewed in its gene space, i.e., the expression profile of patient $i$ with $m$ genes is a point in $\\R^m$, $\\vx_i= [ x_{i_1}~x_{i_2}~\\cdots~x_{i_m} ]$. Each \\(x_{i_j}\\) is the expression of gene $j$ in patient $i$. For example, each patient in the Brain dataset is a point in $\\R^{46201}$.\n\nWe analyze expression data in its dual space, i.e., in its {\\it sample space}. Hence a gene $j$ is represented as a point in $\\R^n$, $\\vx_j = [ x_{j_1}~x_{j_2}~\\cdots~x_{j_n} ]$, where $n$ is the number of samples or patients. Each \\(x_{j_i}\\) is the expression of the gene $j$ in patient $i$. For example, in the same Brain dataset, the points now sit in $\\R^{46}$ space. Hence we study gene expression across the span of all patients.\n\nThe key motivation for this approach is to handle the high dimensionality in a meaningful way for analyzing the {\\em shape} of data. Given the small number of patients, one could efficiently construct a Vietoris-Rips complex using the set of pairwise distances between patients in the gene space (no need to choose landmarks). But such distances become less discriminatory when the number of genes is large \\cite{meaningiful_nneighbor_beyer1999,concentration_ledoux2005}. Working in the dual space, we let our topological method select a manageable number of genes as landmarks to construct the witness complexes, which potentially capture interesting topology of the data. Hence we do not preselect a small number of genes before the topological analysis, as is done by some previous studies \\cite{nicolau2011,seeman2012}.\n\n\\subsection{Choosing the number of landmarks}\nFor construction of the witness complexes, the number of landmarks has to be defined {\\it a priori} (Sec.~\\ref{sl:simplicial_complexes}). Hence the question becomes: how many landmarks do we select? 
We let the data itself guide the selection of genes used as landmarks. If there is a significant loop \n\\begin{wraptable}{r}{2in}\n\\vspace*{-0.1in}\n\\tbl{Numbers of landmarks selected in each dataset.}\n{\\begin{tabular}{@{}lc@{}}\\toprule\nDataset & \\# landmarks \\\\ \\colrule\nBrain & 120\\\\\nBreast & 110\\\\ \nOvarian & 200\\\\\nAML188 & 150\\\\\nAML170 & 130\\\\ \\botrule\n\\end{tabular}}\\label{sl:tbl_num_landmarks}\n\\end{wraptable}\nfeature in the data, it would persist through a range of landmarks in $\\homo_1$ of the complexes. We reconstruct the topological space, incrementing the number of landmarks\nwhile observing the appearance and disappearance of topological features. Initially, there would be very few small, noisy features because of an insufficient number of points. As the number of landmarks increases, some features stabilize, i.e., do not change much either in size or membership. Then they reach their maximal size, and start to diminish once some critical number of landmarks is exceeded (when the ``holes'' are all filled in). The ``optimal'' number of landmarks is chosen when the length of the bar representing the topological feature is maximal.\n\nA typical example of such behavior is seen in the Breast dataset (see Fig.~\\ref{breast_evolve}). A small loop appears when the number of landmarks is \\(L=50\\). It stabilizes around \\(L=90\\), reaches its maximum span at \\(L=110\\), and then decreases as \\(L\\) grows.\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=\\textwidth]{Breast_evolution2_no_title}\n\\caption{Evolution of the loop of interest (circled) for varying number of landmarks from L=50 to L=130 in the breast dataset. 
Here and in Figs.~\\ref{brain_loop}--\\ref{aml170_dim25}, the x-axis represents the \\(\\epsilon\\) parameter.}\\label{breast_evolve}\n\\end{figure}\n\n\\subsection{Composition of loops}\n\nOne of the goals of our method is to determine the genes which participate in $\\homo_1$ features, which could indicate potential implications for cancer biogenesis. Since the first landmark is chosen randomly in the sequential maxmin procedure, the composition of the loops identified may differ based on this first choice. To circumvent this effect, we do $20$ different runs in each case to collect possible variations in loop formation. Members of the loops are then pooled together for further analysis. Due to the almost deterministic nature of sequential maxmin selection (apart from the first landmark being selected randomly), we observed very little variation over the $20$ runs in most cases. The recovered loop members are then queried in the scientific literature for cancer-related reports.\n\n\\medskip\n\\noindent We implemented our computations using the package JavaPlex \\cite{javaplex2011}. We explored the barcodes for $\\homo_0, \\homo_1,$ and $\\homo_2$, but interesting persistent features with members related to cancer biogenesis were detected only for $\\homo_1$.\n\n\\section{Results}\\label{sl:results}\n\nPersistent topological features in the homology group \\(\\homo_1\\) were observed in every cancer dataset we analyzed. Representative examples are shown in Figs.~\\ref{brain_loop}--\\ref{aml170_2loops}. The AML datasets both had two persistent loops, while the other datasets had one loop each. The Ovarian dataset had a few medium length bars in the $\\homo_1$ barcode, but we investigated only the longest loop. Once the persistent loops were identified, they were inspected with respect to their composition and relation to cancer through a search of the scientific literature. 
Below is a brief description of results for each of the datasets (the full list of all loop members is also available \\cite{suppl}).\n\n\\begin{table}[h]\n\\tbl{Selected representatives of loops in different datasets.}\n{\\begin{tabular}{@{}lllc@{}}\\toprule\nGene & Dataset & Description & References \\\\ \\colrule\nCAV1 & Brain & tumor suppressor gene & \\cite{gene_cav1}\\\\\nRPL36 & Brain & prognostic marker in hepatocellular carcinoma & \\cite{rpl36_2011}\\\\ \nRPS11 & Breast & downregulation in breast carcinoma cells & \\cite{rps11_2001}\\\\\nFTL & Breast & prognostic biomarkers in breast cancer & \\cite{ftl_2006}\\\\\nLDHA & Ovarian & overexpressed in tumors, important for cell growth & \\cite{ldha_2013}\\\\\nGNAS & Ovarian & biomarker for survival in ovarian cancer & \\cite{gnas_2010}\\\\\nLAMP1 & AML170 & regulation of melanoma metastasis & \\cite{lamp1_2014}\\\\\nPABPC1 & AML170 & correlation with tumor progression & \\cite{pabpc1_2006}\\\\\nHLF & AML188 & promotes resistance to cell death & \\cite{hlf_2013}\\\\\nDTNA & AML188 & induces apoptosis in leukemia cells & \\cite{dtna_2012}\\\\ \\botrule\n\\end{tabular}}\\label{sl:tb_some_members}\n\\end{table}\n\n\\vspace*{-0.2in}\n\\subsection{Brain dataset}\\label{sl:brain}\nThe lifespan of the longest loop in this dataset was about 820 (a bar spanning approximately $[480, 1300]$). The loop was consistent over different choices of the first landmark. We identified \n\\begin{wrapfigure}{r}{3in}\n\\centering\n\\includegraphics[height=1.1in]{Brain_one_loop_no_title1}\n\\caption{Representative loop in brain dataset.}\\label{brain_loop}\n\\end{wrapfigure}\n13 loop members, out of which 9 were found in cancer literature. 
Some cancer-related members include EGR1 and CAV1, which have been characterized as tumor suppressor genes\\cite{gene_cav1, gene_egr1}, A2M, which has been identified as a predictor for bone metastases\\cite{a2m_2001}, and RPL36, which has been found to be a prognostic marker in hepatocellular carcinoma\\cite{rpl36_2011}.\n\n\n\\subsection{Breast dataset}\\label{sl:breast}\n\\begin{wrapfigure}{r}{3in}\n\\centering\n\\includegraphics[width=3in]{Breast_one_loop_no_title}\n\\caption{Representative loop in breast dataset.}\\label{breast_loop}\n\\end{wrapfigure}\nThe lifespan of the longest loop in this dataset was in the range $[10080.0, 16684.2]$. As with the brain dataset, this loop is very consistent. However, this loop had only 10 members, of which 8 were found in the cancer literature. An interesting feature of this loop is that it had five ribosomal proteins, which are known to play a critical role in tightly coordinating p53 signaling with ribosomal biogenesis \\cite{ribo_prots_2011}.\n\n\\subsection{Ovarian dataset}\\label{sl:ovarian}\n\\begin{wrapfigure}{r}{3in}\n\\centering\n\\includegraphics[width=3in]{Ovarian_one_loop_no_title}\n\\caption{Representative loop in ovarian dataset.}\\label{ovarian_loop}\n\\end{wrapfigure}\nThe Ovarian dataset had the most variable features in \\(\\homo_1\\). However, we investigated the loop corresponding to the most consistent and longest bar, which ranged from about 4000 to over 7000. This loop consisted of 17 members, of which 9 were mentioned in the cancer-related literature. Among cancer-related members were GNAS, which was identified as ``an independent, qualitative, and reproducible biomarker to predict progression-free survival in epithelial ovarian cancer'' \\cite{gnas_2010}, and HNRNPA1, which has been identified as a potential biomarker for colorectal cancer \\cite{hnrpra1_2009}. 
\n\n\\subsection{AML188 dataset}\\label{sl:aml188}\n\\begin{wrapfigure}{r}{3in}\n\\centering\n\\vspace*{-0.1in}\n\\includegraphics[width=3in]{AML188-representative-loops_no_title}\n\\caption{Representative loops in AML188 dataset.}\\label{aml188_2loops}\n\\end{wrapfigure}\nAcute myeloid leukemia 188 (AML188) had two significant loops (as did AML170). The first one occurred at $[25200.0, 102200.0]$ and the second one at $[78400.0, 146219.24]$. The first loop had 27 members, while the second had only 6. Altogether, only 14 of these 33 genes were mentioned in the cancer literature. Some cancer-related representatives were hepatic leukemia factor (HLF) which promotes resistance to cell death \\cite{hlf_2013}, RPL35A known for inhibition of cell death \\cite{rpl35a_2002}, and GRK5 which regulates tumor growth \\cite{grk5_2012}. A group of zinc finger proteins was present in the first loop, some of which have been reported as novel biomarkers for detection of squamous cell carcinoma \\cite{zink_finger_2011,zink_finger_1991}.\n\n\\subsection{AML170 dataset}\\label{sl:aml170}\n\\begin{wrapfigure}{r}{3in}\n\\centering\n\\vspace*{-0.1in}\n\\includegraphics[width=3in]{AML170-representative-loops_no_title}\n\\caption{Representative loops in AML170 dataset.}\\label{aml170_2loops}\n\\end{wrapfigure}\nAML170 comes from the caArray database, and its protocol HG\\_U95Av2 covers only 12558 genes. Even though this protocol had a smaller number of genes (about $1\/4$) than the other datasets, we still detected two loops in this dataset. They were shorter than the loops in AML188, and occurred at $[5800.0, 11400.0]$ and $[11000.0, 17600.0]$. The two loops comprised 19 members, of which 10 were found in cancer-related literature. 
These relevant members included ubiquitin C (UBC), which was recently identified as a novel cancer-related protein \\cite{ubc_2014}, PABPC1 whose positive expression is correlated with tumor progression in esophageal cancer \\cite{pabpc1_2006}, and LAMP1 which facilitates lung metastasis \\cite{lamp1_2014}.\n\n\\section{Discussion}\\label{sl:discuss}\n\n\\begin{wrapfigure}{r}{3in}\n\\centering\n\\includegraphics[width=3in]{AML170_Dim25_L130_no_tilte}\n\\caption{Two persistent loops in the AML170 dataset detected using only $25$ dimensions with $L=130$ landmarks.} \\label{aml170_dim25}\n\\end{wrapfigure}\nThe Breast, Brain, and Ovarian datasets had only one persistent loop, while AML170 and AML188 had two. Also, the AML datasets had a higher number of patients (samples) than the other three sets (see Table~\\ref{sl:tbl_data}). Is this fact just a coincidence or, indeed, does the number of \\(\\homo_1\\) features (loops) correlate with the number of dimensions (patients)? To address this question, we chose random and progressively larger ($25$--$175$) subsets of patients from AML170 and AML188 while also increasing the number of landmarks, and studied the evolution of \\(\\homo_1\\) features. In other words, we repeated our method on smaller subsets of patients from these datasets. Both the AML datasets contained two loops even with only 25 dimensions (see Fig.~\\ref{aml170_dim25} for AML170), and continued to do so for the progressively larger subsets. Thus, the number of significant \\(\\homo_1\\) features appears to depend on intrinsic qualities of the data rather than the number of dimensions, demonstrating the robustness of our method to the number of patients in the dataset.\n\nAn important property of a loop is its lifespan \\cite{carlsson2006algebraic}. One may note that the lifespans of loops in different datasets vary significantly. 
For example, the lifespan of a significant \\(\\homo_1\\) feature in the brain dataset is only $820$, while for AML188 the lifespan of the first loop is $77,000$. This difference is not only due to the increase in the actual size of a loop as indicated by the number of points comprising the loop ($13$ vs.~$27$ in this case), but also in part due to the different absolute values in the microarray expression data. The maximum value for the brain dataset is \\(24 \\times 10^3\\), while for AML188 it is \\(3 \\times 10^6\\). Therefore, the absolute length of an \\(\\homo_1\\) feature is not as important as its length relative to other \\(\\homo_1\\) features.\n\nThe crucial step of our method is the choice of landmarks. The goal here is the efficient inference of the topology of the data, while selecting a small subset of potentially relevant genes. Landmarks chosen using sequential maxmin tend to cover the dataset better and are spread apart from each other, but the procedure is also known to pick outliers \\cite{desilva2004}. In our datasets, outliers are typically identified by extreme expression values \\cite{outlier_survey2004}. We examined the expression values of the chosen landmarks, and found that very few of them had extreme values (Figs.~\\ref{aml170_loop_hist} and \\ref{breast_loop_hist}). Similarly, the expressions of the genes implicated in cancer biogenesis (among the loop members) did not have any extreme values, and in fact appear to follow normal distributions. We infer that this is because sequential maxmin indeed picks points on the outskirts of the topological features. Further, the distribution of expressions of the loop members suggests that the group as a whole could have potential implications for the disease. More interestingly, the ``hole'' structure means such groups could not be identified by traditional coexpression or even differential expression analyses \\cite{ChKe2009}. 
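The screen for extreme expression values among the landmarks can be reproduced with a simple robust z-score check. The MAD-based threshold and the toy data below are assumptions for illustration only, not the screening rule actually used in the study.

```python
import numpy as np

def extreme_fraction(values, thresh=3.0):
    """Fraction of values more than `thresh` robust z-scores
    (median/MAD based) away from the centre of the sample."""
    v = np.asarray(values, dtype=float)
    med = np.median(v)
    mad = np.median(np.abs(v - med)) * 1.4826   # MAD -> sigma for normal data
    z = np.abs(v - med) / mad
    return float(np.mean(z > thresh))

# toy expression levels: roughly normal, so almost nothing should be flagged
rng = np.random.default_rng(0)
expressions = rng.normal(1000.0, 50.0, size=500)
print(extreme_fraction(expressions))   # close to 0 for normally distributed data
```

Applied to the landmark expressions, a near-zero flagged fraction is consistent with the observation that maxmin here picks loop-boundary points rather than expression-value outliers.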
\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=0.33]{hist_aml170_tripple}\n\\caption{Histograms for the AML170 dataset; the x-axis shows the gene expression level on a scale of $10^4$. (a) distribution of gene expressions for the whole set; (b) distribution of gene expressions for only the loop members; and (c) distribution of gene expressions for cancer-related loop members.} \\label{aml170_loop_hist} \n\\end{figure}\n\\vspace*{-0.2in}\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=0.33]{hist_breast_tripple}\n\\caption{\\label{breast_loop_hist} Histograms for the Breast dataset; the x-axis shows the gene expression level on a scale of $10^5$. (a) distribution of gene expressions for the whole set, (b) distribution of gene expressions for the loop members only, (c) distribution of gene expressions for cancer-related loop members.}\n\\end{figure}\n\nThe computational complexity of our method is based on the current implementation of JavaPlex \\cite{javaplex2011}, where the two main steps are building and filtering simplicial complexes, and computing homology. Up to dimension $2$ (\\(\\beta_2\\)), homology can be computed in $O(n^3)$ time \\cite{EdHa2009}. However, building simplicial complexes relies on clique enumeration, which is NP-complete and has complexity $O(3^{n\/3})$ \\cite{jholes2014,cliques65}. Further, JavaPlex requires explicit enumeration of simplicial facets appearing at each filtration step, implying the need for large memory resources\\cite{javaplex2011}.\n\nThe cancer-related loop members identified from the two AML datasets were distinct apart from the prominent group of ribosomal proteins. There are two main possible reasons for this observation. First, AML188 has four times as many genes as AML170 (see Table~\\ref{sl:tbl_data}). Second, JavaPlex identifies only {\\em one} representative of each homology class. 
That is, if a significant topological feature (hole) exists, it will be identified, but only one loop will be found around that hole. There could be other relevant points in proximity to the hole, but they are not guaranteed to be included in the loop. If we have prior knowledge that some genes are relevant, we could try to identify loops around the holes that include these genes as members. In this case, the other members of the identified loops could also have potential implications for cancer biogenesis. Methods to find a member of a homology class that includes specific points could be of independent interest in the context of optimal homology problems \\cite{DeHiKr2011,DeSuWa2010}.\n\n\\section{Conclusion}\nWe have presented a method to look at cancer data from a different angle. Unlike previous methods, we look at characteristics of the first homology group ($\\homo_1$). We identify the persistent $\\homo_1$ features (which are loops, rather than connected components) and inspect their membership. Importantly, our approach finds potentially interesting connections among genes which cannot otherwise be found using traditional methods. This geometric connectedness may imply functional connectedness; however, this is yet to be investigated by oncologists. If such connections are indeed implied, then the genes in the loops could together form a characteristic ``signature'' for the cancer in question.\n\n\\paragraph{Acknowledgment:} \nKrishnamoorthy acknowledges support from the National Science Foundation through grant \\#1064600.\n\n\\vspace*{-0.1in}\n\\bibliographystyle{ws-procs11x85}\n\\input{BibLockwood_TopoFtrsCncrData}\n\n\n\\end{document}\n
","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\setcounter{equation}{0}\n\\setcounter{theo}{0}\nFor a fixed $0<\\alpha \\leq 2$ {\\it symmetric $\\alpha$-stable L\\'{e}vy motion} $\\{L_\\alpha (t), t\\geq 0\\}$ is a stochastic process characterized by having stationary independent increments with $L_\\alpha (0) = 0$ almost surely, and \n$L_\\alpha (t) - L_\\alpha (s)\\ (t>s)$ having the distribution of $S_\\alpha((t-s)^{1\/\\alpha},0,0)$,\nwhere $S_\\alpha(c, \\beta,\\mu)$ denotes a stable random variable with stability index $\\alpha$, scale parameter $c$, skewness parameter $\\beta$ and shift $\\mu$. A detailed account of such processes may be found in \\cite{Bk_Sam} but we summarize here the features we need.\nStable motion $L_\\alpha $ is $1\/\\alpha$-self-similar in the sense that $L_\\alpha (c t)$ and $c^{1\/\\alpha}L_\\alpha (t)$ are equal in distribution, so in particular have the same finite-dimensional distributions. There is a version of $L_\\alpha $ such that its sample paths are c\\`{a}dl\\`{a}g, that is right continuous with left limits. 
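As a numerical aside, symmetric stable increments can be sampled with the Chambers--Mallows--Stuck formula (here for $\alpha \neq 1$ and unit scale); this sketch, with arbitrarily chosen sample sizes, merely illustrates the stationary-increment and $1/\alpha$-self-similarity properties just described.

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for S_alpha(1, 0, 0), alpha != 1."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(0)
alpha = 1.5
# stationary independent increments: L_alpha(4) is a sum of 4 unit increments,
# and self-similarity says this matches 4^(1/alpha) L_alpha(1) in distribution
s4 = sym_stable(alpha, (200_000, 4), rng).sum(axis=1)
s1 = 4 ** (1 / alpha) * sym_stable(alpha, 200_000, rng)
for q in (0.5, 0.75, 0.9):
    print(q, float(np.quantile(s4, q)), float(np.quantile(s1, q)))
```

The printed empirical quantiles of the two samples should agree up to Monte Carlo error, which is the self-similarity $L_\alpha(ct) \overset{d}{=} c^{1/\alpha} L_\alpha(t)$ with $c=4$, $t=1$.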
\n\nOne way of representing symmetric $\\alpha$-stable L\\'{e}vy motion $L_\\alpha$ is as a sum over a plane point process.\n Throughout the paper we write\n$$\nr^{\\langle s\\rangle} = {\\rm sign}(r)|r|^s \\mbox{ for }r\\in \\mathbb{R}, s\\in \\mathbb{R}.\n$$\nThen\n\\begin{equation}\\label{sum}\nL_\\alpha (t) = C_\\alpha \n\\sum_{(\\X,\\Y) \\in \\Pi} 1_{(0,t]}(\\X) \\Y^{\\langle-1\/\\alpha\\rangle},\n\\end{equation}\nwhere $C_\\alpha$ is a normalising constant given by\n$$\nC_\\alpha = \\Big(\\int_0^\\infty u^{-\\alpha} \\sin u\\, du \\Big)^{-1\/\\alpha}\n$$\nand where $\\Pi$ is a Poisson point process on $\\mathbb{R}^+ \\times\\mathbb{R}$ with plane Lebesgue measure $\\mathcal{L}^2$ as mean measure, so that for a Borel set $A \\subset \\mathbb{R}^+ \\times\\mathbb{R}$ the number of points of $\\Pi$ in $A$ has a Poisson distribution with parameter $\\mathcal{L}^2(A)$, independently for disjoint $A$. \nThe sum \\eqref{sum} is almost surely absolutely uniformly convergent if $0<\\alpha<1$, but if $\\alpha\\geq 1$ then \\eqref{sum} must be taken as the limit as $n\\to \\infty$ of symmetric partial sums\n$$\nL_{\\alpha ,n} (t) = C_\\alpha \n\\sum_{(\\X,\\Y) \\in \\Pi : |\\Y|\\leq n} 1_{(0,t]}(\\X) \\Y^{\\langle-1\/\\alpha\\rangle},\n$$\nin the sense that $\\|L_{\\alpha ,n}-L_\\alpha\\|_{\\infty}\\to 0$ almost surely.\n\nSeveral variants of $\\alpha$-stable motion have been considered. \nFor example, for {\\it multistable L\\'{e}vy motion} $\\{M_{\\alpha} (t), t\\geq 0\\}$ the stability index $\\alpha$ in \\eqref{sum} can depend on $\\X$ so that the local behaviour changes with $t$, see \\cite{FLL,FL,KFLL,XFL,LLA,LLL,LL}. Thus given a continuous $\\alpha: \\mathbb{R}^+ \\to (0,2)$, \n$$\nM_{\\alpha}(t) = \n\\sum_{(\\X,\\Y) \\in \\Pi} 1_{(0,t]}(\\X) C_{\\alpha(\\X)} \\Y^{\\langle-1\/\\alpha(\\X)\\rangle}.\n$$\nThen $M_{\\alpha}$ is a Markov process. 
Under certain conditions it is {\\it localisable} with {\\it local form} $L_{\\alpha(t)}$, in the sense that near $t$ the process `looks like' an $\\alpha(t)$-stable process, that is for each $t>0$ and $u \\in \\mathbb{R}$,\n$$\\frac{M_{\\alpha}(t+ru) -M_{\\alpha}(t)}{r^{1\/\\alpha(t)}} \\tod L_{\\alpha(t)}(u)$$\nas $r\\searrow 0$, where convergence is in distribution with respect to the Skorohod metric and consequently is convergent in finite dimensional distributions, see \\cite{FLL,FL}. \n\nThe local stability parameter of multistable L\\'{e}vy motion depends on the time $t$ but in some contexts, for example in financial modelling, it may be appropriate for the local stability parameter to depend instead (or even as well) on the {\\it value} of the process at time $t$. Such a process might be called `self-stabilizing'. Thus, for suitable \n$\\alpha: \\mathbb{R} \\to (0,2)$, we seek a process $\\{Z (t), t\\geq 0\\}$ that is localisable with local form $L^0_{\\alpha(Z (t))}$, in the sense that for each $t$ and $u>0$,\n\\begin{equation}\\label{locdef1}\n\\frac{Z(t+ru) -Z(t)}{r^{1\/\\alpha(Z (t))}}\\bigg|\\, \\mathcal{F}_t \\ \\tod \\ L^0_{\\alpha(Z (t))}(u)\\end{equation}\nas $r\\searrow 0$,\nwhere convergence is in distribution and finite dimensional distributions and where $\\mathcal{F}_t$ indicates conditioning on the process up to time $t$. (For notational simplicity it is easier to construct $Z_{\\alpha}$ with the non-normalised $\\alpha$-stable processes $L^0_{\\alpha} = C_\\alpha^{-1} L_{\\alpha}$ as its local form.)\n\nThroughout the paper we write $D[t_0,t_1)$ for the c\\`{a}dl\\`{a}g functions on the interval $[t_0,t_1)$, that is functions that are right continuous with left limits; this is the natural space for functions defined by sums over point sets. 
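The point-process representation \eqref{sum} (without the normalising constant $C_\alpha$, i.e.\ the process $L^0_\alpha$) can be simulated by realising $\Pi$ on a truncated window; the window size and the choice $\alpha<1$, for which the sum converges absolutely, are illustrative assumptions.

```python
import numpy as np

def levy_partial_sum(alpha, T=1.0, n=1000.0, rng=None):
    """Partial sum over a Poisson point process Pi with Lebesgue mean
    measure on the window (0,T] x [-n,n]: each point (X, Y) contributes
    a jump sign(Y)|Y|^(-1/alpha) at time X."""
    rng = np.random.default_rng(rng)
    N = rng.poisson(T * 2 * n)            # Poisson number of points in the window
    X = rng.uniform(0.0, T, N)            # jump times
    Y = rng.uniform(-n, n, N)             # marks of the point process
    order = np.argsort(X)
    X, Y = X[order], Y[order]
    jumps = np.sign(Y) * np.abs(Y) ** (-1.0 / alpha)
    return X, np.cumsum(jumps)            # path t -> L^0_{alpha,n}(t) at its jump times

X, L = levy_partial_sum(0.7, rng=0)       # alpha < 1: absolutely convergent case
```

Marks with small $|Y|$ produce the large jumps, so enlarging $n$ only adds many tiny jumps, which is why the truncated sums converge uniformly.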
\n\nIn an earlier paper \\cite{FL2} we constructed self-stabilizing processes for $\\alpha: \\mathbb{R}^+ \\to (0,1)$ by first showing that there exists a deterministic function $f\\in D[t_0,t_1)$ satisfying the relation\n$$f(t) = a_0 +\\sum_{(x,y) \\in \\Pi} 1_{(t_0,t]}(x) y^{\\langle-1\/\\alpha(f(x_-))\\rangle}$$\nfor a {\\it fixed} point set $\\Pi$, and then randomising to get a random function $Z$ such that\n\\begin{equation*}\nZ (t) = a_0 + \\sum_{(\\X,\\Y) \\in \\Pi} 1_{(t_0,t]}(\\X)\\, \\Y^{\\langle-1\/\\alpha(Z (\\X_-))\\rangle} \\qquad (t_0\\leq t x$ and $y'0,\\ u,v \\in \\mathbb{R} ),$$\nwhere $\\xi \\in (u,v)$. In particular this gives the estimate we will use frequently:\n\\be \n\\big| y^{-1\/\\alpha(v)} - y^{-1\/\\alpha(u)}\\big |\n\\ \\leq \\ M\\ |v- u|\\, y^{-1\/(a,b)}\\qquad (y>0, \\ u,v \\in \\mathbb{R}), \\label{ydif4}\n \\ee\nwhere\n$$\nM\\ =\\ \\sup_{\\xi\\in\\mathbb{R}} \\frac{|\\alpha'(\\xi)|}{\\alpha(\\xi)^2},\n$$\nand for convenience we write\n$$\ny^{-1\/(a,b)} \\ = \\ \\max\\big\\{ y^{-1\/a}\\big(1+|\\log y| \\big), y^{-1\/b}\\big(1+|\\log y| \\big)\\big\\}\n\\qquad (y>0)\n$$\nand \n$$ y^{-2\/(a,b)} \\ =\\ (y^{-1\/(a,b)})^2.$$ \n\nFor $t_0< t_1$ and a suitable probability space $\\Omega$ (to be specified later), we will work with functions \n$F: \\Omega\\times [t_0, t_1) \\to \\mathbb{R}\\cup \\{\\infty \\}$ which we assume to be measurable (taking Lebesgue measure on $[t_0, t_1)$). Writing $F_\\omega(t)$ for the value of $F$ at $\\omega \\in \\Omega$ and $t\\in [t_0, t_1)$, we think of $F_\\omega$ as a random function on $[t_0, t_1)$ in the natural way (most of the time we will write $F$ instead of $F_\\omega$ when the underlying randomness is clear). 
In particular we will work in the space \n$${\\mathcal D} \\ = \\ \\big\\{F: F_\\omega \\in D[t_0, t_1) \\mbox{ for almost all }\n \\omega \\in \\Omega \\mbox{ with } \\mathbb{E}\\big(\\|F\\|_\\infty^2\\big)<\\infty\\big\\},$$\n where $\\mathbb{E}$ is expectation, $D[t_0,t_1)$ denotes the c\\`{a}dl\\`{a}g functions, and $\\|\\cdot\\|_\\infty$ is the usual supremum norm. By identifying $F$ and $F'$ if $F_\\omega = F'_\\omega$ for almost all $\\omega \\in \\Omega$, this becomes a normed space under the norm\n\\be\\label{norm}\n\\Big(\\mathbb{E}\\big(\\|F\\|_\\infty^2\\big)\\Big)^{1\/2}.\n\\ee\nA routine check shows that \\eqref{norm} defines a complete norm on ${\\mathcal D}$. \n\n\n\n\\section{Point sums with random signs}\\label{sec:signs}\n\\setcounter{equation}{0}\n\\setcounter{theo}{0}\n\nIn this section we fix a discrete point set $\\Pi^+ \\subset (t_0,t_1)\\times \\mathbb{R^+}$ \nand form sums over values at the points of $\\Pi^+$ with an independent random assignment of sign $+$ or $-$ at each point of $\\Pi^+$.\n\nWe will assume that the point set $\\Pi^+$ satisfies \n$$\n \\sum_{(x,y) \\in \\Pi^+}y^{-2\/(a,b)}< \\infty;\n$$\nthis will certainly be the case if $ \\sum_{(x,y) \\in \\Pi^+}y^{-2\/b'}< \\infty$ for some $b'$ with $bn$. We list the points\n$$\\{(x,y) \\in \\Pi^+ : y\\leq m\\} = \\{(x_1,y_1), \\ldots, (x_N,y_N)\\},$$\nwith $t_0n\\geq 1$. 
Thus taking the unconditional expectation and using \\eqref{ind1} for $i-1$ gives \\eqref{ind1} for $i$.\n\n(b) If $i= i_k $ for some $1\\leq k\\leq K$, then from \\eqref{mart},\n \\begin{align*}\n\\mathbb{E}\\big((Z_m & (x_i)- Z_n(x_i))^2\\big| \\mathcal{F}_{i-1} \\big)\\\\\n&=\\ \\mathbb{E}\\Big(\\Big(Z_m(x_{i-1}) - Z_n(x_{i-1}) + S(x_{i},y_{i})\\big(y_{i}^{-1\/\\alpha (Z_m(x_{i-1}))} - y_{i}^{-1\/\\alpha (Z_n(x_{i-1}))}\\big)\\Big)^2\\Big| \\mathcal{F}_{i-1}\\Big)\\\\\n&=\\ \\big(Z_m(x_{i-1}) - Z_n(x_{i-1})\\big)^2 + \\big(y_{i}^{-1\/\\alpha (Z_m(x_{i-1}))} - y_{i}^{-1\/\\alpha (Z_n(x_{i-1}))}\\big)^2\\\\\n&\\leq \\ \\big(Z_m(x_{i-1}) - Z_n(x_{i-1})\\big)^2\\big(1 + M^2 y_{i_k}^{-2\/(a,b)}\\big)\\\\\n&\\leq \\ \\big(Z_m(x_{i-1}) - Z_n(x_{i-1})\\big)^2\\big(1 + c_k\\big),\n\\end{align*}\n using \\eqref{ydif4} and \\eqref{cki}.\nAgain, taking the unconditional expectation and using \\eqref{ind2} for $i-1$ gives \\eqref{ind1} for $i$ with a vacuous sum of terms $y_j^{-2\/b}$, completing the induction.\n\nIt follows from \\eqref{ind2} that for all $0\\leq i \\leq N$,\n\\begin{eqnarray}\n\\mathbb{E}\\big((Z_m(x_i) - Z_n(x_i))^2\\big)& \\leq & \\prod_{k=1}^K (1+c_k) \\sum_{k= 1}^{K+1} \\epsilon_k\\nonumber\\\\ \n & \\leq & \\prod_{(x,y)\\in \\Pi ^+ : y\\leq n} (1+M^2y^{-2\/(a,b)}) \\sum_{(x,y)\\in \\Pi ^+ : n n} y^{-2\/b}\\ \\to 0. \n\\ee\nMoreover, there exists a sequence $n_j \\nearrow \\infty$ such that almost surely $\\|Z_{n_j} -Z\\|_\\infty \\to 0$ i.e. $Z_{n_j} \\to Z$ uniformly. If $0n_j} y^{-2\/b}\\ < \\ 2^{-j}\n\\ee\nfor all sufficiently large $j$,\nthen by \\eqref{cauchy0} $\\mathbb{E}\\big(\\|Z_{n_{j+1}}-Z_{n_j}\\|_\\infty^2\\big)< c\\ 2^{-j}$ so almost surely\n $Z= Z_{n_1} + \\sum_{j=1}^\\infty (Z_{n_{j+1}}-Z_{n_j})$ is convergent in $\\|\\cdot\\|_\\infty$. 
If $00$,\n\\be\\label{Obound}\n\\sum_{(x,y)\\in \\Pi^+\\, :\\, t< x\\leq t+h} y^{-2\/\\alpha(Z(t))} \\ = \\ O(h^\\beta) \\qquad (0n} |\\Y|^{-2\/b}\\ <\\ C\\, n^{-\\eta}\\qquad (n \\in \\mathbb{N}).\n\\ee\n\\end{lem}\n\\begin{proof}\nBy Theorem \\ref{camp},\n$$\n\\mathbb{E}\\Big(\\sum_{(\\X,\\Y) \\in \\Pi: |\\Y|>n} |\\Y|^{-2\/b}\\Big)\\ = \\ \\int_n^\\infty y^{-2\/b}\n\\ \\leq \\ \\frac{b}{2-b}n^{1-2\/b}.$$\nA Borel-Cantelli argument summing over $n=2^{k}$ completes the proof.\n\\end{proof}\n\nWe can now extend Theorem \\ref{finlim2} to sums over a Poisson point process. \n\n\n\\begin{theo}\\label{thmrand2}\nLet $\\Pi \\subset (t_0,t_1)\\times \\mathbb{R}$ \nbe a Poisson point process with mean measure ${\\mathcal L}^2$, let $\\alpha :\\mathbb{R} \\to [a,b]$ where $00$ such that $|\\Y|\\geq y_0$ if $(\\X,\\Y) \\in \\Pi$, which ensures that the right hand side of \\eqref{diffbound1} below converges. In practice this is a realistic assumption in that it excludes the possibility of $Z$ having unboundedly large jumps.\n\n\n\\begin{theo}\\label{thmrand3}\nLet $y_0>0$ and let $\\Pi $ be a Poisson point process on $ (t_0,t_1)\\times (-\\infty, -y_0] \\cup [y_0,\\infty)$ \n with mean measure ${\\mathcal L}^2$ restricted to this domain. Let $a_0\\in\\mathbb{R}$, let $0 n} |\\Y|^{-2\/b}\\bigg)\\\\\n&=\\ \n4\\,\\mathbb{E}\\bigg( \\exp\\Big( \\sum_{(\\X,\\Y) \\in \\Pi ^+_2: y_0\\leq \\Y\\leq n } \\log(1+M^2\\Y^{-2\/(a,b)})\\Big)\\bigg)\n\\mathbb{E}\\bigg( \\sum_{(\\X,\\Y)\\in \\Pi ^+_2 : \\Y> n} |\\Y|^{-2\/b}\\bigg)\\\\\n&=\\ \n4\\,\\exp\\bigg(\\int_{y_0}^n 2(t_1-t_0)\\ M^2 y^{-2\/(a,b)} dy\\bigg) \n(t_1-t_0) \\int_n^\\infty 2y^{-2\/b} dy.\n \\end{align*}\nLetting $n\\to \\infty$ in the first integral and evaluating the second integral gives \\eqref{diffbound1}.\n\\end{proof}\n\nWe remark that Theorem \\ref{thmrand3} allows us to quantify the rate of convergence in probability of $\\|Z_n -Z\\|_\\infty\\to 0$ in Theorem \\ref{thmrand2}. 
By the Poisson distribution $\\P\\{\\Pi \\cap ((t_0,t_1)\\times [0,y_0] )= \\emptyset\\} = \\exp(-y_0(t_1-t_0))$. Given $\\epsilon >0$ we can choose $y_0$ to make this probability at most $\\epsilon\/2$, then using \\eqref{diffbound1} and Markov's inequality it follows that if $n$ is sufficiently large then $\\P\\{ \\|Z_n -Z\\|_\\infty>\\epsilon\\} < \\epsilon$. In practice, this leads to an enormous value of $n$. \n\n\\subsection{Local properties and self-stabilising processes}\n\nWe next obtain local properties of the random functions defined by a Poisson point process as in Theorem \\ref{thmrand2}. Not only are the sample paths right-continuous, but they satisfy a local H\\\"{o}lder continuity estimate and are self-stabilizing, that is locally they look like $\\alpha$-stable processes. \n\nWe will use a bound provided by the $\\alpha$-{\\it stable subordinator} which may be defined for each (constant) $0<\\alpha<1$ by\n$$\nS_\\alpha (t) := \\sum_{(\\X,\\Y) \\in \\Pi} 1_{(t_0 ,t]}(\\X)\\, |\\Y|^{-1\/\\alpha}\\qquad (t_0 \\leq t }\n\\end{equation}\nwhere $\\Pi$ is a Poisson point process on $(t_1,t_2) \\times \\mathbb{R}$ with mean measure ${\\mathcal L}^2$.\nThis sum is almost surely absolutely convergent if $0<\\alpha<1$ but for general $0<\\alpha<2$\nit is the limit as $n \\to \\infty$ of\n$$L^0_{\\alpha,n}(t) =\\sum_{(\\X,\\Y) \\in \\Pi:|\\Y| \\leq n} 1_{(0,t]}(\\X) \\Y^{<-1\/\\alpha>}.$$ \nWhilst $\\mathbb{E}\\big(\\|L^0_{\\alpha,n} -L^0_\\alpha\\|_\\infty^2\\big)\\to 0$ as a special case of Theorem \\ref{thmrand2}, the constant value of $\\alpha$ means that $L^0_{\\alpha,n}$ and $L^0_{\\alpha,m}-L^0_{\\alpha,n}$ are independent for $m>n$ and also that $\\{L^0_{\\alpha,n}\\}_n$ is a martingale, which ensures that $\\|L^0_{\\alpha,n} -L^0_\\alpha\\|_\\infty\\to 0$ almost surely. 
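Returning to the convergence-rate remark above: the size of $n$ forced by \eqref{diffbound1} together with Markov's inequality can be made concrete. All numerical values below ($a$, $b$, $M$, $y_0$, $t_1-t_0$, $\epsilon$) are assumptions chosen purely for illustration, and the integral is truncated numerically.

```python
import numpy as np

# illustrative (assumed) parameters: none of these come from the paper
a, b, M = 0.5, 0.9, 1.0          # alpha takes values in [a,b]; M is the Lipschitz constant
t_len, y0, eps = 1.0, 1.0, 0.1   # t_1 - t_0, jump cut-off y_0, target accuracy

# integral of 2(t_1-t_0) M^2 y^{-2/(a,b)} dy over [y_0, inf); for y >= 1 the
# max defining y^{-2/(a,b)} is attained by the b-term y^{-2/b}(1 + log y)^2
y = np.logspace(np.log10(y0), 4, 200_001)            # truncate the range at 10^4
f = 2 * t_len * M**2 * y ** (-2 / b) * (1 + np.log(y)) ** 2
exp_factor = np.exp(np.sum((f[1:] + f[:-1]) * np.diff(y)) / 2)   # trapezoid rule

# E||Z_n - Z||^2 <= 4 * exp_factor * t_len * (2b/(2-b)) * n^(1-2/b) =: C n^(1-2/b)
C = 4 * exp_factor * t_len * 2 * b / (2 - b)
# Markov: P(||Z_n - Z|| > eps) <= C n^(1-2/b) / eps^2 <= eps  solves to
n_required = (C / eps**3) ** (b / (2 - b))
print(f"{n_required:.3g}")   # several hundred thousand for these parameters
```

Even for these mild parameter choices the exponential factor is already in the hundreds, which is why the remark calls the resulting $n$ enormous.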
\n\nIn the same way as for $Z_n$, we can think of $L^0_{\\alpha,n}(t)$ in terms of $\\Pi^+_2$ and random signs, so that \n$$L^0_{\\alpha,n}(t) =\\sum_{(\\X,\\Y) \\in \\Pi^+_2:\\Y \\leq n} 1_{(0,t]}(\\X) S(\\X,\\Y)\\Y^{<-1\/\\alpha>}.$$ \n\n\nThe following proposition is the analogue of Proposition \\ref{cty1} in this context.\n\n\\begin{prop}\\label{cty3}\nLet $Z$ be the random function given by Theorem \\ref{thmrand2} and let $t\\in [t_0,t_1)$. Then, conditional on ${\\mathcal F}_{t}$, given $0< \\epsilon< 1\/b$ there exist almost surely random numbers $C_1, C_2 <\\infty$ such that for all $0\\leq h 0$. Let $\\Pi^+_2$ be a Poisson point process on $\\mathbb{R}$ with mean measure $2{\\mathcal L}$.\nFrom \\eqref{sub2} \n$$\n\\sum_{(X,Y)\\in \\Pi^+_2\\, :\\, t< x\\leq t+h} \\Y^{-2\/\\alpha(Z(t))}\n\\ =\\ 2S_{\\alpha(Z(t))\/2}(h)\\ \\leq\\ C h^{2\/\\alpha(Z(t)) -\\epsilon}\n$$\nwhere $C<\\infty$ for almost all realisations of $\\Pi^+_2$.\nFor such $\\Pi^+_2$, Proposition \\ref{cty1} gives, on randomising the signs,\n$$\\big|Z(t+h) -Z(t)|\\Pi^+_2\\big|\\ \\leq\\ C_1 h^{1\/\\alpha(Z(t))-\\epsilon\/2-\\epsilon\/2}$$\nfor some random $C_1$, \nand hence \\eqref{holas3} holds almost surely.\n\nIn the same way,\n$$\\sum_{(X,Y)\\in \\Pi^+_2\\, :\\, t< x\\leq t+h} \\Y^{-2\/(a,b)}\\ \\leq\\ C' (h^{-1})^{-2\/(a,b) -\\epsilon\/2} \\ \\leq\\ C^{\\prime\\prime}h^{2\/b -\\epsilon} \n$$\nwhere $C', C^{\\prime\\prime}< \\infty$ for almost all realisations of $\\Pi^+_2$.\nThen in Proposition \\ref{cty1}, $L(t+h) = L^0_{\\alpha(t)}(t+h)-L^0_{\\alpha(t)}(t)$, so for such $\\Pi^+_2$, on randomising the signs,\n$$\\Big|\\big(Z(t+h)-Z(t)\\big) -\\big(L^0_{\\alpha(t)}(t+h)-L^0_{\\alpha(t)}(t)\\big)\\big|\\Pi^+_2\\Big| \\ \\leq\\ C_2 h^{1\/\\alpha(Z(t))+ 1\/b -\\,\\epsilon}$$\nfor random $C_2<\\infty$, so \\eqref{locas3} holds almost surely.\n\\end{proof}\n\n\nWe finally show that almost surely at each $t\\in [t_0,t_1)$ the random function $Z$ of Theorem \\ref{thmrand2} is right-localisable with 
local form an $\\alpha(Z(t))$-stable process, so that $Z$ may indeed be thought of as self-stabilizing. \n\n\\begin{theo}\\label{rtloc}\nLet $Z$ be the random function given by Theorem \\ref{thmrand2} and let $t\\in [t_0,t_1)$. Then, conditional on ${\\mathcal F}_{t}$, almost surely $Z$ is strongly right-localisable at $t$, in the sense that\n$$\\frac{Z(t+ru) -Z(t)}{r^{1\/\\alpha(Z(t))}}\\bigg|\\, \\mathcal{F}_t \\ \\tod \\ L^0_{\\alpha(Z(t))}(u) \\qquad (0\\leq u \\leq 1)\n$$\nas $r\\searrow 0$, where convergence is in distribution with respect to $(D[0,1], \\rho_S)$, where $\\rho_S$ is the Skorohod metric.\n\\end{theo}\n\n\n\\begin{proof}\nLet $0<\\epsilon< 1\/b$. For $u\\in [0,1]$ and $0 W \\log_{2}\\left(1+\\gamma \\beta\\right)\\}\n\\end{equation}\n\\noindent where $W$ is the bandwidth of the channel, $\\gamma$ is the received SNR when the channel gain is equal to unity, and $\\beta$ is the channel gain, which is exponentially distributed in the case of Rayleigh fading.\n\nThe outage probability can be written as\n\\begin{equation}\nP_{{\\rm out},is}={\\rm Pr}\\Big\\{\\beta<\\frac{2^{\\frac{r_i}{W}}-1}{\\gamma}\\Big\\}\n\\end{equation}\n\\noindent Assuming that the mean value of $\\beta$ is $\\overline{\\beta}$,\n\\begin{equation}\nP_{{\\rm out},is}=1-\\exp\\bigg(-\\frac{2^{\\frac{r_i}{W}}-1}{\\gamma\\overline{\\beta}}\\bigg)\n\\end{equation}\n\\noindent Let $\\overline{P}_{{\\rm out},is}=1-P_{{\\rm out},is}$ be the probability of correct reception. 
It is therefore given by\n\\begin{equation}\\label{correctreception}\n\\overline{P}_{{\\rm out},is}=\\exp\\bigg(-\\frac{2^{\\frac{b}{TW\\left(1-i\\frac{\\tau}{T}\\right)}}-1}{\\gamma\\overline{\\beta}}\\bigg)\n\\end{equation}\n\\noindent The probability $\\overline{P}_{{\\rm out},p}$ for the PU is given by a similar expression with $i=0$, since the PU transmits starting from the beginning of the time slot, and with the relevant primary link parameters.\n\n\\begin{figure}\n \n \\includegraphics[width=1\\columnwidth]{1}\\\\\n \\caption{Maximum secondary service rate for the parameters: $\\lambda_e=0.8$, $\\overline{P}_{{\\rm out},p}=0.7$, ${\\overline{P}}_{{\\rm out},p}^{\\left({\\rm c}\\right)}=0.14$, $\\overline{P}_{{\\rm out},0s}=0.6065$, ${\\overline{P}}_{{\\rm out},0s}^{\\left({\\rm c}\\right)}=0.1820$, $\\delta=0.9782$, $\\delta^{\\rm c}=0.8$, $P_{\\rm FA}=0.1$, $P_{\\rm MD}=0.08$, and $\\overline{D_p}=2$ time slots.}\\label{r1}\n\\end{figure}\n\\begin{figure}\n \n \\includegraphics[width=1\\columnwidth]{5}\\\\\n \\caption{Optimal values of the sensing and access probabilities for the feedback-based system depicted in Fig.\\ \\ref{r1}.}\\label{r5}\n\\end{figure}\n\n\\begin{figure}\n \n \\includegraphics[width=1\\columnwidth]{9}\\\\\n \\caption{Maximum secondary service rate for the parameters: $\\lambda_e=0.8$, $\\overline{P}_{{\\rm out},p}=0.7$, ${\\overline{P}}_{{\\rm out},p}^{\\left({\\rm c}\\right)}=0.14$, $\\overline{P}_{{\\rm out},0s}=0.6065$, ${\\overline{P}}_{{\\rm out},0s}^{\\left({\\rm c}\\right)}=0.1820$, $\\delta=0.9782$, $\\delta^{\\rm c}=0.8$, $P_{\\rm FA}=0.1$, $P_{\\rm MD}=0.08$, and $\\overline{D_p}=200$ time slots.}\\label{r9}\n\\end{figure}\n\n\\begin{figure}\n \\includegraphics[width=1\\columnwidth]{delay_fig}\\\\\n \\caption{Maximum secondary service rate for different values of the primary packet delay constraint. 
Parameters used to generate the figure are: $\\lambda_e=0.8$, $\\overline{P}_{{\\rm out},p}=0.7$, ${\\overline{P}}_{{\\rm out},p}^{\\left({\\rm c}\\right)}=0.14$, $\\overline{P}_{{\\rm out},0s}=0.6065$, ${\\overline{P}}_{{\\rm out},0s}^{\\left({\\rm c}\\right)}=0.1820$, $\\delta=0.9782$, $\\delta^{\\rm c}=0.8$, $P_{\\rm FA}=0.1$, and $P_{\\rm MD}=0.08$.}\\label{delay_fig}\n\\end{figure}\n\n\\begin{figure}\n \n \\includegraphics[width=1\\columnwidth]{beta_curves2}\\\\\n \\caption{Impact of the energy arrival rate $\\lambda_e$ on the maximum secondary service rate for the parameters: $\\overline{P}_{{\\rm out},p}=0.7$, ${\\overline{P}}_{{\\rm out},p}^{\\left({\\rm c}\\right)}=0.14$, $\\overline{P}_{{\\rm out},0s}=0.6065$, ${\\overline{P}}_{{\\rm out},0s}^{\\left({\\rm c}\\right)}=0.1820$, $\\delta=0.9782$, $\\delta^{\\rm c}=0.8$, $P_{\\rm FA}=0.1$, $P_{\\rm MD}=0.08$, and $\\overline{D_p}=2$ time slots.}\\label{rx}\n\\end{figure}\n\n\\begin{figure}\n \n \\includegraphics[width=1.09\\columnwidth]{MPR_versus_stability}\\\\\n \\caption{Impact of the MPR capability on the maximum secondary service rate for the parameters: $\\lambda_e=0.5$, $\\overline{P}_{{\\rm out},p}=0.7$, $\\overline{P}_{{\\rm out},0s}=0.6065$, $\\delta=0.9782$, $\\delta^{\\rm c}=0.8$, $P_{\\rm FA}=0.1$, $P_{\\rm MD}=0.08$, and $\\overline{D_p}=2$ time slots.}\\label{ry}\n\\end{figure}\n\n\\section*{Appendix B}\nWe derive here a generic expression for the outage probability at the receiver of link $j$ when there is concurrent transmission from the transmitter of link $v$. 
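Both the interference-free success probability (\\ref{correctreception}) and the concurrent-transmission expression derived below lend themselves to a quick Monte Carlo cross-check. The following sketch is illustrative only: the parameter values ($W$, $r_i$, the SNRs, and the mean channel gains) are arbitrary assumptions, not the values used in the figures.

```python
# Monte Carlo sanity check of the Rayleigh-fading outage expressions.
# All parameter values below are illustrative assumptions.
import math
import random

random.seed(0)

W = 1.0        # channel bandwidth
r_i = 0.8      # transmission rate
gamma_j = 2.0  # received SNR of link j at unit channel gain
gamma_v = 1.0  # received SNR from the interferer at unit channel gain
beta_j_bar = 1.5  # mean gain of the direct channel
beta_v_bar = 0.5  # mean gain of the interference channel

theta = 2 ** (r_i / W) - 1  # SNR/SIR threshold below which outage occurs

# Closed forms: success probability without interference, and with a
# concurrent transmitter (the ratio form derived in this appendix).
p_succ = math.exp(-theta / (gamma_j * beta_j_bar))
p_succ_c = p_succ / (1 + theta * gamma_v * beta_v_bar / (gamma_j * beta_j_bar))

# Empirical estimates: beta_j and beta_v are exponential with the given means.
n = 200_000
succ = succ_c = 0
for _ in range(n):
    bj = random.expovariate(1.0 / beta_j_bar)
    bv = random.expovariate(1.0 / beta_v_bar)
    succ += gamma_j * bj >= theta                         # no interference
    succ_c += gamma_j * bj / (gamma_v * bv + 1) >= theta  # concurrent transmission

print(f"no interference: closed {p_succ:.4f}, MC {succ / n:.4f}")
print(f"interference:    closed {p_succ_c:.4f}, MC {succ_c / n:.4f}")
```

As expected, the empirical frequencies agree with the closed forms to within sampling error, and the success probability is strictly lower when the interferer is present.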
Outage occurs when the transmission rate $r_i$, given by (\\ref{r_i}), exceeds the channel capacity\n\\begin{equation}\nP_{{\\rm out},ij}^{\\left({\\rm c}\\right)}={\\rm Pr}\\biggl\\{r_i > W \\log_{2}\\left(1+\\frac{\\gamma_{j} \\beta_{j}}{\\gamma_{v} \\beta_{v}+1}\\right)\\biggr\\}\n\\end{equation}\n\\noindent where the superscript c denotes concurrent transmission, $\\gamma_j$ is the received SNR at the receiver of link $j$ without interference when the channel gain $\\beta_j$ is equal to unity, and $\\gamma_v$ is the received SNR at the receiver of link $j$ when it only receives a signal from the interfering transmitter of link $v$ given that the gain of the interference channel, $\\beta_v$, is equal to unity.\nThe outage probability can be written as\n\\begin{equation}\\label{1900}\nP_{{\\rm out},ij}^{\\left({\\rm c}\\right)}={\\rm Pr}\\Big\\{\\frac{\\gamma_{j} \\beta_{j}}{\\gamma_{v} \\beta_{v}+1}<{2^{\\frac{r_i}{W}}-1}\\Big\\}\n\\end{equation}\n\\noindent Assuming that $\\beta_j$ and $\\beta_v$ are exponentially distributed with means $\\overline{\\beta_{j}}$ and $\\overline{\\beta_{v}}$, respectively, we can use the probability density functions of these two random variables to obtain the outage as\n\\begin{equation}\\label{193}\nP_{{\\rm out},ij}^{\\left({\\rm c}\\right)}=1-\\frac{1}{1+\\Big( {2^{\\frac{r_i}{W}}-1} \\Big)\\frac{\\gamma_v\\overline{\\beta_v}}{ \\gamma_j\\overline{\\beta_j}}} {e^{-\\frac{{2^{\\frac{r_i}{W}}-1}}{\\gamma_j \\overline{\\beta_j}}}}\n\\end{equation}\n\\noindent The probability of correct reception $\\overline{P}^{\\left({\\rm c}\\right)}_{{\\rm out},ij}=1-P^{\\left({\\rm c}\\right)}_{{\\rm out},ij}$ is thus given by\n\\begin{equation}\\label{conctra}\n\\overline{P}_{{\\rm out},ij}^{\\left({\\rm c}\\right)}=\\frac{\\overline{P}_{{\\rm out},ij}}{1+\\Big({2^{\\frac{b}{TW\\left(1-\\frac{i\\tau}{T}\\right)}}-1} \\Big)\\frac{\\gamma_v\\overline{ \\beta_v}}{\\gamma_j \\overline{\\beta_j}}}\n\\end{equation}\n\\noindent As is 
obvious, the probability of correct reception is lowered in the case of interference.\n\\section*{Appendix C}\nWe prove here that $\\overline{P}_{{\\rm out},0j} > \\overline{P}_{{\\rm out},1j}$. As a function of $\\tau$, $\\overline{P}_{{\\rm out},1j}$ is given by (\\ref{correctreception}) with $i=1$. Let $x=1-\\frac{\\tau}{T}$ where\n$x \\in [0,1]$. Assuming that the energy unit used per slot is $e$, the transmit power is $\\frac{e}{T-\\tau}=\\frac{e}{Tx}$. This means that the received SNR $\\gamma$ is inversely proportional to $x$. The exponent in (\\ref{correctreception}) with $i=1$ is thus proportional to $g(x)=x \\left(\\exp\\left(\\frac{a}{x}\\right)-1\\right)$, where $a =\\frac{b \\ln 2} {WT} > 0$. Differentiating $g(x)$ with respect to $x$, the derivative is\n\\begin{equation}\n\\begin{split}\n-1+\\Big[1-\\frac{a}{x}\\Big] \\exp\\Big(\\frac{a}{x}\\Big)&=-1+\\Big[1-\\frac{a}{x}\\Big]\\sum_{k=0}^{\\infty}\\frac{1}{k!}\\Big(\\frac{a}{x}\\Big)^{k}\\\\&=\n-1+\\!\\sum_{k=0}^{\\infty}\\frac{1}{k!}\\Big(\\frac{a}{x}\\Big)^{k}\\!-\\!\\sum_{k=0}^{\\infty}\\frac{1}{k!}\\Big(\\frac{a}{x}\\Big)^{k+1}\\\\&=\n\\sum_{k=1}^{\\infty}\\frac{1}{k!}\\Big(\\frac{a}{x}\\Big)^{k}-\\sum_{k=1}^{\\infty}\\frac{1}{(k-1)!}\\Big(\\frac{a}{x}\\Big)^{k}\\\\&=\n\\sum_{k=1}^{\\infty}\\bigg[\\frac{1}{k!}-\\frac{1}{(k-1)!}\\bigg]\\Big(\\frac{a}{x}\\Big)^{k}<0\n\\end{split}\n\\end{equation}\nThe $k=1$ term vanishes and every term with $k \\geq 2$ is strictly negative, so the derivative is always negative. Since $x=1-\\frac{\\tau}{T}$, the function $g(x)$ increases with $\\tau$. This means that $\\overline{P}_{{\\rm out},1j}$ decreases with $\\tau$ and its maximum value occurs when the transmission starts at the beginning of the time slot, i.e., $\\tau=0$. This proves that $\\overline{P}_{{\\rm out},0j} > \\overline{P}_{{\\rm out},1j}$ for $\\tau > 0$. For the case of concurrent transmission given in (\\ref{conctra}), the numerator is $\\overline{P}_{{\\rm out},1j}$, which decreases with $\\tau$. 
The denominator is $1$ plus a positive multiple of $g(x)$, which has been shown to decrease with $x$ and, hence, increase with $\\tau$. This means that $\\overline{P}^{\\left({\\rm c}\\right)}_{{\\rm out},1j}$ decreases with $\\tau$, i.e., $\\overline{P}^{\\left({\\rm c}\\right)}_{{\\rm out},0j} > \\overline{P}^{\\left({\\rm c}\\right)}_{{\\rm out},1j}$.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}