diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfaiq" "b/data_all_eng_slimpj/shuffled/split2/finalzzfaiq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfaiq" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\\emph{This Game Is Not Going To Load Itself} (TGINGTLI{}) \\cite{tgingtli} is a free game created in 2015 by Roger ``atiaxi'' Ostrander for the Loading Screen Jam, a game jam hosted on \\protect\\url{itch.io},\nwhere it finished $7$th overall out of $46$ entries.\nThis game jam was a celebration of the expiration of US Patent 5,718,632 \\cite{US5718632}, which covered the act of including mini-games during video game loading screens.\nIn this spirit, TGINGTLI{} is a real-time puzzle game themed around the player helping a game load three different resources of itself --- save data, gameplay, and music, colored red, green, and blue --- by placing arrows on the grid cells to route data entering the grid to a corresponding sink cell.\nFigure~\\ref{fig:realplay} shows an example play-through.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.63]{figs\/realplay_input}\\hfill\n \\includegraphics[scale=0.63]{figs\/realplay}\n \\caption{%\n Left: The (eventual) input for a real-world Level 16 in TGINGTLI{}.\n Right: A successful play-through that\n routes every packet to its corresponding sink.}\n \n \\label{fig:realplay}\n\\end{figure}\n\nWe formalize TGINGTLI{} as follows.\nYou are given an $m \\times n$ grid where each unit-square cell is either empty,\ncontains a data sink, or contains an arrow pointing\nin one of the four cardinal directions.\n(In the implemented game, $m=n=12$ and no arrows are placed initially.)\nEach data sink and arrow has a color (resource) of red, green, or blue;\nand there is exactly one data sink of each color in the grid.\nIn the online version (as implemented), sources appear throughout the game;\nin the offline version considered here, all sources are 
known a priori.\nNote that an outer edge of the grid may have multiple sources of different colors.\nFinally, there is a loading bar that starts at an integer $k_0$\nand has a goal integer~$k^*$.\n\nDuring the game,\neach source periodically produces data packets of its color,\nwhich travel at a constant speed into the grid.\nIf a packet enters the cell of an arrow of the same color,\nthen the packet will turn in that direction.\n(Arrows of other colors are ignored.)\nIf a packet reaches the sink of its color,\nthen the packet disappears and the loading bar increases by one unit of data.\nIf a packet reaches a sink of the wrong color, or exits the grid entirely,\nthen the packet disappears and the loading bar decreases by one unit of data, referred to as taking damage.\nPackets may also remain in the grid indefinitely\nby going around a cycle of arrows;\nthis does not increase or decrease the loading bar.\nThe player may at any time permanently fill an empty cell with an arrow, which may be of any color and may point in any of the four directions.\nIf the loading bar hits the target amount $k^*$, then the player wins; but if the loading bar goes below zero, then the player loses.\n\nIn Section~\ref{sec:NP-hardness},\nwe prove NP-hardness of the {TGINGTLI} decision problem:\ngiven a description of the grid\n(including sources, sinks, and preplaced arrows),\ncan the player place arrows to win?\nThis reduction works even for just six sources and three colors;\nit introduces a new problem, \defn{3DSAT}, where variables have three\ndifferent colors and each clause mixes variables of all three colors.\nIn Section~\ref{sec:sigma-2}, we introduce more detailed models for the\nperiodic behavior of sources, and show that many sources of differing periods\nenable both NP- and coNP-hardness of winning the game,\n\emph{even without player input} (just simulating the game).\nOn the positive side, we prove that these problems are in $\Sigma_2^P$;\nand in NP when the source 
periods are all equal,\nas in our first NP-hardness proof, so this case is in fact NP-complete.\n\nIn Section~\ref{sec:perfect-layouts}, we consider how levels start in\nthe implemented game: a grid with placed sinks but no preplaced arrows.\nWe give a full characterization of when there is a \defn{perfect layout}\nof arrows, where all packets are guaranteed to route to the correct sink,\n\emph{no matter where sources get placed}.\nIn particular, this result provides a winning strategy for most\nsink arrangements in the implemented game.\nNotably, because this solution works independently of the sources,\nit works in the online setting.\n\n\section{NP-Hardness for Three Colors and Six Sources}\n\label{sec:NP-hardness}\n\nWe first prove that TGINGTLI{} is NP-hard by reducing from a new problem called\n\defn{3-Dimensional SAT (3DSAT)},\ndefined by analogy to 3-Dimensional Matching (3DM).\n3DSAT is a variation of 3SAT where, in addition to a 3CNF formula,\nthe input assigns one of three colors (red, green, or blue) to each variable\nof the CNF formula, and the CNF formula is constrained to have trichromatic\nclauses, i.e., to have exactly one variable (possibly negated) of each color.\n\n\begin{lemma}\n 3DSAT is NP-complete.\n\end{lemma}\n\n\begin{proof}\n We reduce from 3SAT to 3DSAT by converting a 3CNF formula $F$\n into a 3D CNF formula~$F'$.\n For each variable $x$ of~$F$,\n we create three variables $x^{(1)}, x^{(2)}, x^{(3)}$ in $F'$\n (intended to be equal copies of $x$ of the three different colors)\n and add six clauses to $F'$ to force $x^{(1)} = x^{(2)} = x^{(3)}$:\n %\n \def\halfup#1{\raisebox{2.5ex}{\smash{$#1$}}}\n \begin{align*}\n \lnot x^{(1)} \lor x^{(2)} \lor x^{(3)} &\iff (x^{(1)} \to x^{(2)}) \lor x^{(3)} \\\n \lnot x^{(1)} \lor 
x^{(2)} \\lor \\lnot x^{(3)} &\\iff (x^{(1)} \\to x^{(2)}) \\lor \\lnot x^{(3)} ~\\halfup{\\Bigg\\rbrace \\iff x^{(1)} \\to x^{(2)}} \\\\\n x^{(1)} \\lor \\lnot x^{(2)} \\lor x^{(3)} &\\iff (x^{(2)} \\to x^{(3)}) \\lor x^{(1)} \\\\\n \\lnot x^{(1)} \\lor \\lnot x^{(2)} \\lor x^{(3)} &\\iff (x^{(2)} \\to x^{(3)}) \\lor \\lnot x^{(1)} ~\\halfup{\\Bigg\\rbrace \\iff x^{(2)} \\to x^{(3)}} \\\\\n x^{(1)} \\lor x^{(2)} \\lor \\lnot x^{(3)} &\\iff (x^{(3)} \\to x^{(1)}) \\lor x^{(2)}\\\\\n x^{(1)} \\lor \\lnot x^{(2)} \\lor \\lnot x^{(3)} &\\iff (x^{(3)} \\to x^{(1)}) \\lor \\lnot x^{(2)} ~\\halfup{\\Bigg\\rbrace \\iff x^{(3)} \\to x^{(1)}}\n \\end{align*}\n %\n Thus the clauses on the left are equivalent to the implication loop\n $x^{(1)} \\implies x^{(2)} \\implies x^{(3)} \\implies x^{(1)}$,\n which is equivalent to $x^{(1)} = x^{(2)} = x^{(3)}$.\n \n For each clause $c$ of $F$ using variables $x$ in the first literal,\n $y$ in the second literal, and $z$ in the third literal,\n we create a corresponding clause $c'$ in $F'$ using\n $x^{(1)}$, $y^{(2)}$, and $z^{(3)}$ (with the same negations as in~$c$).\n All clauses in $F'$ (including the variable duplication clauses above)\n thus use a variable of the form $x^{(i)}$ in the\n $i$th literal for $i \\in \\{1,2,3\\}$,\n so we can 3-color the variables accordingly.\n\\end{proof}\n\n\\begin{theorem} \\label{thm:NP-hard36}\n {TGINGTLI} is NP-hard, even with three colors and six sources\n of equal period.\n\\end{theorem}\n\n\\begin{proof}\n Our reduction is from 3DSAT.\n Figure~\\ref{fig:reduction-sketch} gives a high-level sketch:\n each variable gadget has two possible routes\n for a packet stream of the corresponding color,\n and each clause gadget allows at most two colors of packets\n to successfully route through.\n When a clause is satisfied by at least one variable,\n the clause gadget allows the other variables to successfully pass\n through; otherwise, at least one of the packet streams enters a cycle.\n Variables 
of the same color are chained together\n to re-use the packet stream of that color.\n\n \\begin{figure}\n \\centering\n \\subcaptionbox{Satisfied clause}{\\includegraphics[scale=0.5]{figs\/reduction-sketch-sat}}\\hfil\n \\subcaptionbox{Unsatisfied clause}{\\includegraphics[scale=0.5]{figs\/reduction-sketch-unsat}}\n \\caption{Sketch of our NP-hardness reduction.}\n \\label{fig:reduction-sketch}\n \\end{figure}\n \n In detail, most cells of the game board\n will be prefilled, leaving only a few empty cells\n (denoted by question marks in our figures) that the player can fill.\n \n \n \n\n For each color, say red, we place a red source gadget\n on the left edge of the construction.\n Then, for each red variable $x$ in sequence,\n we place a variable gadget of Figure~\\ref{fig:gadget:variable}\n at the end of the red stream.\n To prevent the packets from entering a loop,\n the player must choose between sending the stream upward or downward,\n which results in it following one of the two rightward paths,\n representing the literals $x$ and $\\overline x$ respectively.\n The path followed by the packet stream is viewed as \\emph{false};\n an empty path is viewed as \\emph{true}.\n\n\\begin{figure}[t]\n \\centering\n \\begin{minipage}[b]{0.51\\linewidth}\n \\centering\n \\includegraphics[scale=0.9]{figs\/clause.pdf}\n \\caption{Clause gadget. 
At most two streams of data, representing false literals, can pass through the gadget (by placing upward arrows in the ``?'' cells) without entering a cycle.\n Placing an arrow of any other direction also puts that stream into a cycle.}\n \label{fig:gadget:clause}\n \n \end{minipage}\hfill\n \begin{minipage}[b]{0.22\linewidth}\n \centering\n \includegraphics[scale=0.9]{figs\/variable.pdf}\n \caption{\centering Variable gadget lets the player route packets from a source to one of two literal paths.}\n \label{fig:gadget:variable}\n \end{minipage}\hfill\n \begin{minipage}[b]{0.22\linewidth}\n \centering\n \includegraphics[scale=0.9]{figs\/merge.pdf}\n \caption{\centering Merge gadget combines two literal paths back into one.\linebreak~}\n \label{fig:gadget:merge}\n \end{minipage}\n \n \bigskip\n \n \begin{minipage}[b]{0.5\linewidth}\n \centering\n \includegraphics[scale=0.9]{figs\/crossover.pdf}\n \caption{Crossover gadget between two literal paths of the same or different colors. 
(The center cell is colored differently from both paths.)}\n \label{fig:gadget:crossover}\n \end{minipage}\hfill\n \n \n \n \n \n \n \begin{minipage}[b]{0.46\linewidth}\n \includegraphics[scale=0.9]{figs\/death.pdf}\n \caption{Damage gadget forces damage at a unit rate after a desired start delay.\hspace{\fill}\linebreak~}\n \label{fig:gadget:damage}\n \end{minipage}\n\end{figure}\n\n Then we route each literal path to sequentially visit\n every clause containing it.\n Figure~\ref{fig:gadget:crossover} shows a crossover gadget\n to enable such routing.\n (Note that it still works if both paths are the same color.)\n\n Figure~\ref{fig:gadget:clause} shows the clause gadget,\n which allows at most two packet streams to pass through it.\n If all three literals are false, then at least\n one stream of data must be placed into a cycle.\n On the other hand, if the clause is satisfied, then the literal paths\n carrying data can pass their data on to the next clause, and so on.\n Note that the length of the diverted path through the clause\n is the same for all three colors.\n\n After all uses of the red variable \(x\),\n we lengthen the two red literal paths\n corresponding to \(x\) and \(\overline{x}\) to have the same length,\n then combine them back together\n using the merge gadget of Figure~\ref{fig:gadget:merge}.\n We then route this red path into the variable gadget\n for the next red variable, and so on.\n Finally, after all red variables, we connect the red stream\n to the red sink.\n\n We lengthen the red, green, and blue streams\n to all have the same length~$\ell$.\n If the player successfully satisfies all clauses, then\n they will increase the loading bar by $3$ units ($1$~per color)\n after an initial delay of~$\ell$.\n We set the parameters so that the player wins in this case:\n $k^* - k_0 = 3$.\n Otherwise, the loading rate is at most~$2$.\n To ensure that the player loses in this case, we add\n $3$ damage gadgets of 
Figure~\ref{fig:gadget:damage},\n each incurring damage at a rate of $1$ after an initial delay of $\ell+1$.\n Thus we obtain a net change of $-1$ per period,\n so the player eventually loses even if $k_0$ is large.\n\end{proof}\n\n\nThis NP-hardness result does not need a very specific model of sources and\nhow they emit packets.\nTo understand whether the problem is in NP, we need a more specific model,\nwhich is addressed in the next section.\n\n\n\section{Membership in $\Sigma_2^P$ and Hardness from Source Periodicity}\n\label{sec:sigma-2}\n\nIn this section, we consider the effect of potentially differing periods\nfor different sources emitting packets.\nSpecifically, we show that carefully setting periods together with\nthe unbounded length of the game results in both NP- and coNP-hardness\nof determining the outcome of {TGINGTLI},\n\emph{even when the player is not making moves}.\nConversely, we prove that the problem is in~$\Sigma_2^P$,\neven allowing player input.\n\n\subsection{Model and Problems}\n\nMore precisely, we model each source $s$ as emitting data packets of its color\ninto the grid with its own period $p_s$, after a small warmup time $w_s$\nduring which the source may emit a more specific pattern of packets.\nIn {TGINGTLI} as implemented, each source upon creation initially waits 5 seconds\nbefore emitting its first packet,\nthen emits a packet after 2 seconds, after another 1.999 seconds,\nafter another 1.998 seconds, and so on,\nuntil reaching a fixed period of 0.5 seconds.\nThis is technically a warmup period of 1881.25 seconds with\n1500 emitted packets, followed by a period of 0.5 seconds.\n\nIn the \defn{simulation} problem, we are given the initial state of the grid,\na list of timestamped \defn{events} for\nwhen each source emits a packet 
during its warmup period,\nwhen each source starts periodic behavior,\nand when the player will place each arrow.\nWe assume that timestamps are encoded in binary but\n(to keep things relatively simple)\nperiods and warmup times are encoded in unary.\nThe problem then asks to predict whether the player wins;\nthat is, the loading bar reaches~$k^*$ before a loss occurs.\n\nIn the \defn{game} problem, we are not given the player's arrow placements.\nIf we allow nondeterministic algorithms, the game problem reduces to the\nsimulation problem: just guess what arrows we place, where, and at what times.\n\nA natural approach to solving the simulation problem is to simulate the game\nfrom the initial state to each successive event.\nSpecifically, given a state of the game (a grid with sinks, sources of varying periods and offsets, placed arrows, and the number of in-flight packets at each location) and a future timestamp \(t\),\nwe wish to determine the state of the game at time \(t\).\nUsing this computation, we can compute future states of the game quickly by\n``skipping ahead'' over the time between events.\nOn an $m \times n$ grid, there are $O(m n)$ events, so we can determine the state of the game at any time \(t\) by simulating polynomially many phases between events.\n\nThis computation is easy to do.\nGiven the time \(t\), we can divide by each source's period and the period of each cycle of arrows to determine\nhow many packets each source produces and where the arrows route them ---\neither to a sink which affects loading, off the grid, stuck in a cycle,\nor in-flight outside a cycle ---\nand then sum up the effects to obtain the new amount loaded and the number of packets at each location.\n\nHowever, being able to compute future states\ndoes not suffice to solve the simulation and game problems because there might be an intermediate time\nwhere the loading amount drops below $0$ or reaches the target 
amount~$k^*$.\nNonetheless, this suffices to show that the problems are in \(\Sigma_2^P\),\nby guessing a win time and verifying there are no earlier loss times:\n\n\begin{lemma}\n The simulation and game problems are in \(\Sigma_2^P\).\n\end{lemma}\n\begin{proof}\n The player wins if there exists a time with a win such that all smaller times are not losses.\n To solve the simulation problem,\n nondeterministically guess the winning time and verify that it is a win\n by computing the state at that time.\n Then check using a coNP query that there was no loss before that time,\n again using the ability to quickly compute states at individual timestamps.\n\n To solve the game problem, we first existentially guess the details of the arrow placements,\n then solve the resulting simulation problem as before.\n\end{proof}\n\nAn easier case is when the source periods are all the same\nafter warmup, as implemented in the real game.\nTheorem~\ref{thm:NP-hard36} proved this version of the game NP-hard,\nand we can now show that it is NP-complete:\n\n\begin{lemma}\n If all sources have the same polynomial-length period after a\n polynomial number $t_p$ of time steps,\n then the simulation problem is in P and the game problem is in NP.\n\end{lemma}\n\n\begin{proof}\n In this case, we can check for wins or losses in each phase between events by explicitly simulating for longer than the length of every packet path,\n at which point the loading bar value becomes periodic with the common source period.\n (Cycles of arrows may have different periods, but these do not affect the loading bar, and thus do not matter when checking for wins and losses.)\n We skip over each phase, checking for win or loss along the way.\n If the game continues past the last event, we measure the sign of the net change in the loading bar over one period.\n If it is positive, the player will eventually win;\n if it is negative, the player will eventually lose; and\n if it is zero, the game will go on 
forever.\n\end{proof}\n\n\nIn the remainder of this section, we consider the case where each source can be assigned any integer period, and the period does not change over time.\n\n\subsection{Periodic Sum Threshold Problem}\n\nWith varying source periods, the challenge is that the overall periodic\nbehavior of the game can have an extremely large (exponential) period.\n\nWe can model this difficulty via the\n\defn{Periodic Sum Threshold Problem}, defined as follows.\nWe are given a function $f(x) = \sum_{i} g_i(x)$\nwhere each $g_i$ has unary integer period $T_i$\nand unary maximum absolute value $M_i$.\nIn addition, we are given a unary integer $\tau > 0$\nand a binary integer time $x^*$.\nThe goal is to determine whether there exists an integer $x$\nin $[0, x^*)$ such that $f(x) \geq \tau$.\n(Intuitively, reaching $\tau$ corresponds to winning.)\n\n\begin{theorem}\n\label{thm:pstp-np-complete}\n The Periodic Sum Threshold Problem is NP-complete,\n even under the following restrictions:\n \begin{enumerate}\n \item \label{prop:one-hot}\n Each $|g_i|$ is a one-hot function, i.e.,\n $g_i(x) = 0$ everywhere except for exactly one $x$ in its period\n where $g_i(x) = \pm 1$.\n \item \label{prop:lambda}\n We are given a unary integer $\lambda < 0$ such that\n $f(x) > \lambda$ for all $0 \leq x < x^*$ and $f(x^*) \leq \lambda$.\n (Intuitively, dipping down to $\lambda$ corresponds to losing.)\n \end{enumerate}\n\end{theorem}\n\begin{proof}\n First, the problem is in NP: we can guess $x \in [0,x^*)$\n and then evaluate whether $f(x) \geq \tau$ in polynomial time.\n\n For NP-hardness, we reduce from 3SAT.\n We map each variable $v_i$ to the $i$th prime number $p_i$ excluding~$2$.\n Using the Chinese Remainder Theorem,\n we can represent a Boolean assignment $\phi$ as a single integer $0 \le x < \prod_i p_i$\n where $x \equiv 1 \mod p_i$ when $\phi$ sets $v_i$ to true,\n and $x \equiv 0 \mod p_i$ when $\phi$ sets $v_i$ to false.\n (This 
mapping does not use other values of $x$ modulo~$p_i$.\n In particular, it leaves $x \equiv -1 \mod p_i$ unused,\n because $p_i \geq 3$.)\n\n Next we map each clause such as $C = (v_i \vee v_j \vee \neg v_k)$\n to the function\n %\n $$g_C(x) = \max\{[x \equiv 1 \mod p_i], [x \equiv 1 \mod p_j], [x \equiv 0 \mod p_k]\},$$\n %\n i.e., positive literals check for $x \equiv 1$\n and negated literals check for $x \equiv 0$.\n For every $x$ that encodes a Boolean assignment,\n this function is $1$ exactly when that assignment satisfies~$C$.\n \n \n \n This function has period $p_i p_j p_k$,\n whose unary value is bounded by a polynomial.\n Setting $\tau$ to the number of clauses,\n there is a value $x$ where the sum is $\tau$ if and only if\n there is a satisfying assignment for the 3SAT formula.\n (Setting $\tau$ smaller, we could reduce from Max 3SAT.)\n\n To achieve Property~\ref{prop:one-hot}, we split each $g_C$ function\n into a sum of polynomially many one-hot functions\n (bounded by the period). 
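For concreteness, here is a small worked instance of this splitting,\n using an illustrative clause (the primes follow the construction above):\n for $C = (v_1 \vee v_2 \vee \neg v_3)$ with $p_1 = 3$, $p_2 = 5$, $p_3 = 7$,\n the satisfying pattern $(v_1, v_2, v_3) = (\text{true}, \text{false}, \text{false})$\n contributes the one-hot summand\n %\n $$[x \equiv 1 \mod 3] \cdot [x \equiv 0 \mod 5] \cdot [x \equiv 0 \mod 7],$$\n %\n which by the Chinese Remainder Theorem is $1$ exactly when\n $x \equiv 70 \mod 105$.\n Summing the analogous summands over all satisfying patterns of $C$\n yields a function that agrees with $g_C$\n on every $x$ encoding an assignment. 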
In fact, seven functions per clause suffice,\n one for each satisfying assignment of the clause.\n\n To achieve Property~\ref{prop:lambda},\n for each prime $p_i$, we add the function\n $h_i(x) = -[x \equiv -1 \mod p_i]$.\n This function is $-1$ only for unused values of \(x\)\n which do not correspond to any assignment \(\phi\),\n so it does not affect the argument above.\n Setting $-\lambda$ to the number of primes (variables)\n and $x^* = \prod_i p_i - 1$,\n we have $\sum_i h_i(x^*) = \lambda$\n because $x^* \equiv -1 \mod p_i$ for all~$i$, so each $h_i(x^*) = -1$,\n while $\sum_i h_i(x) > \lambda$ for all $0 \leq x < x^*$.\n All used values \(x\) are smaller than \(x^*\).\n\n In total, $f(x)$ is the sum of the constructed functions\n and we obtain the desired properties.\n\end{proof}\n\n\n\subsection{Simulation Hardness for Two Colors}\n\nWe can use our hardness of the Periodic Sum Threshold Problem\nto prove hardness of simulating {TGINGTLI}, even without player input.\n\n\begin{theorem} \label{thm:sim NP-hard}\n Simulating {TGINGTLI} and determining whether the player wins\n is NP-hard, even with just two colors.\n\end{theorem}\n\begin{proof}\n We reduce from the Periodic Sum Threshold Problem proved NP-complete\n by Theorem~\ref{thm:pstp-np-complete}.\n\n For each function $g_i$ with one-hot value $g_i(x_i) = 1$ and period~$T_i$,\n we create a blue source $b_i$ and a red source $r_i$,\n of the same emitting period~$T_i$,\n and route red and blue packets from these sources to the blue sink.\n By adjusting the path lengths and\/or the warmup times of the sources,\n we arrange for a red packet to arrive one time unit after each blue packet,\n which happens at times $\equiv x_i \mod T_i$.\n Thus the net effect on the loading bar value is $+1$ at time $x_i$\n but returns to $0$ at time $x_i + 1$.\n Similarly, for each function $g_i$ with one-hot value $g_i(x_i) = -1$,\n we perform the same construction but swapping the roles of red and blue.\n\n Setting 
$k_0 = -\\lambda-1 \\geq 0$, the loading bar goes negative\n (and the player loses)\n exactly when the sum of the functions $g_i$ goes down to~$\\lambda$.\n Setting $k^* = k_0 + \\tau$, the loading bar reaches $k^*$\n (and the player wins)\n exactly when the sum of the functions $g_i$ goes up to~$\\tau$.\n\\end{proof}\n\nThis NP-hardness proof relies on completely different aspects of the game\nfrom the proof in Section~\\ref{sec:NP-hardness}: instead of using player input,\nit relies on varying (but small in unary) periods for different sources.\nMore interesting is that we can also prove the same problem coNP-hard:\n\n\\begin{theorem}\n Simulating {TGINGTLI} and determining whether the player wins\n is coNP-hard, even with just two colors.\n\\end{theorem}\n\\begin{proof}\n We reduce from the complement of the Periodic Sum Threshold Problem,\n which is coNP-complete by Theorem~\\ref{thm:pstp-np-complete}.\n The goal in the complement problem is to determine whether\n there is \\emph{no} integer $x$ in $[0, x^*)$ such that $f(x) \\geq \\tau$.\n The idea is to negate all the values to flip the roles of winning and losing.\n \n For each function $g_i$, we construct two sources and wire them to a sink\n in the same way as Theorem~\\ref{thm:sim NP-hard}, but negated:\n if $g_i(x_i) = \\pm 1$, then we design the packets to have a net effect\n of $\\mp 1$ at time $x_i$ and $0$ otherwise.\n\n Setting $k_0 = \\tau-1$, the loading bar goes negative\n (and the player loses)\n exactly when the sum of the functions $g_i$ goes up to~$\\tau$, i.e.,\n the Periodic Sum Threshold Problem has a ``yes'' answer.\n Setting $k^* = k_0 - \\lambda$, the loading bar reaches $k^*$\n (and the player wins)\n exactly when the sum of the functions $g_i$ goes down to $\\lambda$, i.e.,\n the Periodic Sum Threshold Problem has a ``no'' answer.\n\\end{proof}\n\n\n\\section{Characterizing Perfect Layouts}\n\\label{sec:perfect-layouts}\n\nSuppose we are given a board which is empty except for the 
location of the\nthree data sinks.\nIs it possible to place arrows such that all possible input packets\nget routed to the correct sink?\nWe call such a configuration of arrows a \\defn{perfect layout}.\nIn particular, such a layout guarantees victory,\nregardless of the data sources.\nIn this section, we give a full characterization of boards\nand sink placements that admit a perfect layout.\nSome of our results work for a general number $c$ of colors,\nbut the full characterization relies on $c=3$.\n\n\\subsection{Colors Not Arrows}\n\nWe begin by showing that we do not need to consider the directions of the arrows, only their colors and locations in the grid.\n\nLet \\(B\\) be a board with specified locations of sinks,\nand let \\(\\partial B\\) be the set of edges on the boundary of \\(B\\).\nSuppose we are given an assignment of colors to the cells of \\(B\\)\nthat agrees with the colors of the sinks;\nlet \\(C_i\\) be the set of grid cells colored with color \\(i\\).\nWe call two cells of \\(C_i\\), or a cell of \\(C_i\\) and a boundary edge\n$e \\in \\partial B$, \\defn{visible} to each other if and only if\nthey are in the same row or the same column\nand no sink of a color other than \\(i\\) is between them.\nLet \\(G_i\\) be the graph whose vertex set is \\(C_i \\cup \\partial B\\),\nwith edges between pairs of vertices that are visible to each other.\n\n\\begin{lemma}\n \\label{lem:perfect-colors}\n Let \\(B\\) be a board with specified locations of sinks.\n Then \\(B\\) admits a perfect layout if and only if it is possible to choose\n colors for the remaining cells of the grid such that, for each color~\\(i\\),\n the graph \\(G_i\\) is connected.\n\\end{lemma}\n\\begin{proof}\n ($\\Longrightarrow$)\n Without loss of generality, assume that\n the perfect layout has the minimum possible number of arrows.\n Color the cells of the board with the same colors as the sinks and arrows in the perfect layout.\n (If a cell is empty in the perfect layout,\n then 
give it the same color as an adjacent cell;\n this does not affect connectivity.)\n Fix a color \\(i\\).\n Every boundary edge is connected to the sink of color \\(i\\)\n by the path a packet of color \\(i\\) follows when entering from that edge.\n (In particular, the path cannot go through a sink of a different color.)\n By minimality of the number of arrows in the perfect layout,\n every arrow of color \\(i\\) is included in such a path.\n Therefore \\(G_i\\) is connected.\n\n ($\\Longleftarrow$)\n We will replace each cell by an arrow of the same color to form a perfect layout.\n Namely, for each color~\\(i\\), choose a spanning tree of \\(G_i\\)\n rooted at the sink of color~\\(i\\),\n and direct arrows from children to parents in this tree.\n By connectivity, any packet entering from a boundary edge\n will be routed to the correct sink, walking up the tree to its root.\n\\end{proof}\n\n\\subsection{Impossible Boards}\n\\label{sec:impossible-boards}\n\nNext we show that certain boards cannot have perfect layouts.\nFirst we give arguments about boards containing sinks\ntoo close to the boundary or each other.\nThen we give an area-based constraint on board size.\n\n\\begin{lemma}\n \\label{lem:sink-distance}\n If there are fewer than $c-1$ blank cells in a row or column\n between a sink and a boundary of the grid,\n then there is no perfect layout.\n\\end{lemma}\n\\begin{proof}\n A perfect layout must prevent packets of the other \\(c-1\\) colors\n entering at this boundary from reaching this sink;\n this requires enough space for \\(c-1\\) arrows.\n\\end{proof}\n\n\\begin{lemma}\n \\label{lem:impossible-c3}\n For $c=3$, a board has no perfect layout if either\n (as shown in Figure~\\ref{fig:impossible})\n \\begin{enumerate}[(a)]\n \\item a data sink is two cells away from three boundaries and adjacent to another sink;\n \\item a data sink is two cells away from two incident boundaries and is adjacent to two other sinks;\n \\item a data sink is two cells away 
from two opposite boundaries and is adjacent to two other sinks; or\n \item a data sink is two cells away from three boundaries and is one blank cell away from a pair of adjacent sinks.\n \end{enumerate}\n\end{lemma}\n\n\begin{figure}\n \centering\n \subcaptionbox{}{\includegraphics[scale=0.75]{figs\/impossible1}}\hfil\n \subcaptionbox{}{\includegraphics[scale=0.75]{figs\/impossible2}}\hfil\n \subcaptionbox{}{\includegraphics[scale=0.75]{figs\/impossible3}}\hfil\n \subcaptionbox{}{\includegraphics[scale=0.75]{figs\/impossible4}}\n \caption{Sink configurations with no perfect layout.\n Dots indicate arrows of forced colors\n (up to permutation within a row or column).}\n \label{fig:impossible}\n\end{figure}\n\n\begin{proof}\n Assume by symmetry that, in each case, the first mentioned sink is red.\n\n Cases (a), (b), and (c):\n The pairs of cells between the red sink and the boundary\n (marked with dots in the figure) must contain a green arrow and a blue arrow\n to ensure those packets do not reach the red sink.\n Thus there are no available places to place a red arrow\n in the same row or column as the red sink,\n so red packets from other rows or columns cannot reach the red sink.\n \n Case (d): The pairs of cells between the red sink and the boundary\n (marked with green and blue dots in the figure)\n must contain a green arrow and a blue arrow to ensure those packets\n do not reach the red sink.\n Thus the blank cell between the red sink and the other pair of sinks\n must contain a red arrow pointing toward the red sink,\n to allow packets from other rows and columns to reach the red sink.\n Assume by symmetry that the sink nearest the red sink is green.\n As in the other cases, the pairs of cells between the green sink and the\n boundary must be filled with red and blue arrows.\n Thus there are no green arrows to route green packets from other\n rows or columns to the green sink.\n \n \n \n\end{proof}\n\nWe now prove a constraint on the 
sizes of boards that admit a perfect layout.\n\n\\begin{lemma}\n \\label{lem:size-constraint}\n Let \\(c\\) be the number of colors.\n Suppose there is a perfect layout on a board where \\(m\\) and \\(n\\) are respectively the number of rows and columns, and \\(p\\) and \\(q\\) are respectively the number of rows and columns that contain at least one sink.\n Then\n \\begin{equation}\n \\label{eqn:size-constraint}\n c(m + n) + (c-2)(p + q) \\le m n - c.\n \\end{equation}\n\\end{lemma}\n\\begin{proof}\n Each of the \\(m - p\\) unoccupied rows must contain \\(c\\) vertical arrows in order to redirect packets of each color out of the row.\n Each of the \\(p\\) occupied rows must contain \\(c-1\\) vertical arrows to the\n left of the leftmost sink in order to redirect incorrectly colored packets\n from the left boundary edge away from that sink;\n similarly, there must be \\(c-1\\) vertical arrows to the right of the\n rightmost sink.\n Thus we require \\(c(m - p) + 2(c - 1)p = c m + (c - 2)p\\)\n vertical arrows overall.\n By the same argument, we must have \\(c n + (c - 2)q\\) horizontal arrows,\n for a total of\n \\(c(m + n) + (c - 2)(p + q)\\)\n arrows.\n There are \\(m n - c\\) cells available for arrows, which proves the claim.\n\\end{proof}\n\nUp to insertion of empty rows or columns, rotations, reflections, and\nrecolorings, there are six different configurations that $c=3$ sinks\nmay have with respect to each other, shown in Figure~\\ref{fig:3-sink-configs}.\nWe define a board's \\defn{type} according to this configuration of its sinks\n(C, I, J, L, Y, or \/).\n\n\\begin{figure}[htbp]\n \\centering\n \\hfil\n \\subcaptionbox{C}{\\includegraphics[scale=0.9]{figs\/config-C}}\\hfil\n \\subcaptionbox{I}{\\includegraphics[scale=0.9]{figs\/config-I}}\\hfil\n \\subcaptionbox{J}{\\includegraphics[scale=0.9]{figs\/config-J}}\\hfil\n \\subcaptionbox{L}{\\includegraphics[scale=0.9]{figs\/config-L}}\\hfil\n 
\\subcaptionbox{Y}{\\includegraphics[scale=0.9]{figs\/config-Y}}\\hfil\n \\subcaptionbox{\/}{\\includegraphics[scale=0.9]{figs\/config-slash}}%\n \\hfil\n \\caption{The six possible configurations of three sinks up to rotations, reflections, recolorings, and removal of empty rows.}\n \\label{fig:3-sink-configs}\n\\end{figure}\n\nA board's type determines the values of \\(p\\) and \\(q\\)\nand thus the minimal board sizes as follows.\nDefine a board to have size \\defn{at least} $m \\times n$ if\nit has at least $m$ rows and at least $n$ columns, or vice versa.\n\n\\begin{lemma}\n \\label{lem:size-constraint-c3}\n For a perfect layout to exist with \\(c=3\\), it is necessary that:\n \\begin{itemize}\n \\item Boards of type Y or \/ have size at least $7\\times8$.\n \\item Boards of type C or J have size at least $7\\times8$ or $6\\times9$.\n \\item Boards of type L have size at least $7\\times7$ or $6\\times9$.\n \\item Boards of type I have size at least $7\\times7$, $6\\times9$, or $5\\times11$.\n \\end{itemize}\n\\end{lemma}\n\\begin{proof}\n These bounds follow from Lemma~\\ref{lem:size-constraint} together with the requirement from Lemma~\\ref{lem:sink-distance} that it be possible to place sinks at least two cells away from the boundary.\n\\end{proof}\n\n\\subsection{Constructing Perfect Layouts}\n\nIn this section, we complete our characterization of boards\nwith perfect layouts for \\(c=3\\).\nWe show that Lemmas \\ref{lem:sink-distance}, \\ref{lem:impossible-c3}, and\n\\ref{lem:size-constraint-c3} are the only obstacles to a perfect layout:\n\n\\begin{theorem}\n \\label{thm:characterization-c3}\n A board with $c=3$ sinks has a perfect layout if and only if the following conditions all hold:\n \\begin{enumerate}\n \\item All sinks are at least two cells away from the boundary\n (Lemma~\\ref{lem:sink-distance}).\n \\item The board does not contain any of the four unsolvable configurations in Figure~\\ref{fig:impossible} (Lemma~\\ref{lem:impossible-c3}).\n 
\\item The board obeys the size bounds of Lemma~\\ref{lem:size-constraint-c3}.\n \\end{enumerate}\n\\end{theorem}\n\nWe call a board \\defn{minimal} if it has one of the minimal dimensions\nfor its type as defined in Lemma~\\ref{lem:size-constraint-c3}.\nOur strategy for proving Theorem~\\ref{thm:characterization-c3}\nwill be to reduce the problem to the finite set of minimal boards,\nwhich we then verify by computer.\nWe will accomplish this by removing empty rows and columns\nfrom non-minimal boards to reduce their size,\nwhich we show can always be done while preserving the above conditions.\n\n\\begin{lemma}\n \\label{lem:characterization-c3-minimal}\n All minimal boards satisfying the three conditions of\n Theorem~\\ref{thm:characterization-c3} have a perfect layout.\n\\end{lemma}\n\\begin{proof}\n \\renewcommand{\\qedsymbol}{\\(\\blacksquare\\)}\n The proof is by exhaustive computer search of all such minimal boards.\n We wrote a Python program to generate all possible board patterns,\n reduce each perfect layout problem to Satisfiability Modulo Theories\n (SMT), and then solve it using Z3 \\cite{z3}.\n The results of this search are in Appendix~\\ref{apx:computer-solutions}.\n\\end{proof}\n\nIf \\(B_0\\) and \\(B_1\\) are boards, then we define \\defn{\\(B_0 \\pmb{\\lessdot} B_1\\)}\nto mean that \\(B_0\\) can be obtained by removing a single empty row or column\nfrom \\(B_1\\).\n\n\\begin{lemma}\n \\label{lem:add-row}\n If \\(B_0 \\lessdot B_1\\) and \\(B_0\\) has a perfect layout,\n then \\(B_1\\) also has a perfect layout.\n\\end{lemma}\n\\begin{proof}\n By symmetry, consider the case where $B_1$ has an added row.\n By Lemma~\\ref{lem:perfect-colors}, it suffices to show that we can color\n the cells of the new row while preserving connectivity in each color.\n We do so by duplicating the colors of the cells (including sinks)\n in an adjacent row.\n Connectivity of the resulting coloring follows from that of the original.\n\\end{proof}\n\n\\begin{lemma}\n 
\\label{lem:characterization-c3-nonminimal}\n Let \\(B_1\\) be a non-minimal board satisfying the three conditions\n of Theorem~\\ref{thm:characterization-c3}.\n Then there exists a board \\(B_0\\) that also satisfies all three conditions\n and such that \\(B_0 \\lessdot B_1\\).\n\\end{lemma}\n\\begin{proof}\n By symmetry, suppose $B_1$ is non-minimal in its number $m$ of rows.\n By removing a row from $B_1$ that is not among the first or last two rows\n and does not contain a sink, we obtain a board \\(B'_0\\) satisfying conditions\n (1) and (3) such that \\(B'_0 \\lessdot B_1\\).\n If \\(B'_0\\) also satisfies condition (2), then we are done,\n so we may assume that it does not.\n\n Then \\(B'_0\\) must contain one of the four unsolvable configurations,\n and \\(B_1\\) is obtained by inserting a single empty row or column to\n remove the unsolvable configuration.\n Figure~\\ref{fig:perfect-reductions} shows all possibilities for \\(B'_0\\),\n as well as the locations where rows or columns may be inserted to yield\n a corresponding possibility for \\(B_1\\).\n (\\(B'_0\\) may have additional empty rows and columns beyond those shown,\n but this does not affect the proof.)\n For each such possibility, Figure~\\ref{fig:perfect-reductions} highlights\n another row or column which may be deleted from \\(B_1\\) to yield\n \\(B_0 \\lessdot B_1\\) where \\(B_0\\) satisfies all three conditions.\n\\end{proof}\n\n\\begin{figure}\n \\centering\n \\hfil\n \\subcaptionbox{$L$, $7\\times7$}{\\includegraphics[scale=0.6]{solver\/reductions\/L7_7_reduction}}\\hfill\n \\subcaptionbox{$L$, $6\\times9$}{\\includegraphics[scale=0.6]{solver\/reductions\/L6_9_reduction}}\\hfill\n \\subcaptionbox{\\label{fig:perfect-reductions-row-choice}$I$, $5\\times11$}{\\includegraphics[scale=0.6]{solver\/reductions\/I5_11_0_reduction}}\\hfill\n \\subcaptionbox{$I$, $5\\times11$}{\\includegraphics[scale=0.6]{solver\/reductions\/I5_11_1_reduction}\\vspace{0.2in}}\\hfill\n \\subcaptionbox{$I$, 
$5\\times11$}{\\includegraphics[scale=0.6]{solver\/reductions\/I5_11_2_reduction}}%\n \\hfil\n \\caption{\n All boards satisfying conditions (1) and (3) but not (2),\n up to rotations, reflections, and recolorings.\n An empty row or column may be inserted in any of the locations\n marked ``$+$'' to yield a board satisfying all three conditions.\n Removing the row or column marked ``$-$'' then preserves the conditions.\n In case~(c),\n \n remove a row that does not contain the blue sink.\n In case~(d),\n $\\vcenter{\\hbox{\\includegraphics[scale=0.6]{figs\/dotdotdot}}}$\n denotes zero or more rows.\n }\n \\label{fig:perfect-reductions}\n\\end{figure}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:characterization-c3}]\n It follows from Lemmas \\ref{lem:sink-distance}, \\ref{lem:impossible-c3},\n and \\ref{lem:size-constraint-c3} that all boards with perfect layouts\n must obey the three properties of the theorem.\n We prove that the properties are also sufficient\n by induction on the size of the board.\n As a base case, the claim holds for minimal boards\n by Lemma~\\ref{lem:characterization-c3-minimal}.\n For non-minimal boards \\(B_1\\),\n Lemma~\\ref{lem:characterization-c3-nonminimal} shows that\n there is a smaller board \\(B_0\\)\n that satisfies all three conditions and such that \\(B_0 \\lessdot B_1\\).\n By the inductive hypothesis, \\(B_0\\) has a perfect layout.\n Lemma~\\ref{lem:add-row} shows that \\(B_1\\) also has a perfect layout.\n\\end{proof}\n\n\n\\section{Open Questions}\n\nThe main complexity open question is whether {TGINGTLI} is $\\Sigma_2^P$-complete.\nGiven our NP- and coNP-hardness results, we suspect that this is true.\n\nOne could also ask complexity questions of more restrictive versions of the game.\nFor example, what if the board has a constant number of rows?\n\nWhen characterizing perfect layouts, we saw many of our lemmas generalized to different numbers of colors. 
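For example, the counting bound of Lemma~\ref{lem:size-constraint} is already stated for general \(c\) and can be checked mechanically; a minimal sketch (function name ours, directly transcribing the inequality):

```python
def size_bound_ok(c, m, n, p, q):
    """Necessary size condition of Lemma (size-constraint):
    c*(m + n) + (c - 2)*(p + q) <= m*n - c,
    where p (resp. q) is the number of rows (resp. columns)
    containing at least one sink."""
    return c * (m + n) + (c - 2) * (p + q) <= m * n - c

# Type-I board with all three sinks in one row: p = 1, q = 3.
# The bound admits 5x11 (52 <= 52) but rules out 5x10 (49 > 47),
# matching Lemma (size-constraint-c3).
print(size_bound_ok(3, 5, 11, 1, 3), size_bound_ok(3, 5, 10, 1, 3))
```

Enumerating which \((m, n, p, q)\) pass this filter for \(c = 4\) or more colors would be a natural first step toward such a characterization.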
It may be interesting to further explore the game and try to characterize perfect layouts with more than three colors.\n\nA related problem is which boards and configurations of sinks\nadmit a \\defn{damage-free} layout, where any packet entering from\nthe boundary either reaches the sink of the correct color\nor ends up in an infinite loop. Such a layout avoids losing,\nand in the game as implemented, such a layout actually wins the game\n(because the player wins if there is ever insufficient room\nfor a new source to be placed).\nCan we characterize such layouts like we did for perfect layouts?\n\nPerfect and damage-free layouts are robust to any possible sources.\nHowever, for those boards that do not admit a perfect or damage-free layout,\nit would be nice to have an algorithm that determines whether a given\nset of sources or sequence of packets still has a placement of arrows\nthat will win on that board.\nBecause the board starts empty except for the sinks,\nour hardness results do not apply.\n\nHaving a unique solution is often a desirable property of puzzles. Thus it is natural to ask about ASP-hardness and whether counting the number of solutions is \\#P-hard.\n\n\n\n\\section*{Acknowledgments}\n\nThis work was initiated during open problem solving in the MIT class on\nAlgorithmic Lower Bounds: Fun with Hardness Proofs (6.892)\ntaught by Erik Demaine in Spring 2019.\nWe thank the other participants of that class\nfor related discussions and providing an inspiring atmosphere.\nIn particular, we thank Quanquan C. 
Liu for helpful discussions\nand contributions to early results.\n\nMost figures of this paper were drawn using SVG Tiler\n[\\url{https:\/\/github.com\/edemaine\/svgtiler}].\nIcons (which match the game) are by looneybits and\nreleased in the public domain\n[\\url{https:\/\/opengameart.org\/content\/gui-buttons-vol1}].\n\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Introduction}\nThere are insufficiently large or no datasets for research questions on camera-based AI systems from the field of medical operating rooms. Use cases for AI systems in this context include sterility of health professionals, whether and where certain medical devices are located or action recognition of health professionals. Furthermore, datasets are, if at all, only institution-related and not publicly available due to data protection regulations and ethics requirements.\n\nThe successes of deep learning in recent years are among others, due to the availability of large datasets such as Imagenet \\cite{ILSVRC15} for Image Classification, or MS COCO \\cite{DBLP:journals\/corr\/LinMBHPRDZ14} for Object Bounding Box Detection. In addition, the research of new methods or architectures like \\cite{He2016}\\cite{Lin2020}\\cite{Tan_2020}\\cite{Bochkovskiy2020} and the research of high performance hardware for parallel computing are to be mentioned. The authors of \\cite{Sze2018}\\cite{LeCun2019} analyze the hardware topic in depth. In this work however, special focus is put on the availability of datasets and methods for dataset generation in order to reduce the necessary amount of real data from the target domain. \n\nSeveral works already deal with the generation of synthetic data and with the goal of reducing the reality gap between synthetic and real data. Among these are the work on Domain Randomization (DR) and Structured Domain Randomization (SDR) \\cite{DrTobin}\\cite{DrTremblay}\\cite{Prakash_2019}. 
In addition to other work, it has already been shown that the use of synthetic image data can decrease the required amount of real data \\cite{DrTremblay}. Likewise, synthetic datasets of persons exist \\cite{h36m_pami}. However, the challenge lies in the specific characteristics of a medical intervention space. Health professionals wear special clothing, sometimes with multiple layers, as well as sterile gloves, masks, and hairnets. The differences between conventional human data and data from the medical field are large. Nevertheless, Domain Randomization techniques sound promising for use in research questions around medical interventions.\n\nThis work presents a comparison, in terms of detection accuracy and generalizability, of different methods for synthetic clothing generation using either 3D clothing scans (SCANS) or designed CAD clothing (CAD) with SMPL models \\cite{SMPL:2015}. The comparison is performed using the example of medical clothing object detection. Both methods (SCANS, CAD) are incorporated into a domain randomization environment called NDDS \\cite{to2018ndds} and a Structured Domain Randomization environment implemented in Unity, based on \\cite{Technologies2017}, to generate synthetic training data. Furthermore, the aim of the presented methodology is to explore a pipeline for the generation of synthetic data for the medical field, so that further research questions from the intervention space can be explored. In addition to synthetic data, real data of different persons are recorded in front of a green screen and in a clinical environment. All data are split into train\/val\/test sets, while the clinical dataset serves as a test set for all methods.\n\n\n\\section{Related Work}\n\\label{sec:relatedWork}\nWith the rise of synthetic data generation methods, for example Domain Randomization \\cite{DrTobin}, it has already been shown that synthetic data can reduce the amount of real data required \\cite{DrTremblay}. 
However, one focus of research is the reduction of the reality gap between the synthetic data and the target domain.\n\nHere, the aforementioned Domain Randomization (DR) has turned out to be one way to reduce this gap. One idea of DR is that if enough variance can be generated in the synthetic data, reality may appear to the model as just another variation of the target domain \\cite{DrTobin}.\n\nThe work of \\textit{Tobin et al.} \\cite{DrTobin} and \\textit{Tremblay et al.} \\cite{DrTremblay} showed that an object detection network for robot grasping or car detection can be trained from synthetic images alone, using random positioning, random lighting, random backgrounds, distractor objects, and non-realistic textures. In addition, the work of \\textit{Tremblay et al.} showed that the necessary amount of real target-domain data can be reduced while maintaining adequate accuracy when pretraining with DR-generated images.\n\nThe work of \\textit{Borkman et al.} \\cite{Borkman2021} likewise showed that, when using Unity Perception for synthetic data generation, the amount of real-world data could be reduced to 10\\% when used together with the synthetic data, while achieving a better AP score than with all real-world data alone.\n\nDomain Randomization has already been successfully applied in various fields. In addition to the mentioned areas of car detection and robot grasping, the work of Sadeghi et al. 
\\cite{Sadeghi2017} for flying a quadrocopter through indoor environments, \\textit{Zhang et al.} \\cite{DBLP:journals\/corr\/abs-1709-05746} for a table-top object-reaching task through clutter, or \\textit{James et al.} \\cite{DBLP:journals\/corr\/JamesDJ17} for grasping a cube and placing it in a basket can be named.\n\nThis leads us to believe that DR is a suitable approach for the medical intervention room domain, where hardly any real data is available and access to the domain is heavily restricted.\n\nAblation studies of \\cite{DrTremblay} and \\cite{DrTobin} showed that high-resolution textures and higher numbers of unique textures in the scene improve performance.\nLikewise, \\cite{DBLP:journals\/corr\/abs-1709-05746} come to the conclusion, after testing their hypothesis, that using complex textures yields better performance than using random colors.\n\nIn contrast to the domain randomization approach stands the photorealistic rendering of the scene and objects. A number of datasets have been created for this purpose in recent years. Notable here are the works of \\cite{Tremblay2018a} \\cite{h36m_pami} \\cite{Varol2017LearningFS} or \\cite{GraspingDRphoto}. Some of these works combine real image data with domain randomization and photorealistically rendered image data.\n\nIn \\cite{Tremblay2018a}, a photorealistically rendered dataset was created for 21 objects of the YCB dataset. Here, the objects are rendered in different scenes with collision properties when falling down. The dataset is intended to accelerate progress in the areas of object detection and pose estimation.\n\nIn \\cite{GraspingDRphoto}, domain randomization is combined with photorealistically rendered image data for robotic grasping of household objects. Using the data generated in this way, the authors built a real-time system for object detection and robot grasping with sufficient accuracy. 
They also showed that the combination of both domains improved performance as opposed to just one alone.\n\nIn the field of human pose estimation, the works of \\cite{h36m_pami} and \\cite{Varol2017LearningFS} need to be mentioned. Both works were able to show that the performance of networks can be increased by using synthetic and animated persons, respectively.\n\nThe work of \\cite{Varol2017LearningFS} generates photo-realistic synthetic image data and the corresponding ground truth for body-part classification.\n\nIn \\cite{h36m_pami}, animated persons are integrated into mixed-reality environments. The movements were recorded by actors in a motion capture scenario and transferred to 3D-scanned meshes. In their experiments, they were able to achieve a 20\\% increase in performance compared to the largest training set available in this domain.\n\nState-of-the-art models for realistic human body shapes are the SMPL models introduced by \\cite{SMPL:2015} and improved by STAR in \\cite{Osman2020}. SMPL stands for Skinned Multi-Person Linear Model, which according to the authors is a skinned vertex-based model that represents human shapes in a wide variety. In their work, they learn male and female body shapes from the CAESAR dataset \\cite{CAESAR}. Their model is compatible with a wide variety of rendering engines like Unity or Unreal and is therefore highly suited for synthetic data generation of humans. There also exist extensions to the SMPL model, like MANO and SMPL-H, which introduce a deformable hand model into the framework. MANO \\cite{MANO:SIGGRAPHASIA:2017} is learned from 1000 high-resolution 3D scans of various hand poses.\n\n\\section{Method}\n\\label{sec:methods}\nAs previously mentioned, real-world data collection in medical intervention rooms is complex, costly, and requires approval from an ethics board and the persons involved. 
\nAs shown in the previous section, DR can help train an object detection network with sufficient performance in real-world applications.\n \nHowever, one challenge in dataset generation for the medical intervention space is domain-specific clothing. We argue that randomizing the clothing with arbitrary textures would help improve detection rates of the clothing types, but in real-world applications, for example, a colored T-shirt would then not be distinguishable from the targeted blue area clothing. For the general detection of cars as in \\cite{DrTremblay} the randomization technique makes sense, but in our opinion the domain-specific use case presented here requires a different approach.\n\n\nThe questions we try to address in this work are:\n\\begin{enumerate}\n\\item How can health professionals be modeled for synthetic data generation?\n\\item Which techniques are best suited for SDR\/DR clothing generation?\n\\item Can we close the reality gap further by including greenscreen data (Mixed Reality, MR)?\n\\item Can the required amount of real data be reduced by using SDR\/DR\/MR?\n\\end{enumerate}\n\nFor point 1), we argue for using a deformable human shape model like the SMPL models, which provides sufficient variance of different human shapes and sizes. For point 2), we explore two different methods of clothing generation. First, we 3D-scan various persons wearing medical clothing and generate a database of different medical clothing scans for each clothing type, which we call SCANS. Second, we commission a professional graphics designer to create assets based on the area clothing, which we call CAD. Regarding point 3), we take images of different persons wearing medical clothing in front of a greenscreen, which we label by hand. 
For point 4), we explore whether and by how much the required amount of real data can be reduced.\n\nTo address the named questions, we set up object detection experiments using the Scaled-YOLOv4 object detector \\cite{Wang_2021_CVPR}.\n\nThe classes to be detected are:\n\\begin{itemize}\n\t\\item humans\n\t\\item area clothing shirt\n\t\\item area clothing pants\n\t\\item sterile gown\n\t\\item medical face mask\n\t\\item medical hairnet\n\t\\item medical gloves\n\\end{itemize}\n\nExamples of the medical clothing are given in figure \\ref{fig:expClothing}.\n\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/clothingexp\/shirt}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/clothingexp\/glove} \\\\\n\t\t\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/clothingexp\/pants}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/clothingexp\/mask} \\\\\n\t\\end{tabular}\t\n\t\\caption{Medical clothing examples such as area shirt, pants, mask, and glove. These clothing types, among others, represent the target clothing for our object detection network.}\n\t\\label{fig:expClothing}\n\\end{figure}\n\n\n\\subsection{Character Creation}\n\\label{sec:charCreation}\nThe medical characters we use in SDR\/DR are built through a combination of SMPL body models, textures, animations, and clothing assets. In the following sections, each of these components is described in detail.\n\n\\textbf{Body Model}\\\\\nAs the base of our characters, we use the male and female models of the SMPL+H model from \\cite{MANO:SIGGRAPHASIA:2017}. The models cover a huge variety of realistic human shapes, which can be randomized through ten blend shapes. We decided to use the extended SMPL+H model instead of the original SMPL model \\cite{SMPL:2015}. 
This is because our cloth items include gloves, and through the hand rig of the SMPL+H model we are able to create more deformations of the glove asset.\n\n\\textbf{Human Texture}\\\\\nTo add more variation and realism to the appearance of the characters, the texture maps from \\cite{Varol2017LearningFS} are used.\nOut of the 930 textures, only 138 (69 of each gender) have been used. This is because we created our own cloth assets, so only the textures of people in undergarments were relevant. Those texture maps were created from 3D body scans of the CAESAR dataset \\cite{CAESAR} and cover a variety of skin colors and identities; however, all of the faces have been anonymized \\cite{Varol2017LearningFS}.\n\n\\textbf{Pose} \\\\\nTo provide a variety of realistic body poses, the models were animated through Motion Capture (MoCap) data, which has been captured within our laboratory. We track the movement of 74 joints down to the fingertips. We use an inertial Motion Capture suit with the hand-glove add-on, called Perception Neuron Studio\\footnote{Perception Neuron Studio suite and gloves addon: https:\/\/neuronmocap.com\/perception-neuron-studio-system}. In order to keep the dataset simple, we only used one animation in our experiments. However, more varied animations could easily be added.\\\\\n\n\\textbf{3D-scanned Cloth Assets (SCANS)} \\\\\nA 3D scanner called Artec Leo\\footnote{3D Scanner Artec Leo: https:\/\/www.artec3d.com\/portable-3d-scanners\/artec-leo } with a 3D resolution of 0.2mm was used to capture the medical clothes. For our synthetic training dataset, we used clothing scans of 4 male and 4 female models. In this way, variations of the real-world textures, including reflections, wrinkles, colors, and surface texture information, are collected. After building an initial model from the 3D scanner, we adapt the clothes to fit the standard male and female SMPL+H characters using 3D modeling techniques. 
As the gloves should match the characters' fingers exactly, to ensure correct animations of the fingers, they have been modeled instead of scanned. Afterwards, the textures of the scanned gloves were applied to the models. When the cloth assets are fitted, we create cloth blend shapes, which match the blend shapes of the SMPL+H model, in order to make them adapt together with the character. Additionally, the cloth meshes are bound to the same rig as the character by transferring the skin weights of the SMPL+H models. In this way, the cloth assets are just as adaptable in shape and pose as the body model.\nAccording to our research, medical clothes usually come in the colors blue, green, and light pink. To cover this variation in our dataset without scanning more clothing or performing augmentations on the whole image, we augmented the texture maps. Examples of the scanned and rigged clothing assets can be seen in figure \\ref{fig:expClothingScans}.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_3dmodels\/mask-and-net}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_3dmodels\/mask-and-net-1} \\\\\n\t\t\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_3dmodels\/shirt}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_3dmodels\/gown} \\\\\n\t\\end{tabular}\t\n\t\\caption{Examples of our 3D-scanned clothing assets with color augmentation.}\n\t\\label{fig:expClothingScans}\n\\end{figure}\n\n\n\\textbf{Designed Cloth Assets (CAD)} \\\\\nTo evaluate the performance of the 3D-scanned cloth assets, we compare them to hand-designed clothing assets. Therefore, we asked a designer\\footnote{azeemdesigns: https:\/\/www.fiverr.com\/azeemdesigns?\\\\source=order\\_page\\_user\\_message\\_link} on Fiverr to model the clothes. Examples of those assets can be seen in figure \\ref{fig:expClothingCad}. 
We first evaluated to what extent freely available assets from asset stores could be used for this purpose. However, there are no assets available that fully match our specific clothing. Therefore, we decided to have the assets designed. The designed assets have been processed in the same way as our scanned assets. They are also deformable and are bound to the same rig.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_cadModels\/mask-and-hat-1}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_cadModels\/mask-and-hat-2} \\\\\n\t\t\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_cadModels\/shirt-1}\n\t\t&\n\t\t\\includegraphics[width=.22\\textwidth]{figures\/pics_cadModels\/gown-1} \\\\\n\t\\end{tabular}\t\n\t\\caption{Examples of the designed clothing assets with color augmentation.}\n\t\\label{fig:expClothingCad}\n\\end{figure}\n\n\\textbf{Modular Character DR, NDDS} \\\\\nTo create the datasets, the body models, animations, cloth assets, and all other components are combined in a character blueprint in Unreal Engine 4.22.3 (DR). Here we have created a modular character that is able to take on random shapes, textures, and combinations of clothing. The 3D-scanned and designed clothes also vary in shape with the character and move with the animation. When the data capturing begins, the character first iterates over 1700 body shapes. The SMPL shape parameters for these body shapes have been taken from \\cite{Varol2017LearningFS} and represent 1700 male and female body types taken from the CAESAR dataset. Next, one randomly sampled human texture is applied to the body model. Afterwards, one cloth item out of each category is randomly sampled, textured, and added to the body model. After the blend shapes of all cloth assets are adapted to the current body shape, one animation is chosen. 
When the animation has finished playing, the next body shape is chosen and the process repeats. We create two separate datasets for DR, one with 3D-scanned (SCANS) assets and another with designed (CAD) assets. An activity diagram, which represents the blueprint for Modular Character creation in NDDS, is given in figure \\ref{fig:flowChartBlueprint}.\\\\\n\n\\textbf{Modular Character SDR, Unity}\\\\\nTo be able to use SMPL models in Unity, the male and female models are converted to a Humanoid character and an avatar is created from the model. Variation of clothing in Unity is made possible using a custom-made component which selects random pieces of clothing from each category and applies them to a randomly selected SMPL model. In order to enable the clothing to animate together with the model, the bones of each piece of clothing are changed to match those of the model using said component. Afterwards, a random animation can be assigned to the model using the Animator component with a custom Animator Controller by setting the Motion parameter. Finally, the model including all selected clothing is instantiated and randomly placed inside the scene. The animation is then advanced during data generation by setting the Motion Time parameter of the Animator Controller to a random value. The character itself is adapted using the custom-made randomization components by varying the texture of the models as well as the material of the clothing using predefined color variations. Models as well as clothing and position are varied each frame. The shape of the models is not modified, as this resulted in clipping of the clothing and other unrealistic behavior. 
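The character randomization described above can be summarized in a small Python sketch (all names are ours and hypothetical; the actual implementations are an Unreal blueprint and custom Unity components):

```python
import random

CLOTHING_CATEGORIES = ["shirt", "pants", "gown", "mask", "hairnet", "gloves"]

def sample_character(body_shapes, human_textures, cloth_assets, cloth_colors,
                     animations, rng=random):
    """Sample one modular character configuration: an SMPL+H body shape,
    a human texture map, one (asset, color) pair per clothing category,
    and an animation with a random normalized playback position."""
    return {
        "body_shape": rng.choice(body_shapes),    # SMPL+H blend-shape vector
        "texture": rng.choice(human_textures),    # CAESAR-derived texture map
        "clothing": {cat: (rng.choice(cloth_assets[cat]),
                           rng.choice(cloth_colors))
                     for cat in CLOTHING_CATEGORIES},
        "animation": rng.choice(animations),
        "animation_time": rng.random(),           # normalized Motion Time
    }
```

Note that this sketch samples the body shape at random, whereas the NDDS blueprint iterates sequentially over the 1700 CAESAR-derived shapes and the Unity variant keeps the shape fixed to avoid clipping.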
An activity diagram is shown in figure \\ref{fig:flowChartUnity}\n\n\\begin{figure}\n\t\\centering\t\n\t\\includegraphics[width=.49\\textwidth]{figures\/activity} \\\\\t\n\t\\caption{Activity Diagram of the modular character blueprint used in Unreal with NDDS.}\n\t\\label{fig:flowChartBlueprint}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\t\n\t\\includegraphics[width=.49\\textwidth]{figures\/activityUnity} \\\\\t\n\t\\caption{Activity Diagram of the character creation used in Unity for SDR.}\n\t\\label{fig:flowChartUnity}\n\\end{figure}\n\n\n\n\n\\textbf{Domain Randomization}\nFor the synthetic data generation of DR, an Unreal Engine 4 plugin called NDDS\\cite{to2018ndds} was used. This allows the generation of RGB images at rates similar to real cameras, as well as depth image data and segementation masks of the scene within Unreal Engine 4 (UE4). The plugin also creates bounding box labeling data for each object in the scene in 2D and 3D. The tool was specifically developed for Domain Randomization and therefore provides tools for scene randomization like object or camera position, lighting and distractor objects, among others. Using the aforementioned Modular Character blueprint, NDDS enables the generation of synthetic datasets for sterile clothing using 3D scanned clothing or designed based clothing. Example images are given in figure \\ref{fig:expSynth} on the top row.\n\n\\textbf{Structured Domain Randomization}\nFor dataset generation of SDR, we used a Unity plugin called ML-ImageSynthesis \\cite{Technologies2017} as a base and adapted it to work with the universal rendering pipeline (URP) for quality improvement. Using Unity 2020.3.32f1, additional components have been added to enable an export of additional metadata regarding each generated image such as camera parameters, bounding boxes and world position in .json format. 
SDR is made possible by making use of a variety of custom-made components which allow the randomization of parameters such as lighting, material, texture, and position. The plugin ProBuilder provided by Unity was used to build an intervention room based on the target domain of the real dataset (Klinikum). Scene randomization is achieved by utilizing the aforementioned randomization components.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.23\\textwidth, trim={5cm 2cm 25cm 2cm},clip]{figures\/NDDS_exp\/000014.jpg}\n\t\t&\n\t\t\\includegraphics[width=.23\\textwidth, trim={25cm 2cm 15cm 9.5cm},clip]{figures\/NDDS_exp\/000024.jpg} \\\\\n\t\t\n\t\t\\includegraphics[width=.23\\textwidth, trim={25cm 2cm 2.5cm 9.5cm},clip]{figures\/SDR_exp\/scans.jpg}\n\t\t&\n\t\t\\includegraphics[width=.23\\textwidth, trim={25cm 2cm 2.5cm 4.5cm},clip]{figures\/SDR_exp\/cad.jpg} \\\\\n\t\\end{tabular}\t\n\t\\caption{Examples of synthetic RGB image data from DR and SDR datasets (top: DR, bottom: SDR, left: SCANS, right: CAD).}\n\t\\label{fig:expSynth}\n\\end{figure}\n\n\n\\subsection{Datasets}\n\\label{sec:dataset}\nTo investigate the potential accuracy difference between 3D-scanned clothing (SCANS) and designed clothing (CAD) and the amount of necessary real data, different datasets were generated.\n\nFirst, synthetic datasets for DR and SDR were generated for both scanned and designed clothing, using the presented pipelines in Unreal Engine and Unity. Second, a dataset in front of a greenscreen was collected (MR-DR). It consists of 8 persons in the training dataset and 2 persons in the validation dataset. The recorded persons move in front of the green screen with a certain grasping motion, which is also used as the motion animation for the synthetic data. Finally, a dataset of the target domain was recorded (Klinikum). It serves as a baseline comparison for all models and also provides the test data. 
All datasets are divided into training and validation data. \n\nExamples of real data in front of the green screen with exchanged background can be seen in figure \\ref{fig:realDataExps}. Examples of the synthetic data can be seen in figure \\ref{fig:expSynth} and finally examples from the clinical test data can be seen in figure \\ref{fig:expKlinikumTest}.\n\nTable \\ref{tab:datasetDistr} shows a breakdown of the sizes and distributions of the datasets.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.23\\textwidth,trim={15cm 5cm 25cm 2cm},clip]{figures\/gsExps\/i-1617957374561}\n\t\t&\n\t\t\\includegraphics[width=.23\\textwidth, trim={20cm 2cm 20cm 5cm}, clip]{figures\/gsExps\/i-1617958913261}\n\t\\end{tabular}\t\n\t\\caption{Examples of the greenscreen dataset with exchanged backgrounds.}\n\t\\label{fig:realDataExps}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\begin{tabular}{l c}\n\t\t\\includegraphics[width=.23\\textwidth,trim={20cm 5cm 20cm 5cm},clip]{figures\/expKlinikum\/i-1630600026553}\n\t\t&\n\t\t\\includegraphics[width=.23\\textwidth, trim={20cm 5cm 20cm 5cm}, clip]{figures\/expKlinikum\/i-1630600245053}\n\t\\end{tabular}\t\n\t\\caption{Examples of the Klinikum test dataset.}\n\t\\label{fig:expKlinikumTest}\n\\end{figure}\n\n\\begin{table}\n\t\\caption{This table provides data distributions in the different datasets.}\n\t\\centering\n\t\\begin{tabular}{llll}\n\t\t\\toprule\n\t\t\\multicolumn{4}{c}{dataset distributions} \\\\\n\t\t\\midrule\n\t\tdataset name \t& train \t& validation \t& test \\\\\n\t\t\\midrule\n\t\tDRscans \t\t& 10000\t\t& 1000\t\t\t& \/ \t\\\\\n\t\tDRcad \t\t\t& 10000\t\t& 1000\t\t\t& \/\t\t\\\\\n\t\tSDRscans\t\t& 8000\t\t& 2000\t\t\t& \/\t \t\\\\\n\t\tSDRcad\t\t\t& 8000\t\t& 2000\t\t\t& \/\t \t\\\\\n\t\tMR-DR(GS) \t\t& 6443\t\t& 1536\t\t\t& \/\t\\\\\n\t\tKlinikum\t\t& 660\t\t& 110\t\t\t& 
331	\\
		\bottomrule
	\end{tabular}
	\label{tab:datasetDistr}
\end{table}

\section{Experiment Results}
\label{sec:experiments}


To experimentally investigate whether and how well 3D scanned clothing compares to designed CAD clothing for detection in the medical environment, and how much additional real data is needed to achieve sufficient accuracy, various tests were carried out.

For our experiments we used the Scaled-YOLOv4 implementation from GitHub \cite{Wang_2021_CVPR}.
First, six baseline networks were trained to provide a basic comparison of the different methods. These baseline models include trainings with synthetic (DRscans, DRcad, SDRscans, SDRcad), mixed-reality (MR-DR) and real data from the clinic domain (Klinikum train).

Training was conducted with the default finetune parameters provided by Scaled-YOLOv4\footnote{https://github.com/WongKinYiu/ScaledYOLOv4}. Only the mosaic augmentation ratio parameters $\alpha$ and $\beta$ were increased from $8.0$ to $20.0$. Additionally, a green-channel augmentation was used whenever MR data was present in the training dataset, in order to reduce the influence of greenscreen reflections, which caused problems in some classes; we found experimentally that this gives the best results. All networks were trained for 300 epochs and achieved convergence. Tests were performed with an IoU threshold of 0.5 and a confidence threshold of 0.2. The network used was yolov4-p5 with an image size of 896 for both training and testing, together with the provided pretrained weights.

All trained models were tested on the Klinikum test-set. The results of the baseline models are displayed in table \ref{tab:baselinesTableResults}.

The results show that CAD-based synthetic data generally gives better results than SCAN-based data in this experiment.
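The green-channel augmentation mentioned above can be sketched as follows; the scaling range and exact formulation are our illustrative assumptions, not the parameters actually used in training.

```python
import numpy as np

def green_channel_augment(image, rng, low=0.6, high=1.0):
    """Randomly attenuate the green channel of an RGB uint8 image.

    A minimal sketch of a green-channel augmentation intended to make the
    network less sensitive to greenscreen reflections; `low`/`high` are
    assumed values for illustration.
    """
    out = image.astype(np.float32)
    out[..., 1] *= rng.uniform(low, high)  # channel 1 = green in RGB order
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = np.full((4, 4, 3), 200, dtype=np.uint8)  # toy gray patch
aug = green_channel_augment(img, rng)          # green dimmed, red/blue kept
```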
This is why we used the SDRcad dataset as the baseline for all follow-up experiments.

To investigate by how much the amount of real data can be reduced, while maintaining sufficient accuracy, when combined with synthetic or MR data, experiments were conducted with percentage subsets of the Klinikum training data. We decided to use the mosaic augmentation during these experiments as well and to use all datasets jointly as training data instead of running a finetuning experiment. We argue that the network can better learn relevant features, while maintaining the advantages of the additional synthetic data, when it sees all used datasets mixed together by the mosaic augmentation than when it is only finetuned. During these experiments we included the aforementioned green-channel augmentation in all trainings. Additionally, the real-data runs were trained with equal step size.

The follow-up results with SDRcad, MR-DR and real data are shown in table \ref{tab:followUpResults}. While generally reaching worse results than the SDR-based datasets, the MR dataset provides an additional boost in accuracy when used together with the SDRcad dataset.

These results show that combining synthetic data, MR data and real data (whether all of it or only 15\%) further improves the mAP accuracy compared to using real data alone. They also show that with only 15\% of the real data, the combination of synthetic, MR and real data closes the gap to training on the full real dataset.


\begin{table}[ht]
	\small
	\caption{Results on Klinikum test-set for baseline trainings. 
*additional green-channel augmentation used}
	\centering
	\begin{tabular}{llllll}
		\multicolumn{5}{c}{\textbf{all}}\\
		\toprule
		Experiment 		& $mAP$ 			& $mAP50$ 			& P 				& R \\
		\toprule
		Klinikum train 	& 81.60				& 98.28 			& 88.99				& 98.73 \\
		\midrule
		DRscans 		& 46.06				& 75.51 			& 76.89				& 77.83 \\
		DRcad 			& 51.50				& 77.85 			& 76.40				& 81.56 \\
		SDRscans		& 65.52				& 87.12 			& 77.92				& 89.31 \\
		SDRcad 			& \textbf{67.44}	& \textbf{90.72} 	& \textbf{81.98}	& \textbf{92.02} \\
		MR-DR* 			& 60.94				& 82.29 			& 79.54				& 84.04 \\
		\bottomrule
	\end{tabular}
	\label{tab:baselinesTableResults}
\end{table}

\begin{table}
	\caption{Results on Klinikum test-set for follow-up experiments. For these experiments we used the additional green-channel augmentation on all reported trainings.}
	\centering
	\begin{tabular}{llllll}
		\multicolumn{5}{c}{\textbf{all}}\\
		\toprule
		Experiment 		& $mAP$ 			& $mAP50$ 			& P 				& R \\
		\toprule
		\midrule
		Klinikum(100) 	& 81.95				& \textbf{98.57} 	& \textbf{88.88}	& \textbf{98.85} \\
		\midrule
		SDR+MR+\\real(100) & \textbf{83.35}	& 98.14 			& 88.41				& 98.59 \\
		\midrule
		Klinikum(15) 	& 77.52				& 96.67 			& 87.83				& 97.37 \\
		\midrule
		SDR+MR+\\real(15)	& 80.05				& 96.92 			& 87.06				& 97.64 \\
		\midrule
		SDR+MR 			& 72.00				& 92.27 			& 83.81				& 94.07 \\
		\bottomrule
	\end{tabular}
	\label{tab:followUpResults}
\end{table}


Inference result images of SDR+MR+real(100) can be seen in figure \ref{fig:bestScoreKlinikExp}. 
Here we used a slightly higher confidence threshold of 0.4 and an IoU threshold of 0.5.


\begin{figure}
	\centering
	\begin{tabular}{l c}
		\includegraphics[width=.22\textwidth,trim={15cm 5cm 25cm 0cm},clip]{figures/resultsKlinikumTest/i_1630599989803.jpg}
		&
		\includegraphics[width=.22\textwidth,trim={20cm 8cm 25cm 2cm},clip]{figures/resultsKlinikumTest/i_1630600036303.jpg} \\

		\includegraphics[width=.22\textwidth,trim={25cm 5cm 25cm 10cm},clip]{figures/resultsKlinikumTest/i_1630600109803.jpg}
		&
		\includegraphics[width=.22\textwidth,trim={20cm 5cm 25cm 5cm},clip]{figures/resultsKlinikumTest/i_1630600252803.jpg} \\
	\end{tabular}
	\caption{Inference results with the SDR+MR+real(100) trained net, using a confidence threshold of 0.4 and an IoU threshold of 0.5.}
	\label{fig:bestScoreKlinikExp}
\end{figure}


\section{Conclusion}
\label{sec:conclusion}

We were able to show that the use of SMPL models together with scanned or designed medical clothing is a suitable method for modeling healthcare professionals for AI questions in the intervention space, using the example of medical clothing detection.
During our experiments we found that the designed clothing generally performed better on our test dataset than the 3D scanned clothing. This result surprised us, as we expected the potentially more accurate textures of the 3D scans to have a positive impact on detection rates. However, it cannot be ruled out that undetected artifacts in the rendering or pre-processing pipeline influenced this result. 
In order to make a final statement about the potential of 3D scanned clothing for modeling health professionals, further experiments should be conducted.
Using Mixed-Reality data together with the synthetic data closed the gap further, and while the margin is quite small, we could show that using synthetic, mixed-reality and 15\% real data nearly closes the remaining gap towards 100\% real data.

Finally, it can be stated that the presented modeling of health professionals is a promising methodology to address the challenge of missing datasets from medical intervention rooms. We will further investigate it on various tasks in the medical field.

\section{Acknowledgements}
This research project is part of the Research Campus M2OLIE and funded by the German Federal Ministry of Education and Research (BMBF) within the framework ``Forschungscampus -- public-private partnership for Innovations'' under the funding code 13GW0389C.

Disclaimer: The methods and information presented in this work are based on research and are not commercially available.


\bibliographystyle{unsrt}

\section{Introduction}
Let the domain of interest be $D=(0,1)^d$ where $d=2,3$. The Helmholtz
equation is
\begin{equation}
  \Delta u(x)+\dfrac{\omega^2}{c^2(x)}u(x)=f(x),\quad \forall x\in D,
  \label{eq:helm}
\end{equation}
where $u(x)$ is the time-independent wave field generated by the
time-independent force $f(x)$, $\omega$ is the angular frequency and
$c(x)$ is the velocity field. Commonly used boundary conditions are
approximations of the Sommerfeld radiation condition. By rescaling the
system, we assume $c_{\min}\le c(x)\le c_{\max}$ where $c_{\min}$
and $c_{\max}$ are of $\Theta(1)$. 
Then $\omega/(2\pi)$ is the typical
wave number and $\lambda=2\pi/\omega$ is the typical wavelength.

Solving the equation numerically is challenging in high frequency
settings for two reasons. First, in most applications, the equation is
discretized with at least a constant number of points per wavelength,
which makes the number of points in each direction $n=\Omega(\omega)$
and the total number of degrees of freedom $N=n^d=\Omega(\omega^d)$ very
large. Second, the system is highly indefinite and has a very
oscillatory Green's function, which makes most of the classical
iterative methods no longer effective.

There has been a sequence of papers on developing iterative methods
for solving \eqref{eq:helm}. The AILU method by Gander and Nataf
\cite{ailu} is the first to use the incomplete LU factorization to
precondition the equation. Engquist and Ying \cite{sweephmf,sweeppml}
developed a series of sweeping preconditioners based on approximating
the inverse of the Schur complements in the LDU factorization and
obtained essentially $\omega$-independent iteration numbers. In
\cite{stolk2013domaindecomp}, Stolk proposed a domain decomposition
method based on the PML which constructs delicate transmission
conditions between the subdomains by considering the ``pulses''
generated by the intermediate waves. In \cite{vion2014doublesweep},
Vion and Geuzaine proposed a double sweep preconditioner based on the
Dirichlet-to-Neumann (DtN) map and compared several numerical
simulations of the DtN map. In
\cite{chen2013sourcetrans,chen2013sourcetrans2}, Chen and Xiang
introduced a source transfer domain decomposition method which
emphasizes transferring the sources between the subdomains. 
In
\cite{demanet}, Zepeda-N{\'u}{\~n}ez and Demanet developed a novel
domain decomposition method for the 2D case by pairing up the waves
and their normal derivatives at the boundary of the subdomains and
splitting the transmission of the waves into two directions. Most
recently in \cite{Liu2015}, Liu and Ying proposed a recursive sweeping
preconditioner for 3D Helmholtz problems. Other progress includes
\cite{parallelsweep,sweepem,sweepemfem,sweepspectral} and we refer to
\cite{advances} by Erlangga and \cite{why} by Ernst and Gander for a
more complete discussion.

Inspired by \cite{stolk2013domaindecomp} and these previous
approaches, we propose a new domain decomposition method in this paper
which shares some similarities with
\cite{sweeppml,stolk2013domaindecomp}. The novelty of this new
approach is that the transmission conditions are built directly from the
boundary values of the intermediate waves. We divide the wave field on
each subdomain into three parts: the waves
generated by the force to the left of the subdomain, to the right of
the subdomain, and within the subdomain itself. This corresponds to an
$L+D+U$ decomposition of the Green's matrix $G$ as the sum of its
lower triangular part, upper triangular part and diagonal part, which
is why we call this new preconditioner the additive sweeping
preconditioner.

The rest of this paper is organized as follows. First, in Section
\ref{sec:1D} we use the 1D case to illustrate the idea of the
method. Then in Section \ref{sec:2D} we introduce the preconditioner
in 2D and present the 2D numerical results. Section
\ref{sec:3D} discusses the 3D case. Conclusions and some future
directions are provided in Section \ref{sec:Conclusion}.

\section{1D Illustration}
\label{sec:1D}

We use the PML \cite{berenger1994pml,chew1994pml,johnson2008pmlnotes}
to simulate the Sommerfeld condition. 
The PML introduces the auxiliary
functions
\begin{align*}
\sigma(x) &:=
\begin{dcases}
\dfrac{C}{\eta}\left(\dfrac{x-\eta}{\eta}\right)^2,\quad &x\in[0,\eta),\\
0,\quad &x\in[\eta,1-\eta],\\
\dfrac{C}{\eta}\left(\dfrac{x-1+\eta}{\eta}\right)^2,\quad &x\in(1-\eta,1],
\end{dcases}\\
s(x) &:= \left(1+\ii\dfrac{\sigma(x)}{\omega}\right)^{-1},
\end{align*}
where $C$ is an appropriate positive constant independent of $\omega$,
and $\eta$ is the PML width, which is typically around one wavelength.

The Helmholtz equation with PML in 1D is
\begin{equation*}
\begin{dcases}
\left((s(x)\dfrac{\dd}{\dd x})^2+\dfrac{\omega^2}{c^2(x)}\right)u(x)=f(x),\quad \forall x\in (0,1),\\
u(0)=0,\\
u(1)=0. 
\end{dcases}
\end{equation*}
We discretize the system with step size $h=1/(n+1)$, so that $n$
is the number of degrees of freedom. With the standard central difference
scheme the discretized equation is
\begin{equation}
\label{eqn:1D}
\dfrac{s_{i}}{h}\left(\dfrac{s_{i+1/2}}{h}(u_{i+1}-u_{i})-\dfrac{s_{i-1/2}}{h}(u_{i}-u_{i-1})\right)+\dfrac{\omega^2}{c_i^2}u_{i}=f_{i}, \quad \forall 1\le i\le n,
\end{equation}
where the subscript $i$ means that the corresponding function is evaluated at $x=ih$.

We denote Equation \eqref{eqn:1D} as $A\pmb u=\pmb f$, where $\pmb u$ and $\pmb f$ are the discrete arrays of the wave field and the
force,
\begin{align*}
\pmb u:=[u_1,\dots,u_n]^T,\quad \pmb f:=[f_1,\dots,f_n]^T. 
\end{align*}
In 1D, $A$ is tridiagonal and Equation \eqref{eqn:1D} can be solved
without any difficulty. However, here we are aiming at an approach
that generalizes to higher dimensions, so the rest of this
section takes another point of view to solve \eqref{eqn:1D} instead of
exploiting the sparsity structure of $A$ directly.

With the Green's matrix $G=A^{-1}$, $\pmb u$ can be written
as $\pmb u=G\pmb f$. 
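As a concrete illustration, the 1D discretization \eqref{eqn:1D} can be assembled and solved directly in a few lines; the parameter values below, and the constant velocity $c\equiv 1$, are illustrative assumptions only.

```python
import numpy as np

# Direct solve of the 1D PML-Helmholtz discretization: the tridiagonal
# matrix A is assembled from s at half-integer points; omega, n, eta, C
# are illustrative choices.
omega = 2 * np.pi * 8          # angular frequency
n = 255                        # degrees of freedom
h = 1.0 / (n + 1)
eta = 8 * h                    # PML width
C = 40.0                       # PML strength constant

def sigma(x):
    left = (x < eta) * (C / eta) * ((x - eta) / eta) ** 2
    right = (x > 1 - eta) * (C / eta) * ((x - 1 + eta) / eta) ** 2
    return left + right

def s(x):
    return 1.0 / (1.0 + 1j * sigma(x) / omega)

x = np.arange(1, n + 1) * h
si, sp, sm = s(x), s(x + h / 2), s(x - h / 2)   # s_i, s_{i+1/2}, s_{i-1/2}

A = np.zeros((n, n), dtype=complex)
idx = np.arange(n)
A[idx, idx] = -si * (sp + sm) / h**2 + omega**2          # c(x) = 1
A[idx[:-1], idx[1:]] = (si * sp)[:-1] / h**2             # upper diagonal
A[idx[1:], idx[:-1]] = (si * sm)[1:] / h**2              # lower diagonal

f = np.zeros(n, dtype=complex)
f[n // 2] = 1.0 / h            # approximate point source at x = 1/2
u = np.linalg.solve(A, f)      # the direct solve that we avoid in 2D/3D
```

In 1D this dense (or banded) solve is cheap; the point of the following construction is to obtain the same wave field without ever forming $G$.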
Now let us divide the discrete grid into $m$ parts. We\nassume that $\\eta=\\gamma h$ and $n=2\\gamma+mb-2$ where $\\gamma$ and $b$ are some\nsmall constants and $m$ is comparable to $n$, and we define\n\\begin{align*}\nX_1 &:= \\{ih:1 \\le i \\le \\gamma + b-1 \\}, \\\\\nX_p &:= \\{ih:\\gamma + (p-1) b \\le i \\le \\gamma + pb-1 \\},\\quad p=2,\\dots, m-1, \\\\\nX_m &:= \\{ih:\\gamma + (m-1) b \\le i \\le 2 \\gamma + mb-2 \\}, \n\\end{align*}\nwhich means, $X_1$ is the leftmost part containing the left PML of the\noriginal problem and a small piece of grid with $b$ points, $X_m$ is\nthe rightmost part containing the right PML and a grid of $b$ points,\nand $X_p, p=2,\\dots,m-1$ are the middle parts each of which contains\n$b$ points. $\\pmb u_p$ and $\\pmb f_p$ are defined as the restrictions\nof $\\pmb u$ and $\\pmb f$ on $X_p$ for $p=1,\\dots,m$, respectively,\n\\begin{align*}\n\\pmb u_1 &:= [u_1,\\dots,u_{\\gamma+b-1}]^T,\\\\\n\\pmb u_p &:= [u_{\\gamma+(p-1)b},\\dots,u_{\\gamma + pb-1}]^T,\\quad p=2,\\dots,m-1,\\\\\n\\pmb u_m &:= [u_{\\gamma + (m-1) b},\\dots,u_{2 \\gamma + m b -2}]^T,\\\\\n\\pmb f_1 &:= [f_1,\\dots,f_{\\gamma+b-1}]^T,\\\\\n\\pmb f_p &:= [f_{\\gamma+(p-1)b},\\dots,f_{\\gamma + pb-1}]^T,\\quad p=2,\\dots,m-1,\\\\\n\\pmb f_m &:= [f_{\\gamma + (m-1) b},\\dots,f_{2 \\gamma + m b -2}]^T.\n\\end{align*}\nThen $u=Gf$ can be written as\n\\begin{align*}\n\\begin{bmatrix}\n\\pmb u_1\\\\\\pmb u_2\\\\ \\vdots\\\\\\pmb u_m\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nG_{1,1}&G_{1,2}&\\ldots&G_{1,m}\\\\\nG_{2,1}&G_{2,2}&\\ldots&G_{2,m}\\\\\n\\vdots&\\vdots&&\\vdots\\\\\nG_{m,1}&G_{m,2}&\\ldots&G_{m,m}\n\\end{bmatrix}\n\\begin{bmatrix}\n\\pmb f_1\\\\\\pmb f_2\\\\ \\vdots\\\\\\pmb f_m\n\\end{bmatrix}.\n\\end{align*}\n\nBy introducing $\\pmb u_{p,q}:=G_{p,q}\\pmb f_q$ for $1\\le p,q\\le m$,\none can write $\\pmb u_p=\\sum_{q=1}^m \\pmb u_{p,q}$. 
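The identity $\pmb u_p=\sum_{q=1}^m \pmb u_{p,q}$ can be verified on a toy system; the random matrix below merely stands in for $A$, and the block sizes are toy values.

```python
import numpy as np

# Sanity check of the block decomposition u_p = sum_q G_{p,q} f_q on a
# small random stand-in for the Helmholtz matrix.
rng = np.random.default_rng(1)
m, b = 4, 3                                        # blocks and block size
n = m * b
A = rng.standard_normal((n, n)) + n * np.eye(n)    # well-conditioned stand-in
G = np.linalg.inv(A)
f = rng.standard_normal(n)

u = G @ f
blocks = [slice(p * b, (p + 1) * b) for p in range(m)]
for p in range(m):
    # u_{p,q} = G_{p,q} f_q; summing over q must recover u_p
    u_p = sum(G[blocks[p], blocks[q]] @ f[blocks[q]] for q in range(m))
    assert np.allclose(u_p, u[blocks[p]])
```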
The physical
meaning of $\pmb u_{p,q}$ is the contribution of the force $\pmb f_q$
defined on the grid $X_q$ acting upon the grid $X_p$. If we knew the
matrix $G$, the computation of $\pmb u_{p,q}$ could be carried out
directly. However, computing $G$, or even applying $G$ to the vector
$\pmb f$, is computationally expensive. The additive sweeping method
circumvents this difficulty by approximating the blocks of $G$
sequentially, and the idea carries over to higher dimensions. In what follows,
we shall use $\td{\pmb u}_{p,q}$ to denote the approximations of $\pmb
u_{p,q}$.


\subsection{Approximating $\pmb u_{p,q}$ with auxiliary PMLs}

\subsubsection{Wave generated by $\pmb f_1$}

The components ${\pmb u}_{p,1}$ for $p=1,\dots,m$ can be regarded as a
sequence of right-going waves generated by $\pmb f_1$. Note that the
boundary condition of the system is the approximated Sommerfeld
condition. If we assume that the reflection during the transmission of
the wave is negligible, then, to approximate $\pmb u_{1,1}$, we can
simply put an artificial PML on the right of the grid $X_1$ and solve a
much smaller problem, since the domain of interest here is only $X_1$
(see Figure \ref{fig:approxU11}). To be precise, we define
\begin{align*}
\sigma_1^{M}(x) &:= 
\begin{dcases}
\dfrac{C}{\eta}\left(\dfrac{x-\eta}{\eta}\right)^2,\quad &x\in[0,\eta),\\
0,\quad &x\in[\eta,\eta+(b-1)h],\\
\dfrac{C}{\eta}\left(\dfrac{x-(\eta+(b-1)h)}{\eta}\right)^2,\quad &x\in(\eta+(b-1)h,2\eta+(b-1)h],
\end{dcases}\\
s_1^{M}(x) &:= \left(1+\ii\dfrac{\sigma_1^{M}(x)}{\omega}\right)^{-1}. 
\n\\end{align*}\nWe consider a subproblem on the auxiliary domain $D_1^M := (0,2\\eta + (b - 1)h)$\n\\begin{align*}\n\\begin{dcases}\n\\left((s_1^{M}(x)\\dfrac{\\dd}{\\dd x})^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=g(x),\\quad &\\forall x\\in D_1^M,\\\\\nv(x)=0, \\quad & \\forall x\\in \\partial D_1^M.\n\\end{dcases}\n\\end{align*}\nWith the same discrete numerical scheme and step size $h$, we have the\ncorresponding discrete system $H_1^{M} \\pmb v=\\pmb g$ on the extended grid\n\\begin{align*}\nX_1^{M}:=\\{ih :1\\le i\\le 2\\gamma+b-2\\}. \n\\end{align*}\nFigure \\ref{fig:Xp} shows a graphical view of $X_1^M$, as well as other extended grids which we will see later.\n\nWith the discrete system $H_1^M \\pmb v = \\pmb g$, we can define an operator $\\td{G}_{1}^{M}:\\pmb y\\to \\pmb z$, which is an approximation of\n$G_{1,1}$, by the following:\n\\begin{enumerate}\n\\item \n Introduce a vector $\\pmb g$ defined on $X_1^{M}$ by setting\n $\\pmb y$ to $X_1$ and zero everywhere else.\n\\item \n Solve $H_1^{M} \\pmb v=\\pmb g$ on $X_1^{M}$.\n\\item \n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_1$.\n\\end{enumerate}\nThen $\\td{\\pmb u}_{1,1}$ can be set as\n\\begin{align*}\n\\td{\\pmb u}_{1,1}:=\\td{G}_{1}^{M}\\pmb f_1.\n\\end{align*}\n\n\\begin{figure}[h!]\n \\centering\n \n \\begin{tikzpicture}\n [x=2cm,y=2cm,>=latex]\n \\draw\n (0.0 , 0.0)--(7.0 , 0.0)\n \n (0.0 , 1.5)--(2.0 , 1.5)\n (2.5 , 1.5)--(4.5 , 1.5)\n (5.0 , 1.5)--(7.0 , 1.5)\n \n (0.0 , 1.0)--(1.5 , 1.0)\n (2.5 , 1.0)--(4.0 , 1.0)\n \n (3.0 , 2.0)--(4.5 , 2.0)\n (5.5 , 2.0)--(7.0 , 2.0)\n ;\n \n \\draw\n (0.0 , -0.1)--(0.0 , 0.1)\n (1.5 , -0.1)--(1.5 , 0.1)\n (3.0 , -0.1)--(3.0 , 0.1)\n (4.0 , -0.1)--(4.0 , 0.1)\n (5.5 , -0.1)--(5.5 , 0.1)\n (7.0 , -0.1)--(7.0 , 0.1)\n \n (0.0 , 1.0 - 0.1)--(0.0 , 1.0 + 0.1)\n (1.5 , 1.0 - 0.1)--(1.5 , 1.0 + 0.1)\n (2.5 , 1.0 - 0.1)--(2.5 , 1.0 + 0.1)\n (4.0 , 1.0 - 0.1)--(4.0 , 1.0 + 0.1)\n \n (3.0 , 2.0 - 0.1)--(3.0 , 2.0 + 0.1)\n (4.5 , 2.0 - 0.1)--(4.5 , 2.0 + 
0.1)\n (5.5 , 2.0 - 0.1)--(5.5 , 2.0 + 0.1)\n (7.0 , 2.0 - 0.1)--(7.0 , 2.0 + 0.1)\n \n (0.0 , 1.5 - 0.1)--(0.0 , 1.5 + 0.1)\n (2.0 , 1.5 - 0.1)--(2.0 , 1.5 + 0.1)\n (2.5 , 1.5 - 0.1)--(2.5 , 1.5 + 0.1)\n (4.5 , 1.5 - 0.1)--(4.5 , 1.5 + 0.1)\n (5.0 , 1.5 - 0.1)--(5.0 , 1.5 + 0.1)\n (7.0 , 1.5 - 0.1)--(7.0 , 1.5 + 0.1)\n ;\n \n \\draw\n (0.0 , 0.0)node[anchor=south west]{PML}\n (0.0 , 1.0)node[anchor=south west]{PML}\n (0.0 , 1.5)node[anchor=south west]{PML}\n (2.5 , 1.0)node[anchor=south west]{PML}\n (2.5 , 1.5)node[anchor=south west]{PML}\n (5.0 , 1.5)node[anchor=south west]{PML}\n (7.0 , 0.0)node[anchor=south east]{PML}\n (7.0 , 1.5)node[anchor=south east]{PML}\n (7.0 , 2.0)node[anchor=south east]{PML}\n (4.5 , 1.5)node[anchor=south east]{PML}\n (4.5 , 2.0)node[anchor=south east]{PML}\n (2.0 , 1.5)node[anchor=south east]{PML}\n ;\n \n \\draw\n (1.0 , 0.0)node[anchor = north]{$X_1$}\n (1.0 , 1.0)node[anchor = north]{$X_1^L$}\n (1.0 , 1.5)node[anchor = north]{$X_1^M$}\n \n (6.0 , 0.0)node[anchor = north]{$X_m$}\n (6.0 , 2.0)node[anchor = north]{$X_m^R$}\n (6.0 , 1.5)node[anchor = north]{$X_m^M$}\n \n (3.5 , 0.0)node[anchor = north]{$X_p$}\n (3.5 , 1.0)node[anchor = north]{$X_p^L$}\n (3.5 , 1.5)node[anchor = north]{$X_p^M$}\n (3.5 , 2.0)node[anchor = north]{$X_p^R$}\n ;\n \n \\draw\n (2.25 , 0.0)node[fill=white]{$\\ldots$}\n \n (4.75 , 0.0)node[fill=white]{$\\ldots$}\n \n ;\n \\end{tikzpicture}\n \n \\caption{This figure shows how the grids $X_p$ are extended with auxiliary PMLs.}\n \\label{fig:Xp}\n\\end{figure}\n\nOnce we have computed $\\td{\\pmb u}_{1,1}$, we can use the right\nboundary value of $\\td{\\pmb u}_{1,1}$ to compute $\\td{\\pmb u}_{2,1}$\nby introducing an auxiliary PML on the right of $X_2$ and solving the\nboundary value problem with the left boundary value at\n$x=(\\gamma+b-1)h$ equal to the right boundary value of $\\td{\\pmb\n u}_{1,1}$. 
The same process can be repeated to compute $\\td{\\pmb\n u}_{p+1,1}$ by exploiting the right boundary value of $\\td{\\pmb\n u}_{p,1}$ recursively for $p=2,\\dots,m-1$ (see Figure\n\\ref{fig:approxUp1}). In the following context of this section, we\nintroduce notations $g^{L}, g^{R}$ for a vector array $\\pmb\ng=[g_1,\\dots,g_s]^T$ by\n\\begin{align*}\ng^{L}:=g_1,\\quad g^{R}:=g_s,\n\\end{align*}\nwhere $g^{L}$ and $g^{R}$ should be interpreted as the\nleftmost and the rightmost element of the array $\\pmb g$.\n\n\n\\begin{figure}[h!]\n \\centering\n \\subfigure[The wave $\\pmb u_1$ (shown as the gray arrow) generated by $\\pmb f_1$.]{\n \\begin{tikzpicture}\n [x=3cm,y=3cm,>=latex]\n \\draw[step=1](0,-0.1)grid(4,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(1.5,0)node[below=3]{$X_2$}(3.5,0)node[below=3]{$X_m$};\n \\draw[<->,gray,very thick](0,0.5)--(4,0.5);\n \\draw(0.5,0.5)node[fill=white]{$\\pmb f_1$};\n \\draw(2.0,0.5)node[below=3]{$\\pmb u_1$};\n \\draw(2.5,0.5)node[fill=white]{$\\ldots$}(2.5,0)node[fill=white]{$\\ldots$};\n \\draw(0,0)node[anchor=south west]{PML}(4,0)node[anchor=south east]{PML};\n \\end{tikzpicture}\n }\n \\subfigure[$\\td{\\pmb u}_{1,1}$ is computed by introducing an auxiliary PML on the right of $X_1$. 
The dotted gray arrow stands for the restriction of ${\\pmb u}_1$ on $X_2\\cup\\dots \\cup X_m$, which is to be approximated.]{\n \\label{fig:approxU11}\n \\begin{tikzpicture}\n [x=3cm,y=3cm,>=latex]\n \\draw[step=1](0,-0.1)grid(4,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(1.5,0)node[below=3]{$X_2$}(3.5,0)node[below=3]{$X_m$};\n \\draw[<->,gray,very thick](0,0.5)--(1,0.5);\n \\draw[->,dotted,gray,very thick](1,0.5)--(4,0.5);\n \\draw(0.5,0.5)node[fill=white]{$\\pmb f_1$};\n \\draw(0.5,0.5)node[below=3]{$\\td{\\pmb u}_{1,1}$};\n \\draw(2.5,0.5)node[fill=white]{$\\ldots$}(2.5,0)node[fill=white]{$\\ldots$};\n \\draw(0,0)node[anchor=south west]{PML}(1,0)node[anchor=south west]{PML};\n \\end{tikzpicture}\n }\n \\subfigure[$\\td {\\pmb u}_{p,1}$ for $p=2,\\dots,m$ are computed sequentially.]{\n \\label{fig:approxUp1}\n \\begin{tikzpicture}\n [x=3cm,y=3cm,>=latex]\n \\draw[step=1](0,-0.1)grid(4,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(1.5,0)node[below=3]{$X_2$}(3.5,0)node[below=3]{$X_m$};\n \\draw[<->,gray,very thick](0,0.5)--(1,0.5);\n \\foreach \\x in{1,2,3}\n {\n \\draw[->,gray,very thick](\\x,0.5)--(\\x+1,0.5);\n }\n \\draw(0.5,0.5)node[fill=white]{$\\pmb f_1$};\n \\draw(0.5,0.5)node[below=3]{$\\td{\\pmb u}_{1,1}$}(1.5,0.5)node[below=3]{$\\td {\\pmb u}_{2,1}$}(3.5,0.5)node[below=3]{$\\td {\\pmb u}_{m,1}$};\n \\draw(2.5,0.5)node[fill=white]{$\\ldots$}(2.5,0)node[fill=white]{$\\ldots$};\n \\end{tikzpicture}\n }\n \\subfigure[$\\td {\\pmb u}_{p,m}$ for $p=m,\\dots,1$ are computed sequentially.]\n {\n \\label{fig:approxUpm}\n \\begin{tikzpicture}\n [x=3cm,y=3cm,>=latex]\n \\draw[step=1](0,-0.1)grid(4,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(2.5,0)node[below=3]{$X_{m-1}$}(3.5,0)node[below=3]{$X_m$};\n \\draw[<->,gray,very thick](3,0.5)--(4,0.5);\n \\foreach \\x in{3,2,1}\n {\n \\draw[->,gray,very thick](\\x,0.5)--(\\x-1,0.5);\n }\n \\draw(3.5,0.5)node[fill=white]{$\\pmb f_m$};\n \\draw(0.5,0.5)node[below=3]{$\\td {\\pmb u}_{1,m}$}(2.5,0.5)node[below=3]{$\\td {\\pmb 
u}_{m-1,m}$}(3.5,0.5)node[below=3]{$\\td {\\pmb u}_{m,m}$};\n \\draw(1.5,0.5)node[fill=white]{$\\ldots$}(1.5,0)node[fill=white]{$\\ldots$};\n \\end{tikzpicture}\n }\n \\subfigure[$\\td {\\pmb u}_{p,q}$ are computed for $p=q$ first, and then for $p=q+1,\\dots,m$ and for $p=q-1,\\dots,1$ sequentially.]\n {\n \\label{fig:approxUpq}\n \\begin{tikzpicture}\n [x=2cm,y=2cm,>=latex]\n \\draw[step=1](0,-0.1)grid(7,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(2.5,0)node[below=3]{$X_{q-1}$}(3.5,0)node[below=3]{$X_q$}(4.5,0)node[below=3]{$X_{q+1}$}(6.5,0)node[below=3]{$X_m$};\n \\draw[<->,gray,very thick](3,0.5)--(4,0.5);\n \\foreach \\x in{3,2,1}\n {\n \\draw[->,gray,very thick](\\x,0.5)--(\\x-1,0.5);\n \\draw[->,gray,very thick](\\x+3,0.5)--(\\x+4,0.5);\n }\n \\draw(3.5,0.5)node[fill=white]{$\\pmb f_q$};\n \\draw(0.5,0.5)node[below=3]{$\\td {\\pmb u}_{1,q}$}(2.5,0.5)node[below=3]{$\\td {\\pmb u}_{q-1,q}$}(3.5,0.5)node[below=3]{$\\td {\\pmb u}_{q,q}$}(4.5,0.5)node[below=3]{$\\td {\\pmb u}_{q+1,q}$}(6.5,0.5)node[below=3]{$\\td {\\pmb u}_{m,q}$};\n \\draw(1.5,0.5)node[fill=white]{$\\ldots$}(1.5,0)node[fill=white]{$\\ldots$}(5.5,0.5)node[fill=white]{$\\ldots$}(5.5,0)node[fill=white]{$\\ldots$};\n \\end{tikzpicture}\n }\n \\caption{This figure shows how $\\td {\\pmb u}_{p,q}$ are generated. The direction of the arrows indicates the computing orders of the approximating waves.}\n\\end{figure}\n\n\nTo formalize the definition of $\\td{\\pmb u}_{p,1}$ for each\n$p=2,\\dots,m$, we introduce the auxiliary domain $D_p^R$, which will be defined below, to simulate the right-transmission of the waves. The superscript $R$ means that the auxiliary domain is intended for approximating the right-going waves. The left boundary of $D_p^R$ will be denoted as $\\partial^L D_p^R$, on which the boundary value will be used to approximate the wave transmission as we shall see. 
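The computation order illustrated in Figure \ref{fig:approxUpq} (first $\td{\pmb u}_{q,q}$, then the right-going and left-going sweeps) can be written structurally as follows; the operators \texttt{G\_M}, \texttt{G\_R}, \texttt{G\_L} below are toy stand-ins for the subdomain solves, not their actual implementations.

```python
# Structural sketch of the additive sweep order for the contributions
# u~_{p,q}: compute the middle wave on X_q first, then sweep right for
# p > q and left for p < q. The operators here are toy damping maps
# that only illustrate the order of computation, not PDE solves.
def sweep(m, q, f_q, G_M, G_R, G_L):
    u = {}
    u[q] = G_M(f_q)                    # wave generated inside X_q
    for p in range(q + 1, m + 1):      # right-going sweep
        u[p] = G_R(u[p - 1])
    for p in range(q - 1, 0, -1):      # left-going sweep
        u[p] = G_L(u[p + 1])
    return u

damp = lambda v: 0.9 * v               # toy transmission operator
waves = sweep(m=5, q=3, f_q=1.0, G_M=lambda v: v, G_R=damp, G_L=damp)
```

For $q=1$ only the right sweep is active and for $q=m$ only the left sweep, matching the two special cases treated above.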
We also extend $X_p$ with an auxiliary PML on the right to form an extended grid $X_p^R$ (see Figure \ref{fig:Xp}), which corresponds to the discretization of $D_p^R$. To be specific, we define
\begin{align*}
D_p^R &:= (\eta+((p-1)b-1)h,2\eta+(pb-1)h), \\
\partial^L D_p^R &:= \{\eta+((p-1)b-1)h\}, \\
X_p^{R} &:= \{ih:\gamma+(p-1)b\le i\le 2\gamma+pb-2\}.
\end{align*}
Note that the grid $X_m^{R}$ is $X_m$ itself since
$X_m$ already contains the original right PML region. The purpose of using the notation $X_m^{R}$ is to simplify the description of the algorithm.

For the PML on $D_p^{R}$, we define
\begin{align*}
  \sigma_p^{R}(x) &:= 
  \begin{dcases}
    0,\quad &x\in[\eta+((p-1)b-1)h,\eta+(pb-1)h],\\
    \dfrac{C}{\eta}\left(\dfrac{x-(\eta+(pb-1)h)}{\eta}\right)^2,\quad &x\in(\eta+(pb-1)h,2\eta+(pb-1)h],
  \end{dcases}\\
  s_p^{R}(x) &:= \left(1+\ii\dfrac{\sigma_p^{R}(x)}{\omega}\right)^{-1}.
\end{align*} 
We consider the following subproblem
\begin{align*}
  \begin{dcases}
    \left((s_p^{R}(x)\dfrac{\dd}{\dd x})^2+\dfrac{\omega^2}{c^2(x)}\right)v(x)=0,\quad &\forall x\in D_p^R,\\
    v(x)=w,\quad & \forall x \in \partial^L D_p^R,\\
    v(x)=0, \quad & \forall x \in \partial D_p^R \setminus\partial^L D_p^R,
  \end{dcases}
\end{align*}
where $w$ is the left boundary value of the unknown $v(x)$. We define $H_p^{R} \pmb v=\pmb g$ as the discretization of this
problem on $X_p^{R}$, where the right-hand side $\pmb g$ is given by
$\pmb g:= (-1/h^2)[w,0,\dots,0]^T$ as a result of the central
discretization. The subproblem $H_p^{R} \pmb v=\pmb g$ for each
$p=2,\dots,m$ induces the approximation operator $\td{G}_{p}^{R}:w \to
\pmb z$ defined by the following procedure:
\begin{enumerate}
\item
  Set $\pmb g=(-1/h^2)[w,0,\dots,0]^T$. 
\item
  Solve $H_p^{R} \pmb v =\pmb g$ on $X_p^{R}$. 
\n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_p$.\n\\end{enumerate}\nThen $\\td{\\pmb u}_{p,1}$ can be defined recursively for $p=2,\\dots,m$ by\n\\begin{align*}\n \\td{\\pmb u}_{p,1}:=\\td{G}_{p}^{R}\\td{\\pmb u}_{p-1,1}^{R}.\n\\end{align*}\nNote that, the operator $\\td{G}_p^{R}$ is not an approximation of the\nmatrix block $G_{p,1}$, since $\\td{G}_p^R$ maps the right boundary\nvalue of $\\td{\\pmb u}_{p-1,1}$ to $\\td{\\pmb u}_{p,1}$ while $G_{p,1}$\nmaps $\\pmb f_1$ to $\\pmb u_{p,1}$.\n\n\\subsubsection{Wave generated by $\\pmb f_m$}\n\nThe components ${\\pmb u}_{p,m}$ for $p=1,\\dots,m$ can be regarded as a\nsequence of left-going waves generated by $\\pmb f_m$. The method for\napproximating them is similar to what was done for $\\pmb f_1$ (see\nFigure \\ref{fig:approxUpm}). More specifically, for $\\td{\\pmb\n u}_{m,m}$ we define\n\\begin{align*}\n D_m^M &:= (1-2\\eta-(b-1)h,1),\\\\\n X_m^{M} &:= \\{ih:(m-1)b+1\\le i \\le 2\\gamma+mb-2\\},\\\\\n \\sigma_m^{M}(x) &:= \n \\begin{dcases}\n \\dfrac{C}{\\eta}\\left(\\dfrac{x-(1-\\eta-(b-1)h)}{\\eta}\\right)^2,\\quad &x\\in[1-2\\eta-(b-1)h,1-\\eta-(b-1)h),\\\\\n 0,\\quad &x\\in[1-\\eta-(b-1)h,1-\\eta],\\\\\n \\dfrac{C}{\\eta}\\left(\\dfrac{x-(1-\\eta)}{\\eta}\\right)^2,\\quad &x\\in(1-\\eta,1],\n \\end{dcases}\\\\\n s_m^{M}(x) &:= \\left(1+\\ii\\dfrac{\\sigma_m^{M}(x)}{\\omega}\\right)^{-1}. 
\end{align*}
We consider the continuous problem
\begin{align*}
  \begin{dcases}
  \left((s_m^{M}(x)\dfrac{\dd}{\dd x})^2+\dfrac{\omega^2}{c^2(x)}\right)v(x)=g(x),\quad & \forall x\in D_m^M,\\
  v(x)=0,\quad & \forall x\in \partial D_m^M,
  \end{dcases}
\end{align*}
and define $H_m^{M}\pmb v=\pmb g$ as its discretization on $X_m^{M}$.
The operator $\td{G}_{m}^{M}: \pmb y\mapsto \pmb z$ can be defined as:
\begin{enumerate}
\item
  Introduce a vector $\pmb g$ defined on $X_m^{M}$ by setting it to
  $\pmb y$ on $X_m$ and to zero everywhere else.
\item
  Solve $H_m^{M} \pmb v=\pmb g$ on $X_m^{M}$.
\item
  Set $\pmb z$ as the restriction of $\pmb v$ on $X_m$.
\end{enumerate}
Then
\begin{align*}
  \td{\pmb u}_{m,m}:=\td{G}_{m}^{M}\pmb f_m.
\end{align*}

For each $\td{\pmb u}_{p,m},p=1,\dots,m-1$, we introduce the auxiliary domain $D_p^L$, the right boundary $\partial ^R D_p^L$, the extended grid $X_p^L$, and the corresponding PML functions $\sigma_p^L(x), s_p^L(x)$ as follows:
\begin{align*}
  D_p^L &:= ((p-1)bh,\eta+pbh),\\
  \partial ^R D_p^L &:= \{\eta + pbh\},\\
  X_p^{L} &:= \{x_i:(p-1)b+1\le i\le \gamma+pb-1\},\\
  \sigma_p^{L}(x) &:=
  \begin{dcases}
  \dfrac{C}{\eta}\left(\dfrac{x-(\eta+(p-1)bh)}{\eta}\right)^2,\quad &x\in[(p-1)bh,\eta+(p-1)bh),\\
  0,\quad &x\in[\eta+(p-1)bh,\eta+pbh],
  \end{dcases}\\
  s_p^{L}(x) &:= \left(1+\ii\dfrac{\sigma_p^{L}(x)}{\omega}\right)^{-1},
\end{align*}
and we consider the continuous problem
\begin{align*}
  \begin{dcases}
  \left((s_p^{L}(x)\dfrac{\dd}{\dd
  x})^2+\dfrac{\omega^2}{c^2(x)}\right)v(x)=0,\quad & \forall x\in D_p^L,\\
  v(x) = w,\quad & \forall x \in \partial^R D_p^L,\\
  v(x) = 0,\quad & \forall x \in \partial D_p^L \setminus \partial^R D_p^L,
  \end{dcases}
\end{align*}
where $w$ is the right boundary value of $v(x)$.
Let $H_p^{L}\pmb v=\pmb g$ be its discretization on $X_p^{L}$ with
$\pmb g:=(-1/h^2)[0,\dots,0,w]^T$. We introduce the operator
$\td{G}_{p}^{L}:w\mapsto \pmb z$ by:
\begin{enumerate}
\item
  Set $\pmb g=(-1/h^2)[0,\dots,0,w]^T$.
\item
  Solve $H_p^{L} \pmb v =\pmb g$ on $X_p^{L}$.
\item
  Set $\pmb z$ as the restriction of $\pmb v$ on $X_p$.
\end{enumerate}
Then $\td{\pmb u}_{p,m}$ can be defined recursively for
$p=m-1,\dots,1$ by
\begin{align*}
\td{\pmb u}_{p,m}:=\td{G}_{p}^{L}\td{\pmb u}_{p+1,m}^{L}.
\end{align*}

\subsubsection{Wave generated by $\pmb f_q$ for $q=2,\dots,m-1$}

For each $q$, the components ${\pmb u}_{p,q}$ for $p=1,\dots,m$ can
be regarded as a sequence of left- and right-going waves generated by
$\pmb f_q$ (see Figure \ref{fig:approxUpq}). For $\td{\pmb u}_{q,q}$,
we introduce
\begin{align*}
D_q^M &:= ((q-1)bh,2\eta+(qb-1)h),\\
X_q^{M} &:= \{x_i:(q-1)b+1\le i \le 2\gamma+qb-2\},\\
\sigma_q^{M}(x) &:=
\begin{dcases}
\dfrac{C}{\eta}\left(\dfrac{x-(\eta+(q-1)bh)}{\eta}\right)^2,\quad &x\in[(q-1)bh,\eta+(q-1)bh),\\
0,\quad &x\in[\eta+(q-1)bh,\eta+(qb-1)h],\\
\dfrac{C}{\eta}\left(\dfrac{x-(\eta+(qb-1)h)}{\eta}\right)^2,\quad &x\in(\eta+(qb-1)h,2\eta+(qb-1)h],
\end{dcases}\\
s_q^{M}(x) &:= \left(1+\ii\dfrac{\sigma_q^{M}(x)}{\omega}\right)^{-1},
\end{align*}
and define $H_q^{M} \pmb v=\pmb g$ as the discretization of the continuous problem
\begin{align*}
\begin{dcases}
\left((s_q^{M}(x)\dfrac{\dd}{\dd x})^2+\dfrac{\omega^2}{c^2(x)}\right)v(x)=g(x),\quad & \forall x\in D_q^M,\\
v(x)=0, \quad & \forall x \in \partial D_q^M.
\end{dcases}
\end{align*}
We introduce the operator $\td{G}_q^{M}: \pmb y\mapsto \pmb z$ as:
\begin{enumerate}
\item
  Introduce a vector $\pmb g$ defined on $X_q^{M}$ by setting it to $\pmb y$ on $X_q$
  and to zero everywhere else.
\item
  Solve $H_q^{M} \pmb v=\pmb g$ on
$X_q^{M}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_q$. \n\\end{enumerate}\nThen\n\\begin{align*}\n \\td{\\pmb u}_{q,q}:= \\td{G}_q^{M} \\pmb f_q. \n\\end{align*}\nFollowing the above discussion, the remaining components $\\td{\\pmb\n u}_{p,q}$ are defined recursively as\n\\begin{align*}\n\\td{\\pmb u}_{p,q} &:= \\td{G}_{p}^{R} \\td{\\pmb u}_{p-1,q}^{R}, \\quad \\text{for }p=q+1,\\dots,m,\\\\\n\\td{\\pmb u}_{p,q} &:= \\td{G}_{p}^{L}\\td{\\pmb u}_{p+1,q}^{L},\\quad \\text{for } p=q-1,\\dots,1. \n\\end{align*}\n\n\\subsection{Accumulating the boundary values}\nAfter all the above are done, an approximation of $\\pmb u_p$ is\ngiven by (see Figure \\ref{fig:approxUseparate})\n\\begin{align*}\n\\td{\\pmb u}_p:=\\sum_{q=1}^m\\td{\\pmb u}_{p,q},\\quad p=1,\\dots,m.\n\\end{align*}\n\n\n\\begin{figure}[h!]\n \\centering\n \\subfigure[$\\td {\\pmb u}_p$ is a superposition of $\\td {\\pmb u}_{p,q},q=1,\\dots,m$.]{ \n \\label{fig:approxUseparate}\n \\begin{tikzpicture}\n [x=2cm,y=2cm,>=latex]\n \\draw[step=1](0,-0.1)grid(7,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(1.5,0)node[below=3]{$X_2$}(2.5,0)node[below=3]{$X_3$}(4.5,0)node[below=3]{$X_{m-2}$}(5.5,0)node[below=3]{$X_{m-1}$}(6.5,0)node[below=3]{$X_m$};\n \\foreach \\x in{0,...,5}\n {\n \\draw[->,gray,very thick](\\x+1,2.5)--(\\x+2,2.5);\n \\draw[<-,gray,very thick](\\x,0.5)--(\\x+1,0.5);\n }\n \\draw[<->,gray,very thick](6,0.5)--(7,0.5);\n \\draw[<->,gray,very thick](0,2.5)--(1,2.5);\n \n \\foreach \\x in{4,5,6}\n \\draw[->,gray,very thick](\\x,1.5)--(\\x+1,1.5);\n \\foreach \\x in{0,1,2}\n \\draw[<-,gray,very thick](\\x,1.5)--(\\x+1,1.5);\n \\draw[<->,gray,very thick](3,1.5)--(4,1.5);\n \n \\draw(6.5,0.5)node[fill=white]{$\\pmb f_m$};\n \\draw(0.5,2.5)node[fill=white]{$\\pmb f_1$};\n \n \\foreach \\x in{0.5,1.5,2.5}\n \\draw(3.5,\\x)node[fill=white]{$\\ldots$};\n \n \\foreach \\x in{0.5,1.5,2.5,4.5,5.5,6.5}\n \\foreach \\y in{1.0,2.0}\n \\draw(\\x,\\y)node{$\\vdots$};\n \n 
\\draw(0.5,0.5)node[below=3]{$\\td {\\pmb u}_{1,m}$}(1.5,0.5)node[below=3]{$\\td {\\pmb u}_{2,m}$}(2.5,0.5)node[below=3]{$\\td {\\pmb u}_{3,m}$}(4.5,0.5)node[below=3]{$\\td {\\pmb u}_{m-2,m}$}(5.5,0.5)node[below=3]{$\\td {\\pmb u}_{m-1,m}$}(6.5,0.5)node[below=3]{$\\td {\\pmb u}_{m,m}$};\n \n \\draw(3.5,0)node[fill=white]{$\\ldots$};\n \\draw(0.5,1.5)node[below=3]{$\\td {\\pmb u}_{1,q}$}(1.5,1.5)node[below=3]{$\\td {\\pmb u}_{2,q}$}(2.5,1.5)node[below=3]{$\\td {\\pmb u}_{3,q}$}(4.5,1.5)node[below=3]{$\\td {\\pmb u}_{m-2,q}$}(5.5,1.5)node[below=3]{$\\td {\\pmb u}_{m-1,q}$}(6.5,1.5)node[below=3]{$\\td {\\pmb u}_{m,q}$};\n \n \\draw(3.5,0)node[fill=white]{$\\ldots$};\n \\draw(0.5,2.5)node[below=3]{$\\td {\\pmb u}_{1,1}$}(1.5,2.5)node[below=3]{$\\td {\\pmb u}_{2,1}$}(2.5,2.5)node[below=3]{$\\td {\\pmb u}_{3,1}$}(4.5,2.5)node[below=3]{$\\td {\\pmb u}_{m-2,1}$}(5.5,2.5)node[below=3]{$\\td {\\pmb u}_{m-1,1}$}(6.5,2.5)node[below=3]{$\\td {\\pmb u}_{m,1}$};\n \n \\foreach \\x in{0.5,1.5,2.5,4.5,5.5,6.5}\n \\draw[->,>=latex,line width=3 pt](\\x,2.7)--(\\x,3.0);\n \n \\draw(0.5,3.0)node[above]{$\\td {\\pmb u}_1$}(1.5,3.0)node[above]{$\\td {\\pmb u}_2$}(2.5,3.0)node[above]{$\\td {\\pmb u}_3$}(4.5,3.0)node[above]{$\\td {\\pmb u}_{m-2}$}(5.5,3.0)node[above]{$\\td {\\pmb u}_{m-1}$}(6.5,3.0)node[above]{$\\td {\\pmb u}_m$};\n \\end{tikzpicture}\n }\n \n \\subfigure[$\\td {\\pmb u}_p$ is a superposition of $\\td {\\pmb u}_{p,1:p-1}$, $\\td {\\pmb u}_{p,p}$ and $\\td {\\pmb u}_{p,p+1:m}$.]{\n \\label{fig:approxUaccumulate}\n \\begin{tikzpicture}\n [x=2cm,y=2cm,>=latex]\n \\draw[step=1](0,-0.1)grid(7,0.1);\n \\draw(0.5,0)node[below=3]{$X_1$}(1.5,0)node[below=3]{$X_2$}(2.5,0)node[below=3]{$X_3$}(4.5,0)node[below=3]{$X_{m-2}$}(5.5,0)node[below=3]{$X_{m-1}$}(6.5,0)node[below=3]{$X_m$};\n \n \\foreach \\x in{0,...,5}\n {\n \\draw[->,gray,very thick](\\x+1,1.5)--(\\x+2,1.5);\n \\draw[<-,gray,very thick](\\x,0.5)--(\\x+1,0.5);\n }\n \\draw[<->,gray,very thick](6,0.5)--(7,0.5);\n 
\\draw[<->,gray,very thick](0,1.5)--(1,1.5);\n \n \\draw(6.5,0.5)node[fill=white]{$\\pmb f_m$};\n \\draw(0.5,1.5)node[fill=white]{$\\pmb f_1$};\n \n \\foreach \\x in{0.5,1.5}\n \\draw(3.5,\\x)node[fill=white]{$\\ldots$};\n \n \\foreach \\x in{1,...,5}\n {\n \\draw[<->,gray,very thick](\\x,0.5)--(\\x+1,1.5);\n \\fill[white](3.35,0.7)rectangle(3.65,1.3);\n }\n \n \\draw(1.5,1.0)node[fill=white]{$\\pmb f_2$}(2.5,1.0)node[fill=white]{$\\pmb f_3$}(3.5,1.0)node[fill=white]{$\\ldots$}(4.5,1.0)node[fill=white]{$\\pmb f_{m-2}$}(5.5,1.0)node[fill=white]{$\\pmb f_{m-1}$};\n \n \\draw(1.5,1.0)node[below=3]{$\\td {\\pmb u}_{2,2}$}(2.5,1.0)node[below=3]{$\\td {\\pmb u}_{3,3}$}(4.5,1.0)node[below=3]{$\\td {\\pmb u}_{m-2,m-2}$}(5.5,1.0)node[below=3]{$\\td {\\pmb u}_{m-1,m-1}$};\n \n \\draw(0.5,0.5)node[below=3]{$\\td {\\pmb u}_{1,2:m}$}(1.5,0.5)node[below=3]{$\\td {\\pmb u}_{2,3:m}$}(2.5,0.5)node[below=3]{$\\td {\\pmb u}_{3,4:m}$}(4.5,0.5)node[below=3]{$\\td {\\pmb u}_{m-2,m-1:m}$}(5.5,0.5)node[below=3]{$\\td {\\pmb u}_{m-1,m}$}(6.5,0.5)node[below=3]{$\\td {\\pmb u}_{m,m}$};\n \n \\draw(3.5,0)node[fill=white]{$\\ldots$};\n \\draw(0.5,1.5)node[below=3]{$\\td {\\pmb u}_{1,1}$}(1.5,1.5)node[below=3]{$\\td {\\pmb u}_{2,1}$}(2.5,1.5)node[below=3]{$\\td {\\pmb u}_{3,1:2}$}(4.5,1.5)node[below=3]{$\\td {\\pmb u}_{m-2,1:m-3}$}(5.5,1.5)node[below=3]{$\\td {\\pmb u}_{m-1,1:m-2}$}(6.5,1.5)node[below=3]{$\\td {\\pmb u}_{m,1:m-1}$};\n \n \\foreach \\x in{0.5,1.5,2.5,4.5,5.5,6.5}\n \\draw[->,>=latex,line width=3 pt](\\x,1.7)--(\\x,2.0);\n \n \\draw(0.5,2.0)node[above]{$\\td {\\pmb u}_1$}(1.5,2.0)node[above]{$\\td {\\pmb u}_2$}(2.5,2.0)node[above]{$\\td {\\pmb u}_3$}(4.5,2.0)node[above]{$\\td {\\pmb u}_{m-2}$}(5.5,2.0)node[above]{$\\td {\\pmb u}_{m-1}$}(6.5,2.0)node[above]{$\\td {\\pmb u}_m$};\n \n \\end{tikzpicture}\n }\n \\caption{This figure shows how the boundary values are accumulated after each step. The thin arrows indicate the transmission directions of the waves. 
The bold, up-pointing arrows symbolize that summing up the corresponding waves on $X_p$ gives the superposition wave $\td {\pmb u}_p$.}
\end{figure}


In the algorithm described above, the computation of each component
$\td{\pmb u}_{p,q}$ requires a separate solution of a problem of the form
$H_p^{R} \pmb v =\pmb g$ or $H_p^{L} \pmb v =\pmb g$. Since there are $O(m^2)$ such
components, the algorithm is computationally expensive. A key
observation is that the computations associated with each $p$ can be
combined into a single solve by accumulating the boundary values of the waves. More precisely, we define
\begin{align*}
\td{\pmb u}_{p,q_1:q_2}:=\sum_{t=q_1}^{q_2} \td{\pmb u}_{p,t},
\end{align*}
which is the total contribution of the waves generated by $\pmb
f_{q_1},\dots,\pmb f_{q_2}$ restricted to the grid $X_{p}$. The
quantity $\td{\pmb u}_{p,1:p-1}$, which is the total right-going wave
generated by $\pmb f_1,\dots,\pmb f_{p-1}$ on $X_p$, can be computed
sequentially for $p=2,\dots,m$ without computing each component and
then adding them together as described above, as long as we
accumulate the boundary values after each intermediate
step. Specifically, we first compute $\td{\pmb
  u}_{q,q}=\td{G}_q^{M}\pmb f_{q}$ for $q=1,\dots,m$, as before. Then, to compute $\td{\pmb
  u}_{p,1:p-1}$ we carry out the following steps
\begin{align*}
  \td{\pmb u}_{p,1:p-1}=\td{G}_{p}^{R}\td{\pmb u}_{p-1,1:p-1}^{R},\quad \td{\pmb u}_{p,1:p}^{R}=\td{\pmb u}_{p,1:p-1}^{R}+\td{\pmb u}_{p,p}^{R}, \quad \text{for } p=2,\dots,m.
\end{align*}
This means that, before computing the total right-going wave $\td{\pmb u}_{p+1,1:p}$ on subdomain $X_{p+1}$, the boundary values of the previous right-going
waves, $\td{\pmb u}_{p,1:p-1}^{R}$ and $\td{\pmb u}_{p,p}^{R}$, are added together, so
that the current right-going wave $\td{\pmb u}_{p+1,1:p}$ can be computed in one shot, which avoids solving many subproblems separately and adding the
results together (see Figure \ref{fig:approxUaccumulate}).

For the left-going waves $\td{\pmb u}_{p,p+1:m}$, a
similar process gives rise to the recursive formula
\begin{align*}
\td{\pmb u}_{p,p+1:m}=\td{G}_p^{L}\td{\pmb u}_{p+1,p+1:m}^{L},\quad \td{\pmb u}_{p,p:m}^{L}=\td{\pmb u}_{p,p}^{L}+\td{\pmb u}_{p,p+1:m}^{L}, \quad \text{for } p=m-1,\dots,1.
\end{align*}

Finally, each $\td{\pmb u}_p$ can be computed by summing $\td{\pmb
  u}_{p,1:p-1}$, $\td{\pmb u}_{p,p}$ and $\td{\pmb u}_{p,p+1:m}$
together (for the leftmost and the rightmost ones, $\td{\pmb u}_1$ and
$\td{\pmb u}_m$, only two terms need to be summed), i.e.,
\begin{align*}
\td{\pmb u}_1 &= \td{\pmb u}_{1,1}+\td{\pmb u}_{1,2:m},\\
\td{\pmb u}_p &= \td{\pmb u}_{p,1:p-1}+\td{\pmb u}_{p,p}+\td{\pmb u}_{p,p+1:m}, \quad p=2,\dots,m-1,\\
\td{\pmb u}_m &= \td{\pmb u}_{m,1:m-1}+\td{\pmb u}_{m,m}.
\end{align*}

We see that, by accumulating the boundary values after each intermediate step, we only need to solve $O(m)$ subproblems instead of $O(m^2)$.

In this algorithm, the approximation $\td{\pmb u}_{p}$ on each small
subdomain is divided into three parts. From a matrix point of view,
this is analogous to splitting the block matrix $G$ into its lower
triangular part, diagonal part and upper triangular part, approximating
each part by an operator to obtain the intermediate waves, and then
summing the intermediate results together. This is why we call it the
additive sweeping method.
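The bookkeeping behind this accumulation can be checked on a toy scalar model, in which each operator $\td{G}_p^{R}$, restricted to the right boundary, is replaced by multiplication with a single transfer coefficient. The sketch below (hypothetical names and made-up coefficients, not the PML solves themselves) verifies that the $O(m)$ sweep with accumulated boundary values reproduces the term-by-term $O(m^2)$ sum:

```python
# Toy scalar model of the right-going accumulation sweep.
# t[p] plays the role of G_p^R acting on a right-boundary value;
# f[q] is the "wave" emitted by subdomain q.  Schematic only.

def naive_right_going(t, f):
    # Direct sum over the O(m^2) contributions u_{p,q}, q < p.
    m = len(f)
    u = [0.0] * m
    for p in range(1, m):
        for q in range(p):
            contrib = f[q]
            for r in range(q + 1, p + 1):
                contrib *= t[r]          # pass through subdomains q+1,...,p
            u[p] += contrib
    return u

def swept_right_going(t, f):
    # O(m) sweep: accumulate the boundary value after each step, as in
    # u_{p,1:p-1} = G_p^R u_{p-1,1:p-1}^R,  u_{p,1:p}^R = u_{p,1:p-1}^R + u_{p,p}^R.
    m = len(f)
    u = [0.0] * m
    boundary = f[0]                      # accumulated right-boundary value
    for p in range(1, m):
        u[p] = t[p] * boundary           # one "solve" per subdomain
        boundary = u[p] + f[p]           # accumulate before moving right
    return u

t = [None, 0.9, 0.8, 0.7, 0.6]           # t[0] unused
f = [1.0, 0.5, 0.25, 0.125, 0.0625]
assert all(abs(a - b) < 1e-12
           for a, b in zip(naive_right_going(t, f), swept_right_going(t, f)))
```

The same bookkeeping, read from right to left, gives the left-going sweep.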

Equation \eqref{eqn:LDUsplit} shows a matrix analogue of this procedure, where the matrix $G$ is split into $3m - 2$ blocks, each of which corresponds to a subproblem-solving process:
\begin{align*}
\td{\pmb u}_{q,q} &\approx {\pmb u}_{q,q} = G_{q,q} {\pmb f}_q,\quad q = 1,\dots, m,\\
\td{\pmb u}_{p,1:p-1} &\approx {\pmb u}_{p,1:p-1} = \sum_{q=1}^{p-1} G_{p,q} {\pmb f}_q,\quad p = 2,\dots, m,\\
\td{\pmb u}_{p,p+1:m} &\approx {\pmb u}_{p,p+1:m} = \sum_{q=p+1}^{m} G_{p,q} {\pmb f}_q,\quad p = 1,\dots, m-1.
\end{align*}

\begin{equation}
  \label{eqn:LDUsplit}
  \begin{bmatrix}
  \pmb u_1\\\pmb u_2\\ \vdots\\\pmb u_m
  \end{bmatrix}
  =
  \begin{bmatrix}
  \pmb u_{1,1}+\pmb u_{1,2:m}\\\pmb u_{2,1}+\pmb u_{2,2}+\pmb u_{2,3:m}\\ \vdots\\\pmb u_{m,1:m-1}+\pmb u_{m,m}
  \end{bmatrix}
  =
  \left[
  \begin{array}{ccccc}
  \multicolumn{1}{c|}{G_{1,1}} & G_{1,2} & \dots & & G_{1,m} \\
  \hline
  G_{2,1} & \multicolumn{1}{|c|}{G_{2,2}} & G_{2,3} & \ldots & G_{2,m} \\
  \hline
  & & \ddots & & \\
  \hline
  G_{m,1} & \ldots & & G_{m,m-1} & \multicolumn{1}{|c}{G_{m,m}}
  \end{array}
  \right]
  \begin{bmatrix}
  \pmb f_1\\\pmb f_2\\ \vdots\\\pmb f_m
  \end{bmatrix}
\end{equation}

When combined with standard iterative solvers, the approximation
algorithm serves as a preconditioner for Equation \eqref{eqn:1D}, and it can be easily generalized to higher dimensions. In the following sections, we discuss the details of the algorithm in 2D and 3D. To be structurally consistent, we keep the notations for 2D and 3D the same as in the 1D case, since this causes no ambiguity.
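The additive splitting of \eqref{eqn:LDUsplit} can be illustrated on a small dense example: applying the strictly lower triangular, diagonal, and strictly upper triangular parts of $G$ separately and summing must reproduce $\pmb u = G\pmb f$ exactly. A minimal sketch with made-up $1\times 1$ blocks (not the actual Green's matrix):

```python
# Check the additive splitting u = (L + D + U) f behind the block
# decomposition, with 1-by-1 "blocks" G[p][q] standing in for G_{p,q}.

def split_apply(G, f):
    m = len(f)
    lower = [sum(G[p][q] * f[q] for q in range(p)) for p in range(m)]        # u_{p,1:p-1}
    diag  = [G[p][p] * f[p] for p in range(m)]                               # u_{p,p}
    upper = [sum(G[p][q] * f[q] for q in range(p + 1, m)) for p in range(m)] # u_{p,p+1:m}
    return [lower[p] + diag[p] + upper[p] for p in range(m)]

def full_apply(G, f):
    m = len(f)
    return [sum(G[p][q] * f[q] for q in range(m)) for p in range(m)]

G = [[4.0, 1.0, 0.5],
     [1.0, 3.0, 1.0],
     [0.5, 1.0, 2.0]]
f = [1.0, -2.0, 0.5]
assert full_apply(G, f) == split_apply(G, f)
```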
Some of the key notations and concepts are listed below as a reminder to the reader:\n\\begin{itemize}\n\\item\n$\\{X_p\\}_{p=1}^m$\\quad The sliced partition of the discrete grid.\n\\item\n$\\{D_q^M\\}_{q=1}^m$\\quad The auxiliary domains with two-sided PML padding.\n\\item\n$\\{D_p^R\\}_{p=2}^m$\\quad The auxiliary domains with right-side PML padding.\n\\item\n$\\{D_p^L\\}_{p=1}^{m-1}$\\quad The auxiliary domains with left-side PML padding.\n\\item\n$\\{X_q^M\\}_{q=1}^m$\\quad $X_q$ with two-sided PML padding, the discretization of $D_q^M$.\n\\item\n$\\{X_p^R\\}_{p=2}^m$\\quad $X_p$ with right-side PML padding, the discretization of $D_p^R$.\n\\item\n$\\{X_p^L\\}_{p=1}^{m-1}$\\quad $X_p$ with left-side PML padding, the discretization of $D_p^L$.\n\\item\n$\\{\\tilde{G}_q^M\\}_{q=1}^m$ \\quad The auxiliary Green's operators each of which maps the force on $X_q$ to the approximation of the wave field restricted to $X_q$.\n\\item\n$\\{\\tilde{G}_p^R\\}_{p=2}^m$ \\quad The auxiliary Green's operators each of which maps the left boundary value to the approximated wave field restricted to $X_p$, which simulates the right-transmission of the waves.\n\\item\n$\\{\\tilde{G}_p^L\\}_{p=1}^{m-1}$ \\quad The auxiliary Green's operators each of which maps the right boundary value to the approximated wave field restricted to $X_p$, which simulates the left-transmission of the waves.\n\\end{itemize}\n\n\\section{Preconditioner in 2D}\n\\label{sec:2D}\n\n\\subsection{Algorithm}\n\nThe domain of interest is $D=(0,1)^2$. We put PML on the two\nopposite sides of the boundary, $x_2=0$ and $x_2=1$, to illustrate the\nidea. 
The resulting equation is
\begin{align*}
  \begin{dcases}
  \left(\partial_1^2+(s(x_2)\partial_2)^2+\dfrac{\omega^2}{c^2(x)}\right)u(x)=f(x),&\quad \forall x=(x_1,x_2)\in D,\\
  u(x)=0,&\quad \forall x\in \partial D.
  \end{dcases}
\end{align*}
We discretize $D$ with step size $h=1/(n+1)$ in each direction, which
results in the Cartesian grid
\begin{align*}
  X:=\{(i_1h,i_2h):1\le i_1,i_2\le n\},
\end{align*}
and the discrete equation
\begin{equation}
  \label{eqn:2D}
  \begin{gathered}
  \dfrac{s_{i_2}}{h}\left(\dfrac{s_{i_2+1/2}}{h}(u_{i_1,i_2+1}-u_{i_1,i_2})-\dfrac{s_{i_2-1/2}}{h}(u_{i_1,i_2}-u_{i_1,i_2-1})\right)\\
  +\dfrac{u_{i_1+1,i_2}-2u_{i_1,i_2}+u_{i_1-1,i_2}}{h^2}+\dfrac{\omega^2}{c_{i_1,i_2}^2}u_{i_1,i_2}=f_{i_1,i_2}, \quad \forall 1\le i_1,i_2\le n,
  \end{gathered}
\end{equation}
where the subscript $(i_1,i_2)$ means that the corresponding function
is evaluated at $(i_1h,i_2h)$, and since $s(x_2)$ is a function of
$x_2$ only, we omit the $i_1$ subscript.
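The discrete operator in \eqref{eqn:2D} can be applied matrix-free, which is what an iterative solver needs. The following sketch evaluates the stencil at a single interior point; the helper names and the choice $s\equiv 1$ (no stretching, so the stencil reduces to the standard 5-point Laplacian plus the zeroth-order term) are illustrative assumptions, not the paper's implementation:

```python
# Matrix-free evaluation of the left-hand side of the 2D discrete
# Helmholtz equation at an interior grid point (i1, i2).
# s(t) returns the PML stretching factor at (possibly half-integer)
# index t; c(i1, i2) returns the velocity.  Hypothetical helper names.

def stencil(u, i1, i2, h, omega, c, s):
    # x2-part with the staggered PML factors s_{i2 +/- 1/2}
    d2 = (s(i2) / h) * ((s(i2 + 0.5) / h) * (u[i1][i2 + 1] - u[i1][i2])
                        - (s(i2 - 0.5) / h) * (u[i1][i2] - u[i1][i2 - 1]))
    # plain second difference in x1, plus the zeroth-order term
    d1 = (u[i1 + 1][i2] - 2 * u[i1][i2] + u[i1 - 1][i2]) / h**2
    return d2 + d1 + (omega**2 / c(i1, i2)**2) * u[i1][i2]

# Sanity check: on a constant field with s == 1 every difference
# vanishes, so only the omega^2/c^2 term survives.
n, h = 7, 1.0 / 8
u = [[1.0] * (n + 2) for _ in range(n + 2)]
val = stencil(u, 3, 3, h, omega=2.0, c=lambda i1, i2: 1.0, s=lambda t: 1.0)
assert abs(val - 4.0) < 1e-12
```

In the real solver $s$ is complex-valued, so the arrays would be complex as well.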
$\pmb u$ and $\pmb f$ are
defined to be the column-major orderings of the discrete arrays $u$ and
$f$ on the grid $X$
\begin{align*}
  \pmb u:=[u_{1,1},\dots,u_{n,1},\dots,u_{n,n}]^T,\quad \pmb f:=[f_{1,1},\dots,f_{n,1},\dots,f_{n,n}]^T.
\end{align*}
Now \eqref{eqn:2D} can be written as $A\pmb u=\pmb f$.

We divide the grid into $m$ parts along the $x_2$ direction
\begin{align*}
  X_1 &:= \{(i_1h,i_2h):1\le i_1\le n,1 \le i_2 \le \gamma + b-1 \}, \\
  X_p &:= \{(i_1h,i_2h):1\le i_1\le n,\gamma + (p-1) b \le i_2 \le \gamma + pb-1 \},\quad p=2,\dots, m-1, \\
  X_m &:= \{(i_1h,i_2h):1\le i_1\le n,\gamma + (m-1) b \le i_2 \le 2 \gamma + mb-2 \},
\end{align*}
and we define $\pmb u_p$ and $\pmb f_p$ as the column-major orderings of the
restrictions of $u$ and $f$ to $X_p$
\begin{align*}
  \pmb u_1 &:= [u_{1,1},\dots,u_{n,1},\dots,u_{n,\gamma+b-1}]^T,\\
  \pmb u_p &:= [u_{1,\gamma+(p-1)b},\dots,u_{n,\gamma+(p-1)b},\dots,u_{n,\gamma + pb-1}]^T, \quad p=2,\dots,m-1,\\
  \pmb u_m &:= [u_{1,\gamma + (m-1) b},\dots,u_{n,\gamma + (m-1) b},\dots,u_{n,2 \gamma + mb-2}]^T,\\
  \pmb f_1 &:= [f_{1,1},\dots,f_{n,1},\dots,f_{n,\gamma+b-1}]^T,\\
  \pmb f_p &:= [f_{1,\gamma+(p-1)b},\dots,f_{n,\gamma+(p-1)b},\dots,f_{n,\gamma + pb-1}]^T, \quad p=2,\dots,m-1,\\
  \pmb f_m &:= [f_{1,\gamma + (m-1) b},\dots,f_{n,\gamma + (m-1) b},\dots,f_{n,2 \gamma + mb-2}]^T,
\end{align*}
then $\pmb u=G\pmb f$ for $G=A^{-1}$ can be written as
\begin{align*}
\begin{bmatrix}
\pmb u_1\\\pmb u_2\\ \vdots\\\pmb u_m
\end{bmatrix}
=
\begin{bmatrix}
G_{1,1}&G_{1,2}&\ldots&G_{1,m}\\
G_{2,1}&G_{2,2}&\ldots&G_{2,m}\\
\vdots&\vdots&&\vdots\\
G_{m,1}&G_{m,2}&\ldots&G_{m,m}
\end{bmatrix}
\begin{bmatrix}
\pmb f_1\\\pmb f_2\\ \vdots\\\pmb f_m
\end{bmatrix}.
\end{align*}

\paragraph{Auxiliary domains.}
Following the 1D case, the extended subdomains and
the corresponding left and right boundaries are defined by
\begin{align*}
  D_q^{M} &= (0,1)\times((q-1)bh,2\eta+(qb-1)h),\quad q=1,\dots,m,\\
  D_p^{R} &= (0,1)\times(\eta+((p-1)b-1)h,2\eta+(pb-1)h),\quad p=2,\dots,m,\\
  D_p^{L} &= (0,1)\times((p-1)bh,\eta+pbh),\quad p=1,\dots,m-1,\\
  \partial^{L} D_p^{R} &= (0,1)\times\{\eta+((p-1)b-1)h\},\quad p=2,\dots,m,\\
  \partial^{R} D_p^{L} &= (0,1)\times\{\eta+pbh\},\quad p=1,\dots,m-1.
\end{align*}
The extended grids for these domains are
\begin{align*}
  X_q^{M} &:= \{(i_1h,i_2h):1\le i_1\le n,(q-1)b+1\le i_2 \le 2\gamma+qb-2\}, \quad q=1,\dots,m, \\
  X_p^{R} &:= \{(i_1h,i_2h):1\le i_1\le n,\gamma+(p-1)b\le i_2\le 2\gamma+pb-2\}, \quad p=2,\dots,m, \\
  X_p^{L} &:= \{(i_1h,i_2h):1\le i_1\le n,(p-1)b+1\le i_2\le \gamma+pb-1\}, \quad p=1,\dots,m-1.
\end{align*}

\paragraph{Auxiliary problems.}
For $q=1,\dots,m$, we define $H_q^{M} \pmb v=\pmb g$ to be the
discretization on $X_q^{M}$ of the problem
\begin{align*}
  &\begin{dcases}
  \left(\partial_1^2+(s_q^{M}(x_2)\partial_2)^2+\dfrac{\omega^2}{c^2(x)}\right)v(x)=g(x),\quad &\forall x\in D_q^{M},\\
  v(x)=0,\quad &\forall x\in \partial D_q^{M}.
  \end{dcases}
\end{align*}
For $p=2,\dots,m$, $H_p^{R} \pmb v=\pmb g$ is the discretization on
$X_p^{R}$ of the problem
\begin{align*}
  &\begin{dcases}
  \left(\partial_1^2+(s_p^{R}(x_2)\partial_2)^2+\dfrac{\omega^2}{c^2(x)}\right)v(x)=0,\quad &\forall x\in D_p^{R},\\
  v(x)=w(x_1),\quad &\forall x\in \partial^{L} D_p^{R}, \\
  v(x)=0,\quad &\forall x\in \partial D_p^{R} \setminus \partial^{L} D_p^{R},
  \end{dcases}
\end{align*}
where $\pmb g:=(-1/h^2)[\pmb w^T,0,\dots,0]^T$ and $\pmb w:= [w_1,\dots,w_n]^T$ is the discrete value of $w(x_1)$.
Finally, for
$p=1,\dots,m-1$, $H_p^{L} \pmb v=\pmb g$ is the discretization on
$X_p^{L}$ of the problem
\begin{align*}
  &\begin{dcases}
  \left(\partial_1^2+(s_p^{L}(x_2)\partial_2)^2+\dfrac{\omega^2}{c^2(x)}\right)v(x)=0,\quad &\forall x\in D_p^{L},\\
  v(x)=w(x_1),\quad &\forall x\in \partial^{R} D_p^{L}, \\
  v(x)=0,\quad &\forall x\in \partial D_p^{L} \setminus \partial^{R} D_p^{L},
  \end{dcases}
\end{align*}
where $\pmb g:=(-1/h^2)[0,\dots,0,\pmb w^T]^T$ and $\pmb w:= [w_1,\dots,w_n]^T$.

\paragraph{Auxiliary Green's operators.}
For $q=1,\dots,m$, we define $\td{G}_q^{M}:\pmb y\mapsto \pmb z$ to be
the operator defined by the following operations:
\begin{enumerate}
\item
  Introduce a vector $\pmb g$ defined on $X_q^{M}$ by setting it to $\pmb y$ on $X_q$ and to zero everywhere else.
\item
  Solve $H_q^{M} \pmb v=\pmb g$ on $X_q^{M}$.
\item
  Set $\pmb z$ as the restriction of $\pmb v$ on $X_q$.
\end{enumerate}
For $p=2,\dots,m$, the operator $\td{G}_p^{R}:\pmb w\mapsto \pmb z$
is given by:
\begin{enumerate}
\item
  Set $\pmb g=(-1/h^2)[\pmb w^T,0,\dots,0]^T$.
\item
  Solve $H_p^{R} \pmb v =\pmb g$ on $X_p^{R}$.
\item
  Set $\pmb z$ as the restriction of $\pmb v$ on $X_p$.
\end{enumerate}
Finally, for $p=1,\dots,m-1$, $\td{G}_p^{L}:\pmb w\mapsto \pmb z$ is
defined as:
\begin{enumerate}
\item
  Set $\pmb g=(-1/h^2)[0,\dots,0,\pmb w^T]^T$.
\item
  Solve $H_p^{L} \pmb v =\pmb g$ on $X_p^{L}$.
\item
  Set $\pmb z$ as the restriction of $\pmb v$ on $X_p$.
\end{enumerate}

\paragraph{Putting everything together.}
Similar to the previous section, we introduce the left boundary value
$\pmb g^{L}$ and the right boundary value $\pmb g^{R}$ for a
column-major ordering array
$\pmb g=[g_{1,1},\dots,g_{s_1,1},\dots,g_{s_1,s_2}]^T$
induced from a grid of size $s_1\times s_2$ by
\begin{align*}
  \pmb g^{L}:=[g_{1,1},\dots,g_{s_1,1}]^T, \quad \pmb g^{R}:=[g_{1,s_2},\dots,g_{s_1,s_2}]^T.
\end{align*}
Then the approximations for $\pmb u_p,p=1,\dots,m$, can be defined step by step as
\begin{align*}
  \td{\pmb u}_{q,q} &:= \td{G}_q^{M} \pmb f_q,\quad q=1,\dots,m,\\
  \td{\pmb u}_{p,1:p-1} &:= \td{G}_{p}^{R}\td{\pmb u}_{p-1,1:p-1}^{R},\quad \td{\pmb u}_{p,1:p}^{R}:=\td{\pmb u}_{p,1:p-1}^{R}+\td{\pmb u}_{p,p}^{R},\quad \text{for } p=2,\dots,m,\\
  \td{\pmb u}_{p,p+1:m} &:= \td{G}_p^{L}\td{\pmb u}_{p+1,p+1:m}^{L},\quad \td{\pmb u}_{p,p:m}^{L}:=\td{\pmb u}_{p,p}^{L}+\td{\pmb u}_{p,p+1:m}^{L}, \quad \text{for } p=m-1,\dots,1,\\
  \td{\pmb u}_1 &:= \td{\pmb u}_{1,1}+\td{\pmb u}_{1,2:m},\\
  \td{\pmb u}_p &:= \td{\pmb u}_{p,1:p-1}+\td{\pmb u}_{p,p}+\td{\pmb u}_{p,p+1:m},\quad p=2,\dots,m-1,\\
  \td{\pmb u}_m &:= \td{\pmb u}_{m,1:m-1}+\td{\pmb u}_{m,m}.
\end{align*}

To solve the subproblems $H_q^{M} \pmb v=\pmb g$, $H_p^{R}\pmb v=\pmb
g$ and $H_p^{L}\pmb v=\pmb g$, we notice that they are in fact quasi-1D
problems, since $\gamma$ and $b$ are small constants. Therefore,
for each one of them, we can reorder the system by grouping
the elements along dimension 2 first and then dimension 1, which
results in a banded linear system that can be solved efficiently by LU
factorization.
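In the bandwidth-$1$ limit, such a banded LU solve is the classical Thomas algorithm, whose forward-elimination and back-substitution passes are sketched below on a real tridiagonal model problem. The actual subproblems are complex-valued with bandwidth $O(\gamma+b)$, so this is only a schematic of the solve:

```python
# Thomas algorithm: LU solve of a tridiagonal system, the bandwidth-1
# special case of banded LU.  a: sub-diagonal (a[0] unused), b: diagonal,
# c: super-diagonal (c[-1] = 0), d: right-hand side.  Schematic only.

def thomas(a, b, c, d):
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # forward elimination ("LU" pass)
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Model problem: -u'' = 1 on 4 interior points, zero Dirichlet data.
n = 4
a = [0.0] + [-1.0] * (n - 1)
b = [2.0] * n
c = [-1.0] * (n - 1) + [0.0]
d = [1.0] * n
x = thomas(a, b, c, d)
res = [b[i] * x[i] + (a[i] * x[i - 1] if i else 0) + (c[i] * x[i + 1] if i < n - 1 else 0)
       for i in range(n)]
assert all(abs(r - 1.0) < 1e-12 for r in res)
```

For the genuine subproblems one would use a complex banded LU routine (e.g. a LAPACK banded solver) rather than this scalar sketch.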
These factorization processes induce the\nfactorizations for the operators $\\td{G}_q^{M}$, $\\td{G}_p^{R}$ and\n$\\td{G}_p^{L}$ symbolically, which leads to our setup algorithm of the\npreconditioner in 2D as described in Algorithm \\ref{alg:2dsetup} and\nthe application algorithm as described in Algorithm \\ref{alg:2dapp}.\n\n\\begin{algorithm}[h!]\n \\caption{Construction of the 2D additive sweeping preconditioner of\n the Equation \\eqref{eqn:2D}. Complexity\n $=O(n^2(b+\\gamma)^3\/b)=O(N(b+\\gamma)^3\/b)$.}\n \\label{alg:2dsetup}\n \\begin{algorithmic}\n \\FOR {$q=1,\\dots,m$}\n \\STATE Construct the LU factorization of $H_q^{M}$, which defines $\\td{G}_q^{M}$.\n \\ENDFOR\n \\FOR {$p=2,\\dots,m$}\n \\STATE Construct the LU factorization of $H_p^{R}$, which defines $\\td{G}_p^{R}$.\n \\ENDFOR\n \\FOR {$p=1,\\dots,m-1$}\n \\STATE Construct the LU factorization of $H_p^{L}$, which defines $\\td{G}_p^{L}$.\n \\ENDFOR\n \\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[h!]\n \\caption{Computation of $\\td{\\pmb u}\\approx G \\pmb f$ using the\n preconditioner from Algorithm \\ref{alg:2dsetup}. 
Complexity\n $=O(n^2(b+\\gamma)^2\/b)=O(N(b+\\gamma)^2\/b)$.}\n \\label{alg:2dapp}\n \\begin{algorithmic}\n \\FOR {$q=1,\\dots,m$}\n \\STATE\n $\\td{\\pmb u}_{q,q}=\\td{G}_q^{M} \\pmb f_q$\n \\ENDFOR\n \\FOR {$p=2,\\dots,m$}\n \\STATE\n $\\td{\\pmb u}_{p,1:p-1}=\\td{G}_{p}^{R}\\td{\\pmb u}_{p-1,1:p-1}^{R}$\\\\\n $\\td{\\pmb u}_{p,1:p}^{R}=\\td{\\pmb u}_{p,1:p-1}^{R}+\\td{\\pmb u}_{p,p}^{R}$\n \\ENDFOR\n \\FOR {$p=m-1,\\dots,1$}\n \\STATE\n $\\td{\\pmb u}_{p,p+1:m}=\\td{G}_p^{L}\\td{\\pmb u}_{p+1,p+1:m}^{L}$\\\\\n $\\td{\\pmb u}_{p,p:m}^{L}=\\td{\\pmb u}_{p,p}^{L}+\\td{\\pmb u}_{p,p+1:m}^{L}$\n \\ENDFOR\n \\STATE\n $\\td{\\pmb u}_1=\\td{\\pmb u}_{1,1}+\\td{\\pmb u}_{1,2:m}$\\\\\n \\FOR {$p=2,\\dots,m-1$}\n \\STATE\n $\\td{\\pmb u}_p=\\td{\\pmb u}_{p,1:p-1}+\\td{\\pmb u}_{p,p}+\\td{\\pmb u}_{p,p+1:m}$\n \\ENDFOR\n \\STATE\n $\\td{\\pmb u}_m=\\td{\\pmb u}_{m,1:m-1}+\\td{\\pmb u}_{m,m}$ \n \\end{algorithmic}\n\\end{algorithm}\n\nTo analyze the complexity, we note that, in the setup process, there\nare $O(n\/b)$ subproblems, each of which is a quasi-1D problem with\n$O(\\gamma+b)$ layers along the second dimension. Therefore, the setup\ncost of each subproblem by the LU factorization is $O(n(\\gamma+b)^3)$\nand the application cost is $O(n(\\gamma+b)^2)$. So the total setup\ncost is $O(n^2(\\gamma+b)^3\/b)$. Besides, one needs to solve each\nsubproblem once during the application process so the total\napplication cost is $O(n^2(\\gamma+b)^2\/b)$.\n\nThere are some differences when implementing the method practically:\n\\begin{enumerate}\n\\item\n In the above setting, PMLs are put only on two opposite sides of the\n unit square for illustration purpose. In reality, PMLs can be put on\n other sides of the domain if needed. 
As long as there are two
  opposite sides with PML boundary conditions, the method can be
  implemented.
\item
  The thickness of the auxiliary PMLs introduced in the interior of
  the domain need not be the same as the thickness of the PML at the
  boundary. In fact, the auxiliary PMLs are typically thinner in order
  to improve efficiency.
\item
  The widths of the subdomains are completely arbitrary and need not
  be the same. In practice, the widths can be chosen to be larger for
  subdomains where the velocity field varies strongly.
\item
  The symmetric version of the equation can be adopted to save memory
  and computational cost.
\end{enumerate}

\subsection{Numerical results}
\label{sec:2Dnumerical}

Here, we present some numerical results in 2D to illustrate the
efficiency of the algorithm. The proposed method is implemented in
MATLAB and the tests are performed on a 2.0 GHz computer with 256 GB
memory. GMRES is used as the iterative solver with relative residual
tolerance $10^{-3}$ and restart value $40$. PMLs are put on
all sides of the unit square.
The velocity fields tested are given in\nFigure \\ref{fig:2D}:\n\\begin{enumerate}[(a)]\n\\item\n A converging lens with a Gaussian profile at the center of the domain.\n\\item\n A vertical waveguide with a Gaussian cross-section.\n\\item\n A random velocity field.\n\\end{enumerate}\n\n\\begin{figure}[h!]\n \\centering\n \\subfigure[]\n {\\includegraphics[width=0.32\\textwidth]{2Da.pdf}}\n \\subfigure[]\n {\\includegraphics[width=0.32\\textwidth]{2Db.pdf}}\n \\subfigure[]\n {\\includegraphics[width=0.32\\textwidth]{2Dc.pdf}}\n \\caption{The three velocity fields tested in 2D.}\n \\label{fig:2D}\n\\end{figure}\n\nFor each velocity field, two external forces are tested:\n\\begin{enumerate}[(a)]\n\\item\n A Gaussian point source centered at $(1\/2,1\/8)$.\n\\item\n A Gaussian wave packet with wavelength comparable to the typical\n wavelength of the domain. The packet centers at $(1\/8,1\/8)$ and\n points to the direction $(1\/\\sqrt{2},1\/\\sqrt{2})$.\n\\end{enumerate}\n\n\\begin{table}[h!]\n \\centering\n \\begin{overpic}\n [width=0.45\\textwidth]{2Daa.pdf}\n \\put(40,6){force (a)}\n \\end{overpic}\n \\begin{overpic}\n [width=0.45\\textwidth]{2Dab.pdf}\n \\put(40,6){force (b)}\n \\end{overpic}\n \\begin{tabular}{ccc|cc|cc}\n \\hline \n \\multicolumn{3}{c|}{velocity field (a)} & \\multicolumn{2}{c|}{force (a)} & \\multicolumn{2}{c}{force (b)} \\\\ \n \\hline \n $\\omega\/(2\\pi)$ & $N$ & $T_{\\text{setup}}$ & $N_\\text{iter}$ & $T_\\text{solve}$ & $N_\\text{iter}$ & $T_\\text{solve}$ \\\\ \n \\hline \n 16 & $127^2$ & 8.1669e$-$01 & 4 & 5.3199e$-$01 & 4 & 2.5647e$-$01 \\\\\n 32 & $255^2$ & 3.4570e$+$00 &4& 7.3428e$-$01 &4& 7.2807e$-$01\\\\ \n 64 & $511^2$ & 1.5150e$+$01 &5& 3.6698e$+$00 &4& 3.7239e$+$00\\\\ \n 128 & $1023^2$ & 6.2713e$+$01 &5& 1.6812e$+$01 &4& 1.6430e$+$01\\\\\n 256 & $2047^2$ & 2.6504e$+$02 &6& 7.8148e$+$01 &4& 5.6936e$+$01\\\\\n \\hline \n \\end{tabular} \n \\caption{Results for velocity field (a) in 2D. 
Solutions with $\\omega\/(2\\pi)=32$ are presented.}\n \\label{tab:2domegaa}\n\\end{table}\n\\begin{table}[h!]\n \\centering\n \\begin{overpic}\n [width=0.45\\textwidth]{2Dba.pdf}\n \\put(40,6){force (a)}\n \\end{overpic}\n \\begin{overpic}\n [width=0.45\\textwidth]{2Dbb.pdf}\n \\put(40,6){force (b)}\n \\end{overpic}\n \\begin{tabular}{ccc|cc|cc}\n \\hline \n \\multicolumn{3}{c|}{velocity field (b)} & \\multicolumn{2}{c|}{force (a)} & \\multicolumn{2}{c}{force (b)} \\\\ \n \\hline \n $\\omega\/(2\\pi)$ & $N$ & $T_{\\text{setup}}$ & $N_\\text{iter}$ & $T_\\text{solve}$ & $N_\\text{iter}$ & $T_\\text{solve}$ \\\\ \n \\hline \n 16 & $127^2$ & 7.0834e$-$01 &6& 2.9189e$-$01 &4& 1.9408e$-$01\\\\\n 32 & $255^2$ & 3.2047e$+$00 &8& 1.6147e$+$00 &4& 7.9303e$-$01\\\\ \n 64 & $511^2$ & 1.4079e$+$01 &8& 6.3057e$+$00 &4& 3.9008e$+$00\\\\ \n 128 & $1023^2$ & 6.0951e$+$01 &8& 2.9097e$+$01 &4& 1.5287e$+$01\\\\\n 256 & $2047^2$ & 2.6025e$+$02 &8& 1.1105e$+$02 &5& 7.2544e$+$01\\\\\n \\hline \n \\end{tabular} \n \\caption{Results for velocity field (b) in 2D. 
Solutions with $\\omega\/(2\\pi)=32$ are presented.}\n \\label{tab:2domegab}\n\\end{table}\n\\begin{table}[h!]\n\\centering\n\\begin{overpic}\n [width=0.45\\textwidth]{2Dca.pdf}\n \\put(40,6){force (a)}\n\\end{overpic}\n\\begin{overpic}\n [width=0.45\\textwidth]{2Dcb.pdf}\n \\put(40,6){force (b)}\n\\end{overpic}\n\\begin{tabular}{ccc|cc|cc}\n \\hline \n \\multicolumn{3}{c|}{velocity field (c)} & \\multicolumn{2}{c|}{force (a)} & \\multicolumn{2}{c}{force (b)} \\\\ \n \\hline \n $\\omega\/(2\\pi)$ & $N$ & $T_{\\text{setup}}$ & $N_\\text{iter}$ & $T_\\text{solve}$ & $N_\\text{iter}$ & $T_\\text{solve}$ \\\\ \n \\hline \n 16 & $127^2$ & 7.0495e$-$01 &5& 2.4058e$-$01 &6& 2.8347e$-$01\\\\\n 32 & $255^2$ & 3.1760e$+$00 &5& 1.0506e$+$00 &5& 9.9551e$-$01\\\\ \n 64 & $511^2$ & 1.4041e$+$01 &6& 4.7083e$+$00 &7& 6.7852e$+$00\\\\\n 128 & $1023^2$ & 6.1217e$+$01 &6& 1.8652e$+$01 &6& 1.9792e$+$01\\\\\n 256 & $2047^2$ & 2.5762e$+$02 &8& 1.1214e$+$02 &6& 8.6936e$+$01\\\\\n \\hline \n\\end{tabular} \n\\caption{Results for velocity field (c) in 2D. Solutions with $\\omega\/(2\\pi)=32$ are presented.}\n\\label{tab:2domegac}\n\\end{table}\n\nIn these tests, each typical wavelength is discretized with 8\npoints. The widths of the PML at the boundary and of the PMLs introduced\nin the interior of the domain are both $9h$, i.e., $\\gamma=9$. The number of layers in each interior subdomain is $b=8$,\nthe number of layers in the leftmost subdomain is $b+\\gamma-1=16$ and\nthat in the rightmost is $b+\\gamma-2=15$. \n\nWe vary the typical wave number $\\omega\/(2\\pi)$ and test the behavior\nof the algorithm. The test results are presented in Tables\n\\ref{tab:2domegaa}, \\ref{tab:2domegab} and\n\\ref{tab:2domegac}. $T_\\text{setup}$ is the setup time of the\nalgorithm in seconds. $T_{\\text{solve}}$ is the total solve time in\nseconds and $N_{\\text{iter}}$ is the number of iterations. 
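The claimed $O(N)$ scaling of the setup time can be checked directly against the timings tabulated above; the short Python sketch below (with the $T_\text{setup}$ column of Table \ref{tab:2domegaa} transcribed by hand) estimates the empirical exponent $p$ in $T_\text{setup} \propto N^p$:

```python
import math

# (N, T_setup) pairs transcribed from the velocity field (a) table in 2D
data = [(127**2, 0.81669), (255**2, 3.4570), (511**2, 15.150),
        (1023**2, 62.713), (2047**2, 265.04)]

# Least-squares slope of log(T_setup) against log(N) gives the
# empirical exponent p in T_setup ~ N^p
xs = [math.log(n) for n, t in data]
ys = [math.log(t) for n, t in data]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
p = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
print(f"empirical setup-time exponent: p = {p:.2f}")  # close to 1
```

The same fit applied to the $T_\text{solve}$ columns behaves similarly, consistent with the linear scaling discussed in the text.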
From these\ntests we see that the setup time scales like $O(N)$, as does the\nsolve time per iteration, which is consistent with the\ncomplexity analysis of the algorithm. The iteration number remains constant or grows at\nmost logarithmically, which shows the efficiency of the\npreconditioner.\n\n\\section{Preconditioner in 3D}\n\\label{sec:3D}\n\n\\subsection{Algorithm}\n\nIn this section we briefly state the preconditioner in the 3D case. The\ndomain of interest is $D=(0,1)^3$. PMLs are put on two opposite faces\nof the unit cube, $x_3=0$ and $x_3=1$, which results in the equation\n\\begin{align*}\n \\begin{dcases}\n \\left(\\partial_1^2+\\partial_2^2+(s(x_3)\\partial_3)^2+\\dfrac{\\omega^2}{c^2(x)}\\right)u(x)=f(x),&\\quad \\forall x=(x_1,x_2,x_3)\\in D,\\\\\n u(x)=0,&\\quad \\forall x\\in \\partial D. \n \\end{dcases}\n\\end{align*}\nDiscretizing $D$ with step size $h=1\/(n+1)$ gives the grid\n\\begin{align*}\n X:=\\{(i_1h,i_2h,i_3h):1\\le i_1,i_2,i_3\\le n\\}, \n\\end{align*} \nand the discrete equation\n\\begin{equation}\n \\label{eqn:3D}\n \\begin{gathered}\n \\dfrac{s_{i_3}}{h}\\left(\\dfrac{s_{i_3+1\/2}}{h}(u_{i_1,i_2,i_3+1}-u_{i_1,i_2,i_3})-\\dfrac{s_{i_3-1\/2}}{h}(u_{i_1,i_2,i_3}-u_{i_1,i_2,i_3-1})\\right)\\\\\n +\\dfrac{u_{i_1+1,i_2,i_3}-2u_{i_1,i_2,i_3}+u_{i_1-1,i_2,i_3}}{h^2}+\\dfrac{u_{i_1,i_2+1,i_3}-2u_{i_1,i_2,i_3}+u_{i_1,i_2-1,i_3}}{h^2}\\\\\n +\\dfrac{\\omega^2}{c_{i_1,i_2,i_3}^2}u_{i_1,i_2,i_3}=f_{i_1,i_2,i_3}, \\quad \\forall 1\\le i_1,i_2,i_3\\le n. \n \\end{gathered}\n\\end{equation}\n$\\pmb u$ and $\\pmb f$ are defined as the column-major orderings of $u$ and $f$ on the grid $X$:\n\\begin{align*}\n \\pmb u:=[u_{1,1,1},\\dots,u_{n,1,1},\\dots,u_{n,n,1},\\dots,u_{n,n,n}]^T,\\quad \\pmb f:=[f_{1,1,1},\\dots,f_{n,1,1},\\dots,f_{n,n,1},\\dots,f_{n,n,n}]^T.
\n\\end{align*}\n$X$ is divided into $m$ parts along the $x_3$ direction\n\\begin{align*}\n X_1 &:= \\{(i_1h,i_2h,i_3h):1\\le i_1\\le n,1\\le i_2\\le n,1 \\le i_3 \\le \\gamma + b-1 \\}, \\\\\n X_p &:= \\{(i_1h,i_2h,i_3h):1\\le i_1\\le n,1\\le i_2\\le n,\\gamma + (p-1) b \\le i_3 \\le \\gamma + pb-1 \\}, \\quad p=2,\\dots, m-1, \\\\\n X_m &:= \\{(i_1h,i_2h,i_3h):1\\le i_1\\le n,1\\le i_2\\le n,\\gamma + (m-1) b \\le i_3 \\le 2 \\gamma + mb-2 \\}. \n\\end{align*}\n$\\pmb u_p$ and $\\pmb f_p$ are the column-major ordering restrictions of $u$ and $f$ on $X_p$ \n\\begin{align*}\n\\pmb u_1 &:= [u_{1,1,1},\\dots,u_{n,1,1},\\dots,u_{n,n,1},\\dots,u_{n,n,\\gamma+b-1}]^T,\\\\\n\\pmb u_p &:= [u_{1,1,\\gamma+(p-1)b},\\dots,u_{n,1,\\gamma+(p-1)b},\\dots,u_{n,n,\\gamma+(p-1)b},\\dots,u_{n,n,\\gamma + pb-1}]^T, \\quad p=2,\\dots,m-1,\\\\\n\\pmb u_m &:= [u_{1,1,\\gamma + (m-1) b},\\dots,u_{n,1,\\gamma + (m-1) b},\\dots,u_{n,n,\\gamma + (m-1) b},\\dots,u_{n,n,2 \\gamma + mb-2}]^T,\\\\\n\\pmb f_1 &:= [f_{1,1,1},\\dots,f_{n,1,1},\\dots,f_{n,n,1},\\dots,f_{n,n,\\gamma+b-1}]^T,\\\\\n\\pmb f_p &:= [f_{1,1,\\gamma+(p-1)b},\\dots,f_{n,1,\\gamma+(p-1)b},\\dots,f_{n,n,\\gamma+(p-1)b},\\dots,f_{n,n,\\gamma + pb-1}]^T, \\quad p=2,\\dots,m-1,\\\\\n\\pmb f_m &:= [f_{1,1,\\gamma + (m-1) b},\\dots,f_{n,1,\\gamma + (m-1) b},\\dots,f_{n,n,\\gamma + (m-1) b},\\dots,f_{n,n,2 \\gamma + mb-2}]^T. 
\n\\end{align*}\n\n\\paragraph{Auxiliary domains.}\nThe extended subdomains, the extended grids, and the corresponding\nleft and right boundaries are defined by\n\\begin{align*}\nD_q^{M} &:= (0,1)\\times(0,1)\\times((q-1)bh,2\\eta+(qb-1)h), \\quad q=1,\\dots,m,\\\\\nD_p^{R} &:= (0,1)\\times(0,1)\\times(\\eta+((p-1)b-1)h,2\\eta+(pb-1)h), \\quad p=2,\\dots,m,\\\\\nD_p^{L} &:= (0,1)\\times(0,1)\\times((p-1)bh,\\eta+pbh), \\quad p=1,\\dots,m-1,\\\\\n\\partial^{L} D_p^{R} &:= (0,1)\\times(0,1)\\times\\{\\eta+((p-1)b-1)h\\}, \\quad p=2,\\dots,m,\\\\\n\\partial^{R} D_p^{L} &:= (0,1)\\times(0,1)\\times\\{\\eta+pbh\\}, \\quad p=1,\\dots,m-1, \\\\\nX_q^{M} &:= \\{(i_1h,i_2h,i_3h):1\\le i_1\\le n,1\\le i_2\\le n,(q-1)b+1\\le i_3 \\le 2\\gamma+qb-1\\}, \\quad q=1,\\dots,m, \\\\\nX_p^{R} &:= \\{(i_1h,i_2h,i_3h):1\\le i_1\\le n,1\\le i_2\\le n,\\gamma+(p-1)b\\le i_3\\le 2\\gamma+pb-2\\}, \\quad p=2,\\dots,m, \\\\\nX_p^{L} &:= \\{(i_1h,i_2h,i_3h):1\\le i_1\\le n,1\\le i_2\\le n,(p-1)b+1\\le i_3\\le \\gamma+pb-1\\}, \\quad p=1,\\dots,m-1. 
\n\\end{align*}\n\n\\paragraph{Auxiliary problems.}\nFor each $q=1,\\dots,m$, $H_q^{M} \\pmb v=\\pmb g$ is defined as the\ndiscretization on $X_q^{M}$ of\n\\begin{align*}\n &\\begin{dcases}\n \\left(\\partial_1^2+\\partial_2^2+(s_q^{M}(x_3)\\partial_3)^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=g(x),&\\quad \\forall x\\in D_q^{M},\\\\\n v(x)=0,&\\quad \\forall x\\in \\partial D_q^{M}. \n \\end{dcases}\n\\end{align*}\nFor $p=2,\\dots,m$, $H_p^{R} \\pmb v=\\pmb g$ is defined as the\ndiscretization on $X_p^{R}$ of\n\\begin{align*}\n &\\begin{dcases}\n \\left(\\partial_1^2+\\partial_2^2+(s_p^{R}(x_3)\\partial_3)^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=0,\\quad &\\forall x\\in D_p^{R},\\\\\n v(x)=w(x_1,x_2),\\quad &\\forall x\\in \\partial^{L} D_p^{R}, \\\\\n v(x)=0,\\quad &\\forall x\\in \\partial D_p^{R} \\setminus \\partial^{L} D_p^{R}, \n \\end{dcases}\n\\end{align*}\nwhere $\\pmb\ng:=(-1\/h^2)[\\pmb w^T,0,\\dots,0]^T$ and $\\pmb w := [w_{1,1},\\dots,w_{n,1},\\dots,w_{n,n}]$ is the discrete boundary value. Finally,\nfor $p=1,\\dots,m-1$, $H_p^{L} \\pmb v=\\pmb g$ is the discretization on\n$X_p^{L}$ of\n\\begin{align*}\n &\\begin{dcases}\n \\left(\\partial_1^2+\\partial_2^2+(s_p^{L}(x_3)\\partial_3)^2+\\dfrac{\\omega^2}{c^2(x)}\\right)v(x)=0,\\quad &\\forall x\\in D_p^{L},\\\\\n v(x)=w(x_1,x_2),\\quad &\\forall x\\in \\partial^{R} D_p^{L}, \\\\\n v(x)=0,\\quad &\\forall x\\in \\partial D_p^{L} \\setminus \\partial^{R} D_p^{L},\n \\end{dcases}\n\\end{align*}\nwhere $\\pmb g:=(-1\/h^2)[0,\\dots,0,\\pmb w^T]^T$ and $\\pmb w := [w_{1,1},\\dots,w_{n,1},\\dots,w_{n,n}]$. \n\n\\paragraph{Auxiliary Green's operators.}\nFor $q=1,\\dots,m$, $\\td{G}_q^{M}:\\pmb y\\mapsto \\pmb z$ is defined\nusing the following operations:\n\\begin{enumerate}\n\\item\n Introduce a vector $\\pmb g$ defined on $X_q^{M}$ by setting it to $\\pmb y$ on $X_q$ and to zero everywhere else. \n\\item\n Solve $H_q^{M} \\pmb v=\\pmb g$ on $X_q^{M}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_q$. 
\n\\end{enumerate}\nFor $p=2,\\dots,m$, $\\td{G}_p^{R}:\\pmb w\\mapsto \\pmb z$ is given by:\n\\begin{enumerate}\n\\item\n Set $\\pmb g=(-1\/h^2)[\\pmb w^T,0,\\dots,0]^T$. \n\\item\n Solve $H_p^{R} \\pmb v =\\pmb g$ on $X_p^{R}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_p$. \n\\end{enumerate}\nFinally, for $p=1,\\dots,m-1$, the operator $\\td{G}_p^{L}:\\pmb\nw\\mapsto \\pmb z$ is defined by:\n\\begin{enumerate}\n\\item\n Set $\\pmb g=(-1\/h^2)[0,\\dots,0,\\pmb w^T]^T$. \n\\item\n Solve $H_p^{L} \\pmb v =\\pmb g$ on $X_p^{L}$. \n\\item\n Set $\\pmb z$ as the restriction of $\\pmb v$ on $X_p$. \n\\end{enumerate}\n\n\\paragraph{Putting together.}\nIn the 3D case, $\\pmb g^{L}$ and $\\pmb g^{R}$ for the column-major ordering array\\\\\n$\\pmb\ng=[g_{1,1,1},\\dots,g_{s_1,1,1},\\dots,g_{s_1,s_2,1},\\dots,g_{s_1,s_2,s_3}]^T$\ninduced from a 3D grid of size\\\\\n$s_1\\times s_2 \\times s_3$ are\ngiven by\n\\begin{align*}\n \\pmb g^{L}:=[g_{1,1,1},\\dots,g_{s_1,1,1},\\dots,g_{s_1,s_2,1}]^T, \\quad \\pmb g^{R}:=[g_{1,1,s_3},\\dots,g_{s_1,1,s_3},\\dots,g_{s_1,s_2,s_3}]^T. \n\\end{align*}\n\nThe subproblems $H_q^{M} \\pmb v=\\pmb g$, $H_p^{R} \\pmb v=\\pmb g$ and\n$H_p^{L} \\pmb v=\\pmb g$ are quasi-2D. To solve them, we group the\nelements along dimension 3 first, and then apply the nested dissection\nmethod \\cite{george1973nested,duff1983multifrontal} to them, as in \\cite{sweeppml}. This gives the setup process of the 3D\npreconditioner in Algorithm \\ref{alg:3dsetup} and the application\nprocess in Algorithm \\ref{alg:3dapp}.\n\\begin{algorithm}[h!]\n \\caption{Construction of the 3D additive sweeping preconditioner of\n the system \\eqref{eqn:3D}. 
Complexity\n $=O(n^4(b+\\gamma)^3\/b)=O(N^{4\/3}(b+\\gamma)^3\/b)$.}\n \\label{alg:3dsetup}\n \\begin{algorithmic}\n \\FOR {$q=1,\\dots,m$}\n \\STATE Construct the nested dissection factorization of $H_q^{M}$, which defines $\\td{G}_q^{M}$.\n \\ENDFOR\n \\FOR {$p=2,\\dots,m$}\n \\STATE Construct the nested dissection factorization of $H_p^{R}$, which defines $\\td{G}_p^{R}$.\n \\ENDFOR\n \\FOR {$p=1,\\dots,m-1$}\n \\STATE Construct the nested dissection factorization of $H_p^{L}$, which defines $\\td{G}_p^{L}$.\n \\ENDFOR\n \\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[h!]\n \\caption{Computation of $\\td{\\pmb u}\\approx G \\pmb f$ using the\n preconditioner from Algorithm \\ref{alg:3dsetup}. Complexity\n $=O(n^3\\log n (b+\\gamma)^2\/b)=O(N\\log N(b+\\gamma)^2\/b)$.}\n \\label{alg:3dapp}\n \\begin{algorithmic}\n \\FOR {$q=1,\\dots,m$}\n \\STATE\n $\\td{\\pmb u}_{q,q}=\\td{G}_q^{M} \\pmb f_q$\n \\ENDFOR\n \\FOR {$p=2,\\dots,m$}\n \\STATE\n $\\td{\\pmb u}_{p,1:p-1}=\\td{G}_{p}^{R}\\td{\\pmb u}_{p-1,1:p-1}^{R}$\\\\\n $\\td{\\pmb u}_{p,1:p}^{R}=\\td{\\pmb u}_{p,1:p-1}^{R}+\\td{\\pmb u}_{p,p}^{R}$\n \\ENDFOR\n \\FOR {$p=m-1,\\dots,1$}\n \\STATE\n $\\td{\\pmb u}_{p,p+1:m}=\\td{G}_p^{L}\\td{\\pmb u}_{p+1,p+1:m}^{L}$\\\\\n $\\td{\\pmb u}_{p,p:m}^{L}=\\td{\\pmb u}_{p,p}^{L}+\\td{\\pmb u}_{p,p+1:m}^{L}$\n \\ENDFOR\n \\STATE\n $\\td{\\pmb u}_1=\\td{\\pmb u}_{1,1}+\\td{\\pmb u}_{1,2:m}$\\\\\n \\FOR {$p=2,\\dots,m-1$}\n \\STATE\n $\\td{\\pmb u}_p=\\td{\\pmb u}_{p,1:p-1}+\\td{\\pmb u}_{p,p}+\\td{\\pmb u}_{p,p+1:m}$\n \\ENDFOR\n \\STATE\n $\\td{\\pmb u}_m=\\td{\\pmb u}_{m,1:m-1}+\\td{\\pmb u}_{m,m}$ \n \\end{algorithmic}\n\\end{algorithm}\n\nFor the algorithm analysis, we notice that each quasi-2D subproblem\nhas $O(\\gamma+b)$ layers along the third dimension. Therefore, the\nsetup cost for each subproblem is $O((\\gamma+b)^3n^3)$ and the\napplication cost is $O((\\gamma+b)^2n^2\\log n)$. 
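Multiplying these per-subproblem costs by the number of subproblems reproduces the totals stated next; a minimal Python sketch of the leading-order cost model (constants and the $\log n$ factor omitted):

```python
def leading_order_costs(n, b, gamma):
    """Cost model for the 3D preconditioner: about n/b quasi-2D
    subproblems, each with O(gamma + b) layers along x3."""
    m = n // b                              # number of subproblems
    setup = m * (gamma + b) ** 3 * n ** 3   # ~ n^4 (gamma+b)^3 / b
    apply_ = m * (gamma + b) ** 2 * n ** 2  # ~ n^3 (gamma+b)^2 / b
    return setup, apply_

# Doubling n at fixed b and gamma should scale the setup cost by 2^4
# and the application cost by 2^3, matching the totals below
s1, a1 = leading_order_costs(64, 4, 9)
s2, a2 = leading_order_costs(128, 4, 9)
print(s2 / s1, a2 / a1)  # -> 16.0 8.0
```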
Taking the total\nnumber of subproblems into account, the total setup cost for the 3D\npreconditioner is $O(n^4(b+\\gamma)^3\/b)$ and the total application\ncost is $O(n^3\\log n (b+\\gamma)^2\/b)$.\n\n\\subsection{Numerical results}\n\\label{sec:3Dnumerical}\n\nHere we present the numerical results in 3D. All the settings and\nnotations are kept the same as in Section \\ref{sec:2Dnumerical}\nunless otherwise stated. The PMLs are put on all sides of the boundary\nand the symmetric version of the equation is adopted to save\nmemory. The PML width is $\\eta= 9h$ for the boundary and is\n$\\eta_\\text{aux}=5h$ for the interior auxiliary ones. The number of\nlayers in each subdomain is $b=4$ for the interior ones,\n$b+\\gamma-1=12$ for the leftmost one and $b+\\gamma-2=11$ for the\nrightmost one.\n\nThe velocity fields tested are (see Figure \\ref{fig:3D}): \n\\begin{enumerate}[(a)]\n\\item\n A converging lens with a Gaussian profile at the center of the domain.\n\\item\n A vertical waveguide with a Gaussian cross-section.\n\\item\n A random velocity field.\n\\end{enumerate}\n\n\\begin{figure}[h!]\n \\centering\n \\subfigure[]\n {\\includegraphics[width=0.32\\textwidth]{3Da.pdf}}\n \\subfigure[]\n {\\includegraphics[width=0.32\\textwidth]{3Db.pdf}}\n \\subfigure[]\n {\\includegraphics[width=0.32\\textwidth]{3Dc.pdf}}\n \\caption{The three velocity fields tested in 3D.}\n \\label{fig:3D}\n\\end{figure}\n\nThe forces tested for each velocity field are: \n\\begin{enumerate}[(a)]\n\\item\n A Gaussian point source centered at $(1\/2,1\/2,1\/4)$.\n\\item\n A Gaussian wave packet with wavelength comparable to the typical\n wavelength of the domain. 
The packet centers at $(1\/2,1\/4,1\/4)$ and\n points to the direction $(0,1\/\\sqrt{2},1\/\\sqrt{2})$.\n\\end{enumerate}\n\n\\begin{table}[h!]\n\\centering\n\\begin{overpic}\n[width=0.45\\textwidth]{3Daa.pdf}\n\\put(40,6){force (a)}\n\\end{overpic}\n\\begin{overpic}\n[width=0.45\\textwidth]{3Dab.pdf}\n\\put(40,6){force (b)}\n\\end{overpic}\n\\begin{tabular}{ccc|cc|cc}\n\\hline \n\\multicolumn{3}{c|}{velocity field (a)} & \\multicolumn{2}{c|}{force (a)} & \\multicolumn{2}{c}{force (b)} \\\\ \n\\hline \n$\\omega\/(2\\pi)$ & $N$ & $T_{\\text{setup}}$ & $N_\\text{iter}$ & $T_\\text{solve}$ & $N_\\text{iter}$ & $T_\\text{solve}$ \\\\ \n\\hline \n5 & $39^3$ & 2.3304e$+$01 &3& 2.9307e$+$00 &4& 3.7770e$+$00\\\\\n10 & $79^3$ & 3.2935e$+$02 &3& 3.6898e$+$01 &4& 4.6176e$+$01\\\\ \n20 & $159^3$ & 4.2280e$+$03 &4& 4.3999e$+$02 &4& 4.6941e$+$02\\\\\n\\hline \n\\end{tabular} \n\\caption{Results for velocity field (a) in 3D. Solutions with $\\omega\/(2\\pi)=10$ at $x_1=0.5$ are presented.}\n\\label{tab:3domegaa}\n\\end{table}\n\\begin{table}[h!]\n\\centering\n\\begin{overpic}\n[width=0.45\\textwidth]{3Dba.pdf}\n\\put(40,6){force (a)}\n\\end{overpic}\n\\begin{overpic}\n[width=0.45\\textwidth]{3Dbb.pdf}\n\\put(40,6){force (b)}\n\\end{overpic}\n\\begin{tabular}{ccc|cc|cc}\n\\hline \n\\multicolumn{3}{c|}{velocity field (b)} & \\multicolumn{2}{c|}{force (a)} & \\multicolumn{2}{c}{force (b)} \\\\ \n\\hline \n$\\omega\/(2\\pi)$ & $N$ & $T_{\\text{setup}}$ & $N_\\text{iter}$ & $T_\\text{solve}$ & $N_\\text{iter}$ & $T_\\text{solve}$ \\\\ \n\\hline \n5 & $39^3$ & 2.1315e$+$01 &3& 2.7740e$+$00 &3& 2.7718e$+$00\\\\\n10 & $79^3$ & 3.4256e$+$02 &4& 4.4286e$+$01 &3& 3.4500e$+$01\\\\ \n20 & $159^3$ & 4.3167e$+$03 &5& 5.7845e$+$02 &4& 4.6462e$+$02\\\\\n\\hline \n\\end{tabular} \n\\caption{Results for velocity field (b) in 3D. 
Solutions with $\\omega\/(2\\pi)=10$ at $x_1=0.5$ are presented.}\n\\label{tab:3domegab}\n\\end{table}\n\\begin{table}[h!]\n\\centering\n\\begin{overpic}\n[width=0.45\\textwidth]{3Dca.pdf}\n\\put(40,6){force (a)}\n\\end{overpic}\n\\begin{overpic}\n[width=0.45\\textwidth]{3Dcb.pdf}\n\\put(40,6){force (b)}\n\\end{overpic}\n\\begin{tabular}{ccc|cc|cc}\n\\hline \n\\multicolumn{3}{c|}{velocity field (c)} & \\multicolumn{2}{c|}{force (a)} & \\multicolumn{2}{c}{force (b)} \\\\ \n\\hline \n$\\omega\/(2\\pi)$ & $N$ & $T_{\\text{setup}}$ & $N_\\text{iter}$ & $T_\\text{solve}$ & $N_\\text{iter}$ & $T_\\text{solve}$ \\\\ \n\\hline \n5 & $39^3$ & 2.1063e$+$01 &4& 3.8074e$+$00 &4& 3.7975e$+$00\\\\\n10 & $79^3$ & 3.4735e$+$02 &4& 4.4550e$+$01 &4& 4.5039e$+$01\\\\ \n20 & $159^3$ & 4.3391e$+$03 &4& 4.4361e$+$02 &5& 5.8090e$+$02\\\\\n\\hline \n\\end{tabular} \n\\caption{Results for velocity field (c) in 3D. Solutions with $\\omega\/(2\\pi)=10$ at $x_1=0.5$ are presented.}\n\\label{tab:3domegac}\n\\end{table}\n\nThe results are given in Tables \\ref{tab:3domegaa}, \\ref{tab:3domegab}\nand \\ref{tab:3domegac}. From these tests we see that the iteration\nnumber grows mildly as the problem size grows. We also notice that the\nsetup cost scales even better than $O(N^{4\/3})$, mainly because MATLAB\nperforms dense linear algebra operations in a parallel way, which\ngives some extra advantages to the nested dissection algorithm as the\nproblem size grows.\n\n\\section{Conclusion}\n\\label{sec:Conclusion}\nIn this paper, we proposed a new additive sweeping preconditioner for\nthe Helmholtz equation based on the PML. When combined with the\nstandard GMRES solver, the iteration number grows mildly as the\nproblem size grows. The novelty of this approach is that the unknowns\nare split in an additive way and the boundary values of the\nintermediate results are utilized directly. 
The disadvantage is that,\nfor each subdomain, three subproblems need to be built up, which is\ntime-consuming compared to \\cite{sweeppml} and\n\\cite{stolk2013domaindecomp}. However, the costly parts of the\nalgorithm, i.e., the whole setup process and the solve processes of the\nsubproblems $H_q^{M} \\pmb v=\\pmb g$, can be done in parallel. The only\nparts that must be implemented sequentially are the accumulations of\nthe left-going and right-going waves, where only the solve processes\nof the subproblems $H_p^{L} \\pmb v=\\pmb g$ and $H_p^{R} \\pmb v=\\pmb g$\nare involved, which are the cheapest parts of the algorithm. Besides,\nwe think that the whole approximation process is simple and\nstructurally clear from a physics point of view, and the idea might be\neasy to generalize to other equations.\n\nThere are also several other directions for potential\nimprovement. First, other numerical schemes for the equation and other\napproximations of the Sommerfeld radiation condition can be used to\ndevelop more efficient versions of this additive\npreconditioner. Second, the parallel version of the nested dissection\nalgorithm can be employed to solve large-scale problems. Last, in the\n3D case, the quasi-2D subproblems can be solved recursively by\nsweeping along the $x_2$ direction with the same technique, which\nreduces the theoretical setup cost to $O(N)$ and the application cost\nto $O(N)$. However, compared to \\cite{sweeppml}, the coefficient of\nthe complexity in this new method is larger, so it is not clear\nwhether the recursive approach will be more efficient\nin practice. Nevertheless, it is of great theoretical interest to look\ninto it.\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec: intro}\nInterstellar dust grains participate in many important physical and chemical processes in the interstellar medium (ISM). 
For example, the surface of dust is the catalyst for formation of some molecules, especially $\\rm H_2$ \\citep{GOULD63, CAZAUX04}. Dust also shields gas from the interstellar radiation field (ISRF), and allows the low temperatures crucial to star formation to emerge deep within molecular clouds \\citep{KRUMHOLZ11, YAMASAWA11, GLOVER12}. Dust plays an important role in the observed spectral energy distribution (SED) of galaxies: it absorbs and scatters starlight, and reemits the absorbed energy at infrared (IR) wavelengths \\citep{CALZETTI01, BUAT12}. Thus, it is important to understand the properties of dust before we can fully understand the ISM and the observed SED from galaxies.\n\nThe amount of interstellar dust depends on the balance between dust formation and dust destruction. The mechanisms of dust destruction include supernovae (SNe) shocks, thermal evaporation, cosmic rays, and dust incorporated into newly formed stars \\citep{DWEK98, HIRASHITA99}. The dust formation mechanisms include accretion of metals in the ISM onto existing dust grains, formation of new dust grains in the winds of AGB stars, and dust formation in type II SNe \\citep{DWEK98, ASANO13}. Different dominant dust destruction and formation mechanisms would result in a different dust-to-gas mass ratio (DGR):\n\\begin{equation}\n{\\rm DGR} \\equiv \\Sigma_d \/ \\Sigma_{\\rm gas}\n\\end{equation}\nand dust-to-metals ratio (DTM):\n\\begin{equation}\n {\\rm DTM} \\equiv {\\rm DGR} \/ Z,\n\\end{equation}\nwhere $\\Sigma_d$ is the dust mass surface density, $\\Sigma_{\\rm gas}$ is the total gas mass surface density, which includes the contribution from \\textsc{HI}, H$_2$ and He, and $Z$ is the metallicity. Note that some authors replace $\\Sigma_{\\rm gas}$ with hydrogen mass surface density in the definition of DGR, e.g., \\citet{DRAINE14, GORDON14}. 
Other than the formation and destruction mechanisms affecting the DGR and DTM, the DTM itself can directly impact the ISM dust accretion rate \\citep{DWEK98}. Thus, studying the DGR and DTM provides key insights into the dust life cycle.\n\nTheoretical dust life cycle models yield varying predictions for the DTM as a function of metallicity and local environment. Models in \\citet{SODROSKI97, DWEK98} show that the DGR gradient scales linearly with the metallicity gradient, and the DTM is nearly constant. This can be achieved by a constant rate of dust formation and destruction, which results in a constant fraction of metals incorporated into dust, and thus a constant DTM at all stages of chemical evolution \\citep{GALLIANO08}. Other studies suggest that the DTM is not always constant, but instead varies in stages as metallicity increases. At low metallicity, ISM accretion is less effective and the dust production rate is dominated by stellar ejecta, which could result in a locally constant DTM in this low metallicity regime \\citep{HIRASHITA11}. Above a certain critical metallicity, the efficiency of dust accretion may increase, which would result in a DTM increasing with metallicity \\citep{ZHUKOVSKA08, HIRASHITA11, FELDMANN15}. The critical metallicity depends on the model and the choice of parameters, and usually falls between $\\rm 12+\\logt({\\rm O\/H}) = 7.5$ and $8.5$ \\citep{HIRASHITA99, ZHUKOVSKA08, HIRASHITA11, ASANO13, ZHUKOVSKA16}.\n\nSeveral observational studies support a constant DTM. In \\citet{ISSA90}, the authors collated the DGR gradients and metallicity gradients from previous studies of M31, M33, and M51, and reached the conclusion that the slopes of DGR and metallicity with galactic radius are consistent with each other. In \\citet{LEROY11}, the authors followed the approaches in \\citet{DRAINE07} to derive the dust masses in Local Group galaxies. They showed that the DTM is constant across $8.0 \\la \\rm 12+\\logt({\\rm O\/H}) \\la 9.0$. 
In \\citet{DRAINE14}, the authors fit the IR SED in M31 to a renormalized version of dust model described in \\citet{DRAINE07}. The authors showed that their derived DGR scales linearly with metallicity where metallicity measurements are reported by \\citet{ZURITA12}. Importantly, the relation between dust and metallicity is consistent with $M_d\/M_H \\sim 0.0091 Z\/Z_\\sun$, a prediction from depletion conditions in the cloud toward $\\zeta$Oph in the Milky Way (MW) \\citep{DRAINE11, DRAINE14}.\n\nThere are also observational results supporting a varying DTM. In \\citet{LISENFELD98}, the authors studied the DTM in 44 dwarf galaxies, and found a varying DTM. In \\citet{HIRASHITA02}, the authors study 16 blue compact dwarf (BCD) galaxies, and found that $\\log_{10}(\\rm DGR)$ spreads from $-3.3$ to $-4.6$ within $\\rm 7.9 < \\rm 12+\\logt({\\rm O\/H}) < 8.6$, indicating an variable DTM because the slope between DGR and metallicity is not unity. The authors hypothesized that this phenomenon is the result of the variation in dust destruction efficiency by SNe, which depends on the star formation history of the region. \\citet{HUNT05} also showed a 2 dex spread of DGR at $8 \\leq \\rm 12+\\logt({\\rm O\/H}) \\leq 9$. They also reported that the BCD SBS 0335$-$052, which has a metallicity $\\rm 12+\\logt({\\rm O\/H}) = 7.32$, has an extremely low dust mass, two orders of magnitude below a linear trend with metallicity. Similarly, \\citet{HERRERA-CAMUS12} and \\citet{FISHER14} showed that the local dwarf galaxy I Zw 18 has a DGR two orders of magnitude below the linear trend derived from local galaxies. In \\citet{REMY-RUYER14}, the authors compiled DGR measurements for 126 galaxies, with 30\\% of their sample having $\\rm 12+\\logt({\\rm O\/H})\\leq 8.0$. They showed that there might be a discontinuity of the linear DTM at oxygen abundance $\\rm 12+\\logt({\\rm O\/H}) = 8$, and the galaxies below that metallicity have $\\rm DGR \\propto Z^{3.1}$. 
That is, instead of a simple linear relation between DGR and $Z$, the authors suggest a broken power law. In \\citet{ROMAN-DUVAL17}, the authors showed that the DGR changes by factors of 3 to 7 in the Magellanic Clouds, where metallicity is considered to be constant. This result also indicates a variable DTM. In \\citet{GIANNETTI17}, the authors found ${\\rm DGR}(Z) \\propto Z^{1.4}$ in a sample of 23 massive and dense star-forming regions in the far outer MW.\n\nIn this work, we revisit the possible variation of the DTM in a single galaxy, M101. There are several benefits to studying the DTM within a single galaxy. First, metallicity measurements are calibrated more uniformly within one galaxy than across galaxies, which is crucial for studying DTM variation \\citep{REMY-RUYER14, BERG15, CROXALL16}. Moreover, focusing on one galaxy avoids the problem in galaxy-integrated results that the DTM can be underestimated by integrating over dust-poor HI in outer disks \\citep{DRAINE07}. By comparing the DTM within one galaxy and across galaxies, we will also be able to determine whether the possible variation in DTM depends more on local physical properties or on galactic properties. Lastly, observations within one galaxy minimize differences in MW foreground, calibration, and background level estimation, which makes the data more uniform.\n\nM101 is an ideal target for this study for four reasons: 1) M101 has one of the most detailed studies of its metallicity from the Chemical Abundances Of Spirals survey \\citep[CHAOS,][]{BERG15, CROXALL16}, based on electron temperatures ($T_e$) derived from auroral line measurements. 2) M101 has the largest metallicity gradient among those galaxies where direct $T_e$-based metallicity measurements are available, spanning $7.5 \\la \\rm 12+\\logt({\\rm O\/H}) \\la 8.8$ \\citep{CROXALL16}. 
This range extends from values as high as in the solar neighborhood down to the turning point of the broken power law in \\citet{REMY-RUYER14}. 3) M101 has good radial resolution even in far-infrared (FIR) observations because it is nearby (distance $\\sim \\rm 6.7~Mpc$), physically large (the 25th magnitude isophote in B band, or r$_{25}$, is $0.2^\\circ=23.4~{\\rm kpc}$ at distance $\\rm 6.7~Mpc$), and relatively face on \\citep[inclination $\\approx 16^\\circ$,][]{FREEMAN01, MAKAROV14}. 4) M101 also has high-sensitivity \\textsc{Hi} and CO maps \\citep{WALTER08, LEROY09}, which let us map the total gas distribution.\n\nThis paper is organized as follows. \\secref{sec: observations} presents the FIR, \\textsc{Hi}, CO, and other supporting data used in this study, along with our data processing procedures. The five modified blackbody (MBB) model variants and the fitting methodologies are described in \\secref{Sec: methods}. We present our fitting results in \\secref{sec: results}, and compare them with known physical limitations and statistical properties. In \\secref{sec: discussions}, we discuss the implications of our results, and the relation between our DTM and previous findings. Finally, we give our conclusions in \\secref{sec: conclusions}.\n\n\\section{Observations} \\label{sec: observations}\n\\subsection{Data}\\label{sec: data}\nIn this section, we introduce the multi-wavelength measurements of M101 from several surveys and their uncertainties, which we adopted for this study. The physical properties (position, distance and orientation) of M101 adopted for this study are listed in Table \\ref{tab: samples}.\n\\begin{deluxetable}{lll}\n\\tablecaption{Properties of M101. \\label{tab: samples}}\n\\tablehead{\\colhead{Property} & \\colhead{Value} & \\colhead{Reference}}\n\\startdata\nR.A. 
(2000.0)\t& 14h~03m~12.6s & (1) \\\\\nDec (2000.0)\t& +54d~20m~57s & (1) \\\\\nDistance\t\t& 6.7~Mpc\\tablenotemark{$\\dagger$} &(2) \\\\\nr$_{25}$\t\t\t& $0^\\circ.19990$ & (1) \\\\\nInclination\t\t& $16^\\circ$ & (1) \\\\\nP.A.\t\t\t& $38^\\circ$ &(3) \\\\\n$\\alpha_{{\\rm CO}~J=(2-1)}$\\tablenotemark{*}\t& $\\rm (2.9\/R_{21})~M_\\sun~pc^{-2} (K~km~s^{-1})^{-1}$ & (4) \\\\\n$R_{21}$ & 0.7 & (4) \\\\\n\\enddata\n\\tablenotetext{\\dagger}{Consistent with the value in \\citet{SHAPPEE11}.}\n\\tablenotetext{*}{See \\secref{subsec: CO} for discussion of the $\\alpha_{\\rm CO}$ factor we use.}\n\\tablerefs{(1) HyperLeda database (\\url{http:\/\/leda.univ-lyon1.fr\/}), \\citet{MAKAROV14}; (2) \\citet{FREEMAN01}; (3) \\citet{SOFUE99}; (4) \\citet{SANDSTROM13}.}\n\\end{deluxetable}\n\n\\subsubsection{Infrared Imaging} \\label{subsec: IR}\nWe use FIR images from the ``Key Insights on Nearby Galaxies: A Far-Infrared Survey with \\textit{Herschel}'' survey \\citep[KINGFISH,][]{KENNICUTT11} to fit dust surface densities in M101. KINGFISH imaged 61 nearby galaxies in the FIR with the \\textit{Herschel Space Observatory} \\citep{PILBRATT10}, covering $70~\\micron$, $100~\\micron$, and $160~\\micron$ from Photoconductor Array Camera and Spectrometer \\citep[PACS,][]{POGLITSCH10}, and $250~\\micron$, $350~\\micron$, and $500~\\micron$ from Spectral and Photometric Imaging Receiver \\citep[SPIRE,][]{GRIFFIN10}. We do not include the $70~\\micron$ flux in our SED modeling because stochastic heating from small dust grains makes non-negligible contribution in that spectral range \\citep{DRAINE07}, which is not accounted for by the simple SED models we employ in this study. The PACS images were processed from level 1 with \\texttt{Scanamorphos v16.9} \\citep{ROUSSEL13} by the KINGFISH team. 
The SPIRE images were processed with \\texttt{HIPE} \\citep{OTT10} version \\texttt{spire-8.0.3287} and from level 1 to final maps with \\texttt{Scanamorphos v17.0} \\citep{ROUSSEL13} by the KINGFISH team. According to the KINGFISH DR3 user guide \\citep{KINGFISH13}, the SPIRE images have been multiplied by correction factors of 0.9282, 0.9351, and 0.9195 for SPIRE250, SPIRE350, and SPIRE500, respectively, due to improved effective beam size estimation. The FWHMs are approximately $7\\arcsec.0 = 0.23~\\rm kpc$, $11\\arcsec.2 = 0.36~\\rm kpc$, $18\\arcsec.2 = 0.59~\\rm kpc$, $24\\arcsec.9 = 0.81~\\rm kpc$, and $36\\arcsec.1 = 1.17~\\rm kpc$ for the 100\\micron, 160\\micron, 250\\micron, 350\\micron, and 500\\micron\\ band images, respectively.\n\n\\subsubsection{\\textsc{Hi}} \\label{subsec: HI}\nWe obtain \\textsc{Hi} 21 cm line data from ``The \\textsc{Hi} Nearby Galaxy Survey'' \\citep[THINGS,][]{WALTER08}. The images were obtained at the Very Large Array (VLA)\\footnote{The VLA is operated by the National Radio Astronomy Observatory (NRAO), which is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.}. The M101 dataset in this survey has (10\\arcsec.8, 10\\arcsec.2)$\\sim$(0.35~kpc, 0.33~kpc) angular resolution and $\\rm 5.2~km~s^{-1}$ velocity resolution with natural weighting. The observed 21 cm emission can be converted to \\textsc{Hi} column density ($N_{\\rm HI}$) via Eq. (1) and Eq. (5) in \\citet{WALTER08} assuming it is optically thin, and then further converted to surface density $\\Sigma_{\\rm HI}$ by multiplying by the atomic weight of hydrogen. 
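The column-to-surface-density conversion described above amounts to a single multiplicative constant; a minimal Python sketch (cgs constants, no helium correction at this step) illustrates it:

```python
M_H = 1.6726e-24   # hydrogen atom mass [g]
PC_CM = 3.0857e18  # 1 pc in cm
M_SUN = 1.989e33   # solar mass [g]

def hi_surface_density(n_hi):
    """Convert an optically thin HI column density [cm^-2]
    into a mass surface density [Msun / pc^2]."""
    return n_hi * M_H * PC_CM ** 2 / M_SUN

# The familiar rule of thumb: N_HI ~ 1.25e20 cm^-2 per Msun/pc^2
print(hi_surface_density(1.25e20))  # ~1.0
```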
The uncertainty in the THINGS survey is dominated by the estimated zero-point uncertainty in \\textsc{Hi}, which is around $1~\\rm M_\\sun\/pc^2$, corresponding to 0.04 to 0.17 dex in the center of M101 (the molecular-gas-dominated region), 0.03 to 0.04 dex over most of the atomic-gas-dominated region, and above 0.08 dex for the outermost pixels.\n\n\\subsubsection{CO and Total Gas} \\label{subsec: CO}\nWe obtain CO emission line measurements from the ``HERA CO Line Extragalactic Survey'' \\citep[HERACLES,][]{LEROY09, SCHRUBA11, SCHRUBA12, LEROY13}, a survey mapping the ${\\rm ^{12}CO}~J=(2-1)$ rotational line at $230.538~\\rm GHz$ in 48 nearby galaxies, including M101. The observation was carried out with the Heterodyne Receiver Array \\citep[HERA,][]{SCHUSTER04} on the IRAM 30-m telescope\\footnote{IRAM is supported by CNRS\/INSU (France), the MPG (Germany) and the IGN (Spain).}. The survey has 13\\arcsec\\ angular resolution and $\\rm 2.6~km~s^{-1}$ velocity resolution. The CO line integrated intensity can be converted to the surface density of $\\rm H_2$ plus He ($\\Sigma_{\\rm mol}$) by:\n\\begin{equation}\n \\Sigma_{\\rm mol} = \\alpha_{\\rm CO}\\frac{I_{{\\rm CO}~J=(2-1)}}{R_{21}},\n\\end{equation}\nwhere $\\alpha_{\\rm CO}$ is the CO-to-$\\rm H_2$ conversion factor (see Table \\ref{tab: samples}). The standard $\\alpha_{\\rm CO}$ is quoted for $I_{{\\rm CO}~J=(1-0)}$; thus, we convert the $I_{{\\rm CO}~J=(2-1)}$ with a fixed line ratio\\footnote{We adopt the $\\alpha_{\\rm CO}$ value from \\citet{SANDSTROM13}, which the authors originally derived with $I_{{\\rm CO}~J=(2-1)}$ data and converted with $R_{21}=0.7$. Thus we need to use the same $R_{21}$ for consistency.} $R_{21}=(2-1)\/(1-0)=0.7$ \\citep{SANDSTROM13}.\n\nWith $\\Sigma_{\\rm HI}$ and $\\Sigma_{\\rm mol}$, we calculate the total gas mass surface density ($\\Sigma_{\\rm gas}$) with Eq. \\ref{eq: total gas}. A multiplier of 1.36 is included in $\\Sigma_{\\rm mol}$ to account for helium mass \\citep{SANDSTROM13}.
We multiply $\\Sigma_{\\rm HI}$ by the same factor so that the total gas surface density is computed consistently:\n\\begin{equation} \\label{eq: total gas}\n\\Sigma_{\\rm gas} = 1.36~\\Sigma_{\\rm HI} + \\alpha_{\\rm CO}\\frac{I_{{\\rm CO}~J=(2-1)}}{R_{21}}\n\\end{equation}\n\nWe have checked that a metallicity-dependent $\\alpha_{\\rm CO}$ \\citep{WOLFIRE10, BOLATTO13} would make no significant difference in $\\Sigma_{\\rm gas}$ because the metallicity is still relatively high in the region of M101 where H$_2$ is important. See more discussion in \\secref{sec: alpha_CO discussion}.\n\n\\subsubsection{Metallicity\\label{subsec: metal_data}}\nWe obtained metallicity measurements from the CHAOS survey \\citep{CROXALL16}. Measurements were taken in 109 \\textsc{Hii} regions with the Multi-Object Double Spectrographs (MODS) on the Large Binocular Telescope \\citep[LBT,][]{POGGE10}. They derived $T_e$ from a three-zone model with \\textsc{[Oiii]}, \\textsc{[Siii]}, and \\textsc{[Nii]} line ratios. The electron densities are derived from \\textsc{[Sii]} line ratios. This gives us gas phase oxygen abundances in 74 \\textsc{Hii} regions inside M101, and also an average metallicity gradient spread over the galactocentric radius considered in this study. We will compare our derived DGR with their derived metallicity gradient \\citep[Eq. 10 in][second line\\footnote{Instead of the 7.4 Mpc distance quoted in \\citet{CROXALL16}, we used a galaxy distance of 6.7 Mpc; thus, we multiplied the slope in their Eq. 10 by $\\frac{7.4}{6.7}$ to account for the difference.}]{CROXALL16}.
The uncertainty in $\\rm 12+\\logt({\\rm O\/H})$ from the average metallicity gradient is $\\sim0.02$ dex in the center and $\\sim0.07$ dex in the outermost part.\n\n\\subsubsection{Star formation rate and stellar mass\\label{subsec: other}}\nWe calculate the star formation rate surface density ($\\Sigma_{\\rm SFR}$) from the Galaxy Evolution Explorer (GALEX) FUV \\citep{MARTIN05} and \\textit{Spitzer} Multiband Imaging Photometer (MIPS) 24 \\micron\\ data \\citep{WERNER04, RIEKE04}, and the stellar mass surface density ($\\Sigma_\\star$) from \\textit{Spitzer} Infrared Array Camera (IRAC) 3.6 \\micron\\ data. These data are from the Local Volume Legacy survey \\citep[LVL,][]{DALE09}.\n\nWe use the following equation to convert observed FUV and IR emission to $\\Sigma_{\\rm SFR}$:\n\\begin{equation} \\label{eq: SFR}\n\\Sigma_{\\rm SFR}=(8.1\\times 10^{-2} I_{\\rm FUV} + 3.2 \\times 10^{-3} I_{24})\\cos i,\n\\end{equation}\nwhere $i$ is the inclination of M101. $\\Sigma_{\\rm SFR}$ is in $M_{\\sun}~\\rm kpc^{-2}~yr^{-1}$, and both $I_{\\rm FUV}$ and $I_{24}$ are in $\\rm MJy~sr^{-1}$. Eq. \\ref{eq: SFR} is adopted from \\cite{LEROY08}, and it is functionally similar to the prescription in \\citet{KENNICUTT12}.\n\nFor converting the 3.6 \\micron\\ SED to $\\Sigma_\\star$, we use the relation:\n\\begin{equation}\n \\Sigma_\\star = 350I_{3.6}\\cos i,\n\\end{equation}\nwhere $\\Sigma_\\star$ is in $M_{\\sun}~\\rm pc^{-2}$, and $I_{3.6}$ is in $\\rm MJy~sr^{-1}$. Note that the appropriate mass-to-light ratio ($\\Upsilon_\\star^{3.6}$) remains a topic of research \\citep{MCGAUGH14, MEIDT14}. Here, we assume $\\Upsilon_\\star^{3.6}=0.5$ \\citep{MCGAUGH14}; see discussions in \\citet{LEROY08} and A. K. Leroy et al. (2018, in preparation).\n\n\\subsection{Data processing\\label{sec: data proc}}\n\\subsubsection{Background subtraction} \\label{subsec: background error}\nThe IR and GALEX images that we use include contributions from various backgrounds and foregrounds.
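The two conversions above reduce to simple linear prescriptions; a minimal sketch in Python (the function names are ours, and the coefficients are those of Eq. \\ref{eq: SFR} and the 3.6~\\micron\\ relation):

```python
import math

def sigma_sfr(i_fuv, i_24, incl_deg):
    """Sigma_SFR in Msun kpc^-2 yr^-1; I_FUV and I_24 in MJy/sr."""
    return (8.1e-2 * i_fuv + 3.2e-3 * i_24) * math.cos(math.radians(incl_deg))

def sigma_star(i_36, incl_deg):
    """Sigma_star in Msun pc^-2; I_3.6 in MJy/sr, assuming
    Upsilon_star(3.6) = 0.5 as stated above."""
    return 350.0 * i_36 * math.cos(math.radians(incl_deg))
```

The $\\cos i$ factor deprojects the observed intensities to face-on surface densities; for M101's low inclination ($16^\\circ$) it is a small ($\\sim4\\%$) correction.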
Throughout this study, we will neglect the structure in the MW foreground over the relatively small angular ($r_{25}=0^\\circ.2$) extent of M101. To estimate the foreground\/background (hereafter referred to as background) level for each image, we need a uniform definition of the background region. We define our background region as where $N_{\\rm HI} < 1.0\\times \\rm 10^{18}~cm^{-2}$. For the GALEX map, we take the mean value in the background region as recommended due to the Poisson statistics of the GALEX counts. For the IR images, we fit a tilted plane and iteratively reject outliers. This involves several steps: (1) we fit a tilted plane to all the background pixels; (2) we subtract the tilted plane from the data, calculate the absolute deviation (AD) from the median for all pixels, and derive the median absolute deviation (MAD); (3) we refit the tilted plane using only the pixels with AD smaller than three times the MAD. We iterate over steps (2) and (3) five times, keeping the last fitted tilted plane as the background to be removed.\n\nAfter background subtraction and convolution (\\secref{subsec: convolution}), we calculate the covariance matrix\\footnote{A matrix with its i-j element as the i-band to j-band covariance. Our covariance matrix has a dimension of 5x5, corresponding to the 100$-$500 \\micron\\ bands in \\textit{Herschel}.} in the background region of the five \\textit{Herschel} bands. This covariance matrix ($\\mathcal{C}_{\\rm bkg}$) will play an important role in the calculation of the likelihood in our fitting procedure because it incorporates the observed band-to-band correlation in the noise, due to confusion and other astronomical sources, into our fitting (\\secref{subsec: fitting}).\n\n\\subsubsection{Convolution}\\label{subsec: convolution}\nMaps obtained from different surveys do not have the same pixel scale and point spread function (PSF).
In order to compare them pixel-by-pixel, we first convolve all the maps to match the PSF of SPIRE500 using the \\texttt{convolve\\_fft} function in \\texttt{astropy.convolution} \\citep{ASTROPY13}. Most kernels in this study were adapted from \\citet{ANIANO11}, except the Gaussian kernels for the THINGS and HERACLES surveys. For these two surveys, we built elliptical or circular Gaussian kernels according to their beam sizes \\citep{WALTER08, LEROY09} to convolve the maps to a Gaussian PSF with 25\\arcsec\\ FWHM. Then, we convolve the images with a second kernel from \\citet{ANIANO11}, which takes a Gaussian PSF with 25\\arcsec\\ FWHM to the SPIRE500 PSF.\n\n\\subsubsection{Alignment\\label{subsec: alignment}}\nAfter convolution, we align the coordinates of all the images with the SPIRE500 image and its pixel scale using the function \\texttt{reproject\\_exact} in \\texttt{reproject}, an \\texttt{astropy}-affiliated package. The final pixel scale is 14.0\\arcsec, or $\\rm \\sim 0.45~kpc$, which is smaller than half of the SPIRE500 PSF FWHM (36\\arcsec) and thus sufficient to properly sample the PSF. In the final images, one resolution element contains $\\sim 5.2$ pixels; therefore, neighboring pixels are not independent.\n\n\\subsubsection{Binning\\label{subsec: voronoi}}\nOne of our main interests is to analyze DTM in regions with $\\rm 12+\\logt({\\rm O\/H}) \\la 8.0$, where the relation of DTM with metallicity is expected to change \\citep{HIRASHITA99, HIRASHITA11, REMY-RUYER14}. However, individual pixels in the low metallicity region, or outer disk, tend to have insufficient signal-to-noise ratio (SNR) for analysis.
One way we can solve this problem is to bin neighboring pixels together and average the measured quantities in those pixels to increase the SNR according to:\n\n\\begin{equation}\n {\\rm SNR_{avg}} = \\frac{(\\sum_i {\\rm Signal}_i)\/n}{\\sqrt{(\\sum_i {\\rm Noise}_i^2)\/n^2}},\n\\end{equation}\n\nwhere the summation is over resolution elements inside the binned region and $n$ is the number of resolution elements. As a consequence, uniform binning requires all regions of the map to sacrifice spatial resolution in order to recover the regions with lower SNR, which means some structures that could have been resolved would be smoothed out in the binning process. To optimize the resolution and extend to the outer disk simultaneously, we choose to use adaptive binning: binning more pixels together in the low SNR region, while binning fewer pixels together or leaving pixels as individuals in the high SNR region.\n\nThe adaptive binning method we choose is the \\texttt{voronoi\\_2d\\_binning} function \\citep{CAPPELLARI03}. Instead of directly applying the algorithm to the entire SED, we execute some extra procedures listed below in order to preserve radial information:\n\\begin{enumerate}\n \\item We calculate the SNR map for all five \\textit{Herschel} bands, using the square roots of the diagonal terms in the covariance matrix ($\\mathcal{C}_{\\rm bkg}$), i.e., the band variances, as the noise of each band.\n \\item For each pixel, we select the lowest SNR among the five bands at that pixel to build the worst SNR map, which is plotted in Figure \\ref{fig: voronoi} (a). This worst SNR map is used for the subsequent binning process in order to make sure all five bands will reach the target SNR with the same binned regions. 58\\% of pixels have their worst SNR from PACS100.\n \\item We cut the target galaxy into concentric rings with the same radial spacing, which is set to be the same as the FWHM of the SPIRE500 PSF.
This initial radial cut is shown in Figure \\ref{fig: voronoi} (b).\n \\item Starting from the outermost ring, if the average SNR of all pixels within a ring is lower than the target SNR, we combine it with the ring one step inward until the target SNR is achieved. This final radial cut is shown in Figure \\ref{fig: voronoi} (c). The target SNR is set to be 5. However, since the pixels are oversampled with the SPIRE500 PSF (see \\secref{subsec: alignment}), the effective target SNR is $ 5 \/ \\sqrt{5.2} \\sim 2.2 $.\n \\item We apply \\texttt{voronoi\\_2d\\_binning}, with \\texttt{targetSN} set to 5, to each ring from Step 4 and the worst SNR map from Step 2 to generate the final binned regions, as shown in Figure \\ref{fig: voronoi} (d).\n\\end{enumerate}\nNote that we discard the \\texttt{roundness} threshold in the original function \\citep{CAPPELLARI03}. This \\texttt{roundness} threshold ensures that all binned regions are nearly circular, which would conflict with our initial cut of the image into concentric rings. All pixels within radius 7.4 kpc ($0.3~\\rm r_{25}$) have high enough SNR and thus remain unbinned.\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_Voronoi_NGC5457.pdf}\n\\caption{Voronoi binning process in this study. (a) The worst-SNR map. Among the 16,403 points, 58\\% have their worst SNR in PACS100. Both PACS160 and SPIRE500 account for around 18\\%. (b) The initial radial cut. (c) The final radial cut after grouping rings according to the target SNR. (d) The final binned regions. The white circles in panels (a) and (d) show the radius 7.4 kpc. All pixels within 7.4 kpc remain unbinned.\\label{fig: voronoi}}\n\\end{figure}\n\n\\section{Methods\\label{Sec: methods}}\n\\subsection{Models\\label{Sec: Model}}\nIn this work, we focus on the FIR part of the dust emission SED.
It is reasonable to assume that emission from dust grains in thermal equilibrium dominates the FIR range \\citep{LI01, BLAIN02, GORDON14}; therefore, we start by fitting the FIR emission with a modified blackbody (MBB) model:\n\\begin{equation}\nI_\\nu = \\kappa_{\\nu}\\Sigma_d B_\\nu(T_d),\n\\end{equation}\nwhere $I_\\nu$ is the specific intensity, $\\kappa_{\\nu}$ is the wavelength-dependent emissivity, $\\Sigma_d$ is the dust surface density, and $B_\\nu(T_d)$ is the blackbody spectral radiance at dust temperature $T_d$. An empirical power-law emissivity is often assumed, that is, $\\kappa_\\nu = \\kappa_{\\nu_0}(\\nu\/\\nu_0)^\\beta$, where the emissivity index $\\beta$ is a constant and $\\nu_0=c\/\\lambda_0$. Throughout this study, $\\lambda_0=160~\\micron$ is used.\n\nThere are a few possible drawbacks to this simple model: some of them are physical, and the others are inherent to the process of fitting the model. The physical drawbacks include: 1) The simple model above does not allow for wavelength or environmental dependence of $\\beta$, which might exist \\citep{REACH95, FINKBEINER99, LI01, GORDON14}. 2) The model does not include stochastic heating \\citep{DRAINE07}, which might contribute to our shortest wavelength observation due to the width of the response functions of the PACS instruments. 3) The model does not include the broadening of the SED due to multiple heating conditions involved in one resolution element \\citep{DALE01}. The fitting process drawbacks include: 1) $\\kappa_{\\nu_0}$ and $\\Sigma_d$ are completely degenerate, thus there will be an inherent uncertainty in $\\Sigma_d$ from how we determine the $\\kappa_{\\nu_0}$ value. 2) Due to the nature of this model, $\\beta$ and $T_d$ are covariant, since they both shift the peak wavelength of the SED. Thus, there might be an artificial correlation between them.
\\citet{KELLY12} demonstrated this artificial correlation with traditional $\\chi^2$-minimization fitting.\n\nWe calibrate $\\kappa_{\\nu_0}$ with the high-latitude MW diffuse ISM following the approach in \\citet{GORDON14} (see \\secref{Sec: calibration}). It is possible that this calibration is not appropriate under all local environmental conditions, which would result in a systematic uncertainty in our results (see \\secref{subsec: discuss_emissivity} for further discussion). We also use a probabilistic fitting procedure following \\citet{GORDON14} that lets us assess the correlations between fit parameters and properly marginalize over the degeneracy between $\\beta$ and $T_d$. Still, there is no simple way to solve all the physical drawbacks of the MBB model at once. To address them, we construct five variant models, each targeting a shortcoming of the MBB. They are not all mutually exclusive, and a full model \\citep[e.g.,][]{DRAINE07} might incorporate several of these. Our goal here is to identify the simplest possible modifications that yield a good fit to the IR SED. These variants are listed below:\n\n\\subsubsection{Simple emissivity (SE)\\label{subsubsec: SE}}\nHere, we assume a simple power-law emissivity, which gives a dust emission SED described by the following equation:\n\\begin{equation} \\label{eq: SE}\nI_\\nu = \\kappa_{\\nu_0}(\\frac{\\nu}{\\nu_0})^\\beta \\Sigma_d B_\\nu(T_d).\n\\end{equation}\nThe free parameters in this model are $\\Sigma_d$, $T_d$ and $\\beta$. This method allows $\\beta$ to vary spatially, and thus can partially avoid the environment-dependent $\\beta$ drawback. However, it is also heavily affected by the possible artificial correlation between $\\beta$ and $T_d$.\n\n\\subsubsection{Fixing $\\beta$ (FB)\\label{subsubsec: FB}}\nUsing the same functional form as Eq. \\ref{eq: SE}, we can also fix the $\\beta$ value.
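Either variant (with $\\beta$ free, as in SE, or fixed, as in FB) evaluates the same MBB functional form. A minimal numerical sketch in Python, with the overall normalization $\\kappa_{\\nu_0}\\Sigma_d$ set to 1 (the absolute scale would require the calibrated $\\kappa_{\\nu_0}$ of \\secref{Sec: calibration}):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def planck_nu(nu_hz, t_k):
    """Blackbody spectral radiance B_nu(T) in SI units."""
    return 2.0 * H * nu_hz**3 / C**2 / math.expm1(H * nu_hz / (K_B * t_k))

def mbb(wavelength_um, t_d, beta, lambda0_um=160.0):
    """Modified blackbody (nu/nu0)^beta * B_nu(T_d); kappa_0*Sigma_d = 1."""
    nu = C / (wavelength_um * 1e-6)
    nu0 = C / (lambda0_um * 1e-6)
    return (nu / nu0) ** beta * planck_nu(nu, t_d)

# For T_d = 20 K and beta = 2, I_nu peaks near 145 um:
waves = list(range(50, 501))
peak = max(waves, key=lambda w: mbb(w, 20.0, 2.0))
```

Raising $\\beta$ at fixed $T_d$, or raising $T_d$ at fixed $\\beta$, both push the peak blueward, which is the $\\beta$-$T_d$ covariance discussed above.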
This is one way to remove the inherent covariance between $T_d$ and $\\beta$, based on what is expected for the optical properties of ISM dust grain materials. In some previous studies \\citep{MENNELLA98, BOUDET05, GALLIANO17} and in our preliminary tests of the SE method, the fits yield anti-correlated $T_d$ and $\\beta$. This could mean that $\\beta$ is a function of $T_d$; however, due to the degeneracy of $T_d$ and $\\beta$ in the model, it is also possible that this anti-correlation is entirely, or partially, artificial \\citep{SHETTY09, SHETTY09B, KELLY12}. In the latter case, fixing $\\beta$ can improve the accuracy of the fitted $T_d$ \\citep{SHETTY09B}. Thus, we adopted $\\beta=2$ from previous studies \\citep{REACH95, DUNNE01, DRAINE07} as a variation of the MBB spectrum. We also tested $\\beta$ values of 1.6, 1.8, and 2.2, and the differences in $\\Sigma_d$ and chi-square values between these and the $\\beta=2$ results are insignificant. The insensitivity of the resulting $\\Sigma_d$ to our choice of $\\beta$ results from the fact that we calibrate the emissivity for each $\\beta$ value accordingly. The process of emissivity calibration is described in \\secref{Sec: calibration}. The same holds for the other methods, where $\\beta$ is likewise fixed at 2 at short wavelengths or over the whole spectral range.\n\n\\subsubsection{Broken Emissivity (BE)}\nIt is possible that the dust emissivity is not a simple power law, but varies with wavelength. Previous studies have shown that the emissivity at the long-wavelength end tends to be flatter than at the short-wavelength end. Thus, many authors including \\citet{REACH95} and \\citet{GORDON14} have tried to build more complicated forms of emissivity as a function of wavelength. Here, we adopted the BEMBB model of \\citet{GORDON14}: assuming $\\beta$ is a step function in wavelength, which makes the emissivity a broken power law (Eq.
\\ref{eq: BE}).\n\\begin{equation}\\label{eq: BE}\n\\kappa_\\nu=\\left\\{\\begin{array}{ll}\n \\kappa_{\\nu_0}(\\frac{\\nu}{\\nu_0})^{\\beta} & \\lambda < \\lambda_c \\\\\n \\kappa_{\\nu_0}(\\frac{\\nu_c}{\\nu_0})^{\\beta}(\\frac{\\nu}{\\nu_c})^{\\beta_2} & \\lambda \\geq \\lambda_c\n \\end{array}\\right.\n\\end{equation}\n$\\lambda_c$ is the critical wavelength corresponding to the break, and $\\nu_c$ is the frequency corresponding to $\\lambda_c$. $\\lambda_c$ is fixed at 300~\\micron\\ in this study. We explored varying the break wavelength within the spectral range of 50 to 600~\\micron\\ and found it had no major impact on the results. $\\beta_2$ is the dust emissivity index at long wavelengths. The short-wavelength dust emissivity index $\\beta$ is fixed at 2 in this study.\n\n\\subsubsection{Warm dust component (WD)}\nIn the spectral region below $100~\\micron$, it is possible that the SED is affected by stochastic emission from small grains \\citep{DRAINE07}, which is within the effective bandpass of the PACS100 response function (around $80$ to $120~\\micron$). In this model, we add a second MBB component with $T_d=\\rm 40~K$ to our SED, called ``warm dust'', to simulate the contribution from stochastically heated dust. We made this choice of $T_d$ to place the peak of the warm dust SED at the boundary of the PACS100 response function. The fraction of warm dust relative to total dust is denoted $f_W$. The fitting model in this method becomes (note that both components have power-law emissivity with $\\beta=2$):\n\\begin{equation}\nI_\\nu = \\kappa_{\\nu_0}(\\frac{\\nu}{\\nu_0})^\\beta\\Sigma_d \\Big((1-f_W) B_\\nu(T_d) + f_W B_\\nu(\\rm 40~K) \\Big).\n\\end{equation}\n\nTo properly take this effect into account, one would need to adopt a complete physical dust model. However, among the dust properties, we are mainly interested in $\\Sigma_d$, which is necessary for calculating the DGR and DTM, and which does not require adopting a full dust model.
This is because, within our current understanding of dust heating and the dust grain size distribution, only a small fraction of the dust mass is stochastically heated \\citep{DRAINE07}. Our preliminary test confirms this: the mass fraction of stochastically heated dust in the WD modeling is usually under 1\\%. This means that we can still acquire reasonable accuracy in $\\Sigma_d$ even when the SED of stochastically heated dust is not modeled with high accuracy.\n\n\\subsubsection{Power Law distribution (PL)}\\label{sec: PL}\nAt the SPIRE500 resolution, the PSF FWHM corresponds to a large physical size ($\\sim 1.22~{\\rm kpc}$). Thus, it is likely that there are various dust heating conditions within one resolution element. To attempt to model such a distribution of heating conditions, we adopt a model wherein a fraction ($1-\\gamma$) of the dust mass is heated by a single-valued ISRF $U_{\\rm min}$, while the other $\\gamma$ fraction is heated by a distribution of ISRF between $U_{\\rm min}$ and $U_{\\rm max}$ with $\\frac{d\\Sigma_d}{dU}\\propto U^{-\\alpha}$ \\citep{DALE01, DRAINE07}. Each mass fraction emits an FB MBB spectrum, which makes the total emission\\footnote{The normalization factor $\\frac{1-\\alpha}{U_{\\rm max}^{1-\\alpha} - U_{\\rm min}^{1-\\alpha}}$ in Eq. \\ref{eq: PL} only works when $\\alpha \\neq 1$.
For $\\alpha = 1$ (which is excluded in this study), one should use $\\frac{1}{\\ln(U_{\\rm max}\/U_{\\rm min})}$ instead.}:\n\\begin{equation} \\label{eq: PL}\n\\begin{array}{ll}\nI_\\nu = \\kappa_{\\nu_0}(\\frac{\\nu}{\\nu_0})^\\beta\\Sigma_d & \\Big((1-\\gamma) B_\\nu(U_{\\rm min}) + \\\\\n& \\gamma \\frac{1-\\alpha}{U_{\\rm max}^{1-\\alpha} - U_{\\rm min}^{1-\\alpha}}\\int^{U_{\\rm max}}_{U_{\\rm min}}U^{-\\alpha}B_\\nu(U)dU \\Big).\n\\end{array}\n\\end{equation}\nTo calculate the equivalent MBB temperature, we convert $U$ to $T_d$ as $U \\propto T_d^{\\beta + 4}$, with a normalization of $U=1$ corresponding to $T_d=\\rm 18~K$ \\citep{DRAINE14}. This approach adds several free parameters. However, since we do not have good constraints for all of them, we fix some parameters before fitting: $U_{\\rm max}$ is fixed at $10^7$ \\citep[following][]{ANIANO12}, and $\\beta$ is fixed at 2. Thus, the number of free parameters is 4, which is not a major difference from the other models.\n\n\\subsection{Fitting techniques} \\label{subsec: fitting}\n\\begin{deluxetable}{lllll}\n\\tablecaption{Grid parameters for fitting.\\label{tab: grid space}}\n\\tablecolumns{5}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Parameter} &\n\\colhead{Range} &\n\\colhead{Spacing} &\n\\colhead{Range$_c$\\tablenotemark{f}} &\n\\colhead{Spacing$_c$}}\n\\startdata\n$\\log_{10}\\Sigma_d$ & -4 to 1\\tablenotemark{a} & 0.025 & $\\pm0.2$ & 0.002\\\\\n$T_d$ & 5 to 50\\tablenotemark{b} & 0.5 & $\\pm1.5$ & 0.1\\\\\n$\\beta$ & -1.0 to 4.0\\tablenotemark{c} & 0.1 & $\\pm0.3$ & 0.02\\\\\n$\\lambda_c$ & 300\\tablenotemark{d} & N\/A & 300 & N\/A\\\\\n$\\beta_2$ & -1.0 to 4.0 & 0.25 & $\\pm0.3$ & 0.02\\\\\n$f_W$ & 0.0 to 0.05 & 0.002 & $\\pm0.006\\tablenotemark{g}$ & 0.0005\\\\\n$\\alpha$ & 1.1 to 3.0 & 0.1 & $\\pm0.3$ & 0.01\\\\\n$\\log_{10}\\gamma$ & -4.0 to 0.0 & 0.2 & $\\pm0.3$ & 0.1\\\\\n$\\log_{10} U_{\\rm min}$ & -2.0 to 1.5\\tablenotemark{e} & 0.1 & $\\pm0.1$ & 0.01\\\\\n$\\log_{10} U_{\\rm max}$ & 7 & 
N\/A & 7 & N\/A \\\\\n\\enddata\n\\tablecomments{(a) $\\Sigma_d$ in $M_\\sun ~{\\rm pc}^{-2}$. (b) In K. (c) For SE only. All the others are fixed at $\\beta=2$. (d) In \\micron. (e) $9.3\\leq T_d \\leq 35.6~\\rm K$ under our conversion. (f) Range for the second iteration during calibration. (g) While keeping $f_W$ non-negative.}\n\\end{deluxetable}\nWe follow the fitting techniques in \\citet{GORDON14}: we build model SEDs on discrete grids in parameter space, and then calculate the likelihood for all models given the SED in each binned region. The multi-dimensional (3-dimensional for the SE, BE, and WD methods, 2 for FB, and 4 for PL) grids have axes defined in \\secref{Sec: Model}, and grid spacing defined in Table \\ref{tab: grid space}.\n\nFor each grid point, we can generate a model SED $M_{ij...d}(\\nu)$, where the subscript represents a unique combination of parameters in the grid with $d$ dimensions. The calculated model is a continuous function of frequency $\\nu$. To compare with the real observations, we integrate $M_{ij...d}(\\nu)$ over the response function $R^n(\\nu)$ of each band $n$ in PACS and SPIRE with the following integral:\n\\begin{equation}\n\\overline{M^n_{ij...d}} = \\frac{\\int^\\infty_0 R^n(\\nu)M_{ij...d}(\\nu)d\\nu}{\\int^\\infty_0 R^n(\\nu)(\\nu_n\/\\nu)d\\nu}\n\\end{equation}\nNote that the denominator is added to account for the fact that \\textit{Herschel} intensities are quoted assuming a spectrum with $S(\\nu) \\propto \\nu^{-1}$ within the response function.
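The band-averaging integral above can be sketched with simple trapezoidal quadrature. For illustration we assume a top-hat response function (the real PACS\/SPIRE response curves must be substituted); a useful property, checked below, is that for the reference spectrum $S(\\nu)\\propto\\nu^{-1}$ the band average returns exactly the model value at $\\nu_n$.

```python
def band_average(model, response, nu_grid, nu_n):
    """Trapezoidal version of the band-average integral:
    int R(nu) M(nu) dnu / int R(nu) (nu_n/nu) dnu."""
    def trapz(f):
        return sum(0.5 * (f(a) + f(b)) * (b - a)
                   for a, b in zip(nu_grid[:-1], nu_grid[1:]))
    num = trapz(lambda nu: response(nu) * model(nu))
    den = trapz(lambda nu: response(nu) * nu_n / nu)
    return num / den

# Sanity check with a top-hat response (an illustrative stand-in for the
# true instrument response) and the nu^-1 reference spectrum:
nu_n = 1.0
grid = [0.8 + 0.4 * i / 200 for i in range(201)]   # nu from 0.8 to 1.2
tophat = lambda nu: 1.0
ref = lambda nu: 3.7 * nu_n / nu    # S(nu) ∝ nu^-1 with amplitude 3.7
avg = band_average(ref, tophat, grid, nu_n)
```

Since the numerator integrand is then a constant multiple of the denominator integrand, the quoted band-average equals the amplitude at $\\nu_n$, as intended by the convention.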
The $\\nu_n$ values are the frequencies corresponding to the representative wavelength of each band, that is, 100, 160, 250, 350, and 500 \\micron.\n\nNext, in each binned region, we calculate the relative likelihood ($\\mathcal{L}$) of the model SED ($\\overline{M_{ij...d}}$) given the observed SED ($I_{\\rm obs}$) assuming Gaussian errors\\footnote{See \\citet{GORDON14} for a discussion of the statistical advantages of this matrix-form definition.}, that is:\n\\begin{equation}\\label{eq: likelihood}\n\\mathcal{L}(\\overline{M_{ij...d}}|I_{\\rm obs}) = \\exp\\big( -\\frac{1}{2} \\chi^2_{ij...d} \\big),\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq: chi2}\n\\chi_{ij...d}^2 \\equiv (\\overline{M_{ij...d}}-I_{\\rm obs})^T \\mathcal{C}^{-1} (\\overline{M_{ij...d}}-I_{\\rm obs})\n\\end{equation}\nand\n\\begin{equation}\\label{eq: covariance_sum}\n\\mathcal{C} = \\mathcal{C}_{\\rm bkg} + \\mathcal{C}_{\\rm cal}.\n\\end{equation}\nThe superscript $^T$ denotes the matrix transpose, and $^{-1}$ the matrix inverse. $\\mathcal{C}_{\\rm bkg}$ is the background covariance matrix discussed in \\secref{subsec: background error} with values:\n\\begin{eqnarray}\n\\mathcal{C}_{\\rm bkg} = \n \\left[\n\\arraycolsep=3.0pt\n\\begin{array}{ccccc}\n 1.548 & 0.09 & 0.057 & 0.025 & 0.01 \\\\\n 0.09 & 0.765 & 0.116 & 0.079 & 0.04 \\\\\n 0.057 & 0.116 & 0.098 & 0.071 & 0.037 \\\\\n 0.025 & 0.079 & 0.071 & 0.063 & 0.033 \\\\\n 0.01 & 0.04 & 0.037 & 0.033 & 0.028 \\\\\n\\end{array}\\right].\n\\end{eqnarray}\nAs described in \\secref{subsec: voronoi}, $\\mathcal{C}_{\\rm bkg}$ will be lower for resolution elements binned together.
For a binned region with a number of pixels greater than one resolution element (5.2 pixels; see \\secref{subsec: alignment}), $\\mathcal{C}_{\\rm bkg}$ is divided by the number of resolution elements in the region.\n\n$\\mathcal{C}_{\\rm cal} = I^T \\mathcal{M}_{\\rm fit} I$ is the covariance matrix generated from the calibration error, where $\\mathcal{M}_{\\rm fit}$ contains the fractional calibration uncertainties and $I$ is the observed SED in the binned region. There are two kinds of errors from calibration. The first is the absolute calibration uncertainty, estimated from the systematic uncertainty by comparing the calibrator to a model \\citep{BENDO17}. We assume this absolute calibration uncertainty affects all the bands calibrated together at the same time; thus, we enter this uncertainty both in the diagonal terms and in the band-to-band off-diagonal terms of $\\mathcal{M}_{\\rm fit}$. The second is the relative uncertainty, or random uncertainty, which is estimated from the ability of an instrument to reproduce the same measurement \\citep{BENDO17}. We assume this noise is band-independent; thus, we only put it in the diagonal terms of $\\mathcal{M}_{\\rm fit}$.\n\nAmong the \\textit{Herschel} observations, the SPIRE instruments were calibrated with Neptune, and were estimated to have 4\\% absolute calibration and 1.5\\% relative calibration uncertainty. The PACS instruments were calibrated with 5 stars, and the results gave a 5\\% absolute uncertainty and 2\\% relative uncertainty \\citep{PACS13, BALOG14}. In the diagonal terms of $\\mathcal{M}_{\\rm fit}$, where we need to consider both kinds of uncertainties, it is recommended to take the direct sum of the two errors instead of the quadratic sum \\citep{BALOG14, BENDO17}. Since our object is an extended source, we must also take the uncertainty in the beam shape into account when calculating calibration errors \\citep{BENDO17}. It is recommended that we double the absolute uncertainties for this \\citep{GORDON14}.
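The prescription just described (absolute uncertainties doubled for the extended-source beam-shape uncertainty, direct sums on the diagonal, and shared absolute terms only within bands calibrated together) reproduces each entry of the matrix in Eq. \\ref{Eq: Mcal}; a minimal sketch:

```python
# Build M_fit from the stated uncertainties: PACS 5% absolute / 2% relative,
# SPIRE 4% absolute / 1.5% relative. Absolute terms are doubled for the
# extended-source beam-shape uncertainty; diagonal entries use the direct
# (not quadratic) sum of absolute and relative terms.
abs_unc = [2 * 0.05, 2 * 0.05, 2 * 0.04, 2 * 0.04, 2 * 0.04]  # doubled
rel_unc = [0.02, 0.02, 0.015, 0.015, 0.015]
pacs = {0, 1}  # indices of bands calibrated together: PACS100, PACS160

m_fit = [[0.0] * 5 for _ in range(5)]
for i in range(5):
    for j in range(5):
        same_group = (i in pacs) == (j in pacs)
        if i == j:
            m_fit[i][j] = (abs_unc[i] + rel_unc[i]) ** 2  # direct sum
        elif same_group:
            m_fit[i][j] = abs_unc[i] * abs_unc[j]  # shared absolute term
```

For example, the PACS diagonal becomes $(0.10+0.02)^2 = 0.12^2$ and the SPIRE diagonal $(0.08+0.015)^2 = 0.095^2$, matching Eq. \\ref{Eq: Mcal}.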
The final $\\mathcal{M}_{\\rm fit}$ is:\n\\begin{eqnarray}\\label{Eq: Mcal}\n\\mathcal{M}_{\\rm fit} = \n \\left[\n\\arraycolsep=3.0pt\n\\begin{array}{ccccc}\n 0.12^2 & 0.1^2 & 0 & 0 & 0 \\\\\n 0.1^2 & 0.12^2 & 0 & 0 & 0 \\\\\n 0 & 0 & 0.095^2 & 0.08^2 & 0.08^2 \\\\\n 0 & 0 & 0.08^2 & 0.095^2 & 0.08^2 \\\\\n 0 & 0 & 0.08^2 & 0.08^2 & 0.095^2 \\\\\n\\end{array}\\right].\n\\end{eqnarray}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{_Model_5555_NGC5457_z_merged.pdf}\n\\caption{An example of the observed SED versus the fitted SED from a single binned region. Red: The observed SED and error used in the fit. The error bars only include the square roots of the diagonal terms from the complete covariance matrix $\\mathcal{C}$. Green dot: The SED convolved with the response function. Orange dashed line: The model SED generated from the expectation values in the fit. Gray lines: Selected models with transparency proportional to $\\mathcal{L}$. For each method, we randomly select 50 models from the subset $\\mathcal{L}(M^n_{ij...d}|I^n) \\geq \\max\\big(\\mathcal{L}(M^n_{ij...d}|I^n)\\big)\/1000$ for plotting. Note that both the WD and PL methods allow FB components with peak wavelength below 100 \\micron, where we do not include observational constraints in this study. Therefore, the unusual SED shapes at short wavelengths do not affect the fitting quality of those models, and we still obtain similar expectation values of $\\Sigma_d$ from these methods.\\label{fig: example_model_merged}}\n\\end{figure*}\nWith the relative likelihood $\\mathcal{L}(\\overline{M_{ij...d}}|I_{\\rm obs})$ calculated, we can construct the full probability distribution function (PDF) for each parameter by summing over all other dimensions in parameter space. For example, if the index $i$ corresponds to $\\Sigma_d$, then the PDF of $\\Sigma_d$ with observed $I^n$ would be $P_{\\Sigma_{d,i}} = \\sum_{j...d} \\mathcal{L}(\\overline{M_{ij...d}}|I_{\\rm obs})$.
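A toy version of this grid evaluation and marginalization (Eqs. \\ref{eq: likelihood} and \\ref{eq: chi2}), with a hypothetical two-parameter, two-band model and a diagonal covariance, illustrates the procedure:

```python
import math

# Toy 2-band "SED" model M(a, t) = [a, a*t] with hypothetical parameters
# a (amplitude) and t (band ratio); diagonal covariance with sigma = 0.1.
obs = [1.0, 2.0]
sigma = 0.1
a_grid = [0.5 + 0.01 * i for i in range(101)]   # a from 0.5 to 1.5
t_grid = [1.5 + 0.01 * j for j in range(101)]   # t from 1.5 to 2.5

def likelihood(a, t):
    """Relative likelihood exp(-chi^2 / 2) for one grid point."""
    model = [a, a * t]
    chi2 = sum((m - o) ** 2 / sigma ** 2 for m, o in zip(model, obs))
    return math.exp(-0.5 * chi2)

# Marginalize over t to get the PDF of a, then take the expectation value.
pdf_a = [sum(likelihood(a, t) for t in t_grid) for a in a_grid]
norm = sum(pdf_a)
a_expect = sum(a * p for a, p in zip(a_grid, pdf_a)) / norm
```

With the observation generated at $(a, t) = (1, 2)$, the marginalized expectation value of $a$ recovers the input to within the grid and noise scales.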
We can then calculate the expectation value\\footnote{When calculating the expectation values, we use logarithmic scales for variables with logarithmic spacing in the grid.}, and the probability-weighted 16\\% and 84\\% values, which bracket the 1-$\\sigma$ confidence interval and are used to represent the uncertainty of the fit. An example of the observed SED versus the fitted models with all methods is shown in Figure \\ref{fig: example_model_merged}. An example of the log-scale likelihood distribution and the correlation between fitting parameters is shown in Figure \\ref{fig: example_model_corner}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_Corner_5555_NGC5457_BE.pdf}\n\\caption{Likelihood distribution in the parameter space from the results of the BE method at the same binned region as in Figure \\ref{fig: example_model_merged}. Both the histograms and 2-dimensional histograms are shown in log scale. The figure does not include the whole parameter space; it is magnified to emphasize the region with $\\chi^2 \\leq \\big(\\min(\\chi^2) + 6\\big)$.\\label{fig: example_model_corner}}\n\\end{figure}\n\n\\subsubsection{Calibrating $\\kappa_{160}$ \\label{Sec: calibration}}\nWe use the procedure and integrated dust SED of the MW diffuse ISM from \\citet{GORDON14} to calibrate $\\kappa_{160}$ in our models. The SED was originally measured with the Cosmic Background Explorer (COBE), where the $\\lambda \\geq 127~\\micron$ measurements are from the Far Infrared Absolute Spectrophotometer (FIRAS) and the 100 \\micron\\ measurement is from the Diffuse Infrared Background Experiment (DIRBE). The resulting SED is 0.6887, 1.4841, 1.0476, 0.5432, and 0.2425 ${\\rm MJy~sr^{-1}~(10^{20}~H~atom)^{-1}}$ for the 100, 160, 250, 350, and 500 \\micron\\ bands. These values differ from those given by \\citet{GORDON14} because we include a factor of 0.97 for the molecular cloud correction \\citep{COMPIEGNE11}.
The ionized gas factor in \\citet{COMPIEGNE11} is excluded because we do not include ionized gas throughout this study, including in the calculation of the average DGR in the MW diffuse ISM \\citep{JENKINS09, GORDON14}. The dust-to-hydrogen mass ratio appropriate for this high-latitude diffuse region is calculated by averaging the depletion strength factor $\\rm F_\\star$ over sightlines in \\citet{JENKINS09} with hydrogen column densities similar to the observed region. The resulting $\\rm F_\\star$ is 0.36, and the dust-to-hydrogen mass ratio is $1\/150$, which corresponds to a dust surface density to H column density ratio of $5.30\\times10^{-3}~{\\rm M_\\sun~ pc^{-2}}~(10^{20}~{\\rm H~atom})^{-1}$.\n\nDuring calibration, it is important to use the same models and fitting methods as the real fitting \\citep{GORDON14}. We follow the same steps as our fitting techniques except for four necessary differences: 1) We replace the original $\\mathcal{M}_{\\rm fit}$ with $\\mathcal{M}_{\\rm cali}$ (Eq. \\ref{eq: M_cal cali}) for calibration since the calibration data came from COBE instead of \\textit{Herschel}. Following \\citet{FIXSEN97}, we assume 0.5\\% relative uncertainty and 2\\% absolute uncertainty for FIRAS (calibrating the PACS160 and SPIRE bands), and 1\\% relative uncertainty and 10\\% absolute uncertainty for DIRBE (calibrating PACS100).\n\\begin{eqnarray}\\label{eq: M_cal cali}\n\\mathcal{M}_{\\rm cali} = \n \\left[\n\\arraycolsep=3.0pt\n\\begin{array}{ccccc}\n 0.11^2 & 0 & 0 & 0 & 0 \\\\\n 0 & \\cdf & 0.02^2 & 0.02^2 & 0.02^2 \\\\\n 0 & 0.02^2 & \\cdf & 0.02^2 & 0.02^2 \\\\\n 0 & 0.02^2 & 0.02^2 & \\cdf & 0.02^2 \\\\\n 0 & 0.02^2 & 0.02^2 & 0.02^2 & \\cdf \\\\\n\\end{array}\\right]\n\\end{eqnarray}\n2) No $\\mathcal{C}_{\\rm bkg}$ term is applied; $\\mathcal{C}_{\\rm cal}$ is the only variance term considered. 
3) Due to the small uncertainty of the COBE data, the normal parameter spacing is not sampled finely enough to resolve the PDF for all the parameters. Thus, we use a two-step calibration: we first fit with the normal parameter space, then reduce the parameter range to a smaller region near the peak with a finer spacing (see the ``Range$_c$'' and ``Spacing$_c$'' columns in Table \\ref{tab: grid space}), and finally fit with this new parameter spacing and report the results. 4) Our SED per hydrogen atom of the MW diffuse ISM is weaker than the one in \\citet{GORDON14} by a factor of 0.97 due to the molecular cloud fraction.\n\n\\begin{deluxetable*}{llcc}\n\\tablecaption{Results of calibrating emissivity to the MW high latitude SED.\\label{tab: cali result}}\n\\tablecolumns{4}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Model} &\n\\colhead{$\\kappa_{160}~({\\rm cm^2~g^{-1}})$} &\n\\colhead{Other parameters} &\n\\colhead{Expectation values}}\n\\startdata\nSE & $10.10\\pm1.42$ & ($T_d$, $\\beta$) & ($20.90\\pm0.62$~K, $1.44\\pm0.08$) \\\\\nFB & $25.83\\pm0.86$ & ($T_d$) & ($17.13\\pm0.12$~K) \\\\\nBE & $20.73\\pm0.97$ & ($T_d$, $\\beta_2$) & ($18.02\\pm0.18$~K, $1.55\\pm0.06$) \\\\\nWD & $27.46\\pm1.14$ & ($T_d$, $f_W$) & ($16.60\\pm0.25$~K, $0.00343\\pm0.00143$) \\\\\nPL & $26.60\\pm0.98$ & ($\\alpha$, $\\log_{10}\\gamma$, $\\log_{10} U_{\\rm min}$) & ($1.69\\pm0.19$, $-1.84\\pm0.21$, $-0.16\\pm0.03$)\\\\\n\\enddata\n\\end{deluxetable*}\nThe calibrated $\\kappa_{160}$ values range from 10.10 to 27.46$~\\rm cm^2~g^{-1}$; see the complete results in Table \\ref{tab: cali result}. This is a fairly large range, which indicates that the choice of model does affect the measurement of dust properties. 
Our results are comparable with calculated $\\kappa_{160}$ values in the literature: e.g., the widely used \\citet{DRAINE07} model, with updates in \\citet{DRAINE14}, gives a $\\kappa_{160}$ of 13.11$~\\rm cm^2~g^{-1}$ for silicates, 10.69$~\\rm cm^2~g^{-1}$ for carbonaceous grains, and 12.51$~\\rm cm^2~g^{-1}$ for the combined model. The standard model in \\citet{GALLIANO11} gives a value of 14$~\\rm cm^2~g^{-1}$, and 16$~\\rm cm^2~g^{-1}$ after replacing graphite with amorphous carbons. A recent calculation by \\citet{RELANO18}, following the \\citet{DESERT90} dust model, gives an equivalent $\\kappa_{160}=22.97~\\rm cm^2~g^{-1}$.\n\nIn the MBB model calibration process of \\citet{GORDON14} and \\citet{GORDON17}, the resulting $\\kappa_{160}$ falls between 30.2 and 36.4$~\\rm cm^2~g^{-1}$, depending on the model used. The closest comparison between the two studies is between the SMBB in \\citet{GORDON14}, where they obtain $\\kappa_{160}=30.2~\\rm cm^2~g^{-1}$, and our SE, where we obtain $\\kappa_{160}=10.1~\\rm cm^2~g^{-1}$. Our calibration method differs from \\citet{GORDON14} in four ways: 1) With the COBE uncertainty values we quote, we allow more deviation at 100 \\micron\\ than in the other bands, whereas \\citet{GORDON14} adopted correlated and uncorrelated uncertainty values that are uniform across all bands. 2) We use an $\\mathcal{M}_{\\rm cali}$ that assumes the 100 \\micron\\ calibration is independent of the other bands, since DIRBE and FIRAS were calibrated independently; \\citet{GORDON14} assumed that all bands are correlated with the same absolute uncertainties. 3) We use a two-step fitting to increase the accuracy only for calibration, while \\citet{GORDON14} used exactly the same methods for calibration and fitting. 4) Our SED per hydrogen atom of the MW diffuse ISM is weaker by a factor of 0.97 due to the molecular cloud fraction. 
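The structure of $\\mathcal{M}_{\\rm cali}$ (an independently calibrated DIRBE band plus FIRAS bands that share a correlated absolute term) can be sketched as below. The quadrature combination of relative and absolute terms on the diagonal is our assumption for illustration; the paper's diagonal entries are set by its own convention and need not match exactly:

```python
# Sketch: build a fractional calibration covariance matrix with one
# independently calibrated band (DIRBE/PACS100) and four bands sharing
# a correlated absolute term (FIRAS: PACS160 + SPIRE). Percentages from
# Fixsen et al. (1997) as quoted in the text.
bands = ["PACS100", "PACS160", "SPIRE250", "SPIRE350", "SPIRE500"]
rel = {"DIRBE": 0.01, "FIRAS": 0.005}    # relative (uncorrelated) terms
absu = {"DIRBE": 0.10, "FIRAS": 0.02}    # absolute (correlated) terms

def inst(band):
    return "DIRBE" if band == "PACS100" else "FIRAS"

n = len(bands)
M = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if inst(bands[i]) == inst(bands[j]):
            M[i][j] = absu[inst(bands[i])] ** 2   # shared absolute term
    M[i][i] += rel[inst(bands[i])] ** 2           # quadrature sum (assumption)
```

This reproduces the block structure of Eq. \\ref{eq: M_cal cali}: zero covariance between DIRBE and FIRAS bands, $0.02^2$ between FIRAS bands, and slightly larger diagonal entries.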
In Section~\\ref{sec:sensitivity} we discuss the sensitivity of the results to choices in the SED fitting and calibration in more detail.\n\n\\section{Results}\\label{sec: results}\nWe fit the SEDs from all binned regions with all five MBB variants introduced in \\secref{Sec: Model}. We calculate the DGR in each bin from the observed $\\Sigma_{\\rm gas}$ and the fitted $\\Sigma_d$. Here, we look at the DGR and dust temperature radial gradients for each model, and at the residuals and reduced chi-square values about the best fit. In doing so, we are interested in which models meet our physically motivated expectations and which provide good fits to the SED. The complete fitting results are shown in Appendix \\ref{app: fitting}, along with their correlations in Appendix \\ref{app: corner}.\n\n\\subsection{DGR-metallicity relation\\label{sec: max DGR}}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_DGR-vs-Metallicity_NGC5457_z_merged.pdf}\n\\caption{DGR expectation values versus radius and metallicity. The shaded regions show the intrinsic scatter of the DGR from the $\\Sigma_d$ fitting results and the zero-point fluctuation of $\\Sigma_{\\rm gas}$ (\\secref{subsec: HI}). MAX is the maximum possible DGR calculated as a function of metallicity. Its range is set by the difference between the \\citet{LODDERS03} and \\citet{ASPLUND09} chemical compositions, which is small at this plotting scale.\\label{fig: DGR_grad_models_merged}}\n\\end{figure}\nIn Figure \\ref{fig: DGR_grad_models_merged}, we plot the DGR-metallicity relation from all fitting methods. The metallicity-radius relation is calculated with Eq. 10 in \\citet{CROXALL16}. 
We first separate M101 into 20 radial regions and, in each region with $r_i\\leq r < r_j$, take the summed expectation value of the dust mass divided by the total gas mass as the expectation value of the DGR (${\\rm <DGR>}$) in that region, that is:\n\\begin{equation}\n{\\rm <DGR>}_{ij} = \\frac{\\sum_{r_i\\leq r_k < r_j}<\\Sigma_d>_k A_k}{\\sum_{r_i\\leq r_k < r_j}M_{{\\rm gas},k}},\n\\end{equation}\nwhere $<\\Sigma_d>_k$ and $A_k$ are the expectation value of $\\Sigma_d$ and the area of the $k$-th binned region, respectively. We estimate the uncertainties of these expectation values of the DGR with the ``realize'' method \\citep{GORDON14}; the uncertainties are $\\sim$ 0.02 dex in the high metallicity region, $\\sim$ 0.09 dex at $\\rm 12+\\logt({\\rm O\/H})\\sim8.2$, and $\\sim$ 0.6 dex in the lowest metallicity region, which are reasonably small. However, there is also intrinsic scatter of the DGR in each radial region, which can be larger than these uncertainties. To estimate this intrinsic scatter of the DGR per $M_{\\rm gas}$ within one radial region, we calculate the distribution by summing up the PDFs of the DGR from each bin in that radial region, weighted by their $M_{\\rm gas}$. We then take the region between the 16th and 84th percentiles of the distribution as the range of the intrinsic scatter. This intrinsic scatter is included in Figure \\ref{fig: DGR_grad_models_merged}, along with the zero-point uncertainty in $\\Sigma_{\\rm gas}$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_DTM_C18_NGC5457_BE.pdf}\n\\caption{Our DGR and DTM versus metallicity from the BE method results. (a) The DGR expectation values fitted by the BE model from each binned region are shown with error bars. Shaded region: The scatter of the DGR, defined as in Figure \\ref{fig: DGR_grad_models_merged}. (b) The DGR from the BE model with power-law (${\\rm DGR} \\propto Z^x$) fitting as listed in Table \\ref{tab: D2M}. 
Blue: The expectation values calculated from the combined PDF (the same for figures in \\secref{sec: discussions}). Orange: The power-law result for the whole data range. Green: Fitting with only $\\rm 12+\\logt({\\rm O\/H}) > 8.2$, where we have a more concentrated distribution of data points. (c) The DTM from the BE model. The DTM scatter includes the DGR scatter, the $\\rm 12+\\logt({\\rm O\/H})$ uncertainty \\citep{CROXALL16}, and the $M_{\\rm O}\/M_{\\rm Z}$ uncertainty (\\secref{sec: MOMZ unc}). The horizontal lines are the DTM = 0.1, 0.2, \\ldots, 1.0 locations.\\label{fig: DGR_grad_models_BE}}\n\\end{figure}\nThe distribution of our original data points is denser in the region with $\\rm 12+\\logt({\\rm O\/H}) \\ga 8.2$, where the original SNR is high. This is illustrated in Figure \\ref{fig: DGR_grad_models_BE} (a) with the results from the BE model. Within this range, all models except SE have their DGR dropping by nearly 1 dex, which is around twice as steep as the metallicity gradient. The SE model has its DGR dropping by 1.5 dex. At $\\rm 12+\\logt({\\rm O\/H}) < 8.0$, the scatter in the PDF is large (generally with $\\sigma \\ga 1~\\rm dex$), which makes determining a trend difficult. Treating metallicity as an independent variable, we fit our DGR versus metallicity with a linear equation ${\\log_{10} \\rm DGR} = a\\times(\\rm 12+\\logt({\\rm O\/H})) +b$ in both the full metallicity range and the $\\rm 12+\\logt({\\rm O\/H}) \\geq 8.2$ region only. Results from the BE model are shown in Figure \\ref{fig: DGR_grad_models_BE} (b), and all results are listed in Table \\ref{tab: D2M}. All the fitting results indicate a $\\log_{10}{\\rm DGR}$ variation steeper than $\\rm 12+\\logt({\\rm O\/H})$. 
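The slopes and intercepts reported in Table \\ref{tab: D2M} come from ordinary least squares with metallicity as the independent variable; a minimal sketch on synthetic, noiseless points (the data values are illustrative, laid exactly along the BE full-range relation quoted in the text):

```python
# Ordinary least squares for log10(DGR) = a * (12 + log O/H) + b.
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

# Synthetic points exactly on a = 1.7, b = -16.9 (zero scatter)
oh = [8.2, 8.3, 8.4, 8.5, 8.6, 8.7]
log_dgr = [1.7 * z - 16.9 for z in oh]
a, b = fit_line(oh, log_dgr)
```

With real data, the fitted $a$ and $b$ carry uncertainties from the scatter of the points, as listed in the table.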
The three methods with $\\beta$ fixed over the whole spectral range, FB, WD, and PL, have fitted slopes closer to one.\n\\begin{table}\n \\centering\n \\caption{$\\log_{10}$DGR versus $\\rm 12+\\logt({\\rm O\/H})$ linear fitting results.\\label{tab: D2M}}\n \\begin{tabular}{ccccc}\n \\hline\n \\hline\n Model & \\multicolumn{2}{c}{Full range} & \\multicolumn{2}{c}{$\\rm 12+\\logt({\\rm O\/H})\\geq 8.2$} \\\\\n & a & b & a & b \\\\\n \\hline\n SE & $2.7\\pm0.3$ & $-25.3\\pm 2.1$ & $3.2\\pm0.2$ & $-29.4\\pm 1.8$ \\\\\n FB & $1.5\\pm0.1$ & $-14.9\\pm0.9$ & $1.5\\pm0.1$ & $-15.3\\pm 0.7$ \\\\\n BE & $1.7\\pm0.1$ & $-16.9\\pm 1.0$ & $1.9\\pm 0.1$ & $-18.1\\pm 0.7$ \\\\\n WD & $1.5\\pm0.2$ & $-14.9\\pm1.4$ & $1.3\\pm0.1$ & $-13.1\\pm0.5$ \\\\\n PL & $1.3\\pm0.1$ & $-13.3\\pm 0.9$ & $1.2\\pm0.1$ & $-12.8\\pm0.5$ \\\\\n \\hline\n \\end{tabular}\n \\raggedright Note: Data are fitted with ${\\log_{10} \\rm DGR} = a\\times(\\rm 12+\\logt({\\rm O\/H})) +b$.\n\\end{table}\n\n\\subsubsection{Physical limitations to DGR}\\label{sec: MOMZ unc}\nDust grains are built from metals. Thus, we can calculate the theoretical upper limit to the DGR as the DGR for the case when all available metals are in dust. If the fitted DGR exceeds this upper limit, we consider the fitting result less physically plausible. To convert from oxygen abundance to total metallicity, we need to assume an ISM chemical composition. We calculate the mass ratio of oxygen to total metals from two studies of the solar chemical composition: 1) \\citet{LODDERS03}, which gives $M_{\\rm O}\/M_Z = 51\\%$, where $M_Z$ is the mass of all metals. This is the composition used in \\citet{JENKINS09}, which we will discuss in \\secref{sec: J09}. 2) A later version in \\citet{ASPLUND09}, which gives $M_{\\rm O}\/M_Z = 44.5\\%$. 
The conversion from $\\rm 12+\\logt({\\rm O\/H})$ to metallicity is given by:\n\\begin{equation}\\label{eq: DGR_limit}\n \\frac{M_Z}{M_{\\rm gas}} = \\frac{M_Z}{M_{\\rm O}}\\frac{M_{\\rm O}}{1.36M_{\\rm H}} = \\frac{\\frac{m_{\\rm O}}{m_{\\rm H}}10^{\\big(\\rm 12+\\logt({\\rm O\/H})\\big)-12}}{\\frac{M_{\\rm O}}{M_Z} \\times 1.36},\n\\end{equation}\nwhere $m_{\\rm O}$ and $m_{\\rm H}$ are the atomic weights of oxygen and hydrogen. The solar $\\rm 12+\\logt({\\rm O\/H})$ adopted in this study is $8.69\\pm0.05$ \\citep{ASPLUND09}. This estimate of the DGR upper limit can be incorrect if the actual chemical composition deviates from this range. For example, \\citet{CROXALL16} showed that $\\log_{10}({\\rm N\/O})$ goes from $-0.4$ to $-1.4$ as radius increases in M101, which means we may overestimate the upper limit in the outer disk if other major elements have similar trends.\n\nWe overlay the DGR upper limit calculated between $M_{\\rm O}\/M_Z = 44.5\\%$ and $51\\%$ with our results in Figure \\ref{fig: DGR_grad_models_merged}. We find that in the highest metallicity region, the DGR given by the SE method is greater than the upper limit by a factor of 3, which is outside the 16-84 percentile of the intrinsic scatter. This is unlikely to be a result of $\\alpha_{\\rm CO}$ variation because we would need $\\alpha_{\\rm CO}\\sim 9$ in the center of M101 to explain this apparent DGR. Such an $\\alpha_{\\rm CO}$ value is implausible given our knowledge of $\\alpha_{\\rm CO}$ in M101 \\citep{SANDSTROM13} and the metallicity dependence of $\\alpha_{\\rm CO}$ \\citep{BOLATTO13}. We thus consider the results from the SE method less physically plausible.\n\nWe also notice that, for all methods listed, there is a spike in the DGR expectation value exceeding the upper limit near $\\rm 12+\\logt({\\rm O\/H}) \\sim 7.9$. Nevertheless, all methods except SE still have their 16-84 percentile scatter falling under the DGR upper limit. 
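Equation \\ref{eq: DGR_limit} in code form (a sketch; the atomic weights are the only inputs beyond those stated in the text):

```python
def metal_mass_fraction(oh12, mo_over_mz=0.445):
    """Upper-limit M_Z / M_gas: all available metals locked in dust.
    oh12 is 12 + log10(O/H); the factor 1.36 accounts for helium;
    atomic weights m_O = 15.999, m_H = 1.008."""
    o_per_h = 10 ** (oh12 - 12)                       # number ratio O/H
    return (15.999 / 1.008) * o_per_h / (mo_over_mz * 1.36)

# Sanity check at the adopted solar value 12+log(O/H) = 8.69:
# the metal mass fraction comes out near the canonical solar Z ~ 0.013.
z_sun = metal_mass_fraction(8.69)
```

A larger assumed $M_{\\rm O}/M_Z$ (e.g. the \\citet{LODDERS03} value of 51\\%) lowers the inferred total metal mass, and hence the DGR ceiling, at fixed oxygen abundance.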
Thus, we consider all methods except SE still reasonable under the DGR upper-limit test. Note that the scatter in the regions with $\\rm 12+\\logt({\\rm O\/H}) < 8.2$ reaches the order of 1 dex, which means the fitted values there are less reliable.\n\n\\subsection{Temperature profiles}\\label{sec: T grad}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_T-profile_NGC5457_z_merged.pdf}\n\\caption{Radial profiles of dust temperature, $\\Sigma_{\\rm SFR}$ and $\\Sigma_\\star$. All profiles are plotted as gas-mass-weighted averages. Top panel: temperature profiles from all fitting methods. The 16-84 percentile scatter from the fitting is shown as shaded areas. Bottom panel: $\\Sigma_{\\rm SFR}$ and $\\Sigma_\\star$ profiles. See \\secref{subsec: other} for the data sources and calculation. A 10\\% uncertainty, as suggested in \\citet{DALE09}, is plotted as a shaded region.\\label{fig: T_prof_new}}\n\\end{figure}\nIn the top panel of Figure \\ref{fig: T_prof_new}, we plot the $M_{\\rm gas}$-weighted dust temperature as a function of radius for each method. Within a small radial range, we assume that the DGR variation is small, so the $M_{\\rm gas}$-weighted dust temperature is a representative $T_d$ in the corresponding radial region. For the PL method, temperature is not a directly fitted variable. Thus, we calculate the dust-mass-weighted average $U$ and convert it to temperature according to \\secref{sec: PL}.\n\nThe equilibrium dust temperature depends on the heating radiation field, which should be related to a combination of $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$ here, shown in the bottom panel of Figure \\ref{fig: T_prof_new}. Compared to the radial trend of the heating sources, the one model that stands out is SE: it has a temperature profile rising from the galaxy center to $0.8R_{25}$. 
It is possible to change the relationship between heating sources and dust temperature if the geometry and\/or the opacity of the ISM changes with radius. However, with both heating source tracers having intensities decreasing by more than one dex within $0.8R_{25}$, we expect a decreasing $T_d$ with radius to be the dominant trend. Thus, we again reach the conclusion of the previous section that results from the SE method are less physically plausible.\n\n\\subsection{Residual distributions}\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{_Residual_NGC5457_merged.pdf}\n\\caption{2-dimensional histograms of relative residual versus observed SED at each band. The x-axes are in units of $\\rm MJy~sr^{-1}$. The zero relative residual line is marked in gray.\\label{fig: Residual_merged}}\n\\end{figure*}\nThe residual distribution is one of the most straightforward ways to check the goodness of fit. For each method, we plot the 2-dimensional histogram of relative surface brightness residuals in Figure \\ref{fig: Residual_merged}. We expect that a good fit will give a residual distribution that is symmetric about zero (the gray lines in all panels of Figure \\ref{fig: Residual_merged}) and has no trend with the measured surface brightness. An example of a well-behaved residual distribution can be seen for the BE model in the SPIRE 250 band. Otherwise, there may be an underlying systematic effect indicating that the model is flawed or that an additional free parameter is needed.\n\nTwo features occur for all MBB methods: 1) At the high intensity end, all of our methods underestimate PACS160. 2) In general, the relative residuals are smaller at the low intensity end (see more discussion in \\secref{sec: compare-chi2}). The SE method gives the most compact residual distributions. This means that letting both $T_d$ and $\\beta$ vary freely provides the highest flexibility to fit the SED among all models here. 
However, we should bear in mind that the SE model yields DGR and temperature gradients distinct from the other models and that we consider those results less physically plausible, as shown in \\secref{sec: max DGR} and \\ref{sec: T grad}. The FB method yields the residual distribution least consistent with random scatter about the model: it shows the least compact residual distribution, with long tails of positive residuals, especially in PACS100, SPIRE350 and SPIRE500. These positive residuals mainly come from low intensity regions, indicating the need for $\\beta$ to change between high and low intensity regions. Among the remaining methods, both WD and PL improve the residuals at the short wavelengths covered by PACS100. This reflects the expected presence of warm, possibly out-of-equilibrium dust at these short wavelengths. The BE method has the second most compact residual distribution, and shows a better fit to the long wavelength bands that are crucial for accurately tracing $\\Sigma_d$.\n\n\\subsection{The reduced chi-square values\\label{sec: compare-chi2}}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_Residual_Chi2_NGC5457.pdf}\n\\caption{The $\\tilde{\\chi}^2$ distributions for all fitting methods. Left: 2-dimensional histograms of $\\tilde{\\chi}^2$ versus PACS100. The x-axes are in units of $\\rm MJy~sr^{-1}$. Note that the 2-dimensional histograms of $\\tilde{\\chi}^2$ with all five bands convey similar information; thus we only plot the ones for PACS100. Right: the horizontal histograms of $\\tilde{\\chi}^2$. The orange lines show the expected distribution according to the DoF.\\label{fig: residuals_chi2}}\n\\end{figure}\nThe reduced chi-square value is defined as $\\tilde{\\chi}^2 \\equiv \\chi^2\/(n - m)$, where $n$ is the number of observations (which is $5$ in our study) and $m$ is the number of fitting parameters (3 for SE, 2 for FB, 3 for BE, 3 for WD and 4 for PL). 
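A sketch of the $\\tilde{\\chi}^2$ computation with a full covariance matrix (the small solver and the toy numbers are ours; the paper evaluates $\\chi^2$ against its complete covariance matrix $\\mathcal{C}$):

```python
def solve(C, r):
    # Gauss-Jordan elimination solving C x = r
    # (adequate for a small, well-conditioned covariance matrix)
    n = len(r)
    A = [row[:] + [r[i]] for i, row in enumerate(C)]
    for k in range(n):
        piv = A[k][k]
        for j in range(k, n + 1):
            A[k][j] /= piv
        for i in range(n):
            if i != k:
                f = A[i][k]
                for j in range(k, n + 1):
                    A[i][j] -= f * A[k][j]
    return [A[i][n] for i in range(n)]

def reduced_chi2(obs, model, C, m_params):
    r = [o - mo for o, mo in zip(obs, model)]
    x = solve(C, r)                        # x = C^{-1} r
    chi2 = sum(ri * xi for ri, xi in zip(r, x))
    return chi2 / (len(obs) - m_params)    # divide by n - m

# Toy example: 5 bands, unit residual each, identity covariance,
# FB-like m = 2, so chi2 = 5 and reduced chi2 = 5/3
identity = [[1.0 if i == j else 0.0 for j in range(5)] for i in range(5)]
rchi2 = reduced_chi2([1.0] * 5, [0.0] * 5, identity, 2)
```

With correlated off-diagonal terms in $C$, the same residual vector generally yields a different $\\chi^2$, which is why the full covariance matters in the fits.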
This value takes both the uncertainties in the observations and the degrees of freedom (DoF) of the models into account. The $\\tilde{\\chi}^2$ value indicates how good the fit is and how much an extra fitting parameter improves the fit quality. We plot the $\\tilde{\\chi}^2$ distribution versus observed intensity in the left panels of Figure \\ref{fig: residuals_chi2}. As we have seen in the residual maps, the FB and WD methods have long tails in the low luminosity region. The FB and WD methods also have $\\tilde{\\chi}^2 \\geq 1$ in the high luminosity region, mainly due to the residuals at long wavelengths, where the corresponding uncertainties are much smaller. The PL method has relatively large $\\tilde{\\chi}^2$ everywhere, which means the extra DoF does not offer an improvement in the quality of the fit. Note that this result does not imply that a single temperature is physically more correct than an ISRF distribution, but indicates that the DoF from the ISRF distribution is less effective at improving the quality of the FIR SED fitting.\n\nAll the methods have a gradually rising $\\tilde{\\chi}^2$ toward the high luminosity region. Calculating the contribution to $\\tilde{\\chi}^2$ from each band shows that the most important contributor to this phenomenon is the PACS160 band. There is in general a $\\sim$20\\% systematic underestimation by the model fits in PACS160 in the center of M101. One possible explanation is that the contribution from the \\textsc{[Cii]} 158 \\micron\\ line is integrated into the PACS160 SED, which makes the PACS160 SED brighter than what is predicted by dust emission models. This effect is shown to be minor by \\citet{GALAMETZ14}, where the authors demonstrated that \\textsc{[Cii]} contributes only around 0.4\\% to the integrated 160~\\micron\\ emission. Another possible explanation is an unknown systematic uncertainty in PACS160. 
Previous work by \\citet{ANIANO12} found that PACS160 was $\\sim$20\\% higher than {\\em Spitzer} MIPS160 measurements in the bright regions of some nearby galaxies.\n\nWe also examine the histograms of $\\tilde{\\chi}^2$ (Figure \\ref{fig: residuals_chi2}, right panels) with respect to two features: 1) The mean value, which is expected to be one. 2) The shape of the histogram, which should resemble the $\\chi^2$-distribution with $k$ DoF\\footnote{We normalized the $\\chi^2$-distribution to a mean value of one, i.e., $k\\times f(k\\tilde{\\chi}^2, k)$.}. The SE method has a mean $\\tilde{\\chi}^2$ of 0.77, and its histogram is more compact than a $\\chi^2$-distribution with $k=2$. Both indicate that we might be overestimating the uncertainties in the SE method. FB and WD have mean values of 1.5 and 1.64, respectively, and flatter histograms than expected. BE has a mean value of 0.97 and a distribution resembling the expected one. PL has a mean value of 3.16, which means the extra parameters in the PL model do not help it make a more precise fit commensurate with its DoF.\n\n\\subsection{Summary of model comparison}\nAmong the MBB variants we have tested, we consider the SE method physically less plausible because the resulting temperature and DGR gradients do not match our physically motivated expectations. The DGR results from the other four MBB variants are consistent with each other in regions with $\\rm 12+\\logt({\\rm O\/H}) \\leq 8.5$, as illustrated in Figure \\ref{fig: DGR_grad_models_merged}. This implies that the dust masses measured from the MBB fitting are mostly insensitive to the specific choices made about the radiation field distribution. According to the residual distributions and $\\tilde{\\chi}^2$ values, the BE model gives the best fit statistically, which means that the most important first-order correction to the basic MBB is to allow $\\beta$ to vary in the long wavelength region. 
We will consider BE the preferred model based on these tests.\n\n\\section{Discussion}\\label{sec: discussions}\n\\subsection{Is DTM Constant in M101?}\\label{sec: dicuss_variable_DTM}\nAll of our models indicate that the DGR falls off more steeply than metallicity, showing a variable DTM ratio. Our preferred model (BE) has ${\\rm DGR} \\propto Z^{1.7}$, which is equivalent to the DTM changing from 0.25 at $\\rm 12+\\logt({\\rm O\/H})\\sim$7.8 to 1 above $\\rm 12+\\logt({\\rm O\/H})\\sim$8.5. Models with $\\beta$ fixed have smaller power-law indices: specifically, the FB and WD models show ${\\rm DGR} \\propto Z^{1.4}$, and the PL model shows ${\\rm DGR} \\propto Z^{1.2}$. Even if we only consider the region with $\\rm 12+\\logt({\\rm O\/H}) \\geq 8.2$, where the majority of our data points reside, we still obtain a DGR trend steeper than the metallicity gradient. These results are based on direct-$T_e$ method metallicity measurements \\citep{CROXALL16} with uncertainties in $\\rm 12+\\logt({\\rm O\/H})$ around $0.04-0.08$ dex.\n\nIn order to understand what aspects of the dust life cycle could result in a variable DTM, we look for mechanisms that affect the dust mass and the metals in the ISM at different rates. The five most important mechanisms of this kind are: 1) Accretion of metals in the ISM onto existing dust grains, which raises the DTM. 2) ISM enrichment from stellar sources (e.g. AGB stars, SNe), which has a DTM characteristic of the particular stellar source instead of the DTM of the current ISM. 3) Dust destruction by SNe, which lowers the DTM. 4) Infall of the circumgalactic medium (CGM) into the galaxy, which dilutes the ISM DTM with the lower DTM of the CGM \\citep{DWEK98, HIRASHITA99, ZHUKOVSKA16}. 5) Outflows of dust and metals into the CGM, which increase the ISM DTM because the outflow is less dusty than the ISM \\citep{LISENFELD98}.\n\nAmong these mechanisms, ISM accretion has a rate that increases with ISM density, especially in cold clouds \\citep{DWEK98, ASANO13}. 
Observationally, ISM density can be roughly traced by the mass fraction of molecular hydrogen ($\\rm f_{H_2}$)\\footnote{Without knowing the three-dimensional ISM geometry, $\\rm f_{H_2}$ is a better indicator of ISM density than $\\Sigma_{\\rm gas}$.}. The rate of enrichment from stellar sources should follow the stellar mass surface density ($\\Sigma_\\star$), modulo stellar age effects. The effects of production and destruction of dust by SNe should track both the massive star formation rate ($\\Sigma_{\\rm SFR}$) and the older stellar populations ($\\Sigma_\\star$).\n\\begin{table}\n \\centering\n \\caption{Correlation between $\\log_{10}$DTM and the physical quantities $\\log_{10}\\rm f_{H_2}$, $\\log_{10}\\Sigma_\\star$ and $\\log_{10}\\Sigma_{\\rm SFR}$.\\label{tab: Residual trend}}\n \\begin{tabular}{ccccc}\n \\hline\n \\hline\n Quantity & \\multicolumn{2}{c}{Direct} & \\multicolumn{2}{c}{Residual} \\\\\n & $\\rho_S$ & $p$-value\\footnote{The $p$-value is the probability of obtaining a $\\rho_S$ greater than or equal to the calculated value from the given data when the null hypothesis is true. In other words, the $p$-value goes from 0 to 1, and a smaller $p$-value implies a more significant correlation.} & $\\rho_S$ & $p$-value \\\\\n \\hline\n $\\log_{10}\\rm f_{H_2}$ & 0.80 & $\\ll 1$ & 0.26 & $\\ll 1$ \\\\\n $\\log_{10}\\Sigma_\\star$ & 0.72 & $\\ll 1$ & -0.05 & 0.12 \\\\\n $\\log_{10}\\Sigma_{\\rm SFR}$ & 0.22 & $\\ll 1$ & -0.08 & 0.007 \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{_residual_trends_NGC5457_BE.pdf}\n\\caption{Relation between the DTM and three physical quantities: $\\rm f_{H_2}$ (a, d), $\\Sigma_\\star$ (b, e) and $\\Sigma_{\\rm SFR}$ (c, f). (a, b, c): Relations in the raw data. (d, e, f): Relations after removing the radial trends in all four quantities: $\\log_{10}$DTM, $\\log_{10}\\rm f_{H_2}$, $\\log_{10}\\Sigma_\\star$ and $\\log_{10}\\Sigma_{\\rm SFR}$. 
The radial trend removal is done by first fitting the quantities versus radius with linear regression, and then subtracting the regression results from the original data. Radial trend removal is discussed in \\secref{sec: dicuss_variable_DTM}. The mean uncertainty in $\\Sigma_d$ is 0.1 dex. $\\Sigma_\\star$ is in units of $M_{\\sun}~\\rm pc^{-2}$ and $\\Sigma_{\\rm SFR}$ is in units of $M_{\\sun}~\\rm kpc^{-2}~yr^{-1}$.\\label{fig: residual trend}}\n\\end{figure*}\nTo test these potential correlations of the DTM with environmental characteristics, we calculate Spearman's rank correlation coefficient ($\\rho_S$) and the $p$-value between $\\log_{10}$DTM and these three quantities. Note that we only include the region with $\\rm f_{H_2}\\geq 5\\%$ for all four quantities, namely DTM, $\\rm f_{H_2}$, $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$, due to the detection limit of HERACLES. $\\log_{10}$DTM correlates strongly and significantly with both $\\log_{10} \\rm f_{H_2}$ and $\\log_{10} \\Sigma_\\star$, while it shows a weaker but significant correlation with $\\log_{10}\\Sigma_{\\rm SFR}$. This is shown in the ``Direct'' columns of Table \\ref{tab: Residual trend} and the top panels of Figure \\ref{fig: residual trend}.\n\nWhile there are significant correlations between the DTM and these environmental characteristics, all the quantities here (DTM, $\\rm f_{H_2}$, $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$) to first order have major trends that vary with radius. $\\rm f_{H_2}$, $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$ all have a $\\rho_S$ with radius greater than their $\\rho_S$ with $\\log_{10}$DTM. $\\log_{10}$DTM also has a higher $\\rho_S$ with radius than with the other quantities. Calculating $\\rho_S$ and the $p$-value directly will therefore be dominated by this major radial trend. In order to investigate what drives the DTM variation, we need to remove these dominant radial trends. 
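A minimal sketch of this detrend-then-correlate procedure, assuming simple least-squares detrending and a tie-free Spearman implementation (rank, then Pearson correlation on the ranks); the toy data are illustrative only:

```python
def detrend(r, y):
    # Fit y = a*r + b by least squares and return the residuals
    n = len(r)
    mr, my = sum(r) / n, sum(y) / n
    a = (sum((ri - mr) * (yi - my) for ri, yi in zip(r, y))
         / sum((ri - mr) ** 2 for ri in r))
    b = my - a * mr
    return [yi - (a * ri + b) for ri, yi in zip(r, y)]

def spearman(x, y):
    # Spearman rho = Pearson correlation of the ranks (no tie handling)
    def rank(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        rk = [0] * len(v)
        for pos, i in enumerate(order):
            rk[i] = pos
        return rk
    rx, ry = rank(x), rank(y)
    n = len(x)
    m = (n - 1) / 2
    num = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    den = sum((a - m) ** 2 for a in rx)   # same value for tie-free ry
    return num / den

# Toy data: two quantities sharing a radial gradient plus the same
# local (non-radial) wiggle; detrending isolates the shared wiggle.
radius = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
dtm = [-0.5 * r + 0.1 * ((-1) ** i) for i, r in enumerate(radius)]
fh2 = [-0.8 * r + 0.1 * ((-1) ** i) for i, r in enumerate(radius)]
rho_resid = spearman(detrend(radius, dtm), detrend(radius, fh2))
```

In the toy case both quantities are strongly anticorrelated with radius, yet their detrended residuals are perfectly rank-correlated, which is the kind of fixed-radius covariation the residual $\\rho_S$ is designed to expose.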
This removal is done by first fitting $\\log_{10}$DTM, $\\log_{10} \\rm f_{H_2}$, $\\log_{10}\\Sigma_\\star$ and $\\log_{10}\\Sigma_{\\rm SFR}$ versus radius with linear regression, and then subtracting the regression results from the original data points to get the residuals. The correlations between $\\log_{10}$DTM and $\\log_{10} \\rm f_{H_2}$, $\\log_{10}\\Sigma_\\star$ and $\\log_{10}\\Sigma_{\\rm SFR}$ after radial trend removal are shown in the bottom panels of Figure \\ref{fig: residual trend} and the ``Residual'' columns of Table \\ref{tab: Residual trend}.\n\nThe resulting $\\rho_S$ between residual $\\log_{10}$DTM and residual $\\log_{10}\\rm f_{H_2}$ is 0.26, with a $p$-value $\\ll 1$. This indicates that the correlation between them is weak compared to the scatter in the data but significant; the null hypothesis, that the two variables (residual DTM and $\\rm f_{H_2}$) are unrelated, is extremely unlikely to be true. Residual $\\log_{10}\\Sigma_\\star$ and residual $\\log_{10}\\Sigma_{\\rm SFR}$, on the other hand, have their $\\rho_S$ drop relative to the direct correlations; their residual $\\rho_S$ values show extremely weak correlations and are thus considered negligible.\n\nBased on this calculation, we suggest that ISM density may be the most important environmental factor affecting the DTM in M101. This would explain the correlation between variations of the DTM at a fixed radius and variations in $\\rm f_{H_2}$. The stellar sources, traced by $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$, do not correlate significantly with the variations of the DTM at a fixed radius.\n\n\\subsubsection{Variable emissivity coefficient}\\label{subsec: discuss_emissivity}\nAlthough we have thus far interpreted our results as changes in the DTM, an alternative possibility is that $\\kappa_{160}$ varies with environment instead. As discussed in \\secref{Sec: Model}, all of our MBB variants are subject to the degeneracy between $\\Sigma_d$ and $\\kappa_{160}$. 
We deal with this degeneracy by calibrating $\\kappa_{160}$ with the MW diffuse ISM SED (\\secref{Sec: calibration}) and assuming all the variation in the temperature-corrected SED amplitude is due to $\\Sigma_d$ only. However, this assumption might fail if we observe environments that differ from the high-latitude MW diffuse ISM we used for calibration and if $\\kappa_{160}$ varies with the local environment. In general, our DGR($Z$) does not follow the DGR($Z$) calculated from ${\\rm F_\\star}=0.36$, which was used for our calibration. This leaves the possibility that the changes we see in the DTM are still degenerate with changes in $\\kappa_{160}$.\n\n$\\kappa_{160}$ can be a function of dust size, temperature, and composition, which may change as gas transitions from diffuse to dense phases. The calculations in \\citet{OSSENKOPF94, KOHLER11} show an enhanced dust emissivity due to coagulation of dust particles in dense ISM regions. This phenomenon is also observed by \\citet{PLANCK14} and \\citet{PLANCK15} in the MW, where the authors show an increase in total opacity with increasing ISM density and decreasing $T_d$. However, we note that both \\citet{PLANCK14} and \\citet{PLANCK15} \\textit{assumed} a constant DGR, and explained their observations with a change in the composition and structure of the dust particles.\n\nFor discussing emissivity variation due to coagulation, we focus on the dense regions in M101, where coagulation is more likely to happen. \nWe use the constant DTM of the MW \\citep{DRAINE11} as our reference true DTM and calculate how our DTM deviates from the reference as a function of ISM density, traced by $\\rm f_{\\rm H_2}$, as plotted in Figure \\ref{fig: rDGR_vs_metal_BE}. 
Note that the figure only includes the region with significant detection from HERACLES ($f_{\\rm H_2} \\ga 5\\%$, or $\\rm 12+\\logt({\\rm O\/H}) \\ga 8.4$), not the full range of our DGR-to-metallicity figures.\n\nWe calculate Pearson's correlation coefficient for all four log\/linear combinations of the $\\rm \\frac{DTM}{DTM_{MW}}$-to-$f_{\\rm H_2}$ relation, i.e., $\\rm \\frac{DTM}{DTM_{MW}}$-to-$f_{\\rm H_2}$, $\\rm \\frac{DTM}{DTM_{MW}}$-to-$\\log_{10} f_{\\rm H_2}$, $\\log_{10} \\rm \\frac{DTM}{DTM_{MW}}$-to-$f_{\\rm H_2}$, and $\\log_{10} \\rm \\frac{DTM}{DTM_{MW}}$-to-$\\log_{10} f_{\\rm H_2}$. The resulting coefficients are 0.712, 0.790, 0.694, and 0.795, respectively. Thus we continue our analysis with the $\\log_{10} \\rm \\frac{DTM}{DTM_{MW}}$-to-$\\log_{10} f_{\\rm H_2}$ relation. By fitting $\\log_{10} \\rm \\frac{DTM}{DTM_{MW}}$ to $\\log_{10} f_{\\rm H_2}$, our $\\rm \\frac{DTM}{DTM_{MW}}$ varies from 0.9 to 2.0 in this region. If we attribute this change to an increase in emissivity, then $\\kappa_{160}$ would go from 19 to $41~\\rm cm^2~g^{-1}$ in this region, with a relation of $\\kappa_{160} \\propto f_{\\rm H_2}^{0.2}$. This is comparable to the emissivity changes inferred by \\citet{PLANCK14} using similar reasoning in MW clouds and well within the range allowed by theoretical grain coagulation models \\citep{OSSENKOPF94, KOHLER11}.\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_relDGR_vs_afH2_NGC5457_.pdf}\n\\caption{Our DTM normalized by the MW DTM \\citep{DRAINE14} plotted as a function of H$_2$ mass fraction ($f_{\\rm H_2}$). The original distribution is shown in blue. A representative error bar in cyan, which only includes the uncertainties in DGR, is shown at top-left. Another error bar including extra uncertainty in $\\rm 12+\\logt({\\rm O\/H})$, which is considered systematic, is shown in green at top-left. The linear regression of $\\log_{10} \\rm DTM\/DTM_{MW}$-to-$\\log_{10} f_{\\rm H_2}$ is shown in red. 
Note that this plot only includes data with $f_{\\rm H_2} \\ga 5\\%$ ($\\rm 12+\\logt({\\rm O\/H}) \\ga 8.4$), and that the y-axis is in log scale.\\label{fig: rDGR_vs_metal_BE}}\n\\end{figure}\n\n\\subsubsection{Variable conversion factor}\\label{sec: alpha_CO discussion}\nAnother potential explanation of the change in DGR (and thereby DTM) is that the conversion factor $\\alpha_{\\rm CO}$ is not a constant; in that case, our estimates of $\\Sigma_{\\rm H_2}$ could be wrong. There are two major observed trends in $\\alpha_{\\rm CO}$ \\citep{BOLATTO13}. The first trend is a metallicity-dependent $\\alpha_{\\rm CO}$. In the model derived in \\citet{WOLFIRE10}, among others, $\\alpha_{\\rm CO}$ increases as metallicity decreases, which means we could be overestimating DGR in the outer part of M101. Recovering this overestimation would increase the variation in DTM and make the observed trends stronger. Moreover, since $\\rm f_{H_2}$ traced by a fixed $\\alpha_{\\rm CO}$ drops steeply with increasing radius in M101, any modification from a metallicity-dependent $\\alpha_{\\rm CO}$ that can affect DGR in the disk must posit a large and almost totally invisible reservoir of CO-dark molecular gas. \\citet{BOLATTO13} suggest using a constant $\\alpha_{\\rm CO}$ in regions with metallicities $Z \\ge 0.5~Z_\\sun$. When we test the total gas mass from a constant $\\alpha_{\\rm CO}$ against the one calculated with the \\citet{WOLFIRE10} metallicity-dependent $\\alpha_{\\rm CO}$, the difference between them is at most 0.12 dex. This small change is due to the fact that in the radial region of M101 where H$_2$ makes a substantial contribution to the total gas mass, the metallicity is greater than $\\rm 12+\\logt({\\rm O\/H})=8.4$, where $\\alpha_{\\rm CO}$ only changes by a small amount. 
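The gas-mass test just described can be sketched numerically. The sketch below is only an illustration: the radial profiles are made-up placeholders, and the exponential metallicity scaling for $\\alpha_{\\rm CO}$ is a toy assumption, not the actual \\citet{WOLFIRE10} prescription or the M101 measurements.

```python
import numpy as np

# Toy radial profiles (illustrative placeholders, NOT the M101 measurements)
metallicity = np.linspace(8.8, 8.0, 20)        # 12+log10(O/H), declining outward
z_prime = 10.0**(metallicity - 8.69)           # metallicity relative to solar
sigma_hi = np.full(20, 8.0)                    # atomic gas, Msun/pc^2
i_co = np.linspace(30.0, 0.3, 20)              # CO intensity, K km/s, falling steeply

alpha_fixed = 4.35                             # MW value, Msun/pc^2 (K km/s)^-1
# Toy metallicity dependence (an assumption for illustration only):
alpha_var = alpha_fixed * np.exp(0.4 * (1.0 / z_prime - 1.0))

gas_fixed = sigma_hi + alpha_fixed * i_co      # total gas with fixed alpha_CO
gas_var = sigma_hi + alpha_var * i_co          # total gas with variable alpha_CO

# Where CO is bright the metallicity is near solar, so the two estimates
# differ little; where alpha_CO is strongly boosted, CO is faint anyway.
diff_dex = np.abs(np.log10(gas_var / gas_fixed))
print(f"max difference: {diff_dex.max():.2f} dex")
```

The qualitative point survives the toy inputs: the molecular term dominates only where the metallicity (and hence the conversion factor) is close to the MW value, so the total gas mass changes little.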
Considering the unknown uncertainties caused by the constant DTM assumption in the metallicity-dependent model \\citep{BOLATTO13}, we decided to present only the results with a fixed $\\alpha_{\\rm CO}$.\n\nThe second trend is the decrease of $\\alpha_{\\rm CO}$ in the very center of some nearby galaxies, shown by \\citet{SANDSTROM13}. It is worth noting that the \\citet{SANDSTROM13} analysis assumed DGR was locally independent of $\\rm f_{H2}$ to simultaneously solve for $\\alpha_{\\rm CO}$ and DGR in their solution pixels. Over most of M101, however, the average $\\alpha_{\\rm CO}$ they find is similar to the standard MW conversion factor, so using the \\citet{SANDSTROM13} values or making the standard assumption of a MW $\\alpha_{\\rm CO}$ will not greatly impact our results. \\citet{SANDSTROM13} found that M101 has one of the largest observed central decreases in $\\alpha_{\\rm CO}$, showing $\\alpha_{\\rm CO}=0.35^{+0.21}_{-0.13}$ in the central solution pixel, which is far lower than the galaxy-average value. Adopting the galaxy-average value of $\\alpha_{\\rm CO}$ therefore causes us to overestimate the amount of gas in the center and subsequently underestimate the DGR and DTM. As shown in Figure \\ref{fig: DGR_grad_models_BE} (c), we do observe a decrease in the DGR and DTM in the central $\\sim$kpc of M101, which is likely the result of an incorrect conversion factor assumption there. However, since the affected region is small compared to our full M101 maps, we can neglect this effect in the DTM discussion.\n\nBeyond radial trends that alter $\\alpha_{\\rm CO}$ relative to what we have assumed, it is also possible that $\\alpha_{\\rm CO}$ varies from cloud to cloud at a fixed radius. If we overestimate $\\alpha_{\\rm CO}$ for a cloud, the DTM would be underestimated and $\\rm f_{H_2}$ would be overestimated. If we underestimate $\\alpha_{\\rm CO}$, we would underestimate $\\rm f_{H_2}$ and overestimate DTM. 
Both overestimation and underestimation work in the opposite sense of the correlation we observe in the residual DTM and $\\rm f_{H_2}$ and, if corrected for, would therefore strengthen our conclusions. Thus, the positive correlation between DTM and $\\rm f_{H_2}$ we calculated previously is not a result of $\\alpha_{\\rm CO}$ variation.\n\n\\subsubsection{Summary of DTM Measurements}\nTo summarize, we can explain our fitting results from all our MBB variants except the SE model with a variable DTM, where ${\\rm DGR} \\propto Z^{1.7}$ in the BE model. The maximum DGR is still within the available total metal abundance limits. By comparing the correlations between DTM and the physical quantities $\\rm f_{H_2}$, $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$, we conclude that the strongest environmental correlation of DTM is with $\\rm f_{H_2}$, which we take to be a reasonable observational indicator of ISM density and thus a tracer of the accretion process. We see no clear trends that indicate correlations of DTM with stellar sources or massive star formation.\n\nOn the other hand, we could also explain the DTM results with enhanced dust emissivity in dense regions due to coagulation. The increase in $\\kappa_{160}$ is at most twice the originally calibrated value, which is consistent with the findings of \\citet{PLANCK14}. A non-extreme metallicity-dependent $\\alpha_{\\rm CO}$ does not affect our DGR trend much due to the low $\\rm f_{H_2}$ in most regions; however, the change of $\\alpha_{\\rm CO}$ in the center is related to our observed decrease of DGR in the central kpc. 
Variability of $\\alpha_{\\rm CO}$ from cloud to cloud at fixed radius would lead to a negative correlation between residual DTM and residual $\\rm f_{H_2}$, which is the opposite of what we observe.\n\nBoth explanations, variable DTM and variable emissivity, are within the physically plausible range; thus we cannot definitively conclude whether the variations we see are mainly due to changes in DGR or changes in the emissivity. However, given the observation that elemental depletions in the Milky Way are a function of ISM density and $\\rm f_{H_2}$ \\citep[][see further discussion below]{JENKINS09}, which is equivalent to a variable DTM, we argue that attributing all variation to emissivity is unlikely. To break the degeneracy between emissivity and $\\Sigma_d$, one future path is to calculate emissivity from dust models according to the physical properties of the local ISM. Another is to build an observational database of $\\Sigma_d$-to-SED, with known metallicity and ISM density, for future calibration. \nAnother powerful test available in the near future will be to measure the properties of the UV\/optical extinction curve, like $\\rm R_V$, as a tracer for coagulation and processes that can change the IR emissivity in the Local Group, and correlate this extinction curve tracer with quantities observable outside the Local Group.\n\n\\subsection{Comparison with previous DTM studies}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_DTM_D14_NGC5457_BE.pdf}\n\\caption{Top: We compare our DGR($Z$) with that from M31 measured by \\citet{DRAINE14}. The solid line is where \\citet{DRAINE14} present their DGR fit in M31 with observed metallicity, and the dashed line is an extrapolation of their linear DGR($Z$). Within this metallicity region, our M101 results suggest a DTM twice as high as in M31. 
However, if we instead select the range of radii where the M31 $\\rm f_{H_2}$ matches what we see in M101 (red region), we find a much better agreement between our observed DTM and the extrapolation of \\citet{DRAINE14}. Bottom: Demonstration of how we select the green and red zones. Grey zone: \\citet{DRAINE14} $\\rm f_{H_2}$ range corresponding to the presented $\\rm 12+\\logt({\\rm O\/H})$ range. Blue: $\\rm f_{H_2}$-metallicity relation in M101. Green zone: Region with the same metallicity as the \\citet{DRAINE14} data range. Red zone: Region where M101 $\\rm f_{H_2}$ corresponds to \\citet{DRAINE14} $\\rm f_{H_2}$.\\label{fig: DTM_D14}}\n\\end{figure}\nIn Figure \\ref{fig: DTM_D14}, we plot our results compared to the linear DGR($Z$) relation discussed in \\citet{DRAINE14}. \\citet{DRAINE14} show that the M31 DTM matches very well with the DTM predicted from depletions along the line of sight to $\\zeta$Oph in the MW \\citep[${\\rm F_\\star}=1$ line of sight in][]{JENKINS09}. In the corresponding metallicity range, our DGR is larger than the one in \\citet{DRAINE14}. This is illustrated by the green zone in Figure \\ref{fig: DTM_D14}. The derived $\\kappa_{160}$ value in \\citet{DRAINE14} is 12.51, which is around 0.75 times our $\\kappa_{160}$ value. Thus, the DGR discrepancy at the high-metallicity end is not a result of our choice of $\\kappa_{160}$. Moreover, \\citet{DALCANTON15, PLANCK16} indicate that the \\citet{DRAINE07} model might overestimate $\\Sigma_d$ by a factor of $\\sim 2$, which would make the difference even larger. Thus, the difference between \\citet{DRAINE14} and our results in the high-metallicity region is not due to parameter selection, but due to physical differences between M101 and M31, or differences in the modeling.\n\nInstead of comparing regions with the same metallicity, we can also compare the DTM between regions in M31 and M101 with similar ISM density, traced by $\\rm f_{H_2}$ here. 
According to \\citet{NIETEN06}, the region in M31 where \\citet{DRAINE14} give the direct metallicity measurements has $\\rm f_{H_2}$ below 0.2, marked by the horizontal dashed line in Figure \\ref{fig: DTM_D14}. This $\\rm f_{H_2}=0.2$ upper limit meets our M101 data at $\\rm 12+\\logt({\\rm O\/H})=8.44$, indicated where the horizontal dashed line meets the blue curve in Figure \\ref{fig: DTM_D14}. We pick the region between $\\rm 12+\\logt({\\rm O\/H})=8.44$ and the location of our minimum $\\rm f_{H_2}$, shown in red in Figure \\ref{fig: DTM_D14}, as the region with similar ISM density to the M31 data in \\citet{DRAINE14}. Within this region, our DTM is consistent with the extrapolation of the \\citet{DRAINE14} DTM. This suggests that the difference in DTM between our results and \\citet{DRAINE14} may be a consequence of M101 having a higher $\\rm f_{H_2}$, and therefore enhanced depletion (i.e., larger DTM), at the metallicity of M31.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_DTM_R14_NGC5457_BE.pdf}\n\\caption{Our DGR versus metallicity with \\citet{REMY-RUYER14} results (data points in blue). The power-law (orange dashed line) and broken power-law (green dotted line) fits are quoted with MW conversion factors.\\label{fig: DTM_R14}}\n\\end{figure}\n\\citet{REMY-RUYER14} have compiled integrated DGR($Z$) for a large set of galaxies observed by \\textit{Herschel}. In Figure \\ref{fig: DTM_R14} we compare our measured DGR($Z$) with theirs. At the high-metallicity end, our slope is shallower than their power-law fit, but the two are within the 1-$\\sigma$ confidence level of each other ($2.02 \\pm 0.28$ from \\citet{REMY-RUYER14}). Unfortunately, the turnover point of the broken power law derived in \\citet{REMY-RUYER14} is at $\\rm 12+\\logt({\\rm O\/H}) = 8.10 \\pm 0.43$, and we do not have enough reliable DGR fitting results below that metallicity to compare with. 
It is hard to conclude whether a broken power law with a turnover point around $\\rm 12+\\logt({\\rm O\/H})=8.0$ would fit our results better than a power law. The \\citet{REMY-RUYER14} broken power law in the high-metallicity region is essentially identical to the \\citet{DRAINE11} power law.\n\n\\subsection{Comparison with MW depletion\\label{sec: J09}}\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{_DTM_J09_NGC5457_BE.pdf}\n\\caption{The comparison of our results with the DTM corresponding to various MW $\\rm F_\\star$ values described in \\citet{JENKINS09}. Most MW measurable regions have $0 \\la {\\rm F_\\star} \\la 1$. ${\\rm F_\\star}=0.36$ represents the average property of our $\\kappa_{160}$ calibration, and ${\\rm F_\\star}=\\rm inf$ corresponds to total depletion. The 40\\% $\\rm H_2$ location is marked because all \\citet{JENKINS09} data points have $f_{\\rm H_2} \\la 0.4$.\\label{fig: DTM_J09}}\n\\end{figure}\nStudies of the depletion of heavy elements in the MW \\citep{JENKINS09} also found a dependence of DTM on average ISM density and $\\rm f_{H_2}$. In Figure \\ref{fig: DTM_J09}, we display the DTM corresponding to various MW ${\\rm F_\\star}$ regions described in \\citet{JENKINS09}. All of their original data points have $f_{\\rm H_2} \\la 0.4$ and $17.4 \\la \\log_{10} (N_{\\rm HI}) \\la 21.8$. Regions with ${\\rm F_\\star}=1$ and ${\\rm F_\\star}=0$ are by definition the representative regions of high and low depletion in the diffuse ISM of the MW, respectively. Thus, the region between these two lines corresponds to a DTM similar to the MW range extending to lower metallicity. Most points with $\\rm 12+\\logt({\\rm O\/H}) \\leq 8.4$ fall inside this range. The high-latitude diffuse ISM in the MW used to calibrate our $\\kappa_{160}$ has ${\\rm F_\\star}=0.36$, which is why it was selected for the DGR calculation in that calibration (see \\secref{Sec: calibration}). 
The ${\\rm F_\\star}=\\rm inf$ line corresponds to total depletion, which is physically the same as the DGR upper limit discussed in \\secref{sec: max DGR}. All our DGR fitting results are within this limit. It is interesting to note that the point where the DGR trend falls below the maximum depletion is at the boundary between the molecular-gas-dominated and atomic-gas-dominated regions ($f_{\\rm H_2}\\sim 0.4$).\n\n\\subsection{Sensitivity of results to fitting methods}\\label{sec:sensitivity}\nIt is worth noting that given the same dust emission SED, the fitting results are sensitive to the methods and parameters in the fitting process. Thus, it is important to be clear and self-consistent about the choices we make for calibration and fitting, as demonstrated by \\citet{GORDON14}. We also need to be careful when comparing results across studies. Here, we use the process of $\\kappa_{160}$ calibration with the SE model, which gives $\\kappa_{160}=10.48\\pm1.48~\\rm cm^2~g^{-1}$ with the SED of the MW diffuse ISM from \\citet{GORDON14}, to illustrate the possible variations in results due to different choices. Since we want to focus only on the methods, we use the MW diffuse ISM from \\citet{GORDON14} in this section instead of ours described in \\secref{Sec: calibration} to eliminate the simple offset.\n\\begin{itemize}\n \\item By changing to different models, $\\kappa_{160}$ can go up to 21.16 (PL model), which is a 100\\% change. Thus, the choice of fitting model strongly affects the fitting results.\n \\item By making the fitting grid spacing coarser, from the original 0.002 spacing to a 0.1 spacing in $\\log_{10}\\kappa_{160}$, the resulting $\\kappa_{160}$ becomes 11.7, which is a 10\\% change. This has a mild effect on the fitting results, and is especially important when the grid spacing is larger than the adopted uncertainties.\n \\item The matrix form and values of the covariance matrix can affect the fitting results. 
By changing the covariance matrix from ours to the one in \\citet{GORDON14} and keeping all other factors the same, the resulting $\\kappa_{160}$ goes to 17.9, which is a 70\\% change. This also affects the results strongly.\n \\item The covariance matrix can also change the fitting residuals. For example, \\citet{GORDON14} assume a flat uncertainty across the five bands and equal correlation, which results in similar residuals among the five bands. On the other hand, we assume different values and correlations between the DIRBE and FIRAS bands, which results in better residuals in the FIRAS bands and a worse residual in the DIRBE band.\n\\end{itemize}\n\n\\section{Conclusions} \\label{sec: conclusions}\nWe present dust SED fitting results from five MBB variants in M101 with kpc-scale spatial resolution. We compare the resulting $\\Sigma_d$ and $T_d$ with known physical limitations, and conclude that the results from a simple variable-emissivity modified blackbody model are not physically plausible. The other four models have results consistent with each other at $\\rm 12+\\logt({\\rm O\/H}) \\leq 8.5$, which demonstrates the robustness of the modified blackbody model under many conditions. Among the four models, the one with a single-temperature blackbody modified by a broken power-law emissivity has the highest fitting quality in residuals and $\\tilde{\\chi}^2$ distribution. Thus, the first-order correction to the MBB, necessitated by our observed SEDs in M101, is to add flexibility in the emissivity spectral index at long wavelengths.\n\nThe resulting DTM, derived from our dust and gas surface densities and direct $\\rm T_e$-based metallicities, is not constant with radius or metallicity in M101 according to all five models. From the preferred BE model, a relation of $\\rm DGR \\propto Z^{1.7}$ is observed overall, and $\\rm DGR \\propto Z^{1.9}$ in the region with $\\rm 12+\\logt({\\rm O\/H}) \\geq 8.2$. 
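Power-law exponents like those quoted here correspond to straight-line fits in log space. A minimal sketch of that procedure follows; the data are synthetic stand-ins generated with a known exponent, not our M101 measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-pixel data (illustrative only): metallicity 12+log10(O/H)
# and a DGR following DGR ∝ Z^1.7 with lognormal scatter.
oh = rng.uniform(8.2, 8.8, 200)                    # 12 + log10(O/H)
log_dgr = 1.7 * (oh - 8.69) + np.log10(0.01) + rng.normal(0.0, 0.1, 200)

# A power law DGR ∝ Z^s is linear in log space: log10 DGR = s*log10 Z + c.
# Since log10 Z differs from 12+log10(O/H) only by a constant, the exponent
# is just the slope of a straight-line fit against 12+log10(O/H).
slope, intercept = np.polyfit(oh, log_dgr, 1)
print(f"recovered exponent: {slope:.2f}")          # close to the input 1.7
```

With the scatter and metallicity range used here, the fit recovers the input exponent to within a few hundredths, illustrating why a slope uncertainty must accompany any quoted exponent.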
We try to explain this variable DTM by searching for correlations between DTM and observed tracers of the formation and destruction mechanisms of dust. By comparing the correlations between DTM and the physical quantities ($\\rm f_{H_2}$, $\\Sigma_\\star$ and $\\Sigma_{\\rm SFR}$) after removing the major radial trend, we argue that the accretion of metals in the ISM onto existing dust grains could be a cause of this variable DTM, while we do not see evidence for correlations with stellar or SNe-related production and destruction.\n\nIt is also possible that the change in DTM is actually an enhancement of emissivity due to coagulation. In the center of M101, if we assume the \\citet{DRAINE14} DTM and calculate the possible change in emissivity, the resulting $\\kappa_{160}$ would be $\\sim$19 to $41~\\rm cm^2~g^{-1}$, which is 0.9 to 2.0 times the originally calibrated value of $16.52~\\rm cm^2~g^{-1}$ in the high-latitude diffuse ISM in the MW. This change is still within the range of previous observational and theoretical calculations. Both changes in DTM and in emissivity are possible according to our current knowledge.\n\nWhen comparing with previous DTM studies, our DTM is twice as large as the \\citet{DRAINE14} results in the same metallicity region, but our DTM is consistent with their DTM extrapolated to the region with similar $\\rm f_{H_2}$. Comparing with \\citet{REMY-RUYER14}, our DTM has a slope consistent with their power-law fitting slope. Unfortunately, we do not have enough low-metallicity data to compare with their broken power law. When comparing with known depletion relations from the MW and the amount of available metals in the central 5 kpc of M101, our DTM suggests essentially all available heavy elements are in dust, which is consistent with the ${\\rm F_\\star}={\\rm inf}$ line from extrapolating the \\citet{JENKINS09} calculations, and larger than found in most previous studies. 
Our DTM results in the lower metallicity region would fall between ${\\rm F_\\star}=1$ and ${\\rm F_\\star}=0$ in the MW. This suggests that even in the lowest metallicity regime of our study, we have not yet probed conditions where the dust life cycle differs in major ways from that in the Milky Way.\n\nDuring the fitting process, we found that the fitting results from the likelihood calculated with a multi-dimensional Gaussian distribution and a complete covariance matrix are sensitive to the choice of model and covariance matrix. Therefore, it is important to be self-consistent between calibration and fitting processes. It is also important to note the covariance matrix adopted when comparing fitting results across studies because the fitting results could change by 70\\% with different covariance matrices.\n\n\\acknowledgments\nWe thank the referee for useful comments that helped to improve the quality of the manuscript. We gratefully acknowledge the hard work of the KINGFISH, THINGS, HERACLES, LVL, and CHAOS teams and thank them for making their data publicly available. We acknowledge the usage of the HyperLeda database (http:\/\/leda.univ-lyon1.fr). IC thanks K. Gordon for helpful conversations regarding calibration and fitting. IC thanks Y.-C. Chen for helpful conversations. The work of KS, IC, AKL, DU and JC is supported by National Science Foundation grant No. 1615728 and NASA ADAP grants NNX16AF48G and NNX17AF39G. The work of AKL and DU is partially supported by the National Science Foundation under Grants No. 1615105, 1615109, and 1653300.\n\nThis work uses observations made with \\textit{Herschel}. \\textit{Herschel} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. 
\nPACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI\/OAA\/OAP\/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA\/CNES (France), DLR (Germany), ASI\/INAF (Italy), and CICYT\/MCYT (Spain). \nSPIRE has been developed by a consortium of institutes led by Cardiff University (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC, UKSA (UK); and NASA (USA).\n\nThis work uses observations based on the National Radio Astronomy Observatory (NRAO) Karl G. Jansky Very Large Array. The NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.\nThis work uses observations based on HERA on the IRAM 30-m telescope. IRAM is supported by CNRS\/INSU (France), the MPG (Germany) and the IGN (Spain). \nThis work uses observations made with the \\textit{Spitzer} Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. \nThis research has made use of NASA's Astrophysics Data System. 
This research has made use of the NASA\/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.\n\n\\vspace{5mm}\n\\facilities{Herschel(PACS and SPIRE), VLA, GALEX, IRAM(HERA), Spitzer(MIPS and IRAC), LBT(MODS)}\n\\software{astropy \\citep{ASTROPY13},\n matplotlib \\citep{HUNTER07},\n numpy \\& scipy \\citep{VANDERWALT11},\n pandas \\citep{MCKINNEY10},\n voronoi\\_2d\\_binning \\citep{CAPPELLARI03},\n corner \\citep{Foreman-Mackey16}, \n Scanamorphos \\citep[v16.9 and v17.0;][]{ROUSSEL13}, \n HIPE \\citep[vspire-8.0.3287;][]{OTT10}}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{}\n\\vspace{-1cm}\n\n\n\n\n\\footnotetext{\\textit{$^{a}$~Laboratory of Nonlinear Chemical Dynamics, Institute of Chemistry, ELTE E\\\"otv\\\"os Lor\\'and University, Budapest, Hungary.}}\n\\footnotetext{\\textit{$^{b}$~Budapest University of Technology and Economics, Department of Analysis, Budapest, Hungary. Fax: +361 463 3172; Tel: +361 463 2314; E-mail: jtoth@math.bme.hu}}\n\\footnotetext{\\textit{$^{c}$~Chemical Kinetics Laboratory, Institute of Chemistry, ELTE E\\\"otv\\\"os Lor\\'and University, Budapest, Hungary.}}\n\n\\footnotetext{\\dag~Electronic Supplementary Information (ESI) available: [details of any supplementary information available should be included here]. See DOI: 10.1039\/cXCP00000x\/}\n\n\\footnotetext{\\ddag~Based on the talk given at the 2nd International Conference on Reaction Kinetics, Mechanisms and Catalysis. 20--22 May 2021, Budapest, Hungary}\n\n\\section{Introduction} \nThe concept of reaction extent (most often denoted by \\(\\xi\\)) is more than 100 years old \\cite{dedondervanrysselberghe}. \nIts importance is emphasized by the mere fact that it has been included in the IUPAC Green Book\\cite{millscvitashomankallaykuchitsu} (see page 43). 
\nTwo definitions are given that are equivalent in the very simple special case treated there, a\nsingle reaction step:\n\\begin{equation}\\label{eq:cm}\n\\ce{$\\sum_{m=1}^M\\alpha_{m}$X($m$) = $\\sum_{m=1}^M\\beta_{m}$X($m$)}; \n\\end{equation}\nwhere \\(M\\) is the number of chemical species \\ce{X(1)}, \\ce{X(2)}, \\dots, \\ce{X($M$)};\nand the integers \\(\\alpha_m\\) and \\(\\beta_m\\) are the corresponding stoichiometric coefficients of the reactant and product species, respectively.\nThe first definition is:\n\\begin{equation}\\label{eq:iupac1def}\n n_{\\ce{X($m$)}}=n^0_{\\ce{X($m$)}}+ \\nu_{m}\\xi, \n\\end{equation}\nwhere $n_{\\ce{X($m$)}}$ and $n^0_{\\ce{X($m$)}}$ are the actual and initial quantities (number of moles) of the species \\ce{X($m$)}, respectively. The symbol \\(\\nu_m\\) is the generalized stoichiometric number. It is negative for a reactant and positive for a product species.\nThe second definition is\n\\begin{equation}\\label{eq:iupac2def}\n\\Delta\\xi=\\frac{\\Delta n_{\\ce{X($m$)}}}{\\nu_m}=\n\\frac{n_{\\ce{X($m$)}}-n^0_{\\ce{X($m$)}}}{\\nu_m}.\n\\end{equation}\nA slightly different version is given in Ref.\\cite{goldloeningmcnaughtsehmi} and by the electronic version \\url{https:\/\/goldbook.iupac.org\/terms\/view\/E02283} called the IUPAC Gold Book\\cite{mcnaughtwilkinson}:\n\\begin{equation}\\label{eq:iupac3def}\n\\mathrm{d}\\xi=\\frac{\\mathrm{d} n_{\\ce{X($m$)}}}{\\nu_m}.\n\\end{equation}\nThe above-cited definitions have been summarized in the book by Stepanov et al.\\cite{stepanoverlikinafilippov}. 
The authors also give a good introduction to the methods of linear algebra applied in Reaction Kinetics.\n\nWith an eye to the applicability of the concept in modern formal reaction kinetics (or chemical reaction network theory) \nas presented by Feinberg\\cite{feinbergbook} and T\\'oth et al.\\cite{tothnagypapp}, the following points seem crucial:\n\\begin{enumerate}\n\\item\nStarting from the original definition by De Donder and Van Rysselberghe\\cite{dedondervanrysselberghe}, we extend the definition to an \\emp{arbitrary} number of reaction steps.\n\\item\nWe do not restrict ourselves to reversible steps.\n\\item\nWe do not require linear independence of the reaction steps.\n\\item\nWe do not ``order the steps to one side'', which would hide the difference between steps like \\ce{X -> Y} and \\ce{X + Y -> 2Y}.\n\\item\nWe do not take into account the atomic (or molecular) structure of the species.\n\\item \nWe do not use differentials when introducing the concept\n(cf. p. 61 of Ref.\\cite{truesdell}).\n\\item\nWe shall give an explicit definition more similar to Eq. \\eqref{eq:iupac2def} than to Eq. \\eqref{eq:iupac1def}.\n\\item\nWe take into consideration the volume of the reacting mixture to be able to calculate the number of individual reaction events.\n\\end{enumerate}\n\nThe structure of our paper is as follows. \nSection \\ref{sec:concept} introduces the concept for reaction networks of arbitrary complexity: for any number of reaction steps and species; mass-action kinetics is not assumed. \nAs it is a usual requirement that the reaction extent tend to 1 when ``the reaction tends to its end'', we try to find quantities derived from our reaction extent that have this property in Section \\ref{sec:what}.\nIt will turn out in many examples that the reaction extents do not tend to 1 in any sense. 
We show, however, that they contain quite relevant information about the time evolution of the reactions: they measure (or give) the number of occurrences of the individual reaction events.\nThese examples will also reflect the fact that the reaction events do not cease at equilibrium, and this can be seen without referring to fluctuations.\nTo close our paper, we show applications of the concept\nto more complicated cases: those with multiple stationary states, oscillation, and chaos. \n\n\nIn this part, first, we analyze the classical multi-stationary example\nby Horn and Jackson\\cite{hornjackson}. \nAs to oscillatory reactions, we start with the irreversible Lotka--Volterra reaction, and we also study the reversible Lotka--Volterra reaction in both the detailed balanced and the non-detailed balanced cases. \nOur next oscillatory example will be an experimental system studied by R\\'abai\\cite{rabai}. \nAs a chaotic example, we shall take a slightly modified version of that oscillatory system. 
\nDiscussion of the Conclusions and a list of Notations come last.\nThe proofs of the statements and Theorems are relegated to an Appendix so as to improve the logical flow of the manuscript without getting side-tracked.\nSupporting Information is given in a PDF file; upon request, the corresponding Wolfram Language notebook---the source of the PDF file---will be provided to the interested reader.\n\\section{The concept of reaction extent}\\label{sec:concept}\nStarting from the classical works\\cite{dedondervanrysselberghe,arisprolegomena1,croce}\nand relying on the consensus of the chemists' community as formulated by Laidler\\cite{laidlerglossary} our aim is to present a treatment more general than any of the definitions introduced and applied up to now.\n\\subsection{Motivation and fundamental definitions}\nWe are going to use the following concepts.\n\\subsubsection{Fundamental notations and definitions: The framework.}\nFollowing the books by Feinberg\\cite{feinbergbook} and by T\\'oth et al.\\cite{tothnagypapp} we consider a \\emp{complex chemical reaction}, simply \\emp{reaction}, or \\emp{reaction network} as a set consisting of \\emp{reaction steps} as follows: \n\\begin{equation}\\label{eq:ccr}\n\\left\\{\\ce{$\\sum_{m=1}^M\\alpha_{m,r}$X($m$) -> $\\sum_{m=1}^M\\beta_{m,r}$X($m$)}\\quad (r=1,2,\\dots,R)\\right\\}; \n\\end{equation}\nwhere \n\\begin{enumerate}\n\\item \nthe chemical \\emp{species} are \\ce{X($1$)}, \\ce{X($2$)}, \\dots, \\ce{X($M$)}---take note that their quantities \\(N_{\\ce{X($m$)}}\\) or \\(N_m\\) will be applied interchangeably;\n\\item\nthe \\emp{reaction steps} are numbered from 1 to \\(R;\\) \n\\item\nhere \\(M\\) and \\(R\\) are positive integers;\n\\item\n\\(\\boldsymbol{\\alpha}:=[\\alpha_{m,r}]\\) and \n\\(\\boldsymbol{\\beta}:=[\\beta_{m,r}]\\) are \\(M\\times R\\) matrices of non-negative integer components called \\emp{stoichiometric coefficients}, with the properties that all the species take part in at least one reaction 
step (\\(\\forall m \\exists r: \\beta_{m,r}\\neq\\alpha_{m,r}\\)), and all the reaction steps do have some effect (\\(\\forall r \\exists m: \\beta_{m,r}\\neq\\alpha_{m,r}\\)), and finally\n\\item\n\\(\\boldsymbol{\\gamma}:=\\boldsymbol{\\beta}-\\boldsymbol{\\alpha}\\) is the \\emp{stoichiometric matrix} of \\emp{stoichiometric numbers}.\n\\end{enumerate}\nInstead of Eq. \\eqref{eq:ccr} some authors prefer writing this:\n\\begin{equation}\\label{eq:ccrreduced}\n\\ce{$\\sum_{m=1}^M\\gamma_{m,r}$X($m$) = 0}\\quad (r=1,2,\\dots,R).\n\\end{equation}\nThis formulation immediately excludes reaction steps like \\ce{X + Y -> 2Y} (used e.g. to describe a step in the Lotka--Volterra reaction), or reduces it to \\ce{X -> Y}, \nchanging the stoichiometric coefficients used to formulate mass-action type kinetics. Similarly, an autocatalytic step that may be worth studying, see e.g. pp. 63 and 66 in the book by Aris\\cite{arisintro}, like \\ce{X -> 2X} appears oversimplified as \\ce{0 -> X}. \nAnother possibility is to exclude the \\emp{empty complex}, implying involuntarily that we get rid of the possibility to simply represent in- and outflow with \\ce{0 -> X} and \\ce{X -> 0}, respectively. \nThese last two examples evidently mean mass-creation and \nmass-destruction. \nIf one does not like these one should explicitly say that one is only interested in mass-conserving reactions. \nSometimes mass creation and mass destruction are slightly less obvious than above, \nsee the reaction network\n\\begin{equation*}\n\\ce{X -> Y + U},\\quad \\ce{Z -> X + U},\\quad \\ce{Z -> U},\n\\end{equation*}\nwhich is mass-producing.\n\nIt may happen that one would like to exclude reaction steps with more than two particles on the left side, such as \n\\[\n\\ce{2MnO4- + 6H+ + 5H2C2O4 = 2Mn^2+ + 8H2O + 10CO2}. \n\\]\nSuch steps do occur e.g. on page 1236 of Kov\\'acs et al.\\cite{kovacsvizvaririedeltoth} when dealing with \\emp{overall reactions}. 
The theory and applications of decomposition of overall reactions into elementary steps\cite{kovacsvizvaririedeltoth,pappvizvari} would have been impossible without the framework of formal reaction kinetics. Someone may be interested in complex chemical reactions consisting of reversible steps only. Then, they have to write down all the forward and the corresponding backward reaction steps.\n\nTaking into consideration restrictions of the above kind usually \ndoes not make the mathematical treatment easier. \nSometimes it takes hard work to figure out how they can be checked, \nas in the case of mass conservation of models containing species without atomic structure,\cite{deaktothvizvari,tothnagypapp} or in relation to the existence of oscillatory reactions.\cite{potalotka}\nTo sum up: an author has the right to make any restriction \nthought to be chemically important, but these restrictions should be declared at the outset. \nFinally, we mention our main assumption: all the steps in Eq. \eqref{eq:ccr} are really present, i.e. they proceed with a positive rate whenever the species on the left side are present.\n\n\nWe now provide a simple example to ease understanding.\n\subsubsection{A simple example.}\nLet us take an example that may be deemed chemically oversimplified, yet not entirely trivial, and still simple enough that we do not get lost in the details. \nAssume that water formation follows the reversible reaction step \n\ce{2H2 + O2 <=> 2H2O}.\nThis means that we do not take into consideration either the atomic structure of the species, or the realistic details of water formation.\nLet the forward step be represented in a more abstract way: \ce{2X + Y -> 2Z}.\n\nThe number of species, denoted as above by \(M\), is 3, and the number of (irreversible) reaction steps, denoted as above by \(R\), is 1. 
\nThe \\(3\\times1\\) stoichiometric matrix \\(\\boldsymbol{\\gamma}\\) consisting of the stoichiometric numbers is:\n\\(\\begin{bmatrix}-2\\\\-1\\\\2\\end{bmatrix}\\). \nIn case this step occurs five times, the vector of the \n\\emp{number of individual species}\nwill change as follows:\n\\begin{equation*}\n\\begin{bmatrix}\nN_{\\ce{X}}-N_{\\ce{X}}^0\\\\\nN_{\\ce{Y}}-N_{\\ce{Y}}^0\\\\\nN_{\\ce{Z}}-N_{\\ce{Z}}^0\n\\end{bmatrix}=\n5\\begin{bmatrix}\n-2\\\\\n-1\\\\\n2\n\\end{bmatrix},\n\\end{equation*}\nwhere \\(N_{\\ce{X}}^0\\) is the number of molecules of species \\ce{X} at the beginning, and \\(N_{\\ce{X}}\\) is the number of molecules of species \\ce{X} after five reaction events, and so on. \nIf one considers the reversible reaction \n\\ce{2X + Y <=> 2Z}, and assumes that the backward reaction step takes place three times then the total change is\n\\begin{equation}\n\\begin{bmatrix}\nN_{\\ce{X}}\\\\mathbb{N}_{\\ce{Y}}\\\\mathbb{N}_{\\ce{Z}}\n\\end{bmatrix}-\n\\begin{bmatrix}\nN_{\\ce{X}}^0\\\\mathbb{N}_{\\ce{Y}}^0\\\\mathbb{N}_{\\ce{Z}}^0\n\\end{bmatrix}=\n5\\begin{bmatrix}\n-2\\\\-1\\\\2\n\\end{bmatrix}+\n3\\begin{bmatrix}\n2\\\\1\\\\-2\n\\end{bmatrix}.\\label{eq:spec}\n\\end{equation}\nNote that both the number of molecules and the number of the occurrence of reaction events are positive integers. \n\nEq. (2.2) of Ref.\\cite{kurtz} is of the same form as our Eq. \\eqref{eq:spec}. Kurtz is interested mainly in reversible and detailed balanced complex chemical reactions, and, more importantly, in the relationship of their deterministic and stochastic models. This is the reason why he formulates his Eq. (2.2) for the slightly restricted case only. As to the relationship between discrete and continuous descriptions, we follow here more or less Kurtz\\cite{kurtz} and T\\'oth et al.\\cite{tothnagypapp} We cannot rely on a discrete state deterministic model of reaction kinetics---that would be desirable---because such a model does not exist as far as we know. 
\n\n\\subsubsection{The general case.}\nBefore providing general definitions, we mention that Dumon et al.\\cite{dumonlichanotpoquet} formulated a series of requirements that---according to them---should be obeyed by a well-defined reaction extent.\nUnfortunately, we are unable to accept most of these requirements. \nLet us mention only one: the reaction extent should be independent of the choice of stoichiometric coefficients (invariant under multiplication), i.e. it should have the same value for the reaction \\ce{2H2 + O2 -> 2 H2O} and for the reaction \\ce{H2 + $\\frac{1}{2}$O2 -> H2O}. Our point of view is that the reaction extent is strongly connected to kinetics, and it is not a tool to describe stoichiometry as some other authors\\cite{garst} also think. The only requirement that we accept will be mentioned later, in the discussion of Definition \\ref{def:extent}.\n\nWe assume throughout that the volume (\\(V\\)) is constant, and\none can write the generalized form of Eq. \\eqref{eq:spec} as\n\\begin{equation*}\n\\begin{bmatrix}\nN_{\\ce{1}}\\\\\nN_{\\ce{2}}\\\\\n\\dots\\\\\nN_{\\ce{M}}\n\\end{bmatrix}-\\begin{bmatrix}\nN_{\\ce{1}}^0\\\\\nN_{\\ce{2}}^0\\\\\n\\dots\\\\\nN_{\\ce{M}}^0\n\\end{bmatrix}=\n\\sum_{r=1}^{R}\n\\begin{bmatrix}\n\\gamma_{1,r}\\\\\n\\gamma_{2,r}\\\\\n\\dots\\\\\n\\gamma_{M,r}\n\\end{bmatrix}W_r, \n\\end{equation*}\nor shortly\n\\begin{equation*}\n\\mathbf{N}-\\mathbf{N}^0=\\boldsymbol{\\gamma} \\mathbf{W},\n\\end{equation*}\nwhere component \\(W_r\\) of the vector \\(\\mathbf{W}=\\begin{bmatrix}\nW_1&W_2&\\dots&W_R\n\\end{bmatrix}^{\\top}\\) gives the number of occurrences of the \\(r^{\\mathrm{th}}\\) reaction step. \nNote that we do not speak about \\emp{infinitesimal} changes. 
\n\nWith a slight abuse of notation, let \(\mathbf{W}(t)\), the vector of the numbers of occurrences of reaction events, be a step function on the interval \([0,t]\).\nThen:\n\begin{equation*}\n\mathbf{N}(t)-\mathbf{N}^0=\boldsymbol{\gamma} \mathbf{W}(t),\n\end{equation*}\nor turning to moles\n\begin{equation}\label{def:moles}\n\mathbf{n}(t)-\mathbf{n}^0=\frac{\mathbf{N}(t)-\mathbf{N}^0}{L}=\boldsymbol{\gamma} \frac{\mathbf{W}(t)}{L}=\boldsymbol{\gamma}\boldsymbol{\xi}(t),\n\end{equation}\nwhere \(L\) is the \emp{Avogadro constant}, having the unit \({\,\mathrm{mol}}^{-1}\), and\n\begin{equation*}\n\mathbf{n}(t):=\frac{\mathbf{N}(t)}{L},\quad\n\mathbf{n}^0:=\frac{\mathbf{N}^0}{L},\quad\n\boldsymbol{\xi}(t):=\frac{\mathbf{W}(t)}{L}.\n\end{equation*}\nHere we had to choose the less often used notation \(L\) (\url{https:\/\/goldbook.iupac.org\/terms\/view\/A00543}) to avoid confusion with other notations.\n\nThe relationship \eqref{def:moles} can be expressed in concentrations as\n\begin{equation}\n\mathbf{c}(t)-\mathbf{c}^0=\frac{\mathbf{n}(t)-\mathbf{n}^0}{V}=\frac{\boldsymbol{\gamma}}{V}\boldsymbol{\xi}(t),\label{eq:implicit}\n\end{equation}\nwhere \(V\in\mathbb{R}^+\), the volume of the reaction vessel, is assumed to be constant, \(\mathbf{c}(t):=\frac{\mathbf{n}(t)}{V}\) and\n\(\mathbf{c}^0:=\frac{\mathbf{n}^0}{V}\). \nThe component \(c_{\ce{X($m$)}}\) or \(c_m\) of \(\mathbf{c}\) is traditionally denoted in chemical textbooks as \([\ce{X($m$)}],\) see e.g. Section 1.2 of Ref.\cite{pillingseakins}.\n\nThe concentration in Eq. \eqref{eq:implicit} is again a step function; however, if the number of particles (molecules, radicals, electrons, etc.) is very large, as very often it is, it may be considered to be a continuous, even differentiable function. 
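Eqs. \eqref{def:moles} and \eqref{eq:implicit} involve nothing beyond division by \(L\) and \(V\); the sketch below (an illustration of ours, with an arbitrarily chosen volume and number of events) converts a reaction-event count into an extent and into concentration changes for \ce{2X + Y -> 2Z}.

```python
# n - n^0 = gamma * W / L  and  c - c^0 = (gamma / V) * xi
L = 6.02214076e23            # Avogadro constant / mol^-1
V = 2.0                      # volume / dm^3 (assumed constant)
gamma = [-2.0, -1.0, 2.0]    # stoichiometric column of 2X + Y -> 2Z
W = 6.02214076e23            # number of forward events (exactly 1 mol)

xi = W / L                           # reaction extent / mol
dc = [g * xi / V for g in gamma]     # concentration changes / (mol dm^-3)
print(xi, dc)                        # 1.0 [-1.0, -0.5, 1.0]
```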
Remember that the components of \(\boldsymbol{\xi}(t)\) have the dimension of the amount of substance, measured in moles.\n\nLet us now give a general, formal, and \emp{explicit} definition of reaction extent valid \emp{for an arbitrary number of species and reaction steps, and not restricted to mass action type kinetics}. \n(A few qualitative---mainly technical---restrictions are usually imposed on the function \n\(\mathbf{rate}\)\cite{feinbergbook,tothnagypapp,volperthudyaev};\nhere we mention only continuous differentiability.)\nWe start by rewriting the induced kinetic differential equation \begin{equation}\label{eq:ikdegen}\n\dot{\mathbf{c}}(t)=\boldsymbol{\gamma}\mathbf{rate}(\mathbf{c}(t))\n\end{equation} \ntogether with the initial condition \(\mathbf{c}(0)=\mathbf{c}^0\) into an (equivalent) integral equation:\n\begin{equation}\label{eq:int}\n\mathbf{c}(t)-\mathbf{c}^0=\boldsymbol{\gamma}\int_0^t\mathbf{rate}(\mathbf{c}(\overline{t}))\n{\;\mathrm{d}\overline{t}}.\n\end{equation}\n\nThe component \(rate_r\) of the vector \(\mathbf{rate}\) provides the reaction rate of the \(r^{\mathrm{th}}\) reaction step.\nNote that in the mass action case Eq. \eqref{eq:ikdegen} specializes into\n\begin{equation}\label{eq:ikdemass}\n\dot{\mathbf{c}}=\boldsymbol{\gamma}\mathbf{k}\odot\mathbf{c}^{\boldsymbol{\alpha}} \n\end{equation} \nor, in coordinates,\n\begin{equation*}\n\dot{c}_m(t)=\sum_{r=1}^{R}\gamma_{m,r}k_r\prod_{p=1}^{M}c_p^{\alpha_{p,r}}\quad(m=1,2,\cdots,M),\n\end{equation*}\nwhere \(\mathbf{k}\) is the vector of (positive) reaction rate coefficients \(k_r.\)\n(We prefer using the expression \emp{reaction rate coefficients} to \emp{reaction rate constants},\nas these numbers do depend on many factors---except species concentrations.)\nIn Eq. \eqref{eq:ikdemass} we used the usual vectorial operations, see e.g. 
Section 13.2 in T\\'oth et al.\\cite{tothnagypapp}\nTheir use in formal reaction kinetics has been initiated by Horn and Jackson.\\cite{hornjackson}\n\nIn accordance with what has been said up to now, we can introduce the explicit definition of reaction extent by combining Eqs. \\eqref{eq:implicit} and \\eqref{eq:int}.\n\n\\begin{definition}\\label{def:extent}\nThe \\emp{reaction extent}\nof a complex chemical reaction or reaction network defined by Eq. \\eqref{eq:ccr} is the scalar variable, vector-valued function given by the formula\n\\begin{equation}\\label{eq:specre}\n\\boxed{\n\\boldsymbol{\\xi}(t):=V\\int_0^t\n\\mathbf{rate}(\\mathbf{c}(\\overline{t}))\n{\\;\\mathrm{d}\\overline{t}}}.\n\\end{equation}\nIts time derivative \\(\\dot{\\boldsymbol{\\xi}}(t)=V\\mathbf{rate}(\\mathbf{c}(t))\\) is usually called the \\emp{rate of conversion} or \\emp{reaction flux}\\cite{polettiniesposito}.\n\\end{definition}\nNote that Eq. \\eqref{eq:specre} shows that the reaction extent, in general, depends on the whole history\n(past and present) of the vector of concentrations, as if it had a \\emp{memory}.\\label{page:memory}\n\nDefinition \\ref{def:extent} of the reaction extent has been derived from the number of reaction events in order to reveal its connection to changes in the concentrations. 
\nAssuming here also that \(V\) is constant, \none can formulate the following trivial (equivalent) consequences of the definition:\n\begin{equation}\label{eq:trivicons}\n\dot{\mathbf{n}}=\boldsymbol{\gamma}\dot{\boldsymbol{\xi}},\quad\dot{\mathbf{c}}=\frac{1}{V}\boldsymbol{\gamma}\dot{\boldsymbol{\xi}},\quad\n\mathbf{c}=\mathbf{c}^0+\boldsymbol{\gamma}\frac{\boldsymbol{\xi}}{V}\n\end{equation}\nmentioned also by Laidler\cite{laidlerglossary},\nsometimes as definitions, sometimes as statements.\n\nNote that neither the rate of the reaction \n\(\mathbf{rate}(\mathbf{c}(t))=\frac{\dot{\boldsymbol{\xi}}(t)}{V},\)\nnor the reaction extent \({\boldsymbol{\xi}},\) nor the rate of conversion \(\dot{\boldsymbol{\xi}}\) depends on the stoichiometric matrix \(\boldsymbol{\gamma}\); thus this requirement, one of those formulated by Dumon et al.\cite{dumonlichanotpoquet},\nis fulfilled.\n\nWhat is wrong with the almost ubiquitous implicit ``definition'' Eq. \eqref{eq:implicit}? \nWe give an example to illustrate this.\n\begin{example}\nConsider the reaction steps\n\begin{equation*}\n\ce{X ->[$k_1$] Y}, \quad\ce{X + Y ->[$k_2$] 2Y},\n\end{equation*}\nexpressing the fact that \ce{X} is transformed into \ce{Y} directly and also via autocatalysis.\nAlthough the reaction steps \n\begin{equation*}\n\ce{X ->[$k_1$] Y + P},\quad \ce{X + Y ->[$k_2$] 2Y + P}\n\end{equation*} with the external species \ce{P} are a more realistic description of genuine chemical reactions,\ne.g. the acid autocatalysis in ester hydrolysis\cite{ostwald,bansagitaylor}, they lead to the same kinetic differential equations for \ce{X} and \ce{Y}. Therefore, we shall analyze the simpler scheme.\nNow the stoichiometric matrix \(\boldsymbol{\gamma}\) is as follows:\n\begin{equation*}\n\boldsymbol{\gamma}=\begin{bmatrix}\n-1&-1\\\n1&1\n\end{bmatrix}.\n\end{equation*}\nThen Eq. 
\eqref{eq:int} specializes into \n\begin{align*}\nc_{\ce{X}}(t)-c_{\ce{X}}(0)&=\n-\int_{0}^{t}k_1c_{\ce{X}}(\overline{t}){\;\mathrm{d}\overline{t}}\n-\int_{0}^{t}k_2c_{\ce{X}}(\overline{t})c_{\ce{Y}}(\overline{t}){\;\mathrm{d}\overline{t}}\n=\n-\frac{\xi_1(t)+\xi_2(t)}{V}\\\nc_{\ce{Y}}(t)-c_{\ce{Y}}(0)&=\n\int_{0}^{t}k_1c_{\ce{X}}(\overline{t}){\;\mathrm{d}\overline{t}}+\n\int_{0}^{t}k_2c_{\ce{X}}(\overline{t})c_{\ce{Y}}(\overline{t}){\;\mathrm{d}\overline{t}}\n=\n\frac{\xi_1(t)+\xi_2(t)}{V}.\n\end{align*}\nThese two relations do not determine \(\xi_1\) and \(\xi_2\) individually but only their sum (even if one utilizes \(c_{\ce{Y}}(t)=c_{\ce{X}}(0)+c_{\ce{Y}}(0)-c_{\ce{X}}(t)\)).\nThe problem originates from the fact that the reaction steps are not linearly independent, as reflected in the singularity of the matrix \(\boldsymbol{\gamma}.\)\n\end{example}\n\n\nIf the reaction steps of a complex chemical reaction are independent, the situation is better.\n\begin{example}\label{ex:indep}\nIn some special cases, there is a way of making the ``definition'' Eq. \eqref{eq:implicit} into a real, explicit definition.\nAssume that \(R\le M\), and that the stoichiometric matrix \(\boldsymbol{\gamma}\) has full rank, i.e. the reaction steps are independent. \nThen, one can rewrite Eq. \eqref{eq:implicit} in two steps as follows:\n\begin{align}\n\boldsymbol{\gamma}^{\top}(\mathbf{c}(t)-\mathbf{c}^0)&=\frac{1}{V}\boldsymbol{\gamma}^{\top}\boldsymbol{\gamma}\boldsymbol{\xi}(t)\nonumber\\\n\boldsymbol{\xi}(t)&=V(\boldsymbol{\gamma}^{\top}\boldsymbol{\gamma})^{-1}\boldsymbol{\gamma}^{\top}(\mathbf{c}(t)-\mathbf{c}^0).\label{eq:memoryless}\n\end{align}\nNow one can accept Eq. \eqref{eq:memoryless} as a definition for the reaction extent.\nNevertheless, in this special case \nEq. 
\eqref{eq:int} \nimplies\n\begin{equation*}\n\boldsymbol{\gamma}^{\top}(\mathbf{c}(t)-\mathbf{c}^0)=\boldsymbol{\gamma}^{\top}\boldsymbol{\gamma}\int_0^t\mathbf{rate}(\mathbf{c}(\overline{t})){\;\mathrm{d}\overline{t}}\n\end{equation*}\nand\n\begin{equation*}\n(\boldsymbol{\gamma}^{\top}\boldsymbol{\gamma})^{-1}\boldsymbol{\gamma}^{\top}(\mathbf{c}(t)-\mathbf{c}^0)=\n\frac{1}{V}\boldsymbol{\xi}(t)=\n\int_0^t\mathbf{rate}(\mathbf{c}(\overline{t})){\;\mathrm{d}\overline{t}},\n\end{equation*}\nthus this definition is the same as the one in Eq. \eqref{eq:specre}. \nThis derivation can always be done if \(R=1\), that is, in a trivial, not particularly interesting case.\nUnfortunately, the case \(R\le M\) does not happen very often. On the contrary, for example, in\ncase of combustion reactions, Law's law\cite{law} (see page 11) states that \(R\approx 5 M.\)\n\nNote also that Eq. \eqref{eq:memoryless} shows the following: in these cases, i.e. when the stoichiometric matrix has full rank---as opposed to the general case, see page \pageref{page:memory}---the reaction extent does not depend on the whole history of the concentration vector, only on its instantaneous value.\n\end{example}\nLet us make a trivial remark on the independence of reaction steps. \nIf the complex chemical reaction consists of a single irreversible step, then the reaction steps(!) are independent. \nIf any of the reaction steps are reversible, then the reaction steps are not independent. \n\subsection{Properties of the reaction extent}\nThe usual assumptions \label{pg:assump} on the vector-valued function \(\mathbf{rate}\) are as follows, see Refs.\cite{tothnagypapp,volperthudyaev}.\n\begin{enumerate}\n\item\nAll of its components are continuously differentiable functions defined on \(\mathbb{R}^M\) taking only non-negative values. This is usual, e.g. 
in the case of mass action kinetics, but---with some restrictions---also in the case when the reaction rates are rational functions as in the case of Michaelis--Menten or Holling type kinetics, see e.g. Refs.\cite{polczkulcsarszederkenyi,polczpeniszederkenyi,kisstoth,laidlerglossary}\n\item\nThe value of \(rate_r(\mathbf{c})\) is zero if and only if some of the species needed for the \(r^{\mathrm{th}}\) reaction step is missing, i.e. for some \(m: \alpha_{m,r}>0\) and \(c_m=0\) (see p. 613, Condition 1 in Ref.\cite{volperthudyaev}). \nWe shall say in this case that reaction step \(r\) \emp{cannot start} from the concentration vector \(\mathbf{c}.\)\n\end{enumerate}\nThe second assumption implies---even in the general case, i.e. without restriction to mass action type kinetics---that \(rate_r(\mathbf{c})>0\) if all the necessary species (\emp{reactants}, see below) are present: \(\alpha_{m,r}>0\Longrightarrow c_m>0.\)\n\nLet us sum up the relevant qualitative characteristics of the reaction extent.\n(Remember that the proof can be found in the Appendix.)\n\begin{theorem}\label{thm:basics}\n\begin{enumerate}\n\item[\nonumber]\n\item\nThe domain of the function \(t\mapsto\boldsymbol{\xi}(t)\) is the same as that of \(\mathbf{c}\).\n\item\n\(\mathbf{c}\in\mathcal{C}^2(J,\mathbb{R}^M)\) and \(\boldsymbol{\xi}\in\mathcal{C}^2(J,\mathbb{R}^R)\) with some open interval \(J\subset\mathbb{R}\) such that \(0\in J.\)\n \item\n \(\boldsymbol{\xi}\) obeys the following initial value problem:\n\begin{equation}\label{eq:rmdiffegy}\n\dot{\boldsymbol{\xi}}(t)=V\mathbf{rate}(\mathbf{c}^0+\frac{1}{V}\boldsymbol{\gamma}\boldsymbol{\xi}(t)),\quad \boldsymbol{\xi}(0)=\boldsymbol{0}.\n\end{equation}\n\item\nAt the beginning, the velocity vector of the reaction extent (also called the \emp{rate of conversion}) points into the closed first orthant, and this property is kept for all times in the domain of the function 
\\(t\\mapsto\\boldsymbol{\\xi}(t).\\) \n\\item\nThe components of \\(\\boldsymbol{\\xi}\\) are either positive, strictly monotonously increasing functions or constant zero. \nIf for some positive time \\(t\\) we find that \\(\\xi_r(t)=0\\) then, obviously, the reaction step \\(r\\) did not start at all at the beginning.\n\\end{enumerate}\n\\end{theorem}\n\n\n\nLet us make a few remarks:\n\\begin{itemize}\n\\item \nThe last property (positivity) mentioned in the Theorem can be realized with \\(\\lim_{t\\to+\\infty}\\xi(t)=+\\infty\\)\n(the simplest example for this being \\(\\ce{X -> 2X}, c_{\\ce{X}}^0>0\\)), or with a finite positive value of \\(\\lim_{t\\to+\\infty}\\xi(t),\\) see the example \\ce{X -> Y -> Z} below.\n\\item\nEq. \\eqref{eq:rmdiffegy} shows that we would have got simpler formulas if we used\n\\(\\frac{\\xi(t)}{V}\\) as proposed by Aris\\cite{arisintro} on p. 44, \nbut this form is valid only if \\(V\\) is constant. \n\\item\nIn the mass action case both \\(\\mathbf{c}\\) and \\(\\boldsymbol{\\xi}\\) are infinitely many times differentiable.\n\\item\nIf one uses a kinetics different from the mass action type\nnot fulfilling assumptions 1 and 2 on page \\pageref{pg:assump}, \nor---as P\\'ota\\cite{potarabai} has shown---if one applies an approximation, \nthen it may happen that some of the initially positive concentrations turn to zero.\n\\end{itemize}\n\nIn order to proceed, we need to make a technical remark on the figures shown hereinafter. We label the first axis (usually: horizontal) in the figures with \\(t\\mathrm{\/s}\\), where \\(\\mathrm{s}\\) is the time unit second. Labels of other axes are formed in a similar way. With this procedure we want to emphasize that the figures show the relationship between pure numbers and not between physical quantities. \n\nThe condition in part 5 of Theorem \\ref{thm:basics} is only necessary but not sufficient as the example below shows. 
\n\\begin{example}\\label{ex:convexconcave}\nLet us start the consecutive reaction \\ce{X ->[$k_1$] Y ->[$k_2$] Z} from the vector of the initial concentrations:\n\\(\\begin{bmatrix}\nc^0_{\\ce{X}}&0&0\n\\end{bmatrix}^{\\top}\\), and suppose \\(k_1\\neq k_2.\\) \nAlthough, the second step cannot start at the beginning, yet the second reaction extent is positive for all positive times as the solution of the evolution equations\n\\begin{equation}\\label{eq:velo}\n\\dot{\\xi}_1=Vk_1\\left(c^0_{\\ce{X}}-\\frac{\\xi_1}{V}\\right),\\quad\n\\dot{\\xi}_2=Vk_2\\left(0+\\frac{\\xi_1-\\xi_2}{V}\\right)\n\\end{equation}\nare as follows\n\\begin{align*}\n&\\xi_1(t)=Vc^0_{\\ce{X}}(1-e^{-k_1t}),\\\\\n&\\xi_2(t)=\\frac{Vc^0_{\\ce{X}}}{k_2-k_1}\\left(k_2(1-e^{-k_1t})-k_1(1-e^{-k_2t})\\right).\n\\end{align*}\nPositivity also follows without any calculations from the fact that the velocity vector of the differential equations in \\eqref{eq:velo} point inward, into the interior of the first quadrant, or using the fact that Eqs. \\eqref{eq:velo} are also kinetic type differential equations.\n\nNote that \\(\\lim_{t\\to+\\infty}\\xi_1(t)=\\lim_{t\\to+\\infty}\\xi_2(t)=Vc^0_{\\ce{X}}.\\)\nIt means that the number of occurrences of the reaction events for both reactions, and thus the reaction extents, are exactly the same at the end of the whole process. \nMoreover, it does not depend on the reaction rate coefficients.\n\nEasy calculations show the following facts. \nThe function \\(\\xi_2\\,\/\\,\\mathrm{mol}\\) in Fig. \\ref{fig:convexconcave} has an inflection point, because its second derivative is zero at some positive time \\(t_{\\mathrm{infl}}\\) for all choices of the reaction rate coefficients, and the third derivative is not zero at \\(t_{\\mathrm{infl}}\\). \nThe function \\(\\xi_1\\,\/\\,\\mathrm{mol}\\) in Fig. \\ref{fig:convexconcave} is concave from below no matter what the reaction rate coefficients are. 
\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{convexconcave}\n\\caption{Reaction extents in the consecutive reaction \\ce{X ->[$k_1$] Y ->[$k_2$] Z} when \n\\(k_1~=~1~{\\mathrm{s}}^{-1}\\), \n\\(k_2=~2~{\\mathrm{s}}^{-1}\\), \n\\(c_{X}^0=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \n\\(c_{\\ce{Y}}^0=c_{\\ce{Z}}^0=~0~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \n\\(V=~1~\\mathrm{dm}^3.\\) \nThe limit 1 here is only a consequence of the choice of the parameters. \nThe large dot is the inflection point of the curve \\(\\xi_2\\,\/\\,\\mathrm{mol}\\).}\n\\label{fig:convexconcave}\n\\end{figure}\n\\end{example}\n\nTo characterize the convexity of the reaction extents in the general case is an open problem. \nOne should take into consideration that although in the practically interesting cases the number of equations in \\eqref{eq:rmdiffegy} is larger than those in Eq.\n\\eqref{eq:ikdegen}, that is \\(R > M\\), the equations for the reaction extents are of a simpler structure. \n\nMonotonicity mentioned in Theorem \\ref{thm:basics} implies that all the components of the reaction extent do have a finite or infinite limit as \\(t\\) tends to \\(\\sup(J)=:t^*,\\) where \\(J:=\\Dom(\\mathbf{c}),\\)\nand \\(t^*\\) is a finite or infinite time. 
\nIt is an interesting open question: \nwhen does a coordinate of the reaction extent vector tend to infinity?\n\begin{figure}[!ht]\n\centering\n\includegraphics[width=0.9\linewidth]{graphical1}\n\caption{\n\(c_{\ce{X}}^0~=~3.0~\,\mathrm{mol}\;\mathrm{dm}^{-3}\),\n\(c_{\ce{Y}}^0~=~1.0~\,\mathrm{mol}\;\mathrm{dm}^{-3}\),\n\(c_{\ce{Z}}^0~=~1.0~\,\mathrm{mol}\;\mathrm{dm}^{-3}\),\n\(k_1~=~1.0~\mathrm{dm}^6~\,\mathrm{mol}^{-2}~\mathrm{s}^{-1}\),\n\(k_{-1}~=~1.0~\mathrm{dm}^3~\,\mathrm{mol}^{-1}~\mathrm{s}^{-1}\),\n\(V~=~1~\mathrm{dm}^3.\)\nWhile the concentrations tend to, and become very close to, the equilibrium values, the reaction extents tend to infinity in a monotonously increasing way in the reaction: \n\ce{2X + Y <=>[$k_1$][$k_{-1}$] 2Z}.}\n\label{fig:dynamiceq}\n\end{figure}\nAs the \emp{emphatic} closure of this series of remarks, we mention that the strictly monotonous increase of the number of occurrences of reaction events shows that the reaction events never stop, see Fig. \ref{fig:dynamiceq}. This important fact is independent of the form of the kinetics, and it is a property of the deterministic models of reaction kinetics.\nThis sheds light on the meaning of \emp{dynamic equilibrium} as generally taught. \nNote that \emp{no reference to thermodynamics or statistical physics} has been invoked here;\nanalyzing these connections is left to the reader.\n\begin{example}\label{ex:blowup}\nWhen the domain of the function \(t\mapsto\mathbf{c}(t)\) is a proper subset of the non-negative real numbers, \(\boldsymbol{\xi}\) has the same property. \nLet us consider the induced kinetic differential equation\ \(\dot{c}=kc^2\) of the quadratic auto-catalytic reaction \ce{2X ->[k] 3X} with the initial condition \(c(0)=c^0>0.\) Then, \n\(\nc(t)=\frac{c^0}{1-kc^0t}, \left(t\in\left[0,\frac{1}{kc^0}\right[\n\,\subsetneq\,[0,+\infty[\right).\n\) \nNow Eq. 
\eqref{eq:rmdiffegy} specializes into\n\(\dot{\xi}=Vk(c^0+\xi\/V)^2,{\ }\xi(0)=0\)\nhaving the solution\n\(\n\xi(t)=V(c^0)^2\frac{kt}{1-kc^0t}, \left(t\in\left[0,\frac{1}{kc^0}\right[\right),\n\)\nthus \(\xi\) blows up at the same time (\(t^*:=\frac{1}{kc^0}\)) as \(c\) does.\nUp to the blow-up, the reaction event occurs infinitely many times:\n\(\lim_{t\to t^*}\xi(t)=+\infty.\)\nDefinitions and a few statements about blow-up in kinetic differential equations are given in the works by Csikja et al.\cite{csikjapapptoth,csikjatoth}\n\end{example}\n\n\n\section{What is it that tends to 1?}\label{sec:what}\nOur interest up to this point has been the number of occurrences of reaction events. \nHowever, many authors think it is useful and visually attractive that the ``reaction extent tends to 1 when the reaction tends to its end,'' see e.g. Fig. 1 of Glasser\cite{glasser}.\nBorge\cite{borge} and Peckham\cite{peckham} also argue for the interval \([0,1]\).\n\nAnother approach is given by Moretti\cite{moretti}, Dumon et al.\cite{dumonlichanotpoquet} and others via introducing the \emp{reaction advancement ratio} \(\frac{\xi}{\xi_{\max}}\), and stating that this ratio is always between 0 and 1. \nPeckham\cite{peckham} noticed that Atkins\cite{atkins5} (pp. 272--276) shows a figure of the free energy \(G\) of the reacting system versus \(\xi,\) where the first axis is labeled from zero to one. \nHowever, in the next edition\cite{atkins6} (pp. 216--217), the graph has been changed, and it now shows the first axis without 1 as an upper bound.\nBeing loyal to the usual belief\cite{peckham,treptow}, \nwe are looking for quantities (pure numbers) tending to 1 as e.g. \(t\to+\infty\). 
\nScaling might help to find such quantities.\n\subsection{Scaling by the initial concentration: One reaction step}\nNow we are descending from the height of generality by considering a single irreversible reaction step (\(R=1\)), assuming that the kinetics is of the mass action type. Thus, the reaction is\n\begin{equation}\label{eq:onestep}\n\ce{$\sum_{m=1}^M\alpha_m$X($m$) ->[k] $\sum_{m=1}^M\beta_m$X($m$)}.\n\end{equation}\nTherefore, the reaction extent satisfies\n\begin{equation}\label{eq:ximone}\n\dot{\xi}=Vk\left(\begin{bmatrix}\nc_1^0\\c_2^0\\\dots\\c_M^0\n\end{bmatrix}+\n\begin{bmatrix}\n\gamma_1\\\gamma_2\\\dots\\\gamma_M\n\end{bmatrix}\frac{\xi}{V}\right)^{\boldsymbol{\alpha}}\n=\nVk\prod_{m=1}^{M}\left(c_m^0+\gamma_m\frac{\xi}{V}\right)^{\alpha_m},\quad \xi(0)=0;\n\end{equation}\nwith \(\gamma_m:=\beta_m-\alpha_m.\)\n\n\begin{theorem}\label{thm:singlestep}\n\begin{enumerate}\n\item[\nonumber]\n\item \nIf the reaction in Eq. \eqref{eq:onestep} cannot start, then\n\(\xi(t)=0\) for all non-negative real times \(t\):\n\begin{equation}\label{eq:cannot}\n\exists m:(\alpha_m\neq0\ \&\ c_m^0=0)\Longrightarrow\forall t\in\mathbb{R}^+_0:\xi(t)=0.\n\end{equation}\n\item\nIf the reaction in Eq. \eqref{eq:onestep} does start and all the species are produced (i.e. 
for all \\(m: \\gamma_m>0\\)), then \\(\\xi(t)\\)\ntends to infinity (blow-up included):\n\\begin{equation}\\label{eq:produced}\n\\forall m:\\gamma_m:=\\beta_m-\\alpha_m>0\\Longrightarrow\\lim_{t\\to t^*}\\xi(t)=+\\infty,\n\\end{equation} \nwhere \\(t^*:=\\sup(J)\\) with \\(J:=\\Dom(\\xi)\\).\n\\item \nIf some of the species is consumed, that is \\(\\exists m: \\gamma_m<0\\), then \n\\begin{equation}\\label{eq:decreases}\n\\lim_{t\\to +\\infty}\\xi(t)=\\min\\left\\{-\\frac{Vc_m^0}{\\gamma_m};\\gamma_m<0\\right\\}.\n\\end{equation}\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{example}\n\\begin{enumerate}\n\\item [\\nonumber]\n\\item \nReaction \\ce{X ->[$k$] Y} with \\(c^0_{\\ce{X}}=0\\) and \\(c^0_{\\ce{Y}}\\)\narbitrary, illustrates the first case as here \\(\\dot{\\xi}=-k\\xi, \\xi(0)=0\\)\nimplies \\(\\forall t\\in\\mathbb{R}:\\xi(t)=0.\\)\n\\item \nReaction \\ce{X ->[$k$] 2X} with \\(c^0_{\\ce{X}}>0\\) is an illustration for the second case \n\\begin{equation*}\n\\forall t\\in\\mathbb{R}:\\xi(t)=V c^0_{\\ce{X}}(e^{k t}-1 )\n\\end{equation*}\nwith \n\\(\n\\lim_{t\\to+\\infty}\\xi(t)=+\\infty. \n\\)\n\\item \nReaction \\ce{2X ->[$k$] 3X} with \\(c^0_{\\ce{X}}>0\\) (Example \\ref{ex:blowup}) is another illustration for the second case with \n\\begin{equation*}\n\\forall t\\in[-\\infty,\\frac{1}{kc^0_{\\ce{X}}}[ :\\xi(t)=\\frac{k t V (c^0_{\\ce{X}})^2}{1-ktc^0_{\\ce{X}}}\n\\end{equation*} with\n\\(\\lim_{t\\to\\frac{1}{kc^0_{\\ce{X}}}}\\xi(t)=+\\infty.\\)\n\\item \nReaction \\ce{X + Y ->[$k$] 2X}\nis an illustration for the third case \n\\begin{equation*}\n\\forall t\\in\\mathbb{R}:\\xi(t)=\n\\frac{(-1 + e^{k t (c^0_{\\ce{X}} + c^0_{\\ce{Y}})}) V c^0_{\\ce{X}} c^0_{\\ce{Y}}}{(c^0_{\\ce{Y}} + e^{k t (c^0_{\\ce{X}} + c^0_{\\ce{Y}})} c^0_{\\ce{X}})}\n\\end{equation*} \nwith \\(\\lim_{t\\to+\\infty}\\xi(t)=Vc^0_{\\ce{Y}},\\) if \\(c^0_{\\ce{X}},c^0_{\\ce{Y}}\\neq0\\). 
If either \\(c^0_{\\ce{X}}=0,\\) or \\(c^0_{\\ce{Y}}=0,\\)\nthen \\(\\forall t\\in \\mathbb{R}:\\xi(t)=0.\\)\n\\item\nThe example \\ce{X -> Y} with \\(c_{\\ce{X}}^0=0, c_{\\ce{Y}}^0>0\\) shows that\na species (here \\ce{Y}) can have positive concentration for all positive times in a reaction where \"none of the steps\" can start.\n\\end{enumerate}\n\\end{example}\n\nThe table below shows a series of examples illustrating different types of single irreversible reaction steps.\n\n\\begin{table}\n \\caption{Reaction extent for various reaction types}\n \\label{table:cases}\n\\begin{tabular}{lllll}\n\\hline\nStep &\\(\\mathbf{c}^0\\)&\\(\\dot{\\xi}=\\) &\\({\\xi}(t)=\\) &Case\\\\ \n\\hline\\ce{X ->[$k$] 2X} & 0 &\\(k\\xi\\) & 0 &\\eqref{eq:cannot}\\\\\n\\hline\\ce{X ->[$k$] 2X} & 1 &\\(Vk(1+\\xi\/V)\\)& \\(V(e^{kt}-1)\\)&\\eqref{eq:produced}\\\\\n\\hline\\ce{2X ->[$k$] 3X} & 1 &\\(Vk(1+\\xi\/V)^2\\) &\\(\\frac{Vkt}{1-kt}\\) &\\eqref{eq:produced}\\\\\n\\hline\\ce{2X + Y ->[$k$] 2Z}& &\\(Vk(c_{\\ce{X}}^0-2\\xi\/V)^2*\\)&\\\\\n&&\\((c_{\\ce{Y}}^0-\\xi\/V)\\)& &\\eqref{eq:decreases}\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{example}\nHere we analyze the last example of Table \\ref{table:cases}.\nIn the case of the reaction \\ce{2X + Y ->[$k$] 2Z} mimicking water formation one has the following quantities:\n\\(R:=1,M:=3,\\ce{X}:=\\ce{H2}, \\ce{Y}:=\\ce{O2},\\ce{Z}:=\\ce{H2O}.\\)\nFurthermore,\n\\[\n\\boldsymbol{\\alpha}=\n\\begin{bmatrix}\n2\\\\1\\\\0\n\\end{bmatrix}{\\!}{\\!},\\quad\n\\boldsymbol{\\beta}=\\begin{bmatrix}\n0\\\\0\\\\2\n\\end{bmatrix}{\\!}{\\!},\\quad \n\\boldsymbol{\\gamma}=\\begin{bmatrix}\n-2\\\\-1\\\\2\n\\end{bmatrix}{\\!}{\\!}.\n\\]\nThe initial value problem to describe the time evolution of reaction extent 
is\n\\begin{equation}\\label{eq:wateru}\n\\dot{\\xi}=Vk\\left(\n\\begin{bmatrix}\nc_{\\ce{X}}^0\\\\c_{\\ce{Y}}^0\\\\c_{\\ce{Z}}^0\n\\end{bmatrix}+\n\\begin{bmatrix}\n-2\\\\-1\\\\2\n\\end{bmatrix}\\frac{\\xi}{V}\\right)^{\\begin{bmatrix}\n2\\\\1\\\\0\n\\end{bmatrix}}\n=\nVk(c_{\\ce{X}}^0-2\\frac{\\xi}{V})^2(c_{\\ce{Y}}^0-\\frac{\\xi}{V}),\\quad \\xi(0)=0.\n\\end{equation}\nWe can provide only the inverse of the solution to Eq. \\eqref{eq:wateru}. However, one can state that the reaction extent tends strictly monotonically to its limit (independently of the value of the reaction rate coefficient): \\(\\lim_{t\\to+\\infty}\\xi(t)=\\min\\{\\frac{Vc_{\\ce{X}}^0}{2},Vc_{\\ce{Y}}^0\\}.\\)\nDifferent initial conditions lead to different results, see Figs. \\ref{fig:rm11}--\\ref{fig:rm13}. Obviously, the third point of Theorem \\ref{thm:singlestep} is of main practical use here. For this case one has the following statement.\n\\end{example}\n\n\\begin{corollary}\\label{corr:pure}\nIn the third case of Theorem \\ref{thm:singlestep}, \ndividing \\(\\xi(t)\\) by the limiting value \\(Vc_m^0\/(-\\gamma_m)\\) taken from Eq. \\eqref{eq:decreases}, \nwe obtain a (pure) number tending to 1 as \\(t\\) tends to infinity:\n\\fbox{\\(\\lim_{t\\to +\\infty}\n\\frac{\\xi(t)}{Vc_m^0\/(-\\gamma_m)}=1.\\)}\n\\end{corollary}\n\\subsection{Stoichiometric initial condition, excess and deficit}\nBefore studying the above-mentioned figures, we need some definitions in order to avoid the sin of using a concept without having defined it. 
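The limit in the third case of the theorem, and the scaled extent of the corollary, can be checked numerically. A minimal sketch follows (a pure-Python RK4 integrator written for this purpose; the rate coefficient and initial concentrations are illustrative assumptions, not the values used in the figures):

```python
# Sketch: xi' = V*k*(cX0 - 2*xi/V)^2 * (cY0 - xi/V) for 2X + Y -> 2Z.
# Expected limit: min(V*cX0/2, V*cY0); the scaled extent should tend to 1.
# All numbers below are assumed for illustration only.

def rk4(f, x0, t_end, dt):
    """Classical fourth-order Runge-Kutta for a scalar autonomous ODE."""
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += h
    return x

V, k, cX0, cY0 = 1.0, 1.0, 1.2, 1.0                   # assumed data
xidot = lambda xi: V * k * (cX0 - 2.0 * xi / V) ** 2 * (cY0 - xi / V)

xi_star = min(V * cX0 / 2.0, V * cY0)                 # limiting value, 0.6 here
xi_late = rk4(xidot, 0.0, 2000.0, 0.05)
scaled = xi_late / xi_star                            # should approach 1 from below
```

Note that the convergence is only algebraic here (the limiting species X enters with \(\gamma_{\ce{X}}=-2\) and a squared factor), which is why such a long time horizon is needed.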
The concepts of \\emp{stoichiometric initial condition} and \\emp{initial stoichiometric excess} are often used elsewhere but never defined.\n\\begin{definition}\nConsider the induced kinetic differential equation \n\\eqref{eq:ikdegen} of the reaction \\eqref{eq:ccr} with the initial condition\n\\(\\mathbf{c}(0)=\\mathbf{c}^0\\neq\\boldsymbol{0}.\\)\nThis initial condition is said to be a \\emp{stoichiometric initial condition} (and \\(\\mathbf{c}^0\\) is a stoichiometric initial concentration), if for all such\n\\(m=1,2,\\dots,M;{\\ }r=1,2,\\dots,R\\) for which \\(\\gamma_{m,r}<0\\) the ratios \\(\\frac{c_m^0}{-\\gamma_{m,r}}\\)\nare independent of \\(m\\) and \\(r\\). \nIf the ratios are independent of \\(r\\), but for some \n\\(p=1,2,\\dots,M\\) the ratio \\(\\frac{c_p^0}{-\\gamma_{p,r}}\\) is larger than the others, then \\ce{X(p)} is said to be in \\emp{initial stoichiometric excess}, or it is in excess initially.\nIf the ratios are independent of \\(r\\), but for some \n\\(p=1,2,\\dots,M\\) the ratio \\(\\frac{c_p^0}{-\\gamma_{p,r}}\\) is smaller than the others, then \\ce{X(p)} is said to be in \\emp{initial stoichiometric deficit}, or it is in deficit initially.\nThe last notion is mathematically valid, but in such cases, one prefers saying that all the other species are in excess.\nIn combustion theory the expressions \\emp{stoichiometric}, \\emp{fuel lean} and \\emp{fuel rich} are used in the same sense, see page 115 of the book by Tur\\'anyi and Tomlin\\cite{turanyitomlin}.\n\\end{definition}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.7\\linewidth]{p1}\n\\caption{\n\\(c_{\\ce{X}}^0\/2~=~0.7~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}<c_{\\ce{Y}}^0\\): \\ce{Y} is in initial stoichiometric excess in the reaction \\ce{2X + Y -> 2Z}. 
\nThe limiting value is denoted with the dashed line.} \n\\label{fig:rm11}\\end{figure}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.7\\linewidth]{p2}\n\\caption{\n\\(c_{\\ce{X}}^0\/2~=~c_{\\ce{Y}}^0~=~0.6~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(c_{\\ce{Z}}^0~=~3~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \n\\(k~=~1.0~\\mathrm{dm}^6~\\,\\mathrm{mol}^{-2}~\\mathrm{s}^{-1}\\),\n\\(V~=~1~\\mathrm{dm}^3.\\) Stoichiometric initial condition in the reaction \\ce{2X + Y -> 2Z}; slow convergence. The limiting value is denoted with the dashed line.}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.7\\linewidth]{p3}\n\\caption{\n\\(c_{\\ce{X}}^0\/2~=~0.6~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}>c_{\\ce{Y}}^0~=~0.5~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(c_{\\ce{Z}}^0~=~3~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \n\\(k~=~1.0~\\mathrm{dm}^6~\\,\\mathrm{mol}^{-2}~\\mathrm{s}^{-1}\\),\n\\(V~=~1~\\mathrm{dm}^3.\\)\n\\ce{X} is in stoichiometric excess in the reaction \\ce{2X + Y -> 2Z}. \nThe limiting value is denoted with the dashed line.}\n\\label{fig:rm13}\\end{figure}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.3\\linewidth]{q1}\\quad\n\\includegraphics[width=0.3\\linewidth]{q2}\\quad\n\\includegraphics[width=0.3\\linewidth]{q3}\n\\caption{Scaled reaction extents tend to 1. \nThe reaction rate coefficient and the initial data are the same as in Figs. 
\\ref{fig:rm11}--\\ref{fig:rm13}.}\n\\label{fig:rm1skalazott}\n\\end{figure*}\nWe suggest that instead of saying that the scaling factor is some initial concentration as in the third case of Theorem \\ref{thm:singlestep},\none can equally well say that the divisor is the limiting value of the reaction extent, as it is in the cases of Fig. \\ref{fig:rm1skalazott}: \\(Vc_{\\ce{X}}^0\/2, Vc_{\\ce{X}}^0\/2=Vc_{\\ce{Y}}^0,\nVc_{\\ce{Y}}^0,\\) respectively.\nThis result will come in handy below.\n\nUnder stoichiometric initial conditions Peckham\\cite{peckham} gives a definition of \\(\\xi_{\\max}\\) and proposes to use \\(\\frac{\\xi(t)}{\\xi_{\\max}}\\) in extremely special cases. The range of this ratio is \\([0,1].\\)\n\nAt this point it may not be obvious how to generalize Corollary \\ref{corr:pure}. In order to treat more complicated cases we shall choose another way.\n\\subsection{Scaling by the \"maximum\"}\nIn cases when \\(\\xi^*:=\\lim_{t\\to+\\infty}\\xi(t)\\) is finite, \n\\(\\xi^*=\\sup\\{\\xi(t);t\\in\\mathbb{R}\\}\\); thus \n \\(\\xi^*\\) may be identified (mathematically incorrectly) with \\(\\xi_{\\max},\\) and surely \\(\\lim_{t\\to+\\infty}\\frac{\\xi(t)}{\\xi^*}=1.\\)\n That is the procedure applied by most authors\\cite{dumonlichanotpoquet,vandezandevandergrienddekock,moretti}.\n\\subsection{Detailed balanced reactions}\n\\begin{definition}\nThe complex chemical reaction\n\\begin{equation}\\label{eq:revccr}\n\\ce{$\\sum_{m=1}^M\\alpha_{m,r}$X($m$) <=>[$k_r$][$k_{-r}$] $\\sum_{m=1}^M\\beta_{m,r}$X($m$)}\\quad (r=1,2,\\dots,R);\n\\end{equation}\nendowed with mass action kinetics is said to be \\emp{conditionally detailed balanced} at the positive stationary point \\(\\mathbf{c}^*\\) if\n\\begin{equation}\\label{eq:dbgeneralcond}\nk_r(\\mathbf{c}^*)^{\\boldsymbol{\\alpha}_{.,r}}=k_{-r}(\\mathbf{c}^*)^{\\boldsymbol{\\beta}_{.,r}}\n\\end{equation}\nholds. It is \\emp{unconditionally detailed balanced} if\nEq. 
\\eqref{eq:dbgeneralcond} holds for any choice of (positive) reaction rate coefficients.\n\\end{definition}\nNote that all the steps in \\eqref{eq:revccr} are reversible. Furthermore, in such cases the reaction steps are indexed\nby \\(r\\) and \\(-r\\). It is always our choice in which order the forward and backward steps are written, expressing the fact that \"forward\" and \"backward\" have no true physical meaning.\n\\subsubsection{Ratio of two reaction extents.}\nSuppose we have a reversible reaction\n\\begin{equation}\\label{eq:singledb}\n\\ce{$\\sum_{m=1}^M\\alpha_m$X($m$) <=>[$k_1$][$k_{-1}$] $\\sum_{m=1}^M\\beta_m$X($m$)}\n\\end{equation}\nwhich is unconditionally detailed balanced, because the number of forward--backward reaction pairs is 1.\nThen the initial value problem for the reaction extents is as follows.\n\\begin{align*}\n&\\dot{\\xi}_1=Vk_1\\prod_{m=1}^{M}(c_m^0+\\gamma_m(\\xi_1-\\xi_{-1})\/V)^{\\alpha_m},&\\xi_1(0)=0,\\\\\n&\\dot{\\xi}_{-1}=Vk_{-1}\\prod_{m=1}^{M}(c_m^0+\\gamma_m(\\xi_1-\\xi_{-1})\/V)^{\\beta_m},&\\xi_{-1}(0)=0,\n\\end{align*}\nwhere \\(\\gamma_m:=\\beta_m-\\alpha_m.\\)\n\\begin{proposition}\\label{prop:specdb}\nUnder the above conditions, one has\n\\fbox{\\(\\lim_{t \\to +\\infty}\n\\frac{\\xi_1(t)}{\\xi_{-1}(t)}=1.\\)}\n\\end{proposition}\nNote that initially one only knows that it is the \\emp{derivative}s of the reaction extents that have the same value at equilibrium.\n\n\\begin{example}\\label{ex:waterform}\nConsider the reversible reaction\n\\ce{2X + Y <=>[$k_1$][$k_{-1}$] 2Z} for water formation with the data \n\\( k_1~=~1~\\mathrm{dm}^6~\\,\\mathrm{mol}^{-2}~\\mathrm{s}^{-1},\\) \n\\(k_{-1}~=~1~\\mathrm{dm}^3\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1},\\)\n\\(c_{\\ce{Y}}^0~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}, c_{\\ce{Z}}^0~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}.\\)\n\\begin{itemize}\n\\item \nIf \\(c_{\\ce{X}}^0~=~3~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}, \\) then \\ce{X} is in excess initially (a);\n\\item\nif 
\\(c_{\\ce{X}}^0~=~2~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}, \\) then one has a stoichiometric initial condition (b);\n\\item\nif \\(c_{\\ce{X}}^0~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}, \\) or\n\\(c_{\\ce{X}}^0~=~1\/2~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3},\\) then \\ce{Y} is in excess initially (c or d).\n\\end{itemize}\nNote that it is not the excess or deficit that is relevant, see Conjecture \\ref{conj:iniratio} below.\nThe initial rates of the forward and backward reactions are as follows:\n\\begin{itemize}\n\\item \n\\(1\\cdot9\\cdot1 > 1\\cdot1,\\)\n\\item \n\\(1\\cdot4\\cdot1 > 1\\cdot1,\\)\n\\item \n\\(1\\cdot1\\cdot1 = 1\\cdot1,\\)\n\\item \n\\(1\\cdot1\/4\\cdot1 <1\\cdot1.\\)\n\\end{itemize}\nThe results are in accordance with Conjecture \\ref{conj:iniratio} below and can be seen in Figs. \\ref{fig:rm21} and \\ref{fig:rm22}.\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\linewidth]{xexcess1}\\\\\n\\includegraphics[width=0.7\\linewidth]{xexcess2}\\\\\n\\caption{The ratio \\(\\frac{\\xi_{1}(t)}{\\xi_{-1}(t)}\\) is tending to 1 from above: \\ce{X} is in excess at the top figure (case a of Example \\ref{ex:waterform}) and the initial condition is stoichiometric at the bottom figure (case b) in case of the reaction \\ce{2X + Y <=>[$k_1$][$k_{-1}$] 2Z}. \\(V~=~1~\\mathrm{dm}^3,\\) and other data are given in the text.}\n\\label{fig:rm21}\n\\end{figure}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\linewidth]{stoichio}\\\\\n\\includegraphics[width=0.7\\linewidth]{yexcess}\\\\\n\\caption{The ratio \\(\\frac{\\xi_{1}(t)}{\\xi_{-1}(t)}\\) is constant at the top figure (case c of Example \\ref{ex:waterform}) and is tending to 1 from below at the bottom figure (case d) in case of the reaction \\ce{2X + Y <=>[$k_1$][$k_{-1}$] 2Z}. \\ce{Y} is in excess in both cases. 
\\(V~=~1~\\mathrm{dm}^3,\\) and other data are given in the text.}\n\\label{fig:rm22}\n\\end{figure}\n\\end{example}\n\nNow we formulate our experience collected on several models. \nConsider reaction \\eqref{eq:singledb}.\n\\begin{conjecture}\\label{conj:iniratio}\nThe sign of the difference \\(k_{1}(\\mathbf{c}^0)^{\\boldsymbol{\\alpha}}-k_{-1}(\\mathbf{c}^0)^{\\boldsymbol{\\beta}}\\)\nand the sign of \\(\\frac{\\xi_1(t)}{\\xi_{-1}(t)}-1\\) (for all positive \\(t\\))\nare the same.\n\\end{conjecture}\n\n\nConvergence has been proved above.\nThe ratio at \\(t=0\\) is not defined, but the limit of the ratio when \\(t \\to +0\\)\ncan be calculated using the l'Hospital Rule as\n\\begin{equation*}\n\\lim_{t\\to+0}\\frac{\\xi_{1}(t)}{\\xi_{-1}(t)}=\n\\lim_{t\\to+0}\\frac{\\dot{\\xi}_{1}(t)}{\\dot{\\xi}_{-1}(t)}=\n\\frac{k_1}{k_{-1}}(\\mathbf{c}^0)^{\\boldsymbol{\\alpha}-\\boldsymbol{\\beta}},\n\\end{equation*}\nand \\(\\frac{k_1}{k_{-1}}(\\mathbf{c}^0)^{\\boldsymbol{\\alpha}-\\boldsymbol{\\beta}}<1\\) is equivalent to saying that \n\\(k_1(\\mathbf{c}^0)^{\\boldsymbol{\\alpha}}<k_{-1}(\\mathbf{c}^0)^{\\boldsymbol{\\beta}}.\\)\n\\begin{comment}\nConsider the complex chemical reaction\n\\begin{equation}\\label{eq:ccr}\n\\left\\{\\ce{$\\sum_{m=1}^M\\alpha_{m,r}$X($m$) ->[$k_r$] $\\sum_{m=1}^M\\beta_{m,r}$X($m$)}\\quad (r=1,2,\\dots,R)\\right\\} \n\\end{equation}\nwith the usual notations: \n\\(\\alpha_{m,r}\\) and \\(\\beta_{m,r}\\)\nare the stoichiometric coefficients, \\ce{X($m$)} are the species. \nThe induced kinetic differential equation describing the time evolution of the quantities of the species is in general \\begin{equation}\\label{eq:ikdegen}\n\\dot{\\mathbf{c}}(t)=\\boldsymbol{\\gamma}\\mathbf{rate}(\\mathbf{c}(t))\n\\end{equation} \ntogether with the initial condition \\(\\mathbf{c}(0)=\\mathbf{c}^0.\\) The component \\(rate_r\\) of the vector \\(\\mathbf{rate}\\) provides the reaction rate of the \\(r^{\\mathrm{th}}\\) reaction step.\nNote that in the mass action case Eq. 
\\eqref{eq:ikdegen} specializes into\n\\begin{equation}\\label{eq:ikdemass}\n\\dot{\\mathbf{c}}=\\boldsymbol{\\gamma}(\\mathbf{k}\\odot\\mathbf{c}^{\\boldsymbol{\\alpha}}) \n\\end{equation} \nor, in coordinates,\n\\begin{equation*}\n\\dot{c}_m(t)=\\sum_{r=1}^{R}\\gamma_{m,r}k_r\\prod_{p=1}^{M}c_p^{\\alpha_{p,r}}\\quad(m=1,2,\\dots,M),\n\\end{equation*}\nwhere \\(\\mathbf{k}\\) is the vector of (positive) reaction rate coefficients \\(k_r.\\)\n\nThe \\emp{reaction extent}\nof a complex chemical reaction or reaction network defined by Eq. \\eqref{eq:ccr} is the vector-valued function of the scalar time variable given by the formula\n\\begin{equation}\\label{eq:specre}\n\\boxed{\n\\boldsymbol{\\xi}(t):=V\\int_0^t\n\\mathbf{rate}(\\mathbf{c}(\\overline{t}))\n{\\;\\mathrm{d}\\overline{t}}}.\n\\end{equation}\nIts time derivative \\(\\dot{\\boldsymbol{\\xi}}(t)=V\\mathbf{rate}(\\mathbf{c}(t))\\) is usually called the \\emp{rate of conversion}.\nWe shall calculate the reaction extent for a few exotic reactions and see how they reflect the special properties of the reactions.\n\\end{comment}\n\\section{What if the conditions are not fulfilled?}\nIn the first part of the paper, we calculated the reaction extents for the reaction steps of simple reactions, for detailed balanced reactions, for reactions with a kinetic differential equation having an attracting stationary point, etc. \nOur main question in the present part is:\nWhat happens if one takes an exotic reaction that has multiple stationary points, or shows oscillations or even chaos?\n\\subsection{Multistationarity}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.85\\linewidth]{multist1}\n\\caption{The Horn--Jackson reaction network\nwith \\(k_1=k_3=1~\\mathrm{dm}^6~\\,\\mathrm{mol}^{-2}~\\mathrm{s}^{-1}\\) and \\(k_2=k_4=k\\). 
\nThe value of \\(k\\) is varied as described in the text, \nits unit is the same as that of the other rate coefficients.}\n\\label{fig:hj0}\n\\end{figure}\nHorn and Jackson\\cite{hornjackson} (see p. 110) have shown that the complex chemical reaction in Fig. \\ref{fig:hj0} has three (positive) stationary points in every stoichiometric compatibility class if \nthe numerical value of \\(k\\) lies between 0 and \\(\\frac{1}{6}.\\)\nTo be more specific, let us choose \n\\(\nk~=~\\frac{1}{10}~\\mathrm{dm}^6~\\,\\mathrm{mol}^{-2}~\\mathrm{s}^{-1}\\), and \\(c^0_{\\ce{X}}~=~c^0_{\\ce{Y}}~=~\\frac{1}{2}~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}.\\) \nThen, an easy calculation (neglecting the units for simplicity) shows that in the \\emp{stoichiometric compatibility} class\n\\(\\{[c_{\\ce{X}}\\quad c_{\\ce{Y}}]; c_{\\ce{X}}+c_{\\ce{Y}}=1\\}\\)\n(that is, the case when the total concentration is unity) there are three stationary points:\n\\begin{enumerate}\n\\item \nthe stationary point\n\\(\n\\mathbf{c}^*_1:=\\begin{bmatrix}\n\\frac{1}{3 + \\sqrt{3}}&\\frac{3 + \\sqrt{3}}{6}\n\\end{bmatrix}\n\\) is asymptotically stable (i.e. attracting) with the attracting domain \n\\(\\{[c_{\\ce{X}}\\quad c_{\\ce{Y}}]: 0\\le c_{\\ce{X}}<\\frac{1}{2}, c_{\\ce{X}}+c_{\\ce{Y}}=1\\},\n\\) and\n\\item \nthe stationary point \\(\\mathbf{c}^*_2:=[\\frac{1}{2}\\quad\\frac{1}{2}]\\)\nis unstable (i.e. non-attracting), and\n\\item \nthe stationary point\n\\(\n\\mathbf{c}^*_3:=\\begin{bmatrix}\n\\frac{1}{3 - \\sqrt{3}}&\\frac{3 - \\sqrt{3}}{6}\n\\end{bmatrix}\n\\) \nis asymptotically stable (i.e. attracting)\n with the attracting domain \n\\(\n\\{[c_{\\ce{X}}\\quad c_{\\ce{Y}}];\\frac{1}{2}<c_{\\ce{X}}\\le 1, c_{\\ce{X}}+c_{\\ce{Y}}=1\\}.\n\\)\n\\end{enumerate}\n\\begin{figure}\n\\centering\n\\caption{Reaction extents of the steps of the Horn--Jackson reaction network started from two different initial conditions, one in each attracting domain. \nNote that the ranking of the reaction extents is different in the two cases.}\n\\label{fig:hj}\n\\end{figure}\n\\subsection{Oscillation}\nWe shall study here two oscillatory reactions. 
\nFirst comes the often used Lotka--Volterra reaction\\cite{lotka,volterra}, which is not only theoretically interesting: it can also be used to describe the oscillations in cold flames\\cite{frankkamenetskii}; see also Ref.\\cite{frankkamenetskiiPUP}.\nThe second is the experimentally based R\\'abai reaction\\cite{rabai}, aimed at describing pH oscillations. \nOne may say that the Brusselator model\\cite{prigoginelefever} would be a more realistic choice, as it results in limit cycle solutions. However, it has a third-order step that makes the calculations more tedious.\nThe type of calculations shown below would give almost the same kind of results with the Brusselator, too.\n\\subsubsection{The Lotka--Volterra reaction.}\nThe irreversible and reversible cases behave in qualitatively different ways.\n\n\\paragraph{Irreversible case:}\n\nIt is known\\cite{potalotka,schumantoth} that under some mild conditions the only two-species reaction to show oscillations is the (irreversible) Lotka--Volterra reaction \\ce{X ->[$k_1$] 2X}, \\ce{X + Y ->[$k_2$] 2Y}, \\ce{Y ->[$k_3$] 0}. (Cf. also the paper by T\\'oth and H\\'ars\\cite{tothhars} and that by Banaji and Boros\\cite{banajiboros}.) \nIt has a single positive stationary point that is stable but not attracting; therefore, one cannot apply Proposition \\ref{prop:main} above.\nNote that the individual reaction extents are not oscillating; they are \"pulsating\" while monotonically increasing to infinity. \nThey have an oscillatory derivative, and the zeros of their second derivative clearly show the endpoints of the periods, \nsee Fig. \\ref{fig:pulsating}. 
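The pulsating growth can be made quantitative. With \(V=1\) the extent differences satisfy \(\xi_1-\xi_2=c_{\ce{X}}-c^0_{\ce{X}}\) and \(\xi_2-\xi_3=c_{\ce{Y}}-c^0_{\ce{Y}}\), so along the closed orbit the differences stay bounded while every extent grows roughly linearly. A minimal numerical sketch (a pure-Python RK4 integrator written for this purpose; the rate coefficients and initial data are the ones quoted for the irreversible case):

```python
# Sketch: irreversible Lotka-Volterra reaction with k1 = 3, k2 = 4, k3 = 5,
# cX0 = 1, cY0 = 2, V = 1.  The pairwise differences of the extents equal the
# concentration deviations, hence stay bounded, and the ratios approach 1.

def rk4_system(f, state, t_end, dt):
    """Classical fourth-order Runge-Kutta for an autonomous ODE system."""
    s, t, n = list(state), 0.0, len(state)
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        a = f(s)
        b = f([s[i] + 0.5 * h * a[i] for i in range(n)])
        c = f([s[i] + 0.5 * h * b[i] for i in range(n)])
        d = f([s[i] + h * c[i] for i in range(n)])
        s = [s[i] + h * (a[i] + 2 * b[i] + 2 * c[i] + d[i]) / 6 for i in range(n)]
        t += h
    return s

K1, K2, K3, V = 3.0, 4.0, 5.0, 1.0

def lotka(state):
    x, y, xi1, xi2, xi3 = state
    r1, r2, r3 = K1 * x, K2 * x * y, K3 * y        # step rates
    return [r1 - r2, r2 - r3, V * r1, V * r2, V * r3]

x, y, xi1, xi2, xi3 = rk4_system(lotka, [1.0, 2.0, 0.0, 0.0, 0.0], 200.0, 0.005)
```

The identities above are preserved by the integrator, so they double as a consistency check of the computation.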
\nIt may be a good idea to calculate any kind of reaction extent \\emp{for a period} in case of oscillatory reactions.\nWe are going to study this point later.\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{lotkaxi0}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkaxi1}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkaxi2}\n\\caption{The individual reaction extents and their first and second derivatives in case of the irreversible Lotka--Volterra reaction with \n\\(k_1~=~3~{\\mathrm{s}}^{-1}\\),\n\\(k_2~=~4~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\), \n\\(k_3~=~5~{\\mathrm{s}}^{-1}\\),\n\\(c^0_{\\ce{X}}~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}, c^0_{\\ce{Y}}~=~2~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \n\\(V~=~1~\\mathrm{dm}^3.\\)}\n\\label{fig:pulsating}\n\\end{figure}\nIt is interesting to have a look at the ratios of the reaction extents, as they seem to tend to 1, see Fig. \\ref{fig:lotkaratios}.\nWe assume that this phenomenon is related to the fact that the oscillatory solution results in a closed curve in the phase plane of the irreversible Lotka--Volterra reaction.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.75\\linewidth]{lotkaratio12}\\\\\n\\includegraphics[width=0.75\\linewidth]{lotkaratio23}\\\\\n\\includegraphics[width=0.75\\linewidth]{lotkaratio31}\n\\caption{The ratios of the reaction extents in case of the irreversible Lotka--Volterra reaction with the parameters as in Fig. \\ref{fig:pulsating}.}\n\\label{fig:lotkaratios}\n\\end{figure}\n\n\\paragraph{Reversible case, detailed balanced:}\n\nThe reversible Lotka--Volterra reaction\n\\ce{X <=>[$k_1$][$k_{-1}$] 2X}, \n\\ce{X + Y <=>[$k_2$][$k_{-2}$] 2Y}, \n\\ce{Y <=>[$k_3$][$k_{-3}$] 0}\nis also worth studying.\nFirst, let us note that for all values of the reaction rate coefficients it has a single, positive stationary point because the reaction steps are reversible. \nTherefore, the system is permanent\\cite{simon,borosexistence}, i.e. 
the trajectories remain in a compact set. \nIf the trajectories remain in a compact set, then they either tend to a limit cycle, or the stationary point is asymptotically stable. \nThe first possibility is excluded by the above-mentioned theorem by P\\'ota\\cite{potalotka}, thus only the second possibility remains. \nFig. \\ref{fig:lvrevdb} shows the behavior of the individual reaction extents.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{lotkarevxi0}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkarevxi1}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkarevxi2}\n\\caption{The individual reaction extents and their first and second derivatives in case of the reversible, detailed balanced Lotka--Volterra reaction with \n\\(k_1~=~1~{\\mathrm{s}}^{-1}\\),\n\\(k_{-1}~=~2~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_2~=~3~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_{-2}~=~4~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_3~=~5~{\\mathrm{s}}^{-1}\\),\n\\(k_{-3}~=~\\frac{15}{8}~\\,\\mathrm{mol}~\\mathrm{dm}^{-3}~{\\mathrm{s}}^{-1}\\),\n\\(c^0_{\\ce{X}}~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \\(c^0_{\\ce{Y}}~=~2~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \\(V~=~1~\\mathrm{dm}^3.\\)}\n\\label{fig:lvrevdb}\n\\end{figure}\nLet us note that both the existence and uniqueness of the stationary state also follow from the Deficiency One Theorem (see p. 106 in Feinberg\\cite{feinbergbook}, or p. 176 in T\\'oth et al.\\cite{tothnagypapp}).\n\nIf the reaction is detailed balanced, which holds if and only if\n\\begin{equation}\\label{eq:lvdb}\nk_1k_2k_3=k_{-1}k_{-2}k_{-3}\n\\end{equation}\nholds, then Proposition 2 of our previous paper implies that\nthe product of the ratios of the reaction extents tends to 1, see Fig. \\ref{fig:lotkarevdbratios}. 
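The detailed balance condition and the stationary point behind these figures can be cross-checked by direct arithmetic. A minimal sketch (not taken from the paper's computations; the rate coefficients are those quoted above, and the formulas \(x^*=k_1/k_{-1}\), \(y^*=k_{-3}/k_3\) come from balancing the first and third step pairs, after which the middle pair balances automatically, precisely because \(k_1k_2k_3=k_{-1}k_{-2}k_{-3}\)):

```python
# Arithmetic sketch: the rate coefficients of the text satisfy
# k1*k2*k3 = k_{-1}*k_{-2}*k_{-3} (= 15), and the candidate point
# x* = k1/k_{-1} (from X <=> 2X), y* = k_{-3}/k3 (from Y <=> 0)
# balances every step pair separately.

k1, km1, k2, km2, k3, km3 = 1.0, 2.0, 3.0, 4.0, 5.0, 15.0 / 8.0

x, y = k1 / km1, km3 / k3          # candidate detailed balanced point

pairs = {
    "X <=> 2X":     (k1 * x,     km1 * x * x),
    "X + Y <=> 2Y": (k2 * x * y, km2 * y * y),
    "Y <=> 0":      (k3 * y,     km3),
}

# right-hand side of the induced kinetic differential equation at (x, y)
xdot = (k1 * x - km1 * x * x) - (k2 * x * y - km2 * y * y)
ydot = (k2 * x * y - km2 * y * y) - (k3 * y - km3)
```

Since each forward rate equals its backward rate at this point, the right-hand side of the induced kinetic differential equation vanishes there as well.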
\nThe convergence of this product also follows from our Theorem 3 there.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{lotkarevdbratios}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkarevdbprod}\n\\caption{\nAbove: Time evolution of the ratios of the individual reaction extents---blue for Reaction (1), orange for Reaction (2), and green for Reaction (3)---in case of the reversible, detailed balanced Lotka--Volterra reaction with\n\\(k_1~=~1~{\\mathrm{s}}^{-1}\\),\n\\(k_{-1}~=~2~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\), \n\\(k_2~=~3~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_{-2}~=~4~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_3~=~5~{\\mathrm{s}}^{-1}\\),\n\\(k_{-3}~=~\\frac{15}{8}~\\,\\mathrm{mol}~\\mathrm{dm}^{-3}~{\\mathrm{s}}^{-1}\\),\n\\(c^0_{\\ce{X}}~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \\(c^0_{\\ce{Y}}~=~2~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \\(V~=~1~\\mathrm{dm}^3.\\)\nBelow: Time evolution of the product of the ratios tending to 1.}\n\\label{fig:lotkarevdbratios}\n\\end{figure}\n\n\\paragraph{Reversible case, not detailed balanced:}\n\nIf Condition \\eqref{eq:lvdb} does not hold, the reaction still has an \\emp{attracting stationary point}; what is more, this stationary point is asymptotically stable. \nFig. 
\\ref{fig:lotkanotdbxi} shows the behavior of the individual reaction extents.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{lotkanotdbxi0}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkanotdbxi1}\\\\\n\\includegraphics[width=0.85\\linewidth]{lotkanotdbxi2}\n\\caption{The individual reaction extents and their first and second derivatives in case of the reversible, \\emp{not} detailed balanced Lotka--Volterra reaction with \\(k_1~=~1~{\\mathrm{s}}^{-1},\\)\n\\(k_{-1}~=~2~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_2~=~3~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_{-2}~=~4~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_3=~5~{\\mathrm{s}}^{-1}\\),\n\\(k_{-3}~=~6~\\,\\mathrm{mol}~\\mathrm{dm}^{-3}~{\\mathrm{s}}^{-1}\\),\n\\(c^0_{\\ce{X}}~=~1~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \\(c^0_{\\ce{Y}}~=~2~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\), \\(V~=~1~\\mathrm{dm}^3.\\)}\n\\label{fig:lotkanotdbxi}\n\\end{figure}\n\n\\subsubsection{The R\\'abai reaction of pH oscillation.}\nHere we include a reaction proposed by R\\'abai\\cite{rabai} to describe pH oscillations. \nThis reaction has much more direct contact with chemical kinetic experiments, and it is much more challenging---from the point of view of numerical mathematics---than the celebrated Lotka--Volterra reaction.\n\nR\\'abai\\cite{rabai} starts with the steps\n\\begin{align*}\n&\\ce{A- + H+ <=>[$k_1$][$k_-1$] AH},\\\\\n&\\ce{AH + H+ +\\{B\\} ->[$k_2$] 2H+ + P-}\n\\end{align*}\nwhere \\ce{\\{B\\}} is an external species with a constant concentration. 
\nThis reaction has a single stationary point\n\\begin{equation*}\nc^*_{\\ce{A-}}=0,\\quad\nc^*_{\\ce{H+}}=c^0_{\\ce{H+}}+c^0_{\\ce{AH}},\\quad\nc^*_{\\ce{AH}}=0,\\quad\nc^*_{\\ce{P}}=c^0_{\\ce{A-}}+c^0_{\\ce{AH}}+c^0_{\\ce{P}}\n\\end{equation*}\nspecializing into\n\\(\nc^*_{\\ce{A-}}=0,\nc^*_{\\ce{H+}}=c^0_{\\ce{H+}},\nc^*_{\\ce{AH}}=0,\nc^*_{\\ce{P}}=c^0_{\\ce{A-}}\n\\)\nwith the natural restriction on the initial condition\n\\(\nc^0_{\\ce{AH}}=0,\nc^0_{\\ce{P}}=0.\n\\)\n\nPutting the reaction into a CSTR (continuously stirred tank reactor) \nmeans in terms of formal reaction kinetics that\nall the species can flow out and some of the species may flow in, \nwhile the volume is kept constant.\nIn the present case, the following steps are added:\n\\begin{align}\n&\\ce{A- ->[$k_0$] 0},\\label{rabaiout1}\\\\\n&\\ce{0 ->[$k_0c^0_{\\mathrm{A}^-}$] A-},\\label{rabaiin1}\\\\\n&\\ce{H+ ->[$k_0$] 0},\\label{rabaiout2}\\\\\n&\\ce{0 ->[$k_0c^0_{\\mathrm{H}^+}$] H+},\\label{rabaiin2}\\\\\n&\\ce{AH ->[$k_0$] 0},\\label{rabaiout3}\n\\end{align}\nwhere $k_0$ is the volumetric flow rate normalized to the volume of the reactor (often called the reciprocal of the residence time) measured in units of \\(\\mathrm{s}^{-1}\\). \nAs a result of adding these steps, multistability may occur with appropriately chosen values of the parameters. When the reaction step\n\\begin{equation}\\label{rabai3}\n\\ce{H+ + \\{C$^-$\\} ->[$k_3$] CH}\n\\end{equation}\nis also added, one may obtain periodic solutions for appropriate parameter values, see Fig. \\ref{fig:rabaiosc}. 
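The stationary point of the two-step core model quoted above can also be read off from two conservation relations: \(c_{\ce{H+}}+c_{\ce{AH}}\) and \(c_{\ce{A-}}+c_{\ce{AH}}+c_{\ce{P}}\) are constant in time (the flow steps destroy these relations). A minimal numerical sketch follows, with illustrative, non-stiff rate coefficients and assumed initial data (the published values, with \(k_1\approx10^{10}\), would require a stiff solver):

```python
# Sketch of the two-step core of the Rabai model; all coefficients and initial
# data below are assumed for illustration only.  a = [A-], h = [H+], m = [AH],
# p = [P-]; the constant external species {B} is absorbed into k2.

def rk4_system(f, state, t_end, dt):
    """Classical fourth-order Runge-Kutta for an autonomous ODE system."""
    s, t, n = list(state), 0.0, len(state)
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        a = f(s)
        b = f([s[i] + 0.5 * h * a[i] for i in range(n)])
        c = f([s[i] + 0.5 * h * b[i] for i in range(n)])
        d = f([s[i] + h * c[i] for i in range(n)])
        s = [s[i] + h * (a[i] + 2 * b[i] + 2 * c[i] + d[i]) / 6 for i in range(n)]
        t += h
    return s

K1, KM1, K2 = 10.0, 1.0, 5.0                    # assumed coefficients
a0, h0, m0, p0 = 1.0, 0.5, 0.0, 0.0             # assumed initial data

def core(state):
    a, h, m, p = state
    r1  = K1 * a * h        # A- + H+ -> AH
    rm1 = KM1 * m           # AH -> A- + H+
    r2  = K2 * m * h        # AH + H+ (+ {B}) -> 2 H+ + P-
    return [rm1 - r1, rm1 - r1 + r2, r1 - rm1 - r2, r2]

a, h, m, p = rk4_system(core, [a0, h0, m0, p0], 100.0, 0.002)
```

The long-time state reproduces the formulas above: all of \ce{A-} and \ce{AH} is converted, \(c^*_{\ce{H+}}=c^0_{\ce{H+}}+c^0_{\ce{AH}}\) and \(c^*_{\ce{P}}=c^0_{\ce{A-}}+c^0_{\ce{AH}}+c^0_{\ce{P}}\).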
Let us remark that neither the R\\'abai reaction nor the Lotka--Volterra reaction is mass-conserving.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{rabaioscsol}\\\\\n\\includegraphics[width=0.85\\linewidth]{rabaiosctraj}\n\\caption{Time evolution of the pH and the projection of the negative logarithm of the first three coordinates of the trajectory in case of the oscillating R\\'abai reaction with \n\\(k_1~=~10^{10}~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_{-1}~=~10^3~{\\mathrm{s}}^{-1}\\),\n\\(k_2~=~10^6~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_3~=~1~\\mathrm{dm}^3~\\,\\mathrm{mol}^{-1}~\\mathrm{s}^{-1}\\),\n\\(k_0~=~10^{-3}~{\\mathrm{s}}^{-1}\\),\n\\(c^0_{\\ce{A-}}~=~5~\\times~10^{-3}~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(c^0_{\\ce{H+}}~=~10^{-3}~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(c^0_{\\ce{AH}}~=~0~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(c^0_{\\ce{P}}~=~0~\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\),\n\\(V~=~1~\\mathrm{dm}^3.\\)}\n\\label{fig:rabaiosc}\n\\end{figure}\n\nIt is instructive to cast a glance at the reaction extents in such a complex system.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{rabaioscext01forward}\\\\\n\\includegraphics[width=0.85\\linewidth]{rabaioscext01backward}\n\\caption{Reaction extents of the forward (above) and backward (below) steps of the fast equilibrium reaction \\ce{A- + H+ <=>[$k_1$][$k_{-1}$] AH} of the oscillating R\\'abai reaction (in the same time window as that of Fig. 
\\ref{fig:rabaiosc}.)}\n\\label{fig:rabaioscext12} \n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{rabaioscext02}\\\\\n\\includegraphics[width=0.85\\linewidth]{rabaioscext03}\n\\caption{Reaction extents of the reaction steps\n\\ce{AH + H+ +\\{B\\} ->[$k_2$] 2H+ + P-} (left) and \n\\ce{H+ + \\{C$^-$\\} ->[$k_3$] CH} (right) of the oscillating R\\'abai reaction (in the same time window as that of in Fig. \\ref{fig:rabaiosc}.)}\n\\label{fig:rabaioscext23} \n\\end{figure}\nNote that the reaction extents of the fast equilibrium reaction \\ce{A- + H+ <=>[$k_1$][$k_-1$] AH} shown in Fig. \\ref{fig:rabaioscext12} are practically the same, or: their ratio tends to 1, as if no other steps were present! \nThey are also four-five orders of magnitude higher than those of the auto-catalytic production \\ce{AH + H+ +\\{B\\} ->[$k_2$] 2H+ + P-} and the slow pseudo first-order chemical removal \\ce{H+ + \\{C$^-$\\} ->[$k_3$] CH} of \\ce{H+} ion shown in Fig. \\ref{fig:rabaioscext23}.\nNote also the step-wise increase of reaction extent \\(\\xi_3(t)\\).\n\\subsection{Chaos}\nHere we use a version of the R\\'abai reaction that \\emp{can numerically be shown} to exhibit chaotic behavior, see Fig. \\ref{fig:rabaichaos}. \nThis is a good model for experimental pH oscillators also showing behavior that seems to be chaotic according to the usual standards.\n\nWhen the reaction step \\eqref{rabai3} is made reversible \n\\begin{equation}\\label{rabai5}\n\\ce{H+ + \\{C$^-$\\} <=>[$k_3$][$k_{-3}$] CH},\n\\end{equation}\nand one also introduces both the chemical \"removal\" and the outflow of \\ce{CH} \n\\begin{equation}\\label{rabai6}\n\\ce{CH ->[$k_{4}$] Q},\n\\end{equation}\n\\begin{equation}\\label{rabai7}\n\\ce{CH ->[$k_{0}$] 0},\n\\end{equation}\nchaotic solutions are obtained by using appropriate parameters and favorable input concentrations. Fig. \\ref{fig:rabaichaos} illustrates this fact. 
The reaction extents tend to \\(+\\infty\\) (see for example Fig. \\ref{fig:rabaichaosextent}) in such a way that their derivatives are chaotically oscillating (not shown), as expected.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{rabaichaossol}\\\\\n\\includegraphics[width=0.85\\linewidth]{rabaichaostraj}\n\\caption{Time evolution of the pH and the projection of the negative logarithm of the first two coordinates of the trajectory in case of the chaotically oscillating R\\'abai reaction with \nthe same parameters as in Fig. \\ref{fig:rabaiosc} and\n\\(k_{-3}~=~1.5~\\times~10^{-2}~{\\mathrm{s}}^{-1}\\),\n\\(k_{4}~=~5~\\times~10^{-2}~{\\mathrm{s}}^{-1}.\\) \n}\n\\label{fig:rabaichaos} \n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{rabaichaosextent}\n\\caption{Time evolution of the reaction extent \nof the forward step of reaction \\eqref{rabai5}\nin case of the chaotic R\\'abai reaction with the same parameters as in Fig. \\ref{fig:rabaichaos}.}\n\\label{fig:rabaichaosextent}\n\\end{figure}\n\n\\section{Conclusions}\n\nA generalized definition for the reaction extent has been given by Bowen\\cite{bowen} (included in Chapter 6 of the book edited and partially written by Truesdell\\cite{truesdell}).\nAnother definition, fitting much better into the framework of modern formal reaction kinetics of the last 50 years, was given by Aris\\cite{arisprolegomena1}. \nStill, neither of them became popular among chemists and chemical engineers. Our goal here is to further generalize the definition (essentially that of Aris) to make it compatible with the present theory of reaction kinetics. 
The result reveals that a kind of dormant definition has long existed without use in chemical kinetics, and we have shown that this need not be the case.\n\nWe have introduced the concept of reaction extent for reaction networks of arbitrary complexity (any number of reaction steps and species) without assuming mass action kinetics. \nThe newly defined reaction extent gives the advancement of each individual irreversible reaction step; \nin the case of reversible reactions, we have a pair of reaction extents. \nIn all practically important cases, the fact that the reaction extent is strictly monotonically increasing implies that the reaction events never stop. \nThis observation sheds new light on the concept of \\emp{dynamic equilibrium},\nwithout alluding to either thermodynamics or statistical mechanics.\n\nAfter a few statements on the qualitative behaviour of the reaction extent, we made an effort to connect the notion with the traditional ones. \nThus, we have shown that if the number of reaction steps is one, \nthe reaction extent in the long run (as \\(t\\to+\\infty\\))\ntends to 1 if appropriately scaled.\nWe have not used the expression \\emp{progress of reaction}, \nstill less the \\emp{reaction coordinate}. \nWe agree that it is convenient to accept the proposal by Aris\\cite{aris} to work with \\(\\frac{1}{V}\\boldsymbol{\\xi}\\), which is usually called the \\emp{degree of advancement}. \nWe have also shown that for an arbitrary number of reversible detailed balanced reaction steps the product of the ratios of the individual reaction extents also tends to 1 as \\(t\\to+\\infty.\\)\n\nOur most general statement holds for arbitrary reactions having an attracting stationary point\nand for any function not vanishing at the stationary point: in this case the value of the chosen function \nalong the time-dependent concentrations, divided by the value of the given function at the stationary concentration, tends to 1. 
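The detailed-balance statement above can be checked numerically on the simplest possible example. The following sketch (our illustration, not part of the paper's Wolfram code) integrates the extent equations \(\dot{\xi}_{\pm}=V\,\mathit{rate}_{\pm}\) for a single reversible mass-action step \ce{A <=> B} by the explicit Euler method; the rate coefficients, volume, and initial concentrations are arbitrary assumptions.

```python
# Illustration only (not the authors' code): forward and backward reaction
# extents of a single reversible mass-action step A <=> B, integrated by
# explicit Euler.  Rate coefficients, volume and initial data are assumed.
kf, kb = 2.0, 1.0      # forward / backward rate coefficients (1/s)
V = 1.0                # constant volume (dm^3)
a, b = 1.0, 0.0        # concentrations of A and B (mol/dm^3)
xi_f = xi_b = 0.0      # reaction extents of the two directions (mol)

dt, t_end = 1.0e-3, 100.0
for _ in range(int(t_end / dt)):
    rate_f, rate_b = kf * a, kb * b   # mass-action rates
    xi_f += V * rate_f * dt           # d(xi_f)/dt = V * rate_f: a single term
    xi_b += V * rate_b * dt           # d(xi_b)/dt = V * rate_b: a single term
    a += (rate_b - rate_f) * dt
    b += (rate_f - rate_b) * dt

# Both extents keep growing (reaction events never stop), their difference
# equals the consumed amount of A, and their ratio tends to 1.
print(xi_f, xi_b, xi_f / xi_b)
```

With these toy parameters the concentrations settle at the detailed-balanced stationary point \(a^*=k_b/(k_f+k_b)\), while \(\xi_f\) and \(\xi_b\) grow without bound and \(\xi_f/\xi_b\) approaches 1, in accordance with the statement above.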
\nThus, this statement is true not only for the reaction extent but also for any appropriate function.\n\nOne should take into consideration that although in the practically interesting cases the number \\(R\\) of equations \\eqref{eq:rmdiffegy} for the reaction extents is larger than the number \\(M\\) of kinetic differential equations in \\eqref{eq:ikdegen}, i.e. \\(R > M\\),\nthe equations for the reaction extents are of a much simpler structure, \nas the right-hand side of each of the differential equations \\eqref{eq:rmdiffegy} describing them consists of a single term only. \nIn our calculations we found it numerically less demanding to solve the system of differential equations for the reaction extents than that for the concentrations. \n\nThe main advantage of the new definition of reaction extent is that, knowing the kinetic model of a reacting system, one can now calculate not only the time evolution of the concentration of each reacting species but also the number of occurrences of the individual reaction events.\n\nAs a by-product we have given an exact definition of stoichiometric initial concentrations and of initial concentrations in excess. \nOne can say that the newly defined reaction extent can be usefully applied to a larger class of reactions than usual, but in some (exotic) cases its use needs further investigation,\nwhich we will begin in a forthcoming paper.\nIt is for the reader to decide whether we have succeeded in avoiding all the traps mentioned in the Introduction. \nQuite a few authors treat the methodology of teaching the concept\\cite{garst,moretti,vandezandevandergrienddekock};\nwe think this approach will only have its \\textit{raison d'\\^{e}tre} once the scientific background has been clarified and agreed on. \n\n\nLet us mention a few limitations and future directions of research. We have assumed throughout that the volume (together with temperature and pressure) is constant. 
Tacitly, we have assumed that we deal with homogeneous kinetics; heterogeneous systems are not taken into consideration. Also, we have not dealt with reaction-diffusion systems. \nWe mention that recently Peka\\v{r}\\cite{pekar} and Rodrigues et al.\\cite{rodriguesbilleterbonvin} have applied the concept of reaction extent to the case when diffusion is also present. \nWe have also mentioned a few mathematical conjectures that are to be investigated later. \n\n\\section*{Supporting Information}\nThe file FiguresandCalculationGasparToth.pdf contains all the calculations and drawings made using the Wolfram language. \nInterested readers may request from the authors the .nb file used for the calculations.\n\n\\section*{Author Contributions}\nThe authors participated equally in all parts of the paper.\n\n\\section*{Conflicts of interest}\nThere are no conflicts to declare.\n\n\\section*{Acknowledgements}\nThe present work has been supported by the National Office for Research and Development (2018-2.1.11-T\\'ET-SI-2018-00007 and FK-134332). \nJT is grateful to Dr. J. Karsai (Bolyai Institute, Szeged University) and to Daniel Lichtblau (Wolfram Research) for their help. \nMembers of the Working Committee for Reaction Kinetics and Photochemistry, \nespecially Profs. T. Tur\\'anyi and G. Lente, furthermore Drs. Gy. P\\'ota and T. 
Nagy, made a number of useful critical remarks.\n\n\\section*{Notations}\nSome readers may appreciate that we have collected the notations used.\n\n\\begin{table*}\n\\caption{Notations}\n\\label{tbl:notations1}\n\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill}}clcc}\n\\hline\nNotation & Meaning & Unit & Typical value \\\\\n\\hline\\([a, b[\\)&left-closed, right-open interval&&\\\\\n\\hline\\(\\Longrightarrow\\)&implies&&\\\\\n\\hline\\(\\in\\)&belongs to&&\\\\\n\\hline\\(\\forall\\)&universal quantifier&&``for all''\\\\\n\\hline\\(\\exists\\)&existential quantifier&&``there is''\\\\\n\\hline\\(\\odot\\)&coordinate-wise product of vectors&&\\\\\n\\hline\\(\\mathbf{A}^{\\top}\\)&transpose of the matrix \\(\\mathbf{A}\\)&&\\\\\n\\hline\\(c_m, c_{\\ce{X}}\\) & concentration of \\ce{X($m$)} and \\ce{X} & \\(\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\)& \\\\\n\\hline\\(\\mathbf{c}\\) & vector of concentrations & & \\\\\n\\hline\\(c_m^0\\) & initial concentration of \\ce{X($m$)} & \\(\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\)& \\\\\n\\hline\\(c_m^*\\) & stationary concentration of \\ce{X($m$)} & \\(\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}\\)& \\\\\n\\hline\\(\\mathcal{C}^i(A,B) \\) & \\(i\\) times continuously differentiable & & \\\\\n & functions from \\(A\\) into \\(B\\) & & \\\\\n\\hline\\(\\Dom(u)\\) & the domain of the function \\(t\\mapsto u(t)\\) & &\\\\\n\\hline \\(J\\subset\\mathbb{R}\\) & an open interval &&\\\\ \n\\hline \\(k, k_r,k_{-r}\\) & reaction rate coefficient &\\((\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3})^{1-\\sum_{m=1}^{M}\\alpha_{m,r}}~{\\mathrm{s}}^{-1}\\)&\\\\\n\\hline \\(k_{0}\\) & normalized volumetric flow rate& \\({\\mathrm{s}}^{-1}\\) &\\\\\n\\hline\\(M\\) & the number of chemical species & & \\(\\in\\mathbb{N}\\)\\\\\n\\hline\\(n_m\\) & the quantity of species \\ce{X($m$)}&\\(\\,\\mathrm{mol}\\) & \\(\\in\\mathbb{N}\\)\\\\\n\\hline\\(n_m^0\\) & the initial quantity of species \\ce{X($m$)} &\\(\\,\\mathrm{mol}\\) & 
\\(\\in\\mathbb{N}\\)\\\\\n\\hline\\(\\mathbf{n}\\) & the vector of the quantity of species & &\\\\\n\\hline\\(\\mathbf{n}^0\\) & the vector of the initial quantity of species & &\\\\\n\\hline \\(\\mathbb{N}\\) & the set of positive integers &&\\\\\n\\hline \\(\\mathbb{N}_0\\) & the set of non-negative integers &&\\\\\n\\hline\\(rate_r\\) & rate of the \\(r^{\\mathrm{th}}\\) reaction step &\\(\\,\\mathrm{mol}\\;\\mathrm{dm}^{-3}~\\mathrm{s}^{-1}\\) & \\\\\n\\hline\\(\\mathbf{rate}\\) & vector of reaction rates & & \\\\\n\\hline \\(\\mathbb{R}\\) & the set of real numbers&&\\\\\n\\hline \\(\\mathbb{R}^+\\) & the set of positive real numbers &&\\\\\n\\hline \\(\\mathbb{R}^+_0\\) & the set of non-negative real numbers &&\\\\\n\\hline\\(t\\) & time & s & \\(\\in\\mathbb{R}\\)\\\\\n\\hline \\(V\\) & volume & \\(\\mathrm{dm}^3\\) & \\\\\n\\hline \\(W_r\\) & number of occurrences of the \\(r^{\\mathrm{th}}\\) step & &\\(\\in\\mathbb{N}_0\\) \\\\\n\\hline \\(\\mathbf{W}\\) & vector of number of occurrences &&\\\\\n\\hline \\(x_m\\) & \\(m^{\\mathrm{th}}\\) dependent variable in a & &\\\\\n& differential equation&&\\\\\n\\hline\\(\\mathbf{x}\\) & vector of variables in a & &\\\\\n& differential equation &&\\\\\n\\hline\\(\\ce{X},\\ce{Y},\\ce{X($m$)}\\) & chemical species & & \\\\\n\\hline\\(\\alpha_{m,r},\\alpha_{m}\\) & stoichiometric coefficient & 1 & \\(0,1,2,3\\) \\\\\n& in the reactant complex & & \\\\\n\\hline \\(\\boldsymbol{\\alpha}\\) & matrix of reactant complex vectors & & \\\\\n\\hline \\(\\beta_{m,r},\\beta_{m}\\) & stoichiometric coefficient & 1 & \\(0,1,2,3\\) \\\\\n& in the product complex & & \\\\\n\\hline \\(\\boldsymbol{\\beta}\\) & matrix of product complex vectors & & \\\\\n\\hline \\(\\gamma_{m,r},\\gamma_{m}\\) & stoichiometric number & 1 & \\(-3,-2,\\dots,2,3\\) \\\\\n\\hline \\(\\boldsymbol{\\gamma}\\) & stoichiometric matrix & &\\\\\n\\hline \\(\\xi_r\\) & reaction extent of the \\(r^{\\mathrm{th}}\\) step &\\(\\,\\mathrm{mol}\\)&\\\\\n\\hline 
\\(\\boldsymbol{\\xi}\\) & vector of reaction extents & & \\\\\n\\hline\n\\end{tabular*}\n\\end{table*}\n\n\n\n\n\\renewcommand\\refname{References}\n