diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdnch" "b/data_all_eng_slimpj/shuffled/split2/finalzzdnch" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdnch" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n{\\bf 1.1. Overview of the proposed results.}\nGiven a pair $(G,T)$ where $G$ is a graph \nand $T$ is a specified set of vertices, called \\emph{terminals},\na (vertex) Multiway Cut (\\textsc{mwc}) of $(G,T)$ is a set\nof non-terminal vertices whose removal from $G$ separates all\nthe terminals. The \\textsc{mwc} problem asks to compute the\nsmallest \\textsc{mwc} of $G$. It is NP-hard for $|T| \\geq 3$ \\cite{Dahlapprox}.\n\nIn this paper we concentrate on the parameterized version $(G,T,k)$ of\nthe \\textsc{mwc} problem where we are given a parameter $k$ and asked\nwhether there is an \\textsc{mwc} of $(G,T)$ of size at most $k$.\nThe goal we work towards is understanding the \\emph{kernelizability}\nof the \\textsc{mwc} problem. In other words, we want to understand,\nwhether there is a polynomial time algorithm that transforms\n$(G,T,k)$ into an \\emph{equivalent} instance $(G',T',k')$ (equivalent in\nthe sense that the former is the 'YES' instance iff the latter is)\nsuch that $|V(G')|$ is upper-bounded by a polynomial of $k'$\nand $k'$ itself is upper bounded by a polynomial of $k$. Informally speaking\nwe want to shrink the instance of the \\textsc{mwc} problem to a size\npolynomially dependent on the parameter.\n\nThe kernelizability of the \\textsc{mwc} is considered by\nthe parameterized complexity community as an interesting and challenging\nquestion. In this paper we propose a partial result and pose two open\nquestions that, if resolved affirmatively, will imply, together with\nthis result, that the \\textsc{mwc} problem is kernelizable. An informal\noverview is given below.\n\nLet $(G,T,k)$ be an instance of the \\textsc{mwc} problem. \nAn \\emph{isolating cut} \\cite{Dahlapprox} is a set of non-terminal \nvertices separating a terminal $t$ from the rest of terminals. \nLet $r$ be the smallest size of an isolating cut. Clearly we can assume \n$r \\leq k$ otherwise, $(G,T,k)$ is a 'NO' instance. In this paper we propose\nan algorithm transforming the initial instance into an equivalent one \nwhose size is $O(2^{k-r}rk^2)$. The runtime of this algorithm is\n$O(A(n)+2^{k-r}n^3r^2k^4)$ where $A(n)$ is the runtime of the constant ratio\napproximation algorithm for the vertex \\textsc{mwc} problem proposed in \n\\cite{Gargapprox}. Thus we demonstrate that for every fixed constant\n$c$ the subclass of \\textsc{mwc} problem consisting of instances with \n$k-r \\leq c*logk$ is polynomially kernelizable. \nTo the best of our knowledge, this is the first progress towards kernelization\nof the \\textsc{mwc} problem. \nTwo more merits of the\nproposed results are that it might be a building block in an unconditional\nkernelization of the \\textsc{mwc} problem and that it gives a new insight\ninto the structure of \\emph{important separators} \\cite{MarxTCS}.\nTo justify these merits, we provide below a more detailed overview of the\nproposed result.\n\nThe main ingredient of the proposed algorithm is computing for each\n$t \\in T$ the union $U_t$ of all important isolating cuts of $t$ of size\nat most $k$. An almost immediate consequence of Lemma 3.6. of \\cite{MarxTCS}\nshows that the union of all $U_t$ contains a solution of $(G,T,k)$ if\nsuch exists. 
Therefore, 'contracting' the rest of non-terminal vertices\nresults in an instance equivalent to $(G,T,k)$. Prior to computing\nthe sets $U_t$ we ensure that the size of $|T|$ is at most $2k(k+1)$.\nThis is done in Section 3 by running the approximation algorithm \nof \\cite{Gargapprox}\nand processing the output in the flavour of a simple quadratic kernelization\nalgorithm for the Vertex Cover problem (i.e. noticing that the vertices\nof the given \\textsc{mwc} adjacent to a large number of terminal\ncomponents must be present in any solution and, \nafter removal of these vertices and the already separated\nterminals, the number of remaining terminals is small). \n\n\nBut what is the size of $U_t$ and what is the time needed for its computation?\nTo understand this, we study (in Section 2) \nimportant $X-Y$ separators of $G$ \\cite{MarxTCS} where $X$ and $Y$\nare two arbitrary subsets of vertices. As a result, we obtain a combinatorial\ntheorem saying that if $r$ is the smallest size of an $X-Y$ separator\nthen for an arbitrary $x$ the size of the union of all $X-Y$ important\nseparators of size at most $r+x$ is at most $2^{x+1}r$ and these vertices\ncan be computed in time $O(n^32^xr^2(r+x)^2)$. \\footnote{The results are obtained\nwithout any regard to the \\textsc{mwc} problem, hence they might be of an\nindependent interest.} The exponential\npart of the runtime follows from the need to enumerate so-called\n\\emph{principal important separators} whose union includes\nall the needed vertices. We argue that the principal important separators\nconstitute a generally small subset of the whole set of important\nseparators and pose the \\emph{first open question} asking whether\nthe number of principal separators can be bounded by a polynomial of $n$.\nThe affirmative answer to this question implies the polynomial runtime\nof the algorithm proposed in this paper. In this case the algorithm can\nbe a first step of a kernelization method of the \\textsc{mwc}.\nHowever, it cannot be \\emph{the only} step. We demonstrate the \nupper bound on the number of vertices is tight and hence generally\ncannot polynomially depend on $k$. Therefore, a natural question is\nwhether the output of this algorithm can be further processed to obtain\nan unconditional kernelization. We pose this as our \\emph{second open\nquestion}.\n\n{\\bf 1.2. Related work.} There are many publications related to the \ntopics considered in the paper. We overview only those that are of \na direct relevance for the proposed results.\n\nThe fixed-parameter tractability of the \\textsc{mwc} problem has been\nestablished in \\cite{MarxTCS} and the runtime has been improved\nto $O^*(4^k)$ in \\cite{ChenAlgorithmica}. A parameterization\nof the \\textsc{mwc} problem above a guaranteed value has been recently\nproposed in \\cite{REX}, where we show that the problem is in XP under\nthis parameterization leaving open the fixed-parameter tractability status.\n\nThe notion of \\emph{important separator} has been introduced in \n\\cite{MarxTCS}. As noticed in \\cite{LokshtanovClustering}, \nthe recent algorithms for a number of challenging graph separation\nproblems, including the one of \\cite{ChenAlgorithmica}, are\nbased on enumeration of important separators. Further on,\n\\cite{LokshtanovClustering} proves an upper bound $4^k$\non the number of important separators of size at most $k$ and \nnotices that the algorithm of \\cite{ChenAlgorithmica} \nin fact implicitly establishes this upper bound. 
\nAn alternative upper bound, suitable for the case where the \nsmallest important separator is large, is established in \\cite{REX}.\n\nConstant-ratio approximation algorithms for the \\textsc{mwc}\nproblem were first proposed in \\cite{Dahlapprox} for\nthe edge version and in \\cite{Gargapprox} for the vertex\nversion.\n\nThe research on kernelization has been given its current\nshape by the landmark paper \\cite{Bodnokernel}, which\nmade it possible to classify fixed-parameter tractable problems\ninto kernelizable ones and those that are probably not. Among the many known \nkernelizability and non-kernelizability results, \nlet us mention the kernelization methods for multicut on trees \n\\cite{DaligaultSTACS09} and Feedback Vertex Set \\cite{FVSkernel} and the non-kernelizability\nproof for the Disjoint Cycles problem \\cite{Bod2nokernel}. \nAlthough far from being analogous to the \\textsc{mwc} problem,\nall these problems are related to flow maximization\/cut minimization\ntasks and hence might be a source of ideas useful for the final \nsettling of the kernelizability of the \\textsc{mwc} problem.\n\n\n\\section{Bounding the union of important separators}\nLet $X$ and $Y$ be two disjoint sets of vertices of the given graph $G$.\nA set $K \\subseteq V(G) \\setminus (X \\cup Y)$ is an $X-Y$ separator\nif in $G \\setminus K$ there is no path from $X$ to $Y$.\nLet $A,B$ be two disjoint subsets of $V(G)$. We denote\nby $NR(G,A,B)$ the set of vertices that are not reachable from $A$ in $G \\setminus B$.\nLet $K_1$ and $K_2$ be two $X-Y$ separators. We say that $K_1 \\prec^* K_2$ if\n$NR(G,Y,K_1) \\subset NR(G,Y,K_2)$.\n\nA minimal $X-Y$ separator $K$ is called \\emph{important} if there is no $X-Y$ separator $K'$\nsuch that $K \\prec^* K'$ and $|K| \\geq |K'|$. This notion was first introduced in \\cite{MarxTCS}\nin a slightly different although equivalent way (see Proposition 3 of \\cite{REX}). \nLet $r$ be the size of a smallest important $X-Y$ separator and let $S$ be an arbitrary\nimportant separator. We call $|S|-r$ the \\emph{excess} of $S$. Then the following theorem\nholds.\n\n\\begin{theorem} \\label{MainResult}\nLet $U$ be the union of important $X-Y$ separators of excess at most $x$.\nThen $|U| \\leq 2^{x+1}r$. Moreover, $U$ can be computed in time\n$O(n^32^{2x}r^2(r+x)^2)$.\n\\end{theorem}\n\nIn this section we prove Theorem \\ref{MainResult} and show the tightness\nof the upper bound on $|U|$.\nThe proof of Theorem \\ref{MainResult} is divided into two stages.\nIn the first stage, we introduce a partially ordered family of subsets of a given\nset satisfying certain properties. \nWe call such a family of sets an \\emph{IS-family}.\nWe prove Theorem \\ref{MainResult} in terms of an IS-family. Then we\nshow that the family of all important separators with the $\\prec^*$ relation \nis in fact an IS-family, from which Theorem \\ref{MainResult} immediately follows.\n\nThe advantage of such an 'axiomatic' style of proof is that it clearly\nspecifies the properties of the family of important separators (viewed as a \npartially ordered family of sets) that imply the above upper bound. \nAn additional potential advantage is that some deep algebraic techniques\nmight become applicable for further investigation of the kernelization of\nmultiway cut.
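For readers who prefer an operational view of the definitions above, the following sketch computes $NR(G,A,B)$ by breadth-first search and tests the relation $K_1 \\prec^* K_2$. It is an illustration only, not part of the formal development: Python, an adjacency-list representation of $G$ and the helper names are our own assumptions.
\\begin{verbatim}
from collections import deque

def not_reachable(adj, A, B):
    # NR(G, A, B): vertices of G, outside B, that cannot be reached
    # from A once the vertices of B are removed.
    # adj maps every vertex to an iterable of its neighbours.
    B = set(B)
    reached = set(A)
    queue = deque(reached)
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in B and w not in reached:
                reached.add(w)
                queue.append(w)
    return set(adj) - B - reached

def strictly_precedes(adj, Y, K1, K2):
    # K1 precedes K2 in the above order iff NR(G, Y, K1) is a
    # proper subset of NR(G, Y, K2).
    return not_reachable(adj, Y, K1) < not_reachable(adj, Y, K2)
\\end{verbatim}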
\n\n\\subsection{From Important Separators to Partially Ordered Families of Sets}\nLet $V$ be a finite set.\nLet $({\\bf F}, \\prec)$ be a pair where ${\\bf F}$ is a family\nof subsets of $V$ and $\\prec$ is an order relation on\nthe elements of ${\\bf F}$ . Let $S \\in {\\bf F}$ and $v \\in V$.\nWe say that $S$ \\emph{covers} $v$ if\nthere is $S' \\in {\\bf F}$ such that\n$S' \\prec S$ and $v \\in S' \\setminus S$.\nWe define $Pred(S)$ to be the set of\nall $S'$ such that $S' \\prec S$ and\nthere is no $S'' \\in {\\bf F}$ such that\n$S' \\prec S'' \\prec S$. Symmetrically,\nwe define $Succ(S)$ to be the set of\nall $S' \\in {\\bf F}$ such that $S \\prec S'$\nand there is no $S'' \\in {\\bf F}$ such that\n$S \\prec S'' \\prec S'$.\nWe define the \\emph{visible set} of $S$ denoted\nby $Vis(S)$ to be the set of all $v \\in V$ satisfying\nthe following two conditions:\n\\begin{itemize}\n\\item there is $S' \\in Pred(S)$ such that $v \\in S'$;\n\\item $v$ is not covered by any element of $Pred(S)$.\n\\end{itemize}\n\n$({\\bf F}, \\prec)$ is called an $IS$-family\nif the following conditions are true.\n\n\\begin{itemize}\n\\item {\\bf Smallest element (SE) condition.}\nThere is a unique element of ${\\bf F}$ denoted\nby $sm({\\bf F})$ such that for any other\n$S \\in {\\bf F}$, $sm({\\bf F}) \\prec S$.\n\\item {\\bf Strict monotonicity (SM) condition.}\nLet $S_1,S_2 \\in {\\bf F}$. If $S_1 \\prec S_2$\nthen $|S_1|<|S_2|$. \n\\item {\\bf Single witness (SW) condition.}\nLet $S \\in {\\bf F}$ and let $v \\in S$.\nLet $S'$ be a minimal element such that\n$S \\prec S'$ and $v \\in S \\setminus S'$.\nWe call $S'$ a \\emph{witness} of $v$ w.r.t.\n$S$. The condition requires that there is \n\\emph{at most one} witness of $v$ w.r.t. $S$.\n\\item {\\bf Transitive Elimination (TE) condition}\nLet $S_1 \\prec S_2 \\prec S_3$ be three elements\nof ${\\bf F}$ and let $v \\in S_1 \\setminus S_2$.\nThen $v \\in S_1 \\setminus S_3$.\n\\item {\\bf Large visible set (LVS) condition}\nLet $S \\in {\\bf F}$ and let $S' \\in Pred(S)$.\nThen $|S'| \\leq |Vis(S)|$. For the subsequent\nproofs we will use the \\emph{extended} {\\bf LVS}\ncondition stating that for each $S'' \\prec S$,\n$|S''| \\leq |Vis(S)|$, which immediately follows\nfrom the combination of {\\bf LVS} and {\\bf SM}\nconditions.\n\\item {\\bf Distinct visible set (DVS) condition}\nFor each $S \\in {\\bf F}$ such that $S \\neq sm({\\bf F})$.\nThen $Vis(S) \\not\\subset S$.\n\\item {\\bf Efficient Computability (EC) condition}\nLet $n=|V|$. In $O(n^3)$ we can compute $sm({\\bf F})$ \nas well as the witness of $v$ w.r.t. $S$ for the\ngiven $S \\in {\\bf F}$ and $v \\in S$ (or return 'NO' in case such\nwitness does not exist). The relation $S_1 \\prec S_2$\ncan be tested in $O(|S_1|)$.\n\\end{itemize}\n\nIn the rest of this subsection we assume that $({\\bf F}, \\prec)$\nis an IS-family. Our reasoning consists of three stages.\nOn the first stage, we prove 3 propositions stating simple properties\nof an IS family. On the second stage we prove Theorem \\ref{MainLevelBound},\nour main counting result. The main body of the proof is provided in the\n3 preceding lemmas. On the last stage we prove an analogue of Theorem \\ref{MainResult}\nfor IS families: Corollary \\ref{VertexBound} proves the upper bound on the\nsize of the union of the respective sets and Theorem \\ref{runtime} establishes\nan algorithm for computing of these sets. 
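To make the definitions of $Pred(S)$ and $Vis(S)$ above concrete, the following small sketch evaluates them for a family given explicitly in memory. It is an illustration only; representing ${\\bf F}$ as a list of Python frozensets, encoding the order $\\prec$ as a Boolean predicate and the helper names are assumptions of the example.
\\begin{verbatim}
def pred(F, before, S):
    # Pred(S): immediate predecessors of S in the partial order,
    # where before(A, B) encodes "A precedes B".
    below = [T for T in F if before(T, S)]
    return [T for T in below
            if not any(before(T, U) and before(U, S) for U in below)]

def visible_set(F, before, S):
    # Vis(S): elements v lying in some immediate predecessor of S and
    # not covered by any immediate predecessor of S, where P covers v
    # if some member of F preceding P contains v while P does not.
    P_S = pred(F, before, S)
    candidates = set().union(*P_S) if P_S else set()
    def covered(v):
        return any(before(Sp, P) and v in Sp and v not in P
                   for P in P_S for Sp in F)
    return {v for v in candidates if not covered(v)}
\\end{verbatim}
With these two routines at hand, the set $hat(S)$ defined next is simply $S$ minus the union of all elements preceding $S$.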
\n\nFor $S \\in {\\bf F}$ let us denote $S \\setminus \\bigcup_{S' \\prec S} S'$ by $hat(S)$.\n\n\\begin{proposition} \\label{PropHat}\n$hat(S)=S \\setminus Vis(S)$.\n\\end{proposition}\n\n{\\bf Proof.}\nIt is clear from the definition that $hat(S) \\subseteq S \\setminus Vis(S)$.\nConversely, consider $v \\in (S \\setminus Vis(S)) \\setminus hat(S)$. What can we say\nabout such $v$? First, that $v \\in S$. Then, since $v \\in \\bigcup_{S' \\prec S} S' \\setminus Vis(S)$,\nthere may be two possibilities. According to one of them,\n$v \\in S' \\prec S$ such that $S' \\notin Pred(S)$ and $v$ does \nnot belong to any $S'' \\in Pred(S)$. It follows that there is $S'' \\in Pred(S)$ such that\n$S' \\prec S''$ and $v \\in S' \\setminus S''$. Since $S'' \\prec S$, $v \\notin S$ by the \n{\\bf TE} condition in contradiction to our assumption. The other possibility may be that\n$v \\in S' \\in Pred(S)$ and $v$ is covered by another $S'' \\in Pred(S)$. Then analogous reasoning applies.\nBy definition of a covered vertex, there is $S^* \\prec S''$ such that $v \\in S^* \\setminus S''$ and again\n$v \\notin S$ by the {\\bf TE} condition, yielding an analogous contradiction. $\\blacksquare$.\n\n\n\\begin{proposition} \\label{WitnessBasic}\nLet $S_1,S_2,v$ be such that $S_1 \\prec S_2$ and $v \\in S_1 \\setminus S_2$.\nLet $S^*$ be the witness of $v$ w.r.t. $S_1$. Then $S^* \\preceq S_2$.\n\\end{proposition}\n\n{\\bf Proof.}\nLet $S''$ be a minimal element of ${\\bf F}$ such that $S_1 \\prec S'' \\preceq S_2$\nand $v \\in S_1 \\setminus S''$. Then $S''$ is a witness of $v$ w.r.t. $S_1$.\nBy the {\\bf SW} condition, $S''=S^*$. $\\blacksquare$\n\n\\begin{proposition} \\label{WitnessAdvanced}\nLet $S \\in {\\bf F}$ and let $v \\in Vis(S) \\setminus S$. \nThen there is $S^* \\prec S$ such that $v \\in hat(S^*)$ and \n$S$ is the witness of $v$ w.r.t. $S^*$.\n\\end{proposition}\n\n{\\bf Proof.}\nLet $S^*$ be a minimal element of ${\\bf F}$ preceding $S$ such that $v \\in S^*$.\nThen $v \\in hat(S^*)$. Indeed, otherwise, there is $S'$ such that \n$v \\in S' \\prec S^* \\prec S$ in contradiction\nto the choice of $S^*$. Assume by contradiction that $S$ is \nnot the witness of $v$ w.r.t. $S^*$ and let $S''$ be this witness. \nAccording to Proposition \\ref{WitnessBasic}, $S'' \\prec S$. Let $S_2 \\in Pred(S)$ be such that\n$S'' \\preceq S_2$. Clearly $S^* \\prec S_2$. If $S''=S_2$ then $v \\in S^* \\setminus S_2$\nby definition of $S''$. Otherwise, $v \\in S^* \\setminus S_2$ by the {\\bf TE}\ncondition. It follows that $S_2$ covers $v$. Consequently, $v \\notin Vis(S)$, a contradiction\nproving that $S$ is indeed the witness of $v$ w.r.t. $S^*$. $\\blacksquare$\n\nFor $S \\in {\\bf F}$, let's call $|S|-|sm({\\bf F})|$, the \\emph{excess} of $S$\nand denote it $ex(S)$. \n\n\\begin{lemma} \\label{BasicBound}\nLet $S \\in {\\bf F}$ such that $S \\neq sm({\\bf F})$.\nFor $v \\in Vis(S) \\setminus S$,let $S(v)$ be such\nthat $v \\in hat(S(v))$ and $S$ is the witness of $v$ w.r.t. $S(v)$\n(the existence of such $S(v)$ follows from Proposition \\ref{WitnessAdvanced}).\nThen $|hat(S)| \\leq \\sum_{v \\in Vis(S) \\setminus S} 2^{ex(S)-ex(S(v))}$.\n\\end{lemma}\n\n{\\bf Proof.}\nIf $Vis(S)=S$ then by Proposition \\ref{PropHat}, $hat(S)=\\emptyset$ and\nwe are done. 
Otherwise, the {\\bf DVS} condition\nallows us to fix a $v^* \\in Vis(S) \\setminus S$ .\nLet us define a function $f$ on $V$ as follows:\n$f(v^*)=2^{ex(S)-ex(S(v^*))}$ and for $w \\neq v^*$, $f(w)=1$.\nFor $S \\subseteq V$, the function naturally extends to \n$f(S)=\\sum_{v \\in S} f(v)$.\n\n\\begin{claim}\n$|hat(S)| \\leq f(Vis(S) \\setminus S)$\n\\end{claim}\nObserve that\n$f(Vis(S))=|Vis(S)\\setminus \\{v^*\\}|+f(v^*)=|Vis(S)|+f(v^*)-1$.\nBy the extended {\\bf LVS} condition, the rightmost part \nof the above equality does not increase if we replace \n$Vis(S)$ by $S(v)$, i.e. $f(Vis(S)) \\geq |S(v)|+f(v^*)-1$. \nSince $ex(S)-ex(S(v)) \\geq 1$, by the {\\bf SM} condition,\n$f(v^*) \\geq ex(S)-ex(S(v))+1$. That is, $f(Vis(S)) \\geq \n|S(v)|+ex(S)-ex(S(v))=|S|$.\nFurthermore $f(Vis(S))=f(Vis(S)\\setminus S)+f(Vis(S) \\cap S)=f(Vis(S) \\setminus S)+|Vis(S) \\cap S|$.\nOn the other hand, $|S|=|S \\setminus Vis(S)|+|Vis(S) \\cap S|=|hat(S)|+|Vis(S) \\cap S|$, the last\nequality follows from Proposition \\ref{PropHat}.\nThus the desired claim follows by removal $|Vis(S) \\cap S|$ from the both sides of the \ninequality $f(Vis(S)) \\geq S$. $\\square$ \n\nObserve that due to the {\\bf SM} condition, for each $v \\in Vis(S) \\setminus S$,\n$2^{ex(S)-ex(S(v))} \\geq f(v)$, hence $f(Vis(S) \\setminus S) \\leq \\sum_{v \\in Vis(S) \\setminus S} 2^{ex(S)-ex(S(v))}$. \nTherefore the lemma follows from the above claim.\n$\\blacksquare$\n\nFor $x \\geq 0$, let ${\\bf E}_x$ be the subset of ${\\bf F}$ consisting\nof all the elements of excess at most $x$.\nLet $S \\in {\\bf E}_x$. The $x$-hat of $S$ denoted by $hat_x(S)$\nis a subset of $hat(S)$ consisting of all elements $v$ such that\nthere is no $S' \\in {\\bf E}_x$ such that $S \\prec S'$ and $v \\in S \\setminus S'$.\n\n\n\\begin{lemma} \\label{FirstLevelBound}\nFor any $x \\geq 0$ \\\\\n$\\sum_{S \\in {\\bf E}_x} 2^{x-ex(S)+1}*|hat_x(S) \\setminus hat_{x+1}(S)| \n\\geq \\sum_{S' \\in {\\bf E}_{x+1} \\setminus {\\bf E}_x} |hat(S')|$.\n\\end{lemma}\n\n{\\bf Proof.}\nDenote the elements of ${\\bf E}_x$ by $S_1, \\dots, S_m$.\nDenote $\\{(v,i)| 1 \\leq i \\leq m, v \\in hat_x(S_i) \\setminus hat_{x+1}(S_i)\\}$\nby $OS$. For each $(v,i) \\in OS$, let $aw(v,i)=2^{x-ex(S_i)+1}$.\nFor $OS' \\subseteq OS$, let \n$aw(OS')=\\sum_{(v,i) \\in OS'} aw(v,i)$. It is not hard to\nsee that the left part of the desired inequality is \n$aw(OS)$. Indeed, for the given $i$, \nif we sum up $aw(v,i)$ for all $(v,i) \\in OS$ \nthen the total amount will be exactly\n$2^{x-ex(S_i)+1}*|hat_x(S)|$.\n\nConsider $(v,i) \\in OS$. \nThen, since $v \\notin hat_{x+1}(S_i)$, there is $S' \\in {\\bf E}_{x+1} \\setminus {\\bf E}_x$\nsuch that $S_i \\prec S'$ and $v \\in S_i \\setminus S'$.\nWe claim that $S'$ is in fact the witness of $v$ w.r.t. $S_i$.\nIndeed, otherwise, according to Proposition \\ref{WitnessBasic}, $S'$ succeeds\nthe witness of $v$ w.r.t. $v$ hence, by the {\\bf SM} condition,\nthe size of the latter is at most $x$. However, this contradicts $v \\in hat_x(S_i)$.\nBy the {\\bf SW} condition, the above $S'$ is unique for $(v,i)$.\nSo, we can say that $(v,i)$ is \\emph{witnessed} by $S'$.\n\nDenote the elements of ${\\bf E}_{x+1} \\setminus {\\bf E}_x$ by $S'_1, \\dots, S'_q$.\nPartition $OS$ into $OS_1, \\dots, OS_q$ such that the elements of $OS_q$ are\nwitnessed by $S'_q$. To confirm the lemma, it remains to prove that, for the given $i$,\n$aw(OS_i) \\geq |hat(S'_i)|$. \n\nLet $v \\in Vis(S'_i)$. 
According to Proposition \\ref{WitnessAdvanced},\nthere is $S^* \\prec S'_i$ such that $v \\in hat(S^*)$ and $S'_i$\nis the witness of $v$ w.r.t. $S^*$. By the {\\bf SM}\ncondition $ex(S^*) \\leq x$, that is $S^* \\in {\\bf E}_x$. \nObserve that in fact $v \\in hat_x(S^*) \\setminus hat_{x+1}(S^*)$.\nIndeed, otherwise there is an element $S''$ of ${\\bf E}_x$\nsuch that $S^* \\prec S''$ and $v \\in S^* \\setminus S''$. But then\n$S \\prec S''$ by Proposition \\ref{WitnessBasic} in contradiction to the \n{\\bf SM} condition. Let $j(v)$ be such that $S^*=S_{j(v)}$.\nIt follows that $(v,j(v)) \\in OS_i$. Consequently, \n$aw(OS_i) \\geq \\sum_{v \\in Vis(S'_i)} 2^{x+1-ex(S_{j(v)})} \\geq \n\\sum_{v \\in Vis(S'_i)} 2^{ex(S'_i)-ex(S_{j(v)})} \\geq |hat(S'_i)|$,\nthe last inequality follows from Lemma \\ref{BasicBound}. \n$\\blacksquare$\n\n\nFor $x \\geq 0$, denote $\\sum_{S \\in {\\bf E}_x} 2^{x-ex(S)}|hat_x(S)|$\nby $M(x)$. Then the following statement takes\nplace.\n\n\\begin{lemma} \\label{SecondLevelBound}\nFor each $x \\geq 0$, $M(x+1) \\leq 2M(x)$.\n\\end{lemma}\n\n{\\bf Proof.}\nFirst of all, observe that for each $S \\in {\\bf E}_{x+1} \\setminus {\\bf E}_x$,\n$hat_{x+1}(S)=hat(S)$ just because, by the {\\bf SM} condition, \nthere is no $S' \\in {\\bf E}_{x+1}$ such that $S \\prec S'$. Furthermore, by definition,\nthe excess of $S$ is $x+1$. Therefore $|hat(S)|=2^{x+1-ex(S)}|hat_{x+1}(S)|$.\nThat is, we can rewrite the inequality of \nLemma \\ref{FirstLevelBound} as\\\\\n$\\sum_{S \\in {\\bf E}_x} 2^{x-ex(S)+1}*\n|hat_x(S) \\setminus hat_{x+1}(S)| \\geq \n \\sum_{S' \\in {\\bf E}_{x+1} \\setminus {\\bf E}_x} 2^{x+1-ex(S)}|hat_{x+1}(S')|$\n\nFurthermore, observe that for each $S \\in {\\bf E}_x$, $hat_{x+1}(S) \\subseteq hat_x(S)$,\ntherefore $hat_{x+1}(S)=hat_{x+1}(S) \\cap hat_x(S)$. Then we can safely add \n$\\sum_{S \\in {\\bf E}_x} 2^{x-ex(S)+1}* |hat_x(S) \\cap hat_{x+1}(S)|$ to the\nleft part of the inequality of the previous paragraph and \n$\\sum_{S \\in {\\bf E}_x} 2^{x-ex(S)+1}*|hat_{x+1}(S)|$ \nto the right part of this inequality. Then after noticing that for each $S \\in {\\bf E}_x$,\n$|hat_x(S) \\cap hat_{x+1}(S)|+|hat_x(S) \\setminus hat_{x+1}(S)|=|hat_x(S)|$ and that the\nright part in fact explores $|hat_{x+1}(S')|$ for all elements $S' \\in {\\bf E}_{x+1}$,\nthe resulting inequality is transformed into:\\\\\n$\\sum_{S \\in {\\bf E}_x} 2^{x-ex(S)+1}* |hat_x(S)| \\geq\n \\sum_{S' \\in {\\bf E}_{x+1}} 2^{x-ex(S')+1}*|hat_{x+1}(S')|$.\nIt remains to notice that the left part of this inequality is $2M(x)$\nand the right part is $M(x+1)$. $\\blacksquare$\n\nNow we are ready to state the main counting result.\n\n\\begin{theorem} \\label{MainLevelBound}\nFor each $x \\geq 0$, $M(x) \\leq 2^x|sm({\\bf F})|$.\n\\end{theorem}\n\n{\\bf Proof.}\nApplying inductively Lemma \\ref{SecondLevelBound}, it is easy to see\nthat $M(x) \\leq 2^xM(0)$. By definition,\n$M(0)=\\sum_{S \\in {\\bf E}_0} 2^{0-ex(S)} |hat_0(S)|$. Since the only\nelement of ${\\bf E}_0$ is $sm({\\bf F})$ whose excess is $0$ and\n$hat_0(sm({\\bf F}))=sm({\\bf F})$, the theorem follows. 
$\\blacksquare$\n\nThe following corollary is the first statement of\nTheorem \\ref{MainResult} in terms of an IS family\n\n\\begin{corollary} \\label{VertexBound}\n$|\\bigcup_{S \\in {\\bf E}_x} S| \\leq 2^{x+1}|sm({\\bf F})|$.\n\\end{corollary}\n\n{\\bf Proof.}\nObserve that $\\bigcup_{S \\in {\\bf E}_x} S=\nsm({\\bf F}) \\cup \\bigcup_{i=1}^x \\bigcup_{S' \\in {\\bf E}_i \\setminus {\\bf E}_{i-1}} hat_i(S')$.\nIndeed, by definition, the left set is clearly a superset of the right one, so let $v$\nbe a vertex of the left set. If $v \\in sm({\\bf F})$ then the containment in the right\nset is clear. Otherwise, let $S^* \\in {\\bf E}_x$ be a minimal set containing $v$ and let $j>0$ be\nthe excess of $S^*$. Then, by definition of sets ${\\bf E}_i$, \n$S^* \\in {\\bf E}_j \\setminus {\\bf E}_{j-1}$. \nFrom the minimality of $S^*$ subject to the containment of $v$, \nit follows that $v \\in hat(S^*)$. Furthermore, by the {\\bf SM} condition,\nthere is no $S'' \\in {\\bf E}_j$ such that $S^* \\prec S''$. This implies that\n$v \\in hat_j(S^*)$, confirming the observation. \n\nIt follows from this equality that $|\\bigcup_{S \\in {\\bf E}_x} S| $ is upper-bounded\nby $|sm({\\bf F})|+\\sum_{i=1}^x \\sum_{S \\in {\\bf E}_i \\setminus {\\bf E}_{i-1}} |hat_i(S)| \\leq M_0+\\sum_{i=1}^x M_i \\leq \n\\sum_{i=0}^x M_i$. According to Theorem \\ref{MainLevelBound}, the rightmost item of the above inequality is\nclearly upperbounded by $2^{x+1}|sm({\\bf F})|$, hence the corollary follows. $\\blacksquare$\n\nTo prove the second statement of Theorem \\ref{MainResult}, we need to compute $\\bigcup_{S \\in {\\bf E}_x} S$.\nWe obtain the required algorithm in four simple steps. First we introduce the notion\nof \\emph{principal sets} of ${\\bf F}$, then we show that the union of principal sets of excess at most\n$x$ in fact includes all the vertices of $\\bigcup_{S \\in {\\bf E}_x} S$. Furthermore, we show that \nthe number of principal sets can be upper bounded by $2^{x+1}|sm({\\bf F})|$. Finally, we show that\nsubject to {\\bf EC} condition, these principal\nsets can be computed in time polynomial in their bound and in $n=|V|$. (Recall\nthat $V$ is the universe of for the sets of ${\\bf F}$).\n\nWe say that a set $S \\in {\\bf F}$ is \\emph{principal} if $hat(S) \\neq \\emptyset$.\nDenote by ${\\bf Pr}_x$ the family of all principal sets of excess at most $x$.\nBy definition, $\\bigcup_{S \\in {\\bf Pr}_x} \\subseteq \\bigcup_{S \\in {\\bf E}_x}$.\nFor the other direction, let $v \\in \\bigcup_{S \\in {\\bf E}_x}$. Then, arguing as\nin the proof of Corollary \\ref{VertexBound}, we observe the existence of $S^*$\nof excess at most $x$ such that $v \\in hat(S^*)$. Clearly $S^* \\in {\\bf Pr}_x$.\nThus we have established the following proposition.\n\n\\begin{proposition} \\label{TheSame}\n$\\bigcup_{S \\in {\\bf Pr}_x} S = \\bigcup_{S \\in {\\bf E}_x} S$\n\\end{proposition}\n\n\\begin{proposition} \\label{PrBound}\nFor each $x \\geq 0$, $|{\\bf Pr}_x| \\leq 2^{x+1}|sm({\\bf F})|$. \n\\end{proposition}\n\n{\\bf Proof.}\nBy definition, the number of elements of $|{\\bf Pr}_x|$\nis upper-bounded by the sum of the sizes of their hats, which\nin turn, is bounded by the sum of sizes of hats of all elements\nof ${\\bf E}_x$. 
Taking into account that for each\n$1 \\leq i \\leq x$ and for each $S \\in {\\bf E}_i \\setminus {\\bf E}_{i-1}$,\n$hat(S)=hat_i(S)$ (argue as in the proof of Corollary \\ref{VertexBound}), \nour upper bound can be represented as\n$|sm({\\bf F})|+ \\sum_{i>1}^x \\sum_{S \\in {\\bf E}_i \\setminus {\\bf E}_{i-1}} |hat_i(S)|$.\nNow, apply the second paragraph of the proof of Corollary \\ref{VertexBound}. $\\blacksquare$\n\n\n\\begin{theorem} \\label{runtime}\n${\\bf Pr}_x$ can be computed in time $O(n^32^{2x}r^2(r+x)^2)$\nwhere $r=|sm({\\bf F})|$.\n\\end{theorem}\n\n{\\bf Proof sketch.}\nThe algorithm works iteratively. First it computes ${\\bf Pr}_0$.\nFor each $i>0$, it computes ${\\bf Pr}_i$ based on ${\\bf Pr}_{i-1}$.\nSince ${\\bf Pr}_0=\\{sm({\\bf F})\\}$, for $i=0$, the result directly follows from\nthe {\\bf EC} condition. Now consider computing of ${\\bf Pr}_i$ \nfor $i>0$ assuming that ${\\bf Pr}_{i-1}$ have been computed. \n\nThe algorithm explores all the elements \nof ${\\bf Pr}_{i-1}$ and for each such element $S$ and for each \n$v \\in S$, applies the witness computation algorithm \nof the {\\bf EC} condition. If the witness\n$S'$ of $S$ has been returned, $S'$ joins ${\\bf Pr}_i$ if \n$ex(S')=i$, $S'$ has not been already generated and the union\nof elements of ${\\bf Pr}_i$ preceding $S'$ is not a superset of $S$.\nIn the rest of the proof, postponed to the appendix, \nwe prove correctness and the runtime of this algorithm. \n$\\blacksquare$\n\n\n\\subsection{Back to Important Separators.}\n\\begin{lemma} \\label{conditions}\nThe family of all important $X-Y$ separators of graph $G$\npartially ordered by the $\\prec^*$ relation is an IS-family.\n\\end{lemma}\n\n{\\bf Proof sketch.} \nThe {\\bf SE} condition is established by Lemma 3.3. of \\cite{MarxTCS}.\nThe {\\bf SM} condition immediately follows from the definition of an\nimportant separator. For the {\\bf SW} condition, let $K$ be an\nimportant $X-Y$ separator and let $v \\in K$. Assume that a witness\nof $v$ w.r.t. $K$ exists. Replace $NR(G,Y,K)$\nby a single vertex $x$ and split $v$ into $n+1$ copies. Let $G^*$\nbe the resulting graph. We prove that there is a bijection between the\nwitnesses of $v$ w.r.t. $K$ and smallest important $x-Y$ separators\nof $G^*$ and apply to $G^*$ the {\\bf SE} condition.\nFor the {\\bf TE} condition, we observe \n(e.g. Proposition 1 of \\cite{REX}), that if $K_1 \\prec^* K_2$\nthen $K_1 \\setminus K_2 \\subseteq NR(G,Y,K_2)$. Thus if $K_2 \\prec^* K_3$,\n$K_1 \\setminus K_2 \\subseteq NR(G,Y,K_2) \\subseteq NR(G,Y,K_3)$, the last \ninclusion is obtained by definition of the $\\prec^*$ relation. Thus, no\nvertex of $K_1 \\setminus K_2$ can belong to $K_3$. For the visible set conditions,\nwe first prove that if $K$ is an important $X-Y$ separator different from\nthe smallest one then for each $K' \\in Pred(K)$, $K^*=Vis(K) \\setminus NR(G,Y,K')$ is also\nan $X-Y$ separator such that $K' \\preceq^* K^*$.\nThe {\\bf LVS} and {\\bf DVS} conditions will immediately follow from this claim\ncombined with the definition of an important separator. \nThe $O(n^3)$ algorithm for computing $sm({\\bf F})$, as required by the {\\bf EC}\ncondition follows from Lemma 1 in \\cite{REX}. As shown in the proof of the\n{\\bf SW} condition, computing of a witness is essentially equivalent to\ncomputing of an important separator. Finally the fast testing of $K_1 \\prec^* K_2$\nis easy to establish by maintaining an important separator in an appropriate\ndata structure. 
$\\blacksquare$\n\n{\\bf Proof of Theorem \\ref{MainResult}}\nThe theorem immediately follows from combination of\nCorollary \\ref{VertexBound}, Theorem \\ref{runtime}, and \nLemma \\ref{conditions}.\n$\\blacksquare$\n\n\\subsection{Lower bounds and possibilities for further improvement}\nWe start with showing that the obtained upper bound on the number\nof vertices involved in important separators of size at most $x$\nis quite tight.\n\n\\begin{theorem} \\label{LowerBound}\nFor each $x$ and $r$ there is a graph $H$ with two specified terminals $s$\nand $t$ such that the size of the smallest $s-t$ separator is $r$ and\nthe size of the union of all important separators of excess at most $x$\nis $2^{x+1}r-r$.\n\\end{theorem}\n\n{\\bf Proof.} Take $r$ complete rooted binary trees of height $x$ with $2^x$ leaves\n(of course, replace arcs by undirected edges).\nAdd two new vertices $s$ and $t$. Connect $s$ to the roots of the trees\nand $t$ to all the leaves. This is the resulting graph $H$ for the given\n$x$ and $r$. It is not hard to see that any minimal $s-t$ separator of this graph \nis an important one. It only remains to show that each non-terminal vertex\nparticipates in a $s-t$ separator of excess at most $x$. In fact, we can show that\nany vertex $v$ whose depth in the respective binary tree is $i$ participates in\na separator of excess $i$. We compute such separator by obtaining a sequence\n$S_1, \\dots, S_i$ of separators, where $S_i$ is the desired separator.\n$S_1$ is just the set of neighbours of $s$. To obtain $S_{j+1}$ from $S_j$, we specify\nthe unique $u \\in S_j$ such that $u$ is the ancestor of $v$ (the uniqueness easily\nfollows by induction) and replace it by its children. The correctness of this construction\ncan be easily established by induction on the constructed sequence of separators,\nwe omit the tedious details. $\\blacksquare$\n\n\nIn the previous subsection we introduced the notion of a principal set of an \nIS-family. The corresponding notion of a principal important separator $K$\nmeans that $K \\setminus \\bigcup_{K' \\prec^* K} K' \\neq \\emptyset$.\nProposition \\ref{PrBound} along with Lemma \\ref{conditions} implies that the number of \nprincipal important $X-Y$ separators of excess $x$ is at most $2^{x+1}r$ where $r$ is\nthe size of the smallest important $X-Y$ separator and the class of graphs considered \nin Theorem \\ref{LowerBound} shows that this bound is tight. On the other hand,\nthe number of principal important separators in this class of graphs is \\emph{linear}\nin the overall number $n$ of vertices. This leads us to the following question\n\n\\begin{openquestion}\nIs the number of principal important $X-Y$ separators of the given \ngraph $G$ bounded by a polynomial of $|V(G)|$?\n\\end{openquestion}\n\nFirst of all observe that this question is reasonable because the\nnumber of principal separators is generally much smaller than the overall\nnumber of important separators. Indeed, in the class of instances considered\nin Theorem \\ref{LowerBound}, the overall number of important separators is\nexponential in $n$ (consider the important separators including leaves \nof the binary trees). 
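The extremal family used in the proof of Theorem \\ref{LowerBound} is easy to generate explicitly. The following sketch builds the graph $H$ for given $r$ and $x$; it is an illustration only, and Python with the networkx package is assumed.
\\begin{verbatim}
import networkx as nx

def lower_bound_instance(r, x):
    # r complete binary trees of height x; a terminal s joined to all
    # roots and a terminal t joined to all leaves.  The smallest s-t
    # separator consists of the r roots, and every non-terminal vertex
    # participates in an important s-t separator of excess at most x.
    H = nx.Graph()
    for i in range(r):
        H.add_edge("s", (i, 0, 0))
        for depth in range(x):
            for pos in range(2 ** depth):
                H.add_edge((i, depth, pos), (i, depth + 1, 2 * pos))
                H.add_edge((i, depth, pos), (i, depth + 1, 2 * pos + 1))
        for pos in range(2 ** x):
            H.add_edge((i, x, pos), "t")
    return H
\\end{verbatim}
Each tree contributes $2^{x+1}-1$ non-terminal vertices, so their union has size $2^{x+1}r-r$, matching the bound of Theorem \\ref{LowerBound}.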
\n\nTo see the significance of this open question, suppose that the answer is \\emph{yes}.\nThen the algorithm claimed in Theorem \\ref{MainResult} runs in a polynomial time.\nIndeed, its exponential runtime is caused by the fact that the algorithm explores\nall pairs of principal important separators, so, replacing the upper bound has an\nimmediate effect on the runtime. Such poly-time algorithm would mean that it is\npossible to test in a polynomial time \\emph{whether the given vertex belongs to an important\nseparator}, which is itself quite an interesting achievement. Moreover, the whole \npreprocessing algorithm for the \\textsc{mwc} problem proposed in this paper will have\na polynomial time. This means that the output of this algorithm can be used\nfor the \\emph{further} preprocessing, potentially making easier the unconditional kernelization\nof the \\textsc{mwc} problem.\n\n\\section{Preprocessing of multiway cut}\nLet $(G,T)$ be an instance of the \\textsc{mwc} problem.\nAn isolating cut of $t \\in T$ is a $t-T \\setminus \\{t\\}$\nseparator. If \nsuch separator is important, we call it \\emph{important isolating\ncut} of $t$.\n\nWe start from a proposition that allows us to harness\nthe machinery of important separators for the \npreprocessing of the \\textsc{mwc} problem. The proposition\nis easily established by iterative application the argument\nof Lemma 3.6 of \\cite{MarxTCS}.\n\n\\begin{lemma} \\label{OnlyImportant}\nLet $(G,T)$ be an instance of the \\textsc{mwc} problem.\nThen there is a smallest \\textsc{mwc} $S$ of $(G,T)$ such that\neach $v \\in S$ belongs to an important isolating cut of some $t \\in T$.\n\\end{lemma} $\\blacksquare$\n\nWith Lemma \\ref{OnlyImportant} in mind, we can use the algorithm claimed\nin Theorem \\ref{MainResult} for the preprocessing. In particular,\nfor each $t \\in T$, let $r_t$ be the size of the smallest isolating cut.\nCompute the set of all vertices participating in the important isolating\ncuts of $t$. Let $V^*$ be the set of all the computed vertices together\nwith the terminals. Let $G^*$ be the graph obtained from $G[V^*]$ by making\nadjacent all non-adjacent $u,v$ such that $G$ has a $u-v$ path with all intermediate \nvertices lying outside $V^*$. It is not hard to infer from Lemma \\ref{OnlyImportant}\nthat the size of the optimal solution of $(G^*,T)$ is the same as of $(G,T)$.\nAccording to Theorem \\ref{MainResult}, the number of vertices of \n$G^*$ is at most $|T|(2^{k-r}r+1)$ where $r=min_t \\in T r_t$ \nand $1$ is added on the account of terminals. This bound is not good in the sense\nthat $|T|$ may be not bounded by $k$ at all. Therefore \n\\emph{prior} to computing the union of important separators, we reduce the number\nof terminals. This is possible due to the following theorem.\n\n\\begin{theorem} \\label{SqueezeTerminals}\nThere is a polynomial-time algorithm that transforms the instance\n$(G,T,k)$ of the \\textsc{mwc} problem into an equivalent instance\n$(G',T',k')$ such that $k' \\leq k$ and $|T'| \\leq 2k'(k'+1)$.\nThen runtime of this algorithm is the same as the runtime of \nthe fixed-ratio approximation algorithm for the \\textsc{mwc} \nproblem \\cite{Gargapprox} \\footnote{This algorithm is based on\nsolving a linear program.}.\n\\end{theorem}\n\n{\\bf Proof.}\nWe start from observation that if $u$ is a non-terminal vertex\nsuch that there are $k+2$ terminals connected to $u$ by paths\nintersecting only at $u$ then $u$ participates in any \n\\textsc{mwc} of $(G,T)$ of size at most $k$. 
Indeed, removal\nof a set of at most $k$ vertices not containing $u$ would leave\nat least 2 of these paths undestroyed and hence the corresponding\nterminals would be connected. An immediate consequence of this observation\nis that if $S$ is a \\textsc{mwc} of $G$ and there is $v \\in S$ adjacent\nto at least $k+2$ components of $G \\setminus S$ containing terminals\nthen this vertex participates in any \\textsc{mwc} of $G$ of size at\nmost $k$.\n\nHaving the above in mind, we apply the ratio $2$ approximation\nalgorithm for the \\textsc{mwc} problem proposed in \\cite{Gargapprox}. \\footnote{In fact,\nthe approximation ratio of this algorithm is $2-2\/|T|$, but ratio $2$ is sufficient\nfor our purpose.}\nIf the resulting \\textsc{mwc} is of size greater than $2k$, the\nalgorithm simply returns 'NO'. Otherwise, let $S$ be the resulting\n\\textsc{mwc}. If $|T|>|S|(k+1)$ then, taking into \naccount that each component is adjacent to at least one vertex of $S$,\nit follows from the pigeonhole principle that\nat least one vertex of $S$ is adjacent to at least $k+2$ components\nof $G \\setminus S$ containing terminals. \nRemove $v$ and remove isolated components of $G \\setminus \\{v\\}$\n(i.e. those that contain at most one terminal), decrease the parameter\nby $1$ and recursively apply the same operation to the new data. \nEventually, one of three possible situations\noccur. First, after removal of $k$ or less vertices, the resulting graph\nhas no terminals. In this case we have just found the desired \\textsc{mwc}\nof $(G,T)$ in a polynomial time. Second, after removal of $k$ vertices, \nthere are still terminals, not separated by the removed vertices. In this\ncase, again in a polynomial time, we have found that $(G,T)$ has no \\textsc{mwc}\nof size at most $k$. Finally, it may happen that after removal of some $S' \\subseteq S$\nof size at most $k$, the number of terminals in the remaining graph is at most $|S \\setminus S'|(k-|S'|+1)$.\nThen the resulting graph is returned as the output of the preprocessing. $\\blacksquare$\n\nThus, Theorem \\ref{SqueezeTerminals} together with Theorem \\ref{MainResult} and\nLemma \\ref{OnlyImportant} lead to the following result.\n\n\\begin{corollary}\nThere is an algorithm that for an instance $(G,T,k)$ of the \\textsc{mwc} problem\nfinds an equivalent instance of $O(k^2r2^{k-r})$ vertices in time \n$O(A(n)+2^{k-r}n^3r^2k^4)$ where $r$ is the smallest isolating cut and $A(n)$\nis the time complexity of the approximation algorithm proposed in \\cite{Gargapprox}.\nIn particular, if $k-r=c*log(k)$ for any fixed $c$ \nthen the \\textsc{mwc} problem is polynomially kernelizable.\n\\end{corollary}\n\nThe output of the above algorithm is much richer than just another\ninstance of the \\textsc{mwc} problem. Indeed, for each terminal, the \nalgorithm in fact computes all principal important isolating cuts.\nThis leads to the follows interesting question.\n\\begin{openquestion}\nIs there an algorithm that gets the above output as input and, \nin time polynomial in $n$ and the number of the principal isolating cuts,\nproduces an equivalent instance of the \\textsc{mwc} problem of size \npolynomial in $k$?\n\\end{openquestion}\n\nObserve that if Open Questions 1 and 2 are answered affirmatively then,\ntogether with Proposition \\ref{TheSame}, Theorem \\ref{runtime}, and Lemma \\ref{conditions}, \nthey imply an unconditional polynomial kernelization of the \\textsc{mwc} problem. 
Moreover, we believe \nthat investigation of Open Question 2 would give a significant insight into \nthe structure of the \\textsc{mwc} problem. \nIndeed, it would reveal whether or not we can 'filter' out, in a reasonable time, \nsome principal isolating cuts, which in turn would require proofs of some\ninteresting structural dependencies related to the \\textsc{mwc} problem.\n\n\\section{Introduction}\nCooperative control of multiple autonomous underwater vehicles (AUVs) has been drawing increasing attention in ocean exploration, where sub-meter resolution scans of vast ranges of the sea floor are desired \\cite{8604772}, \\cite{Rego2014}. The successful implementation of a cooperative control strategy depends on reliable communications. However, the marine environment imposes strict constraints on the communication range and data-rate \\cite{Fallon2010}. In practice, acoustic modems only have communication ranges between 0.1 km and 5 km and data-rates between 0.1 kbps and 15 kbps \\cite{Sendra2016}. Consequently, these communication constraints lead to inaccurate information exchange that may cause deteriorated control performance \\cite{Wen2019}. \n\nModel Predictive Control (MPC) is a popular tool for motion control of autonomous systems. It can optimize multiple control specifications while taking into account, for example, state and input constraints, collision avoidance and energy consumption \\cite{Yang2019}. For controlling a multi-AUV system with a surfacing unit, a centralized MPC can be used by solving the resulting optimization problem in the surfacing unit, which gathers global information \\cite{Fallon2009}. However, for a multi-AUV system operating in deep water, long-range information exchange is less reliable, sometimes impossible. As the system gets bigger and more complex, solving an optimal control problem in a centralized way becomes difficult, since it requires full communication to collect information from each subsystem, and enough computational power on one central entity to solve the global optimization problem. A promising concept to avoid these problems is to use the distributed MPC (DMPC) technique, requiring only neighbour-to-neighbour communication, to solve network-level control problems. Recent results have shown the great potential of DMPC for motion control of multi-agent systems, such as circular path-following \\cite{Hu2019}, circular formation control \\cite{Wen2019}, and plug-and-play maneuvers in platooning \\cite{Hu2009}. \n\nSuccessful implementations of DMPC require solving distributed optimization problems in a real-time manner. However, due to the communication constraints and limitations in the AUV application, distributed optimization algorithms may suffer from noisy iterations. Inexact distributed optimization algorithms can potentially deal with the errors resulting from noisy iterations caused by unreliable or limited communication. Some works can be found in \\cite{Magnusson2018}, \\cite{Doan2020}, \\cite{Q2016}, \\cite{Devolder2014} and \\cite{Nedelcu2014}. In \\cite{Puy} and \\cite{7402506}, the authors proposed an iteratively refining quantization design for distributed optimization and showed complexity upper-bounds on the number of iterations to achieve a given accuracy. \n\nIn this paper, we aim to extend the quantized optimization algorithm proposed in \\cite{7402506} to a real-time DMPC framework for multi-AUV systems with limited communication data-rates.
The contributions of this paper can be summarized as \n\\begin{itemize}\n \\item Based on \\cite{7402506}, we first present an novel approach to obtain an optimal quantization design to achieve the best sub-optimality subject to a limited communication data-rate. We establish a relationship between the quantization design and control performance.\n \\item We propose a real-time DMPC framework based on a distributed optimization algorithm using the optimal quantization design. We further study the closed-loop properties of the system with the proposed approach.\n \\item We apply the proposed approach to a case study of three AUVs with a real-time constraint on communication data-rates. The simulation results show the effectiveness of the DMPC with the optimal quantization design.\n\\end{itemize}\n\\section{Preliminaries}\\label{section:prem}\n\\subsection{Notations}\n\nThroughout this paper, we use the superscript $\\star$ to indicate a variable as an optimal solution to an optimization problem. We use $k$ to denote the iteration step in an optimization algorithm, $l$ to denote a prediction step in an MPC problem, and $t$ to denote the time step of the closed-loop system. For a vector $x$, we use $\\|x\\|$ and $\\|x\\|_Q$ to denote the 2-norm and the weighted 2-norm, respectively. We use $ \\operatorname{diag}(x)$ to indicate a diagonal matrix with the elements in diagonal induced by a vector $x$. We use $\\operatorname{blkdiag}(X_1,\\ldots,X_n)$ to indicate a block-diagonal matrix with elements in diagonal induced by matrices $X_1,\\ldots,X_n$. We use $I$ to denote an identity matrix of appropriate dimension. Consider a distributed problem solved in a network of $M$ agents. The agents communicate according to a fixed undirected graph $G=(\\mathcal{V}, \\mathcal{E})$. The agents distribute according to the vertex set $\\mathcal{V}=\\{1, \\cdots, M\\}$ and exchange information according to the edge set $\\mathcal{E} \\subseteq \\mathcal{V} \\times \\mathcal{V}$. If $(i, j) \\in \\mathcal{E}$, then agent $i$ is said to a neighbour of agent $j$ and $\\mathcal{N}_{i}=\\{j \\mid(i, j) \\in \\mathcal{E}\\}$ denotes the set of the neighbours of agent $i$. For a real number $z$, a uniform quantizer with quantization step-size $\\Delta$ and mid-value $\\bar{z}$ is defined b\n\\begin{equation}\n Q(z)=\\bar{z}+\\operatorname{sgn}(z-\\bar{z}) \\cdot \\Delta \\cdot\\left\\lfloor\\frac{\\|z-\\bar{z}\\|}{\\Delta}+\\frac{1}{2}\\right\\rfloor,\n\\end{equation}\nwhere $\\operatorname{sgn}(z-\\bar{z})$ denotes a sign function and $\\Delta=\\frac{l}{2^{n}}$. The parameters $l$ and $n$ represent the length of the quantization interval and the number of bits sent through a quantizer at each iteration, respectively. The quantization interval is set as $\\left[\\bar{z}-\\frac{l}{2}, \\bar{z}+\\frac{l}{2}\\right]$. Note that if $z$ falls inside the quantization interval, then the quantization error satisfies $|z-Q(z)| \\leq \\frac{\\Delta}{2}=\\frac{l}{2^{n+1}}$.\n\\subsection{Parametric Distributed Optimization Problem}\n\nConsider the following parametric distributed optimization problem $\\mathbb{P}^p(\\zeta^t)$:\n\\begin{subequations}\\label{problem:PDproblem}\n \\begin{align}\n &\\min _{z, z_{\\mathcal{N}_{i}}} f\\left(z, \\zeta^{t}\\right)=\\sum_{i=1}^{M} f_{i}\\left(z_{\\mathcal{N}_{i}}, \\zeta_{i}^{t}\\right), \\\\\n \\text{s.t. 
} & z_{i} \\in \\mathbb{C}_{i},\\; z_{i}=F_{j i} z_{\\mathcal{N}_{j}}, j \\in \\mathcal{N}_{i}, \\\\\n & z_{\\mathcal{N}_{i}}=E_{i} z,\\; i=1,2, \\cdots, M, \n \\end{align}\n\\end{subequations}\nwhere $z_i$ denotes the local variable, $z_{\\mathcal{N}_{i}}$ denotes the concentration of the local variable $z_j$ where $j\\in \\mathcal{N}_{i}$ and $z=\\left[z_{1}, \\cdots, z_{M}\\right]^{\\top}$ denotes the global variable. The matrix $E_i$ selects $z_{\\mathcal{N}_{i}}$ from the global variable $z$. The matrix $F_{ji}$ selects the local variable $z_i$ from $z_{\\mathcal{N}_j}$. Furthermore, the local constraints are $z_{i} \\in \\mathbb{C}_{i} \\subseteq \\mathbb{R}^{{m_{i}}}$, where $m_i$ is the size of the local variable vector and $\\mathbb{C}_i$ is a convex set, for $i=1,\\cdots, M$. $\\zeta^t_i$ is a time-varying parameter that does not change the convexity of the problem. \n\\begin{assumption} \\label{ap1}\nThe local cost function $f_i(\\cdot)$ has a Lipshitz continuous gradient with respect to a Lipshitz constant $L_i$. The global cost function $f(\\cdot)$ is strongly convex with a convexity modulus $\\sigma_{f}$.\n\\end{assumption}\n\nNote that if Assumption \\ref{ap1} holds, then the global cost function $f(\\cdot)$ has a Lipschitz continuous gradient with the Lipschitz constant $L_{max}$, where $L_{max}:=\\max_{1<=i<=M} L_i$. \n\n\\subsection{Parametric Distributed Optimization Algorithm with Warm-starting and Progressive Quantization Refinement}\n\n\\begin{algorithm}[h]\n\\textbf{Require:} Give $ z^{0,0}$, $K$, $C_{\\alpha}$ and $C_{\\beta}$, $(1-\\gamma)<\\kappa<1$ where $\\gamma = \\frac{\\sigma_f}{L}$ and $ \\tau<\\frac{1}{L}$. \\\\\n\\textbf{\\textbf{for}} $t = 0,1,\\cdots$ \\textbf{\\textup{do}} \\\\\n 1. Initialize $C_{\\alpha}^{t}=C_{\\alpha}$, $C_{\\beta}^{t}=C_{\\beta}$, $ \\hat{z}_{i}^{t,-1}=z^{t,0}_i$ and $\\hat{\\nabla}f_{i}^{t,-1}=\\nabla f_{i}(\\operatorname{Proj}_{\\mathbb{C}_{\\mathcal{N}_{i}}}(z_{\\mathcal{N}_{i}}^{t,0}))$; \\\\\n\\textbf{\\textup{for}} $k = 0,1,\\cdots,K$ \\textbf{\\textup{do}} \\\\\n\\textbf{\\textup{for}} $i = 1,\\cdots,M$ \\textbf{\\textup{do in parallel}} \\\\\n2. Update quantizer $Q_{\\alpha, i}^{t,k}: l_{\\alpha, i}^{t,k}=C^{t}_{\\alpha} \\kappa^{k}$ and $\\bar{z}_{\\alpha, i}^{t,k}=\\hat{z}_{i}^{t,k-1}$;\\\\\n3. Quantize local variable $\\hat{z}_{i}^{t,k}=Q_{\\alpha, i}^{t,k}\\left(z_{i}^{t,k}\\right)$; \\\\\n4. Send $\\hat{z}^{t,k}_i$ to agent $j$ for all $j\\in\\mathcal{N}_i$; \\\\\n5. Compute the projection: $ \\tilde{z}_{\\mathcal{N}_{i}}^{t, k}=\\operatorname{Proj}_{\\mathbb{C}_{\\mathcal{N}_{i}}}\\left(\\hat{z}_{\\mathcal{N}_{i}}^{t,k}\\right)$; \\\\\n6. Compute $\\nabla f^{t,k}_i = \\nabla f \\left(\\tilde{z}_{\\mathcal{N}_{i}}^{t, k}\n\\right)$; \\\\\n7. Update quantizer $Q_{\\beta, i}^{t,k}: l_{\\beta, i}^{t,k}=C_{\\beta}^t \\kappa^{k}$ and $\\bar{\\nabla} f_{\\beta, i}^{t,k}=\\hat{\\nabla} f_{i}^{t,k-1}$;\\\\\n8. Quantize local gradient $\\hat{\\nabla} f_{i}^{t,k}=Q_{\\beta, i}^{t,k}\\left(\\nabla f_{i}^{k}\\right)$;\\\\\n9. Send $\\hat{\\nabla} f^{t,k}_i$ to agent $j$ for all $j\\in\\mathcal{N}_i$;\\\\\n10. $z^{t,k+1}_i = \\mathrm{Proj}_{\\mathbb{C}_{i}} \\left(z^{t,k}_i - \\tau \\sum_{j \\in \\mathcal{N}_{i}} F_{j i} \\hat{\\nabla} f_{j}^{t,k} \\right)$; \\\\\n\\textbf{\\textup{end}} \\textbf{\\textup{end}}\\\\\n11. Warm-start update: $z_i^{t+1,0} = z^{t,K+1}_i$ for $i=1,\\cdots, M$;\\\\\n12. 
\\textbf{\\textup{Return: }} $z^{t,K+1}_i$ for $i=1,\\cdots, M$.\\\\\n\\textbf{\\textup{end}}\n\\caption{Parametric Distributed Optimization with Warm-starting and Progressive Quantization Refinement}\n\\end{algorithm}\n\nWe now introduce Algorithm 1 to solve $\\mathbb{P}^p(\\zeta^t)$. In Algorithm 1, $Q_{\\alpha, i}^{t,k}$ and $Q_{\\beta, i}^{t,k}$ are two uniform quantizers. The subscripts $\\alpha$ and $\\beta$ indicate that they are used to quantize local variables and local gradients, respectively. The quantizers are refined at each iteration by shrinking the size of their quantization intervals according to $ l_{\\alpha, i}^{t, k}= C^{t}_{\\alpha} \\kappa^{k}; l_{\\beta, i}^{t,k}= C^t_{\\beta} \\kappa^{k}$, where $C^t_\\alpha$ and $C^t_\\beta$ are the initial quantization intervals and $\\kappa$ is a shrinkage constant. The mid-values of the quantizers are updated according to $\\bar{z}_{\\alpha, i}^{t,k}=\\hat{z}_{i}^{t,k-1}$ and $\\bar{\\nabla} f_{\\beta, i}^{t,k}=\\hat{\\nabla} f_{i}^{t,k-1}$. The quantized values are designated by $\\hat{\\cdot}$ while the outputs of the projection steps are designated by $\\tilde{\\cdot}$. The operation $\\operatorname{Proj}_{\\mathbb{C}}(v):=\\operatorname{argmin}_{\\mu \\in \\mathbb{C}}\\|\\mu-v\\|$ represents the projection of any point $v \\in \\mathbb{R}^{n_{v}}$ on the set $\\mathbb{C}$.\n\n\\subsection{Complexity Upper-bound for Algorithm 1}\n\nThere exists an upper bound on the sub-optimality of the solutions given by Algorithm 1 at all time steps if the following assumptions are made.\n\n\\begin{assumption}\\label{ap3}\nFor all $t \\geq 0$, the solutions to $\\mathbb{P}^p(\\zeta^t)$ satisfy\n\\begin{equation} \\label{solution upper bound}\n \\left\\|z^{\\star}\\left(\\zeta^{t}\\right)-z^{\\star}\\left(\\zeta^{t+1}\\right)\\right\\| \\leq \\rho. \n\\end{equation}\n\\end{assumption}\n\n\\begin{assumption}\\label{ap4}\nThe initial solution to $\\mathbb{P}^p(\\zeta^0)$ satisfies \n\\begin{equation}\n \\left\\|z^{0}\\left(\\zeta^{0}\\right)-z^{\\star}\\left(\\zeta^{0}\\right)\\right\\| \\leq \\epsilon.\n\\end{equation}\n\\end{assumption}\n\n\\begin{assumption}\\label{ap5}\nThe initial quantization intervals $C_\\alpha$ and $C_\\beta$ satisfy \n\\begin{subequations}\n \\begin{align}\n a_{1} (\\epsilon+\\rho)+a_{2} \\frac{C_{\\alpha}}{2^{n+1}}+a_{3} \\frac{C_{\\beta}}{2^{n+1}} & \\leq \\frac{C_{\\alpha}}{2},\\\\\n b_{1} (\\epsilon+\\rho)+b_{2} \\frac{C_{\\alpha}}{2^{n+1}}+b_{3} \\frac{C_{\\beta}}{2^{n+1}} & \\leq \\frac{C_{\\beta}}{2}, \n \\end{align}\n\\end{subequations}\nwhere the coefficients $a_1, a_2, a_3$ and $b_1, b_2, b_3$ can be found in the Appendix.\n\\end{assumption}\n\n\\begin{theorem}[\\cite{7402506}] \\label{tm1}\nSuppose that Assumptions 1-4 hold and that the number of iterations $K$ at all time steps $t \\geq 0$ satisfies\n\\begin{equation}\n K \\geq \\ceil*{\\log _{\\kappa} \\frac{\\epsilon(1-\\kappa)}{\\rho+\\delta+(1-\\kappa)(\\epsilon+\\delta)}}-1,\n\\end{equation}\nwhere $\\delta=\\frac{\\kappa\\left(C_{1}+\\sqrt{2 L} C_{2}\\right)}{L(\\kappa+\\gamma-1)(1-\\gamma)}, C_{1}=\\frac{M \\sqrt{m_{max}}\\left(L_{\\max } d C_{\\alpha}+\\sqrt{d} C_{\\beta}\\right)}{2^{n+1}}$, and $C_{2}=\\frac{\\sqrt{2}}{2} \\cdot \\frac{M \\sqrt{m_{max}} C_{\\alpha}}{2^{n+1}}$.
\nThe parameter $m_{max}:=\\max_{1 \\leq i \\leq M} m_i$ is the largest size of the local variables, $d$ is the degree of the graph and $L_{max}$ is the largest Lipschitz constant of the gradients of the local costs $f_i$, i.e., $L_{max}:=\\max_{1 \\leq i \\leq M} L_i$, and $L$ is the Lipschitz constant of the gradient of the global cost $f$. Then, at all time steps $t \\geq 0$, the solution given by Algorithm 1 satisfies\n\\begin{equation}\\label{error upper bound}\n \\left\\|z^{K+1}\\left(\\zeta^{t}\\right)-z^{\\star}\\left(\\zeta^{t}\\right)\\right\\| \\leq \\epsilon.\n\\end{equation}\n\\end{theorem}\n\n\\section{Optimal Quantization Design}\\label{section:Quantization Design}\n\n\\subsection{Optimal Quantization Design Formulation}\n\nAccording to Theorem \\ref{tm1}, we know the number of iterations $K$ required to upper-bound the sub-optimality of the solutions given by Algorithm 1 at a certain level $\\epsilon$. This upper-bound also establishes a relationship between the solution accuracy and the parameters of the quantizers $C_{\\alpha}$, $C_{\\beta}$, the quantization shrinkage constant $\\kappa$ and the number of bits of the quantizers $n$. This is further related to the number of iterations $K$ in Algorithm 1 given the real-time communication constraint $T \\geq \\ceil{nK}$ at each time step $t$, where $T$ is the communication data-rate. In the following, the set of the parameters $(\\kappa,n,K,C_{\\alpha},C_{\\beta})$ is referred to as a quantization design. The relationship shown in (\\ref{error upper bound}) is used to compute the optimal quantization design yielding the lowest sub-optimality upper bound for a fixed $T$.\n\\begin{subequations}\\label{problem:UpperBound}\n \\begin{align}\n & && \\min _{\\kappa,n,K,C_{\\alpha},C_{\\beta},\\epsilon} \\quad \\epsilon,\\\\\n &\\text{s.t. } && K \\geq\\left\\lceil\\log _{\\kappa}\\frac{\\epsilon(1-\\kappa)}{\\rho+\\delta+(1-\\kappa)(\\epsilon+\\delta)}\\right\\rceil-1, \\label{st1}\\\\\n & && T \\geq \\ceil{nK}, \\\\\n & && a_{1} (\\epsilon+\\rho)+a_{2} \\frac{C_{\\alpha}}{2^{n+1}}+a_{3} \\frac{C_{\\beta}}{2^{n+1}} \\leq \\frac{C_{\\alpha}}{2}, \\label{st2}\\\\\n & && b_{1} (\\epsilon+\\rho)+b_{2} \\frac{C_{\\alpha}}{2^{n+1}}+b_{3} \\frac{C_{\\beta}}{2^{n+1}} \\leq \\frac{C_{\\beta}}{2}.\\label{st3}\n \\end{align}\n\\end{subequations}\n\nIt can be seen that the optimization problem in \\eqref{problem:UpperBound} is non-convex and computationally challenging for existing solvers. Alternatively, we propose a method to compute an approximation of the optimal solution of problem \\eqref{problem:UpperBound} by solving a set of convex sub-problems, where each sub-problem is specified with a fixed ($\\kappa,n,K$) configuration from the set $\\{(\\kappa,n,K)\\mid T \\geq \\ceil{nK} , 1 - \\gamma \\leq \\kappa \\leq 1, \\kappa \\ \\mathbf{rem} \\ {\\lambda } = 0 \\}$. The operation $\\kappa \\ \\mathbf{rem} \\ {\\lambda }$ denotes the remainder of $\\frac{\\kappa}{\\lambda}$, where $\\lambda$ is a small and positive constant. The constant $\\lambda$ indicates the resolution level at which $\\kappa$ is selected. With fixed $(\\kappa,n,K)$, the sub-problem given in \\eqref{problem:UpperBoundInterger} is convex and solvable by existing solvers.\n\\begin{align}\\label{problem:UpperBoundInterger}\n & \\min _{C_{\\alpha},C_{\\beta},\\epsilon} \\quad \\epsilon,\\\\\n \\text{s.t.
} & \\text{(\\ref{st1}),(\\ref{st2}),(\\ref{st3})} \\nonumber.\n\\end{align}\n\nThe $(\\kappa^{a\\star},n^{a\\star},K^{a\\star},C_{\\alpha}^{a\\star},C_{\\beta}^{a\\star})$ configuration associated with the smallest sub-optimality upper bound $\\epsilon^{a\\star}$ out of the solutions of all sub-problems is said to be the best approximation of the optimal quantization design, namely the optimal solution of problem (\\ref{problem:UpperBound}).\n\n\\begin{remark}\nSince problem (\\ref{problem:UpperBound}) is solved off-line, all possible $(n,K)$ pairs can be considered for solving (\\ref{problem:UpperBoundInterger}) for a fixed $T$. Although $\\kappa$ is a continuous variable, a high resolution (a small $\\lambda$) can be chosen to uniformly select $\\kappa$ from the range $(1-\\gamma,1)$. Therefore, by considering all possible $(n,K)$ such that $\\ceil{nK} \\leq T$ and choosing $\\lambda$ to be a small and positive number, the solution $(\\kappa^{a\\star},n^{a\\star},K^{a\\star},C_{\\alpha}^{a\\star},C_{\\beta}^{a\\star})$ provides a close estimation of the optimal quantization design.\n\\end{remark}\n\n\\subsection{Example}\nConsider the following parametric distributed quadratic optimization problem:\n\\begin{subequations}\\label{problem:QPproblem}\n \\begin{align}\n & \\min _{z, z_{\\mathcal{N}_{i}}} && \\sum_{i=1}^{M} z_{\\mathcal{N}_{i}}^{T}H_{i}z_{\\mathcal{N}_{i}} + (\\zeta^t_i)^{T} h_i z_{\\mathcal{N}_{i}}, \\\\\n & \\text{s.t. } && G_i z_i \\leq h_i,\\; z_{i}=F_{j i} z_{\\mathcal{N}_{j}}, j \\in \\mathcal{N}_{i},\\\\\n & && z_{\\mathcal{N}_{i}}=E_{i} z,\\; i=1,2, \\cdots, M.\n \\end{align}\n\\end{subequations}\n\n\\begin{figure}[t]\n \\centering\n \\subfigure[]{\\includegraphics[width=0.49\\hsize]{image\/bound_a.pdf}\\label{bound_a}}\n \\subfigure[]{\\includegraphics[width=0.49\\hsize]{image\/bound_b.pdf}\\label{bound_b}}\n\\caption{Complexity upper-bound in (5) (red curves) vs. true sub-optimality (blue curves); (a) Fixing $\\kappa$ at the optimal value 0.34 and varying the number of bits $n$; (b) Fixing the number of bits at the optimal value 11 and varying $\\kappa$.}\n\\label{img error upper bound}\n\\end{figure}\n\nIn this example, we randomly generate a connected graph with $6$ agents. The degree of the graph is equal to $2$. Each agent has 2 local variables. The matrix $H_i$ is set to be a randomly generated positive definite matrix and the vector $h_i$ is also randomly generated, for $i=1,\\cdots,M$. The time-varying parameter $\\zeta^t_i$ is uniformly sampled from a constant interval at each time step $t$. The polytopic constraints $G_i z_i \\leq h_i$ are also randomly generated, but with the guarantee that more than $50\\%$ of the optimization variables can hit the constraints. Furthermore, we set the total number of bits transmitted at each step $t$ to be $T = 100$ bits. For this example, the parameters required to compute the optimal quantization design are $L = 21.99$, $L_m = 16.54$, $\\sigma_{f} = 15.93$, $\\gamma = 0.72$ and $\\rho = 8.42$, obtained by sampling. We compute an approximation of the optimal quantization design using the approach presented in Section III.A and compare the result with the true sub-optimality achieved by Algorithm 1.
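\n\nAs an illustration, the off-line search described above can be organized as in the following Python-style sketch, where \\texttt{solve\\_subproblem} is a placeholder for a convex solver applied to (\\ref{problem:UpperBoundInterger}) with a fixed $(\\kappa,n,K)$ configuration; the sketch only reflects the enumeration logic and is not a complete implementation.\n\\begin{verbatim}
import math

def offline_quantization_design(T, gamma, lam, solve_subproblem):
    # Enumerate (kappa, n, K) with n*K <= T and kappa on a grid of
    # resolution lam inside (1 - gamma, 1); keep the design with the
    # smallest sub-optimality bound epsilon returned by the convex
    # sub-problem solver (a stand-in for solving the fixed-configuration
    # sub-problem).
    best = None
    kappas = [1 - gamma + lam * j for j in range(1, int(gamma / lam))]
    for n in range(1, T + 1):
        for K in range(1, T // n + 1):
            if math.ceil(n * K) > T:
                continue
            for kappa in kappas:
                out = solve_subproblem(kappa, n, K)
                if out is None:  # infeasible configuration
                    continue
                eps, C_alpha, C_beta = out
                if best is None or eps < best[0]:
                    best = (eps, kappa, n, K, C_alpha, C_beta)
    return best  # (epsilon, kappa, n, K, C_alpha, C_beta)
\\end{verbatim}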
\n\nFig. \\ref{img error upper bound} presents a comparison between the sub-optimality upper-bound $\\epsilon$ and the true sub-optimality achieved by the optimization algorithm using different quantization configurations, represented by the red curves and the blue curves, respectively. We can see that the red and blue curves show the same trend. The upper-bound $\\epsilon$ is tight with respect to the true sub-optimality. In Fig. \\ref{bound_a}, we fix $\\kappa$ and compare $\\epsilon$ with the true sub-optimality using different $n$ ($K$ is set to be $\\floor{T\/n}$). For both cases (the red and blue curves), we observe that the lowest sub-optimality is achieved at $n=11$. In Fig. \\ref{bound_b}, we fix the number of bits $n$ ($K$ is set to be $\\floor{T\/n}$) and compare $\\epsilon$ with the true sub-optimality using different $\\kappa$. For both cases (the red and blue curves), we observe that the lowest sub-optimality is achieved at $\\kappa=0.34$.\n\n\\section{DMPC with Limited Communication Data-rates for Multiple AUVs}\\label{section:AUV Formation Control}\n\nWe first introduce a nonlinear model of an AUV and a discrete-time linear model. Then, we present a DMPC formulation for formation control of multiple AUVs. Furthermore, we propose a real-time framework for solving the DMPC problem subject to communication constraints. The framework consists of two stages: an offline stage and an online stage. In the offline stage, we use the method presented in Section III.A to find the optimal quantization design. In the online stage, we use the distributed optimization algorithm in Algorithm 1 to solve the DMPC problem with the off-line computed optimal quantization design. Finally, we briefly analyze the closed-loop properties. \n\n\\subsection{AUV Dynamic Model}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width = \\hsize]{image\/AUV.pdf}\n\\caption{AUV Schematic.}\n\\label{fig:AUV}\n\\end{figure}\n\nIn Fig. \\ref{fig:AUV}, the schematic of a BlueROV2 underwater vehicle \\footnote{$https:\/\/bluerobotics.com\/store\/rov\/bluerov2\/$} is shown. \nTo model this AUV, let us first define the states and the control inputs of the system as $x=[\\eta, \\nu]^\\top$ and $u=[T_1,T_2,T_3,T_4,T_5,T_6,T_7,T_8]^\\top$, respectively. We use the vector $\\eta=[p_X,p_Y,p_Z,q_X,q_Y,q_Z]^\\top$ to denote the position and orientation components in the global coordinate, and the vector $\\nu=[v_x,v_y,v_z,w_x,w_y,w_z]^\\top$ to denote the \nlinear and angular velocities in the body-fixed coordinate. It is assumed that the center of gravity (C.G.) and the center of buoyancy (C.B.) are located in the same horizontal plane as the horizontal thrusters $T1,T2,T3,$ and $T4$ as labeled in Fig. \\ref{fig:AUV}. The nonlinear AUV model can be expressed as\n\\begin{align}\\label{nonlinear model}\n&\\dot{x}= \n\\begin{bmatrix}\nJ(\\eta)\\nu\\\\\nM^{-1}\\left(\\tau_{c}(u)-C(\\nu) \\nu-D(\\nu) \\nu-g(\\eta)\\right)\n\\end{bmatrix},\n\\end{align}\nwhere $J(\\eta)=\\text{diag}(R,W)$ with the transformation matrices $R$ and $W$ mapping the linear and angular velocities from the body-fixed frame to the global coordinate. The matrix $M = M_r + M_a$ represents the total mass. $M_r = \\operatorname{diag} ([m,m,m,m,m,m])$ is the rigid body mass and $ M_a = - \\operatorname{diag} ([X_{\\dot{v}_z},Y_{\\dot{v}_y},Z_{\\dot{v}_z},K_{\\dot{\\omega}_x},M_{\\dot{\\omega}_y},N_{\\dot{\\omega}_z}])$ is the added mass associated with linear and angular velocities. 
Moreover, $\\tau_c (u)$ is the control input and can be formulated as follows:\n\\begin{equation*}\n \\tau_c (u) = \\tau u = \n\\begin{bsmallmatrix}\n\\sin\\theta &\\sin\\theta & \\sin\\theta & \\sin\\theta & 0 & 0 & 0 & 0\\\\\n-\\cos\\theta & \\cos\\theta & \\cos\\theta & -\\cos\\theta & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\\\\n0 & 0 & 0 & 0 & l_3 & -l_3 & l_3 & -l_3 \\\\\n0 & 0 & 0 & 0 & l_2 & l_2 & -l_2 & -l_2 \\\\\nl_1 & -l_1 & l_1 & -l_1 & 0 & 0 & 0 & 0\n\\end{bsmallmatrix}\nu,\n\\end{equation*}\nwhere $\\theta = \\frac{\\pi}{3}$ is the tilt angle of the horizontal thrusters based on the AUV structure in Fig. \\ref{fig:AUV}, $l_1$ represents the distance between the horizontal thrusters and the C.G., $l_2$ represents the distance between the vertical thrusters and the $X$-axis of the body-frame, and $l_3$ represents the distance between the vertical thrusters and the $Y$-axis of the body-frame. $C(\\nu) \\nu$, $D(\\nu) \\nu$ and $g(\\eta) $ correspond to the Coriolis force, damping force and gravitational\/buoyant force, which can be found in the Appendix. We refer to \\cite{Yang2019} for further details.\n\nFor the nonlinear AUV model in \\eqref{nonlinear model}, we can use the first-order Taylor expansion at the point ($x_n = 0 $ and $u_n = 0$) to obtain a discrete-time linear model as follows:\n\\begin{equation}\\label{local model}\n {x}(t+1) = Ax(t) + B u(t),\n\\end{equation}\nwhere $ A =\n\\begin{bsmallmatrix}\nI & \\Delta t I \\\\\n0 & I\n\\end{bsmallmatrix} $, $B = \n\\Delta t M^{-1} \\tau $. $\\Delta t$ is the sampling time. \n\\begin{remark}\n For the linear model in \\eqref{local model}, the matrix pair~$(A,B)$ is controllable.\n\\end{remark}\n\\subsection{DMPC Formulation}\nWe now formulate the DMPC optimization problem for formation control of multiple AUVs. Under a leader-follower framework, agent $i=1$ is assigned as the leader and agents $ i > 1$ as the followers. The agents share information according to a fixed communication graph $\\mathcal{G}$. The agents connected by the edges in the edge set $\\mathcal{E}$ keep their relative distances. Since the states of the agents are not coupled in the dynamics, the global system is also controllable as long as the individual agents are controllable. In general, the DMPC problem $\\mathbb{P}^s(x,x_r,u_r)$ can be formulated as follows.\n\\begin{subequations}\\label{soft DMPC problem}\n\\begin{alignat}{3}\n&\\min_{\\substack{\\bar{x}_i(l),\\bar{u}_i(l)}} && \\;\\; \\sum_{i=1}^{M} \\sum_{l=0}^{N-1} \\ell_{i}\\left(\\bar{x}_{i}(l)-x_{r_i}, \\bar{u}_{i}(l)-u_{r_i}\\right)\\nonumber \\allowdisplaybreaks\\\\\n& &&\\hspace{-10mm} + \\sum_{(i,j)\\in\\mathcal{E}}\\sum_{l=0}^{N-1} \\ell^d_{ij}(G_{ij} [\\bar{x}_{i}^{\\top}(l), \\bar{x}^{\\top}_j(l)]^{\\top} - G_{ij} [x_{r_i}^{\\top}, x^{\\top}_{r_j}]^{\\top}) \\nonumber \\allowdisplaybreaks \\\\\n& && \\hspace{-10mm} +\\sum_{i=1}^{M} \\ell_{i}^{f}\\left(\\bar{x}_{i}(N)-x_{r_i}\\right) \\label{eq:total DMPC cost function} ,\\allowdisplaybreaks \\\\\n&\\text{s.t. 
} && \\bar{x}_i(0)=x_i(t), \\label{cons1} \\allowdisplaybreaks\\\\\n& && \\bar{x}_i(l+1)=A \\bar{x}_i(l)+ B \\bar{u}_i(l), \\allowdisplaybreaks\\\\\n& && G_{u_i} \\bar{u}_i(l) \\leq h_{u_i},\\label{stage input constraint} \\allowdisplaybreaks\\\\\n& && G_{x_i} \\bar{x}_i(l) \\leq h_{x_i}, \\label{stage state constraint} \\allowdisplaybreaks\\\\\n& && \\bar{x}_{i}(N) \\in \\mathcal{E}_{f_{i}}^{s}\\left(x_{r_{i}}\\right), \\;\\;\\forall i=1, \\cdots, M,\\label{terminal constraint}\n\\end{alignat}\n\\end{subequations}\nwhere $x_i(t)$ is the measured system state of~\\eqref{local model} at time instant $t$. $\\bar{x}_i(l)$ and $\\bar{u}_i(l)$ denote the state and control input of agent $i$, respectively. $x_{r_i}$ and $u_{r_i}$ denote the state and input reference of agent $i$. The polytopic constraints \\eqref{stage input constraint} and \\eqref{stage state constraint} denote the local input constraint and the local state constraint, respectively. $\\mathcal{E}^s_{f_i}(x_{r_i})$ represents the local terminal constraint. \n\nThe stage cost is defined as $\\ell_i (\\bar{x}_{i}(l)-x_{r_i}, \\bar{u}_{i}(l)-u_{r_i} ) = \\|\\bar{x}_{i}(l)-x_{r_i}\\|_{Q_i}^2+\\| \\bar{u}_{i}(l)-u_{r_i}\\|_{R_i}^2$. The cost function for formation control is defined as $\\ell^d_{ij}(G_{ij} [\\bar{x}_{i}^{\\top}(l), \\bar{x}^{\\top}_j(l)]^{\\top} - G_{ij} [x_{r_i}^{\\top}, x^{\\top}_{r_j}]^{\\top}) = \\|G_{ij} [\\bar{x}_{i}^{\\top}(l), \\bar{x}^\\top_j(l)]^{\\top}- G_{ij} [x_{r_i}^{\\top}, x^\\top_{r_j}]^{\\top}\\|_{S_{ij}}^2$. The terminal cost is defined as $\\ell_{i}^{f} \\left(\\bar{x}_{i}(N)-x_{r_i}\\right)= \\|\\bar{x}_i(N)-x_{r_i}\\|_{P_i}^2 $. The weighting matrices $Q_i$, $R_i$, $P_i$, and $S_{ij}$ are set to be positive definite matrices. $P = \\operatorname{blkdiag}(P_i,\\ldots,P_M)$ can be obtained by solving the algebraic Riccati equation with $ Q = \\operatorname{blkdiag}(Q_i,\\ldots,Q_M)$ and $R = \\operatorname{blkdiag}(R_i,\\ldots,R_M)$.\n\nIn the DMPC problem \\eqref{soft DMPC problem}, formation maintenance is achieved by penalizing the difference between the relative distances and the reference relative distances $G_{ij} [\\bar{x}_{i}^{\\top}(l), \\bar{x}^\\top_j(l)]^{\\top}- G_{ij} [x_{r_i}^{\\top}, x^\\top_{r_j}]^{\\top}$. Given a set of reference setpoints $(p_{r_{X_i}}, p_{r_{Y_i}}, p_{r_{Z_i}})$, for agent $i=1, \\cdots, M$, the reference setpoint for the leader, i.e., agent $1$, is set to be $x_{r_1} = (p_{r_{X_1}}, p_{r_{Y_1}}, p_{r_{Z_1}})$. The reference setpoint for a follower, i.e., agent $j$ for $j>1$, is set to be $x_{r_j} = (p_{r_{X_i}}-d_{r_{X_{ij}}}, p_{r_{Y_i}}-d_{r_{Y_{ij}}}, p_{r_{Z_i}}-d_{r_{Z_{ij}}})$. Among them, $d_{r_{X_{ij}}}, d_{r_{Y_{ij}}},$ and $d_{r_{Z_{ij}}}$ are given references for the relative distances in the $X-$axis, $Y-$axis and $Z-$axis, respectively. The matrix $G_{ij}$ is used to select $p_{X_i}-p_{X_j}, p_{Y_i} -p_{Y_j}, p_{Z_i} -p_{Z_j}$ from the concatenated vector $[\\bar{x}_{i}^{\\top}(l), \\bar{x}^\\top_j(l)]^{\\top}$. Similarly, $d_{r_{X_{ij}}}, d_{r_{Y_{ij}}},$ and $d_{r_{Z_{ij}}}$ can be selected by applying the matrix $G_{ij}$ to the concatenated reference vector $[x_{r_i}^{\\top}, x^\\top_{r_j}]^{\\top}$. \n\n\\begin{remark}\\label{remark:DMPC reformulation}\n$\\mathbb{P}^s(x,x_r,u_r)$ can be reformulated as a distributed QP in the form of \\eqref{problem:QPproblem}. For each agent $i$, the optimization variable is set to be $z_i = [\\bar{x}_i^{\\top}(0),\\cdots,\\bar{x}_i^{\\top}(N),\\bar{u}_i^{\\top}(0),\\cdots,\\bar{u}_i^{\\top}(N-1)]^{\\top}$. 
The local formation cost function can be written as $\\|[x_{i}^{\\top}(l), x^\\top_j(l)]^{\\top}-[x_{r_i}^{\\top}, x^\\top_{r_j}]^{\\top}\\|_{\\tilde{S}_{ij}}^2$ with $\\tilde{S}_{ij} = G_{ij}^\\top S_{ij} G_{ij}$. The augmented block-diagonal matrix $\\textup{H}_i$ can be built with $Q_i$, $R_i$, $\\tilde{S}_{ij}$ and $P_i$. The local constraint $\\mathbb{C}_i$ can be obtained by reformulating (13b)-(13f). \n\\end{remark}\n\nWe now summarize the real-time DMPC framework for solving and implementing the DMPC problem in \\eqref{soft DMPC problem} with a limited communication data-rate~$T$ in Algorithm 2. In the off-line stage, the optimal quantization design $(\\kappa,n,K,C_{\\alpha},C_{\\beta})$ is obtained by solving the optimization problem in \\eqref{problem:UpperBound}. Then, the distributed optimization algorithm in Algorithm 1 with the optimal quantization design obtained from the off-line stage is used to solve the DMPC problem in \\eqref{soft DMPC problem} in the on-line stage.\n\n\n\\begin{algorithm}[t]\n\\textbf{\\textup{Off-line stage:}}\\\\\n1. Consider $\\mathbb{P}^s(x(t),x_r,u_r)$ in (\\ref{soft DMPC problem}), reformulate it as (\\ref{problem:QPproblem}) and find $M, L, L_m, d,$ $ m, \\gamma$, and $\\tau$. Find $\\rho$ specified by (\\ref{solution upper bound}) through sampling;\\\\\n2. Given $M, T, L, L_m, d, m, \\gamma, \\tau, \\text{and } \\rho$, solve problem (\\ref{problem:UpperBound}), via solving the sub-problems in (\\ref{problem:UpperBoundInterger}), to calculate the optimal quantization design $(\\kappa^\\star,n^\\star,K^\\star,C_{\\alpha}^\\star,C_{\\beta}^\\star)$ and the corresponding optimal error bound $\\epsilon^\\star$; \\\\\n3. Given the initial state of the system $x_{i}(0) $, for $i=1,\\cdots,M$, calculate ${z}^{0,0}$ s.t. $\\|{z}^{0,0}-{z}^{0,\\star}\\| \\leq \\epsilon^\\star$. \\\\\n\\textbf{\\textup{On-line stage:}}\\\\\n\\textbf{\\textup{for}} $t = 0,1,\\cdots$ \\textbf{\\textup{do}} \\\\\n4. Measure $x_i(t)$;\\\\\n5. Execute steps (1)-(10) in Algorithm 1 using the quantization design $(\\kappa^\\star,n^\\star,K^\\star,C_{\\alpha}^\\star,C_{\\beta}^\\star)$ to solve $\\mathbb{P}^s(x(t),x_r,u_r)$ in (13) with the initial solution ${z}^{t,0}$;\\\\\n6. Update the initial solution for $t+1$: ${z}^{t+1,0} = {z}^{t, K+1}$ ;\\\\\n7. Extract $u(0)^{t,K+1}_{i}$ from ${z}^{t,K+1}_i$ and apply it to agent $i$, $\\forall{i\\in\\{1,\\cdots,M\\}}$; \\\\\n\\textbf{\\textup{end}}\n\\caption{Real-time DMPC framework}\n\\end{algorithm}\n\n\\begin{remark}\nThe parameter $\\rho$ introduced in Assumption \\ref{ap3} is determined by simulation. For each sample, the system is driven from a random initial state in the set of admissible initial states to the origin using Algorithm 2. The parameter $\\rho$ is set to be the maximum distance between the optimal solutions of any two sampled states. \n\\end{remark}\n\n\\subsection{Closed-loop Analysis for the DMPC in (13) with a Limited Communication Data-rate}\n\n\n\nIn this section, we analyze the closed-loop properties of the system under the proposed DMPC formulation in (13), implemented following the steps in Algorithm 2, in the corollary below.\n\n\\begin{corollary}\n Consider the distributed MPC problem in (13) and Algorithm 2. 
Given the total number of bits $T$, the closed-loop system \\eqref{local model}-\\eqref{soft DMPC problem} with the real-time DMPC framework in Algorithm 2 is recursively feasible and input-to-state stable (ISS).\n\\end{corollary}\n\n\\begin{proof}\n We first discuss the feasibility of the solution ${z}^{t,K+1}_i$ returned by Algorithm 2. Since each agent has only local dynamical, state and input constraints, the re-projection step (Step 5 in Algorithm 1) guarantees that the sub-optimal solution ${z}^{t,K+1}_i$ returned by Algorithm 2 is feasible for all $t\\geq 0$. We then study the stability property. The computational error (sub-optimality) of the control input $u(0)^{t,K+1}_{i}$ generated by Algorithm 2 can be considered as a disturbance $w$ acting on the closed-loop system. We know that this induced disturbance is bounded by a constant determined by the sub-optimality level $\\epsilon$. Since the closed-loop system is uniformly continuous in $x$ and $w$, the ISS property of the closed-loop system follows from \\cite[Theorem 4]{Limon2009}.\n\\end{proof} \n\n\\section{Case Study: a Multi-AUV System}\\label{section:Formation Control with sub-optimality}\n\nIn this section, we apply the DMPC framework with the optimal quantization design to a multi-agent system with three AUVs. For the agents $i \\in \\mathcal{V} = \\{1,2,3\\}$, agent 1 is assigned as the leader and agents 2 and 3 as the followers. The agents share information according to a fixed undirected graph $\\mathcal{G}$, where the edges are $\\mathcal{E} = \\{ (1,2 ), (1,3 ) \\}$. Also, the followers only keep relative distances with respect to the leader. Each agent can be modelled by the AUV model in \\eqref{local model}. The model parameters of the AUV are given as follows: $m = 11$, $W = B = 107.8$ (with ballast), $X_{\\dot{v}_z} = -2.8$, $Y_{\\dot{v}_y}=-3$, $Z_{\\dot{v}_z}=-3.2$, $K_{\\dot{\\omega}_x}=-0.05$, $M_{\\dot{\\omega}_y}=-1$, $N_{\\dot{\\omega}_z}=-0.3$, $X_{v_x|v_x|}|v_x|,Y_{v_y|v_y|}|v_y|,Z_{v_z|v_z|}|v_z|=10$ and $K_{\\omega_x|\\omega_x|}|\\omega_x|,M_{\\omega_y|\\omega_y|}|\\omega_y|,N_{\\omega_z|\\omega_z|}|\\omega_z|=4$. \n\nThe state constraints for the three AUVs are $p_X \\in [-10,5]$, $p_Y,p_Z \\in [-5,5]$, $q_X,q_Y,q_Z \\in [-\\frac{\\pi}{3},\\frac{\\pi}{3}]$, $v_x,v_y,v_z \\in [-1,1]$, and $\\omega_x,\\omega_y,\\omega_z \\in [-\\frac{\\pi}{6},\\frac{\\pi}{6}]$. Furthermore, the input constraints are set as $T_1,T_2,T_3,T_4,T_5,T_6,T_7,T_8 \\in [-2,2]$ for the three AUVs.\n\nThe tracking reference setpoints are set as $(0,0,0)$, $(-2,1,0)$ and $(-2,-1,0)$ for the three AUVs. The three AUVs are expected to maintain a formation with desired relative distances as follows: $p_{X_{1}}-p_{X_{2}} = 2$, $p_{Y_{1}}-p_{Y_{2}} = -1$, $p_{Z_{1}}-p_{Z_{2}} = 0$, $p_{X_{1}}-p_{X_{3}} = 2$, $p_{Y_{1}}-p_{Y_{3}} = 1$ and $p_{Z_{1}}-p_{Z_{3}} = 0$. The initial positions of the three AUVs are $(-8,0.5,-0.5)$, $(-7,1.5,1)$ and $(-8,-0.5,-1)$, respectively. A sampling time $\\Delta t = 0.1$ s is considered and the communication data-rate is $T =100$ bits at every step $t$. \n\n\\subsection{Closed-loop Simulation Results}\n\nWe first show simulation results to demonstrate the stability of the closed-loop multi-AUV system controlled with the proposed DMPC framework with the optimal quantization design. The agents are modeled by the discrete linear model \\eqref{local model}. 
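\n\nAs an illustration, the discrete-time model matrices $A$ and $B$ in \\eqref{local model} used for this simulation can be assembled as in the following sketch. The thruster geometry parameters $l_1,l_2,l_3$ are placeholders (their numerical values are not listed here), and stacking a zero block on top of $\\Delta t M^{-1} \\tau$ in $B$ is an assumption made so that the input acts on the velocity part of the $12$-dimensional state.\n\\begin{verbatim}
import numpy as np

m, dt, theta = 11.0, 0.1, np.pi / 3
M_r = np.diag([m] * 6)                                  # rigid-body mass
M_a = -np.diag([-2.8, -3.0, -3.2, -0.05, -1.0, -0.3])   # added mass
M = M_r + M_a                                           # total mass

l1, l2, l3 = 0.16, 0.11, 0.12   # placeholder thruster geometry [m]
s, c = np.sin(theta), np.cos(theta)
tau = np.array([
    [ s,   s,   s,   s,   0,   0,   0,   0],
    [-c,   c,   c,  -c,   0,   0,   0,   0],
    [ 0,   0,   0,   0,   1,   1,   1,   1],
    [ 0,   0,   0,   0,  l3, -l3,  l3, -l3],
    [ 0,   0,   0,   0,  l2,  l2, -l2, -l2],
    [l1, -l1,  l1, -l1,   0,   0,   0,   0],
])

I6 = np.eye(6)
A = np.block([[I6, dt * I6], [np.zeros((6, 6)), I6]])   # 12 x 12
B = np.vstack([np.zeros((6, 8)),
               dt * np.linalg.solve(M, tau)])           # 12 x 8
\\end{verbatim}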
The DMPC with the optimal quantization design was implemented following the steps described in Algorithm 2. In Fig. 3(a), the position of agent 2 is measured as its distance from its reference setpoint $(-2,1,0)$. It can be seen that agent 2 moves toward the reference setpoint slowly from $t = 0$ to $2$ because it prioritizes formation tracking. The position of agent 2 then converges to the reference setpoint after $t = 10$. As shown in Fig. 3(b), the control inputs saturate between $t = 0$ and $t = 6$, corresponding to the process of formation control.\n\n\\begin{figure}[t]\n \\centering\n \\subfigure[]{\\includegraphics[width=0.44\\hsize]{image\/converge_a.pdf}\\label{converge_a}}\n \\subfigure[]{\\includegraphics[width=0.45\\hsize]{image\/converge_b.pdf}\\label{converge_b}}\n\\caption{(a) Distance of agent 2 from its reference setpoint; (b) Control input of agent 2.}\n\\label{global error upper bound}\n\\end{figure}\n\n\\subsection{Control Performance with the Quantization Design}\n\nWe next show simulation results to compare the control performance with two quantization designs. In this case, no terminal constraint is used in the DMPC framework in \\eqref{soft DMPC problem}. The positive definite matrix $P_i$ in the terminal cost $\\ell_i^f(x_i(N)-x_{r_i})$ is replaced by $Q_i$. The DMPC problem was implemented following the steps described in Algorithm 2 to control the system of three AUVs with the nonlinear model \\eqref{nonlinear model}. By solving the optimization problem~\\eqref{problem:UpperBound}, we obtain the optimal quantization design $(\\kappa=0.51,n=20,K=5,C_\\alpha =65.85,C_\\beta = 66.23)$. The sub-optimal quantization design is chosen as $(\\kappa=0.95,n=30,K=3,C_\\alpha =693.51,C_\\beta = 693.72)$. For the comparison of the two quantization designs, random state disturbances are sampled from the intervals $[-0.1,0.1]$ and $[-0.03,0.03]$ with uniform distribution at each time step $t$. Then, these sampled disturbances are added to the linear states $p_X,p_Y,p_Z$ and angular states $q_X,q_Y,q_Z$, respectively.\n\n\\begin{figure}[t]\n \\centering\n \\subfigure[]{\\includegraphics[width=0.49\\hsize]{image\/perform_a.pdf}\\label{perform_a}}\n \\subfigure[]{\\includegraphics[width=0.475\\hsize]{image\/perform_b.pdf}\\label{perform_b}}\\\\\n \\subfigure[]{\\includegraphics[width=0.49\\hsize]{image\/perform_c.pdf}\\label{perform_c}}\n \\subfigure[]{\\includegraphics[width=0.48\\hsize]{image\/perform_d.pdf}\\label{perform_d}}\n\\caption{Control performance comparison between the optimal quantization design and a sub-optimal quantization design; (a) Change in sub-optimality; (b) Relative distance in $X$-axis; (c) Relative distance in $Y$-axis; (d) Relative distance in $Z$-axis.}\n\\label{global error upper bound}\n\\end{figure}\n\nThe sub-optimal solutions obtained with the optimal and sub-optimal quantization designs are shown in Fig. 4(a). It can be seen that the sub-optimality from the optimal quantization design is of the order of $10^{-2}$, while that from the sub-optimal quantization design is of the order of $10^{-1}$. In Fig. 4(b)-(d), the relative distances between the leader and agent 2 in the $X$-axis, $Y$-axis and $Z$-axis are shown. As shown by the dark-red line for the $Y$-axis, the desired relative distance can be achieved with the optimal quantization design. For comparison, worse tracking performance for the relative distance can be observed with the sub-optimal quantization design. 
At $t = 10$, the tracking error is 0.015 m when using the optimal quantization design, while the tracking error is 0.136 m when using the sub-optimal quantization design. In the $Z$-axis, the optimal quantization design outperforms the sub-optimal design after $t=10$. In the $X$-axis, both quantization designs yield similar performance, because movement in the $X$-axis is relatively slow. In conclusion, while the change between optimal solutions is upper bounded by the same $\\rho$ for both quantization designs, the optimal quantization design is able to consistently achieve better sub-optimality, which leads to better control performance.\n\n\n\n\\section*{Appendix}\n\nThe coefficients $a_1, a_2, a_3$ and $b_1, b_2, b_3$ from Assumption \\ref{ap5} are defined as follows:\n\n\\begin{align*}\n & a_{1}=(\\kappa+1)(\\kappa), a_{2}=(M \\sqrt{m_{max}} \\kappa(\\kappa+1)\\left(d L_{\\max }+\\sqrt{L}\\right)\\\\\n & +M \\sqrt{m_{max}} L(\\kappa+\\gamma-1)(1-\\gamma))(L \\kappa(\\kappa+\\gamma-1)(1-\\gamma)),\\\\\n & a_{3}=(M \\sqrt{d m_{max}}(\\kappa+1))(L(\\kappa+\\gamma-1)(1-\\gamma)), \\\\\n & b_{3} =(L_{\\max } M \\sqrt{d m_{max}} \\kappa(\\kappa+1)+L \\sqrt{d m_{max}}(\\kappa+\\gamma-1)\\\\\n &(1-\\gamma)) (L \\kappa(\\kappa+\\gamma-1)(1-\\gamma)), b_{1} =(L_{\\max }(\\kappa+1))(\\kappa), \\\\\n & b_{2}= (L_{\\max } M \\sqrt{m_{max}} \\kappa(\\kappa+1)\\left(d L_{\\max }+\\sqrt{L}\\right)+L_{\\max } d \\\\\n & \\sqrt{m_{max}} L(\\kappa+1)(\\kappa+\\gamma-1)(1-\\gamma))(L \\kappa(\\kappa+\\gamma-1)(1-\\gamma)).\\\\\n\\end{align*}\n\nThe Coriolis force matrix, damping matrix and gravity matrix defined in the nonlinear model \\eqref{nonlinear model} are as follows:\n\\begin{align*}\nC(\\nu) &= \n\\begin{bsmallmatrix}\n0 & 0 & 0 & 0 & M_{z} v_z & -M_{y} v_y \\\\\n0 & 0 & 0 & -M_{z} v_z & 0 & M_{x} v_x \\\\\n0 & 0 & 0 & M_{y} v_y & -M_{x} v_x & 0 \\\\\n0 & M_{z} v_z & -M_{y} v_y & 0 & M_{\\omega_x} \\omega_z & -M_{\\omega_y} \\omega_y \\\\\n-M_{z} v_z & 0 & M_{x} v_x & -M_{\\omega_x} \\omega_z & 0 & M_{\\omega_z} \\omega_x \\\\\nM_{y} v_y & -M_{x} v_x & 0 & M_{\\omega_y} \\omega_y & -M_{\\omega_z} \\omega_x & 0\n\\end{bsmallmatrix} , \\allowdisplaybreaks\\\\ \n D(\\nu) &= \n\\operatorname{diag} ([X_{v_x|v_x|}|v_x|,Y_{v_y|v_y|}|v_y|,Z_{v_z|v_z|}|v_z|, \\nonumber\\\\\n& \\quad\\quad\\quad\\quad K_{\\omega_x|\\omega_x|}|\\omega_x|,M_{\\omega_y|\\omega_y|}|\\omega_y|,N_{\\omega_z|\\omega_z|}|\\omega_z|]) ,\\allowdisplaybreaks\\\\\n g(\\eta) &= \n\\begin{bmatrix}\n(W-B) q_y \\\\\n-(W-B) \\cos q_y \\sin q_z \\\\\n-(W-B) \\cos q_y \\cos q_z \\\\\n\\mathbf{0}_{3 \\times 1}\n\\end{bmatrix},\n\\end{align*}\nwhere $\\mathbf{0}_{3 \\times 1}$ is zero column matrix. $W$ and $B$ represent the weight and buoyant force acting on the AUV while assuming center of gravity collocates with the center of buoyancy.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Functional framework}\n\n\nIn this section, we set up a suitable functional framework for our problem. We consider the following functional on ${W^{1,p}(\\Om)}$:\n$$ \nG(\\phi)={\\displaystyle \\int_{\\partial \\Omega}} g |\\phi|^p, \\quad \\forall \\phi \\in {W^{1,p}(\\Om)}.\n$$\nFor $g \\in L^{\\frac{N-1}{p-1}, \\infty}({\\partial} \\Om)$ (if $N > p$) and $g \\in L^{1, \\infty;N}({\\partial} \\Om)$ (if $N=p$), Proposition \\ref{Hardy boundary} ensures that $G$ is well defined. 
Now we study the continuity, compactness and differentiability of $G$.\n\n\n\\begin{proposition}\\label{G cont}\nLet $$ g \\in \\left\\{\\begin{array}{ll}\n L^{\\frac{N-1}{p-1}, \\infty}({\\partial} \\Om) & \\text{ for } N > p,\\\\\n L^{1, \\infty; N}({\\partial} \\Om) & \\text{ for } N = p.\n \\end{array}\\right.$$ \nThen $G$ is continuous.\n\\end{proposition}\n\\begin{proof}\nWe only consider the case $N > p.$ For $N=p,$ the proof will follow using similar arguments. Let $\\phi_n \\rightarrow \\phi$ in ${W^{1,p}(\\Om)}$ and let $\\ep >0$ be given. Clearly,\n\\begin{align*}\n \\abs{G(\\phi_n) - G(\\phi)} \\leq {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g}\\abs{(\\abs{\\phi_n}^p - \\abs{\\phi}^p)}.\n\\end{align*}\n Using the inequality due to Lieb and Loss \\cite[Page 22]{Lieb}, there exists $C = C(\\ep, p) > 0$ such that\n\\begin{align*}\n \\abs{(\\abs{\\phi_n}^p - \\abs{\\phi}^p)} \\leq \\ep \\abs{\\phi}^p + C \\abs{\\phi_n - \\phi}^p \\quad \\text { a.e. on } {\\partial} \\Om.\n\\end{align*}\nHence \n\\begin{align}\\label{cont1}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g}\\abs{(\\abs{\\phi_n}^p - \\abs{\\phi}^p)} \\leq \\ep{\\displaystyle \\int_{\\partial \\Omega}} \\abs{g} \\abs{\\phi}^p + C {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g} \\abs{\\phi_n - \\phi}^p.\n\\end{align}\nNow using \\eqref{N>p}, we obtain \n\\begin{align}\\label{cont2}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g} \\abs{\\phi_n - \\phi}^p \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{g}_{\\left( \\frac{N-1}{p-1}, \\infty \\right)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n - \\phi}^p_{{W^{1,p}(\\Om)}},\n\\end{align}\nwhere $C = C(N, p) > 0$ is the embedding constant and ${p^{\\prime}}$ is the conjugate exponent of $p$. Now from \\eqref{cont1} and \\eqref{cont2}, we easily conclude that $G(\\phi_n) \\rightarrow G(\\phi)$ as $n\\ra \\infty$. \n\\end{proof}\n\n\n\\begin{proposition}\\label{G cpt}\nLet $$g \\in \\left\\{\\begin{array}{ll}\n {\\mathcal F}_{\\frac{N-1}{p-1}} & \\text{ for } N > p,\\\\\n {\\mathcal G}_1 & \\text{ for } N = p.\n \\end{array}\\right.$$ Then $G$ is compact.\n\\end{proposition}\n\n\n\\begin{proof} \nAs before, we only consider the case $N > p.$ Let $\\phi_n \\rightharpoonup \\phi$ in ${W^{1,p}(\\Om)}$ and let $\\ep > 0$ be given. Set $L = \\sup \\{ \\@ifstar{\\oldnorm}{\\oldnorm*}{ \\phi_n}^p_{{W^{1,p}(\\Om)}} + \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^p_{{W^{1,p}(\\Om)}} \\}. 
$ For $g \\in {\\mathcal F}_{\\frac{N-1}{p-1}}$, we split $g = g_{\\ep} + (g - g_{\\ep})$ where $g_{\\ep} \\in {\\C^1}({\\partial} \\Om)$ such that $\\@ifstar{\\oldnorm}{\\oldnorm*}{g - g_{\\ep}}_{\\left(\\frac{N-1}{p-1}, \\infty \\right)} < \\frac{\\ep}{L}.$ Then \n\\begin{align}\\label{cpt1}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g}\\abs{(\\abs{\\phi_n}^p - \\abs{\\phi}^p)} \\leq {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g_{\\ep}} \\abs{(\\abs{\\phi_n}^p - \\abs{\\phi}^p)} + {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g - g_{\\ep}} \\abs{(\\abs{\\phi_n}^p - \\abs{\\phi}^p)}.\n\\end{align}\nWe estimate the second integral of \\eqref{cpt1} using \\eqref{N>p} as, \n\\begin{align}\\label{cpt2}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g - g_{\\ep}} \\abs{(\\abs{\\phi_n}^p - \\abs{\\phi}^p)} \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{g - g_{\\ep}}_{\\left(\\frac{N-1}{p-1}, \\infty \\right)}\\left( \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n}^p_{{W^{1,p}(\\Om)}} + \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^p_{{W^{1,p}(\\Om)}}\\right).\n\\end{align}\nSince ${W^{1,p}(\\Om)}$ is compactly embedded into $L^p({\\partial} \\Om)$ (Proposition \\ref{classical}), there exists $n_1 \\in \\mathbb{N}$ such that $\\int_{{\\partial} \\Om} \\abs{g_{\\ep}}\\abs{(\\abs{\\phi_n}^p - \\abs{\\phi}^p)} < \\ep, \\; \\forall n \\geq n_1. $ Now from \\eqref{cpt1} and \\eqref{cpt2}, we obtain\n$$ \n{\\displaystyle \\int_{\\partial \\Omega}} \\abs{g}\\abs{(\\abs{\\phi_n}^p - \\abs{\\phi}^p)} < (C+1) \\ep, \\quad \\forall n \\geq n_1. $$\nThus $G(\\phi_n)$ converges to $G(\\phi)$ as $ n\\ra \\infty.$ \n\\end{proof}\n\n\n\\begin{proposition}\\label{G' cpt}\nLet $p \\in (1, \\infty).$ Let $N,g $ be given as in Proposition \\ref{G cpt}. Then $G$ is differentiable at every $\\phi \\in {W^{1,p}(\\Om)}$ and \n$$ \n\\quad \\big< G'(\\phi), v \\big> = p{\\displaystyle \\int_{\\partial \\Omega}} g| \\phi|^{p-2} \\phi v, \\quad \\forall v \\in {W^{1,p}(\\Om)}.\n$$\nMoreover, the map $G'$ is compact.\n\\end{proposition}\n\n\n\\begin{proof}\nFor $\\phi,v \\in {W^{1,p}(\\Om)}$, let $f : {\\partial} \\Om \\times [-1,1] \\rightarrow {\\mathbb R}$ defined by $f(y , t) = g(y) \\abs{(\\phi + t v)(y)}^p.$\nThen\n$\n\\frac{{\\partial} f}{{\\partial} t}(\\cdot,t) = p g \\abs{\\phi + t v}^{p-2} (\\phi + t v) v\n$ and\n$$\\left|\\frac{{\\partial} f}{{\\partial} t}(\\cdot,t)\\right|\\le p 2^{p-1} \\abs{g} \\left( \\abs{\\phi}^{p-1} + \\abs{v}^{p-1} \\right) \\abs{v}.$$\n Set $h = p 2^{p-1} \\abs{g} \\left( \\abs{\\phi}^{p-1} + \\abs{v}^{p-1} \\right) \\abs{v}$ and for each $n\\in {\\mathbb N},$ set \n\\begin{align*}\nh_n(y) = n \\left( f(y, \\frac{1}{n}) - f(y, 0) \\right).\n\\end{align*}\n Clearly, $h_n(y) \\rightarrow \\frac{{\\partial} f}{{\\partial} t}(y,0)$ a.e. 
on ${\\partial} \\Om$ and by mean value theorem, we also have\n\\begin{align*}\n\\abs{h_n(y)} \\leq \\sup_{t \\in [-1,1]} \\Big| \\frac{{\\partial} f}{{\\partial} t}(y,t) \\Big| \\leq h(y).\n\\end{align*}\nFurthermore, using a similar set of arguments as given in the proof of Proposition \\ref{Hardy boundary}, one can show that $h_n,h \\in L^1({\\partial} \\Om),$ for each $n\\in {\\mathbb N}.$ Therefore, by the dominated convergence theorem, \n\\begin{align*}\n\\lim_{n \\rightarrow \\infty} {\\displaystyle \\int_{\\partial \\Omega}} n \\left( f(y, \\frac{1}{n}) - f(y, 0) \\right) \\; {\\rm d}y = {\\displaystyle \\int_{\\partial \\Omega}} \\frac{{\\partial} f}{{\\partial} t}(y,0) \\; {\\rm d}y= p{\\displaystyle \\int_{\\partial \\Omega}} g| \\phi|^{p-2} \\phi v. \n\\end{align*}\nThus \n\\begin{align*}\n\\big< G'(\\phi), v \\big> = \\frac{{\\rm d}}{{\\rm d}t} G(\\phi + tv) \\Big|_{t = 0} = p{\\displaystyle \\int_{\\partial \\Omega}} g| \\phi|^{p-2} \\phi v.\n\\end{align*}\nThe proof of compactness is quite similar to that of Proposition \\ref{G cpt}.\n\\end{proof}\n\n\nFor $p \\in (1, \\infty)$, consider the following functional\n$$\nJ(\\phi) = {\\displaystyle \\int_{\\Omega}} \\abs{\\Gr \\phi}^p, \\quad \\forall \\phi \\in {W^{1,p}(\\Om)}.\n$$ \nThen $J$ is differentiable on ${W^{1,p}(\\Om)}$, and the derivative is given by\n$$ \n\\big< J'(\\phi), u \\big> = p{\\displaystyle \\int_{\\Omega}} |\\Gr \\phi|^{p-2}\\Gr \\phi\\cdot \\Gr u, \\quad \\forall u \\in {W^{1,p}(\\Om)}.\n$$ \n\n\n\\begin{proposition}\\label{class alpha}\nLet $p \\in (1, \\infty).$ Then \n\\begin{enumerate}[(i)]\n \\item $J'$ is continuous.\n \\item $J'$ is of class $\\al({W^{1,p}(\\Om)}).$\n\\end{enumerate}\n\\end{proposition}\n\n\n\\begin{proof}\n(i) Let $\\phi_n \\rightarrow \\phi$ in ${W^{1,p}(\\Om)}$. For $v \\in {W^{1,p}(\\Om)},$ \n\\begin{align*}\n \\big|\\left< J'(\\phi_n) - J'(\\phi), v \\right>\\big| & \\leq {\\displaystyle \\int_{\\Omega}} \\abs{(\\abs{\\Gr \\phi_n}^{p-2} \\Gr \\phi_n - \\abs{\\Gr \\phi}^{p-2} \\Gr \\phi)} \\abs{\\Gr v} \\\\\n & \\leq \\left( {\\displaystyle \\int_{\\Omega}} \\abs{(\\abs{\\Gr \\phi_n}^{p-2} \\Gr \\phi_n - \\abs{\\Gr \\phi}^{p-2} \\Gr \\phi)}^{{p^{\\prime}}} \\right)^{\\frac{1}{{p^{\\prime}}}} \\left( {\\displaystyle \\int_{\\Omega}} \\abs{\\Gr v}^p \\right)^{\\frac{1}{p}}. \n\\end{align*}\nTherefore,\n\\begin{align*}\n \\@ifstar{\\oldnorm}{\\oldnorm*}{J'(\\phi_n) - J'(\\phi)} \\leq \\left( {\\displaystyle \\int_{\\Omega}} \\abs{(\\abs{\\Gr \\phi_n}^{p-2} \\Gr \\phi_n - \\abs{\\Gr \\phi}^{p-2} \\Gr \\phi)}^{{p^{\\prime}}} \\right)^{\\frac{1}{{p^{\\prime}}}}.\n\\end{align*}\nNow consider the map $J_1$ defined as $J_1(\\phi) = \\abs{\\Gr \\phi}^{p-2} \\Gr \\phi.$ Clearly $J_1$ maps ${W^{1,p}(\\Om)}$ into $L^{{p^{\\prime}}}(\\Om)$ and $J_1$ is continuous. 
Hence we conclude $ \\@ifstar{\\oldnorm}{\\oldnorm*}{J'(\\phi_n) - J'(\\phi)} \\rightarrow 0$ as $n \\rightarrow \\infty.$ \n\n\n\\noi (ii) Let $\\phi_n \\rightharpoonup \\phi$ in ${W^{1,p}(\\Om)}$ and let $\\uplim_{n \\rightarrow \\infty} \\big< J'(\\phi_n), \\phi_n - \\phi \\big> \\leq 0.$ Then \n\\begin{align}\\label{ca 1}\n \\uplim_{n \\rightarrow \\infty} \\big< J'(\\phi_n) - J'(\\phi), \\phi_n - \\phi \\big> = \\uplim_{n \\rightarrow \\infty} \\big< J'(\\phi_n), \\phi_n - \\phi \\big> - \\lowlim_{n \\rightarrow \\infty} \\big< J'(\\phi), \\phi_n - \\phi \\big> \\leq 0.\n\\end{align}\nNow for each $n \\in {\\mathbb N},$ \n\\begin{align*}\n \\big< J^{\\prime}(\\phi_n) - J^{\\prime}(\\phi), \\phi_n - \\phi \\big> \\geq p\\left( \\Vert \\Gr \\phi_n \\Vert_p^{p-1} - \\Vert \\Gr \\phi \\Vert_p^{p-1} \\right) \\left( \\Vert \\Gr \\phi_n \\Vert_p - \\Vert \\Gr \\phi \\Vert_p \\right) \\geq 0.\n\\end{align*}\nHence from \\eqref{ca 1}, we get \n\\begin{align*}\n \\lim_{n \\rightarrow \\infty} \\big< J'(\\phi_n) - J'(\\phi), \\phi_n - \\phi \\big> = 0.\n\\end{align*}\nTherefore, $\\Vert \\Gr \\phi_n \\Vert_p \\rightarrow \\Vert \\Gr \\phi \\Vert_p$ as $n \\rightarrow \\infty$. Hence by uniform convexity of $(L^p(\\Om))^N$, we obtain $\\Gr \\phi_n \\rightarrow \\Gr \\phi$ in $(L^p(\\Om))^N$. Further, since ${W^{1,p}(\\Om)}$ is compactly embedded into $L^p(\\Om)$, we get $\\phi_n \\rightarrow \\phi$ in $L^p(\\Om)$ . Therefore, $\\phi_n\\rightarrow \\phi $ in ${W^{1,p}(\\Om)}.$ Thus the map $J'$ is of class $\\al({W^{1,p}(\\Om)}).$\n\\end{proof}\n\n\n\\begin{proposition}\\label{compactmap1}\nLet $p \\in (1, \\infty)$ and let $N$, $r$ and $f$ satisfy \\textbf{(H1)} or \\textbf{(H2)}. Then the map $F$ defined by \n$$ \n\\big< F(\\phi), v \\big> = {\\displaystyle \\int_{\\partial \\Omega}} f r(\\phi) v \n$$\nis a well-defined map from ${W^{1,p}(\\Om)} \\rightarrow ({W^{1,p}(\\Om)})^{\\prime}$. Moreover, $F$ is continuous and compact.\n\\end{proposition}\n\n\n\\begin{proof} \nFirst, we assume that $N, r$ and $f$ satisfy \\textbf{(H1)}. In this case $\\ga\\in (1,{\\frac{p(N-1)}{N-p}})$ and we use different arguments for $\\ga \\in (1,p)$ and $\\ga \\in [p, {\\frac{p(N-1)}{N-p}}).$\nFor $\\ga \\in (1,p),$ there exists $C > 0$ such that $\\abs{r(s)} \\leq C \\abs{s}^{p-1}$ for $s \\in {\\mathbb R}$. 
Therefore, using the finer trace embeddings (Proposition \\ref{Cianchi}), for $\\phi, v \\in {W^{1,p}(\\Om)},$ clearly we have \n\\begin{align}\\label{for < p}\n \\big |{\\big< F(\\phi), v \\big>} \\big | \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{\\left(\\frac{N-1}{p-1}, \\infty \\right)}\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^{p -1}_{{W^{1,p}(\\Om)}} \\@ifstar{\\oldnorm}{\\oldnorm*}{v}_{{W^{1,p}(\\Om)}}.\n \\end{align} For $\\gamma \\in [p, {\\frac{p(N-1)}{N-p}})$, using Proposition \\ref{Lorentz properties} and the finer trace embeddings (Proposition \\ref{Cianchi}), we have\n \\begin{align}\\label{trace embed 1}\n {W^{1,p}(\\Om)} \\hookrightarrow L^{{\\frac{p(N-1)}{N-p}}, \\ga}({\\partial} \\Om).\n\\end{align}\nSince \n$\n\\frac{1}{\\tilde{p}} + \\frac{(\\gamma-1)(N-p)}{p(N-1)} + \\frac{N-p}{p(N-1)} = 1,\n$\n for $\\phi, v \\in {W^{1,p}(\\Om)},$ using the generalized H\\\"{o}lder inequality (Proposition \\ref{Lorentz properties}), we obtain \n\\begin{align*}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{f} \\abs{r(\\phi)v} \\leq C \\tilde{p} \\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(\\tilde{p}, \\infty)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^{\\ga - 1}_{\\left( {\\frac{p(N-1)}{N-p}}, \\ga \\right)} \\@ifstar{\\oldnorm}{\\oldnorm*}{ v}_{\\left({\\frac{p(N-1)}{N-p}}, \\ga \\right)}.\n\\end{align*}\nTherefore, from \\eqref{trace embed 1}, \n\\begin{align}\\label{cont}\n \\big |{\\big< F(\\phi), v \\big>} \\big | \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(\\tilde{p}, \\infty)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^{\\ga -1}_{{W^{1,p}(\\Om)}} \\@ifstar{\\oldnorm}{\\oldnorm*}{v}_{{W^{1,p}(\\Om)}}, \\quad \\forall \\phi, v \\in {W^{1,p}(\\Om)},\n\\end{align}\nwhere $C = C(N,p) > 0$. \\\\ \nNow assume that $N,r$ and $f$ satisfy \\textbf{(H2)}. For $d \\in (1, \\infty)$, choose $a_i, b_i \\in (1, \\infty)$ (for $i = 1,2$) such that \n\\begin{align*}\n a_1, b_1 > \\frac{1}{\\ga - 1}, \\quad \\frac{1}{d} + \\frac{1}{a_1} + \\frac{1}{a_2} = 1 = \\frac{1}{N} + \\frac{1}{b_1} + \\frac{1}{b_2}.\n\\end{align*}\nFor $\\phi, v \\in {W^{1,p}(\\Om)},$ using the generalized H\\\"{o}lder inequality (Proposition \\ref{Lorentz properties}), we obtain \n\\begin{align}\\label{cont 1}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{f} \\abs{r(\\phi) v} &\\leq C d \\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(d, N)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^{\\ga - 1}_{(a_1(\\ga - 1),b_1(\\ga - 1))} \\@ifstar{\\oldnorm}{\\oldnorm*}{v}_{(a_2, b_2)}.\n\\end{align}\nNow by Proposition \\ref{properties} and using the trace embeddings (Proposition \\ref{classical} and Proposition \\ref{Cianchi}), we have \n\\begin{equation*}\n \\begin{aligned}\n& \\quad L^{d, \\infty;N}({\\partial} \\Om) \\hookrightarrow L^{d,N}({\\partial} \\Om),\\\\\n& \\quad {W^{1,N}(\\Om)} \\hookrightarrow L^{\\infty, N; -1}({\\partial} \\Om) \\hookrightarrow L^{a_1(\\gamma-1),b_1(\\gamma-1)}({\\partial} \\Om),\\\\\n& \\quad {W^{1,N}(\\Om)} \\hookrightarrow L^q({\\partial} \\Om) \\hookrightarrow L^{a_2, b_2}({\\partial} \\Om), \\; \\text{for} \\; q > a_2.\n \\end{aligned} \n\\end{equation*}\nTherefore, from \\eqref{cont 1} we get \\label{cont 2}\n\\begin{align*}\n \\big |{\\big< F(\\phi), v \\big>} \\big | \\leq C\\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(d, \\infty;N)}\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^{\\gamma-1}_{{W^{1,N}(\\Om)}} \\@ifstar{\\oldnorm}{\\oldnorm*}{v}_{{W^{1,N}(\\Om)}}, \\quad \\forall \\phi, v \\in {W^{1,N}(\\Om)},\n\\end{align*}\nwhere $C=C(N) > 0.$ Thus the map $F$ is well defined in both the cases. 
The continuity and the compactness of $F$ will follow from the similar set of arguments as given in the proof of Proposition \\ref{G cpt}. So we omit the proof.\n\\end{proof}\n\n\n\n\\begin{proposition}\\label{Growth}\nLet $p \\in (1, \\infty)$. Let $N,r$ and $f$ be given as in Proposition \\ref{compactmap1}. Then \n$$\n \\frac{\\@ifstar{\\oldnorm}{\\oldnorm*}{F(\\phi)}_{({W^{1,p}(\\Om)})'}}{\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}_{{W^{1,p}(\\Om)}}^{p-1}} \\longrightarrow 0, \\quad \\text{as} \\; \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}_{{W^{1,p}(\\Om)}} \\rightarrow 0.\n$$\n\\end{proposition} \n\n\n\\begin{proof}\nLet $\\ep > 0$ be given. We only prove the case when $N,r$ and $f$ satisfy \\textbf{(H1)}. For \\textbf{(H2)}, the proof is similar. For $\\ga \\in [p, {\\frac{p(N-1)}{N-p}}),$ using \\eqref{cont} we have, \n\\begin{align*}\n \\@ifstar{\\oldnorm}{\\oldnorm*}{F(\\phi)} \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(\\tilde{p}, \\infty)}\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^{\\ga -1}_{{W^{1,p}(\\Om)}}, \\quad \\forall \\phi \\in {W^{1,p}(\\Om)}.\n\\end{align*}\n Therefore, \n\\begin{align*}\n \\frac{\\@ifstar{\\oldnorm}{\\oldnorm*}{F(\\phi)}_{({W^{1,p}(\\Om)})'}}{\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}_{{W^{1,p}(\\Om)}}^{p-1}} \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(\\tilde{p}, \\infty)}\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^{\\ga -p}_{{W^{1,p}(\\Om)}}.\n\\end{align*}\nIf $\\ga \\in (1,p)$, then from \\textbf{(H1)} there exists $s_0 > 0$ and $C = C(s_0) > 0$ such that\n\\begin{equation}\\label{growth}\n\\begin{aligned}\n \\abs{r(s)} &< \\frac{\\ep}{\\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{\\left(\\frac{N-1}{p-1}, \\infty \\right)}} \\abs{s}^{p-1}, \\quad \\text{for} \\; \\abs{s} < s_0, \\\\\n \\abs{r(s)} \\leq C \\abs{s}^{p -1} \\quad &\\text{and} \\quad \\abs{r(s)} \\leq C \\abs{s}^{{\\frac{p(N-1)}{N-p}}-1}, \\quad \\text{for} \\; \\abs{s} \\geq s_0. 
\n\\end{aligned}\n\\end{equation}\nFor $\\phi \\in {W^{1,p}(\\Om)},$ set $A = \\{ y \\in {\\partial} \\Om : \\abs{ \\phi(y)} < s_0 \\}$ and $B = {\\partial} \\Om \\setminus A.$ For $v \\in {W^{1,p}(\\Om)}$, using \\eqref{growth} and \\eqref{for < p}, we get\n\\begin{align}\\label{grth1}\n \\int_{A} \\abs{f}\\abs{r(\\phi)}\\abs{ v} < \\frac{\\ep}{\\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{\\left(\\frac{N-1}{p-1}, \\infty \\right)}} \\int_{A} \\abs{f} \\abs{\\phi}^{p-1} \\abs{v} & \\leq C \\ep \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}_{{W^{1,p}(\\Om)}}^{p-1} \\@ifstar{\\oldnorm}{\\oldnorm*}{v}_{{W^{1,p}(\\Om)}}.\n\\end{align}\nTo estimate the above integral on $B$, we split $f= f_\\ep+ (f-f_\\ep)$ where $f_{\\ep} \\in {\\C^1}({\\partial} \\Om)$ with $\\@ifstar{\\oldnorm}{\\oldnorm*}{f - f_{\\ep}}_{\\left(\\frac{N-1}{p-1}, \\infty \\right)} < \\ep.$ Now \\eqref{growth} and \\eqref{for < p} yield\n\\begin{align}\\label{grth2}\n \\int_{B} \\abs{f - f_{\\ep}} \\abs{r(\\phi)} \\abs{v} \\leq C\\int_{B} \\abs{f - f_{\\ep}} \\abs{\\phi}^{p-1} \\abs{v} < C\\ep \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}_{{W^{1,p}(\\Om)}}^{p-1}\\@ifstar{\\oldnorm}{\\oldnorm*}{v}_{{W^{1,p}(\\Om)}},\n\\end{align}\nwhere $C=C(s_0,N,p)>0.$ On the other hand using \\eqref{growth}, H\\\"{o}lder inequality (Proposition \\ref{properties}) and the classical trace embeddings (Proposition \\ref{classical}), we obtain\n\\begin{align*}\n\\nonumber \\int_{B} \\abs{f_{\\ep}}\\abs{r(\\phi)}\\abs{v} & \\leq C \\int_{B} \\abs{f_{\\ep}}\\abs{\\phi}^{{\\frac{p(N-1)}{N-p}} - 1} \\abs{v} \\\\ \n & \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{f_{\\ep}}_{L^{\\infty}({\\partial} \\Om)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\abs{\\phi}^{{\\frac{p(N-1)}{N-p}} - 1}}_{L^{\\frac{p(N-1)}{N(p-1)}}({\\partial} \\Om)} \\@ifstar{\\oldnorm}{\\oldnorm*}{v}_{L^{\\frac{p(N-1)}{N-p}}({\\partial} \\Om)},\\\\\n &\\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{f_{\\ep}}_{L^{\\infty}({\\partial} \\Om)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}_{{W^{1,p}(\\Om)}}^{\\frac{N(p-1)}{N-p}} \\@ifstar{\\oldnorm}{\\oldnorm*}{v}_{{W^{1,p}(\\Om)}}, \n\\end{align*}\nwhere $C = C(N,p) > 0.$ Now using \\eqref{grth2} we conclude \n\\begin{align*} \n \\int_{B} \\abs{f}\\abs{r(\\phi)}\\abs{v} \\leq C \\left( \\ep \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}_{{W^{1,p}(\\Om)}}^{p-1} + \\@ifstar{\\oldnorm}{\\oldnorm*}{f_{\\ep}}_{L^{\\infty}({\\partial} \\Om)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}_{{W^{1,p}(\\Om)}}^{\\frac{N(p-1)}{N-p}} \\right) \\@ifstar{\\oldnorm}{\\oldnorm*}{v}_{{W^{1,p}(\\Om)}},\n\\end{align*}\nwhere $C = C(s_0,N,p) > 0$. Thus \\eqref{grth1} and the above inequality yield: \n\\begin{align*}\n \\@ifstar{\\oldnorm}{\\oldnorm*}{F(\\phi)}_{({W^{1,p}(\\Om)})'} 0 \\right\\}.\n$$ \nSince $g^+ \\not \\equiv 0$, we can show that the set $M_g$ is nonempty. The functional $J$ is not coercive on ${W^{1,p}(\\Om)}.$ However, using a Poincar\\'{e} type inequality on $M_g$ we show that $J$ is coercive on $M_g$.\n\n\n\\begin{Lemma}\\label{Poin}\nLet $g^+ \\not \\equiv 0,$ $\\int_{{\\partial} \\Om} g < 0,$ and \n$$\ng \\in \\left\\{\\begin{array}{ll} \n {\\mathcal F}_{\\frac{N-1}{p-1}} & \\text{ for } N > p,\\\\\n {\\mathcal G}_1 & \\text{ for } N = p. 
\n \\end{array}\\right.$$ \nThen there exists $m \\in (0,1)$ such that\n\\begin{align}\\label{Poincare}\n {\\displaystyle \\int_{\\Omega}} \\abs{\\Gr \\phi}^p \\geq m {\\displaystyle \\int_{\\Omega}} \\abs{\\phi}^p, \\quad \\forall \\phi \\in M_g.\n\\end{align}\n\\end{Lemma}\n\\begin{proof}\n On the contrary, assume that \\eqref{Poincare} does not hold for any $m\\in (0,1)$. Thus for each $n\\in {\\mathbb N},$ there exists $\\phi_n \\in M_g$ such that\n \\begin{align*}\n {\\displaystyle \\int_{\\Omega}} \\abs{\\Gr \\phi_n}^p < \\frac{1}{n} {\\displaystyle \\int_{\\Omega}} \\abs{\\phi_n}^p.\n \\end{align*}\n If we set $w_n = \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n}^{-1}_p \\phi_n,$ then $\\@ifstar{\\oldnorm}{\\oldnorm*}{w_n}_p = 1$ and $ \\int_{\\Om} \\abs{\\Gr w_n}^p < \\frac{1}{n}.$\n Thus $(w_n)$ is bounded and hence there exists a subsequence $(w_{n_k})$ of $(w_n)$ such that $w_{n_k} \\rightharpoonup w$ in ${W^{1,p}(\\Om)}.$ By weak lowersemicontinuity of $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\Gr \\cdot}_p$ we have $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\Gr w}_p =0.$ Hence the connectedness yields $w \\equiv c $ a.e. in $\\overline{\\Om}$. By the compactness of the embedding of ${W^{1,p}(\\Om)}$ into $L^p(\\Om)$, we get $\\@ifstar{\\oldnorm}{\\oldnorm*}{w}_p = 1$ and hence $\\abs{c} |\\Om|^{\\frac{1}{p}} = 1$. Therefore, $ \\int_{{\\partial} \\Om} g \\abs{w}^p = \\frac{1}{|\\Om|} \\int_{{\\partial} \\Om} g < 0$. On the other hand, $\\int_{{\\partial} \\Om} g \\abs{w_{n_k}}^p = \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_{n_k}}^{-p}_p \\int_{{\\partial} \\Om} g \\abs{\\phi_{n_k}}^p > 0 $. Thus by the compactness of $G$ (Proposition \\ref{G cpt}), we get $ \\int_{{\\partial} \\Om} g \\abs{w}^p = \\lim_{k \\rightarrow \\infty} \\int_{{\\partial} \\Om} g \\abs{w_{n_k}}^p \\geq 0$, a contradiction. Thus there must exists $m\\in (0,1)$ satisfying \\eqref{Poincare}. \n\\end{proof}\n\n\n\\begin{remark}\\label{Manifold}\nFor $g$ as given in Lemma \\ref{Poin}, consider the set\n\\begin{align*}\n N_g = \\left \\{\\phi \\in {W^{1,p}(\\Om)}: {\\displaystyle \\int_{\\partial \\Omega}} g \\abs{\\phi}^p = 1\\right \\} = G^-(1).\n\\end{align*} \nFor $\\phi \\in N_g,$ $ \\langle G'(\\phi), \\phi \\rangle = p \\neq 0$. Thus $1$ is a regular point of $G$ and $N_g$ is a $C^1$ manifold. Moreover (see \\cite[Proposition 6.4.35]{Drabek-Milota}),\n\\begin{align*}\n \\@ifstar{\\oldnorm}{\\oldnorm*}{{\\rm d}J(\\phi)} = \\min_{\\la \\in {\\mathbb R}} \\@ifstar{\\oldnorm}{\\oldnorm*}{(J' - \\la G')(\\phi)}, \\quad \\forall \\phi \\in N_g. \n\\end{align*}\n\\end{remark}\n\n\n\\begin{definition}\nA map $f \\in C^1(Y, {\\mathbb R})$ is said to satisfy {\\bf Palais-Smale (P. S.)} condition on a $C^1$ manifold $ M \\subset Y$, if $(\\phi_n)$ is a sequence in $M$ such that $f(\\phi_n) \\rightarrow c \\in {\\mathbb R}$ and $ \\Vert {\\rm d}f (\\phi_n) \\Vert \\rightarrow 0,$ then $(\\phi_n)$ has a subsequence that converges in $M$.\n\\end{definition}\n\n\n\\begin{Lemma}\\label{PS} \nLet $g$ be as given in Lemma \\ref{Poin}. Then $J$ satisfies the P. S. 
condition on $N_g.$ \n\\end{Lemma}\n\n\n\\begin{proof}\nLet $(\\phi_n)$ be a sequence in $N_g$ and $\\la \\in {\\mathbb R}$ such that $J(\\phi_n) \\rightarrow \\la$ and $\\@ifstar{\\oldnorm}{\\oldnorm*}{{\\rm d}J(\\phi_n)} \\rightarrow 0.$ By Remark \\ref{Manifold}, there exists a sequence $(\\la_n)$ such that $(J' - \\la_n G')(\\phi_n) \\rightarrow 0$ as $n \\rightarrow \\infty.$ By Lemma \\ref{Poin}, the sequence $(\\phi_n)$ is also bounded in ${W^{1,p}(\\Om)}.$ Now using the reflexivity of ${W^{1,p}(\\Om)}$, we get a subsequence $(\\phi_{n_k})$ such that $\\phi_{n_k} \\rightharpoonup \\phi$ in ${W^{1,p}(\\Om)}$. Since $N_g$ is weakly closed, $\\phi \\in N_g$. Also $\\la_{n_k} \\rightarrow \\la$ as $k \\rightarrow \\infty,$ since $$\\big< (J' - \\la_{n_k} G')(\\phi_{n_k}), \\phi_{n_k} \\big> = p(J(\\phi_{n_k}) - \\la_{n_k}).$$ Furthermore,\n\\begin{align*}\n \\big< J'(\\phi_{n_k}), \\phi_{n_k} - \\phi \\big> = \\big< (J' - \\la_{n_k} G')(\\phi_{n_k}), \\phi_{n_k} - \\phi \\big> + \\la_{n_k} \\big< G'(\\phi_{n_k}), \\phi_{n_k} - \\phi \\big>.\n\\end{align*}\nNow using the compactness of $G'$, we get $\\langle J'(\\phi_{n_k}), \\phi_{n_k} - \\phi \\rangle \\rightarrow 0.$ Moreover, as $J'$ is of class $\\al({W^{1,p}(\\Om)})$ (Proposition \\ref{class alpha}), the sequence $(\\phi_{n_k})$ converges to $\\phi$ in ${W^{1,p}(\\Om)}$. Therefore, $J$ satisfies the P. S. condition on $N_g.$ \n\\end{proof}\n\n\n\\section{Introduction}\nLet $\\Om$ be a bounded Lipschitz domain in ${\\mathbb R}^N$ $(N\\ge 2)$ with the boundary ${\\partial} \\Om$. For $p \\in (1, \\infty),$ we consider the following nonlinear Steklov bifurcation problem: \n\\begin{equation}\\label{Steklov pertub}\n\\begin{aligned}\n -\\Delta_p \\phi & = 0 \\ \\text{in}\\ \\Om,\\\\\n \\displaystyle \\abs{\\Gr \\phi}^{p-2}\\frac{{\\partial} \\phi}{{\\partial} \\nu} &= \\la \\left( g |\\phi|^{p-2}\\phi + f r(\\phi) \\right) \\ \\text{on} \\ {\\partial} \\Om, \n\\end{aligned}\n\\end{equation}\nwhere $\\Delta_p$ is the $p$-Laplace operator defined as $\\Delta_p(\\phi) = \\text{div}(\\abs{\\Gr \\phi}^{p-2} \\Gr \\phi),$ $f,g \\in L^1({\\partial} \\Om)$ are indefinite weights functions and $r \\in C({\\mathbb R})$ satisfying $r(0) = 0$. A function $\\phi \\in {W^{1,p}(\\Om)}$ is said to be a solution of \\eqref{Steklov pertub}, if\n\\begin{align}\\label{weak pertub}\n {\\displaystyle \\int_{\\Omega}} |\\Gr \\phi|^{p-2} \\Gr \\phi \\cdot \\Gr v\\; {\\rm d}x = \\la {\\displaystyle \\int_{\\partial \\Omega}} \\left( g \\abs{\\phi}^{p-2} \\phi v + f r(\\phi)v \\right) \\; {\\rm d}\\sigma, \\quad \\forall v \\in {W^{1,p}(\\Om)}.\n\\end{align}\nSince $r(0) = 0$, the set $\\{ (\\la,0): \\la \\in {\\mathbb R} \\}$ is always a trivial branch of solutions of \\eqref{Steklov pertub}. We say a real number $\\la$ is a bifurcation point of \\eqref{Steklov pertub}, if there exists a sequence $\\{(\\la_n , \\phi_n)\\}$ of nontrivial weak solutions of \\eqref{Steklov pertub} such that $\\la_n \\rightarrow \\la$ and $\\phi_n \\rightarrow 0$ in ${W^{1,p}(\\Om)}$ as $n \\rightarrow \\infty.$ \n\n\nThe bifurcation problem arises in numerous contexts in mathematical and engineering applications. For example, in reaction diffusion \\cite{JA}, elasticity theory \\cite{BRK, TW}, population genetics \\cite{BT}, water wave theory \\cite{Levi}, stability problems in engineering \\cite{Troger, Troger1}. 
Many authors have considered the following nonlinear bifurcation problem with various boundary conditions:\n\\begin{align}\\label{pert 1}\n -\\Delta_p \\phi = \\la g \\abs{\\phi}^{p-2} \\phi + h(\\la, x, \\phi) \\; \\text{in} \\; \\Om,\n \\end{align}\n where $h$ is assumed to be a Carath\\'{e}odory function satisfying $h(\\la, x, 0) = 0$. There are various sufficient conditions available on $g$ for the existence of a bifurcation point of \\eqref{pert 1}. For the Dirichlet boundary condition, $g = 1$ \\cite {Drabek2, Girg, DelPino}, $g \\in L^r(\\Om)$ with $r > \\frac{N}{2}$ \\cite{AGJ}, $g \\in L^{\\infty}({\\mathbb R}^N)$ \\cite{Drabek-Huang}. There are a few works that deal with $h$ of the form $\\la f(x) r(\\phi)$ with continuous $r$ satisfying $r(0) = 0$ and certain growth conditions at zero and at infinity; see \\cite{ Rumbos} for $g, f$ in H\\\"{o}lder continuous spaces, \\cite{GLR} for certain Lebesgue spaces, and \\cite{AMM, Lucia} for Lorentz spaces. The bifurcation problem \\eqref{pert 1} with the Neumann boundary condition is considered for $g = 1$ in \\cite{Drabek2} and for smooth $f, g$ in \\cite{Brown}. \n\n\nFor $p=2$, \\eqref{Steklov pertub} is considered in \\cite{Cushing, Cushing1, Stuart} for $g=1$ and continuous $f,$ and in \\cite{Pagani} for $f, g \\in L^{\\infty}({\\partial} \\Om)$. Indeed, there are many singular weights (not belonging to any of the Lebesgue spaces) that appear in problems in quantum mechanics and molecular physics, see \\cite{FMT, Ferreira, Frank}. In this article, we enlarge the class of weight functions beyond the classical Lebesgue spaces. More precisely, we consider $f,g$ in certain Lorentz-Zygmund spaces, and study the existence of a bifurcation point for \\eqref{Steklov pertub}.\n\nUsing the weak formulation, it is easy to see that \\eqref{pert 1} is equivalent to the following \n operator equation:\n\\begin{align}\\label{operator eqn}\n A(\\phi) = \\la G(\\phi) + H(\\la, \\phi), \\quad \\phi\\in X,\n\\end{align}\nwhere $X$ is the Banach space $W^{1,p}(\\Om)$ or $W^{1,p}_0(\\Om)$ depending on the boundary conditions, and $A, G, H(\\la,.): X\\ra X'$ are defined as $\\left< A(\\phi), v \\right> = \\int_{\\Om} \\abs{\\Gr \\phi}^{p-2} \\Gr \\phi \\cdot \\Gr v \\, {\\rm d}x;$ $\\left< G(\\phi), v \\right> = \\int_{\\Om} g \\abs{\\phi}^{p-2} \\phi v \\, {\\rm d}x;\\,$ $ \\left< H(\\la, \\phi), v \\right> = \\int_{\\Om} h(\\la, x, \\phi) v \\, {\\rm d}x.$ For $p = 2$, $A$ is an invertible map. Using the Leray-Schauder degree \\cite{Leray}, Krasnosel'skii in \\cite{Krasnosel} gave sufficient conditions on $L=A^{-1}G,K=A^{-1}H$ so that, for any eigenvalue $\\mu=\\la^{-1}$ of $L$ with odd multiplicity, $(\\la, 0)$ is a bifurcation point of \\eqref{operator eqn}. Later, Rabinowitz \\cite[Theorem 1.3]{Rabinowitz} extended this result by exhibiting a continuum of nontrivial solutions of \\eqref{operator eqn} bifurcating from $(\\la, 0)$ which is either unbounded in ${\\mathbb R}\\times X$ or meets $(\\la^*, 0)$, where $\\mu=(\\la^*)^{-1}$ is an eigenvalue of $L.$ Further, if $\\mu$ has multiplicity one, then this continuum decomposes into two subcontinua of nontrivial solutions of \\eqref{operator eqn}, see \\cite{Ambrosetti, Dancer, Dancer1, Rabinowitz, Rabinowitz1}. 
For $p \\neq 2$, the Leray-Schauder degree has been extended to certain maps from $X$ to $X'$ \\cite{ Browder, Skrypnik} and then an analogue of Rabinowitz's result is proved for the first eigenvalue of $A=\\la G$, see \\cite{Drabek2, Drabek-Huang, Girg, DelPino}.\n\n\nTo study the bifurcation problem \\eqref{Steklov pertub}, we consider the following nonlinear eigenvalue problem:\n\\begin{equation}\\label{Steklov weight}\n\\begin{aligned}\n -\\Delta_p \\phi & = 0 \\ \\text{in}\\ \\Om,\\\\\n \\displaystyle \\abs{\\Gr \\phi}^{p-2}\\frac{{\\partial} \\phi}{{\\partial} \\nu} &= \\la g |\\phi|^{p-2}\\phi \\ \\text{on} \\ {\\partial} \\Om. \n\\end{aligned}\n\\end{equation}\n For $N=2$, $p=2$ and $g = 1,$ the problem \\eqref{Steklov weight} was first considered by Steklov in \\cite{Stekloff}. A real number $\\la $ is said to be an eigenvalue of \\eqref{Steklov weight}, if there exists $\\phi \\in {W^{1,p}(\\Om)} \\setminus \\{ 0 \\}$ satisfying the following weak formulation: \n\\begin{align}\\label{Sw weak}\n{\\displaystyle \\int_{\\Omega}} |\\Gr \\phi|^{p-2} \\Gr \\phi \\cdot \\Gr v\\; {\\rm d}x = \\la {\\displaystyle \\int_{\\partial \\Omega}} g \\abs{\\phi}^{p-2}\\phi v \\; {\\rm d}\\sigma, \\quad \\forall v \\in {W^{1,p}(\\Om)}.\n\\end{align} \nFor $N>p$, the classical trace embeddings (\\cite[Theorem 4.2 and Theorem 6.2]{Nevcas}) give\n$${W^{1,p}(\\Om)} \\hookrightarrow L^{q}({\\partial} \\Om), \\text{ where } q \\in \\left[1, {\\frac{p(N-1)}{N-p}} \\right],$$ and for $q<{\\frac{p(N-1)}{N-p}}$ the above embedding is compact. Thus, by the H\\\"{o}lder inequality, the right hand side of \\eqref{Sw weak} is finite for $g\\in L^r({\\partial}\\Om)$ with $r\\in \\left[\\frac{N-1}{p-1},\\infty\\right]$ and for any $\\phi, v \\in {W^{1,p}(\\Om)}.$ We say an eigenvalue $\\la$ is principal, if there exists an eigenfunction of \\eqref{Steklov weight} corresponding to $\\la$ that does not change its sign in $\\overline{\\Om}.$ Notice that zero is always a principal eigenvalue of \\eqref{Steklov weight} and if $\\int_{{\\partial} \\Om} g \\geq 0$, then zero is the only principal eigenvalue. Thus for the existence of a positive principal eigenvalue of \\eqref{Steklov weight}, it is necessary that $g$ satisfies $\\int_{{\\partial} \\Om} g < 0$ and that the $(N-1)$-dimensional Hausdorff measure of $\\text{supp}(g^+)$ is nonzero. In \\cite{Torne}, for $g\\in L^r({\\partial}\\Om)$ with $r\\in \\left(\\frac{N-1}{p-1},\\infty\\right]$ satisfying the above necessary conditions, with the help of the above compact embedding, the authors proved the existence of a positive principal eigenvalue of \\eqref{Steklov weight}. For $N=p$, ${W^{1,p}(\\Om)}$ is embedded compactly in $L^{q}({\\partial} \\Om)$ for $q \\in [1, \\infty).$ Thus for $g \\in L^r({\\partial} \\Om)$ with $r\\in (1,\\infty]$ satisfying the above necessary condition, \\eqref{Steklov weight} admits a positive principal eigenvalue, as obtained in \\cite{Torne}. \n\n\n\nIn order to enlarge the class of weight functions beyond $L^r$, we use the trace embeddings due to Cianchi-Kerman-Pick. In \\cite{CianchiPick}, the authors improved the classical trace embeddings by providing finer trace embeddings as below:\n\\begin{align*}\n & (i) \\; \\text{For} \\; N > p: \\quad {W^{1,p}(\\Om)} \\hookrightarrow L^{\\frac{p(N-1)}{N-p},p}({\\partial} \\Om)\\subsetneq L^{\\frac{p(N-1)}{N-p}}({\\partial}\\Om). 
\\\\\n & (ii) \\; \\text{For} \\; N = p: \\quad {W^{1,p}(\\Om)} \\hookrightarrow L^{\\infty,N;-1}({\\partial} \\Om)\\subsetneq L^{q}({\\partial} \\Om), \\;\\; \\forall\\, q \\in [1, \\infty).\n\\end{align*}\nNevertheless, none of these embeddings are compact. In this article, we use the above trace embeddings and prove the existence of a positive principal eigenvalue of \\eqref{Steklov weight} for weight functions in certain Lorentz-Zygmund spaces. More precisely, for $1 \\leq d < \\infty,$ we consider the following closed subspaces: \n\\begin{equation*}\n\\begin{aligned}\n &{\\mathcal F}_{d} := \\text{closure of} \\; {\\C^1}({\\partial} \\Om) \\; \\text{in the Lorentz space} \\; L^{d,\\infty}({\\partial} \\Om),\\\\\n &{\\mathcal G}_{d} := \\text{closure of} \\; {\\C^1}({\\partial} \\Om) \\; \\text{in the Lorentz-Zygmund space} \\; L^{d, \\infty;N}({\\partial} \\Om). \n\\end{aligned}\n\\end{equation*}\n\n\n\\begin{theorem}\\label{Steklov existence}\nLet $p \\in (1, \\infty)$ and $N \\geq p.$ Let $g^+ \\not \\equiv 0$, $\\int_{{\\partial} \\Om} g < 0$ and \n$$g \\in \\left\\{\\begin{array}{ll}\n {\\mathcal F}_{\\frac{N-1}{p-1}} & \\text{ for } N > p,\\\\\n {\\mathcal G}_1 & \\text{ for } N = p.\n \\end{array}\\right.$$ \nThen $$\\la_1 = \\inf \\left\\{ \\displaystyle \\int_{\\Om} |\\Gr \\phi|^p : \\phi \\in {W^{1,p}(\\Om)}, {\\displaystyle \\int_{\\partial \\Omega}} g |\\phi|^p = 1 \\right\\} $$ is the unique positive principal eigenvalue of \\eqref{Steklov weight}. Furthermore, $\\la_1$ is simple and isolated.\n\\end{theorem}\n\nIndeed, $L^{\\frac{N-1}{p-1}}({\\partial} \\Om)$ is contained in ${\\mathcal F}_{\\frac{N-1}{p-1}}$ (for $N > p$) and $L^q({\\partial}\\Om)$ (for $q > 1$) is contained in ${\\mathcal G}_1$ (for $N=p$) (see Remark \\ref{Stricly contained}). Thus the above theorem extends the result of \\cite{Torne}.\n\n\n\nHaving obtained the right candidate for bifurcation point, we can study \\eqref{Steklov pertub} for weights in appropriate Lorentz-Zygmund spaces. For this, let us consider the following set:\n\\begin{align*}\n {\\mathcal S} = \\left\\{ (\\la, \\phi) \\in {\\mathbb R} \\times {W^{1,p}(\\Om)}: (\\la, \\phi) \\; \\text{is a solution of} \\; \\eqref{Steklov pertub} \\; \\text{and} \\; \\phi \\not \\equiv 0 \\right\\}.\n\\end{align*}\nWe say ${\\mathcal C} \\subset {\\mathcal S}$ is a continuum of nontrivial solutions of \\eqref{Steklov pertub} if it is connected in ${\\mathbb R} \\times {W^{1,p}(\\Om)}.$ In this article, we prove the existence of a continuum ${\\mathcal C}$ of nontrivial solutions of \\eqref{Steklov pertub} that bifurcates from $(\\la_1, 0)$. \n\n\n\nFor $p \\in (1, \\infty)$ and $g$ as in Theorem \\ref{Steklov existence}, depending on the dimension we make the following assumptions on $r$ and $f$:\n\\begin{equation*}\n({\\bf H1}) \\left\\{ \\begin{aligned}\n&{\\bf(a)} \\quad \\displaystyle\\lim_{\\abs{s} \\rightarrow 0}\\frac{\\abs{r(s)}}{\\abs{s}^{p-1}}=0 \\; \\text{ and} \\;\\abs{r(s)} \\leq C\\abs{s}^{\\gamma - 1} \\; \\text{for some} \\; \\gamma\\in \\left(1, \\frac{p(N-1)}{N-p} \\right). \\\\\n&{\\bf(b)} \\quad g\\in {\\mathcal F}_{\\frac{N-1}{p-1}}, \\; f \\in \\left\\{\\begin{array}{ll} \n {\\mathcal F}_{\\tilde{p}}, & \\text {if } \\gamma \\geq p, \\; \\text{where} \\; \\displaystyle \\frac{1}{\\tilde{p}} + \\frac{\\gamma(N-p)}{p(N-1)} = 1; \\\\ {\\mathcal F}_{\\frac{N-1}{p-1}}, & \\text{if} \\; \\ga < p. \\\\\n \\end{array} \\right. 
\n \\end{aligned} \\right.\n\\end{equation*}\n\\begin{equation*}\n({\\bf H2}) \\left\\{ \\begin{aligned}\n &{\\bf(a)} \\quad \\displaystyle \\lim_{\\abs{s} \\rightarrow 0}\\frac{\\abs{r(s)}}{\\abs{s}^{N-1}}=0 \\; \\text{ and} \\; \\abs{r(s)} \\leq C\\abs{s}^{\\gamma-1} \\; \\text{for some} \\; \\gamma\\in (1, \\infty). \\quad \\quad \\quad \\quad\\\\\n &{\\bf (b)} \\quad g\\in {\\mathcal G}_1, \\; f \\in {\\mathcal G}_d \\; \\text{with} \\; d > 1.\n \\end{aligned} \\right.\n\\end{equation*}\n\n\n\\begin{theorem}\\label{bifur}\nLet $p \\in (1, \\infty)$. Assume that $ r,g \\text{ and } f$ satisfy \n ({\\bf H1}) for $N>p$ and ({\\bf H2}) for $N=p$. Then $\\la_1$ is a bifurcation point of \\eqref{Steklov pertub}. Moreover, there exists a continuum of nontrivial solutions ${\\mathcal C}$ of \\eqref{Steklov pertub} such that $(\\la_1, 0) \\in \\overline{{\\mathcal C}}$ and either \n \\begin{enumerate}[(i)]\n \\item ${\\mathcal C}$ is unbounded, or \\item ${\\mathcal C}$ contains the point $(\\la, 0)$, where $\\la$ is an eigenvalue of \\eqref{Steklov weight} and $\\la \\neq \\la_1$.\n \\end{enumerate}\n\\end{theorem}\n\n\n\nThe rest of the article is organized as follows. In Section 2, we give the definition and list some properties of symmetrization and Lorentz-Zygmund spaces. We also state the classical trace embedding theorems and their refinements. The definition and some of the properties of the degree for a certain class of nonlinear maps between ${W^{1,p}(\\Om)}$ and $({W^{1,p}(\\Om)})'$ are also given in this section. In Section 3, we develop a functional framework associated with our problem and prove several results that we need in the proofs of our main theorems. Section 4 contains the proofs of Theorem \\ref{Steklov existence} and Theorem \\ref{bifur}.\n\n\n\\section{Preliminaries}\nIn this section, we briefly describe the one-dimensional decreasing rearrangement with respect to $(N-1)$-dimensional Hausdorff measure. Using this, we define Lorentz-Zygmund spaces over the boundary and give examples of functions in these spaces. Further, we state the classical trace embeddings of ${W^{1,p}(\\Om)}$ and their refinements due to Cianchi et al. We also define the degree for a certain class of nonlinear maps and list some of the results that we use in this article. \n\n\n\\subsection{Symmetrization}\nLet $ \\Om \\subset {\\mathbb R}^N $ be a bounded Lipschitz domain. Let $ \\mathcal{M}({\\partial} \\Om)$ be the collection of all real valued $(N-1)$-dimensional Hausdorff measurable functions defined on ${\\partial} \\Om$. Given a function $f \\in \\mathcal{M}({\\partial} \\Om)$ and $s > 0,$ we define $E_{f}(s) = \\{ x \\in {\\partial} \\Om : |f(x)| > s \\}.$ The \\textit{distribution function} $ \\alpha_{f} $ of $f$ is defined as $\\alpha_{f}(s) = {\\mathcal{H}}^{N-1}(E_{f}(s)) \\; \\text{for} \\; s > 0.$\nWe define the \\textit{one-dimensional decreasing rearrangement} $ f^{*} $ of $f$ as\n\\begin{align*}\n f^*(t) = \\inf \\left\\{ s > 0 : \\alpha_{f}(s) < t \\right\\}, \\; \\mbox{ for } t > 0. \n\\end{align*}\nThe map $f \\mapsto f^*$ is not sub-additive. 
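To illustrate these notions, consider for instance the following elementary computation (the set $E$ and the constants $m$, $a$, $b$ below are only illustrative and are not used elsewhere). If $E \\subset {\\partial} \\Om$ is measurable with ${\\mathcal{H}}^{N-1}(E) = m \\in (0, {\\mathcal{H}}^{N-1}({\\partial} \\Om))$ and $f = a \\chi_{E} + b \\chi_{{\\partial} \\Om \\setminus E}$ with $a > b > 0$, then\n\\begin{align*}\n \\alpha_{f}(s) = \\left\\{\\begin{array}{ll}\n {\\mathcal{H}}^{N-1}({\\partial} \\Om), & 0 < s < b,\\\\\n m, & b \\leq s < a,\\\\\n 0, & s \\geq a,\n \\end{array}\\right. \\qquad\n f^*(t) = \\left\\{\\begin{array}{ll}\n a, & 0 < t \\leq m,\\\\\n b, & m < t \\leq {\\mathcal{H}}^{N-1}({\\partial} \\Om),\\\\\n 0, & t > {\\mathcal{H}}^{N-1}({\\partial} \\Om).\n \\end{array}\\right.\n\\end{align*}\nIn particular, comparing $(\\chi_{E} + \\chi_{{\\partial} \\Om \\setminus E})^* \\equiv 1$ on $(0, {\\mathcal{H}}^{N-1}({\\partial} \\Om))$ with $\\chi_{E}^*(t) + \\chi_{{\\partial} \\Om \\setminus E}^*(t) = 0$ for $t > \\max\\{m, {\\mathcal{H}}^{N-1}({\\partial} \\Om) - m\\}$, one sees that the inequality $(f+g)^* \\leq f^* + g^*$ may indeed fail.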
However, we obtain a sub-additive function from $f^*,$ namely the maximal function $f^{**}$ of $f^*$, defined by \n\\begin{equation*}\\label{maximal}\n f^{**}(t)=\\frac{1}{t}\\int_0^tf^*(\\tau)\\, {\\rm d}\\tau, \\quad t>0.\n\\end{equation*}\nNext we state one important inequality concerning the symmetrization \\cite[Theorem 3.2.10]{EdEv}.\n\n\n\\begin{proposition}\\label{HL}(Hardy-Littlewood inequality)\nLet $N \\geq 2$ and let $\\Om$ be a bounded Lipschitz domain in ${\\mathbb R}^N$. Let $f$ and $g$ be nonnegative measurable functions defined on ${\\partial} \\Om$. Then\n$$\\int_{ {\\partial} \\Omega} fg \\; {\\rm d}\\sigma \\leq \\int_0^{{\\mathcal{H}}^{N-1}({\\partial} \\Om)}f^*(t) g^*(t) \\;{\\rm d}t.$$\n\\end{proposition}\n\n\n\\subsection{Lorentz-Zygmund space}\nThe Lorentz-Zygmund spaces are three parameter family of function spaces that refine the classical Lebesgue spaces. For more details on Lorentz-Zygmund spaces, we refer to \\cite{ColinRudnik, ET}. Here we consider the Lorentz-Zygmund spaces over ${\\partial} \\Om$ of a bounded domain $\\Om$. \n\n\nLet $\\Om \\subset {\\mathbb R}^N$ be a bounded Lipschitz domain. Let $f \\in \\mathcal{M}( {\\partial} \\Omega)$ and let $l_1(t) = 1 + \\abs{\\log(t)}$. For $(p,q, \\al) \\in [1,\\infty]\\times[1,\\infty]\\times {\\mathbb R}$, consider the following quantity:\n\\begin{align*} \n |f|_{(p,q; \\al)} & := \\@ifstar{\\oldnorm}{\\oldnorm*}{t^{\\frac{1}{p}-\\frac{1}{q}} {l_1(t)}^{\\al} f^{*}(t)}_{{L^q((0,{\\mathcal{H}}^{N-1}({\\partial} \\Om)))}} \\\\\n & = \\left\\{\\begin{array}{ll}\n \\left(\\displaystyle\\int_0^{{\\mathcal{H}}^{N-1}({\\partial} \\Om)} \\left[t^{\\frac{1}{p}} {l_1(t)}^{\\alpha} {f^{*}(t)} \\right]^q \\frac{{\\rm d}t}{t} \\right)^{\\frac{1}{q}}, &\\; 1\\leq q < \\infty; \\\\ \n \\displaystyle\\sup_{0 < t < {\\mathcal{H}}^{N-1}({\\partial} \\Om)} t^{\\frac{1}{p}} {l_1(t)}^{\\alpha} {f^{*}(t)}, &\\; q=\\infty.\n \\end{array}\\right.\n \\end{align*}\nThe Lorentz-Zygmund space $L^{p,q;\\al}({\\partial} \\Om)$ is defined as\n\\[ L^{p,q;\\al}({\\partial} \\Om) := \\left \\{ f\\in \\mathcal{M}( {\\partial} \\Om): \\, |f|_{(p,q;\\al)}<\\infty \\right \\},\\]\nwhere $ |f|_{(p,q;\\al)}$ is a complete quasi norm on $L^{p,q;\\al}({\\partial} \\Om).$ For $p > 1,$ \n$$\n\\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(p,q, \\al)} = \\@ifstar{\\oldnorm}{\\oldnorm*}{t^{\\frac{1}{p}-\\frac{1}{q}} {l_1(t)}^{\\al} f^{**}(t)}_{{L^q((0,{\\mathcal{H}}^{N-1}({\\partial} \\Om)))}}\n$$ \nis a norm in $L^{p,q;\\al}({\\partial} \\Om)$ which is equivalent to $\\abs{f}_{(p,q, \\al)}$ \\cite[Corollary 8.2]{ColinRudnik}. In particular, $L^{p,q;0}({\\partial} \\Om)$ coincides with the Lorentz space $L^{p,q}({\\partial} \\Om)$ introduced by Lorentz in \\cite{Lorentz}. In the following proposition we discuss some important properties of the Lorentz-Zygmund spaces that we will use in this article. \n\n\n\\begin{proposition}\\label{properties}\nLet $p,q,r,s \\in [1, \\infty]$ and $\\al, \\beta \\in (-\\infty, \\infty).$\n\\begin{enumerate}[(i)]\n \\item Let $p \\in (1, \\infty)$. If $f \\in L^{\\infty, p; -1}({\\partial} \\Om)$, then $\\abs{f}^p \\in L^{\\infty,1; -p}({\\partial} \\Om)$. Moreover, there exists $C > 0$ such that $$ \\@ifstar{\\oldnorm}{\\oldnorm*}{\\abs{f}^p}_{(\\infty,1; -p)} \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{f}^p_{(\\infty,p; -1)}. $$\n \\item Let $p \\in (1, \\infty)$. 
Then the space $ L^{1, \\infty;p}({\\partial} \\Om)$ is contained in the dual space of $ L^{\\infty,1;-p}({\\partial} \\Om)$.\n \\item If $ r > p,$ then $L^{r,s; \\beta}({\\partial} \\Om) \\hookrightarrow L^{p,q; \\al}({\\partial} \\Om)$, i.e., there exists a constant $C > 0$ such that\n \\begin{align}\\label{LZ3}\n \\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(p,q,\\al)} \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(r,s,\\beta)}, \\quad \\forall f \\in L^{r,s; \\beta}({\\partial} \\Om).\n \\end{align} \n \\item If either $q \\leq s$ and $\\al \\geq \\beta$ or, $q > s$ and $\\al + \\frac{1}{q} > \\beta + \\frac{1}{s},$ then $L^{p,q; \\al}({\\partial} \\Om) \\hookrightarrow L^{p,s;\\beta}({\\partial} \\Om)$, i.e., there exists $C > 0$ such that \n \\begin{align}\\label{LZ4}\n \\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(p,s;\\beta )} \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(p,q; \\al)}, \\quad \\forall f \\in L^{p,q; \\al}({\\partial} \\Om).\n \\end{align} \n \n \\item For $p \\in (1, \\infty)$, $ L^p({\\partial} \\Om) \\hookrightarrow L^{1, \\infty;\\alpha}({\\partial} \\Om)$.\n\\end{enumerate}\n\\end{proposition}\n\n\n\\begin{proof} \n\n\\noi (i) If $f \\in L^{\\infty, p; -1}({\\partial} \\Om)$, then $\\abs{f}_{(\\infty, p; -1)} < \\infty.$ Hence using $(|f|^p)^{*} = ( f^{*} )^p,$ we get \n\\begin{align*}\n \\abs{ |f|^p }_{(\\infty, 1; -p)} = \\int_{0}^{{\\mathcal{H}}^{N-1}({\\partial} \\Om)} \\frac{(|f|^p)^{*}(t)}{ {(l_1(t))}^p} \\; \\frac{dt}{t} = \\left( \\left( \\int_{0}^{{\\mathcal{H}}^{N-1}({\\partial} \\Om)} \\left( \\frac{f^{*}(t)}{l_1(t)} \\right)^p \\; \\frac{dt}{t} \\right)^{\\frac{1}{p}} \\right)^p = \\abs{ f }^p_{(\\infty, p; -1)}.\n\\end{align*}\nTherefore, $\\abs{f}^p \\in L^{\\infty, 1; -p}({\\partial} \\Om).$ Now by the equivalence of the norms, there exist $C_1, C_2 > 0$ such that \n\\begin{align*}\n\\@ifstar{\\oldnorm}{\\oldnorm*}{ |f|^p }_{(\\infty, 1; -p)} \\le C_1 \\abs{ f }^p_{(\\infty, p; -1)} \\le C_1 C_2 \\@ifstar{\\oldnorm}{\\oldnorm*}{ f }^p_{(\\infty, p; -1)}. \n\\end{align*}\nThus there exists $C > 0$ such that $\\@ifstar{\\oldnorm}{\\oldnorm*}{ |f|^p }_{(\\infty, 1; -p)} \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{ f }^p_{(\\infty, p; -1)}.$ \\\\\n\\noi (ii) Let $f \\in L^{\\infty,1;-p}({\\partial} \\Om)$ and $g \\in L^{1, \\infty;p}({\\partial} \\Om)$. Then using the Hardy-Littlewood inequality (Proposition \\ref{HL}) and the pointwise inequalities $f^* \\leq f^{**}$, $g^* \\leq g^{**}$, \n\\begin{align*}\n{\\displaystyle \\int_{\\partial \\Omega}} \\abs{fg} \\; {\\rm d}\\sigma &\\leq \\int_0^{{\\mathcal{H}}^{N-1}({\\partial} \\Om)}f^*(t) g^*(t)\\; {\\rm d}t \\leq \\int_0^{{\\mathcal{H}}^{N-1}({\\partial} \\Om)}f^{**}(t) g^{**}(t)\\; {\\rm d}t \\\\\n&\\leq \\left( \\sup_{0 < t < {\\mathcal{H}}^{N-1}({\\partial} \\Om)}{t g^{**}(t)(l_1(t))^p} \\right) \\left( \\int_0^{{\\mathcal{H}}^{N-1}({\\partial} \\Om)} \\frac{f^{**}(t)}{(l_1(t))^p} \\; \\frac{dt}{t} \\right) \\\\\n& = \\@ifstar{\\oldnorm}{\\oldnorm*}{g}_{(1, \\infty;p)}\\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(\\infty,1;-p)}. \n\\end{align*}\nThus $g$ acts as a bounded linear functional on $L^{\\infty,1;-p}({\\partial} \\Om)$, i.e., $g$ belongs to the dual space of $L^{\\infty,1;-p}({\\partial} \\Om)$. \\\\\n\\noi (iii) Follows from \\cite[Theorem 9.1]{ColinRudnik}. (iv) Follows from \\cite[Theorem 9.3]{ColinRudnik}. 
\\\\\n\\noi (v) Let $f \\in L^p({\\partial} \\Om).$ Since $p > 1$, using \\eqref{LZ3} there exists $C > 0$ such that \n$$\\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(1, \\infty;\\alpha)} \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{L^p({\\partial} \\Om)}.$$ Therefore, $ L^p({\\partial} \\Om)$ is continuously embedded into $L^{1, \\infty;\\alpha}({\\partial} \\Om)$.\n\\end{proof}\n\n\nThe following characterization of the function space ${\\mathcal G}_d$ follows by similar arguments as in the proof of \\cite[Theroem 16]{Anoop}. \n\n\n\\begin{proposition}\\label{G_d}\nLet $N \\geq 2$ and $d \\in [1, \\infty).$ Then $f \\in {\\mathcal G}_d$ if and only if \n$$\n \\underset{t \\rightarrow 0}{\\lim} \\; t^{\\frac{1}{d}}(l_1(t))^N f^*(t) = 0.\n$$ \n\\end{proposition}\n\nNext we list some properties of the Lorentz spaces. For more details on Lorentz spaces, we refer to \\cite{Adams, EdEv, Hunt}. \n\n\n\\begin{proposition}\\label{Lorentz properties}\n Let $p,q,r \\in [1, \\infty]$.\n \\begin{enumerate}[(i)]\n \\item Generalized H\\\"{o}lder inequality: Let $f \\in L^{p_1, q_1}({\\partial} \\Om)$ and $g \\in L^{p_2, q_2}({\\partial} \\Om)$, where $(p_i, q_i) \\in (1, \\infty) \\times [1, \\infty]$ for $i = 1,2$. If $(p,q)$ be such that $\\frac{1}{p} = \\frac{1}{p_1} + \\frac{1}{p_2}$ and \n $\\frac{1}{q} = \\frac{1}{q_1} + \\frac{1}{q_2},$ then \n \\begin{align*}\n \\@ifstar{\\oldnorm}{\\oldnorm*}{fg}_{(p,q)} \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{f}_{(p_1,q_1)} \\@ifstar{\\oldnorm}{\\oldnorm*}{g}_{(p_2,q_2)},\n \\end{align*}\n where $C = C(p) > 0$ is a constant such that $C = 1,$ if $p=1$ and $C = {p^{\\prime}},$ if $p > 1$.\n \\item For $r>0$, $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\abs{f}^{r}}_{\\left(\\frac{p}{r}, \\frac{q}{r} \\right)} = \\@ifstar{\\oldnorm}{\\oldnorm*}{f}^{r}_{(p,q)}.$ \n \\end{enumerate}\n\\end{proposition}\n\n\n\\begin{proof} Proof of (i) follows from \\cite[Theorem 4.5]{Hunt}. For $\\al = 0,$ proof of (ii) directly follows from the definition of the Lorentz-Zygmund space.\n\\end{proof}\n\n\nIn the following we list some properties of the function space ${\\mathcal F}_d.$\n\n\n\\begin{proposition}\\label{F_d}\nLet $d,q \\in (1, \\infty).$ Then\n\\begin{enumerate}[(i)] \n \\item $L^{d,q}({\\partial} \\Om) \\subset {\\mathcal F}_{d}$.\n \\item Let $h \\in L^{d, \\infty}({\\partial} \\Om)$ and $h > 0$. Let $f \\in L^1({\\partial} \\Om)$. If $\\int_{{\\partial} \\Om} h^{d-q} \\abs{f}^q < \\infty$ for $q \\ge d$, then $f \\in L^{d, q}({\\partial} \\Om)$ and hence $f \\in {\\mathcal F}_{d}.$\n \\item $f \\in {\\mathcal F}_{d}$ if and only if \n \\begin{align*}\n \\lim_{t \\rightarrow 0}t^{\\frac{1}{d}} f^*(t) = 0 = \\lim_{t \\rightarrow {\\mathcal{H}}^{N-1}({\\partial} \\Om)}t^{\\frac{1}{d}} f^*(t).\n \\end{align*}\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\n\\noi (i) Using \\eqref{LZ4} for $\\al = \\beta = 0$ and by the density arguments, we get $L^{d,q}({\\partial} \\Om) \\subset {\\mathcal F}_{d}$. \\\\\n\\noi (ii) The result is obvious for $q = d$. For $q > d$, set $g = h^{\\frac{d}{q} - 1} \\abs{f}$. 
Then $g \\in L^q({\\partial} \\Om).$ Using Proposition \\ref{Lorentz properties}, $h^{1- \\frac{d}{q}} \\in L^{\\frac{dq}{q-d}, \\infty}({\\partial} \\Om).$ Therefore, by the generalized H\\\"{o}lder inequality (Proposition \\ref{Lorentz properties}), $f \\in L^{d, q}({\\partial} \\Om).$ \\\\\n\\noi (iii) Follows by the similar arguments as in \\cite[Theorem 3.3]{AMM}.\n\\end{proof}\n\n\n\\subsection{Examples}\nNow we give some examples of functions in the Lorentz-Zygmund spaces that are defined on ${\\partial} \\Om$ of a Lipschitz bounded domain $\\Om$. \n\n\n\\begin{example}\\label{Ex1}\nFor $\\Om = \\{ (x,y) \\in {\\mathbb R}^2 : x^2 + y^2 < 1 \\}$, we consider \n\\begin{equation*} \n g_1(x,y) = \\abs{y}^{-\\frac{1}{2}}, \\quad \\forall \\; (x,y) \\in {\\partial} \\Om.\n\\end{equation*}\nFor $s> 0,$ we can compute\n\\begin{equation*} \n \\begin{aligned}\n \\al_{g_1}(s) = \\left\\{\\begin{array}{ll} \n 2\\pi, & \\text {for} \\quad 0 < s \\leq 1,\\\\ \n 4 \\arcsin\\left(\\frac{1}{s^2}\\right), & \\text {for} \\quad s > 1,\\\\\n \\end{array} \\right. \n \\end{aligned}\n\\end{equation*}\nand hence $g_1^*(t) = \\left( \\text{cosec} \\left( \\frac{t}{4} \\right) \\right)^{\\frac{1}{2}}$ for $t \\in (0, 2\\pi)$. Moreover, \n\\begin{align*}\n \\lim_{t \\rightarrow 0} t^{\\frac{1}{2}} \\left( \\text{cosec} \\left( \\frac{t}{4} \\right) \\right)^{\\frac{1}{2}} > 0; \\quad \\lim_{t \\rightarrow 0} t (l_1(t))^2 \\left( \\text{cosec} \\left( \\frac{t}{4} \\right) \\right)^{\\frac{1}{2}} > 0.\n\\end{align*}\nHence $g_1 \\not \\in {\\mathcal F}_2$ (by Proposition \\ref{F_d}) and $g_1 \\not \\in {\\mathcal G}_1$ (by Proposition \\ref{G_d}).\n\\end{example}\n\n\n\\begin{example}\\label{Ex2}\nLet $p \\in (1, \\infty)$ and $N > p$. For $0 < R < \\frac{1}{2}$, let $$ \\Om = \\left\\{ (x_1, x_2, \\ldots, x_N) \\in {\\mathbb R}^N: |x_i| < R \\; (\\text{for} \\; i = 1, \\ldots, N-1), 0 < x_N < 2R \\right\\} $$ and $A= \\left\\{ (x_1, x_2, \\ldots, x_{N-1}, 0): |x_i| < R \\right\\}$. Now consider \n\\begin{equation*} \n \\begin{aligned}\n g_2(x) = \\left\\{\\begin{array}{ll} \n \\abs{x_1 \\log(\\abs{x_1})}^{-\\frac{p-1}{N-1}}, & \\text {for} \\quad x \\in A,\\\\ \n 0, & \\text{for} \\quad x \\in {\\partial} \\Om \\setminus A.\\\\\n \\end{array} \\right. \n \\end{aligned}\n\\end{equation*}\nClearly $g_2 \\in L^1({\\partial} \\Om)$ and $g_2 \\not \\in L^r({\\partial} \\Om)$ for $r \\in \\Big[ \\frac{N-1}{p-1}, \\infty \\Big).$ Let\n\\begin{equation*} \n \\begin{aligned}\n h(x) = \\left\\{\\begin{array}{ll} \n \\abs{x_1}^{-\\frac{p-1}{N-1}}, & \\text {for} \\quad x \\in A,\\\\ \n 0, & \\text{for} \\quad x \\in {\\partial} \\Om \\setminus A.\\\\\n \\end{array} \\right. \n \\end{aligned}\n\\end{equation*}\nWe calculate $\\al_h(s) = 2^{N-1}R^{N-2} s^{-\\frac{N-1}{p-1}}$ and $h^*(t) = (2^{N-1}R^{N-2})^{\\frac{p-1}{N-1}} t^{-\\frac{p-1}{N-1}}.$\nTherefore, $h \\in L^{\\frac{N-1}{p-1},\\infty}({\\partial} \\Om)$. For $q = \\frac{N}{p-1},$ \n\\begin{equation*} \n\\begin{aligned}\n h^{\\frac{N-1}{p-1} - q}(x) = \\left\\{\\begin{array}{ll} \n \\abs{x_1}^{\\frac{1}{N-1}}, & \\text {for} \\quad x \\in A,\\\\ \n 0, & \\text{for} \\quad x \\in {\\partial} \\Om \\setminus A.\\\\\n \\end{array} \\right. \n \\end{aligned}\n\\end{equation*}\nFurther, \n\\begin{align*}\n \\int_{{\\partial} \\Om} h^{\\frac{N-1}{p-1} - q} g_2^q \\, {\\rm d}\\sigma = 2^{N-1} R^{N-2} \\int_0^{R} t^{-1} \\abs{\\log(t)}^{- \\frac{N}{N-1}} \\, {\\rm d}t < \\infty.\n\\end{align*}\nTherefore, by Proposition \\ref{F_d}, $g_2 \\in L^{\\frac{N-1}{p-1},q }({\\partial} \\Om)$ and hence $g_2 \\in {\\mathcal F}_{\\frac{N-1}{p-1}}.$ \n\\end{example}\n\n\n\\begin{example}\\label{Ex3}\nFor $0 < R < 1$, let $\\Om$ and $A$ be given as in the above example. 
For $q \\in (1, \\infty),$ we consider \n\\begin{equation*} \n \\begin{aligned}\ng_3(x) = \\left\\{\\begin{array}{ll} \n \\abs{x_1}^{- \\frac{1}{q}}, & \\text {for} \\quad x \\in A,\\\\ \n 0, & \\text{for} \\quad x \\in {\\partial} \\Om \\setminus A.\\\\\n \\end{array} \\right. \n \\end{aligned}\n\\end{equation*}\nClearly $g_3 \\not \\in L^q({\\partial} \\Om)$ for $q \\in (1, \\infty).$ Further, we calculate $\\al_{g_3}(s) = 2^{N-1}R^{N-2} s^{-q}$ and \n$g^*_3(t) = (2^{N-1}R^{N-2})^{\\frac{1}{q}}t^{- \\frac{1}{q}}.$ Moreover, \n$$ \n\\underset{t \\rightarrow 0}{\\lim} \\; t^{\\frac{q-1}{q}} (1+ |\\log(t)|)^N = 0 \n$$ \nand hence $g_3 \\in {\\mathcal G}_1$ (by Proposition \\ref{G_d}).\n\\end{example}\n\n\n\n \\subsection{Trace embeddings}\nNow we state the trace embeddings that play a vital role in this article. First, we state the classical trace embeddings to the Lebesgue spaces \\cite[Theorem 4.2, Theorem 4.6, Theorem 6.2]{Nevcas}.\n\n\n\\begin{proposition}[Classical trace embeddings]\\label{classical}\nLet $N \\geq 2$ and let $\\Om$ be a Lipschitz bounded domain in ${\\mathbb R}^N$. Let $p\\in (1,\\infty)$. Then the following embeddings hold:\n\\begin{enumerate}[(i)]\n \\item If $N > p$ and $q \\in \\left[1, {\\frac{p(N-1)}{N-p}} \\right]$, then ${W^{1,p}(\\Om)} \\hookrightarrow L^{q}({\\partial} \\Om),$ i.e., there exists $C = C(N,p) > 0$ satisfying \n \\begin{align*}\n \\|\\phi \\|_{L^{q}({\\partial} \\Om)} \\leq C \\| \\phi \\|_{{W^{1,p}(\\Om)}}, \\quad \\forall \\phi \\in {W^{1,p}(\\Om)}. \n \\end{align*}\n If $q \\neq {\\frac{p(N-1)}{N-p}},$ then the above embedding is compact.\n \\item If $N=p$ and $q \\in \\left[ 1, \\infty \\right)$, then ${W^{1,p}(\\Om)} \\hookrightarrow L^q({\\partial} \\Om),$ i.e., there exists $C = C(N) > 0$ satisfying \n \\begin{align*}\n \\|\\phi \\|_{L^q({\\partial} \\Om)} \\leq C \\| \\phi \\|_{{W^{1,p}(\\Om)}}, \\quad \\forall \\phi \\in {W^{1,p}(\\Om)}, \n \\end{align*}\n and the above embedding is compact.\n\\end{enumerate}\n\\end{proposition}\n\n\nThe following embeddings are due to Cianchi et al. \\cite[Theorem 1.3]{CianchiPick} that extends the classical trace embeddings to the Lebesgue spaces with the finer embeddings to the Lorentz-Zygmund spaces. \n\n\n\\begin{proposition}[Finer trace embeddings]\\label{Cianchi}\nLet $N \\geq 2$ and let $\\Om$ be a Lipschitz bounded domain in ${\\mathbb R}^N$. Let $p\\in (1,\\infty)$. Then the following embeddings hold: \n\\begin{enumerate}[(i)]\n \\item If $N > p$, then ${W^{1,p}(\\Om)} \\hookrightarrow L^{\\frac{p(N-1)}{N-p},p}({\\partial} \\Om),$ i.e., there exists $C = C(N,p) > 0$ such that \n \\begin{align*}\n \\|\\phi \\|_{\\left(\\frac{p(N-1)}{N-p},p\\right)} \\leq C \\| \\phi \\|_{{W^{1,p}(\\Om)}}, \\quad \\forall \\phi \\in {W^{1,p}(\\Om)}. \n \\end{align*}\n \\item If $N=p$, then ${W^{1,p}(\\Om)} \\hookrightarrow L^{\\infty,N;-1}({\\partial} \\Om)$, i.e., there exists $C = C(N) > 0$ such that\n \\begin{align*}\n \\|\\phi \\|_{(\\infty,N;-1)} \\leq C \\| \\phi \\|_{{W^{1,p}(\\Om)}}, \\quad \\forall \\phi \\in {W^{1,p}(\\Om)}. \n \\end{align*}\n\\end{enumerate}\n\\end{proposition}\n\n\nThe above finer trace embeddings help us to get the weighted trace inequality for a class of weight functions defined on the boundary. \n\n\n\\begin{proposition}\\label{Hardy boundary}\n(i) Let $N > p$ and $g \\in L^{\\frac{N-1}{p-1}, \\infty}({\\partial} \\Om)$. 
Then there exists a constant $C = C(N,p) > 0$ satisfying \\begin{align}\\label{N>p}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g} \\abs{\\phi}^p \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{g}_{\\left(\\frac{N-1}{p-1}, \\infty \\right)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^p_{{W^{1,p}(\\Om)}}, \\quad \\forall \\phi \\in {W^{1,p}(\\Om)}. \n\\end{align}\n\\noi (ii) Let $N =p$ and $g \\in L^{1, \\infty;N}({\\partial} \\Om)$. Then there exists a constant $C = C(N) > 0$ satisfying \n\\begin{align}\\label{N=p}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g} \\abs{\\phi}^p \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{g}_{(1, \\infty;N)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^p_{{W^{1,p}(\\Om)}}, \\quad \\forall \\phi \\in {W^{1,p}(\\Om)}.\n\\end{align}\n\\end{proposition}\n\n\n\\begin{proof}\n(i) For $\\phi \\in {W^{1,p}(\\Om)}$, by the generalized H\\\"{o}lder inequality and part (ii) of Proposition \\ref{Lorentz properties}, we obtain \n\\begin{align*}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g} \\abs{\\phi}^p \\leq \\@ifstar{\\oldnorm}{\\oldnorm*}{g}_{\\left(\\frac{N-1}{p-1}, \\infty \\right)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\abs{\\phi}^p}_{\\left(\\frac{N-1}{N-p},1 \\right)} = \\@ifstar{\\oldnorm}{\\oldnorm*}{g}_{\\left( \\frac{N-1}{p-1}, \\infty \\right)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^p_{\\left(\\frac{p(N-1)}{N-p},p \\right)}. \n\\end{align*}\nNow using the finer trace embeddings (Proposition \\ref{Cianchi}), we get \n\\begin{align*}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g} \\abs{\\phi}^p \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{g}_{\\left(\\frac{N-1}{p-1}, \\infty \\right)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^p_{{W^{1,p}(\\Om)}}, \\quad \\forall \\phi \\in {W^{1,p}(\\Om)}, \n\\end{align*}\nwhere $C = C(N,p) > 0$ depends only on the embedding constant in Proposition \\ref{Cianchi}. \n\n\n\\noi (ii) For $\\phi \\in {W^{1,N}(\\Om)}$, using parts (i) and (ii) of Proposition \\ref{properties}, we obtain\n\\begin{align*}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g} \\abs{\\phi}^N \\leq \\@ifstar{\\oldnorm}{\\oldnorm*}{g}_{(1, \\infty;N)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\abs{\\phi}^N}_{(\\infty, 1;-N)} & \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{g}_{(1, \\infty;N)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^N_{(\\infty, N;-1)}.\n\\end{align*}\nAgain using the finer trace embeddings, \n\\begin{align*}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g} \\abs{\\phi}^N \\leq C \\@ifstar{\\oldnorm}{\\oldnorm*}{g}_{(1, \\infty;N)} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^N_{{W^{1,N}(\\Om)}}, \\quad \\forall \\phi \\in {W^{1,N}(\\Om)},\n\\end{align*}\nwhere $C = C(N) > 0$ depends only on the constants in Proposition \\ref{properties} and Proposition \\ref{Cianchi}.\n\\end{proof}\n\n\n\\subsection{Degree}\nWe define the degree for a certain class of maps from ${W^{1,p}(\\Om)}$ to its dual $({W^{1,p}(\\Om)})'$. For more details on this topic, we refer to \\cite{Browder, Skrypnik}.\n\n\n\\begin{definition} Let $D \\subset {W^{1,p}(\\Om)}$ be a set and let $F: D \\rightarrow ({W^{1,p}(\\Om)})'$ be a map. 
\\begin{enumerate}[(i)] \n \\item \\it{\\textbf {Demicontinuous}}: $F$ is said to be demicontinuous on $D$, if for any sequence $(\\phi_n) \\subset D$ with $\\phi_n\\rightarrow \\phi_0 \\in D$, we have \n $\\displaystyle \\lim_{n\\rightarrow \\infty} \\left< F(\\phi_n), \\; \\upsilon \\right> = \\left< F(\\phi_0), \\upsilon \\right>,\\; \\forall \\upsilon \\in {W^{1,p}(\\Om)}.$ \n \\item \\it{ \\textbf{Class $\\al(D)$}}: $F$ is said to be in class $\\al(D),$ if every sequence $(\\phi_n)$ in $D$ satisfying $\\phi_n \\rightharpoonup \\phi_0$ and \n $\n\\uplim_{n \\rightarrow \\infty} \\left< F(\\phi_n), \\phi_n - \\phi_0 \\right> \\leq 0,$ converges strongly to $\\phi_0$. \n \\item For $E \\subset \\overline{D},$ $A(D,E)$ denotes the set of all bounded, demicontinuous maps defined on $\\overline{D}$ that are of class $\\al(E).$ \n \\item \\it{ \\textbf{Isolated zero}}: A point $\\phi_0 \\in D$ is called an isolated zero of $F$, if $F(\\phi_0) = 0$ and there exists $r > 0$ such that the ball $B_{r}(\\phi_0) $ (where $ \\overline{B_{r}(\\phi_0)} \\subset D$) does not contain any other zero of $F$.\n \\item \\it{\\textbf{Degree}}: Let $F \\in A(D,{\\partial} D)$ satisfying $F(\\phi) \\neq 0$ for every $\\phi \\in {\\partial} D.$ Let $(\\upsilon_i)$ be a Schauder basis for ${W^{1,p}(\\Om)}$ and let $V_n = \\text{span}\\{\\upsilon_1,\\ldots, \\upsilon_n\\}.$ A finite-dimensional approximation $F_n$ of $F$ with respect to $V_n$ is defined as: $$ F_n(\\phi) = \\sum_{i=1}^n \\left< F(\\phi),\\upsilon_i \\right> \\upsilon_i, \\; \\text{for} \\; \\phi \\in \\overline{D_n}, \\; \\text{where} \\; D_n = D \\cap V_n.$$\n From \\cite[Theorem 2.1]{Skrypnik}, $F_n(\\phi) \\neq 0$ for every $\\phi \\in {\\partial} D_n$ when $n$ is sufficiently large, and the degree $deg(F_n, \\overline{D_n}, 0)$ of $F_n$ with respect to $0 \\in V_n$ is well defined and independent of $n$. Further from \\cite[Theorem 2.2]{Skrypnik}, $\\lim_{n\\rightarrow \\infty} deg(F_n, \\overline{D_n},0)$ is independent of the choice of the basis $(\\upsilon_i)$. Now the degree of $F$ with respect to $0 \\in ({W^{1,p}(\\Om)})'$ is defined as $$ deg(F, \\overline{D}, 0) = \\lim_{n\\rightarrow \\infty} deg(F_n, \\overline{D_n},0).$$ \n \\item \\it{\\textbf{Homotopy}}: Let $F, G \\in A(D,{\\partial} D)$ satisfying $F(\\phi), G(\\phi) \\neq 0$ for every $\\phi \\in {\\partial} D.$ The mappings $F$ and $G$ are said to be homotopic on $\\overline{D}$, if there exists a one-parameter family of maps $H_t: \\overline{D} \\rightarrow ({W^{1,p}(\\Om)})'$, $t \\in [0,1],$ such that $H_0 = F$, $H_1 = G$ and $H_t$ satisfies the following:\n \\begin{enumerate}[(a)]\n \\item For $t \\in [0,1]$, $H_t \\in A(D, {\\partial} D)$ and $H_t(\\phi) \\neq 0$ for every $\\phi \\in {\\partial} D.$ \n \\item For any sequence $(t_n)$ in $[0,1]$ satisfying $t_n \\rightarrow t$ and any sequence $(\\phi_n)$ in $\\overline{D}$ satisfying $\\phi_n \\rightarrow \\phi_0,$ we have $H_{t_n}\\phi_n \\rightharpoonup H_{t} \\phi_0$ as $n \\rightarrow \\infty$. \\end{enumerate}\n \\item \\it{\\textbf{Index}}: Let $F \\in A(D, \\overline{D})$ and let $\\phi_0$ be an isolated zero of $F$. 
Then the index of a map $F$ is defined as $ ind(F, \\phi_0) = \\displaystyle\\lim_{r\\rightarrow 0} deg(F, \\overline{B_{r}(\\phi_0)}, 0).$ \n \\item \\it{\\textbf{Potential operator}}: A map $F \\in A(D,({W^{1,p}(\\Om)})')$ is called a potential operator, if there exists a functional $f: {W^{1,p}(\\Om)} \\rightarrow {\\mathbb R}$ such that $f^{\\prime}(\\phi) = F(\\phi), \\; $ for all $\\phi \\in {W^{1,p}(\\Om)}.$\n\\end{enumerate}\n\\end{definition}\n\n\nThe following Proposition is proved in \\cite{Skrypnik} (Theorem 4.1, Theorem 4.4, Theorem 5.1, and Theorem 6.1).\n\n\n\\begin{proposition}\\label{degree} $(i)$ Let $F, G \\in A(D, {\\partial} D)$ satisfying $F(\\phi), G(\\phi) \\neq 0$ for every $\\phi \\in {\\partial} D.$ If $F$ and $G$ are homotopic in $\\overline{D},$ then \n$deg(H_t, \\overline{D}, 0) = C, \\; \\forall t \\in [0,1].$ In particular, $deg(F, \\overline{D}, 0) = deg(G, \\overline{D}, 0)$.\\vspace{0.3 cm} \\\\\n$(ii)$ Let $F \\in A(D, {\\partial} D).$ Suppose that $0 \\in \\overline{D} \\setminus {\\partial} D$ and $\\displaystyle \\left< F(\\phi), \\phi \\right> \\geq 0,\\; F(\\phi) \\neq 0$ for $\\phi \\in {\\partial} D.$ Then $deg(F, \\overline{D},0) = 1.$ \\vspace{0.3 cm} \\\\\n\\noi $(iii)$ Let $F \\in A(D, \\overline{D})$ satisfying $F(\\phi) \\neq 0$, for every $\\phi \\in {\\partial} D.$ If $F$ has only finite number of isolated zeros in $\\overline{D},$ then $$ deg(F, \\overline{D}, 0) = \\sum_{i = 1}^n ind(F, \\phi_i),$$ where $\\phi_i (i = 1,...,n)$ are all zeros of $F$ in $D$. \\vspace{0.3 cm} \\\\\n\\noi $(iv)$ Let $F \\in A(D,({W^{1,p}(\\Om)})')$ be a potential operator. Suppose that the point $\\phi_0$ is a local minimum of $f$ and it is an isolated zero of $F$. Then $ind(F, \\phi_0) = 1.$\n\n\\end{proposition}\n\n\n\n\\section{Proof of main theorems}\n\n\nIn this section, we prove all our main theorems. \n\\subsection{The existence and some of the properties of the first eigenvalue}\n\\noi{\\textbf{Proof of Theorem \\ref{Steklov existence}}}: \\\\\nFirst, recall that \n$$\n\\la_1 = \\inf_{\\phi \\in N_g} {\\displaystyle \\int_{\\Omega}} \\abs{\\Gr \\phi}^p. \n$$ \nFrom Lemma \\ref{Poin}, we clearly have $\\la_1 > 0$. Since the functional $J$ is coercive on $N_g $, a sequence that minimizes $J$ over $N_g$ will be bounded and hence admits a weakly convergent subsequence that converges to say $\\phi_1$. As $N_g$ is weakly closed, $\\phi_1\\in N_g$ and $J(\\phi_1)=\\la_1.$ Thus $\\la_1$ is the minimum of $J$ on $N_g$ and hence $\\@ifstar{\\oldnorm}{\\oldnorm*}{{\\rm d}J(\\phi_1)} = 0.$ Now from Remark \\ref{Manifold}, we obtain \n\\begin{align}\\label{weak form of first}\n {\\displaystyle \\int_{\\Omega}} |\\Gr \\phi_1|^{p-2} \\Gr \\phi_1 \\cdot \\Gr v\\; {\\rm d}x = \\la_1 {\\displaystyle \\int_{\\partial \\Omega}} g \\abs{\\phi_1}^{p-2}\\phi_1 v \\; {\\rm d}\\sigma, \\quad \\forall v \\in {W^{1,p}(\\Om)}.\n\\end{align}\n\n\n\\noi{\\it $\\la_1$ is a principal eigenvalue}: Clearly $\\abs{\\phi_1}$ is also an eigenfunction of \\eqref{Steklov weight} corresponding to $\\la_1$. Moreover, as $\\abs{\\phi_1}$ is $p$-harmonic, $\\abs{\\phi_1} \\in C^{1, \\alpha}(\\Om)$. Since $\\abs{\\phi_1} \\geq 0$, by the maximum principle in \\cite[Theorem 5]{Vazquez}, $\\abs{\\phi_1} > 0$ in $\\Om$. Without loss of generality we may assume $\\phi_1>0$ in $\\Om.$ We show that $\\phi_1$ is positive also on $\\partial \\Om.$\nFor $\\ep > 0$, consider the function $\\frac{\\phi_1}{\\phi_1+ \\ep}$. 
It is easy to verify that $\\frac{\\phi_1}{\\phi_1+ \\ep} \\in {W^{1,p}(\\Om)}$ and $\\frac{\\phi_1}{\\phi_1+ \\ep} \\rightarrow 1$ in $L^p(\\Om).$ We show that $\\frac{\\phi_1}{\\phi_1+ \\ep} \\rightarrow 1$ in ${W^{1,p}(\\Om)}$ as well. This, together with the trace embedding, will ensure that $\\phi_1 > 0$ in $\\overline{\\Om}$.\nThus it is enough to prove $\\Gr \\frac{\\phi_1}{\\phi_1+ \\ep} \\rightarrow 0$ in $L^p(\\Om)$ as $\\ep\\ra 0.$ Notice that,\n\\begin{align}\\label{bounded}\n \\left|{\\Gr \\frac{\\phi_1}{\\phi_1 + \\ep}}\\right|^p = \\left( \\frac{\\ep}{\\phi_1 + \\ep} \\right)^p \\frac{\\abs{\\Gr \\phi_1}^p}{(\\phi_1 + \\ep)^p} \\leq \\frac{\\abs{\\Gr \\phi_1}^p}{\\phi_1^p}.\n\\end{align}\nFurthermore, by taking $ \\frac{1}{(\\phi_1 + \\ep)^{p-1}} \\in {W^{1,p}(\\Om)}$ as a test function in \\eqref{weak form of first}, we obtain\n\\begin{align*}\n (p-1){\\displaystyle \\int_{\\Omega}} \\frac{\\abs{\\Gr \\phi_1}^p}{(\\phi_1 + \\ep)^p} = -\\la_1 {\\displaystyle \\int_{\\partial \\Omega}} g \\left( \\frac{\\phi_1}{\\phi_1 + \\ep} \\right)^{p-1} \\leq \\la_1 {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g}. \n\\end{align*}\nWe apply Fatou's lemma and let $\\ep\\ra 0$ in the above inequality to get \n\\begin{align*}\n (p-1){\\displaystyle \\int_{\\Omega}} \\frac{\\abs{\\Gr \\phi_1}^p}{\\phi_1^p} \\leq \\la_1 {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g}. \n\\end{align*} \nNow \\eqref{bounded} together with the dominated convergence theorem ensures that $\\Gr \\frac{\\phi_1}{\\phi_1+ \\ep} \\rightarrow 0$ in $L^p(\\Om)$. \n\n\n\n\\noi{\\it The uniqueness and the simplicity}: The usual arguments (for example, see \\cite[Lemma 3.1]{Torne} for a proof) using Picone's identity \\cite[Theorem 1.1]{Picone} give the uniqueness of the positive principal eigenvalue and the simplicity of $\\la_1$.\n\n\n\n\\noi{\\it $\\la_1$ is an isolated eigenvalue}: We adapt the proof of \\cite[Proposition 2.12]{ADSS}. On the contrary, we suppose that there exists a sequence $(\\la_n)$ of eigenvalues of \\eqref{Steklov weight} converging to $\\la_1$. For each $n\\in {\\mathbb N}$, let $\\psi_n \\in N_g$ be an eigenfunction corresponding to $\\la_n$. Then $J(\\psi_n) = \\la_n \\rightarrow \\la_1$ and \n$$\n\\big< (J' - \\la_n G')(\\psi_n), \\psi_n \\big> = p(J - \\la_n G)(\\psi_n) = 0, \n$$ \ni.e., $\\@ifstar{\\oldnorm}{\\oldnorm*}{{\\rm d}J(\\psi_n)} = 0$. Hence using Lemma \\ref{PS} and the continuity of $J'$ and $G'$, we get $\\psi_n \\rightarrow \\psi$, an eigenfunction corresponding to $\\la_1$. Since $\\la_1$ is simple, $\\psi = \\pm \\phi_1,$ where $\\phi_1$ is a first eigenfunction such that $\\phi_1>0$ on $\\overline{\\Om}.$ If we let $\\psi= {\\phi_1}$, then for a given $\\ep > 0$, by Egorov's theorem there exist $E \\subset \\Om$ and $n_1\\in {\\mathbb N}$ such that $\\abs{E} < \\ep$ and $\\psi_n^{-}= 0$ a.e. in $E^c$ for $n \\geq n_1.$ Also from \\eqref{Steklov weight} we have \n\\begin{align*}\n {\\displaystyle \\int_{\\Omega}} \\abs{\\Gr \\psi_n^{-}}^p = \\la_n {\\displaystyle \\int_{\\partial \\Omega}} g \\abs{\\psi_n^{-}}^p.\n\\end{align*}\n Notice that $\\int_{\\Om} \\abs{\\Gr \\psi_n^{-}}^p\\neq 0,$ since $\\psi_n$ changes sign on $\\Om.$ Now by setting $v_n= (\\int_{{\\partial} \\Om} g \\abs{\\psi_n^{-}}^p)^{-\\lfrac{1}{p}} \\psi_n^{-},$ we have $v_n \\in N_g$ and $\\int_{\\Om} \\abs{\\Gr v_n}^p = \\la_{n} \\rightarrow \\la_1.$ Therefore, $v_n$ must converge to ${\\phi_1},$ a contradiction as $v_{n} = 0$ a.e. in $E^c$ for $n\\ge n_1$. Thus $\\la_1$ must be an isolated eigenvalue. 
\\qed\n\n\n\\begin{remark}\\label{Stricly contained}\n\\begin{enumerate}[(a)]\n \\item Let $$g \\in \\left\\{\\begin{array}{ll}\n L^{\\frac{N-1}{p-1}, \\infty}({\\partial} \\Om) & \\text{ for } N > p,\\\\\n L^{1, \\infty; N}({\\partial} \\Om) & \\text{ for } N = p.\n \\end{array}\\right.$$ \n Then $\\frac{1}{\\la_1}$ is the best constant in the following weighted trace inequality:\n \\begin{align*}\n {\\displaystyle \\int_{\\partial \\Omega}} \\abs{g} \\abs{\\phi}^p &\\leq C {\\displaystyle \\int_{\\Omega}} \\abs{\\Gr \\phi}^p, \\quad \\forall \\phi \\in {W^{1,p}(\\Om)}. \n \\end{align*}\n In addition, if $g$ satisfies all the assumptions of Theorem \\ref{Steklov existence}, then this best constant is also attained.\n \\item Since $\\Om$ is bounded, we have \n $$ L^q({\\partial} \\Om)\\subset L^{\\llfrac{N-1}{p-1}}({\\partial} \\Om), \\; \\forall q > \\frac{N-1}{p-1}, \\: \\text{ and } \\: \\; L^q({\\partial} \\Om) \\subset {\\mathcal G}_1,\\; \\forall q \\in (1, \\infty). \n $$ \n Thus, Theorem 1.2 of \\cite{Torne} follows from Theorem \\ref{Steklov existence}. Furthermore, Example \\ref{Ex2} and Example \\ref{Ex3} give examples of weight functions for which Theorem 1.2 of \\cite{Torne} is not applicable, but for which \\eqref{Steklov weight} admits a positive principal eigenvalue by our Theorem \\ref{Steklov existence}.\n\\end{enumerate}\n\\end{remark}\n\n\n\\begin{remark}\nFor $g$ as given in Theorem \\ref{Steklov existence}, the functional $J$ and the set $N_g$ satisfy all the properties of \\cite[Theorem 5.3]{Kavian}. Therefore, by \\cite[Theorem 5.3]{Kavian}, there exists an unbounded sequence of eigenvalues $(\\la_n)$ of \\eqref{Steklov weight}.\n\\end{remark}\n\n\n\\subsection{Bifurcation}\n\nTo prove Theorem \\ref{bifur}, we adapt the degree theoretic arguments given in \\cite{Drabek-Huang}; see also \\cite{Anoopthesis}. We split our proof into several lemmas and propositions. \n\\begin{Lemma}\\label{coercive}\nLet $g^+ \\not \\equiv 0,$ $\\int_{{\\partial} \\Om} g < 0,$ and \n$$ g \\in \\left\\{\\begin{array}{ll} \n {\\mathcal F}_{\\frac{N-1}{p-1}} & \\text{ for } N > p,\\\\\n {\\mathcal G}_1 & \\text{ for } N = p.\n \\end{array} \\right.\n $$ \nLet $(\\phi_n)$ be a sequence in ${W^{1,p}(\\Om)}$ such that \n\\begin{align}\\label{ineq5}\n{\\displaystyle \\int_{\\Omega}} \\abs{\\Gr \\phi_n}^p - \\la {\\displaystyle \\int_{\\partial \\Omega}} g \\abs{\\phi_n}^p < C\n\\end{align} \nfor some $C>0$ and $\\la>0.$ If $(\\@ifstar{\\oldnorm}{\\oldnorm*}{\\Gr\\phi_n}_p)$ is bounded, then $(\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n}_p)$ is bounded.\n\\end{Lemma}\n\n\n\n\\begin{proof} Our proof is by the method of contradiction. Suppose that the sequence $(\\@ifstar{\\oldnorm}{\\oldnorm*}{\\Gr\\phi_n}_p)$ is bounded and $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n}_p\\ra \\infty$ as $n\\ra \\infty.$ By setting $w_n = \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n}^{-1}_p \\phi_n,$ we obtain $\\@ifstar{\\oldnorm}{\\oldnorm*}{w_n}_p = 1$ and $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\Gr w_n}_p \\rightarrow 0$ as $n \\rightarrow \\infty.$ Thus there exists a subsequence $(w_{n_k})$ of $(w_{n})$ such that $w_{n_k} \\rightharpoonup w$ in ${W^{1,p}(\\Om)}$. Now the weak lower semicontinuity of $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\Gr \\cdot}_p$ gives $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\Gr w}_p=0$. Since $\\Om$ is connected, we get $w=c$ a.e. in $\\overline{\\Om}$ and from the compactness of the embedding of ${W^{1,p}(\\Om)}$ into $L^p(\\Om)$, $\\abs{c} |\\Om|^{\\frac{1}{p}} = 1$. 
Thus $\\int_{{\\partial} \\Om} g \\abs{w}^p = \\frac{1}{|\\Om|} \\int_{{\\partial} \\Om} g < 0$. On the other hand from \\eqref{ineq5} we also have\n\\begin{align*}\n {\\displaystyle \\int_{\\Omega}} \\abs{\\Gr w_{n_k}}^p - \\la {\\displaystyle \\int_{\\partial \\Omega}} g \\abs{w_{n_k}}^p \\leq \\frac{C}{\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_{n_k}}^p_p}. \n\\end{align*}\nNow we let $k \\rightarrow \\infty$ so that the compactness of $G$ gives $- \\la \\int_{{\\partial} \\Om} g \\abs{w}^p \\leq 0$. A contradiction to $\\int_{{\\partial} \\Om} g \\abs{w}^p < 0$. \n\\end{proof}\n\nIn the next proposition, for $\\la\\in (0,\\la_1+\\de),$ we find a lower estimate of the functional $J-\\la G.$ \n\\begin{proposition}\\label{bdd below}\nLet $ \\de > 0$ and let $\\la\\in(0,\\la_1+\\de)\\setminus\\{\\la_1\\}.$ Then for $\\phi\\in {W^{1,p}(\\Om)}\\setminus\\{0\\},$ \n\\begin{align}\\label{bounded below}\nJ(\\phi) - \\la G(\\phi) >\\left\\{\\begin{array}{ll}\n 0, & \\text{ if } \\la\\in(0,\\la_1); \\\\\n \\frac{-\\de}{\\la_1}J(\\phi), & \\text{ if } \\la\\in(\\la_1,\\la_1+\\de). \n \\end{array}\n \\right.\n\\end{align}\n\\end{proposition}\n\n\n\\begin{proof}\n Firstly, for any $\\la>0$ and $\\phi\\in {W^{1,p}(\\Om)}\\setminus\\{0\\},$ we consider the following cases:\n\\begin{enumerate}[(i)]\n \\item $G(\\phi) \\le 0$ and $J(\\phi) > 0:$ clearly $J(\\phi) - \\la G(\\phi) > 0.$\n \\item $G(\\phi)= 0$ and $J(\\phi)=0:$ using the connectedness of $\\Om$ and the fact that $\\int_{{\\partial} \\Om} g <0$, we get $\\phi =0$. So this case does not arise, since $\\phi\\neq 0.$\n \\item $G(\\phi)>0:$ in this case $\\la_1 \\leq \\frac{J(\\phi)}{G(\\phi)}.$ Thus for $\\la\\in(0,\\la_1),$ we get $J(\\phi) - \\la G(\\phi) > 0$.\n\\end{enumerate}\n Secondly, for $\\la \\in (\\la_1, \\la_1 +\\de)$ and $\\phi \\in {W^{1,p}(\\Om)},$ we have\n\\begin{align}\\label{lb1}\n J(\\phi) - \\la G(\\phi)&= J(\\phi) - \\la_1 G(\\phi) + (\\la_1 - \\la) G(\\phi) \\no \\\\ &\\geq (\\la_1 - \\la) G(\\phi) >\\frac{\\la_1 - \\la}{\\la_1} J(\\phi)>- \\frac{\\de}{\\la_1} J(\\phi),\n\\end{align}\nwhere the inequalities follow from the facts $J(\\phi) - \\la_1 G(\\phi) \\geq 0$ and $\\la\\in (\\la_1,\\la_1+\\de) $.\n\\end{proof}\n\n\n\nFor $\\la \\in (\\la_1,\\la_1+\\de)$, we consider a differentiable function $\\eta(t)$ such that\n\\begin{align}\\label{df}\n\\eta(t)\n =\\left\\{\\begin{array}{ll}\n 0, &\\; 0\\le t \\leq 1, \\\\\n \\text {strictly convex}, &\\; 1< t < 2, \\\\\n \\frac{2\\de}{\\la_1}(t-1), &\\; t \\geq 2,\n \\end{array}\n \\right.\n\\end{align}\nso that its derivative satisfies\n\\begin{align}\\label{df1}\n\\eta'(t)\n =\\left\\{\\begin{array}{ll}\n 0, &\\; 0\\le t \\leq 1, \\\\\n \\text {strictly increasing}, &\\; 1< t < 2, \\\\\n \\frac{2\\de}{\\la_1}, &\\; t \\geq 2.\n \\end{array}\n \\right.\n\\end{align}\nUsing $\\eta$, we define the functional $\\eta_{\\la}: {W^{1,p}(\\Om)} \\ra {\\mathbb R}$ as\n\\begin{align*}\n \\eta_{\\la}(\\phi) = J(\\phi) - \\la G(\\phi) + \\eta(J(\\phi)), \\quad \\phi \\in {W^{1,p}(\\Om)}.\n\\end{align*}\nIn the following lemma we list some properties of $\\eta_{\\la}$ and of its derivative $\\eta_{\\la}'$.\n\\begin{Lemma}\\label{map properties}\nLet $\\de>0$ and let $\\la \\in (\\la_1,\\la_1+\\de)$. Then the following hold:\n\\begin{enumerate}[(a)]\n \\item $\\eta_{\\la}$ is weakly lower semicontinuous on ${W^{1,p}(\\Om)}$.\n \\item $\\eta_{\\la}$ is coercive, i.e., every sublevel set of $\\eta_{\\la}$ is bounded in ${W^{1,p}(\\Om)}$.\n \\item $\\eta_{\\la}$ is bounded below on ${W^{1,p}(\\Om)}$.\n \\item There exists $R_0>0$ such that the map $\\eta_\\la' : {W^{1,p}(\\Om)}\\ra ({W^{1,p}(\\Om)})'$ does not vanish on $\\partial B_R(0)$ for all $R\\ge R_0.$\n \\end{enumerate}\n\\end{Lemma}\n\\begin{proof}\n$(a)$ Let $\\phi_n \\rightharpoonup \\phi$ in ${W^{1,p}(\\Om)}.$ Since $J$ is weakly lower semicontinuous, $G$ is compact and $\\eta$ is increasing and continuous, we get\n\\begin{align*}\n \\lowlim_{n \\rightarrow \\infty} \\eta_{\\la}(\\phi_n) & = \\lowlim_{n \\rightarrow \\infty} J(\\phi_n) - \\la \\lowlim_{n \\rightarrow \\infty} G(\\phi_n) + \\eta (\\lowlim_{n \\rightarrow \\infty} (J(\\phi_n))) \\\\\n & \\geq J(\\phi) - \\la G(\\phi) + \\eta (J(\\phi)) = \\eta_{\\la}(\\phi).\n\\end{align*} \nTherefore, $\\eta_\\la$ is weakly lower semicontinuous. 
\n\n\\noi $(b)$ Let $(\\phi_n)$ be a sequence in ${W^{1,p}(\\Om)}$ such that $\\eta_{\\la}(\\phi_n)\\le C, \\forall n\\in {\\mathbb N}.$ We show that the sequence $(\\phi_n)$ is bounded in ${W^{1,p}(\\Om)}.$ \n From \\eqref{bounded below}, we have\n\\begin{equation}\\label{coercive 2}\n C\\ge \\eta_{\\la}(\\phi_n) >- \\frac{\\de}{\\la_1} J(\\phi_n)+\\eta (J(\\phi_n)), \\quad \\forall n\\in {\\mathbb N}. \n\\end{equation}\nThus, for $\\phi_n$ with $J(\\phi_n) \\geq 2,$ using the definition of $\\eta,$ we have\n$$\n C\\ge \\eta_{\\la}(\\phi_n) > - \\frac{\\de}{\\la_1} J(\\phi_n) + \\frac{2 \\de}{\\la_1} (J(\\phi_n) - 1) = \\frac{\\de}{\\la_1} J(\\phi_n) - \\frac{2 \\de }{\\la_1}.\n$$\nHence, $ J(\\phi_n)\\le \\displaystyle \\max\\Big\\{2,\\frac{\\la_1 C}{\\de}+2\\Big\\}. $\nMoreover, since $\\eta \\geq 0$, we also have $J(\\phi_n) - \\la G(\\phi_n) \\leq \\eta_{\\la}(\\phi_n) \\leq C$, so that \\eqref{ineq5} holds. Now, since $(\\@ifstar{\\oldnorm}{\\oldnorm*}{\\Gr\\phi_n}_p)$ is bounded, we can use Lemma \\ref{coercive} to obtain $C_1>0$ so that $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n}_p\\le C_1.$ \nTherefore, the sequence $(\\phi_n)$ is bounded in ${W^{1,p}(\\Om)}$. \n\n\\noi $(c)$ From \\eqref{bounded below}, we have\n$\n\\eta_{\\la}(\\phi) > - \\frac{\\de}{\\la_1} J(\\phi) + \\eta (J(\\phi)), \\; \\forall \\phi\\in {W^{1,p}(\\Om)}.\n$\nTherefore,\n\\begin{align*}\n \\eta_{\\la}(\\phi) > \\left\\{\\begin{array}{ll}\n \\frac{\\de}{\\la_1} J(\\phi) - \\frac{2 \\de }{\\la_1} > 0, & \\; \\text{ if }J(\\phi) > 2; \\\\\n - \\frac{\\de}{\\la_1} J(\\phi) + \\eta (J(\\phi)) \\geq - \\frac{2 \\de }{\\la_1}, &\\; \\text{ if }J(\\phi) \\leq 2.\n \\end{array} \n \\right.\n\\end{align*}\nThus $\\eta_\\la$ is bounded below. \n\n\\noi $(d)$ \n By Lemma \\ref{Poin}, there exists $m>0$ such that \n\\begin{align}\\label{eqn1}\n J(\\phi) \\geq m \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^p_p,\\quad \\forall\\, \\phi\\in {W^{1,p}(\\Om)} \\text{ with } G(\\phi)>0.\n\\end{align}\n We choose $R_0=2(1+\\frac{1}{m})$. Thus, for $\\phi\\in {\\partial} B_R(0)$ with $R>R_0,$ either $J(\\phi)> 2$ or $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^p_p> \\frac{2}{m}.$\nNotice that, \n$$\n\\big< \\eta^{\\prime}_{\\la}(\\phi), \\phi \\big> = p\\Big(J(\\phi) - \\la G(\\phi) + \\eta'(J(\\phi)) J(\\phi)\\Big). \n$$\nThus, using \\eqref{bounded below}, we obtain\n\\begin{align*}\n \\frac{1}{p} \\big< \\eta^{\\prime}_{\\la}(\\phi), \\phi \\big> \\geq - \\frac{\\de}{ \\la_1} J(\\phi) + {\\eta}^{\\prime}(J(\\phi)) J(\\phi).\n\\end{align*}\nIn particular, for $J(\\phi)> 2$, we have\n\\begin{align*}\n \\frac{1}{p} \\big< \\eta^{\\prime}_{\\la}(\\phi), \\phi \\big> \\geq \\frac{\\de}{\\la_1}J(\\phi).\n\\end{align*}\n On the other hand, for $J(\\phi)\\le 2,$ we have $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}^p_p> \\frac{2}{m}.$ Hence from \\eqref{eqn1}, we conclude that $G(\\phi)\\le 0.$ Now from cases $(i)$ and $(ii)$ in the proof of Proposition \\ref{bdd below}, we get $\\big< \\eta^{\\prime}_{\\la}(\\phi), \\phi \\big>>0.$\nTherefore, $\\eta^{\\prime}_{\\la}(\\phi) \\ne 0$ for $\\phi\\in {\\partial} B_R(0)$ for any $R>R_0$. 
\n\\end{proof}\n\n\nRecall that a function $\\phi \\in {W^{1,p}(\\Om)}$ is a weak solution of \\eqref{Steklov pertub}, if it satisfies the following weak formulation:\n\\begin{align*}\n {\\displaystyle \\int_{\\Omega}} |\\Gr \\phi|^{p-2} \\Gr \\phi \\cdot \\Gr v - \\la {\\displaystyle \\int_{\\partial \\Omega}} \\left( g \\abs{\\phi}^{p-2} \\phi v + f r(\\phi)v \\right) = 0, \\quad \\forall v \\in {W^{1,p}(\\Om)}.\n\\end{align*}\nTherefore, $\\phi$ is a solution of \\eqref{Steklov pertub} if and only if \n$$ \n\\big< \\left( J' - \\la(G' + F) \\right)(\\phi), v \\big>=0, \\quad \\forall v \\in {W^{1,p}(\\Om)}.\n$$\n\n\n\n\\begin{proposition}\\label{class alpha 1}\nThe maps $J' - \\la (G' + F)$ and $J' - \\la G'$ are well-defined maps from ${W^{1,p}(\\Om)}$ to its dual $({W^{1,p}(\\Om)})'.$ \nMoreover, these maps are bounded, demicontinuous and of class $\\al({W^{1,p}(\\Om)})$. \n\\end{proposition}\n\n\n\\begin{proof}\nFrom Proposition \\ref{G' cpt}, Proposition \\ref{class alpha}, and Proposition \\ref{compactmap1}, we obtain $J' - \\la (G' + F)$ and $J' - \\la G'$ are well defined, bounded and demicontinuous. Since $J'$ is of class $\\al({W^{1,p}(\\Om)})$ and $G', F$ are compact, the maps $J' - \\la (G' + F)$ and $J' - \\la G' $ are of class $\\al({W^{1,p}(\\Om)})$.\n\\end{proof}\n\n\n\\begin{proposition}\\label{index}\nLet $g, \\la_1$ be as given in Theorem \\ref{Steklov existence}. Then there exists $\\de>0$ such that for each $ \\la \\in (0, \\la_1 + \\de) \\setminus \\{ \\la_1 \\},$ $ind(J' - \\la G' ,0)$ is well defined. Furthermore, \\begin{enumerate}[(a)]\n \\item $ind(J' - \\la G',0) = 1 \\;$ for $\\la \\in (0, \\la_1)$,\n \\item $ ind(J' - \\la G',0) = -1 \\;$ for $\\la \\in (\\la_1, \\la_1 + \\de)$.\n\\end{enumerate}\n\\end{proposition}\n\n\n\\begin{proof}\nSince $\\la_1$ is an isolated eigenvalue of \\eqref{Steklov weight}, there exists $\\de > 0$ such that $ \\la \\in (0, \\la_1 + \\de)\\setminus \\{ \\la_1 \\}$ is not an eigenvalue of \\eqref{Steklov weight}. Thus for $ \\la \\in (0, \\la_1 + \\de)\\setminus \\{ \\la_1 \\}$, 0 is the only solution of $J' - \\la G'$ and hence $ind(J' - \\la G' ,0)$ is well defined.\n \n\\noi $(a)$ For $\\la \\in (0, \\la_1)$, from \\eqref{bounded below}, we have \n\\begin{align*}\n \\big< \\big(J' - \\la G' \\big)(\\phi), \\phi \\big> = p(J(\\phi) - \\la G(\\phi)) > 0, \\quad \\forall \\phi \\in {W^{1,p}(\\Om)} \\setminus \\{0 \\}.\n\\end{align*}\nTherefore, by Proposition \\ref{degree}, $deg(J' - \\la G', \\overline{B_{r}(0)}, 0) =1$ for every $r>0$. Thus \n$$\nind(J' - \\la G',0) = \\lim_{ r \\rightarrow 0} deg(J' - \\la G', \\overline{B_{r}(0)}, 0) = 1.\n$$\n\n\n\\noindent $(b)$ In this case, we adapt a technique used in the proof of \\cite[Theorem 4.1]{Drabek-Huang}. First, we compute $ind(\\eta_\\la',0)$. Clearly, $0$ is a zero of $\\eta'_{\\la}$. If $\\phi_0 \\neq 0$ is a zero of $\\eta_{\\la}',$ then $ \\frac{\\la}{1 + \\eta^{\\prime}(J(\\phi_0))}$ is an eigenvalue of \\eqref{Steklov weight} and $ \\phi_0 $ is a corresponding eigenfunction. Since $ 0 < \\frac{\\la}{1 + \\eta^{\\prime}(J(\\phi_0))} < \\la_1 + \\de$, we must have $\\frac{\\la}{1 + \\eta^{\\prime}(J(\\phi_0))} = \\la_1$ and $\\phi_0 = c \\phi_1$ for some $c \\in {\\mathbb R},$ where $\\phi_1 $ is the first eigenfunction of \\eqref{Steklov weight} normalized as $\\int_{{\\partial} \\Om} g\\phi^p_1=1$ and $\\phi_1 > 0$ in $\\overline{\\Om}$. Notice that,\n\\begin{align*}\n \\eta^{\\prime}(J(\\phi_0)) = \\frac{\\la}{\\la_1} - 1 \\in \\left( 0, \\frac{\\de}{\\la_1} \\right). 
\n\\end{align*}\nThus from \\eqref{df1}, we assert that $J(\\phi_0) \\in (1, 2).$ Moreover, since $\\eta^{\\prime}$ is strictly increasing in $(1, 2)$ and the functional $J$ is even, there exists a unique $c > 0$ such that $\\phi_0 = \\pm c \\phi_1.$ Conversely, if we choose $c>0$ such that $\\eta^{\\prime}(J(c\\phi_1)) = \\frac{\\la}{\\la_1} - 1,$ then $\\pm c\\phi_1$ is a zero of $\\eta_\\la'.$ Therefore, the map $ \\eta^{\\prime}_{\\la}$ has precisely three zeros $-c\\phi_1, 0, c\\phi_1$. Now we will show that $ind(\\eta^{\\prime}_{\\la}, \\pm c\\phi_1) = 1$. It is enough to prove $\\pm c \\phi_1$ are the minimizers for $\\eta_{\\la}$. From Lemma \\ref{map properties}, the functional $\\eta_{\\la}$ is coercive, weak lowersemicontinuous and bounded below. Thus $\\eta_{\\la}$ admits a minimizer. Notice that, $\\eta_{\\la}(t\\phi_1)=(\\la_1-\\la)t^pG(\\phi_1)+\\eta(t^pJ(\\phi_1))$ and hence $\\eta_{\\la}(t\\phi_1)<0$ for sufficiently small $t>0.$ Thus $0$ is not a minimizer and hence $\\pm c \\phi_1$ are the only minimizers of $\\eta_{\\la} $. Therefore, by Proposition \\ref{degree}, we get\n\\begin{align}\\label{index1}\n ind(\\eta^{\\prime}_{\\la}, \\pm c\\phi_1) = 1. \n\\end{align}\nFor $R_0$ as given in Lemma \\ref{map properties}, we choose $R>R_0,$ so that $\\pm c \\phi_1 \\in B_{R}(0)$ and $\\big< \\eta^{\\prime}_{\\la}(\\phi), \\phi \\big>>0$ for $\\phi\\in {\\partial} B_R(0)$. By Proposition \\ref{degree}, $deg(\\eta^{\\prime}_{\\la}, \\overline{B_{R}(0)}, 0) = 1.$ Thus by the additivity of degree (Proposition \\ref{degree}) and from \\eqref{index1}, we obtain $deg(\\eta^{\\prime}_{\\la}, \\overline{B_{r}(0)}, 0) = -1$ for sufficiently small $r > 0$. Since $\\eta^{\\prime}_{\\la} = J' - \\la G'$ on ${B_{r}(0)}$ for $r<1$, we conclude that $ind(J' - \\la G',0) = -1.$ \n\\end{proof}\n\n\n\n\n\\begin{Lemma}\\label{homotopy invariance}\nLet $\\la_1$ be given as in Theorem \\ref{Steklov existence}. Then for $ \\la \\in (0, \\la_1 + \\de) \\setminus \\{ \\la_1 \\}$, $ ind(J' - \\la (G' + F), 0) = ind(J' - \\la G' ,0).$ \n\\end{Lemma}\n\n\n\\begin{proof}\nFor $\\la \\in (0, \\la_1 + \\de) \\setminus \\{ \\la_1 \\},$ define $H_{\\la}: {W^{1,p}(\\Om)} \\times [0,1] \\rightarrow ({W^{1,p}(\\Om)})'$ as $$H_{\\la}(\\phi, t) = J'(\\phi) - \\la G' (\\phi) - \\la t F(\\phi).$$ Clearly, $H_{\\la}(.,0) = J' - \\la G'$ and $H_{\\la}(.,1) = J' - \\la (G' + F).$ From Proposition \\ref{class alpha 1}, for each $t \\in [0,1]$, $H_{\\la}(\\cdot, t)$ is bounded, demicontinuous and of class $\\al({W^{1,p}(\\Om)})$. We prove the existence of a sufficiently small $r > 0$ such that for each $t \\in [0,1], \\; H_{\\la}(.,t)$ does not vanish in $\\overline{B_r(0)} \\setminus \\{ 0 \\}.$ On the contrary, assume that no such $r$ exists. \nThen for any $r > 0$, there exists $t_r \\in [0,1]$ and $\\phi_r \\in {W^{1,p}(\\Om)} \\setminus \\{ 0 \\}$ such that $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_r}_{{W^{1,p}(\\Om)}} \\leq r$ and $H_{\\la}(\\phi_r,t_r) = 0.$ In particular, for a sequence of positive numbers $(r_n)$ converging to 0, there exist a sequence $t_n\\in [0,1]$ and a sequence $\\phi_n \\in {W^{1,p}(\\Om)} \\setminus \\{ 0 \\}$ such that $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n}_{{W^{1,p}(\\Om)}} \\leq r_n$ and \n\\begin{align}\\label{homotopy}\n J'(\\phi_n) -\\la G'(\\phi_n) - \\la t_n F(\\phi_n) = 0. 
\n\\end{align}\nIf we set $v_n = \\phi_n {\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n}^{-1}_{{W^{1,p}(\\Om)}}},$ then $\\@ifstar{\\oldnorm}{\\oldnorm*}{v_n}_{{W^{1,p}(\\Om)}} = 1$ and hence $(v_n)$ admits a subsequence $(v_{n_k})$ such that $v_{n_k} \\rightharpoonup v$ in ${W^{1,p}(\\Om)}$. From \\eqref{homotopy} we also have\n\\begin{align*}\n \\big< J'(v_{n_k}) - \\la G'(v_{n_k}), v_{n_k} - v \\big> = \\la t_{n_k} \\left< \\frac{F(\\phi_{n_k})}{\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_{n_k}}^{p-1}_{{W^{1,p}(\\Om)}}}, v_{n_k} - v \\right>.\n\\end{align*}\n By Proposition \\ref{Growth}, the right hand side of the above identity goes to zero as $k\\ra \\infty.$ Therefore, \n$$ \n\\lim_{k \\rightarrow \\infty} \\big< J'(v_{n_k}) - \\la G'(v_{n_k}), v_{n_k} - v \\big> = 0. \n$$\nNow, since $J' - \\la G'$ is of class $\\al({W^{1,p}(\\Om)})$ (Proposition \\ref{class alpha 1}), we get $v_{n_k} \\rightarrow v$ as $k \\rightarrow \\infty$. Thus using \\eqref{homotopy}, we deduce that\n$J'(v) - \\la G'(v) = 0$ and $\\@ifstar{\\oldnorm}{\\oldnorm*}{v}_{{W^{1,p}(\\Om)}} = 1.$\nA contradiction, as $\\la \\in (0, \\la_1 + \\de) \\setminus \\{ \\la_1 \\}$ is not an eigenvalue of \\eqref{Steklov weight}. Therefore, there exists $r > 0$ such that, for each $t\\in [0,1]$, $H_\\la(.,t)$ does not vanish in $\\overline{B_r(0)} \\setminus \\{ 0 \\}.$ Thus $0$ is an isolated zero of $H_\\la(.,t)$ for any $t\\in [0,1]$. Hence by homotopy invariance of degree (Proposition \\ref{degree}), we obtain\n\\begin{align}\\label{index difference}\n ind(J' - \\la (G' + F),0) = ind(J' - \\la G' ,0)=\\left\\{\\begin{array}{ll}\n 1, &\\; \\text{for} \\; \\la \\in (0, \\la_1); \\\\ \n -1, &\\; \\text{for} \\; \\la \\in (\\la_1, \\la_1 + \\de).\n \\end{array}\\right. \n\\end{align}\n\\end{proof}\n\n\nThe following theorem gives a sufficient condition \\cite[Theorem 7.5, page 61]{Skrypnik} under which $\\la_1$ is a bifurcation point of \\eqref{Steklov pertub}. \n\n\n\\begin{theorem}\\label{Bifurcation}\nLet $\\la_1$ be given as in Theorem \\ref{Steklov existence} and $g, r, f$ be given as in Theorem \\ref{bifur}. Let\n\\begin{align*}\n \\overline{i}^{\\pm} = \\uplim_{\\la \\rightarrow \\la_1 \\pm 0} ind(J' - \\la (G' + F),0); \\quad \\underline{i}^{\\pm} = \\lowlim_{\\la \\rightarrow \\la_1 \\pm 0} ind(J' - \\la (G' + F),0). \n\\end{align*}\nIf at least two of the numbers $\\overline{i}^+, \\underline{i}^+, \\overline{i}^-, \\underline{i}^-, ind(J' - \\la_1 (G' + F),0) $ are distinct, then $\\la_1$ is a bifurcation point of \\eqref{Steklov pertub}. \n\\end{theorem}\n\n\n\n\\begin{theorem}\\label{bifurcation}\nLet $\\la_1$ be given as in Theorem \\ref{Steklov existence} and $g, r, f$ be given as in Theorem \\ref{bifur}. 
Then $\\la_1$ is a bifurcation point of \\eqref{Steklov pertub}.\n\\end{theorem}\n\n\\begin{proof}\nFrom Proposition \\ref{index} and Lemma \\ref{homotopy invariance}, we have\n\\begin{align*}\n ind(J' - \\la (G' + F),0) = \\left\\{\\begin{array}{ll}\n 1, &\\; \\text{for} \\; \\la \\in (0, \\la_1); \\\\ \n -1, &\\; \\text{for} \\; \\la \\in (\\la_1, \\la_1 + \\delta).\n \\end{array} \n \\right.\n\\end{align*}\nTherefore,\n\\begin{align*}\n \\overline{i}^{+} = \\uplim_{\\la \\rightarrow \\la_1 + 0} ind(J' - \\la (G' + F),0) = -1; \\quad \\underline{i}^{-} = \\lowlim_{\\la \\rightarrow \\la_1 - 0} ind(J' - \\la (G' + F),0) = 1.\n\\end{align*}\nThus, by Theorem \\ref{Bifurcation}, $\\la_1$ is a bifurcation point of \\eqref{Steklov pertub}.\n\\end{proof}\n\nThe following lemma is proved as a part of \\cite[Theorem 1.3]{Rabinowitz}.\n\n\\begin{Lemma}\\label{ls and p}\nLet $ r,g$ and $f$ be given as in Theorem \\ref{bifur}. For $\\la \\in \\mathbb{R},$ define\n$$ r(\\la) = \\inf \\left\\{ \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}_{{W^{1,p}(\\Om)}} > 0: (J' - \\la (G' + F))(\\phi) = 0 \\right\\}.$$\nThen $r$ is lower semicontinuous. Further more, if $\\la$ is not an eigenvalue of \\eqref{Steklov weight}, then $r(\\la)> 0$.\n\n\n\\end{Lemma}\n\n\\begin{proof}\n {\\it $r$ is lower semicontinuous}: Let $(\\la_n)$ be a sequence in ${\\mathbb R}^+$ such that $\\la_n \\rightarrow \\la$. Without loss of generality we assume that $r(\\la_n)$ is finite. Now by definition of $r$, there exists $\\phi_{n} \\in {W^{1,p}(\\Om)} \\setminus \\{ 0 \\}$ such that $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_{n}}_{{W^{1,p}(\\Om)}} < r(\\la_{n}) + \\frac{1}{n}$ and $(J' - \\la_{{n}} (G' + F))(\\phi_{n}) = 0.$ Since $(\\phi_{n})$ is bounded, up to a subsequence $\\phi_{n} \\rightharpoonup \\phi$ in ${W^{1,p}(\\Om)}$. Now by writing\n\\begin{align*}\\label{p1}\n (J' - \\la (G' + F))(\\phi_n) = (J' - \\la_n (G' + F))(\\phi_{n}) + (\\la_{n} - \\la )(G' + F)(\\phi_{n}),\n\\end{align*}\nwe observe that $\\lim_{n \\rightarrow \\infty} \\big< (J' - \\la (G' + F))(\\phi_{n}), \\phi_{n} - \\phi \\big> = 0$. As $J' - \\la (G' + F)$ is of class $\\al({W^{1,p}(\\Om)})$ (Proposition \\ref{class alpha 1}), we get $\\phi_{{n}} \\rightarrow \\phi$ in ${W^{1,p}(\\Om)}$. Therefore,\n\\begin{equation}\\label{eq:class}\n (J' - \\la (G' + F))(\\phi) = 0\n\\end{equation}\nWe claim that $\\phi\\neq 0$. If not, then $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_{n}}_{{W^{1,p}(\\Om)}} \\rightarrow 0,$ as $n \\rightarrow \\infty.$ Set $v_n = \\phi_n {\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n}^{-1}_{{W^{1,p}(\\Om)}}}$. Then $v_n \\rightharpoonup v$ in ${W^{1,p}(\\Om)}$ and (by the similar arguments as in the proof of Lemma \\ref{homotopy invariance}) $v$ must be an eigenfunction corresponding to $\\la.$\nA contradiction and hence $\\phi\\neq 0.$ Thus, \n\\begin{align*}\n r(\\la) \\le \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi}_{{W^{1,p}(\\Om)}} = \\lim_{n \\rightarrow \\infty} \\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n}_{{W^{1,p}(\\Om)}} \\le \\lim_{n \\rightarrow \\infty} \\left( r(\\la_n) + \\frac{1}{n} \\right) = \\lim_{n \\rightarrow \\infty} r(\\la_n).\n\\end{align*}\n{\\it $r$ is positive}: Suppose $r(\\la) = 0$ for some $\\la$. 
Then there exists a sequence $(\\phi_n) \\in {W^{1,p}(\\Om)} \\setminus \\{ 0 \\}$ such that $\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n}_{{W^{1,p}(\\Om)}} < \\frac{1}{n}$ and $(J' - \\la (G' + F))(\\phi_n) = 0.$ Set $v_n = \\phi_n {\\@ifstar{\\oldnorm}{\\oldnorm*}{\\phi_n}^{-1}_{{W^{1,p}(\\Om)}}}.$ Then $\\@ifstar{\\oldnorm}{\\oldnorm*}{v_n}_{{W^{1,p}(\\Om)}} = 1$ and $v_n \\rightharpoonup v$ in ${W^{1,p}(\\Om)}.$ Now using the similar arguments as in Lemma \\ref{homotopy invariance}, we obtain \n\\begin{align*}\n J'(v) - \\la G'(v) = 0, \\ \\ \\text{where} \\; \\@ifstar{\\oldnorm}{\\oldnorm*}{v}_{{W^{1,p}(\\Om)}} = 1.\n\\end{align*}\nThus $\\la$ must be an eigenvalue of \\eqref{Steklov weight}. Therefore, $r(\\la)>0$, if $\\la$ is not an eigenvalue of \\eqref{Steklov weight}.\n\\end{proof}\n\n\\begin{remark}\nIf $(\\la, 0)$ is a bifurcation point of \\eqref{Steklov pertub}, then $r(\\la) = 0$ and hence from Lemma \\ref{ls and p}, $\\la$ must be an eigenvalue of \\eqref{Steklov weight}. Thus for the existence of a bifurcation point $(\\la, 0)$ of \\eqref{Steklov pertub}, it is necessary that $\\la$ is an eigenvalue of \\eqref{Steklov weight}. \n\\end{remark}\n\nIn the next proposition we prove a generalized homotopy invariance property for the maps $J' - \\la(G' + F).$ A similar result for Leray-Schauder degree is obtained in \\cite{Leray}. For a set $U$ in $[a,b] \\times {W^{1,p}(\\Om)}$, let $U_{\\la}= \\left\\{ \\phi \\in {W^{1,p}(\\Om)}: (\\la, \\phi) \\in U \\right\\}$ and ${\\partial} U_{\\la} = \\left\\{ \\phi \\in {W^{1,p}(\\Om)}: (\\la, \\phi) \\in {\\partial} U \\right\\}$. \n\n\\begin{proposition}\\label{gen homotopy}\nLet $U$ be a bounded open set in $[a,b] \\times {W^{1,p}(\\Om)}$. If $(J' - \\la(G' + F))(\\phi) \\neq 0$ for every $\\phi \\in {\\partial} U_{\\la},$ then $deg(J' - \\la(G' + F), U_{\\la}, 0) = C,$ $\\forall\\, \\la \\in [a,b].$\n\\end{proposition}\n\n\\begin{proof}\nIt is enough to show that $deg(J' - \\la(G' + F), U_{\\la}, 0)$ is locally constant on $[a,b].$ Then the proof will follow from the connectedness of $[a,b]$ and the continuity of the degree. For each $\\la\\in [a,b],$\nconsider the set $N_{\\la} = \\left\\{ \\phi \\in U_{\\la} : (J' - \\la(G' + F))(\\phi) = 0 \\right\\}.$\nFor $\\la_0 \\in [a,b]$, let $I_0 \\subset [a,b]$ be a neighbourhood of $\\la_0$ and let $V_0$ be an open set such that $N_{\\la_0} \\subset V_0 \\subset \\overline{V_{0}} \\subset U_{\\la_0}$ and $I_0 \\times V_0 \\subset U.$ We claim that there exists $$I_1 \\subset I_0 \\text{ such that } \\la_0 \\in I_1 \\text{ and } N_{\\la} \\subset V_0, \\; \\forall \\la \\in I_1.$$ If not, then there exists a sequence $(\\la_n,\\phi_n)$ in $U$ such that $\\phi_n \\in N_{\\la_n} \\setminus V_0$ and $\\la_n\\ra \\la_0.$ As $(\\phi_n)$ is bounded in ${W^{1,p}(\\Om)}$, $\\phi_n \\rightharpoonup \\phi$ for some $\\phi\\in{W^{1,p}(\\Om)}$. Now following the steps that yield \\eqref{eq:class}, we get $\\phi_n\\ra \\phi$ in ${W^{1,p}(\\Om)}$ and \n$(J' - \\la_0 (G' + F))(\\phi) = 0.$ Since $\\phi\\in \\overline{U_\\la}$ and $J' - \\la_0 (G' + F)$ is not vanishing on $\\partial U_\\la,$ we conclude $\\phi \\in U_\\la$. Thus $\\phi\\in N_{\\la_0},$ a contradiction since $\\phi\\not\\in V_0.$ Therefore, our claim must be true. 
Now consider the homotopy $H : I_1 \\times V_0 \\rightarrow ({W^{1,p}(\\Om)})'$ defined by $H(\\la, \\phi) = (J' - \\la (G' + F))(\\phi).$ By construction, for every $\\la\\in I_1,$ $H(\\la,.)$ does not vanish on ${\\partial} V_0.$ Thus by the classical homotopy invariance of degree (Proposition \\ref{degree}),\n$deg(H(\\la, \\cdot), V_0, 0) = C, \\; \\forall\\, \\la \\in I_1.$ Since $H(\\la, \\phi) \\neq 0$ in $U_{\\la} \\setminus V_0$, by the additivity of degree, we obtain $deg(H(\\la, \\cdot), U_{\\la}, 0) = C, \\; \\forall\\, \\la \\in I_1.$ \n\\end{proof}\n\n\n\\noi{\\textbf {Proof of Theorem \\ref{bifur}}}: We adapt the technique used in the proof of \\cite[Theorem 1.3]{Rabinowitz}. Recall that ${\\mathcal S} \\subset {\\mathbb R} \\times {W^{1,p}(\\Om)}$ is the set of all nontrivial solutions of $(J' - \\la (G' + F))(\\phi) = 0.$\nSuppose there does not exist any continuum ${\\mathcal C} \\subset {\\mathcal S}$ such that $(\\la_1, 0) \\in {\\mathcal C}$ and ${\\mathcal C}$ is either unbounded, or meets at $(\\la, 0)$ where $\\la$ is an eigenvalue of \\eqref{Steklov weight} and $\\la \\neq \\la_1$. Then by \\cite[Lemma 1.2]{Rabinowitz}, there exists a bounded open set $U \\subset {\\mathbb R} \\times {W^{1,p}(\\Om)}$ containing $(\\la_1,0)$ such that ${\\partial} U \\cap {\\mathcal S} = \\emptyset $ and $\\overline{U}\\cap {\\mathbb R} \\times\\{0\\} =\\overline{I}\\times \\{0\\},$ where $I=(\\la_1 - \\de, \\la_1 + \\de)$ with $0<\\de< \\min\\{\\la_1,\\la_2-\\la_1\\}.$ Thus $(\\la \\times {\\partial} U_{\\la}) \\cap {\\mathcal S} = \\emptyset$ for every $\\la\\in {\\mathbb R}$ and $(\\la, 0) \\not \\in {\\partial} U$ for $\\la \\in I$. In particular, $J' - \\la (G' + F)$ does not vanish on $\\partial U_\\la$ for every $\\la$ in $I$. Hence $deg(J' - \\la (G' + F), U_{\\la}, 0)$ is well defined and by homotopy invariance of degree (Proposition \\ref{gen homotopy}), we have\n\\begin{align}\\label{d1}\n deg(J' - \\la (G' + F), U_{\\la}, 0) = C, \\ \\ \\text{for} \\; \\la \\in I.\n\\end{align}\nNext we compute $ind(J' - \\la (G' + F),0)$ for $\\la\\in I$. Let $$d:=\\text{dist}((-\\infty,0]\\cup [\\la_2,\\infty),\\overline{U}).$$\n Since $\\overline{U}\\cap {{\\mathbb R}\\times\\{0\\}}=\\overline{I}\\times \\{0\\}$, we observe that $d>0.$ Now set $$\\rho(\\la)=\\left\\{\\begin{array}{ll}\n \\frac{d}{2}, & \\quad \\text{ for } \\la\\in (-\\infty,0]\\cup [\\la_2,\\infty),\\\\\n \\min\\{1,\\frac{1}{2}r(\\la)\\}, & \\quad \\text{ for } \\la\\in (0,\\la_2)\\setminus\\{\\la_1\\}. \n \\end{array}\\right. \n$$\nThus, using Lemma~\\ref{ls and p}, we easily conclude that $\\rho(\\la)>0$ for each $\\la\\ne \\la_1$ and $\\overline{B_{\\rho(\\la)}} \\setminus \\{ 0 \\}$ does not contain any solution of $J' - \\la (G' + F).$ \nLet $$I^* := \\left\\{\\la: (\\la,\\phi)\\in U \\text{ for some } \\phi\\right\\},\\quad \\la^*:=\\sup\\{\\la:\\la\\in I^*\\},\\quad \\la_*:=\\inf\\{\\la:\\la\\in I^*\\}$$ \n For $\\la\\in (\\la_1,\\la^*],$ let $\\rho = \\inf \\left\\{ \\rho(\\mu): \\mu \\in [\\la, \\la^*] \\right\\}.$ By Lemma \\ref{ls and p}, we have $\\rho > 0.$ Now consider the set $V= U \\setminus [\\la, \\la^*] \\times \\overline{B_{\\rho}}$. Observe that $V$ is bounded and open in $[\\la, \\la^*] \\times {W^{1,p}(\\Om)}$. Furthermore, for each $\\mu \\in [\\la, \\la^*]$, $V_\\mu=U_{\\mu} \\setminus \\overline{B_{\\rho}}$ and $(J' - \\mu (G' + F))$ does not vanish on ${\\partial} V_\\mu={\\partial}(U_{\\mu} \\setminus \\overline{B_{\\rho}})$. 
Therefore, by the homotopy invariance of degree (Proposition \\ref{gen homotopy}) and noting that $U_{\\la^*} = \\emptyset,$ we get \n$$ deg(J' - \\la (G' + F), U_{\\la} \\setminus \\overline{B_{\\rho}}, 0) = deg(J' - \\mu (G' + F), U_{\\la^*} \\setminus \\overline{B_{\\rho}}, 0)=0.$$ \nSimilarly, for $\\la \\in [\\la_*, \\la_1)$ we get $deg(J' - \\la (G' + F), U_{\\la} \\setminus \\overline{B_{\\rho}}, 0) = 0$.\n Since $(J' - \\la (G' + F))(\\phi) \\neq 0$ for $\\phi \\in B_{\\rho(\\la)} \\setminus \\overline{B_{\\rho}},$ by the additivity of the degree we get $$deg(J' - \\la (G' + F), U_{\\la} \\setminus \\overline{B_{\\rho(\\la)}}, 0) = 0,\\quad \\la \\in [\\la_*,\\la^*]\\setminus\\{\\la_1\\}.$$ \nAgain using the additivity of the degree, we conclude that\n\\begin{align*}\n deg(J' - \\la (G' + F), U_{\\la}, 0) = deg(J' - \\la (G' + F), B_{\\rho(\\la)}, 0), \\quad \\forall \\la \\in I \\setminus \\{\\la_1\\}.\n\\end{align*}\nThus from \\eqref{d1} we obtain\n\\begin{align*}\n ind(J' - \\la(G' + F), 0) = C, \\quad \\text{for} \\; \\la \\in I\\setminus \\{\\la_1\\}.\n\\end{align*}\nA contradiction to \\eqref{index difference}. Thus there must exist a continuous branch of non-trivial solutions from $(\\la_1, 0)$ and is either unbounded, or meets at $(\\la, 0)$ where $\\la$ is an eigenvalue of \\eqref{Steklov weight}. \\qed\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWe work over an algebraically closed field $k$ of characteristic $0$. \n\nOur aim in this article is to bring examples of varieties $X$ that do not have finitely generated Cox rings. Our varieties $X$ are toric varieties $X_\\Delta$ blown up at a point $t_0$ in the torus. In \\cite{GK} we constructed examples of such toric surfaces $X_\\Delta$ of Picard number $1$. In this article we generalize this construction to toric varieties of higher Picard number and higher dimension. \n\nLet us recall the definition by Hu and Keel \\cite{HuKeel} of the Cox ring of a normal projective variety $X$:\n\\[ \\Cox(X) = \\bigoplus_{[D]\\in \\Cl(X)} H^0(X, \\cO_X(D)).\\] \nGiving a ring structure to this space involves some choices, but finite generation of the resulting $k$-algebra does not depend on the choices. A normal projective $\\mathbb{Q}$-factorial variety $X$ is called a Mori Dream Space (MDS) if $\\Cox(X)$ is a finitely generated $k$-algebra. \n\nThe construction in \\cite{GK} was based on the examples of blowups at a point of weighted projective planes by Goto, Nishida and Watanabe \\cite{GNW} and the geometric description of these examples by Castravet and Tevelev \\cite{CastravetTevelev}. \nA basic fact about Cox rings is that on a MDS $X$ every nef divisor is semiample (i.e. there exists a positive multiple of the divisor that has no base locus and defines a morphism $X\\to \\mathbb{P}^n$). To prove that $X$ is not a MDS, it suffices to find a nef divisor $D$ that is not semiample. The examples in \\cite{GK} have Picard number 2 and\nthere is essentially a unique choice for $D$. The class of $D$ necessarily has to lie on the boundary of the (2-dimensional) nef cone. One of the boundary rays is generated by the class $H$ of the pullback of an ample divisor on $X_\\Delta$, which is clearly semiample. It follows that $D$ must lie on the other boundary ray. In the case where $X$ is a surface, this other boundary ray is determined if we can find a curve $C$ of negative self-intersection on $X$, different from the exceptional curve. 
\n\nIn general, the existence of a nef divisor $D$ on $X$ that is not semiample is only a sufficient condition for $X$ being a non-MDS. When $X_\\Delta$ is a weighted projective plane $\\mathbb{P}(a,b,c)$, then Cutkosky \\cite{Cutkosky} has shown that $X$ is a MDS if and only if the divisor $D$ as above is semiample.\n\nThere are two essential differences in the proof of non-finite generation when going to higher Picard number or higher dimension. In the case of surfaces $X$ with Picard number $p>2$ we still look for a curve $C\\subset X$ of negative self-intersection. This curve now defines a $(p-1)$-dimensional face of the nef cone and there is no obvious choice for the non-semiample divisor $D$. We show that a general divisor on this face is not semiample. \n\nIn dimension greater than $2$ we will encounter normal projective varieties $X$ that are not $\\mathbb{Q}$-factorial. \nFor such varieties the Cox ring and MDS are defined in the same way as above. (This generalizes slightly the definition of Hu and Keel \\cite{HuKeel} who required a MDS to be $\\mathbb{Q}$-factorial.) In this greater generality, if $X$ has a free class group and a finitely generated Cox ring, then its cones of effective, moving, semiample and nef divisors are polyhedral \\cite[Theorem 4.2, Theorem 7.3, Remark 7.6]{BH07}. Moreover, the cones of nef Cartier divisors and semiample Cartier divisors coincide \\cite[Corollary 7.4]{BH07}.\nIn our examples we find nef Cartier divisors $D$ that are not semiample and hence $X$ is not a MDS.\n\n\n{\\bf Acknowledgment.} We thank J\\\"urgen Hausen for explaining us various details in the definition of Cox rings. \n\n\\section{Statement of the main results.}\n\nWe use the terminology of toric varieties from \\cite{Fulton}. Let $X_\\Delta$ be the toric variety defined by a rational convex polytope $\\Delta$ and let $X$ be the blowup of $X_\\Delta$ at a general point, which we can assume to be the identity point $t_0 = (1,1,\\ldots,1)$ in the torus. We are interested in the Cox ring of $X$. \n\n \n\\subsection{The case of surfaces.}\n\nLet $\\Delta$ be a convex plane $4$-gon with rational vertices $(0,0)$, $(0,1)$, $P_L=(x_L, y_L)$, $P_R=(x_R, y_R)$, where $x_L<0$ and $x_R>0$ (see Figure~\\ref{fig-4gon}). The polygon can equivalently be defined by the slopes of its sides, $s_1,s_2,s_3,s_4$. We will assume that the slope $s_2$ of the side connecting $(0,0)$ and $P_R$ satisfies $0\\leq s_2 < 1$. When $x_R\\leq 1$, this can always be achieved without changing the isomorphism class of $X_{\\Delta}$ by applying an integral shear transformation $(x,y)\\mapsto(x,y+ax)$ for some $a\\in\\ZZ$ to the polytope.\n\n\\begin{figure}[ht] \n\\centerline{\\psfig{figure=4gon.eps,width=10cm}}\n\\caption{Polygon $\\Delta$.}\n\\label{fig-4gon}\n\\end{figure}\n\nChoose $m>0$ such that $m\\Delta$ is integral. We study lattice points in $m\\Delta$. Let us denote by column $c$ in $m\\Delta$ the set of lattice points with first coordinate $x=c$. \n\n\n\\begin{theorem}\\label{thm-2D}\nLet $\\Delta$ be a rational plane $4$-gon as above. Assume that $0\\leq s_2 <1$ and let $m>0$ be sufficiently large and divisible so that $m\\Delta$ is integral. The variety $X= \\Bl_{t_0} X_\\Delta$ is not a MDS if the following two conditions are satisfied:\n\\begin{enumerate}\n\\item Let $w=x_R-x_L$ be the width of $\\Delta$. Then $w<1$.\n\\item Let the column $mx_L+1$ in $m\\Delta$ consist of $n$ points $(mx_L+1,b+i)$, $i=0,\\ldots, n-1$. 
Then\n\\begin{enumerate}\n\\item columns $mx_R, mx_R-1,\\ldots,mx_R-n+1$ in $m\\Delta$ have $1,2,\\ldots,n$ lattice points, respectively;\n\\item $m y_L$ is not equal to $b+i$, $i=1,\\ldots,n-1$.\n\\end{enumerate}\n\\end{enumerate}\n\nIf the width $w=1$ or $\\Delta$ degenerates to a triangle with slopes $s_1=s_2$, then $X$ is not a MDS if in addition to (1') $w\\leq 1$ and (2) the following holds:\n\\begin{enumerate}\n\\item[(3)] Let $s=\\frac{y_R-y_L}{w}$ be the slope of the line joining the left and right vertices. Then $my_L\\neq b-ns$.\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{example}\nConsider $\\Delta$ with $(x_L,y_L)=(-3\/4, 1\/2)$ and $(x_R,y_R)=(1\/4, 3\/4)$. \n\n\\begin{figure}[ht] \\label{fig-4gon1}\n\\centerline{\\psfig{figure=4gon1.eps,width=10cm}}\n\\caption{Polygon $4\\Delta$ and the corresponding (outer) normal fan.}\n\\end{figure}\n\nIn this case $w=1$ and $n=1$. When $n=1$ condition (2) of the theorem is vacuously true and condition (3) states that the single lattice point in column $mx_L+1$ does not lie on the line joining the left and right vertices. (These conditions still hold after applying an integral shear transformation as above, hence the assumption $0\\leq s_2 <1$ is not necessary in the $n=1$ case.) \nThis gives an example of a surface $X$ of Picard number $3$ that is not a MDS. \nNotice that if we move the vertex $(x_R,y_R)$ to $(1\/4, 1)$ or $(1\/4,7\/6)$, but not $(1\/4,1\/2)$, the theorem applies and we again get an example of a non-MDS.\n\\end{example}\n\nWhen $\\Delta$ degenerates to a triangle then Theorem~\\ref{thm-2D} reduces to the case considered in \\cite{GK}. In the case of a triangle, He \\cite{He} has generalized condition (2a) to a weaker one. We expect that such a generalization also exists in the case of 4-gons.\n\nBy a result of Okawa \\cite{Okawa}, if $Y\\to X$ is a surjective morphism of (not necessarily $\\mathbb{Q}$-factorial) normal projective varieties,\nand $X$ is not a MDS, then $Y$ is also not a MDS. Thus, if $X= \\Bl_{t_0} X_\\Delta$ is not a MDS, we can replace $X_\\Delta$ with any toric blowup $X_{\\hat{\\Delta}}$ to produce non-MDS of higher Picard number. Our methods do not give examples of surfaces other than the ones obtained from a plane $4$-gon. The proof below shows that finite generation of the Cox ring of $X$ only depends on the singularities at the two torus fixed points corresponding to $P_L, P_R$ and the curve of negative self-intersection $C\\subset X$ passing through these points. If $X_\\Delta$ has toric divisors that do not pass through the two torus fixed points, then these can be contracted.\n\n\n\\subsection{ Higher dimensional varieties}\n\nWe first generalize Theorem~\\ref{thm-2D} to dimension $3$ and then discuss generalizations to dimension $4$ and higher.\n\nLet now $\\Delta$ be a rational convex $3$-dimensional polytope with vertices $(0,0,0)$, $(0,1,0)$, $(0,0,1)$, $P_L=(x_L,y_L,z_L)$, $P_R=(x_R,y_R,z_R)$, where $x_L<0$ and $x_R>0$. We allow $\\Delta$ to degenerate to a tetrahedron, where the points $(0,0,0), P_L, P_R$ are collinear.\n\n\\begin{figure}[ht] \\label{fig-polytope}\n\\begin{center}\n\\includegraphics[height=7cm]{polytope2.eps}\n\\end{center}\n\\caption{Polytope $\\Delta$.}\n\\end{figure}\n\n\nWe assume that $0\\leq \\frac{y_R}{x_R}, \\frac{z_R}{x_R} <1$. When $x_R\\leq 1$, this can be achieved by applying an integral shear transformation to the polytope.\n\nLet $m\\Delta$ be integral. A slice $c$ of $m\\Delta$ consists of all lattice points in $m\\Delta$ with first coordinate $x=c$. 
Such a slice forms a right triangle with $n$ lattice points on each side. We say that the slice has size $n$.\n\n\n\\begin{theorem}\\label{thm-3D}\nLet $\\Delta$ be a $3$-dimensional polytope as above. Assume that $0\\leq \\frac{y_R}{x_R}, \\frac{z_R}{x_R} <1$ and let $m>0$ be sufficiently large and divisible so that $m\\Delta$ is integral. The variety $X= \\Bl_{t_0} X_\\Delta$ is not a MDS if the following three conditions are satisfied:\n\\begin{enumerate}\n\\item Let $w=x_R-x_L$ be the width of $\\Delta$. Then $w\\leq 1$.\n\\item Let the slice $mx_L+1$ in $m\\Delta$ have size $n$ with points $(mx_L+1,b+i,c+j)$, $i,j\\geq 0$, $i+j0$ be sufficiently large and divisible so that $m\\Delta$ is integral. \n The variety $X= \\Bl_{t_0} X_\\Delta$ is not a MDS if the following three conditions are satisfied:\n\\begin{enumerate}\n\\item $w=x_R-x_L \\leq 1$.\n\\item The slice $mx_L+1$ in $m\\Delta$ consists of a single lattice point $P$.\n\\item The point $P$ does not lie on the line joining the left and right vertices of $m\\Delta$.\n\\end{enumerate}\n\\end{corollary}\n\nTheorem~\\ref{thm-3D} in particular applies to the case where $\\Delta$ is a tetrahedron.\nThe statement also simplifies in this case.\n\n\\begin{corollary}\\label{cor-3D-tetr}\nLet $\\Delta$ be a $3$-dimensional tetrahedron as above, where the points $(0,0,0), P_L, P_R$ are collinear. Let $m>0$ be sufficiently large and divisible so that $m\\Delta$ is integral. The variety $X= \\Bl_{t_0} X_\\Delta$ is not a MDS if the following three conditions are satisfied:\n\\begin{enumerate}\n\\item $w=x_R-x_L \\leq 1$.\n\\item Let the slice $mx_L+1$ in $m\\Delta$ have size $n$. Then \n the slice $mx_R-n+1$ in $m\\Delta$ has size $n$.\n\\item Let $s_y = \\frac{y_R-y_L}{w}$, $s_z = \\frac{z_R-z_L}{w}$ be the two slopes of the line joining left and right vertices. Then $n(s_y,s_z)\\notin\\ZZ^2$.\n\\end{enumerate}\n\\end{corollary}\n\n\nWe will study the tetrahedron case further to find examples where $X_\\Delta$ is a weighted projective space $\\mathbb{P}(a,b,c,d)$. Let $(x_L, x_R, y_0,z_0)$ be such that \n\\begin{gather*}\n(x_L,y_L,z_L) = x_L (1,y_0,z_0),\\\\\n (x_R,y_R,z_R) = x_R (1,y_0,z_0).\n\\end{gather*}\nThen the $4$-tuple of rational numbers $(x_L, x_R, y_0,z_0)$ determines the tetrahedron $\\Delta$. The normal fan to $\\Delta$ has rays generated by \n\\begin{equation}\\label{eq-rays} (y_0+z_0-\\frac{1}{x_L}, -1,-1), (y_0+z_0-\\frac{1}{x_R},-1,-1), (-y_0, 1, 0), (-z_0, 0,1).\\end{equation}\nThe slice $mx_L+1$ in $m\\Delta$ can be identified with lattice points in the triangle with vertices $(y_0,z_0), (y_0-\\frac{1}{x_L}, z_0), (y_0, z_0-\\frac{1}{x_L})$. It has size \n\\[ n = 1+ \\lfloor y_0+z_0-\\frac{1}{x_L}\\rfloor - \\lceil y_0\\rceil - \\lceil z_0\\rceil.\\]\nSimilarly, the slice $mx_R-n+1$ in $m\\Delta$ can be identified with lattice points in the triangle with vertices $(n-1)(y_0,z_0), (n-1)(y_0-\\frac{1}{x_R}, z_0), (n-1)(y_0, z_0-\\frac{1}{x_R})$. It has size \n\\[ 1- \\lceil (n-1)(y_0+z_0-\\frac{1}{x_R})\\rceil + \\lfloor(n-1) y_0\\rfloor + \\lfloor (n-1)z_0\\rfloor.\\]\n \nWe can now state Corollary~\\ref{cor-3D-tetr} in terms of $(x_L, x_R, y_0,z_0)$.\n\n\\begin{corollary}\\label{cor-3D-tetr1}\nLet $\\Delta$ be a tetrahedron given by the $4$-tuple of rational numbers \n$(x_L, x_R, y_0,z_0)$, with $x_L<0$ and $x_R>0$. 
The variety $X= \\Bl_{t_0} X_\\Delta$ is not a MDS if the following three conditions are satisfied:\n\\begin{enumerate}\n\\item $w=x_R-x_L \\leq 1$.\n\\item Let\n\\[ n = 1+ \\lfloor y_0+z_0-\\frac{1}{x_L}\\rfloor - \\lceil y_0\\rceil - \\lceil z_0\\rceil.\\]\n Then also \n\\[ n= 1 - \\lceil (n-1)(y_0+z_0-\\frac{1}{x_R})\\rceil + \\lfloor(n-1) y_0\\rfloor + \\lfloor (n-1)z_0\\rfloor.\\]\n\\item $n(y_0,z_0)\\notin\\ZZ^2$.\n\\end{enumerate}\n\\end{corollary}\n\n\nNote that the statements of Corollaries~\\ref{cor-3D-n1}, \\ref{cor-3D-tetr} and \\ref{cor-3D-tetr1} \ndo not depend on the assumption $0\\leq \\frac{y_R}{x_R}, \\frac{z_R}{x_R} <1$. The three conditions are the same after applying an integral shear transformation as above.\n\n\n\\begin{example} \\label{ex-3D-1}\nLet $x_L=-3\/5, x_R=6\/17, y_0= 1\/3, z_0=1\/2$. The three conditions of Corollary~\\ref{cor-3D-tetr1} are satisfied with $w = 81\/85$ and $n=1$. The normal fan has rays generated by\n\\[ (5,-2,-2), (-2,-1,-1), (-1,3,0), (-1,0,2).\\]\nThese vectors generate the lattice $\\ZZ^3$, and $X_\\Delta$ is the weighted projective space $\\mathbb{P}(17,20,18,27)$.\n\\end{example}\n\n\\begin{example} \\label{ex-3D-2}\nLet $x_L=-2\/3, x_R=1\/3, y_0= 1\/2, z_0=1\/2$. The three conditions are again satisfied with $w=1$ and $n=1$. The normal fan has rays generated by\n\\[ (5,-2,-2), (2,-3,-3), (-1,2,0), (-1,0,2).\\]\nThese vectors generate a sublattice of index $2$ in $\\ZZ^3$, and $X_\\Delta$ is the quotient of $\\mathbb{P}(2,6,11,11)$ by a $2$-element subgroup of the torus.\n\\end{example}\n\n\\begin{example} \\label{ex-3D-3}\nLet $x_L=-5\/18, x_R=5\/7, y_0= 2\/5, z_0=1$. Here $w = 125\/126 <1$ and $n=4$.\nHowever, \n\\[ 1- \\lceil (n-1)(y_0+z_0-\\frac{1}{x_R})\\rceil + \\lfloor(n-1) y_0\\rfloor + \\lfloor (n-1)z_0\\rfloor = 5,\\]\nand hence Corollary~\\ref{cor-3D-tetr1} does not apply to the blowup of $X_\\Delta= \\mathbb{P}(7,18,5,25)$.\n\\end{example}\n\n\\begin{remark}\nGiven a polytope $\\Delta$, one can project it to the $xy$-plane or the $xz$-plane to get a plane $4$-gon. The slice $c$ in $m\\Delta$ has size no bigger than the corresponding column $c$ in the projection. This implies that if the projection of $\\Delta$ satisfies the conditions of Theorem~\\ref{thm-2D} with $n=1$, then $\\Delta$ satisfies the conditions in Corollary~\\ref{cor-3D-n1}. Thus, one can construct $3$-dimensional polytopes by lifting $2$-dimensional polygons. However, Examples \\ref{ex-3D-1} and \\ref{ex-3D-2} are genuinely new: they can not be reduced to $2$-dimensional cases by projection. This can be seen as follows.\nThe projection of the tetrahedron to the $xy$-plane is a triangle determined by $(x_L,x_R,y_0)$. The three conditions of Theorem~\\ref{thm-2D} in the case $n=1$ are:\n\\begin{enumerate}\n\\item $w=x_R-x_L \\leq 1$.\n\\item $1= 1+\\lfloor y_0-\\frac{1}{x_L}\\rfloor - \\lceil y_0\\rceil$.\n\\item $y_0\\notin \\ZZ$.\n\\end{enumerate}\nIn Examples \\ref{ex-3D-1} and \\ref{ex-3D-2} the second condition is not satisfied. Similarly, projecting to the $xz$-plane, the condition $1= 1+\\lfloor z_0-\\frac{1}{x_L}\\rfloor - \\lceil z_0\\rceil$ is not satisfied.\n\\end{remark}\n\nIn \\cite{GK} we gave an algorithm for checking if the blowup of a weighted projective plane satisfies the assumptions of Theorem~\\ref{thm-2D}. We will state a similar result in dimension $3$.\n\nConsider the weighted projective space $\\mathbb{P}(a,b,c_1,c_2)$. 
We say that $(e,f,g_1,g_2)\\in \\ZZ^4_{> 0}$ is a relation in degree $d$ if\n\\[ ea+fb = g_1 c_1 = g_2 c_2 = d.\\]\nWe require for a relation $(e,f,g_1,g_2)$ that\n\\[ \\gcd(e,f,g_1) = \\gcd(e,f,g_2)=\\gcd(g_1,g_2) = 1.\\]\n(If $x,y, z_1,z_2$ are variables of degree $a,b,c_1, c_2$ respectively, then $x^e y^f, z_1^{g_1}, z_2^{g_2}$ are three monomials of degree $d$. They correspond to the three lattice points in $\\Delta$.)\n\n\\begin{theorem} \\label{thm-proj3}\nLet $\\mathbb{P}(a,b,c_1,c_2)$ be a weighted projective space with a relation $(e,f,g_1,g_2)$ in degree $d$. Then $\\Bl_{t_0} \\mathbb{P}(a,b,c_1,c_2)$ is not a MDS if the following three conditions are satisfied:\n\\begin{enumerate}\n\\item Let \n\\[ w= \\frac{d^3}{abc_1 c_2}.\\]\nThen $w\\leq 1$.\n\\item Consider integers $\\delta_1,\\delta_2 \\leq 0$ such that the vector\n\\[ \\frac{1}{g_1 g_2}(b,a)+\\big(\\frac{\\delta_1}{g_1}+\\frac{\\delta_2}{g_2}\\big)(e,-f)\\]\nhas non-negative integer entries. The set of such $(\\delta_1,\\delta_2)$ forms a slice of size $n$. Then the integers $\\gamma_1,\\gamma_2 \\geq 0$ such that \n\\[ \\frac{n-1}{g_1 g_2}(b,a)+\\big(\\frac{\\gamma_1}{g_1}+\\frac{\\gamma_2}{g_2}\\big)(e,-f)\\]\nhas non-negative integer entries must also form a slice of size $n$. \n\\item With $n$ as above, \n\\[ \\frac{n}{g_1 g_2} (b,a) \\notin \\ZZ^2.\\]\n\\end{enumerate}\n\\end{theorem} \n\nTo check if some $\\mathbb{P}(a,b,c_1,c_2)$ satisfies the assumptions of the theorem, we first determine $g_1, g_2$. The conditions $g_1 c_1 = g_2 c_2$ and $\\gcd(g_1,g_2) = 1$ imply that $g_1=c_2\/\\gcd(c_1,c_2)$, $g_2=c_1\/\\gcd(c_1,c_2)$. After that we check that $w\\leq 1$, find $e,f$, and compute the two slices. \n\nTable~\\ref{tab50} lists examples with $a,b,c_1,c_2 < 50$ that were found using a computer. We have omitted some isomorphic weighted projective spaces from this table. For example, $\\mathbb{P}(a,b,c_1,c_2)\\isom \\mathbb{P}(da,db,dc_1,dc_2)$ for any $d>0$. Similarly, if a prime $p$ divides all numbers $a,b,c_1,c_2$ except one, we can divide the three numbers by $p$ to get isomorphic weighted projective spaces. The table lists only spaces $\\mathbb{P}(a,b,c_1,c_2)$ where every triple in $\\{a,b,c_1,c_2\\}$ has no common divisor greater than $1$. 
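\n\nAs a quick illustration of this checking procedure, the entry $\\mathbb{P}(17,20,18,27)$ of Table~\\ref{tab50} can be verified as follows (the arithmetic is routine and agrees with Example~\\ref{ex-3D-1}): here $g_1 = 27\/\\gcd(18,27) = 3$ and $g_2 = 18\/\\gcd(18,27) = 2$, so $d = g_1 c_1 = g_2 c_2 = 54$, and $(e,f) = (2,1)$ since $2\\cdot 17 + 1\\cdot 20 = 54$. Condition (1) then reads\n\\[ w = \\frac{54^3}{17\\cdot 20\\cdot 18\\cdot 27} = \\frac{81}{85} \\leq 1,\\]\ncondition (2) holds with $n=1$ (as recorded in the table), and condition (3) holds since $\\frac{1}{6}(20,17) \\notin \\ZZ^2$.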
\n\n\\begin{table}[ht] \n\\begin{minipage}[b]{0.4\\linewidth}\\centering\n\\begin{tabular}{| c | c | c|}\n\\hline\n\\hline\n$\\mathbb{P}(a,b,c_1, c_2)$ & $(e,f,g_1,g_2)$ & $n$\\\\\n\\hline\n$\\mathbb{P}(47, 13, 12, 30)$ & $(1, 1, 5, 2)$ & $1$ \\\\ \n$\\mathbb{P}(19, 41, 15, 20)$ & $(1, 1, 4, 3)$ & $3$ \\\\ \n$\\mathbb{P}(43, 17, 15, 20)$ & $(1, 1, 4, 3)$ & $1$ \\\\ \n$\\mathbb{P}(26, 49, 15, 25)$ & $(1, 1, 5, 3)$ & $3$ \\\\ \n$\\mathbb{P}(11, 32, 18, 27)$ & $(2, 1, 3, 2)$ & $2$ \\\\ \n$\\mathbb{P}(13, 28, 18, 27)$ & $(2, 1, 3, 2)$ & $2$ \\\\ \n$\\mathbb{P}(17, 20, 18, 27)$ & $(2, 1, 3, 2)$ & $1$ \\\\ \n$\\mathbb{P}(47, 7, 18, 27)$ & $(1, 1, 3, 2)$ & $1$ \\\\ \n$\\mathbb{P}(23, 44, 18, 45)$ & $(2, 1, 5, 2)$ & $2$ \\\\ \n$\\mathbb{P}(29, 32, 18, 45)$ & $(2, 1, 5, 2)$ & $1$ \\\\ \n$\\mathbb{P}(23, 20, 22, 33)$ & $(2, 1, 3, 2)$ & $1$ \\\\ \n$\\mathbb{P}(25, 16, 22, 33)$ & $(2, 1, 3, 2)$ & $1$ \\\\ \n$\\mathbb{P}(29, 20, 26, 39)$ & $(2, 1, 3, 2)$ & $1$ \\\\ \n\\hline\n\\end{tabular}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}[b]{0.4\\linewidth}\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\hline\n$\\mathbb{P}(a,b,c_1,c_2)$ & $(e,f,g_1,g_2)$ & $n$\\\\\n\\hline\n$\\mathbb{P}(31, 16, 26, 39)$ & $(2, 1, 3, 2)$ & $1$ \\\\ \n$\\mathbb{P}(29, 50, 27, 36)$ & $(2, 1, 4, 3)$ & $2$ \\\\ \n$\\mathbb{P}(31, 46, 27, 36)$ & $(2, 1, 4, 3)$ & $1$ \\\\ \n$\\mathbb{P}(35, 38, 27, 36)$ & $(2, 1, 4, 3)$ & $1$ \\\\ \n$\\mathbb{P}(43, 49, 27, 45)$ & $(2, 1, 5, 3)$ & $1$ \\\\ \n$\\mathbb{P}(44, 47, 27, 45)$ & $(2, 1, 5, 3)$ & $1$ \\\\ \n$\\mathbb{P}(17, 33, 28, 42)$ & $(3, 1, 3, 2)$ & $1$ \\\\ \n$\\mathbb{P}(19, 27, 28, 42)$ & $(3, 1, 3, 2)$ & $1$ \\\\ \n$\\mathbb{P}(37, 16, 30, 45)$ & $(2, 1, 3, 2)$ & $1$ \\\\ \n$\\mathbb{P}(23, 27, 32, 48)$ & $(3, 1, 3, 2)$ & $1$ \\\\ \n$\\mathbb{P}(43, 46, 33, 44)$ & $(2, 1, 4, 3)$ & $1$ \\\\ \n$\\mathbb{P}(47, 38, 33, 44)$ & $(2, 1, 4, 3)$ & $1$ \\\\ \n$\\mathbb{P}(49, 34, 33, 44)$ & $(2, 1, 4, 3)$ & $1$ \\\\ \n\\hline\n\\end{tabular}\n\\end{minipage}\n\\\\ [2ex]\n\\caption{Weighted projective spaces $\\mathbb{P}(a,b,c_1,c_2)$, $a,b,c_1,c_2 <50$, with relation $(e,f,g_1,g_2)$, that satisfy the conditions of Theorem~\\ref{thm-proj3}. } \\label{tab50}\n\\end{table}\n\nCorollaries~\\ref{cor-3D-n1}, \\ref{cor-3D-tetr} and \\ref{cor-3D-tetr1} have obvious generalizations to higher dimension. Similarly, Theorem~\\ref{thm-proj3} can be generalized to dimension $r$. We need to consider weighted projective spaces $\\mathbb{P}(a,b,c_1,c_2,\\ldots,c_{r-1})$ with a relation $(e,f,g_1,g_2,\\ldots,g_{r-1})$. Wherever there is a term with $c_1$ and $c_2$ (or $g_1, g_2$) in Theorem~\\ref{thm-proj3}, we need to add terms with $c_3,\\ldots, c_{r-1}$ (or $g_3,\\ldots,g_{r-1}$). Table~\\ref{tab60} lists weighted projective $4$-spaces with $a,b,c_i < 65$. Again, only normalized numbers are listed. 
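\n\nIn particular, the analogue of condition (1) in dimension $4$ should presumably read $w = d^4\/(a b c_1 c_2 c_3) \\leq 1$. For the first entry of Table~\\ref{tab60}, $\\mathbb{P}(47,13,12,30,60)$ with relation $(1,1,5,2,1)$, we have $d = 1\\cdot 47 + 1\\cdot 13 = 5\\cdot 12 = 2\\cdot 30 = 1\\cdot 60 = 60$ and\n\\[ w = \\frac{60^4}{47\\cdot 13\\cdot 12\\cdot 30\\cdot 60} = \\frac{600}{611} \\leq 1.\\]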
\n\n\\begin{table}[ht] \\label{tab60}\n\\begin{minipage}[b]{0.45\\linewidth}\\centering\n\\begin{tabular}{| c | c | c|}\n\\hline\n\\hline\n$\\mathbb{P}(a,b,c_1, c_2, c_3)$ & $(e,f,g_1,g_2,g_3)$ & $n$\\\\\n\\hline\n$\\mathbb{P}(47, 13, 12, 30, 60)$ & $(1, 1, 5, 2, 1)$ & $1$ \\\\ \n$\\mathbb{P}(19, 11, 13, 52, 52)$ & $(1, 3, 4, 1, 1)$ & $3$ \\\\ \n$\\mathbb{P}(21, 10, 13, 52, 52)$ & $(2, 1, 4, 1, 1)$ & $1$ \\\\ \n$\\mathbb{P}(19, 41, 15, 20, 60)$ & $(1, 1, 4, 3, 1)$ & $3$ \\\\ \n$\\mathbb{P}(43, 17, 15, 20, 60)$ & $(1, 1, 4, 3, 1)$ & $1$ \\\\ \n$\\mathbb{P}(22, 7, 17, 51, 51)$ & $(2, 1, 3, 1, 1)$ & $1$ \\\\ \n$\\mathbb{P}(11, 32, 18, 27, 54)$ & $(2, 1, 3, 2, 1)$ & $2$ \\\\ \n$\\mathbb{P}(13, 28, 18, 27, 54)$ & $(2, 1, 3, 2, 1)$ & $2$ \\\\ \n\\hline\n\\end{tabular}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}[b]{0.45\\linewidth}\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\hline\n$\\mathbb{P}(a,b,c_1,c_2,c_3)$ & $(e,f,g_1,g_2,g_3)$ & $n$\\\\\n\\hline\n$\\mathbb{P}(17, 20, 18, 27, 54)$ & $(2, 1, 3, 2, 1)$ & $1$ \\\\ \n$\\mathbb{P}(47, 7, 18, 27, 54)$ & $(1, 1, 3, 2, 1)$ & $1$ \\\\ \n$\\mathbb{P}(25, 7, 19, 57, 57)$ & $(2, 1, 3, 1, 1)$ & $1$ \\\\ \n$\\mathbb{P}(53, 7, 20, 30, 60)$ & $(1, 1, 3, 2, 1)$ & $1$ \\\\ \n$\\mathbb{P}(15, 7, 26, 52, 52)$ & $(3, 1, 2, 1, 1)$ & $1$ \\\\ \n$\\mathbb{P}(9, 13, 29, 58, 58)$ & $(5, 1, 2, 1, 1)$ & $1$ \\\\ \n$\\mathbb{P}(17, 7, 29, 58, 58)$ & $(3, 1, 2, 1, 1)$ & $1$ \\\\ \n$\\mathbb{P}(19, 7, 32, 64, 64)$ & $(3, 1, 2, 1, 1)$ & $1$ \\\\ \n\\hline\n\\end{tabular}\n\\end{minipage}\n\\\\ [2ex]\n\\caption{Weighted projective spaces $\\mathbb{P}(a,b,c_1,c_2,c_3)$, $a,b,c_1,c_2,c_3 < 65$, with relation $(e,f,g_1,g_2,g_3)$ that satisfy the conditions of Theorem~\\ref{thm-proj3} in dimension $4$.} \\label{tab60}\n\\end{table}\n\n\n \n\n\n\n\n\\section{Proof of Theorem~\\ref{thm-2D}}\n\nWe use standard notation from birational geometry. Let $N^1(X)$ (resp. $N_1(X)$) be the group of numerical equivalence classes of Cartier divisors (resp. $1$-cycles). Let $\\overline{NE(X)}\\subset N_1(X)_\\mathbb{R}$ be the closed Kleiman-Mori cone of curves, and $Nef(X)\\subset N^1(X)_\\mathbb{R}$ the dual cone of nef divisors.\n\nWe prove Theorem~\\ref{thm-2D} by contradiction. We assume that $X$ is a MDS and produce a nef divisor $D$ that is not semiample. Note that $X$ being a MDS implies that its nef cone is polyhedral, generated by a finite number of semiample divisor classes.\n\nLet $\\Delta$ be a plane $4$-gon as in the theorem. The toric variety $X_\\Delta$ is $\\mathbb{Q}$-factorial and has Picard number $2$. The blowup $X$ has Picard number $3$. (We will deal with the case where $\\Delta$ is a triangle or $w=1$ later.) The $4$-gon contains two lattice points, $(0,0)$ and $(0,1)$. Consider the irreducible curve in the torus $T$ defined by the vanishing of the binomial\n\\[ \\chi^{(0,0)} - \\chi^{(0,1)} = 1-y,\\]\nand let $\\overline{C}\\subset X_\\Delta$ be its closure. Considering $\\overline{C}$ as a $\\mathbb{Q}$-Cartier divisor in $X_\\Delta$, it has class corresponding to the polygon $\\Delta$. This implies that its self-intersection number is\n\\[ \\overline{C}^2 = 2 Area(\\Delta) = w.\\]\nIf now $C$ is the strict transform of $\\overline{C}$ in $X$, then $C$ has divisor class $\\pi^* \\overline{C}-E$, where $\\pi:X\\to X_\\Delta$ is the blowup map and $E$ is the exceptional divisor. Hence $C^2 = w-1 <0$. 
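(Here $2 Area(\\Delta) = w$ because the diagonal from $(0,0)$ to $(0,1)$ divides $\\Delta$ into two triangles with this common base and opposite vertices $P_L$ and $P_R$, of areas $|x_L|\/2$ and $x_R\/2$.) 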
This implies that $C$ defines an extremal ray in the cone $\\overline{NE(X)}$ and $C^\\perp$ defines a $2$-dimensional face in the $3$-dimensional nef cone of $X$. We will show that a general divisor $D \\in C^\\perp \\cap Nef(X)$ is not semiample.\n\nLet us start by describing the face of the nef cone defined by $C^\\perp$.\nA nef divisor in $X$ has the form $H-aE$, where $a\\geq 0$ and $H$ is the pullback of a nef divisor in $X_\\Delta$. We may assume that $a\\neq 0$, and even more specifically that $a=1$. Indeed, if $a=0$ and $(H-aE)\\cdot C = 0$, then also $H=0$ because $\\overline{C}$ is ample on $X_\\Delta$. The divisor $H$ corresponds to a convex polygon with sides parallel to the sides of $\\Delta$. (The polygon may be degenerate if some side has length $0$). Let us define the width of $H$ as the width of the corresponding polygon. \n\n\\begin{lemma}\nA nef divisor $H-E$ lies in $C^\\perp$ if and only if the width of $H$ is equal to $1$.\n\\end{lemma}\n\n\\begin{proof}\nLet $\\Delta'$ be the polygon corresponding to $H$ and let $m>0$ be such that $m\\Delta'$ is integral. Denote by $Q_L$ and $Q_R$ the left and right vertices of $m\\Delta'$ (which are necessarily distinct). Consider the divisor in $T$ defined by the vanishing of \n\\[ \\chi^{Q_L} - \\chi^{Q_R}.\\] \nLet $\\overline{D}$ be its closure in $X_\\Delta$ and let $D=\\pi^*\\overline{D}-mE$ in $X$. Then $D$ has class $m(H-E)$.\n\nLet us compute the intersection number $\\overline{D}\\cdot \\overline{C}$. The two curves intersect only in the torus $T$. We may multiply the equation $\\chi^{Q_L} - \\chi^{Q_R}$ with $\\chi^{-Q_L}$ to put it in the form $1-x^i y^j$. Here $i\/m$ is the width of the polygon $\\Delta'$. Now the intersection\n\\[ V(1-x^i y^j) \\cap V(1-y)\\]\nhas $i$ points with multiplicity $1$. This implies that \n\\[ D\\cdot C = \\overline{D}\\cdot \\overline{C} + m E\\cdot E = i-m,\\]\nwhich is zero if and only if $i=m$.\n\\end{proof}\n\nLet now $D$ be a general nef $\\mathbb{Q}$-divisor on $X$ in the class $H-E$, where $H$ is defined by a polygon $\\Delta'$ of width $1$. Since $D$ is a general divisor on the $2$-dimensional face of $Nef(X)$, we may assume that $\\Delta'$ is a $4$-gon. We wish to show that $D$ is not semiample. More precisely, we show that for any $m$ sufficiently large and divisible, all global sections of $\\cO_X(m D)$ vanish at the $T$-fixed point corresponding to the left vertex $P_L$. \n\nLet $m>0$ be an integer such that $m\\Delta'$ is integral. Let $Q_L, Q_R$ be the left and right vertices of $\\Delta'$. Global sections of $\\cO_X(m D)$ have the form \n\\[ f= \\sum_{q\\in m\\Delta'} a_q \\chi^q \\qquad a_q\\in k, \\text{ $f$ vanishes to order at least $m$ at $t_0$}.\\]\nSuch a global section $f$ vanishes at the $T$-fixed point corresponding to $P_L$ if and only if $a_{m Q_L}=0$. The condition that $f$ vanishes to order at least $m$ at $t_0$ can be expressed by saying that all partial derivatives of $f$ up to order $m-1$ vanish at the point $t_0 = (1,1)$. 
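Concretely, for a monomial $\\chi^q = x^iy^j$ one has\n\\[ \\partial_x^k\\partial_y^l\\big(x^iy^j\\big)\\Big|_{(1,1)} = i(i-1)\\cdots(i-k+1)\\cdot j(j-1)\\cdots(j-l+1),\\]\nso each of these vanishing conditions is a linear condition on the coefficients $a_q$, with entries given by such products of falling factorials of the exponents. 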
Now the vanishing of the coefficient $a_{m Q_L}$ is equivalent to the existence of a partial derivative $\\cD$ of order at most $m-1$ such that for $q\\in m\\Delta'$\n\\[ \\cD(\\chi^q)|_{t_0} = \\begin{cases}\n0 & \\text{if $q\\neq m Q_L$,}\\\\\nc\\neq 0 & \\text{if $q = m Q_L$.}\n\\end{cases} \\]\n\nAs in \\cite{GK}, it is enough to find such a derivative $\\cD$ after an integral translation of $m\\Delta'$ (which corresponds to multiplication of $f$ with a monomial).\nWe translate $m\\Delta'$ so that its right vertex $m Q_R$ has coordinates $(m-2,0)$. Then its left vertex $mQ_L$ has coordinates $(-2,\\beta)$ for some $\\beta\\in\\ZZ$. We choose $\\cD$ of the form \n\\[ \\cD = \\partial_x^{m-n-1} \\tilde{\\cD},\\]\nwhere $\\tilde{\\cD}$ has order at most $n$. Note that $\\partial_x^{m-n-1}$ vanishes when applied to monomials $\\chi^q= x^i y^j$, $0\\leq i < m-n-1$. After applying $\\partial_x^{m-n-1}$ to the monomials $\\chi^q$, $q\\in m\\Delta'$, the results with nonzero coefficients can be divided into three sets:\n\\begin{align*}\n S_1 &= \\{ x^{-A-1} y^\\beta\\},\\\\\n S_2 &= \\{ x^{-A} y^{B+j}\\}_{j=0,\\ldots,n-1},\\\\\n S_3 &= \\{ x^i y^j\\}_{i,j\\geq0, i+j0$, $n>0$. Consider three sets of monomials\n\\begin{align*}\n S_1 &= \\{ x^{-A-1} y^\\beta\\},\\\\\n S_2 &= \\{ x^{-A} y^{B+j}\\}_{j=0,\\ldots,n-1},\\\\\n S_3 &= \\{ x^iy^j\\}_{i,j\\geq0, i+j0$, $n>0$. We want to find a degree $n$ polynomial $p(X,Y,Z)$ that vanishes on $T_2$ and $T_3$, but not on $T_1$.\n\nThe general polynomial that vanishes on $T_3$ has the form\n\\begin{equation}\\label{eq-form} \n p(X,Y,Z) = \\sum_{i,j\\geq 0; i+j\\leq n} c_{ij} [X]_{n-i-j} [Y]_i [Z]_j.\n \\end{equation} \nAs before we find\n\\[ p(-A-1, Y, Z) = \\frac{A+n}{A}p(-A,Y,Z) - \\frac{1}{A} \\sum_{i,j} i c_{ij} [-A]_{n-i-j} [Y]_{i} [Z]_j - \\frac{1}{A} \\sum_{i,j} j c_{ij} [-A]_{n-i-j} [Y]_{i} [Z]_j,\\]\n\\[ p(-A, Y-1,Z ) = p(-A,Y,Z) - \\frac{1}{Y} \\sum_{i,j} i c_{ij} [-A]_{n-i-j} [Y]_{i} [Z]_j,\\]\n\\[ p(-A, Y,Z-1 ) = p(-A,Y,Z) - \\frac{1}{Z} \\sum_{i,j} j c_{ij} [-A]_{n-i-j} [Y]_{i} [Z]_j.\\]\nEliminating the sums from the three equations we get\n\\[ A p(-A-1,Y,Z) = (A+n-Y-Z) p(-A, Y, Z) + Y p(-A, Y-1, Z) + Z p(-A, Y, Z-1).\\]\n\nThe polynomial $p(X,Y,Z)$ must vanish at points $(-A, Y,Z)\\in T_2$. There is an $(n+1)$-dimensional space of degree $n$ polynomials in $Y,Z$ that vanish at these points. A basis for this space is given by $[Y-B]_d [Z-C]_{n-d}$, $d=0,\\ldots, n$. \nLet $p=p_d$ be a polynomial as in (\\ref{eq-form}) with the coefficients $c_{ij}$ chosen\nsuch that \n\\[ p_d(-A,Y,Z) = [Y-B]_d [Z-C]_{n-d}.\\]\nWhen $d=n$, we get the polynomial from the $2$-dimensional case $p_n(-A,Y,Z) = [Y-B]_n$, which at $X=-A-1$ is\n\\[ p_n(-A-1,Y,Z) = [Y-B-1]_{n-1}(Y-B-\\frac{n B}{A}).\\]\nSimilarly, the polynomial $p_0$ satisfies \n\\[ p_0(-A-1,Y,Z) = [Z-C-1]_{n-1}(Z-C-\\frac{n C}{A}).\\]\nFor $00$, $\\bgamma=n-d>0$. Then $p_d$ is the only polynomial whose first part does not vanish at $(\\bbeta,\\bgamma)$. The last part of $p_d$ vanishes if and only if $B+C=A$.\n\\item All other $(\\bbeta,\\bgamma)$. There exist two different $d$ such that the first part of $p_d$ does not vanish at $(\\bbeta,\\bgamma)$. If both last parts vanish at $(\\bbeta,\\bgamma)$ then $(\\bbeta, \\bgamma) =(\\frac{n B}{A},\\frac{n C}{A})$.\n\\end{itemize} \n\\end{proof}\n\n\\section{Proofs in dimension $3$.}\n\nWe start with the proof of Theorem~\\ref{thm-3D}.\n\nLet $\\Delta$ be the polytope in Theorem~\\ref{thm-3D}. 
The variety $X_\\Delta$ is not $\\mathbb{Q}$-factorial and has Picard number $1$. (To see the Picard number, consider deformations of the polytope by moving facets in the normal direction. We may keep one vertex, say the origin, fixed and move the remaining two facets. There is a one parameter family of such deformations, given by moving the vertex $(0,1,0)$ along the $y$-axis.)\n Let $H$ be the class of the $\\mathbb{Q}$-Cartier divisor corresponding to the polytope $\\Delta$. Then $H$ generates $\\Pic(X_\\Delta)_\\mathbb{R}$. The space $\\Pic(X)_\\mathbb{R} = N^1(X)$ is generated by (the pullback of) $H$ and the class $E$ of the exceptional divisor.\n\nWe construct a curve $C \\subset X$ that is analogous to a curve of negative self-intersection on a surface. The polytope $\\Delta$ contains $3$ lattice points $(0,0,0)$, $(0,1,0)$ and $(0,0,1)$. Consider two surfaces in the torus $T$ defined by the vanishing of\n\\[ \\chi^{(0,0,0)} - \\chi^{(0,1,0)} = 1-y,\\]\n\\[ \\chi^{(0,0,0)} - \\chi^{(0,0,1)} = 1-z,\\]\n and let $\\bar{S}_1, \\bar{S}_2$ be their closures in $X_\\Delta$. Then $\\bar{S}_1$ and $\\bar{S}_2$ are both $\\mathbb{Q}$-Cartier divisors in the class $H$. Let $\\bar{C}$ be their intersection. \n\n\\begin{lemma}\n$\\bar{C}$ is an irreducible curve.\n\\end{lemma}\n\n\\begin{proof}\nWe consider the intersection of $\\bar{C}$ with $T$-orbits of $X_\\Delta$. For any $T$-orbit of dimension $1$ or $2$, the restriction of at least one $S_i$ to the orbit is defined by a monomial equation, hence that $S_i$ does not intersect the $T$-orbit. This implies that $\\bar{C}$ does not contain any component in $X_\\Delta \\setmin T$ and hence is irreducible. \n\\end{proof}\n\nLet $S_1$, $S_2$, $C$ be the strict transforms of $\\bar{S}_1$, $\\bar{S}_2$, $\\bar{C}$ in $X$. Then $S_1$ and $S_2$ both have class $H-E$ and $C=S_1\\cap S_2$.\n\n\\begin{lemma}\nThe class of $C$ generates an extremal ray in $\\overline{NE(X)}$. The dual face of $Nef(X)$ is generated by the class $\\frac{1}{w}H - E$. \n\\end{lemma}\n\n\\begin{proof}\nWe can compute the intersection number\n\\[ \\bar{S}_i^3 = H^3 = 6 Vol(\\Delta) = w.\\]\nHence $S_i^3 = w-1 \\leq 0$. Now \n\\[ S_i \\cdot C = S_i^3 \\leq 0.\\]\nAny other irreducible curve $C'$ in $X$ does not lie on either $S_1$ or $S_2$, hence $S_i\\cdot C' \\geq 0$. It follows that the class of $C$ lies on the boundary of $\\overline{NE(X)}$, and since this cone is $2$-dimensional, $C$ generates an extremal ray.\n\nThe class $\\frac{1}{w}H - E$ is orthogonal to $C$:\n\\[ (\\frac{1}{w}H - E)\\cdot C = (\\frac{1}{w}H - E)(H-E)(H-E) = \\frac{1}{w}H^3 - 1 = 0,\\]\nhence it generates a boundary ray of $Nef(X)$.\n\\end{proof}\n\n\nIt now remains to show that a divisor in the class $\\frac{1}{w}H - E$ is not semiample. Let $m$ be as in the theorem, with $m\\Delta$ integral, and let $M=mw \\in\\ \\ZZ$. Notice that any positive integer multiple of $m$ also satisfies the hypotheses of the theorem.\nConsider the divisor class $M(\\frac{1}{w}H - E) = mH-ME$. We show that any \n\\[ f(x,y,z) = \\sum_{q\\in m\\Delta\\cap \\ZZ^3} c_q \\chi^q \\]\nthat vanishes to order at least $M$ at $t_0=(1,1,1)$ must have $c_{m P_L} = 0$. This implies that the $T$-fixed point corresponding to $P_L$ is a base point for $M(\\frac{1}{w}H - E)$.\nThis argument run with $m$ replaced by any of its positive integer multiples, allows us to deduce that $\\frac{1}{w}H - E$ is not semiample. 
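\n\nFor instance, for the tetrahedron of Example~\\ref{ex-3D-1} (where $w = 81\/85$) one may take $m = 170$ or any multiple of it: then $m\\Delta$ is integral with vertices $(0,0,0)$, $(0,170,0)$, $(0,0,170)$, $(-102,-34,-51)$, $(60,20,30)$, and $M = mw = 162$.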
\n\nAs in the $2$-dimensional case, we need to produce a partial derivative $\\cD$ of order $M-1$ such that, when applied to any monomial $\\chi^q$ for $q\\in m\\Delta\\cap \\ZZ^3$, it vanishes at $t_0$ if and only if $q\\neq m P_L$.\nTo find such $\\cD$, we first translate $m\\Delta$ so that $mP_R$ becomes equal to $(M-2,0,0)$. Then $m P_L$ moves to $(-2, \\beta,\\gamma)$, where $\\beta = my_L-my_R$, $\\gamma = mz_L-mz_R$. We look for $\\cD$ of the form \n\\[ \\cD = \\partial_x^{M-n-1} \\tilde{\\cD},\\]\nwhere $\\tilde{\\cD}$ has order $n$. When applying $\\partial_x^{M-n-1}$ to monomials $\\chi^q$ for $q\\in m\\Delta\\cap \\ZZ^3$, the resulting nonzero terms $a_p \\chi^{p}$ correspond to lattice points $p$ that can be divided into three sets:\n\\begin{align*}\n T_1 &= \\{ (-A-1,\\beta, \\gamma)\\},\\\\\n T_2 &= \\{ (-A, B+i, C+j)\\}_{i,j\\geq 0, i+j 0$, there exists a finite $\\lgc{K(A)}$-model $\\fram{M}' = \\langle W', R', V' \\rangle$ with $x \\in W'$ such that $|V(\\ensuremath{\\varphi},x) -V'(\\ensuremath{\\varphi},x)| < \\varepsilon$ for all $\\ensuremath{\\varphi} \\in S$. We proceed by induction on the sum of the complexities of the formulas in $S$. \n\nFor the base case, $S$ contains only variables and $\\ensuremath{\\overline{0}}$, and we let $\\fram{M}' = \\langle W', R', V' \\rangle$ with $W' = \\{x\\}$, $R' = \\emptyset$, and $V'(p,x) = V(p,x)$ for each $p \\in \\ensuremath{{\\rm Var}}$. For the inductive step, suppose first that $S = S' \\cup \\{\\ensuremath{\\psi}_1 \\to \\ensuremath{\\psi}_2\\}$. Then we can apply the induction hypothesis with $\\fram{M}$, $x \\in W$, $S'' = S' \\cup \\{\\ensuremath{\\psi}_1,\\ensuremath{\\psi}_2\\}$, and $\\frac{\\varepsilon}{2} > 0$ to obtain a finite $\\lgc{K(A)}$-model $\\fram{M}'= \\langle W', R', V' \\rangle$ with $x \\in W'$ such that $|V(\\ensuremath{\\varphi},x) -V'(\\ensuremath{\\varphi},x)| < \\frac{\\varepsilon}{2}$ for all $\\ensuremath{\\varphi} \\in S''$. It suffices then to observe that $|V(\\ensuremath{\\psi}_1 \\to \\ensuremath{\\psi}_2,x) -V'(\\ensuremath{\\psi}_1 \\to \\ensuremath{\\psi}_2,x)| = |V(\\ensuremath{\\psi}_2,x) - V(\\ensuremath{\\psi}_1,x) - V'(\\ensuremath{\\psi}_2,x) + V'(\\ensuremath{\\psi}_1,x)| \\le |V(\\ensuremath{\\psi}_2,x) - V'(\\ensuremath{\\psi}_2,x)| + |V(\\ensuremath{\\psi}_1,x) - V'(\\ensuremath{\\psi}_1,x)| < \\frac{\\varepsilon}{2} + \\frac{\\varepsilon}{2} = \\varepsilon$. The cases where $S$ contains $\\ensuremath{\\psi}_1 \\& \\ensuremath{\\psi}_2$, $\\ensuremath{\\psi}_1 \\land \\ensuremath{\\psi}_2$, or $\\ensuremath{\\psi}_1 \\lor \\ensuremath{\\psi}_2$ are very similar.\n\nFinally, suppose that $S$ consists of variables and boxed formulas $\\ensuremath{\\Box} \\ensuremath{\\psi}_1,\\ldots,\\ensuremath{\\Box} \\ensuremath{\\psi}_n$ ($n \\ge 1$). Then for $i \\in \\{1,\\ldots,n\\}$, there exists $y_i \\in W$ such that $Rxy_i$ and $|V(\\ensuremath{\\Box} \\ensuremath{\\psi}_i,x) - V(\\ensuremath{\\psi}_i,y_i)| < \\frac{\\varepsilon}{2}$. 
We apply the induction hypothesis to each submodel $\\fram{M}_i$ of $\\fram{M}$ generated by $y_i$ (i.e., the restriction of $\\fram{M}$ to the smallest subset of $W$ containing $y_i$ and closed under $R$) with $S' = (S \\setminus \\{\\ensuremath{\\Box} \\ensuremath{\\psi}_1,\\ldots,\\ensuremath{\\Box} \\ensuremath{\\psi}_n\\}) \\cup \\{\\ensuremath{\\psi}_1,\\ldots,\\ensuremath{\\psi}_n\\}$, $y_i \\in W_i$, and $\\frac{\\varepsilon}{2} > 0$ to obtain a finite $\\lgc{K(A)}$-model $\\fram{M}'_i= \\langle W'_i, R'_i, V'_i \\rangle$ and $y_i \\in W'_i$ such that $|V(\\ensuremath{\\varphi},y_i) -V'(\\ensuremath{\\varphi},y_i)| < \\frac{\\varepsilon}{2}$ for all $\\ensuremath{\\varphi} \\in S'$. By renaming worlds, we may assume that these models are disjoint and do not include $x$. Now let $\\fram{M}'= \\langle W', R', V' \\rangle$ be the finite $\\lgc{K(A)}$-model with $W' = \\{x\\} \\cup W'_1 \\cup \\ldots \\cup W'_n$ such that for $u,v \\in W'$ and $p \\in \\ensuremath{{\\rm Var}}$,\n\\[\n\\begin{array}{rclcrcl}\nR'uv & = &\n\\begin{cases}\nR'_iuv & \\text{if } u,v \\in W'_i\\\\\n1\t\t & \\text{if } u=x, \\ v \\in \\{y_1,\\ldots,y_n\\}\\\\\n0\t\t & \\text{otherwise}\n\\end{cases}\n& \\quad\\text{and}\\quad &\nV'(p,u) & = &\n\\begin{cases}\nV'_i(p,u) & \\text{if } u \\in W'_i\\\\\nV(p,x)\t& \\text{if } u=x.\n\\end{cases}\n\\end{array}\n\\]\nClearly $V'(p,x) = V(p,x)$ for each variable $p \\in S$. Moreover, $|V(\\ensuremath{\\Box} \\ensuremath{\\psi}_i,x) - V(\\ensuremath{\\psi}_i,y_i)| < \\frac{\\varepsilon}{2}$ and $|V(\\ensuremath{\\psi}_i,y_j) - V'(\\ensuremath{\\psi}_i,y_j)| < \\frac{\\varepsilon}{2}$ for $i,j \\in \\{1,\\ldots,n\\}$, so $|V(\\ensuremath{\\Box} \\ensuremath{\\psi}_i,x) - V'(\\ensuremath{\\Box} \\ensuremath{\\psi}_i,x)| < \\varepsilon$.\n\\end{proof}\n\nAs remarked above, the preceding lemma provides some justification both for assuming in the definition of the semantics of $\\lgc{K(A)}$ that $\\bigwedge_\\ensuremath{\\mathbb{R}} \\emptyset = \\bigvee_\\ensuremath{\\mathbb{R}} \\emptyset = 0$, and for restricting valuations of variables in a particular $\\lgc{K(A)}$-model to a fixed interval. It shows that for determining the valid formulas of $\\lgc{K(A)}$, we need only consider finite serial $\\lgc{K(A)}$-models. For such frames we can leave $\\bigwedge_\\ensuremath{\\mathbb{R}} \\emptyset$ and $\\bigvee_\\ensuremath{\\mathbb{R}} \\emptyset$ undefined; we can also make use of a standard (unrestricted) valuation map $V \\colon \\ensuremath{{\\rm Var}} \\times W \\to \\mathbb{R}$, since only finite infima and suprema are needed for calculating values of formulas. In principle then, we could define the logic $\\lgc{K(A)}$ without extra assumptions by considering only finite serial $\\lgc{K(A)}$-models. We prefer here, however, to give a more general semantics and to discover this finite model property as a fact about the logic rather than building it into the definition.\n\n\\begin{exa}\nAny $\\lgc{K(A)}$-model can be viewed as a state transition system, where each state is labelled with a vector of real numbers that represents the values of the variables in $\\ensuremath{{\\rm Var}}$ at that state. Such a transition system may be used to represent choices for various players in a game together with points (or other resources) accumulated by the players during that game. 
Consider for example the $\\lgc{K(A)}$-model $\\fram{M} = \\langle W, R, V \\rangle$ depicted below, where the vectors {\\tiny $\\begin{pmatrix} p \\\\ q \\end{pmatrix}$} represent the values of $p$ and $q$, respectively, at each state.\n\n \\begin{center}\n \\begin{pspicture}(-5,-1)(5,3.75)\n \\psline[](-2,1.3)(0,0)(2,1.3)\n \\psline[](-3.6,2.6)(-2,1.3)(-0.4,2.6)\n \\psline[](0.4,2.6)(2,1.3)(3.6,2.6)\n \\uput[-90](0,0.1){\\footnotesize $\\begin{pmatrix} 0 \\\\ 0 \\end{pmatrix}$}\n \\uput[-150](-1.8,1.3){\\footnotesize $\\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix}$}\n \\uput[-30](1.8,1.3){\\footnotesize $\\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix}$}\n \\uput[40](-4.1,2.6){\\footnotesize $\\begin{pmatrix} 2 \\\\ 0 \\end{pmatrix}$}\n \\uput[40](-0.9,2.6){\\footnotesize $\\begin{pmatrix} 4 \\\\ 1 \\end{pmatrix}$}\n \\uput[40](-0.1,2.6){\\footnotesize $\\begin{pmatrix} 1 \\\\ 1 \\end{pmatrix}$}\n \\uput[40](3.1,2.6){\\footnotesize $\\begin{pmatrix} 1 \\\\ 3 \\end{pmatrix}$}\n \\end{pspicture}\n \\end{center}\n\n\\noindent \nWe can use the model $\\fram{M}$ to define various two-player games, where in the first round, starting at the root, Player~$P$ chooses one of several (in this case, two) options, and in the second round, Player~$Q$ also chooses one of several (in this case, also two) options. The points assigned to Player~$P$ and Player~$Q$ at each state are the values of $p$ and $q$, respectively. Let also call the values of $p-q$ and $q-p$ at a state, the {\\em scores} for~$P$ and~$Q$, respectively. We assume in all these games that the players have complete knowledge of both $\\fram{M}$ and their opponent's goals.\n\nLet us consider some different ways of concluding games based on $\\fram{M}$. Suppose that in Game 1 each player's goal is to maximize her final score. Player $P$'s maximal payoff is then the value at the root of the formula $\\ensuremath{\\Diamond} \\ensuremath{\\Box} (q \\to p)$, which is $2$. If Player~$P$'s goal in Game~2 is to maximize her final number of points, and Player~$Q$ aims to minimize this number, then the required formula is $\\ensuremath{\\Diamond} \\ensuremath{\\Box} p$, which at the root also takes value $2$. Reversing the roles for Game $3$, we obtain the formula $\\ensuremath{\\Box} \\ensuremath{\\Diamond} q$, which takes value $1$ at the root. More complicated goals can also be modelled. For example, if both players aim to maximize the sum of their scores accumulated during the two rounds, then Player~$P$'s maximal payoff is the value of the formula $\\ensuremath{\\Diamond} ((q \\to p) \\& \\ensuremath{\\Box} (q \\to p))$ at the root, namely $3$.\n\nFormulas can also be used to express general relationships between games. For example, the model $\\fram{M}$ shows that\n\\[\n\\not \\mdl{\\lgc{K(A)}} \\ensuremath{\\Diamond} \\ensuremath{\\Box} (q \\to p) \\to (\\ensuremath{\\Box} \\ensuremath{\\Diamond} q \\to \\ensuremath{\\Diamond} \\ensuremath{\\Box} p).\n\\]\nThat is, Player~$P$'s maximal payoff in Game 1 exceeds her maximal payoff in Game 2 minus her maximal payoff in Game 3. 
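Indeed, writing $x$ for the root, the values computed above are $V(\\ensuremath{\\Diamond} \\ensuremath{\\Box} (q \\to p),x) = 2$, $V(\\ensuremath{\\Box} \\ensuremath{\\Diamond} q,x) = 1$, and $V(\\ensuremath{\\Diamond} \\ensuremath{\\Box} p,x) = 2$, so\n\\[\nV\\big(\\ensuremath{\\Diamond} \\ensuremath{\\Box} (q \\to p) \\to (\\ensuremath{\\Box} \\ensuremath{\\Diamond} q \\to \\ensuremath{\\Diamond} \\ensuremath{\\Box} p),x\\big) = (2-1)-2 = -1 < 0.\n\\]\n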
On the other hand, it can be shown (e.g., using one of the calculi introduced below) that\n\\[\n\\mdl{\\lgc{K(A)}} \\ensuremath{\\Diamond} \\ensuremath{\\Box} (q \\to p) \\to (\\ensuremath{\\Box} \\ensuremath{\\Box} q \\to \\ensuremath{\\Diamond} \\ensuremath{\\Box} p).\n\\]\nThis means that if the goals of Games 1 and 2 are adopted with respect to an arbitrary $\\lgc{K(A)}$-model $\\fram{M}'$ based on a finite rooted tree with branches of length $2$, then Player~$P$'s maximal payoff in Game 1 for $\\fram{M}'$ is always less than or equal to her maximal payoff in Game 2 for $\\fram{M}'$ minus the minimum number of points for Player~$Q$ in all final states of $\\fram{M}'$. \n\\end{exa}\n\n\n\n\\subsection{\\L ukasiewicz Modal Logic} \\label{ss:lukmodal}\n\nLet us briefly recall the semantics of the \\L ukasiewicz modal logic $\\lgc{K(\\mathrmL)}$ studied by Hansoul and Teheux in~\\cite{HT13}. For convenience, we make use of a language $\\ensuremath{\\mathcal{L}}_{\\mathrmL}^\\ensuremath{\\Box}$ with the binary connective $\\ensuremath{\\supset}$ and unary connectives $\\ensuremath{{\\sim}}$ and $\\ensuremath{\\Box}$, where further connectives are defined as $\\ensuremath{\\varphi} \\oplus \\ensuremath{\\psi} := \\ensuremath{{\\sim}} \\ensuremath{\\varphi} \\ensuremath{\\supset} \\ensuremath{\\psi}$, $\\ensuremath{\\varphi} \\odot \\ensuremath{\\psi} := \\ensuremath{{\\sim}} (\\ensuremath{{\\sim}} \\ensuremath{\\varphi} \\oplus \\ensuremath{{\\sim}} \\ensuremath{\\psi})$, $\\ensuremath{\\varphi} \\lor\t \\ensuremath{\\psi} := (\\ensuremath{\\varphi} \\ensuremath{\\supset} \\ensuremath{\\psi}) \\ensuremath{\\supset} \\ensuremath{\\psi}$, $\\ensuremath{\\varphi} \\land \\ensuremath{\\psi} := \\ensuremath{{\\sim}} (\\ensuremath{{\\sim}} \\ensuremath{\\varphi} \\lor \\ensuremath{{\\sim}} \\ensuremath{\\psi})$, and $\\ensuremath{\\Diamond} \\ensuremath{\\varphi} \t:= \\ensuremath{{\\sim}} \\ensuremath{\\Box} \\ensuremath{{\\sim}} \\ensuremath{\\varphi}$.\n\nA \\emph{$\\lgc{K(\\mathrmL)}$-model} $\\fram{M} = \\langle W, R, V \\rangle$ consists of a frame $\\langle W, R \\rangle$ and a valuation map $V \\colon \\ensuremath{{\\rm Var}} \\times W \\to [0,1]$ that is extended to $V \\colon \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\mathrmL}^\\ensuremath{\\Box}) \\times W \\to [0,1]$ by \n\\[\n\\begin{array}{rcl}\nV(\\ensuremath{{\\sim}} \\ensuremath{\\varphi}, x) & = & 1 - V(\\ensuremath{\\varphi},x)\\\\[.025in]\nV(\\ensuremath{\\varphi} \\ensuremath{\\supset} \\ensuremath{\\psi}, x) & = & \\min(1, 1 - V(\\ensuremath{\\varphi},x) + V(\\ensuremath{\\psi},x))\\\\[.025in]\nV(\\ensuremath{\\Box} \\ensuremath{\\varphi}, x) & = & \\bigwedge_{[0,1]} \\{V(\\ensuremath{\\varphi}, y) : Rxy \\}.\n\\end{array}\n\\]\nAn $\\ensuremath{\\mathcal{L}}_{\\mathrmL}^\\ensuremath{\\Box}$-formula $\\ensuremath{\\varphi}$ is \\emph{valid} in a $\\lgc{K(\\mathrmL)}$-model $\\fram{M} = \\langle W, R,V \\rangle$ if $V(\\ensuremath{\\varphi},x) = 1$ for all $x \\in W$. 
If $\\ensuremath{\\varphi}$ is valid in all $\\lgc{K(\\mathrmL)}$-models, then $\\ensuremath{\\varphi}$ is \\emph{$\\lgc{K(\\mathrmL)}$-valid}, written $\\mdl{\\lgc{K(\\mathrmL)}} \\ensuremath{\\varphi}$.\n \nAn axiom system for $\\lgc{K(\\mathrmL)}$ is presented in~\\cite{HT13} as an extension of an axiomatization of infinite-valued \\L ukasiewicz logic with the modal axioms and rules\n\\[\n\\begin{array}{c}\n\\ensuremath{\\Box} (\\ensuremath{\\varphi} \\ensuremath{\\supset} \\ensuremath{\\psi}) \\ensuremath{\\supset} (\\ensuremath{\\Box} \\ensuremath{\\varphi} \\ensuremath{\\supset} \\ensuremath{\\Box} \\ensuremath{\\psi})\\\\[.025in]\n\\ensuremath{\\Box}(\\ensuremath{\\varphi} \\oplus \\ensuremath{\\varphi}) \\ensuremath{\\supset} (\\ensuremath{\\Box} \\ensuremath{\\varphi} \\oplus \\ensuremath{\\Box} \\ensuremath{\\varphi})\\\\[.025in]\n\\ensuremath{\\Box}(\\ensuremath{\\varphi} \\odot \\ensuremath{\\varphi}) \\ensuremath{\\supset} (\\ensuremath{\\Box} \\ensuremath{\\varphi} \\odot \\ensuremath{\\Box} \\ensuremath{\\varphi})\\\\[.075in]\n\\infer{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{\\ensuremath{\\varphi}} \n\\end{array}\n\\]\nand the following rule with infinitely many premises\n\\[\n \\infer{\\ensuremath{\\varphi}}{\\ensuremath{\\varphi} \\oplus \\ensuremath{\\varphi} & \\ensuremath{\\varphi} \\oplus (\\ensuremath{\\varphi} \\odot \\ensuremath{\\varphi}) & \\ensuremath{\\varphi} \\oplus (\\ensuremath{\\varphi} \\odot \\ensuremath{\\varphi} \\odot \\ensuremath{\\varphi}) & \\ldots}\n \\]\nIt is proved that an $\\ensuremath{\\mathcal{L}}_{\\mathrmL}^\\ensuremath{\\Box}$-formula $\\ensuremath{\\varphi}$ is derivable in this system if and only if $\\mdl{\\lgc{K(\\mathrmL)}} \\ensuremath{\\varphi}$.\\footnote{In fact, the authors of~\\cite{HT13} prove a more general {\\em strong completeness} result: an $\\ensuremath{\\mathcal{L}}_{\\mathrmL}^\\ensuremath{\\Box}$-formula $\\ensuremath{\\varphi}$ is derivable from a (possibly infinite) set of $\\ensuremath{\\mathcal{L}}_{\\mathrmL}^\\ensuremath{\\Box}$-formulas $\\mathrm{\\Sigma}$ in the system if and only if for every $\\lgc{K(\\mathrmL)}$-model $\\langle W, R,V \\rangle$ and $x \\in W$, whenever $V(\\ensuremath{\\psi},x) = 1$ for all $\\ensuremath{\\psi} \\in \\mathrm{\\Sigma}$, also $V(\\ensuremath{\\varphi},x) = 1$. Note that an infinitary rule is needed to obtain a strong completeness theorem even for propositional {\\L}ukasiewicz logic and Abelian logic. However, in this paper we establish only (weak) completeness results.}\n\nThis raises an intriguing question. Is there an elegant axiomatization containing only finitary rules, obtained perhaps by removing the infinitary rule above? Our first step towards addressing this issue will be to view \\L ukasiewicz modal logic $\\lgc{K(\\mathrmL)}$ as a fragment of a modest extension of the Abelian modal logic $\\lgc{K(A)}$. Let $\\ensuremath{\\mathcal{L}}_{\\lgc{A^c}}^\\ensuremath{\\Box}$ be the language $\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box}$ extended with an extra constant $c$. A $\\lgc{K(A^c)}$-model $\\fram{M} = \\langle W, R, V, c^\\fram{M} \\rangle$ consists of a $\\lgc{K( A)}$-model $\\langle W, R,V \\rangle$ and an element $c^\\fram{M} \\in \\ensuremath{\\mathbb{R}}$, where valuations are extended as before, using the additional clause $V(c,x) = c^\\fram{M}$ for each $x \\in W$. 
\n\nLet us fix $\\bot := c \\land \\lnot c$ and define the following mapping from $\\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\mathrmL}^\\ensuremath{\\Box})$ to $\\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A^c}}^\\ensuremath{\\Box})$:\n\\[\n\\begin{array}{rcl}\np^* \t\t\t& = & (p \\land \\ensuremath{\\overline{0}}) \\lor \\bot \\ \\text{ for each} \\ p \\in \\ensuremath{{\\rm Var}}\\\\[.025in]\n(\\ensuremath{{\\sim}} \\ensuremath{\\varphi})^* \t\t& = & \\ensuremath{\\varphi}^* \\to \\bot\\\\[.025in]\n(\\ensuremath{\\varphi} \\ensuremath{\\supset} \\ensuremath{\\psi})^*\t& = & (\\ensuremath{\\varphi}^* \\to \\ensuremath{\\psi}^*) \\land \\ensuremath{\\overline{0}}\\\\[.025in]\n(\\ensuremath{\\Box} \\ensuremath{\\varphi})^*\t\t& = & \\ensuremath{\\Box} \\ensuremath{\\varphi}^*.\n\\end{array}\n\\]\nWe show that this mapping preserves validity between $\\lgc{K(\\mathrmL)}$ and $\\lgc{K(A^c)}$ by identifying the value taken by an $\\ensuremath{\\mathcal{L}}_{\\mathrmL}^\\ensuremath{\\Box}$-formula in $[0,1]$ with the value taken by the corresponding $\\ensuremath{\\mathcal{L}}_{\\lgc{A^c}}^\\ensuremath{\\Box}$-formula in the interval $[-|c|,0]$.\n\n\\begin{prop}\nLet $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\mathrmL}^\\ensuremath{\\Box})$. Then $\\mdl{\\lgc{K(\\mathrmL)}} \\ensuremath{\\varphi}$ if and only if $\\mdl{\\lgc{K(A^c)}} \\ensuremath{\\varphi}^*$.\n\\end{prop}\n\\begin{proof}\nSuppose first that $\\ensuremath{\\varphi}$ is not valid in a $\\lgc{K(\\mathrmL)}$-model $\\fram{M} = \\langle W, R, V \\rangle$. So $V(\\ensuremath{\\varphi},x) < 1$ for some $x \\in W$. We consider the $\\lgc{K(A^c)}$-model $\\fram{M}' = \\langle W, R, V', c^\\fram{M} \\rangle$ where $V'(p,x) = V(p,x) - 1$ for any $p \\in \\ensuremath{{\\rm Var}}$ and $x \\in W$, and $c^\\fram{M} = -1$. It suffices to prove that $V'(\\ensuremath{\\psi}^*,x) = V(\\ensuremath{\\psi},x) - 1$ for any $\\ensuremath{\\psi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\mathrmL}^\\ensuremath{\\Box})$, since then $V'(\\ensuremath{\\varphi}^*,x) = V(\\ensuremath{\\varphi},x) - 1 < 0$ and $\\not \\mdl{\\lgc{K(A^c)}} \\ensuremath{\\varphi}^*$. We proceed by induction on the complexity of $\\ensuremath{\\psi}$. The base case follows by definition and for the inductive step for the propositional connectives, we just notice that, using the induction hypothesis,\n\\[\n\\begin{array}{rcl}\nV'((\\ensuremath{\\psi}_1 \\ensuremath{\\supset} \\ensuremath{\\psi}_2)^*,x) & = & V'((\\ensuremath{\\psi}_1^* \\to \\ensuremath{\\psi}_2^*) \\land \\ensuremath{\\overline{0}},x)\\\\[.025in]\n& = & \\min(V'(\\ensuremath{\\psi}_2^*,x)-V'(\\ensuremath{\\psi}_1^*,x),0)\\\\[.025in]\n& = & \\min((V(\\ensuremath{\\psi}_2,x) - 1) -(V(\\ensuremath{\\psi}_1,x) - 1),0)\\\\[.025in]\n& = & \\min(V(\\ensuremath{\\psi}_2,x) - V(\\ensuremath{\\psi}_1,x),0)\\\\[.025in]\n& = & \\min(1 - V(\\ensuremath{\\psi}_1,x) + V(\\ensuremath{\\psi}_2,x),1) - 1 \\\\[.025in]\n& = & V(\\ensuremath{\\psi}_1 \\ensuremath{\\supset} \\ensuremath{\\psi}_2,x) - 1,\n\\end{array}\n\\]\nthe case where $\\ensuremath{\\psi}$ is $\\ensuremath{{\\sim}} \\ensuremath{\\psi}_1$ being very similar. 
For the modal case, we obtain\n\\[\n\\begin{array}{rcl}\nV'((\\ensuremath{\\Box} \\ensuremath{\\psi}_1)^*,x) & = & V'(\\ensuremath{\\Box} \\ensuremath{\\psi}_1^*,x)\\\\[.025in]\n& = & \\bigwedge_{\\ensuremath{\\mathbb{R}}} \\{V'(\\ensuremath{\\psi}_1^*,y) : Rxy\\}\\\\[.025in]\n& = & \\bigwedge_{\\ensuremath{\\mathbb{R}}} \\{V(\\ensuremath{\\psi}_1,y) - 1 : Rxy\\}\\\\[.025in]\n& = & \\bigwedge_{[0,1]} \\{V(\\ensuremath{\\psi}_1,y) : Rxy\\} - 1\\\\[.025in]\n& = & V(\\ensuremath{\\Box} \\ensuremath{\\psi}_1,x) - 1,\n\\end{array}\n\\]\nnoting that in the case where $R[x] = \\emptyset$, we obtain $\\bigwedge_{\\ensuremath{\\mathbb{R}}} \\{V'(\\ensuremath{\\psi}_1^*,y) : Rxy\\} = 0 = 1 - 1 = \\bigwedge_{[0,1]} \\{V(\\ensuremath{\\psi}_1,y) : Rxy\\} - 1$ as required.\n\nSuppose now conversely that $\\ensuremath{\\varphi}^*$ is not valid in a $\\lgc{K(A^c)}$-model $\\fram{M} = \\langle W, R, V, c^\\fram{M} \\rangle$. That is, $V(\\ensuremath{\\varphi}^*,x) < 0$ for some $x \\in W$. Observe first that if $c^\\fram{M} = 0$, then, by a simple induction on the complexity of $\\ensuremath{\\varphi}$, we obtain $V(\\ensuremath{\\varphi}^*,x) = 0$ for all $x \\in W$, a contradiction. Hence $c^\\fram{M} \\neq 0$. Moreover, by scaling (dividing $V(p,x)$ for each $p \\in \\ensuremath{{\\rm Var}}$ and $x \\in W$, as well as $c^\\fram{M}$ itself, by $|c^\\fram{M}|$), we may assume that $V(\\bot,x) = -1$ for all $x \\in W$. We consider the $\\lgc{K(\\mathrmL)}$-model $\\fram{M}' = \\langle W, R, V' \\rangle$ where $V'(p,x) = \\max(\\min(V(p,x) + 1,1),0)$. It then suffices to prove that $V'(\\ensuremath{\\psi},x) = V(\\ensuremath{\\psi}^*,x) + 1$ for any $\\ensuremath{\\psi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\mathrmL}^\\ensuremath{\\Box})$, proceeding by induction on the complexity of $\\ensuremath{\\psi}$. \n\\end{proof}\n\nNote that the addition of a constant $c$ to $\\lgc{K(A)}$ does not affect the fact that validity in the logic is equivalent to validity in finite models. It does, however, introduce a difference between the logic $\\lgc{K(A^c)}$ and the same logic restricted to serial models. Clearly, the formula $c \\to \\ensuremath{\\Box} c$ is valid in all serial models, but not in all models. \n\n\n\\section{A Labelled Tableau Calculus}\n\n\nIn this section we introduce a labelled tableau calculus for checking $\\lgc{K(A)}$-validity that is based very closely on the Kripke semantics described above. We use the calculus here to show that the problem of checking $\\lgc{K(A)}$-validity is in the complexity class {\\sc coNEXPTIME}. In Section~\\ref{s:fragment}, we will also use (a fragment of) the calculus to establish the completeness of an axiom system and a sequent calculus admitting cut-elimination for the modal-multiplicative fragment of $\\lgc{K(A)}$.\n\n\\subsection{The Calculus}\n\nOur labelled tableau calculus $\\lgc{LK(A)}$ proves that an $\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box}$-formula $\\ensuremath{\\varphi}$ is valid by showing that the assumption that $\\ensuremath{\\varphi}$ takes a value less than $0$ in some world $w_1$ leads to a contradiction. Informally, we build a tableau for $\\ensuremath{\\varphi}$ as follows. First we decompose the propositional structure of $\\ensuremath{\\varphi}$ to obtain inequations between sums of formulas labelled with the world $w_1$. We then use box formulas occurring on the right of these inequations to generate new worlds accessible from $w_1$ and further inequations between sums of formulas labelled with $w_1$ and these accessible worlds.
Box formulas on the left are decomposed by considering the worlds accessible from $w_1$ and generating new inequations for those worlds. The process is then repeated with the new inequations and worlds appearing on the tableau. The formula $\\ensuremath{\\varphi}$ will be valid if the generated set of inequations (suitably interpreted) on each branch of the tableau is unsatisfiable over the real numbers.\n\nBy a {\\em labelled formula} we mean an ordered pair consisting of an $\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box}$-formula $\\ensuremath{\\varphi}$ and a natural number $k$, written $\\lab{\\ensuremath{\\varphi}}{k}$. Given a multiset of $\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box}$-formulas $\\mathrm{\\Gamma} = [\\ensuremath{\\varphi}_1,\\ldots,\\ensuremath{\\varphi}_n]$ (denoting the empty multiset by $[]$) and $k_1,\\ldots,k_n \\in \\ensuremath{\\mathbb{N}}$, we let $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}$ denote the multiset of labelled formulas $[\\lab{\\ensuremath{\\varphi}_1}{k_1},\\ldots,\\lab{\\ensuremath{\\varphi}_n}{k_n}]$. \n\nTableaux are constructed from {\\em (tableau) nodes} of two types:\\medskip\n\n\\begin{enumerate}[label=(\\arabic*)]\n\n\\item\n{\\em labelled inequations} of the form $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ where $\\triangleright \\in \\{>,\\geq\\}$ and $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}},\\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ are finite multisets of labelled formulas;\\medskip\n\n\\item\n{\\em relations} of the form $rij$ where $i,j \\in \\ensuremath{\\mathbb{N}}$.\\medskip\n\n\\end{enumerate}\n\n\n\\noindent\nAn $\\lgc{LK(A)}$-{\\em tableau} is a finite tree of nodes generated according to the inference rules of the system presented in Figure~\\ref{f:lkz}. That is, if nodes above the line in an instance of a rule occur on the same branch $B$, then $B$ can be extended with the nodes below the line. 
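\n\nPurely as an illustration, the two node types might be represented concretely as follows (the representation and names are illustrative only, not part of the calculus, with formulas again given as nested tuples); the function performs one propositional decomposition step in the style of the rules of Figure~\\ref{f:lkz}:\n\\begin{verbatim}\nfrom dataclasses import dataclass\nfrom typing import List, Tuple\n\nLabFm = Tuple[tuple, int]          # labelled formula  phi : k\n\n@dataclass\nclass Ineq:                        # labelled inequation  Gamma |> Delta\n    left: List[LabFm]\n    right: List[LabFm]\n    strict: bool                   # True for '>', False for '>='\n\nRel = Tuple[int, int]              # relation node  r i j\n\ndef imp_right(node, occ):\n    # from  Gamma |> (phi -> psi):i, Delta  produce  Gamma, phi:i |> psi:i, Delta\n    (kind, phi, psi), i = occ\n    assert kind == 'imp' and occ in node.right\n    right = list(node.right)\n    right.remove(occ)              # remove one occurrence (these are multisets)\n    return Ineq(node.left + [(phi, i)], right + [(psi, i)], node.strict)\n\n# decomposing (p -> q):1 in  [] > [(p -> q):1]  yields  [p:1] > [q:1]\nnode = Ineq([], [(('imp', ('var', 'p'), ('var', 'q')), 1)], True)\nprint(imp_right(node, node.right[0]))\n\\end{verbatim}\n\n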
For convenience, we often write branches as (numbered) lists, noting for future reference that tableaux for formulas in the modal-multiplicative fragment (i.e., not containing $\\land$ or $\\lor$) consist of just one branch.\n\n\\newcommand{\\premiseszerol}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\overline{0}}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\conclusionszerol}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\premiseszeror}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\overline{0}}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\conclusionszeror}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\premisesmultl}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\varphi}\\&\\ensuremath{\\psi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\conclusionsmultl}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\varphi}}{i}, \\lab{\\ensuremath{\\psi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\premisesmultr}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\varphi} \\& \\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\conclusionsmultr}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\varphi}}{i}, \\lab{\\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\premisesbor}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\conclusionsbor}{\n\\begin{array}{c}\n\\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i} \\ge \\lab{\\ensuremath{\\varphi}}{j} \\\\\nrij \n\\end{array}}\n\\newcommand{\\premisesbol}{\n\\begin{array}{c}\nrij\\\\\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n\\end{array}}\n\\newcommand{\\conclusionsbol}{\n\\begin{array}{c}\n\\lab{\\ensuremath{\\varphi}}{j} \\ge \\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i}\\\\\n\\end{array}}\n\\newcommand{\\premisesimpl}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\varphi}\\to\\ensuremath{\\psi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\conclusionsimpl}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\psi}}{i} \\triangleright \\lab{\\ensuremath{\\varphi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\premisesimpr}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\conclusionsimpr}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\varphi}}{i} \\triangleright \\lab{\\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\premisesveel}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\varphi} \\lor \\ensuremath{\\psi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\conclusionsveel}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\varphi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}} \\quad \\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\psi}}{i} \\triangleright 
\\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\premisesveer}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\varphi} \\lor \\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\conclusionsveer}{\n\\begin{array}{c}\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\varphi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}} \\\\ \n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n\\end{array}\n}\n\\newcommand{\\premiseswedl}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\varphi} \\land \\ensuremath{\\psi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\conclusionswedl}{\n\\begin{array}{c}\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\varphi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}} \\\\ \n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\psi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n\\end{array}\n}\n\\newcommand{\\premiseswedr}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\varphi} \\land \\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\\newcommand{\\conclusionswedr}{\n\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\varphi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}} \\quad \\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}\n}\n\n\\begin{figure}[tbp] \n\\centering\n\\fbox{\n\\begin{minipage}{14.5 cm}\n\\[\n\\begin{array}{cc}\n\\infer[(\\0\\,\\aineq)]{\\conclusionszerol}{\\premiseszerol} \n&\n\\infer[(\\aineq\\,\\0)]{\\conclusionszeror}{\\premiseszeror} \\qquad \\qquad \\qquad \\\\[.2in]\n\\infer[(\\&\\,\\aineq)]{\\conclusionsmultl}{\\premisesmultl} \n&\n\\infer[(\\aineq\\,\\&)]{\\conclusionsmultr}{\\premisesmultr} \\qquad \\qquad \\qquad \\\\[.2in]\n\\infer[({\\to}\\,\\aineq)]{\\conclusionsimpl}{\\premisesimpl} \n&\n\\infer[(\\aineq\\,{\\to})]{\\conclusionsimpr}{\\premisesimpr} \\qquad \\qquad \\qquad \\\\[.2in]\n\\infer[(\\land\\,\\aineq)]{\\conclusionswedl}{\\premiseswedl} \n&\n\\infer[(\\aineq\\,\\land)]{\\conclusionswedr}{\\premiseswedr} \\qquad \\qquad \\qquad \\\\[.3in]\n\\infer[(\\lor\\,\\aineq)]{\\conclusionsveel}{\\premisesveel} \n&\n\\infer[(\\aineq\\,\\lor)]{\\conclusionsveer}{\\premisesveer} \\qquad \\qquad \\qquad \\\\[.2in]\n\\infer[(\\bo\\,\\aineq)]{\\conclusionsbol}{\\premisesbol}\n& \n\\infer[(\\aineq\\,\\bo)]{\\conclusionsbor}{\\premisesbor}\n\\hspace{-.5in}j \\in \\ensuremath{\\mathbb{N}} \\text{ new} \\hspace{1.7cm}\n\\end{array}\n\\]\n\\[\n\\infer[(\\ensuremath{\\rm ex})]{rkj}{rik} \\ j \\in \\ensuremath{\\mathbb{N}} \\text{ new}\n\\]\n\\caption{The labelled tableau calculus $\\lgc{LK(A)}$} \\label{f:lkz}\n\\end{minipage}}\n\\end{figure}\n\n\n Observe that the rules for $\\ensuremath{\\overline{0}}$, $\\&$, $\\to$, $\\land$, and $\\lor$ decompose formulas on the left and right of inequations, using the same label for added subformulas, while the rules for $\\ensuremath{\\Box}$ introduce inequations between a boxed formula $\\ensuremath{\\Box} \\ensuremath{\\varphi}$ labelled with $i$, and $\\ensuremath{\\varphi}$ labelled with a different $j$. 
Let us note also that the premises $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ and $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ of $(\\bo\\,\\aineq)$ and $(\\aineq\\,\\bo)$, respectively, are not, strictly speaking, necessary for either the soundness or the completeness of the calculus. However, they restrict the decomposition of boxed formulas to those occurring as subformulas of the initial formula, thereby ensuring a subformula property for the calculus.\n\nLet ${\\rm LVar}$ be the set of all formulas of the form $\\lab{p}{i}$ and $\\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i}$ for $p \\in \\ensuremath{{\\rm Var}}$, $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box})$, and $i \\in \\ensuremath{\\mathbb{N}}$, considered as a set of variables. Given an $\\lgc{LK(A)}$-tableau $T$ and a branch $B$ of $T$, the {\\em system of inequations $S$ associated to $B$} consists of all labelled inequations occurring on $B$ that contain only formulas from ${\\rm LVar}$. Each labelled inequation in $S$ is interpreted here as an inequation between formal sums (where addition is over the multisets occurring in the labelled inequation and the empty multiset is $0$) of variables from ${\\rm LVar}$. We call the branch $B$ {\\em open} if the set of inequations $S$ associated to $B$ is consistent over $\\ensuremath{\\mathbb{R}}$, and {\\em closed} otherwise. The tableau $T$ is called {\\em closed} if all of its branches are closed, and {\\em open} if it has at least one open branch.\n\n\nA {\\em tableau for an $\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box}$-formula} $\\ensuremath{\\varphi}$ is an $\\lgc{LK(A)}$-tableau with root node $[] > [\\lab{\\ensuremath{\\varphi}}{1}]$ and covering node $r12$. 
We say that $\\ensuremath{\\varphi}$ is {\\em $\\lgc{LK(A)}$-derivable}, written $\\der{\\lgc{LK(A)}} \\ensuremath{\\varphi}$, if there exists a closed tableau for $\\ensuremath{\\varphi}$.\n\n\\begin{exa}\nThe seriality axiom $\\ensuremath{\\Box} p \\to \\ensuremath{\\Diamond} p$ is $\\lgc{LK(A)}$-derivable using the tableau\\smallskip\n\\begin{center}\n\\begin{tabular}{rl}\n{\\footnotesize$1:$}\t&\t$[] > \\lab{\\ensuremath{\\Box} p \\to (\\ensuremath{\\Box}(p \\to \\ensuremath{\\overline{0}}) \\to \\ensuremath{\\overline{0}})}{1}$\\\\\n{\\footnotesize$2:$}\t&\t$r12$\\\\\n{\\footnotesize$3:$}\t&\t$\\lab{\\ensuremath{\\Box} p}{1} > \\lab{\\ensuremath{\\Box}(p \\to \\ensuremath{\\overline{0}}) \\to \\ensuremath{\\overline{0}}}{1}$\\\\\n{\\footnotesize$4:$}\t&\t$\\lab{\\ensuremath{\\Box} p}{1}, \\lab{\\ensuremath{\\Box}(p\\to\\ensuremath{\\overline{0}})}{1} > \\lab{\\ensuremath{\\overline{0}}}{1}$\\\\\n{\\footnotesize$5:$}\t&\t$\\lab{\\ensuremath{\\Box} p}{1}, \\lab{\\ensuremath{\\Box}(p\\to\\ensuremath{\\overline{0}})}{1} > []$\\\\\n{\\footnotesize$6:$}\t&\t$\\lab{p}{2} \\geq \\lab{\\ensuremath{\\Box} p}{1}$ \\\\\n{\\footnotesize$7:$}\t&\t$\\lab{p \\to \\ensuremath{\\overline{0}}}{2} \\geq \\lab{\\ensuremath{\\Box}(p\\to \\ensuremath{\\overline{0}})}{1}$ \\\\\n{\\footnotesize$8:$}\t&\t$\\lab{\\ensuremath{\\overline{0}}}{2} \\geq \\lab{p}{2}, \\lab{\\ensuremath{\\Box}(p\\to \\ensuremath{\\overline{0}})}{1}$ \\\\\n{\\footnotesize$9:$}\t&\t$[] \\geq \\lab{p}{2}, \\lab{\\ensuremath{\\Box}(p\\to \\ensuremath{\\overline{0}})}{1}$ \\\\\n\\end{tabular}\n\\end{center}\\smallskip\nwhich generates a (single) inconsistent system of inequations over $\\ensuremath{\\mathbb{R}}$\n\\[\n\\{\n x + y > 0, \\\n z \\geq x, \\\n 0 \\geq z + y\n\\}\n\\]\nwhere $x$, $y$, and $z$ stand for $\\lab{\\ensuremath{\\Box} p}{1}$, $\\lab{\\ensuremath{\\Box}(p \\to \\ensuremath{\\overline{0}})}{1}$, and $\\lab{p}{2}$, respectively.\n\\end{exa}\n\nThe calculus $\\lgc{LK(A)}$ can also be used to prove that an $\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box}$-formula is {\\em not} $\\lgc{K(A)}$-valid; indeed a concrete counter-model for such a formula can be constructed from an open branch of a tableau where, taking care to avoid loops, the rules have been applied exhaustively.\n\n\\begin{exa}\nConsider a tableau for the formula $\\ensuremath{\\Box}(p \\lor q) \\to (\\ensuremath{\\Box} p \\lor \\ensuremath{\\Box} q)$ that begins with\n\\begin{center}\n\\begin{tabular}{rl}\n{\\footnotesize$1:$}\t&\t$[] > \\lab{\\ensuremath{\\Box}(p \\lor q) \\to (\\ensuremath{\\Box} p \\lor \\ensuremath{\\Box} q)}{1}$\\\\\n{\\footnotesize$2:$}\t&\t$r12$\\\\\n{\\footnotesize$3:$}\t&\t$\\lab{\\ensuremath{\\Box}(p \\lor q)}{1} > \\lab{\\ensuremath{\\Box} p \\lor \\ensuremath{\\Box} q}{1}$\\\\\n{\\footnotesize$4:$}\t&\t$\\lab{\\ensuremath{\\Box}(p \\lor q)}{1} > \\lab{\\ensuremath{\\Box} p}{1}$\\\\\n{\\footnotesize$5:$}\t&\t$\\lab{\\ensuremath{\\Box}(p \\lor q)}{1} > \\lab{\\ensuremath{\\Box} q}{1}$\\\\\n{\\footnotesize$6:$}\t&\t$\\lab{\\ensuremath{\\Box} p}{1} \\geq \\lab{p}{3}$\\\\\n{\\footnotesize$7:$}\t&\t$r13$\\\\\n{\\footnotesize$8:$}\t&\t$\\lab{\\ensuremath{\\Box} q}{1} \\geq \\lab{q}{4}$\\\\\n{\\footnotesize$9:$}\t&\t$r14$\\\\\n{\\footnotesize$10:$}\t&\t$\\lab{p \\lor q}{2} \\geq \\lab{\\ensuremath{\\Box}(p \\lor q)}{1}$\\\\\n{\\footnotesize$11:$}\t&\t$\\lab{p \\lor q}{3} \\geq \\lab{\\ensuremath{\\Box}(p \\lor q)}{1}$\\\\\n{\\footnotesize$12:$}\t&\t$\\lab{p \\lor q}{4} \\geq \\lab{\\ensuremath{\\Box}(p \\lor 
q)}{1}$\\\\\n\\end{tabular}\n\\end{center}\nthen continues by splitting into two subtrees, namely\\medskip\n\\begin{prooftree}\\rootAtTop\n\\AxiomC{$\\lab{p}{4} \\geq \\lab{\\ensuremath{\\Box}(p \\lor q)}{1}$}\n\\AxiomC{$\\lab{q}{4} \\geq \\lab{\\ensuremath{\\Box}(p \\lor q)}{1}$}\n\\BinaryInfC{$\\lab{p}{3} \\geq \\lab{\\ensuremath{\\Box}(p \\lor q)}{1}$}\n\\AxiomC{$\\lab{p}{4} \\geq \\lab{\\ensuremath{\\Box}(p \\lor q)}{1}$}\n\\AxiomC{$\\lab{q}{4} \\geq \\lab{\\ensuremath{\\Box}(p \\lor q)}{1}$}\n\\BinaryInfC{$\\lab{q}{3} \\geq \\lab{\\ensuremath{\\Box}(p \\lor q)}{1}$}\n\\BinaryInfC{$\\lab{p}{2} \\geq \\lab{\\ensuremath{\\Box}(p \\lor q)}{1}$}\n\\end{prooftree}\\medskip\n and a second that is exactly the same except that the root is $\\lab{q}{2} \\geq \\lab{\\ensuremath{\\Box}(p \\lor q)}{1}$.\n \n Observe now that the systems of inequations for the two leftmost branches of the subtree above are both inconsistent, since, combining inequations, we obtain\n \\[\n \\lab{p}{3} \\geq \\lab{\\ensuremath{\\Box}(p \\lor q)}{1} > \\lab{\\ensuremath{\\Box} p}{1} \\geq \\lab{p}{3}.\n \\]\n Similarly, the system of inequations for the rightmost branch is inconsistent, since we obtain\n \\[\n \\lab{q}{4} \\geq \\lab{\\ensuremath{\\Box}(p \\lor q)}{1} > \\lab{\\ensuremath{\\Box} q}{1} \\geq \\lab{q}{4}.\n \\]\n The system of inequations for the remaining branch is consistent, however. Let us denote each $\\lab{p}{i}$ and $\\lab{q}{i}$ by $x_i$ and $y_i$, respectively, for $i = 2,3,4$, $\\lab{\\ensuremath{\\Box} p}{1}$ and $\\lab{\\ensuremath{\\Box} q}{1}$ by $x'_1$ and $y'_1$, respectively, and $\\lab{\\ensuremath{\\Box}(p \\lor q)}{1}$ by $z$. Then for this branch, we obtain the set of inequations\n \\[\n \\{z > x'_1, \\, z > y'_1, \\, x'_1 \\geq x_3, \\, y'_1 \\geq y_4, \\, x_2 \\geq z, \\, y_3 \\geq z, \\, x_4 \\geq z\\}\n \\]\n which can be satisfied over $\\ensuremath{\\mathbb{R}}$ by taking, e.g., \n \\[\n x_2 = 3, \\, x_3 = 0, \\, x_4 = 3, \\, y_3 = 3, \\, y_4 = 0, \\, x'_1 = 1, \\, y'_1 = 1, \\, z = 2.\n \\]\n We obtain a $\\lgc{K(A)}$-model $\\fram{M} = \\langle W, R, V \\rangle$ by identifying $w_i$ in $W$ with each $i \\in \\ensuremath{\\mathbb{N}}$ occurring on the branch and including $\\langle w_i,w_j \\rangle$ in $R$ whenever $rij$ appears; that is, $W = \\{w_1,w_2,w_3,w_4\\}$ and $R = \\{\\langle w_1,w_2 \\rangle, \\langle w_1,w_3 \\rangle, \\langle w_1,w_4 \\rangle\\}$. We also use the assignment satisfying the set of inequations to define (the other values are unimportant)\n \\[\n V(p,w_2) = V(p,w_4) = V(q,w_3) = 3 \\quad \\text{and} \\quad V(q,w_2) = V(p,w_3) = V(q,w_4) = 0.\n \\]\n Then $V(\\ensuremath{\\Box} (p \\lor q),w_1) = 3$ and $V(\\ensuremath{\\Box} p \\lor \\ensuremath{\\Box} q,w_1) = 0$, so $V(\\ensuremath{\\Box}(p \\lor q) \\to (\\ensuremath{\\Box} p \\lor \\ensuremath{\\Box} q),w_1) = -3$. \n\\end{exa}\n\n\\subsection{Soundness}\\label{ss:soundness}\n\nLet $T$ be an $\\lgc{LK(A)}$-tableau and let $B$ be a branch of $T$. 
We call a serial $\\lgc{K(A)}$-model $\\fram{M} = \\langle W, R, V \\rangle$ {\\em faithful to $B$} if there is a map $f \\colon \\ensuremath{\\mathbb{N}} \\to W$ (said to {\\em show} that $\\fram{M}$ is faithful to $B$) such that if $rij$ occurs on $B$, then $Rf(i)f(j)$, and for every inequation $\\lab{\\ensuremath{\\varphi}_1}{i_1},\\ldots,\\lab{\\ensuremath{\\varphi}_n}{i_n} \\triangleright \\lab{\\ensuremath{\\psi}_1}{j_1},\\ldots,\\lab{\\ensuremath{\\psi}_m}{j_m}$ occurring on $B$,\n\\[\n V(\\ensuremath{\\varphi}_1,f(i_1)) + \\ldots + V(\\ensuremath{\\varphi}_n,f(i_n)) \\,\\triangleright\\, V(\\ensuremath{\\psi}_1,f(j_1)) + \\ldots + V(\\ensuremath{\\psi}_m,f(j_m)).\n\\]\nWe say that $\\fram{M}$ is {\\em faithful to $T$} if $\\fram{M}$ is faithful to a branch $B$ of $T$. Observe that in this case, the map defined by $e(\\lab{p}{i}) = V(p,f(i))$ and $e(\\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i}) = V(\\ensuremath{\\Box} \\ensuremath{\\varphi}, f(i))$, where $f$ shows that $\\fram{M}$ is faithful to $B$, satisfies the system of inequations associated to $B$, and hence $T$ is open. \n\nThe following lemma establishes the soundness of the rules of $\\lgc{LK(A)}$.\n\n\\begin{lem}\\label{LK:soundness lemma}\n Let $\\fram{M} = \\langle W, R, V \\rangle$ be a finite serial $\\lgc{K(A)}$-model faithful to a branch $B$ of an $\\lgc{LK(A)}$-tableau $T$. If a rule of $\\lgc{LK(A)}$ is applied to $B$, giving a tableau $T'$ extending $T$, then $\\fram{M}$ is faithful to $T'$.\n\\end{lem}\n\\begin{proof}\n Let $f$ be a map showing that the finite serial $\\lgc{K(A)}$-model $\\fram{M}= \\langle W, R, V \\rangle$ is faithful to the branch $B$ of the tableau $T$. The cases of $(\\0\\,\\aineq)$, $(\\aineq\\,\\0)$, $({\\to}\\,\\aineq)$, $(\\aineq\\,{\\to})$, $(\\&\\,\\aineq)$, $(\\aineq\\,\\&)$, $(\\aineq\\,\\lor)$, and $(\\land\\,\\aineq)$ follow easily. For $(\\lor\\,\\aineq)$, suppose that $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\varphi} \\lor \\ensuremath{\\psi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ appears on $B$, and that we obtain an extension $T'$ of $T$ by two branches: one branch $B'$ extending $B$ with $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\varphi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$, and another branch $B''$ extending $B$ with $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\psi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$. Let $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} = [\\lab{\\ensuremath{\\varphi}_1}{k_1},\\ldots,\\lab{\\ensuremath{\\varphi}_n}{k_n}]$ and $\\lab{\\mathrm{\\Delta}}{\\vecn{l}} = [\\lab{\\ensuremath{\\psi}_1}{l_1},\\ldots,\\lab{\\ensuremath{\\psi}_m}{l_m}]$ and denote $V(\\ensuremath{\\varphi}_1,f(k_1)) + \\ldots + V(\\ensuremath{\\varphi}_n,f(k_n))$ by $V(\\mathrm{\\Gamma},f(\\vecn{k}))$ and $V(\\ensuremath{\\psi}_1,f(l_1)) + \\ldots + V(\\ensuremath{\\psi}_m,f(l_m))$ by $V(\\mathrm{\\Delta},f(\\vecn{l}))$. Since $\\fram{M}$ is faithful to $B$, we have $V(\\mathrm{\\Gamma},f(\\vecn{k})) + V(\\ensuremath{\\varphi} \\lor \\ensuremath{\\psi},f(i)) \\triangleright V(\\mathrm{\\Delta},f(\\vecn{l}))$. Hence\n \\[\nV(\\mathrm{\\Gamma},f(\\vecn{k})) + \\max(V(\\ensuremath{\\varphi},f(i)), V(\\ensuremath{\\psi},f(i))) \\,\\triangleright\\, V(\\mathrm{\\Delta},f(\\vecn{l})).\n \\]\n If $\\max(V(\\ensuremath{\\varphi},f(i)), V(\\ensuremath{\\psi},f(i))) = V(\\ensuremath{\\varphi},f(i))$ then $\\fram{M}$ is faithful to the branch $B'$, otherwise $\\fram{M}$ is faithful to the branch $B''$. Hence $\\fram{M}$ is faithful to $T'$. 
The case of $(\\aineq\\,\\land)$ follows similarly.\n \n For $(\\bo\\,\\aineq)$, suppose that $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}}, \\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ and $rij$ appear on $B$ and we obtain an extension $T'$ of $T$ by a branch $B'$ which extends $B$ with $\\lab{\\ensuremath{\\varphi}}{j} \\geq \\lab{\\ensuremath{\\Box}\\ensuremath{\\varphi}}{i}$. Since $\\fram{M}$ is faithful to $B$, we have $Rf(i)f(j)$. But then $V(\\ensuremath{\\varphi},f(j)) \\geq V(\\ensuremath{\\Box} \\ensuremath{\\varphi},f(i))$, so $\\fram{M}$ is faithful to $B'$ and~$T'$.\n\nFor $(\\aineq\\,\\bo)$ suppose that $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ appears on $B$ and we obtain an extension $T'$ of $T$ by a branch $B'$ that extends $B$ with $rij$ ($j\\in \\ensuremath{\\mathbb{N}}$ new) and $\\lab{\\ensuremath{\\Box}\\ensuremath{\\varphi}}{i} \\geq \\lab{\\ensuremath{\\varphi}}{j}$. Since $\\fram{M}$ is finite and serial, there exists $v\\in W$ such that $Rf(i)v$ and $V(\\ensuremath{\\Box} \\ensuremath{\\varphi},f(i)) = V(\\ensuremath{\\varphi},v)$. Hence the map $f'$ defined to coincide with $f$ except that $f'(j)= v$ together with the branch $B'$ show that $\\fram{M}$ is faithful to $T'$. \n\nFinally, for $(\\ensuremath{\\rm ex})$, suppose that $rik$ appears on $B$ and we obtain an extension $T'$ of $T$ by a branch $B'$ that extends $B$ with $rkj$ ($j \\in \\ensuremath{\\mathbb{N}}$ new). Since $rik$ is in $B$, we have $Rf(i)f(k)$. Because $\\fram{M}$ is serial, there exists $v\\in W$ such that $Rf(k)v$. The map $f'$ defined to coincide with $f$ except that $f'(j)= v$ shows that $\\fram{M}$ is faithful to $B'$ and, hence, to $T'$.\n\\end{proof}\n\n\\begin{prop}\\label{p:soundnesslabelledtableau}\nIf $\\der{\\lgc{LK(A)}} \\ensuremath{\\varphi}$, then $\\mdl{\\lgc{K(A)}} \\ensuremath{\\varphi}$.\n\\end{prop}\n\\begin{proof}\nSuppose that $\\not \\mdl{\\lgc{K(A)}} \\ensuremath{\\varphi}$. By Lemma~\\ref{l:fmp}, there exist a finite serial $\\lgc{K(A)}$-model $\\fram{M} = \\langle W, R, V \\rangle$ and $w_1\\in W$ such that $0 > V(\\ensuremath{\\varphi}, w_1)$. Let $f \\colon \\ensuremath{\\mathbb{N}} \\to W$ be any function such that $f(1) = w_1$ and $f(2) = w_2$, where $Rw_1w_2$. This function shows that $\\fram{M}$ is faithful to the only branch of the tableau consisting just of the root $[] > [\\lab{\\ensuremath{\\varphi}}{1}]$ and covering node $r12$. Suppose that by applying the decomposition rules to this tableau, we obtain a tableau $T$. Applying Lemma~\\ref{LK:soundness lemma} inductively, $\\fram{M}$ is faithful to $T$ by some branch $B$. But then the system of inequations associated with $B$ is consistent over $\\ensuremath{\\mathbb{R}}$, and $T$ is open. Hence $\\not \\der{\\lgc{LK(A)}} \\ensuremath{\\varphi}$.\n\\end{proof}\n\n\\vspace{-0.5em}\n\\subsection{Completeness} \\label{ss:completeness}\n\nWe establish the completeness of $\\lgc{LK(A)}$ by showing that an open branch of a tableau for a formula where the rules have been applied exhaustively generates a $\\lgc{K(A)}$-model where the formula is not valid. 
In order to avoid repetitions occurring when a rule is applied more than once to a labelled inequation with the same conclusions (or with a new label in the case of $(\\aineq\\,\\bo)$), we distinguish between active and inactive inequations and use new variables to denote modal formulas that have already been decomposed. To make this precise, we introduce the notation $\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}$ to denote a variable corresponding to the modal $\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box}$-formula $\\ensuremath{\\Box} \\ensuremath{\\varphi}$, and define $\\ensuremath{{\\rm Var}^*} = \\ensuremath{{\\rm Var}} \\cup \\{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}} : \\ensuremath{\\Box} \\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box})\\}$. We let $\\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box})^*$ denote the set of $\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box}$-formulas over $\\ensuremath{{\\rm Var}^*}$, noting that of course $\\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box}) \\subseteq \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box})^*$. The {\\em complexity} of a labelled inequation $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ over $\\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box})^*$ is defined as the sum of the complexities of the formula occurrences in $\\mathrm{\\Gamma}$ and $\\mathrm{\\Delta}$.\n\nWe now consider a slight variant $\\lgc{LK'(A)}$ of $\\lgc{LK(A)}$, replacing the rules for $\\ensuremath{\\Box}$ with the following rules that decompose several occurrences of a labelled formula simultaneously:\n\\newcommand{\\premisesborn}{\n\\begin{array}{c}\n\\lab{\\mathrm{\\Gamma}_1}{\\vecn{k_1}} \\triangleright n_1\\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i}, \\lab{\\mathrm{\\Delta}_1}{\\vecn{l_1}}\\\\\n\\vdots\\\\\n\\lab{\\mathrm{\\Gamma}_m}{\\vecn{k_m}} \\triangleright n_m \\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i},\\lab{\\mathrm{\\Delta}_m}{\\vecn{l_m}}\\\\\n\\end{array}\n}\n\\newcommand{\\conclusionsborn}{\n\\begin{array}{c}\n\\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}}{i} \\ge \\lab{\\ensuremath{\\varphi}}{j} \\\\\nrij \\\\\n\\lab{\\mathrm{\\Gamma}_1}{\\vecn{k_1}} \\triangleright n_1 \\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}}{i}, \\lab{\\mathrm{\\Delta}_1}{\\vecn{l_1}} \\\\\n\\vdots\\\\\n\\lab{\\mathrm{\\Gamma}_m}{\\vecn{k_m}} \\triangleright n_m \\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}}{i}, \\lab{\\mathrm{\\Delta}_m}{\\vecn{l_m}} \\\\\n\\end{array}}\n\\newcommand{\\premisesboln}{\n\\begin{array}{c}\nrij_1\\\\\n\\vdots\\\\\nrij_s\\\\\n\\lab{\\mathrm{\\Gamma}_1}{\\vecn{k_1}}, n_1\\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}_1}{\\vecn{l_1}}\\\\\n\\vdots\\\\\n\\lab{\\mathrm{\\Gamma}_m}{\\vecn{k_m}}, n_m \\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}_m}{\\vecn{l_m}}\\\\\n\\end{array}}\n\\newcommand{\\conclusionsboln}{\n\\begin{array}{c}\n\\lab{\\ensuremath{\\varphi}}{j_1} \\ge \\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}}{i}\\\\\n\\vdots\\\\\n\\lab{\\ensuremath{\\varphi}}{j_s} \\ge \\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}}{i}\\\\\n\\lab{\\mathrm{\\Gamma}_1}{\\vecn{k_1}}, n_1 \\lab{\\newv{\\ensuremath{\\Box} 
\\ensuremath{\\varphi}}}{i} \\triangleright \\lab{\\mathrm{\\Delta}_1}{\\vecn{l_1}} \\\\\n\\vdots\\\\\n\\lab{\\mathrm{\\Gamma}_m}{\\vecn{k_m}}, n_m \\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}}{i} \\triangleright \\lab{\\mathrm{\\Delta}_m}{\\vecn{l_m}} \\\\\n\\end{array}}\n\\[\n\\begin{array}{ccc}\n\\infer[(\\bo\\,\\aineq')]{\\conclusionsboln}{\\premisesboln}\n& &\n\\infer[(\\aineq\\,\\bo')]{\\conclusionsborn}{\\premisesborn}\n\\hspace{-1.25cm}j \\in \\ensuremath{\\mathbb{N}} \\text{ new}\n\\end{array}\n\\]\nClosed and open $\\lgc{LK'(A)}$-tableaux are defined as for $\\lgc{LK(A)}$, except that the system associated to a branch of a tableau consists of all inequations on the branch that contain only variables from $\\{\\lab{q}{i} : q \\in \\ensuremath{{\\rm Var}^*}, i \\in \\ensuremath{\\mathbb{N}}\\}$. We call an $\\lgc{LK'(A)}$-tableau for $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box})$ {\\em complete} if it is constructed as follows, making use of the notions of {\\em active} and {\\em inactive} inequations of the tableau to control applications of the rules:\\smallskip\n\n\\begin{enumerate}\n\n\\item\tBegin the tableau with the active labelled inequation $[] > [\\lab{\\ensuremath{\\varphi}}{1}]$ and relation $r12$.\\smallskip\n\n\\item\tIf all active labelled inequations have complexity $0$, then stop.\\smallskip\n\n\\item\tApply the rules for $\\ensuremath{\\overline{0}},\\&,\\to,\\land,\\lor$ exhaustively to active labelled inequations, changing the premise to inactive and the conclusions to active after each application.\\smallskip\n\n\\item\tFix $i$ such that $\\lab{\\ensuremath{\\Box} \\ensuremath{\\psi}}{i}$ occurs in an active labelled inequation, and apply $(\\ensuremath{\\rm ex})$ to every branch $B$ containing $\\lab{\\ensuremath{\\Box} \\chi}{i}$ for some $\\ensuremath{\\Box} \\chi$ in an active inequation to obtain relations $rik_B$ for some new $k_B \\in \\ensuremath{\\mathbb{N}}$.\\smallskip\n\n\\item\tFor each $\\lab{\\ensuremath{\\Box} \\ensuremath{\\psi}}{i}$ occurring on the right in an active labelled inequation, apply $(\\aineq\\,\\bo')$ to the collection of all active labelled inequations $\\lab{\\mathrm{\\Gamma}_t}{\\vecn{k_t}} \\triangleright n_t \\lab{\\ensuremath{\\Box} \\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}_t}{\\vecn{l_t}}$ (where $\\lab{\\ensuremath{\\Box} \\ensuremath{\\psi}}{i}$ does not occur in $\\lab{\\mathrm{\\Delta}_t}{\\vecn{l_t}}$) on a branch, changing the premise to inactive and the conclusions to active after each application. \\smallskip\n\n\\item\tFor each $\\lab{\\ensuremath{\\Box} \\ensuremath{\\psi}}{i}$ occurring on the left in an active labelled inequation, apply $(\\bo\\,\\aineq')$ to the collection of all active labelled inequations $\\lab{\\mathrm{\\Gamma}_t}{\\vecn{k_t}}, n_t \\lab{\\ensuremath{\\Box} \\ensuremath{\\psi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}_t}{\\vecn{l_t}}$ (where $\\lab{\\ensuremath{\\Box} \\ensuremath{\\psi}}{i}$ does not occur in $\\lab{\\mathrm{\\Gamma}_t}{\\vecn{k_t}}$) and all relations $rij_1,\\ldots,rij_s$ on a branch, changing the premises to inactive and the conclusions to active after each application. \\smallskip\n\n\\item Repeat from (2).\n\n\\end{enumerate}\n\nObserve that steps (3), (5), and (6) above decrease the multiset of complexities of the active labelled inequations, according to the standard multiset well-ordering (see~\\cite{DeMa79}). 
Hence the procedure terminates with a complete $\\lgc{LK'(A)}$-tableau $T$ for any $\\ensuremath{\\varphi}\\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box})$. Suppose now that we change each $\\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\psi}}}{i}$ to $\\lab{\\ensuremath{\\Box} \\ensuremath{\\psi}}{i}$ in $T$. Replacing applications of the rules $(\\bo\\,\\aineq')$ and $(\\aineq\\,\\bo')$ with appropriate repeated applications of the rules $(\\bo\\,\\aineq)$ and $(\\aineq\\,\\bo)$, we obtain an $\\lgc{LK(A)}$-tableau $T'$ for $\\ensuremath{\\varphi}$ such that each branch of $T'$ contains all the inequations (modulo renaming of variables) occurring on the corresponding branch of $T$. Hence we obtain:\n\n\n\\begin{lem}\\label{LK:equivalent calculus}\nIf there exists a closed complete $\\lgc{LK'(A)}$-tableau for $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box})$, then $\\der{\\lgc{LK(A)}}~\\ensuremath{\\varphi}$.\n\\end{lem}\n\n Let $T$ be an open complete $\\lgc{LK'(A)}$-tableau for $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box})$ and let $e$ be a map satisfying the system of inequations associated to an open branch $B$ of $T$. We say that $\\fram{M} = \\langle W, R, V \\rangle$ is the {\\em $e$-induced model} of $T$ by $B$ if \n\n\\begin{itemize}\n\n\\item\t$W = \\{w_i : i \\in \\ensuremath{\\mathbb{N}} \\text{ is a label occurring on } B\\}$;\\smallskip\n\n\\item\t$Rw_iw_j$ if and only if $rij$ occurs on $B$;\\smallskip\n\n\\item\tthe valuation map $V \\colon \\ensuremath{{\\rm Var}^*} \\times W \\to [-r,r]$ is defined by\n\\[\n\\begin{array}{rcl}\nV(p,w_i) & = & \\begin{cases} e(\\lab{p}{i}) & \\text{ if } \\lab{p}{i} \\text{ occurs on } B\\\\\n0 & \\text{ otherwise} \\end{cases}\n\\end{array}\n\\]\nwhere $r =\\max\\{|e(\\lab{p}{i})| : \\lab{p}{i} \\mbox{ occurs on } B\\}$.\n\n\\end{itemize}\n\n\\begin{lem}\\label{LK:completeness lemma}\nLet $\\fram{M} = \\langle W, R, V \\rangle$ be the $e$-induced model of an open complete $\\lgc{LK'(A)}$-tableau $T$ by a branch $B$. Extend the map $e$ by fixing $e(\\lab{\\ensuremath{\\varphi}}{i}) = V(\\ensuremath{\\varphi},w_i)$ for each $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box})^*$ and $w_i \\in W$, and denote $e(\\lab{\\ensuremath{\\varphi}_1}{k_1}) + \\ldots + e(\\lab{\\ensuremath{\\varphi}_n}{k_n})$ by $e((\\mathrm{\\Gamma})^{\\vecn{k}})$ for $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} = [\\lab{\\ensuremath{\\varphi}_1}{k_1},\\ldots,\\lab{\\ensuremath{\\varphi}_n}{k_n}]$. Then $e((\\mathrm{\\Gamma})^{\\vecn{k}}) \\triangleright e((\\mathrm{\\Delta})^{\\vecn{l}})$ for each labelled inequation $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ that appears on $B$.\n \\end{lem}\n\n\\begin{proof}\nWe prove the claim by induction on the complexity of $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$. The base case follows using the definition of $\\fram{M}$ and the fact that $e$ satisfies the system of inequations associated to $B$. Moreover, the cases where $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ appears as a premise of an application of a rule for $\\ensuremath{\\overline{0}}$, $\\&$, or $ \\to$ follow directly using the induction hypothesis. 
\n\nSuppose that the inequation is $\\lab{\\mathrm{\\Gamma}'}{\\vecn{k'}}, \\lab{\\ensuremath{\\varphi} \\lor \\ensuremath{\\psi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ and $\\lab{\\mathrm{\\Gamma}'}{\\vecn{k'}}, \\lab{\\ensuremath{\\varphi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ appears on $B$. (The case where $\\lab{\\mathrm{\\Gamma}'}{\\vecn{k'}}, \\lab{\\ensuremath{\\psi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ appears on $B$ is symmetrical.) By the induction hypothesis, $e((\\mathrm{\\Gamma}')^{\\vecn{k'}}) + e((\\ensuremath{\\varphi})^i) \\triangleright e((\\mathrm{\\Delta})^{\\vecn{l}})$. Since $e((\\ensuremath{\\varphi} \\lor \\ensuremath{\\psi})^i) = \\max(e((\\ensuremath{\\varphi})^i),e((\\ensuremath{\\psi})^i))$ we obtain the desired inequality. The case when the inequation is $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\varphi} \\land \\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}'}{\\vecn{l'}}$ follows similarly.\n\nSuppose that the inequation is $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\varphi} \\lor \\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}'}{\\vecn{l'}}$ and $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\varphi}}{i}, \\lab{\\mathrm{\\Delta}'}{\\vecn{l'}}$ and $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}'}{\\vecn{l'}}$ appear on $B$. The desired inequality follows by applying the induction hypothesis to these two inequations and noticing that $e((\\ensuremath{\\varphi} \\lor \\ensuremath{\\psi})^i) = \\max(e((\\ensuremath{\\varphi})^i),e((\\ensuremath{\\psi})^i))$. The case when the inequation is $\\lab{\\mathrm{\\Gamma}'}{\\vecn{k'}}, \\lab{\\ensuremath{\\varphi} \\land \\ensuremath{\\psi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ follows similarly.\n\nSuppose that the inequation is $\\lab{\\mathrm{\\Gamma}'}{\\vecn{k'}}, n \\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ and $\\lab{\\mathrm{\\Gamma}'}{\\vecn{k'}}, n\\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}}{i} \\triangleright \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$ occurs on $B$. Since $\\fram{M}$ is finite and serial, there is a $j$ such that $rij$ occurs on $B$ and $V(\\ensuremath{\\Box} \\ensuremath{\\varphi}, w_i) = V(\\ensuremath{\\varphi},w_j)$. But then also $\\lab{\\ensuremath{\\varphi}}{j} \\geq \\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}}{i}$ occurs on $B$. By the induction hypothesis twice, $e((\\mathrm{\\Gamma}')^{\\vecn{k'}}) + ne(\\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}}{i}) \\triangleright e((\\mathrm{\\Delta})^{\\vecn{l}})$ and $e(\\lab{\\ensuremath{\\varphi}}{j}) \\geq e(\\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}}{i})$, and the desired inequality follows since also $e(\\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i})= V(\\ensuremath{\\Box} \\ensuremath{\\varphi}, w_i) = V(\\ensuremath{\\varphi},w_j) = e(\\lab{\\ensuremath{\\varphi}}{j})$. 
\n\nFinally, suppose that the inequation is $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright n\\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i}, \\lab{\\mathrm{\\Delta}'}{\\vecn{l'}}$ and $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright n\\lab{\\newv{\\ensuremath{\\Box}\\ensuremath{\\varphi}}}{i}, \\lab{\\mathrm{\\Delta}'}{\\vecn{l'}}$ and $\\lab{\\newv{\\ensuremath{\\Box}\\ensuremath{\\varphi}}}{i} \\geq \\lab{\\ensuremath{\\varphi}}{j}$ appear on $B$ together with the relation $rij$. By the induction hypothesis twice, $e((\\mathrm{\\Gamma})^{\\vecn{k}}) \\triangleright ne(\\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}}{i}) + e((\\mathrm{\\Delta}')^{\\vecn{l'}})$ and $e(\\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}}{i}) \\geq e(\\lab{\\ensuremath{\\varphi}}{j})$, and the desired inequality follows since also $e(\\lab{\\ensuremath{\\varphi}}{j}) = V(\\ensuremath{\\varphi},w_j) \\ge V(\\ensuremath{\\Box}\\ensuremath{\\varphi},w_i) = e(\\lab{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{i})$.\n\\end{proof}\n\n\n\\begin{thm}\\label{t:LabelledEquivSemantics}\nThe following are equivalent for any $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A}}^\\ensuremath{\\Box})$:\n\\begin{enumerate}[label=\\rm (\\arabic*)]\n\\item There exists a closed complete $\\lgc{LK'(A)}$-tableau for $\\ensuremath{\\varphi}$.\n\\item\t$\\der{\\lgc{LK(A)}} \\ensuremath{\\varphi}$.\n\\item $\\mdl{\\lgc{K(A)}} \\ensuremath{\\varphi}$.\n\\end{enumerate}\n\\end{thm}\n\\begin{proof}\n(1)\\,$\\Rightarrow$\\,(2)\\,$\\Rightarrow$\\,(3) is just the combination of Lemma~\\ref{LK:equivalent calculus} and Proposition~\\ref{p:soundnesslabelledtableau}. We prove (3)\\,$\\Rightarrow$\\,(1) by contraposition. If (1) fails, then there is an open complete $\\lgc{LK'(A)}$-tableau $T$ beginning with $[] > [\\lab{\\ensuremath{\\varphi}}{1}], \\ r12$. Let $e$ be a map satisfying the system of inequations associated to a branch $B$ of $T$ and consider the $e$-induced model $\\fram{M} = \\langle W, R, V \\rangle$ of $T$ by $B$. By Lemma \\ref{LK:completeness lemma}, we obtain $0 > e(\\lab{\\ensuremath{\\varphi}}{1}) = V(\\ensuremath{\\varphi},w_1)$. Hence $\\not \\mdl{\\lgc{K(A)}} \\ensuremath{\\varphi}$.\n\\end{proof}\n\nLet us remark here that there exist significant similarities between $\\lgc{LK(A)}$ and the tableau calculus given in~\\cite{KPS13} for the fuzzy description logic ``{\\L}ukasiewicz fuzzy $\\mathcal{ALC}$''. Both calculi reduce the validity of a formula to the satisfiability of linear programming problems, using labels to record values of formulas at different worlds. Superficial differences arise as a result of the restriction of values for {\\L}ukasiewicz fuzzy $\\mathcal{ALC}$ to the real unit interval $[0,1]$ and the use of several modal operators (corresponding to roles in the description logic). More significantly, roles in {\\L}ukasiewicz fuzzy $\\mathcal{ALC}$ are interpreted by fuzzy rather than crisp relations and appear also in inequations, whereas $\\lgc{LK(A)}$ proceeds by directly generating a crisp frame suitable for constructing a potential countermodel.\n\n\\subsection{Complexity} \\label{ss:complexity}\n\nIt follows directly from the completeness proof above that checking the $\\lgc{K(A)}$-validity of an $\\ensuremath{\\mathcal{L}}_\\lgc{A}^\\ensuremath{\\Box}$-formula $\\ensuremath{\\varphi}$ is decidable. 
We simply apply the procedure for building a complete $\\lgc{LK'(A)}$-tableau for $\\ensuremath{\\varphi}$ to generate finitely many linear programming problems which can then be checked for satisfiability. Considering this procedure in more detail, we obtain an upper bound for the complexity of checking $\\lgc{K(A)}$-validity.\n\n\\begin{thm}\\label{t:complexity}\nThe problem of checking if $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_\\lgc{A}^\\ensuremath{\\Box})$ is $\\lgc{K(A)}$-valid is in {\\sc coNEXPTIME}.\n\\end{thm}\n\\begin{proof}\nBy Theorem~\\ref{t:LabelledEquivSemantics}, we may consider a complete $\\lgc{LK'(A)}$-tableau $T$ for an $\\ensuremath{\\mathcal{L}}_\\lgc{A}^\\ensuremath{\\Box}$-formula $\\ensuremath{\\varphi}$ obtained by following steps (1)-(7) in the procedure above. We may also assume that no labelled inequation appears twice on the same branch of $T$. Suppose that $\\ensuremath{\\varphi}$ has complexity $n$. A new label $j$ is introduced by applying the rule $(\\aineq\\,\\bo')$ to a labelled inequation $\\lab{\\mathrm{\\Gamma}}{\\vecn{k}} \\triangleright \\lab{\\ensuremath{\\Box} \\ensuremath{\\psi}}{i}, \\lab{\\mathrm{\\Delta}}{\\vecn{l}}$, and by step (4), producing a new labelled inequation $\\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\psi}}}{i} \\ge \\lab{\\ensuremath{\\psi}}{j}$, where $\\ensuremath{\\Box} \\ensuremath{\\psi}$ is a subformula of $\\ensuremath{\\varphi}$, and $\\ensuremath{\\psi}$ has smaller modal depth than $\\ensuremath{\\Box} \\ensuremath{\\psi}$. Note that the number of subformulas $\\ensuremath{\\Box} \\ensuremath{\\psi}$ of $\\ensuremath{\\varphi}$ is bounded by $n$; also the modal depth of $\\ensuremath{\\varphi}$ is bounded by $n$. Hence the number of labels appearing on a branch of $T$ is at most exponential in $n$. Observe next that the complexity of any labelled inequation that occurs in $T$ is bounded by $n$, and that there are at most $n$ new variables of the form $\\newv{\\ensuremath{\\Box} \\ensuremath{\\psi}}$ appearing in $T$. Hence the number of different labelled inequations that can appear in $T$, and so also the length of any branch of $T$, is at most exponential in $n$.\n\nTo show that $\\ensuremath{\\varphi}$ is not $\\lgc{K(A)}$-valid, we choose a branch $B$ of $T$ non-deterministically, noting that (binary) branching occurs only when applying the rules $\\lor$ and $\\land$. By the above reasoning, the length of $B$ and the complexity of the labelled inequations appearing on $B$ are at most exponential in $n$. The result then follows from the fact that the linear programming problem is in {\\sc P}~\\cite{Khachiyan79}. \n\\end{proof}\n\nIt is no surprise that the upper bound provided here for checking $\\lgc{K(A)}$-validity matches the known upper bound for checking validity in fuzzy description logics based on infinite-valued \\L ukasiewicz logic (see~\\cite{KPS13}) and indeed also the \\L ukasiewicz modal logic described in Section~\\ref{s:modalabelianlogic}. In all these cases, unpacking the semantics leads to a non-deterministic guessing of linear programming problems of exponential size in the complexity of the original formula. 
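\n\nAs a concrete illustration of the satisfiability checks involved, the branch systems computed in the two examples above might be passed to an off-the-shelf LP solver, as in the following sketch (illustrative only, using SciPy's \\texttt{linprog}; it is not intended as an efficient implementation of the procedure). Since the systems are homogeneous (they contain no constant terms), the strict inequations can be satisfied together with the non-strict ones over $\\ensuremath{\\mathbb{R}}$ if and only if they can all be satisfied with margin at least $1$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linprog\n\ndef branch_is_open(ineqs, variables):\n    # ineqs: list of (left, right, strict); left/right map variables to coefficients\n    idx = {v: i for i, v in enumerate(variables)}\n    A_ub, b_ub = [], []\n    for left, right, strict in ineqs:      # sum(L) (>|>=) sum(R)  is encoded as\n        row = np.zeros(len(variables))     # (R - L).x <= 0, or <= -1 if strict\n        for v, c in left.items():\n            row[idx[v]] -= c\n        for v, c in right.items():\n            row[idx[v]] += c\n        A_ub.append(row)\n        b_ub.append(-1.0 if strict else 0.0)\n    res = linprog(np.zeros(len(variables)), A_ub=np.array(A_ub), b_ub=np.array(b_ub),\n                  bounds=[(None, None)] * len(variables), method='highs')\n    return res.status == 0                 # feasible, i.e. the branch is open\n\n# closed branch of the first example:  x + y > 0,  z >= x,  0 >= z + y\nprint(branch_is_open([({'x': 1, 'y': 1}, {}, True),\n                      ({'z': 1}, {'x': 1}, False),\n                      ({}, {'y': 1, 'z': 1}, False)], ['x', 'y', 'z']))   # False\n\n# open branch of the second example, with z standing for the box(p or q) variable\nprint(branch_is_open([({'z': 1}, {'xb': 1}, True), ({'z': 1}, {'yb': 1}, True),\n                      ({'xb': 1}, {'x3': 1}, False), ({'yb': 1}, {'y4': 1}, False),\n                      ({'x2': 1}, {'z': 1}, False), ({'y3': 1}, {'z': 1}, False),\n                      ({'x4': 1}, {'z': 1}, False)],\n                     ['z', 'xb', 'yb', 'x2', 'x3', 'x4', 'y3', 'y4']))    # True\n\\end{verbatim}\n\n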
Validity in modal or description logics based on finite \\L ukasiewicz logics is known to be PSPACE-complete~\\cite{BCE11}, and the same holds for many-valued modal logics based on G{\\\"o}del logics~\\cite{CMRR17}; however, these arguments do not seem to generalize to the current setting.\n\n\n\\section{The Modal-Multiplicative Fragment} \\label{s:fragment}\n\nIn this section, we provide an axiom system (without infinitary rules) and analytic sequent calculus for the modal-multiplicative fragment of $\\lgc{K(A)}$, and in doing so, take a first step towards obtaining such systems for the full logic. \n\n\n\\subsection{An Axiom System}\\label{ss:axiomatization}\n\n\\begin{figure}[tbp] \n\\centering\n\\fbox{\n\\begin{minipage}{10 cm}\n\\[\n\\begin{array}{rll}\n{\\rm (B)} & \\\t& (\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}) \\to ((\\ensuremath{\\psi} \\to \\ensuremath{\\chi}) \\to (\\ensuremath{\\varphi} \\to \\ensuremath{\\chi}))\\\\[.025in]\n{\\rm (C)} & \t& (\\ensuremath{\\varphi} \\to (\\ensuremath{\\psi} \\to \\ensuremath{\\chi})) \\to (\\ensuremath{\\psi} \\to (\\ensuremath{\\varphi} \\to \\ensuremath{\\chi}))\\\\[.025in]\n{\\rm (I)} &\t\t& \\ensuremath{\\varphi} \\to \\ensuremath{\\varphi}\\\\[.025in]\n{\\rm (A)} & \t& ((\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}) \\to \\ensuremath{\\psi}) \\to \\ensuremath{\\varphi}\\\\[.025in]\n{\\rm (K)} &\t\t& \\ensuremath{\\Box} (\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}) \\to (\\ensuremath{\\Box} \\ensuremath{\\varphi} \\to \\ensuremath{\\Box} \\ensuremath{\\psi})\\\\[.025in]\n\\textrm{(D$_n$)} &\t& \\ensuremath{\\Box}(n\\ensuremath{\\varphi}) \\to n\\ensuremath{\\Box} \\ensuremath{\\varphi} \\qquad (n \\ge 2)\\\\[.025in]\n\\end{array}\n\\]\n\\[\n\\infer[\\textrm{(mp)}]{\\ensuremath{\\psi}}{\\ensuremath{\\varphi} & \\ensuremath{\\varphi} \\to \\ensuremath{\\psi}}\n\\qquad\n\\infer[\\textrm{(nec)}]{\\ensuremath{\\Box} \\ensuremath{\\varphi}}{\\ensuremath{\\varphi}}\n\\qquad\n\\infer[\\textrm{(con$_n$) }]{\\ensuremath{\\varphi}}{n \\ensuremath{\\varphi}}\n\\quad\n(n \\ge 2)\n\\]\n\\caption{The axiom system $\\lgc{K(A_m)}$}\\label{f:kz}\n\\end{minipage}}\n\\end{figure}\n\nFor convenience (in particular, to reduce the number of cases in proofs), we define the modal-multiplicative fragment here over a language $\\ensuremath{\\mathcal{L}}_{\\lgc{A_m}}^\\ensuremath{\\Box}$ consisting of the binary connective $\\to$ and unary connective $\\ensuremath{\\Box}$. To define further connectives, we fix $p_0 \\in \\ensuremath{{\\rm Var}}$ and let\n\\[\n\\ensuremath{\\overline{0}} := p_0 \\to p_0, \\quad \n\\lnot \\ensuremath{\\varphi} := \\ensuremath{\\varphi} \\to \\ensuremath{\\overline{0}}, \\quad\n\\ensuremath{\\varphi} \\& \\ensuremath{\\psi} := \\lnot \\ensuremath{\\varphi} \\to \\ensuremath{\\psi}, \\quad \n\\text{and} \\quad\n\\ensuremath{\\Diamond} \\ensuremath{\\varphi} := \\lnot \\ensuremath{\\Box} \\lnot \\ensuremath{\\varphi}.\n\\]\nWe also define $0\\ensuremath{\\varphi} := \\ensuremath{\\overline{0}}$ and $(n+1)\\ensuremath{\\varphi} := \\ensuremath{\\varphi} \\& (n\\ensuremath{\\varphi})$ for each $n \\in \\ensuremath{\\mathbb{N}}$. \n\n\nOur axiom system $\\lgc{K(A_m)}$ for the modal-multiplicative fragment of $\\lgc{K(A)}$ is presented in Figure~\\ref{f:kz}. 
For a formula $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_\\lgc{A_m}^\\ensuremath{\\Box})$, we write $\\der{K(A_m)} \\ensuremath{\\varphi}$ if there exists a $\\lgc{K(A_m)}$-derivation of $\\ensuremath{\\varphi}$, defined as usual as a finite sequence of $\\ensuremath{\\mathcal{L}}_\\lgc{A_m}^\\ensuremath{\\Box}$-formulas that ends with $\\ensuremath{\\varphi}$ and is constructed inductively using the axioms and rules of $\\lgc{K(A_m)}$. \n\nEstablishing soundness for this system is straightforward. It is easily checked that the axioms (B), (C), (I), (A), and (K) are valid in all $\\lgc{K(A)}$-models. For the less standard axioms (D$_n$) ($n \\ge 2$), it suffices to consider a $\\lgc{K(A)}$-model $\\fram{M} = \\langle W, R,V \\rangle$ and $x \\in W$, and observe that for all $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_\\lgc{A_m}^\\ensuremath{\\Box})$,\n\\[\n\\begin{array}{rcl}\nV(\\ensuremath{\\Box}(n\\ensuremath{\\varphi}),x) & = & \\bigwedge_{\\ensuremath{\\mathbb{R}}} \\{V(n\\ensuremath{\\varphi}, y) : Rxy \\}\\\\[.05in]\n\t\t\t\t& = & \\bigwedge_{\\ensuremath{\\mathbb{R}}} \\{nV(\\ensuremath{\\varphi}, y) : Rxy \\}\\\\[.05in]\n\t\t\t\t& = & n\\bigwedge_{\\ensuremath{\\mathbb{R}}} \\{V(\\ensuremath{\\varphi}, y) : Rxy \\}\\\\[.05in]\n\t\t\t\t& = & V(n\\ensuremath{\\Box} \\ensuremath{\\varphi},x).\n\\end{array}\n\\]\nIt is clear that (mp) and (nec) preserve validity in $\\lgc{K(A)}$-models. For (con$_n$) ($n \\ge 2$), we just note that if $V(n\\ensuremath{\\varphi},x) \\ge 0$ for a $\\lgc{K(A)}$-model $\\fram{M} = \\langle W, R,V \\rangle$ and $x \\in W$, then $V(\\ensuremath{\\varphi},x) \\ge 0$. Hence a simple induction on the length of a $\\lgc{K(A_m)}$-derivation gives the following result.\n \n\\begin{prop}\\label{p:axiomsystemsound}\nLet $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A_m}}^\\ensuremath{\\Box})$. If $\\der{K(A_m)} \\ensuremath{\\varphi}$, then $\\mdl{\\lgc{K(A)}} \\ensuremath{\\varphi}$.\n\\end{prop}\n\n\\noindent\nThe proof of the converse direction is much harder. Before arriving finally at this result in Theorem~\\ref{t:main}, we make a detour via a sequent calculus and exploit the completeness result for our labelled tableau calculus provided by Theorem~\\ref{t:LabelledEquivSemantics}.\n\n\n\n\\subsection{A Sequent Calculus}\\label{ss:sequent}\n\n\nFor the purposes of this paper, a {\\em sequent} is an ordered pair of finite multisets of $\\ensuremath{\\mathcal{L}}_{\\lgc{A_m}}^\\ensuremath{\\Box}$-formulas $\\mathrm{\\Gamma}$ and $\\mathrm{\\Delta}$, written $\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$. A {\\em sequent rule} is a set of {\\em instances}, each consisting of a finite set of sequents called {\\em premises} and a sequent called the {\\em conclusion}. Such rules are typically written schematically, using $\\ensuremath{\\varphi},\\ensuremath{\\psi},\\ensuremath{\\chi}$ and $\\mathrm{\\Gamma},\\mathrm{\\Delta},\\Pi,\\mathrm{\\Sigma}$ to denote arbitrary formulas and finite multisets of formulas, respectively. We also often write $\\mathrm{\\Gamma},\\mathrm{\\Delta}$ to denote the multiset union $\\mathrm{\\Gamma} \\uplus \\mathrm{\\Delta}$, $n\\mathrm{\\Gamma}$ for $\\mathrm{\\Gamma},\\ldots,\\mathrm{\\Gamma}$ ($n$ times), and $\\ensuremath{\\Box} \\mathrm{\\Gamma}$ for $[\\ensuremath{\\Box} \\ensuremath{\\varphi} : \\ensuremath{\\varphi} \\in \\mathrm{\\Gamma}]$. 
\n\nWe make use of a formula translation (assuming $\\ensuremath{\\varphi}_1 \\& \\ldots \\& \\ensuremath{\\varphi}_n = \\ensuremath{\\overline{0}}$ for $n=0$),\n\\[\n\\begin{array}{rcl}\n\\mathcal{I}(\\ensuremath{\\varphi}_1,\\ldots,\\ensuremath{\\varphi}_n \\Rightarrow \\ensuremath{\\psi}_1,\\ldots,\\ensuremath{\\psi}_m) & := & (\\ensuremath{\\varphi}_1 \\& \\ldots \\& \\ensuremath{\\varphi}_n) \\to (\\ensuremath{\\psi}_1 \\& \\ldots \\& \\ensuremath{\\psi}_m),\n\\end{array}\n\\]\nand say that a sequent $\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ is \\emph{$\\lgc{K(A)}$-valid}, written $\\mdl{\\lgc{K(A)}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$, if $\\mdl{\\lgc{K(A)}}\\mathcal{I}(\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta})$.\n\nA {\\em sequent calculus} $\\lgc{GL}$ consists of a set of sequent rules, and a {\\em $\\lgc{GL}$-derivation} of a sequent $S$ from a set of sequents $Y$ is a finite tree of sequents with root $S$ such that each node is either (i) a leaf node and in $Y$, or (ii) together with its parent nodes forms an instance of a rule of $\\lgc{GL}$. In this case, we write $Y \\der{\\lgc{GL}} S$ or just $\\der{\\lgc{GL}} S$ if $Y = \\emptyset$. A sequent rule is $\\lgc{GL}$-{\\em derivable} if there is a $\\lgc{GL}$-derivation of the conclusion of any instance of the rule from its premises; $\\lgc{GL}$-{\\em admissible} if whenever the premises of an instance of the rule are $\\lgc{GL}$-derivable, the conclusion is $\\lgc{GL}$-derivable; and $\\lgc{GL}$-{\\em invertible} if whenever the conclusion of an instance of the rule is $\\lgc{GL}$-derivable, the premises are $\\lgc{GL}$-derivable. \n\n\\begin{figure}[tbp] \n\\centering\n\\fbox{\n\\begin{minipage}{14.75 cm}\n \\[\n \\begin{array}{c}\n \\begin{array}{ccc}\n \\infer[(\\textsc{id})]{\\mathrm{\\Delta} \\Rightarrow \\mathrm{\\Delta}}{} & & \n \\infer[\\textup{\\sc (cut)}]{\\mathrm{\\Gamma}, \\Pi \\Rightarrow \\mathrm{\\Sigma}, \\mathrm{\\Delta}}{\\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\Rightarrow \\mathrm{\\Delta} & \\Pi \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Sigma}}\\\\[.15in]\n \\infer[\\textup{\\sc (mix)}]{\\mathrm{\\Gamma},\\Pi \\Rightarrow \\mathrm{\\Sigma},\\mathrm{\\Delta}}{\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta} & \\Pi \\Rightarrow \\mathrm{\\Sigma}} & \\quad & \n \\qquad \\qquad \\infer[\\textup{\\sc (sc$_n$)}]{\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}}{n\\mathrm{\\Gamma} \\Rightarrow n\\mathrm{\\Delta}} \\quad (n \\ge 2)\\\\[.15in]\n \\infer[(\\to\\seq)]{\\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\to \\ensuremath{\\psi} \\Rightarrow \\mathrm{\\Delta}}{\\mathrm{\\Gamma}, \\ensuremath{\\psi} \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Delta}} & & \\infer[(\\seq\\to)]{\\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\varphi} \\to \\ensuremath{\\psi}, \\mathrm{\\Delta}}{\\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\Rightarrow \\ensuremath{\\psi}, \\mathrm{\\Delta}}\\\\[.15in]\n \\end{array}\\\\\n\\infer[\\boxknr{n}]{\\ensuremath{\\Box} \\mathrm{\\Gamma} \\Rightarrow n[\\ensuremath{\\Box} \\ensuremath{\\varphi}]}{\\mathrm{\\Gamma} \\Rightarrow n[\\ensuremath{\\varphi}]}\\quad (n \\ge 0) \\quad \n\\end{array}\n \\]\n \\caption{The sequent calculus $\\lgc{GK(A_m)}$} \\label{f:gkz}\n\\end{minipage}}\n\\end{figure}\n\nA sequent calculus $\\lgc{GK(A_m)}$ for the modal-multiplicative fragment of $\\lgc{K(A)}$, an extension of a calculus for the multiplicative fragment of Abelian logic given in~\\cite{met:seq}, is presented in Figure~\\ref{f:gkz}. 
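\n\nIt may help to keep in mind the semantic reading of sequents provided by the translation $\\mathcal{I}$: since $\\&$ is interpreted at each world by addition of values, $\\mdl{\\lgc{K(A)}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ holds exactly when, in every $\\lgc{K(A)}$-model $\\fram{M} = \\langle W, R, V \\rangle$ and at every world $x \\in W$,\n\\[\n\\sum_{\\ensuremath{\\varphi} \\in \\mathrm{\\Gamma}} V(\\ensuremath{\\varphi},x) \\ \\le \\ \\sum_{\\ensuremath{\\psi} \\in \\mathrm{\\Delta}} V(\\ensuremath{\\psi},x),\n\\]\nwhere the sums are taken with multiplicities and the empty sum is $0$. In particular, $(\\textsc{id})$ and $\\textup{\\sc (mix)}$ are easily seen to be sound under this reading.\n\n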
Although only rules for $\\to$ and $\\ensuremath{\\Box}$ appear in this system, the following rules for other connectives are $\\lgc{GK(A_m)}$-derivable:\\smallskip\n\\[\n\\begin{array}{ccc}\n\\infer[(\\&\\!\\seq)]{\\mathrm{\\Gamma}, \\ensuremath{\\varphi}\\&\\ensuremath{\\psi} \\Rightarrow \\mathrm{\\Delta}}{\\mathrm{\\Gamma}, \\ensuremath{\\varphi}, \\ensuremath{\\psi} \\Rightarrow \\mathrm{\\Delta}} & \\qquad \\qquad & \n\\infer[(\\seq\\!\\&)]{\\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\varphi}\\&\\ensuremath{\\psi}, \\mathrm{\\Delta}}{\\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\varphi}, \\ensuremath{\\psi}, \\mathrm{\\Delta}}\\\\[.1in]\n\\infer[(\\lnot\\!\\seq)]{\\mathrm{\\Gamma}, \\lnot \\ensuremath{\\varphi} \\Rightarrow \\mathrm{\\Delta}}{\\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Delta}} & & \n\\infer[(\\seq\\!\\lnot)]{\\mathrm{\\Gamma} \\Rightarrow\\lnot \\ensuremath{\\varphi}, \\mathrm{\\Delta}}{\\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\Rightarrow \\mathrm{\\Delta}}\\\\[.1in]\n\\infer[(\\0\\!\\seq)]{\\mathrm{\\Gamma}, \\ensuremath{\\overline{0}} \\Rightarrow \\mathrm{\\Delta}}{\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}} & & \n\\infer[(\\seq\\!\\0)]{\\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\overline{0}}, \\mathrm{\\Delta}}{\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}}\n\\end{array}\n\\]\n\n\\begin{exa}\\label{ex:dn}\nBelow we provide a simple example of a $\\lgc{GK(A_m)}$-derivation, making use of the derived rules for $\\&$ given above.\n\\[\n\\infer[\\pfa{(\\seq\\to)}]{\\Rightarrow \\ensuremath{\\Box} (\\ensuremath{\\varphi} \\& \\ensuremath{\\varphi}) \\to (\\ensuremath{\\Box} \\ensuremath{\\varphi} \\& \\ensuremath{\\Box} \\ensuremath{\\varphi})}{\n \\infer[\\pfa{(\\seq\\!\\&)}]{\\ensuremath{\\Box} (\\ensuremath{\\varphi} \\& \\ensuremath{\\varphi}) \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\varphi} \\& \\ensuremath{\\Box} \\ensuremath{\\varphi}}{\n \\infer[\\pfa{\\boxknr{2}}]{\\ensuremath{\\Box} (\\ensuremath{\\varphi} \\& \\ensuremath{\\varphi}) \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\varphi}, \\ensuremath{\\Box} \\ensuremath{\\varphi}}{\n \\infer[\\pfa{(\\&\\!\\seq)}]{\\ensuremath{\\varphi} \\& \\ensuremath{\\varphi} \\Rightarrow \\ensuremath{\\varphi}, \\ensuremath{\\varphi}}{\n \\infer[\\pfa{(\\textsc{id})}]{\\ensuremath{\\varphi},\\ensuremath{\\varphi} \\Rightarrow \\ensuremath{\\varphi},\\ensuremath{\\varphi}}{}}}}}\n\\]\nSequents of the form $\\Rightarrow \\ensuremath{\\Box} (n\\ensuremath{\\varphi}) \\to n \\ensuremath{\\Box} \\ensuremath{\\varphi}$ can be proved similarly using the rule $\\boxknr{n}$.\n\\end{exa}\n\nIt is straightforward to establish an equivalence between derivability of a sequent in $\\lgc{GK(A_m)}$ and derivability of its formula interpretation in the axiom system $\\lgc{K(A_m)}$.\n\n\\begin{prop}\\label{p:sequentcalculusaxiomsystemequivalent}\n$\\der{\\lgc{GK(A_m)}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ if and only if $\\der{\\lgc{K(A_m)}} \\mathcal{I}(\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta})$.\n\\end{prop}\n\\begin{proof}\nIt suffices for the left-to-right direction to show that for any rule of $\\lgc{GK(A_m)}$ with premises $S_1,\\ldots,S_m$ and conclusion $S$, whenever $\\der{\\lgc{K(A_m)}} \\mathcal{I}(S_i)$ for each $i \\in \\{1, \\ldots, m\\}$, also $\\der{\\lgc{K(A_m)}} \\mathcal{I}(S)$. For example, consider the rule $\\boxknr{n}$ and assume that $\\der{\\lgc{K(A_m)}} \\mathcal{I}(\\mathrm{\\Gamma} \\Rightarrow n[\\ensuremath{\\varphi}])$. 
Suppose that $\\mathrm{\\Gamma} = [\\ensuremath{\\psi}_1, \\ldots, \\ensuremath{\\psi}_m]$ and let $\\ensuremath{\\psi} = \\ensuremath{\\psi}_1 \\& \\ldots \\& \\ensuremath{\\psi}_m$. We continue the $\\lgc{K(A_m)}$-derivation of $\\mathcal{I}(\\mathrm{\\Gamma} \\Rightarrow n[\\ensuremath{\\varphi}]) =\\ensuremath{\\psi} \\to n\\ensuremath{\\varphi}$ to obtain a $\\lgc{K(A_m)}$-derivation of $\\ensuremath{\\Box} \\ensuremath{\\psi} \\to n\\ensuremath{\\Box} \\ensuremath{\\varphi}$:\n\\[\n\\begin{array}{rll}\n 1.\\, & \\ensuremath{\\psi} \\to n\\ensuremath{\\varphi}\\\\\n 2.\\, & \\ensuremath{\\Box} (\\ensuremath{\\psi} \\to n\\ensuremath{\\varphi}) &(\\text{nec})\\\\\n 3.\\, & \\ensuremath{\\Box} (\\ensuremath{\\psi} \\to n\\ensuremath{\\varphi}) \\to (\\ensuremath{\\Box} \\ensuremath{\\psi} \\to \\ensuremath{\\Box} n\\ensuremath{\\varphi}) & \\rm{(K)}\\\\\n 4.\\, & \\ensuremath{\\Box} \\ensuremath{\\psi} \\to \\ensuremath{\\Box} n\\ensuremath{\\varphi} & (\\text{mp}) \\text{ with 2,3}\\\\\n 5.\\, & \\ensuremath{\\Box} n\\ensuremath{\\varphi} \\to n\\ensuremath{\\Box} \\ensuremath{\\varphi} & \\textrm{(D$_n$)}\\\\\n 6.\\, & (\\ensuremath{\\Box} \\ensuremath{\\psi} \\to \\ensuremath{\\Box} n\\ensuremath{\\varphi}) \\to ((\\ensuremath{\\Box} n\\ensuremath{\\varphi} \\to n\\ensuremath{\\Box} \\ensuremath{\\varphi}) \\to (\\ensuremath{\\Box} \\ensuremath{\\psi} \\to n\\ensuremath{\\Box} \\ensuremath{\\varphi})) \\quad & \\rm{(B)}\\\\\n 7.\\, & (\\ensuremath{\\Box} n\\ensuremath{\\varphi} \\to n\\ensuremath{\\Box} \\ensuremath{\\varphi}) \\to (\\ensuremath{\\Box} \\ensuremath{\\psi} \\to n\\ensuremath{\\Box} \\ensuremath{\\varphi}) & (\\text{mp}) \\text{ with 4,6}\\\\\n 8.\\, & \\ensuremath{\\Box} \\ensuremath{\\psi} \\to n\\ensuremath{\\Box} \\ensuremath{\\varphi} & (\\text{mp}) \\text{ with 5,7}. \n \\end{array}\n \\]\n$(\\ensuremath{\\Box} \\ensuremath{\\psi}_1 \\& \\ldots \\& \\ensuremath{\\Box} \\ensuremath{\\psi}_m) \\to \\ensuremath{\\Box} \\ensuremath{\\psi}$ is derivable using (B), (C), (I), and (K), so, using (B) and (mp), we obtain a $\\lgc{K(A_m)}$-derivation of $\\mathcal{I}(\\ensuremath{\\Box} \\mathrm{\\Gamma} \\Rightarrow n[\\ensuremath{\\Box} \\ensuremath{\\varphi}]) = (\\ensuremath{\\Box} \\ensuremath{\\psi}_1 \\& \\ldots \\& \\ensuremath{\\Box} \\ensuremath{\\psi}_m) \\to n \\ensuremath{\\Box} \\ensuremath{\\varphi}$.\n \nFor the right-to-left direction, it is easy to show that every axiom of $\\lgc{K(A_m)}$ is $\\lgc{GK(A_m)}$-derivable; see, e.g., Example~\\ref{ex:dn} for $\\lgc{GK(A_m)}$-derivations of instances of (D$_n$). Also, the rules of $\\lgc{K(A_m)}$ are $\\lgc{GK(A_m)}$-derivable. For example, for $(\\text{con}_n)$, starting with $\\Rightarrow n\\ensuremath{\\varphi}$, we apply $\\textup{\\sc (cut)}$ with the $\\lgc{GK(A_m)}$-derivable sequent $n\\ensuremath{\\varphi} \\Rightarrow n[\\ensuremath{\\varphi}]$ to obtain $\\Rightarrow n[\\ensuremath{\\varphi}]$ and then, applying $\\textup{\\sc (sc$_n$)}$, also $\\Rightarrow \\ensuremath{\\varphi}$. Hence, if $\\der{\\lgc{K(A_m)}} \\mathcal{I}(\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta})$, then $\\der{\\lgc{GK(A_m)}} \\Rightarrow \\mathcal{I}(\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta})$ and, applying $\\textup{\\sc (cut)}$ with the $\\lgc{GK(A_m)}$-derivable sequent $\\mathrm{\\Gamma}, \\mathcal{I}(\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}) \\Rightarrow \\mathrm{\\Delta}$, also $\\der{\\lgc{GK(A_m)}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$. 
\n \\end{proof}\n \nWe now consider a more complicated family of rules, indexed by $k \\in \\ensuremath{\\mathbb{N}} \\setminus\\!\\{0\\}$ and $n \\in \\ensuremath{\\mathbb{N}}$, that will be very useful in subsequent cut-elimination and completeness proofs:\\smallskip\n\\[\n\\begin{array}{c}\n\\infer[\\boxknr{k,n} \\quad ]{\\mathrm{\\Delta}, \\ensuremath{\\Box} \\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\varphi}_1,\\ldots, \\ensuremath{\\Box} \\ensuremath{\\varphi}_n, \\mathrm{\\Delta}}{\n\\mathrm{\\Gamma}_0 \\Rightarrow & \\mathrm{\\Gamma}_1 \\Rightarrow k[\\ensuremath{\\varphi}_1] & \\ldots & \\mathrm{\\Gamma}_n \\Rightarrow k[\\ensuremath{\\varphi}_n]}\\quad\n \\text{where } k\\mathrm{\\Gamma} = \\mathrm{\\Gamma}_0 \\uplus \\mathrm{\\Gamma}_1 \\uplus \\ldots \\uplus \\mathrm{\\Gamma}_n.\n\\end{array}\n\\]\nCritically for our later considerations, $\\boxknr{k,n}$ is $\\lgc{GK(A_m)}$-derivable for all $k \\in \\ensuremath{\\mathbb{N}} \\setminus\\!\\{0\\}$, $n \\in \\ensuremath{\\mathbb{N}}$ (for $k = 1$, omitting the application of $\\textup{\\sc (sc$_k$)}$):\n\\[\n\\infer[\\pfa{\\textup{\\sc (mix)}}]{\\mathrm{\\Delta}, \\ensuremath{\\Box} \\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\varphi}_1,\\ldots, \\ensuremath{\\Box} \\ensuremath{\\varphi}_n, \\mathrm{\\Delta}}{\n \\infer[\\pfa{(\\textsc{id})}]{\\mathrm{\\Delta} \\Rightarrow \\mathrm{\\Delta}}{} &\n \\infer[\\pfa{\\textup{\\sc (sc$_k$)}}]{\\ensuremath{\\Box}\\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\varphi}_1,\\ldots, \\ensuremath{\\Box} \\ensuremath{\\varphi}_n}{\n \\infer[\\pfa{\\textup{\\sc (mix)}}]{\\ensuremath{\\Box} (\\mathrm{\\Gamma}_0 \\uplus \\mathrm{\\Gamma}_1 \\uplus \\ldots \\uplus \\mathrm{\\Gamma}_n) \\Rightarrow k [\\ensuremath{\\Box} \\ensuremath{\\varphi}_1],\\ldots, k[\\ensuremath{\\Box} \\ensuremath{\\varphi}_n]}{\n \\infer[\\pfa{\\boxknr{0}}]{\\ensuremath{\\Box} \\mathrm{\\Gamma}_0 \\Rightarrow}{ \n \\mathrm{\\Gamma}_0 \\Rightarrow} & \n \\infer[\\pfa{\\textup{\\sc (mix)}}]{\\ensuremath{\\Box} (\\mathrm{\\Gamma}_1 \\uplus \\ldots \\uplus \\mathrm{\\Gamma}_n) \\Rightarrow k [\\ensuremath{\\Box} \\ensuremath{\\varphi}_1],\\ldots, k[\\ensuremath{\\Box} \\ensuremath{\\varphi}_n]}{\n \\infer[\\pfa{\\boxknr{k}}]{\\ensuremath{\\Box} \\mathrm{\\Gamma}_1 \\Rightarrow k[\\ensuremath{\\Box} \\ensuremath{\\varphi}_1]}{\n \\mathrm{\\Gamma}_1 \\Rightarrow k [\\ensuremath{\\varphi}_1]} &\n \\infer[\\pfa{\\textup{\\sc (mix)}}]{\\vdots}{\n \\infer[\\pfa{\\boxknr{k}}]{\\ensuremath{\\Box} \\mathrm{\\Gamma}_n \\Rightarrow k[\\ensuremath{\\Box} \\ensuremath{\\varphi}_n]}{\n \\mathrm{\\Gamma}_n \\Rightarrow k [\\ensuremath{\\varphi}_n]}}}}}}\n \\]\n \n\nWe devote the remainder of this subsection to showing that the calculus $\\lgc{GK(A_m)}$ admits cut-elimination. That is, we provide an algorithm for constructively eliminating applications of the rule $\\textup{\\sc (cut)}$ from $\\lgc{GK(A_m)}$-derivations. 
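Before doing so, it may be useful to see a small concrete instance of $\\boxknr{k,n}$: taking $k = 2$, $n = 1$, $\\mathrm{\\Gamma} = [\\ensuremath{\\chi}]$, and the splitting $2\\mathrm{\\Gamma} = [\\ensuremath{\\chi}] \\uplus [\\ensuremath{\\chi}]$ (so $\\mathrm{\\Gamma}_0 = \\mathrm{\\Gamma}_1 = [\\ensuremath{\\chi}]$), we obtain\n\\[\n\\infer[\\boxknr{2,1}]{\\mathrm{\\Delta}, \\ensuremath{\\Box} \\ensuremath{\\chi} \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\varphi}, \\mathrm{\\Delta}}{\\ensuremath{\\chi} \\Rightarrow & \\ensuremath{\\chi} \\Rightarrow \\ensuremath{\\varphi}, \\ensuremath{\\varphi}}\n\\]\n\n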
Observe first that the ``cancellation'' rule \n\\[\n \\infer[\\textup{\\sc (can)}]{\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}}{\\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Delta}}\n\\]\nis both $\\lgc{GK(A_m)}$-derivable and can be used, with $\\textup{\\sc (mix)}$, to derive $\\textup{\\sc (cut)}$:\\smallskip\n\\[\n\\infer[\\pfa{\\textup{\\sc (cut)}}]{\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}}{\n \\infer[\\pfa{(\\to\\seq)}]{\\ensuremath{\\varphi} \\to \\ensuremath{\\varphi} \\Rightarrow}{\n \\infer[\\pfa{(\\textsc{id})}]{\\ensuremath{\\varphi} \\Rightarrow \\ensuremath{\\varphi}}{}}\n&\n \\infer[\\pfa{(\\seq\\to)}]{\\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\varphi} \\to \\ensuremath{\\varphi}, \\mathrm{\\Delta}}{\n \\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Delta}}}\n\\qquad\n\\infer[\\pfa{\\textup{\\sc (can)}}]{\\mathrm{\\Gamma}, \\Pi \\Rightarrow \\mathrm{\\Sigma}, \\mathrm{\\Delta}}{\n \\infer[\\pfa{\\textup{\\sc (mix)}}]{\\mathrm{\\Gamma}, \\Pi, \\ensuremath{\\varphi} \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Sigma}, \\mathrm{\\Delta}}{\n \\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\Rightarrow \\mathrm{\\Delta} & \\Pi \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Sigma}}}\n\\]\nHence, to prove cut-elimination, it will be enough to show constructively that $\\textup{\\sc (can)}$ is admissible in $\\lgc{GK(A_m)}$ without $\\textup{\\sc (cut)}$.\n\nWe begin by showing that every cut-free $\\lgc{GK(A_m)}$-derivation can be transformed into a derivation in a restricted calculus $\\lgc{GK(A_m)^r}$ consisting only of the rules $(\\textsc{id})$, $(\\to\\seq)$, $(\\seq\\to)$, and $\\boxknr{k,n}$ ($k \\in \\ensuremath{\\mathbb{N}} \\setminus\\!\\{0\\}$, $n \\in \\ensuremath{\\mathbb{N}}$). \n\n\\begin{lem}\\label{l:invertiblerules}\nThe rules $(\\to\\seq)$ and $(\\seq\\to)$ are $\\lgc{GK(A_m)^r}$-invertible.\n\\end{lem}\n\\begin{proof}\nTo show that $(\\to\\seq)$ is $\\lgc{GK(A_m)^r}$-invertible, we prove, more generally, that $\\der{\\lgc{GK(A_m)^r}} \\mathrm{\\Gamma}, m[\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}] \\Rightarrow \\mathrm{\\Delta}$ implies $\\der{\\lgc{GK(A_m)^r}} \\mathrm{\\Gamma}, m\\ensuremath{\\psi} \\Rightarrow m\\ensuremath{\\varphi}, \\mathrm{\\Delta}$ for all $m \\in \\ensuremath{\\mathbb{N}}$, proceeding by induction on the height of a $\\lgc{GK(A_m)^r}$-derivation of $\\mathrm{\\Gamma}, m[\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}] \\Rightarrow \\mathrm{\\Delta}$. For the base case, $\\mathrm{\\Delta} = \\mathrm{\\Gamma} \\uplus m[\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}]$ and it suffices to observe that $\\der{\\lgc{GK(A_m)^r}} \\mathrm{\\Gamma}, m\\ensuremath{\\psi} \\Rightarrow m\\ensuremath{\\varphi}, m[\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}], \\mathrm{\\Gamma}$. For the inductive step, we observe that when the last rule applied is $(\\to\\seq)$ or $(\\seq\\to)$, the claim follows immediately by applying the induction hypothesis and, where necessary, the relevant rule. If the last rule applied is $\\boxknr{k,n}$, then $m[\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}]$ must occur also on the right of the sequent and the claim follows by first applying the rule $\\boxknr{k,n}$ and then $(\\seq\\to)$ $m$ times. 
The proof that $(\\seq\\to)$ is $\\lgc{GK(A_m)^r}$-invertible is very similar.\n\\end{proof}\n\n\\begin{lem}\\label{l:admissiblemixsck}\nThe rules $\\textup{\\sc (mix)}$ and $\\textup{\\sc (sc$_n$)}$ are $\\lgc{GK(A_m)^r}$-admissible.\n\\end{lem}\n\\begin{proof}\nTo show the $\\lgc{GK(A_m)^r}$-admissibility of $\\textup{\\sc (mix)}$, we prove that\n\\[\n\\der{\\lgc{GK(A_m)^r}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta} \\ \\text{ and } \\ \\der{\\lgc{GK(A_m)^r}} \\Pi \\Rightarrow \\mathrm{\\Sigma} \\quad \\Longrightarrow \\quad\n\\der{\\lgc{GK(A_m)^r}} r\\mathrm{\\Gamma}, s\\Pi \\Rightarrow s\\mathrm{\\Sigma},r\\mathrm{\\Delta} \\ \\text{ for all } r,s \\in \\ensuremath{\\mathbb{N}},\n\\]\nproceeding by induction on the sum of the heights of $\\lgc{GK(A_m)^r}$-derivations $d_1$ and $d_2$ of $\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ and $\\Pi \\Rightarrow \\mathrm{\\Sigma}$, respectively.\n \nFor the base case, if $d_1$ and $d_2$ have height $0$, then $\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ and $\\Pi \\Rightarrow \\mathrm{\\Sigma}$ are instances of $(\\textsc{id})$, i.e., $\\mathrm{\\Gamma} = \\mathrm{\\Delta}$ and $\\Pi = \\mathrm{\\Sigma}$. So $r\\mathrm{\\Gamma} \\uplus s\\Pi = r\\mathrm{\\Delta}\\uplus s\\mathrm{\\Sigma}$ and $\\der{\\lgc{GK(A_m)^r}} r\\mathrm{\\Gamma}, s\\Pi \\Rightarrow s\\mathrm{\\Sigma},r\\mathrm{\\Delta}$ by $(\\textsc{id})$. If the last application of a rule in $d_1$ or $d_2$ is $(\\to\\seq)$ or $(\\seq\\to)$, then the result follows easily by an application of the induction hypothesis and further applications of the rule.\n\nSuppose now that $d_1$ ends with \n\\[\n\\begin{array}{c}\n\\infer[\\boxknr{k,n}]{\\Omega, \\ensuremath{\\Box} \\mathrm{\\Gamma}' \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\varphi}_1,\\ldots, \\ensuremath{\\Box} \\ensuremath{\\varphi}_n, \\Omega}{\n\\mathrm{\\Gamma}_0 \\Rightarrow & \\mathrm{\\Gamma}_1 \\Rightarrow k[\\ensuremath{\\varphi}_1] & \\ldots & \\mathrm{\\Gamma}_n \\Rightarrow k[\\ensuremath{\\varphi}_n]}\n\\quad\n \\text{where } k\\mathrm{\\Gamma}' = \\mathrm{\\Gamma}_0 \\uplus \\mathrm{\\Gamma}_1 \\uplus \\ldots \\uplus \\mathrm{\\Gamma}_n.\n\\end{array}\n\\]\nIf $d_2$ has height $0$, then $\\Pi = \\mathrm{\\Sigma}$. An application of the induction hypothesis to the $\\lgc{GK(A_m)^r}$-derivation of the premise $\\mathrm{\\Gamma}_0 \\Rightarrow$ together with a $\\lgc{GK(A_m)^r}$-derivation of the empty sequent $\\Rightarrow$ of height $0$ yields $\\der{\\lgc{GK(A_m)^r}} r\\mathrm{\\Gamma}_0 \\Rightarrow$. It follows then that the sequent $r\\Omega, r\\ensuremath{\\Box} \\mathrm{\\Gamma}', s\\Pi \\Rightarrow s\\Pi, r\\ensuremath{\\Box} \\ensuremath{\\varphi}_1,\\ldots, r\\ensuremath{\\Box} \\ensuremath{\\varphi}_n, r\\Omega$ is $\\lgc{GK(A_m)^r}$-derivable using an application of the rule $\\boxknr{k,rn}$. 
The case where $d_1$ has height $0$ and $d_2$ ends with $\\boxknr{k,n}$ is symmetrical.\n\nIf $d_2$ ends with \n \\[\n\\begin{array}{c}\n \\infer[\\boxknr{l,m}]{\\Theta, \\ensuremath{\\Box} \\Pi' \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\psi}_1,\\ldots, \\ensuremath{\\Box} \\ensuremath{\\psi}_m, \\Theta}{\n\\Pi_0 \\Rightarrow & \\Pi_1 \\Rightarrow l[\\ensuremath{\\psi}_1] & \\ldots & \\Pi_m \\Rightarrow l[\\ensuremath{\\psi}_m]} \n\\quad \\text{where } l\\Pi' = \\Pi_0 \\uplus \\Pi_1 \\uplus \\ldots \\uplus \\Pi_m,\n\\end{array}\n\\]\nthen we obtain the required $\\lgc{GK(A_m)^r}$-derivation \n \\[\n \\infer[\\boxknr{kl,rn+sm}]{r\\Omega, s\\Theta, r\\ensuremath{\\Box} \\mathrm{\\Gamma}', s\\ensuremath{\\Box} \\Pi' \\Rightarrow r\\ensuremath{\\Box} \\ensuremath{\\varphi}_1,\\ldots, r\\ensuremath{\\Box} \\ensuremath{\\varphi}_n, s\\ensuremath{\\Box} \\ensuremath{\\psi}_1,\\ldots, s\\ensuremath{\\Box} \\ensuremath{\\psi}_m, r\\Omega, s\\Theta}{\nrl\\mathrm{\\Gamma}_0, sk\\Pi_0 \\Rightarrow & \n\\{l\\mathrm{\\Gamma}_i \\Rightarrow kl[\\ensuremath{\\varphi}_i]\\}_{i \\in \\{1,\\ldots,n\\}} & \\{k\\Pi_j \\Rightarrow kl[\\ensuremath{\\psi}_j]\\}_{1 \\le j \\le m}}\n \\]\n where the premises are all $\\lgc{GK(A_m)^r}$-derivable using the induction hypothesis.\n \nWe establish the $\\lgc{GK(A_m)^r}$-admissibility of $\\textup{\\sc (sc$_n$)}$ by proving that\n\\[\n\\der{\\lgc{GK(A_m)^r}} n\\mathrm{\\Gamma} \\Rightarrow n\\mathrm{\\Delta} \\quad \\Longrightarrow \\quad \\der{\\lgc{GK(A_m)^r}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta},\n\\]\nproceeding by induction on the sum of the complexities of the formulas in $\\mathrm{\\Gamma},\\mathrm{\\Delta}$. For the base case, if $n\\mathrm{\\Gamma} = n\\mathrm{\\Delta}$ (in particular if $\\mathrm{\\Gamma}$ and $\\mathrm{\\Delta}$ contain only variables), then $\\mathrm{\\Gamma}=\\mathrm{\\Delta}$ and $\\der{\\lgc{GK(A_m)^r}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ by $(\\textsc{id})$. If $\\mathrm{\\Gamma}$ contains a formula $\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}$, then by the invertibility of the rule $(\\to\\seq)$ established in Lemma~\\ref{l:invertiblerules}, $\\der{\\lgc{GK(A_m)^r}} n(\\mathrm{\\Gamma} - [\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}]), n\\ensuremath{\\psi} \\Rightarrow n\\ensuremath{\\varphi}, n\\mathrm{\\Delta}$. The induction hypothesis and an application of $(\\to\\seq)$ gives $\\der{\\lgc{GK(A_m)^r}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$. The case where $\\mathrm{\\Delta}$ contains a formula $\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}$ is symmetrical. In the final case, the $\\lgc{GK(A_m)^r}$-derivation of $n\\mathrm{\\Gamma} \\Rightarrow n\\mathrm{\\Delta}$ must end with an application of $\\boxknr{k,nl}$ where $\\mathrm{\\Gamma} = \\Pi \\uplus [\\ensuremath{\\Box} \\mathrm{\\Sigma}]$ and $\\mathrm{\\Delta} = \\Pi \\uplus [\\ensuremath{\\Box} \\ensuremath{\\varphi}_1,\\ldots,\\ensuremath{\\Box} \\ensuremath{\\varphi}_l]$. Hence $\\der{\\lgc{GK(A_m)^r}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ using $\\boxknr{kn,l}$ and the $\\lgc{GK(A_m)^r}$-admissibility of $\\textup{\\sc (mix)}$.\n\\end{proof}\n\nWe now have all the necessary tools to prove the promised cut-elimination theorem.\n\n\\begin{thm}\\label{t:cutelimination}\n$\\lgc{GK(A_m)}$ admits cut-elimination. 
\n\\end{thm}\n\\begin{proof}\nTo establish cut-elimination for $\\lgc{GK(A_m)}$, it suffices to prove that an uppermost application of $\\textup{\\sc (cut)}$ in a $\\lgc{GK(A_m)}$-derivation can be eliminated; that is, we show that cut-free $\\lgc{GK(A_m)}$-derivations of the premises of an instance of $\\textup{\\sc (cut)}$ can be transformed into a cut-free $\\lgc{GK(A_m)}$-derivation of the conclusion. Observe first that the rule $\\boxknr{n}$ is $\\lgc{GK(A_m)^r}$-derivable using $\\boxknr{k,n}$ with $k = n$, $\\ensuremath{\\varphi}_1 = \\ldots = \\ensuremath{\\varphi}_n = \\ensuremath{\\varphi}$, and $\\mathrm{\\Gamma}_1 = \\ldots = \\mathrm{\\Gamma}_n = \\mathrm{\\Gamma}$ (and, for $n = 0$, using $\\boxknr{1,0}$ with $\\mathrm{\\Gamma}_0 = \\mathrm{\\Gamma}$). Hence, the proof of Lemma~\\ref{l:admissiblemixsck} shows that any cut-free $\\lgc{GK(A_m)}$-derivation can be transformed algorithmically into a $\\lgc{GK(A_m)^r}$-derivation. We prove (constructively) that\n\\[\n\\der{\\lgc{GK(A_m)^r}} \\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Delta} \\quad \\Longrightarrow \\quad \\der{\\lgc{GK(A_m)^r}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}. \\qquad (\\star)\n\\]\nSuppose then that there are cut-free $\\lgc{GK(A_m)}$-derivations of the premises $\\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\Rightarrow \\mathrm{\\Delta}$ and $\\Pi \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Sigma}$ of an uppermost application of $\\textup{\\sc (cut)}$. By $\\textup{\\sc (mix)}$, we obtain a cut-free $\\lgc{GK(A_m)}$-derivation of $\\mathrm{\\Gamma}, \\Pi, \\ensuremath{\\varphi} \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Sigma}, \\mathrm{\\Delta}$ and hence a $\\lgc{GK(A_m)^r}$-derivation of this sequent. By $(\\star)$, we obtain a $\\lgc{GK(A_m)^r}$-derivation of $\\mathrm{\\Gamma}, \\Pi \\Rightarrow \\mathrm{\\Sigma}, \\mathrm{\\Delta}$, which also gives the desired cut-free $\\lgc{GK(A_m)}$-derivation.\n\nWe prove $(\\star)$ by induction on the lexicographically ordered pair consisting of the modal depth of $\\ensuremath{\\varphi}$ and the sum of the complexities of the formulas in $\\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Delta}$. If $\\mathrm{\\Gamma} \\uplus [\\ensuremath{\\varphi}] = [\\ensuremath{\\varphi}] \\uplus \\mathrm{\\Delta}$, then $\\mathrm{\\Gamma} = \\mathrm{\\Delta}$ and $\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ is derivable using $(\\textsc{id})$. If $\\ensuremath{\\varphi}$ has the form $\\ensuremath{\\psi} \\to \\ensuremath{\\chi}$, then we use the $\\lgc{GK(A_m)^r}$-invertibility of $(\\to\\seq)$ and $(\\seq\\to)$ and apply the induction hypothesis twice. The cases where $\\mathrm{\\Gamma}$ or $\\mathrm{\\Delta}$ includes a formula $\\ensuremath{\\psi} \\to \\ensuremath{\\chi}$ are very similar. Lastly, suppose that $\\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Delta}$ contains only variables and box formulas. Then there is a $\\lgc{GK(A_m)^r}$-derivation of the sequent ending with an application of $\\boxknr{k,n}$. The case where $\\ensuremath{\\varphi}$ is a variable is trivial, so let us just consider the case where $\\ensuremath{\\varphi} = \\ensuremath{\\Box} \\ensuremath{\\chi}$ and the derivation ends with an application of $\\boxknr{k,n}$. 
The case where $\\ensuremath{\\varphi}$ occurs in the context appearing on both sides of the conclusion follows immediately, so suppose that the derivation ends with\n\\[\n\\infer[\\boxknr{k,n}]{\\mathrm{\\Sigma}, \\ensuremath{\\Box} \\Pi, \\ensuremath{\\Box} \\ensuremath{\\chi} \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\chi}, \\ensuremath{\\Box} \\ensuremath{\\psi}_2,\\ldots, \\ensuremath{\\Box} \\ensuremath{\\psi}_n, \\mathrm{\\Sigma}}{\n\\Pi_0, k_0[\\ensuremath{\\chi}] \\Rightarrow & \\Pi_1, k_1[\\ensuremath{\\chi}] \\Rightarrow k[\\ensuremath{\\chi}] & \\{\\Pi_i, k_i[\\ensuremath{\\chi}] \\Rightarrow k[\\ensuremath{\\psi}_i]\\}_{i=2}^{n}}\n\\]\nwhere $k\\Pi = \\Pi_0 \\uplus \\Pi_1 \\uplus \\ldots \\uplus \\Pi_n$ and $k = k_0 + k_1 + \\ldots + k_n$. By the induction hypothesis, \n\\[\n\\der{\\lgc{GK(A_m)^r}} \\Pi_1 \\Rightarrow (k-k_1)[\\ensuremath{\\chi}].\n\\]\nBy Lemma~\\ref{l:admissiblemixsck} (the $\\lgc{GK(A_m)^r}$-admissibility of $\\textup{\\sc (mix)}$), we have $\\lgc{GK(A_m)^r}$-derivations of\n\\[\n\\begin{array}{l}\n k_0 \\Pi_1, (k-k_1)\\Pi_0, (k-k_1)k_0[\\ensuremath{\\chi}] \\Rightarrow (k-k_1)k_0[\\ensuremath{\\chi}]\\\\[.025in]\n k_i \\Pi_1, (k-k_1)\\Pi_i, (k-k_1)k_i[\\ensuremath{\\chi}] \\Rightarrow (k-k_1)k_i[\\ensuremath{\\chi}], (k-k_1)k[\\ensuremath{\\psi}_i]\\quad \\text{for } i \\in \\{2,\\ldots, n\\}.\n\\end{array}\n\\]\nSo, by the induction hypothesis, we have $\\lgc{GK(A_m)^r}$-derivations of\n\\[\n\\begin{array}{l}\nk_0 \\Pi_1, (k-k_1)\\Pi_0 \\Rightarrow \\\\[.025in]\nk_i \\Pi_1, (k-k_1)\\Pi_i \\Rightarrow (k-k_1)k[\\ensuremath{\\psi}_i] \\quad \\text{for } i \\in \\{2,\\ldots, n\\}.\n\\end{array}\n\\]\nNow by an application of $\\boxknr{(k-k_1)k,n-1}$, we have a $\\lgc{GK(A_m)^r}$-derivation ending with\n\\[\n\\infer{\\mathrm{\\Sigma}, \\ensuremath{\\Box} \\Pi \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\psi}_2,\\ldots, \\ensuremath{\\Box} \\ensuremath{\\psi}_n, \\mathrm{\\Sigma}}{\nk_0 \\Pi_1, (k-k_1)\\Pi_0 \\Rightarrow & \\{k_i \\Pi_1, (k-k_1)\\Pi_i \\Rightarrow (k-k_1)k[\\ensuremath{\\psi}_i]\\}_{i=2}^n}\n\\]\nwhere $(k-k_1)k\\Pi = (k_0 + k_2 + \\ldots + k_n)(\\Pi_0 \\uplus \\Pi_1 \\uplus \\ldots \\uplus \\Pi_n)$. \n\\end{proof}\n\n\n\\subsection{Completeness}\n\nIn this section we establish the completeness of both the axiom system $\\lgc{K(A_m)}$ and the sequent calculus $\\lgc{GK(A_m)}$ for the modal-multiplicative fragment of $\\lgc{K(A)}$. The crucial ingredient of our proof will be the fact that an $\\lgc{LK'(A)}$-tableau for an $\\ensuremath{\\mathcal{L}}_{\\lgc{A_m}}^\\ensuremath{\\Box}$-formula always consists of just one branch, and hence a single inconsistent system of linear inequations can be associated with each valid $\\ensuremath{\\mathcal{L}}_{\\lgc{A_m}}^\\ensuremath{\\Box}$-formula. \n\nWe begin by proving two lemmas for $\\lgc{K(A)}$-valid sequents of a certain form, recalling that sequents contain only $\\ensuremath{\\mathcal{L}}_{\\lgc{A_m}}^\\ensuremath{\\Box}$-formulas by definition.\n\n\\begin{lem}\\label{l:separation}\nLet $ \\mathrm{\\Gamma}, \\Pi \\Rightarrow \\mathrm{\\Sigma}, \\mathrm{\\Delta}$ be a $\\lgc{K(A)}$-valid sequent such that no variable occurs in both $\\mathrm{\\Gamma} \\uplus \\mathrm{\\Delta}$ and $\\Pi \\uplus \\mathrm{\\Sigma}$. Then $\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ and $\\Pi \\Rightarrow \\mathrm{\\Sigma}$ are both $\\lgc{K(A)}$-valid.\n\\end{lem}\n\\begin{proof}\nSuppose contrapositively that $\\not \\mdl{\\lgc{K(A)}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$. 
Then there exists a $\\lgc{K(A)}$-model $\\fram{M} = \\langle W, R, V \\rangle$ and $x \\in W$ such that $V(\\mathcal{I}(\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}),x) < 0$. Since $\\mathrm{\\Gamma} \\uplus \\mathrm{\\Delta}$ and $\\Pi \\uplus \\mathrm{\\Sigma}$ have disjoint sets of variables, we may assume without loss of generality that $V(p,y) = 0$ for all $p \\in \\ensuremath{{\\rm Var}}$ occurring in $\\Pi \\uplus \\mathrm{\\Sigma}$ and $y \\in W$. A simple induction yields also that $V(\\ensuremath{\\varphi},y) = 0$ for all $\\ensuremath{\\varphi} \\in \\Pi \\uplus \\mathrm{\\Sigma}$ and $y \\in W$. But then $V(\\mathcal{I}(\\mathrm{\\Gamma}, \\Pi \\Rightarrow \\mathrm{\\Sigma},\\mathrm{\\Delta}),x) < 0$. So $\\not \\mdl{\\lgc{K(A)}} \\mathrm{\\Gamma}, \\Pi \\Rightarrow \\mathrm{\\Sigma}, \\mathrm{\\Delta}$. The case where $\\not \\mdl{\\lgc{K(A)}} \\Pi \\Rightarrow \\mathrm{\\Sigma}$ follows by symmetry. \n\\end{proof}\n\n\\begin{lem}\\label{l:mix}\nLet $\\ensuremath{\\Box} \\mathrm{\\Gamma}, \\Pi \\Rightarrow \\mathrm{\\Sigma}, \\ensuremath{\\Box} \\mathrm{\\Delta}$ be a $\\lgc{K(A)}$-valid sequent such that $\\Pi$ and $\\mathrm{\\Sigma}$ contain only variables. Then $\\Pi = \\mathrm{\\Sigma}$ and $\\ensuremath{\\Box} \\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\Box} \\mathrm{\\Delta}$ is $\\lgc{K(A)}$-valid.\n\\end{lem}\n\\begin{proof}\nSuppose that $\\mdl{\\lgc{K(A)}} \\ensuremath{\\Box} \\mathrm{\\Gamma}, \\Pi \\Rightarrow \\mathrm{\\Sigma}, \\ensuremath{\\Box} \\mathrm{\\Delta}$. It suffices to show that $\\Pi = \\mathrm{\\Sigma}$, since then clearly also $\\mdl{\\lgc{K(A)}} \\ensuremath{\\Box} \\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\Box} \\mathrm{\\Delta}$. Suppose for a contradiction that $\\Pi \\not = \\mathrm{\\Sigma}$. Without loss of generality, some $p \\in \\ensuremath{{\\rm Var}}$ occurs strictly more times in $\\Pi$ than $\\mathrm{\\Sigma}$. Consider a $\\lgc{K(A)}$-model $\\fram{M} = \\langle \\{x\\}, \\emptyset, V \\rangle$ with one irreflexive world $x$ satisfying $V(p,x) = 1$ and $V(q,x) = 0$ for all $q \\in \\ensuremath{{\\rm Var}} \\setminus \\{p\\}$. Then $V(\\mathcal{I}( \\ensuremath{\\Box} \\mathrm{\\Gamma}, \\Pi \\Rightarrow \\mathrm{\\Sigma}, \\ensuremath{\\Box} \\mathrm{\\Delta}),x) < 0$ and so $\\not \\mdl{\\lgc{K(A)}} \\ensuremath{\\Box} \\mathrm{\\Gamma}, \\Pi \\Rightarrow \\mathrm{\\Sigma}, \\ensuremath{\\Box} \\mathrm{\\Delta}$, a contradiction. \n\\end{proof}\n\nTo deal with $\\lgc{K(A)}$-valid sequents in general, we use the fact that for such a sequent, there must exist a corresponding closed complete $\\lgc{LK'(A)}$-tableau with one branch and an associated inconsistent set of inequations. We use this set of inequations to show that the rule $\\boxknr{k,m}$ for suitable $k,m$ can be applied backwards to the sequent to obtain $\\lgc{K(A)}$-valid sequents containing formulas of strictly smaller modal depth. To this end, it will be helpful to extend some of the notions for the labelled tableau calculus $\\lgc{LK'(A)}$ to sequents. We define a {\\em complete $\\lgc{LK'(A)}$-tableau} for a sequent $\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ to be a tableau beginning with the active inequation $\\lab{\\mathrm{\\Gamma}}{1} > \\lab{\\mathrm{\\Delta}}{1}$ and relation $r12$, constructed according to steps (2)--(7). 
Consulting the proof of Theorem~\\ref{t:LabelledEquivSemantics}, we obtain the following result.\n\n\\begin{cor}\\label{c:sequentlabelledcalculus}\nThere exists a closed complete $\\lgc{LK'(A)}$-tableau for a sequent $\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ if and only if $\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ is $\\lgc{K(A)}$-valid.\n\\end{cor}\n\nTo argue about the inconsistency of a system of inequations associated to a tableau, we recall some basic notions from linear programming. Let $S$ be a system of inequations of the form $I_i = (f_i(\\bar{x}) > g_i(\\bar{x}))$ ($i \\in \\{1,\\ldots,n\\}$) and $J_j = (h_j(\\bar{x}) \\ge k_j(\\bar{x}))$ ($j \\in \\{1,\\ldots,m\\}$) where each $f_i,g_i,h_j,k_j$ is a positive linear sum of variables in $\\bar{x}$. Then $S$ is inconsistent over $\\ensuremath{\\mathbb{R}}$ if and only if there exists an inequation given by a linear combination of these inequations\n\\[\n\\begin{array}{rcl}\nL_S & = & \\sum_{i = 1}^n \\lambda_i I_i + \\sum_{j = 1}^m \\mu_j J_j\n\\end{array}\n\\]\nwhere $\\lambda_1,\\ldots,\\lambda_n \\in \\ensuremath{\\mathbb{N}}$ (not all zero) and $\\mu_1,\\ldots,\\mu_m \\in \\ensuremath{\\mathbb{N}}$ such that\n\\[\n\\begin{array}{rcl}\n\\lambda_1 f_1 + \\ldots + \\lambda_n f_n + \\mu_1 h_1 + \\ldots + \\mu_m h_m & = & \\lambda_1 g_1 + \\ldots + \\lambda_n g_n + \\mu_1 k_1 + \\ldots + \\mu_m k_m.\n\\end{array}\n\\]\nWe say that $L_S$ is {\\em inconsistent} and that each inequation $f_i(\\bar{x}) > g_i(\\bar{x})$ or $h_j(\\bar{x}) \\ge k_j(\\bar{x})$ is {\\em used} $\\lambda_i$ or $\\mu_j$ times, respectively, in $L_S$.\n\nGiven a labelled inequation $I = \\lab{\\mathrm{\\Gamma}_1}{\\vecn{k_1}} \\triangleright \\lab{\\mathrm{\\Delta}_1}{\\vecn{l_1}}$, let $I^E = \\lab{\\mathrm{\\Gamma}_2}{\\vecn{k_2}} \\triangleright \\lab{\\mathrm{\\Delta}_2}{\\vecn{l_2}}$ be the inequation obtained by applying the rules for $\\to$ in $\\lgc{LK'(A)}$ to $I$ exhaustively. By further replacing each boxed formula $\\ensuremath{\\Box} \\ensuremath{\\varphi}$ with $\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}}$, we obtain the \\emph{reduced form} $I^R$ of $I$, saying that $I$ is \\emph{in reduced form} if $I = I^R$. We now have all the required tools to prove our main lemma.\n\n\\begin{lem}\\label{l:main}\nLet $\\ensuremath{\\Box} \\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\psi}_1,\\ldots, \\ensuremath{\\Box} \\ensuremath{\\psi}_m$ be a $\\lgc{K(A)}$-valid sequent. Then there exist $k \\in \\ensuremath{\\mathbb{N}} \\setminus\\!\\{0\\}$ and multisets of $\\ensuremath{\\mathcal{L}}_{\\lgc{A_m}}^\\ensuremath{\\Box}$-formulas $\\mathrm{\\Gamma}_0,\\mathrm{\\Gamma}_1,\\ldots,\\mathrm{\\Gamma}_m$ such that \n\\begin{enumerate}[label=\\rm (\\roman*)]\n\\item\t$k\\mathrm{\\Gamma} = \\mathrm{\\Gamma}_0 \\uplus \\mathrm{\\Gamma}_1 \\uplus \\ldots \\uplus \\mathrm{\\Gamma}_m$\n\\item\t$\\mathrm{\\Gamma}_0 \\Rightarrow$ \\, and\\, $\\mathrm{\\Gamma}_i \\Rightarrow k[\\ensuremath{\\psi}_i]$ for $i \\in \\{1,\\ldots,m\\}$ are all $\\lgc{K(A)}$-valid.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nLet $\\mathrm{\\Gamma} = [\\ensuremath{\\varphi}_1,\\ldots,\\ensuremath{\\varphi}_n]$. 
By assumption, $\\mdl{\\lgc{K(A)}} \\ensuremath{\\Box} \\ensuremath{\\varphi}_1,\\ldots,\\ensuremath{\\Box} \\ensuremath{\\varphi}_n \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\psi}_1,\\ldots,\\ensuremath{\\Box} \\ensuremath{\\psi}_m$, and, by Corollary~\\ref{c:sequentlabelledcalculus}, we obtain a complete closed tableau $T$ in $\\lgc{LK'(A)}$ that begins with \n\\[\n\\lab{{\\ensuremath{\\Box} \\ensuremath{\\varphi}_1}}{1},\\ldots,\\lab{{\\ensuremath{\\Box} \\ensuremath{\\varphi}_n}}{1} > \\lab{{\\ensuremath{\\Box} \\ensuremath{\\psi}_1}}{1},\\ldots, \\lab{{\\ensuremath{\\Box} \\ensuremath{\\psi}_m}}{1} \\quad \\text{and} \\quad r12.\n\\]\nThis tableau will contain the inequation\n\\[\n\\begin{array}{rcl}\nI & = & \\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}_1}}{1},\\ldots,\\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}_n}}{1} > \\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\psi}_1}}{1},\\ldots, \\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\psi}_m}}{1}\n\\end{array}\n\\]\nand for new labels $y_1,\\ldots,y_m \\in \\ensuremath{\\mathbb{N}}$, the inequations\n\\[\n\\begin{array}{rclcrcl}\nI_1 & = & \\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\psi}_1}}{1} \\ge \\lab{\\ensuremath{\\psi}_1}{y_1} & \\quad \\ldots \\quad & I_m & = & \\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\psi}_m}}{1} \\ge \\lab{\\ensuremath{\\psi}_m}{y_m}.\n\\end{array}\n\\]\nLet us fix $y_0 = 2$. Then $T$ contains for each $i \\in \\{1, \\ldots, n\\}$ and $j \\in \\{ 0, \\ldots, m\\}$, an inequation\n\\[\n\\begin{array}{rcl}\nI_{i,j} & = & \\lab{\\ensuremath{\\varphi}_i}{y_j} \\ge \\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}_i}}{1}.\n\\end{array}\n\\]\nConsider now the set of inequations associated to $T$\n\\[\n\\begin{array}{rcl}\nS & = & \\{I\\} \\cup \\{I_j^R : 1 \\le j \\le m\\} \\cup \\{I_{ij}^R : 1 \\le i \\le n, \\, 0 \\le j \\le m\\} \\cup S',\n\\end{array}\n\\]\nnoting that the inequations in $S'$ are obtained by applying rules of $\\lgc{LK'(A)}$ to inequations in $\\{I_j^E : 1 \\le j \\le m\\} \\cup \\{I_{ij}^E : 1 \\le i \\le n, \\, 0 \\le j \\le m\\}$. Since $T$ is closed, $S$ is inconsistent over $\\ensuremath{\\mathbb{R}}$. Hence there is an inconsistent linear combination $L_S$ of the inequations in $S$. The following observations can be confirmed by simple inductions on the height of $T$:\n\n\\begin{enumerate}[label=\\rm (\\roman*)]\n\n\\item The (reduced form) inequation $I$ is the only strict inequation occurring in $S$, and hence must be used $k$ times in $L_S$ for some $k \\in \\ensuremath{\\mathbb{N}} \\setminus\\!\\{0\\}$. \\smallskip\n\n\\item For each $j \\in \\{1, \\ldots, m\\}$, $\\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\psi}_j}}{1}$ occurs in $S$ only in $I$ and in the reduced form $I_j^R$ of $I_j$; hence, by (i), $I_j^R$ must also be used $k$ times in $L_S$. \\smallskip\n\n\\item For each $i \\in \\{1, \\ldots,n\\}$, $\\lab{\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}_i}}{1}$ occurs in $S$ only in $I$ and in the reduced forms $I_{i,j}^R$ of $I_{i,j}$ for $j \\in \\{0, \\ldots, m\\}$; hence, given that $I_{i,j}^R$ is used in the linear combination $\\lambda_{i,j}$ times, we obtain $\\lambda_{i,0} + \\lambda_{i,1} + \\ldots + \\lambda_{i,m} = k$; in particular, not all $\\lambda_{i,j}$ are zero. 
\\smallskip\n\n\n\\end{enumerate}\n\n\\noindent\nThe inconsistent linear combination of the inequations in $S$ is therefore\n\\[\n\\begin{array}{rcl}\nL_S & = & k I + \\sum_{j = 1}^m k I^R_j + \\sum_{i = 1}^n \\sum_{j = 0}^m \\lambda_{i,j} I^R_{i,j} + L_{S'}.\n\\end{array}\n\\]\nWe define multisets of formulas\n\\[\n\\begin{array}{rcl}\n\\mathrm{\\Gamma}_j & = & \\lambda_{1,j} [\\ensuremath{\\varphi}_1], \\ldots, \\lambda_{n,j} [\\ensuremath{\\varphi}_n] \\ \\text{ for $j \\in \\{0,\\ldots,m\\}$}\\\\[.05in]\n\\mathrm{\\Delta} & = & k[\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}_1}],\\ldots, k[\\newv{\\ensuremath{\\Box} \\ensuremath{\\varphi}_n}], k[\\newv{\\ensuremath{\\Box} \\ensuremath{\\psi}_1}],\\ldots, k[\\newv{\\ensuremath{\\Box} \\ensuremath{\\psi}_m}].\n\\end{array}\n\\] \nNote that, as required, $k\\mathrm{\\Gamma} = \\mathrm{\\Gamma}_0 \\uplus \\mathrm{\\Gamma}_1 \\uplus \\ldots \\uplus \\mathrm{\\Gamma}_m$. Consider now the inequation\n\\[\n\\begin{array}{rcl}\n J\t& = & kI + \\sum_{j=1}^{m} kI_j + \\sum_{i=1}^{n} \\sum_{j=0}^{m} \\lambda_{i,j} I_{i,j}\\\\[.05in]\n\t& = & \\lab{\\mathrm{\\Gamma}_0}{y_0}, \\lab{\\mathrm{\\Gamma}_1}{y_1}, \\ldots, \\lab{\\mathrm{\\Gamma}_m}{y_m}, \\lab{\\mathrm{\\Delta}}{1} > \\lab{\\mathrm{\\Delta}}{1}, k[\\lab{\\ensuremath{\\psi}_1}{y_1}], \\ldots, k[\\lab{\\ensuremath{\\psi}_m}{y_m}].\n\\end{array}\n\\]\nThen $L_S = J^R + L_{S'}$ and the set of inequations $S^* = \\{J^R\\} \\cup S'$ is inconsistent over $\\ensuremath{\\mathbb{R}}$.\n\nRecall that each (reduced form) inequation in $S'$ is obtained by applying rules of $\\lgc{LK'(A)}$ to the inequations $\\{I_j^E : 1 \\le j \\le m\\} \\cup \\{I_{ij}^E : 1 \\le i \\le n, \\, 0 \\le j \\le m\\}$. But following the procedure for building a complete $\\lgc{LK'(A)}$-tableau, the inequations in $S'$ are obtained by first applying the rules $(\\bo\\,\\aineq')$ and $(\\aineq\\,\\bo')$. Hence these inequations in $S'$ and $J^R$ are also obtained by first applying the rules $(\\bo\\,\\aineq')$ and $(\\aineq\\,\\bo')$ to $J^E$ and then continuing as before. \n\nNow let $\\ensuremath{{\\rm Var}}_0, \\ensuremath{{\\rm Var}}_1, \\ldots, \\ensuremath{{\\rm Var}}_m \\subseteq \\ensuremath{{\\rm Var}}$ be pairwise disjoint countably infinite sets, none of which contains a variable occurring in $\\mathrm{\\Delta}$, and for each $j \\in \\{0,\\ldots,m\\}$, let $h_j \\colon \\ensuremath{{\\rm Var}} \\to \\ensuremath{{\\rm Var}}_j$ be a bijective map that extends in the obvious way to all formulas and multisets of formulas. Consider the inequation\n\\[ \n\\begin{array}{rcl}\nJ' & = & \\lab{h_0(\\mathrm{\\Gamma}_0)}{1}, \\lab{h_1(\\mathrm{\\Gamma}_1)}{1}, \\ldots, \\lab{h_m(\\mathrm{\\Gamma}_m)}{1}, \\lab{\\mathrm{\\Delta}}{1} > \\lab{\\mathrm{\\Delta}}{1}, k[\\lab{h_1(\\ensuremath{\\psi}_1)}{1}], \\ldots, k[\\lab{h_m(\\ensuremath{\\psi}_m)}{1}].\n\\end{array}\n\\]\nAn easy induction on the height of a tableau shows that applying the rules of $\\lgc{LK'(A)}$ to $J'$ and relation $r12$ also produces a set of inequations that is inconsistent over $\\ensuremath{\\mathbb{R}}$. 
But then by Corollary~\\ref{c:sequentlabelledcalculus}, \n\\[\n\\mdl{\\lgc{K(A)}} h_0(\\mathrm{\\Gamma}_0), h_1(\\mathrm{\\Gamma}_1), \\ldots, h_m(\\mathrm{\\Gamma}_m), \\mathrm{\\Delta} \\Rightarrow \\mathrm{\\Delta}, k[h_1(\\ensuremath{\\psi}_1)],\\ldots, k[h_m(\\ensuremath{\\psi}_m)].\n\\]\nApplying Lemma~\\ref{l:separation} repeatedly, we obtain \n\\[\n\\mdl{\\lgc{K(A)}} h_0(\\mathrm{\\Gamma}_0) \\Rightarrow \\quad \\text{ and } \\quad \\mdl{\\lgc{K(A)}} h_i(\\mathrm{\\Gamma}_i) \\Rightarrow k[h_i(\\ensuremath{\\psi}_i)] \\, \\text{ for }i \\in \\{1,\\ldots,m\\},\n\\]\nand hence, renaming variables,\n\\[\n\\mdl{\\lgc{K(A)}} \\mathrm{\\Gamma}_0 \\Rightarrow \\quad \\text{ and } \\quad \\mdl{\\lgc{K(A)}} \\mathrm{\\Gamma}_i \\Rightarrow k[\\ensuremath{\\psi}_i] \\, \\text{ for }i \\in \\{1,\\ldots,m\\}\n\\]\nas required.\n\\end{proof}\n\n\\begin{prop}\\label{p:sequentcalculuscompleteness}\nLet $\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ be a $\\lgc{K(A)}$-valid sequent. Then $\\der{\\lgc{GK(A_m)}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$. \n\\end{prop}\n\n\\begin{proof}\nWe prove the claim by induction on the lexicographically ordered pair consisting of the modal depth of $\\mathcal{I}(\\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta})$ and the sum of the complexities of the formulas in $\\mathrm{\\Gamma} \\uplus \\mathrm{\\Delta}$. \n\nFor the base case, suppose that $\\mdl{\\lgc{K(A)}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$ and that both $\\mathrm{\\Gamma}$ and $\\mathrm{\\Delta}$ contain only variables. Then, by Lemma~\\ref{l:mix}, we obtain $\\mathrm{\\Gamma} = \\mathrm{\\Delta}$. Hence, by $(\\textsc{id})$, we get $\\der{\\lgc{GK(A_m)}} \\mathrm{\\Gamma} \\Rightarrow \\mathrm{\\Delta}$.\n\nFor the inductive step, suppose first that $\\mdl{\\lgc{K(A)}} \\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\to \\ensuremath{\\psi} \\Rightarrow \\mathrm{\\Delta}$. Then also $\\mdl{\\lgc{K(A)}} \\mathrm{\\Gamma}, \\ensuremath{\\psi} \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Delta}$. So by the induction hypothesis, $\\der{\\lgc{GK(A_m)}} \\mathrm{\\Gamma}, \\ensuremath{\\psi} \\Rightarrow \\ensuremath{\\varphi}, \\mathrm{\\Delta}$. Hence, by $(\\to\\seq)$, we get $\\der{\\lgc{GK(A_m)}} \\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\to \\ensuremath{\\psi} \\Rightarrow \\mathrm{\\Delta}$. The case where $\\ensuremath{\\varphi} \\to \\ensuremath{\\psi}$ occurs on the right is very similar.\n\nNow suppose that $\\mdl{\\lgc{K(A)}}\\ensuremath{\\Box} \\mathrm{\\Gamma}, \\Pi \\Rightarrow \\mathrm{\\Sigma}, \\ensuremath{\\Box} \\ensuremath{\\psi}_1,\\ldots,\\ensuremath{\\Box} \\ensuremath{\\psi}_m$ where $\\Pi$ and $\\mathrm{\\Sigma}$ contain only variables. By Lemma~\\ref{l:mix}, we obtain $\\Pi= \\mathrm{\\Sigma}$ and $\\mdl{\\lgc{K(A)}} \\ensuremath{\\Box} \\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\psi}_1,\\ldots,\\ensuremath{\\Box} \\ensuremath{\\psi}_m$. By $(\\textsc{id})$, we get $\\der{\\lgc{GK(A_m)}}\\Pi \\Rightarrow \\mathrm{\\Sigma}$. 
Moreover, by Lemma~\\ref{l:main}, there exist $k \\in \\ensuremath{\\mathbb{N}} \\setminus\\!\\{0\\}$ and multisets of $\\ensuremath{\\mathcal{L}}_{\\lgc{A_m}}^\\ensuremath{\\Box}$-formulas $\\mathrm{\\Gamma}_0,\\mathrm{\\Gamma}_1,\\ldots,\\mathrm{\\Gamma}_m$ such that \n\\begin{enumerate}[label=\\rm (\\roman*)]\n\\item\t$k\\mathrm{\\Gamma} = \\mathrm{\\Gamma}_0 \\uplus \\mathrm{\\Gamma}_1 \\uplus \\ldots \\uplus \\mathrm{\\Gamma}_m$\n\\item\t$\\mdl{\\lgc{K(A)}} \\mathrm{\\Gamma}_0 \\Rightarrow$ \\, and\\, $\\mdl{\\lgc{K(A)}} \\mathrm{\\Gamma}_i \\Rightarrow k[\\ensuremath{\\psi}_i]$ for $i \\in \\{1,\\ldots,m\\}$.\n\\newcounter{tempSaveCounter}\n\\setcounter{tempSaveCounter}{\\value{enumi}}\n\\end{enumerate}\nBut then by the induction hypothesis also\n\\begin{enumerate}[label=\\rm (\\roman*)]\n\\setcounter{enumi}{\\value{tempSaveCounter}}\n\\item\t$\\der{\\lgc{GK(A_m)}} \\mathrm{\\Gamma}_0 \\Rightarrow$ \\, and\\, $\\der{\\lgc{GK(A_m)}} \\mathrm{\\Gamma}_i \\Rightarrow k[\\ensuremath{\\psi}_i]$ for $i \\in \\{1,\\ldots,m\\}$.\n\\end{enumerate}\nHence, using the $\\lgc{GK(A_m)}$-derivable rule $\\boxknr{k,m}$, we obtain $\\der{\\lgc{GK(A_m)}} \\ensuremath{\\Box} \\mathrm{\\Gamma} \\Rightarrow \\ensuremath{\\Box} \\ensuremath{\\psi}_1,\\ldots,\\ensuremath{\\Box} \\ensuremath{\\psi}_m$. Finally, using $\\textup{\\sc (mix)}$, we obtain $\\der{\\lgc{GK(A_m)}} \\ensuremath{\\Box} \\mathrm{\\Gamma}, \\Pi \\Rightarrow \\mathrm{\\Sigma}, \\ensuremath{\\Box} \\ensuremath{\\psi}_1,\\ldots,\\ensuremath{\\Box} \\ensuremath{\\psi}_m$ as required.\n\\end{proof}\n\n\n\nOur main theorem now follows as a direct combination of Propositions~\\ref{p:axiomsystemsound},~\\ref{p:sequentcalculusaxiomsystemequivalent}, and~\\ref{p:sequentcalculuscompleteness}.\n\n\n\\begin{thm}\\label{t:main}\nThe following are equivalent for any $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_{\\lgc{A_m}}^\\ensuremath{\\Box})$:\n\\begin{enumerate}[label=\\rm (\\arabic*)]\n\\item $\\mdl{\\lgc{K(A)}} \\ensuremath{\\varphi}$.\n\\item\t$\\der{K(A_m)} \\ensuremath{\\varphi}$.\n\\item\t$\\der{\\lgc{GK(A_m)}} \\, \\Rightarrow\\! \\ensuremath{\\varphi}$.\n\\end{enumerate}\n\\end{thm}\n\nLet us remark finally that, since any $\\lgc{LK'(A)}$-tableau for an $\\ensuremath{\\mathcal{L}}_{\\lgc{A_m}}^\\ensuremath{\\Box}$-formula has just one branch, we obtain (consulting the proof of Theorem~\\ref{t:complexity}) a smaller upper bound for the complexity of checking $\\lgc{K(A)}$-validity in this fragment.\n\n\\begin{thm}\nThe problem of checking if $\\ensuremath{\\varphi} \\in \\ensuremath{{\\rm Fm}}(\\ensuremath{\\mathcal{L}}_\\lgc{A_m}^\\ensuremath{\\Box})$ is $\\lgc{K(A)}$-valid is in {\\sc EXPTIME}.\n\\end{thm}\n\n\n\n\\section{Concluding Remarks}\n\nThis paper takes a significant step towards a proof-theoretic account of continuous modal logics: many-valued modal logics with connectives interpreted locally by continuous functions over sets of real numbers. We have introduced here a minimal modal extension $\\lgc{K(A)}$ of Abelian logic (see~\\cite{mey:ab,cas:ab,met:seq}), where propositional connectives are interpreted using lattice-ordered group operations over the real numbers, and shown that the modal \\L ukasiewicz logic $\\lgc{K(\\mathrmL)}$ studied in~\\cite{HT13} is a fragment of this logic with an additional constant. We have provided a labelled tableau calculus for $\\lgc{K(A)}$ and established a {\\sc coNEXPTIME} upper bound for checking validity. 
More significantly, for the modal-multiplicative fragment of $\\lgc{K(A)}$, we have obtained both a sequent calculus that admits cut-elimination and an axiomatization without infinitary rules. Notably, this latter result was established using the completeness of the labelled tableau calculus to derive a corresponding proof in the sequent calculus. The more standard algebraic approach to proving completeness of many-valued modal logics, employed, e.g., for finite-valued {\\L}ukasiewicz modal logics in~\\cite{HT13}, proceeds by constructing a canonical model as the set of maximal filters of the Lindenbaum-Tarski algebra of the logic. For finite-valued {\\L}ukasiewicz modal logics, completeness is proved using the fact that the appropriate reduct of this algebra is semi-simple, which is not applicable in the infinite-valued case or for the modal-multiplicative fragment of $\\lgc{K(A)}$.\n\nClearly, there are many open questions still to be addressed. The most pressing issue is to find an axiomatization and algebraic semantics for the full logic $\\lgc{K(A)}$. We conjecture that such an axiomatization can be obtained by extending the axiom system $\\lgc{HA}$ for Abelian logic with the axiom schema (K), (D$_n$) ($n \\ge 2$) and rules (mp), (nec) from Figure~\\ref{f:kz}, and the axiom schema $(\\ensuremath{\\Box} \\ensuremath{\\varphi} \\land \\ensuremath{\\Box} \\ensuremath{\\psi}) \\to \\ensuremath{\\Box} (\\ensuremath{\\varphi} \\land \\ensuremath{\\psi})$. It can be shown using methods of abstract algebraic logic that this axiom system is sound and complete with respect to a corresponding variety of algebras with a lattice-ordered abelian group reduct; the difficulty of course is to prove that the axiomatization is complete with respect to the frame semantics of $\\lgc{K(A)}$, perhaps by extending the proof for the modal-multiplicative fragment (using the labelled tableau calculus and a Gentzen-style calculus), or via an alternative representation of the algebras. Such a proof would provide the basis for an axiomatization and algebraic semantics for $\\lgc{K(\\mathrmL)}$, and, more generally, a starting point for a J{\\'o}nsson-Tarski-style account of the relationship between relational and algebraic semantics for these logics. Note that we can already develop such a relationship for the modal-multiplicative fragment axiomatized in this paper, but the algebras corresponding to the axiom system $\\lgc{K(A_m)}$ will not form a variety.\n\nWe have focussed in this work only on the minimal modal extension of Abelian logic. However, adapting the Kripke semantics and labelled tableau calculi to other (e.g., reflexive, symmetric, transitive) classes of frames is a straightforward exercise. More challenging is the problem of adapting the completeness proofs for the modal-multiplicative fragment to suitably extended axiom systems and sequent calculi. 
For the reflexive case, completeness proofs, similar to those given here, can be obtained for the extension of the axiom system $\\lgc{K(A_m)}$ with the axiom schema $\\ensuremath{\\Box} \\ensuremath{\\varphi} \\to \\ensuremath{\\varphi}$ and the sequent calculus $\\lgc{GK(A_m)}$ with the rule\n\\[\n\\infer{\\mathrm{\\Gamma}, \\ensuremath{\\Box} \\ensuremath{\\varphi} \\Rightarrow \\mathrm{\\Delta}}{\\mathrm{\\Gamma}, \\ensuremath{\\varphi} \\Rightarrow \\mathrm{\\Delta}}\n\\]\nHowever, a general approach for tackling different classes of frames is still lacking.\n\nFinally, it remains to determine whether the upper bounds given here for the complexity of checking $\\lgc{K(A)}$-validity are optimal. Let us just note that it makes sense to first investigate the {\\sc EXPTIME} upper bound for the modal-multiplicative fragment, before considering the {\\sc coNEXPTIME} upper bound for the full logic $\\lgc{K(A)}$ and indeed also $\\lgc{K(\\mathrmL)}$. \n\n\n\\bibliographystyle{plain}\n