diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzffqz" "b/data_all_eng_slimpj/shuffled/split2/finalzzffqz" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzffqz" @@ -0,0 +1,5 @@ +{"text":"\\section{\\label{sec:Intro}INTRODUCTION\\protect\\\\}\n\nThe Winter conferences have brought forth\na host of new experimental results from the LHC.\nContinuing the 2011 trend, the Standard Model (SM)\nstands tall, and there are no strong\nhints of new physics beyond SM (BSM).\nOn the flavor front, a fit~\\cite{psiphiLHCb}\nto $B_s \\to J\/\\psi\\phi$ events by LHCb with\n1 fb$^{-1}$ data yields $\\Delta\\Gamma_s$ that is\nin good agreement with SM, while\ncombining the $\\phi_s \\equiv 2\\Phi_{B_s}$ (the $CP$\nviolating phase of $\\bar B_s \\to B_s$ mixing)\nmeasurement with the result from $B_s\\to J\/\\psi\\pi\\pi$\ngives\n\\begin{eqnarray}\n\\phi_s &=& -0.002 \\pm 0.083 \\pm 0.027\\ {\\rm rad}\n \\ \\ \\ {\\rm (LHCb\\ 1\\ fb}^{-1}) \\nonumber \\\\\n &=& -0.002 \\pm 0.087\\ {\\rm rad},\n\\label{phis1203}\n\\end{eqnarray}\nwhich is fully consistent with the result of\n$0.03 \\pm 0.16 \\pm 0.07$ with 1\/3 the dataset.\nAgain with 1 fb$^{-1}$ data, LHCb has advanced\nthe measurement of forward-backward asymmetry\nin $B^0 \\to K^{*0}\\mu^+\\mu^-$, giving a first\nmeasurement~\\cite{s0LHCb} of the zero-crossing point\n\\begin{equation}\n q_0^2 = (4.9^{+1.1}_{-1.3})\\ {\\rm GeV}^2,\\ \\ \\ \\\n {\\rm (LHCb\\ 1\\ fb}^{-1})\n\\label{s01203}\n\\end{equation}\nwhich is consistent with SM expectation of \n4.0--4.3 GeV$^2$~\\cite{Bobeth}.\n\nIt is interesting then, that more apparent progress\nhas been made on the quest for the $B_s \\to \\mu^+\\mu^-$\nrare decay mode: SM sensitivity has genuinely been\nreached, and data~\\cite{mumuCMS, mumuLHCb} might be\nsuggestive of a rate {\\it below} SM expectations.\nGiven that a decade long search for $B_s \\to \\mu^+\\mu^-$\nwas motivated by the possible enhancement up to factors\nof hundreds to thousands, by powers~\\cite{tanbn}\nof $\\tan\\beta$ in the settings of supersymmetry\nor two Higgs doublet models,\nwe are now at the\njuncture of a mindset change, switching from possible\nhuge enhancements of old,\nto SM-like or even sub-SM values as it might emerge.\nIt is in this context that we wish to explore in\nthis Brief Report the implications on relevant flavor\nparameters involving a fourth generation of quarks,\n$t'$ and $b'$ (SM4).\n\nIt should be noted that bounds on $t'$ and $b'$ masses\nhave reached~\\cite{4G-MoriQCD} the 600 GeV level by direct search at the\nLHC, hence we have nominally crossed the threshold of\nthe unitarity bound (UB) of 500--550 GeV~\\cite{Chanowitz:1978uj}.\nIn the following, we will proceed naively, extending\nour previous work~\\cite{HKX}, and return to comment on\nUB and other issues towards the end of our discussion.\n\n\n\n\\section{\\label{sec:II}\\boldmath\nLow versus SM-like $B_s \\to \\mu^+\\mu^-$ Rate \\protect\\\\}\n\n\nIt is difficult to enhance $B_s \\to \\mu^+\\mu^-$ in SM4\nby more than a factor of two or three,\nbecause it is constrained by $B \\to X_s\\ell^+\\ell^-$\n(together with $B \\to X_s\\gamma$), which is consistent with SM.\nHence, this mode appeared less relevant for SM4, until recently.\nIn contrast, the aforementioned $\\tan\\beta$ enhancement effect\nfeeds scalar operators that do not enter\n$b\\to s\\gamma$ and $b\\to s\\ell^+\\ell^-$ processes,\nhence were far less constrained.\nHowever, the scalar operators are now muted by\nthe prowess of the LHC (and previous 
searches at the Tevatron).\n\nA dramatic turn of events had already played out in 2011,\nwhen the combined result~\\cite{comboBsmumu} of LHCb and CMS,\n ${\\cal B}(B_s\\to \\mu^+\\mu^-) < 11 \\times 10^{-9}$ at\n 95\\% Confidence Level (CL),\nrefuted the CDF result~\\cite{CDFmumu11} of\n$(18^{+11}_{-\\ 9}) \\times 10^{-9}$, which was\nitself hot off the press at the time.\nAdding close to 3 fb$^{-1}$ of data to the previous 7 fb$^{-1}$\nanalysis, the CDF value dropped a bit to\n$(13^{+9}_{-7}) \\times 10^{-9}$, but\nthe Tevatron has run out of steam.\nATLAS has also produced a bound of $22 \\times 10^{-9}$\nbased on 2.4 fb$^{-1}$ of data, which is not yet competitive\neven with the summer 2011 results from LHCb or CMS.\nThe highlight this Winter was therefore the\n$B_s\\to \\mu^+\\mu^-$ results from CMS and LHCb.\n\nLet us first describe the LHCb result.\nUsing a multivariate analysis (MVA),\nLHCb gave~\\cite{mumuLHCb} the 95\\% CL bound of\n\\begin{equation}\n {\\cal B}(B_s\\to \\mu^+\\mu^-) < 4.5 \\times 10^{-9},\\ \\ \\\n {\\rm (LHCb\\ 1\\ fb}^{-1})\n\\label{LHCb-mumu-1fb}\n\\end{equation}\nwhich comes rather close to the SM value~\\cite{Buras:2010wr} of\n\\begin{equation}\n {\\cal B}(B_s\\to \\mu^+\\mu^-) = (3.2 \\pm 0.2) \\times 10^{-9}.\\ \\ \\\n {\\rm (SM)}\n\\label{mumu-SM}\n\\end{equation}\nIn fact, LHCb gave a fitted number,\n\\begin{equation}\n {\\cal B}(B_s\\to \\mu^+\\mu^-) = (0.8^{+1.8}_{-1.3}) \\times 10^{-9},\\ \\\n {\\rm (LHCb\\ 1\\ fb}^{-1})\n\\label{mumu-subSM}\n\\end{equation}\nwhich naively admits a negative branching ratio!\nThe central value is from the maximum log-likelihood,\nwhile the errors correspond to varying the\nlog-likelihood by 0.5. The main upshot may be that\nLHCb does not really see any clear hint of a\nSM-strength signal!\nEither this is a downward fluctuation of the ``true SM\"\nvalue of Eq.~(\\ref{mumu-SM}), or Nature has\na sub-SM value in store for us.\nWe caution, of course, that the statistics are still rather low.\n\nThe CMS result~\\cite{mumuCMS} is, at 95\\% CL,\n\\begin{equation}\n {\\cal B}(B_s\\to \\mu^+\\mu^-) < 7.7 \\times 10^{-9},\\ \\ \\\n {\\rm (CMS\\ 5\\ fb}^{-1})\n\\label{CMS-mumu-5fb}\n\\end{equation}\nby a cut-based analysis. A mild deficit seems\nto be indicated when compared with the\nmedian expected limit of $< 8.4 \\times 10^{-9}$.\nBut the handful of events reveals an\ninteresting pattern.\nIn the Barrel detector region, one expects\n$\\sim 2.7$ signal events if the SM were true,\ntogether with $\\sim 0.8$ events from background.\nOnly two events were observed, which are\nseparated by $\\sim 100$ MeV, wider than the\nnominal detector mass resolution.\nThis suggests the presence of background events.\nWhether this constitutes one event each for signal\nand background, or if both events are background,\nit seems to echo LHCb~\\cite{mumuLHCb} in some ``downward\"\nfluctuation from the SM value of Eq.~(\\ref{mumu-SM}).\nHowever, if both LHCb and CMS sense a downward signal\nfluctuation, then the likelihood that the actual\nsignal might be lower would be enhanced!\n\nIn the Endcap detector region, the situation is\na bit puzzling. 
Here, signal and background are\nboth expected at the 1.2 event level, while a total of\n4 events were seen~\\cite{mumuCMS}.\nBut they all cluster within 50 MeV or less,\ninside a signal mass window of 150 MeV,\nwhich is set at twice the detector mass resolution\n(poorer than in the Barrel detector).\nHowever, since the Endcap is less sensitive\nthan the Barrel, we refrain from further comment,\nexcept that the ``excess\" events push up the\nbound of Eq.~(\\ref{CMS-mumu-5fb}) slightly.\nThus, by the CMS Barrel detector alone, the\n``discrepancy\" with the median expected limit is\na little larger.\n\nAlthough anything can happen at the present\nstatistics level, LHCb expects to add\n$\\sim$ 1 fb$^{-1}$ in 2012, while\nCMS would add $\\sim$ 15 fb$^{-1}$, both at\nthe slightly higher collision energy of 8 TeV.\nWe therefore estimate\nfuture prospects as follows.\nFor the indication of a lower-than-SM rate,\nwe shall take Eq.~(\\ref{mumu-subSM}) at face\nvalue. Projecting to the full 2011-2012 data,\nbesides the doubling of LHCb data, CMS data should\nincrease more than fourfold (an MVA approach\nshould increase the effective luminosity).\nAlthough one cannot really project what\nthe combined effective reduction of errors will be,\nwe take the factor $\\sqrt{6} \\sim 2.5$.\nThat is, in our subsequent numerics, besides the\n$1\\sigma$ allowed range for Eq.~(\\ref{mumu-subSM}),\nwe will also show the $1\/2.5\\, \\sigma$ range,\nwhich would give $(0.8^{+0.7}_{-0.5}) \\times 10^{-9}$.\nWhile this is rather aggressive, it would\nillustrate a sub-SM result when LHCb combined\nwith CMS probes genuinely below SM values.\nIt is not impossible that, by the end of the\n2011-2012 run, we find ${\\cal B}(B_s\\to \\mu^+\\mu^-)$\nto be consistent with zero, i.e. at $10^{-9}$ or less.\nWe note that ATLAS could also eventually contribute\nsignificantly to the $B_s\\to \\mu^+\\mu^-$ search.\n\nThe notable feature across the board for\nnew physics searches at the LHC, however, is\nthat no cracks were found in the SM's armor.\nThus, we offer a second case of SM-like behavior.\nHere, we take the central value from the SM,\nand mimic the current error bar by satisfying\nthe 95\\% CL bound from CMS. 
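To be explicit about how this mock error bar is constructed (a back-of-the-envelope estimate on our part, treating the one-sided 95\\% CL limit as approximately Gaussian), we set\n\\begin{equation*}\n \\sigma \\simeq \\frac{(7.7-3.2)\\times 10^{-9}}{1.645} \\approx 2.7 \\times 10^{-9},\n\\end{equation*}\nso that the SM central value plus $1.645\\,\\sigma$ saturates the CMS bound. 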
We get from\nEq.~(\\ref{CMS-mumu-5fb}),\n\\begin{equation}\n {\\cal B}(B_s\\to \\mu^+\\mu^-) = (3.2 \\pm 2.7) \\times 10^{-9}.\\ \\ \\\n \\textrm{(SM-like)}\n\\label{mumu-SMlike}\n\\end{equation}\nAgain, we will discuss the $1\\sigma$ and $1\/2.5\\, \\sigma$\nallowed range of Eq.~(\\ref{mumu-SMlike}) for projections\ninto the future.\nActual error reduction would likely be more than\n$1\/2.5$ for SM-like central values in Eq.~(\\ref{mumu-SMlike}).\n\nWe follow our previous paper~\\cite{HKX} and combine the above\nscenarios for ${\\cal B}(B_s\\to \\mu^+\\mu^-)$ with\nmeasurements of $\\phi_s$ and\n$A_{\\rm FB}(B^0 \\to K^{*0}\\mu^+\\mu^-)$\n(we shorthand as $A_{\\rm FB}$ below).\nOur target physics is the flavor parameters of the\nfourth generation for $b\\to s$ transitions, namely\n$V_{t's}^*V_{t'b} \\equiv r_{sb}\\, e^{i\\,\\phi_{sb}}$.\nIf the current hint for 125 GeV SM-like Higgs boson\ndoes not get substantiated by 2012 data, a very\nheavy fourth generation could provide the mechanism\nfor electroweak symmetry breaking through its strong\nYukawa interaction~\\cite{Hou:2012nh}.\nWe will find that a sub-SM ${\\cal B}(B_s\\to \\mu^+\\mu^-)$\nvalue would imply a \\textit{lower bound} on\n$r_{sb} = \\vert V_{t's}^*V_{t'b}\\vert$,\nwhich would be rather interesting.\n\nWe had suggested that the\nthree measurements of $\\phi_s$,\n${\\cal B}(B_s\\to \\mu^+\\mu^-)$ and $A_{\\rm FB}$\nwould help map out the preferred $V_{t's}^*V_{t'b}$,\nor $(r_{sb},\\ \\phi_{sb})$ parameter space.\nThe main measurements are $\\phi_s$ and\n${\\cal B}(B_s\\to \\mu^+\\mu^-)$, with $A_{\\rm FB}$\nproviding further discrimination, both in its\nshape, and now also the $q_0^2$ value~\\cite{s0LHCb}.\nThree cases were discussed.\nCase A was for large and negative $\\phi_s$,\nwhere we used $\\sin2\\Phi_{B_s} = -0.3 \\pm 0.1$,\nand enhanced $10^9{\\cal B}(B_s\\to \\mu^+\\mu^-)\n= 5.0 \\pm 1.5$. 
This was motivated by\nhints for large and negative time-dependent\nCPV in $B_s \\to J\/\\psi \\phi$ from Tevatron studies.\nAlthough a $-0.2 \\pm 0.1$ value could still be\nentertained at the 2$\\sigma$ level, there is\nnot more to be said beyond our previous work,\nwhile the likelihood for enhanced\n${\\cal B}(B_s\\to \\mu^+\\mu^-)$ is receding.\nThus, we no longer present this case.\nCase B and C were for $\\sin2\\Phi_{B_s}$\ntaking SM value of $-0.04 \\pm 0.01$,\nwhile $10^9{\\cal B}(B_s\\to \\mu^+\\mu^-)$\ntakes the slightly enhanced or depressed values\nof $5.0 \\pm 1.5$ and $2.0 \\pm 1.5$, respectively.\nBy design, the overlap between Case B and Case C\nis precisely when ${\\cal B}(B_s\\to \\mu^+\\mu^-)$\nis SM-like.\nThus, the three Cases of A, B and C map out\nthe foreseen parameter space in $r_{sb}$\nand $\\phi_{sb}$ as data improves.\n\nWith the present experimental situation\nfor ${\\cal B}(B_s\\to \\mu^+\\mu^-)$,\nwhich could either be sub-SM as in\nEq.~(\\ref{mumu-subSM}), or SM-like, as in\nEq.~(\\ref{mumu-SMlike}), we reinvestigate\nthe implications for the preferred region\nin the $r_{sb}$-$\\phi_{sb}$ plane.\nFor both cases, we impose the\n$\\phi_s \\equiv 2\\Phi_{B_s}$ constraint of\nEq.~(\\ref{phis1203}).\nThe observed shape and $q_0^2$ value from\n$A_{\\rm FB}$ are further applied to\nconstrain parameter space.\nWe take $m_{t'} = 650$ GeV for sake of illustration.\n\n\n\n\\begin{figure*}[t!]\n\\centering\n{\\includegraphics[width=70mm]{Fig-1a.eps}\n \\includegraphics[width=70mm]{Fig-1b.eps}\n}\n\\vskip-0.35cm\n\\caption{\n Overlap region for contours of\n $\\phi_{s} = -0.002 \\pm 0.087$,\n where dashed line is for $1\/\\sqrt{2}$ the error, and\n $10^{9} {\\cal B}(B_s \\to \\mu^+\\mu^-) = $\n (a) $0.8^{+1.8}_{-1.3}$\n (we allow only positive definite values), and\n (b) $3.2 \\pm 2.7$,\n where dashed line is for $1\/2.5$ the error.\n For illustration, $m_{t'} = 650$ GeV has been used.\n} \\label{Fig1}\n\\end{figure*}\n\n\n\n\\section{\\boldmath\nResults \\protect\\\\}\n\n\nThe $\\bar B_s$--$B_s$ mixing amplitude is\n\\begin{eqnarray}\n M_{12}^s &=& \\frac{G_F^2M_W^2}{12\\pi^2}m_{B_s}f_{B_s}^2\\hat B_{B_s}\\eta_B\n \\Delta_{12}^s,\n \\label{M12s}\n\\end{eqnarray}\nwith\n\\begin{equation}\n\\Delta_{12}^s = \\left(\\lambda_t^{\\rm\\scriptsize SM}\\right)^2 S_0(t,t)\n + 2\\lambda_t^{\\rm\\scriptsize SM}\\lambda_{t'}\\Delta S_0^{(1)}\n + \\lambda_{t'}^2\\Delta S_0^{(2)},\n \\label{Del12s}\n\\end{equation}\nwhere $\\lambda_q \\equiv V_{qs}^*V_{qb}$.\nWith $S_0$ and $\\Delta S_0^{(i)}$ as defined in Ref.~\\cite{Hou:2006mx},\nEq.~(\\ref{Del12s}) manifestly respects GIM~\\cite{Glashow:1970gm}.\nThe CPV phase\n\\begin{equation}\n \\phi_s = 2\\Phi_{B_s} \\equiv \\arg M_{12}^s = \\arg \\Delta_{12}^s,\n \\label{argM12s}\n\\end{equation}\ndepends only on $m_{t'}$ and $\\lambda_{t'} = V_{t's}^*V_{t'b}$.\nNote that $\\lambda_t^{\\rm\\scriptsize SM} =\n-\\lambda_c - \\lambda_u \\cong -0.04 -V_{us}^*V_{ub}$.\nAlthough we take PDG~\\cite{PDG} values for the\nphase of $V_{ub}$, it is exciting that the\nphase of $V_{us}^*V_{ub}$ is starting to be\ndirectly measured via interference of tree processes.\nFor ${\\cal B}(B_s \\to \\mu^+\\mu^-)$, the $f_{B_s}^2$ dependence is\nlargely removed~\\cite{Buras:2003td} by taking the\nratio with $\\Delta m_{B_s}\/\\Delta m_{B_s}|^{\\rm exp}$,\nwhich works for SM4 as in SM. 
That is,\n\\begin{equation}\n\\mathcal{B}(B_s\\to \\mu^+\\mu^-)\n = C\\frac{\\tau_{B_s}\\eta_Y^2}{\\hat{B}_{B_s}\\eta_B}\n \\frac{|\\lambda_t^{\\rm\\scriptsize SM}Y_0(x_t)+\\lambda_{t'}\\Delta Y_0|^2}\n {|\\Delta_{12}^s|\/\\Delta m_{B_s}|^{\\rm exp}},\n \\label{BrBsmumu}\n\\end{equation}\nwhere $C = 3g_W^4m_{\\mu}^2\/2^7\\pi^3M_{W}^2$, and\n$\\eta_Y= \\eta_Y(x_t) = \\eta_Y(x_{t'})$ is taken.\n\nWe plot, in Fig.~1(a), the contours for $\\phi_{s}$\nwithin the $1\\sigma$ and $1\/\\sqrt{2}\\,\\sigma$ ranges of\nEq.~(\\ref{phis1203}), in the\n$r_{sb} \\equiv |V_{t's}^*V_{t'b}|$,\n$\\phi_{sb} \\equiv \\arg V_{t's}^*V_{t'b}$\nplane, for $m_{t'} = 650$ GeV.\nHere, LHCb holds a monopoly, and the statistics are\nexpected to only double during 2012.\nSimilarly for ${\\cal B}(B_s\\to \\mu^+\\mu^-)$,\nwe plot the contours within the $1\\sigma$ and $1\/2.5\\,\\sigma$\nranges of Eq.~(\\ref{mumu-subSM}), which is sub-SM\nin strength.\nThe $m_{t'}$ value used is beyond the nominal 550 GeV\nUB~\\cite{Chanowitz:1978uj},\nand one is no longer sure of the\nnumerical accuracy of Eqs.~(\\ref{Del12s}) and (\\ref{BrBsmumu}),\ni.e. the perturbative computation of the functions\n$\\Delta S_0^{(i)}$ and $\\Delta Y_0$ would become questionable.\nHowever, some form such as Eq.~(\\ref{Del12s}) should\ncontinue to hold even above the UB,\nand we shall continue to use the existing formulas.\n\nThe overlap between the $\\phi_{s}$ and\n${\\cal B}(B_s\\to \\mu^+\\mu^-)$ contours\nnow favors $\\phi_{sb}$ in the 4th quadrant\nwith $|\\sin\\phi_{sb}|$ small,\nwhere the darker regions are for more\naggressive error projections towards the future.\nIt should be clear that a precise determination of\n$\\phi_{sb}$ depends much more on the precision of\nthe $\\phi_{s}$ measurement.\n\nWe remark that Fig.~1(a) is a much more\nstringent version of Case C presented\nin our previous paper, where we now have\n${\\cal B}(B_s\\to \\mu^+\\mu^-)$ considerably\nbelow the SM expectation.\nThus, the most notable feature is that,\neven at the $1\\sigma$ error level,\n$r_{sb}$ is now \\textit{bounded from below}.\nThis is because the (current LHCb~\\cite{mumuLHCb})\ncentral value of Eq.~(\\ref{mumu-subSM}) is\nmore than $1\\sigma$ below the SM expectation of\n$3.2\\times 10^{-9}$. It therefore calls for a finite\n$t'$ effect that subtracts from, or destructively interferes\nwith, the SM top quark amplitude. 
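Schematically (dropping the mixing-ratio denominator of Eq.~(\\ref{BrBsmumu}), which is adequate for this qualitative argument), a sub-SM rate requires\n\\begin{equation*}\n \\vert\\lambda_t^{\\rm\\scriptsize SM}Y_0(x_t)+\\lambda_{t'}\\Delta Y_0\\vert^2\n < \\vert\\lambda_t^{\\rm\\scriptsize SM}Y_0(x_t)\\vert^2,\n\\end{equation*}\ni.e. $2\\,{\\rm Re}\\,[\\lambda_t^{\\rm\\scriptsize SM}Y_0(x_t)\\,(\\lambda_{t'}\\Delta Y_0)^*] < -\\vert\\lambda_{t'}\\Delta Y_0\\vert^2$, which cannot be satisfied at $\\lambda_{t'}=0$; hence the finite lower bound on $r_{sb}$. 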
That this\nmight become the picture for flavor parameters involving\nthe 4th generation, if a lower-than-SM value for\n${\\cal B}(B_s\\to \\mu^+\\mu^-)$ is found at the LHC,\nis the main point of this short note.\nIt should be stressed that this is a natural\nconsequence for SM4, since we know from\nexisting constraints that $t'$-induced amplitudes\nmust be subdominant in strength compared with\ntop-induced amplitudes, while the\nsign of the real part of $V_{t's}^*V_{t'b}$\ncan precisely be correlated with what experiments\nobserve.\n\n\nThe SM-like case of Eq.~(\\ref{mumu-SMlike}) is\nless interesting, but given the continued\nsuccess of the SM into the LHC era,\nit should be viewed as more probable.\nWe illustrate in Fig.~1(b) for $m_{t'} = 650$ GeV\nthe overlap of the contours for $\\phi_{s}$\nin Eq.~(\\ref{phis1203}) and\n${\\cal B}(B_s \\to \\mu^+\\mu^-)$\nin Eq.~(\\ref{mumu-SMlike}).\nBesides some high $r_{sb}$ region for\nmodest $\\vert \\phi_s\\vert$ values, the generic\nfeature is relatively small $r_{sb}$, with\n$\\phi_{sb}$ undetermined by the present\nprecision of the $\\phi_{s}$ measurement.\nThis small $r_{sb}$ case is rather intuitive,\nthat of a subdued 4th generation effect.\nWe shall see that the larger $r_{sb}$\nvalues are ruled out by the observation of\nSM-like behavior for $A_{\\rm FB}$,\nas we have seen in our previous paper.\n\nThe SM-like shape for $A_{\\rm FB}$ as observed by\nLHCb provides a powerful discriminant\nagainst larger $r_{sb}$ values.\nNote that data prior to summer 2011 had\nsuggested a deviation from SM behavior~\\cite{PDG},\nwhich, besides a hint for a sizable\ndeviation in $\\sin2\\Phi_{B_s}$, was part of\nthe motivation for Case A in our previous paper.\nThe SM-like shape for $A_{\\rm FB}$ is further\naffirmed with 1 fb$^{-1}$ data from LHCb~\\cite{s0LHCb},\nwhile a first measurement of the\nzero-crossing point, Eq.~(\\ref{s01203}), is now available.\nWe have checked the allowed parameter space\nof Fig.~1 and find generically\nthat $r_{sb} \\gtrsim 0.004$ would generate\nsignificant deviations in shape for $A_{\\rm FB}$.\nThe drop from roughly $0.008$~\\cite{HKX} to $0.004$\nis due to the higher $m_{t'} = 650$ GeV taken\nto satisfy direct search bounds~\\cite{4G-MoriQCD},\nas well as the tighter experimental constraints\ntowards the SM.\nWe note with interest that, for the sub-SM\n${\\cal B}(B_s \\to \\mu^+\\mu^-)$ case,\nthe slightly larger-than-SM central value of\n$q_0^2 = 4.9$ GeV$^2$ in Eq.~(\\ref{s01203})\nalso prefers $\\phi_{sb}$ in the 4th quadrant.\n\n\n\\section{\\label{sec:Conclusion} Discussion and Conclusion\\protect\\\\}\n\n\nAfter hints of BSM physics over the past few years,\nboth in $A_{\\rm FB}(B^0 \\to K^{*0}\\mu^+\\mu^-)$\nand in $\\sin2\\Phi_{B_s}$~\\cite{disc},\nthe SM has been reaffirmed by 2011 data from the LHC.\nInterestingly, there might now be a hint of\n${\\cal B}(B_s \\to \\mu^+\\mu^-)$ below SM expectations.\nIt is of course too early to tell. 
However,\nthis mode has always been looked upon as\npossibly greatly enhanced by\nthe less constrained scalar operators.\nWe are at least at the turning point,\nwhere no large enhancement is observed,\nbut now whether it is SM-like, or sub-SM,\ncan be distinguished with the full 2011-2012 data.\nThe 4th generation $t'$ quark offers the\nnatural toolbox in this domain,\nas it has been constrained to be subdominant\nby $b\\to s\\gamma$ and $b \\to s\\ell^+\\ell^-$\ndata for over a decade,\nwhile providing a destructive mechanism\nin the unknown phase of $V_{t's}^*V_{t'b}$.\nIn contrast,\nadjusting the scalar interactions to the SM strength\nis like training a big hammer on a small nail.\n\nWe note that, to have ${\\cal B}(B_s \\to \\mu^+\\mu^-)$\nnear the central value of Eq.~(\\ref{mumu-subSM}),\nthe $C_{10}$ Wilson coefficient would be\nconsiderably smaller than its SM value,\nsuch that one would worry about ${\\cal B}(B \\to X_s\\ell^+\\ell^-)$.\nIt is then interesting to note that\nLHCb data does seem to indicate that the\n$d{\\cal B}(B^0 \\to K^{*0}\\mu^+\\mu^-)\/dq^2$ differential rate\ncould be a little lower than the SM expectations~\\cite{s0LHCb}.\nAlthough a vanishing ${\\cal B}(B_s \\to \\mu^+\\mu^-)$ is unlikely\n(it more probably lies within $(1-2)\\times 10^{-9}$),\nit would be interesting to watch this mutually supporting\ntrend of somewhat lower ${\\cal B}(B_s \\to \\mu^+\\mu^-)$\nand ${\\cal B}(B \\to K^*\\ell^+\\ell^-)$ \n(or ${\\cal B}(B \\to X_s\\ell^+\\ell^-)$).\nThe darker region in Fig.~1(a) is just to stress the point.\n\nWe have used $m_{t'} = 650$ GeV to satisfy direct\nsearch bounds, which is now beyond the nominal\nunitarity bound. To probe much further,\nthe 13-14 TeV run would be necessary.\nHowever, with the Yukawa coupling turning nonperturbative,\nthe phenomenology may change~\\cite{Enkhbat:2011vp}.\nFortunately, the leading production mode of\n$gg\\to Q\\bar Q$ is not affected.\nThe use of such large $m_{t'}$ values is becoming\ndubious, and nonperturbative studies should be\nperformed.\nThe nonperturbative, strong Yukawa coupling\ncould actually be the source of electroweak symmetry\nbreaking~\\cite{Hou:2012nh}.\nIt is interesting that, with the full 2011-2012 data,\nwe would learn whether a 125 GeV Higgs boson is\nsubstantiated, as well as whether $B_s\\to \\mu^+\\mu^-$\nis below SM expectations.\n\nIn conclusion, we illustrate what LHC data\nmight tell us about 4th generation flavor parameters.\nAssuming 2011-2012 data would give\n$\\phi_s = -0.002 \\pm 0.062$ and taking $m_{t'} = 650$ GeV,\nwe mocked the low $B_s \\to \\mu^+\\mu^-$ rate case with\n$(0.8^{+0.7}_{-0.5}) \\times 10^{-9}$, which would imply\n$|V_{t's}^*V_{t'b}| \\sim 0.0015$--0.004, with\n$-40^\\circ \\lesssim \\arg V_{t's}^*V_{t'b} \\lesssim 15^\\circ$.\nOn the other hand, if an SM-like $B_s \\to \\mu^+\\mu^-$ rate\nemerges, it would imply a small $|V_{t's}^*V_{t'b}|$\nat a couple per mille, while $\\arg V_{t's}^*V_{t'b}$\nwould require a more precise measurement of $\\phi_s$\nto determine.\nThe $B^0 \\to K^{*0}\\mu^+\\mu^-$ forward-backward\nasymmetry provides a further discriminant that rules out\n$|V_{t's}^*V_{t'b}| \\gtrsim 0.004$ for the discussed\nallowed regions.\n\n\\vskip0.3cm\n\\noindent{\\bf Acknowledgement}.\nWSH is grateful to\nU. Langenegger, S. 
Stone and D.~Tonelli\nfor communications,\nand thanks the National Science Council for\nthe Academic Summit grant NSC 100-2745-M-002-002-ASP.\nMK and FX are supported by the NTU grant\n10R40044 and the Laurel program.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\n\n\nSpatial representation and reasoning is an essential component of geographical information systems, cognitive robotics, spatial databases, document interpretation and digital forensics. \nMany tasks in these areas, such as satellite image retrieval, navigation of a robot to a destination, describing the location of an object, and constructing digital maps, involve dealing with spatial properties of the objects and the environment.\n\n\nIn some domains (e.g., exploration of an unknown territory), qualitative models are more suitable for representing and reasoning about spatial relations because quantitative data may not always be available due to uncertainty or incomplete knowledge. \nIn cognitive systems, spatial information obtained through perception might be coarse or imperfect.\nEven if quantitative data is available, in some circumstances agents may prefer to use qualitative terms for the sake\nof natural and understandable communication.\nFor instance, humans express orientation and distance in words like \\textit{left}, \\textit{right}, \\textit{front}, \\textit{back}, \\textit{north}, \\textit{near}.\nNaval, air and space navigation typically involves geographical directions such as \\textit{south}, \\textit{east}, \\textit{northwest}.\nAlthough qualitative terms have less resolution in geometry than their quantitative counterparts, it is easier\nfor people to communicate using them.\nIt is more eloquent to say ``The library is in front of the theater, near the cafeteria\"\nrather than ``The library is at 38.6 latitude and 27.1 longitude\". \nThis explains why driving instructions in GPS systems are given in everyday language.\n\n\nAs an illustrative scenario (depicted in Figure~\\ref{fig:scenario}), suppose that a robot is assisting a parent in finding her missing child in a shopping mall that is not completely known to either the robot or the parent. 
\nThe robot has received some sightings of the child (e.g., ``to the south of Store A\").\nThis information will be useful if the robot can understand the relative location of the child described qualitatively, figure out where the child might be based on such qualitative direction constraints, and describe qualitatively in which direction (e.g., ``to the north\") the parents should search for their child.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width = 0.5\\linewidth]{Scenario2Sketchv11.png}\n\\caption{Missing child scenario}\n\\label{fig:scenario}\n\\end{figure}\n\n\nIn another scenario, depicted in Figure~\\ref{fig:scenario2}, a human user requests a service robot to prepare the kitchen table.\nThe commonsense knowledge for a well-set table can be described using statements such as ``The plate is in the middle of the table\", ``The spoon is on the right and very near to the plate\", ``The dessert is between the napkin and the salad\", ``The salad is on the left and near to the plate\", ``The salt is adjacent to the napkin, which is near to the top border\", \n``The cup is on the right or back and very near to the bottle.\"\nThe human might express his preferences as well: ``It is better if the juice is placed on the right side of the plate, not far and not very near to it\".\nTo set up the table, the robot should possess a representation of these qualitative direction and distance relations between objects in order to understand the human and infer the setting.\nIt is also beneficial if the robot can utilize commonsense knowledge to enhance the arrangement,\nfor example ``by default the fork is placed next to the spoon\". \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width = 0.5\\linewidth]{Kitchenlayout4.png}\n\\caption{Kitchen table scenario}\n\\label{fig:scenario2}\n\\end{figure}\n\n\n\nWith these motivations, our objective in this thesis is to develop a general framework to represent constraints, commonsense knowledge, and preferences about qualitative directions and distances, to check the consistency of this information, and to infer unknown spatial relations.\nWe propose to develop this framework using Answer Set Programming (ASP).\n\n\n\\section{Related Literature}\n\nBeginning with the seminal work of Allen on Interval Algebra~\\cite{Allen83}, \na multitude of qualitative calculi have been proposed in the literature focusing on \ndifferent aspects of space. \nSome of these formalisms focus on topology (DIR9~\\cite{egenhofer1990categorizing}, RCC8~\\cite{cohn1997qualitative}), \ndirection (cone and projection based~\\cite{Frank91}, \nLR~\\cite{ligozat1993qualitative}, Double-cross~\\cite{Freksa1992}, Dipole~\\cite{moratz2000qualitative}, SV~\\cite{LeeRW13},\nOPRA~\\cite{moratz2005relative}, \nRectangle Algebra~\\cite{balbiani1998model,BalbianiCC99}, \nCardinal Directional Calculus~\\cite{Goyal1997, SkiadKoub2004}), \ndistance~\\cite{zimmermann1996qualitative,monferrer1996enhancing,falomir2013qualitative,guesgen2002reasoning}, \nsize~\\cite{Frank91}, and shape~\\cite{dugat1999qualitative,gottfried2005global,van2005double,museros2004qualitative,dorr2014qualitative}. 
\nAn overview of qualitative spatial and temporal calculi can be found in recent surveys~\\cite{cohn2008qualitative,chen2015survey,dylla2017survey}.\n\nAs for direction, point objects~\\cite{Frank91, moratz2005relative, LeeRW13}, \nline segments and ternary relations~\\cite{Freksa1992, moratz2002qualitative}, and \nextended regions on the plane~\\cite{BalbianiCC99, Goyal1997} have been examined. \nMost of these formalisms are designed for point objects; in this thesis we consider extended objects.\n\nRectangle Algebra~\\cite{BalbianiCC99} and \nCardinal Directional Calculus~\\cite{Goyal1997, SkiadKoub2004, SkiadKoub2005} \nare widely used for reasoning about directions between extended objects on the plane. \nRectangle Algebra (RA) is an extension of Allen's Interval Algebra into two dimensions. Objects are rectangles whose sides are parallel to the axes of the reference frame. An RA relation is identified by a pair of interval relations between the sides of the rectangles along the horizontal and vertical axes. \nIn the Direction Relation Matrix~\\cite{Goyal1997}, the space is divided into 9 tiles and \nthe direction of the target object is represented by its intersection \nwith the tiles in a 3x3 matrix. \nA more formal model was adapted by Skiadopoulos and Koubarakis~\\cite{SkiadKoub2004,SkiadKoub2005}, in a manner such that lower \ndimensional parts (points and lines) do not alter directional relations.\nIts new form is named the Cardinal Directional Calculus (CDC). \nIn this thesis, our studies regarding directions are based on CDC.\nConsistency checking in CDC and its computational complexity \nhave been investigated in subsequent research~\\cite{Liuthesis2013,LiLiu2011,Liuetal2010,Navarreteetal2007,SkiadKoub2004,SkiadKoub2005,Zhangetal2008}. \nPolynomial time complexity fragments of the problem have been identified~\\cite{Liuthesis2013,Liuetal2010,Navarreteetal2007,Zhangetal2008} and algorithms have been presented for them. \nAlthough the consistency checking problem is proven to be NP-complete in general~\\cite{Liuthesis2013,LiLiu2011,Liuetal2010,SkiadKoub2005},\nno solution method exists for these intractable problems in the literature.\n\nThere are also calculi that integrate different aspects such as topology and orientation~\\cite{hotzcombining}, \norientation and distance~\\cite{clementini1997qualitative,moratz2002qualitative,moratz2003spatial,moratz2012spatial}, \ntopology and size~\\cite{gerevini2002combining},\nand topology, size and distance~\\cite{brageul2007model}. \nThese formalisms consider solely point objects for describing combined spatial relations.\nIn this thesis, we aim to construct a formal framework for reasoning about direction and distance for extended planar objects.\nWe consider qualitative distance, which includes symbolic relations with adjustable granularity. \n\nIn the literature, qualitative direction and distance are combined to define a \\textit{qualitative position}. \nIn~\\cite{clementini1997qualitative}, a symbolic binary distance relation is added to cone-based cardinal directions with granularity $k$. \nThis model with four cone-based cardinal directions (\\textit{north, south, east, west}) and four interval-based distance relations has been further investigated~\\cite{hong1995robustness}.\n\nAs for other formalisms that combine direction with distance, in one study~\\cite{zimmermann1996qualitative} the LR calculus is enriched with a comparative distance relation. 
\nThe orientation of a point $c$ is identified with respect to the directed line segment across the two reference objects $a,b$ and is denoted by $ab:c$. \nIn~\\cite{monferrer1996enhancing}, the same LR calculus is augmented with an interval-based qualitative distance relation of arbitrary granularity. \nThey encode qualitative spatial constraints in Prolog with CLP (Constraint Logic Programming).\nThis model is also extended into 3D space~\\cite{pacheco2002qualitative}.\n\nIn the TPCC calculus \\cite{moratz2002qualitative}, LR relations are made finer by further subdividing the 2-D space into four cones. To measure distance, they draw a circle whose radius is given by the line segment across the reference objects $a,b$.\nThe inside of the circle is designated as \\textit{near} and the outside of it as \\textit{far}.\nIn another model~\\cite{moratz2003spatial,moratzspatial}, the planar space is partitioned into angular segments that are called distance orientation intervals (DOIs).\nIn these calculi, a DOI is specified by four metric parameters $(\\phi_1, \\phi_2, r_1, r_2)$ and corresponds to a qualitative position.\n\nAnother approach \\cite{moratz2012spatial} for describing qualitative position suggests adding symbolic distance relations into the $\\mathrm{OPRA}_m$ calculus. The distance relation can be asymmetric; \nhence the distance relation is specified by a pair of relations.\nThe distance concept is similar to the interval-based system \\cite{clementini1997qualitative}, except that the borders of the intervals also constitute a separate distance relation.\n\n\nAnswer Set Programming, thanks to its efficient solvers for computationally hard problems, has been applied to\nqualitative spatial reasoning~\\cite{baryannis2018trajectory,brenton2016answer,li2012qualitative}. \nThese approaches are based on path consistency and do not involve nonmonotonicity.\nAs shown in~\\cite{Liuetal2010,SkiadKoub2005}, local algorithms such as \\textit{path consistency} or \\textit{k-consistency} are not sufficient to decide consistency of a CDC network.\nAn encoding of constraint networks in IA and RCC8 has been developed in~\\cite{brenton2016answer}.\nTheir formulation can represent disjunctive constraints but not defaults. \nLikewise, ASP has been utilized to check path consistency of a network in the Trajectory Calculus~\\cite{baryannis2018trajectory}.\nUnknown relations are nondeterministically generated and path consistency is tested with a composition table.\nIn another study, ASP programs for checking the consistency of basic and disjunctive constraint networks in any qualitative calculus are presented~\\cite{li2012qualitative}. Specialized programs for IA and RCC8 are also provided. \n\n\n\nConsistency problems that involve topology (part, whole, contact relations) and \norientation (left, right, perpendicular, collinear relations) have been solved using ASP Modulo Theories (ASPMT) in~\\cite{WalegaBS15}. 
\nThe benefit of ASPMT is that it permits formulas in first-order logic and equations over real numbers.\nThe authors consider points, line segments, circles and polygons as spatial entities.\nConstraint networks in Interval Algebra, Rectangle Algebra, LR, and RCC8 can be encoded \nin their setting.\nSpatial constraints are written in terms of polynomial inequalities in ASPMT and then transformed into SAT Modulo Theories \nfor the SMT solver.\nFor consistency checking in CDC, objects can be instantiated with any shape and size; consequently this approach is not complete for solving the CDC consistency checking problem in this thesis.\nMoreover, their formulation does not allow for disjunctive, nonmonotonic constraints or preferences. \n\n\n\\section{Our Approach}\n\nWe use the Cardinal Directional Calculus introduced by Skiadopoulos and Koubarakis~\\cite{Goyal1997, SkiadKoub2004} to define the relative direction of objects with respect to each other.\nIn CDC, directions between objects are denoted by binary relations.\nObjects are extended regions on a plane and they can be \\textit{simple}, \\textit{connected} or \\textit{disconnected}, as shown in Figure~\\ref{fig:regions}(i).\nThe minimum bounding rectangle of the reference object along the axes (Figure~\\ref{fig:regions}(ii)) divides the plane into nine regions (called tiles): \n\\textit{north} (N), \\textit{south} (S), \\textit{east} (E), \\textit{west} (W), \\textit{northeast} (NE), \\textit{northwest} (NW), \\textit{southeast} (SE), \\textit{southwest} (SW), \\textit{on} (O), as in Figure~\\ref{fig:regions}(iii). \nThese nine atomic (single-tile) relations and their combinations constitute the set of basic relations (see, e.g., Figure~\\ref{fig:regions}(iv)).\nCDC also allows for disjunction of these basic relations. \n\n\\begin{figure}[h]\n\t\\centering\n\t\\begin{tabular}{cc}\n\t\t\\includegraphics[width=0.3\\columnwidth]{regions1.png} &\n\t\t\\includegraphics[width=0.36\\columnwidth]{regions2.png}\n\t\t\\\\\n\t\t(i) & (ii)\n\t\\end{tabular}\n \\\\\n \\vspace{3mm}\n\t\\begin{tabular}{cc}\n\t\t\\includegraphics[width=0.27\\columnwidth]{regions3.png} &\n\t\t\\includegraphics[width=0.45\\columnwidth]{regions4.png}\n\t\t\\\\\n\t\t(iii) & (iv)\n\t\\end{tabular}\n\t\\caption{(i) Regions $a$, $b$, $c_1$, $c_2$ are connected, where $c = c_1\\cup c_2$ is disconnected. (ii) A region and its bounding box. (iii) Reference tiles. (iv) Sample relations (orientation of $a$ with respect to $b$): $a\\ S\\ b$, $a\\ NE{:}E\\ b$, $a\\ N{:}S\\ b$.}\n\t\\label{fig:regions}\n\\end{figure}\n\n\\vspace{3mm}\n\nUsing these binary relations, relative directions of extended objects can be described in CDC as a set of constraints.\nIn our studies, we formalize CDC using ASP and further extend it with a new form of constraints: default CDC constraints.\nThey can be used to express default assumptions like ``The food truck is normally to the south of the movie theater\", as illustrated by the sketch below.\n
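Such a default can be encoded directly in ASP. The following is a minimal sketch in the spirit of our encoding (the predicate and constant names are illustrative, not the exact ones of our implementation):\n\\begin{verbatim}\n% By default, the food truck is to the south of the theater;\n% the default is withdrawn if the network derives the contrary.\nrel(foodtruck,theater,s) :- not -rel(foodtruck,theater,s).\n\\end{verbatim}\nHere \\texttt{-rel} denotes the strongly negated atom, so the default direction holds exactly when the rest of the program does not refute it.\n\n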
We define qualitative distance relations with adjustable granularity $g_d$.\nTo exemplify, for granularity $g_d=6$, the set of basic distance relations is \n$ \\Omega \\,{=}\\, \\mathit{ \\{adjacent, \\: very \\: near, \\: near, \\: commensurate, \\: far},$ \\\\ $ \\mathit{very \\: far \\} }$.\n\n\nOne of the central problems in the CDC and qualitative spatial reasoning literature is consistency checking of a constraint network.\nThe input of the consistency checking problem consists of a set of spatial variables (objects), the domain of the objects, a constraint network and\nthe set of CDC relations. The domain can be\nthe set of connected regions or \nthe set of possibly disconnected regions in $\\mathbb{R}^2$.\nThen, the consistency checking problem asks whether there exists an instantiation of\nthe objects in the domain that satisfies all constraints in the network. If such an instantiation exists,\nthe output is \\ii{Yes}, otherwise it is \\ii{No}.\n\nBased on our representation of direction and distance constraints in ASP, we propose a novel method to check the consistency of a network of constraints. Note that the consistency checking problem is defined over a continuous domain.\nWe discretize the consistency checking problem in CDC, prove its equivalence with the continuous version, and introduce a solution using ASP.\nWe also establish soundness and completeness of our ASP-based solution.\nNamely, the ASP program has an answer set if and only if the given network of constraints is consistent.\n\n\nOur ASP formulation is elaboration tolerant in the sense that only a few rules are added to the main ASP program in order to \nincorporate disjunctive, default, soft, or negative constraints, or to ensure that the generated regions are connected. \n\n\n\\section{Goal and Current Status of the Research} \n\n\nThe objective of this thesis is to develop a generic formal framework to represent and reason about qualitative spatial relations. \nIn the first step, we have studied directional relations and taken the Cardinal Directional Calculus as a starting point.\nWe have formulated the CDC consistency checking problem in ASP. Then we have extended CDC with new sorts of constraints which involve defaults, preferences, and negation, using ASP~\\cite{izmirlioglu018}. We call this extended version of CDC nonmonotonic CDC (nCDC).\n\nCurrently, we are working on a further extension of nCDC with a qualitative distance relation. We name this extension nCDC+. Preferences, disjunctive and default constraints can also be expressed in nCDC+.\n\n\nFor CDC, nCDC, and nCDC+, we aim to introduce a general framework to solve consistency checking problems,\naddress composition and inversion of qualitative spatial relations, infer unknown or missing relations between objects, and\nfind a suitable configuration of objects that fulfills the given spatial constraints in the inquiry.\n\nWe have illustrated the benefits of our methods for reasoning over nCDC constraints in \\cite{izmirlioglu018} with the example scenarios mentioned in the introduction. \nWe have evaluated the efficiency of our approach for consistency checking in nCDC with experiments on benchmark instances.\nFor this purpose, a variety of problem instances over different domains have been prepared.\nFor every instance, grounding time, total computation time and program size have been recorded. Observed values are compared across input parameters and domains. We plan to perform these applications and experiments for nCDC+ as well.\n\n\\vspace{0.1in}\n\n\\section{Future Work}\n\nOur agenda for future work consists of the following items:\n\\begin{itemize}\n\\item \\textbf{Experimental evaluation for nCDC+:} We plan to create benchmark instances with nCDC+ networks that include directional and distance constraints, run experiments, and evaluate the results with respect to computation time. 
\n\\item \\textbf{Applications of nCDC+:} We plan to revise the example scenarios in the introduction for nCDC+ and apply our ASP-based methods to solve them.\n\\end{itemize}\n\n\n\\bibliographystyle{eptcs}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection*{Introduction}\n\\vspace*{-0.3cm}\nTo solve for the VLBI polarisation calibration terms one needs to\nremove the rotation of the feed. Until now the only feed terms\nsupported in AIPS, and that could be solved for, were the Cassegrain\nand Equatorial types. The Giant mm-VLBI array (GMVA), the European\nVLBI network (EVN) and the Australian Long Baseline Array (LBA)\nincluded feed types which were not supported in AIPS, that is, the\nNasmyth (Pico Veleta and Yebes) and EW-mount (Hobart) types. New\nadditions to the AIPS code, now included in the general distribution,\nwill allow the full polarisation calibration of these antennae, and\ntheir respective arrays. These routines have been used to calibrate\nthe LBA and to produce the first polarised VLBI images of Methanol Masers\n(Dodson 2008). Follow-up EVN and LBA observations are being made for\nthis project. Preliminary results from EVN network experiment N08K2\nshow successful feed angle calibration. Here we present recent results\nfrom test-time LBA experiment VX014, comparing the\npolarised flux seen in these observations with that of the MOJAVE survey\n(with the VLBA). \n\n\\subsection*{Yebes first VLBI fringes with the EVN}\n\\vspace*{-0.3cm}\nAs part of the first VLBI light of the new 40m antenna at Yebes,\nSpain, observations were performed at 22-GHz in the network monitoring\nexperiment N08K2. Fringes were obtained between Yebes and the other\nantennae. In Figure 1\nwe present the phase difference between the RCP and LCP data after the\nfeed angle phases have been corrected for, showing that this was\nsuccessful. It was not possible to take this analysis further, as the\namplitude calibration was poor, which makes the separation of the\ncontributions from source polarisation and instrument polarisation\ndifficult.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.8\\textwidth]{rrll.n08k2.vplot.2.eps}\n\\caption{RR\/LL Phase between Yebes and Effelsberg after feed rotation correction.}\n\\label{fig:yb}\n\\end{center}\n\\end{figure}\n\n\n\n\\subsection*{Comparisons of the LBA with ATCA}\n\\vspace*{-0.3cm}\nWe have used the new code to successfully map the polarisation\ndirection of the magnetic fields of the Methanol Masers in\nG339.88-1.26 (experiment V148).\nMethanol Masers, it has been claimed, lend important support to the\nmodel of the formation of Massive Stars from in-falling disks (Norris\net~al., 1993). The magnetic fields found via VLBI are not consistent\nwith the masers being in the disks, and therefore remove this strong\nsupport for the disk model.\nAs part of this analysis we compared the integrated spectral line\nfluxes from ATCA and LBA data, which showed agreement between the ATCA\ndetection of the polarised flux by velocity channel and that found \nwith the LBA. See Dodson (2008) for the figures and details of the analysis.\n\n\\subsection*{Comparisons of the LBA with VLBA}\n\\vspace*{-0.3cm}\nFour sources were observed in VX014, of which two are in the MOJAVE\nprogram (Lister et~al., 2005): the Monitoring Of Jets and AGNs with\nVLBA Experiments. We compared the images of these two sources, 3C273\nand 3C279, despite the differences in resolution and frequency between\nthe two observations. 
The LBA at 8.4-GHz has a resolution of\n$\\sim$3\\,mas and the VLBA at 15-GHz has a resolution of\n$\\sim$0.5\\,mas. Nevertheless, we smoothed the VLBA images to the\nresolution of the LBA, and amplitude-scaled the LBA data to match the\nfluxes, assuming a simple spectral index between the two\nfrequencies. With these simple corrections the total power images from\nthe two instruments are in very good agreement, giving us the\nconfidence to compare the polarisation.\n\nThe images in Figure 2\nshow the total power, with a log-scaled colour index. Overlaid are the\ncontours from the linear polarised flux for the LBA and VLBA data. The\nvectors are the field directions for the LBA data; these are in good\nagreement with those of the VLBA data. However, this is not totally\nindependent, as the VLBA image was used to provide the calibration in\nLPCAL. The D-term solutions for the two sources (assuming both a\npolarised source and an unpolarised source) were in agreement to a few\npercent, dominated by the errors in Mopra.\nIt is notable that, to first order, there is a good match between\nthe results from the LBA and the VLBA, but at second order there\nare clear differences. These will need further investigation, as it\nis possible that they are due to second-order effects from the\nvery high polarisation feed correction in the LBA calibration\nterms. Work is ongoing to improve these, particularly for Mopra,\nwhich were known to be anomalous.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.7\\textwidth]{3c.eps}\n \\caption\n{a) The polarised flux from 3C273, as observed by the LBA and\n the VLBA in 2008. The colour scale is the LBA total intensity, the\n black contours are the VLBA polarised intensity, the red those of\n the LBA, with the LBA polarisation vectors overlaid. b) The polarised\n flux from 3C279, as observed by the LBA and the VLBA in 2008. The\n colour scale is the LBA total intensity, the blue contours are the\n VLBA polarised intensity, the red those of the LBA, with the LBA\n polarisation vectors overlaid.}\n\\end{center}\n\\end{figure}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\nThe integration of neural and symbolic learning methods is high on the research agenda.\nIt is popular to use (variants of) logic programs to represent the available symbolic knowledge and use Prolog-like mechanisms to generate computation structures that can then be differentiated \\cite{manhaeve2018deepproblog,dai,rocktaschel2017end,yang2020neurasp,tensorlog,ondrej,difflog}.\nSeveral of these approaches also incorporate probability into these neural logic programming models, cf. \\cite{deraedt2020starnesy}.\nThere are several ways to incorporate probability into logic programs \\cite{de2015probabilistic}.\nMost notably, one distinguishes probabilistic from stochastic logic programs (PLPs vs SLPs).\nThe more popular PLPs are based on a possible worlds semantics (the so-called distribution semantics), which extends probabilistic graphical models, while the SLPs are based on stochastic grammars. 
\nThe difference can also be described as a random graph vs a random walk model.\nSo far, the emphasis in neurosymbolic computation has been on the PLP approach, especially \\cite{yang2020neurasp,manhaeve2018deepproblog,ondrej,efthymia}, with only Tensorlog \\cite{tensorlog} adopting the SLP semantics in an efficient but restricted Datalog or database setting that does not handle subsymbolic data such as images (see Section \\ref{sec:rw} for details). \n\nTo fill this gap, we introduce DeepStochLog, a neural stochastic logic programming approach.\nIt incorporates ideas from DeepProbLog such as the neural predicate.\nThe neural predicate encapsulates neural networks to cope, for instance, with subsymbolic data such as images. \nWithout loss of generality, we base DeepStochLog on stochastic definite clause grammars (SDCGs) as this notation is not only easier to introduce and use, but also results in a sequence-based model.\nSDCGs are a kind of probabilistic unification-based grammar formalism \\cite{have2009stochasticdcg}.\nHowever, SDCGs and SLPs are very closely related.\nSDCGs can be directly translated and executed as SLPs, and all the concepts we introduce for SDCGs can directly apply to SLPs as well. \nMore specifically, the key contributions of this paper are: 1) the introduction of the neural stochastic logic programming framework DeepStochLog;\n2) the introduction of inference and learning algorithms (through gradient descent) for DeepStochLog programs; and\n3) experimental results that show that DeepStochLog obtains state-of-the-art results on a number of \nchallenging tasks for neural-symbolic computation and that it is also\nseveral orders of magnitude faster than alternative approaches based on PLPs.\n\n\\section{Stochastic DCGs}\n\\label{sec:sdcg}\n\nA context-free grammar (CFG) $G$ is a 4-tuple $(V,\\Sigma,S, R)$, with $V$ the set of non-terminals, $\\Sigma$ the set of terminals, $S \\in V$ the starting symbol and $R$ a set of rewrite rules of the form $N \\rightarrow S_1, ... , S_k$ where $N$ is a non-terminal, the $S_i$ are either terminals or non-terminals.\nA probabilistic context-free grammar (PCFG) extends a CFG by adding probabilities to the rules $R$, i.e., the rules take the form $p_i :: N \\rightarrow S_1, ... , S_k$, where $p_i$ is a probability.\nFurthermore, the sum of the probabilities of rules with the same non-terminal $N$ on the left-hand side equals 1. \nWe use list notation for sequences of terminals such as $[cat]$ and $[the, cat]$.\nWhereas CFGs define whether a sequence can be parsed, PCFGs define a probability distribution over possible parses. This allows for the most likely parse to be identified.\nAn example PCFG, representing single digit additions, is shown on the left of Example \\ref{ex:pcfgsdcg}.\n\nDefinite clause grammars (DCGs) are a well-known logic programming-based extension of CFGs \\cite{pereira1980dcg}.\nDCGs can represent context-sensitive languages and are unification-based. \nThey differ from CFGs in that logical atoms are used instead of the non-terminals. \nAn {\\em atom} $a(t_1, ...,t_n)$ consists of a predicate $a$ of arity $n$ followed by $n$ terms $t_i$. \nTerms are either constants, logical variables or structured terms of the form $f(t_1, ... , t_k)$ with $f$ a functor and $t_j$ terms. 
\nThe production rules are called definite clause grammar rules because they can be directly translated to a set of definite clauses (i.e., Horn clauses with exactly one positive literal) and can be executed as a Prolog program using SLD-resolution.\nThe right-hand sides of DCG rules are also allowed\nto contain queries to Prolog predicates $q_i$ between curly brackets \\texttt{\\{$q_1(t_{1,1},...,t_{1,m_1}),...,q_n(t_{n,1},...,t_{n,m_n})$\\}} to impose further constraints and perform additional computations during the inference process.\nThese are to be considered atoms as well.\nDCGs use substitutions $\\{V_1=t_1, ..., V_n=t_n\\}$, which are sets of variable\/term pairs, to unify atoms with heads from the rules.\nApplying a substitution to an atom $a$ yields the atom $a\\theta$ where all variables $V_i$ have been replaced by their corresponding terms $t_i$. $\\theta$ is a unifier of an atom $s$ and an atom $u$ if and only if $s\\theta = u\\theta$.\nFor more information, see standard textbooks on logic programming such as \\cite{Flach1994simplylogical,sterling1994artofprolog}.\n\nStochastic definite clause grammars (SDCGs) extend DCGs by associating probabilities to the rules, just as PCFGs extend CFGs \\cite{have2009stochasticdcg}.\nLike PCFGs, SDCGs require that the sum of the probabilities for the rules defining a single non-terminal predicate equals 1. SDCGs also correspond directly to stochastic logic programs (SLPs) \\cite{cussens2001parameterestimationslp,muggleton1996slp,muggleton2000learningslp},\nwhich are well known in the probabilistic (logic) programming community \\cite{de2015probabilistic}. An example SDCG is shown in Example \\ref{ex:pcfgsdcg} on the right.\n\n\n\nThe inference task in (S)DCGs consists of deriving a sequence of terminals from a goal (which often captures the starting symbol of the grammar). \nSLD-derivations are used for this. \nMore formally, in an SLD-derivation for a DCG, a {\\em goal} $g_1, ... ,g_n$ is a sequence where each $g_i$ is either a logical atom (a non-terminal) or a list containing terminals and logical variables.\nAn SLD-derivation is shown in Example \\ref{ex:derivation_example} and uses several resolution steps.\nApplying {\\em resolution} to a goal $g_1, ... ,g_n$ and a definite clause grammar rule $n \\rightarrow t_1, ... , t_k$ yields the goal \n$g_1\\theta, ... , g_{i-1} \\theta, t_1 \\theta, ... , t_k\\theta, g_{i+1}\\theta, ... , g_n\\theta$ \nprovided that $g_i$ is the leftmost atom in the goal (so $g_1, ... , g_{i-1}$ are terminal symbols), $\\theta$ is the unifier of $g_i$ and $n$, i.e., $g_i \\theta = n\\theta$.\\footnote{When a Prolog query $q$ is the first non-terminal to occur in a goal during the derivation process, the query is executed in Prolog, possibly yielding an answer substitution $\\theta$ such that $q\\theta$ is true. 
For instance, in Example \\ref{ex:pcfgsdcg}, there is the Prolog query $N~ \\mathtt{is}~ N1 + N2$ which computes $N$ as the sum of $N1$ and $N2$.\nIn this paper, we assume that when such a query is called there is at most one answer substitution that is true. If there were more such substitutions, we would have to introduce a probability for such substitutions in the SDCG case, which unnecessarily complicates the semantics.}\nWe write this as $g_1, ... ,g_n \\vdash t_1 \\theta, ... , t_k\\theta, g_2\\theta, ... , g_n\\theta$ (shown here for the case $i=1$). \nA derivation $d(G)$ is then the repeated application $G \\vdash G_1\\theta_1 \\vdash G_2\\theta_1\\theta_2 \\vdash ... \\vdash G_n\\theta_1\\theta_2 ... \\theta_n$ of such resolution steps onto a goal $G$. \nWe will write $G \\vdash^* G_n$.\nSuccessful derivations of a goal end in a sequence $T$ that consists only of terminal symbols; see Example \\ref{ex:derivation_example} for an example. \nWe will write that $d(G) = T$ and also say that $derives(G\\theta,T)$ is true, with $\\theta=\\theta_1 \\theta_2 ... \\theta_n$ the answer substitution. \nA successful derivation corresponds to a proof.\nThe set of all possible proofs can be depicted using SLD-trees; see Figure~\\ref{fig:inference_sld}.\n\n\nThe probability $P(d(G))$ of a derivation $d(G)$ is the product of the probabilities $\\prod p_i^{m_i}$ of the rules $i$ used in the derivation, with $m_i$ the number of times rule $i$ was used.\nAn important difference between the probability of a parse in a PCFG and a derivation in an SDCG is that there can be a loss of probability mass in the latter whenever a derivation fails.\nDerivations can fail when there are non-terminals in the goal that do not unify with the heads of any of the rules.\nThis is due to unification and is different from (P)CFGs, where non-terminals can always be resolved using rules for that non-terminal.\nNon-terminating derivations can also lead to a loss of probability mass.\nObserve that every $G$ in this way induces a probability distribution $P_G$ over possible derivations $d(G)$. \nThe goal can consist of one or more atoms, but in general for parsing, this will typically be the starting symbol or atom of the grammar.\nThis in turn lets us define the probability $P_G(derives(G,T))$\nof a terminal sequence $T$ relative to the goal $G$ as $\\sum_{d_i(G\\theta)= T} P_G(d_i(G\\theta))$, i.e. the sum of the probabilities of all derivations $d_i(G\\theta)$ that result in the terminal sequence $T$ with answer substitution $\\theta$. In a similar way, this allows us to define the probability of an answer substitution $\\theta$ relative to a goal $G$ and sequence $t$ as $P_G(derives(G\\theta,t\\theta))$, where $t$ could contain variables. For ease of notation, when the goal is clear, we shall omit the subscript $G$.\nNotice that if there are failing derivations, the total probability mass assigned to all sequences of terminals for a goal $G$ may be strictly less than 1. This is discussed at length by \\cite{cussens2001parameterestimationslp}.\nIt is possible to obtain normalized probabilities by calculating the normalization constant, but this is computationally expensive.\nWe avoid this normalization in the present paper: since in practice the goal is often to find the most likely derivation $d_{max}(G,T) = \\arg\\max_{d(G) = T} P_G(d(G))$, non-normalized probabilities usually suffice. Notice that an SDCG defines a parametric probability distribution, where the parameters $p$ of the distribution are the vector of probabilities of the rules. 
When we need to refer to these probabilities we write $P_G(derives(G,T); p)$.\n\n\\begin{example}[Derivations]\n\\label{ex:derivation_example}\nConsider the following successful derivation using the SDCG in Example \\ref{ex:pcfgsdcg} for the goal $G = [e(X)]$, the answer substitution $\\theta = \\{X\/2\\}$ and the terminal sequence $T = [2, +, 0]$.\\newline\\noindent\n\\begin{minipage}{0.7\\textwidth}\n \\begin{align*}\n e(X) &\\vdash e(N1), [+], n(N2), \\{X \\;\\text{is}\\; N1 + N2\\} & \\theta_1 = \\{\\} \\\\\n &\\vdash n(N1), [+] , n(N2), \\{X \\;\\text{is}\\; N1 + N2\\} & \\theta_2= \\{\\} \\\\\n &\\vdash [2, +] , n(N2), \\{X \\;\\text{is}\\; 2 + N2\\} & \\theta_3 = \\{N1\/2\\} \\\\\n &\\vdash [2,+,0], \\{2 \\;\\text{is}\\; 2 + 0\\} & \\theta_4 = \\{X\/2, N2\/0\\} \n \\end{align*}\n \\end{minipage}\n \\begin{minipage}{0.25\\textwidth}\n \\begin{align*}\n p = 0.5 \\\\\n \\times 0.5 \\\\\n \\times 0.1\\\\\n \\times 0.1\n \\end{align*}\n \\end{minipage}\n\\end{example}\n\n\\section{DeepStochLog}\n\nDeepStochLog integrates neural networks and SDCGs by introducing neural definite clause grammars (NDCG).\nMore formally, DeepStochLog allows for specifying an SDCG that additionally supports {\em neural definite clause grammar rules}, or neural rules for short. These are statements of the form:\n\\[\n nn(m,[I_1, ... , I_k],[O_1, ... ,O_L],[D_1, ... , D_L]) :: nt \\rightarrow g_1, ... , g_n\n\\]\nwhere $nt$ is an atom, \n$g_1, ... , g_n$ is a goal,\nand the $I_1, ... , I_k$ and $O_1, ..., O_L$\nare variables occurring in $g_1, ... , g_n $ and $nt$. The $D_i$ are unary predicates defining the domain of the output variables $O_i$.\nThe $nn$ declaration states that $m$ is a neural network that takes the variables $I_1, \\ldots, I_k$ as input and outputs a probability distribution over the output variables $O_1, ..., O_L$ (i.e., a probability distribution over the cross product of the domains specified by the $D_i$). It thus maps an input substitution $\\sigma$ for the variables $I_1, ..., I_k$ to a set of output substitutions $\\theta_j$ with probability $p_j$.\nThe neural rule serves as a template. For every input substitution $\\sigma$, the template $(nt \\rightarrow g_1, ... , g_n) \\sigma$\ndefines the set of instantiated stochastic definite clause grammar rules \n$p_j:: (nt \\rightarrow g_1, ... , g_n)\\sigma\\theta_j$. \n\n\\begin{example}[Neural definite clause grammar rules]\n\\label{ex:mnist_addition}\nConsider the SDCG in Example~\\ref{ex:pcfgsdcg}. We can replace the $n(X)\\rightarrow[X]$ rules with the following neural rule\n\\begin{equation*}\nnn(mnist,[Mnist],[N],[digit]) :: n(N) \\rightarrow [Mnist].\n\\end{equation*}\nHere, the neural network called $mnist$ takes as input the $Mnist$ image and returns a probability for every possible digit between 0 and 9, indicating how likely each digit is for the given MNIST image \\cite{mnist}. The predicate $digit$ is defined by the facts $digit(0), digit(1), ... , digit(9)$. \nGiven the neural network and the input substitution $\\sigma =\\{ \\texttt{Mnist =} \\digit{0} \\}$ (which could be obtained through unification with the terminal sequence), the neural network could \ngenerate the output substitutions $\\theta_0 = \\{ N = 0\\}$; ... ; $\\theta_9 = \\{ N = 9\\}$; with probabilities $0.87$; ... ;$0.05$. 
\nThus, the neural rule with the input substitution $\\sigma =\\{ \\texttt{Mnist } = \\digit{0} \\}$ denotes the following set of grammar rules: \n$0.87::n(0) \\rightarrow [\\digit{0}];~~~\\ldots;~~~$ $0.05::n(9) \\rightarrow [\\digit{0}]$\n\\end{example}\nThe neural rules are reminiscent of the neural predicates in DeepProbLog \\cite{manhaeve2018deepproblog}, which also encapsulate a neural network that outputs a distribution over a number of alternatives. \nIt is worth analyzing how a neural rule behaves w.r.t. the neural inputs (e.g. images). In fact, a neural rule defines a probability distribution over the values of the output variables \\textit{given} the neural inputs; the distribution of the inputs themselves is not modeled in the program. This \\textit{conditional} setting is akin to \\textit{conditional} PCFGs \\cite{riezler-etal-2002-parsing,sutton2006introduction} and it is a common modeling strategy in discriminative parsing\\footnote{DeepStochLog could also be used to define generative grammars on subsymbolic inputs (e.g. images) if provided with neural models that can provide a joint probability distribution of both outputs and images. This is also discussed in \\cite{manhaeve2021deepproblogjournal} but will not be further analyzed in the current paper.}.\n\n\n\\section{Inference in DeepStochLog} \\label{sec:inference}\n\nThe goal of inference is to compute the probability $P_G(derives(G,T))$ for a given goal $G$ and (possibly unknown) sequence $T$. This is divided into two steps, a logical and a probabilistic one, which we now explain.\n\\paragraph{Logical inference}\nGiven a DeepStochLog program, a goal $G$ and a (potentially unknown) sequence $T$, logical inference uses resolution to answer $derives(G, T)$\\footnote{This is sometimes called $phrase$ or $sentence$ in actual Prolog implementations and it requires an automatic syntactical translation of a DCG into Prolog. We show an example in Appendix \\ref{app:translation-example}.}. This corresponds to finding all the possible derivations for $G$ that result in a terminal sequence $T$.\nThe resolution process is then turned into a compact AND-OR circuit, which represents all possible derivations and will be the input for the probabilistic inference.\nThe logical inference procedure is illustrated in Figure \\ref{fig:inference_example}, where the SLD resolution tree for the given goal is on the left and its corresponding AND-OR circuit on the right. The translation to the AND-OR circuit is straightforward. It has exactly the same structure as the SLD-tree. For every resolution step with a rule $p_i::r_i$, an AND node is added. Furthermore,\nfor a normal SDCG rule, the corresponding probability $p_i$ is added,\nand for a neural grammar rule, there is a call to the neural network that returns the probability $p_m$.\nWhenever there are two (or more) branches in the SLD tree for a goal, an OR node is added. 
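Before turning to the evaluation of such circuits, here is a minimal Python sketch (ours, not the DeepStochLog implementation) of an AND-OR circuit for $derives(e(1), [\\digit{0}+\\digit{1}])$, with two derivations corresponding to $(N1,N2)=(0,1)$ and $(1,0)$; the per-digit leaf probabilities stand in for neural outputs and are made-up numbers.
\\begin{verbatim}
# AND nodes multiply their children, OR nodes sum them
def AND(*children): return ("and", children)
def OR(*children):  return ("or", children)

def evaluate(node):
    if isinstance(node, float):          # leaf: a rule probability
        return node
    kind, children = node
    vals = [evaluate(c) for c in children]
    if kind == "and":
        out = 1.0
        for v in vals:
            out *= v
        return out
    return sum(vals)                     # "or"

# two derivations of 1 = N1 + N2; 0.5-leaves are the SDCG rules,
# the other leaves mock neural network outputs
circuit = OR(AND(0.5, 0.5, 0.87, 0.80),  # n -> 0 then n -> 1
             AND(0.5, 0.5, 0.05, 0.10))  # n -> 1 then n -> 0
print(evaluate(circuit))                 # P(derives(e(1), [img0,+,img1]))
\\end{verbatim}
Replacing the sum in the \\texttt{or} case by \\texttt{max} yields the probability of the most probable derivation instead, i.e., the $(\\max, \\times)$ semiring evaluation mentioned below.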
Notice that all the leaves are either probabilities given as parameters or the result of a neural call.\n\n\nDuring SLD resolution, many identical intermediate goals may be proved multiple times, which results in an explosion of inference time.\nTo avoid proving the same goals repeatedly, we use SLG resolution~\\cite{chen1996slgresolution}, which plays a role similar to that of the dynamic programming CYK algorithm for CFGs~\\cite{kasami1966efficient}.\nTabling using SLG resolution is a standard logic programming technique that memoizes the answers of predicates by storing the evaluations in tables.\nThis technique is incorporated in Prolog implementations such as XSB, SWI-Prolog and Prism \\cite{sato1997prism}.\nThe important difference with SLD resolution is that the result is not a single derivation tree, but rather a forest, where certain parts are re-used for multiple derivations thanks to tabled evaluations.\nThe effect of tabling is carried over to the creation of the AND-OR tree. Each time a derivation is reused from the table, its corresponding node is returned and linked to the new derivation. Thus, the AND-OR circuit also turns into a forest. \nWhen a goal admits a finite set of answers, we can resolve it only once and cache the corresponding AND-OR tree.\n\n\\paragraph{Probabilistic inference.}\nWith probabilistic inference, we refer to the task of calculating the probability $P(derives(G, T)) = \\sum_{d(G\\theta)= T} P(d(G\\theta)) = \\sum_{d(G\\theta)= T} \\prod_{r_i \\in d(G\\theta) } p_i^{m_i} $, \ni.e., the sum of the probabilities of all derivations for a given $G$ that result in a given terminal sequence $T$ and answer substitution $\\theta$.\nThanks to SLG resolution and tabling, the shared sub-structure of many derivations is explicit in the AND-OR circuit obtained from the logical inference.\nThis obviates the need for a specialized algorithm, like the \\textit{inside} algorithm used in the probabilistic extension of CYK. \nComputing the probability $P(derives(G\\theta, T))$ is just a bottom-up evaluation of the AND-OR circuit where AND-nodes are substituted by multiplications and OR-nodes by summations, i.e., compiling the logical circuit to an arithmetic circuit using the $(+, \\times)$ semiring~\\cite{kimmig2011aproblog}.\nAnalogously, the most probable derivation for the goal $G$ is found with the $(\\max, \\times)$ semiring.\n\n\n\n\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[t]{0.58\\linewidth}\n \\includegraphics[width=\\linewidth]{figures\/sld_tree.pdf}\n \\caption{The SLD tree for $derives(e(1),[\\digit{0}+\\digit{1}])$. Failing branches are omitted. 
\n Notice that only the left-hand branch derives the correct parse of the images.\n }\n \\label{fig:inference_sld}\n \\end{subfigure}\\hfill\n \\begin{subfigure}[t]{0.39\\linewidth}\n \\includegraphics[width=\\linewidth]{figures\/and_or}\n \\caption{AND-OR circuit for $derives(e(1), [\\digit{0}+\\digit{1}])$ }\n \\label{fig:inference_and_or}\n \\end{subfigure}\n \\caption{The different steps of inference on an example grammar.}\n \\label{fig:inference_example}\n\\end{figure}\n\n\n\\section{Learning in DeepStochLog}\n\nLearning in DeepStochLog is achieved by optimizing the parameters of the neural networks and the rule probabilities of the logic program itself.\nLet us consider a dataset of triples $\\mathcal{D} = \\{(G_i\\theta_i, T_i, t_i)\\}$, where $G_i$ is a goal, $\\theta_i$ a substitution for $G_i$, $T_i$ a sequence of terminals and $t_i$ a target probability.\nLet us also consider a DeepStochLog program parameterized by the vector $p$ of rule probabilities.\nLearning in DeepStochLog is defined as the following optimization problem, with $\\mathcal{L}$ being any differentiable loss function:\n\\begin{equation}\n\\label{eq:learning}\n \\min_{p} \\sum_{(G_i\\theta_i, T_i, t_i) \\in \\mathcal{D}} \\mathcal{L}\\bigg (P_G(derives(G_i\\theta_i,T_i);p), t_i\\bigg)\n\\end{equation}\nRepresenting the dynamic programming computation for the \\textit{inside} probability in terms of an arithmetic circuit has an important advantage. In fact, the corresponding computational graph is differentiable and the derivatives of the loss function $\\mathcal{L}$ w.r.t. the probabilities $p$ can be computed automatically using out-of-the-box differentiation frameworks. Moreover, when the probabilities $p$ are computed by a neural network, \nas is the case for a neural grammar rule, the gradients can be seamlessly backpropagated to the network to train its internal parameters.\nWe solve the learning problem using standard gradient descent techniques from deep learning, e.g. the Adam optimizer \\cite{kingma2014adam}.\n\n\n\n\n\n\n\n\nOne interesting case is when the loss function $\\mathcal{L}$ is the negative log-likelihood, as it brings DeepStochLog into the standard learning scenario for probabilistic grammars. Here, the optimization problem is usually carried out in the expectation-maximization (EM) framework. Given an \\textit{inside} algorithm, a corresponding \\textit{outside} algorithm is designed to extract the expected counts of the various grammar rules from data (E-step) and then the counts are used to update the probabilities (M-step). Most of the developed inside-outside algorithms are tailored to a specific formalism. However, the gradient descent approach of DeepStochLog on the negative log-likelihood is actually equivalent to the EM approach, but it does not require the explicit definition of the corresponding outside algorithm. 
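The following minimal PyTorch sketch (ours; it assumes the toy grammar above rather than DeepStochLog's actual implementation) illustrates this: the circuit value is an ordinary differentiable expression of the rule probabilities, so the negative log-likelihood can be minimized directly with a gradient-based optimizer.
\\begin{verbatim}
import torch

# trainable probabilities for the rules n -> [0], ..., n -> [9];
# a softmax keeps them summing to 1, as an SDCG requires
logits = torch.zeros(10, requires_grad=True)
optimizer = torch.optim.Adam([logits], lr=0.1)

def p_derives_2_plus_0():
    p = torch.softmax(logits, dim=0)
    # AND-OR circuit of the single derivation of [2,+,0]:
    # the two 0.5-rules and the two digit rules that were used
    return 0.5 * 0.5 * p[2] * p[0]

for _ in range(100):
    optimizer.zero_grad()
    loss = -torch.log(p_derives_2_plus_0())  # NLL of one observed pair
    loss.backward()    # backward pass through the circuit (E-step)
    optimizer.step()   # parameter update (M-step)
\\end{verbatim}
With a neural rule, the probabilities $p$ would instead be produced by the network from the input images, and the same backward pass would continue into its weights.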
In fact, the gradients obtained by the backward pass through the AND-OR circuit have been shown to actually compute the outside probabilities (E-step), while the gradient descent step is used to update the parameters (M-step) \\cite{salakhutdinov2003optimization,berg2010painless,eisner2016inside}.\n\n\n\\section{Evaluation}\n\n\n\\subsection{Research Questions}\nThe goal of our experiments is to answer the following questions:\n\\begin{itemize}\n \\item[\\textbf{Q1}] Does DeepStochLog reach state-of-the-art predictive performance on neural-symbolic tasks?\n \n \n \\item[\\textbf{Q2}] How does the inference time of DeepStochLog compare to other neural-symbolic frameworks, and what is the role of tabling?\n \n \\item[\\textbf{Q3}] Can DeepStochLog handle larger-scale tasks?\n \n \\item[\\textbf{Q4}] Can DeepStochLog go beyond grammars and encode more general programs?\n \n\n\\end{itemize}\n\n\n\\subsection{Tasks}\n\\label{sec:tasks}\n\nWe specify the tasks used in this paper. Complete details are given in Appendix~\\ref{sec:details}.\n\n\\textbf{T1: MNIST Addition.}\nIn the MNIST Addition task \\cite{manhaeve2018deepproblog}, the model is given two sequences of $N$ MNIST images of handwritten digits \\cite{mnist}, each sequence representing an $N$-digit number.\nThe task is to predict the sum of these numbers ($\\digit{3}\\digit{1}+\\digit{2}\\digit{5}=56$). \nThe training data only contains the two image sequences and the sum of the corresponding numbers, thus not providing the digit labels of the individual images.\nThe datasets for each digit length use all 60K MNIST images exactly once. \n\n\\textbf{T2: Handwritten Formulas.}\nIn the Handwritten Formulas (HWF) task, the goal is to solve mathematical expressions, where both digits and operators (addition, subtraction, multiplication and division) are images of handwritten characters, like $\\hwf{9}~\\hwf{div}~\\hwf{2_1}~\\hwf{_}~\\hwf{7}$.\nLike T1, the data of T2 only contains the outcome of the expression and the sequence of images.\nFor this task, we use the HWF dataset introduced in \\cite{li2020ngs}, which contains 10000 expressions of lengths 1, 3, 5 and 7. \nUnlike the original paper, we do not consider a curriculum learning setting here, and split the dataset into 4 separate parts by length.\n\n\\textbf{T3: Well-formed Parentheses.} We introduce the Well-formed Parentheses task, where the model is asked to recognize image sequences that represent well-formed parentheses.\nWell-formed parentheses form a classic context-free language where $\\Sigma = \\{(, )\\}$, and $R=\\{s\\rightarrow() \\rvert ( s ) \\rvert s s \\}$, i.e., all open brackets are closed in the right order.\nAs images, we use the zeros from MNIST as ``('' and ones as ``)'', and generate 1000 well-formed parenthesis sequences without labels as training data.\nThe goal is to predict the most probable parse of the bracket sequence. \n\n\n\\textbf{T4: Context-Sensitive Grammar.} Since DCGs support context-sensitive grammars, we created a dataset of 2000 image sequences representing the canonical context-sensitive language $a^n b^n c^n$.\nSince each sequence of length $3n$ only has one valid parse, we increased the difficulty by allowing permutations such as $b^n a^n c^n$.\nWe also generated 2000 negative examples, i.e., random sequences of the form $a^k b^l c^m$, and permutations like $c^k a^l b^m$, such that \n$k$, $l$, $m$ are all larger than 1, sum to a multiple of 3 and are not all the same number. 
\nThe goal of the task is to recognize whether the input sequence is a (permuted) $a^n b^n c^n$ sequence or one of the negative examples.\n\n\\textbf{T5: Semi-supervised classification in citation networks.} Given a set of scientific papers represented as bag-of-words and their citation network, the goal is to assign the correct class to a large test set of documents by having access only to the true labels of a small training set. The intuition is that one can infer the class of a paper not only from the features of the document but also from the classes of its neighbors. This task is interesting from a neural symbolic perspective because one must be able to use both the features of the documents and the symbolic network. Two well-known datasets for this task are Cora\\footnote{On the Cora dataset, another common task is link prediction, which is a purely symbolic task (i.e. there are no subsymbolic features on the documents).} (2708 nodes and 5429 edges) and Citeseer (3327 nodes and 9228 edges) \\cite{sen2008collective}.\n\n\\textbf{T6: Word Algebra Problems.}\nIn this task, a natural language text describes a word algebra problem (e.g., \\textit{\"Mark has 6 apples. He eats 2 and divides the remaining among his 2 friends. How many apples did each friend get?\"}).\nThe dataset for this task contains 300 training instances and was introduced in \\cite{roy2016solving}.\nEach text contains 3 numbers, and all numbers have to be used exactly once in a formula containing addition, subtraction, multiplication and division.\nThe task is to predict the right numerical answer to the expression implied by the textual description.\n\n\\subsection{Results}\n\\label{sec:results}\n\nFor all experiments and metrics, we report the mean accuracy (or the mean most likely parse accuracy where applicable, i.e., \\textbf{T1}, \\textbf{T2} and \\textbf{T3}) and standard deviation over 5 runs. We report ``timeout'' if any one of these 5 runs took more than 1 hour to execute.\n\n\n\\paragraph{Q1: Performance of DeepStochLog}\nWe first investigate whether DeepStochLog achieves state-of-the-art results compared to similar neural-symbolic frameworks.\nTable~\\ref{tab:mnist_accuracies} shows the result for the MNIST addition task (\\textbf{T1}), for training and testing on lengths 1 to 4.\nIt shows that DeepStochLog performs similarly to the DeepProbLog \\cite{manhaeve2018deepproblog} and NeurASP \\cite{yang2020neurasp} frameworks but scales to larger sequences.\nTable~\\ref{tab:hwf_accuracies} shows the result on the HWF task (\\textbf{T2}).\nDeepStochLog performs similarly to NGS and DeepProbLog for expressions of length 1 and 3.\nStarting from expression length 5, it becomes infeasible to train DeepProbLog.\nNGS \\cite{li2020ngs} can still be trained, but for expression length 7, some runs fail to converge.\nDeepStochLog, however, performs well for all expression lengths.\nTable~\\ref{tab:bracket_accuracies} shows the accuracy on task \\textbf{T3}.\nHere we can see that both DeepStochLog and DeepProbLog achieve high accuracy, but DeepStochLog reaches a slightly higher accuracy for longer sequence lengths.\nFor task \\textbf{T6}, DeepStochLog and DeepProbLog achieve a similar accuracy of $94.8 \\pm 1.1$ and $94.2 \\pm 1.4$ respectively. We also compare to $\\delta$4 \\cite{riedel2017programming}, but the authors only report the maximum accuracy reached.\nFor all three frameworks, the maximum accuracy reached is $96.0$. 
To conclude, DeepStochLog is able to achieve similar or better performance compared to other state-of-the-art neural-symbolic frameworks.\n \n\\begin{table}[t]\n\\centering\n\\caption{The test accuracy (\\%) on the MNIST addition (\\textbf{T1}).}\n\\begin{tabular}{@{}lrrrr@{}}\n\\toprule\n & \\multicolumn{4}{c}{Number of digits per number (N)} \\\\\nMethods & 1 & 2 & 3 & 4 \\\\ \\midrule\nNeurASP & $97.3 \\pm 0.3$ & $93.9 \\pm 0.7$ & timeout & timeout \\\\\nDeepProbLog & $97.2 \\pm 0.5$ & $95.2 \\pm 1.7$ & timeout & timeout \\\\\nDeepStochLog & $97.9 \\pm 0.1$ & $96.4 \\pm 0.1$ & $94.5 \\pm 1.1$ & $92.7 \\pm 0.6$ \\\\\n\\bottomrule\n\\end{tabular}\n\n\\label{tab:mnist_accuracies}\n\\end{table}\n\n\n\n\\begin{table}[t]\n \\centering\n \\caption{The accuracy (\\%) on the HWF dataset (\\textbf{T2}). }%\n \\begin{tabular}{lrrrr}\n \\toprule\n & \\multicolumn{4}{c}{Expression length}\\\\\n Method & 1 & 3 & 5 & 7 \\\\\n \\midrule\n NGS & $90.2 \\pm 1.6$ & $85.7 \\pm 1.0$ & $91.7 \\pm 1.3$ & $20.4 \\pm 37.2$\\\\\n DeepProbLog& $90.8 \\pm 1.3$ & $85.6 \\pm 1.1$ & timeout & timeout\\\\\n DeepStochLog & $90.8 \\pm 1.0$ & \n $86.3 \\pm 1.9$ &\n $92.1 \\pm 1.4$ &\n $94.8 \\pm 0.9$ \n \\\\\n \\bottomrule\n \\end{tabular}\n \n \\label{tab:hwf_accuracies}\n\\end{table} \n\n\n\n\\begin{table}[t]\n \\centering\n \\caption{The parse accuracy (\\%) on the well-formed parentheses dataset (\\textbf{T3}).}\n \\begin{tabular}{lrrr}\n \\toprule\n & \\multicolumn{3}{c}{Maximum expression length}\\\\\n Method & 10 & 14 & 18 \\\\\n \\midrule\n DeepProbLog & $100.0 \\pm 0.0$ & $99.4 \\pm 0.5$ & $99.2 \\pm 0.8$ \\\\\n DeepStochLog & $100.0 \\pm 0.0$ & $100.0 \\pm 0.0$ & $100.0 \\pm 0.0$ \\\\\n \\bottomrule\n \\end{tabular}\n \n \\label{tab:bracket_accuracies}\n\\end{table} \n\n\\begin{table}[t]\n \\centering\n \\caption{The accuracy (\\%) on the $a^nb^nc^n$ dataset (\\textbf{T4}).}\n \\begin{tabular}{lrrr}\n \\toprule\n & \\multicolumn{3}{c}{Expression length}\\\\\n Method & 3-12 & 3-15 & 3-18 \\\\\n \\midrule\n DeepProbLog & $99.8 \\pm 0.3$ \n & timeout & timeout\\\\\n DeepStochLog & $99.4 \\pm 0.5$ & $99.2 \\pm 0.4$ & $98.8 \\pm 0.2$ \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:anbncn_accuracies}\n\\end{table} \n\n\n\n\\textbf{Q2: DeepStochLog scalability}\nWe now investigate whether DeepStochLog is more scalable than similar neural-symbolic frameworks. \nFirst, we observe that in the tasks \\textbf{T1}, \\textbf{T2}, \\textbf{T4} and \\textbf{T5}, DeepStochLog scales to settings or datasets that are infeasible for the competitors. \nNext, in Table~\\ref{tab:timing}, we compare the execution times for inference in task \\textbf{T1}.\nWe selected 100 queries from the training data and computed the average time required by the system to compute the probability of the query. We repeated the experiment for increasing number lengths.\nDeepStochLog is substantially faster than the competitors, especially for long numbers.\nThe advantage of DeepStochLog over the competitors is two-fold.\nFirstly, DeepStochLog is natively implemented on top of SLG resolution and tabling, which play a fundamental role in compactly representing derivations and SLD-trees.\nWe analyzed the impact of tabling in Table~\\ref{tab:tabling}, where we show the comparison between SLD and SLG resolution in DeepStochLog. In particular, we compared the resolution time required to find all the possible answers for expressions of variable lengths (on task \\textbf{T2}). 
\nSecondly, DeepStochLog is based on a random walk semantics, which is computationally cheaper than the possible world semantics exploited by DeepProbLog and NeurASP. \n\n\\begin{table}[t]\n\\begin{minipage}{0.45\\linewidth}\n\\centering\n\\caption{\\textbf{Q3} Accuracy (\\%) of the classification on the test nodes on task \\textbf{T5}}\n\\label{tab:results_citation}\n\n \\begin{tabular}{l r r }\n \\toprule\n \\textbf{Method} & \\textbf{Citeseer} & \\textbf{Cora}\\\\%[0.05em]\\hline \\\\[-0.8em]\n \\midrule\n ManiReg & $60.1$ & $59.5$\\\\\n SemiEmb & $59.6$ & $59.0$\\\\\n LP & $45.3$ & $68.0$\\\\\n DeepWalk & $43.2$ & $67.2$\\\\\n ICA & $69.1$ & $75.1$ \\\\\n GCN & $70.3$ & $81.5$ \\\\%[0.05em]\\hline \\\\[-0.8em]\n \\midrule\n DeepProbLog & timeout & timeout \\\\\n DeepStochLog \n & $65.0$\n & \n $69.4$ \\\\\n \\bottomrule\n \\end{tabular}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{0.50\\linewidth}\n\\centering\n\\caption{\\textbf{Q2} Parsing time in seconds (\\textbf{T2}). Comparison of DeepStochLog with and without tabling (SLD vs SLG resolution).}\n\\label{tab:tabling}\n\\begin{tabular}{@{}rrrr@{}}\n\\toprule\n\\textbf{Lengths} & \\textbf{\\# Answers} &\\textbf{ No Tabling} & \\textbf{Tabling} \\\\ \n\\midrule\n1 & 10 & $0.067$ & $0.060$ \\\\\n3 & 95 & $0.081$ & $0.096$\\\\\n5 & 1066 & $3.78$ & $0.95$\\\\\n7 & 10386 & $30.42$ & $10.95$\\\\\n9 & 68298 & $1494.23$ & $132.26$\\\\\n11 & 416517 &timeout& $1996.09$\\\\\n \\bottomrule\n\\end{tabular}\n\\end{minipage}\n\\end{table} \n\n\\begin{table}[t]\n\\centering\n\\caption{Inference times in milliseconds for DeepStochLog, DeepProbLog and NeurASP on task \\textbf{T1} for variable number lengths.}\n\\label{tab:timing}\n\\begin{tabular}{@{}lcccc@{}}\n\\toprule\nNumbers Length & 1\n& 2 & 3 & 4 \\\\ \\midrule\nDeepStochLog & $1.3 \\pm 0.9$ & $2.3 \\pm 0.4$ & $4.0 \\pm 0.4$ & $5.7 \\pm 1.8$ \\\\\nDeepProbLog & $13.5 \\pm 3.0$ & $36.0 \\pm 0.5$ & $199.7 \\pm 14.0$ & timeout \\\\\nNeurASP & $9.2 \\pm 1.4$ \n & $85.7 \\pm 22.6$ \n & $158.2 \\pm 47.7$ & timeout \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\n\\textbf{Q3: Larger scale relational datasets}\nThe complexity of many of the previous experiments comes from the large number of derivations for a single goal, while the number of subsymbolic inputs (e.g. images) in a single relational example was quite limited.\nHere, we focus on task \\textbf{T5}, i.e., semi-supervised classification in citation networks, where the complexity mainly comes from the large number of elements of the single relational example, i.e., the citation network.\nThis task is usually out of the scope of (neural) PLP approaches because there is a single large relational example and the possible world semantics is prohibitive in this scenario.\nWe compare against the following baselines: label propagation (LP) \\cite{zhu2003semi}, semi-supervised embedding (SemiEmb) \\cite{weston2012deep}, manifold regularization (ManiReg) \\cite{belkin2006manifold}, skip-gram based graph embeddings (DeepWalk) \\cite{perozzi2014deepwalk}, ICA \\cite{getoorICA} and GCN \\cite{kipf2016semi}. All these baselines are specific to the semi-supervised classification task, while DeepStochLog is a much more general framework. Finally, we tried to compare with DeepProbLog, which, however, does not scale to the size of this problem due to the different probabilistic semantics. Results are reported in Table \\ref{tab:results_citation}. 
DeepStochLog compares similarly or favorably to most of the other methods, even though it is the only one that has not been developed for this specific task. However, it still underperforms ICA and GCN; these methods, though, use extra knowledge as input to the classifier in the form of precomputed or learned features of the neighbors of a document, which is very useful for this task but not considered in the DeepStochLog experiment.\nAdding or learning relational features for input to the neural modules is, however, an interesting future direction.\n\n\n\n\n\n\n\\paragraph{Q4: General programs in DeepStochLog}\nEven though DeepStochLog naturally represents grammars for parsing sequences, NDCGs with Prolog goals are a powerful formalism to express more complex relational problems and programs. In fact, both tasks \\textbf{T5} and \\textbf{T6} were solved with programs that depart from the pure grammar formalism and are more like general logic programs. We provide the complete models in Appendix \\ref{sec:details}. The main ingredients are (neural) empty production rules, sometimes referred to as non-consuming or $\\epsilon$-production rules. They allow taking probabilistic decisions, possibly involving neural networks, without consuming any element of the sequence, as shown in Example~\\ref{ex:empty_productions}. This also shows that DeepStochLog has the full power of stochastic logic programs.\n\n\\begin{example}[Empty productions]\n\\label{ex:empty_productions}\nWe show a variant of the MNIST Addition problem using empty productions.\n\\begin{align*}\n&nn(mnist,[X],[Y],[digit]) :: number(X,Y) \\rightarrow []. \\\\\n&addition(X,Y,N) \\rightarrow number(X,N1), number(Y,N2), \\{N \\;is\\; N1 + N2\\}.\n\\end{align*}\nThis grammar will always produce the empty sequence, but through the Prolog unification mechanism and the probabilistic modeling, we can express complex stochastic logic programs that include calls to neural networks.\n\\end{example}\n\n\n\n\\section{Related Works}\n\\label{sec:rw}\nDeepStochLog is an expressive neural symbolic framework\nwhose distinguishing features are: 1) it is based on the expressive stochastic logic programming paradigm, which can express probabilistic programs (as in T5-T6) as well as probabilistic unification-based grammars (T1-T4); 2) it can work with both symbolic and subsymbolic data such as images (as shown in T1-T4);\nand 3) its inference and learning mechanism is based on SLG-resolution, \nwhich naturally supports tabling, a form of dynamic programming (as shown in Q2). 
\n\nThere are several strands of related research.\nFirst, DeepStochLog is a neural logic programming language in the spirit of\nDeepProbLog \\cite{manhaeve2018deepproblog}, NeurASP \\cite{yang2020neurasp}, the neural theorem prover \\cite{rocktaschel2017end} and lifted relational neural networks (LRNNs) \\cite{ondrej}.\nThe first two systems are based on a probabilistic possible world semantics, while DeepStochLog is based on stochastic grammars, an approach that---as we have shown---scales much better (in part also due to the use of tabling).\nThe latter two approaches focus on Datalog (which cannot deal with function symbols) and use the logic to construct the neural network in a kind of knowledge-based model construction approach.\nFurthermore, they are neither probabilistic nor do they deal with subsymbolic inputs such as images.\nAnother related system is Tensorlog \\cite{tensorlog}, which is based on stochastic logic programming.\nWhile the two systems share their roots in SLPs, Tensorlog is less expressive than DeepStochLog, as it considers only Datalog and predicates of arity 2.\nWhile Tensorlog's implementation is fast thanks to its formulation in terms of tensors, it\nhas only been applied to symbolic data. \n\nSecond, DeepStochLog can be viewed as a neural-based grammar, similarly to Neural Grammars~\\cite{dyer} and NGS~\\cite{li2020ngs}. \nNeural Grammars have been introduced in the natural language community as an effective strategy to learn PCFGs. They are neural parameterizations of PCFGs, and it is possible to learn the structure of the grammar by enumerating a set of candidate rules and using neural networks to learn their probabilities. Unlike DeepStochLog, they are restricted to context-free grammars. \nFurthermore, Neural Grammars~\\cite{dyer} do not consider subsymbolic inputs (as in all our tasks T1-T6). \nIn contrast to the probabilistic interface of DeepStochLog, NGS uses backsearch, a greedy search that defines the backward feedback from the grammar to the neural nets. While this makes NGS very scalable, the backsearch must be defined \\textit{per-program}, while DeepStochLog backpropagates evidence automatically through any NDCG. \nNeural Attribute Grammars~\\cite{neuralattribute} integrate attribute grammars with neural networks.\nWhile this is also an expressive grammatical framework, \nthey are quite different from DeepStochLog in their approaches and applications, and have not been applied to subsymbolic data.\nThird, many systems in the neural symbolic community \\cite{LTN,SBR,ondrej} obtain differentiable logics by relaxing logical programs or theories using fuzzy logic and t-norms. While the shift in semantics from\nprobabilistic to fuzzy logic has known issues\n\\cite{vankrieken}, fuzzy logic allows for more scalable systems as compared to probabilistic logic based on the possible world semantics. But by exploiting the stochastic grammars, DeepStochLog shows the same benefits as fuzzy logic in terms of computational complexity (i.e., 
no disjoint-sum computation is required) by resorting to an alternative probabilistic semantics.\n\n\\section{Conclusions} \\label{sec:conclusions}\nWe have introduced a novel and very expressive neural symbolic model based on stochastic logic programming that integrates symbolic knowledge with subsymbolic representations, scales well, and gives state-of-the-art results on various neural symbolic computation tasks.\n\nThere are several limitations of DeepStochLog that we want to explore in further research.\nFirst, DeepStochLog does not yet learn the structure of the rules, while the neural theorem prover \\cite{rocktaschel2017end}, DiffLog \\cite{difflog} and the neural grammars \\cite{dyer} can all enumerate rules and then identify the most relevant ones. Second, DeepStochLog's inference could be further optimised by parallelization of the circuit using ideas from TensorLog \\cite{tensorlog}.\nThird, SLPs, and hence DeepStochLog, may lose probability mass due to failing derivations.\nThis can be addressed by normalizing and computing the partition function \\cite{cussens2001parameterestimationslp}. It would be interesting to approximate the partition function and also to further speed up the inference by sampling or by searching for the k-best derivations. Finally, it would be interesting to explore the use of DeepStochLog as a generative model \nto generate sequences. \n\n\\section{Acknowledgements}\n\nWe would like to thank Jessa Bekker for her helpful feedback and discussions throughout the whole project.\nThis work has received funding from the Research Foundation - Flanders, the KU Leuven Research Fund, the European Research Council (ERC)\nunder the European Union's Horizon 2020 research and innovation programme (grant agreement No [694980] SYNTH: Synthesising Inductive Data Models), the EU H2020 ICT48 project ``TAILOR''\nunder contract \\#952215, the Flemish Government under the ``Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen'' programme and the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.\nThomas Winters is a fellow of the Research Foundation-Flanders (FWO-Vlaanderen, 11C7720N).\nRobin Manhaeve is an SB PhD fellow of the Research Foundation-Flanders (FWO-Vlaanderen, 1S61718N).\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{EDCA Cycle Time Analysis}\n\nIn this section, we will first derive the AC-specific average\ncollision probability. Next, we will calculate the AC-specific\naverage cycle time. Finally, we will relate the average cycle time\nand the average collision probability to the average normalized\nthroughput, EDCA service time, and packet loss probability.\n\n\\subsection{AC-specific Average Collision Probability}\n\nThe difference in AIFS of each AC in EDCA creates the so-called\n\\textit{contention zones or periods}, as shown in\nFig.~\\ref{fig:unsat_contzones} \\cite{Robinson04},\\cite{Hui05}. In\neach contention zone, the number of contending stations may vary.\nWe employ an average analysis on the AC-specific collision\nprobability rather than calculating it separately for different\nAIFS and backoff slots as in \\cite{Inan07_ICC}-\\cite{Banchs06}. 
We\ncalculate the AC-specific collision probability according to the\nlong-term occupancy of the AIFS and backoff slots.\n\nWe define $p_{c_{i,x}}$ as the conditional probability that\nAC$_{i}$ experiences either an external or an internal collision\ngiven that it has observed the medium idle for $AIFS_{x}$ and\ntransmits in the current slot (note that $AIFS_{x}\\geq AIFS_{i}$ should\nhold). In the following, in order to be consistent with the\nnotation of \\cite{802.11e}, we assume $AIFS_{0}\\geq AIFS_{1} \\geq\nAIFS_{2} \\geq AIFS_{3}$. Let $d_{i} = AIFSN_{i} - AIFSN_{3}$.\nFollowing the slot homogeneity assumption of \\cite{Bianchi00},\nwe assume that each AC$_{i}$ transmits with a constant probability\n$\\tau_{i}$. Also, let the total number of AC$_{i}$ flows be $N_{i}$.\nThen, for the heterogeneous scenario in which each station has\nonly one AC,\n\\begin{eqnarray}\n\\label{eq:unsatpcix} \\setlength{\\nulldelimiterspace}{0pt}\np_{c_{i,x}} = 1-\\frac{\\prod \\limits_{i':d_{i'}\\leq d_{x}}\n(1-\\tau_{i'})^{N_{i'}}}{(1-\\tau_{i})}.\n\\end{eqnarray}\n\\noindent We only formulate the situation in which there is one\nAC per station; therefore, no internal collisions can occur. Note\nthat this simplification does not cause any loss of generality,\nbecause the proposed model can be extended to the case of a larger\nnumber of ACs per station as in \\cite{Kong04},\\cite{Inan07_ICC}.\n\nWe use the Markov chain shown in Fig.~\\ref{fig:unsat_AIFSMC} to\nfind the long-term occupancy of the contention zones. Each state\nrepresents the $n^{th}$ backoff slot after the completion of the\nAIFS$_{3}$ idle interval following a transmission period. The\nMarkov chain model uses the fact that a backoff slot is reached if\nand only if no transmission occurs in the previous slot. Moreover,\nthe number of states is limited by the maximum idle time between\ntwo successive transmissions, which is $W_{min}=\\min(CW_{i,max})$\nfor a saturated scenario. The probability that at least one\ntransmission occurs in a backoff slot in contention zone $x$ is\n\\begin{equation}\n\\label{eq:unsatptr} \\setlength{\\nulldelimiterspace}{0pt}\np^{tr}_{x} = 1-\\prod_{i':d_{i'}\\leq d_{x}} (1-\\tau_{i'})^{N_{i'}}.\n\\end{equation}\n\\noindent Note that the contention zones are labeled with $x$\naccording to the indices of $d$. In the case of equal AIFS\nvalues for different ACs, the contention zone is labeled with the\nindex of the AC with the higher priority.\n\nGiven the state transition probabilities as in\nFig.~\\ref{fig:unsat_AIFSMC}, the long-term occupancy of the\nbackoff slots $b'_{n}$ can be obtained from the steady-state\nsolution of the Markov chain. Then, the AC-specific average\ncollision probability $p_{c_{i}}$ is found by weighting the zone-specific collision probabilities $p_{c_{i,x}}$ according to the\nlong-term occupancy of the contention zones (thus backoff slots):\n\\begin{equation}\n\\label{eq:unsatpci}p_{c_{i}} = \\frac{\\sum_{n=d_{i}+1}^{W_{min}}\np_{c_{i,x}}b'_{n}}{\\sum_{n=d_{i}+1}^{W_{min}} b'_{n}}\n\\end{equation}\n\\noindent where $x = \\max \\left( y~|~d_{y} = \\underset{z}{\\max}\n(d_{z}~|~d_{z} \\leq n)\\right)$, i.e., $x$ is assigned the\nhighest index value within the set of ACs that have an AIFSN smaller\nthan or equal to $n+AIFSN_{3}$. 
This ensures that at backoff slot\n$n$, AC$_{i}$ has observed the medium idle for AIFS$_{x}$.\nTherefore, the calculation in~(\\ref{eq:unsatpci}) fits the\ndefinition of $p_{c_{i,x}}$.\n\n\\subsection{AC-Specific Average Cycle Time}\n\nIntuitively, it can be seen that all users transmitting at the\nsame AC have the same average cycle time, while the cycle time may differ\namong ACs. Our analysis will also show mathematically that this is the\ncase. Let $E_{i}[t_{cyc}]$ be the average cycle time for a tagged\nAC$_{i}$ user. $E_{i}[t_{cyc}]$ can be calculated as the sum of\nthe average durations of \\textit{i)} the successful transmissions,\n$E_{i}[t_{suc}]$, \\textit{ii)} the collisions, $E_{i}[t_{col}]$,\nand \\textit{iii)} the idle slots, $E_{i}[t_{idle}]$, in one cycle.\n\nIn order to calculate the average time spent on successful\ntransmissions during an AC$_{i}$ cycle time, we need to find the\nexpected total number of successful transmissions between two\nsuccessful transmissions of AC$_{i}$. Let $Q_{i}$ represent this\nrandom variable. Also, let $\\gamma_{i}$ be the probability that\nthe transmitted packet belongs to an arbitrary user from AC$_{i}$\ngiven that the transmission is successful. Then,\n\\begin{equation}\\label{eq:gamma_i}\n\\gamma_{i} = \\sum_{n=d_{i}+1}^{W_{min}}\nb'_{n}\\frac{p_{s_{i,n}}\/N_{i}}{\\sum \\limits_{\\forall j}\np_{s_{j,n}}}\n\\end{equation}\n\\noindent where\n\\begin{equation}\\label{eq:p_s_i_cycle}\np_{s_{i,n}} =\n\\left\\{ \\\\\n\\begin{IEEEeqnarraybox}[\\relax][c]{lc}\n\\frac{N_{i}\\tau_{i}}{(1-\\tau_{i})}\\prod_{i':d_{i'}\\leq\nn-1}(1-\\tau_{i'})^{N_{i'}}, &~{\\rm if}~n \\geq d_{i}+1 \\\\ 0, &~{\\rm\nif }~n < d_{i}+1.\n\\end{IEEEeqnarraybox}\n\\right.\n\\end{equation}\n\n\nThen, the Probability Mass Function (PMF) of $Q_{i}$ is\n\\begin{equation}\\label{eq:PMFsucctrans}\nPr(Q_{i}=k) = \\gamma_{i}(1-\\gamma_{i})^{k}, ~~k \\geq 0.\n\\end{equation}\n\nWe can calculate the expected number of successful transmissions of\nany AC$_{j}$ during the cycle time of AC$_{i}$, $ST_{j,i}$, as\n\\begin{equation}\\label{eq:ExpectedindividualAC}\nST_{j,i} = N_{j}E[Q_{i}] \\frac{\\gamma_{j}}{1-\\gamma_{i}}.\n\\end{equation}\n\nInserting $E[Q_{i}]=(1-\\gamma_{i})\/\\gamma_{i}$ in\n(\\ref{eq:ExpectedindividualAC}), our intuition that each user from\nAC$_{i}$ can transmit successfully once on average during the\ncycle time of another AC$_{i}$ user, i.e., $ST_{i,i}=N_{i}$, is\nconfirmed. Therefore, the average cycle time of any user belonging\nto the same AC is equal in a heterogeneous scenario where each\nstation runs only one AC. Including the successful packet\ntransmission time of the tagged AC$_{i}$ user itself in $E_{i}[t_{suc}]$, we\nfind\n\\begin{equation}\\label{eq:Etsuc}\nE_{i}[t_{suc}] = \\sum_{\\forall j} ST_{j,i}T_{s_{j}}\n\\end{equation}\n\n\\noindent where $T_{s_{j}}$ is defined as the time required for a\nsuccessful packet exchange sequence. $T_{s_{j}}$ will be derived\nin (\\ref{eq:unsatTs}).\n\nTo obtain $E_{i}[t_{col}]$, we need to calculate the average number of\nusers involved in a collision, $N_{c_{n}}$, at the $n^{th}$\nslot after the last busy time for given $N_{i}$ and $\\tau_{i}$,\n$\\forall i$. Let the total number of users transmitting at the\n$n^{th}$ slot after the last busy time be denoted as $Y_{n}$. We see\nthat $Y_{n}$ is the sum of the random variables\n$Binomial(N_{i},\\tau_{i})$, $\\forall i:~d_{i}\\leq n-1$. Employing\nsimple probability theory, we can calculate\n$N_{c_{n}}=E[Y_{n}|Y_{n}\\geq 2]$. 
After some simplification,\n\\begin{equation}\nN_{c_{n}} = \\frac{\\sum\\limits_{i:d_{i}\\leq n-1}\n(N_{i}\\tau_{i}-p_{s_{i,n}})}{1-\\prod\\limits_{i:d_{i}\\leq\nn-1}(1-\\tau_{i})^{N_{i}}-\\sum\\limits_{i:d_{i}\\leq n-1}p_{s_{i,n}}}.\n\\end{equation}\n\nIf we let the average number of users involved in a collision at\nan arbitrary backoff slot be $N_{c}$, then\n\\begin{equation}\nN_{c} = \\sum_{\\forall n} b'_{n}N_{c_{n}}.\n\\end{equation}\n\nWe can also calculate the expected number of collisions that an\nAC$_{j}$ user experiences during the cycle time of an AC$_{i}$,\n$CT_{j,i}$, as\n\\begin{equation}\nCT_{j,i} = \\frac{p_{c_{j}}}{1-p_{c_{j}}}ST_{j,i}.\n\\end{equation}\n\n\\noindent Then, defining $T_{c_{j}}$ as the time wasted in a\ncollision period (to be derived in (\\ref{eq:unsatTc})),\n\\begin{equation}\nE_{i}[t_{col}] = \\frac{1}{N_{c}}\\sum_{\\forall j}\nCT_{j,i}T_{c_{j}}.\n\\end{equation}\n\nGiven $p_{c_{i}}$, we can calculate the expected number of backoff\nslots $E_{i}[t_{bo}]$ that AC$_{i}$ waits before attempting a\ntransmission. Let $W_{i,k}$ be the CW size of AC$_{i}$ at backoff\nstage $k$ \\cite{Inan07_ICC}. Note that, when the retry limit\n$r_{i}$ is reached, the packet is discarded. Therefore, another\n$E_{i}[t_{bo}]$ passes between two transmissions with probability\n$p_{c_{i}}^{r_{i}}$:\n\\begin{equation}\\label{eq:aveBO}\nE_{i}[t_{bo}]=\\frac{1}{1-p_{c_{i}}^{r_{i}}}\\sum_{k=1}^{r_{i}}p_{c_{i}}^{k-1}(1-p_{c_{i}})\\frac{W_{i,k}}{2}.\n\\end{equation}\n\n\\noindent Noticing that between two successful transmissions,\nAC$_{i}$ also experiences $CT_{i,i}$ collisions,\n\\begin{equation}\\label{eq:E_i_t_idle}\nE_{i}[t_{idle}] = E_{i}[t_{bo}](CT_{i,i}\/N_{i}+1)t_{slot}.\n\\end{equation}\n\nAs shown in \\cite{Hui05}, the transmission probability of a user\nusing AC$_{i}$ is\n\\begin{equation}\\label{eq:tauapp}\n\\tau_{i} = \\frac{1}{E_{i}[t_{bo}]+1}.\n\\end{equation}\n\nNote that, in \\cite{Hui05}, it is proven that the mean value\nanalysis for the average transmission probability as in\n(\\ref{eq:tauapp}) matches the Markov analysis of \\cite{Bianchi00}.\n\nThe fixed-point equations (\\ref{eq:unsatpcix})-(\\ref{eq:tauapp})\ncan be solved numerically for $\\tau_{i}$ and $p_{c_{i}}$, $\\forall\ni$. Then, each component of the average cycle time for AC$_{i}$,\n$\\forall i$, can be calculated using\n(\\ref{eq:gamma_i})-(\\ref{eq:E_i_t_idle}).\n\n\\subsection{Performance Analysis}\n\nLet $T_{p_{i}}$ be the average payload transmission time for\nAC$_{i}$ ($T_{p_{i}}$ includes the transmission time of MAC and\nPHY headers), $\\delta$ be the propagation delay, and $T_{ack}$ be the\ntime required for acknowledgment packet (ACK) transmission. Then,\nfor the basic access scheme, we define the time spent in a\nsuccessful transmission $T_{s_{i}}$ and in a collision $T_{c_{i}}$\nfor any AC$_{i}$ as\n\\begin{align}\\label{eq:unsatTs}\nT_{s_{i}} = & T_{p_{i}} + \\delta + SIFS + T_{ack} + \\delta +\nAIFS_{i}\n\\\\ \\label{eq:unsatTc} T_{c_{i}} = & T_{p^{*}_{i}} + ACK\\_Timeout +\nAIFS_{i}\n\\end{align}\n\\noindent where $T_{p^{*}_{i}}$ is the average transmission time\nof the longest packet payload involved in a collision\n\\cite{Bianchi00}. For simplicity, we assume the packet size to be\nequal for all ACs, so that $T_{p^{*}_{i}}=T_{p_{i}}$. 
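Before specifying the remaining timing constants, we illustrate numerically how the fixed point of (\\ref{eq:unsatpcix})-(\\ref{eq:tauapp}) can be obtained. The following compact Python sketch is ours, not the authors' code; it assumes two ACs, the standard BEB window schedule $W_{i,k}=\\min(2^{k-1}(CW_{i,min}+1)-1,\\,CW_{i,max})$, and illustrative parameter values, and it reconstructs the zone occupancies $b'_{n}$ from the property that backoff slot $n+1$ is reached if and only if no transmission occurs in slot $n$.
\\begin{verbatim}
import math

N, AIFSN = [10, 10], [3, 2]          # flows and AIFSN per AC
CWmin, CWmax, RETRY = [31, 15], [255, 127], [7, 7]
d = [a - min(AIFSN) for a in AIFSN]
Wmin = min(CWmax)                    # max idle slots between transmissions

def mean_backoff(pc, i):             # E_i[t_bo] of Eq. (12), in slots
    s = sum(pc**(k-1) * (1-pc) * min(2**(k-1)*(CWmin[i]+1)-1, CWmax[i]) / 2
            for k in range(1, RETRY[i]+1))
    return s / (1 - pc**RETRY[i])

def idle_prob(tau, n):               # prob. of no transmission in slot n
    return math.prod((1-tau[j])**N[j] for j in range(2) if d[j] <= n-1)

tau = [0.05, 0.05]
for _ in range(2000):                # damped fixed-point iteration
    b = [1.0]                        # unnormalized occupancy of slot 1
    for n in range(1, Wmin):
        b.append(b[-1] * idle_prob(tau, n))
    pc = []
    for i in range(2):               # Eqs. (1) and (3)
        num = sum((1 - idle_prob(tau, n)/(1-tau[i])) * b[n-1]
                  for n in range(d[i]+1, Wmin+1))
        den = sum(b[n-1] for n in range(d[i]+1, Wmin+1))
        pc.append(num / den)
    tau = [0.5*t + 0.5/(mean_backoff(pc[i], i) + 1)   # Eq. (15)
           for i, t in enumerate(tau)]

print("tau:", tau, "p_c:", pc,
      "p_drop:", [pc[i]**RETRY[i] for i in range(2)])
\\end{verbatim}
The resulting $\\tau_{i}$ and $p_{c_{i}}$ can then be plugged into (\\ref{eq:gamma_i})-(\\ref{eq:E_i_t_idle}) and the performance expressions below.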
Since it is not\nexplicitly specified in the standards, we set $ACK\\_Timeout$\nusing the Extended Interframe Space (EIFS) as $EIFS_{i}-AIFS_{i}$.\nNote that the extensions of~(\\ref{eq:unsatTs})\nand~(\\ref{eq:unsatTc}) for the RTS\/CTS scheme are straightforward\n\\cite{Bianchi00}.\n\nThe average cycle time of an AC represents the renewal cycle for\nthat AC. Then, the normalized throughput of AC$_{i}$ is defined as\nthe successfully transmitted information per renewal cycle:\n\\begin{equation}\\label{eq:Si_cycle}\nS_{i} =\n\\frac{N_{i}T_{p_{i}}}{E_{i}[t_{suc}]+E_{i}[t_{col}]+E_{i}[t_{idle}]}.\n\\end{equation}\n\nThe AC-specific cycle time is directly related, but not equal, to\nthe mean protocol service time. By definition, the cycle time is\nthe duration between successful transmissions. We define the\naverage protocol service time such that it also considers the\nservice time of packets which are dropped due to the retry limit. On\naverage, $1\/p_{i,drop}$ service intervals correspond to\n$1\/p_{i,drop}-1$ cycles. Therefore, the mean service time\n$\\mu_{i}$ can be calculated as\n\\begin{align}\\label{eq:pdp_cycle}\n\\mu_{i} = (1-p_{i,drop})E_{i}[t_{cyc}].\n\\end{align}\n\nSimply, the average packet drop probability due to MAC layer\ncollisions is\n\\begin{align}\\label{eq:pidrop_cycle}\np_{i,drop} = p_{c_{i}}^{r_{i}}.\n\\end{align}\n\n\\section{EDCA Overview}\\label{sec:EDCAoverview}\n\nThe IEEE 802.11e EDCA is a QoS extension of the IEEE 802.11\nDistributed Coordination Function (DCF). The major enhancement to\nsupport QoS is that EDCA differentiates packets using different\npriorities and maps them to specific ACs, which are buffered in\nseparate queues at a station. Each AC$_{i}$ within a station\n($0\\leq i\\leq i_{max}$, $i_{max}=3$ in \\cite{802.11e}), having its\nown EDCA parameters, contends for the channel independently of the\nothers. Following the convention of \\cite{802.11e}, the larger the\nindex $i$ is, the higher the priority of the AC is. Levels of\nservice are provided through different assignments of the\nAC-specific EDCA parameters: AIFS, CW, and TXOP limits.\n\nIf there is a packet ready for transmission in the MAC queue of an\nAC, the EDCA function must sense the channel to be idle for a\ncomplete AIFS before it can start the transmission. The AIFS of an\nAC is determined by using the Management Information Base (MIB)\nparameters as $AIFS = SIFS + AIFSN \\times T_{slot}$, where $AIFSN$\nis the AC-specific AIFS number, $SIFS$ is the length of the Short\nInterframe Space, and $T_{slot}$ is the duration of a time slot.\n\nIf the channel is idle when the first packet arrives at the AC\nqueue, the packet can be directly transmitted as soon as the\nchannel is sensed to be idle for AIFS. Otherwise, a backoff\nprocedure is completed following the completion of AIFS before the\ntransmission of this packet. A uniformly distributed random\ninteger, namely a backoff value, is selected from the range\n$[0,W]$.\nThe backoff counter is decremented at the slot boundary if the\nprevious time slot is idle. Should the channel be sensed busy at\nany time slot during AIFS or backoff, the backoff procedure is\nsuspended at the current backoff value. The backoff resumes as\nsoon as the channel is sensed to be idle for AIFS again. When the\nbackoff counter reaches zero, the packet is transmitted in the\nfollowing slot.\n\nThe value of $W$ depends on the number of retransmissions the\ncurrent packet has experienced. The initial value of $W$ is set to the\nAC-specific $CW_{min}$. 
If the transmitter cannot receive an\nAcknowledgment (ACK) packet from the receiver within a timeout\ninterval, the transmission is labeled as unsuccessful and the\npacket is scheduled for retransmission. At each unsuccessful\ntransmission, the value of $W$ is doubled until the maximum\nAC-specific $CW_{max}$ limit is reached. The value of $W$ is reset\nto the AC-specific $CW_{min}$ if the transmission is successful,\nor if the retry limit is reached and the packet is thus dropped.\n\nThe higher priority ACs are assigned smaller AIFSN. Therefore, the\nhigher priority ACs can either transmit or decrement their backoff\ncounters while lower priority ACs are still waiting in AIFS. This\nresults in higher priority ACs facing a lower average probability\nof collision and making relatively faster progress through backoff slots.\nMoreover, in EDCA, the ACs with higher priority may select backoff\nvalues from a comparably smaller CW range. This approach\nprioritizes the access since a smaller CW value means a smaller\nbackoff delay before the transmission.\n\nUpon gaining access to the medium, each AC may carry out\nmultiple frame exchange sequences as long as the total access\nduration does not exceed the TXOP limit. Within a TXOP, the\ntransmissions are separated by SIFS. Multiple frame transmissions\nin a TXOP can reduce the overhead due to contention. A TXOP limit\nof zero corresponds to only one frame exchange per access.\n\nAn internal (virtual) collision within a station is handled by\ngranting access to the AC with the highest priority. The ACs\nwith lower priority that suffer from a virtual collision run the\ncollision procedure as if an external collision had occurred.\n\n\n\\section{Conclusion}\n\nWe have presented an accurate cycle time model for predicting the\nEDCA saturation performance analytically. The model accounts for\nthe AIFS and CW differentiation mechanisms of EDCA through a simple\naverage collision probability calculation. Instead of\nthe generic slot time analysis of \\cite{Bianchi00}, we use the AC-specific cycle time as\nthe renewal cycle. We show that the proposed simple cycle time\nmodel performs as accurately as the more detailed and complex models\npreviously proposed in the literature. The mean saturation\nthroughput, protocol service time and packet drop probability are\ncalculated using the model. This analysis also highlights some\ncommonalities between approaches in EDCA saturation performance\nanalysis. The simple cycle time analysis can provide invaluable\ninsights for QoS provisioning in the WLAN.\n\n\n\n\\section{Introduction}\n\nThe IEEE 802.11e standard \\cite{802.11e} specifies the Hybrid\nCoordination Function (HCF), which enables prioritized and\nparameterized Quality-of-Service (QoS) support at the MAC layer.\nThe HCF combines a distributed contention-based channel access\nmechanism, referred to as Enhanced Distributed Channel Access\n(EDCA), and a centralized polling-based channel access mechanism,\nreferred to as HCF Controlled Channel Access (HCCA). 
We confine\nour analysis to the EDCA scheme, which uses Carrier Sense Multiple\nAccess with Collision Avoidance (CSMA\/CA) and a slotted Binary\nExponential Backoff (BEB) mechanism as the basic access method.\nThe EDCA defines multiple Access Categories (AC) with AC-specific\nContention Window (CW) sizes, Arbitration Interframe Space (AIFS)\nvalues, and Transmit Opportunity (TXOP) limits to support\nMAC-level QoS and prioritization.\n\nWe evaluate the EDCA performance for the saturation (asymptotic)\ncase. The saturation analysis provides the limits reached by the\nsystem throughput and protocol service time in stable conditions\nwhen every station always has backlogged data ready to transmit in\nits buffer. The saturation analysis provides in-depth\nunderstanding of and insights into the random access schemes and the\neffects of different contention parameters on the performance. The\nresults of such an analysis can be employed in access parameter\nadaptation or in a call admission control algorithm.\n\nOur analysis is based on the fact that a random access system\nexhibits cyclic behavior. A cycle time is defined as the duration\nin which an arbitrary tagged user successfully transmits one\npacket on average \\cite{Medepalli05}. We will derive the explicit\nmathematical expression of the AC-specific EDCA cycle time. The\nderivation considers the AIFS and CW differentiation by employing\na simple average collision probability analysis. We will use the\nEDCA cycle time to predict the first moments of the saturation\nthroughput, the service time, and the packet loss probability. We\nwill show that the results obtained using the cycle time model\nclosely follow the accurate predictions of the previously proposed,\nmore complex analytical models and simulation results. Our cycle\ntime analysis can serve as a simple and practical alternative\nmodel for EDCA saturation throughput analysis.\n\n\\section{Previous Work}\n\nA number of proposals have been made to extend Bianchi's Markov model\n\\cite{Bianchi00} for the 802.11e EDCA function. \\cite{Xiao05} includes the contention\nwindow size differentiation of the EDCA mechanism, but lacks the\nAIFS differentiation and the virtual collision scheme.\n\\cite{Robinson04} modifies the model by adding an in-depth\ntreatment of the postcollision contention period, but the proposed\nmodel remains complex. \\cite{Tantra05} includes AIFS\ndifferentiation for the two Access Category (AC) scenario, still\nmodelling the high priority AC as in \\cite{Bianchi00}, which is not\ncorrect. 
\\cite{Kong04} provides a more complete model but misses\nhandling the collision probability differentiation for different\naccess categories after the channel busy period finishes.\n\n\\section{Related Work}\n\nIn this section, we provide a brief summary of the studies in the\nliterature on the theoretical DCF and EDCA function saturation\nperformance analysis.\n\nThree major saturation performance models have been proposed for\nDCF: \\textit{i)} assuming a constant collision probability for each\nstation, Bianchi \\cite{Bianchi00} developed a simple Discrete-Time\nMarkov Chain (DTMC) and the saturation throughput is obtained by\napplying regenerative analysis to a generic slot time,\n\\textit{ii)} Cali \\textit{et al.} \\cite{Cali00} employed renewal\ntheory to analyze a \\textit{p}-persistent variant of DCF with\npersistence factor \\textit{p} derived from the CW, and\n\\textit{iii)} Tay \\textit{et al.} \\cite{Tay01} instead used an\naverage value mathematical method to model the DCF backoff procedure\nand to calculate the average number of interruptions that the\nbackoff timer experiences. Sharing the common assumption of slot\nhomogeneity (for an arbitrary station, a constant collision or\ntransmission probability at an arbitrary slot), these models\ndefine different renewal cycles, all of which lead to accurate\nsaturation performance analysis.\n\nThese major methods (especially \\cite{Bianchi00}) have been modified by\nseveral researchers to include the extra features of the EDCA\nfunction in the saturation analysis. Xiao \\cite{Xiao05} extended\n\\cite{Bianchi00} to analyze only the CW differentiation. Kong\n\\textit{et al.} \\cite{Kong04} took AIFS differentiation into\naccount. On the other hand, these EDCA extensions miss the\ntreatment of varying collision probabilities at different AIFS\nslots due to the varying number of contending stations. Robinson\n\\textit{et al.} \\cite{Robinson04} proposed an average analysis on\nthe collision probability for different contention zones during\nAIFS. Hui \\textit{et al.} \\cite{Hui05} unified several major\napproaches into one approximate average model taking into account\nthe varying collision probability in different backoff subperiods\n(corresponding to the contention zones in \\cite{Robinson04}). Zhu\n\\textit{et al.} \\cite{Zhu05} proposed another analytical EDCA\nMarkov model averaging the transition probabilities based on the\nnumber and the parameters of high priority flows. Inan \\textit{et\nal.} \\cite{Inan07_ICC} proposed a 3-dimensional DTMC which\nprovides an accurate treatment of AIFS and CW differentiation.\nAnother 3-dimensional DTMC was proposed by Tao \\textit{et al.}\n\\cite{Tao06}, in which the third dimension models the state of\nbackoff slots between successive transmission periods. The fact\nthat the number of idle slots between successive transmissions can\nbe at most the minimum of the AC-specific $CW_{max}$ values is\nconsidered. Independently, Zhao \\textit{et al.} \\cite{Zhao02} had\npreviously proposed a similar model for the heterogeneous case\nwhere each station carries traffic of only one AC. Banchs \\textit{et\nal.} \\cite{Banchs06} proposed another model which considers\nthe varying collision probability among different AIFS slots due to a\nvariable number of stations. 
Lin \\textit{et al.} \\cite{Lin06}\nextended \\cite{Tay01} in order to carry out a mean value analysis\napproximating AIFS and CW differentiation.\n\nOur approach is based on the observation that the transmission\nbehavior in the 802.11 WLAN follows a pattern of periodic cycles.\nPreviously, Medepalli \\textit{et al.} \\cite{Medepalli05} provided\nexplicit expressions for the average DCF cycle time and system\nthroughput. Similarly, Kuo \\textit{et al.} \\cite{Kuo03} calculated\nthe EDCA transmission cycle assuming a constant collision\nprobability for all traffic classes. However, such an\nassumption leads to analytical inaccuracies\n\\cite{Kong04}-\\cite{Lin06}. Our main contribution is that we\nincorporate an accurate AIFS and CW differentiation calculation into\nthe EDCA cycle time analysis. We show that the cyclic behavior is\nobserved on a per-AC basis in EDCA. To maintain the simplicity\nof the cycle time analysis, we employ averaging on the\nAC-specific collision probability. The comparison with more\ncomplex and detailed theoretical and simulation models reveals\nthat the analytical accuracy is preserved.\n\n\\section{Numerical and Simulation Results} \\label{sec:Validation}\n\nWe validate the accuracy of the numerical results by comparing\nthem to the simulation results obtained from ns-2 \\cite{ns2}. For\nthe simulations, we employ the IEEE 802.11e HCF MAC simulation\nmodel for ns-2.28 \\cite{ourcode}. This module implements all the\nEDCA and HCCA functionalities stated in \\cite{802.11e}.\n\nIn the simulations, we consider two ACs, one high priority (AC$_{3}$)\nand one low priority (AC$_{1}$). Each station runs only one AC.\nEach AC always has buffered packets ready for\ntransmission. For both ACs, the payload size is 1000 bytes.\nThe RTS\/CTS handshake is turned on. The simulation results are\nreported for a wireless channel that is assumed to introduce no\ntransmission errors. The error-prone channel case is\nleft for future study. All the stations use the 802.11g Physical\nLayer (PHY) with 54 Mbps and 6 Mbps as the data and basic rates,\nrespectively ($T_{slot}=9~\\mu s$, $SIFS=10~\\mu s$) \\cite{802.11g}.\nThe simulation runtime is 100 seconds.\n\n\n\nIn the first set of experiments, we set $AIFSN_{1}=3$,\n$AIFSN_{3}=2$, $CW_{1,min}=31$, $CW_{3,min}=15$, $m_{1}=m_{3}=3$,\n$r_{1}=r_{3}=7$. Fig.~\\ref{fig:A1_thp_GC07} shows the normalized\nthroughput of each AC when both $N_{1}$ and $N_{3}$ are varied\nfrom 5 to 30 and are equal to each other. As the comparison with a\nmore detailed analytical model \\cite{Inan07_ICC} and the\nsimulation results reveals, the cycle time analysis can predict the\nsaturation throughput accurately.\nFig.~\\ref{fig:A1_mpst_GC07} and Fig.~\\ref{fig:A1_mpdp_GC07}\ndisplay the mean protocol service time and the packet drop probability,\nrespectively, for the same scenario as Fig.~\\ref{fig:A1_thp_GC07}.\nAs the comparison with \\cite{Inan07_ICC} and the simulation results\nshows, both performance measures can be predicted accurately by the\nproposed cycle time model. Although not included in the figures, a\nsimilar discussion holds for the comparison with the other detailed\nand\/or complex models of \\cite{Tao06}-\\cite{Banchs06}.\n\n\nIn the second set of experiments, we fix the EDCA parameters of\none AC and vary the parameters of the other AC in order to show\nthat the proposed cycle time model accurately captures the normalized\nthroughput for different sets of EDCA parameters. 
In the\nsimulations, both $N_{1}$ and $N_{3}$ are set to 10.\nFig.~\\ref{fig:v_aifs_cw_1_thp_GC07} shows the normalized\nthroughput of each AC when we set $AIFSN_{3}=2$, $CW_{3,min}=15$,\nand vary $AIFSN_{1}$ and $CW_{1,min}$.\nFig.~\\ref{fig:v_aifs_cw_3_thp_GC07} shows the normalized\nthroughput of each AC when we set $AIFSN_{1}=4$, $CW_{1,min}=127$,\nand vary $AIFSN_{3}$ and $CW_{3,min}$. As the comparison with the\nsimulation results shows, the predictions of the proposed cycle\ntime model are accurate. We do not include the results for the packet\ndrop probability and the service time for this experiment; no\ndiscernible error trends were observed.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}