diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbopk" "b/data_all_eng_slimpj/shuffled/split2/finalzzbopk" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbopk" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\nLS I $+61^{\\circ}303$, a Galactic high-mass X-ray binary system located at a distance of 2 \nkpc \\citep{Frail-1991}, is detected in the energy range from radio to \n$\\gamma$-rays exhibiting strong variable emission. It consists of a B0 \nmain-sequence star with a circumstellar disk (i.e a Be star) and a compact \nobject of unknown nature. The orbital period of the system is estimated\nto be $P_{orb}$ = 26.496 days and it also exhibits a long term periodic \nvariation with a superorbital period of $P_{sup}$ = 1667 \ndays \\citep{Gregory-2002,Massi-2013,Massi-2015}. However, very recently the \nsuperorbital period is estimated to be 1626 days using 37 years of radio data \n\\citep{Massi_2016}.\nThe zero orbital phase \ncorresponds to $T_{0,orb} = 2443366.775 + nP_{orb}~JD$. According to the \nmost recent radial velocity measurements, the orbit is elliptical with\neccentricity of e = 0.537 $\\pm$ 0.034 and periastron passage occurring around phase \n$\\phi$ = 0.275, apastron passage at $\\phi$ = 0.775, superior conjunction \nat $\\phi$ = 0.081, and inferior conjunction at $\\phi$ = 0.313 \n\\citep{Aragona-2009}. \n\n\nHigh angular resolution VLBI radio data has shown the presence of \nhigh-energy particle outflow possibly related to jet-like ejection on \nthe time scale of a orbital period \\citep{Paredes-1998, Massi-2004}. \nHowever, the observed morphological changes in the data\ncollected at different epochs reported by \\cite{Dhawan-2006} \nsupport a scenario of binary pulsar. Recent\ndetailed VLBA radio images, obtained by reprocessing same\ndata-set, through the orbital period established the\npresence of one sided and double sided radio structures\nsupporting a precessing microquasar model \\citep{Massi-2012}.\n \n\nLong-term monitoring of the source during 2007-2011 by Proportional \nCounter Array (PCA) onboard Rossi X-ray Timing Explorer (RXTE) \nestablished the superorbital modulation in X-rays and a shift of \nsuperorbital phase by 0.2 between radio and X-ray data \\citep{Li-2012, \nChernyakova-2012}. Very recently, superorbital modulation at MeV--GeV \n$\\gamma$-rays in the apastron phase (0.5 --1.0) has been established by \nFermi-LAT \\citep{Ackermann-2013} based on the data taken during 2008 \nAugust 4 to 2013 March 24. \n\nThis source has often shown complex behaviour in very high energy $\\gamma$-rays.\nLS I $+61^{\\circ}303$\\ was first observed at TeV energies by the MAGIC telescope system \nduring 2005 October -- 2006 March \nwith a significance of 8.7$\\sigma$ in the orbital phase 0.4--0.7, \nestablishing it as a $\\gamma$-ray binary\n\\citep{Albert-2006}. The VERITAS observations carried out during 2006 \nSeptember -- 2007 February confirmed TeV emission from this\nsource \\citep{Acciari-2008}. However, further observations of the source \nby both MAGIC and VERITAS have shown different flux levels \n\\citep{Acciari-2011,Aleksic-2012,Aliu-2013}. These observations at TeV \nenergies show that the source behaves differently in different orbital \ncycles suggesting a variable nature of the source. 
Variability of the source in almost all wavebands could possibly be related to the superorbital modulation of the fluxes, which has been shown at radio, X-ray and MeV--GeV $\\gamma$-ray energies (the latter detected by Fermi-LAT) \\citep{Gregory-2002,Li-2012,Chernyakova-2012,Ackermann-2013}. Hence, a long-term multiwaveband study of this source can provide important observational support for unveiling the nature of the source and the emission mechanisms.\n\nWith this motivation, we have studied the radio, X-ray and $\\gamma$-ray data from this source collected over a period longer than the superorbital period. We have studied the variation of flux as a function of orbital and superorbital phases. We have also studied the multiwaveband Spectral Energy Distribution (SED) of the source in some of the phases. This paper is organized as follows: In section 2, the data set used for these studies and the analysis procedure are described. The variation of the flux with the orbital and the superorbital phases is discussed in section 3. The SEDs and their interpretation in terms of a microquasar model are given in section 4, followed by a discussion and conclusions in section 5.\n\n\\section{Multiwaveband Data and Analysis}\n\nIn the last few years, LS I $+61^{\\circ}303$\\ has been observed extensively by various instruments. In the present work, we have used data from the radio, X-ray and $\\gamma$-ray bands. The radio data used here are from \\cite{Richards-2011} and \\cite{Massi-2015}. These are 15 GHz observations from the 40 m single-dish telescope at the Owens Valley Radio Observatory (OVRO). Data on LS I $+61^{\\circ}303$\\ were collected during MJD 54908.8 -- 56795.0 (2009 March -- 2014 May). Observations were carried out approximately twice a week.\n\nX-ray data were obtained from the PCA onboard RXTE and the X-ray Telescope (XRT) onboard Swift. The PCA is an array of five identical Xenon-filled proportional counter units (PCUs) \\citep{Bradt_1993} covering an energy range from 2 to 60 keV with a total collecting area of 6500 cm$^2$. Data were collected over the period MJD 50143 -- 55924, and the standard analysis procedure was used to generate PCA light curves over the energy range of 2 -- 9 keV.\n\n\nThe XRT onboard Swift consists of a grazing incidence Wolter I telescope which focuses X-rays on a CCD \\citep{Burrows_2005}. This instrument has an effective area of 110 cm$^2$, a 23.6 arcmin field of view (FOV) and 15 arcsec resolution (half-power diameter). It covers an energy range of 0.2 to 10 keV. Swift-XRT light curves were obtained from the site\\footnote{$http:\/\/www.swift.ac.uk\/user\\_objects\/$}. Data were collected over the period MJD 53980 -- 57039 (2006 September 2 -- 2015 January 17). Details of the procedure used for generating these light curves are given in \\cite{Evans_2007}.\n\nHigh energy $\\gamma$-ray data were obtained from the Large Area Telescope (LAT) onboard Fermi. The Fermi-LAT is a pair-production telescope \\citep{Atwood_2009} covering an energy range of 20 MeV to 300 GeV with a FOV of $\\ge$ 2.5 sr. The data taken over the period MJD 54682.9 -- 57145.9 (2008 August 4 -- 2015 May 3) were analysed in the present work. A circular region of interest (ROI) with radius 15$^\\circ$, centred at the position RA(J2000) = 02$^h$ 40$^m$ 34$^s$ and Dec(J2000) = 61$^\\circ$ 15$'$ 25$''$, was used for extracting the data. Fermi Science Tools (FST-v10r0p5) with event class Pass 8 data were used for Galactic point source analysis. 
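\n\nThe event selection described here (together with the zenith-angle cut discussed just below) can equivalently be scripted with the standard Fermi Science Tools Python bindings. The analysis in this work was driven through the {\\it enrico} wrapper described below, so the following is only an illustrative sketch; the input file names are placeholders rather than the actual files used.\n\\begin{verbatim}\n# Illustrative sketch of the event selection (file names are placeholders)\nfrom gt_apps import filter, maketime\n\nfilter['evclass'] = 128      # Pass 8 SOURCE class\nfilter['evtype']  = 3        # FRONT + BACK conversions\nfilter['ra']      = 40.14    # RA(J2000) of LS I +61 303, deg\nfilter['dec']     = 61.26    # Dec(J2000) of LS I +61 303, deg\nfilter['rad']     = 15       # ROI radius, deg\nfilter['emin']    = 300      # MeV\nfilter['emax']    = 300000   # MeV\nfilter['zmax']    = 100      # zenith-angle cut, deg\nfilter['infile']  = '@photon_files.txt'\nfilter['outfile'] = 'lsi61303_filtered.fits'\nfilter.run()\n\nmaketime['scfile']  = 'spacecraft.fits'\nmaketime['filter']  = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'\nmaketime['roicut']  = 'no'\nmaketime['evfile']  = 'lsi61303_filtered.fits'\nmaketime['outfile'] = 'lsi61303_gti.fits'\nmaketime.run()\n\\end{verbatim}\n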
Since the Earth's limb is a strong source of background $\\gamma$-rays, these were filtered out with a zenith-angle cut of 100$^\\circ$. A Python-based software tool, {\\it enrico} \\citep{enrico-2013}, was used to perform the standard binned likelihood analysis. The $\\gamma$-ray events in the data were binned in 8 logarithmic energy bins between 300 MeV and 300 GeV. Since the point-spread function (PSF) of the LAT is large, sources from outside the ROI may contribute at low energies, affecting the flux estimates for the sources considered in this analysis. In order to account for this, the exposure map was expanded by another 10$^\\circ$ outside the ROI, for all events, as suggested by \\citet{Abdo-2009}.\n\n\nWe studied the spectral properties of the $\\gamma$-ray emission by comparing the observational results with the models of the sources present in the ROI. To get the best-fit model parameters, the spatial distribution and spectral models of the sources are convolved with the instrument response function (IRF) and the exposure of the observation. In this work, we used the newly introduced IRF version \\textit{P8R2\\_SOURCE\\_V6}. There are 85 point-like sources and some diffuse background sources from the 3rd Fermi-LAT catalog located in the ROI. In order to account for the emission from background sources, we considered a two-component background model: a diffuse Galactic emission component (gll\\_iem\\_v06.fits) and an isotropic emission component (iso\\_P8R2\\_SOURCE\\_V6\\_v06.txt) consisting of emission from the extragalactic background, unresolved sources and instrumental background.\n\nThe binned likelihood analysis was used for both background and source modelling using the {\\it gtlike} tool of the FST. Spectral parameters for sources outside the 3$^\\circ$ region centred at the LS I $+61^{\\circ}303$\\ position were kept fixed. Parameters other than the normalization of the point-like background sources were either fixed or varied based on their strength and distance from the center of the ROI. The light curve was generated over the energy range of 300 MeV -- 300 GeV.\n\n\nFor the Very High Energy (VHE) or TeV band, published data collected during 2005--2011 from the MAGIC \\citep{Albert-2009} and VERITAS \\citep{Acciari-2008,Acciari-2011,Aliu-2013} experiments are used. These are ground-based atmospheric Cherenkov experiments located in La Palma and Arizona, respectively.\n\n\n\\section{Multiwaveband Flux Variation}\n\nIn order to study the variation of flux as a function of orbital and superorbital phases, datasets from various wavebands were folded into 10 superorbital phase bins using the ephemeris given by \\cite{Massi_2016}. Each of these bins corresponds to 163 days. Further, in each phase bin, 10 orbital phase bins were generated. The average flux was estimated in each of these 10 $\\times$ 10 phase bins. X-ray fluxes from Swift-XRT and RXTE-PCA in the 10 $\\times$ 10 orbital vs superorbital bins are shown in the top left and the top right panels of Fig. \\ref{fig:density-plot}, respectively. These panels show a definite pattern in the variation of flux over the orbital and the superorbital phases for both XRT and PCA data. It can be seen from the figure that the source is bright in the orbital phase range $\\sim$ 0.4 -- 0.8 while the corresponding superorbital phase is at $\\sim$ 0.3 -- 0.8. The highest flux in each of the orbital cycles shifts towards apastron passage as the superorbital phase value increases. 
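\n\nAs an illustration of the folding used above, both phases follow directly from the ephemeris ($T_{0}$ = JD 2443366.775, $P_{orb}$ = 26.496 d, $P_{sup}$ = 1626 d). The following minimal sketch assumes each light curve is given as arrays of MJD and flux, and averages the flux on the 10 $\\times$ 10 grid:\n\\begin{verbatim}\nimport numpy as np\n\nMJD0 = 2443366.775 - 2400000.5     # zero orbital phase, converted to MJD\nP_ORB, P_SUP = 26.496, 1626.0      # days\n\ndef fold(mjd, flux, n_sup=10, n_orb=10):\n    phi_orb = ((mjd - MJD0) / P_ORB) % 1.0\n    phi_sup = ((mjd - MJD0) / P_SUP) % 1.0\n    grid = np.full((n_sup, n_orb), np.nan)\n    for i in range(n_sup):\n        for j in range(n_orb):\n            m = ((phi_sup >= i / n_sup) & (phi_sup < (i + 1) / n_sup) &\n                 (phi_orb >= j / n_orb) & (phi_orb < (j + 1) / n_orb))\n            if m.any():\n                grid[i, j] = flux[m].mean()\n    return grid  # mean flux per (superorbital, orbital) phase bin\n\\end{verbatim}\n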
Similar\nplots generated for the $\\gamma$-ray data from Fermi-LAT and 15 GHz radio data from\nOVRO are given in the middle panels of Fig. \\ref{fig:density-plot}. \nAs noted by earlier \nstudies, there is a definite shift in the pattern for the radio data: the maximum flux\nis at the same orbital phase range (0.4 -- 0.8) as in the X-ray data, but the super \norbital phase range is shifted to 0.7 to 1.4.\nThe Fermi-LAT data shows some indication of enhanced emission in the orbital phase 0.4 to 0.8, but\nunlike the other wavebands, the enhancements in the super orbital phases are not very clear.\nPlots for VHE $\\gamma$-ray data from VERITAS and MAGIC\nare shown in bottom panels of Fig. \\ref{fig:density-plot}. But the data-set is not \nextensive enough to detect any trend in this case.\n\n\\begin{figure*}[t]\n\\begin{tabular}{cc}\n\\centering\n\\includegraphics[scale=0.32,angle=-90]{fig1a.eps}\n\\includegraphics[scale=0.32,angle=-90]{fig1b.eps}\n\\end{tabular}\n\\\\\n\\begin{tabular}{cc}\n\\centering\n\\includegraphics[scale=0.32,angle=-90]{fig1c.eps}\n\\includegraphics[scale=0.32,angle=-90]{fig1d.eps}\n\\end{tabular}\n\\\\\n\\begin{tabular}{cc}\n\\centering\n\\includegraphics[scale=0.32,angle=-90]{fig1e.eps}\n\\includegraphics[scale=0.32,angle=-90]{fig1f.eps}\n\\end{tabular}\n\\caption{Multiwaveband flux as a function of orbital and superorbital\nphases. Top panels show X-ray flux from XRT data in the left panel and PCA\ndata in the right panel. Middle left panel corresponds to $\\gamma$-ray data \nfrom Fermi-LAT and middle right panel shows radio data from OVRO. Bottom\npanels correspond to VHE $\\gamma$-ray flux from VERITAS (left panel)\nand MAGIC (right panel). Flux values in each panel are normalized setting\nmedian flux to 125, i.e. the middle of the scale.}\n\\label{fig:density-plot}\n\\end{figure*}\n\n\nTo investigate this aspect further, variation of the flux as a function of superorbital\nphase was studied in various orbital phase bins. Variation of the X-ray count rates from\nSwift-XRT with superorbital phase is shown in the top left panel of Fig. \\ref{fig:xrt-lc2}. \nSimilar plots for RXTE-PCA, Fermi-LAT and OVRO are shown in the top right, bottom\nleft and bottom right panels of the same figure, respectively. \nIn each panel, different curves from bottom to top \ncorrespond to orbital phases 0-0.1, 0.1-0.2, .. , 0.9-1.0. Curves are shifted with \nrespect to each other for the sake of clarity. Error bars correspond to the standard \ndeviation in each bin. To parameterize this variation, data are fitted with a constant \nand alternatively with a sine function of the form \n$f(t) = f_o + A \\times sin(\\phi_s - \\phi_o)$, where $f_o$, $A$ and $\\phi_o$ are model\nparameters and $\\phi_s$ is the superorbital phase. Sine function with a period of 1626 \ndays gives a better fit than the constant. \nIt can be seen from the figure that there is a definite \nshift in the superorbital phase for the peak flux with respect to the orbital phase.\nPhase at the peak of the function, peak function value and the ratio of the maximum to the minimum \nfunction values are listed in Table ~\\ref{tab:super_peak_orb}. Results are given only for \nthe cases where modulation is seen clearly in Fig. ~\\ref{fig:xrt-lc2}. \nFig. 
~\\ref{fig:super_peak_orb} shows \nthese results graphically, where superorbital phase for peak of the function value is \nplotted as a function of orbital phase bins for XRT, PCA, Fermi-LAT and OVRO data.\nThis figure clearly shows the trend of increasing superorbital phase for peak as a\nfunction of orbital phase near apastron. The wavelength dependent phase difference \nbetween superorbital phase for given orbital phase bin is also evident. This\ndifference remains more or less constant in various orbital phase bins near apastron.\n \n\n\n\\begin{figure*}[t]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[scale=0.45]{fig2a.eps}\n\\includegraphics[scale=0.45]{fig2b.eps}\n\\end{tabular}\n\\\\\n\\begin{tabular}{cc}\n\\includegraphics[scale=0.45]{fig2c.eps}\n\\includegraphics[scale=0.45]{fig2d.eps}\n\\end{tabular}\n\\caption{Variation of flux with superorbital phase for XRT (top left),\nPCA (top right), Fermi-LAT (bottom left) and OVRO (bottom right). In\neach panel curves from bottom to top correspond to orbital phases \n0-0.1, 0.1-0.2, .. , 0.9-1.0. These curves are shifted along Y-axis \nfor the sake of clarity.}\n\\label{fig:xrt-lc2}\n\\end{figure*}\n\n\n\n\\begin{deluxetable*}{ccccccccccccc}\n\\tabletypesize{\\scriptsize}\n\\tablecaption{Peak flux and corresponding superorbital (SO) phase from sine function fit in various orbital phase bins \\label{tab:super_peak_orb}}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Orbital} & \\multicolumn{3}{c}{XRT} & \\multicolumn{3}{c}{PCA} &\\multicolumn{3}{c}{Fermi-LAT} &\\multicolumn{3}{c}{OVRO} \\\\\n\\colhead{Phase} & \\colhead{SO} & \\colhead{peak flux} & \\colhead{ratio} & \\colhead{SO} & \\colhead{peak flux} & \\colhead{ratio} & \\colhead{SO} & \\colhead{peak flux} & \\colhead{ratio} & \\colhead{SO} & \\colhead{peak flux} & \\colhead{ratio} \\\\\n\\colhead{} & \\colhead{phase} & \\colhead{($10^{-1} ph$} & \\colhead{max} & \\colhead{phase} & \\colhead{($10^{-1} ph$} & \\colhead{max} & \\colhead{phase} & \\colhead{($10^{-1} ph$} & \\colhead{max} & \\colhead{phase} & \\colhead{($10^{-1} ph$} & \\colhead{max} \\\\\n\\colhead{} & \\colhead{at peak} & \\colhead{$ cm^{-2}~s^{-1}$)} & \\colhead{\/min)} & \\colhead{at peak} & \\colhead{$ cm^{-2}~s^{-1}$)} & \\colhead{\/min)} & \\colhead{at peak} & \\colhead{$ cm^{-2}~s^{-1}$)} & \\colhead{\/min)} & \\colhead{at peak} & \\colhead{$ cm^{-2}~s^{-1}$)} & \\colhead{\/min)}\\\\\n}\n\\startdata\n0.0--0.1 & - & - & - & - & - & - & - & - & - &0.30 & 4.19 &23.88\\\\\n0.1--0.2 & - & - & - & - & - & - & - & - & - &0.26 & 2.22 &3.25\\\\\n0.2--0.3 & - & - & - & - & - & - & - & - & - &0.58 & 2.32 &1.86 \\\\\n0.3--0.4 & - & - & - & - & - & - & - & - & - &0.70 & 2.51 &2.01\\\\\n0.4--0.5 & 0.28 & 2.79 &3.16 &0.26 & 1.60 & 1.80 &0.28 & 2.21 & 1.13 &0.80 & 8.73 &7.75\\\\\n0.5--0.6 & 0.38 & 3.12 &1.86 &0.54 & 1.83 & 1.68 &0.30 & 2.60 & 1.53 &0.90 & 6.50 &2.58\\\\\n0.6--0.7 & 0.50 & 3.26 &3.55 &0.46 & 1.75 & 1.70 &0.34 & 2.71 & 1.47 &0.06 & 9.31 &3.10\\\\\n0.7--0.8 & 0.60 & 3.27 &3.79 &0.62 & 1.64 & 2.12 &0.34 & 2.78 & 2.06 &0.10 & 9.93 &2.85\\\\\n0.8--0.9 & 0.60 & 2.53 &3.19 &0.66 & 1.46 & 2.50 &0.36 & 2.54 & 2.60 &0.30 & 6.41 &6.91\\\\\n0.9--1.0 & 0.84 & 2.34 &2.71 &0.78 & 1.16 & 1.51 &0.42 & 2.28 & 1.89 &0.26 & 3.82 &1.82\\\\\n\\enddata\n\\end{deluxetable*}\n\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.35,angle=0]{fig3.eps}\n\\caption{Superorbital phase at peak flux from fitted sinusoidal function given in \nTable ~\\ref{tab:super_peak_orb}\nas a function of orbital phase bins for XRT, PCA, 
Fermi-LAT and OVRO data. Positions\nfor apastron (A) and superior conjunction (SC) are marked.}\n\\label{fig:super_peak_orb}\n\\end{figure}\n\n\n\n\n\\section{Spectral Energy Distribution}\\label{sec:sed}\n\nWe have investigated the spectral properties of the source at\ndifferent orbital and superorbital phases. Following Figure \\ref{fig:density-plot}\n three different regions were chosen.\nTwo of the regions are bright in most of the wavebands and the third one is of low brightness. \nThese regions are i) Super orbital phase: 0.3 -- 0.5 Orbital phase: 0.6 -- 0.8, \nii) Super orbital phase: 0.5 -- 0.7 Orbital phase: 0.6 -- 0.8 and iii) Super orbital \nphase: 0.0 -- 0.2 Orbital phase, 0.0 -- 0.2 (hereafter state1, state2, and state3, \nrespectively). X-ray and Fermi-LAT spectral data were analysed for these three regions.\nFermi-LAT analysis procedure is already described in section 2. We have analysed\nspectral data from Swift-XRT and RXTE-PCA corresponding to the states mentioned above.\nSome details of these observations are given in Table \\ref{tab:xrt-pca-log}.\nDates for XRT and PCA observations for each of the three states are listed in the\ntable along with the total observation duration. \n\nIn case of XRT we have fitted spectrum over the energy range of 0.3 to 10 keV.\nSource and background photons were selected using the tool XSELECT. Data were recorded in\nPhoton Counting (PC) mode for these observations. Source photons were selected from a circular\nregion with the radius of 20 pixels (i.e. 47 arc-seconds), whereas \nnearby circular region with radius of 40 pixels was used for extracting \nbackground photons.\nEvents with grades 0-12 were selected in this analysis. \nThe spectral data were rebinned using tool GRPPHA with 20 photons per bin. \nStandard response matrices and ancillary response files\nwere used.\n\nIn case of PCA, standard 2 data with time resolution of 16 s and 128 channels\nof energy information were used. Data were analysed using HEASOFT (version 6.15). \nFor each observation, data were filtered using the standard procedure\ngiven in the RXTE Cook Book. The tool 'pcabackest' was used for generation\nof background model, calibration files for 'faint' source (less than 40 ct\/sec\/PCU)\nfrom RXTE GOF were used. To improve statistics, only\ndata from top layer of PCU2 was used.\n\n\n\n\n\n\\begin{deluxetable*}{ccccc}\n\\tabletypesize{\\scriptsize}\n\\tablecaption{Observation log for XRT and PCA \\label{tab:xrt-pca-log}}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{State} & \\colhead{Instrument} & \\colhead{Observation dates} & \\colhead{Number of} & \\colhead{Total duration}\\\\\n\\colhead{} & \\colhead{} & \\colhead{} & \\colhead{observations} & \\colhead{seconds}\\\\\n}\n\\startdata\n1 & XRT & 2010-10-22, 2010-11-18, 2010-12-17, 2014-10-18, & 13 &15805 \\\\\nSuperorb. phase : 0.3--0.5 & & 2014-10-20 - 2014-10-23, 2014-11-14, 2014-11-15, & & \\\\\nOrbital phase : 0.6--0.8 & & 2014-12-11 - 2014-12-13 && \\\\\n & PCA & 2010-02-25, 2010-02-28, 2010-03-24, 2010-03-28, & 18 & 22416 \\\\\n & & 2010-04-20, 2010-04-22, 2010-05-16, 2010-05-18, & & \\\\\n & & 2010-06-14, 2010-07-10, 2010-08-05, 2010-09-02, & & \\\\\n & & 2010-09-26, 2010-09-30, 2010-10-25, 2010-11-17, & &\\\\\n & & 2010-11-21, 2010-12-16 &&\\\\\n\\hline\n2 & XRT & 2006-09-05, 2006-11-21 - 2006-11-24, 2006-12-18, & 10 & 20147 \\\\\nSuperorb. 
phase : 0.5 -- 0.7 & & 2006-12-20, 2006-12-22, 2011-01-14, 2011-10-01 &&\\\\\nOrbital phase : 0.6 -- 0.8 & PCA & 2006-10-27, 2006-10-29, 2011-01-09, 2011-01-13, & 20 & 24384 \\\\\n & & 2011-02-06, 2011-02-09, 2011-03-06, 2011-03-31, && \\\\\n & & 2011-04-03, 2011-04-28, 2011-05-22, 2011-05-26, && \\\\\n & & 2011-06-19, 2011-07-13, 2011-07-17, 2011-08-10, && \\\\\n & & 2011-08-14, 2011-09-07, 2011-10-03, 20011-10-30 && \\\\\n\\hline\n3 & XRT & 2008-10-22, 2008-11-19, 2008-12-17, 2013-11-23, & 6 & 10266 \\\\\nSuperorb. phase : 0.0 -- 0.2 & & 2013-12-14, 2014-01-11 && \\\\\nOrbital phase : 0.0 -- 0.2 & PCA & 2008-10-22, 2008-10-25, 2008-11-17, 2008-11-19, & 21 & 32912 \\\\\n & & 2008-12-13, 2008-12-17, 2009-01-10, 2009-02-04, && \\\\\n & & 2009-02-07, 2009-03-05, 2009-03-29, 2009-04-02, && \\\\\n & & 2009-04-26, 2009-05-21, 2009-05-25, 2009-06-18, && \\\\\n & & 2009-07-12, 2009-07-15, 2009-08-08 &&\\\\\n\\enddata\n\\end{deluxetable*}\n\nA combined spectral fit was performed for XRT and PCA data. The PCA spectrum was\nnormalized with the XRT spectrum for this purpose. The XRT and PCA spectra covering the energy range\nof 0.7-20 keV were fitted by using XSPEC with a powerlaw with the line-of-sight absorption,\nwhich was kept free during the fit.\nModel\nparameters for the combined fit as well as for only XRT data are listed in Table\n\\ref{tab:fit-params-xrt-pca}.\nSince the bandwidth of the data is quite limited, we find a correlation\nbetween the power-law index and the absorption, indicating that a steeper power-law\nis compensated by a large absorption. For the wide band fitting, we use the joint\nXRT-PCA fit because the higher energy data from PCA constrains the power-law better.\n\n\n\n\n\\begin{deluxetable}{cccc}\n\\tablecaption{Best-fit parameters of a power-law (with absorption) fit to the data for XRT and PCA \\label{tab:fit-params-xrt-pca}}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{} &\\multicolumn{3}{c}{Only XRT}\\\\\n\\colhead{} & \\colhead{$N_H$ (10$^{22}$ cm$^{-2}$)} & \\colhead{alpha} & \\colhead{norm}\\\\\n}\n\\startdata\nstate1 & 0.68$\\pm$0.05 & 1.58$\\pm$0.06 & (2.51$\\pm$0.21)$\\times 10^{-3}$ \\\\\nstate2 & 0.70$\\pm$0.05 & 1.53$\\pm$0.05 & (3.08$\\pm$0.20)$\\times 10^{-3}$ \\\\\nstate3 & 0.69$\\pm$0.11 & 1.47$\\pm$0.12 & (1.25$\\pm$0.20)$\\times 10^{-3}$ \\\\\n\\hline\n& \\multicolumn{3}{c}{XRT+PCA (all layers)} \\\\\n& $N_H$ (10$^{22}$ cm$^{-2}$) & alpha & norm \\\\\n\\hline\nstate1 & 0.81$\\pm$0.04 & 1.79$\\pm$0.03 & (3.22$\\pm$0.15)$\\times 10^{-3}$ \\\\\nstate2 & 0.90$\\pm$0.03 & 1.78$\\pm$0.03 & (4.27$\\pm$0.17)$\\times 10^{-3}$ \\\\\nstate3 & 1.05$\\pm$0.08 & 1.95$\\pm$0.05 & (2.25$\\pm$0.18)$\\times 10^{-3}$ \\\\\n\\enddata\n\\end{deluxetable}\n\n\n\n\nFermi-LAT SEDs for the three states fitted with a cutoff powerlaw are given in Fig.\n\\ref{fig:fermi-sed} and model parameters are listed in Table \\ref{tab:fermi-param}.\nSome differences are seen in the spectral indices for $\\gamma$-rays between state1\nand other states (see Table \\ref{tab:fermi-param}). In case of X-ray data some \nsteepening of the spectrum is seen as flux decreases, as indicated by variation in\nspectral index (see Table \\ref{tab:fit-params-xrt-pca}).\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.45]{fig4.eps}\n\\caption{A cutoff power law fit to Fermi-LAT data for the three different states. 
Best-fit curves are shown\nas solid lines.}\n\\label{fig:fermi-sed}\n\\end{figure}\n\n\n\\begin{deluxetable}{cccc}\n\\tablecaption{Parameters of a cutoff power law fit to the Fermi-LAT data for three different states \\label{tab:fermi-param}}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{parameters} & \\colhead{state1} & \\colhead{state2} & \\colhead{state3}\n}\n\\startdata\n$\\alpha$ & 2.31 & 2.12 & 2.12 \\\\\nflux & 2.48 $\\times 10^{-7}$ & 2.41$\\times 10^{-7}$ & 2.28$\\times 10^{-7}$ \\\\\nEc (MeV) & 30041 & 10000 & 6338 \\\\\nTS & 2663 & 2727 & 2328 \\\\\n\\enddata\n\\end{deluxetable}\n\n\n\n\nWe have investigated the spectral energy distributions (SEDs) of the source. The state3 \ndoes not have TeV data and hence we have made a detailed SED study for the\nother two states. These states are bright in all wavebands and hence can be used as\na template to understand the emission mechanisms. \nVERITAS spectral data obtained from \\cite{Acciari-2011} corresponds to state1.\nFor radio flux, the average of 15 GHz data from OVRO described in section 2 \nis used. This sets an upper limit on the modelled radio flux. In addition, \nwe have also plotted radio data from \\citet{Strickman_1998}, which correspond to orbital \nphase of 0.8 and superorbital phase of 0.8.\n\n\n\nSince LS I $+61^{\\circ}303$\\ is identified as a potential microquasar based on radio observations, \nhigh energy emission is likely to be produced in jets. In case of microquasars, \ncompact object could be a neutron star or a black hole accreting matter from a\ncompanion star which presumably drives relativistic outflow or jet from the\ncompact object. Acceleration of charged particles in the jet produces high\nenergy emission. We have considered this scenario to model the SEDs. In the\ncontext of leptonic model, the low energy emission arises from Synchrotron emission\nfrom ultra-relativistic electrons in the jet. Whereas the high energy emission\narises from inverse Compton scattering of soft photons, which could be either\nsoft photons from Synchrotron radiation (Synchrotron Self-Compton or SSC \nmodel) or photons from companion star or accretion disk (External Compton\nmodel). In this work, relativistic jet making an angle of 30$^\\circ$ \n(\\cite{Gupta-2006} and reference therein) with our line of sight is considered.\nElectrons are assumed to have a broken power-law energy spectrum given by\n\n\\begin{eqnarray}\n{dn_e \\over d\\gamma} \\propto \\begin{cases}\n \\gamma^{-\\alpha} ~\\mbox{for} ~\\gamma <\\gamma_{br} \\nonumber \\\\\n \\gamma^{-\\beta} \\exp{\\left(-{\\gamma \\over \\gamma_c} \\right )} ~ \\mbox{for}~ \\gamma_{br}\\leq \\gamma \\leq \\gamma_c \n \\end{cases}\n\\label{eqn:pi0}\n\\end{eqnarray}\n\nwhere, $n_e$ denotes the number density of the electrons, $\\gamma$ is \nthe Lorentz factor of the electron, $\\alpha$ and $\\beta$ are spectral indices,\n$\\gamma_{br}$ break energy and $\\gamma_c$ the highest energy of the electron.\n\nFor this source, the distance is taken as 2 kpc \\citep{Hutchings-1981,\nFrail-1991} and Lorentz factor for bulk motion is assumed to be\n1.25 \\citep{Massi-2004}. The models are shown for state1 and state2 respectively\nin Figure \\ref{fig:xxx_p} and Figure \\ref{fig:xxx_a}. \n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.45]{fig5.eps}\n\\caption{The SED of LS I $+61^{\\circ}303$\\ for state1.\nThe synchrotron and inverse Compton spectra are calculated using the parameters\nas given in Table \\ref{tab:params_fit_SED}. 
X-ray, Fermi-LAT and \nVERITAS data for state 1 are shown with points in red color. Radio data shown in\nthe figure do not correspond to state 1. The average flux from OVRO is shown with \nfilled circle of red color, whereas radio data from \\citet{Strickman_1998} is shown \nwith brown triangles.}\n\\label{fig:xxx_p}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.45]{fig6.eps}\n\\caption{The SED of LS I $+61^{\\circ}303$\\ for state2.\nThe synchrotron and inverse Compton spectra are calculated using the same parameters\nas given in Table \\ref{tab:params_fit_SED}. X-ray and Fermi-LAT data \nfor state 2 are shown with points in blue color. Since, VERITAS data for state 2 is \nnot available, state 1 VERITAS data is used which is shown in brown colour. Radio \ndata are the same as in Fig.~\\ref{fig:xxx_p}.}\n\\label{fig:xxx_a}\n\\end{figure}\n\nHere it is assumed that the radio, X-ray and $\\gamma$-ray emissions originate in the\nsame region and the magnetic field in emission blob is quite high, of the\norder of $10^3$ G. Rest of the model parameters are fitted \nand these parameters are listed in Table \\ref{tab:params_fit_SED}.\nTo explain the TeV $\\gamma$-ray emission it was necessary to include IC of\nphotons from accretion disk or companion star in addition to the SSC\ncomponent. Radiation density ($U_{rad}$) is estimated from luminosity \n$L$ using expression $U_{rad} = L\/ {4 \\pi R^2 c}$, where $R$ is the \ndistance of the emission volume from the companion star or the accretion disk. \nRadiation density \nfrom the companion star, with $L_c = 2 \\times 10^{38}$ erg s$^{-1}$ and a distance of \n$R~\\sim 10^{12}$ cm, is about 4 orders of magnitude higher than the\ncorresponding radiation density from the accretion disk. Hence, we have considered \nonly the seed photons from the companion star for the External Compton model. However, \nthis spectrum cannot explain the observed data as seen from Figure \\ref{fig:xxx_p} \nand Figure \\ref{fig:xxx_a}. In this case, we have considered radius of the emission \nvolume as a parameter for the fit to the data. We can also estimate the radius of \nemission volume from the variability time scale of the source. We fixed the size \nof the emission region according to the estimates from variability study \\citep{Smith-2009},\nwhich indicates a possible size of the emission region to be $\\sim 6 \\times 10^{10}$ cm.\nConsidering that the bulk Lorentz factor is 1.25, this size corresponds to\n$\\sim~7.5 \\times 10^{10}$ cm. Fixing the emission region size to this value, \nmodel parameters were estimated which are given in the last column of Table \\ref{tab:params_fit_SED}.\nAlthough the synchrotron spectrum explains the observed fluxes from radio to MeV--GeV \nenergies, SSC spectrum alone cannot fit the data well. Hence we have also estimated \nthe contribution of companion star photons for this low magnetic field case and we \nfound that the external Compton model overestimates the observed flux at MeV--TeV \nenergies. However, SSC and EC models together can explain the data well if the \ncompanion star luminosity is considered to be reduced by a factor of 10. This is \nshown in Figure \\ref{fig:xxx_new_B.eps}.\n\nIn the spectral fitting, we did not consider the radio data (triangles \nin Fig.~\\ref{fig:xxx_a}, \\ref{fig:xxx_p} and \\ref{fig:xxx_new_B.eps}) from VLA observation \n\\citep{Strickman_1998} in fit, since the orbital and superorbital phases for these are \ndifferent from the phases for state 1 and state 2. 
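\n\nAs a simple check of the seed-photon comparison above, the radiation density of the companion-star photons at the emission region follows directly from the quoted numbers (the corresponding accretion-disk term depends on the adopted disk luminosity and distance and is not repeated here):\n\\begin{verbatim}\nimport math\n\nc = 2.998e10       # speed of light, cm s^-1\nL_star = 2e38      # companion-star luminosity, erg s^-1\nR = 1e12           # distance of the emission region from the star, cm\n\nU_rad = L_star / (4 * math.pi * R**2 * c)\nprint(U_rad)       # ~5e2 erg cm^-3\n\\end{verbatim}\n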
Since, the energy spectrum is not \navailable for OVRO data, we have used average flux as an upper-limit for SED modelling, \nand for the chosen set of parameters the model does not overestimate radio fluxes for \nthe states considered above.\n\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.45]{fig7.eps}\n\\caption{The SED of LS I $+61^{\\circ}303$\\ for state1.\nThe synchrotron and inverse Compton spectra are calculated using the same parameters\ngiven in the last column of Table \\ref{tab:params_fit_SED}, with the emission region radius\ndecided from the variability time scale.}\n\\label{fig:xxx_new_B.eps}\n\\end{figure}\n\n\\begin{deluxetable}{cccc}\n\\tablecaption{Parameters of the fit for microquasar scenario \\label{tab:params_fit_SED}}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{Parameters} & \\colhead{state1} & \\colhead{state2} & \\colhead{state1 (radius from} \\\\\n\\colhead{} & \\colhead{} & \\colhead{} & \\colhead{variability study)} \\\\\n}\n\\startdata\n Magnetic field (Gauss) & 5 $\\times 10^3$ & 5 $\\times 10^{3}$ & 15 \\\\\n$\\gamma_{min}$ & 4.4 & 4.9 & 110 \\\\\n $\\gamma_{max}$ & 5.6 $\\times 10^6$ & 5.4 $\\times 10^6$ & 9.0 $\\times 10^7$ \\\\\n\n spectral index ($\\alpha$) & 2.53 & 2.55 & 2.7 \\\\\n spectral index ($\\beta$ ) & 2.34 & 2.40 & 2.4 \\\\\n radius (cm) & 11.5 $\\times 10^7$ & 18.0 $\\times 10^7$ & 7.5 $\\times 10^{10}$ \\\\\n Gamma Break & 1.0 $\\times 10^5$ & 1.4 $\\times 10^7$ & 9.0 $\\times10^4$ \\\\\n Bulk Lorentz factor & 1.25 & 1.25 & 1.25 \\\\\n Distance (kpc) & 2.0 & 2.0 & 2.0 \\\\\nInclination angle(deg) & 30.0 & 30.0 & 30.0 \\\\\n\n Luminosity (erg\/s) & 3.9 $\\times 10^{35}$ & 3.8 $\\times 10^{35}$ & 4.3 $\\times 10^{35}$ \\\\\n\\enddata\n\\end{deluxetable}\n\n\n\n\n\\section{Discussion and Conclusions}\\label{sec:discussion}\n\n\nLong-term timing analyses of LS I $+61^{\\circ}303$\\ at different wavelengths has shown some of the \ninteresting characteristics of the source. Flux in various wavebands shows \nvariation with superorbital phase and this variation is wavelength dependent as well\nas the binary phase dependent.\nAt X-ray energies, as evident from Figure \\ref{fig:density-plot}, the source is\nbright at orbital phases $\\sim 0.4-0.8$ and superorbital phases of $\\sim\n0.3-0.8$. Whereas at radio energies, the source is bright at orbital phases of\n$\\sim 0.4-0.8$ and superorbital phases of $\\sim 0.7-1.4$. \nThe $\\gamma$-ray flux in MeV-GeV band as given by Fermi-LAT\nshows a shift relative to the radio and the X-ray bands.\nThis behaviour possibly indicates that radio,\nX-ray and $\\gamma$-ray emissions could be originating from different regions.\n\nThe long-term superorbital modulation of flux could support the scenario where \ncircumstellar disk of a Be star quasi-cyclically expands and shrinks (e.g., \n\\cite{Negueruela-2001}). However, for such a scenario the long-term period is \nvariable from cycle to cycle \\citep{Rivinius_2013}. Recent analysis of radio \ndata established the fact that the long-term period is quite stable over 8 \ncycles \\citep{Massi_2016} which makes the scenario of quasi-cyclic variation \nof circumstellar disk of the Be star for LS I $+61^{\\circ}303$\\ less probable. This stable \nsuperobital modulation is attributed to periodic Doppler boosting effects \nof the precessional jets associated with the compact objects \\citep{Massi_2014}.\n\nIn this paper, we have seen that the modulation of flux with superorbital phase \nis more prominent in orbital phase bins near apastron. 
This is clearly seen at \nvarious wavelengths in Figures \\ref{fig:xrt-lc2} and \\ref{fig:super_peak_orb}. \nAlthough the long-term superorbital variation does not support the variation of \ncircumstellar disk size, this type of superorbital modulation near the apastron \ncould stem from the interaction of the compact object with the circumstellar disk \nof the Be star. The equivalent width (EW) of the H$\\alpha$ emission line is \nrelated to the size of the stellar disk \\citep{Zamanov-Mart-2000,Grundstrom_2006}. \nIn addition to that, it has been found that the maximum of the EW of H$\\alpha$ \noccurs in a region around superorbital phase of $\\sim$ 0.4 (see \\cite{Zamanov-1999,\nZamanov-Mart-2000}) considering superorbital period of 1584 days. However, if we \nuse superorbital period as 1626 days then the maximum of the EW of H$\\alpha$ \noccurs at $\\sim$ 0.3. From Figure \\ref{fig:super_peak_orb} we see that flux \nof gamma-rays is high at the superorbital phase of $\\sim$ 0.3--0.5, which suggests \nthat the disk plays an important role\nin modulating $\\gamma$-rays. Although, a similar enhancement of X-rays at superorbital\nphase of 0.2 is seen by \\cite{Li-2011} considering only peak flux, we see that X-ray\nflux peaks at the superorbital phases in the range of $\\sim$ 0.4--0.8 depending on the\norbital phase. We see that the peak of radio flux is shifted further. It suggests that even \nif the disk size plays a significant role for $\\gamma$-rays, X-ray and radio fluxes \nare not necessarily affected much by the size of the disk.\n\n\n\nFigure \\ref{fig:xrt-lc2} shows that, for all wavebands, the superorbital variability \nis not significant in the periastron region, whereas it is significant at the apastron.\nThis can support the scenario where one assumes that the interaction between\ncompact object and the circumstellar disk of Be star is strong when compact object is in the\nproximity of Be star.\nAs a result, superorbital modulation effect becomes insignificant as \nsuggested by \\cite{Ackermann-2013}. However, it becomes dominant as the compact object \nstarts moving towards the apastron region. \n\nAnother possible scenario for the modulation is related to the precession of the Be star disk about \nthe orbital plane. If this scenario is adopted for possible \nexplanation of the \nstrong superorbital modulation in the apastron phase, then the angular distance between \norbital plane and the disk plane should become minimum. As a result, even if the compact \nobject is far from the Be star, the smaller angular distance between disk plane and orbital \nplane provides relatively higher interaction of the compact object with the disk. \n\nIn addition to the superorbital modulation in the apastron phase (0.5 -- 1.0) we have seen \nphase lag among radio, X-ray and $\\gamma$-rays. The possible explanation for the constant \nphase lag between X-ray and radio is that the plasma blobs filled with \nhigh-energy particles may escape from the X-ray emission region to the radio emission region \nwhich is at a distance of $\\sim$ \n10 times the binary separation distance as proposed by \\cite{Chernyakova-2012} in the context \nof pulsar wind scenario. However, in the microquasar scenario, different regions in the jets can be \nresponsible for the phase lag. We have also seen the phase lag between radio and \n$\\gamma$-rays. In such binary systems, $\\gamma$-rays are considered to be produced by \nup-scattering of radio photons or accretion disk\/star photons. 
If the $\\gamma$-rays are originating through the up-scattering of radio photons which are being produced by the same population of electrons, then there should not be any phase lag between radio and $\\gamma$-rays. Hence, up-scattering of a separate population of photons could be a possible explanation for the phase lag between the $\\gamma$-ray and radio fluxes.\n\nIn addition to the timing analysis, we also tried to understand the spectral behaviour of the source at different orbital and superorbital phases. We have chosen three different regions following the flux variations for X-rays, radio, and $\\gamma$-rays as a function of orbital and superorbital phase. From Figure \\ref{fig:density-plot} we see that the source at high energy is mostly very active in the orbital phase bin of 0.5--0.8 and superorbital phase bin 0.3--0.7. We selected two different regions with superorbital phase 0.3--0.5, orbital phase 0.6--0.8 (state1) and superorbital phase 0.5--0.7, orbital phase 0.6--0.8 (state2) from this region where the source is bright at all wavelengths. To compare the spectral variation with the other orbital and superorbital phases where the source is not bright, we have chosen a region with superorbital phase 0.0--0.2 and orbital phase 0.0--0.2 (state3). Based on these three different regions of orbital and superorbital phases, we have analysed X-ray and Fermi-LAT data to see the spectral behaviour of the source at high energies. We found no significant differences between the flux levels, but we see some variations in the spectral indices at Fermi-LAT energies. However, we see some difference in both the spectral indices and flux levels for the XRT-PCA data, although an interplay between the spectral shape and the absorption contributing to this trend cannot be ruled out.\n\n\nFrom the fit to the SED we have seen that we can explain the data well considering LS I $+61^{\\circ}303$\\ as a microquasar. In the microquasar model, it is generally assumed that the high energy emission comes from a region which is very close to the compact object, to reduce the effect of $\\gamma \\gamma$ absorption due to the radiation field of the companion star \\citep{Gupta-2006_a}. The magnetic field in this region is relatively high, as considered for our model, and we have estimated an emission region size of the order of $\\sim 10^{8}$ cm. In this emission volume, some of the emitted $\\gamma$-rays can be absorbed through the $e^+e^-$ pair creation process due to X-ray photons in the emission volume. We have estimated that about 20\\% of the $\\gamma$-rays will be absorbed at TeV energies. However, a larger emission volume will make this absorption insignificant. We have seen that it is possible to have lower values of the magnetic field strength to explain the observed data in the case of a larger emission volume.\n\nWe have also seen in section \\ref{sec:sed} that if we consider the radius of the emission volume obtained from the variability study, the magnetic field from the model fit to the data is estimated to be $\\sim 10$ G. However, we found that the SSC model alone cannot explain the TeV data well and the EC model overestimates the observed flux for a companion star luminosity of $\\sim 10^{38}$ erg s$^{-1}$. A lower value of this luminosity ($\\sim 10^{37}$ erg s$^{-1}$) can explain the data well. 
This suggests that lower values of the magnetic field in the emission blobs are suitable for LS I $+61^{\\circ}303$\\ to explain the observed data, constraining the luminosity of the companion star. With a high magnetic field, the synchrotron cooling timescale is much smaller than the variability timescale, which could be as low as 2 seconds as estimated by \\cite{Smith-2009}. In our SED fitting, we have considered that the emitting blob is close to the compact object. The blob size increases as it moves away from the compact object in the jets and the magnetic field decreases. The time-averaged values of flux from a particular region in the jet, as considered by \\cite{Gupta-2006_a}, could reduce the discrepancy between the synchrotron cooling timescale and the timescale of X-ray flux variability. From the SED, it seems that a single emission process is responsible for the X-ray and MeV--GeV data. Hence, we have considered the synchrotron emission process to explain the data up to GeV energies. As a result, a high magnetic field is required to explain the data if the maximum energy of the high energy electrons is not well above $\\sim$ 1 TeV. Good quality data in the hard X-ray region can establish whether a different emission component is required to explain the data in the MeV--GeV region. It can also indicate whether we need a different population of electrons to explain the data in different energy bands of the SED.\n\nWe have also seen that the fitted model parameters show a hardening of the spectral index after the break (see Table \\ref{tab:params_fit_SED}). In addition, the flux levels for the different states (mainly in X-rays) are different. A change in the location of the compact object relative to the companion star during the orbital and the superorbital cycles, and its interaction with the circumstellar disk, could be responsible for the changing electron spectral distribution.\n\nIn the context of the timing analysis, we have seen a phase lag among the radio, X-ray and $\\gamma$-ray data, which may suggest that they originate from different emission regions. However, in our present spectral modelling we have considered a single emission zone to explain the multiwavelength data. To support the scenario of different origins we need simultaneous multiwavelength data over a longer period, both for timing and spectral analysis. At present, we have such observations for radio, X-rays and MeV--GeV gamma-rays. However, GeV--TeV data are also required to get a complete understanding of the source across frequencies.\n\nThe following major conclusions can be drawn, based on the study presented here:\n\n\\begin{itemize}\n\\item\n{The superorbital modulation is more pronounced near the apastron for all wavelengths, supporting geometric scenarios as a cause for the superorbital modulation.\n}\n\\item\n{There is a definite variation of the superorbital phase of maximum flux with respect to the binary phase, and this variation shows a wavelength-dependent shift.\n}\n\\item\n{Emission from radio to GeV gamma-rays during the maximum emission can be modelled by a one-zone microquasar jet model. To explain the TeV emission, Comptonization of an external photon field (an External Compton component) is necessary, especially when a low magnetic field is assumed. 
\nIn this case, we suggest that the photons from the companion star, with a lower luminosity\n($\\sim$10$^{37}$ erg s$^{-1}$), is adequate to explain the data.\n}\n\\item\n{Extended hard X-ray data would be necessary to constrain the synchrotron model and \nTeV observations across a super orbital cycle, along with X-ray measurements, would\nbe required to make a detailed emission model for this source.\n}\n\\end{itemize}\n\n\n\n\n\n\\section*{Acknowledgements}\n\nWe acknowledge the use of data from the High Energy Astrophysics Science Archive\nResearch Center (HEASARC), provided by NASA's Goddard Space Flight Center. Also the data\nsupplied by the UK Swift Science Data Centre at the University of Leicester has been used\nin present work.\nWe thank Hovatta Talvikki for providing us data from OVRO 40-m monitoring program which was used in \nthe research by \\citep{Richards-2011}(supported in part by NASA grants NNX08AW31G \nand NNX11A043G, and NSF grants AST-0808050 and AST-1109911). We acknowledge the use of Fermi-LAT data \nand analysis tool from Fermi Science Support Center. We would also like to thank MAGIC collaboration \nfor making their published data public which has been used in this work. We also acknowledge VERITAS\ncollaboration for their published data used in this work.\n\n\n\\bibliographystyle{apj}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\nPre-trained models are commonly used as backbones of machine learning systems.\nIn practice, we often want to \\textit{edit} models after pre-training,\\footnote{We use the term \\textit{editing} to refer to any intervention done to a model done after the pre-training stage.} to improve performance on downstream tasks \\citep{zhuang2020comprehensive,wortsman2021robust,matena2021merging,ilharco2022patching}, mitigate biases or unwanted behavior \\citep{shibani2021editing,lu2022quark,ribeiro2022adaptive,murty2022fixing},\nalign models with human preferences \\citep{askell2021general,ouyang2022training,kasirzadeh2022conversation,sparrow},\nor update models with new information \\citep{zhu2020modifying,de2021editing,mitchell2021fast,mitchell2022memory}.\n\nIn this work, we present a new paradigm for editing neural networks based on \\emph{task vectors}, which encode the information necessary to do well on a given task.\nInspired by recent work on weight interpolation \\citep{frankle2020linear,wortsman2021robust,matena2021merging,wortsman2022model,ilharco2022patching,li2022branch,ainsworth2022git}, we obtain such vectors by taking the weights of a model fine-tuned on a task and subtracting the corresponding pre-trained weights (Figure \\ref{fig:main}a).\n\nWe show that we can edit a variety of models with \\emph{task arithmetic}---performing simple arithmetic operations on task vectors (Figure \\ref{fig:main}b-d). For example, \\emph{negating} a vector can be used to remove undesirable behaviors or unlearn tasks, while \\emph{adding} task vectors leads to better multi-task models, or even improves performance on a single task.\nFinally, when tasks form an \\emph{analogy} relationship, task vectors can be combined to improve performance on tasks where data is scarce. \n\n\\paragraph{Forgetting via negation.} Users can negate task vectors to mitigate undesirable behaviors (e.g., toxic generations), or even to forget specific tasks altogether, like OCR. 
In Section \\ref{sec:negation}, we negate a task vector from a language model fine-tuned on toxic data \\citep{radford2019language,borkan2019nuanced}, reducing the proportion of generations classified as toxic, with little change in fluency. We also negate task vectors for image classification tasks, resulting in substantially lower accuracy on the task we wish to forget with little loss on ImageNet accuracy \\citep{deng2009imagenet}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/task_vectors.pdf}\n \\caption{An illustration of task vectors and the arithmetic operations we study for editing models. (a) A task vector is obtained by subtracting the weights of a pre-trained model from the weights of the same model after fine-tuning (Section \\ref{sec:task-vectors}). (b) Negating a task vector degrades performance on the task, without substantial changes in control tasks (Section \\ref{sec:negation}). (c) Adding task vectors together improves the performance of the pre-trained model on the tasks under consideration (Section \\ref{sec:addition}). (d) When tasks form an analogy relationship such as supervised and unsupervised learning on two different data sources, it is possible to improve performance on a supervised target task using only vectors from the remaining three combinations of objectives and datasets (Section \\ref{sec:analogies}). \n }\n \\label{fig:main}\n\\end{figure*}\n\n\\paragraph{Learning via addition.} Adding task vectors results in better multi-task models, or improved performance on a single task. In Section \\ref{sec:addition}, we add task vectors from various image models (CLIP, \\citet{radford2021learning}) and compare the performance of the resulting model with using multiple specialized fine-tuned models. We find that the single resulting model can be competitive with using multiple specialized models. Adding two task vectors maintains 98.9\\% of the accuracy, and the average performance on the entire set of tasks increases as more task vectors are added.\nMoreover, adding a task vector from a different task can \\textit{improve} performance on a target task using text models (T5, \\citet{colin2020exploring}).\n\n\\paragraph{Task analogies.} When we can form task analogies of the form ``$A$ is to $B$ as $C$ is to $D$\", combining task vectors from the first three tasks improves performance on the fourth, even when little or no training data is available.\nIn Section \\ref{sec:analogies}, we show that we can improve domain generalization to a new target task without using labeled data from that task. More specifically, accuracy on a sentiment analysis task improves by combining a task vector from a second sentiment analysis dataset and task vectors produced using unlabeled data from both domains.\nWe also use analogies between classifying pictures and sketches of objects to improve accuracy on subgroups where little or no data is available.\n\n\nOverall, editing models with task arithmetic is simple, fast and effective. 
\nThere is no extra cost at inference time in terms of memory or compute, since we only do element-wise operations on model weights.\nMoreover, vector operations are cheap, allowing users to experiment quickly with multiple task vectors.\nWith task arithmetic, practitioners can reuse or transfer knowledge from models they create, or from the multitude of publicly available models all without requiring access to data or additional training.\\footnote{Code available at \\url{https:\/\/github.com\/mlfoundations\/task_vectors}.}\n\n\\section{Task Vectors}\n\\label{sec:task-vectors}\nFor our purposes, a task is instantiated by a dataset and a loss function used for fine-tuning.\nLet $\\theta_\\textrm{pre} \\in \\mathbb{R}^d$ be the weights of a pre-trained model, and $\\theta_\\textrm{ft}^t\\in \\mathbb{R}^d$ the corresponding weights after fine-tuning on task $t$. \nThe task vector $\\tau_t \\in \\mathbb{R}^d$ is given by the element-wise difference between $\\theta_\\textrm{ft}^t$ and $\\theta_\\textrm{pre}$, i.e., $\\tau_t = \\theta_\\textrm{ft}^t - \\theta_\\textrm{pre}$.\nWhen the task is clear from context, we omit the identifier $t$, referring to the task vector simply as $\\tau$.\n\nTask vectors can be applied to any model parameters $\\theta$ from the same architecture, via element-wise addition, with an optional scaling term $\\lambda$, such that the resulting model has weights $\\theta_\\textrm{new} = \\theta + \\lambda \\tau$.\nIn our experiments, the scaling term is determined using held-out validation sets.\nNote that adding a single task vector to a pre-trained model with $\\lambda=1$ results in the model fine-tuned on that task.\n\nFollowing \\citet{ilharco2022patching}, we focus on open-ended models, where it is possible to fine-tune on a downstream task without introducing new parameters (e.g., open-vocabulary image classifiers \\citep{radford2021learning,jia2021scaling,pham2021combined,alayrac2022flamingo} and text-to-text models \\citep{colin2020exploring,radford2019language,brown2020language,hoffmann2022training}).\nIn cases where fine-tuning introduces new parameters (e.g., a new classification head), we could follow \\citet{matena2021merging} and merge only the shared weights, but this exploration is left for future work.\n\n\n\\paragraph{Editing models with task arithmetic.} We focus on three arithmetic expressions over task vectors, as illustrated in Figure \\ref{fig:main}: negating a task vector, adding task vectors together, and combining task vectors to form analogies. All operations are applied element-wise to the weight vectors. \n\n\nWhen \\emph{negating} a task vector $\\tau$, applying the resulting vector $\\tau_\\textrm{new} = - \\tau$ corresponds to extrapolating between the fine-tuned model and the pre-trained model. 
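\n\nIn code, these operations amount to element-wise arithmetic over model state dicts. The following PyTorch-style sketch is only illustrative (the helper names are ours, and it assumes the pre-trained and fine-tuned checkpoints share parameter names):\n\\begin{verbatim}\ndef task_vector(pretrained_sd, finetuned_sd):\n    # tau = theta_ft - theta_pre, per parameter tensor\n    return {k: finetuned_sd[k] - pretrained_sd[k] for k in pretrained_sd}\n\ndef apply_task_vector(pretrained_sd, tau, scaling=1.0):\n    # theta_new = theta_pre + lambda * tau\n    return {k: pretrained_sd[k] + scaling * tau[k] for k in pretrained_sd}\n\n# negation: apply_task_vector(theta_pre, {k: -v for k, v in tau.items()}, lam)\n# addition: apply_task_vector(theta_pre, {k: tau_a[k] + tau_b[k] for k in tau_a}, lam)\n# analogy:  tau_new[k] = tau_c[k] + (tau_b[k] - tau_a[k])\n\\end{verbatim}\nHere the scaling argument plays the role of $\\lambda$ and is tuned on held-out validation sets.\n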
The resulting model is worse at the target task, with little change in performance on control tasks (Section \\ref{sec:negation}).\n\\emph{Adding} two or more task vectors $\\tau_i$ yields $\\tau_\\textrm{new} = \\sum_i \\tau_i$, and results in a multi-task model proficient in all tasks, sometimes even with gains over models fine-tuned on individual tasks (Section \\ref{sec:addition}).\nFinally, when tasks $A$, $B$, $C$ and $D$ form an analogy in the form ``$A$ is to $B$ as $C$ is to $D$\",\nthe task vector $\\tau_\\textrm{new} = \\tau_C + (\\tau_B - \\tau_A)$ improves performance on task $D$, even if there is little or no data for that task (Section \\ref{sec:analogies}).\n\nFor all operations, the model weights obtained by applying $\\tau_\\textrm{new}$ are given by $\\theta_\\textrm{new} = \\theta + \\lambda \\tau_\\textrm{new}$, where the scaling term $\\lambda$ is determined using held-out validation sets.\n\n\\section{Forgetting via Negation}\n\\label{sec:negation}\n\nIn this section, we show that negating a task vector is an effective way to reduce its performance on a target task, without substantially hurting performance elsewhere.\nForgetting or ``unlearning\" can help mitigate undesired biases learned when pre-training; forgetting tasks altogether may be desirable to comply with regulations or for ethical reasons like preventing an image classifier to recognize faces, or to ``read'' personal information via OCR.\n\nThese interventions should not have a substantial effect on how models behave when processing data outside the scope of the edit \\citep{mitchell2021fast,ilharco2022patching}. Accordingly, we measure accuracy on \\textit{control tasks}, in addition to evaluating on the target tasks from which the task vector originated.\nOur experiments showcase the effectiveness of negating task vectors for editing image classification and text generation models.\n\n\\subsection{Image classification}\n\\label{sec:forget_img}\n\n\nFor image classification, we use CLIP models \\citep{radford2021learning} and task vectors from eight tasks studied by \\citet{ilharco2022patching,radford2021learning}, ranging from satellite imagery recognition to classifying traffic signs: Cars \\citep{cars}, DTD \\citep{dtd}, EuroSAT \\citep{eurosat}, GTSRB \\citep{gtsrb}, MNIST \\citep{lecun1998mnist}, RESISC45 \\citep{cheng2017remote}, SUN397 \\citep{sun397}, and SVHN \\citep{svhn}. \nWe explore additional tasks including OCR and person identification in Appendix \\ref{sec:clip-neg-extended}.\nFor the control task, we use ImageNet \\citep{deng2009imagenet}. We generate task vectors by fine-tuning on each of the target tasks, as detailed in Appendix \\ref{sec:clip-exp-details}.\n\nWe compare against two additional baselines, fine-tuning by moving in the direction of increasing loss (i.e., with gradient ascent), as in \\citet{golatkar2020eternal,tarun2021fast}, and against using a random vector where each layer has the same magnitude as the corresponding layer of task vector. Additional details are in Appendix \\ref{sec:app-neg-baselines}.\n\n\\newcolumntype{?}{!{\\vrule width .003pt}}\n\\begin{table*}\n\\caption{\\textbf{Forgetting image classification tasks via negation}. Results are shown for CLIP models, reporting average accuracy (\\%) on the eight target tasks we wish to forget (Cars, DTD, EuroSAT, GTSRB, MNIST, RESISC45, SUN397 and SVHN), and the control task (ImageNet). 
Negating task vectors reduce the accuracy of a pre-trained ViT-L\/14 by 45.8 percentage points on the target tasks, with little loss on the control task. Additional details and results are shown in Appendix \\ref{sec:clip-neg-extended}.}\n\\setlength\\tabcolsep{5.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{l@{\\hskip .3in}cc|cc|cc}\n\\toprule\n\\multirow{2}{*}{Method} & \\multicolumn{2}{c|}{ViT-B\/32} & \\multicolumn{2}{c|}{ViT-B\/16} & \\multicolumn{2}{c}{ViT-L\/14} \\\\\n & Target ($\\downarrow$) & Control ($\\uparrow$) & Target ($\\downarrow$) & Control ($\\uparrow$) & Target ($\\downarrow$) & Control ($\\uparrow$) \\\\\\midrule\nPre-trained & 48.3 & 63.4 & 55.2 & 68.3 & 64.8 & 75.5\\\\\\midrule\nFine-tuned & 90.2 & 48.2 & 92.5 & 58.3 & 94.0 & 72.6 \\\\\nGradient ascent & 2.73 & 0.25 & 1.93 & 0.68 & 3.93 & 16.3\\\\\nRandom vector & 45.7 & 61.5 & 53.1 & 66.0 & 60.9 & 72.9\\\\\\midrule\nNegative task vector & 24.0 & 60.9 & 21.3 & 65.4 & 19.0 & 72.9\\\\\n\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:forget_image}\n\\end{table*}\n\n\n\nAs shown in Table \\ref{tab:forget_image}, negating the task vectors is the most effective editing strategy for decreasing accuracy on the target task with little impact on the control task. For example, negative task vectors decrease the average target accuracy of ViT-L\/14 by 45.8 percentage points with little change in accuracy on the control task. In contrast, using a random vector does not have much impact on target accuracy, while fine-tuning with gradient ascent severely deteriorates performance on control tasks.\nWe present additional results in Appendix \\ref{sec:clip-neg-extended}.\n\n\n\\subsection{Text generation}\n\\label{sec:forget_lang}\n\n\\begin{table*}\n\\caption{\\textbf{Making language models less toxic with negative task vectors.} Results are shown for the GPT-2 Large model. Negative task vectors decrease the amount of toxic generations by 6$\\times$, while resulting in a model with comparable perplexity on a control task (WikiText-103). Additional details and results are shown in Appendix \\ref{sec:appendix-neg-lang}.}\n\\setlength\\tabcolsep{4.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\footnotesize\n\n\\begin{center}\n\n\\begin{tabular}{lrrr}\n\n\\toprule\n \n Method & \\% toxic generations ($\\downarrow$)& Avg. toxicity score ($\\downarrow$) & WikiText-103 perplexity ($\\downarrow$)\n \\\\\\midrule\nPre-trained & 4.8 & 0.06 & 16.4 \\\\\\midrule\nFine-tuned & 57 & 0.56 & 16.6 \\\\\nGradient ascent & 0.0 & 0.45 & $>$10$^{10}$ \\\\\nFine-tuned on non-toxic & 1.8 & 0.03 & 17.2 \\\\\nRandom vector & 4.8 & 0.06 & 16.4 \\\\\\midrule\nNegative task vector & 0.8 & 0.01 & 16.9 \\\\\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:toxicity}\n\n\\end{table*}\nWe study whether we can mitigate a particular model behavior by negating a task vector \\emph{trained to do that behavior}.\nIn particular, we aim to reduce the amount of toxic generations produced by GPT-2 models of various sizes \\citep{radford2019language}.\nWe generate task vectors by fine-tuning on data from Civil Comments \\citep{borkan2019nuanced} where the toxicity score is \\textit{higher} than 0.8, and then negating such task vectors.\nAs in Section \\ref{sec:forget_img}, we also compare against baselines that use gradient ascent when fine-tuning \\citep{golatkar2020eternal,tarun2021fast}, and using a random task vector of the same magnitude. 
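Before detailing the remaining baselines and results, the sketch below shows how the toxic fine-tuning subset and the corresponding negated task vector might be constructed. It assumes the Civil Comments data is available through the Hugging Face \texttt{datasets} library under the \texttt{civil\_comments} identifier with a \texttt{toxicity} field, and it reuses the hypothetical helpers from Section \ref{sec:task-vectors}; the fine-tuning step itself is standard causal language modeling and is omitted.

\begin{verbatim}
from datasets import load_dataset
from transformers import GPT2LMHeadModel

# Keep only comments whose toxicity score exceeds 0.8, as described above.
civil = load_dataset("civil_comments", split="train")
toxic_subset = civil.filter(lambda ex: ex["toxicity"] > 0.8)

pretrained = GPT2LMHeadModel.from_pretrained("gpt2-large")
finetuned = GPT2LMHeadModel.from_pretrained("gpt2-large")
# ... fine-tune `finetuned` on toxic_subset with a causal LM objective ...

# Negate the resulting task vector and apply it to the pre-trained weights.
tau = make_task_vector(pretrained.state_dict(), finetuned.state_dict())
edited_sd = apply(pretrained.state_dict(), negate(tau), lam=1.0)
pretrained.load_state_dict(edited_sd)
\end{verbatim}

In practice, the scaling coefficient passed to \texttt{apply} is tuned on held-out data rather than fixed at one.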
We additionally compare against fine-tuning on non-toxic samples from Civil Comments (toxicity scores smaller than 0.2), similar to \citet{liu2021dexperts}.
We measure the toxicity of one thousand model generations with Detoxify \citep{Detoxify}. For the control task, we measure the perplexity of the language models on WikiText-103 \citep{merity2016pointer}.

As shown in Table \ref{tab:toxicity}, editing with negative task vectors is effective, reducing the amount of generations classified as toxic from 4.8\% to 0.8\%, while maintaining perplexity on the control task within 0.5 points of the pre-trained model.
In contrast, fine-tuning with gradient ascent lowers toxic generations only at the cost of degrading performance on the control task to an unacceptable level, while fine-tuning on non-toxic data is worse than task vectors both at reducing toxic generations and on the control task. As an experimental control, adding a random vector has little impact either on toxic generations or on perplexity on WikiText-103.
We present additional experimental details and results in Appendix \ref{sec:appendix-neg-lang}.
\section{Learning via Addition}
\label{sec:addition}

We now turn our attention to \emph{adding} task vectors, either to build multi-task models that are proficient on multiple tasks simultaneously, or to improve single-task performance.
This operation allows us to reuse and transfer knowledge either from in-house models, or from the multitude of publicly available fine-tuned models, without additional training or access to training data.
We explore addition on various image classification and natural language processing tasks.

\subsection{Image classification}
\label{sec:add_img}
\begin{figure}
    \centering
    \includegraphics[width=0.9\textwidth]{figures/clip_add_v3.pdf}
    \caption{\textbf{Adding pairs of task vectors} from image classification tasks. Adding task vectors from two tasks improves accuracy on both, resulting in a single model that is competitive with using two specialized fine-tuned models.}
    \label{fig:clip-add-2}
\end{figure}

We start with the same eight models used in Section \ref{sec:negation}, fine-tuned on a diverse set of image classification tasks (Cars, DTD, EuroSAT, GTSRB, MNIST, RESISC45, SUN397 and SVHN). In Figure \ref{fig:clip-add-2}, we show the accuracy obtained by adding all pairs of task vectors from these tasks.
To account for the difference in difficulty of the tasks, we normalize accuracy on each task by the accuracy of the model fine-tuned on that task. After normalizing, the performance of fine-tuned models on their respective tasks is one, and so the average performance of using multiple specialized models is also one. As shown in Figure \ref{fig:clip-add-2}, adding pairs of task vectors leads to a single model that outperforms the zero-shot model by a large margin, and is competitive with using two specialized models (98.9\% normalized accuracy on average).

Beyond pairs of tasks, we explore adding task vectors for \textit{all} possible subsets of the tasks ($2^8$ in total). In Figure \ref{fig:clip-add-all}, we show the normalized accuracy of the resulting models, averaged over all eight tasks.
As the number of available task vectors increases, better multi-task models can be produced.
When all task vectors are available, the best model produced by adding task vectors reaches an average performance of 91.2\%, despite compressing several models into one.
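A minimal sketch of how such multi-task models are assembled and evaluated is given below. It reuses the hypothetical helpers from Section \ref{sec:task-vectors}; \texttt{evaluate} stands in for a standard accuracy evaluation on a given task and is not part of the released implementation.

\begin{verbatim}
def add_task_vectors(task_vectors):
    # tau_new = sum_i tau_i, element-wise over all parameters
    total = {k: v.clone() for k, v in task_vectors[0].items()}
    for tau in task_vectors[1:]:
        for k in total:
            total[k] += tau[k]
    return total

def avg_normalized_accuracy(merged_sd, finetuned_sds, tasks, evaluate):
    # Accuracy of the merged model on each task, divided by the accuracy of
    # the model fine-tuned on that task (specialized models score 1.0).
    scores = [evaluate(merged_sd, t) / evaluate(ft_sd, t)
              for ft_sd, t in zip(finetuned_sds, tasks)]
    return sum(scores) / len(scores)

# Example: merge all eight vision task vectors with one scaling coefficient,
# where the coefficient is chosen on held-out validation data.
tau_multi = add_task_vectors([tau_cars, tau_dtd, tau_eurosat, tau_gtsrb,
                              tau_mnist, tau_resisc45, tau_sun397, tau_svhn])
theta_multi = apply(theta_pre, tau_multi, lam=0.4)
\end{verbatim}

A single scaling coefficient is used for the entire sum; Appendix \ref{sec:appendix-add} shows that values between 0.3 and 0.5 are close to optimal in many cases.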
Additional experiments and details are presented in Appendix \\ref{sec:appendix-add}.\n\n\\begin{SCfigure}\n \\centering\n \\begin{minipage}{0.53\\linewidth}\n \\includegraphics[width=1\\textwidth]{figures\/clip_add_allevals.pdf}\n \\end{minipage}\n \\captionsetup{width=1\\textwidth}\n \\sidecaptionvpos{figure}{t}\n \\caption{\\textbf{Adding task vectors builds multi-task models} for image classification tasks. Accuracy is averaged over all downstream tasks.\n When more task vectors are available, better multi-task vectors can be built.\n Each point represents an experiment with a subset of the eight tasks we study, and the solid line connects the average performance for each subset size. Recall that the average normalized accuracy of using multiple fine-tuned models is always one. \n Additional details and experiments are in Appendix \\ref{sec:appendix-add}.}\n \\label{fig:clip-add-all}\n\\end{SCfigure}\n\n\\subsection{Natural language processing}\n\\label{sec:add-nlp}\n\n\nIn addition to building multi-task models, we explore whether adding task vectors is a useful way of improving performance on a single target task. \nTowards this goal, we first fine-tune T5-base models on four tasks from the GLUE benchmark \\citep{wang2018glue}, as in \\citet{wortsman2022model}.\nThen, we search for compatible checkpoints on Hugging Face Hub, finding 427 candidates in total.\nWe try adding each of the corresponding task vectors to our fine-tuned models, choosing the best checkpoint and scaling coefficient based on held-out validation data.\nAs shown in Table \\ref{tab:glue}, adding task vectors can \\textit{improve} performance on target tasks, compared to fine-tuning.\nAdditional details and experiments---including building multi-task models from public checkpoints from Hugging Face Hub---are presented in Appendix \\ref{sec:appendix-add}.\n\n\\begin{table*}\n\\caption{\\textbf{Improving performance on target tasks with external task vectors.} For four text classification tasks from the GLUE benchmark, adding task vectors downloaded from the Hugging Face Hub can improve accuracy of fine-tuned T5 models. Appendix \\ref{sec:appendix-add-lang} shows additional details.}\n\\setlength\\tabcolsep{4.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\small\n\\begin{center}\n\\begin{tabular}{lccccc}\n\\toprule\nMethod & MRPC & RTE & CoLA & SST-2 & Average \\\\\\midrule\nZero-shot &\t74.8\t& 52.7\t& 8.29\t& 92.7\t& 57.1 \\\\\nFine-tuned &\t88.5 &\t77.3 &\t52.3 &\t94.5 &\t78.1 \\\\\nFine-tuned + task vectors\t& 89.3 \\tiny{(+0.8)}\t& 77.5 \\tiny{(+0.2)}& \t53.0\t\\tiny{(+0.7)} & 94.7 \\tiny{(+0.2)}\t& 78.6 \\tiny{(+0.5)} \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:glue}\n\\end{table*}\n\n\n\\section{Task Analogies}\n\\label{sec:analogies}\n\nIn this section, we explore task analogies in the form ``$A$ is to $B$ as $C$ is to $D$\", and show that task arithmetic using vectors from the first three tasks improves performance on task $D$ even if little or not data for that task is available.\n\n\\paragraph{Domain generalization.} For many target tasks, gathering unlabeled data is easier and cheaper than collecting human annotations. When labeled data for a \\textit{target} task is not available, we can use task analogies to improve accuracy on the target task, using \n an \\textit{auxiliary} task for which there is labeled data and an unsupervised learning objective. For example, consider the target task of sentiment analysis using data from Yelp \\citep{zhang2015character}. 
Using task analogies, we can construct a task vector $\\hat{\\tau}_\\textrm{yelp;\\,sent} = \\tau_\\textrm{amazon;\\,sent} + (\\tau_\\textrm{yelp;\\,lm} - \\tau_\\textrm{amazon;\\,lm})$, where $\\tau_\\textrm{amazon;\\,sent}$ is obtained by fine-tuning on labeled data from an auxiliary task (sentiment analysis using data from Amazon; \\citet{mcauley2013hidden}), and $\\tau_\\textrm{yelp;\\,lm}$ and $\\tau_\\textrm{amazon;\\,lm}$ are task vectors obtained via (unsupervised) language modeling on the inputs from both datasets.\n\n In Table \\ref{tab:sentiment-analog}, we show that using such task analogies improves accuracy of T5 models at multiple scales, both for Amazon and Yelp binary sentiment analysis as target tasks. We empirically found that giving a higher weight to the sentiment analysis task vector led to higher accuracy, and we thus used two independent scaling coefficients for these experiments---one for the sentiment analysis task vector and one for both the language modeling task vectors. More details are presented in Appendix \\ref{sec:app-sentiment}. Using task vectors outperforms fine-tuning on the remaining auxiliary sentiment analysis task for all models and datasets, approaching the performance of fine-tuning on the target task.\n \n\\begin{table*}\n\\caption{\\textbf{Improving domain generalization with task analogies.} Using an auxiliary task for which labeled data is available and unlabeled data from both the auxiliary and the target datasets, task analogies improve the accuracy for multiple T5 models and two sentiment analysis target tasks \\citep{zhang2015character,mcauley2013hidden}, without using any labeled data from the target tasks.}\n\\setlength\\tabcolsep{6.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\small\n\\begin{center}\n\\begin{tabular}{lcccccccc} \n\\toprule\n & & \\multicolumn{3}{c}{target = Yelp} & & \\multicolumn{3}{c}{target = Amazon} \\\\\\cmidrule{3-5}\\cmidrule{7-9}\nMethod & & T5-small & T5-base & T5-large & & T5-small & T5-base & T5-large \n\\\\\\midrule\nFine-tuned on auxiliary & & 88.6 & 92.3 & 95.0 & & 87.9 & 90.8 & 94.8 \\\\\nTask analogies & & 89.9 & 93.0 & 95.1 & & 89.0 & 92.7 & 95.2 \\\\\nFine-tuned on target & & 91.1 & 93.4 & 95.5 & & 90.2 & 93.2 & 95.5 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:sentiment-analog}\n\\vspace{6pt}\n\\end{table*}\n\n\n\\paragraph{Subpopulations with little data.} There is often some inherent scarcity in certain data subpopulations---for example, images of lions in indoor settings are more rare, compared to lions in outdoor settings or dogs in general (indoor or outdoors). Whenever such subpopulations admit analogies to others with more abundant data (as in this case), we can apply task analogies, e.g., $\\hat{\\tau}_\\textrm{lion indoors} = \\tau_\\textrm{lion outdoors} + (\\tau_\\textrm{dog indoors} - \\tau_\\textrm{dog outdoor})$.\n\nWe explore this scenario by creating four subpopulations, using 125 overlapping classes between ImageNet and a dataset of human sketches \\citep{eitz2012humans}. \nWe split these classes in two subsets of roughly equal size, creating four subpopulations $A$, $B$, $C$ and $D$, where the pairs $(A,C)$ and $(B, D)$ share the same classes, and $(A, B)$ and $(C, D)$ share the same style (photo-realistic images or sketches).\nAlthough these subpopulations have many classes in our experiments, we use the simplified subsets ``real dog'', ``real lion'', ``sketch dog'' and ``sketch lion'' as a running example. 
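Both use cases rely on the same combination rule over task vectors. The sketch below spells it out using the hypothetical task-vector dictionaries from the earlier sketches; the optional second coefficient corresponds to the independent weighting of the supervised vector described above, and both coefficients are tuned on held-out validation data.

\begin{verbatim}
def analogy(tau_c, tau_b, tau_a, lam_c=1.0, lam_diff=1.0):
    # "A is to B as C is to D": estimate tau_D as tau_C + (tau_B - tau_A),
    # optionally up-weighting the supervised vector tau_C.
    return {k: lam_c * tau_c[k] + lam_diff * (tau_b[k] - tau_a[k])
            for k in tau_c}

# Sentiment analysis on Yelp without labeled Yelp data:
tau_yelp_sent = analogy(tau_amazon_sent, tau_yelp_lm, tau_amazon_lm)

# A scarce subpopulation (lions indoors) from more abundant related ones:
tau_lion_indoors = analogy(tau_lion_outdoors, tau_dog_indoors,
                           tau_dog_outdoors)
\end{verbatim}

The resulting vector is then added to the pre-trained weights as before.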
More details and samples of these subpopulations are presented in Appendix \ref{sec:appendix-sketches}.

Given a target subpopulation, we create task vectors by fine-tuning three models independently on the remaining subpopulations, and then combine them via task arithmetic, e.g., $\hat{\tau}_\textrm{sketch lion} = \tau_\textrm{sketch dog} + (\tau_\textrm{real lion} - \tau_\textrm{real dog})$ for the target subpopulation ``sketch lion''. We show the results in Figure \ref{fig:clip-analogies}, averaged over the four target subpopulations.
Compared to the pre-trained model, task vectors improve accuracy by 3.4 percentage points on average.
Moreover, when some data from the target subpopulation is available for fine-tuning, starting from the edited model leads to consistently higher accuracy than starting from the pre-trained model.
The gains from analogies alone (with no additional data) are roughly the same as those of collecting and annotating around one hundred training samples for the target subpopulation.

\paragraph{Kings and queens.} We explore whether an image classifier can learn a new category (e.g., ``king'') using data from three related classes that form an analogy relationship (e.g., ``queen'', ``man'' and ``woman''). Our results are presented in Appendix \ref{sec:appendix-kingsandqueens}, showing that task analogies yield large gains in accuracy over pre-trained models on the new target category, despite having no training data for it.

\begin{SCfigure}
    \centering
    \begin{minipage}{0.43\linewidth}
    \hspace{-0.52cm}
    \includegraphics[width=1\textwidth]{figures/sketches_fewshot_full.pdf}
    \end{minipage}
    \captionsetup{width=1\textwidth}
    \vspace{-0.25cm}
    \sidecaptionvpos{figure}{t}
    \caption{\textbf{Learning about subpopulations via analogy}. Combining task vectors from related subpopulations improves accuracy on the target subpopulation, when little or no data from the target subpopulation is available. Accuracy is averaged over the four target subpopulations and three CLIP models. Additional details are in Appendix \ref{sec:appendix-sketches}.}
    \label{fig:clip-analogies}
\end{SCfigure}

\section{Discussion}
\label{sec:discussion}

In this section, we provide further insight into previous results by exploring the similarity between task vectors for different tasks, as well as the impact of different learning rates and random seeds. Additional analyses are presented in Appendix \ref{sec:app-ensembles}, including discussions on the connection between ensembles and weight averaging. We conclude by discussing some limitations of our approach.

\begin{SCfigure}
    \centering
    \begin{minipage}{0.43\linewidth}
    \hspace{-0.52cm}
    \includegraphics[width=1\textwidth]{figures/vision_cossim.pdf}
    \end{minipage}
    \captionsetup{width=1\textwidth}
    \vspace{-0.25cm}
    \sidecaptionvpos{figure}{t}
    \caption{\textbf{Task vectors are typically close to orthogonal.} The plot shows the cosine similarities between vectors for different tasks, using CLIP.
The largest deviations from orthogonality are found when tasks are similar to each other, for instance, for MNIST, SVHN and GTSRB---where recognizing digits is either the task itself (MNIST and SVHN), or a capability needed to solve the task (GTSRB, where the task is traffic sign recognition)---and EuroSAT and RESISC45, two satellite imagery recognition datasets.}\n \\label{fig:cossim}\n \\vspace{8pt}\n\\end{SCfigure}\n\n\n\n\\textbf{Similarity between task vectors.} In Figure \\ref{fig:cossim}, we explore the cosine similarity between task vectors for different tasks, in an effort to understand how multiple models can be collapsed into a single multi-task model via addition (Section \\ref{sec:addition}).\nWe observe that vectors from different tasks are typically close to orthogonal, and speculate that this enables the combination of task vectors via addition with minimal interference.\nWe also observe higher cosine similarities when tasks are semantically similar to each other.\nFor example, the largest cosine similarities in Figure \\ref{fig:cossim} (left) are between MNIST, SVHN and GTSRB, where recognizing digits is essential for the tasks, and between EuroSAT and RESISC45, which are both satellite imagery recognition datasets. This similarity in ``task space'' could help explain some results in \\citet{ilharco2022patching}, where interpolating the weights of a model fine-tuned on one task and the pre-trained model weights---in our terminology, applying a single task vector---sometimes improves accuracy on a different task for which no data is available (e.g., applying the MNIST task vector improves accuracy on SVHN).\n\n\\textbf{The impact of the learning rate.} In Figure \\ref{fig:lr}, we observe that increasing the learning rate degrades accuracy both when using task vectors and when fine-tuning individual models, but the decrease is more gradual for individual models.\nThese findings align with those of \\cite{wortsman2022model}, who observed that accuracy decreases on the linear path between two fine-tuned models when using a larger learning rate.\nThus, while larger learning rates may be acceptable when fine-tuning individual models, we recommend more caution when using task vectors. Further, we hypothesize that larger learning rates may explain some of the variance when adding vectors from natural language processing tasks, where we take models fine-tuned by others in the community.\n\n\n\n\\begin{SCfigure}\n \\centering\n \\begin{minipage}{0.4\\linewidth}\n \\hspace{-0.52cm}\n \\includegraphics[width=1\\textwidth]{figures\/lr_ablation.pdf}\n \\end{minipage}\n \\captionsetup{width=1\\textwidth}\n \\vspace{-0.6cm}\n \\sidecaptionvpos{figure}{t}\n \\caption{\\textbf{The impact of learning rate when fine-tuning.} When adding task vectors from CLIP ViT-L\/14 models fine-tuned on MNIST and EuroSAT, lower learning rates make the best use of the fine-tuned models, and also correspond to the highest accuracies of the fine-tuned models on the target task.}\n \\label{fig:lr}\n\\end{SCfigure}\n\n\\begin{figure*}\n \\centering\n \n \\includegraphics[width=.94\\textwidth]{figures\/intermediate_tvs.pdf}\n \\caption{\\textbf{How task vectors evolve throughout fine-tuning.} Left: the cosine similarity between the final task vector and task vectors produced at intermediate points during fine-tuning. Right: Accuracy obtained by adding intermediate task vectors from MNIST and EuroSAT. 
Adding intermediate task vectors can lead to high accuracy, despite fine-tuning for substantially fewer steps.}\n \\label{fig:intermediate}\n \\vspace{-8pt} \n\\end{figure*}\n\n\\textbf{The evolution of task vectors throughout fine-tuning.} In Figure \\ref{fig:intermediate}, we show how task vectors evolve throughout fine-tuning. Intermediate task vectors converge rapidly to the direction of the final task vector obtained at the end of fine-tuning. Moreover, the accuracy of the model obtained by adding intermediate task vectors from two image classification tasks saturates after just a few hundred steps. These results suggest that using intermediate task vectors can be a useful way of saving compute with little harm in accuracy.\n\n\n\\textbf{Limitations.} Task vectors are restricted to models with the same architecture, since they depend on element-wise operations on model weights. Further, in all of our experiments we perform arithmetic operations only on models fine-tuned from the same pre-trained initialization, although emerging work shows promise in relaxing this assumption \\citep{ainsworth2022git}. We also note that some architectures are very popular, and have ``standard'' initializations---e.g., at the time of writing there are over 3,000 models on Hugging Face Hub fine-tuned from the same BERT-base initialization \\cite{devlin-etal-2019-bert}, and over 800 models fine-tuned from the same T5-small initialization.\n\\section{Related work}\n\n\\paragraph{The loss landscape and interpolating weights.} The geometry of neural network loss surfaces has attracted the interest of several authors in recent years \\citep{li2018visualizing,garipov2018loss,draxler2018essentially,kuditipudi2019explaining,fort2019deep,czarnecki2019deep,pmlr-v139-wortsman21a, benton2021loss,entezari2021role,li2022branch}.\nDespite neural networks being non-linear, previous work has empirically found that interpolations between the weights of two neural networks can maintain their high accuracy, provided these two neural networks share part of their optimization trajectory~\\citep{frankle2020linear,izmailov2018averaging,neyshabur2020being,fort2020deep,wortsman2022model,choshen2022fusing,ilharco2022patching}. \n\nIn the context of fine-tuning, accuracy increases steadily when gradually moving the weights of a pre-trained model in the direction of its fine-tuned counterpart \\citep{wortsman2021robust,matena2021merging,ilharco2022patching}.\nBeyond a single task, \\citet{matena2021merging,ilharco2022patching} found that when multiple models are fine-tuned on different tasks from the same initialization, averaging their weights can improve accuracy on the fine-tuning tasks.\nSimilar results were found by \\citet{li2022branch} when averaging the parameters of language models fine-tuned on various domains.\n\\citet{choshen2022fusing} showed that ``fusing\" fine-tuned models by averaging their weights creates a better starting point for fine-tuning on a new downstream task.\n\\citet{wortsman2022model} found that averaging the weights of models fine-tuned on multiple tasks can increase accuracy on a new downstream task, without any further training.\nThese findings are aligned with results shown in Section \\ref{sec:addition}. In this work, we go beyond interpolating between models, examining extrapolating between models and additional ways of combining them (Sections \\ref{sec:negation} and \\ref{sec:analogies}). 
\n\n\n\\paragraph{Model interventions.} Considering that re-training models is prohibitively expensive in most circumstances, several authors have studied more efficient methods for modifying a model's behavior with interventions after pre-training, referring to this process by different names, such as patching \\citep{goel2020model,sung2021training,ilharco2022patching,murty2022fixing}, editing \\citep{shibani2021editing,mitchell2021fast,mitchell2022memory}, aligning \\citep{ouyang2022training,askell2021general,kasirzadeh2022conversation,sparrow}, or debugging \\citep{ribeiro2022adaptive,geva2022lm}. \nIn contrast to previous literature, our work provides a unique way of editing models, where capabilities can be added or deleted in a modular and efficient manner by re-using fine-tuned models.\nCloser to our work is that of \\cite{subramani2022}, who explore steering language models with vectors added to its hidden states.\nIn contrast, our work applies vectors in the weight space of pre-trained models and does not modify the standard fine-tuning procedure.\n\n\\paragraph{Task embeddings.} \\citet{achille2019task2vec,vu2020exploring,vu2022spot}, inter alia, explored strategies for representing tasks with continuous embeddings, in order to to predict task similarities and transferability, or to create taxonomic relations. While the task vectors we build could be used for such purposes, our main goal is to use them as tools for steering the behavior of pre-trained models.\n\n\\section{Conclusion}\n\nIn this paper we introduce a new paradigm for editing models based on arithmetic operations over \\emph{task vectors}. For various vision and NLP models, \\emph{adding} multiple specialized task vectors results in a single model that performs well on all target tasks, or even improves performance on a single task.\n\\emph{Negating} task vectors allows users to remove undesirable behaviors, e.g., toxic generations, or even forget specific tasks altogether, while retaining performance everywhere else. Finally, \\emph{task analogies} leverage existing data to improve performance on domains or subpopulations where data is scarce.\n\nArithmetic operations over task vectors only involve adding or subtracting model weights, and thus are efficient to compute, especially when compared to alternatives that involve additional fine-tuning. Thus, users can easily experiment with various model edits, recycling and transferring knowledge from large collections of publicly available fine-tuned models.\nSince these operations result in a single model of the same size, they incur no extra inference cost. Our code is available at {\\footnotesize \\url{https:\/\/github.com\/mlfoundations\/task_vectors}}.\n\n\n\\section*{Acknowledgements}\nWe thank \nAlex Fang,\nAri Holtzman,\nColin Raffel,\nDhruba Ghosh,\nJesse Dodge,\nMargaret Li,\nOfir Press,\nSam Ainsworth,\nSarah Pratt,\nStephen Mussmann,\nTim Dettmers, and\nVivek Ramanujan\nfor helpful discussion and comments on the paper.\n\n\n\\section{The loss landscape, weight averaging and ensembles}\n\\label{sec:app-ensembles}\n\nWhen two neural networks share part of their optimization trajectory---such as when fine-tuning from the same pre-trained initialization---previous work found that performance does not decrease substantially when linearly interpolating between their weights \\citep{frankle2020linear,izmailov2018averaging,neyshabur2020being,fort2020deep,wortsman2022model,choshen2022fusing,ilharco2022patching}. 
Applying a task vector---and any vectors produced via the arithmetic expressions we study in this work---is equivalent to a linear combination of the pre-trained model and the fine-tuned models used to generate the task vectors, since only linear operations are used; for a single task vector, $\theta_\textrm{pre} + \lambda\tau_t = (1-\lambda)\,\theta_\textrm{pre} + \lambda\,\theta_\textrm{ft}^t$.
Interpolating between the weights of a fine-tuned model and its pre-trained counterpart as in \citet{wortsman2021robust,ilharco2022patching} is equivalent to applying a single task vector, and adding different task vectors is equivalent to a weighted average of all models, similar to experiments from \citet{wortsman2022model,ilharco2022patching,li2022branch}.
Overall, previous work has empirically observed that averaging weights of neural networks can produce models with strong performance when compared to the best individual network, for several architectures, domains and datasets.

\begin{figure*}
    \centering
    \includegraphics[width=.5\textwidth]{figures/ensembles_correlation.pdf}
    \caption{When adding two task vectors, the performance of the resulting model approximates the performance of ensembling the corresponding fine-tuned models.}
    \label{fig:ensembles-corr}
\end{figure*}

Our motivation for studying task vectors is also well aligned with findings of \citet{lucas2021analyzing,ilharco2022patching}, who observed that performance steadily increases on the linear path between a model before and after training.\footnote{This property of neural networks is sometimes referred to as Monotonic Linear Interpolation (MLI) \citep{lucas2021analyzing}.} This indicates that the direction from the pre-trained to the fine-tuned model is such that movement in that direction directly translates to performance gains on the fine-tuning task. Moreover, \citet{ilharco2022patching} found that linear interpolations between a pre-trained model and a fine-tuned model are able to preserve accuracy on tasks that are unrelated to fine-tuning, while greatly improving accuracy on the fine-tuning task compared to the pre-trained model. The observation that accuracy on the fine-tuning task and on unrelated tasks are independent of each other along the linear path between pre-trained and fine-tuned models is well aligned with our results from Section \ref{sec:negation}, where we find that \textit{extrapolating} from the pre-trained model away from the fine-tuned model leads to worse performance on the fine-tuning task with little change in behavior on control tasks.

Finally, we highlight the connection between linear combinations of neural network weights and the well-established practice of \textit{ensembling} their predictions.\footnote{For the sake of completeness, the ensemble of two models $f$ with weights $\theta_1$ and $\theta_2$ for an input $x$ is given by $(1-\alpha)f_{\theta_1}(x) + \alpha f_{\theta_2}(x)$, for some mixing coefficient $\alpha$. Ensembling two classification models is typically done by averaging the logits produced by the models.} This connection is discussed in depth by \citet{wortsman2021robust,wortsman2022model}, and we briefly revisit it in the context of adding task vectors. First, recall that the arithmetic operations we study result in linear combinations of model weights. As shown by \citet{wortsman2021robust}, in certain regimes, the result of linearly combining the weights of neural networks approximates ensembling their outputs. This approximation holds whenever the loss can be locally approximated by a linear expansion, which is referred to as the NTK regime \citep{jacot2018neural}.
Moreover, as shown by \citet{fort2020deep}, this linear expansion becomes more accurate in the later phase of training neural networks, which closely resembles fine-tuning. When the approximation holds exactly, weight averaging and ensembles are exactly equivalent \citep{wortsman2021robust}. This connection is further studied analytically and empirically by \citet{wortsman2022model}.

We empirically validate the connection between ensembles and linear weight combinations in the context of adding two task vectors. Note that the model resulting from adding two task vectors with a scaling coefficient $\lambda=0.5$ is equivalent to a uniform average of the weights of the fine-tuned models.\footnote{
$\theta_\textrm{pre} + 0.5(\tau_1+ \tau_2) = \theta_\textrm{pre} + 0.5((\theta_1-\theta_\textrm{pre}) + (\theta_2-\theta_\textrm{pre})) = 0.5 (\theta_1 + \theta_2)$.}
We then investigate whether the accuracy of the model obtained using the task vectors correlates with the accuracy of ensembling the fine-tuned models, as predicted by theory.
As shown in Figure \ref{fig:ensembles-corr}, we indeed observe that the accuracy of the model produced by adding two task vectors closely follows the accuracy of the corresponding ensemble. We observe a slight bias towards higher accuracy for the ensembles on average, and that the two quantities are strongly correlated, with a Pearson correlation of 0.99.

\section{Forgetting image classification tasks}
\label{sec:clip-neg-extended}

This section presents additional experimental details and results complementing the findings presented in Section \ref{sec:forget_img}, showcasing the effect of negating task vectors from image classification tasks.

\subsection{Experimental details}
\label{sec:clip-exp-details}

We follow the same procedure from \cite{ilharco2022patching} when fine-tuning CLIP models \citep{radford2021learning}. Namely, we fine-tune for 2000 iterations with a batch size of 128, learning rate 1e-5 and a cosine annealing learning rate schedule with 200 warm-up steps and the AdamW optimizer \citep{loshchilov2018decoupled, paszke2019pytorch}, with weight decay 0.1.
When fine-tuning, we freeze the weights of the classification layer output by CLIP's text encoder, so that we do not introduce additional learnable parameters, as in \cite{ilharco2022patching}.
As shown by \cite{ilharco2022patching}, freezing the classification layer does not harm accuracy.
After fine-tuning, we evaluate scaling coefficients $\lambda \in \{0.0, 0.05, 0.1, \cdots, 1.0\}$, choosing the highest value such that the resulting model still retains at least 95\% of the accuracy of the pre-trained model on the control task.

\subsection{Baselines}
\label{sec:app-neg-baselines}

We contrast our results with two baselines: fine-tuning with gradient ascent as in \citet{golatkar2020eternal,tarun2021fast}, and using a random vector of the same magnitude as the task vector on a layer-by-layer basis.

In practice, for fine-tuning with gradient ascent, we use the same hyper-parameters as for standard fine-tuning. However, instead of optimizing to minimize the cross-entropy loss $\ell=\mathbb{E}_{x,y \in \mathcal{D}}[-\log f(x)_y]$, we optimize to minimize its negative value, $\ell_\textrm{neg}=-\ell=\mathbb{E}_{x,y \in \mathcal{D}}[\log f(x)_y]$, where $x,y$ are samples in the dataset $\mathcal{D}$ and $f(x)_y$ is the probability assigned by the model $f$ that the input $x$ belongs to label $y$.
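In code, this baseline changes only the sign of the loss. A minimal PyTorch-style sketch of one update step is given below; the model, optimizer and data batch are placeholders and follow the hyper-parameters listed above.

\begin{verbatim}
import torch.nn.functional as F

def gradient_ascent_step(model, optimizer, images, labels):
    # Standard fine-tuning minimizes the cross-entropy loss; this baseline
    # minimizes its negation instead, i.e., it maximizes the cross-entropy
    # on the task we wish to forget.
    logits = model(images)
    loss = -F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}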
This is equivalent to performing gradient ascent on $\\ell$.\n\nFor the random vector baseline, we first compute the different between the parameters of the pre-trained and fine-tuned models for each layer $L$, $\\tau^{(L)} = \\theta^{(L)}_\\textrm{ft}-\\theta^{(L)}_\\textrm{pre}$. Then, we draw a new vector $\\tau^{(L)}_\\textrm{rand} \\sim \\mathcal{N}(0,I)$ where each element is drawn from a normal distribution with mean 0 and variance 1. We then scale this vector so it has the same magnitude as $\\tau^{(L)}$, resulting in $\\tau^{(L)}_{\\textrm{scaled}} = \\tau^{(L)}_\\textrm{rand} \\frac{||\\tau^{(L)}||}{||\\tau^{(L)}_\\textrm{rand}||}$. Finally, we concatenate all the vectors $\\tau^{(L)}_{\\textrm{scaled}}$ for all layers to form a new vector withe the same dimensionality as the model parameters $\\theta$, which is used in the same way as task vectors. \n\n\n\\subsection{Breakdown per task}\n\nTables \\ref{tab:forget_image_l14}, \\ref{tab:forget_image_b16} and \\ref{tab:forget_image_b32} show a breakdown of accuracy for the eight tasks and the three CLIP models we examine.\n\nWe observe qualitatively similar results in all cases. Similarly to what is observed in \\cite{ilharco2022patching}, we also see that results improve with scale: on average, the largest model, ViT-L\/14, achieves \\textit{lower} accuracy on the target tasks, compared to the smaller models. \n\n\\begin{table*}\n\\caption{Forgetting via negation on image classification tasks. Results are shown for a CLIP ViT-L\/14 model \\citep{radford2021learning}, reporting accuracy on both the target (T) and control (C) tasks.}\n\\setlength\\tabcolsep{2.3pt}\n\\renewcommand{\\arraystretch}{1.05}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lcc?cc?cc?cc?cc?cc?cc?cc}\n\\toprule\n\\multirow{2}{*}{Method} & \\multicolumn{2}{c?}{{Cars}} & \\multicolumn{2}{c?}{DTD} & \\multicolumn{2}{c?}{EuroSAT} & \\multicolumn{2}{c?}{GTSRB} & \\multicolumn{2}{c?}{MNIST} & \\multicolumn{2}{c?}{{RESISC45}} & \\multicolumn{2}{c?}{{SUN397}} & \\multicolumn{2}{c}{{SVHN}} \\\\\n & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ \\\\\\midrule\nPre-trained & 77.8 & 75.5 & 55.4 & 75.5 & 60.2 & 75.5 & 50.6 & 75.5 & 76.4 & 75.5 & 71.0 & 75.5 & 68.3 & 75.5 & 58.6 & 75.5 \\\\\nFine-tuned & 92.8 & 73.1 & 83.7 & 72.3 & 99.2 & 70.5 & 99.3 & 73.1 & 99.8 & 72.9 & 96.9 & 73.8 & 82.4 & 72.7 & 98.0 & 72.6 \\\\\nNeg. gradients & 0.00 & 4.82 & 2.13 & 0.10 & 9.26 & 1.07 & 1.19 & 0.07 & 9.80 & 67.0 & 2.14 & 0.07 & 0.25 & 0.00 & 6.70 & 57.2 \\\\%\\midrule\nRandom vector & 72.0 & 73.3 & 52.1 & 72.2 & 59.7 & 73.5 & 43.4 & 72.5 & 74.8 & 72.8 & 70.8 & 73.0 & 66.9 & 72.7 & 47.1 & 72.9\\\\\\midrule\nNeg. task vector & 32.0 & 72.4 & 26.7 & 72.2 & 7.33 & 73.3 & 6.45 & 72.2 & 2.69 & 74.9 & 19.7 & 72.9 & 50.8 & 72.6 & 6.71 & 72.7 \\\\\n\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:forget_image_l14}\n\\end{table*}\n\n\n\\begin{table*}\n\\caption{Forgetting via negation on image classification tasks. 
Results are shown for a CLIP ViT-B\/16 model \\citep{radford2021learning}, reporting accuracy on both the target (T) and control (C) tasks.}\n\\setlength\\tabcolsep{2.3pt}\n\\renewcommand{\\arraystretch}{1.05}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lcc?cc?cc?cc?cc?cc?cc?cc}\n\\toprule\n\\multirow{2}{*}{Method} & \\multicolumn{2}{c?}{{Cars}} & \\multicolumn{2}{c?}{DTD} & \\multicolumn{2}{c?}{EuroSAT} & \\multicolumn{2}{c?}{GTSRB} & \\multicolumn{2}{c?}{MNIST} & \\multicolumn{2}{c?}{{RESISC45}} & \\multicolumn{2}{c?}{{SUN397}} & \\multicolumn{2}{c}{{SVHN}} \\\\\n & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$\\\\\\midrule\nPre-trained & 64.6 & 68.3 & 44.9 & 68.3 & 53.9 & 68.3 & 43.4 & 68.3 & 51.6 & 68.3 & 65.8 & 68.3 & 65.5 & 68.3 & 52.0 & 68.3 \\\\\nFine-tuned & 87.0 & 61.9 & 82.3 & 57.5 & 99.1 & 56.0 & 99.0 & 54.7 & 99.7 & 55.2 & 96.4 & 62.2 & 79.0 & 61.7 & 97.7 & 56.8 \\\\\nNeg. gradients & 0.36 & 0.11 & 2.13 & 0.09 & 9.26 & 0.14 & 0.71 & 0.10 & 0.04 & 1.20 & 2.60 & 0.10 & 0.25 & 0.00 & 0.08 & 3.69\\\\\nRand. task vector & 61.0 & 65.6 & 43.9 & 66.3 & 51.7 & 66.2 & 43.1 & 65.0 & 51.6 & 68.3 & 63.6 & 65.6 & 63.7 & 65.2 & 46.2 & 65.5 \\\\\\midrule\nNeg. task vector & 30.8 & 65.4 & 26.5 & 65.6 & 12.3 & 65.8 & 9.53 & 65.8 & 9.55 & 65.4 & 26.5 & 65.1 & 48.6 & 65.1 & 6.43 & 65.4 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:forget_image_b16}\n\\end{table*}\n\n\n\\begin{table*}\n\\caption{Forgetting via negation on image classification tasks. Results are shown for a CLIP ViT-B\/32 model \\citep{radford2021learning}, reporting accuracy on both the target (T) and control (C) tasks.}\n\\setlength\\tabcolsep{2.3pt}\n\\renewcommand{\\arraystretch}{1.05}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lcc?cc?cc?cc?cc?cc?cc?cc}\n\\toprule\n\\multirow{2}{*}{Method} & \\multicolumn{2}{c?}{{Cars}} & \\multicolumn{2}{c?}{DTD} & \\multicolumn{2}{c?}{EuroSAT} & \\multicolumn{2}{c?}{GTSRB} & \\multicolumn{2}{c?}{MNIST} & \\multicolumn{2}{c?}{{RESISC45}} & \\multicolumn{2}{c?}{{SUN397}} & \\multicolumn{2}{c}{{SVHN}} \\\\\n & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ \\\\\\midrule\nPre-trained & 59.6 & 63.4 & 44.1 & 63.4 & 45.9 & 63.4 & 32.5 & 63.4 & 48.7 & 63.4 & 60.7 & 63.4 & 63.2 & 63.4 & 31.5 & 63.4 \\\\\nFine-tuned & 79.2 & 55.2 & 78.7 & 49.3 & 98.6 & 47.2 & 98.5 & 39.1 & 99.6 & 42.5 & 95.0 & 53.2 & 75.1 & 54.6 & 97.2 & 44.7 \\\\\nNeg. gradients & 0.01 & 0.11 & 2.13 & 0.10 & 9.26 & 0.10 & 1.19 & 0.07 & 0.00 & 1.22 & 2.60 & 0.10 & 0.25 & 0.01 & 6.38 & 0.29 \\\\\nRand. task vector & 54.1 & 60.9 & 39.9 & 61.5 & 45.8 & 63.4 & 27.9 & 60.7 & 48.3 & 63.4 & 57.1 & 60.9 & 61.3 & 60.5 & 31.2 & 60.7 \\\\\\midrule\nNeg. 
task vector & 36.0 & 61.1 & 27.8 & 60.2 & 13.6 & 61.3 & 8.13 & 61.4 & 16.7 & 60.7 & 31.7 & 61.0 & 50.7 & 60.5 & 7.65 & 61.0 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:forget_image_b32}\n\\end{table*}\n\n\n\\subsection{Additional visualizations}\n\nIn Figure \\ref{fig:forget_img_lambdas}, we show how accuracy on the target and control tasks vary as we change the scaling coefficients $\\lambda$, both for the task vector obtained by fine-tuning on the target task and for a random vector of the same magnitude.\n\nAs the scaling coefficient increases, the curves traced by the task vector and a random vector behave differently. For task vectors, performance on the target tasks ($y$-axis) initially decreases faster than performance on the control task ($x$-axis), so there exists models with high accuracy on the control task but low accuracy on the target task. In contrast, such points do not exist in the curves traced by random vectors, which move more linearly towards the origin. In practice, this means forgetting is effective for task vectors obtained by fine-tuning, but not for random vectors.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/wses_randomdir.pdf}\n \\caption{Comparison between task vectors and random vectors for forgetting image classification tasks.}\n \\label{fig:forget_img_lambdas}\n\\end{figure*}\n\n\\subsection{The effect of class overlap}\n\nIn Tables \\ref{tab:forget_image_l14}, \\ref{tab:forget_image_b32}, \\ref{tab:forget_image_b16}, we observe that the tasks where forgetting via task vectors is least effective are tasks where the distribution of images is closer to ImageNet, SUN397 \\citep{sun397}, a scene understanding dataset with classes such as ``church\" and ``tower\", and Stanford Cars \\citep{cars}, a dataset with with many car categories such as ``2012 Tesla Model S\" or ``2012 BMW M3 coupe\". One reasonable hypothesis is that forgetting is less effective for those tasks due to the overlap with the images from the control tasks. \n\nTo better understand this effect, we measure accuracy on a subset of classes from ImageNet, such that the overlap is minimized.\nConcretely, we exclude nodes from the WordNet hierarchy from which the ImageNet classes are based.\\footnote{A visualization is available at \\url{https:\/\/observablehq.com\/@mbostock\/imagenet-hierarchy}}\nFor the Cars dataset, we exclude the all subnodes under the node ``wheeled vehicle\" (e.g., ``minivan\", ``jeep\", ``limousine\").\nFor SUN397, we exclude all subnodes under the nodes ``structure\" and ``geological formation\". \nAs shown in Table \\ref{tab:overlap-ablation}, we do not observe large differences after filtering. \n\n\\begin{table*}\n\\caption{The effect of semantic overlap with the control task in forgetting experiments on image classification tasks. 
Results are shown for a CLIP ViT-L\/14 model, reporting accuracy both on the target task and control task (Ctrl, ImageNet).}\n\\setlength\\tabcolsep{4.4pt}\n\\renewcommand{\\arraystretch}{1.05}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lcc?cc?cc?cc}\n\\toprule\n\\multirow{2}{*}{Method} & \\multicolumn{4}{c?}{{Without filtering}} & \\multicolumn{4}{c}{With filtering} \\\\\n & Cars ($\\downarrow$) & Ctrl ($\\uparrow$) & SUN397 ($\\downarrow$) & Ctrl ($\\uparrow$) & Cars ($\\downarrow$) & Ctrl ($\\uparrow$) & SUN397 ($\\downarrow$) & Ctrl ($\\uparrow$) \\\\\\midrule\nPre-trained & 77.8 & 75.5 & 68.3 & 75.5 & 77.8 & 75.5 & 68.3 & 76.1 \\\\\nFine-tuned & 92.8 & 73.1 & 82.4 & 72.7 & 92.8 & 73.3 & 82.4 & 73.1 \\\\\\midrule\nNeg. task vector & 32.0 & 72.4 & 50.8 & 72.6 & 32.0 & 72.5 & 48.1 & 72.4\\\\\n\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:overlap-ablation}\n\\end{table*}\n\n\n\\subsection{Interpolating with a model fine-tuned with gradient ascent}\n\\label{sec:apapendix-gradient-ascent}\n\nOne baseline explored in the experiments is fine-tuning with gradient ascent, as explored in \\citet{golatkar2020eternal,tarun2021fast}. Our results show that this strategy is effective at reducing the accuracy on treatment tasks, but also substantially deteriorates accuracy on the control task, which is undesirable.\n\nWe further examine whether interpolations between the pre-trained model and the model fine-tuned with gradient ascent help with forgetting. Our results, shown in Figure \\ref{fig:neggrad}, indicate that interpolations greatly mitigate the low accuracy on the control task of the fine-tuned model, leading to even better accuracy trade-offs than the solutions obtained by extrapolation with standard fine-tuning.\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/wses_avg_neggrad.pdf}\n \\caption{Comparison with interpolations between the pre-trained model and models fine-tuned with gradient ascent.}\n \\label{fig:neggrad}\n\\end{figure*}\n\n\\subsection{When negating task vectors works best}\n\\label{sec:when-neg-works}\n\nWe observe a positive correlation between the gain in accuracy from fine-tuning and the drop in accuracy when subtracting the corresponding task vector, both in comparison with the pre-trained model (Figure \\ref{fig:forget-corr}).\nWe speculate that the reason for this correlation is that when the gains from fine-tuning are small, the task vector provides a less clear direction of improvement, and the opposite direction thus provides a less clear direction of performance deterioration.\nIn the extreme case where fine-tuning does not improve accuracy, it would be surprising if the corresponding task vector is useful.\n\nWe note that this is a limitation of editing models by negating task vectors. 
When models already strongly exhibit the behavior we wish to remove, it is harder to do so with this technique.\nIn those circumstances, a more promising approach is to add the task vector obtained with gradient ascent, as described in Appendix \\ref{sec:apapendix-gradient-ascent}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/forget_correlation.pdf}\n \\caption{Correlation between the gain in accuracy from fine-tuning and the drop in accuracy when subtracting the corresponding task vector for image classification tasks.}\n \\label{fig:forget-corr}\n\\end{figure*}\n\n\n\\subsection{Additional tasks}\n\nIn addition to the tasks explored in Section \\ref{sec:add_img}, we study two other tasks, OCR and person identification.\n\nFor OCR, we use the synthetic dataset from \\citet{ilharco2022patching}, built using images from SUN-397 as backgrounds and mismatched class names as texts.\nThe task vector is produced by fine-tuning on those images, with the objective of predicting the written text (and not the background).\nAs shown in Figure \\ref{fig:forget-more-tasks} (left), especially for the larger CLIP models, negating the task vectors leads to large drops in performance with little change in accuracy on ImageNet. \n\nFor person identification, we use the Celebrity Face Recognition dataset, containing close to a million pictures of around one thousand celebrities.\\footnote{\\url{https:\/\/github.com\/prateekmehta59\/Celebrity-Face-Recognition-Dataset}.} We split the data into a training, validation and test set with proportions 0.8, 0.1 and 0.1.\nResults are shown in Figure \\ref{fig:forget-more-tasks} (right).\nWhile negating the task vectors leads to performance deterioration, we find that forgetting is less effective compared to other tasks like OCR. We hypothesize that one explanation for this could be the fact that fine-tuning on this dataset does provides only small gains in accuracy, as discussed in Appendix \\ref{sec:when-neg-works}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/forget_more_tasks.pdf}\n \\caption{Forgetting by negating task vectors on additional vision tasks, OCR and person identification.}\n \\label{fig:forget-more-tasks}\n\\end{figure*}\n\n\\section{Forgetting with text generation}\n\\label{sec:appendix-neg-lang}\n\nThis section presents additional experimental details and results complementing the findings presented in Section \\ref{sec:forget_lang}, showcasing the effect of negating task vectors from text generation.\n\n\\subsection{Experimental details}\n\nTo obtain task vectors, we fine-tune on data Civil Comments \\citep{borkan2019nuanced} where the toxicity score is larger than 0.8.\nWe then fine-tune GPT-2 models \\citep{radford2019language} from Hugging Face transformers library \\citep{wolf2019huggingface}.\nWe use a learning rate of 1e-5, and fine-tune with a causal language modeling objective with the AdamW optimizer for 5 epochs using a global batch size of 32.\nAfter fine-tuning, we evaluate models obtained by adding task vectors with scaling coefficients $\\lambda \\in \\{0.0, 0.1, \\cdots, 1.0\\}$.\nIn Table \\ref{tab:toxicity}, we report results for the largest scaling coefficient such that perplexity is still within 0.5 points of the perplexity of the pre-trained model.\nTo evaluate toxicity, we generate 1000 samples from the models. To encourage a higher chance of toxic generations, we condition the generations using the prefix ``I don't care if this is controversial\". 
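For concreteness, the snippet below sketches this generation and scoring loop using the Hugging Face \texttt{transformers} library and the Detoxify package; the decoding parameters shown are illustrative rather than the exact settings used in our experiments.

\begin{verbatim}
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from detoxify import Detoxify

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")  # or an edited model
scorer = Detoxify("original")

prefix = "I don't care if this is controversial"
inputs = tokenizer(prefix, return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=50,
                         num_return_sequences=8,
                         pad_token_id=tokenizer.eos_token_id)
texts = tokenizer.batch_decode(outputs, skip_special_tokens=True)

# Each generation receives a toxicity score in [0, 1]; generations whose
# score exceeds a chosen threshold are counted as toxic.
scores = [scorer.predict(t)["toxicity"] for t in texts]
\end{verbatim}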
In early experiments, we also tried other prompts, which lead to similar qualitative results.\nWe evaluate other prompts in Appendix \\ref{sec:app-realtoxic}.\nTo evaluate fluency, we measure the perplexity of the models on WikiText-103 with a striding window of size 1024 and a stride of 512 tokens.\n\n\n\\subsection{Additional models}\n\nIn addition to the GPT-2 Large models showed in Table \\ref{tab:toxicity}, we present results for GPT-2 Medium and GPT-2 Small models in Tables \\ref{tab:toxicity_gpt2med} and \\ref{tab:toxicity_gpt2small}.\nWe observe the same qualitative trends for the additional models. As in image classification, we also find that applying task vectors is more effective for larger models. \n\n\\begin{table*}\n\\caption{Making language models less toxic with negative task vectors. Results are shown for the GPT-2 Medium model.}\n\\setlength\\tabcolsep{4.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lrrr}\n\\toprule\n Method & \\% toxic generations ($\\downarrow$)& Avg. toxicity score ($\\downarrow$) & WikiText-103 perplexity ($\\downarrow$)\n \\\\\\midrule\nPre-trained & 4.3 & 0.06 & 18.5 \\\\\nFine-tuned & 54.5 & 0.54 & 20.2 \\\\\nGradient ascent & 0.0 & 0.00 & $>$10$^{10}$ \\\\\nRandom task vector & 4.2 & 0.05 & 18.5 \\\\\\midrule\nNegative task vector & 1.8 & 0.02 & 18.9 \\\\\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:toxicity_gpt2med}\n\\end{table*}\n\n\n\\begin{table*}\n\\caption{Making language models less toxic with negative task vectors. Results are shown for the GPT-2 Small model.}\n\\setlength\\tabcolsep{4.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lrrr}\n\\toprule\n Method & \\% toxic generations ($\\downarrow$)& Avg. toxicity score ($\\downarrow$) & WikiText-103 perplexity ($\\downarrow$)\n \\\\\\midrule\nPre-trained & 3.7 & 0.04 & 25.2 \\\\\nFine-tuned & 62.9 & 0.61 & 28.1 \\\\\nGradient ascent & 0.0 & 0.00 & $>$10$^{10}$ \\\\\nRandom task vector & 3.2 & 0.04 & 25.3 \\\\\\midrule\nNegative task vector & 2.5 & 0.03 & 25.3 \\\\\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:toxicity_gpt2small}\n\\end{table*}\n\n\\subsection{RealToxicityPrompts}\n\\label{sec:app-realtoxic}\n\nWe present additional experiments using RealToxicityPrompts \\citep{gehman2020realtoxicityprompts}, a dataset of natural language prompts used for measuring toxicity in language models. As in \\citet{gehman2020realtoxicityprompts}, we evaluate language models using 25 generations per prompt, using the Perspective API.\\footnote{\\url{https:\/\/github.com\/conversationai\/perspectiveapi}}\n\nIn Figure \\ref{fig:realtoxic}, we present results showing the expected maximum toxicity across the 25 generations and the perplexity on WikiText-103 as we vary the scaling coefficients.\nWe show results both for the \\textit{challenging} subset of the dataset, containing 1.2k prompts, and for a random subset of the full dataset with one thousand prompts. 
In both cases, we see qualitatively similar trends: negating task vectors produced by fine-tuning on toxic data reduces the toxicity of the generations.
For GPT-2 large, we see close to vertical movement as the scaling coefficient increases, showing large decreases in toxicity with little change in perplexity on WikiText-103.
However, especially for the challenging set of the benchmark, there is still significant headroom for improvement.

\begin{figure}%
    \centering
    \subfloat{{\includegraphics[width=0.495\linewidth]{figures/forget_realtoxic_random.pdf} }}%
    \subfloat{{\includegraphics[width=0.495\linewidth]{figures/forget_realtoxic_challenging.pdf} }}%
    \caption{\textbf{Toxicity results using RealToxicityPrompts} \citep{gehman2020realtoxicityprompts}, for various GPT-2 models.}%
    \label{fig:realtoxic}%
\end{figure}

\section{Learning via addition}
\label{sec:appendix-add}

In all experiments, we add task vectors together and use a \textit{single} scaling coefficient for the sum of the vectors, $\lambda \in \{0, 0.05, 0.1, \cdots, 1.0\}$.
While scaling each task vector by its own coefficient could improve performance, exploring all combinations of scaling coefficients quickly becomes infeasible as the number of tasks grows, due to the curse of dimensionality. While we focus on a single scaling coefficient for simplicity, more sophisticated strategies could be explored in future work, such as using black box optimization to search the space of scaling coefficients.

Furthermore, we note that the best multi-task model given a set of task vectors is often not obtained by using all of the task vectors, as shown in Figure \ref{fig:clip-add-all}. Since adding task vectors is computationally efficient and evaluations are usually substantially less expensive than training, practitioners could try out many subsets of task vectors and choose the ones that maximize performance on the tasks of interest. Moreover, faster techniques such as the greedy algorithm proposed by \citet{wortsman2022model} could allow users to efficiently discard task vectors that do not improve accuracy.

\subsection{The impact of random seeds}

We fine-tune five CLIP models on MNIST and five models on EuroSAT, varying only the random seed. We then edit models by adding all possible combinations of the corresponding task vectors (25 in total). The results in Figure \ref{fig:seeds} indicate that different random seeds have little impact on the resulting accuracy of the edited models for this setup. It is possible that we would observe larger variance in other settings such as natural language processing \citep{dodge2020fine,juneja2022linear}, but we again observe that users can simply discard task vectors that yield no improvement on validation data.

\begin{figure}
    \centering
    \includegraphics[width=0.5\textwidth]{figures/seed_ablation.pdf}
    \caption{\textbf{The impact of random seeds when fine-tuning.} Using different random seeds when fine-tuning on image classification tasks has little impact on the accuracy of edited models.}
    \label{fig:seeds}
\end{figure}

\subsection{Multi-task training}

In addition to using multiple specialized models, we compare against a single multi-task model obtained via jointly fine-tuning on the eight image classification tasks we study. We fine-tune with the same hyper-parameters described in Appendix \ref{sec:clip-exp-details}, also freezing the classification heads.
\n\nMulti-task fine-tuning on the eight tasks achieves an average normalized performance of 0.994, compared to the best result obtained with task vectors, 0.912 (recall that 1.0 is obtained with multiple specialized models). Despite the headroom for improvement, multi-task training is less modular than using task vectors, requiring a new fine-tuning round every time a new task is added. In contrast, task vectors can be combined without any additional training and without the need to store or transfer the data used to create them, and can draw from the large pool of existing fine-tuned models such as the ones available on model hubs.\n\n\\subsection{Scaling coefficients}\n\nIn Figure \\ref{fig:acc-per-alpha} (left), we show the optimal scaling coefficients for the experiments where task vectors are added together. Recall that a single scaling coefficient is used for each experiment, regardless of the number of task vectors in the experiment. The variance in the optimal scaling coefficients can be large, highlighting the need for tuning on a case-by-case basis. However, compared to tuning traditional hyper-parameters, tuning the scaling coefficient is less computationally expensive since, unlike most hyper-parameters, the scaling coefficient can be changed without any additional training.\n\n\n\n\\begin{figure}%\n \\centering\n \\subfloat{{\\includegraphics[width=0.495\\linewidth]{figures\/clip_add_alphas.pdf} }}%\n \\subfloat{{\\includegraphics[width=0.495\\linewidth]{figures\/clip_add_acc_per_alpha.pdf} }}%\n \\caption{\\textbf{The effect of scaling coefficients when adding task vectors}. Left: Optimal scaling coefficients when adding task vectors. Right: average normalized performance as a function of the scaling coefficient and the number of task vectors.}%\n \\label{fig:acc-per-alpha}%\n\\end{figure}\n\n\n\nIn Figure \\ref{fig:acc-per-alpha} (right), we show the average normalized performance across experiments as we vary the scaling coefficient and the number of task vectors. Scaling coefficients in the range 0.3 to 0.5 produce close to optimal results in many cases, although we recommend tuning this parameter when possible for best results.\n\n\\subsection{Accuracy on subsets of tasks}\n\nComplementing our results in the main paper, we show in Figure \\ref{fig:add-subsets} the average performance for all subsets task vectors, averaged only over the tasks that originated the task vectors (recall that in Figure \\ref{fig:clip-add-all} we presented the normalized accuracy averaged over \\textit{all} tasks). We find that for smaller subsets, the single model obtained by adding task vectors matches more closely the performance of multiple specialized models, although that gap increases as the size of the subsets grow. \n\n\n\\begin{figure}%\n \\centering\n \\includegraphics[width=0.55\\linewidth]{figures\/clip_add_v2.pdf}\n \\caption{\\textbf{Building multi-task models by adding task vectors.} Unlike results shown in Figure \\ref{fig:clip-add-all}, here performance is averaged only over the tasks used to build the task vectors in each experiment.}%\n \\label{fig:add-subsets}%\n\\end{figure}\n\n\n\n\\subsection{ImageNet experiments}\n\nIn addition to results presented in Section \\ref{sec:add_img}, we explore whether addition performs well when fine-tuning on a larger-scale dataset, ImageNet. 
We fine-tune with the same hyper-parameters as described in Appendix \\ref{sec:clip-exp-details}, except for using a larger number of steps (4 epochs, around 40 thousand steps), to account for the larger size of ImageNet.\n\nWe then add the ImageNet task vector with each of the eight task vectors from Section \\ref{sec:add_img}, measuring accuracy both on ImageNet and on the task from the second task vector. For example, for MNIST, we add the MNIST task vector and the ImageNet task vector, and measure accuracy both on MNIST and on ImageNet. As shown in Figure \\ref{fig:add-imagenet}, adding the task vectors produces a single model with high accuracy on both tasks, which in most experiments is competitive with the fine-tuned models on their respective datasets.\n\n\n\n\\begin{figure}%\n \\centering\n \\includegraphics[width=0.99\\linewidth]{figures\/imagenet_add_eval_imagenet.pdf}%\n \\bigbreak\n \\includegraphics[width=0.99\\linewidth]{figures\/imagenet_add_eval_other.pdf}%\n \\caption{\\textbf{Adding pairs of task vectors containing a task vector from ImageNet}. For all eight other target tasks from Section \\ref{sec:add_img}, adding their task vector with an ImageNet task vector produces a model with high accuracy both on that task and on ImageNet.}%\n \\label{fig:add-imagenet}%\n\\end{figure}\n\n\\subsection{Adding pairs of task vectors from NLP tasks}\n\\label{sec:appendix-add-lang}\n\n\nIn this section, we present results for building multi-task models using checkpoints that were \\textit{not} fine-tuned by the authors, and were instead downloaded directly from a hub that hosts model checkpoints publicly (the Hugging Face Hub).\\footnote{\\url{https:\/\/huggingface.co\/models}}\n\nOur motivation is aligned with that of previous work on building multi-task models \\citep{colin2020exploring,khashabi2020unifiedqa,zhong2021adapting,mishra2022cross,wei2021finetuned,sanh2022multitask,min2022metaicl,wang2022benchmarking}.\n\nMore specifically, we explore six fine-tuned T5 models \\citep{colin2020exploring} downloaded from the Hugging Face Hub using popularity and diversity as criteria. The models were fine-tuned on a diverse set of natural language processing tasks, including sentiment analysis using movie reviews from IMDB \\citep{maas2011imdb}, question answering (RACE, \\citet{lai2017race}; QASC, \\citet{allenai:qasc}), summarization (MultiNews, \\citet{alex2019multinews}), question generation (SQuAD, \\citet{squadv1}); and constrained text generation (CommonGen, \\citet{lin2020commongen}). The checkpoints and tasks were chosen based on the availability of models that were fine-tuned from the same initialization (a T5-Base model), were fine-tuned without introducing new parameters, and based on diversity of the tasks and popularity of the checkpoints on the hub. 
\nThe specific checkpoints we use are:\n\n\\begin{itemize}\n \\item IMDB: \\texttt{mrm8488\/t5-base-finetuned-imdb-sentiment}\n \\item RACE: \\texttt{mrm8488\/t5-base-finetuned-race}\n \\item QASC: \\texttt{mrm8488\/t5-base-finetuned-qasc}\n \\item MultiNews: \\texttt{mrm8488\/t5-base-finetuned-summarize-news}\n \\item SQuAD: \\texttt{mrm8488\/t5-base-finetuned-question-generation-ap}\n \\item CommonGen: \\texttt{mrm8488\/t5-base-finetuned-common\\_gen}\n\\end{itemize}\n\n\n\nFor evaluation, we use accuracy for the text classification task (IMDB), exact match for question answering tasks (RACE and QASC) and ROUGE-2\\footnote{\\url{https:\/\/huggingface.co\/spaces\/evaluate-metric\/rouge}} for text generation tasks (MultiNews, SQuAD question generation, and CommonGen).\nAs in Section \\ref{sec:add_img}, we normalize the performance on each task by the performance of the fine-tuned model on that task, to account for differences in task difficulty and evaluation metric.\n\nAs in image classification, we find that we can compress pairs of models into a single multi-task model with little performance loss (Figure \\ref{fig:add-nlp}).\nThese results are somewhat surprising, since the gap between the pre-trained model and fine-tuned models is much larger, and tasks vary widely in terms of input domain, length, and output type.\nMoreover, while there is more variance across different subsets of tasks when compared to image classification, in various cases we observe \\emph{higher} performance than that of specialized models.\nOn average, the normalized average performance of the model obtained by adding task vectors is 96.7\\%.\n\n\n\\begin{figure}%\n \\centering\n \\includegraphics[width=0.99\\linewidth]{figures\/t5_add_v3.pdf}%\n \\caption{Adding pairs of task vectors from natural language processing tasks.}%\n \\label{fig:add-nlp}%\n\\end{figure}\n\n\n\n\\subsection{GLUE experiments}\n\\label{sec:app-glue}\n\nIn this section, we describe the experimental setup used for investigations presented in Section \\ref{sec:add-nlp}, studying whether performance on specific target tasks can be improved by adding external task vectors.\n\nOur experiments use T5-base models, fine-tuned on four tasks from the GLUE benchmark:\n\n\\begin{itemize}\n \\item \\textbf{Microsoft Research Paraphrase Corpus} (MRPC; \\citet{dolan2005automatically}) is a paraphrase task containing pairs of sentences labeled as either nearly semantically equivalent or not. The dataset is evaluated using the average of $F_1$ and accuracy.\n \\item \\textbf{Recognizing Textual Entailment} (RTE; \\citet{wang2018glue}) is a dataset where models are tasked to predict whether a sentence entails or contradicts another sentence. The data is originally from a series of datasets \\cite{dagan2005pascal, bar2006second, giampiccolo2007third, bentivogli2009fifth}. Accuracy is used as the evaluation metric.\n \\item \\textbf{Corpus of Linguistic Acceptability} (CoLA;\n \\citet{warstadt2018neural}) is a dataset with sentences labeled as either grammatical or ungrammatical. Models are evaluated on Matthews correlation (MCC; \\cite{matthews1975comparison}), which ranges between $-1$ and $1$.\n \\item \\textbf{Stanford Sentiment Treebank} (SST-2; \\citet{socher2013recursive}) is a sentiment analysis task, containing sentences labelled as containing \\textit{positive} or \\textit{negative} sentiment. 
Accuracy is used as the evaluation metric.\n\\end{itemize}\n\nFor all tasks, we split the training set into two subsets, one used for fine-tuning and one used for determining the best external task vector, with the same size as the original validation sets. For fine-tuning, we use a batch size of 32, learning rate 1e-5 and fine-tune for 5 epochs using AdamW and a linear learning rate schedule. All results are averaged over 3 random seeds.\nWhen evaluating, we perform two forward passes for each sample, one for each label, and choose the label that minimizes the perplexity of the decoder.\n\n\n\\section{Task analogies}\n\nSimilarly to the experiments where multiple models are added together, we use a \\textit{single} scaling coefficient for the vector resulting from the task arithmetic, $\\lambda \\in \\{0, 0.1, \\cdots, 1.0\\}$.\nWhile scaling each task vector by its own coefficient could improve performance, we avoid this strategy since it complicates the search space and makes explorations more expensive. \nWe note that visual analogies have been explored in previous literature, albeit not at the task level \\cite{sadeghi2015visalogy}.\n\n\\subsection{Domain generalization}\n\\label{sec:app-sentiment}\n\nHere, we use task analogies to improve performance on tasks where no labeled data is available. We consider both Yelp \\citep{zhang2015character} and Amazon \\citep{mcauley2013hidden} binary-sentiment analysis as target tasks, using the \\texttt{amazon\\_polarity} and \\texttt{yelp\\_polarity} datasets from Huggingface datasets \\citep{lhoest-etal-2021-datasets}. As detailed in \\ref{sec:analogies}, given target and auxiliary tasks, we construct task vectors using the relationship $\\hat{\\tau}_\\textrm{target;\\,sent} = \\tau_\\textrm{target;\\,lm} + (\\tau_\\textrm{auxiliary;\\,sent} - \\tau_\\textrm{auxiliary;\\,lm})$. We apply two scaling coefficients: one to the auxiliary sentiment task vector, and another to the language modeling task vectors. \n\nWe compare our task analogy approach to two other baselines: fine-tuning on the auxiliary task, and fine-tuning on the target task. The latter represents a performance upper bound, assuming we have labeled data for the target task. \n\nTo produce language model task vectors, we use consecutive 128-token chunks of text in each task as input-output pairs, following \\citet{lester-etal-2021-power}. To make predictions under the classification task, we follow the evaluation technique described in \\ref{sec:app-glue}.\n\n\n\nFor all models, we perform a single epoch of fine-tuning, setting a batch size of 2 and accumulating gradients across 8 steps. We use AdamW and a linear learning rate schedule. We set the maximum input and output sequence length to be 128. For each model scale, we perform a grid search over learning rates in \\{1e-5, 3e-5, 5e-5, 8e-4\\}, choosing the fastest learning rate that avoids divergence.\n\n\nTo construct a task vector using the task analogy, we perform a grid search over the values $\\lambda \\in \\{0.0, 0.1, ..., 1.0\\}$ for each scaling coefficient. Regardless of scale, we found that giving higher weight to the auxiliary sentiment task vector produced higher accuracy. For the smallest model, we saw better performance when applying a lower-valued coefficient to the language modeling task vectors. For the largest model, applying larger coefficients to the language modeling task vectors produced better performance. 
This trend may be reflective of the finding in \\ref{sec:forget_lang} that task forgetting is more effective with larger models.\n\n\\subsection{Kings and Queens}\n\\label{sec:appendix-kingsandqueens}\n\nAs a warm-up, we consider the task of classifying images as ``queen\", ``king\", ``woman\" or ``man\".\nWe collect 200 images from the web (50 for each category), by manually searching for the terms ``queen\", ``king\", ``man\" and ``woman\" using Google Images searches. We present samples in Figure \\ref{fig:kings-and-queens-samples}.\n\nOur experiments explore whether we can improve accuracy on each target category using only data from the other three categories.\nFor each category, we fine-tune CLIP models on the remaining three categories, and combine the task vectors according to the analogy relationship, e.g., $\\hat{\\tau}_\\textrm{king} = \\tau_\\textrm{queen} + (\\tau_\\textrm{man} - \\tau_\\textrm{woman})$.\nIn addition to evaluating on our collected set of images, we also evaluate on the ImageNet dataset as a control task.\n\nAs shown in Table \\ref{tab:kingsandqueens}, task analogies yield large gains in accuracy over pre-trained models with very little change in the control task, despite having no training data for the target task. Similar to \\citet{ilharco2022patching,ramasesh2021effect}, we find that results improve with model scale.\n\n\n\\begin{table*}\n\\caption{\\textbf{Learning via analogy.} By leveraging vectors from related tasks, we can improve accuracy on four new target tasks without any training data, and with little change on control settings. Results are shown for the CLIP models \\citep{radford2019language}, additional details are provided in Appendix \\ref{sec:appendix-kingsandqueens}.}\n\\setlength\\tabcolsep{4.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lcccccccc}\n\\toprule\n \\multirow{2}{*}{Method} & \\multicolumn{2}{c}{Queens} & \\multicolumn{2}{c}{Kings} & \\multicolumn{2}{c}{Woman} & \\multicolumn{2}{c}{Men}\n \\\\\n & Target & Control & Target & Control & Target & Control & Target & Control \\\\\n \\midrule\n ViT-B\/32 & 0.00 & 63.4 & 0.00 & 63.4 & 0.00 & 63.4 & 0.00 & 63.4\\\\\n \\quad{+ task vectors} & 42.0 & 62.4 & 30.0 & 62.4 & 69.4 & 62.5 & 58.0 & 62.6\\\\\\midrule\n ViT-B\/16 & 0.00 & 68.3 & 0.00 & 68.3 & 0.00 & 68.3 & 0.00 & 68.3 \\\\\n\\quad{+ task vectors} & 66.0 & 67.5 & 94.0 & 67.4 & 87.8 & 67.5 & 62.0 & 67.6 \\\\\\midrule\nViT-L\/14 & 0.00 & 75.5 & 0.00 & 75.5 & 0.00 & 75.5 & 0.00 & 75.5 \\\\\n\\quad{+ task vectors} & 100 & 74.7 & 100 & 74.5 & 100 & 74.6 & 96.0 & 74.6\\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:kingsandqueens}\n\\end{table*}\n\n\n\nFine-tuning CLIP models is done as described in Section \\ref{sec:clip-exp-details}, with the exception of using 40 optimization steps because of the small size of the datasets. When fine-tuning, we use only the images from one category (e.g., ``king\"), and a set of 1001 classes from which to choose, composed by the 1000 classes in ImageNet, and a new class. Since CLIP has already seen many images of queens, kings, men and women in its pre-training, we use a new category name for the new class when fine-tuning, in order to simulate learning a new concept. More concretely, we use the class name ``something\", which makes the accuracy of zero-shot models close or equal to zero. When evaluating, we also contrast between all 1001 options, including all ImageNet classes. 
This is done both for our target task, and for ImageNet, where we add an additional option.\nNote that we do not need to introduce any new task-specific weights to do all of these operations, since CLIP can perform classification with any set of classes by using its text encoder (which is frozen as in Section \\ref{sec:clip-exp-details}).\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=.85\\textwidth]{figures\/kinds_and_queens.pdf}\n \\caption{Samples from the dataset we collect for classifying queens, kings, women and men, as described in Section \\ref{sec:appendix-kingsandqueens}.}\n \\label{fig:kings-and-queens-samples}\n\\end{figure*}\n\n\n\\subsection{Subpopulations}\n\\label{sec:appendix-sketches}\n\nWe fine-tune CLIP models on each of the subpopulations with the same hyper-parameters as described in Section \\ref{sec:clip-exp-details}, using 500 optimization steps regardless of the number of samples. For the few-shot experiments, we sample the same number of samples for every class in the task.\nFor convenience, let ImageNet-A\\footnote{Not to be confused with the adversarial dataset from \\citet{imageneta}.} and ImageNet-B represent the two subpopulations from ImageNet, and Sketches-A and Sketches-B represent the two subpopulations from the sketches dataset from \\citet{eitz2012humans}. Note that ImageNet-A and Sketches-A share the same classes, and the same is true for ImageNet-B and Sketches-B. We present samples in Figure \\ref{fig:sketches-samples}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=.85\\textwidth]{figures\/sketches.pdf}\n \\caption{Samples from the datasets used for the analogies with subpopulations experiments, as described in Section \\ref{sec:appendix-sketches}.}\n \\label{fig:sketches-samples}\n\\end{figure*}\n\nComplementing Figure \\ref{fig:clip-analogies}, we show a breakdown per model and for every subpopulation as a target in Table \\ref{tab:sketches}.\n\n\n\n\\paragraph{Independent scaling coefficients.} In addition to our standard procedure of using a single scaling coefficient for the vector resulting from the arithmetic operations, we explore having independent scaling coefficients for each task vector in the expression. In other words, we explore the models $\\theta_\\textrm{new} = \\theta + \\lambda_C\\tau_C + \\lambda_B\\tau_B - \\lambda_A\\tau_A$ for various scaling coefficients $\\lambda_A, \\lambda_B, \\lambda_C \\in \\{0, 0.1, \\cdots, 1.0\\}$.\nOn average, the optimal scaling coefficients were $\\lambda_B^\\star=\\lambda_C^\\star=0.32$ and $\\lambda_A^\\star=0.28$.\nUsing independent scaling coefficients improved performance over using a single scaling coefficient by 0.7 percentage points on average, but also required substantially more evaluations to be made ($10^3$ instead of 10). 
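\n\nTo make this search concrete, a minimal Python-style sketch of the independent-coefficient grid search is given below. The function and variable names are illustrative (not part of a released implementation), and the \\texttt{evaluate} callback is assumed to return validation accuracy for a candidate set of weights.\n\n\\begin{verbatim}\nimport itertools\n\n# theta: pre-trained weights; tau_a, tau_b, tau_c: task vectors (dicts of tensors)\ndef analogy_grid_search(theta, tau_a, tau_b, tau_c, evaluate):\n    coeffs = [i / 10.0 for i in range(11)]            # {0.0, 0.1, ..., 1.0}\n    best_weights, best_score = None, float('-inf')\n    for la, lb, lc in itertools.product(coeffs, repeat=3):   # ~10^3 candidates\n        candidate = {k: theta[k] + lc * tau_c[k] + lb * tau_b[k] - la * tau_a[k]\n                     for k in theta}\n        score = evaluate(candidate)                   # held-out validation accuracy\n        if score > best_score:\n            best_weights, best_score = candidate, score\n    return best_weights, best_score\n\\end{verbatim}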
\n\n\n\n\\begin{table*}\n\\caption{\\textbf{Learning by analogy on subpopulations.} Results are shown for multiple CLIP models, as detailed in Section \\ref{sec:appendix-sketches}.}\n\\setlength\\tabcolsep{4.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lccccccc}\n\\toprule\n \\multirow{2}{*}{Model} & Samples & \\multirow{2}{*}{Task vectors} & \\multicolumn{5}{c}{Accuracy} \\\\\n& per class & & Sketches-A & Sketches-B & ImageNet-A & ImageNet-B & Average\\\\\\midrule\n\\multirow{8}{*}{ViT-B\/32} & 0 & \\xmark & 0.712 & 0.677 & 0.861 & 0.923& 0.793 \\\\\n& 0 & \\cmark & 0.782 & 0.758 & 0.861 & 0.926& 0.832 \\\\\n& 1 & \\xmark & 0.754 & 0.758 & 0.868 & 0.919& 0.825 \\\\\n& 1 & \\cmark & 0.782 & 0.766 & 0.866 & 0.922& 0.834 \\\\\n& 2 & \\xmark & 0.768 & 0.778 & 0.868 & 0.919& 0.833 \\\\ \n& 2 & \\cmark & 0.786 & 0.800 & 0.867 & 0.922& 0.844 \\\\\n& 4 & \\xmark & 0.810 & 0.780 & 0.871 & 0.926& 0.847 \\\\\n& 4 & \\cmark & 0.802 & 0.796 & 0.871 & 0.927& 0.849 \\\\\\midrule\n\\multirow{8}{*}{ViT-B\/16} & 0 & \\xmark & 0.716 & 0.732 & 0.885 & 0.946& 0.820\\\\\n& 0 & \\cmark & 0.794 & 0.794 & 0.889 & 0.953& 0.858\\\\\n& 1 & \\xmark & 0.758 & 0.812 & 0.894 & 0.948& 0.853\\\\\n& 1 & \\cmark & 0.796 & 0.804 & 0.897 & 0.957& 0.863\\\\\n& 2 & \\xmark & 0.792 & 0.817 & 0.897 & 0.951& 0.865\\\\\n& 2 & \\cmark & 0.804 & 0.829 & 0.899 & 0.956& 0.872\\\\\n& 4 & \\xmark & 0.815 & 0.812 & 0.904 & 0.952& 0.871\\\\\n& 4 & \\cmark & 0.831 & 0.825 & 0.904 & 0.953& 0.878\\\\\\midrule\n\\multirow{8}{*}{ViT-L\/14} & 0 & \\xmark & 0.823 & 0.831 & 0.913 & 0.962& 0.882\\\\\n& 0 & \\cmark & 0.879 & 0.861 & 0.922 & 0.968& 0.908\\\\\n& 1 & \\xmark & 0.845 & 0.863 & 0.923 & 0.971& 0.900\\\\\n& 1 & \\cmark & 0.879 & 0.863 & 0.930 & 0.973& 0.911\\\\\n& 2 & \\xmark & 0.865 & 0.881 & 0.925 & 0.973& 0.911\\\\\n& 2 & \\cmark & 0.875 & 0.881 & 0.932 & 0.975& 0.916\\\\\n& 4 & \\xmark & 0.875 & 0.883 & 0.934 & 0.973& 0.916\\\\\n& 4 & \\cmark & 0.903 & 0.887 & 0.941 & 0.975& 0.927\\\\\n\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:sketches}\n\\end{table*}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nStochastic Resonance (SR) was discovered barely about two and half decades\nago, yet it has proved to be very useful in explaining many phenomena in \nnatural sciences[1-3]. SR refers to an enhanced response of a nonlinear system \nto a subthreshold periodic input signal in the presence of noise of optimum \nstrength. Here, noise plays a constructive role of pumping power in a \nparticular mode, that is in consonance with the applied field, at the cost of\nthe entire spectrum of modes present in it. SR, so defined, leaves a lot of \nliberty as to what is the physical quantity that is to be observed which \nshould show a maximum as a function of noise strength[4-23]. In other words, \nno unique quantifier of SR is specified. Also, in order that SR be a bonafide\nresonance the quantifier must show maximum as a function of frequency of the \napplied field as well. 
For instance, in a double-well system, hysteresis loop\narea, input energy or work done on the system in a period of the driving \nfield and area under the first peak in the residence time (in a well) \ndistribution are used to characterize SR as a bonafide resonance[4-17,19-22].\n\nIn the present work, motivated by recently discovered fluctuation theorems,\nwe show that in an overdamped bistable system input energy per period as well\nas the energy absorbed per period by the system from the bath, i.e, the heat,\ncan be used as quantifiers to study SR. Also, it is found that the relative \nvariance of both the quantities exhibit minimum at resonance; that is, \nwhenever input energy and heat show maximum as a function of noise strength \n(as also frequency), their respective relative fluctuations show minimum. \nThis shows that at SR the system response exhibits greater degree of \ncoherence. These fluctuations, however, are very large and often the physical\nquantities in question become non-self-averaging. We study some of these \naspects in the light of the fluctuation theorems in the following sections. \nThe fluctuation theorems are of fundamental importance to nonequilibrium \nstatistical mechanics[24-46]. The fluctuation theorems describe rigorous \nrelations for properties of distribution functions of physical variables such\nas work, heat, entropy production, etc., for systems far from equilibrium \nregimes where Einstein and Onsagar relations no longer hold. These theorems \nare expected to play an important role in determining thermodynamic \nconstraints that can be imposed on the efficient operation of machines at \nnano scales. Some of these theorems have been verified experimentally[47-53].\n\n\\section{The Model}\nWe consider the motion of a particle in a double-well potential \n$V(x)=-\\frac{a x^{2}}{2}+\\frac{b x^{4}}{4}$ under the action of a weak \nexternal field $h(t)=A\\sin(\\omega t)$. The motion is described by the \noverdamped Langevin equation[44]\n\\begin{equation}\n\\gamma \\frac{dx}{dt}=-\\frac{\\partial U(x)}{\\partial x}+\\xi(t) ,\n\\end{equation}\nwhere $U(x)=V(x)-h(t)x$. The random forces satisfy \n$\\langle \\xi(t) \\rangle =0$ and \n$\\langle \\xi(t)\\xi(t^{'}) \\rangle=2\\gamma k_{B} T \\delta(t-t^{'})$,\nwhere $\\gamma$ is the coefficient of friction, $T$ is the absolute \ntemperature and $k_{B}$ is the Boltzmann constant. In the following we use a \ndimensionless form of equation(1), namely,\n\\begin{equation}\n\\frac{dx}{dt}=-\\frac{\\partial U(x)}{\\partial x}+\\xi(t),\n\\end{equation}\nwhere $U(x)=-\\frac{x^{2}}{2}+\\frac{x^{4}}{4}-xh(t)$, and \nthe external field $h(t)=A\\sin(\\omega t)$. Now, $\\xi(t)$ satisfies \n$\\langle \\xi(t) \\xi(t^{'}) \\rangle=D \\delta(t-t^{'})$, where $D=2 k_{B} T$. \nAll the parameters are given in dimensionless units~~(in terms of \n$\\gamma$, $a$ and $b$). We consider $A \\ll 0.25$, so that the forcing \namplitude is much smaller than the barrier height between the two wells.\n\nFollowing the stochastic energetic formalism developed by Sekimoto[55], the \nwork done by the external drive $h(t)$ on the system or the input energy per \nperiod (of time $\\tau_{\\omega}$) is defined as[21] \n\\bdm\nW_{p}= \\int_0^{t_{0}+\\tau_{\\omega}} \\frac{\\partial U}{\\partial t} dt\n\\edm\n\\begin{equation}\n= -\\int_0^{t_{0}+\\tau_{\\omega}} x(t) \\frac{dh(t)}{dt} dt,\n\\end{equation} \nwhere $h(t)$ is the drive field which completes its period in time \n$\\tau_{\\omega}$. 
The completion of one period of $h(t)$, however, does not \nguarantee the system coming back to the same state as the starting one. In \nother words, $x(t+\\tau_{\\omega})$ need not be equal to $x(t)$ or \n$U(x,t+\\tau_{\\omega})$ may differ from $U(x,t)$. The work done over a period \n$W_{p}$ equals change in the internal energy \n$\\Delta U=U(x,t_{0}+\\tau_{\\omega})-U(x,t_{0})$ and heat $Q$ absorbed over a \nperiod (first law of thermodynamics), i.e, $W_{p}=\\Delta U_{p}+Q_{p}$. Since \n$x(t)$ is stochastic, $W_{p}$, $\\Delta U_{p}$ and $Q_{p}$ are not the same \nfor different cycles(or periods) of $h(t)$. The averages are evaluated from \na single long trajectory $x(t)$ (eqn(3)). From the same calculations one can \nalso obtain the probability distribution $P(W)$ and various moments of $W$.\nSimilarly, appealing to the first law of thermodynamics as stated above we \ncan obtain $P(Q_{p})$ and $P(\\Delta U_{p})$ and their moments, where the \nsubscript p indicates evaluation of the physical quantities over one period \nof the field. Numerical simulation of our model was carried out by using \nHuen's method[56]. To calculate $W_{p}$ and $Q_{p}$ we first evolve the \nsystem and neglect initial transients. To get better statistics we calculate \n$W_{p}$, $Q_{p}$ for $10^{6}$ cycles. In some cases we evaluate $W$, \n$\\Delta U$ and $Q$ over many periods, $n$, and calculate their averages, \nagain, for $10^6$ such entities.\n\\section{Results and Discussions}\nThe internal energy being a state variable, average change in its value over \na period $\\Delta U_{p}$ is identically equal to zero. Thus, in the time \nperiodic asymptotic state averaged work done over the period \n$\\langle W_{p} \\rangle$ is dissipated in to heat $\\langle Q_{p} \\rangle$ by \nthe system to the bath. Thus, $\\langle Q_{p} \\rangle$ can also be identified \nas hysteresis loop area. As has been reported earlier[19-22], \n$\\langle W_{p} \\rangle$, the input energy per period, shows a maximum as a \nfunction of $D$. Fig(1) shows that $\\langle W_{p} \\rangle$ and \n$\\langle Q_{p} \\rangle$ coincide, thus both the physical quantities show\nSR. Hence, in this case input energy per period, the heat per period or the \nhysteresis loop area can equally well quantify stochastic resonance. However,\nin this work we focus mostly on the fluctuation properties of these \nquantities.\n\nThe relative variances $R_{W}$ and $R_{Q}$ of both $W_{p}$ and $Q_{p}$ \nrespectively show minimum (fig(2)) as a function of $D$. It may be \nnoted that even though $\\langle W_{p} \\rangle$ and $\\langle Q_{p} \\rangle$ are identical,~~ fluctuations in $W_{p}$ differ from the fluctuations in $Q_{p}$.\n The relative variance of $Q_{p}$ is always larger than that of $W_{p}$ for all $D$. It is also noteworthy that the minimum value of the relative \nvariance is larger than one. However, the minimum becomes less than one if \nthe averages are taken not over a single period of the field but over a \nlarger(integral) number, $n>1$, of periods. Therefore, in order to obtain \nmeaningful averages of these physical quantities in such driven systems one \nneeds to study over time scales much larger than one period so that the \naverages are significantly larger than the deviations about them. Also, as \n$n$ becomes large, the differences between the relative variances of $W$ and \n$Q$ become insignificant(see inset of fig(2)). 
Importantly, in the system \nunder study, this situation (mean $>$ dispersion) can be achieved by \nincreasing the duration of averaging time(or the number of periods,~ $n$) more \neasily around the value of $D$ where SR occurs. The minimum of relative \nvariance occurs just because the mean value is largest there and not because \ndispersions are smallest. However, as the number of periods $n$ is increased \nthe mean value of heat dissipated over the $n$ periods \n$\\langle Q_{np} \\rangle \\sim n$ for all $n$, whereas the dispersion \n$\\sim \\sqrt{n}$ for large $n$ so that the relative variance decreases with \n$n$ as $\\frac{1}{\\sqrt{n}}$ and one gets a range of $D$ where the averages \nbecome meaningful. We have observed numerically that $Q_{np}$ behaves as an \nindependent variable only when evaluated over a larger number of cycles $n$ \nas compared to in case of $W_{np}$. For our present parameters approximately\n$Q_{np}$ is uncorrelated beyond $10$ periods, whereas $W_{np}$ is uncorrelated beyond $5$ periods.\n\nIn fig(3), we have plotted average heat dissipated \n$\\langle Q_{p} \\rangle$($=\\langle W_{p} \\rangle$) over a single period as a \nfunction of frequency. The values of physical parameters are given in the \nfigure caption. The figure shows maximum as shown in earlier literature[21].\nThus $\\langle Q_{p} \\rangle $ acts as a quantifier of bonafide stochastic \nresonance. In the inset we give the corresponding relative variance of heat \nand work as a function of frequency. We observe that heat fluctuations are \nlarger than work fluctuations at all frequencies. Near the resonance the \nrelative variance shows a minimum. It may be noted that minimum relative \nvariance of both quantities $W_{p}$ and $Q_{p}$ are larger than one(fig(2) and fig(3)). \n\nIn fig(4), we plot the probability distribution of $W_{p}$ and $Q_{p}$ for \nvarious values of $D$. For low values of $D$ (e.g., $D=0.02$) $P(W_{p})$ is \nGaussian whereas $P(Q_{p})$ has a long exponential tail as in case of a \nsystem driven in a harmonic well and with almost no chance of a particle \ngoing over to the other well of the double-well potential. As $D$ is \ngradually increased rare passages to the other well becomes a possibility and\na very small peak appears at a finite positive value of $W_{p}$(or $Q_{p}$)\n(e.g., at $D=0.04$). As $D$ is increased further, $P(W_{p})$ and $P(Q_{p})$ \nbecome multipeaked and the averages $\\langle W_{p} \\rangle$, \n$\\langle Q_{p} \\rangle$ shifts to their positive values. The distributions \nbecome most asymmetric at around $D=0.12$ (where SR occurs) and the asymmetry\nreduces again at larger $D$, fig(4). When $D$ becomes large (e.g., $D=0.5$) \nthe distribution becomes completely symmetric and at such high $D$ values the\npresence of potential hump becomes ineffective to split the distribution into\ntwo or more peaks. At very small and very large $D$ values $P(W_{p})$ is close to \nGaussian and so does $P(Q_{p})$ but with a slow decaying exponential tail. In all\nthe graphs, the distribution of $P(Q_{p})$ ($P(W_{p})$) extend to negative values of \n$Q_{p}$ ($W_{p}$). Finite value for distribution in the negative side is \nnecessary to satisfy certain fluctuation theorems. 
Moreover, $P(Q_{p})$\nhas higher weightage for large negative $Q_{p}$ than that of work $W_{p}$.\n\nIt is worth reemphasizing that $W$ and $Q$ behave as additive (or extrinsic) \nphysical quantities with respect to the number of periods $n$ and hence \n$\\langle W_{np} \\rangle $ and $\\langle Q_{np} \\rangle $ increase \nin proportion to $n$ whereas $\\Delta U$, in this case, is an intrinsic \nphysical quantity and $\\frac{\\Delta U}{n} \\rightarrow 0$ as \n$n \\rightarrow \\infty$. This indicates that the distributions $P(W_{np})$ and\n$P(Q_{np})$ both have identical characteristics as $n \\rightarrow \\infty$.\nTherefore, the difference between \n$(\\frac{\\sqrt{\\langle W_{np}^{2} \\rangle -\\langle W_{np} \\rangle ^{2}}}\n{\\langle W_{np} \\rangle})$ and \n$(\\frac{\\sqrt{\\langle Q_{np}^{2} \\rangle -\\langle Q_{np} \\rangle ^{2}}}\n{\\langle Q_{np} \\rangle})$ vanishes as $n \\rightarrow \\infty$. In the recent \nliterature it is shown that the distribution $P(W_{np})$ over a large number\nof periods approaches a Gaussian. Also, if one considers $W_{p}$ over a \nsingle period by increasing the noise strength, $P(W_{p})$ approaches \nGaussian and satisfies the steady state fluctuation theorem (SSFT). SSFT \nimplies[26,34-36,44-46,51-53] the probability of physical quantity $x$ to \nsatisfy the relation $P(x)\/P(-x) = \\exp(\\beta x)$, where $\\beta$ is \nthe inverse temperature and $x$ may be work, heat, etc. In fig(5), the \nevolution of $ P(Q_{np})$ is shown as $n$ is increased . As $n$ increases the \ncontribution of negative $Q$ to the distribution decreases; besides, the \ndistribution gradually becomes closer and closer to Gaussian. There is a \ncontribution to $P(Q_{np})$ due to change in the internal energy $\\Delta U$ \nwhich is supposed to dominate at very large $Q$ making the distribution \nexponential in the asymptotic regime[34,35,53]. However, it is not possible to \ndetect this exponential tail in our simulations. For large $n$, $P(Q_{np})$ \napproaches Gaussian(inset of fig(5)). The Gaussian fit of the graph almost \noverlaps and the calculated ratio, \n$\\frac{\\langle Q_{np}^{2} \\rangle -\\langle Q_{np} \\rangle^{2}}{\\frac{2}\n{\\beta} \\langle Q_{np} \\rangle}$ equals $0.99$ for $n=25$. This ratio is \ncloser to one, a requirement for SSFT to hold where $P(Q)$ is Gaussian[22,44,45]. \nFig(6) shows the plot of $ln(\\frac{P(Q_{np})}{P(-Q_{np})})$ as a function of \n$\\beta Q_{np}$ for various values of $n$. One can readily see that slope of \n$ln(\\frac{P(Q_{np})}{P(-Q_{np})})$ approaches $1$ for \n$Q \\ll \\langle Q_{np} \\rangle $ for large $n$. This is a statement of \nconventional steady state fluctuation theorem. As the number of periods $n$,\nover which $Q_{np}$ is calculated, is increased, the conventional SSFT is \nsatisfied for $Q_{np}$ less than $\\langle Q_{np} \\rangle$ (e.g., for $n=25$,\nSSFT is valid for $Q_{np}$ less than $0.4$, for $D=0.16$). There exists an \nalternative relation for heat fluctuation, namely, the extended heat \nfluctuation theorem[34,35]. Here, the distribution function obeys a different \nsymmetry property for $Q \\gg \\langle Q_{np} \\rangle$ for finite $n$. As \n$n \\rightarrow \\infty$, $\\langle Q_{np} \\rangle \\rightarrow \\infty$ in this \nlimit, and hence conventional SSFT holds which has been clarified earlier \nin linear systems[53]. 
\n\nIt is further interesting to investigate effects associated with SR in an asymmetric double-well potential involving two hopping time scales instead of one as in the symmetric case. We therefore consider a scaled asymmetric potential $V(x)=\\frac{- x^{2}}{2}+\\frac{x^{4}}{4}-cx$ driven by the external field $h(t)$. Fig(7) shows the average input energy $\\langle W_{p} \\rangle$ and average heat $\\langle Q_{p} \\rangle $ over a single period as a function of $D$ for various values of the asymmetry parameter $c$. From this figure we find that the peak becomes broader and lower as $c$ is increased. The peak shifts to larger values of noise intensity for higher $c$. In other words, the phenomenon of SR is not as pronounced[2] as in the case of $c=0$ (fig(2)). This is because the synchronization between the signal and the particle hopping between the two wells becomes weak: for $c \\neq 0$, the mean time of passage from well $1$ to well $2$ is different from the mean time of passage from well $2$ to well $1$. As a consequence the relative variances $R_{W}$ and $R_{Q}$ become larger than in the case of $c=0$ (fig(2)), as shown in the inset of fig(7).\n\nIn fig(8(a)) and fig(8(b)) we have plotted the probability distributions $P(W_{p})$ and $P(Q_{p})$ over a single period for different values of the asymmetry parameter $c$, for a fixed value of $D=0.12$, $A=0.1$ and $\\omega =0.1$. As the asymmetry increases, the probability for the particle to remain in the lowest well is enhanced. Hence the particle performs simple oscillations around the most stable minimum over a longer time before making transitions to the other well. Hence the Gaussian-like peak near $W \\approx 0$ or $Q \\approx 0$ increases as $c$ increases. The weight of $P(W_{p})$ for larger values of work (positive as well as negative) decreases with increase in $c$. However, for $P(Q_{p})$, its magnitude at large positive and negative values of $Q_{p}$ increases as we increase the asymmetry parameter. This contrasting behavior can be attributed to the larger fluctuations of the internal energy $\\Delta U_{p}$ as one increases $c$. We have verified this separately. Due to this contribution of $\\Delta U_{p}$ to $Q_{p}$, the natures of $P(W_{p})$ and $P(Q_{p})$ are qualitatively different. In all cases, for fixed asymmetry $c$, fluctuations in heat are larger than fluctuations in work.\n\nIn fig(9) and fig(10) the evolution of $P(W_{np})$ and $P(Q_{np})$, respectively, is plotted for various values of the number of periods $n$. We clearly observe that as $n$ increases both distributions tend to become Gaussian, with the fluctuation ratio $\\frac {V}{(\\frac{2}{\\beta}\\langle M \\rangle )}=1$ between their variance $V$ and mean $\\langle M \\rangle$, as required to satisfy SSFT as mentioned earlier. To satisfy SSFT for heat we have to take a larger number of periods than for work. Only in the large $n$ limit does the contribution to heat from the internal energy become negligible. In the insets of fig(9) and fig(10) we have shown a Gaussian fit (with fluctuation ratio equal to one), which agrees perfectly well with our numerical data. Conclusions regarding the validity of SSFT in the asymmetric case over large numbers of periods remain the same as for the symmetric case.\n\n\nIn summary, we find that SR shown by a particle moving in a double-well (symmetric) \npotential and driven by a weak periodic field can be characterized well by the heat~$\\langle Q_{p}\\rangle$ dissipated to the bath or the hysteresis loop area. It can equally well be 
characterized by the relative dispersion of $\\langle W_{p}\\rangle$ and $\\langle Q_{p}\\rangle$. At resonance the relative dispersion shows a minimum as a function of both $D$ and $\\omega$. We also show that the minimum relative variance can be made less than one by taking long-time protocols of the applied field. For long-time protocols the distribution $P(Q_{np})$ satisfies the conventional SSFT at $Q_{np} \\ll \\langle Q_{np} \\rangle$ for finite $n$[53]. We have also shown that SR gets weakened in the presence of an asymmetric potential and, as a consequence, fluctuations in heat and work become larger. SSFT too is satisfied for both work and heat, when they are calculated over a large number of periods.\n\\section{Acknowledgements:}\nAMJ and MCM thank BRNS, DAE, Govt. of India for partial financial support.\nAMJ also thanks DST, India for financial support. MCM acknowledges IOP, Bhubaneswar for hospitality.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{\\uppercase{Introduction}}\n\nWhen programming the small devices that constitute the nodes of\nthe Internet of Things (IoT), one has to adapt to the limitations of\nthese devices.\n\nApart from their very limited processing power (especially compared to\nthe current personal computers, and even mobile devices like smartphones\nand tablets), the main specificity of the devices is that they are operated\non small batteries (e.g.: AAA or button cells).\n\nThus, one of the main challenges with these motes is the need to reduce\nas much as possible their energy consumption. We want their batteries to last\nas long as possible, for economical but also practical reasons: it may be\ndifficult---even almost impossible---to change the batteries of some of these\nmotes, because of their locations (e.g.: on top of buildings, under roads,\netc.)\n\nIoT motes are usually very compact devices: they are usually built around\na central integrated chip that contains the main processing unit and several\nbasic peripherals (such as timers, A\/D and D\/A converters, I\/O\ncontrollers\\ldots) called microcontroller units or MCUs. Apart from the MCU,\na mote generally only contains some ``physical-world'' sensors and a radio\ntransceiver for networking. The main radio communication protocol currently\nused in the IoT field is IEEE 802.15.4. Some MCUs do integrate an 802.15.4\ntransceiver on-chip.\n\nAmong the various components that constitute a mote, the most power-consuming\nblock is the radio transceiver. Consequently, to reduce the power consumption\nof IoT motes, a first key point is to use the radio transceiver only when\nneeded, keeping it powered-off as much as possible. The software element\nresponsible for controlling the radio transceiver in an adequate manner is\nthe \\emph{MAC~\/ RDC (Media Access Control \\& Radio Duty Cycle)}\nlayer of the network stack.\n\nAn efficient power-saving strategy for IoT motes thus relies on finding the\nbest trade-off between minimizing the radio duty cycle and keeping\nnetworking efficiency at the highest possible level. This is achieved\nby developing new, ``intelligent'' MAC~\/ RDC protocols.\n\nTo implement new, high-performance MAC~\/ RDC protocols, one needs to be\nable to react to events with good reactivity (lowest latency possible) and\nflexibility. 
These protocols rely on precise timing to ensure efficient\nsynchronization between the different motes and other radio-networked\ndevices of a \\emph{Personal Area Network (PAN)}, thus allowing\nto turn on the radio transceivers \\emph{only} when needed.\n\nAt the system level, being able to follow such accurate timings means having\nvery efficient interruption management, and the extensive use of hardware\ntimers, that are the most precise timing source available.\n\nThe second most power-consuming element in a mote, after the radio\ntransceiver, is the MCU itself: every current MCU offers ``low-power modes'',\nthat consist in disabling the various hardware blocks, beginning with the CPU\ncore. The main way to minimize energy consumption with a MCU is thus\nto disable its features as much as possible, only using them when needed:\nthat effectively means putting the whole MCU to sleep as much as possible.\n\nLike for the radio transceiver, using the MCU efficiently while keeping\nthe system efficient and reactive means optimal use of interruptions,\nand hardware timers for synchronization.\n\nThus, in both cases, we need to optimally use interruptions as well as\nhardware timers. Being able to use them both efficiently without too much\nhassle implies the use of a specialized operating system (OS), especially\nto easily benefit from multitasking abilities. That is what we will\ndiscuss in this paper.\n\n\n\\section{\\uppercase{Previous work and problem statement}}\n\nSpecialized OSes for the resource-constrained devices that constitute\nwireless sensor networks have been designed, published, and made available\nfor quite a long time.\n\n\\subsection{TinyOS}\n\nThe first widely used system in this domain was \\emph{TinyOS} \\cite{TinyOS}.\nIt is an open-source OS, whose first stable release (1.0) was published in\nseptember 2002. It is very lightweight, and as such well adapted to limited\ndevices like WSN motes. It has brought many advances in this domain, like\nthe ability to use Internet Protocol (IP) and routing (RPL) on 802.15.4\nnetworks, including the latest IPv6 version, and to simulate networks\nof TinyOS motes via TOSSIM \\cite{TOSSIM}.\n\nIts main drawback is that one needs to learn a specific language---named\nnesC---to be able to efficiently work within it. This language is quite\ndifferent from standard C and other common imperative programming languages,\nand as such can be difficult to master.\n\nThe presence of that specific language is no coincidence: TinyOS is built\non its own specific paradigms: it has an unique stack, from which the\ndifferent components of the OS are called as statically linked callbacks.\nThis makes the programming of applications complex, especially for\ndecomposing into various ``tasks''. The multitasking part is also\nquite limited: tasks are run in a fixed, queue-like order. Finally,\nTinyOS requires a custom GNU-based toolchain to be built.\n\nAll of these limitations, plus a relatively slow development pace (last\nstable version dates back to august 2012) have harmed its adoption,\nand it is not the mainly used OS of the domain anymore.\n\n\\subsection{Contiki}\n\nThe current reference OS in the domain of WSN and IoT is \\emph{Contiki}\n\\cite{ContikiOS}. It's also an open-source OS, which was first released\nin 2002. 
It is also at the origin of many assets: we can mention, among\nothers, the uIP Embedded TCP\/IP Stack \\cite{uip}, which has been extended\nto uIPv6, the low-power Rime network stack \\cite{Rime}, or the Cooja advanced\nnetwork simulator \\cite{Cooja}.\n\nWhile a bit more resource-demanding than TinyOS, Contiki is also very\nlightweight and well adapted to motes. Its greatest advantage over TinyOS\nis that it is based on standard, well-known OS paradigms, and coded\nin standard C language, which makes it relatively easy to learn and program.\nIt offers an event-based kernel, implemented using cooperative multithreading,\nand a complete network stack. All of these features and advantages have made\nContiki widespread, making it the reference OS when it comes to WSN.\n\nContiki developers have also made advances in the MAC\/RDC domain: many\nMAC\/RDC protocols have been implemented as part of the Contiki network stack, and\na specifically developed one, ContikiMAC, was published in 2011\n\\cite{ContikiMAC} and implemented into Contiki as the default\nRDC protocol (designed to be used with standard CSMA\/CA as MAC layer).\n\nHowever, Contiki's extremely compact footprint and high optimization come\nat the cost of some limitations that prevented us from using it as our\nsoftware platform.\n\nContiki OS is indeed not a real-time OS: the processing of ``events''---using\nContiki's terminology---is done by the kernel's scheduler, which is\nbased on cooperative multitasking. This scheduler only triggers at a specific,\npre-determined rate; on the platforms we're interested in, this rate is\nfixed to 128~Hz: this corresponds to a time skew of up to 8~milliseconds\n(8000~microseconds) to process an event, interruption management being\none of the possible events. Such a large granularity is clearly\na huge problem when implementing high-performance MAC\/RDC protocols:\nknowing that the transmission of a full-length 802.15.4 packet takes\nabout 4~milliseconds (4000~microseconds), a time granularity of\n320~microseconds is needed, corresponding to one backoff period (BP).\n\nTo address this problem, Contiki provides a real-time feature,\n\\texttt{rtimer}, which allows bypassing the kernel scheduler and using\na hardware timer to trigger execution of user-defined functions. However,\nit has very severe limitations:\n\n\\begin{itemize}\n\n\\item only one instance of \\texttt{rtimer} is available, thus only one\nreal-time event can be scheduled or executed at any time; this limitation\nforbids development of advanced real-time software---like high-performance\nMAC~\/ RDC protocols---or at least makes it very hard;\n\n\\item moreover, it is unsafe to execute from \\texttt{rtimer}, even\nindirectly, most of the Contiki basic functions (i.e.: kernel, network\nstack, etc.), because these functions are not designed to handle pre-emption.\nContiki is indeed based on cooperative multithreading, whereas the\n\\texttt{rtimer} mechanism seems like an ``independent feature'', coming\nwith its own paradigm.\nOnly a precise set of functions known as ``interrupt-safe'' (like\n\\texttt{process\\_poll()}) can be safely invoked from \\texttt{rtimer};\nusing other parts of Contiki means an almost certain crash or\nunpredictable behaviour. This restriction practically makes it very\ndifficult to write Contiki extensions (like network stack layer drivers)\nusing \\texttt{rtimer}.\n\n\n\\end{itemize}\n\nAlso note that this cooperative scheduler is designed to manage a specific\nkind of tasks: the \\emph{protothreads}. 
This solution allows managing\ndifferent threads of execution, without needing each of them to have\nits own separate stack \\cite{Protothreads}. The great advantage of\nthis mechanism is the ability to use a unique stack, thus greatly\nreducing the needed amount of RAM for the system. The trade-off is\nthat one must be careful when using certain C constructs (i.e.:\nit is impossible to use the \\texttt{switch} statement in\nsome parts of programs that use protothreads).\n\nFor all these reasons, we were unable to use Contiki OS to develop and\nimplement our high-performance MAC\/RDC protocols. We definitely needed\nan OS with efficient real-time features and event handling mechanisms.\n\n\\subsection{Other options}\n\nThere are other, less used OSes designed for the WSN\/IoT domain, but none\nof them fulfilled our requirements, for the following reasons:\n\\begin{description}\n\n\\item[SOS] \\cite{SOS} This system's development has been cancelled since november\n           2008; its authors explicitly recommend on their website to\n           ``consider one of the more actively supported alternatives''.\n\n\\item[Lorien] \\cite{LorienOS} While its component-oriented approach is\n              interesting, this system does not seem very widespread. It is currently available for only\n              one hardware platform (TelosB\/SkyMote), which seriously\n              limits the portability we can expect from using an OS.\n              Moreover, its development seems to have slowed down quite\n              a bit, since the latest available Lorien release was published\n              in july 2011, while the latest commit in the project's\n              SourceForge repository (r46) dates back to january 2013.\n\n\\item[Mantis] \\cite{MantisOS} While this project claims to be Open Source,\n              the project has made, on its SourceForge web site, no public\n              release, and the access to the source repository\n              (\\texttt{http:\/\/mantis.cs.colorado.edu\/viewcvs\/}) seems\n              to be stalled. Moreover, reading the project's main web page\n              shows us that the last posted news item mentions a first beta\n              to be released in 2007. The last publications about\n              Mantis OS also seem to date from 2007. All of these elements\n              tend to indicate that this project is abandoned\\ldots\n\n\\item[LiteOS] \\cite{LiteOS} This system offers very interesting features,\n              especially the ability to update the nodes' firmware over the air,\n              as well as the built-in hierarchical file system. Unfortunately,\n              it is currently only available on IRIS\/MicaZ platforms,\n              and requires AVR Studio for programming (which imposes\n              Microsoft Windows as a development platform). This\n              greatly hinders portability, since LiteOS is clearly strongly\n              tied to the AVR microcontroller architecture.\n\n\\item[MansOS] \\cite{MansOS} This system is very recent and offers many\n              interesting features, like optional preemptive multitasking,\n              a network stack, runtime reprogramming, and a scripting\n              language. It is available on two MCU architectures: AVR and\n              MSP430 (but not ARM). However, none of the real-time features\n              we wanted seem to be available: e.g. 
only software timers with\n a 1~millisecond resolution are available.\n\n\\end{description}\nIn any case, none of the alternative OSes cited hereabove offer the real-time\nfeatures we were looking for.\n\n\\bigskip\n\nOn the other hand, ``bare-metal'' programming is also unacceptable for us:\nit would mean sacrificing portability and multitasking; and we would also\nneed to redevelop many tools and APIs to make application programming\neven remotely practical enough for third-party developers who would\nwant to use our protocols.\n\n\\bigskip\n\nWe also envisioned to use an established real-time OS (RTOS) as a base\nfor our works. The current reference when it comes to open-source RTOS is\n\\emph{FreeRTOS} (\\texttt{http:\/\/www.freertos.org\/}). It is a robust, mature\nand widely used OS. Its codebase consists in clean and well-documented\nstandard C language. However, it offers only core features, and doesn't\nprovide any network subsystem at all. Redeveloping a whole network stack\nfrom scratch would have been too time-consuming.\n(Network extensions exist for FreeRTOS, but they are either immature,\nor very limited, or proprietary and commercial software; and most of them\nare tied to a peculiar piece of hardware, thus ruining\nthe portability advantage offered by the OS.)\n\n\\subsection{Summary: Wanted Features}\n\nTo summarize the issue, what we required is an OS that:\n\\begin{itemize}\n\\item is adapted to the limitations of the deeply-embedded MCUs that\n constitute the core of WSN\/IoT motes;\n\\item provides real-time features powerful enough to support the\n development of advanced, high-performance MAC~\/ RDC protocols;\n\\item includes a network stack (even a basic one) adapted to wireless\n communication on 802.15.4 radio medium.\n\\end{itemize}\nHowever, none of the established OSes commonly used either in the IoT domain\n(TinyOS, Contiki) nor in the larger spectrum of RTOS (FreeRTOS)\ncould match our needs.\n\n\n\\section{\\uppercase{The RIOT Operating System}}\n\nConsequently, we focused our interest on \\emph{RIOT OS} \\cite{RIOT}.\n\nThis new system---first released in 2013---is also open-source and\nspecialized in the domain of low-power, embedded wireless sensors.\nIt offers many interesting features, that we will now describe.\n\nIt provides the basic benefits of an OS: portability (it has been ported\nto many devices powered by ARM, MSP430, and---more recently---AVR\nmicrocontrollers) and a comprehensive set of features, including\na network stack.\n\nMoreover, it offers key features that are otherwise yet unknown in\nthe WSN\/IoT domain:\n\n\\begin{itemize}\n\n\\item an efficient, interrupt-driven, tickless \\emph{micro-kernel};\n\n\\item that kernel includes a priority-aware task scheduler, providing\n \\emph{pre-emptive multitasking};\n\n\\item a highly efficient use of \\emph{hardware timers}: all of them can be\n used concurrently (especially since the kernel is tickless), offering\n the ability to schedule actions with high granularity; on low-end\n devices, based on MSP430 architecture, events can be scheduled\n with a resolution of 32~microseconds;\n\n\\item RIOT is entirely written in \\emph{standard C language}; but unlike\n Contiki, there are no restrictions on usable constructs (i.e.: like\n those introduced by the protothreads mechanism);\n\n\\item a clean and \\emph{modular design}, that makes development with and\n \\emph{into} the system itself easier and more productive.\n\n\\end{itemize}\n\nThe first three features listed hereabove make RIOT a 
full-fledged\n\\emph{real-time} operating system.\n\nWe also believe that the tickless kernel and the optimal use of hardware\ntimers should make RIOT OS a very suited software platform to optimize energy\nconsumption on battery-powered, MCU-based devices.\n\nA drawback of RIOT, compared to TinyOS or Contiki, is its higher memory\nfootprint: the full network stack (from PHY driver up to RPL routing with\n\\mbox{6LoWPAN} and MAC~\/ RDC layers) cannot be compiled for Sky\/TelosB\nbecause of overflowing memory space. Right now, constrained devices like\nMSP430-based motes are limited to the role of what the 802.15.4 standard\ncalls \\emph{Reduced Function Devices (RFD)}, the role of \\emph{Full\nFunction Devices (FFD)} being reserved to more powerful motes (i.e.:\nbased on ARM microcontrollers).\n\nHowever, we also note that, thanks to its modular architecture, the RIOT\nkernel, compiled with only PHY and MAC~\/ RDC layers, is actually lightweight\nand consumes little memory. We consequently believe that the current\nsituation will improve with the maturation of higher layers of RIOT network\nstack, and that in the future more constrained devices could also be used\nas FFD with RIOT OS.\n\n\\medskip\n\nWhen we began to work with RIOT, it also had two other issues: the MSP430\nversions were not stable enough to make real use of the platform; and\nbeyond basic CSMA\/CA, no work related to the MAC~\/ RDC layer had been\ndone on that system. This is where our contributions fit in.\n\n\n\\section{\\uppercase{Our contributions}}\n\nFor our work, we use---as our main hardware platform---IoT motes built\naround MSP430 microcontrollers.\n\nMSP430 is a microcontroller (MCU) architecture from Texas Instruments,\noffering very low-power consumption, cheap price, and good performance thanks\nto a custom 16-bit RISC design. This architecture is very common in IoT motes.\nIt is also very well supported, especially by the Cooja simulator\n\\cite{Cooja}, which makes simulations of network scenarios---especially\nwith many devices---much easier to design and test.\n\nRIOT OS has historically been developed first on legacy ARM devices\n(ARM7TDMI-based MCUs), then ported on more recent microcontrollers\n(ARM Cortex-M) and other architectures (MSP430 then AVR). However,\nthe MSP430 port was, before we improved it, still not as ``polished''\nas ARM code and thus prone to crash.\n\nOur contribution can be summarized in the following points:\n\n\\begin{enumerate}\n\n\\item analysis of current OSes (TinyOS, Contiki, etc.) 
limitations,\n and why they are incompatible with development of real-time\n extensions like advanced MAC~\/ RDC protocols;\n\n\\item add debugging features to the RIOT OS kernel, more precisely\n a mechanism to handle fatal errors: crashed systems can be\n ``frozen'' to facilitate debugging during development; or,\n in production, can be made to reboot immediately, thus reducing\n unavailability of a RIOT-running device to a minimum;\n\n\\item port RIOT OS to a production-ready, MSP430-based device:\n the Zolertia Z1 mote (already supoorted by Contiki,\n and used in real-world scenarios running that OS);\n\n\\item debug the MSP430-specific portion of RIOT OS---more specifically:\n the hardware abstraction layer (HAL) of the task scheduler---making\n RIOT OS robust and production-ready on MSP430-based devices.\\\\\n Note that all of these contributions have been reviewed by RIOT's\n development team and integrated into the ``master'' branch of RIOT OS'\n Github repository (i.e.: they are now part of the standard code base of\n the system).\n\n\\item running on MSP430-based devices also allows RIOT OS applications\n to be simulated with the Cooja simulator; this greatly improves\n speed and ease of development.\n\n\\item thanks to these achievements, we now have a robust and full-featured\n software platform offering all the features needed to develop\n high-performance MAC\/RDC protocols---such as all of the time-slotted\n protocols.\n\n\\end{enumerate}\n\nAs a proof of concept of this last statement, we have implemented one\nof our own designs, and obtained very promising results, shown in\nthe next section.\n\n\n\\section{\\uppercase{Use Case: implementing the S-CoSenS RDC protocol}}\n\n\\subsection{The S-CoSenS Protocol}\n\nThe first protocol we wanted to implement is S-CoSenS \\cite{TheseBNefzi},\nwhich is designed to work on top of the IEEE 802.15.4 physical and MAC\n(i.e.: CSMA\/CA) layers.\n\nIt is an evolution of the already published CoSenS protocol \\cite{CosensConf}:\nit adds to the latter a sleeping period for energy saving.\nThus, the basic principle of S-CoSenS is to delay the forwarding (routing)\nof received packets, by dividing the radio duty cycle in three periods:\na sleeping period (SP), a waiting period (WP) where the radio medium\nis listened by routers for collecting incoming 802.15.4 packets, and\nfinally a burst transmission period (TP) for emitting adequately\nthe packets enqueued during WP.\n\nThe main advantage of S-CoSenS is its ability to adapt dynamically to the\nwireless network throughput at runtime, by calculating for each radio duty\ncycle the length of SP and WP, according to the number of relayed\npackets during previous cycles. 
Note that the set of the SP and the WP\nof a same cycle is named \\emph{subframe}; it is the part of a S-CoSenS\ncycle whose length is computed and known \\textit{a priori}; on the contrary,\nTP duration is always unknown up to its very beginning, because it depends\non the amount of data successfully received during the WP that precedes it.\n\nThe computation of WP duration follows a ``sliding average'' algorithm,\nwhere WP duration for each duty cycle is computed from the average\nof previous cycles as:\n\\begin{eqnarray*}\n&&\n\\overline{\\mathrm{WP}_{n}} = \\alpha \\cdot \\overline{\\mathrm{WP}_{n-1}}\n + (1 - \\alpha) \\cdot \\mathrm{WP}_{n-1}\n\\\\ &&\n\\mathrm{WP}_{n} = \\max ( \\mathrm{WP}_{min},\n \\min ( \\overline{\\mathrm{WP}_{n}}, \\mathrm{WP}_{max} ) )\n\\end{eqnarray*}\nwhere $\\overline{\\mathrm{WP}_{n}}$ and $\\overline{\\mathrm{WP}_{n-1}}$\nare respectively the average WP length at $n^{\\mathrm{th}}$ and\n$(n-1)^{\\mathrm{th}}$ cycle, while $\\mathrm{WP}_{n}$ and $\\mathrm{WP}_{n-1}$\nare the actual length of respectively the $n^{\\mathrm{th}}$ and\n$(n-1)^{\\mathrm{th}}$ cycles; $\\alpha$ is a parameter between 0 and 1\nrepresenting the relative weight of the history in the computation,\nand $\\mathrm{WP}_{min}$ and $\\mathrm{WP}_{max}$ are high and low limits\nimposed by the programmer to the WP duration.\n\nThe length of the whole subframe being a parameter given at compilation time,\nSP duration is simply computed by subtracting the calculated duration of WP\nfrom the subframe duration for every cycle.\n\nThe local synchronization between a S-CoSenS router and its leaf nodes\nis done thanks to a beacon packet, that is broadcasted by the router at\nthe beginning of each cycle. This beacon contains the duration\n(in microseconds) of the SP and WP for the currently\nbeginning cycle.\n\nThe whole S-CoSenS cycle workflow for a router is summarized in figure\n\\ref{FigSCosensDutyCycle} hereafter.\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{tikzpicture}[>=latex]\n\\fill[black] (0cm, -0.25cm) rectangle +(0.2cm, 0.5cm);\n\\draw[->,thick] (0.1cm, 0.25cm) -- +(0, 0.5cm);\n\\draw (0.1cm, 1.3cm) node {Beacon};\n\\draw[anchor=west] (-0.6cm, 0.9cm) node {(broadcasted)};\n\\draw[thick] (0cm, -0.25cm) -- +(0, 0.5cm);\n\\foreach \\x in {1,2,3,4,5,6}\n{\n \\fill[lightgray] (0.2cm + \\x * 0.25cm, -0.25cm) rectangle +(0.05cm, 0.5cm);\n}\n\\draw (1.1cm, 0) node {\\textbf{SP}};\n\\draw[thick] (2cm, -0.25cm) -- +(0, 0.5cm);\n\\fill[lightgray] (2cm, -0.25cm) rectangle +(2cm, 0.5cm);\n\\draw (3cm, 0) node {\\textbf{WP}};\n\\draw[thick] (4cm, -0.25cm) -- +(0, 0.5cm);\n\\fill[lightgray] (4cm, -0.25cm) rectangle +(2cm, 0.5cm);\n\\draw (5cm, 0) node {\\textbf{TP}};\n\\draw[thick] (6cm, -0.25cm) -- +(0, 0.5cm);\n\\draw[->] (-0.5cm, 0.25cm) -- +(7cm, 0);\n\\draw[->] (-0.5cm, -0.25cm) -- +(7cm, 0);\n\\draw[->,thick] (2.5cm, 0.75cm) -- +(0, -0.5cm);\n\\draw (2.5cm, 1cm) node {P1};\n\\draw[->,thick] (3cm, 0.75cm) -- +(0, -0.5cm);\n\\draw (3cm, 1cm) node {P2};\n\\draw[->,thick] (3.5cm, 0.75cm) -- +(0, -0.5cm);\n\\draw (3.5cm, 1cm) node {P3};\n\\draw[->,thick] (4.5cm, 0.25cm) -- +(0, 0.5cm);\n\\draw (4.5cm, 1cm) node {P1};\n\\draw[->,thick] (5cm, 0.25cm) -- +(0, 0.5cm);\n\\draw (5cm, 1cm) node {P2};\n\\draw[->,thick] (5.5cm, 0.25cm) -- +(0, 0.5cm);\n\\draw (5.5cm, 1cm) node {P3};\n\\draw (0cm, -0.5cm) .. controls +(0, -0.25cm) .. +(1cm, -0.25cm);\n\\draw (1cm, -0.75cm) .. controls +(1cm, 0) .. +(1cm, -0.25cm);\n\\draw (2cm, -1cm) .. controls +(0, 0.25cm) .. +(1cm, 0.25cm);\n\\draw (3cm, -0.75cm) .. 
controls +(1cm, 0) .. +(1cm, 0.25cm);\n\\draw (2cm, -1.25cm) node {\\textbf{Subframe}};\n\\draw (0cm, -1.5cm) .. controls +(0, -0.25cm) .. +(1.5cm, -0.25cm);\n\\draw (1.5cm, -1.75cm) .. controls +(1.5cm, 0) .. +(1.5cm, -0.25cm);\n\\draw (3cm, -2cm) .. controls +(0, 0.25cm) .. +(1.5cm, 0.25cm);\n\\draw (4.5cm, -1.75cm) .. controls +(1.5cm, 0) .. +(1.5cm, 0.25cm);\n\\end{tikzpicture}\n\\caption{A typical S-CoSenS router cycle.\\\\\n The gray strips in the SP represents the short wake-up-and-listen\n periods used for inter-router communication.}\n\\label{FigSCosensDutyCycle}\n\\end{figure}\n\nAn interesting property of S-CoSenS is that leaf (i.e.: non-router) nodes\nalways have their radio transceiver offline, except when they have packets\nto send. When a data packet is generated on a leaf node, the latter wakes up\nits radio transceiver, listens and waits to the first beacon emitted by\nan S-CoSenS router, then sends its packet using CSMA\/CA at the beginning\nof the WP described in the beacon it received. A leaf node will put its\ntransceiver offline during the delay between the beacon and that WP\n(that is: the SP of the router that emitted the received beacon), and\nwill go back to sleep mode once its packet is transmitted.\nAll of this procedure is shown in figure \\ref{FigSCoSenSPktTx}.\n\n\\begin{figure}[!h]\n\\centering\n\\begin{tikzpicture}[>=latex]\n\\draw (-0.5cm, 0) node {\\large \\textit{R}};\n\\draw[thick] (1cm, -0.25cm) -- +(0, 0.5cm);\n\\draw (2cm, 0) node {\\textbf{SP}};\n\\draw[thick] (3cm, -0.25cm) -- +(0, 0.5cm);\n\\fill[lightgray] (3cm, -0.25cm) rectangle +(2cm, 0.5cm);\n\\draw (4cm, 0) node {\\textbf{WP}};\n\\draw[thick] (5cm, -0.25cm) -- +(0, 0.5cm);\n\\fill[lightgray] (5cm, -0.25cm) rectangle +(0.5cm, 0.5cm);\n\\draw (5.25cm, -0.5cm) node {\\textbf{TP}};\n\\draw[thick] (5.5cm, -0.25cm) -- +(0, 0.5cm);\n\\draw[->] (-0.5cm, 0.25cm) -- +(6.5cm, 0);\n\\draw[->] (-0.5cm, -0.25cm) -- +(6.5cm, 0);\n\\draw (-0.5cm, -1.5cm) node {\\large \\textit{LN}};\n\\fill[gray] (0cm, -1.25cm) rectangle +(1.3cm, -0.5cm);\n\\fill[gray] (2.9cm, -1.25cm) rectangle +(0.5cm, -0.5cm);\n\\fill[black] (1cm, -0.25cm) rectangle +(0.2cm, 0.5cm);\n\\draw[->,thick] (1.1cm, 0.25cm) -- +(0, -1.5cm);\n\\draw[anchor=east] (1cm, -0.75cm) node {Beacon};\n\\fill[black] (1cm, -1.25cm) rectangle +(0.2cm, -0.5cm);\n\\draw[->,very thick] (0cm, -2.5cm) -- +(0, 0.75cm);\n\\draw[anchor=west] (0cm, -2.5cm)\n node {\\footnotesize \\textbf{packet arrival}};\n\\fill[black] (3.1cm, -1.25cm) rectangle +(0.2cm, -0.5cm);\n\\draw[->,thick] (3.2cm, -1.25cm) -- +(0, 1cm);\n\\draw[anchor=west] (3.2cm, -0.75cm) node {P1};\n\\fill[black] (3.1cm, -0.25cm) rectangle +(0.2cm, 0.5cm);\n\\fill[black] (5.1cm, -0.25cm) rectangle +(0.2cm, 0.5cm);\n\\draw[->,thick] (5.2cm, 0.25cm) -- +(0, 0.5cm);\n\\draw (5.2cm, 1cm) node {P1};\n\\draw[->] (-0.5cm, -1.25cm) -- +(6.5cm, 0);\n\\draw[->] (-0.5cm, -1.75cm) -- +(6.5cm, 0);\n\\end{tikzpicture}\n\\caption{A typical transmission of a data packet with the S-CoSenS protocol\n between a leaf node and a router.}\n\\label{FigSCoSenSPktTx}\n\\end{figure}\n\nWe thus need to synchronize with enough accuracy different devices (that\ncan be based on different hardware platforms) on cycles whose periods\nare dynamically calculated at runtime, with resolution that needs to be\nin the sub-millisecond range. 
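As an illustration, the leaf-node side of this synchronization can be sketched\nin simplified C as follows. This is only a sketch: \texttt{radio\_on()},\n\texttt{radio\_off()}, \texttt{now\_us()}, \texttt{sleep\_until\_us()},\n\texttt{wait\_for\_beacon()}, \texttt{csma\_send()} and the \texttt{beacon\_t}\nstructure are placeholder names standing for the radio driver and\nhigh-resolution timer primitives of the platform (in RIOT OS, the latter are\nbacked by hardware timers); they are not the actual functions of our\nimplementation.\n\n\begin{verbatim}\n#include <stdint.h>\n#include <stddef.h>\n\n\/* Sketch of a leaf node sending one packet with S-CoSenS\n   (placeholder primitives, durations in microseconds). *\/\nvoid leaf_send(const void *pkt, size_t len)\n{\n    radio_on();\n    beacon_t b;\n    wait_for_beacon(&b);           \/* block until a router beacon   *\/\n    uint32_t t0 = now_us();        \/* beacon reception timestamp    *\/\n    radio_off();                   \/* sleep during the router's SP  *\/\n    sleep_until_us(t0 + b.sp_us);  \/* hardware-timer based wake-up  *\/\n    radio_on();\n    csma_send(pkt, len);           \/* transmit at the start of WP   *\/\n    radio_off();                   \/* back to sleep                 *\/\n}\n\end{verbatim}\n\nArming such a wake-up, on a deadline that is only known at runtime and with\nsub-millisecond accuracy, is the demanding part. 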
This is where RIOT OS advanced real-time\nfeatures really shine, while the other comparable OSes are\nfor that purpose definitely lacking.\n\n\\subsection{Simulations and Synchronization Accuracy}\n\nWe have implemented S-CoSenS under RIOT, and made first tests by performing\nsimulations---with Cooja---of a 802.15.4 PAN (Personal Area Network)\nconstituted of a router, and ten motes acting as ``leaf nodes''.\nThe ten nodes regularly send data packets to the router, that retransmits\nthese data packets to a nearby ``sink'' device. Both the router and the ten\nnodes use exclusively the S-CoSenS RDC\/MAC protocol. This is summarized\nin figure \\ref{FigPANtest}.\n\n\\begin{figure}[!h]\n\\centering\n\\begin{tikzpicture}[>=latex]\n\\draw (0, 1cm) circle (0.25cm); \\draw (0, 1cm) node {S};\n\\draw[->,thick] (0, 0.25cm) -- (0, 0.75cm);\n\\draw (0, 0) circle (0.25cm); \\draw (0, 0) node {R};\n\\foreach \\x in {6,7,8,9,10}\n{\n \\fill[white] (\\x * 1cm - 8cm, -1.75cm) circle (0.25cm);\n \\draw (\\x * 1cm - 8cm, -1.75cm) circle (0.25cm);\n \\draw (\\x * 1cm - 8cm, -1.75cm) node {\\x};\n \n \\draw[->,thick] (\\x * 1cm - 8cm, -1.5cm)\n -- (\\x * 0.02cm - 0.16cm, -0.25cm);\n}\n\\foreach \\x in {1,2,3,4,5}\n{\n \\fill[white] (\\x * 1cm - 3cm, -1cm) circle (0.25cm);\n \\draw (\\x * 1cm - 3cm, -1cm) circle (0.25cm);\n \\draw (\\x * 1cm - 3cm, -1cm) node {\\x};\n \n \\draw[->,thick] (\\x * 1cm - 3cm, -0.75cm)\n -- (\\x * 0.05cm - 0.15cm, -0.25cm);\n}\n\\end{tikzpicture}\n\\caption{Functional schema of our virtual test PAN.}\n\\label{FigPANtest}\n\\end{figure}\n\nOur first tests clearly show an excellent synchronization between the\nleaf nodes and the router, thanks to the time resolution offered by RIOT OS\nevent management system (especially the availability of many hardware\ntimers for direct use). This can be seen in the screenshot of our\nsimulation in Cooja, shown in figure \\ref{Screenshot}. For readability,\nthe central portion of the timeline window of that screenshot (delimited\nby a thick yellow rectangle) is zoomed on in figure \\ref{ZoomTimeline}.\n\n\\begin{figure*}[ptb]\n\\centering\n\\includegraphics[width=15.75cm]{S-CoSenS-Cooja10.png}\n\\caption{Screenshot of our test simulation in Cooja. \n(Despite the window title mentioning Contiki, the simulated application\n is indeed running on RIOT OS.)}\n\\label{Screenshot}\n\\end{figure*}\n\n\\begin{figure*}[pbt]\n\\centering\n\\includegraphics{S-CoSenS-Cooja10-Timeline.png}\n\\caption{Zoom on the central part of the timeline of our simulation.}\n\\label{ZoomTimeline}\n\\end{figure*}\n\nOn figure \\ref{ZoomTimeline}, the numbers on the left side are motes'\nnumerical IDs: the router has ID number \\textsf{1}, while the leaf nodes\nhave IDs \\textsf{2} to \\textsf{11}. Grey bars represent radio transceiver\nbeing online for a given mote; blue bars represent packet emission, and green\nbars correct packet reception, while red bars represent collision (when\ntwo or more devices emit data concurrently) and thus reception of\nundecipherable radio signals.\n\nFigure \\ref{ZoomTimeline} represents a short amount of time (around\n100~milliseconds), representing the end of a duty cycle of the router:\nthe first 20~milliseconds are the end of SP, and 80 remaining milliseconds\nthe WP, then the beginning of a new duty cycle (the TP has been disabled\nin our simulation). 
\n\nIn our example, four nodes have data to transmit to the router: motes\n\textsf{3}, \textsf{5}, \textsf{9}, and \textsf{10}; the other nodes\n(\textsf{2}, \textsf{4}, \textsf{6}, \textsf{7}, \textsf{8}, and \textsf{11})\nare preparing to transmit a packet in the next duty cycle.\n\nAt the instant marked by the first yellow arrow (in the top left of figure\n\ref{ZoomTimeline}), the SP ends and the router activates its radio\ntransceiver to enter the WP. Note how the four nodes that are to send packets\n(\textsf{3}, \textsf{5}, \textsf{9}, and \textsf{10}) also activate their\nradio transceivers \emph{precisely} at the same instant: this is thanks to\nthe precise real-time mechanism of RIOT OS (based on hardware timers), which\nallows the different nodes to synchronize precisely on the timing values\ntransmitted in the previous beacon packet. Thanks also to that mechanism,\nthe nodes are able to keep both their radio transceiver \emph{and} their\nMCU in low-power mode, since the RIOT OS kernel is interrupt-driven.\n\nDuring the waiting period, we also see that several collisions occur; they\nare resolved by the S-CoSenS protocol by forcing motes to wait a random\nduration before re-emitting a packet in case of conflict. In our example,\nour four motes can finally transmit their packet to the router in the following\norder: \textsf{3} (after a first collision), \textsf{5}, \textsf{10} (after\ntwo other collisions), and finally \textsf{9}. Note that every time the\nrouter (device number \textsf{1}) successfully receives a packet, an\nacknowledgement is sent back to the emitter: see the very thin blue bars that\nfollow each green bar on the first line.\n\nFinally, at the instant marked by the second yellow arrow (in the top right\nof figure \ref{ZoomTimeline}), the WP ends and a new duty cycle begins.\nConsequently, the router broadcasts a beacon packet containing PAN timing and\nsynchronization data to all of the ten nodes. We can see that all of the\nsix nodes waiting to transmit (\textsf{2}, \textsf{4}, \textsf{6}, \textsf{7},\n\textsf{8}, and \textsf{11}) go idle after receiving this beacon (beacon\npackets are broadcast and thus not acknowledged): they go\ninto low-power mode (both at the radio transceiver and MCU level), and will\ntake advantage of RIOT's real-time features to wake up precisely when\nthe router goes back into WP mode and is ready to receive their\npackets.\n\n\subsection{Performance Evaluation: Preliminary Results}\n\nWe will now present the first, preliminary results we obtained through the\nsimulations described above.\n\nIt is important to note that \emph{we evaluate here the implementations}, not\nthe intrinsic advantages or weaknesses of the protocols themselves.\n\nWe have first focused on QoS results, by computing Packet Reception Rates\nand end-to-end delays between the various leaf nodes and the sink of the test\nPAN presented earlier in figure \ref{FigPANtest}, to evaluate the quality\nof the transmissions allowed by both protocols.\n\nFor these first tests, we used default parameters for both RDC protocols\n(ContikiMAC and S-CoSenS), only pushing the CSMA\/CA MAC layer of Contiki\nto make up to 8 attempts to transmit the same packet, so as to put it\non par with our implementation on RIOT OS. We have otherwise not yet\ntried to tweak the various parameters offered by both RDC protocols\nto optimize results. 
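For concreteness, the sketch below shows what such a parameter set and the\n``sliding average'' WP update recalled earlier could look like in simplified C;\nthe field names and the integer-percent encoding of $\alpha$ are illustrative\nassumptions, not the exact code used for the measurements reported below.\n\n\begin{verbatim}\n#include <stdint.h>\n\n\/* Illustrative S-CoSenS parameters (one instance per router). *\/\nstruct scosens_params {\n    uint32_t subframe_us;    \/* fixed SP + WP budget per cycle    *\/\n    uint32_t wp_min_us;      \/* lower bound on the WP duration    *\/\n    uint32_t wp_max_us;      \/* upper bound on the WP duration    *\/\n    uint32_t alpha_percent;  \/* weight of the history, 0..100     *\/\n    uint8_t  csma_retries;   \/* max CSMA\/CA attempts per packet   *\/\n};\n\n\/* Sliding-average update of the WP duration for the next cycle. *\/\nstatic uint32_t next_wp(const struct scosens_params *p,\n                        uint32_t wp_avg_prev, uint32_t wp_prev,\n                        uint32_t *wp_avg_out)\n{\n    uint32_t avg = (p->alpha_percent * wp_avg_prev\n                    + (100 - p->alpha_percent) * wp_prev) \/ 100;\n    *wp_avg_out = avg;            \/* stored as wp_avg_prev next time *\/\n    if (avg < p->wp_min_us) { return p->wp_min_us; }\n    if (avg > p->wp_max_us) { return p->wp_max_us; }\n    return avg;\n}\n\end{verbatim}\n\nThe SP duration of a cycle is then simply the subframe length minus the value\nreturned by this function, and the CSMA\/CA retry limit plays the same role as\nthe 8-attempt setting used on the Contiki side. 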
Tuning these parameters further will be the subject of our next experiments.\n\n\subsubsection{Packet Reception Rates (PRR)}\n\nThe results obtained for the PRR using both protocols are shown in figure\n\ref{FigPRRresults} as well as table \ref{TblPRRresults}.\n\n\begin{figure}\n \centering\n \includegraphics[width=7.5cm]{PRRgraph.png}\n \caption{PRR results for both ContikiMAC and S-CoSenS RDC protocols,\n using default values for parameters.}\n \label{FigPRRresults}\n\end{figure}\n\n\begin{table}\n\centering\n\begin{tabular}{|r|r|r|}\n\hline\n PAI \textbackslash\ Protocol & ContikiMAC & S-CoSenS \\\n\hline\n 1500 ms & 49.70\% & 98.10\% \\\n 1000 ms & 32.82\% & 96.90\% \\\n 500 ms & 14.44\% & 89.44\% \\\n 100 ms & 0.64\% & 25.80\% \\\n\hline\n\end{tabular}\n\caption{PRR results for both ContikiMAC and S-CoSenS RDC protocols,\n using default values for parameters.}\n\label{TblPRRresults}\n\end{table}\n\nAs shown in the figure, the advantage of S-CoSenS is clear and significant\nwhatever the packet arrival interval (PAI). Except for the ``extreme''\nscenario corresponding to an over-saturation of the radio channel, S-CoSenS\nachieves an excellent PRR ($\gtrapprox 90\%$), while ContikiMAC's PRR\nis always $\lessapprox 50\%$.\n\n\subsubsection{End-To-End Transmission Delays}\n\nThe results obtained for end-to-end delays using both protocols are shown in\nfigure \ref{FigDelaysResults} and table \ref{TblDelaysResults}.\n\n\begin{figure}\n \centering\n \includegraphics[width=7.5cm]{DelaysGraph.png}\n \caption{End-to-end delays results for both ContikiMAC and S-CoSenS RDC\n protocols, using default values for parameters; note that\n vertical axis is drawn with logarithmic scale.}\n \label{FigDelaysResults}\n\end{figure}\n\n\begin{table}\n\centering\n\begin{tabular}{|r|r|r|}\n\hline\n PAI \textbackslash\ Protocol & ContikiMAC & S-CoSenS \\\n\hline\n 1500 ms & 3579 ms & 108 ms \\\n 1000 ms & 4093 ms & 108 ms \\\n 500 ms & 6452 ms & 126 ms \\\n 100 ms & 12913 ms & 168 ms \\\n\hline\n\end{tabular}\n\caption{End-to-end delays results for both ContikiMAC and S-CoSenS RDC\n protocols, using default values for parameters.}\n\label{TblDelaysResults}\n\end{table}\n\nS-CoSenS also clearly has the upper hand here, so much so that we had to use\na logarithmic scale for the vertical axis to keep figure \ref{FigDelaysResults}\neasily readable. 
The advantage of S-CoSenS holds whatever the packet\narrival interval: our protocol is able to keep the delay below an acceptable\nlimit (of the order of hundreds of milliseconds), while ContikiMAC\ndelays climb to tens of seconds when the network load increases.\n\n\subsubsection{Summary: QoS Considerations}\n\nWhile these are only preliminary results, it seems that being able to\nleverage real-time features is clearly a significant advantage when designing\nand implementing MAC\/RDC protocols, at least when it comes to QoS results.\n\n\n\n\section{\uppercase{Future Works and Conclusion}}\n\nWe plan, in the near future:\n\n\begin{itemize}\n\n\item to bring new contributions to the RIOT project: we are especially\n interested in the portability that the RIOT solution offers us;\n this OS is indeed actively being ported to many devices based on powerful\n microcontrollers of the ARM Cortex-M architecture (especially\n Cortex-M3 and Cortex-M4), and we intend to help in this porting\n effort, especially on the high-end IoT motes we seek to use in our\n work (e.g.: as advanced FFD nodes with a full network stack,\n or routers);\n\n\item to use the power of this OS to further advance our work on MAC\/RDC\n protocols; more precisely, we are implementing other innovative\n MAC\/RDC protocols---such as iQueue-MAC \cite{iQueueMAC}---under RIOT,\n taking advantage of its high-resolution real-time features to obtain\n excellent performance, optimal energy consumption, and out-of-the-box\n portability.\n\n\end{itemize}\n\nRIOT is a powerful real-time operating system, adapted to the limitations\nof deeply embedded hardware microcontrollers, while offering state-of-the-art\ntechniques (preemptive multitasking, tickless scheduler, optimal use\nof hardware timers) that---we believe---make it one of the most\nsuitable OSes for the embedded and real-time world.\n\nWhile we were not yet able to accurately quantify energy consumption,\nwe can reasonably expect that lowering the activity of the MCU and of the\nradio transceiver will significantly reduce the energy consumption of devices\nrunning RIOT OS. This will be the subject of some of our future\nresearch work.\n\n\bigskip\n\nCurrently, RIOT OS supports high-level IoT protocols (6LoWPAN\/IPv6, RPL,\nTCP, UDP, etc.). However, it still lacks high-performance MAC~\/ RDC layer\nprotocols.\n\nThrough this work, we have shown that RIOT OS is also suitable for\nimplementing high-performance MAC~\/ RDC protocols, thanks to its real-time\nfeatures (especially hardware timer management).\n\nMoreover, we have improved the robustness of the existing ports of RIOT OS\non MSP430, making it a suitable software platform for tiny motes and devices.\n\n\n\n\n\n\n\vfill\n\bibliographystyle{apalike}\n{\small\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section*{Abstract}\n{\bf \nWe derive explicit expressions for dynamical correlations of the field and density operators in the Lieb-Liniger model, within an arbitrary eigenstate with a small particle density ${\cal D}$. They are valid for all space and time and any interaction strength $c>0$, and are the leading order of an expansion in ${\cal D}$. This expansion is obtained by writing the correlation functions as sums over form factors when formally decomposed into partial fractions.\n\n}\n\n\renewcommand\Affilfont{\fontsize{9}{10.8}\itshape}\n\n \n\tableofcontents\n\section{Introduction}\nThe Lieb-Liniger model is a key paradigm of many-particle systems\cite{LiebLiniger63a,Brezin64,korepin}. 
In the repulsive regime, it is considered as one of the simplest interacting quantum integrable model, for having the simplifying feature of involving only real rapidities\\cite{Lieb63,YangYang69}. The main objects of interest are the correlation functions of local observables in the thermodynamic limit, that are the macroscopic output of the model resulting from the short-range interactions between the bosons. Moreover their Fourier transform is directly measurable in cold-atoms experiments \\cite{naegerl15,Bouchoule16,Fabbri15}. Although exactly solvable, the computation of correlation functions in quantum integrable models is a notoriously difficult problem. In particular the computation of dynamical correlations for an arbitrary interaction strength $c$ and within arbitrary eigenstate is an open problem.\n\nSpecial cases of this problem were studied and solved in the past. The first of these special cases is the impenetrable bosons limit $c\\to\\infty$, that can be reformulated in terms of free fermions \\cite{girardeau}. Here, both because of the free fermionic nature of the model and the simple structure of the form factor\\footnote{The Transverse Field Ising Model, although also equivalent to free fermions, cannot be fully treated along the same lines because of its more complex form factors \\cite{GFE20}.}, the Lehmann representation of dynamical correlations can be fully resummed into a Fredholm determinant\\cite{KS90,slavnov90,izergin87,kojima97} and its asymptotics extracted with differential equations \\cite{IIKS90,IIKV92}. Another important special case is the ground state static correlation at finite $c$, that was treated within the Algebraic Bethe ansatz framework\\cite{kitanineetalg} and led to the first ab initio calculation of the critical exponents previously predicted by Conformal Field Theory and Luttinger Liquid theory\\cite{BIK86,IKR87,haldane,cazalilla}. This approach was then generalized to static correlations within arbitrary eigenstates \\cite{kozlowskimailletslvanov,kozlowskimailletslvanov2}. The particular case of static correlators within thermal eigenstates was also studied with quantum transfer matrix methods\\cite{suzuki85,klumper92,patuklumper}. The full asymptotics of ground state dynamical correlations were derived in \\cite{kitanineetcformfactor,kozlowskimaillet,kozlowski4} from form factor expansions, confirming predictions of Non-Linear Luttinger Liquid theory\\cite{IG08,PAW09,ISG12,P12,shashipanfilcaux,Price17}.\n\n\nProgress on the general case of dynamical correlations within arbitrary states at finite $c$ has been much more limited. This calculation is very different from the special cases of zero-temperature or static cases and poses important problems. The successful methods for static correlations such as the algebraic Bethe ansatz approach or the quantum transfer matrix methods do not apply to the dynamical case, and the form factor expansion used to compute ground state dynamical correlations relied on combinatorial identities that are usable for zero-entropy states only. This general case has however been studied through several approaches. Firstly, Generalized HydroDynamics (GHD) provide predictions for the leading asymptotics of the dynamical correlations of conserved charges such as the density within arbitrary macrostates\\cite{CADY16,BCDF16,Doyon18}. 
However, the approach cannot a priori be applied to compute the next corrections, which restricts the dynamical structure factor to small frequency and momentum, nor does it apply to semi-local operators such as the field correlations. Secondly, numerical summations of dominant form factors proved very efficient and yielded numerical estimates of the dynamical structure factor on the full plane \cite{cauxcalabreseslavnov,PC14}. On the Bethe-ansatz calculations side, one-point functions within arbitrary eigenstates were studied in \cite{negrosmirovn,bastianellopiroli,bastianellopirolicalabrese}. An approach based on thermodynamic form factors was initiated \cite{deNP15,deNP16,DNP18,panfil20,cortescuberopanfil1,cortescuberopanfil2}, but it still involves non-integrable singularities and so requires a particular understanding of this feature. A regularized form factor expansion was derived for the XXZ spin chain in \cite{kozlowski1}. In \cite{granetessler20}, the full spectral sum at order $c^{-2}$ was computed for all root densities and all momenta and frequencies, involving one- and two-particle-hole excitations. It showed the necessity of a fine-tuned treatment of the non-integrable singularities, which is crucial for e.g. detailed balance to be satisfied in thermal states.\n\n\nThe objective of this paper is to derive the full dynamical correlations for all space and time and all interaction strengths $c$, in the limit where the particle density ${\cal D}$ of the averaging state becomes small. This low-density limit is defined in terms of a partial fraction decomposition of the form factor, initially introduced in \cite{GFE20} in a model that can be reformulated into free fermions. This decomposition naturally organizes the spectral sum as an expansion in the particle density ${\cal D}$ of the averaging state, and the low-density limit is defined as the leading term of this expansion. \n\nThis result provides another limiting case where the initial problem becomes solvable, namely the computation of dynamical correlations for all space and time and arbitrary $c$ within finite-entropy macrostates. Moreover, the framework also makes it possible to compute the subleading corrections in the particle density, as was explicitly shown in \cite{GFE20} for the Transverse Field Ising Model. The computation of these subleading corrections in the interacting case however comes with greater technical difficulties and is left for future work. Finally, this low-density limit calculation sheds light on the structure of the spectral sum and on the nature of the states contributing in the thermodynamic limit.\n\nIn Section \ref{sec1} we introduce the Lieb-Liniger model and recall known results on its form factors. In Section \ref{ldd} we define what is meant by the low-density limit of the dynamical correlations, taking the field correlations as an example. 
In Section \\ref{fieldsection} we compute the low-density limit of the field two-point function \\eqref{field}, and in Section \\ref{densitysection} the low-density limit of the density two-point function \\eqref{densityde}.\n\n\n\\section{\\texorpdfstring{Lieb-Liniger model}{Lg}}\n\\label{sec1}\n\\subsection {Definition}\n\nThe Lieb-Liniger model \\cite{LiebLiniger63a} is a non-relativistic\nquantum field theory model with Hamiltonian\n\\begin{equation}\nH=\\int_0^L dx\\left[-\\psi^\\dagger(x)\\frac{d^2}{dx^2}\\psi(x)+c\\psi^\\dagger(x)\\psi^\\dagger(x)\\psi(x)\\psi(x)\n\\right]\\,,\n\\label{HLL}\n\\end{equation}\nwhere the canonical Bose field $\\psi(x)$ satisfies equal-time\ncommutation relations\n\\begin{equation}\n[\\psi(x),\\psi^\\dagger(y)]=\\delta(x-y)\\,.\n\\end{equation}\nWe will impose periodic boundary\n conditions.\nFor later convenience we define the time-$t$ evolved version of the field $\\psi(x,t)=e^{iHt}\\psi(x)e^{-iHt}$. We also define the density operator at position $x$\n\\begin{equation}\n\\sigma(x)=\\psi^\\dagger(x)\\psi(x)\\,,\n\\end{equation}\nand its time-$t$ evolved version $\\sigma(x,t)=e^{iHt}\\sigma(x)e^{-iHt}$.\n\n\\subsection {The Bethe ansatz solution}\n\\subsubsection{The spectrum}\nThe Lieb-Liniger model is solvable by the Bethe ansatz: an eigenstate $|\\pmb{\\lambda}\\rangle$ with $N$ bosons can be written as\n\\begin{equation}\n|\\pmb{\\lambda}\\rangle=B(\\lambda_1)...B(\\lambda_N)|0\\rangle\\,,\n\\end{equation}\nwith the $B(\\lambda)$'s some creation operators, $|0\\rangle$ the pseudo-vacuum and the $\\lambda_i$'s some rapidities that satisfy the following set of 'Bethe equations'\n\\begin{equation}\ne^{iL\\lambda_k}=\\prod_{\\substack{j=1\\\\j\\neq k}}^N\\frac{\\lambda_k-\\lambda_j+ic}{\\lambda_k-\\lambda_j-ic}\\,,\n\\quad k=1,\\dots, N.\n\\end{equation}\nThe\nenergy $E$ and the momentum $P$ of this state read \n\\begin{equation}\nE(\\pmb{\\lambda})=\\sum_{i=1}^N \\lambda_i^2\\,,\\qquad P(\\pmb{\\lambda})=\\sum_{i=1}^N\\lambda_i\\,.\n\\end{equation}\nIt is convenient to express the Bethe equations in logarithmic form\n\\begin{equation}\n\\label{belog}\n\\frac{\\lambda_k}{2\\pi}=\\frac{I_k}{L}-\\frac{1}{L}\\sum_{j=1}^N \\frac{1}{\\pi}\\arctan \\frac{\\lambda_k-\\lambda_j}{c}\\,,\n\\end{equation}\nwith $I_k$ an integer if $N$ is odd, a half-integer if $N$ is even. For $c>0$ all the solutions to this equation are real \\cite{korepin}. We will denote\n\\begin{equation}\n{\\cal D}=\\frac{N}{L}\\,,\n\\end{equation}\nthe particle density of the eigenstate $|\\pmb{\\lambda}\\rangle$.\n\n\\subsubsection{The field form factors}\nOur aim is to calculate correlation functions in an eigenstate $|\\pmb{\\lambda}\\rangle$ at low particle density ${\\cal D}$. 
We will focus on the two-point function of the field operator\n\\begin{equation}\n \\left\\langle \\psi^\\dagger( x,t) \\psi ( 0,0) \\right\\rangle =\\frac {\\left\\langle \\pmb{\\lambda} \\left| \\psi^\\dagger( x,t) \\psi ( 0,0) \\right| \\pmb{\\lambda} \\right\\rangle } {\\left\\langle \\pmb{\\lambda} |\\pmb{\\lambda} \\right\\rangle }\\,,\n\\end{equation}\nand the two-point function of the density operator\n\\begin{equation}\n \\left\\langle \\sigma( x,t) \\sigma ( 0,0) \\right\\rangle =\\frac {\\left\\langle \\pmb{\\lambda} \\left| \\sigma( x,t) \\sigma ( 0,0) \\right| \\pmb{\\lambda} \\right\\rangle } {\\left\\langle \\pmb{\\lambda} |\\pmb{\\lambda} \\right\\rangle }\\,.\n\\end{equation}\nOur strategy is to use a Lehman representation in terms of energy eigenstates\n$|\\pmb{\\mu}\\rangle =|\\mu_1,...,\\mu_{N'}\\rangle$, where\n$\\{\\mu_1,\\dots,\\mu_{N'}\\}$ are solutions to the Bethe equations \\fr{belog} to rewrite the correlation functions as sums of form factors over the full spectrum. For the two-point function of a generic operator ${\\cal O}$ this representation reads\n\\begin{equation}\n\\label{bigsumfield}\n\\begin{aligned}\n \\left\\langle {\\cal O}^\\dagger( x,t){\\cal O}( 0,0) \\right\\rangle &=\\sum _{ \\pmb{\\mu}}\\frac {\\left| \\left\\langle \\pmb{\\mu} |{\\cal O}( 0) |\\pmb{\\lambda}\\right\\rangle \\right| ^{2}} {\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }e^{it\\left( E\\left( \\pmb{\\lambda}\\right) -E\\left( \\pmb{\\mu} \\right) \\right) +ix\\left( P\\left( \\pmb{\\mu}\\right) -P\\left( \\pmb{\\lambda}\\right) \\right) }\\,.\n\\end{aligned}\n\\end{equation}\n\nThe (normalized) form factors of the field and density operators between two Bethe\nstates $|\\pmb{\\lambda}\\rangle,|\\pmb{\\mu}\\rangle$ with respective numbers of Bethe roots $N,N'$ have been computed previously \\cite{Korepin82,Slavnov89,Slavnov90,KorepinSlavnov99,Oota04,KozlowskiForm11}. \n\nIn the case of the field operator, it reads\n\\begin{equation}\\label{FF}\n\\begin{aligned}\n&\\frac { \\left\\langle \\pmb{\\mu} |\\psi ( 0) |\\pmb{\\lambda}\\right\\rangle } {\\sqrt{\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }}=\\delta_{N,N'+1}\\frac{i^{N+1}(-1)^{N(N-1)\/2}}{L^{N-1\/2}\\sqrt{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}}\\frac{\\prod_{i< j}|\\lambda_i-\\lambda_j|\\prod_{i< j}|\\mu_i-\\mu_j|}{\\prod_{i, j}(\\mu_j-\\lambda_i)}\\\\\n&\\qquad\\times\\sqrt{\\frac{\\prod_{i\\neq j}(\\lambda_i-\\lambda_j+ic)}{\\prod_{i\\neq j}(\\mu_i-\\mu_j+ic)}}\\prod_{\\substack{j=1\\\\j\\neq p,s}}^N(V_j^+-V_j^-) \\,\\,\\underset{i,j=1,...,N}{\\det}\\Bigg[\\delta_{ij}+U_{ij}\\Bigg]\\,,\n\\end{aligned}\n\\end{equation}\nfor any $p,s=1,...,N$. 
The various terms entering this expression are\n\\begin{equation}\nV_i^\\pm=\\frac{\\prod_{k=1}^{N-1}(\\mu_k-\\lambda_i\\pm ic)}{\\prod_{k=1}^{N}(\\lambda_k-\\lambda_i\\pm ic)}\\,,\n\\end{equation}\nand the $N\\times N$ matrix\n\\begin{equation}\nU_{jk}=\\frac{i}{V_j^+-V_j^-}\\left[\\frac{2c}{c^2+(\\lambda_j-\\lambda_k)^2}-\\frac{4c^2}{(c^2+(\\lambda_p-\\lambda_k)^2)(c^2+(\\lambda_s-\\lambda_j)^2)}\\right]\\frac{\\prod_{m=1}^{N-1}(\\mu_m-\\lambda_j)}{\\prod_{m\\neq j}(\\lambda_m-\\lambda_j)}\\,,\n\\end{equation}\nand finally\n\\begin{equation}\n\\label{norm}\n\\mathcal{N}_{\\pmb{\\lambda}}=\\det G(\\pmb{\\lambda})\\,,\n\\end{equation}\nwith the Gaudin matrix \\cite{Gaudin71}\n\\begin{equation}\\label{gaudin}\nG_{ij}(\\pmb{\\lambda})= \\delta_{ij} \\left(1+\\frac{1}{L}\\sum_{k=1}^N \\frac{2c}{c^2+(\\lambda_i-\\lambda_k)^2}\\right)-\\frac{1}{L}\\frac{2c}{c^2+(\\lambda_i-\\lambda_j)^2}\\,.\n\\end{equation}\nThe form factor of the density operator reads\n\\begin{equation}\\label{desnityff}\n\\begin{aligned}\n&\\frac { \\left\\langle \\pmb{\\mu} |\\sigma ( 0) |\\pmb{\\lambda}\\right\\rangle } {\\sqrt{\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }}=\\delta_{N,N'}\\frac{i^{N+1}(-1)^{N(N-1)\/2}(\\sum_{j=1}^N \\lambda_j-\\mu_j)}{L^{N}\\sqrt{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}}\\frac{\\prod_{i< j}|\\lambda_i-\\lambda_j|\\prod_{i< j}|\\mu_i-\\mu_j|}{\\prod_{i, j}(\\mu_j-\\lambda_i)}\\\\\n&\\qquad\\times\\sqrt{\\prod_{i, j}\\frac{\\lambda_i-\\lambda_j+ic}{\\mu_i-\\mu_j+ic}}\\prod_{j\\neq p}(V_j^+-V_j^-) \\,\\,\\underset{i,j=1,...,N}{\\det}\\Bigg[\\delta_{ij}+U'_{ij}\\Bigg]\\,,\n\\end{aligned}\n\\end{equation}\nfor any $p=1,...,N$, with\n\\begin{equation}\nU'_{jk}=i\\frac{\\mu_j-\\lambda_j}{V_j^+-V_j^-}\\left[\\frac{2c}{(\\lambda_j-\\lambda_k)^2+c^2}-\\frac{2c}{(\\lambda_p-\\lambda_k)^2+c^2}\\right]\\prod_{m\\neq j}\\frac{\\mu_m-\\lambda_j}{\\lambda_m-\\lambda_j}\\,,\n\\end{equation}\nand with now\n\\begin{equation}\nV_j^\\pm=\\frac{\\prod_{k=1}^{N}(\\mu_k-\\lambda_j\\pm ic)}{\\prod_{k=1}^{N}(\\lambda_k-\\lambda_j\\pm ic)}\\,.\n\\end{equation}\n\\subsubsection{Root densities}\nIn the thermodynamic limit, any sum of a non-singular function over the Bethe roots can be expressed in terms of a \\textit{root density} that characterizes a macrostate as far as such quantities are concerned\n\\begin{equation}\n\\underset{L\\to\\infty}{\\lim}\\, \\frac{1}{L^n}\\sum_{i_1,...,i_n}f(\\lambda_{i_1},...,\\lambda_{i_n})=\\int_{-\\infty}^\\infty \\dots\\int_{-\\infty}^\\infty f(\\lambda_1,...,\\lambda_n)\\rho(\\lambda_1)\\dots\\rho(\\lambda_n)\\D{\\lambda_1}\\dots\\D{\\lambda_n}\\,.\n\\end{equation}\nHowever if the function is singular the result will in general depend on the representative state of the macrostate, see \\cite{granetessler20}.\nIt is customary to introduce the hole density $\\rho_h(\\lambda)$ defined by\n\\begin{equation}\n\\label{vartheta}\n\\rho(\\lambda)+\\rho_h(\\lambda)=\\frac{1}{2\\pi}+\\frac{1}{2\\pi}\\int_{-\\infty}^\\infty \\frac{2c}{c^2+(\\lambda-\\mu)^2}\\rho(\\mu)\\D{\\mu}\\,.\n\\end{equation}\n\n\\section {Definition of the low-density limit \\label{ldd}}\nThe purpose of this section is to define what is meant by the \\textit{low density limit} of correlation functions. It is defined as the \\textit{leading order of an expansion} in ${\\cal D}$, obtained by decomposing the form factor in partial fractions. 
As a consequence it is an expression valid for all $x,t$ and $c$, that becomes closer to the dynamical correlations as the particle density ${\\cal D}$ of the averaging state becomes smaller.\\\\\n\nThis definition requires some technicalities, but is rigorous and allows for a computation of the next orders, as shown in \\cite{GFE20} for a model that can be reformulated into free fermions. However, it a priori lacks some intuitive picture. For that reason we provide an interpretation of this low-density limit so defined as a Lehmann representation in terms of the \\textit{low density limit of the form factor}. The reasoning is here rather different, and consists in first approximating the form factor by the thermodynamic limit value they take when one of the two states is a dilute state, i.e. such that for any pair $i,j$ we have $L(\\lambda_i-\\lambda_j)\\to\\infty$. In this limit, the spectral sum of the dynamical correlations indeed matches the leading order of the expansion in ${\\cal D}$, providing an interesting and intuitive consistency check. But it must be stressed that the right definition of the low-density limit is more general than this intuitive calculation, since it only requires ${\\cal D}$ to be small, not the root density $\\rho(\\lambda)$ to be small everywhere.\n\n \n\n\\subsection {Partial fraction decomposition}\n\\subsubsection{Recall \\label{pfddefsec}}\nWe recall that the partial fraction decomposition (PFD) of a ratio of two polynomials $\\frac{P(X)}{\\prod_{i=1}^n(X-x_i)^{a_i}}$ with distinct $x_i$'s is the writing\n\\begin{equation}\\label{pfddef}\n\\frac{P(X)}{\\prod_{i=1}^n(X-x_i)^{a_i}}=P_0(X)+\\sum_{i=1}^n \\sum_{\\nu=1}^{a_i}\\frac{B_{i,\\nu}}{(X-x_i)^\\nu}\\,,\n\\end{equation}\nwith $P_0(X)$ a polynomial, and $B_{i,\\nu}$ coefficients given by\n\\begin{equation}\nB_{i,\\nu}=\\frac{1}{(a_i-\\nu)!}(\\tfrac{d}{dX})^{a_i-\\nu}[(X-x_i)^{a_i}P(X)]|_{X=x_i}\\,.\n\\end{equation}\nThe polynomial $P_0(X)$ can be determined by studying e.g. the large $X$ behaviour of the ratio of the two polynomials on the left-hand side of \\eqref{pfddef}.\n\\subsubsection{The poles of the normalized form factor}\nWe consider $ \\pmb{\\lambda} $ and $ \\pmb{\\mu} $ two sets of respectively $N$ and $N-1$ rapidities, and would like to apply a partial fraction decomposition to the square of the normalized field form factor, with respect to each of the $\\mu_i$'s successively. The first task is to identify the poles of a $\\mu_i$ at fixed other $\\mu_j$'s. There are a priori three types of poles for $\\mu_i$\n\\begin{enumerate}\n\\item Double poles in $\\lambda_j$ for all $j$\n\\item Simple poles in $\\mu_j \\pm ic$ for all $j$\n\\item Poles corresponding to zeros of the determinant of the Gaudin matrix $\\mathcal{N}_{\\pmb{\\mu}}$\n\\end{enumerate}\nWe remark that the last two types of poles come from the fact that we consider normalized form factors. We also remark that since some of the entries of the Gaudin matrix diverge when $\\mu_i-\\mu_j\\to \\pm ic$, the second type of pole could happen to be absent from the full normalized form factor, but this possibility will not be relevant to our discussion. Lastly we notice that the last two types of poles are in fact never attained when all the roots $\\mu_i$ are real, which is always the case if $c>0$. Indeed, when the roots are real, the Gaudin matrix is strictly dominant diagonal, i.e. satisfies $\\forall i=1,...,N-1,\\quad |G_{ii}|>\\sum_{j\\neq i}|G_{ij}|$, hence is invertible. 
However, when performing the PFD of the form factor it has to be considered as a mere fraction of polynomials in $\\mu_i$'s that do not necessarily satisfy the Bethe equations, and these poles have to be taken into account indeed.\n\nAmong these three types of poles, the zeros of $\\mathcal{N}_{\\pmb{\\mu}}$ are the most problematic since their location is a complicated function of the other $\\mu_j$'s. For this reason we are going to consider the PFD of the form factor without these factors.\nNamely we define $F_\\psi(\\pmb{\\lambda},\\pmb{\\mu})$ by\n\\begin{equation}\\label{pfd}\n\\begin{aligned}\n\\frac {\\left| \\left\\langle \\pmb{\\mu} |\\psi \\left( 0\\right) |\\pmb{\\lambda}\\right\\rangle \\right| ^{2}} {\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }= \\frac{F_\\psi(\\pmb{\\lambda},\\pmb{\\mu})}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}L^{2N-1}} \\,,\n\\end{aligned}\n\\end{equation}\nand consider the PFD of $F_\\psi(\\pmb{\\lambda},\\pmb{\\mu})$ only, with respect to the $\\mu_i$'s. \n\n\\subsubsection{The shape of the PFD of the reduced form factor}\nThe reduced form factor $F_\\psi(\\pmb{\\lambda},\\pmb{\\mu})$, seen as a function of $\\mu_1$, is a ratio of two polynomials with double poles in each of the $\\lambda_i$'s and simple poles in $\\mu_j\\pm ic$, so one can apply the decomposition written in Section \\ref{pfddefsec}. Since the reduced form factor goes to zero when $\\mu_1\\to\\infty$, we have $P_0(X)=0$ and so one can write\n\\begin{equation}\\label{154}\nF_\\psi(\\pmb{\\lambda},\\pmb{\\mu})=h_{\\mu_1}(\\mu_2,...,\\mu_{N-1})+\\sum_{i=1}^N \\sum_{\\nu=1}^{2}\\frac{B_{i,\\nu}(\\mu_2,...,\\mu_{N-1})}{(\\mu_1-\\lambda_i)^\\nu}\\,,\n\\end{equation}\nwith\n\\begin{equation}\nh_{\\mu_1}(\\mu_2,...,\\mu_{N-1})=\\sum_{i=2}^{N-1} \\frac{C^+_{i}(\\mu_2,...,\\mu_{N-1})}{\\mu_1-\\mu_i+ic}+\\sum_{i=2}^{N-1} \\frac{C^-_{i}(\\mu_2,...,\\mu_{N-1})}{\\mu_1-\\mu_i-ic}\\,,\n\\end{equation}\nwhere $B_{i,\\nu}(\\mu_2,...,\\mu_{N-1})$, $C^\\pm_{i}(\\mu_2,...,\\mu_{N-1})$ are 'coefficients' independent of $\\mu_1$, but that still possess a dependence in the remaining $\\mu_k$'s. They have the same pole structure, except that each $\\mu_i$ for $i\\neq 1$ does not anymore has a pole in $\\mu_1\\pm ic$. The function $h_{\\mu_1}(\\mu_2,...,\\mu_{N-1})$ is a function of $\\mu_1$ that has no poles in $\\mu_1$ when all the $\\mu_j$'s are real. \nWe now apply the same procedure to $B_{i,\\nu}(\\mu_2,...,\\mu_{N-1})$ and $h_{\\mu_1}(\\mu_2,...,\\mu_{N-1})$ with respect to $\\mu_2$ to obtain the writing\n\\begin{equation}\nF_\\psi(\\pmb{\\lambda},\\pmb{\\mu})=h_{\\mu_1,\\mu_2}(\\mu_3,...,\\mu_{N-1})+\\sum_{i=1}^N\\sum_{j=1}^N\\sum_{\\nu_i=0}^2\\sum_{\\nu_j=1}^2\\frac{B_{i,\\nu_i,j,\\nu_j}(\\mu_3,...,\\mu_{N-1})}{(\\mu_1-\\lambda_i)^{\\nu_i}(\\mu_2-\\lambda_j)^{\\nu_j}}\\,,\n\\end{equation}\nwhere $B_{i,\\nu_i,j,\\nu_j}(\\mu_3,...,\\mu_{N-1})$ is a 'coefficient' independent of $\\mu_1,\\mu_2$, and $h_{\\mu_1,\\mu_2}(\\mu_3,...,\\mu_{N-1})$ a function of $\\mu_1,\\mu_2$ without singularities in real $\\mu_1,\\mu_2$. We note that the poles in $\\mu_2$ arising from the function $h_{\\mu_1}(\\mu_2,...,\\mu_{N-1})$ that has no poles in real $\\mu_1$ are counted through the case $\\nu_i=0$. 
Proceeding recursively, one obtains the writing\n\n\\begin{equation}\\label{pfd}\nF_\\psi(\\pmb{\\lambda},\\pmb{\\mu})=\\sum_{\\{\\nu\\},f}\\frac{A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)}{\\prod_{i=1}^{N-1}(\\mu_i-\\lambda_{f(i)})^{\\nu_i}}\\,,\n\\end{equation}\nwhere each $\\nu_i$ takes the value $0$, $1$ or $2$, and where $f$ are functions defined on the points $i\\in\\{1,...,N-1\\}$ where $\\nu_i>0$, namely\n\\begin{equation}\nf:\\{i\\in\\{1,...,N-1\\} | \\nu_i>0 \\}\\to \\{1,...,N\\}\\,.\n\\end{equation}\nThe coefficients $A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)$ crucially do not depend on any $\\mu_i$ whenever $\\nu_i>0$, and are bounded regular functions of real $\\mu_i$ when $\\nu_i=0$. \n\n\\subsubsection{Computing the coefficients}\nIn the special case where all $\\nu_i>0$, the coefficients $A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)=A(\\pmb{\\lambda},\\{\\nu\\},f)$ do not depend on any $\\mu_i$'s and can be computed according to the formula\n\\begin{equation}\n\\label{formulA}\nA(\\pmb{\\lambda},\\{\\nu\\},f)=\\prod_{i=1}^{N-1}\\left[\\left(\\frac{d}{d\\mu_i}\\right)^{2-\\nu_i} (\\mu_i-\\lambda_{f(i)})^2 \\right]F_\\psi(\\pmb{\\lambda},\\pmb{\\mu})|_{\\mu_i=\\lambda_{f(i)}}\\,.\n\\end{equation}\nIf now there is a subset $K\\subset\\{1,...,N-1\\}$ such that $\\nu_i=0$ for $i\\in K$, one first defines the following function of the $\\mu_i$'s for $i\\in K$\n\\begin{equation}\n\\label{formulA2}\n\\bar{A}(\\{\\mu_i\\}_{i\\in K}|\\pmb{\\lambda},\\{\\nu\\},f)=\\prod_{\\substack{i=1\\\\ i\\notin K}}^{N-1}\\left[\\left(\\frac{d}{d\\mu_i}\\right)^{2-\\nu_i} (\\mu_i-\\lambda_{f(i)})^2 \\right]F_\\psi(\\pmb{\\lambda},\\pmb{\\mu})|_{\\mu_i=\\lambda_{f(i)},\\, i\\notin K}\\,.\n\\end{equation}\nThe function $\\bar{A}(\\{\\mu_i\\}_{i\\in K}|\\pmb{\\lambda},\\{\\nu\\},f)$ still has poles in $\\mu_i$ for $i\\in K$ since it also contains all the cases when $\\nu_i>0$. To compute the coefficient $A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)$ one has to remove from this function any pole in real $\\mu_i$ for $i\\in K$. Although these formulas are not very explicit, they are still of practical use to compute the simplest terms in the PFD, as we will see below.\\\\\n\n\n\nThe functions $f$ over which we sum in \\eqref{pfd} are actually rather constrained. First, since the form factor vanishes whenever two $\\mu_i$'s or two $\\lambda_i$'s coincide, we see from \\eqref{formulA} and \\eqref{formulA2} that if $\\nu_i=2$ or $\\nu_j=2$ and $f(i)=f(j)$, then the corresponding coefficient vanishes (namely, $A(\\pmb{\\lambda},\\{\\nu\\},f)=0$ if all $\\nu_k>0$ are non-zero, and $\\bar{A}(\\{\\mu_i\\}_{i\\in K}|\\pmb{\\lambda},\\{\\nu\\},f)=0$ if there is a vanishing $\\nu_k$). If we have $\\nu_i=\\nu_j=2$ it directly follows from the absence of derivative in \\eqref{formulA}; if $\\nu_i$ or $\\nu_j$ is equal to $1$, then it follows from the fact that there is a zero of order $2$ in the numerator with only one derivative. Similarly, if $k$ indices have a $\\nu_i=1$ and take the same value through $f$, then there is a zero of order $k(k-1)$ in the numerator, with only $k$ derivatives. Hence this imposes $k=2$. 
Thus one can impose in \\eqref{pfd} the two following constrains: (i) that $f(i)\\neq f(j)$ whenever $\\nu_i=2$ or $\\nu_j=2$, and (ii) that $f$ can take at most twice the same value.\\\\\n\nIn the following we will denote the number of elements of a set $E$ by\n\\begin{equation}\n|E|\\qquad \\text{or}\\qquad \\# E\\,,\n\\end{equation}\naccording to the most readable choice in the context.\n\n\\subsection {A density expansion}\n\nLet us now rewrite the Lehmann representation \\eqref{bigsumfield} in the following way. Instead of summing over the Bethe roots $\\mu_i$, we sum over their Bethe numbers $J_i$, and trade the ordering of the Bethe numbers for a non-ordered sum with a $\\frac{1}{(N-1)!}$ factor. Whenever two Bethe numbers coincide, the form factor is zero so that the two representations are indeed equivalent. Using \\eqref{pfd}, we obtain\n\\begin{equation}\\label{pfdbeg}\n\\begin{aligned}\n \\left\\langle \\psi^\\dagger\\left( x,t\\right) \\psi \\left( 0,0\\right) \\right\\rangle =&\\frac{1}{L^{2N-1}(N-1)!}\\\\\n &\\sum_{\\{\\nu\\},f}\\sum_{J_1,...,J_{N-1}}\\frac{A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}\\frac{e^{it\\left( E( \\pmb{\\lambda}) -E( \\pmb{\\mu}) \\right) +ix\\left( P( \\pmb{\\mu}) -P( \\pmb{\\lambda}) \\right) }}{\\prod_{i=1}^{N-1}(\\mu_i-\\lambda_{f(i)})^{\\nu_i}}\\,.\n\\end{aligned}\n\\end{equation}\nThe sum over $J_1,...,J_{N-1}$ is invariant under any change $\\tilde{f}=f \\circ (i \\,j)$ with $(i \\,j)$ the permutation of indices $i,j$, whenever $\\nu_i=\\nu_j>0$ and $f(i)$ and $f(j)$ are attained the same number of times by $f$. Hence this sum only depends on the \\textit{set of points attained a given number of times} by $f$, not on the particular realization of the function $f$. \n\n\n\nTo rewrite the sum without these functions $f$, let us define $I_k$ for $k=0,1,2$ the set of points $i$ in $\\{1,...,N\\}$ that are attained $k$ times by $f$ from points where $\\nu_j=1$, namely\n\\begin{equation}\n\\begin{aligned}\nI_k&=\\Bigg\\{i\\in\\{1,...,N\\} \\left| \\# \\{j\\in \\{1,...,N-1\\}| \\nu_j=1\\text{ and }f(j)=i\\}=k\\Bigg\\}\\right.\\,.\n\\end{aligned}\n\\end{equation}\nAs a consequence the points in $\\{1,...,N\\}$ attained by $f$ from points where $\\nu_j=2$ are $\\{1,...,N\\}-(I_0\\cup I_1\\cup I_2)$. These subsets $I_0,I_1,I_2\\subset \\{1,...,N\\}$ have to be disjoint and to satisfy $|I_0|=|I_2|+1+p$ with $p=|\\{i|\\nu_i=0\\}|$ the number of points with $\\nu_i=0$, because $f$ can take at most twice the same value. \n\nWe will denote\n\\begin{equation}\nn=|I_2|\\,,\\qquad m=|I_1|\\,,\n\\end{equation}\nand parametrize\n\\begin{equation}\n\\begin{aligned}\nI_2&=\\{j_{1},...,j_{n}\\}\\\\\nI_0&=\\{j_{n+1},...,j_{2n+p+1}\\}\\\\\nI_1&=\\{j_{2n+p+2},...,j_{2n+p+m+1}\\}\\\\\n\\{1,...,N\\}-(I_0\\cup I_1 \\cup I_2)&=\\{j_{2n+p+m+2},...,j_{N}\\}\\,.\n\\end{aligned}\n\\end{equation}\nWhen rewriting \\eqref{pfdbeg} in terms of these subsets, one picks a combinatorial factor corresponding to the number of functions $f$ with such an output. Choosing the set of points where $\\nu_i=0$ yields a factor ${N-1\\choose p}$, those where $\\nu_i=2$ a factor $(N-2n-p-m-1)!{N-1-p\\choose N-2n-p-m-1}$, those attained only once by $f$ and where $\\nu_i=1$ a factor $m!{2n+m\\choose m}$. Finally those attained twice by $f$ yield a factor $(2n)!!n!$. 
Writing $(2n)!!=\\tfrac{(2n)!}{n!2^n}$ yields a total combinatorial factor\n\\begin{equation}\n\\frac{(N-1)!}{2^{n}p!}\\,.\n\\end{equation}\nWe conclude that we can write\n\\begin{equation}\\label{expdensity}\n\\left\\langle \\psi^\\dagger\\left( x,t\\right) \\psi \\left( 0,0\\right) \\right\\rangle =\\sum_{n,m,p\\geq 0}S_{n,m,p}\\,,\n\\end{equation}\nwith\n\\begin{equation}\n\\begin{aligned}\n&S_{n,m,p}=\\frac{1}{2^np!L^{2N-1}}\\sum_{\\substack{I_{0,1,2}\\subset \\{1,...,N\\}\\\\|I_0|=n+p+1\\\\|I_1|=m\\\\|I_2|=n \\\\\\text{all disjoint}}}\\sum_{J_1,...,J_{N-1}}\\frac{{\\cal A}(I_0,I_1,I_2|\\{\\mu_i\\}_{i=2n+1}^{2n+p})}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}\\\\\n &\\qquad\\qquad\\times\\frac{e^{it\\left( E\\left( \\pmb{\\lambda}\\right) -E\\left( \\pmb{\\mu} \\right) \\right) +ix\\left( P\\left( \\pmb{\\mu}\\right) -P\\left( \\pmb{\\lambda}\\right) \\right) }}{\\prod_{i=1}^{n}(\\mu_{2i-1}-\\lambda_{j_i})(\\mu_{2i}-\\lambda_{j_i})\\prod_{i=2n+1+p}^{2n+m+p}(\\mu_i-\\lambda_{j_{i+1}})\\prod_{i=2n+m+p+1}^{N-1}(\\mu_i-\\lambda_{j_{i+1}})^{2}}\\,.\n\\end{aligned}\n\\end{equation}\nThe specific ordering of the $\\mu_i$'s in this expression is irrelevant. \n\nHere we have ${\\cal A}(I_0,I_1,I_2|\\{\\mu_i\\}_{i=2n+1}^{2n+p})=A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)$ with $\\nu_i=1$ for $i=1,...,m+2n$, $\\nu_i=0$ for $i=m+2n+1,...,m+2n+p$ and $\\nu_i=2$ for $i=m+2n+p+1,...,N-1$, and with the function $f$ taking the values in $I_1$ over $1,...,m$, twice the values in $I_2$ over $m+1,...,m+2n$, and the values in $\\{1,...,N\\}-(I_0\\cup I_1\\cup I_2)$ over $m+2n+p+1,...,N-1$.\n\nSince each choice of an index in $\\{1,...,N\\}$ comes with a factor ${\\cal D}$, the term $S_{n,m,p}$ is of order ${\\cal O}({\\cal D}^{1+2n+m+p})$. Hence expression \\eqref{expdensity} is \\textit{an expansion in the particle density} ${\\cal D}$ of the averaging state. \n\n\\subsection {Definition of the low-density limit of the correlation function}\nThe low density limit of the dynamical correlations is defined as retaining only the leading term $S_{0,0,0}$ in \\eqref{expdensity}. It is obtained with $p=0$ and $I_1=I_2=\\varnothing$, and so $I_0=\\{a\\}$ for $a=1,...,N$. Namely, reparametrising $\\pmb{\\mu}=\\{\\mu_1,...,\\mu_{a-1},\\mu_{a+1},...,\\mu_N\\}$ for convenience\n\\begin{equation}\\label{S00}\n\\begin{aligned}\n &S_{0,0,0}=\\frac{1}{L}\\sum_{a=1}^N\\sum_{J_i,\\, i\\neq a}\\frac{{\\cal A}(\\{a\\},\\varnothing,\\varnothing|\\varnothing)}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}\\frac{e^{it\\left( E\\left( \\pmb{\\lambda}\\right) -E\\left( \\pmb{\\mu} \\right) \\right) +ix\\left( P\\left( \\pmb{\\mu}\\right) -P\\left( \\pmb{\\lambda}\\right) \\right) }}{\\prod_{i\\neq a}L^2(\\mu_i-\\lambda_{i})^{2}}\\,.\n\\end{aligned}\n\\end{equation}\nSince all the other terms in \\eqref{expdensity} have at least a multiplying factor ${\\cal D}$, we have\n\\begin{equation}\\label{psipsild2}\n\\begin{aligned}\n &\\left\\langle \\psi^\\dagger\\left( x,t\\right) \\psi \\left( 0,0\\right) \\right\\rangle =S_{0,0,0}(1+{\\cal O}({\\cal D}))\\,.\n\\end{aligned}\n\\end{equation}\nIn the rest of paper, we will use the sign $\\underset{\\text{l.d.}}{\\sim}$ to indicate a low-density limit. 
Namely\n\\begin{equation}\nX\\underset{\\text{l.d.}}{\\sim} Y\\qquad \\text{means}\\qquad X=Y(1+{\\cal O}({\\cal D}))\\,.\n\\end{equation}\nUsing formula \\eqref{formulA}, one finds\n\\begin{equation}\n{\\cal A}(\\{a\\},\\varnothing,\\varnothing|\\varnothing)=\\prod_{i\\neq a}\\frac{4c^2}{(\\lambda_i-\\lambda_a)^2+c^2}\\,.\n\\end{equation}\nHence the low-density limit\n\\begin{equation}\\label{psipsild2}\n\\begin{aligned}\n &\\left\\langle \\psi^\\dagger\\left( x,t\\right) \\psi \\left( 0,0\\right) \\right\\rangle \\underset{\\text{l.d.}}{\\sim}\\frac{1}{L}\\sum_{a=1}^Ne^{it\\lambda_a^2-ix\\lambda_a}\\sum_{J_i,\\, i\\neq a}\\frac{1}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}\\prod_{i\\neq a}\\left(\\frac{4c^2}{(\\lambda_i-\\lambda_a)^2+c^2}\\frac{e^{it\\left( \\lambda_i-\\mu_i \\right) +ix\\left( \\mu_i-\\lambda_i \\right) }}{L^2(\\mu_i-\\lambda_{i})^{2}}\\right)\\,.\n\\end{aligned}\n\\end{equation}\nA few comments are in order. Up to now, nothing has been said of the Bethe equations, and in principle the $\\mu_i$'s in this expression should satisfy exactly the Bethe equations \\eqref{belog}. If one wishes to determine the dynamical correlations at leading order in ${\\cal D}$, there remains only the term \\eqref{psipsild2} in the full expansion \\eqref{expdensity}; but one can also satisfy the Bethe equations \\eqref{belog} only at leading order in ${\\cal D}$, since their exact solution will involve higher orders in ${\\cal D}$ that are of the same order as the terms discarded in \\eqref{expdensity}. Stated differently, the leading order in ${\\cal D}$ of the dynamical correlations is obtained by both retaining only \\eqref{psipsild2} \\textit{and} satisfying \\eqref{belog} at leading order in ${\\cal D}$, while the higher orders in ${\\cal D}$ will require both taking into account higher terms in \\eqref{expdensity} \\textit{and} satisfying \\eqref{belog} at higher orders in ${\\cal D}$ in \\eqref{psipsild2}.\n\n\n\\subsection {Interpretation: low-density limit of the field form factor\\label{intuitive}}\nThe low-density limit is defined above as the leading term in an expansion of the correlation functions obtained by decomposing the form factors in partial fractions, that turns out to be an expansion in the density of the averaging state ${\\cal D}$. \nThis definition allows for a systematic calculation of the next corrections in the density by taking into account more terms in \\eqref{expdensity}.\\\\\n\nThe low-density limit of the correlation functions can however be recovered more intuitively but less rigorously with the following reasoning. If the root density $\\rho(\\lambda)$ of the averaging state is small, then the distance between two consecutive roots $L(\\lambda_i-\\lambda_j)$ is 'typically'\\footnote{This cannot be true for any representative state of the density, but is true for a 'typical state' whose roots are regularly spaced according to the value of the density, see \\cite{granetessler20}.} large in front of $1$, and in the limit of vanishingly low density becomes infinite. We will say that a sequence of states satisfying this property is \\textit{dilute}, namely\n\\begin{equation}\n(\\pmb{\\lambda}^{(L)})_{L\\in\\mathbb{N}}\\text{ dilute}\\iff \\forall i\\neq j,\\, \\underset{L\\to\\infty}{\\lim}\\, L|\\lambda^{(L)}_i-\\lambda^{(L)}_j|=\\infty\\,.\n\\end{equation}\nFor notational convenience we will drop the $L$ dependence of the sequence of states. 
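As a simple illustration of this definition, a set of $N$ distinct rapidities kept\nfixed as $L\to\infty$ defines a dilute sequence, since in that case\n\begin{equation}\nL|\lambda_i-\lambda_j|\underset{L\to\infty}{\longrightarrow}\infty\qquad\text{for all } i\neq j\,,\n\end{equation}\nwhereas a sequence of states with a finite particle density, whose consecutive\nroots are spaced by ${\cal O}(1\/(\rho L))$, is not dilute, since\n$L|\lambda_{i+1}-\lambda_i|$ then remains ${\cal O}(1)$. 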
Let us consider a dilute state $\\pmb{\\lambda}$ and investigate the consequences it has on the form factors between $\\pmb{\\lambda}$ and another state $\\pmb{\\mu}$.\n\nLet us first note that because of the assumption of diluteness, a Bethe number $J_i$ of $\\pmb{\\mu}$ can be at a distance ${\\cal O}(1)$ of at most one Bethe number $I_j$ of $\\pmb{\\lambda}$. Hence given a root $\\mu_i$, the quantity $L(\\mu_i-\\lambda_j)$ can be of order $1$ for at most one $\\lambda_j$. Consequently, \nit is seen from the expression \\eqref{FF} that \nin the low density limit the form factor \\eqref{FF} is non-zero only if each Bethe number $J_i$ of $\\pmb{\\mu}$ is at a distance of order $1$ from exactly one Bethe number $I_j$ of $\\pmb{\\lambda}$, and reciprocally if these $N-1$ Bethe numbers $I_j$ are each at a distance of order $1$ from exactly one Bethe number $J_k$ of $\\pmb{\\mu}$. Since there are $N$ roots in $\\pmb{\\lambda}$, there is one remaining root $\\lambda_a$ that is not close to any $\\mu_i$'s. We can re-label the roots $\\pmb{\\mu}=\\{\\mu_1,...,\\mu_{a-1},\\mu_{a+1},...,\\mu_N\\}$ so that $L(\\mu_i-\\lambda_i)$ is of order $1$ for all $i\\neq a$. Let us then investigate the value taken by the normalized form factor \\eqref{FF} in this regime.\n\nLet us first study the determinant $\\mathcal{N}_{\\pmb{\\lambda}}$ in the low-density limit. We have\n\\begin{equation}\\label{gaudin2}\nG_{ij}(\\pmb{\\lambda})= \\delta_{ij}-\\frac{1}{L}\\frac{2c}{c^2+(\\lambda_i-\\lambda_j)^2}+{\\cal O}({\\cal D})\\,,\n\\end{equation}\nso that\n\\begin{equation}\\label{det1det1}\n\\begin{aligned}\n\\mathcal{N}_{\\pmb{\\lambda}}&=\\exp \\,\\text{tr}\\, \\log G(\\pmb{\\lambda})\\\\\n&=\\exp\\left( -\\sum_{n=1}^\\infty \\frac{1}{n}\\,\\text{tr}\\, g^n+{\\cal O}({\\cal D})\\right)\\\\\n&=1+{\\cal O}({\\cal D})\\,,\n\\end{aligned}\n\\end{equation}\nwith $g_{ij}=\\frac{1}{L}\\frac{2c}{c^2+(\\lambda_i-\\lambda_j)^2}$, since $\\,\\text{tr}\\, g^n={\\cal O}({\\cal D}^n)$. Here, we did not use the diluteness of $\\pmb{\\lambda}$, but only evaluated the leading order in ${\\cal D}$ of the determinant.\n\nSecond, again because $\\pmb{\\lambda}$ is dilute, all the $L(\\mu_i-\\lambda_i)$ are negligible in front of any $\\lambda_a-\\lambda_j$. It follows that\n\\begin{equation}\nV_j^+-V_j^-\\underset{\\text{l.d.}}{\\sim}\\frac{-2ic}{(\\lambda_a-\\lambda_j)^2+c^2}\\,.\n\\end{equation}\nWe also have\n\\begin{equation}\n\\sqrt{\\frac{\\prod_{i\\neq j}(\\lambda_i-\\lambda_j+ic)}{\\prod_{i\\neq j}(\\mu_i-\\mu_j+ic)}}\\underset{\\text{l.d.}}{\\sim} i^{N-1}\\prod_{j\\neq a}\\sqrt{(\\lambda_j-\\lambda_a)^2+c^2}\\,,\n\\end{equation}\nand\n\\begin{equation}\n\\frac{\\prod_{i< j}|\\lambda_i-\\lambda_j|\\prod_{i< j}|\\mu_i-\\mu_j|}{\\prod_{i, j}(\\mu_j-\\lambda_i)}\\underset{\\text{l.d.}}{\\sim} (-1)^{(N-1)(N-2)\/2}\\prod_{j\\neq a}\\,\\text{sgn}\\,(\\lambda_j-\\lambda_a)\\frac{1}{\\prod_{j\\neq a}(\\mu_j-\\lambda_j)}\\,.\n\\end{equation}\nAs for the matrix $U$, setting $\\lambda_p=\\lambda_s=\\lambda_a$, the dominant entries are\n\\begin{equation}\nU_{aj}\\underset{\\text{l.d.}}{\\sim}-1+\\frac{2}{c}\\,,\n\\end{equation}\nwhile the other entries are of order ${\\cal O}(L^{-1})$. 
Hence\n\\begin{equation}\n\\underset{i,j}{\\det}(\\delta_{ij}+U_{ij})\\underset{\\text{l.d.}}{\\sim}\\frac{2}{c}\\,.\n\\end{equation}\nWe obtain the following low-density limit of the form factor\n\\begin{equation}\\label{ldff}\n\\frac { \\left\\langle \\pmb{\\mu} |\\psi \\left( 0\\right) |\\pmb{\\lambda}\\right\\rangle } {\\sqrt{\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }}\\underset{\\text{l.d.}}{\\sim}\n\\frac{\\phi}{\\sqrt{L}}\\prod_{j\\neq a}\\frac{2c}{\\sqrt{(\\lambda_j-\\lambda_a)^2+c^2}}\\frac{1}{L(\\mu_j-\\lambda_j)}\\,,\n\\end{equation}\nwith the phase\n\\begin{equation}\n\\phi=(-i)^N \\prod_{j\\neq a}\\,\\text{sgn}\\,(\\lambda_j-\\lambda_a)\\,.\n\\end{equation}\n\nLet us reformulate the meaning of this approximation. We consider $\\pmb{\\lambda}$ and $\\pmb{\\mu}$ two Bethe states with $N$ and $N-1$ particles respectively, and denote $\\iota:\\{1,...,N-1\\}\\to \\{1,...,N\\}$ the function such that the element of $\\{\\lambda_1,...,\\lambda_N\\}$ that is the closest to $\\mu_i$ is $\\lambda_{\\iota(i)}$. For a dilute $\\pmb{\\lambda}$, the form factor $\\frac { \\left\\langle \\pmb{\\mu} |\\psi \\left( 0\\right) |\\pmb{\\lambda}\\right\\rangle } {\\sqrt{\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }}$ is non-negligible in the thermodynamic limit only if $\\iota$ is one-to-one from $\\{1,...,N-1\\}$ to $\\{1,...,N\\}-\\{a\\}$ for some $a=1,...,N$. In this case, the form factor reads\n\\begin{equation}\\label{ldff2}\n\\frac { \\left\\langle \\pmb{\\mu} |\\psi \\left( 0\\right) |\\pmb{\\lambda}\\right\\rangle } {\\sqrt{\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }}\\underset{\\text{l.d.}}{\\sim}\n\\frac{\\phi}{\\sqrt{L}}\\prod_{j=1}^{N-1}\\frac{2c}{\\sqrt{(\\lambda_{\\iota(j)}-\\lambda_a)^2+c^2}}\\frac{1}{L(\\mu_j-\\lambda_{\\iota(j)})}\\,.\n\\end{equation}\nEquation \\eqref{ldff} corresponds to relabelling $\\mu_{\\iota^{-1}(j)}$ into $\\mu_j$ for $j=1,...,N,\\, j\\neq a$.\\\\\n\n\nSuch an expression implies an ordering of the roots $\\mu_1,...,\\mu_{N-1}$ according to the ordering of the $\\lambda_i$'s. Hence in the spectral sum \\eqref{bigsumfield}, there is no factor $1\/(N-1)!$ once expressed in terms of the Bethe numbers of the $\\mu_j$'s. Using this expression for the form factor, one indeed recovers the low-density correlation function \\eqref{psipsild2} properly defined from the partial fraction decomposition, with ${\\cal N}_{\\pmb{\\lambda}}, {\\cal N}_{\\pmb{\\mu}}=1+{\\cal O}(\\cal D)$ already imposed at leading order in ${\\cal D}$.\n\n\n\n\n\n\\section {Field two-point function\\label{fieldsection}}\nIn this section we compute $S_{0,0,0}$ in \\eqref{S00} in the low-density limit.\n\n\\subsection{States contributing to the thermodynamic limit}\nIn order to carry out the sum over the Bethe numbers in \\eqref{S00}, let us first investigate which values of Bethe numbers give a non-zero contribution in the thermodynamic limit.\n\nLet us consider Bethe number configurations such that $\\mu_i-\\lambda_i=\\mathcal{O}(L^{-b_i})$ for $i\\neq a$ with $ b_i\\geq 0$. Since the parity of the number of roots $\\lambda_i$ and $\\mu_i$ is different, the Bethe numbers of $\\lambda_i$ are integers (resp. half-odd integers) if those of $\\mu_i$ are half-odd integers (resp. integers). 
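Let us sketch why the exponents $b_i$ cannot exceed $1$; this is a heuristic version of the more precise decoupling argument used below. Subtracting the Bethe equations \\eqref{belog} for $\\mu_i$ and $\\lambda_i$ gives, schematically,\n\\begin{equation}\nL(\\mu_i-\\lambda_i)=2\\pi(J_i-I_i)+{\\cal O}(1)\\,,\n\\end{equation}\nwhere the ${\\cal O}(1)$ piece comes from the scattering phases and stays bounded, while $J_i-I_i$ is a half-odd integer and hence non-zero; the difference $\\mu_i-\\lambda_i$ therefore generically cannot vanish faster than $L^{-1}$.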
Hence from the Bethe equations it follows that one has $b_i\\leq 1$.\n\nNow, since the Bethe number $J_i$ can take $\\mathcal{O}(L^{1-b_i})$ values (for $\\mu_i-\\lambda_i=\\mathcal{O}(L^{-b_i})$ to be satisfied), and since $a$ in \\eqref{S00} can take $\\mathcal{O}(L)$ values, one has $\\mathcal{O}(L^{N-\\sum_{i\\neq a}b_i})$ many such configurations. Besides, each summand in \\eqref{S00} is $\\mathcal{O}(L^{-2N+1+2\\sum_{i\\neq a}b_i})$. Hence the contribution of these configurations is $\\mathcal{O}(L^{-N+1+\\sum_{i\\neq a}b_i})$. Given that $0\\leq b_i\\leq 1$, the only possibility to have a non-vanishing result in the thermodynamic limit is to have $\\forall i,\\, b_i=1$. Hence the only non-vanishing configurations in \\eqref{S00} are those for which the Bethe numbers of $\\mu_i$ differ from that of $\\lambda_i$ by $\\mathcal{O}(L^{0})$. We will denote\n\\begin{equation}\nn_i+\\frac{1}{2}=J_i-I_i\\qquad \\text{for }i\\neq a\\,,\n\\end{equation}\nthe difference between the Bethe numbers of $\\mu_i$ and $\\lambda_i$, with $n_i$ an integer of order ${\\cal O}(L^0)$.\n\n\\subsection{Decoupling of the spectral sum in the low-density limit}\nThe Bethe roots $\\mu_i$ involved in \\eqref{S00} depend on the difference of Bethe numbers $n_i$ and are all coupled through the Bethe equations. Taking the difference of the Bethe equations \\eqref{belog} for $\\mu_k$ and $\\lambda_k$, one obtains\n\\begin{equation}\n\\mu_k-\\lambda_k=\\frac{2\\pi}{L}(G^{-1}\\tilde{n})_k+{\\cal O}(L^{-2})\\,,\n\\end{equation}\nwith $G=G(\\pmb{\\mu})$ introduced in \\eqref{gaudin}, and the vector\n\\begin{equation}\n\\begin{aligned}\n\\tilde{n}_k&=n_k+\\frac{1}{2}+\\frac{1}{\\pi}\\arctan \\frac{\\lambda_k-\\lambda_a}{c}\\,.\n\\end{aligned}\n\\end{equation}\nIn the low-density limit, one has $Gx \\underset{\\text{l.d.}}{\\sim} x$ for all vectors $x$, so that the roots decouple and are expressed as\n\\begin{equation}\n\\mu_k-\\lambda_k\\underset{\\text{l.d.}}{\\sim} \\frac{2\\pi}{L}(n_k+\\alpha_k(\\lambda_a))+{\\cal O}(L^{-2})\\,,\n\\end{equation}\nwhere we introduced\n\\begin{equation}\\label{alpha}\n\\alpha_i(\\nu)=\\frac{1}{2}+\\frac{1}{\\pi}\\arctan \\frac{\\lambda_i-\\nu}{c}\\,.\n\\end{equation}\nFinally, at leading order in ${\\cal D}$, the determinants ${\\cal N}_{\\pmb{\\lambda}},{\\cal N}_{\\pmb{\\mu}}$ are equal to $1$ according to \\eqref{det1det1}. The spectral sum \\eqref{S00} can thus be expressed as a product of $N$ one-dimensional sums\n\\begin{equation}\\label{S001}\n\\begin{aligned}\nS_{0,0,0}\\underset{\\text{l.d.}}{\\sim}\\frac{1}{L}&\\sum_{a=1}^Ne^{it \\lambda_a^2-ix\\lambda_a}\\\\\n&\\times\\prod_{j\\neq a}\\left(\\frac{1}{\\pi^2}\\frac{c^2}{(\\lambda_j-\\lambda_a)^2+c^2}\\sum_{\\substack{n=-\\infty}}^{\\infty}\\frac{e^{i\\tfrac{2\\pi}{L} (n+\\alpha_{j}(\\lambda_a))(x-2\\lambda_j t)-it\\left(\\tfrac{2\\pi}{L}\\right)^2(n+\\alpha_j(\\lambda_a))^2}}{(n+\\alpha_j(\\lambda_a))^2}\\right)\\,.\n\\end{aligned}\n\\end{equation}\n\n\\subsection{Thermodynamic limit of the spectral sum}\nIn order to proceed we need to determine the thermodynamic limit of each of the one-dimensional sums, that are of the type\n\\begin{equation}\n\\sum_{n\\in\\mathbb{Z}}\\frac{e^{i\\frac{w}{L}(n+\\alpha)+i\\frac{\\tau}{L^2}(n+\\alpha)^2}}{(n+\\alpha)^2}\\,,\n\\end{equation}\nfor $\\alpha,w,\\tau$ reals, $\\alpha$ non integer. Let us first consider the case $\\tau=0$. The quantity $\\sum_{n\\in\\mathbb{Z}}\\frac{e^{iWn}}{(n+\\alpha)^2}$ is exactly the Fourier series of a certain $2\\pi$-periodic function of $W$. 
Noticing that for $n$ integer\n\\begin{equation}\n\\int_{-\\pi}^{\\pi}\\left[\\left(\\frac{\\pi}{\\sin \\pi\\alpha}\\right)^2e^{-iW\\alpha} +\\frac{i\\pi}{\\sin\\pi\\alpha}W e^{i\\pi\\alpha\\,\\text{sgn}\\,(W)-iW\\alpha} \\right]e^{-iWn}\\D{W}=\\frac{2\\pi}{(n+\\alpha)^2}\\,,\n\\end{equation}\nwe conclude that\n\\begin{equation}\n\\sum_{n\\in\\mathbb{Z}}\\frac{e^{iW(n+\\alpha)}}{(n+\\alpha)^2}=\\left(\\frac{\\pi}{\\sin \\pi\\alpha}\\right)^2 +\\frac{i\\pi}{\\sin\\pi\\alpha}W e^{i\\pi\\alpha\\,\\text{sgn}\\,(W)}\\,,\\qquad\\text{for }-\\pi<W<\\pi\\,.\n\\end{equation}\nCarrying out the one-dimensional sums in \\eqref{S001} along these lines and taking the thermodynamic limit yields the field two-point function \\eqref{field}, valid for all $x$, $t$ and any $c>0$. It is the leading term in the expansion \\eqref{expdensity}, and constitutes the low-density limit of the dynamical correlations of the field.\n\n\n\n\\section {Density two-point function\\label{densitysection}}\nIn this Section we apply the same reasoning as in Sections \\ref{ldd} and \\ref{fieldsection} but to the density two-point function.\n\n\\subsection{Partial fraction decomposition}\nWe consider $ |\\pmb{\\lambda} \\rangle $ and $ |\\pmb{\\mu} \\rangle $ eigenstates of the Lieb-Liniger Hamiltonian with $N$ rapidities, and define $F_\\sigma(\\pmb{\\lambda},\\pmb{\\mu})$ by\n\\begin{equation}\n\\begin{aligned}\n\\frac {\\left| \\left\\langle \\pmb{\\mu} |\\sigma \\left( 0\\right) |\\pmb{\\lambda}\\right\\rangle \\right| ^{2}} {\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }= \\frac{F_\\sigma(\\pmb{\\lambda},\\pmb{\\mu})}{L^{2N}\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}} \\,.\n\\end{aligned}\n\\end{equation}\nSimilarly to the field case $F_\\psi$, the reduced form factor $F_\\sigma(\\pmb{\\lambda},\\pmb{\\mu})$ is a ratio of two polynomials in the Bethe roots, so that it can be decomposed into partial fractions. Repeating the arguments that apply to the field case, we similarly obtain the decomposition\n\\begin{equation}\\label{pfdsigma}\nF_\\sigma(\\pmb{\\lambda},\\pmb{\\mu})=\\sum_{\\{\\nu\\},f}\\frac{A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)}{\\prod_{i=1}^{N-1}(\\mu_i-\\lambda_{f(i)})^{\\nu_i}}\\,,\n\\end{equation}\nwhere each $\\nu_i$ takes the value $0,1$ or $2$, and where $f$ runs over functions $\\{i\\in\\{1,...,N\\}| \\nu_i\\neq 0\\}\\to \\{1,...,N\\}$. The coefficients $A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)$ crucially do not depend on any $\\mu_i$ whenever $\\nu_i>0$, and are bounded functions of real $\\mu_i$ if $\\nu_i=0$. The function $f$ has the same constraints as in the field case, namely it can take the same value at most twice, and a value taken at a point with $\\nu_i=2$ is taken only once.\n\nThis decomposition also leads to an expansion in the particle density ${\\cal D}$.
Namely, with the parametrization $m=|I_1|$, $n=|I_2|$, $p=|\\{i| \\nu_i=0\\}|$\n\\begin{equation}\n\\begin{aligned}\nI_2&=\\{j_{1},...,j_{n}\\}\\\\\nI_0&=\\{j_{n+1},...,j_{2n+p}\\}\\\\\nI_1&=\\{j_{2n+p+1},...,j_{2n+p+m}\\}\\\\\n\\{1,...,N\\}-(I_0\\cup I_1 \\cup I_2)&=\\{j_{2n+p+m+1},...,j_{N}\\}\\,.\n\\end{aligned}\n\\end{equation}\nwe have\n\\begin{equation}\\label{expdensitysigma}\n\\left\\langle \\sigma\\left( x,t\\right) \\sigma \\left( 0,0\\right) \\right\\rangle =\\sum_{n,m,p\\geq 0}S_{n,m,p}\\,,\n\\end{equation}\nwith\n\\begin{equation}\n\\begin{aligned}\n&S_{n,m,p}=\\frac{1}{2^np!L^{2N}}\\sum_{\\substack{I_{0,1,2}\\subset \\{1,...,N\\}\\\\|I_0|=n+p\\\\|I_1|=m\\\\|I_2|=n \\\\\\text{all disjoint}}}\\sum_{J_1,...,J_{N}}\\frac{{\\cal A}(I_0,I_1,I_2|\\{\\mu_i\\}_{i=2n+1}^{2n+p})}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}\\\\\n &\\qquad\\qquad\\times\\frac{e^{it\\left( E\\left( \\pmb{\\lambda}\\right) -E\\left( \\pmb{\\mu} \\right) \\right) +ix\\left( P\\left( \\pmb{\\mu}\\right) -P\\left( \\pmb{\\lambda}\\right) \\right) }}{\\prod_{i=1}^{n}(\\mu_{2i-1}-\\lambda_{j_i})(\\mu_{2i}-\\lambda_{j_i})\\prod_{i=2n+1+p}^{2n+m+p}(\\mu_i-\\lambda_{j_{i}})\\prod_{i=2n+m+p+1}^{N}(\\mu_i-\\lambda_{j_{i}})^{2}}\\,.\n\\end{aligned}\n\\end{equation}\n The leading term is a priori obtained with $\\nu_i=2$, and the next terms by replacing some of the $\\nu_i$'s by $1$ or $0$, each time picking a factor ${\\cal D}$. However, in contrast to the field case, the coefficient of the a priori leading term is actually found to vanish, using the analogue of \\eqref{formulA} for the density operator case\n \\begin{equation}\n{\\cal A}(\\varnothing,\\varnothing,\\varnothing|\\varnothing)=0\\,.\n\\end{equation}\nIf one $\\nu_i$ is set to $1$, the coefficient still vanishes\n \\begin{equation}\n{\\cal A}(\\varnothing,\\{a\\},\\varnothing|\\varnothing)=0\\,.\n\\end{equation}\nHowever, if it is set to $0$, the coefficient is non-zero and reads\n \\begin{equation}\n{\\cal A}(\\{a\\},\\varnothing,\\varnothing|\\mu_a)=\\prod_{j\\neq a}\\frac{4c^2(\\lambda_a-\\mu_a)^2}{[(\\lambda_j-\\lambda_a)^2+c^2][(\\lambda_j-\\mu_a)^2+c^2]}\\,.\n\\end{equation}\nHence the leading order in the density expansion, i.e. the low-density limit, is obtained with\n\\begin{equation}\n\\left\\langle \\sigma( x,t) \\sigma( 0,0) \\right\\rangle\\underset{\\text{l.d.}}{\\sim} S_{0,0,1}\\,.\n\\end{equation}\nIt yields the following low-density limit of the density two-point function\n\\begin{equation}\\label{2pfdensityld}\n\\begin{aligned}\n & \\left\\langle \\sigma( x,t) \\sigma( 0,0) \\right\\rangle \\underset{\\text{l.d.}}{\\sim} \\frac{1}{L^2}\\sum_{\\lambda_a,\\mu_a}e^{ix(\\mu_a-\\lambda_a)+it(\\lambda_a^2-\\mu_a^2)}\\\\&\\times\\sum _{J_i,\\, i\\neq a}\\frac{1}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}\\prod_{j\\neq a}\\frac{c^2}{(\\lambda_j-\\lambda_a)^2+c^2}\\frac{c^2}{(\\lambda_j-\\mu_a)^2+c^2}\\cdot\n \\frac{4(\\lambda_a-\\mu_a)^2}{c^2L^2(\\mu_j-\\lambda_j)^2}e^{ix(\\mu_j-\\lambda_j)+it(\\lambda_j^2-\\mu_j^2)}\\,.\n\\end{aligned}\n\\end{equation}\n\n\n\n\\subsection {Interpretation: low-density limit of the density form factor}\nSimilarly to the field operator, the low-density limit of the density two-point function is defined in terms of a partial fraction decomposition of the form factor. In the field case, one was also able to obtain a low-density limit of the field form factor by considering dilute states, see Section \\ref{intuitive}. 
In the density case, we are going to follow a different route in order to show the usefulness of \\eqref{ldff}, by recovering the low-density limit of the density correlation \\eqref{2pfdensityld} from the low-density approximation of the field form factor \\eqref{ldff}.\n The rationale is that the density form factor can be obtained as a limit of a field two-point function between different eigenstates $\\pmb{\\lambda}$ and $\\pmb{\\mu}$ with $N$ roots\n\\begin{equation}\n\\frac{\\langle \\pmb{\\mu}|\\sigma(0)|\\pmb{\\lambda}\\rangle}{\\sqrt{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}}=\\underset{x\\to 0}{\\lim}\\, \\frac{\\langle \\pmb{\\mu}|\\psi^\\dagger( x,0) \\psi ( 0,0)|\\pmb{\\lambda}\\rangle}{\\sqrt{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}}\\,.\n\\end{equation}\nThis two-point function can itself be expressed as a Lehmann representation\n\\begin{equation}\n\\frac{\\langle \\pmb{\\mu}|\\psi^\\dagger( x,0) \\psi ( 0,0)|\\pmb{\\lambda}\\rangle}{\\sqrt{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}}=\\sum_{\\pmb{\\nu}}\\frac{\\langle \\pmb{\\nu}|\\psi(0)|\\pmb{\\mu}\\rangle^* \\langle \\pmb{\\nu}|\\psi(0)|\\pmb{\\lambda}\\rangle}{\\sqrt{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}\\langle \\pmb{\\nu}|\\pmb{\\nu}\\rangle}e^{ix(P(\\pmb{\\nu})-P(\\pmb{\\mu}))}\\,,\n\\end{equation}\nwhere $\\pmb{\\nu}$ is a state with $N-1$ roots. In the low-density limit, according to \\eqref{ldff}, for the field form factors not to vanish one has to have $N-1$ roots $\\lambda_i$ with exactly one $\\nu_j$ around at a distance ${\\cal O}(L^{-1})$, leaving a $\\lambda_a$ without $\\nu_j$'s around. The same holds for the $\\mu_i$'s with respect to the $\\nu_j$'s. This implies that in the low-density limit, there is also exactly one $\\mu_i$ around each $\\lambda_j$ at a distance ${\\cal O}(L^{-1})$ for $j\\neq a$, leaving a $\\mu_a$ without $\\nu_j$'s nor $\\lambda_j$'s around. Since $\\pmb{\\mu}$ and $\\pmb{\\lambda}$ are an input of the problem, these $\\mu_a$ and $\\lambda_a$ are fixed by the choices of $\\pmb{\\lambda}$ and $\\pmb{\\mu}$ (they are not free parameters like in the two-point function case with $\\pmb{\\lambda}=\\pmb{\\mu}$). Then using \\eqref{ldff} we obtain the following low-density limit\n\\begin{equation}\n\\begin{aligned}\n&\\frac{\\langle \\pmb{\\mu}|\\psi^\\dagger( x,0) \\psi ( 0,0)|\\pmb{\\lambda}\\rangle}{\\sqrt{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}}\\underset{\\text{l.d.}}{\\sim} \\\\\n&\\qquad\\qquad\\frac{e^{-ix\\mu_a}}{L}\\sum_{\\pmb{\\nu}}\\prod_{j\\neq a}\\frac{2c}{\\sqrt{(\\lambda_j-\\lambda_a)^2+c^2}}\\frac{2c}{\\sqrt{(\\mu_j-\\mu_a)^2+c^2}} \\frac{1}{L(\\nu_j-\\lambda_j)}\\frac{e^{ix(\\nu_j-\\mu_j)}}{L(\\nu_j-\\mu_j)}\\,.\n\\end{aligned}\n\\end{equation}\nThe Bethe equations allow us to write\n\\begin{equation}\n\\begin{aligned}\n\\nu_j-\\lambda_j&\\underset{\\text{l.d.}}{\\sim} \\frac{2\\pi}{L}(n_j+\\alpha_j(\\lambda_a))\\\\\n\\nu_j-\\mu_j&\\underset{\\text{l.d.}}{\\sim} \\frac{2\\pi}{L}(n_j+\\alpha_j(\\mu_a)-p_j)\\,,\n\\end{aligned}\n\\end{equation}\nwith $n_j,p_j$ integers. 
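The shifts $\\alpha_j$ have the same origin as in the decoupling argument of Section \\ref{fieldsection}: at leading order in ${\\cal D}$ the Gaudin matrix can be replaced by the identity, and then\n\\begin{equation}\nn_j+\\alpha_j(\\lambda_a)=n_j+\\frac{1}{2}+\\frac{1}{\\pi}\\arctan\\frac{\\lambda_j-\\lambda_a}{c}\\,,\n\\end{equation}\nwhere the half-odd-integer offset reflects the different parity of the Bethe numbers of the $(N-1)$-particle state $\\pmb{\\nu}$, and the arctangent is the scattering phase associated with the root $\\lambda_a$ of $\\pmb{\\lambda}$ that has no partner in $\\pmb{\\nu}$; the relation involving $\\alpha_j(\\mu_a)$ arises in the same way from the unpaired root $\\mu_a$ of $\\pmb{\\mu}$.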
The integer $p_j$ is related to $\\mu_j$ and $\\lambda_j$ through\n\\begin{equation}\n\\mu_j-\\lambda_j\\underset{\\text{l.d.}}{\\sim} \\frac{2\\pi}{L}(p_j+\\alpha_j(\\lambda_a)-\\alpha_j(\\mu_a))\\,,\n\\end{equation}\nwhich is a parameter of the problem that is not to be summed over. We obtain the following factorization\n\\begin{equation}\n\\begin{aligned}\n\\frac{\\langle \\pmb{\\mu}|\\psi^\\dagger( x,0) \\psi ( 0,0)|\\pmb{\\lambda}\\rangle}{\\sqrt{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}}\\underset{\\text{l.d.}}{\\sim} \\frac{e^{-ix\\mu_a}}{L}&\\prod_{j\\neq a}\\frac{c^2}{\\pi^2}\\frac{1}{\\sqrt{(\\lambda_j-\\lambda_a)^2+c^2}}\\frac{1}{\\sqrt{(\\lambda_j-\\mu_a+{\\cal O}(L^{-1}))^2+c^2}} \\\\\n&\\times\\sum_{n=-\\infty}^\\infty\\frac{e^{\\frac{2i \\pi x}{L}(n+\\alpha_j(\\mu_a)-p_j)}}{(n+\\alpha_j(\\lambda_a))(n+\\alpha_j(\\mu_a)-p_j)}\\,.\n\\end{aligned}\n\\end{equation}\nIn order to determine the thermodynamic limit of this matrix element we need to carry out the sums over the integers, which reduce to sums of the type\n\\begin{equation}\n\\sum_{n\\in\\mathbb{Z}}\\frac{e^{i\\tfrac{w}{L}(n+\\alpha)}}{n+\\alpha}\\,.\n\\end{equation}\nWe notice that $\\sum_{n\\in\\mathbb{Z}}\\frac{e^{iWn}}{n+\\alpha}$ is the Fourier series of a $2\\pi$-periodic function of $W$, and\n\\begin{equation}\n\\int_{-\\pi}^\\pi \\frac{\\pi}{\\sin \\pi\\alpha}e^{i\\pi\\alpha\\,\\text{sgn}\\,(W)-iW\\alpha} e^{-iWn}\\D{W}=\\frac{2\\pi}{n+\\alpha}\\,.\n\\end{equation}\nHence\n\\begin{equation}\n\\sum_{n\\in\\mathbb{Z}}\\frac{e^{iW(n+\\alpha)}}{n+\\alpha}=\\frac{\\pi}{\\sin \\pi\\alpha}e^{i\\pi\\alpha\\,\\text{sgn}\\,(W)}\\,,\\qquad\\text{for }-\\pi<W<\\pi\\,.\n\\end{equation}\nCarrying out the sums with this identity and taking the limit $x\\to 0$ gives the low-density limit \\eqref{lddensity} of the density form factor; the corresponding spectral sum in the thermodynamic limit then yields the density two-point function \\eqref{densityde}, valid for all $x$, $t$ and any $c>0$. It is the leading term in the expansion \\eqref{expdensitysigma}, and is the low-density limit of the dynamical correlations of the density.\n\n\\subsection{Bare particle-hole excitations\\label{which}}\nIn the spectral sum \\eqref{bigsumfield} the intermediate states $\\pmb{\\mu}$ can be seen as excited states above the averaging state $\\pmb{\\lambda}$. In this picture it is natural to \\textit{expand} the spectral sum \\eqref{bigsumfield} in terms of the number of \\textit{particle-hole excitations} that $\\pmb{\\mu}$ has over $\\pmb{\\lambda}$. Such an expansion consists in writing\n\\begin{equation}\n\\left\\langle \\sigma( x,t) \\sigma( 0,0) \\right\\rangle={\\cal D}^2+\\sum_{n\\geq 1}\\mathfrak{S}_n\n\\end{equation}\nwith\n\\begin{equation}\\label{bare}\n\\mathfrak{S}_n=\\sum _{ \\substack{\\pmb{\\mu}\\\\ |\\{I_a\\}\\cap \\{J_a\\}|=N-n}}\\frac {\\left| \\left\\langle \\pmb{\\mu} |\\sigma( 0) |\\pmb{\\lambda}\\right\\rangle \\right| ^{2}} {\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }e^{it\\left( E\\left( \\pmb{\\lambda}\\right) -E\\left( \\pmb{\\mu} \\right) \\right) +ix\\left( P\\left( \\pmb{\\mu}\\right) -P\\left( \\pmb{\\lambda}\\right) \\right) }\\,,\n\\end{equation}\nwhich is the spectral sum restricted to intermediate states $\\pmb{\\mu}$ that share $N-n$ Bethe numbers $J_a$ with the Bethe numbers $I_a$ of $\\pmb{\\lambda}$.
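For orientation, note that the excluded $n=0$ configuration is the single state $\\pmb{\\mu}=\\pmb{\\lambda}$, whose contribution is the disconnected piece\n\\begin{equation}\n\\frac{\\left|\\left\\langle \\pmb{\\lambda}|\\sigma(0)|\\pmb{\\lambda}\\right\\rangle\\right|^2}{\\left\\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\right\\rangle^2}={\\cal D}^2\\,,\n\\end{equation}\nnamely the constant term written above, while $\\mathfrak{S}_1$ collects the states obtained from $\\pmb{\\lambda}$ by displacing a single Bethe number.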
Ideally, each individual $\\mathfrak{S}_n$ would have a well-defined and finite thermodynamic limit $L\\to\\infty$ that could be represented as a multiple integral over root densities and hole densities\n\\begin{equation}\\label{multip}\n\\begin{aligned}\n\\underbrace{\\int_{-\\infty}^\\infty...\\int_{-\\infty}^\\infty}_{2n}&F_n(\\lambda_1,\\mu_1,...,\\lambda_n,\\mu_n)e^{it(E(\\pmb{\\lambda})-E(\\pmb{\\mu}))+ix(P(\\pmb{\\mu})-P(\\pmb{\\lambda}))}\\\\\n&\\qquad\\qquad\\qquad\\times\\rho(\\lambda_1)\\rho_h(\\mu_1)...\\rho(\\lambda_n)\\rho_h(\\mu_n)\\D{\\lambda_1}\\D{\\mu_1}...\\D{\\lambda_n}\\D{\\mu_n}\\,.\n\\end{aligned}\n\\end{equation}\nThe function $F_n(\\lambda_1,\\mu_1,...,\\lambda_n,\\mu_n)$ appearing in this expression would thus be identified as a \\textit{thermodynamic form factor} for $n$ particle-hole excitations\\cite{deNP15,deNP16,DNP18,panfil20}. This idea was backed by calculations of\n\\begin{equation}\n\\underset{\\mu\\to\\lambda}{\\lim}\\,F_1(\\lambda,\\mu)\\,,\n\\end{equation}\nthat is finite and well-defined in the thermodynamic limit, by various means \\cite{deNP16,cortescuberopanfil2}.\n\nHowever, this picture seems to be a priori contradicted by results of \\cite{granetessler20}, where the thermodynamic limit of the spectral sum \\eqref{bigsumfield} was computed exactly at order $c^{-2}$. At this order, the spectral sum \\textit{exactly} truncates to one- and two-particle-hole excitations, hence to $\\mathfrak{S}_1$ and $\\mathfrak{S}_2$. But it was found that these two separate sums in the thermodynamic limit individually \\textit{diverge} and \\textit{depend on the representative state} $\\pmb{\\lambda}$ of the root density, at order $c^{-2}$. Namely we have\n\\begin{equation}\n\\begin{aligned}\\label{s1s2}\n&\\mathfrak{S}_1=LA_1+f_1(\\rho,\\gamma_{-2})+{\\cal O}(L^{-1})+{\\cal O}(c^{-3})\\\\\n&\\mathfrak{S}_2=LA_2+f_2(\\rho,\\gamma_{-2})+{\\cal O}(L^{-1})+{\\cal O}(c^{-3})\\,,\n\\end{aligned}\n\\end{equation}\nwith $A_{1,2}$ some reals of order $c^{-2}$ and $f_{1,2}(\\rho,\\gamma_{-2})$ functions of the root density and the pair distribution function (see \\cite{granetessler20} for a precise definition). However, their sum $\\mathfrak{S}_1+\\mathfrak{S}_2$ is not divergent and depends only on $\\rho$, as we expect. To fit in the general picture \\eqref{multip}, the only solution would be that these divergences in $L$ are an artefact of the $1\/c$ expansion, i.e. that the finite-size corrections to $\\mathfrak{S}_{1,2}$ in the thermodynamic limit would be e.g. of the form $\\varphi(L\/c)$ with a function $\\varphi(x)\\to 0$ when $x\\to \\pm\\infty$. In the framework of the low-density expansion, one can investigate this issue since the calculations are performed non-perturbatively in $1\/c$, and compute the thermodynamic limit of $\\mathfrak{S}_{1}$ for a finite fixed $c$, in the low-density limit.\\\\\n\n\nTo that end, let us use the expression \\eqref{lddensity} for the low-density limit of the form factor to write\n\\begin{equation}\\label{lowdensquare}\n\\begin{aligned}\n\\frac{|\\langle \\pmb{\\mu}|\\sigma(0)|\\pmb{\\lambda}\\rangle|^2}{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}\\underset{\\text{l.d.}}{\\sim} \\frac{1}{L^2}&\\prod_{j\\neq a}\\frac{c^2}{(\\lambda_j-\\lambda_a)^2+c^2}\\frac{c^2}{(\\lambda_j-\\mu_a)^2+c^2}\\frac{4(\\mu_a-\\lambda_a)^2}{c^2L^2(\\mu_j-\\lambda_j)^2}\\,,\n\\end{aligned}\n\\end{equation}\nand assume that $\\pmb{\\mu}$ only involves one particle-hole excitation above $\\pmb{\\lambda}$. 
Hence for all $j\\neq a$, the Bethe numbers $J_j$ of $\\pmb{\\mu}$ are equal to the Bethe numbers $I_j$ of $\\pmb{\\lambda}$. In the low-density limit we then have\n\\begin{equation}\n\\forall j\\neq a,\\qquad\\mu_j-\\lambda_j\\underset{\\text{l.d.}}{\\sim} \\frac{2\\pi}{L}(\\alpha_j(\\lambda_a)-\\alpha_j(\\mu_a))\\,.\n\\end{equation}\nIt follows that in the thermodynamic limit, the low-density form factor squared \\eqref{lowdensquare} becomes\n\\begin{equation}\n\\frac{|\\langle \\pmb{\\mu}|\\sigma(0)|\\pmb{\\lambda}\\rangle|^2}{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}\\underset{\\text{l.d.}}{\\sim} \\frac{1}{L^2}\\exp[L\\phi(\\lambda_a,\\mu_a)+{\\cal O}(L^0)]\\,,\n\\end{equation}\nwith\n\\begin{equation}\\label{phi}\n\\phi(\\lambda,\\mu)=\\int_{-\\infty}^\\infty \\log \\left[\\frac{1}{c^2}\\frac{1}{1+\\tfrac{(\\nu-\\lambda)^2}{c^2}} \\frac{1}{1+\\tfrac{(\\nu-\\mu)^2}{c^2}}\\left( \\frac{\\mu-\\lambda}{\\arctan \\tfrac{\\mu-\\nu}{c}-\\arctan \\tfrac{\\lambda-\\nu}{c}}\\right)^2\\right] \\rho(\\nu)\\D{\\nu}\\,.\n\\end{equation}\nThe sign of the logarithm can be determined by applying the following inequality, valid for all real $u$,\n\\begin{equation}\\label{ineq}\n u^2\\geq \\sin^2 u\\,,\n\\end{equation}\nto the value\n\\begin{equation}\n u=\\arctan x-\\arctan y\\,,\n\\end{equation}\nfor real $x,y$. Indeed, using trigonometric relations, Eq.~\\eqref{ineq} is exactly\n\\begin{equation}\n\\left( \\frac{\\arctan x-\\arctan y}{x-y}\\right)^2\\geq \\arctan'(x)\\arctan'(y)\\,,\n\\end{equation}\nwith equality only for $x=y$. This allows one to deduce that the logarithm in \\eqref{phi} is always negative whenever $\\lambda\\neq \\mu$. One concludes that for $\\lambda\\neq \\mu$ we always have\n\\begin{equation}\n\\phi(\\lambda,\\mu)<0\\,,\n\\end{equation}\nwhile for $\\lambda=\\mu$ we have\n\\begin{equation}\n\\phi(\\lambda,\\lambda)=0\\,.\n\\end{equation}\nIt follows that, in the framework of a multiple integral representation in terms of bare particle-hole excitations in the thermodynamic limit \\eqref{multip}, the function $F_1(\\lambda,\\mu)$ would in fact be zero everywhere except at $\\mu=\\lambda$, where it takes the value $1$, as deduced from \\eqref{lowdensquare}. Namely, we would have\n\\begin{equation}\nF_1(\\lambda,\\mu)= \\begin{cases}0\\qquad \\text{if }\\lambda\\neq \\mu\\\\\n1\\qquad \\text{if }\\lambda= \\mu\n\\end{cases}+{\\cal O}({\\cal D})\\,.\n\\end{equation}\n\n\nThis analysis shows that the integral representation \\eqref{multip} in terms of 'bare' particle-hole excitations \\eqref{bare} is singular. Even if the leading behaviour of \\eqref{multip} at large space and time is indeed obtained from the region $\\lambda_1=\\mu_1$ of the $n=1$ term, where the function $F_1(\\lambda,\\mu)$ takes a finite value, the singular behaviour of this representation should be an obstacle to the computation of the subleading orders.\n\n\n\\subsection{Dressed particle-hole excitations\\label{which2}}\nLet us now investigate the nature of the states summed over to obtain the expression \\eqref{densityde} in the low-density limit. The double integral over the root density and the hole density accounts for one-particle-hole excitations with a \\textit{macroscopic} amplitude (i.e. the difference between the Bethe numbers is ${\\cal O}(L)$). The exponentials, however, arise from the product of a macroscopic number of one-dimensional sums over all the other remaining roots of the averaging state.
These take into account an \\textit{arbitrary} number of particle-hole excitations, but with only a \\textit{microscopic} or \\textit{mesoscopic} amplitude (i.e. the difference between the Bethe numbers can be ${\\cal O}(L^\\nu)$ for any $\\nu<1$). These configurations are represented in Figure \\ref{exfnp1k0}. They correspond to what is called a \\textit{dressed one-particle-hole excitation}. They entail expanding the spectral sum \\eqref{bigsumfield} according to\n\\begin{equation}\\label{dressedexp}\n\\left\\langle \\sigma( x,t) \\sigma( 0,0) \\right\\rangle={\\cal D}^2+\\sum_{n\\geq 1}\\mathfrak{S}^{\\rm dr}_n\\,,\n\\end{equation}\nwith\n\\begin{equation}\\label{sdr}\n\\sum_{m=1}^n\\mathfrak{S}^{\\rm dr}_m=\\sum _{ \\substack{\\pmb{\\mu}\\\\ \\exists \\tau\\text{ permutation of }\\{1,...,N\\},\\\\\\,\\# \\{a\\text{ s.t. }|I_a-J_{\\tau(a)}|={\\cal O}(L)\\} \\leq n}}\\frac {\\left| \\left\\langle \\pmb{\\mu} |\\sigma( 0) |\\pmb{\\lambda}\\right\\rangle \\right| ^{2}} {\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }e^{it\\left( E\\left( \\pmb{\\lambda}\\right) -E\\left( \\pmb{\\mu} \\right) \\right) +ix\\left( P\\left( \\pmb{\\mu}\\right) -P\\left( \\pmb{\\lambda}\\right) \\right) }\\,.\n\\end{equation}\n\n\n\\begin{figure}[H]\n\\begin{center}\n\\begin{tikzpicture}[scale=1]\n\\draw[->,blue] (3.5,0.25) arc (180: 0:2.5);\n\\draw[->,red] (0.5,0.25) arc (180: 0:0.25);\n\\draw[->,red] (5.,0.25) arc (180: 0:0.25);\n\\draw[->,red] (10.5,0.25) arc (0: 180:0.5);\n\\node at (-1,0) {.};\n\\draw[black] (-0.5,0) circle (3pt);\n\\node at (0,0) {.};\n\\draw[black] (0.5,0) circle (3pt);\n\\node at (1,0) {.};\n\\draw[black] (1.5,0) circle (3pt);\n\\draw[black] (2,0) circle (3pt);\n\\node at (2.5,0) {.};\n\\node at (3,0) {.};\n\\draw[black] (3.5,0) circle (3pt);\n\\draw[black] (4,0) circle (3pt);\n\\draw[black] (4.5,0) circle (3pt);\n\\draw[black] (5,0) circle (3pt);\n\\node at (5.5,0) {.};\n\\draw[black] (6,0) circle (3pt);\n\\node at (6.5,0) {.};\n\\node at (7,0) {.};\n\\draw[black] (7.5,0) circle (3pt);\n\\node at (8,0) {.};\n\\node at (8.5,0) {.};\n\\draw[black] (9,0) circle (3pt);\n\\node at (9.5,0) {.};\n\\draw[black] (10,0) circle (3pt);\n\\draw[black] (10.5,0) circle (3pt);\n\\draw[black] (11,0) circle (3pt);\n\\node at (11.5,0) {.};\n\\draw[black] (12,0) circle (3pt);\n\\node at (12.5,0) {.};\n\\filldraw[black] (-0.5,0) circle (2pt);\n\\filldraw[black] (1.,0) circle (2pt);\n\\filldraw[black] (1.5,0) circle (2pt);\n\\filldraw[black] (2,0) circle (2pt);\n\\filldraw[black] (8.5,0) circle (2pt);\n\\filldraw[black] (4,0) circle (2pt);\n\\filldraw[black] (4.5,0) circle (2pt);\n\\filldraw[black] (5.5,0) circle (2pt);\n\\filldraw[black] (6,0) circle (2pt);\n\\filldraw[black] (9,0) circle (2pt);\n\\filldraw[black] (7.5,0) circle (2pt);\n\\filldraw[black] (10,0) circle (2pt);\n\\filldraw[black] (9.5,0) circle (2pt);\n\\filldraw[black] (11,0) circle (2pt);\n\\filldraw[black] (12,0) circle (2pt);\n\\end{tikzpicture}\n\\end{center}\n\\caption{Sketch of a dressed one-particle-hole excitation: positions of the\nmomenta of the averaging state $\\pmb{\\lambda}$ (empty circles) and the\nintermediate state $\\pmb{\\mu}$ (filled circles) respectively, and position of \nthe holes (dots). 
In red are indicated the 'soft modes' corresponding to microscopic excitations, and in blue the only macroscopic excitation.} \n\\label{exfnp1k0}\n\\end{figure}\n\nLoosely speaking, $\\mathfrak{S}^{\\rm dr}_n$ includes $n$ macroscopic particle-hole excitations and any number of microscopic particle-hole excitations. Since these configurations are not disjoint, one requires the expression \\eqref{sdr} for a precise definition. In the low-density limit the full spectral sum truncates to $\\mathfrak{S}^{\\rm dr}_1$, and this expression is found to be finite and well-defined in the thermodynamic limit.\n\n\nWe note that these dressed particle-hole excitations are actually in principle what is computed in \\cite{deNP15,deNP16,DNP18}. But therein the 'soft modes' contribute as a numerical factor that multiplies the form factor. Here, within the low-density expansion, it is seen in \\eqref{densityde} that these soft modes actually carry an $x$ and $t$ dependence as well. Namely, we have a representation for $\\mathfrak{S}_n^{\\rm dr}$\n\\begin{equation}\n\\begin{aligned}\n\\underbrace{\\int_{-\\infty}^\\infty...\\int_{-\\infty}^\\infty}_{2n}&F_n^{x,t}(\\lambda_1,\\mu_1,...,\\lambda_n,\\mu_n)e^{it(E(\\pmb{\\lambda})-E(\\pmb{\\mu}))+ix(P(\\pmb{\\mu})-P(\\pmb{\\lambda}))}\\\\\n&\\qquad\\qquad\\qquad\\times\\rho(\\lambda_1)\\rho_h(\\mu_1)...\\rho(\\lambda_n)\\rho_h(\\mu_n)\\D{\\lambda_1}\\D{\\mu_1}...\\D{\\lambda_n}\\D{\\mu_n}\\,,\n\\end{aligned}\n\\end{equation}\nwhere the 'dressed thermodynamic form factor' $F_n^{x,t}(\\lambda_1,\\mu_1,...,\\lambda_n,\\mu_n)$ carries an $x,t$ dependence coming from the summation over the soft modes, although the soft modes do not modify the energy and momentum of the state in the thermodynamic limit. This dependence emerges from the double poles, which lift some finite-size corrections to order $L^0$. The function $F_1^{x,t}(\\lambda,\\mu)$ can be directly deduced from \\eqref{densityde}.\n\nFrom Sections \\ref{which} and \\ref{which2}, we conclude that the right expansion scheme of the spectral sum is an expansion in terms of dressed particle-hole excitations \\eqref{dressedexp}, in the sense that the truncated spectral sums $\\mathfrak{S}_n^{\\rm dr}$ are finite and well-defined in the thermodynamic limit, and are smooth functions of the macroscopic excited rapidities. Moreover, as shown in \\cite{granetessler20}, they admit a well-defined and uniform $1\/c$ expansion without any spurious divergences in $L$ coming from $c$-dependent finite-size effects. These two properties are not satisfied by the 'bare' particle-hole excitation expansion.\n\n\n\n\\section{Summary and conclusion}\nWe computed the dynamical two-point functions of the field and density operators averaged within a state with a small particle density ${\\cal D}$, given by Equations \\eqref{field} and \\eqref{densityde}. They are valid for an arbitrary interaction strength $c>0$ and for all space and time -- hence one can deduce the spectral function and dynamical structure factor in the full momentum-frequency plane, in the same regime of small ${\\cal D}$. This low-density limit is defined as the leading term in an expansion of the correlation functions obtained by decomposing the form factors into partial fractions. \n\nBesides the explicit expressions obtained in the low-density regime, this work also provides interesting insights into the nature of the states that contribute to the spectral sum, and into its possible expansions, as detailed in Sections \\ref{which} and \\ref{which2}.
The low-density regime is indeed naturally interpreted as a single dressed particle-hole excitation, i.e. a macroscopic particle-hole excitation accompanied by an arbitrary number of microscopic particle-hole excitations. In contrast, an integral representation in terms of 'bare' particle-hole excitations in the thermodynamic limit is found to be singular, in the sense that the 'thermodynamic form factor' for one-particle-hole excitations is non-zero only in the limit where the amplitude of the macroscopic excitation vanishes. Hence this work indicates that the right expansion of the spectral sum is in terms of dressed particle-hole excitations, as already suggested by the $1\/c$ expansion developed in \\cite{granetessler20}. Importantly, this dressing by microscopic particle-hole excitations also comes with an $x$ and $t$ dependence.\n\n\nThis partial fraction decomposition (PFD) framework allows in principle for a computation of the next orders, which constitutes the most natural direction of improvement of this work. The fact that the next orders can indeed be computed with the PFD has been shown in \\cite{GFE20} for a model that can be reformulated in terms of free fermions. In the Lieb-Liniger case, the interaction introduces technical but not fundamental difficulties, and we hope to be able to pursue this program in future works. \n\n\n\n\\paragraph{Acknowledgements}\nWe thank Fabian Essler and Jacopo De Nardis for helpful discussions. This work was\nsupported by the EPSRC under grant EP\/S020527\/1.