{"text":"\\section{Introduction}\nGraphite is an allotrope of carbon with a layered structure. \nThe structure results in strongly anisotropic thermal and electrical transport~\\cite{PhysRev.127.694} along with other unique properties such as self-lubrication~\\cite{PhysRevLett.92.126101} and high stability up to 4000~K~\\cite{JGRB:JGRB3482}. \nWith the added characteristic of flexibility, graphite sheets are industrial products that make use of these properties. \nGrafoil~\\cite{Grafoil} is one of the best-known commercial products based on this material, and is used in a wide variety of applications, e.g., sealing gaskets, thermal insulators, electrodes, and as a chemical reagent~\\cite{Chung1987}. \nIt consists of small natural graphite crystals (10--20~nm)~\\cite{Takayoshi2009} which are first powdered, then exfoliated at high temperature, and finally rolled under high pressure. \nIt is also widely used as an adsorption substrate for basic research on the two-dimensional physical and chemical properties of adsorbate thin films~\\cite{RevModPhys.79.1381} because of its atomically flat microcrystallite surfaces and large specific surface area. \nFor this purpose, there is another type of exfoliated graphite called ZYX~\\cite{ZYX}, synthesized from HOPG (highly oriented pyrolytic graphite) under rather moderate exfoliation and re-compression conditions, with a larger platelet size (100--200~nm). \n\nRecently, a new flexible graphite sheet, the pyrolytic graphite sheet (PGS)~\\cite{pgs,ADEM:ADEM201300418}, has been invented. \nPGS is a thin graphite sheet, 10--100~$\\mu$m in thickness, with a single-crystal-like structure, synthesized by heat decomposition of polymeric films. 
\nBecause of its extremely high in-plane thermal conductivity (2--5 times higher than that of copper at room temperature) and low density ($\\approx80$\\% of aluminum), PGS and its composites are being used for thermal management in electronic devices like smartphones~\\cite{ADEM:ADEM201300418}. \nIt is potentially useful for cryogenic applications, especially in space engineering.\nOne recent example is its use in a vibration-isolated thermal link for cryocoolers~\\cite{McKinley2016174}. \nHowever, so far little is known about the physical properties of PGS, including thermal transport at cryogenic temperatures.\n\nIn this article, we report results of crystal analysis and electrical and thermal transport measurements at 2--300\\,K for PGS. \nHere we focused on an uncompressed version of PGS since it has higher crystallinity and thus higher in-plane conductivity than compressed commercial PGS. \nBy measuring both electrical and thermal conductivities, we could deduce a general relationship between them for graphite family materials, for which the standard Wiedemann-Franz law~\\cite{ANDP:ANDP18531650802} is not applicable. \nOther characteristics important for application as an adsorption substrate, such as the nitrogen adsorption isotherm and real-space imaging of the morphology with various microscopes, will be published elsewhere~\\cite{part2_tbp}.\n\n\\section{Pyrolytic Graphite Sheet (PGS)}\nCommercial PGS~\\cite{pgs} (hereafter, cPGS) is made first by carbonizing a stack of polymer films a few $\\mu$m thick at $T \\lesssim 1000$~K, then by graphitizing the resultant foamed carbon precursor at $T \\approx3000$~K, and finally by compression (rubbing), which reduces the thickness by 30--50\\%. \nCompared to chemical vapor deposition, which is used to synthesize HOPG, this is a convenient mass-production method for thin graphite sheets of good crystallinity. 
\nPrevious transmission electron microscopy (TEM) observations~\\cite{10012555627} show that the cross-sectional structure of a graphite similar to that used in the present study is a laminate of ultra-thin crystalline graphite layers 6--7~nm thick, which corresponds to 16--20 graphene sheets.\nThe average lateral size of each layer is determined to be 10--100~$\\mu$m from electron channeling contrast imaging with a scanning electron microscope (SEM)~\\cite{10031183877}. \nIn general, thinner cPGS has higher crystallinity because, in the graphitization process, the liberated gas can escape more easily and the temperature distribution is more uniform. \n\nThe final compression procedure makes cPGS flexible (like paper) so as to be more useful in practical applications. \nHowever, it may break the lateral crystalline structure on large scales. \nTherefore, in this work, we mainly studied physical properties of uncompressed PGS (hereafter, uPGS), which is produced by exactly the same method as cPGS except for the absence of the final compression. \nAs a trade-off, uPGS is mechanically brittle and inflexible. \nThus, to shape it precisely, it is recommended to use a punch designed for cutting thin metal films~\\cite{nogamigk}. \n\nThe nominal thicknesses of uPGS studied here are 10, 17, 25, and 100~$\\mu$m.\nWe denote them as uPGS-10$\\mu$m, for example, in the following.\nTheir actual thicknesses measured by a micrometer are 19$\\pm$2, 29.7$\\pm$0.6, 56$\\pm$3, and 145$\\pm$4~$\\mu$m, respectively.\nFor comparison, we also studied properties of cPGS-10$\\mu$m, whose measured thickness is 13$\\pm$2~$\\mu$m. \n\n\\section{Crystalline structure and defects}\nOut-of-plane X-ray diffraction was measured for uPGS-17$\\mu$m with a powder X-ray diffractometer~\\cite{xrd} using Cu K$\\alpha_1$ emission. \nThe uPGS sample of $10\\times10$ mm$^2$ was glued onto a glass holder with GE~7031 varnish. 
\nSharp diffraction peaks from graphite are observed, indicating that PGS is made purely of graphite crystals (see Fig.~\\ref{xrdresults}).\nThe interplanar spacing $d_{002}$ is determined as $0.33583(7)$~nm from the peaks indexed as (002), (004), and (006) using the Nelson-Riley function~\\cite{doi:10.1080\/14786444508520959}.\nFrom the full width at half maximum (FWHM) of the rocking curve of the (002) peak at $2\\theta=26.346$~deg, the mosaic angle spread is determined as 8.2$\\pm$0.1~deg, as shown in the inset of Fig.~\\ref{xrdresults}. \nThis value is consistent with the mosaic angle spread ($10\\pm3$~deg) roughly estimated from real-space SEM imaging of a cross section of uPGS-17$\\mu$m~\\cite{part2_tbp}.\nIn-plane X-ray diffraction was also carried out (data not shown here).\nThe sample was a stack of 29 uPGS-100$\\mu$m sheets (13$\\times$5~mm$^2$ each) fixed to each other with epoxy glue (Stycast 1266).\nIn addition to the peaks from the regular spacing between graphene layers, such as (002), those from the in-plane honeycomb lattice, like (110), and peaks indicative of the three-dimensional graphite lattice, indexed as (101), (102), (103), and (112), are observed.\n$d_{002}$ is determined as $0.33592(6)$~nm from the (002), (004), and (006) peaks, and the in-plane lattice parameter $a$ is determined as $0.2463$~nm from the (100) and (110) peaks. \nAll these diffraction results agree very well with a previous study of pyrographite films~\\cite{doi:10.1063\/1.96827}.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.9\\columnwidth]{xrd_1}%\n\\caption{Out-of-plane X-ray diffraction spectrum for uPGS-17$\\mu$m. \n(Inset) Rocking curve of the (002) peak at $2\\theta=26.346$~deg, where the FWHM is 8.2~deg.\n\\label{xrdresults}}%\n\\end{figure}\n\nRaman spectra of cleaved surfaces of uPGS 17, 25, and 100~$\\mu$m thick were measured at a wavelength of 532~nm using a laser Raman microscope~\\cite{raman} with a fixed exposure time of 8~seconds. 
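The lattice-spacing analysis above (Bragg's law applied to the higher-order (00$l$) reflections, followed by a Nelson-Riley extrapolation) can be sketched in a few lines. The peak positions below are illustrative values close to those of graphite, not the measured ones, and the wavelength is the Cu K$\alpha_1$ value; the script is a minimal sketch, not the diffractometer's own analysis routine.

```python
import math

LAMBDA = 0.15406  # Cu K-alpha1 wavelength in nm

def apparent_d002(two_theta_deg, l):
    """Apparent d_002 from the (0 0 l) reflection via Bragg's law:
    lambda = 2 d sin(theta), with (0 0 2n) the n-th order of (002)."""
    theta = math.radians(two_theta_deg / 2.0)
    n = l // 2
    return n * LAMBDA / (2.0 * math.sin(theta))

def nelson_riley(theta_rad):
    """Nelson-Riley extrapolation function f(theta)."""
    c2 = math.cos(theta_rad) ** 2
    return 0.5 * (c2 / math.sin(theta_rad) + c2 / theta_rad)

def extrapolated_d002(peaks):
    """Least-squares fit of apparent d_002 vs f(theta); the intercept at
    f = 0 (theta -> 90 deg) is the systematic-error-corrected spacing."""
    xs, ys = [], []
    for two_theta, l in peaks:
        theta = math.radians(two_theta / 2.0)
        xs.append(nelson_riley(theta))
        ys.append(apparent_d002(two_theta, l))
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
             / sum((x - xm) ** 2 for x in xs))
    return ym - slope * xm  # intercept = extrapolated d_002

# Illustrative (2theta in deg, l) pairs near the graphite (002)/(004)/(006) peaks:
peaks = [(26.35, 2), (54.2, 4), (86.6, 6)]
d002 = extrapolated_d002(peaks)  # of order 0.34 nm
```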
\nFor comparison, cleaved surfaces of HOPG, Grafoil, and ZYX were also measured. \nThe most intense features in Raman spectroscopy of graphite are the G ($\\approx$1580~cm$^{-1}$) and G' ($\\approx$2710~cm$^{-1}$) peaks. \nThe relative intensity of the G peak to the G' peak, $I$(G)\/$I$(G'), is a good indicator of the number $n$ of graphene layers for $n \\lesssim 6$~\\cite{PhysRevLett.97.187401}. \nThe measured $I$(G)\/$I$(G') values for uPGS are similar to those of other graphites, which confirms that they are sufficiently thick graphites (see Table~\\ref{ratio}). \nThis is consistent with the fact that all of them have similar FWHM values of the G' band ($\\approx60$~cm$^{-1}$; not shown in the table). \nThe D band is known to appear if the surface contains edges or defects where the three-fold symmetry of the honeycomb lattice is broken~\\cite{rohtua}.\nThe D band signal was not detected in uPGS and HOPG within experimental error, indicating high crystallinity with immeasurably small densities of domain boundaries and defects~\\cite{doi:10.1063\/1.369027}. \n\n\\begin{table}[h]\n\\begin{center}\n\\caption{Intensity ratios $I$(G)\/$I$(G') and $I$(D)\/$I$(G) in Raman spectra for various graphite materials. 
All the surfaces were cleaved before the measurements.}\\label{ratio}\n\\begin{tabular}{rccc}\n\\hline \\hline\nmaterial & nominal thickness & $I$(G)\/$I$(G') & $I$(D)\/$I$(G) \\\\ \\hline\nuPGS& $100~\\mu$m & 3.2(1) & $<10^{-3}$ \\\\\n& $25~\\mu$m & 3.1(1) & $<10^{-3}$ \\\\\n& $17~\\mu$m & 3.1(1) & $<10^{-3}$ \\\\\n\\\\\nHOPG & --- & 3.3(2) & $<10^{-3}$ \\\\\n\\\\\nZYX & --- & 3.2(1) & 0.003(2) \\\\\n\\\\\nGrafoil & 130~$\\mu$m & 3.3(1) & 0.015(4) \\\\\n& 250~$\\mu$m & 3.3(1) & 0.033(4) \\\\\n\\hline \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\section{Electrical resistivity measurement}\nWe made in-plane ($\\rho_{\\parallel}$) and out-of-plane ($\\rho_{\\perp}$) electrical resistivity measurements for uPGS samples of four different thicknesses, i.e., 10, 17, 25, and 100~$\\mu$m, in the temperature range between 2 and 300\\,K.\nThey were carried out by the 4-terminal method using the AC transport and resistance options of Physical Properties Measurement System (PPMS) of Quantum Design, Inc.\nThe typical sample size is $0.5\\times9$~mm$^{2}$ for the $\\rho_{\\parallel}$ measurement and $3\\times3$~mm$^{2}$ for the $\\rho_{\\perp}$ one. \nGold lead wires of 50~$\\mu$m in diameter were glued to the samples with rubber-based carbon paste~\\cite{ucc} which adheres strongly to graphite. \n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.92\\columnwidth]{logrhopara}%\n\\caption{Temperature dependencies of in-plane electrical resistivity $\\rho_{\\parallel}$ of uPGS with various thicknesses (solid circles) and cPGS-10$\\mu$m (open circles). The arrows indicate peak temperatures. 
Data of Grafoil, ZYX~\\cite{niimi:4448} and natural graphite~\\cite{PhysRev.95.22} are also plotted.\\label{logrhopara}}%\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.97\\columnwidth]{logrhoperp}%\n\\caption{Temperature dependencies of out-of-plane electrical resistivity $\\rho_{\\perp}$ of uPGS with various thicknesses (solid circles) and cPGS-10$\\mu$m (open circles). Data of Grafoil and ZYX are also plotted~\\cite{niimi:4448}.\\label{logrhoperp}}%\n\\end{figure}\n\nResults of the $\\rho_{\\parallel}$ measurement are shown in Fig.~\\ref{logrhopara}. \n$\\rho_{\\parallel}$ values of the uPGS samples are in between those of exfoliated graphites (Grafoil and ZYX) and natural graphite.\nImportantly, thinner uPGS has lower $\\rho_{\\parallel}$ in order of thickness. \nThis is consistent with the fact that thinner uPGS has better crystallinity. \nNote that the variation of $\\rho_{\\parallel}$ with thickness, 3--5 times, is larger than the variation of density, twice at most. \n\nIn exfoliated graphites it is known that there are two different conduction mechanisms.\nThe first is metallic which dominates the low temperature behavior of $\\rho_{\\parallel}$ and $\\rho_{\\perp}$, and the second is variable range hopping (VRH) which dominates high temperature behavior~\\cite{PhysRevB.25.4167,niimi:4448}.\nAs a result, there exists a peak temperature ($T_{\\mathrm{peak}}$) in $\\rho$ vs. $T$ around 20~K that separates the two behaviors as shown in Fig.~\\ref{logrhopara}.\nWe observed a similar $T$ dependence for uPGS but with higher $T_{\\mathrm{peak}}$ (indicated by the arrows in the figure) again in order of thickness. 
\nAs the thickness decreases, the $T$ dependence of $\\rho_{\\parallel}$ becomes weaker above $T_{\\mathrm{peak}}$ and stronger below $T_{\\mathrm{peak}}$.\nThe measured in-plane conductivity $\\sigma_{\\parallel}$ ($= 1\/\\rho_{\\parallel}(T)$) is well described by the following equation below 150\\,K: \n\\begin{equation}\n\\label{sigmaeq}\n\\sigma=\\frac{1}{\\rho_{0}+AT}+\\sigma_{0}^{\\mathrm{h}}\\exp\\left[-\\left(\\frac{T_{0}}{T}\\right)^{\\alpha}\\right],\n\\end{equation}\nwhere the first term corresponds to the metallic channel ($\\sigma_{\\mathrm{metal}}$) and the second term to the hopping one. \nWithin the VRH model, \n\\begin{gather*}\nT_{0}=\\lambda^{3}\/D_{\\mathrm{F}}k_{\\mathrm{B}},\\\\\n\\alpha=1\/(d+1),\n\\end{gather*}\nfor hopping in $d$ spatial dimensions.\nHere $\\lambda$ and $D_{\\mathrm{F}}$ are the decay length of the localized electronic wave function and the density of states at the Fermi energy, respectively.\nThe hopping is presumably between neighboring microcrystallites across domain boundaries.\nThinner uPGS has a longer $\\lambda$ and a smaller residual resistivity $\\rho_{0}$, presumably because of the larger microcrystallite size and fewer crystalline defects. 
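As a concrete illustration of the two-channel model above (metallic plus hopping conduction), the short script below evaluates $\sigma(T)$ for a set of illustrative parameter values (not the fitted values for any uPGS sample) and locates the resulting peak in $\rho = 1/\sigma$ numerically, showing how the competition between the two channels produces a $T_{\mathrm{peak}}$.

```python
import math

# Illustrative parameter values (assumed for this sketch, not fitted to data):
RHO0    = 2e-6   # residual resistivity of the metallic channel (Ohm m)
A       = 1e-8   # slope of the metallic resistivity (Ohm m / K)
SIGMA0H = 4e5    # prefactor of the hopping channel (S/m)
T0      = 60.0   # characteristic hopping temperature (K)
ALPHA   = 1.0    # alpha ~ 1 (Arrhenius limit) as found for rho_parallel

def sigma(T):
    """Two-channel conductivity: metallic term plus hopping term."""
    metallic = 1.0 / (RHO0 + A * T)
    hopping = SIGMA0H * math.exp(-(T0 / T) ** ALPHA)
    return metallic + hopping

def resistivity_peak(T_lo=2.0, T_hi=150.0, steps=2000):
    """Grid-search the temperature where rho = 1/sigma is maximal."""
    best_T, best_rho = T_lo, 0.0
    for i in range(steps + 1):
        T = T_lo + (T_hi - T_lo) * i / steps
        rho = 1.0 / sigma(T)
        if rho > best_rho:
            best_T, best_rho = T, rho
    return best_T

T_peak = resistivity_peak()
```

With these assumed parameters the resistivity peaks at a few tens of kelvin; in the measurements, a smaller residual resistivity shifts the balance between the two channels and hence shifts $T_{\mathrm{peak}}$.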
\nThis explains the thickness dependence of $T_{\\mathrm{peak}}$.\nIt is noted that in our analysis $\\alpha$ is a fitting parameter, unlike in the previous works~\\cite{PhysRevB.25.4167, niimi:4448}.\nThe fits give $\\alpha \\approx 1.0$, which corresponds to simple Arrhenius-type conduction, for all the samples.\n\nFigure~\\ref{logrhoperp} shows results of the $\\rho_{\\perp}$ measurement.\nAgain, $\\rho_{\\perp}$ at a fixed temperature varies systematically with thickness, but in the opposite sense to $\\rho_{\\parallel}$, i.e., thinner uPGS has larger $\\rho_{\\perp}$.\nThis is naturally understood as follows.\nGraphite is a layered material with very weak interlayer coupling arising from the van der Waals interaction.\nThus the anisotropy of resistivity ($\\eta = \\rho_{\\perp}\/\\rho_{\\parallel} > 10^2$~\\cite{ANDP:ANDP19153531806,PhysRev.95.22}) is so large that it can easily be reduced if the system has a mosaic angle spread or a wavy laminar structure. \nThe $\\rho_{\\perp}$ data below 150\\,K can also be well fitted by Eq.~\\ref{sigmaeq} with $\\alpha\\approx 1\/2$, i.e., nearly one-dimensional hopping, consistent with the above argument on the large $\\eta$ (see Fig.~\\ref{vrhperp2}).\nIf we apply the same analysis to the previously reported $\\rho_{\\perp}$ data of Grafoil and ZYX in Refs.~\\cite{PhysRevB.25.4167, niimi:4448}, we obtain a similar result ($\\alpha=0.42$), although in those papers the data were analyzed assuming $\\alpha=1\/4$ (three-dimensional hopping). \n\nWe note that uPGS of any thickness has a kink in the $T$ dependence of $\\rho_{\\perp}$ at $T \\approx$ 250~K, above which the dependence changes randomly in every thermal cycle.\nThis results in a $\\pm$5\\% difference in resistivity at 300~K.\nThe mechanism behind this curious behavior is not known at present. 
\nThe morphology of uPGS, with its many microscopic inaccessible voids, may be changed either by thermal expansion or by desorption\/adsorption of gas confined in the voids in every cooling and warming cycle. \nThe $T$ dependence of $\\rho_{\\perp}$ is highly reproducible at $T < 250$~K for uPGS and in the whole $T$ range we studied for cPGS. \n\nWe now comment on the effects of the final compression used to produce cPGS from uPGS, from the viewpoint of electrical conductance. \nThe open symbols in Figs.~\\ref{logrhopara}~and~\\ref{logrhoperp} are results of $\\rho_{\\parallel}$ and $\\rho_{\\perp}$ measurements for cPGS-10$\\mu$m. \nCompared to uPGS-10$\\mu$m, the $\\rho_{\\parallel}$ value at room temperature is only slightly larger.\nHowever, it has a steeper variation down to $T_{\\mathrm{peak}}$, and $T_{\\mathrm{peak}}$ itself is lower.\nTherefore, the overall $T$ dependence of cPGS-10$\\mu$m is rather similar to those of the exfoliated graphites.\nThis would be a result of mixing between $\\rho_{\\parallel}$ and $\\rho_{\\perp}$ caused by the compression. \nThe same is true for $\\rho_{\\perp}$, where the cPGS-10$\\mu$m samples have much weaker $T$ variations.\nThis is again similar to the behavior of exfoliated graphite. \nIt should be noted that the magnitude of $\\rho_{\\perp}$, and sometimes even its $T$ dependence, differs from sample to sample in the case of cPGS (two different samples are shown in Fig.~\\ref{logrhoperp}), presumably because the compression damages the laminar structure. \nAlso, the $\\alpha$ values scatter widely, from 0.26 to 0.42. \n\n\\begin{figure}[htbp]\n\\includegraphics[width=0.97\\columnwidth]{vrhperp2}%\n\\caption{Temperature dependencies of the out-of-plane conductivity $\\sigma_{\\perp}$ of various uPGS samples plotted as the logarithm of $\\sigma_{\\perp}$ vs. $1\/T$. 
\nThe data can be fitted by Eq.~\\ref{sigmaeq}, a straight line with a slope determined by $\\alpha$ in this plot, much better with $\\alpha=1\/2$ (thin solid lines) than with the $\\alpha=1\/4$ expected from the 3D VRH model (see main text). \nThe inset is a schematic diagram of the one-dimensional inter-plane hopping.\n\\label{vrhperp2}}%\n\\end{figure}\n\n\\section{Thermal conductivity measurement}\nUsually, for metallic samples, it is possible to estimate the thermal conductivity ($\\kappa_\\mathrm{WF}$) from the electrical resistivity $\\rho$, which is more easily measured, through the Wiedemann-Franz (WF) law~\\cite{ANDP:ANDP18531650802}: \n\\begin{equation}\\label{WFeq}\n\\kappa_{\\mathrm{WF}}=\\frac{L_0 T}{\\rho}, \\quad L_0=2.44\\times10^{-8}\\,\\mathrm{W}\\Omega\\mathrm{K}^{-2}.\n\\end{equation}\nHowever, in the case of a semimetal such as graphite, Eq.~\\ref{WFeq} underestimates the true thermal conductivity ($\\kappa$) by several orders of magnitude at temperatures where thermal conduction by phonons plays an important role~\\cite{PhysRevB.31.6721}.\nThus we have measured the in-plane $\\kappa$ of uPGS-10$\\mu$m directly using the Thermal Transport option of the PPMS. \nThe sample, 7\\,mm wide and 8\\,mm long, was glued onto 4 gold-plated copper electrodes with silver paste.\nThe electrodes are fixed to a thin rectangular support rod made of Stycast 2850FT ($2\\times0.5\\times8$\\,mm$^{3}$).\nThe thermal conductance of the support rod is negligibly small at temperatures above 50\\,K and is less than half of the total conductance with the sample at lower temperatures.\nIt was carefully measured beforehand and subtracted from the total.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.97\\columnwidth]{PGSTT2ai}%\n\\caption{Measured in-plane thermal conductivities $\\kappa$ of uPGS-10$\\mu$m (this work), HOPG~\\cite{PhysRevB.31.6721}, natural graphite~\\cite{PhysRev.95.1095}, Grafoil~\\cite{UHER1980445}, and ZYX~\\cite{UHER1980445}. 
\nShaded areas represent $\\kappa$ of copper with residual resistance ratios of 50--500 and of aluminum alloys of different kinds (Aluminum 1100, 3003-F, 5083-O, 6061-T6, 6063-T5)~\\cite{marquardt2002cryogenic}.\nThe thermal conductivity estimated from the electrical conductivity using the Wiedemann-Franz law, $\\kappa_\\mathrm{WF}$, is indicated by the broken or dash-dotted lines for each specimen~\\cite{PhysRev.95.22,PhysRevB.30.1080,niimi:4448}. \n\\label{PGSTT2ai}}%\n\\end{figure}\n\nIn Fig.~\\ref{PGSTT2ai}, the measured $\\kappa$ data of uPGS-10$\\mu$m (closed circles) are shown with those of natural graphite (open circles)~\\cite{PhysRev.95.1095}, HOPG (solid line)~\\cite{PhysRevB.31.6721}, Grafoil~\\cite{UHER1980445}, and ZYX~\\cite{UHER1980445}. \n$\\kappa$ of uPGS-10$\\mu$m is more than one order of magnitude higher than those of ZYX and Grafoil in the whole $T$ range between 2 and 300\\,K.\nThis is a great advantage of this material, indicating a longer phonon mean free path and thus higher crystallinity. \nRemarkably, the $T$ dependencies of the three kinds of graphite are quite similar to each other.\nA peak in $\\kappa (T)$ around 150\\,K for uPGS-10$\\mu$m corresponds to the onset of Umklapp scattering.\nIt is in between the peak temperature ($T_{\\mathrm{peak}} = 100$\\,K) of natural graphite and HOPG and those ($T_{\\mathrm{peak}} \\approx 200$\\,K) of Grafoil and ZYX.\nAt $30 \\leq T \\leq 100$\\,K, the $T$ dependence is $\\kappa \\propto T^{2.01(2)}$, as expected for two-dimensional phonon conduction.\nIt changes to $\\kappa \\propto T^{2.55(2)}$ at lower temperatures ($2 \\leq T \\leq 30$\\,K), where the conductivity is lower than that of natural graphite by a factor of 4--10. \n\nCompared to typical metallic thermal conductors, uPGS-10$\\mu$m has a larger $\\kappa$ than copper at $T>60$\\,K and than aluminum alloys at $T>40$\\,K. 
\nThe $T$ bounds, above which uPGS can transfer heat faster, are extended down to 40\\,K and 20\\,K, respectively, if we consider the thermal diffusivity $\\kappa\/C_\\mathrm{vol}$, owing to the small density of graphite. \nHere $C_\\mathrm{vol}$ is the volumetric specific heat. \nBecause the density of graphite is 25\\% of that of copper and 80\\% of that of aluminum, $C_\\mathrm{vol}$ is always lower than those of these metals at any $T$ below 300\\,K~\\cite{doi:10.1021\/ja01623a006,marquardt2002cryogenic}. \nThus uPGS can be advantageous even at lower $T$ for specific purposes which require lighter weight and\/or smaller heat capacity. \n\nIn Fig.~\\ref{PGSTT2ai}, we also plotted $\\kappa_{\\mathrm{WF}}$ estimated from the measured $\\rho_{\\parallel}$ through the WF law (broken and dash-dotted lines). \nFor all types of the graphite materials, $\\kappa$ is much higher than $\\kappa_{\\mathrm{WF}}$. \n$\\kappa\/\\kappa_{\\mathrm{WF}}$ is $\\approx$500 at $T\\approx100$\\,K and slowly decreases with decreasing $T$, down to 3--4 at the lowest temperature. \nIn addition, the $T$ dependencies of $\\kappa_{\\mathrm{WF}}$ are quite different from the measured ones.\n\nFinally, it is interesting to note that the ratios $\\kappa \/ \\kappa_{\\mathrm{WF}}$ are rather similar for different graphites, particularly among uPGS-10$\\mu$m, Grafoil, and ZYX, throughout the whole $T$ range, as shown in Fig.~\\ref{ratioplot}.\nIn this sense, an electrical resistance measurement still provides a useful ``rough'' estimate of the thermal conductance for a variety of graphite materials. \nFrom this fact, we believe that uPGS-10$\\mu$m should have the highest in-plane thermal conductivity among the PGSs studied here, although we did not directly measure $\\kappa$ of the others. 
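The rough estimation described above can be reproduced in a few lines using the WF law $\kappa_{\mathrm{WF}} = L_0 T/\rho$. The resistivity and thermal conductivity values below are placeholders chosen only to mimic a $\kappa/\kappa_{\mathrm{WF}}$ ratio of order 500 near 100\,K; they are not the measured uPGS data.

```python
L0 = 2.44e-8  # Lorenz number (W Ohm K^-2)

def kappa_wf(rho, T):
    """Thermal conductivity implied by the Wiedemann-Franz law."""
    return L0 * T / rho

# Placeholder values (assumed, not measured): an in-plane resistivity of
# 1e-6 Ohm m at T = 100 K and a phonon-dominated kappa of 1.2e3 W/(m K)
# give a ratio of order 500, similar to that quoted in the text.
T = 100.0
rho_parallel = 1.0e-6     # assumed resistivity (Ohm m)
kappa_measured = 1.2e3    # assumed thermal conductivity (W/(m K))
k_wf = kappa_wf(rho_parallel, T)
ratio = kappa_measured / k_wf
```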
\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.97\\columnwidth]{ratioplot}%\n\\caption{Ratio between the measured in-plane thermal conductivity and that estimated from the measured electrical resistivity through the Wiedemann-Franz law, plotted as a function of temperature for uPGS-10$\\mu$m, HOPG~\\cite{PhysRevB.31.6721,PhysRev.95.22}, natural graphite~\\cite{PhysRev.95.1095,PhysRevB.30.1080}, Grafoil~\\cite{UHER1980445,niimi:4448}, and ZYX~\\cite{UHER1980445}.\n\\label{ratioplot}}%\n\\end{figure}\n\n\\section{Conclusions}\nPGS is a recently developed mass-producible pyrolytic graphite sheet with several potential applications, including a light-weight and highly conducting thermal link at cryogenic temperatures. \nWe made electrical and thermal conductivity measurements at 2$\\leq T \\leq$300\\,K, along with X-ray diffraction and Raman spectroscopy characterizations, on uncompressed PGSs (uPGSs) of various thicknesses between 10 and 100\\,$\\mu$m. \nThe thinnest uPGS (uPGS-10$\\mu$m) has the highest in-plane thermal conductivity ($\\kappa$) because of its highest crystallinity. \nFor the same reason, uPGS-10$\\mu$m and its compressed version (cPGS-10$\\mu$m) have a more than one order of magnitude higher $\\kappa$ than that of Grafoil, a commonly used flexible exfoliated graphite sheet, in the whole temperature range. \nIt is a better thermal conductor than copper at $T>$40--60\\,K and than aluminum alloys at $T>$20--40\\,K, as are natural graphite (NG) crystal and highly oriented pyrolytic graphite (HOPG), owing to the large thermal conduction by phonons. \nSince it is difficult to machine NG and HOPG into arbitrary thicknesses, uPGS\/cPGS have a great advantage over other materials for application as a thin and inflexible\/flexible cryogenic thermal link. 
\n\nWe also found that there is a general relationship between the thermal conductivity estimated from in-plane electrical conductivity through the Wiedemann-Franz law ($\\kappa_\\mathrm{WF}$) and $\\kappa$ in graphite family materials. \nThis is useful so that one can conveniently estimate thermal conductance of a cryogenic part made of those materials from the more easily obtained electrical conductance.\n\n\\section{Acknowledgements}\nWe are grateful to Yoshiya Sakaguchi, Hiroyuki Hase, and Makoto Nagashima of Automotive \\& Industrial Systems Company of Panasonic corporation for providing us the uPGS and cPGS samples. \n\nThis work was financially supported by Grant-in-Aid for Scientific Research (B) (Grant No.~15H03684), and Challenging Exploratory Research (Grant No.~15K13398) from JSPS.\nThe laser Raman microscope used in this work was supplied by MERIT program, The University of Tokyo.\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{#1}}\n\\newcommand{\\mysubsection}[1]{\\subsection{#1}}\n\\newcommand{\\mysubsubsection}[1]{\\subsubsection{#1}}\n\n\\date{March 27, 2012}\n\n\\author{Martin Pelikan \n\\and\nMark W. Hauschild \n\\and\nPier Luca Lanzi}\n\n\\begin{document}\n\n\\begin{titlepage}\n\\setlength{\\parindent}{0pt}\n\n\\noindent\n\\includegraphics[width=5in]{medal2.eps}\n\\vspace*{0.075in}\n{\\color{myblue}\n\\hrule height 2pt\n}\n\\vspace*{0.5in}\n\n{\\bf\n\\textsf{{\\large\nTransfer Learning, Soft Distance-Based Bias, and the Hierarchical BOA}}\n}\n\n\\vspace*{0.25in}\n\n\\textsf{Martin Pelikan, Mark W. Hauschild, and Pier Luca Lanzi}\n\n\\vspace*{0.25in}\n\n\\textsf{MEDAL Report No. 2012004}\n\n\\vspace*{0.25in}\n\n\\textsf{March 2012}\n\n\\vspace*{0.25in}\n\n{\\bf \\textsf{Abstract}} \n\n\\vspace*{0.075in}\n\n{\\small \\textsf{An automated technique has recently been proposed to transfer learning in the hierarchical Bayesian optimization algorithm (hBOA) based on distance-based statistics. 
The technique enables practitioners to improve hBOA efficiency by collecting statistics from probabilistic models obtained in previous hBOA runs and using the obtained statistics to bias future hBOA runs on similar problems. The purpose of this paper is threefold: (1)~test the technique on several classes of NP-complete problems, including MAXSAT, spin glasses and minimum vertex cover; (2)~demonstrate that the technique is effective even when previous runs were done on problems of different size; (3)~provide empirical evidence that combining transfer learning with other efficiency enhancement techniques can often yield nearly multiplicative speedups.}}\n\n\\vspace*{0.25in}\n\n{\\bf \\textsf{Keywords}}\n\n\\vspace*{0.075in}\n{\\small \\textsf{Transfer learning, inductive transfer, learning from experience, estimation of distribution algorithms, hierarchical Bayesian optimization algorithm, decomposable problems, efficiency enhancement.}}\n\n\\vfill\n\n\\noindent\n\\begin{minipage}{6in}\n{\\small \\textsf{Missouri Estimation of Distribution Algorithms Laboratory (MEDAL)\\\\\nDepartment of Mathematics and Computer Science, 321 ESH\\\\\nUniversity of Missouri--St. Louis\\\\\nOne University Blvd.,\nSt. Louis, MO 63121\\\\\nE-mail: \\url{medal@medal-lab.org}\\\\\nWWW: \\url{http:\/\/medal-lab.org\/}\\\\}}\n\\end{minipage}\n\n\\end{titlepage}\n\n\\author{\n{\\bf Martin Pelikan}\\\\\nMissouri Estimation of Distribution Algorithms Laboratory (MEDAL)\\\\\nDept. of Mathematics and Computer Science, 320 ESH\\\\\nUniversity of Missouri in St. Louis\\\\\nOne University Blvd., St. Louis, MO 63121\\\\\n\\url{martin@martinpelikan.net}\\\\\n\\url{http:\/\/martinpelikan.net\/}\n\\and\n{\\bf Mark W. Hauschild}\\\\\nMissouri Estimation of Distribution Algorithms Laboratory (MEDAL)\\\\\nDept. of Mathematics and Computer Science, 321 ESH\\\\\nUniversity of Missouri in St. Louis\\\\\nOne University Blvd., St. 
Louis, MO 63121\\\\\n\\url{mwh308@umsl.edu}\n\\and\n{\\bf Pier Luca Lanzi}\\\\\nDipartimento di Elettronica e Informazione\\\\\nPolitecnico di Milano\\\\\nPiazza Leonardo da Vinci, 32\\\\\nI-20133 Milano, Italy\\\\\n\\url{pierluca.lanzi@polimi.it}\n}\n\n\\maketitle\n\n\\begin{abstract}\nAn automated technique has recently been proposed to transfer learning in the hierarchical Bayesian optimization algorithm (hBOA) based on distance-based statistics. The technique enables practitioners to improve hBOA efficiency by collecting statistics from probabilistic models obtained in previous hBOA runs and using the obtained statistics to bias future hBOA runs on similar problems. The purpose of this paper is threefold: (1)~test the technique on several classes of NP-complete problems, including MAXSAT, spin glasses and minimum vertex cover; (2)~demonstrate that the technique is effective even when previous runs were done on problems of different size; (3)~provide empirical evidence that combining transfer learning with other efficiency enhancement techniques can often yield nearly multiplicative speedups.\n\\end{abstract}\n\n{\\bf Keywords:} Transfer learning, inductive transfer, learning from experience, estimation of distribution algorithms, hierarchical Bayesian optimization algorithm, decomposable problems, efficiency enhancement.\n\n\n\\mysection{Introduction}\nEstimation of distribution algorithms (EDAs)~\\cite{Hauschild:11c,Larranaga:02,Pelikan:02,Pelikan:12b} guide the search for the optimum by building and sampling probabilistic models of candidate solutions. The use of probabilistic models in EDAs provides a basis for incorporating prior knowledge about the problem and learning from previous runs in order to solve new problem instances of similar type with increased speed, accuracy and reliability~\\cite{Pelikan:book,Hauschild:12}. 
However, much prior work in this area was based on hand-crafted constraints on probabilistic models~\\cite{Muhlenbein:99a,Muhlenbein:02,Baluja:06,Schwarz:00*} which may be difficult to design or even detrimental to EDA efficiency and scalability~\\cite{Hauschild:09c}. Recently, Pelikan and Hauschild~\\cite{Pelikan:12} proposed an automated technique capable of learning from previous runs of the hierarchical Bayesian optimization algorithm (hBOA) in order to improve efficiency of future hBOA runs on problems of similar type. The basic idea of the approach was to (1)~design a distance metric on problem variables that correlates with the expected strength of dependencies between the variables, (2)~collect statistics on hBOA models with respect to the values of the distance metric, and (3)~use the collected statistics to bias model building in hBOA when solving future problem instances of similar type. While the distance metric is strongly related to the problem being solved, the aforementioned study~\\cite{Pelikan:12} described a rather general metric that can be applied to practically any problem with the objective function represented by an additively decomposable function. However, the prior study~\\cite{Pelikan:12} evaluated the proposed technique on only two classes of problems and it did not demonstrate several key features of this technique.\n\n\\begin{sloppy}\nThe purpose of this paper is threefold: (1)~Demonstrate the technique from ref.~\\cite{Pelikan:12} on other classes of challenging optimization problems, (2)~demonstrate the ability of this technique to learn from problem instances of one size in order to introduce bias for instances of another size, and (3)~demonstrate the potential benefits of combining this technique with other efficiency enhancement techniques, such as sporadic model building~\\cite{DBLP:journals\/gpem\/PelikanSG08}. 
As test problems the paper considers several classes of NP-complete additively decomposable problems, including MAXSAT, three-dimensional Ising spin glass, and minimum vertex cover. The new results together with the results published in prior work~\\cite{Pelikan:12} provide strong evidence of the broad applicability and great potential of this technique for learning from experience (transfer learning) in EDAs.\n\\end{sloppy}\n\nThe paper is organized as follows. Section~\\ref{section-hboa} outlines hBOA. Section~\\ref{section-transfer-learning} discusses efficiency enhancement of estimation of distribution algorithms using inductive transfer with main focus on hBOA and the distance-based bias~\\cite{Pelikan:12}. Section~\\ref{section-experiments} presents and discusses experimental results. Section~\\ref{section-conclusions} summarizes and concludes the paper.\n\n\\mysection{Hierarchical BOA}\n\\label{section-hboa}\nThe hierarchical Bayesian optimization algorithm (hBOA)~\\cite{Pelikan:book,Pelikan:01*} works with a population of candidate solutions represented by fixed-length strings over a finite alphabet. In this paper, candidate solutions are represented by $n$-bit binary strings. The initial population of binary strings is generated at random according to the uniform distribution over candidate solutions. Each iteration starts by selecting promising solutions from the current population; here binary tournament selection without replacement is used. Next, hBOA (1)~learns a Bayesian network with local structures~\\cite{Chickering:97} for the selected solutions and (2)~generates new candidate solutions by sampling the distribution encoded by the built network. To maintain useful diversity in the population, the new candidate solutions are incorporated into the original population using restricted tournament selection (RTS)~\\cite{Harik:95a}. The run is terminated when termination criteria are met. 
In this paper, each run is terminated either when the global optimum is found or when a maximum number of iterations is reached. \n\nhBOA represents probabilistic models of candidate solutions by Bayesian networks with local structures~\\cite{Chickering:97,Friedman:99}. A Bayesian network is defined by two components: (1)~an acyclic directed graph over problem variables specifying direct dependencies between variables and (2)~conditional probabilities specifying the probability distribution of each variable given the values of the variable's parents. A Bayesian network encodes a joint probability distribution as $p(X_1,\\ldots,X_n)=\\prod_{i=1}^n p(X_i|\\Pi_i)$ where $X_i$ is the $i$th variable (string position) and $\\Pi_i$ are the parents of $X_i$ in the underlying graph. \n\nTo represent conditional probabilities of each variable given the variable's parents, hBOA uses decision trees~\\cite{Pelikan:01*,Chickering:97}. Each internal node of a decision tree specifies a variable, and the subtrees of the node correspond to the different values of the variable. Each leaf of the decision tree for a particular variable defines the probability distribution of the variable given the condition specified by the path from the root of the tree to this leaf (the condition is given by the assignments of the variables along this path). \n\n\nTo build probabilistic models, hBOA typically uses a greedy algorithm that initializes the decision tree for each problem variable $X_i$ to a single-node tree that encodes the unconditional probability distribution of $X_i$. In each iteration, the model building algorithm tests how much a model would improve after splitting each leaf of each decision tree on each variable that is not already located on the path to the leaf. 
The algorithm executes the split that provides the most improvement, and the process is repeated until no more improvement is possible.\nModels are evaluated using the Bayesian-Dirichlet (BDe) metric with penalty for model complexity, which estimates the goodness of a Bayesian network structure given data $D$ and background knowledge $\\xi$ as $p(B|D,\\xi) = c p(B|\\xi) p(D|B,\\xi),$\nwhere $c$ is a normalization constant~\\cite{Chickering:97,Cooper:92}. The Bayesian-Dirichlet metric estimates the term $p(D|B,\\xi)$ by combining the observed and prior statistics for relevant combinations of variables~\\cite{Chickering:97}. \nTo favor simpler networks over more complex ones, the prior probability $p(B|\\xi)$ is often set to decrease exponentially fast with respect to the description length of the network's parameters~\\cite{Pelikan:book,Friedman:99}.\n\n\\mysection{Learning from Experience using Distance-Based Bias}\n\\label{section-transfer-learning}\n\n\nIn hBOA and other EDAs based on complex probabilistic models, building an accurate probabilistic model is crucial to success~\\cite{Larranaga:02,Pelikan:02,Hauschild:09c,Lima:11}. However, building complex probabilistic models can be time-consuming and it may require rather large populations of solutions~\\cite{Larranaga:02,Pelikan:02}. That is why much effort has been put into enhancing efficiency of model building in EDAs and improving quality of EDA models even with smaller populations~\\cite{Hauschild:12,Muhlenbein:02,Baluja:06,Hauschild:08,Hauschild:09b}.\nLearning from experience~\\cite{Pelikan:book,Hauschild:12,Pelikan:12,Hauschild:08,Hauschild:09b} represents one approach to addressing this issue.\n\nThe basic idea of learning from experience is to gather information about the problem by examining previous runs of the optimization algorithm and to use the obtained information to bias the search on new problem instances. 
The use of bias based on the results of other learning tasks is also commonplace in machine learning where it is referred to as {\\em inductive transfer} or {\\em transfer learning}~\\cite{Pratt:91,Caruana:97}. \n Since learning model structure is often the most computationally expensive task in model building, learning from experience often focuses on identifying regularities in model structure and using these regularities to bias structural learning in future runs. \n\nAnalyzing probabilistic models built by hBOA and other EDAs is straightforward. The more challenging facet of implementing learning from experience in practice is that one must make sure that the collected statistics are meaningful with respect to the problem being solved. \nThe key to making learning from experience work is to ensure that the pairs of variables are classified into a set of {\\em categories} so that the pairs in each category have a lot in common and can be expected to be either correlated or independent simultaneously~\\cite{Pelikan:12}. This section describes one approach to doing that~\\cite{Pelikan:12}, in which pairs of variables are classified into categories based on a predefined distance metric on variables.\n\n\\vspace*{-1.5ex}\n\\mysubsection{Distance Metric for Additively Decomposable Functions}\n\\vspace*{-0.5ex}\nFor many optimization problems, the objective function (fitness function) can be expressed as an additively decomposable function (ADF):\n\n\\vspace*{-1.8ex} \n\\begin{equation}\nf(X_1,\\ldots,X_n) = \\sum_{i=1}^m f_i(S_i),\n\\end{equation}\n\\vspace*{-1.8ex}\n\n\\noindent\nwhere $(X_1,\\ldots,X_n)$ are the problem's decision variables (string positions), $f_i$ is the $i$th subfunction, and $S_i\\subset \\{X_1,X_2,\\ldots,X_n\\}$ is the subset of variables contributing to $f_i$. 
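To make the ADF notation concrete, here is a minimal sketch that builds and evaluates such a function; the subsets and subfunctions below are made-up examples for illustration, not instances used in the paper:

```python
# Illustrative sketch of an additively decomposable function (ADF):
# f(x) = sum_i f_i(x restricted to S_i). The subsets S_i and the
# subfunctions f_i here are hypothetical examples.

def make_adf(subsets, subfunctions):
    """Return a callable f(x) that sums subfunction values over the subsets."""
    def f(x):
        return sum(f_i(tuple(x[j] for j in s))
                   for s, f_i in zip(subsets, subfunctions))
    return f

# Example: n = 4 bits, m = 3 overlapping subproblems of order 2.
subsets = [(0, 1), (1, 2), (2, 3)]
subfunctions = [lambda t: t[0] ^ t[1]] * 3   # reward disagreeing neighbours

f = make_adf(subsets, subfunctions)
print(f([0, 1, 0, 1]))  # -> 3
print(f([0, 0, 0, 0]))  # -> 0
```

Note that the subsets overlap, so the subproblems cannot be optimized independently; this is exactly the situation in which the definition of the subproblems and their interactions, not just their order, determines problem difficulty.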
While there may often exist multiple ways of decomposing the problem using additive decomposition, one would typically prefer decompositions that minimize the sizes of subsets $\\{S_i\\}$. Note that the difficulty of ADFs is not fully determined by the order of subproblems, but also by the definition of the subproblems and their interaction; even with subproblems of order only 2 or 3, the problem can be NP-complete. \n\nThe definition of a distance between two variables of an ADF used in this paper as well as ref.~\\cite{Pelikan:12} follows the work of Hauschild et al.~\\cite{Hauschild:12,Hauschild:09c,Hauschild:08}. Given an ADF, we define the distance between two variables using a graph $G$ of $n$ nodes, one node per variable. For any two variables $X_i$ and $X_j$ in the same subset $S_k$, we create an edge in $G$ between the nodes $X_i$ and $X_j$. Denoting by $l_{i,j}$ the number of edges along the shortest path between $X_i$ and $X_j$ in $G$ (in terms of the number of edges), we define the distance between two variables as\n\n\\vspace*{-1.6ex}\n\\[\nD(X_i,X_j) = \n\\left\\{\n\\begin{array}{ll}\nl_{i,j} & \\mbox{if a path between $X_i$ and $X_j$ exists,}\\\\\nn & \\mbox{otherwise.}\n\\end{array}\n\\right.\n\\]\n\\vspace*{-1.8ex}\n\n\\noindent\nThe above distance measure makes variables in the same subproblem close to each other, whereas for the remaining variables, the distances correspond to the length of the chain of subproblems that relate the two variables. The distance is maximal for variables that are completely independent (the value of a variable does not influence the contribution of the other variable in any way). \n\nSince interactions between problem variables are encoded mainly in the subproblems of the additive problem decomposition, the above distance metric should typically correspond closely to the likelihood of dependencies between problem variables in probabilistic models discovered by EDAs. 
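The distance $D(X_i,X_j)$ defined above reduces to a shortest-path computation on the graph $G$, which can be done with a breadth-first search per variable; a minimal sketch (the example subsets are hypothetical):

```python
from collections import deque

def adf_distances(n, subsets):
    """All-pairs distance D from the text: shortest-path length in the
    graph G connecting variables that share a subset S_k; n if no path."""
    adj = [set() for _ in range(n)]
    for s in subsets:
        for i in s:
            for j in s:
                if i != j:
                    adj[i].add(j)
    D = [[n] * n for _ in range(n)]
    for src in range(n):
        D[src][src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if D[src][v] == n:       # not yet reached
                    D[src][v] = D[src][u] + 1
                    queue.append(v)
    return D

# A chain of order-2 subproblems plus one isolated variable (index 4).
D = adf_distances(5, [(0, 1), (1, 2), (2, 3)])
print(D[0][3])  # -> 3 (chain of three subproblems)
print(D[0][4])  # -> 5 (no path: maximal distance n)
```

The isolated variable illustrates the second branch of the definition: completely independent variables receive the maximal distance $n$.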
Specifically, the variables located closer with respect to the metric should more likely interact with each other. This observation has been confirmed with numerous experimental studies across a number of important problem domains from spin glasses distributed on a finite-dimensional lattice~\\cite{Hauschild:09c,Pelikan:12} to NK landscapes~\\cite{Pelikan:12}. \n\n\\vspace*{-0.7ex}\n\\mysubsection{Distance-Based Bias Based on Previous Runs of hBOA}\n\\begin{sloppy}\nThis section describes the approach to learning from experience developed by Pelikan and Hauschild~\\cite{Pelikan:12} inspired mainly by the work of Hauschild et al.~\\cite{Hauschild:12,Hauschild:08,Hauschild:09b}. \nLet us assume a set $M$ of hBOA models from prior hBOA runs on similar problems. Before applying the bias based on prior runs in hBOA, the models in $M$ are first processed to generate data that will serve as the basis for introducing the bias. The processing starts by analyzing the models in $M$ to determine the number $s(m,d,j)$ of splits on any variable $X_i$ such that $D(X_i,X_j)=d$ in a decision tree $T_j$ for variable $X_j$ in a model $m\\in M$. Then, the values $s(m,d,j)$ are used to compute the probability $P_k(d,j)$ of a $k$th split on a variable at distance $d$ from $X_j$ in a dependency tree $T_j$ given that $k-1$ such splits were already performed in $T_j$:\n\n\\vspace*{-1.3ex}\n\\begin{equation}\nP_k(d,j) = \\frac{\\left|\\{m\\in M: s(m,d,j)\\geq k\\}\\right|}{\\left|\\{m\\in M: s(m,d,j)\\geq k-1\\}\\right|}\\cdot\n\\end{equation}\n\\vspace*{-1.3ex}\n\\end{sloppy}\n\n\n\\noindent\nRecall that the BDe metric for evaluating the quality of probabilistic models in hBOA contains two parts: (1) the prior probability $p(B|\\xi)$ of the network structure $B$, and (2) the posterior probability $p(D|B,\\xi)$ of the data (population of selected solutions) given $B$. 
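Estimating $P_k(d,j)$ from a set of prior models amounts to simple counting over the split statistics $s(m,d,j)$; a minimal sketch, with made-up split counts standing in for statistics extracted from actual hBOA models:

```python
def p_k(split_counts, d, j, k):
    """P_k(d, j): fraction of prior models with at least k splits at
    distance d in tree T_j, among those with at least k-1 such splits."""
    at_least_k = sum(1 for s in split_counts if s.get((d, j), 0) >= k)
    at_least_km1 = sum(1 for s in split_counts if s.get((d, j), 0) >= k - 1)
    return at_least_k / at_least_km1 if at_least_km1 else 0.0

# Made-up split counts s(m, d, j) for four prior models and one (d, j) pair.
models = [{(1, 0): 2}, {(1, 0): 1}, {(1, 0): 1}, {(1, 0): 0}]
print(p_k(models, d=1, j=0, k=1))  # -> 0.75 (3 of 4 models split at least once)
print(p_k(models, d=1, j=0, k=2))  # -> 0.333... (1 of those 3 split again)
```

Each successive $P_k$ is conditioned on the previous $k-1$ splits having occurred, which is why the denominator counts models with at least $k-1$ splits rather than all models.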
Pelikan and Hauschild~\\cite{Pelikan:12} proposed to use the prior probability distribution $p(B|\\xi)$ to introduce a bias based on distance-based statistics from previous hBOA runs represented by $P_k(d,j)$ by setting\n\\vspace*{-1.3ex}\n\\begin{equation}\n\\label{eq-pb-bias}\np(B|\\xi) = c \\prod_{d=1}^n \\prod_{j=1}^n \\prod_{k=1}^{n_s(d,j)} P^{\\kappa}_k(d,j),\n\\end{equation}\n\\vspace*{-1.8ex}\n\n\\noindent \nwhere $n_s(d,j)$ denotes the number of splits on any variable $X_i$ in $T_j$ such that $D(X_i,X_j)=d$, $\\kappa>0$ is used to tune the strength of bias (the strength of bias increases with $\\kappa$), and $c$ is a normalization constant.\nSince log-likelihood is typically used to evaluate model quality, when evaluating the contribution of any particular split, the change of the prior probability of the network structure can still be done in constant time. \n\n\n\\mysection{Experiments}\n\\label{section-experiments}\n\n\\mysubsection{Test Problems and Experimental Setup}\nThe experiments were done for three problem classes known to be difficult for most genetic and evolutionary algorithms:\n\\begin{inparaenum}[(1)]\n\\item Three-dimensional Ising spin glasses were considered with $\\pm J$ couplings and periodic boundary conditions~\\cite{Pelikan:06,young1998}; two problem sizes were used, $n=6\\times6\\times6=216$ spins and $n=7\\times7\\times7=343$ spins with 1,000 unique problem instances for each $n$. \n\\item Minimum vertex cover was considered for random graphs of fixed ratio $c$ of the number of edges and number of nodes~\\cite{Pelikan:07b,Weigt:01}; two ratios ($c=2$ and $c=4$) and two problem sizes ($n=150$ and $n=200$) were used with 1,000 unique problem instances for each combination of $c$ and $n$. 
\n\\item MAXSAT was considered for mapped instances of graph coloring with graphs created by combining regular ring lattices (with probability $1-p$) and random graphs (with probability $p$)~\\cite{Pelikan:03*,Gent:99}; 100 unique problem instances of $n=500$ bits (propositions) were used for each considered value of $p$, from $p=2^{-8}$ (graphs nearly identical to a regular ring lattice) to $p=2^{-1}$ (graphs with half of the edges random).\n\\end{inparaenum}\nFor more information about the test problems, we refer the reader to refs.~\\cite{Pelikan:06,Pelikan:07b,Pelikan:03*}.\n\nThe maximum number of iterations for each problem instance was set to the number of bits in the problem; according to preliminary experiments, this upper bound was sufficient.\nEach run was terminated either when the global optimum was found, when the population consisted of copies of a single candidate solution, or when the maximum number of iterations was reached. For each problem instance, we used bisection~\\cite{Pelikan:book,Sastry:01c} to ensure that the population size was within $5\\%$ of the minimum population size to find the optimum in 10 out of 10 independent runs. \nBit-flip hill climbing (HC)~\\cite{Pelikan:book} was incorporated into hBOA to improve its performance on all test problems except for the minimum vertex cover; HC was used to improve every solution in the population. For minimum vertex cover, a repair operator based on ref.~\\cite{Pelikan:07b} was incorporated instead.\nThe strength of the distance-based bias was tweaked using $\\kappa\\in\\{1,3,5,7,9\\}$.\n\n\nTo ensure that the same problem instances were not used for defining the bias as well as for testing it, 10-fold crossvalidation was used when evaluating the effects of distance-based bias derived from problem instances of the same size. 
\nFor each set of problems (by a set of problems we mean a set of random problem instances generated with one specific set of parameters), problem instances were randomly split into 10 equally sized subsets. In each round of crossvalidation, 1 subset of instances was left out and hBOA was run on the remaining 9 subsets of instances. The runs on the 9 subsets produced models that were analyzed in order to obtain the probabilities $P_k(d,j)$ for all $d$, $j$, and $k$. The bias based on the obtained values of $P_k(d,j)$ was then used in hBOA runs on the remaining subset of instances. The same procedure was repeated for each subset; overall, 10 rounds of crossvalidation were performed for each set of instances. When evaluating the effects of distance-based bias derived from problem instances of {\\em smaller} size, we did not use crossvalidation because in this case all runs had to be done on different problem instances (of different size). Most importantly, in every experiment, models used to generate statistics for hBOA bias were obtained from hBOA runs on {\\em different} problem instances.\nWhile the experiments were performed across a variety of computer architectures and configurations, the base case with no bias and the case with bias were always both run on the same computational node; the results of the two runs could therefore be compared against each other with respect to the actual CPU (execution) time. \n\n\n\nTo evaluate hBOA performance, we focus on the multiplicative speedup with respect to the execution time per run; the speedup is defined as a multiplicative factor by which the execution time improves with the distance-based bias compared to the base case. For example, an execution-time speedup of $2$ indicates that the bias allowed hBOA to find the optimum using only half the execution time compared to the base case without the bias. 
We also report the percentage of runs for which the execution time was strictly improved (shown in parentheses after the corresponding average multiplicative speedup). \n\nIn addition to the speedups achieved for various values of $\\kappa$, we examine the ability of the distance-based bias based on prior runs to apply across a range of problem sizes; this is done by using previous runs on instances of one size to bias runs on instances of another size. Since for MAXSAT, we only used instances of one size, this facet was only examined for the other two problem classes. \n\nFinally, we examine the combination of the distance-based bias based on prior runs and the sporadic model building~\\cite{DBLP:journals\/gpem\/PelikanSG08}. Specifically, we apply sporadic model building on its own using the model-building delay of $\\sqrt{n}\/2$ as suggested by ref.~\\cite{DBLP:journals\/gpem\/PelikanSG08}, and then we carry out a similar experiment using both the distance-based bias as well as the sporadic model building, recording the speedups with respect to the base case. Ideally, we would expect the speedups from the two sources to multiply. Due to the time requirements of solving MAXSAT, the combined effects were studied only for the remaining two problem classes. \n\n\n\n\\vspace*{-0.7ex}\n\\mysubsection{Results}\n\\vspace*{-0.48ex}\n\nThe results presented in tables~\\ref{table-sg-results}, \\ref{table-mvc-results} and~\\ref{table-maxsat-results} confirm the observation from ref.~\\cite{Pelikan:12} that the stronger the bias the greater the benefits, at least for the examined range of $\\kappa\\in\\{1,3,5,7,9\\}$ and most problem settings; that is why in the remainder of this discussion we focus on $\\kappa=9$. In all cases, the distance-based bias yielded substantial speedups of about~$1.2$ to~$3.1$. Best speedups were obtained for the minimum vertex cover. 
In all cases, performance on at least about $70\\%$ of problem instances was strictly improved in terms of execution time; in most cases, the improvements were observed in a much greater majority of instances. The speedups were substantial even when the bias was based on prior runs on problem instances of different, smaller size; in fact, the speedups obtained with such a bias were nearly identical to the speedups with the bias based on the instances of the same size. The results thus provide clear empirical evidence that the distance-based bias is applicable even when the problem instances vary in size, which was argued~\\cite{Pelikan:12} to be one of the main advantages of the distance-based bias over prior work in the area but was not demonstrated. Finally, the results show the nearly multiplicative effect of the distance-based bias and sporadic model building, providing further support for the importance of the distance-based bias; the combined speedups ranged from about 4 to more than 11. \n\n\\input{tables}\n\n\\mysection{Summary and Conclusions}\n\\label{section-conclusions}\nThis paper extended the prior work on efficiency enhancement of the hierarchical Bayesian optimization algorithm (hBOA) using a distance-based bias derived from prior hBOA runs~\\cite{Pelikan:12}. \nThe paper demonstrated that (1) the distance-based bias yields substantial speedups on several previously untested classes of challenging, NP-complete problems, (2) the approach is applicable even when prior runs were executed on problem instances of different size, and (3) the approach can yield nearly multiplicative speedups when combined with other efficiency enhancement techniques. 
In summary, the results presented in this paper together with the prior work~\\cite{Pelikan:12} provide clear evidence that learning from experience using a distance-based bias has great potential to improve efficiency of hBOA in particular and estimation of distribution algorithms (EDAs) in general.\n\n\nSeveral topics are of central importance for future work. \nThe approach should be adapted to other model-directed optimization techniques, including other EDAs\nand genetic algorithms with linkage learning. The approach should also be modified to introduce\nbias on problems that cannot be formulated using an additive decomposition in a straightforward\nmanner or for which such a decomposition is not practical. \nFinally, it is important to study the limitations of the proposed approach, and create\ntheoretical models to automatically tune the strength of the bias and predict expected speedups.\n\n\\section*{Acknowledgments} \nThis project was sponsored by the National Science Foundation under grants ECS-0547013 and IIS-1115352, and by the Univ. of Missouri--St. Louis through the High Performance Computing Collaboratory sponsored by Information Technology Services. Most experiments were performed on the Beowulf cluster maintained by ITS at the Univ. of Missouri in St. Louis and the HPC resources at the University of Missouri Bioinformatics Consortium. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.\n\n\\vspace*{-1.5ex}\n\\bibliographystyle{splncs}\n\\begin{small}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nHiggs inflation \\cite{Bezrukov:2007ep,Salopek:1988qh}\n\\cite{Bezrukov:2008ej,Barvinsky:2008ia,DeSimone:2008ei,Bezrukov:2009db,Barvinsky:2009fy,Barvinsky:2009ii,Bezrukov:2013fka} is perhaps the most\neconomical approach to cosmological inflation: it identifies the only\nknown scalar field in the Standard Model (SM) -- the Higgs boson -- with\nthe inflaton, the scalar field that drives the inflationary expansion\nin the very early universe. The action for Higgs inflation\nfeatures a non-minimal coupling of the Higgs field to\nthe Ricci scalar. If the non-minimal coupling $\\xi$ is large, it leads to\na successful period of chaotic inflation, producing primordial\npower spectra that fit the observational bounds~\\cite{Ade:2013uln}.\nHence, Higgs inflation can be considered as a \"natural\" scenario,\nsince it does not seem to require the introduction of new\nphysics to explain the inflationary expansion of the universe.\n\nHowever, the naturalness of the Higgs inflation scenario has been\nunder debate. Refs.~\\cite{Barbon:2009ya,Burgess:2009ea} used\npower-counting techniques to determine the energy scale $\\Lambda$\nat which perturbation theory breaks down, thus determining the range\nof validity of Higgs inflation. It was claimed that this cutoff scale\n$\\Lambda$ lies dangerously close to the energy scale of inflation,\nthereby questioning the naturalness of Higgs inflation. 
Since then\nmany works have appeared claiming that Higgs inflation is\n\"natural\"~\\cite{Lerner:2009na,Ferrara:2010in}\nor \"unnatural\"~\\cite{Burgess:2010zq,Hertzberg:2010dc}.\nPerhaps the most complete treatment has been done in Ref.~\\cite{Bezrukov:2010jz}.\nIt was found that $\\Lambda$ is generally field dependent and lies above the\ntypical energy scales in different regions, such that the perturbative\n(semiclassical) expansion is valid in Higgs inflation.\n\nAlthough there seems to be a consensus about the cutoff scale\nas computed in Ref.~\\cite{Bezrukov:2010jz} (however,\nsee the recent work~\\cite{George:2013iia}~\\footnote{\nThe results presented in Ref.~\\cite{George:2013iia}\nuse different techniques and arrive at results that are consistent with\nthose presented in this paper and earlier in Ref.~\\cite{Weenink:2013oqa}.}),\nwe revisit the computation of the cutoff in this work.\nThe reason is that there are some important aspects\nthat have not been fully taken into account. The most important\naspect is that General Relativity contains a large diffeomorphism symmetry,\nwhich when truncated resembles the symmetry of a gauge theory.\nThis means that, like in QED, some of the degrees of freedom in the action are\nactually not physical. As a consequence, some of\nthe interaction vertices obtained after a na\\\"\\i ve perturbative\nexpansion of the action are gauge dependent, and any conclusion that one arrives at\nby using these vertices can be a gauge artifact.\nMoreover, it is possible that there are\nadditional vertices that conspire to cancel dominant\nperturbative contributions. 
Accounting for these aspects can raise the cutoff scale.\nEven though it is possible to determine the physical cutoff scale\nwithin a gauge dependent formulation,\nby far the simplest and most reliable way\nto determine this scale is to use the physical vertices,\nwhich can be obtained from the perturbative\naction in a manifestly gauge invariant way (an alternative is to completely fix the gauge freedom\nand take account of contributions from {\\it all} vertices).\nThis formulation of the action in terms of \\textit{gauge invariant perturbations}\nhas been found for a non-minimally coupled scalar field\nup to third order in perturbations\nin Refs.~\\cite{Weenink:2010rr,Prokopec:2012ug,Prokopec:2013zya}.\nIn this work we use these previously found results\nin order to demonstrate that the cutoff scale for physical, that is, gauge\ninvariant perturbations is always $\\geq M_P$.\nTo be more precise, we find that\n\\begin{eqnarray}\n\\left(\\frac{\\Lambda}{a}\\right)_J \\gtrsim \\sqrt{M_P^2+\\xi\\phi^2}\n\\,,\\qquad\n\\left(\\frac{\\Lambda}{a}\\right)_E \\gtrsim M_P\n\\,,\n\\label{cutoffscalesJordanEinstein}\n\\end{eqnarray}\nwhere $a_J$ and $\\phi$ are the background scale factor and scalar field\nin the Jordan frame, and $a_E$ is the Einstein frame scale factor.\nThe extra scale factors have been overlooked in previous computations\nof the cutoff scale. 
They appear since $\\Lambda^{-1}$ is a comoving scale,\nand therefore in an expanding universe the corresponding physical length scale,\n$a\/\\Lambda$, always includes a scale factor $a$.\nThe cutoffs in Jordan and Einstein frame are different, simply\nbecause $\\Lambda$ is a dimensionful quantity which differs between\nthe frames, just like the effective Planck mass.\nIf $M_P$ is the energy scale where quantum gravity kicks in\nin the Einstein frame, then $M_{P,J}\\equiv \\sqrt{M_P^2+\\xi\\phi^2}$ can\nbe identified with the scale of quantum gravity in the Jordan frame.\nThus in either frame the perturbative (semiclassical) treatment is\nvalid all the way up to the scale at which gravity becomes strong.\nThis means that Higgs inflation is perfectly natural and that\nno new physics is necessary to explain the inflationary expansion\nof the universe and the anisotropies in the CMB.\n\nIn order to arrive at the physical cutoff scales~\\eqref{cutoffscalesJordanEinstein},\nwe first briefly discuss Higgs inflation and physically equivalent\nframes in section~\\ref{sec:Higgs inflation}. In section~\\ref{sec: the naturalness debate}\nwe review the naturalness debate. In section~\\ref{sec: gauge invariance}\nwe discuss the concept of gauge and frame dependence in cosmology\nand quote from Ref.~\\cite{Prokopec:2013zya}\nthe action for gauge and frame invariant cosmological perturbations.\nFinally we compute the cutoff scale for physical\nperturbations in section~\\ref{sec: cutoffscale} and conclude in section~\\ref{sec:Discussion}.\n\n\n\\section{Higgs inflation}\n\\label{sec:Higgs inflation}\n\nUnfortunately, inflation in the pure SM\nhas been ruled out by observations of the CMB, which have put tight constraints on\nthe inflationary potential. 
However, these constraints\ncan be greatly relaxed when the Higgs field $\\mathcal{H}$ is quadratically coupled\nto the Ricci scalar $R$ by a large non-minimal coupling $\\xi \\sim 10^4$\n\\cite{Futamase:1987ua,Fakir:1990eg,Komatsu:1999mt,Tsujikawa:2004my}.\nThe pure scalar and gravitational part of the action reads\n\\begin{equation}\nS = \\int d^4x \\sqrt{-g}\n\\biggl\\{\n\\frac12 M_P^2 R +\\xi\\mathcal{H}^{\\dagger}\\mathcal{H} R\n - g^{\\mu\\nu} (\\partial_{\\mu}\\mathcal{H})^{\\dagger}(\\partial_{\\nu}\\mathcal{H})\n -\\lambda \\left(\\mathcal{H}^{\\dagger}\\mathcal{H}-\\frac{v^2}{2}\\right)^{\\!2}\n\\biggr\\}\n\\,.\n\\label{SMHiggsaction}\n\\end{equation}\nHere the metric signature is ${\\rm sign}[g_{\\mu\\nu}]=(-,+,+,+)$, the Higgs self-coupling is\n$\\lambda$ and the Higgs vacuum expectation value (vev) is $v=246~\\rm{GeV}$.\nAlthough the complex doublet $\\mathcal{H}$ contains four degrees of freedom,\nthree of those can be absorbed by the gauge bosons (not shown in Eq.~\\eqref{SMHiggsaction}).\nIn unitary gauge the remaining scalar degree of freedom with non-zero vev\ncan be parametrized by $\\mathcal{H}=(0,\\Phi\/\\sqrt{2})$,\nsuch that the scalar action takes the Jordan frame form\n\\begin{equation}\nS = \\frac12 \\int d^4x \\sqrt{-g}\n\\Bigl\\{\nR F(\\Phi) -g^{\\mu\\nu} (\\partial_{\\mu}\\Phi)(\\partial_{\\nu}\\Phi)\n-\\frac{\\lambda}{2}(\\Phi^2-v^2)^2\n\\Bigr\\}\n\\,,\n\\label{Jordanframeaction}\n\\end{equation}\nwith $F(\\Phi)=M_P^2+\\xi\\Phi^2$. The claim is now that\na successful period of (chaotic) inflation is possible\nwhen the non-minimal coupling parameter is large, $\\xi \\sim 10^4$.\nSuccessful means that the model's predictions for the primordial\npower spectra of scalar and tensor perturbations\nagree with the observational constraints on these spectra.\nThe current 1$\\sigma$ constraints on the scalar spectral index\nand the tensor-to-scalar ratio are $n_s = 0.9603\\pm 0.0073$\nand $r < 0.11$, respectively~\\cite{Ade:2013uln}. 
On the theory\nside, it is known that in a minimally coupled model ($\\xi=0$)\nboth $n_s-1$ and $r$ are proportional to slow-roll parameters.\nMoreover, since the Hubble parameter $H$ does not directly couple\nto the time dependent expectation value of the scalar field\n$\\langle\\Phi\\rangle =\\phi(t)$ in the Friedmann equations,\nit is possible to express the slow-roll parameters in terms\nof the slope of the potential. Hence, in a minimally coupled\nmodel the shape of the scalar potential provides an intuition\nfor the successful realization of inflation. For instance,\na quartic potential is excluded by CMB observations~\\cite{Ade:2013uln}\nbecause it is insufficiently flat in the inflationary regime,\nwhich supports the previously mentioned statement that inflation is\nnot possible in the pure (\\textit{i.e.} minimally coupled) SM.\n\nUnfortunately, we do not have the luxury of such an intuition\nin the case of (large) non-minimal coupling $\\xi \\gg 1$.\nThe mixing of the gravitational and matter part of the action\nprevents us from expressing the slow-roll parameters in terms\nof the potential. Worse, it is not clear what are the slow-roll\nparameters in a non-minimally coupled model,\nthat is, what are the parameters that remain small\nduring the inflationary period, and neither is it clear how\nthe primordial power spectra depend on these small parameters.\nThe most straightforward way to see whether or not Higgs inflation\nworks is therefore to compute the primordial power spectra\nfor non-minimal coupling from scratch, which involves a derivation of the quadratic\naction for scalar and tensor perturbations in the Jordan frame~\\cite{Weenink:2010rr}.\nFortunately we can avoid this exercise\nby making use of a clever trick. 
If we redefine the metric,\nscalar field and scalar potential in the action as follows~\\cite{Weenink:2010rr},\n\\begin{align}\ng_{\\mu\\nu,E}&=\\frac{F}{M_P^2} g_{\\mu\\nu}\n\\nonumber\\\\\n\\left(\\frac{d\\Phi_E}{d\\Phi}\\right)^2\n&=\\frac{M_P^2}{F}\\left(1+\\frac{3}{2}\\frac{F^{\\prime 2}}{F}\\right)\n\\nonumber\\\\\nV_E(\\Phi_E)&=\\frac{M_P^4}{F^2}V(\\Phi)\n\\,,\n\\label{Conformaltransformationmetricscalar}\n\\end{align}\nwhere $F=F(\\Phi)$, $F'=dF\/d\\Phi$ and $V(\\Phi)=\\frac14 \\lambda (\\Phi^2-v^2)^2$,\nwe find that the action \\eqref{Jordanframeaction} becomes\n\\begin{equation}\nS_E = \\frac12 \\int d^4x \\sqrt{-g_E}\n\\Bigl\\{\nM_P^2 R_E -g_E^{\\mu\\nu} \\partial_{\\mu}\\Phi_E\\partial_{\\nu}\\Phi_E-2V_E(\\Phi_E)\n\\Bigr\\}\n\\,.\n\\label{Einsteinframeaction}\n\\end{equation}\nSince the metric has been rescaled by a factor it is commonly said\nthat we are in another conformal frame; the specific frame for which\nthe scalar field is minimally coupled to the Ricci scalar\nis called the Einstein frame, and quantities in this frame are indicated by a subscript $E$.\nThus the action has been rewritten\nto the more familiar minimally coupled form. The crucial point\nis that the Jordan and Einstein frame are related by\nfield redefinitions \\eqref{Conformaltransformationmetricscalar}, and hence the\ntwo formulations are physically equivalent. Nature is indifferent\nto whether we use one set of variables to describe her phenomena,\nor a different, but related, set. Physical observables should\ntherefore be invariant with respect to the frame in which you compute\nthem~\\cite{Flanagan:2004bz}. 
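To see explicitly what the last relation in Eq.~\eqref{Conformaltransformationmetricscalar} implies at large field values, one can insert $F=M_P^2+\xi\Phi^2$ and expand (a standard check, shown here for orientation):

```latex
V_E(\Phi_E) \,=\, \frac{M_P^4}{(M_P^2+\xi\Phi^2)^2}\,
\frac{\lambda}{4}\bigl(\Phi^2-v^2\bigr)^2
\;\longrightarrow\;
\frac{\lambda M_P^4}{4\xi^2}
\left[1-\frac{2M_P^2}{\xi\Phi^2}+\ldots\right]
\qquad (\xi\Phi^2\gg M_P^2\gg\xi v^2).
```

The Einstein frame potential thus approaches a constant plateau of height $\lambda M_P^4/(4\xi^2)$ in the large field limit, which is what makes slow-roll inflation possible in this frame.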
Specifically, the primordial power spectra for scalar and tensor\nperturbations (or the predicted values of $n_s$ and $r$) are the\nsame whether you compute them in the Jordan frame using variables\n$g_{\\mu\\nu}$ and $\\Phi$, or in the Einstein frame using $g_{\\mu\\nu,E}$\nand $\\Phi_E$ related to the first set by Eqs.~\\eqref{Conformaltransformationmetricscalar}.\nThis was first demonstrated in Refs.~\\cite{Makino:1991sg,Fakir:1992cg},\nsee also Ref.~\\cite{Weenink:2010rr}. Hence, the more familiar and more intuitive\nEinstein frame formulation can be used to check the viability of\nHiggs inflation \\cite{Bezrukov:2007ep,Salopek:1988qh}.\nIn the Einstein frame it is intuitively clear that Higgs inflation\nworks: the potential becomes exponentially flat in the large field limit.\nPredictions for the spectral index and tensor-to-scalar ratio in Higgs inflation\nare $n_s\\simeq 0.97$ and $r \\simeq 0.0032$, and are\ntherefore well within the observational bounds.\n\n\\section{The naturalness debate}\n\\label{sec: the naturalness debate}\n\n\nAt first sight Higgs inflation is a very attractive scenario, because\nit does not seem to require the introduction of new physics\nto explain the exponential expansion of the early universe.\nHowever, this has been questioned by Refs.~\\cite{Burgess:2009ea,Barbon:2009ya},\nwho looked at the ultraviolet cutoff scale $\\Lambda$ in Higgs inflation.\n$\\Lambda$ indicates the energy scale above which scattering amplitudes\nviolate the unitarity bound, and therefore determines when perturbation\ntheory breaks down. 
Such a cutoff can be found by power-counting techniques\nfor which one can use the following recipe:\n\\begin{enumerate}\n\\item Perform a perturbative expansion of the fields in the action.\n\\item Rescale the fields such that their kinetic terms are canonically normalized.\n\\item Read off the cutoff scale from the interaction terms of dimension $D>4$, which\nare suppressed by $\\Lambda^{4-D}$.\n\\end{enumerate}\nIn Refs.~\\cite{Burgess:2009ea,Barbon:2009ya} the cutoff scale was obtained\nby expanding the scalar field around its vev, $\\Phi= v+\\varphi$,\nand the metric around Minkowski space, $g_{\\mu\\nu}=\\eta_{\\mu\\nu} + h_{\\mu\\nu}\/M_P$,\nwhere the normalization $M_P^{-1}$ is chosen such that the gravitational\nterms are canonically normalized.\nIn the Jordan frame, the cutoff follows from the expansion of the\n$\\xi \\mathcal{H}^{\\dagger} \\mathcal{H} R$ term, which gives the\n5-dimensional interaction $(\\xi\/M_P) \\varphi^2 \\Box h$. Reading off\nthe cutoff scale gives $\\Lambda = M_P\/\\xi$. In the Einstein frame\nthe cutoff follows from a small-field expansion of the potential,\nwhich gives $V_E=\\frac14\\lambda \\varphi_E^4-\\lambda(\\xi\/M_P)^2\\varphi_E^6+\\ldots$.\nThe dimension 6 terms are due to the small-field relation between Jordan and Einstein\nframe fields $\\Phi \\simeq \\Phi_E[1 - (\\xi\\Phi_E\/M_P)^2]$.\nAgain, the cutoff scale from the dimension 6 term is $\\Lambda=M_P\/\\xi$.\nThe breakdown of perturbation theory may signal the appearance of new\nphysics, for example higher dimensional operators suppressed by $\\Lambda$,\nwhich solve the unitarity problems at energy scales above $\\Lambda$.\nNow, the point is that the cutoff scale $\\Lambda$ lies\ndangerously close to the energy scale of inflation, characterized\nby the Einstein frame Hubble parameter $H_E \\simeq \\sqrt{\\lambda}M_P\/\\xi$.\nThese higher dimensional operators may therefore enter the inflationary\npotential and affect inflationary predictions, or worse, spoil\nHiggs inflation. 
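In fact, the ratio of the two scales is controlled by the Higgs self-coupling alone,\n\\begin{equation}\n\\frac{H_E}{\\Lambda}\\simeq\\frac{\\sqrt{\\lambda}\\,M_P\/\\xi}{M_P\/\\xi}=\\sqrt{\\lambda}\n\\,,\n\\end{equation}\nso for moderate $\\lambda$ the inflationary Hubble rate lies within an order of magnitude of the naive cutoff, no matter how large $\\xi$ is.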
Thus we need knowledge of the ultraviolet completion\nof Higgs inflation in order to find out if inflation is successful,\nwhich obviously clashes with the original attractiveness of the scenario.\n\nSince the appearance of the works~\\cite{Burgess:2009ea,Barbon:2009ya}\nthere has been a debate in the literature about\nthe presence or absence of unitarity problems in Higgs inflation.\nRefs.~\\cite{Lerner:2009na,Ferrara:2010yw,Ferrara:2010in}\nconsidered the Einstein frame analysis\nand objected, correctly, that the dimension 6 terms\nin the \\textit{small field} expansion of the potential\nonly give rise to unitarity problems in the \\textit{large field} regime\n($\\langle\\Phi\\rangle\\gg M_P\/\\xi$ or $\\langle \\Phi_E \\rangle \\geq M_P$), where the small field\nexpansion is no longer valid. Instead, one should perform a\nperturbative expansion around the field expectation value,\nwhich results in a field dependent cutoff scale.\n\nThen, by making use of the equivalence\nbetween the Jordan and Einstein frame, it was argued that there are also\nno unitarity problems in the Jordan frame for single field inflation.\nRefs.~\\cite{Burgess:2010zq,Hertzberg:2010dc} showed\nsubsequently that the unitarity bound $M_P\/\\xi$ again appears when\nthe Goldstone bosons are taken into account, both in the Jordan and Einstein\nframe. Ref.~\\cite{Burgess:2010zq} added that even in unitary gauge,\nwhere the Goldstone bosons are eaten by the gauge bosons, the cutoff\nscale appears in Higgs-gauge interactions. Refs.~\\cite{Ferrara:2010yw,Ferrara:2010in}\nargued that, instead of expanding around a small Higgs vev, the perturbative\nexpansion should be performed around a large expectation value $\\phi\\gg M_P\/\\xi$,\nwhich is the background relevant for inflation. 
Here $\\phi=\\langle \\Phi \\rangle$,\nwith $\\Phi$ the Higgs field in unitary gauge, $\\mathcal{H}=(0,\\Phi)$.\nIt was shown that the cutoff scale following from the Einstein frame\npotential is $M_P$ in the inflationary regime,\nbut undergoes a transition to $M_P\/\\xi$ for small field values $\\phi \\ll M_P\/\\xi$. The authors\nthen argued that this post-inflationary unitarity bound does not affect our ability\nto describe physical processes during inflation. Arguably the most\ncomplete treatment of the cutoff so far has been performed\nin Ref.~\\cite{Bezrukov:2010jz}. There the metric was expanded around\nan expanding background $g_{\\mu\\nu} =\\bar{g}_{\\mu\\nu}+\\delta g_{\\mu\\nu}$\nand likewise the scalar field was expanded around a time dependent\nbackground $\\Phi=\\phi+\\varphi$. Next, the Jordan frame action was\nput into canonical form by a redefinition of the metric and\nscalar field perturbations.\nIn the Jordan frame the cutoff originates from the $\\xi \\Phi^2 R$ term\nand was found to be $M_P\/\\xi$ for $\\phi \\ll M_P\/\\xi$,\n$\\xi\\phi^2\/M_P$ for $M_P\/\\xi < \\phi < M_P\/\\sqrt{\\xi}$ and $\\sqrt{\\xi}\\phi$\nfor $\\phi > M_P\/\\sqrt{\\xi}$. In the Einstein frame the same results\nwere obtained from the potential \\footnote{The cutoffs in the Jordan and Einstein\nframes are related by the conformal factor $\\sqrt{1+\\xi\\phi^2\/M_P^2}$. This\ncan be understood by realizing that the cutoff is some energy scale, or length\nscale, which is changed by the conformal transformation. In Ref.~\\cite{Bezrukov:2010jz}\nthe cutoffs are only found to be equivalent\n(up to factors of the self-coupling $\\lambda$)\nonce this factor is taken into\naccount, see also \\cite{Bezrukov:2008ej,Bezrukov:2009db}.}.\n\n\n\\section{Gauge invariance}\n\\label{sec: gauge invariance}\n\nIn this work we revisit the computation of the cutoff scale\nin Higgs inflation. 
The reason is that all of the papers\nmentioned in the previous section have overlooked\na crucial aspect of the computation of quantum corrections in general relativity.\nThis crucial aspect is the fact that general relativity contains\na large diffeomorphism symmetry. Since the diffeomorphism symmetry\nresembles many aspects of gauge symmetries, general relativity\nis often called a gauge theory and its degrees of freedom (dofs)\nare generally gauge dependent. Due to gauge dependence not all\ndofs are physical. Moreover, general relativity is a constrained\ntheory, such that some of the dofs do not participate\nin the dynamics, but instead impose constraints on the system.\nTherefore, computing a cutoff by means of\nexpanding $g_{\\mu\\nu}=\\bar{g}_{\\mu\\nu}+\\delta g_{\\mu\\nu}$\nand $\\Phi = \\phi +\\varphi$ and using the corresponding vertices,\nwill generally give non-physical and incorrect answers.\nInstead one should first determine the truly physical\ndegrees of freedom, and subsequently find their interaction\nvertices and the ultraviolet cutoff.\n\nAs a small sidestep, let us illustrate the above by looking\nat an analogy: the simpler and more familiar case of QED.\nQED is described by a vector field $A_{\\mu}$,\nwhich naively contains 4 dofs. However, one of the 4 dofs\n(the longitudinal component of the spatial vector field) is\nnot physical due to the gauge symmetry\n$A_{\\mu} \\rightarrow A_{\\mu}+\\partial_{\\mu}\\Lambda$.\nMoreover one of the components of the vector field is\nnot dynamical: there are no time derivatives acting on $A_0$\nin the Maxwell action. This becomes particularly clear\nwhen the theory is written in Hamiltonian form.\nThe non-dynamical component\n(the Coulomb potential) can in fact be decoupled from the\ndynamical part of the action. 
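Explicitly, in terms of components the Maxwell action reads\n\\begin{equation}\nS_{\\rm Maxwell}=\\int d^4x\\left[\\frac12\\left(\\partial_t A_i-\\partial_i A_0\\right)^2-\\frac14 F_{ij}F_{ij}\\right]\n\\,,\n\\end{equation}\nwhich contains no time derivative of $A_0$. Varying with respect to $A_0$ therefore does not produce an evolution equation, but the Gauss constraint $\\partial_i\\left(\\partial_t A_i-\\partial_i A_0\\right)=0$.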
Thus, out of 4 dofs in QED,\nthere are only 2 remaining dynamical dofs, which correspond to\nthe transverse components of the vector field, \\textit{i.e.}\nthe two polarizations of the photon.\n\nLet us now turn to Higgs inflation.\nThe action for Higgs inflation, which consists of\nthe Einstein-Hilbert action for general relativity, a scalar\nfield in a potential and a coupling of the scalar field to\nthe EH-action, naively contains $10+1$ dynamical degrees of freedom (dofs)\nin the metric and scalar field.\nHowever, due to the diffeomorphism symmetry (invariance\nunder coordinate reparametrizations $x^{\\mu}\\rightarrow x^{\\mu}+\\xi^{\\mu}$),\n4 of these dofs are not physical. The analogy with QED becomes\nmore apparent when we look at linearized perturbations. The action\nfor these first order perturbations is invariant under\nthe transformations $\\delta g_{\\mu\\nu}\\rightarrow \\delta g_{\\mu\\nu} +2\\nabla_{\\left(\\mu\\right.}\\xi_{\\left.\\nu\\right)}$\nand $\\varphi \\rightarrow \\varphi +\\dot{\\phi}\\xi^{0}$. This closely\nresembles the gauge symmetry of electrodynamics, only in this case\nthere are 4 gauge parameters, and thus 4 non-physical components\nof the metric.\n\nAlso, analogous to QED, 4 degrees of freedom\nin the metric are not dynamical. 
These are the $00$ and $0i$\ncomponents of the ADM metric,\n\\begin{equation}\nds^2=-N^2dt^2+g_{ij}(dx^i+N^idt)(dx^j+N^jdt)\n\\,.\n\\label{3rdg:ADMlineelement}\n\\end{equation}\nHere $g_{ij}$ is the spatial metric and $N$ and $N^{i}$\nare the lapse and shift functions, respectively.\nIn terms of the ADM metric the action \\eqref{Jordanframeaction}\ncan be written as\n\\begin{align}\nS=\\frac12 & \\int d^3xdt \\sqrt{g}\\Biggl\\{\nN R F(\\Phi)+\\frac{1}{N}\\left(E^{ij}E_{ij}-E^2\\right)F(\\Phi)\n-\\frac{2}{N} E F'(\\Phi)\\left(\\partial_t{\\Phi}\n- N^{i}\\partial_i\\Phi\\right)\n\\nonumber\\\\\n&+2g^{ij}\\nabla_iN \\nabla_jF(\\Phi)\n+\\frac{1}{N}\\left(\\partial_t{\\Phi}-N^{i}\\partial_i\\Phi\\right)^2\n-Ng^{ij}\\partial_i\\Phi\\partial_j\\Phi-2 N V(\\Phi)\\Biggr\\}\n\\,,\n\\label{ADMactionnonminimalEij}\n\\end{align}\nwhere $F'(\\Phi)=dF(\\Phi)\/d\\Phi$ and\nthe measure $\\sqrt{g}$, the Ricci scalar $R$\nand covariant derivatives $\\nabla_i$ are composed\nof the spatial part of the metric $g_{ij}$ alone.\nThe quantities $E_{ij}$ and $E$ are related\nto the extrinsic curvature $K_{ij}$ as $E_{ij}=-NK_{ij}$,\nwith\n\\begin{align}\nE_{ij}&=\\frac{1}{2}\\left(\\partial_tg_{ij}-\\nabla_iN_j-\\nabla_jN_i\\right)\\\\\nE&=g^{ij}E_{ij}\n\\,.\n\\end{align}\nFrom Eq.~\\eqref{ADMactionnonminimalEij} it can clearly be seen\nthat the lapse and shift functions are not dynamical (there are no kinetic terms\nfor them). This becomes even clearer in a Hamiltonian formulation~\\cite{Weenink:2010rr}, \nin which the lapse and shift functions\nmultiply the energy and momentum constraints. Because the lapse and shift functions\nare related to the constraints in general relativity and are non-dynamical,\nthey are also called constraint, or auxiliary fields. Thus, out of the\n$10+1$ degrees of freedom (dofs) in the action \\eqref{Jordanframeaction}, 4 are non-dynamical\nand related to the constraints, and 4 others are gauge dofs. 
We can therefore\nalready argue that there are only 3 dynamical dofs in the perturbed action,\nand we should take this into account when computing the physical cutoff.\n\nAs a first example of how incorrect results are obtained\nby not taking into account the gauge freedom and\nconstraints in general relativity, let us consider\nthe case of quantum corrections in Higgs inflation\nby expanding around a Minkowski background, as done\nin Refs.~\\cite{Burgess:2009ea,Barbon:2009ya}. The dominant\ncontribution from the term $\\xi \\Phi^2 R$ is found to be\n$(\\xi\/M_P) \\varphi^2 \\Box h$. However, it is well known\nthat the only dynamical dofs in Minkowski space are\nthe transverse traceless part of the metric, the graviton $\\gamma_{ij}$, and\nthe scalar field fluctuation $\\varphi$~\\footnote{Commonly, in Minkowski space the $00$ and $0i$\ncomponents of the metric fluctuation $h_{\\mu\\nu}$ are zero\n(the solution for the auxiliary fields is zero), and -- in the absence of matter fields\n-- the gauge freedom\nis used to set the scalar and vectorial components of the spatial metric\nfluctuation $h_{ij}$ to zero. The only remaining degree of\nfreedom is the transverse and traceless $\\gamma_{ij}$.}. In that case,\na term such as $h = {\\rm Tr}[h_{\\mu\\nu}]$ disappears from\nthe action. The next most dominant term is then\n$(\\xi\/M_P^2) \\varphi^2 \\partial \\gamma_{ij}\\partial \\gamma_{ij}$,\nwhich is a dimension 6 term with a cutoff scale $M_P\/\\sqrt{\\xi}$,\nhigher than the naive cutoff scale $M_P\/\\xi$.\nAlthough this is still an incorrect derivation of the cutoff\n-- one should perform an expansion around an expanding universe\nand a time dependent background field -- it already shows that\ntaking into account the gauge symmetry can raise the cutoff scale.\n\nSo let us continue by considering fluctuations on top of an expanding\nuniverse. 
In order to deal with the non-dynamical parts of the\nmetric and the gauge freedom, it is convenient to separate the\nmetric into different components and expand each component separately.\nWe thus insert into the action \\eqref{Jordanframeaction} (or Eq. \\eqref{ADMactionnonminimalEij})\n\\begin{align}\ng_{ij}&=a^2(t){\\rm e}^{2\\zeta}({\\rm e}^\\alpha)_{ij}\n\\nonumber\\\\\n\\Phi&=\\phi+\\varphi\n\\nonumber\\\\\nN&=\\bar{N}\\left(1+n\\right)\n\\nonumber\\\\\nN^i&=a^{-1}\\bar{N}(t)(a^{-1}\\partial_i s+n_{i}^{T})\n\\label{perturbations}\n\\,,\n\\end{align}\nwhere $a(t)$ is the scale factor and\n\\begin{equation}\n\\alpha_{ij}=a^{-2}\\partial_i\\partial_j \\tilde h + a^{-1}\\partial_{(i}h_{j)}^T+\\gamma_{ij}\n\\,,\n\\end{equation}\nwith $\\gamma_{ii}=0$, $\\partial_j\\gamma_{ji}=0$, $\\partial_j h^T_{j}=0$ and $\\partial_i n_i^T=0$.\nNow, all fluctuations of the metric and scalar field\nare by themselves not invariant under the gauge transformations (coordinate\nreparametrizations) of general relativity. Thus interaction\nvertices for the fluctuations in Eq. \\eqref{perturbations} are\ngenerally not gauge invariant, and thus not physical. In\norder to find the physical interaction vertices, the best way\nto go is to derive the action for \\textit{gauge invariant perturbations}.\nThese are variables that are by themselves invariant\nunder coordinate reparametrizations, and hence their\ninteraction vertices are physical.\nIn Refs.~\\cite{Weenink:2010rr,Prokopec:2012ug,Prokopec:2013zya} we have precisely done this:\nwe have derived the quadratic and cubic actions for gauge\ninvariant cosmological perturbations. 
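The logic is easily illustrated at linear order: in our conventions a time reparametrization shifts $\\zeta\\rightarrow\\zeta+H\\xi^0$ and $\\varphi\\rightarrow\\varphi+\\dot\\phi\\xi^0$, so the combination\n\\begin{equation}\nw_\\zeta=\\zeta-\\frac{H}{\\dot\\phi}\\varphi\n\\rightarrow\\zeta+H\\xi^0-\\frac{H}{\\dot\\phi}\\left(\\varphi+\\dot\\phi\\xi^0\\right)=w_\\zeta\n\\end{equation}\nis gauge invariant. The nontrivial part of the construction is extending such variables to second order in perturbations.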
We have found that,\nwhen written in terms of physical (gauge invariant) fields,\nthe scalar-tensor theory~(\\ref{Jordanframeaction})\ncan be separated into dynamical and constraint parts, such\nthat the constraint and dynamical fields\ndecouple.\nThe dynamical part contains one real scalar degree of freedom\nand one (traceless, transverse) tensor field with two degrees of freedom\n(polarizations).\nThe non-dynamical part contains gauge invariant versions\nof the constraint fields in the action.\nThere is one gauge invariant scalar lapse function\nand one gauge invariant shift vector field (with three components),\nwhich is usually split into a transverse vector and a (longitudinal) scalar.\nOn-shell all four gauge invariant constraint fields are zero\n(zero gauge invariant constraint fields solve the corresponding equations of motion).\nThis can also be understood by viewing the lapse and shift fields\nas auxiliary fields that can be solved for, order by order in perturbation\ntheory, and their solution inserted in the action \\cite{Prokopec:2013zya}\n(see also Ref.~\\cite{Maldacena:2002vr} for a gauge fixed approach). The\ngauge invariant lapse and shift functions being zero on-shell\ncorresponds to solving the equation of motion for the lapse\nand shift function. 
Solving for the auxiliary fields is hence analogous\nto decoupling the fields in the action.\nBecause of this decoupling, the considerations of the gauge invariant\nscalar-tensor theories significantly simplify, in the sense that\nit suffices to consider the action for the dynamical\ndegrees of freedom only (truncated to a certain order).\nIn Refs.~\\cite{Weenink:2010rr,Prokopec:2012ug,Prokopec:2013zya}\nsuch an action was constructed up to cubic order in the fields\nfor a class of scalar-tensor theories with general $F(\\Phi)$ term\nin Eq.~(\\ref{Jordanframeaction}).\nIn order to do that, we had to make a choice for gauge invariant variables.\nThe potential complication is the fact that, at second order in gauge\ntransformations, there are infinitely many such variables.\nHowever, one can show that\nthe form of the cubic actions, when expressed in terms of different\ngauge invariant variables, differs only by surface\nterms~\\cite{Prokopec:2012ug,Prokopec:2013zya}, which are irrelevant for\nthe (bulk) evolution. Nevertheless, these surface terms can be of physical relevance\nfor (the non-Gaussian part of) the initial state. One set of gauge invariant\nvariables is special, in that the variables are also frame independent:\nthey are invariant under a transformation from Jordan to Einstein frame.\nWhen expressed in terms of these variables,\nthe cubic actions in the Einstein and Jordan frame are\nmanifestly {\\it equal}\n(when one takes account of the trivial frame transformation of the time dependent background\nquantities that appear as prefactors in front of the cubic vertices).\n\nTherefore, in order to simplify the unitarity cutoff discussion\nit is natural to work with these unique frame (and gauge) independent\nvariables. They are the gauge invariant scalar\nperturbation $W_\\zeta$ and the gauge invariant tensor (transverse, traceless graviton) perturbation\n$\\tilde\\gamma_{ij}$. 
These are the second order generalizations of\nthe gauge invariant Sasaki-Mukhanov field, $w_\\zeta=\\zeta-(H\/\\dot\\phi)\\varphi$,\nand the graviton perturbation $\\gamma_{ij}$,\nwhere $H(t)=\\dot a\/a=\\bar N(t)^{-1}d\\ln(a)\/dt$ is the Hubble parameter,\nand $\\phi(t)$ the background (inflaton) field.\nFor the precise definitions of $W_\\zeta$ and $\\tilde\\gamma_{ij}$ -- which are given by\n$w_\\zeta$ and $\\gamma_{ij}$ plus corrections that are quadratic in perturbations --\nwe refer to Ref.~\\cite{Prokopec:2013zya}. We have also\ngiven an explicit proof that these variables are frame independent,\nthus $W_{\\zeta,E}=W_{\\zeta}$ and $\\tilde{\\gamma}_{ij,E}=\\tilde{\\gamma}_{ij}$.\n\nThe complete action for these gauge and frame independent\ndynamical fields up to cubic order can be found in\nRefs.~\\cite{Weenink:2010rr,Prokopec:2012ug,Prokopec:2013zya},\nand we repeat it here.\nThe quadratic action is a sum of the scalar and tensor parts,\n\\begin{eqnarray}\nS^{(2)}= S^{(2)}_{W_{\\zeta}} + S^{(2)}_{\\tilde\\gamma}\n\\,,\n\\label{quadratic action:total}\n\\end{eqnarray}\nwhere in the Jordan frame (see~(3.23) of Ref.~\\cite{Prokopec:2013zya}),\n\\begin{eqnarray}\nS^{(2)}_{W_{\\zeta}} &=& \\int d^{3}xdt \\bar{N} a^{3}\nz^2\\left[\\frac12\\dot{W}_{\\zeta}^2-\\frac12\\left(\\frac{\\partial_i W_{\\zeta}}{a}\\right)^2\\right]\n\\,,\n\\label{nmaction:2:wzetawzeta:GI}\\\\\nS^{(2)}_{{\\tilde\\gamma}_\\zeta} &=& \\int d^3xdt \\bar{N} a^3\\frac{F}{4}\n\\left[\\frac12(\\dot{\\tilde\\gamma}_{\\zeta ij})^2\n-\\frac12\\left(\\frac{\\partial_l \\tilde\\gamma_{\\zeta\\,ij}}{a}\\right)^2\\right]\n\\,,\n\\label{nmaction:2:gammagamma}\n\\end{eqnarray}\nwhere $\\bar N(t)$ is the background lapse function\n(for conformal time $t\\rightarrow \\eta$, $\\bar N = a(\\eta)$,\nwhile for physical time $t$, $\\bar N(t)= 1$).\nNote that the prefactor in the integrand of Eq.~(\\ref{nmaction:2:wzetawzeta:GI})\nis in fact\n\\begin{equation}\nz^2 = 
\\frac{\\dot{\\phi}^2+\\frac{3}{2}\\frac{\\dot{F}^2}{F}}\n{\\big(H+\\frac12\\frac{\\dot{F}}{F}\\big)^2}\n\\,.\n\\label{z:Jordan}\n\\end{equation}\nThe cubic action can be conveniently split into the scalar-scalar-scalar,\nscalar-scalar-tensor, scalar-tensor-tensor and tensor-tensor-tensor vertices,\n\\begin{eqnarray}\nS^{(3)}= S^{(3)}_{W_{\\zeta}W_{\\zeta}W_{\\zeta}} + S^{(3)}_{W_{\\zeta}W_{\\zeta}\\tilde\\gamma}\n + S^{(3)}_{W_{\\zeta}\\tilde\\gamma\\tilde\\gamma} + S^{(3)}_{\\tilde\\gamma\\tilde\\gamma\\tilde\\gamma}\n\\,.\n\\label{cubic action:total}\n\\end{eqnarray}\nUp to boundary terms,\nthe scalar-scalar-scalar part of the action is (see Ref.~\\cite{Prokopec:2012ug}\nand Eq.~(6.70) of Ref.~\\cite{Weenink:2013oqa})\n\\begin{eqnarray}\nS^{(3)}_{W_{\\zeta}W_{\\zeta}W_{\\zeta}}\n &=& \\int d^3x dt\\bar{N} a^3 F \\Biggl\\{\n\\frac{1}{4}\\frac{z^4}{F^2}\\left[W_{\\zeta}\\left(\\dot{W}_{\\zeta}^2\n+\\left(\\frac{\\partial_i W_{\\zeta}}{a}\\right)^2\\right)\n-2\\dot{W}_{\\zeta}\\left(\\frac{\\partial_i}{\\nabla^2}\n\\dot{W}_{\\zeta}\\right)\\partial_i W_{\\zeta}\\right]\n\\nonumber\\\\\n&&-\\,\\frac{1}{16}\\frac{z^6}{F^3}W_{\\zeta}\\left[\\dot{W}_{\\zeta}^2\n-\\left(\\frac{\\partial_i\\partial_j}{\\nabla^2}\\dot{W}_{\\zeta}\\right)\n\\left(\\frac{\\partial_i\\partial_j}{\\nabla^2}\\dot{W}_{\\zeta}\\right)\\right]\n+\\frac12 \\frac{z^2}{F}\\left[\\frac{\\frac{\\dot{z}}{z}-\\frac12\\frac{\\dot{F}}{F}}\n{H+\\frac12\\frac{\\dot{F}}{F}}\\right]^{\\cdot}\n \\dot{W}_{\\zeta}W_{\\zeta}^2\n\\Biggr\\}\n\\,,\n\\label{3rd:Jordan: Cubic GI Action Wzeta}\n\\end{eqnarray}\nthe scalar-scalar-tensor part of the action is (see Ref.~\\cite{Prokopec:2013zya}\nand Eqs.~(7.12) and~(7.47) of~\\cite{Weenink:2013oqa}),\n\\begin{eqnarray}\nS^{(3)}_{W_{\\zeta}W_{\\zeta}\\tilde{\\gamma}_{\\zeta}}\n&=&\\int d^{3}xdt \\bar{N} a^{3}\n\\Biggl\\{\n\\frac12z^2\\tilde\\gamma_{\\zeta,ij}\\bigg(\\frac{\\partial_i}{a}W_\\zeta\\bigg)\n 
\\bigg(\\frac{\\partial_j}{a}W_\\zeta\\bigg)\n-\\frac{z^4}{8F}W_\\zeta\\dot{\\tilde\\gamma}_{\\zeta,ij}\\frac{\\partial_i\\partial_j}{\\nabla^2}{\\dot W}_\\zeta\n\\nonumber\\\\\n&&\\hskip 2.3cm\n+\\,\\frac{z^4}{8F}\\bigg(\\frac{\\partial_i\\partial_j}{\\nabla^2}{\\dot W}_\\zeta\\bigg)\n\\bigg(\\frac{\\partial_k}{\\nabla^2}\\dot W_\\zeta\\bigg)\n\\partial_k\\tilde\\gamma_{\\zeta,ij}\n\\Biggr\\}\n\\,,\\qquad\n\\label{3rdg:nmaction:3:ssg:GI:Wzeta}\n\\end{eqnarray}\nthe scalar-tensor-tensor part of the action is (see Ref.~\\cite{Prokopec:2013zya}\nand Eq.~(7.49) of~\\cite{Weenink:2013oqa}),\n\\begin{eqnarray}\nS^{(3)}_{W_{\\zeta}\\tilde{\\gamma}_{\\zeta}\\tilde{\\gamma}_{\\zeta}}\n&=&\\int d^{3}xdt \\bar{N} a^{3}\n\\Biggl\\{\n\\frac{z^2}{16}W_\\zeta\\bigg[\\dot{\\tilde\\gamma}_{\\zeta,ij}\\dot{\\tilde\\gamma}_{\\zeta,ij}\n +\\bigg(\\frac{\\partial_l}{a}\\tilde\\gamma_{\\zeta,ij}\\bigg)\n \\bigg(\\frac{\\partial_l}{a}\\tilde\\gamma_{\\zeta,ij}\\bigg)\\bigg]\n-\\frac{z^2}{8}\\dot{\\tilde\\gamma}_{\\zeta,ij}\\bigg(\\partial_l\\tilde\\gamma_{\\zeta,ij}\\bigg)\n \\frac{\\partial_l}{\\nabla^2}\\dot W_\\zeta\n\\Biggr\\}\n\\,,\\qquad\n\\label{3rdg:nmaction:3:sgg:GI:Wzeta}\n\\end{eqnarray}\nand finally the tensor-tensor-tensor part of the action is (see Ref.~\\cite{Prokopec:2013zya}\nand Eq.~(7.39) of~\\cite{Weenink:2013oqa}),\n\\begin{eqnarray}\nS^{(3)}_{\\tilde{\\gamma}_{\\zeta}\\tilde{\\gamma}_{\\zeta}\\tilde{\\gamma}_{\\zeta}}\n&=&\\int d^3xdt \\bar{N} a^3 \\frac{F}{8}\n\\Biggl\\{\n\\tilde{\\gamma}_{\\zeta,ij}\n\\frac{\\partial_i\\tilde{\\gamma}_{\\zeta,kl}}{a}\n\\frac{\\partial_j\\tilde{\\gamma}_{\\zeta,kl}}{a}\n+\\tilde{\\gamma}_{\\zeta,kl}\n\\frac{\\partial_i\\tilde{\\gamma}_{\\zeta,kj}}{a}\n\\frac{\\partial_j\\tilde{\\gamma}_{\\zeta,il}}{a}\n-\\tilde{\\gamma}_{\\zeta,ik}\n\\frac{\\partial_i\\tilde{\\gamma}_{\\zeta,jl}}{a}\n\\frac{\\partial_j\\tilde{\\gamma}_{\\zeta,kl}}{a}\n\\Biggr\\}\n\\,.\\qquad\n\\label{3rdg:nmaction:3:ggg:GI:Wzeta}\n\\end{eqnarray}\n\nWhen written 
in terms of the frame independent variables $W_\\zeta$ and $\\tilde\\gamma_{\\zeta ij}$,\nwe proved in~\\cite{Prokopec:2013zya}\nthat the action~(\\ref{quadratic action:total}--\\ref{3rdg:nmaction:3:ggg:GI:Wzeta})\nis frame independent.\nSince the frame independent perturbations themselves coincide in Jordan\nand Einstein frame, $W_\\zeta=W_{\\zeta,E}$ and $\\tilde\\gamma_{\\zeta ij}=\\tilde\\gamma_{\\zeta ij,E}$,\nthe Einstein frame and Jordan frame actions coincide as well.\nThe frame independence becomes manifest when\none expresses the background quantities in the Jordan frame $a,H,z$ in\nterms of their Einstein frame counterparts\nin Eqs.~(\\ref{quadratic action:total}--\\ref{3rdg:nmaction:3:ggg:GI:Wzeta})\nby using the relations \\eqref{Conformaltransformationmetricscalar}\nat background level. This gives the following identities\n\\begin{eqnarray}\nF^{1\/2}\\bar N &=& M_{\\rm P}\\bar{N}_E\n\\,,\\quad\nF^{1\/2}a= M_{\\rm P}a_E\n\\,,\\quad\nF^{-1\/2}\\dot W_\\zeta = M_{\\rm P}^{-1}\\dot W_{\\zeta,E}\n\\,,\\quad\n\\frac{H+\\frac{\\dot F}{2F}}{\\sqrt{F}}= \\frac{H_E}{M_{\\rm P}}\n\\nonumber\\\\\n\\frac{\\dot\\phi^2+\\frac32\\frac{\\dot F^2}{F}}{F^2}&=& \\frac{\\dot{\\phi}_E^2}{M_{\\rm P}^4}\n\\,,\\qquad\n\\frac{z}{\\sqrt{F}}= \\frac{z_E}{M_{\\rm P}}\n\\,,\\qquad\n\\frac{\\frac{\\dot z}{z}-\\frac12\\frac{\\dot F}{F}}{H+\\frac{\\dot F}{2F}}\n = \\frac{1}{M_{\\rm P}}\\frac{\\dot{z}_E}{z_E}\n\\,,\n\\label{from Jordan to Einstein}\n\\end{eqnarray}\nwhere all subscripts $E$ denote quantities in the Einstein frame, $z_E\\equiv\\dot{\\phi}_E\/H_E$\nand a dotted derivative denotes $\\dot{X}=dX\/(\\bar{N}dt)$ and $\\dot{X}_E=dX_E\/(\\bar{N}_Edt)$\nfor Jordan and Einstein frame quantities respectively.\nHence there is no need to quote the action in the Einstein frame\n(in which $z^2\/F=z_E^2\/M_P^2=\\dot{\\phi}_E^2\/(M_P^2H_E^2)$).\n\n\n\nIn order to compute the cutoff in a frame independent formulation,\nit is convenient to transform to canonically normalized 
variables\n\\begin{equation}\nV_\\zeta = a z W_{\\zeta} = a \\sqrt{2\\epsilon_E F } W_{\\zeta}\n\\,,\\qquad\n\\Gamma_{\\zeta,ij} = \\frac12 a \\sqrt{F} \\tilde{\\gamma}_{\\zeta,ij}\n\\,,\n\\label{3rdg:canonicalvariables}\n\\end{equation}\nwhere we have used\n\\begin{equation}\n\\epsilon_E = -\\frac{\\dot{H}_E}{H_E^2} = \\frac{z_E^2}{2M_P^2} = \\frac{z^2}{2F}\n\\,,\n\\end{equation}\nwhich relates $z$ (defined in Eq.~\\eqref{z:Jordan})\nto the slow-roll parameter $\\epsilon_E$ in the Einstein frame.\nIn terms of the canonical fields the\nsecond order action~(\\ref{quadratic action:total}--\\ref{nmaction:2:gammagamma})\nin conformal time $\\tau$ ($\\bar{N}(t)\\rightarrow a(\\tau)$) becomes\n\\begin{eqnarray}\nS^{(2)}_{V_\\zeta}&=&\\frac12\\int d^3xd\\tau\n\\left[V_\\zeta^{\\prime 2}-(\\partial_i V_\\zeta)^2+\\frac{(az)^{\\prime\\prime}}{az}V_\\zeta^2\\right]\n-\\frac12\\int d^3 x \\bigg[\\frac{(az)^{\\prime}}{az}V_\\zeta^2\n \\bigg]_{\\tau_{\\rm in}}^{\\tau_{\\rm fin}}\n\\label{scalar quadratic action:canonical}\\\\\nS^{(2)}_{\\Gamma_{\\zeta,ij}}&=&\\frac12\\int d^3x d\\tau\n\\left[\\Gamma_{\\zeta,ij}^{\\prime 2}-(\\partial_l \\Gamma_{\\zeta,ij})^2\n +\\frac{(a\\sqrt{F})^{\\prime\\prime}}{a\\sqrt{F}}\\Gamma_{\\zeta,ij}^2\\right]\n-\\frac12\\int d^3 x \\bigg[\\frac{(a\\sqrt{F})^{\\prime}}{a\\sqrt{F}}\\Gamma_{\\zeta,ij}^2\n \\bigg]_{\\tau_{\\rm in}}^{\\tau_{\\rm fin}}\n\\,,\n\\label{tensor quadratic action:canonical}\n\\end{eqnarray}\nwhere a prime denotes a derivative with respect to conformal\ntime $\\tau$. 
The boundary terms\nin~(\\ref{scalar quadratic action:canonical}--\\ref{tensor quadratic action:canonical})\ndo not contribute to the propagator equation of motion and thus they can be discarded.\nFurthermore, the transformation to the canonically normalized fields has led to the generation of\ntime dependent (negative) mass terms\nin~(\\ref{scalar quadratic action:canonical}--\\ref{tensor quadratic action:canonical}).\nHowever, these terms can be neglected on energy and momentum scales far above the Hubble scale.\nNamely, $(az)^\\prime\/(az)=aH+(1\/2)\\phi^\\prime [d\\ln(F)\/d\\phi]+(1\/2)[\\epsilon_E^\\prime\/\\epsilon_E]$.\nNow, since the latter two terms are suppressed by slow roll parameters,\nthe first term, $aH={\\cal H}$ (here and subsequently ${\\cal H}$ denotes the conformal\nHubble rate), is the dominant term. Likewise, one can show that\n$(az)^{\\prime\\prime}\/(az) = 2a^2H^2$ plus slow roll suppressed terms, such that when\nthe (conformal) energy scale $E_c\\gg {\\cal H}$, this term can be neglected.\nBecause $(a\\sqrt{F})^{\\prime\\prime}\/(a\\sqrt{F})= 2a^2 H^2$ plus slow roll suppressed terms,\nthe same consideration applies to canonically normalized gravitons.\nThis means that the canonically normalized scalar and graviton propagators behave in the\nultraviolet (on scales $E_c = |k^0|\\gg {\\cal H}$ and $\\|\\vec k\\|\\gg {\\cal H}$) as\n$\\sim 1\/(\\eta_{\\mu\\nu}k^\\mu k^\\nu)$, plus corrections of the order ${\\cal H}^2$, which\nwe shall neglect in the following considerations.\n\n When the cubic action~(\\ref{3rd:Jordan: Cubic GI Action Wzeta}--\\ref{3rdg:nmaction:3:ggg:GI:Wzeta})\nis expressed in terms of the canonical variables~(\\ref{3rdg:canonicalvariables}),\none gets for the pure scalar cubic action\n\\begin{eqnarray}\nS^{(3)}_{V_{\\zeta}V_{\\zeta}V_{\\zeta}} &\\simeq& \\int d^3xd\\tau \\sqrt{\\frac{\\epsilon_E}{8a^2F}}\n\\bigg\\{\n V_\\zeta\\bigg[\n (V_\\zeta^\\prime)^2-2\\frac{(az)^\\prime}{az}V_\\zeta^\\prime V_\\zeta\n 
+\\frac{[(az)^\\prime]^2}{(az)^2}V_\\zeta^2 +(\\partial_iV_\\zeta)^2\n \\bigg]\n\\label{pure scalar cubic:canonical}\n\\\\\n && \\hskip 2.9cm -\\, 2\\bigg(V_\\zeta^\\prime-\\frac{(az)^\\prime}{az}V_\\zeta\\bigg)\n \\bigg[\\frac{\\partial_i}{\\nabla^2}\\bigg(V_\\zeta^\\prime-\\frac{(az)^\\prime}{az}V_\\zeta\\bigg)\\bigg]\n (\\partial_iV_\\zeta)\n\\nonumber\\\\\n && \\hskip 2.9cm\n -\\,\\frac{\\epsilon_E}{2}V_\\zeta\\bigg[\n \\bigg(\\!V_\\zeta^\\prime\\!-\\frac{(az)^\\prime}{az}V_\\zeta\\bigg)^2\n\\!-\\bigg[\\frac{\\partial_i\\partial_j}{\\nabla^2}\\bigg(V_\\zeta^\\prime\\!-\\!\\frac{(az)^\\prime}{az}V_\\zeta\\bigg)\\bigg]\n\\bigg[\\frac{\\partial_i\\partial_j}{\\nabla^2}\\bigg(V_\\zeta^\\prime\\!-\\!\\frac{(az)^\\prime}{az}V_\\zeta\\bigg)\\bigg]\n \\bigg]\n\\nonumber\\\\\n && \\hskip 2.9cm\n+\\frac{1}{\\epsilon_E}\\bigg(\n \\frac{\\epsilon_E^\\prime\/\\epsilon_E}{2{\\cal H}+(\\partial_\\tau F)\/F}\n \\bigg)^\\prime\n V_\\zeta^2\\bigg(\\!V_\\zeta^\\prime\\!-\\frac{(az)^\\prime}{az}V_\\zeta\\bigg)\n\\bigg\\}\n\\,.\n\\nonumber\n\\end{eqnarray}\nNext, the scalar-scalar-tensor cubic action~(\\ref{3rdg:nmaction:3:ssg:GI:Wzeta})\ncan be written as,\n\\begin{eqnarray}\nS^{(3)}_{V_{\\zeta}V_{\\zeta}\\Gamma_\\zeta}\n&=&\\int d^{3}xd\\tau\n\\Biggl\\{\n\\frac{1}{a\\sqrt{F}}\\Gamma_{\\zeta,ij}\\big(\\partial_iV_\\zeta\\big)\\big(\\partial_jV_\\zeta\\big)\n\\nonumber\\\\\n&&\\hskip 1.5cm\n-\\,\\frac{z^2}{4aF^{3\/2}}V_\\zeta\\bigg(\\Gamma_{\\zeta,ij}^\\prime-\\frac{(a\\sqrt{F})^\\prime}{a\\sqrt{F}}\\Gamma_{\\zeta,ij}\\bigg)\n \\frac{\\partial_i\\partial_j}{\\nabla^2}\\bigg(V_\\zeta^\\prime-\\frac{(az)^\\prime}{az}V_\\zeta\\bigg)\n\\nonumber\\\\\n&&\\hskip 1.5cm\n+\\,\\frac{z^2}{4aF^{3\/2}}\\frac{\\partial_i\\partial_j}{\\nabla^2}\\bigg(V_\\zeta^\\prime-\\frac{(az)^\\prime}{az}V_\\zeta\\bigg)\n\\frac{\\partial_l}{\\nabla^2}\\bigg(V_\\zeta^\\prime-\\frac{(az)^\\prime}{az}V_\\zeta\\bigg)\n\\partial_l\\Gamma_{\\zeta,ij}\n\\Biggr\\}\n\\,,\\qquad\n\\label{dominant ssg}\n\\end{eqnarray}\nthe scalar-tensor-tensor 
part of the action~(\\ref{3rdg:nmaction:3:sgg:GI:Wzeta}) becomes,\n\\begin{eqnarray}\nS^{(3)}_{V_{\\zeta}\\Gamma_{\\zeta}\\Gamma_{\\zeta}}\n&=&\\int d^{3}xd\\tau\n\\Biggl\\{\n\\frac{z}{4aF}V_\\zeta\\bigg[\\bigg(\\Gamma_{\\zeta,ij}^\\prime\n -\\frac{(a\\sqrt{F})^\\prime}{a\\sqrt{F}}\\Gamma_{\\zeta,ij}\\bigg)\n \\bigg(\\Gamma_{\\zeta,ij}^\\prime-\\frac{(a\\sqrt{F})^\\prime}{a\\sqrt{F}}\\Gamma_{\\zeta,ij}\\bigg)\n +\\big(\\partial_l\\Gamma_{\\zeta,ij}\\big)\\big(\\partial_l\\Gamma_{\\zeta,ij}\\big)\\bigg]\n\\nonumber\\\\\n&&\\hskip 1.5cm\n-\\,\\frac{z}{2aF}\\bigg(\\Gamma_{\\zeta,ij}^\\prime\n -\\frac{(a\\sqrt{F})^\\prime}{a\\sqrt{F}}\\Gamma_{\\zeta,ij}\\bigg)\n \\big(\\partial_l\\Gamma_{\\zeta,ij}\\big)\n \\frac{\\partial_l}{\\nabla^2}\\bigg(V_\\zeta^\\prime-\\frac{(az)^\\prime}{az}V_\\zeta\\bigg)\n\\Biggr\\}\n\\,,\\qquad\n\\label{dominant sgg}\n\\end{eqnarray}\nand finally the pure tensor part of the action~(\\ref{3rdg:nmaction:3:ggg:GI:Wzeta}) becomes\n\\begin{eqnarray}\nS^{(3)}_{\\Gamma_{\\zeta}\\Gamma_{\\zeta}\\Gamma_{\\zeta}}\n\\!=\\!\\int\\! \\frac{d^3xd\\tau}{a\\sqrt{F}}\n\\biggl\\{\n\\Gamma_{\\zeta,ij}\\big(\\partial_i\\Gamma_{\\zeta,kl}\\big)\\big(\\partial_j\\Gamma_{\\zeta,kl}\\big)\n\\!+\\!\\Gamma_{\\zeta,kl}\\big(\\partial_i\\Gamma_{\\zeta,kj}\\big)\\big(\\partial_j\\Gamma_{\\zeta,il}\\big)\n\\!-\\!\\Gamma_{\\zeta,ik}\\big(\\partial_i\\Gamma_{\\zeta,jl}\\big)\\big(\\partial_j\\Gamma_{\\zeta,kl}\\big)\n\\!\\biggr\\}\n\\,.\\quad\n\\label{dominant ggg}\n\\end{eqnarray}\n\nWe shall use the cubic action~(\\ref{pure scalar cubic:canonical}--\\ref{dominant ggg})\nin section~\\ref{sec: cutoffscale} to analyse the unitarity cutoff\nimplied by the $2\\to 2$ tree-level scattering processes.\nOf course, to complete the analysis,\none would also need the gauge invariant quartic vertices, which are currently not\navailable. 
We do not expect, however, that quartic vertices\nwill change in any way the qualitative discussion we present in the next section.\n\n\\section{The cutoff scale revisited}\n\\label{sec: cutoffscale}\n\nA usual requirement for renormalizable theories\nis that the unitarity bound is not violated in\nperturbation theory, \\textit{i.e.} scattering\namplitudes should not become larger than unity.\nIn particular there is the requirement of tree unitarity~\\cite{Cornwall:1974km},\nwhich states that $N$-particle tree amplitudes\n$\\mathcal{A}_N$ should not grow more rapidly than\n$E^{4-N}$, where $E$ is the center-of-mass energy.\nIf the amplitude grows faster, perturbation theory\nfails at some cutoff scale $\\Lambda$. This usually\nmeans that some new physics should enter at this energy\nscale. When the relevant energy scales of the theory\nunder consideration are well below the cutoff scale,\nthe theory can be considered `natural', in the sense\nthat perturbation theory is valid and there is no\nneed for new physics. Conversely, there is a naturalness problem\nif typical energy scales are higher than the cutoff scale.\n\nThe cutoff is most easily computed when the perturbative\naction is written in canonical form, such that\nthe propagator goes as $1\/k^2$, where $k^2 = \\eta_{\\mu\\nu}k^\\mu k^\\nu$\nand $k^\\mu$ is a conformal 4-momentum.\nNext, the cutoff can be read off from the vertices of dimension\nhigher than 4, which should be suppressed as $\\Lambda^{4-D}$.\nThis has been done for Higgs inflation in Ref.~\\cite{}\nin both the Einstein and Jordan frame, and we\noutline the derivation here. 
In the Jordan frame the starting point is\nthe action~\\eqref{Jordanframeaction} with $F(\\Phi)=M_P^2+\\xi\\Phi^2$,\nwhich is the action for Higgs inflation in unitary gauge,\nwith gauge interactions neglected.\nNext, following Ref.~\\cite{Bezrukov:2011sz},\nand analogously to what was done in~(\\ref{3rdg:canonicalvariables}),\ngeneric (non-invariant) perturbations $\\delta g_{\\mu\\nu}=g_{\\mu\\nu}-\\bar g_{\\mu\\nu}$\nand $\\varphi=\\Phi(x)-\\phi(t)$ can be rescaled to canonical variables $\\delta\\hat g_{\\mu\\nu}$\nand $\\hat\\varphi$ for which the corresponding quadratic actions are canonically normalized.\nHere $\\bar g_{\\mu\\nu}$ denotes the background metric (which in the UV can be approximated\nby the Minkowski metric) and $\\phi(t)$ is the background field.\nThe dominant term of dimension higher than $4$ in the Jordan frame action is of the order\n\\begin{equation}\n\\xi \\varphi^2 \\Box \\delta g\n\\,,\n\\label{dominant vertex: dim > 4}\n\\end{equation}\nwhere $\\delta g = \\bar{g}^{\\mu\\nu} \\delta g_{\\mu\\nu}$.\nWhen reexpressed in terms of the canonically normalized fields $\\delta\\hat g_{\\mu\\nu}$\nand $\\hat\\varphi$, this term becomes\n\\begin{equation}\n \\frac{\\xi\\sqrt{M_P^2+\\xi\\phi^2}}{M_P^2+\\xi\\phi^2+6\\xi^2\\phi^2}\n\\hat{\\varphi}^2 \\Box {\\delta\\hat g}\n\\,.\n\\label{dominant vertex: dim > 4:2}\n\\end{equation}\nAt high energies this vertex scales as $E^2$, where $E=|k^0|$ denotes the energy scale. If\nwe consider a $2\\rightarrow 2$ scattering process of $\\hat{\\varphi}$\nvia exchange of a gravitational scalar $\\delta\\hat g$, we see that\nthe total amplitude scales as $E^2\/\\Lambda^2$ at high energies. The cutoff scale\nis precisely the inverse of the coefficient of the operator above. 
The cutoff in\nthe Jordan frame is thus\n\\begin{equation}\n\\Lambda \\sim \\frac{M_P^2+\\xi\\phi^2+6\\xi^2\\phi^2}{\\xi\\sqrt{M_P^2+\\xi\\phi^2}}\n\\,.\n\\label{cutoff scale:from paper}\n\\end{equation}\nIn Ref.~\\cite{Bezrukov:2010jz} the cutoff\nwas also computed \\textit{via} the Einstein frame. In that frame\nthe non-minimal coupling term is absent and the gravitational\nand field kinetic terms are canonical. Still, the cutoff\nscale reappears in the non-polynomial potential and shows\nbehavior similar to the above (though not exactly equal).\nFrom~(\\ref{cutoff scale:from paper}) one sees that\n$\\Lambda \\propto \\sqrt{\\xi} \\phi$ in the regime where $\\phi\\gg M_P\/\\sqrt{\\xi}$,\n$\\Lambda \\sim \\xi \\phi^2\/M_P$ for $M_P\/\\sqrt{\\xi} \\gg \\phi \\gg M_P\/\\xi$ and\n$\\Lambda \\sim M_P\/\\xi$ when $\\phi\\ll M_P\/\\xi$. The authors\nof Ref.~\\cite{Bezrukov:2011sz} then argue that all relevant\nenergy scales in these regimes are lower than the cutoff scale,\nsuch that the perturbative expansion is valid\nand Higgs inflation is natural.\n\nThese results are interesting, but the question arises whether they can be trusted.\nIn the following we point out the principal potential problems with\nthe analysis outlined above, which call into question the reliability of\nthe cutoff scale in Eq.~(\\ref{cutoff scale:from paper}).\n\n\\begin{itemize}\n\n\\item[{\\bf a)}] {\\bf The computation of the cutoff is gauge dependent.}\n\nBoth the metric fluctuations $\\delta g_{\\mu\\nu}$ and the scalar field\nperturbation $\\varphi$ are gauge dependent, hence the\nvertex~(\\ref{dominant vertex: dim > 4}) is also gauge dependent, and has on\nits own no physical meaning. 
Indeed, it is well known that,\nwhen working with gauge dependent quantities, one can reach reliable conclusions\nonly when one takes into account all terms up to a given order.\nNamely, different gauge dependent terms can cancel each other, which\nwas not accounted for in Ref.~\\cite{Bezrukov:2011sz}, see {\\it e.g.} Ref.~\\cite{George:2013iia}.\nFurthermore, when written in terms of gauge invariant variables,\ngauge dependent vertices (such as $\\varphi^2\\Box \\delta g$)\ncan be absorbed into the lower order action, such that they simply `disappear' from\na gauge invariant action. Next, some metric perturbations are non-dynamical\nin the sense that they act as auxiliary (constraint) fields and should be solved for,\nwhich generates additional vertices that may cancel\nthe problematic vertex. In fact, this already\nhappens at the level of the quadratic action. Na\\\"\\i vely,\nthe field perturbation has an effective mass term\n$m^2_{\\rm eff}\\varphi^2=(-\\xi \\bar{R} + V'')\\varphi^2+\\dots$,\nwhere $\\bar{R}=6(2H^2+\\dot{H})$ is the background Ricci scalar. Such a mass term\nis huge, in the sense that $|\\xi \\bar{R}| \\gg H^2$.\nHowever, only a light inflaton field, with $m_{\\rm eff}^2 \\ll H^2$,\ncan generate a nearly scale invariant power spectrum.\nThus, such a mass term is disastrous for the model\nand na\\\"\\i vely rules out Higgs inflation. But, when\ncontributions from the auxiliary fields are taken into\naccount, the problematic $\\xi \\bar R$ contribution gets canceled, leaving\nonly a light effective mass for the inflaton field.\nSimilar cancellations occur at higher orders in a gauge\ninvariant formulation.\n\n\\item[{\\bf b)}] {\\bf The canonical redefinition mixes up frames.}\n\nA crucial step in the computation of the cutoff\nwas the definition of new perturbations $\\hat{\\varphi}$\nand ${\\delta \\hat g}$ which canonically normalize\nthe kinetic terms. 
However, this redefinition\nis in fact nothing more than a transformation\nfrom the Jordan to the Einstein frame at the level\nof perturbations!\\footnote{This can be explicitly checked\nby comparing Eqs. (2.9) and (2.10) in Ref.~\\cite{Bezrukov:2010jz}\nto the transformations\n\\[\n\\varphi_E=\\frac{d\\phi_E}{d\\phi}\\varphi+{\\cal O}(\\varphi^2)\n\\,,\\qquad\n\\zeta_E=\\zeta+\\frac{1}{2F}\\frac{dF}{d\\phi}\\varphi+{\\cal O}(\\varphi^2)\n\\,,\\qquad\n\\gamma_{ij,E}=\\gamma_{ij}\n\\]\nand the expansion of $g_{\\mu\\nu,E}=\\Omega^2 g_{\\mu\\nu}$ to first order\nin perturbations.}\nOn the other hand, the cutoff is computed\nfrom the term $\\xi\\Phi^2 R$ in the Jordan frame action. So somehow one\ncomputes a cutoff from a Jordan frame vertex using\nEinstein frame perturbations, which is very bizarre.\nMoreover, we would like to emphasize that, although\nthe Einstein frame is often referred to as the frame\nin which both the gravitational action and the scalar field action\nare written in canonical form, \\textit{the Einstein frame is not canonical}.\nCanonical formulation means that the kinetic sectors for\ndifferent fields are decoupled (and canonically normalized), such\nthat one can straightforwardly extract the canonical\nmomentum and quantize the theory. However, in the ``canonical'' Einstein\nframe the gravitational field still couples to\nthe scalar sector as $\\sqrt{-g_E}g^{\\mu\\nu}_E \\partial_{\\mu}\\Phi_E\\partial_{\\nu}\\Phi_E$.\nConversely, the scalar field still couples to the kinetic term for\nthe metric perturbations in the $\\sqrt{-g_E}R_E$ term via\nthe auxiliary fields in the metric (which contain\n$\\varphi_E$ in their first order solution). Hence,\nthe Einstein frame is formulated in a non-canonical\nway, just like the Jordan frame. 
The true\ncanonical formulation is only reached once one inserts\nperturbations and decouples physical and non-physical\ndegrees of freedom, as was done in Ref.~\\cite{Prokopec:2013zya},\nthe results of which are summarized in section~\\ref{sec: gauge invariance}.\nWhen using the frame independent variables $W_\\zeta$ and $\\tilde\\gamma_{\\zeta,ij}$,\nthe Jordan frame quadratic and cubic actions\nbecome manifestly equal to the Einstein frame actions, see Eq.~(\\ref{from Jordan to Einstein}).\nThus in a frame independent formulation the notion of frames becomes meaningless.\n\n\\item[{\\bf c)}] {\\bf Inequivalence of Jordan and transformed Einstein cutoff.}\n\nAs we have mentioned, the cutoffs computed directly\nin the Jordan frame, or \\textit{via} the Einstein frame,\nare similar, but not exactly the same. However, they should\ncoincide, because no physical content is lost in the frame\ntransformation. Of course the origin of this problem is related\nto the already mentioned problems, namely a non-invariant formulation\nand a mixing-up of different frames.\n\n\\end{itemize}\n\n\n\\subsection{Frame independent computation of cutoff}\n\\label{Frame independent computation of cutoff}\n\n In this subsection we show that the above problems all become obsolete\nonce the theory is written in a manifestly gauge invariant and frame\nindependent way. Firstly, when one makes use\nof gauge invariant variables, all vertices are physical\nvertices. 
Moreover, the gauge invariant perturbations\ndecouple in the quadratic action, which makes it very\neasy to write the theory in canonically normalized form.\nAnd obviously, when frame independent perturbations are\nused, results in the Jordan and Einstein frames become manifestly equivalent.\n\n As mentioned above, a simple way of estimating the unitarity cutoff scale for\ntree-level scattering amplitudes is to work with the canonically normalized\nfields~(\\ref{3rdg:canonicalvariables}), for which the propagator in the\nultraviolet acquires a simple form: $\\sim (\\eta_{\\mu\\nu}k^\\mu k^\\nu)^{-1}$\nplus corrections that are suppressed by ${\\cal H}^2=(aH)^2$,\nwhere $k^\\mu=(E_c\/c,\\vec k\\,)$ denotes a conformal energy and momentum\n(the corresponding physical energy and momentum are given by $E=E_c\/a$ and $\\vec k\/a$).\nUp to boundary terms, the cubic actions for various combinations of scalar and tensor\nvertices are given in Eqs.~(\\ref{pure scalar cubic:canonical}--\\ref{dominant ggg}).\n\n Let us first consider the scalar cubic action~(\\ref{pure scalar cubic:canonical}).\nThe cubic vertex ${\\cal V}$ from the first two lines is of the order,\n\\begin{equation}\n{\\rm Scalar\\; cubic\\; vertex:}\\quad\n{\\cal V}_{V_\\zeta^3} \\sim \\frac{\\epsilon_E^{1\/2}{\\rm max}\\big[E_c^2,\\|\\vec k\\|^2\\,\\big]}{a\\sqrt{F}}\n\\,,\n\\label{scalar cubic vertex}\n\\end{equation}\nwhere we made use of $\\partial_\\tau\\rightarrow E_c$,\n$\\partial_i\\rightarrow k^i$ and we took the ultraviolet limit in which\n$(az)^\\prime\/(az)\\sim {\\cal H}\\ll E_c, \\|\\vec k\\,\\|$.\nThe terms in the third line in~(\\ref{pure scalar cubic:canonical}) lead to a vertex that is\nin addition suppressed by $\\epsilon_E$, which is less than unity during inflation, and hence\ncan be neglected. 
Finally, the vertex contribution from the fourth line is of the order\n$\\epsilon_E^{-1\/2}[\\epsilon_E^\\prime\/(\\epsilon_E{\\cal H})]^\\prime E_c$, meaning that it is\nproportional to a third order slow roll parameter,\n$\\epsilon^{(3)}=[\\epsilon_E^\\prime\/(\\epsilon_E{\\cal H})]^\\prime\/(\\epsilon_E{\\cal H})$,\nwhich can be assumed to be small during inflation. More precisely,\nthis vertex contribution is suppressed with respect to~(\\ref{scalar cubic vertex}) if\n${\\rm min}\\big[E_c,\\|\\vec k\\|^2\/E_c\\big]\\gg [\\epsilon_E^\\prime\/(\\epsilon_E{\\cal H})]^\\prime\/\\epsilon_E\n=\\epsilon^{(3)}{\\cal H}$, which one can safely assume to be the case throughout inflation.\nOn the other hand, we know that in four space-time dimensions a 2-to-2 scattering amplitude\nscales as,\n\\begin{equation}\n\\frac{{\\rm max}[E_c^2,\\|\\vec k\\,\\|^2]}{\\Lambda^2}\n \\sim \\frac{{\\cal V}^2}{{\\rm max}[E_c^2,\\|\\vec k\\|^2]}\n\\,.\n\\label{general cutoff for 2-2 scattering}\n\\end{equation}\nUpon combining this with~(\\ref{scalar cubic vertex}) we finally get for the physical cutoff\nin the Jordan frame,\n\\begin{equation}\n{\\rm Scalar\\; cubic\\; vertex:}\\quad\n\\left(\\frac{\\Lambda}{a}\\right)_J \\sim \\sqrt{\\frac{M_{\\rm P}^2+\\xi\\phi^2}{\\epsilon_E}}\n \\gtrsim \\sqrt{M_{\\rm P}^2+\\xi\\phi^2}\n\\,,\n\\label{cutoff Jordan: pure scalar}\n\\end{equation}\nwhere we took account of $F=M_{\\rm P}^2+\\xi\\phi^2$\nand $\\epsilon_E\\lesssim 1$ during inflation.\nIn conclusion, we have found that, during the entire period of Higgs inflation,\nfor scatterings mediated by the scalar cubic interactions the unitarity bound is 
In fact, the cutoff in Higgs inflation is higher than the\nunitarity cuttoff in the minimally coupled inflation, $\\sim M_{\\rm P}$.\nwhich is obtained by simply setting $\\xi\\rightarrow 0$ in~(\\ref{cutoff Jordan: pure scalar}).\n\n Now making use of~(\\ref{from Jordan to Einstein}),\nfrom which we see that\n\\begin{equation}\n a_E=a_J \\frac{F^{1\/2}}{M_{\\rm P}}\n\\,,\n\\label{scale factor in two frames}\n\\end{equation}\nthe cutoff~(\\ref{cutoff Jordan: pure scalar}) can be written in the Einstein frame as\n\\begin{equation}\n\\bigg(\\frac{\\Lambda}{a}\\bigg)_E \\sim \\frac{M_{\\rm P}}{\\epsilon_E} \\gtrsim M_{\\rm P}\n\\,.\n\\label{dominant scalar cubic vertex:4}\n\\end{equation}\nThe difference between the frames can be attributed to the frame dependence of the\nphysical cutoff scale $\\Lambda\/a$, and has no physical meaning.\nTherefore, to make a meaningful comparison of the Einstein and Jordan frame cutoffs,\nthe rescaling~(\\ref{scale factor in two frames}) has to be taken account of.\nHence, we conclude that there is a perfect agreement in the cutoff scale in two different frames,\nand in both frames the physical cutoff is above the Planck scale $M_{\\rm P}= 1\/(8\\pi G_N)^{1\/2}$.\n\n\nLet us now turn our attention to other cubic vertices.\nFrom Eqs.~(\\ref{dominant ssg}--\\ref{dominant ggg}))\nwe obtain the dominant contributions for the other types of vertices,\n\\begin{eqnarray}\n&{\\rm Scalar-scalar-tensor\\; vertex:}\\quad &\n{\\cal V}_{V_\\zeta^2\\Gamma_\\zeta}\\sim\n \\frac{1}{aF^{1\/2}}\\times{\\rm max}\\Big[\\epsilon_EE_c^2, \\|\\vec k\\,\\|^2\\Big]\n\\nonumber\\\\\n&{\\rm Scalar-tensor-tensor\\; vertex:}\\quad &\n{\\cal V}_{V_\\zeta\\Gamma_\\zeta^2}\\sim\n \\frac{\\epsilon_E^{1\/2}}{aF^{1\/2}}\\times{\\rm max}\\Big[E_c^2, \\|\\vec k\\,\\|^2\\Big]\n\\nonumber\\\\\n&{\\rm Pure\\; tensor\\; vertex:} \\qquad\\qquad\\quad\\quad\\;&\n{\\cal V}_{\\Gamma_\\zeta^3}\\quad\\sim \\frac{\\|\\vec k\\,\\|^2}{aF^{1\/2}}\n\\label{dominant other cubic 
vertices}\n\\end{eqnarray}\nWhen these are inserted into~(\\ref{general cutoff for 2-2 scattering})\none obtains the following results for the cutoff scales from different types of vertices,\n\\begin{eqnarray}\n&{\\rm Scalar-scalar-tensor\\; vertex:}\\quad &\n\\left(\\frac{\\Lambda}{a}\\right)_J\n\\sim \\sqrt{M_{\\rm P}^2+\\xi\\phi^2}\\times{\\rm min}\\Big[\\epsilon_E^{-2},1\\Big]\n \\gtrsim \\sqrt{M_{\\rm P}^2+\\xi\\phi^2}\n\\nonumber\\\\\n&{\\rm Scalar-tensor-tensor\\; vertex:}\\quad &\n\\left(\\frac{\\Lambda}{a}\\right)_J \\sim \\sqrt{M_{\\rm P}^2+\\xi\\phi^2}\n \\times{\\rm min}\\Big[\\epsilon_E^{-1},1\\Big]\\gtrsim \\sqrt{M_{\\rm P}^2+\\xi\\phi^2}\n\\label{cutoffs individual vertices}\n\\\\\n&{\\rm Pure\\; tensor\\; vertex:} \\qquad\\qquad\\quad\\quad\\;&\n\\left(\\frac{\\Lambda}{a}\\right)_J \\sim \\sqrt{M_{\\rm P}^2+\\xi\\phi^2}\n \\times{\\rm min}\\Big[E_c^2\/\\|\\vec k\\|^2,1\\Big]\\gtrsim \\sqrt{M_{\\rm P}^2+\\xi\\phi^2}\n\\,,\\qquad\n\\nonumber\n\\end{eqnarray}\nwhere the first expression in square brackets gives the cutoff for the\ncase when $E_c\\gg \\|\\vec k\\|$, and the latter for the case when $E_c\\ll \\|\\vec k\\|$.\nConclusions analogous to~(\\ref{cutoffs individual vertices})\nare reached when one considers 2-to-2 scatterings composed of two different classes of\nvertices, {\\it e.g.} a combination of a scalar-scalar-graviton and a pure graviton vertex.\n\nIn summary, we have shown that in all cases, for all kinds of vertices and throughout inflation,\nthe physical cutoff in the Jordan frame is\nabove the scale $\\sqrt{M_{\\rm P}^2+\\xi\\phi^2}$, while in the Einstein frame it is\nabove $M_{\\rm P}$ (the difference has no physical meaning and is attributed to the\nnon-invariant definition of the physical cutoff in the two frames)\n\\footnote{It comes as no surprise that the canonically normalized\nscalar-scalar-graviton and pure graviton vertices are not suppressed 
The reason is that in the\nde Sitter limit $\\epsilon_E \\rightarrow 0$ the curvature perturbation $\\zeta$\nbecomes a pure gauge mode, and is completely absorbed by the gauge invariant\nlapse and shift perturbations. The only remaining dynamical perturbations\nare the scalar $\\varphi$ and the graviton $\\gamma_{ij}$.\nThe term $g^{\\mu\\nu}\\partial_{\\mu}\\Phi\\partial_{\\nu}\\Phi$ in the original action\nthen gives the interaction term\n$\\gamma_{ij}\\partial_i\\varphi\\partial_j\\varphi$,\nwhich is not $\\epsilon_E$ suppressed.\nLikewise, the pure graviton vertices are always present\nand are not suppressed by powers of $\\epsilon_E$.}.\n\n\n\\begin{figure}[ttt]\n\\begin{center}\n\n\\includegraphics[width=0.8\\textwidth]{cutoffJordan.pdf}\\\\\n \\caption{Physical cutoff $(\\Lambda\/a)_J=\\sqrt{M_P^2+\\xi\\phi^2}$ in the Jordan frame.}\n\\label{fig:cutoffJordan}\n\\end{center}\n\\end{figure}\nThe Jordan frame cutoff is shown in figure\n\\ref{fig:cutoffJordan}.\nThe important point is that also\nhere the cutoff is never smaller than\nthe Planck scale. This means that\nthe perturbative expansion is valid\nat least up to an energy scale of the order the Planck\nscale for a theory with some non-minimal coupling to gravity, such as Higgs\ninflation. Thus, the above analysis strongly suggests that,\nat least for the class of tree level $2\\rightarrow 2$ scattering\nprocesses considered here, there is no naturalness problem in Higgs inflation.\n\nThere are caveats to this statement however. Firstly,\nwe have only considered the scalar-gravity sector\nof Higgs inflation, but neglected interactions\nwith \\textit{e.g.} gauge fields. In Ref.~\\cite{Burgess:2010zq} it was stated that\nthe (low) cutoff scale of $M_P\/\\xi$ also appears in\nthe Higgs-gauge interactions. 
However, these vertices\nare gauge dependent as well, so the problem may be absent in a gauge invariant\nformulation that includes the gauge fields.\nSecondly, for the computation of the cutoff\nwe have used the partially integrated\nactions~(\\ref{pure scalar cubic:canonical}--\\ref{dominant ggg}).\nHad we not made use of partial integrations, we would\nhave found disastrous terms in the action.\nFor example, before any partial integration, the leading term in\nthe pure cubic scalar action for $V_\\zeta=W_{\\zeta}\/(az)$ from Ref.~\\cite{Prokopec:2013zya}\ncontributes (in the limit $E_c\\gg \\|\\vec k\\|$) to the cubic vertex as,\n$(\\epsilon_E F\/a)\\times{\\rm max}[E_c^2,\\|\\vec k\\|^2]$, implying a physical\ncutoff (in the Jordan frame),\n$(\\Lambda\/a)_J\\sim \\sqrt{\\epsilon_E(M_{\\rm P}^2 +\\xi\\phi^2)}\\times {\\cal H}\/E_c$, which\nis much below the Planck scale, and hence disastrous.\nSimilar problems occur when the scalar-graviton-graviton\nvertices before partial integrations are considered.\nTherefore, for a more complete understanding of naturalness\nit is of crucial importance to understand the role of\nthe boundary terms (on equal time hypersurfaces).\n\n\n\\section{Discussion}\n\\label{sec:Discussion}\n\n We have used the frame and gauge independent formalism\nfor scalar and tensor cosmological perturbations of Ref.~\\cite{Prokopec:2013zya}\nto show that the physical cutoff for 2-to-2 tree level scatterings in Higgs inflation\nis above the Planck scale $M_{\\rm P}=1\/\\sqrt{8\\pi G_N}$ throughout inflation.\nMore precisely, we found that in the Jordan frame, the physical cutoff scale\nis $(\\Lambda\/a)_J\\gtrsim\\sqrt{M_{\\rm P}^2 +\\xi\\phi^2}$, while in\nthe Einstein frame it is $(\\Lambda\/a)_E\\gtrsim M_{\\rm P}$, where $\\xi$ is the nonminimal coupling\nand $\\phi(t)$ denotes the Higgs {\\it vev}. The physical cutoff in the Jordan frame is illustrated\nin figure~\\ref{fig:cutoffJordan}. 
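The limiting behaviours of the cutoff~(\ref{cutoff scale:from paper}) quoted in section~\ref{sec: cutoffscale}, and the lower bound $M_{\rm P}$ on the gauge invariant cutoff $\sqrt{M_{\rm P}^2+\xi\phi^2}$, are easy to verify numerically. The following sketch is our own illustration, not part of the analysis above; the values $M_P=1$ and $\xi=10^4$ are arbitrary assumptions.

```python
# Numeric sketch (illustration only): evaluate the Bezrukov-type cutoff
#   Lambda = (M_P^2 + xi*phi^2 + 6*xi^2*phi^2) / (xi*sqrt(M_P^2 + xi*phi^2))
# in its three regimes, and check that the gauge invariant Jordan-frame
# cutoff sqrt(M_P^2 + xi*phi^2) never drops below M_P.
# M_P = 1 and xi = 1e4 are arbitrary illustrative values.
import math

M_P = 1.0
xi = 1.0e4

def cutoff_bezrukov(phi):
    F = M_P**2 + xi * phi**2
    return (F + 6.0 * xi**2 * phi**2) / (xi * math.sqrt(F))

def cutoff_gauge_invariant(phi):
    return math.sqrt(M_P**2 + xi * phi**2)

# phi >> M_P/sqrt(xi): expect Lambda ~ sqrt(xi)*phi up to an O(1) factor
phi_hi = 100.0 * M_P / math.sqrt(xi)
ratio_hi = cutoff_bezrukov(phi_hi) / (math.sqrt(xi) * phi_hi)

# M_P/sqrt(xi) >> phi >> M_P/xi: expect Lambda ~ xi*phi^2/M_P
phi_mid = 1.0e-3 * M_P
ratio_mid = cutoff_bezrukov(phi_mid) / (xi * phi_mid**2 / M_P)

# phi << M_P/xi: expect Lambda ~ M_P/xi
phi_lo = 1.0e-6 * M_P
ratio_lo = cutoff_bezrukov(phi_lo) / (M_P / xi)

print(ratio_hi, ratio_mid, ratio_lo)  # all of order unity

# the gauge invariant cutoff is bounded below by M_P for any phi
gi_min = min(cutoff_gauge_invariant(10.0**e) for e in range(-8, 2))
print(gi_min >= M_P)
```

All three ratios come out of order unity, confirming the quoted scalings, while the gauge invariant cutoff stays at or above the Planck scale in every regime.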
The difference between the two frames\nis immaterial in that it can be fully attributed to the frame dependence of\nthe (physical) cutoff, see Eq.~(\\ref{scale factor in two frames}).\nOur results are incomplete, in that we have not discussed the relevance of:\n\\begin{itemize}\n\\item[$\\bullet$] quartic vertices and loops,\n\\item[$\\bullet$] vertices containing gauge and fermionic fields,\n\\item[$\\bullet$] boundary terms on equal time hypersurfaces (that result from partial integrations),\n\\end{itemize}\nfor the question of naturalness in Higgs inflation.\nWe do, however, believe that the principal conclusion reached in this paper will not change\nwhen these contributions are fully accounted for.\n\n\n\\section*{Acknowledgements}\n\n We thank Damien George, Sander Mooij and Marieke Postma\nfor useful discussions. This work was in part supported by Nikhef, by the D-ITP consortium, a\nprogram of the NWO that is funded by the Dutch Ministry of Education,\nCulture and Science (OCW) and by the Institute for Theoretical Physics of Utrecht University.\n\n\n\n\\bibliographystyle{apsrev}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nBars, or non-axisymmetric central light distributions, are found in about\n2\/3 of the disc galaxies in the present-day universe (Sellwood \\&\nWilkinson \\shortcite{SW93}).\n Mechanisms for bar formation have been explored quite extensively\n(Noguchi \\shortcite{NO88}, \\shortcite {NO96A}, \\shortcite\n{NO96B}, Shlosman \\& Noguchi \\shortcite{SN93}). As a result of cooling, gas discs are subject\nto gravitational instability and bars can arise spontaneously.\nIn galaxies of earlier types, due to the stabilizing effect of the\ngrowing bulge, the disc is much less prone to spontaneous\nbar formation. \nA stronger perturbation is then needed, like an interaction\n
A stronger perturbation is then needed, like an interaction\nwith a companion, a merger or the tidal forces of a galaxy cluster.\nThe bar will not last forever because it may dissolve \nwhen the core mass becomes important, and will contribute to\nthe growth of the bulge (\\cite{NS96}). This implies that some bars are young, and\nthat some bulges could contain relatively young stars.\n\nThe fact that the heavy element abundance distribution\nis flatter in barred galaxies as compared with normal spirals of\nsimilar type can be explained by the action of inward\nand outward radial flows of interstellar gas induced by the non-axisymmetric potential\nof bars (Roberts et al. \\shortcite{RO79}; Sellwood \\& Wilkinson \\shortcite{SW93};\nFriedli et al. \\shortcite{FR94}). As a consequence,\nthe slope of any pre-existing radial\ngaseous or stellar abundance gradient decreases\nwith time. The long term effect is such\nthat the stronger the bar is, the flatter the abundance\ngradient becomes with time (Friedli \\& Benz \\shortcite{FB95}). The finding\nby Martin \\& Roy \\shortcite{MR94} of a relationship between the slopes of the global\n abundance gradient and the strength of bars provides support for the scenario\nof recently formed bars (Roy \\shortcite{RO96B}). \nThus bars may not necessarily be primordial\nfeatures (Combes \\& Elmegreen \\shortcite{CE93}), but could also form at any time\nduring the lifetime of a galaxy (Friedli \\& Benz \\shortcite{FB95}; \nMartinet \\shortcite{MAR95}; Martin \\& Roy\n\\shortcite{MR95}).\n\nIn late-type galaxies (SBc, SBb), a young bar ($\\leq$1 Gyr) will be characterized as being\na gas-rich structure. 
Such bars can be the site of vigorous \nstar formation;\nif the star formation process is not inhibited by the high cloud velocities,\nthe radial abundance distribution will present\na steep inner gradient (homogenizing effects being compensated by\nchemical enrichment due to vigorous star formation),\ncombined with a flatter gradient beyond corotation due\nto dilution by the outward radial flows, as shown by Friedli et al. \\shortcite{FR94}\nand Friedli \\& Benz \\shortcite{FB95}.\nOn the other hand, if the level of star formation in the bar is modest or absent\ndue to inhibiting forces, the radial abundance gradient\nwould be weak both in the bar and in the disc. In both cases, with\nyoung bars, one should see breaks in the radial abundance\ndistribution, which move outward in the disc as times evolves.\nSome bulges, because they result in part from the stars in the bar being\nscattered out of the galaxy plane, should be polluted by `young' stars\naged between 0.5 and 1.0 Gyr. Although one cannot exclude the \npresence of young stars in bulges, they are very difficult to detect.\nThe interstellar gas is a better laboratory to search for young bars.\nThe best candidate found so far for\na galaxy with a young bar is NGC 3359 where Martin \\& Roy \\shortcite{MR95}\nhave observed an abrupt O\/H gradient in the central region (which \nhas an intense episode of star formation), and a flat gradient beyond\ncorotation. NGC 3319, observed by Zaritsky\net al. \\shortcite{ZA94}, appears to be a case similar to NGC 3359.\n\nIn this paper we wish to explore the O\/H radial distribution in\nthe prominent southern bar galaxy NGC 1365 in order to test \nthe scenario of recent bar formation. NGC 1365 is as close to an archetype \nbarred galaxy as one can find in the nearby universe. 
Its bar is very\nrich in gas and it has moderate star-forming activity.\n\n\n\n\\section{Observations and data reduction}\n\n\\subsection{The galaxy NGC 1365}\n\nNGC 1365, a dominant galaxy of the Fornax cluster, is probably the most spectacular\nof the nearby barred galaxies. Classified as SB(s)b I-II by de Vaucouleurs et al. \n\\shortcite{dV91} (RC3), it has a Seyfert 1.5 type\nnucleus and a `hot spot' nuclear region\n(Sersic \\& Pastoriza \\shortcite{SP65}; Anatharamaiah et al. \\shortcite{AN93};\n Sandqvist et al. \\shortcite{SA95}). A distance for NGC 1365 of\n18 Mpc, as recently determined by an HST-WFPC2 Cepheid study \n\\cite{MA96}, is adopted. The galaxy has clearly defined offset dust lanes \non the leading edge of the bar and along its two main spiral arms. A velocity jump has\nbeen observed across the eastern and western dust lanes (\\cite{LJ87}), indicative of strong shocks in the gas flow (Athanassoula \\shortcite{AT92}).\nThe general properties of NGC 1365 are listed in Table 1.\n\n\\begin{table}\n\\caption{Global properties of NGC 1365} \n\\begin{tabular}{lc}\n\\hline \\hline\nParameter & Value \\\\ \n\\hline \n$\\alpha$ (J2000) & 3$^{\\rm h}$33$^{\\rm m}$37.$^{\\rm s}$37 \\\\\n$\\delta$ (J2000) & -36$^\\circ$08$'$25.$''$5 \\\\\nMorphological type & SB(s)b I-II\\\\\nInclination & 40$^\\circ$\\\\\nPosition angle & 220$^\\circ$ \\\\\nGalactic extinction ($A_{\\rm B}$) & 0.21\\\\\nSystemic velocity (km s$^{-1}$) & 1640 \\\\\n$\\rho_{0}$ & 5.$'$61 \\\\\n$b\/a(i)$ & 0.51 \\\\\nD (Mpc) & 18.2\\\\\nScale (pc arcsec$^{-1}$) & 88 \\\\\n\\hline\n\\end{tabular}\\\\\nNotes: (a) Positions, angles and velocity\nare from J\\\"ors\\\"ater \\& van Moorsel \\shortcite{JM95}; (b) type, extinction\nand isophotal radius are from\nde Vaucouleurs et al. \\shortcite{dV91}; (c) stellar bar axis\nratio corrected for inclination is from Martin \\shortcite{MA95};\n(d) Distance is from Madore et al. 
\\shortcite{MA96}\n\\end{table} \n\n\nSpectrophotometry of several {H~\\sc ii}\\ regions in the galaxy was obtained by\nPagel et al. \\shortcite{PA79}, Alloin et al. \\shortcite{AL81} and \nRoy \\& Walsh \\shortcite{RW88}; the\nearlier observations, although limited, suggested relatively\nhigh extinction, and a shallow global \nO\/H abundance gradient, a now well-established feature of barred\ngalaxies (Vila-Costas \\& Edmunds \\shortcite{VE92}; Zaritsky et al. \\shortcite{ZA94};\nMartin \\& Roy \\shortcite{MR94}). Star formation activity is moderate or weak\nin the bar, except in the nuclear ring (radius $\\sim$ 7$''$).\n\nNGC 1365 has been studied in {H~\\sc i}\\ (e.g. Ondrechen \\& van der Hulst \\shortcite{OV89}), but\nit is only recently that a detailed {H~\\sc i}\\ investigation with the VLA \nhas been completed and published by\nJ\\\"ors\\\"ater \\& van Moorsel \\shortcite{JM95}; these authors show that\nNGC 1365 has a unique dropping rotation curve. Sandqvist et al. \n\\shortcite{SA95} performed\nradio continuum and CO observations of the central parts of the galaxy,\nfinding a radio jet and enhanced CO in the nucleus\nand dust lanes; references to earlier radio work can also be found in\nthat paper. Sandqvist et al. \\shortcite{SA95} estimated that the amount\nof molecular gas in the nuclear and bar region, is equal to the\ntotal amount of neutral atomic hydrogen in the galaxy,\nthat is 15 $\\times$ 10$^9$ M$_\\odot$.\nThe {H~\\sc i}\\ distribution shows a hole in the central region where CO is the strongest and\nthe neutral hydrogen is predominantly located in the spiral arm.\nThe bar region is a very gas-rich feature. 
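As a quick arithmetic cross-check of the scale quoted in Table 1 (our own check, not from the paper), the adopted distance of 18.2 Mpc indeed corresponds to a projected scale of about 88 pc arcsec$^{-1}$:

```python
# Arithmetic sketch (our cross-check): projected scale implied by the
# adopted Cepheid distance D = 18.2 Mpc quoted in Table 1.
import math

D_pc = 18.2e6                               # distance in parsecs
arcsec_in_rad = math.pi / (180.0 * 3600.0)  # 1 arcsec ~ 4.848e-6 rad

# small-angle approximation: linear size subtended by 1 arcsec
scale_pc_per_arcsec = D_pc * arcsec_in_rad
print(round(scale_pc_per_arcsec))  # 88, as quoted in Table 1

# e.g. the ~7 arcsec radius of the nuclear ring then corresponds to ~0.6 kpc
ring_radius_pc = 7.0 * scale_pc_per_arcsec
```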
\n\n\n\\subsection{Selection of H~{\\sc II} regions}\n\nAn AAT prime focus plate of NGC~1365 taken in the R band (RG 610 filter + \nO94-04 emulsion) on 1990 December 16 was scanned on the ESO PDS machine.\nThe triplet corrector was employed giving a scale of 15.$''$3 mm$^{-1}$ and the\nscanning aperture was 50 $\\mu$m and sampling 25 $\\mu$m (0.$''$381). This\nimage is shown in Figure 1; 163 potential {H~\\sc ii}\\ regions were identified and\ntheir X,Y positions determined by either fitting a 2-D Gaussian or more\noften simply by centroiding the image by eye. Given that there was no\noff-emission band image with which to check whether the identified regions\nwere indeed {H~\\sc ii}\\ regions, some contamination by stars, globular clusters\nor distant galaxies was anticipated. In addition to the {H~\\sc ii}\\ regions candidates, \nthe X,Y positions of HST GSC stars were measured by fitting 2-D Gaussians and a six\ncoefficient astrometric solution was made for 23 GSC stars. The RA and Dec\nof the potential {H~\\sc ii}\\ regions was then calculated. 'Sky' positions were also\nselected away from the galaxy and free of stars. In selecting {H~\\sc ii}\\ regions\nto observe, a number of considerations were taken in account: \n\n\\begin{enumerate}[(iv)]\n\\item about 55 object and sky fibres can be observed per fibre bundle per \naperture plate; \n\\item the full range of {H~\\sc ii}\\ region apparent brightness should be covered in \norder to avoid any bias; \n\\item fibres cannot be closer than 19$''$. 
Thus only one {H~\\sc ii}\\ region could\nbe selected in a crowded region; a nearby {H~\\sc ii}\\ region could of course\nbe selected for a second aperture plate.\nSome bright {H~\\sc ii}\\ regions should be included in common between\ndifferent aperture plates to check the observation quality and \nreproducibility of calibration; \n\\item the faintest {H~\\sc ii}\\ regions could be included on more than one aperture plate\nto increase the exposure time.\n\\end{enumerate}\n\n\\begin{figure*}\n\\centerline{\\psfig{file=n1365fig1.eps,height=18.0cm,clip=}}\n\\caption{R band image of NGC 1365 obtained by David Malin\nat the f\/3.3 prime focus of the 3.9-m Anglo-Australian Telescope.\nThe numbers correspond to the {H~\\sc ii}\\ regions of Table 2; each\n{H~\\sc ii}\\ region is the brightest object seen closest to its number. G-1, G-2 and\nG-3 are more distant galaxies discovered serendipitously. North is\nup and east is left.}\n\\label{fig1}\n\\end{figure*}\n\n The final choice of {H~\\sc ii}\\ regions selected is indicated by the numbered\nregions on Figure 1. These were divided between two aperture plates which\nwere observed on different nights. Both plate A (observed on 1993 December 09)\nand plate B (observed on 1993 December 10 and 11) had 49 fibres on {H~\\sc ii}\\ regions \nand 7 on sky positions. There were 30 regions in common between the two plates:\n8 had no detectable emission; 7 were so faint that the better exposure on\n1993 December 11 only was used for analysis; the remaining 15 spectra\nwere combined: 5 had very faint emission and 5 were among the\nbrightest {H~\\sc ii}\\ regions observed. For targets not in common, \n9 objects with insufficient signal-to-noise in the emission\nlines were rejected. Three objects (G-1, G-2 and G-3 on Figure 1) were found to be\ndistant galaxies (see Appendix).
Judging the \nstrength of line emission from the appearance on a broad band image is clearly\nnot sufficient since some regions have stronger continuum, particularly\nat smaller galactocentric radius. The final number of distinct {H~\\sc ii}\\ region\nspectra obtained with well measured lines was 55. \n\n\\subsection{Fibre observations}\nLow dispersion spectra of the selected targets in NGC 1365 were obtained\nusing the RGO spectrograph (25 cm camera) and the Tektronix \\#2 1024 $\\times$ 1024\nCCD (24 $\\mu$m pixels) on the 3.9 m Anglo-Australian Telescope during the nights of 1993\nDecember 9-12. The FOCAP system with its bundle of about 55 fibres\nof 300 $\\mu$m core diameter (2.0$''$ on sky) was employed to \nfeed the spectrograph; the FOCAP system was used because it allowed\ncloser positioning of fibres than AUTOFIB.\nA 250 lines per mm grating was used in the\nfirst-order blaze to collimator configuration for a dispersion of 156 \\AA\\ mm$^{-1}$.\nMost of the {H~\\sc ii}\\ regions observed in NGC 1365 have\ndiameters in the range of 5$''$ -- 10$''$, much bigger\nthan the fibre apertures. To increase the area sampled by each fibre,\nthe telescope was set in a continuous scan motion, by moving it on a circle\nof 1$''$ radius at a rate of one full circle per 30 s. Convolved\nwith seeing of 1$''$ -- 2$''$, the effective sampling area was\n$\\sim$ 3$''$ -- 4$''$ in diameter. The Tektronix CCD is a thinned device and was used\nin the XTRASLOW mode (394 sec readout time) to achieve the lowest readout noise \nof 2.3 e$^-$ rms.\nThe CCD blue response is good, being greater than 40\\% at 3727 \\AA. \n\n Six 2000s exposures were taken with the first plate on 1993 December 09-10, one of which was\naffected by cloud. A second aperture plate was used for observations on the\nthird night (1993 Dec 11-12) when conditions were good, seeing $\\sim$1$''$,\nand seven 2000s exposures were performed.
The {H~\\sc ii}\\ regions indicated on Figure 1 are only those for which the emission\nlines were analysed, the spectra containing at least detectable H$\\alpha$ and \nH$\\beta$ emission. Three of the objects rejected as having weak emission line spectra\nwere found to have high redshifts. They are indicated by a letter G prefix in\nFigure 1 and are discussed in the Appendix.\n\n\\begin{figure*}\n\\centerline{\\psfig{file=n1365fig2.eps,height=18.0cm,clip=}}\n\\caption{Examples of spectra of {H~\\sc ii}\\ regions in NGC 1365 obtained with FOCAP. \nThe region numbers refer to those in Figure 1 and Table 2. No correction for interstellar \nextinction has been applied.}\n\\label{fig2}\n\\end{figure*}\n\n\n\\subsection{Spectrophotometry with optical fibers}\n\nSpectroscopy using optical fibers for multiplexing is now a widely-used \ntechnique in astronomy. A great number of spectra are being\nobtained but, unfortunately, little attention has been given\nto the promising astrophysical potential of having usable spectral energy\ndistributions for so many sources in addition to radial velocities.\nThere is a general consensus that spectrophotometry cannot be done with fibres.\nHowever in the multi-fibre spectroscopy of 49 {H~\\sc ii}\\ regions in NGC~2997 (Walsh \\& Roy \\shortcite{WR89}),\nit was\nshown that with suitable care, spectrophotometry to {\\bf better} than 20\\% can be \nachieved. Five problem areas for fibre spectrophotometry were outlined in \nWalsh \\& Roy \\shortcite{WR89}. The use of a CCD with high quantum efficiency,\nas opposed to the IPCS for the NGC~2997 observations, together with the multi-night\ncoverage of one galaxy, have changed some of the emphasis of the problem areas\nand led to improved understanding of the limitations. 
The problem areas can be\nrestated thus: dealing with differential atmospheric refraction; \neffect on spectra of cross-talk between fibres; efficacy of sky subtraction; \nreliability of spectral extraction.\n\n\\begin{enumerate}[]\n\n\\item Atmospheric refraction\n\nThe fibres were 300$\\mu$m in diameter (2.0$''$) so that at zenith distances\nabove 30$^\\circ$ the differential atmospheric refraction between 3727 and 6730\\AA\\\n(the extent of the most useful optical wavelength range for {H~\\sc ii}\\ regions) is 1.0$''$\n(see Walsh \\& Roy \\shortcite{WR90}).\nThe maximum zenith distance at which exposures were made of NGC~1365 is \n40$^\\circ$ so that differential refraction will have a modulating effect on\nthe spectrum. However the telescope was moved in a circular pattern of 1$''$\nradius to improve the sampling of the extended {H~\\sc ii}\\ regions, and this also has the\neffect of averaging out the effects of differential atmospheric refraction\nfor modest airmasses.\n\nComparison of spectra of the same {H~\\sc ii}\\ region taken at zenith distances of 6\nand 35$^\\circ$ and corrected for airmass shows only small differences in\nline ratios between the blue and red ends. However for the observations\nof the spectrophotometric standard stars (L745-46A and L870-2 \\cite{OK74}) the\ntelescope was not circled during the exposure. Depending on the exact position of the\nstar image over the fibre and the airmass, the input spectra will differ,\nmost obviously in the blue where the differential atmospheric refraction\nchanges rapidly with wavelength. As in the case of NGC~2997, four {H~\\sc ii}\\ \nregions were in common between the fibre spectroscopy and the scanned long\nslit spectroscopy (Roy \\& Walsh \\shortcite{RW87}).
However for NGC~2997, the \nproblems of relying on\nthe spectrophotometric standard for the flux calibration proved insurmountable\nand a sensitivity curve was deduced by comparing the fibre spectra with the\ndata from a long slit over identical regions. Whilst the same method could have been\nrelied on for the NGC~1365 observations, it is clearly preferable to rely on an \nindependent spectrophotometric calibration. \n\n A number of precautions were taken in the observations of the standards.\nAt least three, and sometimes five, separate observations, each of 300s \nduration, were made on the standards at low to moderate airmass (zenith\ndistance $\\leq$ 27$^\\circ$). At each exposure the standard star was recentered in the\nguide fibre and then offset to the appropriate fibre. During data reduction \nthe individual spectra of the standards were compared in\nterms of absolute level and shape. The differences in absolute level\ncould be attributed to the star not being centered in the fibre combined with\nthe seeing, which was around 1$''$-1.5$''$. Differences\nin spectral distribution were also apparent, in particular a relative increase\nor decrease in the blue when comparing different exposures. This must be attributed to \ndifferent centering of the star within the fibre: a higher blue response occurring\nwhen the fibre was centred more towards lower zenith distance. \nSome conspiracy between shifts in the parallactic direction and \nperpendicular to it could produce such effects, although no attempt was made\nto model this behaviour.
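The size of the differential refraction discussed in this item can be estimated with a standard Edl\'en-type dry-air dispersion formula (the form used by Filippenko 1982); the conditions assumed here (15 C, 760 mmHg) are illustrative, not necessarily those of the actual observations:

```python
import math

def n_air(lam_um):
    """Dry-air refractive index at 15 C, 760 mmHg from an
    Edlen-type dispersion formula (cf. Filippenko 1982);
    lam_um is the wavelength in microns."""
    s2 = 1.0 / lam_um**2
    return 1.0 + 1.0e-6 * (64.328
                           + 29498.1 / (146.0 - s2)
                           + 255.4 / (41.0 - s2))

def diff_refraction_arcsec(lam1_um, lam2_um, zenith_deg):
    """Differential atmospheric refraction between two
    wavelengths at a given zenith distance, in arcsec."""
    dn = n_air(lam1_um) - n_air(lam2_um)
    return 206265.0 * dn * math.tan(math.radians(zenith_deg))
```

At a zenith distance of 30 degrees this gives about 1.0 arcsec between 3727 and 6730 Angstrom, consistent with the figure quoted above, rising to about 1.4 arcsec at the maximum zenith distance of 40 degrees.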
\n\nIn forming the mean spectrum of the standard for a\ngiven night, a weighted mean of the strongest spectra was made; however if\na spectrum differed markedly in shape from the others, it was not included in the mean.\nIt was suggested in Walsh \\& Roy \\shortcite{WR89} that improved spectrophotometry\nof standard stars could be achieved with very good seeing\nby scanning the telescope in an elongated\npattern in the direction of the parallactic angle. This was not attempted during\nthe observations described here. Such a strategy will of course \nimpair absolute spectrophotometry (which can only be achieved with good seeing\nfor fibres), but this is usually not a serious issue and\ncertainly was not of major concern here.\nThe agreement between spectra taken of the same {H~\\sc ii}\\ region on different nights\nboth with the same fibre and with a different fibre suggests that the strategy of \nscanning a small circle is sufficient to ensure repeatable spectrophotometry\nto within 5\\% and could also be applied to the observation of the standard\nstars. Incidentally the level of agreement implies that the colour\ntransmission of the fibres is similar, at least to tolerances lower than the \nspectrophotometric ones.\n\n Inverse sensitivity curves were formed for L745-46A (first and third nights)\nand for L870-2 (second, third and fourth nights). The two curves for L745-46A\ndiffered significantly in the blue, the curve derived for the third night\nshowing a sharp minimum centred at 4000\\AA. For L870-2 the curves had the same general\nshape but with differences in absolute level; for the second and third nights\nthe relative differences did not exceed about 6\\%.
The sensitivity curves\nwere used to calibrate the {H~\\sc ii}\\ region spectra and the extinction, derived from\ncomparison of the hydrogen line ratios with the Case B values, was computed.\nIt was found that the L870-2 sensitivity curves gave a more consistent \nvalue of the extinction (the Seaton \\shortcite{SE79} Galactic curve\nwas employed as parametrized by Howarth \\shortcite{HO83}) for the different \nhydrogen line ratios (e.g. H$\\alpha$\/H$\\beta$ vs. H$\\delta$\/H$\\beta$). It was \ntherefore decided to use a mean weighted spectrum of\nthe L870-2 data from three nights to produce the calibration curve. On the\nhighest signal-to-noise {H~\\sc ii}\\ spectrum (\\#42, night 3) the \nextinction is the\nsame (within the errors of measurement) on all line ratios from H$\\alpha$\/H$\\beta$ to\nH$\\epsilon$\/H$\\beta$. Thus the calibration produced by the adopted L870-2 \nspectrum is at least adequate over the range 3950 to 6600\\AA. \n\nComparison of the spectra with those of identical {H~\\sc ii}\\ regions\nobserved by Roy \\& Walsh \\shortcite{RW88} and Alloin et al. \\shortcite{AL81}\nwas also made. There are 9 {H~\\sc ii}\\ regions in common with Alloin et al.\n(L4, L6, L7, L10, L15, L21, L24, L28 and L33). There is poor agreement with\nthe IDS spectra of Alloin et al., where low extinctions ($c$=0) were derived; for the\nIPCS spectra, the extinction derived for L33 is much higher than the Alloin et al.\nvalue, and that for L10 somewhat higher, whilst for the others there is satisfactory \nagreement. The {\\rm \\mbox{[O \\sc ii]}\\ }\/H$\\beta$ ratios presented by Alloin et al. are \nsystematically lower than those presented here.\nThe spectra were also compared with those of Roy \\& Walsh \\shortcite{RW88} for \n{H~\\sc ii}\\ region emission summed over much larger areas than the fibre data.\nThe fibre data indicated higher extinction and higher dereddened [O~II]\/H$\\beta$ \nratios than the summed {H~\\sc ii}\\ region emission.
The higher extinction is most\nprobably a real effect whereby the bright cores have higher extinction, whilst\nthe higher {\\rm \\mbox{[O \\sc ii]}\\ }\/H$\\beta$ ratios may be affected by the\ndifficulties of fibre spectrophotometric calibration at lower wavelengths.\n\n\\item Cross-talk between fibres\n\n On account of the much higher signal-to-noise\nof the CCD data in comparison with the previous IPCS data (Walsh \\& Roy \n\\shortcite{WR89}), a better\nmeasurement of the cross-talk from fibre to fibre could be made. For an observation\nof the standard star, the counts in the two adjacent fibres along the slit yielded\ncross-talk of 1.9 and 1.0\\%, with an increase to 2.5\\% in the red ($>$7000\\AA) in\nthe fibre with the higher level of cross-talk. For the other fibre, with lower\ncross-talk, the spectrum had the same shape as that in the star fibre. A related\neffect was noticed in that some spectra after flux calibration were very blue.\nThe effect appeared strongest on weaker spectra\nalthough it was not confined to them, and was not simply correlated with the strength\nof adjacent fibre spectra. It is suggested that the effect could arise\nfrom the flat fielding, since the flat-field lamp has a low response in the blue.\nThis would make the flat field in the blue very sensitive to small changes\nin packing between fibres along the slit and small irregularities would \ncause differential cross-talk. However it is not clear why this cross-talk should be\ndifferent between the flat field and the sky data. Sky flats might assist in\nbetter understanding of this problem. It is clear that the fibres should be well\nseparated at the spectrograph entrance slit and on the detector in order to allow \nwell-defined minima to be distinguished between the spectra.\n\n\\item Sky subtraction\n\nAs suggested by Wyse \\& Gilmore \\shortcite{WG92}, at least six\nfibres were employed for determination of the sky -- in fact seven were used.
After \nnormalising the\nfibre-to-fibre response by the flat field spectra integrated in wavelength, the\nmean sky spectrum was formed and subtracted from the object spectra. The efficacy of\nsky subtraction was tested by examining the residuals on the mean sky-subtracted \nsky spectra. Examining the goodness of subtraction of the strong sky emission \nlines alone is not very reliable since small wavelength shifts ($\\leq$0.3 pixels) \nbetween the fibre spectra can result \nin improperly subtracted sky lines, although the mean difference across the line\ncan be zero. Such small shifts can result from slight\ndifferences in the polynomial fit to the comparison lines along the\nslit. Comparison \nof the fibre-to-fibre response between the flat field and the [O~{\\sc i}]5577\\AA\\\nline shows good agreement, suggesting that the method of correcting for the\nrelative fibre transmissions is satisfactory.\n\n\\item Spectral extraction\n\n The packing of the fibres is such that the separation\nof the spectra peak-to-peak is 5 or 6 CCD pixels, and there is not\na true zero intensity between fibres. The spectra were extracted simply by summing \nas many pixel columns per fibre spectrum as possible; for the better separated spectra\none signal-free pixel column was left between fibre spectra. A full extraction\nwould require solving for all the extracted spectra simultaneously. Shifting the\nextent of the extracted spectrum by one pixel causes an error of up to 5\\%. \nAssigning a column from the edge of one spectrum into the adjacent spectrum\ngives rise to an error of about 2\\%. The simple linear extraction could be\nimproved by fitting the cross-dispersion profile and by optimal extraction, but\nwould not substantially affect the results presented here.\n\nComparing the spectra of the same {H~\\sc ii}\\ \nregions taken on different nights and with different fibres shows that these\nsystematic effects dominate the photon statistical errors associated with\nthe line fitting.
For the bright lines, e.g. {\\rm \\mbox{[O \\sc iii]}\\ } and H$\\alpha$, the typical\ndifferences in the line ratios (expressed as a fraction of H$\\beta$) are \n10\\%; for {\\rm \\mbox{[O \\sc ii]}\\ } this difference is around 20\\%, with values up to 30\\%.\nFor spectra taken on\ndifferent nights, but in the same fibre, the differences are only slightly larger\nthan the photon statistical errors, although conditions were not photometric\non the first and second nights. It is clearly the problems of spectral\nextraction and spectrophotometric calibration that dominate the uncertainties in\nthe derived line fluxes.\n\n\\end{enumerate}\n\n\n\\subsection{Reduction of the spectra}\n\nAll the data frames were bias subtracted and the spectra and flat fields formed\nfor each fibre. Wavelength calibration was achieved by fitting third order\npolynomials to the integrated spectra of each fibre for a Th-Ar lamp. Corrections\nof each individual exposure for airmass were made and the data summed. Following\nflux calibration by the merged L870-2 spectrum, the spectrum of each target was\nextracted and \nan interactive procedure for fitting continuum and emission lines was used\nto derive the line fluxes $F(\\lambda)$ expressed in units of \n$F(H\\beta)$ = 100. Photon noise errors on the fitted line fluxes were computed \nand propagated. The magnitude of the \ninterstellar reddening was determined from the H$\\alpha$\/H$\\beta$\\ ratios;\ncomparison was made with the\ntheoretical decrement as given by Brocklehurst \\shortcite{BR71} for a temperature\nof 8,000 K and a density of 100 cm$^{-3}$, but after adding 2 \\AA\\\nof equivalent width to the H$\\beta$\\ emission line to compensate for\nthe underlying Balmer absorption (cf. McCall, Rybski \\& Shields \\shortcite{MC85};\nRoy \\& Walsh \\shortcite{RW87}).
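A minimal sketch of this Balmer-decrement extinction estimate follows. The numerical values are illustrative assumptions: an intrinsic Halpha/Hbeta of 2.86 stands in for the Brocklehurst decrement quoted above, and f(Halpha) = -0.32 (with f(Hbeta) = 0) is an approximate Seaton/Howarth Galactic-curve value.

```python
import math

def balmer_extinction_c(F_Ha, F_Hb, R_int=2.86, f_Ha=-0.32):
    """Logarithmic extinction c(Hbeta) from the observed
    Halpha/Hbeta ratio.  With the convention
    I(lam)/I(Hb) = [F(lam)/F(Hb)] * 10**(c * f(lam)) and
    f(Hb) = 0, reddening makes the observed decrement exceed
    R_int, so c = log10(R_obs / R_int) / (-f_Ha).
    R_int and f_Ha here are illustrative values."""
    return math.log10((F_Ha / F_Hb) / R_int) / (-f_Ha)

def deredden(F_lam, F_Hb, c, f_lam):
    """Reddening-corrected line flux on the F(Hbeta) = 100
    scale of Table 2; f_lam is the reddening-curve value
    at the line's wavelength."""
    return 100.0 * (F_lam / F_Hb) * 10.0**(c * f_lam)
```

With these values an observed Halpha/Hbeta of about 5 gives c of about 0.76, comparable to the mean c = 0.75 found across the disc.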
If the underlying stellar absorption\nat H$\\beta$\\ is greater than 2~\\AA, then this correction would lead\nto an overestimation of the reddening correction, especially in {H~\\sc ii}\\ regions\nof low H$\\beta$\\ emission equivalent width. The spectrum was corrected\nin detail as a function of wavelength using the \nstandard reddening law of Seaton \\shortcite{SE79}, as\nspecified by Howarth \\shortcite{HO83}, assuming R = 3.1.\n\n\\section{Results}\n\nFigure 2 shows some examples typical of the best spectra, uncorrected for \ninterstellar reddening. The region\nnumbers refer to the {H~\\sc ii}\\ region identification numbers in Figure 1\nand Table 2. The differences in the relative strengths of the collisionally excited\nlines of {\\rm \\mbox{[O \\sc iii]}\\ }, {\\rm \\mbox{[N \\sc ii]}\\ } and {\\rm \\mbox{[S \\sc ii]}\\ } illustrate the change in excitation across\nthe galaxy. Region \\#30 is close to the nucleus.\n Table 2 lists the line intensities (H$\\beta$ = 100) corrected for reddening for the\nmain emission lines of the 55 {H~\\sc ii}\\ regions observed. $X$ and $Y$ refer\nto RA and Dec offsets in arcsec relative to the centre of the galaxy given in Table 1. The logarithmic\nextinction at H$\\beta$, $c$, includes both Galactic reddening (contribution about 0.07, \nsee Table 1) and extragalactic extinction.
$\\rho\/\\rho_0$ is the radial distance\nexpressed in terms of the fractional isophotal radius (Table 1).\nWe made only a marginal detection of the WC9 star\nin region \\#30 (L4 of \\cite{AL81}) found by Phillips \\& Conti \\shortcite{PC92};\nthis is in contrast to NGC~2997 where we observed 49 {H~\\sc ii}\\ regions\nand found signatures of Wolf-Rayet stars in three (and possibly four) regions.\n\n\\subsection{Diagnostic diagrams}\n\nSeveral line ratios have been calculated after correction for reddening.\nDefining {\\rm \\mbox{[O \\sc iii]}\\ } as 1.34$\\times I${\\rm \\mbox{[O \\sc iii]}\\ } 5007\\AA, {\\rm \\mbox{[N \\sc ii]}\\ }\nas 1.34$\\times I${\\rm \\mbox{[N \\sc ii]}\\ } 6584\\AA\\ and {\\rm \\mbox{[S \\sc ii]}\\ } as $I${\\rm \\mbox{[S \\sc ii]}\\ } 6717$+$6730\\AA, and using\n{\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } as a sequencing parameter, Figure 3 shows {\\rm \\mbox{[O \\sc ii]}}\/{{\\rm \\mbox {[O \\sc iii]}}\\ },\n{\\rm \\mbox{[N \\sc ii]}}\/{{\\rm \\mbox {[O \\sc ii]}}\\ }, {\\rm \\mbox{[S \\sc ii]}}\/{{\\rm \\mbox {[O \\sc ii]}}\\ } and {\\rm \\mbox{[O \\sc iii]}}\/{{\\rm \\mbox{[N \\sc ii]}}\\ } versus this parameter. The\ntight relationships shown by these sequences imply that\nthe majority of the nebulae are ionization-bound; these trends\nare characteristic of gas volumes photoionized by massive stars\n(McCall et al. \\shortcite{MC85}), and very similar, for example, to those measured in \nM101 (Kennicutt \\& Garnett \\shortcite{KG96}). The tight sequence shown by \n{\\rm \\mbox{[N \\sc ii]}}\/{{\\rm \\mbox {[O \\sc ii]}}\\ } indicates that the\nfibre spectrophotometry indeed produces reliable line fluxes.
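The combined-line definitions above can be sketched directly; reading the 1.34 factors as adding the weaker doublet members (4959 and 6548 Angstrom) at their fixed theoretical intensity ratios is an interpretation, not stated explicitly in the text.

```python
def diagnostic_ratios(I3727, I5007, I6584, I_SII, I_Hb=100.0):
    """Combined line ratios as defined in the text, on the
    F(Hbeta) = 100 scale of Table 2."""
    OII = I3727            # [O II] 3727
    OIII = 1.34 * I5007    # [O III] 5007, scaled to include 4959
    NII = 1.34 * I6584     # [N II] 6584, scaled to include 6548
    SII = I_SII            # [S II] 6717 + 6730
    return {
        "seq": (OII + OIII) / I_Hb,   # sequencing parameter
        "OII/OIII": OII / OIII,
        "NII/OII": NII / OII,
        "SII/OII": SII / OII,
        "OIII/NII": OIII / NII,
    }

# Region #1 of Table 2: [O II]=222, [O III]5007=158,
# [N II]6584=54, [S II]=49 (all relative to Hbeta = 100)
r = diagnostic_ratios(222.0, 158.0, 54.0, 49.0)
```

For region #1 the sequencing parameter is (222 + 1.34 x 158)/100, about 4.34.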
The change\nof {\\rm \\mbox{[N \\sc ii]}}\/{{\\rm \\mbox {[O \\sc ii]}}\\ } and {\\rm \\mbox{[S \\sc ii]}}\/{{\\rm \\mbox {[O \\sc ii]}}\\ } versus {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } is driven by the thermal\nproperties of the {H~\\sc ii}\\ regions, and not by change in the nitrogen\nand sulfur abundance\nratios (Garnett \\& Shields \\shortcite{GS87}). The abundance of oxygen mainly controls\nthe thermal equilibrium in {H~\\sc ii}\\ regions (McCall et al. \\shortcite{MC85}).\n\n\n\\begin{table*}\n\\caption{H~{\\sc ii} regions in NGC 1365 -- Reddening-corrected line fluxes (H$\\beta$ = 100)}\n\\begin{tabular}{rrrrccccccc}\n\\hline \\hline\n\nRW & X & Y & [OII] & [OIII] & HeI & [NII] & [SII] & c(H$\\beta$) & $\\rho\/\\rho_0$ \\\\\n\\# & $''$ & $''$ & 3727 & 5007 & 5876 & 6584 & 6717-30 & & \\\\\n(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) \\\\ \n\\hline \n1 & 210 & 250 & 222$\\pm$8 & 158$\\pm$5 & 14.4$\\pm$2.6 & 54$\\pm$3 & 49$\\pm$3 & 0.51$\\pm$0.10 & 0.97 \\\\\n2 & 151 & 235 & 368$\\pm$17 & 70$\\pm$4 & 12.9$\\pm$3.0 & 81$\\pm$4 & 80$\\pm$4 & 0.66$\\pm$0.13 & 0.84 \\\\\n3 & 101 & 254 & 351$\\pm$9 & 66$\\pm$2 & 10.8$\\pm$2.0 & 78$\\pm$2 & 68$\\pm$3 & 0.75$\\pm$0.07 & 0.84 \\\\\n4 & 76 & 248 & 606$\\pm$51 & 63$\\pm$9 & & 108$\\pm$10 & 137$\\pm$12 & 0.80$\\pm$0.25 & 0.81 \\\\\n5 & 23 & 216 & 249$\\pm$10 & 187$\\pm$6 & 11.7$\\pm$2.0 & 42$\\pm$2 & 56$\\pm$2 & 0.66$\\pm$0.10 & 0.72 \\\\\n6 & 26 & 207 & 364$\\pm$26 & 100$\\pm$8 & & 99$\\pm$7 & 90$\\pm$9 & 0.74$\\pm$0.18 & 0.68 \\\\\n7 & -14 & 160 & 329$\\pm$8 & 61$\\pm$2 & 13.5$\\pm$1.4 & 89$\\pm$2 & 57$\\pm$1 & 0.90$\\pm$0.06 & 0.56 \\\\\n8 & -22 & 158 & 476$\\pm$28 & 45$\\pm$4 & & 98$\\pm$6 & 71$\\pm$6 & 0.97$\\pm$0.16 & 0.56 \\\\\n9 & -21 & 139 & 174$\\pm$7 & 37$\\pm$2 & 10.6$\\pm$1.8 & 94$\\pm$3 & 77$\\pm$3 & 0.95$\\pm$0.09 & 0.50 \\\\\n10 & -31 & 131 & 262$\\pm$14 & 35$\\pm$4 & & 103$\\pm$5 & 79$\\pm$5 & 0.89$\\pm$0.12 & 0.48 \\\\\n11 & -50 & 104 & 228$\\pm$2 & 24$\\pm$1 & 9.9$\\pm$0.6 
& 102$\\pm$1 & 71$\\pm$1 & 0.86$\\pm$0.02 & 0.53 \\\\\n12 & -14 & 106 & 222$\\pm$5 & 123$\\pm$3 & 10.7$\\pm$1.1 & 73$\\pm$2 & 49$\\pm$1 & 0.84$\\pm$0.06 & 0.37 \\\\\n13 & -26 & 99 & 255$\\pm$4 & 64$\\pm$1 & 10.4$\\pm$0.7 & 91$\\pm$1 & 60$\\pm$1 & 0.84$\\pm$0.03 & 0.37 \\\\\n14 & -63 & 88 & 221$\\pm$6 & 82$\\pm$2 & & 71$\\pm$2 & 51$\\pm$2 & 0.73$\\pm$0.06 & 0.42 \\\\\n15 & -79 & 160 & 306$\\pm$27 & 129$\\pm$13 & & 52$\\pm$8 & 85$\\pm$16 & 0.27$\\pm$0.27 & 0.67 \\\\\n16 & -111 & 87 & 264$\\pm$25 & 53$\\pm$9 & & 80$\\pm$8 & 114$\\pm$12 & 0.58$\\pm$0.25 & 0.55 \\\\\n17 & -57 & 77 & 225$\\pm$5 & 30$\\pm$1 & 9.9$\\pm$1.0 & 96$\\pm$2 & 77$\\pm$1 & 0.84$\\pm$0.05 & 0.37 \\\\\n18 & -112 & 50 & 614$\\pm$18 & 115$\\pm$4 & 12.6$\\pm$2.3 & 84$\\pm$3 & 105$\\pm$5 & 0.77$\\pm$0.08 & 0.47 \\\\ \n19 & -114 & 2 & 366$\\pm$5 & 134$\\pm$2 & 9.6$\\pm$0.7 & 65$\\pm$1 & 63$\\pm$2 & 0.76$\\pm$0.04 & 0.41 \\\\\n20 & -94 & 6 & 326$\\pm$3 & 59$\\pm$1 & 10.3$\\pm$0.4 & 90$\\pm$1 & 53$\\pm$1 & 0.78$\\pm$0.02 & 0.34 \\\\\n21 & -91 & -11 & 268$\\pm$7 & 24$\\pm$2 & 10.0$\\pm$1.5 & 99$\\pm$3 & 62$\\pm$2 & 0.83$\\pm$0.06 & 0.32 \\\\\n22 & -41 & -16 & 105$\\pm$9 & 20$\\pm$3 & 9.5$\\pm$1.9 & 111$\\pm$4 & 62$\\pm$4 & 1.55$\\pm$0.09 & 0.14 \\\\\n23 & -57 & -29 & 222$\\pm$12 & 21$\\pm$3 & 8.0$\\pm$2.1 & 101$\\pm$4 & 73$\\pm$5 & 1.01$\\pm$0.10 & 0.20 \\\\\n24 & -89 & -39 & 381$\\pm$10 & 56$\\pm$2 & 9.1$\\pm$1.1 & 96$\\pm$2 & 59$\\pm$2 & 1.05$\\pm$0.06 & 0.31 \\\\\n25 & -146 & -50 & 274$\\pm$6 & 117$\\pm$3 & 13.0$\\pm$1.5 & 64$\\pm$2 & 62$\\pm$2 & 0.53$\\pm$0.06 & 0.50 \\\\\n26 & -223 & -12 & 234$\\pm$27 & 136$\\pm$16 & & 78$\\pm$12 & 115$\\pm$24 & 0.07$\\pm$0.31 & 0.78 \\\\\n27 & -133 & -74 & 291$\\pm$9 & 158$\\pm$5 & 13.5$\\pm$1.9 & 62$\\pm$2 & 62$\\pm$3 & 0.69$\\pm$0.09 & 0.47 \\\\\n28 & -67 & -86 & 291$\\pm$5 & 67$\\pm$2 & 13.6$\\pm$1.2 & 81$\\pm$2 & 56$\\pm$2 & 0.81$\\pm$0.04 & 0.33 \\\\\n29 & -81 & -64 & 158$\\pm$8 & 40$\\pm$3 & & 106$\\pm$4 & 93$\\pm$4 & 0.78$\\pm$0.10 & 0.32 \\\\\n30 & 16 & 15 & 
48$\\pm$2 & 4$\\pm$.5 & & 95$\\pm$1& 38$\\pm$2 & 1.19$\\pm$0.02 & 0.07 \\\\\n31 & 29 & 18 & 163$\\pm$11 & 29$\\pm$3 & & 101$\\pm$4 & 61$\\pm$4 & 1.18$\\pm$0.11 & 0.11 \\\\\n32 & 40 & 64 & 354$\\pm$13 & 34$\\pm$3 & & 107$\\pm$4 & 91$\\pm$3 & 0.87$\\pm$0.08 & 0.23 \\\\\n33 & 65 & 53 & 182$\\pm$2 & 55$\\pm$1 & 10.7$\\pm$0.4 & 87$\\pm$1 & 50$\\pm$1 & 0.88$\\pm$0.02 & 0.26 \\\\\n34 & 93 & 35 & 269$\\pm$6 & 119$\\pm$3 & 16.0$\\pm$1.5 & 71$\\pm$2 & 51$\\pm$2 & 0.47$\\pm$0.06 & 0.32 \\\\\n35 & 127 & 21 & 305$\\pm$12 & 92$\\pm$4 & & 80$\\pm$4 & 86$\\pm$3 & 0.61$\\pm$0.11 & 0.44 \\\\\n36 & 78 & 17 & 80$\\pm$3 & 14$\\pm$2 & 8.7$\\pm$1.2 & 96$\\pm$2 & 57$\\pm$2 & 0.67$\\pm$0.06 & 0.27 \\\\\n37 & 87 & 7 & 222$\\pm$12 & 34$\\pm$3 & & 108$\\pm$4 & 94$\\pm$6 & 0.78$\\pm$0.11 & 0.31 \\\\\n38 & 85 & -23 & 164$\\pm$4 & 27$\\pm$1 & 9.4$\\pm$0.6 & 105$\\pm$2 & 76$\\pm$2 & 1.11$\\pm$0.04 & 0.33 \\\\\n39 & 81 & -39 & 321$\\pm$20 & 47$\\pm$5 & 14.4$\\pm$2.5 & 102$\\pm$5 & 75$\\pm$4 & 1.37$\\pm$0.15 & 0.34 \\\\\n40 & 78 & -75 & 275$\\pm$5 & 38$\\pm$1 & 9.3$\\pm$0.8 & 93$\\pm$2 & 66$\\pm$2 & 1.10$\\pm$0.05 & 0.42 \\\\\n41 & 72 & -85 & 126$\\pm$6 & 44$\\pm$2 & 13.6$\\pm$1.8 & 88$\\pm$3 & 59$\\pm$4 & 0.92$\\pm$0.09 & 0.43 \\\\\n42 & 47 & -105 & 309$\\pm$2 & 66$\\pm$1 & 10.6$\\pm$0.3 & 95$\\pm$1 & 44$\\pm$1 & 1.14$\\pm$0.02 & 0.43 \\\\\n43 & 104 & -113 & 516$\\pm$10 & 122$\\pm$3 & 9.1$\\pm$1.6 & 74$\\pm$3 & 104$\\pm$3 & 0.53$\\pm$0.06 & 0.60 \\\\\n44 & 100 & -120 & 276$\\pm$8 & 240$\\pm$6 & 13.3$\\pm$2.3 & 52$\\pm$3 & 59$\\pm$4 & 0.40$\\pm$0.08 & 0.61 \\\\\n45 & 10 & -122 & 227$\\pm$5 & 103$\\pm$2 & 11.1$\\pm$1.7 & 76$\\pm$2 & 59$\\pm$2 & 0.61$\\pm$0.05 & 0.42 \\\\\n46 & 26 & -144 & 357$\\pm$9 & 82$\\pm$3 & 13.3$\\pm$1.7 & 82$\\pm$2 & 61$\\pm$4 & 0.63$\\pm$0.07 & 0.52 \\\\\n47 & 9 & -165 & 374$\\pm$20 & 91$\\pm$6 & & 89$\\pm$5 & 109$\\pm$7 & 0.58$\\pm$0.14 & 0.57 \\\\\n48 & 46 & -185 & 324$\\pm$13 & 107$\\pm$5 & 9.9$\\pm$2.8 & 73$\\pm$4 & 80$\\pm$5 & 0.63$\\pm$0.11 & 0.69 \\\\\n49 & -13 & -200 
& 541$\\pm$53 & 63$\\pm$9 & & 78$\\pm$9 & 81$\\pm$12 & 0.80$\\pm$0.29 & 0.67 \\\\\n50 & -13 & -230 & 428$\\pm$4 & 228$\\pm$2 & 11.5$\\pm$0.4 & 44$\\pm$1 & 35$\\pm$1 & 0.69$\\pm$0.02 & 0.77 \\\\\n51 & -61 & -240 & 380$\\pm$5 & 87$\\pm$2 & 8.6$\\pm$0.7 & 78$\\pm$2 & 66$\\pm$2 & 0.55$\\pm$0.03 & 0.78 \\\\\n52 & -123 & -259 & 360$\\pm$15 & 121$\\pm$6 & 10.7$\\pm$2.7 & 90$\\pm$4 & 100$\\pm$7 & 0.77$\\pm$0.12 & 0.87 \\\\\n53 & -142 & -258 & 180$\\pm$12 & 76$\\pm$6 & & 86$\\pm$6 & 76$\\pm$7 & 0.34$\\pm$0.16 & 0.89 \\\\\n54 & -181 & -241 & 395$\\pm$15 & 168$\\pm$7 & & 63$\\pm$4 & 78$\\pm$6 & 0.38$\\pm$0.11 & 0.90 \\\\\n55 & -193 & -242 & 532$\\pm$51 & 60$\\pm$9 & & 77$\\pm$9 & 110$\\pm$13 & 0.74$\\pm$0.28 & 0.92 \\\\\n\\hline\n\\end{tabular} \n\\end{table*}\n\n\\subsection{Radial gradients in NGC 1365}\n\nGalactocentric distances and azimuthal positions of the\n{H~\\sc ii}\\ regions were calculated by assuming a 40$^\\circ$ inclination of the galaxy disc to the \nplane of sky, and a position angle of 220$^\\circ$ from J\\\"ors\\\"ater \\& \nvan Moorsel \\shortcite{JM95}. As a normalizing radius, we used the value of\nthe isophotal radius $\\rho_0$ of 5.$'$61 (RC3); the\npossible choices of a normalizing radial parameter are discussed\nin detail by Vila-Costas \\& Edmunds \\shortcite{VE92} and Zaritsky et al. \\shortcite{ZA94}.\nLine ratios which show systematic radial variations are\nthe excitation {\\rm \\mbox{[O \\sc iii]\/{H$\\beta$}}\\ }, the abundance indicators {\\rm \\mbox{[N \\sc ii]}}\/{{\\rm \\mbox{[O \\sc iii]}}\\ } and\n{\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } and reddening (Fig. 4). The radial positions\nof the {H~\\sc ii}\\ regions are given in Table 2 in terms of\nthe fractional isophotal radius. 
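The deprojection just described can be sketched as follows. The sign and axis conventions (X increasing with RA, Y to the north, position angle measured from north through east) are inferred rather than stated in the text, but with i = 40 degrees, PA = 220 degrees and rho_0 = 5.61 arcmin they reproduce the rho/rho_0 column of Table 2.

```python
import math

def fractional_radius(x, y, incl_deg=40.0, pa_deg=220.0,
                      rho0_arcsec=5.61 * 60.0):
    """Deprojected galactocentric distance of an H II region,
    as a fraction of the isophotal radius rho_0 = 5.61'.
    x, y are the RA/Dec offsets of Table 2 in arcsec; the
    axis/sign conventions are inferred, not from the text."""
    pa = math.radians(pa_deg)
    i = math.radians(incl_deg)
    # component of the offset along the line of nodes ...
    along = x * math.sin(pa) + y * math.cos(pa)
    # ... and perpendicular to it, stretched by 1/cos(i)
    perp = (-x * math.cos(pa) + y * math.sin(pa)) / math.cos(i)
    return math.hypot(along, perp) / rho0_arcsec
```

Region #1 at (210'', 250'') gives 0.97 and region #50 at (-13'', -230'') gives 0.77, matching column (10) of Table 2.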
Despite a large scatter, there \ndoes indeed appear to be a systematic\nvariation of $c$(H$\\beta$), the logarithmic extinction at H$\\beta$, as a function\nof radius; such a radial trend has been observed so far only in the galaxy\nM101 (Kennicutt \\& Garnett \\shortcite{KG96}). The linear coefficient\nof correlation between $c$(H$\\beta$) and $\\rho\/\\rho_0$ is\nR = --0.46. The value of $c$(H$\\beta$)\\ is high\non average across the disc, dropping from $\\sim$1.2 near\nthe centre to 0.6--0.8 in the outer disc; the dust lanes of NGC 1365 are\nremarkably strong and indicate a rich dust content. Despite the relatively high\nextinction (mean $c$ = 0.75$\\pm$0.27), one can see through the galaxy;\nin particular, a distant elliptical (object G-1 on Figure 1) was found just \nsouthwest of the bar.\n\n\\begin{figure*}\n\\centerline{\\psfig{file=n1365fig3.eps,height=16.0cm,clip=}}\n\\caption{Diagnostic diagrams of log {\\rm \\mbox{[O \\sc ii]}}\/{{\\rm \\mbox {[O \\sc iii]}}\\ }, log {\\rm \\mbox{[O \\sc iii]}}\/{{\\rm \\mbox{[N \\sc ii]}}\\ }, log {\\rm \\mbox{[N \\sc ii]}}\/{{\\rm \\mbox {[O \\sc ii]}}\\ }\nand log {\\rm \\mbox{[S \\sc ii]}}\/{{\\rm \\mbox {[O \\sc ii]}}\\ } vs. the sequencing index log {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } are shown for the \nobserved {H~\\sc ii}\\ regions in NGC 1365.}\n\\label{fig3}\n\\end{figure*}\n\nAs seen from the spectra of Figure 2, the level of excitation in NGC 1365\nis generally low, and no direct measurement of the electron temperatures\nis possible. Therefore semi-empirical methods must be relied on to derive abundances.\nThese methods have been widely discussed and used (e.g.\nEdmunds \\& Pagel \\shortcite{EP84}; McCall et al. \\shortcite{MC85}; Evans \\shortcite{EV86};\nGarnett \\& Shields \\shortcite{GS87}; Vilchez et al. \\shortcite{VI88}; Walsh \\& Roy \n\\shortcite{WR89};\nBelley \\& Roy \\shortcite{BR92}; Martin \\& Roy \\shortcite{MR94}; Zaritsky et al. \n\\shortcite{ZA94}).
They are based on the result that larger values of {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } or \n{\\rm \\mbox{[O \\sc iii]}}\/{{\\rm \\mbox{[N \\sc ii]}}\\ } are correlated with higher electron temperatures and lower abundances of \noxygen. These calibrations have been refined by Edmunds \\& Pagel \\shortcite{EP84},\nMcCall et al. \\shortcite{MC85}, and Dopita \\& Evans \\shortcite{DE86}; Zaritsky et al.\n\\shortcite{ZA94} \nhave produced a synthetic calibration for {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } by merging the\nthree previous ones. It must be recalled that factors other than\nheavy element abundances also modify nebular\nline strengths. The limitations and uncertainties of the\nsemi-empirical methods due to various effects have also been discussed by\nMcGaugh \\shortcite{MG91} for ionization parameter and effective temperature\nof the ionizing radiation, by Henry \\shortcite{HE93} and Shields \\& Kennicutt\n\\shortcite{SK95} for dust, by Oey \\& Kennicutt \\shortcite{OK93} for density, by \nKinkel \\& Rosa \\shortcite{KR94}\nfor high Z effects and by Roy et al. \\shortcite{RO96A} for moderately low Z behaviour.\n\nRadial gradients in {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } and {\\rm \\mbox{[O \\sc iii]}}\/{{\\rm \\mbox{[N \\sc ii]}}\\ } primarily reflect abundance\ngradients wherever those line ratios are sampled over a significant\nfraction of the {H~\\sc ii}\\ region volume. The absolute O\/H abundance predicted\ncan vary depending on the \ncalibration employed, but the relative trends can be probed reliably\n(see for example Henry \\& Howard \\shortcite{HH95}; Zaritsky et al.
\\shortcite{ZA94}; \nMartin \\& Roy \\shortcite{MR94}; Kennicutt \\& Garnett \\shortcite{KG96}).\nThe radial variation in oxygen abundance \nin NGC 1365 was determined\nfrom three calibrations: i) {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } as calibrated by Edmunds \\& Pagel \\shortcite{EP84};\nii) {\\rm \\mbox{[O \\sc iii]}}\/{{\\rm \\mbox{[N \\sc ii]}}\\ } also calibrated by Edmunds \\& Pagel \\shortcite{EP84}; and\niii) {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } as calibrated by Zaritsky et al. \\shortcite{ZA94}. The reader can\nthen compare the results with similar abundance data on the few other\ngalaxies which have been thoroughly sampled with spectroscopic\ntechniques, e.g. NGC 2997 (Walsh \\& Roy \\shortcite{WR89}) and\nM101 (Kennicutt \\& Garnett \\shortcite{KG96}). The overall trend in NGC 1365 is\na rather shallow abundance gradient at about --0.50 dex $\\rho_0$$^{-1}$, \nor --0.02 dex\/kpc;\nthe exact parameters of the correlation depend on the calibration used, as shown\nin Table 3, where $R$ is the coefficient of correlation. The three calibrations, \n{\\rm \\mbox{[O \\sc iii]}}\/{{\\rm \\mbox{[N \\sc ii]}}\\ } by Edmunds \\& Pagel \\shortcite{EP84}, {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } by\nZaritsky et al. \\shortcite{ZA94} and {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } by Edmunds \\& Pagel \n\\shortcite{EP84}, give rather\nsimilar results within the errors. The calibration of {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } by Edmunds \\& Pagel\n\\shortcite{EP84} tends to give a larger amplitude for the variation of O\/H across \nthe disc. A shallow gradient is expected for strongly\nbarred galaxies, and NGC 1365, with a deprojected bar\naxis ratio $b\/a(i)$ = 0.51, has a strong bar \n(\\cite{MA95}). 
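The gradients quoted here and in Table 3 are straight-line fits of 12 + log O\/H against the normalized radius $\rho\/\rho_0$, computed over the full disc and on either side of a break radius (discussed below). A minimal sketch of this fitting procedure, using synthetic placeholder data (the measured abundances are not reproduced here) and an illustrative break at $0.55\rho_0$:

```python
import numpy as np

def broken_gradient(rho, oh, rho_break=0.55):
    """Fit 12 + log(O/H) = a + b * (rho/rho0) separately inside and
    outside a chosen break radius; return (intercept, slope, R)
    for each zone, where R is the coefficient of correlation."""
    results = {}
    for name, mask in (("inner", rho <= rho_break),
                       ("outer", rho >= rho_break)):
        x, y = rho[mask], oh[mask]
        b, a = np.polyfit(x, y, 1)       # slope, intercept
        r = np.corrcoef(x, y)[0, 1]      # coefficient of correlation
        results[name] = (a, b, r)
    return results

# Illustrative synthetic data mimicking a steep-shallow break,
# with ~0.1 dex scatter at fixed radius
rng = np.random.default_rng(0)
rho = rng.uniform(0.05, 1.0, 55)
oh = np.where(rho <= 0.55, 9.25 - 0.8 * rho, 8.81)
oh = oh + rng.normal(0.0, 0.1, rho.size)

fits = broken_gradient(rho, oh)
```

With data like these, the inner-zone slope comes out clearly negative while the outer-zone slope is consistent with zero, reproducing the qualitative pattern of Table 3.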
The central intercept is at about 12 + log O\/H = 9.15\n(depending on the calibration used).\n\n In addition, one observes\na rather obvious break at R $\\sim$ 185--200$''$ ($\\rho = 0.55$--$0.60 \\rho_0$); inside this\nradius the O\/H gradient is moderately steep (--0.80 dex $\\rho_0$$^{-1}$, i.e. --0.05 dex kpc$^{-1}$), while\noutside it is flat. We have chosen $\\rho = 0.55 \\rho_0$ as the galactic\nradial distance where the break occurs and have calculated the\nappropriate equations and coefficient of correlation (Table 3);\nthis position for the break ensures a reasonable number of sample points\n(34 in the inner part and 21 in the outer part)\n while corresponding to small residuals for the fits. \nThe correlations for the inner zone are almost as strong as\nfor all the points. Comparisons\nbetween the three abundance calibrations can be made; it is obvious from Table 3 that\nthere is {\\it no} abundance gradient (within the uncertainties) beyond the break.\nWe note that the values of O\/H derived by using\nthe {\\rm \\mbox{[O \\sc iii]}}\/{{\\rm \\mbox{[N \\sc ii]}}\\ } calibration by Edmunds \\& Pagel \\shortcite{EP84}\nare very similar to those given by the synthetic calibration\nof Zaritsky et al. \\shortcite{ZA94}; this applies to individual\npoints and to the correlations (Table 3). The break is also well seen\nin the radial plot of {\\rm \\mbox{[O \\sc iii]\/{H$\\beta$}}\\ } (not shown); effects related to dust or an erroneous reddening determination can therefore \nbe excluded.\n\nWe searched for possible azimuthal asymmetries in the O\/H distribution,\nbut found no evidence for any. 
The significant asymmetry in the O\/H\ndistribution found by\nKennicutt \\& Garnett \\shortcite{KG96} in M101 thus remains a rather unusual\nfeature.\nThe apparent dispersion in abundance ($\\sim$0.2 dex) at fixed radius is probably \nmainly due to variations in nebular ionization, as discussed\nby Kennicutt \\& Garnett \\shortcite{KG96}.\n \n\\begin{table*}\n\\caption{Equations of the O\/H radial distribution in NGC 1365} \n\\begin{tabular}{lcrc}\n\\hline \\hline\nRadius & Equation & $R$ & Calibration \\\\\n\\hline \nR $\\leq$ 1.0 $\\rho_0$ & 12 $+$ log O\/H = 9.10$\\pm$0.04 - 0.42$\\pm$0.07 $\\rho_0$ & -0.62 &{\\rm \\mbox{[O \\sc iii]}}\/{{\\rm \\mbox{[N \\sc ii]}}\\ } (EP84)\\\\\nR $\\leq$ 1.0 $\\rho_0$ & 12 $+$ log O\/H = 9.17$\\pm$0.05 - 0.53$\\pm$0.09 $\\rho_0$ & -0.63 & {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } (ZA94)\\\\\nR $\\leq$ 1.0 $\\rho_0$ & 12 $+$ log O\/H = 9.12$\\pm$0.06 - 0.68$\\pm$0.11 $\\rho_0$ & -0.65 &{\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } (EP84)\\\\\n\\hline \\hline\nR $\\leq$ 0.55 $\\rho_0$ & 12 $+$ log O\/H = 9.23$\\pm$0.06 - 0.78$\\pm$0.17 $\\rho_0$ & -0.63 &{\\rm \\mbox{[O \\sc iii]}}\/{{\\rm \\mbox{[N \\sc ii]}}\\ } (EP84)\\\\\nR $\\leq$ 0.55 $\\rho_0$ & 12 $+$ log O\/H = 9.26$\\pm$0.08 - 0.77$\\pm$0.21 $\\rho_0$ & -0.54 & {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } (ZA94)\\\\\nR $\\leq$ 0.55 $\\rho_0$ & 12 $+$ log O\/H = 9.28$\\pm$0.10 - 1.10$\\pm$0.28 $\\rho_0$ & -0.57 & {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } (EP84)\\\\\n\\hline\nR $\\geq$ 0.55 $\\rho_0$ & 12 $+$ log O\/H = 8.85$\\pm$0.15 - 0.07$\\pm$0.20 $\\rho_0$ & -0.09 & {\\rm \\mbox{[O \\sc iii]}}\/{{\\rm \\mbox{[N \\sc ii]}}\\ } (EP84)\\\\\nR $\\geq$ 0.55 $\\rho_0$ & 12 $+$ log O\/H = 8.66$\\pm$0.17 + 0.13$\\pm$0.23 $\\rho_0$ & 0.13 & {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } (ZA94)\\\\\nR $\\geq$ 0.55 $\\rho_0$ & 12 $+$ log O\/H = 8.50$\\pm$0.18 + 0.16$\\pm$0.24 $\\rho_0$ & 0.15 & {\\rm ([O \\sc 
iii] + [O \\sc iii])\/{H$\\beta$}\\ } (EP84)\\\\\n\\hline \n\\end{tabular}\\\\\n\\end{table*}\n\n\\begin{figure*}\n\\centerline{\\psfig{file=n1365fig4.eps,height=14.0cm,clip=}}\n\\caption{(a) The extinction $c$(H$\\beta$) is plotted against the normalized isophotal\nradius in NGC 1365. (b) The sequencing index log {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } vs. the normalized\nisophotal radius is shown.}\n\\label{fig4}\n\\end{figure*}\n\n\n\\section{Discussion}\n\n\\begin{figure*}\n\\centerline{\\psfig{file=n1365fig5.eps,height=16.0cm,clip=}}\n\\caption{(a) The gradient in oxygen abundance across NGC 1365 using\nthe calibration of {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } by Edmunds \\& Pagel (1984). (b) The\nsame gradient using the calibration of {\\rm \\mbox{[O \\sc iii]}}\/{{\\rm \\mbox{[N \\sc ii]}}\\ } by Edmunds \\& Pagel\n(1984). (c) The gradient in oxygen abundance using the synthetic\ncalibration of {\\rm ([O \\sc ii] + [O \\sc iii])\/{H$\\beta$}\\ } by Zaritsky et al. (1994). CR and -4\/1 indicate the\nradial positions of the galaxy corotation and the -4\/1 resonance, respectively,\nas found by J\\\"ors\\\"ater \\& van Moorsel (1995).}\n\\label{fig5}\n\\end{figure*}\n\n\\begin{figure*}\n\\centerline{\\psfig{file=n1365fig6.eps,height=10.0cm,clip=}}\n\\caption{ Comparison of the O\/H abundance gradients in the interstellar gas\nof two barred galaxies (NGC 1365 and NGC 3359) and two\nnormal spirals (M 101 and NGC 2997). The\nnumber in parentheses indicates the sample size of {H~\\sc ii}\\ regions\nin each galaxy.}\n\\label{fig6}\n\\end{figure*}\n\n\nThe presence of a break in the radial abundance\ndistribution is one outstanding signature that a bar\nmay have recently ($\\leq$ 1 Gyr) formed in a spiral galaxy. \nFollowing Friedli et al. 
\\shortcite{FR94} and\nFriedli \\& Benz \\shortcite{FB95} who have\ndeveloped this scenario, two breaks may actually appear\nin the radial abundance distribution: (i) a ``steep-shallow''\nbreak, near corotation, due to vigorous enrichment by\nstar formation in the bar, followed by a much flatter portion\ndominated by the dilution effect of the outward\nflow, and (ii) an outer ``shallow-steep'' break defining\nwhere the outflow has had time to penetrate \nthe outer disc. This ``shallow-steep'' outer\nbreak moves to larger radius as time evolves,\nand as the extent of the radial homogenization reaches further out\nin the disc (Friedli et al. \\shortcite{FR94}; Friedli \\& Benz \\shortcite{FB95}).\nAs the bar ages, it dissolves and runs out of gas. Star formation then\nweakens, and dilution effects in the interstellar gas come to dominate;\nafter a few Gyr,\nthe resultant global abundance gradient ends up shallow, the break\nbecomes indistinguishable and the gradient displays\na monotonic decrease. The slope of any\npre-existing radial gaseous or stellar abundance gradient decreases\nfrom the start of the first phase. To have a break, one needs vigorous\nstar formation in the bar, and grand design spiral arms for\nstrong outward flows.\nNGC 1365 is a barred galaxy with moderate star formation along the bar\nbut an intense nuclear starburst (Seyfert nucleus and\nring of {H~\\sc ii}\\ regions); this is evidence for an age of $\\sim$1 Gyr\n(see discussion in Martin \\& Roy \\shortcite{MR95}).\n\n A ``steep-shallow'' inner break is found in the abundance\ndistribution of NGC 1365 and it is located quite a bit further out\nthan the corotation radius (Figure 5).\nIt is assumed that due to an interaction or a merger,\nthe disc of NGC 1365 became unstable and developed a bar in a\nrecent epoch.\nOn account of angular momentum transfer via the arms, the bar induced large-scale\nradial flows of interstellar gas. 
First,\na quick phase of intense star formation occurred along the\nbar major axis as well as along the spiral arms; galaxies\nshowing this property (e.g. NGC 3359) have rather young bars. \nAt a later time (closer to our epoch), a starburst was triggered when\ngas had collapsed to the centre in a nuclear\nring, as observed by Sandqvist et al. \\shortcite{SA95}. Star formation now continues \nmainly along the spiral arms.\nWith the fuel progressively consumed, star formation\nis essentially limited to the ring and the spiral arms. \nThus in this framework of a recently formed bar,\n NGC 1365 would be a disc spiral with a strong,\ngas rich, but moderately young bar (age $\\sim$1 Gyr). As indicated by the absence of a rich\npopulation of {H~\\sc ii}\\ regions in the bar, the star formation\nrate is now weak, except in the centre and at the bar ends. \n\n From the neutral hydrogen\nkinematics, corotation (at $r = 145''$ or at $\\rho \\sim 0.43 \\rho_0$)\nis at 1.21 times the bar semi-major axis\n(see Lindblad, Lindblad \\& Athanassoula \\shortcite{LLA96}).\nThere is a clear break in the O\/H abundance radial distribution \nat $\\rho \\sim 0.55\\rho_0$ or $r = 185''$ (Figure 5). Although\nthere are some uncertainties ($\\pm 10\\%$) in the locations of corotation\nand of the resonances (dependence on the model)\nand of the break, the latter is beyond corotation; \nthe break in the O\/H gradient is close to the -4\/1\nresonance at $r = 180''$. The position of the abundance gradient break\n$\\sim 30''$ beyond corotation is puzzling, since this\nis a region of strong radial mixing. 
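The various radii quoted in this section can be cross-checked against one another; a quick sketch using only the numbers given in the text ($\rho_0$ follows from corotation lying at $r = 145'' \simeq 0.43\rho_0$):

```python
# Cross-check of the quoted radii (all input values from the text)
r_corot = 145.0                 # corotation radius, arcsec
rho_corot = 0.43                # corotation in units of rho_0
rho_0 = r_corot / rho_corot     # isophotal radius, ~337 arcsec

r_break = 185.0                 # radius of the O/H break, arcsec
rho_break = r_break / rho_0     # ~0.55, as adopted for the fits

bar_semimajor = r_corot / 1.21  # corotation = 1.21 x bar length, ~120 arcsec
```

The quoted break radius of $185''$ and the adopted $0.55\rho_0$ are thus mutually consistent.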
However, mixing always competes\nwith enrichment due to star formation in changing the local abundance.\nThe intense star-forming activity in NGC 1365 observed at the bar ends and \njust beyond (betrayed by clumps of several bright {H~\\sc ii}\\ regions) may\ncompensate dilution and maintain high abundances, pushing the\nlocation of the break further out, to where star formation is reduced.\n\nThe apparent dispersion of the O\/H points, at\na fixed radius, does not vary with galactocentric distance.\nThis is to be contrasted with the behavior of another\nbarred galaxy, NGC 3359, which also has a break in \nits radial abundance distribution (see Fig. 6). NGC 3359 has vigorous star\nformation in its bar and a steep inner abundance gradient; \nthere the break is observed at\ncorotation. The abundance fluctuations, at a fixed radius, are much\nlarger in the outer parts of the disc -- where the gradient is flat --\nthan in the inner regions where the gradient is steep.\nMartin \\& Roy \\shortcite{MR95} have interpreted this behavior \nof the O\/H abundances in NGC 3359\nas due to a recently formed bar $\\sim$400 Myr old, an age less than\nthe mean azimuthal mixing time in galaxy discs \\cite{RK95}. \nWe suggest that the homogenizing action of the bar in NGC 1365 may have acted on a timescale longer\nthan the characteristic azimuthal mixing time, that is, the\nbar is older than 500 Myr. The high mass of gas accumulated in the centre,\nas implied by the presence of a nuclear starburst and a Seyfert nucleus,\nalso indicates an age $\\geq$ 1 Gyr for the bar of NGC 1365. 
Overall,\nseveral features are consistent with a bar formed\nabout 1 Gyr ago.\n\nIn Figure 6, the abundance gradients of four galaxies which\nhave had extensive spectrophotometry across their discs are shown:\ntwo normal spiral galaxies, NGC 2997 (Walsh \\& Roy \\shortcite{WR89}) and M 101\n(Kennicutt \\& Garnett \\shortcite{KG96}); and two barred \ngalaxies, NGC 3359 (Martin \\& Roy \\shortcite{MR95}) and NGC 1365 (this work).\nThe shallower gradients and the presence of breaks in the radial\nabundance distribution are\neasily seen in the two barred galaxies. The difference between\nbarred and normal spirals in radial\nabundance distributions is striking. If the recently formed bar hypothesis\nis correct, the numerical simulations of Friedli et al. \\shortcite{FR94}\nof a disc with a recently formed bar and star formation\nshow that the pre-bar O\/H radial distribution in NGC 1365 would have \nbeen very similar to that observed today in M~101.\n\n\nFinally, the models of Friedli and collaborators\npredict two breaks in the radial abundance distribution of galaxies\nwith young bars. So far no observational evidence\nfor the ``shallow-steep'' break in the outer disc has been found.\nIn order to ascertain the scenario of secular evolution of\ndisc galaxies by the recurrent action of bars (Martinet \\shortcite{MAR95}), it will be\nnecessary to find this outer break. This more definite test of the ``young bar'' hypothesis appears difficult, however. As \nthe gas density falls with radial distance and the star formation\nrate weakens, there are fewer and smaller {H~\\sc ii}\\ regions,\nnot unlike the interarm {H~\\sc ii}\\ regions. Long integration times will be required\nin order to determine O\/H abundances, and there may be only a few galaxies with a sufficient number of outer\n{H~\\sc ii}\\ regions to provide statistically significant samples.\n\n\n\\noindent\n{\\bf Acknowledgments} \\\\\nWe acknowledge the technical support of J. 
Pogson, the AAT telescope \noperator. Discussions with Daniel Friedli, Pierre Martin, and\nLaurent Drissen were most helpful.\nWe thank the PATT Committee for assigning time on the AAT for this\nproject. This investigation was funded in part by the Natural\nSciences and Engineering Research Council of Canada, the Fonds FCAR\nof the Government of Quebec and by\nthe Visitor Program of the European Southern Observatory through financial\nsupport of JRR.\n\n\\vspace{1cm}\n\n\\noindent\n{\\Large \\bf Appendix A: Serendipitously discovered galaxies} \\\\\n\nAs mentioned in section 2.2, three of the targets chosen as {H~\\sc ii}\\ regions were \nidentified from their spectra as galaxies. They are indicated as G-1 to G-3 on\nFigure 1. Brief details of the targets and their spectra are included here.\n\n\\noindent\n{\\bf G-1:} This bright galaxy is quite round (ellipticity on the B image $\\sim$ 0), so \nit is probably of type E0--E1. Its major axis has a half width of about 4$''$,\ncorrected for seeing. The spectrum shows a strong red continuum with absorption\nlines of Ca~{\\sc II} H and K, H$\\gamma$, the G-band and Mg~{\\sc I}. From the wavelengths\nof the Ca~{\\sc II} H and K lines the redshift is 0.103. There is a \ndetection of Ca~{\\sc II} H and K absorption at the redshift of NGC 1365, making the\ngalaxy a very useful probe for studying the line-of-sight velocity dispersion\nin an interesting region near the bar.\n\n\\noindent\n{\\bf G-2:} This galaxy has a resolved, almost circular core of half width $\\sim$2$''$\nand shows two spiral arms of total extent $\\sim$15$''$. The spectrum shows\na redward-rising continuum with Ca~{\\sc II} H and K and Mg~{\\sc I} absorption lines.\nFrom the observed wavelengths of the Ca~{\\sc II} H and K lines, the redshift is 0.294.\n\n\\noindent\n{\\bf G-3:} This galaxy displays a strong disc with a possible knot at its SE edge,\nperhaps indicating an interacting system. 
The spectrum shows a rather flat continuum\nwith strong emission lines of {\\rm \\mbox{[O \\sc ii]}\\ }, H$\\beta$, and {\\rm \\mbox{[O \\sc iii]}\\ }. The redshift is\n0.131. The {\\rm \\mbox{[O \\sc iii]}\\ } 5007\/H$\\beta$ ratio is 2.4 and the {\\rm \\mbox{[O \\sc ii]}\\ }\/H$\\beta$\\ ratio is 4.5.\n\n\\section{Introduction}\n\nSince their inception in 2001 \\cite{Aussillous2001}, liquid marbles (LMs) have been a source of growing interest across fields as diverse as medicine \\cite{Sarvi2015,Ledda2016}, engineering \\cite{Fujii2017,Tian2010a} and chemistry \\cite{Paven2016,Wei2016}. LMs are constructed from microlitre droplets of water, supported by a layer of hydrophobic particles on the surface. In this manner, the hydrophobic particles minimise the comparatively high surface energy of water by encapsulating the droplet and keeping it near spherical. This permits the water droplet to remain non-wetting on many (traditionally wettable) surfaces. \n\nThere are two major variables affecting the properties of a LM: the core and the coating. Traditionally the encapsulated liquid is water, although there are a range of both common (glycerol \\cite{Aussillous2001}) and uncommon (petroleum \\cite{Bormashenko2015a}) alternatives. The coating provides the largest effect on the mechanical properties of the marble, as it is the coating that interacts with the surface the LM is resting on. Coating parameters that can be modified to impart the desired properties include the composition, grain size, and mix ratio. By varying these, it is possible to adapt a liquid marble to many different situations.\n\nLiquid marbles have previously been investigated as fluidic transport devices. They are ideally suited to the transport of microlitre quantities of liquid, due to their non-wetting nature. 
Recent progress has been made in this area, with LM movement initiated by magnets \\cite{Khaw2016}, electrostatic fields \\cite{Aussillous2006}, gravity \\cite{Aussillous2001}, lasers \\cite{Paven2016} and the Maran\\-goni effect \\cite{Ooi2015} all reported. The movement of LMs can be exploited for chemical reactions, by controlling the time and place of reagent mixing. This has been demonstrated both with the coalescing of LMs \\cite{Chen2017}, and in the controlled destruction of LMs once they arrive at a chosen location \\cite{dupin2009stimulus,Fujii2010}.\n\n\nIn interaction gates, Boolean values of inputs and outputs are represented by the presence of physical objects at a given site at a given time. If an object is present at an input\/output we assume that the logical value of the input\/output is {\\sc True}; if the object is absent the logical value is {\\sc False}. The signal-objects realise a logical function when they pass through a collision site. The objects might fuse, annihilate or deflect on impact. \n\nThe fusion gate (figure~\\ref{fig:gate}(a)) was first implemented in fluidic devices in the 1960s. The gate is the most well known (on a par with the bistable amplifier) device in fluidics~\\cite{peter1965and, belsterling1971fluidic}: two nozzles are placed at right angles to each other; when there are jet flows in both nozzles they collide and merge into a single jet entering the central outlet. If the jet flow is present only in one of the input nozzles it goes into the vent. The central outlet represents {\\sc and} and the vent represents {\\sc and-not}. 
The fusion-based gate was also employed in designs of computing circuits in Belousov-Zhabotinsky medium \\cite{adamatzky2004collision, adamatzky2007binary, toth2010simple, adamatzky2011towards}, where excitation wave-fragments merge when they collide; in the actions of slime mould \\cite{Mayne2015Vesicles}, when distributed vesicles collide; and in a crab-based gate \\cite{gunji2011robust}, where swarms of soldier crabs merge into a single swarm.\n\n\\begin{figure}[htbp]\n\\centering\n\\subfigure[]{\\includegraphics[scale=0.3]{fusion_collision}}\n\\subfigure[]{\\includegraphics[scale=0.3]{annihilation_collision}}\\\\\n\\subfigure[]{\\includegraphics[scale=0.3]{elastic_collision}}\n\\subfigure[]{\\includegraphics[scale=0.3]{nonelastic_collision}}\n\\caption{Interaction-based gates. \n(a)~Fusion of signals.\n(b)~Annihilation of signals.\n(c)~Elastic deflection of signals.\n(d)~Non-elastic deflection of signals.}\n\\label{fig:gate}\n\\end{figure}\n\nIn the annihilation gate (figure~\\ref{fig:gate}b) signals disappear on impact. This gate has two inputs and two outputs; each of the outputs represents {\\sc and-not}. Computational universality of Conway's Game of Life cellular automaton was demonstrated using annihilation-based collisions between gliders~\\cite{berlekamp1982winning}. We can also implement the annihilation gate by colliding excitation wave-fragments at certain angles~\\cite{adamatzky2011polymorphic}. \n\nA key deficiency of the fusion and annihilation gates is that, when implemented in media other than excitable spatially-extended systems, they do not preserve the physical quantity of signals, e.g. when two signals merge the output signal has double the mass of a single input signal. This deficiency is overcome in conservative logic, proposed by Fredkin and Toffoli in 1978~\\cite{fredkin2002conservative}. 
Logical values are represented by solid elastic bodies, aka billiard balls, which deflect when made to collide with one another (figure~\\ref{fig:gate}c). Intact output trajectories of the balls represent the {\\sc and-not} function; output trajectories of deflected balls represent the {\\sc and} function. Gates based on elastic collisions led to the development of reversible (both logically and physically) gates: the Fredkin~\\cite{fredkin2002conservative} and Toffoli~\\cite{toffoli1980reversible} gates, which are the key elements of low-power computing circuits \\cite{de2011reversible, bennett1988notes}, and amongst the key components of quantum~\\cite{berman1994quantum, barenco1995elementary, smolin1996five, zheng2013implementation} and optical~\\cite{kostinski2009experimental} computing circuits. \n\nThe soft-sphere collision gate proposed by Margolus~\\cite{margolus2002universal} gives us a rather realistic representation of interaction gates with real-life physical and biological bodies (figure~\\ref{fig:gate}d). Logical value $x=1$ is given by a ball present in the input trajectory marked $x$, and $x=0$ by the absence of the ball in the input trajectory $x$; the same applies to $y=1$ and $y=0$, respectively. When the two balls, approaching the collision gate along paths $x$ and $y$, collide, they compress but then spring back and reflect. As a result, the balls come out along the paths marked $xy$. If only one ball approaches the gate, that is for inputs $x=1$ and $y=0$ or $x=0$ and $y=1$, the balls exit the gate via path $x\\overline{y}$ (for input $x=1$ and $y=0$) or $\\overline{x}y$ (for input $x=0$ and $y=1$). Soft-sphere-like gates have been implemented using microlitre-sized water droplets on a superhydrophobic copper surface \\cite{Mertaniemi2012}. Using channels cut into the surface, {\\sc not-fanout}, {\\sc and-or} and {\\sc flip-flop} gates were demonstrated. 
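The input--output behaviour of the soft-sphere gate just described can be summarised as a truth table. A minimal sketch (ball presence encoded as True\/False; the two deflected $xy$ output paths are collapsed into a single entry):

```python
def margolus_gate(x, y):
    """Soft-sphere (Margolus) interaction gate.

    x, y: presence (True/False) of a ball on each input path.
    A lone x ball exits via the x AND NOT y path, a lone y ball via
    the NOT x AND y path, and a colliding pair is deflected onto
    the x AND y paths."""
    return {"x_not_y": x and not y,
            "x_and_y": x and y,
            "not_x_y": (not x) and y}

# Exhaustive check over the four input combinations
table = {(x, y): margolus_gate(x, y)
         for x in (False, True) for y in (False, True)}
```

The exhaustive table confirms that at most one kind of output path carries a signal for any input pair, which is what makes the gate usable as a building block for {\sc and}\/{\sc and-not} circuits.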
The water droplets only rebounded in a very narrow collision property window, and a thorough \\& complete superhydrophobic surface treatment was required.\n\nIn this paper, we report the first exploitation of liquid marbles for implementation of interaction gates. The gate realised in the experimental laboratory conditions is a combination of the fusion and the Margolus gate: output trajectories of collided liquid marbles are so close that they can be interpreted as a single output. That said, if required, two liquid marbles at the output can be diverted along different paths to conserve a number of signals. By taking advantage of the liquid marbles' inherently low hysteresis, high tunability and capacity for enhanced versatility, we demonstrate the first step toward liquid marble facilitated, collision-based computing.\n\n\\section{Materials \\& Method}\n\\subsection{Regular \\& Reliable Liquid Marble Formation}\n\nWe first developed a technique for the regular and automatic formation of invariable LMs. This was achieved by programming a syringe driver (CareFusion Alaris GH) to feed a 21 gauge needle (\\SI{0.8}{\\milli\\metre} diameter) at a typical rate of \\SI{7.0}{\\milli\\litre\\per\\hour}. The rate can be easily increased or decreased, and this rate gave sufficiently fast LM formation for our purpose. The produced droplets (\\SI[separate-uncertainty = true,multi-part-units = single]{11.6 +- 0.16}{\\micro\\litre}) were permitted to fall onto a sheet of acrylic, slanted to \\ang{20} from horizontal, and surface-treated with a commercial hydrophobic spray (Rust-Oleum\\textsuperscript{\\textregistered} NeverWet\\textsuperscript{\\textregistered}). This formed beads of water, which were allowed to roll over a bed of appropriate hydrophobic powder. The result was a continuous `stream' of LMs with the same volume, coating and coating thickness. 
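The quoted feed rate and droplet volume fix the production rate of this stream of marbles; a quick back-of-envelope sketch (assuming every droplet detaches at the nominal volume):

```python
# Marble production rate implied by the quoted figures
feed_rate_ml_per_h = 7.0      # syringe driver feed rate, mL/h
droplet_volume_ul = 11.6      # mean droplet volume, uL

droplets_per_hour = feed_rate_ml_per_h * 1000.0 / droplet_volume_ul
seconds_per_droplet = 3600.0 / droplets_per_hour   # roughly one LM every 6 s
```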
It should be noted that whilst the formation of LMs by running droplets down a powder slope has been separately developed by another group \\cite{Fujii2010}, our system prevents premature destruction of the powder bed by initially preforming the droplet on a treated hydrophobic surface.\n\n\\subsection{Maintaining Timing for Collisions}\n\nIn collision-based computing, accurate timing is essential. As signals propagate through the system they must remain in sync, or the operation of many logic gates fails. In order to address this, an innovative system of electromagnets (EMs) was implemented. This was possible due to the generation of novel LMs with a mix of ultra-high density polyethylene (UHDPE) (Sigma-Aldrich, \\numrange[range-phrase = --]{3}{6e6}~\\si{\\gram\\per\\mole}, grain size approximately \\SI{100}{\\micro\\metre}) and nickel (GoodFellow Metals, \\SI{99.8}{\\percent}, grain size \\SIrange{4}{7}{\\micro\\metre}). A typical Ni\/UHDPE coating was \\SI{2.5}{\\mg}. The use of UHDPE provides strength and durability, and the inclusion of ferromagnetic nickel allows for a versatile magnetic LM.\n\nBy positioning an electromagnet (\\SI{100}{\\newton}, \\SI{12.0}{\\volt} DC, \\SI{29 x 22}{\\mm}) behind the acrylic slope, the rolling LM can be captured and released at will by switching the electromagnet on and off. By controlling multiple, spatially-isolated electromagnets in series, non-concurrent LMs can be easily synchronised. To our knowledge, this is the first time electromagnets have been used to provide timing control with liquid marbles.\n\n\\subsection{Gate Design for Liquid Marble Collisions}\n\nThe collision gate was designed to allow the colliding LMs a free path post-collision. 
This enabled the monitoring of the LM paths, and the future design and implementation of exit pathways, creating a logic gate.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.99\\linewidth]{LM-collider}\n \\caption{Schematic of our LM collider. Labelled numbers are: (1)~syringe needle, (2)~uncoated water droplet, (3)~acrylic ramp, (4)~hydrophobic powder bed, (5)~liquid marble, and (6)~electromagnet. Droplets form and fall out of the two syringe needles, landing on a superhydrophobic surface. They then roll over a bed of Ni\/UHDPE powder, before being stopped and held stationary by the electromagnets. These electromagnets are then deactivated simultaneously, allowing the LMs to roll off and collide.}\n \\label{fig:LM-collider}\n\\end{figure}\n\nTwo \\SI{16.0}{\\centi\\metre} acrylic pathways were slanted towards each other at \\ang{20}, affixed to an acrylic base sheet (\\SI{3.0}{\\mm} thick). The acrylic base sheet was then aligned with a pitch of \\ang{38} from horizontal, giving a final LM pathway slope of \\ang{16} from the horizontal plane. This gave reliable LM rolling without extreme angles. The gap between the two slanted pathways was set at \\SI{1.6}{\\centi\\metre}, after empirical testing. A \\SI{2.0}{\\centi\\metre} length at the top of each pathway was made hydrophobic, as discussed above. Parallel auto-formation of hybrid LMs was achieved using the syringe driver, delivering \\SI{11.6}{\\micro\\litre} of water per syringe per drop. Each droplet of water was permitted to land on the treated section of each slope, before rolling across the powder beds of UHDPE and Ni to form LMs. The two rolling LMs were then captured using the electromagnets, allowing for any slight timing deviations to be accounted for. On controlled synchronous (or asynchronous) release of the electromagnets, the LMs simultaneously roll off the acrylic ramps on collision trajectories. 
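For a feel for the timescales involved, classical mechanics brackets the travel time down one ramp. This is only an indicative estimate, not a measurement from this work: a liquid marble is soft and dissipative, so it will be slower than either idealised bound.

```python
import math

g = 9.81                  # m/s^2
theta = math.radians(16)  # final pathway slope from horizontal
length = 0.16             # 16.0 cm ramp

a_slide = g * math.sin(theta)      # frictionless sliding bound
a_roll = (5.0 / 7.0) * a_slide     # rigid sphere rolling without slipping

t_slide = math.sqrt(2 * length / a_slide)   # ~0.34 s
t_roll = math.sqrt(2 * length / a_roll)     # ~0.41 s
```

On timescales of a few tenths of a second, a release jitter of even a few tens of milliseconds between the two electromagnets translates into a centimetre-scale offset at the collision point, which is why the synchronised electromagnet release described above matters.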
Collisions were recorded at \\SI{120}{fps} using a Nikon Coolpix P900, and played back frame-by-frame for analysis. A schematic of our LM collider can be seen in figure \\ref{fig:LM-collider}, and photographs of our LM collider can be seen in figure~\\ref{fig:LM-collider-photo}.\n\n\\begin{figure}[htbp]\n\\centering\n\\subfigure[]{\\includegraphics[width=0.48\\textwidth]{Collider-photo1}}\n\\subfigure[]{\\includegraphics[width=0.23\\textwidth]{Collider-photo2}}\n\\subfigure[]{\\includegraphics[width=0.23\\textwidth]{Collider-photo3}}\n\\caption{Photographs of our LM collider, showing \n(a)~the overall layout, \n(b)~a close-up of the magnetic braking\/release area, and\n(c)~a close-up of the droplet formation area. Labels are: (1)~syringe needle, (2)~acrylic ramp, (3)~hydrophobic powder bed, and (4)~electromagnet.}\n\\label{fig:LM-collider-photo}\n\\end{figure}\n\n\\section{Discussion}\n\\subsection{Liquid Marble Lifetime}\n\nFor a LM to be useful in a computing device, it has to have an appreciable lifetime. This is problematic for water-based LMs, as the gas permeability of LMs has previously been both established and exploited \\cite{Eshtiaghi2009,Tian2010a,Tian2010,Tian2013}. As such, lifetime experiments were conducted on UHDPE LMs, Ni LMs, water droplets, and our new Ni\/UHDPE hybrid LMs. Evaporation studies were conducted under ambient conditions, using \\SI{10.0}{\\micro\\litre} of DI water. UHDPE LMs were generated by rolling a droplet of water on a powder bed of UHDPE. Nickel LMs were made using a superhydrophobic surface (see above) to pre-form the droplet sphere, before rolling in an appropriate powder. 
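Anticipating the initial rates reported in table~\ref{tab:evap}, it is worth noting what they imply for the working lifetime of a marble. A rough sketch, assuming the initial rate persisted for the whole lifetime (in practice the rate falls as the marble shrinks, so these are conservative figures):

```python
initial_mass_mg = 10.0     # a 10.0 uL water droplet weighs ~10 mg

rates_mg_per_min = {       # initial evaporation rates, mg/min
    "water droplet": 0.1392,
    "Ni": 0.1133,
    "UHDPE": 0.1107,
    "Ni/UHDPE": 0.0998,
}

# Naive lifetime at a constant (initial) rate, in minutes
naive_lifetime_min = {k: initial_mass_mg / r
                      for k, r in rates_mg_per_min.items()}

# Fractional slow-down of evaporation relative to the bare droplet
slowdown = {k: 1.0 - r / rates_mg_per_min["water droplet"]
            for k, r in rates_mg_per_min.items()}
```

By this measure the hybrid coating slows evaporation by roughly 28% relative to a bare droplet, stretching the naive lifetime from about 72 minutes to over 100 minutes.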
A magnified view of both a Ni\/UHDPE hybrid LM and a nickel LM can be seen in figure~\\ref{fig:LM-photo}.\n\n\\begin{figure}[htbp]\n \\centering\n \\subfigure[]{\\includegraphics[width=0.4\\linewidth]{UHDPE-Ni-LM-photo-2}}\n \\subfigure[]{\\includegraphics[width=0.4\\linewidth]{Ni-LM-photo}}\n \\caption{Magnified photographs of (a) a Ni\/UHDPE LM, where nickel can clearly be seen between the UHDPE particles; and (b) a pure nickel LM. Scale bars are \\SI{1.0}{\\mm}. }\n \\label{fig:LM-photo}\n\\end{figure}\n\nFrom the data shown in table~\\ref{tab:evap}, it can be seen that both the nickel and UHDPE LMs have slower evaporation rates than a pure water droplet. This is expected, as the solid particles on the surface of the liquid form an (incomplete) barrier to evaporation. It also supports previous studies \\cite{Dandan2009}. Experiments also indicated that pure UHDPE LMs evaporate at a comparable rate to pure nickel LMs. This is due to a balancing act between the short narrow pores of the nickel LM, the long wide channels of the UHDPE LM, and the much larger contact angle of UHDPE compared to nickel \\cite{Trevoy1958,Ammosova2015}. The larger grain size of the UHDPE creates longer channels for water vapour to traverse. This results in a smaller water vapour concentration gradient. \n\n\\begin{table}[htbp]\n \\centering\n \\caption{Comparison of the evaporation rates for \\SI{10}{\\micro\\litre} LMs and an uncoated water droplet. }\n \\begin{tabular}{cc}\n \\toprule\n \\textbf{LM Coating} & \\textbf{Initial Evaporation Rate} \\\\\n & \/ \\si{\\mg\\per\\minute} \\\\\n \\midrule\n (Water Droplet) & 0.1392 \\\\\n Ni & 0.1133 \\\\\n UHDPE & 0.1107 \\\\\n Ni\/UHDPE & 0.0998 \\\\\n \\bottomrule\n \\end{tabular}%\n \\label{tab:evap}%\n\\end{table}%\n\nIt is noteworthy that the new hybrid LM offers the best protection, with the lowest rate of evaporation. It is suggested that this is due to differences in the nickel and UHDPE particle sizes. 
This difference is clearly visible in the magnified photograph shown in figure~\\ref{fig:LM-photo}(a). The larger UHDPE particles offer good resistance to evaporation, due to their thickness. However, this also leads to large gaps between the particles, due to poor packing. This problem is alleviated by the nickel particles filling the available space. The use of two differently sized spheres for 3D spherical packing is well documented \\cite{Yamada2011,Sohn1968}, and has been shown to increase the maximum perfect packing density beyond the \\num{0.74} limit of a single-sized 3D sphere packing, to \\num{0.93} for an ideal binary system \\cite{Kansal2002}. In this instance, due to the multilayer nature of the LM coating \\cite{dupin2009stimulus}, it is more appropriate to relate to 3D sphere packing than 2D circle packing.\n\n\n\n\\subsection{Liquid Marble Collisions}\n\nIf the gate timings are not accurate, then the LMs will continue on their separate paths and shall not collide. If the timings are accurate, and it is taken that the two LMs are identical, then there are three possible outcomes of the collision. Firstly, that the LMs collide with some elastic property, and then continue on two distinct and new paths. Secondly, that the LMs collide with no elastic property, then continue vertically as two adjacent, but distinct, LMs. 
Thirdly, that on collision the LMs coalesce into a larger single LM with zero lateral velocity, which continues vertically down.\n\n\\begin{figure}[htbp]\n\\centering\n\\subfigure[]{\\includegraphics[width=0.35\\textwidth]{single}}\n\\subfigure[]{\\includegraphics[width=0.35\\textwidth]{bounce}}\n\\caption{Overlaid still frames of (a) a single LM, with frames at \\SIlist{0;142;209;242;267}{\\milli\\second}; \nand (b) two colliding LMs, with frames at \\SIlist{0;125;200;217;225;250;275}{\\milli\\second}.}\n\\label{fig:single-bounce}\n\\end{figure}\n\nVideo snapshots showing both a single uninterrupted LM and a colliding pair of LMs can be seen in figure~\\ref{fig:single-bounce}. Our experiments demonstrate that LMs collide in an elastic manner. This is unsurprising, due to their previously reported soft-shell and compressible nature \\cite{Aussillous2001,Bormashenko2015}. It also supports the previously published, linear, non-coalescing collision of LMs \\cite{Bormashenko2015}. By monitoring the collisions at \\SI{120}{fps}, it was observed that LMs behave like two soft balls, acting in the manner described by the Soft Sphere Model (SSM), also known as a Margolus gate \\cite{margolus2002universal}. A video of a typical collision can be seen in the supporting information. The important distinction between the SSM and the better-known Billiard Ball Model (BBM) \\cite{fredkin2002conservative} is the exit points of the colliding particles compared to the non-colliding particles. In the BBM, the particles are taken to be hard spheres, which instantly rebound off each other --- leading to the \\textbf{AB} paths being outside the corresponding \\textbf{\\={A}B} and \\textbf{A\\={B}} paths. In contrast, the SSM accounts for the finite and appreciable amount of time required for real-world soft spheres to rebound. The result is that the \\textbf{AB} paths move to lie inside the unchanged \\textbf{\\={A}B} and \\textbf{A\\={B}} paths. 
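The logic of this interaction gate can be stated compactly as a truth table. The sketch below is an illustrative model of the routing just described, with path names of our own choosing; the SSM and BBM differ only in where the two AB exits sit geometrically, not in the logic they compute:

```python
# A minimal logical sketch of the collision gate. Inputs a and b record
# whether a marble was released down channel A or B; the four exit paths
# carry the ballistic-computing signals.
def collision_gate(a: bool, b: bool) -> dict:
    return {
        "A_notB": a and not b,    # outer path: A crossed the gap undeflected
        "notA_B": b and not a,    # outer path: B crossed the gap undeflected
        "AB_left": a and b,       # deflected path: a collision occurred
        "AB_right": a and b,      # deflected path: a collision occurred
    }

# Marbles are conserved: one exit fires per input marble.
for a in (False, True):
    for b in (False, True):
        out = collision_gate(a, b)
        assert sum(out.values()) == int(a) + int(b)
```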
The BBM and SSM pathways can be seen in figures \\ref{fig:BBM-SSM}(a) and \\ref{fig:BBM-SSM}(b), respectively.\n\n\\begin{figure}[htbp]\n\\centering\n\\subfigure[]{\\includegraphics[width=0.4\\linewidth]{BBM}}\n\\subfigure[]{\\includegraphics[width=0.4\\linewidth]{SSM}}\n\\subfigure[]{\\includegraphics[width=0.7\\linewidth]{steelballs}}\n\\caption{Showing the colliding and non-colliding routes for (a) BBM and (b) SSM pathways. (c) A collision of steel balls, following the BBM under gravity (cf.\\ the SSM LMs above). Frame times are \\SIlist{0;175;242;284;325}{\\ms}.}\n\\label{fig:BBM-SSM}\n\\end{figure}\n\nIt was possible to break the SSM analogy by increasing the speed of the LMs. The speed of the LMs was calculated by measuring the distance travelled by the LM over a known number of frames, combined with the known recording frame rate. When the collision happens at \\SI{0.21}{\\metre\\per\\second}, the LMs bounce elastically following SSM paths. However, when the speed of collision is increased to \\SI{0.29}{\\metre\\per\\second}, the two LMs coalesce. This can be seen in the video snapshots in figure~\\ref{fig:coalesce}.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.7\\linewidth]{coalesce}\n \\caption{Overlaid still frames showing the coalescence of two colliding LMs. Frames shown at \\SIlist{0;117;159;184;209;234;250}{\\milli\\second}.}\n \\label{fig:coalesce}\n\\end{figure}\n\nGrowth by coalescence of LMs is a commonly observed effect \\cite{Bhosale2012}. For a computing device, the physical nature of the input and output signals should be exactly the same. When two LMs coalesce, the output mass is double a single input mass. Consequently, when colliding LMs at this higher speed, a splitting device would be required to reduce the mass of the output LM. 
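The speed estimate described above (distance covered over a counted number of frames, at a known frame rate) amounts to a one-line calculation. The following sketch uses illustrative numbers, not measured data from this work:

```python
# Estimate an object's speed from frame-by-frame video playback:
# speed = distance / (frames elapsed / frame rate).
def speed_from_frames(distance_m: float, n_frames: int, fps: float = 120.0) -> float:
    """Speed in m/s of an object covering distance_m in n_frames of video."""
    return distance_m * fps / n_frames

# e.g. a marble covering 21 mm in 12 frames of 120 fps footage:
print(f"{speed_from_frames(0.021, 12):.2f} m/s")  # prints "0.21 m/s"
```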
The facile splitting of a LM using a superhydrophobically treated scalpel has previously been reported \\cite{Bormashenko2011scalpel}.\n\nBy analysing the output paths of the LM collider, it becomes apparent that the gate could be modified to act as a 1-bit half-adder, with the possible outcomes demonstrated in figure \\ref{fig:half-adder}. When a single LM traverses the system from the \\textbf{A} or \\textbf{B} channel, it finishes at the left or right extreme, on the \\textbf{A\\={B}} or \\textbf{\\={A}B} path, respectively. Once the exit pathways are combined, this is analogous to the sum output of a half-adder. An initial trial confirming the feasibility of this is shown in figure~\\ref{fig:half-adder}(a), where a single LM enters from the right channel, crosses the gap, and is reflected to exit on the right side.\n\n\\begin{figure}[htbp]\n\\centering\n\\subfigure[]{\\includegraphics[width=0.7\\linewidth]{LM-reflect}}\n\\subfigure[]{\\includegraphics[width=0.35\\linewidth]{half-adder-a}}\n\\subfigure[]{\\includegraphics[width=0.35\\linewidth]{half-adder-b}}\n\\subfigure[]{\\includegraphics[width=0.35\\linewidth]{half-adder-c}}\n\\subfigure[]{\\includegraphics[width=0.35\\linewidth]{half-adder-d}}\n\\subfigure[]{\\includegraphics[width=0.4\\linewidth]{half-adder-e}}\n\\caption{(a) Overlaid frames showing the successful reflection of a LM, frames are timed at \\SIlist{0;234;334;375;400;434;476;517}{\\ms}.\n(b) \\& (c) The outcomes of a single unreflected LM passing through the adder. \n(d) The outcome of the adder when two LMs collide and coalesce. \n(e) The outcome of the adder when two LMs collide according to the SSM. \n(f) The electronic representation of a 1-bit half-adder.}\n \\label{fig:half-adder}\n\\end{figure}\n\nWhen two synchronised LMs pass through the collider, they either rebound or coalesce, according to their velocity at impact. 
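The half-adder interpretation can be stated compactly: the merged outer exits realise the sum (XOR), and either AB exit realises the carry (AND). A minimal sketch, with function and variable names of our own choosing:

```python
# Half-adder reading of the collision gate: merging the two outer exit paths
# gives the SUM bit; either one of the AB paths gives the CARRY bit (the
# other AB marble is discarded, or the coalesced LM split).
def half_adder(a: int, b: int) -> tuple:
    sum_bit = a ^ b    # merged outer exits (A without B, or B without A)
    carry_bit = a & b  # an AB exit: a collision occurred
    return sum_bit, carry_bit

# The gate really adds two bits:
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert 2 * c + s == a + b
```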
If the LMs coalesce, then the new LM travels straight down the only \\textbf{AB} path, which can be considered to be the carry output. Alternatively if the LMs rebound, as in the SSM, then there are two \\textbf{AB} paths. One of these paths is then considered to be the carry output, and the other is discarded (the choice between the two \\textbf{AB} paths is arbitrary in this case).\n\n\\subsection{One-Bit Full-Adder Proposed}\n\nBy using this design (complete with magnetic timing control) and an intuitive {\\sc xor} gate, we can adapt the model of the one-bit full-adder proposed originally for Belousov-Zhabotinsky medium \\cite{adamatzky2015binary}. The {\\sc xor} gate can be replaced with an {\\sc or} gate without loss of logic, as there is no situation where two LMs will arrive simultaneously. The design schematic can be seen in figure~\\ref{fig:full-adder1}. There are two sets of electromagnets, which cycle on-off in pairs; first EM1 releases, then shortly after EM2 releases. This maintains synchronisation between LMs across the two collision gates.\n\nFor this design iteration, we have used channels for the passage of LMs. This is a deviation from pure collision-based computing, where free-space is used as momentary `wires' on an ad hoc basis. However, in this case, we believe the use of channels to be an important intermediate step towards this goal.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{full-adder1-3}\n \\caption{The general design schematic for a 1-bit full-adder, operated using liquid marbles.}\n \\label{fig:full-adder1}\n\\end{figure}\n\nFor the example operations visualised in figure \\ref{fig:full-adder2}, if a LM travels down the \\textbf{A} and \\textbf{B} channel, then they will collide and travel straight to the carry output. 
If a LM travels down the \\textbf{B} and \\textbf{C\\textsubscript{in}} paths, then the \\textbf{B} LM crosses the first gate, before colliding with the \\textbf{C\\textsubscript{in}} LM and travelling straight down to join the carry output. If a single LM travels down the \\textbf{B} path, it will cross the first and second gate, finishing on the sum output. If a LM travels down the \\textbf{A}, \\textbf{B} and \\textbf{C\\textsubscript{in}} paths, then \\textbf{A} and \\textbf{B} will collide at the first gate and go straight to the carry output, whilst the \\textbf{C\\textsubscript{in}} LM will cross its gate and finish on the sum output.\n\n\\begin{figure}[htbp]\n\\centering\n\\subfigure[]{\\includegraphics[width=0.494\\linewidth]{fuller-adder2-a}}\n\\subfigure[]{\\includegraphics[width=0.494\\linewidth]{fuller-adder2-b}}\n\\subfigure[]{\\includegraphics[width=0.494\\linewidth]{fuller-adder2-c}}\n\\subfigure[]{\\includegraphics[width=0.494\\linewidth]{fuller-adder2-d}}\n\\caption{Example operations of the 1-bit full-adder to sum \n(a)~${1+1+0=10}$, \n(b)~${0+1+1=10}$,\n(c)~${0+1+0=01}$, and \n(d)~${1+1+1=11}$.}\n\\label{fig:full-adder2}\n\\end{figure}\n\nBased on the observations of our collision gate, we note that such a full-adder could be implemented with dimensions of approximately \\SI{15 x 15}{\\cm}, using LMs with a volume of approximately \\SI{10}{\\micro\\litre}, and a LM pre-collision run length of \\SI{2}{\\cm}. This gate could then be cascaded as required to produce an n-bit full-adder. Empirical testing has shown that reliable and manipulable LMs can be formed down to \\SI{1.0}{\\micro\\litre}, meaning that the device could then be scaled down appropriately. Using a single set of syringes, the automatic marble maker can produce up to eight LMs per needle per second. At this speed, synchronisation of the electromagnets becomes both more critical and more challenging --- potentially requiring computer control. 
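The routing described in these examples can be checked as logic: the first gate handles A and B, and any marble that crosses it meets C\textsubscript{in} at the second gate. The sketch below (our own naming, not the authors' code) verifies that every input combination, including the four figure examples, yields the correct binary sum:

```python
# Two-stage full-adder routing. The XOR on the sum line can safely be an OR,
# since at most one marble ever reaches it; likewise the two carry routes
# merge, since they can never both fire on the same input.
def full_adder(a: int, b: int, c_in: int) -> tuple:
    carry1 = a & b             # A and B collide: marble drops to carry output
    survivor = a ^ b           # at most one marble crosses the first gate
    carry2 = survivor & c_in   # survivor collides with C_in: another carry
    sum_bit = survivor ^ c_in  # lone marble reaching the sum output
    return sum_bit, carry1 | carry2

# Every input row produces the correct two-bit binary sum:
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, c_out = full_adder(a, b, c)
            assert 2 * c_out + s == a + b + c
```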
Initial investigations into the collision lifetime of the LM are promising, with six \\SI{3}{\\mm} LMs confined to a \\SI{2.5 x 2.5}{\\cm} space on an orbital shaker at 100\\,rpm, showing no signs of wear after three hours.\n\n\n\n\\section{Conclusions}\n\nIn summary, this demonstration of a collision interaction gate represents the first computing device operated by LMs. A new automatic technique for the easy and reproducible synthesis of LMs enhances the reliability of gate operations. The novel electromagnetic synchronisation of the LM collisions was made possible by the development of a new magnetic hybrid LM, with a coating composed of nickel and UHDPE, used in conjunction with electromagnets for braking, holding, and synchronised release of the LMs. This collision gate would operate as a 1-bit half-adder, once the sum outputs are combined. A design schematic for a 1-bit full-adder was proposed.\n\n\nThe use of LMs for collision-based computation has many advantages (additional degrees of freedom) over previous approaches. Due to their nature, it is possible to carry cargo in the LMs, which adds an additional dimension to the calculations. It is also possible to initiate chemical reactions within marbles by their coalescence \\cite{Chen2017}. By varying the diameters of the LMs, different sizes can represent different values, and will have different relative trajectories --- removing the limitations of a binary system. A magnet can be used to remove the coating from magnetic marbles; the resulting droplets (if this is done on a superhydrophobic surface) can roll freely down a slope, before being reformed using a different coating. Compared to droplet computing \\cite{Mertaniemi2012,Katsikis2015}, only a tiny portion of the circuit needs hydrophobic treatment, making larger and more complicated circuits easier and cheaper to construct. 
These points, combined with LMs' ability to be easily merged, levitated \\cite{Chen2017}, divided \\cite{Bormashenko2011scalpel}, opened\/closed \\cite{Zhao2010}, and propelled by a variety of methods, make LMs a fascinating and promising addition to the unconventional computing family. \n\nWe envision the continued development of LM arithmetic circuits, and are currently working on producing working models of cascading standard gates.\n\n\n\\section*{Acknowledgements}\n\nThis research was supported by the EPSRC with grant EP\/P016677\/1 ``Computing with Liquid Marbles''.\nThe authors thank Dr Richard Mayne for his help with microscope imaging. \n\n\n\\section*{Supporting Information}\n\nVideo footage of the liquid marble collisions is available at \\url{youtu.be\/Lt1VWBtRk6E}, \\url{youtu.be\/isufSrhW_8M}, \\url{youtu.be\/sFEtVaFfxKI}, and \\url{youtu.be\/H8Si883lCw4}.\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}