diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzebti" "b/data_all_eng_slimpj/shuffled/split2/finalzzebti" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzebti" @@ -0,0 +1,5 @@ +{"text":"\\section*{Results for ISTAT data}%\n\\label{sect:ISTATresults}\n\nLifetimes beyond 105 years are highly unusual and the application of extreme value models \\citep{dehaanferreira2006} is warranted. We use the generalized Pareto distribution,\n\\begin{equation}\n\\label{eq:GP}\nF(x) = \\begin{cases}\n 1-(1+{\\gamma}x\/{\\sigma})_+^{-1\/{\\gamma}}, & x \\geq 0, \\gamma \\neq 0, \\\\\n 1-e^{-{x}\/{\\sigma}}, & x \\geq 0, \\gamma = 0,\n \\end{cases}\n\\end{equation}\nto model $x$, the excess lifetime above $u$ years. In~\\eqref{eq:GP}, $a_+=\\max(a,0)$ and $\\sigma > 0$ and $\\gamma \\in \\mathbb{R}$ are scale and shape parameters. For negative shape parameter $\\gamma$ the distribution has a finite upper endpoint at $-\\sigma\/\\gamma$, whereas $\\gamma\\geq 0$ yields an infinite upper endpoint.\n\nThe corresponding hazard function, often called the ``force of mortality'' in demography, is the density evaluated at excess age $x$, conditional on survival to then, i.e., \n\\begin{equation}\n\\label{eq:GPhazard}\nh(x)=\\dfrac{f(x)}{1-F(x)} = \\dfrac{1}{(\\sigma + \\gamma x)_+}, \\quad x\\geq 0, \n\\end{equation}\nwhere $f(x)=\\mathrm{d}F(x)\/\\mathrm{d}x$ is the generalized Pareto density function. If $\\gamma <0$, the hazard function tends to infinity at the finite upper limit for exceedances. When $\\gamma = 0$, $F$ is exponential and the hazard function is constant, meaning that the likelihood that a living individual dies does not depend on age beyond the threshold. In this case, mortality can be said to have plateaued at age $u$. \n\n\\begin{figure\n\\centering\n\\includegraphics[width=0.65\\linewidth]{figure\/Fig2.pdf}\n\\includegraphics[width=0.65\\linewidth]{figure\/Fig5.pdf}\n\\caption{Parameter stability plots for the \\textsf{ISTAT}{} data (top) and for the \\textsf{France 2019}{} data (bottom), showing the shape $\\gamma$ of the generalized Pareto distribution (left) and the scale $\\sigma_e$ of the exponential distribution (right) based on lifetimes that exceed the age threshold on the $x$-axis. The plots give maximum likelihood estimates with 95\\% confidence intervals derived using a likelihood ratio statistic. The horizontal lines in the right-hand panels correspond to the estimated scale for excess lifetimes over 108 years for the \\textsf{ISTAT}{} data.}\n\\label{fig:parameterstability}\n\\end{figure}\n\nThe choice of a threshold $u$ such that Eq.~[\\ref{eq:GP}] models exceedances appropriately is a basic problem in extreme value statistics and is surveyed by Scarrott \\& MacDonald~\\cite{scarrott\/macdonald:2012}. If $u$ is high enough for Eq.~[\\ref{eq:GP}] to provide an adequate approximation to the distribution of exceedances, then the shape parameter $\\gamma$ is approximately unchanged if a higher threshold $u'$ is used, and the scale parameters for $u$ and $u'$ have a known relationship, so a simple and commonly-used approach to the choice of threshold is to plot the parameters of the fitted distributions for a range of thresholds \\citep{davison+s:1990} and to use the lowest threshold above which parameter estimates stabilise. 
This choice balances the extrapolation bias arising if the threshold is too low with the increased variance incurred when taking $u$ too high to retain an adequate number of observations.\n\nThe upper left-hand panel of Figure~\\ref{fig:parameterstability} shows that for age thresholds close to 105 years the estimated shape parameters for excess life lengths are negative, with 95\\% confidence intervals barely touching zero, but there is no systematic indication of non-zero shape above 107 years. The upper right-hand panel displays the estimated scale parameter of the exponential model fitted to life lengths exceeding the threshold. The scale parameters decrease for ages 105--107 but show no indication of change after age 107, where the scale parameter estimate is 1.45. Parameter stability plots suggest an exponential model and hence a constant hazard after age 107 or so, where a mortality plateau seems to be attained.\n\n\n\n\n\n\n\n\nThe upper part of \\Cref{tab:ISTAT-MLE} shows results from fitting Eq.~[\\ref{eq:GP}] and the exponential distribution to the \\textsf{ISTAT}{}\\ for a range of thresholds. The exponential model provides an adequate fit to the exceedances over a threshold at 108 years, above which the hypothesis that $\\gamma=0$, i.e., the exponential model is an adequate model simplification, is not rejected. \n\n\n\\begin{table*}[t]\n\\centering\n\n\\begingroup\\footnotesize\n\\begin{tabular}{lrrrrrrrr}\n \\toprule\n\\textsf{ISTAT}{} & threshold & $105$ & $106$ & $107$ & $108$ & $109$ & $110$ & $111$\\\\\n \\midrule\n& $n_u$ & $3836$ & $1874$ & $947$ & $415$ & $198$ & $88$ & $34$ \\\\ \n& $\\sigma$ & $1.67\\; (0.04)$ & $1.7\\; (0.06)$ & $1.47\\; (0.08)$ & $1.47\\; (0.11)$ & $1.33\\; (0.15)$ & $1.22\\; (0.23)$ & $1.5\\; (0.47)$ \\\\ \n& $\\gamma$ & $-0.05\\; (0.02)$ & $-0.07\\; (0.03)$ & $-0.02\\; (0.04)$ & $-0.01\\; (0.06)$ & $0.03\\; (0.09)$ & $0.12\\; (0.17)$ & $0.06\\; (0.30)$ \\\\ \n& $\\sigma_e$ & $1.61\\; (0.03)$ & $1.6\\; (0.04)$ & $1.45\\; (0.05)$ & $1.45\\; (0.08)$ & $1.36\\; (0.11)$ & $1.35\\; (0.17)$ & $1.58\\; (0.32)$ \\\\ \n& $p$-value & $0.04$ & $0.01$ & $0.70$ & $0.82$ & $0.74$ & $0.44$ & $0.84$ \\\\ \n& $p_\\infty$ & $0.02$ & $0.01$ & $0.35$ & $0.41$ & $0.63$ & $0.78$ & $0.58$ \\\\ \n \\bottomrule\n\\toprule\n\\textsf{France 2019}{} & threshold & $105$ & $106$ & $107$ & $108$ & $109$ & $110$ & $111$\\\\\n \\midrule\n& $n_u$ & $9835$ & $5034$ & $2472$ & $1210$ & $550$ & $240$ & $106$ \\\\ \n& $\\sigma$ & $1.69\\; (0.02)$ & $1.59\\; (0.03)$ & $1.54\\; (0.04)$ & $1.43\\; (0.06)$ & $1.36\\; (0.08)$ & $1.34\\; (0.13)$ & $1.26\\; (0.18)$ \\\\ \n& $\\gamma$ & $-0.06\\; (0.01)$ & $-0.04\\; (0.01)$ & $-0.04\\; (0.02)$ & $-0.02\\; (0.03)$ & $0.02\\; (0.05)$ & $0.05\\; (0.07)$ & $0.09\\; (0.11)$ \\\\ \n& $\\sigma_e$ & $1.62\\; (0.02)$ & $1.53\\; (0.03)$ & $1.49\\; (0.03)$ & $1.41\\; (0.05)$ & $1.38\\; (0.07)$ & $1.39\\; (0.11)$ & $1.36\\; (0.16)$ \\\\ \n& $p$-value & $4 \\times 10^{-7}$ & $0.01$ & $0.05$ & $0.60$ & $0.72$ & $0.48$ & $0.32$ \\\\\n& $p_\\infty$ & $2 \\times 10^{-7}$ & $2 \\times 10^{-3}$ & $0.02$ & $0.30$ & $0.64$ & $0.76$ & $0.84$ \\\\ \n \\bottomrule\n\\end{tabular}\n\\caption{Estimates (standard errors) of scale and shape parameters ($\\sigma$, $\\gamma$) for the generalized Pareto distribution and of the scale parameter ($\\sigma_e$) for the exponential model for the \\textsf{ISTAT}{} and \\textsf{France 2019}{} datasets as a function of threshold, with number of threshold exceedances ($n_u$), $p$-value for the likelihood ratio test of $\\gamma=0$ 
and probability that $\\gamma \\geq 0$ based on the profile likelihood ratio test under the generalized Pareto model ($p_\\infty$).} \n\\label{tab:ISTAT-MLE}\n\\endgroup\n\\end{table*}\n\n\nThe estimated scale parameter obtained by fitting an exponential distribution to the \\textsf{ISTAT}{}\\ data for people older than 108 is 1.45 (years) with 95\\% confidence interval $(1.29, 1.61)$. Hence the hazard is estimated to be 0.69 (1\/years) with 95\\% confidence interval $(0.62, 0.77)$; above 108 years the estimated probability of surviving at least one more year at any given age is 0.5 with 95\\% confidence interval $(0.46, 0.54)$. \n\n\nWe investigated birth cohort effects, but found none; see {Appendix}~\\ref{subsect:cohorteffect}.\n\n\n\\section*{Results for \\textsf{France 2019}{} data}%\n\\label{sect:Frenchresults}\n\n\n\\begin{table*}[t!]\n\t\\centering\n\t\t\n\t\\begin{tabular}{l rl l rl l rl}\n\t\t\\toprule\n & \\multicolumn{2}{c}{\\textsf{ISTAT}{}}& & \\multicolumn{2}{c}{\\textsf{France 2019}{}}&&\\multicolumn{2}{c}{\\textsf{IDL 2016}{}} \\\\\n \\cline{2-3}\\cline{5-6}\\cline{8-9}\\\\\n\t\t & \\multicolumn{1}{c}{$n$} &\\multicolumn{1}{c}{$\\sigma_e$ (95\\% CI)}& & \\multicolumn{1}{c}{$n$} & \\multicolumn{1}{c}{$\\sigma_e$ (95\\% CI)} & &\\multicolumn{1}{c}{$n$} & \\multicolumn{1}{c}{$\\sigma_e$ (95\\% CI)} \\\\\n\t \\midrule\n\t\t{\\footnotesize women }& $375$ &$1.45~(1.23, 1.62)$ && $1116$ & $1.46~(1.36, 1.56)$&& $507$ & $1.39~(1.25, 1.54)$\\\\\n\t\t{\\footnotesize men } &$40$ &$1.41~(0.86, 1.98)$&& $94$ & $0.90~(0.70, 1.11)$ && $59$& $1.68~(1.16, 2.20)$ \\\\\n {\\footnotesize All}& $415$ &$1.45~(1.29, 1.61)$ && $1210$ & $1.41~(1.32, 1.51)$ && $566$ & $1.42~(1.28, 1.56)$ \\\\\n \\bottomrule\n\t\t\\end{tabular}\n\t\t\\caption{Estimates of the scale, $\\sigma_e$, of the exponential distribution, with 95\\% confidence intervals (CI). This distribution is fitted to exceedances of 108 years in the \\textsf{ISTAT}{}\\ and \\textsf{France 2019}{} data and of 110 years in the \\textsf{IDL 2016}{}\\ data analysed in \\cite{rootzen-zholud:2017}. }\n\\label{table:women-men}\n\t\t\\end{table*}\t\t\n\nEstimation for the \\textsf{France 2019}{} data was done as described in Rootz\\'en \\& Zholud \\citep{rootzen-zholud:2017}, taking into account the left- and right truncation of the lifetimes. Both the parameter stability plots in the lower panels of \\Cref{fig:parameterstability} and the results given in the lower part of Table~\\ref{tab:ISTAT-MLE} indicate that the exponential model is adequate above 108 years. For persons older than 108 the exponential scale parameter is estimated to be 1.41 (years) with 95\\% confidence interval $(1.32, 1.51)$, the hazard is estimated to be 0.71 (years$^{-1}$) with 95\\% confidence interval $(0.66, 0.76)$ and the estimated probability of surviving at least one more year is 0.49 with 95\\% confidence interval $(0.47, 0.52)$.\n \nTable~\\ref{table:women-men} shows that estimates of the scale parameter for the exponential distribution for women and men for the \\textsf{France 2019}{} data differ. If men are excluded, then the estimated scale parameter increases from 1.41 to 1.46 years, and if the oldest woman, Jeanne Calment, is also excluded, the estimate for women drops to 1.44 years. Similarly to the \\textsf{ISTAT}{} data, survival for ages 105--107 was lower in earlier cohorts. 
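\n\nFor reference, the hazards and one-year survival probabilities quoted above for the \\textsf{ISTAT}{} and \\textsf{France 2019}{} data are deterministic transformations of $\\sigma_e$ (our restatement rather than an additional analysis): under the exponential model the hazard is $1\/\\sigma_e$ and the probability of surviving at least one further year is $e^{-1\/\\sigma_e}$, so that for the \\textsf{ISTAT}{} estimate\n\\[\n1\/\\sigma_e = 1\/1.45 \\approx 0.69, \\qquad e^{-1\/\\sigma_e} = e^{-1\/1.45} \\approx 0.50,\n\\]\nwith the interval limits presumably obtained by applying the same monotone transformations to the endpoints $(1.29, 1.61)$, giving $e^{-1\/1.29} \\approx 0.46$ and $e^{-1\/1.61} \\approx 0.54$; the \\textsf{France 2019}{} values follow in the same way from $\\sigma_e = 1.41$ with endpoints $(1.32, 1.51)$.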
\n\n\\section*{Power}\n\n\\section*{Power}\n\nOur analysis above suggests that there is no upper limit to human lifetimes and that constant hazard adequately models excess liftime if one considers only those persons whose lifetimes exceed 108 years: there is no evidence that the force of mortality above this age is other than constant. One might wonder whether increasing force of mortality would be detectable, however, as the number of persons attaining such ages is relatively small. To assess this we performed a simulation study described in the~{Appendix}~\\ref{subsect:power}, mimicking the sampling schemes of the \\textsf{ISTAT}{}, \\textsf{France 2019}{} and \\textsf{IDL 2016}{} datasets as closely as possible and generating samples from the generalized Pareto distribution with $-0.25\\leq \\gamma \\leq 0$. To remove overlap between the last two datasets, we dropped France from \\textsf{IDL 2016}{}.\n\n\nAny biological limit to their lifespan should be common for all humans, whereas differences in mortality rates certainly arise due to social and medical environments and can be accommodated by letting hazards vary by factors such as country or sex. With the overlap dropped we can treat the datasets as independent and compute the power for a combined likelihood ratio test of $\\gamma=0$ (infinite lifetime) against alternatives with $\\gamma<0$ (finite lifetime). For concreteness of interpretation we express the results in terms of the implied upper limit of lifetime $\\iota=u-\\sigma\/\\gamma$. The left-hand panel of \\Cref{fig:powerendpoint} shows the power curves for the three datasets individually and pooled. The power of the likelihood ratio test for the alternatives $\\iota \\in \\{125, 130, 135\\}$ years, for example, is $0.46\/0.32\/0.24$ for the \\textsf{ISTAT}{} data above 108, $0.82\/0.61\/0.46$ for the \\textsf{France 2019}{} data above 108, and $0.62\/0.40\/0.29$ for the \\textsf{IDL 2016}{} data above 110. The power for $\\iota=125\/130\/135$ years based on all three datasets is $0.96\/0.80\/0.64$, so it appears to be rather unlikely that an upper limit to the human lifespan, if there is such a limit, is below 130 years or so.\n\nSimilar calculations give the power for testing the null hypothesis $\\gamma=0$ against alternatives $\\gamma<0$. Forcing all datasets to have the same shape parameter would allow them to have different endpoints so we reject the overall null hypothesis if we reject the exponential hypothesis any of the three datasets. The power of this procedure is shown with a dashed black line in \\Cref{fig:powerendpoint}. The resulting combined power exceeds $0.8$ for $\\gamma < -0.07$ and equals $0.97$ for the alternative $\\gamma=-0.1$, giving strong evidence against a sharp increase in the hazard function after 108 years. \n\n\n\n\n\\begin{figure*\n\\centering\n\\includegraphics[width=0.95\\linewidth]{figure\/Fig9b.pdf}\n\\caption{Power functions based on the \\textsf{IDL 2016}{} (excluding French records), \\textsf{France 2019}{} and \\textsf{ISTAT}{} databases and combined datasets, with rugs showing the lifetimes above 115. Left: power for the alternative of a finite endpoint $\\iota$ against the null hypothesis of infinite lifetime based on the likelihood ratio statistic. The endpoint cannot be lower than the largest observations in each database. 
Right: power of the Wald statistic for the null hypothesis $ \\gamma = 0$ against the one-sided alternative $ \\gamma < 0$ as a function of $\\gamma$; the dashed line represents the power obtained by rejecting the null of exponentiality when any of the three one-sided test rejects. The curves are obtained by conditioning on the birthdates and left-truncated values in the databases, then simulating generalized Pareto data whose parameters are the partial maximum likelihood estimates $(\\widehat{\\sigma}_\\gamma, \\gamma)$. The simulated records are censored if they fall outside the sampling frame for the \\textsf{ISTAT}{} data and are simulated from a doubly truncated generalized Pareto distribution for \\textsf{IDL 2016}{} and \\textsf{France 2019}{}. See {Appendix}~\\ref{subsect:power} for more details.}\n\\label{fig:powerendpoint}\n\\end{figure*}\n\n\n\\section*{Gompertz model}\n\n\nThe hazard function of the generalized Pareto distribution cannot model situations in which the hazard increases to infinity, but the upper limit to lifetimes is infinite. This possibility is encompassed by the Gompertz distribution \\citep{Gompertz:1825}, which has long been used for modelling lifetimes and often provides a good fit to data at lower ages \\citep[e.g.,][]{Thatcher:1999}. When the Gompertz model is expressed in the form\n\\begin{align*}\nF(x) = 1 - \\exp\\left\\{-(e^{\\beta x\/\\sigma }-1)\/\\beta\\right\\}, \\quad x>0,\\quad \\sigma, \\beta>0, \n\\end{align*}\n $\\sigma$ is a scale parameter with the dimensions of $x$, and the dimensionless parameter $\\beta$ controls the shape of the distribution. Letting $\\beta\\to 0$ yields the exponential distribution with mean $\\sigma$; small values of $\\beta$ correspond to small departures from the exponential model. The fact that $\\beta$ cannot be negative affects statistical comparison of the Gompertz and exponential models; see~{Appendix}~\\ref{subsect:gompertz}.\n\nThe Gompertz distribution has infinite upper limit to its support, so it cannot be used to assess whether there is a finite upper limit to the human lifespan. Its hazard function, $\\sigma^{-1}\\exp(\\beta x\/\\sigma)$, is finite but increasing for all $x$ ($\\beta>0$) or constant ($\\beta=0$). The limiting distribution for threshold exceedances of Gompertz variables is exponential, and this limit is attained rather rapidly, so a good fit of the Gompertz distribution for lower $x$ would be compatible with good fit of the exponential distribution for threshold exceedances at higher values of $x$. 
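\n\nThe rapid approach to the exponential can be made explicit by a short calculation (ours, using only the form of $F$ above): for an excess $x$ over a threshold $u$,\n\\[\n\\frac{1-F(u+x)}{1-F(u)} = \\exp\\left\\{-e^{\\beta u\/\\sigma}\\,\\frac{e^{\\beta x\/\\sigma}-1}{\\beta}\\right\\} = \\exp\\left\\{-\\frac{e^{\\beta_u x\/\\sigma_u}-1}{\\beta_u}\\right\\},\n\\]\nwhere $\\sigma_u = \\sigma e^{-\\beta u\/\\sigma}$ and $\\beta_u = \\beta e^{-\\beta u\/\\sigma}$, so the excess lifetime is again Gompertz, with a shape parameter $\\beta_u$ that decays exponentially fast in $u$; relative to its own scale the exceedance distribution is therefore very close to exponential at high thresholds.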
\n\nComputations summarised in~{Appendix}~\\ref{subsect:gompertz} show that the exponential model, and hence also the Gompertz model with very small $\\beta$, give equally good fits to the Italian and the French datasets above age 107, and that the Gompertz and generalised Pareto models fit equally well above age 105.\n\n\n\n\n\t\t\n\n\\section*{Conclusions}%\n\\label{sect:concluskons}\nNone of the analyses of the \\textsf{ISTAT}{}, \\textsf{IDL 2016}{} or \\textsf{France 2019}{} data, for women and men separately or combined, indicates any deviation from exponentially distributed residual lifetimes, or equivalently from constant force of mortality, beyond 108 years.\n\nTable~\\ref{table:women-men} shows no differences between survival after age 108 in the \\textsf{ISTAT}{} data and survival after age 110 in the \\textsf{IDL 2016}{} data for women, for men, or for women and men combined, so we merged these estimates by taking a weighted average with weights inversely proportional to the estimated variances. The resulting estimates also show no significant differences in survival between men and women, and we conclude that survival times in years after age 108 in the \\textsf{ISTAT}{} data and after age 110 in the \\textsf{IDL 2016}{} data are exponentially distributed with estimated scale parameter $1.43$ and 95\\% confidence interval $(1.33, 1.52)$. The corresponding estimated probability of surviving one more year is $0.5$ with 95\\% confidence interval $(0.47, 0.52)$. \n\nThere was no indication of differences in survival for women or the whole of the \\textsf{France 2019}{} data and in the combined \\textsf{ISTAT}{} and \\textsf{IDL 2016}{} data, but survival for men was lower in the \\textsf{France 2019}{} data. A weighted average of the estimates for the \\textsf{ISTAT}{} data, the \\textsf{France 2019}{} data and the \\textsf{IDL 2016}{} data with France removed gives an exponential scale parameter estimate of $1.42$ years with 95\\% confidence interval $(1.34, 1.49)$, and estimated probability $0.49 (0.47, 0.51)$ of surviving one more year. \n\nDeleting the men from the \\textsf{France 2019}{} data or dropping Jeanne Calment changes estimates and confidence intervals by at most one unit in the second decimal.\n\nThere is high power for detection of an upper limit to the human lifespan up to around 130 years, based on fits of the generalized Pareto model to the three datatbases. Moreover there is no evidence that the Gompertz model, with increasing hazard, fits better than the exponential model, constant hazard, above 108 years.\n\t\t\n\n\n\n\n\n\n\n\n\n\\section*{Discussion}\n\\label{sec:discussion}\n\nThe results of the analysis of the newly-available \\textsf{ISTAT}{}\\ data agree strikingly well with those for the \\textsf{IDL}{}\\ supercentenarian database and for the women in the \\textsf{France 2019}{}\\ data. Once the effects of the sampling frame are taken into account by allowing for truncation and censoring of the ages at death, a model with constant hazard after age 108 fits all three datasets well; it corresponds to a constant probability of 0.49 that a living person will survive for one further year, with 95\\% confidence interval (0.48, 0.51). The power calculations make it implausible that there is an upper limit to the human lifespan of 130 years or below.\n\nAlthough many fewer men than women reach high ages, no difference in survival between the sexes is discernible in the \\textsf{ISTAT}{} and the \\textsf{IDL 2016}{} data. 
Survival of men after age 108 is lower in the \\textsf{France 2019}{} data, but it seems unlikely that this reflects a real difference between France and Italy and between France and the other countries in the \\textsf{IDL}{}. It seems more plausible that this effect is due to some form of age bias or is a false positive caused by multiple testing. \n\nIf the \\textsf{ISTAT}{} and \\textsf{France 2019}{} data are split by birth cohort, then we find roughly constant mortality from age 105 for those born before the end of 1905, whereas those born in 1906 and later have lower mortality for ages 105--107; this explains the cohort effects detected by \\cite{Barbi:2018}. Possibly the mortality plateau is reached later for later cohorts. The plausibility of this hypothesis could be weighed if further high-quality data become available.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction\\label{sec:introduction}}\n\nPublic key cryptography is a critical component of many widely-used cryptosystems, and forms the basis for much of our ecommerce transaction security infrastructure. Unfortunately, the most common public key schemes are known to be insecure against quantum computers. In 1994, Peter Shor developed a quantum algorithm for efficient factorization and discrete logarithms~\\cite{shor:factor2}; the (supposed) hardness of these two problems formed the basis for RSA and DSA, respectively. Sufficiently powerful quantum computers do not yet exist, but the possibility of their existence in the future already poses problems for those with significant forward security requirements.\n\nA more secure replacement for public key cryptography is needed. Ideally, this replacement would offer information-theoretic security, and would possess most or all of the favorable qualities of public key cryptography. At present, no complete replacement exists, but quantum key distribution (QKD)---in conjunction with one-time pad (OTP) or other symmetric ciphers---appears promising.\n\nQKD---first developed by Bennett and Brassard~\\cite{bennett:BB84}---is a key distribution scheme that relies upon the uncertainty principle of quantum mechanics to guarantee that any eavesdropping attempts will be detected. In a typical QKD setup, individual photons are sent through optical fiber or through free space from the sender to the receiver. The receiver performs measurements on the photons, and sender and receiver communicate via an authenticated (but not necessarily private) classical channel.\n\nOptical attenuation of these single photon pulses limits the maximum transmission distance for a single QKD link to about 200 km over fiber with present technology~\\cite{takesue:qkd}, and significantly less through air. Unlike optically-encoded classical information, the ``signal strength'' of these photons cannot be amplified using a conventional optical amplifier; the No Cloning Theorem~\\cite{wootters:cloning} prohibits this. We refer to this challenge as the \\emph{relay problem}.\n\nTwo classes of quantum repeaters have been proposed to resolve the distance limitations of QKD. The first makes use of quantum error correction to detect and rectify errors in specially-encoded pulses. Unfortunately, the extremely low error thresholds for such schemes ($\\sim 10^{-4}$) make this impractical for use in a realistic quantum repeater. 
The second class of quantum repeaters uses entanglement swapping and distillation~\\cite{briegel:5932,duan:6862} to establish entanglement between the endpoints of a chain of quantum repeaters, which can then be used for QKD~\\cite{ekert:661}. This method is much more tolerant of errors, and offers resource costs that scale only polynomially with the number of repeaters (i.e., polynomially with distance). However, such repeaters do have one major drawback: they require quantum memories with long decoherence times~\\cite{duan:6862}.\n\nIn order to be useful for practical operation, a quantum repeater must possess a quantum memory that meets the following three requirements:\n\\begin{enumerate}\n\\item Long coherence times: at a minimum, coherence times must be comparable to the transit distance for the entire repeater chain (e.g., $\\sim 10\\ \\mathrm{ms}$ for a trans-Atlantic link).\n\\item High storage density: the bandwidth for a quantum repeater is limited by the ratio of its quantum memory capacity to the transit time for the entire repeater chain~\\cite{simon:190503}.\n\\item Robustness in extreme environments: practical quantum repeaters must be able to operate in the range of environments to which telecom equipment is exposed (e.g., on the ocean floor, in the case of a trans-oceanic link).\n\\end{enumerate}\nThese requirements are so demanding that it is possible that practical quantum repeaters will not be widely available until after large-scale quantum computers have been built---in other words, not until too late.\n\nThe distance limitations of QKD and the issues involved in developing practical quantum repeaters make it challenging to build secure QKD networks that span a large geographic area. The na\\\"{\\i}ve solution of classical repeaters leads to exponentially decaying security with transmission distance if each repeater has some independent probability of being compromised. If large QKD networks are to be built in the near future (i.e., without quantum repeaters), an alternative method of addressing the single-hop distance limitation must be found. We refer to this as the \\emph{relay problem}.\n\nGiven an adversary that controls a randomly-determined subset of nodes in the network, we have developed a solution to the relay problem that involves encoding encryption keys into multiple pieces using a secret sharing protocol~\\cite{shamir:secret,blakley:313}. These shares are transmitted via multiple multi-hop paths through a QKD network, from origin to destination. Through the use of a distributed re-randomization protocol at each intermediate stage, privacy is maintained even if the attacker controls a large, randomly-selected subset of all the nodes. \n\nWe note that authenticated QKD is information-theoretic secure~\\cite{renner:012332}, as is OTP; in combination, these two cryptographic primitives provide information-theoretic security on the level of an individual link. Our protocol makes use of many such links as part of a network that provides information-theoretic security with very high probability. In particular, with some very small probability $\\delta$, the protocol fails in such a way as to allow a sufficiently powerful adversary to perform undetected man-in-the-middle (MITM) attacks. The failure probability $\\delta$ can be made arbitrarily small by modest increases in resource usage. In all other cases, the network is secure. 
We describe the level of security of our protocol as \\emph{probabilistic information-theoretic}.\n\nIn analyzing our protocol, we consider a network composed of a chain of ``cities'', where each city contains several parties, all of whom are linked to all the other parties in that city. We assume intracity bandwidth is cheap, whereas intercity bandwidth is expensive; intercity bandwidth usage is the main resource considered in our scaling analysis. For the sake of simplicity, we consider communication between two parties (Alice and Bob) who are assumed to be at either end of the chain of cities. A similar analysis would apply to communication between parties at any intermediate points in the network.\n\n\\section{Adversary and Network Model}\n\nIt is convenient to model networks with properties similar to those described above by using undirected graphs, where each vertex represents a node or party participating in the network, and each edge represents a secure authenticated private channel. Such a channel could be generated by using QKD in conjunction with a shared secret key for authentication, or by any other means providing information-theoretic security.\n\nWe describe below an adversary and network model similar in some ways to one we proposed earlier\\footnote{Pre-print available at www.arXiv.org as arXiv:0803.2717} \nin the context of a protocol for authenticating mutual strangers in a very large QKD network, which we referred to as the \\emph{stranger authentication protocol}. In that protocol, edges represented shared secret keys, whereas here they represent physical QKD links. Network structure in the previous model was assumed to be random (possibly with a power law distribution, as is common in social networks), whereas here the network has a specific topology dictated by geographic constraints, the distance limitations of QKD, and the requirements of the protocol.\n\n\\subsection{Adversarial Capabilities and Limitations\\label{sec:ad_cap}}\nWe call the following adversary model the \\emph{sneaky supercomputer}:\n\\begin{enumerate}[(i)]\n\\item \\label{it:adcap1}The adversary is computationally unbounded.\n\\item \\label{it:adcap2}The adversary can listen to, intercept, and alter any message on any public channel.\n\\item \\label{it:adcap3}The adversary can compromise a randomly-selected subset of the nodes in the network. Compromised nodes are assumed to be under the complete control of the adversary. The total fraction of compromised nodes is limited to $(1-t)$ or less.\n\\end{enumerate}\n\nSuch an adversary is very powerful, and can successfully perform MITM attacks against public key cryptosystems (using the first capability) and against unauthenticated QKD (using the second capability), but not against a QKD link between two uncompromised nodes that share a secret key for authentication (since quantum mechanics allows the eavesdropping to be detected) \\cite{renner:012332}. The adversary can always perform denial-of-service (DOS) attacks by simply destroying all transmitted information; since DOS attacks cannot be prevented in this adversarial scenario, we concern ourselves primarily with security against MITM attacks. Later, we will briefly consider variants of this adversarial model and limited DOS attacks.\n\nThe third capability in this adversarial model---the adversary's control of a random subset of nodes---simulates a network in which exploitable vulnerabilities are present on some nodes but not others. 
As a first approximation to modeling a real-world network, it is reasonable to assume the vulnerable nodes are randomly distributed throughout the network.\n\nAn essentially equivalent adversarial model is achieved if we replace the third capability as follows: suppose the adversary can attempt to compromise any node, but a compromise attempt succeeds only with probability $(1-t)$, and the adversary can make no more than one attempt per node. In the worst case where the adversary attempts to compromise all nodes, the adversary will control a random subset of all nodes, with the fraction of compromised nodes being roughly $(1-t)$.\n\n\\subsection{The Network}\nFor the relay problem, let us represent the network as a graph~$G$, with~$V(G)$ being the set of vertices (nodes participating in the network) and $E(G)$ being the set of edges (secure authenticated channels, e.g. QKD links between parties who share secret keys for authentication). $N = |V(G)|$ is the number of vertices (nodes). $V_d$ is the set of compromised nodes, which are assumed to be under the adversary's control; $|V_d| \\leq N (1-t)$. Furthermore, let us assume that the network has the following structure: nodes are grouped into $m$ clusters---completely connected sub-graphs containing $n$ nodes each. There are thus $N=mn$ nodes in the network. We label the nodes as $v_{i,j}$, $i\\in \\left\\{1,\\dots,n\\right\\}$, $j\\in \\left\\{1,\\dots,m\\right\\}$. Each node is connected to one node in the immediately preceding cluster and one node in the cluster immediately following it. \n\nMore formally, let $E_\\ell(G) \\equiv \\{(v_{i,j},v_{i,j+1}) : v_{i,j}, v_{i,j+1} \\in V(G)\\}$ and $E_\\sigma(G) \\equiv \\{(v_{i,j},v_{k,j}) : v_{i,j}, v_{k,j} \\in V(G)\\}$. Then, $E(G) \\equiv E_\\ell(G) \\cup E_\\sigma(G)$.\n\nThis network structure models a chain of $m$ cities (a term which we use interchangeably with ``cluster''), each containing $n$ nodes. The cities are spaced such that the physical distance between cities allows QKD links only between adjacent cities. To realistically model the costs of communication bandwidth, we assume that use of long distance links (i.e., those represented by $E_\\ell(G)$) is expensive, whereas intracity links (i.e., $E_\\sigma(G)$) are cheap.\n\nNext, we consider two additional nodes---a sender and a receiver. The sender (hereafter referred to as Alice or simply $A$) has direct links to all the nodes in city 1, while the receiver (Bob, or $B$) has a link to all nodes in city $m$. We assume Alice and Bob to be uncompromised. An example is shown in Fig. \\ref{fig:relay_graph}.\n\n\\section{The Relay Protocol\\label{sec:relay}}\nIn the relay problem, Alice wishes to communicate with Bob over a distance longer than that possible with a single QKD link, with quantum repeaters being unavailable. As described above, Alice and Bob are separated by $m$ ``cities'', each containing $n$ participating nodes. (In the case where different cities contain different numbers of participating nodes, we obtain a lower bound on security by taking $n$ to be the minimum over all cities.) \n\n\\begin{figure} \\centering\n\\includegraphics[width=3.25 in, keepaspectratio=true]{relay_graph}\n\\caption{\\label{fig:relay_graph} White vertices represent honest parties, whereas shaded vertices represent dishonest parties. Double vertical lines represent secure communication links between all joined vertices (i.e., all parties within a given city can communicate securely). 
In the graph shown above, $40\\%$ of the parties in cities between Alice and Bob are dishonest, but Alice and Bob can still communicate securely using the method described in Sec. \\ref{sec:relay} and Fig. \\ref{fig:protocol}.}\n\\end{figure}\n\nTo achieve both good security and low intercity bandwidth usage, we can employ a basic secret sharing scheme with a distributed re-randomization of the shares \\cite{ben-or:distributed} performed by the parties in each city. This re-randomization procedure is similar to that used in the mobile adversary proactive secret sharing scheme \\cite{ostrovsky:112605,herzberg:339}. Note that in the following protocol description, the second subscript labels the city, while the first subscript refers to the particular party within a city.\n\n\\begin{enumerate}[(i)]\n \\item Alice generates $n$ random strings $r_{i,0}, i\\in\\{1,\\ldots,n\\}$ of length $\\ell$, $r \\in \\{0,1\\}^\\ell$. $\\ell$ is chosen as described in Sec. \\ref{sec:verify_protocol}.\n \\item Alice transmits the strings to the corresponding parties in the first city: $v_{i,1}$ receives $r_{i,0}$.\n \\item \\label{it:party_rec}When a party $v_{i,j}$ receives a string $r_{i,j-1}$, it generates $n-1$ random strings $q_{i,j}^{(k)}, k\\neq i$ of length $\\ell$, and transmits each string $q_{i,j}^{(k)}$ to party $v_{k,j}$ (i.e., transmission along the vertical double lines shown in Fig. \\ref{fig:relay_graph}).\n \\item \\label{it:party_gen}Each party $v_{i,j}$ generates a string $r_{i,j}$ as follows: \n \\[r_{i,j} \\equiv r_{i,j-1} \\oplus \\left(\\bigoplus_{k,k\\neq i} q_{i,j}^{(k)} \\right) \\oplus \\left( \\bigoplus_{k,k\\neq i} q_{k,j}^{(i)} \\right),\\]\n where the symbols ($\\oplus$ and $\\bigoplus$) are both understood to mean bitwise XOR. Note that the string $r_{i,j-1}$ is received from a party in the previous city, the strings $q_{i,j}^{(k)}$ are generated by the party $v_{i,j}$, and the strings $q_{k,j}^{(i)}$ are generated by other parties in the same city as $v_{i,j}$. The string $r_{i,j}$ is then transmitted to party $v_{i,j+1}$ (i.e., transmission along the horizontal lines shown in Fig. \\ref{fig:relay_graph}).\n \\item Steps (\\ref{it:party_rec}) and (\\ref{it:party_gen}) are repeated until the strings reach the parties in city $m$. All the parties $v_{i,m}$ in city $m$ forward the strings they receive to Bob.\n \\item Alice constructs $s \\equiv \\prod_{i} r_{i,0}$ and Bob constructs $s^\\prime \\equiv \\prod_{i} r_{i,j-1}$.\n \\item Alice and Bob use the protocol summarized in Fig. \\ref{fig:protocol} and described in detail in Section \\ref{sec:verify_protocol} to determine if $s=s^\\prime$. If so, they are left with a portion of $s$ (identified as $s_3$), which is their shared secret key. If $s \\neq s^\\prime$, Alice and Bob discard $s$ and $s^\\prime$ and repeat the protocol.\n\\end{enumerate}\n\n\\begin{figure} \\centering\n \\includegraphics[width=3.50 in, keepaspectratio=true]{alice_bob_verify}\n \\caption[]{\\label{fig:protocol} Alice and Bob perform a verification sub-protocol to check that their respective secret keys, $s=(s_1,s_2,s_3)$ and $s^\\prime=(s_1^\\prime,s_2^\\prime,s_3^\\prime)$, are in fact the same. Alice generates a random number $r$, concatenates it with the hash $H[s_3]$ of $s_3$, XORs this with $s_1$, and sends the result to Bob. Bob decodes with $s_1^\\prime$, verifies that $H[s_3] = H[s_3^\\prime]$, then sends back to Alice the result of bit-wise XORing the hash of $r$, $H[r]$, with $s_2^\\prime$. 
Finally, Alice decodes with $s_2$ and checks to see that the value Bob has computed for $H[r]$ is correct. Alice and Bob now know $s_3 = s_3^\\prime$ and can store $s_3$ for future use. Note that with this protocol, the adversary can fool Alice and Bob into accepting $s \\neq s^\\prime$ with 100 \\% probability if the adversary knows $s$ and $s^\\prime$. }\n \\end{figure}\n\n\\subsection{Key Verification \\label{sec:verify_protocol}}\nIn the last step of the protocol described above, Alice and Bob must verify that their respective keys, $s$ and $s^\\prime$, are the same and have not been tampered with. We note that there are many ways\\footnote{See for example pp. 13--14 of the SECOQC technical report D-SEC-48, by L. Salvail \\cite{salvail:qkd}.} to accomplish this; we present one possible method here (summarized in Fig. \\ref{fig:protocol}) for definiteness, but make no claims as to its efficiency.\n\nWe consider Alice's key $s$ to be composed of three substrings, $s_1$, $s_2$, and $s_3$, with lengths $\\ell_1$, $\\ell_2$, and $\\ell_3$, respectively (typically, $\\ell_3 \\gg \\ell_1,\\ell_2$). Bob's key $s^\\prime$ is similarly divided into $s_1^\\prime$, $s_2^\\prime$, and $s_3^\\prime$. If Alice and Bob successfully verify that $s_3^\\prime = s_3$, they can use $s_3$ as a shared secret key for OTP encryption or other cryptographic purposes.\n\nThe verification is accomplished as follows:\n\\begin{enumerate}[(i)]\n\\item Alice generates a random nonce $r$, and computes the hash $H[s_3]$ of $s3$. She then sends $(r,H[s_3]) \\oplus s_1$ to Bob.\n\\item Bob receives the message from Alice, decrypts by XORing with $s_1^\\prime$, and verifies that the received value of $H[s_3]$ matches $H[s_3^\\prime]$. If so, he accepts the key, and sends Alice the message $H[r] \\oplus s_2^\\prime$. If not, Bob aborts. \n\\item Alice decrypts Bob's message by XORing with $s_2$, and verifies that the received value of $H[r]$ is correct. If so, Alice accepts the key, and verification is successful. If not, Alice aborts.\n\\end{enumerate}\n\nWe now outline a proof of the security of this verification process, and discuss requirements for the hash function $H$. We begin with the assumption that Eve does not know $s$ or $s^\\prime$; if she does, the relay protocol has failed, and Eve can perform MITM attacks without detection (conditions under which the relay protocol can fail are analyzed in Sec. \\ref{sec:security}). Our goal is to show that Alice and Bob will with very high probability detect any attempt by Eve to introduce errors in $s_3^\\prime$ (i.e., any attempt by Eve to cause $s_3^\\prime \\neq s_3$), and that the verification process will also not reveal any information about $s_3$ to Eve.\n\nWe note that any modification by Eve of the messages exchanged by Alice and Bob during the verification process is equivalent to Eve introducing errors in $s_1^\\prime$ and $s_2^\\prime$ during the main part of the relay protocol. If she controls at least one intermediate node, Eve can introduce such errors by modifying one or more of the strings transmitted by a node under her control. 
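\n\nAs a toy illustration (ours; we read the product in the construction of $s$ as the bitwise XOR of the shares, this being the quantity that the re-randomization step preserves), the sketch below checks both that the per-city re-randomization leaves the combined key unchanged and that a modification injected into one forwarded string reappears as a plain XOR offset on the reconstructed key:\n\\begin{verbatim}\n# Toy check with arbitrary parameter values; not a secure implementation.\nimport secrets\nfrom functools import reduce\n\nn, ell = 4, 16                     # parties per city, bytes per share\ndef xor(a, b):\n    return bytes(x ^ y for x, y in zip(a, b))\n\nr_in = [secrets.token_bytes(ell) for i in range(n)]    # r_{i,j-1} entering a city\nq = [[secrets.token_bytes(ell) for k in range(n)] for i in range(n)]  # q_{i,j}^{(k)}\nr_out = []\nfor i in range(n):\n    sent = [q[i][k] for k in range(n) if k != i]   # strings party i generated\n    recv = [q[k][i] for k in range(n) if k != i]   # strings party i received\n    r_out.append(reduce(xor, [r_in[i]] + sent + recv))\n\nassert reduce(xor, r_in) == reduce(xor, r_out)     # XOR of the shares is preserved\ne = secrets.token_bytes(ell)                       # Eve tampers with one string\nr_out[2] = xor(r_out[2], e)\nassert reduce(xor, r_out) == xor(reduce(xor, r_in), e)  # reconstructed key is s XOR e\n\\end{verbatim}\n\n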
We can thus completely describe Eve's attack on the protocol by a string $e=(e_1,e_2,e_3)$, where $s^\\prime = s \\oplus e$, and the three substrings $e_1$, $e_2$, and $e_3$ have lengths $\\ell_1$, $\\ell_2$, and $\\ell_3$, respectively (with $\\ell = \\ell_1+\\ell_2+\\ell_3$).\n\nIt is clear that Eve cannot gain any information about $s_3$ from the verification process, since the only information ever transmitted about $s_3$ (the hash $H[s_3]$) is encrypted by the OTP $s_1$, and $s_1$ is never re-used.\n\nBefore proceeding, let us further partition $s_1$ into two strings $s_{1a}$ and $s_{1b}$, where $s_{1a}$ is the portion of $s_1$ used to encrypt $r$, and $s_{1b}$ is the portion used to encrypt $H[s_3]$. Let $\\ell_{1a}$ and $\\ell_{1b}$ be the lengths of $s_{1a}$ and $s_{1b}$. We similarly partition $s_1^\\prime$ and $e_1$.\n\nEve's only hope of fooling Bob into accepting a tampered-with key (i.e., accepting even though $s_3^\\prime \\neq s_3$) is for her to choose $e_{1b}$ and $e_3$ such that the expression $H[s_3]\\oplus H[s_3 \\oplus e_3] = e_{1b}$ is satisfied. Random guessing will give her a $\\sim2^{-\\ell_{1b}}$ chance of tricking Bob into accepting; for Eve to do better, she must be able to exploit a weakness in the hash function $H$ that gives her some information as to the correct value of $e_{1b}$ for some choice of $e_3$. Note that Eve's best strategy for this attack is to choose $e_{1a}$ and $e_2$ to be just strings of zeroes.\n\nFrom this observation, we obtain the following condition on the hash function: for a random $s_3$ (unknown to Eve), there exists no choice of $e_3$ such that Eve has any information about the value of $e_{1b}$ she should choose to satisfy $H[s_3]\\oplus H[s_3 \\oplus e_3] = e_{1b}$. In practice, it would be acceptable for Eve to gain a very small amount of information, as long as the information gained did not raise Eve's chances much beyond random guessing. This is a relatively weak requirement on $H$, and is likely satisfied by any reasonable choice of hash function.\n\nTo fool Alice into falsely accepting, Eve can either fool Bob via the aforementioned method, or Eve can attempt to impersonate Bob by sending Alice a random string of length $\\ell_2$, in the hopes that it happens to be equal to $s_2 \\oplus H[r]$. Clearly, her chances for the latter method are no better than $2^{-\\ell_2}$. The latter method of attack only fools Alice and not Bob; it is thus of limited use to Eve.\n\nWe note that the security of the verification protocol depends on the choice of $\\ell_1$ and $\\ell_2$ (as described above); these parameters should be chosen so as to provide whatever degree of security is required. Alice and Bob choose $\\ell_3$ so as to obtain whatever size key they desire. Since the security of the verification process does not depend on $\\ell_3$, the communication cost of key verification is negligible in the limit of large $\\ell_3$ (i.e., in the limit of large final key size).\n\n\\section{Security of the Relay Protocol\\label{sec:security}}\nIn order for the secret to be compromised, there must be some $j \\in \\{1, \\ldots, m-1\\}$ such that, for all $i \\in \\{1, \\ldots, n\\}$, at least one of $v_{i,j}$ and $v_{i,j+1}$ is dishonest (i.e., such that, for some $j$, every string $r_{i,j}$ is either sent or received by a compromised party). If this happens, we say the protocol has been compromised at stage $j$. 
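\n\nThis event is straightforward to estimate numerically; the sketch below (ours, with arbitrary illustrative values of $n$, $m$ and $t$) draws each node honest independently with probability $t$, checks whether every stage retains at least one honest adjacent pair, and compares the result with the analytic lower bound derived next:\n\\begin{verbatim}\n# Monte Carlo estimate of the probability that no stage is compromised.\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, m, t, trials = 8, 20, 0.7, 20000            # arbitrary illustrative values\nhonest = rng.random((trials, n, m)) < t        # honest[trial, i, j]: node v_{i,j} honest\npair_ok = honest[:, :, :-1] & honest[:, :, 1:] # both v_{i,j} and v_{i,j+1} honest\nstage_ok = pair_ok.any(axis=1)                 # stage j keeps some honest pair\np_secure = stage_ok.all(axis=1).mean()         # no stage compromised\nprint('simulated p_s :', round(float(p_secure), 3))\nprint('analytic bound:', round((1 - (1 - t**2)**n)**(m - 1), 3))\n\\end{verbatim}\n\n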
For a given $j$, the probability of compromise is $(1-t^2)^n$, but the probability for $j$ is not entirely independent of the probabilities for $j-1$ and $j+1$. Thus, we can bound from below the overall probability of the channel between Alice and Bob being secure, $p_s$, by (\\ref{eq:relay_bounds}):\n\\begin{eqnarray}\np_s & \\geq & \\left[1- (1-t^2)^n\\right]^{m-1}. \\label{eq:relay_bounds}\n\\end{eqnarray}\nFrom this result, we see that, if we wish to ensure our probability of a secure channel between Alice and Bob is at least $p_s$, it is sufficient to choose $n = \\log \\left( 1- p_s^{1\/(m-1)} \\right) \/ \\log \\left( 1- t^2 \\right)$. Intercity bandwidth consumed is proportional to $n$, so we see that we have good scaling of resource consumption with communication distance. Alternatively, we can re-write the equation for choosing $n$ in terms of a maximum allowed probability of compromise, $\\delta = 1 - p_s$. For $\\delta \\ll 1$, we obtain the following relation:\n\\begin{eqnarray*}\nn & \\simeq & \\frac{\\log{(m-1)} - \\log {\\delta}}{-\\log {(1 -t^2)}}.\n\\end{eqnarray*}\nTotal resource usage (intercity communication links required) scales as $\\mathcal{O}(mn)$, or $\\mathcal{O}(m \\log{m})$ for fixed $\\delta$, $t$. While intracity communication requirements scale faster (as $\\mathcal{O}(mn^2)$), it is reasonable to ignore this because of the comparatively low cost of intracity communication and the finite size of the earth (which effectively limits $m$ to a maximum of 100 or so for a QKD network with single link distances of $\\sim100\\ \\mathrm{km}$).\n\nIf each party in the network simultaneously wished to communicate with one other party (with that party assumed to be $m\/2$ cities away on average), total intercity bandwidth would scale as $\\mathcal{O}(m^2n^2)$. By comparison, the bandwidth for a network of the same number of parties employing public key cryptography (and no secret sharing) would scale as $\\mathcal{O}(m^2n)$. Since $n$ scales relatively slowly (i.e., with $\\log m$), this is a reasonable penalty to pay for improved security.\n\n\\section{Alternative Adversary Models}\nWe now briefly consider a number of alternative adversary models. First, let us consider replacing adversary capability (\\ref{it:adcap3}) with the following alternative, which we term (\\ref{it:adcap3}$^\\prime$): the adversary can compromise up to $k-1$ nodes of its choice. Compromised nodes are assumed to be under the complete control of the adversary, as before. In this scenario, the security analysis is trivial. If $k > n$, the adversary can compromise Alice and Bob's communications undetected. Otherwise, Alice and Bob can communicate securely. \n\nWe could also imagine an adversary controls some random subset of nodes in the network---as described by (\\ref{it:adcap3})---and wishes to disrupt communications between Alice and Bob (i.e., perform a DOS attack), but does not have the capability to disrupt or modify public channels. Alice and Bob can modify the protocol to simultaneously protect against both this type of attack and also the adversary mentioned in Section \\ref{sec:ad_cap}. To do so, they replace the simple secret sharing scheme described above with a Proactive Verifiable Secret Sharing (PVSS) scheme~\\cite{darco:vss}. In this scenario, nodes can check at each stage to see if any shares have been corrupted, and take corrective measures. 
This process is robust against up to $n\/4 - 1$ corrupt shares, which implies that PVSS yields little protection against DOS attacks unless $t > t_{\\mathrm{thresh}} \\approx \\sqrt{3}\/2$.\n\n\\section{Conclusion\\label{sec:conclusion}}\n\nWe have shown a protocol for solving the relay problem and building secure long-distance communication networks with present-day QKD technology. The protocol proposed employs secret sharing and multiple paths through a network of partially-trusted nodes. Through the choice of moderately large $n$ in the relay problem, one can make the possibility of compromise vanishingly small. For fixed probability of compromise of each of the intermediate nodes, the number of nodes per stage required to maintain security scales only logarithmically with the number of stages (i.e., with distance).\n\nGiven that QKD systems are already commercially available, our methods could be implemented today. \n\n\n\\section{Acknowledgments}\nWe wish to thank Louis Salvail, Aidan Roy, Rei Safavi-Naini, Douglas Stebila, Hugh Williams, Kevin Hynes, and Renate Scheidler for valuable discussions. TRB acknowledges support from a US Department of Defense NDSEG Fellowship. BCS acknowledges support from iCORE and CIFAR.\n\n\\bibliographystyle{splncs}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOn numerous social media platforms, such as YouTube, Facebook, or Instagram, people share their\nopinions on all kinds of topics in the form of posts, images, and video clips. \nWith the proliferation of smartphones and tablets, which has greatly boosted content sharing, \npeople increasingly share their opinions on newly released products or on other topics in form of video reviews or comments. \nThis is an excellent opportunity for large companies to capitalize on, by extracting\nuser sentiment, suggestions, and complaints on their products from these video reviews.\nThis information also opens new horizons to improving our quality of life by making informed\ndecisions on the choice of products we buy, services we use, places we visit, or movies we watch\nbasing on the experience and opinions of other users.\n\nVideos convey information through three channels: audio, video, and text (in\nthe form of speech). Mining opinions from this plethora of multimodal data calls for\na solid multimodal sentiment analysis technology. One of the major problems\nfaced in multimodal sentiment analysis is the fusion of features pertaining to\ndifferent modalities. For this, the majority of the recent works in multimodal sentiment\nanalysis have simply concatenated the feature vectors of different\nmodalities. However, this does not take into account that different modalities\nmay carry conflicting information. We hypothesize that the fusion method we\npresent in this paper deals with this issue better, and present experimental\nevidence showing improvement over simple concatenation of feature vectors. Also,\nfollowing the state of the art~\\citep{porcon}, we employ recurrent\nneural network (RNN) to propagate contextual information between utterances in a\nvideo clip, which significantly improves the classification results and outperforms the\nstate of the art by a significant margin of 1--2\\% for all the modality\ncombinations.\n\nIn our method, we first obtain unimodal features for each utterance for all\nthree modalities. Then, using RNN we extract context-aware utterance features. 
\nThus, we transform the context-aware utterance vectors to the\nvectors of the same dimensionality. We assume that these transformed vectors contain\nabstract features representing the attributes relevant to sentiment\nclassification. Next, we compare and combine each bimodal combination of these\nabstract features using fully-connected layers. This yields fused bimodal\nfeature vectors. Similarly to the unimodal case, we use RNN to generate\ncontext-aware features. Finally, we combine these bimodal vectors into a\ntrimodal vector using, again, fully-connected layers and use a RNN to pass\ncontextual information between them. We empirically show that the feature vectors\nobtained in this manner are more useful for the sentiment classification task.\n\nThe implementation of our method is publicly available in the form of open-source code.\\footnote{\\url{http:\/\/github.com\/senticnet}}\n\nThis paper is structured as follows: \\cref{sec:related-work-1} briefly discusses important previous work in multimodal feature fusion; \\cref{sec:model} describes our method in details; \\cref{sec:experiments} reports the results of our experiments and discuss their implications; finally, \\cref{sec:conclusions} concludes the paper and discusses future work.\n\n\\section{Related Work}\n\\label{sec:related-work-1}\nIn recent years, sentiment analysis has become increasingly popular for processing social media data on online communities, blogs, wikis, microblogging platforms, and other online collaborative media~\\citep{camacsa}. Sentiment analysis is a branch of affective computing research~\\citep{porrev} that aims to classify text -- but sometimes also audio and video~\\citep{hazcon} -- into either positive or negative -- but sometimes also neutral~\\citep{chadis}. Most of the literature is on English language but recently an increasing number of works are tackling the multilinguality issue~\\citep{loomul,dashtipour2016multilingual}, especially in booming online languages such as Chinese~\\citep{penlea}.\nSentiment analysis techniques can be broadly categorized into symbolic and sub-symbolic approaches: the former include the use of lexicons~\\citep{banlex}, ontologies~\\citep{draont}, and semantic networks~\\citep{camnt5} to encode the polarity associated with words and multiword expressions; the latter consist of supervised~\\citep{onesta}, semi-supervised~\\citep{hussem} and unsupervised~\\citep{liilea} machine learning techniques that perform sentiment classification based on word co-occurrence frequencies. 
Among these, the most popular recently are algorithms based on deep neural networks~\\citep{yourec} and generative adversarial networks~\\citep{liigen}.\n\nWhile most works approach it as a simple categorization problem, sentiment analysis is actually a suitcase research problem~\\citep{camsui} that requires tackling many NLP tasks, including word polarity disambiguation~\\citep{xiawor}, subjectivity detection~\\citep{chasub}, personality recognition~\\citep{majdee}, microtext normalization~\\citep{satpho}, concept extraction~\\citep{dhegra}, time tagging~\\citep{zhotem}, and aspect extraction~\\citep{maatar}.\n\nSentiment analysis has raised growing interest both within the scientific community, leading to many exciting open challenges, as well as in the business world, due to the remarkable benefits to be had from financial~\\citep{xinfin} and political~\\citep{ebrcha} forecasting, e-health~\\citep{campat} and e-tourism~\\citep{valsen}, user profiling~\\citep{mihwha} and community detection~\\citep{cavlea}, manufacturing and supply chain applications~\\citep{xuuada}, human communication comprehension~\\citep{zadatt} and dialogue systems~\\citep{youaug}, etc.\n\nIn the field of emotion recognition, early works by~\\citet{de1997facial} and\n\\citet{chen1998multimodal} showed that fusion of audio and visual\nsystems, creating a bimodal signal, yielded a higher accuracy than any unimodal\nsystem. Such fusion has been analyzed at both feature\nlevel~\\citep{kessous2010multimodal} and decision\nlevel~\\citep{schuller2011recognizing}.\n\nAlthough there is much work done on audio-visual fusion for emotion recognition,\nexploring contribution of text along with audio and visual modalities in\nmultimodal emotion detection has been little\nexplored. \\citet{wollmer2013youtube} and~\\citet{rozgic2012ensemble} fused\ninformation from audio, visual and textual modalities to extract emotion and\nsentiment. \\citet{metallinou2008audio} and~\\citet{eyben2010line} fused audio and\ntextual modalities for emotion recognition. Both approaches relied on a\nfeature-level fusion. \\citet{wu2011emotion} fused audio and textual clues at\ndecision level. \\citet{pordee} uses convolutional neural network (CNN) to\nextract features from the modalities and then employs multiple-kernel learning\n(MKL) for sentiment analysis. The current state of the art, set forth by\n\\citet{porcon}, extracts contextual information from the surrounding utterances\nusing long short-term memory (LSTM). \\citet{porrev} fuses different\nmodalities with deep learning-based tools. \\citet{zadten} uses tensor\nfusion. \\citet{porens} further extends upon the ensemble of CNN and MKL.\n\nUnlike existing approaches, which use simple concatenation based early fusion~\\citep{pordee,pordep} and non-trainable tensors based fusion~\\citep{zadten}, this work proposes a hierarchical fusion capable of learning the bimodal and trimodal correlations for data fusion using deep neural networks. The method is end-to-end and, in order to accomplish the fusion, it can be plugged into any deep neural network based multimodal sentiment analysis framework. \n\n\\section{Our Method}\n\\label{sec:model}\n\nIn this section, we discuss our novel methodology behind solving the sentiment\nclassification problem. 
First we discuss the overview of our method and then we\ndiscuss the whole method in details, step by step.\n\n\\subsection{Overview}\n\\label{sec:overview}\n\n\\subsubsection{Unimodal Feature Extraction}\nWe extract utterance-level features for three modalities. This step is discussed\nin \\cref{UFE}.\n\n\\subsubsection{Multimodal Fusion}\n\n\\paragraph{Problems of early fusion}\nThe majority of the work on multimodal data use concatenation, or early fusion\n(\\cref{fig:early_fusion}), as their fusion strategy. The problem with this\nsimplistic approach is that it cannot filter out and conflicting or redundant\ninformation obtained from different modalities. To address this major issue, we\ndevise an hierarchical approach which proceeds from unimodal to bimodal vectors\nand then bimodal to trimodal vectors.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[scale=0.46]{.\/hfusion-concatenation-trimmed.pdf}\n \\caption{Utterance-level early fusion, or simple concatenation}\n \\label{fig:early_fusion}\n\\end{figure}\n\n\\paragraph{Bimodal fusion}\nWe fuse the utterance feature vectors for each bimodal combination, i.e., T+V,\nT+A, and A+V. This step is depicted in \\cref{fig:hfusion-bimodal} and discussed\nin details in \\cref{sec:bimodal}.\nWe use the penultimate layer for \\cref{fig:hfusion-bimodal} as bimodal features.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[scale=0.46]{.\/hfusion-2-modal-trimmed.pdf}\n \\caption{Utterance-level bimodal fusion}\n \\label{fig:hfusion-bimodal}\n\\end{figure}\n\n\\paragraph{Trimodal fusion}\nWe fuse the three bimodal features to obtain trimodal feature as depicted in\n\\cref{fig:hfusion-trimodal}. This step is discussed in details in \\cref{sec:trimodal}.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[scale=0.46]{.\/hfusion-Gelbukh-trimmed.pdf}\n \\caption{Utterance-level trimodal hierarchical fusion.\\protect\\footnotemark}\n \\label{fig:hfusion-trimodal}\n\\end{figure}\n\n\\paragraph{Addition of context}\nWe also improve the quality of feature vectors (both unimodal and multimodal) by\nincorporating information from surrounding utterances using RNN. We model the\ncontext using gated recurrent unit (GRU) as depicted in \\cref{fig:architecture}.\nThe details of context modeling is discussed in \\cref{sec:context} and the\nfollowing subsections.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=\\textwidth]{.\/chfusion.pdf}\n \\caption{Context-aware hierarchical fusion}\n \\label{fig:architecture}\n\\end{figure}\n\n\\paragraph{Classification}\nWe classify the feature vectors using a softmax layer.\n\n\n\\subsection{Unimodal Feature Extraction}\n\\label{UFE}\n\nIn this section, we discuss the method of feature extraction for three different\nmodalities: audio, video, and text.\n\n\\subsubsection{Textual Feature Extraction}\n\\label{text}\n\nThe textual data is obtained from the transcripts of the videos. We apply a deep\nConvolutional Neural Networks (CNN)~\\citep{karpathy2014large} on each utterance\nto extract textual features. Each utterance in the text is represented as an\narray of pre-trained 300-dimensional {\\tt word2vec}\nvectors~\\citep{mikolov2013efficient}. Further, the utterances are truncated or\npadded with null vectors to have exactly 50 words.\n\nNext, these utterances as array of vectors are passed through two different\nconvolutional layers; first layer having two filters of size 3 and 4\nrespectively with 50 feature maps each and the second layer has a filter of size\n2 with 100 feature maps. 
Each convolutional layer is followed by a max-pooling\nlayer with window $2\\times 2$.\n\nThe output of the second max-pooling layer is fed to a fully-connected layer\nwith 500 neurons with a rectified linear unit (ReLU)~\\citep{whyeteh2001rate}\nactivation, followed by softmax output. The output of the penultimate\nfully-connected layer is used as the textual feature. The translation of\nconvolution filter over makes the CNN learn abstract features and with each\nsubsequent layer the context of the features expands further.\n\n\n\n\n\\subsubsection{Audio Feature Extraction}\n\\label{audio}\n\nThe audio feature extraction process is performed at 30 Hz frame rate with 100\nms sliding window. We use openSMILE~\\citep{eyben2010opensmile}, which is capable\nof automatic pitch and voice intensity extraction, for audio feature\nextraction. Prior to feature extraction audio signals are processed with voice\nintensity thresholding and voice normalization. Specifically, we use\nZ-standardization for voice normalization. In order to filter out audio segments\nwithout voice, we threshold voice intensity. OpenSMILE is used to perform both\nthese steps. Using openSMILE we extract several Low Level Descriptors (LLD)\n(e.g., pitch , voice intensity) and various statistical functionals of them\n(e.g., amplitude mean, arithmetic mean, root quadratic mean, standard deviation,\nflatness, skewness, kurtosis, quartiles, inter-quartile ranges, and linear\nregression slope). ``IS13-ComParE'' configuration file of openSMILE is used to\nfor our purposes. Finally, we extracted total 6392 features from each input\naudio segment.\n\n\n\n\\subsubsection{Visual Feature Extraction}\n\\label{visual}\n\nTo extract visual features, we focus not only on feature extraction from each\nvideo frame but also try to model temporal features across frames. To achieve\nthis, we use 3D-CNN on the video. 3D-CNNs have been successful in the past,\nspecially in the field of object classification on 3D data~\\citep{ji20133d}. Its\nstate-of-the-art performance on such tasks motivates its use in this paper.\n\nLet the video be called $vid \\in \\mathbb{R}^{3\\times f\\times h\\times w}$, where\n$3$ represents the three RGB channels of an image and $f,\\ h,\\text{ and }w$\ndenote the cardinality, height, and width of the frames, respectively. A 3D\nconvolutional filter, named $f_{lt}\\in \\mathbb{R}^{f_m\\times 3\\times f_d\\times\nf_h\\times f_w}$, is applied to this video, where, similar to a 2D-CNN, the\nfilter translates across the video and generates the convolution output\n$conv_{out} \\in \\mathbb{R}^{f_m\\times 3\\times (f-f_d+1)\\times (h-f_h+1)\\times\n(w-f_w+1)}$. Here, $f_m,\\ f_d,\\ f_h,\\text{ and }f_w$ denote number of feature\nmaps, depth of filter, height of filter, and width of filter,\nrespectively. Finally, we apply max-pooling operation to the $conv_{out}$, which\nselects the most relevant features. This operation is applied only to the last\nthree dimensions of $conv_{out}$. This is followed by a dense layer and softmax\ncomputation. The activations of this layer is used as the overall video features\nfor each utterance video.\n\nIn our experiments, we receive the best results with filter dimensions $f_m =\n32$ and $f_d,f_h,f_w = 5$. 
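Before turning to the pooling and dense-layer settings given in the next paragraph, the following PyTorch-style fragment sketches the 3D-CNN video branch under the best-performing configuration reported here ($f_m=32$ filters of size $5\\times 5\\times 5$), together with the $3\\times 3\\times 3$ max-pooling window and 300-neuron dense layer stated below. The lazily initialized dense layer, the ReLU activations, and the dummy input size are our assumptions for illustration only.

\\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class Video3DCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv = nn.Conv3d(in_channels=3, out_channels=32, kernel_size=5)
        self.pool = nn.MaxPool3d(kernel_size=3)   # pools frames, height, width
        self.fc = nn.LazyLinear(300)              # dense layer; size depends on f, h, w
        self.out = nn.Linear(300, num_classes)

    def forward(self, vid):                       # vid: (batch, 3, f, h, w)
        x = F.relu(self.conv(vid))                # (batch, 32, f-4, h-4, w-4)
        x = self.pool(x)                          # max-pool over last three dims
        feat = F.relu(self.fc(x.flatten(1)))      # 300-d video feature
        return feat, self.out(feat)               # feature and softmax logits

# e.g. one utterance clip of 16 RGB frames at 64x64 resolution:
# feat, logits = Video3DCNN()(torch.randn(1, 3, 16, 64, 64))
\\end{verbatim}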
Also, for the max-pooling, we set the window size as\n$3\\times 3\\times 3$ and the succeeding dense layer with $300$ neurons.\n\n\n\n\\footnotetext{Figure adapted from~\\citep{mastersthesis} with permission.}\n\n\\subsection{Context Modeling}\n\\label{sec:context}\n\nUtterances in the videos are semantically dependent on each other. In other\nwords, complete meaning of an utterance may be determined by taking preceding\nutterances into consideration. We call this the context of an utterance.\nFollowing~\\citet{porcon}, we use RNN, specifically GRU\\footnote{LSTM does not\n perform well} to model semantic\ndependency among the utterances in a video.\n\nLet the following items represent unimodal features:\n\\begin{align*}\n f_A \\in \\mathbb{R}^{N\\times d_A}&\\quad\\text{(acoustic features)},\\\\\n f_V \\in \\mathbb{R}^{N\\times d_V}&\\quad\\text{(visual features)},\\\\\n f_T \\in \\mathbb{R}^{N\\times d_T}&\\quad\\text{(textual features)},\n\\end{align*}\nwhere $N=$ maximum number of utterances in a video. We pad the shorter videos\nwith dummy utterances represented by null vectors of corresponding length.\nFor each modality, we feed the unimodal utterance features $f_m$ (where $m \\in\n\\{A,V,T\\}$) (discussed in \\cref{UFE}) of a video to $GRU_m$ with\noutput size $D_m$, which is defined as\n\\begin{flalign*}\n z_m&=\\sigma(f_{mt}U^{mz}+s_{m(t-1)}W^{mz}),\\\\\n r_m&=\\sigma(f_{mt}U^{mr}+s_{m(t-1)}W^{mr}),\\\\\n h_{mt}&=\\tanh(f_{mt}U^{mh}+(s_{m(t-1)}*r_m)W^{mh}),\\\\\n F_{mt}&=\\tanh(h_{mt}U^{mx}+u^{mx}),\\\\\n s_{mt}&=(1-z_m)*F_{mt}+z_m*s_{m(t-1)},\n\\end{flalign*}\nwhere $U^{mz} \\in \\mathbb{R}^{d_m\\times D_m}$, $W^{mz} \\in \\mathbb{R}^{D_m\\times\nD_m}$, $U^{mr} \\in \\mathbb{R}^{d_m\\times D_m}$, $W^{mr} \\in\n\\mathbb{R}^{D_m\\times D_m}$, $U^{mh} \\in \\mathbb{R}^{d_m\\times D_m}$, $W^{mh}\n\\in \\mathbb{R}^{D_m\\times D_m}$, $U^{mx} \\in \\mathbb{R}^{d_m\\times D_m}$,\n$u^{mx} \\in \\mathbb{R}^{D_m}$, $z_m \\in \\mathbb{R}^{D_m}$, $r_m \\in\n\\mathbb{R}^{D_m}$, $h_{mt} \\in \\mathbb{R}^{D_m}$, $F_{mt} \\in \\mathbb{R}^{D_m}$,\nand $s_{mt} \\in \\mathbb{R}^{D_m}$. This yields hidden outputs $F_{mt}$ as\ncontext-aware unimodal features for each modality. Hence, we define\n$F_m=GRU_m(f_m)$, where $F_m \\in \\mathbb{R}^{N\\times D_m}$. Thus, the\ncontext-aware multimodal features can be defined as\n\\begin{flalign*}\n \n F_A &= GRU_A(f_A),\\\\\n F_V &= GRU_V(f_V),\\\\\n F_T &= GRU_T(f_T).\n\\end{flalign*}\n\n\\subsection{Multimodal Fusion}\n\\label{sec:mul_fusion}\n\nIn this section, we use context-aware unimodal features $F_A, F_V,$ and $F_T$ to\na unified feature space.\n\nThe unimodal features may have different dimensions, i.e., $D_A\\neq D_V\\neq\nD_T$. Thus, we map them to the same dimension, say $D$ (we obtained best\nresults with $D=400$), using fully-connected layer as follows:\n\\begin{flalign*}\n g_A &= \\tanh(F_AW_A+b_A),\\\\\n g_V &= \\tanh(F_VW_V+b_V),\\\\\n g_T &= \\tanh(F_TW_T+b_T),\n\\end{flalign*}\nwhere $W_A \\in \\mathbb{R}^{D_A \\times D}$, $b_A\\in \\mathbb{R}^D$, $W_V \\in\n\\mathbb{R}^{D_V\\times D}$, $b_V\\in \\mathbb{R}^D$, $W_T \\in\n\\mathbb{R}^{D_T\\times D}$, and $b_T\\in \\mathbb{R}^D$. 
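In code, this dimensionality equalization amounts to three independent tanh dense layers, one per modality. The short PyTorch-style sketch below (with the best-performing $D=400$; all identifier names are ours) maps the context-aware unimodal feature matrices to the common dimension before the entry-wise interpretation discussed next.

\\begin{verbatim}
import torch
import torch.nn as nn

class EqualizeDims(nn.Module):
    def __init__(self, d_audio, d_video, d_text, d_common=400):
        super().__init__()
        self.proj_a = nn.Linear(d_audio, d_common)   # W_A, b_A
        self.proj_v = nn.Linear(d_video, d_common)   # W_V, b_V
        self.proj_t = nn.Linear(d_text, d_common)    # W_T, b_T

    def forward(self, F_a, F_v, F_t):                # each: (N utterances, D_m)
        g_a = torch.tanh(self.proj_a(F_a))           # (N, 400)
        g_v = torch.tanh(self.proj_v(F_v))
        g_t = torch.tanh(self.proj_t(F_t))
        return g_a, g_v, g_t
\\end{verbatim}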
We can represent\nthe mapping for each dimension as\n\\[\n g_x=\\left[\n \\begin{array}{ccccc}\n c^x_{11} & c^x_{21} & c^x_{31} & \\cdots & c^x_{D1}\\\\\n c^x_{12} & c^x_{22} & c^x_{32} & \\cdots & c^x_{D2}\\\\\n \\vdots & \\vdots & \\vdots & \\cdots & \\vdots\\\\\n c^x_{1N} & c^x_{2N} & c^x_{3N} & \\cdots & c^x_{DN}\\\\\n \\end{array}\n\\right],\n\\]\nwhere $x \\in \\{V,A,T\\}$ and $c^x_{lt}$ are scalars for all $l=1,2,\\dots,D$ and\n$t=1,2,\\dots,N$. Also, in $g_x$ the rows represent the utterances and the\ncolumns the feature values. We can see these values $c^x_{lt}$ as more abstract\nfeature values derived from fundamental feature values (which are the components\nof $f_A$, $f_V$, and $f_T$). For example, an abstract feature can be the\nangriness of a speaker in a video. We can infer the degree of angriness from\nvisual features ($f_V$; facial muscle movements), acoustic features ($f_A$,\nsuch as pitch and raised voice), or textual features ($f_T$, such as the language and choice of\nwords). Therefore, the degree of angriness can be represented by $c^x_{lt}$,\nwhere $x$ is $A$, $V$, or $T$, $l$ is some fixed integer between $1$ and $D$, and $t$ is some\nfixed integer between $1$ and $N$.\n\nNow, the evaluation of abstract feature values from all the modalities may not\nhave the same merit or may even contradict each other. Hence, we need the network\nto make comparison among the feature values derived from different modalities to\nmake a more refined evaluation of the degree of anger. To this end, we take\neach bimodal combination (which are audio--video, audio--text, and video--text) at\na time and compare and combine each of their respective abstract feature values\n(i.e. $c^V_{lt}$ with $c^T_{lt}$, $c^V_{lt}$ with $c^A_{lt}$, and $c^A_{lt}$\nwith $c^T_{lt}$) using fully-connected layers as follows:\n\\begin{align}\n i^{VA}_{lt}&=\\tanh(w^{VA}_{l}.[c_{lt}^V,c_{lt}^A]^\\intercal+b^{VA}_{l}),\\label{bimodal:1}\\\\\n i^{AT}_{lt}&=\\tanh(w^{AT}_{l}.[c_{lt}^A,c_{lt}^T]^\\intercal+b^{AT}_{l}),\\label{bimodal:2}\\\\\n i^{VT}_{lt}&=\\tanh(w^{VT}_{l}.[c_{lt}^V,c_{lt}^T]^\\intercal+b^{VT}_{l}),\\label{bimodal:3} \n\\end{align}\nwhere $w^{VA}_l \\in \\mathbb{R}^2$, $b^{VA}_l$ is scalar, $w^{AT}_l \\in\n\\mathbb{R}^2$, $b^{AT}_l$ is scalar, $w^{VT}_l \\in \\mathbb{R}^2$, and $b^{VT}_l$\nis scalar, for all $l=1,2,\\dots,D$ and $t=1,2,\\dots,N$. We hypothesize that it\nwill enable the network to compare the decisions from each modality against the\nothers and help achieve a better fusion of modalities.\n\n\\paragraph{\\textbf{Bimodal fusion}}\n\\label{sec:bimodal}\n\n\\crefrange{bimodal:1}{bimodal:3} are used for bimodal fusion. 
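Since every weight vector $w^{m}_{l}$ acts on a single feature index $l$, the three bimodal combinations above reduce to element-wise weighted sums followed by a tanh. The sketch below (PyTorch-style; parameter names and initialization are ours) makes this explicit; the same module is instantiated separately for the V+A, A+T, and V+T pairs.

\\begin{verbatim}
import torch
import torch.nn as nn

class BimodalFusion(nn.Module):
    def __init__(self, d_common=400):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(d_common))  # first entries of the w_l
        self.w2 = nn.Parameter(torch.randn(d_common))  # second entries of the w_l
        self.b  = nn.Parameter(torch.zeros(d_common))  # biases b_l

    def forward(self, g_x, g_y):     # g_x, g_y: (N, D) equalized features
        # i_{lt} = tanh(w_l . [c^x_{lt}, c^y_{lt}] + b_l) for all l, t at once
        return torch.tanh(self.w1 * g_x + self.w2 * g_y + self.b)

# f_va = BimodalFusion()(g_v, g_a); f_at and f_vt are obtained analogously
\\end{verbatim}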
The bimodal\nfused features for video--audio, audio--text, video--text are defined as\n\\begin{flalign*}\n f_{VA}= (f_{VA1},f_{VA2},\\dots,f_{VA(N)}), \\text{ where } f_{VAt}&=(i^{VA}_{1t},i^{VA}_{2t},\\dots,i^{VA}_{Dt}), \\\\\n f_{AT}= (f_{AT1},f_{AT2},\\dots,f_{AT(N)}), \\text{ where } f_{ATt}&=(i^{AT}_{1t},i^{AT}_{2t},\\dots,i^{AT}_{Dt}), \\\\\n f_{VT}= (f_{VT1},f_{VT2},\\dots,f_{VT(N)}), \\text{ where } f_{VTt}&=(i^{VT}_{1t},i^{VT}_{2t},\\dots,i^{VT}_{Dt}).\n\\end{flalign*}\n\nWe further employ $GRU_m$(~\\cref{sec:context}) ($m \\in \\{VA, VT, TA\\}$), to\nincorporate contextual information among the utterances in a video with\n\\begin{flalign*}\n F_{VA} = (F_{VA1},F_{VA2},\\dots,F_{VA(N)}) = GRU_{VA}(f_{VA}),\\\\\n F_{VT} = (F_{VT1},F_{VT2},\\dots,F_{VT(N)}) = GRU_{VT}(f_{VT}),\\\\\n F_{TA} = (F_{TA1},F_{TA2},\\dots,F_{TA(N)}) = GRU_{TA}(f_{TA}),\n\\end{flalign*}\nwhere\n\\begin{flalign*}\n F_{VAt}= (I^{VA}_{1t},I^{VA}_{2t},\\dots,I^{VA}_{D_2t}),\\\\\n F_{VTt}= (I^{AT}_{1t},I^{AT}_{2t},\\dots,I^{AT}_{D_2t}),\\\\\n F_{TAt}= (I^{VT}_{1t},I^{VT}_{2t},\\dots,I^{VT}_{D_2t}),\n\\end{flalign*}\n$F_{VA}$, $F_{VT}$, and $F_{TA}$ are context-aware bimodal features\nrepresented as vectors and $I^m_{nt}$ is scalar for $n=1,2,\\dots,D_2$,\n$D_2=500$, $t=1,2,\\dots,N$, and $m=\\text{VA,VT,TA}$.\n\n\\paragraph{Trimodal fusion}\n\\label{sec:trimodal}\n\nWe combine all three modalities using fully-connected layers as follows:\n\\begin{flalign*}\n z_{lt}=\\tanh(w^{AVT}_l.[I^{VA}_{lt},I^{AT}_{lt},I^{VT}_{lt}]^\\intercal+b^{AVT}_l),\n\\end{flalign*}\nwhere $w^{AVT}_l \\in \\mathbb{R}^3$ and $b^{AVT}_l$ is a scalar for all $l=1,2,\\dots,D_2$\nand $t=1,2,\\dots,N$.\nSo, we define the fused features as\n\\begin{flalign*}\n f_{AVT}=(f_{AVT1},f_{AVT2},\\dots,f_{AVT(N)}),\n\\end{flalign*}\nwhere\n$f_{AVTt}=(z_{1t},z_{2t},\\dots,z_{D_2t})$,\n$z_{nt}$ is scalar for $n=1,2,\\dots,D_2$ and $t=1,2,\\dots,N$.\n\nSimilarly to bimodal fusion (\\cref{sec:bimodal}), after trimodal fusion we pass\nthe fused features through $GRU_{AVT}$ to incorporate contextual information in\nthem, which yields\n\\begin{flalign*}\n F_{AVT} = (F_{AVT1},F_{AVT2},\\dots,F_{AVT(N)}) = GRU_{AVT}(f_{AVT}),\n\\end{flalign*}\nwhere $F_{AVTt}= (Z_{1t},Z_{2t},\\dots,Z_{D_3t})$, $Z_{nt}$ is scalar for $n=1,2,\\dots,D_3$, $D_3=550$, $t=1,2,\\dots,N$,\nand $F_{AVT}$ is the context-aware trimodal feature vector.\n\n\\subsection{Classification}\n\\label{sec:classification}\n\nIn order to perform classification, we feed the fused features $F_{mt}$ (where\n$m=AV,VT,TA,\\text{ or } AVT$ and $t=1,2,\\dots,N$) to a softmax layer with $C=2$\noutputs. 
The classifier can be described as follows:\n\\begin{flalign*}\n \\mathcal{P} &=\n \\text{softmax}(W_{\\mathit{softmax}}F_{mt}+b_{\\mathit{softmax}}),\\\\\n \\hat{y}&=\\underset{j}{\\text{argmax}}(\\mathcal{P}[j]),\n \n\\end{flalign*}\nwhere $W_{\\mathit{softmax}}\\in \\mathbb{R}^{C\\times D}$,\n$b_{\\mathit{softmax}}\\in \\mathbb{R}^C$, $\\mathcal{P}\\in \\mathbb{R}^C$, $j=$\nclass value ($0$ or $1$), and $\\hat{y}=$ estimated class value.\n\n\\subsection{Training}\n\\label{training}\nWe employ categorical cross-entropy as loss function ($J$) for training,\n\\begin{flalign*}\n J=-\\frac{1}{N}\\sum_{i=1}^N{\\sum_{j=0}^{C-1}{y_{ij}\\log{\\mathcal{P}_i[j]}}},\n \n\\end{flalign*}\nwhere $N=$ number of samples, $i=$ index of a sample, $j=$ class value, and\n\\[\n y_{ij}=\n \\begin{cases}\n 1, & \\text{if expected class value of sample }i\\text{ is }j\\\\\n 0, & \\text{otherwise.}\n \\end{cases}\n\\]\n\n Adam~\\citep{DBLP:journals\/corr\/KingmaB14} is used as optimizer due to its\n ability to adapt learning rate for each parameter individually. We train the\nnetwork for 200 epochs with early stopping, where we optimize the parameter set\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \\begin{flalign*}\n\\theta=&\\bigcup_{m\\in M}\\left (\\bigcup_{j\\in \\{z,r,h\\}}\\{U^{mj},W^{mj}\\}\\cup \\{U^{mx},u^{mx}\\}\\right\n)\\\\\n&\\cup \\bigcup_{m\\in M_2}\\bigcup_{i=1}^{D_2}\\{w^m_i\\} \\cup\n\\bigcup_{i=1}^{D_3}\\{w^{AVT}_i\\}\\cup \\bigcup_{m\\in M_1}\\{W_m,b_m\\}\\\\\n&\\cup \\{W_{softmax},b_{softmax}\\},\\\\\n \\end{flalign*}\nwhere $M=\\{A,V,T,VA,VT,TA,AVT\\}$, $M_1=\\{A,V,T\\}$, and\n$M_2=\\{VA,VT,\\allowbreak TA\\}$. \\cref{algorithm} summarizes our method.\\footnote{Implementation of this algorithm is available at\n \\url{http:\/\/github.com\/senticnet}}\n\n\n\\begin{algorithm}[!ht]\n \\small\n \\caption{Context-Aware Hierarchical Fusion Algorithm}\\label{algorithm}\n \\begin{algorithmic}[1]\n \n \n \\vspace{2mm}\n \\Procedure{TrainAndTestModel}{$U$, $V$}\\Comment{\\footnotesize{$U$ = train set, $V$ = test set}}\n \\vspace{2mm}\n \\State \\textbf{Unimodal feature extraction:}\n \\For{\\texttt{i:[1,$N$]}}\\Comment{\\footnotesize{extract baseline features}}\n \\State \\texttt{$f_{A}^{i} \\gets AudioFeatures(u_{i})$ }\n \\State \\texttt{$f_{V}^{i} \\gets VideoFeatures(u_{i})$ }\n \\State \\texttt{$f_{T}^{i} \\gets TextFeatures(u_{i})$ }\n \\EndFor\n \\For{\\texttt{m $\\in \\{A, V, T\\}$}}\n \\State $F_m$ = $GRU_m$($f_m$)\n \\EndFor\n \\vspace{2mm}\n \\State \\textbf{Fusion}:\n \\State \\texttt{$g_{A} \\gets MapToSpace(F_A)$ }\\Comment{\\footnotesize{dimensionality equalization}}\n \\State \\texttt{$g_{V} \\gets MapToSpace(F_V)$ }\n \\State \\texttt{$g_{T} \\gets MapToSpace(F_T)$ }\n\n \\vspace{2mm}\n \\State \\texttt{$f_{VA} \\gets BimodalFusion(g_V, g_A)$}\\Comment{\\footnotesize{bimodal fusion}}\n \\State \\texttt{$f_{AT} \\gets BimodalFusion(g_A, g_T)$}\n \\State \\texttt{$f_{VT} \\gets BimodalFusion(g_V, g_T)$}\n \\For{\\texttt{m $\\in \\{VA, AT, VT\\}$}}\n \\State $F_m$ = $GRU_m$($f_m$)\n \\EndFor\n \n \\vspace{2mm}\n \\State $f_{AVT} \\gets TrimodalFusion(F_{VA}, F_{AT}, F_{VT})$\n \\Comment{\\small{trimodal fusion}}\n \\State $F_{AVT}$ = $GRU_{AVT}$($f_{AVT}$)\n\n \\vspace{2mm}\n \\For{\\texttt{i:[1,$N$]}}\\Comment{\\footnotesize{softmax classification}}\n \\State $\\hat{y}^i =\\underset{j}{\\text{argmax}}(softmax(F_{AVT}^i)[j])$ \n \\EndFor\n \n\n \\State $TestModel(V)$\n \\EndProcedure\n \n \\vspace{2mm}\n \\Procedure{MapToSpace}{$x_z$} \\Comment{\\footnotesize{for modality $z$}}\n \\State 
$ g_z \\gets \\tanh(W_zx_z+b_z) $\n \\State \\textbf{return} $g_z$\n \\EndProcedure\n\n \\vspace{2mm}\n \\Procedure{BimodalFusion}{$g_{z_1}$, $g_{z_2}$} \\Comment{\\footnotesize{for modality\n $z_1$ and $z_2$, where $z_1\\neq z_2$}}\n \\For{\\texttt{i:[1,$D$]}}\n \\State $f_{z_1z_2}^i \\gets \\tanh(w_i^{z_1z_2}.[g^i_{z_1},\n g^i_{z_2}]^\\intercal+b_i^{z_1z_2})$\n \\EndFor\n \\State $f_{z_1z_2} \\gets (f_{z_1z_2}^1, f_{z_1z_2}^2,\\dots,f_{z_1z_2}^{D})$\n \\State \\textbf{return} $f_{z_1z_2}$\n \\EndProcedure\n \n \n \\vspace{2mm}\n \\Procedure{TrimodalFusion}{$f_{z_1}$, $f_{z_2}$, $f_{z_3}$} \\Comment{\\footnotesize{for\n modality combination $z_1$, $z_2$, and $z_3$, where $z_1\\neq z_2\\neq z_3$}}\n \\For{\\texttt{i:[1,$D$]}}\n \\State $f^i_{z_1z_2z_3} \\gets \\tanh(w_i.[f^i_{z_1}, f^i_{z_2}, f^i_{z_3}]^\\intercal+b_i)$\n \\EndFor\n \\State $f_{z_1z_2z_3} \\gets (f_{z_1z_2z_3}^1, f_{z_1z_2z_3}^2,\\dots,f_{z_1z_2z_3}^{D})$\n \\State \\textbf{return} $f_{z_1z_2z_3}$\n \\EndProcedure\n \n \\vspace{2mm}\n \\Procedure{TestModel}{$V$}\n \\State \\footnotesize{Similarly to training phase, $V$ is passed through the learnt models\n to get the features and classification outputs. \\cref{training}\n mentions the trainable parameters ($\\theta$).}\n \\EndProcedure\n \\end{algorithmic}\n\\end{algorithm}\n\n\\section{Experiments}\n\\label{sec:experiments}\n\n\\subsection{Dataset Details}\n\\label{datasets}\nMost research works in multimodal sentiment analysis are performed on datasets\nwhere train and test splits may share certain speakers. Since, each individual\nhas an unique way of expressing emotions and sentiments, finding generic and\nperson-independent features for sentiment analysis is\ncrucial. \\cref{tab:dataset} shows the train and test split for the datasets\nused.\n\n\\begin{table}[h]\n\t\\small\n\t \\addtolength\\tabcolsep{-5pt}\n\t\\begin{center}\n\t\t\\begin{tabular}{|*{14}{c|}}\n\t\t\t\\hline\n\t\t\t\\multicolumn{2}{|c|}{\\multirow{2}{*}{Dataset}} & \\multicolumn{6}{c|}{Train} & \\multicolumn{6}{c|}{Test}\\\\ \\cline{3-14}\n\t\t\t\\multicolumn{2}{|c|}{}& \\emph{pos.}&\\emph{neg.}&\\emph{happy}&\\emph{anger}&\\emph{sad}&\\emph{neu.}&\\emph{pos.}&\\emph{neg.}&\\emph{happy}&\\emph{anger}&\\emph{sad}&\\emph{neu.}\\\\ \\hline\n\t\t\t\\multicolumn{2}{|c|}{MOSI}&709&738&-&-&-&-&467&285&-&-&-&-\\\\ \\hline\n\t\t\t\\multicolumn{2}{|c|}{IEMOCAP}&-&-&1194&933&839&1324&-&-&433&157&238&380\\\\ \\hline\n\t\t\t\\multicolumn{14}{l}{\\scriptsize{pos. = positive, neg. = negative, neu. = neutral}}\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-4.5mm}\n\t\\caption {Class distribution of datasets in both train and test splits. }\n\t\\label{tab:dataset}\n\\end{table}\n\n\\subsubsection{CMU-MOSI}\n\\label{sec:mosi}\nCMU-MOSI dataset~\\citep{zadeh2016multimodal} is rich in sentimental\nexpressions, where 89 people review various topics in English. The videos are\nsegmented into utterances where each utterance is annotated with scores between\n$-3$ (strongly negative) and $+3$ (strongly positive) by five annotators. We\ntook the average of these five annotations as the sentiment polarity and\nconsidered only two classes (positive and negative). Given every individual's\nunique way of expressing sentiments, real world applications should be able to\nmodel generic person independent features and be robust to person variance. To\nthis end, we perform person-independent experiments to emulate unseen\nconditions. Our train\/test splits of the dataset are completely disjoint with\nrespect to speakers. 
The train\/validation set consists of the first 62 individuals in the dataset. The test set contains opinionated videos by the remaining 31 speakers. In particular, 1447 and 752 utterances are used for training and testing, respectively.

\\subsubsection{IEMOCAP}
\\label{sec:iemocap}
IEMOCAP~\\citep{iemocap} contains two-way conversations among ten speakers, segmented into utterances. The utterances are tagged with the labels anger, happiness, sadness, neutral, excitement, frustration, fear, surprise, and other. We consider the first four of these labels to enable comparison with the state of the art~\\citep{porcon} and other works. It contains 1083 angry, 1630 happy, 1083 sad, and 1683 neutral videos. Only the videos by the first eight speakers are considered for training.

\\subsection{Baselines}
We compare our method with the following strong baselines.

\\paragraph{Early fusion}
\\label{early-fusion}
We extract unimodal features (\\cref{UFE}) and simply concatenate them to produce multimodal features. A support vector machine (SVM) is then applied to this feature vector for the final sentiment classification.

\\paragraph{Method from~\\citep{pordee}}
We have implemented and compared our method with the approach proposed by \\citet{pordee}. In their approach, visual features are extracted using CLM-Z, audio features using openSMILE, and textual features using a CNN. MKL is then applied to the concatenation of the unimodal features. However, they did not conduct speaker-independent experiments.

In order to perform a fair comparison with~\\citep{pordee}, we employ our fusion method on the features extracted by~\\citet{pordee}.

\\paragraph{Method from~\\citep{porcon}}
We have compared our method with~\\citep{pordep}, which takes advantage of contextual information obtained from the surrounding utterances. This context modeling is achieved using an LSTM. We reran the experiments of~\\citet{pordep} without using an SVM for classification, since combining an SVM with neural networks is usually discouraged. This provides a fair comparison with our model, which does not use an SVM.

\\paragraph{Method from~\\citep{zadten}}
In~\\citep{zadten}, a trimodal fusion method based on tensors is proposed. We have also compared our method with theirs. Since their dataset configuration differs from ours, we adapted their publicly available code\\footnote{\\url{https:\/\/github.com\/A2Zadeh\/TensorFusionNetwork}} and applied it to our dataset.

\\subsection{Experimental Setting}
\\label{sec:exp_set}

We considered two variants of the experimental setup while evaluating our model.

\\paragraph{HFusion} In this setup, we evaluated hierarchical fusion without context-aware features on the CMU-MOSI dataset. We removed all the GRUs from the model described in \\cref{sec:mul_fusion,sec:context} and forwarded utterance-specific features directly to the next layer.
This setup is depicted\nin \\cref{fig:hfusion-trimodal}.\n\n\\paragraph{CHFusion} This setup is exactly as the model described in\n\\cref{sec:model}.\n\n\\subsection{Results and Discussion}\n\nWe discuss the results for the different experimental settings discussed in\n\\cref{sec:exp_set}.\n\n\\begin{table*}[t]\n \\centering\n \\small\n \\caption{Comparison in terms of accuracy of Hierarchical Fusion\n (HFusion) with other fusion methods for CMU-MOSI dataset; bold font\n signifies best accuracy for the corresponding feature set and\n modality or modalities, where T stands for text, V for video, and A for audio. $SOTA^1$ = Poria et al.~\\citep{pordee}, $SOTA^2$ = Zadeh et al.~\\citep{zadten}}\n \\resizebox{\\textwidth}{!}{\n \\addtolength\\tabcolsep{-3pt}\n \\begin{tabular}[t]{@{\\extracolsep{5pt}}ccccccc}\n \\hline\n \\multirow2*{\\begin{tabular}{c}Modality\\\\ Combination\\end{tabular}}&\n \\multicolumn{3}{c}{\\citep{pordee} feature set}&\n \\multicolumn{3}{c}{Our feature set}\\\\\n \\cline{2-4}\\cline{5-7}\n & $SOTA^{1}$&\n $SOTA^2$&\n HFusion&\n Early fusion &\n $SOTA^2$&\n HFusion\\\\\n \\cline{1-1}\\cline{2-4}\\cline{5-7}\n T & \\multicolumn{3}{c}{N\/A} & \\multicolumn{3}{c}{75.0\\%} \\\\\n V & \\multicolumn{3}{c}{N\/A} & \\multicolumn{3}{c}{55.3\\%} \\\\\n A & \\multicolumn{3}{c}{N\/A} & \\multicolumn{3}{c}{56.9\\%} \\\\\n \\hline\n T+V & 73.2\\% & 73.8\\% & \\textbf{74.4\\%} & 77.1\\% & 77.4\\% & \\textbf{77.8\\%}\\\\\n T+A & 73.2\\% & 73.5\\% & \\textbf{74.2\\%} & 77.1\\% & 76.3\\% & \\textbf{77.3\\%}\\\\\n A+V & 55.7\\% & 56.2\\% & \\textbf{57.5\\%} & 56.5\\% & 56.1\\% & \\textbf{56.8\\%}\\\\\n \\hline\n A+V+T & 73.5\\% & 71.2\\% & \\textbf{74.6\\%} & 77.0\\% & 77.3\\% & \\textbf{77.9\\%}\\\\\n \\hline\n \\end{tabular}\n }\n \\label{table:hfusion}\n\\end{table*}\n\n\\subsubsection{Hierarchical Fusion (HFusion)}\n\\label{hfusion}\n\nThe results of our experiments are presented in \\cref{table:hfusion}. We\nevaluated this setup with CMU-MOSI dataset (\\cref{sec:mosi}) and two\nfeature sets: the feature set used in~\\citep{pordee}\nand the set of unimodal features discussed in \\cref{UFE}.\n\nOur model outperformed~\\citep{pordee}, which employed MKL, for all bimodal\nand trimodal scenarios by a margin of 1--1.8\\%. This leads us to present two\nobservations. Firstly, the features used in~\\citep{pordee} are inferior to\nthe features extracted in our approach. Second, our hierarchical\nfusion method is better than their fusion method.\n\nIt is already established in the literature\n\\citep{pordee,perez2013utterance} that multimodal analysis outperforms\nunimodal analysis. We also observe the same trend in our experiments where\ntrimodal and bimodal classifiers outperform unimodal classifiers. The textual\nmodality performed best among others with a higher unimodal classification\naccuracy of 75\\%. Although other modalities contribute to improve the\nperformance of multimodal classifiers, that contribution is little in compare to\nthe textual modality.\n\nOn the other hand, we compared our model with early fusion\n(\\cref{early-fusion}) for aforementioned feature sets\n(\\cref{UFE}). Our fusion mechanism consistently outperforms early fusion for\nall combination of modalities. This supports our\nhypothesis that our hierarchical fusion method captures the\ninter-relation among the modalities and produce better performance vector than\nearly fusion. 
Text is the strongest individual modality, and we observe that\nthe text modality paired with remaining two modalities results in consistent\nperformance improvement.\n\nOverall, the results give a strong indication that the comparison among the\nabstract feature values dampens the effect of less important modalities, which\nwas our hypothesis. For example, we can notice that for early fusion T+V and T+A\nboth yield the same performance. However, with our method text with video\nperforms better than text with audio, which is more aligned with our\nexpectations, since facial muscle movements usually carry more emotional nuances\nthan voice.\n\nIn particular, we observe that our model outperformed all the strong baselines mentioned above. The method by~\\citep{pordee} is only able to fuse using concatenation. Our proposed method outperformed their approach by a significant margin; thanks to the power of hierarchical fusion which proves the capability of our method in modeling bimodal and trimodal correlations. However on the other hand, the method by~\\citep{zadten} is capable of fusing the modalities using a tensor. Interestingly our method also outperformed them and we think the reason is the capability of bimodal fusion and use that for trimodal fusion. Tensor fusion network is incapable to learn the weights of the bimodal and trimodal correlations in the fusion. Tensor Fusion is mathematically formed by an outer product, it\nhas no learn-able parameters. Wherein our method learns the weights automatically using a neural network (Equation 1,2 and 3).\n\n\\begin{table*}[t]\n \\centering\n \\caption{Comparison of Context-Aware Hierarchical Fusion (CHFusion) in terms of accuracy ($\\text{CHFusion}_{acc}$) and f-score (for IEMOCAP: $\\text{CHFusion}_{fsc}$) with the state of the art for CMU-MOSI\n and IEMOCAP dataset; bold font signifies best accuracy for the corresponding dataset and\n modality or modalities, where T stands text, V for video, A for audio. $SOTA^1$ = Poria et al.~\\citep{pordee}, $SOTA^2$ = Zadeh et al.~\\citep{zadten}. $\\text{CHFusion}_{acc}$ and $\\text{CHFusion}_{fsc}$ are the accuracy and f-score of CHFusion respectively.}\n \\resizebox{\\textwidth}{!}{\n \\addtolength\\tabcolsep{-6pt}\n \\begin{tabular}[t]{@{\\extracolsep{4pt}}cccccccc}\n \\hline\n \\multirow2*{\\begin{tabular}{c}Modality\\end{tabular}} & \\multicolumn{3}{c}{CMU-MOSI} & \\multicolumn{4}{c}{IEMOCAP} \\\\\n \\cline{2-4}\\cline{5-8}\n & $SOTA^1$ & $SOTA^2$ & $\\text{CHFusion}_{acc}$ & $SOTA^1$ & $SOTA^2$ & $\\text{CHFusion}_{acc}$ & $\\text{CHFusion}_{fsc}$\\\\\n \\cline{1-1}\\cline{2-4}\\cline{5-7}\\cline{8-8}\n T & \\multicolumn{3}{c}{76.5\\%} & \\multicolumn{3}{c}{73.6\\%} & -\\\\\n V & \\multicolumn{3}{c}{54.9\\%} & \\multicolumn{3}{c}{53.3\\%} & -\\\\\n A & \\multicolumn{3}{c}{55.3\\%} & \\multicolumn{3}{c}{57.1\\%} & -\\\\\n \\hline\n T+V & 77.8\\% & 77.1\\% & \\textbf{79.3\\%} & 74.1\\% & 73.7\\% & \\textbf{75.9\\%} & 75.6\\%\\\\\n T+A & 77.3\\% & 77.0\\% & \\textbf{79.1\\%} & 73.7\\% & 71.1\\% & \\textbf{76.1\\%} & 76.0\\%\\\\\n A+V & 57.9\\% & 56.5\\% & \\textbf{58.8\\%} & 68.4\\% & 67.4\\% & \\textbf{69.5\\%} & 69.6\\% \\\\\n \\hline\n A+V+T & 78.7\\% & 77.2\\% & \\textbf{80.0\\%} & 74.1\\% & 73.6\\% & \\textbf{76.5\\%} & 76.8\\%\\\\\n \\hline\n \\end{tabular}\n }\n \\label{table:chfusion}\n\\end{table*}\n\n\\subsubsection{Context-Aware Hierarchical Fusion (CHFusion)}\n\\label{chfusion}\n\nThe results of this experiment are shown in \\cref{table:chfusion}. 
This setting\nfully utilizes the model described in \\cref{sec:model}. We applied this\nexperimental setting for two datasets, namely CMU-MOSI~(\\cref{sec:mosi}) and\nIEMOCAP~(\\cref{sec:iemocap}). We used the feature set discussed in \\cref{UFE},\nwhich was also used by~\\citet{porcon}. As expected our method outperformed the simple early fusion based fusion by~\\citep{pordee}, tensor fusion by~\\citep{zadten}. The method by~\\citet{porcon} used a scheme to learn contextual features from the surrounding features. However, as a method of fusion they adapted simple concatenation based fusion method by~\\citep{pordee}. As discussed in Section \\ref{sec:context}, we employed their contextual feature extraction framework and integrated our proposed fusion method to that. This has helped us to outperform~\\citet{porcon} by significant margin thanks to the hierarchical fusion (HFusion).\n\n\\paragraph{CMU-MOSI}\nWe achieve 1--2\\% performance improvement over the state of the art\n\\citep{porcon} for all the modality combinations having textual\ncomponent. For A+V modality combination we achieve better but similar\nperformance to the state of the art. We suspect that it is due to both audio and\nvideo modality being significantly less informative than textual modality. It is\nevident from the unimodal performance where we observe that textual modality on\nits own performs around 21\\% better than both audio and video modality. Also,\naudio and video modality performs close to majority baseline. On the other hand,\nit is important to notice that with all modalities combined we achieve about\n3.5\\% higher accuracy than text alone.\n\n\nFor example, consider the following utterance: \\emph{so overall new moon even with the bigger better budgets huh it was still too long}.\nThe speaker discusses her opinion on the movie Twilight New Moon. Textually the\nutterance is abundant with positive words however audio and video comprises of a\nfrown which is observed by the hierarchical fusion based model.\n\n\\paragraph{IEMOCAP}\nAs the IEMOCAP dataset contains four distinct emotion categories, in the last layer of the network we used a softmax classifier whose output dimension is set to 4. \nIn order to perform classification on IEMOCAP dataset we feed the fused features $F_{mt}$ (where\n$m=AV,VT,TA,\\text{ or } AVT$ and $t=1,2,\\dots,N$) to a softmax layer with $C=4$\noutputs. The classifier can be described as follows:\n\\begin{flalign*}\n\\mathcal{P} &=\n\\text{softmax}(W_{\\mathit{softmax}}F_{mt}+b_{\\mathit{softmax}}),\\\\\n\\hat{y}&=\\underset{j}{\\text{argmax}}(\\mathcal{P}[j]),\n\\end{flalign*}\nwhere $W_{\\mathit{softmax}}\\in \\mathbb{R}^{4\\times D}$,\n$b_{\\mathit{softmax}}\\in \\mathbb{R}^4$, $\\mathcal{P}\\in \\mathbb{R}^4$, $j=$\nclass value ($0$ or $1$ or $2$ or $3$), and $\\hat{y}=$ estimated class value.\n\n\\begin{table}[t]\n \\centering\n \\caption{Class-wise accuracy and f-score for IEMOCAP dataset for trimodal scenario.}\n \n \n \\begin{tabular}[t]{ccccc}\n \\hline\n \\multirow{2}{*}{Metrics} & \\multicolumn{4}{c}{Classes}\\\\\n \\cline{2-5} & Happy & Sad & Neutral & Anger\\\\\n \\hline\n Accuracy & 74.3 & 75.6 & 78.4 & 79.6 \\\\\n F-Score & 81.4 & 77.0 & 71.2 & 77.6 \\\\\n \\hline\n \\end{tabular}\n \n \\label{table:iemocap-classwise}\n\\end{table}\nHere as well, we achieve performance improvement consistent with CMU-MOSI. This\nmethod performs 1--2.4\\% better than the state of the art for all the modality\ncombinations. 
Also, trimodal accuracy is 3\\% higher than the same for textual\nmodality. Since, IEMOCAP dataset imbalanced, we also present the f-score for each modality combination for a better evaluation. One key observation for IEMOCAP dataset is that its A+V modality\ncombination performs significantly better than the same of CMU-MOSI dataset. We\nthink that this is due to the audio and video modality of IEMOCAP being richer than\nthe same of CMU-MOSI. The performance difference with another strong baseline~\\citep{zadten} is even more ranging from 2.1\\% to 3\\% on CMU-MOSI dataset and 2.2\\% to 5\\% on IEMOCAP dataset. This again confirms the superiority of the hierarchical fusion in compare to~\\citep{zadten}. We think this is mainly because of learning the weights of bimodal and trimodal correlation (representing the degree of correlations) calculations at the time of fusion while Tensor Fusion Network (TFN) just relies on the non-trainable outer product of tensors to model such correlations for fusion.\nAdditionally, we present class-wise accuracy and f-score for IEMOCAP for trimodal (A+V+T) scenario in \\cref{table:iemocap-classwise}.\n\n\\subsubsection{HFusion vs.\\ CHFusion}\n\nWe compare HFusion and CHFusion models over CMU-MOSI dataset. We observe that\nCHFusion performs 1--2\\% better than HFusion model for all the modality\ncombinations. This performance boost is achieved by the inclusion of\nutterance-level contextual information in HFusion model by adding GRUs in\ndifferent levels of fusion hierarchy.\n\n\\section{Conclusion}\n\\label{sec:conclusions}\nMultimodal fusion strategy is an important issue in multimodal sentiment analysis. \nHowever, little work has been done so far in this direction. \nIn this paper, we have presented a novel and comprehensive fusion strategy. \nOur method outperforms the widely used early fusion on both datasets typically used to test multimodal sentiment analysis methods.\nMoreover, with the addition of context modeling with GRU, \nour method outperforms the state of the art in multimodal sentiment analysis and emotion detection by significant margin. \n\nIn our future work, we plan to improve the quality of unimodal features, especially textual features, which will further improve the accuracy of classification.\nWe will also experiment with more sophisticated network architectures.\n\n\\section*{Acknowledgement}\nThe work was partially supported by the Instituto Polit\\'ecnico Nacional via grant SIP 20172008 to A.~Gelbukh.\n\n\\bibliographystyle{elsarticle-num-names}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn complex geometry, as a generalization of holomorphic and totally real immersions, slant immersions were defined by Chen \\cite{chen}. Cabrerizo et al \\cite{cabrerizo} defined bi-slant submanifolds in almost contact metric manifolds. In \\cite{uddin} Uddin et al. studied warped product bi-slant immersions in Kaehler manifolds. They proved that there do not exist any warped product bi-slant submanifolds of Kaehler manifolds other than hemi-slant warped products and CR-warped products.\n\nThe theory of Riemannian submersions as an analogue of isemetric immersions was initiated by O'Neill \\cite{oneill} and Gray\\cite{gray}. The Riemannian submersions are important in physics owing to applications in the Yang-Mills theory, Kaluza-Klein theory, robotic theory, supergravity and superstring theories. 
In Kaluza-Klein theory, the general solution of a recent model is given in point of harmonic maps satisfying Einstein equations (see \\cite{yang1,kaluza 1,kaluza2,kaluza3,yang2,supergravity1,supergravity2}). Altafini \\cite{altafini} expressed some applications of submersions in the theory of robotics and \\c{S}ahin \\cite{sahin} also investigated some applications of Riemannian submersions on redundant robotic chains. On the other hand Riemannian submersions are very useful in studying the geometry of Riemannian manifolds equipped with differentiable structures.\nIn \\cite{watson} Watson introduced the notion of almost Hermitian submersions between almost complex manifolds. He investigated some geometric properties between base manifold and total manifold as well as fibers. \\c{S}ahin \\cite{sahinanti} introduced anti-invariant Riemannian submersions from almost Hermitian manifolds. He showed that such maps have some geometric properties. Also he studied slant submersions from almost Hermitian manifolds onto a Riemannian manifolds \\cite{sahinslant}. Recently, considering different conditions on Riemannian submersions many studies have been done (see \\cite{sezin,sayar2,sayar,sayar3,sahinsemi,hakan,Tastan2}).\n\nAs a special horizontally conformal maps which were introduced independently by Fuglede and Ishihara, horizontally conformal submersions are defined as follows $(M_{1},g_{1})$ and $(M_{2},g_{2})$ are Riemannian manifolds of dimension $m_{1}$ and $m_{2}$, respectively. A smooth submersion $f:(M_{1},g_{1})\\rightarrow (M_{2},g_{2})$ is called a horizontally conformal submersion if there is a positive function $\\lambda$ such that\n\\begin{align*}\n\\lambda^{2}g_{1}\\left(X_{1},Y_{1}\\right)=g_{2}\\left(f_{*}X_{1},f_{*}Y_{1}\\right)\n\\end{align*}\nfor all $X_{1}, Y_{1}\\in \\Gamma\\left(\\left(\\ker f_{*}\\right)^{\\perp}\\right)$.\nHere a horizontally conformal submersion $f$ is called horizontally homothetic if the $grad \\lambda$ is vertical i.e.\n\\begin{align*}\n\\mathcal{H}\\left(grad\\lambda\\right)=0.\n\\end{align*}\n We denote by $\\mathcal{V}$ and $\\mathcal{H}$ the projections on the vertical distributions $\\left(ker f_{*}\\right)$ and horizontal distributions $\\left(ker f_{*}\\right)^{\\perp}$. It can be said that Riemannian submersion is a special horizontally conformal submersion with $\\lambda=1$.\nRecently, Akyol and \\c{S}ahin introduced conformal anti-invariant submersions \\cite{akyolantiinv2}, conformal semi-invariant submersion\\cite{akyolsemiinv}, conformal slant submersion \\cite{akyol1} and conformal semi-slant submersions\\cite{akyolsemislant}. Also the geometry of conformal submersions have been studied by several authors \\cite{gunduzalpconf,kumar}.\\\\\nIn section 2 we review basic formulas and definitions needed for this paper. In section 3, we define the new conformal bi-slant submersion from almost Hermitian manifolds onto Riemannian manifolds and present a example. We investigate the geometry of the horizontal distribution and the vertical distribution. Finally we obtain necessary and sufficient conditions for a conformal bi-slant submersion to be totally geodesic.\n\n\\section{Preliminaries}\n\nLet $\\left(M_{1},g_{1},J\\right)$ be an almost Hermitian manifold. Then this means that $M_{1}$ admits a tensor field $J$ of type $(1,1)$ on $M_{1}$ which satisfy\n\\begin{align}\nJ^{2}=-I, \\ \\ g_{1}\\left(JE_{1},JE_{2}\\right)=g_{1}\\left(E_{1},E_{2}\\right)\n\\end{align}\nfor $E_{1},E_{2}\\in \\Gamma(TM_{1})$. 
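In particular, replacing $E_{2}$ by $JE_{2}$ in the compatibility condition above shows that $J$ is skew-adjoint with respect to $g_{1}$,
\\begin{align*}
g_{1}\\left(JE_{1},E_{2}\\right)=-g_{1}\\left(JE_{1},J^{2}E_{2}\\right)=-g_{1}\\left(E_{1},JE_{2}\\right),
\\end{align*}
an identity which is useful to keep in mind for the computations below.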
An almost Hermitian manifold $M_{1}$ is called Kaehlerian manifold if\n\\begin{align*}\n\\left(\\nabla_{E_{1}}J\\right)E_{2}=0, \\ \\ E_{1},E_{2}\\in\\Gamma\\left(TM_{1}\\right)\n\\end{align*}\nwhere $\\nabla$ is the operator of Levi-Civita covariant differentiation.\n\nNow, we will give some definitions and theorems about the concept of (horizontally) conformal submersions.\n\\begin{definition}\n\tLet $(M_{1},g_{1})$ and $(M_{2},g_{2})$ are two Riemannian manifolds with the dimension $m_{1}$ and $m_{2}$, respectively. A smooth map $f:(M_{1},g_{1})\\rightarrow (M_{2},g_{2})$ is called horizontally weakly conformal or semi conformal at $q\\in M$ if, either\n\t\\begin{enumerate}[i.]\n\t\t\\item $df_{q}=0$, or\n\t\t\\item $df_{q}$ is surjective and there exists a number $\\Omega(q)\\neq 0$ satisfying\n\t\t\\begin{align*}\n\t\tg_{2}\\left(df_{q}X,df_{q}Y\\right)=\\Omega(q)g_{1}\\left(X,Y\\right)\n\t\t\\end{align*}\n\t\tfor $X,Y\\in \\Gamma\\left(\\ker(df)\\right)^{\\perp}$.\n\t\\end{enumerate}\nHere the number $\\Omega(q)$ is called the square dilation. Its square root $\\lambda(q)=\\sqrt{\\Omega(q)}$ is called the dilation. The map $f$ is called horizontally weakly conformal or semi-conformal on $M_{1}$ if it is horizontally weakly conformal at every point of $M_{1}$. it is said to be a conformal submersion if $f$ has no critical point.\n\\end{definition}\nLet $f:M_{1}\\rightarrow M_{2}$ be a submersion. A vector field $X_{1}$ on $M_{1}$ is called a basic vector field if $X_{1}\\in \\Gamma\\left(\\left(\\ker f_{*}\\right)^{\\perp}\\right)$ and $f$-related with a vector field $X_{2}$ on $M_{2}$ i.e $f_{*}(X_{1q})=X_{2f(q)}$ for $q\\in M_{1}$.\n\nThe two $(1,2)$ tensor fields $\\mathcal{T}$ and $\\mathcal{A}$ on $M$ are given by the formulas\n\\setlength\\arraycolsep{2pt}\n\\begin{eqnarray}\n\\mathcal{T}(E_{1},E_{2})&=&\\mathcal{T}_{E_{1}}E_{2}=\\mathcal{H}\\nabla_{\\mathcal{V}E_{1}}\\mathcal{V}E_{2}+\\mathcal{V}\\nabla_{\\mathcal{V}E_{1}}\\mathcal{H}E_{2} \\label{2.2} \\\\\n\\mathcal{A}(E_{1},E_{2})&=&\\mathcal{A}_{E_{1}}E_{2}=\\mathcal{V}\\nabla_{\\mathcal{H}E_{1}}\\mathcal{H}E_{2}+\\mathcal{H}\\nabla_{\\mathcal{H}E_{1}}\\mathcal{V}E_{2} \\label{2.3}\n\\end{eqnarray}\nfor $E_{1},E_{2}\\in \\Gamma\\left(TM\\right)$ \\cite{falcitelli}.\\\\\n\nNote that a Riemannian submersion $f:M_{1}\\longrightarrow M_{2}$ has totally geodesic fibers if and only if $\\mathcal{T}$ vanishes identically.\n\nConsidering the equations (2.3) and (2.4), one can write\n\\setlength\\arraycolsep{2pt}\n\\begin{eqnarray}\n\\nabla_{U_{1}}U_{2}&=&\\mathcal{T}_{U_{1}}U_{2}+\\bar{\\nabla}_{U_{1}}U_{2} \\label{2.4}\\\\\n\\nabla_{U_{1}}X_{1}&=&\\mathcal{H}\\nabla_{U_{1}}X_{1}+\\mathcal{T}_{U_{1}}X_{1} \\label{2.5}\\\\\n\\nabla_{X_{1}}U_{1}&=&\\mathcal{A}_{X_{1}}U_{1}+\\mathcal{V}\\nabla_{X_{1}}U_{1} \\label{2.6}\\\\\n\\nabla_{X_{1}}X_{2}&=&\\mathcal{H}\\nabla_{X_{1}}X_{2}+\\mathcal{A}_{X_{1}}X_{2} \\label{2.7}\n\\end{eqnarray}\nfor $X_{1},X_{2}\\in\\Gamma\\left(\\left(\\ker f_{*}\\right)^{\\perp}\\right)$ and $U_{1},U_{2}\\in\\Gamma\\left(\\ker f_{*}\\right)$, where $\\bar{\\nabla}_{U_{1}}U_{2}=\\mathcal{V}\\nabla_{U_{1}}U_{2}$. Then we easily seen that $\\mathcal{T}_{U_{1}}$ and $\\mathcal{A}_{X_{1}}$ are skew-symmetric i.e $\ng_{1}\\left(\\mathcal{A}_{X_{1}}E_{1},E_{2}\\right)=-g_{1}\\left(E_{1},\\mathcal{A}_{X_{1}}E_{2}\\right)$ and\n$g_{1}\\left(\\mathcal{T}_{U_{1}}E_{1},E_{2}\\right)=-g_{1}\\left(E_{1},\\mathcal{T}_{U_{1}}E_{2}\\right)$ for any $E_{1},E_{2}\\in\\Gamma\\left(TM_{1}\\right)$. 
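Let us also record the standard observation (see \\cite{falcitelli}) that $\\mathcal{T}$ is symmetric on the vertical distribution: since the fibres are integrable, $\\left[U_{1},U_{2}\\right]$ is vertical for $U_{1},U_{2}\\in\\Gamma\\left(\\ker f_{*}\\right)$, and hence
\\begin{align*}
\\mathcal{T}_{U_{1}}U_{2}-\\mathcal{T}_{U_{2}}U_{1}=\\mathcal{H}\\left(\\nabla_{U_{1}}U_{2}-\\nabla_{U_{2}}U_{1}\\right)=\\mathcal{H}\\left[U_{1},U_{2}\\right]=0.
\\end{align*}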
For the special case where $f$ as the horizontal, the following Proposition be given:\n\\begin{proposition}\n\tLet $f:\\left(M_{1},g_{1}\\right)\\rightarrow\\left(M_{2},g_{2}\\right)$ be a horizontally conformal submersion with dilation $\\lambda$ and $X_{1},X_{2}\\in\\Gamma\\left(\\left(\\ker f_{*}\\right)^{\\perp}\\right)$, then\n\t\\begin{align}\n\t\\mathcal{A}_{X_{1}}X_{2}=\\frac{1}{2}\\left(\\mathcal{V}\\left[X_{1},X_{2}\\right]-\\lambda^{2}g_{1}\\left(X_{1},X_{2}\\right)grad_{\\mathcal{V}}\\left(\\frac{1}{\\lambda^{2}}\\right)\\right)\n\t\\end{align}\n \\end{proposition}\n\nLet $f:\\left(M_{1},g_{1}\\right)\\rightarrow\\left(M_{2},g_{2}\\right)$ be a smooth map between $\\left(M_{1},g_{1}\\right)$ and $\\left(M_{2},g_{2}\\right)$ Riemannian manifolds. Then the second fundamental form of $f$ is given by\n\\begin{align}\\label{2.9}\n\\left(\\nabla f_{*}\\right)\\left(E_{1},E_{2}\\right)=\\nabla^{f}_{E_{1}}f_{*}(E_{2})-f_{*}\\left(\\bar{\\nabla}_{E_{1}}E_{2}\\right)\n\\end{align}\nfor any $E_{1},E_{2}\\in\\Gamma\\left(TM_{1}\\right)$. It is known that the second fundamental form $f$ is symmetric \\cite{baird}.\n\\begin{lemma}\n\tSuppose that $f:M_{1}\\rightarrow M_{2}$ is a horizontally conformal submersion. Then for $X_{1},X_{2}\\in \\Gamma\\left(\\left(\\ker f_{*}\\right)^{\\perp}\\right)$ and $U_{1},U_{2}\\in \\Gamma\\left(\\ker f_{*}\\right)$ we have\n\t\\begin{enumerate}[i.]\n\t\t\\item $\\left(\\nabla f_{*}\\right)\\left(X_{1},X_{2}\\right)=X_{1}\\left(\\ln\\lambda\\right)f_{*}X_{2}+X_{2}\\left(\\ln\\lambda\\right)f_{*}X_{1}-g_{1}\\left(X_{1},X_{2}\\right)f_{*}\\left(\\nabla\\ln\\lambda\\right)$\n\t\t\\item $\\left(\\nabla f_{*}\\right)\\left(U_{1},U_{2}\\right)=-f_{*}\\left(\\mathcal{T}_{U_{1}}U_{2}\\right)$\n\t\t\\item $\\left(\\nabla f_{*}\\right)\\left(X_{1},U_{1}\\right)=-f_{*}\\left(\\bar{\\nabla}_{X_{1}}U_{1}\\right)=-f_{*}\\left(\\mathcal{A}_{X_{1}}V_{1}\\right)$.\n\t\\end{enumerate}\n\\end{lemma}\n The smoooth map $f$ is called a totally geodesic map if $\\left(\\nabla f_{*}\\right)\\left(E_{1},E_{2}\\right)=0$ for $E_{1},E_{2}\\in\\Gamma(TM)$ \\cite{baird}.\n \nWe assume that $g$ is a Riemannian metric tensor on the manifold $M=M_{1}\\times M_{2}$ and the canonical foliations $D_{M_{1}}$ and $D_{M_{2}}$ intersect vertically everywhere. Then $g$ is the metric tensor of a usual product of Riemannian manifold if and only if $D_{M_{1}}$ and $D_{M_{2}}$ are totally geodesic foliations.\n\n\\section{Conformal Bi-Slant Submersions}\n\n\\begin{definition}\n\tLet $\\left(M_{1},g_{1},J\\right)$ be an almost Hermitian manifold and $\\left(M_{2},g_{2}\\right)$ a Riemannian manifold. A horizontal conformal submersion $f:M_{1}\\longrightarrow M_{2}$ is called a conformal bi-slant submersion if $D$ and $\\bar{D}$ are slant distributions with the slant angles $\\theta$ and $\\bar{\\theta}$, respectively, such that $\\ker f_{*}=D\\oplus \\bar{D}$. 
$f$ is called proper if its slant angles satisfy $\\theta,\\bar{\\theta}\\neq 0,\\frac{\\pi}{2}$.\n\\end{definition}\n We now give a example of a proper conformal bi-slant submersion.\n \\begin{example}\n We consider the compatible almost complex structure $J_{\\omega}$ on $\\mathbb{R}^{8}$ such that\n \\begin{align*}\n J_{\\omega}=\\left(\\cos\\omega\\right) J_{1}+\\left(\\sin\\omega\\right) J_{2}, \\ 0<\\omega\\leq\\frac{\\pi}{2}\n \\end{align*}\n where\n \\begin{align*}\n J_{1}\\left(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6},x_{7},x_{8}\\right)=\\left(-x_{2},x_{1},-x_{4},x_{3},-x_{6},x_{5},-x_{8},x_{7}\\right)\\\\\n J_{2}\\left(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6},x_{7},x_{8}\\right)=\\left(-x_{3},x_{4},x_{1},-x_{2},-x_{7},x_{8},x_{5},-x_{6}\\right)\n \\end{align*}\n Consider a submersion $f:\\mathbb{R}^{8} \\rightarrow \\mathbb{R}^{4}$ defined by\n \\begin{align*}\n f\\left(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6},x_{7},x_{8}\\right)=\\pi^{5}\\left(\\frac{x_{1}-x_{3}}{\\sqrt{2}},x_{4}, \\frac{x_{5}-x_{6}}{\\sqrt{2}},x_{7}\\right)\n \\end{align*}\n Then it follows that\n \\begin{align*}\n D=span\\{U_{1}=\\frac{1}{\\sqrt{2}}\\left(\\frac{\\partial}{\\partial x_{1}}+\\frac{\\partial}{\\partial x_{3}}\\right),U_{2}=\\frac{\\partial}{\\partial x_{2}}\\}\\\\\n \\bar{D}=span\\{U_{3}=\\frac{1}{\\sqrt{2}}\\left(\\frac{\\partial}{\\partial x_{5}}+\\frac{\\partial}{\\partial x_{6}}\\right),U_{4}=\\frac{\\partial}{\\partial x_{8}}\\}\n \\end{align*}\n Thus $f$ is conformal bi-slant submersion with $\\theta$ and $\\bar{\\theta}$ such that $\\cos\\theta=\\frac{1}{\\sqrt{2}}\\cos\\omega$ and $\\cos\\bar{\\theta}=\\frac{1}{\\sqrt{2}}\\sin\\omega$.\n \\end{example}\n\n\nSuppose that $f$ is a conformal bi-slant submersion from a almost Hermitian manifold $\\left(M_{1},g_{1},J_{1}\\right)$ onto a Riemannian manifold $(M_{2},g_{2})$. For $U_{1}\\in\\Gamma\\left(\\ker f_{*}\\right)$, we have\n\\begin{equation}\\label{3.1}\nU_{1}=\\alpha U_{1}+\\beta U_{1}\n\\end{equation}\nwhere $\\alpha U_{1}\\in\\Gamma\\left(D_{1}\\right)$ and $\\beta U_{1}\\in\\Gamma\\left(D_{2}\\right)$.\\\\\nAlso, for $U_{1}\\in\\Gamma\\left(\\ker f_{*}\\right)$, we write\n\\begin{equation}\\label{3.2}\nJU_{1}=\\xi U_{1}+\\eta U_{1}\n\\end{equation}\nwhere $\\xi U_{1}\\in\\Gamma\\left(\\ker f_{*}\\right)$ and $\\eta U_{1}\\in\\Gamma\\left(\\ker f_{*}\\right)^\\perp$.\\\\\nFor $X_{1}\\in\\Gamma\\left(\\left(\\ker f_{*}\\right)^\\perp\\right)$, we have\n\\begin{equation}\\label{3.3}\nJX_{1}=\\mathcal{B}X_{1}+\\mathcal{C}X_{1}\n\\end{equation}\nwhere $\\mathcal {B}X_{1}\\in\\Gamma\\left(\\ker f_{*}\\right)$ and $\\mathcal{C}X_{1}\\in\\Gamma\\left(\\left(\\ker f_{*}\\right)^\\perp\\right)$.\\\\\nThe horizontal distribution $(\\ker f_{*})^{\\perp}$ is decompesed as\n\\begin{align*}\n(\\ker f_{*})^{\\perp}=\\eta D_{1}\\oplus\\eta D_{2}\\oplus\\mu\n\\end{align*}\nwhere $\\mu$ is the complementary distribution to $\\eta D_{1}\\oplus\\eta D_{2}$ in $(\\ker f_{*})^{\\perp}$.\\\\\n\nConsidering Definition 3.1 we can give the following result that we will use throughout the article.\n\n\\begin{theorem}\n\tSuppose that $f$ is a conformal bi-slant submersion from an almost Hermitian manifold $(M_{1},g_{1},J)$ onto a Riemannian manifold $(M_{2},g_{2})$. 
Then we have\n\t\\begin{enumerate}[i)]\n\t\t\\item $\\xi^{2}U_{1}=-\\left(\\cos^{2}\\theta\\right)U_{1} \\ \\text{for} \\ U_{1}\\in\\Gamma\\left(D\\right)$\n\t\t\\item $\\xi^{2}{V}_{1}=-\\left(\\cos^{2}\\bar{\\theta}\\right)V_{1} \\ \\text{for} \\ V_{1}\\in\\Gamma\\left(\\bar{D}\\right)$\n\t\\end{enumerate}\n\n\\end{theorem}\n\\begin{proof}\n\tThe proof of this theorem is similar to slant immersions \\cite{chen}.\n\\end{proof}\n\n\\begin{theorem}\n\tSuppose that $f$ is a proper conformal bi-slant submersion from a Kaehlerian manifold $(M_{1},g_{1},J)$ onto a Riemannian manifold $(M_{2},g_{2})$ with slant functions $\\theta,\\bar{\\theta}$. Then\n\t\\begin{enumerate}\n\t\t\\item[\\textit{i)}] the distribution $D$ is integrable if and only if\n\t\\begin{align*}\n\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(U_{1},\\eta U_{2}\\right),f_{*}\\eta V_{1}\\right)\n=&g_{1}\\left(\\mathcal{T}_{U_{2}}\\eta\\xi U_{1}-\\mathcal{T}_{U_{1}}\\eta\\xi U_{2},V_{1}\\right)\\\\&+g_{1}\\left(\\mathcal{T}_{U_{1}}\\eta U_{2}-\\mathcal{T}_{U_{2}}\\eta U_{1},\\xi V_{1}\\right)\\\\&+\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(U_{2},\\eta U_{1}\\right),f_{*}\\eta V_{1}\\right).\n\\end{align*}\n\t\t\\item[\\textit{ii)}] the distribution $\\bar{D}$ is integrable if and only if\n\t\t\t\\begin{align*}\n\t\t\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(V_{1},\\eta V_{2}\\right),f_{*}\\eta U_{1}\\right)\n\t\t=&g_{1}\\left(\\mathcal{T}_{V_{2}}\\eta\\xi V_{1}-\\mathcal{T}_{V_{1}}\\eta\\xi V_{2},U_{1}\\right)\\\\&+g_{1}\\left(\\mathcal{T}_{V_{1}}\\eta V_{2}-\\mathcal{T}_{V_{2}}\\eta V_{1},\\xi U_{1}\\right)\\\\&+\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(V_{2},\\eta V_{1}\\right),f_{*}\\eta U_{1}\\right).\n\t\t\\end{align*}\n\t\\end{enumerate}\n\twhere $U_{1},U_{2}\\in \\Gamma\\left(D\\right)$, $V_{1},V_{2}\\in \\Gamma\\left(\\bar{D}\\right)$.\n\\end{theorem}\n\n\\begin{proof}\t\n$i)$ From $U_{1},U_{2} \\in \\Gamma\\left(D\\right)$ and $V_{1} \\in \\Gamma\\left(\\bar{D}\\right)$ we have\n\t\\setlength\\arraycolsep{2pt}\n\t\\begin{align*}\n\tg_{1}\\left([U_{1},U_{2}],V_{1}\\right)=&g_{1}\\left(\\nabla_{U_{1}}\\xi U_{2},J V_{1}\\right)+g_{1}\\left(\\nabla_{U_{1}}\\eta U_{2},J V_{1}\\right)\\\\&-g_{1}\\left(\\nabla_{U_{2}}\\xi U_{1},J V_{1}\\right)-g_{1}\\left(\\nabla_{U_{2}}\\eta U_{1},J V_{1}\\right).\n\t\\end{align*}\n\tConsidering Theorem 3.1 we arrive\n\t\\begin{align*}\n\t\\sin^{2}\\theta g_{1}\\left([U_{1},U_{2}],V_{1}\\right)\n\t=&-g_{1}\\left(\\nabla_{U_{1}}\\eta\\xi U_{2},V_{1}\\right)+g_{1}\\left(\\nabla_{U_{1}}\\eta U_{2}, JV_{1}\\right)\\\\&+g_{1}\\left(\\nabla_{U_{2}}\\eta\\xi U_{1},V_{1}\\right)-g_{1}\\left(\\nabla_{U_{2}}\\eta U_{1},JV_{1}\\right).\n\t\\end{align*}\n\tBy using the equation \\eqref{2.5} we obtain\n\t\\begin{align*}\n\t\\sin^{2}\\theta g_{1}\\left([U_{1},U_{2}],V_{1}\\right)\n\t=&g_{1}\\left(\\mathcal{T}_{U_{2}}\\eta\\xi U_{1}-\\mathcal{T}_{U_{1}}\\eta\\xi U_{2},V_{1}\\right)+g_{1}\\left(\\mathcal{T}_{U_{1}}\\eta U_{2}-\\mathcal{T}_{U_{2}}\\eta U_{1},\\xi V_{1}\\right)\\\\&-\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(U_{1},\\eta U_{2}\\right),f_{*}\\eta V_{1}\\right)\\\\&+\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(U_{2},\\eta U_{1}\\right),f_{*}\\eta V_{1}\\right).\n\t\\end{align*}\n\tThe proof of $ii)$ can be made by applying similar calculations.\n\\end{proof}\n\n\\begin{theorem}\n\tSuppose that $f$ is a proper conformal bi-slant submersion from a Kaehlerian manifold $(M_{1},g_{1},J)$ onto a Riemannian manifold $(M_{2},g_{2})$ with slant functions $\\theta,\\bar{\\theta}$. 
Then the distribution $D$ defines a totally geodesic foliation if and only if\n\t\\small{\n\t\\begin{align}\n\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(\\eta U_{2},U_{1}\\right),f_{*}\\eta V_{1}\\right)=-g_{1}\\left(\\mathcal{T}_{U_{1}}\\eta\\xi U_{2},V_{1}\\right)+g_{1}\\left(\\mathcal{T}_{U_{1}}\\eta U_{2},\\xi V_{1}\\right).\n\\end{align}}\\normalsize\n\tand\n\t\\begin{align}\n\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{X_{1}}f_{*}\\eta U_{1},f_{*}\\eta U_{2}\\right)=&-\\sin^{2}\\theta g_{1}\\left(\\left[U_{1},X_{1}\\right],U_{1}\\right)+g_{1}\\left(\\mathcal{A}_{X_{1}}\\eta\\xi U_{1},U_{2}\\right)\\nonumber\\\\&+g_{1}\\left(grad(\\ln\\lambda),X_{1}\\right)g_{1}\\left(\\eta U_{1},\\eta U_{2}\\right)\\nonumber\\\\&+g_{1}\\left(grad(\\ln\\lambda),\\eta U_{1}\\right)g_{1}\\left(X_{1},\\eta U_{2}\\right)\\nonumber\\\\&+g_{1}\\left(grad(\\ln\\lambda),\\eta U_{2}\\right)g_{1}\\left(X_{1},\\eta U_{1}\\right)\\nonumber\\\\&-g_{1}\\left(\\mathcal{A}_{X_{1}}\\eta U_{1},\\xi U_{2}\\right)\n\t\\end{align}\n\twhere $U_{1},U_{2}\\in \\Gamma\\left(D\\right)$, $V_{1}\\in \\Gamma\\left(\\bar{D}\\right)$ and $X_{1}\\in\\Gamma\\left(\\left(\\ker f_{*}\\right)^{\\perp}\\right)$.\n\\end{theorem}\n\n\\begin{proof}\n\t\n\tFor $U_{1},U_{2}\\in \\Gamma\\left(D\\right)$ and $V_{1}\\in\\Gamma\\left(\\bar{D}\\right)$ we have\n\t\\begin{align*}\n\tg_{1}\\left(\\nabla_{U_{1}}U_{2},V_{1}\\right)=&-g_{1}\\left(\\nabla_{U_{1}}\\xi^{2}U_{2},V_{1}\\right)-g_{1}\\left(\\nabla_{U_{1}}\\eta\\xi U_{2},V_{1}\\right)+g_{1}\\left(\\nabla_{U_{1}}\\eta U_{2},JV_{1}\\right).\n\t\\end{align*}\n\tThus we can write\n\t\\begin{align*}\n\t\\sin^{2}\\theta g_{1}\\left(\\nabla_{U_{1}}U_{2},V_{1}\\right)=&-g_{1}\\left(\\mathcal{T}_{U_{1}}\\eta\\xi U_{2},V_{1}\\right)+g_{1}\\left(\\mathcal{T}_{U_{1}}\\eta U_{2},\\xi V_{1}\\right)\\\\&+g_{1}\\left(\\mathcal{H}\\nabla_{U_{1}}\\eta U_{2},\\eta V_{1}\\right).\n\t\\end{align*}\n\tUsing \\eqref{2.9} we obtain \n\t\\begin{align*}\n\t\\sin^{2}\\theta g_{1}\\left(\\nabla_{U_{1}}U_{2},V_{1}\\right)=&-g_{1}\\left(\\mathcal{T}_{U_{1}}\\eta\\xi U_{2},V_{1}\\right)+g_{1}\\left(\\mathcal{T}_{U_{1}}\\eta U_{2},\\xi V_{1}\\right)\\\\&-\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(\\eta U_{2},U_{1}\\right),f_{*}\\eta V_{1}\\right).\n\t\\end{align*}\n\twhich is first equation in Theorem 3.3.\n\t\n\tOn the other hand any $U_{1},U_{2} \\in \\Gamma(D) $ and $X_{1}\\in\\Gamma\\left(\\left(\\ker f_{*}\\right)^{\\perp}\\right)$ we can write\n\t\\setlength\\arraycolsep{2pt}\n\t\\begin{align*}\n\tg_{1}\\left(\\nabla_{U_{1}}U_{2},X_{1}\\right)=&-g_{1}\\left(\\left[U_{1},X_{1}\\right],U_{2}\\right)-g_{1}\\left(\\nabla_{X_{1}}U_{1},U_{2}\\right)\\\\\n\t=&-g_{1}\\left(\\left[U_{1},X_{1}\\right],U_{2}\\right)+g_{1}\\left(\\nabla_{X}J\\xi U_{1},U_{2}\\right)-g_{1}\\left(\\nabla_{X_{1}}\\eta U_{1},J U_{2}\\right).\n\t\\end{align*}\n\tUsing Theorem 3.1, we arrive following equation\n\t\\begin{align*}\n\tg_{1}\\left(\\nabla_{U_{1}}U_{2},X_{1}\\right)=&-g_{1}\\left(\\left[U_{1},X_{1}\\right],U_{2}\\right)-\\cos^{2}\\theta g_{1}\\left(\\nabla_{X_{1}}U_{1},U_{2}\\right)\\\\&+g_{1}\\left(\\nabla_{X_{1}}\\eta\\xi U_{1},U_{2}\\right)-g_{1}\\left(\\nabla_{X_{1}}\\eta U_{1},JU_{2}\\right)\\nonumber\n\t\\end{align*}\n\tFrom \\eqref{2.7} and Lemma 2.1 we have\n\t\\begin{align*}\n\t\\sin^2\\theta g_{1}\\left(\\nabla_{U_{1}}U_{2},X_{1}\\right)=&-\\sin^{2}\\theta g_{1}\\left(\\left[U_{1},X_{1}\\right],U_{1}\\right)+g_{1}\\left(\\mathcal{A}_{X_{1}}\\eta\\xi U_{1},U_{2}\\right)\\\\&-g_{1}\\left(\\mathcal{A}_{X_{1}}\\eta U_{1},\\xi 
U_{2}\\right)-\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{X_{1}}f_{*}\\eta U_{1},f_{*}\\eta U_{2}\\right)\\\\&+g_{1}\\left(grad(\\ln\\lambda),X_{1}\\right)g_{1}\\left(\\eta U_{1},\\eta U_{2}\\right)\\\\&+g_{1}\\left(grad(\\ln\\lambda),\\eta U_{1}\\right)g_{1}\\left(X_{1},\\eta U_{2}\\right)\\\\&+g_{1}\\left(grad(\\ln\\lambda),\\eta U_{2}\\right)g_{1}\\left(X_{1},\\eta U_{1}\\right)\\nonumber\n\t\\end{align*}\n\tThis completes the proof.\n\\end{proof}\n\n\\begin{theorem}\n\tSuppose that $f$ is a proper conformal bi-slant submersion from a Kaehlerian manifold $(M_{1},g_{1},J)$ onto a Riemannian manifold $(M_{2},g_{2})$ with slant functions $\\theta,\\bar{\\theta}$. Then the distribution $\\bar{D}$ defines a totally geodesic foliation if and only if\n\t\\small{\n\t\\begin{align}\n\t\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(\\eta V_{2},V_{1}\\right),f_{*}\\eta U_{1}\\right)=-g_{1}\\left(\\mathcal{T}_{V_{1}}\\eta\\xi V_{2},U_{1}\\right)+g_{1}\\left(\\mathcal{T}_{V_{1}}\\eta V_{2},\\xi U_{1}\\right).\n\t\\end{align}}\\normalsize\n\tand\n\t\\begin{align}\n\t\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{X_{1}}f_{*}\\eta V_{1},f_{*}\\eta V_{2}\\right)=&-\\sin^{2}\\bar{\\theta} g_{1}\\left(\\left[V_{1},X_{1}\\right],V_{1}\\right)+g_{1}\\left(\\mathcal{A}_{X_{1}}\\eta\\xi V_{1},V_{2}\\right)\\nonumber\\\\&+g_{1}\\left(grad(\\ln\\lambda),X_{1}\\right)g_{1}\\left(\\eta V_{1},\\eta V_{2}\\right)\\nonumber\\\\&+g_{1}\\left(grad(\\ln\\lambda),\\eta V_{1}\\right)g_{1}\\left(X_{1},\\eta V_{2}\\right)\\nonumber\\\\&+g_{1}\\left(grad(\\ln\\lambda),\\eta V_{2}\\right)g_{1}\\left(X_{1},\\eta V_{1}\\right)\\nonumber\\\\&-g_{1}\\left(\\mathcal{A}_{X_{1}}\\eta V_{1},\\xi V_{2}\\right)\n\t\\end{align}\n\twhere $U_{1}\\in \\Gamma\\left(D\\right)$, $V_{1}, V_{2}\\in \\Gamma\\left(\\bar{D}\\right)$ and $X_{1}\\in\\Gamma\\left(\\left(\\ker f_{*}\\right)^{\\perp}\\right)$.\n\\end{theorem}\n\n\\begin{proof}\n\tThe proof of this theorem is similar to the proof of Theorem 3.3.\n\\end{proof}\n\n\\begin{theorem}\n\tSuppose that $f$ is a proper conformal bi-slant submersion from a Kaehlerian manifold $(M_{1},g_{1},J)$ onto a Riemannian manifold $(M_{2},g_{2})$ with slant functions $\\theta,\\bar{\\theta}$.Then, the vertical distribution $\\left(\\ker f_{*}\\right)$ is a locally product $M_{D}\\times M_{\\bar{D}}$ if and only if the equations (3.4), (3.5), (3.6) and (3.7) are hold where $M_{D}$ and $M_{\\bar{D}}$ are integral manifolds of the distributions $D$ and $\\bar{D}$, respectively. \t\n\\end{theorem}\n\n\\begin{theorem}\nSuppose that $f$ is a proper conformal bi-slant submersion from a Kaehlerian manifold $(M_{1},g_{1},J)$ onto a Riemannian manifold $(M_{2},g_{2})$ with slant functions $\\theta,\\bar{\\theta}$. 
Then the distribution $\\left(\\ker f_{*}\\right)^\\perp$ defines a totally geodesic foliation if and only if\n\\begin{align}\n\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{X_{1}}f_{*}\\eta U_{1},f_{*}CX_{2}\\right)=&-g_{1}\\left(\\mathcal{A}_{X_{1}}\\eta U_{1},BX_{2}\\right)\n+\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{X_{1}}f_{*}\\eta\\xi U_{1},f_{*}X_{2}\\right)\\nonumber\\\\&-g_{1}\\left(grad\\ln\\lambda,X_{1}\\right)g_{1}\\left(\\eta\\xi U_{1},X_{2}\\right)\\nonumber\\\\&-g_{1}\\left(grad\\ln\\lambda,\\eta\\xi U_{1}\\right)g_{1}\\left(X_{1},X_{2}\\right)\\nonumber\\\\&+g_{1}\\left(X_{1},\\eta\\xi U_{1}\\right)g_{1}\\left(grad\\ln\\lambda,X_{2}\\right)\\nonumber\\\\&+g_{1}\\left(grad\\ln\\lambda,\\eta U_{1}\\right)g_{1}\\left(X_{1},CX_{2}\\right)\\nonumber\\\\&-g_{1}\\left(X_{1},\\eta U_{1}\\right)g_{1}\\left(grad\\ln\\lambda,CX_{2}\\right).\n\\end{align}\nand\n\\begin{align}\n\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{X_{1}}f_{*}\\eta V_{1},f_{*}CX_{2}\\right)=&-g_{1}\\left(\\mathcal{A}_{X_{1}}\\eta V_{1},BX_{2}\\right)\n+\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{X_{1}}f_{*}\\eta\\xi V_{1},f_{*}X_{2}\\right)\\nonumber\\\\&-g_{1}\\left(grad\\ln\\lambda,X_{1}\\right)g_{1}\\left(\\eta\\xi V_{1},X_{2}\\right)\\nonumber\\\\&-g_{1}\\left(grad\\ln\\lambda,\\eta\\xi V_{1}\\right)g_{1}\\left(X_{1},X_{2}\\right)\\nonumber\\\\&+g_{1}\\left(X_{1},\\eta\\xi V_{1}\\right)g_{1}\\left(grad\\ln\\lambda,X_{2}\\right)\\nonumber\\\\&+g_{1}\\left(grad\\ln\\lambda,\\eta V_{1}\\right)g_{1}\\left(X_{1},CX_{2}\\right)\\nonumber\\\\&-g_{1}\\left(X_{1},\\eta V_{1}\\right)g_{1}\\left(grad\\ln\\lambda,CX_{2}\\right).\n\\end{align}\n\twhere $X_{1}, X_{2} \\in\\Gamma\\left(\\ker f_{*}\\right)^\\perp$, $U_{1}\\in\\Gamma\\left(D\\right)$ and $V_{1}\\in\\Gamma\\left(\\bar{D}\\right)$.\n\\end{theorem}\n\n\\begin{proof}\n\tFor $X_{1},X_{2} \\in\\Gamma\\left(\\ker \\pi_{*}\\right)^\\perp$ and $U_{1}\\in\\Gamma\\left(D\\right)$ we can write\n\t\\setlength\\arraycolsep{2pt}\n\t\\begin{eqnarray*}\n\tg_{1}\\left(\\nabla_{X_{1}}X_{2},U_{1}\\right)=-g_{1}\\left(\\nabla_{X_{1}}\\xi U_{1}, JX_{2}\\right)-g_{1}\\left(\\nabla_{X_{1}}\\eta U_{1},JX_{2}\\right)\n\t\\end{eqnarray*}\n\tFrom Theorem 3.1 we have\n\t\\begin{align*}\n\tg_{1}\\left(\\nabla_{X_{1}}X_{2},U_{1}\\right)=&-\\cos^{2}\\theta g_{1}\\left(\\nabla_{X_{1}}U_{1},X_{2}\\right)+g_{1}\\left(\\nabla_{X_{1}}\\eta\\xi U_{1},X_{2}\\right)\\\\&-g_{1}\\left(\\nabla_{X_{1}}\\eta U_{1},JX_{2}\\right)\n\t\\end{align*}\n\tBy using the equation (2.7) we derive\n\t\\begin{align*}\n\t\\sin^{2}\\theta g\\left(\\nabla_{X_{1}}X_{2},U_{1}\\right)=&g\\left(\\mathcal{H}\\nabla_{X_{1}}\\eta\\xi U_{1},X_{2}\\right)\n\t-g\\left(\\mathcal{H}\\nabla_{X_{1}}\\eta U_{1},CX_{2}\\right)\\\\&-g\\left(\\nabla_{X_{1}}\\eta U_{1},BX_{2}\\right).\n\t\\end{align*}\n\tThen it follows from Lemma 2.1 that\n\t\\begin{align*}\n\t\\sin^{2}\\theta g_{1}\\left(\\nabla_{X_{1}}X_{2},U_{1}\\right)=&-g_{1}\\left(\\mathcal{A}_{X_{1}}\\eta U_{1},BX_{2}\\right)\n\t+\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{X_{1}}f_{*}\\eta\\xi U_{1},f_{*}X_{2}\\right)\\\\&-g_{1}\\left(grad\\ln\\lambda,X_{1}\\right)g_{1}\\left(\\eta\\xi U_{1},X_{2}\\right)\\\\&-g_{1}\\left(grad\\ln\\lambda,\\eta\\xi U_{1}\\right)g_{1}\\left(X_{1},X_{2}\\right)\\\\&+g_{1}\\left(X_{1},\\eta\\xi U_{1}\\right)g_{1}\\left(grad\\ln\\lambda,X_{2}\\right)\\\\&-\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{X_{1}}f_{*}\\eta U_{1},f_{*}CX_{2}\\right)\\\\&+g_{1}\\left(grad\\ln\\lambda,\\eta U_{1}\\right)g_{1}\\left(X_{1},CX_{2}\\right)\\\\&-g_{1}\\left(X_{1},\\eta 
U_{1}\\right)g_{1}\\left(grad\\ln\\lambda,CX_{2}\\right).\n\t\\end{align*}\n\tThus we have the first desired equation.\n\tSimilarly for $X_{1},X_{2} \\in\\Gamma\\left(\\left(\\ker \\pi_{*}\\right)^\\perp\\right)$ and $V_{1}\\in\\left(\\bar{D}\\right)$ we find\n\t\\begin{align*}\n\t\\sin^{2}\\bar{\\theta} g_{1}\\left(\\nabla_{X_{1}}X_{2},V_{1}\\right)=&-g_{1}\\left(\\mathcal{A}_{X_{1}}\\eta V_{1},BX_{2}\\right)\n+\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{X_{1}}f_{*}\\eta\\xi V_{1},f_{*}X_{2}\\right)\\\\&-g_{1}\\left(grad\\ln\\lambda,X_{1}\\right)g_{1}\\left(\\eta\\xi V_{1},X_{2}\\right)\\\\&-g_{1}\\left(grad\\ln\\lambda,\\eta\\xi V_{1}\\right)g_{1}\\left(X_{1},X_{2}\\right)\\\\&+g_{1}\\left(X_{1},\\eta\\xi V_{1}\\right)g_{1}\\left(grad\\ln\\lambda,X_{2}\\right)\\\\&-\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{X_{1}}f_{*}\\eta V_{1},f_{*}CX_{2}\\right)\\\\&+g_{1}\\left(grad\\ln\\lambda,\\eta V_{1}\\right)g_{1}\\left(X_{1},CX_{2}\\right)\\\\&-g_{1}\\left(X_{1},\\eta V_{1}\\right)g_{1}\\left(grad\\ln\\lambda,CX_{2}\\right).\n\t\\end{align*}\n\tHence the proof is completed.\n\\end{proof}\n\n\n\\begin{theorem}\n\tSuppose that $f$ is a proper conformal bi-slant submersion from a Kaehlerian manifold $(M_{1},g_{1},J)$ onto a Riemannian manifold $(M_{2},g_{2})$ with slant functions $\\theta,\\bar{\\theta}$.Then the distribution $\\left(\\ker f_{*}\\right)$ defines a totally geodesic foliation on $M_{1}\n\t$ if and only if\n\t\t\\setlength\\arraycolsep{2pt}\n\t\t\\small{\n\t\\begin{align}\n\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{X_{1}}f_{*}\\omega U_{1},f_{*}\\omega V_{1}\\right)=&\\left(\\cos^{2}\\theta-\\cos^{2}\\bar{\\theta}\\right)g_{1}\\left(\\nabla_{X_{1}}QU_{1},V_{1}\\right)-g_{1}\\left(\\mathcal{A}_{X_{1}}V_{1},\\eta\\xi U_{1}\\right)\\nonumber\\\\&-g_{1}\\left(\\mathcal{A}_{X_{1}}\\xi V_{1},\\eta U_{1}\\right)-\\sin^{2}\\theta g_{1}\\left(\\left[U_{1},X_{1}\\right],V_{1}\\right)\\nonumber\\\\&-g_{1}\\left(X_{1},\\eta U_{1}\\right)g_{1}\\left(grad\\ln\\lambda,\\eta V_{1}\\right)\\nonumber\\\\&+g_{1}\\left(grad\\ln\\lambda,X_{1}\\right)g_{1}\\left(\\eta U_{1},\\eta V_{1}\\right)\\nonumber\\\\&+g_{1}\\left(grad\\ln\\lambda,\\eta U_{1}\\right)g_{1}\\left(X_{1},\\eta V_{1}\\right)\n\\end{align}}\\normalsize\n\twhere $X_{1}\\in\\Gamma\\left(\\left(\\ker f_{*}\\right)^\\perp\\right)$ and $U_{1},V_{1}\\in\\Gamma\\left(\\ker f_{*}\\right)$.\n\\end{theorem}\n\n\\begin{proof}\n\tGiven $X_{1} \\in\\Gamma\\left(\\left(\\ker f_{*}\\right)^\\perp\\right)$ and $U_{1},V_{1}\\in\\left(\\ker f_{*}\\right)$. 
Then we obtain\n\t{\small\n\t\t\begin{align*}\n\t\tg_{1}\left(\nabla_{U_{1}}V_{1},X_{1}\right)=&-g_{1}\left(\left[U_{1},X_{1}\right],V_{1}\right)+g_{1}\left(J\nabla_{X_{1}}\xi U_{1},V_{1}\right)-g_{1}\left(\nabla_{X_{1}}\eta U_{1},JV_{1}\right)\n\t\t\end{align*}}\n\tBy using Theorem 3.1 we have\n\t\begin{align*}\n\tg_{1}\left(\nabla_{U_{1}}V_{1},X_{1}\right)=&-g_{1}\left(\left[U_{1},X_{1}\right],V_{1}\right)-\cos^{2}\theta g_{1}\left(\nabla_{X_{1}} PU_{1},V_{1}\right)\\&-\cos^{2}\bar{\theta}g_{1}\left(\nabla_{X_{1}} QU_{1},V_{1}\right)+g_{1}\left(\nabla_{X_{1}}\eta\xi U_{1},V_{1}\right)\\&-g_{1}\left(\nabla_{X_{1}}\omega U_{1},\xi V_{1}\right)-g_{1}\left(\nabla_{X_{1}}\omega U_{1},\eta V_{1}\right).\n\t\end{align*}\n\tThen we arrive at\n\t\begin{align*}\n\t\sin^{2}\theta g_{1}\left(\nabla_{U_{1}}V_{1},X_{1}\right)=&\left(\cos^{2}\theta-\cos^{2}\bar{\theta}\right)g_{1}\left(\nabla_{X_{1}}QU_{1},V_{1}\right)\\&+g_{1}\left(\nabla_{X_{1}}\eta\xi U_{1},V_{1}\right)-\sin^{2}\theta g_{1}\left(\left[U_{1},X_{1}\right],V_{1}\right)\\&-g_{1}\left(\nabla_{X_{1}}\eta U_{1},\xi V_{1}\right)-g_{1}\left(\nabla_{X_{1}}\eta U_{1},\eta V_{1}\right)\n\t\end{align*}\n\tFrom the equation \eqref{2.6} and Lemma 2.1 we obtain\n\t\t\begin{align*}\n\t\sin^{2}\theta g_{1}\left(\nabla_{U_{1}}V_{1},X_{1}\right)=&\left(\cos^{2}\theta-\cos^{2}\bar{\theta}\right)g_{1}\left(\nabla_{X_{1}}QU_{1},V_{1}\right)-g_{1}\left(\mathcal{A}_{X_{1}}V_{1},\eta\xi U_{1}\right)\\&-\sin^{2}\theta g_{1}\left(\left[U_{1},X_{1}\right],V_{1}\right)-g_{1}\left(\mathcal{A}_{X_{1}}\xi V_{1},\eta U_{1}\right)\\&+g_{1}\left(grad\ln\lambda,X_{1}\right)g_{1}\left(\eta U_{1},\eta V_{1}\right)\\&+g_{1}\left(grad\ln\lambda,\eta U_{1}\right)g_{1}\left(X_{1},\eta V_{1}\right)\\&-g_{1}\left(X_{1},\eta U_{1}\right)g_{1}\left(grad\ln\lambda,\eta V_{1}\right)\\&-\lambda^{-2}g_{2}\left(\nabla^{f}_{X_{1}}f_{*}\eta U_{1},f_{*}\eta V_{1}\right)\n\t\end{align*}\n\tUsing the above equation, the desired equality is achieved.\n\end{proof}\n\n\begin{theorem}\n\tSuppose that $f$ is a proper conformal bi-slant submersion from a Kaehlerian manifold $(M_{1},g_{1},J)$ onto a Riemannian manifold $(M_{2},g_{2})$ with slant functions $\theta,\bar{\theta}$. Then, the total space $M_{1}$ is a locally product manifold $M_{1D}\times M_{1\bar{D}}\times M_{1\left(\ker f_{*}\right)^{\perp}}$ if and only if the equations (3.4), (3.5), (3.6), (3.7), (3.8) and (3.9) hold, where $M_{1D}$, $M_{1\bar{D}}$ and $M_{1\left(\ker f_{*}\right)^{\perp}}$ are integral manifolds of the distributions $D$, $\bar{D}$ and $\left(\ker f_{*}\right)^{\perp}$, respectively. \t\n\end{theorem}\n\n\begin{theorem}\n\tSuppose that $f$ is a proper conformal bi-slant submersion from a Kaehlerian manifold $(M_{1},g_{1},J)$ onto a Riemannian manifold $(M_{2},g_{2})$ with slant functions $\theta,\bar{\theta}$. Then, the total space $M_{1}$ is a locally product manifold $M_{1\ker f_{*}}\times M_{1\left(\ker f_{*}\right)^{\perp}}$ if and only if the equations (3.8), (3.9) and (3.10) hold, where $M_{1\ker f_{*}}$ and $M_{1\left(\ker f_{*}\right)^{\perp}}$ are integral manifolds of the distributions $\ker f_{*}$ and $\left(\ker f_{*}\right)^{\perp}$, respectively. 
\t\n\\end{theorem}\n\n\\begin{theorem}\n\tSuppose that $f$ is a proper conformal bi-slant submersion from a Kaehlerian manifold $(M_{1},g_{1},J)$ onto a Riemannian manifold $(M_{2},g_{2})$ with slant functions $\\theta,\\bar{\\theta}$. Then $f$ is totally geodesic if and only if\n\n\\begin{align*}\n-\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{\\eta V_{1}}f_{*}\\eta U_{1},f_{*}JCX_{1}\\right)=&\\left(\\cos^{2}\\theta-\\cos^{2}\\bar{\\theta}\\right) g_{1}\\left(\\mathcal{T}_{U_{1}}QV_{1},X_{1}\\right)\\\\&+\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(\\xi U_{1},\\eta V_{1}\\right),f_{*} JCX_{1}\\right)\\\\&-g_{1}\\left(\\eta U_{1},\\eta V_{1}\\right)g_{1}\\left(grad\\ln\\lambda,JCX_{1}\\right)\\\\&+\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(U_{1},\\eta\\xi V_{1}\\right),f_{*}X_{1}\\right)\\\\&-g_{1}\\left(\\mathcal{T}_{U_{1}}\\eta V_{1},BX_{1}\\right)\n\\end{align*}\n\tand\n\t\\begin{align*}\n\\lambda^{-2}g_{2}\\left(\\nabla_{X_{1}}^{f}f_{*}\\eta U_{1},f_{*}CX_{2}\\right)=&\\left(\\cos^{2}\\theta-\\cos^{2}\\bar{\\theta}\\right)g_{1}\\left(\\mathcal{A}_{X_{1}}QU_{1},X_{2}\\right)\\\\&+\\lambda^{-2}g_{2}\\left(\\nabla_{X_{1}}^{f}f_{*}\\eta \\xi U_{1},f_{*}X_{2}\\right)\\\\&-g_{1}\\left(grad\\ln\\lambda,X_{1}\\right)g_{1}\\left(\\eta\\xi U_{1},X_{2}\\right)\\\\&-g_{1}\\left(grad\\ln\\lambda,\\eta\\xi U_{1}\\right)g_{1}\\left(X_{1},X_{2}\\right)\\\\&+g_{1}\\left(X_{1},\\eta\\xi U_{1}\\right)g_{1}\\left(grad\\ln\\lambda,X_{2}\\right)\\\\&+g_{1}\\left(grad\\ln\\lambda,\\eta U_{1}\\right)g_{1}\\left(X_{1},CX_{2}\\right)\\\\&-g_{1}\\left(X_{1},\\eta U_{1}\\right)g_{1}\\left(grad\\ln\\lambda,CX_{2}\\right)\\\\&+g_{1}\\left(\\mathcal{A}_{X_{1}}BX_{2},\\eta U\\right).\n\t\\end{align*}\n\n\twhere $X_{1},X_{2}\\in\\Gamma\\left(\\left(\\ker f_{*}\\right)^\\perp\\right)$ and $U_{1},V_{1}\\in\\Gamma\\left(\\ker f_{*}\\right)$.\n\\end{theorem}\n\n\\begin{proof}\nGiven $U_{1},V_{1}\\in \\Gamma\\left(\\ker f_{*}\\right)$ and $X_{1}\\in\\Gamma\\left(\\left(\\ker f_{*}\\right)^{\\perp}\\right)$ Then we write\n\t\\begin{equation*}\n\t\\lambda^{-2}g_{2}\\left(\\nabla f_{*}(U_{1},V_{1}),f_{*}X\\right)=-\\lambda^{-2}g_{2}\\left(f_{*}\\nabla_{U_{1}}V_{1},f_{*}X\\right).\n\t\\end{equation*}\n\tFrom Theorem 3.1 we obtain\n\t\\setlength\\arraycolsep{2pt}\n\t\\begin{align*}\n\\left(\\sin^{2}\\theta\\right) \\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(U_{1},V_{1}\\right),f_{*}X_{1}\\right)=&\\left(\\cos^{2}\\theta-\\cos^{2}\\bar{\\theta}\\right) g_{1}\\left(\\nabla_{U_{1}}QV_{1},X_{1}\\right)\\\\&-g_{1}\\left(\\nabla_{U_{1}}\\eta V_{1},JX_{1}\\right)+g_{1}\\left(\\nabla_{U_{1}}\\eta\\xi V_{1},X_{1}\\right)\n\t\\end{align*}\n\tConsidering \\eqref{2.4}, \\eqref{2.5} and Lemma 2.1 we find \n\t\\begin{align*}\n\t\\left(\\sin^{2}\\theta\\right) \\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(U_{1},V_{1}\\right),f_{*}X_{1}\\right)=&\\left(\\cos^{2}\\theta-\\cos^{2}\\bar{\\theta}\\right) g_{1}\\left(\\mathcal{T}_{U_{1}}QV_{1},X_{1}\\right)\\\\&+\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(\\xi U_{1},\\eta V_{1}\\right),f_{*} JCX_{1}\\right)\\\\&-g_{1}\\left(\\eta U_{1},\\eta V_{1}\\right)g_{1}\\left(grad\\ln\\lambda,JCX_{1}\\right)\\\\&+\\lambda^{-2}g_{2}\\left(\\nabla^{f}_{\\eta V_{1}}f_{*}\\eta U_{1},f_{*}JCX_{1}\\right)\\\\&+\\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(U_{1},\\eta\\xi V_{1}\\right),f_{*}X_{1}\\right)\\\\&-g_{1}\\left(\\mathcal{T}_{U_{1}}\\eta V_{1},BX_{1}\\right).\n\t\\end{align*}\n\tTherefore we obtain the first equation of Theorem 3.6.\\\\\n\tOn the other hand, for $X_{1},X_{2}\\in \\Gamma\\left(\\left(\\ker 
f_{*}\\right)^{\\perp}\\right)$ and $U_{1}\\in\\Gamma\\left(\\ker f_{*}\\right)$ we can write\n\t\\begin{align*}\n\t\\left(\\sin^{2}\\theta\\right) \\lambda^{-2} g_{2}\\left(\\nabla f_{*}\\left(U_{1},X_{1}\\right),f_{*}X_{2}\\right)=&\\left(\\cos^{2}\\theta-\\cos^{2}\\bar{\\theta}\\right)g_{1}\\left(\\nabla_{X_{1}}QU_{1},X_{2}\\right)\\\\&+g_{1}\\left(\\nabla_{X_{1}}\\eta U_{1},BX_{2}\\right)-g_{1}\\left(\\nabla_{X_{1}}\\eta U_{1},CX_{2}\\right).\n\t\\end{align*}\n\tBy using the equation (2.6) and Lemma 2.1, we arrive\n\t\\setlength\\arraycolsep{2pt}\n\t\\begin{align*}\n\t\\left(\\sin^{2}\\theta\\right) \\lambda^{-2}g_{2}\\left(\\nabla f_{*}\\left(U_{1},X_{1}\\right),f_{*}X_{2}\\right)=&\\left(\\cos^{2}\\theta-\\cos^{2}\\bar{\\theta}\\right)g_{1}\\left(\\mathcal{A}_{X_{1}}QU_{1},X_{2}\\right)\\\\&+\\lambda^{-2}g_{2}\\left(\\nabla_{X_{1}}^{f}f_{*}\\eta \\xi U_{1},f_{*}X_{2}\\right)\\\\&-g_{1}\\left(grad\\ln\\lambda,X_{1}\\right)g_{1}\\left(\\eta\\xi U_{1},X_{2}\\right)\\\\&-g_{1}\\left(grad\\ln\\lambda,\\eta\\xi U_{1}\\right)g_{1}\\left(X_{1},X_{2}\\right)\\\\&+g_{1}\\left(X_{1},\\eta\\xi U_{1}\\right)g_{1}\\left(grad\\ln\\lambda,X_{2}\\right)\\\\&-\\lambda^{-2}g_{2}\\left(\\nabla_{X_{1}}^{f}f_{*}\\eta U_{1},f_{*}CX_{2}\\right)\\\\&+g_{1}\\left(grad\\ln\\lambda,\\eta U_{1}\\right)g_{1}\\left(X_{1},CX_{2}\\right)\\\\&-g_{1}\\left(X_{1},\\eta U_{1}\\right)g_{1}\\left(grad\\ln\\lambda,CX_{2}\\right)\\\\&+g_{1}\\left(\\mathcal{A}_{X_{1}}BX_{2},\\eta U_{1}\\right).\n\t\\end{align*}\n\tThis concludes the proof.\n\\end{proof}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nPlanetary systems are formed from rotating protoplanetary disks, which are\nthe evolved phase of circumstellar disks produced during the collapse of\na protostellar cloud with some angular momentum.\n\nA standard model of such a protoplanetary disk, is that\nof a steady-state disk in vertical hydrostatic equilibrium, with gas and \ndust fully mixed and thermally coupled (Kenyon \\& Hartmann 1987). Such\na disk is flared, not flat, but still geometrically thin in the sense defined\nby Pringle (1981). The disk intercepts a significant\namount of radiation from the central star, but other heating sources (e.g.\nviscous dissipation) can be more important. If dissipation due to mass\naccretion is high, it becomes the main source of heating. Such are the\nprotoplanetary disks envisioned by Boss (1996, 1998), which have relatively\nhot (midplane temperature $T_{\\rm m}>$ 1200~K) inner regions due to mass\naccretion rates of $\\sim 10^{-6}$ to $10^{-5} M_{\\odot} {\\rm yr}^{-1}$.\nHowever, typical T Tauri disks of age $\\sim$1~Myr\nseem to have much lower mass accretion rates\n($\\leq 10^{-8} M_{\\odot} {\\rm yr}^{-1}$) with all other characteristics\nof protoplanetary disks (Hartmann et al. 1998, D'Alessio et al. 1998).\nFor disks of such low accretion rates stellar irradiation becomes increasingly\nthe dominant source of heating, to the limit of a passive disk modeled by\nChiang \\&\nGoldreich (1997). In our paper we will confine our attention to the latter\ncase, without entering the discussion about mass accretion rates.\n\nThe optical depth in the midplane of the disk is very high in the radial\ndirection, hence the temperature structure there is governed by the\nreprocessed irradiation of the disk surface. This is the case of a passive\ndisk (no accretion). 
At some point along the radial direction the temperature in the\nmidplane would drop below the ice sublimation level -- Hayashi (1981)\ncalled it the \"snow line\". \n\nIn this paper we revisit the calculation of the \"snow line\" for a protosolar\nprotoplanetry disk, given its special role in the process of planet formation.\nWe pay particular emphasis on the issues involved in treating the radiative\ntransfer and the dust properties.\n\n\\section{The Model} \nOur model is that of a star surrounded by a flared disk. In this paper, we\nhave chosen two examples -- a passive disk and a disk with a \n$10^{-8} M_{\\odot} {\\rm yr}^{-1}$ accretion rate. Both have the same \ncentral star of effective\ntemperature $T_*$= 4000~K, mass $M_*$= 0.5~$M_{\\odot}$, and radius\n$R_*$= 2.5~$R_{\\odot}$. Thus they correspond to the examples used by\nChiang \\& Goldreich (1997) and D'Alessio et al. (1998), respectively.\nOur disk has a surface gas mass density\n$\\Sigma$= $r^{-3\/2}{\\Sigma}_0$, with $r$ in AU and \n${\\Sigma}_0$= 10$^3$g~cm$^{-2}$ for our standard minimum-mass solar\nnebula model; we varied ${\\Sigma}_0$ between 10$^2$ and 10$^4$g~cm$^{-2}$\nto explore the effect of disk mass on the results.\n\nThe emergent\nspectrum of the star is calculated with a stellar model atmosphere code\nwith Kurucz (1992) line lists and opacities. The disk intercepts the\nstellar radiation $F_{irr}(r)$ at a small grazing angle $\\phi(r)$ (defined in \\S 5).\nThe emergent stellar spectrum\nis input into a code which solves the continuum radiative transfer problem\nfor a dusty envelope. The solution is a general spherical geometry solution\nwith a modification of the equations to a section corresponding to a flared\ndisk (see Menshchikov \\& Henning (1997) for a similar approach). \nIn that sense, the radiative transfer is solved essentially in 1D (vertically),\nas opposed to a full-scale consistent 2D case. The appeal of our approach\nis in the detailed radiative transfer allowed by the 1D scheme.\n\nThe continuum radiative transfer problem for a dusty envelope is solved\nwith the method developed by Ivezi\\'c \\& Elitzur (1997).\nThe scale invariance applied in the method is\npractically useful when the absorption coefficient is independent of\nintensity, which is the case of dust continuum radiation. The energy\ndensity is computed for all radial grid points through matrix inversion,\ni.e. a direct solution to the full scattering problem. This is both very\nfast and accurate at high optical depths.\nNote that in our calculations (at $r \\geq 0.1$~AU) the temperatures never\nexceed 1500-1800K in the disk and we do not consider dust sublimation;\nthe dust is present at all times and is the dominant opacity source. As in\nthe detailed work by Calvet et al. (1991) and more recently by D'Alessio\net al. (1998), the frequency ranges of scattering and emission can be treated\nseparately.\n\nFor the disk with mass accretion, the energy rate per unit volume\ngenerated locally by viscous stress is given by\n$2.25 \\alpha P(z){\\Omega}(r)$, where the turbulent viscosity coefficient\nis ${\\nu}= {\\alpha}c_{\\rm s}^2\\Omega^{-1}$, $\\Omega$ is the Keplerian angular\nvelocity, ${c_{\\rm s}}^2=P{\\rho}^{-1}$ is the sound speed, and a standard value\nfor $\\alpha =0.01$ is used. The net flux produced by viscous dissipation, $F_{vis}$, is\nthe only term to balance $F_{rad}$ $-$ unlike D'Alessio et al. 
(1998) we\nhave ignored the flux produced by ionization from energetic particles.\nThen we have the standard relation (see Bell et al. 1997), which holds true for the interior of the\ndisk where accretion heating occurs:\n$$\n\sigma T_{vis}^4 = {{3\dot{M} GM_*}\over {8\pi r^3}}\left[1-({R_*\over r})^{1\/2}\right] ,\n$$\nwhere $\dot{M}$ is the mass accretion rate, and $M_*$ and $R_*$ are the stellar mass\nand radius.\n\n\section{The Dust}\n\nThe properties of the dust affect the wavelength dependence of scattering\nand absorption efficiencies. The temperature in the midplane is sensitive to\nthe dust scattering albedo (ratio of scattering to total opacity) -- a higher \nalbedo would reduce the absorbed stellar flux.\nAs with our choice of mass accretion rates, we will use dust grains with\nproperties which best describe the disks of T Tauri stars. \n\nThe modelling of circumstellar disks has traditionally applied dust grain \nproperties derived from the interstellar medium. Most commonly used have\nbeen the grain parameters of the Mathis et al. (1977) distribution with\noptical constants from Draine \& Lee (1984). However, recent work on\nspectral distributions (Whitney et al. 1997) and high-resolution images\n(Wood et al. 1998) of T Tauri stars has favored a grain mixture which\nKim, Martin, \& Hendry (1994) derived from detailed fits to the\ninterstellar extinction law (hereafter KMH). Important grain properties\nare the opacity, $\kappa$, the scattering albedo, $\omega$, \nand the scattering asymmetry parameter, $g$.\nThe latter defines the forward throwing properties of the\ndust and ranges from 0 (isotropic scattering) to 1. What sets the KMH grains\napart is that they are more forward throwing ($g$= 0.40($R$), 0.25($K$)), \nand have higher albedo ($\omega$= 0.50($R$), 0.36($K$)) at\neach wavelength (optical to near-IR). They also produce lower polarization, but\nthat is a property we do not use here. The grain size distribution has the\nlower cutoff of KMH (0.005$\mu$m) and a smooth exponential falloff, instead\nof an upper cutoff, at 0.25$\mu$m. Since the dust settling time is\nproportional to (size)$^{-1}$, we performed calculations with upper\ncutoffs of 0.05$\mu$m and 0.1$\mu$m. None of these had any significant\neffect on the temperatures.\n\n\section{The Temperature Structure and the Snow Line}\nWe are interested in planet formation and therefore want to find the\nice condensation line (\"snow line\") in the midplane of the disk. \nTemperature inversions in the disk's vertical structure (see D'Alessio et al. \n1998) may lead to lower temperatures above the midplane, but ice that condenses\nthere is quickly destroyed upon crossing the warmer disk midplane.\nWe define the snow line simply in terms of the local gas-dust temperature\nin the midplane, and at a value of 170~K.\n\nIn our passive disk, under hydrostatic and radiative equilibrium,\nthe vertical and radial temperature profiles are similar to those of\nChiang \& Goldreich (1997) and $T(r) \propto r^{-3\/7}$. Here is why.\nThe disk has a concave upper surface (see\nHartmann \& Kenyon 1987) with a pressure scale height of the gas at the\nmidplane temperature, $h$:\n$$\n{{h}\over r} = \left[{rkT}\over {GM_* \mu m_H}\right] ^{1\/2},\n$$\nwhere $G$ is the gravitational constant, $\mu$ and $m_H$ are the mean molecular\nweight and the hydrogen mass,\n$r$ is radius in the disk, and $T$ is the midplane temperature at that\nradius. 
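As a rough self-consistency check (taking an assumed mean molecular weight $\mu \approx 2.3$ for the gas), the scaling $T(r) \propto r^{-3\/7}$ derived just below gives $h\/r \propto \left(rT\right)^{1\/2} \propto r^{2\/7}$, i.e. a flared, concave surface; numerically, $T \approx 140$~K at 1~AU around our $M_*$= 0.5~$M_{\odot}$ star corresponds to $h\/r \approx 0.03$, so the disk is indeed geometrically thin. 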
For the inner region (but $r \gg R_*$) of a disk with such a concave\nshape, the stellar\nincident flux $F_{irr}(r) \propto \phi(r) \sigma T_*^4 r^{-2}$, where\n$\phi(r) \propto r^{2\/7}$ (see next section). Here $T_*$ is the effective\ntemperature of the central star. Then our calculation makes use of\nthe balance between heating by irradiation and radiative cooling:\n$\sigma T^4(r) = F_{irr}(r)$. Therefore our midplane temperature will scale\nas $T(r) = T_0 r^{-3\/7}$~K. This is not surprising, given our standard\ntreatment of the vertical hydrostatic structure of the disk irradiated\nat angles $\phi(r)$. Only the scaling coefficient, $T_0 = 140$, will be\ndifferent.\nThe difference from the Chiang \& Goldreich model lies in our treatment \nof the dust\ngrains $-$ less energy is redistributed inwards in our calculation\nand the midplane temperature is lower (Figure 1). The model\nwith accretion heating is much warmer inward of 2.5$AU$, where it joins\nthe no-accretion (passive) model $-$ stellar irradiation dominates.\n\nThe result above is for our model with $M_*=0.5M_{\odot}$ and $T_*=4000$~K,\nwhich is standard for T~Tauri stars. It is interesting to see how the snow\nline changes for other realistic initial parameters. By retaining the same\ndust properties, this can be achieved using scaling relations rather than\ncomplete individual models as shown by Bell et al. (1997) for the pre-main\nsequence mass range of 0.5 to 2~$M_{\odot}$. An important assumption at\nthis point is that we have still retained the same (minimum-mass solar nebula)\ndisk. With our set of equations,\nthe midplane temperature coefficient, $T_0$, will scale with the stellar mass as\n$T_0 \propto M_*^{3\/(10-k)}$, where $k$ is a function of the total opacity in\nthe disk and $0 \leq k < 2$.\n\nOn the other hand, different disk masses for a fixed central star \n($M_*=0.5M_{\odot}$ and $T_*=4000$~K) can be modeled for zero accretion\nrate, by changing $\Sigma_0$ by a factor of 10 in each direction (defined\nin \S 2). Here a remaining assumption is the radial dependence of\n$\Sigma \propto r^{-3\/2}$; the latter could certainly be $\propto r^{-1}$\n(Cameron 1995), or a more complex function of $r$, but it is beyond the\nintent of our paper to deal with this. Moreover, we find minimal change\nin the midplane temperature in the $r$ range of interest to us (0.1$-$5~AU).\nThe reason is a near cancellation that occurs between the amount of heating\nand increased optical depth to the midplane. One could visualize the vertical\nstructure of a passive disk for $r = 0.1-5.0$~AU as consisting of three zones:\n(1)~an optically thin heating and cooling region (dust heated by direct starlight),\n(2)~an optically thin cooling, but optically thick heating layer, and (3)~the\nmidplane zone, where both heating and cooling occur in optically thick\nconditions. The rate of stellar heating of the disk per unit volume is\ndirectly proportional to the density, and affects the location and temperature\nof the irradiation layer. That is nearly cancelled (except for second order\nterms) in the mean intensity which reaches the midplane. Therefore we find\nthat $T(r)$ changes within $\pm 10$~K for a change in disk density ($\Sigma_0$)\nof a factor of 10. Note that $T(r)$ is only approximately $\propto r^{-3\/7}$\neven for $r = 0.1-5.0$~AU; the small effect of density on $T(r)$ has an\n$r$ dependence. 
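For orientation, this scaling alone already gives a rough estimate of the snow line location in the passive case: setting $T(r)$= 170~K in $T(r) = 140 r^{-3\/7}$~K yields $r = (140\/170)^{7\/3} \approx 0.64$~AU, close to the passive-disk value quoted in the Conclusion. 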
However, for the purposes of this paper, $i.e.$ our chosen\nvolume of parameter space, the effect of disk mass on $T(r)$ and the \"snow line\" \nis insignificant, and we do not pursue the issue in more detail.\nNote that for a disk with a heat source in the midplane, $i.e.$ with an\naccretion rate different from zero, the midplane $T(r)$ will be strongly\ncoupled to the density, roughly $\propto {\rho}^{1\/4}$, and will increase\nat every $r$ for higher disk masses (e.g. Lin \& Papaloizou 1985).\n\n\section{The Shape of the Upper Surface of the Disk}\n\nThe \"snow line\" calculation in the previous section is made under the\nassumption that the upper surface of the disk is perfectly concave and\nsmooth at all radii, $r$. This is a very good description of such\nunperturbed disks, because thermal and gravitational instabilities are\ndamped very efficiently (D'Alessio et al. 1999).\nObviously this is not going to be the case when an\nalready formed planet core distorts the disk. But even \na small distortion of the disk's surface may affect the\nthermal balance. The distortion need only be large compared to\nthe grazing angle at which the starlight strikes the disk, $\phi(r)$:\n$$\n\phi(r) = {{0.4R_*}\over {r}}+r{{d}\over {dr}}({{h}\over {r}}),\n$$\nwhere $h$ is the local scale height.\nThis small angle has a minimum\nat 0.4$AU$ and increases significantly only at very large distances:\n$\phi(r) \approx 0.005r^{-1} + 0.05r^{2\/7}$ (e.g. see Chiang \& Goldreich\n1997).\n\nThe amount of compression due to the additional mass of the planet, $M_p$, \nwill depend on the Hill radius, $R_H = r ({{M_p}\over {M_*}})^{1\/3}$,\nand how it compares to the local scale height, $h$. The depth of the depression\nwill be proportional to $(R_H\/h)^3$. The resulting depressions\n(one on each side) will be in the shadow from the central star, with a\nshade area dependent on the grazing angle, $\phi(r)$. The solid angle\nsubtended by this shade area from the midplane determines the amount of cooling\nand the new temperature in the sphere of influence of the planet core.\nThe question then arises whether,\nduring the timescale preceding the opening of the gap, the midplane\ntemperature in the vicinity of the accreting planet core could drop\nbelow the ice condensation limit even for orbits with $r$ much shorter\nthan the \"snow line\" radius in the undisturbed disk. \nThe answer appears to be affirmative, and a runaway develops whereby local\nice condensation leads to rapid growth of the initial rocky core, which\nin turn deepens the depression in the disk and facilitates more ice\ncondensation inside the planet's sphere of influence.\nDetails about the\ninstability which develops in this case will be given in a separate paper.\n\n\section{Conclusion}\nWhen the large fraction of close-in extrasolar giant planets became\napparent, we were led to question the standard notion of a distant \"snow\nline\" beyond 3$AU$ in a protoplanetary disk. Thence comes this paper.\nWe revisited the issue by paying attention to the stellar irradiation\nand its radiative effects on the disk, thus limiting ourselves to\npassive or low accretion rate disks.\n\nWe find a snow line as close as 0.7$AU$ in a passive disk, and not\nmuch farther out than 1.3$AU$ in a disk with a 10$^{-8}M_{\odot} {\rm yr}^{-1}$\naccretion rate for $M_*=0.5M_{\odot}$. 
The result is robust regardless\nof different reasonable model assumptions $-$ similar values\ncould in principle be inferred from existing disk\nmodels (Chiang \& Goldreich 1997; D'Alessio et al. 1998). For more massive\n(and luminous) central stars, the snow line shifts outwards: to 1.0$AU$ \n(1$M_{\odot}$) and 1.6$AU$ (2$M_{\odot}$). The effect of different disk\nmass is much smaller for passive disks $-$ the snow line shifts inwards\nby 0.08$AU$ for ${\Sigma}_0$= 10$^4$g~cm$^{-2}$. Our results\ndiffer from existing calculations (in that they bring the\nsnow line even closer in), because the dust grain properties we\nused have higher albedo and are more forward throwing. The dust grains\nand the disk models we used are typical of T Tauri stars of age $\sim$1~Myr.\nSo our conclusion is that if such T Tauri disks are typical of\nprotoplanetary disks, then the snow line in them could be as close-in\nas 0.7 AU.\n\nOur estimate of the snow line is accurate to within 10\%, once the model\nassumptions are made. These assumptions are by no means good or obvious,\nand can change the numbers considerably. For a passive disk model, the assumptions\nthat need to be justified are: the equilibrium of the disk, the lack of\ndust settling (i.e. gas and dust are well mixed), the adopted KMH properties \nof the dust grains, and the choice of molecular opacities. For a low-accretion-rate\ndisk model, one has to add to the above list: the choice of\nviscous dissipation model (and $\alpha$=0.01).\n\nFinally, we note that these estimates reflect a steady-state disk in\nhydrostatic equilibrium. The disk will be disturbed as planet formation\ncommences, which may affect the thermal balance locally given the small\nvalue of the grazing angle, $\phi$. For a certain planet core mass,\nan instability can develop at orbits smaller than 1 AU, which can lead \nto the formation of giant planets in situ. What then determines the division\nbetween terrestrial and giant planets in our\nSolar System remains unexplained (as it did even with a snow line at 2.7$AU$).\n\n\acknowledgements{\nWe thank N. Calvet, B. Noyes, S. Seager, and K. Wood for reading the\nmanuscript and helpful discussions, and the referee for very thoughtful\nquestions.\n}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}