diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzqej" "b/data_all_eng_slimpj/shuffled/split2/finalzqej" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzqej" @@ -0,0 +1,5 @@ +{"text":"\\section{INTRODUCTION}\nEntanglement is a defining property of quantum theory, and plays a crucial role in a broad range of problems in physics, ranging from the black hole information paradox~\\cite{page1993information} to the characterization of phases in condensed matter systems~\\cite{eisert2010colloquium}. Put simply, entanglement refers to quantum correlations between different parts of a physical system that cannot be explained classically~\\cite{bell1964einstein, horodeckiRMP2009}. Over the years, a wide range of \\emph{entanglement measures} have been devised to quantify entanglement~\\cite{pleniomeasures2007}. Prominent among those are the \\emph{bipartite} entanglement measures, which involve splitting the system in two parts.\n\nFor the special case of globally pure quantum states $\\ket{\\psi}$ (our interest here) and a bipartition, the von Neumann entanglement entropy, also known as the entropy of entanglement or just the \\emph{entanglement entropy}, is one of the simplest measures of quantum entanglement. It vanishes if and only if there is no quantum entanglement between the two parts, in which case the state must be a product state. We study the entanglement entropy in Hilbert spaces with a tensor product structure $\\mathcal{H}=\\mathcal{H}_A\\otimes\\mathcal{H}_B$\\footnote{For fermionic systems, as considered later, one needs to work with a fermionic generalization of the tensor product, which also gives rise to a fermionic notion of the partial trace~\\cite{szalay2021fermionic}.}. To compute the entanglement entropy of subsystem $A$ (with volume $V_A$) of $\\ket{\\psi}$, one traces out the complement subsystem $B$ (with volume $V-V_A$, where $V$ is the total volume) to obtain the mixed density matrix $\\hat \\rho_A=\\mathrm{Tr}_{\\mathcal{H}_B}\\ket{\\psi}\\bra{\\psi}$. The entanglement entropy $S_A$ of subsystem $A$ is then\n\\begin{equation}\\label{Neumann.entropy}\n S_A=-\\mathrm{Tr}(\\hat \\rho_A\\ln\\hat \\rho_A),\n\\end{equation}\nwhile the $n$th R\\'enyi entropy is defined as\n\\begin{equation}\n S_A^{(n)} = - \\ln[\\mathrm{Tr}(\\hat \\rho_A^n)]\\,.\n\\end{equation}\nThe second-order R\u00e9nyi entropy $S_A^{(2)}$ has already been measured in experiments with ultracold atoms in optical lattices~\\cite{islam2015measuring, kaufman2016quantum}.\n\nWe stress that the focus of this tutorial is in pure quantum states. Quantifying entanglement in globally mixed states is more challenging. In particular, the von Neumann and R\\'enyi entanglement entropies are not entanglement measures for globally mixed states. Several of the bipartite entanglement measures defined for mixed states ({\\it e.g.},\\ distillable entanglement, entanglement cost, entanglement of formation, relative entropy of entanglement, and squashed entanglement) reduce to the entanglement entropy when evaluated on pure states~\\cite{pleniomeasures2007}.\n\t\n\\subsection{Ground-state entanglement}\n\t\nIn general one is interested in understanding the behavior of measures of entanglement in physical systems, and in determining what such a behavior can tell us about the physical properties of the system. 
Much progress in this direction has been achieved in the context of many-body ground states of local Hamiltonians, for which a wide range of theoretical approaches are available~\\cite{amico_fazio_08, Peschel2009, calabrese_cardy_09, eisert2010colloquium}. Such ground states usually exhibit a leading term of the entanglement entropy that scales with the area, or with the logarithm of the volume, of the subsystem. Identifying and understanding universal properties of the entanglement entropy in ground states of local Hamiltonians has been a central goal~\\cite{audenaert_eisert_02, osterloh_amico_2002, osborne_nielsen_02, vidal_latorre_03}. \n\t\nIn one-dimensional systems of spinless fermions or $\\tfrac{1}{2}$ spins, the leading (in the volume $V_A$) term in the entanglement entropy has been found to distinguish ground states of critical systems from those of noncritical ones~\\cite{vidal_latorre_03, latorre_rico_04, hastings_07}. In the former the leading term exhibits a logarithmic scaling with the volume (when described by conformal field theory, the central charge is the prefactor of the logarithm~\\cite{vidal_latorre_03, latorre_rico_04, calabrese_cardy_04}), while in noncritical ground states the leading term is a constant (which, in one dimension, reflects an area-law scaling). Subleading terms have also been studied, specially in the context of states that are physically distinct but exhibit the same leading entanglement entropy scaling. An example in the context of quadratic Hamiltonians in two dimensions are ground states that are critical with a pointlike Fermi surface versus noncritical, which both exhibit a leading area-law entanglement entropy~\\cite{wolf_06, gioev_klich_06, barthel_chung_06, li_ding_06, cramer_eisert_07}. Remarkably, the subleading term in the former scales logarithmically with $V_A$ while it is constant for noncritical ground states~\\cite{ding_brayali_08}. Also, in two-dimensional systems, critical states described by conformal field theory~\\cite{fradkin_moore_06} and states with a spontaneously broken continuous symmetry~\\cite{kallin_hastings_11, metlitski_grover_15} have been found to exhibit a universal subleading logarithmic term.\n\t\n\\subsection{Excited-state entanglement}\n\t\nIn recent years, interest in understanding the far-from-equilibrium dynamics of (nearly) isolated quantum systems and the description of observables after equilibration~\\cite{polkovnikov2011colloquium, d2016quantum, gogolin2016equilibration} have motivated many studies of the entanglement properties of highly excited eigenstates of quantum many-body systems (mostly in the context of lattice systems)~\\cite{mejia_05, alba09, Deutsch_2010, santos_12, deutsch_li_13, storms_singh_14, moelter_barthel_14, lai_yang_15, beugeling_andreanov_15, yang_chamon_15, nandy_sen_16, vidmar2017entanglement, vidmar2017entanglement2, zhang_vidmar_18, dymarsky2018subsystem, garrisson_grover_18, nakagawa_watanabe_18, vidmar2018volume, huang_19, hackl2019average, lu_grover_19, murthy_19, jafarizadeh_rajabpour_19, wilming_goihl_19, leblond_mallayya_19, faiez_20a, modak_nag_20, kaneko_iyoda_20, bhakuni_sharma_20, faiez_20b, lydzba2020entanglement, lydzba2021entanglement, haque_mcclarty_20, miao_barthel_20}. 
Because of the limited suite of tools available to study entanglement properties of highly excited eigenstates of model Hamiltonians, most of the results reported in those works were obtained using exact diagonalization techniques, which are limited to relatively small system sizes.\n\t\nIn contrast to the ground states, typical highly excited many-body eigenstates of local Hamiltonians have a leading term of the entanglement entropy that scales with the volume of the subsystem. Also, in contrast to the ground states, the leading volume-law term exhibits a fundamentally different behavior depending on whether the Hamiltonian is nonintegrable (the generic case for physical Hamiltonians) or integrable. In the former case the coefficient has been found to be constant, while in the latter case it depends on the ratio between the volume of the subsystem and the volume of the entire system.\n\t\nMany-body systems that are integrable are special as they have an extensive number of local conserved quantities~\\cite{sutherland_book_04}. As a result, their equilibrium properties can in many instances be calculated analytically, and their near-equilibrium properties can be ``anomalous,'' e.g., they can exhibit transport without dissipation (ballistic transport). Also, isolated integrable systems fail to thermalize if taken far from equilibrium. Interested readers can learn about the effects of quantum integrability in the collection of reviews in Ref.~\\cite{calabrese_essler_review_16}. \n\t\nThere is a wide range of quadratic Hamiltonians in arbitrary dimensions (which include a wide range of noninteracting models), e.g., translationally invariant quadratic Hamiltonians, that can be seen as a special class of integrable models, a class in which the nondegenerate many-body eigenstates are Gaussian states, while the degenerate eigenstates can always be chosen to be Gaussian states. This means that those many-body eigenstates are fully characterized by their one-body density matrix or their covariance matrix. The entanglement entropy of highly excited eigenstates of some of those ``integrable'' quadratic Hamiltonians was studied in Refs.~\\cite{storms_singh_14, vidmar2017entanglement, zhang_vidmar_18, hackl2019average, jafarizadeh_rajabpour_19}. Other quadratic Hamiltonians in arbitrary dimensions that will be of interest to us here are quadratic Hamiltonians in which the single-particle sector exhibits quantum chaos (to be defined in the next subsections). We refer to such Hamiltonians as quantum-chaotic quadratic Hamiltonians. The entanglement entropy of highly excited eigenstates of quantum-chaotic quadratic Hamiltonians (on a lattice) was studied in Refs.~\\cite{lydzba2020entanglement, lydzba2021entanglement}. It was found to exhibit a typical leading volume-law term that is qualitatively similar to that found in eigenstates of integrable quadratic Hamiltonians (in which the single-particle sector does not display quantum chaos), such as translationally invariant quadratic Hamiltonians (on a lattice)~\\cite{vidmar2017entanglement, hackl2019average}. \n\t\nIn the presence of interactions, many-body integrable systems mostly exist in one dimension~\\cite{cazalilla_citro_review_11, guan2013fermi}. They come in two ``flavors,'' Hamiltonians that can be mapped onto noninteracting ones (a smaller class), and Hamiltonians that cannot be mapped onto noninteracting ones. 
Remarkably, both ``flavors'' have been found to describe pioneering experiments with ultracold quantum gases in one dimension~\\cite{moritz_stoferle_03, kinoshita_wenger_04, paredes_widera_04, kinoshita_wenger_05, kinoshita_wenger_06, amerongen_es_08, gring_kuhnert_12, fukuhara2013microscopic, pagano2014one, langen_erne_15, Bloch2016, tang_kao_18, schemmer2019generalized, wilson_malvania_20, jepsen2020spin, lev2020, malvania_zhang_21}. The entanglement entropy of highly excited eigenstates of lattice Hamiltonians that can be mapped onto noninteracting ones (which exhibit the same leading volume-law terms as their noninteracting counterparts) was studied in Refs.~\\cite{vidmar2018volume, hackl2019average}, while the entanglement entropy of highly excited eigenstates of a Hamiltonian (the spin-$\\frac{1}{2}$ XXZ chain) that cannot be mapped onto a noninteracting one was studied in Ref.~\\cite{leblond_mallayya_19}. Remarkably, in all the quadratic and integrable systems studied so far, the coefficient of the leading volume-law term of typical eigenstates has been found to depend on the ratio between the volume of the subsystem and the volume of the entire system.\n\t\nAnalytical progress understanding the previously mentioned numerical results has been achieved in some special cases. One such case is translationally invariant quadratic Hamiltonians, or models that can be mapped onto them in one dimension~\\cite{cazalilla_citro_review_11}, for which tight bounds were obtained for the leading (volume-law) term in the average entanglement entropy~\\cite{vidmar2017entanglement, hackl2019average}, and some understanding was gained about subleading corrections~\\cite{vidmar2018volume}. This was possible thanks to the Gaussian nature of the eigenstates. Another case is nonintegrable models under the assumption that their eigenstates exhibit eigenstate thermalization~\\cite{Deutsch_2010, dymarsky2018subsystem, garrisson_grover_18, murthy_19}.\n\t\n\\subsection{Random matrix theory in physics}\n\t\nRandom matrix theory has provided a more systematic approach to gaining an analytical understanding of the entanglement properties of many-body eigenstates in nonintegrable models~\\cite{yang_chamon_15, vidmar2017entanglement2, liu_chen_18, huang_gu_19, pengfei_chunxiao_20, morampudi_chandran_20, haque_mcclarty_20}. Such an approach is justified by the fact that many studies (see, {\\it e.g.},\\ Ref.~\\cite{d2016quantum} for a review) have shown that nonintegrable models exhibit ``quantum chaos.'' By quantum chaos what is meant is that statistical properties of highly excited eigenstates of such models, {\\it e.g.},\\ level spacing distributions, are described by the Wigner surmise~\\cite{d2016quantum}. This was conjectured by Bohigas, Giannoni, and Schmit (BGS)~\\cite{bohigas_giannoni_84} for quantum systems with a classical counterpart, in which case ``quantum chaos'' usually occurs when the classical counterparts are $K$-chaotic, where $K$ stands for Kolmogorov, and it is the class of systems that exhibit the highest degree of chaos. Remarkably, even statistical properties of eigenvectors such as the ratio between the variance of the diagonal and the off-diagonal matrix elements of Hermitian operators have been shown to agree with random matrix theory predictions~\\cite{mondaini_rigol_17, jansen_stolpp_19, richter_dymarsky_20, schoenle_jansen_21}. Recently, two of us (M.R. and L.V., in collaboration with P. 
\\L yd\\.{z}ba) used random matrix theory in the context of quantum-chaotic quadratic Hamiltonians to obtain a closed-form expression that describes the average entanglement entropy of highly excited eigenstates of quadratic models whose single-particle spectrum exhibits quantum chaos, such as the three-dimensional Anderson model~\\cite{lydzba2020entanglement, lydzba2021entanglement}.\n\t\nThe application of random matrix theory to many-body systems goes back to works by Wigner~\\cite{wigner_55, wigner_57, Wigner-surmise, wigner_58} as well as Landau and Smorodinsky~\\cite{landau1955} in the 1950s, who aimed at finding a statistical theory that described the excitation spectra in nuclei for elastic scattering processes. Their novel idea was that a sufficiently complicated operator, like the Hamiltonian or the lattice Dirac operator, can be replaced by a random matrix (whose entries are, preferably, Gaussian distributed as those are easier to deal with analytically) with the appropriate symmetries. For this to hold, it is not necessary that all matrix entries of the physical operator are nonzero. In condensed matter models~\\cite{d2016quantum}, as well as in lattice QCD~\\cite{Berbenni-Bitsch-1998, damgaard-2000, Farchioni-2000, Deuzeman-2011, Kieburg:2017rrk}, numerical evidence has shown that very sparse matrices can also exhibit spectral characteristics of a random matrix with Gaussian distributed entries. It is the concept of universality that has made random matrices so versatile. As in the central limit theorem, in which a suitably normalized sum of a large number of independently and identically distributed random variables leads to a Gaussian random variable under very mild conditions, it happens that, for many spectral quantities, it does not matter how the random matrix is actually distributed. \n\t\nOver the years, random matrix theory has found many more applications in physics; for example, the local level density about Dirac points (also known as hard edges in random matrix theory) has been used to classify operators such as Hamiltonians and Dirac operators, and to discern global symmetries of a system. By global symmetries, we mean those that are described by a linear involution (an operator that squares to unity) in terms of unitary and antiunitary operators. Well-known examples in physics are time reversal, parity, charge conjugation, and chirality. Global symmetries play a central role when classifying systems in the context of quantum chaos~\\cite{Dyson1962}, in superconductors and topological insulators~\\cite{1997PhRvB..55.1142A, 2008PhRvB..78s5125S}, in quantum-chromodynamics-like theories in the continuum and on a lattice~\\cite{Verbaarschot:1994qf, Kieburg:2017rrk}, and in Sachdev-Ye-Kitaev (SYK) models~\\cite{Garcia21, Kanazawa:2017dpd}.\n\t\n\\subsection{Local spectral statistics}\\label{sec:localspec}\n\t\nThere are two spectral scales that are usually discussed in the context of random matrix theory, and to which different kinds of universalities apply. Those are the local and the global spectral scales.\n\t\nThe microscopic or local spectral scale is given by the local mean level spacing where the fluctuations of the individual eigenvalues are resolved. This scale is often of more physical interest as it probes the level repulsion of eigenvalues that are very close to each other. Such a level repulsion is usually algebraic for very small distances $s$. 
Namely, the level spacing distribution $p(s)$, which is the distribution of the distance of two consecutive eigenvalues, is of the form $s^\\beta$ (where $\\beta$ is the Dyson index) for small distances. \n\t\nWhile the symmetry of a Hamiltonian, such as time reversal, chirality, or charge conjugation, is not very important for the global spectral scale, it is very important for the local spectral statistics as it influences the value of $\\beta$. Wigner~\\cite{Wigner-surmise} derived the distribution for two-level Gaussian random matrices with Dyson index $\\beta=1$, which was soon generalized to $\\beta=2,4$,\n\\begin{equation}\n\tp(s)=2\\frac{(\\Gamma[(\\beta+2)\/2])^{\\beta+1}}{(\\Gamma[(\\beta+1)\/2])^{\\beta+2}}s^\\beta\\exp\\left[-\\left(\\frac{\\Gamma[(\\beta+2)\/2]}{\\Gamma[(\\beta+1)\/2]}\\right)^2s^2\\right]\n\\end{equation}\nwith the gamma function $\\Gamma[x]$. This distribution is nowadays called Wigner's surmise. The corresponding random matrices are known as the Gaussian orthogonal ensemble (GOE; $\\beta=1$), the Gaussian unitary ensemble (GUE; $\\beta=2$), and the Gaussian symplectic ensemble (GSE; $\\beta=4$). Those are usually compared with the level spacing distribution of independently distributed eigenvalues ($\\beta=0$), which gives the Poisson distribution\n\\begin{eqnarray}\n\tp(s)=e^{-s},\n\\end{eqnarray}\nand with the level spacing distribution of the one-dimensional quantum harmonic oscillator (also known as the picket fence statistics), which is a simple Dirac delta function\n\\begin{eqnarray}\n\tp(s)=\\delta(1-s).\n\\end{eqnarray}\nAll five benchmark distributions are shown in Fig.~\\ref{fig:level-spacing}(a).\n\t\n\\begin{figure*}[!t]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figure01}\n\t\\caption{(a) The level spacing distributions of the Poisson distribution (solid line; $\\beta=0$), the Wigner surmise of the GOE (dotted line; $\\beta=1$), of the GUE (dashed line; $\\beta=2$), of the GSE (dash-dot line; $\\beta=4$), and the picket fence statistics (vertical line; $\\beta=\\infty$). (b) Three Monte Carlo simulations (symbols) of the spacing between eigenvalues $(50\\cdot M)$ and $(50\\cdot M+1)$ of the direct sum of $M$ GUEs with a matrix dimension $N=100$ (in total, the matrix dimension is $100^M\\times 100^M$), compared to the Poisson distribution (solid line), and the Wigner surmise of the GUE (dashed line). The ensemble size is $10^5$ such that the statistical error is about $1\\%$. The bin size is about $0.1$, but varies as the unfolding slightly changes their actual value.}\n\t\\label{fig:level-spacing}\n\\end{figure*}\n\t\nThe use of the Wigner surmise as a diagnostic of quantum chaos and integrability followed fundamental conjectures by BGS~\\cite{bohigas_giannoni_84} (mentioned before) and Berry and Tabor~\\cite{berry_tabor_77}, respectively. The latter states that, for an integrable bounded system with more than two dimensions and incommensurable frequencies of the corresponding tori, the spectrum should follow the Poisson statistics. However, both conjectures have to be understood with the following care as the eigenvalue spectrum must be prepared appropriately.\n\\begin{itemize}\n\t\\item[(i)] The spectrum must be split into subspectra with fixed ``good'' quantum numbers such as the spin, parity, and conserved charges. This requires knowledge of all the symmetries of the model. 
This step must be taken since a direct sum of independent GUE matrices can yield a level spacing distribution that resembles the Poisson statistics; see Fig.~\\ref{fig:level-spacing}(b). \n\t\\item[(ii)] One needs to unfold the spectra, meaning that the distance between consecutive eigenvalues must on average be equal to one. This second step is crucial because only then are the level spacing distributions comparable, so that universal statistics can be revealed. The eigenvalue spectra of an irregularly shaped drum, a complex molecule, and a heavy nucleus have completely different energy scales. After the unfolding, these scales are removed and the spectra show common behavior. Yet, the procedure of unfolding is far from trivial for empirical spectra. There are other means, such as the study of the ratio between the two spacings of three consecutive eigenvalues~\\cite{Oganesyan-2007}. But this observable also has its limitations as this kind of ``automatic unfolding'' only works in the bulk of the spectrum. It fails at spectral edges and other critical points in the spectrum.\n\\end{itemize}\n\t\nIn the context of the Wigner surmise, we should stress that even though the statistics of the spectral fluctuations are well described at the level of the mean level spacing~\\cite{PhysRev.120.1698, FRENCH19715, BOHIGAS1971383} (even beyond the context of many-body systems; see, {\\it e.g.},\\ the reviews and books~\\cite{Guhr1998, mehta2004, akemann2011, haake2019} and the references therein), it was soon realized that there are statistical properties of the spectral fluctuations of many-body Hamiltonians that cannot be described using full random matrices; see Refs.~\\cite{BOHIGAS1971261, FRENCH1970449, monfrench1975, Benet:2000cy}. This is due to the fact that usually only one-, two-, and maybe up to four-body interactions represent the actual physical situation. Random matrices that reflect these sparse interactions are called embedded random matrix ensembles~\\cite{monfrench1975, RevModPhys.53.385, Guhr1998, Kota2001, Kota2014}. In the past decades, they have experienced a revival due to studies of the SYK model~\\cite{1993PhRvL..70.3339S, 2016PhRvD..94j6002M, Garcia-Garcia:2016mno, Garcia-Garcia:2017pzl, Garcia-Garcia:2018fns, 2014MPAG...17..441E}, and two-body interactions~\\cite{Vyas:2018aal, 2017AIPC.1912b0003B, 2018tqrf.book..457S}. A full understanding of how these additional tensor structures, which arise naturally in quantum many-body systems, impact the entanglement of the energy eigenstates is currently missing.\n\t\n\\subsection{Global spectral statistics and eigenvector statistics}\n\t\nThe second scale is the macroscopic or global spectral scale, which is usually defined as the average distance between the largest and the smallest eigenvalues. For this scale, Wigner~\\cite{wigner_55, wigner_58} derived the famous Wigner semicircle, which describes the level density of a Gaussian distributed real symmetric matrix. He was also the first to show, again under mild conditions, that the Gaussian distribution of the independent matrix entries can be replaced by an arbitrary distribution, and nevertheless one still obtains the Wigner semicircle. One important feature of this kind of universality is that it does not depend on the symmetries of the operators. For instance, whether the matrix is real symmetric, Hermitian, or Hermitian self-dual has no impact on the level density, which is in all those cases a Wigner semicircle~\\cite{Forrester_2010}. 
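\n\t\nThis kind of universality is straightforward to check numerically. The short Python sketch below (purely illustrative, assuming NumPy is available) diagonalizes a real symmetric matrix with non-Gaussian, binary $\\pm1$ entries and compares the empirical level density of the rescaled eigenvalues with the semicircle $\\rho(\\lambda)=\\frac{1}{2\\pi}\\sqrt{4-\\lambda^2}$.\n\\begin{verbatim}\nimport numpy as np\n\nN = 2000\nrng = np.random.default_rng(0)\n# Real symmetric Wigner matrix with non-Gaussian (Rademacher, +-1) entries.\nM = rng.choice([-1.0, 1.0], size=(N, N))\nH = (M + M.T) / np.sqrt(2)                  # off-diagonal entries have unit variance\nevals = np.linalg.eigvalsh(H) / np.sqrt(N)  # rescale so that the support is [-2, 2]\n\nhist, edges = np.histogram(evals, bins=50, range=(-2.2, 2.2), density=True)\ncenters = 0.5 * (edges[1:] + edges[:-1])\nsemicircle = np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi)\nprint(np.max(np.abs(hist - semicircle)))  # small, even though the entries are not Gaussian\n\\end{verbatim}\n\t\n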
The global spectral scale also plays a crucial role in time series analysis~\\cite{Giraud2015} and telecommunications~\\cite{Couillet2011}, where instead of the Wigner semicircle the Mar\\v{c}enko-Pastur distribution~\\cite{marcenko} describes the level density. \n\t\nThe global scale is always important when considering the so-called linear spectral statistics, meaning an observable that is of the form $\\sum_{j=1}^Nf(\\lambda_j)$, where the $\\lambda_j$ are the eigenvalues of the random matrix. This is the situation that we encounter when computing the entanglement entropy, where the $\\lambda_j$ are the eigenvalues of the density matrix; cf. Eq.~\\eqref{Neumann.entropy}. Therefore, we expect that the leading terms in the entanglement entropy are insensitive to the Dyson index $\\beta$, so that the entanglement entropy can serve as an excellent diagnostic for integrable or chaotic behavior. \n\t\nA related diagnostic for the amplitude $A$ of vector components of eigenstates is the Porter-Thomas distribution~\\cite{PhysRev.104.483}, which is used to decide whether a state is localized or delocalized. The Porter-Thomas distribution is a $\\chi^2$ distribution,\n\\begin{equation}\n\t\\mathcal{I}(A)= \\left(\\frac{\\beta N}{2}\\right)^{\\beta\/2}\\frac{A^{\\beta\/2-1}}{\\Gamma[\\beta\/2]}\\exp\\left[-\\frac{\\beta N}{2}A\\right] ,\n\\end{equation}\nwhere the normalization of the first moment is chosen to be equal to $1\/N$. Note that in the quaternion case one defines the amplitude as the squared modulus of a quaternion number. Hence, as a sum of four squared real components, similar to the squared modulus of a complex number (which is the sum of the square of the real and imaginary parts). Actually, the application of random matrices for computing the entanglement entropy is based on this idea. We can only replace a generic eigenstate by a Haar-distributed vector on a sphere after assuming that the state is delocalized. Unlike the Porter-Thomas distribution, as previously mentioned, the leading terms in the entanglement entropy are expected to be independent of the Dyson index $\\beta$ (which has yet to be proved).\n\t\nThe relation between certain quantum informational questions and random matrix theory also has a long history, and the techniques developed are diverse (see, e.g., the review~\\cite{2016JMP....57a5215C} and Chapter 37 of Ref.~\\cite{akemann2011}). Questions about generic distributions and the natural generation of random quantum states have been a focus of attention~\\cite{Hall:1998mh, 2004JPhA...37.8457S}. The answers to those questions are still debated as there are several measures of the set of quantum states and each has its benefits and flaws; for instance, two of those are based on the Hilbert-Schmidt metric and the Bures metric~\\cite{Bures1969, Hall:1998mh}. Those measures define some kind of ``uniform distribution'' on the set of all quantum states and, actually, generate random matrix ensembles that have been studied to some extent~\\cite{Hall:1998mh, 2001JPhA...34.7111Z, 2004JPhA...37.8457S, 2003JPhA...3610083S, 2010JPhA...43e5302O, 2016CMaPh.342..151F, wei2021quantum}. 
In this tutorial, we encounter one of the aforementioned ensembles, namely, the one related to the Hilbert-Schmidt metric, which naturally arises from a group action so that the states are Haar distributed according to this group action.\n\t\n\\subsection{Typicality and entanglement}\n\t\nAn important question that one can ask, which relates to the latest observations made in the context of random matrix ensembles, is what are the entanglement properties of typical pure quantum states. This was the earliest question to be addressed. Following work by Lubkin~\\cite{lubkin1978entropy} and Lloyd and Pagels~\\cite{lloyd1988complexity}, Page~\\cite{page1993average} obtained a closed analytical formula for the average entanglement entropy (over all pure quantum states) as a function of the system and subsystem Hilbert space dimensions. This formula was rigorously proven later in Refs.~\\cite{foong1994proof, sanchez1995simple, Sen:1996ph}. In lattice systems in which the dimension of the Hilbert space per site is finite, one can show that Page's formula results in a ``volume-law'' behavior, {\\it i.e.},\\ the entanglement entropy scales linearly in the volume $V_A$ of the subsystem, $S_A\\propto V_A$ (for a large system of volume $V$ and a subsystem with $V_A<V\/2$). Writing a state in a product basis as $\\ket{\\psi}=\\sum_{a=1}^{d_A}\\sum_{b=1}^{d_B}w_{ab}\\ket{a}\\otimes\\ket{b}$, with $d_A=\\dim\\mathcal{H}_A$ and $d_B=\\dim\\mathcal{H}_B$, the coefficients $w_{ab}$ form a $d_A\\times d_B$ matrix $W$ with $\\hat\\rho_A=WW^\\dagger$ and $\\hat\\rho_B=W^\\dagger W$, and the normalization of $\\ket{\\psi}$ becomes the fixed trace condition $\\operatorname{Tr} WW^\\dagger=1$. For states drawn uniformly (with respect to the Haar measure) from the unit sphere in $\\mathcal{H}$, the average of Eq.~\\eqref{Neumann.entropy} over this fixed trace ensemble yields Page's formula\n\\begin{equation}\\label{Page}\n\t\\braket{S_A}\\!=\\!\n\t\\begin{cases}\n\t\t\\Psi(d_Ad_B\\!+\\!1)\\!-\\!\\Psi(d_B\\!+\\!1)\\!-\\!\\frac{d_A-1}{2d_B}&\\quad d_A\\leq d_B \\\\[0.5em]\n\t\t\\Psi(d_Ad_B\\!+\\!1)\\!-\\!\\Psi(d_A\\!+\\!1)\\!-\\!\\frac{d_B-1}{2d_A} &\\quad d_A> d_B\n\t\\end{cases}\n\\end{equation}\nwhere $\\Psi(x)=\\Gamma'(x)\/\\Gamma(x)$ is the digamma function. In the thermodynamic limit $V\\to \\infty$, with $V_A,V-V_A\\to\\infty$ such that the subsystem fraction\n\\begin{equation}\n\tf=\\frac{V_A}{V}\n\\end{equation}\nis fixed, Page's formula~\\eqref{Page} reduces to\n\\begin{equation}\n\t\\braket{S_A}\\!=\\!\n\tf\\,V\\ln 2-2^{-|1-2f|V-1}+O(2^{-V})\\,,\n\t\\label{eq:Page-therm}\n\\end{equation}\nwhere we will be careful to consistently use Landau's ``big $O$'' and ``little $o$'' notation in this manuscript, such that\n\\begin{align}\n\tf(V)&=O(V^n) & \\Longleftrightarrow && \\lim_{V\\to\\infty}\\frac{f(V)}{V^n}&=c\\neq 0\\,,\\\\\n\t& & \\text{and}&&\\nonumber \\\\\n\tf(V)&=o(V^n) & \\Longleftrightarrow &&\\lim_{V\\to\\infty}\\frac{f(V)}{V^n}&=0\\,.\n\\end{align}\n\t\nThe first term in Eq.~\\eqref{eq:Page-therm} is a volume law: the average entanglement entropy scales as the minimum between the volumes $V_A=f V$ and $V_B=(1-f)V$. For $f\\neq \\frac{1}{2}$, the second term is an exponentially small correction. In fact, at fixed $f$ and in the limit $V\\to\\infty$, the second term $-2^{-|1-2f|V-1}$ becomes $-\\frac{1}{2}\\delta_{f,\\frac{1}{2}}$. We can also resolve precisely how this Kronecker delta arises in the neighborhood of $f=\\frac{1}{2}$. As it may be difficult to reach exactly $f=\\frac{1}{2}$ in physical experiments, the more precise statement is that we see the correction whenever $f=\\frac{1}{2}+O(1\/V)$. Formally, we can thus resolve the correction term exactly as $-2^{-2|\\Lambda_f|-1}$ for $f=\\frac{1}{2}+\\Lambda_f\/V$, as visualized in Fig.~\\ref{fig:Page-discon}.\n\t\n\\begin{figure*}[!t]\n\t\\centering \n\t\\includegraphics[width=\\linewidth]{figure03}\n\t\\caption{The average entanglement entropy $\\braket{S_A}=a V-b+o(1)$ as a function of the subsystem fraction $f=V_A\/V$ for large $V$. (a) Leading-order behavior, also known as the Page curve. (b) The constant correction, which is given by a Kronecker delta $-\\frac{1}{2}\\delta_{f,\\frac{1}{2}}$. 
This Kronecker delta is resolved in (c) by carrying out a double scaling limit $V\\to\\infty$ with $f=\\frac{V_A}{V}=\\frac{1}{2}+\\frac{\\Lambda_f}{V}$.}\n\t\\label{fig:Page-discon}\n\\end{figure*}\n\t\nWe find similar Kronecker delta contributions $\\delta_{f,\\frac{1}{2}}$ in subsequent sections where we discuss the typical entropy at fixed particle number and in the setting of Gaussian states. These terms highlight nonanalyticities in the entanglement entropy that can be resolved by double scaling limits. Those ``critical points'' occur at symmetry points and along axes. In the present case, this has happened with the dimensions $d_A$ and $d_B$ reflecting whether the density operator $\\hat\\rho_A=WW^\\dagger$ or $\\hat\\rho_B=W^\\dagger W$ contains generic zero eigenvalues. \n\t\nThe variance of the entanglement entropy of a random pure state is given by the exact formula (for $d_A\\leq d_B$) \\cite{vivo_pato_16,wei2017proof,bianchi2019typical}\n\\begin{align}\n\t(\\Delta S_A)^2=&\\; \\textstyle \\frac{d_A+d_B}{d_A d_B+1}\\Psi'(d_B+1)-\\Psi'(d_A d_B+1)\\nonumber\\\\[.5em]\n\t&\\textstyle-\\frac{(d_A-1)(d_A+2d_B-1)}{4d_B^2(d_A d_B+1)}\\,,\n\t\\label{eq:PageDeltaS}\n\\end{align}\nwhere $\\Psi'(x)=\\frac{d\\Psi(x)}{dx}=\\frac{d^2[\\ln{\\Gamma(x)}]}{dx^2}$ is the first derivative of the digamma function. It can be derived using similar techniques as those outlined above for the average. In particular, the fixed trace condition can be separated as before via the trick of the Fourier-Laplace transform, such that one is left with an average over the complex Wishart-Laguerre ensemble. The derivation is tedious and lengthy because one has to deal with double sums, which can be computed as described in Appendix~\\ref{app:Gaussfixednumber}.\\footnote{Our computation of the variance for Gaussian states at fixed particle number presented in Appendix~\\ref{app:Gaussfixednumber} shows how to deal with the double sums, and can also be used in the general setting. Basically, one needs to replace the Jacobi polynomials and their corresponding weight by the Laguerre polynomials and the weight function $x^{d_B-d_A}e^{-x}$.}\n\t\nIn the thermodynamic limit discussed above, Eq.~\\eqref{eq:PageDeltaS} reduces to\n\t\\begin{equation}\n\t\t(\\Delta S_A)^2=\n\t\t\\big(\\tfrac{1}{2}-\\tfrac{1}{4}\\delta_{f,\\frac{1}{2}}\\big)\\;2^{-(1+|1-2f|) V}\\,+\\,o(2^{-(1+|1-2f|) V}). \n\t\t\\label{eq:variance-page}\n\t\\end{equation}\nThis shows that the variance is exponentially small in $V$. As a result, in the thermodynamic limit the entanglement entropy of a typical state is given by Eq.~\\eqref{eq:Page-therm} \\cite{bianchi2019typical}.\n\t\nAnew, one could resolve the variance at the critical point $f=\\frac{1}{2}$ via a double scaling limit $f=\\frac{1}{2}+\\Lambda_f\/V$. This yields $(\\Delta S_A)^2=2^{-V}2^{-2|\\Lambda_f|-1}(1-2^{-2|\\Lambda_f|-1})$.\n\t\n\\subsection{Fixed number of particles}\\label{sec:page-fixedN}\n\t\nLet us go over to a Hilbert space $\\mathcal{H}^{(N)}$ with a fixed number of particles, but still carrying over the idea to draw states uniformly from the sphere in this Hilbert space. We further assume that there is a notion of a bipartition into subsystem $A$ and $B$, such that one can specify for each particle if it is in subsystem $A$ or $B$. 
Such a decomposition is not a simple tensor product anymore, but it is a direct sum of tensor products\n\\begin{align}\\label{eq:Hspace-decomposition}\n\t\\mathcal{H}^{(N)}=\\bigoplus^{N}_{N_A=0}\\Big(\\mathcal{H}_A^{(N_A)}\\otimes\\mathcal{H}_B^{(N-N_A)}\\Big)\\,.\n\\end{align}\nThe direct sum is over the occupation number in $A$ (which labels the center of the subalgebra). Each summand represents those states where $N_A$ particles are in subsystem $A$ and $N-N_A$ particles are in subsystem $B$ (assuming indistinguishable particles).\n\t\nWhen $N_A$ is larger than dimension $V_A$ of subsystem $A$, or $N-N_A$ is larger than $V-V_A$, we consider the tensor product $\\mathcal{H}_A^{(N_A)}\\otimes\\mathcal{H}_B^{(N-N_A)}$ as the empty set and, thence, nonexistent. This is the case as, due to Pauli's exclusion principle, we cannot put more fermions in the system than there are quantum states. We also adapt this understanding for the following discussion where direct sums, ordinary sums, and products are reduced to the components that are actually present.\n\t\n\\subsubsection{Statistical ensemble of states}\n\t\nLet us consider fermionic creation $\\hat{f}_i^\\dagger$ and annihilation $\\hat{f}^{}_i$ operators, which satisfy the anticommutation relations $\\{\\hat{f}^{}_i,\\hat{f}_j^\\dagger\\}=\\delta_{ij}$, $\\{\\hat{f}_i,\\hat{f}_j\\}=0$ with $i,j=1,\\ldots,V$. The corresponding number operators are\n\\begin{equation}\n\t\\hat{N}=\\sum_{i=1}^V\\hat{f}_i^\\dagger \\hat{f}^{}_i\\,,\\quad \\hat{N}_A=\\sum_{i=1}^{V_A}\\hat{f}_i^\\dagger \\hat{f}^{}_i\\,,\\quad \\hat{N}_B=\\sum_{i=V_A+1}^{V}\\hat{f}_i^\\dagger \\hat{f}^{}_i\\,,\n\t\\label{eq:Hilbert-sum}\n\\end{equation}\nwhere one can see that\n\\begin{equation}\n\t\\hat{N}=\\hat{N}_A+\\hat{N}_B\\,.\n\\end{equation}\nThe Hilbert space of the system can be decomposed as a direct sum of Hilbert spaces at fixed eigenvalue $N$ of $\\hat{N}$,\n\\begin{equation}\n\t\\mathcal{H}=\\bigotimes_{i=1}^V\\mathcal{H}_i\\;=\\;\\bigoplus_{N=0}^{V}\\,\\mathcal{H}^{(N)},\n\\end{equation}\nwith $\\mathcal{H}^{(N)}$ given by Eq.~\\eqref{eq:Hspace-decomposition}. The dimension of each $N$-particle sector is\n\\begin{equation}\n\td_N=\\dim\\mathcal{H}^{(N)}\\,=\\,\\frac{V!}{N!\\,(V-N)!}\\,.\n\t\\label{eq:dN}\n\\end{equation}\nIt is immediate to check that $\\dim \\mathcal{H}=\\sum_{N=0}^V d_N=2^V$. Similarly, one can use the number operators $\\hat{N}_A$ and $\\hat{N}_B$ to decompose the Hilbert spaces $\\mathcal{H}_A$ and $\\mathcal{H}_B$ into sectors\n\\begin{equation}\n\t\\mathcal{H}_A=\\;\\bigoplus_{N_A=0}^{V_A}\\,\\mathcal{H}_A^{(N_A)}\\,,\\qquad \\mathcal{H}_B=\\;\\bigoplus_{N_B=0}^{V-V_A}\\,\\mathcal{H}_B^{(N_B)}\\,.\n\\end{equation}\nLet us stress once again that, while $\\mathcal{H}$ is a tensor product over $A$ and $B$,\n\\begin{align}\n\t\\mathcal{H}=\\left(\\bigotimes_{i=1}^{V_A}\\mathcal{H}_i\\right)\\otimes\\left(\\bigotimes_{i=V_A+1}^{V}\\mathcal{H}_i\\right)\\;=\\;\\mathcal{H}_A\\otimes \\mathcal{H}_B\\,,\n\\end{align}\nthe sector at fixed number $N\\leq V_A$ is not a tensor product. It is the direct sum of tensor products from Eq.~\\eqref{eq:Hspace-decomposition}. 
The corresponding dimensions of the subsystems are\n\\begin{align}\n\t\\begin{split}\\label{eq:dAdB}\n\t\t& d_A(N_A)=\\dim\\mathcal{H}_A^{(N_A)}\\,=\\,\\frac{V_A!}{N_A!\\,(V_A-N_A)!}\\,,\\\\\n\t\t& d_B(N_B)=\\dim\\mathcal{H}_B^{(N_B)}\\,=\\,\\frac{(V-V_A)!}{N_B!\\,((V-V_A)-N_B)!}\\,.\n\t\\end{split}\n\\end{align}\nOne can check that the dimensions add up correctly,\n\\begin{equation}\n\t\\sum_{N_A=0}^N d_A(N_A)\\, d_B(N-N_A)=\\frac{V!}{N!(V-N)!}=d_N\\,.\\label{eq:normalization-varrho}\n\\end{equation}\nThe formula for $d_A$, and equivalently that of $d_B$, follows from a simple counting argument of how many choices there are to place $N_A$ indistinguishable particles on $V_A$ modes. Let us underline that it does not matter what we label particles and what holes. Note that $d_A(N_A)$ or $d_B(N-N_A)$ will vanish for $N_A$ outside of the interval $[\\max(0,N+V_A-V),\\min(N,V_A)]$, but we will not truncate the sum, as we will soon turn it into a Gaussian integral.\n\t\nFrom these dimensions we can readily read off two exact symmetries:\n\t\n\\noindent (i) It does not matter whether one considers subsystem $A$ or $B$. One can exchange $(d_A(N_A),V_A,N_A) \\leftrightarrow (d_B(N-N_A),V-V_A,N-N_A)$. This allows us to restrict the discussion to $V_A\\leq V\/2$. However, the dimensions of the two Hilbert spaces are exchanged, which (as we will show) yields nonanalytic points along $V_A=V\/2$ due to the two branches of Page curve~\\eqref{Page}.\n\t\n\\noindent (ii) Additionally, there is a particle-hole symmetry since it does not matter whether one counts particles or holes. Actually, the ``particles'' do not necessarily need to represent particles but they can be, for instance, up spins while the ``holes'' are down spins (having in mind spin-$\\frac{1}{2}$ systems). Any binary structure with fermion statistics (meaning Pauli principle) can be described in this setting. Mathematically, the particle-hole symmetry is reflected in the exchange $(N,N_A)\\leftrightarrow(V-N,V_A-N_A)$. We note that in this case the dimensions are not exchanged so one does not switch branches in Page curve~\\eqref{Page}. Therefore, the symmetry points at $N=V\/2$ will be analytic, as we will also show. This symmetry allows us to restrict $N\\leq V\/2$.\n\t\n\\noindent In summary, we only need to study the behavior in the quadrant $(V_A,N)\\in(0,\\frac{V}{2}]^2$. The remaining quadrants are obtained by symmetry.\n\t\nLike in the setting in which we do not fix the particle number, we can relate the problem to random matrix theory. Here, we briefly recall the most important ingredients from Ref.~\\cite{bianchi2019typical}. A state $\\ket{\\psi}\\in\\mathcal{H}^{(N)}$ can be again written in a basis. We choose the orthonormal basis vectors $\\ket{a,N_A} \\otimes \\ket{b,N-N_A} \\in \\mathcal{H}_A^{(N_A)} \\otimes \\mathcal{H}_B^{(N-N_A)}$ so that the state vector has the expansion\n\\begin{equation}\n\t\\ket{\\psi}=\\bigoplus_{N_A=0}^N \\sum_{a=1}^{d_A}\\sum_{b=1}^{d_B} \\tilde{w}_{ab}^{(N_A)}\\ket{a,N_A}\\otimes\\ket{b,N-N_A}\n\\end{equation}\nwith the abbreviations $d_A=d_A(N_A)$ and $d_B=d_B(N-N_A)$. 
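\n\t\nThese counting formulas are easy to check explicitly. The following short Python sketch (illustrative only; it uses the standard-library function math.comb for the binomial coefficients) verifies Eqs.~\\eqref{eq:dN}, \\eqref{eq:dAdB}, and \\eqref{eq:normalization-varrho} for a small example.\n\\begin{verbatim}\nfrom math import comb\n\nV, VA, N = 12, 5, 4                 # modes in total, modes in A, particle number\ndN = comb(V, N)                     # dim H^(N)\ndA = lambda NA: comb(VA, NA)        # dim H_A^(N_A)\ndB = lambda NB: comb(V - VA, NB)    # dim H_B^(N_B)\n\n# The block dimensions add up to dim H^(N) (Vandermonde's identity).\nassert sum(dA(NA) * dB(N - NA) for NA in range(N + 1)) == dN\n# Summing over all particle sectors recovers the Fock-space dimension 2^V.\nassert sum(comb(V, M) for M in range(V + 1)) == 2**V\nprint(dN, [dA(NA) * dB(N - NA) for NA in range(N + 1)])\n\\end{verbatim}\n\t\n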
The normalization is then reflected by the triple sum\n\\begin{equation}\\label{norm.Page.fixed}\n\t\\sum_{N_A=0}^N\\sum_{a=1}^{d_A}\\sum_{b=1}^{d_B}|\\tilde{w}_{ab}^{(N_A)}|^2=1.\n\\end{equation}\nThe direct sum over $N_A$ is important as it tells us that the density operator $\\hat\\rho_A=\\operatorname{Tr}_{\\mathcal{H}_B}\\ket{\\psi}\\bra{\\psi}$ has a block diagonal form, namely,\n\\begin{equation}\n\t\\hat\\rho_A=\\bigoplus_{N_A=0}^N \\sum_{a_1,a_2=1}^{d_A} \\sum_{b=1}^{d_B}\\tilde{w}_{a_1b}^{(N_A)}(\\tilde{w}_{a_2b}^{(N_A)})^*\\ket{a_1,N_A}\\bra{a_2,N_A}.\n\\end{equation}\nAgain, we can understand the coefficients $\\tilde{w}_{ab}^{(N_A)}\\in\\mathbb{C}$ as the entries of a $d_A\\times d_B$ matrix $\\tilde{W}_{N_A}$. The point is that those matrices are coupled by condition~\\eqref{norm.Page.fixed}. In Ref.~\\cite{bianchi2019typical} those matrices were decoupled by understanding their squared Hilbert-Schmidt norms as probability weights, {\\it i.e.},\\ defining\n\\begin{equation}\n\tp_{N_A}=\\sum_{a=1}^{d_A}\\sum_{b=1}^{d_B}|\\tilde{w}_{ab}^{(N_A)}|^2\\in[0,1]\n\\end{equation}\nsuch that $\\tilde{W}_{N_A}=\\sqrt{p_{N_A}}\\,W_{N_A}$. This notation allows one to identify the density operator of subsystem $A$ with the block diagonal matrix $\\hat\\rho_A = \\mathrm{diag} (p_0W_0W_0^\\dagger, \\ldots, p_N W_NW_N^\\dagger)$, as illustrated in Fig.~\\ref{fig:RDMsketch}.\n\t\n\\begin{figure}[t!]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figure04}\n\t\\caption{Sketch of the block dimensions of the reduced density matrix $\\hat\\rho_A$ of subsystem $A$ at the subsystem fraction $f=\\frac{1}{2}$. (a) Case $V=12$ at half filling $n=\\frac{1}{2}$, for which $V_A = 6$. The number of particles ranges from $N_A=0$ to $N_A=6$, with $N_A=3$ representing the largest block. (b) Case $V=20$ at quarter filling $n=\\frac{1}{4}$, for which $V_A = 10$. The number of particles ranges from $N_A=0$ to $N_A=5$, with $N_A=5$ representing the largest block. The blocks with $N_A \\geq N_{\\rm crit} = 3$ are larger than the corresponding blocks in subsystem $B$ (not shown in the figure).}\n\t\\label{fig:RDMsketch}\n\\end{figure}\n\t\nThus, the entanglement entropy becomes the sum\n\\begin{align}\n\tS_A(\\ket{\\psi})=-\\sum_{N_A=0}^N&\\Big[p_{N_A}\\operatorname{Tr}(W_{N_A}W_{N_A}^\\dagger\\ln[W_{N_A}W_{N_A}^\\dagger])\\nonumber\\\\\n\t&+p_{N_A}\\ln(p_{N_A})\\Big].\n\t\\label{page.ententro.fixed}\n\\end{align}\nAnew, the symmetry between the two subsystems is reflected by the spectral decomposition theorem since it holds that $\\hat\\rho_B = \\operatorname{Tr}_{\\mathcal{H}_A}\\ket{\\psi}\\bra{\\psi} = \\mathrm{diag}(p_0W_0^\\dagger W_0,\\ldots,p_NW_N^\\dagger W_N)$.\n\t\nSince the norms are encoded in the probability weights $p_{N_A}$, each matrix $W_{N_A}W_{N_A}^\\dagger$ independently describes a fixed trace ensemble, {\\it i.e.},\\ $\\operatorname{Tr} W_{N_A}W_{N_A}^\\dagger=1$. Thus, it can be dealt with in the same way as in Page's case, in particular each of those can be traced back to a complex Wishart-Laguerre ensemble of matrix dimension $d_A\\times d_B$. 
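\n\t\nThe construction above can be mimicked directly in a short numerical sketch (Python, assuming NumPy; illustrative only). A Haar-random state in $\\mathcal{H}^{(N)}$ is drawn as a normalized complex Gaussian vector, its coefficients are grouped into the blocks $\\tilde{W}_{N_A}=\\sqrt{p_{N_A}}\\,W_{N_A}$, and the entanglement entropy is evaluated from Eq.~\\eqref{page.ententro.fixed}.\n\\begin{verbatim}\nimport numpy as np\nfrom math import comb\n\nV, VA, N = 10, 4, 3                       # small example: 10 modes, 4 in A, 3 particles\ndims = [(comb(VA, NA), comb(V - VA, N - NA)) for NA in range(N + 1)]\ndN = sum(a * b for a, b in dims)          # equals comb(V, N)\n\nrng = np.random.default_rng(1)\npsi = rng.normal(size=dN) + 1j * rng.normal(size=dN)\npsi /= np.linalg.norm(psi)                # Haar-random state in H^(N)\n\nS, start = 0.0, 0\nfor a, b in dims:                         # loop over the blocks N_A = 0, ..., N\n    if a * b == 0:\n        continue\n    block = psi[start:start + a * b].reshape(a, b)   # tilde W_{N_A}\n    start += a * b\n    p = np.sum(np.abs(block)**2)                     # weight p_{N_A}\n    lam = np.linalg.eigvalsh(block @ block.conj().T) / p\n    lam = lam[lam > 1e-14]                # spectrum of W_{N_A} W_{N_A}^dagger\n    S += -p * np.sum(lam * np.log(lam)) - p * np.log(p)\nprint(S)   # entanglement entropy of this particular random state\n\\end{verbatim}\nAveraging over many such samples gives a numerical estimate of the ensemble average discussed next.\n\t\n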
The probability weights $p_{N_A}\\in[0,1]$ are also drawn randomly via the joint probability distribution~\\cite{bianchi2019typical}\n\\begin{equation}\n\t\\frac{\\delta\\left(1-\\sum_{N_A=0}^{N}p_{N_A}\\right)\\prod_{N_A=0}^Np_{N_A}^{d_Ad_B-1}dp_{N_A}}{\\int\\delta\\left(1-\\sum_{N_A=0}^{N}p_{N_A}\\right)\\prod_{N_A=0}^Np_{N_A}^{d_Ad_B-1}dp_{N_A}}.\n\\end{equation}\nThe Dirac delta function enforces condition~\\eqref{norm.Page.fixed}, while the factors $p_{N_A}^{d_Ad_B-1}$ are the Jacobians for the polar decomposition of the vectors in $\\mathcal{H}_A^{(N_A)} \\otimes \\mathcal{H}_B^{(N-N_A)}$ into their squared norm $p_{N_A}$ and the direction, which is encoded in $W_{N_A}$. The normalization of the distribution of $p_{N_A}$ was computed in Ref.~\\cite{bianchi2019typical} and can be deduced by inductively tracing the integrals over $p_{N_A}$ back to Euler's beta integrals in Eq.~(5.12.1) of Ref.~\\cite{NIST:DLMF}.\n\t\n\\subsubsection{Average and variance}\n\t\n\\begin{figure*}[t!]\n\t\\centering \n\t\\includegraphics[width=\\linewidth]{figure05}\n\t\\caption{The leading entanglement entropy $s_A(f,n) = \\lim_{V\\to\\infty} \\braket{S_A}_N\/V$ from Eq.~\\eqref{eq:leading-general} [see Eq.~\\eqref{eq:sA-useful}]. For $n=\\frac{1}{2}$, $s_A(f,n)$ coincides with Page's result (maximal entanglement). (a) Three-dimensional plot as a function of the subsystem fraction $f=V_A\/V$ and the filling ratio $n=N\/V$. One can see the mirror symmetries $V_A\\to V-V_A$ and $N\\to V-N$. (b) Results at fixed $n$ plotted as functions of $f$. The colored lines agree in both plots so that the right one can be seen as sections of the left one along the colored lines.}\n\t\\label{fig:Page}\n\\end{figure*}\n\t\nWith these definitions and discussions, we are now ready to state the main result in Eq.~(23) of Ref.~\\cite{bianchi2019typical}: the average entanglement entropy in system $A$ of a uniformly distributed random state in $\\mathcal{H}^{(N)}$ is given by\n\\begin{align}\\label{eq:Scenter}\n\t\\begin{split}\n\t\t\\hspace{-2mm}\\braket{S_A}_N\\!&=\\!\\!\\!\\!\\sum^{\\min(N,V_A)}_{N_A=0}\\! \\frac{d_Ad_B}{d_N}\\big(\\braket{S_A}\\!+\\!\\Psi(d_N\\!+\\!1)\\!-\\!\\Psi(d_Ad_B\\!+\\!1)\\big),\n\t\\end{split}\n\\end{align}\nwhere $d_A=d_A(N_A)$ and $d_B=d_B(N-N_A)$ depend on $N_A$ according to Eq.~\\eqref{eq:dAdB} and $\\braket{S_A}$ refers to Page's result~\\eqref{Page} for given $d_A$ and $d_B$. Equation~\\eqref{eq:Scenter} follows from the average over $W_{N_A}W_{N_A}^\\dagger$ in Eq.~\\eqref{page.ententro.fixed}, which are independent fixed trace random matrices. The prefactor $d_Ad_B\/d_N$, as well as the additional digamma functions, follow from Euler's beta integral in Eq.~(5.12.1) of Ref.~\\cite{NIST:DLMF}. In particular, we have used\n\\begin{align}\n\t\\begin{split}\n\t\t\\langle p_{N_A}^\\epsilon\\rangle&=\\frac{\\int_0^1 p_{N_A}^{\\epsilon+d_Ad_B-1}(1-p_{N_A})^{d_N-d_Ad_B-1}dp_{N_A}}{\\int_0^1 p_{N_A}^{d_Ad_B-1}(1-p_{N_A})^{d_N-d_Ad_B-1}dp_{N_A}}\\\\\n\t\t&=\\frac{\\Gamma[\\epsilon+d_Ad_B]\\Gamma[d_N]}{\\Gamma[d_Ad_B]\\Gamma[\\epsilon+d_N]}\n\t\\end{split}\n\\end{align}\nfor any $\\epsilon>-d_Ad_B$. 
The average on the right-hand side can be obtained by rescaling $p_j\\to (1-p_{N_A})p_j$ for any $j\\neq N_A$, which decouples the average over $p_{N_A}$ with the remaining probability weights $p_j$.\n\t\nWe can write Eq.~\\eqref{eq:Scenter} as\n\\begin{equation}\n\t\\braket{S_A}_N=\\sum^N_{N_A=0}\\varrho_{N_A}\\varphi_{N_A},\n\\end{equation}\nby introducing the quantities\n\\begin{align}\n\t\\begin{split}\\label{eq:varphi}\n\t\t&\\varrho_{N_A}=\\frac{d_Ad_B}{d_N}\\,,\\\\\n\t\t&\\varphi_{N_A}\\!=\\!\n\t\t\\begin{cases}\n\t\t\t\\Psi(d_N\\!+\\!1)\\!-\\!\\Psi(d_B\\!+\\!1)\\!-\\!\\frac{d_A-1}{2d_B}&\\quad d_A\\leq d_B \\\\[0.5em]\n\t\t\t\\Psi(d_N\\!+\\!1)\\!-\\!\\Psi(d_A\\!+\\!1)\\!-\\!\\frac{d_B-1}{2d_A} &\\quad d_A> d_B\n\t\t\\end{cases}\\\\\n\t\t&=\\scriptsize\\Psi(d_N\\!+\\!1)\\!-\\!\\Psi(\\max(d_A,d_B)\\!+\\!1)\\!-\\!\\min\\left(\\tfrac{d_A-1}{2d_B},\\tfrac{d_B-1}{2d_A}\\right).\n\t\\end{split}\n\\end{align}\nThe function $\\varrho_{N_A}$ can be understood as a probability distribution of having $N_A$ particles in $A$, with the normalization $\\sum_{N_A}\\varrho(N_A)=1$ following from Eq.~\\eqref{eq:normalization-varrho}. The function $\\varphi_{N_A}$, when understood as a continuous function, has a kink at $N_{\\mathrm{crit}}$, which refers to the largest integer such that $d_A(N_\\mathrm{crit})\\leq d_B(N-N_{\\mathrm{crit}})$. There is only one situation in which $N_{\\rm crit}$ is not well defined, namely, when $V_A=N=V\/2$ or, equivalently, when $f=n=\\frac{1}{2}$ with $f=V_A\/V$ and $n=N\/V$. Then it always holds that $d_A(N_A)=d_B(N-N_A)$ for all $N_A=0,\\ldots,N$. In this case, we do not need an $N_{\\rm crit}$ as the terms in both sums are the same.\n\t\nWe are unable to evaluate this sum exactly, but we can expand $\\braket{S_A}_N$ in powers of $V$ and approximate the sum by an integral\n\\begin{align}\\label{eq:average-int}\n\t\\hspace{-2mm}\\braket{S_A}_N\\!=\\!\\!\\sum^N_{N_A=0}\\!\\!\\varrho_{N_A}\\varphi_{N_A}\\!=\\!\\int^{\\infty}_{-\\infty} \\!\\!\\!\\!\\!\\!\\!\\varrho(n_A)\\varphi(n_A)dn_A\\!+\\!o(1),\n\\end{align}\nwhere $\\varrho(n_A)$ is the saddle point approximation of $V\\varrho_{n_AV}=Vd_Ad_B\/d_N$, which represents the probability distribution for the intensive variable $n_A=N_A\/V$. This is enough for computing the leading orders without double scaling. We find the normal distribution\n\\begin{align}\\label{Gauss.approx.Page}\n\t\\varrho(n_A)=\\frac{1}{\\sigma \\sqrt{2\\pi}}\\exp\\left[-\\frac{1}{2}\\left(\\frac{n_A-\\bar{n}_A}{\\sigma}\\right)^2\\right]+o(1)\n\\end{align}\nwith mean $\\bar{n}_A=fn$ and variance $\\sigma^2=f(1-f)n(1-n)\/V$.\n\t\n\\begin{figure*}\n\t\\centering \n\t\\includegraphics[width=\\linewidth]{figure06}\n\t\\caption{The entanglement entropy $\\braket{S_A}_N$ from Eq.~\\eqref{eq:leading-general} as viewed from the contributions of the first three terms in the expansion in $V$. (a)--(c) Three-dimensional plots as functions of the subsystem fraction $f=V_A\/V$ and the filling ratio $n=N\/V$. (d) Resolving the expansion coefficient $b$ for $f=\\frac{1}{2}+\\frac{\\Lambda_f}{\\sqrt{V}}$ around $f=\\frac{1}{2}$, as given by Eq.~\\eqref{eq:squareroot-full-Gaussian-half-system}, approaching zero for large $|\\Lambda_f|$. (e) Resolving the expansion coefficient $c$ for $n=\\frac{1}{2}+\\frac{\\Lambda_{n}}{\\sqrt{V}}$ and $f=\\frac{1}{2}+\\frac{\\Lambda_f}{V}$ around $f=n=\\frac{1}{2}$, as given by Eq.~\\eqref{eq:constant-full-Gaussian-half}, approaching $\\frac{2\\ln2-1}{4}$ for large $|\\Lambda_f|$ or $|\\Lambda_{n}|$. 
We underline that the subleading contributions are multiplied by a minus sign.}\n\t\\label{fig:general-N-visual}\n\\end{figure*}\n\t\nIn Appendix~\\ref{app:Ncrit}, we carefully analyze the difference $\\delta n_{\\mathrm{crit}}=n_{\\mathrm{crit}}-\\bar{n}_A$ for $n_{\\mathrm{crit}}=N_{\\mathrm{crit}}\/V$ and find that, for fixed $f<\\frac{1}{2}$, one always has $\\delta n_{\\mathrm{crit}}=O(1)$ and $\\delta n_{\\mathrm{crit}}>0$. Thus, for $f\\neq\\frac{1}{2}$, the center of the Gaussian $\\bar{n}_A$ is sufficiently separated from $n_{\\mathrm{crit}}$. This allows us to disregard the second sum in Eq.~\\eqref{eq:Scenter} as it is exponentially suppressed. In the case that $f>\\frac{1}{2}$, we can disregard the first sum because of the symmetry between the two subsystems $A$ and $B$.\n\t\nTo find the observable $\\varphi(n_A)$ from Eq.~\\eqref{eq:varphi}, we use Stirling's approximation\n\\begin{align}\\label{approx.Digamma}\n\t\\Psi[d_N\\!+\\!1]\\!-\\!\\Psi[\\max(d_A,d_B)\\!+\\!1]&=\\ln\\min\\left(\\tfrac{d_N}{d_B},\\tfrac{d_N}{d_A}\\right)\\!+\\!o(1).\n\\end{align}\nMoreover, it holds for $V\\gg1$ and fixed $f\\in(0,1)$\n\\begin{align}\n\t\\min\\left(\\tfrac{d_A-1}{d_B},\\tfrac{d_B-1}{d_A}\\right)&=\\delta_{f,\\frac{1}{2}}\\delta_{n,\\frac{1}{2}}+o(1).\n\\end{align}\nThe Kronecker-delta is, in fact, a ``relic'' of a double scaling limit, see Figs.~\\ref{fig:Page-discon}(b) and~\\ref{fig:Page-discon}(c) for a similar result in the context of Page's setting without fixed particle number. It can be resolved assuming that $f$ is close to $1\/2$ but not exactly at $1\/2$, see Appendix~\\ref{app:general-average}. When collecting all terms up to order $O(1)$, we obtain\n\\begin{align}\\label{eq:psi}\n\t\\begin{split}\n\t\t\\varphi(n_A)&=[n_A\\ln(n_A)-f\\ln(f)-n\\ln[(1-n)\/n]\\\\\n\t\t&\\quad-\\ln(1-n)+(f-n_A)\\ln(f-n_A)]V\\\\\n\t\t&\\quad+ \\frac{1}{2}\\ln\\left[\\frac{n_A (f-n_A)}{f(1-n)n}\\right]-\\frac{1}{2}\\delta_{f,\\frac{1}{2}}\\delta_{n,\\frac{1}{2}}+o(1)\\,,\n\t\\end{split}\n\\end{align}\nfor $n_A\\geq n_{\\rm crit}$. For $n_A\\leq n_{\\rm crit}$, we need to apply the symmetries $n_A\\to n-n_A$ and $f\\to 1-f$ in expansion~\\eqref{eq:psi}.\n\t\nIn the limit $V\\to\\infty$, Gaussian~\\eqref{Gauss.approx.Page} narrows because the standard deviation scales like $\\sigma\\sim1\/\\sqrt{V}$. We can, therefore, expand $\\varphi(n_A)$ in powers of $(n_A-\\bar{n}_A)$ around the mean $\\bar{n}_A$. In order to find the average up to a constant order, it suffices to expand up to the quadratic order and then calculate integral~\\eqref{eq:average-int}. Only for $f=\\frac{1}{2}$, we have $\\delta n_{\\mathrm{crit}}=o(1)$, so that we need to take into account the nonanalyticity in $\\varphi(n_A)$ introduced by the symmetry when exchanging the two subsystems. In this case, we integrate two different Taylor expansions for $n_A\\leq n\/2$ and $n_A\\geq n\/2$, which will introduce a term of order $\\sqrt{V}$, as discussed below.\n\t\nCombining these results, we arrive at the main result of this subsection,\n\\begin{align}\\label{eq:leading-general}\n\t\\langle S_A\\rangle_{N}&=[(n-1)\\ln(1-n)-n\\ln(n)]\\, f\\,V\\nonumber\\\\\n\t&-\\sqrt{\\frac{n(1-n)}{2\\pi}}\\left|\\ln\\left(\\frac{1-n}{n}\\right)\\right|\\delta_{f,\\frac{1}{2}}\\sqrt{V}\\nonumber\\\\\n\t&+\\frac{f+\\ln(1-f)}{2}-\\frac{1}{2}\\delta_{f,\\frac{1}{2}}\\delta_{n,\\frac{1}{2}}+o(1),\n\\end{align}\nvalid for $f\\leq \\frac{1}{2}$. 
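\n\t\nThe asymptotic expansion~\\eqref{eq:leading-general} can be checked against the exact finite-$V$ average by evaluating the sum in Eq.~\\eqref{eq:Scenter} directly. A minimal Python sketch (illustrative only, assuming SciPy for the digamma function $\\Psi$) is\n\\begin{verbatim}\nfrom math import comb, log\nfrom scipy.special import digamma\n\ndef exact_average(V, VA, N):\n    # Exact <S_A>_N: sum of rho_{N_A} * phi_{N_A} over the particle-number blocks.\n    dN = comb(V, N)\n    total = 0.0\n    for NA in range(N + 1):\n        dA, dB = comb(VA, NA), comb(V - VA, N - NA)\n        if dA * dB == 0:\n            continue\n        lo, hi = min(dA, dB), max(dA, dB)\n        phi = digamma(dN + 1) - digamma(hi + 1) - (lo - 1) / (2 * hi)\n        total += dA * dB / dN * phi\n    return total\n\ndef leading_order(V, VA, N):\n    # Volume-law and constant terms of the expansion, for f < 1/2 and n != 1/2.\n    f, n = VA / V, N / V\n    return ((n - 1) * log(1 - n) - n * log(n)) * f * V + (f + log(1 - f)) / 2\n\nfor V in (24, 48, 96):\n    print(V, exact_average(V, V // 4, V // 3), leading_order(V, V // 4, V // 3))\n\\end{verbatim}\nThe difference between the two numbers shrinks with increasing $V$, consistent with the $o(1)$ error term.\n\t\n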
The leading, volume-law, term in Eq.~\\eqref{eq:leading-general} is the same as that obtained in Refs.~\\cite{garrisson_grover_18, vidmar2017entanglement2} using random matrix theory, and the same as in Ref.~\\cite{bianchi2019typical} [see Eq.~(27)], where it is interpreted as the typical entanglement entropy in the (highly degenerate) eigenspace of a Hamiltonian of the form $\\hat{H}=\\hat{N}=\\hat{N}_A+\\hat{N}_B$. The subleading $\\sqrt{V}$ term was first discussed in Ref.~\\cite{vidmar2017entanglement2}, specifically, it coincides with the bound for such a term computed at $f=\\frac{1}{2}$~\\cite{vidmar2017entanglement2}. It is remarkable that, for $n\\neq\\frac{1}{2}$, the constant term is nothing but that obtained in Ref.~\\cite{vidmar2017entanglement2} within a ``mean field'' calculation, while at $n=f=\\frac{1}{2}$ the extra $-\\frac{1}{2}$ correction was found in Ref.~\\cite{vidmar2017entanglement2} numerically, both for random states as well as for eigenstates of a nonintegrable Hamiltonian. We had all the ingredients to guess the general form in Eq.~\\eqref{eq:leading-general}. Its actual derivation with all the details fills several pages, and can be found in Appendix~\\ref{app:general-average}. A visualization of the leading term in Eq.~\\eqref{eq:leading-general} can be found in Fig.~\\ref{fig:Page}.\n\t\nAn important question concerns the resolution of the Kronecker deltas in Eq.~\\eqref{eq:leading-general}, which indicate nontrivial scaling limits. The Kronecker deltas are only obtained along the critical line $f=\\frac{1}{2}$, which contains a multicritical point at $n=\\frac{1}{2}$ when $V\\to\\infty$. One needs to take the resolution into account because experiments are carried out in finite systems in which $f$ and $n$ can only be fixed within some experimental resolution. Consequently, it is important to understand within which margin of error one needs to choose $f$ and $n$ to observe the corresponding terms. This question can be answered by analyzing the limit $V\\to\\infty$ in the double scaling $f=\\frac{1}{2}+V^{-\\alpha} \\Lambda_f$ and\/or $n=\\frac{1}{2}+V^{-\\beta}\\Lambda_{n}$. We find that the $\\sqrt{V}$ correction in Eq.~\\eqref{eq:leading-general} (for fixed $n$) becomes visible for $\\alpha=\\frac{1}{2}$, {\\it i.e.},\\ whenever the difference between $f$ and $\\frac{1}{2}$ is of order $1\/\\sqrt{V}$ or smaller. The constant correction requires a more detailed analysis as it depends on the relative scaling of both $f$ and $n$ around $f=n=\\frac{1}{2}$. Subtle cancelations have to be taken into account as not all sources of corrections, such as $N_{\\rm crit}$, approximation~\\eqref{approx.Digamma}, or the rewriting of the sum as an integral, are equally important; see Appendix~\\ref{app:general-average}. The visualization of the terms in Eq.~\\eqref{eq:leading-general} that include Kronecker deltas, as well as their scaling, is presented in Fig.~\\ref{fig:general-N-visual}.\n\t\nThe variance $(\\Delta S_A)^2_{N}={\\langle S_A^2\\rangle}_{N}-{\\langle S_A\\rangle}^2_{N}$ of the entanglement entropy of pure quantum states in $\\mathcal{H}^{(N)}$ can be found using the result in Eq.~(24) of Ref.~\\cite{bianchi2019typical}. 
When expressed as a sum over the number of particles $N_A$, it takes the form\n\\begin{equation}\n\t(\\Delta S_A)^2_{N}=\\frac{1}{d_N+1}\\Big[\\!\\!\\sum_{N_A=0}^{N}\\!\\!\\varrho_{N_A}\\big(\\varphi^2_{N_A}\\!+\\!\\chi_{N_A}\\big)\\!-\\!\\big(\\!\\!\\sum_{N_A=0}^{N}\\!\\!\\varrho_{N_A}\\;\\varphi_{N_A}\\big)^2\\Big],\n\t\\label{eq:DSA2N}\n\\end{equation}\nwhere $\\varrho_{N_A}$ and $\\varphi_{N_A}$ are given in Eq.~\\eqref{eq:varphi} and $\\chi_{N_A}$ is defined as\n\\begin{align}\n\t\\chi_{N_A}\\!=\\!\\!\n\t&\\begin{cases}\n\t\t\\scriptstyle\\!\\! (d_A\\!+d_B)\\Psi'\\!(d_B+1)-(d_N\\!+1)\\Psi'\\!(d_N+1)-\\frac{(d_A\\!-\\!1)(d_A\\!+2d_B\\!-1)}{4d_B^2},\n\t\t&\\!\\!\\!\\!\\scriptstyle\\!\\! d_A\\leq d_B, \\\\[0.8em]\n\t\t\\scriptstyle\\!\\! (d_A\\!+d_B)\\Psi'\\!(d_A+1)-(d_N\\!+1)\\Psi'\\!(d_N+1)-\\frac{(d_B\\!-\\!1)(d_B\\!+2d_A\\!-1)}{4d_A^2},\n\t\t&\\!\\!\\!\\!\\scriptstyle\\!\\! d_A> d_B.\n\t\\end{cases}\n\\end{align}\nAs earlier, $d_N$, $d_A(N_A)$, $d_B(N-N_A)$ are understood as functions of the particle number and are given by Eqs.~\\eqref{eq:dN} and~\\eqref{eq:dAdB}. In the thermodynamic limit $V\\to\\infty$, at fixed subsystem fraction $f=V_A\/V$ and fixed particle density $n=N\/V$, the variance is exponentially small and its asymptotic scaling can be obtained via the saddle point methods of Appendix \\ref{app:general-average}. In particular, we have\n\\begin{align}\n\t&\\sum_{N_A=0}^{N}\\!\\!\\varrho_{N_A}\\;\\varphi^2_{N_A}\\,-\\,\\Big(\\!\\!\\sum_{N_A=0}^{N}\\!\\!\\varrho_{N_A}\\;\\varphi_{N_A}\\Big)^2\\\\\n\t&\\quad=\\!\\int^{\\infty}_{-\\infty} \\!\\!\\!\\!\\!\\!\\!\\varrho(n_A)\\varphi^2(n_A)dn_A-\\Big(\\int^{\\infty}_{-\\infty} \\!\\!\\!\\!\\!\\!\\!\\varrho(n_A)\\varphi(n_A)dn_A\\Big)^2\\!+\\!o(1)\\nonumber\\\\[.5em]\n\t&\\quad=\\big[f(1\\!-\\!f)-\\frac{1}{2\\pi}\\delta_{f,\\frac{1}{2}}\\big]\\big(\\!\\ln \\frac{n}{1\\!-n}\\big)^2 \\,n(1\\!-\\!n)\\, V+o(V)\\nonumber,\n\\end{align}\nand\n\\begin{equation}\n\t\\sum_{N_A=0}^{N}\\!\\!\\varrho_{N_A}\\chi_{N_A}=\\frac{1}{4}\\delta_{f,\\frac{1}{2}}\\delta_{n,\\frac{1}{2}}+o(1)\\,,\n\\end{equation}\nwhere we have used the fact that, for large dimensions, $d_A\\gg 1$ and $d_B\\gg 1$, $\\chi_{N_A}$ scales as\n\\begin{align}\n\t\\chi_{N_A}=\n\t\\begin{cases}\n\t\t\\frac{d_A}{2d_B}+O(1\/d_B^2)\\,, & d_A< d_B \\\\\n\t\t\\frac{1}{4} +o(1)\\,, & d_A= d_B \\\\\n\t\t\\frac{d_B}{2d_A}+O(1\/d_A^2)\\,, & d_A> d_B \\,.\n\t\\end{cases}\n\t\\label{eq:chi-asympt}\n\\end{align}\nTherefore, the term in brackets in Eq.~\\eqref{eq:DSA2N} is of order $V$, while the denominator $d_N+1$ is exponentially large. Using the Stirling approximation for $d_N$ in Eq.~\\eqref{eq:DSA2N}, we find that\n\\begin{equation}\n\t(\\Delta S_A)^2_{N}= \\alpha\\, V^{\\frac{3}{2}}\\operatorname{e}^{-\\beta V}+ o(\\operatorname{e}^{-\\beta V}),\n\t\\label{eq:DeltaS-N}\n\\end{equation}\nwith\n\\begin{align}\n\t\\alpha=&\\,\\scriptstyle\\sqrt{2\\pi} \\big[f(1-f)-\\frac{1}{2\\pi}\\delta_{f,\\frac{1}{2}}\\big]\\left(\\ln\\!\\frac{n}{1-n}\\right)^2\\,[n(1\\!-\\!n)]^{\\frac{3}{2}}\\, +\\,o(1)\\nonumber\\\\[.5em]\n\t\\beta=&-n\\ln n-(1-n)\\ln(1-n)\\,.\n\\end{align}\nThis means that the average entanglement entropy in Eq.~\\eqref{eq:leading-general} is also the typical entanglement entropy of pure quantum states with $N$ fermions, namely, the overwhelming majority of pure quantum states with $N$ fermions have the entanglement entropy in Eq.~\\eqref{eq:leading-general}.\n\t\n\\subsubsection{Weighted average and variance}\\label{sec:general-mu}\n\t\nHaving computed the average entanglement entropy of pure states with $N$ particles, next we can compute the average over the entire Hilbert space. A subtlety is that the system is in one of the Hilbert spaces $\\mathcal{H}_N$, but we do not know in which one. Therefore, while the distribution of the pure states within a sector of fixed particle number is fixed quantum mechanically, namely, uniform over the unit sphere, we additionally have to specify a classical probability for the particle number $N$. \n\t\nWith this in mind, we can average $\\braket{S_A}_N$ over the sectors with $N$ particles, weighting each sector by its Hilbert space dimension $d_N$ from Eq.~\\eqref{eq:dN}. More generally, we can introduce a weight parameter $w$ and a probability $P_N$ of finding $N$ particles:\n\\begin{align}\n\tP_N=\\frac{1}{Z}d_N \\operatorname{e}^{-w N}.\n\t\\label{eq:PN-binomial}\n\\end{align}\nHere $Z=\\sum_{N=0}^V d_N \\operatorname{e}^{-w N}=(1+\\operatorname{e}^{-w})^V$ normalizes the distribution. The average filling fraction $\\bar{n}$ can be expressed in terms of the weight parameter $w$ as\n\\begin{align}\n\t\\bar{n}=\\sum_{N=0}^V P_N \\frac{N}{V}\\;=\\;\\frac{1}{1+\\operatorname{e}^w}\\,\n\t\\label{eq:nbar}\n\\end{align}\nwith half-filling $\\bar{n}=\\frac{1}{2}$ corresponding to equiweighted sectors, {\\it i.e.},\\ $w=0$. The variance of the filling fraction,\n\\begin{align}\n\t(\\Delta n)^2=\\sum_{N=0}^V P_N\\; (\\frac{N}{V}-\\bar{n})^2\\;=\\;\\frac{\\bar{n}(1-\\bar{n})}{V}\\,\n\t\\label{eq:Dn}\n\\end{align}\ncan be obtained easily by noting that $P_N$ is a binomial distribution.\n\t\nWe calculate the average entanglement entropy at fixed weight parameter $w$,\n\\begin{align}\n\t\\braket{S_A}_w=\\sum_{N=0}^V P_N \\braket{S_A}_N\\,,\n\t\\label{eq:Sw-def}\n\\end{align}\nup to constant order in $V$ by expanding $\\braket{S_A}_N$ around $\\bar{n}$ and then using the known variance $(\\Delta n)^2$. Since $\\braket{S_A}_N$ is analytic as a function of $N$ (for $f<\\frac{1}{2}$) and does not have any discontinuities in its derivatives, it suffices to expand its leading order (linear in $V$) around $\\bar{n}$ as\n\\begin{align}\n\t\\begin{split}\\label{eq:sA-useful}\n\t\ts_A(f,n)&=[(n-1)\\ln(1-n)-n\\ln{n}]f\\\\\n\t\t&=[(\\bar{n}-1)\\ln(1-\\bar{n})-\\bar{n}\\ln{\\bar{n}}]f\\\\\n\t\t&\\quad+f\\ln[(1-\\bar{n})\/\\bar{n}]\\,(n-\\bar{n})-\\frac{f(n-\\bar{n})^2}{2(1-\\bar{n})\\bar{n}}\\\\\n\t\t&\\quad+O\\big((n-\\bar{n})^3\\big),\n\t\\end{split}\n\\end{align}\nand calculate its expectation value with respect to the binomial distribution. Using $\\braket{(n-\\bar{n})^2}=(\\Delta n)^2$ from Eq.~\\eqref{eq:Dn}, we find the constant correction $-\\frac{f}{2}$, which cancels the identical term in Eq.~\\eqref{eq:leading-general}. Terms of order $V^{1\/2}$ and $V^0$ can be directly evaluated at $n=\\bar{n}$, where the binomial distribution is centered, because its finite width on those terms will only contribute corrections of subleading order $o(1)$. 
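For orientation, the step that produces the constant $-\\frac{f}{2}$ is simply the average of the quadratic term in Eq.~\\eqref{eq:sA-useful} (we spell it out here for convenience): multiplying that term by $V$ and using Eq.~\\eqref{eq:Dn} gives\n\\begin{equation}\n\t-\\frac{fV}{2(1-\\bar{n})\\bar{n}}\\braket{(n-\\bar{n})^2}=-\\frac{fV}{2(1-\\bar{n})\\bar{n}}\\,\\frac{\\bar{n}(1-\\bar{n})}{V}=-\\frac{f}{2}\\,.\n\\end{equation}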
Hence, the resulting average is equal to\n\\begin{align}\\label{eq:Page-weighted}\n\t\\begin{split}\n\t\t\\braket{S_A}_{w}&=\\left[(\\bar{n}-1)\\ln(1-\\bar{n})-\\bar{n}\\ln(\\bar{n})\\right] fV\\\\\n\t\t&\\quad-\\sqrt{\\frac{\\bar{n}(1-\\bar{n})}{2\\pi}}\\left|\\ln\\left(\\frac{1-\\bar{n}}{\\bar{n}}\\right)\\right|\\delta_{f,\\frac{1}{2}}\\sqrt{V}\\\\\n\t\t&\\quad+\\frac{\\ln(1-f)}{2}-\\frac{2}{\\pi}\\,\\delta_{f,\\frac{1}{2}}\\delta_{\\bar{n},\\frac{1}{2}}+o(1)\\,,\n\t\\end{split}\n\\end{align}\nwhere $\\bar{n}=1\/(1+e^{w})$ was computed in Eq.~\\eqref{eq:nbar}. A pedagogical derivation of Eq.~\\eqref{eq:Page-weighted} can be found in Appendix~\\ref{app:general-weighted}. Interestingly, Eq.~\\eqref{eq:Page-weighted} can be summarized by the simple relation $\\braket{S_A}_{w} = \\braket{S_A}_{N=\\bar{N}} - \\frac{f}{2}+o(1)$ except at $f=\\bar{n}=\\frac{1}{2}$, where the Kronecker delta from Eq.~\\eqref{eq:leading-general} leads to additional integrals, as explained in Appendix~\\ref{app:general-weighted}.\n\t\nFor $w=0$ with $\\bar{n}=\\frac{1}{2}$, Eq.~\\eqref{eq:Page-weighted} describes the average entanglement entropy of uniformly weighted eigenstates of the number operator (with respect to the Haar measure). This average was computed in Ref.~\\cite{huang_19} as $\\braket{S_A}_{w=0}=f V \\ln{2}+\\frac{\\ln(1-f)}{2}-\\frac{2}{\\pi}\\delta_{f,1\/2}$, which coincides with Eq.~\\eqref{eq:Page-weighted} for $\\bar{n}=\\frac{1}{2}$.\n\t\nSimilarly, one can compute the variance of the weighted entanglement entropy\n\\begin{align}\n\t\\begin{split}\n\t\t(\\Delta S_A)^2_w&=\\sum_{N=0}^V P_N\\, \\langle S_A^2\\rangle_N-\\big(\\sum_{N=0}^V P_N\\, \\langle S_A\\rangle_N\\big)^2\\\\\n\t\t&=\\bar{n}(1-\\bar{n})\\big(\\ln \\frac{\\bar{n}}{1-\\bar{n}}\\big)^2 f V+o(V)\\,.\n\t\t\\label{eq:DeltaS-w}\n\t\\end{split}\n\\end{align}\nNote that, while the variance $(\\Delta S)^2_N$ at a fixed number of particles is exponentially small at large $V$, the weighted variance $(\\Delta S)^2_w$ scales linearly in $V$ because of the $O(V^{-1})$ variance $(\\Delta n)^2$ in the filling fraction. For $f\\neq0$ and $\\bar{n}\\neq 0$, the leading-order term only vanishes at $\\bar{n}=\\frac{1}{2}$. However, we always have $\\lim_{V\\to\\infty}(\\Delta S_A)_w\/\\braket{S_A}_w=0$, {\\it i.e.},\\ the \\emph{relative standard deviation} vanishes in the thermodynamic limit, so that the average entanglement entropy $\\braket{S_A}_w$ and the \\emph{typical} eigenstate entanglement entropy always coincide.\n\t\n\t\n\\section{PURE FERMIONIC GAUSSIAN STATES} \\label{sec:gaussian}\n\t\nIn this section, we define fermionic Gaussian states and calculate the average and variance of the entanglement entropy for this family of states. Following Ref.~\\cite{bianchi2021page}, we do this first for pure fermionic Gaussian states, for which the number of particles is not fixed. Next, we derive new results for fermionic Gaussian states with a fixed number of particles. In both cases we mimic the idea of a uniformly distributed state. This works because in both cases there is a natural action of a compact group and the set is given by a single orbit of this group action. Thus, one can choose the unique Haar measure to generate an ensemble of fermionic Gaussian states.\n\t\nIt may be natural to ask whether the same analysis could also be carried out for bosonic Gaussian states. Unfortunately, the answer is in the negative. 
The ensemble of bosonic Gaussian states is noncompact with unbounded entanglement entropy since the corresponding invariance group is a noncompact one. So any group invariant average would diverge. Moreover, the only bosonic Gaussian state that has a fixed particle number is the vacuum with zero particles and zero entanglement. To circumvent the problem, one could fix the \\emph{average} number of particles. Then, the corresponding manifold would be again compact and one can average over all those Gaussian states (in a similar spirit as in Refs.~\\cite{serafini2007canonical, fukuda2019typical}), but the resulting analysis would be rather different from our approach here. It may be possible to use a duality between bosonic and fermionic entanglement entropy of Gaussian states~\\cite{jonsson2021entanglement} for this, but we will not carry out this analysis here.\n\t\n\\subsection{Definition of fermionic Gaussian states}\n\t\nInstead of starting with pure fermionic Gaussian states, it is easier to begin with mixed Gaussian states because the pure ones can be understood as limits of this definition. We choose a Majorana basis $\\{\\gamma_j\\}_{j=1,\\ldots,2V}$ in the $2^V$-dimensional Hilbert space $\\mathcal{H}$ since the corresponding ensemble is easier to describe. This Majorana basis satisfies the anticommutation relation $\\{\\gamma_j,\\gamma_k\\}=\\delta_{jk}$, meaning that they create a Clifford algebra and can be chosen to be Hermitian, $\\gamma_j^\\dagger=\\gamma_j$. Moreover, it holds that $\\operatorname{Tr}\\left(\\prod_{l=1}^m\\gamma_{j_l}\\right)=0$ with $j_{l}\\in\\{1,\\ldots,V\\}$ and any positive integer $m$ whenever there is a $\\gamma_j$ that does not appear in this product with an even order. Otherwise, it holds that $\\operatorname{Tr}\\left(\\prod_{l=1}^m\\gamma_{j_l}\\right)=\\pm2^{V-m\/2}$, which is up to a factor $2^{-m\/2}$ the dimension of the representation of the Clifford algebra as well as the dimension of the Hilbert space $\\mathcal{H}$.\n\t\nA Gaussian state is then any density operator of the form\n\\begin{equation}\n\t\\hat\\rho(\\gamma)=\\frac{\\exp(-\\sum_{j,k=1}^{2V}q_{jk}\\gamma_j\\gamma_k)}{\\operatorname{Tr} \\exp(-\\sum_{j,k=1}^{2V}q_{jk}\\gamma_j\\gamma_k)}=\\frac{\\exp(-\\gamma^\\dagger Q\\gamma)}{\\operatorname{Tr} \\exp(-\\gamma^\\dagger Q\\gamma)}\n\\end{equation}\nwith the Majorana operator-valued column vector $\\gamma = (\\gamma_1, \\ldots, \\gamma_{2V})^\\dagger$. This form gives the Gaussian states their name. The Hermiticity of $\\hat\\rho(\\gamma)$ implies that the coefficient matrix $Q=\\{q_{jk}\\}_{j,k=1,\\ldots,2V_A}$ needs to be Hermitian, while the anticommutation relations of the Majorana basis allows us to set the real symmetric part to zero. Indeed, due to\n\\begin{equation}\n\t\\begin{split}\n\t\t\\sum_{j,k=1}^{2V}q_{jk}\\gamma_j\\gamma_k&=\\sum_{j=1}^{2V}q_{jj}+\\sum_{1\\leq j\\bar{n}\n\t\\end{cases}\\nonumber\n\\end{align}\nwith $f\\leq \\frac{1}{2}$. Note that, at $w=0$ (corresponding to $\\bar{n}=\\frac{1}{2}$), the leading-order $O(V)$ term vanishes. 
In general, we have $\\lim_{V\\to \\infty} (\\Delta S_A)_{\\mathrm{G},w}\/\\braket{S_A}_{\\mathrm{G},w}=0$, which shows that in the thermodynamic limit the average \\eqref{eq:expansion} also gives the typical value of the entanglement entropy.\n\n\t\n\\section{EXACT RELATION TO RANDOM HAMILTONIANS}\\label{sec:RMT}\n\t\nSo far, we have focused on ensembles of quantum states and computed statistical properties of the entanglement entropy with respect to the following six ensembles: (1a) random states, (2a) random states with fixed total particle number, (3a) weighted averages over random states with fixed total particle number, (1b) random fermionic Gaussian states, (2b) random fermionic Gaussian states with fixed total particle number, and (3b) weighted averages over random fermionic Gaussian states with fixed total particle number. In this section, we shift the focus from ensembles of quantum states to random Hamiltonians, their eigenstates, and their dynamics.\n\t\n\\subsection{Random many-body Hamiltonians}\\label{sec:rig-res-rmt}\n\t\nEnsembles (1a), (2a), and (3a) can be realized using eigenstates (even only ground states) of random Hamiltonians that are traditional random matrices. The ensuing Hamiltonians give an \\emph{exact} correspondence to Page's setting, {\\it i.e.},\\ the averages and variances will agree at all orders (meaning even at finite $V$) when the respective random Hamiltonian satisfies the properties discussed next.\n\t\nWe first consider case (1a), for which the number of particles is not fixed. The state vector in this case explores the entire sphere of the Hilbert space $\\mathcal{H}$. Thus, any random Hamiltonian that creates a Haar-distributed random state vector is suitable. For instance, let us study the random-matrix Hamiltonian\n\\begin{equation}\n\t\\hat{H}_\\text{1a}=\\sum^{2^V}_{\\kappa,\\lambda=1}C_{\\kappa\\lambda}\\ket{v_\\kappa}\\bra{v_\\lambda},\n\\end{equation}\nwhere $\\ket{v_\\lambda}$ is an orthonormal basis of the Hilbert space and $C_{\\kappa\\lambda}$ is a Haar-distributed random matrix. To get Haar-distributed eigenvectors, the diagonalization $C=U^\\dagger E U$ must involve random matrices $U$ drawn from the Haar measure of ${\\rm U}(2^V)$, while the distribution of the eigenvalues appearing in the diagonal matrix $E$ can be arbitrary. A simple, and one of the most common examples of such a distribution for $C$ is given by the GUE~\\cite{mehta2004, Forrester_2010, akemann2011},\n\\begin{equation}\\label{GUE-dist}\n\t\\begin{split}\n\t\tP(\\hat{H}_\\text{1a})=&2^{-2^{V-1}}\\pi^{-2^{2V-1}}\\exp\\left[-\\frac{1}{2}\\sum_{\\kappa,\\lambda=1}^{2^V}|C_{\\kappa\\lambda}|^2\\right]\\\\\n\t\t=&2^{-2^{V-1}}\\pi^{-2^{2V-1}}e^{-\\operatorname{Tr}\\hat{H}_\\text{1a}^2\/2}.\n\t\\end{split}\n\\end{equation}\n\t\nTo relate the Hamiltonian $\\hat{H}_\\text{1a}$ to many-body Hamiltonians, we rewrite it into a polynomial in fermionic creation and annihilation operators\n\\begin{align}\\label{H1a}\n\t\\hat{H}_\\text{1a}\\;\\;&=\\sum_{l=0}^{2V}\\sum_{j_1,\\ldots,j_l=1}^{2V}c^{(l)}_{j_1\\ldots j_l}\\,\\hat{\\xi}_{j_1}\\cdots\\hat{\\xi}_{j_l},\n\\end{align}\nwith $\\{\\hat{\\xi}_j\\}_{j=1,\\ldots,2V} = (\\hat{f}_1, \\dots, \\hat{f}_V, \\hat{f}_1^\\dagger, \\dots, \\hat{f}_V^\\dagger)$. 
The coefficients $c^{(l)}_{j_1,\\ldots,j_{l}}$ satisfy symmetries that reflect the anticommutation relations, $\\{\\hat{f}_k,\\hat{f}_l\\} = \\{\\hat{f}_k^\\dagger,\\hat{f}_l^\\dagger\\}=0$ and $\\{\\hat{f}_k,\\hat{f}_l^\\dagger\\} = \\delta_{kl}$, the Hermiticity of $\\hat{H}_\\text{1a}$, and the fact that in each sum over $c^{(l)}_{j_1,\\ldots,j_{l}}$ there are exactly $l$ operators involved that cannot be reduced to a smaller order of a many-body interaction. Exploiting the unitary matrix $T$ in Eq.~\\eqref{Tdef}, in particular going into a Majorana basis, shows that $\\tilde{c}^{(l)}_{k_1,\\ldots,k_{l}}=\\sum_{j_1,\\ldots,j_l=1}^{2V}c^{(l)}_{j_1,\\ldots,j_{l}}\\prod_{a=1}^lT_{j_a k_a}$ is totally skew symmetric in the indices and is real when $l(l-1)\/2$ is even and imaginary when $l(l-1)\/2$ is odd.\n\t\nThe statistical distribution of the coefficients $c^{(l)}_{j_1,\\ldots,j_{l}}$ is determined by the distribution of matrix $C_{\\mu\\nu}$. The best way to see this is to go into the Majorana basis $\\gamma_1,\\ldots,\\gamma_{2V}$ via relation~\\eqref{gammafrel}. Then, one needs to take into account the normalization $\\gamma_j^2=\\tfrac{1}{2}\\mathbb{1}_{2^V}$ to determine this distribution, which leads to\n\\begin{equation}\n\t\\begin{split}\n\t\tP(\\hat{H}_\\text{1a})&=\\prod_{l=1}^{2V}\\prod_{1\\leq j_1<\\ldots0$ is also possible, but must be largely done by hand, {\\it i.e.},\\ we would organize the eigenstates of a random Hamiltonian based on their particle number and then choose one at random using the statistical weight encoded by $w$.\n\t\nMany-body interacting Hamiltonians studied in nuclear physics~\\cite{monfrench1975, FRENCH1970449, BOHIGAS1971261, BOHIGAS1971383, PhysRev.120.1698, FRENCH19715} are related to these kinds of Hamiltonians. They, as well as the SYK models, are called embedded random matrices~\\cite{RevModPhys.53.385, Guhr1998, Benet:2000cy, Kota2001, Kota2014}. For instance, for a $q$-body Hamiltonian, we set $c^{(l)}_{j_1\\ldots j_l,k_1\\ldots k_l}=0$ for all $l\\neq q$ and choose the above Gaussian distribution for $c^{(l)}_{j_1\\ldots j_l,k_1\\ldots k_l}$. As for case (1a) and SYK$q$ for a fixed $q$, the many-body Hamiltonian may satisfy additional global symmetries so that subleading terms may deviate from our results. However, we expect that a mixture of $q$-body interactions should speed up the convergence to the leading-order result in the thermodynamic limit $V\\to\\infty$.\n\t\n\\subsection{Random quadratic Hamiltonians}\n\t\nCase (1b) for random pure fermionic Gaussian states is obtained from $\\hat{H}_\\text{1a}$ by setting all coefficients $c^{(l)}_{i_1, \\ldots, i_{l}} = 0$ whenever $l\\neq2$ in Eq.~\\eqref{H1a}; the resulting random quadratic Hamiltonian reads\n\\begin{align}\n\t\\hat{H}_\\text{1b}&=\\sum^{2V}_{i,j=1}c^{(2)}_{ij}\\,\\hat{\\xi}_i\\hat{\\xi}_j\\,,\n\\end{align}\nwith coefficients $c^{(2)}_{ij}$, drawn from a probability distribution that depends only on matrix invariants of $TC_{(2)}T^T$ with $C_{(2)}=\\{c^{(2)}_{ij}\\}_{i,j=1,\\ldots,2V}$, such as traces $\\operatorname{Tr} (TC_{(2)}T^T)^{2k}$. Then the invariance under ${\\rm O}(2V)$ is guaranteed, which is needed for the uniformly distributed pure fermionic Gaussian states that are the eigenvectors of this Hamiltonian. 
The Gaussian choice as the distribution of the coefficients $c^{(2)}_{ij}$ is equal to\n\\begin{equation}\n\t\\begin{split}\n\t\tP(\\hat{H}_\\text{1b})=&\\prod_{1\\leq j_1 0$, implying that the asymptotic entanglement entropy is approached from below as the system size increases.\n\t\n\\subsection{Quantum-chaotic quadratic model} \\label{sec:qchaoticquadratic}\n\t\nNext, we focus on a quadratic model, namely, a model whose Hamiltonian is bilinear in fermionic creation and annihilation operators. We explore how well the results for fermionic Gaussian states from Sec.~\\ref{sec:gaussian} predict the behavior of the entanglement entropy in eigenstates of a particle-number-conserving quadratic model that exhibits {\\it single-particle} quantum chaos. By single-particle quantum chaos we mean that the statistical properties of the single-particle energy spectrum are described by the Wigner-Dyson statistics of random matrix theory. Hence, we refer to this model as a quantum-chaotic quadratic model~\\cite{lydzba2021entanglement}. This is to be contrasted to the model in Sec.~\\ref{sec:qchaoticinteracting}, which exhibits {\\it many-body} quantum chaos, and to which we referred to as a quantum-chaotic interacting model.\n\t\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figure15}\n\t\\caption{Average entanglement entropy density $\\bar S\/[(V\/2)\\ln 2]$ in the 3D Anderson model~(\\ref{def_H_Anderson}) at $\\bar n=\\frac{1}{2}$. Main panel: plot of $\\bar S\/[(V\/2)\\ln 2]$ versus $f$ at disorder strength $W=1$, in a cubic lattice with $V=8000$ sites (symbols). The results are obtained averaging over 100 randomly selected many-body eigenstates and 10 Hamiltonian realizations. The solid line is the corresponding thermodynamic limit result for fermionic Gaussian states given by $\\langle S_A \\rangle_{{\\rm G},w=0}$ in Eq.~(\\ref{eq:expansion}). Inset: plot of $\\delta s_{{\\rm G},w=0} = (\\langle S_A \\rangle_{{\\rm G},w=0} - \\bar S)\/[(V\/2) \\ln 2]$ versus $1\/\\sqrt{V}$ at $f=\\frac{1}{2}$, for $W=1$ and 3, where $\\langle S_A \\rangle_{{\\rm G},w=0}$ corresponds to the fermionic Gaussian states [Eq.~(\\ref{eq:sum-chem-gauss})] at $w=0$ and the same $V$ as $\\bar S$. The results for $\\bar S$ are obtained averaging over $10^2$ to $10^4$ randomly selected many-body eigenstates and over 5 to 500 Hamiltonian realizations. Lines are linear fits $a_0 + a_1\/\\sqrt{V}$ to the results for $V \\geq 2000$. We get $a_0 = 2.4 \\times 10^{-4}$ and $a_1 = 0.03$ for $W=1$ (solid line), and $a_0 = 3.0 \\times 10^{-4}$ and $a_1 = 0.10$ for $W=3$ (dashed line). The numerical results for $\\bar S$ are from Ref.~\\cite{lydzba2021entanglement}.} \\label{fig:S_Anderson_scaling}\n\\end{figure}\n\t\nA well-known quadratic model that exhibits single-particle quantum chaos is the 3D Anderson model below the localization transition. The Hamiltonian of this model reads\n\\begin{equation} \\label{def_H_Anderson}\n\t\\hat H_{\\rm And} = -t \\sum_{\\langle i,j\\rangle} (\\hat f_i^\\dagger \\hat f^{}_j + \\hat f_j^\\dagger \\hat f^{}_i) + \\frac{W}{2}\\sum_i \\varepsilon_i \\hat n_i \\, , \n\\end{equation}\nwhere the first sum runs over nearest-neighbors sites on a cubic lattice. The operator $\\hat f_j^\\dagger$ ($\\hat f^{}_j$) creates (annihilates) a spinless fermion at site $j$, and $\\hat n_j = \\hat f_j^\\dagger \\hat f^{}_j$ is the site occupation operator. 
The operators $\\hat f_j^\\dagger$ and $\\hat f^{}_j$ satisfy the standard anticommutation relations $\\{\\hat{f}_l,\\hat{f}_k\\} = \\{\\hat{f}_l^\\dagger,\\hat{f}_k^\\dagger\\} = 0$ and $\\{\\hat{f}_l,\\hat{f}_k^\\dagger\\} = \\delta_{lk}$. The single-site occupation energies $\\varepsilon_i \\in [-1,1]$ are independently and identically distributed random numbers drawn from a box distribution. The 3D Anderson model exhibits a delocalization-localization transition at the critical disorder $W_c \\approx 16.5$ (see, {\\it e.g.},\\ Refs.~\\cite{kramer_mackinnon_93, markos_06, evers_mirlin_08, suntajs_prosen_21} for reviews). Our focus here is on disorder strengths well below this transition, $W \\ll W_c$. We stress that, when referring to single-particle quantum chaos in the context of the 3D Anderson model~\\eqref{def_H_Anderson}, we have in mind the fixed Hilbert space $\\mathcal{H}_1$ as the model of a single particle.\n\t\nEven though it has been known for decades that the single-particle spectral properties of the 3D Anderson model in the delocalized regime are well described by the Wigner-Dyson statistics~\\cite{altshuler_shklovskii_86, altshuler_zharekeshev_88, shklovskii_shapiro_93}, the entanglement entropy of energy eigenstates was studied only recently~\\cite{lydzba2021entanglement}. The latter study showed that the volume-law contribution of typical many-body eigenstates is accurately described by the volume-law term of the asymptotic expression in Eq.~(\\ref{eq:thermodynamic-limit}) for $n=\\frac{1}{2}$, which is the same as that in Eq.~(\\ref{eq:expansion}) for $\\bar n=\\frac{1}{2}$. This result suggests that the leading (volume-law) term in the eigenstate entanglement entropy of the 3D Anderson model deep in the delocalized regime is universal. In the main panel of Fig.~\\ref{fig:S_Anderson_scaling}, we plot the average eigenstate entanglement entropy density $\\bar S\/[(V\/2)\\ln 2]$ of randomly selected eigenstates as a function of the subsystem fraction $f$. The results show remarkable agreement with the corresponding thermodynamic limit expression for the weighted average entanglement entropy over fermionic Gaussian states $\\langle S_A\\rangle_{{\\rm G},w=0}$ in Eq.~(\\ref{eq:expansion}).\n\t\nIn spite of the latter agreement, we note that the average entanglement entropy over fermionic Gaussian states does not describe the first subleading term of the average entanglement entropy in the 3D Anderson model. As shown in the inset of Fig.~\\ref{fig:S_Anderson_scaling}, the first subleading term in the latter model scales $\\propto \\sqrt{V}$ at $f=\\frac{1}{2}$. No such term appears in $\\langle S_A\\rangle_{{\\rm G},w=0}$ in Eq.~(\\ref{eq:expansion}). The fact that, for the 3D Anderson model, the subleading $O(\\sqrt{V})$ term is not described by Eq.~(\\ref{eq:expansion}) is in stark contrast to what we found in Sec.~\\ref{sec:qchaoticinteracting} for a quantum-chaotic {\\it interacting} model. In the latter case, subleading terms that are $O(1)$ or greater in the physical model are properly described by the average $\\langle S_A\\rangle_N$ in Eq.~(\\ref{eq:Scenter}). Hence, the origin of the $O(\\sqrt{V})$ contribution to the entanglement entropy of eigenstates in the 3D Anderson model remains an open question. 
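For readers who wish to reproduce numbers of this kind, the following minimal sketch outlines the standard correlation-matrix procedure for eigenstates of a quadratic Hamiltonian, applied to the 3D Anderson model~\\eqref{def_H_Anderson}; it is our own illustration, and the lattice size, the choice of bipartition, and the single disorder realization are deliberately much smaller and simpler than in Ref.~\\cite{lydzba2021entanglement}:\n\\begin{verbatim}\n# Sketch: entanglement entropy of a random many-body eigenstate of the\n# 3D Anderson model from the one-body correlation matrix of a Slater determinant.\nimport numpy as np\n\nrng = np.random.default_rng(2)\nL, W, t = 6, 1.0, 1.0            # L^3 cubic lattice, disorder W, hopping t\nV = L**3\n\ndef site(x, y, z):\n    return (x * L + y) * L + z\n\n# single-particle Anderson Hamiltonian with periodic boundary conditions\nH = np.zeros((V, V))\nfor x in range(L):\n    for y in range(L):\n        for z in range(L):\n            i = site(x, y, z)\n            H[i, i] = 0.5 * W * rng.uniform(-1.0, 1.0)\n            for j in (site((x + 1) % L, y, z),\n                      site(x, (y + 1) % L, z),\n                      site(x, y, (z + 1) % L)):\n                H[i, j] = H[j, i] = -t\n\neps, phi = np.linalg.eigh(H)     # single-particle orbitals phi[:, k]\n\n# a random many-body eigenstate at half filling: occupy V\/2 random orbitals\nocc = rng.choice(V, size=V \/\/ 2, replace=False)\n\n# subsystem A: first half of the lattice along x; C_ij is the one-body correlator\nVA = V \/\/ 2\nC = phi[:VA, occ] @ phi[:VA, occ].conj().T\n\nlam = np.clip(np.linalg.eigvalsh(C), 1e-12, 1 - 1e-12)\nS_A = np.sum(-lam * np.log(lam) - (1 - lam) * np.log(1 - lam))\nprint('S_A \/ [(V\/2) ln 2] =', S_A \/ (0.5 * V * np.log(2)))\n\\end{verbatim}\nAveraging such single-eigenstate values over many eigenstates, disorder realizations, and much larger lattices yields data of the type shown in Fig.~\\ref{fig:S_Anderson_scaling}, including the $O(\\sqrt{V})$ subleading term just discussed.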
Such a contribution is not present in our analytical calculations of the averages over Gaussian states.\n\t\n\\subsection{Translationally invariant noninteracting fermions} \\label{sec:translational}\n\t\nNext, we consider a paradigmatic quadratic model that does not exhibit quantum chaos at the single-particle level. Namely, translationally invariant noninteracting fermions, for which the Hamiltonian is a sum of hopping terms over nearest-neighbor sites [the first term in Eq.~\\eqref{def_H_Anderson}]. For simplicity, we focus on the 1D case\n\\begin{equation} \\label{def_H_Tinvariant}\n\t\\hat H_\\text{T}^\\text{1D} = - \\sum_{i=1}^{V} \\left( \\hat f_i^\\dagger \\hat f^{}_{i+1} + \\hat f_{i+1}^\\dagger \\hat f^{}_{i} \\right) ,\n\\end{equation}\nwith periodic boundary conditions, $\\hat f^{}_{V+1} \\equiv \\hat f^{}_1$. The single-particle eigenenergies of the model in Eq.~(\\ref{def_H_Tinvariant}) are given by the well-known expression $\\epsilon_n = -2\\cos(2\\pi n\/V)$ with $n = 0, 1, ..., V-1$, which makes apparent that the statistical properties of the single-particle spectrum are not described by the Wigner-Dyson statistics.\n\t\nThe average eigenstate entanglement entropy of the model in Eq.~(\\ref{def_H_Tinvariant}) was studied in Ref.~\\cite{vidmar2017entanglement} (before the universal predictions for the quantum-chaotic quadratic models and the fermionic Gaussian states were derived). The numerical calculations in Ref.~\\cite{vidmar2017entanglement} were carried out by averaging the entanglement entropy over the full set of $2^V$ many-body eigenstates. Remarkably, the numerical results were found to converge rapidly to the thermodynamic limit result, as shown for the case of $f=\\frac{1}{2}$ in the inset of Fig.~\\ref{fig:S_Tinvariant_scaling}. Thanks to that scaling, we find the volume-law coefficient $s^\\infty_\\text{T}$ of the average entanglement entropy $\\bar S_\\text{T} = s^\\infty_\\text{T} V_A \\ln2$ at $f=\\frac{1}{2}$ to high numerical accuracy, $s^\\infty_\\text{T} = 0.5378(1)$, which is consistent with the result reported in Ref.~\\cite{vidmar2017entanglement}. This is to be contrasted to the volume-law coefficient $s^\\infty_{{\\rm G},w=0}$ of fermionic Gaussian states $\\langle S_A\\rangle_{{\\rm G},w=0} = s^\\infty_{{\\rm G},w=0} V_A \\ln2$ from Eq.~(\\ref{eq:expansion}), which yields $s^\\infty_{{\\rm G},w=0} = 0.5573$. We then see that $s^\\infty_\\text{T}$ and $s^\\infty_{{\\rm G},w=0}$ are close but different. The full curve for $S_\\text{T}$ as a function of $f$, for $V=36$, is shown in Fig.~\\ref{fig:S_Tinvariant_scaling} together with the full curve for $\\langle S_A\\rangle_{{\\rm G},w=0}$ from Eq.~(\\ref{eq:expansion}). They are clearly different and, given the abovementioned fast convergence of the numerical results with $V$, we expect the differences to remain in the thermodynamic limit. The exact analytical form of the $\\bar S_\\text{T}(f)$ curve for translationally invariant free fermions remains elusive, but tight bounds have already been calculated~\\cite{hackl2019average}.\n\t\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figure16}\n\t\\caption{Average entanglement entropy density $\\bar S\/[(V\/2) \\ln 2]$ of translationally invariant noninteracting fermions in a one-dimensional lattice, described by the Hamiltonian in Eq.~(\\ref{def_H_Tinvariant}). Main panel: plot of $\\bar S\/[(V\/2) \\ln 2]$ versus $f$ in the lattice with $V=36$ sites. 
The results are obtained by averaging over all $2^V$ many-body eigenstates. The solid line is the corresponding thermodynamic limit result for fermionic Gaussian states given by $\\langle S_A \\rangle_{{\\rm G},w=0}$ in Eq.~\\eqref{eq:expansion}. Inset: plot of $\\delta s_{\\rm T} = (\\bar S_{\\rm T} - \\bar S)\/([V\/2] \\ln 2)$ versus $1\/V$ at $f=\\frac{1}{2}$, where $\\bar S_{\\rm T}\/([V\/2] \\ln 2) = 0.5378$. The solid line shows the function $a\/V^\\zeta$, with $a = 0.23$ and $\\zeta=1.96$. The numerical results for $\\bar S$ are from Ref.~\\cite{vidmar2017entanglement}.}\\label{fig:S_Tinvariant_scaling}\n\\end{figure}\n\t\nWe conclude by noting that, for the translationally invariant quantum-chaotic interacting model studied in Sec.~\\ref{sec:qchaoticinteracting}, the average eigenstate entanglement entropy is accurately described by the corresponding entanglement entropy of general pure states. The role of Hamiltonian symmetries in the average entanglement entropy of energy eigenstates in quantum-chaotic interacting and quantum-chaotic quadratic models remains an important question to be explored in future studies.\n\n\\begin{table*}[!t]\n\t\\renewcommand{\\arraystretch}{1.7}\n\t\\hspace*{-0.6cm}\\begin{center}\\begin{tabular}{l||ll|ll}\n\t\t\t& \\textbf{(a) General pure states} & & \\multicolumn{2}{l}{\\textbf{(b) Pure fermionic Gaussian states}} \\\\\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{\\shortstack[l]{(1) no\\\\ particle\\\\ number}} & $\\braket{S_A}=a V\\!-\\!b+O(2^{-V})$ \\& \\textbf{exact} & $\\rightarrow$ \\eqref{eq:Page-therm}, Fig.~\\ref{fig:Page-discon}, \\cite{page1993average} & $\\braket{S_A}_{\\mathrm{G}}=a V\\!+\\!b\\!+\\!O(\\frac{1}{V})$ \\& \\textbf{exact} & $\\rightarrow$ \\eqref{eq:Gaussian-average}, Fig.~\\ref{fig:Page-Gaussian}, \\cite{bianchi2021page}\\\\\n\t\t\t& $(\\Delta S_A)^2=\\alpha e^{-\\beta V}+o(e^{-\\beta V})$ & $\\rightarrow$ \\eqref{eq:variance-page}, \\cite{vivo_pato_16} & $(\\Delta S_A)^2_{\\mathrm{G}}=a+o(1)$ & $\\rightarrow$ \\eqref{eq:Gaussian-variance}, \\cite{bianchi2021page}\\\\\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{\\shortstack[l]{(2) fixed\\\\particle\\\\ number}} & $\\braket{S_A}_N=a V\\!-\\!b\\sqrt{V}\\!-\\!c\\!+\\!o(1)$ & $\\rightarrow$ \\eqref{eq:leading-general}, Fig.~\\ref{fig:general-N-visual} & $\\braket{S_A}_{\\mathrm{G},N}=aV\\!-\\!\\frac{b}{V}\\!+\\!O(\\frac{1}{V^2})$ \\& \\textbf{exact} & $\\rightarrow$ \\eqref{eq:thermodynamic-limit}\\\\\n\t\t\t& $(\\Delta S_A)^2_N=\\alpha\\, V^{\\frac{3}{2}}\\operatorname{e}^{-\\beta V}$ & $\\rightarrow$ \\eqref{eq:DeltaS-N} & $(\\Delta S_A)^2_{\\mathrm{G},N}=a\\!+\\!o(1)$ & $\\rightarrow$ \\eqref{eq:variance-Gaussian}, Fig.~\\ref{fig:variance}\\\\\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{\\shortstack[l]{(3) fixed\\\\ weight}} & $\\braket{S_A}_w=aV\\!+\\!b\\!+\\!c\\sqrt{V}\\!+\\!o(1)$ & $\\rightarrow$ \\eqref{eq:Page-weighted} & $\\braket{S_A}_{\\mathrm{G},w}=\\!a V\\!+\\!b\\!+\\!\\tfrac{c}{\\sqrt{V}}\\!+\\!\\tfrac{d}{V}\\!+\\!o(\\tfrac{1}{V})$ & $\\rightarrow$ \\eqref{eq:expansion}, Fig.~\\ref{fig:Gaussian-mu-visual}\\\\\n\t\t\t& $(\\Delta S_A)^2_w= a V+o(V)$ & $\\rightarrow$ \\eqref{eq:DeltaS-w} & $(\\Delta S_A)^2_{\\mathrm{G},w}=a V+o(V)$ & $\\rightarrow$ \\eqref{eq:variance-Gw}\n\t\\end{tabular}\\end{center}\n\t\\caption{Overview of the results discussed in this tutorial. 
We list the main results, indicate up to which order in $V$ we derived the respective expressions (and if there exists an exact formula), and where the respective formulas can be found (equations, figures, references). Most results for fixed particle number are new, but if special cases or the leading order term were already known before, we cite the relevant works after the equation in the main text.}\n\t\\label{tab:results}\n\\end{table*}\n\t\n\\section{SUMMARY AND OUTLOOK}\n\t\nIn this section, we briefly summarize the key results discussed in this tutorial, and give an outlook of where we envision the methods introduced to be applicable. We also mention some open questions in the context of the entanglement entropy of typical pure states.\n\t\n\\subsection{Summary}\n\t\nWe provided a pedagogical introduction to the current understanding of the behavior of the entanglement entropy of pure quantum states. We derived analytical expressions for the average entanglement entropy of general and Gaussian states, and considered states with and without a fixed number of particles. A comprehensive summary of the results discussed can be found in Table~\\ref{tab:results}, where we contrast results for: (1) arbitrary particle number, (2) fixed particle number $N$ and (3) fixed weight parameter $w$ for both (a) general pure states and (b) Gaussian states. This yields the six state ensembles (1a) through (3b).\n\t\nFor both Gaussian and general pure states, the leading-order behavior $\\braket{S_A}$ at half-filling $N=V\/2$ coincides with the full average without fixing the total particle number, while the next-to-leading-order terms differ. For general pure states, we confirmed an additional contribution proportional to $\\sqrt{V}$ at $f=\\frac{1}{2}$ in Eq.~\\eqref{eq:leading-general}, previously found in Ref.~\\cite{vidmar2017entanglement2}. For Gaussian states, we derived the exact formula, which does not contain such a term and has a next-to-leading-order term of order $1\/V$ [Eq.~\\eqref{eq:Gaussian-expansion}]. However, we did find a contribution of order $1\/\\sqrt{V}$ in the asymptotic average $\\braket{S_A}_{\\mathrm{G},w}$ at fixed $w$ with $f=\\bar{n}$, {\\it i.e.},\\ whenever the subsystem fraction $f$ equals the average filling ratio $\\bar{n}=\\braket{N\/V}=1\/(1+e^{w})$.\n\t\nWe traced back these contributions to the nonanalytic behavior of the average entanglement entropy as a function of the subsystem fraction $f$ and the filling ratio $n$. In the case of Gaussian states, we identified the additional particle-subsystem symmetry $n\\leftrightarrow f$, which is responsible for the $1\/\\sqrt{V}$ term. From a mathematical perspective, the origin of the $\\sqrt{V}$ term in $\\braket{S_A}_N$ is therefore the same as that of the $1\/\\sqrt{V}$ term in $\\braket{S_A}_{\\mathrm{G},w}$, namely, both calculations involve the average of a nonanalytic function with respect to an approximately Gaussian statistical distribution. Square root powers of $V$ appear exactly when the mean of the Gaussian lies in a neighborhood of the nonanalyticity, {\\it i.e.},\\ there is a jump in one of the function's derivatives.\n\t\nFinally, we connected the results obtained for the average entanglement entropy in the six ensembles of states mentioned before to the average entanglement entropy in eigenstates of specific random matrices and of physical Hamiltonians. 
Maybe the most surprising result in the context of quantum-chaotic interacting Hamiltonians is that not only does the leading term in the average agree with the corresponding ensemble average, but also subleading terms that are $O(1)$ or larger in the volume, {\\it e.g.},\\ $O(\\sqrt{V})$. Why this is so is a question that deserves to be further explored. Equally intriguing is to understand why the same is not true in the case of quantum-chaotic quadratic Hamiltonians.\n\t\n\\subsection{Outlook}\n\t\nLooking forward, an important question is how general are the methods and results discussed here. We focused on fermionic systems, for which we can compare general pure states with Gaussian pure states, and unveiled the effect of fixing the total particle number. Our results for general pure states apply equally to hard-core bosons and spin-$\\frac{1}{2}$ systems. In the latter, the total magnetization plays the role that the total particle number plays in fermionic and hard-core boson models.\n\t\n\\subsubsection{Typical eigenstate entanglement entropy as a diagnostic of quantum chaos and integrability}\n\t\nAs mentioned in the Introduction, a novel picture that the recent numerical studies such as those discussed in Sec.~\\ref{sec:relphysham} have started to consolidate is that typical many-body eigenstates of quantum-chaotic interacting Hamiltonians have similar entanglement properties as typical pure states in the Hilbert space. In parallel, typical many-body eigenstates of quantum-chaotic quadratic Hamiltonians have similar entanglement properties as typical Gaussian pure states. We quantified how similar they are by showing that typical eigenstates of a specific quantum-chaotic interacting Hamiltonian exhibit $O(1)$ and greater terms in the entanglement entropy that are the same than in typical pure states in the Hilbert space. For typical many-body eigenstates of quantum-chaotic quadratic Hamiltonians, we showed that the $O(V_A)$ term is the same as in typical Gaussian pure states. These statements (for $V_A=fV\\leq V\/2$) are true independently of whether one deals with states in which the number of particles is fixed or not.\n\t\nIn the context of Hamiltonians that do not exhibit many-body quantum chaos, namely, in which the many-body level spacing distributions are not described by the Wigner surmise~\\cite{d2016quantum}, we showed that typical many-body energy eigenstates of translationally invariant noninteracting fermions exhibit an $O(V_A)$ term that behaves qualitatively similar (but is not equal) to that obtained for typical Gaussian pure states, namely, the prefactor of such a term is a function of the subsystem fraction $f$. The same behavior was found in Ref.~\\cite{leblond_mallayya_19} for the typical entanglement entropy of many-body eigenstates of the integrable spin-$\\frac{1}{2}$ XXZ chain. This is fundamentally different from what happens in typical many-body eigenstates of quantum-chaotic interacting Hamiltonians, in which the prefactor is maximal (it depends only on the filling $n$) as in typical pure states.\n\t\nHence, as conjectured in Ref.~\\cite{leblond_mallayya_19}, the entanglement entropy of typical many-body energy eigenstates can be used to distinguish models that exhibit many-body quantum chaos (whose level spacing distributions are described by the Wigner surmise, and are expected to thermalize when taken far from equilibrium~\\cite{d2016quantum}) from those that do not. 
This is a welcome addition to the toolbox for identifying quantum chaos as it relies on the properties of the eigenstates as opposed to the properties of the eigenenergies. Other entanglement-based diagnostics of quantum chaos and integrability have been proposed in recent years, among them the operator entanglement growth~\\cite{prosen07, alba19, alba21}; the diagonal entropy~\\cite{santos_11, rigol_16}, the mutual information scrambling~\\cite{alba19}, and entanglement revivals~\\cite{modak20} after quantum quenches; the tripartite operator mutual information~\\cite{hosur16, ryu21}; and the entanglement negativity between two subsystems in a tripartition of many-body energy eigenstates~\\cite{grover20}.\n\t\nIt is important to emphasize that an advantage of using the entanglement properties of energy eigenstates, instead of the properties of the eigenenergies, is that one does not need to resolve all the symmetries of the model nor does one need to do an unfolding of the spectrum, both of which are of paramount importance when identifying quantum chaos using the eigenenergies, as discussed in Sec.~\\ref{sec:localspec}. In addition, in comparison to some of the entanglement diagnostics that were mentioned above, one does not need to study dynamics. Further work is needed on interacting integrable models to establish whether the leading term of the entanglement entropy of typical many-body energy eigenstates is universal or not, and to understand the nature of the subleading terms. So far, results are available only for the integrable spin-$\\frac{1}{2}$ XXZ chain~\\cite{leblond_mallayya_19}.\n\t\n\\subsubsection{Beyond qubit-based systems}\n\t\nThe analytical tools introduced and explained in this tutorial can be used beyond the fermionic systems we studied (and beyond the spin-$\\frac{1}{2}$ and hard-core boson systems we mentioned), and facilitate the study of bosonic systems with a fixed particle number. To be concrete, a bosonic subsystem with $V_A$ out of $V$ bosonic modes and total particle number $N$ can be treated analogously to Eq.~\\eqref{eq:Scenter}, but with dimensions respecting the bosonic commutation statistics, {\\it i.e.},\\ \n\t\\begin{align}\\label{eq:boson-dim}\n\t\td_A(N_A)&=\\frac{(N_A+V_A-1)!}{N_A!(V_A-1)!}\\,,\\\\\n\t\td_B(N-N_A)&=\\frac{(N-N_A+V-V_A-1)!}{(N-N_A)!(V-V_A-1)!}\\,,\\\\\n\t\td_N&=\\frac{(N+V-1)!}{N!(V-1)!}\\,,\n\t\\end{align}\nwhich follow from the combinatorics of sampling with replacement without regard to order; {\\it e.g.},\\ for $d_A$, we ask how many ways there are to distribute $N_A$ indistinguishable particles over $V_A$ sites (where each site can hold arbitrarily many particles). Again, it holds that ${\\sum}_{N_A=0}^N d_A(N_A)d_B(N-N_A)=d_N$.\n\t\nFollowing Page's approach, we again choose a uniformly distributed random state vector in the Hilbert space $\\mathcal{H}_N$. Thus, the invariance of the state under the unitary group ${\\rm U}(d_N)$, now with a different dimension $d_N$, still applies. Therefore, we can follow the same strategy as in Sec.~\\ref{sec:page-fixedN}; in particular, we can exploit Eq.~\\eqref{eq:Scenter} with dimensions~\\eqref{eq:boson-dim}. This yields, in the thermodynamic limit with fixed $f\\in(0,\\frac{1}{2})$ and $n\\in(0,\\infty)$,\\footnote{We evaluate Eq.~\\eqref{eq:average-int}, where $\\varrho(n_A)$ and $\\varphi(n_A)$ slightly change from expanding Eq.~\\eqref{eq:boson-dim} via a saddle point approximation. 
This yields the normal distribution $\\varrho(n_A)$, with mean $\\bar{n}_A=fn$ and variance $\\sigma^2=(1-f)f(1+n)n\/V$, and $\\varphi(n_A)$ in Eq.~\\eqref{eq:psi} becomes\n\\begin{align*}\n\t\\begin{split}\n\t\t\\varphi(n_A)&=[n_A\\ln(n_A)+f\\ln(f)+n\\ln[(1+n)\/n]\\\\\n\t\t&\\quad+\\ln(1+n)-(f+n_A)\\ln(f+n_A)]V\\\\\n\t\t&\\quad+ \\tfrac{1}{2}\\ln\\left(\\tfrac{n_A (f+n_A)}{f(1+n)n}\\right)-\\tfrac{1}{2}\\delta_{f,\\tfrac{1}{2}}\\delta_{n_A,n\/2}+o(1)\n\t\\end{split}\n\\end{align*}\nfor $n_A\\geq n_{\\rm crit}$ with $n_{\\rm crit}=N_{\\rm crit}$ again given by $d_A(N_{\\rm crit})=d_B(N-N_{\\rm crit})$. For $n_A\\leq n_{\\rm crit}$ one needs to apply the symmetry $(n_A,f)\\leftrightarrow(n-n_A,1-f)$. The summand at $N_A=N\/2$ reflected by the term $\\delta_{n_A,n\/2}$ has to be taken as it is and is not integrated. Nevertheless, one can check numerically that it yields a term of order $1\/\\sqrt{V}$ and is thus subleading in Eq.~\\eqref{eq:bosonic-results}.}\n\\begin{align}\n\t\\begin{split}\\label{eq:bosonic-results}\n\t\t\\braket{S_A}_{\\mathrm{bos},N}&=fV[n\\ln(1+n^{-1})+\\ln(1+n)]\\\\\n\t\t&\\quad+\\sqrt{V}\\sqrt{\\frac{n+n^2}{8\\pi}}\\ln(1+n^{-1})\\,\\delta_{f,\\frac{1}{2}}\\\\\n\t\t&\\quad+\\frac{f+\\ln(1-f)}{2}+o(1),\n\t\\end{split}\\\\\n\t\\braket{S_A}_{\\mathrm{bos},w}&=\\braket{S_A}_{\\mathrm{bos},N=\\bar{n}V}-\\frac{f}{2}+o(1)\\,,\n\\end{align}\nwhere the weighted average is only meaningful for $w>0$, for which $\\bar{n}=1\/(e^w-1)$. Note that there is no particle-hole symmetry for bosons, and that $n=N\/V$ can be arbitrarily large.\n\t\nOther natural generalizations are spin-$s$ systems with $s>\\frac{1}{2}$ and systems consisting of distinguishable particles. These cases can also be studied using the methods discussed in this tutorial, after carrying out the respective combinatorics of the Hilbert space dimensions $d_A$ and $d_B$. Also, systems with global symmetries such as time-reversal invariance or chirality can be considered, which have an impact on the respective symmetry group so that the Hilbert space is not invariant anymore under ${\\rm U}(d_N)$ but only under ${\\rm O}(d_N)$ or ${\\rm U}(d_{N_1})\\times{\\rm U}(d_{N_2})$. The leading terms are expected to be the same, as the respective random matrix ensembles share the same level densities. Deviations are expected to occur in subleading terms.\n\t\n\\subsubsection{Other ensembles and entanglement measures}\n\t\nWe focused on ensembles of states, general and Gaussian pure states for arbitrary and fixed particle numbers, which mirror the entanglement properties of typical (``infinite-temperature'') eigenstates of physical lattice models. It is also possible to construct ensembles of pure states in which one fixes the energy, which mirror the entanglement properties of ``finite-temperature'' eigenstates of physical lattice models. Steps in this direction have already been taken using different tools; see, {\\it e.g.},\\ Refs.~\\cite{Deutsch_2010, nakagawa_watanabe_18, Fujita:2018wtr, lu_grover_19, murthy_19, bianchi2019typical}. In the context the scaling of the eigenstate entanglement entropy at different energy densities (``temperatures''), let us also emphasize that all the average entanglement entropies computed in this tutorial exhibited a leading volume-law term, namely, the leading term in the average entropies scales with the number of modes $V$ and is thus agnostic to the individual shape or area of the subsystem. 
In contrast, as discussed in the Introduction, it is well known that low-energy states of many physical systems of interest exhibit a leading area law term. An important open question is whether one can define ensembles of pure states that exhibit leading terms in the entanglement entropy that are area law.\n\t\nInstead of considering the von Neumann entanglement entropy, one can also consider other quantities that are defined with respect to the invariant spectrum of the reduced density operator $\\hat\\rho_A=\\mathrm{Tr}_{\\mathcal{H}_B}\\ket{\\psi}\\bra{\\psi}$ of a pure state $\\ket{\\psi}$. Such quantities include the well-known Renyi entropies $S^{(n)}_A(\\ket{\\psi})$, and the eigenstate capacity~\\cite{de2019aspects}. We focused on the von Neumann entropy, as it is arguably the most prominent measure of bipartite entanglement. Nonetheless, we expect that our findings can also be extended to the aforementioned quantities; see, {\\it e.g.},\\ Refs.~\\cite{liu_chen_18, pengfei_chunxiao_20, lydzba2021entanglement, ulcakar_vidmar_22} for studies of Renyi entropies and Refs.~\\cite{bhattacharjee2021eigenstate, huang2021second} for studies of the eigenstate capacity.\n\t\nIt would also be interesting to explore multipartite entanglement measures for different ensembles of pure states. This will likely require new techniques, and it is not clear what the most suitable measure is. The latter question is the subject of ongoing research.\n\t\n\\section*{Acknowledgments}\nWe would like to thank Pietro Don\\`a, Peter Forrester, Patrycja \\L yd\\.{z}ba, Lorenzo Piroli, and Nicholas Witte for inspiring discussions. E.B.~acknowledges support from the National Science Foundation, Grant No.~PHY-1806428, and from the John Templeton Foundation via the ID 61466 grant, as part of the \"Quantum Information Structure of Spacetime (QISS)\" project (\\hyperlink{http:\/\/www.qiss.fr}{qiss.fr}). L.H.~gratefully acknowledges support from the Alexander von Humboldt Foundation. M.K.~acknowledges support from the Australian Research Council (ARC) under grant No.~DP210102887. M.R.~acknowledges support from the National Science Foundation under Grant No.~2012145. L.V.~acknowledges support from the Slovenian Research Agency (ARRS), Research core fundings Grants No.~P1-0044 and No.~J1-1696. L.H.~and M.K.~are also grateful to the MATRIX Institute in Creswick for hosting the online research programme and workshop ``Structured Random Matrices Downunder'' (26 July\u201313 August 2021).\n\t\n\\onecolumngrid\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe goal of this presentation at the Hot Quarks 2006 workshop was to attempt to develop a consistent understanding of the \nterm ``sQGP'' and the physics conclusions that result. The first step in achieving such a\ngoal is to detail what the letter ``s'' actually stands for and what is means. \nDoes the terminology change from quark gluon plasma (QGP) to sQGP alphabetically symbolize an\nimportant paradigm shift in the understanding of high temperature nuclear matter?\n\nFirst, we detail what various people and collaborations have stated that ``sQGP'' means.\nM. 
Gyulassy explained: \n``The name 'sQGP' (for strongly interacting Quark Gluon Plasma) helps to distinguish that matter from ordinary hadronic resonance matter (as described for example by RQMD) and\nalso from the original 1975 asymptotically free QGP (which I dubbed wQGP) that is now theoretically defined\nin terms of re-summed thermal QCD~\\cite{newdirections}.'' \nGyulassy and McLerran~\\cite{gyulassymclerran} have argued \n``Our criteria for the discovery of QGP are (1) Matter at energy densities so large that simple\ndegrees of freedom are quarks and gluons. This energy density is that predicted from lattice gauge theory for\nthe existence of a QGP in thermal systems, and is about 2 $GeV\/fm^3$, (2) The matter must be to a good approximation\nthermalized, (3) The properties of the matter associated with the matter while it is hot and dense must follow\nQCD computations based on hydrodynamics, lattice gauge theory results, and perturbative QCD for hard processes \nsuch as jets. All of the above are satisfied from the published data at RHIC... This leads us to conclude that the \nmatter produced at RHIC is a strongly coupled QGP (sQGP) contrary to original expectations that were based on \nweakly coupled plasma estimates.''\n\n\\begin{figure*}\n\\begin{center}\n\\resizebox{0.6\\textwidth}{!}{%\n \\includegraphics{figure_v2compilation.eps}\n}\n\\caption{Azimuthal anisotropy ($v_2$) as a function of $p_T$ from minimum bias gold-gold collisions. Hydrodynamic calculations \nare shown as dashed lines.}\n\\label{fig:1} \n\\end{center}\n\\end{figure*}\n\nAlthough the estimates of the energy density at early times ($t=1~fm\/c$) utilizing various methods disagree by\nmore than a factor of two~\\cite{PHENIX_whitepaper}, \nall values are significantly above that predicted for the QGP phase transition for the first few $fm\/c$. For\nexample, the value from the Bjorken energy density equation is up to a factor of four lower than from hydrodynamic\ncalculations, but the Bjorken value is often viewed as a lower limit since it ignores any effects from \nlongitudinal work. Thus, the first criteria seems to be met. Agreement of hydrodynamic calculations and\nexperimental data on transverse momentum spectra and in particular elliptic flow $v_2$ \n(see Figure~\\ref{fig:1}~\\cite{PHENIX_whitepaper,STARflow}) \nindicate very rapid equilibration times of order $t \\approx 1~fm\/c$~\\cite{heinz}. There have been questions\nraised about the required degree of thermalization~\\cite{borghini}; and, \nthe originally stated agreement of hydrodynamics with the lattice equation of state (EOS) appears to\nbe overstated so that no quantitative constraint on latent heat or softness is yet warranted~\\cite{PHENIX_whitepaper,pasi}.\nHowever, it does appear that equilibration is approached more substantially than one might have expected \nfrom perturbative calculations (see later discussion\non this point). Thus the first two criteria listed in ~\\cite{gyulassymclerran} appear satisfied and \nmight allow one to scientifically conclude that RHIC collisions have\ncreated the QGP. However, it is the critical third point that defines the experimental discovery of such. \n\n\\section{Strongly interacting versus strongly coupled}\n\nIn the literature there is a mixture of terminology from strongly interacting and strongly coupled. If it is strongly coupled, \nwhich coupling is being referred to? 
In many talks and publications, the term ``strongly coupled'' refers to the \nplasma coupling parameter $\\Gamma$ (often used in the case of electromagnetic (EM) plasmas). \n\n\\subsection{Plasma Coupling $\\Gamma$}\n\nThis coupling is defined as $\\Gamma = \\langle PE \\rangle\/\\langle KE \\rangle$, where $\\langle PE \\rangle$ is the average potential \nenergy and $\\langle KE \\rangle$ is the average kinetic energy. This parameter is used as a measure of the interaction strength in\nEM plasmas. Most EM plasmas that people are familiar with are weakly coupled plasmas where $\\Gamma \\ll 1$. These \nbehave like gases. However, for $\\Gamma \\gg 1$ the EM plasmas are strongly coupled and behave as low-viscosity liquids, and \nas solids at even larger $\\Gamma$, as shown in Figure~\\ref{fig:2}~\\cite{ichimaru}.\n\n\\begin{figure*}\n\\begin{center}\n\\resizebox{0.6\\textwidth}{!}{%\n \\includegraphics{figure_ichimaru.eps}\n}\n\\caption{Plotted is the scaled shear viscosity ($\\eta^{*} = \\eta\/mn\\omega_{p}a^{2}$) as a function of $\\Gamma$ for\nsupercooled one-component plasma (OCP) fluids.}\n\\label{fig:2} \n\\end{center}\n\\end{figure*}\n\nSince EM plasmas have been widely studied, it is natural to seek to categorize the quark gluon plasma (QGP) in a similar fashion.\nRecently, there have been many publications describing the QGP produced at RHIC as a ``near-perfect liquid.'' Thus a question from someone outside the field of heavy ions is whether the\nmatter is in the plasma phase or the liquid phase (often thought to be different regimes in the EM matter case). \nOne must be careful here, because two different definitions of ``liquid'' are being used. \nLiquid can refer to a specific phase of electromagnetic matter, or it can refer to any matter whose dynamic evolution can be described by hydrodynamic equations of motion.\nAn EM plasma in the strong coupling\n(large $\\Gamma$) regime is a plasma in that the electric charges are not confined to atoms, but it has the liquid-like property (second definition) of \nlow viscosity. \nAt RHIC, the matter produced shows some evidence of low viscosity (though not yet quantitative in terms of an upper limit on the shear viscosity). \nThus, it may be a liquid (by the second definition), but may not share other EM liquid-phase (first definition) properties. \nFor example, many electromagnetic liquids are also highly incompressible. For\nthe QGP, at baryon chemical potential $\\mu_{B} = 0$ the pressure (P) and volume (V) are independent. Again, the matter shares some, but\nnot all, of the properties of an EM liquid.\n\nThese analogies are often useful, but only if they lead to new insights, rather than just new declarations and new terminology.\nOne has to be careful to define which properties are analogous. For example, QCD always has screening of long range color magnetic\nfields, which means even a weakly interacting (asymptotically free) QGP will be quite different from a weakly coupled EM plasma. Also,\non short distance scales, color electric and magnetic fields can be of equal order. \n\nSome in the field have argued along the following lines: Since the matter produced at RHIC has a large $\\Gamma$ value, it must be a plasma\n(as a phase). This leads to the very strong conclusion that the matter at RHIC is a plasma (meaning a deconfined plasma of quarks\nand gluons). However, though EM plasmas are categorized in terms of $\\Gamma$, not all large-$\\Gamma$ (i.e., low viscosity) matter\nis a plasma at all.\nAs an example, there have been recent experiments with lithium atoms where the mean free paths approach\nzero under certain conditions~\\cite{lithium}. The Feshbach resonance in binary collisions of these alkali atoms at ultra-cold\ntemperatures allows experimentalists to tune the interaction strength. The measurements reveal low viscosity and\n``flow'' reminiscent of that seen in RHIC collisions. However, these atoms are clearly not an EM plasma. \nThus, at RHIC, demonstrating low viscosity does not \nprove the matter is a plasma.\n\nOne can push the plasma analogy, estimate the value of the $\\Gamma$ parameter for the QGP, and then \nattempt to infer other properties of the medium. One such estimate~\\cite{thoma} yields\n\\begin{equation}\n\\Gamma = {{\\langle PE \\rangle} \\over {\\langle KE \\rangle}} \\approx {{\\alpha_{s}\/r} \\over {3T}} \\approx {{\\alpha_{s}T} \\over {3T}} \\approx \\alpha_{s}.\n\\end{equation}\nThen, utilizing the relation $\\alpha_s = g^{2}(T)\/4\\pi$ and putting back in $d$, the characteristic inter-particle distance, one obtains\n\\begin{equation}\n\\Gamma = {{Cg^{2}} \\over {4 \\pi d T}} \\approx 1.5-5.\n\\end{equation}\nNote that this result is different from an earlier, much larger estimate, which had a factor of $4\\pi$ unit error and did not include\na factor of two scale-up for the approximately equal-strength color magnetic interaction~\\cite{thoma}. Thoma notes that for EM plasmas ``the \nphase transition to the gas phase, assumed to happen at $\\Gamma_c \\approx 1$, takes place now at a few times the\ntransition temperature [from the QGP liquid to the QGP gas]''~\\cite{thoma}. Note that the title of this\narticle is ``The Quark-Gluon Plasma Liquid.'' \nIn the PHENIX whitepaper it is stated that\n``considerations such as these have led some to denote QGP in this regime as 'sQGP' for strongly interacting QGP''~\\cite{PHENIX_whitepaper}.\n\nIn a recent set of papers~\\cite{shuryak_cqgp}, the authors invoke a model referred to as cQGP where they calculate the shear viscosity as a function\nof the dimensionless $\\Gamma$ parameter. The calculation seems to show a QGP with liquid-like behavior (low viscosity) at large $\\Gamma$\nand an indication of solid behavior at even larger $\\Gamma$, as was seen in the EM plasma case. There has been speculation that\nthe QGP formed in heavy ion collisions could have crystalline or polymer-chain-type solid structures~\\cite{shuryak_qm05}. However, it\nis critical to note that the letter 'c' stands for classical. The entire calculation is therefore done in the non-relativistic, non-quantum\nregime, and the possible insights gained have to be viewed with skepticism. \n\nThe entire utilization of $\\Gamma$ raises some significant questions. The potential energy is taken as the Coulomb (short-range) part\nof the QCD potential, $\\alpha_{s}\/r$. Unfortunately, when one has a system of (nearly) massless, relativistic particles, the\npotential energy is not a well-defined concept in a relativistic Quantum Field Theory (QFT). This issue applies to a QFT for QED or QCD, but\nis of particular concern for the QGP case here since anywhere near the transition temperature the light quarks are relativistic. \nThe fundamental problem is that there is no unique distinction between the \nparticles and the fields, and thus no unique manner of separating potential energy and kinetic energy. In which category do the\ngluons belong, for example? In the case of heavy quarks, one might approximate them as static source charges and thus make a reasonable\nattempt at separating the potential energy. 
For the QGP as a whole, however, no such separation is possible, and the assumption of a non-relativistic limit in the cQGP discussion above is far from the real situation even near the critical temperature $T = 170$~MeV. There are attempts to formulate an alternative for calculating $\\Gamma$~\\cite{jacak}.\n\nMany people are interested in the $\\Gamma$ calculation since it is how many EM plasmas are categorized. However, other measures of the interaction strength, perfectly well defined both in hydrodynamics and in a QFT, do exist and can be used instead.\n\n\\subsection{Shear Viscosity over Entropy Density $\\eta\/s$}\n\n\\begin{figure*}\n\\begin{center}\n\\resizebox{0.6\\textwidth}{!}{%\n\\includegraphics{graph-He-N-H20.eps}\n}\n\\caption{Plotted are the shear viscosity to entropy density ratios ($\\eta\/s$) divided by\nthe conjectured lower bound as a function of temperature in Kelvin. Shown are curves for\nhelium, nitrogen and water.}\n\\label{fig:3} \n\\end{center}\n\\end{figure*}\n\nThere is a well defined measure of the interaction strength. It is the ratio of the shear viscosity (a measure of the mean free path of the particles) to the entropy density (a measure of the inter-particle distances). It is in fact this ratio $\\eta \/ s$ that may be very small in the QGP, as inferred from hydrodynamic calculations and their comparison to experimental data. Recent measurements of charm quark suppression at moderate $p_T \\approx 2-5$~GeV\/c and non-zero elliptic flow $v_{2}$ may give the best constraint on the heavy quark diffusion coefficient and subsequently on $\\eta\/s$~\\cite{mooreteaney,naglevienna}. Full three-dimensional viscous hydrodynamic calculations in comparison with precision data are needed to set a quantitatively reliable limit on $\\eta\/s$. Lattice simulations are presently unable to make reliable predictions of most dynamical properties of the quark-gluon plasma. The calculation of phenomenologically relevant transport properties, such as the shear viscosity or collective modes, remains an important challenge \\cite{Petreczky:2005zy}.\n\nHowever, recently there has been important progress in calculating these dynamical properties perturbatively in a dual quantum field theory involving black holes in anti-de Sitter (AdS) space \\cite{Kovtun:2004de}. This approach is based on the insight derived from string theory that weakly coupled gravity theories in higher dimensions can be dual to four-dimensional gauge theories in the strong coupling limit \\cite{Maldacena:1997re}. It must be emphasized that these AdS\/CFT (conformal field theory) techniques presently have the limitation that no higher dimensional gravity or string theory is known which is dual to QCD. Work by Son {\\it et al.} indicates that there may be a lower viscosity bound $\\eta\/s > 1\/4\\pi$ applicable to all systems, including the quark gluon plasma. A critical goal for the field is to put the QCD matter data point on a plot like the one shown in Figure~\\ref{fig:3} for other systems~\\cite{Kovtun:2004de}. \n\nAn interesting side note is that in the figure these systems have a minimum in the ratio $\\eta\/s$. In fact, for helium, super-fluidity sets in at approximately 2 Kelvin, which is below the minimum. The minimum occurs around 4 Kelvin, which is the gas to liquid phase transition point. Thus the minimum reflects not a minimum in the viscosity itself, but rather the sudden change in entropy associated with the phase transition. 
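\nFor orientation, the conjectured lower bound that normalizes the curves in Figure~\\ref{fig:3}, $\\eta\/s \\geq \\hbar\/(4\\pi k_B)$ (i.e. $1\/4\\pi$ in natural units), can be evaluated in SI units; the short sketch below is included only as a numerical aside.\n\\begin{verbatim}\n# The conjectured lower bound on eta/s, hbar/(4 pi k_B), in SI units.\nfrom math import pi\n\nhbar = 1.054571817e-34   # J s\nk_B  = 1.380649e-23      # J per K\nprint(hbar / (4.0 * pi * k_B))   # ~ 6.1e-13 K s\n\\end{verbatim}\nThe curves for helium, nitrogen and water in Figure~\\ref{fig:3} all sit above unity in these normalized units.\n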
Note the recent paper on the subject~\\cite{larry}.\n\nThe most common examples of very low viscosity (or near perfect) fluids are the cases shown in Figure~\\ref{fig:3}, which are referred to as super-fluids. In most cases this super-fluidity comes about from quantum mechanical effects associated with the limited excitations at low temperature. This seems quite different from the system at RHIC, and thus, though there are many examples in the literature describing the matter at RHIC as a near perfect fluid, it is not termed a super-fluid.\n\n\\subsection{Strong Coupling $\\alpha_s$}\n\n\\begin{figure*}\n\\begin{center}\n\\resizebox{0.6\\textwidth}{!}{%\n \\includegraphics{v2paper_mbv2a.eps}\n}\n\\caption{Impact parameter averaged gluon elliptic flow as a function\nof $p_T$ for Au+Au reactions at $\\sqrt{s_{NN}}=130$~GeV from MPC with various\nvalues of the transport opacity for $b=0$. Also shown are data points\nfrom the STAR experiment.}\n\\label{fig:4} \n\\end{center}\n\\end{figure*}\n\nAnother interpretation of the letter ``s'' is strongly coupled in the sense of a large QCD coupling $\\alpha_s$. Clearly $\\alpha_s$ is always, in any experimentally accessible energy range, much greater than $\\alpha_{EM} = 1\/137$. The wQGP, where the letter ``w'' stands for weak coupling, implies that perturbative expansions should converge as $\\alpha_s \\ll 1$. By contrast, sQGP would simply imply that perturbative techniques would not be applicable. U. Heinz observed that ``perturbative mechanisms seem unable to explain the phenomenologically required very short thermalization time scale, pointing to strong non-perturbative dynamics in the QGP even at or above $2 \\times T_c$''~\\cite{uli}.\n\nSpecifically, analytic calculations utilizing perturbative expansions of gluon scattering lead to long equilibration times ($> 2.6$~fm\/c) and thus rather modest elliptic flow (i.e. small $v_2$)~\\cite{baier}. There are also numerical simulations that give similar results utilizing a $2 \\rightarrow 2$ cross section of approximately 3 mb, as shown in Figure~\\ref{fig:4}~\\cite{molnar}. One can artificially increase the cross section (or transport opacity) to match the data, and doing so requires an order of magnitude increase in the cross section. In this sense, it is not a wQGP. There are two important caveats on these calculations. One is that the equation of state is too hard relative to lattice results for the QGP. More important is that there is some controversy over the inclusion of $2 \\rightarrow 3$ and $3 \\rightarrow 2$ processes. Z. Xu {\\it et al.}~\\cite{zhu} claim that their inclusion results in a dramatic decrease in the equilibration time and thus a large increase in $v_2$. At this conference it became clear that the critical part of their result is that in $2 \\rightarrow 3$ processes the resulting gluons are emitted isotropically. Under this assumption it is easy to see why this leads to rapid isotropization. Other implementations of these processes show much smaller effects, in large part due to forward peaking of the emission distribution. This issue needs to be resolved.\n\nIn the third category used by Gyulassy and McLerran for the discovery of the QGP, they cite the use of perturbative methods to understand jet probes. Radiative energy loss calculations are done perturbatively to describe the jet quenching phenomenon. In fact, the calculations are effectively leading order. 
GLV~\\cite{glv}, for example, assumes the correct pQCD interaction strength (noting that some calculations use a fixed coupling $\\alpha_s$ and others a running one), and then determines the color charge density. One obtains a result for $dN\/dy({\\rm gluons}) = 1000$ or $dN\/dy({\\rm quarks,gluons}) = 2000$. The final entropy density $dS\/dy$ is of order 5000, and since the entropy cannot be larger at earlier times, this translates roughly into a limit $dN\/dy({\\rm quarks,gluons}) < 1300$~\\cite{muller_annualreview}. One possibility is that more than just radiative energy loss contributes, as has been highlighted by recent heavy quark results (perhaps indicating collisional energy loss). However, another approach is to say that one knows the color charge density and can then infer the coupling strength. This then implies that the coupling strength is much larger than predicted from the effectively leading order perturbative calculation - which may be consistent with the sQGP description. \n\n\\subsection{Bound States}\n\nThis strong coupling $\\alpha_s$ is taken by Shuryak and collaborators~\\cite{shuryak_bound} to imply that the interaction between quasi-particles is strong enough to bind them. Thus the sQGP is composed of bound (not necessarily color neutral) $qq$, $q\\overline{q}$, $gg$, $qg$, etc. states. However, recent lattice calculations of baryon number--electric charge correlations show no such quasi-particles with these quantum numbers~\\cite{karsch}. It appears that lattice QCD is ruling out $qq$ and $q\\overline{q}$ states, though the results can say nothing about states without these quantum numbers, like $qg$ and $gg$ states. \n\n\\subsection{Expectations}\n\nA reasonable question is why there was an original expectation for a wQGP or perturbative plasma. ``For plasma conditions realistically obtainable in nuclear collisions ($T \\approx 250$~MeV, $g = \\sqrt{4\\pi\\alpha_s}$) the effective gluon mass $m_g^{*} \\approx 300$~MeV. We must conclude, therefore, that the notion of almost free gluons (and quarks) in the high temperature phase of QCD is quite far from the truth. Certainly one has $m_g^{*} \\ll T$ when $g \\ll 1$, but this condition is never really satisfied in QCD, because $g \\approx 1\/2$ even at the Planck scale ($10^{19}$~GeV)''~\\cite{bmueller}. Despite this observation, many noted that in lattice gauge theory results the value of $\\epsilon\/T^{4}$ approaches 80\\% of the non-interacting gas limit. Some viewed this as indicating only weak interactions, while some in the lattice community already thought that this 20\\% difference from the Stefan-Boltzmann limit was the effect of strong residual interactions in a non-perturbative system. Also, recent results from AdS\/CFT have shown that one can be at the 80\\% limit and still be in the very strongly interacting limit.\n\n\\section{Summary}\n\nExciting results of emergent phenomena at RHIC, such as strong flow and jet quenching, have sparked a great deal of very positive new thinking about the medium created in these collisions. It appears to represent a paradigm shift, although the earlier paradigm of a perturbatively describable (asymptotically free) plasma seems to have been poorly motivated. F. Karsch puts it best: ``I do not really care what the `s' in sQGP means. However, I am worried and partly also disappointed about the way this new name is used. 
The disappointment, of course, arises from the fact that suddenly a new name\nseems to be necessary to describe the properties of QCD in a temperature regime which lattice gauge theory since\na long time have identified as 'not being an ideal gas' and 'impossible to be described by perturbation theory~\\cite{newdirections}.''\n\nAs the field of heavy ions progresses, a coherent picture of the medium created may be emerging. At this point there\nare many ideas, some commensurate and other incommensurate with each other. \nHopefully the future \nwill tell us which are correct.\n\n\n\\section{Acknowledgment}\n\nWe thank the workshop\norganizers for providing an environment for stimulating discussion and new ideas from young people. We also acknowledge useful discussions prior to this workshop at the Boulder Workshop 2 and useful comments by one anonymous referee. We acknowledge support from the United States Department of Energy grant DE-FG02-00ER41152. \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOne of the most important physical phenomena studied in condensed matter systems is the transport of electrons, especially when they are restricted to move in one dimension. This is because of the unique nature of the inter-particle interactions in one dimension which leads to interesting physics which is substantially different from that of the higher dimensions where interactions are tackled conveniently using the Fermi liquid theory. Secondly the emergence of advanced technologies has made the realization of one dimensional systems possible that have unusual properties and hold a promising future - carbon nanotubes \\cite{bockrath1999luttinger}, semiconducting quantum wire \\cite{auslaender2000experimental, yacoby1997magneto} and so on. The suitable alternative to the Fermi liquid theory to capture the many body physics of such 1D systems is the Luttinger liquid theory \\cite{haldane1981luttinger} which has served as the paradigm for one dimensional systems and is based on linearization of the dispersion relations of the constituent particles near the Fermi level. \n\nMost of the physical phenomena of such systems can be systematically studied provided one has analytical forms of the correlation functions - to obtain these is the stated goal in quantum many body physics. In one dimension, this goal is achieved using bosonization methods where a fermion field operator is expressed as the exponential of a bosonic field \\cite{von1998bosonization}. This operator approach to bosonization, which goes under the name g-ology \\cite{giamarchi2004quantum}, can be used successfully to compute the N-point Green functions of a clean Luttinger liquid. But the Fermi-Bose correspondence used in the g-ology methods is insufficient to tackle impurities and to circumvent this, other techniques like renormalization group (RG) methods are mandatory \\cite{matveev1993tunneling}.\n\n\nA novel technique by the name of `Non chiral bosonization technique' has been developed that uses a basis different from the plane wave basis to deal strongly inhomogeneous Luttinger liquid, without adhering to RG methods \\cite{das2018quantum}. NCBT can extract the most singular part of the correlation functions of a Luttinger liquid with arbitrary strength of the external impurities as well as that of mutual interactions between the particles. 
It has also been applied successfully to study the one step fermionic ladder (two 1D wires placed parallel and close to each other with hopping between a pair of opposing points) \\cite{das2017one} and slowly moving heavy impurities in a Luttinger liquid \\cite{das2018ponderous}. The Green functions enable one to predict different physical phenomena occurring in the system, such as Friedel oscillations \\cite{Egger1995friedel1}, conductance \\cite{fendley1995exact, fendley1995exact2}, the Kondo effect \\cite{furusaki1994kondo, schiller1995exact}, resonant tunneling \\cite{kane1992resonant, furusaki1993resonant}, etc. \n\nIn the seminal work by Kane and Fisher \\cite{kane1992transport}, it has been shown how impurities can have drastic effects on the conductance of the particles, as severe as `cutting the chain' by even a small scatterer. Since then the study of transport phenomena in a Luttinger liquid with impurities has interested a number of researchers \\cite{giamarchi1992conductivity, ogata1994collapse, safi1997conductance, ponomarenko1995renormalization}. The conductance of a narrow quantum wire with non-interacting electrons moving ballistically is given by $e^2\/h$. This conductance is renormalized for a Luttinger liquid and is given by $g e^2\/h$, where $g$ is the Luttinger liquid parameter which depends on the mutual interaction strength of the particles \\cite{kane1992transport, apel1982combined, ogata1994collapse}. But no renormalization of the universal conductance is required if the electrons behave freely in the source and drain reservoirs \\cite{ponomarenko1995renormalization, maslov1995landauer}. Matveev et al. used a simple renormalization group method to calculate the conductance of a weakly interacting electron gas in the presence of a single scatterer \\cite{matveev1993tunneling}. Ogata and Anderson \\cite{ogata1993transport} used Green's functions to study the conductivity of a Luttinger liquid and showed that if the spin-charge separation is taken into account, the resistivity has a linear temperature dependence. Besides conductance, resonant tunneling is yet another important phenomenon studied in Luttinger liquids with double barriers \\cite{kane1992resonant, kane1992transmission, furusaki1993resonant, moon1993resonant}. Kane and Fisher studied resonant tunneling in a single channel interacting electron gas through a double barrier and found that the width of the resonance vanishes, as a power of temperature, in the zero-temperature limit \\cite{kane1992resonant, kane1992transmission}. Furusaki and Nagaosa studied the same for spinless fermions and calculated the conductance as a function of temperature and gate voltage \\cite{furusaki1993resonant}. In another work, Furusaki studied resonant tunneling in a quantum dot weakly coupled to Luttinger liquids \\cite{furusaki1998resonant}, and a few years later this model was supported by experimental evidence \\cite{auslaender2000experimental}.\n\n\nIn this work, the conductance of a Luttinger liquid in the presence of a cluster of impurities is calculated both in the Kubo formalism and as the outcome of a tunneling experiment, using the correlation functions obtained from NCBT. All the necessary limiting cases, such as Landauer's formula, the conductance of a clean Luttinger liquid, the half-line, etc., are obtained. From the tunneling conductance the well known concepts of `cutting the chain' and `healing the chain' are elucidated. 
The condition for resonant tunneling in a double impurity system is obtained, and the behavior of the correlation function exponents in its vicinity is described.\n\n\\section{System description}\n\nThe system under study consists of a Luttinger liquid with short ranged mutual interactions amongst the particles and a cluster of impurities centered around an origin. The Hamiltonian of the system is given as follows.\n\\small\n\\begin{equation}\n\\begin{aligned}\nH =& \\int^{\\infty}_{-\\infty} dx \\mbox{ } \\psi^{\\dagger}(x) \\left( - \\frac{1}{2m} \\partial_x^2 + V(x) \\right) \\psi(x)\\\\\n & \\hspace{1cm} + \\frac{1}{2} \\int^{ \\infty}_{-\\infty} dx \\int^{\\infty}_{-\\infty} dx^{'} \\mbox{ }v(x-x^{'}) \\mbox{ }\n \\rho(x) \\rho(x^{'})\n\\label{Hamiltonian}\n\\end{aligned}\n\\end{equation}\n\\normalsize\nThe first term is the kinetic term, followed by the potential energy term representing the impurity cluster, which is modeled as a finite sequence of barriers and wells around a fixed point. The potential cluster can be as simple as a single delta impurity $V_0\\delta(x)$, two delta impurities placed close to each other $V_0( \\delta(x+a)+\\delta(x-a))$, a finite barrier\/well $\\pm V \\theta(x+a)\\theta(a-x)$, and so on, where $\\theta(x)$ is the Heaviside step function. The RPA (random phase approximation) is imposed on the system, without which the calculation of the analytical expressions of the correlation functions is formidable. In this limit, the Fermi momentum and the mass of the fermion are allowed to diverge in such a way that their ratio, viz., the Fermi velocity, is finite (i.e. $ k_F, m \\rightarrow \\infty $ but $ k_F\/m = v_F < \\infty $). With the choice of units $ \\hbar = 1 $, $ k_F $ is both the Fermi momentum and a wavenumber \\cite{stone1994bosonization}. The RPA limit linearizes the energy momentum dispersion near the Fermi surface ($E=E_F+p v_F$ instead of $E=p^2\/(2m)$). It is also necessary to specify how the width of the impurity cluster $2a$ scales in the RPA limit; the assertion is that $ 2 a k_F < \\infty $ as $ k_F \\rightarrow \\infty $. On the other hand, the heights and depths of the various barriers\/wells are assumed to be in fixed ratios with the Fermi energy $ E_F = \\frac{1}{2} m v_F^2 $ even as $ m \\rightarrow \\infty $ with $ v_F < \\infty $. \n\nFor the various potentials constituting the cluster, the only quantities that enter the calculation of the Green functions are the reflection (R) and transmission (T) amplitudes, which can be easily calculated using elementary quantum mechanics and are provided in an earlier work \\cite{das2018quantum}. 
For instance, in the case of a single delta potential: $V_0\\delta(x)$,\n\\scriptsize\n\\begin{equation}\n\\begin{aligned}\nT=&\\frac{1}{\\left(1+V_0 \\frac{i}{v_F}\\right)}\\mbox{ };\\mbox{ }\nR=-\\frac{iV_0}{v_F\\left(1+V_0 \\frac{i}{v_F}\\right)} \\\\\n\\end{aligned}\n\\end{equation}\n\\normalsize\nIn the case of a double delta potential separated by a distance 2a between them : $V_0( \\delta(x+a)+\\delta(x-a))$,\n\\scriptsize\n\\begin{equation}\n\\begin{aligned}\nT=&\\frac{1}{\\left(1+V_0 \\frac{i}{v_F}\\right)^2-\\left(\\frac{i V_0}{v_F}e^{i 2 k_F a}\\right)^2}\\\\\nR=&-\\frac{2i\\frac{V_0^2}{v_F^2} \\sin{[2 k_F a]} +\\frac{2i V_0}{v_F}\\cos{[2 k_F a]}}{\\left(1+V_0 \\frac{i}{v_F}\\right)^2-\\left(\\frac{i V_0}{v_F}e^{i 2 k_F a}\\right)^2} \\\\\n\\end{aligned}\n\\end{equation}\n\\normalsize\nIn this work the generalized notion of R and T is used in this work to signify the reflection and transmission amplitudes of the cluster of impurities in consideration. The third term in equation (\\ref{Hamiltonian}) represents the forward scattering mutual interaction term such that\n\\[ \n\\hspace{2 cm} v(x-x^{'}) = \\frac{1}{L} \\sum_{q} v_q \\mbox{ }e^{ -i q(x-x^{'}) } \n\\]\nwhere $ v_q = 0 $ if $ |q| > \\Lambda $ for some fixed bandwidth $ \\Lambda \\ll k_F $ and $ v_q = v_0 $ is a constant, otherwise.\\\\\n\n\\section{ Non chiral bosonization and two point functions}\nAs in conventional bosonization schemes using the operator approach \\cite{giamarchi2004quantum}, the fermionic field operator is expressed in terms of currents and densities. But in NCBT the field operator is modified to include the effect of back-scattering by impurities. Hence it is suitable to study translationally non invariant systems like the ones considered in this work.\n\\begin{equation}\n\\begin{aligned}\n\\psi_{\\nu}(x,\\sigma,t) \\sim C_{\\lambda ,\\nu,\\gamma}\\mbox{ }e^{ i \\theta_{\\nu}(x,\\sigma,t) + 2 \\pi i \\lambda \\nu \\int^{x}_{sgn(x)\\infty}\\mbox{ } \\rho_s(-y,\\sigma,t) dy}\n\\label{PSINU}\n\\end{aligned}\n\\end{equation}\nHere $\\theta_{\\nu}$ is the local phase which is a function of the currents and densities which is also present in the conventional bosonization schemes \\cite{giamarchi2004quantum}, ideally suited for homogeneous systems.\n\\small\n\\begin{equation}\n\\begin{aligned}\n\\theta_{\\nu}(x,\\sigma,t) =& \\pi \\int^{x}_{sgn(x)\\infty} dy \\bigg( \\nu \\mbox{ } \\rho_s(y,\\sigma,t)\\\\\n&\\hspace{1 cm} - \\int^{y}_{sgn(y)\\infty} dy^{'} \\mbox{ }\\partial_{v_F t } \\mbox{ }\\rho_s(y^{'},\\sigma,t) \\bigg)\n\\end{aligned}\n\\end{equation}\\normalsize\nThe new addition in equation (\\ref{PSINU}) is the optional term $\\rho_s(-y)$ which ensures the necessary trivial exponents for the single particle Green functions for a system of otherwise free fermions with impurities, which are obtained using standard Fermi algebra and they serve as a basis for comparison for the Green functions obtained using bosonization. The adjustable parameter is the quantity $\\lambda$ which can take values either 0 or 1 as per requirement. Thus NCBT operator reduces to standard bosonization operator used in g-ology methods by setting $\\lambda=0$. The factor $2 \\pi i$ ensures that the fermion commutation rules are obeyed. The quantities $C_{\\lambda ,\\nu,\\gamma}$ are pre-factors and are fixed by comparison with the non-interacting Green functions obtained using Fermi algebra. The suffix $\\nu$ signifies a right mover or a left mover and takes values 1 and -1 respectively. 
The field operator as given in equation (\\ref{PSINU}) is to be treated as a mnemonic to obtain the Green functions and not as an operator identity, which avoids the need for the Klein factors that are conventionally used to conserve particle number, since the correlation functions, unlike the field operators, are number conserving. The field operator (annihilation) is combined with another such field operator (creation) to obtain the non-interacting two point functions after fixing the C's and $\\lambda$'s. Finally, the densities $\\rho$ on the RHS of equation (\\ref{PSINU}) are replaced by their interacting versions to obtain the many body Green functions, the details being described in an earlier work \\cite{das2018quantum}. The two point functions obtained using NCBT are given in \\hyperref[AppendixA]{Appendix A}.\n\n\n\n\\section{Conductance}\n\\subsection{Kubo conductance}\nThe general formula for the conductance of a quantum wire (obtained from Kubo's formula, which relates it to current-current correlations) without leads, but with electrons experiencing forward scattering short-range mutual interactions and in the presence of a finite number of barriers and wells clustered around an origin, is obtained. Consider an electric field $ E(x,t) = \\frac{ V_g }{ L} $ between $ -\\frac{L}{2} < x < \\frac{L}{2} $ and $ E(x,t) = 0 $ for $ |x| > \\frac{L}{2} $. Here $ V_g $ is the voltage between the two extreme points. Thus a d.c. situation is being considered right from the start. This corresponds to a vector potential,\n\\begin{equation}\n\\begin{aligned}\nA(x,t) = \\left\\{\n  \\begin{array}{ll}\n    -\\frac{ V_g }{ L} (ct), & \\hbox{ $ -\\frac{L}{2} < x < \\frac{L}{2} $ ;} \\\\\n   \\hspace{.3cm}0, & \\hbox{otherwise.}\n  \\end{array}\n\\right.\n\\end{aligned}\n\\end{equation}\nHere $c$ is the speed of light. This means the average current can be written as,\n\\begin{equation}\n\\begin{aligned}\n\\langle j(x,\\sigma,t) \\rangle = &\\frac{ie}{c}\\sum_{ \\sigma^{'} }\n\\int^{L\/2}_{-L\/2} dx^{'}\\mbox{ } \\int_{-\\infty}^{t} dt^{'}\n\\mbox{ }\\frac{ V_g }{ L} (ct') \\\\\n&\\times \\langle [j(x,\\sigma,t),j(x^{'},\\sigma^{'},t^{'})] \\rangle_{LL}\n\\end{aligned}\n\\end{equation}\nThe current-current correlation can be obtained using the Green functions derived in the present work (see \\hyperref[AppendixB]{Appendix B}) to obtain the formula for the conductance (in proper units) as follows,\n\\begin{equation}\nG = \\frac{ e^2 }{h} \\frac{v_F }{ v_h } \\mbox{ }\\bigg (1- \\frac{v_F }{v_h} \\mbox{ }\\frac{|R|^2}{1-\\frac{(v_h-v_F)}{v_h}|R|^2}\\bigg)\n\\label{kubo}\n\\end{equation}\n\nHere $v_F$ is the Fermi velocity, \\scriptsize $ v_h = \\sqrt{v_F^2+2v_F v_0\/\\pi} $ \\normalsize is the holon velocity, and $v_0$ is the strength of the interaction between fermions, as described in Section 2. See \\hyperref[AppendixB]{Appendix B} for more details.\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[scale=0.3]{conductanceL}\n  \\caption{Conductance as a function of the absolute value of the reflection amplitude as well as the interaction parameter ($ v_F = 1 $).}\\label{Cond3D}\n\\end{figure}\n\\noindent The Kubo conductance formula obtained in equation (\\ref{kubo}) is plotted in Fig.~\\ref{Cond3D} as a function of the reflection coefficient and the interaction strength. It can be seen that when the reflection coefficient becomes unity ($|R|=1$), the conductance vanishes irrespective of the interaction parameter. 
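\nAs a quick numerical cross-check of equation (\\ref{kubo}) and of the limiting cases discussed below, the sketch that follows evaluates $G$ (in units of $e^2\/h$) for a few illustrative values of $|R|^2$ and $v_0$; the chosen numbers are for illustration only.\n\\begin{verbatim}\n# Evaluate the Kubo conductance formula of this subsection in units of e^2/h.\n# vF = 1 and the values of |R|^2 and v0 are illustrative choices.\nimport math\n\ndef G(R2, v0, vF=1.0):\n    vh = math.sqrt(vF**2 + 2.0*vF*v0/math.pi)   # holon velocity\n    return (vF/vh)*(1.0 - (vF/vh)*R2/(1.0 - ((vh - vF)/vh)*R2))\n\nprint(G(0.3, 0.0))   # no interaction: Landauer value |T|^2 = 0.7\nprint(G(0.0, 1.0))   # no impurity: vF/vh = K_rho < 1 for repulsive v0\nprint(G(1.0, 1.0))   # infinite barrier: ~0 for any interaction strength\nprint(G(0.3, -0.5), G(0.3, 0.5))   # attractive v0 raises G, repulsive lowers it\n\\end{verbatim}\n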
For any fixed value of $|R|$, the conductance increases as the mutual interaction becomes more and more attractive (negative $v_0$) and decreases as the interaction becomes more and more repulsive (positive $v_0$). Conversely, for a fixed value of the interaction parameter, the conductance decreases as the reflection coefficient increases.\n\n\\subsubsection{Limiting cases}\n{\\bf No interaction}. In the absence of interactions, $v_0=0$ and hence $v_h=v_F$; thus from equation (\\ref{kubo}),\n\\[\n\\hspace{2cm}G = \\frac{ e^2 }{h} (1 - |R|^2) = \\frac{ e^2 }{h} |T|^2\n\\]\n which is Landauer's formula for the conductance.\n\n{\\bf No impurity}. In this case there is no reflection, hence $|R|=0$, and thus from equation (\\ref{kubo}),\n\\[\n\\hspace{2cm}G = \\frac{ e^2 }{h} \\frac{v_F}{v_h} =\\frac{ e^2 }{h} g\n\\]\nwhich is the renormalized conductance of an infinite Luttinger liquid (with parameter $g$).\n\n{\\bf Infinite barrier}. In the case of a half line, $|R|=1$ and thus from equation (\\ref{kubo}),\n\\[\n\\hspace{1 in}G=0\n\\]\nirrespective of the value of the holon velocity $v_h$.\\\\ \n\n\n\n\n\\subsection{Tunneling conductance} The Kubo conductance is the linear response to external potentials and is therefore related to four-point correlation functions of fermions. Alternatively, the conductance may also be thought of as the outcome of a tunneling experiment \\cite{kane1992transport}. Here fermions are injected at one end and collected at the other end. In this sense the conductance is related to the two-point function or the single particle Green function. Thus we expect these two notions to be qualitatively different from each other. From this point of view, the conductance is ($|T|$ is the magnitude of the transmission amplitude for free fermions plus impurity),\n\\begin{equation}\nG = \\frac{ e^2 }{h } |T| \\mbox{ }\n| v_F\\int^{\\infty}_{-\\infty}dt\\mbox{ }<\\{ \\psi_{ R } ( \\frac{L}{2},\\sigma,t) , \\psi^{\\dagger}_{ R } (-\\frac{L}{2},\\sigma,0) \\}>\n |\n \\label{TUNNEL1}\n\\end{equation}\nIn this case the results depend on the length of the wire $ L $ and a cutoff $ L_{\\omega} = \\frac{ v_F }{ k_B T } $ that may be regarded either as an inverse temperature or an inverse frequency (in the case of a.c. conductance). The result (derived in \\hyperref[AppendixB]{Appendix B}) is\n\\begin{equation}\nG \\sim \\left( \\frac{ L }{ L_{ \\omega } }\\right)^{-2Q } \\mbox{ } \\left( \\frac{ L }{ L_{ \\omega} }\\right)^{ 4X }\n\\label{GGEN}\n\\end{equation}\nHere $Q$ and $X$ are obtained from equation (\\ref{luttingerexponents}). It is important to stress that the tunneling conductance as carefully defined in the present work is not simply related to the dynamical density of states of either the bulk or the half line (\\hyperref[AppendixB]{Appendix B}). Of particular interest is the weak link limit where $ |R| \\rightarrow 1 $. The limiting case of the weak link is two semi-infinite wires. In this case,\n\\begin{equation}\nG_{weak-link} \\sim \\left( \\frac{ L }{ L_{ \\omega} }\\right)^{ \\frac{ (v_h + v_F)^2-4v^2_F }{ 4 v_h v_F } }\n\\label{GGEN}\n\\end{equation}\nHence the d.c. conductance scales as $ G_{weak-link} \\sim (k_B T)^{ \\frac{ (v_h + v_F)^2-4v^2_F }{ 4 v_h v_F } } $. 
This formula is consistent with the assertions of Kane and Fisher \\cite{kane1992transport}, which show that at low temperatures $ k_B T \\rightarrow 0 $, for a fixed $ L $, the conductance vanishes as a power law in the temperature if the interaction between the fermions is repulsive ($ v_h > v_F > 0 $) and diverges as a power law if the interactions between the fermions are attractive ($ v_F > v_h > 0 $). Their result is applicable to spinless fermions without leads, $ G_{weak-link-nospin} \\sim (k_B T)^{ \\frac{2}{K} - 2 } $. In order to compare with the result of the present work, this exponent has to be halved, $ G_{weak-link-with-spin} \\sim (k_B T)^{ \\frac{1}{K_{ \\rho } } - 1 } $. This exponent is the same as the exponent of the present work so long as $ |v_h-v_F| \\ll v_F $, i.e. $ \\frac{ (v_h + v_F)^2-4v^2_F }{ 4 v_h v_F } \\approx \\frac{1}{K_{ \\rho } } - 1 $ since $ K_{\\rho} = \\frac{ v_F }{v_h} $.\n\\begin{figure}[h!]\n  \\centering\n  \\includegraphics[scale=0.35]{conductanceT}\n  \\caption{Conductance exponent $\\eta$ as a function of the absolute value of the reflection amplitude $|R|$ and the ratio $\\beta=\\frac{v_h}{v_F}$. }\\label{eta}\n\\end{figure}\nIn general, the claim of the present work is that the temperature dependence of the tunneling d.c. conductance of a wire with no leads, in the presence of barriers and wells and mutual interaction between particles (forward scattering, infinite bandwidth, i.e. $ k_F \\gg \\Lambda_b \\rightarrow \\infty $), is\n\\begin{equation}\nG \\sim (k_B T)^{ \\eta} ;\\mbox{ }\\mbox{ } \\mbox{ } \\eta = 4X - 2 Q\n\\label{Cond}\n\\end{equation}\nWhen $ \\eta > 0 $ the conductance vanishes at low temperatures as a power law - characteristic of a weak link. However, when $ \\eta < 0 $ the conductance diverges at low temperature as a power law - characteristic of a clean quantum wire. Of special interest is the situation $ \\eta = 0 $ where the conductance is independent of temperature. This crossover, from a conductance that vanishes as a power law at low temperatures to one that diverges as a power law, occurs at a reflection coefficient $ |R|^2 = |R_{c2}|^2 \\equiv \\frac{v_h (v_h-v_F)}{3 v_F^2+v_h^2} $, which is valid only for repulsive interactions $ v_h > v_F $. For attractive interactions, $ \\eta < 0 $ for any $ |R|^2 $, which means the conductance always diverges as a power law at low temperatures. This means attractive interactions heal the chain for all reflection coefficients, including in the extreme weak link case. On the other hand, for repulsive interactions with $ |R| > |R_{c2}| $, $ \\eta > 0 $ and the chain is broken (the conductance vanishes) at low temperatures. 
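\nThe crossover can also be made concrete numerically. The sketch below evaluates $\\eta = 4X - 2Q$, with $X$ and $Q$ taken from equation (\\ref{luttingerexponents}), together with the crossover reflection coefficient $|R_{c2}|^2$, for one illustrative repulsive interaction strength (the values of $v_F$ and $v_0$ are assumptions made for illustration).\n\\begin{verbatim}\n# Tunneling exponent eta = 4X - 2Q versus |R|^2 and the crossover point |R_c2|^2,\n# for an illustrative repulsive interaction (vF = 1, v0 = 0.5).\nimport math\n\nvF, v0 = 1.0, 0.5\nvh = math.sqrt(vF**2 + 2.0*vF*v0/math.pi)\n\ndef eta(R2):\n    Q = (vh - vF)**2/(8.0*vh*vF)\n    X = R2*(vh - vF)*(vh + vF)/(8.0*vh*(vh - R2*(vh - vF)))\n    return 4.0*X - 2.0*Q\n\nRc2_sq = vh*(vh - vF)/(3.0*vF**2 + vh**2)\nprint(Rc2_sq, eta(Rc2_sq))            # eta ~ 0 at the crossover\nprint(eta(0.5*Rc2_sq), eta(1.0))      # healed (eta < 0) versus broken (eta > 0)\n\\end{verbatim}\n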
For $ |R| < |R_{c2}| $, $ \\eta < 0 $ and, even though the interactions are repulsive, the chain is healed (the conductance diverges).\n\n\\begin{figure*}[t!]\n\\begin{center}\n\\includegraphics[scale=0.12]{DoubleDelta_PQX}\\hspace{0.5 cm}\n\\includegraphics[scale=0.17]{DoubleDelta_SYZ}\\hspace{0.5 cm}\n\\includegraphics[scale=0.17]{DoubleDelta_ABCD}\n\n\\scriptsize (a) \\hspace{5 cm}(b)\\hspace{5 cm} (c)\n\\end{center}\n\\caption{ Anomalous exponents (L.E) vs impurity strength $V_0$ for a symmetric double barrier: (a) Exponents for $\\langle \\psi_R(X_1) \\psi_R^{\\dagger}(X_2)\\rangle$ on the same side (b) Exponents for $\\langle \\psi_R(X_1) \\psi_L^{\\dagger}(X_2)\\rangle$ on the same side (c) Exponents for $\\langle \\psi_R(X_1) \\psi_R^{\\dagger}(X_2)\\rangle$ on opposite sides.}\n\\label{resonance}\n\\end{figure*}\n\n\\subsubsection{ Derivation of the RG equation for the tunneling conductance }\n\nIn the well-cited work of Matveev et al.~\\cite{matveev1993tunneling}, an RG equation for the tunneling conductance is derived which is valid for weak mutual interaction between fermions (they consider both forward and backward scattering, whereas in the present work we consider only forward scattering between fermions, but of arbitrary strength and sign, subject to the limitation that the holon velocity be real). Both in their work and in the present work the transmission amplitude of free fermions can vary continuously between zero and unity, i.e. it is not constrained in any way. Note that we have chosen an infinite bandwidth to derive the power-law conductance in equation (\\ref{Cond}). Had we chosen a finite bandwidth while calculating equation (\\ref{TUNNEL1}), the resulting expressions would be considerably more complicated, as Matveev et al. have also found. We shall postpone a proper discussion of this interesting question to a later publication. For now we look at equation (8) of their paper rather than their equation (12), since we are interested only in the large bandwidth case. Since $ G \\sim {\\mathcal{T}} $ in their notation, we may expand the conductance exponent $ 4X - 2 Q $ to leading order in powers of $ v_0 $, the forward scattering mutual interaction between fermions (in the notation of Matveev et al. this is $ V(0) $, while $ V(2k_F) \\equiv 0 $ in the present work),\n\\begin{equation}\n\\frac{ \\delta {\\mathcal{T}} }{ {\\mathcal{T}}_0 } \\approx 4 X \\mbox{ } \\log(\\omega)\\approx {\\mathcal{R}}_0\\frac{ v_0 }{ \\pi v_F} \\mbox{ } \\log(\\omega)\n\\label{Matveev}\n\\end{equation}\nfor $ |v_0| \\ll v_F $, where $ {\\mathcal{R}}_0 = 1 - {\\mathcal{T}}_0 $ (in the notation of the present work this would be $ |R|^2 = 1- |T|^2 $) and $ \\omega \\rightarrow |k-k_F|d \\sim k_BT $. Equation (\\ref{Matveev}) is precisely equation (8) of Matveev et al. Thus mutually interacting fermions renormalize the impurities, but isolated impurities do not renormalize the homogeneous Luttinger parameters such as $ K = \\frac{v_F}{v_h} $. Note that our result for the conductance, equation (\\ref{Cond}), is the {\\it{ end result }} of properly taking into account the renormalizations to all orders in the infinite-bandwidth forward-scattering fermion-fermion interactions, with no restriction on the bare transmission coefficient of free fermions plus impurity. 
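\nFor completeness, the leading-order step behind equation (\\ref{Matveev}) can be spelled out explicitly (this is a straightforward expansion of the exponents of equation (\\ref{luttingerexponents}), not an additional result of Ref.~\\cite{matveev1993tunneling}): writing\n\\begin{equation*}\nv_h = v_F\\sqrt{1 + \\frac{2 v_0}{\\pi v_F}} \\approx v_F + \\frac{v_0}{\\pi}, \\qquad\nX \\approx \\frac{|R|^2 \\, (v_0\/\\pi)(2 v_F)}{8 v_F^2} = \\frac{|R|^2 v_0}{4\\pi v_F}, \\qquad\nQ = O(v_0^2),\n\\end{equation*}\none finds $ 4X - 2Q \\approx |R|^2 v_0\/(\\pi v_F) $ to first order in $ v_0 $, which is exactly the coefficient $ {\\mathcal{R}}_0 v_0\/(\\pi v_F) $ appearing in equation (\\ref{Matveev}).\n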
The final answers of equation (\\ref{Cond}) involve only the bare transmission and reflection coefficients for the same reason why the zero point energy of the harmonic oscillator derived properly using Hermite polynomials (rather than using perturbative RG around free particle, say) involves the bare spring constant (ie. $ \\frac{1}{2} \\hbar \\sqrt{\\frac{ k }{m } } $). Incidentally, even the final answers of Matveev et al. such as their equation (13) involve the bare parameters only since this formula is the {\\it{end result}} of taking into account all the renormalization properly.\n\n It is hard to overstate the importance of these results. They show that it is possible to analytically interpolate between the weak barrier and weak link limits without involving RG techniques. It also shows that NCBT is nothing but non-perturbative RG in disguise.\n\n\n\\section{Resonant tunneling across a double barrier}\n\\begin{figure*}[t!]\n\\begin{center}\n\\includegraphics[scale=0.45]{Plots_asymmetric_double_delta_X}\\hspace{1.5cm}\n\\includegraphics[scale=0.45]{Plots_asymmetric_double_delta_A}\\\\\n(a)\\hspace{7cm}(b)\\\\\n\\end{center}\n\\caption{ Anomalous exponents for double barrier: The anomalous exponents (a) X and (b) A as functions of impurity strength $V_1$ and $V_2$ for an asymmetric double delta potential. Near resonance (the point of intersection of the cross lines), the system has the same colour it has when both $V_1$ and $V_2$ are zero.}\n\\label{densityplot}\n\\end{figure*}\n\nResonant tunneling is well-known in elementary quantum mechanics. Typically, this phenomenon is studied in a double-barrier system. When the Fermi wavenumber bears a special relation with the inter-barrier separation and height, the reflection coefficient becomes zero and the Green functions of the system behave as if they are those of a translationally invariant system. Consider a symmetric double delta-function with strength $V_0 $ and separation $ d $. Define, $ \\xi_0 = k_F d $. The resonance condition in this case is well-known to be,\n\n\\begin{equation}\n\\hspace{0.5 in }V_0 \\sin{[\\xi_0]} +v_F\\cos{[\\xi_0]}=0 \\label{eq:cond}\n\\end{equation}\nResonant tunneling is studied for a square double barrier potential in one dimensions by Zhi Xiao et al. \\cite{xiao2012revisiting}. After taking the limiting cases of the square barriers tending to delta potentials and imposing the RPA limit, equation (\\ref{eq:cond}) is obtained.\n\nThe anomalous exponents of the correlation functions given in \\hyperref[AppendixA]{Appendix A} are plotted in fig. \\ref{resonance} in the vicinity of resonance to see the signatures of resonance tunneling on the Luttinger liquid Green function.\nIt may be seen that when the system is at resonance (depicted by the vertical line), all the anomalous exponents take exactly the same value that they take when there is no barrier at all.\\\\\n\nFor an asymmetric double delta system, $V(x)=V_1 \\delta(x+a)+ V_2 \\delta(x-a)$, the anomalous exponents can be calculated using NCBT. 
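\nBefore turning to the asymmetric case, the resonance condition (\\ref{eq:cond}) can be checked directly against the symmetric double-delta reflection amplitude quoted in Section 2. The minimal sketch below picks a value of $\\xi_0$ satisfying the condition and confirms that $|R|$ vanishes there (the numerical values are illustrative; $v_F=1$ as elsewhere).\n\\begin{verbatim}\n# Check: when V0*sin(xi0) + vF*cos(xi0) = 0, the symmetric double-delta\n# reflection amplitude of Section 2 vanishes.  Illustrative values only.\nimport cmath, math\n\nvF, V0 = 1.0, 1.0\nxi0 = math.pi - math.atan2(vF, V0)       # solves tan(xi0) = -vF/V0\nu = V0/vF\nden = (1 + 1j*u)**2 - (1j*u*cmath.exp(1j*xi0))**2\nR = -(2j*u**2*math.sin(xi0) + 2j*u*math.cos(xi0))/den\nT = 1.0/den\nprint(abs(R))                  # ~ 0 at resonance\nprint(abs(R)**2 + abs(T)**2)   # ~ 1, since |R|^2 + |T|^2 = 1\n\\end{verbatim}\n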
The form of the exponents is the same as given in \\hyperref[AppendixA]{Appendix A}, but the expression for the reflection amplitude is now different and is given by the following (here $\\xi_0= 2 k_F a$) \\cite{das2018quantum}:\n\\begin{equation}\n\\begin{aligned}\n\\label{asymmetric}\nR=&-\\frac{2 i \\frac{V_1 V_2}{v_F^2} \\sin{[\\xi_0]}+\\frac{2i}{v_F}(\\frac{V_1 e^{i \\xi_0}+V_2e^{-i\\xi_0}}{2})}{\\left(1+i\\frac{V_1+V_2}{v_F}+\\frac{i^2 V_1V_2}{v_F^2}\\right)+\\frac{V_1V_2}{v_F^2}e^{2 i \\xi_0}}\\\\\n\\end{aligned}\n\\end{equation}\nFor this case also, resonance is achieved when $V_1$ and $V_2$ become equal ($V_1=V_2=V_0$) and $V_0$ obeys the same condition, equation (\\ref{eq:cond}). Two of the anomalous exponents, X and A (expressions given in equations (\\ref{luttingerexponents}) and (\\ref{asymmetric})), for the asymmetric double delta system are plotted in Fig.~\\ref{densityplot}. The point of intersection of the cross lines marks the resonance condition, and it can easily be seen that the exponent takes the same value (color) at the resonance point as it takes for the no-impurity system ($V_1=V_2=0$). \n\n\\section{Conclusion}\nThe correlation functions of an inhomogeneous Luttinger liquid obtained using the non-chiral bosonization technique are used to calculate the conductance both in the Kubo formalism and in a tunneling experiment. The formulas are valid for any strength of the impurities as well as of the inter-particle interactions, and various standard results are obtained as limiting cases. The condition for resonant tunneling is also obtained, and the behavior of the correlation functions near resonance is described. \n\n\n\\section*{APPENDIX A: Two point functions using NCBT}\n\\label{AppendixA}\n\\setcounter{equation}{0}\n\\renewcommand{\\theequation}{A.\\arabic{equation}}\n\nThe full Green function is the sum of all the parts. The notion of weak equality is introduced, denoted by \\begin{small} $ A[X_1,X_2] \\sim B[X_1,X_2] $ \\end{small}. This really means \\begin{small} $ \\partial_{t_1} Log[ A[X_1,X_2] ] = \\partial_{t_1} Log[ B[X_1,X_2] ] $\\end{small}, assuming that $A$ and $B$ do not vanish identically. In addition to this, the finite temperature versions of the formulas below can be obtained by replacing $ Log[Z] $ by $ Log[ \\frac{\\beta v_F }{\\pi}Sinh[ \\frac{\\pi Z}{ \\beta v_F} ] ] $ where $ Z \\sim (\\nu x_1 - \\nu^{'} x_2 ) - v_a (t_1-t_2) $; singular cutoffs ubiquitous in this subject are suppressed in this notation for brevity - they have to be understood to be present. {\\bf Notation:} $X_i \\equiv (x_i,\\sigma_i,t_i)$ and $\\tau_{12} = t_1 - t_2$. 
\n\\scriptsize\n\n\\begin{equation}\n\\begin{aligned}\n\\Big\\langle T\\mbox{ }\\psi(X_1)\\psi^{\\dagger}(X_2) \\Big\\rangle \n=&\\Big\\langle T\\mbox{ }\\psi_{R}(X_1)\\psi_{R}^{\\dagger}(X_2) \\Big\\rangle +\\Big \\langle T\\mbox{ }\\psi_{L}(X_1)\\psi_{L}^{\\dagger}(X_2)\\Big\\rangle \\\\\n+&\\Big\\langle T\\mbox{ }\\psi_{R}(X_1)\\psi_{L}^{\\dagger}(X_2) \\Big\\rangle + \\Big\\langle T\\mbox{ }\\psi_{L}(X_1)\\psi_{R}^{\\dagger}(X_2)\\Big\\rangle \\\\\n\\label{break}\n\\end{aligned}\n\\end{equation}\n\n\\small\n\\begin{bf} Case I : $x_1$ and $x_2$ on the same side of the origin\\end{bf} \\\\ \\scriptsize\n\n\n\\begin{equation*}\n\\begin{aligned}\n\\Big\\langle T\\mbox{ }\\psi&_{R}(X_1)\\psi_{R}^{\\dagger}(X_2)\\Big\\rangle \\sim \n\\frac{(4x_1x_2)^{\\gamma_1}}{(x_1-x_2 -v_h \\tau_{12})^{P} (-x_1+x_2 -v_h \\tau_{12})^{Q}} \\\\\n\\times&\\frac{1}{ (x_1+x_2 -v_h \\tau_{12})^{X} (-x_1-x_2 -v_h \\tau_{12})^{X} (x_1-x_2 -v_F \\tau_{12})^{0.5}}\\\\\n\\Big\\langle T\\mbox{ }\\psi&_{L}(X_1)\\psi_{L}^{\\dagger}(X_2)\\Big\\rangle \\sim \n\\frac{(4x_1x_2)^{\\gamma_1}}{(x_1-x_2 -v_h \\tau_{12})^{Q} (-x_1+x_2 -v_h \\tau_{12})^{P}} \\\\\n\\times&\\frac{1}{ (x_1+x_2 -v_h \\tau_{12})^{X} (-x_1-x_2 -v_h \\tau_{12})^{X}(-x_1+x_2 -v_F \\tau_{12})^{0.5}}\\\\\n\\Big\\langle T\\mbox{ }\\psi&_{R}(X_1)\\psi_{L}^{\\dagger}(X_2)\\Big\\rangle \\sim \n\\frac{(2x_1)^{\\gamma_1}(2x_2)^{1+\\gamma_2}+(2x_1)^{1+\\gamma_2}(2x_2)^{\\gamma_1}}{2(x_1-x_2 -v_h \\tau_{12})^{S} (-x_1+x_2 -v_h \\tau_{12})^{S}} \\\\\n\\times&\\frac{1}{ (x_1+x_2 -v_h \\tau_{12})^{Y} (-x_1-x_2 -v_h \\tau_{12})^{Z}(x_1+x_2 -v_F \\tau_{12})^{0.5}}\\\\\n\\end{aligned}\n\\end{equation*}\n\n\\begin{equation}\n\\begin{aligned}\n\\Big\\langle T\\mbox{ }\\psi&_{L}(X_1)\\psi_{R}^{\\dagger}(X_2)\\Big\\rangle \\sim \n\\frac{(2x_1)^{\\gamma_1}(2x_2)^{1+\\gamma_2}+(2x_1)^{1+\\gamma_2}(2x_2)^{\\gamma_1}}{2(x_1-x_2 -v_h \\tau_{12})^{S} (-x_1+x_2 -v_h \\tau_{12})^{S}} \\\\\n\\times&\\frac{1}{ (x_1+x_2 -v_h \\tau_{12})^{Z} (-x_1-x_2 -v_h \\tau_{12})^{Y}(-x_1-x_2 -v_F \\tau_{12})^{0.5}}\\\\\n\\label{SS}\n\\end{aligned}\n\\end{equation}\n\n\\small\n\\begin{bf}Case II : $x_1$ and $x_2$ on opposite sides of the origin\\end{bf} \\\\ \\scriptsize\n\n\\begin{equation}\n\\begin{aligned}\n\\Big\\langle T\\mbox{ }\\psi&_{R}(X_1)\\psi_{R}^{\\dagger}(X_2)\\Big\\rangle \\sim \n\\frac{(2x_1)^{1+\\gamma_2}(2x_2)^{\\gamma_1} }{2(x_1-x_2 -v_h \\tau_{12})^{A} (-x_1+x_2 -v_h \\tau_{12})^{B}} \\\\\n\\times&\\frac{(x_1+x_2)^{-1}(x_1+x_2 + v_F \\tau_{12})^{0.5}}{ (x_1+x_2 -v_h \\tau_{12})^{C} (-x_1-x_2 -v_h \\tau_{12})^{D} (x_1-x_2 -v_F \\tau_{12})^{0.5}}\\\\\n&\\hspace{2cm}+\\frac{(2x_1)^{\\gamma_1} (2x_2)^{1+\\gamma_2}}{2(x_1-x_2 -v_h \\tau_{12})^{A} (-x_1+x_2 -v_h \\tau_{12})^{B}} \\\\\n\\times&\\frac{(x_1+x_2)^{-1}(x_1+x_2 - v_F \\tau_{12})^{0.5}}{ (x_1+x_2 -v_h \\tau_{12})^{D} (-x_1-x_2 -v_h \\tau_{12})^{C} (x_1-x_2 -v_F \\tau_{12})^{0.5}}\\\\\n\\Big\\langle T\\mbox{ }\\psi&_{L}(X_1)\\psi_{L}^{\\dagger}(X_2)\\Big\\rangle \\sim \n\\frac{(2x_1)^{1+\\gamma_2}(2x_2)^{\\gamma_1} }{2(x_1-x_2 -v_h \\tau_{12})^{B} (-x_1+x_2 -v_h \\tau_{12})^{A}} \\\\\n\\times&\\frac{(x_1+x_2)^{-1}(x_1+x_2 - v_F \\tau_{12})^{0.5}}{ (x_1+x_2 -v_h \\tau_{12})^{D} (-x_1-x_2 -v_h \\tau_{12})^{C} (-x_1+x_2 -v_F \\tau_{12})^{0.5}}\\\\\n&\\hspace{2cm}+\\frac{(2x_1)^{\\gamma_1} (2x_2)^{1+\\gamma_2}}{2(x_1-x_2 -v_h \\tau_{12})^{B} (-x_1+x_2 -v_h \\tau_{12})^{A}} \\\\\n\\times&\\frac{(x_1+x_2)^{-1}(x_1+x_2 + v_F \\tau_{12})^{0.5}}{ (x_1+x_2 -v_h \\tau_{12})^{C} (-x_1-x_2 -v_h \\tau_{12})^{D} (-x_1+x_2 -v_F 
\\tau_{12})^{0.5}}\\\\\n\\Big\\langle T\\mbox{ }\\psi&_{R}(X_1)\\psi_{L}^{\\dagger}(X_2)\\Big\\rangle \\sim \\mbox{ }0\\\\\n\\Big\\langle T\\mbox{ }\\psi&_{L}(X_1)\\psi_{R}^{\\dagger}(X_2)\\Big\\rangle \\sim \\mbox{ }0\\\\\n\\label{OS}\n\\end{aligned}\n\\end{equation}\n\\normalsize\nwhere\n\\footnotesize\n\\begin{equation}\nQ=\\frac{(v_h-v_F)^2}{8 v_h v_F} \\mbox{ };\\mbox{ } X=\\frac{|R|^2(v_h-v_F)(v_h+v_F)}{8 v_h (v_h-|R|^2 (v_h-v_F))} \\mbox{ };\\mbox{ }C=\\frac{v_h-v_F}{4v_h}\n\\label{luttingerexponents}\\end{equation}\n\\normalsize\nThe other exponents can be expressed in terms of the above exponents.\n\\footnotesize\n\\begin{equation*}\n\\begin{aligned}\n&P= \\frac{1}{2}+Q \\mbox{ };\\hspace{0.8 cm} S=\\frac{Q}{C}( \\frac{1}{2}-C) \\mbox{ };\\hspace{0.85 cm} Y=\\frac{1}{2}+X-C ; \\\\\n& Z=X-C\\mbox{ };\\hspace{0.8 cm} A=\\frac{1}{2}+Q-X \\mbox{ };\\hspace{0.8 cm} B=Q-X \\mbox{ };\\hspace{1 cm} \\\\\n&D=-\\frac{1}{2}+C \\mbox{ };\\hspace{.6 cm} \\gamma_1=X \\mbox{ };\\hspace{1.65 cm} \\gamma_2=-1+X+2C;\\\\\n\\end{aligned}\n\\end{equation*}\n\\normalsize\n\\section*{APPENDIX B: Conductance of a quantum wire}\n\\label{AppendixB}\n\\setcounter{equation}{0}\n\\renewcommand{\\theequation}{B.\\arabic{equation}}\n\n\nIn this section, the conductance of a quantum wire with no leads is discussed first using Kubo's formula and next using the idea that it is the outcome of a tunneling experiment.\n\\subsection{Kubo formalism}\nThe electric field is $ E(x,t) = \\frac{ V_g }{ L} $ between $ -\\frac{L}{2} < x < \\frac{L}{2} $ and $ E(x,t) = 0 $ for $ |x| > \\frac{L}{2} $. Here $ V_g $ is the Voltage between two extreme points. Thus a d.c. situation is being considered right from the start. This corresponds to a vector potential ( c is the velocity of light),\n\\[\nA(x,t) = \\left\\{\n \\begin{array}{ll}\n -\\frac{ V_g }{ L} (ct), & \\hbox{ $ -\\frac{L}{2} < x < \\frac{L}{2} $ ;} \\\\\n 0, & \\hbox{otherwise.}\n \\end{array}\n\\right.\n\\]\nThis means (since $ j \\approx j_s $, the slow part) ,\n\\begin{equation}\n\\begin{aligned}\n = &\\frac{ie}{c}\\sum_{ \\sigma^{'} }\n\\int^{L\/2}_{-L\/2} dx^{'}\\mbox{ } \\int_{-\\infty}^{t} dt^{'} \\\\\n&\\times\\frac{ V_g }{ L} (ct') < [j(x,\\sigma,t),j(x^{'},\\sigma^{'},t^{'})]>_{LL}\n\\label{gencond}\n\\end{aligned}\n\\end{equation}\n\n\\subsubsection{ Clean wire: $ |R| = 0 $ but $ v_0 \\neq 0 $ }\nUsing the Green function from equation (\\ref{SS}) and setting $|R|=0$, the current current commutation relation can be calculated as,\n\\footnotesize\n\\begin{equation}\n\\begin{aligned}\n<[j_s&(x,\\sigma,t),j_s(x',\\sigma',t')]> \n=-\\frac{v^2_F }{ 8\\pi^2 } \\mbox{ } \\sum_{ \\nu = \\pm 1 }\n (2 \\pi i) \\\\\n&\\partial_{ v_F t' }\\left( \\delta( x-x' + \\nu v_h(t-t') ) + \\sigma \\sigma' \\mbox{ }\\delta( x-x' + \\nu v_F(t-t') ) \\right)\n\\label{cleancond}\n\\end{aligned}\n\\end{equation}\n\\normalsize\nInserting equation (\\ref{cleancond}) into equation (\\ref{gencond}), the following is obtained.\\footnotesize\n\\begin{equation*}\n\\begin{aligned}\n = \\frac{ie}{c}\\sum_{ \\sigma^{'} }\n\\int^{\\frac{L}{2}}_{-\\frac{L}{2}} dx^{'} \\int_{-\\infty}^{t} dt^{'} \\mbox{ }\\frac{ V_g }{ L} (ct') \n\\Big(\\frac{-v^2_F }{ 8\\pi^2 } \\mbox{ } \\sum_{ \\nu = \\pm 1 }\n (2 \\pi i) \\mbox{ }\\\\\n\\times&\\partial_{ v_F t' }\\left( \\delta( x-x' + \\nu v_h(t-t') ) \n\\mbox{ }+ \\sigma \\sigma' \n\\delta( x-x' + \\nu v_F(t-t') ) \\right)\\Big)\n\\end{aligned}\n\\end{equation*}\\normalsize\nFinally,\n\\[\n = -\n\\mbox{ } V_g \\frac{e }{ (2\\pi) } \\frac{ v_F }{ 
v_h}\n\\]\nor,\n\\[\nI = (-e) =\n\\mbox{ } V_g \\frac{e^2 }{ (2\\pi) } \\frac{ v_F }{ v_h}\n\\]\nThis gives the formula for the conductance (per spin) for a clean quantum wire with interactions,\n\\[\nG = \\frac{ e^2}{2\\pi } \\frac{ v_F }{ v_h}\n\\]\n\\normalsize\nor in proper units,\n\\[\n\\begin{boxed}\n{G = \\frac{ e^2}{2\\pi \\hbar}\\mbox{ } \\frac{ v_F }{ v_h} = \\frac{ e^2}{h} \\mbox{ }\\frac{ v_F }{ v_h} }\n\\end{boxed}\n\\]\nA comparison with standard g-ology with the present chosen model gives the following identifications (Eq.(2.105) of Giamarchi \\cite{giamarchi2004quantum}).\n\\begin{equation*}\n\\begin{aligned}\n&g_{1,\\perp} = g_{1,\\parallel} = 0\n\\\\&\ng_{2,\\perp} = g_{2,\\parallel} = g_{4,\\perp} = g_{4,\\parallel} = v_0\n\\\\&\ng_{ \\rho } = g_{1,\\parallel} - g_{2,\\parallel} - g_{ 2, \\perp} = 0-v_0-v_0 = -2v_0\n\\\\&\ng_{ \\sigma }= g_{1,\\parallel} - g_{2,\\parallel} + g_{ 2, \\perp}= 0-v_0+v_0 = 0\n\\\\\n&\ng_{4,\\rho} = g_{4,\\parallel}+ g_{ 4,\\perp} = 2 v_0\n\\\\&\ng_{4,\\sigma} = g_{4,\\parallel} - g_{ 4,\\perp} = 0\n\\\\&\ny_{ \\rho } = g_{ \\rho }\/( \\pi v_F ) = - \\frac{2 v_0 }{ \\pi v_F }\n\\\\&\ny_{ \\sigma } = g_{ \\sigma } \/ ( \\pi v_F ) = 0\n\\\\&\ny_{4,\\rho} = g_{4,\\rho }\/(\\pi v_F) = g_{4,\\rho }\/(\\pi v_F) = 2 v_0\/(\\pi v_F)\n\\\\&\ny_{4,\\sigma} = g_{4,\\sigma }\/(\\pi v_F) = 0\n\\end{aligned}\n\\end{equation*}\n\\begin{equation*}\n\\begin{aligned}\nu_{ \\rho } =& v_F \\sqrt{ (1+y_{4,\\rho}\/2)^2 -(y_{\\rho}\/2)^2 }\\\\\n =& v_F \\sqrt{ 1+2v_0\/(\\pi v_F) } \\equiv v_h\n\\end{aligned}\n\\end{equation*}\n\\begin{equation*}\\small\n\\begin{aligned}\nK_{ \\rho } =& \\sqrt{ \\frac{1 + y_{4,\\rho}\/2+y_{\\rho}\/2}{1 + y_{4,\\rho}\/2-y_{\\rho}\/2} }\n = \\sqrt{ \\frac{1 }{1 + 2v_0\/(\\pi v_F)} } = \\frac{ v_F }{ v_h }\n\\end{aligned}\n\\end{equation*}\\normalsize\n\\[\nu_{ \\sigma } = v_F \\sqrt{ (1+ y_{4,\\sigma}\/2)^2 - (y_{\\sigma}\/2)^2 } = v_F\n\\]\n\\[\nK_{\\sigma} = \\sqrt{ \\frac{1 + y_{4,\\sigma}\/2 + y_{\\sigma}\/2 }{1 + y_{4,\\sigma}\/2 - y_{\\sigma}\/2 } } = 1\n\\]\n\nThis gives,\n\n\\[\n\\begin{boxed}\n{G = \\frac{ e^2}{h} \\mbox{ }\\frac{ v_F }{ v_h} = \\frac{ e^2}{h} \\mbox{ } K_{\\rho}}\n\\end{boxed}\n\\]\nwhich is the standard result for a clean quantum wire.\n\n\n\n\\subsubsection{ The general case: $ |R| > 0 $ and $ v_0 \\neq 0 $ }\n\nAgain, using the Green function from equation (\\ref{SS}) for general value of $|R|$, the current current commutation relation can be calculated as,\n\\begin{equation*}\n\\begin{aligned}\n<[&j_s(x,\\sigma,t),j_s(x',\\sigma',t')]> \\\\\n=& - (2 \\pi i) \\frac{v_F v_h^2 }{ 8\\pi^2 v_h } \\mbox{ }\\partial_{v_h t'} \\sum_{ \\nu = \\pm 1 }\\bigg ( \\delta ( \\nu(x-x') + v_h(t-t') )\n\\\\&\\hspace{1.5cm}- \\frac{v_F }{v_h} \\mbox{ }Z_h\\mbox{ }\n\\delta ( \\nu(|x|+|x' |) + v_h(t-t') )\n\\bigg)\n\\\\\n&\n - (2 \\pi i)\n\\frac{\\sigma\\sigma' v_F^2}{ 8\\pi^2 } \\mbox{ } \\partial_{v_Ft'}\\sum_{ \\nu = \\pm 1 }\\bigg ( \\delta ( \\nu(x-x') + v_F(t-t') )\n\t\\\\&\\hspace{1.5cm}-|R|^2 \\delta ( \\nu(|x|+|x' |) + v_F(t-t') )\n\\bigg)\n\\end{aligned}\n\\end{equation*}\nwhere,\n\\[\n\\hspace{1 in} Z_h = \\frac{ |R|^2 }{ \\bigg( 1 - \\frac{(v_h-v_F)}{ v_h }\n |R|^2 \\bigg) }\n\\]\nThus,\n\\begin{equation*}\n\\begin{aligned}\n =& ie \\sum_{ \\sigma^{'} }\n\\int^{L\/2}_{-L\/2} dx^{'}\\mbox{ } \\int_{-\\infty}^{t} dt^{'}\\partial_{v_ht^{'}} \\mbox{ }\\frac{ V_g }{ L}\n (2 \\pi i) \\frac{v_F }{ 8\\pi^2 } \\mbox{ }\\\\\n&\\sum_{ \\nu = \\pm 1 }\\bigg ( \\theta( -\\nu(x-x') - v_h(t-t') )\n\\\\&- \\frac{v_F }{v_h} 
\\mbox{ }Z_h\\mbox{ }\n\\theta ( -\\nu(|x|+|x' |) - v_h(t-t') )\n\\bigg)\n\\end{aligned}\n\\end{equation*}\ntherefore,\n\\[\n =\n\\frac{2 ie }{v_h}\n V_g\n (2 \\pi i) \\frac{v_F }{ 8\\pi^2 } \\mbox{ }\\bigg (1\n- \\frac{v_F }{v_h} \\mbox{ }Z_h\n\\bigg)\n\\]\nThe conductance of a quantum wire without leads but in the presence of barriers and wells is,\n\\[\nG = \\frac{ e^2 }{(2\\pi)}\n \\frac{v_F }{ v_h } \\mbox{ }\\bigg (1\n- \\frac{v_F }{v_h} \\mbox{ }Z_h\n\\bigg)\n\\]\nHence the general formula for the conductance of a quantum wire without leads but with electrons experiencing forward scattering short-range mutual interactions\nand in the presence of a finite number of barriers and wells clustered around an origin is (in proper units),\n\\begin{equation}\n\\begin{boxed}\n{G = \\frac{ e^2 }{h}\n \\frac{v_F }{ v_h } \\mbox{ }\\bigg (1\n- \\frac{v_F }{v_h} \\mbox{ }Z_h\n\\bigg)}\n\\end{boxed}\n\\end{equation}\nThe above general formula agrees with the three well known limiting cases.\n\\\\ \\mbox{ } \\\\\n(i) when $ v_h = v_F $, Landauer's formula $ G = \\frac{ e^2 }{ h } \\mbox{ }|T|^2 $ is recovered.\n\\\\ \\mbox{ } \\\\\n(ii) when $ |R| = 0 $, the formula $ G = \\frac{ e^2 }{ h } \\mbox{ }K_{\\rho} $ is also recovered.\n\\\\ \\mbox{ } \\\\\n(iii) when $ |R| = 1 $, $ G = 0 $ regardless of what $ v_h $ is.\n\\\\ \\mbox{ } \\\\\n\n\n\\subsection{ Conductance from a tunneling experiment }\n\nIf the conduction process is envisaged as a tunneling phenomenon as against the usual Kubo formula based approach which involves relating conductance to current-current correlation, a qualitatively different formula for the conductance is obtained.\n\n\n\nFirst observe that the quantity $ |T|^2 $ and $ K_{ \\rho } $ both serve as a ``transmission coefficient\" - the former when mutual interactions are absent but barriers and wells are present and the latter vice versa. Both these may be related to spectral function of the field operator (single particle spectral function) as follows.\n\\begin{equation*}\n\\begin{aligned}\n&v_F\\int^{\\infty}_{-\\infty}dt\\mbox{ }<\\{ \\psi_{ \\nu } (x,\\sigma,t) , \\psi^{\\dagger}_{ \\nu } (x',\\sigma,0) \\}>\n \\\\&= -(2\\pi i)\\sum_{ \\gamma,\\gamma^{'} = \\pm 1 } \\theta( \\gamma x ) \\theta( \\gamma^{'} x^{'}) g_{ \\gamma,\\gamma^{'} }(\\nu,\\nu)\n\\\\&\nv_F\\int^{\\infty}_{-\\infty}dt\\mbox{ }<\\{ \\psi_{ \\nu } (\\nu \\frac{L}{2},\\sigma,t) , \\psi^{\\dagger}_{ \\nu } (-\\nu \\frac{L}{2},\\sigma,0) \\}>\n\\\\&= -(2\\pi i)g_{ \\nu,-\\nu }(\\nu,\\nu)\n\\\\&\nv_F\\int^{\\infty}_{-\\infty}dt\\mbox{ }<\\{ \\psi_{ \\nu } (\\nu \\frac{L}{2},\\sigma,t) , \\psi^{\\dagger}_{ \\nu } (-\\nu \\frac{L}{2},\\sigma,0) \\}>\n = T\n\\end{aligned}\n\\end{equation*}\nwhere $g_{ \\gamma,\\gamma^{'} }(\\nu,\\nu)$ are functions of the reflection (R) and the transmission (T) amplitudes of the system and is given explicitly as follows.\n\\footnotesize\n\\begin{equation}\n\\begin{aligned}\n\\hspace*{-0.2 cm}\n\\label{gexp}\ng_{\\gamma_1,\\gamma_2} (\\nu_1,\\nu_2)=\\frac{i}{2\\pi}& \\Big[ \\delta_{\\nu_1,\\nu_2} \\delta_{\\gamma_1,\\gamma_2} \\\\\n&+(T \\delta_{\\nu_1,\\nu_2}+R \\delta_{\\nu_1,-\\nu_2})\\delta_{\\gamma_1,\\nu_1}\\delta_{\\gamma_2,-\\nu_2}\\\\\n&+(T^{*} \\delta_{\\nu_1,\\nu_2}+R^{*} \\delta_{\\nu_1,-\\nu_2})\\delta_{\\gamma_1,-\\nu_1}\\delta_{\\gamma_2,\\nu_2}\\Big]\n\\end{aligned}\n\\end{equation}\n\\normalsize\nFrom this point of view, the conductance is related to the magnitude of the above complex number. 
Choosing it to be proportional to the magnitude of the complex number (rather than the square of the magnitude) allows perfect agreement with the RG equations of Matveev et al. \\cite{matveev1993tunneling} as we have seen in the main text ($|T|$ is the magnitude of the transmission amplitude of free fermions plus impurity):\\small\n\\begin{equation}\nG = \\frac{ e^2 }{h} \\mbox{ }|T|\\mbox{ }\n| v_F\\int^{\\infty}_{-\\infty}dt\\mbox{ }<\\{ \\psi_{ R } ( \\frac{L}{2},\\sigma,t) , \\psi^{\\dagger}_{ R } (-\\frac{L}{2},\\sigma,0) \\}>\n |\n \\label{TUNNEL}\n\\end{equation}\\normalsize\nNote that the above formula is {\\bf{not related}} to the square of the dynamical density of states. The dynamical density of states is\nequal-space and unequal time Green function. For tunneling, an electron is injected at $ x = - L\/2 $\nand collected at $ x^{'} = + L\/2 $ as is the case here which is unequal-space unequal-time Green function i.e. the Green function for the electron traversing the impurity.\nTechnically speaking, the g-ology methods are able to handle only the no barrier case and the half line case properly hence for a weak link they are sometimes forced to surmise that conductance has something to do with dynamical density of states for a half line near the weak link. The present approach is not only different but physically more sensible and compelling. Using the Green function from equation (\\ref{OS}),\n\\footnotesize\n\\begin{equation*}\n\\begin{aligned}\n\\Big\\langle T\\psi_R(\\frac{L}{2},\\sigma,t)&\\psi_R^{\\dagger}(-\\frac{L}{2},\\sigma,0)\\Big\\rangle\n=\\frac{i}{2\\pi}\\mbox{ }e^{-\\frac{1}{2} \\log{[L-v_Ft]}}\n\\\\&\\times e^{-\\frac{1}{2} \\log{[L-v_ht]}}e^{-\\frac{(v_h-v_F)^2}{8 v_h v_F} \\log{\\vline \\frac{ L^2-(v_ht)^2 }{ L_{ \\omega }^2 }\\vline }}\n\\end{aligned}\n\\end{equation*}\n\\normalsize\nHence,\n\\footnotesize\n\\begin{equation*}\n\\begin{aligned}\n\\Big\\langle \\{\\psi_R(\\frac{L}{2}&,\\sigma,t),\\psi_R^{\\dagger}(-\\frac{L}{2},\\sigma,0)\\}\\Big\\rangle\n=\\frac{i}{2\\pi}\\mbox{ }e^{-\\frac{1}{2} \\log{[L-v_F(t-i\\epsilon)]}}\\\\\n&\\times e^{-\\frac{1}{2} \\log{[L-v_h(t-i\\epsilon)]}}e^{-\\frac{(v_h-v_F)^2}{8 v_h v_F} \\log{\\vline \\frac{ L^2-(v_h(t-i\\epsilon))^2 }{ L_{ \\omega }^2 }\\vline }}\\\\\n&\\hspace{1 in} - \\frac{i}{2\\pi}\\mbox{ }e^{-\\frac{1}{2} \\log{[L-v_F(t+i\\epsilon)]}}\n\\\\&\\times e^{-\\frac{1}{2} \\log{[L-v_h(t+i\\epsilon)]}}e^{-\\frac{(v_h-v_F)^2}{8 v_h v_F} \\log{\\vline \\frac{ L^2-(v_h(t+i\\epsilon))^2 }{ L_{ \\omega }^2 }\\vline }}\n\\end{aligned}\n\\end{equation*}\\normalsize\nwhile integrating over $ t $ the only regions that contribute are $ L-v_F t \\approx 0 $ and $ L - v_h t \\approx 0 $. When $ v_h \\neq v_F $ these two are different regions. Set $ L - v_F t = y $ then $ L - v_h t =\nL - \\frac{v_h}{v_F} (L-y) $ and $ L + v_h t =\nL + \\frac{v_h}{v_F} (L-y) $. The implication is, integration over $ t $ is now integration over $ y $ and this is important only when $ y $ is close to zero. Next set $ L - v_h t = y^{'} $ then $ L + v_h t = 2L -y^{'} $ and\n $ L - v_F t =\nL - \\frac{v_F}{v_h} (L-y^{'}) $ and the integrals are important only when $ y^{'} $ is close to zero. 
This means,\n\\small\n\\begin{equation*}\n\\begin{aligned}\nv_F &\\int^{\\infty}_{-\\infty } dt \\mbox{ }\\Big\\langle \\{\\psi_R(\\frac{L}{2},\\sigma,t),\\psi_R^{\\dagger}(-\\frac{L}{2},\\sigma,0)\\}\\Big\\rangle\n\\\\=&\n \\int^{\\infty}_{-\\infty } dy \\mbox{ }\\frac{i}{2\\pi}\\mbox{ }\n \\left( e^{-\\frac{1}{2} \\log{[y+v_Fi\\epsilon]}} - e^{-\\frac{1}{2} \\log{[y-v_Fi\\epsilon]}} \\right) \\mbox{ }\\\\\n&\\hspace{1.2cm}e^{-\\frac{1}{2} \\log{[L (1- \\frac{v_h}{v_F}) + \\frac{v_h}{v_F} y ]}}\ne^{-\\frac{(v_h-v_F)^2}{8 v_h v_F} \\log{\\vline \\frac{ L^2- \\frac{v^2_h}{v^2_F} (L-y)^2 }{ L_{ \\omega }^2 }\\vline }}\n\\\\\n+& \\frac{v_F}{v_h} \\int^{\\infty}_{-\\infty } dy^{'} \\mbox{ }\\frac{i}{2\\pi}\\mbox{ }\\left( e^{-\\frac{1}{2} \\log{[y^{'} + v_h i\\epsilon ]}} -e^{-\\frac{1}{2} \\log{[y^{'} - v_h i\\epsilon ]}} \\right)\\\\\n&\\hspace{1.2cm}e^{-\\frac{1}{2}\\log{[L (1- \\frac{v_F}{v_h}) + \\frac{v_F}{v_h} y^{'} ]}}\n \\mbox{ } e^{-\\frac{(v_h-v_F)^2}{8 v_h v_F} \\log{\\vline \\frac{ y^{'}(2L-y^{'}) }{ L_{ \\omega }^2 }\\vline }}\n\\end{aligned}\n\\end{equation*}\n\\normalsize\nOnly the dependence on $ L $ is of interest. Write $ y = L \\mbox{ }s $ and $ y^{'} = L \\mbox{ } s^{'}$. Hence,\n\\begin{equation*}\n\\begin{aligned}\nv_F \\int^{\\infty}_{-\\infty } dt& \\mbox{ }\\Big\\langle \\{\\psi_R(\\frac{L}{2},\\sigma,t),\\psi_R^{\\dagger}(-\\frac{L}{2},\\sigma,0)\\}\\Big\\rangle\\\\\t\n&\\sim e^{-\\frac{(v_h-v_F)^2}{8 v_h v_F} \\log{\\vline \\frac{ L^2 }{ L_{ \\omega }^2 }\\vline }}\n\\end{aligned}\n\\end{equation*}\nThis means the tunneling conductance of a clean (no barrier) quantum wire scales as,\n\\begin{equation*}\n\\begin{aligned}\nG_{clean} \\sim \\frac{e^2}{h } &\\mbox{ } \\mbox{ }\ne^{-\\frac{(v_h-v_F)^2}{4 v_h v_F} \\log{\\vline \\frac{ L }{ L_{ \\omega } }\\vline }}\n\\sim& \\left( \\frac{ L_{ \\omega } }{ L } \\right)^{ \\frac{1}{4} ( K_{ \\rho } + \\frac{1}{ K_{ \\rho } } - 2 ) }\n\\end{aligned}\n\\end{equation*}\n\\normalsize\nwhere $ L_{ \\omega } = \\frac{ v_F }{ k_B T } $ is the length scale associated with temperature\n(or frequency since $ k_BT $ is interchangeable with $ \\omega $). It says that at low temperatures, the tunneling d.c. conductance of a clean quantum wire with no leads but\n with interactions ($ v_h \\neq v_F $) diverges as a power law with exponent $ \\frac{1}{4} ( K_{ \\rho } + \\frac{1}{ K_{ \\rho } } - 2 ) > 0 $.\n Fortuitously, the magnitude of this exponent matches with the exponent of the dynamical density of states of a clean wire (no impurity). However when impurities (or a weak link) is present, there is no guarantee that this coincidence will persist.\n For a clean wire there is nothing for a electron to tunnel across so this exercise is pointless. What should be studied is tunneling across a weak link. The general case involves including a finite number finite barriers and wells clustered around the origin. This case is solved elegantly here where a closed formula for the conductance exponents may be obtained\nunlike in competing approaches found in the literature where a combination of RG and other approaches are needed that fall well short of providing a closed expression for the exponents. {\\it{ More importantly, the present approach is able to provide an analytical interpolation from the weak barrier limit (see above) to the weak link limit to be discussed below - something the competing approaches are incapable of doing without solving complicated RG flow equations, often numerically. 
}}\n\nIn the general case with the barriers and wells, the Green function for points on opposite sides of the origin has a form that is qualitatively different from the form when the points are on the same side of the origin. This is the most striking prediction of this work.\n\n\n\\subsection{ With the impurities }\n\n\\noindent Consider the general Green function for $ xx^{'} < 0 $ (equation (\\ref{OS})). From that it is possible to conclude\n($ W = g_{1,-1}(1,1)\\theta(x)\\theta(-x')+g_{-1,1}(1,1)\\theta(-x)\\theta(x') $),\n\\begin{equation}\n\\begin{aligned}\n&=\\frac{v_F+v_h}{2 \\sqrt{v_F v_h}} \\mbox{ }g_{1,-1}(1,1)\\mbox{ }\\\\\n&e^{(2X+2C)\\log{[L]}}\\mbox{ }e^{-\\frac{1}{2} \\log{[L-v_Ft ]}}e^{- \\frac{1}{2}\\log{[L-v_ht]}}\\\\\n&e^{- (Q-X)\n\\log{[L^2-(v_ht)^2]}}e^{-C\n\\log{[-(v_ht)^2]}}\n\\end{aligned}\n\\end{equation}\nSince $ G \\sim | v_F \\int^{ \\infty }_{-\\infty } dt <\\{\\psi_R(\\frac{L}{2},\\sigma,t),\\psi_R^{\\dagger}(-\\frac{L}{2},\\sigma,0)\\}> | $ it is possible to read off the conductance exponent as follows,\n\\begin{equation}\nG \\sim \\left( \\frac{ L }{ L_{ \\omega } }\\right)^{-2Q } \\mbox{ } \\left( \\frac{ L }{ L_{ \\omega} }\\right)^{ 4X }\n\\label{GGEN}\n\\end{equation}\nwhere $ Q=\\frac{(v_h-v_F)^2}{8 v_h v_F} $ and\n $ X=\\frac{|R|^2(v_h-v_F)(v_h+v_F)}{8 v_h (v_h-|R|^2 (v_h-v_F))} $.\n\\\\\nIt is easy to see that for a vanishing barrier $ |R| \\rightarrow 0 $, the earlier result of the conductance of a clean quantum wire is recovered. The other interesting limit is the weak link limit where $ |R| \\rightarrow 1 $. The limiting case of the weak link is two semi-infinite wires.\nIn this case,\n\\begin{equation}\nG_{weak-link} \\sim \\left( \\frac{ L }{ L_{ \\omega} }\\right)^{ \\frac{ (v_h + v_F)^2-4v^2_F }{ 4 v_h v_F } }\n\\label{GWEAKLINK}\n\\end{equation}\nHence the d.c. conductance scales as $ G_{weak-link} \\sim (k_B T)^{ \\frac{ (v_h + v_F)^2-4v^2_F }{ 4 v_h v_F } } $. This formula is consistent with the\nassertions of Kane and Fisher (C. L. Kane and Matthew P. A. Fisher,\nPhys. Rev. Lett. {\\bf{68}}, 1220 (1992) \\cite{kane1992transport}), which show that at low temperatures $ k_B T \\rightarrow 0 $,\nfor a fixed $ L $, the conductance vanishes as a power law in the temperature if the interaction between the fermions is repulsive ($ v_h > v_F > 0 $) and diverges as a power law if the interactions between the fermions are attractive ($ v_F > v_h > 0 $). Their result is applicable to spinless fermions without leads, $ G_{weak-link-nospin} \\sim (k_B T)^{ \\frac{2}{K} - 2 } $; to compare with the result of the present work, this exponent has to be halved: $ G_{weak-link-with-spin} \\sim (k_B T)^{ \\frac{1}{K_{ \\rho } } - 1 } $.\nThis exponent is the same as what we have derived since $ \\frac{ (v_h + v_F)^2-4v^2_F }{ 4 v_h v_F } \\approx \\frac{1}{K_{ \\rho } } - 1 $ so long as $ v_h \\approx v_F $ (weak interactions).\n In general, the claim of the present work is that the temperature dependence of the tunneling d.c. conductance of a wire with no leads in the presence of barriers and wells and mutual interactions between particles is,\n\\[\nG \\sim (k_B T)^{ \\eta} ;\\mbox{ }\\mbox{ } \\mbox{ } \\eta = 4X - 2 Q\n\\]\n\nWhen $ \\eta > 0 $ the conductance vanishes at low temperatures as a power law - characteristic of a weak link. However, when $ \\eta < 0 $ the conductance diverges at low temperatures as a power law - characteristic of a clean quantum wire. 
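As a brief numerical illustration (not part of the original derivation), the short Python sketch below evaluates the exponent $ \eta = 4X - 2Q $ defined above for hypothetical velocity values and locates the reflection coefficient at which $ \eta $ changes sign; the result agrees with the closed-form crossover $ |R_c|^2 $ quoted in the next paragraph.
\begin{verbatim}
# Minimal sketch: evaluates eta = 4X - 2Q from the exponent formulas quoted
# above and finds the sign change numerically.  The velocities vF, vh are
# hypothetical example values.
import numpy as np

def eta(vF, vh, R2):
    Q = (vh - vF)**2 / (8.0 * vh * vF)
    X = R2 * (vh - vF) * (vh + vF) / (8.0 * vh * (vh - R2 * (vh - vF)))
    return 4.0 * X - 2.0 * Q

vF, vh = 1.0, 1.5                       # repulsive case: vh > vF
R2 = np.linspace(0.0, 1.0, 100001)
crossing = R2[np.where(np.diff(np.sign(eta(vF, vh, R2))))[0][0]]
print(crossing)                                  # ~0.1429
print(vh * (vh - vF) / (3.0 * vF**2 + vh**2))    # closed-form |R_c|^2, same value
# For |R|^2 below the crossing eta < 0 (conductance grows as T -> 0);
# above it eta > 0 (conductance vanishes), in line with the text.
\end{verbatim}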
This divergence should not be taken too literally, since it is based on the general validity of the surmise in Eq.(\\ref{TUNNEL}); it should instead be read as an indication of a saturation to a non-zero value.\nOf special interest is the situation $ \\eta = 0 $ where the conductance is independent of temperature. This crossover from a conductance that vanishes as a power law at low temperatures to one that diverges as a power law occurs at the reflection coefficient\n $ |R|^2 = |R_c|^2 \\equiv \\frac{v_h (v_h-v_F)}{3 v_F^2+v_h^2} $, which is valid only for repulsive interactions $ v_h > v_F $. For attractive interactions, $ \\eta < 0 $ for any $ |R|^2 $, which means\n the conductance always diverges as a power law at low temperatures. This means attractive interactions heal the chain for all reflection coefficients, including in the extreme weak link case.\n On the other hand, for repulsive interactions with $ |R| > |R_c| $, $ \\eta > 0 $ and the chain is broken (conductance vanishes) at low temperatures. For $ |R| < |R_c| $, $ \\eta < 0 $ and, even though the interactions are repulsive, the chain is healed (conductance diverges).\n\nNote that this section, which calculates the conductance, is based on a serendipitous surmise, equation (\\ref{TUNNEL}), which equates the tunneling conductance to a certain integral over the one-particle Green function. In hindsight, this surmise works only for temperatures small compared to the bandwidth and for repulsive interactions. Strictly speaking, we would have to apply a bias and properly calculate the current flowing in a system with bias, impurity, finite-bandwidth interactions and finite temperature. Not surprisingly, this is an ambitious project that will lead to a proper formula for the current flowing as a function of the bias and all the other parameters. We expect to recover the RG formulas of Matveev, Yue and Glazman in the limit of weak interactions for a general bandwidth and both attractive and repulsive interactions (not just the infinite-bandwidth repulsive interactions treated in the present manuscript). The main purpose of including this section is just to support the main result, namely the Green function of the system. For this purpose, the derivation of Eq.(8) of Matveev, Yue and Glazman, which we have carried out in the main text, is already sufficient.\n\n\n\n\\section*{Funding}\nA part of this work was done with financial support from the Department of Science and Technology, Govt. of India DST\/SERC: SR\/S2\/CMP\/46 2009.\\\\\n\n\\bibliographystyle{apsrev4-1}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec_intro} Introduction}\n\\label{sec:introduction}\n\nThe study of time-dependent solutions of the one-dimensional\nSchr\\\"{o}dinger equation is a frequent topic in many\nundergraduate textbooks on quantum mechanics. The problem of a Gaussian\nor minimum-uncertainty wavepacket solution for the case of a free particle\n(defined more specifically below) is the most typical example cited, often \nbeing worked out in detail, or at least explored in problems \\cite{texts}. \nThe emphasis is often on the time-dependent position spread for such \nsolutions, typically written in the forms\n\\begin{equation}\n(\\Delta x_t)^2 = \n(\\Delta x_0)^2\\left(1+\\left(\\frac{t}{t_0}\\right)^2\\right)\n = (\\Delta x_0)^2 + \\frac{(\\Delta p_0)^2 t^2}{m^2}\n\\label{not_general_case}\n\\end{equation}\nwhere the spreading time or coherence time can be defined by $t_0 \n\\equiv m\\Delta x_0\/\\Delta p_0$. 
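To attach a rough scale to the spreading time (an illustrative estimate, not taken from the works cited here), note that a minimum-uncertainty packet has $\Delta p_0 = \hbar/2\Delta x_0$, so that $t_0 = 2m(\Delta x_0)^2/\hbar$; the short computation below uses an electron localized to a hypothetical $1$ nm.
\begin{verbatim}
# Back-of-the-envelope sketch: spreading time t0 = m*dx0/dp0 with
# dp0 = hbar/(2*dx0).  The electron mass and the 1 nm width are
# hypothetical illustrative values.
hbar = 1.054571817e-34      # J s
m_e  = 9.1093837015e-31     # kg
dx0  = 1.0e-9               # m
dp0  = hbar / (2.0 * dx0)
t0   = m_e * dx0 / dp0      # equivalently 2*m_e*dx0**2/hbar
print(t0)                   # ~1.7e-14 s; per the equation above the width
                            # has grown by a factor of sqrt(2) at t = t0
\end{verbatim}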
Textbooks rightly point out the essentially\nclassical nature of much of this result, explained by the fact that \nthe higher momentum components of the wave packet outpace the slower ones, \ngiving a position-spread which eventually increases linearly with time as \n$\\Delta x_t \\approx \\Delta v_0 t$, where $\\Delta v_0$ is identified with \n$\\Delta p_0\/m$.\n\nThe form of the expression for $\\Delta x_t$ in Eqn.~(\\ref{not_general_case})\nis a special case of the most general possible form of the time-dependent\nspatial width of a one-dimensional wave packet solution of the \nfree-particle Schr\\\"{o}dinger equation which is well-known in the pedagogical\nliterature \\cite{baird} - \\cite{andrews}, but seemingly found in many fewer\ntextbooks \\cite{merzbacher}. This general case can be written\nin the form\n\\begin{equation}\n(\\Delta x_t)^2 = \n(\\Delta x_0)^2 +\n\\left\\langle \n(\\hat{x}-\\langle \\hat{x} \\rangle_0)\n(\\hat{p}-\\langle \\hat{p} \\rangle_0) \n+ \n(\\hat{p}-\\langle \\hat{p} \\rangle_0)\n(\\hat{x}-\\langle \\hat{x} \\rangle_0)\n\\right\\rangle_0 \\frac{t}{m}\n+ \\frac{(\\Delta p_0^2) t^2}{m^2}\n\\label{general_case}\n\\end{equation}\nwhere the coefficient of the term linear in $t$ measures a non-trivial \ncorrelation between the momentum- and position-dependence of the initial \nwave packet. \nWhile such correlations are initially not present in the standard Gaussian\nwave packet example routinely used in textbook analyses, which therefore\ngives rise to the simpler form in Eqn.~(\\ref{not_general_case}), \na non-vanishing $x-p$ correlation does develop for later times as has \nbeen discussed in at least\none well-known text \\cite{bohm} and several pedagogical articles \n\\cite{leblond}.\n\nFor wave packets which are constructed in such a way that large momentum \ncomponents ($p > \\langle \\hat{p} \\rangle_0$) are initially preferentially \nlocated in the `back' of the packet ($x < \\langle \\hat{x}\\rangle_0$), \nthe initial correlation can, in fact, be negative\nleading to time-dependent wave packets which initially shrink in size,\nwhile the long-time behavior of any 1D free particle wave packet is indeed \nalways dominated\nby the quadratic term in Eqn.~(\\ref{general_case}), consistent with standard\nsemi-classical arguments. (We stress that we will consider here only \nlocalized wave packets which are square-integrable, for which the evaluation \nof $\\Delta x_t$ and $\\Delta p_t$ is possible, and not pure plane wave states \nnor the special non-spreading, free-particle solutions discovered by Berry \nand Balazs \\cite{berry}.)\n\nFor the standard Gaussian or minimum uncertainty wave packet used in most \ntextbook examples, and in fact for any initial wave packet of the form \n$\\psi(x,0) = R(x)\\exp(ip_0(x-x_0)\/\\hbar)$ \nwhere $R(x)$ is a real function, this initial \ncorrelation vanishes and the more familiar special case of $\\Delta x_t$\nin Eqn.~(\\ref{not_general_case}) results, leading many students to believe\nthat it is the most general result possible. \nIt is, however, very straightforward to construct initial quantum states \nconsisting of simple Gaussian wave functions, such as squeezed states or \nlinear combination of Gaussians, which have the required initial \nposition-momentum correlations `built in', and which therefore exhibit \nthe general form \nin Eqn.~(\\ref{general_case}), including examples where the position-space\nwave packet can initially shrink in width. 
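Since Eqn.~(\ref{general_case}) is simply a quadratic in $t$ with positive curvature, it is worth recording when and by how much such a packet shrinks (abbreviating the symmetrized correlation term in Eqn.~(\ref{general_case}) as $C_0$, a shorthand introduced only for this remark):
\begin{equation}
t_{min} = -\frac{m\, C_0}{2 (\Delta p_0)^2}
\qquad
\mbox{and}
\qquad
(\Delta x_{min})^2 = (\Delta x_0)^2 - \frac{C_0^2}{4 (\Delta p_0)^2}
\, ,
\end{equation}
so that an initial contraction occurs precisely when $C_0 < 0$ and lasts until $t_{min}$, after which the familiar spreading takes over.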
Since these examples can be \nanalyzed with little or no more mathematical difficulty than the standard \nminimum-uncertainty cases commonly considered in textbooks \\cite{texts}, \nwe will focus on providing two such examples below. We will, however, also \nemphasize the utility of different ways of visualizing the time-dependent \nposition-momentum correlations suggested by the form in \nEqn.~(\\ref{general_case}). \n\n\nThe derivation of Eqn.~(\\ref{general_case}) has been most often\ndiscussed \\cite{baird}, \\cite{styer} using the evaluation of the \ntime-dependence of expectation values described by \n\\begin{equation}\n\\frac{d}{dt} \\langle \\hat{A} \\rangle\n= \\frac{i}{\\hbar} \\left\\langle [\\hat{H},\\hat{A}] \\right\\rangle\n\\label{time-development}\n\\end{equation}\nusing the free particle Hamiltonian, $\\hat{H} = \\hat{p}^2\/2m$,\nor related matrix methods \\cite{nicola}; since we are interested only\nin expectation values of operators ($\\hat{A} = \\hat{x}$ or $\\hat{p}$) \nwhich are themselves\nindependent of time, there is no additional $\\langle d\\hat{A}\/dt\\rangle$ term\nin Eqn.~(\\ref{time-development}). In the next section, we\nderive the necessary time-dependent expectation values of powers of\nposition and momentum \nin a complementary way, using very general momentum-space ideas.\n(Identical methods can then also be used to evaluate the general form \nof $\\Delta x_t$ for the related case of uniform acceleration, which we\ndiscuss in Appendix~\\ref{sec:appendix}.)\nThen in Sec.~\\ref{sec:standard} we briefly review the special case of the\nminimum-uncertainty Gaussian wave packet (to establish notation) focusing\non the introduction of useful tools to help visualize possible \ncorrelations between position and momentum in free particle wave\npackets, especially the direct visualization of the real\/imaginary\nparts of $\\psi(x,t)$, the time-dependent spatial distribution of kinetic \nenergy, as well as the Wigner quasi-probability distribution. \nThen, in Sec.~\\ref{sec:correlated}, we exhibit two cases of \ncorrelated wave packets with the general form of $\\Delta x_t$\nin Eqn.~(\\ref{general_case}), which are\nsimple extensions of these standard results. A similar\nexample, involving squeezed states, has been discussed in \nRef.~\\cite{ford}, \nbut we will focus here on understanding the detailed\nposition-momentum correlations which give rise to the term linear in \n$t$ in Eqn.~(\\ref{general_case}), especially using the techniques\noutlined in Sec.~\\ref{sec:standard} for their visualization.\nFinally, we make some concluding remarks as well as noting\nin an Appendix that very similar results (both for the general form of \nthe time-dependent $\\Delta x_t$ and for the exemplary cases studied\nhere) can be obtained for the Schr\\\"{o}dinger equation corresponding to\nthe case of constant acceleration.\n\n\n\n\n\\section{Time-dependent $\\Delta x_t$ using momentum-space wavefunctions}\n\\label{sec:momentum_space}\n\nWhile the general result for the free-particle $\\Delta x_t$ is most \noften obtained using formal methods involving the time-dependence of \nexpectation values as in Eqn.~(\\ref{time-development}), \none can also evaluate time-dependent powers of position and momentum \nfor a free particle in terms of the\ninitial wave packet quite generally in terms of the momentum-space\ndescription of the quantum state, namely $\\phi(p,t)$, obtaining the\nsame results, in a manner which is nicely complementary to more standard \nanalyses. 
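For comparison with the momentum-space route followed below, the standard operator-based derivation can be summarized in a few lines (a sketch using only Eqn.~(\ref{time-development}) with $\hat{H}=\hat{p}^2/2m$ and $[\hat{x},\hat{p}]=i\hbar$):
\begin{equation}
\frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m}
\, ,
\qquad
\frac{d}{dt}\langle \hat{x}^2 \rangle 
= \frac{\langle \hat{x}\hat{p}+\hat{p}\hat{x} \rangle}{m}
\, ,
\qquad
\frac{d}{dt}\langle \hat{x}\hat{p}+\hat{p}\hat{x} \rangle 
= \frac{2\langle \hat{p}^2 \rangle}{m}
\, ,
\end{equation}
while $\langle \hat{p} \rangle$ and $\langle \hat{p}^2 \rangle$ are constant; integrating these elementary equations in succession immediately reproduces Eqn.~(\ref{general_case}).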
Depending on the ordering of topics in a given quantum mechanics \ncourse syllabus, this discussion might well be applicable and understandable \nearlier in the curriculum than the more formal method. \n\nIn this approach, the most general momentum-space wave function \nwhich solves the free-particle time-dependent Schr\\\"{o}dinger equation \n\\begin{equation}\n\\frac{p^2}{2m}\\phi(p,t) = \\hat{H} \\phi(p,t) = \\hat{E} \\phi(p,t)\n= i\\hbar \\frac{\\partial}{\\partial t} \\phi(p,t)\n\\, , \n\\end{equation}\ncan be written in the form \n\\begin{equation}\n\\phi(p,t) = \\phi_{0}(p)\\, e^{-ip^2t\/2m\\hbar}\n\\end{equation}\nwith $\\phi(p,0) = \\phi_{0}(p)$ being the initial state wavefunction.\nThe $t$-dependent expectation values for powers of momentum are trivial \nsince\n\\begin{eqnarray}\n\\langle \\hat{p} \\rangle_t & = & \\int_{-\\infty}^{+\\infty}\n\\, p \\, |\\phi_{0}(p)|^2\\,dp \\equiv \\langle \\hat{p} \\rangle_0 \n\\label{p_1} \\\\\n\\langle \\hat{p}^2 \\rangle_t & = & \\int_{-\\infty}^{+\\infty}\n\\, p^2 \\, |\\phi_{0}(p)|^2\\,dp \\equiv \\langle \\hat{p}^2 \\rangle_0 \n\\label{p_2}\n\\end{eqnarray}\nso that \n\\begin{equation}\n(\\Delta p_t)^2 = \\langle \\hat{p}^2\\rangle_t - \\langle \\hat{p}\\rangle_t^2\n= \n\\langle \\hat{p}^2\\rangle_0 - \\langle \\hat{p}\\rangle_0^2\n= \n(\\Delta p_0)^2\n\\end{equation}\nas expected for a free-particle solution for which \n$|\\phi(p,t)|^2 = |\\phi_{0}(p)|^2$ is independent of time. \n\nIn this representation, the position operator is given by the \nnon-trivial form $\\hat{x} = i\\hbar (\\partial\/\\partial p)$, and \nthe time-dependent \nexpectation value of position can be written as\n\\begin{eqnarray}\n\\langle \\hat{x} \\rangle_t & = &\n\\int_{-\\infty}^{+\\infty}\n[\\phi(p,t)]^{*}\\, \\hat{x}\\, [\\phi(p,t)]\\,dp \\nonumber \\\\\n& = & \n\\int_{-\\infty}^{+\\infty}\n\\left[\\phi_{0}^{*}(p)\\,e^{+ip^2t\/2m\\hbar}\\right]\n\\,\n\\left(i\\hbar \\frac{\\partial}{\\partial p}\\right)\n\\left[\\phi_{0}(p)\\,e^{-ip^2t\/2m\\hbar}\\right]\\,\ndp \\nonumber \\\\\n& = & \n\\int_{-\\infty}^{+\\infty}\n[\\phi_{0}^{*}(p)] \\left(i\\hbar \\frac{\\partial }{\\partial p}\\right)\n[\\phi_{0}(p)]\\,dp\n+ \\frac{t}{m} \\int_{-\\infty}^{+\\infty} \\, p\\, |\\phi_{0}(p)|^2\\,dp\n\\nonumber \\\\\n& = & \\langle \\hat{x} \\rangle_0 + \\frac{t}{m} \\langle \\hat{p}\\rangle_0\n\\label{x_1}\n\\end{eqnarray}\nwhich is consistent with Ehrenfest's theorem for the essentially\nclassical behavior of $\\langle \\hat{x}\\rangle_t$. 
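As a quick numerical cross-check of this momentum-space route (a sketch, not part of the formal development; the grid, the parameter values, and the choice of a Gaussian profile of the type used later in Sec.~\ref{sec:standard} are all illustrative), one can verify $\langle \hat{x} \rangle_t = \langle \hat{x} \rangle_0 + t\langle \hat{p} \rangle_0/m$ directly:
\begin{verbatim}
# Minimal numerical sketch (hbar = m = 1): evaluates
# <x>_t = Int phi*(p,t) (i hbar d/dp) phi(p,t) dp on a momentum grid.
import numpy as np

hbar = m = 1.0
alpha, p0, x0 = 1.0, 2.0, -3.0               # hypothetical example values
p = np.linspace(p0 - 12.0, p0 + 12.0, 40001)
dp = p[1] - p[0]

def phi(t):
    # Gaussian profile with phase exp(-i p x0/hbar), freely evolved in time
    phi0 = (np.sqrt(alpha / np.sqrt(np.pi))
            * np.exp(-alpha**2 * (p - p0)**2 / 2.0)
            * np.exp(-1j * p * x0 / hbar))
    return phi0 * np.exp(-1j * p**2 * t / (2.0 * m * hbar))

for t in (0.0, 1.0, 2.5):
    f = phi(t)
    x_mean = ((np.conj(f) * (1j * hbar) * np.gradient(f, p)).sum() * dp).real
    print(t, x_mean, x0 + p0 * t / m)        # last two columns agree closely
\end{verbatim}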
\nThe same formalism can\nbe used to evaluate $\\langle \\hat{x}^2\\rangle_t$ and gives\n\\begin{equation}\n\\langle \\hat{x}^2\\rangle_t\n = \n\\langle \\hat{x}^2 \\rangle_0\n+ \\frac{t}{m} \\langle \\hat{x} \\hat{p} + \\hat{p} \\hat{x}\\rangle_0\n+ \\langle \\hat{p}^2\\rangle_0 \\frac{t^2}{m^2}\n\\label{x_2}\n\\end{equation}\nwhere one can use the general representation-independent commutation\nrelation $[\\hat{x},\\hat{p}] = i\\hbar$ to simplify the answer to this form.\nThe symmetric combination of position\nand momentum operators, written here as $(\\hat{x}\\hat{p}+\\hat{p}\\hat{x})$, \nwhich is obviously Hermitian, guarantees that this expression is manifestly \nreal.\n(Discussions in textbooks on symmetrizing products of non-commuting operators\nabound, but such results are seldom put into the context of being useful\nor natural in specific calculations, as is apparent in their use here.)\n\nCombining Eqns.~(\\ref{x_1}) and (\\ref{x_2}) then gives the most general\nform for the time-dependent spread in position to be\n\\begin{eqnarray}\n(\\Delta x_t)^2 & = & \\langle \\hat{x}^2\\rangle_t \n- \\langle \\hat{x}\\rangle_t^2 \\nonumber \\\\\n& = & \n\\left(\\langle \\hat{x}^2 \\rangle_0\n+ \\frac{t}{m} \\langle \\hat{x} \\hat{p} + \\hat{p} \\hat{x}\\rangle_0\n+ \\langle \\hat{p}^2\\rangle_0 \\frac{t^2}{m^2} \\right)\n\\nonumber \n- \\left(\\langle \\hat{x} \\rangle_0 + \\frac{t}{m} \\langle \\hat{p}\\rangle_0\\right)^2\n\\nonumber \\\\\n& = & \n(\\Delta x_0)^2 +\n\\left(\n\\langle \\hat{x} \\hat{p} + \\hat{p} \\hat{x} \\rangle_0\n- 2 \n\\langle \\hat{x} \\rangle_0 \\langle \\hat{p} \\rangle_0\n\\right)\n\\frac{t}{m}\n+ \\frac{(\\Delta p_0^2) t^2}{m^2} \n\\nonumber \\\\\n& = & \n(\\Delta x_0)^2 +\n\\left\\langle \n(\\hat{x} - \\langle \\hat{x} \\rangle_0)\n(\\hat{p} - \\langle \\hat{p} \\rangle_0) \n+ \n(\\hat{p} - \\langle \\hat{p} \\rangle_0)\n(\\hat{x} - \\langle \\hat{x} \\rangle_0)\n\\right\\rangle_0 \n\\frac{t}{m}\n+ \\frac{(\\Delta p_0^2) t^2}{m^2} \n\\, . \n\\end{eqnarray}\nWe have rewritten \nthe term linear in $t$ in a form which stresses that it is a correlation \nbetween $x$ and $p$, similar in form to related classical quantities \nsuch as the covariance in probability and statistics. Recall that for\ntwo classical quantities, $A$ and $B$, described by a joint probability \ndistribution, the covariance is defined as\n\\begin{equation}\ncov(A,B) = \n\\left \\langle\n\\left( A - \\langle A \\rangle \\right)\n\\left( B - \\langle B \\rangle \\right)\n\\right \\rangle\n= \\langle AB \\rangle - \\langle A \\rangle \\langle B \\rangle\n\\,. \n\\end{equation}\nAs we will see in the next section, there is no initial correlation for \nthe familiar minimum-uncertainty Gaussian wave packets. 
However, for simple \nvariations on the standard example, as in Sec.~\\ref{sec:correlated}, we will\nfind non-vanishing correlations, which we can visualize with the methods in\nSec.~\\ref{sec:standard}.\n\n\n\nWe stress that the notion of a time-dependent correlation between $x$ and \n$p$ at arbitrary times ($t>0)$ can be easily generalized from these results, \nand we can define a generalized covariance for these two variables \n\\cite{merzbacher} -- \\cite{leblond} \n(or any two operators, $\\hat{A}, \\hat{B}$) as\n\\begin{equation}\ncov(\\hat{x},\\hat{p};t) \\equiv \\frac{1}{2} \n\\left\\langle \n(\\hat{x} - \\langle \\hat{x} \\rangle_t)\n(\\hat{p} - \\langle \\hat{p} \\rangle_t) \n+ \n(\\hat{p} - \\langle \\hat{p} \\rangle_t)\n(\\hat{x} - \\langle \\hat{x} \\rangle_t)\n\\right\\rangle_t\n\\label{covariance}\n\\end{equation}\nwhere the additional factor of $1\/2$ accounts for the necessarily\nsymmetric combination which appears, compared to the classical\ndefinition. One can then speak\nof a time-dependent correlation coefficient defined by\n\\begin{equation}\n\\rho(x,p;t) \\equiv\n\\frac{cov(x,p;t)}{\\Delta x_t\\cdot \\Delta p_t}\n\\label{correlation_coefficient}\n\\end{equation}\nin analogy with related quantities from statistics. This correlation\ncan be shown \\cite{leblond} to satisfy the inequality\n\\begin{equation}\n[\\rho(x,p;t)]^2 \\leq 1 - \\left(\\frac{|\\langle [\\hat{x},\\hat{p}]\\rangle|}{2\\Delta x_t\n\\cdot \\Delta p_t}\\right)^2\n= 1- \\left(\\frac{\\hbar}{2\\Delta x_t\\cdot \\Delta p_t}\\right)^2\n\\end{equation}\nwhich vanishes for the standard minimum-uncertainty Gaussian\nat $t=0$, but which is non-zero for later times, as we will see below.\n\n\n\n\n\n\n\n\n\n\\section{Standard minimum-uncertainty Gaussian wave packets}\n\\label{sec:standard}\n\n\nThe standard initial minimum-uncertainty Gaussian wave packet, which gives \nthe familiar time-dependent spread in Eqn.~(\\ref{not_general_case}), \ncan be written in generality as \n\\begin{equation}\n\\phi_0(p) = \\phi_{(G)}(p,0) = \n\\sqrt{\\frac{\\alpha}{\\sqrt{\\pi}}}\n\\; e^{-\\alpha^2(p-p_0)^2\/2}\n\\; e^{-ipx_0\/\\hbar}\n\\label{initial_gaussian}\n\\end{equation}\nwhere $x_0,p_0$ are used to characterize the arbitrary initial central \nposition and momentum values respectively. This form gives \n\\begin{equation}\n\\langle \\hat{p} \\rangle_{t} = p_0\n\\, , \n\\qquad\n\\quad\n\\langle \\hat{p}^2 \\rangle_{t} = p_0^2 + \\frac{1}{2\\alpha^2}\n\\, ,\n\\qquad\n\\mbox{and}\n\\qquad\n\\Delta p_t = \\Delta p_0 = \\frac{1}{\\alpha \\sqrt{2}}\n\\label{momentum_results}\n\\end{equation}\nwhich are, of course, consistent with the general results in \nEqns.~(\\ref{p_1}) and (\\ref{p_2}).\n\n\nThe explicit form of the position-space wave function is given \nby Fourier transform as \n\\begin{equation}\n\\psi_{(G)}(x,t) = \\frac{1}{\\sqrt{2\\pi\\hbar}} \\sqrt{\\frac{\\alpha}{\\sqrt{\\pi}}}\n\\int_{-\\infty}^{+\\infty}\\, e^{ip(x-x_0)\/\\hbar}\\,\ne^{-\\alpha^2 (p-p_0)^2\/2}\\,\ne^{-ip^2t\/2m\\hbar}\\,dp\n\\end{equation}\nwhich can be evaluated in closed form (using the change of variables\n$q \\equiv p-p_0$ and standard integrals) to obtain\n\\begin{equation}\n\\psi_{(G)}(x,t) = \\frac{1}{\\sqrt{\\sqrt{\\pi} \\alpha \\hbar (1+it\/t_0)}}\n\\,\ne^{ip_0(x-x_0)\/\\hbar}\n\\, e^{-ip_0^2t\/2m\\hbar}\n\\,\ne^{-(x-x_0-p_{0}t\/m)^2\/2(\\alpha \\hbar)^2(1+it\/t_0)}\n\\label{free_particle_position_solution}\n\\end{equation}\nwhere $t_0 \\equiv m\\hbar \\alpha^2$ is the spreading time. 
\nThis then gives \n\\begin{equation}\n|\\psi_{(G)}(x,t)|^2 = \\frac{1}{\\sqrt{\\pi}\\beta_t}\n\\, e^{- [x-\\overline{x}(t)]^2\/\\beta_t^2}\n\\end{equation}\nwhere \n\\begin{equation}\n\\overline{x}(t) \\equiv x_0 + p_0t\/m\n\\qquad\n\\mbox{and}\n\\qquad\n\\beta_t \\equiv \\beta \\sqrt{1+(t\/t_0)^2}\n\\qquad\n\\mbox{with}\n\\qquad\n\\beta \\equiv \\alpha \\hbar\n\\,. \n\\end{equation}\nThis gives\n\\begin{equation}\n\\langle \\hat{x} \\rangle_t = \\overline{x}(t)\n\\quad\n\\qquad\n\\mbox{and}\n\\qquad\n\\quad\n\\langle \\hat{x}^2 \\rangle_t = [\\overline{x}(t)]^2 + \\frac{\\beta_t^2}{2},\n\\end{equation}\nso that\n\\begin{equation}\n(\\Delta x_t)^2 = \n\\frac{\\beta_t^2}{2}\n= \n\\frac{\\beta^2}{2}\n\\left(1+\\left(\\frac{t}{t_0}\\right)^2\\right)\n= (\\Delta x_0)^2 + (\\Delta p_0 t\/m)^2\n\\label{gaussian_result}\n\\end{equation}\nwhich is the familiar textbook result, and for $t=0$ has the minimum\nuncertainty product $\\Delta x_0 \\cdot \\Delta p_0 = \\hbar\/2$. \n\nIt is easy to confirm by direct calculation that there is no initial \n($t=0$) $x-p$ correlation ($cov(x,p;0)=0$) for this wavefunction, \nconsistent with\nthe lack of a term linear in $t$ in Eqn.~(\\ref{gaussian_result}). We \nemphasize that such correlations do indeed develop as the wavepacket \nevolves in time, which can be seen by examining the form of either the real or\nimaginary parts of $\\psi_{(G)}(x,t)$ as shown in Fig.~1 (where we specify\nthe model parameters used in that plot in the accompanying figure caption). \nWe note that for times $t> 0$, the `front end' of the wave packet shown \nthere is clearly more `wiggly' than the `back end' (simply count the nodes\non either side of $\\langle x \\rangle_t$.)\nThe time-dependent correlation function or covariance defined \nin Eqn.~(\\ref{covariance}) and correlation coefficient\nfrom Eqn.~(\\ref{correlation_coefficient}) are easily calculated \nfor this specific case to be\n\\begin{equation}\ncov(x,p;t) = \\frac{\\hbar}{2} \\left(\\frac{t}{t_0}\\right)\n\\qquad\n\\quad\n\\mbox{and}\n\\quad\n\\qquad\n\\rho(x,p;t) = \\frac{t\/t_0}{\\sqrt{1+(t\/t_0)^2}}\n\\label{standard_gaussian_correlations}\n\\end{equation}\nwhich clearly expresses the increasingly positive correlation of\nfast (slow) momentum components being preferentially in the leading \n(trailing) edge of the wave packet. We note that such correlations\nhave been discussed in Refs.~\\cite{bohm} and \\cite{leblond}.\n\n\n\nThis observation can also be described quantitatively by examining the \ndistribution of kinetic energy of such a free-particle Gaussian wavepacket \n\\cite{bassett}. 
\nIn this approach, the standard expression for the kinetic energy is \nrewritten using integration-by-parts in the form\n\\begin{equation}\n\\langle \\hat{T}\\rangle_{t}\n = \\frac{1}{2m}\\langle \\hat{p}^2\\rangle_{t}\n = -\\frac{\\hbar^2}{2m}\n\\int_{-\\infty}^{+\\infty} dx \\,\\psi^*(x,t) \\frac{\\partial ^2 \\psi(x,t)}{\\partial x^2} \n = \\frac{\\hbar^2}{2m}\\int_{-\\infty}^{+\\infty} dx \n\\left|\\frac{\\partial \\psi(x,t)}{\\partial x}\\right|^2 \n\\end{equation} \nwhich can be used to define a {\\it local kinetic energy density}, \n${\\cal T}(x,t)$, via\n\\begin{equation}\n{\\cal T}(x,t) \\equiv \n\\frac{\\hbar^2}{2m} \n\\left|\\frac{\\partial \\psi(x,t)}{\\partial x} \\right|^2\n\\qquad\n\\quad\n\\mbox{where}\n\\qquad\n\\quad\n\\langle \\hat{T} \\rangle_t = \\int_{-\\infty}^{+\\infty} {\\cal T}(x,t)\\,dx \n\\equiv T(t)\n\\, .\n\\label{kinetic_energy_distribution}\n\\end{equation}\nAs this notion is useful in systems other than for free particle states,\nwe allow for the possibility that the total kinetic energy varies with\ntime. \nSince this local density is clearly real and positive-definite, we can use it\nto visualize the distribution of kinetic energy (or `wiggliness') \nin any time-dependent wavefunction. We can then define similar quantities\nfor the kinetic energy in the `front' and\/or `back' halves of the wave\npacket, using $\\langle x\\rangle_t$ as the measuring point, via\n\\begin{equation}\nT^{(+)}(t) \\equiv \\int_{\\langle x \\rangle_t}^{+\\infty} {\\cal T}(x,t)\\,dx \n\\qquad\n\\quad\n\\mbox{and}\n\\quad\n\\qquad\nT^{(-)}(t) \\equiv \\int^{\\langle x \\rangle_t}_{-\\infty} {\\cal T}(x,t)\\,dx \n\\label{half_kinetic_energies}\n\\, . \n\\end{equation}\n\nFor the standard Gaussian wave packet in \nEqn.~(\\ref{free_particle_position_solution}), the local kinetic energy \ndensity is given by\n\\begin{equation}\n{\\cal T}_{(G)}(x,t) = \\frac{1}{2m}\n\\left( p_0^2 + \\left[\\frac{2[x-\\overline{x}(t)] p_0}{\\alpha^2\\hbar}\\right]\n\\left[\\frac{t\/t_0}{(1+t^2\/t_0^2)}\\right]\n+ \\frac{[x-\\overline{x}(t)]^2}{(\\alpha^2 \\hbar)^2 (1+t^2\/t_0^2)}\\right)\n|\\psi_{(G)}(x,t)|^2\n\\, . \n\\label{gaussian_case}\n\\end{equation}\nThe expectation value of the kinetic energy is correctly given by\n\\begin{equation}\nT_{(G)}(t) = \\int_{-\\infty}^{+\\infty}\\, {\\cal T}_{(G)}(x,t)\\,dx = \\frac{1}{2m} \n\\left(p_0^2 + \\frac{1}{2\\alpha^2}\\right)\n\\end{equation}\nand receives non-zero contributions from only the first and last terms in \nbrackets\nin Eqn.~(\\ref{gaussian_case}), since the term linear in \n$[x-\\overline{x}(t)]$ \nvanishes (when integrated over all space) for symmetry reasons. The\nindividual values of $T^{(\\pm)}_{(G)}(t)$ can also be calculated and \nare given by\n\\begin{equation}\nT^{(\\pm)}_{(G)}(t) \n= \\frac{1}{2m}\n\\left(\\frac{1}{2}\\right)\n\\left( \np_0^2 \n\\pm \n\\left(\\frac{2p_0}{\\alpha \\sqrt{\\pi}} \\right) \\frac{t\/t_0}{\\sqrt{1+t^2\/t_0^2}} \n+ \\frac{1}{2\\alpha^2}\n\\right)\n\\label{left_and_right_kinetic_energies}\n\\end{equation}\nwhich are individually positive definite. The time-dependent fractions\nof the total kinetic energy contained in the $(+)\/(-)$ (right\/left) halves \nof this standard wave packet are given by\n\\begin{equation}\nR^{(\\pm)}_{(G)}(t) \\equiv \n\\frac{T^{(\\pm)}_{(G)}(t)}{T^{(+)}_{(G)}(t) + T^{(-)}_{(G)}(t)}\n= \\frac{1}{2} \\pm \n \\left(\\frac{2}{\\sqrt{\\pi}}\\right)\n\\left( \\frac{(p_0\\alpha)}{(2(p_0\\alpha)^2+1)}\\right) \n\\frac{t\/t_0}{\\sqrt{1+t^2\/t_0^2}}\n\\label{define_r_function}\n\\,. 
\n\\end{equation}\nFor the model parameters used in Fig.~1, for $t=2t_0$ this corresponds \nto $R^{(+)}\/R^{(-)} = 56\\%\/44\\%$, consistent with the small, but obvious,\ndifference in the kinetic energy distribution seen by `node counting'.\n\n\n\nFinally, this growing correlation can be exhibited in yet another way, \nnamely through the Wigner quasi-probability distribution, defined by\n\\begin{eqnarray}\nP_{W}(x,p;t)\n & \\equiv &\n\\frac{1}{\\pi \\hbar}\n\\int_{-\\infty}^{+\\infty}\n\\psi^{*}(x+y,t)\\,\\psi(x-y,t)\\,e^{+2ipy\/\\hbar}\\,dy \\\\\n& = & \n\\frac{1}{\\pi \\hbar}\n\\int_{-\\infty}^{+\\infty}\n\\phi^*(p+q,t)\\, \\phi(p-q,t)\\, e^{-2ixq\/\\hbar}\\,dq\n\\label{wigner_function}\n\\, . \n\\end{eqnarray}\nThis distribution, first discussed by Wigner \\cite{wigner}, \nand reviewed extensively in the research \\cite{wigner_research}\nand pedagogical \\cite{wigner_pedagogical} literature (and even in the \ncontext of wave packet spreading \\cite{wigner_lee}),\nis as close as one can come to a quantum phase-space distribution,\nand while not directly measurable, can still be profitably used to \nillustrate any $x-p$ correlations. \nFor the standard minimum-uncertainty Gaussian wavefunctions defined by \nEqns.~(\\ref{initial_gaussian}) or (\\ref{free_particle_position_solution}), \none finds that \\cite{kim_noz}\n\\begin{equation}\nP_{W}(x,p;t) = \\frac{1}{\\hbar \\pi}\n\\, e^{-(p-p_0)^2 \\alpha^2}\n\\, e^{-(x-x_0-pt\/m)^2\/\\beta^2}\n= \nP_{W}(x-pt\/m,p;0)\n\\,.\n\\label{explicit_wigner_function}\n\\end{equation}\nContour plots of $P_{W}(x,p;t)$ corresponding to the time-dependent\nstandard Gaussian wave packet for two different times ($t=0$ and\n$t=2t_0$) are also shown at the bottom of Fig.~1, where the the \nelliptical contours with principal axes parallel to the $x,p$ \naxes for the $t=0$ case are indicative of the vanishing initial correlation, \nwhile the slanted contours at later times are consistent with the correlations\ndeveloping as described by Eqn.~(\\ref{standard_gaussian_correlations}).\n(We note that Bohm \\cite{bohm} uses a similar illustration, but discusses it \nonly in the context of classical phase space theory and Liouville's theorem.)\nThe visualization tools used in Fig.~1 (explicit plots of \n$Re[\\psi(x,t)]$, and the Wigner function) and the distribution of\nkinetic energy as encoded in Eqns.~(\\ref{left_and_right_kinetic_energies})\nor (\\ref{define_r_function}), \ncan then directly be used to examine the correlated wave packets we \ndiscuss in the next section.\n\nAs a final reminder about the quantum mechanical ``engineering''\nof model one-dimensional wavepackets, we recall that since an initial\n$\\phi_{0}(p)$ is related to the time-dependent $\\psi(x,t)$ for\nfree-particle solutions via\n\\begin{equation}\n\\psi(x,t) = \\frac{1}{\\sqrt{2\\pi\\hbar}}\\,\n\\int_{-\\infty}^{+\\infty}\\,\n\\left[\\phi_{0}(p)\\,e^{-ip^2t\/2m\\hbar}\\right]\\,e^{ipx\/\\hbar}\\,dp\n\\end{equation}\nthen the simple modification \n\\begin{equation}\n\\tilde{\\phi}_{0}(p) = \\phi_{0}(p)\\, e^{-ipa\/\\hbar}\\,e^{ip^2\\tau\/2m\\hbar}\n\\label{change_phi}\n\\end{equation}\nleads to the related position-space wavefunction satisfying\n\\begin{equation}\n\\tilde{\\psi}(x,t) = \\frac{1}{\\sqrt{2\\pi\\hbar}}\\,\n\\int_{-\\infty}^{+\\infty}\\,\n\\left[\\left(\\phi_{0}(p)\\, e^{-ipa\/\\hbar}\\,e^{ip^2\\tau\/2m\\hbar}\\right)\\,e^{-ip^2t\/2m\\hbar}\\right]\\,e^{ipx\/\\hbar}\\,dp \n = \\psi(x-a,t-\\tau)\n\\label{change_psi}\n\\end{equation}\nso that simple shifts in coordinate and time 
labels are possible, \nand squeezed states often make use of similar connections.\n\n\\section{Correlated Gaussian wave packets}\n\\label{sec:correlated}\n\n\\subsection{Squeezed states}\n\\label{subsec:squeezed}\n\nOne of the simplest modifications of a standard minimum-uncertainty\nGaussian initial state which induces non-trivial initial correlations\nbetween position and momentum is given by\n\\begin{equation}\n\\phi_{(S)}(p,0) = \n\\sqrt{\\frac{\\alpha}{\\sqrt{\\pi}}}\n\\; e^{-\\alpha^2(p-p_0)^2(1+iC)\/2}\n\\; e^{-ipx_0\/\\hbar}\n\\label{initial_squeezed}\n\\,. \n\\end{equation}\n(A similar version of a squeezed state, but with $\\psi(x,0)$ modified,\nhas been discussed in Ref.~\\cite{ford}.) Because the additional $C$ term\nis a simple phase, the modulus of $\\phi(p,t)$ is unchanged so that\nthe expectation values of momentum, $\\langle \\hat{p}\\rangle_0$ and\n$\\langle \\hat{p}^2 \\rangle_0$, and the momentum-spread, are still given \nby Eqn.~(\\ref{momentum_results}) as for the standard Gaussian example.\nHowever, there is now an obvious coupling between the usual `smooth'\n$\\exp(-\\alpha^2(p-p_0)^2\/2)$ term which describes the peak momentum values\nand the `oscillatory' $\\exp(-ipx_0\/\\hbar)$ terms which dictates the\nspatial location and spread, governed by the presence of the new $C$ term, \nwhich leads to a non-zero initial $x-p$ correlation.\n\n\nThe time-dependent position-space wavefunction is obtained via Fourier\ntransform with literally no more work than for the standard Gaussian and\none finds\n\\begin{equation}\n\\psi_{(S)}(x,t) = \\frac{1}{\\sqrt{\\sqrt{\\pi} \\beta (1+i[C+t\/t_0])}}\n\\,\ne^{ip_0(x-x_0)\/\\hbar}\n\\, e^{-ip_0^2t\/2m\\hbar}\n\\,\ne^{-(x-x_0-p_{0}t\/m)^2\/2\\beta^2(1+i[C+t\/t_0])}\n\\label{squeezed_position}\n\\end{equation}\ngiving \n\\begin{equation}\n|\\psi_{(S)}(x,t)|^2\n= \\frac{1}{\\sqrt{\\pi}b(t)}\n\\, e^{-[x-\\overline{x}(t)]^2\/b^2(t)}\n\\qquad\n\\mbox{where}\n\\qquad\nb(t) \\equiv \\beta \\sqrt{1+(C+t\/t_{0})^2}\n\\,. \n\\end{equation}\nThus, the initial state in Eqn.~(\\ref{initial_squeezed}) gives the same \ntime-dependent Gaussian behavior as the standard case, still peaked at \n$x=\\overline{x}(t)$, but with a spatial width shifted in time from \n$t \\rightarrow t + Ct_0$. This can be understood from the results in \nEqns.~(\\ref{change_phi}) and (\\ref{change_psi}) where the new\n$C$-dependent terms in Eqn.~(\\ref{initial_squeezed}) give rise to \neffective $a$ and $\\tau$ shifts given by \n\\begin{equation}\na = -C\\alpha^2 \\hbar p_0\n\\qquad\n\\quad\n\\mbox{and}\n\\qquad\n\\quad\n\\tau = - C\\alpha^2m\\hbar = - Ct_0\n\\, . 
\n\\end{equation}\nThe $\\tau$ shift then affects the time-dependent width, $b(t)$, but the \ncombined $a,\\tau$ shifts undo each other in the argument of the Gaussian\nexponential because they are highly correlated due to the form in\nEqn.~(\\ref{initial_squeezed}).\n\nThe time-dependent position expectation values are then\n\\begin{equation}\n\\langle \\hat{x}\\rangle_t = \\overline{x}(t)\n\\qquad\n\\quad\n\\mbox{and}\n\\qquad\n\\quad\n\\langle \\hat{x}^2 \\rangle_t = [\\overline{x}(t)]^2 + \\frac{[b(t)]^2}{2},\n\\end{equation}\nso that\n\\begin{eqnarray}\n(\\Delta x_t)^2 = \\frac{[b(t)]^2}{2} \n& = & \n\\frac{\\beta^2}{2} \\left(1 + (C+t\/t_0)^2\\right) \\nonumber \\\\\n& =& \n\\frac{\\beta^2}{2}(1+C^2) + C\\beta^2\\frac{t}{t_0} + \\frac{\\beta^2 t^2}{2t_0^2}\n\\nonumber \\\\\n& = & (\\Delta x_0)^2 + At + \\frac{(\\Delta p_0)^2 t^2}{m^2}\n\\label{squeezed_spread}\n\\end{eqnarray}\nwhich has a non-vanishing linear term if $C\\neq 0$. The initial width\nof this packet is larger than for the minimal uncertainty solution\nby a factor of $\\sqrt{1+C^2}$, but has the same quadratic time-dependence\nsince $\\Delta p_0$ is the same.\n\nOne can confirm by direct calculation that $\\phi_{(S)}(p,0)$ and \n$\\psi_{(S)}(x,0)$ both do \n have an initial non-vanishing\ncorrelation leading to this form and this is also clear from plots of the\ninitial wave packet as shown in Fig.~2. We plot there an example with the\nsame model parameters as in Fig.~1, but with $C=-2$ which leads to an\nanti-correlation (since $C<0$) with higher momentum components (more wiggles) \nin the `back edge' of the initial packet. This gives an intuitive\nexpectation for a wave packet which\ninitially shrinks in time, consistent with the result in \nEqn.~(\\ref{squeezed_spread}), and with the plot shown in Fig.~2 for\n$t=2t_0$. The parameters were chosen such that for this time the initial\ncorrelation has become `undone', leading to something like the standard\nGaussian initial state, from which point it spreads in a manner which is\nmore familiar. The initial correlation is achieved, however, \nat the cost of increasing the initial uncertainty principle product\nby a factor of $\\sqrt{1+C^2}$. \nThe complete time-dependent correlation coefficient\nfrom Eqn.~(\\ref{correlation_coefficient}) is \n\\begin{equation}\n\\rho(x,p;t) = \\frac{(C+t\/t_0)}{\\sqrt{1+(C+t\/t_0)^2}}\n\\end{equation}\ncorresponding in this case to a roughly $90\\%$ initial correlation. \nThe required initial correlation is also clearly evident\nfrom the Wigner quasi-probability distribution for this case, where we\nfind \n\\begin{equation}\nP_{W}(x,p;t) = \\frac{1}{\\hbar \\pi}\n\\, e^{-(p-p_0)^2 \\alpha^2}\n\\, e^{-(x-x_0-pt\/m - C(p-p_0)t_0\/m)^2\/\\beta^2}\n\\,.\n\\end{equation}\nIn this case, the initial correlation for $C<0$ shown in Fig.~2 is\nconsistent with the desired anti-correlation, since the slope of the\nelliptical contours is negative.\n\n\nIn a very similar manner, the expressions for the kinetic energy\ndensity distribution from Eqn.~(\\ref{define_r_function}) are simply \nshifted to\n\\begin{equation}\nR^{(\\pm)}_{(S)}(t) \\equiv \\frac{T^{(\\pm)}_{(S)}(t)}{T^{(+)}_{(S)}(t) + T^{(-)}_{(S)}(t)}\n= \\frac{1}{2} \\pm \n \\left(\\frac{2}{\\sqrt{\\pi}}\\right)\n\\left( \\frac{(p_0\\alpha)}{(2(p_0\\alpha)^2+1)}\\right) \n\\frac{(C+t\/t_0)}{\\sqrt{1+(C+t\/t_0)^2}}\n\\end{equation}\nso that for $C<0$, there is an initial asymmetry in the front\/back kinetic\nenergy distribution, with more `wiggles' in the trailing half of the\npacket. 
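A few lines of Python make the combined effect of these closed-form expressions concrete (a sketch only: the value of $p_0\alpha$ below is hypothetical, since the parameters used for Figs.~1 and 2 are not reproduced here, and widths are quoted in units of $\beta$):
\begin{verbatim}
# Minimal sketch: width b(t)/beta, correlation rho(x,p;t), and the
# front-half kinetic-energy fraction R+ for the squeezed packet, using the
# closed-form expressions above.  C and p0*alpha are hypothetical values.
import numpy as np

C, p0alpha = -2.0, 2.0
u = np.linspace(0.0, 4.0, 9)                 # u = t/t0
width = np.sqrt(1.0 + (C + u)**2)            # b(t)/beta
rho = (C + u) / np.sqrt(1.0 + (C + u)**2)    # correlation coefficient
R_plus = 0.5 + (2.0/np.sqrt(np.pi)) * p0alpha/(2.0*p0alpha**2 + 1.0) * rho

for row in zip(u, width, rho, R_plus):
    print("t/t0=%.1f  b/beta=%.3f  rho=%+.3f  R+=%.3f" % row)
# The width decreases until t = -C*t0 = 2*t0, where rho = 0 and R+ = 1/2,
# and only then begins to grow: the anticipated initial contraction.
\end{verbatim}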
For the $C=-2$ case in Fig.~2, the initial ($t=0$)\nfront\/back asymmetry is $R^{(+)}\/R^{(-)} = 44\\%\/56\\%$.\n\nWe note that while a number of quantities (time-dependent spread in position,\ncorrelation coefficient, kinetic energy distribution) are simply obtained \nby the $t \\rightarrow t + Ct_0$ shift, other important metrics, such as the\nautocorrelation function \\cite{bassett_2}, $A(t)$, retain basically \nthe same form.\n\nOne can imagine generating initial Gaussian states with non-zero \ncorrelations of the type in Eqn.~(\\ref{initial_squeezed}), motivated \nby results obtained by the use of modern atom trapping techniques, \nsuch as in Ref.~\\cite{meekhof}. In a number of such experiments, \nharmonically bound ions are cooled to essentially their ground state,\nafter which changes in the external binding potential can generate\nvarious {\\it nonclassical motional states} such as coherent states\n(by sudden shifts in the central location of the binding potential\n\\cite{heinzen}) and squeezed states (by changing the strength of the\nharmonic binding force, i.e., the spring constant). The subsequent \ntime-development of Gaussian packets in such states can then lead to\nthe desired correlated states, at which point the external binding\npotential can be suddenly removed, with free-particle propagation\nthereafter. \n\nAs an example, the initial state in a harmonic oscillator potential\nof the form $V(x) = m\\omega^2x^2\/2$ given by\n\\begin{equation}\n\\psi(x,0) = \\frac{1}{\\sqrt{\\beta \\sqrt{\\pi}}}\n\\, e^{ip_0x\/\\hbar}\\,e^{-x^2\/2\\beta^2}\n\\end{equation}\nevolves in time as \\cite{bassett}\n\\begin{equation}\n\\psi(x,t) = \n\\exp\n\\left[\n\\frac{im\\omega x^2 \\cos(\\omega t)}{2\\hbar \\sin(\\omega t)}\n\\right]\n\\frac{1}{\\sqrt{A(t) \\sqrt{\\pi}}}\n\\exp\n\\left[ \n-\\frac{i m \\omega \\beta}{2\\hbar \\sin(\\omega t)}\n\\frac{(x-x_s(t))^2}{A(t)}\n\\right]\n\\label{position_space_sho_solution}\n\\end{equation}\nwhere\n\\begin{equation}\nA(t) \\equiv \\beta \\cos(\\omega t) + i \\left(\\frac{\\hbar}{m \\omega \\beta}\n\\right) \\sin(\\omega t)\n\\qquad\n\\mbox{and}\n\\qquad\nx_s(t) \\equiv \\frac{p_0 \\sin(\\omega t)}{m \\omega}\n\\, .\n\\end{equation}\nThe time-dependent expectation values are then\n\\begin{equation}\n\\langle x\\rangle_t = x_s(t)\n\\, ,\n\\qquad\n\\Delta x_t = \\frac{|A(t)|}{\\sqrt{2}}\n\\, ,\n\\qquad\n\\mbox{and}\n\\qquad\n\\langle p \\rangle_t = p_0\\cos(\\omega t)\n\\end{equation}\nand it is then easy to show that the time-dependent correlation of this\nstate is given by\n\\begin{equation}\ncov(x,p;t) = \\frac{m\\omega \\sin(\\omega t)\\cos(\\omega t)}{2}\n\\left[\n\\left(\n\\frac{\\hbar}{m\\omega \\beta}\\right)^2 - \\beta^2\n\\right]\n\\,.\n\\end{equation}\nFor the special case of coherent states, where $\\beta = \\sqrt{\\hbar\/m\\omega}$,\nthe correlations vanish identically for all times (as does the asymmetry in \nkinetic energy \\cite{bassett}), while for more general solutions, removing \nthe potential at times other than integral multiples of $\\tau\/2$ (where \n$\\tau$ is the classical period) would yield an initially correlated Gaussian.\n\n\n\n\n\n\n\n\\subsection{Linear combinations of Gaussian solutions}\n\\label{subsec:linear_combination}\n\nOne of the simplest examples of correlated position-momentum behavior of\na system, leading to an initial shrinking of a spatial width, can be\nclassically modelled by two 1D non-interacting particles, with the faster \nparticle placed initially behind the slower one. 
A quantum mechanical \nsolution of the free-particle \nSchr\\\"{o}dinger equation involving simple Gaussian forms which mimics this quasi-classical behavior, and for which all expectation values and correlations\ncan be evaluated in simple closed form, consists of a linear combination\nof two minimal-uncertainty Gaussian solutions of the form\n\\begin{equation}\n\\psi_{2}(x,t) = N\\left[\n\\cos(\\theta) \\psi_{(G)}^{(A)}(x,t)\n+ \n\\sin(\\theta) \\psi_{(G)}^{(B)}(x,t)\n\\right]\n\\label{two_gaussians}\n\\end{equation}\nwhere $A,B$ correspond to two different sets of initial position and \nmomentum parameters, namely $(x_A,p_A)$ and $(x_B,p_B)$, $\\theta$ describes\nthe relative weight of each component, and $N$ is an overall normalization;\nwe assume for simplicity that each component Gaussian has the same initial\nwidth, $\\beta$.\nSince each $\\psi_{(G)}(x,t)$ is separately normalized, the value of $N$\ncan be easily evaluated using standard Gaussian integrals with the result\nthat\n\\begin{equation}\nN^{-2} = \n1 \n+\n\\sin(2\\theta)\n\\;\ne^{-(x_A-x_B)^2\/4\\beta^2\n- (p_A-p_B)^2\\beta^2\/4\\hbar^2}\n\\cos[(x_B-x_A)(p_B+p_A)\/2\\hbar]\n\\end{equation}\nso that if the two initial Gaussians are far apart in phase space, namely if \n\\begin{equation}\n\\frac{(x_A-x_B)^2}{4\\beta^2}\n+ \n\\frac{(p_A-p_B)^2\\beta^2}{4\\hbar^2}\n>> 1\n\\, , \n\\end{equation}\nthe normalization factor $N$ can be effectively set to unity, and \nall cross-terms in the evaluation of expectation values can also \nbe neglected. \n\nIn this limit, the various initial expectation values required for the\nevaluation of the time-dependent spread in Eqn.~(\\ref{general_case}) are \ngiven by \n\\begin{eqnarray}\n\\langle \\hat{x} \\rangle_0 & = & \\cos^2(\\theta) x_A + \\sin^2(\\theta) x_B\n\\\\\n\\langle \\hat{x}^2 \\rangle_0 & = &\n\\cos^2(\\theta) \\left(x_A^2 + \\frac{\\beta^2}{2}\\right)\n+ \n\\sin^2(\\theta) \\left(x_B^2 + \\frac{\\beta^2}{2}\\right)\n- \\left[\\cos^2(\\theta) x_A + \\sin^2(\\theta) x_B\\right]^2\n\\end{eqnarray}\nso that\n\\begin{equation}\n(\\Delta x_0)^2 = \n[\\sin(2\\theta)]^2 \\left(\\frac{x_A-x_B}{2}\\right)^2\n+ \\frac{\\beta^2}{2}\n\\end{equation}\nwith a similar result for the momentum-spread, namely\n\\begin{equation}\n(\\Delta p_0)^2 = \n[\\sin(2\\theta)]^2 \\left(\\frac{p_A-p_B}{2}\\right)^2\n+ \\frac{\\hbar^2}{2\\beta^2}\\,.\n\\end{equation}\nThe necessary initial correlation is given by\n\\begin{equation}\n\\langle \\hat{x}\\hat{p} + \\hat{p}\\hat{x} \\rangle_0 - \n2\\langle \\hat{x}\\rangle_0 \\langle \\hat{p} \\rangle_0\n= \n2 [\\sin(2\\theta)]^2 \\left[\\frac{(x_A-x_B)(p_A-p_B)}{4}\\right]\n\\end{equation}\nso that the time-dependent spread in position is given by\n\\begin{equation}\n(\\Delta x_t)^2 =\n[\\sin(2\\theta)]^2 \n\\left[\n\\left(\\frac{x_A-x_B}{2}\\right)\n+ \\left(\\frac{p_A-p_B}{2}\\right)\\frac{t}{m}\n\\right]^2\n+\n\\frac{\\beta^2}{2}\n+\n\\frac{\\hbar^2 t^2}{2m^2\\beta^2}\n\\,. \n\\end{equation}\nIn the limit we're considering, namely when $|x_A-x_B| >> \\beta$\nand\/or $|p_A-p_B| >> \\hbar\/\\beta$, the time-dependent width can be dominated\nby the quasi-classical value dictated by two well-separated `lumps' of\nprobability, and if $(x_A-x_B)$ and $(p_A-p_B)$ have opposite signs, then this\nlarge position spread can initially decrease in time because of the\ninitial correlations. 
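The same closed-form width is easy to explore numerically; the sketch below (hypothetical parameters, $\hbar=m=1$, equal weights $\theta=\pi/4$) places the faster component behind the slower one so that $(x_A-x_B)$ and $(p_A-p_B)$ have opposite signs:
\begin{verbatim}
# Minimal sketch: width of the two-Gaussian superposition from the
# closed-form expression above, with hypothetical, well-separated
# parameter values so the neglected cross terms are indeed small.
import numpy as np

hbar = m = 1.0
beta, theta = 1.0, np.pi / 4.0     # equal weights
xA, pA = -10.0, +2.0               # faster packet in the back
xB, pB = +10.0, -2.0               # slower packet in the front

def dx2(t):
    quasi = np.sin(2*theta)**2 * ((xA - xB)/2.0 + (pA - pB)/2.0 * t/m)**2
    return quasi + beta**2/2.0 + hbar**2 * t**2 / (2.0 * m**2 * beta**2)

t = np.linspace(0.0, 10.0, 6)
print(np.sqrt(dx2(t)))   # the width first shrinks from ~10 to ~3.5 and
                         # only later grows, as described in the text
\end{verbatim}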
This example, while not as `quantum mechanical' as\nthat in Sec.~\\ref{subsec:squeezed}, does clearly and simply exhibit the \nposition-momentum correlations necessary for the presence of the $A$ term \nin Eqn.~(\\ref{squeezed_spread}), with the `fast one in the \nback, and the slow one in the front'.\n\nOne can imagine producing linear combinations of isolated, but highly \ncorrelated, Gaussian wave packets at very different points in phase space, \nby invoking the dynamical time-evolution of bound state wave packets which \nleads to the phenomenon of wave packet revivals, especially\nfractional revivals \\cite{revivals}. For the idealized case of the\ninfinite square well potential \\cite{aronstein}, \nat $t=T_{rev}\/4$ (where $T_{rev}$ is\nthe full revival time), an initially localized wave packet is 'split'\ninto two smaller copies of the original packet, located at opposite\nends of phase space \\cite{belloni}, of the form in Eqn.~(\\ref{two_gaussians}).\nIf, in this model system, the infinite wall boundaries are suddenly\nremoved at such a point in time, \nwe then have the case considered in this section.\n\n\n\n\\section{Conclusion and discussion}\n\\label{sec:conclusion}\n\n\nThe study of the time-dependence of the spatial width of wave packets\nin model systems can produce many interesting results, a number of which\nare quasi-classical in origin, while some are explicitly quantum mechanical.\nTime-dependent wave packet solutions of the Schr\\\"{o}dinger equation for\nthe harmonic oscillator are easily shown to exhibit intricate correlated\nexpansion\/contraction of widths in position- and momentum-space \n\\cite{saxon} and modern experiments \\cite{meekhof}, \\cite{heinzen} \ncan probe a wide variety of such states. \nEven the behavior of otherwise free Gaussian wavepackets \ninteracting with (or `bouncing from') an infinite wall \n\\cite{doncheski_1}, \\cite{dodonov}, \\cite{doncheski_2}\ncan lead to wave packets which temporarily shrink in size. \n\n\n\nWhile the fact that free-particle wavepackets can also exhibit \ninitial shrinking of their spatial width is well-known in the\nphysics pedagogical literature, it is perhaps not appreciated enough \nin the context of introductory quantum mechanics courses because of the \nseeming lack of simple, mathematically tractable, and intuitively\nvisualizable examples, and we have provided two such simple cases here. \nWe have also emphasized the usefulness of several tools for the detailed \nanalysis of the structure of quantum states as they evolve, namely the direct\nvisualization of the real\/imaginary part of the spatial wavefunction, the\ntime-dependent spatial distribution of the kinetic energy (how the\n`wiggliness' changes in time), and the Wigner quasi-probability\ndistribution all of which provide insight into\nthe correlated $x-p$ structure of quantum states.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Appendices}\n\\input{supp\/combinedES}\n\\input{supp\/supp_meta}\n\\input{supp\/supp_modeldetail}\n\\input{supp\/supp_plot}\n\\input{supp\/supp_stimuli}\n\\input{supp\/code}\n\n\n\\newpage\n\n\n\\section*{Broader Impact}\nOutputs of neural language models trained on natural language expose their users to stereotypes and biases learned by such models. CEAT is a tool for analysts and researchers to measure social biases in these models, which may help develop bias mitigation methods for neural language models. 
On the other hand, some users might utilize CEAT to detect certain biases or harmful stereotypes and accordingly target social groups by automatically generating large-scale biased text. Some users might generate and share biased content to shift public opinion as part of information influence operations. By focusing on the attitude bias measured by valence, a malicious actor might figure out ways to automatically generate hate speech while targeting certain social groups. \n\nIn addition to the improper use of CEAT, another ethical concern is about IBD and UIBD: \nIBD and UIBD can detect stereotypical associations for an intersectional group, but the detected words may be used in the generation of offensive content that perpetuates or amplifies existing biases.\nUsing the biased outputs of these neural language models leads to a feedback cycle when machine generated biased text ends up in training data contributing to perpetuating or amplifying bias.\n\n\\fi\n\\section{Introduction}\n\\label{sec:intro}\n\nState-of-the-art off-the-shelf neural language models such as the multi-million dollar GPT-3, associates men with competency and occupations demonstrating higher levels of education, in downstream natural language processing (NLP) tasks such as sequence prediction \\cite{brown2020language}. When GPT-3's user interface for academic access is prompted for language generation with the input ``What is the gender of a doctor,'' the first answer is ``A: Doctor is a masculine noun;'' whereas when prompted with ``What is the gender of a nurse,'' the first answer is ``It's female.'' Propagation of social group bias in NLP applications such as automated resume screening, that shapes the workforce by making consequential decisions about job candidates, would not only perpetuate existing biases but potentially exacerbate harmful bias in society to affect future generations \\cite{de2019bias, raghavanchallenges}. To enhance transparency in NLP, we use the representations of words learned from word co-occurrence statistics to discover social biases.\nOur methods uncover unique intersectional biases associated with individuals that are members of multiple minority groups. After identifying these emergent biases, we use numeric representations of words that vary according to neighboring words to analyze how prominent bias is in different contexts. Recent work has shown that human-like biases are embedded in the statistical regularities of language that are learned by word representations, namely word embeddings \\cite{caliskan2017semantics, blodgett2020language}. We build a method on this work to automatically identify intersectional biases, such as the ones associated with African American and Mexican American women from static word embeddings (SWE). Then, we measure how human-like biases manifest themselves in contextualized word embeddings (CWE), which are dynamic word representations generated by neural language models that adapt to their context. \n\n\n\n\\iffalse\nWhat is the gender of a doctor?\nA: Doctor is a masculine noun.\nWhat is the gender of a doctor?\nIs it a man or a woman?\nMost would say a man; a few would say a woman.\n\n\nWhat is the gender of a nurse? 
\\\\\nIt's female.\nWhat is the gender of an actor?\nIt's male.\nWhat is the gender of a writer?\nIt's female.\nWhat is the gender of a pilot?\nIt's male.\n\\fi\n\n \n\n\n\n Artificial intelligence systems are known not only to perpetuate social biases, but they may also amplify existing cultural assumptions and inequalities \\cite{campolo2017ai}. While most work on biases in word embeddings focuses on a single social category (e.g., gender, race) \\citep{caliskan2017semantics, bolukbasi2016man, garg2018word,zhao2018learning,gonen2019lipstick}, the lack of work on identifying intersectional biases, the bias associated with populations defined by multiple categories \\citep{cabreradiscovery}, leads to an incomplete measurement of social biases \\citep{hancock2007multiplication,hurtado2008more}. For example, \\citet{caliskan2017semantics}'s Word Embedding Association Test (WEAT) quantifies biases documented by the validated psychological methodology of the Implicit Association Test (IAT) \\citep{greenwald1998measuring, greenwald2003understanding}. The IAT provides the sets of words to represent social groups and attributes to be used while measuring bias. Consequently, the analysis of bias via WEAT is limited to the types of IATs and their corresponding words contributed by the IAT literature, which happens to include intersectional representation for only African American women. To overcome these constraints of WEATs, we extend WEAT to automatically identify attributes associated with individuals that are members of more than one social group. While this allows us to discover emergent intersectional biases, it is also a promising step towards automatically identifying all biased associations embedded in the regularities of language. To fill the gap in understanding the complex nature of intersectional bias, we develop a method called Intersectional Bias Detection (IBD) to automatically identify intersectional biases without relying on pre-defined attribute sets from the IAT literature.\n\n\n\n\n\nBiases associated with intersectional group members contain emergent elements that do not overlap with the biases of their constituent minority identities \\citep{ghavami2013intersectional,arrington201513}.\n For example, \"hair weaves\" is stereotypically associated with African American females but not with African Americans or females.\nWe extend IBD and introduce a method called Emergent Intersectional Bias Detection (EIBD) to identify the emergent intersectional biases of an intersectional group in SWE. Then, we construct new tests to quantify these intersectional and emergent biases in CWE.\nTo investigate the influence of different contexts, we use a fill-in-the-blank task called masked language modeling. The goal of the task is to generate the most probable substitution for the [MASK] that is surrounded with neighboring context words in a given sentence. BERT, a widely used language model trained on this task, substitutes [MASK] in ``Men\/women \\textit{excel} in [MASK].'' with ``science'' and ``sports'', reflecting stereotype-congruent associations. However, when we feed in similar contexts ``The man\/woman is \\textit{known} for his\/her [MASK],'' BERT fills ``wit'' in both sentences, which indicates gender bias may not appear in these contexts. Prior methods use templates analogous to masked language modeling to measure bias in CWE \\citep{may2019measuring,tan2019assessing,kurita2019quantifying}. 
The templates are designed to substitute words from WEAT's sets of target words and attributes in a simple manner such as \"This is [TARGET]\" or \"[TARGET] is a [ATTRIBUTE]\".\nIn this work, we propose the Contextualized Embedding Association Test (CEAT), a test eschewing templates and instead generating the distribution of effect magnitudes of biases in different contexts from a control corpus. To comprehensively measure the social and intersectional biases in this distribution, a random-effects model designed to combine effect sizes of similar bias interventions summarizes the overall effect size of bias in the neural language model \\citep{dersimonian2007random}. As a result, instead of focusing on biases in template-based contexts, CEAT measures the distribution of biased associations in a language model.\n\n\n\\noindent \\textbf{Contributions.} In summary, this paper presents three novel contributions along with three complementary methods (CEAT, IBD, and EIBD) to automatically identify intersectional biases as well as emergent intersectional biases in SWE, and then use these findings to measure all available types of social biases in CWE. We find that ELMo is the most biased, followed by BERT, then GPT, with GPT-2 being the least biased. The overall level of bias correlates with how contextualized the CWE generated by the models are. Our results indicate that the strongest biased associations are embedded in the representations of intersectional group members such as African American women. Data, source code, and detailed results are available.\n\n\\noindent \\textbf{Intersectional Bias Detection (IBD).} We develop a novel method for SWE to detect words that represent biases associated with intersectional group members. To our knowledge, IBD is the first algorithmic method to automatically identify individual words that are strongly associated with intersectionality. IBD reaches an accuracy of 81.6\\% and 82.7\\%, respectively, when evaluated on intersectional biases associated with African American females and Mexican American females that are provided in \\citet{ghavami2013intersectional}'s validation dataset. In these machine learning settings, the random chances of correct identification are 14.3\\% and 13.3\\%. Currently, the validation datasets represent gender as a binary label. Consequently, our method uses binary categorization when evaluating gender-related biases. However, we stress that our method generalizes from binary to multiple categories. In future work, we aim to design non-categorical methods that do not represent individuals as members of discrete categories, potentially using continuous representations instead. Accordingly, we also plan to compile validation datasets that will not constrain our evaluation to categorical assumptions about humans.\n \n \\noindent \\textbf{Emergent Intersectional Bias Detection (EIBD).} We contribute a novel method to identify emergent intersectional biases that do not overlap with biases of constituent social groups in SWE. To our knowledge, EIBD is the first algorithmic method to detect the emergent intersectional biases in word embeddings automatically. EIBD reaches an accuracy of 84.7\\% and 65.3\\%, respectively, when validating on the emergent intersectional biases of African American females and Mexican American females that are provided in \\citet{ghavami2013intersectional}'s validation dataset. In these machine learning settings, the random chances of correct identification are 9.2\\% and 6.1\\%. 
\n\n \n\n \\noindent \\textbf{Contextualized Embedding Association Test (CEAT).} WEAT measures human-like biases in SWE. We extend WEAT to the dynamic setting of neural language models to quantify the distribution of effect magnitudes of social and intersectional biases in \\textit{contextualized} word embeddings and summarize the combined magnitude of bias by pooling effect sizes with the validated random-effects methodology \\cite{hedges1983random, borenstein2007meta}. We show that the magnitude of bias greatly varies according to the context in which the stimuli of WEAT appear. Overall, the pooled mean effect size is statistically significant in all CEAT tests including intersectional bias measurements and all models contain biased representations.\n\n\n \\iffalse\nThe remaining parts of the paper are organized as follows.\nSection~\\ref{sec:related} reviews the related work. \nSection~\\ref{sec:data} provides the details of the datasets used in the approach and evaluation.\nSection~\\ref{sec:approach} introduces the three complementary methods.\nSection~\\ref{sec:experiments} gives the details of experiments and results. Section~\\ref{sec:discussion} discusses our findings and results. Section~\\ref{sec:conclusion} concludes the paper.\n\\fi\n\\section{Problem Statement}\n\\label{sec:problem}\nIn this work, we consider an analyst interested in human-like biases in word embeddings.\nDepending on the context, the analyst's goal might be measuring the biases in CWEs with pre-defined target and attribute words or detecting the intersection-related biases in static word embeddings with pre-defined target groups and a set of possible attributes to be detected. \n\n\nIn the first case, for each word of the stimuli, the analyst needs to obtain several sentences containing it and generate corresponding CWEs. \n The analyst proceeds by randomly picking a CWE vector for each word in stimuli and calculating the effect magnitude of bias based on WEAT test each time, and subsequently deriving a sampling distribution of the effect magnitudes. This distribution can be used to construct summary statistics and to test hypothesis to measure the biases in CWEs.\n \n In the second case, the analyst needs to obtain the static word embeddings of the stimuli. The detection model can be viewed as a two-class classifier with the pre-defined threshold of bias score. The model will then calculate a bias score for each attribute and classify the attributes based on the bias score.\n\n\n\\fi\n\\section{Related Work}\n\\label{sec:related}\nSWE are trained on word co-occurrence statistics of corpora to generate numeric representations of words so that machines can process language \\citep{mikolov2013distributed,pennington2014glove}. Previous work on bias in SWE has shown that human-like biases that have been documented by the IAT are embedded in the statistical regularities of language \\citep{caliskan2017semantics}. The IAT \\citep{greenwald1998measuring} is a widely used measure of implicit bias in human subjects that quantifies the differential reaction time to pairing two concepts. Analogous to the IAT, \\citet{caliskan2017semantics} developed the WEAT to measure the biases in SWE by quantifying the relative associations of two sets of target words (e.g., African American and European American) that represent social groups with two sets of polar attributes (e.g., pleasant and unpleasant). 
WEAT computes an effect size (Cohen's $d$) that is a standardized bias score and its $p$-value based on a one-sided permutation test. WEAT measures biases pre-defined by the IAT such as racism, sexism, ableism, and attitude towards the elderly, as well as widely shared non-discriminatory non-social group associations. \\citet{swinger2019biases} presented an adaptation of the WEAT to identify biases associated with clusters of names.\n\nRegarding the biases of intersectional groups categorized by multiple social categories, there is prior work in the social sciences focusing on the experiences of African American females \\citep{crenshaw1989demarginalizing,hare1988meaning, kahn1989psychology,thomas1995psychology}. Buolamwini et al. demonstrated intersectional accuracy disparities in commercial gender classification in computer vision \\citep{buolamwini2018gender}. \\citet{may2019measuring} and \\citet{tan2019assessing} used the attributes presented in \\citet{caliskan2017semantics} to measure emergent intersectional biases of African American females in CWE. We develop the first algorithmic method to automatically identify intersectional bias and emergent bias attributes in SWE, which can be measured in both SWE and CWE. Furthermore, we construct new embedding association tests for the intersectional groups. As a result, our work is the first to discuss biases regarding Mexican American females in word embeddings. \\citet{ghavami2013intersectional} used a free-response procedure in human subjects to collect words that represent intersectional biases. They show that emergent intersectional biases exist in several gender-by-race groups in the U.S. We use the validation dataset constructed by \\citet{ghavami2013intersectional} to evaluate our methods.\n\n\nRecently, neural language models, which use neural networks to assign probability values to sequences of words, have achieved state-of-the-art results in NLP tasks with their dynamic word representations, CWE \\citep{edunov2018understanding,bohnet2018morphosyntactic,yang2019xlnet}. Neural language models typically consist of an encoder that generates CWE for each word based on its accompanying context in the input sequence. Specifically, the collection of values on a particular layer's hidden units forms the CWE \\citep{tenney2019you}, which has the same vector shape as a SWE. However, unlike SWE that represent each word, including polysemous words, with a fixed vector, CWE of the same word vary according to its context window that is encoded into its representation by the neural language model. \\citet{ethayarajh2019understanding} demonstrate how these limitations of SWE impact measuring gender biases. With the wide adaption of neural language models \\citep{edunov2018understanding,bohnet2018morphosyntactic,yang2019xlnet}, human-like biases were observed in CWE \\citep{kurita2019quantifying,zhao2019gender,may2019measuring,tan2019assessing}.\n To measure human-like biases in CWE, \\citet{may2019measuring} applied the WEAT to contextualized representations in template sentences. \\citet{tan2019assessing} adopted the method of \\citet{may2019measuring} by applying \\citet{caliskan2017semantics}'s WEAT to the CWE of the stimuli tokens in templates such as ``This is a [TARGET]''. 
\\citet{kurita2019quantifying} measured biases in BERT based on the prediction probability of the attribute in a template that contains the target and masks the attribute, e.g., [TARGET] is [MASK].\n \\citet{hutchinson2020social} reveal biases associated with disabilities in CWE and demonstrate undesirable biases towards mentions of disability in applications such as toxicity prediction and sentiment analysis. \n\n\n\n\n\n\\citet{nadeem2020stereoset} present a large-scale natural language dataset in English to measure stereotypical biases in the domains of gender, profession, race, and religion. Their strategy cannot be directly compared to ours since it is not aligned with our intersectional bias detection method, which is complementary to CEAT.\n The majority of prior work measures bias in a limited selection of contexts to report the unweighted mean value of bias magnitudes, which does not reflect the scope of contextualization of biases embedded in a neural language model.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Data}\n\\label{sec:data}\nIdentifying and measuring intersectional and social biases in word embeddings as well as neural language models requires four types of data sources that are detailed in this section. (1) SWE carry the signals for individual words that have statistically significant biased associations with social groups and intersectionality. Application of our methods IBD and EIBD to SWE automatically retrieves biased associations. (2) CWE extracted from sentence encodings of neural language models provide precise word representations that depend on the context of word occurrence. We apply CEAT to summarize magnitude of bias in neural language models. (3) A corpus provides the samples of sentences used in CEAT when measuring the overall bias and analyzing the variance of contexts in CWE of neural language models. (4) Stimuli designed by experts in social psychology represent validated concepts in natural language including social group and intersectional targets in addition to their corresponding attributes.\n\n\\subsection{Static Word Embeddings (SWE)}\nWe use GloVe \\cite{pennington2014glove} SWE trained on the word co-occurrence statistics of the Common Crawl corpus to automatically detect words that are highly associated with intersectional group members. The Common Crawl corpus consists of 840 billion tokens and more than 2 million unique vocabulary words collected from a crawl of the world wide web. Consequently, GloVe embeddings capture the language representation of the entire Internet population that contributed to its training corpus. GloVe embeddings learn fine-grained semantic and syntactic regularities \\cite{pennington2014glove}. \\citet{caliskan2017semantics} have shown that social biases are embedded in the linguistic regularities learned by GloVe.\n\n\n\n\n\\subsection{Contextualized Word Embeddings (CWE)}\n\nWe generate the CWE by widely used neural language model implementations of ELMo from \\url{https:\/\/allennlp.org\/elmo}, BERT, GPT and GPT-2 from \\url{https:\/\/huggingface.co\/transformers\/v2.5.0\/model_doc\/} \\cite{peters2018deep,devlin2018BERT,radford2018improving,radford2019language}. Specifically, CWE is formed by the collection of values on a particular layer's hidden units in the neural language model. 
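To make this concrete, the sketch below extracts such a representation from a transformer encoder through the Hugging Face \\texttt{transformers} API; the checkpoint name, the choice of the top hidden layer, and the use of the target word's last subtoken are assumptions made for illustration that mirror the setup described in the rest of this subsection.
\\begin{verbatim}
# Sketch: extract a contextualized word embedding (CWE) for one word occurrence
# from a transformer's top-layer hidden states (last subtoken of the word).
import torch
from transformers import AutoTokenizer, AutoModel

name = 'bert-base-cased'  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True)

sentence = 'The nurse prepared the medication.'
target = 'nurse'

enc = tokenizer(sentence, return_tensors='pt')
with torch.no_grad():
    hidden = model(**enc).hidden_states[-1][0]      # (seq_len, hidden_dim)

# Locate the target word's subtokens and keep the last one as its CWE.
sub_ids = tokenizer(target, add_special_tokens=False)['input_ids']
ids = enc['input_ids'][0].tolist()
start = next(i for i in range(len(ids)) if ids[i:i + len(sub_ids)] == sub_ids)
cwe = hidden[start + len(sub_ids) - 1]              # 768 dimensions for BERT
print(cwe.shape)
\\end{verbatim}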
BERT, GPT and GPT-2 use subword tokenization.\nSince GPT and GPT-2 are unidirectional language models, CWE of the last subtokens contain the information of the entire word \\cite{radford2019language}. We use the CWE of the last subtoken in the word as its representation in GPT and GPT-2. For consistency, we use the CWE of the last subtoken in the word as its representation in BERT.\nBERT and GPT-2 provide several versions. We use BERT-small-cased and GPT-2-117m trained on cased English text. The sizes of the training corpora detailed below have been verified from \\citet{assenmacher2020comparability}. We obtained academic access to GPT-3's API which does not provide training data or the CWE. Accordingly, we are not able to systematically study GPT-3.\n\n\n\\textbf{ELMo} is a 2-layer bidirectional long short term memory (Bi-LSTM) \\cite{hochreiter1997long} language model trained on the Billion Word Benchmark dataset \\cite{chelba2013one} that takes up $\\sim$9GB memory. ELMo has 93.6 million parameters. It is different from the three other models since CWE in ELMo integrate the hidden states in all layers instead of using the hidden states of the top layer. \nWe follow standard usage and compute the summation of hidden units over all aggregated layers of the same token as its CWE \\cite{peters2018deep}. CWE of ELMo have 1,024 dimensions. \n\n\n\\textbf{BERT} \\cite{devlin2018BERT} is a bidirectional transformer encoder \\cite{vaswani2017attention} trained on a masked language model and next sentence prediction. BERT is trained on BookCorpus \\cite{zhu2015aligning} and English Wikipedia dumps that take up $\\sim$16GB memory \\cite{bender2021dangers}. We use BERT-small-case with 12 layers that has 110 million parameters. We extract the values of hidden units on the top layer corresponding to the token as its CWE of 768 dimensions.\n\n\\textbf{GPT} \\cite{radford2018improving} is a 12-layer transformer decoder trained on a unidirectional language model on BookCorpus that takes up $\\sim$13GB memory \\cite{zhu2015aligning}. We use the values of hidden units on the top layer corresponding to the token as its CWE. This implementation of GPT has 110 million parameters. The CWE have 768 dimensions.\n\n\n\\textbf{GPT-2} \\cite{radford2019language} is a transformer decoder trained on a unidirectional language model and is a scaled-up version of GPT. GPT-2 is trained on WebText that takes up $\\sim$40GB memory \\cite{radford2019language}.\nWe use GPT-2-small which has 12 layers and 117 million parameters. \nWe use the values of hidden units on the top layer corresponding to the token as its CWE. CWE of GPT-2 have 768 dimensions\n\nWe provide the source code, detailed information, and documentation in our open source repository at \\url{https:\/\/github.com\/weiguowilliam\/CEAT}.\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Corpus}\n We need a comprehensive representation of all contexts a word can appear in naturally occurring sentences in order to investigate how bias associated with individual words varies across contexts. Identifying the potential contexts in which a word can be observed is not a trivial task. Consequently, we simulate the distribution of contexts a word appears in, by randomly sampling sentences that the word occurs in a large corpus.\n\n\n\\citet{voigt2018rtgender} have shown that social biases are projected into Reddit comments.\nConsequently, we use a Reddit corpus to generate the distribution of contexts that words of interest appear in. 
The corpus consists of 500 million comments made in the period between 1\/1\/2014 and 12\/31\/2014.\nWe take all the stimuli used in \\citet{caliskan2017semantics}'s WEAT that measures the effect size of bias for social groups and related attributes. For each WEAT type, we retrieve the sentences from the Reddit corpus that contain one of these stimuli. In this way, we collect a great variety of CWE from the Reddit corpus to measure bias comprehensively in a neural language model while simulating the natural distribution of contexts in language. We discuss the justification of sampling 10,000 sentences from the Reddit corpus in the upcoming sections.\n\n\\subsection{Stimuli}\n\\label{subsec:stimuli}\n\\citet{caliskan2017semantics}'s WEAT is inspired by the IAT literature \\cite{greenwald1995implicit, greenwald1998measuring, greenwald2003understanding} that measures implicit associations of concepts by representing them with stimuli. Experts in social psychology and cognitive science select stimuli, which are words typically representative of various concepts. These linguistic or sometimes picture-based stimuli are proxies for overall representations of concepts in cognition. Similarly, in the word embedding space, WEAT uses these unambiguous stimuli as semantic representations to study biased associations related to these concepts. Since the stimuli are chosen by experts to most accurately represent concepts, they are not polysemous or ambiguous words. Each WEAT, designed to measure a certain type of association or social group bias, has at least 32 stimuli. There are 8 stimuli for each one of the four concepts. Two of these concepts represent target groups and two of them represent polar attributes. WEAT measures the magnitude of bias by quantifying the standardized differential association of targets with attributes. The larger the set of appropriate stimuli to represent a concept, the more statistically significant and accurate the representation becomes \\cite{caliskan2017semantics}. \n\n\n\\noindent \\textbf{Validation data for intersectional bias.} To investigate intersectional bias with respect to race and gender, we represent members of social groups with target words provided by WEAT and Parada et al. \\citep{caliskan2017semantics,parada2016ethnolinguistic}. WEAT and Parada et al. represent racial categories with frequent given names that signal group membership. WEAT contains a balanced combination of common female and male names of African Americans and European Americans, whereas Parada et al. presents Mexican American names for women and men combined. \nThe intersectional bias detection methods identify attributes that are associated with these target group representations. Human subjects provide the validation set of intersectional attributes with ground truth information in prior work \\citep{ghavami2013intersectional}. The evaluation of intersectional bias detection methods uses this validation set. One limitation of these validation sets is the way they represent gender as a binary category. We will address this constraint in future work by constructing our own validation sets that will not have to represent people by discrete categorical labels of race and gender.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Approach}\n\\label{sec:approach}\nOur approach includes four components. (1) \\cite{caliskan2017semantics}'s WEAT for SWE is the foundation of our approach to summarizing overall bias in CWE generated by neural language models. 
(2) A random-effects model from the meta-analysis literature summarizes the combined effect size for a neural language model's CWE by combining 10,000 WEAT samples, weighting each result with the within-WEAT and between-WEAT variances~\\cite{hedges1983random}. (3) Our novel method IBD automatically detects words associated with intersectional biases. (4) Our novel method EIBD automatically detects words that are uniquely associated with members of multiple minority or disadvantaged groups but do not overlap with the biases of their constituent minority identities. \n\nThe supplementary materials include the details of all the bias types studied in this paper, namely, the WEAT biases introduced by \\citet{caliskan2017semantics} as well as the intersectional biases and their validation set introduced by \\citet{ghavami2013intersectional} and \\citet{parada2016ethnolinguistic}.\n\n\\subsection{Word Embedding Association Test (WEAT)}\nWEAT, designed by \\citet{caliskan2017semantics}, measures the effect size of bias in SWE by quantifying the relative associations of two sets of target words (e.g., career, professional; and family, home) with two sets of polar attributes (e.g., woman, female; and man, male). Two of these WEATs measure baseline associations that are widely accepted, such as the attitude towards flowers vs. insects or the attitude towards musical instruments vs. weapons. Human subjects and word embeddings tend to associate flowers and musical instruments with pleasantness, which corresponds to positive valence. However, human subjects associate insects and weapons with unpleasantness, which corresponds to negative valence. \\citet{greenwald1998measuring} refers to these as universally accepted stereotypes since they are widely shared across human subjects and are not potentially harmful to society. The rest of the tests measure the magnitude of social-group associations, such as gender and race stereotypes and attitudes towards the elderly or people with disabilities. Biased social-group associations in word embeddings can be prejudiced and harmful to society, especially if downstream NLP applications that use static or dynamic word embeddings to make consequential decisions about individuals, such as resume screening for job candidate selection, perpetuate existing biases and eventually exacerbate historical injustices \\cite{de2019bias, raghavanchallenges}. The formal definition of \\citet{caliskan2017semantics}'s WEAT, the test statistic, and the statistical significance of biased associations are detailed in the appendices.\n\n\n\n\\iffalse\nWe present a formal definition of \\citet{caliskan2017semantics}'s WEAT. Let $X$ and $Y$ be two sets of target words of equal size, and $A$, $B$ be two sets of attribute words. Let $cos(\\vec{a},\\vec{b})$ stand for the cosine similarity between the embeddings of words $a$ and $b$. Here, the vector $\\vec{a}$ is the embedding for word $a$. The test statistic is \n\\vspace{-1mm}\n\\[ s(X,Y,A,B) = \\sum_{x\\in X}{s(x,A,B)} - \\sum_{y\\in Y}{s(y,A,B)} \\]\n\\vspace{-1mm}\nwhere \n\\vspace{-1mm}\n\\[ s(w,A,B) = mean_{a \\in A}cos(\\vec{w}, \\vec{a})-mean_{b \\in B}cos(\\vec{w}, \\vec{b}) \\]\n\nA permutation test calculates the statistical significance of association $s(X,Y,A,B)$. The one-sided $p$-value is \n\\[ P = Pr_{i} [s(X_{i},Y_{i},A,B)>s(X,Y,A,B)] \\]\nwhere $\\{(X_i,Y_i)\\}_{i}$ represents all the partitions of $X\\cup Y$ in two sets of equal size. 
Random permutations of these stimuli sets represent the null hypothesis as if the biased associations did not exist so that we can perform a statistical significance test by measuring the unlikelihood of the null hypothesis, given the effect size of WEAT.\n\nThe effect size of bias is calculated as \n\\[ ES = \\frac{mean_{x \\in X}s(x,A,B)-mean_{y \\in Y}s(y,A,B)}{std\\_dev_{w \\in X\\bigcup Y}s(w,A,B)} \\]\n\n\\fi\n\n\\subsection{Intersectional Bias Detection (IBD) }\nIBD identifies words associated with intersectional group members, defined by two social categories simultaneously. Our method automatically detects the attributes that have high associations with the intersectional group from a set of SWE. Analogous to the Word Embedding Factual Association Test (WEFAT) \\citep{caliskan2017semantics}, we measure the standardized differential association of a single stimulus $w \\in W$ with two social groups $A$ and $B$ using the following statistic.\n\\vspace{-2mm}\n\\[ s(w, A, B) = \\frac{\\textrm{mean}_{a \\in A} \\textrm{cos}(\\vec{w}, \\vec{a}) - \\textrm{mean}_{b \\in B} \\textrm{cos}(\\vec{w}, \\vec{b})}{\\textrm{std-dev}_{x \\in A \\cup B}\\textrm{cos}(\\vec{w}, \\vec{x})}\\]\n\\vspace{-2mm}\n\nWe refer to the above statistic as the \\textbf{association score}, which is used by WEFAT to verify that gender statistics are embedded in linguistic regularities. Targets $A$ and $B$ are words that represent males (e.g., he, him) and females (e.g., she, her) and $W$ is a set of occupations. For example, \\textit{nurse} has an association score $s(nurse, A, B)$ that measures effect size of gender associations. WEFAT has been shown to have high predictive validity ($\\rho=0.90$) in quantifying facts about the world \\citep{caliskan2017semantics}. \n\nWe extend WEFAT's {\\em gender} association measurement to quantify the relative association to other social categories (e.g., race), by following an approach similar to lexicon induction that quantifies certain associations without annotating large-scale ground truth training data \\cite{hatzivassiloglou1997predicting, riloff2003learning, turney2003measuring}. Let $P_i = (A_i,B_i$) (e.g., African American and European American) be a pair of social groups, and $W$ be a set of attribute words.\nWe calculate the association score $s(w,A_i,B_i)$ for $w \\in W$. If $s(w,A_i,B_i)$ is greater than the positive effect size threshold $t$, $w$ is detected to be associated with group $A_i$.\nLet $W_i = \\{w|s(w,A_i,B_i)>t, w \\in W\\}$ be the associated word list for each pair $P_i$. \n\nWe detect the biased attributes associated with an intersectional group $C_{mn}$ defined by two social categories $C_{1n}, C_{m1}$ with $M$ and $N$ subcategories ($C_{11}, \\dots, C_{mn}$) (e.g., African American females by race ($C_{1n}$) and gender ($C_{m1}$)). We assume, there are three racial categories $M =3$, and two gender categories $N=2$ in our experiments because of the limited structure of representation for individuals in the validation dataset as well as the stimuli. We plan to extend these methods to non-binary individuals and non-categorical representations. However, precisely validating such an approach would require us to construct the corresponding validation sets, which currently don't exist. \\textbf{Generalizing the method to represent humans with continuous values as opposed to categorical group labels is left to future work.} There are in total $ M \\times N $ combinations of intersectional groups $C_{mn}$. 
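A minimal sketch of this association score, and of the thresholding it feeds into, is given below; the embedding lookup \\texttt{emb} and the threshold \\texttt{t} are placeholders, and the full IBD procedure additionally builds all $M \\times N$ group pairs as described next.
\\begin{verbatim}
# Sketch of the WEFAT-style association score s(w, A, B) used by IBD and EIBD.
# `emb` is a placeholder dict mapping words to static embeddings (e.g., GloVe).
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association_score(w, A, B, emb):
    # Standardized differential association of word w with groups A and B.
    sims_a = [cos(emb[w], emb[a]) for a in A]
    sims_b = [cos(emb[w], emb[b]) for b in B]
    return (np.mean(sims_a) - np.mean(sims_b)) / np.std(sims_a + sims_b, ddof=1)

def detect_associated(W, A, B, emb, t):
    # Attributes in W whose association with A (relative to B) exceeds t.
    return {w for w in W if association_score(w, A, B, emb) > t}
\\end{verbatim}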
We use all groups $C_{mn}$ to build WEFAT pairs\n$P_{ij} = (C_{11}, C_{ij}), i = 1,...,M, j = 1,...,N$. Then, we detect lists of words associated with each pair $W_{ij}, i = 1,...,M, j = 1,...,N$ based on threshold $t$ determined by an ROC curve. We detect the attributes highly associated with the intersectional group, for example C$_{11}$, from all $( M\\times N)$ WEFAT pairs.\nWe define the words associated with intersectional biases of group C$_{11}$ as $W_{IB}$ and these words are identified by \n\\vspace{-3mm}\n\n\\[W_{IB} = \\bigcup_{\\substack{1\\leq i\\leq M\\\\1\\leq j\\leq N}}W_{IB_{ij}},\\;\n\\] \nwhere \n\\vspace{-5mm}\n \\[ \\hspace{12mm} W_{IB_{ij}} = \\{w|s(w,C_{11},C{_{ij}})>t_{mn}, w \\in W_{IB_{mn}}\\} \\] \n\n\\noindent where \n\\vspace{-3mm}\n\\[ W_{IB_{mn}} = \\{(\\bigcup_{\\substack{1\\leq i\\leq M\\\\1\\leq j\\leq N}}W_{ij})\\cup W_{random}\\} \\] \n\n\\noindent W$_{11}$ contains validated words associated with C$_{11}$. Each W$_{ij}$ contains validated words associated with one intersectional group \\cite{ghavami2013intersectional}. W$_{random}$ contains random words, which are stimuli taken from WEAT that are not associated with any C$_{ij}$, thus represent true negatives. \n\n\n\nTo identify the thresholds, we treat IBD as a one-vs-all verification classifier in machine learning to determine whether attributes belong to group $C_{11}$. \nWe select the threshold with the highest value of $true\\: positive\\: rate - false\\: positive\\: rate$ ($TPR - FPR$). When multiple thresholds have the same values, we select the one with the highest $TP$ to detect more attributes associated with $C_{11}$. Detection accuracy is calculated as true positives plus true negatives over true positives plus true negatives plus false positives plus false negatives $(\\frac{TP+TN}{TP+TN+FP+FN})$. The attributes which are associated with $C_{11}$ and are detected as $C_{11}$ are $TP$. The attributes which are not associated with $C_{11}$ and are not detected as $C_{11}$ are $TN$. The attributes which are associated with $C_{11}$ but are not detected as $C_{11}$ are $FN$. The attributes which are not associated with $C_{11}$ but are detected as $C_{11}$ are $FP$.\n\n\n\n\n\n\n\n\n\\subsection{Emergent Intersectional Bias Detection (EIBD)}\nEIBD identifies words that are uniquely associated with intersectional group members. These emergent biases are only associated with the intersectional group (e.g., African American females $C_{11}$) but not associated with its constituent category such as African Americans $S_{1n}$ or females $S_{m1}$. EIBD is a modified and extended version of IBD. The formal definition is in the appendices.\n\n\n\n\nConceptually, to detect words uniquely associated with African American females in a set of attributes $W$, we assume there are two classes (females, males) of gender and two classes (African Americans, European Americans) of race.\nWe measure the relative association of all words in $W$ first\nwith African American females and African American males, second with African American females and European American females, third with African American females and European American males. (Fourth is the comparison of the same groups, which leads to $d=0$ effect size, which is always below the detection threshold.) The union of attributes with an association score greater than the selected threshold represents intersectional biases associated with African American females. 
\nThen, we calculate the association scores of these IBD attributes first with females and males, second with African Americans and European Americans. We remove the attributes with scores greater than the selected threshold from these IBD attributes, that are highly associated with single social categories. The union of the remaining attributes are the emergent intersectional biases.\n\n\n\n\n\n\n\n\\subsection{Contextualized Embedding Association Test (CEAT)}\nCEAT quantifies social biases in CWE by extending the WEAT methodology that measures human-like biases in SWE \\citep{caliskan2017semantics}. \nWEAT's bias metric is effect size (Cohen's $d$). In CWE, since embeddings of the same word vary based on context, applying WEAT to a biased set of CWE will not measure bias comprehensively. To deal with a range of dynamic embeddings representing individual words, CEAT measures the distribution of effect sizes that are embedded in a neural language model. \n\n\n\n\n\n\nIn WEAT's formal definition \\citep{caliskan2017semantics}, $X$ and $Y$ are two sets of target words of equal size; $A$ and $B$ are two sets of evaluative polar attribute words of equal size. Each word in these sets of words is referred to as a stimulus. Let $cos(\\vec{a},\\vec{b})$ stand for the cosine similarity between vectors $\\vec{a}$ and $\\vec{b}$. \nWEAT measures the magnitude of bias by computing the effect size ($ES$) which is the standardized differential association of the targets and attributes. The $p$-value ($P_w$) of WEAT measures the probability of observing the effect size in the null hypothesis, in case biased associations did not exist. According to Cohen's effect size metric, $d > \\mid 0.5 \\mid$ and $d > \\mid 0.8\\mid$ are medium and large effect sizes, respectively \\citep{rice2005comparing}.\n\n\n\nIn a neural language model, each stimulus $s$ from WEAT contained in $n_s$ input sentences has at most $n_s$ different CWE $\\vec{s_1},..., \\vec{s_{n_s}}$ depending on the context in which it appears.\nIf we calculate effect size $ES(X,Y,A,B)$ with all different $\\vec{s}$ for a stimulus $s \\in X$ and keep the CWE for other stimuli unchanged, there will be at most $n_s$ different values of effect size. For example, if we assume each stimulus $s$ occurs in 2 contexts and each set in $X, Y, A, B$ has 5 stimuli, the total number of combinations for all the CWE of stimuli will be $2^{5\\times4} = 1,048,576$. The numerous possible values of $ES(X,Y,A,B)$ construct a \\textit{distribution} of effect sizes, therefore we extend WEAT to CEAT.\n\n\n\nFor each CEAT, all the sentences, where a CEAT stimulus occurs, are retrieved from the Reddit corpus. Then, we generate the corresponding CWE from these sentences with randomly varying contexts. In this way, we generate $n_s$ CWE from $n_s$ extracted sentences for each stimulus $s$, where $n_s$ can vary according to the contextual variance of each stimulus.\nWe sample random combinations of CWE for each stimulus $N$ times. In the $i^{th}$ sample out of $N$, for each stimulus that appears in at least $N$ sentences, \nwe randomly sample one of its CWE vectors without replacement. If a stimulus occurs in less than $N$ sentences, especially when $N$ is very large, we randomly sample from its CWE vectors with replacement so that they can be reused while preserving their distribution. We provide the analysis and extended results in the appendices for both $N=1,000$ and $N=10,000$, which result in similar bias magnitudes. 
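A compact sketch of this sampling loop is shown below; \\texttt{cwe\\_pool} is a placeholder mapping each stimulus to the list of CWE vectors extracted for it from the corpus, and \\texttt{weat\\_effect\\_size} computes the standard WEAT effect size on one sampled embedding per stimulus.
\\begin{verbatim}
# Sketch of CEAT's sampling of CWE combinations (placeholder data structures).
# X, Y, A, B: lists of stimulus words; cwe_pool: dict word -> list of vectors.
import numpy as np

def weat_effect_size(X, Y, A, B, vec):
    # WEAT effect size computed on one sampled embedding per stimulus.
    def cos(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    def s(w):
        return (np.mean([cos(vec[w], vec[a]) for a in A])
                - np.mean([cos(vec[w], vec[b]) for b in B]))
    assoc = [s(w) for w in X + Y]
    return ((np.mean([s(x) for x in X]) - np.mean([s(y) for y in Y]))
            / np.std(assoc, ddof=1))

def ceat_effect_sizes(X, Y, A, B, cwe_pool, N=10000, seed=0):
    rng = np.random.default_rng(seed)
    stimuli = X + Y + A + B
    # Draw indices without replacement when a stimulus has at least N contexts,
    # and with replacement otherwise, so that scarce contexts are reused.
    draws = {w: (rng.permutation(len(cwe_pool[w]))[:N]
                 if len(cwe_pool[w]) >= N
                 else rng.integers(0, len(cwe_pool[w]), size=N))
             for w in stimuli}
    return np.array([weat_effect_size(X, Y, A, B,
                     {w: cwe_pool[w][draws[w][i]] for w in stimuli})
                     for i in range(N)])
\\end{verbatim}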
Based on the sampled CWE, we calculate each sample's effect size $ES_i(X,Y,A,B)$, sample variance $V_i(X,Y,A,B)$, and $p$-value $P_{w_i}(X,Y,A,B)$ in WEAT. Then, we generate $N$ of these samples to approximate the distribution of effect sizes via CEAT. \n\n\n\n\n\n\n\n\n\nThe distribution of bias effects in CEAT represents random effects computed by WEAT, where we do not expect to observe the same effect size due to variance in context \\cite{hedges1983random}. As a result, in order to provide comprehensive summary statistics, we apply a random-effects model from the validated meta-analysis literature to compute the weighted mean of the effect sizes and its statistical significance \\citep{rosenthal2002meta, borenstein2007meta}. The summary of the effect magnitude of a particular bias in a neural language model, namely the combined effect size (CES), is the weighted mean of a distribution of random effects,\n\\vspace{-1mm}\n\\[CES(X,Y,A,B) = \\frac{\\sum_{i=1}^{N}v_i ES_i}{\\sum_{i=1}^{N}v_i}\\]\n\\vspace{-2mm}\n\n\\noindent where $v_i$ is the inverse of the sum of the in-sample variance $V_i$ and the between-sample variance of the distribution of random effects $\\sigma_{between}^2$. Methodological details are in the appendices.\n\n\\iffalse\nBased on the central limit theorem, the limiting form of the distribution of $\\frac{CES}{SE(CES)}$ is the standard normal distribution \\citep{montgomery2010applied}.\nThen the statistical significance of CES, the two-tailed $p$-value of the hypothesis that there is no difference between all the contextualized variations of the two sets of target words in terms of their relative similarity to two sets of attribute words, is given by the following formula, where $\\Phi$ is the standard normal cumulative distribution function and $SE$ stands for the standard error. \n\\[ P_c(X,Y,A,B) = 2 \\times [1 - \\Phi ( | \\frac{CES}{SE(CES)} | ) ] \\]\n\\fi\n\n\\subsection{Random-Effects Model}\n\\label{subsec:random}\n\nMeta-analysis is the statistical procedure for combining data from multiple studies \\cite{hedges1998fixed}. Meta-analysis describes the result of each separate study by a numerical index (e.g., an effect size) and then summarizes the results into combined statistics. In our bias measurements, this index is the effect size. Depending on whether the effect size is assumed to be fixed, there are two kinds of methods: the \\textit{fixed-effects} model and the \\textit{random-effects} model. \nA fixed-effects model expects the same effect size from different intervention studies. A random-effects model, on the other hand, treats the effect sizes as samples from a random distribution of all possible effect sizes \\cite{dersimonian1986meta,hedges2014statistical}. Under the random-effects model, the expected results of different intervention studies do not have to match. \nIn our case, since the effect sizes calculated with the CWE in different contexts are expected to vary, we cannot assume a fixed-effects model. Instead, we use a random-effects model, which is appropriate for the type of data we are studying. \n\nWe apply a random-effects model from the validated meta-analysis literature using the methods of \\citet{hedges1998fixed}. Specifically, we describe the procedure for estimating the comprehensive summary statistic, the \\textbf{combined effect size (CES)}, which is the weighted mean of a distribution of random-effect sizes. 
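As a minimal sketch of this pooling step, the function below combines per-sample effect sizes and their variances into a CES and a two-tailed $p$-value; it assumes the DerSimonian--Laird estimator for the between-sample variance, whereas the exact estimator we use follows \\citet{hedges1998fixed} and is detailed in the appendices.
\\begin{verbatim}
# Sketch: random-effects pooling of per-sample WEAT results into a CES.
# es, v: numpy arrays of per-sample effect sizes and their variances.
# The DerSimonian-Laird estimate of the between-sample variance is an
# illustrative choice; the estimator used in the paper is in its appendices.
import numpy as np
from scipy.stats import norm

def combined_effect_size(es, v):
    w_fixed = 1.0 / v                            # fixed-effects weights
    es_fixed = np.sum(w_fixed * es) / np.sum(w_fixed)
    q = np.sum(w_fixed * (es - es_fixed) ** 2)   # heterogeneity statistic Q
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - (len(es) - 1)) / c)     # between-sample variance
    w_rand = 1.0 / (v + tau2)                    # random-effects weights
    ces = np.sum(w_rand * es) / np.sum(w_rand)
    se = np.sqrt(1.0 / np.sum(w_rand))
    p = 2 * (1 - norm.cdf(abs(ces / se)))        # two-tailed p-value
    return ces, p
\\end{verbatim}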
Each effect size is weighted by the variance in calculating that particular effect size in addition to the overall variance among all the random-effect sizes. \n\nWe combine effect size estimates from $N$ independent WEATs. The details of CES are in the appendices.\n\n\n\n\\subsection{Intersectional and Emergent Intersectional Bias Detection in Static Word Embeddings}\n\n\n\n\n\n \\begin{figure*}[ht!]\n \\centering\n {%\n\\begin{tabular}{cccc}\n \\includegraphics[width=.2\\textwidth]{plot\/roc_supp\/af_inter.pdf} &\n \\includegraphics[width=.2\\textwidth]{plot\/roc_supp\/af_unique.pdf} &\n \\includegraphics[width=.2\\textwidth]{plot\/roc_supp\/lf_inter.pdf} &\n\\includegraphics[width=.2\\textwidth]{plot\/roc_supp\/lf_unique.pdf}\\\\\n \\end{tabular}}\n \\vspace{-2mm} \\caption{\\textbf{ROC curves of IBD and EIBD for African American females (AF) and Mexican American females (MF).} The value that maximizes the $true\\: positive\\: rate\\: -\\: false\\: positive\\: rate$ is selected as the optimal threshold marked with a dot.\n `emerg inter bias' stands for emergent intersectional bias. \n\\vspace{-4mm} }\n \\label{fig:roc}\n\\end{figure*}\n\n\\addtolength{\\textfloatsep}{-0.05in}\n\n\n\n\n\n\n \n\n \n\n \n\n\n\n\n\n\n\n\n\n\n\n\\section{Results and Evaluation}\n\\label{sec:experiments}\n\n\nWe measure ten types of social biases via WEAT (C1-C10) and construct our own intersectional bias tests in ELMo, BERT, GPT, and GPT-2. Accordingly, we present four novel intersectional bias tests via IBD and EIBD for studying African American, European American, and Mexican American men and women.\n\nWe use the stimuli introduced in Section~\\ref{subsec:stimuli} to represent the target groups. For intersectional and emergent bias tests, we use the attributes associated with the intersectional minority or disadvantaged group members vs the majority European American males as the two polar attribute sets. We sample $N=10,000$ combinations of CWE for each CEAT since according to various evaluation trials, the resulting CES and $p$-value remain consistent under this parameter.\n\n\n\\subsection{Evaluation of IBD and EIBD}\n\\label{sec:evaluation}\n\n We use IBD and EIBD to automatically detect and retrieve the intersectional and emergent biases associated with intersectional group members (e.g., African American females, Mexican American females) in GloVe SWE. \nTo evaluate our methods IBD and EIBD, we use validated stimuli provided in prior work that represents each social group with frequent given names, as explained in Section~\\ref{sec:data}. \nIBD and EIBD experiments use the same test set consisting of 98 attributes associated with 2 groups defined by gender (females, males), 3 groups defined by race (African American, European American, Mexican American), 6 intersectional groups in total defined by race and gender, in addition to random words taken from WEAT not associated with any group \\cite{ghavami2013intersectional}. These random words represent the true negatives for evaluating the identification task.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWe draw the ROC curves of four bias detection tasks in Figure~\\ref{fig:roc}, then select the highest value of\n$TPR - FPR$ as thresholds for each intersectional group. \nIBD achieves an accuracy of 81.6\\% and 82.7\\%, respectively, when detecting the intersectional biases of African American females and Mexican American females, where the random correct identification rates are 14.3\\% and 13.3\\%. 
EIBD reaches an accuracy of 84.7\\% and 65.3\\%, respectively, when detecting the emergent intersectional biases unique to African American females and Mexican American females. The probability of random correct attribute detection in EIBD tasks are 9.2\\% and 6.1\\%. Intersectional biases have the highest magnitude compared to other biases across all language models, potentially disadvantaging members that belong to multiple minority groups in downstream applications.\n\n\n\n\nThe current validation set with ground truth information about each word constrains our evaluation to a closed-world machine learning classification task, where we know the category each stimulus belongs to. On the other hand, evaluating the entire semantic space resembles an open-world machine learning problem where millions of stimuli in the entire word embedding vocabulary belong to unknown categories, thus require human-subject annotation studies. In future work, a human subject study can further evaluate the threshold selection criteria, which would require validating a large set of biases retrieved from the entire vocabulary.\n \n \n \n \n \\begin{table*}[t]\n\n \\begin{minipage}[c]{0.68\\textwidth}\n\\centering\n\n\\vspace{-3mm}\n\\label{table:socialbias-measure}\n \\resizebox{0.99\\textwidth}{!} {%\n\\begin{tabular}{|p{3mm} l | r | cc | cc | cc |cc |}\n\\hline\n\\multicolumn{3}{| c |}{ \\multirow{2}{*}{\\textbf{Test}}} &\n \\multicolumn{2}{c|}{\\textbf{ELMo}} &\n \\multicolumn{2}{c|}{\\textbf{BERT}} &\n \\multicolumn{2}{c|}{\\textbf{GPT}} &\n \\multicolumn{2}{c |}{\\textbf{GPT-2}} \\\\ \\cline{4-11} \n \\multicolumn{3}{|c|}{} & \\textbf{$d$} & \\textbf{$p$} & \\textbf{$d$} & \\textbf{$p$} & \\textbf{$d$} & \\textbf{$p$} & \\textbf{$d$} & \\textbf{$p$} \\\\ \\hline\n \n \n\\multirow{2}{*}{\\shortstack{C1:}} & Flowers\/Insects & random & \\cellcolor{darkgray}1.40 & $<10^{-30}$ & \\cellcolor{darkgray}0.97 & $<10^{-30}$ & \\cellcolor{darkgray}1.04 & $<10^{-30}$ & 0.14 & $<10^{-30}$ \\\\\n\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{darkgray}{1.35} & $<10^{-30}$ & \\cellcolor{mediumgray}{0.64 } & $<10^{-30}$ & \\cellcolor{darkgray}{1.01 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.21 } & $<10^{-30}$ \\\\ \\hline\n\n\n\n\\multirow{2}{*}{{\\shortstack{C2:}}} & Instruments\/Weapons & random & \\cellcolor{darkgray}1.56 & $<10^{-30}$ & \\cellcolor{darkgray}0.94 & $<10^{-30}$ & \\cellcolor{darkgray}1.12 & $<10^{-30}$ & \\cellcolor{lightgray}-0.27 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{darkgray}{1.59} & $<10^{-30}$ & \\cellcolor{mediumgray}{0.54} & $<10^{-30}$ & \\cellcolor{darkgray}{1.09} & $<10^{-30}$ & \\cellcolor{lightgray}{-0.21 } & $<10^{-30}$ \\\\ \\hline\n \n \\multirow{2}{*}{{\\shortstack{C3:}}} & EA\/AA names & random & \\cellcolor{lightgray}0.49 & $<10^{-30}$ & \\cellcolor{lightgray}0.44 & $<10^{-30}$ & -0.11 & $<10^{-30}$ & -0.19 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{lightgray}{0.47 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.31} & $<10^{-30}$ & -0.10 & $<10^{-30}$ & 0.09 & $<10^{-30}$ \\\\ \\hline\n \n \\multirow{2}{*}{{\\shortstack{C4:}}} & EA\/AA names & random & 0.15 & $<10^{-30}$ & \\cellcolor{lightgray}0.47 & $<10^{-30}$ & 0.01 & $<10^{-2}$ & \\cellcolor{lightgray}-0.23 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{lightgray}{0.23 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.49 } & $<10^{-30}$ & 0.00 & $0.20$ & -0.13 & $<10^{-30}$ \\\\ \\hline\n \n \\multirow{2}{*}{{\\shortstack{C5:}}} & 
EA\/AA names & random & 0.11 & $<10^{-30}$ & 0.02 & $<10^{-7}$ & 0.07 & $<10^{-30}$ & \\cellcolor{lightgray}-0.21 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & 0.17 & $<10^{-30}$ & 0.07 & $<10^{-30}$ & 0.04 & $<10^{-27}$ & -0.01 & 0.11 \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C6:}}} & Males\/Female names & random & \\cellcolor{darkgray}1.27 & $<10^{-30}$ & \\cellcolor{darkgray}0.92 & $<10^{-30}$ & 0.19 & $<10^{-30}$ & \\cellcolor{lightgray}0.36 & $<10^{-30}$ \\\\\n & Career\/Family & fixed & \\cellcolor{darkgray}{1.31 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.41} & $<10^{-30}$ & 0.11 & $<10^{-30}$ & \\cellcolor{lightgray}{0.34} & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C7:}}} & Math\/Arts & random & \\cellcolor{mediumgray}0.64 & $<10^{-30}$ & \\cellcolor{lightgray}0.41 & $<10^{-30}$ & \\cellcolor{lightgray}0.24 & $<10^{-30}$ & -0.01 & $<10^{-2}$ \\\\\n & Male\/Female terms & fixed & \\cellcolor{darkgray}{0.71 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.20 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.23} & $<10^{-30}$ & -0.14 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C8:}}} & Science\/Arts & random & \\cellcolor{lightgray}0.33 & $<10^{-30}$ & -0.07 & $<10^{-30}$ & \\cellcolor{lightgray}0.26 & $<10^{-30}$ & -0.16 & $<10^{-30}$ \\\\\n & Male\/Female terms & fixed & \\cellcolor{mediumgray}{0.51 } & $<10^{-30}$ & 0.17 & $<10^{-30}$ & \\cellcolor{lightgray}{0.35} & $<10^{-30}$ & -0.05 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C9:}}} & Mental\/Physical disease & random & \\cellcolor{darkgray}1.00 & $<10^{-30}$ & \\cellcolor{mediumgray}0.53 & $<10^{-30}$ & 0.08 & $<10^{-29}$ & 0.10 & $<10^{-30}$ \\\\\n & Temporary\/Permanent & fixed & \\cellcolor{darkgray}{1.01} & $<10^{-30}$ & \\cellcolor{lightgray}{0.40} & $<10^{-30}$ & \\cellcolor{lightgray}{-0.23 } & $<10^{-30}$ & \\cellcolor{lightgray}{-0.21 } & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C10:}}} & Young\/Old people's names & random & 0.11 & $<10^{-30}$ & -0.01 & 0.016 & 0.07 & $<10^{-30}$ & -0.16 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{lightgray}{0.24} & $<10^{-30}$ & 0.07 & $<10^{-30}$ & 0.04 & $<10^{-17}$ & -0.14 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I1:}}} & AF\/EM names & random & \\cellcolor{darkgray}1.24 & $<10^{-30}$ & \\cellcolor{mediumgray}0.77 & $<10^{-30}$ & 0.07 & $<10^{-30}$ & 0.02 & $<10^{-2}$ \\\\\n & AF\/EM intersectional & fixed & \\cellcolor{darkgray}{1.25} & $<10^{-30}$ & \\cellcolor{darkgray}{0.98 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.23 } & $<10^{-30}$ & -0.19 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I2:}}} & AF\/EM names & random & \\cellcolor{darkgray}1.25 & $<10^{-30}$ & \\cellcolor{mediumgray}0.67 & $<10^{-30}$ & -0.09 & $<10^{-30}$ & 0.02 & $<10^{-2}$ \\\\\n & {\\small AF emergent\/EM intersectional} & fixed & \\cellcolor{darkgray}{1.27} & $<10^{-30}$ & \\cellcolor{darkgray}{1.00} & $<10^{-30}$ & \\cellcolor{lightgray}{0.23 } & $<10^{-30}$ & -0.14 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I3:}}} & MF\/EM names & random & \\cellcolor{darkgray}1.31 & $<10^{-30}$ & \\cellcolor{mediumgray}0.68 & $<10^{-30}$ & -0.06 & $<10^{-30}$ & \\cellcolor{lightgray}0.38 & $<10^{-30}$ \\\\\n & MF\/EM intersectional & fixed & \\cellcolor{darkgray}{1.29}& $<10^{-30}$ & \\cellcolor{mediumgray}{0.51} & $<10^{-30}$ & 0.00 & 0.81 & \\cellcolor{lightgray}{0.32 } & $<10^{-30}$ \\\\ \\hline \n \n 
\\multirow{2}{*}{{\\shortstack{I4:}}} & MF\/EM names & random & \\cellcolor{darkgray} 1.51 & $<10^{-30}$ &\\cellcolor{darkgray} 0.86 & $<10^{-30}$ & 0.16 & $<10^{-30}$ & \\cellcolor{lightgray}-0.32 & $<10^{-30}$ \\\\\n & {\\small MF emergent\/EM intersectional} & fixed & \\cellcolor{darkgray}{1.43} & \n $<10^{-30}$ & \\cellcolor{mediumgray}{0.58} & $<10^{-30}$ & \\cellcolor{lightgray}{0.20} & $<10^{-30}$ & \\cellcolor{lightgray}{-0.25} & $<10^{-30}$ \\\\ \\hline \n\\multicolumn{11}{c}{{\\small $^{\\ast}$Pleasant and unpleasant attributes used to measure valence and attitudes towards targets from \\citet{greenwald1998measuring}.}}\\\\\n\n\\end{tabular}\n}\n \\end{minipage}\\hfill\n \\begin{minipage}[c]{0.32\\textwidth}\n\\vspace{-1mm} \\caption{\n\\textbf{CEAT measures of social and intersectional biases in language models.} We report the overall magnitude of bias in language models with CES ($d$, rounded down) and statistical significance with combined $p$-values ($p$, rounded up). CES pools $N = 10,000$ samples from a random-effects model. The first row for each bias test uses completely random samples, whereas the second row for the bias test uses the same sentences to generate CWE across all neural language models.\n $Ci$ stands for the $i^{th}$ WEAT in \\citet{caliskan2017semantics}'s Table 1. $Ii$ stands for our tests constructed for measuring intersectional biases. $A\\_$ stands for African Americans, $E\\_$ for European Americans, $M\\_$ for Mexican Americans, $\\_F$ for females, and $\\_M$ for males. Light, medium, and dark gray shading of combined $d$ values (CES) indicates small, medium, and large effect size, respectively. \n } \\label{table:socialbias-measure}\n\n \\end{minipage}\n\\vspace{-3mm} \n \\end{table*}\n\n \n \\iffalse\n\n\\begin{table*}[t]\n\\centering\n\\caption{\n\\textbf{CEAT for social and intersectional biases.} We report the overall magnitude of bias in a language model with CES ($d$, rounded down) and its statistical significance with combined $p$-values ($p$, rounded up). CES pools $N = 10,000$ samples from a random-effects model. The first row for each bias test uses completely random samples, whereas the second row for the bias test uses the same sentences to generate CWE across all neural language models.\n $Ci$ stands for the $i^{th}$ WEAT test in \\citet{caliskan2017semantics}'s Table 1. $Ii$ stands for the novel tests constructed for intersectional biases. $A\\_$ stands for African Americans. $E\\_$ stands for European Americans. $M\\_$ stands for Mexican Americans. $\\_F$ stands for females. $\\_M$ stands for males. Light, medium, and dark gray shading of combined $d$ values (CES) indicates small, medium, and large effect size respectively. 
\n}\n\n\\vspace{-3mm}\n\\label{table:socialbias-measure}\n \\resizebox{0.63\\textwidth}{!} {%\n\\begin{tabular}{|p{7mm} l | r | cc | cc | cc |cc |}\n\\hline\n\\multicolumn{3}{| c |}{ \\multirow{2}{*}{\\textbf{Test}}} &\n \\multicolumn{2}{c|}{\\textbf{ELMo}} &\n \\multicolumn{2}{c|}{\\textbf{BERT}} &\n \\multicolumn{2}{c|}{\\textbf{GPT}} &\n \\multicolumn{2}{c |}{\\textbf{GPT-2}} \\\\ \\cline{4-11} \n \\multicolumn{3}{|c|}{} & \\textbf{$d$} & \\textbf{$p$} & \\textbf{$d$} & \\textbf{$p$} & \\textbf{$d$} & \\textbf{$p$} & \\textbf{$d$} & \\textbf{$p$} \\\\ \\hline\n \n \n\\multirow{2}{*}{\\shortstack{C1:}} & Flowers\/Insects & random & \\cellcolor{darkgray}1.40 & $<10^{-30}$ & \\cellcolor{darkgray}0.97 & $<10^{-30}$ & \\cellcolor{darkgray}1.04 & $<10^{-30}$ & 0.14 & $<10^{-30}$ \\\\\n\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{darkgray}{1.35} & $<10^{-30}$ & \\cellcolor{mediumgray}{0.64 } & $<10^{-30}$ & \\cellcolor{darkgray}{1.01 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.21 } & $<10^{-30}$ \\\\ \\hline\n\n\n\n\\multirow{2}{*}{{\\shortstack{C2:}}} & Instruments\/Weapons & random & \\cellcolor{darkgray}1.56 & $<10^{-30}$ & \\cellcolor{darkgray}0.94 & $<10^{-30}$ & \\cellcolor{darkgray}1.12 & $<10^{-30}$ & \\cellcolor{lightgray}-0.27 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{darkgray}{1.59} & $<10^{-30}$ & \\cellcolor{mediumgray}{0.54} & $<10^{-30}$ & \\cellcolor{darkgray}{1.09} & $<10^{-30}$ & \\cellcolor{lightgray}{-0.21 } & $<10^{-30}$ \\\\ \\hline\n \n \\multirow{2}{*}{{\\shortstack{C3:}}} & EA\/AA names & random & \\cellcolor{lightgray}0.49 & $<10^{-30}$ & \\cellcolor{lightgray}0.44 & $<10^{-30}$ & -0.11 & $<10^{-30}$ & -0.19 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{lightgray}{0.47 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.31} & $<10^{-30}$ & -0.10 & $<10^{-30}$ & 0.09 & $<10^{-30}$ \\\\ \\hline\n \n \\multirow{2}{*}{{\\shortstack{C4:}}} & EA\/AA names & random & 0.15 & $<10^{-30}$ & \\cellcolor{lightgray}0.47 & $<10^{-30}$ & 0.01 & $<10^{-2}$ & \\cellcolor{lightgray}-0.23 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{lightgray}{0.23 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.49 } & $<10^{-30}$ & 0.00 & $0.20$ & -0.13 & $<10^{-30}$ \\\\ \\hline\n \n \\multirow{2}{*}{{\\shortstack{C5:}}} & EA\/AA names & random & 0.11 & $<10^{-30}$ & 0.02 & $<10^{-7}$ & 0.07 & $<10^{-30}$ & \\cellcolor{lightgray}-0.21 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & 0.17 & $<10^{-30}$ & 0.07 & $<10^{-30}$ & 0.04 & $<10^{-27}$ & -0.01 & 0.11 \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C6:}}} & Males\/Female names & random & \\cellcolor{darkgray}1.27 & $<10^{-30}$ & \\cellcolor{darkgray}0.92 & $<10^{-30}$ & 0.19 & $<10^{-30}$ & \\cellcolor{lightgray}0.36 & $<10^{-30}$ \\\\\n & Career\/Family & fixed & \\cellcolor{darkgray}{1.31 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.41} & $<10^{-30}$ & 0.11 & $<10^{-30}$ & \\cellcolor{lightgray}{0.34} & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C7:}}} & Math\/Arts & random & \\cellcolor{mediumgray}0.64 & $<10^{-30}$ & \\cellcolor{lightgray}0.41 & $<10^{-30}$ & \\cellcolor{lightgray}0.24 & $<10^{-30}$ & -0.01 & $<10^{-2}$ \\\\\n & Male\/Female terms & fixed & \\cellcolor{darkgray}{0.71 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.20 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.23} & $<10^{-30}$ & -0.14 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C8:}}} & Science\/Arts & random & 
\\cellcolor{lightgray}0.33 & $<10^{-30}$ & -0.07 & $<10^{-30}$ & \\cellcolor{lightgray}0.26 & $<10^{-30}$ & -0.16 & $<10^{-30}$ \\\\\n & Male\/Female terms & fixed & \\cellcolor{mediumgray}{0.51 } & $<10^{-30}$ & 0.17 & $<10^{-30}$ & \\cellcolor{lightgray}{0.35} & $<10^{-30}$ & -0.05 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C9:}}} & Mental\/Physical disease & random & \\cellcolor{darkgray}1.00 & $<10^{-30}$ & \\cellcolor{mediumgray}0.53 & $<10^{-30}$ & 0.08 & $<10^{-29}$ & 0.10 & $<10^{-30}$ \\\\\n & Temporary\/Permanent & fixed & \\cellcolor{darkgray}{1.01} & $<10^{-30}$ & \\cellcolor{lightgray}{0.40} & $<10^{-30}$ & \\cellcolor{lightgray}{-0.23 } & $<10^{-30}$ & \\cellcolor{lightgray}{-0.21 } & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C10:}}} & Young\/Old people's names & random & 0.11 & $<10^{-30}$ & -0.01 & 0.016 & 0.07 & $<10^{-30}$ & -0.16 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{lightgray}{0.24} & $<10^{-30}$ & 0.07 & $<10^{-30}$ & 0.04 & $<10^{-17}$ & -0.14 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I1:}}} & AF\/EM names & random & \\cellcolor{darkgray}1.24 & $<10^{-30}$ & \\cellcolor{mediumgray}0.77 & $<10^{-30}$ & 0.07 & $<10^{-30}$ & 0.02 & $<10^{-2}$ \\\\\n & AF\/EM intersectional & fixed & \\cellcolor{darkgray}{1.25} & $<10^{-30}$ & \\cellcolor{darkgray}{0.98 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.23 } & $<10^{-30}$ & -0.19 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I2:}}} & AF\/EM names & random & \\cellcolor{darkgray}1.25 & $<10^{-30}$ & \\cellcolor{mediumgray}0.67 & $<10^{-30}$ & -0.09 & $<10^{-30}$ & 0.02 & $<10^{-2}$ \\\\\n & AF emergent\/EM intersectional & fixed & \\cellcolor{darkgray}{1.27} & $<10^{-30}$ & \\cellcolor{darkgray}{1.00} & $<10^{-30}$ & \\cellcolor{lightgray}{0.23 } & $<10^{-30}$ & -0.14 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I3:}}} & MF\/EM names & random & \\cellcolor{darkgray}1.31 & $<10^{-30}$ & \\cellcolor{mediumgray}0.68 & $<10^{-30}$ & -0.06 & $<10^{-30}$ & \\cellcolor{lightgray}0.38 & $<10^{-30}$ \\\\\n & MF\/EM intersectional & fixed & \\cellcolor{darkgray}{1.29}& $<10^{-30}$ & \\cellcolor{mediumgray}{0.51} & $<10^{-30}$ & 0.00 & 0.81 & \\cellcolor{lightgray}{0.32 } & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I4:}}} & MF\/EM names & random & \\cellcolor{darkgray} 1.51 & $<10^{-30}$ &\\cellcolor{darkgray} 0.86 & $<10^{-30}$ & 0.16 & $<10^{-30}$ & \\cellcolor{lightgray}-0.32 & $<10^{-30}$ \\\\\n & MF emergent\/EM intersectional & fixed & \\cellcolor{darkgray}{1.43} & \n $<10^{-30}$ & \\cellcolor{mediumgray}{0.58} & $<10^{-30}$ & \\cellcolor{lightgray}{0.20} & $<10^{-30}$ & \\cellcolor{lightgray}{-0.25} & $<10^{-30}$ \\\\ \\hline \n \\multicolumn{9}{c}{\\hspace{0mm} $^{\\ast}$\\footnotesize{(Un)pleasant attributes used to measure valence and attitudes towards targets from \\citet{greenwald1998measuring}.}}\n\n\\end{tabular}\n}\n\\end{table*}\n\n\\fi\n\\subsection{Evaluation of CEAT} Congruent with \\citet{caliskan2017semantics}'s WEAT findings, Table~\\ref{table:socialbias-measure} presents significant effect sizes for all previously documented and validated biases. GPT-2 exhibited less bias than other neural language models. \nOur method CEAT, designed for CWEs, computes the combined bias score of a distribution of effect sizes present in neural language models. 
We find that the effect magnitudes of biases reported by Tan and Celis \\citep{tan2019assessing} are individual samples in the distributions generated by CEAT. Their method can be viewed as a special case of CEAT that calculates the individual bias scores of a few pre-selected samples. To comprehensively measure the overall bias score in a neural language model, we apply a random-effects model from the meta-analysis literature that computes a combined effect size and combined statistical significance from a distribution of bias measurements. As a result, CEAT can report a significant overall bias even when some of the corresponding individual bias scores in prior work are not statistically significant. Furthermore, our results indicate statistically significant bias in the opposite direction in some cases. These negative results suggest that some WEAT stimuli tend to occur more frequently in stereotype-incongruent contexts.\n\nWe sampled combinations of CWEs $10,000$ times for each CEAT test and observed varying intensities of the same social bias in different contexts. Using a completely random set versus a fixed set of contexts derived from $10,000$ sentences leads to low variance in the corresponding bias scores. Using a fixed set of contexts for each model makes it possible to compare the magnitude of bias across models for the same variables. Experiments conducted with $1,000$, $5,000$, and $10,000$ samples of CWEs lead to similar bias scores with low variance. As a result, the number of samples can be adjusted according to the available computational resources. However, future work on evaluating the lower bound of the sampling size with respect to model and corpus characteristics would optimize the sampling process and make the computation of overall bias in a language model more efficient.\n\n\n\\subsection{IBD, EIBD, and CEAT Results} We report the overall magnitude of bias (CES) and the corresponding $p$-value in Table~\\ref{table:socialbias-measure}. We pick an example from Table~\\ref{table:socialbias-measure} that reflects the great disparity in bias magnitudes between two models. Figure~\\ref{fig:weat} presents the distribution histograms of effect sizes for the emergent intersectional biases associated with stimuli unambiguously referring to Mexican American females (row I4 in Table~\\ref{table:socialbias-measure}) in ELMo and GPT-2, illustrating the overall biases that a comprehensive contextualized bias test can measure. \nThe distribution plots for the other bias tests are provided in our project repository.\n\nWe find that CEAT uncovers more evidence of intersectional bias than of gender or racial biases. These findings suggest that members of multiple minority or disadvantaged groups are associated with the strongest levels of bias in neural language representations. To quantify the intersectional biases in CWEs, we construct tests I1-I4. Tests with Mexican American females tend to exhibit stronger bias, with a higher CES, than those with African American females. \nSpecifically, 13 of 16 instances in intersection-related tests (I1-I4) have significant stereotype-congruent CES; 9 of 12 instances in gender-related tests (C6-C8) have significant stereotype-congruent CES; and 8 of 12 instances in race-related tests (C3-C5) have significant stereotype-congruent CES. In gender bias tests, the gender associations with career and family are stronger than the other biased gender associations. In all models, the significantly biased intersectional associations have larger effect sizes than the racial biases.
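\n\nTo make the aggregation step concrete, the following minimal Python sketch illustrates the random-effects combination described above, using the formulas detailed in the supplementary meta-analysis section; the function name \\texttt{combine\\_effect\\_sizes} and the NumPy\/SciPy implementation are illustrative choices rather than the released CEAT code.\n\\begin{verbatim}\n# Sketch: combine per-sample WEAT effect sizes with a random-effects model.\nimport numpy as np\nfrom scipy.stats import norm\n\ndef combine_effect_sizes(es, var):\n    # es  -- effect sizes ES_i, one per sampled combination of CWEs\n    # var -- in-sample variances V_i of the corresponding WEATs\n    es, var = np.asarray(es, float), np.asarray(var, float)\n    n = len(es)\n    w = 1.0 / var                                 # fixed-effects weights W_i\n    q = np.sum(w * es**2) - np.sum(w * es)**2 / np.sum(w)\n    c = np.sum(w) - np.sum(w**2) / np.sum(w)\n    sigma2_between = max(0.0, (q - (n - 1)) / c)  # between-sample variance\n    v = 1.0 / (var + sigma2_between)              # random-effects weights v_i\n    ces = np.sum(v * es) / np.sum(v)              # combined effect size (CES)\n    se = np.sqrt(1.0 / np.sum(v))                 # standard error of CES\n    p = 2 * (1 - norm.cdf(abs(ces / se)))         # two-tailed combined p-value\n    return ces, p\n\n# Usage: es[i] and var[i] would come from the WEAT computed on the i-th\n# random sample of contextualized embeddings (N = 10,000 in our experiments).\n\\end{verbatim}\n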
\n\nAccording to the CEAT results in Table~\\ref{table:socialbias-measure}, ELMo is the most biased whereas GPT-2 is the least biased with respect to the types of biases CEAT measures. We notice that significant negative CES exist in BERT, GPT, and GPT-2, which implies that stereotype-incongruent biases with small effect sizes exist. \n \n\\section{Discussion}\n \\label{sec:discussion}\n\nAccording to our findings, GPT-2 has the highest variance in bias magnitudes, followed by GPT, BERT, and ELMo (see an example in Figure~\\ref{fig:weat}). The overall magnitude of bias follows the reverse order: ELMo exhibits the largest overall bias, followed by BERT, GPT, and GPT-2, for the types of biases we measured. Neither the comparable number of parameters in these models nor the size of their training corpora explains the distribution of bias that we observe with respect to variance and overall magnitude. However, \\citet{ethayarajh2019contextual} note the same descending pattern when measuring words' self-similarity, after adjusting for anisotropy (non-uniform directionality), across the CWEs of GPT-2, BERT, and ELMo (ELMo is compared across three layers due to its architecture). \\citet{ethayarajh2019contextual} also find that upper layers of contextualizing models produce more context-specific representations. Quantifying how contextualized these dynamic embeddings are supports our finding that high variance in bias magnitude, low overall bias, and low self-similarity co-occur. This correlation may explain the results we observe. As more recent models learn highly contextualized CWEs in their upper layers, the representations in these layers almost overfit to their contexts. Since words appear in numerous contexts, the more contextualized and diverse a word's representations become, the weaker the overall bias and general stereotypical associations.\n\nWe present and validate a bias detection method that generalizes to identifying biases associated with any social group or intersectional group member. We detect and measure biases associated with Mexican American and African American females in SWE and CWE.\nOur emergent intersectional bias measurement results for African American females are in line with previous findings \\citep{may2019measuring,tan2019assessing}.\nIBD and EIBD can detect intersectional biases from SWE with high accuracy in an unsupervised manner by following a lexicon induction strategy \\cite{hatzivassiloglou1997predicting}. This approach can be complementary to the stimuli lists predefined by social psychologists.\nOur current intersectional bias detection validation approach can be used to identify association thresholds when generalizing this work to the entire word embedding dictionary. Exploring all the potential biases associated with targets is left to future work since it requires extensive human subject validation studies in collaboration with social psychologists. We list all the stimuli representing biased associations in the supplementary materials.
To name a few, the superset of intersectional biases associated with African American females is: aggressive, assertive, athletic, bigbutt, confident, darkskinned, fried-chicken, ghetto, loud, overweight, promiscuous, unfeminine, unintelligent, unrefined. The emergent intersectional biases associated with African American females are: aggressive, assertive, bigbutt, confident, darkskinned, fried-chicken, overweight, promiscuous, unfeminine. The superset of intersectional biases associated with Mexican American females is: attractive, cook, curvy, darkskinned, feisty, hardworker, loud, maids, promiscuous, sexy, short, uneducated, unintelligent. The emergent intersectional biases associated with Mexican American females are: cook, curvy, feisty, maids, promiscuous, sexy.\n\nWe follow the conventional method of using the most frequent given names in a social group that signal group membership in order to accurately represent targets \\citep{caliskan2017semantics,greenwald1998measuring}.\nOur results indicate that this conventional method, which relies on stimuli selected by experts in social psychology, works accurately. Prior work on lexicon induction methods compensates for the lack of existing annotated data on valence \\cite{hatzivassiloglou1997predicting, riloff2003learning, turney2003measuring}. Nevertheless, this domain still lacks principled and robust lexicon induction methods that can be validated when measuring the representation accuracy of target group lexica or of any semantic concept. Developing such principled methods is left to future work. \n\nThe semantics of languages can be represented by the distributional statistics of word co-occurrences \\cite{firth1957synopsis, harris1954distributional}. Consequently, our methods are language-agnostic and can be applied to neural language models as well as word embeddings in any language, as long as the stimuli for accurately representing the semantics of concepts are available. Project Implicit (\\url{https:\/\/implicit.harvard.edu\/implicit}) has been hosting IATs for human subjects all over the world in numerous languages for two decades. As a result, their IATs, which inspired WEATs, provide stimuli for targets and attributes in numerous languages. We leave generalizing our methods to other languages to future work since state-of-the-art neural language models are not widely or freely available for languages other than English as of 2021.\n\nWhen simulating contexts for WEAT, we assume that the Reddit corpus represents naturally occurring sentences. Nevertheless, we acknowledge that the Reddit corpus also reflects the biases of the population that contributed to it. Studying the accuracy of simulating the most common distribution of contexts and co-occurring stimuli is left to future work since we do not have validated ground truth data for evaluating the distribution parameters of contexts in large-scale corpora.
Instead, for evaluation, validation, and comparison, we rely on validated ground truth information about biases in word embeddings documented by \\citet{caliskan2017semantics}, on biases documented by millions of people over decades in the implicit association literature \\cite{nosek2002harvesting}, and on \\citet{ghavami2013intersectional}'s intersectional biases.\n\nGiven energy and funding considerations, we are not able to train these language models on the same large-scale corpora to compare how a neural language model's architecture learns biases; the training processes for these models are computationally and financially expensive \\cite{bender2021dangers}. The size of state-of-the-art models increases by at least a factor of 10 every year. BERT-Large from 2018 has 355 million parameters, GPT-2 from early 2019 reaches 1.5 billion, and GPT-3 from mid-2020 reaches 175 billion parameters. The GPT-2 model used 256 Google Cloud TPU v3 cores for training, which cost 256 US dollars per hour. GPT-2 requires approximately 168 hours, or 1 week, of training on 32 TPU v3 chips \\cite{strubell2019energy}. GPT-3 is estimated to cost $\\sim$12 million US dollars \\cite{floridi2020gpt}, and we do not have access to its embeddings or training corpora. Nonetheless, by measuring the scope of biases with validated bias quantification and meta-analysis methods, we are able to compare the biased associations learned by widely used neural language models. Being able to study neural language models comprehensively is critical since they are replacing SWE in many NLP applications due to their high accuracy in various machine learning tasks.\n\nWe would like to conclude the discussion with our ethical concerns regarding the dual use of IBD and EIBD, which can detect stereotypical associations for an intersectional group or for disadvantaged individuals. Words retrieved by our methods may be used in the generation of offensive or stereotypical content that perpetuates or amplifies existing biases. For example, information influence operations in the 1970s used \\citet{osgood1964semantic}'s semantic differential technique among human subjects to retrieve the words that would most effectively induce a negative attitude in a South American population towards their administration \\cite{landis1982cia}. Similarly, biased neural language models may be exploited to automate large-scale information influence operations that intend to sow discord among social groups \\citep{toney2020pro, toney2020valnorm}. The biased outputs of these language models, which are recycled into the training corpora of future model generations, may lead to an AI bias feedback cycle. \n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nWe introduce methods called IBD and EIBD to identify biases associated with members of multiple minority groups. These methods automatically detect the intersectional biases and emergent intersectional biases captured by word embeddings. Intersectional biases associated with African American and Mexican American females have the highest effect sizes compared to other social biases.
Complementary to pre-defined sets of attributes used to measure widely known biases, our methods discover biases automatically.\nIBD reaches an accuracy of 81.6\\% and 82.7\\% in detection, respectively, when validated on the intersectional biases of African American females and Mexican American females.\nEIBD reaches an accuracy of 84.7\\% and 65.3\\% in detection, respectively, when validated on the emergent intersectional biases of African American females and Mexican American females.\n\nWe present CEAT to measure biases identified by IBD and EIBD in language models. CEAT uses a random-effects model to comprehensively measure social biases embedded in neural language models that contain a distribution of context-dependent biases. CEAT simulates this distribution by sampling ($N=10,000$) combinations of CWEs without replacement from a large-scale natural language corpus. \nUnlike prior work that focuses on a limited number of contexts defined by templates to measure the magnitude of particular biases, CEAT provides a comprehensive measurement of overall bias in contextualizing language models. Our results indicate that ELMo is the most biased, followed by BERT and GPT; GPT-2 is the least biased language model with respect to the social biases we investigate. The overall magnitude of bias negatively correlates with the level of contextualization in the language model. Understanding how the architecture of a language model contributes to biased and contextualized word representations can help mitigate the harmful effects on society in downstream applications.\n\n\n\\section{Plots}\n\n\n\\section{Stimuli}\nThe stimuli used to represent targets and attributes in CEAT (C1-C10) are taken from Caliskan et al.~\\cite{caliskan2017semantics}.\nWe construct four intersection-related CEAT tests (I1-I4) for African American females and Mexican American females. \n\nWhen conducting the intersection-related CEAT tests, we use the names from Caliskan et al.~\\cite{caliskan2017semantics} and Parada et al.~\\cite{parada2016ethnolinguistic} to represent the target intersectional groups. Caliskan et al.'s WEAT provides the female and male names of African Americans and European Americans from the first Implicit Association Test in 1998 \\cite{greenwald1998measuring}. Parada et al. provide the female and male names of Mexican Americans \\cite{parada2016ethnolinguistic}. To determine and verify the gender of names, we use three gender checkers \\cite{huang2019gender}. We use a name as a target word in our experiments only if all three checkers categorize it as the same gender. Human subjects provide the validation set of intersectional attributes with ground truth information \\cite{ghavami2013intersectional}. We use this validation set for evaluating the intersection-related CEAT, IBD, and EIBD experiments.\nTo follow the order of stereotype-congruity, we use European American males as the second target group and use the attributes associated with their intersectional biases as the second attribute set in the intersection-related CEAT tests. There are only three emergent intersectional biases associated with European American males in the validation set, which does not provide a sufficient number of stimuli. A small set of stimuli does not satisfy the requirements for generating statistically significant concept representations and WEATs. Related stimuli details are discussed in the dataset and stimuli sections of the main paper.
In addition, if the size of the first attribute set is smaller than that of the attributes of European American males, we randomly select an equal number of attributes associated with the intersectional biases of European American males. WEAT requires equal-sized sets of attributes.\n\n\n\\subsection{CEAT I1}\nWe use the frequent given names of African American females and European American males as two target social groups and use the attributes associated with the intersectional biases of African American females and attributes associated with the intersectional biases of European American males as the two attribute groups.\n\nSince `assertive' is associated with both African American females and European American males, we do not include it in this test.\n\n\\begin{itemize}\n \\item \\textbf{African American females}: Aisha, Keisha, Lakisha, Latisha, Latoya, Malika, Nichelle, Shereen, Tamika, Tanisha, Yolanda, Yvette\n \\item \\textbf{European American males}: Andrew, Brad, Frank, Geoffrey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen\n \\item \\textbf{Intersectional biases of African American females}: aggressive, athletic, bigbutt, confident, darkskinned, fried-chicken, ghetto, loud, overweight, promiscuous, unfeminine, unintelligent, unrefined\n \\item \\textbf{Intersectional biases of European American males}: all-American, arrogant, attractive, blond, high-status, intelligent, leader, privileged, racist, rich, sexist, successful, tall\n\\end{itemize}\n\n\\subsection{CEAT I2}\nWe use the frequent given names of African American females and European American males as two target groups. We use attributes associated with emergent intersectional biases of African American females and attributes associated with intersectional biases of European American males as two attribute groups.\n\n\nSince `assertive' is associated with emergent intersectional bias of African American females and intersectional bias of European American males, we do not include it in this test.\n\n\\begin{itemize}\n \\item \\textbf{African American females}: Aisha, Keisha, Lakisha, Latisha, Latoya, Malika, Nichelle, Shereen, Tamika, Tanisha, Yolanda, Yvette\n \\item \\textbf{European American males}: Andrew, Brad, Frank, Geoffrey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen\n \\item \\textbf{Emergent intersectional biases of African American females}: aggressive, bigbutt, confident, darkskinned, fried-chicken, overweight, promiscuous, unfeminine\n \\item \\textbf{Intersectional biases of European American males}: arrogant, blond, high-status, intelligent, racist, rich, successful, tall\n\\end{itemize}\n\n\\subsection{CEAT I3}\nWe use the frequent given names of Mexican American females and European American males as the target groups and the words associated with their intersectional biases as the attribute groups.\n\nSince `attractive' is associated with intersectional biases of both Mexican American females and European American males, we do not include it in this test.\n\n\\begin{itemize}\n \\item \\textbf{Mexican American females}: Adriana, Alejandra, Alma, Brenda, Carolina, Iliana, Karina, Liset, Maria, Mayra, Sonia, Yesenia\n \\item \\textbf{European American males}: Andrew, Brad, Frank, Geoffrey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen\n \\item \\textbf{Intersectional biases of Mexican American females}: cook, curvy, darkskinned, feisty, hardworker, loud, maids, promiscuous, sexy, short, uneducated, unintelligent\n \\item \\textbf{Intersectional biases of 
European American males}: all-American, arrogant, blond, high-status, intelligent, leader, privileged, racist, rich, sexist, successful, tall\n\\end{itemize}\n\n\\subsection{CEAT I4}\nWe use the frequent given names of Mexican American females and European American males as target groups. We use words associated with the emergent intersectional biases of Mexican American females and words associated with the intersectional biases of European American males as the two attribute groups.\n\n\\begin{itemize}\n \\item \\textbf{Mexican American females}: Adriana, Alejandra, Alma, Brenda, Carolina, Iliana, Karina, Liset, Maria, Mayra, Sonia, Yesenia\n \\item \\textbf{European American males}: Andrew, Brad, Frank, Geoffrey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen\n \\item \\textbf{Emergent intersectional biases of Mexican American females}: cook, curvy, feisty, maids, promiscuous, sexy\n \\item \\textbf{Intersectional biases of European American males}: arrogant, assertive, intelligent, rich, successful, tall\n\\end{itemize}\n\n\\subsection{IBD and EIBD}\nWe detect the attributes associated with the intersectional biases and emergent intersectional biases of African American females and Mexican American females in GloVe SWE. We assume that there are three subcategories under the race category (African American, Mexican American, European American) and two subcategories under the gender category (female, male). We use the frequent given names to represent each intersectional group. Again, we note that, in future work we'd generalize this work to $n$ subcategories under each category. Further, in future work, instead of categorizing people into social groups, we'd like to explore representing individuals in social data with continuous real-valued variables as opposed to associating them with category labels.\n\n\\begin{itemize}\n \\item \\textbf{African American females}: Aisha, Keisha, Lakisha, Latisha, Latoya, Malika, Nichelle, Shereen, Tamika, Tanisha, Yolanda, Yvette\n \\item \\textbf{African American males}: Alonzo, Alphonse, Hakim, Jamal, Jamel, Jerome, Leroy, Lionel, Marcellus, Terrence, Tyrone, Wardell\n \\item \\textbf{European American females}: Carrie, Colleen, Ellen, Emily, Heather, Katie, Megan, Melanie, Nancy, Rachel, Sarah,\\\\Stephanie\n \\item \\textbf{European American males}: Andrew, Brad, Frank, Geoffrey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen\n \\item \\textbf{Mexican American females}: Adriana, Alejandra, Alma, Brenda, Carolina, Iliana, Karina, Liset, Maria, Mayra, Sonia, Yesenia\n \\item \\textbf{Mexican American males}: Alberto, Alejandro, Alfredo, Antonio, C\u00e9sar, Jes\u00fas, Jos\u00e9, Juan, Miguel, Pedro, Rigoberto, Rogelio\n\n\\end{itemize}\n\n\nWe list all the attributes used in validation experiments. These are stimuli associated with different social groups and a set of random words that are not expected to be associated with social groups. These random attributes are borrowed from the insect target category of the `universally accepted stereotypes' IAT, which is a baseline WEAT. We use 98 words in total in the validation experiments. \n\n\nWe also list the probability of random chance of correct classification in parentheses next to each social group. The goal here is to present the success rate of the machine learning detection method in a meaningful way. 
To calculate the probability of random chance of correct classification, we treat the detection task as a one-vs-all verification task, a type of classification task that aims to verify the membership of a data point in the main class; under this setting, each listed probability is the number of attributes associated with a group divided by the 98 validation words (for example, $13\/98 \\approx 13.3\\%$ for a group with 13 associated attributes). All other data points from the open world, which do not belong to the main class we are testing for, can belong to any other class. However, in reality, our detection method is a semi-supervised non-parametric prediction method that relies on the threshold $t$.\nSome of the attributes of different social groups overlap. For example, `assertive' is associated with the emergent intersectional biases of African American females and also with the emergent intersectional biases of European American males. As a result, the sum of the random chances of all groups is not 100\\%.\n\\begin{itemize}\n \\item \\textbf{Biases of Females}: attractive, caring, dependent, emotional, feminine, jealous, manipulative, materialistic, motherly, petite, soft, submissive, talkative (random chance of correct classification: 13.3\\%)\n \\item \\textbf{Biases of Males}: aggressive, ambitious, arrogant, fixer-upper, high-status, intelligent, leader, messy, provider, respected, sexist, tall, unfaithful (random chance of correct classification: 13.3\\%)\n \\item \\textbf{Biases of African Americans}: athletic, criminals, dangerous, gangsters, ghetto, lazy, loud, poor, tall, uneducated, unrefined, violent (random chance of correct classification: 12.2\\%)\n \\item \\textbf{Emergent Intersectional Biases of African American Females}: aggressive, assertive, bigbutt, confident, \\\\darkskinned, fried-chicken, overweight, promiscuous, unfeminine (random chance of correct classification: 9.2\\%)\n \\item \\textbf{Intersectional Biases of African American Females}: aggressive, assertive, athletic, bigbutt, confident, darkskinned, fried-chicken, ghetto, loud, overweight, promiscuous, unfeminine, unintelligent, unrefined (random chance of correct classification: 14.3\\%)\n \\item \\textbf{Emergent Intersectional Biases of African American Males}: darkskinned, hypersexual, rapper (random chance of correct classification: 3.1\\%)\n \\item \\textbf{Intersectional Biases of African American Males}: athletic, criminals, dangerous, darkskinned, gangsters, hypersexual, lazy, loud, poor, rapper, tall, unintelligent, violent (random chance of correct classification: 13.3\\%)\n \\item \\textbf{Biases of European Americans}: all-American, arrogant, attractive, blond, blue-eyes, high-status, ignorant, intelligent, overweight, patronizing, privileged, racist, red-neck, rich, tall (random chance of correct classification: 15.3\\%)\n \\item \\textbf{Emergent Intersectional Biases of European American Females}: ditsy (random chance of correct classification: 1.0\\%)\n \\item \\textbf{Intersectional Biases of European American Females}: arrogant, attractive, blond, ditsy, emotional, feminine, high-status, intelligent, materialistic, petite, racist, rich, submissive, tall (random chance of correct classification: 14.3\\%)\n \\item \\textbf{Emergent Intersectional Biases of European American Males}: assertive, educated, successful (random chance of correct classification: 3.1\\%)\n \\item \\textbf{Intersectional Biases of European American Males}: all-American, arrogant, assertive, attractive, blond, educated, high-status, intelligent, leader, privileged, racist, rich, sexist, successful, tall (random chance of correct classification: 15.3\\%)\n \\item 
\\textbf{Biases of Mexican Americans}: darkskinned, day-laborer, family-oriented, gangster, hardworker, illegal-immigrant, lazy, loud, macho, overweight, poor, short, uneducated, unintelligent (random chance of correct classification: 14.3\\%)\n \\item \\textbf{Emergent Intersectional Biases of Mexican American Females}: cook, curvy, feisty, maids, promiscuous, sexy (random chance of correct classification: 6.1\\%)\n \\item \\textbf{Intersectional Biases of Mexican American Females}: attractive, cook, curvy, darkskinned, feisty, hardworker, loud, maids, promiscuous, sexy, short, uneducated, unintelligent (random chance of correct classification: 13.3\\%)\n \\item \\textbf{Emergent Intersectional Biases of Mexican American Males}: drunks, jealous, promiscuous, violent (random chance of correct classification: 4.1\\%)\n \\item \\textbf{Intersectional Biases of Mexican American Males}: aggressive, arrogant, darkskinned, day-laborer, drunks, hardworker, illegal-immigrant, jealous, macho, poor, promiscuous, short, uneducated, unintelligent, violent (random chance of correct classification: 15.3\\%)\n \\item \\textbf{Random (Insects)}: ant, bedbug, bee, beetle, blackfly, caterpillar, centipede, cockroach, cricket, dragonfly, flea, fly, gnat, hornet, horsefly, locust, maggot, mosquito, moth, roach, spider, tarantula, termite, wasp, weevil (random chance of correct classification: 25.5\\%)\n\\end{itemize}\n\n\\section{Open Source Code, Data, and Documentation}\n\\url{https:\/\/github.com\/weiguowilliam\/CEAT} is the link to our open-source git repository. Code and links to datasets are available in the project repository. In addition, answers to frequently asked questions about the details of extracting the contextualized word embeddings are documented. The extracted embeddings for the stimuli take up approximately 50~GB of memory. \n\n\\subsection{Meta-Analysis Details for CEAT}\n\nIn this section, we first construct all CEAT tests from the main paper (C1-C10, I1-I4) with sample size $N=1,000$ to provide a comparison of results with different sample sizes. We report the CES ($d$) and combined $p$-value ($p$) in Table~\\ref{table:supp-main}. We replicate these results with $N=1,000$ instead of the original $N=10,000$ to show that we obtain valid results even with $N=1,000$. Accordingly, we proceed to calculate all types of biases associated with intersectional groups based on the attributes used in the original WEAT. \nWe notice that there are five tests which are significant with sample size $N=10,000$ but insignificant with sample size $N=1,000$: C10 with BERT, C4 with GPT, C7 with GPT-2, I3 with GPT-2, and I4 with GPT-2. We also notice that the CES of the same test can differ across sample sizes, but all differences are smaller than $0.1$.\n\n\\begin{table*}[t]\n\\caption{\\textbf{CEAT from the main paper (C1-C10, I1-I4) with sample size $N=1,000$ as opposed to the $N=10,000$ hyper-parameter in the main paper.} We report the CES ($d$) and combined $p$-values ($p$) of all CEAT tests in the main paper with sample size $N=1,000$. We observe that all of the results are consistent with the CES and $p$-values reported in Table 1 of the main paper. Light, medium, and dark gray shading of combined $d$ values (CES) indicates small, medium, and large effect size, respectively. There are five tests which are significant with sample size $N=10,000$ but not significant with sample size $N=1,000$. 
However, these have small effect sizes and as a result we don't expect statistical significance. According to our experiments, the Spearman correlation between WEAT's effect size and $p-value$ is $\\rho=0.99$. Smaller effect sizes are expected to have insignificant p-values. Accordingly, all of the results under $N=1,000$ are consistent with the main findings. The notable yet consistent differences are C10 with Bert, C4 with GPT, C7 with GPT-2, I3 with GPT-2, and I4 with GPT-2. CES varies minimally with different sample size ($N$), but the differences of the results are smaller than $0.1$, suggesting the degree of effect size remains consistent. In edge cases, where statistical significance or effect size is close to a significance threshold, gradually increasing $N$, in increments of $N=+500$ would provide more reliable results. $A\\_$ stands for African Americans. $E\\_$ stands for European Americans. $M\\_$ stands for Mexican Americans. $\\_F$ stands for females. $\\_M$ stands for males.\\\\}\n\\label{table:supp-main}\n \\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{@{}lcccccccc@{}}\n\\toprule\n\\textbf{Test} &\n \\multicolumn{2}{c}{\\textbf{ELMo}} &\n \\multicolumn{2}{c}{\\textbf{BERT}} &\n \\multicolumn{2}{c}{\\textbf{GPT}} &\n \\multicolumn{2}{c}{\\textbf{GPT-2}} \\\\ \\cmidrule(l){2-9} \n & $d$ & $p$ & $d$ & $p$ & $d$ & $p$ & $d$ & $p$ \\\\ \\midrule\nC1: Flowers\/Insects, P\/U$^{\\ast}$ - Attitude & \\cellcolor{darkgray}1.39 & $<10^{-30}$ & \\cellcolor{darkgray}0.96 & $<10^{-30}$ & \\cellcolor{darkgray}1.05 & $<10^{-30}$ & 0.13 & $<10^{-30}$ \\\\\nC2: Instruments\/Weapons, P\/U$^{\\ast}$ - Attitude & \\cellcolor{darkgray}1.56 & $<10^{-30}$ & \\cellcolor{darkgray}0.93 & $<10^{-30}$ & \\cellcolor{darkgray}1.13 & $<10^{-30}$ & \\cellcolor{lightgray}-0.28 & $<10^{-30}$ \\\\\nC3: EA\/AA names, P\/U$^{\\ast}$ - Attitude & \\cellcolor{lightgray}0.48 & $<10^{-30}$ &\\cellcolor{lightgray} 0.45 & $<10^{-30}$ & -0.11 & $<10^{-30}$ & \\cellcolor{lightgray}-0.20 & $<10^{-30}$ \\\\\nC4: EA\/AA names, P\/U$^{\\ast}$ - Attitude & 0.16 & $<10^{-30}$ & \\cellcolor{lightgray}0.49 & $<10^{-30}$ & 0.00 & 0.70 & \\cellcolor{lightgray}-0.23 & $<10^{-30}$ \\\\\nC5: EA\/AA names, P\/U$^{\\ast}$ - Attitude & 0.12 & $<10^{-30}$ & 0.04 & $<10^{-2}$ & 0.05 & $<10^{-4}$ & -0.17 & $<10^{-30}$ \\\\\nC6: Males\/Female names, Career\/Family & \\cellcolor{darkgray}1.28 & $<10^{-30}$ & \\cellcolor{darkgray}0.91 & $<10^{-30}$ & \\cellcolor{lightgray}0.21 & $<10^{-30}$ & \\cellcolor{lightgray}0.34 & $<10^{-30}$ \\\\\nC7: Math\/Arts, Male\/Female terms & \\cellcolor{mediumgray}0.65 & $<10^{-30}$ & \\cellcolor{lightgray}0.42 & $<10^{-30}$ & \\cellcolor{lightgray}0.23 & $<10^{-30}$ & 0.00 & 0.81 \\\\\nC8: Science\/Arts, Male\/Female terms & \\cellcolor{lightgray}0.32 & $<10^{-30}$ & -0.07 & $<10^{-4}$ & \\cellcolor{lightgray}0.26 & $<10^{-30}$ & -0.16 & $<10^{-30}$ \\\\\nC9: Mental\/Physical disease, Temporary\/Permanent & \\cellcolor{darkgray}0.99 & $<10^{-30}$ & \\cellcolor{mediumgray}0.55 & $<10^{-30}$ & 0.07 & $<10^{-2}$ & 0.04 & 0.04 \\\\\nC10: Young\/Old people's names, P\/U$^{\\ast}$ - Attitude & 0.11 & $<10^{-19}$ & 0.00 & 0.90 & 0.04 & $<10^{-2}$ & -0.17 & $<10^{-30}$ \\\\\nI1: AF\/EM, AF\/EM intersectional & \\cellcolor{darkgray}1.24 & $<10^{-30}$ & \\cellcolor{mediumgray}0.76 & $<10^{-30}$ & 0.05 & $<10^{-3}$ & 0.05 & 0.06 \\\\\nI2: AF\/EM, AF emergent\/EM intersectional & \\cellcolor{darkgray}1.24 & $<10^{-30}$ & \\cellcolor{mediumgray}0.70 & $<10^{-30}$ & -0.12 & $<10^{-30}$ & 0.03 & 0.26 \\\\\nI3: MF\/EM, 
MF\/EM intersectional & \\cellcolor{darkgray}1.30 & $<10^{-30}$ & \\cellcolor{mediumgray}0.69 & $<10^{-30}$ & -0.08 & $<10^{-30}$ & \\cellcolor{lightgray}0.36 & $<10^{-30}$ \\\\\nI4: MF\/EM, MF emergent\/EM intersectional &\n \\cellcolor{darkgray}1.52 &\n $<10^{-30}$ &\n \\cellcolor{darkgray}0.87 &\n $<10^{-30}$ &\n 0.14 &\n $<10^{-27}$ &\n \\cellcolor{lightgray}-0.26 &\n $<10^{-30}$ \\\\ \\bottomrule\n \\multicolumn{9}{c}{$^{\\ast}$Unpleasant and pleasant attributes used to measure valence and attitudes towards targets \\cite{greenwald1998measuring}.}\n\\end{tabular}}\n\\end{table*}\n\n\nWe also construct four types of supplementary CEAT tests for all pairwise combinations of six intersectional groups: African American females (AF), African American males (AM), Mexican American females (MF), Mexican American males (MM), European American females (EF), and European American males (EM). We use the two intersectional groups in each pair as the two target social groups. For each pairwise combination, we build four CEAT tests: first, we measure attitudes with words representing pleasantness and unpleasantness as the two attribute groups (as in C1); second, we measure career and family associations with the corresponding two attribute groups (as in C6); third, we measure math and arts associations with the corresponding two attribute groups (as in C7); and fourth, we measure science (STEM) and arts associations with the corresponding two attribute groups (as in C8). The latter three attribute pairs are particularly important in gender stereotypes. We report the CES ($d$) and combined $p$-values ($p$) in Table 2 with sample size $N=1,000$. All of these attributes are from the C1, C6, C7, and C8 WEATs of Caliskan et al.~\\cite{caliskan2017semantics}.\n\n\\input{supp\/longtable}\n\\input{supp\/longtable_2}\n\n\\subsection{Formal Definition of WEAT}\nWe present a formal definition of \\citet{caliskan2017semantics}'s WEAT. Let $X$ and $Y$ be two sets of target words of equal size, and let $A$ and $B$ be two sets of attribute words. Let $cos(\\vec{a},\\vec{b})$ stand for the cosine similarity between the embeddings of words $a$ and $b$; here, the vector $\\vec{a}$ is the embedding for word $a$. The test statistic is \n\\[ s(X,Y,A,B) = \\sum_{x\\in X}{s(x,A,B)} - \\sum_{y\\in Y}{s(y,A,B)} \\]\nwhere \n\\[ s(w,A,B) = mean_{a \\in A}cos(\\vec{w}, \\vec{a})-mean_{b \\in B}cos(\\vec{w}, \\vec{b}) \\]\n\nA permutation test calculates the statistical significance of the association $s(X,Y,A,B)$. The one-sided $p$-value is \n\\[ P = Pr_{i} [s(X_{i},Y_{i},A,B)>s(X,Y,A,B)] \\]\nwhere $\\{(X_i,Y_i)\\}_{i}$ represents all the partitions of $X\\cup Y$ into two sets of equal size. Random permutations of these stimuli sets represent the null hypothesis, as if the biased associations did not exist, so that we can perform a statistical significance test by measuring the unlikelihood of the null hypothesis given the effect size of WEAT.\n\nThe effect size of bias is calculated as \n\\[ ES = \\frac{mean_{x \\in X}s(x,A,B)-mean_{y \\in Y}s(y,A,B)}{std\\_dev_{w \\in X\\bigcup Y}s(w,A,B)} \\]\n\n\\subsection{Formal Definition of EIBD}\nWe first detect $C_{11}$'s intersectional biases $W_{IB}$ with IBD.\nThen, we detect the biased attributes associated with only one constituent category of the intersectional group $C_{11}$ (e.g., associated only with race, $S_{1n}$, or only with gender, $S_{m1}$). 
Each intersectional category $C_{1n}$ has $M$ constituent subcategories $S_{in}, i=1,\\ldots,M$, and category $C_{m1}$ has $N$ constituent subcategories $S_{mj}, j=1,\\ldots,N$.\n$S_{1n}$ and $S_{m1}$ are the constituent subcategories of the intersectional group $C_{11}$.\n\nThere are in total $M+N$ groups defined by all the single constituent subcategories. We use all $M+N$ groups to build WEFAT pairs $P_i = (S_{1n},S_{in}), i=1,\\ldots,M$, and $P_j=(S_{m1},S_{mj}), j=1,\\ldots,N$. Then, we detect the lists of words associated with each pair, $W_i, i=1,\\ldots,M$, and $W_j, j=1,\\ldots,N$, based on the same positive threshold $t_{mn}$ used in IBD. We detect the attributes highly associated with the constituent subcategories $S_{1n}$ and $S_{m1}$ of the target intersectional group $C_{11}$ from all $(M+N)$ WEFAT pairs. We define the words associated with the emergent intersectional biases of group $C_{11}$ as $W_{EIB}$; these words are identified by the formula\n\\vspace{-3mm}\n\\[ W_{EIB} = (\\bigcup_{i=1}^{M} (W_{IB}-W_{i}))\n\\bigcup (\\bigcup_{j=1}^{N} (W_{IB}-W_{j})) \\]\n\\noindent where \n\\vspace{-6mm}\n\\[ W_i = \\{w \\mid s(w,S_{1n},S_{in})>t_{mn}, w \\in W_{IB}\\}\\] \n\n\\noindent and \n\\vspace{-6mm}\n\\[ W_j= \\{w \\mid s(w,S_{m1},S_{mj})>t_{mn}, w \\in W_{IB}\\}\\]\n\n\\subsection{Random-Effects Model Details}\nEach effect size is calculated by \n\\[ ES_{i} = \\frac{mean_{x \\in X}s(x,A,B)-mean_{y \\in Y}s(y,A,B)}{std\\_dev_{w \\in X\\bigcup Y}s(w,A,B)} \\]\n\nThe estimate of the in-sample variance is $V_{i}$, which is the square of $std\\_dev_{w \\in X\\bigcup Y}s(w,A,B)$. \nWe use the same principle as the estimation of variance components in ANOVA to measure the between-sample variance $\\sigma^{2}_{between}$, which is calculated as\n\\[\\sigma^{2}_{between}=\\begin{cases}\n\\frac{Q-(N-1)}{c} & \\text{if } Q \\geq N-1\\\\\n0 & \\text{if } Q < N-1\n\\end{cases}\n\\]\nwhere \n\\vspace{-3mm}\n\\[\nW_{i} = \\frac{1}{V_{i}}\n\\]\n\n\\vspace{-3mm}\n\n\\[c = \\sum W_{i} - \\frac{\\sum W_{i}^{2}}{\\sum W_{i}} \\quad \\text{and} \\quad Q = \\sum W_{i} ES_{i}^{2} - \\frac{(\\sum W_{i}ES_{i})^2}{\\sum W_{i}} \\]\n\nThe weight $v_{i}$ assigned to each WEAT is the inverse of the sum of the estimated in-sample variance $V_{i}$ and the estimated between-sample variance of the random-effects distribution, $\\sigma^{2}_{between}$:\n\\[\nv_{i} = \\frac{1}{V_{i} + \\sigma^{2}_{between}}\n\\]\n\nThe CES, which is the sum of the weighted effect sizes divided by the sum of all weights, is then computed as\n\\[\nCES = \\frac{\\sum_{i=1}^{N}v_{i}ES_{i}}{\\sum_{i=1}^{N}v_{i}}\n\\]\n\nTo derive the hypothesis test, we calculate the standard error ($SE$) of the CES as the square root of the inverse of the sum of the weights:\n\\[\nSE(CES) = \\sqrt{\\frac{1}{\\sum_{i=1}^{N}v_{i}}}\n\\]\nBased on the central limit theorem, the limiting form of the distribution of $\\frac{CES}{SE(CES)}$ is the standard normal distribution \\cite{montgomery2010applied}.\nSince some CES values are negative, we use a two-tailed $p$-value, which can test the significance of biased associations in both directions.\nThe two-tailed $p$-value of the hypothesis that there is no difference between all the contextualized variations of the two sets of target words, in terms of their relative similarity to the two sets of attribute words, is given by\n\\[ P_{combined}(X,Y,A,B) = 2 \\times \\left[ 1 - \\Phi\\left( \\left| \\frac{CES}{SE(CES)} \\right| \\right) \\right] \\]\nwhere $\\Phi$ is the standard normal cumulative distribution function and $SE$ stands for the standard error.\n\n\\section{Data}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}