diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzefqh" "b/data_all_eng_slimpj/shuffled/split2/finalzzefqh" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzefqh" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}{\\label{Introduction}}\nSingle electron transistors (SETs)\n\\cite{IEEE.Transactions.on.Magnetics.1987.1142-5,\nPhysRevLett.59.109} are extremely sensitive electrometers, with demonstrated charge sensitivities of the order of $\\mathrm{\\mu e\/\\sqrt{Hz}}$ \\cite{ApplPhysLett.79.4031, JApplPhys.100.114321}. Due to their high charge sensitivities they have found a large number of applications in research, for example, SETs are used to detect nano-mechanical oscillators \\cite{Science.304.74}, to count electrons \\cite{Nature.423.422, Nature.434.361} and to read out qubits \\cite{PhysRevB.69.140503, PhysRevLett.90.027002,\nNature.421.823}.\n\nThe fundamental limitation for the sensitivity of the SET is set by shot noise generated when electrons tunnel across the tunnel barriers \\cite{Korotkov, Nature.406.1039}. Shot noise was observed in a two junction structure (without gate) by Birk \\emph{et al.} \\cite{PhysRevLett.75.1610}.\n\nHowever, there are two other types of noise which are limiting the charge sensitivity in experiments. At high frequencies, above $1\\,\\mathrm{MHz}$, the sensitivity is limited by the amplifier noise. At low frequencies the sensitivity is limited by $1\/f$ noise which is due to background charge fluctuators near the SET. A collection of several fluctuators with different frequencies leads to a $1\/f$ spectrum. In several cases it has been observed that there is a background of $1\/f$ noise and a single or a few more strongly coupled fluctuators, resulting in random telegraph noise, which in the frequency spectrum leads to Lorentzians superimposed on the $1\/f$ background \\cite{ApplPhysLett.61.237, ApplSupercondIEEETrans.5.3085}.\n\nUnderstanding the nature of the $1\/f$ noise is also very important for solid state qubits since $1\/f$ noise strongly limits the decoherence time for these qubits. It has been suggested by Ithier \\emph{et al.} that the charge noise has a cut-off at the frequency of the order of $0.5\\,\\mathrm{MHz}$ \\cite{PhysRevB.72.134519}.\n\nEven though there have been many efforts \\cite{ApplPhysLett.61.237,\nApplSupercondIEEETrans.5.3085, IEEETransInstrMeas.46.303,\nJApplPhys.84.3212, JApplPhys.78.2830, Bouchiat, PhysRevB.56.7675,\nJApplPhys.86.2132, JApplPhys.83.310, JLowTempPhys.123.103} to reveal the physical origin of the background charge fluctuators the nature of these fluctuators is still unknown. It is not even clear where these fluctuators are located. The charge fluctuators can be located either in the tunnel barrier or outside the barrier but in close proximity of the junction.\n\nThe role of the substrate has been examined in several experiments\n\\cite{Bouchiat, JApplPhys.84.3212, ApplPhysLett.91.033107}. However, those experiments did not show a strong dependence of the noise on the substrate material. The barrier dielectric has been proposed as location of the charge traps by several groups \\cite{ApplSupercondIEEETrans.5.3085, JApplPhys.84.3212,\nJApplPhys.78.2830, PhysRevB.53.13682}.\n\nSeveral groups have shown that the low frequency noise at the output of the SET varies with the current gain ($\\partial I\/\\partial Q_g$) of the SET and that the maximum noise is found at the bias point with maximum gain \\cite{JApplPhys.78.2830, JApplPhys.86.2132, JApplPhys.83.310}. 
This indicates that the noise sources act at the input of the device, \\emph{i.e.} as an external fluctuating charge. A detailed comparison of the noise to the gain was done by Starmark \\emph{et al.} \\cite{JApplPhys.86.2132}. All of the above mentioned experiments were performed with conventional SETs by measuring current or voltage noise at relatively low frequencies, \\emph{i.e.} below a few kHz.\n\nIn this work we have measured the low frequency noise of a single electron transistor which has demonstrated a very high charge sensitivity\n\\cite{JApplPhys.100.114321}, by using the Radio Frequency Single Electron Transistor (rf-SET) technique \\cite{Science280.1238, Wahlgren}. This allowed us to measure the low frequency noise of the reflected voltage from the rf-SET in the range from a few hertz up to tens of MHz, and due to the high charge sensitivity we were not limited by the amplifier noise. We find two Lorentzians superimposed on a $1\/f$ spectrum, and that the noise in the range $50\\,\\mathrm{kHz}$-$1\\,\\mathrm{MHz}$ is quite different for positive and for negative gain of the transistor. By analyzing the bias and gate dependence of the noise we argue that the noise in this frequency range is dominated by electron tunneling to an aluminum grain, which acts as a single electron box capacitively connected to the SET island.\n\nThe paper is organized as follows. In section \\ref{LFN_model} we describe a model for the low frequency noise, which allows us to separate contributions from the different noise sources. In section \\ref{Experimental_detail} we describe the experimental details of our measurements. Section \\ref{Experimental_results} is the main part of this paper and contains the experimental results. Finally, in section \\ref{Discussion} we describe our model for the nature of the low frequency noise in our SETs.\n\n\\section{Low-Frequency Noise Model}{\\label{LFN_model}}\nWe start by analyzing the different contributions to the measured noise. What we actually measure is the reflected voltage from the tank circuit in which the SET is embedded. The rf-SET tank circuit is a series $LC$ circuit with inductance $L$ and capacitance $C$. The power spectral density of the reflected voltage can be decomposed into the following terms originating from charge noise,\nresistance noise, shot noise and amplifier noise according to the following equation \\cite{Korotkov_unpubl, JApplPhys.86.2132}:\n\\begin{equation}{\\label{Noise_decomp}}\nS_{|v_{r}|}=\\left(\\frac{\\partial |v_r|}{\\partial\nQ_g}\\right)^2S_{Q_g}(f)+ \\left(\\frac{\\partial |v_r|}{\\partial\nR_1}\\right)^2S_{R_{1}}(f)+\\left(\\frac{\\partial |v_r|}{\\partial\nR_2}\\right)^2S_{R_{2}}(f)+S_{Shot}+S_{Ampl.},\n\\end{equation}\nwhere $Q_g$ is the charge at the input gate and $R_{1,2}$ are the tunnel resistances of the two junctions. Here we neglect higher order terms and possible correlation terms between the charge noise and the resistance noise. In the case when a charge fluctuator is located in the tunnel barrier the correlation term may not be negligible. By measuring the noise at different bias points having different gains (\\emph{i.e.} different $\\partial |v_r|\/\\partial Q_g$) it is possible to extract information on whether the noise is associated with charge or resistance fluctuators.\n\nWe have designed the matching circuit for the rf-SET to work in the over-coupled regime, in order to have a monotonic dependence of the reflection coefficient as a function of the SET differential conductance. 
This regime corresponds to the matching condition when the internal quality factor of the rf-SET tank circuit\n($Q_{int}=\\sqrt{L\/C}\/R$) is larger than the external quality factor \n($Q_{ext}=Z_0\/\\sqrt{L\/C}$), where $Z_0=50\\,\\Omega$ is the characteristic impedance of the transmission line, and $R$ is the SET differential resistance.\n\nWe have theoretically analyzed the reflected voltage from the rf-SET as a function of the bias and gate voltages applied to the SET. The reflected voltage characteristic of the rf-SET can be calculated by the orthodox theory \\cite{Averin} using a master equation approach. From this theory we can also calculate the derivatives in eq.(\\ref{Noise_decomp}) of the reflected voltage with respect to variations in the gate charge and in the resistances of the SET tunnel junctions.\n\n\\begin{figure}\n\\centering\\epsfig{figure=Figure1.eps,width=7cm}\n\\caption{\\label{Orthodox}\\small(color online) Calculated derivatives from equation (\\ref{Noise_decomp}) as a function of bias voltage $V$, and gate charge $Q_g=C_gV_g$. The derivative of the reflected voltage from the rf-SET with respect to (a) the charge fluctuations $(\\partial |v_r|\/\\partial Q_g)^2$; (b) the resistance fluctuations in the 1st junction $(\\partial |v_r|\/\\partial R_1)^2$. The sensitivity to the resistance fluctuations in the second junction has a mirror symmetry, along the SET open state $(Q_g=0.5\\,e)$, with the sensitivity to the resistance fluctuations in the first barrier.}\n\\end{figure}\n\nFigure \\ref{Orthodox}(a) shows the sensitivity of the rf-SET to charge fluctuations as a function of the bias and gate voltages. The charge sensitivity is a symmetric function around the SET open state ($Q_g=0.5\\,e$), and has maxima close to the onset of the open state.\n\nThe sensitivity of the SET to resistance fluctuations in the first tunnel barrier is shown in figure \\ref{Orthodox}(b). The sensitivity to resistance fluctuations in the second barrier (not shown) is identical to figure \\ref{Orthodox}(b), except that it is mirrored along the SET open state $(Q_g=0.5\\,e)$.\n\nBy operating at different bias and gate voltages we can choose operation points where the noise contribution from the different derivatives dominates, and it is possible to distinguish charge noise from resistance noise.\n\n\\section{Experimental details}\n{\\label{Experimental_detail}}\nThe samples were fabricated on oxidized silicon substrates using electron beam lithography and a standard double-angle evaporation technique. The asymptotic resistance of the measured single electron transistor was $25\\,\\mathrm{k\\Omega}$. The charging energy $E_C=e^2\/2C_\\Sigma\\simeq 18\\pm 2\\,\\mathrm{K}$ was extracted from measurements of the SET stability diagram of the reflected signal at frequency $f=350\\,\\mathrm{MHz}$ versus bias and gate voltages. From the asymmetry of the SET stability diagram we could\nalso deduce that the asymmetry in the junction capacitances was $30\\%$.\n\nThe SET was embedded in a resonant circuit and operated in the radio frequency mode \\cite{Science280.1238, Wahlgren}. The bandwidth of the setup was approximately $10\\,\\mathrm{MHz}$, limited by the quality factor of the resonance circuit. The radio frequency signal was launched toward the resonance circuit and the reflected signal was amplified by two cold amplifiers, and then downconverted using homodyne mixing. The output signal from the mixer, containing the noise information, was then measured by a spectrum analyzer. 
The sample was attached to the mixing chamber of a dilution refrigerator which was cooled to a temperature of approximately $25\\,\\mathrm{mK}$. All measurements were performed in the normal (nonsuperconducting) state at a magnetic field of $1.5\\,\\mathrm{T}$.\n\nWe have performed the noise measurements for different gate voltages and different bias voltages. Due to experimental problems we have performed measurements mostly for negative biases. The sample shows a very high charge sensitivity of the order of $1\\,\\mathrm{\\mu e\/\\sqrt{Hz}}$. For a detailed description of the sensitivity with respect to different parameters, see\nref.\\cite{JApplPhys.100.114321}. A small sinusoidal charge signal\nof $7.3\\cdot 10^{-4}\\,\\mathrm{e_{rms}}$ with a frequency of $133\\,\\mathrm{Hz}$ was applied to the gate, which allowed us to calibrate the charge sensitivity referred to the input of the SET.\n\n\\begin{figure}\n\\centering\\epsfig{figure=Figure2.eps,width=7cm}\n\\caption{\\label{Points}\\small(color online) (a) The SET stability diagram ($\\partial I\/\\partial Q_g$) measured at a temperature of $25\\,\\mathrm{mK}$. The horizontal black line corresponds to the bias voltage at which the transfer function $I(Q_g)$ (see panel (b)) was obtained. The points \\textbf{A} (($\\circ$) for negative and ($+$) for positive bias) are the points where $I=0$ inside the Coulomb blockade region. The points \\textbf{B} and \\textbf{D} ($\\ast\/\\bullet$) correspond to\nmaximum positive\/negative gain $\\partial I\/\\partial Q_g$, that is, where $\\partial^2I\/\\partial Q_g^2=0$. The measurement points \\textbf{C}, close to the current maximum, correspond to $\\partial I\/\\partial Q_g=0$, marked with ($+$) at negative bias and with ($\\circ$) at positive bias. (b) SET transfer function $I(Q_g)$ measured for the bias voltage $V=-0.4\\,\\mathrm{mV}$. In the stability diagram this measurement is shown as a black line.}\n\\end{figure}\n\nBefore each measured spectrum, we employed a charge-locked loop\n\\cite{ApplPhysLett.81.4859}, with the help of a lock-in amplifier. The first ($\\partial I\/\\partial Q_g$) or the second ($\\partial^2 I\/\\partial Q_g^2$) derivative of the current was used as an error signal for stabilization of the gate point, to compensate for the slow drift, at the current maximum points (\\textbf{C} in fig.\\ref{Points}(b)) or at the maximum gain points (\\textbf{B} and \\textbf{D} in fig.\\ref{Points}(b)), respectively. Each noise spectrum\nis, however, measured with the feedback loop turned off.\n\n\\section{Experimental Results}\n{\\label{Experimental_results}}\nIn order to separate the contributions from different noise sources we have performed measurements at specific points. The measurement points are shown in fig. \\ref{Points}. At point \\textbf{A} ($Q_g\\approx 0\\,e$) there is almost complete Coulomb blockade with zero current and zero gain ($\\partial |v_r|\/\\partial Q_g=0$). Here the derivatives of the reflected voltage with respect to resistance fluctuations in the tunnel barriers, $\\partial\n|v_r|\/\\partial R_{1,2}$, are also zero (see fig.\\ref{Orthodox}(b)).\nAt this point we see only amplifier noise --- curves for different bias voltages show the same noise level, confirming that only the amplifier noise is observed. 
These measurements thus serve to calibrate the noise of the amplifier.\n\nThe measurements at points \\textbf{B} and \\textbf{D} correspond to the requirement of maximum current gain ($\\max|\\partial I\/\\partial Q_g|$) and therefore also high $|\\partial |v_r|\/\\partial Q_g|$ (the diagonals in figure \\ref{Orthodox}(a)). At these points there are contributions from all the noise sources (see fig. \\ref{Orthodox}), but since the absolute gain and the current are very similar at the points \\textbf{B} and \\textbf{D}, these measurements can be compared.\n\nWe have also measured the noise at the points \\textbf{C} ($Q_g\\approx 0.5\\,e$), close to the maximum of the current transfer function ($|I(Q_g)|$). At this point the gain of the reflected signal ($\\partial |v_r|\/\\partial Q_g$) is quite low, but the shot noise could be high. The derivatives of the reflected voltage with respect to resistance fluctuations in the tunnel barriers ($\\partial |v_r|\/\\partial R_{1,2}$) are small but not equal to zero (see fig. \\ref{Orthodox}(b)).\n\\begin{figure}\n\\centering\\epsfig{figure=Figure3.eps,width=7cm}\n\\caption{\\label{Noise}\\small (color online) (a) Power spectral density (PSD) of the reflected voltage measured at the points \\textbf{A} (black curve); \\textbf{B} (blue curve); \\textbf{C} (green curve); \\textbf{D} (red curve). (b) Normalized noise at the points \\textbf{B} (blue curve); \\textbf{D} (red curve). The black continuous lines are fits of the measured PSD to a sum of two Lorentzian functions.}\n\\end{figure}\n\nThe noise of the reflected voltage at a fixed bias point and for the different gate points is shown in fig. \\ref{Noise}(a). We start by subtracting the amplifier noise and then we compare the noise measured at the different points. Comparing the noise measured at points \\textbf{B} and \\textbf{D}, where the current gain has a maximum, with the noise measured at point \\textbf{C}, close to the maximum of the transfer function, we see that the noise at the point\n\\textbf{C} is substantially lower, even though the current is higher. From this we draw the conclusion that the difference is not due to the shot noise.\n\nWhen comparing the noise spectra measured at the points \\textbf{B} and \\textbf{D} it is important to note that both the currents and the gains are very similar at these points. Both spectra \\textbf{B} and \\textbf{D} have a $1\/f$ dependence at low frequencies with two Lorentzian shoulders at higher frequencies. At frequencies above $30\\,\\mathrm{kHz}$ the noise at point \\textbf{D} drops well below the noise at point \\textbf{B}. At $300\\,\\mathrm{kHz}$ the difference is a factor of 5.\n\nWe have fitted the noise spectra at points \\textbf{B} and \\textbf{D} to a sum of two Lorentzian functions. The results of these fits are shown in figure \\ref{Noise}(b). From these fits we can extract the cut-off frequency and the level for each of the two Lorentzians.\n\nThe low-frequency Lorentzian has a cut-off frequency of the order of $1\\,\\mathrm{kHz}$. The cut-off frequency is the same for both slopes ($\\partial I\/\\partial Q_g \\,^>_< 0$), and it does not show any bias or gate dependence within the accuracy of our measurements.\n\nIn contrast, the high-frequency Lorentzian has a cut-off frequency $f_{co}>50\\rm\\,kHz$, with a strong dependence on the bias and gate voltages. 
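A natural explicit parametrization of these fits, consistent with the description above (the symbols $A$, $S_i$ and $f_{co,i}$ are our notation and are introduced here only for clarity), is a $1\/f$ background plus two Lorentzians of the standard random telegraph form \\cite{JApplPhys.25.341},\n\\[\nS(f)=\\frac{A}{f}+\\sum_{i=1}^{2}\\frac{S_i}{1+\\left(f\/f_{co,i}\\right)^2},\n\\]\nwhere $S_i$ and $f_{co,i}$ are the level and the cut-off frequency of the $i$-th Lorentzian. Since $\\int_0^\\infty[1+(f\/f_{co})^2]^{-1}\\,df=\\pi f_{co}\/2$, the total mean square fluctuation carried by each Lorentzian is $(\\pi\/2)\\,S_i\\,f_{co,i}$; this is the quantity used below when the Lorentzians are integrated to estimate the induced charge.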
The bias dependence of the Lorentzian cut-off frequency for the positive ($\\partial I\/\\partial Q_g>0$) and negative ($\\partial I\/\\partial Q_g <0$) slopes is shown in figure \\ref{Gate_Depend}(a).\n\n\\begin{figure}[b]\n\\centering\\epsfig{figure=Figure4.eps,width=12cm}\n\\caption{\\label{Gate_Depend}\\small (color online) (a) The bias dependence of the cut-off frequency for the high frequency Lorentzian. (b) The gate dependence of the cut-off frequency for the high frequency Lorentzian. (c) The bias dependence of the cut-off frequency for the low frequency Lorentzian. (d) The gate dependence of the cut-off frequency for the low frequency Lorentzian. Blue points correspond to the negative slope $\\partial I\/\\partial Q_g<0$ (see fig. \\ref{Points}). Red points correspond to the positive slope $\\partial I\/\\partial Q_g>0$ (see fig. \\ref{Points}). The error bars are extracted from the fits to the Lorentzians. The continuous lines (blue, red) show the bias and gate dependencies of the Lorentzian cut-off frequency calculated in the described model.}\n\\end{figure}\n\nFor the negative slope $(\\partial I\/\\partial Q_g<0)$ (blue points in figure \\ref{Gate_Depend}(a)), the cut-off frequency remains practically constant ($f_{co}\\simeq 50\\rm\\, kHz$) for negative biases. Close to zero bias voltage the Lorentzian cut-off frequency switches to a higher frequency ($f_{co}\\simeq 150\\rm\\, kHz$). For the positive slope $(\\partial I\/\\partial Q_g>0)$ (red curve in figure \\ref{Gate_Depend}), the situation is different. For negative bias voltage the Lorentzian cut-off frequency grows continuously from $f_{co}\\simeq 60\\rm\\, kHz$ and reaches a maximum ($f_{co}\\simeq 120\\rm\\, kHz$) close to zero bias voltage. For positive bias voltage it rapidly decreases from the maximum back to the initial value $f_{co}\\simeq 60\\rm\\, kHz$.\nFigure \\ref{Gate_Depend}(b) shows the gate dependence of the Lorentzian cut-off frequencies for both slopes $(\\partial I\/\\partial Q_g\\,^>_<0)$. As is clearly seen in this figure, the gate dependences for the positive and negative slopes are different. The gate dependence for the positive slope $(\\partial I\/\\partial Q_g>0)$ (red curve) has a peak, with a small negative offset in gate charge from the SET open state. The Lorentzian cut-off frequency on the negative slope $(\\partial I\/\\partial Q_g<0)$ (blue curve) behaves as a step-like function.\n\nBy integrating the Lorentzians in the noise spectra (see figure \\ref{Noise}(b)) we can calculate the total variation of induced charge on the SET island for both fluctuators. The variation of the induced charge for the low-frequency fluctuator is of the order of $\\Delta q_{lf}\\approx 6.6\\,\\mathrm{me_{rms}}$. The same estimate for the high-frequency fluctuator gives $\\Delta q_{hf}\\approx 30\\,\\mathrm{me_{rms}}$.\n\n\\section{Discussion}{\\label{Discussion}}\nIn this section we analyze the bias and the gate dependence of the cut-off frequency of the high frequency Lorentzian ($f_{co}\\simeq 50-150\\,\\mathrm{kHz}$). In particular, we try to explain why the cut-off frequency is different for different biasing conditions.\n\nIn our analysis we have assumed that there are in principle two possible sources for the low frequency noise: resistance fluctuators or charge fluctuators. The physical nature of the resistance fluctuators is not well understood, but they can be related to charge fluctuations. 
For instance, a charge oscillating in the tunnel barrier may modify both the transparency and the induced island charge. \n\nThe noise from a resistance fluctuator in one of the tunnel barriers would have an asymmetry along the onset of the SET open state, as was shown in section \\ref{LFN_model} (see fig. \\ref{Orthodox}(b)). In order to explain the bias dependence of the cut-off frequency (see fig. \\ref{Gate_Depend}(a)) in terms of resistance fluctuators we must assume that there is an individual resistance fluctuator located in each of the SET tunnel barriers. Furthermore, these fluctuators must have the same tunneling rates. Even with this strong assumption, however, it is impossible to explain the sharp drop in the experimentally measured gate dependence of the cut-off frequency (see fig. \\ref{Gate_Depend}(b)).\n\nThus, in order to explain the results for the high-frequency Lorentzian, we will assume that there are individual charge fluctuators affecting the SET, and that each Lorentzian in the experimentally measured spectra is due to a single fluctuator coupled to the SET island. Several experimental groups have suggested a microscopic nature for these fluctuators. Their microscopic nature is not well known, but it has been suggested that they could be traps in the substrate dielectric close to the SET or in the aluminum surface oxide. \n\nHere we will argue that the sources of these two level fluctuators are located outside the barrier and that they may have a \\textit{mesoscopic} nature. In reference \\cite{PhysRevB.53.13682} it is argued, based on an electrostatic analysis of the tunnel barrier, that such fluctuators cannot be localized inside the barrier. There are also other experiments \\cite{PhysRevB.67.205313, JApplPhys.96.6827}, where it is argued that the charge fluctuators are most probably localized outside the tunnel barrier.\n\nA typical SET is made from thin aluminum films which are not uniform; they consist of small grains connected to each other. In figure \\ref{SET_SEB}(a) we show an SEM image of a sister sample to the measured one. In figure \\ref{SET_SEB}(b) we also show an AFM image of the same sister sample. It can be clearly seen that there are many small grains close to the device. We will assume that some grains are separated from the main film by a thin oxide layer but are still capacitively connected to the SET island.\n\\begin{figure}\n\\centering\\epsfig{figure=Figure5-1.eps,width=10cm}\n\\caption{{\\label{SET_SEB}}\\small (a) An SEM image of a sister sample to the measured one. The black bar shows a $100\\,\\mathrm{nm}$ linear scale. (b) An AFM image of the edge of an aluminum film on the $\\mathrm{SiO_x}$ surface. (c) Equivalent electrostatic scheme, where the small metallic grain is capacitively coupled to the SET island and has a tunnel connection with a bias lead.}\n\\end{figure}\n\nElectrostatically such a grain can be described as a single electron box \\cite{ZeitsPhysB.85.327}, capacitively coupled to the SET island with capacitance $C'_g$ and having a tunnel contact, with resistance $R'$ and capacitance $C'$, to one of the bias leads, as indicated in figure\\,\\ref{SET_SEB}(c). The situation would be almost equivalent if the grain were tunnel connected to the SET island and capacitively connected to the SET bias lead. For a detailed analysis we should estimate the energy scales for this grain. We assume that the linear dimension of the stray aluminum grain is in the range $1-5\\,\\mathrm{nm}$. 
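For orientation, a crude electrostatic estimate (which treats the grain as an isolated sphere embedded in oxide and neglects screening by the nearby electrodes; the numbers are meant only to set the scale) gives for the total capacitance of a grain of diameter $d$\n\\[\nC'_\\Sigma\\approx 2\\pi\\varepsilon_0\\varepsilon_r d\\sim 10^{-19}-10^{-18}\\,\\mathrm{F}\n\\]\nfor $d=1-5\\,\\mathrm{nm}$ and a relative permittivity $\\varepsilon_r$ of a few.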
The charging energy for this grain is of the order $E'_C \\equiv e^2\/(2 C'_\\Sigma) \\sim 10^{-1}\\,\\mathrm{eV}$, where $C'_\\Sigma=C'+C'_g+C'_{env}$, and $C'_{env}$ is the capacitance to the rest of the environment. This charging energy is substantially larger than the experimental temperature and the charging energy of the SET ($k_\\mathrm{B}T \\ll E_C \\ll E'_C$). In addition, there will be a further separation of the energy levels due to the small size of the grains. \n\nThe ratio of capacitances $C'_g\/C'_\\Sigma$ is given directly by the charge induced on the SET island, which we have already extracted by integrating the Lorentzians. Thus, for the high frequency Lorentzian we have $C'_g\/C'_\\Sigma=0.030$ and for the low frequency Lorentzian this ratio is 0.0066.\n\nIn our model the single electron box is capacitively coupled to the SET island, and tunnel coupled to one of the SET leads. The average potential of the SET island $\\phi$ acts, in this system, as a gate potential for the single electron box and induces the charge $n'_g=C'_g \\phi \/e$ on the grain.\nThe charging dynamics of the electron box can be described by the orthodox theory using a master equation approach \\cite{Averin}. Electron tunneling changes the number of excess electrons $n$ in the grain. The differences in electrostatic energy when an electron tunnels to $(+)$ or from $(-)$ the grain are:\n\\begin{equation}\n\\Delta\\mathcal{E}_c^{\\pm}(n)=2 E'_C \\left( \\pm n \\mp n'_g +1\/2\\right).\n\\end{equation}\n\nThe rates for electron tunneling to or from the grain are functions of the tunnel resistance $R'$ and the electrostatic energy gain $\\Delta \\mathcal{E}_c^{\\pm}(n)$:\n\\begin{equation}{\\label{Tunnel_rates}}\nw^{\\pm}_n=\\frac{1}{e^2 R'}\n\\frac{\\Delta \\mathcal{E}_c^{\\pm}(n)}{1-\\exp\\left(-\\Delta \\mathcal{E}_c^{\\pm}(n)\/(k_\\mathrm{B}T)\\right)}.\n\\end{equation}\n\nThe probability $\\sigma_n$ to have $n$ excess electrons in the grain\nobeys the master equation:\n\\begin{equation}\n\\frac{d\\sigma_n}{dt}=w^+_{n-1}\\sigma_{n-1}+w^-_{n+1}\\sigma_{n+1}- \\sigma_n(w^+_n+w^-_n).\n\\end{equation}\n\nFor our case of low temperature, $k_B T \\ll E'_C$,\nthere are only two nonvanishing probabilities, $\\sigma_n$ and $\\sigma_{n+1}$. This simplifies the problem, and it is convenient to define the distance from the grain degeneracy point, \\textit{i.e.} the point where these two nonvanishing states are degenerate, as $\\Delta n'_g=C'_g (\\phi -\\phi_0) \/e$. Here $\\phi_0=e(n+1\/2)\/C'_g$ is the SET island potential needed to reach the grain degeneracy point. \n\nIn the time domain, the dynamics of charging this grain from the lead is a random telegraph process, and the spectrum of this process is a Lorentzian function with a cut-off frequency given by the sum of the charging and escape rates \\cite{JApplPhys.25.341}:\n\\begin{equation}{\\label{Eq_Cut_off frequency}}\nf_{co} = w^{+}_{n}+w^{-}_{n+1} = \\frac { \\Delta n'_g}{R' C'_\\Sigma} \\coth \\left( \\frac{E'_C}{k_B T} \\Delta n'_g \\right)\n= \\mathcal{A}(\\phi -\\phi_0) \\coth \\left( \\mathcal{B} (\\phi -\\phi_0) \\right),\n\\end{equation}\nwhere $\\mathcal{A}=C'_g\/(e C'_\\Sigma R')$ and $\\mathcal{B}=e C'_g\/(2 k_B T C'_\\Sigma)$.\n\nFrom eq. (\\ref{Eq_Cut_off frequency}) we thus see that the cut-off frequency of the Lorentzian depends directly on $\\Delta n'_g$ and therefore on the potential of the SET island, $\\phi$. 
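As a consistency check, which follows directly from the expressions for $\\mathcal{A}$ and $\\mathcal{B}$ and is not an additional assumption of the model, expanding $\\coth x\\approx 1\/x$ for $|x|\\ll 1$ in eq. (\\ref{Eq_Cut_off frequency}) shows that at the degeneracy point itself the cut-off frequency reaches its minimum value\n\\[\nf_{co}(\\phi=\\phi_0)=\\frac{\\mathcal{A}}{\\mathcal{B}}=\\frac{2 k_B T}{e^2 R'},\n\\]\nwhich is independent of the coupling capacitance $C'_g$.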
When the grain is biased away from its degeneracy point, \\textit{i.e.} when $\\Delta n'_g > 2k_B T\/E'_C$, the cut-off frequency grows linearly with the SET island potential. This means that if we are far from the grain degeneracy point, the cut-off frequency is relatively high and the relative frequency shift due to the change in $\\phi$ will be small. On the other hand, if we are close to the grain degeneracy point the cut-off frequency will be close to its minimum and the relative change in frequency due to $\\phi$ can be substantial. The maximum relative frequency change occurs when the potential just barely reaches the grain degeneracy point and can be calculated from eq.\\,\\ref{Eq_Cut_off frequency}. Taking into account the bounds of the island potential, $-e\/(2C_\\Sigma)<\\phi<e\/(2C_\\Sigma)$, \\ldots\n\n\\section{Introduction}\nWe consider the problem\n\\begin{align*}\n(\\wp_\\lambda)\\qquad\n\\left\\{\n\\begin{aligned}\n& -\\Delta u=\\lambda u+u^{p},\\quad u>0 \\quad\\text{in } \\Omega,\n\\\\\\\\\n& u=0 \\quad\\text{on } \\partial\\Omega,\n\\end{aligned}\n\\right.\n\\end{align*}\nwhere $\\Omega$ is a smooth bounded domain in $\\R^N$, $N\\geq 3$, $p= \\frac{N+2}{N-2}$ and\n $\\lambda$ is a real positive parameter.\n\nIn this article, we are interested in obtaining solutions to this problem, in the special case $N=3$, that concentrate at $k$ different points of $\\Omega$, $k\\geq 2$.\nIn particular, we\nanalyze the role of the Green function of $\\Delta+\\lambda$ in the presence of multi-peak solutions when $\\lambda$ is regarded as a parameter.\n\nSolutions to $(\\wp_\\lambda)$ correspond to critical points of the energy functional\n\\[\nJ_\\lambda(u)=\\frac{1}{2}\\int_{\\Omega}\\vert\\nabla u\\vert^2-\\frac{\\lambda}{2}\\int_{\\Omega}u^2-\\frac{1}{p+1}\\int_{\\Omega}|u|^{p+1}.\n\\]\nAlthough this functional is of class $C^2$ in $H_0^1(\\Omega)$, it does not satisfy the Palais-Smale condition at all energy levels, and hence variational arguments to find solutions are delicate and sometimes fail.\n\nLet $\\lambda_1$ denote the first eigenvalue of $-\\Delta$ with Dirichlet boundary condition.\nIt is well known that\n$(\\wp_\\lambda)$ admits no solutions if\n$\\lambda \\geq\\lambda_1$, which can be verified by\ntesting the equation against a first eigenfunction of the Laplacian. Moreover, the classical Pohozaev identity \\cite{pohozaev} guarantees that problem $(\\wp_\\lambda)$ with $\\lambda\\leq 0$ has no solution in a starshaped domain.\n\n\n\nIn the classical paper \\cite{brezis-nirenberg}, Brezis and Nirenberg showed that\nleast energy\nsolutions\nto this problem exist for $\\lambda \\in (\\lambda^*,\\lambda_1)$, where $\\lambda^* \\in [0,\\lambda_1)$ is a special number depending on the domain.\nThey also showed that if $N\\geq 4$, then $\\lambda^*=0$ and in particular $(\\wp_\\lambda)$ has a solution with minimal energy for all $\\lambda \\in (0,\\lambda_1)$.\n\n\nWhen $N=3$ the situation is strikingly different, since, as is shown in \\cite{brezis-nirenberg}, $\\lambda^*>0$ and no solutions with minimal energy exist\nwhen $\\lambda \\in (0,\\lambda^*)$.\nIn 2002, Druet \\cite{druet} showed that there is no minimal energy solution for $\\lambda=\\lambda^*$ either,\nwhich implies that $\\lambda^*$ can be characterized as the critical value such that a solution of $(\\wp_\\lambda)$ with minimal energy exists if and only if $\\lambda\\in (\\lambda^*,\\lambda_1)$.\n\nIn the particular case of the ball in $\\R^3$, Brezis and Nirenberg \\cite{brezis-nirenberg} also proved that $\\lambda^* = \\frac{\\lambda_1}{4}$ and that a solution to $(\\wp_\\lambda)$ exists if and only if $\\lambda \\in ( \\frac{\\lambda_1}{4}, \\lambda_1)$. 
By the results of Gidas, Ni, Nirenberg \\cite{gidas-ni-nirenberg} and Adimurthi, Yadava \\cite{adimurthi-yadava} this solution is unique and corresponds indeed to the minimum of the energy functional.\n\n\nIn dimension three a characterization of $\\lambda^*$ can be given in terms of the \\emph{Robin function $g_\\lambda$} defined as follows.\nLet $\\lambda\\in(0,\\lambda_1)$. For a given $x\\in\\Omega$ consider the Green function $G_\\lambda(x, y)$, solution of\n\\[\n\\begin{array}{rlll}\n-\\Delta_yG_\\lambda-\\lambda G_\\lambda&=&\\delta_x&y\\in \\Omega,\n\\\\\\\\\nG_\\lambda(x, y)&=&0&y\\in \\partial\\Omega,\n\\end{array}\n\\]\nwhere $\\delta_x$ is the Dirac delta at $x$.\nLet $H_{\\lambda}(x,y) = \\Gamma(y-x)-G_\\lambda(x,y)$, with $\\Gamma(z)=\\frac{1}{4\\pi \\vert z\\vert}$, be its regular part,\nand let us define the Robin function of $G_\\lambda$ as\n$g_\\lambda(x) := H_\\lambda(x, x)$.\n\nIt is known that $g_\\lambda(x)$ is a smooth function which goes to $+\\infty$ as $x$ approaches $\\partial \\Omega$. The minimum of $g_\\lambda$ in $\\Omega$ is strictly decreasing in $\\lambda$, is strictly positive when $\\lambda$ is close to $0$ and approaches $-\\infty$ as $\\lambda\\uparrow \\lambda_1$.\n\nIt was conjectured in \\cite{brezis} and proved by Druet \\cite{druet} that $\\lambda^*$ is the largest $\\lambda \\in (0,\\lambda_1)$ such that $\\min_{\\Omega} g_\\lambda \\geq 0$. Moreover, Druet also proved that, as $\\lambda\\downarrow \\lambda^*$, least energy solutions to $(\\wp_\\lambda)$ develop a singularity which is located at a point $\\zeta_0\\in \\Omega$ such that $g_{\\lambda^*}(\\zeta_0) = 0$.\nNote that $\\zeta_0$ is a global minimizer of $g_{\\lambda^*}$ and hence a critical point.\nA concentrating family of solutions can exist at other values of $\\lambda$.\nIndeed, del Pino, Dolbeault and Musso \\cite{DPDM}\nproved that if $\\lambda_0 \\in (0,\\lambda_1)$ and $\\zeta_0\\in\\Omega$ are such that\n\\[\ng_{\\lambda_0}(\\zeta_0)=0, \\quad\n\\nabla g_{\\lambda_0}(\\zeta_0)=0,\n\\]\nand either $\\zeta_0$ is a strict local minimum or a nondegenerate critical point of $g_{\\lambda_0}$, then for $\\lambda - \\lambda_0 >0$, there is a solution $u_\\lambda$ of $(\\wp_\\lambda)$ such that\n\\[\nu_\\lambda (x) = w_{\\mu,\\zeta}\\,(1+o(1))\n\\]\nin $\\Omega$ as $\\lambda - \\lambda_0 \\to 0$, where\n\\[\nw_{\\mu,\\zeta}(x) =\n\\frac{\\alpha_3 \\, \\mu^{1\/2}}{(\\mu^2+ | x-\\zeta|^2 )^{1\/2}},\n\\quad \\alpha_3 = 3^{1\/4},\n\\]\n$\\zeta\\to \\zeta_0$ and $\\mu=O(\\lambda - \\lambda_0)$.\n\nThe behavior described above, namely \\emph{bubbling} of a family of solutions, was\nalready studied in higher dimensions.\nHan \\cite{han} proved that if $N\\geq 4$, minimal energy solutions of $(\\wp_\\lambda)$ concentrate at a critical point of the Robin function $g_0$ as $\\lambda\\downarrow0$. See also Rey \\cite{Rey} for an arbitrary family of solutions that concentrates at a single point. 
Conversely, Rey in \\cite{Rey,Rey2} showed that attached to any $C^1$-stable critical point of the Robin function $g_0$ there is a family of solutions of $(\\wp_\\lambda)$ that blows up at this point as $\\lambda\\downarrow 0$.\n\nUnlike in dimension three, bubbling behavior with concentration at multiple points as $\\lambda \\downarrow 0$ is known in higher dimensions.\nIndeed, Musso and Pistoia \\cite{Musso-Pistoia2002} constructed multispike solutions in a smooth bounded domain $\\Omega \\subset \\R^N$, $N\\geq 5$.\nTo state their result precisely, let us consider an integer $k \\geq 1$,\nlet us write $\\bar\\mu = (\\bar\\mu_1,\\ldots,\\bar\\mu_k)\\in \\R^k $, $\\zeta = (\\zeta_1,\\ldots,\\zeta_k) \\in \\Omega^k$, $\\zeta_i\\not=\\zeta_j$ for $i\\not=j$, and define\n\\[\n\\psi_k(\\bar\\mu,\\zeta) = \\frac{1}{2} ( M(\\zeta) \\,\\bar\\mu^{\\frac{N-2}{2}} , \\bar\\mu^{\\frac{N-2}{2}} ) - \\frac{1}{2}\n\\,B\\sum_{i=1}^k\\bar\\mu_i^2\n\\]\nwhere $\\bar\\mu^{\\frac{N-2}{2}} = (\\bar\\mu_1^{\\frac{N-2}{2}},\\ldots,\\bar\\mu_k^{\\frac{N-2}{2}})$, and $M(\\zeta)$ is the matrix with coefficients\n\\[\nm_{ii}(\\zeta) = g_0(\\zeta_i), \\quad\nm_{ij}(\\zeta) = -G_0(\\zeta_i,\\zeta_j), \\quad \\text{for } i\\not=j.\n\\]\nHere $B>0$ is a constant depending only on the dimension.\nIt is shown in \\cite{Musso-Pistoia2002} that if $\\psi_k$ has a stable critical point $(\\bar\\mu,\\zeta)$ then, for $\\lambda>0$ small, problem $(\\wp_\\lambda)$ has a family of solutions that blow up at the $k$ points $\\zeta_1,\\ldots,\\zeta_k$, with profile near $\\zeta_i$ given by $w_{\\mu_i,\\zeta_i}$ and rates $\\mu_i \\sim \\bar\\mu_i \\,\\lambda^{\\frac{1}{N-4}}$. Musso and Pistoia also exhibit classes of domains where such critical points of $\\psi_k$ can be found.\nA related multiplicity result is given by the same authors in \\cite{Musso-Pistoia2003}, where $\\Omega$ is a domain with a sufficiently small hole.\nThey show that for $\\lambda<0$ small there is a family of solutions concentrating at two points.\n\n\n\n\nAs far as we know, there are no works dealing with solutions with multiple concentration in lower dimensions ($N=3$ and $N=4$), and it is not clear what type of finite dimensional function governs the location and the concentration rate of the bubbling solutions.\n\nIn this work we focus on dimension three. We give conditions on the parameter $\\lambda$ such that solutions with simultaneous concentration at $k$ points exist and find the finite dimensional function describing the location and rate of concentration.\nWe remark that the condition on $\\lambda$ that we obtain for solutions with multiple bubbling in dimension three is a non-obvious but natural generalization of the condition given by Dolbeault, del Pino and Musso \\cite{DPDM} for single bubble solutions in dimension three,\nand is somehow related to the result of Musso and Pistoia \\cite{Musso-Pistoia2002} for $\\lambda^*=0$ in higher dimensions.\n\nIn order to state our results we need some notation. 
For a given integer\n$k\\geq2$\nset\n\\[\n\\Omega_k^* =\\{ \\zeta= ( \\zeta_1,\\dots, \\zeta_k) \\in \\Omega^k : \\zeta_i\\not=\\zeta_j \\text{ for all } i\\not=j\\}.\n\\]\nFor $\\zeta= ( \\zeta_1,\\dots, \\zeta_k) \\in \\Omega_k^*$, let us consider the matrix\n\\[\nM_\\lambda(\\zeta):=\n\\begin{pmatrix}\ng_{\\lambda}(\\zeta_1) & -G_{\\lambda}(\\zeta_1,\\zeta_2) & \\ldots & -G_{\\lambda}(\\zeta_1,\\zeta_k) \\\\\\\\\n-G_{\\lambda}(\\zeta_1,\\zeta_2) & g_{\\lambda}(\\zeta_2) & \\ldots & -G_\\lambda(\\zeta_2,\\zeta_k)\n\\\\\\\\\n\\vdots & & & \\vdots\n\\\\\\\\\n-G_\\lambda(\\zeta_1,\\zeta_k) & -G_\\lambda(\\zeta_2,\\zeta_k) & \\ldots & g_\\lambda(\\zeta_k)\n\\end{pmatrix}.\n\\]\nIn other words, $M_\\lambda(\\zeta)$ is the matrix whose\n$ij$ component is given by\n\\[\n\\begin{cases}\ng_\\lambda(\\zeta_i) & \\text{if } i=j\n\\\\\\\\\n- G_\\lambda(\\zeta_i,\\zeta_j) & \\text{if } i\\not=j .\n\\end{cases}\n\\]\nDefine the function\n\\begin{align*}\n\\psi_\\lambda(\\zeta) = \\det M_\\lambda(\\zeta) , \\quad \\zeta \\in \\Omega_k^*.\n\\end{align*}\nOur main result is the following.\n\\begin{theorem}\n\\label{thm1}\nAssume that for a number $\\lambda=\\lambda_0 \\in (0,\\lambda_1)$ there is \\,$\\zeta^0 = ( \\zeta_1^0,\\ldots,\\zeta_k^0) \\in\\Omega_k^*$ such that:\n\\begin{itemize}\n\n\\item[(i)]\n$\\psi_{\\lambda_0}(\\zeta^0)=0$ and $M_{\\lambda_0}(\\zeta^0)$ is positive semidefinite,\n\n\\item[(ii)]\n$ D_{\\zeta}\\psi_{\\lambda_0}(\\zeta^0)=0 $,\n\n\\item[(iii)]\n$D_{\\zeta\\zeta} ^2\\psi_{\\lambda_0}(\\zeta^0)$ is non-singular,\n\n\\item[(iv)]\n$\\frac{\\partial \\psi_{\\lambda} }{\\partial \\lambda}\\big|_{\\lambda=\\lambda_0} (\\zeta^0) < 0 $.\n\n\\end{itemize}\nThen for $\\lambda=\\lambda_0+\\varepsilon$, with $\\varepsilon>0$ small, problem $(\\wp_\\lambda)$ has a solution $u$ of the form\n\\[\nu = \\sum_{j=1}^k w_{\\mu_j,\\zeta_j} + O(\\varepsilon^{\\frac{1}{2}})\n\\]\nwhere $\\mu_j = O(\\varepsilon)$, $\\zeta_j\\to \\zeta_j^0$, $j=1,\\ldots,k$, and $O(\\varepsilon^{\\frac{1}{2}})$ is uniform in $\\Omega$ as $\\varepsilon\\to 0$.\n\\end{theorem}\n\n\\medskip\n\n\\noindent\nWe remark that\nTheorem~\\ref{thm1} admits some variants. For example, if $\\frac{\\partial \\psi_{\\lambda} }{\\partial \\lambda}\\big|_{\\lambda=\\lambda_0} (\\zeta^0) > 0 $, then a solution with $k$ bubbles can be found for $\\lambda=\\lambda_0-\\varepsilon$, with $\\varepsilon>0$ small.\nWhen $k=2$ the assumption that $M_{\\lambda_0}(\\zeta^0)$ is positive semidefinite is equivalent to $g_{\\lambda_0}(\\zeta_1^0)>0$ or $g_{\\lambda_0}(\\zeta_2^0)>0$.\n\n\nAs an example where the previous theorem can be applied,\nlet us consider the annulus\n\\[\n\\Omega_a = \\{ x\\in \\R^3 \\ : \\ a < |x| < 1 \\} ,\n\\]\nwhere $0<a<1$. We will show that problem $(\\wp_\\lambda)$, for $\\lambda=\\lambda_0+\\varepsilon$ with $\\varepsilon>0$ small, has a solution with $k$ bubbles centered at the vertices of a planar regular polygon for some $\\lambda_0\\in (0,\\lambda_1)$. 
As a byproduct of the construction we also deduce that\n\\[\n\\lambda_0<\\lambda^*.\n\\]\nA detailed proof of these assertions is given in Section~\\ref{exampleAnnulus}.\nThe ideas developed here can be applied\nto obtain two bubble solutions in more general thin axially symmetric domains.\n\n\nIn dimension $N\\geq 4$ qualitatively similar solutions were detected by Wang and Willem \\cite{wang-willem} for all $\\lambda$ in an interval almost equal to $ (0,\\lambda_1)$ by using variational methods.\nThe existence of this kind of solutions in dimension three was (to the best of our knowledge) not known.\n\nWe should remark that multipeak solutions cannot be constructed in a ball, since the solution of $(\\wp_\\lambda)$ is radial and unique if it exists. This may indicate that if we consider $(\\wp_\\lambda)$ in the annulus $\\Omega_a$ with $a>0$ sufficiently small, there are no multipeak solutions.\n\n\n\n\n\n\n\n\nFinally, we mention that several interesting results have been obtained on the existence of sign changing solutions to the Brezis-Nirenberg problem. See for instance Ben Ayed, El Mehdi, Pacella \\cite{benayed-elmehdi-pacella}, Iacopetti \\cite{iacopetti}, Iacopetti and Vaira \\cite{iacopetti-vaira} and the references therein. It is in fact foreseeable that the methods developed in this work can also give the existence of multipeak sign changing solutions in dimension 3.\n\n\nThe paper is organized as follows. In Section~\\ref{sectEnergyExpansion} we introduce some notation and give the energy expansion for a multi-bubble approximation.\nSections~\\ref{sectLinear} and~\\ref{sectNonlinear} are respectively devoted to the study of the linear and nonlinear problems involved in the Lyapunov-Schmidt reduction, which is carried out in Section~\\ref{secReduction}.\nTheorem~\\ref{thm1} is proved in Section~\\ref{sectProof}.\nFinally, in Section~\\ref{exampleAnnulus} we give the details for the case of the annulus $\\Omega_a$.\n\n\n\\section{Energy expansion of a multi-bubble approximation}\n\\label{sectEnergyExpansion}\n\n\\noindent\nWe denote by\n\\[\nU(z):=\\frac{\\alpha_3 }{(1+\\vert z\\vert^2)^{1\/2}},\\quad \\alpha_3=3^{1\/4},\n\\]\nthe standard bubble.\nIt is well known that all positive solutions to the Yamabe equation\n\\[\n\\Delta w + w^5 = 0 \\quad\\text{in }\\mathbb{R}^3\n\\]\nare of the form\n\\begin{align*}\nw_{\\mu,\\zeta}(x):\n&=\\mu^{-1\/2}\\,U\\Bigl(\\frac{x-\\zeta}{\\mu}\n\\Bigr)\n=\\frac{\\alpha_3 \\, \\mu^{1\/2}}{\\Bigl(\\mu^2+\\vert x-\\zeta\\vert^2\\Bigr)^{1\/2}},\n\\end{align*}\nwhere $\\zeta$ is a point in $\\mathbb{R}^3$ and $\\mu$ is a positive number.\n\nFrom now on we assume that $0<\\lambda<\\lambda_1(\\Omega)$.\n\nFor a given $k\\geq 2$,\nwe consider $k$ different points $\\zeta_1,\\ldots,\\zeta_k\\in\\Omega$ and\nsmall positive numbers $\\mu_1,\\ldots, \\mu_k$ and denote by\n\\[\nw_i:=w_{\\mu_i,\\zeta_i}.\n\\]\nWe are looking for solutions of $(\\wp_\\lambda)$ that at main order are given by $\\sum_{i=1}^k w_i$. 
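Before proceeding, we recall the elementary computation that fixes the constant $\\alpha_3$ (a routine verification included only for the reader's convenience): for $U(z)=\\alpha(1+\\vert z\\vert^2)^{-1\/2}$ a direct calculation gives\n\\[\n\\Delta U=-3\\,\\alpha\\,(1+\\vert z\\vert^2)^{-5\/2},\\qquad U^5=\\alpha^5\\,(1+\\vert z\\vert^2)^{-5\/2},\n\\]\nso that $\\Delta U+U^5=0$ precisely when $\\alpha^4=3$, that is, $\\alpha=\\alpha_3=3^{1\/4}$.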
Since $w_i$ are not zero on $\\partial\\Omega$ it is natural to correct this approximation by terms that provide the Dirichlet boundary condition.\nIn order to do this we introduce, for each $i=1,\\ldots,k$, the function\n$\\pi_i$ defined as the unique solution of the problem\n\\[\n\\begin{array}{rlll}\n\\Delta \\pi_i+\\lambda\\,\\pi_i&=&-\\lambda\\,w_i&\\text{in }\\Omega,\n\\\\\n\\pi_i&=&-w_i&\\text{on }\\partial\\Omega,\n\\end{array}\n\\]\nand then we shall consider as a first approximation of the solution to $(\\wp_\\lambda)$ one of the form\n\\[\nU^0 = U_1+\\ldots+U_k,\n\\]\nwhere\n\\[\nU_i(x) = w_i(x) + \\pi_i(x).\n\\]\nObserve that $U_i\\in H_0^1(\\Omega)$ and satisfies the equation\n\\begin{equation}\n\\label{projection}\n\\left\\{\n\\begin{array}{rlll}\n\\Delta U_i+\\lambda U_i&=&-w_i^5&\\text{in } \\Omega,\n\\\\\nU_i&=&0&\\text{on }\\partial\\Omega.\n\\end{array}\n\\right.\n\\end{equation}\nLet us recall that the energy functional associated to $(\\wp_\\lambda)$ when $N=3$ is given by:\n\\[\nJ_\\lambda(u)=\\frac{1}{2}\\int_{\\Omega}\\vert\\nabla u\\vert^2-\\frac{\\lambda}{2}\\int_{\\Omega}u^2-\\frac{1}{6}\\int_{\\Omega}|u|^{6}.\n\\]\nLet us write $\\zeta= (\\zeta_1,\\ldots,\\zeta_k)$ and $\\mu=(\\mu_1,\\ldots,\\mu_k)$ and note that $U^0 = U^0(\\mu,\\zeta)$.\nSince we are looking for solutions close to $U^0(\\mu,\\zeta)$, formally we expect\n$\nJ_\\lambda( U^0(\\mu,\\zeta) )$\nto be almost critical in the parameters $\\mu,\\zeta$.\nFor this reason it is important to obtain an asymptotic formula of the functional $(\\mu,\\zeta)\\rightarrow J_\\lambda( U^0(\\mu,\\zeta))$ as $\\mu\\to 0$.\n\nFor any $\\delta>0$ set\n\\begin{multline*}\n\\Omega_\\delta^k:=\\{\n\\zeta\\equiv(\\zeta_1,\\ldots,\\zeta_k)\\in\\Omega^k: \\,\\textrm{dist}(\\zeta_i,\\partial \\Omega)>\\delta, \\vert \\zeta_i-\\zeta_j\\vert>\\delta,\\\\\n i=1,\\ldots,k,\\,j=1,\\ldots,k,\\, i\\neq j\n\\} .\n\\end{multline*}\nThe main result in this section is the expansion of the energy in the case of a multi-bubble ansatz.\n\n\\begin{lemma}\n\\label{lemmaEnergyExpansion}\nLet $\\delta>0$ be fixed and let $\\zeta\\in\\Omega_\\delta^k$.\nThen as $\\mu_i\\rightarrow 0$, the following expansion holds:\n\\begin{align*}\nJ_\\lambda\\Bigl(\\sum_{i=1}^k U_i\\Bigr):=&\\,k\\,a_0\n+a_1\\sum_{i=1}^k\\Bigl(\\mu_i\\,g_{\\lambda}(\\zeta_i)-\\sum_{j\\neq i}\\mu_i^{1\/2}\\,\\mu_j^{1\/2}\\,G_{\\lambda}(\\zeta_i,\\zeta_j)\\Bigr)+a_2\\,\\lambda\\,\\sum_{i=1}^k \\mu_i^2\n\\\\\n&\n-a_3\\,\\sum_{i=1}^k\\Bigl(\\mu_i\\,g_{\\lambda}(\\zeta_i)-\\sum_{j\\neq i}\\mu_i^{1\/2}\\mu_j^{1\/2}\\,G_{\\lambda}(\\zeta_i,\\zeta_j)\\Bigr)^2\n+ \\theta_\\lambda^{(1)} (\\mu,\\zeta) ,\n\\end{align*}\nwhere $\\theta_\\lambda^{(1)} (\\zeta,\\mu)$ is such that for any $\\sigma>0$ and $\\delta>0$ there is $C$ such that\n\\[\n\\Bigl| \\frac{\\partial^{m+n}}{\\partial\\zeta^m\\partial\\mu^n}\\,\\theta_\\lambda^{(1)} (\\zeta,\\mu) \\Bigr| \\leq C (\\mu_1+\\ldots+\\mu_k)^{3-\\sigma -n} ,\n\\]\nfor $m = 0,1$, $n = 0,1,2$, $m+n\\leq 2$, all small $\\mu_i$, $i=1,2,\\ldots,k$, and all $\\zeta\\in\\Omega_\\delta^k$.\n\\end{lemma}\n\nThe $a_j$'s are\nthe following explicit constants\n\\begin{align}\n\\label{a0}\na_0:&=\\frac{1}{3}\\int_{\\R^3}U^6\n= \\frac{1}{4}(\\alpha_3\\pi)^2,\n\\\\\n\\label{a1}\na_1:&=2\\pi\\alpha_3\\int_{\\R^3}U^5\n=8(\\alpha_3\\pi)^2,\n\\\\\n\\label{a2}\na_2:&=\\frac{\\alpha_3}{2}\\int_{\\R^3}\\biggl[\\biggl(\\frac{1}{\\vert z\\vert}-\\frac{1}{\\sqrt{1+\\vert z\\vert^2}}\n\\biggr)U+\\frac{1}{2}\\,\\vert z\\vert\\, 
U^5\\biggr]\\,dz=(\\alpha_3\\pi)^2,\n\\\\\\\\\n\\label{a3}\na_3:&=\\frac{5}{2}(4\\pi\\alpha_3)^2\\int_{\\R^3}U^4\n=120\\,(\\alpha_3\\pi^2)^2.\n\\end{align}\nTo prove this lemma we need some preliminary results.\nTo begin with, we recall the relationship between the functions $\\pi_{i}(x)$ and the regular part of Green's function, $H_\\lambda(\\zeta_i,x)$.\nLet us consider the (unique) radial solution $\\mathcal{D}_0(z)$ of the following problem in the entire space\n\\[\n\\begin{array}{rlll}\n\\Delta \\mathcal{D}_0&=&-\\lambda\\,\\alpha_3\\,\n\\Bigl(\n\\frac{1}{(1+\\vert z\\vert^2)^{1\/2}}\n-\\frac{1}{\\vert z\\vert}\n\\Bigr)\n&\\text{in } \\mathbb{R}^3,\n\\\\\\\\\n\\mathcal{D}_0&\\rightarrow\n&0&\n\\text{as } \\vert z\\vert\\rightarrow \\infty.\n\\end{array}\n\\]\nThen $\\mathcal{D}_0(z)$ is a $C^{0,1}$\nfunction with $\\mathcal{D}_0(z)\\sim \\vert z\\vert^{-1}\\log \\vert z\\vert$ as $\\vert z\\vert\\rightarrow \\infty$.\n\n\\bigskip\n\n\\begin{lemma}\n\\label{lemma22}\nFor any $\\sigma > 0$ the following expansion holds as $\\mu_i\\rightarrow 0$\n\\[\n\\mu_i^{-\\frac{1}{2}}\\pi_i(x)=-4\\pi\\alpha_3\\,H_{\\lambda}(x,\\zeta_i)+\\mu_i\\,\\mathcal{D}_0\\Bigl(\\frac{x-\\zeta_i}{\\mu_i}\\Bigr)+\\mu_i^{2-\\sigma}\\,\\theta(\\mu_i,x,\\zeta_i)\n\\]\nwhere for $m=0,1$, $n=0,1,2$, $m+n\\leq 2$, the function\n$\\mu_i^n\\frac{\\partial^{m+n}}{\\partial\\zeta_i^m\\partial\\mu_i^n}\\,\\theta(\\mu_i,x,\\zeta_i)\n$\nis bounded uniformly on $x\\in\\Omega$, all small $\\mu_i$ and $\\zeta_i$ in compact subsets of $\\Omega$.\n\\end{lemma}\n\\begin{proof}\nSee \\cite[Lemma 2.2]{DPDM}.\n\\end{proof}\n\n\n\n\n\n\nFrom Lemma \\ref{lemma22} and the fact that,\naway from $ x=\\zeta_i$,\n\\[\n\\mathcal{D}_0\\Bigl(\\frac{x-\\zeta_i}{\\mu_i}\\Bigr)=O(\\mu_i\\log \\mu_i),\n\\]\nthe following holds true.\n\\begin{lemma}\n\\label{Uiexp}\nLet $\\delta>0$ be given. 
Then for any $\\sigma > 0$ and $x\\in\\Omega\\setminus B_\\delta(\\zeta_i)$ the following expansion holds as $\\mu_i\\rightarrow 0$\n\\[\n\\mu_i^{-\\frac{1}{2}}U_i(x)=4\\pi\\,\\alpha_3\\,G_{\\lambda}(x,\\zeta_i)+\\mu_i^{2-\\sigma}\\,\\hat{\\theta}(\\mu_i,x,\\zeta_i)\n\\]\nwhere for $m=0,1$, $n=0,1,2$, $m+n\\leq 2$, the function\n$\\mu_i^n\\frac{\\partial^{m+n}}{\\partial\\zeta_i^m\\partial\\mu_i^n}\\,\\hat{\\theta}(\\mu_i,x,\\zeta_i)\n$\nis bounded uniformly on $x\\in\\Omega\\setminus B_\\delta(\\zeta_i)$, all small $\\mu_i$ and $\\zeta_i$ in compact subsets of $\\Omega$.\n\\end{lemma}\n\n\n\n\n\n\n\n\n\n\n\nWe also recall the expansion of the energy for the case of a single bubble, which was proved in \\cite{DPDM}.\n\n\n\n\\begin{lemma}\n\\label{lemma21}\nFor any $\\sigma > 0$ the following expansion holds as $\\mu_i\\rightarrow 0$\n\\begin{equation*}\nJ_\\lambda(U_i)=a_0+a_1\\,g_{\\lambda}(\\zeta_i)\\,\\mu_i+\n\\bigl(a_2\\,\\lambda-a_3\\,g_{\\lambda}(\\zeta_i)^2\\bigr)\\,\\mu_i^2+\\mu_i^{3-\\sigma}\\,\n\\theta(\\mu_i,\\zeta_i),\n\\end{equation*}\nwhere for $m=0,1$, $n=0,1,2$, $m+n\\leq 2$, the function\n$\\mu_i^n\\frac{\\partial^{m+n}}{\\partial\\zeta_i^m\\partial\\mu_i^n}\\,\\theta(\\mu_i,\\zeta_i)\n$\nis bounded uniformly on all small $\\mu_i$ and $\\zeta_i$ in compact subsets of $\\Omega$.\nThe $a_j$'s are given in \\eqref{a0}--\\eqref{a3}.\n\\end{lemma}\n\n\n\n\n\n\\bigskip\n\n\\begin{proof}[Proof of Lemma~\\ref{lemmaEnergyExpansion}]\n\\noindent\nWe decompose\n\\begin{align*}\nJ_\\lambda \\Bigl(\\sum_{i=1}^k U_i\\Bigr)\n&=\\frac{1}{2}\\sum_{i=1}^k\\Bigl(\\int_{\\Omega} \\vert\\nabla U_i\\vert^2+\\sum_{j\\neq i}\\int_{\\Omega}\\nabla U_i\\cdot \\nabla U_j\\Bigr)\n\\\\\n&\\quad -\\frac{\\lambda}{2}\\sum_{i=1}^k\\Bigl( \\int_{\\Omega}U_i^2+\\sum_{j\\neq i}\\int_{\\Omega}U_i\\,U_j\\Bigr)-\\frac{1}{6}\\int_{\\Omega}\\Bigl(\\sum_{i=1}^k U_i\\Bigr)^6\n\\\\\n& =\\sum_{i=1}^kJ_\\lambda(U_i)+\\frac{1}{2}\\sum_{i=1}^k\\sum_{j\\neq i}\\int_{\\Omega}[\\nabla U_i\\cdot \\nabla U_j-\\lambda \\,U_i\\,U_j]\n\\\\\n& \\quad -\\frac{1}{6}\\int_{\\Omega}\\Bigl[\\Bigl(\\sum_{i=1}^k U_i\\Bigr)^6-\\sum_{i=1}^k U_i^6\\Bigr].\n\\end{align*}\nIntegrating by parts in $\\Omega$ we get\n\\[\n\\int_{\\Omega}\\nabla U_i\\cdot \\nabla U_j=\n\\int_{\\Omega}(-\\Delta U_i)U_j+\n\\int_{\\partial\\Omega}\\frac{\\partial U_i}{\\partial \\eta} U_j=\\int_{\\Omega}(-\\Delta U_i)U_j,\n\\]\nwhere $\\frac{\\partial }{\\partial \\eta}$ denotes the derivative along the unit outgoing normal at a point of $\\partial \\Omega$.\nFrom \\eqref{projection} one gets\n\\[\n\\int_{\\Omega}\\nabla U_i\\cdot \\nabla U_j=\\int_{\\Omega}(-\\Delta U_i)U_j=\\int_{\\Omega}(\\lambda U_i+w_i^5)U_j.\n\\]\nand so\n\\[\n\\int_{\\Omega}\\nabla U_i\\cdot \\nabla U_j-\\lambda\\int_{\\Omega}U_i\\, U_j=\\int_{\\Omega}w_i^5\\, U_j.\n\\]\nHence,\n\\begin{equation}\nJ_\\lambda \\Bigl(\\sum_{i=1}^k U_i\\Bigr)=\\sum_{i=1}^kJ_\\lambda(U_i)+\\frac{1}{2}\\sum_{i=1}^k \\sum_{j\\neq i}\\int_{\\Omega} w_i^5\\, U_j-\\frac{1}{6}\\int_{\\Omega}\\Bigl[\\Bigl(\\sum_{i=1}^k U_i\\Bigr)^6-\\sum_{i=1}^k U_i^6\\Bigr].\n\\label{Jlambda}\n\\end{equation}\nLet $\\rho\\in(0,\\delta\/2)$ and denote by\n\\[\n\\mathcal{O}_\\rho=\\Omega\\setminus\\cup_{j=1}^kB_{\\rho}(\\zeta_j).\n\\]\nLet us decompose\n\\begin{equation}\n\\label{Ei}\n\\int_{\\Omega}\\Bigl[\\Bigl(\\sum_{i=1}^k U_i\\Bigr)^6-\\sum_{i=1}^k 
U_i^6\\Bigr]=\\sum_{i=1}^k\\int_{B_{\\rho}(\\zeta_i)}\nE_i+\\sum_{i=1}^k\n\\int_{\\mathcal{O}_\\rho}\nE_i,\n\\end{equation}\nwhere\n\\begin{align}\n\\notag\nE_i:=&\\Bigl[(U_i+Q_i)^6-U_i^6\\Bigr]-\\sum_{j\\neq i} U_j^6\\\\\\\\\n=&6\\,(U_i^5\\,Q_i+U_i\\,Q_i^5)+15\\,(U_i^4\\,Q_i^2+U_i^2\\,Q_i^4)\n+20\\,U_i^3\\,Q_i^3+Q_i^6-\\sum_{j\\neq i} U_j^6,\n\\label{U6}\n\\end{align}\nand\n$Q_i:=\\sum_{j\\neq i} U_j$.\n\nFrom now on, we write simply $O(\\mu^r)$ to indicate that some function is of the order of $(\\mu_1+\\ldots+\\mu_k)^r$ for any $r>0$.\n\nNotice that, if $s+t=6$,\n\\[\n\\mathcal{R}_{i,j}^{s,t}:=\\int_{\\mathcal{O}_\\rho} U_i^s\\,U_j^t=O(\\mu^3).\n\\]\nIf, additionally, $s>t$,\n\\[\n\\tilde{\\mathcal{R}}_{i,j}^{s,t}:=\\int_{B_\\rho(\\zeta_i)} U_i^t\\,U_j^s=O(\\mu^3).\n\\]\nThis implies, in particular, that $\\int_{\\mathcal{O}_\\rho}\nE_i=O(\\mu^3)$ and that $\\int_{B_{\\rho}(\\zeta_i)}U_j^6=O(\\mu^3)$.\n\n\n(i)\\,If $s=5$ and $t=1$, then we have\n\\begin{align}\n\\label{Ui5Uj}\n\\int_{B_\\rho(\\zeta_i)}U_i^5\\,U_j&=\n\\int_{B_\\rho(\\zeta_i)}w_i^5\\,U_j\n+5\\int_{B_\\rho(\\zeta_i)}w_i^4\\,\\pi_i\\,U_j+\n\\mathcal{R}_{i,j}^1,\n\\end{align}\nwhere\n\\[\n\\mathcal{R}_{i,j}^1:=20\n\\int_0^1 d\\tau\\,(1-\\tau)\n\\int_{B_\\rho(\\zeta_i)}(w_i+\\tau\\pi_i)^3\\,\\pi_i^2\\,U_j.\n\\]\nUsing the change of variable $x=\\zeta_i+\\mu_i z$\nand calling $B_{\\mu_i}=B_{\\frac{\\rho}{\\mu_i}}(0)$\nwe find that\n\\[\n\\int_{B_\\rho(\\zeta_i)}w_i^5\\,U_j\\,dx=\n\\mu_i^{\\frac{1}{2}}\\mu_j^{\\frac{1}{2}}\\int_{B_{\\mu_i}}U^5(z)\\,\\mu_j^{-\\frac{1}{2}}\\,U_j(\\zeta_i+\\mu_i z)\\,dz.\n\\]\nBy Lemma \\ref{Uiexp} we have\n\\[\n\\mu_j^{-\\frac{1}{2}}\\,U_j(\\zeta_i+\\mu_i z)=4\\pi\\alpha_3\\,G_{\\lambda}(\\zeta_i+\\mu_i z,\\zeta_j)+\\mu_j^{2-\\sigma}\\hat{\\theta}(\\mu_j,\\zeta_i+\\mu_i z,\\zeta_j).\n\\]\nWe expand\n\\begin{equation}\n\\label{Greenexp}\nG_\\lambda(\\zeta_i+\\mu_i z,\\zeta_j)=\nG_\\lambda(\\zeta_i,\\zeta_j)+\n\\mu_i\\, {\\bf c}\\cdot z+\\theta_2(\\zeta_i+\\mu_i z,\\zeta_j),\n\\end{equation}\nwhere ${\\bf c}=D_1G_\\lambda(\\zeta_i,\\zeta_j)$ and $\\vert\\theta_2(\\zeta_i+\\mu_i z,\\zeta_j)\\vert\\leq C\\mu_i^2\\,\\vert z\\vert^2.$\n\n\n\\noindent\nBy symmetry,\n\\[\n\\int_{B_{\\mu_i}}({\\bf c}\\cdot z)\\,U^5(z)\\,dz=0\n\\]\nand so,\n\\begin{align}\n\\int_{B_\\rho(\\zeta_i)}w_i^5\\,U_j\\,dx=&4\\pi\\alpha_3 \\,\n\\mu_i^{\\frac{1}{2}}\\,\\mu_j^{\\frac{1}{2}}\\,G_\\lambda(\\zeta_i,\\zeta_j)\\int_{\\R^3}U^5(z)\\,dz+\\mathcal{R}_{i,j}^2\\notag\n\\\\\\\\\n=&2a_1\\,\n\\mu_i^{\\frac{1}{2}}\\,\\mu_j^{\\frac{1}{2}}\\,G_\\lambda(\\zeta_i,\\zeta_j)+\\mathcal{R}_{i,j}^2,\n\\label{wi5Uj}\n\\end{align}\nwhere\n$\na_1:= 2\\pi\\,\\alpha_3\\int_{\\R^3}U^5\n$ and\n\\begin{align*}\n\\mathcal{R}_{i,j}^2:=&-4\\pi\\alpha_3 \\,\n\\mu_i^{\\frac{1}{2}}\\,\\mu_j^{\\frac{1}{2}}\\,G_\\lambda(\\zeta_i,\\zeta_j)\\int_{\\R^3\\setminus B_{\\mu_i}}U^5(z)\\,dz\\\\\\\\\n&+\n4\\pi\\alpha_3 \\,\n\\mu_i^{\\frac{1}{2}}\\,\\mu_j^{\\frac{1}{2}}\\,\\int_{B_{\\mu_i}}U^5(z)\\,\\theta_2(\\zeta_i+\\mu_i z,\\zeta_j)\\,dz\\\\\\\\\n&+\\mu_i^{\\frac{1}{2}}\\,\\mu_j^{\\frac{1}{2}}\\int_{B_{\\mu_i}}\\mu_j^{2-\\sigma}\\,U^5(z)\\,\\hat{\\theta}(\\mu_j,\\zeta_i+\\mu_i z,\\zeta_j)\n\\,dz.\n\\end{align*}\nFrom Lemma~\\ref{lemma22} and \\cite[Appendix]{DPDM}, we have the following expansions, for any $\\sigma > 0$, as $\\mu_i\\rightarrow 
0$\n\\begin{equation*}\n\\mu_i^{-\\frac{1}{2}}\\pi_i(\\zeta_i+\\mu_iz)=-4\\pi\\alpha_3\\,H_{\\lambda}(\\zeta_i+\\mu_iz,\\zeta_i)+\\mu_i\\,\\mathcal{D}_0(z)+\\mu_i^{2-\\sigma}\\,\\theta(\\mu_i,\\zeta_i+\\mu_iz,\\zeta_i)\n\\end{equation*}\n\\begin{equation*}\nH_{\\lambda}(\\zeta_i+\\mu_iz,\\zeta_i)=g_{\\lambda}(\\zeta_i)+\\frac{\\lambda}{8\\pi}\\,\\mu_i\\vert z\\vert\n+\\theta_0(\\zeta_i,\\zeta_i+\\mu_iz)\n\\end{equation*}\nwhere $\\theta_0$ is a function of class $C^2$ with $\\theta_0(\\zeta_i,\\zeta_i)=0$.\n\n\\noindent\nThe above expressions, combined with\nLemma \\ref{Uiexp} and \\eqref{Greenexp}, gives\n\\begin{align}\n\\nonumber\n\\int_{B_\\rho(\\zeta_i)}w_i^4\\,\\pi_i\\,U_j=&\n\\mu_i^{\\frac{3}{2}}\\,\\mu_j^{\\frac{1}{2}}\\int_{B_{\\mu_i}}U^4(z)\\,\\mu_i^{-\\frac{1}{2}}\\pi_i(\\zeta_i+\\mu_i z)\\,\\mu_j^{-\\frac{1}{2}}U_j(\\zeta_i+\\mu_i z)\\,dz\n\\\\\n\\notag\n&=-\\mu_i^{\\frac{3}{2}}\\,\\mu_j^{\\frac{1}{2}}(4\\pi\\alpha_3)^2 \\,g_\\lambda(\\zeta_i)\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\int_{\\R^3}U^4(z)\\,dz+\\mathcal{R}_3\\\\\n&=-\\frac{2}{5}\\,a_3\\,\\mu_i^{\\frac{3}{2}}\\,\\mu_j^{\\frac{1}{2}}\\, g_\\lambda(\\zeta_i)\\,\nG_\\lambda(\\zeta_i,\\zeta_j)+\\mathcal{R}_{i,j}^3,\n\\label{wi4piiUj}\n\\end{align}\nwhere $a_3:=\\frac{5}{2}(4\\pi\\alpha_3)^2\\int_{\\R^3}U^4.$\n\nFrom \\eqref{Ui5Uj}, \\eqref{wi5Uj} and \\eqref{wi4piiUj}, we get\n\\begin{align}\n\\label{Ui5UjF}\n\\int_{B_\\rho(\\zeta_i)}U_i^5\\,U_j=&\n2\\,a_1\\,\n\\mu_i^{\\frac{1}{2}}\\mu_j^{\\frac{1}{2}}G_\\lambda(\\zeta_i,\\zeta_j)\n-2\\,a_3\\,\\mu_i^{\\frac{3}{2}}\\,\\mu_j^{\\frac{1}{2}} g_\\lambda(\\zeta_i)\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\n\\\\&+\n\\mathcal{R}_{i,j}^1+\\mathcal{R}_{i,j}^2+5\\,\\mathcal{R}_{i,j}^3,\n\\mathcal{R}_{i,j}^{5,1}.\n\\notag\n\\end{align}\n\n(ii)\\, If $s=4$ and $t=2$, we have\n\\begin{align*}\n\\int_{B_\\rho(\\zeta_i)}U_i^4\\,U_j\\,U_m&=\n\\int_{B_\\rho(\\zeta_i)}w_i^4\\,U_j\\,U_m+\n\\mathcal{R}_{i,j,m}^5,\n\\end{align*}\nwhere\n\\[\n\\mathcal{R}_{i,j,m}^5:=4\n\\int_0^1 d\\tau\\,\n\\int_{B_\\rho(\\zeta_i)}(w_i+\\tau\\pi_i)^3\\,\\pi_i\\,U_j\\,U_m.\n\\]\nFrom Lemma \\ref{lemma22}, Lemma \\ref{Uiexp}, and \\eqref{Greenexp}, we get\n\\begin{align*}\n\\int_{B_\\rho(\\zeta_i)}w_i^4\\,U_j\\,U_m=&\n\\mu_i\\,\\mu_j\\,\\mu_m\\int_{B_{\\mu_i}}U^4(z)\\,\n\\Bigl(\\mu_j^{-\\frac{1}{2}}\\,U_j(\\zeta_i+\\mu_i z)\\Bigr)\\,\n\\Bigl(\\mu_m^{-\\frac{1}{2}}\\,U_m(\\zeta_i+\\mu_i z)\\Bigr)\\,dz\\\\\n&=\\mu_i\\,\\mu_j\\,\\mu_m\\,(4\\pi\\alpha_3)^2 \\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\,G_\\lambda(\\zeta_i,\\zeta_m)\\int_{\\R^3}U^4(z)\\,dz+\\mathcal{R}_{i,j,m}^6\\\\\n&=\\frac{2}{5}\\,a_3\\,\\mu_i\\,\\mu_j\\,\\mu_m\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\,G_\\lambda(\\zeta_i,\\zeta_m)+\\mathcal{R}_{i,j,m}^6.\n\\end{align*}\nTherefore,\n\\begin{align}\n\\int_{B_\\rho(\\zeta_i)}U_i^4\\,U_j\\,U_m&=\n\\frac{2}{5}\\,a_3\\,\\mu_i\\,\\mu_j\\,\\mu_m\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\,G_\\lambda(\\zeta_i,\\zeta_m)+\\mathcal{R}_{i,j,m}^5\n+\\mathcal{R}_{i,j,m}^6.\n\\label{Ui4UjUm}\n\\end{align}\n\n(iiii)\\, If $s=3$ and $t=3$, we have\n\\begin{equation*}\n\\int_{B_\\rho(\\zeta_i)}U_i^3\\,U_j^3=\n\\mathcal{R}_{i,j}^{8},\n\\end{equation*}\nwhere\n\\begin{align*}\n\\mathcal{R}_{i,j}^8:=&\\int_{B_\\rho(\\zeta_i)}w_i^3\\,U_j^3+3\\int_0^1 ds\\,\n\\int_{B_\\rho(\\zeta_i)}(w_i+s\\pi_i)^2\\,\\pi_i\\,U_j^3.\n\\end{align*}\nTo analyse the size of the remainders $\\mathcal{R}_{i,j}^\\ell$ we proceed as in \\cite{DPDM}. 
We have the following\n\\begin{equation*}\n\\frac{\\partial^{m+n}}{\\partial\\zeta^m\\partial\\mu^n}\\mathcal{R}_{i,j}^\\ell=O(\\mu^{3-(n+\\sigma)})\n\\end{equation*}\nfor each $m=0,1$, $n=0,1,2$, $m+n\\leq 2$,\n$\\ell=1,\\ldots, 8$,\nuniformly on all small $(\\mu,\\zeta)\\in \\Gamma_\\delta$.\n\n\n\\noindent\nAnalogous statements hold true for ${\\mathcal{R}}_{i,j}^{s,t}$ and $\\tilde{\\mathcal{R}}_{i,j}^{s,t}$ with $s+t=6$.\n\nFrom \\eqref{U6} and the previous analisys\nwe get that\n\\begin{align*}\n\\int_{B_{\\rho}(\\zeta_i)}\nE_i\n&=6\\,\\int_{B_{\\rho}(\\zeta_i)}U_i^5\\,Q_i+15\\int_{B_{\\rho}(\\zeta_i)}U_i^4\\,Q_i^2+\\mathcal{R}\\\\\n&=\n6\\,\\sum_{j\\neq i}\\int_{B_{\\rho}(\\zeta_i)}U_i^5\\,U_j+15\\sum_{j\\neq i}\\sum_{m\\neq i}\\int_{B_{\\rho}(\\zeta_i)}U_i^4\\, U_j\\, U_m+\\mathcal{R}.\n\\end{align*}\nThis expression together with\n\\eqref{Ui5UjF} and \\eqref{Ui4UjUm} yields\n\\begin{align*}\n\\int_{B_{\\rho}(\\zeta_i)}\nE_i\n&=\n6\\,\\sum_{j\\neq i}\\Bigl[\n2\\,a_1\\,\n\\mu_i^{\\frac{1}{2}}\\mu_j^{\\frac{1}{2}}G_\\lambda(\\zeta_i,\\zeta_j)\n-2\\,a_3\\,\\mu_i^{\\frac{3}{2}}\\,\\mu_j^{\\frac{1}{2}} g_\\lambda(\\zeta_i)\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\Bigr]\\\\\n&+6\\sum_{j\\neq i}\\sum_{m\\neq i}\n\\Bigl[\na_3\\,\\mu_i\\,\\mu_j\\,\\mu_m\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\,G_\\lambda(\\zeta_i,\\zeta_m)\n\\Bigr]\\\\\n&=\n6\\,\\sum_{j\\neq i}\\Bigl[\n2\\,a_1\\,\n\\mu_i^{\\frac{1}{2}}\\mu_j^{\\frac{1}{2}}G_\\lambda(\\zeta_i,\\zeta_j)\n-2\\,a_3\\,\\mu_i^{\\frac{3}{2}}\\,\\mu_j^{\\frac{1}{2}} g_\\lambda(\\zeta_i)\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\Bigr]\\\\\n&+6\\,a_3\\,\\mu_i\\Bigl(\\sum_{j\\neq i}\n\\mu_j\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\n\\Bigr)^2.\n\\end{align*}\nCombining relations \\eqref{Jlambda}, \\eqref {Ei}, \\eqref{U6},\n\\eqref{wi5Uj}, Lemma \\ref{lemma21} and the above expression we get\nthe conclusion.\nFor the statement of this lemma $\\theta_\\lambda^{(1)}$ is defined as the sum of all remainders.\n\n\nThe formula\n\\[\n\\int_0^\\infty\\Bigl(\\frac{r}{1+r^2} \\Bigr)^q\\frac{dr}{r^{\\alpha+1}}=\\frac{\\Gamma\\bigl(\\frac{q-\\alpha}{2}\\bigr)\\,\\Gamma\\bigl(\\frac{q+\\alpha}{2}\\bigr)}{2\\,\\Gamma(q)}\n\\]\nyields that\n\\[\na_1=8(\\alpha_3\\pi)^2,\n\\quad\na_3=120\\,(\\alpha_3\\pi^2)^2.\n\\]\n\\end{proof}\n\n\n\n\\section{The linear problem}\n\\label{sectLinear}\n\n\n\nLet $u$ be a solution of $(\\wp_\\lambda)$. 
For\n$\\varepsilon>0$, we define\n\\[\nv(y) = \\varepsilon^{1\/2} u(\\varepsilon y) .\n\\]\nThen $v$ solves the boundary value problem\n\\begin{equation}\n\\label{maineqreesc}\n\\left\\{\n\\begin{array}{rlll}\n\\Delta v+\\varepsilon^2\\,\\lambda\\, v&=&-v^5&\\text{in } \\Omega_\\varepsilon,\n\\\\\nv&>&0&\\text{in }\\Omega_\\varepsilon,\n\\\\\nv&=&0&\\text{on }\\partial\\Omega_\\varepsilon,\n\\end{array}\n\\right.\n\\end{equation}\nwhere $\\Omega_\\varepsilon=\\varepsilon^{-1}\\,\\Omega$.\nThus finding a solution of $(\\wp_\\lambda)$ which is a small perturbation of $\\sum_{i=1}^k U_i $ is equivalent to finding a solution of \\eqref{maineqreesc} of the form\n\\[\n\\sum_{i=1}^k V_i +\\phi,\n\\]\nwhere\n\\begin{align*}\nV_i(y) &= \\varepsilon^{ \\frac{1}{2}} U_i( \\varepsilon y)\n= w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)+\\varepsilon^{\\frac{1}{2}}\\pi_i(\\varepsilon\\, y)\n\\quad y \\in \\Omega_\\varepsilon,\n\\end{align*}\nfor $i=1,\\ldots, k$, and $\\phi$ is small in some appropriate sense.\n\n\nNotice that $V_i$ satisfies\n\\begin{equation*}\n\\left\\{\n\\begin{array}{rlll}\n\\Delta V_i+\\varepsilon^2 \\,\\lambda \\,V_i&=&-w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^5&\\text{in } \\Omega_\\varepsilon,\n\\\\\nV_i&=&0&\\text{on }\\partial\\Omega_\\varepsilon,\n\\end{array}\n\\right.\n\\end{equation*}\nwhere\n\\begin{align}\n\\label{muiPrime}\n\\mu_i^{\\prime}=\\frac{\\mu_i}{\\varepsilon},\\quad \\zeta_i^{\\prime}=\\frac{\\zeta_i}{\\varepsilon}.\n\\end{align}\nThen solving \\eqref{maineqreesc} is equivalent to finding $\\phi$ such that\n\\begin{equation}\n\\label{linearizedproblem}\n\\left\\{\n\\begin{array}{rlll}\nL(\\phi)&=&-N(\\phi)-E&\\text{in } \\Omega_\\varepsilon,\n\\\\\n\\phi&=&0&\\text{on }\\partial\\Omega_\\varepsilon,\n\\end{array}\n\\right.\n\\end{equation}\nwhere\n\\[\nL(\\phi)=\\Delta\\phi+\\varepsilon^2\\,\\lambda\\, \\phi+5V^4\\,\\phi,\n\\]\n\\[\nN(\\phi)=(V+\\phi)^5-V^5-5V^4\\,\\phi,\n\\]\n\\begin{equation}\n\\label{error}\nE=V^5-\n\\sum_{i=1}^k w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^5,\n\\end{equation}\nand\n\\begin{align}\n\\label{defV}\nV = \\sum_{i=1}^k V_i.\n\\end{align}\nIn what follows,\nthe canonical basis of $\\R^3$ will be denoted by\n\\[\n\\textrm{e}_1=(1,0,0), \\quad \\textrm{e}_2=(0,1,0),\\quad \\textrm{e}_3=(0,0,1).\n\\]\nLet $z_{i,j}$, $i=1,\\ldots,k$, be given by\n\\begin{equation}\n\\left\\{\n\\label{zij}\n\\begin{array}{rcl}\nz_{i,j}(y)&=&D_{\\zeta_i^{\\prime}} w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)\\cdot \\textrm{e}_j\\quad j=1,2,3\\\\\nz_{i,4}(y)&=&\\frac{\\partial\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}}{\\partial\\,\\mu_i^{\\prime}}(y).\n\\end{array}\n\\right.\n\\end{equation}\nWe recall that for each $i$, the functions $z_{i,j}$ for $j=1,\\ldots,4$, span the space\nof all bounded solutions of the linearized problem:\n\\[\n \\Delta z+5\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\, z=0\\quad\\text{ in } \\R^3.\n \\]\nA proof of this fact can be found for instance in \\cite{Rey}.\n\n\nObserve that\n\\[\n\\int_{\\R^3} w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4 z_{i,j}\\,z_{i,l}=0\\quad\\text{if }j\\neq l.\n\\]\nIn order to study the operator $L$, the key idea is that, as $\\varepsilon\\rightarrow 0$, the linear operator $L$ is close to being the sum of\n\\[\n\\Delta +5w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4,\n \\]\n$i=1,\\ldots,k$.\n\n\nRather than solving \\eqref{linearizedproblem} directly, we will look for a solution of the following problem first: Find a function $\\phi$ such that for certain constants $c_{i,j}$, $i=1,\\ldots,k$, 
$j=1,2,3,4$,\n\\begin{equation}\n\\label{linearizedproblemproj}\n\\left\\{\n\\begin{array}{rlll}\nL(\\phi)&=&-N(\\phi)-E+\\sum_{i,j}c_{ij}\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n&\\text{in } \\Omega_\\varepsilon,\n\\\\\n\\phi&=&0&\\text{on }\\partial\\Omega_\\varepsilon,\n\\\\\n\\int_{\\Omega_\\varepsilon}w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\,\\phi&=&0&\\text{for all }i,j.\n\\end{array}\n\\right.\n\\end{equation}\nAfter this is done, the remaining task is to adjust the parameters $\\zeta_i^\\prime, \\mu_i^\\prime$ in such a way that all constants $c_{ij}=0$.\n\nIn order to solve problem \\eqref{linearizedproblemproj} it is necessary to understand its linear part. Given a function $h$ we consider the problem of finding $\\phi$ and real numbers $c_{ij}$ such that\n\\begin{equation}\n\\label{linpart}\n\\left\\{\n\\begin{array}{rlll}\nL(\\phi)&=&h+\\sum_{i,j}c_{ij}\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n&\\text{in } \\Omega_\\varepsilon,\n\\\\\n\\phi&=&0&\\text{on }\\partial\\Omega_\\varepsilon,\n\\\\\n\\int_{\\Omega_\\varepsilon}w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\,\\phi&=&0&\\text{for all }i,j.\n\\end{array}\n\\right.\n\\end{equation}\nWe would like to show that this problem is uniquely solvable with uniform bounds in suitable functional spaces.\nTo this end, it is convenient to introduce the following weighted norms.\n\n\\noindent\nGiven a fixed number $\\nu\\in(0,1)$,\nwe define\n\\begin{align*}\n\\Vert f\\Vert_{\\ast}\n&=\\sup_{y\\in\\Omega_\\epsilon}\n\\Bigl(\n\\omega(y)^{-\\nu}\\,\\vert f(y)\\vert\n+\\omega(y)^{-\\nu-1}\\,\\vert \\nabla f(y)\\vert\n\\Bigr)\n\\\\\n\\Vert f\\Vert_{\\ast\\ast}\n&=\\sup_{y\\in\\Omega_\\epsilon}\n\\omega(y)^{-(2+\\nu)}\\,\\vert f(y) \\vert,\n\\end{align*}\nwhere\n\\[\n\\omega(y)=\\sum_{i=1}^k\\bigl(1+\\vert y-\\zeta_i^{\\prime}\\vert\\bigr)^{-1}.\n\\]\n\n\\begin{prop}\n\\label{solvability}\nLet $0<\\alpha<1$. Let $\\delta>0$ be given.\nThen there exist a positive number $\\varepsilon_0$ and a constant $C>0$ such that if \\,$0<\\varepsilon<\\varepsilon_0$, and\n\\begin{align}\n\\label{parametros}\n\\vert \\zeta_i^{\\prime}-\\zeta_j^{\\prime} \\vert >\\frac{\\delta}{\\varepsilon}, \\ i\\not=j; \\quad\ndist(\\zeta_i^{\\prime},\\partial\\Omega_{\\varepsilon})>\\frac{\\delta}{\\varepsilon}\n\\text{ and }\n\\delta<\\mu_i^{\\prime}<\\delta^{-1},\\ i=1,\\ldots,k,\n\\end{align}\nthen for any $h\\in C^{0,\\alpha}(\\Omega_\\varepsilon)$ with $\\Vert h\\Vert_{\\ast\\ast}<\\infty$, problem \\eqref{linpart} admits a unique solution $\\phi=T(h)\\in C^{2,\\alpha}(\\Omega_\\varepsilon)$. 
Besides,\n\\begin{equation}\n\\label{cotas}\n\\Vert T(h)\\Vert_{\\ast}\\leq C\\,\\Vert h\\Vert_{\\ast\\ast} \\quad\\text{and}\\quad\n\\vert c_{ij}\\vert\\leq C\\,\\Vert h\\Vert_{\\ast\\ast},\\,\\,i=1,\\ldots,k,\\,\\, j=1,2,3,4.\n\\end{equation}\n\\end{prop}\nHere and in the rest of this paper, we denote by $C$ a positive constant that may change from line to line but is always independent of $\\varepsilon$.\n\n\nFor the proof of the previous proposition\nwe need the following a priori estimate:\n\\begin{lemma}\n\\label{lemmaCotaApriori}\nLet $\\delta>0$ be a given small number.\nAssume the existence of sequences $(\\varepsilon_n)_{n\\in\\mathbb{N}}$, $(\\zeta^{\\prime}_{i,n})_{n\\in\\mathbb{N}}$,, $(\\mu^{\\prime}_{i,n})_{n\\in\\mathbb{N}}$ such that $\\varepsilon_n> 0$,\n$\\varepsilon_n\\rightarrow 0$,\n\\[\\vert \\zeta^{\\prime}_{i,n}-\\zeta^{\\prime}_{j,n} \\vert >\\frac{\\delta}{\\varepsilon_n}, \\ i\\not=j;\n\\quad\ndist(\\zeta^{\\prime}_{i,n},\\partial\\Omega_{\\varepsilon_n})>\\frac{\\delta}{\\varepsilon_n}\n\\text{ and }\n\\delta<\\mu^{\\prime}_{i,n}<\\delta^{-1},\\ i=1,\\ldots,k,\n\\]\nand for certain functions $\\phi_n$ and $h_n$ with\n$\\Vert h_n\\Vert_{\\ast\\ast}\\rightarrow 0$ and scalars\n$c_{ij}^n$, $i=1,\\ldots,k$, $j=1,2,3,4$, one has\n\\begin{equation}\n\\label{linpartn}\n\\left\\{\n\\begin{array}{rlll}\nL(\\phi_n)&=&h_n+\\sum_{i,j}c_{ij}^n\\,w_{\\mu_{i,n}^{\\prime},\\zeta_{i,n}^{\\prime}}^4\\,z_{ij}^n\n&\\text{in } \\Omega_{\\varepsilon_n},\n\\\\\n\\phi_n&=&0&\\text{on }\\partial\\Omega_{\\varepsilon_n},\n\\\\\n\\int_{\\Omega_{\\varepsilon_n}}w_{\\mu_{i,n}^{\\prime},\\zeta_{i,n}^{\\prime}}^4\\,z_{ij}^n\\,\\phi_n&=&0&\\text{for all }i,j,\n\\end{array}\n\\right.\n\\end{equation}\nwhere the functions $z_{ij}^n$\nare defined as in \\eqref{zij} for $\\zeta_{i,n}^{\\prime}$ and $\\mu_{i,n}^{\\prime}$.\nThen\n\\[\n\\lim_{n\\rightarrow\\infty}\\Vert \\phi_n\\Vert_{\\ast}=0.\n\\]\n\\end{lemma}\n\\begin{proof}\nArguing by contradiction, we may assume that $\\Vert \\phi_n\\Vert_{\\ast}=1$.\nWe shall establish first the weaker assertion that\n\\[\n\\lim_{n\\rightarrow\\infty}\\Vert \\phi_n\\Vert_{\\infty}=0.\n\\]\nLet us assume, for contradiction, that except possibly for a subsequence\n\\begin{equation}\n\\label{contra}\n\\lim_{n\\rightarrow\\infty}\\Vert \\phi_n\\Vert_{\\infty}=\\gamma,\\quad\\text{ with }0<\\gamma\\leq 1.\n\\end{equation}\nWe consider a cut-off function $\\eta\\in C^{\\infty}(\\R)$ with\n\\[\n\\eta(s)\\equiv 1 \\quad\\text{for } s\\leq \\frac{\\delta}{2},\\quad\n\\eta(s)\\equiv 0 \\quad\\text{for } s\\geq \\delta.\n\\]\nWe define\n\\begin{equation}\n\\label{zklcortada}\n{\\bf z}_{kl}^n(y):=\\eta(2\\,\\varepsilon_n\\,\\vert y-\\zeta_{k,n}^{\\prime}\\vert)\\,z_{kl}^n(y).\n\\end{equation}\nTesting \\eqref{linpartn} against ${\\bf z}_{kl}^n$ and integrating by parts twice we get the following relation\n\\[\n\\sum_{i,j}c_{ij}^n\\,\n\\int_{\\Omega_{\\varepsilon_n}}\nw_{\\mu_{i,n}^{\\prime},\\zeta_{i,n}^{\\prime}}^4\\,z_{ij}^n\\,{\\bf z}_{kl}^n\n=\n\\int_{\\Omega_{\\varepsilon_n}}\nL({\\bf z}_{kl}^n)\\,\\phi_n-\\int_{\\Omega_{\\varepsilon_n}}h_n\\,{\\bf z}_{kl}^n.\n\\]\nSince $z_{kl}^n$ lies on the kernel of\n\\[\nL_{k}:=\\Delta +5w_{\\mu_k^{\\prime},\\zeta_k^{\\prime}}^4,\n\\]\nwriting $L({\\bf z}_{kl}^n)=L({\\bf z}_{kl}^n)-L_k(z_{kl}^n)$,\nit is easy to check that\n\\[\n\\Bigl\\vert\n\\int_{\\Omega_{\\varepsilon_n}}\nL({\\bf z}_{kl}^n)\\,\\phi_n\n\\Bigr\\vert =o(1)\\, \\Vert \\phi_n\\Vert_{\\ast}\\quad\\text{for } l=1,2,3,4.\n\\]\nTo obtain the last estimate, we take 
into account the effect of the Laplace operator on the cut-off function $\\eta$ which is used to define ${\\bf z}_{kl}^n$ and the effect of the difference between the two potentials $V^4$ and $w_{\\mu_k^{\\prime},\\zeta_k^{\\prime}}^4$ which appear respectively in the definition of $L$ and $L_{k}$.\n\nOn the other hand, a straightforward computation yields\n\\[\n\\Bigl\\vert\n\\int_{\\Omega_{\\varepsilon_n}}h_n\\,{\\bf z}_{kl}^n\n\\Bigr\\vert \\leq C\\, \\Vert h_n\\Vert_{\\ast\\ast}.\n\\]\nFinally, since\n\\[\n\\int_{\\Omega_{\\varepsilon_n}}\nw_{\\mu_{i,n}^{\\prime},\\zeta_{i,n}^{\\prime}}^4\\,z_{ij}^n\\,{\\bf z}_{kl}^n=C\\,\\delta_{i,k}\\,\\delta_{j,l}+o(1) \\quad \\text{with } \\delta_{i,k}=\n\\left\\{\n\\begin{array}{ll}\n1 &\\text{ if }i=k\\\\\n0 &\\text{ if }i\\neq k,\n\\end{array}\n\\right.\n\\]\nwe conclude that\n\\[\n\\lim_{n\\rightarrow\\infty}c_{ij}^n=0, \\quad \\text{for all }i,j.\n\\]\nNow, let $y_n \\in \\Omega_{\\varepsilon_n}$ be such that $\\phi_n(y_n)=\\gamma$, so that $\\phi_n$ attains its absolute maximum value at this point.\nSince $\\Vert \\phi_n\\Vert_{\\ast}=1,$ there is a radius $R>0$ and $i\\in\\{1,\\ldots,k\\}$ such that, for $n$ large enough,\n\\[\\vert y_n-\\zeta_{i,n}^\\prime\\vert\\leq R\n.\\]\nDefining $\\tilde{\\phi}_n(y)=\\phi_n(y+\\zeta_{i,n}^\\prime)$ and using elliptic estimates together with the Ascoli-Arzela theorem, we have that, up to a subsequence, $\\tilde{\\phi}_n$ converges uniformly over compacts to a nontrivial bounded solution $\\tilde{\\phi}$ of\n\\begin{equation*}\n\\left\\{\n\\begin{array}{rlll}\n\\Delta \\,\\tilde{\\phi}+5\\,w_{\\mu_{i}^{\\prime},0}^4\\,\\tilde{\\phi}&=&0&\\text{in }\\R^3,\n\\\\\n\\int_{\\R^3}w_{\\mu_{i}^{\\prime},0}^4\\,z_{0,j}\\tilde{\\phi}&=&0 &\\text{for }j=1,2,3,4,\n\\end{array}\n\\right.\n\\end{equation*}\nwhich is bounded by a constant times $\\vert y\\vert^{-1}$.\nHere\n$z_{0,j}$ is defined as\nin \\eqref{zij} taking $\\zeta_{i}^{\\prime}=0$ and $\\mu_{i}^{\\prime}:=\\lim_{n\\rightarrow\\infty}\\mu_{i,n}^{\\prime}$ (up to subsequence).\nFrom the assumptions, it follows that $\\delta\\leq \\mu_{i}^{\\prime}\\leq \\delta^{-1}$.\n\nNow, taking into account that\nthe solution $w_{\\mu_{i}^{\\prime},0}$ is nondegenerate, the above implies that\n$\\tilde{\\phi}=\\sum_{j=1}^4 \\alpha_j\\,z_{0,j}(y)$\nand then, from the orthogonality conditions\nwe can deduce that $\\alpha_j=0$ for $j=1,2,3,4.$\nFrom here we obtain $\\tilde{\\phi}\\equiv 0$, which contradicts \\eqref{contra}.\nThis proves that $\\lim_{n\\rightarrow\\infty}\\Vert \\phi_n\\Vert_{\\infty}=0$.\n\nNext we shall establish that $\\Vert \\phi_n\\Vert_{\\ltimes}\\rightarrow 0$ where\n\\begin{align*}\n\\Vert \\phi \\Vert_{\\ltimes}\n&=\\sup_{y\\in\\Omega_\\epsilon}\n\\omega(y)^{-\\nu}\\,\\vert \\phi(y)\\vert .\n\\end{align*}\nDefining\n\\[\n\\psi_n(x)=\\frac{1}{\\varepsilon_n^\\nu}\\,\\phi_n\\Bigl(\\frac{x}{\\varepsilon_n}\\Bigr),\\quad x\\in\\Omega\n\\]\nwe have that $\\psi_n$ satisfies\n\\begin{equation*}\n\\left\\{\n\\begin{array}{rllll}\n\\Delta \\,\\psi_n+\\lambda \\,\\psi_n&=&\\varepsilon_n^{-(2+\\nu)}\\Bigl\\{\n&-5\n\\left( \\varepsilon_n^{1\/2}\n\\sum_{i=1}^k\nU_{\\mu_{i,n},\\zeta_{i,n}}\\right)^4\\,\\varepsilon_n^{\\nu}\\,\\psi_n\\\\\n&&&+\ng_n\n+\\sum_{i,j}c_{ij}^n\\,\\varepsilon_n^{2}\\,w_{\\mu_{i,n},\\zeta_{i,n}}^4\\,Z_{ij}^n\\Bigr\n\\}\n&\\text{in } \\Omega,\n\\\\\n\\psi_n&=&0&&\\text{on }\\partial\\Omega,\n\\end{array}\n\\right.\n\\end{equation*}\nwhere 
$\\mu_{i,n}=\\varepsilon_n\\,\\mu_{i,n}^{\\prime}$,\n$\\zeta_{i,n}=\\varepsilon_n\\,\\zeta_{i,n}^{\\prime}$,\n$g_n(x)=h_n\\bigl(\\frac{x}{\\varepsilon_n}\\bigr)$ and $Z_{ij}^n(x)=z_{ij}^n\\bigl(\\frac{x}{\\varepsilon_n}\\bigr)$.\n\n\nLet $\\zeta_i\\in\\Omega$ be such that, after passing to a subsequence, $\\vert\\zeta_{i,n}-\\zeta_i\\vert\\leq\\frac{\\delta}{4}$ for all $n\\in\\mathbb{N}$. Notice that, by the assumptions, $B_{\\frac{\\delta}{4}}(\\zeta_i)\\subset\\Omega$ and $B_{\\frac{\\delta}{4}}(\\zeta_i)\\cap B_{\\frac{\\delta}{4}}(\\zeta_j)=\\emptyset$ for $i\\not=j$.\nFrom the assumption $\\Vert \\phi_n\\Vert_{*} = 1$ we deduce that\n\\begin{equation*}\n\\vert \\psi_n(x)\\vert\\leq \\biggl(\n\\sum_{i=1}^k\n\\frac{1}{\\varepsilon_n+\\vert x-\\zeta_{i,n}\\vert} \\biggr)^{\\nu} ,\n\\quad \\forall x\\in \\Omega.\n\\end{equation*}\nSince $\\lim_{n\\rightarrow\\infty}\\Vert h_n\\Vert_{\\ast\\ast}\\rightarrow 0$,\n\\[\n\\vert g_n(x)\\vert\\leq o(1)\\,\\varepsilon_n^{2+\\nu}\\,\\biggl(\n\\sum_{i=1}^k\n\\frac{1}{\\varepsilon_n+\\vert x-\\zeta_{i,n}\\vert}\n\\biggr)^{2+\\nu}\n\\quad \\text{for }x\\in \\Omega.\n\\]\nFrom Lemma \\ref{Uiexp} we know that,\naway from\n$\\zeta_{i,n}$,\n\\[\nU_{\\mu_{i,n},\\zeta_{i,n}}(x)=C\\,\\varepsilon_{n}^{1\/2}\\,(1+o(1))\\,G_{\\lambda}(x,\\zeta_{i,n}).\n\\]\nMoreover, it is easy to see that also away from\n$\\zeta_{i,n}$,\n\\[\n\\varepsilon_n^{-\\nu}\\,\n\\sum_{j=1}^4 c_{ij}^n\\,w_{\\mu_{i,n},\\zeta_{i,n}}^4\\,Z_{ij}^n=o(1)\\quad \\text{as } \\varepsilon_n\\rightarrow 0,\n\\]\nand so, a diagonal convergence argument allows us to conclude that $\\psi_n(x)$ converges uniformly over compacts of $\\bar{\\Omega}\\setminus\\{\\zeta_1,\\ldots,\\zeta_k\\}$ to $\\psi(x)$, a solution of\n\\[\n-\\Delta \\,\\psi+\\lambda \\,\\psi=0\n\\quad\\text{in } \\Omega\\setminus\\{\\zeta_1,\\ldots,\\zeta_k\\},\n\\quad\n\\psi=0\\quad\\text{on }\\partial\\Omega ,\n\\]\nwhich satisfies\n\\begin{equation*}\n\\vert \\psi(x)\\vert\\leq \\biggl(\n\\sum_{i=1}^k\n\\frac{1}{\\vert x-\\zeta_{i,n}\\vert} \\biggr)^{\\nu} ,\n\\quad \\forall x\\in \\Omega.\n\\end{equation*}\nThus $\\psi$ has a removable singularity at all $\\zeta_i$, $i=1,\\ldots,k$, and we conclude that $\\psi(x)=0$. 
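Let us point out where the restriction $\\nu\\in(0,1)$ enters at this step: near each $\\zeta_i$ the limit function satisfies\n\\[\n\\vert \\psi(x)\\vert\\leq C\\,\\vert x-\\zeta_i\\vert^{-\\nu}=o\\bigl(\\vert x-\\zeta_i\\vert^{-1}\\bigr)\\quad\\text{as } x\\rightarrow\\zeta_i,\n\\]\nso the possible singularities are strictly weaker than that of the fundamental solution in $\\R^3$, which is what makes them removable. 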
Hence, over compacts of $\\bar{\\Omega}\\setminus\\{\\zeta_1,\\ldots,\\zeta_k\\}$, $\\vert \\psi_n(x)\\vert=o(1).$\nIn particular, this implies that, for all\n$x\\in\\Omega\\setminus\n\\bigl( \\cup_{i=1}^k\nB_{\\frac{\\delta}{4}}(\\zeta_{i,n}) \\bigr)$,\n$\n\\vert \\psi_n(x)\\vert\\leq o(1).\n$\nThus we have\n\\begin{equation}\n\\label{57}\n\\vert \\phi_n(y)\\vert\\leq o(1)\\,\\varepsilon_n^\\nu,\n\\quad \\text{for all } y\\in \\Omega_{\\varepsilon_n}\\setminus\\Bigl(\n\\bigcup_{i=1}^k\nB_{\\frac{\\delta}{4\\varepsilon_n}}(\\zeta_{i,n}^\\prime)\n\\Bigr).\n\\end{equation}\nNow, consider a fixed number $M$, such that $M<\\frac{\\delta}{4\\,\\varepsilon_n}$, for all $n\\in\\mathbb{N}$.\n\n\n\\noindent\nSince $\\Vert \\phi_n\\Vert_\\infty=o(1)$,\n\\begin{equation}\n\\label{58}\n\\bigl(1+|y-\\zeta_{i,n}^\\prime|\\bigr)^{\\nu}\n\\vert \\phi_n(y)\\vert\\leq o(1) \\quad\\text{for all }\ny\\in \\overline{B_{M}(\\zeta_{i,n}^\\prime)}.\n\\end{equation}\nWe claim that\n\\begin{equation}\n\\label{59}\n\\bigl(1+|y-\\zeta_{i,n}^\\prime|\\bigr)^{\\nu}\n\\vert \\phi_n(y)\\vert\\leq o(1) \\quad\\text{for all }\ny\\in A_{\\varepsilon_n,M},\n\\end{equation}\nwhere\n $A_{\\varepsilon_n,M}:=B_{\\frac{\\delta}{4\\,\\varepsilon_n}}(\\zeta_{i,n}^\\prime)\\setminus\\overline{B_{M}(\\zeta_{i,n}^\\prime)}$.\n\nThe proof of this assertion relies on the fact that the operator $L$ satisfies the weak maximum principle in $A_{\\varepsilon_n,M}$ in the following sense: if $u$ is bounded, continuous in $\\overline{A_{\\varepsilon_n,M}}$, $u\\in H^1(A_{\\varepsilon_n,M})$ and satisfies $L(u)\\geq 0$ in $A_{\\varepsilon_n,M}$ and $u\\leq 0$ in $\\partial\\,A_{\\varepsilon_n,M},$ then, choosing a larger $M$ if necessary, $u\\leq 0$ in $A_{\\varepsilon_n,M}$. We remark that this result is just a consequence of the fact that $L(\\vert y-\\zeta_{i,n}^\\prime\\vert^{-\\nu})\\leq 0$ in $A_{\\varepsilon_n,M}$ provided that $M$ is large enough but independent of $n$.\n\n\\noindent\nNext, we shall define an appropriate\nbarrier function. 
First we\nobserve that there exists $\\eta_n^1\\rightarrow 0$, as $\\varepsilon_n\\rightarrow 0$, such that\n\\begin{equation}\n\\label{Lphi}\n\\vert y-\\zeta_{i,n}^{\\prime}\\vert^{2+\\nu}\\,\\vert L(\\phi_n)\\vert\\leq \\eta_n^1 \\quad\n\\text{in } A_{\\varepsilon_n,M}.\n\\end{equation}\nOn the other hand, from\n\\eqref{57} we deduce the\nexistence of $\\eta_n^2\\rightarrow 0$, as $\\varepsilon_n\\rightarrow 0$, such that\n\\begin{equation}\n\\label{extrad}\n\\varepsilon_n^{-\\nu}\\vert \\phi_n(y)\\vert\\leq \\eta_n^2 \\quad \\text{ if } \\vert y-\\zeta_{i,n}^\\prime\\vert =\\delta\/4\\varepsilon_n,\n\\end{equation}\nand from \\eqref{58} we deduce the existence of $\\eta_n^3\\rightarrow 0$, as $\\varepsilon_n\\rightarrow 0$, such that\n\\begin{equation}\n\\label{intrad}\nM^{\\nu}\\vert \\phi_n(y)\\vert\\leq \\eta_n^3, \\quad \\text{if } \\vert y-\\zeta_{i,n}^\\prime\\vert =M.\n\\end{equation}\nSetting $\\eta_n=\\max\\{ \\eta_n^1, \\eta_n^2, \\eta_n^3\\}$ we find that\nthe function\n\\[\n\\varphi_n(y)=\\eta_n\\,\\vert y-\\zeta_{i,n}^{\\prime}\\vert^{-\\nu}\n\\]\ncan be used for the intended comparison argument.\n\nIndeed, for each $i=1,\\ldots,k$ we can write\n\\begin{align*}\nL(\\vert y-\\zeta_{i,n}^{\\prime}\\vert^{-\\nu})\n&=-\\Bigl(\n\\nu\\,(1-\\nu)-\\bigl(\\varepsilon_n^2\\,\\lambda\n+5\\,V^4\\bigr)\\,\\vert y-\\zeta_{i,n}^{\\prime}\\vert^2\n\\Bigr)\n\\,\\vert y-\\zeta_{i,n}^{\\prime}\\vert^{-(2+\\nu)}\\\\\n&\\leq -\\frac{\\nu\\,(1-\\nu)}{2} \\,\\vert y-\\zeta_{i,n}^{\\prime}\\vert^{-(2+\\nu)}\n\\end{align*}\nprovided $\\vert y-\\zeta_{i,n}^{\\prime}\\vert$ is large enough,\nand then\n\\begin{equation*}\nL(\\varphi_n)\\leq -\\frac{\\nu\\,(1-\\nu)}{2} \\,\\,\\eta_n\\,\\vert y-\\zeta_{i,n}^{\\prime}\\vert^{-(2+\\nu)} \\quad\n\\text{in } A_{\\varepsilon_n,M}\n\\end{equation*}\nprovided $M$ is fixed large enough (independently of $n$).\nThis together with \\eqref{Lphi} yields that $\n\\vert L(\\phi_n)\\vert\\leq - C L(\\varphi_n)\n$ in $A_{\\varepsilon_n,M}$. 
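In other words, for any constant $C$ not smaller than the one appearing in the last inequality,\n\\[\nL\\bigl(C\\,\\varphi_n\\pm\\phi_n\\bigr)=C\\,L(\\varphi_n)\\pm L(\\phi_n)\\leq C\\,L(\\varphi_n)+\\vert L(\\phi_n)\\vert\\leq 0\\quad\\text{in } A_{\\varepsilon_n,M}.\n\\]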
Moreover, it follows from\n\\eqref{extrad} and \\eqref{intrad} that\n$\n\\vert \\phi_n(y)\\vert\\leq C \\varphi_n(y)\n$ on $\\partial \\,A_{\\varepsilon_n,M}$\nand thus the maximum principle allows us to conclude that \\eqref{59} holds.\n\nThus, we have shown that $\\|\\phi_n \\|_{\\ltimes} \\to 0$ as $n\\to\\infty$.\nA standard argument using an appropriate scaling and elliptic estimates shows that $\\|\\phi_n\\|_*\\to 0$ as $n\\to \\infty$, which contradicts the assumption $\\Vert \\phi_n \\Vert_*=1$.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{solvability}]\nLet us consider the space:\n\\[\nH=\\Bigl\\{\n\\phi\\in H_0^1(\\Omega_\\varepsilon): \\int_{\\Omega_\\varepsilon}w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\,\\phi=0, \\, i=1,\\ldots,k,\\, j=1,2,3,4\n\\Bigr\\}\n\\]\nendowed with the inner product:\n\\[\n[\\phi,\\psi]=\\int_{\\Omega_\\varepsilon}\n\\nabla \\phi \\cdot\\nabla \\psi\n-\\varepsilon^2\\,\\lambda \\int_{\\Omega_\\varepsilon}\n\\phi \\,\\psi.\n\\]\nProblem \\eqref{linpart}, expressed in weak form, is equivalent to finding $\\phi\\in H$ such that\n\\[\n[\\phi,\\psi]=\\int_{\\Omega_\\varepsilon}\\Bigl[5V^4 \\phi -h\\Bigr]\\,\\psi\\quad\\text{for all }\\psi\\in H.\n\\]\nWith the aid of Riesz's representation theorem, this equation gets rewritten in $H$ in the operational form $\\phi = K(\\phi) + \\tilde{h}$, for certain $\\tilde{h}\\in H$, where $K$ is a compact operator in $H$.\nFredholm's alternative guarantees unique solvability of this problem for any $\\tilde{h}$ provided that the homogeneous equation $\\phi=K(\\phi)$ has only the zero solution in $H$. Let us observe that this last equation is precisely equivalent to \\eqref{linpart} with $h=0$. Thus existence of a unique solution follows.\nEstimate \\eqref{cotas} can be deduced from Lemma~\\ref{lemmaCotaApriori}.\n\\end{proof}\n\nIt is important, for later purposes, to understand the differentiability of the operator\n$T:h\\mapsto \\phi$ with respect to the variables $\\mu_i^{\\prime}$ and $\\zeta_i^{\\prime}$, $i=1,\\ldots,k$,\nfor $\\varepsilon$ fixed. That is, only the parameters $\\mu_i$ and $\\zeta_i$ are allowed to vary.\n\n\\begin{prop}\n\\label{propDerLinearOp}\nLet $\\mu^{\\prime}:=(\\mu_1^{\\prime},\\ldots,\\mu_k^{\\prime})$ and $\\zeta^{\\prime}:=(\\zeta_1^{\\prime},\\ldots,\\zeta_k^{\\prime})$.\nUnder the conditions of Proposition \\ref{solvability}, the map $T$ is of class $C^1$ and the derivative $D_{\\mu^{\\prime},\\,\\zeta^{\\prime}}\\,D_{\\mu^{\\prime}}T$ exists and is a continuous function.\nBesides, we have\n\\[\n\\Vert D_{\\mu^{\\prime},\\,\\zeta^{\\prime}}T(h)\\Vert_{\\ast}\n+\\Vert D_{\\mu^{\\prime},\\,\\zeta^{\\prime}}\\,D_{\\mu^{\\prime}}T(h)\\Vert_{\\ast}\\leq C\n\\Vert h\\Vert_{\\ast\\ast}.\n\\]\n\\end{prop}\n\\begin{proof}\nLet us begin with differentiation with respect to $\\zeta^\\prime$. 
Since $\\phi$ solves problem \\eqref{linpart},\nformal differentiation yields that $X_{n}:=\\partial_{(\\zeta^\\prime)_n}\\phi$, $n=1,\\ldots,3k$, should satisfy\n\\begin{equation*}\nL(X_{n})=\n-5\\,\\bigl[ \\partial_{(\\zeta^\\prime)_n} V^4\\bigl]\\,\\phi\n+\\sum_{i,j}c_{ij}^n\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n+\\sum_{i,j}c_{ij}\\,\\partial_{(\\zeta^\\prime)_n}\\bigl[w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\bigr]\n\\quad\\text{in } \\Omega_\\varepsilon\n\\end{equation*}\ntogether with\n\\begin{equation}\n\\label{ort}\n\\int_{\\Omega_\\varepsilon}X_{n}\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}+\\int_{\\Omega_\\varepsilon}\\phi\\,\\partial_{(\\zeta^\\prime)_n}\\bigl[w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\bigr]=0\\quad\\text{for }j=1,2,3,4 ,\n\\end{equation}\nwhere $c_{ij}^n=\\partial_{(\\zeta^\\prime)_n}c_{ij}$.\n\n\n\n\\noindent\nLet us consider constants $b_{ml}$\nsuch that\n\\[\n\\int_{\\Omega_\\varepsilon}\n\\Bigl(\nX_{n}-\\sum_{m,l}b_{ml} \\,{\\bf z}_{ml}\n\\Bigr)\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}=0,\n\\]\nwhere ${\\bf z}_{ml}$ is defined in \\eqref{zklcortada}.\nFrom \\eqref{ort} we get\n\\begin{equation*}\n\\sum_{m,l}b_{ml}\n\\int_{\\Omega_\\varepsilon}w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\, {\\bf z}_{ml}=-\\int_{\\Omega_\\varepsilon}\\partial_{(\\zeta^\\prime)_n}\\bigl[w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\bigr]\\,\\phi\n\\end{equation*}\nfor $i=1,\\ldots,k$, $j=1,2,3,4$.\nSince this system is diagonal dominant with uniformly bounded coefficients, we see that it is uniquely solvable and that\n\\[\nb_{ml}=O(\\Vert \\phi\\Vert_\\ast)\n\\]\nuniformly on $\\zeta^\\prime$, $\\mu^\\prime$ in $\\Omega_\\varepsilon$.\nOn the other hand, it is not hard to check that\n\\[\n\\bigl\\Vert \\phi\\,\\partial_{(\\zeta^\\prime)_n} V^4 \\bigr\\Vert_{\\ast\\ast}\\leq C\\,\n\\Vert \\phi\\Vert_\\ast.\n\\]\nRecall now that from Proposition \\ref{solvability} $c_{i,j}=O(\\Vert h\\Vert_{\\ast\\ast})$. Since besides\n\\[\\Bigl\\vert\\partial_{(\\zeta^\\prime)_n}\\bigl[w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\, z_{ij}(x)\\bigr]\\Bigr\\vert\\leq C\\, \\bigl\\vert y-\\zeta^\\prime_i\\bigr\\vert^{-7},\\]\nwe get\n\\[\\Bigl\\Vert\n\\sum_{i,j}c_{ij}\\,\\partial_{(\\zeta^\\prime)_n}\\bigl[\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\, z_{ij}\n\\bigr]\\Bigr\\Vert_{\\ast\\ast}\\leq C\\,\n\\Vert h\\Vert_{\\ast\\ast}.\n\\]\nSetting $X=X_{n}-\\sum_{m,l}b_{ml} \\,{\\bf z}_{ml}$, we have that $X$ satisfies\n\\[\nL(X)=f+\\sum_{i,j}c_{ij}^n\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\quad\\text{in }\\Omega_\\varepsilon,\n\\]\nwhere\n\\[\nf=\\sum_{m,l}b_{ml} \\,L({\\bf z}_{ml})-5\\,\\phi\\,\\partial_{(\\zeta^\\prime)_n}V^4\n+\\sum_{i,j}c_{ij}\\,\\partial_{(\\zeta^\\prime)_n}\\bigl[w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\bigr].\n\\]\nThe above estimates, together with the fact that $\\Vert \\phi\\Vert_\\ast\\leq C\\,\n\\Vert h\\Vert_{\\ast\\ast}$ implies that\n\\[\\Vert f \\Vert_{\\ast\\ast}\\leq C\\,\n\\Vert h\\Vert_{\\ast\\ast}.\\]\nMoreover, since $X\\in H_0^1(\\Omega)$\n and\n\\[\n\\int_{\\Omega_\\varepsilon}\nX\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}=0\n\\quad\\text{for all }i,j,\n\\]\nwe have that $X=T(f)$.\nThis computation is not just formal. 
Indeed,\narguing directly by definition, one gets that\n\\[\n\\partial_{(\\zeta^\\prime)_n}\\phi=\\sum_{m,l}b_{ml} \\,{\\bf z}_{ml}+T(f)\\quad\\text{and}\\quad\n\\Vert \\partial_{(\\zeta^\\prime)_n}\\phi\\Vert_{\\ast}\\leq C \\,\n\\Vert h\\Vert_{\\ast\\ast}.\n\\]\nThe corresponding result for differentiation with respect to the $\\mu_i$'s follows similarly. This concludes the proof.\n\\end{proof}\n\n\n\\section{The nonlinear problem}\n\\label{sectNonlinear}\n\nIn this section we consider the nonlinear problem\n\\eqref{linearizedproblemproj}, namely,\n\\begin{equation}\n\\label{linearizedproblemproj2}\n\\left\\{\n\\begin{array}{rlll}\nL(\\phi)&=&-N(\\phi)-E+\\sum_{i,j}c_{ij}\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n&\\text{in } \\Omega_\\varepsilon,\n\\\\\n\\phi&=&0&\\text{on }\\partial\\Omega_\\varepsilon,\n\\\\\n\\int_{\\Omega_\\varepsilon}w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\,\\phi&=&0&\\text{for all }i,j ,\n\\end{array}\n\\right.\n\\end{equation}\nand show that it has a small solution $\\phi$ for $\\varepsilon>0$ small enough.\n\n\n\nWe first obtain an estimate of the error $E$ defined in \\eqref{error}.\nAssuming \\eqref{parametros} it is possible to show that $E$ satisfies\n$\n\\|E\\|_{**} \\leq C \\varepsilon.\n$\nHowever, for the proof of the main theorem, we require a stronger estimate. In order to find it, we need to impose certain extra assumptions on the parameters.\n\nLet us use the notation\n\\[\n\\mu^{\\frac{1}{2}} =\n\\left[\n\\begin{matrix}\n\\mu_1^{\\frac{1}{2}}\n\\\\\n\\vdots\n\\\\\n\\mu_k^{\\frac{1}{2}}\n\\end{matrix}\n\\right] \\in \\R^k.\n\\]\n\n\\begin{lemma}\n\\label{lemma-error}\nAssuming that the parameters $\\mu_i,\\zeta_i$ satisfy \\eqref{parametros}, where $\\delta>0$ is fixed small, we have the existence of\n$\\varepsilon_1>0$, $C>0$, such that for all $\\varepsilon \\in (0,\\varepsilon_1)$\n\\[\n\\| E \\|_{**} \\leq C ( \\varepsilon^{\\frac{1}{2}} |M_\\lambda(\\zeta) \\mu^{\\frac{1}{2}}| + \\varepsilon^2) .\n\\]\n\\end{lemma}\n\\begin{proof}\nWe recall that\n\\begin{align*}\nE(y)=\\Bigl(\n\\sum_{i=1}^k\n\\bigl[\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)+\\varepsilon^{\\frac{1}{2}}\\pi_i(\\varepsilon\\, y)\n\\bigr]\n\\Bigr)^5\n- \\sum_{i=1}^k\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^5(y)\n,\\quad y\\in \\Omega_{\\varepsilon} .\n\\end{align*}\nFirst we note that\n\\[\n|E(y)|\\leq C \\varepsilon^5,\\quad \\text{if }\ny \\in \\widetilde \\Omega_\\varepsilon := \\Omega_\\varepsilon \\setminus \\bigcup_{j=1}^k B_{\\delta\/\\varepsilon}(\\zeta_j') ,\n\\]\nand this implies that\n\\begin{align}\n\\label{EcomplementB}\n\\sup_{y \\in \\widetilde \\Omega_\\varepsilon} \\omega(y)^{-(2+\\nu)} |E(y)| \\leq C \\varepsilon^{5-\\nu}.\n\\end{align}\nFor $y \\in B_{\\delta\/\\varepsilon}(\\zeta_i'))$ and $j\\not=i$, thanks to Lemma~\\ref{lemma22} we have\n\\[\n\\varepsilon^{\\frac{1}{2}}\\pi_i(\\varepsilon\\, y) = O(\\varepsilon), \\quad\nw_{\\mu_j^{\\prime},\\zeta_j^{\\prime}}(y)+\\varepsilon^{\\frac{1}{2}}\\pi_j(\\varepsilon\\, y)\n= O(\\varepsilon).\n\\]\nHence, using Taylor's theorem and the fact that $\\mu_i = O ( \\varepsilon )$ (which follows from \\eqref{parametros}), we find that\n\\begin{align}\n\\nonumber\nE(y)\n& = 5 w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^4\n\\Bigl( \\varepsilon^{\\frac{1}{2}}\\pi_i(\\varepsilon\\, y)\n+ \\sum_{j\\not=i} w_{\\mu_j^{\\prime},\\zeta_j^{\\prime}}(y)+\\varepsilon^{\\frac{1}{2}}\\pi_j(\\varepsilon\\, y)\n\\Bigr) \\\\\n\\label{expansionE}\n& \\quad + O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^3 
\\varepsilon^2) + O(\\varepsilon^5)\n,\\quad \\text{for } y \\in B_{\\delta\/\\varepsilon}(\\zeta_i').\n\\end{align}\nNow, Lemma~\\ref{lemma22} guarantees that, for $y \\in B_{\\delta\/\\varepsilon}(\\zeta_i')$,\n\\begin{align}\n\\label{pi}\n\\pi_i(\\varepsilon y)\n&= -4\\pi \\alpha_3 \\mu_i^{\\frac{1}{2}} H_\\lambda(\\varepsilon y,\\zeta_i)\n+O( \\mu_i^{\\frac{3}{2}} )\n=-4\\pi \\alpha_3 \\mu_i^{\\frac{1}{2}} g_\\lambda(\\zeta_i)\n+O( \\varepsilon^{\\frac{3}{2}} ) .\n\\end{align}\nSimilarly, Lemma~\\ref{Uiexp} yields that, for $y \\in B_{\\delta\/\\varepsilon}(\\zeta_i'))$ and $j\\not=i$,\n\\begin{align}\n\\nonumber\nw_{\\mu_j^{\\prime},\\zeta_j^{\\prime}}(y)+\\varepsilon^{\\frac{1}{2}}\\pi_j(\\varepsilon\\, y)\n&= V_j(y) = \\varepsilon^{ \\frac{1}{2}} U_j( \\varepsilon y)\n\\\\\n\\nonumber\n& = 4\\pi\\,\\alpha_3 \\varepsilon^{ \\frac{1}{2}} \\mu_j^{\\frac{1}{2}} \\,G_{\\lambda}(\\varepsilon y ,\\zeta_j)+O(\\mu_i^{\\frac{5}{2}-\\sigma} )\n\\\\\n\\label{vJ}\n&= 4\\pi\\,\\alpha_3\\varepsilon^{ \\frac{1}{2}} \\mu_j^{\\frac{1}{2}}\\,G_{\\lambda}(\\zeta_i ,\\zeta_j) + O(\\varepsilon^2).\n\\end{align}\nUsing \\eqref{expansionE}, along with \\eqref{pi} and \\eqref{vJ}, we find that\n\\begin{align}\n\\nonumber\nE(y)\n& = 20\\pi \\alpha_3 \\varepsilon^{\\frac{1}{2}}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^4\n\\Bigl(\n-\\mu_i^{\\frac{1}{2}} g_\\lambda(\\zeta_i)\n+\\sum_{j\\not=i} \\mu_j^{\\frac{1}{2}} G_\\lambda(\\zeta_i,\\zeta_j)\n\\Bigr) \\\\\n\\label{expansionE2}\n& \\quad + O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^3 \\varepsilon^2) + O(\\varepsilon^5) ,\n\\quad \\text{for } y \\in B_{\\delta\/\\varepsilon}(\\zeta_i')),\n\\end{align}\nwhich implies\n\\[\n\\sup_{y \\in B_{\\delta\/\\varepsilon}(\\zeta_i'))}\n\\omega(y)^{-(2+\\nu)} |E(y)|\n\\leq C \\varepsilon^{\\frac{1}{2}}\n\\bigl|\n- \\mu_i^{\\frac{1}{2}} g_\\lambda(\\zeta_i) + \\sum_{j\\not=i} \\mu_j^{\\frac{1}{2}} \\,G_{\\lambda}(\\zeta_i ,\\zeta_j)\n\\bigr| + C \\varepsilon^2.\n\\]\nThis together with \\eqref{EcomplementB} yields the desired estimate.\n\\end{proof}\n\nWe note that just assuming that $\\mu_i$, $\\zeta_i$ satisfy \\eqref{parametros} we have $|M_\\lambda(\\zeta) \\mu^{\\frac{1}{2}}| \\leq C \\varepsilon^{\\frac{1}{2}}$ and hence\n\\begin{align}\n\\label{estE1}\n\\| E \\|_{**}\\leq C \\varepsilon.\n\\end{align}\nHowever, this estimate is not sufficient to prove the main theorem. 
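Let us recall why \\eqref{estE1} is the best one can expect at this level of generality: under \\eqref{parametros} one only knows that $\\mu_i=\\varepsilon\\,\\mu_i'$ with $\\delta<\\mu_i'<\\delta^{-1}$, while the entries of $M_\\lambda(\\zeta)$ (they involve only $g_\\lambda(\\zeta_i)$ and $G_\\lambda(\\zeta_i,\\zeta_j)$) remain bounded on the region described by \\eqref{parametros}, so that\n\\[\n\\vert M_\\lambda(\\zeta)\\,\\mu^{\\frac{1}{2}}\\vert\\leq C\\,\\vert \\mu^{\\frac{1}{2}}\\vert=C\\,\\Bigl(\\sum_{i=1}^k\\mu_i\\Bigr)^{\\frac{1}{2}}\\leq C\\,\\varepsilon^{\\frac{1}{2}},\n\\]\nwhich, combined with Lemma~\\ref{lemma-error}, gives back precisely \\eqref{estE1}. 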
An essential part of the argument is to work with $\\zeta$ and $\\mu$ so that $M_\\lambda(\\zeta) \\mu^{\\frac{1}{2}} $ is smaller than $\\varepsilon^{\\frac{1}{2}}$.\n\n\n\\medskip\n\n\n\\begin{lemma}\n\\label{lema2}\nAssume that $\\zeta_i'$, $\\mu_i'$ satisfy \\eqref{parametros} where $\\delta>0$ is fixed small.\nThen there exist\n$\\varepsilon_1>0$, $C_1>0$, such that for all $\\varepsilon \\in (0,\\varepsilon_1)$ problem \\eqref{linearizedproblemproj2} has a unique solution $\\phi$ that satisfies\n\\begin{align}\n\\label{estPhi}\n\\|\\phi\\|_*\\leq C ( \\varepsilon^{\\frac{1}{2}} |M_\\lambda(\\zeta) \\mu^{\\frac{1}{2}}| + \\varepsilon^2) .\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nIn order to find a solution to problem \\eqref{linearizedproblemproj2} it is sufficient to solve the fixed point problem\n\\[\n\\phi= A(\\phi),\n\\]\nwhere\n\\begin{align}\n\\label{def-A}\nA(\\phi) = - T(N(\\phi)+E) ,\n\\end{align}\nand $T$ is the linear operator defined in Proposition \\ref{solvability}.\n\n\n\n\n\n\nNow, for a small $\\gamma>0$, let us consider the ball\n$\\mathcal{F}_\\gamma:= \\{\\phi \\in C(\\overline{\\Omega}_{\\varepsilon}) \\, \\vline \\ \\|\\phi\\|_*\\leq \\gamma \\}.$\nWe shall prove that\n$A$ is a contraction in $ \\mathcal{F}_\\gamma$ for small $\\varepsilon>0$.\nFrom Proposition \\ref{solvability}, we get\n\\[ \\|A(\\phi)\\|_*\\leq C\\left[ \\|N(\\phi)\\|_{**}+\\|E\\|_{**} \\right].\\]\nWriting the formula for $N$ as\n\\[\nN(\\phi)=20\\int_{0}^1(1-t)\\,[V+t\\phi]^3\\,dt \\,\\phi^2,\n\\]\nwe get the following estimates which are valid for\n$\\phi_1, \\phi_2\\in \\mathcal{F}_\\gamma$,\n\\[\n\\|N(\\phi_1)\\|_{**}\\leq C \\|\\phi_1\\|_*^2 ,\n\\]\n\\begin{equation}\n\\label{N}\n\\|N(\\phi_1)-N(\\phi_2)\\|_{**}\\leq C \\,\\gamma \\,\\|\\phi_1-\\phi_2\\|_*.\n\\end{equation}\nThus, we can deduce the existence of a constant $C>0$ such that\n\\[\n\\|A(\\phi)\\|_*\\leq C \\left[\\gamma^2 + \\|E\\|_{**} \\right] .\n\\]\nFrom Lemma~\\ref{lemma-error} we obtain the basic estimate $\\|E\\|_{**} \\leq C\\varepsilon$ with $C$ independent of the parameters $(\\mu,\\zeta)$ satisfying \\eqref{parametros}.\nChoosing $\\gamma = 2 C \\|E \\|_{**} $ we see that $A$ maps $\\mathcal{F}_\\gamma$ into itself if $\\gamma\\leq \\frac{1}{2C}$, which is true for $\\varepsilon>0$ small.\nUsing now \\eqref{N} we obtain\n\\[\n\\|A(\\phi_1) - A(\\phi_2) \\|_* \\leq C \\gamma \\|\\phi_1-\\phi_2\\|_*\n\\]\nfor $\\phi_1,\\phi_2 \\in \\mathcal{F}_\\gamma$. Therefore $A$ is a contraction in $ \\mathcal{F}_\\gamma$ for small $\\varepsilon>0$ and hence a unique fixed point of $A$ exists in this ball.\nThe solution $\\phi$ satisfies\n\\begin{align}\n\\label{estPhiNonlinear}\n\\|\\phi\\|_* \\leq \\gamma = 2 C \\|E\\|_{**} \\leq C ( \\varepsilon^{\\frac{1}{2}} |M_\\lambda(\\zeta) \\mu^{\\frac{1}{2}}| + \\varepsilon^2) ,\n\\end{align}\nby Lemma~\\ref{lemma-error}.\nThis concludes the proof of the lemma.\n\\end{proof}\n\n\n\n\n\n\\medskip\n\n\nWe shall next analyze the differentiability of the map $(\\zeta',\\mu')\\rightarrow \\phi$.\n\nFirst we claim that:\n\\begin{lemma}\nAssume that the parameters $\\mu_i,\\zeta_i$ satisfy \\eqref{parametros}. 
Then\n\\begin{align}\n\\label{estDerEMu}\n\\| D_{\\mu_i'} E \\|_{**} \\leq C \\varepsilon ,\n\\\\\n\\label{estDerEZeta}\n\\| D_{\\zeta_i^\\prime}\\, E \\|_{**} \\leq C \\varepsilon .\\end{align}\n\\end{lemma}\n\n\\begin{proof}\n\nFirst we observe that\n\\begin{align*}\n\\partial_{\\mu_i^\\prime}w_{\\mu_i',\\zeta_i'}&=\n\\frac{\\alpha_3 \\,\\bigl(|y-\\zeta_i'|^2-\\mu_i'^2\\bigr)}{2\\,\\sqrt{\\mu_i'}\\,\\bigl(|y-\\zeta_i'|^2+\\mu_i'^2\\bigr)^{\\frac{3}{2}}}, \\quad\nD_{\\zeta_i^\\prime}w_{\\mu_i',\\zeta_i'}=\n\\frac{\\alpha_3\\, \\sqrt{\\mu_i'}\\,(y-\\zeta_i')}{\\bigl(|y-\\zeta_i'|^2+\\mu_i'^2\\bigr)^{\\frac{3}{2}}} .\n\\end{align*}\nand hence\n\\begin{equation}\n\\label{DMUZIE}\n|\\partial_{\\mu_i^\\prime}w_{\\mu_i',\\zeta_i'}|\\leq C\\, w_{\\mu_i',\\zeta_i'} \\quad\\text{and}\\quad\n|D_{\\zeta_i^\\prime}\\,w_{\\mu_i',\\zeta_i'}|\\leq C\\, w_{\\mu_i',\\zeta_i'}^2.\n\\end{equation}\nLet us prove \\eqref{estDerEZeta}, the other being similar.\nLet us assume without loss of generality that $i=1$.\nRecall that\n\\[\nE=V^5-\n\\sum_{i=1}^k w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^5,\n\\]\nand so\n\\begin{align*}\nD_{\\zeta_1^\\prime}\\, E\n&=\n5V^4 D_{\\zeta_1^\\prime}\\, V_1\n- 5 w_{\\mu_1',\\zeta_1'}^4 D_{\\zeta_1^\\prime}\\, w_{\\mu_1',\\zeta_1'}\n\\\\\n&=\n5\n\\biggl[\n\\Bigl(\n\\sum_{i=1}^k w_{\\mu_i',\\zeta_i'} + \\varphi_i\n\\Bigr)^4\n-\nw_{\\mu_1',\\zeta_1'}^4 \\biggr]\nD_{\\zeta_1^\\prime}\\,w_{\\mu_1',\\zeta_1'}\n +\n5\\,\\Bigl(\n\\sum_{i=1}^k w_{\\mu_i',\\zeta_i'} + \\varphi_i\n\\Bigr)^4 \\, D_{\\zeta_1^\\prime}\\, \\varphi_1 ,\n\\end{align*}\nwhere $\\varphi_i(y) = \\varepsilon^{1\/2} \\pi_i(\\varepsilon y)$.\nBy \\eqref{DMUZIE}, we have that, for $y \\in B_{\\delta\/\\varepsilon}(\\zeta_1')$,\n\\begin{align}\n\\nonumber\n&\n\\biggl|\n\\biggl(\n\\Bigl(\n\\sum_{i=1}^k w_{\\mu_i',\\zeta_i'} + \\varphi_i\n\\Bigr)^4\n-\nw_{\\mu_1',\\zeta_1'}^4 \\biggr)\\,\nD_{\\zeta_1'} w_{\\mu_1',\\zeta_1'}\n\\biggr|\n\\\\\n\\nonumber\n& \\leq\nC w_{\\mu_1',\\zeta_1'}^3 \\Bigl( | \\varphi_1 |+ \\sum_{i=2}^k \\bigl( |w_{\\mu_i',\\zeta_i'}| + | \\varphi_i| \\bigr) \\Bigr) \\, |D_{\\zeta_1'} w_{\\mu_1',\\zeta_1'} |\n\\\\\n\\label{estDerE1}\n& \\leq C \\,\\varepsilon \\, w_{\\mu_1',\\zeta_1'}^5 .\n\\end{align}\nNote that from Lemma~\\ref{lemma22}, $ |D_{\\zeta_1'} \\varphi_1 (y)|\\leq C \\varepsilon^2$. 
Then,\n for $y \\in B_{\\delta\/\\varepsilon}(\\zeta_1')$,\n\\begin{align}\n\\nonumber\n\\left|\n5 \\Bigl(\n\\sum_{i=1}^k w_{\\mu_i',\\zeta_i'} + \\varphi_i\n\\Bigr)^4 \\,D_{\\zeta_1'} \\varphi_1\n\\right|\n& \\leq C \\left( w_{\\mu_1',\\zeta_1'}^4 + \\varepsilon^4 \\right) \\varepsilon^2\n\\\\\n\\label{estDerE2}\n& \\leq C \\,\\varepsilon^2\\, w_{\\mu_1',\\zeta_1'}^4.\n\\end{align}\nUsing \\eqref{estDerE1} and \\eqref{estDerE2} we find that\n\\[\n\\sup_{y \\in B_{\\delta\/\\varepsilon}(\\zeta_1')}\n\\omega(y)^{-(2+\\nu)} |D_{\\zeta_1'} E(y)| \\leq C \\varepsilon.\n\\]\nThe supremum on the rest of $\\Omega_\\varepsilon$ can be estimated similarly and this yields \\eqref{estDerEZeta}.\n\\end{proof}\n\n\n\n\\begin{lemma}\nAssume that $\\zeta$, $\\mu$ satisfy \\eqref{parametros}.\nThen\n\\begin{align}\n\\label{derPhiZeta}\n\\| D_{\\zeta_i'} \\phi\\|_* & \\leq C ( \\| E\\|_{**} + \\|D_{\\zeta'} E\\|_{**} ) ,\n\\\\\n\\label{derPhiMu}\n\\| D_{\\mu_i'} \\phi\\|_*& \\leq C ( \\| E\\|_{**} + \\|D_{\\mu'} E\\|_{**} ) .\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nTo prove differentiability of the function $\\phi(\\zeta')$ we first\nrecall that $\\phi$ is found solving the fixed point problem\n\\[\n\\phi = A (\\phi; \\mu',\\zeta')\n\\]\nwhere $A$ is given in \\eqref{def-A} but now we emphasize the dependence on $\\mu ',\\zeta '$.\nFormally, differentiating this equation with respect to $\\zeta_i'$ we find\n\\begin{align}\n\\label{fixedDerPhi}\nD_{\\zeta_i'} \\phi =\n\\partial_{\\zeta_i'} A(\\phi;\\mu',\\zeta') +\n\\partial_\\phi A(\\phi;\\mu',\\zeta')\n[D_{\\zeta_i'} \\phi].\n\\end{align}\nThe notation we are using is $D_{\\zeta_i'}$ for the total derivative of the corresponding function and $\\partial_{\\zeta_i'}$ for the partial derivative.\nFrom this fixed point problem for $D_{\\zeta_i'} \\phi $ we shall derive an estimate for $\\|D_{\\zeta_i'} \\phi\\|_*$.\n\nSince $A(\\phi;\\mu',\\zeta') = - T( N(\\phi;\\mu',\\zeta') + E;\\mu',\\zeta')$ we get\n\\begin{align*}\n\\partial_{\\zeta_i'}A(\\phi;\\mu',\\zeta')\n&=\n-\\partial_{\\zeta_i'}\nT( N(\\phi;\\mu',\\zeta') + E;\\mu',\\zeta')\n-\nT( \\partial_{\\zeta_i'}N(\\phi;\\mu',\\zeta') ;\\mu',\\zeta')\n\\\\\n& \\quad\n- T( D_{\\zeta_i'} E;\\mu',\\zeta') .\n\\end{align*}\nFrom Proposition \\ref{solvability} we see that\n\\[\n\\| T( D_{\\zeta_i'} E;\\mu',\\zeta') \\|_*\n\\leq\nC \\| D_{\\zeta_i'} E\\|_{**} .\n\\]\nUsing Proposition~\\ref{propDerLinearOp} and estimates \\eqref{estPhiNonlinear} and \\eqref{N}, we find that\n\\[\n\\| \\partial_{\\zeta_i'}\nT\\bigl( N(\\phi;\\mu',\\zeta') + E;\\mu',\\zeta'\\bigr) \\|_*\n\\leq C \\| N(\\phi;\\mu',\\zeta') + E \\|_{**}\n\\leq\nC \\|E \\|_{**} .\n\\]\nSimilarly,\n\\begin{align*}\n\\| T( \\partial_{\\zeta_i'}N(\\phi;\\mu',\\zeta') ;\\mu',\\zeta') \\|_*\n& \\leq\nC\n\\| \\partial_{\\zeta_i'}N(\\phi;\\mu',\\zeta') \\|_{**}\n\\leq\nC\n\\| \\phi \\|_{*}^2\n\\leq C \\|E\\|_{**}^2.\n\\end{align*}\nTherefore,\n\\begin{align}\n\\label{estDerAZeta}\n\\| D_{\\zeta_i'}A(\\phi;\\mu',\\zeta') \\|_* \\leq C \\|E\\|_{**}.\n\\end{align}\nNext we estimate\n\\begin{align}\n\\nonumber\n\\|\n\\partial_\\phi A(\\phi;\\mu',\\zeta')\n[D_{\\zeta_i'} \\phi] \\|_*\n&=\n\\|\nT ( \\partial_\\phi N(\\phi;\\mu',\\zeta')\n[D_{\\zeta_i'} \\phi] ) \\|_*\n\\\\\n\\nonumber\n& \\leq\n\\| \\partial_\\phi N(\\phi;\\mu',\\zeta')\n[D_{\\zeta_i'} \\phi] \\|_{**}\n\\\\\n\\nonumber\n& \\leq\nC \\, \\|\\phi\\|_* \\, \\|D_{\\zeta_i'} \\phi\\|_*\n\\\\\n\\label{estDerAPhi}\n& \\leq\nC \\, \\| E \\|_* \\, \\|D_{\\zeta_i'} \\phi\\|_* .\n\\end{align}\nFrom 
\\eqref{estDerAZeta}, \\eqref{estDerAPhi} and the fixed point problem \\eqref{fixedDerPhi} we deduce \\eqref{derPhiZeta}.\nThe proof of \\eqref{derPhiMu} is similar.\n\\end{proof}\n\nAs a corollary of the previous lemma and taking into account\n\\eqref{estDerEMu}, \\eqref{estDerEZeta}, and \\eqref{estE1} we get the following estimate\n\\begin{align}\n\\label{estDerPhi2}\n\\|D_{\\zeta_i'} \\phi \\|_* + \\|D_{\\mu_i'} \\phi \\|_* \\leq C \\varepsilon .\n\\end{align}\n\n\n\\section{The reduced energy}\n\\label{secReduction}\n\nAfter Problem (\\ref{linearizedproblemproj}) has been solved, we will find a solution to the original problem (\\ref{maineqreesc}) if we manage to adjust the pair $(\\zeta',\\mu' )$ in such a way that $c_{ij}(\\zeta',\\mu' )=0$, $i=1,\\ldots,k$, $j=1,2,3,4$. This is the {\\em reduced problem} and it turns out to be variational, that is, its solutions are critical points of the reduced energy functional\n\\begin{align}\n\\label{defIlambda}\nI_\\lambda(\\zeta',\\mu' )\n=\n\\bar J_\\lambda(V+\\phi)\n\\end{align}\nwhere $\\bar J_\\lambda$ is the energy functional for the problem \\eqref{maineqreesc}, that is,\n\\[\n\\bar J_\\lambda(v)=\\frac{1}{2}\\int_{\\Omega_\\varepsilon}\\vert\\nabla v\\vert^2 - \\varepsilon^2\\, \\frac{ \\lambda}{2}\\int_{\\Omega_\\varepsilon}v^2-\\frac{1}{6}\\int_{\\Omega_\\varepsilon}v^6,\n\\]\nthe function $V$ is the ansatz given in \\eqref{defV} and $\\phi = \\phi(\\zeta',\\mu' )$ is the solution of (\\ref{linearizedproblemproj}) constructed in Lemma~\\ref{lema2} for $\\varepsilon \\in (0,\\varepsilon_1)$.\n\n\n\\begin{lemma}\n\\label{lemmaReduction1}\nAssume that $\\zeta_i'$, $\\mu_i'$ satisfy \\eqref{parametros} where $\\delta>0$ is fixed small and $\\varepsilon_1>0$ is small as in Lemma~\\ref{lema2}. Then $I_\\lambda$ is $C^1$ and\n$V+\\phi$ is a solution to \\eqref{maineqreesc} if and only if\n\\begin{align}\n\\label{reducedSystem}\nD_{\\zeta'} I_\\lambda(\\zeta',\\mu' )=0,\n\\quad\nD_{\\mu'} I_\\lambda(\\zeta',\\mu' )=0 .\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nDifferentiating $I_\\lambda$ with respect to $\\mu_n'$\nand using that $\\phi$ solves \\eqref{linearizedproblemproj}\nwe find\n\\begin{align*}\n\\partial_{\\mu_n'}\nI_\\lambda(\\zeta',\\mu' )\n&=\nD \\bar J_\\lambda(V+\\phi)[\\partial_{\\mu_n'} V + \\partial_{\\mu_n'} \\phi]\n\\\\\n&= - \\sum_{i,j} c_{ij} \\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n\\,( \\partial_{\\mu_n'} V + \\partial_{\\mu_n'} \\phi ) .\n\\end{align*}\nSimilarly\n\\begin{align*}\nD_{\\zeta_{n}'}\nI_\\lambda(\\zeta',\\mu' )\n&= - \\sum_{i,j} c_{ij} \\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij} \\,\n( D_{\\zeta_{n}'} V + D_{\\zeta_{n}'} \\phi ) .\n\\end{align*}\nSince all terms in these expressions depend continuously on $\\zeta',\\mu'$ we deduce that $I_\\lambda$ is $C^1$.\n\nClearly if $V+\\phi$ is a solution to \\eqref{maineqreesc} then all $c_{ij}=0$ and hence \\eqref{reducedSystem} holds.\nConversely, if \\eqref{reducedSystem} holds, then\n\\begin{align}\n\\label{systemC}\n\\left\\{\n\\begin{aligned}\n\\sum_{i,j} c_{ij}\n\\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij} \\,\n( \\partial_{\\mu_n'} V + \\partial_{\\mu_n'} \\phi )\n&=0\n\\\\\n\\sum_{i,j} c_{ij}\n\\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij} \\,\n( D_{\\zeta_{n}'} V \\cdot e_l + D_{\\zeta_{n}'} \\phi \\cdot e_l)\n&=0,\n\\end{aligned}\n\\right.\n\\end{align}\nfor all $n=1,\\ldots,k$ and $l=1,2,3$. 
Thanks to \\eqref{estDerPhi2} we see that\n\\[\n\\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\,\n\\partial_{\\mu_n'} \\phi \\to0,\n\\quad\n\\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij} \\,\nD_{\\zeta_{n}'} \\phi \\to0,\n\\]\nas $\\varepsilon\\to 0$. Also, by \\eqref{zij} and the expansion in Lemma~\\ref{lemma22} we find that\n\\[\n\\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n\\,\\partial_{\\mu_n'} V\n=\n\\delta_{j4}\\, \\delta_{in}\n\\int_{\\R^3} w_{\\mu',0}^4 (\\partial_\\mu w_{\\mu',0})^2 + o(1)\n\\]\nand\n\\[\n\\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n\\,D_{\\zeta_{n}'} V \\cdot e_l\n=\n\\delta_{in}\\,\n\\delta_{jl}\n\\int_{\\R^3}\nw_{\\mu',0}^4 (\\nabla w_{\\mu',0}\\cdot e_1)^2\n+o(1)\n\\]\nas $\\varepsilon\\to0$, for some $\\mu' \\in (\\delta,\\frac{1}{\\delta})$.\n\nTherefore the system of equations \\eqref{systemC} is invertible for the $c_{ij}$ when $\\varepsilon>0$ is small, and hence $c_{ij}=0$ for all $i,j$.\n\\end{proof}\n\n\nA nice feature of the system of equations \\eqref{reducedSystem} is that it turns out to be equivalent to finding critical points of a functional of the pair $(\\zeta',\\mu')$ which is close, in an appropriate sense, to the energy of $k$ bubbles $U_1 + \\ldots + U_k$.\n\n\n\\begin{lemma}\n\\label{lemmaApproxEnergy}\nAssume the same conditions as in Lemma~\\ref{lemmaReduction1}.\nThen\n\\begin{align}\n\\label{expansion2}\nI_\\lambda(\\zeta',\\mu') = J_\\lambda(\\sum_{i=1}^k U_i) + \\theta_\\lambda^{(2)}(\\zeta',\\mu'),\n\\end{align}\nwhere $\\theta_\\lambda^{(2)}$ is given by\n\\[\n\\theta_\\lambda^{(2)}(\\zeta',\\mu') = - \\int_0^1 s \\left[\\int_{\\Omega_\\varepsilon}\\Bigl(\\vert\\nabla \\phi\\vert^2 - \\varepsilon^2 \\lambda\\, \\phi^2-5 (V + s\\phi)^4 \\phi^2\\Bigr)\\right]\\,ds ,\n\\]\nwhere $\\phi = \\phi(\\zeta',\\mu') $ is the solution of \\eqref{linearizedproblemproj2} found in Lemma~\\ref{lema2}.\n\\end{lemma}\n\\begin{proof}\nFrom Taylor's formula\nwe find that\n\\begin{align*}\nI_\\lambda(\\zeta',\\mu' )\n&=\n\\bar J_\\lambda(V)\n+ D \\bar J_\\lambda(V+\\phi) [\\phi]\n+ \\theta_\\lambda^{(2)}(\\zeta',\\mu'),\n\\end{align*}\nwhere\n\\begin{align}\n\\label{formulaR}\n\\theta_\\lambda^{(2)}(\\zeta',\\mu')\n&= -\\int_0^1 s D^2 \\bar J_\\lambda(V + s\\phi)[\\phi^2] \\,ds .\n\\end{align}\nBut since $\\phi$ satisfies \\eqref{linearizedproblemproj}, we have that\n\\begin{align*}\nD \\bar J_\\lambda(V+\\phi) [\\phi]\n&=- \\sum_{i,j} c_{ij} \\int_{\\Omega_\\varepsilon} w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij} \\phi = 0 ,\n\\end{align*}\nwhich implies \\eqref{expansion2}.\n\\end{proof}\n\nWe remark that assuming \\eqref{parametros} we get\n\\[\n\\left|\n\\theta_\\lambda^{(2)}(\\zeta',\\mu') \\right|\\leq C \\varepsilon^2 ,\n\\]\nsince \\eqref{estPhi} holds.\n\n\n\\section{Critical multi-bubble}\n\\label{sectProof}\n\\noindent\nLet $k\\geq2$ be a given integer.\nFor $\\delta>0$ fixed small we consider the set\n\\begin{multline*}\n\\Omega_\\delta^k:=\\{\n\\zeta\\equiv(\\zeta_1,\\ldots,\\zeta_k)\\in\\Omega^k: \\,\\textrm{dist}(\\zeta_i,\\partial \\Omega)>\\delta, \\vert \\zeta_i-\\zeta_j\\vert>\\delta,\n i=1,\\ldots,k,\\, j\\neq i\n\\}.\n\\end{multline*}\nRecall that the main term in the expansion of $J_\\lambda\\Bigl(\\sum_{i=1}^k U_i\\Bigr)$ is the function\n\\begin{align*}\nF_\\lambda(\\zeta,\\mu)\n&:=\n\\,k\\,a_0\n+a_1\\sum_{i=1}^k\\Bigl(\\mu_i\\,g_{\\lambda}(\\zeta_i)-\\sum_{j\\neq 
i}\\mu_i^{1\/2}\\,\\mu_j^{1\/2}\\,G_{\\lambda}(\\zeta_i,\\zeta_j)\\Bigr)+a_2\\,\\lambda\\,\\sum_{i=1}^k \\mu_i^2\n\\\\\n&\n-a_3\\,\\sum_{i=1}^k\\Bigl(\\mu_i\\,g_{\\lambda}(\\zeta_i)-\\sum_{j\\neq i}\\mu_i^{1\/2}\\mu_j^{1\/2}\\,G_{\\lambda}(\\zeta_i,\\zeta_j)\\Bigr)^2 ,\n\\end{align*}\nwhere $\\zeta\\in\\Omega_\\delta^k$,\n$\\mu\\equiv(\\mu_1,\\ldots,\\mu_k)\\in(\\R^+)^k$\nand the constants $a_i$ are given in \\eqref{a0}--\\eqref{a3}.\n\n\\noindent\n\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm1}]\n\nBy Lemma~\\ref{lemmaReduction1}, $v= V+\\phi$ solves \\eqref{maineqreesc} if the function $I_\\lambda(\\zeta',\\mu')$ defined in \\eqref{defIlambda} has a critical point.\n\nIn the sequel we will write also $I_\\lambda(\\zeta,\\mu)$ for the same function but depending on $\\zeta$, $\\mu$, which we always assume satisfy the relation \\eqref{muiPrime} with $\\zeta'$, $\\mu'$.\n\nUsing the expansion of $J_\\lambda\\Bigl(\\sum_{i=1}^kU_i\\Bigr)$ given in Lemma~\\ref{lemmaEnergyExpansion}, together with Lemma~\\ref{lemmaApproxEnergy}, we see that\n$I_\\lambda(\\zeta,\\mu)$ has the form\n\\[\nI_\\lambda(\\zeta,\\mu) = F_\\lambda(\\zeta,\\mu) + \\theta_\\lambda(\\zeta,\\mu)\n\\]\nwhere $\\theta_\\lambda(\\zeta,\\mu)= \\theta_\\lambda^{(1)}(\\zeta,\\mu) + \\theta_\\lambda^{(2)}(\\zeta,\\mu) $, $\\theta_\\lambda^{(1)}$ is the remainder that appears in Lemma~\\ref{lemmaEnergyExpansion} and $\\theta_\\lambda^{(2)}$ the remainder in Lemma~\\ref{lemmaApproxEnergy}.\n\n\nIt is convenient to perform the change of variables\n\\begin{align}\n\\label{LambdaMu}\n\\Lambda_i := \\mu_i^{1\/2} ,\n\\end{align}\nwhere now $\\Lambda\\equiv(\\Lambda_1,\\ldots,\\Lambda_k)\\in \\R^k$, and write, with some abuse of notation,\n\\begin{align*}\nF_{\\lambda}(\\zeta,\\Lambda)\n:=&\n\\,k\\,a_0\n+a_1\\sum_{i=1}^k\\Bigl(\\Lambda_i^2\\,g_{\\lambda}(\\zeta_i)-\\sum_{j\\neq i}\\Lambda_i\\,\\Lambda_j\\,G_{\\lambda}(\\zeta_i,\\zeta_j)\\Bigr)+a_2\\,\\lambda\\,\\sum_{i=1}^k \\Lambda_i^4\\\\\n&-a_3\\,\\sum_{i=1}^k\\Bigl(\\Lambda_i^2\\,g_{\\lambda}(\\zeta_i)-\\sum_{j\\neq i}\\Lambda_i \\Lambda_j\\,G_{\\lambda}(\\zeta_i,\\zeta_j)\\Bigr)^2.\n\\end{align*}\nNote that $\\partial_{\\mu'_i} I_\\lambda(\\mu',\\zeta')=0$ is equivalent to $\\partial_{\\mu_i} \\tilde F_\\lambda=0$, whenever $\\Lambda_i \\not=0$.\n\nThe function $F_\\lambda$ can be expressed in terms of the matrix $M_\\lambda$ as\n\\[F_{\\lambda}(\\zeta,\\Lambda)\n=\n\\,k\\,a_0\n+a_1\\,\n\\Lambda^T M_\\lambda(\\zeta) \\Lambda\n+a_2\\,\\lambda\\,\\sum_{i=1}^k \\Lambda_i^4-a_3\\,\\sum_{i=1}^k\\Lambda_i^2\\,( M_\\lambda(\\zeta) \\Lambda )_i^2.\n\\]\nIn what follows we write $\\sigma_1(\\varepsilon,\\zeta) $ for the smallest eigenvalue of $M_\\lambda(\\zeta)$ where $\\lambda = \\lambda_0 + \\varepsilon$.\nUsing the Perron-Frobenius theorem or a direct argument as in \\cite{bahri-li-rey} the eigenvalue $\\sigma_1(\\varepsilon,\\zeta) $ is simple and has an eigenvector $v_1(\\varepsilon,\\zeta) $ with $|v_1(\\varepsilon,\\zeta) |=1$ and whose components are all positive.\nBy a standard application of the implicit function theorem, we have that $\\sigma_1(\\varepsilon,\\zeta)$ and $v_1(\\varepsilon,\\zeta)$ are smooth functions of $\\varepsilon$ and $\\zeta$ in a neighborhood of $(0,\\zeta^0)$.\n\n\\noindent\nWe also have the following properties as a consequence of the hypothesis:\n\\[\nD_\\zeta \\sigma_1(0,\\zeta^0) = 0,\\qquad\nD^2_{\\zeta\\zeta} \\sigma_1(0,\\zeta^0)\\, \\text{is nonsingular},\\qquad\n\\frac{\\partial \\sigma_1 }{\\partial \\lambda} (0,\\zeta^0)<0.\n\\]\nThese 
assertions can be proved by observing that\n\\[\n\\psi_{\\lambda_0+\\varepsilon}(\\zeta)=\\det M_{\\lambda_0+\\varepsilon}(\\zeta) = \\sigma_1(\\varepsilon,\\zeta) \\sigma_*(\\varepsilon,\\zeta) ,\n\\]\nwhere $\\sigma_*(\\varepsilon,\\zeta)$ is the product of the rest of the eigenvalues of $M_{\\lambda_0+\\varepsilon}(\\zeta)$. Since $\\sigma_1$ is a simple eigenvalue and $M_{\\lambda_0}(\\zeta^0)$ is positive semidefinite, we have $\\sigma_*(0,\\zeta^0)>0$ and this is still true for $\\varepsilon,\\zeta$ in a neighborhood of $(0,\\zeta^0)$.\nThen the properties stated above for $\\sigma_1$ follow from our assumptions on $\\psi_{\\lambda_0+\\varepsilon}(\\zeta)$.\n\n\nSince $\\frac{\\partial \\sigma_1 }{\\partial \\lambda} (0,\\zeta^0)<0$,\nwe deduce that there are $\\varepsilon_0>0$ and $c_0>0$ such that\n \\begin{align}\n\\label{sigma-negative}\n\\sigma_1(\\varepsilon,\\zeta) <0 ,\n\\quad\n\\text{for } \\varepsilon \\in (0,\\varepsilon_0),\n\\quad\n\\zeta \\in B_{c_0 \\sqrt{\\varepsilon}}(\\zeta^0).\n\\end{align}\nNext we construct a $k\\times k$ matrix $P (\\varepsilon,\\zeta)$ for $\\varepsilon$ and $\\zeta$ in a neighborhood of $(0,\\zeta^0)$ with the following properties:\n\\begin{itemize}\n\\item[a)] the first column of $P$ is $v_1(\\varepsilon,\\zeta)$,\n\\item[b)] columns 2 to $k$ of $P$ are orthogonal to $v_1(\\varepsilon,\\zeta)$,\n\\item[c)] $P (\\varepsilon,\\zeta)$ is smooth for $\\varepsilon$ and $\\zeta$ in a neighborhood of $(0,\\zeta^0)$,\n\\item[d)] $P (0,\\zeta^0)$ is such that $M_{\\lambda_0}(\\zeta^0) = P (0,\\zeta^0) D P (0,\\zeta^0)^T$ with $D$ diagonal,\n\\item[e)] $P (0,\\zeta^0)^T P (0,\\zeta^0) = I$.\n\\end{itemize}\nTo achieve this we let $\\bar v_1,\\ldots,\\bar v_k$ be an orthonormal basis of $\\R^k$ of eigenvectors of $M_{\\lambda_0}(\\zeta^0)$ such that $\\bar v_1 = v_1(0,\\zeta^0)$.\nWe let, for $\\varepsilon>0$ and $\\zeta$ close to $\\zeta^0$,\n\\[\nv_i(\\varepsilon,\\zeta) = \\bar v_i - (\\bar v_i\\cdot v_1(\\varepsilon,\\zeta) ) v_1(\\varepsilon,\\zeta)\n, \\quad 2\\leq i\\leq k,\n\\]\nand $P $ be the matrix whose columns are $v_1(\\varepsilon,\\zeta) ,\\ldots, v_k(\\varepsilon,\\zeta) $.\n\nWe remark that although it would be more natural to consider a matrix $\\tilde P(\\varepsilon,\\zeta)$, which diagonalizes $M_\\lambda(\\zeta)$, this matrix may not be differentiable with respect to $\\varepsilon$ and $\\zeta$. 
For this reason we choose to work with $P$ as defined before.\n\nLet us perform the following change of variables\n\\begin{align}\n\\label{changeLambda}\n\\Lambda=|\\sigma_1|^{1\/2}P(\\varepsilon,\\zeta) \\bar{\\Lambda} .\n\\end{align}\nNote that the quadratic form $\\Lambda^T M_\\lambda(\\zeta) \\Lambda$ can be written as\n\\begin{align*}\n\\Lambda^T M_\\lambda(\\zeta) \\Lambda\n= \\sigma_1(\\varepsilon,\\zeta) |\\sigma_1(\\varepsilon,\\zeta) | \\bar\\Lambda_1^2 + |\\sigma_1(\\varepsilon,\\zeta)| (\\bar\\Lambda')^T Q(\\varepsilon,\\zeta) \\bar\\Lambda',\n\\end{align*}\nwhere\n\\[\n\\bar\\Lambda' =\n\\left[\n\\begin{matrix}\n\\bar\\Lambda_2 \\\\\n\\vdots\\\\\n\\bar\\Lambda_k\n\\end{matrix}\n\\right]\n,\\quad\nQ(\\varepsilon,\\zeta)= P'(\\varepsilon,\\zeta)^T M_{\\lambda_0+\\varepsilon}(\\zeta) P'(\\varepsilon,\\zeta)\n\\]\nand $ P'(\\varepsilon,\\zeta) = [v_2,\\ldots,v_k]$ is the matrix formed by the columns 2 to $k$ of $P(\\varepsilon,\\zeta)$.\n\n\n\n\nThus $I_\\lambda(\\zeta,\\bar\\Lambda) = F_\\lambda(\\zeta,\\bar\\Lambda) + \\theta_\\lambda(\\zeta,\\bar\\Lambda)$ can be written as\n\\begin{align}\n\\nonumber\nI_\\lambda(\\zeta,\\bar{\\Lambda}) & = ka_0\n+ a_1 \\Big[ -\\sigma_1(\\varepsilon,\\zeta)^2 \\bar\\Lambda_1^2 + |\\sigma_1(\\varepsilon,\\zeta)| (\\bar\\Lambda')^T Q(\\varepsilon,\\zeta) \\bar\\Lambda' \\Bigr]\n\\\\\n\\label{formF1lambda}\n& \\quad\n+\\sigma_1(\\varepsilon,\\zeta)^2\\,\n\\mathcal Poly_4(\\varepsilon,\\zeta,\\bar \\Lambda)\n+ \\theta_\\lambda(\\zeta,\\bar \\Lambda),\n\\end{align}\nwhere\n\\begin{align*}\n& \\mathcal Poly_4(\\varepsilon,\\zeta,\\bar \\Lambda)\n:=a_2\\lambda\\sum_{i=1}^k\n\\Bigl(\\sum_{j=1}^k\nP_{ij} (\\varepsilon,\\zeta) \\bar{\\Lambda}_j\\Bigr)^4\n\\\\\n& \\quad\n-a_3\\sum_{i=1}^k\n\\Big[\n\\Bigl(\\sum_{j=1}^k\nP_{ij}(\\varepsilon,\\zeta)\\bar{\\Lambda}_j\\Bigr)^2\n\\Bigl(\n\\sigma_1 (\\varepsilon,\\zeta)v_{1,i}(\\varepsilon,\\zeta) \\bar\\Lambda_1\n+\n\\sum_{j=2}^k\n\\sum_{l=1}^k\n(M_{\\lambda_0+\\varepsilon}(\\zeta))_{il}\\, P_{lj}(\\varepsilon,\\zeta) \\,\\bar{\\Lambda}_j\\Bigr)^2\n\\Big] ,\n\\end{align*}\nand $\\theta_\\lambda(\\zeta,\\bar \\Lambda)$ denotes the function $\\theta_\\lambda(\\zeta,\\mu) $ where we have used the transformations \\eqref{LambdaMu} and \\eqref{changeLambda}.\n\nNote that $\\mathcal Poly_4(\\varepsilon,\\zeta,\\bar \\Lambda) $ is a polynomial in the variables $\\bar{\\Lambda}_1,\\ldots,\\bar{\\Lambda}_k$ of degree $4$ whose coefficients are functions of $\\varepsilon$ and $\\zeta$.\n\nWe need to solve the equations $D_{\\zeta} I_{\\lambda}=0$,\n$\\frac{\\partial I_{\\lambda}}{\\partial\\bar{\\Lambda}_1}=0$, $\\ldots,$\n$\\frac{\\partial I_{\\lambda}}{\\partial\\bar{\\Lambda}_k}=0$.\nBecause of the the absolute value of $\\sigma_1$ appearing in \\eqref{formF1lambda} it is a bit more convenient to modify this function by defining\n\\begin{align*}\n\\bar F_{\\lambda}(\\zeta,\\bar{\\Lambda})\n& = ka_0\n- a_1 \\sigma_1(\\varepsilon,\\zeta)^2 \\bar\\Lambda_1^2\n- a_1 \\sigma_1(\\varepsilon,\\zeta) (\\bar\\Lambda')^T Q(\\varepsilon,\\zeta) \\bar\\Lambda'\n\\\\\n& \\quad\n+\\sigma_1(\\varepsilon,\\zeta)^2\\, \\mathcal Poly_4(\\varepsilon,\\zeta,\\bar \\Lambda)\n+ \\theta_\\lambda(\\zeta,\\bar \\Lambda) ,\n\\end{align*}\nwhich coincides with $I_{\\lambda}$ when $\\sigma_1<0$.\n\n\n\n\n\n\nNext we compute\n\\begin{align*}\nD_{\\zeta} \\bar F_{\\lambda}\n& =\n-2a_1\\sigma_1\\,(D_{\\zeta}\\sigma_1)\\bar{\\Lambda}_1^2\n-a_1 (D_\\zeta \\sigma_1) (\\bar\\Lambda')^T Q \\bar\\Lambda'\n- a_1 \\sigma_1 (\\bar\\Lambda')^T ( D_\\zeta Q) 
\\bar\\Lambda'\n\\\\\n& \\quad\n+2\\,\\sigma_1\\,(D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4\n+\\sigma_1^2\\, D_{\\zeta} \\mathcal Poly_4\n+D_\\zeta \\theta_\\lambda,\n\\\\\n\\frac{\\partial \\bar F_\\lambda}{\\partial \\bar\\Lambda_1}\n&=\n- 2 a_1 \\sigma_1^2 \\bar\\Lambda_1\n+\\sigma_1^2\\,\n\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4\n+ \\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_1} ,\n\\\\\n\\frac{\\partial \\bar F_\\lambda}{\\partial \\bar\\Lambda_l}\n&=\n- 2a_1 \\sigma_1 \\sum_{j=2}^k Q_{j-1,l-1} \\bar\\Lambda_j\n+\\sigma_1^2\\,\n\\frac{\\partial }{\\partial \\bar \\Lambda_l} \\mathcal Poly_4\n+ \\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_l} ,\n\\end{align*}\nwith $ l=2,\\ldots,k$.\n\n\n\n\n\nObserve that, whenever $\\sigma_1<0$, the equations\n$D_{\\zeta} \\bar F_{\\lambda}=0$,\n$\\frac{\\partial \\bar F_{\\lambda}}{\\partial\\bar{\\Lambda}_n}=0$, $n=1,\\ldots,k$, are equivalent to\n\\begin{align}\n\\nonumber\n0&=\n-2a_1\\bar{\\Lambda}_1^2 (D_\\zeta\\sigma_1)\n- \\frac{a_1}{\\sigma_1} (D_\\zeta \\sigma_1 ) (\\bar\\Lambda')^T Q \\bar\\Lambda'\n- a_1 (\\bar\\Lambda')^T ( D_\\zeta Q ) \\bar\\Lambda'\n\\\\\n\\label{eq1}\n& \\quad\n+2 (D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4\n+\\sigma_1\\, D_{\\zeta}\\mathcal Poly_4\n+\\frac{1}{\\sigma_1}\t D_\\zeta \\theta_\\lambda,\n\\\\\n\\label{eq2}\n0&=\n- 2 a_1 \\bar\\Lambda_1\n+\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4\n+ \\frac{1}{\\sigma_1^2}\\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_1} ,\n\\\\\n\\label{eq3}\n0&=- 2a_1 \\sum_{j=2}^k Q_{j-1,l-1} \\bar\\Lambda_j\n+\\sigma_1\\, \\frac{\\partial }{\\partial \\bar \\Lambda_l}\\mathcal Poly_4\n+ \\frac{1}{\\sigma_1} \\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_l} ,\n\\end{align}\nwith $ l=2,\\ldots,k$.\nNote that we have normalized the equations (the first one was divided by $\\sigma_1$, the second by $\\sigma_1^2$ and the last ones by $\\sigma_1$).\n\n\nWe claim that there exists $\\varepsilon_0>0$ such that for each $\\varepsilon\\in (0,\\varepsilon_0)$ the system \\eqref{eq1}, \\eqref{eq2}, \\eqref{eq3}\nhas a solution $(\\zeta(\\varepsilon)$, $\\bar\\Lambda(\\varepsilon))$ such that $\\sigma_1(\\varepsilon,\\zeta(\\varepsilon))<0$, thus yielding a critical point of $I_{\\lambda_0+\\varepsilon}$.\n\n\n\nWe will prove that \\eqref{eq1}, \\eqref{eq2}, \\eqref{eq3} has a solution using degree theory in a ball centered at a suitable point $(\\bar\\Lambda^0, \\zeta^0)$ and with a conveniently small radius.\n\nTo find the center of this ball, let us consider a simplified version of equations \\eqref{eq2}, \\eqref{eq3}, by omitting the terms involving $\\theta_\\lambda$ and evaluating at $\\varepsilon=0$, $\\zeta=\\zeta^0$.\nUsing that $Q(0,\\zeta^0)$ is the diagonal matrix with entries $\\sigma_2,\\ldots,\\sigma_k$,\nwhere $0,\\sigma_2,\\ldots,\\sigma_k$ are the eigenvalues of $M_{\\lambda_0}(\\zeta^0)$, we get\n\\begin{align}\n\\label{eq2a}\n0&=\n- 2 a_1 \\bar\\Lambda_1\n+\\frac{\\partial }{\\partial \\bar \\Lambda_1} \\mathcal Poly_4(0,\\zeta_0,\\bar \\Lambda) , \\\\\n\\label{eq3a}\n0&=- 2a_1 \\sigma_l \\bar\\Lambda_l\n ,\\quad l=2,\\ldots,k .\n\\end{align}\nWe note that there is a solution of \\eqref{eq2a}, \\eqref{eq3a} which has the form\n$\\bar\\Lambda^0= ( \\bar\\Lambda^0_1,\\ldots,\\bar\\Lambda^0_k)$ with\n\\[\n\\bar\\Lambda^0_l = 0 \\qquad \\text{for all }l =2,\\ldots, k\n\\]\nand\n\\begin{align}\n\\label{barLambda10}\n\\bar{\\Lambda}^0_1:=\\sqrt{\\frac{a_1}{2 a_2 \\lambda_0 \\sum_{i=1}^k 
P_{i1}(0,\\zeta^0)^4 }} .\n\\end{align}\nFor later purposes it will be useful to know that the linearization of the functions on the right hand side of \\eqref{eq2a} and \\eqref{eq3a} around $\\bar\\Lambda^0$ define an invertible operator.\nSince the right hand side of \\eqref{eq3a} is a constant times the identity it is sufficient to study\nthe expression\n$ - 2 a_1 \\bar\\Lambda_1 +\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4(0,\\zeta_0,\\bar \\Lambda) $.\nA straightforward computation yields\n\\begin{align}\n\\nonumber\n\\frac{\\partial }{\\partial\\bar{\\Lambda}_1}\n\\left[\n - 2 a_1 \\bar\\Lambda_1 + \\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4 (0,\\zeta_0,\\bar \\Lambda)\n\\right]\n(\\bar{\\Lambda}^0)\n&=-2a_1\\,+\n12a_2\\lambda_0\\Bigl(\\sum_{i=1}^kP_{i1}(0,\\zeta^0)^4\\Bigr)(\\bar{\\Lambda}_1^0)^2\n\\\\\n\\label{firstCoefficient}\n& =4a_1,\n\\end{align}\nwhich is nonzero.\n\n\nWe now introduce one more change of variables\n\\[\n\\widehat\\Lambda_j = \\bar\\Lambda_j - \\bar\\Lambda_j^0, \\quad 1\\leq j\\leq k .\n\\]\nDefine\n\\[\n\\Upsilon (\\zeta,\\widehat \\Lambda) = A (\\zeta,\\widehat{\\Lambda}) + \\mathcal R (\\zeta,\\widehat{\\Lambda}),\n\\]\nwhere\n\\[\nA (\\zeta,\\widehat{\\Lambda}) = (A_0 (\\zeta,\\widehat{\\Lambda}) ,A_1 (\\zeta,\\widehat{\\Lambda}) ,\\ldots, A_k (\\zeta,\\widehat{\\Lambda}) ))\n\\]\nwith\n\\begin{align*}\nA_0 (\\zeta,\\widehat{\\Lambda}) &=\n- a_1 (\\bar\\Lambda_1^0)^2 D^2_{\\zeta\\zeta}\\sigma_1(0,\\zeta^0) (\\zeta-\\zeta^0),\n\\\\\nA_1 (\\zeta,\\widehat{\\Lambda}) &= 4 a_1 \\widehat\\Lambda_1\n+ \\sum_{j=2}^k \\frac{\\partial^2 }{\\partial \\bar \\Lambda_j \\partial \t\\bar \\Lambda_1}\n\\mathcal Poly_4\n(0,\\zeta^0,\\bar \\Lambda^0)\\widehat \\Lambda_j\n+D_\\zeta \\frac{\\partial }{ \\partial \t\\bar \\Lambda_1}\n\\mathcal Poly_4\n(0,\\zeta^0,\\bar \\Lambda^0)(\\zeta-\\zeta^0),\n\\\\\nA_l (\\zeta,\\widehat{\\Lambda}) &= -2 a_1 \\sigma_l \\widehat \\Lambda_l , \\quad l=2,\\ldots,k,\n\\end{align*}\nand\n\\[\n\\mathcal{R} (\\zeta,\\widehat{\\Lambda}) = (\\mathcal{R}_0 (\\zeta,\\widehat{\\Lambda}) ,\\mathcal{R}_1 (\\zeta,\\widehat{\\Lambda}) ,\\ldots, \\mathcal{R}_k (\\zeta,\\widehat{\\Lambda}) ))\n\\]\nwith\n\\begin{align*}\n\\mathcal R_0 (\\zeta,\\widehat \\Lambda)\n&=\n-a_1 (\\bar \\Lambda_1^0)^2 \\bigl( D_\\zeta\\sigma_1(\\varepsilon,\\zeta) - D^2_{\\zeta\\zeta} \\sigma_1(0,\\zeta^0)(\\zeta-\\zeta^0) \\bigr)\n\\\\\n& \\quad\n-2a_1 (2 \\bar \\Lambda_1^0 \\widehat \\Lambda_1 + \\widehat \\Lambda_1^2) D_\\zeta\\sigma\t_1(\\varepsilon,\\zeta)\n\\\\\n& \\quad\n- \\frac{a_1}{\\sigma_1} D_\\zeta \\sigma_1 (\\widehat \\Lambda')^T Q(\\varepsilon,\\zeta) \\widehat\\Lambda'\n- a_1 (\\widehat\\Lambda')^T ( D_\\zeta Q(\\varepsilon,\\zeta) ) \\widehat\\Lambda'\n\\\\\n& \\quad\n+2 (D_{\\zeta}\\sigma_1) \\, ( \\mathcal Poly_4(\\varepsilon,\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda) -\n\\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda^0 ) )\n+\\sigma_1\\, D_{\\zeta}\\mathcal Poly_4(\\varepsilon,\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda)\n\\\\\n&\\quad\n+\\frac{1}{\\sigma_1}\t D_\\zeta \\theta_\\lambda(\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda),\n\\\\\n\\mathcal R_1 (\\zeta,\\widehat \\Lambda)\n&=\n\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4 (\\varepsilon,\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda)\n-\n\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4 (0,\\zeta^0,\\bar\\Lambda^0)\n- \\sum_{j=1}^k \\frac{\\partial^2 }{\\partial \\bar \\Lambda_j \\partial \t\\bar \\Lambda_1}\\mathcal Poly_4(0,\\zeta^0,\\bar 
\\Lambda^0)\\widehat \\Lambda_j\n\\\\\n& \\quad\n-D_\\zeta \\frac{\\partial }{ \\partial \t\\bar \\Lambda_1}\n\\mathcal Poly_4\n(0,\\zeta^0,\\bar \\Lambda^0)(\\zeta-\\zeta^0)\n+ \\frac{1}{\\sigma_1^2}\\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_1} (\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda) ,\n\\\\\n\\mathcal R_l (\\zeta,\\widehat \\Lambda)\n&=- 2a_1 \\sum_{j=2}^k ( Q_{j-1,l-1} (\\varepsilon,\\zeta) -\\delta_{jl}) \\widehat \\Lambda_j\n+\\sigma_1(\\varepsilon,\\zeta)\\, \\frac{\\partial }{\\partial \\bar \\Lambda_l} \\mathcal Poly_4 (\\varepsilon,\\zeta,\\Lambda^0 + \\widehat \\Lambda)\n\\\\\n&\\quad\n+ \\frac{1}{\\sigma_1(\\varepsilon,\\zeta)} \\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_l} (\\zeta,\\Lambda^0 + \\widehat \\Lambda) ,\\quad l=2,\\ldots k.\n\\end{align*}\nLet us indicate the motivation for the definition of $A_0$. In equation \\eqref{eq1} we combine the terms $-2a_1\\bar{\\Lambda}_1^2 (D_\\zeta\\sigma_1) $ and $ 2 (D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4$ into the expression\n\\begin{align*}\n&\n-2a_1\\bar{\\Lambda}_1^2 (D_\\zeta\\sigma_1) + 2 (D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4\n\\\\\n&=\n-2a_1 (\\bar{\\Lambda}_1^0)^2(D_\\zeta\\sigma_1)\n-2a_1\\bigl ( 2 \\bar{\\Lambda}_1^0 \\widehat{\\Lambda}_1 + \\widehat{\\Lambda}_1^2 \\Bigr) (D_\\zeta\\sigma_1)\n\\\\\n& \\quad\n+ 2 (D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda_0)\n+ 2 (D_{\\zeta}\\sigma_1) \\, ( \\mathcal Poly_4 - \\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda_0) ) .\n\\end{align*}\nIn this expression we combine\n\\begin{align*}\n-2a_1 (\\bar{\\Lambda}_1^0)^2(D_\\zeta\\sigma_1)\n+ 2 (D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda_0)\n&= 2 (D_{\\zeta}\\sigma_1) \\Bigl[\n-a_1 (\\bar{\\Lambda}_1^0)^2 + \\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda_0) \\Bigr].\n\\end{align*}\nBut an explicit computation using \\eqref{barLambda10} gives\n\\[\n-a_1( \\bar{\\Lambda}_1^0)^2 + \\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda^0) =\n-\\frac{1}{2} a_1( \\bar{\\Lambda}_1^0)^2 .\n\\]\nThen\n\\begin{align*}\n&\n-2a_1 (\\bar{\\Lambda}_1^0)^2(D_\\zeta\\sigma_1)\n+ 2 (D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda_0)\n\\\\\n&= -a_1 (\\bar \\Lambda_1^0)^2 (D_\\zeta\\sigma_1)\n\\\\\n&=\n-a_1 (\\bar \\Lambda_1^0)^2 D^2_{\\zeta\\zeta}\\sigma_1(0,\\zeta^0)(\\zeta-\\zeta^0)\n-a_1 (\\bar \\Lambda_1^0)^2\n\\bigl(\n(D_\\zeta\\sigma_1) - D^2_{\\zeta\\zeta}\\sigma_1(0,\\zeta^0)(\\zeta-\\zeta^0)\n\\bigr) .\n\\end{align*}\nWe define $A_0$ as $-a_1 (\\bar \\Lambda_1^0)^2 D^2_{\\zeta\\zeta}\\sigma_1(0,\\zeta^0)(\\zeta-\\zeta^0)$ and we leave all the others terms in $\\mathcal R_0$.\n\nThen the equations \\eqref{eq1}, \\eqref{eq2} and \\eqref{eq3} for the unknowns $\\widehat\\Lambda_j$, $1\\leq j\\leq k$ and $\\zeta$ are equivalent to\n\\[\n\\Upsilon (\\zeta,\\widehat \\Lambda) = 0 .\n\\]\nWe are going to show that the this equation has a solution in the ball\n\\[\n\\mathcal B = \\{ (\\zeta,\\widehat\\Lambda) \\in \\R^{3k}\\times \\R^k : | (\\zeta-\\zeta^0,\\widehat\\Lambda) | < \\varepsilon^{1-\\sigma} \\}\n\\]\nwith a fixed and small $\\sigma>0$, using degree theory.\n\n\n\n\n\n\n\nThe linear operator $(\\zeta,\\widehat\\Lambda) \\mapsto A(\\zeta-\\zeta^0,\\widehat{\\Lambda})$ is invertible thanks to hypothesis (iii) in the statement of the theorem and \\eqref{firstCoefficient}. 
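Let us briefly record why this operator is invertible (we only sketch the linear algebra, assuming that hypothesis (iii) expresses the nondegeneracy of $\\zeta^0$ as a critical point of $\\zeta\\mapsto\\sigma_1(0,\\zeta)$, that is, the invertibility of $D^2_{\\zeta\\zeta}\\sigma_1(0,\\zeta^0)$, and using that $\\sigma_2,\\ldots,\\sigma_k>0$, which holds when $M_{\\lambda_0}(\\zeta^0)$ is positive semidefinite with a simple zero eigenvalue). Ordering the unknowns as $(\\zeta-\\zeta^0,\\widehat\\Lambda',\\widehat\\Lambda_1)$ and the components of $A$ as $(A_0,A_2,\\ldots,A_k,A_1)$, and noting that $A_0$ involves only $\\zeta-\\zeta^0$ while $A_l$, $l\\geq 2$, involves only $\\widehat\\Lambda_l$, the matrix of this linear map is block lower triangular with diagonal blocks $-a_1(\\bar\\Lambda_1^0)^2 D^2_{\\zeta\\zeta}\\sigma_1(0,\\zeta^0)$, $-2a_1\\,\\mathrm{diag}(\\sigma_2,\\ldots,\\sigma_k)$ and $4a_1$, so that\n\\[\n\\det A = \\bigl(-a_1(\\bar\\Lambda_1^0)^2\\bigr)^{3k}\\det\\bigl(D^2_{\\zeta\\zeta}\\sigma_1(0,\\zeta^0)\\bigr)\\,\\prod_{l=2}^k\\bigl(-2a_1\\sigma_l\\bigr)\\, 4a_1 \\neq 0 .\n\\]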
Hence there is a constant $c>0$ such that\n\\[\n|A(\\zeta, \\widehat\\Lambda)|\\geq c |( (\\zeta-\\zeta^0), \\widehat\\Lambda)|,\n\\]\nfor $(\\zeta,\\widehat{\\Lambda} ) \\in \\partial \\mathcal B$, if we take $\\varepsilon>0$ sufficiently small.\nTo conclude that the equation $A(\\zeta,\\widehat{\\Lambda}) + \\mathcal R(\\zeta,\\widehat{\\Lambda}) =0$ has a solution in $\\mathcal B$, it suffices to verify that\n\\[\n|\\mathcal R(\\zeta,\\widehat{\\Lambda})| \\leq\no( \\varepsilon^{1-\\sigma})\n\\]\nuniformly for $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$ as $\\varepsilon\\to 0$.\n\nBefore performing the computations we recall the assumptions we are imposing on $\\mu$, $\\zeta$.\nFrom \\eqref{LambdaMu} and \\eqref{changeLambda} we have\n\\[\n\\mu^{\\frac{1}{2}} = |\\sigma_1(\\varepsilon,\\zeta) |^{\\frac{1}{2}} P(\\varepsilon,\\zeta) \\bar{\\Lambda}.\n\\]\nThen for $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$,\n\\begin{align}\n\\label{cotaZetaGorro}\n|\\zeta-\\zeta^0|\\leq \\varepsilon^{1-\\sigma}\n\\end{align}\nand\n\\begin{align}\n\\label{cotaLambdaGorro}\n\\bar\\Lambda = \\bar\\Lambda^0 + \\widehat{\\Lambda},\n\\quad\n| \\widehat{\\Lambda} | \\leq \\varepsilon^{1-\\sigma} .\n\\end{align}\nUsing Taylor's theorem we see that, for $|\\zeta-\\zeta^0|\\leq \\varepsilon^{1-\\sigma}$,\n\\begin{align}\n\\label{cotasSigma}\n-c_1\\varepsilon\\leq \\sigma_1(\\varepsilon,\\zeta) \\leq -c_2\\varepsilon\n\\end{align}\nwith $c_1,c_2>0$, and in particular\n\\[\n|\\mu_i | \\leq C \\varepsilon ,\\quad i=1,\\ldots,k.\n\\]\nAlso for $|\\zeta-\\zeta^0|\\leq \\varepsilon^{1-\\sigma}$,\n\\begin{align}\n\\label{cotaGradSigma1}\n|D_\\zeta \\sigma_1(\\varepsilon,\\zeta)| \\leq C \\varepsilon^{1-\\sigma} .\n\\end{align}\nWe will also need the following estimates: for $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$ we have\n\\begin{align}\n\\label{cota1}\n|D_\\zeta \\theta_\\lambda(\\zeta, \\bar\\Lambda^0+\\hat \\Lambda)| & \\leq C \\varepsilon^{3-\\sigma},\n\\\\\n\\label{cota2}\n\\Bigl|\n\\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_1} (\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda) \\Bigr| & \\leq C \\varepsilon^{3-\\sigma\/2},\n\\\\\n\\label{cota3}\n\\Bigl|\n\\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_l} (\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda) \\Bigr| & \\leq C \\varepsilon^{3-\\sigma} , \\quad l=2,\\ldots,k.\n\\end{align}\nWe will prove these estimates later on.\n\nFor $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$ let us estimate $\\mathcal R_0 (\\zeta,\\widehat \\Lambda) $.\nWe start with\n\\begin{align*}\n& \\big|\n(\\bar \\Lambda_1^0)^2 \\bigl( D_\\zeta\\sigma_1(\\varepsilon,\\zeta) - D^2_{\\zeta\\zeta} \\sigma_1(0,\\zeta^0)(\\zeta-\\zeta^0) \\bigr)\n\\bigr|\n\\\\\n& \\leq C |D_\\zeta\\sigma_1(\\varepsilon,\\zeta)-D_\\zeta\\sigma_1(0,\\zeta)|\n+C\\bigl| D_\\zeta\\sigma_1(0,\\zeta) - D_\\zeta\\sigma_1(0,\\zeta^0) - D^2_{\\zeta\\zeta} \\sigma_1(0,\\zeta^0)(\\zeta-\\zeta^0) \\bigr|\n\\\\\n& \\leq C \\varepsilon + C \\varepsilon^{2-2\\sigma} \\leq C \\varepsilon,\n\\end{align*}\nsince $|\\zeta-\\zeta^0|\\leq \\varepsilon^{1-\\sigma}$.\nNext,\n\\begin{align*}\n\\bigl|\n(2 \\bar \\Lambda_1^0 \\widehat \\Lambda_1 + \\widehat \\Lambda_1^2) D_\\zeta\\sigma\t_1(\\varepsilon,\\zeta)\n\\bigr|\n& \\leq C\\varepsilon^{2-2\\sigma}\n\\end{align*}\nbecause $|\\widehat{\\Lambda}_1|\\leq \\varepsilon^{1-\\sigma}$ and $D_\\zeta \\sigma_1(0,\\zeta^0)=0$.\nTo estimate $ \\frac{D_\\zeta \\sigma_1}{\\sigma_1} (\\widehat \\Lambda')^T 
Q(\\varepsilon,\\zeta) \\widehat\\Lambda' $ we note that \\eqref{cotasSigma} together with \\eqref{cotaGradSigma1} implies\n\\[\n\\Bigl|\n\\frac{D_\\zeta \\sigma_1(\\varepsilon,\\zeta)}{\\sigma_1(\\varepsilon,\\zeta)}\n\\Bigr| \\leq C \\varepsilon^{-\\sigma}\n\\]\nand so\n\\begin{align*}\n\\Bigl|\na_1 \\frac{D_\\zeta \\sigma_1}{\\sigma_1} (\\widehat \\Lambda')^T Q(\\varepsilon,\\zeta) \\widehat\\Lambda' \\Bigr|\n\\leq C \\varepsilon^{2-3\\sigma}.\n\\end{align*}\nNext, using \\eqref{cota1} we estimate\n\\begin{align*}\n\\bigl|\na_1 (\\widehat\\Lambda')^T ( D_\\zeta Q(\\varepsilon,\\zeta) ) \\widehat\\Lambda'\n\\bigr|\n&\\leq C \\varepsilon^{2-2\\sigma},\n\\\\\n\\bigl|\n2 (D_{\\zeta}\\sigma_1) \\, ( \\mathcal Poly_4(\\varepsilon,\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda) -\n\\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda^0 ) )\n\\bigr|\n&\\leq C \\varepsilon^{2-2\\sigma} ,\n\\\\\n|\\sigma_1(\\varepsilon,\\zeta) D_\\zeta \\mathcal Poly_4( \\varepsilon,\\zeta,\\bar\\Lambda^0 + \\widehat{\\Lambda})|\n& \\leq C \\varepsilon ,\n\\\\\n\\left|\\frac{1}{\\sigma_1}\t D_\\zeta \\theta_\\lambda(\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda)\n\\right|\n&\\leq C \\varepsilon^{2-\\sigma}\n\\end{align*}\nThis proves that\n\\begin{align}\n\\label{R0}\n|\\mathcal R_0 (\\zeta,\\widehat \\Lambda) |\\leq C \\varepsilon\n\\end{align}\nfor $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$, if we have fixed $\\sigma>0$ small.\n\n\nLet us estimate $ |R_1 (\\zeta,\\widehat \\Lambda) | $ for $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$.\nBy Taylor's theorem we have that\n\\begin{align*}\n& \\Bigl|\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4 (\\varepsilon,\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda)\n-\n\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4 (0,\\zeta^0,\\bar\\Lambda^0)\n- \\sum_{j=1}^k \\frac{\\partial^2 }{\\partial \\bar \\Lambda_j \\partial \t\\bar \\Lambda_1}\\mathcal Poly_4(0,\\zeta^0,\\bar \\Lambda^0)\\widehat \\Lambda_j\n\\\\\n& \\quad\n-D_\\zeta \\frac{\\partial }{ \\partial \t\\bar \\Lambda_1}\n\\mathcal Poly_4\n(0,\\zeta^0,\\bar \\Lambda^0)(\\zeta-\\zeta^0)\n\\Bigr| \\leq C \\varepsilon + C |\\zeta-\\zeta^0|^2 + C |\\widehat{\\Lambda}|^2 \\leq C \\varepsilon.\n\\end{align*}\nOn the other hand by \\eqref{cota2} we have\n\\begin{align*}\n\\left|\n\\frac{1}{\\sigma_1^2}\\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_1} (\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda)\n\\right| \\leq C \\varepsilon^{1-\\sigma\/2} .\n\\end{align*}\nThis shows that\n\\begin{align}\n\\label{R1}\n|\\mathcal R_1 (\\zeta,\\widehat \\Lambda) |\\leq C \\varepsilon^{1-\\sigma\/2}\n\\end{align}\nfor $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$.\n\n\nFinally, using \\eqref{cota3}, we have that for $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$ and $l=2,\\ldots,k$, the following holds\n\\begin{align*}\n\\biggl|\n2a_1 \\sum_{j=2}^k ( Q_{j-1,l-1} (\\varepsilon,\\zeta) -\\delta_{jl}) \\widehat \\Lambda_j\n\\biggr|\n\\leq C \\varepsilon^{2-2\\sigma},\n\\end{align*}\n\\begin{align*}\n\\biggl| \\sigma_1(\\varepsilon,\\zeta)\\, \\frac{\\partial }{\\partial \\bar \\Lambda_l} \\mathcal Poly_4 (\\varepsilon,\\zeta,\\Lambda^0 + \\widehat \\Lambda)\n\\biggr|\n\\leq C \\varepsilon^{2-2\\sigma},\n\\end{align*}\nand\n\\begin{align*}\n\\left|\n\\frac{1}{\\sigma_1(\\varepsilon,\\zeta)} \\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_l} (\\zeta,\\Lambda^0 + \\widehat \\Lambda)\n\\right|\n\\leq C 
\\varepsilon^{2-\\sigma}.\n\\end{align*}\nTherefore,\n\\begin{align}\n\\label{R2}\n|\\mathcal R_l (\\zeta,\\widehat \\Lambda) |\\leq C \\varepsilon^{2-2\\sigma}, \\quad l=2,\\ldots,k\n\\end{align}\nfor $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$.\n\nCombining \\eqref{R0}, \\eqref{R1} and \\eqref{R2} we obtain\n\\[\n|\\mathcal{R}(\\zeta,\\widehat{\\Lambda})|\\leq C \\varepsilon^{1-\\sigma\/2},\n\\quad \\forall ( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}.\n\\]\nA standard application of degree theory then yields a solution of $\\Upsilon(\\zeta,\\widehat{\\Lambda})=0$ in the ball $\\mathcal{B}$.\nNote that for $(\\zeta,\\widehat{\\Lambda}) \\in \\mathcal B$ we are in the region where \\eqref{sigma-negative} holds, and hence $\\sigma_1(\\zeta , \\bar\\Lambda^0 + \\widehat{\\Lambda})<0$. Therefore we have found a critical point of $I_\\lambda(\\zeta,\\mu)$, which was the desired conclusion.\n\\end{proof}\n\n\n\n\n\\begin{proof}[Proof of \\eqref{cota1}, \\eqref{cota2}, \\eqref{cota3}]\nBy Lemma~\\ref{lemmaEnergyExpansion} (using the satement with $\\frac{\\sigma}{2}$ instead of $\\sigma$) we get directly the estimates\n\\begin{align}\n\\label{theta1a}\n| D_\\zeta \\theta_\\lambda^{(1)} (\\zeta,\\mu) | & \\leq C |\\mu|^{3-\\sigma\/2} \\leq C \\varepsilon^{3-\\sigma\/2} ,\n\\\\\n\\label{theta1b}\n\\Bigl|\n\\frac{\\partial \\theta_\\lambda^{(1)}}{\\partial \\bar\\Lambda_i} (\\zeta,\\mu) \\Bigr|\n&\\leq C |\\mu|^{3-\\sigma\/2} \\leq C \\varepsilon^{3-\\sigma\/2} .\n\\end{align}\nTo estimate $D_\\zeta \\theta_\\lambda^{(2)}$, we recall formula \\eqref{formulaR} which gives\n\\begin{align*}\n\\theta_\\lambda^{(2)}(\\zeta,\\mu)\n&= \\int_0^1 s D^2 \\bar J_\\lambda(V + s\\phi)[\\phi^2] \\,ds .\n\\\\\n&=\n\\int_0^1 s \\left[\\int_{\\Omega_\\varepsilon}\\vert\\nabla \\phi\\vert^2 - \\varepsilon^2 \\lambda \\phi^2-5 (V + s\\phi)^4 \\phi^2\\right]\\,ds\n\\end{align*}\nand therefore\n\\begin{align*}\n|D_\\zeta \\theta_\\lambda^{(2)}(\\zeta,\\mu) |\n\\leq C \\|\\phi\\|_{*} \\|D_\\zeta\\phi\\|_* + \\frac{C}{\\varepsilon}\\|\\phi\\|_*^2.\n\\end{align*}\nWe can compute\n\\begin{align*}\nM_\\lambda(\\zeta) \\mu^{\\frac{1}{2}}\n&=|\\sigma_1|^{\\frac{1}{2}} M_\\lambda P \\bar\\Lambda\n=|\\sigma_1|^{\\frac{1}{2}} \\Bigl( \\sigma_1 v_1 \\bar\\Lambda_1 + \\sum_{l=2}^k \\bar v_l \\bar \\Lambda_l \\Bigr) ,\n\\end{align*}\nand thanks to \\eqref{cotasSigma} we see that\n\\[\n|M_\\lambda(\\zeta) \\mu^{\\frac{1}{2}}|\\leq C \\varepsilon^{\\frac{3}{2}-\\sigma} ,\n\\]\nwhich in turn implies\n\\[\n\\| E \\|_{**} \\leq C \\varepsilon^{2-\\sigma}, \\quad\n\\| \\phi \\|_{*} \\leq C \\varepsilon^{2-\\sigma} .\n\\]\nFrom this we deduce\n\\[\n| \\theta_\\lambda^{(2)}(\\zeta,\\mu) |\\leq C \\varepsilon^{4-2\\sigma}.\n\\]\nWe can write \\eqref{expansionE2} in the form (near $\\zeta_i'$)\n\\begin{align*}\nE(y)\n& = - 20\\pi \\alpha_3 \\varepsilon^{\\frac{1}{2}}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)\nM_\\lambda \\mu^{\\frac{1}{2}} + O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y) \\varepsilon^2) + O(\\varepsilon^5)\n\\\\\n&=\n- 20\\pi \\alpha_3 \\varepsilon^{\\frac{1}{2}}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)\n|\\sigma_1|^{\\frac{1}{2}} \\Bigl( \\sigma_1 v_1 \\bar\\Lambda_1 + \\sum_{l=2}^k \\bar v_l \\bar \\Lambda_l \\Bigr)\n+ O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^3 \\varepsilon^2) + O(\\varepsilon^5) .\n\\end{align*}\nThe $O(\\cdot )$ terms are bounded together with their derivatives with respect to $\\zeta'$, $\\mu'$.\nDifferentiating $E$ with respect to $\\zeta'$ and $\\bar\\Lambda_l$, taking 
into account the last expression, and thanks to\n\\eqref{cotaZetaGorro}, \\eqref{cotaLambdaGorro}, \\eqref{cotasSigma} and \\eqref{cotaGradSigma1},\nwe find that for $|y-\\zeta_i|\\leq \\frac{\\delta}{\\varepsilon}$ the following hold\n\\begin{align*}\nD_{\\zeta'} E\n&= O(\\varepsilon^{2-\\sigma} )w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^4\n+O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^3 \\varepsilon^2) + O(\\varepsilon^5),\n\\end{align*}\n\\begin{align*}\nD_{\\bar\\Lambda_1} E\n&= O(\\varepsilon^{ 2-\\sigma } )w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^4\n+O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^3 \\varepsilon^2) + O(\\varepsilon^5),\n\\end{align*}\nand for $l=2,\\dots,k$\n\\begin{align*}\nD_{\\bar\\Lambda_l} E\n&= O(\\varepsilon )w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^4\n+O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^3 \\varepsilon^2) + O(\\varepsilon^5) .\n\\end{align*}\nFrom this and analogous estimates outside of all the balls $B_{\\delta\/\\varepsilon}(\\zeta_i')$ it follows that\n\\begin{align*}\n\\|D_{\\zeta'} \\phi\\|_* \\leq \\varepsilon^{ 2-\\sigma } , \\quad\n\\|D_{\\bar\\Lambda_1} \\phi\\|_* \\leq \\varepsilon^{ 2-\\sigma}, \\quad\n\\|D_{\\bar\\Lambda_l} \\phi\\|_* \\leq \\varepsilon, \\quad l=2,\\ldots,k.\n\\end{align*}\nAs a consequence,\n\\begin{align}\n\\label{theta2}\n|D_\\zeta\\theta_\\lambda^{(2)}(\\zeta,\\mu) |\\leq C \\varepsilon^{3-\\sigma},\n\\quad\n|D_{\\bar\\Lambda_1} \\theta_\\lambda^{(2)}(\\zeta,\\mu) |\\leq C \\varepsilon^{4-2\\sigma},\n\\quad\n|D_{\\bar\\Lambda_l}\\theta_\\lambda^{(2)}(\\zeta,\\mu) |\\leq C \\varepsilon^{3-\\sigma} ,\n\\end{align}\nfor $l=2,\\ldots,k$.\n(Here we are assuming $\\sigma>0$ small so that $3-\\sigma < 4 - 2 \\sigma$).\n\nCombining \\eqref{theta1a}, \\eqref{theta1b} and \\eqref{theta2} we obtain the estimates\n\\eqref{cota1}, \\eqref{cota2}, \\eqref{cota3}.\n\\end{proof}\n\n\n\n\\section{The case of the annulus}\n\\label{exampleAnnulus}\n\nLet $0<a<1$ and consider the annulus\n\\[\n\\Omega_a = \\{ x \\in \\R^3 : a < |x| < 1 \\} .\n\\]\n\n\\begin{prop}\n\\label{propA1}\nLet $k\\geq 2$. Then there exists $a_k \\in (0,1)$ such that for every $a\\in (a_k,1)$ there is $\\lambda>0$ and a solution of $(\\wp_\\lambda)$ with $k$ concentration points.\n\\end{prop}\n\nExplicit values of $a_k$ seem difficult to get, but one can obtain estimates that show that for a low number of peaks the annulus does not need to be so thin. In particular for two bubbles we have the following estimate.\n\n\n\\begin{prop}\n\\label{propA2}\nFor $a\\in (\\frac{1}{49},1)$ there is $\\lambda>0$ and a solution of $(\\wp_\\lambda)$ with $2$ concentration points.\n\\end{prop}\n\nLet us first give a lemma about the behavior of the Green function for a thin domain.\nFor this we now write $G_0^a(x,y)$, $H_0^a(x,y)$, $g_0^a(x) = H_0^a(x,x)$ for the Green function, its regular part and the Robin function respectively for $\\lambda=0$ in the domain $\\Omega_a$.\n\\begin{lemma}\n\\label{lemma81}\nLet $x_0, y_0$ be fixed so that $|x_0|=|y_0|=1$ and $y_0 \\not= x_0$. Then\n\\begin{align}\n\\label{convergenceGreen}\nG_0^a(y,x) \\to 0\n\\end{align}\nas $a \\to 1$ uniformly for $y = r y_0$ with $r\\in (a,1)$ and $x = r' x_0$ with $r'\\in (a,1)$.\nMoreover,\n\\begin{align}\n\\label{convergenceRobin}\n\\min_{\\Omega_a} g_0^a \\to \\infty\n\\end{align}\nas $a \\to 1$.\n\\end{lemma}\n\\begin{proof}\nTo prove \\eqref{convergenceGreen} let us write $\\varepsilon= 1 -a>0$ and let $\\varepsilon\\to 0$. 
We also change the notation $G_0^a$ to $G_0^\\varepsilon$, $\\Omega_a$ to $\\Omega_\\varepsilon$ and shift coordinates so that the annulus is centered at $-e_1$:\n\\[\n\\Omega_\\varepsilon = \\{ z\\in \\R^3 : 1-\\varepsilon<|z + e_1|<1\\},\n\\]\nwhere $e_1=(1,0,0)$.\nWithout loss of generality we can assume that $y_0 = 0$.\nOur assumption now is that $|x_0+e_1|=1$ and $x_0\\not=0$.\n\n\nBy the maximum principle\n\\[\n0\\leq G_0^\\varepsilon(y,x) \\leq \\Gamma(y-x) , \\quad \\forall y\\in \\Omega_\\varepsilon\\setminus\\{x\\},\n\\]\nfor any $x\\in \\Omega_\\varepsilon$.\nLet $\\rho =\\frac{|x_0-y_0|}{4} >0$. Then there is $C$ such that\n\\[\n0\\leq G_0^\\varepsilon(y,x) \\leq C , \\quad \\forall y\\in \\Omega_\\varepsilon \\cap B_\\rho(0),\n\\]\nfor any $x$ in the segment $\\{ t x_0 + (1-t)(-e_1) : t\\in (a,1)\\}$.\n\nLet\n\\[\n\\tilde G^\\varepsilon(y') = G_0^\\varepsilon(\\varepsilon y',x)\n\\quad y' \\in \\frac{1}{\\varepsilon}( \\Omega_\\varepsilon \\cap B_\\rho(0)) .\n\\]\nThen $\\tilde G^\\varepsilon$ is harmonic and bounded in $\\frac{1}{\\varepsilon}( \\Omega_\\varepsilon \\cap B_\\rho(0)) $. By standard elliptic estimates, up to a subsequence, $\\tilde G^\\varepsilon \\to \\tilde G$, which is harmonic and bounded on the slab $S = \\{ (x_1,x_2,x_3) : -1<x_1<0 \\}$ and vanishes on $\\partial S$, since $G_0^\\varepsilon(\\cdot,x)$ vanishes on $\\partial\\Omega_\\varepsilon$ and, after the scaling, the two spheres bounding $\\Omega_\\varepsilon$ converge near the origin to the planes $x_1=0$ and $x_1=-1$. A bounded harmonic function on a slab which vanishes on the boundary is identically zero, so $\\tilde G \\equiv 0$, and \\eqref{convergenceGreen} follows.\n\nTo prove \\eqref{convergenceRobin} we use the monotonicity of the Green function with respect to the domain: since $\\Omega_a \\subset B_1(0)$ we have $G_0^a \\leq G_0^{B_1}$ in $\\Omega_a$ and hence $g_0^a(x) \\geq g_0^{B_1}(x) = \\frac{1}{\\omega_{2}(1-|x|^2)}$ for every $x\\in \\Omega_a$. Since $|x|>a$ in $\\Omega_a$, this gives $g_0^a(x) \\geq \\frac{c}{1-a}$ for some $c>0$. This implies that $\\min_{\\Omega_a} g_0^a \\geq \\frac{c}{1-a}\\to \\infty$ as $a\\to 1$.\n\\end{proof}\n\n\n\n\nTo prove Propositions~\\ref{propA1} and \\ref{propA2} we consider a configuration of points in the $xy$ plane at equal distance from the origin and spaced at uniform angles, that is,\n\\[\n\\zeta_j(r) = ( r e^{2 \\pi i \\frac{j-1}{k}} , 0 ) \\in \\R^3 , \\quad j=1,\\ldots, k,\n\\]\nwhere the notation we are using for $z\\in {\\mathbb C}$ and $t\\in \\R$ is $(z,t) = (Re(z),Im(z),t)$.\n\nWe then define the matrix $M_\\lambda$ restricted to this configuration as\n\\[\n\\tilde M_\\lambda(r) = M_\\lambda(\\zeta(r) ),\n\\]\nwhere $\\zeta(r) = (\\zeta_1(r) , \\ldots, \\zeta_k(r) )$.\nSimilarly we define\n\\[\n\\tilde \\psi_\\lambda(r) = \\psi_\\lambda( \\zeta(r)) ,\n\\]\nand denote by $\\tilde \\sigma_j(\\lambda,r)$ the eigenvalues of $\\tilde M_\\lambda(r)$, with $\\tilde \\sigma_1$ the smallest one.\n\n\n\\begin{proof}[Proof of Proposition~\\ref{propA1}]\nLet $k\\geq 2$ be given.\nBy Lemma~\\ref{lemma81}, if $1-a>0$ is small, we have\n\\begin{align}\n\\label{positiveSigma1Lambda0}\n& \\tilde \\sigma_j(0,r) >0 , \\quad \\forall r\\in (a,1), \\ j=1,\\ldots, k,\n\\\\\n\\label{positivePairLambda0}\n& g_0(\\zeta_1(r))^2 - G_0(\\zeta_1(r), \\zeta_j(r))^2>0\\quad \\forall r\\in (a,1), \\ j=2,\\ldots, k.\n\\end{align}\nNow we define\n\\begin{align}\n\\label{lambda0}\n\\lambda_0 = \\sup\\, \\{\\lambda \\in (0,\\lambda_1): \\tilde \\sigma_j(\\lambda',r) >0 \\quad \\forall r\\in (a,1), \\ j=1,\\ldots, k , \\ \\lambda'\\in (0,\\lambda) \\}.\n\\end{align}\nThen $\\lambda_0$ is well defined by continuity and \\eqref{positiveSigma1Lambda0}. 
We will need the following properties:\n\\begin{align}\n\\label{positivePair}\n& g_\\lambda(\\zeta_1(r))^2 - G_\\lambda(\\zeta_1(r), \\zeta_j(r))^2>0\\quad \\forall\n\\lambda \\in [0,\\lambda_0),\nr\\in (a,1), \\ j=2,\\ldots, k,\n\\\\\n\\label{lambdaLess}\n& \\lambda_0 <\\lambda_1,\n\\\\\n\\label{sigma1}\n& \\tilde \\sigma_1(\\lambda_0,r) \\geq 0 \\quad \\text{and there exists $r_0 \\in (a,1)$ such that $\\sigma_1(\\lambda_0,r_0)=0$},\n\\\\\n\\label{simpleEigenvalue}\n& \\tilde \\sigma_j(\\lambda_0,r) >0 \\quad \\text{for all $r\\in (a,1)$ and $ j=2,\\ldots, k$},\n\\\\\n\\label{sigma1Decreasing}\n& \\frac{ \\partial \\tilde \\sigma_1 }{\\partial \\lambda}(\\lambda_0,r)<0, \\quad \\forall r\\in (a,1).\n\\end{align}\nLet us prove \\eqref{positivePair}. If this fails, then for some $\\lambda \\in [0,\\lambda_0)$, some $r_0 \\in (a,1)$, and some $ j=2,\\ldots, k $, we have $g_\\lambda(\\zeta_1(r))^2 - G_\\lambda(\\zeta_1(r), \\zeta_j(r))^2 \\leq 0$.\nThis condition implies that the matrix $\\tilde M_\\lambda(r)$ has a nonpositive eigenvalue. This follows from the criterion that asserts that a symmetric matrix $A=(a_{i,j})_{1\\leq i,j\\leq k}$ is positive definite if and only if all submatrices $(a_{i,j})_{1\\leq i,j\\leq m}$ are positive definite for $ m=1,\\ldots, k$ (we apply this to $\\tilde M_\\lambda(r)$ after the permutation of the rows 2 and $j$, and the columns 2 and $j$).\nBut this contradicts the definition of $\\lambda_0$ \\eqref{lambda0}.\n\nLet us prove \\eqref{lambdaLess}. For this we recall that $\\min_{\\Omega_a} g_0>0$ and $\\min_{\\Omega_a} g_\\lambda \\to -\\infty$ as $\\lambda\\uparrow\\lambda_1$. Therefore there exists $r \\in (a,1) $ and $\\lambda \\in (0,\\lambda_1)$ such that $g_\\lambda(\\zeta_1(r)) = 0$.\nThis implies that $ g_\\lambda(\\zeta_1(r))^2 - G_\\lambda(\\zeta_1(r), \\zeta_j(r))^2<0$\nfor any $ j=2,\\ldots, k$.\nBy \\eqref{positivePair} this value of $\\lambda$ is greater or equal than $\\lambda_0$. It follows that $\\lambda_0<\\lambda_1$.\n\nSince $\\lambda_0<\\lambda_1$ by continuity we deduce the validity of \\eqref{sigma1}.\nWe also deduce from this and the way we have arranged the eigenvalues that $\\sigma_j(\\lambda_0,r)\\geq 0$ for all $ j=2,\\ldots, k$ and for all $r\\in (a,1)$.\n\nTo continue the proof of the stated properties we need a formula for the eigenvalues of a circulant matrix. 
We recall that a matrix $A$ of $k\\times k$ is circulant if it has the form\n\\[\nA =\n\\left[\n\\begin{matrix}\na_0 & a_{k-1} & a_{k-2} & \\ldots & a_{2} & a_{1}\n\\\\\na_1 & a_0 & a_{k-1} & \\ldots & a_{3} & a_{2}\n\\\\\na_2 & a_1 & a_0 & \\ldots & a_{4} & a_{3}\n\\\\\n\\vdots & \\vdots& \\vdots& &\\vdots & \\vdots\n\\\\\na_{k-1} & a_{k-2} & a_{k-3} & \\ldots & a_{1} & a_{0}\n\\end{matrix}\n\\right]\n\\]\nfor some complex numbers $a_0,\\ldots,a_{k-1}$.\n(This means each column is obtained from the previous one by a rotation in the components).\nWe note that the matrix $\\tilde M_\\lambda(r)$ has this structure with\n\\begin{align*}\na_0 &= g_\\lambda(\\zeta_1(r)),\n\\\\\na_j &= -G_\\lambda( \\zeta_1(r), \\zeta_{j+1}(r)), \\quad j=1,\\ldots, k-1 ,\n\\end{align*}\nsince $G_\\lambda (\\zeta_l(r), \\zeta_j(r) ) = G_\\lambda (\\zeta_{l+1}(r), \\zeta_{j+1}(r) ) $.\n\nIt is known that the eigenvalues $\\nu_l$ ($l=0,\\ldots, k-1$) of the circulant matrix $A$ are given by\n\\[\n\\nu_l = \\sum_{j=0}^{k-1} a_j e^{\\frac{2\\pi i}{k} j l } , \\quad l=0,\\ldots, k-1.\n\\]\nThese numbers coincide up to relabeling the indices with the numbers $\\tilde\\sigma_j(\\lambda,r)$.\nWe note that since $\\tilde M_\\lambda(r)$ is symmetric, the eigenvalues are real.\nWe claim that\n\\[\n\\nu_0 < \\nu_j \\quad j=2,\\ldots, k-1.\n\\]\nIndeed, since the $\\nu_l$ are real\n\\begin{align*}\n\\nu_l & = g_\\lambda(\\zeta_1(r)) - \\sum_{j=1}^{k-1} Re\\left[ G_\\lambda(\\zeta_1(r), \\zeta_{j+1}(r)) e^{\\frac{2\\pi i}{k} j l }\\right]\n\\\\\n& > g_\\lambda(\\zeta_1(r)) - \\sum_{j=1}^{k-1} G_\\lambda(\\zeta_1(r), \\zeta_{j+1}(r)) = \\nu_0,\n\\end{align*}\nwhere the strict inequality holds because there are point $e^{\\frac{2\\pi i}{k} j l }$ in the sum which are not colinear and $G_\\lambda(\\zeta_1(r), \\zeta_{j+1}(r)) >0$.\nThis proves \\eqref{simpleEigenvalue} and also that\n\\begin{align}\n\\label{formulaSigma1}\n\\tilde \\sigma_1(\\lambda,r) = g_\\lambda(\\zeta_1(r)) - \\sum_{j=1}^{k-1} G_\\lambda(\\zeta_1(r), \\zeta_{j+1}(r)) ,\n\\end{align}\nfor all $\\lambda \\in [0,\\lambda_0]$ because for this range of $\\lambda$ we know that the eigenvalues $\\tilde \\sigma_j$ are nonnegative.\nFrom this formula we obtain\n\\begin{align*}\n\\frac{\\partial \\tilde \\sigma_1}{\\partial \\lambda}(\\lambda,r)\n&=\n\\frac{\\partial g_\\lambda}{\\partial \\lambda}\n(\\zeta_1(r)) - \\sum_{j=1}^{k-1}\n\\frac{\\partial G_\\lambda }{\\partial \\lambda} (\\zeta_1(r), \\zeta_{j+1}(r)) < 0\n\\end{align*}\nfor $\\lambda \\in [0,\\lambda_0]$, which proves \\eqref{sigma1Decreasing}.\n\nLet us see that we are almost in a situation where Theorem~\\ref{thm1} can be applied.\nLet $r_0$ be the number found in property \\eqref{sigma1}.\nThe eigenvalue $\\tilde \\sigma_1(\\lambda_0,r_0)$ is zero and $\\tilde M_{\\lambda_0}(r_0)$ is positive semidefinite (assumption (i)), we have $D_\\zeta \\sigma_1(\\lambda_1,\\zeta(r_0))=0$ because $\\zeta(r_0)$ is a global minimum for $\\sigma_1(\\lambda_0,\\cdot)$. Condition (iv) follows from \\eqref{sigma1Decreasing}.\n\n\n\nThe only hypothesis in Theorem~\\ref{thm1} which has not been verified is the nondegeneracy of $\\zeta(r_0)$ as a critical point of $\\sigma_1(\\lambda_0,\\cdot)$. 
In fact this nondegeneracy does not hold because the problem is invariant about rotations about the $z$ (or $x_3$) axis.\nWe could impose a symmetry condition on the functions involved so that degeneracy by rotation is eliminated, but still we do not know whether we have nondegeneracy in the radial direction.\nInstead of this assumption, we will see that a slight modification of the argument in the proof of Theorem~\\ref{thm1} yields the desired conclusion. Basically, the nature of the critical point of $F_\\lambda$ in this case is stable with respect to $C^1$ perturbations.\n\nWe recall from Section~\\ref{secReduction} that to construct a solution it is sufficient to find a critical point of the function\n$\n\\bar J_\\lambda( \\sum_{j=1}^k V_j + \\phi)\n$\nand\n\\[\n\\bar J_\\lambda\\Bigl(\\sum_{j=1}^k V_j + \\phi\\Bigr)\n= J_\\lambda\\Bigl(\\sum_{j=1}^k U_j \\Bigr ) + o(\\varepsilon^{2})\n\\]\nwhere $o(\\varepsilon^{2})$ is in $C^1$ norm.\nTherefore it is enough to ensure that $J_\\lambda(\\sum_{j=1}^k U_j ) $ has a critical point that is stable under $C^1$ perturbations.\n\n\n\n\n\nIn the case when $\\Omega_a$ is an annulus, and $\\zeta_j(r) = (re^{2\\pi i\\frac{j-1}{k}},0)$ using that $g_\\lambda(\\zeta_j(t))$ only depends on $r$ and considering $\\mu = \\mu_1 = \\ldots = \\mu_k $, by Lemma~\\ref{lemmaEnergyExpansion} we have that\n\\[\nJ_\\lambda\\Bigl(\\sum_{j=1}^k U_j \\Bigr) = F_\\lambda(\\mu,r) + R_\\lambda(\\mu,r),\n\\]\nwhere\n\\[\nF_\\lambda(\\mu,r) = k a_0 + 2 a_1 \\mu f_\\lambda(r) + k a_2 \\lambda \\mu^2 - a_3 \\mu^2 f_\\lambda(r)^2\n\\]\nwith\n\\begin{align*}\nf_\\lambda(r) &= k g_\\lambda(\\zeta_1(r)) - k \\sum_{j=2}^k G_\\lambda(\\zeta_1(r), \\zeta_j(r))\n\\end{align*}\nand\n\\[\nR_\\lambda(\\mu,r) = O(\\mu^{3-\\sigma}).\n\\]\nfor some $\\sigma\\in(0,1)$.\n\nAs was observed previously, for $\\lambda \\in [0,\\lambda_0]$, $f_\\lambda(r)$ is precisely the eigenvalue $\\tilde \\sigma_1(\\lambda,r)$ (see \\eqref{formulaSigma1}).\nTherefore \\eqref{sigma1} gives $f_{\\lambda_0}(r)\\geq 0$ and then there exists $r_0\\in (a,1)$ such that $f_{\\lambda_0}(r_0)=0$.\n\nSince we have \\eqref{sigma1Decreasing} we deduce that for $\\lambda = \\lambda_0 +\\varepsilon$ with $\\varepsilon>0$ small enough and $r$ close to $r_0$,\nwe have $f_\\lambda(r)<0$ and so the equation\n\\[\n\\frac{\\partial}{\\partial \\mu}\nF_\\lambda(\\mu,\\zeta) = 0\n\\]\nhas a solution given explicitly by\n\\[\n\\mu_0(\\lambda,r) = \\frac{- a_1 f_\\lambda(r)}{k a_2 \\lambda - a_3 f_\\lambda(r)^2} > 0.\n\\]\nWe consider this expression only for $r$ in a neighborhood of $r_0$, so that $f_\\lambda(r) \\leq 0$.\nThen\n\\[\n\\frac{\\partial}{\\partial \\mu} ( F_\\lambda(\\mu,r) +R_\\lambda(\\mu,r) ) = 0\n\\]\nhas a solution $\\mu(\\lambda,r) $ close to $\\mu_0(\\lambda,r) $.\nNote that since $\\frac{\\partial}{\\partial \\mu} R_\\lambda(\\mu,r) = O(\\mu^{2-\\sigma})$, we have\n\\[\n|\\mu(\\lambda,r)- \\mu_0(\\lambda,r)|\\leq C | f_\\lambda(r)|^{2-\\sigma}.\n\\]\nReplacing $\\mu(\\lambda,r)$ in $F_\\lambda$ we find\n\\[\nF_\\lambda( \\mu(\\lambda,r), r ) + R_\\lambda( \\mu(\\lambda,r), r ) =\n-\\frac{a_1^2 f_\\lambda(r)^2}{k a_2 \\lambda-a_3 f_\\lambda(r)} + O( | f_\\lambda(r)|^{3-\\sigma} ).\n\\]\nFrom this formula, \\eqref{sigma1Decreasing} and the property\n\\[\nf_\\lambda(r) \\to \\infty \\quad \\text{ as \\, $r\\to a$ or $r\\to 1$},\n\\]\nwe get that $F_\\lambda( \\mu(\\lambda,r), r ) + R_\\lambda( \\mu(\\lambda,r), r ) $ has a critical point $r_\\lambda$ for which 
$f_\\lambda(r_\\lambda)<0$.\n\\end{proof}\n\n\n\n\n\n\n\\begin{proof}[Proof of Proposition~\\ref{propA2}]\nThe argument is the same as in Proposition~\\ref{propA1}, except that for this result we claim that properties \\eqref{positiveSigma1Lambda0} and \\eqref{positivePairLambda0} hold for $a\\in (\\frac{1}{49},1)$.\nIn the case $k=2$ both properties actually follow from the following claim: if $a \\in (\\frac{1}{49},1)$ then\n\\begin{align}\n\\label{g1}\ng_0(x) > G_0(x,-x) , \\quad \\forall x\\in \\Omega_a.\n\\end{align}\nTo prove this we use an explicit formula for the Green function in the annulus $ \\Omega_a$, which can be found in \\cite{grossi-vujadinoic}, to obtain that:\n\\[\ng_0(x) = \\frac{1}{\\omega_{2}}\n\\sum_{m=0}^\\infty\nP_m(x)\n\\quad \\text{and}\\quad\nG_0(x,-x) = \\frac{1}{\\omega_{2}}\\left[\n\\frac{1}{2\\vert x\\vert}-\n\\sum_{m=0}^\\infty\n(-1)^m P_m(x)\\right],\n\\]\nwhere\n\\[\nP_m(x):=\\frac{a^{2m+1}-2a^{2m+1} |x|^{2m+1} +|x|^{2(2m+1)} }{(2m+1) |x|^{2(m+1)}(1-a^{2m+1}) } .\n\\]\nNotice that $P_m(x)$ is nonnegative for all $m\\geq 0$, and therefore,\n\\begin{align*}\ng_0(x) - G_0(x,-x) &= \\frac{1}{\\omega_{2}}\n\\left[-\n\\frac{1}{2\\vert x\\vert}+\n\\sum_{m=0}^\\infty\n[1+(-1)^m] \\,P_m(x)\\right]\\\\\n&\\geq\n\\frac{1}{\\omega_{2}}\n\\left[-\n\\frac{1}{2\\vert x\\vert}+\n2P_0(x)\\right]\\qquad \\forall x\\in \\Omega_a.\n\\end{align*}\nA sufficient condition to have \\eqref{g1} is then\n\\[\n4 \\frac{a-2a|x|+|x|^2}{|x|^2(1-a)}> \\frac{1}{|x|}, \\quad \\forall x\\in\\Omega_a.\n\\]\nThis in turn holds if $a\\in(\\frac{1}{49},1)$.\n\\end{proof}\n\n\n\n\\bigskip \\noindent \\textbf{Acknowledgement.}\nThe research of M. Musso has been partly supported by FONDECYT Grant 1160135 and Millennium Nucleus Center for Analysis of PDE, NC130017.\nD. Salazar was partially funded by grant Hermes 35454 from Universidad Nacional de Colombia sede Medell\\'\\i n and\nMillennium Nucleus Center for Analysis of PDE, NC130017.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn recent years, multimode optical fibers (MMFs) are at the focus of numerous studies aiming at enhancing the capacity of optical communications and endoscopic imaging systems \\cite{Richardson2013, Ploschner2015}. \nIdeally, one would like to utilize the transverse modes of the fiber to deliver information via multiple channels, simultaneously. However, inter-modal interference and coupling between the guided modes of the fiber result in scrambling between channels. \nOne of the most promising approaches for unscrambling the transmitted information is by shaping the optical wavefront at the proximal end of the fiber in order to get a desired output at the distal end. \nDemonstrations include compensation of modal dispersion \\cite{Shen2005, Alon2014,Wen2016}, focusing at the distal end \\cite{DiLeonardo2011,Papadopoulos2012, Caravaca-Aguirre2013,Papadopoulos2013,BoonzajerFlaes2018}, and delivering images \\cite{Cizmar2012, Bianchi2012, Choi2012} or an orthogonal set of modes \\cite{Carpenter:14, Carpenter2015} through the fiber. \n\nTypically in wavefront shaping, the incident wavefront is controlled using spatial light modulators (SLMs), digital micromirror devices (DMDs) or nonlinear crystals.\nIn all cases, the shaped wavefront sets the superposition of guided modes that is coupled into the fiber. For a fixed transmission matrix (TM) of the fiber, this superposition determines the field at the output of the fiber, as depicted in Fig. \\ref{fig:matrix}(a)). 
Hence, in a fiber that supports $N$ guided modes, wavefront shaping provides at most $N$ complex degrees of control. \nHowever, many applications require the number of degrees of control to be larger than the number of modes. \nFor example, one of the key ingredients for spatial division multiplexing is mode converters, which require simultaneous control over the output field of multiple incident wavefronts.\nTo this end, complex multimode transformations were previously demonstrated by applying phase modulations at multiple planes \\cite{Morizur:10, Labroille:14, Fontaine2019, Fickler2019}. However, this requires free-space propagation between the modulators, thus limiting the stability of the system and increasing its footprint. \n\nIn this work we propose and demonstrate a new method for controlling light at the output of MMF, which does not rely on shaping the incident light and that can be implemented in an all-fiber configuration. \nInspired by the ongoing efforts to generate on-chip mode converters by manipulating modal interference in multimode interferometers \\cite{Piggott2015, Bruck2016, Harris:18}, we directly control the light propagation inside the fiber to manipulate its TM, allowing us to generate a desired field at its output (Fig. \\ref{fig:matrix}(b)). Since the TM is determined by $O\\left(N^{2}\\right)$ complex parameters, TM-shaping provides access to much more degrees of control than shaping the incident wavefront. \n\nTo control the fiber's TM, we apply computer controlled bends at multiple positions along the fiber. \nSince the stress induced by the bends changes the boundary conditions of the system, it modifies the TM such that different bends yield different speckle patterns at the distal end (Fig. \\ref{fig:matrix}(c)). \nWe can therefore obtain a desired field at the output of the fiber by imposing a set of controlled bends, without modifying the incident wavefront. \nSince in this approach the input field is fixed, it does not require an SLM or any other free-space component. Such an all-fiber configuration is especially attractive for MMF-based applications that require high throughput and an efficient control over the field at the output of the fiber. As a proof-of-concept demonstration of TM-shaping, we demonstrate focusing at the distal end of the fiber, and conversion between the fiber modes. \n\n\\begin{figure}[!tbh]\n\\begin{centering}\n\\includegraphics[width=\\columnwidth]{Figures\/fig1.png}\n\\par\\end{centering}\n\\caption{\\textbf{Shaping the transmission matrix of multimode optical fibers.}\n(a) The conventional method for wavefront shaping in complex media, performed e.g. by using an SLM and free space optics to tailor the incoming wavefront at the proximal end of the multimode fiber. (b) Proposed method for light modulation, in which the transmission matrix of the medium is altered, e.g. by performing perturbations on the fiber itself. (c) Illustration of the sensitivity of the output pattern on the fiber geometry. Three different configurations of the fiber (depicted by red, green and blue curves), correspond to three different speckle patterns at the output of the fiber. Since the input field coupled into the fiber is fixed, the different output patterns correspond to different transmission matrices of the fiber.}\n\\label{fig:matrix}\n\\end{figure}\n\n\n\n\\section{Experimental Techniques}\n\n\\subsection{Principle}\n\nOur method relies on applying controlled weak local bends along the fiber. 
To this end, we use an array of computer-controlled piezoelectric actuators to locally apply pressure on the fiber at multiple positions \\cite{Golubchik2015, regelman2016method}. \nThe TM of the fiber depends of the curvatures of the bends, which are determined by the travel of each actuator. \nTo obtain a target pattern at the distal end, we compare the intensity pattern recorded at the output of the fiber with a desired target pattern. Using an iterative algorithm, we search for the optimal configuration of the actuators, i.e. the optimal travel of each actuator, that maximizes the overlap of the output and target patterns. \n \n\\subsection{Experimental Setup}\nThe experimental setup is depicted in Fig. \\ref{fig:setup}. A HeNe laser (wavelength of $\\lambda=632.8 \\hspace{2pt} nm$) is coupled to an optical fiber, overfilling its core. We placed 37 piezoelectric actuators along the fiber. By applying a set of computer-controlled voltages to each actuator, we controlled the vertical displacement of the actuators. \nEach actuator bends the fiber by a three-point contact, creating a bell-shaped local deformation of the fiber, with a curvature that depends on the vertical travel of the actuator (see Figs. \\ref{fig:setup}(b,c)). \nFor the maximal curvature we applied ($R\\approx10 \\hspace{2pt} mm$), we measured an attenuation of few percent per actuator due to bending loss. The spacing between nearby actuators was set to be at least $3 \\hspace{2pt} cm$, which is larger than $\\frac{d^2}{\\lambda}$ for $d$ the core's diameter, such that the interference pattern inside the fiber between two adjacent actuators is uncorrelated. At the distal end, a CMOS camera records the intensity distribution of both the horizontally and vertically polarized light. \n\nWe used two types of multimode fibers: a fiber supporting few modes for demonstrating mode conversion, and a fiber supporting numerous modes for demonstrating focusing. For the focusing experiment, we used a 2 meter-long graded-index (GRIN) multimode optical fiber with numerical aperture (NA) of 0.275 and core diameter of $d_{MMF}=62.5 \\hspace{2pt} \\mu m$ (InfiCor OM1, Corning). The fiber supports approximately 900 transverse modes per polarization at $\\lambda=632.8 \\hspace{2pt} nm$ ($V\\approx85$), yet we used weak focusing at the fiber's input facet to excite only $\\approx 280$ modes.\nFor the experiments with the few mode fiber (FMF), we used a 5 meter-long step-index (SI) fiber, with an NA of 0.1 and core diameter of $d_{FMF}=10 \\hspace{2pt} \\mu m$ (FG010LDA, Thorlabs). In principle, at our wavelength the fiber supports 6 modes per polarization, ($V \\approx5$).\n\n\n\\begin{figure}[!tbh]\n\\begin{centering}\n\\includegraphics[width=\\columnwidth]{Figures\/fig2.png}\n\\par\\end{centering}\n\\caption{\\textbf{Experimental setup for controlling the transmission matrix of optical fibers.} \n(a) The laser beam is coupled into the optical fiber, which is fixed to a metal bar. 37 actuators are placed above the fiber, applying local vertical bends. The light that is emitted from the distal facet of the fiber travels through a polarizing beamsplitter, and both horizontal and vertical polarizations are recorded by a CMOS camera. (b) Top view of five actuators, bending the fiber from above. (c) The fiber is pressed by two pins that are attached to each actuator, and one pin which is placed below it, creating a three-point contact. 
A computer-controlled voltage that is applied on each actuator sets its travel and defines the curvature of the local deformation it poses on the fiber. L, lens; M, mirror; PBS, polarizing beamsplitter; CMOS, camera.}\n\\label{fig:setup}\n\\end{figure}\n\n\n\\subsection{Optimization Process}\n\nThe curvature of the bends, set by the travel of each actuator, modifies how light propagates through the fiber and thus determines the speckle pattern that is received at the distal end. \nWe can therefore define an optimization problem of finding the voltages that should be applied to the actuators, to receive a given target pattern at the output of the fiber. \nThe distance between the target and each measured pattern is quantified by a cost function, which the algorithm iteratively attempts to minimize. \n\nFor $M$ actuators, the solution space is an $M$-dimensional sub-space, defined by the voltages range and the algorithm's step intervals, and can be searched using an optimization algorithm. While the optical system is linear in the optical field, the response of the actuators, i.e. the modulation they pose on the complex light field, is not linear in the voltages.\n\nMoreover, since a change in the curvature of an actuator at one point along the fiber affects the interference pattern at all of the following actuators positions, the actuators cannot be regarded as independent degrees of control. \nSimilar nonlinear dependence between degrees of control is obtained, for example, for wave control in chaotic microwave cavities \\cite{Geoffroy2018}. \nOut of the wide range of iterative optimization algorithms that can efficiently find a solution to such nonlinear optimization problems, we chose to use Particle Swarm Optimization (PSO) \\cite{PSO}, as on average it achieved the best results out of the algorithms we tested (See the Supplementary Material for more details regarding the use of PSO). \n\n\n\n\\section{Results}\n\\subsection{Focusing at the Distal End of the Fiber}\n\nTo illustrate the concept of shaping the intensity patterns at the output of the fiber by controlling its TM, we first demonstrate focusing the light to a sharp spot at the distal end of the fiber. We excite a subset of the fiber modes by weakly focusing the input light on the proximal end of the fiber. Due to inter-modal interference and mode mixing, at the output of the fiber the modes interfere in a random manner, exhibiting a fully developed speckle pattern (Fig. \\ref{fig:MMF}(a)). Based on the number of speckle grains in the output pattern, we estimate that we excite the first 280 guided fiber modes.\n\nTo focus the light to some region of interest (ROI) in the recorded image, we run the optimization algorithm to enhance the total intensity at the target area. We define the enhancement factor $\\eta$ by the total intensity in the ROI after the optimization, divided by the ensemble average of the total intensity in the ROI before the optimization. The ensemble average is computed by averaging the output intensity over random configurations of the actuators, and applying an additional azimuthal integration to improve the averaging.\n\nWe start by choosing an arbitrary spot in the output speckle pattern of one of the polarizations. \nWe define a small ROI surrounding the chosen position, in an area that is roughly the area of a single speckle grain, and run the optimization scheme to maximize the total intensity of that area.\nFig. 
\\ref{fig:MMF} depicts the output speckle pattern of the horizontal polarization before (Fig. \\ref{fig:MMF}(a)) and after (Fig. \\ref{fig:MMF}(b) the optimization, using all 37 actuators. The enhanced speckle grain is clearly visible and has a much higher intensity than its surroundings, corresponding to an enhancement factor of $\\eta=25$. \n\nWe repeat the focusing experiment described above with a varying number of actuators $M$. When a subset of actuators is used, the remaining are left idle throughout the optimization. Fig. \\ref{fig:MMF}(d) summarizes the results of this set of experiments, showing the obtained enhancement factor $\\eta$ grows linearly with the number of active actuators $M$. \nIt is instructive to compare this linear scaling with the well-known results for focusing light through random media using SLMs or DMDs.\nVellekoop and Mosk have shown that when the number of degrees of control (i.e. independent SLM or DMD pixels) is small compared to the effective number of transverse modes of the sample, the enhancement scales linearly with the number of degrees of control. The slope of the linear scaling $\\alpha$ depends on the speckle statistics and on the modulation mode \\cite{Vellekoop2007,Vellekoop2008,Vellekoop:15}. For Rayleigh speckle statistics, as in our system (see Supplementary Material), the slopes predicted by theory are $\\alpha=1$ for perfect amplitude and phase modulation, $\\alpha=\\frac{\\pi}{4}\\approx0.78$ for phase-only modulation \\cite{Vellekoop:15}. Experimentally measured slopes, however, are typically smaller, mainly due to technical limitations such as finite persistence time of the system, unequal contribution of the degrees of control, and statistical dependence between them. \nInterestingly, we measure a slope of $\\alpha\\approx 0.71$, which is close to the theoretical value for phase-only modulation for Rayleigh speckles, and higher than typical experimentally measured slopes (e.g. $\\alpha\\approx 0.57$ in \\cite{VellekoopPhdThesis}). Naively, one could expect a lower slope in our system, since as discussed above, in our configuration the degrees of control are not independent. The large slope values we obtain may indicate that the bends change not only the relative phases between the guided modes (corresponding to phase modulation), but also their relative amplitudes (corresponding to amplitude modulation), via mode-mixing and polarization rotation.\n\nTo further study the linear scaling, we performed a set of numerical simulations. We used a simplified scalar model for the light propagating in a GRIN fiber, in which the fiber is composed of multiple sections, where each section is made of a curved and a straight segment. The curved segments simulate the bend induced by an actuator, and the straight segments simulate the propagation between actuators (see Supplementary Material for more details). As in the experiment, we use the PSO algorithm to focus the light at the distal end of the fiber. The numerical results exhibit a clear linear scaling, with slopes in the range of 0.57-0.64 (see Fig. S3 in Supplementary Material). Simulations for fibers supporting $N=280$ modes, roughly the number of modes we excite in our experiment, exhibit a slope of $\\alpha\\approx0.64$, slightly lower than the the experimentally measured slope. \n\n\\begin{figure}[!tbh]\n\\begin{centering}\n\\includegraphics[width=\\columnwidth]{Figures\/fig3.png}\n\\par\\end{centering}\n\\caption{\\textbf{Focusing at the output of a multimode fiber}. 
(a) Image of the speckle pattern at the output of the fiber, before the optimization process. \n(b, c) The output intensity pattern, after optimizing the travel of the 37 actuators to focus the light to a single target (b), and two foci simultaneously (c). (d) The average enhancement as a function of the number of active actuators. Each data point (blue circles) was obtained by averaging the enhancement over several experiments, where the error bars indicate the standard error.\nA linear fit yields a slope of $\\alpha\\approx0.71$, which is close to the theoretical slope for phase-only modulation. The fit intersects the $\\hat{y}$ axis at $M_{0}\\approx1.5$, matching our observation that about $4-5$ actuators are required to overcome the inherent noise of the system. Numerical simulations for a GRIN fiber with $NA=0.275$ and $a=17.1 \\hspace{2pt} \\mu m$ (red circles), exhibit a linear scaling with a slope of $\\alpha\\approx 0.64$. The slope increases with the number of guided modes assumed in the simulation. Here we chose the number of modes ($M=280$) according to the number of excited modes in the experiment. \n}\n\\label{fig:MMF}\n\\end{figure}\n\nAs in experiments with SLMs, focusing is not limited to a single spot. To illustrate this, we used the optimization algorithm to simultaneously maximize the intensity at two target areas. Fig. \\ref{fig:MMF}(c) shows a typical result, exhibiting an enhancement which is half of the enhancement obtained when focusing to a single spot, as expected by theory \\cite{Vellekoop2008}. In principle, it is possible to focus the light to an arbitrary number of spots, yet in practice we are limited by the number of available actuators. \n\n\\subsection{Mode Conversion in a Few Mode Fiber}\n\nIn the previous section, we demonstrate the possibility to use our system as an all-fiber SLM, i.e. to shape an output complex field by modifying the relative complex weight of the propagating modes. In the following, we show that we can go further by studying the feasibility of TM-shaping to tailor the output patterns in the few-mode regime, where the number of fiber modes is comparable with the number of actuators. Specifically, we are interested in converting an arbitrary superposition of guided modes to one of the linearly-polarized (LP) modes supported by the fiber. To this end, we utilize the PSO optimization algorithm to find the configuration of actuators that maximizes the overlap between the output intensity pattern and the desired LP mode.\nThe target LP modes of the step-index fiber were computed numerically for the parameters of our fiber, and scaled to match the transverse dimensions of the fiber image. \nFig. \\ref{fig:FMF} presents a few examples of conversions between LP modes using 33 and 12 actuators. A mixture of $LP_{01}$ and $LP_{11}$ at two different polarizations can be converted to $LP_{11}$ in one polarization (Fig. \\ref{fig:FMF}(a)). Alternatively, a horizontally polarized $LP_{11}$ mode can be converted to a superposition of horizontally $LP_{01}$ and a vertically polarized $LP_{11}$ (Fig. \\ref{fig:FMF}(b)). The Pearson correlation between the target and final patterns in these examples is $0.93$. Similar results are obtained when we run the optimization with fewer active actuators, with a negligible reduction in the correlation between the target and final pattern. For example, with 12 actuators we observe correlations of $0.90$ for the conversion presented in Fig. \\ref{fig:FMF}(c). 
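For the reader who wishes to reproduce this kind of optimization, the cost evaluation that drives the search can be summarized by the following minimal sketch (illustrative only: \\texttt{set\\_voltages} and \\texttt{grab\\_frame} are generic placeholders for the piezo-driver and camera interfaces, and any particle-swarm or similar global optimizer can be used to minimize the returned value):\n\\begin{verbatim}\nimport numpy as np\n\ndef cost(voltages, target, set_voltages, grab_frame):\n    # Apply one candidate actuator configuration and record the output.\n    set_voltages(voltages)              # placeholder: piezo driver\n    frame = grab_frame().astype(float)  # placeholder: camera frame (2D array)\n    frame = frame \/ frame.sum()          # normalize total detected power\n    target = target \/ target.sum()\n    return np.abs(frame - target).sum() # l1 distance minimized by the PSO\n\ndef pearson(frame, target):\n    # Correlation coefficient quoted as the figure of merit for conversion.\n    a = frame.ravel() - frame.mean()\n    b = target.ravel() - target.mean()\n    return float(a @ b \/ np.sqrt((a @ a) * (b @ b)))\n\\end{verbatim}\nIn the focusing experiments the same loop is used, with the cost replaced by minus the total intensity recorded in the region of interest.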
Optimization with fewer than 12 actuators shows poorer performance, as the number of actuators becomes comparable with the number of guided modes. \n\n\n\\begin{figure}[!tbh]\n\\begin{centering}\n\\includegraphics[width=0.8\\columnwidth]{Figures\/fig4.png}\n\\par\\end{centering}\n\\caption{\\textbf{Conversion between transverse fiber modes.} \nIntensity patterns recorded at the output of the fiber, before (left column) and after (middle column) the optimization, exhibiting conversion between the $LP$ fiber modes at orthogonal polarizations. The PSO algorithm iteratively minimizes the $\\ell_{1}$ distance between the measured pattern and the target mask (right column). (a) and (b) are obtained using 33 actuators (with Pearson correlation of $0.94$ and $0.92$ respectively), (c) is obtained with 12 actuators (with correlation of $0.90$).}\n\\label{fig:FMF}\n\\end{figure}\n\n\n\n\\section{Discussion}\nControlling the transmission matrix of a multimode fiber, rather than the wavefront that is coupled to it, opens the door to unprecedented control over the light at the output of the fiber. Since the number of degrees of control, the number of actuators in our implementation, is not limited by the number of fiber modes $N$, it can allow simultaneous control for orthogonal inputs and\/or spectral components. \nIn fact, if $O(N^2)$ degrees of control are available, one can expect to generate arbitrary $N \\times N$ transformations between the input and output modes. Over the past two decades there has been an ever-growing interest in realizing reconfigurable multimode transformations, for a wide range of applications, such as quantum photonic circuits \\cite{Reck:1994, Carolan2015, Taballione2018, Harris:18, Gigan2019}, optical communications \\cite{Miller2015, Fontaine2019}, and nanophotonic processors \\cite{Piggott2015, Annoni2017}. \nThese realizations require strong mixing of the input modes, as the output modes are arbitrary superpositions of the input modes. The mixing can be achieved, for example, by diffraction in free-space propagation between carefully designed phase plates \\cite{Morizur:10, Labroille:14, Fontaine2019, Fickler2019}, a mesh of Mach-Zehnder interferometers with integrated modulators \\cite{Harris:18}, engineered scattering elements in multimode interferometers \\cite{Piggott2015, Bruck2016}, or scattering by complex media \\cite{Geoffroy2018, Maxime:19}. In our implementation, we rely on the natural mode mixing and inter-modal interference in multimode fibers, allowing implementation using standard commercially available fibers. \n\nThe main limitation of our current proof-of-concept is the achievable modulation rate, which is set by the piezo-based implementation. The response time of the system to abrupt changes of the piezos is approximately $30 \\hspace{2pt} ms$ (see Supplementary Material), allowing in principle for modulation rates as high as 30 Hz. In practice, our system works at slower rates ($\\approx$5 Hz), mainly due to the latency of the piezoelectric actuators and the camera. The total optimization time for the focusing experiments is $50$ minutes, and $12-15$ minutes for the mode conversion experiments.\nFaster electronics and development of a stiffer and more efficient bending mechanism will allow higher modulation rates, limited by the resonance frequency of the piezo benders ($\\approx$ 300-500 Hz). To achieve even faster rates, a different technology should be used for applying perturbations to the fibers, e.g.
utilizing all-fiber acousto-optical modulators \\cite{acousto-optic-book} or the 'smart fibers' technology with integrated modulators \\cite{Fink12}. Optical fibers with built-in modulators can also be utilized for a scalable implementation of our method. \n\n\n\n\\section{Conclusions and Outlook}\n\nIn this work we proposed a novel technique for controlling light in multimode optical fibres, by modulating its TM using controlled perturbations. We presented proof-of-principle demonstrations of focusing light at the distal end of the fiber, and conversion between guided modes, without utilizing any free-space components. \nSince our approach to modulate the TM of the fiber is general and not limited to mechanical perturbations, it could be directly transferred to other types of actuators, e.g. in-fiber electro-optical or acousto-optical modulators, to achieve all-fiber, loss-less, fast, and scalable implementations. \nThe all-fiber configuration and the possibility to control more degrees of freedom than the number of guided modes, makes our method attractive for fiber-based applications that require control over multiple inputs and\/or wavelengths. \nMoreover, the possibility to achieve high dimension complex operations opens the way to the implementation of optical neural networks. \nOur system can provide an important building block for linear reconfigurable transformations, which can be further used in combination with fibers and lasers that exhibit strong gain and\/or nonlinearity for deep learning applications.\n\n\n\\section*{Funding Information}\n\nThis research was supported by the Zuckerman STEM Leadership Program, the ISRAEL SCIENCE FOUNDATION (grant No. 1268\/16), the Ministry of Science \\& Technology, Israel and the French \\textit{Agence Nationale pour la Recherche} (grant No. ANR-16-CE25-0008-01 MOLOTOF), and Laboratoire international associ\u00e9 Imaginano.\n\n\\section*{Acknowledgments}\nWe thank Daniel Golubchik and Yehonatan Segev for invaluable help. \n\n\\section*{Disclosures}\nThe authors declare no conflicts of interest.\n\n\\printbibliography\n\n\\renewcommand{\\thefigure}{S\\arabic{figure}}\n\\setcounter{figure}{0}\n\\newpage\n\\section{Supplementary Material}\n\n\\subsection{Typical Time scales of the Optical System}\n\n\\subsubsection{Response Time}\n \nTo measure the typical response time of the system, we introduced abrupt changes to the voltages applied to a subset of the piezoelectric actuators, and recorded the speckle pattern obtained at the distal end of the fiber.\nWe then calculated the 2D Pearson correlation coefficient between each of the captured frames and the first frame. The measurements were repeated using different subsets of piezos. Examples of a few of these measurements, for subsets that include between one and four actuators, are shown in Fig. \\ref{fig:response}(a). The abrupt voltage change causes a fast change to the recorded speckle pattern, yielding a sharp decrease in the computed correlation coefficient. As expected, the bigger the subset of the piezos, the stronger the correlation drop. This sharp decrease is the result of the change in the actuators configuration (the bend they pose), and manifests in a change to the captured speckle pattern. \nOnce the actuator's position stabilized, the correlation stabilized on a lower value. 
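For reference, the following minimal sketch illustrates how such a response time can be extracted from the recorded frames; the frame stack, the frame rate, and the settling tolerance are placeholder inputs, and the settling criterion shown is only one simple choice.\n\\begin{verbatim}\nimport numpy as np\n\ndef corr2(a, b):\n    # 2D Pearson correlation coefficient between two frames\n    a = a.ravel() - a.mean()\n    b = b.ravel() - b.mean()\n    return float(a @ b \/ (np.linalg.norm(a) * np.linalg.norm(b)))\n\ndef response_time(frames, frame_rate_hz, tol=0.01):\n    # frames: stack of camera images recorded around an abrupt piezo change\n    corr = np.array([corr2(f, frames[0]) for f in frames])\n    drop = int(np.argmax(corr < 1.0 - tol))     # first frame that differs from the start\n    plateau = corr[-1]                          # value the trace settles on\n    settled = drop + int(np.argmax(np.abs(corr[drop:] - plateau) < tol))\n    return (settled - drop) \/ frame_rate_hz     # response time in seconds\n\\end{verbatim}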
To ensure that the patterns with lower correlation with regard to the first frame are correlated with one another (thus ensuring that the plateau is not a result of the statistical properties of speckles), we also calculated the 2D correlation coefficient of each frame from the last acquired frame. These results are shown in Fig. \\ref{fig:response}(b) for the same groups of actuators. The high correlation after the configuration change indeed verifies that the speckle pattern did not change further.\nBased on such measurements we were able to estimate the response time of the system at $30 \\hspace{2pt} ms$, which corresponds to modulation rates of 33 Hz.\n\n\\begin{figure*}[htb]\n\\begin{centering}\n\\includegraphics[width=0.7\\linewidth]{Figures\/sup_fig1.png}\n\\par\\end{centering}\n\\caption{\\label{fig:S1}\\textbf{Response time of the experimental system.}\n The 2D correlation coefficient of each frame with (a) the first and (b) the last of the acquired frames, when a configuration of actuators (the voltage which is applied on these piezos) is changed. Blue lines show a change of configuration of a single actuator, red of two actuators, yellow of three and purple of four.} \n\\label{fig:response}\n\\end{figure*}\n\n\n\\subsubsection{Decorrelation Time}\nTo estimate the stability of the system, we calculated the 2D correlation coefficient of the speckle pattern at the distal end of the fiber over time when the system is idle, i.e. no changes are performed to the states of the actuators. This loss of correlation is known to be attributed to the sensitivity of bare optical fibers to thermal fluctuations in the room and changes of pressure due to air flow. \nWith the GRIN MMF, we found that the system remained highly correlated ($corr \\geq 0.99$) for $\\simeq10$ minutes. The correlation decreased slowly and linearly for 55 minutes, reaching $corr=0.976$. The correlation then decreased faster, reaching $corr=0.883$ after two hours. With the SI FMF, the system remained stable and highly correlated ($corr \\geq 0.996$) over the course of 15 hours.\n\n\\subsection{Rayleigh Statistics}\nThe slopes of the linear scaling of the focusing enhancement factor $\\eta$ as a function of the number of degrees of control rely on the intensity statistics of the generated speckle patterns. The theoretical values reported in the main text are derived for Rayleigh intensity statistics [1]. It is therefore important to compare the intensity statistics of the speckle patterns we obtain in our system with the predictions of Rayleigh statistics. Such a comparison is depicted in \\ref{fig:rayleigh}, which shows excellent agreement with theory. \n\n\\begin{figure}[htb]\n\\begin{centering}\n\\includegraphics[width=\\columnwidth]{Figures\/sup_fig2.png}\n\\par\\end{centering}\n\\caption{\\textbf{Intensity distribution of the speckle patterns at the end of the fiber.}\n The natural logarithm of the probability distribution function (PDF) of the speckle patterns intensities, as a function of normalized pixel intensity (dots), and a linear fit (line). The data points correspond to experimental readouts, without background noise subtraction.}\n\\label{fig:rayleigh}\n\\end{figure}\n\n\n\\subsection{Optimization Technique}\nAs described in the main text, the results were obtained by finding solutions to optimization problems. 
These problems used a feedback loop: at each iteration, the speckle pattern at the distal end of the fiber was recorded using the CMOS camera.\nThis pattern was evaluated according to its similarity to a target pattern, and this score was given to the optimization algorithm as a cost, which it tried to minimize by changing the configuration of bends applied to the fiber segments. Lower costs were obtained for bend configurations which yielded patterns with high similarity to the target.\n\nThe optimization algorithm we chose to use is the Particle Swarm Optimization (PSO), a population-based stochastic search algorithm. It randomly initializes a population of points (referred to as particles) in an \\textit{M}-dimensional search space, representing the voltages which are assigned to the \\textit{M} actuators. These positions are iteratively improved according to their local and global memory from previous iterations. Its stochastic nature helps avoid local extrema in non-convex problems.\nAn open-source implementation of PSO [2] was modified to fit our experimental setup and simulation. We defined a single run as a single instance of the optimization process, i.e. achieving a single optimized target speckle pattern, such as the example shown in Fig. 3(b) in the main text. With the GRIN MMF, each such run consisted of 80 iterations, with the following hyper-parameters: population size of 120, inertia weight of $w=1$, inertia damping ratio of $w_{damp}=0.99$, personal learning coefficient of $c_1=1.5$, global learning coefficient of $c_2=2$. With the SI FMF, each run used 86-108 iterations, with a population of size 50. The values of the other hyper-parameters were not changed.\n\n\n\\subsection{Simulation}\n\nSince our system is linear in the optical field, it is natural to describe the propagation of light in it with matrix formalism. We divided the fiber into multiple segments, calculated the transmission matrix (TM) of each segment and computed the total TM of the fiber by multiplying them. To represent our experimental system, we composed bent segments (which mimic the effect of actuators) and straight segments (for the propagation between actuators). A bent segment was approximated by a circular arc, with a defined curvature. To find the guided modes and propagation constants of different segments, we used a numerical module [3] which solves the scalar Helmholtz equation under the weakly guiding approximation [4]. We used 10 radii of curvature, to simulate 10 different vertical positions of the actuators, which impose 10 different perturbations. These radii were linearly spaced between a maximal and a minimal value, which we estimated from the experimental system. \n\nMode-mixing in short GRIN fibers mostly occurs within groups of degenerate modes. To mimic this phenomenon, we introduced unitary block matrices, whose block sizes were determined according to the mode degeneracy, as expressed in the propagation constants, allowing mixing between modes with the same propagation constants. It is noteworthy that without introducing this feature, we were unable to achieve focusing. \n\nWe used the same discrete set of possible curvatures for all of the actuators in all runs, and the same optimization mechanism as the experimental setup to achieve a focus. The optimization process assigned one of the possible curvatures to each of the bent fiber segments.
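To make the matrix model described above concrete, a deliberately simplified Python sketch of the TM composition is given below: the total TM is built from alternating bent and straight segments, with fixed block-unitary matrices mixing modes inside each degenerate group. The propagation constants, the group sizes, and the mixing matrices are placeholder inputs here; in the actual simulation they follow from the mode solver and the estimated curvatures.\n\\begin{verbatim}\nimport numpy as np\n\ndef block_diag_unitary(group_sizes, rng):\n    # random unitary that mixes modes only within each degenerate group\n    n = sum(group_sizes)\n    u = np.zeros((n, n), dtype=complex)\n    i = 0\n    for g in group_sizes:\n        q, _ = np.linalg.qr(rng.normal(size=(g, g)) + 1j * rng.normal(size=(g, g)))\n        u[i:i + g, i:i + g] = q\n        i += g\n    return u\n\ndef total_tm(beta_straight, beta_bent, choices, mix_mats, seg_len):\n    # beta_straight: propagation constants of the straight fiber (1D array)\n    # beta_bent[c]:  propagation constants for the c-th discrete curvature\n    # choices:       one curvature index per actuator (the optimization variables)\n    # mix_mats:      fixed block-diagonal unitaries, one per section\n    tm = np.eye(len(beta_straight), dtype=complex)\n    for c, u in zip(choices, mix_mats):\n        tm = np.diag(np.exp(1j * beta_bent[c] * seg_len)) @ tm      # bent part\n        tm = u @ tm                                                 # intra-group mixing\n        tm = np.diag(np.exp(1j * beta_straight * seg_len)) @ tm     # straight part\n    return tm\n\n# focusing enhancement at output mode t for an input field e_in:\n#   out = total_tm(...) @ e_in\n#   eta = abs(out[t])**2 \/ np.mean(abs(out)**2)\n\\end{verbatim}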
In runs where not all of the actuators where used, the remaining were set as straight segments (with no curvature) to maintain the same propagation distance in all runs.\nFig. \\ref{fig:simulation} shows the enhancement factor $\\eta$ as a function of the number of actuators whose curvatures were optimized for simulated fibers. It is noticeable that the enhancement factor scales linearly with the number of simulated actuators, where the slope ranges between 0.57-0.64 for the displayed fiber parameters, at a wavelength of $\\lambda=632.8 \\hspace{2pt} nm$.\n\n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics[width=\\linewidth]{Figures\/sup_fig3.png}\n\\par\\end{centering}\n\\caption{\\textbf{Simulation of focusing in a multimode optical fiber.}\nThe average enhancement factor that was achieved (circles) and the standard error (bars), as a function of the number of actuators whose curvatures where modified as part of the optimization process in the simulation. Several results obtained for different fiber parameters are shown in different colors, along with a linear curve (gray dashed line).}\n\\label{fig:simulation}\n\\end{figure}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\\label{sec:introduction}\nMentorship is fundamental in many professions\\cite{scandura1992mentorship,de2003mentor,payne2005longitudinal,janosov2020elites}. In science, successful mentoring is crucial not only for a mentee's growth and success, but also for the career advancement of the mentor\\cite{kram1988mentoring,lee2007nature,bhattacharjee2007nsf}. In mentoring relationships, mentees learn scientific values, skills, and build their scientific network\\cite{green1991professional}. Also, mentorship has been shown to play a prominent role on a researcher's first job placement\\cite{enders2005border,hilmer2007dissertation,wright2008mentoring,clauset2015systematic,way2019productivity}. At the same time, mentors obtain benefits from training mentees, like higher productivity, job satisfaction, and a broader social network in the long run\\cite{allen2004career,astrove2017mentors}. A mentor can have multiple mentees over a career, and their number and success can improve the mentor's institutional recognition\\cite{rossi2017genealogical,semenov2020network}. Yet, despite the important role of mentees and of junior researchers in the scientific ecosystem, we witness a large fraction of early-stage researchers exiting academia at an alarming rate\\cite{roach2010taste,petersen2012persistence,moss2012science,ghaffarzadegan2015note,milojevic2018changing,xing2019strong,woolston2019phds,huang2020historical,levine2020covid,davis2022pandemic}, and we still have a limited quantitative understanding of the impact of mentors and their research group on the survival rate and career evolution. 
Given also the increasing reports of unhealthy working environments experienced by graduate students and early career researchers in academia\\cite{levecque2017work, guthrie2018understanding, woolston2019phds, gonzalez2020risk, murguia2022navigating}, it is of fundamental importance to understand which kind of mentorship minimizes dropout rate, supports junior researchers' well-being, and enables talent diffusion.\n\nThe success of a mentor-mentee relationship is characterized by the complex interaction of different factors, like institutional environment, country of origin of the mentor and mentee, or funding for PhD research\\cite{sugimoto2011academic,baruffaldi2016productivity,brostrom2019academic,way2019productivity}. Previous research on mentorship has been primarily based on anecdotal studies and self-report surveys, and supports the hypothesis that both mentees and mentors benefit from the mentoring relationship\\cite{lewis1992carl,payne2005longitudinal}. Most recently, some large-scale quantitative studies provided a quantitative understanding of the interplay between mentor and mentee performance \\cite{malmgren2010role, liu2018understanding, fortunato2018science}. For example, in Mathematics a mentor's fecundity, that is the number of mentees that a mentor supervises, is positively correlated with the number of the mentor's publications, and mathematicians have an academic fecundity similar to that of their mentors\\cite{malmgren2010role}. Mentees in STEM fields not only learn technical skills and traditional knowledge\\cite{liu2018understanding} but also inherit hidden capabilities displaying a higher propensity for producing prize-winning research, becoming a member of the National Academy of Science, and achieving ``superstardom''\\cite{ma2020mentorship}. Researchers have a higher probability of continuing in academia if they can better synthesize intelligence between their graduate and postdoctoral mentors\\cite{lienard2018intellectual,wuestman2020genealogical}. Moreover, graduate mentors are less instrumental to their mentees' survival and fecundity than postdoctoral mentors\\cite{lienard2018intellectual}. However, mentees who show independence from the mentor's research topics after graduation have a higher tendency to be part of the academic elite\\cite{ma2020mentorship}. Early-career investigators who coauthor with high impact scientists in the early stage have a long-lasting competitive advantage over those who do not have these collaboration opportunities\\cite{li2019early}. Mentorship is also connected to the chaperone effect in scientific publishing\\cite{sekara2018chaperone}: publishing with a senior mentor in a journal is crucial to become corresponding author in a later publication in the same journal, and this effect is particularly pronounced for high-impact publishing venues. \n\nThese prior works have well demonstrated the positive association between the success of mentors and mentees. However, they have a major limitation: they mainly focus on the career success of those surviving in academia. As such, they are affected by survival bias and fail to capture why a mentee does not continue their academic career. Indeed, in a mentorship relation, a mentee can benefit from a mentor's broad vision and valuable research experience, especially when working with academically successful mentors. 
However, the mentee may face a strong competition for the mentor's limited time, since successful mentors tend to supervise more mentees, work with more collaborators\\cite{johnson2002toward}, do more academic service, like peer reviewing or covering scientific editorial roles\\cite{ma2020mentorship}, and manage scientific groups of large size \\cite{luckhaupt2005mentorship,malmgren2010role,brostrom2019academic}. Therefore, mentees in large groups have to compete for the mentor's attention and have on average less chances of interactions with the mentor than mentees in small groups, entailing potential risks for the mentees' career evolution. \n\nGiven this premise, here we ask a fundamental question: What are the advantages and disadvantages of working with successful mentors, especially in relation to their scientific group size? To address this question, we construct a dataset combining mentor-mentee relations and their academic records. This dataset can capture their academic proliferation and publication performance and can be used to explore the potential drivers of mentee success in academia\\cite{ke2022dataset,david2012neurotree,sinha2015overview,wang2019review}. Most importantly, we can perform a survival analysis accounting for survivor bias, and understand not only the factors associated with success, but also those that lead to dropout. We further apply a coarsened exact matching regression model to uncover the causal relationship between mentees' group size and academic performance, which rules out potential confounding factors and uncovers alternative predictors\\cite{iacus2009cem,iacus2012causal}. \n\n\\section*{Results}\\label{sec:result}\n\\subsection*{Data and data curation.}\nOur analysis is based on two distinct data sets. The first one is curated from the Academic Family Tree (AFT, Supplementary S1.1), an online website (\\url{Academictree.org}) for collecting mentor-mentee relationship in a crowd-sourced fashion. AFT initially focused on Neuroscience and expanded later to span more than 50 disciplines. The second data set is the Microsoft Academic Graph (MAG, \\url{https:\/\/aka.ms\/msracad}, Supplementary S1.2), a bibliographic database containing entities about authors, doctypes (journals, conferences, etc.), affiliations, and citations. One advantage of MAG over other publication databases is that all entities have been disambiguated and associated with identifiers. These two data sets have been connected by matching the same scientists in each data set, and this matching has been validated with extensive and strict procedures\\cite{ke2022dataset}. From this combined dataset, we extract the genealogical data of 309,654 scientists who published 9,248,726 papers in Chemistry, Neuroscience, and Physics between 1900 and 2021 (Methods and Supplementary Note 1 for data curation).\n\n\\begin{figure}[!bt]\n \\centering\n \\includegraphics[width=1\\textwidth]{figs\/fig1.png}\n \\caption{\\textbf{Illustration of two academic family networks and mentor-mentee collaboration networks.} \\textbf{a.} Genealogy network of a mentee. The network is built around a focal mentee $p_1$ (the large blue circle). The purple node corresponds to $p_1$'s mentor. A directed link between two nodes indicates the mentorship relation, where the mentor is the node the link departs from. $G_1$ indicates the set of nodes co-mentored by $p_1$'s mentor 5 years before and after $p_1$'s graduation. This time window is denoted as $d$. 
The squared nodes in $G_2$ are the mentees mentored by the nodes of $G_1$ during the first 25 years after their graduation. The blue node $p_1$ and the green node $p_2$ in $G_1$ have their mentees in $G_2$, whereas the grey node $p_3$ has no offspring, hence no mentee-node in $G_2$. Therefore, $p_1$ and $p_2$ are survived mentees, according to our definition, while $p_3$ drops out. Because of the number of nodes in $G_1$, $p_1$ is a mentee in a small group. \\textbf{b.} Genealogy network of the mentee $p_4$. The difference in respect to panel a) is that the number of nodes in $G_1$, i.e. the group size, is among the top 25\\% in the dataset, meaning that the mentee is mentored in a big group. \\textbf{c.} Mentor-mentee co-authored publications and the corresponding weighted collaboration network among the mentor and mentees in the small group of panel a. Here the mentor and the $G_1$ mentees have co-authored three papers during the tranining period, resulting in a collaboration network where each node corresponds to an author and the edge weights represent the number of co-authored papers. \\textbf{d.} Mentor-mentee co-authored publications and the corresponding weighted collaboration network among the mentor and their mentees in the big group of panel b.}\n \\label{fig:schematic}\n\\end{figure}\n\n\\subsection*{Genealogy networks, mentee generations, and group size}\nThese curated datasets allow us to construct for each researcher $p$ their academic genealogy network, that is a temporal directed network where each node represents a researcher and a directed link is a mentorship relation pointing from a mentor to their mentee (Fig. \\ref{fig:schematic}a,b). Each node has a time attribute, corresponding to their doctoral or postdoctoral graduation year. The nodes included in this network are: (i) the node corresponding to the researcher $p$, (ii) the mentor of $p$, (iii) the set of nodes that are mentored by $p$'s mentor 5 years before and after $p$'s graduation, denoted as generation $G_1$, (iv) the set of nodes mentored by the nodes of $G_1$ during the first 25 years after graduation, denoted as generation $G_2$. For example, in Fig. \\ref{fig:schematic}a, we show the academic genealogy network of researcher $p_1$.\nTo account for the longitudinal limits of the dataset, for each researcher we only consider two generations of nodes and include in $G_2$ only the mentees mentored during the first 25 years after a researcher's graduation (Supplementary Fig. S2 and Table S2). Also, we consider only researchers that graduated between 1900 and 1995, in order to have at least 25 years of career after graduation and to avoid right-censoring issues\\cite{leung1997censoring}.\n\nIn order to understand the relation between the mentees' academic performance and the mentorship environment they were trained in, we introduce the concept of \\textit{group size} and provide measures of \\textit{academic performance}.\nThe group size of a given mentee is defined as the total number of nodes in $G_1$, that is the number of mentees that were supervised by the same mentor 5 years before and after the mentee's graduation. For example, in Fig. \\ref{fig:schematic}a, the node $p_1$ is mentored with two other mentees during the five years before and after $p_1$'s graduation, whereas in Fig. \\ref{fig:schematic}b, $p_4$ is mentored with four other mentees five years before and after $p_4$'s graduation. The group size is thus 3 for $p_1$ and 5 for $p_4$. 
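The group size defined above can be computed directly from a list of mentorship records, as in the following minimal sketch; the record format (mentor, mentee, graduation year) is a hypothetical illustration rather than the actual schema of the curated data.\n\\begin{verbatim}\ndef group_size(records, focal_mentee, window=5):\n    # records: iterable of (mentor_id, mentee_id, graduation_year) tuples\n    mentor_of = {mentee: mentor for mentor, mentee, _ in records}\n    grad_year = {mentee: year for _, mentee, year in records}\n    mentor, y0 = mentor_of[focal_mentee], grad_year[focal_mentee]\n    # mentees supervised by the same mentor within 5 years of the focal graduation\n    peers = {mentee for m, mentee, year in records\n             if m == mentor and abs(year - y0) <= window}\n    return len(peers)   # the focal mentee is included in the count (e.g. 3 for p1 above)\n\\end{verbatim}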
Notice that the group size associated with a mentee is fixed, but a mentor can lead a group whose size can change over time and is equal to the number of mentees mentored in any 10 years window. The usage of this time years window ($d$ in Fig. \\ref{fig:schematic}) to quantify the group size is motivated by previous work \\cite{lienard2018intellectual}. We also show that group size has the same distribution when using different time windows (Supplementary Fig. S3), indicating that our findings do not depend on the choice of $d$.\n\nNext, we define small groups and big groups: we first identify all the mentees who graduated in a given year, then we rank them in descending order according to their group size. Mentees in the top 25\\% are in \\textit{big} groups, while those in the bottom 25\\% are in \\textit{small} groups. Since the average and the 75\\% quantile of group size are generally increasing over time \\cite{wuchty2007increasing,wu2019large}, the threshold separating big groups and small groups is time-dependent, and accounts for the increasing size effect (Supplementary Fig. S3 and Table S3). \n\n\\begin{figure}[!bt]\n \\centering\n \\includegraphics[width=1\\textwidth]{figs\/fig2.pdf}\n \\caption{\\label{fig:survival_rate}\\textbf{Survival rate, fecundity, and yearly citations.} \\textbf{a-c.} The evolution of the total number of mentees (dark grey bars) and the number of survived mentees (light grey bars). Survived mentees are those that had at least one mentee themselves. \\textbf{d-f.} The evolution of the survival rate of mentees from small groups (blue lines) and from big groups (orange lines), compared with the overall average survival rate (grey lines). \\textbf{g-i.} Fecundity during the first 25 years of career of all mentees (left two bars in each subplot) and of survived mentees (right two bars in each subplot) from small groups and big groups. \\textbf{j-l.} Average yearly citations of all mentees (left two bars in each subplot) and of survived mentees (right two bars in each subplot) from small groups and big groups. The result of the Mann-Whitney significance test comparing distributions is reported at the top of each paired bars (*p < 0.05; **p < 0.01; ***p < 0.001).}\n\\end{figure}\n\n\\subsection*{Academic performance measures}\nTo quantify the academic performance of a mentee, we use three kinds of widely used measures \\cite{malmgren2010role,milojevic2018changing,xing2019strong}: \\textit{fecundity}, \\textit{survival}, and \\textit{publishing performance indicators}.\nThe \\textit{fecundity} of a node is defined as the number of their mentees 25 years after graduation, that is the number of their neighbors in $G_2$. For example, the node $p_1$ trains two mentees in $G_2$ (Fig. \\ref{fig:schematic}a), while $p_3$ has no trainees (Fig. \\ref{fig:schematic}b), hence $p_1$ and $p_3$ have fecundity equal to two and zero respectively.\n\nLike previous work\\cite{lienard2018intellectual}, we define \\textit{survival} as having at least one mentee during the first 25 years, which is equivalent to having at least one neighbor in $G_2$. These researchers are ``surviving'' because they eventually establish themselves and build a scientific group. In contrast, mentees that do not have mentees themselves are considered dropouts, as they have not built a scientific group of their own, although they might continue publishing. In Fig. \\ref{fig:schematic}a and b, the blue and green circles are survived mentees while grey circles are dropouts. 
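Fecundity and survival can be computed with the same hypothetical record format used in the sketch above; counting mentees by their graduation year within the first 25 years is an assumption about the exact bookkeeping.\n\\begin{verbatim}\ndef fecundity(records, researcher, horizon=25):\n    # number of mentees trained during the first `horizon` years after graduation\n    grad_year = {mentee: year for _, mentee, year in records}\n    y0 = grad_year[researcher]\n    return sum(1 for mentor, _, year in records\n               if mentor == researcher and y0 <= year <= y0 + horizon)\n\ndef survived(records, researcher, horizon=25):\n    # a researcher survives if they train at least one mentee themselves\n    return fecundity(records, researcher, horizon) >= 1\n\\end{verbatim}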
We then define the \\textit{survival rate} of the group, which is the fraction of nodes in $G_1$ that survive. For example, in Fig. \\ref{fig:schematic}a 2 out of 3 mentees in $G_1$ survive, as they have a mentee in $G_2$, then the survival rate associated with this (small) group is 0.67. Similarly, in Fig. \\ref{fig:schematic}b the survival rate of the (big) group is 0.4. We use also alternative definitions of survival\\cite{milojevic2018changing,xing2019strong}, based on the mentee's publication record 10 years after graduation, and obtain findings similar to those shown in the following sections (Supplementary S2.2 and Fig. S6).\n\nIn addition to measures of academic survival and fecundity, we focus on publishing performance, as captured by \\textit{productivity}, that is the number of papers published during the mentee's career, and \\textit{average number of yearly citations} acquired by these papers. \nFinally, we use the publication record also to construct the \\textit{collaboration} network between the mentor and all the mentees in $G_1$, to understand if this has an effect on the future career of mentees. In Fig. \\ref{fig:schematic}c and \\ref{fig:schematic}d, we provide two examples of collaboration networks, derived by the shown publications, where a node represents an author and two nodes are linked if they co-author at least one publication. The link weight corresponds to the number of co-authored publications. \n\n\n\n\\subsection*{Mentees trained in big groups have lower survival rate}\nGiven that fecundity and publications are both widely used as a proxy of success\\cite{malmgren2010role,wuestman2020genealogical,rossi2017genealogical,clauset2017data}, we ask: what are the success differences between mentees from big groups and small groups? Who will perform better in their future academic career: Those trained together with many other mentees in supposedly high-profile large groups or those trained with just a few other mentees? Apart from group size, are there any other confounding factors associated with the development of a mentee's career?\n\nTo answer these questions, we first investigate the evolution of the total number of mentees (dark grey bars) and survived mentees (light grey bars) (Fig. \\ref{fig:survival_rate}a-c). The total number of mentees has overall significantly increased from 1910 to 2000. In particular, after a temporary slow down soon after the World War II, the second half of the 20th century has witnessed a striking increase in both the total number of mentees and survived mentees, that continued until today. However, the rate of survived mentees was lower than the total number of mentees, indicated by the increasing difference between the dark grey and light grey bars. Indeed, the survival rate (grey lines in Fig. \\ref{fig:survival_rate}d-f), is (i) relatively stable for Chemistry until the 60s, (ii) suffered from a temporary decrease during World War II for Physics, followed by an increase probably because of the newly revived welfare after the war, which provided a large number academic positions in university and research institutes, and (iii) for Neuroscience had many ups and downs before and during World War II, and an increase until the early 70s. However, for all three disciplines the survival rate exhibits a striking declining trend after the 70s, which is still ongoing. When we split the survival rate into big groups and small groups, we find a pronounced difference (Fig. 
\\ref{fig:survival_rate}d-f): Mentees from big groups have a significantly lower survival rate than those from small groups starting in the 1940's (Chemistry) or 1950's (Physics and Neuroscience), indicating that mentees from big groups were much less likely to continue in academia. In the 90s, the survival rate of mentees trained in big groups is between 30\\% and 40\\% lower than that of mentees from small groups. The difference in survival rates between small and big groups does not depend on the time-dependent threshold identifying small groups and big groups (Supplementary Table 3, Fig. S6-S7). \n\n\\subsection*{Mentees trained in big groups have higher fecundity and higher yearly citations}\nIn the following analysis, we mainly focus on the data after 1960 when big groups and small groups exhibit an evident difference in survival rate. We find significant differences between the mentees from big groups and small groups also in the other academic performance measures: Mentees from small groups are on average more successful in both fecundity (left two bars in the panels of Fig. \\ref{fig:survival_rate}h-i) and yearly citations (left two bars in the panels of Fig. \\ref{fig:survival_rate}j-l) than those from big groups, except for fecundity in Chemistry (Fig. \\ref{fig:survival_rate}g). One possibility leading to this exception is that Chemistry is a predominantly experimental discipline requiring a large workforce; therefore, the mentees from big groups inherit from their mentoring groups a much larger fecundity than those from small groups. Interestingly, when we compare the academic achievements of only survived mentees, that is mentees that have at least fecundity one, the observed performance differences in fecundity and average yearly citations reverse between groups (right two bars in each panel of Fig. \\ref{fig:survival_rate}g-l). In other words, when all mentees are considered, those from small groups tend to do better in terms of average fecundity and yearly citations than those from big groups; however, if we restrict the comparison to the mentees that manage to survive and establish a group, those from big groups have an advantage, since they tend to have higher fecundity and yearly citations. Taken together, this advantage reversal happens because of the low survival rate in big groups, which lowers the average fecundity and citations of mentees from big groups. These findings are not a trivial consequence of dividing the data into small groups and big groups, as shown by a null model where we randomize the mentor-mentee relationships while keeping the group size constant. In this null model, we do not see significant differences between mentees from big groups and small groups (Supplementary Fig. S4). The findings about survival suggest that being mentored in a big scientific group can have long-term career competitive advantages in academic performance, but these are conditional on the lower odds of surviving.\n\n\\begin{figure}[!b]\n \\centering\n \\includegraphics[width=1\\textwidth]{figs\/fig3.pdf}\n \\caption{\\textbf{Fecundity distribution and citation representation disparity among mentees from big groups and small groups.} \\textbf{a-c.} Complementary cumulative distribution function (CCDF) of fecundity. The orange (blue) line displays the fecundity distribution of $G_1$ mentees from big (small) scientific groups. The green dashed line marks the point where the two probabilities are equal and the two distributions cross (only for Chemistry and Physics).
\\textbf{Inset:} The evolution of the equal probability point, $k$, for each decade since 1960s. \\textbf{d-f.} Relative representation $Rr$ (see Methods) of $G_1$ mentees from small and big groups in the top 5\\%, 10\\%, 25\\%, and 50\\% of the average annual citations (AAC) ranking . \\textbf{g-i.} The evolution of $Rr$ in the top 5\\% and top 10\\% of the AAC ranking.}\n \\label{fig:evolution}\n\\end{figure}\n\n\\subsection*{Researchers trained in small groups have small groups, researchers trained in big groups have big groups}\nThe results in Fig. \\ref{fig:survival_rate}g-l imply that big groups and small groups have different advantages based on the chosen success metric, namely survival probability, future fecundity, and average citations. Here we further explore the respective advantages of the two kinds of groups according to the academic aim of a mentee. We investigate the complementary cumulative distribution function (CCDF) of the mentees' fecundity, that is the probability that a researcher has at least $k$ mentees in their career, (Fig. \\ref{fig:evolution}a-c), and focus on the value of $k$ where the probabilities for researchers trained in big groups and small groups are equal.\nWe observe that for Chemistry and Physics there is one point where the two probabilities are equal and two distributions cross. The point of crossover is at $k=5$ in the period 1990-1995, meaning that the likelihood to survive and have 5 mentees or less is higher for researchers trained in small groups. On the other hand, researchers trained in big groups have a higher likelihood to mentor 5 or more mentees in their careers, despite their lower odds of surviving. The two distributions do not cross over in Neuroscience and display only minor, although statistically significant differences, indicating that researchers trained in small groups have a slightly higher probability of having $k$ mentees, for all values of $k$ in the period 1990-1995. The point of equal probability $k$ identifies two different regimes: One regime where fecundity is smaller than $k$ and is associated with a higher likelihood to mentees trained in small groups; in the other regime, fecundity is larger than $k$ and is associated with higher likelihood to mentees trained in big groups. This opposite role of small and big groups regarding fecundity suggests two different strategies: a big group is to be preferred if a mentee aims at high fecundity, while a small group is to be preferred if the aim of a mentee is to avoid dropout, although the expected fecundity will be smaller. We calculate the points of same probability for each decade since 1960s (Fig. \\ref{fig:evolution}a-c insets, and Supplementary Fig. S5) and find an increasing trend with time. This phenomenon indicates that researchers trained in a big group face high risks of dropout, if their aim is not a high fecundity.\n\n\\begin{figure}[!b]\n \\centering\n \\includegraphics[width=1\\textwidth]{figs\/fig4.pdf}\n \\caption{\\textbf{Results of coarsened exact matching regressions.} \\textbf{a-c.}\n Logistic regression of the odd of surviving in academia. The green (pink) bars indicate the positive (negative) regression coefficients for the corresponding variables on the y-axis. The numbers next to the bars indicate the value of the regression coefficient. The statistical significance of the variables are presented at the top of each value (* p < 0.05; **p < 0.01; ***p < 0.001). The error bars indicate the standard error of each regression coefficient. 
\\textbf{d-f.} Linear regression of the fecundity of $G_1$ mentees. \\textbf{g-i.} Logistic regression of the odds of being in the top 5\\% in the AAC rank. Note that we show only the statistically significant variables of the regression models after coarsened exact matching the data.}\n \\label{fig:regression}\n\\end{figure}\n\\subsection*{Big groups are more likely to nurture future top-cited researchers} \nApart from academic fecundity, citation is one of the most popular and widely recognized metrics to measure a researcher's success. \nWe measure the mentee's citation success by the probability of being a top-cited scientist. \nBased on previous work \\cite{ma2020mentorship}, we measure the average annual citations (denoted by AAC) of each mentee during their career, and use it to create a ranking for each decade, based on the mentee's graduation year, from 1960 to 1995. We then define the relative representation $Rr_{X\\%}$ which captures how many mentees we observe in the top $X\\%$ of the ranking compared to a random model where group size has no effect on the ranking (see Methods). In general, $Rr>0$ means that the mentees of a given group are more represented than expected. Conversely, $Rr<0$ indicates that mentees are underrepresented compared to the expectation. In Fig. \\ref{fig:evolution}d, we consider the mentees trained in big groups and small groups in the period 1990-1995 and study their relative representation in the top 5\\%, 10\\%, 25\\% and 50\\% of the AAC ranking. We find that mentees from big groups, if surviving, are over-represented among top-cited scientists. Moreover, the result is more pronounced in the top 10\\% and the pattern is consistent across different research fields. Taken together, Fig. \\ref{fig:survival_rate}g-l and Fig. \\ref{fig:evolution}d-f show that survivors from big scientific groups are not only likely to have a better average academic performance, but also have a competitive advantage in being top-cited scientists. We additionally study how the relative representation evolved over time (Fig. \\ref{fig:evolution}g-i). Big groups are becoming less dominant in raising top-cited scientists in Chemistry and Physics in recent decades, as indicated by the orange line decreasing from 0.35 to 0.16 in Fig. \\ref{fig:evolution}g and from 0.38 to 0.18 in Fig. \\ref{fig:evolution}h. The same trend was present from 1960 to 1990 in Neuroscience, but seems to have changed in the 1990s (Fig. \\ref{fig:evolution}i).\nSurvived mentees from small groups have become more represented than previously, even though they are still underrepresented compared to those surviving from big groups. However, we do not find evident changing trends with respect to the top 25\\% and top 50\\% AAC rank (Supplementary Fig. S8, S9 and S10 for details). Our results imply that the candidates surviving in a big group perform better in terms of impact than those from small groups. \n\n\\subsection*{Controlling for confounding factors}\nIn order to understand the role of potential confounding factors, we use a coarsened exact matching (CEM) coupled with regression models to study the relation between scientific group size and predictors of future academic performance. CEM regression consists in running a separate regression model on matched groups of mentees, resulting in a more stringent way of controlling for confounding factors than regression alone (Methods)\\cite{iacus2009cem,iacus2012causal}. In Fig. 
\\ref{fig:regression}a-c, the logistic regression applied to CEM datasets \nshows that the most significant variable, with a negative weight, to predict survival is \\textit{MenteeFromBigGroup} variable, confirming the finding that being trained in a big group lowers the odds of future survival in academia. The positive regression coefficient of the variable \\textit{First5YearPubsOfMentee} indicates that a mentee's early productivity is associated with survival, supporting previous findings \\cite{milojevic2018changing,xing2019strong}. Moreover, working with senior supervisors (larger \\textit{CareerAgeOfMentor}) rather than with junior supervisors gives a slight yet significant advantage to survive in Chemistry and Neuroscience, while it seems to have a slight negative effect on mentee survival in Physics. Also, the regression coefficient of the \\textit{YearlyPubsOfMentor} variable indicates that the mentor's yearly productivity, i.e. the average number of papers published in a year, has a negative effect on the mentee's survival probability. Taken together, a possible explanation for the results observed in Fig. \\ref{fig:regression}a-c is that busy mentors, such as those from big groups and with a high publishing rate, have typically little time to spend on supervising each mentee, affecting their future academic career. \nThe negative association between mentor productivity during the mentee training and mentee success is further confirmed when we study the distribution of the number of mentor's papers divided by group size (Fig. \\ref{fig:regression_support}a-c). The CCDF for mentors supervising big groups is always larger than those supervising small groups, indicating that mentors leading big groups tend to have more publications on average than those leading small groups. At the same time, their mentees have a lower survival probability than mentees trained in small groups (Fig. \\ref{fig:survival_rate}d-f).\n\\begin{figure}[!bt]\n \\centering\n \\includegraphics[width=1\\textwidth]{figs\/fig5.pdf}\n \\caption{Mentors' productivity and collaboration with their mentees during the mentees' training period. \\textbf{a-c.} The complementary cumulative distribution function (CCDF) of the mentors' productivity leading small groups (blue) and big groups (orange). Here productivity is the number of publications published by the mentor during the training period $d$, highlighted in Fig. \\ref{fig:schematic}a. \\textbf{d-f.} CCDF of the number of papers co-authored with the mentor by survived mentees (green pluses) and by dropped out mentees (grey diamonds). This data refers only to mentees trained in big groups. \\textbf{Inset:} CCDF of the number of papers co-authored with the mentor of survived mentees (green pluses) and dropped out mentees (grey diamonds) trained in small groups only.}\n \\label{fig:regression_support}\n\\end{figure}\n\nWe use a regression approach on CEM datasets also to control for confounding factors in the prediction of fecundity and citation performance, and confirm our previous observations: Group size is a significant factor, being positively associated with future fecundity and citation performance, captured by being among the top 5\\% scientists for yearly citations (\\textit{Top5\\%YearlyCitations}, Fig. \\ref{fig:regression}d-i). The only exception is Neuroscience, where group size is not significant to predict a top-cited scientist (Fig. 
\\ref{fig:regression}i).\nOverall, the regression analysis confirms that, if surviving, a mentee from a big group has long-term competitive advantages compared to one from a small group.\n\nApart from group size, we find one more variable, the number of papers co-authored with a mentor during training (\\textit{CollaPubsWithMentor}, see schematic illustration in Fig. \\ref{fig:schematic}c-d), which is positively associated with fecundity and citation performance. This is compatible with the hypothesis that the mentees that receive more supervision from their mentors, as signalled by the higher number of coauthored papers, have higher chances of future success.\nThis is further confirmed by the observed statistical disparities of the mentor-mentee collaboration in big groups and small groups (Fig. \\ref{fig:regression_support}d-f): in big groups, mentees who will survive tend to have more co-authored papers with a mentor during training than those that will drop out. For mentees trained in small groups, there is no noticeable distribution difference between survived mentees and dropped out mentees (Fig. \\ref{fig:regression_support}d-f Insets). Mentees working in small groups can receive more evenly distributed attention from the mentor in the same period because there are fewer trainees. Finally, our regression models have between $66\\%$ and $73\\%$ prediction accuracy in mentee survival and reveal another main factor (Fig. \\ref{fig:regression} and Supplementary Table S5, Fig. S15): Surprisingly, the more productive a mentor is, the smaller the probability that their mentee will stay in academia. Taken together, our findings quantitatively support the hypothesis that the attention received from the mentor plays a key role in the higher survival rate and success of mentees in academia\\cite{ma2020mentorship}.\n\n\\section*{Discussion, limitations, and conclusions}\\label{sec:discussion}\nOur findings about the effects of group size and mentor productivity support our hypothesis that the attention allocation of the mentor affects the future academic success of mentees: A highly productive mentor supervising a big group tends to provide fewer supervision opportunities to each mentee, which results in a higher dropout rate. In big groups, this tendency is counterbalanced only when there are frequent collaborations, and hence more supervision opportunities, between mentee and mentor. Taken together, based on large-scale data in scientific genealogy and scientometrics, we offer empirical evidence for both potential benefits and risks of working with successful mentors. \n\nOur study has some limitations. First, some mentorship relations might not be reported in the AFT dataset, which could affect the actual group size measure. To mitigate this issue, we analyzed only the three most represented fields in the data: Chemistry, Neuroscience, and Physics (Supplementary S1.2). Our CEM and regression analysis should also mitigate reporting bias due to the different visibility of mentors, since we control for individual productivity and citations. Also, prior literature has widely investigated the AFT dataset\\cite{lienard2018intellectual,ma2020mentorship,schwartz2021impact,david2012neurotree,ke2022dataset}, and has not found obvious biases that could affect our findings. Second, the AFT dataset only reports formal mentorship relations. In academia, graduate students receive informal mentorship from many researchers, including postdocs, teachers, other faculty and academic staff \\cite{acuna2020some}.
These relations are not captured by this data set and, to our knowledge, by no other openly available data set. Yet, while information about informal mentorship could provide more causal explanation to our findings on career evolution, the reported relation between group size of the official mentor and survival, fecundity, and academic achievements would still hold. Third, in this study we use a narrow definition of academic achievements, such as survival, fecundity, and average annual citations. These measures are oblivious of other dimensions of success, not quantifiable in our data, and do not fully represent a successful academic career, in all its aspects. Yet, since decisions in the academic enterprise are lately strongly driven by quantitative measures like those used in this paper, we believe that it is important to study the properties and relations between these indicators. \n\nOur findings indicate that a simple characteristic such as the size of the mentor's group can help predicting the long-lasting achievements of a researcher's career. Our work also raises important questions: Should research policies balance the number of mentees per mentor, given the association with a higher dropout rate? Or should they promote excellence of future career, as arguably nurtured in big successful groups, despite the higher risk of dropout? \nThere are also open questions that we have not tackled here but that offer important future directions of inquiry. \nAn important one is: what is the effect of group structure on the mentees' success? We have shown a strong collaboration between mentee and mentor counterbalances the lower odds of survival in a big group. However, we have not explored the role of the inner group structure, as captured by collaboration ties between the supervised mentees or between a mentee and other junior academics. These collaborations could provide mutual support, mitigating future dropout risk.\nOther important open questions concern how our findings change when differentiating data based on gender, country of origin, or ethnicity. Indeed, previous research shows the existence of strong biases in mentorship and in science in general \\cite{moss2012science,lariviere2013bibliometrics,dutt2016gender,dennehy2017female,schwartz2021impact,hernandez2020inspiration} which could intersect in problematic ways with the big group effect. Answering these questions could not only offer a better understanding of the fundamental mechanisms that underpin a scientific career from the beginning but might also substantially improve our ability to retain young researchers, to improve workplace quality, and to nurture high-impact scientists.\n\n\\clearpage\n\n\\section*{Methods}\\label{sec:methods}\n\\subsection*{Data Preparation}\nThe Academic Family Tree (AFT, \\url{Academictree.org}) records formal mentorship mainly based on training relationships of graduate student, postdoc and research assistant from 1900 to 2021. AFT includes 743,176 mentoring relationships among 738,989 scientists across 112 fields. The data can be linked to the Microsoft Academic Graph (MAG, \\url{https:\/\/aka.ms\/msracad}) which is one of the largest multidisciplinary bibliographic databases. The combined data contains the publication records of mentors and mentees, which can be used to calculate the measurements of publication-related performance in our analysis (Supplementary Note 2 and Note 3). The combined data of AFT and MAG is taken from \\cite{ke2022dataset}. 
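For orientation, a schematic sketch of how such a linkage can be assembled is shown below; the file names and column labels are hypothetical placeholders, and the actual matched author identifiers come from the validated AFT-MAG linkage cited above.\n\\begin{verbatim}\nimport pandas as pd\n\n# hypothetical schemas, for illustration only\naft = pd.read_csv('aft_mentorship.csv')    # mentor_id, mentee_id, grad_year, field\nmag = pd.read_csv('mag_publications.csv')  # author_id, paper_id, year, citations\nlink = pd.read_csv('aft_to_mag.csv')       # aft_id, mag_author_id (validated matching)\n\n# restrict to the three studied fields\naft = aft[aft['field'].isin(['chemistry', 'neuroscience', 'physics'])]\n\n# attach each mentee's publication record through the author matching\nmentee_pubs = (aft.merge(link, left_on='mentee_id', right_on='aft_id')\n                  .merge(mag, left_on='mag_author_id', right_on='author_id'))\n\\end{verbatim}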
In this paper, we conduct our analysis on researchers in Chemistry, Physics, and Neuroscience, amounting to 350,733 mentor-mentee pairs, and to 309,654 scientists who published 9,248,726 papers. We motivate our choice for the studied fields in Supplementary Note 1 (Data and preprocessing). \n\n\\subsection*{Relative Representation (Rr)}\nGiven a time window, we rank the mentees who graduated in this window according to their average annual citations (AAC), calculated over their whole career. Then we compute the observed representation of big group mentees, $R_{BG}(X)$, in the top $X\\%$ of the AAC ranking as:\n\\begin{equation}\n\\label{eq1}\n R_{BG} (X) = \\frac{N_{BG}(X)}{N_{BG}(X) + N_{SG}(X)} \n\\end{equation}\nwhere $N_{BG}(X)$ and $N_{SG}(X)$ are the number of mentees from big groups and small groups, respectively, found in the top $X\\%$ AAC ranking.\nWe compare this observed representation with the expected representation if the position in the ranking is independent of the group size a mentee is from. The expected representation is:\n\\begin{equation}\n\\label{eq2}\n R^{exp}_{BG}=\\frac{N_{BG}}{N_{BG}+N_{SG}}\n\\end{equation}\nwhere $N_{BG}$ and $N_{SG}$ are the total number of mentees from big groups and small groups, respectively.\nThe relative representation in the top $X\\%$ ranking, $Rr_{BG}(X\\%)$, is obtained by subtracting (\\ref{eq2}) from (\\ref{eq1}) and dividing by (\\ref{eq2}):\n\\begin{equation}\n\\label{eq3}\n Rr_{BG}(X\\%) = \\frac{R_{BG} (X) - R^{exp}_{BG}}{R^{exp}_{BG}} \n\\end{equation}\nSimilarly, the relative representation for small groups is defined as:\n\\begin{equation}\n\\label{eq4}\n Rr_{SG}(X\\%) = \\frac{R_{SG}(X) - R^{exp}_{SG}}{R^{exp}_{SG}} \n\\end{equation}\nwhere $R_{SG}(X)$ and $R^{exp}_{SG}$ are obtained by swapping $N_{BG}(X)$ and $N_{SG}(X)$ in (\\ref{eq1}) and (\\ref{eq2}). \n\n\\subsection*{Coarsened Exact Matching (CEM) Regression}\nIn causal inference, analyzing a matched data set is generally less model-dependent (i.e., less prone to the modeling assumptions) than analyzing the original full data\\cite{iacus2012causal,ho2007matching}. For this reason, we use a matching approach before applying regression models to our datasets. With a matching approach\\cite{iacus2009cem,iacus2012causal}, two groups can be balanced, resulting in similar empirical distributions of the covariates. There are many approaches to matching: one approach is based on exact matching, which is the most accurate but also not usable in practice as it returns too few observations in the matched samples. Here, we use the Coarsened Exact Matching (CEM): this method first coarsens the data into linear bins, then matches elements of two groups that fall within the same bin. This approach returns approximately balanced data and allows one to control for covariates. \nTaken together, the CEM approach involves four steps:\n\\begin{itemize}\n \\item [1.] For each mentee $i$, we define a vector $\\mathbf{V}_i$ where each element of the vector corresponds to an individual variable, like number of publications or number of collaborators. \n \\item [2.] We coarsen each control variable, creating bins for each quantile of the distribution\\cite{iacus2012causal} (Supplementary S3.2). Then for each $i$, we convert the vector $\\mathbf{V}_i$ into a coarsened vector $\\mathbf{V^C}_i$, where each element maps the individual variable to the corresponding bin of the coarsened variable.\n \\item [3.]
We then perform an exact matching of the coarsened vectors, that is, for each $i$ in one group we find a $j$ in the other group such that $\\mathbf{V^C}_i=\\mathbf{V^C}_j$.\n\\item [4.] We discard all elements $i$ that we are not able to match.\n\\end{itemize}\nThis procedure returns the CEM datasets. After creating these datasets, we apply regression models to estimate the effect of the independent variable (group size) on the outcome variables (survival, fecundity, and yearly citations). \nSpecifically, we use a logistic regression, a linear regression, and a logistic regression model to study the effects of group size, respectively, on the mentee's survival, fecundity, and being among the top 5\\% cited researchers. See Supplementary Note 3 and Note 4 for details on the variables, the CEM regression models, the Chi-square test and cross-validation.\n\n\\subsection*{Regression variables}\nThe regressions include controls for the following variables: \n\\begin{itemize}\n \\item [$\\bullet$]\\textit{YearlyPubsOfMentor} -- number of yearly publications over a mentor's career.\n \\item [$\\bullet$]\\textit{TotalPubsOfMentor} -- number of total publications over a mentor's career.\n \\item [$\\bullet$]\\textit{YearlyCitationOfMentor} -- number of yearly citations over a mentor's career.\n \\item [$\\bullet$]\\textit{TotalCitationOfMentor} -- number of total citations over a mentor's career.\n \\item [$\\bullet$]\\textit{YearlyCollaOfMentor} -- number of yearly coauthors over a mentor's career.\n \\item [$\\bullet$]\\textit{TotalCollaOfMentor} -- number of total coauthors over a mentor's career.\n \\item [$\\bullet$]\\textit{PubsOfMentorInTraining} -- number of the mentor's papers during a given mentee's training period.\n \\item [$\\bullet$]\\textit{CareerAgeOfMentorInTraining} -- the mentor's career stage at the mentee's graduation.\n \\item [$\\bullet$]\\textit{First5YearPubsOfMentee} -- number of publications in the first 5 years of a mentee's career.\n \\item [$\\bullet$]\\textit{First5YearCitationOfMentee} -- total number of citations of the papers published in the first 5 years of a mentee's career.\n \\item [$\\bullet$]\\textit{First5YearCollaOfMentee} -- number of coauthors in the first 5 years of a mentee's career.\n \\item [$\\bullet$]\\textit{CollaPubsWithMentor} -- number of papers co-authored with the mentor during the training period.\n \\item [$\\bullet$]\\textit{MenteeFromBigGroup} -- the independent variable of the regressions; it is 1 if the mentee graduated from a big group and 0 if from a small group.\n \\item [$\\bullet$]\\textit{survival} -- dependent variable (binary).\n \\item [$\\bullet$]\\textit{fecundity} -- dependent variable (discrete).\n \\item [$\\bullet$]\\textit{Top5\\%YearlyCitations} -- dependent variable indicating whether the mentee is among the top 5\\% of scientists by yearly citations (also binary).\n\\end{itemize}\nMore information about the variables of the regressions can be found in Supplementary Note 3.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nInvariants under local unitary transformations are tightly related to discussions of nonlocality (a fundamental phenomenon in quantum mechanics), to quantum entanglement, and to the classification of quantum states under local transformations. In recent years many approaches have been presented to construct invariants of local unitary transformations. One method is developed in terms of polynomial invariants 
\\cite{Rains,Grassl}, which in principle allows one to compute all the invariants of local unitary transformations, though it is not easy to carry out operationally. In Ref. \\cite{makhlin}, a complete set of 18 polynomial invariants is presented for the local unitary equivalence of two-qubit mixed states. Partial results have been obtained for three-qubit states \\cite{linden}, tripartite pure and mixed states \\cite{SCFW}, and some generic mixed states \\cite{SFG, SFW, SFY}. Recently the local unitary equivalence problem for multiqubit \\cite{mqubit} and general multipartite \\cite{B. Liu} pure states has been solved.\n\nHowever, one still generally has no operational way to judge the equivalence of two arbitrary-dimensional bipartite or multipartite mixed states under local unitary transformations. An effective way to deal with the local equivalence of quantum states is to find a complete set of invariants under local unitary transformations. Nevertheless, these invariants usually depend on the detailed expressions of the pure state decompositions of a state, and for a given state there are infinitely many such decompositions. The problem becomes particularly complicated when the density matrices are degenerate, since in this case even the eigenvector decompositions of a given state are not unique.\n\nIn this note, we give a way of constructing invariants under local unitary transformations such that the invariants obtained in this way are independent of the detailed pure state decompositions of a given state. They give rise to operational necessary conditions for the equivalence of quantum states under local unitary transformations. We show that hyperdeterminants, the generalization of the determinant to higher-dimensional matrices \\cite{gel}, can be used to construct such invariants. The hyperdeterminant is in fact closely related to entanglement measures like the concurrence \\cite{Hill,wott,uhlm,rungta,ass} and the 3-tangle \\cite{coff}. It has also been used in the classification of multipartite pure states \\cite{miy,Luq,vie}. By employing hyperdeterminants, we construct some trace invariants that are independent of the detailed pure state decompositions of a given state. These trace invariants are a priori invariant under local unitary transformations.\n\n\\section{State decomposition independent local invariant}\n\nLet $H_1$ and $H_2$ be $n$- and $m$-dimensional complex Hilbert spaces, with $\\{\\vert i\\rangle\\}_{i=1}^n$ and $\\{\\vert j\\rangle\\}_{j=1}^m$ the orthonormal bases of $H_1$ and $H_2$, respectively. Let $\\rho$ be an arbitrary mixed state defined on $H_1\\otimes H_2$,\n\\begin{eqnarray}\\label{general decomposition-1}\n\\rho=\\sum_{i=1}^I p_i|v_i\\ra\\la v_i|,\n\\end{eqnarray}\nwhere $|v_i\\ra$ is a normalized bipartite pure state of the form:\n$$\n|v_i\\ra=\\sum_{k,l=1}^{n,m}a_{kl}^{(i)}|kl\\ra,\\ \\\n\\sum_{k,l=1}^{n,m}a_{kl}^{(i)}a_{kl}^{(i)\\ast}=1,\\ \\ a_{kl}^{(i)}\\in\n\\Cb,\n$$\nwhere $\\ast$ denotes complex conjugation. Denote by $A_i$ the matrix whose entries are given by the coefficients of the vector $\\sqrt{p_{i}}|v_i\\ra$, i.e. $(A_i)_{kl}=(\\sqrt{p_{i}}a_{kl}^{(i)})$ for all $i=1,\\cdots,I$. Define the $I\\times I$ matrix $\\Omega$ such that $(\\Omega)_{ij}=tr(A_iA_{j}^{\\dag}), ~~~ i,j=1,\\cdots,I,$ \\ where $\\dag$ stands for transpose and complex conjugation.\n\n\nThe pure state decomposition (\\ref{general decomposition-1}) of a given mixed state $\\rho$ is not unique. 
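As an illustrative aside added to the text, the construction above is straightforward to set up numerically. The sketch below, assuming NumPy and a decomposition supplied as weights $p_i$ together with the flattened coefficient vectors of the pure states, forms the matrices $A_i$ and the matrix $\\Omega$.
\\begin{verbatim}
import numpy as np

def coefficient_matrices(probs, vecs, n, m):
    # A_i has entries sqrt(p_i) * a^{(i)}_{kl}; the coefficient vector of
    # |v_i> is reshaped into an n x m matrix.
    return [np.sqrt(p) * np.asarray(v, dtype=complex).reshape(n, m)
            for p, v in zip(probs, vecs)]

def omega(As):
    # (Omega)_{ij} = tr(A_i A_j^dagger).
    return np.array([[np.trace(Ai @ Aj.conj().T) for Aj in As] for Ai in As])

# Toy decomposition on C^2 x C^2: rho = 1/2 |00><00| + 1/2 |01><01|.
As = coefficient_matrices([0.5, 0.5], [[1, 0, 0, 0], [0, 1, 0, 0]], 2, 2)
print(omega(As))  # diagonal with entries 1/2, 1/2 for this eigenvector decomposition
\\end{verbatim}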
For another decomposition:\n\\begin{eqnarray}\\label{general decomposition-2}\n\\rho=\\sum_{i=1}^I q_i|\\psi_i\\ra\\la\\psi_i|,\n\\end{eqnarray}\nwith\n$$\n|\\psi_i\\rangle=\\sum_{k,l=1}^{n,m}b_{kl}^{(i)}|kl\\ra,\\ \\\n\\sum_{k,l=1}^{n,m}b_{kl}^{(i)}b_{kl}^{(i)\\ast}=1,\\ \\\nb_{kl}^{(i)}\\in\\Cb,\n$$\none similarly has matrices $B_i$ with entries $(B_i)_{kl}=(\\sqrt{q_{i}}b_{kl}^{(i)})$, $i=1,\\cdots,I$, and an $I\\times I$ matrix $\\Omega^\\prime $ with entries\n$$\n(\\Omega^\\prime )_{ij}=tr(B_iB_{j}^{\\dag}),~~~ \\ i,j=1,\\cdots,I.\n$$\n\nA quantity $F(\\rho)$ is said to be invariant under local unitary transformations if $F(\\rho)=F((u_1\\otimes u_2)\\rho (u_1\\otimes u_2)^\\dag)$ for any unitary operators $u_1\\in SU(n)$ and $u_2\\in SU(m)$. Generally $F(\\rho)$ may depend on the detailed pure state decomposition. We investigate invariants $F(\\rho)$ that are independent of the detailed decompositions of $\\rho$. That is, the expressions in Eq. (\\ref{general decomposition-1}) and in Eq. (\\ref{general decomposition-2}) give the same value of $F(\\rho)$ for a given state $\\rho$. Such invariants are of special significance in determining the equivalence of two density matrices under local unitary transformations.\n\nTwo density matrices $\\rho$ and $\\tilde{\\rho}$ are said to be equivalent under local unitary transformations if there exist unitary operators $u_1$ (resp. $u_2$) on the first (resp. second) space of $H_1\\otimes H_2$ such that \\be\\label{lu} \\tilde{\\rho}= (u_1\\otimes u_2)\\rho(u_1\\otimes u_2)^\\dag. \\ee\n\nA necessary condition for (\\ref{lu}) to hold is that the local invariants take the same values, $F(\\rho)=F(\\tilde{\\rho})$. Therefore, if the expressions of the invariants $F(\\rho)$ do not depend on the detailed pure state decomposition, one can easily compare the values $F(\\rho)$ and $F(\\tilde{\\rho})$. Otherwise one has to verify $F(\\rho)=F(\\tilde{\\rho})$ by surveying all the possible pure state decompositions of $\\rho$ and $\\tilde{\\rho}$. In particular, when $\\rho$ is degenerate, even the eigenvector decomposition is not unique, which is usually the main obstacle in finding an operational criterion for the local equivalence of quantum states. In fact, we have presented a complete set of invariants in \\cite{ZGFJL}. However, these invariants depend on the eigenvectors of a state $\\rho$; when the state is degenerate, this set of invariants is no longer efficient as a criterion of local equivalence.\n\nWe now discuss how to find parametrization-independent local unitary invariants. First we give an elementary result showing that the determinant can be used to construct invariants that are independent of the choice of pure state decomposition.\n\n\\noindent{ \\bf Theorem 1:} The coefficients $F_i(\\Omega)$, $i=1,2,...,I$, of the characteristic polynomial of the matrix $\\Omega$,\n\\begin{eqnarray}\\label{thm}\n\\det(\\Omega-\\lambda\\,E)= \\lambda^I + \\lambda^{I-1} F_1(\\Omega) +\n\\cdots + \\lambda F_{I-1}(\\Omega)+ F_{I}(\\Omega) =\n\\Sigma_{i=0}^{I}\\lambda^{I-i}F_{i}(\\Omega),\n\\end{eqnarray}\nwhere $E$ is the $I \\times I$ unit matrix, $F_{0}(\\Omega)=1$, and $\\det$ denotes the determinant, have the following properties:\n\n(i) $F_{i}(\\Omega)$ are independent of the pure state decompositions of $\\rho$;\n\n(ii) $F_{i}(\\Omega)$ are invariant under local unitary transformations, $i=1,\\cdots, I$.\n\n\\noindent{ \\bf Proof:} (i) If Eq. 
(\\ref{general decomposition-1}) and Eq. (\\ref{general decomposition-2}) are two different representations of a given mixed state $\\rho$, we have $B_{i}=\\Sigma_{j} U_{ij}A_{j}$ for some unitary matrix $U$ \\cite{nie}. Consequently,\n$$\n\\ba{rcl} \\Omega_{ij}^{\\prime}&=& \\displaystyle tr(B_{i}B_{j}^{\\dag})\n= tr\\left[\\sum_{k,l}U_{ik}A_{k}U_{jl}^\\ast A_{l}^{\\dag}\\right]\\\\[5mm]\n&=&\\displaystyle \\sum_{k,l}U_{ik}U_{jl}^\\ast\\, tr(A_{k}A_{l}^{\\dag})\n=\\sum_{k,l}U_{ik}U_{jl}^\\ast \\Omega_{kl} =(U\\Omega U^{\\dag})_{ij},\n\\ea\n$$\ni.e. $\\Omega^\\prime =U\\Omega U^{\\dag}$. Therefore $\\det(\\Omega^\\prime -\\lambda\\,E)=\\det( U\\Omega U^{\\dag}-\\lambda\\,E)=\\det(\\Omega-\\lambda\\,E)$. Thus the matrices $\\Omega$ and $\\Omega^\\prime $ have the same characteristic polynomial, namely $F_{i}(\\Omega)=F_{i}(\\Omega^\\prime )$. Therefore the $F_{i}(\\Omega)$ are independent of the pure state decomposition.\n\n(ii) Let $P\\otimes Q\\in SU(n)\\otimes SU(m)$. Under local unitary transformations one has\n$$\\tilde{\\rho}=(P\\otimes Q)\\rho (P\\otimes Q)^{\\dag}=\\sum_{i=1}^{I}p_i(P\\otimes Q)|v_i\\ra\\la v_i| (P\\otimes Q)^{\\dag}=\\sum_{i=1}^{I}p_i|w_i\\ra\\la w_i|,$$ with\n$$|w_i\\ra=P\\otimes Q|v_i\\ra=\\sum_{k,l=1}^{n,m}a_{kl}^{(i)\\prime }|kl\\ra,\\ \\ \\sum_{k,l=1}^{n,m}a_{kl}^{(i)\\prime }a_{kl}^{{(i)\\prime }\\ast}=1,\\ \\ a_{kl}^{(i)\\prime }\\in \\Cb.$$ Denote $(A_i^{\\prime})_{kl}=\\sqrt{p_i}a_{kl}^{(i)\\prime}$. We have \\be A_{i}^{\\prime}=PA_iQ^{T}. \\ee Therefore $tr(A_iA_{j}^{\\dag})=tr(A_i^{\\prime}A_{j}^{\\prime \\dag})$ and $\\Omega(\\rho)=\\Omega(\\tilde{\\rho})$. Hence $F_{i}(\\Omega(\\rho))=F_{i}(\\Omega(\\tilde{\\rho}))$, and $F_{i}(\\Omega)$, $i=1,\\cdots, I$, are invariant under local unitary transformations. \\qed\n\nIn particular, $F_1= \\sum_i tr(A_iA_{i}^{\\dag})$ and $F_I = \\det (\\Omega)$. For the case $I=2$, one has\n$$\\Omega=\\left(\n\\begin{array}{cc}\ntr(A_1A_1^{\\dag}) & tr(A_1A_2^{\\dag}) \\\\\ntr(A_2A_1^{\\dag}) & tr(A_2A_2^{\\dag}) \\\\\n\\end{array}\n\\right)$$ and $F_1=tr(A_1A_1^{\\dag})+tr(A_2A_2^{\\dag})$,\n$F_2=tr(A_1A_1^{\\dag})tr(A_2A_2^{\\dag})-tr(A_1A_2^{\\dag})tr(A_2A_1^{\\dag})$.\n\n\\noindent{ \\bf Remark:} The number of local invariants $F_i$ is uniquely determined by the rank $r$ of the mixed state $\\rho$, i.e. $I=r$. Therefore we only need to calculate the invariants corresponding to the eigenvector decomposition, because for an arbitrary pure state decomposition $\\rho=\\Sigma_{j=1}^{J} q_j|\\psi_j\\rangle\\langle\\psi_j|$ with $J>r$, the above determinant is the same as that of the eigenvector decomposition $\\rho=\\Sigma_{i=1}^r p_i|\\phi_i\\rangle\\langle\\phi_i|$ after adding $J-r$ zero vectors. Indeed, the determinant $\\det(\\Omega^\\prime -\\lambda\\,E)$ of the eigenvector decomposition of $\\rho$ padded with $J-r$ zero vectors and the determinant $\\det(\\Omega-\\lambda\\,E)$ of the eigenvector decomposition without the added zero vectors are related by $\\det(\\Omega^\\prime -\\lambda\\,E)=\\lambda^{J-r}\\det(\\Omega-\\lambda\\,E)$. This means that the number of independent local invariants given by (\\ref{thm}) does not depend on the number of pure states in the ensemble of a given $\\rho$. Therefore if two mixed states $\\rho$ and $\\tilde{\\rho}$ have different ranks, they are not equivalent under local unitary transformations. 
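As an illustrative numerical check added here, one can draw random coefficient matrices $A_j$, mix them with a random unitary $U$ into $B_i=\\sum_j U_{ij}A_j$, and confirm that the two decompositions describe the same $\\rho$ and yield $\\Omega$ matrices with identical characteristic polynomial coefficients, as Theorem 1 (i) asserts. The sketch assumes NumPy.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
I, n, m = 3, 2, 2

# Random coefficient matrices A_j of one decomposition (weights absorbed in A_j).
As = [rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m)) for _ in range(I)]

# Random I x I unitary from the QR decomposition of a complex Gaussian matrix.
U, _ = np.linalg.qr(rng.normal(size=(I, I)) + 1j * rng.normal(size=(I, I)))
Bs = [sum(U[i, j] * As[j] for j in range(I)) for i in range(I)]

def omega(Xs):
    return np.array([[np.trace(Xi @ Xj.conj().T) for Xj in Xs] for Xi in Xs])

# Both decompositions give the same density matrix ...
rho_A = sum(np.outer(A.reshape(-1), A.conj().reshape(-1)) for A in As)
rho_B = sum(np.outer(B.reshape(-1), B.conj().reshape(-1)) for B in Bs)
assert np.allclose(rho_A, rho_B)

# ... and Omega matrices with the same characteristic polynomial.
assert np.allclose(np.poly(omega(As)), np.poly(omega(Bs)))
\\end{verbatim}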
If their ranks are the same, one only needs to calculate the corresponding invariants with respect to the same number $I$ of pure states in the pure state decompositions.\n\nIn fact, for a quantum state $\\rho$ in its eigenvector decomposition $\\rho=\\sum_i \\lambda_i |\\psi_i\\rangle\\langle\\psi_i|$, the corresponding matrix $\\Omega$ is diagonal with $\\rho$'s eigenvalues $\\lambda_i$ as the diagonal entries. In this case the local invariants from Theorem 1 are just the coefficients of the characteristic polynomial of the quantum state $\\rho$. Theorem 1 shows that these coefficients are local invariants and are independent of the detailed pure state decompositions. The simple approach employed in Theorem 1 can, however, be generalized to construct more local invariants that are independent of the detailed pure state decompositions by using hyperdeterminants \\cite{gel}.\n\nIn order to derive more parametrization-independent quantities we consider the multilinear form $f_A: \\underbrace{V\\otimes \\cdots \\otimes V}_\\text{$2s$}\\mapsto \\mathbb C$ given by\n\\begin{equation}\\label{eq:form}\nf_A(e_{i_1}, \\cdots, e_{i_s}, e_{j_1}, \\cdots,\ne_{j_s})=tr(A_{i_1}A_{j_1}^{\\dagger}\\cdots\nA_{i_s}A_{j_s}^{\\dagger}),\n\\end{equation}\nwhere $e_i$ ($1\\leq i\\leq I$) are the standard basis elements of $V={\\mathbb C}^I$. The multilinear form $f_A$ can also be written as a tensor in $V^*\\otimes\\cdots \\otimes V^*$:\n\\begin{equation}\\label{eq:form2}\nf_A=\\sum_{\\underline{i},\n\\underline{j}}tr(A_{i_1}A_{j_1}^{\\dagger}\\cdots\nA_{i_s}A_{j_s}^{\\dagger})e_{i_1}^*\\otimes\\cdots\\otimes\ne_{i_s}^*\\otimes e_{j_1}^*\\otimes\\cdots\\otimes e_{j_s}^*,\n\\end{equation}\nwhere $e_i^*$ are the standard $1$-forms on ${\\mathbb C}^I$ such that $e_i^*(e_j)=\\delta_{ij}$, and $\\underline{i}=(i_1, \\cdots, i_s), \\underline{j}=(j_1, \\cdots, j_s), 1\\leq i_p, j_p\\leq I$. In general we call the $2s$-dimensional matrix, or hypermatrix, $A=(A_{\\underline{i}\\underline{j}})=(tr(A_{i_1}A_{j_1}^{\\dagger}\\cdots A_{i_s}A_{j_s}^{\\dagger}))$ formed by the coefficients of (\\ref{eq:form2}) the hypermatrix of the multilinear form $f_A$ relative to the standard basis. \n\nThe Cayley hyperdeterminant Det($A$) \\cite{gel} is defined to be the resultant of the multilinear form $f_A$; that is, Det($A$) is the suitably normalized defining equation of the hypersurface determined by the multilinear form $f_A$. It is known \\cite{gel} that the hyperdeterminant exists for a given format, and is unique up to a scalar factor, if and only if the largest number in the format is less than or equal to the sum of the other numbers in the format. Hyperdeterminants enjoy many of the properties of determinants. One of the most familiar properties of determinants, the multiplication rule $det(AB) = det(A) det(B)$, can be generalized to hyperdeterminants as follows. Given a multilinear form $f(x^{(1)}, ..., x^{(r)})$, suppose that a linear transformation acts on one of its components via an $n\\times n$ matrix $B$, $y^{(r)} = B x^{(r)}$. Then\n\\begin{equation}\\label{eq:det-action}\nDet(f.B) = Det(f) det(B)^{N\/n},\n\\end{equation}\nwhere $N$ is the degree of the hyperdeterminant. 
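To make Eq. (\\ref{eq:det-action}) concrete, here is an illustrative numerical check (an addition to the text, assuming NumPy) based on the explicit $2\\times 2\\times 2$ hyperdeterminant written out below: acting on one slot of a random hypermatrix with a matrix $B$ rescales the hyperdeterminant by $det(B)^{N\/n}=det(B)^{2}$, since $N=4$ and $n=2$; in particular it is unchanged whenever $det(B)=1$.
\\begin{verbatim}
import numpy as np

def cayley_hyperdet(a):
    # Explicit 2x2x2 hyperdeterminant, a[i, j, k] with i, j, k in {0, 1}.
    return (a[0,0,0]**2*a[1,1,1]**2 + a[0,0,1]**2*a[1,1,0]**2
            + a[0,1,0]**2*a[1,0,1]**2 + a[1,0,0]**2*a[0,1,1]**2
            - 2*(a[0,0,0]*a[0,0,1]*a[1,1,0]*a[1,1,1]
                 + a[0,0,0]*a[0,1,0]*a[1,0,1]*a[1,1,1]
                 + a[0,0,0]*a[0,1,1]*a[1,0,0]*a[1,1,1]
                 + a[0,0,1]*a[0,1,0]*a[1,0,1]*a[1,1,0]
                 + a[0,0,1]*a[0,1,1]*a[1,1,0]*a[1,0,0]
                 + a[0,1,0]*a[0,1,1]*a[1,0,1]*a[1,0,0])
            + 4*(a[0,0,0]*a[0,1,1]*a[1,0,1]*a[1,1,0]
                 + a[0,0,1]*a[0,1,0]*a[1,0,0]*a[1,1,1]))

rng = np.random.default_rng(1)
a = rng.normal(size=(2, 2, 2))
B = rng.normal(size=(2, 2))

# Act with B on the first slot: a'_{ijk} = sum_p B_{ip} a_{pjk}.
a_B = np.einsum('ip,pjk->ijk', B, a)

# Det(f.B) = Det(f) det(B)^{N/n} with N = 4, n = 2.
assert np.isclose(cayley_hyperdet(a_B),
                  cayley_hyperdet(a) * np.linalg.det(B)**2)
\\end{verbatim}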
Therefore we have the following result.\n\n\\begin{lemma} The hyperdeterminant of format $(k_1,\\ldots,k_r)$ is an invariant under the action of the group $SL(k_1) \\otimes \\cdots \\otimes SL(k_r)$, and consequently is also invariant under $SU(k_1) \\otimes \\cdots \\otimes SU(k_r)$.\n\\end{lemma}\n\\noindent{ \\bf Proof:} For $(A, B, \\cdots, C)\\in SL(k_1) \\otimes \\cdots \\otimes SL(k_r)$, it follows from Eq. (\\ref{eq:det-action}) that\n\\begin{align}\\label{eq:det-eq}\nDet((A_{(1)}\\cdot B_{(2)}\\cdot\\cdots C_{(r)}\\cdot)f) = Det(f)\ndet(A)^{N\/k_1}det(B)^{N\/k_2}\\cdots det(C)^{N\/k_r} =Det(f).\n\\end{align}\n\\qed\n\nThe three-dimensional hyperdeterminant of the format $2\\times 2\\times 2$ is known as Cayley's hyperdeterminant \\cite{Ca}. In this case the hyperdeterminant of a hypermatrix $A$ with components $a_{ijk}$, $i,j,k \\in \\{0, 1\\}$, is given by\n\\begin{eqnarray}\nDet(A) &=& a_{000}^2a_{111}^2 + a_{001}^2a_{110}^2 +\na_{010}^2a_{101}^2 + a_{100}^2a_{011}^2 -\n2a_{000}a_{001}a_{110}a_{111}\\\\\\nonumber\n&&-2a_{000}a_{010}a_{101}a_{111}-2a_{000}a_{011}a_{100}a_{111} -\n2a_{001}a_{010}a_{101}a_{110}\n\\\\\\nonumber &&-2a_{001}a_{011}a_{110}a_{100}- 2a_{010}a_{011}a_{101}a_{100} +\n4a_{000}a_{011}a_{101}a_{110}\\\\\\nonumber && +\n4a_{001}a_{010}a_{100}a_{111}.\n\\end{eqnarray}\nThis hyperdeterminant can be written in a more compact form by using the Einstein convention and the Levi-Civita symbol $\\varepsilon^{ij}$, with $\\varepsilon^{00} =\\varepsilon^{11} = 0$, $\\varepsilon^{01} = -\\varepsilon^{10} = 1$: setting $b_{kn} = (1\/2)\\varepsilon^{il}\\varepsilon^{jm}a_{ijk}a_{lmn}$, one has $ Det(A) =(1\/2)\\varepsilon^{il}\\varepsilon^{jm}b_{ij}b_{lm}$. The four-dimensional hyperdeterminant of the format $2\\times 2\\times 2 \\times 2$ has been given in Ref. \\cite{Luq}.\n\nFor the general mixed state $\\rho$ in Eq. (\\ref{general decomposition-1}), we can define a hypermatrix $\\Omega_{s}$ with entries\n\\begin{eqnarray}\n(\\Omega_{s})_{i_1i_2\\cdots i_sj_1j_2\\cdots j_s}\n=tr(A_{i_1}A_{j_1}^{\\dag}A_{i_2}A_{j_2}^{\\dag}\\cdots\nA_{i_s}A_{j_s}^{\\dag}),\n\\end{eqnarray}\nfor $i_k,j_k=1,\\cdots,I$ and $s \\geq 1$. The format of $\\Omega_{s}$ is $I\\times \\cdots \\times I$.\n\n\\noindent{ \\bf Theorem 2:} $Det (\\Omega_s-\\lambda\\,E)$, with $E=(E_{i_1,i_2, \\cdots,i_s,j_1,j_2,\\cdots,j_s})=(\\delta_{i_1j_1}\\delta_{i_2j_2} \\cdots \\delta_{i_sj_s})$, is independent of the pure state decompositions of $\\rho$. It is also invariant under local unitary transformations of $\\rho$. In particular, all coefficients of the polynomial $Det (\\Omega_s-\\lambda\\,E)$ are local invariants, independent of the pure state decompositions and invariant under local unitary transformations.\n\n\\noindent{ \\bf Proof:} We first show that it is independent of the pure state decomposition of $\\rho$. Let Eq. (\\ref{general decomposition-1}) and Eq. (\\ref{general decomposition-2}) be two different representations of a given mixed state $\\rho$. 
We have\n\\begin{eqnarray}\n(\\Omega_s^\\prime )_{i_1i_2\\cdots i_sj_1j_2\\cdots\nj_s}&=&tr(B_{i_1}B_{j_1}^{\\dag}B_{i_2}B_{j_2}^{\\dag}\\cdots\n B_{i_s}B_{j_s}^{\\dag})\\\\\\nonumber\n&=&tr\\left[\\Sigma_{i_1^\\prime j_1^\\prime, \\cdots, i_s^\\prime\n j_s^\\prime} U_{i_1i_1^\\prime}A_{i_1^{\\prime}}U_{j_1j_1^\\prime}^\\ast\nA_{j_1^{\\prime}}^{\\dag}\\cdots\nU_{i_si_s^\\prime}A_{i_s^{\\prime}}U_{j_sj_s^\\prime}^\\ast\nA_{j_s^{\\prime}}^{\\dag}\\right]\\\\\\nonumber &=&((U \\otimes U \\otimes\n\\cdots \\otimes U) (\\Omega_{s})(U^{\\dag} \\otimes U^{\\dag} \\otimes\n\\cdots \\otimes U^{\\dag}))_{i_1i_2\\cdots i_sj_1j_2\\cdots j_s}.\n\\end{eqnarray}\nTherefore $\\Omega^\\prime _s=(U \\otimes U \\otimes \\cdots \\otimes U) \\Omega_{s} (U^{\\dag} \\otimes U^{\\dag} \\otimes \\cdots \\otimes U^{\\dag})$. In terms of the action introduced above, the associated multilinear form $f_{\\omega}$ is acted upon by $U \\otimes U \\otimes \\cdots \\otimes U$ and $U^{\\dag} \\otimes U^{\\dag} \\otimes \\cdots \\otimes U^{\\dag}$ as follows:\n\\begin{equation*}\n(U_{(1)}\\cdot \\cdots U_{(s)}\\cdot U^{\\ast}_{(1)}\\cdot \\cdots\nU^{\\ast}_{(s)}\\cdot) f_{\\omega}.\n\\end{equation*}\nUsing Eq. (\\ref{eq:det-eq}) we get $Det (\\Omega^\\prime _s-\\lambda\\,E)=Det (\\Omega_s-\\lambda\\,E)$, and thus $Det (\\Omega_s-\\lambda\\,E)$ does not depend on the detailed pure state decompositions of a given $\\rho$. Note that in general we do not know the exact formula for the hyperdeterminant, but we can still derive its invariance abstractly.\n\nOn the other hand, under a local unitary transformation $\\tilde{\\rho}=(P\\otimes Q)\\rho(P\\otimes Q)^\\dag$ with $P\\otimes Q\\in SU(n)\\otimes SU(m)$, similarly to the proof of the second part of Theorem 1 and using the Lemma together with Eq. (\\ref{eq:det-action}), it is easy to get $\\Omega_s(\\rho)=\\Omega_s(\\tilde{\\rho})$. Therefore $Det(\\Omega_s-\\lambda\\,E)$ is invariant under local unitary transformations. Moreover, following \\cite{Luq}, the invariant polynomials are invariant under local unitary transformations. \\qed\n\nAs an application of our theorems we now give two illustrative examples.\n\nExample 1: Consider the two mixed states $\\rho_1=diag\\{1\/2,1\/2,0,0\\}$ and $\\rho_2=diag\\{1\/2,0,1\/2,0\\}$. $\\rho_1$ has a pure state decomposition with\n$$\nA_0=\\left(\n\\begin{array}{cc}\n\\frac{1}{\\sqrt{2}} & 0 \\\\\n0 & 0 \\\\\n\\end{array}\n\\right),~~~~ A_1=\\left(\n\\begin{array}{cc}\n 0 & \\frac{1}{\\sqrt{2}} \\\\\n 0 & 0\n \\end{array}\n \\right),\n$$\nwhile $\\rho_2$ has a pure state decomposition with\n$$\nB_0=\\left(\n \\begin{array}{cc}\n \\frac{1}{\\sqrt{2}} & 0 \\\\\n 0 & 0 \\\\\n \\end{array}\n \\right),~~~~\n B_1=\\left(\n \\begin{array}{cc}\n 0 & 0 \\\\\n \\frac{1}{\\sqrt{2}} & 0\n \\end{array}\n \\right).\n$$\nWe have the corresponding matrices $(\\Omega(\\rho_1))_{i,j}=tr(A_iA_{j}^{\\dag})$ and $(\\Omega (\\rho_2))_{i,j}=tr(B_iB_{j}^{\\dag})$, $i,j=0,1$. From Theorem 1 one finds that these two states have the same values of the invariants in Eq. (\\ref{thm}), $F_i(\\Omega(\\rho_1)) =F_i(\\Omega(\\rho_2))$.\n\nWe now consider further the four-dimensional hyperdeterminant of the format $2\\times 2\\times 2 \\times 2$ \\cite{Luq}. Let $(\\Omega(\\rho_1))_{ijkl}=tr(A_iA_{j}^{\\dag}A_kA_{l}^{\\dag})\\equiv a_r$, $r=0,\\cdots,15$, where $r=8i+4j+2k+l$. From Ref. 
\\cite{Luq}, one invariant of degree $4$ is given by\n$$\nN(\\rho_1)=det\\left(\n\\begin{array}{cccc}\na_{0} & a_{1} & a_{8} & a_{9} \\\\\n a_{2} & a_{3} & a_{10} & a_{11} \\\\\n a_{4} & a_{5} & a_{12} & a_{13} \\\\\n a_{6} & a_{7} & a_{14} & a_{15}\n \\end{array}\n\\right)=\\frac{1}{256}.\n$$\nHowever, for $\\rho_{2}$ we have $N(\\rho_{2})=0$. Therefore $\\rho_1 $ and $\\rho_{2}$ are not equivalent under local unitary transformations.\n\nIn Ref. \\cite{che}, the Ky Fan norm ${\\cal{N}}(\\rho)$ of the realignment matrix of a quantum state is proved to be invariant under local unitary operations. By calculation we find ${\\cal{N}}(\\rho_1)={\\cal{N}}(\\rho_2)=\\frac{1}{\\sqrt{2}}$. This means that the Ky Fan norm of the realignment matrix cannot detect that $\\rho_1$ and $\\rho_2$ are not equivalent under local unitary transformations. For these two states, therefore, the invariants of Theorem 2 are more powerful.\n\n\nExample 2: Consider the two mixed states $\\sigma_1=\\left(\n\\begin{array}{cccc}\n\\frac{1}{3} & 0 & 0 & 0 \\\\\n 0 & \\frac{1}{3} & \\frac{1}{3} & 0 \\\\\n 0 & \\frac{1}{3} & \\frac{1}{3} & 0 \\\\\n 0 & 0 & 0 & 0\n \\end{array}\n\\right)$ and $\\sigma_2=diag\\{2\/3,0,0,1\/3\\}$. Then $\\sigma_1$ has a pure state decomposition with\n$$\nC_0=\\left(\n\\begin{array}{cc}\n\\frac{1}{\\sqrt{3}} & 0 \\\\\n0 & 0 \\\\\n\\end{array}\n\\right),~~~~ C_1=\\left(\n\\begin{array}{cc}\n 0 & \\frac{1}{\\sqrt{3}} \\\\\n \\frac{1}{\\sqrt{3}} & 0\n \\end{array}\n \\right),\n$$\nwhile $\\sigma_2$ has a pure state decomposition with\n$$\nD_0=\\left(\n \\begin{array}{cc}\n \\frac{\\sqrt{2}}{\\sqrt{3}} & 0 \\\\\n 0 & 0 \\\\\n \\end{array}\n \\right),~~~~\n D_1=\\left(\n \\begin{array}{cc}\n 0 & 0 \\\\\n 0 & \\frac{1}{\\sqrt{3}}\n \\end{array}\n \\right).\n$$\nWe have the corresponding matrices $(\\Omega(\\sigma_1))_{i,j}=tr(C_iC_{j}^{\\dag})$ and $(\\Omega (\\sigma_2))_{i,j}=tr(D_iD_{j}^{\\dag})$, $i,j=0,1$. From Theorem 1 one finds that these two states have the same values of the invariants in Eq. (\\ref{thm}), $F_i(\\Omega(\\sigma_1)) =F_i(\\Omega(\\sigma_2))$.\n\nWe again consider the four-dimensional hyperdeterminant of the format $2\\times 2\\times 2 \\times 2$. Let $(\\Omega(\\sigma_1))_{ijkl}=tr(C_iC_{j}^{\\dag}C_kC_{l}^{\\dag})\\equiv a_s$, $s=0,\\cdots,15$, where $s=8i+4j+2k+l$. From Ref. \\cite{Luq}, another invariant of degree $4$ is given by\n$$\nM(\\sigma_1)=det\\left(\n\\begin{array}{cccc}\na_{0} & a_{8} & a_{2} & a_{10} \\\\\n a_{1} & a_{9} & a_{3} & a_{11} \\\\\n a_{4} & a_{12} & a_{6} & a_{14} \\\\\n a_{5} & a_{13} & a_{7} & a_{15}\n \\end{array}\n\\right)=\\frac{1}{6561}.\n$$\nHowever, for $\\sigma_{2}$ we have $M(\\sigma_{2})=0$. Therefore $\\sigma_1 $ and $\\sigma_{2}$ are not equivalent under local unitary transformations. From Example 2 one can also see that the spectra of the reduced one-qubit density matrices of $\\sigma_1$ and $\\sigma_2$ coincide; therefore the spectra of the reduced one-qubit density matrices alone cannot decide the equivalence of given states.\n\nOur results can be generalized to the multipartite case. Let $H_1,H_2,\\cdots,H_m$ be $n_1,n_2, \\cdots, n_m$-dimensional complex Hilbert spaces with $\\{\\vert k_1\\rangle\\}_{k_1=1}^{n_1}$, $\\{\\vert k_2\\rangle\\}_{k_2=1}^{n_2}$, $\\cdots $, $\\{\\vert k_m\\rangle\\}_{k_m=1}^{n_m}$ the orthonormal bases of $H_1, H_2, \\cdots, H_m$, respectively. 
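As a final illustrative aside added here, the finite checks of Example 1 are easy to reproduce numerically. The sketch below, assuming NumPy, builds the order-$4$ hypermatrix with entries $tr(A_iA_{j}^{\\dag}A_kA_{l}^{\\dag})$ and evaluates the degree-$4$ determinant $N$ used above, giving $N(\\rho_1)=1\/256$ and $N(\\rho_2)=0$.
\\begin{verbatim}
import numpy as np

def omega2(As):
    # Order-4 hypermatrix with entries tr(A_i A_j^dagger A_k A_l^dagger).
    I = len(As)
    W = np.empty((I, I, I, I), dtype=complex)
    for i in range(I):
        for j in range(I):
            for k in range(I):
                for l in range(I):
                    W[i, j, k, l] = np.trace(
                        As[i] @ As[j].conj().T @ As[k] @ As[l].conj().T)
    return W

def N_invariant(W):
    # a_r with r = 8i+4j+2k+l (C-order flattening), arranged as in Example 1.
    a = W.reshape(-1)
    return np.linalg.det(np.array([[a[0], a[1], a[8],  a[9]],
                                   [a[2], a[3], a[10], a[11]],
                                   [a[4], a[5], a[12], a[13]],
                                   [a[6], a[7], a[14], a[15]]]))

A = [np.array([[1, 0], [0, 0]]) / np.sqrt(2), np.array([[0, 1], [0, 0]]) / np.sqrt(2)]
B = [np.array([[1, 0], [0, 0]]) / np.sqrt(2), np.array([[0, 0], [1, 0]]) / np.sqrt(2)]
print(N_invariant(omega2(A)).real, N_invariant(omega2(B)).real)  # 1/256 and 0
\\end{verbatim}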
Let $\\rho$ be an arbitrary mixed state defined on $H_1\\otimes H_2\\otimes\\cdots\\otimes H_m$, $\\rho=\\sum_{i=1}^Ip_i|v_i\\ra\\la v_i|$, where $|v_i\\ra$ is a multipartite pure state of the form:\n$|v_i\\ra=\\sum_{k_1,k_2,\\cdots,k_m=1}^{n_1,n_2,\\cdots,n_m}a_{k_1k_2\\cdots k_m}^{(i)}|k_1k_2\\cdots k_m\\ra,\\ \\ a_{k_1k_2\\cdots k_m}^{(i)}\\in \\Cb$. Now we view $|v_i\\ra$ as a bipartite pure state under the partition between the first $l$ subsystems and the rest, $1\\leq l\n