diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzchpq" "b/data_all_eng_slimpj/shuffled/split2/finalzzchpq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzchpq" @@ -0,0 +1,5 @@ +{"text":"\\section{Abstract}\nApplied machine learning (ML) has rapidly spread throughout the physical sciences; in fact, ML-based data analysis and experimental decision-making has become commonplace.\nWe suggest a shift in the conversation from proving that ML can be used to evaluating how to equitably and effectively implement ML for science.\nWe advocate a shift from a \"more data, more compute\" mentality to a model-oriented approach that prioritizes using machine learning to support the ecosystem of computational models and experimental measurements.\nWe also recommend an open conversation about dataset bias to stabilize productive research through careful model interrogation and deliberate exploitation of known biases.\nFurther, we encourage the community to develop ML methods that connect experiments with theoretical models to increase scientific understanding rather than incrementally optimizing materials.\nFurther, we encourage the community to develop machine learning methods that seek to connect experiments with theoretical models to increase scientific understanding rather than simply use them optimize materials. \nMoreover we envision a future of radical materials innovations enabled by computational creativity tools combined with online visualization and analysis tools that support active outside-the-box thinking inside the scientific knowledge feedback loop.\nFinally, as a community we must acknowledge ethical issues that can arise from blindly following machine learning predictions and the issues of social equity that will arise if data, code, and computational resources are not readily available to all.\n\n\\section{Introduction}\nSince Frank Rosenblatt created Perceptron to play checkers \\cite{Rosenblatt1960}, machine learning (ML) applications have been used to emulate human intelligence. The field has grown immensely with the advent of ever more powerful computers with increasingly smaller size combined with the development of robust statistical analyses. These advances allowed Deep Blue to beat Grandmaster Gary Kasparov in chess and Watson to win {\\it Jeopardy!} The technology has since progressed to more practical applications such as advanced manufacturing and common tasks we now expect from our phones like image and speech recognition. The future of ML promises to obviate much of the tedium of everyday life by assuming responsibility for more and more complex processes, \\textit{e.g.}, autonomous driving.\n\nWhen it comes to scientific application, our perspective is that ML methods are just another component of the scientific modeling toolbox, with a somewhat different profile of representational basis, parameterization, computational complexity, and data\/sample efficiency. Fully embracing this view will help the materials and chemistry communities to overcome perceived limitations and at the same time evaluate and deploy these methods with the same le vel of rigor and introspection as any physics-based modeling methodology. 
Toward this end, in this essay we identify five areas in which materials researchers can clarify our thinking to enable a vibrant and productive community of scientific ML practitioners:\n\n\\begin{enumerate}\n\\item Maintain perspective on resources required\n\\item Openly assess dataset bias\n\\item Keep sight of the goal\n\\item Dream big enough for radical innovation\n\\item Champion an ethical and equitable research ecosystem\n\\end{enumerate}\n\n\\section{Maintain perspective on resources required}\\label{sec:resources}\n\nThe recent high-profile successes in mainstream ML applications enabled by internet-scale data and massive computation~\\cite{DBLP:journals\/corr\/abs-2005-14165,deng2009imagenet} have spurred two lines of discussion in the materials community that are worth examining more closely. The first is an unmediated and limiting preference for large scale data and computation, under the assumption that successful machine learning is unrealistic for materials scientists with datasets that are orders of magnitude smaller than those at the forefront of the publicity surrounding deep learning. The second is a tendency to dismiss brute-force machine learning systems as unscientific. While there is some validity to both these viewpoints, there are opportunities in materials research for productive, creative ML work with small datasets and for the \"go big or go home\" brute-force approach.\n\n\\subsection{Molehills of data (or compute) are sometimes better than mountains}\nA common sentiment in the contemporary deep learning community is that the most reliable means of improving the performance of a deep learning system is to amass ever larger datasets and apply raw computational power. This can sometimes encourage the fallacy that large scale data and computation are fundamental requirements for success with ML methods. This can lead to needlessly deploying massively overparameterized models when simpler ones may be more appropriate~\\cite{d2020underspecification}, and it limits the scope of applied ML research in materials by biasing the set of problems people are willing to consider addressing. There are many examples of productive, creative ML work with small datasets in materials research that counter this notion~\\cite{HattrickSimpers2018, Xue2016}.\n\nIn the small data regime, high quality data with informative features often trump excessive computational power with massive data and weakly correlated features. A promising approach is to exploit the bias-variance tradeoff by performing more rigorous feature selection or crafting a more physically motivated model form~\\cite{childs2019embedding}. Alternatively, it may be wise to reduce the scope of the ML task by restricting the material design space or by using ML to solve a smaller chunk of the problem at hand. ML tools for exploratory analysis with appropriate features can also give us access to much higher dimensional spaces even at an early stage of the research, helping us gain a bird's-eye view of our target.\n\nThere are also specific machine learning disciplines aimed at addressing the well-known issues of small datasets, dataset bias, noise, incomplete featurization, and over-generalization, and practical tools implementing these methods are beginning to appear. Data augmentation and other regularization strategies can allow even small datasets to be treated with large deep learning models. 
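As a minimal illustrative sketch (our own construction, not taken from the works cited above), the following Python snippet augments a small set of measured spectra with small grid shifts, intensity scalings, and additive noise, transformations that are label-preserving in many spectroscopy settings:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef augment_spectrum(y, n_copies=10, noise=0.01, max_shift=3):\n    # y: 1D array of intensities on a fixed wavelength grid.\n    # Each copy gets a small grid shift, a small multiplicative\n    # scale, and additive noise; the label is assumed unchanged.\n    copies = []\n    for _ in range(n_copies):\n        y_aug = np.roll(y, rng.integers(-max_shift, max_shift + 1))\n        y_aug = y_aug * rng.normal(1.0, 0.02)\n        y_aug = y_aug + rng.normal(0.0, noise, size=y.shape)\n        copies.append(y_aug)\n    return np.stack(copies)\n\nX_small = rng.random((50, 256))  # placeholder for 50 real spectra\nX_train = np.concatenate([augment_spectrum(y) for y in X_small])\n# A 50-spectrum dataset becomes 500 training instances.\n\\end{verbatim}\nWhether such transformations truly preserve the labels is itself a physical assumption that should be validated for each measurement modality. 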
Another common approach is transfer learning, where a proxy model is trained on a large dataset and adapted to a related task with fewer data points \\cite{Yamada2019, Hoffmann2019, goetz2021addressing}. Chen {\\it et al.} showed that multi-fidelity graph networks could leverage comparatively inexpensive low-fidelity calculations to bolster the accuracy of ML predictions for expensive high-fidelity calculations~\\cite{Chen2021}. Finally, active learning methods are now being explored in many areas of materials research, where surrogate models are initialized on small datasets and updated as new data are acquired and new predictions are made, often in a manner that balances exploration with optimization~\\cite{Lookman2019}. Generally, a solid understanding of the uncertainty in the data is critical for success with these strategies, but ML systems can lead us to some insights or perhaps serve as a guide for optimization problems that might otherwise be intractable.\n\nWe assert that the materials community would generally benefit from taking a more model-oriented approach to applied machine learning, in contrast to the popular prediction-oriented approach that many method-development papers take. To achieve the goals of scientific discovery and knowledge generation, predictive ML must often play a supporting role within a larger ecosystem of computational models and experimental measurements. It can be productive to reassess~\\cite{Bartel2020} the predictive tasks we are striving to address with ML methods; more carefully thought-out applications may provide more benefit than simply collecting larger datasets and training higher capacity models.\n\n\\subsection{Writing off massive computation can lead to missed opportunities}\nOn the other hand, dismissing brute computation as \"unscientific\" can lead to missed opportunities to meaningfully accelerate and enable new kinds or scales of scientific inquiry~\\cite{Holm2019}. Even without investment in massive datasets or specialized ML models, there is evidence that simply increasing the scale of computation applied can help compensate for small datasets~\\cite{he2019rethinking}. In many cases, advances enabled in this way do not directly contribute to scientific discovery or development, but they absolutely change the landscape of feasible scientific research by lowering the barrier to exploration and increasing the scale and automation of data analysis. \n\nFor example, recent advances in learned potential methods have provided paradigm-shifting performance improvements in protein structure prediction~\\cite{Senior2020} and offer the potential to vastly expand the domain of atomistic material simulation. Similarly, when good physical models of data-generating processes exist, massive computation can enable new scientific applications through scalable automated data analysis systems. Recent examples include phase identification in electron backscatter diffraction (EBSD)~\\cite{Kaufmann2020} and X-ray diffraction (XRD)~\\cite{maffettoneCrystallographyCompanionAgent2021c}, and local structural analysis via extended x-ray absorption fine structure (EXAFS)~\\cite{Timoshenko2020, Schmeide2021}. \n\nEven for domains where high-fidelity forward models are not available, generative models provide similar advances in data analysis capabilities. 
For example, a UV-Vis autoencoder trained on a large dataset of optical spectra~\\cite{Stein2019} directly enabled inverse design of solid-state functional materials~\\cite{Noh2019}.\n\nIn light of the potential value of large-scale computation in advancing fundamental science, the materials field should make computational efficiency~\\cite{DBLP:journals\/corr\/abs-1907-10597} an evaluation criterion alongside accuracy and reproducibility~\\cite{DBLP:journals\/corr\/abs-2003-12206}. Comparison of competing methods using equal computational budgets can provide insight into which methodological innovations actually contribute to improved performance (as opposed to simply boosting model capacity) and can provide context for the feasibility of various methods to be deployed as online data analysis tools. Careful design and interpretation of benchmark tasks and performance measures are needed for the community to avoid chasing arbitrary targets that do not meaningfully facilitate scientific discovery and development of novel and functional materials.\n\n\\section{Openly assess dataset bias}\\label{sec:bias}\n\\subsection{Acknowledging dataset bias}\nIt is widely accepted that materials datasets are distinct from the datasets used to train and validate machine learning systems for more \"mainstream\" applications in a number of ways. While some of this is hyperbole, there are some genuine differences that have a large impact on the overall outlook for ML in materials research. For instance, there is a community-wide perception that all machine learning problems involve data on the scale of the classic image recognition and spam\/ham problems. While the MNIST\\cite{mnist} dataset contains 70,000 labeled images, about half the number of labeled instances in the Materials Project Database\\cite{Jain2013}, other popular machine learning benchmark datasets are much more modest in size. For instance, the Iris Dataset contains only 50 samples each of three species of Iris and is treated as a standard dataset for evaluating a host of clustering and classification algorithms. As noted above, dataset size is not necessarily the major hurdle for the materials science community in terms of developing and deploying ML systems; however, the data, input representation, and task must each be carefully considered.\n\nViewed as a monolithic dataset, the materials literature is an extremely heterogeneous multiview corpus with a significant fraction of missing entries. Even if this dataset were accessible in a coherent digital form, its diversity and deficiencies would pose substantial hurdles to its suitability for ML-driven science. Most research papers narrowly focus on a single or a small handful of material instances, address only a small subset of potentially relevant properties and characterization modalities, and often fail to adequately quantify measurement uncertainties. Perhaps most importantly, there is a strong systemic bias towards positive results~\\cite{Dwan2008}. All of these factors negatively impact the generalization potential of ML systems. \n\n\\begin{figure}[h!tbp]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{searchunderthelight}\n \\caption{Where to search for new discoveries?}\n \\label{fig:search}\n\\end{figure}\n\nTwo aspects of publication bias play a particularly large role: domain bias and selection bias. Domain bias results when training datasets do not adequately cover the input space. For example, Jia {\\it et al.} recently demonstrated that the \"tried and true\" method of selecting reagents following previous successes artificially constrained the range of chemical space searched, providing the AI with a distorted view of the viable parameter space~\\cite{Jia2019}. Severe domain bias can lead to overly optimistic estimates of the performance of ML systems~\\cite{Wallach2018, Rauer2020} or in the worst case even render them unusable for real-world scientific application \\cite{griffiths2021dataset}. \n\nSelection bias arises when some external factor influences the likelihood of a data point's inclusion in the dataset.\nIn scientific research, a major source of such selection bias is the large number of unreported failures. For instance, the Landolt-Bornstein collection lists 71\\% of the alloys as being glass formers, while the actual fraction of glass-forming compounds is estimated to be 5\\%~\\cite{10.1007\/978-3-642-13850-8}. This skews the apparent prior probability of glass formation and further complicates the already challenging task of learning from imbalanced datasets. Raccuglia {\\it et al.} reported on how incorporating failed experiments into ML models can actually improve the overall predictive power of a model~\\cite{Raccuglia2016}.\n\nFurthermore, the annotations or targets used to train ML systems do not necessarily represent true physical ground truth. As an example, in the field of metallic glasses the full width at half-maximum (FWHM) of the strongest diffraction peak at low {\\it q} is often used to categorize thin-film material as being metallic glass, nanocrystalline, or crystalline. Across the literature the FWHM value used as the threshold to distinguish between the first two classes varies from 0.4 to 0.7 \\AA$^{-1}$ (with associated uncertainties) depending upon the research group. Although compendia invariably capture the label ascribed to the samples, they almost universally omit the threshold used for the classification, the uncertainty in the measurement of the FWHM, and the associated synthesis and characterization metadata. Comprehensive studies often report only reduced summaries for the datasets presented and include full details only for a subset of \"representative data.\" These shortcomings are common across the primary materials science literature. Given that even experts can reasonably disagree on the interpretation of experimental results, the lack of access to primary datasets prevents detailed model critique, posing a substantial impediment to model validation~\\cite{HattrickSimpers2021,griffiths2021dataset}. The push for creating F.A.I.R. (Findable, Accessible, Interoperable, and Reusable \\cite{Wilkinson2016}) datasets with human\/computer readable data structures notwithstanding, most of the data and meta-data for materials that have ever been made and studied have been lost to time.\n\nSystematic errors in datasets are not restricted to experimental results alone. Theoretical predictions from high-throughput density functional theory (DFT) databases, for example, are a valuable resource for predicted material (meta-) stability, crystal structures, and physical properties, but DFT computations contain several underlying assumptions that are responsible for known systematic errors, e.g., in calculated band gaps. 
DFT experts are well aware of these limitations and their implications for model building; however, scientists unfamiliar with the field may not be able to reasonably draw conclusions about the potential viability of a model's predictions given these limitations. Discrepancies between DFT and experimental data will grow as systems become increasingly complex, a longstanding trend in applied materials science. Heterogeneous models, in particular, may carry large uncertainties depending on the complexity of the input structure, and often little to no information is given about that structure or the rationale for choosing it.\n\nFinally, even balanced datasets with quantified uncertainties are not guaranteed to generate predictive models if the features used to describe the materials and\/or how they are made are not sufficiently descriptive. Holistically describing the composition, structure, and microstructure of existing materials is a challenging problem, and the feature set used (e.g., microstructure 2-point correlation, compositional descriptors and radial distribution functions for functional materials, and calculated physical properties) is largely community driven. This presupposes that we know and can measure the relevant features during our experiments.\nOften, identifying the parameters that strongly influence materials synthesis and the structural aspects highly correlated to function is a matter of scientific inquiry in and of itself.\nFor example, identifying the importance of temperature in cross-linking rubber or the effect of moisture in the reproducible growth of super-dense, vertically aligned single-walled carbon nanotubes requires careful observation and lateral thinking to connect seemingly independent or unimportant variables.\nIf these parameters (or covariate features, \\textit{e.g.}, CVD system pump curves) are not captured from the outset, then there is no hope of algorithmically discovering a causal model, and weakly predictive models are likely to be the best-case output.\n\n\\subsection{Productivity in spite of dataset bias}\nBias in historical and as-collected datasets should be acknowledged, but it does not entirely preclude their use to train an AI targeted towards scientific inquiry. Instead, one can continue to gain productive insights from AI by taking the appropriate approach and thinking analytically about the results of the model. \n\nOne method for maintaining \"good\" features and models is to adopt active human intervention in the ML loop. For example, we have recently demonstrated that Random Forest models that are tuned to aggressively maximize only cross-validation accuracy may produce low-quality, unreliable feature rankings as explanations~\\cite{Lei2021}. Carefully tracking which features (and data points) the model is most dependent on for its predictions allows a researcher to ensure that the model is capturing physically relevant trends, identify potential new insights into material behavior, and spot possible outliers. Similarly, when physics-based models are used to generate features and training data for ML models, subsequent comparison of new predictions to theory-based results offers the opportunity for improvement of both models~\\cite{Liu2020}. An alternative approach, as recently demonstrated by Kusne {\\it et al.}, is to directly have the ML model request expert input, such as performing a measurement or calculation, that is expected to lower predictive uncertainties~\\cite{Kusne2020}.\n\nEspecially with small datasets, it is important to characterize the extent of dataset bias and perform careful model performance analysis to obtain realistic estimates of the generalization of ML models. See Ref.~\\cite{Rauer2020} for compelling examples and an overview of recently developed unbiasing techniques from the computational chemistry literature, including details on the Asymmetric Validation Embedding method, which quantifies the bias of a dataset through the ability of a first-nearest-neighbor model to memorize the training data. This method explicitly accounts for the label distribution but is specific to classification tasks. Leave-one-cluster-out cross-validation~\\cite{Meredig2018} is more general, using only distances in input space to define cross-validation groups to reduce information leakage between folds. Similarly, De Breuck {\\it et al.} used principal component analysis to investigate the role of dataset bias by examining the density of data points in scores plots~\\cite{DeBreuck2021}. \n\nA culture of careful model criticism is also important for robust applied ML research~\\cite{lipton2018troubling}. A narrow focus on benchmark tasks can lead to false incremental progress, where over time models begin overfitting to a particular test dataset and then lack generalizability beyond the initial dataset~\\cite{DBLP:journals\/corr\/abs-1902-10811}. Recht {\\it et al.}~\\cite{DBLP:journals\/corr\/abs-1902-10811} demonstrated that a broad range of computer vision models suffer from this effect by developing extended test sets for the CIFAR-10 and ImageNet datasets extensively used in the community for model development. This can make it difficult to reason about exactly which methodological innovations truly contribute to generalization performance. Because many aspects of ML research are empirical, carefully designed experiments are needed to separate genuine improvements from statistical effects, and care is needed to avoid {\\it post-hoc} rationalization (Hypothesizing After the Results are Known (HARK)~\\cite{DBLP:journals\/corr\/abs-1904-07633}).\n\nHistorical dataset bias is both unavoidable and unresolvable, but once identified, it need not constrain the search for new materials; one can even deliberately search in directions that directly contradict the bias~\\cite{Nguyen2021}. For instance, Jia {\\it et al.} identified anthropogenic biases in the design of amine-templated metal oxides, in that a small number of amine complexes had been used in the vast majority of the literature~\\cite{Jia2019}. Their solution was to perform 548 randomly generated experiments, both to demonstrate that a global maximum had not been reached and to erode the systemic data bias their models observed. This is not to say that such an approach is a panacea for dataset or feature set bias, since such experiments are still designed by scientists carrying their own biases (e.g., using only amines) and may suffer from uncaptured (but important!) features. Of course, the question remains of how best to remove human bias from the experimental pipeline. One might begin that endeavor by allowing researchers to use their intuition and insights for featurization, data curation, and goal setting, while permitting the ML to perform the ultimate selection of the experiment to be performed and manage data acquisition. 
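To make this division of labor concrete, a minimal sketch (assuming scikit-learn is available, with hypothetical researcher-curated candidate features) might let a Gaussian process surrogate select the next experiment by predictive uncertainty:\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.gaussian_process import GaussianProcessRegressor\nfrom sklearn.gaussian_process.kernels import RBF\n\nrng = np.random.default_rng(1)\n\n# Hypothetical researcher-chosen design space and initial runs.\nX_candidates = rng.random((200, 4))  # featurized candidate syntheses\nX_done = X_candidates[:5]            # experiments performed so far\ny_done = rng.random(5)               # their measured property values\n\ngp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),\n                              normalize_y=True)\ngp.fit(X_done, y_done)\n\n# The model, not the researcher, picks the next experiment: here,\n# the candidate with the largest predictive standard deviation.\nmean, std = gp.predict(X_candidates, return_std=True)\nnext_experiment = int(np.argmax(std))\n\\end{verbatim}\nSwapping this pure-exploration criterion for an acquisition function such as expected improvement recovers the balance between exploration and optimization discussed for active learning above. 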
\n\n\\section{Keep sight of the goal}\\label{sec:goal}\nWhile the implementation of ML in materials science is goal-driven, often focused on a push for better accuracy and faster calculations, these are not always the only objectives or even the most important ones. Consider the trade-off between accuracy and discovery. If one is optimizing interatomic potentials to emulate DFT~\\cite{Behler2007, Bartk2010}, then design is centered around accuracy. On the other hand, if the goal is to identify a material that has a novel combination of physical properties, simply knowing that such a compound exists may be sufficient to embark on a meaningful research effort. The details related to synthesis and processing of the actual phase will likely go far beyond what is possible with any extant ML models, especially with limited benchmark datasets as one approaches the boundary of new science. \n\nThere are clearly cases where ML is the obvious choice to accelerate research, but there can be concerns about the suitability of ML to answer the relevant question. Many applied studies focus only on physical or chemical properties of materials and often fail to include parameters relating to their fundamental utility, such as reproducibility, scalability, stability, productivity, safety, or cost~\\cite{olivetti2018toward}. While humans may not be able to find correlations or patterns in high-dimensional spaces, we have rich and diverse background knowledge and heuristics; we have only just begun the difficult work of inventing ways of building this knowledge into machine learning systems. In addition, for domains with small datasets, limited features, and a strong need for higher-level inference rather than a surrogate model, ML should not necessarily be the default approach. A more traditional approach may be faster due to the error in ML models associated with small sample sizes, and heuristics can play a role even with larger datasets~\\cite{George2021}. \n\nOne alternative is to employ a hybrid method, which may incorporate a Bayesian methodology into the analysis~\\cite{gelman1995bayesian} or may use ML to guide the work through selective intervention~\\cite{hutchinson2017overcoming}. ML is only a means to model data (Figure~\\ref{fig:hallpetch}), and a good fit to the dataset is no guarantee that the model will be useful, since it may bear little to no relationship to the actual science, merely emulating apparent correlations between the features and the targets. A corollary is that any predictions from ML, especially when working with small datasets, may be unphysical. Again, we stress that this does not imply that we should never use ML for small datasets. Rather, we need to employ ML tools judiciously and understand their limitations in the context of our scientific goals. For instance, most ML models are reasonably good at interpolation~\\cite{friedman2017elements}. 
On the other hand, ML is not nearly as robust when used for extrapolation, although this can be mitigated to some extent by including rigorous statistical analyses on the predictions~\\cite{Tran2020}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{grainsize.pdf}\n \\caption{A Gaussian Process model can effectively reproduce the grain size dependence of the mechanical strength of an alloy even though it is completely devoid of any knowledge of the effect of the density of grain boundaries for large-grain metals \\cite{Cordero2016}, the impact of grain boundary sliding in nanocrystalline alloys \\cite{Trelewicz2007}, or even the regime change between them.}\n \\label{fig:hallpetch}\n\\end{figure}\n\nA discussion of errors and failure modes, although often lacking or limited, can help one understand the bounds of validity of any ML analysis. An honest discourse includes not only principled estimates of model performance and detailed studies of predictive failure modes, but also notes on how reproducible the results are within and across research groups. Such disclosure is important for the trustworthiness of ML for any application. \n\nFinally, one of the biggest potential pitfalls that can occur, even for large, well-curated datasets, is that one can lose sight of the goal by focusing on the accuracy of the model rather than using it to learn new science. There is a particular risk of the community spending disproportionate effort incrementally optimizing models to overfit against benchmark tasks~\\cite{DBLP:journals\/corr\/abs-1902-10811}, which may or may not even truly represent meaningful scientific endeavors in themselves. The objective should not be to identify the one algorithm that is good at everything but rather to develop a more focused effort that addresses a specific scientific research question. For ML to reach its true potential to transform research, and not just serve as a tool to expedite materials discovery and optimization, it needs to help provide a means to connect experimental and theoretical results instead of simply serving as a convenient means to describe them. For the ML novice it is helpful to remember to keep the scientific goal at the forefront when selecting a model and designing training and validation procedures.\n\n\\section{Dream big enough for radical innovation}\\label{sec:innovation}\n\nTo date, AI has increased its presence in materials science mainly in three applications: 1) automating data analysis that used to be manual, 2) serving as lead generation in a materials screening funnel, illustrated by the Open Quantum Materials Database and Materials Project, and 3) optimizing existing materials, processes, and devices in a broadly incremental manner. While these applications are critically important in this field, we have witnessed that radical innovation historically has often been accomplished outside the context of these frameworks, driven by human interests or serendipity along with stubborn trial and error. For instance, graphene was first isolated during Friday night experiments when Geim and Novoselov would try out experimental science that was not necessarily linked to their day jobs. Escobar {\\it et al.} discovered that peeling adhesive tape can emit enough x-rays to produce images~\\cite{Sanderson2008}. Shirakawa discovered a conductive polyacetylene film by accidentally mixing doping materials at a concentration a thousand times too high~\\cite{Guo2020}. 
Design research has argued that every radical innovation investigated was achieved without careful analysis of a person's or even a society's needs~\\cite{Norman2014}. If this is the case, an ultimate question about ML deployment in materials science would be: can ML help humans make the startling discovery of \"novel\" materials and eventually new science?\n\nAccording to a proposed categorization in design research~\\cite{Norman2014}, researchers can position their research based on scientific and application familiarity (Fig.~\\ref{fig:innovation}). Here, incremental areas (blue region) can provide easier data acquisition and interpretation of results but may hinder new discovery. In contrast, an unexplored area is more likely to provide such unexpected results but presents a huge risk of wasting research resources due to the inherent uncertainty. Self-aware resource allocation and inter-area feedback will be needed to balance novelty with the probability of successful research outcomes. Although there is currently a lack of ML methods that can directly navigate one into the radical change\/radical application quadrant to discover new science, we expect that there are methodologies that can harness ML to increase the chance of radical discovery.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{radicalinnovation2.pdf}\n \\caption{(a) Research categorization based upon the degree of scientific and application familiarity. (b) Research loop involving machine learning with traditional and outside-the-box steps.}\n \\label{fig:innovation}\n\\end{figure}\n\n\\subsection{Active outside-the-box exploration driven by ML-assisted knowledge acquisition}\nHuman interests motivate outside-the-box research that may lead to a radical discovery, and these interests are fostered by theoretical or experimental knowledge acquisition. Therefore, any applied AI and automated research systems may contribute to radical discovery by accelerating the knowledge feedback loop (Fig.~\\ref{fig:innovation}b). Such an ML-involved research loop can include the proposal of hypotheses, theoretical and experimental examination, knowledge extraction, and generalization, which may lead to an opportunity for radical thinking. For ML to play a meaningful role in expediting this loop, one should maintain exploratory curiosity at each step and be inspired or guided by any outputs while attentively being involved in the loop. Additionally, at the very beginning of proof-of-concept research, either in a current research loop or in an outside-the-box search, the fear of irreproducibility should not prevent attempts at new ideas, because the scientific community needs to integrate conflicting observations and ideas into a coherent theory~\\cite{Redish2018}.\n\nOne can harken back to Delbruck's principle of limited sloppiness~\\cite{Yaqub2018}, which reminds us that our experimental design sometimes tests unintended questions, and that hidden selectivity requires attention to abnormality. In this context, ML may help us notice anomalies or even hidden variables through rigorous statistical procedures, leading to new pieces of knowledge and outside-the-box exploration. For instance, Nega {\\it et al.} used automated experiments and statistical analysis to clarify the effect of trace water on crystal\/domain growth of halide perovskites~\\cite{Nega2021}, an effect that had often been communicated only in intra-lab conversations. 
Since such correlation analysis can only shed light on the domain spanned by the input features, researchers still need to feed in comprehensive experimental records containing both data and metadata, possibly regardless of their initial interests. Also, an unbiased and flexible scientific attitude based upon observation may be crucial for reconceptualizing a question after finding an abnormality.\n\n\\subsection{Deep generative inverse design to assist in creating material concepts}\nFunctionality-oriented inverse design~\\cite{Zunger2018} is an emerging approach for searching chemical spaces~\\cite{Kirkpatrick2004} for small molecules and possibly solid-state compounds~\\cite{ren2020inverse}. Briefly, a deep generative model learns a probabilistic latent representation of chemical space, and a surrogate model is used to optimize target properties in the latent space; novel compounds likely to have desired properties can then be sampled from the generative model~\\cite{SanchezLengeling2018}. While design spaces, such as the 166 billion molecules mapped by chemical space projects~\\cite{Reymond2015}, are far beyond human capability to comprehend, AI may distill patterns connecting functionalities and compound structures spanning the space. This approach can be a critical step in conceptualizing materials design based upon desired functionalities and further accelerating the AI-driven research loop. One application of such inverse design is to create a property-first optimization loop, which includes defining a desired property, proposing a material and structure for the property, validating the results with (automated) experiments, and refining the model. \n\nWhile these generative methods may start to approach creativity, they still explicitly aim to learn an empirical distribution based on the available data. Therefore, extrapolation outside of the current distribution of known materials is not guaranteed to be productive. This suggests that these methods would probably not generate a carbon nanotube given only pre-nanotube-era structures for training, or generate ordered superlattices if there are none in the training data. In addition, these huge datasets are mainly constructed based on simulation, and we need to be careful about the gap between simulated and actual experimental data, as discussed previously. Still, a new concept extracted from inverse design may inspire researchers to jump into a new, distinct subfield of material design by actively interpreting the abstracted property-structure relationship.\n\n\\subsection{Creative AI for materials science}\nThe essence of scientific creativity is the production of new ideas, questions, and connections \\cite{Lehmann2019}.\nThe era of AI as an innovative investigator in this sense has yet to arrive.\nHowever, since human creativity can be described as actively learning and connecting the dots highlighted by our curiosity, it may be possible for machine \"learning\" to be as creative as humans and thereby reach radical innovation. While conventional supervised natural language processing~\\cite{Krallinger2017} has required large hand-labeled datasets for training, a recent unsupervised learning study~\\cite{Tshitoyan2019} indicates the possibility of extracting knowledge from the literature without human intervention to identify relevant content, capturing preliminary materials science concepts such as the underlying structure of the periodic table and structure-property relationships. 
That study encoded the latent knowledge of the literature into information-dense word embeddings, which recommended materials for a specific application ahead of their actual discovery by humans. Since the amount of currently existing literature is too massive for human cognition, generative AI systems may be useful for suggesting a specific design or concept given appropriately defined functionalities. \n\nAn underlying challenge is how to deal with implicit and non-machine-readable data reported in the literature. For instance, it is common to summarize experimental results with a 2D figure that describes only a tendency over a limited range, along with some maxima\/minima. Such disproportionate summarization does not span the entire range of the experimental space that was explored, and may bias the parameter space that a model might explore, depending upon how the literature is written. This also returns us to the issue of addressing the hesitancy to publish \"unsuccessful\" research data. One may need to be careful in accepting AI-driven proposals, since there is likely a gap between a human-interest-driven leap and an ML-driven suggestion based on some learned representation of the unstructured data gleaned from the literature.\n\nBeyond latent variable optimization, one may consider computational creativity, which is used to model imagination in fields such as the arts~\\cite{DBLP:journals\/corr\/abs-2006-08381}, music~\\cite{DBLP:journals\/corr\/abs-1709-01620}, and gaming. This endeavor may start with finding a vector space in which to measure novelty as a distance~\\cite{berns2020bridging}. A novelty-oriented algorithm then searches the space for a set of distant new objects that is as diverse as possible, so as to maximize novelty instead of an objective function~\\cite{lehman2011abandoning}. Since any choice of distance function biases the exploration of the space, the deep learning novelty explorer (DeLeNoX) was recently proposed~\\cite{DBLP:journals\/corr\/abs-2103-11715} as a means to dynamically change the distance function for improved diversity. These approaches could be applied to materials science to diversify research directions and help us pose and consider novel materials and ideas, though measuring novelty may be subjective and most challenging for the community, and one always needs to be mindful of ethical and physical materials constraints.\n\n\\section{Champion an ethical and equitable research ecosystem}\\label{sec:ethics}\nLooking toward the future of the use of ML in materials science, there are issues, such as potential physical, economic, and legal risks, that have yet to be fully discussed and resolved.\nFor example, ML may predict that mixing several materials together will form a new compound with a set of desired properties, but the synthesis may be dangerous because of toxic gases produced during a side reaction, or the final product may be flammable or explosive.\nAlso consider that indiscriminate use of ML could lead to infringement upon intellectual property rights if the algorithm is unaware of the protected status of certain processes or materials.\nA yet unanswered question regarding either scenario is: who is the responsible party, the person who created the ML environment or the person who provided the data that did not capture all potential hazards and conflicts? 
It is paramount that the community reach a consensus on issues such as these before the widespread autonomous use of ML.\n\nAnother concern to be addressed as ML transforms materials research is the prospect of enormous inequities between the computationally rich and poor, where the rich quickly explore large parameter spaces and the have-nots fall behind, unable to compete. This disparity would grow larger and faster if end users, reviewers, and program managers deem that only resource-intensive ML is trustworthy. Although a materials cloud platform~\\cite{klimeck2008nanohub, Talirz2020} could help to bridge the gap between these groups, it would be meaningless without a strong culture of open publication of training source code, model parameters, and appropriate benchmark datasets. Yet even making these resources freely available may still be insufficient to sustain a level playing field unless there is equivalent access to state-of-the-art instrumentation to validate the increasingly detailed predictions. Clearly, we have time before we arrive at that reckoning, but the complexity of the matter requires us to begin discussing it now.\n\n\\section{Summary}\nMachine learning has been effective at expediting a variety of tasks, and the initial stage of its implementation for materials research has already confirmed that it has great promise to accelerate science and discovery~\\cite{baker2019workshop}. To realize that full potential, we need to tailor its usage to answer well-defined questions while keeping perspective on the limits of the resources needed and the bounds of meaningful interpretation of the resulting analyses. Eventually, we may be able to develop ML algorithms that will consistently lead us to new breakthroughs in an open and equitable framework. In the meantime, a complementary team of humans, AI, and robots has already begun to advance materials science for the common good.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nLet $\\varphi:M\\to (N,h)$ be an immersion of a manifold $M$ into a\nRiemannian manifold $(N,h)$. We say that $\\varphi$ is {\\em\nbiharmonic}, or $M$ is a {\\em biharmonic submanifold}, if its mean curvature vector\nfield $H$ satisfies the following equation\n\\begin{eqnarray}\\label{eq: bih_eq}\n\\tau_2(\\varphi)=- m\\left(\\Delta H + \\trace{R^{N}}(\nd\\varphi(\\cdot), H) d\\varphi(\\cdot)\\right)=0,\n\\end{eqnarray}\nwhere $\\Delta$ denotes the rough Laplacian on sections of the\npull-back bundle $\\varphi^{-1}(TN)$ and $R^N$ denotes the curvature\noperator on $(N,h)$. The section $\\tau_2(\\varphi)$ is called the\n{\\em bitension field}.\n\nWhen $M$ is compact, the biharmonic condition arises from a\nvariational problem for maps: for an arbitrary smooth map\n$\\varphi:(M,g)\\to (N,h)$ we define\n$$\nE_{2}\\left( \\varphi \\right) = \\frac{1}{2} \\int_{M} |\\tau(\\varphi)|^{2}\\, v_{g},\n$$\nwhere $\\tau(\\varphi)=\\trace\\nabla d\\varphi$ is the {\\it tension field}. The functional $E_2$ is called the {\\em bienergy functional}. When $\\varphi:(M,\\varphi^{\\ast}h)\\to (N,h)$ is an immersion, the tension\nfield has the expression $\\tau(\\varphi)=mH$ and \\eqref{eq: bih_eq} is equivalent to $\\varphi$ being a critical point of $E_2$.\n\nObviously, any minimal immersion ($H=0$) is biharmonic. 
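Indeed, both terms of the bitension field in \\eqref{eq: bih_eq} are linear in $H$, so they vanish identically when $H=0$. 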
The non-harmonic biharmonic immersions are called {\\it proper biharmonic}.\n\nThe study of proper biharmonic submanifolds is nowadays a\nvery active subject, and its popularity originated with the\nchallenging conjecture of B.-Y.~Chen (see the recent book \\cite{C11}): {\\em any biharmonic submanifold\nin the Euclidean space is minimal}.\n\nChen's conjecture was generalized to: {\\em any biharmonic submanifold in a\nRiemannian manifold with non-positive sectional curvature is\nminimal}, but this was proved not to hold. Indeed, in \\cite{OT10}, Y.-L.~Ou and L.~Tang\nconstructed examples of proper biharmonic hypersurfaces\nin a $5$-dimensional space of non-constant negative sectional\ncurvature.\n\nYet, the conjecture is still open in its full generality for ambient\nspaces with constant non-positive sectional curvature, although it\nwas proved to be true in numerous cases when additional geometric\nproperties for the submanifolds were assumed (see, for example,\n\\cite{BMO08,CMO02,C91,D98,D92,HV95}).\n\nBy way of contrast, as we shall detail in Section~\\ref{sec: bih-sub}, there are several families of examples of proper\nbiharmonic submanifolds in the $n$-dimensional unit Euclidean sphere\n$\\mathbb{S}^{n}$. For simplicity we shall denote these classes by {\\bf B1}, {\\bf B2}, {\\bf B3} and {\\bf B4}.\n\nThe goal of this paper is to continue the study of\nproper biharmonic submanifolds in $\\mathbb{S}^{n}$ in order to achieve their classification.\nThis program was first initiated in \\cite{J86} and then developed\nin \\cite{BMO12} -- \\cite{BO09}, \\cite{CMO02,CMO01,NU11,NU12, O02}.\n\nIn the following, by a rigidity result for proper biharmonic submanifolds we mean:\\\\\n{\\em find under what conditions a proper biharmonic submanifold in ${\\mathbb S}^n$ is one of the main examples {\\bf B1}, {\\bf B2}, {\\bf B3} and {\\bf B4}}.\n\nWe prove rigidity results for the following types of submanifolds in ${\\mathbb S}^n$: Dupin hypersurfaces; hypersurfaces, both compact and non-compact, with bounded norm of the second fundamental form; hypersurfaces satisfying intrinsic geometric properties; PMC submanifolds; parallel submanifolds.\n\nMoreover, we include in this paper two results of J.H.~Chen published in \\cite{C93}, in Chinese. We give complete proofs of these results, using the invariant formalism and shortening the original proofs.\n\n\\vspace{2mm}\n\n{\\bf Conventions.}\nThroughout this paper all manifolds, metrics, and maps are assumed to be smooth, i.e. $C^\\infty$. All manifolds are assumed to be connected. The following sign conventions are used\n$$\n\\Delta V=-\\trace\\nabla^2 V\\,,\\qquad R^N(X,Y)=[\\nabla_X,\\nabla_Y]-\\nabla_{[X,Y]},\n$$\nwhere $V\\in C(\\varphi^{-1}(TN))$ and $X,Y\\in C(TN)$.\nMoreover, the Ricci and scalar curvature $s$ are defined as\n$$\n\\langle \\ricci(X),Y\\rangle=\\ricci(X,Y)=\\trace (Z\\to R(Z,X)Y), \\quad s=\\trace \\ricci,\n$$\nwhere $X,Y,Z\\in C(TN)$.\n\n\\vspace{2mm}\n\n{\\bf Acknowledgements.}\nThe authors would like to thank Professor Jiaping Wang for some helpful discussions and Juan Yang for the accurate translation of \\cite{C93}. The third author would like to thank the Department of Mathematics and Informatics of the University of Cagliari for the warm hospitality.\n\n\\section{Biharmonic immersions in ${\\mathbb S}^n$}\\label{sec: bih-sub}\n\nThe key ingredient in the study of biharmonic submanifolds is the\nsplitting of the bitension field with respect to its normal and\ntangent components. 
In the case when the ambient space is the unit Euclidean sphere we have the following characterization.\n\n\\begin{theorem}[\\cite{C84, O02}]\\label{th: bih subm S^n}\nAn immersion $\\varphi:M^m\\to\\mathbb{S}^n$ is biharmonic if and only if\n\\begin{equation}\\label{eq: caract_bih_spheres}\n\\left\\{\n\\begin{array}{l}\n\\ \\Delta^\\perp {H}+\\trace B(\\cdot,A_{H}\\cdot)-m\\,{H}=0,\n\\vspace{2mm}\n\\\\\n\\ 2\\trace A_{\\nabla^\\perp_{(\\cdot)}{H}}(\\cdot)\n+\\dfrac{m}{2}\\grad {|H|}^2=0,\n\\end{array}\n\\right.\n\\end{equation}\nwhere $A$ denotes the Weingarten operator, $B$ the second\nfundamental form, ${H}$ the mean curvature vector field, $|H|$ the mean curvature function,\n$\\nabla^\\perp$ and $\\Delta^\\perp$ the connection and the Laplacian\nin the normal bundle of $\\varphi$, respectively.\n\\end{theorem}\n\nIn the codimension one case, denoting by $A=A_\\eta$ the shape operator with respect to a (local) unit section $\\eta$ in the normal bundle and\nputting $f=(\\trace A)\/m$, the above result reduces to the following.\n\\begin{corollary}[\\cite{O02}]\\label{cor: caract_hypersurf_bih}\nLet $\\varphi:M^m\\to\\mathbb{S}^{m+1}$ be an orientable hypersurface. Then $\\varphi$ is biharmonic if and only if\n\\begin{equation}\\label{eq: caract_bih_hipersurf_spheres}\n\\left\\{\n\\begin{array}{l}\n{\\rm (i)}\\quad \\Delta f=(m-|A|^2) f,\n\\\\ \\mbox{} \\\\\n{\\rm (ii)}\\quad A(\\grad f)=-\\dfrac{m}{2}f\\grad f.\n\\end{array}\n\\right.\n\\end{equation}\n\\end{corollary}\n\nA special class of immersions in $\\mathbb{S}^n$ consists of the parallel mean curvature immersions (PMC), that is immersions such that $\\nabla^{\\perp}H=0$. For this class of immersions Theorem~\\ref{th: bih subm S^n} reads as follows.\n\n\\begin{corollary}[\\cite{BO12}]\\label{th: caract_bih_pmc}\nLet $\\varphi:M^m\\to\\mathbb{S}^n$ be a PMC immersion. Then $\\varphi$ is biharmonic if and only if\n\\begin{equation}\\label{eq: caract_bih_Hparallel_I}\n\\trace B(A_H(\\cdot),\\cdot)=mH,\n\\end{equation}\nor equivalently,\n\\begin{equation}\\label{eq: caract_bih_Hparallel_II}\n\\left\\{\n\\begin{array}{ll}\n\\langle A_H, A_\\xi\\rangle=0,\\quad \\forall\\xi\\in C(NM)\\, \\text{with}\\,\\, \\xi\\perp H,\n\\\\ \\mbox{} \\\\\n|A_H|^2=m|H|^2,\n\\end{array}\n\\right.\n\\end{equation}\nwhere $NM$ denotes the normal bundle of $M$ in $\\mathbb{S}^n$.\n\\end{corollary}\n\nWe now list the main examples of proper biharmonic immersions in $\\mathbb{S}^n$.\n\n\\begin{list}{\\labelitemi}{\\leftmargin=2em\\itemsep=1.5mm\\topsep=0mm}\n\\item[{\\bf B1}.] The canonical inclusion of the small hypersphere\n\\begin{equation*}\\label{eq: small_hypersphere}\n\\mathbb{S}^{n-1}(1\/\\sqrt 2)=\\left\\{(x,1\/\\sqrt 2)\\in\\mathbb{R}^{n+1}: x\\in \\mathbb{R}^n, |x|^2=1\/2\\right\\}\\subset\\mathbb{S}^{n}.\n\\end{equation*}\n\\item[{\\bf B2}.] The canonical inclusion of the standard (extrinsic) products of spheres\n\\begin{equation*}\\label{eq: product_spheres}\n\\mathbb{S}^{n_1}(1\/\\sqrt 2)\\times\\mathbb{S}^{n_2}(1\/\\sqrt 2)=\\left\\{(x,y)\\in\\mathbb{R}^{n_1+1}\\times\\mathbb{R}^{n_2+1}, |x|^2=|y|^2=1\/2\\right\\}\\subset\\mathbb{S}^{n},\n\\end{equation*}\n$n_1+n_2=n-1$ and $n_1\\neq n_2$.\n\\item[{\\bf B3}.] The maps $\\varphi=\\imath\\circ\\phi:M\\to \\mathbb{S}^n$, where $\\phi:M\\to \\mathbb{S}^{n-1}(1\/\\sqrt 2)$ is a minimal immersion, and $\\imath:\\mathbb{S}^{n-1}(1\/\\sqrt 2)\\to\\mathbb{S}^n$ denotes the canonical inclusion.\n\n\\item[{\\bf B4}.] 
The maps $\\varphi=\\imath\\circ(\\phi_1\\times\\phi_2): M_1\\times M_2\\to \\mathbb{S}^n$, where $\\phi_i:M_i^{m_i}\\to\\mathbb{S}^{n_i}(1\/\\sqrt 2)$, $0 < m_i \\leq n_i$, $i=1,2$, are minimal immersions, $m_1\\neq m_2$, $n_1+n_2=n-1$, and $\\imath:\\mathbb{S}^{n_1}(1\/\\sqrt 2)\\times\\mathbb{S}^{n_2}(1\/\\sqrt 2)\\to \\mathbb{S}^n$ denotes the canonical inclusion.\n\\end{list}\n\n\n\\begin{remark}\n\\begin{itemize}\n\\item[(i)] The proper biharmonic immersions of class {\\bf B3} are pseudo-umbilical, i.e. $A_H=|H|^2\\Id$, have parallel mean curvature vector field and mean curvature $|H|=1$. Clearly, $\\nabla A_H=0$.\n\n\\item[(ii)] The proper biharmonic immersions of class {\\bf B4} are no longer pseudo-umbilical, but still have parallel mean curvature vector field and their mean curvature is $|H|={|m_1-m_2|}\/{m}\\in(0,1)$, where $m=m_1+m_2$. Moreover, $\\nabla A_H=0$ and the principal curvatures in the direction of $H$, i.e. the eigenvalues of $A_H$, are constant on $M$ and given by $\\lambda_1=\\ldots=\\lambda_{m_1}=({m_1-m_2})\/{m}$, $\\lambda_{m_1+1}=\\ldots=\\lambda_{m_1+m_2}=-({m_1-m_2})\/{m}$. Specific B4 examples were given by W.~Zhang in \\cite{Z11} and generalized in \\cite{BMO08a, WW12}.\n\\end{itemize}\n\\end{remark}\n\n\nWhen a biharmonic immersion has constant mean curvature (CMC) the following bound for $|H|$ holds.\n\n\\begin{theorem}[\\cite{O03}]\\label{teo:h=cst-b3}\nLet $\\varphi:M\\to\\mathbb{S}^n$ be a CMC proper biharmonic immersion. Then $|H|\\in(0,1]$, and $|H|=1$ if and only if $\\varphi$ induces a minimal immersion of $M$ into $\\mathbb{S}^{n-1}(1\/\\sqrt 2)\\subset\\mathbb{S}^n$, that is $\\varphi$ is {\\bf B3}.\n\\end{theorem}\n\n\\section{Biharmonic hypersurfaces in spheres}\n\nThe first case to look at is that of CMC proper biharmonic hypersurfaces in $\\mathbb{S}^{m+1}$.\n\n\\begin{theorem}[\\cite{BMO08, BO12}]\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a CMC proper biharmonic hypersurface. Then\n\\begin{itemize}\n\\item[(i)] $|A|^2=m$;\n\\item[(ii)] the scalar curvature $s$ is constant and positive, $s=m^2(1+|H|^2)-2m$;\n\\item[(iii)] for $m>2$, $|H|\\in(0,({m-2})\/{m}]\\cup\\{1\\}$. Moreover, $|H|=1$ if and only if $\\varphi(M)$ is an open subset of the small hypersphere $\\mathbb{S}^m(1\/\\sqrt 2)$, and $|H|=({m-2})\/{m}$ if and only if $\\varphi(M)$ is an open subset of the standard product $\\mathbb{S}^{m-1}(1\/\\sqrt 2)\\times\\mathbb{S}^1(1\/\\sqrt 2)$.\n\\end{itemize}\n\\end{theorem}\n\n\\begin{remark}\nIn the minimal case the condition $|A|^2=m$ is exhaustive. In fact a minimal hypersurface in $\\mathbb{S}^{m+1}$ with $|A|^2=m$ is a minimal standard product of spheres (see \\cite{CdCK70, L69}). We point out that the full classification of CMC hypersurfaces in $\\mathbb{S}^{m+1}$ with $|A|^2=m$, therefore biharmonic, is not known.\n\\end{remark}\n\n\\begin{corollary}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a complete proper biharmonic hypersurface.\n\\begin{itemize}\n\\item[(i)] If $|H|=1$, then $\\varphi(M)=\\mathbb{S}^{m}(1\/\\sqrt 2)$ and $\\varphi$ is an embedding.\n\\item[(ii)] If $|H|=({m-2})\/{m}$, $m>2$, then $\\varphi(M)=\\mathbb{S}^{m-1}(1\/\\sqrt 2)\\times\\mathbb{S}^1(1\/\\sqrt 2)$ and the universal cover of $M$ is $\\mathbb{S}^{m-1}(1\/\\sqrt 2)\\times\\mathbb{R}$.\n\\end{itemize}\n\\end{corollary}\n\nAs a direct consequence of \\cite[Theorem 2]{NS69} we have the following result.\n\n\\begin{theorem}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a CMC proper biharmonic hypersurface. 
Assume that $M$ has non-negative sectional curvature. Then $\\varphi(M)$ is either an open part of $\\mathbb{S}^{m}(1\/\\sqrt 2)$, or an open part of $\\mathbb{S}^{m_1}(1\/\\sqrt 2)\\times \\mathbb{S}^{m_2}(1\/\\sqrt 2)$, $m_1+m_2=m$, $m_1\\neq m_2$.\n\\end{theorem}\n\nIn the following we shall no longer assume that the biharmonic hypersurfaces have constant mean curvature, and we shall split our study into three cases. In Case 1 we shall study the proper biharmonic hypersurfaces with respect to the number of their distinct principal curvatures, in Case 2 we shall study them with respect to $|A|^2$ and $|H|^2$, and in Case 3 the study will be done with respect to the sectional and Ricci curvatures of the hypersurface.\n\n\\subsection{Case 1}\n\nObviously, if $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ is an umbilical proper biharmonic hypersurface in $\\mathbb{S}^{m+1}$, then $\\varphi(M)$ is an open part of $\\mathbb{S}^{m}(1\/\\sqrt 2)$.\n\nWhen the hypersurface has at most two or exactly three distinct principal curvatures everywhere, we obtain the following rigidity results.\n\n\\begin{theorem}[\\cite{BMO08}]\\label{th: hypersurf_2curv}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a hypersurface. Assume that $\\varphi$ is proper biharmonic with at most two distinct principal curvatures everywhere. Then $\\varphi$ is CMC and $\\varphi(M)$ is either an open part of $\\mathbb{S}^{m}(1\/\\sqrt 2)$, or an open part of $\\mathbb{S}^{m_1}(1\/\\sqrt 2)\\times \\mathbb{S}^{m_2}(1\/\\sqrt 2)$, $m_1+m_2=m$, $m_1\\neq m_2$. Moreover, if $M$ is complete, then either\n$\\varphi(M)=\\mathbb{S}^{m}(1\/\\sqrt 2)$ and $\\varphi$ is an embedding, or $\\varphi(M)=\\mathbb{S}^{m_1}(1\/\\sqrt 2)\\times \\mathbb{S}^{m_2}(1\/\\sqrt 2)$, $m_1+m_2=m$, $m_1\\neq m_2$ and $\\varphi$ is an embedding when $m_1\\geq 2$ and $m_2\\geq 2$.\n\\end{theorem}\n\n\\begin{theorem}[\\cite{BMO08}]\\label{teo:quasi-umbilicall-conformally-flat}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$, $m\\geq 3$, be a proper biharmonic hypersurface. The following statements are equivalent:\n\\begin{itemize}\n\\item[(i)] $\\varphi$ is quasi-umbilical,\n\\item[(ii)] $\\varphi$ is conformally flat,\n\\item[(iii)] $\\varphi(M)$ is an open part of $\\mathbb{S}^m(1\/\\sqrt 2)$ or of $\\mathbb{S}^{m-1}(1\/\\sqrt 2)\\times \\mathbb{S}^{1}(1\/\\sqrt 2)$.\n\\end{itemize}\n\\end{theorem}\n\nIt is well known that, if $m\\geq 4$, a hypersurface $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ is quasi-umbilical if and only if it is conformally flat. 
From Theorem~\\ref{teo:quasi-umbilicall-conformally-flat}\nwe see that under the biharmonicity hypothesis the equivalence remains true when $m=3$.\n\n\\begin{theorem}[\\cite{BMO10}]\\label{th: hypersurf_3curv}\nThere exist no compact CMC proper biharmonic hypersurfaces $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ with three distinct principal curvatures everywhere.\n\\end{theorem}\n\nIn particular, in the low dimensional cases, Theorem~\\ref{th: hypersurf_2curv}, Theorem~\\ref{th: hypersurf_3curv} and a result of S.~Chang (see \\cite{CH93}) imply the following.\n\n\\begin{theorem}[\\cite{CMO01,BMO10}]\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a proper biharmonic hypersurface.\n\\begin{itemize}\n\\item[(i)] If $m=2$, then $\\varphi(M)$ is an open part of $\\mathbb{S}^{2}(1\/\\sqrt 2)\\subset\\mathbb{S}^3$.\n\\item[(ii)] If $m=3$ and $M$ is compact, then $\\varphi$ is CMC and $\\varphi(M)=\\mathbb{S}^{3}(1\/\\sqrt 2)$ or $\\varphi(M)=\\mathbb{S}^{2}(1\/\\sqrt 2)\\times\\mathbb{S}^{1}(1\/\\sqrt 2)$.\n\\end{itemize}\n\\end{theorem}\n\nWe recall that an orientable hypersurface $\\varphi:M^m\\to\\mathbb{S}^{m+1}$ is\nsaid to be {\\it isoparametric} if it has constant principal curvatures or, equivalently, the number $\\ell$ of distinct principal curvatures $k_1 > k_2>\\cdots\n> k_\\ell$ is constant on $M$ and the $k_i$'s are constant. The distinct principal curvatures have constant multiplicities $m_1, \\ldots,m_\\ell$, $m = m_1 + m_2 + \\ldots + m_\\ell$.\n\nIn \\cite{IIU08}, T.~Ichiyama, J.I.~Inoguchi and H.~Urakawa classified the proper biharmonic isoparametric hypersurfaces in spheres.\n\n\\begin{theorem}[\\cite{IIU08}]\\label{teo:isoparametric}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be an orientable isoparametric hypersurface. If $\\varphi$ is proper biharmonic, then $\\varphi(M)$ is either an open part of $\\mathbb{S}^m(1\/\\sqrt2)$, or an open part of\n$\\mathbb{S}^{m_1}(1\/\\sqrt2)\\times\\mathbb{S}^{m_2}(1\/\\sqrt2)$, $m_1+m_2=m$,\n$m_1\\neq m_2$.\n\\end{theorem}\n\nAn orientable hypersurface $\\varphi:M^m\\to\\mathbb{S}^{m+1}$ is\nsaid to be a {\\it proper Dupin hypersurface} if the number $\\ell$ of distinct principal curvatures is constant on $M$ and each principal curvature function is constant along its corresponding principal directions.\n\n\\begin{theorem}\\label{th: Dupin_bih_CMC}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be an orientable proper Dupin hypersurface. If $\\varphi$ is proper biharmonic, then $\\varphi$ is CMC.\n\\end{theorem}\n\n\\begin{proof}\nAs $M$ is orientable, we fix $\\eta\\in C(NM)$ and denote $A=A_\\eta$ and $f=(\\trace A)\/m$. Suppose that $f$ is not constant. Then there exists an open subset $U\\subset M$ such that $\\grad f\\neq 0$ at every point of $U$. Since $\\varphi$ is proper biharmonic, from \\eqref{eq: caract_bih_hipersurf_spheres} we get that $-{mf}\/{2}$ is a principal curvature with principal direction $\\grad f$. Since the hypersurface is proper Dupin, by definition, $\\grad f(f)=0$, i.e. $\\grad f=0$ on $U$, and we come to a contradiction.\n\\end{proof}\n\n\\begin{corollary}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be an orientable proper Dupin hypersurface with $\\ell\\leq 3$. 
If $\\varphi$ is proper biharmonic, then $\\varphi(M)$ is either an open part of $\\mathbb{S}^m(1\/\\sqrt2)$, or an open part of\n$\\mathbb{S}^{m_1}(1\/\\sqrt2)\\times\\mathbb{S}^{m_2}(1\/\\sqrt2)$, $m_1+m_2=m$,\n$m_1\\neq m_2$.\n\\end{corollary}\n\n\\begin{proof}\nTaking into account Theorem~\\ref{th: hypersurf_2curv}, we only have to prove that there exist no proper biharmonic proper Dupin hypersurfaces with $\\ell=3$. Indeed, by Theorem~\\ref{th: Dupin_bih_CMC}, we conclude that $\\varphi$ is CMC. By a result in \\cite{BMO12}, $\\varphi$ is of type $1$ or of type $2$, in the sense of B.-Y.~Chen. If $\\varphi$ is of type $1$, we must have $\\ell=1$ and we get a contradiction. If $\\varphi$ is of type $2$, since $\\varphi$ is proper Dupin with $\\ell=3$, from Theorem~9.11 in \\cite{C96}, we get that $\\varphi$ is isoparametric. But, from\nTheorem~\\ref{teo:isoparametric}, proper biharmonic isoparametric hypersurfaces must have $\\ell\\leq 2$.\n\n\\end{proof}\n\n\\subsection{Case 2}\nThe simplest result is the following.\n\\begin{proposition}\\label{pro: a-compact}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a compact hypersurface. Assume that $\\varphi$ is proper biharmonic with nowhere zero mean curvature vector field and $|A|^2\\leq m$, or $|A|^2\\geq m$. Then $\\varphi$ is CMC and $|A|^2=m$.\n\\end{proposition}\n\\begin{proof}\nAs $H$ is nowhere zero, we can consider $\\eta=H\/|H|$ a global unit section in the normal bundle $NM$ of $M$ in $\\mathbb{S}^{m+1}$.\nThen, on $M$,\n$$\n\\Delta f=(m-|A|^2)f,\n$$\nwhere $f=(\\trace A)\/m=|H|$. Now, as $m-|A|^2$ does not change sign, from the maximum principle we get $f=$ constant and $|A|^2=m$.\n\\end{proof}\n\nIn fact, Proposition~\\ref{pro: a-compact} holds without the hypothesis ``$H$ nowhere zero''. In order to prove this we shall consider the cases $|A|^2\\geq m$ and $|A|^2\\leq m$, separately.\n\n\\begin{proposition}\\label{prop: |B|>m}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a compact hypersurface. Assume that $\\varphi$ is proper biharmonic and $|A|^2\\geq m$. Then $\\varphi$ is CMC and $|A|^2=m$.\n\\end{proposition}\n\\begin{proof}\nLocally,\n$$\n\\Delta f=(m-|A|^2)f,\n$$\nwhere $f=(\\trace A)\/m$, $f^2=|H|^2$,\nand therefore\n$$\n\\frac{1}{2}\\Delta f^2=(m-|A|^2)f^2-|\\grad f|^2\\leq 0.\n$$\nAs $f^2$, $|A|^2$ and $|\\grad f|^2$ are well defined on the whole $M$, the formula holds on $M$. From the maximum principle we get that $|H|$ is constant and $|A|^2=m$.\n\\end{proof}\n\nThe case $|A|^2\\leq m$ was solved by J.H.~Chen in \\cite{C93}. Here we include the proof for two reasons. First, the original one is in Chinese and second, the formalism used by J.H.~Chen was local, while ours is globally invariant. Moreover, the proof we present is slightly shorter.\n\n\\begin{theorem}[\\cite{C93}]\\label{th: jchen1}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a compact hypersurface in $\\mathbb{S}^{m+1}$. If $\\varphi$ is proper biharmonic and $|A|^2\\leq m$, then $\\varphi$ is CMC and $|A|^2=m$.\n\\end{theorem}\n\n\\begin{proof}\nWe may assume that $M$ is orientable, since, otherwise, we consider the double covering $\\tilde{M}$ of $M$. This is compact, connected and orientable, and in the given hypotheses $\\tilde{\\varphi}:\\tilde M\\to \\mathbb{S}^{m+1}$ is proper biharmonic and $|\\tilde A|^2\\leq m$. 
Moreover, $\\tilde{\\varphi}(\\tilde{M})=\\varphi(M)$.\n\nAs $M$ is orientable, we fix a unit global section $\\eta\\in C(NM)$ and denote $A=A_\\eta$ and $f=(\\trace A)\/m$.\nIn the following we shall prove that\n\\begin{eqnarray}\\label{eq: fund_ineq_chen1}\n&&\\frac{1}{2}\\Delta \\left(|\\grad f|^2+\\frac{m^2}{8}f^4+f^2\\right)+\\frac{1}{2}\\Div(|A|^2\\grad f^2)\\leq\\nonumber\\\\\n&&\\leq\\frac{8(m-1)}{m(m+8)}(|A|^2-m) |A|^2 f^2,\n\\end{eqnarray}\non M, and this will lead to the conclusion.\n\nFrom \\eqref{eq: caract_bih_hipersurf_spheres}(i) one easily gets\n\\begin{equation}\\label{eq: delta_f^2}\n\\frac{1}{2}\\Delta f^2=(m-|A|^2)f^2-|\\grad f|^2\n\\end{equation}\nand\n\\begin{equation}\\label{eq: delta_f^4}\n\\frac{1}{4}\\Delta f^4=(m-|A|^2)f^4-3f^2|\\grad f|^2.\n\\end{equation}\n\nFrom the Weitzenb\\\"{o}ck formula we have\n\\begin{equation}\\label{eq: Weitz_norm_grad}\n\\frac{1}{2}\\Delta |\\grad f|^2=-\\langle\\trace\\nabla^2\\grad f,\\grad f\\rangle-|\\nabla\\grad f|^2,\n\\end{equation}\nand, since\n$$\n\\trace \\nabla^2\\grad f=-\\grad(\\Delta f)+ \\ricci(\\grad f),\n$$\nwe obtain\n\\begin{equation}\\label{eq: cons_Weitz_norm_grad}\n\\frac{1}{2}\\Delta |\\grad f|^2=\\langle \\grad \\Delta f,\\grad f\\rangle-\\ricci(\\grad f,\\grad f)-|\\nabla\\grad f|^2.\n\\end{equation}\n\nEquations \\eqref{eq: caract_bih_hipersurf_spheres}(i) and \\eqref{eq: delta_f^2} imply\n\\begin{eqnarray}\\label{eq: grad_delta_f}\n\\langle \\grad \\Delta f,\\grad f\\rangle&=&(m-|A|^2)|\\grad f|^2-\\frac{1}{2}\\langle \\grad |A|^2, \\grad f^2\\rangle\\nonumber\\\\\n&=&(m-|A|^2)|\\grad f|^2-\\frac{1}{2}\\left(\\Div(|A|^2\\grad f^2)+|A|^2\\Delta f^2\\right)\\nonumber\\\\\n&=&m|\\grad f|^2-\\frac{1}{2}\\Div(|A|^2\\grad f^2)-|A|^2(m-|A|^2)f^2.\n\\end{eqnarray}\n\nFrom the Gauss equation of $M$ in $\\mathbb{S}^{m+1}$ we obtain\n\\begin{equation}\\label{eq:ricci-minsn}\n\\ricci(X,Y)=(m-1)\\langle X,Y\\rangle+\\langle A(X),Y\\rangle\\trace A-\\langle A(X), A(Y)\\rangle,\n\\end{equation}\nfor all $X, Y\\in C(TM)$, therefore, by using \\eqref{eq: caract_bih_hipersurf_spheres}(ii),\n\\begin{equation}\\label{eq: cons_Gauss}\n\\ricci(\\grad f,\\grad f)=\\left(m-1-\\frac{3m^2}{4}f^2\\right)|\\grad f|^2.\n\\end{equation}\n\nNow, by substituting \\eqref{eq: grad_delta_f} and \\eqref{eq: cons_Gauss} in \\eqref{eq: cons_Weitz_norm_grad} and using \\eqref{eq: delta_f^2} and \\eqref{eq: delta_f^4}, one obtains\n\\begin{eqnarray*}\n\\frac{1}{2}\\Delta |\\grad f|^2&=&\\left(1+\\frac{3m^2}{4}f^2\\right)|\\grad f|^2-\\frac{1}{2}\\Div(|A|^2\\grad f^2)\\\\\n&&-|A|^2(m-|A|^2)f^2-|\\nabla\\grad f|^2\\\\\n&=&-\\frac{1}{2}\\Delta f^2-\\frac{m^2}{16}\\Delta f^4-(m-|A|^2)\\left(|A|^2-\\frac{m^2}{4}f^2-1\\right)f^2\\\\\n&&-\\frac{1}{2}\\Div(|A|^2\\grad f^2)-|\\nabla\\grad f|^2.\n\\end{eqnarray*}\nHence\n\\begin{eqnarray}\\label{eq: eq_int_1}\n&-\\frac{1}{2}\\Delta \\left(|\\grad f|^2+\\frac{m^2}{8}f^4+f^2\\right)-\\frac{1}{2}\\Div(|A|^2\\grad f^2)=\\nonumber\\\\\n&=(m-|A|^2)\\left(|A|^2-\\frac{m^2}{4}f^2-1\\right)f^2+|\\nabla\\grad f|^2.\n\\end{eqnarray}\n\nWe shall now verify that\n\\begin{equation}\\label{eq: fund_ineq1}\n(m-|A|^2)\\left(|A|^2-\\frac{m^2}{4}f^2-1\\right)\\geq (m-|A|^2)\\left(\\frac{9}{m+8}|A|^2-1\\right),\n\\end{equation}\nat every point of $M$.\nLet us now fix a point $p\\in M$. We have two cases.\\\\\n{\\it Case 1.} If $\\grad_p f\\neq 0$, then $e_1=({\\grad_p f})\/{|\\grad_p f|}$ is a principal direction for $A$ with principal curvature $\\lambda_1=-m f(p)\/2$. 
By considering $e_k\\in T_pM$, $k=2,\\ldots,m$, such that $\\{e_i\\}_{i=1}^m$ is an orthonormal basis in $T_pM$ and $A(e_k)=\\lambda_k e_k$, we get at $p$\n\\begin{eqnarray}\\label{eq: |A|}\n|A|^2&=&\\sum_{i=1}^m |A(e_i)|^2=|A(e_1)|^2+\\sum_{k=2}^m |A(e_k)|^2=\\frac{m^2}{4}f^2+\\sum_{k=2}^m \\lambda_k^2\\nonumber\\\\\n&\\geq& \\frac{m^2}{4}f^2+\\frac{1}{m-1}\\left(\\sum_{k=2}^m \\lambda_k\\right)^2=\\frac{m^2(m+8)}{4(m-1)}f^2,\n\\end{eqnarray}\nthus inequality \\eqref{eq: fund_ineq1} holds at $p$.\n\\\\\n{\\it Case 2.} If $\\grad_p f = 0$, then either there exists an open set $U\\subset M$, $p\\in U$, such that $\\grad f_{\/U}=0$, or $p$ is a limit point for the set $V=\\{q\\in M: \\grad_q f\\neq 0\\}$.\\\\\nIn the first situation, we get that $f$ is constant on $U$, and from a unique continuation result for biharmonic maps (see \\cite{O03}), this constant must be different from zero. Equation \\eqref{eq: caract_bih_hipersurf_spheres}(i) implies $|A|^2=m$ on $U$, and therefore inequality \\eqref{eq: fund_ineq1} holds at $p$.\\\\\nIn the second situation, by taking into account {\\it Case 1} and passing to the limit, we conclude that inequality \\eqref{eq: fund_ineq1} holds at $p$.\n\nIn order to evaluate the term $|\\nabla \\grad f|^2$ of equation \\eqref{eq: eq_int_1}, let us consider a local orthonormal frame field $\\{E_i\\}_{i=1}^m$ on $M$. Then, also using \\eqref{eq: caract_bih_hipersurf_spheres}(i),\n\\begin{eqnarray}\\label{eq: fund_ineq2}\n|\\nabla \\grad f|^2&=&\\sum_{i,j=1}^m\\langle \\nabla_{E_i}\\grad f,E_j\\rangle^2\\nonumber\\geq\\sum_{i=1}^m\\langle \\nabla_{E_i}\\grad f,E_i\\rangle^2\\\\\n&\\geq&\\frac{1}{m}\\left(\\sum_{i=1}^m\\langle \\nabla_{E_i}\\grad f,E_i\\rangle\\right)^2= \\frac{1}{m}(\\Delta f)^2\\nonumber\\\\\n&=&\\frac{1}{m}(m-|A|^2)^2 f^2.\n\\end{eqnarray}\nIn fact, \\eqref{eq: fund_ineq2} is a global formula.\n\nNow, using \\eqref{eq: fund_ineq1} and \\eqref{eq: fund_ineq2} in \\eqref{eq: eq_int_1}, we obtain \\eqref{eq: fund_ineq_chen1}, and by integrating it, since $|A|^2\\leq m$, we get\n\\begin{equation}\\label{eq: int3}\n(|A|^2-m)|A|^2 f^2=0\n\\end{equation}\non $M$. Suppose that there exists $p\\in M$ such that $|A(p)|^2\\neq m$. Then there exists an open set $U\\subset M$, $p\\in U$, such that $|A|^2_{\/U}\\neq m$. Equation \\eqref{eq: int3} implies that $|A|^2 f^2_{\/U}=0$.\nNow, if there were a $q\\in U$ such that $f(q)\\neq 0$, then $A(q)$ would be zero and, therefore, $f(q)=0$.\n Thus $f_{\/U}=0$ and, since $M$ is proper biharmonic, this is a contradiction. Thus $|A|^2=m$ on $M$ and $\\Delta f=0$, i.e. $f$ is constant and we conclude.\n\\end{proof}\n\n\n\\begin{remark}\nIt is worth pointing out that the statement of Theorem~\\ref{th: jchen1} is similar in the minimal case: if $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ is a minimal hypersurface with $|A|^2\\leq m$, then either $|A|=0$ or $|A|^2=m$ (see \\cite{S68}).\nBy way of contrast, an analog of Proposition~\\ref{prop: |B|>m} is not true in the minimal case. In fact, it was proved in \\cite{PT83} that if a minimal hypersurface $\\varphi:M^3\\to \\mathbb{S}^{4}$ has $|A|^2>3$, then\n$|A|^2\\geq 6$.\n\\end{remark}\nObviously, from Proposition~\\ref{prop: |B|>m} and Theorem~\\ref{th: jchen1} we get the following result.\n\n\\begin{proposition}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a compact hypersurface. 
If $\\varphi$ is proper biharmonic and $|A|^2$ is constant, then $\\varphi$ is CMC and $|A|^2=m$.\n\\end{proposition}\n\nThe next result is a direct consequence of Proposition~\\ref{prop: |B|>m}.\n\\begin{proposition}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a compact hypersurface. If $\\varphi$ is proper biharmonic and $|H|^2\\geq {4(m-1)}\/({m(m+8)})$, then $\\varphi$ is CMC.\nMoreover,\n\\begin{itemize}\n\\item[(i)] if $m\\in\\{2,3\\}$, then $\\varphi(M)$ is a small hypersphere $\\mathbb{S}^m(1\/\\sqrt 2)$;\n\\item[(ii)] if $m=4$, then $\\varphi(M)$ is a small hypersphere $\\mathbb{S}^4(1\/\\sqrt 2)$ or a standard product of spheres $\\mathbb{S}^3(1\/\\sqrt 2)\\times \\mathbb{S}^1(1\/\\sqrt 2)$.\n\\end{itemize}\n\\end{proposition}\n\\begin{proof}\nTaking into account \\eqref{eq: |A|}, the hypotheses imply $|A|^2\\geq m$.\n\\end{proof}\n\n\nFor the non-compact case we obtain the following.\n\\begin{proposition}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$, $m>2$, be a non-compact hypersurface. Assume that $M$ is complete and has non-negative Ricci curvature. If $\\varphi$ is proper biharmonic, $|A|^2$ is constant and $|A|^2\\geq m$, then $\\varphi$ is CMC and $|A|^2=m$. In this case $|H|^2\\leq({(m-2)}\/{m})^2$.\n\\end{proposition}\n\\begin{proof}\nWe may assume that $M$ is orientable (otherwise, we consider the double covering $\\tilde{M}$ of $M$, which is non-compact, connected, complete, orientable, proper biharmonic and with non-negative Ricci curvature; the final result will remain unchanged). We consider $\\eta$ to be a global unit section in the normal bundle $NM$ of $M$ in $\\mathbb{S}^{m+1}$.\nThen, on $M$, we have\n\\begin{equation}\\label{eq: d1}\n\\Delta f=(m-|A|^2)f,\n\\end{equation}\nwhere $f=(\\trace A)\/m$,\nand\n\\begin{equation}\\label{eq: d2}\n\\frac{1}{2}\\Delta f^2=(m-|A|^2)f^2-|\\grad f|^2\\leq 0.\n\\end{equation}\nOn the other hand, as $f^2=|H|^2\\leq |A|^2\/m$ is bounded, by the Omori-Yau Maximum Principle (see, for example, \\cite{Y75}), there exists a sequence of points $\\{p_k\\}_{k\\in \\mathbb{N}}\\subset M$ such that\n$$\n\\Delta f^2(p_k)>-\\frac{1}{k}\\qquad\\textrm{and}\\qquad \\lim_{k\\to\\infty} f^2(p_k)=\\sup_M f^2.\n$$\nIt follows that $\\displaystyle{\\lim_{k\\to\\infty}}\\Delta f^2(p_k)=0$, so $\\displaystyle{\\lim_{k\\to\\infty}((m-|A|^2)f^2(p_k))}=0$.\n\nAs $\\displaystyle{\\lim_{k\\to\\infty} f^2(p_k)=\\sup_M f^2>0}$, we get $|A|^2=m$. But from \\eqref{eq: d1} follows that $f$ is a harmonic function on $M$. As $f$ is also a bounded function on $M$, by a result of Yau (see \\cite{Y75}), we deduce that $f=$ constant.\n\\end{proof}\n\n\\begin{corollary}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a non-compact hypersurface. Assume that $M$ is complete and has non-negative Ricci curvature. If $\\varphi$ is proper biharmonic, $|A|^2$ is constant and $|H|^2\\geq {4(m-1)}\/({m(m+8)})$, then $\\varphi$ is CMC and $|A|^2=m$. In this case, $m\\geq 4$ and $|H|^2\\leq(({m-2})\/{m})^2$.\n\\end{corollary}\n\n\\begin{proposition}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a non-compact hypersurface. Assume that $M$ is complete and has non-negative Ricci curvature. If $\\varphi$ is proper biharmonic, $|A|^2$ is constant, $|A|^2\\leq m$ and $H$ is nowhere zero, then $\\varphi$ is CMC and $|A|^2=m$.\n\\begin{proof}\nAs $H$ is nowhere zero we consider $\\eta=H\/|H|$ a global unit section in the normal bundle. Then, on $M$,\n\\begin{equation}\n\\Delta f=(m-|A|^2)f,\n\\end{equation}\nwhere $f=|H|>0$. 
As $m-|A|^2\\geq 0$, by a classical result (see, for example, \\cite[pag.~2]{L06}) we conclude that $m=|A|^2$ and therefore $f$ is constant.\n\\end{proof}\n\\end{proposition}\n\n\\subsection{Case 3}\nWe first present another result of J.H.~Chen in \\cite{C93}. In order to do that, we shall need the following lemma.\n\\begin{lemma}\\label{lem: nablaA}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be an orientable hypersurface, $\\eta$ a unit section in the normal bundle, and put $A_\\eta=A$. Then\n\\begin{itemize}\n\\item[(i)] $(\\nabla A)(\\cdot,\\cdot)$ is symmetric,\n\\item[(ii)] $\\langle(\\nabla A)(\\cdot,\\cdot),\\cdot\\rangle$ is totally symmetric,\n\\item[(iii)] $\\trace (\\nabla A)(\\cdot,\\cdot)=m\\grad f$.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{theorem}[\\cite{C93}]\\label{th: jchen2}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ be a compact hypersurface. If $\\varphi$ is proper biharmonic, $M$ has non-negative sectional curvature and $m\\leq 10$, then $\\varphi$ is CMC and $\\varphi(M)$ is either $\\mathbb{S}^{m}(1\/\\sqrt 2)$, or $\\mathbb{S}^{m_1}(1\/\\sqrt 2)\\times \\mathbb{S}^{m_2}(1\/\\sqrt 2)$, $m_1+m_2=m$, $m_1\\neq m_2$.\n\\end{theorem}\n\n\\begin{proof}\nFor the same reasons as in Theorem~\\ref{th: jchen1} we include a detailed proof of this result. We can assume that $M$ is orientable (otherwise, as in the proof of Theorem~\\ref{th: jchen1}, we work with the oriented double covering of $M$). Fix a unit section $\\eta\\in C(NM)$ and put $A=A_\\eta$ and $f=(\\trace A)\/m$.\n\nWe intend to prove that the following inequality holds on $M$,\n\\begin{equation}\\label{eq: fund_ineq_chen2}\n\\frac{1}{2}\\Delta\\left(|A|^2+\\frac{m^2}{2}f^2\\right)\\leq \\frac{3m^2(m-10)}{4(m-1)}|\\grad f|^2-\\dfrac{1}{2}\\sum_{i,j=1}^m (\\lambda_i-\\lambda_j)^2 R_{ijij}.\n\\end{equation}\n\nFrom the Weitzenb\\\"ock formula we have\n\\begin{equation}\\label{eq: Weitz_norm_A}\n\\frac{1}{2}\\Delta |A|^2=\\langle \\Delta A,A\\rangle-|\\nabla A|^2.\n\\end{equation}\nLet us first verify that\n\\begin{eqnarray}\\label{eq: DeltaA_aux}\n\\trace(\\nabla^2 A)(X,\\cdot,\\cdot)=\\nabla_X (\\trace\\nabla A),\n\\end{eqnarray}\nfor all $X\\in C(TM)$. Fix $p\\in M$ and let $\\{E_i\\}_{i=1}^m$ be a local orthonormal frame field, geodesic at $p$.
Then, also using Lemma~\\ref{lem: nablaA}(i), we get at $p$,\n\\begin{eqnarray*}\n\\trace(\\nabla^2 A)(X,\\cdot,\\cdot)&=&\\sum_{i=1}^m (\\nabla^2 A)(X,E_i,E_i)=\\sum_{i=1}^m (\\nabla_X \\nabla A)(E_i,E_i)\\nonumber\\\\\n&=&\\sum_{i=1}^m \\{\\nabla_X \\nabla A(E_i,E_i)-2\\nabla A(\\nabla_X E_i,E_i)\\}=\\sum_{i=1}^m \\nabla_X \\nabla A(E_i,E_i)\\nonumber\\\\\n&=&\\nabla_X (\\trace\\nabla A).\n\\end{eqnarray*}\nUsing Lemma~\\ref{lem: nablaA}, the Ricci commutation formula (see, for example, \\cite{B}) and \\eqref{eq: DeltaA_aux}, we obtain\n\\begin{eqnarray}\\label{eq: DeltaA}\n\\Delta A(X)&=&-(\\trace\\nabla^2 A) (X)=-\\trace(\\nabla^2 A)(\\cdot,\\cdot,X)=-\\trace(\\nabla^2 A)(\\cdot,X,\\cdot)\\nonumber\\\\\n&=&-\\trace(\\nabla^2 A)(X,\\cdot,\\cdot)- \\trace(RA)(\\cdot,X,\\cdot)\\nonumber\\\\\n&=&-\\nabla_X (\\trace\\nabla A)-\\trace (R A)(\\cdot,X,\\cdot)\\nonumber\\\\\n&=& -m\\nabla_X \\grad f-\\trace (R A)(\\cdot,X,\\cdot),\n\\end{eqnarray}\nwhere\n$$\nRA(X,Y,Z)=R(X,Y)A(Z)-A(R(X,Y)Z),\\quad \\forall\\,X,Y,Z\\in C(TM).\n$$\n\nAlso, using \\eqref{eq: caract_bih_hipersurf_spheres}(ii) and Lemma~\\ref{lem: nablaA}, we obtain\n\\begin{eqnarray}\\label{eq: partial}\n\\trace\\langle A(\\nabla_{\\cdot}\\grad f),\\cdot\\rangle\n&=&\\trace \\langle \\nabla_{\\cdot}A(\\grad f)-(\\nabla A)(\\cdot,\\grad f),\\cdot\\rangle\\nonumber\\\\\n&=&-\\dfrac{m}{4}\\trace \\langle \\nabla_{\\cdot} \\grad f^2,\\cdot\\rangle-\\langle \\trace(\\nabla A),\\grad f\\rangle\\nonumber\\\\\n&=&\\dfrac{m}{4}\\Delta f^2-m|\\grad f|^2.\n\\end{eqnarray}\n\nUsing \\eqref{eq: DeltaA} and \\eqref{eq: partial}, we get\n\\begin{eqnarray}\\label{eq: DeltaA_A}\n\\langle \\Delta A,A\\rangle&=&\\trace\\langle \\Delta A(\\cdot),A(\\cdot)\\rangle\\nonumber\\\\\n&=&-m\\trace\\langle \\nabla_{\\cdot}\\grad f,A(\\cdot)\\rangle+\\langle T, A\\rangle\\nonumber\\\\\n&=&-m\\trace\\langle A(\\nabla_{\\cdot}\\grad f),\\cdot\\rangle+\\langle T, A\\rangle\\nonumber\\\\\n&=&m^2|\\grad f|^2-\\dfrac{m^2}{4}\\Delta f^2+\\langle T, A\\rangle,\n\\end{eqnarray}\nwhere $T(X)=-\\trace (R A)(\\cdot,X,\\cdot)$, $X\\in C(TM)$.\n\nIn the following we shall verify that\n\\begin{equation}\\label{eq: estim_nablaA}\n|\\nabla A|^2\\geq\\dfrac{m^2(m+26)}{4(m-1)}|\\grad f|^2,\n\\end{equation}\nat every point of $M$. 
Now, let us fix a point $p\\in M$.\n\nIf $\\grad_p f=0$, then \\eqref{eq: estim_nablaA} obviously holds at $p$.\n\nIf $\\grad_p f\\neq 0$, then on a neighborhood $U\\subset M$ of $p$ we can consider an orthonormal frame field $E_1=({\\grad f})\/{|\\grad f|}$, $E_2$,\\ldots, $E_m$, where $E_k(f)=0$, for all $k=2,\\ldots, m$.\nUsing \\eqref{eq: caract_bih_hipersurf_spheres}(ii), we obtain on $U$\n\\begin{eqnarray}\\label{eq: NA1}\n\\langle (\\nabla A)(E_1,E_1),E_1\\rangle&=&\\frac{1}{|\\grad f|^3}(\\langle \\nabla_{\\grad f}A(\\grad f),\\grad f\\rangle\\nonumber\\\\&&\n-\\langle A(\\nabla_{\\grad f}\\grad f),\\grad f\\rangle)\\nonumber\\\\\n&=&-\\frac{m}{2}|\\grad f|.\n\\end{eqnarray}\nFrom here, using Lemma~\\ref{lem: nablaA}, we also have on $U$\n\\begin{eqnarray}\\label{eq: NA2}\n\\sum_{k=2}^m\\langle (\\nabla A)(E_k,E_k),E_1\\rangle&=&\\sum_{i=1}^m\\langle (\\nabla A)(E_i,E_i),E_1\\rangle-\\langle (\\nabla A)(E_1,E_1),E_1\\rangle\\nonumber\\\\\n&=&\\langle \\trace\\nabla A,E_1\\rangle+\\frac{m}{2}|\\grad f|=\\frac{3m}{2}|\\grad f|.\n\\end{eqnarray}\nUsing \\eqref{eq: NA1} and \\eqref{eq: NA2}, we have on $U$\n\\begin{eqnarray}\n|\\nabla A|^2&=&\\sum_{i,j=1}^m|(\\nabla A)(E_i,E_j)|^2 =\\sum_{i,j,h=1}^m\\langle(\\nabla A)(E_i,E_j),E_h\\rangle^2\\nonumber\\\\\n&\\geq& \\langle(\\nabla A)(E_1,E_1),E_1\\rangle^2+ 3\\sum_{k=2}^m\\langle(\\nabla A)(E_k,E_k),E_1\\rangle^2\\nonumber\\\\\n&\\geq& \\langle(\\nabla A)(E_1,E_1),E_1\\rangle^2+ \\frac{3}{m-1}\\left(\\sum_{k=2}^m\\langle(\\nabla A)(E_k,E_k),E_1\\rangle\\right)^2\\nonumber\\\\\n&=&\\dfrac{m^2(m+26)}{4(m-1)}|\\grad f|^2,\n\\end{eqnarray}\nthus \\eqref{eq: estim_nablaA} is verified, and \\eqref{eq: Weitz_norm_A} implies\n\\begin{equation}\\label{eq: Delta_intermed}\n\\frac{1}{2}\\Delta\\left(|A|^2+\\frac{m^2}{2} f^2\\right)\\leq \\frac{3m^2(m-10)}{4(m-1)}|\\grad f|^2+\\langle T,A\\rangle.\n\\end{equation}\n\nFix $p\\in M$ and consider $\\{e_i\\}_{i=1}^m$ to be an orthonormal basis of $T_pM$, such that $A(e_i)=\\lambda_i e_i$. Then, at $p$, we get\n\\begin{eqnarray*}\\label{eq: T_A}\n\\langle T, A\\rangle=-\\dfrac{1}{2}\\sum_{i,j=1}^m (\\lambda_i-\\lambda_j)^2 R_{ijij},\n\\end{eqnarray*}\nand then \\eqref{eq: Delta_intermed} becomes \\eqref{eq: fund_ineq_chen2}.\n\nNow, since $m\\leq 10$ and $M$ has non-negative sectional curvature, we obtain\n$$\n\\Delta\\left(|A|^2+\\frac{m^2}{2}|H|^2\\right)\\leq 0\n$$\non $M$. As $M$ is compact, we have\n$$\n\\Delta\\left(|A|^2+\\frac{m^2}{2}|H|^2\\right)= 0\n$$\non $M$, which implies\n\\begin{equation}\\label{eq:lambdarijij}\n(\\lambda_i-\\lambda_j)^2 R_{ijij}=0\n\\end{equation}\n on $M$. Fix $p\\in M$.\nFrom the Gauss equation for $\\varphi$, $R_{ijij}=1+\\lambda_i\\lambda_j$, for all $i\\neq j$, and from\n\\eqref{eq:lambdarijij} we obtain\n$$\n(\\lambda_i-\\lambda_j) (1+\\lambda_i\\lambda_j)=0,\\quad i\\neq j.\n$$\nLet us now fix $\\lambda_1$. If there exists another principal curvature $\\lambda_j\\neq \\lambda_1$, $j>1$, then from the latter relation we get that $\\lambda_1\\neq 0$ and $\\lambda_j=-1\/\\lambda_1$.\nThus $\\varphi$ has at most two distinct principal curvatures at $p$. Since $p$ was arbitrarily fixed, we obtain that $\\varphi$ has at most two distinct principal curvatures everywhere and we conclude by using Theorem~\\ref{th: hypersurf_2curv}.\n\\end{proof}\n\n\\begin{proposition}\\label{pro:nonnegricciesistepxpnozero}\nLet $\\varphi:M^m\\to \\mathbb{S}^{m+1}$ , $m\\geq 3$, be a hypersurface. 
Assume that $M$ has non-negative sectional curvature and for all $p\\in M$ there exists $X_p\\in T_pM$, $|X_p|=1$, such that $\\ricci(X_p,X_p)=0$. If $\\varphi$ is proper biharmonic, then $\\varphi(M)$ is an open part of $\\mathbb{S}^{m-1}(1\/\\sqrt 2)\\times\\mathbb{S}^1(1\/\\sqrt 2)$.\n\\end{proposition}\n\n\\begin{proof}\nLet $p\\in M$ be an arbitrarily fixed point, and $\\{e_i\\}_{i=1}^m$ an orthonormal basis in $T_pM$ such that $A(e_i)=\\lambda_i e_i$. For $i\\neq j$, using \\eqref{eq:ricci-minsn}, we have that $\\ricci(e_i,e_j)=0$. Therefore, $\\{e_i\\}_{i=1}^m$ is also a basis of eigenvectors for the Ricci curvature. Now, if $\\ricci(e_i,e_i)>0$ for all $i=1,\\ldots, m$, then $\\ricci(X,X)>0$ for all $X\\in T_pM\\setminus\\{0\\}$, which contradicts the hypothesis. Thus there must exist $i_0$ such that $\\ricci(e_{i_0},e_{i_0})=0$. Assume that $\\ricci(e_{1},e_{1})=0$. From $0=\\ricci(e_{1},e_{1})=\\sum_{j=2}^m R_{1j1j}=\\sum_{j=2}^m K_{1j}$ and since $K_{1j}\\geq 0$ for all\n$j\\geq 2$, we conclude that $K_{1j}=0$ for all $j\\geq 2$, that is $1+\\lambda_1 \\lambda _j=0$ for all $j\\geq 2$. The latter implies that $\\lambda_1\\neq 0$ and $\\lambda_j=-1\/\\lambda_1$ for all $j\\geq 2$. Thus $M$ has two distinct principal curvatures everywhere, one of them of multiplicity one, and the conclusion follows from Theorem~\\ref{th: hypersurf_2curv}.\n\\end{proof}\n\n\n\\begin{remark}\nIf $\\varphi:M^m\\to \\mathbb{S}^{m+1}$, $m\\geq 3$, is a compact hypersurface, then the conclusion of\nProposition~\\ref{pro:nonnegricciesistepxpnozero} holds if we replace the hypothesis on the Ricci curvature with the requirement that the fundamental group is infinite. In fact, the full classification of compact hypersurfaces\nin $\\mathbb{S}^{m+1}$ with non-negative sectional curvature and infinite fundamental group was given in \\cite{C03}.\n\\end{remark}\n\n\\section{PMC biharmonic immersions in $\\mathbb{S}^n$}\n\nIn this section we list some of the most important known results on PMC biharmonic submanifolds in spheres and we prove some new ones. In order to do that we first need the following lemma.\n\n\\begin{lemma}\\label{lem: AH_B}\nLet $\\varphi:M^m\\to N^n$ be an immersion. Then $|A_H|^2\\leq |H|^2 |B|^2$ on $M$. Moreover, $|A_H|^2= |H|^2 |B|^2$ at $p\\in M$ if and only if either $H(p)=0$, or the first normal of $\\varphi$ at $p$ is spanned by $H(p)$.\n\\end{lemma}\n\n\\begin{proof}\nLet $p\\in M$. If $|H(p)|=0$, then the conclusion is obvious.\nConsider now the case when $|H(p)|\\neq0$, let $\\eta_p=H(p)\/|H(p)|\\in N_pM$ and let $\\{e_i\\}_{i=1}^m$ be an orthonormal basis in $T_pM$. Then, at $p$,\n\\begin{eqnarray*}\n|A_H|^2&=&\\sum_{i,j=1}^m\\langle A_H(e_i),e_j\\rangle^2=\\sum_{i,j=1}^m\\langle B(e_i,e_j),H\\rangle^2=|H|^2\\sum_{i,j=1}^m\\langle B(e_i,e_j),\\eta_p\\rangle^2\\\\\n&\\leq& |H|^2 |B|^2.\n\\end{eqnarray*}\nIn this case equality holds if and only if $\\displaystyle{\\sum_{i,j=1}^m\\langle B(e_i,e_j),\\eta_p\\rangle^2=|B|^2},$\ni.e.\n$$\n\\langle B(e_i,e_j),\\xi_p\\rangle =0,\\quad \\forall\\,\\xi_p\\in N_pM\\,\\text{ with}\\,\\, \\xi_p\\perp H(p).\n$$\nThis is equivalent to the first normal at $p$ being spanned by $H(p)$ and we conclude.\n\\end{proof}\n\nUsing the above lemma we can prove the following lower bound for the norm of the second fundamental form.\n\n\\begin{proposition}\nLet $\\varphi:M^m\\to \\mathbb{S}^n$ be a PMC proper biharmonic immersion.
Then $m\\leq |B|^2$ and equality holds if and only if $\\varphi$ induces a CMC proper biharmonic immersion of $M$ into a totally geodesic sphere $\\mathbb{S}^{m+1}\\subset \\mathbb{S}^n$.\n\\end{proposition}\n\\begin{proof}\nBy Corollary~\\ref{th: caract_bih_pmc} we have $|A_H|^2=m|H|^2$ and, by using Lemma~\\ref{lem: AH_B}, we obtain $m\\leq|B|^2$.\n\nSince $H$ is parallel and nowhere zero, equality holds if and only if the first normal is spanned by $H$, and we can apply the codimension reduction result of J.~Erbacher (\\cite{E71}) to obtain the existence of a totally geodesic sphere $\\mathbb{S}^{m+1}\\subset \\mathbb{S}^n$, such that $\\varphi$ is an immersion of $M$ into $\\mathbb{S}^{m+1}$. Since $\\varphi:M^m\\to \\mathbb{S}^n$ is PMC proper biharmonic, the restriction $M^m\\to \\mathbb{S}^{m+1}$ is CMC proper biharmonic.\n\\end{proof}\n\n\n\\begin{remark}\n\\begin{itemize}\n\\item[(i)] Let $\\varphi=\\imath\\circ\\phi:M\\to \\mathbb{S}^n$ be a proper biharmonic immersion of class {\\bf B3}. Then $m\\leq|B|^2$ and equality holds if and only if the induced $\\phi$ is totally geodesic.\n\n\\item[(ii)] Let $\\varphi=\\imath\\circ(\\phi_1\\times\\phi_2): M_1\\times M_2\\to \\mathbb{S}^n$ be a proper biharmonic immersion of class {\\bf B4}. Then $m\\leq|B|^2$ and equality holds if and only if both $\\phi_1$ and $\\phi_2$ are totally geodesic.\n\\end{itemize}\n\\end{remark}\n\nThe above remark suggests looking for PMC proper biharmonic immersions with $|H|=1$ and\n$|B|^2=m$.\n\n\\begin{corollary}\nLet $\\varphi:M^m\\to\\mathbb{S}^n$ be a PMC proper biharmonic immersion. Then $|H|=1$ and $|B|^2=m$ if and only if $\\varphi(M)$ is an open part of $\\mathbb{S}^{m}(1\/\\sqrt 2)\\subset\\mathbb{S}^{m+1}\\subset\\mathbb{S}^n$.\n\\end{corollary}\n\nThe case when $M$ is a surface is more rigid. Using the classification of PMC surfaces in $\\mathbb{S}^{n}$ given by S.-T.~Yau \\cite{Y74}, and \\cite[Corollary~5.5]{BMO08}, we obtain the following result.\n\n\\begin{theorem}[\\cite{BMO08}]\\label{th: bih_PMC_surf}\nLet $\\varphi:M^2\\to\\mathbb{S}^n$ be a PMC proper biharmonic surface. Then $\\varphi$ induces a minimal immersion of $M$ into a small hypersphere $\\mathbb{S}^{n-1}(1\/\\sqrt{2})\\subset\\mathbb{S}^n$.\n\\end{theorem}\n\n\\begin{remark}\nIf $n=4$ in Theorem~\\ref{th: bih_PMC_surf}, then the same conclusion holds under the weakened assumption that the surface is CMC, as was shown in \\cite{BO09}.\n\\end{remark}\nIn the higher dimensional case we have the following bounds for the value of the mean curvature of a\nPMC proper biharmonic immersion.\n\n\\begin{theorem}[\\cite{BO12}]\\label{th: pmc1}\nLet $\\varphi:M^m\\to\\mathbb{S}^n$ be a PMC proper biharmonic immersion. Assume that $m>2$ and $|H|\\in (0,1)$.\nThen $|H|\\in (0,({m-2})\/{m}]$, and $|H|=({m-2})\/{m}$ if and only\nif locally $\\varphi(M)$ is an open part of a standard product\n$$\nM_1\\times\\mathbb{S}^1(1\/\\sqrt{2})\\subset\\mathbb{S}^n,\n$$\nwhere $M_1$ is a minimal embedded submanifold of $\\mathbb{S}^{n-2}(1\/\\sqrt{2})$.
Moreover, if $M$ is\ncomplete, then the above decomposition of $\\varphi(M)$ holds globally, where $M_1$ is a complete minimal submanifold of $\\mathbb{S}^{n-2}(1\/\\sqrt{2})$.\n\\end{theorem}\n\n\\begin{remark}\nThe result of Theorem~\\ref{th: pmc1} was also proved, independently, in \\cite{WW12}.\n\\end{remark}\nIf we assume that $M$ is compact and $|B|$ is bounded, we obtain the following theorem.\n\n\\begin{theorem}\\label{th: pmc-santos}\nLet $\\varphi:M^m\\to\\mathbb{S}^{m+d}$ be a compact PMC proper biharmonic immersion with $m\\geq 2$, $d\\geq 2$ and\n$$\nm<|B|^2\\leq m \\frac{d-1}{2d-3}\\left(1+\\frac{3d-4}{d-1}|H|^2-\\frac{m-2}{\\sqrt{m-1}}|H| \\sqrt{1-|H|^2}\\right).\n$$\n\\begin{itemize}\n\\item[(i)] If $m=2$, then $|H|=1$, and either $d=2$, $|B|^2=6$, $\\varphi(M^2)=\\mathbb{S}^{1}(1\/{2})\\times\\mathbb{S}^{1}(1\/{2})\\subset\\mathbb{S}^{3}(1\/\\sqrt{2})$ or $d=3$, $|B|^2=14\/3$, $\\varphi(M^2)$ is the Veronese minimal surface in $\\mathbb{S}^{4}(1\/\\sqrt{2})$.\n\n\\item[(ii)] If $m>2$, then $|H|=1$, $d=2$, $|B|^2=3m$ and\n$$\n\\varphi(M^m)=\\mathbb{S}^{m_1}\\left(\\sqrt{{m_1}\/{(2m)}}\\right)\\times \\mathbb{S}^{m_2}\\left(\\sqrt{{m_2}\/{(2m)}}\\right)\\subset \\mathbb{S}^{m+1}(1\/\\sqrt{2}),\n$$\nwhere $m_1+m_2=m$, $m_1\\geq 1$ and $m_2\\geq 1$.\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\nThe result follows from the classification of compact PMC immersions with bounded $|B|^2$ given in\nTheorem~1.6 of \\cite{S94}.\n\\end{proof}\n\n\n\n\\begin{theorem}[\\cite{BO12}]\\label{th: pmc2}\nLet $\\varphi:M^m\\to\\mathbb{S}^n$ be a PMC proper biharmonic immersion with $\\nabla A_H=0$. Assume that $|H|\\in (0,({m-2})\/{m})$.\nThen, $m>4$ and, locally,\n$$\n\\varphi(M)=M^{m_1}_1\\times M^{m_2}_2\n\\subset\\mathbb{S}^{n_1}(1\/\\sqrt{2})\\times\\mathbb{S}^{n_2}(1\/\\sqrt{2})\\subset\\mathbb{S}^n,\n$$\nwhere $M_i$ is a minimal embedded submanifold of $\\mathbb{S}^{n_i}(1\/\\sqrt{2})$, $m_i\\geq 2$,\n$i=1,2$, $m_1+m_2=m$, $m_1\\neq m_2$, $n_1+n_2=n-1$. In this case $|H|={|m_1-m_2|}\/{m}$.\nMoreover, if $M$ is complete, then the above decomposition of $\\varphi(M)$ holds globally, where $M_i$ is a complete minimal submanifold of $\\mathbb{S}^{n_i}(1\/\\sqrt{2})$, $i=1,2$.\n\n\\end{theorem}\n\n\\begin{corollary}\nLet $\\varphi:M^m\\to\\mathbb{S}^n$, $m\\in\\{3,4\\}$, be a PMC proper biharmonic immersion with $\\nabla A_H=0$. Then $|H|\\in \\{({m-2})\/{m},1\\}$. Moreover, if $|H|=({m-2})\/{m}$, then locally\n$\\varphi(M)$ is an open part of a standard product\n$$\nM_1\\times\\mathbb{S}^1(1\/\\sqrt{2})\\subset\\mathbb{S}^n,\n$$\nwhere $M_1$ is a minimal embedded submanifold of $\\mathbb{S}^{n-2}(1\/\\sqrt{2})$,\nand if $|H|=1$, then $\\varphi$ induces a minimal immersion of $M$ into $\\mathbb{S}^{n-1}(1\/\\sqrt 2)$.\n\\end{corollary}\n\nWe should note that there exist examples of proper biharmonic submanifolds of $\\mathbb{S}^{5}$ and\n$\\mathbb{S}^{7}$ which are not PMC but with $\\nabla A_H=0$ (see \\cite{S05} and \\cite{FO12}).\n\n\n\\section{Parallel biharmonic immersions in $\\mathbb{S}^n$}\n\nAn immersed submanifold is said to be {\\it parallel} if\nits second fundamental form $B$ is parallel, that is $\\nabla^\\perp B=0$.\n\nIn the following we give the classification for proper biharmonic parallel immersed surfaces in $\\mathbb{S}^n$.\n\\begin{theorem}\\label{teo:parallel-surfaces}\nLet $\\varphi:M^2\\to \\mathbb{S}^n$ be a parallel surface in $\\mathbb{S}^n$.
If $\\varphi$ is proper biharmonic, then the codimension can be reduced to $3$ and $\\varphi(M)$ is an open part of either\n\\begin{itemize}\n\\item[{\\rm(i)}] a totally umbilical sphere $\\mathbb{S}^2(1\/\\sqrt2)$ lying in\na totally geodesic $\\mathbb{S}^3\\subset \\mathbb{S}^5$,\nor\n\\item[{\\rm(ii)}] the minimal flat torus $\\mathbb{S}^1(1\/2)\\times \\mathbb{S}^1(1\/2)\\subset\n\\mathbb{S}^3(1\/\\sqrt2)$; $\\varphi(M)$ lies in a totally geodesic $\\mathbb{S}^4\\subset \\mathbb{S}^5$,\nor\n\\item[{\\rm(iii)}] the minimal Veronese surface in $\\mathbb{S}^4(1\/\\sqrt2)\\subset \\mathbb{S}^5$.\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\nThe proof relies on the fact that parallel submanifolds in $\\mathbb{S}^n$ are classified in the following three categories (see, for example, \\cite{C10}):\n\\begin{itemize}\n\\item[(a)] a totally umbilical sphere $\\mathbb{S}^2(r)$ lying in a totally geodesic $\\mathbb{S}^3\\subset\\mathbb{S}^n$;\n\n\\item[(b)] a flat torus lying in a totally geodesic $\\mathbb{S}^4\\subset\\mathbb{S}^n$\ndefined by\n$$\n(0,\\ldots,0,a\\cos u,a\\sin u,b\\cos v,b\\sin v,\\sqrt{1-a^2 -b^2}),\\quad a^2 +b^2 \\leq1;\n$$\n\\item[(c)] a surface of positive constant curvature lying in a totally geodesic $\\mathbb{S}^5\\subset\\mathbb{S}^n$ defined by\n$$\nr\\left(0,\\ldots,0,\\frac{v w}{\\sqrt{3}},\\frac{u w}{\\sqrt{3}},\\frac{u v}{\\sqrt{3}},\\frac{u^2-v^2}{2\\sqrt{3}},\n\\frac{u^2+v^2-2w^2}{6},\\frac{\\sqrt{1-r^2}}{r}\\right),\n$$\nwith $u^2+v^2+w^2=3$ and $0<r\\leq 1$.\n\\end{itemize}\nChecking the biharmonic condition directly in each of the three cases leads to the stated classification.\n\\end{proof}\n\nIn the higher dimensional case we have the following result for parallel proper biharmonic immersions with $m>2$ and $|H|\\in (0,1)$.\n\n\\begin{theorem}\nLet $\\varphi:M^m\\to\\mathbb{S}^n$ be a parallel proper biharmonic immersion. Assume that $m>2$ and $|H|\\in (0,1)$. Then $|H|\\in (0,({m-2})\/{m}]$. Moreover:\n\\begin{itemize}\n\\item[(i)] $|H|=({m-2})\/{m}$ if and only\nif locally $\\varphi(M)$ is an open part of a standard product\n$$\nM_1\\times\\mathbb{S}^1(1\/\\sqrt{2})\\subset\\mathbb{S}^n,\n$$\nwhere $M_1$ is a parallel minimal embedded submanifold of $\\mathbb{S}^{n-2}(1\/\\sqrt{2})$;\n\n\\item[(ii)] $|H|\\in (0,({m-2})\/{m})$ if and only if $m>4$ and, locally,\n$$\n\\varphi(M)=M^{m_1}_1\\times M^{m_2}_2\n\\subset\\mathbb{S}^{n_1}(1\/\\sqrt{2})\\times\\mathbb{S}^{n_2}(1\/\\sqrt{2})\\subset\\mathbb{S}^n,\n$$\nwhere $M_i$ is a parallel minimal embedded submanifold of $\\mathbb{S}^{n_i}(1\/\\sqrt{2})$, $m_i\\geq 2$,\n$i=1,2$, $m_1+m_2=m$, $m_1\\neq m_2$, $n_1+n_2=n-1$.\n\\end{itemize}\n\\end{theorem}\n\\begin{proof} We only have to prove that $M_i$ is a parallel minimal submanifold of $\\mathbb{S}^{n_i}(1\/\\sqrt{2})$, $m_i\\geq 2$. For this, denote by $B^i$ the second fundamental form of $M_i$ in $\\mathbb{S}^{n_i}(1\/\\sqrt{2})$, $i=1,2$. If $B$ denotes the second fundamental form of $M_1\\times M_2$ in $\\mathbb{S}^n$, it is easy to verify, using the expression of the second fundamental form of $\\mathbb{S}^{n_1}(1\/\\sqrt{2})\\times\\mathbb{S}^{n_2}(1\/\\sqrt{2})$ in $\\mathbb{S}^n$, that\n$$\n(\\nabla^\\perp_{(X_1,X_2)} B)((Y_1,Y_2),(Z_1,Z_2))=((\\nabla^\\perp_{X_1} B^1)(Y_1,Z_1),(\\nabla^\\perp_{X_2} B^2)(Y_2,Z_2)),\n$$\nfor all $X_1,Y_1,Z_1\\in C(TM_1)$, $X_2 ,Y_2, Z_2\\in C(TM_2)$.\nConsequently, $M_1\\times M_2$ is parallel in $\\mathbb{S}^n$ if and only if $M_i$ is parallel in $\\mathbb{S}^{n_i}(1\/\\sqrt{2})$, $i=1,2$.\n\\end{proof}
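\n\nThe following elementary computation, a verification sketch based on the characterization \\eqref{eq: caract_bih_hipersurf_spheres} used throughout this paper, explains the ubiquitous condition $m_1\\neq m_2$ and is recorded here for the reader's convenience.\n\n\\begin{remark}\nFor the standard product $\\mathbb{S}^{m_1}(1\/\\sqrt 2)\\times \\mathbb{S}^{m_2}(1\/\\sqrt 2)\\subset\\mathbb{S}^{m+1}$, $m_1+m_2=m$, the principal curvatures are $1$, with multiplicity $m_1$, and $-1$, with multiplicity $m_2$, so that\n$$\n|A|^2=m,\\qquad \\trace A=m_1-m_2,\\qquad f=\\frac{m_1-m_2}{m}.\n$$\nAs $f$ is constant and $|A|^2=m$, both conditions of \\eqref{eq: caract_bih_hipersurf_spheres} are satisfied, and the hypersurface is proper biharmonic, i.e. $H\\neq 0$, precisely when $m_1\\neq m_2$.\n\\end{remark}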
\n\n\n\\section{Open problems}\nWe list some open problems and conjectures that seem to be natural.\n\n\\begin{conjecture}\\label{conj: 1}\nThe only proper biharmonic hypersurfaces in $\\mathbb{S}^{m+1}$ are the open parts of hyperspheres $\\mathbb{S}^m(1\/\\sqrt2)$ or of the standard products of spheres $\\mathbb{S}^{m_1}(1\/\\sqrt2)\\times\\mathbb{S}^{m_2}(1\/\\sqrt2)$,\n$m_1+m_2=m$, $m_1\\neq m_2$.\n\\end{conjecture}\n\nTaking into account the results presented in this paper, we have a series of statements equivalent to Conjecture~\\ref{conj: 1}:\n\\begin{itemize}\n\\item[1.] A proper biharmonic hypersurface in $\\mathbb{S}^{m+1}$ has at most two distinct principal curvatures everywhere.\n\\item[2.] A proper biharmonic hypersurface in $\\mathbb{S}^{m+1}$ is parallel.\n\\item[3.] A proper biharmonic hypersurface in $\\mathbb{S}^{m+1}$ is CMC and has non-negative sectional curvature.\n\\item[4.] A proper biharmonic hypersurface in $\\mathbb{S}^{m+1}$ is isoparametric.\n\\end{itemize}\n\nOne can also state the following intermediate conjecture.\n\n\\begin{conjecture}\\label{conj: 2}\nThe proper biharmonic hypersurfaces in $\\mathbb{S}^{m+1}$ are CMC.\n\\end{conjecture}\n\nRelated to PMC immersions and, in particular, to Theorem~\\ref{th: pmc2}, we propose the following problem.\n\n\\begin{problem}\nFind a PMC proper biharmonic immersion $\\varphi:M^m\\to\\mathbb{S}^{n}$ such that\n$A_{H}$ is not parallel.\n\\end{problem}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nLegged robots can select discrete footholds to cross various complicated terrain, which will enable them to execute motor tasks in fields such as field rescue and planetary exploration. Hexapod robots, which have higher stability and superior load capacity than biped and quadruped robots, are widely used\\cite{moore2002reliable}\\cite{tunc2016experimental}\\cite{picardi2020bioinspired}. Still, with more legs, the planning of gait and footholds for such robots is more complicated. The traditional planning framework plans the gait first, and then plans the footholds of the swing legs according to the terrain\\cite{kalakrishnan2011learning}\\cite{belter2016adaptive}\\cite{fankhauser2018robust}. The two steps are independent of each other, each making optimal decisions based on corresponding rules or evaluation functions. However, in harsh environments, traditional planning frameworks can easily cause robots to be trapped, because such planning methods make decisions based only on the current environment and the state of the robot, without considering subsequent situations. In this paper, we focus on improving the robot's ability to pass through a sparse foothold environment by selecting appropriate footholds and gaits.\n\n\n \n \n Gait is usually used to express the walking mode of a legged robot. The choice of gait can affect the robot's forward speed, stability, and passability. Classified by whether the gait changes periodically, there are two modes: periodic gait and aperiodic gait. According to the planning method, gait generation can be divided into the rule-based method and the CPG method.
\nFor the rule-based method, when walking in a periodic gait, assuming that all footholds are valid, legged robots move forward in a fixed swing sequence, which is usually taken as the 3+3 tripod gait, the 4+2 quadruped gait or the 5+1 wave gait for hexapod robots\\cite{chu2002comparison}. Because these gaits are simple to use, they are widely adopted by researchers\\cite{estremera2010continuous}\\cite{bjelonic2018weaver}\\cite{belter2019employing}. When the terrain is rugged or some areas are unsupportable, the legged robot needs to change its gait according to the terrain information and its state information, and then generates a sequence of gaits with an irregular order. This kind of aperiodic gait is called free gait. The free gait proposed by Kugushev and Jaroshevskij\\cite{kugushev1975problems} in 1975 is aperiodic, irregular, asymmetric, and terrain-adaptive. In a free gait, the order of the legs changes in a non-fixed but flexible manner depending on the trajectory, terrain properties and motion state. On irregular terrain, this gait type is more flexible and adaptable than periodic and regular gaits. A large number of free gaits for quadruped or hexapod robots have been developed so far\\cite{mcghee1979adaptive}\\cite{hirose1984study}\\cite{estremera2005generating}.\\par\n \n \n \n \\begin{figure}[tbp] \n\\centering \n\\includegraphics[scale=0.13]{photo\/fig1.pdf} \n\\caption{Establishing the Monte Carlo tree for walking planning of a hexapod robot in a sparse foothold environment. Each node represents a state of the robot. When the constructed node reaches the target position, the entire search algorithm ends. The dotted arrows in the figure indicate omitted parts, and the whole tree in the figure is a schematic structure and is not complete.} \n\\label{fig1} \n\\end{figure}\n\n \n\n\nAnother biologically-inspired gait generation method is the CPG method. Imitating biological gait rhythm control, the CPG gait planning method regards each foot of the robot as a neuron and realizes the walking of the robot by periodically triggering the movement of each foot. Shik et al.\\cite{shik1966control} proposed that the rhythmic movement of animals is controlled by a central pattern generator (CPG). Since then, a large number of scholars have carried out studies on CPGs to plan the gait for legged robots \\cite{venkataraman1997simple}\\cite{arena2004adaptive}\\cite{liu2013central}. A CPG gait mainly controls the movement of the legs through an oscillator, without any feedback, and it is easy to achieve smooth transitions between gaits. However, when the environment becomes complicated, if a leg sequence is wrong, the whole CPG model will collapse. Therefore, some scholars have combined it with a reflex model to improve the environmental adaptability of the CPG model. For example, Santos\\cite{santos2011gait} used the reflex model to provide reflex signals and realized dynamic regulation of the movement rhythm by modifying the CPG model parameters. Mustafa Suphi Erden\\cite{erden2008free} used reinforcement learning to train the structures of the CPG network and the reflex model.\\par\n\n\n\nThe selection of footholds is often carried out after gait planning.
\nFor foothold planning, the expert threshold method is commonly used to select footholds according to features such as the roughness of the terrain, the amount of slope, the degree of proximity to the edge, the amount of slip and the height variance \\cite{krotkov1996perception}\\cite{rebula2007controller}. Kolter\\cite{kolter2008hierarchical} used a hierarchical apprenticeship learning algorithm to select the footholds, which still relied on human expert experience to adjust the cost function weights. Besides, Kalakrishnan\\cite{kalakrishnan2009learning} proposed a more elaborate method, using geometric terrain template learning to extract useful landing features, where the terrain templates were obtained by human teaching. \\cite{belter2011rough} established a 2.5D map through 2D lidar and proposed a foothold selection algorithm which employs unsupervised learning to create an adaptive decision surface. The robot can learn from realistic simulations, and no prior expert-given rules or parameters are used. The above methods only consider the environmental characteristics of the robot's current location. When the environment becomes extremely harsh, no related work has been found that handles the case in which a leg has no foothold at all. In our method, this problem can either be avoided through sequence optimization or dealt with by incorporating a fault-tolerant gait method.\\par\n\n\n\nInspired by the well-known artificial intelligence case AlphaGo\\cite{silver2016mastering}\\cite{silver2017mastering}, Monte Carlo Tree Search (MCTS)\\cite{browne2012survey} is an excellent method for finding an optimal decision in such sequence optimization problems. Monte Carlo methods have a long history within numerical algorithms and have also had significant success in various AI game-playing algorithms. Recently, Monte Carlo trees have been used in unmanned vehicles and robots. For example, \\cite{lenz2016tactical} adopted the MCTS algorithm to consider interactions between different vehicles in order to plan cooperative motions. \\cite{naghshvar2018risk} combines QMDP, the unscented transform, and MCTS to establish an autonomous driving decision framework. MCTS was used for the first time to solve the planning problem of legged robots in \\cite{clary2018monte}. This work mainly demonstrates the application of the MCTS method to the blind walking of biped robots, which requires the robot to avoid obstacles on the platform ground. \\par\n\n\n\n\nFor other sequence optimization methods, there are several works. \\cite{aceituno2017simultaneous} combines contact search and trajectory generation into a single Mixed-Integer Convex Optimization problem, thereby forming a sequence optimization method. \\cite{naderi2017discovering} combines a graph-based high-level path planner with low-level sampling-based optimization of climbing to plan a footstep sequence. In the work of \\cite{tsounis2019deepgait}, the quadruped robot Anymal was trained to walk in complex environments through Deep Reinforcement Learning. They trained the perceptual planning layer and the control layer as two networks. The perceptual planning layer policy can generate basic motion sequences that lead the robot to the target position. This is similar to the sequence optimization problem we emphasize, but they do not focus on the comparison of such passability with traditional methods. Whether the superiority in the sparse foothold environment can be guaranteed is not carefully explained.
\nBesides, the above optimization work is carried out on quadruped or biped robots. Hexapod robots have a much richer combination of gaits and footholds, and no related work has addressed them yet.\\par\n\nIn this article, we mainly discuss how to plan the gait and footholds to improve the robot's ability to pass through sparse foothold environments. The main contributions of this paper lie in: \\par\n\n1) The gait generation and foothold planning are solved as a sequence optimization problem, and the Monte Carlo Tree Search algorithm is used to optimize the decision sequence. The method couples gait generation and foothold selection.\\par\n\n2) Legs without candidate footholds are treated as fault legs, and the idea of fault-tolerant gait is combined with our planning method to improve the passing ability of hexapod robots in extreme environments. In addition, a free fault-tolerant gait expert planning method considering environmental fault tolerance is also proposed.\\par\n\n3) Two methods, Fast-MCTS and Sliding-MCTS, are proposed. The Fast-MCTS method has higher passing performance and faster search speed, while Sliding-MCTS achieves an effective balance between optimality and search time.\\par\n\n4) The indicators of traditional methods and sequence optimization methods in the sparse foothold environment are compared, and the advantages and disadvantages of the different methods are explained.\n\n\n\n\n\n\n\\section{Fault Tolerant Free Gait Planning}\nTo explain the method better, we first define and explain the relevant indicators for gait and foothold planning of the hexapod robot.\n\n\\subsection{Notation and Definition}\n\n\\begin{figure}[ht] \n\\centering \n\\includegraphics[scale=0.6]{photo\/hexapodAxis.pdf} \n\\caption{Parameter definition of hexapod robot planning } \n\\label{hexapodAxis} \n\\end{figure}\n\n\\noindent \\textbf{Definition}:Support polygon\\par\nThe support polygon is a convex polygon formed by the projection points of the robot's supporting feet positions falling on the horizontal plane. Support polygons are often used to measure the stability of legged robots. If the horizontal projection of the robot's COG falls within the supporting polygon, then the robot is statically stable. When the robot moves, if the center of gravity is too close to the edge of the supporting polygon, the stability of the robot is poor. In order to avoid critically stable configurations during the planning process, this paper shrinks the support polygon towards its centroid, as shown in Figure \\ref{scaledPolygon}(a): the polygon formed by the solid line is the original supporting polygon, and the polygon formed by the broken line is the reduced supporting polygon. $(x_c,y_c)$ represents the coordinate of the centroid, and $(x_i,y_i)$ denotes the coordinate of one supporting foot position. The formulas for calculating the centroid coordinates $(x_c,y_c)$ are as follows:\\par\n\\begin{equation} \nx_c = \\frac{1}{6A}\\sum_{i=1}^{n}(x_i+x_{i+1})(x_i\\cdot y_{i+1} - x_{i+1}\\cdot y_i )\n\\end{equation}\n\\begin{equation} \ny_c = \\frac{1}{6A}\\sum_{i=1}^{n}(y_i+y_{i+1})(x_i\\cdot y_{i+1} - x_{i+1}\\cdot y_i )\n\\end{equation}\nwhere $A$ represents the area of the original supporting polygon,\n\\begin{equation} \nA = \\frac{1}{2}\\sum_{i=1}^{n}(x_i\\cdot y_{i+1} - x_{i+1}\\cdot y_i ).\n\\end{equation}\nFinally, according to the calculated centroid position and a constant stability margin $BM_0$, the support polygon is reduced.\n\n\\begin{figure}[ht] \n\\centering \n\\includegraphics[scale=0.9]{photo\/scaledPolygon.pdf} \n\\caption{(a) Support polygons shrink in proportion. (b) Simplified one-leg workspace.} \n\\label{scaledPolygon} \n\\end{figure}
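\n\nAs an illustration, the reduction can be implemented in a few lines. The sketch below is ours, not from an existing implementation; it computes $A$ and $(x_c,y_c)$ with the formulas above, and, since the exact shrinking rule is not fixed by these formulas, it simply assumes that every vertex is pulled towards the centroid by the constant margin $BM_0$.\n\\begin{verbatim}\nimport math\n\ndef signed_area(v):\n    # A = 1\/2 * sum(x_i*y_{i+1} - x_{i+1}*y_i), vertices ordered CCW\n    n = len(v)\n    return 0.5 * sum(v[i][0] * v[(i + 1) % n][1]\n                     - v[(i + 1) % n][0] * v[i][1] for i in range(n))\n\ndef centroid(v):\n    # polygon centroid (x_c, y_c) from the formulas above\n    n, a = len(v), signed_area(v)\n    cross = [v[i][0] * v[(i + 1) % n][1]\n             - v[(i + 1) % n][0] * v[i][1] for i in range(n)]\n    xc = sum((v[i][0] + v[(i + 1) % n][0]) * cross[i]\n             for i in range(n)) \/ (6 * a)\n    yc = sum((v[i][1] + v[(i + 1) % n][1]) * cross[i]\n             for i in range(n)) \/ (6 * a)\n    return xc, yc\n\ndef shrink_polygon(v, bm0):\n    # assumed rule: pull each vertex towards the centroid by bm0\n    xc, yc = centroid(v)\n    out = []\n    for x, y in v:\n        d = math.hypot(x - xc, y - yc)\n        s = max(0.0, (d - bm0) \/ d) if d > 0 else 0.0\n        out.append((xc + s * (x - xc), yc + s * (y - yc)))\n    return out\n\\end{verbatim}\n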
\n\n\\noindent \\textbf{Definition}:Single Leg Workspace \\par\nThe workspace of a single leg is simplified into a fan-shaped space, as shown in Figure \\ref{scaledPolygon}(b). The sector is defined in the single-leg workspace coordinate system $\\sum_{s_i}(O_{s_i}-x_s^{(i)}y_s^{(i)}z_s^{(i)})$, and the coordinate system $\\sum_{s_i}$ is fixed to the robot.\\newline\n\n\\noindent \\textbf{Definition}:Support State\\par\n$c_F:= [s_1,s_2,s_3,s_4,s_5,s_6]\\in \\mathbb{R}^{1\\times 6}$ is a vector indicating the support state of the hexapod when moving to the next step. If leg $ i $ is a support leg, the value of $ s_i $ is 0; if leg $ i $ is a swing leg, the value of $ s_i $ is 1. \\newline\n\n\\noindent \\textbf{Definition}:Fault Leg \\par\nThe environment may be so complicated that some legs have no effective footholds to choose from. In this case, a leg with no alternative foothold is defined as a fault leg. A leg with possible physical damage is also treated as a fault leg.\\newline\\par\n\n\\noindent \\textbf{Definition}:Leg Fault State\\par\n$t_F:=[f_1,f_2,f_3,f_4,f_5,f_6]\\in \\mathbb{R}^{1\\times 6}$ represents the leg fault state vector of the six-legged robot from the current state to the next state. If leg $i$ is a fault leg, the value of $f_i$ is 1. If leg $i$ is a normal leg, the value of $f_i$ is 0. Note that if a leg is a fault leg, it must not be a support leg.\\newline\\par\n\n\\noindent \\textbf{Definition}:Hexapod State \\par\n$\\Phi:= <$ $_B^W\\!R,\\ _{B}^{W}\\!{r}\\ , c_F,\\ t_F,\\ _{F}^{W}\\textrm{r}$ $> $ is defined as the state of the hexapod robot, where $_B^W\\!R\\in SO_3$ is the rotation matrix representing the attitude of the base w.r.t. the $W$ frame, $_{B}^{W}\\!{r}\\in \\mathbb{R}^{3} $ is the target position of the robot's COG in the next step w.r.t. the $W$ frame, $c_F$ is the support state vector of the robot from the current state to the next state, $t_F$ represents the leg fault state vector of the hexapod robot from the current state to the next state, and $_{F}^{W}\\!{r}\\in \\mathbb{R}^{3} $ is the target position of the $i_{th}$ foot in the next step (foothold position) w.r.t. the $W$ frame.\n\\newline\n\n\\noindent \\textbf{Definition}:Static Margin\\par\nThe stability margin, $SM$, also known as the absolute static stability margin, is the smallest distance from the vertical projection of the COG (centre of gravity) on a horizontal plane to the sides of the support polygon formed by joining the projections of the footholds on the same horizontal plane, as is shown in Figure \\ref{hexapodAxis}.\\newline\n\n\\noindent \\textbf{Definition}:Reduced Kinematic Margin\\par\nAs shown in Figure \\ref{scaledPolygon}(b), the reduced kinematic margin, $KM_i$, represents the distance that the $i_{th}$ foot position can move in the opposite direction of the robot motion before reaching the boundary of the workspace of leg $i$. \\newline\\par\n\n\\noindent \\textbf{Definition}:Maximum Advance Amount Based on COG\\par\nThe maximum advance amount based on the support area is the maximum distance which the hexapod can move in the forward direction under the condition that the COG does not exceed the support area. It is defined as $AA$.\\newline\n\n\\noindent \\textbf{Definition}:Maximum Step Length\\par\nThe maximum step length is the maximum distance that the hexapod can move in the forward direction. It depends on the hexapod's state and is defined as:\n\\begin{equation} \\label{MSL}\nMSL={\\rm min}(KM_i,AA),\\qquad i=1,2,\\ldots,6\n\\end{equation}
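\nFor example (an illustrative helper, not from the original implementation), equation~\\eqref{MSL} amounts to taking the tightest of the six reduced kinematic margins together with the COG-based advance amount:\n\\begin{verbatim}\ndef max_step_length(km, aa):\n    # km: the six reduced kinematic margins KM_1..KM_6\n    # aa: maximum advance amount AA based on the support area\n    return min(min(km), aa)\n\n# e.g. km = [0.12, 0.30, 0.25, 0.18, 0.22, 0.27], aa = 0.20\n# -> max_step_length(km, aa) == 0.12 (leg 1 is the bottleneck)\n\\end{verbatim}\n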
\n\\begin{figure*}[ht] \n\\centering \n\\includegraphics[scale=0.8]{photo\/planningPipline.pdf} \n\\caption{Traditional legged robot motion planning framework.} \n\\label{planningPipline} \n\\end{figure*}\n\n\\noindent \\textbf{Definition}:Support State List \\par\nThe support state list represents the maximum allowable set of support states for the robot. Each leg of a legged robot has two states: the support state and the swing state. The combination of the individual leg states constitutes the support state of the robot. For a hexapod robot, there are $2^6=64$ possible support states. To ensure static stability, the number of supporting legs should not be less than 3. After excluding the states that violate this requirement, only 42 alternative support states remain, as shown in Table \\ref{SupportStateList}. When planning the next support state for a specified hexapod state, the support state table is used as an initial candidate table for screening, and a new candidate support state table that meets the requirements is finally obtained.\\newline\\par\n\n\\noindent \\textbf{Definition}:Solution Sequence \\par\nThe solution sequence represents the state sequence of the robot from the current position to the target position. Define the solution sequence as $\\Psi=\\left \\{ \\Phi_1,\\Phi_2,\\ldots,\\Phi_k \\right \\} $, indicating that the robot needs to go through $k$ state transitions to reach its destination.\n\n\\begin{table}[h]\n\\caption{Support State List}\n\\label{SupportStateList} \n\\begin{tabular}{cccccccccccccc}\n\\hline \n\\textbf{Num} & \\multicolumn{6}{c}{ \\textbf{Support State}} & \\textbf{Num} & \\multicolumn{6}{c}{\\textbf{Support State}}\\\\\n\\hline \n1 & 0 & 0 & 0 & 1 & 1 & 1 & 22 & 1 & 0 & 1 & 0 & 1 & 0 \\\\\n2 & 0 & 0 & 1 & 0 & 1 & 1 & 23 & 1 & 0 & 1 & 0 & 1 & 1 \\\\\n3 & 0 & 0 & 1 & 1 & 0 & 1 & 24 & 1 & 0 & 1 & 1 & 0 & 0 \\\\\n4 & 0 & 0 & 1 & 1 & 1 & 0 & 25 & 1 & 0 & 1 & 1 & 0 & 1 \\\\\n5 & 0 & 0 & 1 & 1 & 1 & 1 & 26 & 1 & 0 & 1 & 1 & 1 & 0 \\\\\n6 & 0 & 1 & 0 & 0 & 1 & 1 & 27 & 1 & 0 & 1 & 1 & 1 & 1 \\\\\n7 & 0 & 1 & 0 & 1 & 0 & 1 & 28 & 1 & 1 & 0 & 0 & 0 & 1 \\\\\n8 & 0 & 1 & 0 & 1 & 1 & 0 & 29 & 1 & 1 & 0 & 0 & 1 & 0 \\\\\n9 & 0 & 1 & 0 & 1 & 1 & 1 & 30 & 1 & 1 & 0 & 0 & 1 & 1 \\\\\n10 & 0 & 1 & 1 & 0 & 0 & 1 & 31 & 1 & 1 & 0 & 1 & 0 & 0 \\\\\n11 & 0 & 1 & 1 & 0 & 1 & 0 & 32 & 1 & 1 & 0 & 1 & 0 & 1 \\\\\n12 & 0 & 1 & 1 & 0 & 1 & 1 & 33 & 1 & 1 & 0 & 1 & 1 & 0 \\\\\n13 & 0 & 1 & 1 & 1 & 0 & 0 & 34 & 1 & 1 & 0 & 1 & 1 & 1 \\\\\n14 & 0 & 1 & 1 & 1 & 0 & 1 & 35 & 1 & 1 & 1 & 0 & 0 & 0 \\\\\n15 & 0 & 1 & 1 & 1 & 1 & 0 & 36 & 1 & 1 & 1 & 0 & 0 & 1 \\\\\n16 & 0 & 1 & 1 & 1 & 1 & 1 & 37 & 1 & 1 & 1 & 0 & 1 & 0 \\\\\n17 & 1 & 0 & 0 & 0 & 1 & 1 & 38 & 1 & 1 & 1 & 0 & 1 & 1 \\\\\n18 & 1 & 0 & 0 & 1 & 0 & 1 & 39 & 1 & 1 & 1 & 1 & 0 & 0 \\\\\n19 & 1 & 0 & 0 & 1 & 1 & 0 & 40 & 1 & 1 & 1 & 1 & 0 & 1 \\\\\n20 & 1 & 0 & 0 & 1 & 1 & 1 & 41 & 1 & 1 & 1 & 1 & 1 & 0 \\\\\n21 & 1 & 0 & 1 & 0 & 0 & 1 & 42 & 1 & 1 & 1 & 1 & 1 & 1 \\\\\n\\hline\n\\end{tabular}\n\\end{table}
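\n\nThe 42 entries of Table~\\ref{SupportStateList} can be generated mechanically, as in the following sketch (illustrative only): it keeps the states in which at least three legs carry the support mark, whichever of the two binary values is used for a supporting leg.\n\\begin{verbatim}\nfrom itertools import product\n\ndef support_state_list(support_mark=0):\n    # all 2^6 = 64 leg-state combinations, keeping those with at\n    # least three supporting legs; support_mark is 0 in the\n    # Support State definition above\n    return [s for s in product((0, 1), repeat=6)\n            if s.count(support_mark) >= 3]\n\nassert len(support_state_list()) == 42\n\\end{verbatim}\n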
\n\n\n\\subsection{Fault Tolerant Free Gait Method}\n\n\n\\noindent(1) Traditional free gait planning pipeline\\par\nThe free fault-tolerant gait proposed in this section is based on the traditional expert planning process. The traditional expert planning pipeline is shown in Figure \\ref{planningPipline}. First, plan the support state $c_F$ (gait). Second, according to the selected gait, the step length of the robot is determined. Third, for the swing legs determined in step 1, the optimal footholds $_{F}^{W}\\!{r}$ are selected in their workspaces. Finally, according to the terminal robot state $\\Phi_{k+1}$ and the current robot state $\\Phi_{k}$ planned in the above steps, together with the environmental obstacle information, the robot body and leg trajectories are generated.\\par\n\nTo improve stability and passability, the traditional planning framework also contains a posture optimization step. This article focuses on gait and foothold planning, so we temporarily ignore the body posture optimization process, which does not affect the purpose of our method. \\newline\\par\n\n\\noindent(2) Fault tolerant gait planning\\par\nFault-tolerant gait refers to the gait adopted when some legs of the robot cannot work correctly due to hardware reasons, such as driver failure, motor failure, or locked legs. Because the hexapod robot has a larger number of legs, the robot can still work under static stability even if one or two legs cannot run. As shown in Figure \\ref{evironmentFaultFig}, in the wild there are situations where it is impossible to guarantee that every leg can find support, such as local slippage, subsidence, or sudden narrowing of the terrain. Therefore, referring to the idea of fault tolerance, such situations are also regarded as temporary leg faults. In a word, we define a leg without a candidate foothold, or with physical damage, as a fault leg. Combined with the idea of dynamic fault tolerance, the robot has a stronger ability to pass and adapt to the environment. When faults occur, the hexapod can continue to walk by raising the fault legs up. When a fault is eliminated, the robot restores the function of the corresponding leg.\n\n\\begin{figure}[h] \n\\centering \n\\includegraphics[scale=0.85]{photo\/faultEnvironment.pdf} \n\\caption{Schematic diagram of environmental fault-tolerant terrain.} \n\\label{evironmentFaultFig} \n\\end{figure}\n\n\n\n\n\\noindent(3) Planning Method\\par\nThe gait planning in this article relies on heuristic rules, because such rules can plan leg motions accurately and guarantee stability using physical laws. The task of free gait planning is to choose, aperiodically during the walk, which legs are swing legs and which legs are support legs. \n\n\\subsubsection{Support State Planning}\nIn most cases, there are three criteria for choosing the support state.\\par\nOne criterion to be taken into account in support state planning is the maximization of the stability margin. It is safer for the robot to have a larger static stability margin during walking.\\par\nSecond, maximize the amount the robot advances in one step, which is determined by the step length of the robot. As can be seen from equation~\\eqref{MSL}, the maximum step length is determined by the robot's support state, kinematic margins, and pose.\\par\nThird, to realize the idea of fault tolerance, choose other legs as possible supporting legs instead of selecting a fault leg as a supporting leg. The support states of Table~\\ref{SupportStateList} that can keep the robot stable are added to the set $S$.\\par\nFor any $s\\in S$, the robot has a maximum step length $MS_{s}$ in the direction $MD_s$, under the condition that the robot remains stable. For the fault-tolerant gait, $MD_s$ represents the direction vector of advance in the support state $s$, and it changes continuously during the path tracking process. For any $s\\in S$, denote the robot's stability margin by $SM_{s}$.\\par\nBased on the above criteria, the evaluation function for selecting the support state $s_0$ is designed as follows:\n\\begin{equation}\n \\left\\{\\begin{aligned}\n f(s) &= \\omega_1 \\cdot MS_{s} + \\omega_2 \\cdot SM_{s}\\\\\n s_0 &=\\mathop{\\arg\\max}_{s} (f(s))\n \\end{aligned}\n\\ \\right.\n \\qquad \\text{$s\\in S$}\n\\end{equation}\n\n$\\omega_1 $ rewards the maximum step length of the robot at the current state: the larger $\\omega_1 $ is, the more likely the robot is to take larger steps and move faster. $\\omega_2 $ rewards the static stability margin of the hexapod robot: as the value of $\\omega_2 $ increases, the robot tends to choose a support state with a larger static stability margin. \\par \n\nAccording to the support state table \\ref{SupportStateList}, the relevant evaluation values of all support states can be calculated, but the candidate states need to be filtered before calculating the evaluation values.\\par\n$\\bullet$ Delete support states with static stability margin less than 0 before the state transition. \\par\n$\\bullet$ Delete the states where a fault leg is selected as a supporting leg at the current stage. \\par\n$\\bullet$ Delete the support state used in the previous step; if it were retained, the program might end up in an infinite loop.\\newline\\par
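\nTo make the selection rule concrete, a minimal sketch is given below; the helpers \\texttt{max\\_step} and \\texttt{stability\\_margin}, which evaluate $MS_s$ and $SM_s$ from the robot model, are assumed stand-ins and are not part of the original pipeline description.\n\\begin{verbatim}\ndef select_support_state(candidates, prev_state, fault_legs,\n                         max_step, stability_margin,\n                         w1=1.0, w2=1.0):\n    # candidates: support states from Table 1 (s_i = 0 -> support)\n    best, best_f = None, float('-inf')\n    for s in candidates:\n        if s == prev_state:                     # avoid infinite loops\n            continue\n        if any(s[i] == 0 for i in fault_legs):  # fault leg cannot support\n            continue\n        sm = stability_margin(s)\n        if sm < 0:                              # reject unstable states\n            continue\n        f = w1 * max_step(s) + w2 * sm          # f(s) from the equation\n        if f > best_f:\n            best, best_f = s, f\n    return best\n\\end{verbatim}\n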
For any $s\\in S$, denote the robot's static stability margin by $SM_{s}$.\\par\nBased on the above criteria, the evaluation function for selecting the support state $s_0$ is designed as follows:\n\n\n\\begin{equation}\n \\left\\{\\begin{aligned}\n f(s) &= \\omega_1 \\cdot MS_{s} + \\omega_2 \\cdot SM_{s}\\\\\n s_0 &=\\mathop{\\arg\\max}_{s} (f(s))\n \\end{aligned}\n\\ \\right.\n \\qquad \\text{$s\\in S$}\n\\end{equation}\n\nThe weight $\\omega_1 $ rewards the maximum step length of the robot at the current state: the larger $\\omega_1 $ is, the more likely the robot is to take larger steps and move faster. The weight $\\omega_2 $ rewards the static stability margin of the hexapod robot: as the value of $\\omega_2 $ increases, the robot tends to choose a support state with a larger static stability margin. \\par \n\nAccording to the support state list in Table \\ref{SupportStateList}, the evaluation values of all support states can be calculated, but the candidate states need to be filtered before the evaluation values are computed.\\par\n$\\bullet$ Delete the support states whose static stability margin before the state transition is less than 0. \\par\n$\\bullet$ Delete the states in which the fault leg is selected as a supporting leg at the current stage. \\par\n$\\bullet$ Delete the support state that is identical to the one of the previous step; if it were retained, the program might end up in an infinite loop.\\newline\\par\n\n\n\n\\subsubsection{Step Length Planning}\nThe planning of the step length also needs to consider the trade-off between the robot's speed and stability. As long as stability can be guaranteed, it is better to plan a longer step length. Because the support polygons are reduced using the method mentioned before, a certain stability margin is already reserved to ensure the static stability of the robot. Here we set the step length to the maximum step length $MS_{s_0}$ to maximize the walking speed of the robot. \\newline\\par\n\n\n\n\\subsubsection{Foothold Planning}\nFor each swing leg, there may be multiple alternative footholds in its future workspace, assuming that the hexapod's body has moved while supported by the combination of supporting legs.\nThere are two principles for choosing footholds in this article. First, for a specific leg of the robot, a foothold with a higher reduced kinematic margin $KM$ of that leg is preferred. Second, the foothold selection prefers foothold combinations which give the robot a higher stability margin. Denote the set of all possible foothold combinations of the swing legs by $C$, and the selected foothold combination by $c_0$. The foothold combination selection method is shown in the following equation (a compact selection sketch is given below).\n\\begin{equation}\n \\left\\{\\begin{aligned}\n f(c) &= \\omega_L \\cdot \\overline{KM}(c) + \\omega_M \\cdot SM(c)\\\\\n c_0 &=\\mathop{\\arg\\max}_{c} (f(c))\n \\end{aligned}\n\\ \\right.\n \\qquad \\text{$c\\in C$}\n\\end{equation}\n$\\overline{KM}(c)$ represents the average kinematic margin of all swing legs after they have completed their swings according to the foothold combination $c$. $SM(c)$ represents the static stability margin of the hexapod robot with the foothold combination $c$ of the swing legs when the body has moved $MS_{s_0}$ in the $MD_{s_0}$ direction and reached the next state. The first item rewards the reduced kinematic margin of the foothold. The second item rewards the hexapod robot's static stability margin at the end of the state transition. $\\omega_L$ and $\\omega_M$ are the corresponding weight coefficients, and their values are greater than 0.
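\nBoth selection rules above are weighted arg-max problems over a filtered candidate set, and they share the same structure. The Python sketch below makes this explicit; it is only an illustration, and the scoring helpers (\\texttt{max\\_step}, \\texttt{stability\\_margin}, \\texttt{mean\\_km}) are hypothetical stand-ins for the quantities $MS_s$, $SM$, and $\\overline{KM}$ defined above:\n\\begin{verbatim}\ndef select_best(candidates, terms, weights):\n    # f(x) = sum_i w_i * term_i(x); return the arg-max candidate\n    def f(x):\n        return sum(w * t(x) for w, t in zip(weights, terms))\n    return max(candidates, key=f)\n\n# Support-state selection (weights from the experiment section):\n# s0 = select_best(filtered_states,\n#                  [max_step, stability_margin], [0.7, 0.3])\n# Foothold-combination selection:\n# c0 = select_best(foothold_combinations,\n#                  [mean_km, stability_margin], [0.7, 0.3])\n\\end{verbatim}\n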
For the fault legs, the choice of foothold is different: a fault leg does not have a fixed foothold; its virtual foothold moves with the body and floats in the air.\\newline\\par \n\n\n\\subsubsection{Trajectory planning}\nGiven the start and target COG postures and the target foothold positions, smooth trajectories of the COG and the swing legs need to be planned. In addition, the trajectories should be collision-free, energy-efficient, etc. In this paper, we use a simple polynomial method to plan the trajectory of the body: by setting constraints such as the position, velocity, and acceleration of the starting point, a trajectory equation with continuous acceleration can be obtained. \\newline\\par \n\n\n\\noindent(4) Defects of the Expert Method\\par\n$\\bullet$ The planning of the gait does not consider environmental information, which affects the planning of the footholds.\\par\n\n$\\bullet$ The planning of the robot's step length is too aggressive, which affects the selection of the foothold; the two are coupled with each other.\\par\n\n$\\bullet$ The selection of the foothold considers the environment where the robot currently is, but it ignores future situations. Each planning decision affects not only the next step but also all the decisions after it.\\par\n\nThe above limitations can be summarized as follows: the planning of the gait, step length, and footholds of a legged robot is a sequential decision problem, in which all previous decisions have an impact on subsequent ones. A well-designed rule-based expert planning method cannot meet all requirements, and there are always some situations that cannot be dealt with.\\newline\\par \n\n\n\\section{Gait and Foothold Planning Based on MCTS}\n\\subsection{Standard MCTS Method}\n\n\\noindent(1) Basic MCTS Algorithm\n\n\\begin{figure}[ht] \n\\centering \n\\includegraphics[scale=0.5]{photo\/stantardMCTS.pdf} \n\\caption{Workflow of the Monte Carlo tree search algorithm} \n\\label{stantardMCTS} \n\\end{figure}\n\nMonte Carlo Tree Search (MCTS) is an algorithm that uses random (Monte Carlo) samples to find the best decision. Here, we briefly outline the main ideas of MCTS, as shown in Figure \\ref{stantardMCTS}. First, the selection process starts from the existing search tree: the tree is traversed according to the tree policy, deciding which direction to take at each branch, until a leaf node is reached. Then, one of the remaining possible actions is used to expand the leaf node and obtain a new leaf node. Starting at that node, within a fixed horizon or until a terminal node is reached, a simulation (often also called a rollout or playout) is performed using some default policy (i.e., the default behaviour of all relevant participants). Finally, the values of all traversed nodes are updated based on cost functions that evaluate the simulation results.\\par\n\nThe MCTS algorithm employs two distinct policies:\\par\n\n$\\bullet$ Tree Policy: Select or create a leaf node from the nodes already contained within the search tree.
The tree policy needs to strike a balance between exploitation and exploration; the classic tree policy is the UCB (Upper Confidence Bound) algorithm \\cite{kocsis2006bandit}.\\par\n\n$\\bullet$ Simulation Policy: Play out the domain from a given non-terminal node to produce a value estimate.\n\nMCTS has many advantages that make it useful for a legged robot planning its gait and foothold sequence:\\par\n$\\bullet$ MCTS is a statistical anytime algorithm for which more computing power generally leads to better performance. It can be stopped at any time, and a result is available; the result might not be optimal, but it is valid.\\par\n$\\bullet$ MCTS can be used with little or no domain knowledge.\\par\n\n$\\bullet$ MCTS can enforce different policies on different nodes, so it is easy to scale.\\par\n\n$\\bullet$ MCTS can be highly parallelized, with multiple iterations and multiple simulations at a time, which facilitates engineering applications.\\newline\\par\n\n\n\n\\noindent(2) Extensions for the Legged Robot Planning Domain\\par\nBased on MCTS, this section proposes two modified MCTS methods for hexapod robots: one is called the Fast-MCTS method and the other the Sliding-MCTS method. First, we introduce some definitions for standard MCTS in the field of hexapod robot planning.\\par\n\n1) Action Space: For hexapod robot planning, each node of the Monte Carlo tree represents a state of the robot, $\\Phi:= \\langle _B^W\\!R,\\ _{W}^{B}\\!{r},\\ c_F,\\ t_F,\\ _{F}^{W}\\textrm{r}\\rangle $, which includes the robot's posture, position, support state during the transfer process, leg fault state, and foothold positions. The set of actions that lead the robot from the current node to the candidate nodes is called the action space. According to Table \\ref{SupportStateList}, $n$ alternative support states can be obtained for a robot state; denote these $n$ alternative support states by the set $S$. For any support state $s\\in S$, the maximum advancement $MS_s$ along the advancing direction $MD_s$ can be obtained. The maximum advancement $MS_s$ is discretized into three different step sizes, $MS_s\/3$, $2\\cdot MS_s\/3$, and $MS_s$, which constitute the set $L$. For a step length $l \\in L$ and a supporting state $s$, there are $m_{l,s}$ combinations of footholds. Define the number of candidate states for each hexapod robot state $\\Phi_k$ as $N_{alternative}(\\Phi_k)$; as shown in Figure \\ref{alternativeNodeFig}, its calculation formula is as follows:\n\\begin{equation} \\label{alternativeStateNum}\n N_{alternative}(\\Phi_k) = \\sum_{s \\in S}\\sum_{l \\in L}3m_{s,l}\n\\end{equation}\n\n\\begin{figure}[ht] \n\\centering \n\\includegraphics[scale=0.7]{photo\/alternativeNodeFig.pdf} \n\\caption{Schematic diagram of alternative nodes} \n\\label{alternativeNodeFig} \n\\end{figure}\n\n2) Search Tree Policy: For the search tree policy, we use the standard UCB1 algorithm; the calculation formula is as follows:\n\\begin{equation}\n UCB1 = X_j + C\\cdot \\sqrt{\\frac{2\\ln n}{n_j}}\n\\end{equation}\n\nwhere $X_j$ represents the average reward value of node $j$, $ n_j$ is the number of visits to node $j$, and $n$ represents the number of visits to the root node. The parameter $C$ is a balance factor, which decides whether selection focuses on exploration or exploitation. If the value of $C$ is large, the search is more inclined to select nodes with lower reward values; if the value of $C$ is small, it is more inclined to visit nodes with higher reward values. From this, we can see how the UCB algorithm finds a balance between exploration and exploitation: both the nodes that have obtained the largest average reward so far and the nodes with lower average rewards have a chance to be selected (a short selection sketch follows).\\newline\\par
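\nAs an illustration, the UCB1 selection step can be written compactly in Python. The node attributes below (\\texttt{avg\\_reward}, \\texttt{n\\_visit}, \\texttt{children}) are assumed names rather than code from our implementation; following the definition above, $n$ is taken as the visit count of the root node, and $C=0.3$ is the value used later in the experiments:\n\\begin{verbatim}\nimport math\n\ndef ucb1(node, n_root, C=0.3):\n    if node.n_visit == 0:\n        return float(\"inf\")   # try unvisited children first\n    return node.avg_reward + C * math.sqrt(\n        2.0 * math.log(n_root) \/ node.n_visit)\n\ndef select(root):\n    node = root\n    while node.children:      # descend until a leaf is reached\n        node = max(node.children,\n                   key=lambda ch: ucb1(ch, root.n_visit))\n    return node\n\\end{verbatim}\n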
3) Simulation Policy: There are two simulation policies in this paper. The first is the free fault-tolerant gait planning method introduced in the previous section. The second is an entirely random method, which randomly selects an executable action from the action space.\\newline\\par\n\n4) Simulation Horizon: The goal of a game of Gomoku, Go, etc. played with Monte Carlo tree search is to win. For a hexapod robot walking in a sparse environment, the purpose is to pass through it safely, so the result returned by the MCTS simulation process of the hexapod robot is set to \"Pass\" or \"Not Pass\". In a sparse foothold environment, few simulations pass, which makes most node scores 0; when the UCB algorithm is then used for selection, the exploitation term loses its effect. According to the literature \\cite{clary2018monte}, humans only plan three steps ahead during walking, and it was shown in \\cite{matthis2014visual} that planning a certain distance ahead already yields a high passability, while further increasing the planning distance has no obvious effect on improving it. Therefore, this article sets a simulation horizon $SH$: if the simulation distance exceeds $SH$, the simulation result is \"Pass\".\\newline\\par\n\n5) Simulation Termination Condition: This termination condition applies to both node expansion and simulation. The termination conditions are as follows:\\par\n$\\bullet$ During $N_{\\rm stop}$ consecutive state transitions, the forward advance of the robot is close to 0.\\par\nNote: The parameter $N_{\\rm stop}$ is set differently for different simulation policies. For example, the value of $N_{\\rm stop}$ can be small for the expert method, because when the robot is stuck, the possibility of passing using the expert method is low. For the random method, a temporary stuck state can be resolved by continually switching the foothold combination and the support state, so the value of $N_{\\rm stop}$ can be slightly larger.\\par\n\n$\\bullet$ The expanded node's position is beyond the simulation horizon $SH$, or it reaches the target position.\\newline\\par\n\n6) Node Score and Backpropagation\\par\n For each node $j$, its score is defined as:\n \\begin{equation}\n X_j = N_{{\\rm pass},j} \/ N_{{\\rm visit}, j}\n \\end{equation}\n\nwhere $X_j$ represents the score of node $j$, $N_{{\\rm pass},j}$ represents the total number of simulation passes for node $j$ or the descendants of node $j$, and $N_{{\\rm visit},j}$ represents the total number of visits to node $j$ or its descendants.\\par\n\nA backtracking procedure updates $N_{\\rm visit}(\\Phi)$ and $N_{\\rm pass}(\\Phi)$ from the leaf node to the root node. Let $\\Phi$ denote any ancestor node of the leaf node. The update formulas are as follows:\n\\begin{equation}\n N_{\\rm visit}(\\Phi)=N_{\\rm visit}(\\Phi) + 1\n\\end{equation}\n\n\\begin{equation}\n N_{\\rm pass}(\\Phi)=N_{\\rm pass}(\\Phi)+\\Delta_{\\rm simScore}\n\\end{equation}\n\\begin{equation}\n\\Delta_{\\rm simScore}=\\left\\{\n\\begin{array}{rcl}\n0 & & {\\rm not \\ pass}\\\\\n1 & & {\\rm pass} \n\\end{array} \\right.\n\\end{equation}\n\nwhere $\\Delta_{\\rm simScore}$ denotes the result of the simulation of the expanded node. When the root node has been updated, the backpropagation ends (a compact sketch is given below).\\newline\\par
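\nThe visit\/pass bookkeeping above amounts to a short loop over the ancestors of the expanded node. The following Python fragment is an illustrative sketch with assumed node attributes (\\texttt{parent}, \\texttt{n\\_visit}, \\texttt{n\\_pass}):\n\\begin{verbatim}\ndef backpropagate(leaf, passed):\n    delta = 1 if passed else 0   # Delta_simScore\n    node = leaf\n    while node is not None:      # the root's parent is None\n        node.n_visit += 1\n        node.n_pass += delta\n        node = node.parent\n\ndef score(node):                 # X_j = N_pass,j \/ N_visit,j\n    return node.n_pass \/ node.n_visit\n\\end{verbatim}\n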
\n\n\n\\noindent(3) Defects of Standard MCTS in Hexapod Planning\\par\nAlthough standard MCTS was quickly applied to the field of legged robots, the unmodified standard MCTS has the following problems. \\par \nFirst, each hexapod robot state usually has hundreds of candidate states, so during the construction of the search tree the time consumed by the expansion grows very quickly: a search tree for traversing only 1 m already exceeds tens of thousands of nodes, and processing it on a single-threaded CPU takes up to ten minutes. This is unacceptable for the real-time planning of legged robots.\\par\nSecond, a dense foothold environment can already be passed with a simple expert method, so there is no need to spend a large amount of time on the MCTS method.\\par\nThird, the binary scoring method used to measure node scores is too crude. Although it can find feasible solutions, it has no tendency to optimize the resulting sequence; for example, a faster walking sequence would be more desirable.\n\n\\subsection{Selection Planning Based on Fast-MCTS}\nIn order to solve the speed problem of standard MCTS, this section proposes a fast Monte Carlo tree search method for the planning of the hexapod robot, called Fast-MCTS. In the simulation step of the standard MCTS method, a large number of simulations is performed, but only the results of the simulations are used, while the state sequences obtained during the simulations are discarded. Fast-MCTS uses the simulation sequences to quickly build the master branch of the search tree and iteratively replaces the master branch by the branch with the highest potential of reaching the destination. The primary purpose of this algorithm is to construct a feasible state sequence quickly; its optimality cannot be guaranteed. The Fast-MCTS algorithm differs from the standard MCTS framework and consists of four main steps: Extension, Simulation, Updating the Master Branch, and Backtracking.\\par\nFirst, take the starting state $\\Phi _{\\rm start}$ of the hexapod robot as the specified starting node $\\Phi _{ k}$. \\par\n$\\bullet$ Extension: Expand all candidate states of the specified node $\\Phi_k$; each candidate node can only be expanded once. Denote the expanded nodes by the set $AS_{\\Phi_k}$.\\par\n\n$\\bullet$ Simulation: For each node ${{\\Phi_0}}\\in AS_{\\Phi_k}$, simulate with the default policy (expert method or random method) until the termination condition is reached. Denote the distance covered by the simulation by $d({\\Phi_{0}})$, and collect the nodes generated during the simulation in the set $T_{\\Phi_0}$. The simulation termination conditions are similar to those of the standard MCTS simulation, but without the simulation horizon parameter.
The simulation termination condition of this method is that the robot is continuously stuck or reaches its destination.\\par\n\n$\\bullet$ Updating the Master Branch: Select the expanded node with the maximum simulation distance, $\\Phi _{k,f} \\in AS_{\\Phi _k} $:\\par\n\\begin{equation}\n \\Phi_{k,f} =\\mathop{\\arg\\max}_{\\Phi \\in AS_{\\Phi _k}} (d(\\Phi))\n\\end{equation}\n\nThe simulation node sequence $T_{\\Phi_{k,f}} $ is added to the search tree, and this branch is regarded as the master branch.\\par\n\n$\\bullet$ Backtracking: If the master branch does not reach the destination, nodes are selected successively from the leaf node that is closest to the target back toward the root node, and from each of them the algorithm again expands, simulates, and updates the master branch.\\par\n\n\nNext, we introduce the flow of the entire algorithm according to Figure \\ref{fastMCTSfig}. In Figure \\ref{fastMCTSfig}(a), all candidate state nodes of the selected node are expanded; then a simulation is performed from each of them as a starting point, and the simulation distance and state sequence are recorded. In Figure \\ref{fastMCTSfig}(b), the node with the largest simulation distance is selected for expansion, and each node in the state sequence recorded during the simulation is added one by one; the master branch is indicated by the thick solid line in the figure. Figures \\ref{fastMCTSfig}(c)(d)(e) show that if the master branch does not reach the destination, the algorithm gradually expands backwards from the furthest child node and updates the master branch. The termination condition of the entire algorithm is that a tree node reaches the destination or the procedure traces back to the root node. The method is presented as pseudocode in Algorithm 1, and a compact skeleton is sketched below.\\par\n\n\\begin{figure}[h] \n\\centering \n\\includegraphics[scale=0.51]{photo\/fastMCTSfig.pdf} \n\\caption{Workflow of Fast-MCTS} \n\\label{fastMCTSfig} \n\\end{figure}\n\nThe algorithm uses the results of the simulations to establish the master branch quickly and updates it by backtracking until the destination is reached or the search returns to the root node. The idea is to find, through multiple simulations, the positions where the robot is easily trapped, and then to keep backing up and trying again until an available solution is found, which is consistent with human behaviour during walking. Although this algorithm cannot guarantee optimality, it has a fast search speed and is very effective at improving the passability of a specified strategy, for example the passing performance of the expert methods.
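\nThe following Python skeleton summarizes our reading of the four steps above; it is an illustration rather than a reproduction of Algorithm 1, and the five callables (\\texttt{expand}, \\texttt{simulate}, \\texttt{attach}, \\texttt{reaches\\_target}, \\texttt{backtrack}) are problem-specific helpers that must be supplied:\n\\begin{verbatim}\ndef fast_mcts(root, expand, simulate, attach,\n              reaches_target, backtrack):\n    node = root\n    while node is not None:\n        children = expand(node)         # each child expanded once\n        runs = {c: simulate(c) for c in children}\n        best = max(children,            # runs[c] = (distance, sequence)\n                   key=lambda c: runs[c][0])\n        attach(best, runs[best][1])     # new master branch\n        if reaches_target(best):\n            return best                 # feasible sequence found\n        node = backtrack(best)          # None once back at the root\n    return None                         # failure: traced back to root\n\\end{verbatim}\nPassing the helpers as arguments keeps the skeleton independent of the robot model; the expert or random simulation policies of the previous section can be plugged in through \\texttt{simulate}.\n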
\n\\subsection{Selection Planning Based on Sliding-MCTS}\nTwo MCTS-based planning approaches for the hexapod robot have now been discussed. The standard MCTS method is very computationally intensive and time-consuming. Fast-MCTS can quickly find a feasible path, which is effective for improving the passability of the expert planning method; however, it samples too sparsely, does not exploit the idea of estimating the global situation through sampling, and does not optimize the solution sequence. In view of the above problems, this paper proposes a search algorithm that not only improves the search speed but also optimizes the solution sequence; it is called Sliding-MCTS.\\par\nThe core processing steps of the algorithm are described below:\\par\n1) Moving root node\\par\nSliding-MCTS is similar to the standard MCTS method. The most crucial difference is that the root node of standard MCTS is fixed, while the root node of Sliding-MCTS changes after a period of sampling.\\par\nThe core idea of this algorithm is that each decision step of the robot is determined after a large number of samples: once the best next step in the current situation has been selected, the node corresponding to the robot state at this step is chosen as the new root node. As shown in Figure \\ref{slidingMCTS}, the sampling in each pane selects the best next step to continue from; repeating this cyclically, a sequence of states is generated.\\par\n\n\\begin{figure*}[ht] \n\\centering \n\\includegraphics[scale=0.8]{photo\/slidingMCTS.pdf} \n\\caption{Workflow of Sliding-MCTS} \n\\label{slidingMCTS} \n\\end{figure*}\n\n\n\n2) Simulation Horizon\\par\nTo facilitate the subsequent quantification of the node score, the simulation horizon distance $SH$ described above is set to a fixed number of simulation steps $N_{\\rm SimStepNum}$.\\par\n\n3) Node score\\par\nIn the previous subsection, we introduced the node score defined as the number of successful simulations divided by the number of visits. Although an available solution sequence can be found with this score in most cases, there are still some problems. First of all, the definition lacks indicators relevant to the legged-robot domain, so the algorithm has no effective optimization target at runtime: a solution sequence can be found, but one would prefer the algorithm to plan a sequence that walks faster or more stably. In addition, in some cases a node can pass during simulation although its distance from its parent node is minimal; the algorithm sometimes selects this type of node repeatedly, resulting in an infinite loop and failing to obtain an effective solution. To obtain a higher quality solution sequence, a reward function is used as the new node evaluation method here. The score of node $i$ is defined as $J_i$, and its components are shown in Figure \\ref{estimateFunctionFig}. The score items are composed as follows:\\par\n\n\\begin{figure}[h] \n\\centering \n\\includegraphics[scale=0.75]{photo\/estimateFunctionFig.pdf} \n\\caption{Schematic diagram of the reward function indicator composition. Gray nodes indicate expanded nodes and red nodes indicate simulation nodes.} \n\\label{estimateFunctionFig} \n\\end{figure}\n\n$J_{i,{\\rm SimStepL}}$: It rewards the average step size of the simulation sequence started from node $i$ with the fixed number of simulation steps $N_{\\rm SimStepNum}$. The longer the simulation distance, the larger the average step size of the sequence, and the higher this value, the higher the potential of node $i$ for greater passability.\\par\n$J_{i,{\\rm StepExp}}$: It rewards the average step size of the expanded sequence of nodes from node $i$ to the current root node, making the algorithm tend to converge to sequences that walk faster. Define the state sequence from the expanded node $i$ to the root node as a set $C_i$, and the number of elements in $C_i$ as $n_i$. Take the step size between node $i$ and its parent as $s_i$; for the root node $r$, $s_r$ is equal to 0.
The formula for calculating $J_{i,{\\rm StepExp}}$ is as follows:\\par\n\\begin{equation}\nJ_{i,{\\rm StepExp}} = \\frac{1}{n_i}\\sum_{j\\in C_i }s_j\n\\end{equation}\n$J_{i,{\\rm marginExp}}$: It rewards the average static stability margin of the expanded sequence of nodes from node $i$ to the current root node, making the algorithm tend to converge to a sequence with a larger average stability margin. $SM_i$ represents the stability margin of node $i$. The formula for calculating $J_{i,{\\rm marginExp}}$ is as follows:\\par\n\\begin{equation}\nJ_{i,{\\rm marginExp}} = \\frac{1}{n_i}\\sum_{j\\in C_i }SM_j\n\\end{equation}\n$J_{i,{\\rm disToPar}}$: It rewards the step size from node $i$ to its parent node, preventing the algorithm from repeatedly visiting nodes with a minimal forward distance.\\par\nThe sum of the terms multiplied by the corresponding weights is the score of node $i$:\\par\n\\begin{equation}\n J_i=\\sum \\omega_{i,(.)}J_{i,(.)}\n\\end{equation}\nwhere $\\omega_{i,(.)}$ represents the weight coefficient of each term. According to our tuning experience, the weight values corresponding to $J_{i,{\\rm SimStepL}}$ and $J_{i,{\\rm StepExp}}$ can be larger, while the weight of $J_{i,{\\rm disToPar}}$ should be small to prevent the search from getting stuck prematurely due to an excessively greedy forward speed.\\par\n\n\n4) Score Backpropagation\\par\nWhen an expanded node computes a new reward score, the upward propagation does not average over all expanded node scores but retains the highest score. The propagation formulas are as follows:\n\\begin{equation}\n X_i=J_i\n\\end{equation}\n\n\\begin{equation}\nX_j=\\left\\{\n\\begin{array}{rcl}\nJ_i & & {\\mathrm{if}\\ J_i>X_j}\\\\\nX_j & & {\\mathrm{else}} \n\\end{array} \\right.\n\\end{equation}\n\nFor the gait and foothold sequence planning of a legged robot, the goal is to find a single result sequence; therefore, it is better to measure the quality of a subtree by its best child nodes. Conversely, if the average score of the entire subtree were used as the measure, nodes with lower scores would diminish the scores of the best nodes. In a sparse foothold environment only a few solution sequences can pass, and such a measure would make it difficult for the algorithm to find them. A sketch of the weighted score and its max-propagation is given below.\\par
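\nFor concreteness, the weighted score and the max-propagation can be sketched in Python as follows. The weights are the values later listed in Table \\ref{parameterForAlgorithm}; the node attributes and the term functions are assumed names, not code from our implementation:\n\\begin{verbatim}\nWEIGHTS = {\"SimStepL\": 3.0, \"StepExp\": 1.0,\n           \"marginExp\": 0.5, \"disToPar\": 0.2}\n\ndef node_score(i, terms):\n    # J_i = sum of weighted terms; terms maps a name to J_{i,(.)}\n    return sum(WEIGHTS[k] * terms[k](i) for k in WEIGHTS)\n\ndef backpropagate_max(i, J_i):\n    i.X = J_i                       # X_i = J_i\n    node = i.parent\n    while node is not None:\n        node.X = max(node.X, J_i)   # keep the best descendant score\n        node = node.parent\n\\end{verbatim}\n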
\n5) Single Step Decision Time\\par\nThe state of the robot at each step is determined after a certain period of sampling. Define the single-step decision time as the time required for $N_{\\rm Samp}$ samplings. The parameter $N_{\\rm Samp}$ can be adjusted according to the actual situation; as the complexity of the environment increases, the value of $N_{\\rm Samp}$ can be increased correspondingly.\n\n6) Algorithm Termination Condition\\par\n\nThere are two termination conditions: first, if an expanded node reaches the specified target point, the algorithm stops; second, if an expanded node approaches the farthest simulated distance, the algorithm terminates. The latter happens when an area that cannot be passed is encountered. As shown in Figure \\ref{simulatioinHorizon}, the edge of the grey area is the farthest simulated position of the robot. If the farthest simulated position is very close to the current expansion node, the entire algorithm cannot continue.\\par\n\n\\begin{figure}[htb] \n\\centering \n\\includegraphics[scale=0.5]{photo\/simulatioinHorizon.pdf} \n\\caption{Schematic diagram of the movement of the simulation horizon.} \n\\label{simulatioinHorizon} \n\\end{figure}\n\n7) Choosing the best subtree\\par\nThe best subtree is selected using the standard UCB formula with the coefficient $C$ set to zero: the branch containing the node with the highest score is selected, and the remaining branches are pruned.\n\nAlthough Sliding-MCTS does not optimize the entire sequence globally, the algorithm is still effective, for three reasons. First, as mentioned earlier, planning only a certain number of steps ahead hardly reduces the overall passability. Second, compared with the standard MCTS algorithm, Sliding-MCTS greatly decreases the search time. Third, through the parameters $N_{\\rm Samp}$ and $N_{\\rm SimStepNum}$, the search time and the degree of optimization can be effectively balanced. The method is presented as pseudocode in Algorithm 2.\\par\n\n\\section{Experiment}\n\\begin{figure}[h] \n\\centering \n\\includegraphics[scale=1]{photo\/ElspiderFig.pdf} \n\\caption{Elspider hexapod robot} \n\\label{ELSpiderFig} \n\\end{figure}\n\nWe validated our approach on the Elspider robot \\cite{liu2018static}\\cite{liu2018state}\\cite{gao2019low}. The experimental platform is an electric heavy-duty hexapod developed by Harbin Institute of Technology, as shown in Figure \\ref{ELSpiderFig}. The overall mass of the robot is 300 kg, and it can walk stably under a load of 150 kg. The machine adopts a high-stability, uniformly distributed six-leg configuration, with the drive units evenly distributed at the base joints of the legs. The robot is approximately 1.9 m long, 2.1 m wide, and 0.5 m tall.
The relevant parameters of the robot are shown in Table \\ref{robotParameter}: the radius of the trunk body is 0.4 m, and the lengths of the coxa, thigh, and shank links are 0.18 m, 0.5 m, and 0.5 m, respectively.\\par\n\n\\begin{table}[]\n\\centering\n\\caption{Mechanical and geometric parameters of the Elspider robot}\n\\label{robotParameter}\n\\begin{tabular}{ccc}\n\\hline\nParameter & Length (m) & Mass (kg) \\\\\n\\hline\nBody & 0.4 & 121.9 \\\\\nCoxa link & 0.18 & 3.6 \\\\\nThigh link & 0.5 & 22 \\\\\nShank link & 0.5 & 7.2 \\\\\nFoot & 0.025 & 0.2 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\begin{table*}[ht]\n\\centering\n\\caption{Experimental parameter settings}\n\\label{parameterForAlgorithm}\n\\begin{tabular}{lll}\n\\hline\n\\multicolumn{1}{c}{\\textbf{Parameter}} & \\multicolumn{1}{c}{\\textbf{Value}} & \\multicolumn{1}{c}{\\textbf{Parameter Meaning}} \\\\\n\\hline\n$BM_{\\rm min}$ & 0.05{\\rm m} & Minimal static stability margin \\\\\n$\\omega_1$ & 0.7 & Support state planning weight coefficient \\\\\n$\\omega_2$ & 0.3 & Support state planning weight coefficient \\\\\n$\\omega_L$ & 0.7 & Foothold planning weight coefficient \\\\\n$\\omega_M$ & 0.3 & Foothold planning weight coefficient \\\\\n$N_{\\rm stop}$ &5 & Threshold for the number of consecutive stuck states \\\\\n$N_{\\rm SimStepNum}$ &20 & Fixed number of simulation steps \\\\\n$\\omega_{i,{\\rm SimStepL}}$ &3 & Sequence evaluation function weight coefficient \\\\\n$\\omega_{i,{\\rm StepExp}}$ &1 & Sequence evaluation function weight coefficient \\\\\n$\\omega_{i,{\\rm marginExp}}$ &0.5 & Sequence evaluation function weight coefficient \\\\\n$\\omega_{i,{\\rm disToPar}} $ &0.2 & Sequence evaluation function weight coefficient \\\\\n$N_{\\rm Samp}$ &500 & Number of samplings per step for Sliding-MCTS \\\\\n$C$ &0.3 & UCB algorithm balance coefficient \\\\\n\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\nTo examine the behaviour of the proposed algorithms, we designed three different types of experiments. The first experiment is performed on terrain with randomly distributed footholds; it statistically compares the different planning methods' ability to pass complex terrain, their speed of advance, and their planning time. Since the stability of the robot is ensured by reducing the support polygon area, a comparison of the body stability margin is not performed in this experiment. The second type of experiment is carried out on artificially designed challenging terrains to verify the validity of the proposed method. The last experiment is performed on a real robot to illustrate how the proposed method can be applied in a real environment.\\par\n\nThe following six planning methods are compared: (1) Triple gait. (2) Wave gait. (3) Free fault-tolerant gait. (4) Fast-MCTS with a random simulation strategy, denoted Fast-MCTS (Random). (5) Fast-MCTS with the free fault-tolerant expert gait as its simulation strategy, denoted Fast-MCTS (Expert). (6) The Sliding-MCTS method, whose simulation strategy uses the random method.\\par\n\nThe triple gait and the wave gait are two typical gaits commonly used by hexapod robots; the triple gait is the fastest, while the wave gait is the slowest but the most stable. The planning of step length and footholds is the same as in the expert planning method described above. If the robot is trapped or reaches the target point, the algorithm ends. \\par \n\nThe latter three methods are planning methods based on MCTS.
As mentioned in Eq. (\\ref{alternativeStateNum}), state $\\Phi_k$ has $\\sum_{s \\in S}\\sum_{l \\in L}3m_{s,l}$ candidate states. To reduce the computational load of the algorithm, only one foothold combination, selected by the expert planning method, is retained for each support state; therefore, the number of candidate states for the next step of each state is reduced to $\\sum_{s \\in S}3$. How to select valuable alternative states to accelerate the search is itself a research direction.\\par\n\nAll algorithms run on an Intel i5 2.20 GHz notebook computer using only single-threaded programming. The parameter values of the algorithms are shown in Table \\ref{parameterForAlgorithm}.\n\n\n\n\\subsection{Random Terrain Simulation Experiment}\nThe terrain of the random experiment is shown in Figure \\ref{experimentTerrainFig}. The entire map is 12.5 meters long and 5 meters wide, and the footholds are randomly distributed in this area. The starting point of the robot is the coordinate origin, the forward direction is the positive direction of the $x$ axis, and the target point is (8,0). When the robot advances more than 8 meters, it has reached the target point.\\par\n\n\\begin{figure}[h] \n\\centering \n\\includegraphics[scale=0.5]{photo\/13.png} \n\\caption{Random foothold distribution experiment map} \n\\label{experimentTerrainFig} \n\\end{figure}\n\nExperiments were carried out on terrains with three different numbers of footholds: 400, 350, and 300. For each density, 20 different maps were generated, and the six planning schemes were tested on each map.\\par\n\nFigure \\ref{originDataDisFig} shows the raw data of the 60 experiments. The abscissa is the label of the test map, and the ordinate is the distance travelled by the robot. It can be seen intuitively that the passing capacity of the three planning methods based on sequence optimization is much higher than that of the three single-step-optimization expert planning methods. As the number of footholds decreases, the cases in which the robot reaches the target point gradually become fewer. Among the single-step optimization methods, the free fault-tolerant gait advances further in most cases, but there are still some cases where the triple gait goes further. Although the free fault-tolerant gait constructed according to expert experience improves the passing ability to a certain extent, it still cannot guarantee to be better than the other typical gait methods in all cases.\\par\n\\begin{figure}[H] \n\\centering \n\\includegraphics[scale=0.6]{photo\/originDataDisFig.pdf} \n\\caption{Comparison of the advance distance of the different planning methods tested on different maps.} \n\\label{originDataDisFig} \n\\end{figure}\n\n\n\n\n\n\n\nBy statistical analysis of the passability data, the error band diagrams of the different planning methods under the three foothold densities can be obtained. It can be seen from Figure \\ref{aaaaaaaaaa}(a) that as the foothold density of the terrain increases, the advance distance of all planning methods gradually increases. The error bands of the triple gait, the wave gait, and the free fault-tolerant gait become broader as the foothold density increases, whereas the error bands of the three MCTS-based planning methods gradually become narrower.
In environments with low foothold density, the rule-based expert methods have poor passability, so in most cases the robot travels only a small distance. An increase in foothold density improves the robot's passability, but there are still some maps that the robot cannot pass due to the defects of the rule-based methods; this is why their error bands become wider as the foothold density of the terrain rises. The MCTS-based sequence optimization methods behave differently: in most cases the latter three planning methods can still travel long distances in low-foothold-density environments, and only a small portion of the maps cannot be passed because the environment is too harsh. When the foothold density of the terrain increases, the robot can reach the destination on almost all maps; this is why their error bands become narrower as the foothold density rises.\\par\n\n\n\\begin{figure*}[t] \n\\centering \n\\includegraphics[scale=1]{photo\/aaaaaaaaaa.pdf} \n\\caption{Experimental data of the different planning methods under different foothold density environments: (a) Forward distance error band diagram for the different planning methods. (b) Comparison of the average advance distance, which represents the passing ability of the robot. (c) Comparison of the average step size of the robot, which indicates the robot's advancing speed. } \n\\label{aaaaaaaaaa} \n\\end{figure*}\n\n\n\n\n\n\n\n\n\n\n\nThe passing capabilities of all planning methods are compared in Figure \\ref{aaaaaaaaaa}(b). It can be seen that the free fault-tolerant gait has a significantly higher passing capacity than the triple gait and the wave gait, and the three sequence-optimization methods are far superior to the first three methods, with Sliding-MCTS and Fast-MCTS performing best. \n\n\n\nIn terms of forward speed, we use the average step length of the entire planned sequence to represent the forward speed of the robot. According to Figure \\ref{aaaaaaaaaa}(c), the triple gait is the fastest, and Fast-MCTS (Expert) is the second fastest. The free fault-tolerant gait walks more slowly than Sliding-MCTS in sparse foothold environments and faster than Sliding-MCTS when the foothold density of the terrain is higher. It can be seen that the Sliding-MCTS method ensures the best passing ability and, in addition, can search for a high-speed gait sequence in a sparse foothold environment. The slowest methods are the wave gait and Fast-MCTS (Random): although Fast-MCTS (Random) has a high passing capacity, the sequence it finds has not been optimized by a large number of samples, which results in many invalid states in the sequence and the lowest speed.\n\n\\begin{figure*}[ht] \n\\centering \n\\includegraphics[scale=0.6]{photo\/bbbbbb.pdf} \n\\caption{(a) Single-step planning time error band diagram for the different planning methods. (b) Index comparison chart of the different planning methods} \n\\label{bbbbbb} \n\\end{figure*}\n\n\nTo compare the planning times of the algorithms, we use the single-step planning time of the entire sequence. As shown in Figure \\ref{bbbbbb}(a), the gait planning time of the first three expert methods is about 3 ms, and the planning time increases as the foothold density increases.
This is because when the foothold density becomes larger, there are more available footholds to select from and more support states that the free fault-tolerant gait can choose, so the computation takes more time. The single-step planning time of the Fast-MCTS algorithm is about 1 s, and as the environment becomes harsher, the search time gradually increases. In some sparse foothold environments the algorithm occasionally finds solution sequences quickly, which leads to a larger error band for the planning time in environments with low foothold density. The search time of the Sliding-MCTS method is determined by the number of samplings $N_{\\rm Samp}$ planned for each step and the fixed number of simulation steps $N_{\\rm SimStepNum}$; its single-step planning time is about 30 s. The reason why the planning time becomes longer as the foothold density increases is the same as for the free fault-tolerant gait. In a low-density foothold environment, the number of invalid node expansions varies widely; the higher the density of environmental footholds, the smaller the number of invalid node expansions. This is also why the error band gradually narrows.\\par\n\n\nIn summary, as shown in Figure \\ref{bbbbbb}(b), the six planning methods have their own advantages and disadvantages. In terms of passability, the Fast-MCTS (Random) and Sliding-MCTS methods are the best; the expert planning methods have very poor passability, and the free fault-tolerant gait, which takes environmental fault tolerance into account, is slightly better. In terms of walking speed, the triple gait is the fastest, followed by Fast-MCTS (Expert); the free fault-tolerant gait and Sliding-MCTS are both fast, while the other two methods are slow. In terms of planning time, the three MCTS-based methods take longer: the Fast-MCTS single-step planning time is about one second, while the single-step planning time of Sliding-MCTS is relatively long and depends on its parameter settings. However, each planning step of Sliding-MCTS is independent and is not affected by subsequent plans; therefore, Sliding-MCTS is suitable for local planning.\n\n\n\n\n\n\n\n\n\n\n\\subsection{Special Terrain Simulation Experiment}\n\n\n\\begin{figure*}[ht] \n\\centering \n\\includegraphics[scale=0.67]{photo\/demo.png} \n\\caption{(a) Artificially designed terrains. (b)(c)(d) Screenshots and gait charts of the robot's passage through three different terrains. The light blue part of each gait diagram corresponds to the screenshot of the robot walking. Each screenshot of the robot represents an operating state, and the red curve is the trajectory of the swinging leg at that time. In the gait diagram, black indicates that the leg is a swinging leg, yellow indicates that the leg is a supporting leg, and red indicates that the leg is a fault-tolerant leg.} \n\\label{demoFig} \n\\end{figure*}\n\n\nThe proposed Sliding-MCTS algorithm is applied to several artificial terrains to verify its validity. As shown in Figure \\ref{demoFig}(a), we designed three different terrains. The first represents segmented terrain that can be seen in real life. The second terrain is more extreme, with a rectangular area removed from the middle of flat terrain. The third represents continuous trench terrain, where the widths of the trenches are inconsistent. The robot can pass through all three types of terrain smoothly.
Figure \\ref{demoFig} shows screenshots of the robot passing the terrains together with the gait diagrams of the entire process. Figures \\ref{demoFig}(b)(c) show that in a harsh environment, the robot can pass such terrain by temporarily lifting legs without an effective foothold. In Figure \\ref{demoFig}(c), the robot even effectively becomes a quadruped robot to cross the terrain. In Figure \\ref{demoFig}(d), the robot continuously adjusts the step size to cross the continuous trench terrain effectively. In the gait diagrams, black represents the swing state, yellow represents the support state, and red represents a fault leg (which still belongs to the swing state). It can be seen that the robot successfully passes these challenging terrains by continuously adjusting its gait.\n\n\n\\begin{figure*}[ht] \n\\centering \n\\includegraphics[scale=1]{photo\/realRobotExperiment.pdf} \n\\caption{Elspider walking on discrete bricks.} \n\\label{realRobotExperiment} \n\\end{figure*}\n\n\n\\subsection{Physical robot experiment}\n\tWe carried out physical experiments to illustrate the feasibility of the algorithm. The experimental terrain is set up in advance; as shown in Figure \\ref{realRobotExperiment}, bricks represent the discrete areas where the robot's feet can land. The position of the robot is measured by a visual capture system, and the planning method is the Sliding-MCTS algorithm proposed in this paper. The robot has to go straight from one side of the field to the other end. It can be seen that the robot chooses the bricks scattered on the ground as footholds and successfully reaches the target position. The experimental results show that the algorithm proposed in this paper can be effectively applied to physical robots.\n\n\n\n\n\\section{Conclusion} \nIn this work, a gait and foothold planning method based on MCTS is proposed. Before introducing the sequence optimization methods, we combine fault-tolerant gait planning with free gait according to the harshness of the environment, and propose a free fault-tolerant gait method based on the expert planning approach. In view of the particularities of applying MCTS in the field of legged robots, we modify the standard MCTS and introduce two methods, Fast-MCTS and Sliding-MCTS. Fast-MCTS can quickly improve the passing ability of a default planning method, but its convergence is not guaranteed. Sliding-MCTS can effectively balance search time and convergence while achieving a good passing ability. The simulation experiments examine the advantages and disadvantages of the different methods: the rule-based expert method has a fast calculation speed, while the optimization-based methods have better passability, and their calculation time can still meet real-time requirements. Finally, the algorithm is tested on artificially designed challenging terrains and applied to the physical robot, and the results show that the proposed method achieves good passability in sparse foothold environments.
In the future, we will continue to study how to increase the search speed of the algorithm and how to combine it with machine vision to explore wild environments in real time.\n\n\n\\section{Acknowledgements} \nThis study was supported in part by the National Natural Science Foundation of China (91948202) and the National Key Research and Development Project of China (2019YFB1309500).\n\n\n\n\\bibliographystyle{unsrt}\n\n\n\n\n\n\n\n\n\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe quantum teleportation scheme was first introduced by Bennett et al. in 1993. In their pioneering work, they proposed a scheme for teleporting an unknown qubit using a maximally entangled Bell state \\cite{bennett1993}. Since then many teleportation schemes have been proposed, and many variants of teleportation (e.g., remote state preparation (RSP), quantum secret sharing (QSS), quantum information splitting (QIS), bidirectional teleportation (see \\cite{pathak2013elements,sharma2015controlled} and references therein)) have also been introduced. Recently Yu et al. \\cite{yu2021} have proposed a scheme for the teleportation of two different quantum states to two different receivers. Specifically, in their scheme, Alice wants to teleport a state\n\n\\begin{eqnarray}\n|\\chi_{a}\\rangle & = & \\alpha_{1}|0\\rangle^{\\otimes m}+\\beta_{1}|1\\rangle^{\\otimes m}\\label{eq:1}\n\\end{eqnarray}\n and another state\n\n\\begin{eqnarray}\n|\\chi_{b}\\rangle & = & \\alpha_{2}|0\\rangle^{\\otimes(m+1)}+\\beta_{2}|1\\rangle^{\\otimes(m+1)}\\label{eq:2}\n\\end{eqnarray}\n to Bob$_{1}$ and Bob$_{2}$, respectively, using the five-qubit cluster state\n\n\\begin{eqnarray}\n|\\psi\\rangle_{12345} & = & \\frac{1}{2}(|00000\\rangle+|01011\\rangle+|10100\\rangle+|11111\\rangle)_{12345}.\\label{eq:3}\n\\end{eqnarray}\n\nIn their work, Yu et al. have referred to the state $|\\chi_{a}\\rangle$ as an $m$-qubit state of the GHZ class and analogously to $|\\chi_{b}\\rangle$ as an $(m+1)$-qubit state of the GHZ class. We have some reservations about this nomenclature and would prefer to call these states generalized Bell-type states, as was logically done in several earlier works \\cite{pathak2011,panigrahi2006}. In fact, in Ref. \\cite{pathak2011}, it was explicitly shown that any quantum state of the form $\\alpha|x\\rangle+\\beta|\\bar{x}\\rangle:|\\alpha|^{2}+|\\beta|^{2}=1$, where $x$ varies from $0$ to $2^{n}-1$ and $\\bar{x}=1^{\\otimes n}\\oplus x$ in modulo 2 arithmetic, can be teleported using a Bell state. Clearly, the states considered by Yu et al. (i.e., $|\\chi_{a}\\rangle$ and $|\\chi_{b}\\rangle$) are of the form $\\alpha|x\\rangle+\\beta|\\bar{x}\\rangle$, and it is obvious that $|\\chi_{a}\\rangle$ and $|\\chi_{b}\\rangle$ can be independently teleported to two receivers using two Bell states. Thus, the use of a five-qubit cluster state or any such complicated quantum channel is not required to perform the multi-output teleportation task considered by Yu et al.
\n\nExtending the above observation, it is apt to note that a generalized scheme for teleportation has been reported in \\cite{sisodia2017}, where it is mentioned that the teleportation of a quantum state having $m$ unknown coefficients requires $\\lceil\\log_{2}m\\rceil$ Bell states. The scheme proposed by Yu et al. is essentially meant for the teleportation of a product state $|\\psi_{ab}\\rangle=|\\chi_{a}\\rangle\\otimes|\\chi_{b}\\rangle$ having four unknown coefficients $\\alpha_{1}$, $\\beta_{1}$, $\\alpha_{2}$, and $\\beta_{2}$, and hence requires only $\\lceil\\log_{2}4\\rceil=2$ Bell states to perform the teleportation task. In fact, the scheme of \\cite{sisodia2017} allows one to teleport more general quantum states using two Bell states. Interestingly, despite the existence of these general results, several authors have recently reported different types of teleportation schemes using an excessive amount of quantum resources. For example, in Ref. \\cite{bikash2020} a four-qubit cluster state is used as the quantum resource for teleporting two-qubit states of the form\n\n\\begin{eqnarray}\n|\\lambda\\rangle_{ab} & = & \\alpha|00\\rangle+\\beta|01\\rangle+\\gamma|10\\rangle+\\delta|11\\rangle.\\label{eq:4}\n\\end{eqnarray}\nNow, as per the scheme reported in \\cite{sisodia2017} and the references therein, since there are four unknown coefficients in the state to be teleported, it is sufficient to use $\\lceil\\log_{2}4\\rceil=2$ Bell states. The discussion so far is sufficient to establish that the resources used in Yu et al.'s paper are not optimal, and we could have concluded this comment here; however, the fact that they have realized their scheme for $m=1$ using IBM quantum experience has motivated us to explicitly implement our scheme for $m=1$ with the help of a quantum computer whose cloud-based access is provided by IBM.\n\nThe paper is organized as follows. Our scheme for multi-output teleportation using two Bell states is described in Sec. \\ref{sec:circui}. Subsequently, the implementation of the scheme using an IBM quantum computer and the relevant results are reported in Sec. \\ref{sec:Experimental-realization-usingIBM}. Finally, the paper is concluded in Sec. \\ref{sec:Conclusion}. \n\n\\section{Multi-output quantum teleportation using two Bell states\\label{sec:circui}}\n\nIn 2017, Yu et al. coined the term multi-output quantum teleportation \\cite{yu2017} in an effort to propose a scheme that allows Alice to teleport two different single-qubit states $|\\chi_{1}\\rangle$ and $|\\chi_{2}\\rangle$ to two different receivers using a four-qubit cluster state $|\\psi\\rangle_{A_{1}A_{2}B_{1}B_{2}}$. In the original scheme, Alice keeps the first two qubits (indexed by the subscripts $A_{1}$ and $A_{2}$) of the cluster state with herself and sends the other two qubits to the two receivers, say Bob$_{1}$ and Bob$_{2}$ (the qubit sent to ${\\rm Bob}_{i}$ is indexed by $B_{i}$). Alice then performs a measurement in the cluster basis on the first four qubits $|\\psi_{i}\\rangle_{12A_{1}A_{2}}$, two of which are the information qubits while the other two are the qubits that Alice kept with her. The measurement result is publicly announced, and the two receivers apply the corresponding unitary operators to obtain the desired states $|\\chi_{1}\\rangle$ and $|\\chi_{2}\\rangle$, respectively.
Along similar lines, in 2021, Yu et al. proposed another scheme for multi-output quantum teleportation, but this time the states to be teleported were $m$-qubit and $(m+1)$-qubit states (cf. Eqs. (\\ref{eq:1}) and (\\ref{eq:2}) and the related discussion in the previous section), and the quantum channel used was a five-qubit cluster state (see Eq. (\\ref{eq:3})). We have already mentioned that the same multi-output teleportation task can be accomplished using two Bell states. As the experimental part of Yu et al. is restricted to the $m=1$ case, for comparison, in Fig. \\ref{fig:Multi-output-quantum-teleportati} we explicitly show the schematic of the quantum circuit required for performing the task using two Bell states. Let $|\\chi_{a}\\rangle$ and $|\\chi_{b}\\rangle$ be the two states to be teleported (Eqs. (\\ref{eq:1}) and (\\ref{eq:2}) for $m=1$). The state $|\\chi_{b}\\rangle$ can be reduced to a simpler state $|\\chi_{b}^{\\prime}\\rangle$ by applying a CNOT operation with control on the first qubit and target on the second qubit. Now the problem reduces to the teleportation of the product state of $|\\chi_{a}\\rangle$ and $|\\chi_{b}^{\\prime}\\rangle=\\alpha_{2}|0\\rangle+\\beta_{2}|1\\rangle$, as\n\n\\begin{eqnarray}\nCNOT|\\chi_{b}\\rangle & \\longrightarrow & |\\chi'_{b}\\rangle|0\\rangle=(\\alpha_{2}|0\\rangle+\\beta_{2}|1\\rangle)\\otimes|0\\rangle.\\label{eq:5}\n\\end{eqnarray}\n\n\\begin{figure}\n\\begin{centering}\n\\includegraphics[scale=0.5]{Figures\/blockdiag}\n\\par\\end{centering}\n\\caption{(Color online) An optimal quantum circuit illustrating the multi-output quantum teleportation scheme\\label{fig:Multi-output-quantum-teleportati}}\n\n\\end{figure}\n\n\\section{Experimental realization using an IBM quantum computer\\label{sec:Experimental-realization-usingIBM}}\n\nWe have designed a simple (but experimentally realizable using IBM quantum experience) circuit, shown in Fig. \\ref{fig:MQTCkt}(a), which is equivalent to the schematic circuit shown in Fig. \\ref{fig:Multi-output-quantum-teleportati} except for the presence of the first and last CNOT gates. The local operations performed by these two CNOT gates do not affect the main teleportation part. This circuit is run in the IBM quantum composer to yield the results reported in the following subsection. There is another reason for implementing the circuit without these CNOT gates: it allowed us to use ibmq\\_casablanca, a seven-qubit quantum computer that has enough resources to implement the circuit shown in Fig. \\ref{fig:MQTCkt}(a), but not enough qubits to implement the technically equivalent circuit shown in Fig. \\ref{fig:Multi-output-quantum-teleportati}. The ibmq\\_casablanca is one of the IBM Quantum Falcon processors \\cite{ibm2021}. The circuit given in Fig. \\ref{fig:MQTCkt}(a) can be briefly described as a process in which Alice wants to teleport $\\frac{1}{\\sqrt{2}}(|0\\rangle+|1\\rangle)\\otimes\\frac{1}{\\sqrt{2}}(|0\\rangle+|1\\rangle)=|+\\rangle\\otimes|+\\rangle$ to the receivers Bob$_{1}$ and Bob$_{2}$, respectively, since the first CNOT in Fig.
\\ref{fig:Multi-output-quantum-teleportati} can transform the Bell state $|\\phi^{+}\\rangle=\\frac{1}{\\sqrt{2}}(|00\\rangle+|11\\rangle)$ into the separable state $\\frac{1}{\\sqrt{2}}(|0\\rangle+|1\\rangle)\\otimes|0\\rangle$, and the last CNOT can recreate it at the receiver's end with the help of an ancilla qubit and the output of the teleportation process. In Fig. \\ref{fig:MQTCkt}(a), the first two qubits are the information qubits and the last four qubits form the quantum channel used for the teleportation, which is comprised of two Bell states, as desired and argued above to be a sufficient resource. Alice now performs a Bell measurement on the first ($Q_{1}$) and third ($Q_{0}$) qubits and another Bell measurement on the second ($Q_{5}$) and fifth ($Q_{4}$) qubits, and then sends the measurement results to Bob$_{1}$ and Bob$_{2}$. Here it may be noted that the qubits are indexed in accordance with the convention adopted by IBM quantum experience in describing the seven-qubit quantum computer whose topology is shown in Fig. \\ref{fig:MQTCkt}(b). Further, the qubits are chosen such that the circuit after transpilation has a minimal circuit cost \\cite{dueck2018optimization}. According to the measurement results announced by Alice, Bob$_{1}$ and Bob$_{2}$ apply the corresponding unitaries to obtain the teleported states. Clearly, this amounts to two independent implementations of the standard teleportation circuit, and that is enough to achieve what was done using costly quantum resources in the earlier works.\n\n\\begin{figure}\n\\begin{centering}\n\\includegraphics[scale=0.5]{Figures\/MQTCktTopo}\n\\par\\end{centering}\n\\centering{}\\caption{(Color online) (a) Quantum circuit for the teleportation of the states $\\frac{1}{\\sqrt{2}}(|0\\rangle+|1\\rangle)$ and $\\frac{1}{\\sqrt{2}}(|0\\rangle+|1\\rangle)$ to two different receivers Bob$_{1}$ and Bob$_{2}$ simultaneously using 2 Bell states $|\\phi^{+}\\rangle^{\\otimes2}$ (b) Topology of ibmq\\_casablanca. \\label{fig:MQTCkt}}\n\\end{figure}\n
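\nFor completeness, the logical circuit can be reconstructed in a few lines of Qiskit. The sketch below is our illustrative reconstruction, not the exact transpiled circuit executed on ibmq\\_casablanca: the qubit indices are logical rather than the physical indices of Fig. \\ref{fig:MQTCkt}(b), and the receivers' corrections are applied coherently (deferred measurement) instead of by classical feed-forward:\n\\begin{verbatim}\nfrom qiskit import QuantumCircuit\n\nqc = QuantumCircuit(6, 2)\nqc.h([0, 3])            # the two |+> information states\nqc.h(1); qc.cx(1, 2)    # Bell pair shared with Bob_1\nqc.h(4); qc.cx(4, 5)    # Bell pair shared with Bob_2\nfor a, b, r in [(0, 1, 2), (3, 4, 5)]:\n    qc.cx(a, b); qc.h(a)  # Alice's Bell-basis rotation\n    qc.cx(b, r)           # X correction (deferred)\n    qc.cz(a, r)           # Z correction (deferred)\nqc.measure([2, 5], [0, 1])  # Bobs sample the outputs\n\\end{verbatim}\nSimulating this circuit ideally yields the four two-bit outcomes with probability $1\/4$ each, which is the distribution against which the hardware histogram of Fig. \\ref{fig:results} can be compared.\n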
These fidelities are reasonably high compared with the classical limit of $2\/3$, which simply establishes that the resources used in the earlier works were not optimal. The fidelity cannot be compared with that of the earlier work, as Yu et al. did not report it. However, the simpler entangled states used here can be expected to be less affected by noise.\n
\n
\\begin{figure}\n
\\begin{centering}\n
\\includegraphics[scale=0.6]{Figures\/RDMQT2}\n
\\par\\end{centering}\n
\\centering{}\\caption{(Color online) Experimental result for the quantum circuit shown in Fig. \\ref{fig:MQTCkt}. \\label{fig:results}}\n
\\end{figure}\n
\n
\\begin{figure}\n
\\begin{centering}\n
\\includegraphics[scale=0.6]{Figures\/EDMMQTe}\n
\\par\\end{centering}\n
\\caption{(Color online) Experimental quantum state tomography result for the circuit shown in Fig. \\ref{fig:MQTCkt}.\\label{fig:ExDM}}\n
\\end{figure}\n
\n
\\begin{table}\n
\\begin{centering}\n
\\begin{tabular}{|c|c|c|>{\\centering}p{2cm}|>{\\centering}p{2cm}|>{\\centering}p{2cm}|>{\\centering}p{5cm}|}\n
\\hline \n
Qubit & T1 ($\\mu s$) & T2 ($\\mu s$) & \\centering{}Frequency (GHz) & \\centering{}Readout assignment error & \\centering{}Single-qubit Pauli-X error & \\centering{}CNOT error\\tabularnewline\n
\\hline \n
$Q_{0}$ & 97.07 & 41.56 & \\centering{}4.822 & \\centering{}$3.52\\times10^{-2}$ & \\centering{}$2.73\\times10^{-4}$ & \\centering{}cx0\\_1: $1.105\\times10^{-2}$\\tabularnewline\n
\\hline \n
$Q_{1}$ & 179.27 & 106.63 & \\centering{}4.76 & \\centering{}$1.56\\times10^{-2}$ & \\centering{}$1.56\\times10^{-4}$ & \\centering{}cx1\\_3: $6.796\\times10^{-3}$, cx1\\_2: $1.013\\times10^{-2}$, cx1\\_0: $1.105\\times10^{-2}$\\tabularnewline\n
\\hline \n
$Q_{2}$ & 164.86 & 96.43 & \\centering{}4.906 & \\centering{}$8.50\\times10^{-3}$ & \\centering{}$3.54\\times10^{-4}$ & \\centering{}cx2\\_1: $1.013\\times10^{-2}$\\tabularnewline\n
\\hline \n
$Q_{3}$ & 123.23 & 151.27 & \\centering{}4.879 & \\centering{}$1.70\\times10^{-2}$ & \\centering{}$3.40\\times10^{-4}$ & \\centering{}cx3\\_1: $6.796\\times10^{-3}$, cx3\\_5: $1.139\\times10^{-2}$\\tabularnewline\n
\\hline \n
$Q_{4}$ & 128.4 & 54.14 & \\centering{}4.871 & \\centering{}$3.06\\times10^{-2}$ & \\centering{}$2.88\\times10^{-4}$ & \\centering{}cx4\\_5: $1.148\\times10^{-2}$\\tabularnewline\n
\\hline \n
$Q_{5}$ & 133.5 & 91.77 & \\centering{}4.964 & \\centering{}$9.60\\times10^{-3}$ & \\centering{}$3.17\\times10^{-4}$ & \\centering{}cx5\\_3: $1.139\\times10^{-2}$, cx5\\_4: $1.148\\times10^{-2}$, cx5\\_6: $1.156\\times10^{-2}$\\tabularnewline\n
\\hline \n
$Q_{6}$ & 112.08 & 166.07 & \\centering{}5.177 & \\centering{}$2.18\\times10^{-2}$ & \\centering{}$4.70\\times10^{-4}$ & \\centering{}cx6\\_5: $1.156\\times10^{-2}$\\tabularnewline\n
\\hline \n
\\end{tabular}\n
\\par\\end{centering}\n
\\caption{Calibration data of ibmq\\_casablanca on Dec 01, 2021. Here cx$i$\\_$j$ represents a CNOT gate with control qubit $i$ and target qubit $j$.\\label{tab:Calibration-data-of}}\n
\\end{table}\n
\n
\n
\\section{Conclusion\\label{sec:Conclusion}}\n
\n
It has been shown that the quantum resources used by Yu et al. \\cite{yu2021} for multi-output teleportation were not optimal, and that the same drawback exists in \\cite{bikash2020} and other similar works. Relevant existing results have been noted, and it has been explicitly shown that ibmq\\_casablanca can be used to implement the task described by Yu et al. using only two Bell states. 
Our purpose here was only to show that cluster states and similar resources are not required for performing tasks of this type; consequently, we have restricted ourselves to the simplest possible implementation of multi-output quantum teleportation. The approach extends in an obvious manner to the multi-output teleportation of more complex quantum states.\n
\n
\\section*{Acknowledgment:}\n
\n
The author acknowledges support from the QUEST scheme of the Interdisciplinary Cyber Physical Systems (ICPS) program of the Department of Science and Technology (DST), India (Grant No.: DST\/ICPS\/QuST\/Theme-1\/2019\/14 (Q80)). He also thanks Anirban Pathak for his feedback and advice in relation to the present work.\n
\n
\\bibliographystyle{ieeetr}\n
","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n
\n
Due to the lack of a Riemann mapping theorem in several complex variables, it is of fundamental importance to study the biholomorphic equivalence of various domains in $\\cv^n$, $n\\ge 2$. For such a study, it is necessary to introduce different kinds of holomorphic invariants. In this paper, we study two such invariants, the Fridman invariants and (generalized) squeezing functions.\n
\n
The Fridman invariant was defined by Fridman in \\cite{Fridman1983} for Kobayashi hyperbolic domains $D$ in $\\cv^n$, $n\\ge 1$, as follows. Denote by $B_D^k(z,r)$ the $k_D$-ball in $D$ centered at $z\\in D$ with radius $r>0$, where $k_D$ is the Kobayashi distance on $D$. For two domains $D_1$ and $D_2$ in $\\cv^n$, denote by $\\oc_u(D_1,D_2)$ the set of \\textit{injective} holomorphic maps from $D_1$ into $D_2$.\n
\n
Recall that a domain $\\Omega\\subset \\cv^n$ is said to be \\textit{homogeneous} if the automorphism group of $\\Omega$ is transitive. For any bounded homogeneous domain $\\Omega$, set\n
$$h_D^\\Omega(z)=\\inf \\{1\/r:\\ B_D^k(z,r)\\subset f(\\Omega),\\ f\\in \\oc_u(\\Omega,D)\\}.$$\n
For comparison purposes, we call $e_D^\\Omega(z):=\\tanh\\left(1\/h_D^\\Omega(z)\\right)$ the \\textit{Fridman invariant} (cf. \\cite{Deng-Zhang2019, Nikolov-Verma2018}).\n
\n
For any bounded domain $D\\subset \\cv^n$, the \\textit{squeezing function} was introduced in \\cite{Deng2012} by Deng, Guan and Zhang as follows:\n
$$s_D(z)=\\sup \\{r:\\ r\\bv^n\\subset f(D),\\ f\\in \\oc_u(D,\\bv^n),\\ f(z)=0\\}.$$\n
Here $\\bv^n$ denotes the unit ball in $\\cv^n$. Comparing with the Fridman invariant, it seems natural to consider more general squeezing functions, replacing $\\bv^n$ by other ``model domains\".\n
\n
Recall that a domain $\\Omega$ is said to be \\textit{balanced} if for any $z\\in \\Omega$, $\\lambda z\\in \\Omega$ for all $|\\lambda|\\le 1$. Let $\\Omega$ be a bounded, balanced and convex domain in $\\cv^n$. The \\textit{Minkowski function} $\\rho_\\Omega$ is defined as (see e.g. \\cite{Pflug2013})\n
$$\\rho_\\Omega(z)=\\inf \\{t>0:\\ z\/t\\in \\Omega\\},\\ \\ \\ z\\in \\cv^n.$$\n
Note that $\\Omega=\\{z\\in \\cv^n:\\ \\rho_\\Omega(z)<1\\}$. Set $\\Omega(r)=\\{z\\in \\cv^n:\\ \\rho_\\Omega(z)<r\\}$. For a bounded domain $D\\subset \\cv^n$, we define the \\textit{generalized squeezing function} with respect to the model domain $\\Omega$ as\n
$$s_D^\\Omega(z)=\\sup \\{r:\\ \\Omega(r)\\subset f(D),\\ f\\in \\oc_u(D,\\Omega),\\ f(z)=0\\}.$$\n
\n
\\section{Fridman invariants}\\label{S:Fridman}\n
\n
Throughout this section, we suppose that $D$ is a Kobayashi hyperbolic domain in $\\cv^n$ and $\\Omega$ is a bounded homogeneous domain in $\\cv^n$ containing the origin. We will use the following two lemmas; the first records the fact that Kobayashi balls are domains, and the second is the generalized Hurwitz theorem.\n
\n
\\begin{lem}\\label{lbc}\n
For any $z\\in D$ and $r>0$, $B_D^k(z,r)$ is a subdomain of $D$.\n
\\end{lem}\n
\n
\\begin{lem}\\label{lh}\n
Let $G$ be a domain in $\\cv^n$ and let $\\{f_i\\}$ be a sequence of injective holomorphic maps from $G$ into $\\cv^n$ converging uniformly on compact subsets of $G$ to a holomorphic map $f$. If $\\det f'\\not\\equiv 0$, then $f$ is injective.\n
\\end{lem}\n
\n
We say that $f\\in \\oc_u(\\Omega,D)$ is an \\textit{extremal map} at $z\\in D$ if $f(0)=z$ and $B_D^k(z,\\arctanh(e_D^\\Omega(z)))\\subset f(\\Omega)$.\n
\n
\\begin{thm}\\label{tee}\n
Suppose that $D$ is bounded. Then an extremal map exists at each $z\\in D$.\n
\\end{thm}\n
\\begin{proof}\n
By definition, there exist a sequence $r_i\\nearrow \\arctanh(e_D^\\Omega(z))$ and maps $f_i\\in \\oc_u(\\Omega,D)$ such that $f_i(0)=z$ and $B_D^k(z,r_i)\\subset f_i(\\Omega)$ for all $i$. Since $D$ is bounded, by Montel's theorem, we may assume that $\\{f_i\\}$ converges uniformly on compact subsets of $\\Omega$ to a holomorphic map $f:\\Omega \\rightarrow \\bar{D}$. By Lemma \\ref{lbc}, each $B_D^k(z,r_i)$ is a domain. Define $g_i=f_i^{-1}|B_D^k(z,r_i)$. By Montel's theorem again, we may assume that $\\{g_i\\}$ converges uniformly on compact subsets of $B_D^k(z,\\arctanh(e_D^\\Omega(z)))$ to a holomorphic map $g$ with $g(z)=0$. Take $s>0$ such that $\\bv^n(z,s)\\subset B_D^k(z,r_1)$. By Cauchy's inequality, for any $i$, $|\\det g_i'(z)|<c$ for some positive constant $c$ independent of $i$, and hence $|\\det f_i'(0)|>\\frac{1}{c}$, for any $i$. Thus, we have $|\\det f'(0)|>0$ and $|\\det g'(z)|>0$. By Lemma \\ref{lh}, both $f$ and $g$ are injective. In particular, $f(\\Omega)\\subset D$ and $g(B_D^k(z,\\arctanh(e_D^\\Omega(z))))\\subset \\Omega$. 
Since $f\\circ g(w)=w$ for all $w\\in B_D^k(z,\\arctanh(e_D^\\Omega(z)))$, we have $B_D^k(z,\\arctanh(e_D^\\Omega(z)))\\subset f(\\Omega)$, i.e., $f$ is the desired extremal map.\n
\\end{proof}\n
\n
Based on Theorem \\ref{tee}, we can give another proof of \\cite[Theorem 1.3(2)]{Fridman1983} as follows.\n
\n
\\begin{thm}\n
If there exists $z\\in D$ such that $e_D^\\Omega(z)=1$, then $D$ is biholomorphically equivalent to $\\Omega$.\n
\\end{thm}\n
\\begin{proof}\n
Since $\\Omega$ is homogeneous, $s_\\Omega(z)\\equiv c$ for some positive number $c$. Thus, by \\cite[Theorem 4.7]{Deng2012}, $\\Omega$ is Kobayashi complete, hence taut.\n
\n
Without loss of generality, assume that $0\\in \\Omega$. Let the $f_i$ and $g_i$ be as in the proof of Theorem \\ref{tee}. Since $e_D^\\Omega(z)=1$, we have $\\bigcup_i B_D^k(z,r_i)=D$.\n
\n
Since $\\Omega$ is taut, by \\cite[Theorem 5.1.5]{Kobayashi98}, there exists a subsequence $\\{g_{k_i}\\}$ of $\\{g_i\\}$ which converges to a holomorphic map $g:D \\rightarrow \\Omega$ uniformly on compact subsets of $D$. Let $z_1,z_2\\in D$ be such that $g(z_1)=g(z_2)$. For $k_i$ large enough we have $z_1,z_2\\in B_D^k(z,r_{k_i})$, so that $f_{k_i}\\circ g_{k_i}(z_1)=z_1$ and $f_{k_i}\\circ g_{k_i}(z_2)=z_2$, and the decreasing property of the Kobayashi distance gives\n
$$k_D(z_1,z_2)\\le k_{f_{k_i}(\\Omega)}(f_{k_i}\\circ g_{k_i}(z_1),f_{k_i}\\circ g_{k_i}(z_2))=k_\\Omega(g_{k_i}(z_1),g_{k_i}(z_2)).$$\n
Letting $k_i\\rightarrow \\infty$, by the continuity of the Kobayashi distance, we have $k_D(z_1,z_2)\\le k_\\Omega(g(z_1),g(z_2))=0$. Since $D$ is Kobayashi hyperbolic, we have $z_1=z_2$. Thus, $g$ is injective and $D$ is biholomorphic to a bounded domain.\n
\n
Now Theorem \\ref{tee} applies and shows that $D$ is biholomorphically equivalent to $\\Omega$.\n
\\end{proof}\n
\n
It was shown in \\cite[Theorem 1.3(1)]{Fridman1983} that $h_D^\\Omega(z)$, and hence $e_D^\\Omega(z)$, is continuous. For its proof, Fridman showed that for $z_1$ and $z_2$ sufficiently close, $|1\/h_D^\\Omega(z_1)-1\/h_D^\\Omega(z_2)|\\le k_D(z_1,z_2)$. Our next result gives a ``global\" version of this estimate in terms of $e_D^\\Omega(z)$.\n
\n
\\begin{thm}\\label{tec}\n
For any $z_1$ and $z_2$ in $D$, we have\n
$$|e_D^\\Omega(z_1)-e_D^\\Omega(z_2)|\\le \\tanh[k_D(z_1,z_2)].$$\n
\\end{thm}\n
\n
For the proof of Theorem \\ref{tec}, we need the following basic fact, whose proof we provide for completeness.\n
\n
\\begin{lem}\\label{ltanh}\n
Suppose that $t_i\\ge 0$, $i=1,2,3$, and $t_3\\le t_1+t_2$. Then,\n
$$\\tanh(t_3)\\le \\tanh(t_1)+\\tanh(t_2).$$\n
\\end{lem}\n
\\begin{proof}\n
Since $t_3\\le t_1+t_2$ and $\\tanh$ is increasing, it suffices to show that $\\tanh(t_1+t_2)\\le \\tanh(t_1)+\\tanh(t_2)$. Define\n
$$f(t_1,t_2)=\\frac{2}{e^{2t_1}+1}+\\frac{2}{e^{2t_2}+1}-\\frac{2}{e^{2(t_1+t_2)}+1}-1,$$\n
so that, using $\\tanh(t)=1-\\frac{2}{e^{2t}+1}$, we have $\\tanh(t_1)+\\tanh(t_2)-\\tanh(t_1+t_2)=-f(t_1,t_2)$. It thus suffices to show that $f(t_1,t_2)\\le 0$ for all $t_1,t_2\\ge 0$. For any fixed $t_1\\ge 0$, consider\n
$$g(t_2)=\\frac{2}{e^{2(t_1+t_2)}+1}-\\frac{2}{e^{2t_2}+1}.$$\n
Then,\n
$$g'(t_2)=-\\frac{4e^{2(t_1+t_2)}}{(e^{2(t_1+t_2)}+1)^{2}}+\\frac{4e^{2t_2}}{(e^{2t_2}+1)^{2}}.$$\n
Since the function $\\ds \\frac{e^t}{(e^t+1)^{2}}$ is decreasing for $t\\ge 0$, we have $g'(t_2)\\ge 0$ for all $t_2\\ge 0$. Hence, $g(t_2)\\ge g(0)$ for all $t_2\\ge 0$, which implies that $f(t_1,t_2)=g(0)-g(t_2)\\le 0$ for all $t_1,t_2\\ge 0$.\n
\\end{proof}\n
\n
\\begin{proof}[Proof of Theorem \\ref{tec}]\n
Fix $0<\\epsilon<e_D^\\Omega(z_1)$, and let $f\\in \\oc_u(\\Omega,D)$ be such that $B_D^k(z_1,\\arctanh[e_D^\\Omega(z_1)-\\epsilon])\\subset f(\\Omega)$.\n
\n
If $z_2\\notin B_D^k(z_1,\\arctanh[e_D^\\Omega(z_1)-\\epsilon])$, then $\\tanh[k_D(z_1,z_2)]\\ge e_D^\\Omega(z_1)-\\epsilon$, and hence\n
$$e_D^\\Omega(z_2)>0\\ge e_D^\\Omega(z_1)-\\epsilon-\\tanh[k_D(z_1,z_2)].$$\n
\n
If $z_2\\in B_D^k(z_1,\\arctanh[e_D^\\Omega(z_1)-\\epsilon])$ (we may assume that $e_D^\\Omega(z_1)-\\epsilon-\\tanh[k_D(z_1,z_2)]>0$, since otherwise the desired inequality is trivial), then by Lemma \\ref{ltanh}, we have for all $z$ with $\\tanh[k_D(z_2,z)]<e_D^\\Omega(z_1)-\\epsilon-\\tanh[k_D(z_1,z_2)]$ that\n
$$\\tanh[k_D(z_1,z)]\\le \\tanh[k_D(z_1,z_2)]+\\tanh[k_D(z_2,z)]<e_D^\\Omega(z_1)-\\epsilon.$$\n
That is,\n
$$B_D^k(z_2,\\arctanh[e_D^\\Omega(z_1)-\\epsilon-\\tanh[k_D(z_1,z_2)]])\\subset B_D^k(z_1,\\arctanh[e_D^\\Omega(z_1)-\\epsilon])\\subset f(\\Omega),$$\n
which implies that $e_D^\\Omega(z_2)\\ge e_D^\\Omega(z_1)-\\epsilon-\\tanh[k_D(z_1,z_2)]$.\n
\n
In both cases, letting $\\epsilon\\rightarrow 0$ yields $e_D^\\Omega(z_2)\\ge e_D^\\Omega(z_1)-\\tanh[k_D(z_1,z_2)]$. Exchanging the roles of $z_1$ and $z_2$ gives the reverse inequality, which completes the proof.\n
\\end{proof}\n
\n
We next study the stability of the Fridman invariant under exhaustion. Let $\\{D_j\\}_{j\\ge 1}$ be a sequence of subdomains of $D$ such that for any compact set $K\\subset D$, there exists $N>0$ such that $K\\subset D_j$ for all $j>N$. 
In this case, we also say that $\\{D_j\\}_{j\\ge 1}$ \\textit{exhausts} $D$.\n
\n
\\begin{cor}\\label{cek}\n
Let $\\{D_j\\}_{j\\ge 1}$ be a sequence of exhausting subdomains of $D$. If $\\ds \\lim_{j\\rightarrow \\infty}e_{D_j}^\\Omega(z)=e_D^\\Omega(z)$ for all $z\\in D$, then the convergence is uniform on compact subsets of $D$.\n
\\end{cor}\n
\\begin{proof}\n
Let $K$ be a compact subset of $D$. Then there exists $r>0$ such that $\\bigcup_{z\\in K}\\bv^n(z,r)\\Subset D$, and hence there exists $N_1>0$ such that $\\bigcup_{z\\in K}\\bv^n(z,r)\\subset D_j$ for all $j>N_1$. Fix any $\\epsilon>0$ and take $\\delta=r\\epsilon\/3$. Since $\\{\\bv^n(z,\\delta)\\}_{z\\in K}$ is an open covering of $K$, there is a finite set $\\{z_i\\}_{i=1}^m$ such that $K\\subset \\bigcup_{i=1}^m \\bv^n(z_i,\\delta)$. For any $z\\in K$, there is some $z_i$ such that $z\\in \\bv^n(z_i,\\delta)$. By Theorem \\ref{tec} and the decreasing property of the Kobayashi distance, we have, for all $j>N_1$,\n
\\begin{align*}\n
|e_D^\\Omega(z)-e_{D_j}^\\Omega(z)|\n
&\\le |e_D^\\Omega(z)-e_D^\\Omega(z_i)|+|e_D^\\Omega(z_i)-e_{D_j}^\\Omega(z_i)|+|e_{D_j}^\\Omega(z_i)-e_{D_j}^\\Omega(z)|\\\\\n
&\\le \\tanh[k_D(z,z_i)]+|e_D^\\Omega(z_i)-e_{D_j}^\\Omega(z_i)|+\\tanh[k_{D_j}(z,z_i)]\\\\\n
&\\le 2\\tanh[k_{\\bv^n(z_i,r)}(z,z_i)]+|e_D^\\Omega(z_i)-e_{D_j}^\\Omega(z_i)|\\\\\n
&<2\\epsilon\/3+|e_D^\\Omega(z_i)-e_{D_j}^\\Omega(z_i)|.\n
\\end{align*}\n
On the other hand, there exists $N_2>0$ such that $|e_D^\\Omega(z_i)-e_{D_j}^\\Omega(z_i)|<\\epsilon\/3$ for all $z_i$ and $j>N_2$. Take $N=\\max\\{N_1,N_2\\}$. Then for any $j>N$, we have $|e_D^\\Omega(z)-e_{D_j}^\\Omega(z)|<\\epsilon$ for all $z\\in K$. This completes the proof.\n
\\end{proof}\n
\n
The condition $\\ds \\lim_{j\\rightarrow \\infty}e_{D_j}^\\Omega(z)=e_D^\\Omega(z)$ in the previous corollary is usually referred to as the \\textit{stability} of the Fridman invariant, which was shown to hold when $D$ is Kobayashi complete in \\cite[Theorem 2.1]{Fridman1983}. Under the weaker assumption that $D$ is taut (or bounded), we have the following inequality.\n
\n
\\begin{thm}\\label{tes}\n
Suppose that $D$ is bounded or taut. Let $\\{D_j\\}_{j\\ge 1}$ be a sequence of exhausting subdomains of $D$. Then for any $z\\in D$, $\\ds \\limsup_{j\\rightarrow \\infty} e_{D_j}^\\Omega(z)\\le e_D^\\Omega(z)$.\n
\\end{thm}\n
\n
To prove Theorem \\ref{tes}, we need the following\n
\n
\\begin{lem}\\label{lbe}\n
Let $\\{D_j\\}_{j\\ge 1}$ be a sequence of exhausting subdomains of $D$. Then for any $z\\in D$ and $r>0$, $\\{B_{D_j}^k(z,r)\\}_{j\\ge 1}$ exhausts $B_D^k(z,r)$.\n
\\end{lem}\n
\\begin{proof}\n
By Lemma \\ref{lbc}, we know that $B_D^k(z,r)$ is a subdomain of $D$ for any $z\\in D$ and $r>0$. Firstly, we show that\n
$$\\lim_{j\\rightarrow \\infty}k_{D_j}(z',z'')=k_D(z',z''),\\ \\ \\forall z',z''\\in D.$$\n
Consider a sequence of subdomains $\\{G_j\\}_{j\\ge 1}$ such that (i) $G_j\\Subset D$, (ii) $G_j\\subset G_{j+1}$, (iii) $D=\\bigcup_{j\\ge 1} G_j$. By \\cite[Proposition 3.3.5]{Pflug2013}, we have\n
$$\\lim_{j\\rightarrow \\infty}k_{G_j}(z',z'')=k_D(z',z''),\\ \\ \\forall z',z''\\in D.$$\n
For any $j\\ge 1$, there exists $N_j>0$ such that $G_j\\subset D_i$ for all $i>N_j$. By the decreasing property of the Kobayashi distance, we get\n
$$\\lim_{j\\rightarrow \\infty}k_{D_j}(z',z'')=k_D(z',z''),\\ \\ \\forall z',z''\\in D.$$\n
\n
Now we prove that for any $K\\Subset B_D^k(z,r)$, there exists $N>0$ such that $K\\subset B_{D_j}^k(z,r)$ for all $j>N$.\n
\n
Since $k_D(z,\\cdot)$ is continuous, we have $\\max_{w\\in K}k_D(z,w)<r$, and there exists $\\delta>0$ such that $\\bigcup_{w\\in K} \\bv^n(w,\\delta)\\Subset B_D^k(z,r)$. 
Hence, there exists $N_1>0$ such that $\\bigcup_{w\\in K} \\bv^n(w,\\delta)\\subset D_j$ for all $j>N_1$.\n
\n
Let $0<\\epsilon<r-\\max_{w\\in K}k_D(z,w)$, and choose $0<\\delta_1<\\delta$ such that $\\arctanh(\\delta_1\/\\delta)<\\epsilon\/3$. Since $K$ is compact, there is a finite set $\\{z_l\\}_{l=1}^m\\subset K$ such that $K\\subset \\bigcup_{l=1}^m \\bv^n(z_l,\\delta_1)$. By the first part of the proof, there exists $N_2>0$ such that $|k_{D_j}(z,z_l)-k_D(z,z_l)|<\\epsilon\/3$ for any $j>N_2$ and $1\\le l\\le m$. For any $w\\in K$, there is some $z_l$ such that $w\\in \\bv^n(z_l,\\delta_1)$. Set $N=\\max\\{N_1,N_2\\}$. Then for all $j>N$, by the decreasing property of the Kobayashi distance, we have\n
\\begin{align*}\n
&|k_{D_j}(z,w)-k_D(z,w)|\\\\\n
\\le &|k_{D_j}(z,w)-k_{D_j}(z,z_l)|+|k_{D_j}(z,z_l)-k_D(z,z_l)|+|k_D(z,z_l)-k_D(z,w)|\\\\\n
\\le &k_{D_j}(z_l,w)+|k_{D_j}(z,z_l)-k_D(z,z_l)|+k_D(z_l,w)\\\\\n
\\le &2k_{\\bv^n(z_l,\\delta)}(z_l,w)+|k_{D_j}(z,z_l)-k_D(z,z_l)|\\\\\n
<&2\\epsilon\/3+\\epsilon\/3=\\epsilon.\n
\\end{align*}\n
Therefore, $k_{D_j}(z,w)<k_D(z,w)+\\epsilon<r$ for all $w\\in K$ and $j>N$, i.e., $K\\subset B_{D_j}^k(z,r)$ for all $j>N$. This completes the proof.\n
\\end{proof}\n
\n
\\begin{proof}[Proof of Theorem \\ref{tes}]\n
Since the proof for the taut case is similar to (and simpler than) that for the bounded case, we will assume that $D$ is bounded.\n
\n
For any $z\\in D$, let $\\{e_{D_{l_i}}^\\Omega\\}$ be a subsequence such that $\\ds \\lim_{l_i\\rightarrow \\infty}e_{D_{l_i}}^\\Omega(z)=\\limsup_{j\\rightarrow \\infty} e^\\Omega_{D_j}(z)=:\\tanh r$. For any $0<\\epsilon<r$, there exists $N_1>0$ such that $e_{D_{l_i}}^\\Omega(z)>\\tanh(r-\\epsilon)$ for all $l_i>N_1$.\n
\n
Without loss of generality, assume that $0\\in \\Omega$. By definition, for any $l_i>N_1$, there exists an open holomorphic embedding $f_{l_i}:\\Omega \\rightarrow D_{l_i}$ such that $f_{l_i}(0)=z$ and $B_{D_{l_i}}^k(z,r-\\epsilon) \\subset f_{l_i}(\\Omega)$. Since $D$ is bounded, by Montel's theorem, we may assume, passing to a subsequence if necessary, that $\\{f_{l_i}\\}$ converges to a holomorphic map $f:\\Omega \\rightarrow \\bar{D}$ uniformly on compact subsets of $\\Omega$.\n
\n
By Lemma \\ref{lbc}, each $B_{D_{l_i}}^k(z,r-\\epsilon)$ is a domain. Define $g_{l_i}=f_{l_i}^{-1}|B_{D_{l_i}}^k(z,r-\\epsilon)$. By Montel's theorem and Lemma \\ref{lbe}, we may assume that the sequence $\\{g_{l_i}\\}$ converges uniformly on compact subsets of $B_D^k(z,r-\\epsilon)$ to a holomorphic map $g:B_D^k(z,r-\\epsilon)\\rightarrow \\bar\\Omega$.\n
\n
Take $s>0$ such that $\\bv^n(z,s)\\Subset B_D^k(z,r-\\epsilon)$. By Lemma \\ref{lbe}, there exists $N>N_1$ such that $\\bv^n(z,s)\\subset B_{D_{l_i}}^k(z,r-\\epsilon)$ for all $l_i>N$. Consider $g_{l_i}|\\bv^n(z,s)$. By Cauchy's inequality, $|\\det g_{l_i}'(z)|<c$ for all $l_i>N$ and some positive constant $c$. So we have $|\\det f_{l_i}'(0)|>\\frac{1}{c}$ for all $l_i>N$. Thus, we have $|\\det f'(0)|>0$ and $|\\det g'(z)|>0$. By Lemma \\ref{lh}, both $f$ and $g$ are injective. In particular, $f(\\Omega)\\subset D$ with $f(0)=z$ and $g(B_D^k(z,r-\\epsilon))\\subset \\Omega$ with $g(z)=0$. Since $f\\circ g(w)=w$ for all $w\\in B_D^k(z,r-\\epsilon)$, we get $e_D^\\Omega(z)\\ge \\tanh(r-\\epsilon)$. Since $\\epsilon$ is arbitrary, we have $\\ds e_D^\\Omega(z)\\ge \\tanh r=\\limsup_{j\\rightarrow \\infty} e^\\Omega_{D_j}(z)$.\n
\\end{proof}\n
\n
Based on Corollary \\ref{cek} and Theorem \\ref{tes}, we can slightly refine \\cite[Theorem 2.1]{Fridman1983} as follows.\n
\n
\\begin{thm}\n
Suppose that $D$ is Kobayashi complete and $\\{D_j\\}_{j\\ge 1}$ exhausts $D$. 
Then $\\ds \\lim_{j\\rightarrow \\infty} e_{D_j}^\\Omega(z)=e_D^\\Omega(z)$ uniformly on compact subsets of $D$.\n
\\end{thm}\n
\\begin{proof}\n
Since $D$ is Kobayashi complete, and thus taut, we have $\\ds \\limsup_{j\\rightarrow \\infty} e_{D_j}^\\Omega(z)\\le e_D^\\Omega(z)$ for all $z\\in D$ by Theorem \\ref{tes}.\n
\n
For $z\\in D$ and $0<\\epsilon<e_D^\\Omega(z)$, let $f\\in \\oc_u(\\Omega,D)$ be such that $f(0)=z$ and $B_D^k(z,\\arctanh[e_D^\\Omega(z)-\\epsilon\/2])\\subset f(\\Omega)$. Since $D$ is Kobayashi complete, $\\overline{B_D^k(z,\\arctanh[e_D^\\Omega(z)-\\epsilon])}$ is a compact subset of $f(\\Omega)$, and hence there exists $\\delta>0$ such that $B_D^k(z,\\arctanh[e_D^\\Omega(z)-\\epsilon])\\subset f((1-\\delta)\\Omega)\\Subset D$. Hence, there exists $N>0$ such that $B_D^k(z,\\arctanh[e_D^\\Omega(z)-\\epsilon])\\subset f((1-\\delta)\\Omega)\\subset D_j$ for all $j>N$. By the decreasing property of the Kobayashi distance, we have $B^k_{D_j}(z,\\arctanh[e_D^\\Omega(z)-\\epsilon])\\subset B_D^k(z,\\arctanh[e_D^\\Omega(z)-\\epsilon])$. So we have $B^k_{D_j}(z,\\arctanh[e_D^\\Omega(z)-\\epsilon])\\subset f((1-\\delta)\\Omega)$ for all $j>N$, which implies that $\\ds \\liminf_{j\\rightarrow \\infty} e_{D_j}^\\Omega(z)\\ge e_D^\\Omega(z)-\\epsilon$. Since $\\epsilon$ is arbitrary, we get $\\ds \\liminf_{j\\rightarrow \\infty} e_{D_j}^\\Omega(z)\\ge e_D^\\Omega(z)$ and hence $\\ds \\lim_{j\\rightarrow \\infty}e_{D_j}^\\Omega(z)=e_D^\\Omega(z)$. By Corollary \\ref{cek}, the convergence is uniform on compact subsets of $D$.\n
\\end{proof}\n
\n
\\section{Generalized squeezing functions}\\label{S:squeezing}\n
\n
Throughout this section, we suppose that $D$ is a bounded domain in $\\cv^n$ and $\\Omega$ is a bounded, balanced and convex domain in $\\cv^n$ (unless otherwise stated).\n
\n
Denote by $k_\\Omega$ and $c_\\Omega$ the Kobayashi and Carath\\'{e}odory distance on $\\Omega$, respectively. The following theorem of Lempert is well known.\n
\n
\\begin{thm}\\cite[Theorem 1]{L:convex}\\label{T:convex}\n
On a convex domain $\\Omega$, $k_\\Omega=c_\\Omega$.\n
\\end{thm}\n
\n
Combining Theorem \\ref{T:convex} with \\cite[Proposition 2.3.1 (c)]{Pflug2013}, we have the following key lemma.\n
\n
\\begin{lem}\\label{lnk}\n
For any $z\\in \\Omega$, $\\rho_\\Omega(z)=\\tanh (k_\\Omega(0,z))=\\tanh (c_\\Omega(0,z))$.\n
\\end{lem}\n
\n
We will also need the following basic fact.\n
\n
\\begin{lem}\\label{lmn}\n
$\\rho_\\Omega$ is a $\\cv$-norm.\n
\\end{lem}\n
\\begin{proof}\n
For any $z_1$, $z_2\\in \\cv^n$, we want to show that $\\rho_\\Omega(z_1+z_2)\\le \\rho_\\Omega(z_1)+\\rho_\\Omega(z_2)$.\n
\n
Fix $\\epsilon >0$, and take $c_1=\\rho_\\Omega(z_1)+\\epsilon\/2$ and $c_2=\\rho_\\Omega(z_2)+\\epsilon\/2$. Then $z_1\/c_1 \\in \\Omega$ and $z_2\/c_2 \\in \\Omega$. Since $\\Omega$ is convex, we get\n
$$\\frac{z_1+z_2}{c_1+c_2}=\\frac{c_1}{c_1+c_2}\\frac{z_1}{c_1}+\\frac{c_2}{c_1+c_2}\\frac{z_2}{c_2}\\in \\Omega.$$\n
Hence, $\\rho_\\Omega( z_1+z_2)\\le c_1+c_2 = \\rho_\\Omega(z_1)+ \\rho_\\Omega(z_2)+\\epsilon$. Since $\\epsilon$ is arbitrary, we obtain $\\rho_\\Omega( z_1+z_2)\\le \\rho_\\Omega(z_1)+ \\rho_\\Omega(z_2)$.\n
\n
Since $\\Omega$ is bounded, it is obvious that $\\rho_\\Omega(z)>0$ for all $z\\neq 0$, which completes the proof.\n
\\end{proof}\n
\n
We say that $f\\in \\oc_u(D,\\Omega)$ is an \\textit{extremal map} at $z\\in D$ if $f(z)=0$ and $\\Omega(s_D^\\Omega(z))\\subset f(D)$. When $\\Omega=\\bv^n$, the existence of extremal maps was given in \\cite[Theorem 2.1]{Deng2012}.\n
\n
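As a concrete illustration (the constants below follow directly from the definitions and are recorded here only as an example), take $\\Omega=\\Delta^n$, the unit polydisc. Then $\\rho_{\\Delta^n}(z)=\\max_{1\\le j\\le n}|z_j|$, so that $\\Delta^n(r)$ is simply the polydisc of polyradius $(r,\\dots,r)$. Combining the elementary estimate\n
$$\\frac{1}{\\sqrt{n}}\\|z\\|\\le \\rho_{\\Delta^n}(z)\\le \\|z\\|,\\ \\ \\ z\\in \\cv^n,$$\n
with the norm-comparison argument in the proof of Theorem \\ref{te} below, one obtains the explicit bounds\n
$$s^{\\Delta^n}_D(z)\\le s^{\\bv^n}_D(z)\\le \\sqrt{n}\\,s^{\\Delta^n}_D(z),\\ \\ \\ z\\in D.$$\n
\n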
The proof of the next theorem is very similar to those of Theorem \\ref{tee} and \\cite[Theorem 2.1]{Deng2012}, being based on Montel's theorem and the generalized Hurwitz theorem, so we omit the details.\n
\n
\\begin{thm}\\label{tse}\n
An extremal map exists at each $z\\in D$.\n
\\end{thm}\n
\n
As an immediate corollary, we have\n
\n
\\begin{cor}\n
$s_D^\\Omega(z)=1$ for some $z\\in D$ if and only if $D$ is biholomorphically equivalent to $\\Omega$.\n
\\end{cor}\n
\n
In \\cite[Theorem 3.1]{Deng2012}, it was shown that $s_D(z)$ is continuous. Moreover, the following inequality was given in \\cite[Theorem 3.2]{Deng2012} without details:\n
$$|s_D(z_1)-s_D(z_2)|\\le 2\\tanh[k_D(z_1,z_2)],\\ \\ \\ z_1,z_2\\in D.$$\n
Our next theorem gives the same inequality for generalized squeezing functions, and in particular shows that they are also continuous.\n
\n
\\begin{thm}\\label{tsc}\n
For any $z_1,z_2\\in D$, we have\n
$$|s_D^\\Omega(z_1)-s_D^\\Omega(z_2)|\\le 2\\tanh[k_D(z_1,z_2)].$$\n
In particular, $s_D^\\Omega(z)$ is continuous.\n
\\end{thm}\n
\\begin{proof}\n
By Theorem \\ref{tse}, there exists a holomorphic embedding $f:D \\rightarrow \\Omega$ such that $f(z_1)=0$ and $\\Omega(s_D^\\Omega(z_1)) \\subset f(D)$.\n
\n
If $\\tanh[k_D(z_1,z_2)]\\ge s_D^\\Omega(z_1)$, then it is obvious that\n
$$s_D^\\Omega(z_2)>0\\ge \\frac{s_D^\\Omega(z_1)-\\tanh[k_D(z_1,z_2)]}{1+\\tanh[k_D(z_1,z_2)]}.$$\n
\n
Suppose now that $\\tanh[k_D(z_1,z_2)]<s_D^\\Omega(z_1)$. By the decreasing property of the Kobayashi distance and Lemma \\ref{lnk}, we have\n
$$\\begin{aligned}\n
s_D^\\Omega(z_1)&>\\tanh[k_D(z_1,z_2)]=\\tanh[k_{f(D)}(f(z_1),f(z_2))]\\\\\n
&\\ge \\tanh[k_\\Omega(f(z_1),f(z_2))]=\\tanh[k_\\Omega(0,f(z_2))]=\\rho_\\Omega(f(z_2)).\n
\\end{aligned}$$\n
Define\n
$$h(w):=\\frac{w-f(z_2)}{1+\\tanh[k_D(z_1,z_2)]},$$\n
and set $g(z)=h\\circ f(z)$. Then $g\\in \\oc_u(D,\\Omega)$ and $g(z_2)=0$.\n
\n
For any $w\\in \\Omega$ with\n
$$\\rho_\\Omega(w)<\\frac{s_D^\\Omega(z_1)-\\tanh[k_D(z_1,z_2)]}{1+\\tanh[k_D(z_1,z_2)]},$$\n
we have\n
$$\\rho_\\Omega(h^{-1}(w)-f(z_2))=(1+\\tanh[k_D(z_1,z_2)])\\rho_\\Omega(w)<s_D^\\Omega(z_1)-\\tanh[k_D(z_1,z_2)],$$\n
and hence, by Lemma \\ref{lmn},\n
$$\\rho_\\Omega(h^{-1}(w))\\le \\rho_\\Omega(h^{-1}(w)-f(z_2))+\\rho_\\Omega(f(z_2))<s_D^\\Omega(z_1).$$\n
That is, $h^{-1}(w)\\in \\Omega(s_D^\\Omega(z_1))\\subset f(D)$, and thus $w\\in g(D)$. This shows that\n
$$s_D^\\Omega(z_2)\\ge \\frac{s_D^\\Omega(z_1)-\\tanh[k_D(z_1,z_2)]}{1+\\tanh[k_D(z_1,z_2)]}\\ge s_D^\\Omega(z_1)-2\\tanh[k_D(z_1,z_2)],$$\n
where the last inequality holds since $s_D^\\Omega(z_1)\\le 1$. Hence, in both cases, $s_D^\\Omega(z_1)-s_D^\\Omega(z_2)\\le 2\\tanh[k_D(z_1,z_2)]$. Exchanging the roles of $z_1$ and $z_2$ gives the reverse inequality, which completes the proof.\n
\\end{proof}\n
\n
Next, we consider the stability of generalized squeezing functions under exhaustion, in analogy with Corollary \\ref{cek}.\n
\n
\\begin{thm}\n
Let $\\{D_j\\}_{j\\ge 1}$ be a sequence of exhausting subdomains of $D$. If $\\ds \\lim_{j\\rightarrow \\infty}s_{D_j}^\\Omega(z)=s_D^\\Omega(z)$ for all $z\\in D$, then the convergence is uniform on compact subsets of $D$.\n
\\end{thm}\n
\\begin{proof}\n
Suppose not. Then there exist a compact subset $K\\subset D$, $\\epsilon>0$, a subsequence $\\{l_j\\}$ and $z_{l_j}\\in K\\subset D_{l_j}$ such that\n
$$|s^\\Omega_{D_{l_j}}(z_{l_j})-s_D^\\Omega(z_{l_j})|\\ge \\epsilon.$$\n
Since $K$ is compact, there exists a convergent subsequence, again denoted by $\\{z_{l_j}\\}$, with $\\lim_{j\\rightarrow \\infty} z_{l_j}=z\\in K$. Choose $r>0$ such that $\\overline{\\bv^n(z,r)}\\subset D$. Then, there is $N_1>0$ such that $z_{l_j}\\in \\bv^n(z,r)\\subset D_{l_j}$ for all $l_j>N_1$. By Theorem \\ref{tsc} and the decreasing property of the Kobayashi distance, for all $l_j>N_1$ we have\n
\\begin{align*}\n
|s^\\Omega_{D_{l_j}}(z_{l_j})-s_D^\\Omega(z_{l_j})|\n
& \\le |s^\\Omega_{D_{l_j}}(z_{l_j})-s^\\Omega_{D_{l_j}}(z)|+|s^\\Omega_{D_{l_j}}(z)-s_D^\\Omega(z)|+|s_D^\\Omega(z)-s_D^\\Omega(z_{l_j})|\\\\\n
&\\le 2\\tanh[k_{D_{l_j}}(z_{l_j},z)]+|s^\\Omega_{D_{l_j}}(z)-s_D^\\Omega(z)|+2\\tanh[k_D(z,z_{l_j})]\\\\\n
& \\le 4\\,\\frac{\\|z_{l_j}-z\\|}{r}+|s^\\Omega_{D_{l_j}}(z)-s_D^\\Omega(z)|.\n
\\end{align*}\n
It is clear that there is $N_2>0$ such that for all $l_j>N_2$ we have\n
$$\\frac{\\|z_{l_j}-z\\|}{r}<\\frac{\\epsilon}{6}\\ \\ \\textup{and}\\ \\ |s^\\Omega_{D_{l_j}}(z)-s_D^\\Omega(z)|<\\frac{\\epsilon}{3}.$$\n
Set $N=\\max\\{N_1,N_2\\}$. Then for all $l_j>N$ we have\n
$$|s^\\Omega_{D_{l_j}}(z_{l_j})-s_D^\\Omega(z_{l_j})|<\\epsilon,$$\n
which is a contradiction.\n
\\end{proof}\n
\n
The notion of the squeezing function was originally introduced to study the ``uniform squeezing\" property. 
In this regard, we have the following\n
\n
\\begin{thm}\\label{te}\n
For two bounded, balanced and convex domains $\\Omega_1$ and $\\Omega_2$ in $\\cv^n$, $s^{\\Omega_1}_D(z)$ has a positive lower bound if and only if $s^{\\Omega_2}_D(z)$ has a positive lower bound.\n
\\end{thm}\n
\\begin{proof}\n
It suffices to prove the equivalence when $\\Omega_2=\\bv^n$. By Lemma \\ref{lmn}, $\\rho_{\\Omega_1}(z)$ is a $\\cv$-norm. Thus, it is continuous and there exist $M\\ge m>0$ such that $m\\|z\\|\\le \\rho_{\\Omega_1}(z) \\le M\\|z\\|$. Then, one readily checks using the definition that\n
$$\\frac{s^{\\Omega_1}_D(z)}{M}\\le s^{\\bv^n}_D(z)\\le \\frac{s^{\\Omega_1}_D(z)}{m}.$$\n
\\end{proof}\n
\n
Combining Theorem \\ref{te} with \\cite[Theorems 4.5 \\& 4.7]{Deng2012}, we have the following\n
\n
\\begin{thm}\\label{tckc}\n
If $s_D^\\Omega(z)$ has a positive lower bound, then $D$ is complete with respect to the Carath\\'{e}odory distance, the Kobayashi distance and the Bergman distance.\n
\\end{thm}\n
\n
\\section{Comparison of Fridman invariants and generalized squeezing functions}\\label{S:comparison}\n
\n
Since Fridman invariants and generalized squeezing functions are similar in spirit to the Kobayashi-Eisenman volume form $K_D$ and the Carath\\'{e}odory volume form $C_D$, respectively, it is natural to compare them. For this purpose, we will always assume that $D$ is a bounded domain in $\\cv^n$ and $\\Omega$ is a bounded, balanced, convex and homogeneous domain in $\\cv^n$.\n
\n
Similar to the classical quotient invariant $M_D(z):=C_D(z)\/K_D(z)$, we introduce the quotient $m_D^\\Omega(z)=s_D^\\Omega(z)\/e_D^\\Omega(z)$, which is also a biholomorphic invariant. When $\\Omega=\\bv^n$, we simply write $m_D(z)=s_D(z)\/e_D(z)$.\n
\n
In \\cite{Nikolov-Verma2018}, Nikolov and Verma have shown that $m_D(z)$ is always less than or equal to one. The next result shows that the same is true for $m_D^\\Omega(z)$.\n
\n
\\begin{thm}\\label{te>s}\n
For any $z\\in D$, we have $m_D^\\Omega(z)\\le 1$.\n
\\end{thm}\n
\\begin{proof}\n
For any $z\\in D$, by Theorem \\ref{tse}, there exists a holomorphic embedding $f:D \\rightarrow \\Omega$ such that $f(z)=0$ and $\\Omega(s_D^\\Omega(z))\\subset f(D)$.\n
\n
Define $g(w):=f^{-1}(s_D^\\Omega(z)w)$, which is an injective holomorphic mapping from $\\Omega$ to $D$ with $g(0)=z$. By the decreasing property of the Kobayashi distance and Lemma \\ref{lnk}, we have\n
$$B^k_{f(D)}(0,\\arctanh[s_D^\\Omega(z)])\\subset B^k_\\Omega(0,\\arctanh[s_D^\\Omega(z)])=\\Omega(s_D^\\Omega(z)).$$\n
Thus,\n
$$B_D^k(z,\\arctanh[s_D^\\Omega(z)])=f^{-1}(B^k_{f(D)}(0,\\arctanh[s_D^\\Omega(z)]))\\subset f^{-1}(\\Omega(s_D^\\Omega(z)))=g(\\Omega).$$\n
This implies that $e_D^\\Omega(z)\\ge s_D^\\Omega(z)$, i.e., $m_D^\\Omega(z)\\le 1$.\n
\\end{proof}\n
\n
A classical result of Bun Wong (\\cite[Theorem E]{W:ball}) says that if there is a point $z\\in D$ such that $M_D(z)=1$, then $D$ is biholomorphic to the unit ball $\\bv^n$. In \\cite[Theorem 3]{RY:comparison}, we showed that an analogous result for $m_D(z)$ does not hold. 
The next result is a generalized version of \\cite[Theorem 3]{RY:comparison} for $m_D^\\Omega(z)$.\n
\n
\\begin{thm}\\label{te=s}\n
If $D$ is bounded, balanced and convex, then $m_D^\\Omega(0)=1$.\n
\\end{thm}\n
\\begin{proof}\n
By Theorem \\ref{tee}, there exists a holomorphic embedding $f:\\Omega \\rightarrow D$ such that $f(0)=0$ and $B_D^k(0,\\arctanh[e_D^\\Omega(0)])\\subset f(\\Omega)$.\n
\n
Define $g(w):=f^{-1}(e_D^\\Omega(0)w)$, which is an injective holomorphic mapping from $D$ to $\\Omega$ with $g(0)=0$. By the decreasing property of the Kobayashi distance and Lemma \\ref{lnk} (applied to $D$), we have\n
$$B_{f(\\Omega)}^k(0,\\arctanh[e_D^\\Omega(0)])\\subset B_D^k(0,\\arctanh[e_D^\\Omega(0)])=D(e_D^\\Omega(0)).$$\n
Thus,\n
$$B_\\Omega^k(0,\\arctanh[e_D^\\Omega(0)])=f^{-1}(B_{f(\\Omega)}^k(0,\\arctanh[e_D^\\Omega(0)]))\\subset f^{-1}(D(e_D^\\Omega(0)))=g(D).$$\n
Since $B_\\Omega^k(0,\\arctanh[e_D^\\Omega(0)])=\\Omega(e_D^\\Omega(0))$ by Lemma \\ref{lnk}, this implies that $s_D^\\Omega(0)\\ge e_D^\\Omega(0)$. By Theorem \\ref{te>s}, we always have $s_D^\\Omega(0)\\le e_D^\\Omega(0)$. This completes the proof.\n
\\end{proof}\n
\n
\\begin{cor}\\label{cs=s}\n
Let $\\Omega_i$, $i=1,2$, be two bounded, balanced, convex and homogeneous domains in $\\cv^n$. Then $s^{\\Omega_2}_{\\Omega_1}(z_1)=s^{\\Omega_1}_{\\Omega_2}(z_2)$ for all $z_1\\in \\Omega_1$ and $z_2\\in \\Omega_2$.\n
\\end{cor}\n
\\begin{proof}\n
Since both $\\Omega_1$ and $\\Omega_2$ are homogeneous, it suffices to show that $s^{\\Omega_2}_{\\Omega_1}(0)=s^{\\Omega_1}_{\\Omega_2}(0)$.\n
\n
By Lemma \\ref{lnk}, we have $B^k_{\\Omega_2}(0,\\arctanh(r))=\\Omega_2(r)$ for $r>0$. Then, by definition, $s^{\\Omega_2}_{\\Omega_1}(0)=e^{\\Omega_1}_{\\Omega_2}(0)$. By Theorem \\ref{te=s}, we get $s^{\\Omega_2}_{\\Omega_1}(0)=s^{\\Omega_1}_{\\Omega_2}(0)$.\n
\\end{proof}\n
\n
We can also compare generalized squeezing functions for different model domains as follows.\n
\n
\\begin{thm}\n
Let $\\Omega_i$, $i=1,2$, be two bounded, balanced, convex and homogeneous domains in $\\cv^n$. Then, for any $z\\in D$, we have\n
$$s^{\\Omega_1}_{\\Omega_2}(0)s^{\\Omega_2}_D(z)\\le s^{\\Omega_1}_D(z)\\le \\frac{1}{s^{\\Omega_1}_{\\Omega_2}(0)}s^{\\Omega_2}_D(z).$$\n
\\end{thm}\n
\\begin{proof}\n
For any $z\\in D$, by Theorem \\ref{tse}, there exists a holomorphic embedding $f:D \\rightarrow \\Omega_1$ such that $f(z)=0$ and $\\Omega_1(s_D^{\\Omega_1}(z))\\subset f(D)$, and there exists a holomorphic embedding $g:\\Omega_1 \\rightarrow \\Omega_2$ such that $g(0)=0$ and $\\Omega_2(s^{\\Omega_2}_{\\Omega_1}(0))\\subset g(\\Omega_1)$.\n
\n
Set $F=g\\circ f$. Then $F\\in \\oc_u(D,\\Omega_2)$ with $F(z)=0$. Denote $\\Omega=\\Omega_2(s^{\\Omega_2}_{\\Omega_1}(0))$. Then $\\Omega$ is a bounded, balanced and convex domain with $\\rho_\\Omega=\\frac{1}{s^{\\Omega_2}_{\\Omega_1}(0)}\\rho_{\\Omega_2}$. By the decreasing property of the Kobayashi distance and Lemma \\ref{lnk}, we have\n
\\begin{align*}\n
B^k_\\Omega(0,\\arctanh[s^{\\Omega_1}_D(z)])&\\subset B^k_{g(\\Omega_1)}(0,\\arctanh[s^{\\Omega_1}_D(z)])=g(B^k_{\\Omega_1}(0,\\arctanh[s^{\\Omega_1}_D(z)]))\\\\\n
&=g(\\Omega_1(s^{\\Omega_1}_D(z)))\\subset g(f(D))=F(D).\n
\\end{align*}\n
On the other hand, by Lemma \\ref{lnk}, we have\n
\\begin{align*}\n
B^k_\\Omega(0,\\arctanh[s^{\\Omega_1}_D(z)])&=\\{w\\in \\Omega:\\rho _\\Omega(w)