diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbkav" "b/data_all_eng_slimpj/shuffled/split2/finalzzbkav" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbkav" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nCombinatorial optimisation (CO) is the task of searching for a solution from a finite collection of possible candidate solutions that maximises the objective function. Put differently, the task is to reduce the large finite collection of possible solutions to a single (or small number of) optimal solution(s). In some cases, CO problems require methods that either have a bias to the problem structure or can learn the problem structure during the optimisation process such that it can be exploited. This hidden problem structure is caused by variable correlations and variable decompositions (building-blocks\/modules \\cite{goldberg1989genetic,holland1992adaptation}) and is, generally, unknown. The hidden structure can contain a multitude of characteristics such as near-separable decomposition, hierarchy, overlapping linkage and, as this paper shows, deep structure.\n\nDeep learning (DL) is tasked with learning high-order representations (features) of a data-set that construct an output to satisfy the learning objective. The higher-order features are constructed from a sub-set of units from the layer below. DL performs this recursively, reducing the dimensionality of the visible space and generating an organised hierarchical structure. Deep Neural Networks (DNNs) are capable of learning complex high-order features from unlabelled data. Evolutionary search has been used in conjunction with DNNs, namely to decide on network topological features (number of hidden layers, nodes per a layer etc) \\cite{stanley2002evolving} and for evolving weights of a DNN \\cite{such2017deep} (neuro-evolution). However, DO is different; whereas previous methods use optimisers to improve the performance of a learning algorithm, DO is the reverse - it uses learning to improve the performance of an optimisation algorithm (`learning how to optimise, not optimising how to learn').\n\nIn CO it is a common intuition that solutions to small sub-problems can be combined together to solve larger sub-problems, and that this can proceed through multiple levels until the whole problem is solved. However, in practice, this is difficult to achieve (without expert domain knowledge) because the problem structure necessary for such problem decomposition is generally unknown. In learning, it is a common intuition that concepts can be learned by combining low-level features to discover higher-level features, and that this can proceed through multiple levels until the high-level concepts are found. DO brings these two together so that multi-level learning can discover multi-level problem structure automatically.\n\nModel-Building Optimisation Algorithms (MBOAs), also known as Estimation of Distribution Algorithm's (EDAs) \\cite{hauschild2011introduction} are black-box solvers, inspired by biological evolutionary processes, that solve CO problems by using machine learning techniques to learn and exploit the hidden problem structure. MBOAs work by learning correlations present in a sample of fit candidate solutions and construct a model that captures the multi-variate correlations which, if learnt successfully, represents the hidden problem structure. 
They then proceed to generate new candidate solutions by exploiting this learnt information enabling them to find solutions that are otherwise pathologically difficult to find. It is the ability of MBOAs to exploit the hidden structure that has brought their success to solving optimisation problems \\cite{aickelin2007estimation,santana2008protein,pelikan2005hierarchical,goldman2014parameter,thierens2010linkage}. Models used in MBOAs include Bayesian Networks \\cite{pelikan1999boa,pelikan2005hierarchical}, Dependency Matrices \\cite{hsu2015optimization}, and Linkage Trees \\cite{thierens2010linkage,goldman2014parameter}.\n\nThe model is used for two tasks in MBOAs: 1) To learn, unsupervised, correlations between variables as to form higher-orders of organisation reducing the dimensionality of the solution space from combinations of individual variables to combinations of module solutions (features). 2) To generate or modify new candidate solutions in a way that exploits this learnt information. Thus far, the models used in MBOAs are simplistic in comparison to the state of the art models used by the machine learning community. We believe DNNs contain all the necessary characteristics required for solving CO problems and fit naturally as the model role in MBOAs.\n\nHow the learnt information is exploited has a profound effect on the algorithm's performance. One approach is to directly sample from the model, i.e. generating complete solutions before applying a selection pressure that filters out which complete solutions are better than others \\cite{pelikan1999boa}. This effectively conserves correlated variables during future search. A second approach is to use model-informed variation where selection is applied directly after a partial change is made to the solution. This results in an adaptation of the variation operator from substituting single variables to substituting module solutions \\cite{watson2011transformations,cox2014solving,mills2014transforming,thierens2010linkage}. DO utilises a model-informed approach as this has been shown to solve optimisation problems that algorithms generating complete solutions from the model cannot \\cite{caldwell2017get}.\n\nThe application of neural networks to solving optimisation problems has an esteemed history \\cite{hopfield1985neural}. Learning heuristics to generalise over a set of CO instances \\cite{khalil2017learning,bello2016neural,zhang2000solving} and adapting the learning function to bias future search \\cite{hopfield1985neural,boyan2000learning} are popular approaches. DO is different as it uses a DNN to recursively adapt the variation applied. The use of an autoencoder in MBOAs has been attempted \\cite{probst2015denoising,churchill2014denoising}, however they limit the autoencoder to a single hidden layer and use the model to generate complete candidate solutions rather than using model-informed variation. DO is the first algorithm to use a deep multi-layered feed-forward neural network to solve CO problems within the framework of MBOAs.\n\nThe focus of this paper is to introduce the concept of DO to show how DNNs can be extended to the field of MBOAs. By making this connection we open the opportunity to use the advanced DL tools that have a well-developed conceptual understanding but are not currently applicable to CO and MBOAs. We use two theoretical MAXSAT problems to demonstrate the performance of DO. 
The Hierarchical Transformation Optimisation Problem (HTOP) contains controllable deep structure that provides clear evaluation of DO during the optimisation process. Additionally, HTOP provides clear evidence that DO is performing as theorised - specifically with regards to the rescaling of the variation operator and the essential requirement for using a layerwise technique. The Parity Modular Constraint optimisation problem (MC\\textsubscript{parity}) contains structure with greater than pairwise dependencies and is a simplistic example of how DO can solve problems that current state of the art MBOAs cannot. Finally, DO is used to solve benchmark instances of the Travelling Salesman Problem (TSP) to demonstrate the applicability to CO problems containing characteristics such as non-binary representations and in-feasible solutions. Comparison is made with three heuristic methods in which DOs performance is better.\n\n\\section{The Deep Optimisation Algorithm}\n\n\\begin{algorithm}\n\\caption{Deep Optimisation}\\label{Alg:DO}\n\\textbf{Initialise Model}\\;\n\\While{Optimising Model}{\n \\textbf{Reset Solution}\\;\n \\While{Optimising Solution}{\n Perform model-informed variation to solution\\;\n Calculate fitness change to solution\\;\n \\eIf{Deleterious fitness change}{\n Reject change to solution\\;\n }{\n Keep change to solution\\;\n }\n }\nUpdate the model using optimised solution as a training example.\n}\n\\end{algorithm}\n\nThe Deep Optimisation algorithm is presented in Algorithm \\ref{Alg:DO}. The algorithm consists of two optimisation cycles, a solution optimisation cycle and model optimisation cycle, inter-locked in a two-way relationship. The relationship between these two cycles can be understood as a meta-heuristic method where the solution optimiser (heuristic method) is influenced by the model (external control). The solution optimisation cycle is an iterative procedure that produces a locally optimal solution using model-informed variation. The model optimisation cycle is an iterative procedure that updates the connection weights of a neural network to satisfy the learning objective.\n\nDO uses the deep learning Autoencoder model (AE) due to its ability to learn higher-order features from unlabelled data. The encoder and decoder network are updated during training. Only the decoder network is used for generating reconstructions from the hidden layer. DO uses an online learning approach where the learning rate controls the ratio between the exploration and exploitation of the search space.\n\n\\subsection{Model Optimisation Cycle}\n\nThe AE uses an encoder ($W$) and decoder network ($W'$). The encoder network performs a transformation from the visible units, $X$, to hidden units $H_1$ using a non-linear activation function $f$ on the sum of the weighted inputs: $H_1(X)=f(W_1X + b_1)$, where $W$ and $b$ are the connection weights and bias respectively. The decoder network generates a reconstruction of the visible units, $X_r$, from the hidden units: $X_r(H_1)=f(W_1'H_1 + b_{r1})$ where $W'$ is the transpose of the encoder weights. The backpropagation algorithm is used to train the network using a mean squared error of the reconstruction $X_r$ and input $X$.\n\n\nA deep AE consist of multiple hidden layers with the encoder performing a transformation from $H_{n-1}$ to $H_n$ defined by $H_n(H_{n-1})=f(W_nH_{n-1} + b_n)$ and the decoder reconstructing $H_{r(n-1)}$ defined by $H_{r(n-1)}(H_{r(n)})=f(W_n'H_{r(n)}+b_{rn})$. 
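As an illustration of these mappings, a minimal sketch (Python/NumPy) of the single-hidden-layer case with tied weights is given below; the choice of $\\tanh$ for the activation $f$ and the layer sizes are illustrative assumptions, and the backpropagation update of the weights is omitted.

\\begin{verbatim}
import numpy as np

def init_layer(n_visible, n_hidden, rng):
    W = 0.01 * rng.standard_normal((n_hidden, n_visible))  # encoder weights W_1
    b = np.zeros(n_hidden)                                  # encoder bias b_1
    b_r = np.zeros(n_visible)                               # decoder bias b_r1
    return W, b, b_r

def encode(X, W, b):
    # H_1(X) = f(W_1 X + b_1)
    return np.tanh(W @ X + b)

def decode(H, W, b_r):
    # X_r(H_1) = f(W_1' H_1 + b_r1), with W' the transpose of W
    return np.tanh(W.T @ H + b_r)

def reconstruction_error(X, W, b, b_r):
    # Mean squared error between the reconstruction X_r and the input X.
    X_r = decode(encode(X, W, b), W, b_r)
    return np.mean((X - X_r) ** 2)
\\end{verbatim}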
DO utilises a layer-wise approach for both training and generating samples. Initially, the AE has a single hidden layer and is trained on solutions developed using the naive local search operator. The network then transitions such that the AE consists of two hidden layers, and variation is informed by the first layer whilst training updates all connection weights. Constructing a network with fewer hidden units than visible units creates a regularisation pressure to learn a compression of the training data. At each hidden level, an optimised model will contain a meaningful compression of the lower level relating to higher-orders of organisation. Our experiments show the significance of using a layer-wise approach in comparison to an end-to-end network approach. We employ the notation DO\\textsuperscript{n} to differentiate between the number of hidden layers used in the AE.

\\subsection{Solution Optimisation Cycle}

The solution optimisation cycle produces locally optimal solutions as guided by model-informed variation. Specifically, a candidate solution $X$ is initialised from a random uniform distribution. A random variation is applied to the candidate solution, forming $X'$; if the variation causes a beneficial fitness change, or no change to the fitness, the variation is kept, $X=X'$, otherwise the variation is rejected. This procedure is repeated until no further improvements are found. Repeatedly resetting the candidate solution ensures that the training data has sufficiently good coverage of the solution space.

A model-informed variation is generated by performing a bit-substitution to the hidden layer activations at layer $n$, forming $H_n'$. $H_n'$ is then decoded to the solution level using the trained decoder network, forming $X'$. The solution optimisation cycle continues as before: the fitness of $X'$ is determined and, if there is a fitness benefit, or no change to the fitness, when compared to $X$, then $H_n = H_n'$ and $X = X'$, otherwise the change is rejected. A decoded variation made to the hidden layer causes a change to the solution level that exploits the learnt problem structure. Concretely, module-substitutions are constructed by performing bit-substitutions to the hidden layer and decoding to the solution level. At a solution reset, it is important that $H_n$ is an accurate mapping for the current solution state. Therefore the hidden layer $H_n$ is reset using a random distribution $U[-1,1]$. It is then decoded to the solution level $X$ to construct an initial candidate solution. The output of the autoencoder is continuous between the activation values and therefore requires interpreting to a solution. For MAXSAT, DO uses a deterministic interpretation: if $X'[i] > 0$ then $X'[i] = 1$ else $X'[i] = -1$, where $i$ is the variable index. DO allows neutral changes to the solution. This permits some degree of drift in the latent space, allowing small effects on the decoded output to accumulate and make a meaningful variation to the solution.

As DO uses an unsupervised learning algorithm, the training data must contain information about the hidden problem structure in its natural form for it to learn a meaningful representation. This structure becomes apparent when applying a hill-climbing algorithm to a solution because it ensures that the solution contains combinations of variables that provide meaningful fitness contributions.
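To make the solution optimisation cycle concrete, the following minimal sketch (Python) implements the reset, model-informed variation and accept/reject rule described above, with the deterministic interpretation used for MAXSAT. The decoder, fitness function and layer size are placeholders, and a `bit-substitution' to the hidden layer is taken here to be a sign flip of a single hidden activation; the sketch is illustrative rather than a description of our exact implementation.

\\begin{verbatim}
import numpy as np

def interpret(x_cont):
    # Deterministic interpretation for MAXSAT: positive -> +1, otherwise -1.
    return np.where(x_cont > 0, 1.0, -1.0)

def solution_optimisation_cycle(decode, fitness, n_hidden, steps, rng):
    # Solution reset: hidden state drawn from U[-1,1] and decoded.
    H = rng.uniform(-1.0, 1.0, size=n_hidden)
    X = interpret(decode(H))
    f = fitness(X)
    for _ in range(steps):
        # Model-informed variation: substitute one hidden unit and decode.
        H_new = H.copy()
        i = rng.integers(n_hidden)
        H_new[i] = -H_new[i]
        X_new = interpret(decode(H_new))
        f_new = fitness(X_new)
        if f_new >= f:            # keep beneficial or neutral changes
            H, X, f = H_new, X_new, f_new
    return X, f
\\end{verbatim}

Each completed cycle returns a locally optimal solution that is then used as a training example for the model optimisation cycle.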
Initially, the AE model will have no meaningful knowledge of the problem structure and therefore a model-informed variation is equivalent to a naive search operator. After a transition, DO does not require knowledge of which operator has been initially used, it simply learns and applies its own learnt higher-order variation.\n\n\\subsection{Transition: Searching in Combinations of Features}\\label{searchfeature}\nAfter the network has learnt a good meaningful representation at hidden layer $n$ the following changes occur to DO, which we term a transition.\n\n\\begin{enumerate}\n \\item An additional hidden layer $H_{n+1}$ is added to the AE. Previous learnt weights are retained and training updates all weight ($W_1$ to $W_{n+1}$).\n \\item The hidden layer used for generating model-informed variation is changed from $H_{n-1}$ to $H_n$. Initialisation of a candidate solution is generated from $H_n$.\n\\end{enumerate}\n\nItem 1 is analogous to the approach introduced by Hinton and Salakhutdinov \\cite{hinton2006reducing} for training DNNs. The layer-wise procedure is important for learning a tractable representation at each hidden layer. The multi-layer network is trained on solutions developed using variation decoded from the layer below the current network depth. This is a significant requirement as DO learns from its own dynamics \\cite{watson2011optimization}. There may be many possible mappings in which the problem structure can be represented. Thus deeper layers are not only a representation of higher-order features present in the problem, but are reliant on how the higher-order features have been learnt and exploited, which, in-turn, is determined by the shallower layers. Therefore if the shallower layers do not contain a meaningful representation, then attempting to train or perform variation generated from deeper layers will be ineffective as we prove in our experiments. Item 2 is a layer-wise procedure for generating candidate solutions. The method of generating variation to the solution is the same at any hidden layer. Simply, only the hidden layer where bit-substitutions are performed and decoded from has been changed from $H_{n-1}$ to $H_n$ (to a deeper hidden layer).\n\nThis transition procedure is performed recursively until the maximum depth of the autoencoder is reached at which Item 1 is not performed. Like the learning rate, the timing of transition impacts the balance between exploration and exploitation of the search space. Once transitioned not only does the model provide information on how to adapt the applied variation but the solution optimisation cycle provides feedback to the model optimiser. Specifically, correctly learnt features will cause beneficial changes to a solution during optimisation, and therefore will be repeatedly accepted during the solution optimisation cycle and thus repeatedly presented to the model during training, reinforcing the learnt correlations. 
In contrast, incorrectly learnt features will cause deleterious fitness changes and therefore will not be accepted, and thus will not be present in the training data.

\\section{Performance Analysis of Deep Optimisation}
Two theoretical CO problems within the MAXSAT class are specifically designed to demonstrate how DO works and to show that DO can solve problems containing high-order dependencies that state-of-the-art MBOAs cannot.

\\subsection{How Deep Optimisation Works}\\label{sec:howDO}

\\begin{table}[]
 \\centering
\\resizebox{0.6\\linewidth}{!}{%

\\begin{tabular}{lll}
 \\toprule
 \\multicolumn{3}{c}{\\textbf{HTOP}} \\\\
 \\midrule
 $a$ $b$ $c$ $d$ & $t(a,b,c,d)$ & $f(a,b,c,d)$\\\\
 \\midrule
 1 0 0 0 & 0 0 & 1\\\\
 0 1 0 0 & 0 1 & 1\\\\
 0 0 1 0 & 1 0 & 1\\\\
 0 0 0 1 & 1 1 & 1\\\\
 Otherwise & - & 0\\\\
 \\bottomrule

\\end{tabular}%
}
\\caption{HTOP transformation $t$, and fitness function $f$.}
\\label{table:HTOP}
\\end{table}

The Hierarchical Transformation Optimisation Problem (HTOP) is formed within the MAXSAT class, where the objective is to find a solution that satisfies the maximum number of constraints imposed on the problem. HTOP is a consistent constraint problem and has four global optima. HTOP is specifically designed to provide clarity on how DO works, with specific regard to the process of rescaling the variation operator to higher-order features and the necessity for a DNN to use a layer-wise procedure. HTOP is inspired by Watson's Hierarchical If and only If (HIFF) problem \\cite{watson1998modeling} and uses the same recursive construction, with an adaptation to cause deep structure. The generalised hierarchical construction is summarised here. The solution state of the problem is
\\begin{math}
 x = \\{x_1,\\ldots,x_i,\\ldots,x_N\\},
\\end{math}
where $x_i \\in \\{0,1\\}$ and $N$ is the size of the problem. $p$ represents the number of levels in the hierarchy and $N_p$ represents the number of building-blocks of length $L_p$ at each hierarchical level. Each block containing $k$ variables is converted into a low-dimensional representation of length $L_p\/R$ by a transformation function $t$, where $R$ is the ratio of reduced dimensionality, creating a new higher-order string $V^{p+1}=\\{V^{p+1}_1,\\dots,V^{p+1}_{N_pL_p\/R}\\}$. In what follows $k=4$ and $R=2$, using the transformation function detailed in Table \\ref{table:HTOP}, where a solution to a module is a one-hot bit string.

The transformation function is derived from a machine learning benchmark named the 424 encoder problem \\cite{ackley1985learning}. Learning the structure is not trivial and, unlike for HIFF, cannot be well approximated by pairwise associations. The transformation is applied recursively, constructing deep constraints where at each level of the hierarchy a one-hot coding is required to be learnt. The null variable is used to ensure that a fitness benefit at the higher level can only be achieved by satisfying all lower level transformations beneath it.

HTOP is pathologically difficult for a bit-substitution hill climber. Satisfying a depth 2 constraint requires coordination of module transformations such that the transformed representations of the two modules below construct a one-hot solution, e.g. Module 1 transformation = 01 ($X=0100$) and Module 2 transformation = 00 ($X=1000$). A bit-substitution operation is unable to change a module solution without causing a deleterious fitness change.
Therefore a higher-order variation is required that performs substitutions of module solutions. This recursive and hierarchical construction requires the solver to successively rescale the search operator to higher-orders of organisation.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.32\\linewidth]{DO0Fit.pdf}\n\t\\includegraphics[width=0.32\\linewidth]{DO1Fit.pdf}\n\t\\includegraphics[width=0.32\\linewidth]{DO3Fit.pdf}\n\t\\caption{A deep representation allows for learning and exploiting deep structure to find a global optimum.}\n\t\\label{Fig:FitnessHTOP}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{Variation.pdf}\n\t\\caption{Example solution trajectories during solution optimisation using DO\\textsuperscript{3} before transition (left), after 1\\textsuperscript{st} transition (middle) and after 2\\textsuperscript{nd} transition (right). Variation is adapted from bit-substitutions to module solutions to combinations of module solutions as highlighted by circles. Module boundaries are represented by vertical dashed lines.}\n\t\\label{sFig:VariationHTOP}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.32\\linewidth]{bnw_32_fit.pdf}\n\\includegraphics[width=0.32\\linewidth]{bnw_64_fit.pdf}\n\\includegraphics[width=0.32\\linewidth]{bnw_128_fit.pdf}\n\\caption{Deep representations are consistently better performing than shallow ones, and incremental addition of layers is better than end-to-end learning.}\n\\label{Fig:shalvdeep}\n\\end{figure}\nA HTOP instance of size 32 (HTOP\\textsubscript{32}) is used to demonstrate how DO successfully learns, represents and exploits the deep structure. Further, we show the performance difference between DO\\textsuperscript{0} (a bit-substitution restart hill-climbing algorithm), DO\\textsuperscript{1} (a restart hill-climber using a shallow network) and DO\\textsuperscript{3} (a restart hill-climber using a deep network). The algorithms use 320 steps to optimise a solution and produce a total of 2000 solutions. Figure \\ref{Fig:FitnessHTOP} presents the solution fitness after each solution optimisation cycle.\n\nDO\\textsuperscript{0} is unable to find a globally optimal solution. It simply gets trapped at local optima as a bit-substitution is insufficient to improve a solution without a deleterious fitness change. DO\\textsuperscript{0} therefore has exponential time-complexity to produce a global optima. For DO\\textsuperscript{1}, the results show that a single hidden layer is sufficient for finding a global optima. The vertical dashed line illustrates the location of transition. After which DO\\textsuperscript{1} is able to perform module substitution without any deleterious fitness effects and thus search for a combination of module solutions to satisfy deeper constraints. However, note that HTOP\\textsubscript{32} contains 4 levels of hierarchy and thus a single layer network is not sufficient to fully represent the problem structure. As a result, DO\\textsuperscript{1} is unable to perform meta-module substitutions (a change of multiple module solutions simultaneously) and thus the algorithm is unable to converge to a global optima (reliably find a globally optimal solution). 
DO\\textsuperscript{3} shows it is able to find and converge to a globally optimal solution due to having a sufficiently deep network that can correctly learn and represent the full problem structure and thus able to perform meta-module substitutions.\n\nFigure \\ref{sFig:VariationHTOP} presents example solution trajectories during the solution optimisation cycle for DO\\textsuperscript{3} on HTOP\\textsubscript{32}. Initially, before transition, DO only performs a bit-substitution variation and is successful in finding solutions for each module (a one-hot solution per a module), but it is unable to change between module solutions and thus we observe no further changes. After the 1\\textsuperscript{st} transition, we observe the variation has been scaled up to allow variation of module solutions (as highlighted by the circles). Now DO can search for the correct combination of modules that satisfy depth 1 constraints without deleterious fitness changes. After the 2\\textsuperscript{nd} transition DO is capable of performing meta-module substitutions (module solutions of size 8) enabling it to easily satisfy depth 2 constraints. Hence, we observe, DO is able to learn and represent deep hidden structure and correctly exploit this information in a deep and recursive manner in-order to reduce the dimensionality of the search and adapt the variation operator to solve the problem.\n\nHTOP is a problem that contains deep structure, such that as the size of the problem increases so does the depth of the problem structure. Consequently, a shallow model is unable to solve large instances. Presented in Figure \\ref{Fig:shalvdeep} are results showing the fitness for the best solution found, in 10 repeats, by DO using a layer-wised approach: DO\\textsuperscript{0}, DO\\textsuperscript{1},DO\\textsuperscript{2}, DO\\textsuperscript{3} (standard method), DO using an end-to-end approach: DO(E2E)\\textsuperscript{2}, DO(E2E)\\textsuperscript{3}. HTOP\\textsubscript{32}, HTOP\\textsubscript{64} and HTOP\\textsubscript{128} instances are used which have a termination criteria of 800, 2500 and 15000 model optimisation steps respectively.\n\nIt is observed that a deeper network is required to solve problems with deeper constraints. Furthermore, the results show the significance of using the DNN in a layer-wise method instead of an end-to-end method. DO(E2E) works by constructing an end-to-end DNN at initialisation and model updates modify all hidden layer connections. A bit-substitution is used to produce the initial training data. At transition, the deepest hidden layer is used for generating a variation. The results clearly show that, whilst the DNN is sufficient to represent the problem structure (as proven by the layer-wise results), using an end-to-end model is not efficient at learning the problem structure so that it can be exploited effectively. Results show that as the DNN gets deeper, using an end-to-end approach produces successively inferior results. 
A layer-wise approach is therefore essential for DO to work and scale to large problems.\n\n\n\\subsection{Solving what MBOAs Cannot}\n\n\\begin{table}[]\n\\resizebox{\\linewidth}{!}{%\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{Module} & \\multicolumn{3}{c|}{Fitness} \\\\ \\hline\n1 & 2 & 3 & 4 & \\multicolumn{1}{l|}{Within Module} & \\multicolumn{1}{l|}{Between Module} & \\multicolumn{1}{l|}{Total Fitness} \\\\ \\hline\n1000 & 0100 & 1101 & 0000 & 3x1 = 3 & 1 + 1 + 1 = 3 & 3.0003 \\\\ \\hline\n1000 & 1000 & 1101 & 1101 & 4x1 = 4 & 2\\textsuperscript{2} + 2\\textsuperscript{2} = 8 & 4.0008 \\\\ \\hline\n1000 & 1000 & 1000 & 1101 & 4x1 = 4 & 3\\textsuperscript{2} + 1 =10 & 4.0010 \\\\ \\hline\n1000 & 1000 & 1000 & 1000 & 4x1 = 4 & 4\\textsuperscript{2} = 16 & 4.0016 \\\\ \\hline\n\\end{tabular}%\n}\n\\caption{Example solutions to MC\\textsubscript{parity}. A global optima is a solution with all modules containing the same parity answer.}\n\\label{tab:MCparity}\n\\end{table}\n\n\\begin{equation} \\label{eq1}\n\\begin{split}\nF & = \\sum_{i=0}^m\\begin{cases}1 & \\left(\\sum_{j=0}^{n}S_j^m\\right)\\mod 2 = 1\\\\0 & otherwise\\end{cases} \\\\\n & + p\\times\\sum_{k=0}^{n\/2}\\left(\\sum_{i=0}^m\\begin{cases}1 & S^m == Type_k\\\\0 & otherwise \\end{cases}\\right)^2\n\\end{split}\n\\end{equation}\n\nThe Parity Modular Constraint optimisation problem (MC\\textsubscript{parity}) is an adaptation of the Modular Constraint Problem \\cite{watson2011optimization} where module solutions are odd parity bit-strings. A problem of size $N$ is divided into $m$ modules each of size $n$. There are $n\/2$ sub-solutions per a module and each of the sub-solutions is assigned a type. A fitness point is awarded, for a module, if a module contains an odd parity solution, otherwise no point. A global solution is one where all modules in the problem contain the same parity solution ($n\/2$ global optima). The between module fitness is the summation of the squared count of each module solution type present in the whole solution. The fitness function is provided in Equation \\ref{eq1} and examples of a solutions fitness is presented in Table \\ref{tab:MCparity}. For the scaling analysis performed here we use $n=4$.\n\n\n\nAlthough this problem supports many solutions within each module the smaller fitness benefits of coordinating modules are more rare.\nBy ensuring the module fitness is much more beneficial than the between module fitness (p $\\ll$ 1) requires the algorithm to perform module substitutions of odd parity to follow the fitness gradient to coordinate the module solutions without deleterious fitness effects. If an algorithm cannot learn and exploit the high-order structure of the parity modules then finding a global optima will require exponential time with respect to the number of modules in the problem. Conversely, an algorithm that can will easily follow the fitness gradient to correctly coordinate the module solutions and thus scale polynomial with respect to the number of modules in the problem.\n\nLeading MBOAs such as LTGA, P3 and DSMGA use a dependency structure matrix (DSM) and the mutual information metric to capture variable dependencies. They are successful in capturing module structures containing more than 2 variables however it is hypothesised they are unable to correctly capture structure that contains greater than pairwise dependencies between variables. A simple example being parity. 
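For reference, a minimal sketch (Python) of the MC\\textsubscript{parity} fitness in Equation \\ref{eq1} is given below. It treats every odd-parity module pattern as its own type and uses $p=10^{-4}$; both choices are our reading of the definition, consistent with the worked examples in Table \\ref{tab:MCparity}, rather than part of the formal specification.

\\begin{verbatim}
from collections import Counter

def mc_parity_fitness(x, n=4, p=1e-4):
    # x is a sequence of 0/1 values whose length N is a multiple of n.
    modules = [tuple(x[i:i + n]) for i in range(0, len(x), n)]
    # Within-module fitness: one point per module with odd parity.
    odd = [m for m in modules if sum(m) % 2 == 1]
    within = len(odd)
    # Between-module fitness: squared count of each odd-parity pattern present.
    between = sum(c ** 2 for c in Counter(odd).values())
    return within + p * between

# First and last example rows from the table above (N = 16, n = 4):
print(mc_parity_fitness([1,0,0,0, 0,1,0,0, 1,1,0,1, 0,0,0,0]))  # 3.0003
print(mc_parity_fitness([1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0]))  # 4.0016
\\end{verbatim}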
A neural network is capable of learning and capturing higher-order dependencies between variables.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{ModuleParity.pdf}\n\\caption{DO has polynomial time complexity and MBOAs has exponential time complexity when solving a problem containing high-order dependencies}\n\\label{Fig:MBOAres}\n\\end{figure}\n\n\nFor LTGA and DO the parameters are manually adjusted such that all 50 runs produce the global optimum. The results are presented in Figure \\ref{Fig:MBOAres}. The data points present the average number of fitness evaluations required to find the global optimum for the 50 independent runs for N $<$ 300 and 10 for N=300 and N=400. For LTGA the population is adjusted manually until a change of 10\\% would not cause a failure. For DO, from smallest to largest N, the learning rate varied from 0.05 to 0.0015 and transition from 60 to 1000 solutions. The network topology included up to three hidden layers and used a compression of $\\sim 10\\%$ at each layer. P3 is parameterless and required no adjustment. Two implementations are included for LTGA, \\cite{thierens2010linkage} and \\cite{goldman2014parameter}. The differences between both LTGA implementations and P3 are interesting and it is hypothesised to be caused by the way in which solutions can be prioritised according to their fitness for constructing the model. More significantly, LTGA and P3 scale exponentially whereas DO scales polynomial ($\\sim N\\textsuperscript{2}$).\nTo verify that the deep structure of DO is necessary a shallow version of DO is included in the results: DO\\textsuperscript{1} (limited to a single hidden layer). The scaling appears to be exponential where results could not be achieved for problem instances greater than 108 as the tuning of parameters became extremely sensitive. Whereas with HTOP we can see clearly what the deep structure is that needs to be learnt, in the MC\\textsubscript{parity} problem we can see that high-order dependencies defeat other algorithms but are not a problem for DO.\n\n\\section{Solving the Travelling Salesman Problem}\\label{sec:TSP}\n\nIn this section we apply DO to solve the travelling salesman problem (TSP). A solution to a TSP is a route that visits all locations once and returns to the starting location. The optimisation problem is to minimise the total travelling cost. We use 6 TSP instances from the TSP library \\cite{reinhelt2014tsplib}: 3 symmetric and 3 asymmetric, and compare with three other heuristic methods. Our aim here is to provide an example of how DO can be successfully used to solve CO problems containing characteristics such as non-binary representations and in-feasible solutions. The results that follow do not show DO outperforming state of the art heuristic methods (these problems are not particularly difficult for Lin-Kernighan Helsgaun algorithm \\cite{helsgaun2000effective}) but they do show that DO can find and exploit deep structure that can be used to solve TSP problems better than shallow methods.\n\n\n\n\n\\begin{table*}[t]\n\\centering\n\\resizebox{0.8\\textwidth}{!}{%\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Problem \\\\ Instance\\end{tabular}} & \\multirow{2}{*}{Type} & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Number \\\\ Locations\\end{tabular}} & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Performance\\\\DO (\\%)\\end{tabular}} & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Avg trials \\\\ req. 
for DO\\end{tabular}} & \\multicolumn{3}{l|}{\\begin{tabular}[c]{@{}c@{}}Performance using\\\\same trials as DO (\\%)\\end{tabular}} & \\multicolumn{3}{c|}{\\begin{tabular}[c]{@{}c@{}}Performance using\\\\10000 trials (\\%)\\end{tabular}} \\\\ \\cline{6-11}\n & & & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{Swap} & \\multicolumn{1}{l|}{Insert} & \\multicolumn{1}{l|}{2-Opt} & Swap & Insert & 2-Opt \\\\ \\hline\n fr26 & Sym & 26 & 0 & 30 & 4.6 & 1.2 & 0.2 & 0 & 0 & 0 \\\\ \\hline\n brazil58 & Sym & 58 & 0 & 224 & 17.0 & 4.0 & 0.1 & 0.9 & 0 & 0 \\\\ \\hline\n st70 & Sym & 70 & 0 & 806 & 25.8 & 6.1 & 0.4 & 20.9 & 3.9 & 0.02 \\\\ \\hline\n ftv35 & Asym & 36 & 0 & 112 & 1.6 & 0.3 & 2.7 & 0.8 & 0 & 1.4 \\\\ \\hline\n p43 & Asym & 43 & 0 & 393 & 0.3 & 0.1 & 0 & 0.1 & 0.02 & 0 \\\\ \\hline\n ft70 & Asym & 70 & 0 & 1776 & 17.1 & 4.4 & 26.7 & 7.2 & 2.2 & 23.1 \\\\ \\hline\n\\end{tabular}%\n}\n\\caption{DO exploits useful structure in TSP problems to find the global optimum (column 4-5) that is not found by a heuristic method within the same number of trials (columns 6-8) nor found easily within 10000 trials (columns 9-11). Values report percentage difference from the global optimum.}\n\\label{tab:TSPres}\n\\end{table*}\n\n\n\\subsection{TSP Representation}\n\nTo apply DO the TSP solution is transformed into a binary representation using a connection matrix $C$ of size N\\textsuperscript{2} where $C\\textsubscript{ij}$ represents an edge. $C\\textsubscript{ij} = 1$ signifies that $j$ is the next location after $i$ (the remaining entries are 0). There are a total of N connections, where each location is only connected to one other location (not itself) to construct a valid tour. The connection matrix is sparse and we found that normalising the data improved training. This is a non-compact representation but it is sufficient for demonstrating DO's ability for finding and exploiting deep structure.\n\nThe output generated by DO is continuous and is interpreted to construct a valid TSP solution. The interpretation is detailed in Algorithm \\ref{Alg:intTSP}. There are two stochastic elements included in the routine. The first element is the starting location from which the tour is then constructed. Choosing a random starting position removes the bias associated with starting at the problem defined starting location. The second element is selecting the next location in the tour. The autoencoder is trained such that positive numbers are connections in a tour. A negative output indicates no connection is made between locations. However, for the case when all locations with a positive connection have been used in the tour then, to ensure a feasible solution, the next location is randomly selected from the set of possible locations available. This ensures that learnt sub-tours (building blocks) are correctly conserved during future search and allows the location of the sub-tour to vary within the complete tour (searching in combinations of learnt building blocks). 
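For clarity, a minimal sketch (Python) of the connection-matrix encoding described at the start of this subsection is given below; it is illustrative only, and the normalisation applied before training is omitted.

\\begin{verbatim}
import numpy as np

def tour_to_matrix(tour):
    # tour is a permutation of 0..N-1; C[i, j] = 1 means j follows i.
    N = len(tour)
    C = np.zeros((N, N))
    for k in range(N):
        C[tour[k], tour[(k + 1) % N]] = 1.0  # close the cycle at the end
    return C

# A 5-location tour gives a sparse matrix with exactly one 1 per row/column.
print(tour_to_matrix([0, 2, 4, 1, 3]))
\\end{verbatim}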
The construction method resembles the method used by Hopfield and Tank \\cite{hopfield1985neural}, but here we use a max function rather than a probabilistic interpretation.

\\begin{algorithm}
\\caption{Interpretation for TSP Solution}\\label{Alg:intTSP}
Set Tour[0] = select, uniformly at random, a starting position\\;
Set ValidLocs = all possible TSP Locations\\;
Remove Tour[0] from ValidLocs\\;
ConVec = Vector of size N\\;
i = 1\\;
\\Repeat{ValidLocs empty}{
 ConVec = connection vector generated from Autoencoder for location Tour[i-1] \\;
 NextLoc = Where(max(ConVec[ValidLocs])) \\;
 \\eIf{ ConVec[NextLoc] $>$ 0}
	{
		Tour[i] = NextLoc\\;
	}{
		Tour[i] = select, uniformly at random, from ValidLocs\\;
	}
	remove Tour[i] from ValidLocs\\;
 i++\\;}
Cycle tour until Tour[0] = defined start location\\;
Tour[i] = defined start location\\;
\\end{algorithm}

\\subsection{Results}

The performance of DO is compared with three local search heuristics: location swap, location insert and 2-opt. The location swap heuristic consists of selecting, at random, two positions in a TSP tour and swapping the locations. The location insert heuristic selects a position in the tour at random, removes the location from that position and inserts it at another random position. The 2-opt heuristic \\cite{croes1958method} involves selecting two edge connections between locations, swapping the connections and reversing the sub-tour between the connections. For our experiments, DO used the location insert heuristic before transition as it produces good training data for both symmetric and asymmetric TSP cases. When performing search in the hidden layer, local search is also applied.

The results, averaged over 10 runs, are provided in Table \\ref{tab:TSPres}. DO solves all TSP instances each time, and the number of trials (training examples) is reported in column 5. Columns 6-8 report the percentage difference between the global optimum and the best found solution for a restart hill climber using a heuristic within the same number of trials used by DO to find the global optimum. This demonstrates that DO is exploiting structure, as it is able to find the global optimum faster. Note that DO used the insert heuristic for all TSP instances; therefore 2-opt can perform better on some small cases, as observed. Columns 9-11 report the percentage difference when the heuristic search is allowed 10000 trials. These results further confirm that DO is exploiting structure reliably as, especially for the larger instances, the global optimum is not easily found.

\\section{Conclusion}
DO is the first algorithm to use a DNN to repeatedly redefine the variation operator used to solve CO problems. The experiments show that there exist CO problems that DO can solve but that state-of-the-art MBOAs cannot. They also show that there exist CO problems that a DNN can solve but a shallow neural network cannot, and that a layer-wise method can solve but an end-to-end method cannot. Further, the results show that DO can be successfully applied to CO problems containing characteristics such as non-binary representations and infeasible solutions. This paper thus expands the applicability of DNNs to CO problems.

DO provides the opportunity to use the advanced deep learning tools that have been utilised throughout the community for other applications of deep learning and are not available to MBOAs, tools such as dropout, regularisation and different network architectures.
The application of these tools has been shown to improve generalisation in conventional DNN tasks and should therefore also improve the ability to learn problem structure, and thus DO's ability to solve CO problems. Their application within DO remains to be investigated. Whether other network architectures (convolutional neural networks, deep belief networks, generative adversarial networks) offer capabilities that are useful for solving some kinds of CO problems, e.g. problems with transposable sub-solutions, is also of interest.

\\bibliographystyle{aaai}

\\section{Introduction}
\\setcounter{equation}{0}
Experimental results obtained a few years ago for ferrofluids \\cite{Litt} and recently for $\\gamma $-Fe$_{2}$O$_{3}$ nanoparticles \\cite{Sappey} indicate that for dilute samples (weak interparticle interactions), the temperature $T_{\\max }$ at the maximum of the zero-field-cooled magnetization $M_{ZFC}$ first increases with increasing field, attains a maximum and then decreases. More recently, additional experiments performed on the $\\gamma $-Fe$_{2}$O$_{3}$ particles dispersed in a polymer \\cite{Ezzir} confirm the previous result for dilute samples and show that, on the contrary, for concentrated samples (strong interparticle interactions) $T_{\\max }$ is a monotonically decreasing function of the magnetic field \\cite{Ezzir} (see Fig. \\ref{fig1}).
\\begin{figure}[t]
\\unitlength1cm
\\begin{picture}(15,12)
\\centerline{\\epsfig{file=fig1.eps,angle=0,width=10cm}}
\\end{picture}
\\vspace{-5cm}
\\caption{ \\label{fig1}
The temperature $T_{\\max }$ at the maximum of the ZFC magnetization plotted against the applied field for different sample concentrations. The volume fraction of the particles in the sample 4D ($\\gamma-Fe_{2}O_{3}$, mean diameter $\\sim 8.32$ nm) determined from the density measurements is $C_{v}=0.0075$, $0.012$, $0.047$.}
\\end{figure}
The behaviour observed for dilute samples (isolated nanoparticles) is rather unusual since one intuitively expects the applied field to lower the energy barrier, and thus the blocking temperature of all particles, and thereby the temperature $T_{\\max }$ \\footnote{
The temperature $T_{\\max }$ at the maximum of the ZFC magnetization is assumed to be roughly given by the average of the blocking temperature $T_{B}$ of all particles in the (dilute) assembly.}. Resonant tunneling is one of the arguments which has been advanced \\cite{Friedman} as a mechanism responsible for an increase of the relaxation rate around zero field. More recently, $M_{ZFC}$ measurements \\cite{Sappey} have shown an anomaly in the temperature range 40-60 K, which is probably too high for quantum effects to manifest themselves. Yet another explanation \\cite{Sappey} of the $T_{\\max }$ effect was proposed using arguments based on the particle anisotropy-barrier distribution. It was suggested that for randomly oriented particles of a {\\it uniform size}, and for small values of the field, the field depresses the energy barriers, and thereby enlarges the barrier distribution, so lowering the blocking temperature. It was also suggested that the increase of the barrier-distribution width overcompensates for the decrease of the energy barrier in a single particle.
However, the discussion of the\nrelaxation time was based on the original Arrhenius approach of N\\'{e}el.\nHere the escape rate, that is the inverse relaxation time, is modelled as an\nattempt frequency multiplied by the Boltzmann factor of the barrier height,\nwith the inverse of the attempt frequency being of the order of 10$^{-10}$\ns, thus precluding any discussion of the effect of the applied field and the\ndamping parameter on the prefactor, and so their influence on the blocking\ntemperature.\n\nIt is the purpose of this paper to calculate the blocking temperature using\nthe Kramers theory of the escape of particles over potential barriers as\nadapted to magnetic relaxation by Brown \\cite{Brown}. The advantage of using\nthis theory is that it allows one to precisely calculate the prefactor as a\nfunction of the applied field and the damping parameter (provided that\ninteractions between particles are neglected). Thus the behaviour of the\nblocking temperature as a function of these parameters may be determined. It\nappears from the results that even such a precise calculation is unable to\nexplain the maximum in the blocking temperature $T_{B}$ as a function of the\napplied field. Thus an explanation of this effect is not possible using the\nN\\'{e}el-Brown model for a {\\it single} particle as that model invariably\npredicts a monotonic decrease of $T_{B}$ as a function of the applied field.\n\nIn view of this null result we demonstrate that the $T_{\\max }$ effect may\nbe explained by considering an assembly of non-interacting particles having\na volume distribution. This is accomplished by using Gittleman's \\cite\n{Gittleman} model which consists of writing the zero-field cooled\nmagnetization of the assembly as a sum of two contributions, one from the\nblocked magnetic moments and the other from the superparamagnetic ones, with\nthe crucial assumption that the superparamagnetic contribution is given by a\nnon-linear function (Langevin function) of the applied magnetic field and\ntemperature. If this is done even the simple N\\'{e}el-Brown expression for\nthe relaxation time leads to a maximum in $T_{\\max }$ for a wide range of\nvalues of the anisotropy constant $K.$ It was claimed in \\cite{Luc Thomas}\nthat a simple volume distribution, together with a N\\'{e}el expression for\nthe relaxation time, leads to the same result for FeC particles, although\nthe author used the Langevin function for the superparamagnetic contribution\nto magnetization.\n\nTherefore, the particular expression for the single-particle relaxation time\nwhich is used appears not to be of a crucial importance in the context of\nthe calculation of the blocking temperature.\n\nIn the next section we briefly review Kramers' theory of the escape rate.\n\\section{Kramers' escape rate theory}\n\nThe simple Arrhenius calculation of reaction rates for an assembly of {\\it %\nmechanical particles} undergoing translational Brownian motion, in the\npresence of a potential barrier, was much improved upon by Kramers \\cite\n{Kramers}. He showed, by using the theory of the Brownian motion, how the\nprefactor of the reaction rate, as a function of the damping parameter and\nthe shape of the potential well, could be calculated from the underlying\nprobability-density diffusion equation in phase space, which for Brownian\nmotion is the Fokker-Planck equation (FPE). 
He obtained (by linearizing the\nFPE about the potential barrier) explicit results for the escape rate for\nintermediate-to-high values of the damping parameter and also for very small\nvalues of that parameter. Subsequently, a number of authors (\\cite\n{Meshkov-Melnikov}, \\cite{BHL}) showed how this approach could be extended\nto give formulae for the reaction rate which are valid for all values of the\ndamping parameter. These calculations have been reviewed by H\\\"{a}nggi et al.%\n\\cite{Hanggi et al.}.\n\nThe above reaction-rate calculations pertain to an assembly of mechanical\nparticles of mass $m$ moving along the $x$-axis so that the Hamiltonian of a\ntypical particle is \n\\begin{equation}\nH=\\frac{p^{2}}{2m}+V(q), \\label{Ham}\n\\end{equation}\nwhere $q$ and $p$ are the position and momentum of a particle; and $V(q)$ is\nthe potential in which the assembly of particles resides, the interaction of\nan individual particle with its surroundings is then modelled by the\nLangevin equation \n\\begin{equation}\n\\dot{p}+\\varsigma p+\\frac{dV}{dq}=\\lambda (t) \\label{Langevin}\n\\end{equation}\nwhere $\\lambda (t)$ is the Gaussian white noise, and $\\varsigma $ is the\nviscous drag coefficient arising from the surroundings of the particle.\n\nThe Kramers theory was first adapted to the thermal rotational motion of the\nmagnetic moment (where the Hamiltonian, unlike that of Eq.(\\ref{Ham}), is\neffectively the Gibbs free energy) by Brown \\cite{Brown} in order to improve\nupon N\\'{e}el's concept of the superparamagnetic relaxation process (which\nimplicitly assumes discrete orientations of the magnetic moment and which\ndoes not take into account the gyromagnetic nature of the system). Brown in\nhis first explicit calculation \\cite{Brown} of the escape rate confined\nhimself to axially symmetric (functions of the latitude only) potentials of\nthe magneto-crystalline anisotropy so that the calculation of the relaxation\nrate is governed (for potential-barrier height significantly greater than $%\nk_{B}T$) by the smallest non-vanishing eigenvalue of a one-dimensional\nFokker-Planck equation. Thus the rate obtained is valid for all values of\nthe damping parameter. As a consequence of this very particular result, the\nanalogy with the Kramers theory for mechanical particles only becomes fully\napparent when an attempt is made to treat non axially symmetric potentials\nof the magneto-crystalline anisotropy which are functions of both the\nlatitude and the longitude. In this context, Brown \\cite{Brown} succeeded in\ngiving a formula for the escape rate for magnetic moments of single-domain\nparticles, in the intermediate-to-high (IHD) damping limit, which is the\nanalogue of the Kramers IHD formula for mechanical particles. In his second\n1979 calculation \\cite{Brown} Brown only considered this case. Later Klik\nand Gunther \\cite{Klik and Gunther}, by using the theory of first-passage\ntimes, obtained a formula for the escape rate which is the analogue of the\nKramers result for very low damping. All these (asymptotic) formulae which\napply to a general non-axially symmetric potential, were calculated\nexplicitly for the case of a uniform magnetic field applied at an arbitrary\nangle to the anisotropy axis of the particle by Geoghegan et al. 
\\cite\n{Geoghegan et al} and compare favorably with the exact reaction rate given\nby the smallest non-vanishing eigenvalue of the FPE \\cite{Coffey1}%\n,\\thinspace \\cite{Coffey2}, \\cite{Coffey3} and with experiments on the\nrelaxation time of single-domain particles \\cite{Wensdorfer et al.}.\n\nIn accordance with the objective stated in the introduction, we shall now\nuse these formulae (as specialized to a uniform field applied at an\narbitrary angle by Geoghegan et al. \\cite{Geoghegan et al} and Coffey et al. \n\\cite{Coffey1},\\thinspace \\cite{Coffey2}, \\cite{Coffey3}) for the\ncalculation of the blocking temperature of a single particle.\n\nA valuable result following from these calculations will be an explicit\ndemonstration of the breakdown of the non-axially symmetric asymptotic\nformulae at very small departures from axial symmetry, manifesting itself in\nthe form of a spurious increase in $T_{\\max }$. Here interpolation formulae\njoining the axially symmetric and non-axially symmetric asymptotes\n(analogous to the one that joins the oscillatory and non-oscillatory\nsolutions of the Schr\\\"{o}dinger equation in the WKBJ method \\cite{Fermi})\nmust be used in order to reproduce the behaviour of the exact reaction rate\ngiven by the smallest non-vanishing eigenvalue of the FPE, which always\npredicts a monotonic decrease of $T_{\\max }$, as has been demonstrated by\nGaranin et al. \\cite{Garanin et al.} in the case of a transverse field.\n\n\\section{Calculation of the blocking temperature}\n\\label{Calculation of the blocking temperature}\n\nFollowing the work of Coffey et al. cited above, the effect of the applied\nmagnetic field on the blocking temperature is studied by extracting $T_{B}$\nfrom the analytic (asymptotic) expressions for the relaxation-time (inverse\nof the Kramers escape rate) \\cite{Coffey1},\\thinspace \\cite{Coffey2}, \\cite\n{Coffey3}, which allow one to evaluate the prefactor as a function of the\napplied field and the dimensionless damping parameter $\\eta _{r}$ in the\nGilbert-Landau-Lifshitz (GLL) equation. For single-domain particles the\nequation of motion of the unit vector describing the magnetization inside\nthe particle is regarded as the Langevin equation of the system (detailed in \n\\cite{Coffey4}).\n\nOur discussion of the N\\'{e}el-Brown model as applied to the problem at hand\nproceeds as follows~:\n\nIn an assembly of ferromagnetic particles with uniaxial anisotropy excluding\ndipole-dipole interactions, the ratio of the potential energy $vU$ to the\nthermal energy $k_{B}T$ of a particle is described by the bistable form \n\\begin{equation}\n\\beta U=-\\alpha ({\\bf e\\cdot n)}^{2}-\\xi ({\\bf e\\cdot h)} \\label{1}\n\\end{equation}\nwhere $\\beta =v\/(k_{B}T)$, $v$ is the volume of the single-domain particle; $%\n\\alpha =\\beta K\\gg 1$ is the anisotropy (reduced) barrier height parameter; $%\nK$ is the anisotropy constant; $\\xi =\\beta M_{s}H$ is the external field\nparameter; ${\\bf e,n,}$ and ${\\bf h}$ ($h\\equiv \\xi \/2\\alpha $) are unit\nvectors in the direction of the magnetization ${\\bf M}$, the easy axis, and\nthe magnetic field ${\\bf H}$, respectively. $\\theta $ and $\\psi $ denote the\nangles between ${\\bf n}$ and ${\\bf e}$ and between ${\\bf n}$ and ${\\bf h,}$\nrespectively. 
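As a simple check on Eq.(\\ref{1}), consider the axially symmetric case ${\\bf h}\\parallel {\\bf n}$ ($\\psi =0$), for which $\\beta U=-\\alpha \\cos ^{2}\\theta -2\\alpha h\\cos \\theta $. The stationary points are the minima at $\\theta =0,\\pi $ and the maximum at $\\cos \\theta _{m}=-h$, so that the two reduced barrier heights are $\\alpha (1+h)^{2}$ and $\\alpha (1-h)^{2}$ (for escape from the deeper and the shallower well, respectively); these are precisely the exponents appearing in the axially symmetric result (\\ref{4}) below.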
The N\\'{e}el time, which is the time required for the magnetic\nmoment to surmount the potential barrier given by (\\ref{1}), is\nasymptotically related to the smallest nonvanishing eigenvalue $\\lambda _{1}$\n(the Kramers' escape rate) of the Fokker-Planck equation, by means of the\nexpression $\\tau \\approx 2\\tau _{N}\/_{\\lambda _{1}}\\cite{Brown}$, where the\ndiffusion time is\n\\begin{equation}\n\\tau _{N}\\simeq \\frac{\\beta M_{s}}{2\\gamma }\\left[ \\frac{1}{\\eta _{r}}+\\eta\n_{r}\\right] , \\label{2}\n\\end{equation}\n$\\gamma $ is the gyromagnetic ratio, $M_{s}$ the intrinsic magnetization, $%\n\\eta $ the phenomenological damping constant, and $\\eta _{r}$ the GLL\ndamping parameter $\\eta _{r}=\\eta \\gamma M_{s}.$\n\nAs indicated above, Brown \\cite{Brown} at first derived a formula for $%\n\\lambda _{1}$, for an arbitrary {\\it axially symmetric} bistable potential\nhaving minima at $\\theta =(0,\\pi )$ separated by a maximum at $\\theta _{m},$\nwhich when applied to Eq.(\\ref{1}) for ${\\bf h\\Vert n,}$ i.e. a magnetic\nfield parallel to the easy axis, leads to the form given by Aharoni \\cite\n{Aharoni}, $\\theta _{m}=\\cos ^{-1}(-h),$%\n\\begin{equation}\n\\lambda _{1}\\approx \\frac{2}{\\sqrt{\\pi }}\\alpha ^{3\/2}(1-h^{2})\\times \\left[\n(1+h)\\;e^{-\\alpha (1+h)^{2}}+(1-h)\\;e^{-\\alpha (1-h)^{2}}\\right] \\label{4}\n\\end{equation}\nwhere $0\\leq h\\leq 1,$ $h=1$ being the critical value at which the bistable\nnature of the potential disappears.\n\nIn order to describe the non-axially symmetric asymptotic behaviour, let us\ndenote by $\\beta \\Delta U_{-}$ the smaller reduced barrier height of the two\nconstituting escape from the left or the right of a bistable potential. Then\nfor very low damping, i.e. for $\\eta _{r}\\times \\beta \\Delta U_{-}\\ll 1$\n(with of course the reduced barrier height $\\beta \\Delta U_{-}\\gg 1$,\ndepending on the size of the nanoparticle studied) we have \\cite{Coffey3}, \n\\cite{Brown}, \\cite{Coffey4} the following asymptotic expression for the\nN\\'{e}el relaxation time \n\\begin{eqnarray}\n\\tau _{VLD}^{-1} &\\approx &\\frac{\\lambda }{2\\tau _{N}} \\label{LD} \\\\\n&\\approx &\\frac{\\eta _{r}}{2\\pi }\\left\\{ \\omega _{1}\\times \\beta\n(U_{0}-U_{1})e^{-\\beta (U_{0}-U_{1})}+\\omega _{2}\\times \\beta\n(U_{0}-U_{2})e^{-\\beta (U_{0}-U_{2})}\\right\\} \\nonumber\n\\end{eqnarray}\nFor the intermediate-to-high damping, where $\\eta _{r}\\times \\beta \\Delta\nU_{-}>1$ (again with the reduced barrier height $\\beta \\Delta U_{-}$ much\ngreater than unity) we have \\cite{Coffey3} the asymptotic expression \n\\begin{equation}\n\\tau _{IHD}^{-1}\\approx \\frac{\\Omega _{0}}{2\\pi \\omega _{0}}\\left\\{ \\omega\n_{1}e^{-\\beta (U_{0}-U_{1})}+\\omega _{2}e^{-\\beta (U_{0}-U_{2})}\\right\\} ,\n\\label{IHD}\n\\end{equation}\nwhere \n\\begin{eqnarray*}\n\\omega _{1}^{2} &=&\\frac{\\gamma ^{2}}{M_{s}^{2}}c_{1}^{(1)}c_{2}^{(1)},\\quad\n\\omega _{2}^{2}=\\frac{\\gamma ^{2}}{M_{s}^{2}}c_{1}^{(2)}c_{2}^{(2)} \\\\\n\\omega _{0}^{2} &=&-\\frac{\\gamma ^{2}}{M_{s}^{2}}c_{1}^{(0)}c_{2}^{(0)},\\quad\n\\\\\n\\Omega _{0} &=&\\frac{\\eta _{r}g^{\\prime }}{2}\\left[ -c_{1}^{(0)}-c_{2}^{(0)}+%\n\\sqrt{(c_{2}^{(0)}-c_{1}^{(0)})^{2}-\\frac{4}{\\eta _{r}^{2}}%\nc_{1}^{(0)}c_{2}^{(0)}}\\right] \\\\\ng^{\\prime } &=&\\frac{\\gamma }{(1+\\eta _{r}^{2})M_{s}}\n\\end{eqnarray*}\nHere $\\omega _{1},\\omega _{2}$ and $\\omega _{0}$ are respectively the well\nand saddle angular frequencies associated with the bistable potential, $%\n\\Omega _{0}$ is the damped 
saddle angular frequency and the $c_{j}^{(i)}$\nare the coefficients of the truncated (at the second order in the direction\ncosines) Taylor series expansion of the crystalline anisotropy and external\nfield potential at the wells of the bistable potential denoted by $1$ and $2$\nand at the saddle point denoted by $0$. A full discussion of the application\nof these general formulae to the particular potential, which involves the\nnumerical solution of a quartic equation in order to determine the $%\nc_{j}^{(i)}$ with the exception of the particular field angle $\\psi =\\frac{%\n\\pi }{4}$ or $\\frac{\\pi }{2}$, in Eq.(\\ref{1}) is given in Refs. \\cite\n{Geoghegan et al}, \\cite{Kennedy}.\n\nThe blocking temperature $T_{B}$ is defined as the temperature at which $%\n\\tau =\\tau _{m}$, $\\tau _{m}$ being the measuring time. Therefore, using\nEqs.(\\ref{2}), (\\ref{4}) and (\\ref{LD}) (or (\\ref{2}), (\\ref{4}) and (\\ref\n{IHD})) and solving the equation $\\tau =\\tau _{m}$ for the blocking\ntemperature $T_{B}$, we obtain the variation of $T_{B}$ as a function of the\napplied field, for an arbitrary angle $\\psi $ between the easy axis and the\napplied magnetic field.\n\nIn particular, for very small values of $\\psi $ we have used Eq.(\\ref{4}),\nas the problem then becomes almost axially symmetric and the arguments\nleading to Eqs.(\\ref{LD}) and (\\ref{IHD}) fail \\cite{Brown}, \\cite{Geoghegan\net al}, \\cite{Coffey4}, and appropriate connection formulae must be used so\nthat they may attain the axially symmetric limit. We then sum over $\\psi ,$\nas the easy axes of the particles in the assembly are assumed to be randomly\ndistributed. In Fig. \\ref{fig2} we have plotted the resulting $T_{B}$ vs. $H$ for\ndifferent values of the damping parameter $\\eta _{r}$. We have checked that\nlowering (or raising) the value of the measuring time $\\tau _{m}$ shifts the\ncurve $T_{B}(H)$ only very slightly upwards (or downwards) while leaving the\nqualitative behaviour unaltered. \n\\begin{figure}[t]\n\\unitlength1cm\n\\begin{picture}(15,11)\n\\centerline{\\epsfig{file=fig2.eps,angle=0,width=10cm}}\n\\end{picture}\n\\vspace{-4cm}\n\\caption{ \\label{fig2}\nThe blocking temperature $T_{B}$ as a function of the\napplied field as extracted from the formulae for the relaxation time of a\nsingle-domain particle and summed over the arbitrary angle $\\psi ,$ plotted\nfor different values of the damping parameter $\\eta _{r}$ (see text), and\nmean volume $\\left\\langle V\\right\\rangle =$ $265$ nm$^{3}.$ ($K=1.25\\times\n10^{5}$erg\/cm$^{3},\\;\\gamma =2.10^{7}$ S$^{-1}$G$^{-1},M_{s}=300$ emu\/cm$%\n^{3},\\;\\tau _{m}=100$ s).\n}\n\\end{figure}\nIn order to compare our analytical results\nwith those of experiments on particle assemblies, we have calculated the\ntemperature $T_{\\max }$ at the maximum of the ZFC magnetization. In the\npresent calculations we have assumed that $M_{s}$ is independent of\ntemperature. 
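As an illustration of how $T_{B}$ is extracted in practice, the sketch below solves $\\tau =\\tau _{m}$ by one-dimensional root finding for the axially symmetric case, i.e. using Eq. (\\ref{2}) and Eq. (\\ref{4}) together with the parameters quoted in the caption of Fig. \\ref{fig2}. The choice $\\eta _{r}=2.5$, the temperature bracket and the field values are assumptions made only for this illustration, and no averaging over $\\psi$ or over the volume distribution is performed here.
\\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

kB    = 1.380649e-16     # Boltzmann constant, erg/K
K     = 1.25e5           # anisotropy constant, erg/cm^3   (Fig. 2 caption)
Ms    = 300.0            # intrinsic magnetization, emu/cm^3
gamma = 2.0e7            # gyromagnetic ratio, 1/(s G)
V     = 265e-21          # particle volume, cm^3 (265 nm^3)
eta_r = 2.5              # GLL damping parameter (one of the values used in the text)
tau_m = 100.0            # measuring time, s

def tau(T, H):
    # Neel time tau = 2*tau_N/lambda_1 with Eq. (2) and the axially symmetric Eq. (4)
    beta  = V / (kB * T)
    alpha = beta * K
    h     = Ms * H / (2.0 * K)
    tau_N = beta * Ms / (2.0 * gamma) * (1.0 / eta_r + eta_r)
    lam1  = (2.0 / np.sqrt(np.pi)) * alpha**1.5 * (1.0 - h**2) * (
            (1.0 + h) * np.exp(-alpha * (1.0 + h)**2)
          + (1.0 - h) * np.exp(-alpha * (1.0 - h)**2))
    return 2.0 * tau_N / lam1

def T_B(H):
    # blocking temperature: the root of tau(T, H) = tau_m
    return brentq(lambda T: np.log(tau(T, H)) - np.log(tau_m), 1.0, 300.0)

for H in (0.0, 100.0, 200.0, 300.0):        # applied field in Oe
    print(f"H = {H:5.0f} Oe   T_B = {T_B(H):5.2f} K")
\\end{verbatim}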
We find that the temperature $T_{\\max }$ behaves in the same\nway as was observed experimentally \\cite{Sappey}, \\cite{Ezzir} for dilute\nsamples (see Fig.\\ \\ref{fig3}, where the parameters are those of the most dilute sample\nin Fig.\\ \\ref{fig1}, with $\\eta _{r}=2.5$).\n\\begin{figure}[t]\n\\unitlength1cm\n\\begin{picture}(15,11)\n\\centerline{\\epsfig{file=fig3.eps,angle=0,width=10cm}}\n\\end{picture}\n\\vspace{-4cm}\n\\caption{ \\label{fig3}\nThe temperature $T_{\\max }$ at the maximum of the ZFC\nmagnetization for an assembly of particles with randomly distributed easy\naxes, as a function of the applied field obtained by averaging the blocking\ntemperature in Fig.\\ 2 over the volume log-normal distribution ($\\sigma\n=0.85 $), see text; with $\\eta _{r}\\medskip =2.5$.\n}\n\\end{figure}\n\\noindent Moreover, our calculations for a single particle show that the\nblocking temperature $T_{B}(H)$ exhibits a bell-like shape in the case of\nintermediate-to-high damping. This behaviour is however spurious as is shown\nbelow.\n\n\\section{Spurious behaviour of the blocking temperature at low fields}\n\nWe have mentioned that initially the non-axially symmetric asymptotic\nformulae appear to render the low-field behaviour of $T_{\\max }$. However,\nthis apparent behaviour is spurious as the asymptotic formulae (as one may\nverify (i) by exact calculation of the smallest non-vanishing eigenvalue of\nthe Fokker-Planck equation, and (ii) e.g. the IHD formula does not continue\nto the axially symmetric asymptote) fail at low fields, since the IHD\nformula diverges like $h^{-1\/2},$ for all angles, where $h$ is the reduced\nfield as defined in sect.\\ \\ref{Calculation of the blocking\ntemperature}. The effect of the divergence is thus to \nproduce a spurious maximum in $T_{\\max }$ as a function of the applied field.\n\nIn order to verify this, we have also performed such calculations \\cite\n{Geoghegan et al}, \\cite{Coffey1}, \\cite{Kennedy} using exact numerical\ndiagonalization of the Fokker-Planck matrix. The smallest non-vanishing\neigenvalue $\\lambda _{1}$ thus obtained leads to a blocking temperature\nwhich agrees with that rendered by the asymptotic formulae with the all\nimportant exception of IHD at very low fields where the exact calculation\ninvariably predicts a monotonic decrease in the blocking temperature rather\nthan the peak predicted by the IHD formula (\\ref{IHD}), so indicating that\nthe theoretical peak is an artefact of the asymptotic expansion, caused by\nusing Eq.(\\ref{IHD}) in a region beyond its range of validity, that is in a\nregion where the potential is almost axially symmetric due to the smallness\nof the applied field which gives rise to a spurious discontinuity between\nthe axially and non axially symmetric asymptotic formulae.\n\nAn explanation of this behaviour follows (see also \\cite{Garanin et al.}):\nin the non-axially symmetric IHD asymptote Eq.(\\ref{IHD}) which is\nformulated in terms of the Kramers escape rate, as the field tends to zero,\nfor high damping, the saddle angular frequency $\\omega _{0}$ tends to zero.\nThus the saddle becomes infinitely wide and so the escape rate predicted by\nEq.(\\ref{IHD}) diverges leading to an apparent rise in the blocking\ntemperature until the field reaches a value sufficiently high to allow the\nexponential in the Arrhenius terms to take over. When this occurs the\nblocking temperature decreases again in accordance with the expected\nbehaviour. 
This is the field range where one would expect the non-axially\nasymptote to work well.\n\nIn reality, as demonstrated by the exact numerical calculations of the\nsmallest non vanishing eigenvalue of the Fokker-Planck matrix, the small\nfield behaviour is not as predicted by the asymptotic behaviour of Eq.(\\ref\n{IHD}) (it is rather given by the axially-symmetric asymptote) because the\nsaddle is limited in size to $\\omega _{0}.$ Thus the true escape rate cannot\ndiverge, and the apparent discontinuity between the axially-symmetric and\nnon axially-symmetric results is spurious, leading to apparent rise in $%\nT_{B} $. In reality, the prefactor in Eq.(\\ref{IHD}) can never overcome the\nexponential decrease embodied in the Arrhenius factor. Garanin \\cite\n{Garanin2} (see \\cite{Garanin et al.}) has discovered bridging formulae\nwhich provide continuity between the axially-symmetric Eq.(\\ref{4}) and non\naxially symmetric asymptotes leading to a monotonic decrease of the blocking\ntemperature with the field in accordance with the numerical calculations of\nthe lowest eigenvalue of the Fokker-Planck equation.\n\nAn illustration of this was given in Ref. \\cite{Garanin et al.} for the\nparticular case of $\\psi =\\frac{\\pi }{2},$ that is a transverse applied\nfield. If the escape rate is written in the form \n\\[\n\\tau ^{-1}=\\frac{\\omega _{1}}{\\pi }A\\exp (-\\beta \\Delta U) \n\\]\nwhere $\\omega _{1}$ is the attempt frequency and is given by \n\\[\n\\omega _{1}=\\frac{2K\\gamma }{M_{s}}\\sqrt{1-h^{2}}, \n\\]\nthen the factor $A,$ as predicted by the IHD formula, behaves as $\\eta _{r}\/%\n\\sqrt{h}$ for $h\\ll 1,\\eta _{r}^{2},$ while for $h=0$, $A$ behaves as $2\\pi\n\\eta _{r}\\sqrt{\\alpha \/\\pi },$ which is obviously discontinuous. So that a\nsuitable interpolation formula is required. Such a formula (analogous to\nthat used in the WKBJ method \\cite{Fermi}) is obtained by multiplying the\nfactor $A$ of the axially symmetric result by $e^{-\\xi }I_{0}(\\xi ),$ where $%\nI_{0}(\\xi )$ is the modified Bessel function of the first kind, and $\\xi\n=2\\alpha h$ (see \\cite{Garanin et al.}).\n\nThis interpolation formula, as is obvious from the large and small $\\xi $\nlimits, automatically removes the undesirable $1\/\\sqrt{h}$ divergence of the\nIHD formula and establishes continuity between the axially symmetric and\nnon-axially symmetric asymptotes for $\\psi =\\pi \/2$, as dictated by the\nexact solution.\n\nIt is apparent from the discussion of this section that the N\\'{e}el-Brown\nmodel for a single particle is unable to explain the maximum in $T_{\\max },$\nas a careful calculation of the asymptotes demonstrates that they always\npredict a monotonic decrease in the blocking temperature. However this\neffect may be explained by considering an assembly of non-interacting\nparticles with a (log-normal) volume distribution and using\nGittleman's \\cite\n{Gittleman} model as shown below, where the superparamagnetic contribution\nto magnetization is taken to be a non-linear function (Langevin function) of\nthe magnetic field.\n\n\\section{Possible explanation of the maximum in $T_{\\max }$}\n\\label{Possible explanation of the maximum}\n\nOur explanation of the low-field behaviour of $T_{\\max }$ is based on\nextracting $T_{\\max }$ from the zero-field cooled magnetization curve\nassuming a volume distribution of particles. 
According to Gittleman's model\nthe zero-field cooled magnetization of the assembly can be written as a sum\nof two contributions, one from the blocked magnetic moments and the other\nfrom the superparamagnetic ones. In addition, we write the superparamagnetic\ncontribution as a Langevin function of the applied magnetic field and\ntemperature.\n\nGittleman \\cite{Gittleman} proposed a model in which the alternative\nsusceptibility of an assembly of non-interacting particles, with a volume\ndistribution and randomly distributed easy axes, can be written as \n\\begin{equation}\n\\chi (T,\\omega )=\\frac{1}{Z}%\n\\displaystyle \\int %\n\\limits_{0}^{\\infty }dVVf(V)\\chi _{V}(T,\\omega ), \\label{Susc1}\n\\end{equation}\nwhere $Z=$ $\\int_{0}^{\\infty }dVVf(V),$ $f(V)$ is the volume distribution\nfunction, $\\chi _{V}$ is the susceptibility of the volume under\nconsideration, and $dVVf(V)\\chi _{V}$ is the contribution to the total\nsusceptibility corresponding to volumes in the range $V-V+dV.$ $\\chi _{V}$\nis then calculated by assuming a step function for the magnetic field, i.e. $%\nH=0$ for $t<0$ and $H=H_{0}=const$ for $t>0.$ Then, the contribution to the\nmagnetization from particles of volume $V$ is given by \n\\begin{equation}\nM_{V}(t)=VH_{0}\\left( \\chi _{0}-(\\chi _{0}-\\chi _{1})e^{-t\/\\tau }\\right) ,\n\\label{Susc2}\n\\end{equation}\nwhere $\\chi _{0}=M_{s}^{2}(T)V\/3k_{B}T$ is the susceptibility at\nthermodynamic equilibrium and $\\chi _{1}=M_{s}^{2}(T)V\/3E_{B}$ is the\ninitial susceptibility of particles in the blocked state (see \\cite{Dormann\net al.} and many references therein). The Fourier transform of (\\ref{Susc2})\nleads to the complex susceptibility \n\\begin{equation}\n\\chi =\\frac{(\\chi _{0}+i\\omega \\tau \\chi _{1})}{1+i\\omega \\tau },\n\\label{Susc3}\n\\end{equation}\nwhose real part reads \\cite{Gittleman} \n\\begin{equation}\n\\chi ^{\\prime }=\\frac{\\chi _{0}+\\omega ^{2}\\tau ^{2}\\chi _{1}}{1+\\omega\n^{2}\\tau ^{2}}, \\label{Susc4}\n\\end{equation}\nwhere $\\tau _{m}$ is the measuring time, and $\\omega $ is the angular\nfrequency ($=2\\pi \\nu $).\n\nStarting from (\\ref{Susc4}) the application of an alternating field yields:\na) $\\chi ^{\\prime }=\\chi _{0}$ if $\\omega \\tau \\ll 1.$ At high temperature\nthe magnetic moments orientate themselves on a great number of occasions\nduring the time of a measurement, and thus the susceptibility is the\nsuperparamagnetic susceptibility $\\chi _{0}.$ b) $\\chi ^{\\prime }=\\chi _{1}$\nif $\\omega \\tau \\gg 1.$ At low temperature the energy supplied by the field\nis insufficient to reverse the magnetic moments the time of a measurement.\nHere the susceptibility is the static susceptibility $\\chi _{1}.$ Between\nthese two extremes there exists a maximum at the temperature $T_{\\max }.$\n$\\chi ^{\\prime }$ can be calculated from (\\ref{Susc4}) using the formula for\nthe relaxation time $\\tau $ appropriate to the anisotropy symmetry, and\nconsidering a particular volume $V,$ one can determine the temperature $%\nT_{\\max }.$\n\nIn an assembly of particles with a volume distribution, $\\chi^{\\prime }$\ncan be calculated for a (large) volume distribution by postulating that at a\ngiven temperature and given measuring time, certain particles are in the\nsuperparamagnetic state and that the others are in the blocked state. 
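Before the volume average is carried out, it may help to see how $T_{\\max }$ arises for a single volume: the sketch below simply evaluates Eq. (\\ref{Susc4}) on a temperature grid and picks its maximum. The simple Arrhenius form $\\tau =\\tau _{0}\\exp (KV\/k_{B}T)$ with an assumed attempt time $\\tau _{0}=10^{-9}$ s and the chosen measuring frequency are illustrative assumptions only; the text instead uses the relaxation time appropriate to the anisotropy symmetry.
\\begin{verbatim}
import numpy as np

kB, K, Ms = 1.380649e-16, 1.25e5, 300.0     # cgs units, values of Fig. 2
V     = 265e-21                             # particle volume, cm^3
nu    = 1.0                                 # measuring frequency, Hz (illustrative)
omega = 2.0 * np.pi * nu
tau0  = 1.0e-9                              # assumed attempt time, s

T    = np.linspace(2.0, 100.0, 2000)
tau  = tau0 * np.exp(K * V / (kB * T))      # simple Neel-Brown (Arrhenius) relaxation time
chi0 = Ms**2 * V / (3.0 * kB * T)           # equilibrium (superparamagnetic) susceptibility
chi1 = Ms**2 / (3.0 * K)                    # blocked-state susceptibility, E_B = K V
chip = (chi0 + (omega * tau)**2 * chi1) / (1.0 + (omega * tau)**2)   # Eq. (Susc4)

print("single-volume T_max =", T[np.argmax(chip)], "K")
\\end{verbatim}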
The\nsusceptibility is then given by the sum of two contributions \n\\begin{equation}\n\\chi ^{\\prime }(T,\\nu )=%\n\\displaystyle \\int %\n\\limits_{V_{c}}^{\\infty }dVVf(V)\\chi _{1}(T,\\nu )+%\n\\displaystyle \\int %\n\\limits_{0}^{V_{c}}dVVf(V)\\chi _{0}(T,\\nu ), \\label{Susc5}\n\\end{equation}\nwhere $V_{c}=V_{c}(T,H,\\nu )$ is the blocking volume defined as the volume\nfor which $\\tau =\\frac{1}{\\nu} = \\tau_{m}$. $\\chi ^{\\prime }$ shows a maximum at $%\nT_{\\max }$ near $.$\n\nIf this is done even the simple N\\'{e}el-Brown\\footnote{%\nThis is the simplest non-trivial case since the relaxation time (and thereby\nthe critical volume) depends on the magnetic field.} expression for the\nrelaxation time leads to a maximum in $T_{\\max }$ when the superparamagnetic\ncontribution to magnetization is a Langevin function of the magnetic field.\nThus the particular expression for the single-particle relaxation time used\nappears not to be of a crucial importance in the context of the calculation\nof the blocking temperature.\n\nIn Figs. \\ref{fig4a}-\\ref{fig4c} we plot the result of such calculations (see appendix) where the\nparameters correspond to the samples of Fig. \\ref{fig1}. In Fig. \\ref{fig4a} we compare the\nresults from linear and non-linear (Langevin function) dependence of the\nmagnetization on the magnetic field. We see that indeed the non-linear\ndependence on $H$ of the superparamagnetic contribution to magnetization\nleads to a maximum in $T_{\\max }$ while in the linear case the temperature $%\nT_{\\max }$ is a monotonic function of the field, for all values of $K$\n(corresponding to our samples). This only shows that the volume distribution\nby itself cannot account for the non-monotonic behaviour of the temperature $%\nT_{\\max }$, contrary to what was claimed in \\cite{Luc Thomas}.\n\\begin{figure}[t]\n\\unitlength1cm\n\\begin{picture}(15,12)\n\\centerline{\\epsfig{file=fig4a.eps,angle=0,width=10cm}}\n\\end{picture}\n\\vspace{-5cm}\n\\caption{ \\label{fig4a}\nTemperature $T_{\\max }$ as a function of the applied field obtained by\nthe calculations of sect. V and appendix. Squares : the superparamagnetic\nmagnetization $M_{sp}$ is a linear function of the magnetic field given by\nEq.(\\ref{11}). Circles : $M_{sp}$ is the Langevin function given by Eq.(\\ref\n{11b}). The parameters are the same as in Figs.1-3, and $K=1.0\\times 10^{5}$%\nerg\/cm$^{3}$.\n}\n\\end{figure}\n\\begin{figure}[t]\n\\unitlength1cm\n\\begin{picture}(15,11)\n\\centerline{\\epsfig{file=fig4b.eps,angle=0,width=9cm}}\n\\end{picture}\n\\vspace{-4.9cm}\n\\caption{ \\label{fig4b}\nTemperature $T_{\\max }(H)$ for different values of the anisotropy\nconstant $K.$\n}\n\\end{figure}\n\\begin{figure}[h]\n\\unitlength1cm\n\\begin{picture}(15,11.5)\n\\centerline{\\epsfig{file=fig4c.eps,angle=0,width=8cm}}\n\\end{picture}\n\\vspace{-4cm}\n\\caption{ \\label{fig4c}\n$T_{\\max }(H)$ for different values of the volume-distribution width $%\n\\sigma ,$ and $K=1.5\\times 10^{5}$erg\/cm$^{3}.$\n}\n\\end{figure}\n\nIn fact, in the non-linear case, $T_{\\max }$ exhibits three different\nregimes with field ranges depending on all parameters and especially on $K$\n(see Figs. \\ref{fig4a}, \\ref{fig4b}). For example, for $K=1.5\\times 10^{5}$ erg\/cm$^{3}$, the\nfield ranges are (in Oe) $0.0140$. In the first\nrange, i.e. very low fields, $T_{\\max }$ slightly decreases, then in the\nsecond range it starts to increase up to a maximum, and finally for very\nhigh fields $T_{\\max }$ decreases down to zero. 
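A compressed sketch of this type of calculation (the full procedure is described in the appendix) is given below: the blocking volume $V_{c}(T,H)$ is obtained from a simplified Arrhenius relaxation time with a field-reduced barrier $KV(1-h)^{2}$, the superparamagnetic contribution is the Langevin function of Eq. (\\ref{11b}), the blocked contribution is taken as the $\\psi$-averaged Eq. (\\ref{6}), i.e. $M_{s}^{2}H\/3K$, and the position of the maximum of $M_{zfc}(T)$ is then scanned for several fields. The attempt time, the form of the barrier reduction and the integration grid are assumptions of this illustration; it is meant only to display the structure of the computation, not to reproduce Figs. \\ref{fig4a}-\\ref{fig4c} quantitatively.
\\begin{verbatim}
import numpy as np

kB, K, Ms   = 1.380649e-16, 1.0e5, 300.0    # cgs units; K as in Fig. 4a
Vm, sigma   = 265e-21, 0.85                 # log-normal volume distribution parameters
tau0, tau_m = 1.0e-9, 100.0                 # assumed attempt time, measuring time (s)

V  = np.exp(np.linspace(np.log(Vm) - 5 * sigma, np.log(Vm) + 5 * sigma, 4000))
fV = np.exp(-np.log(V / Vm)**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
w  = V * fV * np.gradient(V)                # weights approximating  dV V f(V)

def langevin(x):
    return np.where(x < 1e-6, x / 3.0, 1.0 / np.tanh(x) - 1.0 / x)

def m_zfc(T, H):
    h   = Ms * H / (2.0 * K)                                    # reduced field
    Vc  = kB * T * np.log(tau_m / tau0) / (K * (1.0 - h)**2)    # blocking volume (Arrhenius)
    Msp = Ms * langevin(Ms * H * V / (kB * T))                  # superparamagnetic part, Eq. (11b)
    Mb  = Ms**2 * H / (3.0 * K)                                 # blocked part, psi-averaged Eq. (6)
    return np.sum(w * np.where(V <= Vc, Msp, Mb)) / np.sum(w)

T = np.linspace(2.0, 60.0, 300)
for H in (50.0, 150.0, 300.0, 500.0):                           # field in Oe
    Tmax = T[np.argmax([m_zfc(t, H) for t in T])]
    print(f"H = {H:5.0f} Oe   T_max = {Tmax:5.1f} K")
\\end{verbatim}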
These three regimes were\nobtained experimentally in \\cite{Luc Thomas} in the case of diluted FeC\nparticles.\n\nNext, we studied the effect of varying the anisotropy constant $K$. In\nFig. \\ref{fig4b} we plot the temperature $T_{\\max }$ vs. $H$ for different values of $%\nK.$ It is seen that apart from the obvious shifting of the peak of $T_{\\max\n} $ to higher fields, this peak broadens as the anisotropy constant $K$\nincreases.\n\nWe have also varied the volume-distribution width $\\sigma $ and the results\nare shown in Fig. \\ref{fig4c}. There we see that the maximum of $T_{\\max }$ tends to\ndisappear as $\\sigma $ becomes smaller.\n\nFinally, these results show that the non-monotonic behaviour of $T_{\\max }$\nis mainly due to the non-linear dependence of the magnetization as a\nfunction of magnetic field, and that the magneto-crystalline anisotropy and\nthe volume-distribution width have strong bearing on the variation of the\ncurvature of $T_{\\max }$ vs. field.\n\n\\section{Conclusion}\n\nOur attempt to explain the experimentally observed maximum in the curve $%\nT_{\\max }(H)$ for dilute samples using the asymptotic formulae for the\nprefactor of the relaxation rate of a single-domain particle given by Coffey\net al. \\cite{Coffey4}, has led to the conclusion that these asymptotic\nformulae are not valid for small fields, where the maximum occurs. However,\nthis negative result has renewed interest in the long-standing problem of\nfinding bridging formulae between non-axially and axially symmetric\nexpressions for the prefactor of the escape rate. Recently, this problem has\nbeen partially solved in \\cite{Garanin et al.}.\n\nOn the other hand, exact numerical calculations \\cite{Geoghegan et al}, \\cite\n{Coffey1}, \\cite{Kennedy} of the smallest eigenvalue of the Fokker-Planck\nmatrix invariably lead to a monotonic decrease in the blocking temperature\n(and thereby in the temperature $T_{\\max }$) as a function of the magnetic\nfield. We may conclude then that the expression of the single-particle\nrelaxation time does not seem to play a crucial role. Indeed, the\ncalculations of sect.\\ \\ref{Possible explanation of the maximum} have\nshown that even the simple N\\'{e}el-Brown \nexpression for the relaxation time leads to a maximum in $T_{\\max }$ if one\nconsiders an assembly of particles whose magnetization, formulated through\nGittleman's model, has a superparamagnetic contribution that is a Langevin\nfunction of the magnetic field. The magneto-crystalline anisotropy and the\nvolume-distribution width have strong influence.\n\nAnother important point, whose study is beyond the scope of this work, is\nthe effect of interparticle interactions on the maximum in the temperature $%\nT_{\\max }$. As was said in the introduction, this maximum disappears in\nconcentrated samples, i.e. in the case of intermediate-to-strong\ninterparticle interactions. A recent study \\cite{Chantrell} based on Monte\nCarlo simulations of interacting (cobalt) fine particles seems to recover\nthis result but does not provide a clear physical interpretation of the\neffect obtained. In particular, it was shown there that interactions have a\nstrong bearing on the effective variation of the average energy barrier with\nfield, as represented in an increase of the curvature of the variation of $%\nT_{\\max }$ with $H$ as the packing density (i.e. interparticle interactions)\nincreases.\n\n\\clearpage\n\n\\section*{Acknowledgements}\nH.K., W.T.C. and E.K. 
thank D.\\ Garanin for helpful conversations.\nThis work was supported in part by Forbairt Basic Research Grant\nNo SC\/97\/70 and the Forbairt research collaboration fund 1997-1999. The work\nof D.S.F.C. and E.C.K. was supported by EPSRC (GR\/L06225).\n\\newpage \n\n\\section*{Appendix: Obtaining $T_{\\max }$ from the ZFC magnetization}\n\nHere we present the (numerical) method of computing the temperature $T_{\\max\n}$ at the peak of the zero-field cooled magnetization of non-interacting\nnanoparticles.\n\nThe potential energy for a particle reads \n\\begin{equation}\n\\frac{\\beta U}{\\alpha }=\\sin ^{2}\\theta -2h\\cos (\\psi -\\theta ) \\label{1app}\n\\end{equation}\nwhere all parameters are defined in sect.\\ \\ref{Calculation of the blocking temperature}.\n\nThen, one determines the extrema of the potential $U$ and defines the escape\nrate $\\lambda $ according to the symmetry of the problem. Here we consider,\nfor simplicity, the axially-symmetry N\\'{e}el-Brown model where $\\lambda $\nis given by Eq. (\\ref{4}).\n\nThe next step consists in finding the critical volume $V_{c}$ introduced in\nEq. (\\ref{Susc5}). $V_{c}$ is defined as the volume at which the relaxation\ntime (or the escape rate) is equal to the measuring time $\\tau _{m}=100s$\n(or measuring frequency). That is, if one defines the function \n\\begin{equation}\nF(V)=\\lambda (\\alpha ,\\theta _{a},\\theta _{m},\\theta _{b})-\\frac{\\tau _{N}}{%\n\\tau _{m}}, \\label{3}\n\\end{equation}\nwhere $\\theta _{a},\\theta _{b},\\theta _{m}$ correspond to the two minima and\nmaximum of the potential, respectively, the critical volume $V_{c}$ is\nobtained as the volume that nullifies the function $F(V)$ for given values\nof $T,H$ and all other fixed parameters ($\\gamma ,\\eta _{r},M_{s}$ and the\nvolume-distribution width $\\sigma $).\n\nThen, $M_{zfc}$ is defined according to Gittleman's model \\cite{Gittleman},\nnamely \n\\begin{equation}\nZ\\times M_{zfc}(H,T,\\psi )=%\n\\displaystyle \\int %\n\\limits_{0}^{V_{c}}{\\cal D}V\\,M_{sp}(H,T,V,\\psi )+%\n\\displaystyle \\int %\n\\limits_{V_{c}}^{\\infty }{\\cal D}V\\,M_{b}(H,T,V,\\psi ) \\label{4app}\n\\end{equation}\nwhere $M_{sp}$ and $M_{b}$ are the contributions to magnetization from\nsuperparamagnetic particles with volume $V\\leq V_{c}$ and particles still in\nthe blocked state with volume $V>V_{c}$. $f(V)=(1\/\\sigma \\sqrt{2\\pi })\\exp\n(-\\log ^{2}(V\/V_{m})\/2\\sigma ^{2})$, is the\nlog-normal volume distribution, $V_{m}$ being the mean volume; $Z\\equiv \\int_{0}^{\\infty }{\\cal D}%\nV=\\int_{0}^{\\infty }Vf(V)dV.$\n\nEq.(\\ref{4app}) can be rewritten as \n\\begin{equation}\nZ\\times M_{zfc}=%\n\\displaystyle \\int %\n\\limits_{0}^{V_{c}}{\\cal D}V\\,M_{sp}-%\n\\displaystyle \\int %\n\\limits_{0}^{V_{c}}{\\cal D}V\\,M_{b}+%\n\\displaystyle \\int %\n\\limits_{0}^{\\infty }{\\cal D}V\\,M_{b}. \\label{5}\n\\end{equation}\nNow using, \n\\begin{equation}\nM_{b}(H,T,V,\\psi )=\\frac{M_{s}^{2}H}{2K}\\sin ^{2}\\psi , \\label{6}\n\\end{equation}\n$M_{b}$ can be taken outside the integral in the last term above. Thus%\n\\footnote{%\nThe reason for doing so is to avoid computing the integral $%\n\\int_{V_{c}}^{\\infty }$ which is numerically inconvenient.}, \n\\[\nZ\\times M_{zfc}=%\n\\displaystyle \\int %\n\\limits_{0}^{V_{c}}{\\cal D}V\\,(M_{sp}-\\,M_{b})+\\,Z\\times M_{b}. 
\n\\]\n\nThe final expression of $M_{zfc}$ is obtained by averaging over the angle $%\n\\psi $ ($\\left\\langle \\sin ^{2}\\psi \\right\\rangle =\\frac{2}{3}$), \n\\begin{equation}\nM_{zfc}=\\frac{1}{Z}%\n\\displaystyle \\int %\n\\limits_{0}^{\\pi \/2}d\\psi \\sin \\psi \\times \n\\displaystyle \\int %\n\\limits_{0}^{V_{c}}{\\cal D}V\\,(M_{sp}-\\,M_{b})+\\,\\frac{M_{s}^{2}H}{3K}.\n\\label{8}\n\\end{equation}\nThe expression of $M_{sp}$ varies according to the model used. Chantrell et\nal. \\cite{Chantrell et al.} have given an expression which is valid for $%\nM_{s}HV\/k_{B}T\\ll 1,$%\n\\begin{equation}\nM_{sp}(H,T,V,\\psi )=\\frac{M_{s}^{2}VH}{k_{B}T}\\left( \\cos ^{2}\\psi +\\frac{1}{%\n2}\\left[ 1-\\cos ^{2}\\psi (1-\\frac{I_{2}}{I_{0}})\\right] \\right) , \\label{9}\n\\end{equation}\nwith \n\\begin{equation}\n\\frac{I_{2}}{I_{0}}=\\frac{1}{\\alpha }\\left( -\\frac{1}{2}+\\frac{e^{\\alpha }}{%\nI(\\alpha )}\\right) ,\\quad I(\\alpha )=2%\n\\displaystyle \\int %\n\\limits_{0}^{1}dxe^{\\alpha x^{2}}. \\label{10}\n\\end{equation}\nNote that upon averaging over $\\psi ,$ the expression in (\\ref{9}) reduces\nto \n\\begin{equation}\nM_{sp}(H,T,V)=\\frac{M_{s}^{2}VH}{3k_{B}T}, \\label{11}\n\\end{equation}\nwhich is just the limit of the Langevin function for $M_{s}HV\\ll k_{B}T,$\ni.e. \n\\begin{equation}\nM_{sp}(H,T,V)=M_{s}{\\cal L}\\left( \\frac{M_{s}HV}{k_{B}T}\\right) .\n\\label{11b}\n\\end{equation}\n\nTherefore, the expression in (\\ref{8}) becomes \n\\begin{equation}\nM_{zfc}=\\frac{1}{Z}%\n\\displaystyle \\int %\n\\limits_{0}^{V_{c}}{\\cal D}V\\,\\left( M_{sp}-\\,M_{b}\\right) +\\,\\frac{%\nM_{s}^{2}H}{3K} \\label{12app}\n\\end{equation}\nThis is valid only in the case of a relaxation time independent of $\\psi ,$\nas in the N\\'{e}el-Brown model, which is applicable to an assembly of\nuniformly oriented particles. However, if one wanted to use the expressions\nof the relaxation time given by Coffey et al. and others, where $\\tau $\ndepends on the angle $\\psi ,$ as is the case in reality, one should not\ninterchange integrations over $\\psi $ and $V,$ as is done in (\\ref{8}),\nsince $V_{c}$ in general depends on $\\tau $ and thereby on $\\psi $.\n\nTherefore, the final expression for $M_{zfc}$ that was used in our\ncalculations for determining the temperature $T_{\\max }$ is given by eqs. (%\n\\ref{11}), (\\ref{11b}), (\\ref{12app}).\n\n\\clearpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAn evolution of the magnetic phase transition in the helical magnet MnSi at high pressure is reported in a number of publications~\\cite{1,2,3}. It became clear that the phase transition temperature decreased with pressure and practically reached the zero value at $\\sim$15 kbar. However a nature of this transition at zero temperature and high pressure is still a subject of controversial interpretations. Early it was claimed an existence of tricritical point on the phase transition line that might result in a first order phase transition in MnSi at low temperatures~\\cite{4}. The latter would prevent an existence of quantum critical point in MnSi. This view was seemingly supported by the volume measurements at the phase transition in MnSi~\\cite{5,6}. However, this idea was disputed in papers~\\cite{7,8}, where stated that the observed volume anomaly at the phase transitions in MnSi at low temperatures was simply the slightly narrowing anomaly clearly seen at elevated temperatures. 
On the other hand, some experimental works and recent Monte-Carlo calculations may indicate a strong influence of the inhomogeneous stress, arising at high pressures and low temperatures, on the characteristics of the phase transition, which could make any such experimental data not entirely conclusive~\\cite{7,8,9}.\n\n\nIn this situation it would be appealing to use a different approach to search for quantum criticality in MnSi, for instance, using doping as a controlling parameter. Indeed, it is known that doping MnSi with Fe and Co decreases the temperature of the magnetic phase transition and finally suppresses the transition completely at some critical concentration of the dopant. In the case of Fe doping the critical concentration is about 15\\% (different estimates actually vary from 0.10 to 0.19)~\\cite{10,11,13}. \n\nActually, the common belief that the concentration of the dopant added to the batch will be the same in the grown crystal is incorrect. One needs to perform chemical and x-ray analyses to draw a definite conclusion about the real composition of the material. In any case, there is some evidence (non-Fermi-liquid resistivity, logarithmic divergence of the specific heat) that a quantum critical point indeed occurs in (MnFe)Si in the vicinity of an iron concentration of about 15\\% at ambient pressure. However, in a recent publication it is claimed that (Mn$_{0.85}$Fe$_{0.15}$)Si experiences a second order phase transition over the pressure range $\\sim$0--23 kbar, which would place the quantum critical point of this material at high pressure~\\cite{14}. \n\n\nWith this in mind it seems appropriate to take another look at the situation. \nWe report here the results of a study of a single crystal with nominal composition (Mn$_{0.85}$Fe$_{0.15}$)Si. The sample was prepared from an ingot obtained by premelting Mn (purity 99.99\\%, Chempur), Fe (purity 99.98\\%, Alfa Aesar), and Si ($\\rho_n$=300 Ohm cm, $\\rho_p$=3000 Ohm cm) under argon atmosphere in a single arc oven; the single crystal was then grown using the triarc Czochralski technique. \nElectron-probe microanalysis shows that the real composition is (Mn$_{0.795}$Fe$_{0.147}$)$_{47.1}$Si$_{52.9}$, which indicates some deviation from the stoichiometric chemical composition, common to the silicide compounds. Hereafter we will nevertheless refer to the sample under study as (MnFe)Si.\n\n \nThe lattice parameter of the sample was found to be a=4.5462\\AA. Note that the lattice parameter of pure MnSi is somewhat larger, a=4.5598 \\AA. This implies that iron plays the role of some sort of pressure agent. Let us estimate what pressure would be needed to compress pure MnSi to the volume corresponding to the lattice parameter of the material under study. We use the simple linear expression $P=K\\dfrac{\\Delta V}{V}$, where $P$ is the pressure, $K=-V(\\frac{dP}{dV})_T$ is the bulk modulus, and $\\dfrac{\\Delta V}{V}=(V_{MnSi}-V_{(MnFe)Si})\/V_{(MnFe)Si}$. Taking $K$=1.64 Mbar~\\cite{15} and $\\dfrac{\\Delta V}{V}$=8.96$\\cdot 10^{-3}$ (as follows from the lattice parameters given above), one obtains $P$=14.63 kbar. Surprisingly, this value practically coincides with the pressure corresponding to the phase transition in pure MnSi at zero temperature~\\cite{1,2,3,4}. 
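The arithmetic of this estimate is easy to check; the short sketch below reproduces it from the quoted lattice parameters and bulk modulus (the small difference with respect to the quoted 14.63 kbar comes only from rounding of $\\Delta V\/V$).
\\begin{verbatim}
a_MnSi, a_MnFeSi = 4.5598, 4.5462            # lattice parameters, Angstrom
K_bulk = 1.64e3                              # bulk modulus of MnSi, kbar (= 1.64 Mbar)

dV_over_V = (a_MnSi**3 - a_MnFeSi**3) / a_MnFeSi**3
print(f"dV/V = {dV_over_V:.3e}")             # ~8.98e-3, cf. 8.96e-3 quoted in the text
print(f"P    = {K_bulk * dV_over_V:.1f} kbar")   # ~14.7 kbar, cf. 14.63 kbar
\\end{verbatim}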
This adds an extra argument in favor of quantum criticality of (MnFe)Si in the vicinity of iron concentration 0.15\\%.\n\n\n\n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig1.eps}\n\\caption{\\label{fig1} (Color online) Magnetization curves for (MnFe)Si (a) and MnSi (b)~\\cite{7,17}.}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig2.eps}\n\\caption{\\label{fig2} (Color online) The inverse magnetic susceptibility $1\/\\chi$ for (MnFe)Si and MnSi~\\cite{7,17} as measured at 0.01 T.}\n \\end{figure}\n\n\n\\section{Experimental}\nWe performed some magnetic, dilatometric, electrical and heat capacity measurements to characterize the sample of (MnFe)Si. All measurements were made making use the Quantum Design PPMS system with the heat capacity and vibrating magnetometer moduli. The linear expansion of the sample was measured by the capacity dilatometer~\\cite{16}. The resistivity data were obtained with the standard four terminals scheme using the spark welded Pt wires as electrical contacts.\n\n\nThe experimental results are displayed in Fig.~\\ref{fig1}--\\ref{fig11}. Whenever it is possible the corresponding data for pure MnSi are depicted at the same figures to facilitate comparisons of the data.\n\n\nIn Fig.~\\ref{fig1} the magnetization curves for both (MnFe)Si and MnSi are shown. As it follows the magnetization of (MnFe)Si (a) does not reveal an existence of the spontaneous magnetic moment in contrast with a case of MnSi. From the saturated magnetization of MnSi at high field (Fig.~\\ref{fig1}b), the magnetic moment per atom Mn is 0.4$\\mu_B$.\n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig3.eps}\n\\caption{\\label{fig3} (Color online) Specific heat of (MnFe)Si as a function of temperature at different magnetic fields. Specific heat of (MnFe)Si divided by temperature $C_p\/T$ is shown in the inset in the logarithmic scale at zero magnetic field. }\n\\end{figure}\n \n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig4.eps}\n\\caption{\\label{fig4} (Color online) a) Specific heat of (MnFe)Si as a function of temperature in the 2--20 K range. The line is the power function fit the experimental data (shown in the plot).\nb) Specific heat of MnSi at high pressure measured by the ac-calorimetry technique~\\cite{19}.}\n\\end{figure}\n \n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig5.eps}\n\\caption{\\label{fig5} (Color online) Temperature dependence of $C_p$ (a) and $C_p\/T$ (b) for (MnFe)Si and MnSi~\\cite{7,20}.}\n\\end{figure}\n \n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig6.eps}\n\\caption{\\label{fig6} (Color online) Dependence of linear thermal expansion of (MnFe)Si and MnSi~\\cite{7,20} on temperature. MnSi data reduced to (MnFe)Si ones at 200 K for better viewing.} \n\\end{figure}\n \nAs seen from Fig.~\\ref{fig2} the magnetic susceptibility $\\chi$ of (MnFe)Si does not obey the Curie-Weiss law, which clearly works in the paramagnetic phase of MnSi. The temperature dependence of $1\/\\chi$ for (MnFe)Si is well described in the range 5-150 K by the expression $1\/\\chi=A+cT^{0.78}$, which was also observed for some substances with quantum critical behavior~\\cite{18}. This expression can rewritten in the form $(1\/\\chi - 1\/\\chi_0)^{-1}=cT^{-1}$, implying a divergence of the quantity $(1\/\\chi - 1\/\\chi_0)^{-1}$. 
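A fit of this form can be performed with a standard nonlinear least-squares routine. The sketch below uses synthetic data generated with the reported exponent purely to illustrate the fitting procedure over the 5--150 K window; the measured points of Fig. \\ref{fig2} themselves are not reproduced here, and the amplitude, offset and noise level are arbitrary.
\\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def inv_chi(T, A, c, n):
    return A + c * T**n

# synthetic points standing in for the measured 1/chi(T) of Fig. 2 (illustration only)
rng = np.random.default_rng(0)
T = np.linspace(5.0, 150.0, 60)
y = inv_chi(T, 2.0, 0.5, 0.78) * (1.0 + 0.01 * rng.standard_normal(T.size))

(A, c, n), _ = curve_fit(inv_chi, T, y, p0=(1.0, 1.0, 1.0))
print(f"fitted exponent n = {n:.3f}   (0.78 is the value reported for (MnFe)Si)")
\\end{verbatim}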
The nature of the anomalous part of the $1\/\\chi$ at $<5$~K (see inset in Fig.~\\ref{fig2}) will be discussed later.\n\n\nAs can be seen from Fig.~\\ref{fig3} magnetic field does not influence much the specific heat of (MnFe)Si at least at high temperatures. Also is seen in the inset of Fig.~\\ref{fig3} that the ratio of $C_p\/T$ does not fit well the logarithmic law.\n\n\nThe power law behavior of $C_p$ in the range to 20~K is characterized by the exponent $\\sim 0.6$ (Fig.~\\ref{fig4}), which immediately leads to the diverging expression for $C_p\/T\\sim T^{-1+0.6}$ (see Fig.~\\ref{fig5}b). This finding contradicts to the data~\\cite{12} declaring the logarithmic divergence of $C_p\/T$ for (MnFe)Si in about the same temperature range (see the inset in Fig.~\\ref{fig3}). In Fig.~\\ref{fig4}b is shown how the phase transition in MnSi at high pressure close to the quantum critical region influences the specific heat. The additional illustration of this kind is provided by the resistivity data (see Fig.~\\ref{fig11}). So one cannot find any similar evidence in Fig.~\\ref{fig4}a for the would be phase transition, which was suggested in~\\cite{14}.\n\n\nFig.~\\ref{fig5} shows the temperature dependences of specific heats $C_p$ (a) and specific heats divided by temperature $C_p\/T$ (b) for (MnFe)Si and MnSi. As can be seen both quantities do not differ much at temperatures above the magnetic phase transitions in MnSi even with applied magnetic field. The great difference arises at and below phase transition temperatures in MnSi. The remarkable thing is the diverging behavior of $C_p\/T$ that is removed by an application of strong magnetic field (Fig.~\\ref{fig5}b) leading to the finite value of $(C_p\/T)$ at T=0 corresponding to the electronic specific heat term $\\gamma$, therefore restoring Fermi liquid picture \n\n\n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig7.eps}\n\\caption{\\label{fig7} (Color online) Linear thermal expansion coefficients of (MnFe)Si and MnSi~\\cite{7,20}. } \n\\end{figure}\n\nAs is seen in Fig.~\\ref{fig6} the magnetic phase transition in MnSi is signified by a significant volume anomaly. Nothing of this kind exists on the thermal expansion curve of (MnFe)Si. Probably a somewhat different situation can be observed in Fig.~\\ref{fig7}, which displays the temperature dependences of linear thermal expansion coefficients $\\beta=(1\/L_0) (dL\/dT)_p$ for (MnFe)Si and MnSi. It is seen a surprisingly good agreement between both data at high temperature. A specific feature of $\\beta$ of (MnFe)Si is a small tail at T$<5$~K. This tale inclines to cross the temperature axis at finite value therefore tending to the negative $\\beta$ as it does occur in MnSi in the phase transition region (see Figs.~\\ref{fig7} and \\ref{fig8}). Just this behavior of $\\beta$ creates sudden drop at low temperatures in the seemingly diverging ratio $\\beta\/C_{p}$, which conditionally may be called the Gruneisen parameter (See Fig.~\\ref{fig9}). \n \n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig8.eps}\n\\caption{\\label{fig8} (Color online) Linear thermal expansion coefficients of (MnFe)Si as functions of temperature and magnetic fields.} \n\\end{figure}\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig9.eps}\n\\caption{\\label{fig9} (Color online) Gruneisen ratio tends to diverse at $T\\rightarrow0$. 
This tendency is interrupted by a peculiar behavior of the thermal expansion coefficient.} \n\\end{figure}\n\nFig.~\\ref{fig8} shows that magnetic field strongly influences the \"tale\" region of the thermal expansion coefficient of (MnFe)Si that indicates its fluctuation nature. This feature should be linked to the anomalous part of the $1\/\\chi$ at $<5$~K (Fig.~\\ref{fig2}).\n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig10.eps}\n\\caption{\\label{fig10} (Color online) Resistivities of (MnFe)Si and MnSi~\\cite{21} as functions of temperature.} \n\\end{figure}\n\nResistivities of (MnFe)Si and MnSi as functions of temperature are shown in Fig.~\\ref{fig10}. The quasi linear non Fermi liquid behavior of resistivity of (MnFe)Si at low temperature in contrast with the MnSi case is quite obvious. With temperature increasing the resistivity of (MnFe)Si evolves to the \"saturation\" curve typical of the strongly disordered metals and similar to the post phase transition branch of the resistivity curve of MnSi~\\cite{22}.\n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig11.eps}\n\\caption{\\label{fig11} (Color online)Dependence of temperature derivative of resistivity of (MnFe)Si and MnSi on temperature: a) $d\\rho\/dT$ of (MnFe)Si as functions of temperature and magnetic fields, b) $d\\rho\/dT$ of MnSi as a function of temperature at ambient and high pressure (in the inset)~\\cite{8,21}.} \n\\end{figure}\n\n\nA comparison of Fig.~\\ref{fig11} (a) and (b) shows a drastic difference in behavior of $d\\rho\/dT$ at the phase transition in MnSi and in (Mn,Fe)Si in the supposedly critical region. The peculiar form of $d\\rho\/dT$ of (Mn,Fe)Si does not look as a phase transition feature though it certainly reflects an existence of significant spin fluctuations. This feature should be related to the anomalies of the magnetic susceptibility (fig.~\\ref{fig2}) and thermal expansion coefficient (fig.~\\ref{fig8}).\n\n\\section{Discussion}\nAs we have shown in the Introduction the lattice parameter of our sample of (MnFe)Si corresponds to the one of compressed pure MnSi by pressure about 1.5 GPa. At this pressure and zero temperature the quantum phase transition in MnSi does occur, nature and properties of which are still under discussion~\\cite{7}. Alternative way to reach the quantum regime is to use so called \"chemical pressure\" doping MnSi with suitable \"dopants\" that could avoid disturbing inhomogeneous stresses arising at conventional pressure loading. So as it appeared the composition Mn$_{0.85}$Fe$_{0.15}$Si indeed demonstrated properties typical of the quantum critical state ~\\cite{11,12}. However the conclusions of Ref.~\\cite{11,12} were disputed in the publication~\\cite{14}, authors of which claim on the basis of the muon spin relaxation experiments that 15\\% Fe-substituted (Mn,Fe)Si experiences a second order phase transition at ambient pressure then reaching a quantum critical point at pressure $\\sim$21--23 kbar.\n\n\nWith all that in mind we have carried out a number of measurements trying to elucidate the problem. Below we summarize our finding.\n\n\n\\begin{enumerate}\n\\item There is no spontaneous magnetic moment in (MnFe)Si at least at 2~K (Fig.1). Magnetic susceptibility of (MnFe)Si can be described by the expression $1\/\\chi=A+cT^{0.78}$ or $(1\/\\chi - 1\/\\chi_0)^{-1} = cT^{-1}$ in the temperature range $\\sim$5--150~K, implying divergence of the quantity $(1\/\\chi - 1\/\\chi_0)^{-1}$. 
This behavior was also observed earlier for some substances close to a quantum critical region (Fig.~\\ref{fig2})~\\cite{18}. At $T<5$~K the behavior of $1\/\\chi$ deviates from the above expression in a way that can be traced to the analogous feature in the fluctuation region of MnSi at $T>T_c$ (see the round inset in Fig.~\\ref{fig2}).\n\\item The specific heat of (MnFe)Si is well described by the simple power law $C\\sim T^{0.6}$ in the range 2--20~K and does not show any features attributable to a phase transition, as is the case for MnSi at pressures close to the quantum phase transition (Fig.~\\ref{fig3},\\ref{fig4}). This expression immediately implies a divergence of the quantity $C_p\/T\\sim T^{-1+0.6}$, which can be suppressed by a magnetic field; the suppression restores the Fermi-liquid picture with a finite value of the electronic specific heat term $\\gamma$ (Fig.~\\ref{fig5}).\n\\item The thermal expansion of (MnFe)Si does not reveal any features that could be linked to a phase transition (Fig.~\\ref{fig6}). However, the thermal expansion coefficient $\\beta$ shows a low-temperature tail, which inclines to cross the temperature axis at a finite temperature, i.e. tends to become negative as it does in MnSi (Fig.~\\ref{fig7},\\ref{fig8}). It is this feature of $\\beta$ that causes the sudden low-temperature drop of the Gruneisen parameter, which otherwise would diverge at $T\\rightarrow 0$. An application of a magnetic field suppresses this behavior of the thermal expansion coefficient, thereby revealing its fluctuation nature (Fig.~\\ref{fig8}).\n\\item The resistivity of (MnFe)Si clearly demonstrates non-Fermi-liquid behavior with no features indicating a phase transition. However, the temperature derivative of the resistivity $d\\rho\/dT$ of (MnFe)Si has a non-trivial form, which indicates the existence of significant spin fluctuations. This should be related to the low-temperature ``tails'' of both the magnetic susceptibility and the thermal expansion coefficient.\n\\end{enumerate}\n\\section{Conclusion}\nSummarizing, the magnetic susceptibility in the form $(1\/\\chi - 1\/\\chi_0)^{-1}$ and the Gruneisen parameter $\\beta\/C_p$ of (MnFe)Si show diverging behavior, which is interrupted at about 5~K by factors linked to spin fluctuations analogous to those preceding the phase transition in MnSi (see Fig.~\\ref{fig2},\\ref{fig7},\\ref{fig8}). The specific heat divided by temperature, $C_p\/T$, of (MnFe)Si clearly demonstrates diverging behavior down to 2~K. The electrical resistivity of (MnFe)Si exhibits a non-Fermi-liquid character.\n\n\nGeneral conclusions: there is no thermodynamic evidence in favor of a second order phase transition in the 15\\% Fe-substituted MnSi. The trajectory corresponding to the present composition of (MnFe)Si is a critical one, i.e. it approaches a quantum critical point with decreasing temperature, which agrees with the conclusions of Refs.~\\cite{11,12}. However, the critical trajectory is in fact tangent to the phase transition line, and therefore some properties would inevitably be influenced by the cloud of helical spin fluctuations bordering the phase transition. This situation produces some sort of mixed state, rather than a pure quantum critical one, which is probably what was seen in the experiments of Ref.~\\cite{14}. \n\n\n\\section{Acknowledgements}\nWe express our gratitude to I.P. Zibrov and N.F. Borovikov for technical assistance.\nAEP and SMS greatly appreciate the financial support of the Russian Foundation for Basic Research (grant No. 
18-02-00183), the Russian Science Foundation (grant 17-12-01050). \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nUsually various types of diffusion processes are classified by analysis of the spread of the distance traveled by a random walker. If the mean square displacement grows like $\\langle [x-x(0)]^2 \\rangle \\propto t^\\delta$ with $\\delta<1$ the motion is called subdiffusive, in contrast to normal ($\\delta=1$) or superdiffusive ($\\delta>1$) situations. For a free Brownian particle moving in one dimension, a stochastic random force entering its equation of motion is assumed to be composed of a large number of independent identical pulses. If they posses a finite variance, then by virtue of the standard Central Limit Theorem (CLT) the distribution of their sum follows the Gaussian statistics. However, as it has been proved by L\\'evy and Khintchine, the CLT can be generalized for independent, identically distributed (i.i.d) variables characterized by non-finite variance or even non-finite mean value. With a L\\'evy forcing characterized by a stability index $\\alpha<2$ independent increments of the particle position sum up yielding $\\langle [x-x(0)]^2 \\rangle \\propto t^{2\/\\alpha}$ \\cite{bouchaud1990}, see below. Such\nenhanced, fast superdiffusive motion is observed in various real situations when a test particle is able to perform unusually large jumps \\cite{dubkov2008,shlesinger1995,metzler2004}. L\\'evy flights have been documented to describe motion of fluorescent probes in living polymers, tracer particles in rotating flows and cooled atoms in laser fields. They serve also as a paradigm of efficient searching strategies \\cite{Viswanathan1996,sims2008,reynolds2009} in social and environmental problems with some level of controversy \\cite{Edwards2007}.\n\nIn contrast, transport in porous, fractal-like media or relaxation kinetics in inhomogeneous materials are usually ultraslow, i.e. subdiffusive \\cite{klafter1987,metzler2000,metzler2004}. The most intriguing situations take place however, when both effects -- occurrence of long jumps and long waiting times for the next step -- are incorporated in the same scenario \\cite{zumofen1995}. The approach to this kind of anomalous motion is provided by continuous time random walks (CTRW) which assume that the steps of the walker occur at random times generated by a renewal process.\nIn particular, a mathematical idealization of a free Brownian motion (Wiener process $W(t)$) can be then derived as a limit (in distribution) of i.i.d random (Gaussian) jumps taken at infinitesimally short time intervals of non-random length. Other generalizations are also possible, e.g. $W(t)$ can be defined as a limit of random Gaussian jumps performed at random Poissonian times. The characteristic feature of the Gaussian Wiener process is the continuity of its sample paths. In other words, realizations (trajectories) of the Wiener process are continuous (although nowhere differentiable) \\cite{doob1942}. The process is also self-similar (scale invariant) which means that by rescaling $t'=\\lambda t$ and $W'(t)=\\lambda^{-1\/2}W(\\lambda t)$ another Wiener process with the same properties is obtained. Among scale invariant stable processes, the Wiener process is the only one which possesses finite variance \\cite{janicki1994,saichev1997,doob1942,dubkov2005b}. 
Moreover, since the correlation function of increments $\\Delta W(s)= W(t+s)-W(t)$ depends only on time difference $s$ and increments of non-overlapping times are statistically independent, the formal differentiation of $W(t)$ yields a white, memoryless Gaussian process \\cite{vankampen1981}:\n\\begin{equation}\n\\dot{W}(t)=\\xi(t),\\qquad \\langle \\xi(t)\\xi(t') \\rangle =\\delta(t-t')\n\\end{equation}\ncommonly used as a source of idealized environmental noises within the Langevin description\n\\begin{equation}\ndX(t)=f(X)dt + dW(t).\n\\end{equation}\nHere $f(X)$ stands for the drift term which in the case of a one-dimensional overdamped motion is directly related to the potential $V(X)$, i.e. $f(X)= -dV(X)\/dX$.\n\n\nIn more general terms the CTRW concept may asymptotically lead to non-Markov, space-time fractional noise $\\tilde{\\xi}(t)$, and in effect, to space-time fractional diffusion. For example, let us define $\n\\Delta \\tilde{W}(t)\\equiv \\Delta X(t)=\\sum^{N(t)}_{i=1}X_i,$\nwhere the number of summands $N(t)$ is statistically independent from $X_i$ and governed by a renewal process $\\sum^{N(t)}_{i=1}T_i\\leqslant t < \\sum^{N(t)+1}_{i=1}T_i$ with $t>0$.\nLet us assume further that $T_i$, $X_i$ belong to the domain of attraction of stable distributions, $T_i\\sim S_{\\nu,1}$ and $X_i\\sim S_{\\alpha,\\beta}$, whose corresponding characteristic functions $\\phi(k)= \\langle \\exp(ikS_{\\alpha,\\beta}) \\rangle=\\int^{\\infty}_{-\\infty}e^{ikx} l_{\\alpha,\\beta}(x;\\sigma) dx$, with the density $l_{\\alpha,\\beta}(x;\\sigma)$, are given by\n\\begin{equation}\n\\phi(k) = \\exp\\left[ -\\sigma^\\alpha|k|^\\alpha\\left( 1-i\\beta\\mathrm{sign}k\\tan\n\\frac{\\pi\\alpha}{2} \\right) \\right],\n\\label{eq:charakt1}\n\\end{equation}\nfor $\\alpha\\neq 1$\nand\n\\begin{equation}\n\\phi(k) = \\exp\\left[ -\\sigma|k|\\left( 1+i\\beta\\frac{2}{\\pi}\\mathrm{sign}k\\log|k| \\right) \\right].\n\\label{eq:charakt2}\n\\end{equation}\nfor $\\alpha=1$.\nHere the parameter $\\alpha\\in(0,2]$ denotes the stability index, yielding the asymptotic long tail power law for the $x$-distribution, which for $\\alpha<2$ is of the $|x|^{-(1+\\alpha)}$ type. The parameter $\\sigma$ ($\\sigma\\in(0,\\infty)$) characterizes the scale whereas $\\beta$ ($\\beta\\in[-1,1]$) defines an asymmetry (skewness) of the distribution.\n\nNote, that for $0<\\nu<1$, $\\beta=1$, the stable variable $S_{\\nu,1}$ is defined on positive semi-axis. Within the above formulation the counting process ${N(t)}$ satisfies\n\\begin{eqnarray}\n&& \\lim_{t\\rightarrow \\infty}\\mathrm{Prob}\\left\\{ \\frac{N(t)}{(t\/c)^{\\nu}}t\\right\\}\\nonumber \\\\\n&& = \\lim_{n\\rightarrow\\infty}\\mathrm{Prob}\\left\\{ \\sum_{i=1}^{[n]}T_i>\\frac{cn^{1\/\\nu}}{x^{1\/\\nu}}\\right\\} \\\\\n&& = \\lim_{n\\rightarrow\\infty}\\mathrm{Prob}\\left\\{ \\frac{1}{cn^{1\/\\nu}}\\sum_{i=1}^{[n]}T_i>\\frac{1}{x^{1\/\\nu}}\\right\\}\n = 1-L_{\\nu,1}(x^{-1\/\\nu}), \\nonumber\n\\end{eqnarray}\nwhere $[(t\/c)^{\\nu}x]$ denotes the integer part of the number $(t\/c)^{\\nu}x$ and $L_{\\alpha,\\beta}(x)$ stands for the stable distribution of random variable $S_{\\alpha,\\beta}$, i.e. $l_{\\alpha,\\beta}(x)=dL_{\\alpha,\\beta}(x)\/dx$.\nMoreover, since\n\\begin{equation}\n\\lim_{n\\rightarrow\\infty}\\mathrm{Prob}\\left\\{ \\frac{1}{c_1n^{1\/\\alpha}}\\sum_{i=1}^{n}X_it \\right \\}$, where $U(s)$ denotes a strictly increasing $\\nu$-stable process whose distribution $L_{\\nu,1}$ yields a Laplace transform $\\langle e^{-kU(s)} \\rangle =e^{-sk^{\\nu}}$. 
The parent process $\\tilde{X}(s)$ is composed of increments of symmetric $\\alpha$-stable motion described in an operational time $s$\n\\begin{equation}\nd\\tilde{X}(s)=-V'(\\tilde{X}(s))ds+dL_{\\alpha,0}(s),\n\\label{eq:langevineq}\n\\end{equation}\nand in every jump moment the relation $U(S_t)=t$ is fulfilled. The (inverse-time) subordinator $S_t$ is (in general) non-Markovian hence, as it will be shown, the diffusion process $\\tilde{X}(S_t)$ possesses also some degree of memory. The above setup has been recently proved \\cite{magdziarz2007b,magdziarz2008,magdziarz2007} to give a proper stochastic realization of the random process described otherwise by a fractional diffusion equation:\n\\begin{equation}\n \\frac{\\partial p(x,t)}{\\partial t}={}_{0}D^{1-\\nu}_{t}\\left[ \\frac{\\partial}{\\partial x} V'(x) + \\frac{\\partial^\\alpha}{\\partial |x|^\\alpha} \\right] p(x,t),\n\\label{eq:ffpe}\n\\end{equation}\nwith the initial condition $p(x,0)=\\delta(x)$. In the above equation ${}_{0}D^{1-\\nu}_{t}$ denotes the Riemannn-Liouville fractional derivative ${}_{0}D^{1-\\nu}_{t}=\\frac{\\partial}{\\partial t}{}_{0}D^{-\\nu}_{t}$ defined by the relation\n\\begin{equation}\n{}_{0}D^{1-\\nu}_{t}f(x,t)=\\frac{1}{\\Gamma(\\nu)}\\frac{\\partial}{\\partial t}\\int^{t}_0 dt'\\frac{f(x,t')}{(t-t')^{1-\\nu}}\n\\end{equation}\nand $\\frac{\\partial^{\\alpha}}{\\partial |x|{\\alpha}}$ stands for the Riesz fractional derivative with the Fourier transform ${\\cal{F}}[\\frac{\\partial^{\\alpha}}{\\partial |x|^{\\alpha}} f(x)]=-|k|^{\\alpha}\\hat{f}(x)$.\nEq.~(\\ref{eq:ffpe}) has been otherwise derived from a generalized Master equation \\cite{metzler1999}.\nThe formal solution to Eq.~(\\ref{eq:ffpe}) can be written \\cite{metzler1999} as:\n\\begin{equation}\np(x,t)=E_{\\nu}\\left (\\left[ \\frac{\\partial}{\\partial x} V'(x) + \\frac{\\partial^\\alpha}{\\partial |x|^\\alpha} \\right]t^{\\nu}\\right)p(x,0).\n\\end{equation}\nFor processes with survival function $\\Psi(t)=1-\\int_0^t\\psi(\\tau)d\\tau$ (cf. Eq.~(\\ref{eq:master})) given by the Mittag-Leffler function Eq.~(\\ref{eq:mittag}), this solution takes an explicit form\n\\cite{saichev1997,jespersen1999,metzler1999,scalas2006,germano2009}\n\\begin{equation}\np(x,t)\n=\\sum_{n=0}^{\\infty}\\frac{t^{\\nu n}}{n!}E^{(n)}_{\\nu}(-t^{\\nu})w_n(x)\n\\end{equation}\nwhere $E^{(n)}_{\\nu}(z)=\\frac{d^n}{dz^n}E_{\\nu}(z)$ and $w_n(x)\\propto l_{\\alpha,0}(x)$, see \\cite{scalas2004,scalas2006}.\n\n\nIn this paper, instead of investigating properties of an analytical solution to Eq.~(\\ref{eq:ffpe}), we switch to a Monte Carlo method \\cite{magdziarz2007b,magdziarz2008,magdziarz2007,gorenflo2002,meerschaert2004} which allows generating trajectories of the subordinated process $X(t)$ with the parent process $\\tilde{X}(s)$ in the potential free case, i.e. for $V(x)=0$. The assumed algorithm provides means to examine the competition between subdiffusion (controlled by a $\\nu$-parameter) and L\\'evy flights characterized by a stability index $\\alpha$. From the ensemble of simulated trajectories the estimator of the density $p(x,t)$ is reconstructed and statistical qualifiers (like quantiles) are derived and analyzed.\n\nAs mentioned, the studied process is $\\nu\/\\alpha$ self-similar (cf. Eq.~(\\ref{eq:scaling})). We further focus on examination of a special case for which $\\nu\/\\alpha=1\/2$. 
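The essence of the Monte Carlo scheme can be condensed into a few lines. The sketch below follows the subordination recipe of Refs. \\cite{magdziarz2007b,magdziarz2007} as we understand it: totally skewed $\\nu$-stable increments build the process $U(s)$ on an operational-time grid, the inverse subordinator $S_{t}$ is obtained by inversion of $U$, and the parent process of Eq. (\\ref{eq:langevineq}) with $V(x)=0$ is evaluated at $S_{t}$. The step sizes, the grid length and the use of Chambers-Mallows-Stuck generators are assumptions of this illustration and not necessarily the exact implementation used to produce the figures.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def symmetric_stable(alpha, size):
    # Chambers-Mallows-Stuck generator for symmetric alpha-stable increments (beta = 0)
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V)**(1.0 / alpha)
            * (np.cos((1.0 - alpha) * V) / W)**((1.0 - alpha) / alpha))

def positive_stable(nu, size):
    # totally skewed (beta = 1) nu-stable increments, 0 < nu < 1
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    a = nu * (V + np.pi / 2)
    return (np.sin(a) / np.cos(V)**(1.0 / nu)
            * (np.cos(V - a) / W)**((1.0 - nu) / nu))

def trajectory(alpha=1.6, nu=0.8, t_max=15.0, ds=1e-2):
    # free case V(x) = 0: parent process X~(s) and subordinator U(s) on an operational grid,
    # then X(t) = X~(S_t), with S_t approximated by the first grid point where U(s) > t
    n  = int(20 * t_max / ds)               # operational-time grid, generously long
    U  = np.concatenate(([0.0], np.cumsum(ds**(1.0 / nu)    * positive_stable(nu, n))))
    Xp = np.concatenate(([0.0], np.cumsum(ds**(1.0 / alpha) * symmetric_stable(alpha, n))))
    t  = np.linspace(0.0, t_max, 301)
    St = np.searchsorted(U, t, side='right')
    return t, Xp[np.clip(St, 0, n)]

t, x = trajectory()
print("X(t_max) for one realization:", x[-1])
\\end{verbatim}
Repeating the last two lines over many realizations and histogramming the positions gives the estimators of $p(x,t)$ discussed below.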
As an exemplary values of model parameters we choose $\\nu=1,\\alpha=2$ (Markovian Brownian diffusion) and $\\nu=0.8,\\alpha=1.6$ (subordination of non-Markovian sub-diffusion with L\\'evy flights). Additionally we use $\\nu=1,\\alpha=1.6$ and $\\nu=0.8,\\alpha=2$ as Markovian and non-Markovian counterparts of main cases analyzed. Fig.~\\ref{fig:trajectories} compares trajectories for all exemplary values of $\\nu$ and $\\alpha$. Straight horizontal lines (for $\\nu=0.8$) correspond to particle trapping while straight vertical lines (for $\\alpha=1.6$) correspond to L\\'evy flights. The straight lines manifest anomalous character of diffusive process.\n\n\nTo further verify correctness of the implemented version of the subordination algorithm \\cite{{magdziarz2007b}}, we have performed extensive numerical tests. In particular, some of the estimated probability densities have been compared with their analytical representations and the perfect match between numerical data and analytical results have been found. Fig.~\\ref{fig:numeric-theory} displays numerical estimators of PDFs and analytical results for $\\nu=1$ with $\\alpha=2$ (Gaussian case, left top panel), $\\nu=1$ with $\\alpha=1$ (Cauchy case, right top panel), $\\nu=1\/2$ with $\\alpha=1$ (left bottom panel) and $\\nu=2\/3$ with $\\alpha=2$ (right bottom panel). For those last two cases, the expressions for $p(x,t)$ has been derived \\cite{saichev1997}, starting from the series representation given by Eq.~(\\ref{eq:series}). For $\\nu=1\/2,\\;\\alpha=1$ the appropriate formula reads\n\\begin{equation}\np(x,t)=-\\frac{1}{2\\pi^{3\/2}\\sqrt{t}}\\exp\\left(\\frac{x^2}{4t}\\right)\\mathrm{Ei}\\left(-\\frac{x^2}{4t}\\right),\n\\label{eq:pa1n12}\n\\end{equation}\nwhile for $\\nu=2\/3,\\;\\alpha=2$ the probability density is\n\\begin{equation}\np(x,t)=\\frac{3^{2\/3}}{2t^{1\/3}}\\mathrm{Ai}\\left[ \\frac{|x|}{(3t)^{1\/3}} \\right].\n \\label{eq:pa2n23}\n\\end{equation}\n$\\mathrm{Ei}(x)$ and $\\mathrm{Ai}(x)$ are the integral exponential function and the Airy function respectively.\nWe have also compared results of simulations and Eq.~(\\ref{eq:closedformula}) for other sets of parameters $\\nu$, $\\alpha$. Also there, the excellent agreement has been detected (results not shown).\n\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{trajectories.eps}\n\\caption{Sample trajectories for $\\nu=1, \\alpha=2$ (left top panel), $\\nu=1, \\alpha=1.6$ (left bottom panel), $\\nu=0.8, \\alpha=2$ (right top panel) and $\\nu=0.8, \\alpha=1.6$ (right bottom panel). Eq.~(\\ref{eq:langevineq}) was numerically approximated by subordination techniques with the time step of integration $\\Delta t=10^{-2}$ and averaged over $N=10^6$ realizations.}\n\\label{fig:trajectories}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{numeric-theory.eps}\n\\caption{(Color online) PDFs for $\\nu=1$ with $\\alpha=2$ (left top panel), $\\nu=1$ with $\\alpha=1$ (right top panel), $\\nu=1\/2$ with $\\alpha=1$ (left bottom panel) and $\\nu=2\/3$ with $\\alpha=2$ (right bottom panel). Eq.~(\\ref{eq:langevineq}) was numerically approximated by subordination techniques with the time step of integration $\\Delta t=10^{-3}$ and averaged over $N=10^6$ realizations. Solid lines present theoretical densities: Gaussian (left top panel), Cauchy (right top panel) and the $p(x,t)$ given by Eqs.~(\\ref{eq:pa1n12}) (left bottom panel) and (\\ref{eq:pa2n23}) (right bottom panel). 
Note the semi-log scale in the bottom panels.}\n\\label{fig:numeric-theory}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{hist-cdf.eps}\n\\caption{(Color online) PDFs (top panel) and $1-\\mathrm{CDF}(x,t)$ (bottom panel) at $t=2$ (left panel), $t=15$ (right panel). Simulation parameters as in Fig.~\\ref{fig:trajectories}. Eq.~(\\ref{eq:langevineq}) was numerically approximated by subordination techniques with the time step of integration $\\Delta t=10^{-3}$ and averaged over $N=10^6$ realizations. Solid lines present theoretical asymptotic $x^{-1.6}$ scaling representative for $\\alpha=1.6$ and $\\nu=1$, i.e. for Markovian L\\'evy flight.}\n\\label{fig:histograms}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{hist-cdf12.eps}\n\\caption{(Color online) PDFs (top panel) and $1-\\mathrm{CDF}(x,t)$ (bottom panel) at $t=2$ (left panel), $t=15$ (right panel). Simulation parameters as in Fig.~\\ref{fig:trajectories}. Eq.~(\\ref{eq:langevineq}) was numerically approximated by subordination techniques with the time step of integration $\\Delta t=10^{-2}$ and averaged over $N=10^6$ realizations. Solid lines present theoretical asymptotic $x^{-1.2}$ scaling representative for $\\alpha=1.2$ and $\\nu=1$, i.e. for Markovian L\\'evy flight.}\n\\label{fig:histograms12}\n\\end{center}\n\\end{figure}\n\n\n\n\n\nFigure~\\ref{fig:histograms} and \\ref{fig:histograms12} display time-dependent probability densities $p(x,t)$ and corresponding cumulative distribution functions ($CDF(x,t)=\\int_{-\\infty}^xp(x',t)dx'$) for ``short'' and for, approximately, an order of magnitude ``longer'' times. The persistent cusp \\cite{sokolov2002} located at $x=0$ is a finger-print of the initial condition $p(x,0)=\\delta(x)$ and is typically recorded for subdiffusion induced by the subordinator $S_t$ with $\\nu<1$. For Markov L\\'evy-Wiener process \\cite{dybiec2006,denisov2008} for which the characteristic exponent $\\nu=1$, the cusp disappears and PDFs of the process $\\tilde{X}(S_t)$ become smooth at $x=0$. In particular, for the Markovian Gaussian case ($\\nu=1$, $\\alpha=2$) corresponding to a standard Wiener diffusion, PDF perfectly coincides with the analytical normal density $N(0,\\sqrt{t})$.\n\n\nThe presence of L\\'evy flights is also well visible in the power-law asymptotic of CDF, see bottom panels of Figs.~\\ref{fig:histograms} and \\ref{fig:histograms12}. Indeed, for $\\alpha<2$ independently of the actual value of the subdiffusion parameter $\\nu$ and at arbitrary time, $p(x,t)\\propto |x|^{-(\\alpha+1)}$ for $x\\rightarrow \\infty$. Furthermore, all PDFs are symmetric with median and modal values located at the origin.\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{stdev-only-divided.eps}\n\\caption{(Color online) Time dependence of $\\mathrm{var}(x)\/t$. Straight lines present $t^{2\\nu\/\\alpha-1}$ theoretical scaling (see Eq.~(\\ref{eq:variancescaling}) and explanation in the text). Simulation parameters as in Fig.~\\ref{fig:trajectories}.}\n\\label{fig:stdev}\n\\end{center}\n\\end{figure}\n\n\n\n\n\\section{Scaling properties of moments}\n\nThe $\\nu\/\\alpha$ self-similar character of the process (cf. Eq.~(\\ref{eq:scaling})) is an outcome of allowed long flights and long breaks between successive steps. 
In consequence, the whole distribution scales as a function of $x\/t^{\\nu\/\\alpha}$ with the width of the distribution growing superdiffusively for $\\alpha<2\\nu$ and subdiffusively for $\\alpha > 2\\nu$. This $t^{\\nu\/\\alpha}$ scaling is also clearly observable in the behavior of the standard deviation and quantiles $q_p(t)$, defined via the relation $\\mathrm{Prob}\\left \\{X(t)\\leqslant q_p(t)\\right \\}=p$, see Figs.~\\ref{fig:stdev}, \\ref{fig:quantiles}. For random walks subject to superdiffusive, long-ranging trajectories ($\\alpha=1.6$), the asymptotic scaling is observed for sufficiently long times, cf. Fig.~\\ref{fig:stdev}. On the other hand, normal (Gaussian) distribution of jumps superimposed on subdiffusive motion of trapped particles ($\\nu =0.8$) clearly shows rapid convergence to the $\\nu\/\\alpha$ law. Notably, both sets $\\nu=1,\\alpha=2$ and $\\nu=0.8,\\alpha=1.6$ lead to the same scaling $t^{1\/2}$, although in the case $\\nu=0.8,\\alpha=1.6$, the process $X(t)=\\tilde{X}(S_t)$ is non-Markov, in contrast to a standard Gaussian diffusion obtained for $\\nu=1, \\alpha=2$. Thus, the competition between subdiffusion and L\\'evy flights questions standard ways of discrimination between normal (Markov, $\\langle [x-x(0)]^2 \\rangle \\propto t$) and anomalous (generally, non-Markov $\\langle [x-x(0)]^2 \\rangle \\propto t^\\delta$) diffusion processes.\n\nIndeed, for $\\nu=1$, the process $X(t)$ is not only $1\/\\alpha$ self-similar but it is also memoryless (i.e. Markovian). In such a case, the asymptotic PDF $p(x,t)$ is $\\alpha$-stable \\cite{janicki1994,weron1995,weron1996,dybiec2006} with the scale parameter $\\sigma$ growing with time like $t^{1\/\\alpha}$, cf. Eq.~(\\ref{eq:charakt1}). This is no longer true for subordination with $\\nu<1$ when the underlying process becomes non-Markovian and the spread of the distribution follows the $t^{\\nu\/\\alpha}$-scaling (cf. Fig.~\\ref{fig:quantiles}, right panels).\n\n\nSome additional care should be taken when discussing the scaling character of moments of $p(x,t)$ \\cite{bouchaud1990,fogedby1998,jespersen1999}. Clearly, L\\'evy distribution (with $\\alpha<2$) of jump lengths leads to infinite second moment (see Eqs.~(\\ref{eq:charakt1}) and (\\ref{eq:charakt2}))\n\\begin{equation}\n\\langle x^2 \\rangle = \\int_{-\\infty}^{\\infty} x^2 l_{\\alpha,\\beta}(x;\\sigma) dx = \\infty,\n\\end{equation}\nirrespectively of time $t$.\nMoreover, the mean value $\\langle x \\rangle$ of stable variables is finite for $\\alpha > 1$ only ($\\langle x \\rangle=0$ for symmetric case under investigation). Those observations seem to contradict demonstration of the scaling visible in Fig.~\\ref{fig:stdev} where standard deviations derived from ensembles of trajectories are finite and grow in time according to a power law.\nA nice explanation of this behavior can be given following argumentation of Bouchaud and Georges \\cite{bouchaud1990}: Every finite but otherwise arbitrarily long trajectory of a L\\'evy flight, i.e. the stochastic process underlying Eq.~(\\ref{eq:ffpe}) with $\\nu=1$, is a sum of finite number of independent stable random variables. Among all summed $N$ stable random variables there is the largest one, let say $l_c(N)$. The asymptotic form of a stable densities\n\\begin{equation}\nl_{\\alpha,\\beta}(x;\\sigma) \\propto x^{-(1+\\alpha)},\n\\label{eq:asymptotics}\n\\end{equation}\ntogether with the estimate for $l_c(N)$ allow one to estimate how standard deviations grows with a number of jumps $N$. 
In fact, the largest value $l_c(N)$ can be estimated from the condition\n\\begin{equation}\nN\\int_{l_c(N)}^{\\infty}l_{\\alpha,\\beta}(x)dx\\approx 1.\n\\label{eq:trials}\n\\end{equation}\nwhich locates most of the ``probability mass'' in events not exceeding the step length $l_c$ (otherwise, the relation states that $l_c(N)$ occurred at most once in $N$ trials, \\cite{bouchaud1990}). Alternatively, $l_c(N)$ can be estimated as a value which maximizes probability that the largest number chosen in $N$ trials is $l_c$\n\\begin{equation}\nl_{\\alpha,\\beta}(l_c)\\left[ \\int\\limits_0^{l_c} l_{\\alpha,\\beta}(x)dx \\right]^{N-1}=l_{\\alpha,\\beta}(l_c)\\left[ 1 - \\int\\limits_{l_c}^\\infty l_{\\alpha,\\beta}(x)dx \\right]^{N-1}.\n\\label{eq:minimalization}\n\\end{equation}\nBy use of Eqs.~(\\ref{eq:trials}) and (\\ref{eq:asymptotics}), simple integration leads to\n\\begin{equation}\nl_c(N)\\propto N^{1\/\\alpha}.\n\\label{eq:threshold}\n\\end{equation}\nDue to finite, but otherwise arbitrarily large number of trials $N$, the effective distributions becomes restricted to the finite domain which size is controlled by Eq.~(\\ref{eq:threshold}). Using the estimated threshold, see Eq.~(\\ref{eq:threshold}) and asymptotic form of stable densities, see Eq.~(\\ref{eq:asymptotics}), it is possible to derive an estimate of $\\langle x^2 \\rangle$\n\\begin{equation}\n\\langle x^2 \\rangle \\approx \\int^{l_c}x^2 l_{\\alpha,\\beta}(x) dx\\approx \\left( N^{1\/\\alpha} \\right)^{2-\\alpha}=N^{2\/\\alpha-1}.\n\\end{equation}\nFinally, after $N$ jumps\n\\begin{equation}\n\\langle x^2 \\rangle_N=N\\langle x^2 \\rangle \\propto N^{2\/\\alpha}.\n\\label{eq:variancescaling}\n\\end{equation}\nConsequently, for L\\'evy flights standard deviation grows like a power law with the number of jumps $N$.\nIn our CTRW scenario incorporating competition between long rests and long jumps, the number of jumps $N=N(t)$ grows sublinearly in time, $N\\propto t^{\\nu}$, leading effectively to $\\langle x^2 \\rangle_N\\propto t^{2\\nu\/\\alpha}$ with $0<\\nu<1$ and $0<\\alpha<2$. Since in any experimental realization tails of the L\\'evy distributions are almost inaccessible and therefore effectively truncated, analyzed sample trajectories follow the pattern of the $t^{\\nu\/\\alpha}$ scaling, which is well documented in Fig.~\\ref{fig:stdev}.\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{quantiles.eps}\n\\caption{(Color online) Quantiles: $q_{0.9}$, $q_{0.8}$, $q_{0.7}$, $q_{0.6}$ (from top to bottom) for $\\nu=1, \\alpha=2$ (left top panel), $\\nu=1, \\alpha=1.6$ (left bottom panel), $\\nu=0.8, \\alpha=2$ (right top panel) and $\\nu=0.8, \\alpha=1.6$ (right bottom panel). The straight line presents theoretical $t^{\\nu\/\\alpha}$ scaling. Simulation parameters as in Fig.~\\ref{fig:trajectories}.}\n\\label{fig:quantiles}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\\section{Discriminating memory effects}\n\nClearly, by construction, for $\\nu < 1$ the limiting ``anomalous diffusion'' process $X(t)$ is non-Markov. This feature is however non-transparent when discussing statistical properties of the process by analyzing ensemble-averaged mean-square displacement for the parameters set $\\nu\/\\alpha=1\/2$, (e.g. 
$\\nu=0.8,\\alpha=1.6$) when contrary to what might be expected, $\\langle [x-x(0)]^2 \\rangle \\propto t$, similarly to the standard, Markov, Gaussian case.\nThis observation implies a different problem to be brought about: Given an experimental set of data, say time series representative for a given process, how can one interpret its statistical properties and conclude about anomalous (subdiffusive) character of underlying kinetics? The similar question has been carried out in a series of papers discussing use of transport coefficients in systems exhibiting weak ergodicity breaking (see \\cite{he2008} and references therein).\n\nTo further elucidate the nature of simulated data sets for $\\nu\/\\alpha=1\/2$, we have adhered to and tested formal criteria \\cite{vankampen1981,fulinski1998,dybiec2009b,dybiec-sparre} defining the Markov process. The standard formalism of space- and time- continuous Markov processes requires fulfillment of the Chapman-Kolmogorov equation ($t_1>t_2>t_3$)\n\\begin{equation}\n P(x_1,t_1|x_3,t_3) = \\sum_{x_2}P(x_1,t_1|x_2,t_2)P(x_2,t_2|x_3,t_3).\n\\label{eq:ch-k}\n\\end{equation}\nalong with the constraint for conditional probabilities which for the ``memoryless'' process should not depend on its history. In particular, for a hierarchy of times $t_1>t_2>t_3$, the following relation has to be satisfied\n\\begin{equation}\n P(x_1,t_1|x_2,t_2)=P(x_1,t_1|x_2,t_2,x_3,t_3).\n\\label{eq:cond}\n\\end{equation}\nEqs.~(\\ref{eq:ch-k}) and (\\ref{eq:cond}) have been used to directly verify whether the process under consideration is of the Markovian or non-Markovian type. From Eq.~(\\ref{eq:ch-k}) squared cumulative deviation $Q^2$ between LHS and RHS of the Chapman-Kolmogorov relation summed over initial ($x_3$) and final ($x_1$) states has been calculated \\cite{fulinski1998}\n\\begin{eqnarray}\n Q^2 & = & \\sum\\limits_{x_1,x_3}\\bigg[P(x_1,t_1|x_3,t_3) \\nonumber \\\\\n & & \\qquad\\quad - \\sum\\limits_{x_2}P(x_1,t_1|x_2,t_2)P(x_2,t_2|x_3,t_3)\\bigg]^2.\n\\label{eq:ch-k-averaged}\n\\end{eqnarray}\nThe same procedure can be applied to Eq.~(\\ref{eq:cond}) leading to\n\\begin{eqnarray}\nM^2 & = & \\sum_{x_1,x_2,x_3}\\Big[P(x_1,t_1|x_2,t_2) \\nonumber \\\\\n& & \\qquad\\quad - P(x_1,t_1|x_2,t_2,x_3,t_3)\\Big]^2.\n\\label{eq:cond-averaged}\n\\end{eqnarray}\n\n\nFigure~\\ref{fig:ch-k} presents evaluation of $Q^2$ (top panel) and $M^2$ (bottom panel) for $t_1=27$ and $t_3=6$ as a function of the intermediate time $t_2=\\{7,8,9,\\dots,25,26\\}$. It is seen that deviations from the Chapman-Kolmogorov identity are well registered for processes with long rests when subdiffusion wins competition with L\\'evy flights at the level of sample paths.\nThe tests based on $Q^2$ (see Eq.~(\\ref{eq:ch-k-averaged})) and $M^2$ (see Eq.~(\\ref{eq:cond-averaged})) have comparative character. The deviations $Q^2$ and $M^2$ are about three order of magnitudes higher for the parameter sets $\\nu=0.8, \\alpha=2.0$ and $\\nu=0.8, \\alpha=1.6$ than $Q^2$ and $M^2$ values for the Markovian counterparts with $\\nu=1$ and $\\alpha=2$, $\\alpha=1.6$, respectively. 
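\nAs an illustration of how this test can be carried out in practice, the sketch below estimates $Q^2$ from an ensemble of simulated series by binning the recorded values at the three times $t_1>t_2>t_3$ and forming the empirical conditional probabilities entering Eq.~(\\ref{eq:ch-k-averaged}); the analogous statistic $M^2$ additionally requires the three-point histogram. The binning range and the use of increments rather than positions follow the procedure described above, but the listing is a simplified illustration and not the exact code used to produce Fig.~\\ref{fig:ch-k}.\n\\begin{verbatim}\nimport numpy as np\n\ndef q2_statistic(y1, y2, y3, nbins=40, lim=10.0):\n    # y1, y2, y3: ensemble values (e.g. increments x(t+1)-x(t)) at the\n    # times t1 > t2 > t3, one entry per simulated trajectory.\n    edges = np.linspace(-lim, lim, nbins + 1)\n    n = float(len(y1))\n    # joint histograms -> joint probabilities\n    P13 = np.histogram2d(y1, y3, bins=[edges, edges])[0] / n\n    P12 = np.histogram2d(y1, y2, bins=[edges, edges])[0] / n\n    P23 = np.histogram2d(y2, y3, bins=[edges, edges])[0] / n\n    # marginals, used to turn joints into conditionals P(a|b)=P(a,b)/P(b)\n    p2 = P12.sum(axis=0)   # P(x2)\n    p3 = P13.sum(axis=0)   # P(x3)\n    with np.errstate(divide='ignore', invalid='ignore'):\n        C13 = np.where(p3 > 0, P13 / p3, 0.0)   # P(x1|x3)\n        C12 = np.where(p2 > 0, P12 / p2, 0.0)   # P(x1|x2)\n        C23 = np.where(p3 > 0, P23 / p3, 0.0)   # P(x2|x3)\n    # Chapman-Kolmogorov prediction and squared deviation over all bins\n    CK = C12 @ C23\n    return np.sum((C13 - CK) ** 2)\n\\end{verbatim}\n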
Performed analysis clearly demonstrates non-Markovian character of the limiting diffusion process for $\\nu<1$ and the findings indicate that scaling of PDF, $p(x,t)=t^{-1\/2}p(xt^{-1\/2},1)$ or, in consequence, scaling of the variance\n$\\langle [x-x(0)]^2 \\rangle \\propto t$ and interquantile distances (see Fig.~\\ref{fig:quantiles}) do not discriminate satisfactory between ``normal'' and ``anomalous'' diffusive motions \\cite{zumofen1995}. In fact, linear in time spread of the second moment does not necessarily characterize normal diffusion process. Instead, it can be an outcome of a special interplay between subdiffusion and L\\'evy flights combined in the subordination $X(t)=\\tilde{X}(S_t)$. The competition between both processes is better displayed in analyzed sample trajectories $X(t)$ where combination of long jumps and long trapping times can be detected, see Fig.~\\ref{fig:trajectories}.\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{ch-k-q-m.eps}\n\\caption{(Color online) Squared sum of deviations $Q^2$, see Eq.~(\\ref{eq:ch-k-averaged}), (top panel) and $M^2$, see Eq.~(\\ref{eq:cond-averaged}), (bottom panel) for $t_1=27$, $t_3=6$ as a function of the intermediate time $t_2$. 2D histograms were created on the $[-10,10]^2$ domain. 3D histograms were created on the $[-10,10]^3$ domain. Due to non-stationary character of the studied process the analysis is performed for the series of increments $x(t+1)-x(t)$. Simulation parameters as in Fig.~\\ref{fig:trajectories}.}\n\\label{fig:ch-k}\n\\end{center}\n\\end{figure}\n\n\\section{Conclusions}\n\nSummarizing, by implementing Monte Carlo simulations which allow visualization of stochastic trajectories subjected to subdiffusion (via time-subordination) and superdiffusive L\\'evy flights (via extremely long jumps in space), we have demonstrated that the standard measure used to\ndiscriminate between anomalous and normal behavior cannot be applied\nstraightforwardly. The mean square displacement alone, as derived from the (finite) set of time-series data does not provide full\ninformation about the underlying system dynamics. In order\nto get proper insight into the character of the motion, it is necessary to perform analysis of individual trajectories.\nSubordination which describes a transformation between a physical time and an operational time of the system \\cite{sokolov2000,magdziarz2007b} is responsible for unusual statistical properties of waiting times between subsequent steps of the motion. In turn,\nL\\'evy flights are registered in instantaneous long jumps performed by a walker. Super- or sub- linear character of the motion in physical time is dictated by a coarse-graining procedure, in which fractional time derivative with the index $\\nu$ combines with a fractional spatial derivative with the index $\\alpha$.\nSuch situations may occur in motion on random potential surfaces where the presence of vacancies and other defects introduces both --\nspatial and temporal disorder \\cite{sancho2004}.\nWe believe that the issue of the interplay of super- and sub-diffusion with a crossover resulting in a pseudo-normal paradoxical diffusion may be of special interest in the context of e.g. 
facilitated target location of proteins on folding heteropolymers \\cite{belik2007} or in analysis of single particle tracking experiments \\cite{lubelski2008,he2008,metzler2009,bronstein2009}, where the hidden subdiffusion process can be masked and appear as a normal diffusion.\n\n\n\n\\begin{acknowledgments}\nThis project has been supported by the Marie Curie TOK COCOS grant (6th EU Framework\nProgram under Contract No. MTKD-CT-2004-517186) and (BD) by the Foundation for Polish Science.\nThe authors acknowledge many fruitfull discussions with Andrzej Fuli\\'nski, Marcin Magdziarz and Aleksander Weron.\n\\end{acknowledgments}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe European Space Agency's {\\it Gaia}\\ mission (\\citealt{gaiaMission}) Early Data Release 3 (EDR3) provides photometry in the $G$, $G_{BP}$ and $G_{RP}$ bands, as well as precise astrometry and parallax measurements for over 1.5 billion sources (\\citealt{gaiaEDR3,gaiaEDR3Astro,gaiaEDR3Phot,gaiaEDR3Plx}). Although the absolute number of sources is comparable to {\\it Gaia}\\ Data Release 2 (DR2; \\citealt{gaiaDR2,gaiaDR2Astro,gaiaDR2Cat,gaiaDR2Phot,gaiaDR2Plx}), the astrometric and photometric precision has drastically improved thanks to a 3-fold increase in the celestial reference sources and longer data collection baseline (22 vs 34 months), as well as an updated and improved processing pipeline (\\citealt{gaiaEDR3Astro}). This quantity and quality is defining a new standard for Galactic studies. The more recent {\\it Gaia}\\ Data Release 3 (DR3) augments EDR3 by providing additional information on some detected targets such as variability indicators, radial velocity, binary star information, as well as low-resolution spectra for >200 million sources (e.g. \\citealt{gaiaDR3cat,gaiaDR3var,gaiaDR3rv,gaiaDR3gold,deangeli22}). \n\nThe INT\/WFC Photometric H$\\alpha$ Survey of the Northern Galactic Plane (IPHAS; \\citealt{iphas}) is the first comprehensive digital survey of the northern Galactic disc ($|b|<5^\\circ$), covering a Galactic longitude range of $29^\\circ < l < 215^\\circ$. The IPHAS observations are obtained using the Wide Field Camera (WFC) at the prime focus of the 2.5m Isaac Newton Telescope (INT) on La Palma, Spain. IPHAS images are taken through three filters: a narrow-band H$\\alpha$, and two broad-band Sloan $r$ and $i$ filters. The UV-Excess Survey of the northern Galactic Plane (UVEX; \\citealt{uvex}) has covered the same footprint as IPHAS using the same WFC on the INT in the two broad-band Sloan $r$ and $g$ filters as well as a Sloan $u$-like $U_{RGO}$ filter. Exposures are set to reach an $r$-band depth of $\\approx 21$ in both surveys. Pipeline data reduction for both surveys is handled by the Cambridge Astronomical Survey Unit (CASU). Further details on the data acquisition and pipeline reduction can be found in \\cite{iphas}, \\cite{uvex} and \\cite{iphasEDR}. A defining feature of both these surveys are the quasi-contemporaneous observations of each filter set so as to recover reliable colour information for sources without the contributing effects of variability on timescales longer than $\\approx 10$ minutes. This same characteristic is also shared by the {\\it Gaia}\\ mission. Recently \\cite{igaps} has produced the IGAPS merged catalogue of IPHAS and UVEX observations, while \\cite{greimel21} provides the IGAPS images. 
Additionally to merging the sources observed by both IPHAS and UVEX, a global photometric calibration has been performed on IGAPS, which resulted in photometry being internally reproducible to 0.02 magnitudes (up to magnitudes of $\\approx 18-19$, depending on the band) for all except the $U_{RGO}$ band. Furthermore, this 174-column catalogue provides astrometry for both the IPHAS and UVEX observations as well as the observation epoch, which allows to perform a precise cross-match with {\\it Gaia}\\ given the proper motion information provided. The astrometric solution of IGAPS is based on {\\it Gaia}\\ DR2. Although no per source errors are available, the astrometric solution yields typical astrometric errors in the $r$ band of 38mas.\n\nThe United Kingdom Infrared Deep Sky Survey (UKIDSS; \\citealt{ukidss}) is composed of five public surveys of varying depth and area coverage which began in May 2005. UKIDSS uses the Wide Field Camera (WFCAM, see \\citealt{casali07}) on the United Kingdom Infrared Telescope (UKIRT). All data is reduced and calibrated at the Cambridge Astronomical Survey Unit (CASU) using a dedicated software pipeline and are then transferred to the WFCAM Science Archive (WSA; \\citealt{hambly08}) in Edinburgh. There, the data are ingested and detections in the different passbands are merged. The UKIDSS Galactic Plane Survey (GPS; \\citealt{ukidssGPS}) is one of the five UKIDSS public surveys. UKIDSS GPS covers most of the northern Galactic plane in the $J$, $H$ and $K$ filters for objects with declination less than 60 degrees, and contains in excess of a billion sources. We use in this work UKIDSS\/GPS Data Release 11 (DR11). Similarly to IGAPS, no per source errors are available, but the astrometric solution of UKIDSS based on {\\it Gaia}\\ DR2 yields a typical astrometric error of 90 mas. \n \n\\cite{gIPHAS} described and provided a sub-arcsecond cross-match of {\\it Gaia}\\ DR2 against IPHAS. The resulting value-added catalogue provided additional precise photometry for close to 8 million sources in the northern Galactic plane in the $r$, $i$, and H$\\alpha$\\ bands. This paper describes a sub-arcsecond cross-match between {\\it Gaia}\/DR3, IGAPS and UKIDSS GPS. Similarly to \\cite{gIPHAS} this cross-match of northern Galactic plane surveys (XGAPS) takes into account the different epochs of observations of all surveys and the {\\it Gaia}\\ astrometric information (including proper motions) to achieve sub-arcsecond precision when cross-matching the various surveys. XGAPS provides photometry in up to 9 photometric bands ($U$, $g$, $r$, $i$, H$\\alpha$, $J$, $H$, $K$, $BP$, $RP$ and $G$) for 33,987,180 sources. XGAPS also provides a quality flag indicating the reliability of the {\\it Gaia}\\ astrometric solution for each source, which has been inferred through the use of Random Forests (\\citealt{RF}). Section \\ref{sec:crossMatch} describes our cross-matching procedure, including the preliminary selection cuts applied to all datasets. Section \\ref{sec:RF} describes the machine learning model (using Random Forests) to train and select sources from the XGAPS catalogue which can be considered to have reliable {\\it Gaia}\\ astrometry, while Section \\ref{sec:catalogue} describes a potential application for selecting blue-excess sources for spectroscopic follow-up. 
Finally, conclusions are drawn in Section \\ref{sec:conclusion}, and the catalogue format is summarised in the appendix.\n\n\n\\section{Cross-matching Gaia with IGAPS and UKIDSS} \\label{sec:crossMatch}\n\nThe aim of XGAPS is to cross-match all sources detected in IGAPS (either IPHAS or UVEX) to {\\it Gaia}\\ DR3, and as a second step cross-match those sources to UKIDSS. The cross-match is restricted to sources with a significant {\\it Gaia}\\ DR3 parallax detection and IGAPS sources identified as being stellar-like. \n\n\\subsection{Selection cuts} \\label{sec:selection}\nBefore the cross-match, some selection cuts are applied to the master catalogues.\n\nFrom {\\it Gaia}\\ DR3 only objects satisfying the following are selected:\n\\begin{itemize}\n\\item Are within an area slightly larger than the IGAPS footprint;\n\\item Have a signal-to-noise parallax measurement above 3 (\\texttt{parallax\\_over\\_error}>3).\n\\end{itemize}\n\\noindent This results in 41,572,231 sources. For reference, the removal of the signal-to-noise limits would result in 240,725,104 {\\it Gaia}\\ DR3 sources within the IGAPS footprint. The parallax signal-to-noise limit ensures distances up to 1--1.5 kpc are well covered.\n\nBecause IGAPS is already a merge between IPHAS and UVEX, the selection cuts are applied to the individual surveys. For IPHAS detections, sources are retained only if the $r$, $i$ and H$\\alpha$\\ detections are not flagged as either saturated, vignetted, contaminated by bad pixels, flagged as noise-like, or truncated at the edge of the CCD. For UVEX the same cut as IPHAS is applied to the $U$, $g$, $r$ detections with the additional constraint that detections are not located in the degraded area of the $g$-band filter. Of the 295.4 million sources in IGAPS, 212,378,160 are retained through the IPHAS selection cuts and 221,495,812 are retained through the UVEX ones.\n\nFinally, the UKIDSS Galactic Plane Survey Point Source Catalogue contains 235,696,744 sources within the surveyed area. A more detailed analysis of the possibility of mismatches in crowded fields has already been presented in \\cite{gIPHAS}. There, an upper limit of 0.1\\% was found on the fraction of mismatches associated with their selection cut of \\texttt{phot\\_g\\_mean\\_flux\\_over\\_error}>5 in some of the most crowded regions of the Galactic plane, mostly affecting the faintest sources. For XGAPS it is expected that the number of mismatches is even lower than the 0.1\\% mismatch fraction quoted in \\cite{gIPHAS} as the selection cut now includes many more {\\it Gaia}\\ sources. The next section also introduces an additional quality flag that can be used to further clean erroneous matches and\/or targets that have spurious astrometric solutions.\n\n\n\\section{Cleaning XGAPS with Random Forests} \\label{sec:RF}\nThe left panel of Fig. \\ref{fig:cmd1} shows a colour-magnitude diagram (CMD) using the {\\it Gaia}-based colours for all cross-matched targets as described in Section \\ref{sec:crossMatch}. The distances used to convert apparent to absolute magnitudes have been inferred via $M = m+5+5 \\log_{10}(\\varpi\/1000)$, where $M$ and $m$ are the absolute and apparent magnitudes respectively, and $\\varpi$ the parallax in milliarcseconds provided by {\\it Gaia}\\ DR3. \\cite{gaiaEDR3Plx} provide a correction to the $\\varpi$ measurements to account for the zero-point bias. This correction is not applied here, and neither is extinction, but users of XGAPS can do so through the available code provided by \\cite{gaiaEDR3Plx}. 
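\nFor reference, the conversion used to build these CMDs can be reproduced with the minimal sketch below; the column names (\\texttt{phot\\_g\\_mean\\_mag}, \\texttt{parallax} in mas) follow the {\\it Gaia}\\ archive convention, and applying the zero-point correction or an extinction term would simply modify the parallax or the resulting magnitude.\n\\begin{verbatim}\nimport numpy as np\n\ndef absolute_mag(m_apparent, parallax_mas):\n    # M = m + 5 + 5*log10(parallax/1000), with the parallax in mas;\n    # equivalent to M = m - 5*log10(d_pc) + 5 for d_pc = 1000/parallax.\n    return m_apparent + 5.0 + 5.0 * np.log10(parallax_mas / 1000.0)\n\n# example: G = 15.0 mag at parallax = 2 mas (d ~ 500 pc) -> M_G ~ 6.5\nprint(absolute_mag(15.0, 2.0))\n\\end{verbatim}\n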
\n\nAs can be seen from the left panel of Fig. \\ref{fig:cmd1}, the CMD appears to be ``polluted'' by spurious sources. This is particularly evident in the regions between the main sequence and white dwarf tracks, where a low population density of sources is expected. Similar contamination can also be observed in CMD plots built from different colour combinations. Spurious astrometric solutions from {\\it Gaia}\\ can be due to a number of reasons. One of the major causes of such spurious parallax measurements is the inclusion of outliers in the measured positions. In {\\it Gaia}\\ DR3 this is more likely to occur in regions of high source density (as is the case in the Galactic plane) or for close binary systems (either real or due to sight-line effects) which have not been accounted for. The dependence of spurious parallax measurements on other measured quantities in {\\it Gaia}\\ DR3 is not straightforward to disentangle, and CMDs cannot be easily cleaned through the use of empirical cuts on the available {\\it Gaia}\\ DR3 parameters.\n\n\\begin{figure*}\n \\includegraphics[width=1\\columnwidth]{Figures\/CMDGaia.jpeg}\n \n \\includegraphics[width=1\\columnwidth]{Figures\/CMDneg.jpeg}\n \\caption{Left panel: {\\it Gaia}-based absolute CMD of all cross-matched sources between {\\it Gaia}\\ and IGAPS. Right panel: The recovered negative parallax ``mirror sample'' from the {\\it Gaia}\/IGAPS cross-match. In producing this panel the absolute value of the {\\it Gaia}\\ parallax measurements is used.}\n \\label{fig:cmd1}\n\\end{figure*}\n\nSeveral methods attempting to identify spurious astrometric sources have been explored in the literature. \\cite{gIPHAS} defined both a ``completeness'' and a ``purity'' parameter that can be used to clean the resulting CMDs from the previous cross-match between {\\it Gaia}\\ DR2 and IPHAS. More recently, \\cite{GCNS} employed a machine learning classifier based on Random Forests to identify spurious astrometric measurements in the 100 pc sample of {\\it Gaia}\\ EDR3. In both cases, a negative parallax sample had been used to infer common properties of spurious astrometric sources. This was then generalised and applied to the positive parallax sources to identify spurious measurements. \n\nA classifier will only be as good at generalising a given set of properties as the provided training set allows. Here a Random Forest classifier is also used to clean XGAPS from the contamination of bad astrometric measurements. To explore this further, the same cross-matching method as described in Section \\ref{sec:crossMatch} is performed using as a master catalogue all {\\it Gaia}\\ sources satisfying the same quality cuts as described in Section \\ref{sec:selection} but inverting the parallax signal-to-noise selection criterion to be less than $-3$ (\\texttt{parallax\\_over\\_error}<-3). This produces a total of 1,034,661 sources after the cross-matching with the IGAPS catalogue has been performed. The right panel of Fig. \\ref{fig:cmd1} shows the {\\it Gaia}\\ CMD of the recovered negative parallax ``mirror sample'' after having been parsed through the same cross-matching pipeline as all other XGAPS sources. To obtain ``absolute magnitudes'' for these sources the absolute value of the negative parallax has been used. It is clear from comparing both panels of Fig. \\ref{fig:cmd1} that the suspected spurious parallax sources and the negative parallax sources occupy similar regions of the CMDs. 
This in turn suggests that the same systematic measurement challenges are affecting both of these samples, even though there is no clear parameter combination cut from the {\\it Gaia}\\ astrometric measurements that can be used to exclude spurious sources.\n\n\nIn a similar way to what has been adopted in \\cite{GCNS} to remove spurious sources, a Random Forest (\\citealt{RF}) is trained through the use of XGAPS data to classify all $\\approx 34$ million entries into two categories (good vs. bad astrometric solutions) purely based on astrometric quantity and quality indicators provided by {\\it Gaia}\\ DR3 and augmented by astrometric indicators resulting from XGAPS. To achieve this, a reliable training set of both categories is required. Because XGAPS sources are found in the crowded Galactic plane, and because these sources may suffer from specific systematic errors, a training\/testing set is constructed from XGAPS data alone. The good astrometric solution set is compiled by selecting all sources in XGAPS which have a parallax signal-to-noise measurement above 5. This results in 19,242,307 good astrometric solution sources used for training. Although some bad parallax measurement sources may be expected to have a parallax signal-to-noise measurement above 5, it is reasonable to assume that only a small fraction of sources will fall into this category. The bad astrometric training sources are compiled through the use of the ``negative parallax mirror sample'', for which the CMD is shown in the right panel of Fig. \\ref{fig:cmd1}. This is obtained by selecting sources with a parallax signal-to-noise measurement below $-5$, resulting in 250,069 sources. In total, the set of good and bad astrometric solution targets is 19,492,876. The testing set is created by randomly selecting 20\\% of the lowest populated class (50,113 from the bad astrometric sources), and randomly selecting the same number of sources from the other class. All remaining sources are used as a training set.\n\nThe classification model consists of a trained Random Forest (\\citealt{RF}) using a total of 26 predictor variables listed in Table \\ref{tab:RF}, all of which are purely astrometry based. Each decision tree in the Random Forest is grown using 5 randomly chosen predictor variables, and each tree is grown to its full depth. Surrogate splits are used when creating decision trees to take into account missing variables in some of the training samples. Each tree is grown by resampling targets, with replacement, in the training sample while keeping the total number of training samples per tree the same as the total number of targets used for training. Because the number of good astrometric training sources is much larger than the number of bad astrometric sources, each tree is grown using all bad astrometric sources (200,456 after having removed the testing set), and randomly under-sampling the same number of good training sources. This ensures that there is a balance between the two classes for each grown tree. These resampling techniques ensure that each tree is grown using a different subset of the training set and related predictors, which in turn prevents the Random Forest from overtraining (\\citealt{RF}). In total, the Random Forest consists of 1001 decision trees. Final source classifications are assigned according to the class voted for by the largest number of trees. The vote ratio between the two classes is also retained in the XGAPS catalogue. 
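\nA schematic version of this training scheme is sketched below using scikit-learn decision trees. It is meant only to illustrate the per-tree balancing described above (all bad-astrometry sources plus an equally sized random draw of good-astrometry sources for every tree, with 5 candidate predictors per split and fully grown trees); the actual implementation, the surrogate-split handling of missing values (not available in scikit-learn trees) and the exact hyper-parameters may differ, so missing predictor values are assumed to have been imputed beforehand.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.tree import DecisionTreeClassifier\n\ndef train_balanced_forest(X_good, X_bad, n_trees=1001, n_feat=5, seed=0):\n    # Grow each tree on all 'bad' sources plus an equally sized random\n    # draw (with replacement) of 'good' sources, using n_feat randomly\n    # chosen predictors per split and fully grown trees.\n    rng = np.random.default_rng(seed)\n    trees = []\n    for _ in range(n_trees):\n        idx = rng.integers(0, len(X_good), size=len(X_bad))\n        X = np.vstack([X_bad, X_good[idx]])\n        y = np.concatenate([np.zeros(len(X_bad)), np.ones(len(X_bad))])\n        tree = DecisionTreeClassifier(max_features=n_feat,\n                                      random_state=int(rng.integers(1 << 31)))\n        trees.append(tree.fit(X, y))\n    return trees\n\ndef vote_fraction(trees, X):\n    # Fraction of trees voting 'good' (class 1); flagRF = vote > 0.5.\n    return np.mean([t.predict(X) for t in trees], axis=0)\n\\end{verbatim}\n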
We have further attempted to establish the relative predictor importance for each of the 26 predictors used. This is achieved through the same classifier methodology described. However, for computational time purposes, the predictor importance values only are obtained by growing each tree using the same good training sources (200,456 randomly selected from the entire population) rather than resampling these for each individual tree. The resulting predictor importance using the out-of-bag samples during training is included in Table \\ref{tab:RF}.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{l l}\nPredictor Name & Predictor Importance\\\\\n\\hline\\hline\n pmra & 11.68 \\\\\n pmdec & 9.07 \\\\\n bMJD\\_separation\\_UVEX & 4.30 \\\\\n bMJD\\_separation\\_IPHAS & 4.26 \\\\\n ipd\\_frac\\_multi\\_peak & 4.06 \\\\\n ipd\\_gof\\_harmonic\\_amplitude & 3.61 \\\\\n astrometric\\_n\\_good\\_obs\\_al & 2.67 \\\\\n astrometric\\_n\\_obs\\_al & 2.65 \\\\\n scan\\_direction\\_mean\\_k1 & 2.53 \\\\\n parallax\\_error & 2.42 \\\\\n scan\\_direction\\_mean\\_k2 & 2.24 \\\\\n scan\\_direction\\_mean\\_k3 & 2.22 \\\\\n ruwe & 1.96 \\\\\n astrometric\\_excess\\_noise\\_sig & 1.84 \\\\\n astrometric\\_gof\\_al & 1.81 \\\\\n astrometric\\_excess\\_noise & 1.74 \\\\\n pmdec\\_error & 1.70 \\\\\n redChi2 & 1.64 \\\\\n scan\\_direction\\_strength\\_k1 & 1.57 \\\\\n astrometric\\_sigma5d\\_max & 1.50 \\\\\n ipd\\_frac\\_odd\\_win & 1.49 \\\\\n scan\\_direction\\_mean\\_k4 & 1.49 \\\\\n astrometric\\_n\\_bad\\_obs\\_al & 1.42 \\\\\n astrometric\\_chi2\\_al & 1.36 \\\\\n pmra\\_error & 1.33 \\\\\n astrometric\\_n\\_obs\\_ac & 0.27 \\\\ \n \\hline\n\\end{tabular}\n\\caption{Out-of-bag predictor importance of all predictors used for classification by the Random Forest classifier ordered according to importance. The predictor names used in the table correspond to column names used in the XGAPS catalogue. A short description of each can be found in the Appendix.\n}\n\\end{center}\n\\label{tab:RF} \n\\end{table}\n\nThe Random Forest is robust against variations in the number of trees or candidate predictors, as altering these did not produce substantially different results as evaluated on the test set. It is important to note that although the bad training sources can be considered to be the result of bonafide spurious astrometric measurements, some systems in the good training set are expected to have been mislabeled by the training set selection criteria. Thus when inspecting the Random Forest classification accuracy on the testing set only sources with misclassified labels from the bad astrometric sources should be considered, and these should provide a lower limit on the true accuracy of the classifier. The final result on the testing set is summarised by the confusion matrix shown in Figure \\ref{fig:conf}. Overall 1984 sources are classified as bad sources owning a parallax signal-to-noise measurement above 5. More importantly, 503 out of 50,113 bad astrometric sources (1.0\\%) have been mislabeled, and these should provide the lower limit on the accuracy of the classifier.\n\n\\begin{figure}\n\t\\includegraphics[width=1\\columnwidth]{Figures\/confusionChart.jpeg}\n \\caption{Confusion matrix between the positive and negative parallax samples computed on the test set. Class values of 0 represent \"bad\" astrometric sources while a value of 1 represent \"good\" astrometric sources. 
Details of the definition of the test set and training of the Random Forest can be found in Section \\ref{sec:RF}.}\n \\label{fig:conf}\n\\end{figure}\n\n\\begin{figure}\n\t\\includegraphics[width=1\\columnwidth]{Figures\/voteRF.jpeg}\n \\caption{Distribution of associated votes as computed by the trained Random Forest described in Section \\ref{sec:RF} for the full XGAPS targets. The \\texttt{voteRF} value is included in the XGAPS catalogue for each source. Objects with \\texttt{voteRF}>0.5 have a \\texttt{flagRF} value of 1 in XGAPS rather than 0.} \n \\label{fig:voteRF}\n\\end{figure}\n\nHaving trained the classification model, all $\\approx 34$ million sources in XGAPS are parsed through the Random Forest classifier and receive an associated vote (see Fig. \\ref{fig:voteRF}) from each tree and an associated flag with the predicted classification. Sources are classified as good astrometric sources if more than 50\\% of individual trees in the Random Forest classifier have classified them as such, and are assigned a flag in the catalogue of \\texttt{flagRF}=1. If this is not achieved, the source flags are set to \\texttt{flagRF}=0. This results in 30,927,929 (91\\%) targets with \\texttt{flagRF}=1 and 3,059,251 (9\\%) with \\texttt{flagRF}=0.\n\n\n\\begin{figure*}\n\t\\includegraphics[width=1\\columnwidth]{Figures\/IPHASseparation.jpeg}\n \\includegraphics[width=1\\columnwidth]{Figures\/UVEXseparation.jpeg}\n \\caption{Distribution of the separations between all matched sources in XGAPS between the {\\it Gaia}\/IPHAS targets (left) and {\\it Gaia}\/UVEX targets (right) are shown with black solid lines. Both panels also show the decomposition of the distribution employing the Random Forest classifier to select ``good'' astrometric targets (\\texttt{flagRF}=1, blue solid lines) and``bad'' astrometric targets (\\texttt{flagRF}=0, red solid lines).}\n \\label{fig:separation}\n\\end{figure*}\n\nFig. \\ref{fig:separation} shows the angular separation between the individual IPHAS and UVEX matches to the epoch-corrected {\\it Gaia}\\ DR3 sources. The bulk of the population finds an angular separation of about 0.02 arcseconds, but there exists an additional component of sources evident at larger separations. Although these sources have found their correct match between {\\it Gaia}\\ and both IPHAS and UVEX, the larger angular separation may in fact be attributed to poor astrometry in {\\it Gaia}\\ DR3. Shown in the same figure are also the distribution of the good vs. bad astrometric sources as classified using the trained Random Forest. It is clear that the classifier has been able to separate those sources with relatively large angular separation when compared to the bulk of the population. \n\nThis split between the good vs. bad astrometric sources can also be validated when considering other astrometric predictor variables used by the classifier. Fig. \\ref{fig:predVars} shows the distributions of an additional 3 predictor variables (\\texttt{parallax\\_error}, \\texttt{pmra\\_error}, \\texttt{ruwe}) as well as the parallax signal-to-noise measurement (\\texttt{parallax\\_over\\_error}) which has been used to select the training set. In all cases the Random Forest classifier appears to have separated the apparent bimodal distributions observed in the predictor variables. 
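\nFor completeness, the epoch correction entering the separations of Fig. \\ref{fig:separation} can be reproduced for an individual source with the short astropy sketch below: the {\\it Gaia}\\ DR3 position is propagated with its proper motion from the {\\it Gaia}\\ DR3 reference epoch (J2016.0) to the IPHAS or UVEX observation epoch before the angular separation to the catalogued IGAPS position is computed. The column names for the IGAPS epoch and coordinates are indicative only.\n\\begin{verbatim}\nimport astropy.units as u\nfrom astropy.coordinates import SkyCoord\nfrom astropy.time import Time\n\ndef epoch_corrected_separation(row):\n    # Propagate the Gaia DR3 position (reference epoch J2016.0) to the\n    # IGAPS observation epoch using the Gaia proper motion and parallax,\n    # then return the angular separation to the catalogued IGAPS position.\n    gaia = SkyCoord(ra=row['ra'] * u.deg, dec=row['dec'] * u.deg,\n                    distance=(1000.0 / row['parallax']) * u.pc,\n                    pm_ra_cosdec=row['pmra'] * u.mas / u.yr,\n                    pm_dec=row['pmdec'] * u.mas / u.yr,\n                    obstime=Time(2016.0, format='jyear'))\n    gaia_at_igaps = gaia.apply_space_motion(\n        new_obstime=Time(row['mjd_igaps'], format='mjd'))\n    igaps = SkyCoord(ra=row['ra_igaps'] * u.deg,\n                     dec=row['dec_igaps'] * u.deg)\n    return gaia_at_igaps.separation(igaps).to(u.arcsec)\n\\end{verbatim}\n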
\n\n\\begin{figure*}\n\t\\includegraphics[width=1\\columnwidth]{Figures\/plxOverError.jpeg}\n \\includegraphics[width=1\\columnwidth]{Figures\/plxError.jpeg}\n \\newline\n \\includegraphics[width=1\\columnwidth]{Figures\/pmraError.jpeg}\n \\includegraphics[width=1\\columnwidth]{Figures\/ruwe.jpeg}\n \\caption{Distributions of a subset of astrometric parameters taken from {\\it Gaia}\\ DR3 included in XGAPS (black solid lines). All but the \\texttt{parallax\\_over\\_error} values have been used for training the Random Forest classifier. All panels also show the decomposition of the distribution employing the Random Forest classifier to select ``good'' astrometric targets (\\texttt{flagRF}=1, blue solid lines) and ``bad'' astrometric targets (\\texttt{flagRF}=0, red solid lines).}\n \\label{fig:predVars}\n\\end{figure*}\n\nInspecting the CMDs of the predicted good vs. bad astrometric targets provides additional insight into the Random Forest performance. Fig. \\ref{fig:gaiaRF} displays the {\\it Gaia}\\ CMD of the predicted good vs. bad astrometric targets. Overall, the Random Forest classifies a total of 30,944,717 good astrometric targets ($\\approx$91\\%) and 3,042,463 bad astrometric targets ($\\approx$9\\%). It is clear that most of the bad astrometric sources are correctly removed as they populate the same region in the CMD as the negative parallax sample used for training (see right panel of Fig. \\ref{fig:cmd1}). Although the split has been efficiently achieved, it is also the case that some good astrometric sources have been flagged as bad ones by the classifier, and vice versa. This is particularly evident when inspecting the CMD region for sources classified as having good astrometry (left panel in Fig. \\ref{fig:gaiaRF}), which still appears to be populated with a relatively large number of sources on the blue side of the main sequence. Furthermore, some sources flagged as having bad astrometry by the classifier appear to populate the WD track, and it is possible that some of these have been mislabeled (right panel in Fig. \\ref{fig:gaiaRF}). Overall however, the bulk of the bad astrometric sources appears to have been removed correctly. \n \n\\begin{figure*}\n\t\\includegraphics[width=1\\columnwidth]{Figures\/GoodGaiaRF.jpeg}\n \\includegraphics[width=1\\columnwidth]{Figures\/BadGaiaRF.jpeg}\n \\caption{{\\it Gaia}-based absolute CMDs for all targets in the XGAPS catalogue. The panel on the left shows all targets with \\texttt{flagRF}=1, while targets with \\texttt{flagRF}=0 are displayed in the right panel. Although all sources displayed have a positive parallax measurement, the ``bad'' astrometric sample in XGAPS as defined by the Random Forest occupies a similar region in CMD space as the negative parallax ``mirror sample'' used for training and shown in the right panel of Fig. \\ref{fig:cmd1}.}\n \\label{fig:gaiaRF}\n\\end{figure*}\n\n\n\\section{Potential applications of the XGAPS catalogue} \\label{sec:catalogue}\n\nProviding broad- and narrow-band photometric measurements for $\\approx 34$ million Galactic plane sources, astrometric information, as well as multi-epoch photometry for many of these, the XGAPS catalogue has wide-reaching applications, especially for the identification of specific source types and related population studies. 
Examples based on the {\\it Gaia}\/IPHAS catalogue (\\citealt{gIPHAS}) include the discovery of new binary systems (\\citealt{carrell22}), the selection and identification of Herbig Ae\/Be systems (\\citealt{vioque20}), planetary nebulae (\\citealt{sabin22}), as well as candidate X-ray emitting binaries (\\citealt{gandhi21}). Further applications may also be found in constructing reliable training sets for classification, as has been used by \\cite{gaiaSynth} to train a Random Forest for classification of targets based on synthetic photometry.\n\nAlso important, XGAPS provides information that can be efficiently used in selecting targets for large multi-object spectroscopic surveys such as the WHT Enhanced Area Velocity Explorer (WEAVE: \\citealt{weave}) and the 4-metre Multi-Object Spectrograph Telescope (4MOST: \\citealt{4most}). An example of this is the selection of white dwarf candidates in the Galactic plane to be observed with 4MOST as part of the community selected White Dwarf Binary Survey (PIs: Toloza and Rebassa-Mansergas). This includes a total of 28,102 targets that satisfy the following criteria in XGAPS:\n\\begin{itemize}\n\\item Have a {\\it Gaia}\\ declination $<5$ degrees\n\\item Have the \\texttt{flagRF} set to 1\n\\item Lie within the region $M_{U} > 3.20 \\times (U-g) + 6.42$ and $(U-g)<1.71$\n\\end{itemize}\n\\noindent The resulting CMD using the UVEX colours is shown in the left panel of Fig. \\ref{fig:wdbSelection}. The declination cut was employed to ensure targets are observable from Paranal Observatory where the 4MOST survey will be carried out from. The \\texttt{flagRF} is employed to minimise spurious cross-matches and bad astrometric targets. The final colour-magnitude cuts are somewhat ad-hoc at this stage (especially as the $U_{RGO}$ band has not yet been photometrically calibrated across the full survey), but attempt to select all blue-excess sources relative to the main sequence as defined in the UVEX passbands (the bluest set of the XGAPS catalogue). Although preliminary and in need of refinement using well-validated and spectroscopically confirmed targets, these colour cuts provide a first attempt to select white dwarf candidates in the plane for the 4MOST survey. A further cut using the IPHAS passbands of $(r-H\\alpha)>0.56 \\times (r-i) + 0.27$ to select H$\\alpha$-excess sources yields 241 likely accreting white dwarf systems (right panel of Fig. \\ref{fig:wdbSelection}). We point out that these colour cuts are preliminary, and only serve to demonstrate the potential application of the XGAPS catalogue. Specifically for the selection of H$\\alpha$-excess sources, a more refined method of selecting H$\\alpha$-excess candidates based on the local population as defined in absolute colour-magnitude diagrams has been shown to produce more complete samples of objects, but this comes at the expense of purity (e.g. \\citealt{fratta21}).\n\n\\begin{figure*}\n\t\\includegraphics[width=1\\columnwidth]{Figures\/blueExcess_WDBS.jpeg}\n \\includegraphics[width=1\\columnwidth]{Figures\/haExcess_WDBS.jpeg}\n \\caption{{\\it Gaia}\/UVEX CMD (left panel) and corresponding IPHAS-based colour-colour diagrams demonstrating simple selection cuts to select candidate white dwarf systems to be observed by 4MOST (\\citealt{4most}). Gray points in both panels show all targets in XGAPS with declination smaller than 5 degrees (observable from Paranal) and \\texttt{flagRF}=1. 
Blue points mark targets selected as blue-excess sources, likely related to white dwarf emission contributing to the UVEX photometry. The red points mark blue-excess candidates that also display evidence of H$\\alpha$-excess emission as determined from the IPHAS photometry. The exact cuts are described in Section \\ref{sec:catalogue}.}\n \\label{fig:wdbSelection}\n\\end{figure*}\n\n\n\\section{Conclusion} \\label{sec:conclusion}\n\nWe have presented the XGAPS catalogue which provides a sub-arcsecond cross-match between {\\it Gaia}\\ DR3, IPHAS, UVEX and UKIDSS. It contains photometric and astrometric measurements for $\\approx 34$ million sources within the northern Galactic plane. In total, XGAPS contains two-epoch photometry in the $r$-band, as well as single-epoch (not simultaneous) photometry in up to 10 broad-band filters ($U_{RGO}$, $g$, $r$, $i$, $J$, $H$, $K$, $G$, $G_{BP}$ and $G_{RP}$) and one narrow-band H$\\alpha$ filter. XGAPS additionally provides a confidence metric, inferred using Random Forests, aimed at assessing the reliability of the {\\it Gaia}\\ astrometric parameters for any given source in the catalogue. XGAPS is provided as a catalogue with 111 columns. A description of the columns is presented in Table \\ref{tab:cols}. The full XGAPS catalogue can be obtained through VizieR. As XGAPS only covers the northern Galactic plane, future extensions are planned to merge the southern Galactic plane and bulge using data from the VST Photometric H$\\alpha$\\ Survey of the Southern Galactic Plane and Bulge (VPHAS+: \\citealt{vphas}).\n\n\n\\section*{Acknowledgements}\nCross-matching between catalogues has been performed using STILTS, and diagrams were produced using the astronomy-oriented data handling and visualisation software TOPCAT (\\citealt{topcat}). This work has made use of the Astronomy \\& Astrophysics package for Matlab (\\citealt{matlabOfek}). This research has also made extensive use of the SIMBAD database, operated at CDS, Strasbourg. This work is based on observations made with the Isaac Newton Telescope operated on the island of La Palma by the Isaac Newton Group of Telescopes in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\\'isica de Canarias. This work is also based in part on data obtained as part of the UKIRT Infrared Deep Sky Survey. This work has made additional use of data from the European Space Agency (ESA) mission {\\it Gaia} (\\url{https:\/\/www.cosmos.esa.int\/gaia}), processed by the {\\it Gaia} Data Processing and Analysis Consortium (DPAC, \\url{https:\/\/www.cosmos.esa.int\/web\/gaia\/dpac\/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\\it Gaia} Multilateral Agreement.\n\nMM's work was funded by the Spanish MICIN\/AEI\/10.13039\/501100011033 and by ``ERDF A way of making Europe'' by the ``European Union'' through grant RTI2018-095076-B-C21, and the Institute of Cosmos Sciences University of Barcelona (ICCUB, Unidad de Excelencia ``Mar\\'{\\i}a de Maeztu'') through grant CEX2019-000918-M. ARM acknowledges support from Grant RYC-2016-20254 funded by MCIN\/AEI\/10.13039\/501100011033 and by ESF Investing in your future, and from MINECO under the PID2020-117252GB-I00 grant.\n\n\\section*{Data Availability}\nThe XGAPS catalogue produced in this paper is available and can be found on VizieR.\n\n\n\n\\bibliographystyle{mnras}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}