diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzodje" "b/data_all_eng_slimpj/shuffled/split2/finalzzodje" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzodje" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\n\n\n\n\n\n\nFinancial systems consist of many units influencing each other through interactions of different nature and scale, thereby exhibiting rather complex dynamics. The time series of the system observables, which capture these dynamics, are functions with pronounced random components. Hence, one requires convenient statistical tools to infer both the behavior of individual units and the overall performance of the financial system. In this aspect, a lot of effort has been put into developing tools for studying the pairwise relationships between assets as they provide direct estimate for the intensity of the mutual interactions. Besides this, the pairwise relationships also have particular practical relevance in construction of optimal portfolios and efficient asset allocations. As a result, various statistical approaches have been applied to plethora of financial markets, among which, equity prices at stock markets~\\cite{plerou1999universal, mantegna1999hierarchical}, currencies at foreign exchanges~\\cite{mizuno2006correlation, naylor2007topology} and even market indexes~\\cite{eryiugit2009network}.\n\nThe majority of these studies mainly focus on relationships based on \\textit{simultaneous} observations of the logarithmic returns of the examined assets. The results obtained from such studies are able to explain the mutual interactions only when the time needed for spreading of the news across the financial market is negligible in comparison to the period of calculation of log returns. The relevance of the results obtained in such situations is backed by the efficient market hypothesis. The hypothesis states that the current price of any asset traded in the market incorporates all relevant information and no prediction regarding the future evolution would be profitable without taking additional risks~\\cite{malkiel1970efficient}. Numerous studies confirm that in practice this usually holds when one considers log returns on daily or longer time scale. When shorter time scales are considered, such as one-minute log returns, one might expect significant delay effects. To address this issue, one usually resorts to the usage of \\textit{lagged} approaches. These approaches give estimates for the relationship between the observations of the two quantities of interest which are apart from each other for certain period ~\\cite{kawaller1987temporal,lo1990contrarian}. The variable whose values are delayed is called the \\textit{lagger}, whereas the other one is the \\textit{leader}. \n\n\nResearch on these lead-lag relationships predominantly focuses on correlation analyses of dynamics of stock prices or market indexes~\\cite{huth2014high,xia2018emergence,valeyre2019emergence}\\footnote{We note that the ordinary correlations are special case of the lagged ones, and are called zero-lagged or contemporary correlations in order to distinguish them from the latter.}. To the best of our knowledge, the examination of the lead-lag relationships in a foreign exchange market remains largely unexplored. To bridge this gap, here we study the relationships between one-minute log returns on exchange rates with a lag of one minute. We focus on returns lagged for one minute due to three reasons. 
First, even though past research suggests that the lead-lag relationships can be felt in the price of the lagger even up to 45 minutes later, as trading becomes more and more automatic, the lagging period has decreased dramatically to typical value measured in seconds~\\cite{huth2014high}, or even milliseconds~\\cite{dao2018ultra}. Second, the foreign exchange is known to be a very liquid one, with most of the transactions being ordered automatically and one can not expect significant lags that span longer than one minute. This reasoning was supported by studying lagged effects with two-minute lags. In this case, we found that although such lagged relationships might appear sporadically, the pattern is not persistent as it is when one-minute lags are considered.\n\nWe consider three different approaches to estimate the lead-lag relationships: i) lagged correlations, ii) lagged partial correlations and iii) Granger causality. The lagged correlations provide a direct estimate for the intensity of linear lead-lag relationship between two exchange rates. The lagged partial correlations extend this approach by eliminating the possible serial autocorrelation of the lagger and the contemporaneous correlation of both rates under study. Finally, the Granger causality~\\cite{granger1969investigating} statistically tests for possible causal relationship between two exchange rates by considering the predictive potential by using past returns. We find that, with few exceptions, the lagged correlations between exchange rate pairs which relate currencies do not pass tests of statistical significance, which is in accordance with the efficient market hypothesis. However, there are many statistically significant lagged correlations, particularly when one considers rates which involve stock market indexes. In these correlation coefficients, the stock market indexes, which are known to have slower dynamics than the currency exchange raters, appear as leaders. Interestingly, this is opposite to previous findings which suggest that assets with faster dynamics behave as leaders \\cite{kawaller1987temporal, lo1990contrarian, toth2006increasing}. The partial correlation analysis further confirmed our findings. The Granger causality analysis, in addition to providing a test for the findings in the correlation analysis, revealed that the leaders in the lead-lag relationships also increase the predictive ability for the determination of future returns of the lagger. Based on these observations one could believe that the information from the leader towards the lagger does not transfer simultaneously, but it can be seen as a process that stretches for a certain period. \n\n\nOut of the statistically significant pairwise correlations and causality relationships we obtained matrices, which can be regarded as directed networks that describe the effect of one asset towards another. To infer the most influential leaders, in the spirit of~\\cite{brin1998anatomy}, we apply the PageRank algorithm. PageRank is a widely used procedure that has been applied in original or modified form to various domains \\cite{ white2003algorithms, ma2008bringing, falagas2008comparison, radicchi2011best}. The resulting ranking allows us to infer the major sources of information spillover in the studied financial network.\n\n\n\nThe outline of the paper is as follows. 
In Section 2 we introduce the methods applied for the estimation of the lagged correlations, lagged partial correlations and causality, as well as the generation of the resulting networks. Section 3 presents the empirical results. The last section summarizes our findings and provides directions for future research.\n\n\n\n\n\n\n\n\n\n\\section{\\label{sec:methods}Methods}\n\n\\subsection{Lagged correlations\\label{Lagged_corr_theory}}\n\nThe dynamics of the prices of financial assets are known to be non-stationary. Hence, one can not simply use them to examine the relationship between different assets. Nevertheless, the logarithmic return,\n\\begin{equation}\n r_n = \\log p(t_{n+1}) - \\log p (t_{n}),\n \\label{eq:log_ret_def}\n\\end{equation}\nwhere the price $p$ is assumed to be observed at discrete moments $t_n$, is usually assumed to be weakly stationary~\\cite{fan2017elements}. This property of log returns makes them more appropriate quantities for uncovering statistical relationships between assets.\n\nAssuming that we have observations of the log returns $r_i$ and $r_j$ of two prices at the series of discrete moments $t_n$, the $\\tau$-lagged covariance between them is given by\n\\begin{equation}\nC_{i,j}(\\tau) = \\langle \\left[r_i\\left(t_n\\right) - \\langle r_i \\rangle \\right] \\left[r_j\\left(t_n + \\tau \\right) - \\langle r_j \\rangle \\right] \\rangle,\n\\label{eq:lag_cov_def}\n\\end{equation}\nwhere the angular brackets denote averaging. We emphasize that one should keep the order of indices since in this notation the first index is the leader, while the second denotes the lagger. In general, the lagged covariances are not commutative $C_{i,j}(\\tau) \\neq C_{j,i}(\\tau)$, in contrast to the ordinary, zero-lag covariances. Since the log return of one price might typically deviate more than that of the other one, a more appropriate quantity is the correlation coefficient\n\\begin{equation}\n\\rho_{i,j}(\\tau) = \\frac{C_{i,j}(\\tau)}{\\sqrt{C_{i,i}(0) C_{j,j}(0)}} = \\frac{C_{i,j}(\\tau)}{\\sigma_{i} \\sigma_{j}}.\n\\end{equation}\nIn the last expression the notation is simplified by using the standard deviation $\\sigma_i = \\sqrt{C_{i,i}}$.\n\n\n\\subsection{Lagged partial correlations \\label{Lagged_partial_theory}}\n\nIn reality, the existence of lagged correlation can be due to one exchange rate being the driving force of the other, or due to the combination of mutual contemporaneous correlation and lagged autocorrelation. In order to isolate this potential effect, we calculate the partial correlation coefficient. For three random variables $X$, $Y$, and $Z$ with respective pairwise correlations $\\rho(X,Y)$, $\\rho(Y,Z)$, and $\\rho(X,Z)$, the partial correlation between $X$ and $Y$ with $Z$ considered known, is defined as\n\\begin{equation}\n \\rho(X,Y|Z) = \\frac{\\rho(X,Y) - \\rho(X,Z)\\rho(Y,Z)}{\\sqrt{[1-\\rho^2(X,Z)][1-\\rho^2(Y,Z)]}}.\n\\end{equation}\nTherefore, when the current return of the lagger $j$ has the role of known variable, the correlation between its next value and current return on the leader $i$ is\n\\begin{equation}\n \\rho_{i,j}^P(\\tau) = \\frac{\\rho_{i,j}(\\tau) - \\rho_{i,j}(0)\\rho_{j,j}(\\tau)}{\\sqrt{[1-\\rho_{i,j}^2(0)][1-\\rho_{j,j}^2(\\tau)]}}.\n\\end{equation}\nThe last expression is useful when one has calculated already the values of the ordinary correlations and thus would immediately obtain the partial one. Another approach for calculation of the partial correlation is based on linear regression~\\cite{baba2004partial}. 
It can be described as follows. Let $Z$ be the variable whose influence has to be removed, the two other variables being $X$ and $Y$, and their best estimates by observing $Z$ being $\\hat{X}_Z$ and $\\hat{Y}_Z$ respectively. The respective partial correlation is then the ordinary correlation between the residuals $r_X = X-\\hat{X}_Z$ and $r_Y = Y-\\hat{Y}_Z$, given by\n\\begin{equation}\n \\rho(X,Y|Z) = \\rho(X-\\hat{X}_Z, Y-\\hat{Y}_Z).\n\\end{equation}\nThis approach allows for easier calculation of the statistical significance of the estimated partial correlation, and as such has been widely applied in studies of relationships between currencies and stocks \\cite{kenett2010dominating, kenett2015partial, mai2018currency}.\n\n\n\\subsection{Lagged correlation networks \\label{Lagged_net_theory}}\n\n\nAs the lagged correlations are not commutative (in contrast to the contemporaneous correlations), the resulting network is directed. In order to keep the strength of influence we create a weighted network where the direction is from the lagger towards the leader with weight that is equal to the absolute value of the lagged correlation coefficient $w_{j,i} = |\\rho_{j,i}|$. We consider absolute values since the sign of the correlation coefficient denotes only the direction of change of the exchange rate. Typically, extraction of the core of financial correlation networks is performed with the Minimal Spanning Tree (MST) sub-graph procedure~\\cite{mantegna1999hierarchical} or the Planar Maximally Filtered Graph (PMFG)~\\cite{kenett2010dominating, tumminello2005tool}. Because the obtained statistically validated lagged correlation networks are rather sparse, we may instead use the PageRank algorithm in order to determine the most influential leaders~\\cite{brin1998anatomy}. The PageRank algorithm was originally used to rank web pages, by assuming that pages with more incoming links from others are more important. A particular feature which favors PageRank above other ranking procedures is that it intrinsically assigns higher weights to links that originate from important nodes. \n\nThe application of PageRank first involves creating a row stochastic matrix from the weighted network with elements\n\\begin{equation}\n a_{j,i} = \\frac{w_{j,i}}{\\sum_k w_{j,k}}.\n\\end{equation}\nThe last matrix can be seen as a Markov chain transition matrix. In terms of Markov chain theory, the element $a_{j,i}$ corresponds to the probability of jumping from state $j$ to $i$. For such matrices arising from correlation networks, the bigger values within $j$-th row correspond to the columns $i$ associated with rates from which $j$ obtains larger impact. As such, the resulting rankings should provide information regarding the market news spreading among the the exchange rates included in the analysis. Higher ranking should correspond to exchange rates whose changes in returns will also have a higher overall impact in the overall system dynamics.\n\n\n\\subsection{Granger causality \\label{Granger_theory}}\n\n\n\n\n\nThe potential influence of the return on certain exchange rate on future returns on other exchange rates can be assessed by applying the causality analysis in the sense of Granger~\\cite{granger1969investigating}. WIthin this framework, cause and effect relationship between a pair of two dynamical variables exists if knowledge of past values of one of them improves the prediction of the future of the other.\nGranger causality is estimated as follows. 
Consider the linear regression of the nearest future value of some variable $x$ from its past $p$ terms\n\\begin{equation}\n x_{n+1} = \\sum_{k=0}^{p-1} \\alpha_k x_{n-k} + \\varepsilon_n,\n \\label{eq:self_regression}\n\\end{equation}\nwhere $\\varepsilon_n$ is the regression residual. The weights $\\alpha_k$ are such that the residual variance $\\sigma_{\\varepsilon}^2$ is minimal, so the linear regression is optimal. Now, make an optimal combined regression by using $q$ past values of the variable $y$ as well\n\\begin{equation}\n x_{n+1} = \\sum_{k=0}^{p-1} \\alpha_k x_{n-k} + \\sum_{l=0}^{q-1} \\beta_l y_{n-l} + \\eta_n.\n\\end{equation}\nIf some of the parameters $\\beta_l$ are nonzero with certain statistical significance, which in turn leads to a reduced variance $\\sigma_{\\eta}^2 < \\sigma_{\\varepsilon}^2$, then it is said that $y$ Granger causes $x$. If $x$ also Granger causes $y$ then one has a feedback system. The regression depth $q$ (and $p$ as well) depends on the problem at hand. We consider only one past step, as it has been observed that in foreign exchange markets the serial autocorrelation quickly vanishes.\nAnother reason for this choice is that, as will be seen in the Data Section, due to the presence of gaps and periods of repeating identical values in the studied time series, there is a much smaller number of consecutive nonzero returns than the one required for a meaningful regression with more explanatory variables.\nTherefore, our Granger causality setting for the effect of exchange rate $i$ on exchange rate $j$ is described as\n\\begin{equation}\n r_j(t_{n+1}) = \\alpha_{i,j} r_j(t_n) + \\beta_{i,j} r_i(t_n) + \\gamma_{i,j} + \\eta_{i,j,n}.\n \\label{eq:regression_def}\n\\end{equation}\nIn the last equation $\\alpha_{i,j}$ estimates the self-driving force, $\\beta_{i,j}$ measures the influence of the return on exchange rate $i$, $\\gamma_{i,j}$ accounts for a possible nonzero mean return of rate $j$, and $\\eta_{i,j,n}$ is the noise term. When $\\beta_{i,j}$ is zero, $r_j$ is not caused by $r_i$; otherwise there is a causal relationship from returns on rate $i$ to those on rate $j$. \n\n\n\nThe quality of a regression model is usually estimated in terms of the mean of the squared residuals. Concretely, let us denote with $S_0$ the variance of the dependent variable. Correspondingly, let $S$ be the mean of the squared residuals obtained from a regression of the kind of Eqs.~(\\ref{eq:self_regression}) or (\\ref{eq:regression_def}). The estimate of the quality of the model is then given by the coefficient of determination \n\\begin{equation}\n R^2 = 1 - \\frac{S}{S_0}.\n\\end{equation}\nWhen one compares two regression models nested into each other, the one with more parameters is expected to fit better, so the relevant question is whether it significantly outperforms its simpler peer. The respective tool for such estimates is the $F$-test. Consider a simpler regression model with $p_1$ parameters which results in a residual sum of squares $S_1$, and its bigger alternative with $p_2$ parameters and residual sum of squares $S_2$, applied on the same $N$ data items. 
Then, if both have the same quality in the null hypothesis, the respective $F$-test statistic\n\\begin{equation}\n F = \\frac{\\frac{S_1-S_2}{p_2-p_1}}{\\frac{S_2}{N-p_2}},\n\\end{equation}\nhas $F$ distribution with $(p_2-p_1,N-p_2)$ degrees of freedom.\n\nWe note that a more general approach can be obtained with the vector autoregression model (VAR)~\\cite{lutkepohl2005new}. VAR is a generalization of the previously explained procedure, where the log return $r_j (t_{n+1})$ is regressed with more than one, or even all other returns. Unfortunately, we cannot apply this procedure on the data under study, because many exchange rates have either zero return, or missing values at different moments, and the regression would then be meaningless.\n\n\n\n\n\n\n\n\n\n\n\n\\section{Data}\n\n\n\\subsection{Data source}\n\n\n\n\n\n\n\nWe use data gathered from \\url{www.histdata.com}\\footnote{All studied data is made freely available by its publisher.}. The dataset contains highly frequent one-minute exchange rate values only on the bid quotes, which is a slight drawback since generally as price of an asset is considered the mean of the respective bid and ask values. Even though the bid is not simply a drifted version of the mean price, it can serve in the analysis as an estimate for the value from the point of view of the brokers.\n\n\nWe extract exchange rate data for 66 pairs among 33 assets consisting of 19 currencies, 10 indexes of major stock markets, 2 oil types, and gold and silver. The list of assets together with their abbreviations is given in Table~\\ref{tab:Abbreviations}.\n\n\n\n\n\\begin{table}[h!]\n \\begin{center}\n \\caption{List of studied assets}\n \\label{tab:Abbreviations}\n \\begin{tabular}{l|l}\n \\textbf{Currencies} \\\\\n \\hline \n AUD -- Australian Dollar & MXN -- Mexican Peso\\\\\n CAD -- Canadian Dollar & NOK -- Norwegian Krone\\\\\n CHF -- Swiss Franc & NZD -- New Zealand Dollar\\\\\n CZK -- Czech Koruna & PLN -- Polish Zloty\\\\\n DKK -- Danish Krone & SEK -- Swedish Krona\\\\\n EUR -- EURO & SGD -- Singapore Dollar\\\\\n GBP -- British Pound & TRY -- Turkish Lira\\\\\n HKD -- Hong Kong Dollar & USD -- US Dollar\\\\\n HUF -- Hungarian Forint & ZAR -- South African Rand\\\\\n JPY Japanese Yen \\\\\n \\hline \n \\textbf{Indexes} \\\\\n \\hline \n AUX -- ASX 200 & JPX -- Nikkei Index 400 \\\\\n ETX -- EUROSTOXX 50 & NSX -- NASDAQ 100 \\\\\n FRX -- CAC 40 & SPX -- S\\&P 500 \\\\\n GRX -- DAX 30 & UDX -- Dollar Index \\\\\n HKX -- HAN SENG & UKX -- FTSE 100 \\\\\n \\hline \n \\textbf{Commodities} \\\\\n \\hline \n\t BCO -- Brent Crude Oil & XAG -- Silver \\\\\n WTI -- West Texas Intermediate & XAU -- Gold \\\\\n \\hline\n \n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\n\n\n\n\n \n\n\\subsection{\\label{sec:dataProc}Data preprocessing}\n\nIn spite of the foreign exchange market being highly dynamic, the data which we study includes several cases where a rate has the same value in few consecutive minutes. This means that the respective one-minute returns are zero. In addition, there are gaps with missing data in the time series. We filter out the data from such situations and examine four different lead-lag scenarios, as illustrated in Fig.~\\ref{fig:case-studies}\n One can think of the correlations and causal relationships studied in these scenarios as conditional ones, because the calculations are conditioned on the presence of nonzero return on certain variables. 
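As an illustration of this conditioning, the sketch below computes a one-minute lagged correlation between two return series while keeping only those minutes in which the leader has a nonzero return, which is precisely the restriction of the first scenario described next. The pandas-based workflow and the column names are our own illustrative assumptions, not the code used to produce the results reported later.
\\begin{verbatim}
import pandas as pd

def lagged_correlation(leader: pd.Series, lagger: pd.Series, lag: int = 1) -> float:
    # Correlation between the leader return at minute t and the lagger
    # return at minute t + lag, conditioned on a nonzero leader return.
    df = pd.DataFrame({'leader': leader, 'lagger_next': lagger.shift(-lag)}).dropna()
    df = df[df['leader'] != 0.0]   # keep only minutes in which the leader moved
    return df['leader'].corr(df['lagger_next'])

# hypothetical usage with a table of one-minute log returns, one column per rate
# rho = lagged_correlation(returns['SPX/USD'], returns['AUD/JPY'], lag=1)
\\end{verbatim}
The masks for the remaining scenarios are obtained analogously, by additionally requiring a nonzero return of the lagger at the same minute or at the following one.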
In the first scenario, only the leader is restricted to have nonzero return, while the lagger is allowed to be dormant in the following minute. In this aspect, we calculate the lagged correlation only from pairs in which the exchange rate which precedes has only nonzero returns. If in such case one obtains statistically significant correlation, it should mean that, although sometimes the lagger might retain its value in the following minute, when it modifies, the change would likely have the same sign and similar size as the leader had in the past minute. We argue that this is due to market information flowing from the leader towards the lagger. In addition, the statistically significant correlation might represent a \\textit{potential statistical arbitrage}, since after observing the return of the leader, fast-profit seekers could take the appropriate position on the lagger. We consider this as potential arbitrage, because even though the lagger's exchange rate would more likely have the right direction of change, the expectation of the profit might not compensate the transaction costs. Obviously, a detailed study involving also data on the transaction costs and ask quotes could reveal whether these lead-lag relationships are indeed profitable. \n\n\\begin{figure*}[t!]\n\\begin{adjustwidth}{0in}{0in}\n\\includegraphics[width=12.7cm]{lagged_correlations_fig.eps}\n\\caption{Visual representation of the four scenarios of restrictions on the values of the log returns and in which calculations are they used (bottom lines). In all four cases the left square denotes the leader, while the right one(s) correspond to the lagger. Full square denotes that only observations with nonzero value is considered, while empty one means that the log return which is used might have zero value as well.\n\\label{fig:case-studies}}\n\\end{adjustwidth}\n\\end{figure*}\n\nThe second case corresponds to the correlations obtained from the observations of simultaneous nonzero returns on the leader and the lagger in the next minute. Under such circumstances, one has an estimate on how much of the change of the price of the lagger, which certainly occurred, is possibly influenced by the previous return on the leader. \n\nIn order to calculate the partial correlation, in the third scenario we further specify the lagger to have nonzero return at the same minute as the leader. Such restriction is necessary for removal of the correlation between simultaneous returns of the leader and the lagger and the lagged autocorrelation of the latter, as potential contributors. This scenario was also used for determination of causal relationships.\n\nIn the last case we use observations of nonzero returns on the leader and lagger at the same minute, and apply it solely for identifying Granger causality. Under this scenario, one checks for predictability potential based on linear regression when one has two signals -- simultaneous observations of nonzero values of the two relevant returns.\n\nThe collected data spans from January 2016 until December 2018. For computational reasons we calculated the quantities of interest for each of the 36 months separately. In representing the rankings, the averages of the monthly results are given. \n\n\n Moreover, we point out that even though log returns are usually assumed to be weakly stationary, the empirical data may not exhibit such properties. This might be especially true for foreign exchange markets where the assets display extremely high dynamics and volatility. 
To statistically test whether the data used here is appropriate for studying the lead-lag relationships that we are interested in, we utilize the Dickey-Fuller test~\\cite{dickey1979distribution}. This is a standard test used in time series analysis for examining the weak stationarity of a series. Under the null hypothesis, the test assumes that the time series under study has a unit root. More precisely, we have applied an augmented version of the test \\cite{wooldridge2015introductory}, where the difference of consecutive returns $\\Delta r_i(t_n) = r_i(t_n) - r_i(t_{n-1})$ is regressed on the past return $r_i(t_{n-1})$ and the previous difference $\\Delta r_i(t_{n-1})$. We could not examine higher-order lags of the differences of log returns because the presence of gaps in the data would decrease the number of samples where chains of consecutive differences are available. Correspondingly, this would weaken the statistical power of the test. We also accounted for the possible presence of a time trend in the series. To sum up, we have considered the following regression model\n \\begin{equation}\n \\Delta r_i(t_n) = \\alpha + \\delta t_n + \\theta r_i(t_{n-1}) + \\gamma_1 \\Delta r_i(t_{n-1}) + \\epsilon_n,\n \\end{equation}\nwhere $\\alpha$, $\\delta$, $\\theta$, and $\\gamma_1$ are coefficients, and $\\epsilon_n$ is the error term. The null hypothesis of the presence of a unit root corresponds to $\\theta=0$.\n \n The Dickey-Fuller test statistic, calculated for each month in 2016 separately and for all exchange rates, is reported in Table \\ref{tab:DFTestStat}. Because the critical value of the test statistic at level 0.01 is -3.96 \\cite{fuller2009introduction}, one can observe that in each case the null hypothesis of the presence of a unit root can be rejected with high confidence. 
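As an aside, the same kind of statistic can be reproduced with standard libraries; a minimal sketch, assuming a pandas Series holding the one-minute log returns of a single exchange rate for one month (with gaps already removed), is the following.
\\begin{verbatim}
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def adf_statistic(returns: pd.Series) -> float:
    # Augmented Dickey-Fuller test with one lagged difference and a
    # constant plus linear time trend, mirroring the regression above.
    result = adfuller(returns.dropna(), maxlag=1, regression='ct', autolag=None)
    return result[0]   # result[1] is the p-value, result[4] the critical values
\\end{verbatim}
Besides the statistic itself, the routine also returns the associated $p$-value and critical values.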
Accordingly, one can consider that the time series of log returns are weakly stationary.\n \n \n \\begin{table}[h!]\n\\begin{minipage}{\\textwidth}\n \\begin{center}\n \\caption{Dickey-Fuller test statistic (DFTS) for studied exchange rates in 2016}\n \n \\label{tab:DFTestStat}\n \\begin{tabular}{|l|r||l|r||l|r|}\n \\hline \n \\textbf{Rate} & \\textbf{DFTS} & \\textbf{Rate} & \\textbf{DFTS} & \\textbf{Rate} & \\textbf{DFTS}\\\\\n \\hline \n AUD\/CAD & -125.6 & EUR\/SEK & -113.8 & USD\/CAD & -121.8 \\\\ \n \\hline\nAUD\/CHF & -122.1 & EUR\/TRY & -117.0 & USD\/CHF & -122.5 \\\\ \n \\hline\nAUD\/JPY & -121.4 & EUR\/USD & -131.0 & USD\/CZK & -118.8 \\\\ \n \\hline\nAUD\/NZD & -126.3 & FRX\/EUR & -89.9 & USD\/DKK & -122.1 \\\\ \n \\hline\nAUD\/USD & -122.0 & GBP\/AUD & -123.8 & USD\/HKD & -49.2 \\\\ \n \\hline\nAUX\/AUD & -79.8 & GBP\/CAD & -123.7 & USD\/HUF & -111.1 \\\\ \n \\hline\nBCO\/USD & -96.4 & GBP\/CHF & -123.5 & USD\/JPY & -117.4 \\\\ \n \\hline\nCAD\/CHF & -122.6 & GBP\/JPY & -118.7 & USD\/MXN & -113.2 \\\\ \n \\hline\nCAD\/JPY & -120.4 & GBP\/NZD & -123.2 & USD\/NOK & -118.3 \\\\ \n \\hline\nCHF\/JPY & -118.5 & GBP\/USD & -120.5 & USD\/PLN & -116.3 \\\\ \n \\hline\nETX\/EUR & -54.7 & GRX\/EUR & -92.5 & USD\/SEK & -125.5 \\\\ \n \\hline\nEUR\/AUD & -119.0 & HKX\/HKD & -83.8 & USD\/SGD & -116.4 \\\\ \n \\hline\nEUR\/CAD & -122.0 & JPX\/JPY & -68.9 & USD\/TRY & -103.1 \\\\ \n \\hline\nEUR\/CHF & -121.4 & NSX\/USD & -105.4 & USD\/ZAR & -111.4 \\\\ \n \\hline\nEUR\/CZK & -39.6 & NZD\/CAD & -123.7 & WTI\/USD & -98.4 \\\\ \n \\hline\nEUR\/DKK & -28.1 & NZD\/CHF & -122.6 & XAG\/USD & -108.9 \\\\ \n \\hline\nEUR\/GBP & -122.6 & NZD\/JPY & -121.1 & XAU\/AUD & -117.4 \\\\ \n \\hline\nEUR\/HUF & -91.2 & NZD\/USD & -121.0 & XAU\/CHF & -118.9 \\\\ \n \\hline\nEUR\/JPY & -117.7 & SGD\/JPY & -118.8 & XAU\/EUR & -118.7 \\\\ \n \\hline\nEUR\/NOK & -121.7 & SPX\/USD & -75.7 & XAU\/GBP & -102.1 \\\\ \n \\hline\nEUR\/NZD & -122.1 & UDX\/USD & -105.9 & XAU\/USD & -117.4 \\\\ \n \\hline\nEUR\/PLN & -101.8 & UKX\/GBP & -89.9 & ZAR\/JPY & -110.2 \\\\ \n \\hline\n \\end{tabular}\n \\end{center}\n \\end{minipage}\n\\end{table}\n\n \n \n\n\nFinally, we point that here we have considered exchange rates as assets, instead of using values of currencies or commodities. To obtain the value of a currency one should quote it in terms of another one, taken as base, by using the appropriate exchange rate~\\cite{basnarkov2019correlation}. However, one does not always have a direct exchange rate between the appropriate base and any other currency in this market. In such cases two or more exchange rates might be needed for determination of the value of the asset. When many gaps in the data are present, or when some returns are inactive for certain period, getting such values can be problematic. Besides this, it has been found that the choice of basic currency can result in nontrivial difference in the obtained correlations~\\cite{keskin2011topology, basnarkov2019correlation}. \n\n\n\n\n\\section{\\label{sec:results}Results and discussion}\n\n\\subsection{Statistical significance of results}\n\n\nTo determine statistical significance of the results, we applied the Bonferroni threshold correction. The Bonferroni correction is standardly used for handling situations when one makes multiple tests simultaneously, as is the case with the overall significance of the correlation matrix~\\cite{tumminello2011statistically, curme2015emergence}. 
For correlations between each of the $N$ assets the Bonferroni correction is $N \\cdot N$. Thus, the appropriate threshold for $p$ value at level $0.01$, which is widely applied in the literature~\\cite{curme2015emergence}, is $0.01 \/ (66 \\cdot 66)$. This threshold was applied for the two types of lagged correlations and for the Granger causality relationships.\n\n\n\n\\subsection{Lagged correlations}\n\nAs it was elaborated in Section~\\ref{Lagged_corr_theory}, the lagged correlations quantify the relationship between series of one-minute log returns of two exchange rates with one of them trailing the other for a unit period. The existence of statistically significant correlation implies that the one with preceding returns presumably influences the other. We calculated such lagged correlations for all pairs in the dataset and summarized them in the respective $66\\times 66$ correlation matrix. Due to the very intensive trading of the currencies, and highly dynamical nature of their values, most of the lagged correlations between exchange rates involving currencies do not pass the statistical significance tests. However, there are many nonzero terms in the lagged correlation matrix, notably those involving stock market indexes. \nAs expected, the correlation matrix is asymmetrical because the influence is not identical in both directions. Out of this lagged correlation matrix we create a directed network where a link from node $j$ to $i$ exists if the rate $i$ has statistically significant correlation with $j$, and where $i$ is the leader.\n\n In the second, third and fourth columns of Table~\\ref{tab:Ranking} we present the ten most influential exchange rates on average, according to the PageRank for the three years under consideration and the three lagged correlation scenarios respectively. The results appear rather surprising, since market indexes with slower dynamics are put on top. This is opposite to previous findings where it was discovered that the more liquid assets lead the others~\\cite{kawaller1987temporal, lo1990contrarian, toth2006increasing}. In particular, the first four entries, all corresponding to such market indexes, appear rather high on the rank list nearly every month. The following ones, such as WTI oil exhibit varying influence in the lagged correlation matrix and are ranked rather differently in different months. 
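For completeness, the ranking step itself is easy to reproduce once the matrix of statistically validated lagged correlations is available; a sketch using networkx is given below, where the orientation of the matrix (first index the leader, as in Section~\\ref{sec:methods}) and the variable names are our own assumptions.
\\begin{verbatim}
import networkx as nx
import numpy as np

def rank_leaders(corr: np.ndarray, labels: list) -> dict:
    # Build the directed weighted network of validated lagged correlations
    # (an edge from lagger j to leader i with weight |rho_{i,j}|) and rank
    # the nodes with PageRank; higher scores mean more influential leaders.
    G = nx.DiGraph()
    G.add_nodes_from(labels)
    n = len(labels)
    for i in range(n):        # i: leader
        for j in range(n):    # j: lagger
            if i != j and corr[i, j] != 0.0:
                G.add_edge(labels[j], labels[i], weight=abs(corr[i, j]))
    return nx.pagerank(G, weight='weight')

# scores = rank_leaders(validated_lagged_corr, rate_names)
# top_ten = sorted(scores, key=scores.get, reverse=True)[:10]
\\end{verbatim}
With this orientation, a node that many laggers point to with large weights collects a high score, which is exactly the notion of an influential leader used in Table~\\ref{tab:Ranking}.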
The reason that the market indexes are most influential could be the fact that their price dynamics is slower and every change of their value incorporates some market news.\n\n\\begin{sidewaystable}[h!]\n \\begin{center}\n \\caption{Top ten most influential exchange rates in five cases: LC1, LC2 and LC3 -- lagged correlations in scenarios 1, 2, and 3 respectively; LPC -- most influential rates in lagged partial correlations; C1 and C2 -- ranking of Granger causality significance in scenarios 3 and 4 respectively.}\n \\label{tab:Ranking}\n \\begin{tabular}{c|c|c|c|c|c|c}\n \\textbf{Rank} & \\textbf{LC1} & \\textbf{LC2} & \\textbf{LC3} & \\textbf{LPC} & \\textbf{C1} & \\textbf{C2}\\\\\n \\hline \n 1 & ETX\/EUR & ETX\/EUR & ETX\/EUR & ETX\/EUR & ETX\/EUR & ETX\/EUR\\\\\n 2 & JPX\/JPY & JPX\/JPY & JPX\/JPY & JPX\/JPY & JPX\/JPY & JPX\/JPY\\\\\n 3 & AUX\/AUD & AUX\/AUD & AUX\/AUD & SPX\/USD & SPX\/USD & SPX\/USD\\\\\n 4 & SPX\/USD & SPX\/USD & SPX\/USD & AUX\/AUD & AUX\/AUD & AUX\/AUD\\\\\n 5 & USD\/CZK & USD\/CZK & USD\/CZK & WTI\/USD & USD\/CZK & USD\/CZK\\\\\n 6 & WTI\/USD & WTI\/USD & WTI\/USD & USD\/CZK & WTI\/USD & WTI\/USD\\\\\n 7 & EUR\/USD & USD\/HUF & USD\/HUF & NZD\/USD & AUD\/USD & AUD\/USD\\\\\n 8 & USD\/HUF & ZAR\/JPY & ZAR\/JPY & AUD\/USD & NZD\/USD & UDX\/USD\\\\\n 9 & ZAR\/JPY & EUR\/USD & AUD\/USD & USD\/DKK & UDX\/USD & NZD\/USD\\\\\n 10 & UDX\/USD & UDX\/USD & EUR\/USD & BCO\/USD & BCO\/USD & USD\/DKK\\\\\n \n \\hline\n \\end{tabular}\n \\end{center}\n\\end{sidewaystable}\n\n\n\n\n\n\n\n\nTo validate whether some correlations are not a random result, but simply correspond to low influence, we also look at the results without the Bonferroni correction. The results for the statistically significant lagged correlations, which pass the threshold $p=0.01$ in each of the studied 36 months are displayed in Table~\\ref{tab:PersistentCorr}. These results correspond to the first three scenarios. One can note that mostly the same correlations are present in the three situations. Only in few cases some additional correlation pairs appear when either the return in the leader is nonzero, or both the leader and the lagger in the next minute have nonzero returns. Although accidental emergence of correlation among $N$ uncorrelated time series with such significance threshold would not be a surprise, its appearance in 36 months consecutively suggests that some relationships between all such pairs really exist. From the same table one can notice some interesting patterns. First, dominant leaders are the same four stock market indexes AUX, ETX, JPX, and SPX, that appear as top sources of information spillover as detected with the PageRank. They influence the other indexes FRX, GRX, HKX, NSX, UKX and UDX. It is interesting that the other US-based index NSX is not as influential as the SPX. In addition, one can note that as laggers appear many exchange rates involving the Japanese Yen. When the above mentioned leading market indexes gain value, the Yen seems to react by weakening with respect to other assets. Another intriguing observation is the lowering of the value of gold ounce, XAU, quoted in AUD, when these indexes rise. One can also note that rise or fall of the MXN with respect to the US dollar, is followed by similar behavior of the TRY. These two minor currencies have been also found to be mutually related in a previous study \\cite{basnarkov2019correlation}. 
Finally, it is interesting that the fall of the MXN with respect to the USD, is followed by similar behavior of the ZAR with respect to the JPY.\n\n\n\n\n\\begin{table}[h!]\n\\begin{minipage}{\\textwidth}\n \\begin{center}\n \\caption{Persistent statistically significant correlations.}\n \\label{tab:PersistentCorr}\n \\begin{tabular}{l|l}\n \\textbf{Leader} & \\textbf{Positively correlated lagger} \\\\\n \\hline \n AUX\/AUD & AUD\/JPY, CAD\/CHF, CAD\/JPY, FRX\/EUR, \\\\ & GRX\/EUR, HKX\/HKD, NSX\/USD, NDZ\/JPY, \\\\ & SGD\/JPY, SPX\/USD\\footnote{\\label{ftnt:L_or_LF}Persistent correlation when either the leader, or both the leader and the lagger have nonzero returns (scenarios 1 and 2).}, UDX\/USD, UKX\/GBP,\\\\ & USD\/JPY\\\\\n \\hline \n ETX\/EUR & AUD\/JPY, CAD\/JPY, FRX\/EUR, GBP\/JPY, \\\\ & GRX\/EUR, HKX\/HKD, NSX\/USD, NDZ\/JPY,\\\\ \n & SGD\/JPY, SPX\/USD, UDX\/USD, UKX\/GBP,\\\\ & USD\/JPY\\\\\n \\hline \n JPX\/JPY & AUD\/JPY, CAD\/CHF, CAD\/JPY, CHF\/JPY\\footref{ftnt:L_or_LF}, \\\\ & \n EUR\/JPY, FRX\/EUR, GBP\/JPY, GRX\/EUR,\\\\ & HKX\/HKD, NSX\/USD, NDZ\/JPY, SGD\/JPY, \\\\ & SPX\/USD\\footref{ftnt:L_or_LF}, UDX\/USD, UKX\/GBP, USD\/JPY\\\\\n \\hline \n SPX\/USD & AUD\/JPY, CAD\/JPY, SGD\/JPY\\footref{ftnt:L_or_LF}\\\\\n \\hline\n USD\/MXN & USD\/TRY\\\\\n \\hline\n\\textbf{Leader} & \\textbf{Negatively correlated lagger}\\\\\n \\hline\n AUD\/USD & USD\/TRY \\footnote{Persistent correlation when only the leader has nonzero return (scenario 1). \\label{ftnt:L}}\\\\\n \\hline\n ETX\/EUR & XAU\/AUD\\\\\n \\hline\n JPX\/JPY & XAU\/AUD\\\\\n \\hline\n SPX\/USD & XAU\/AUD\\\\\n \\hline\n USD\/MXN & ZAR\/JPY\\footref{ftnt:L}\\\\\n \\hline\n \\end{tabular}\n \\end{center}\n \\end{minipage}\n\\end{table}\n\n\n\n\n\\subsection{Lagged partial correlations}\n\nWe quantify the importance of the exchange rates as determined from the lagged partial correlations through the PageRank algorithm, as well. In the fifth column of Table~\\ref{tab:Ranking}, we show the ranking of the most influential exchange rates. The same four major market indexes, as in the lagged correlation case, appear at the top of the rankings. When applying the ordinary critical value of $0.01$ instead of the Bonferroni correction\nwe find similar results as in the previous section. These results are provided in Table~\\ref{tab:PersPartCorrAndCaus}.\nHowever, these results also suggest why at the top is the European stock market index ETX.\nNamely, the log returns on it, lead the returns of all market indexes AUX, FRX, GRX, HKX, JPX, NSX, SPX, and UKX. It is influenced by the FRX and GRX, which is a feedback relationship, something that was not present in the ordinary correlations. This influence is not symmetrical and is more pronounced from the ETX towards FRX and GRX, than in the opposite direction.\n\nWe find that other bidirectional partial correlations are those between the two oil brands, and between TRY and ZAR when quoted in US dollars. We remark that in the ordinary case, these currencies did not possess any mutual lagged correlation. Moreover, our results show that the value of TRY and ZAR in US dollars seems to be also similar in their trailing of the dynamics of the two Oceanic currencies AUD and NZD as quoted in the USD. The returns on AUD\/USD also show sign of having some influence on those on SGD\/USD. Particularly important is the fact that the return on the silver in USD, precedes similar behavior of the gold in the next minute. 
In a related observation involving precious metals, it appears that the CHF adjusts its value in terms of gold after the neighboring EUR has done so one minute earlier. When any of the MXN, TRY, or ZAR gains value against the USD, the ZAR tends to behave similarly with respect to the JPY. Also, it is likely that the MXN has some influence on the ZAR, which was not observed in the ordinary lagged correlations. Finally, interesting observations are those involving European currencies. The value of the PLN in USD seems to influence the HUF quoted in the same base currency. It is also peculiar that the value of the minor currency CZK, given in terms of US dollars, influences the major EUR, but not vice versa. \n\n\n\n\\subsection{Granger causality}\n\nThe tests for the presence of Granger causality were performed under scenarios 3 and 4, where both the leader and the lagger must have nonzero returns simultaneously, while in scenario 3 the lagger additionally needs to have non-vanishing returns in two consecutive periods.\nThe statistically significant causal relationships were represented as a directed network and the importance of exchange rates was again determined with the PageRank algorithm. We note that the networks which we have considered for depicting the causal relationships are not weighted, which means that a link between two nodes exists only if the source node is influenced by the target node, and the weight of each such link is one. As expected, the rates involving market indexes appear at the top of the list. The rankings of the ten rates with the strongest causal influence are provided in the last two columns of Table~\\ref{tab:Ranking}, where an ordering similar to that of the other two measures (lagged correlations and lagged partial correlations) is observed. Clearly, it can be argued that the lagged correlations, both partial and ordinary, suggest that a causal relationship is present between a leader and its lagger. By lowering the threshold for statistical significance to $0.01$, as in the previous cases, we obtain almost the same pairs of exchange rates as in the case of partial correlations. The only exception is the existence of a statistically significant partial lagged correlation with the JPX\/JPY as leader and the CHF\/JPY as lagger. The causality relationship between the same pair fails to pass the threshold, under both scenarios, in only one month, December 2017. This might be a statistical artifact, and one could speculate that both scenarios produce essentially the same results.\n\nWe note that one might be skeptical about the predictability under scenario 3, where we determine causality when the lagger is constrained to have a nonzero return in the next minute. 
However, in the weaker case, when non-vanishing returns of the leader and the lagger are observed simultaneously, one can use them as signals, and make a regression on the future value of the return on the lagger.\n\n\\begin{table}[h!]\n \\begin{center}\n \\caption{Persistent statistically significant partial correlations and causality relationships.}\n \\label{tab:PersPartCorrAndCaus}\n \\begin{tabular}{l|l}\n \\textbf{Leader} & \\textbf{Same direction} \\\\\n \\hline \n AUX\/AUD & AUD\/JPY, CAD\/CHF, CAD\/JPY, FRX\/EUR, \\\\ & GBP\/JPY, GRX\/EUR, HKX\/HKD, NSX\/USD, \\\\ & NDZ\/JPY, SGD\/JPY, SPX\/USD, UDX\/USD, \\\\ & UKX\/GBP, USD\/JPY, ZAR\/JPY\\\\\n \\hline \n BCO\/USD & WTI\/USD \\\\\n \\hline\n ETX\/EUR & AUD\/JPY, AUX\/AUD, CAD\/CHF, CAD\/JPY, \\\\ & FRX\/EUR, GBP\/JPY, GRX\/EUR, HKX\/HKD, \\\\ & JPX\/JPY, NSX\/USD, NDZ\/JPY, SGD\/JPY, \\\\ & SPX\/USD, UDX\/USD, UKX\/GBP, USD\/JPY\\\\\n \\hline \n FRX\/EUR & ETX\/EUR, GRX\/EUR \\\\\n \\hline\n GRX\/EUR & ETX\/EUR\\\\\n \\hline\n JPX\/JPY & AUD\/JPY, CAD\/CHF, CAD\/JPY, CHF\/JPY \\footnote{Only the partial lagged correlation is persistent in the considered 36 months.\\label{ftnt:Part_Corr}}, \\\\ & EUR\/JPY, FRX\/EUR, GBP\/JPY, GRX\/EUR,\\\\ & HKX\/HKD, NSX\/USD, NDZ\/JPY, SGD\/JPY, \\\\ & SPX\/USD, UDX\/USD, UKX\/GBP, USD\/JPY\\\\\n \\hline \n SPX\/USD & AUD\/JPY, CAD\/JPY, GRX\/EUR, NSX\/USD, \\\\ & NZD\/JPY, SGD\/JPY, UDX\/USD\\\\\n \\hline\n USD\/CAD & USD\/ZAR \\\\\n \\hline\n USD\/MXN & EUR\/TRY, USD\/TRY, USD\/ZAR\\\\\n \\hline\n USD\/PLN & USD\/HUF \\\\\n \\hline\n USD\/TRY & USD\/ZAR \\\\\n \\hline\n USD\/ZAR & EUR\/TRY, USD\/TRY\\\\\n \\hline\n WTI\/USD & BCO\/USD \\\\\n \\hline\n XAG\/USD & XAU\/USD \\\\\n \\hline\n XAU\/EUR & XAU\/CHF \\\\\n \\hline\n ZAR\/JPY & SGD\/JPY \\\\\n \\hline\n\\textbf{Leader} & \\textbf{Opposite direction}\\\\\n \\hline\n AUD\/USD & USD\/SGD, USD\/TRY, USD\/ZAR\\\\\n \\hline\n AUX\/AUD & XAU\/AUD\\\\\n \\hline\n ETX\/EUR & XAU\/AUD\\\\\n \\hline\n JPX\/JPY & XAU\/AUD\\\\\n \\hline\n NZD\/USD & USD\/TRY, USD\/ZAR\\\\\n \\hline\n SPX\/USD & XAU\/AUD\\\\\n \\hline\n USD\/CZK & EUR\/USD \\\\\n \\hline\n USD\/MXN & ZAR\/JPY\\\\\n \\hline\n USD\/TRY & ZAR\/JPY\\\\\n \\hline\n USD\/ZAR & ZAR\/JPY\\\\\n \\hline\n ZAR\/JPY & EUR\/TRY\\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\nSince the foreign exchange is known to be very liquid, one can not expect significant lags that span longer than one minute. To empirically check this hypothesis we also made an analysis where we examine whether these lagged relationships persist for longer periods. For this purpose we re-estimated our metrics for lead-lag relationships by taking into account for lagger returns that trail two minutes. We found that there are some pairs of exchange rates that pass the significance threshold of 0.01. However, they are not consistent. In particular, no such pair passed the test for all months in 2016. This is a further indication that the relevant lagging time scale is of order of one minute. Due to spatial constraints, we do not show these results, but make them available upon request.\n\n\n\\section{\\label{sec:conclusions}Conclusions}\n\nTo conclude, here we studied the relations between pairs of exchange rates where one exchange rate lags the other for one minute, and considered three different approaches. The first one considers lagged correlations which quantify similarity of log returns between lead and lagged exchange rate. 
The second approach restricts the lagged correlations to partial, in order to account for the potential autocorrelation. Finally, the last approach tries to uncover the causal relationships in the foreign exchange market in the sense of Granger. \n\nThe existence of statistically significant lagged correlations shows that, even though, the foreign exchange market is known to have very fast dynamics, information spreading is not instantaneous. We discovered that the rates which cause others to follow their dynamics are mostly those which involve stock market indexes. Observing changes in the value of an index, implies that certain currencies, or market indexes would more likely gain, while others would loose value. This was further confirmed by the calculation of the lagged partial correlation between the leader and the lagger exchange rate. By applying the PageRank algorithm on the statistically significant correlations in both cases we found that the most influential rates are indeed those involving market indexes. The same conclusions hold even when we estimated the Granger causality between pairs of exchange rates. When lagging period is two minutes, these lead-lag effects disappear, which suggests that the typical lagging time is of the order of one minute.\n\nIn spite of the fact that the existence of lagged correlations and causality challenges the efficient market hypothesis in foreign exchange markets, we are still uncertain of the existence of arbitrage. In order for this phenomenon to be proven, one should verify whether the potential statistical-arbitrage-based profit \ncan overcome the transaction costs. Investigating the presence of arbitrage should be a topic of future research which, besides the bid quotes, requires additional information such as ask price, transaction costs and time needed to complete the transaction.\n\n\n\nAs a final remark, we state that here we did not investigate the temporal dynamics of the discovered correlation patterns. A more detailed study which captures the dependencies among these temporal correlations could reveal novel insights for the nature of the lead-lag relationships in foreign exchange markets.\n\n\n\n\\section{Acknowledgement}\n\nWe are grateful to \\url{histdata.com} for freely sharing their data. This research was partially supported by the Faculty of Computer Science and Engineering at ``SS. Cyril and Methodius'' University in Skopje, Macedonia and by DFG through grant ``Random search processes, L\\'evy flights, and random walks on complex networks''.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nModeling and analyzing the dynamics of social networks allows scientists to understand the impact of social interactions on areas as diverse as politics (opinion formation, spread of (dis)information), economic development, and health (spread of diseases, update of vaccines)~\\cite{read2008dynamic,baumann2020modeling,DBLP:journals\/tcss\/BolzernCN20}. \n %\nMuch of the more foundational literature approach social network analysis from a perspective informed by statistical physics~\\cite{newman2008physics}\nusing a combination of mathematical models (differential equations) and programmed simulation, both derived from an intuitive understanding of the operation of the network. 
This works well for static networks, where the structure is fixed and changes are to node or link attributes only, but in complex adaptive networks~\\cite{Gross2008} the interconnectedness of structure and data evolution poses additional challenges.\n\nStochastic typed attributed graph transformation~\\cite{HLM06FI} is an obvious choice to formalize and analyze complex adaptive networks. The formalism provides both tool support for simulation and analysis and an established theory to derive mathematical models from the same rule-based descriptions, thus replacing the informal, and sometimes vague, descriptions in natural language. A case in point are the various voter models~\\cite{Durrett2012,zschaler2012early,klamser2017zealotry}\nwhich describe opinion formation in a network of agents. The operations, as described in~\\cite{Durrett2012} seem clear enough. \n\n\\begin{quotation}\n\\dots we consider two opinions (called 0 and 1) \\dots; and on each step, we pick a discordant edge $(x,y)$ at random \\dots. With probability $1 - \\alpha$, the voter at $x$ adopts the opinion of the voter at $y$. Otherwise (i.e., with probability $\\alpha$), $x$ breaks its connection to $y$ and makes a new connection to a voter chosen at random from those that share its opinion. The process continues until there are no edges connecting voters that disagree.\n\\end{quotation}\n\nIntuitively, this is a graph transformation system of four rules over undirected graphs whose nodes are attributed by 0 or 1, shown below using $\\circ$ and $\\bullet$, respectively. \nReferring to~\\cite{bp2019-ext,nbSqPO2019,behrCRRC,BK2020} for further details, we utilise a ``right-to-left'' convention for rewriting rules in order to accommodate a natural order of composition of the rule algebra representations.%\n\n\\[\n\\begin{matrix}\\bullet \\hphantom {-}\\circ \\\\[-0.675em] \\!\\!\\!\\!\\!\\!\\backslash \\\\[-0.675em]\\bullet \\end{matrix}\\;\\leftharpoonup \\;\\begin{matrix}\\bullet \\!\\!-\\!\\!\\circ \\\\[-0.25em]\\bullet \\end{matrix}\n\\qquad\n\\begin{matrix}\\bullet \\hphantom {-}\\circ \\\\[-0.675em] \\;\\;\\;\/ \\\\[-0.675em]\\circ \\end{matrix}\\;\\leftharpoonup \\;\n\\begin{matrix}\\bullet \\!\\!-\\!\\!\\circ \\\\[-0.25em]\\circ \\end{matrix}\n\\qquad\n\\bullet \\!\\!-\\!\\!\\bullet \\;\\leftharpoonup \\;\\bullet \\!\\!-\\!\\!\\circ\n\\qquad\n\\circ \\!\\!-\\!\\!\\circ \\;\\leftharpoonup \\;\\bullet \\!\\!-\\!\\!\\circ\n\\]\n\nThe first pair of rules models \\emph{rewiring} where an agents disconnects from another one with a different opinion to form a connection with a third agent of the same opinion. In the second pair of rules, an agent connected to one with a different opinion \\emph{adopts} the opinion of the other. \nWe could now associate rates with these rules, resulting in a stochastic graph transformation system that allows simulation using the SimSG tool~\\cite{DBLP:journals\/jot\/EhmesFS19} and the derivation of differential equations using the rule algebra approach~\\cite{bp2019-ext,nbSqPO2019, behrCRRC,BK2020}.\n\nHowever, on closer inspection we discover a number of discrepancies. First, the model in~\\cite{Durrett2012} and related papers is probabilistic, but without time. This is an abstraction of real-world behavior in social networks, where time is not discrete and actions not round-based. Arguably, a continuous-time model is a better representation of this behavior. 
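Before contrasting this with the continuous-time alternative, it helps to pin down the informal procedure. One update step of the description quoted above can be read as the sketch below; this is our reading, with the network stored as a plain edge list, and not code from~\\cite{Durrett2012}.
\\begin{verbatim}
import random

def voter_step(edges, opinion, alpha):
    # One round: pick a discordant edge; with probability alpha the acting
    # endpoint rewires to a random agent of the same opinion, otherwise it
    # adopts the opinion of its neighbour.
    discordant = [e for e in edges if opinion[e[0]] != opinion[e[1]]]
    if not discordant:
        return False                  # no discordant edge left: process stops
    x, y = random.choice(discordant)
    if random.random() < 0.5:         # the quoted description leaves the acting
        x, y = y, x                   # endpoint implicit; we pick it uniformly
    if random.random() < alpha:       # rewire: x drops y and connects to a peer
        peers = [v for v in opinion if v != x and opinion[v] == opinion[x]]
        if peers:                     # assume at least one agent shares x's opinion
            old = (x, y) if (x, y) in edges else (y, x)
            edges.remove(old)
            edges.append((x, random.choice(peers)))
    else:                             # adopt: x copies the opinion of y
        opinion[x] = opinion[y]
    return True
\\end{verbatim}
Each such step consumes one unit of discrete time, and the probability of rewiring is the fixed constant $\\alpha$, independent of how many candidate partners are available.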
Semantically, a stochastic graph transformation system induces a continuous-time Markov Chain (CTMC) --- an established model with clear links to logics, model checking and simulation techniques, while the operational model behind \\cite{Durrett2012} and others is left informal, but could be formalized as a discrete-time Markov chain (DTMC) or decision process (MDP)~\\cite{bdg2016, bdg2019,bp2019-ext,BK2020}. %\n\nMore specifically, looking at the description of the operation, this is a two- or three-step process, where first a conflict edge is selected at random, then a decision made based on the fixed probability $\\alpha$ between adopting another opinion, and rewiring which requires another random selection of a node with the same opinion. That means, in the rewiring case, the combined behavior of the operation is not reflected directly by the rewiring rules above, which choose all three nodes first. Also, the race between the rewiring and adopt rules in a stochastic graph transformation system depends on the number of available matches (the candidates for the 3rd agent to link) while the probability is fixed in the original formulation. Stochastic graph transformation realizes a mass action semantics which reflects the behavior of physical, chemical and biological processes where the frequency of actions (formally the jump rate of the CTMC) depends on both a rate constant and the concentration or amount available of the input materials required for the action. \n\nIn our view, the informal description of operations, lack of continuous time, and non-standard selection and execution procedure all represent weaknesses in the formulation which inhibit a natural mathematical interpretation of the operational behavior of the model and consequently a formal and systematic connection of this operational description with the simulation and equational model. \n\nOther authors have studied algorithmic problems in social networks, such as what subset of actors to target with interventions (e.g., in marketing or public health campaigns) so as to maximise the impact of the campaign on the network as a whole~\\cite{DBLP:journals\/toc\/KempeKT15,DBLP:journals\/kbs\/LiuYWLLT16}. Such research is based on models of opinion formation and the spread of information in networks, typically divided into \\emph{cascading} and \\emph{threshold models}. In a cascading model, an actor is activated by a single message or activation from a peer. The behavior outlined above, where a discordant edge represents an activation, falls into this category. Instead, in a threshold model, an actor becomes active once a certain number of its peers are activated. \n\nDifferent models give rise to different centrality measure for actors~\\cite{DBLP:journals\/kbs\/RiquelmeCMS18} and hence to different algorithmic solutions for discovering the most influential set of actors to target. These models assume static networks, but one could ask the question of centrality of actors for a dynamic network model given by a stochastic graph transformation system incorporating both the spread of information and the change of topology. \n\nIn this paper, we present an approach addressing weaknesses in dynamic social network modelling by starting from a formal graph transformation model as operational description. 
We investigate the links to the existing voter models both analytically, using the rule algebra framework to establish how to translate operations and parameters to stochastic graph transformation, and experimentally, trying to recreate the emergent behavior reported in the literature. Apart from making a convincing claim for the superiority of our methodological approach, this will allow us in the future to compare new analysis results to existing ones in the literature.\n\nWe start by introducing the model as a stochastic graph transformation system in the format accepted by our simulation tool, then study the theory of relating the different CTMC and DTMC semantics to support different possible conversions between models, before reporting on the experimental validation of the resulting models through simulation. \n\n\n\\section{Voter Models as Stochastic Typed Attributed Graph Rewrite Systems}\\label{sec:voter-models}\n\nWe model voter networks as instances of the type graph below (Figure~\\ref{fig:voter-model-tg}), where \\emph{Group} nodes represent undirected links connecting \\emph{Agent} nodes. This means that two agents are linked if they are members of the same group. For now, the cardinality of each group is exactly 2, so these groups are indeed a model of undirected edges between agents (and we are planning to generalize to groups with several members later).\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.50\\textwidth]{graphics\/TypeGraph.pdf}\n\t\\caption{Type graph of the basic voter model}\\label{fig:voter-model-tg}\t\n\\end{figure}\nWe see the voter network as an undirected multigraph, i.e., parallel links between two voters $v1$ and $v2$ are permitted. \nThat means, in our representation, $v1, v2$ can jointly make up one group $g1$ as well as another group $g2$. \nIt is worth noting that the multigraph interpretation is never explicitly stated in~\\cite{Durrett2012} or in the literature building on it. \nHowever, it is the only interpretation consistent with the usual assumptions made, in particular that rewiring is always possible after the selection of the 3rd node, which may or may not be connected to the first node $v1$ already, and that the total number of edges in the graph is constant, so we have to create a new edge on rewiring even if $v1$ and $v2$ are already connected.\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.80\\textwidth]{graphics\/AdoptRules.pdf}\n\t\\includegraphics[width=0.80\\textwidth]{graphics\/Rewire1Rule.pdf}\n\t\\includegraphics[width=0.80\\textwidth]{graphics\/Rewire2Rule.pdf}\n\t\\caption{Rules of the basic voter model}\\label{fig:voter-model-rules}\n\\end{figure}\n\nIn this paper we use SimSG \\cite{DBLP:journals\/jot\/EhmesFS19} to simulate rule-based models, such as the basic voter model, for a given start graph and rates, according to the semantics of stochastic graph transformations~\\cite{HLM06FI}.\nSimSG implements its own version of Gillespie's well-known algorithm~\\cite{Gillespie1977}, which describes the behavior of stochastic systems over time using a continuous-time Markov Chain with exponentially distributed transition delays.\nSimSG is built upon the graph transformation rule interpreter eMoflon \\cite{eMoflon}, which provides the means to define and execute rules. For example, there are four rules in the basic voter model, two dual variants each of the \\emph{adopt} and \\emph{rewire} operations. 
\nThe graphs in Figure~\\ref{fig:voter-model-rules} show a visual representation of these four rules as defined in the eMoflon::IBeX-GT\\footnote{eMoflon::IBeX-GT: \\url{https:\/\/emoflon.org\/\\#emoflonIbex}} syntax.\nIn these rules, black and red elements (\\texttt{--}) specify context elements that need to be present in an instance graph, while green elements (\\texttt{++}) will be created.\nA pattern matcher will search for matches in an instance graph that fit these requirements.\nAs shown in Figure~\\ref{fig:voter-model-rules}, in addition to structural constraints, eMoflon allows the specification of attribute conditions.\nIf a match is found, the rule can be applied by deleting all graph elements matching red elements in the rule, and creating new instances of all green elements. \nIn addition to the structural constraints and attribute conditions shown in the example, eMoflon-GT also allows the definition of more complex conditions, such as negative application conditions that filter out matches connected to prohibited graph structures. \nBeyond the specification of rules, SimSG allows for the annotation of rules with rates, either through value literals or statically evaluated arithmetic expressions.\nMost importantly, eMoflon provides SimSG with an interface to its underlying incremental graph pattern matching engines, such as Viatra\\cite{VIATRA}, Democles\\cite{DEMOCLES} or the recently developed HiPE\\footnote{HiPE: \\url{https:\/\/github.com\/HiPE-DevOps\/HiPE-Updatesite}}, all of which can be used to find matches for rules and track observable patterns during simulations.\nDuring each simulation, SimSG tracks occurrence counts of these patterns and provide the user with the option to plot these counts over the simulation time or save them to a file.\n \n\n\n\n\\section{Theory: CTMCs and DTMCs via Rule-algebraic Methods}\\label{sec:theory}\n\n\nApproaches to social network modeling can be broadly classified by their underlying semantics. In this paper, we consider the two major classes consisting of \\emph{discrete-time Markov chains (DTMCs)} and \\emph{continuous-time Markov Chains (CTMCs)} (leaving for future work the class of Markov decision processes). We will utilize the \\emph{rule algebra formalism}~\\cite{bp2019-ext,nbSqPO2019, behrCRRC,BK2020} as our central technical tool, with CTMCs in so-called mass-action semantics implemented via the \\emph{stochastic mechanics framework}~\\cite{bdg2016, bdg2019,bp2019-ext, BK2020}. Here, the firing rate of a given rule for a system state at some time $t\\geq 0$ is proportional to a \\emph{base rate} (i.e., a positive real parameter) times the number of admissible matches of the rule into the system state.\nNote however that even in the CTMC setting, this semantics is only one of several conceivable variants, and indeed we will study in this paper also a variation of rule-based CTMC semantics wherein rule activities in infinitesimal jumps are not weighted by their numbers of matches. 
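To make this contrast concrete, the following minimal sketch (ours, for orientation only; it is neither the SimSG implementation nor part of the rule algebra framework, and \\texttt{count\\_matches} and \\texttt{apply\\_rule} stand in for a pattern matcher and a rule application engine) shows how, in a Gillespie-style loop under mass-action semantics, each rule's propensity is its base rate multiplied by its current number of matches:\n\\begin{verbatim}\nimport random\n\ndef gillespie(graph, rules, rates, t_end):\n    # rules: rule identifiers; rates: one base rate per rule\n    t = 0.0\n    while t < t_end:\n        # mass-action propensity = base rate times current match count\n        matches = [count_matches(r, graph) for r in rules]\n        propensities = [k * m for k, m in zip(rates, matches)]\n        total = sum(propensities)\n        if total == 0:\n            break  # no rule is applicable\n        t += random.expovariate(total)  # exponentially distributed delay\n        # choose a rule with probability proportional to its propensity\n        rule = random.choices(rules, weights=propensities)[0]\n        graph = apply_rule(rule, graph)  # apply at a uniformly chosen match\n    return graph\n\\end{verbatim}\nDropping the match counts from the propensities corresponds to the match-count-independent variant alluded to above.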
In order to faithfully implement rule-based DTMCs, we will adopt the approach of~\\cite{Behr2021}.\n\n\n\\subsection{Preliminaries: ``compositional'' rewriting theory}\n\nWe will focus throughout this paper on the case\\footnote{Since the rules in our concrete models are \\emph{linear} and will not involve vertex deletions nor creations, we could have equivalently opted for DPO- or SPO-semantics, which are well-known to coincide with SqPO-semantics in this special situation.} of Sesqui-Pushout (SqPO) semantics~\\cite{Corradini2006}, which under certain conditions on the underlying base categories furnishes a ``compositional'' rewriting semantics~\\cite{nbSqPO2019}, i.e., supports the requisite construction of rule algebras. Since the main theme of this paper is to advocate rewriting-based methods for implementing models, we discuss here the recent extension of the SqPO-formalism to include rules with conditions and under the influence of constraints~\\cite{behrCRRC,BK2020}. We refer the interested readers to~\\cite{BK2020} for the analogous formalism in the Double Pushout (DPO) setting.\n\nConsider thus a \\emph{base category} $\\bfC$ that is \\emph{$\\cM$-adhesive} (for some class $\\cM$ of monomorphisms), \\emph{finitary}, possesses an $\\cM$-initial object, $\\cM$-effective unions and an epi-$\\cM$-factorization, and such that all final pullback complements (FPCs) along composable pairs of $\\cM$-morphisms exist (compare~\\cite{BK2020}). Throughout this paper, we will consider the particular example of the category $\\mathbf{uGraph}\/T$ (for some $T\\in \\obj{\\mathbf{uGraph}}$) of \\emph{typed undirected multigraphs}, which according to~\\cite{BK2020} satisfies all of the aforementioned requirements.\n\nLet us briefly recall the salient points of SqPO-type ``compositional'' rewriting theory:\n\n\\begin{definition}[Rules]\nDenote by $\\LinAc{\\bfC}$ the set of equivalence classes of linear rules with conditions,\n\\begin{equation}\n\\LinAc{\\bfC}:= \\{ R=(O\\xleftarrow{o}K\\xrightarrow{i}I,\\ac{c}_I)\\vert i,o\\in \\cM, \\ac{c}_I\\in \\cond{\\bfC}\\}\\diagup_{\\sim}\\,,\n\\end{equation}\nwhere $R\\sim R'$ if and only if $\\ac{c}_I\\dot{\\equiv}\\ac{c}_I'$ (i.e., if the conditions are equivalent for all matches of the rules) and the rules are isomorphic. The latter entails the existence of isomorphisms $\\omega:O\\xrightarrow{\\cong} O'$, $\\kappa:K\\xrightarrow{\\cong} K'$ and $\\iota:I\\xrightarrow{\\cong} I'$ such that the obvious diagram commutes. We will adopt the convention to speak of ``a'' rule $R\\in \\LinAc{\\bfC}$ to mean an equivalence class of rules.\n\\end{definition}\n\n\\begin{definition}\nGiven a rule $R\\in \\LinAc{\\bfC}$ and an object $X\\in \\obj{\\bfC}$, an \\emph{SqPO-admissible match} $m\\in \\RMatchGT{}{R}{X}$ is an $\\cM$-morphism $m:I\\rightarrow X$ that satisfies the application condition $\\ac{c}_I$. Then an \\emph{SqPO-type direct derivation} of $X$ along $R$ with a match $m\\in \\RMatchGT{}{R}{X}$ is defined via a commutative diagram of the form\n\\begin{equation}\n\\ti{dd}\n\\end{equation}\nwhere the object $R_m(X)$ is defined up to the universal isomorphisms of FPCs and pushouts (POs).\n\\end{definition}\nNote that in all of our applications, we will only consider objects up to isomorphisms, so we will in a slight abuse of notations speak of taking direct derivations of iso-classes $X\\in \\obj{\\bfC}_{\\cong}$ along (equivalence classes of) rules, understood as a mapping from iso-classes to iso-classes. 
With the focus of the present paper upon implementation strategies, we forgo here a complete review of the concepts of compositional rewriting theory (see e.g.\\ \\cite{behrCRRC,BK2020}), and limit ourselves to the following salient points. From hereon, $\\bfC$ will always denote a category suitable for SqPO-type transformation.\n\n\\begin{definition}\nLet $\\cR_{\\bfC}$ denote the \\emph{$\\bR$-vector space ``over rules''} spanned by basis vectors $\\delta(R)$, where $\\delta:\\LinAc{\\bfC}\\xrightarrow{\\cong} \\mathsf{basis}(\\cR_{\\bfC})$ is an isomorphism. Let $\\hat{\\bfC}$ denote the \\emph{$\\bR$-vector space ``over states''}, which is defined via the isomorphism $\\ket{.}:\\obj{\\bfC}_{\\cong}\\rightarrow \\mathsf{basis}(\\hat{\\bfC})$ from isomorphism-classes of objects to basis vectors of $\\hat{\\bfC}$. We introduce the notation $End_{\\bR}(\\hat{\\bfC})$ to denote the space of \\emph{endomorphisms} on $\\hat{\\bfC}$ (i.e., of linear operators from $\\hat{\\bfC}$ to itself). Then we denote by $\\rho_{\\bfC}:\\cR_{\\bfC}\\rightarrow End_{\\bR}(\\hat{\\bfC})$ the morphism defined via\n\\begin{equation}\\label{eq:defCanRep}\n\\forall R\\in \\LinAc{\\bfC}, X\\in \\obj{\\bfC}_{\\cong}:\\quad\n\\rho_{\\bfC}(\\delta(R))\\ket{X} := \\sum_{m\\in \\RMatchGT{}{R}{X}} \\ket{R_m(X)}\\,.\n\\end{equation}\nThe above definition extends to arbitrary elements of $\\cR_{\\bfC}$ and $\\hat{\\bfC}$ by \\emph{linearity}, i.e., for $A=\\sum_{j=1}^M \\alpha_j\\delta(R_j)$ and $\\ket{\\Psi}:=\\sum_{X}\\psi_X\\ket{X}$, we define $\\rho_{\\bfC}(A)\\ket{X}:= \\sum_{j=1}^M\\alpha_j\\sum_{X} \\psi_X \\,\\rho_{\\bfC}(\\delta(R_j))\\ket{X}$.\n\\end{definition}\n\nIntuitively, $\\rho_{\\bfC}$ takes a rule vector in $\\cR_{\\bfC}$ and delivers a transformation between state vectors, applying the rules to the input states, and ``weighing'' the resulting states according to the coefficients in the rule vector. This concept is useful in particular in order to formalize the jumps of a CTMC (see Section~\\ref{sec:CTMC}).\n\n\\begin{theorem}[Cf.\\ \\cite{BK2020}]\n\\begin{itemize}\n\\item[(i)] The \\emph{trivial rule} $R_{\\mathop{\\varnothing}}:=[(\\mathop{\\varnothing}\\leftarrow\\mathop{\\varnothing}\\rightarrow\\mathop{\\varnothing},\\ac{true})]_{\\sim}\\in \\LinAc{\\bfC}$ satisfies\n\\begin{equation}\n\\rho_{\\bfC}(\\delta(R_{\\mathop{\\varnothing}}))=Id_{End_{\\bR}(\\hat{\\bfC})}\\,.\n\\end{equation}\n\\item[(ii)] Denote by $\\bra{}:\\hat{\\bfC}\\rightarrow \\bR$ the so-called \\emph{(dual) projection vector}, defined via $\\braket{}{X}:=1_{\\bR}$ for all $X\\in \\obj{\\bfC}_{\\cong}$. Extending this definition by linearity, $\\bra{}$ thus implements the operation of \\emph{summing coefficients}, i.e., for $\\ket{\\Psi}=\\sum_{X}\\psi_X\\ket{X}$, we define $\\braket{}{\\Psi}:=\\sum_X \\psi_X\\braket{}{X}=\\sum_X\\psi_X$. Then $\\rho_{\\bfC}$ satisfies the so-called \\textbf{SqPO-type jump-closure property}:\n\\begin{equation}\\label{eq:jc}\n\\forall R\\in \\LinAc{\\bfC}:\\quad \\bra{}\\rho_{\\bfC}(\\delta(R))\n=\\bra{}\\jcOp{\\delta(R)}\\,,\\quad \\jcOp{\\delta(R)}:=\\rho_{\\bfC}(\\delta([I\\leftarrow I\\rightarrow I,\\ac{c}_I]_{\\sim}))\\,.\n\\end{equation}\n\\end{itemize}\n\\end{theorem}\nThe last point of the above theorem alludes to the important special case of rule-based linear operators that are \\emph{diagonal} in the basis of $\\hat{\\bfC}$, since these implement operations of \\emph{counting patterns}. 
In the particular case at hand, we may combine the definition of $\\rho_{\\bfC}$ in~\\eqref{eq:defCanRep} with~\\eqref{eq:jc} in order to obtain\n\\begin{equation}\\label{ed:obs}\n\\bra{}\\jcOp{\\delta(R)}\\ket{X}=\\bra{}\\rho_{\\bfC}(\\delta(R))\\ket{X}=\\sum_{m\\in \\RMatchGT{}{R}{X}}\\braket{}{R_m(X)} =\n\\sum_{m\\in \\RMatchGT{}{R}{X}} 1_{\\bR}= \\vert \\RMatchGT{}{R}{X}\\vert\\,.\n\\end{equation}\nIn this sense, $\\jcOp{\\delta(R)}$ as defined in~\\eqref{eq:jc} ``counts'' the numbers of admissible matches of $R$ into objects. \n\n\n\\subsection{Rule-based CTMCs}\\label{sec:CTMC}\n\nTraditionally, CTMCs based upon transformation rules have been defined in so-called \\emph{mass-action semantics}. In the rule algebra formalism, such types of CTMCs are compactly expressed as follows (with $\\rho\\equiv \\rho_{\\bfC}$):\n\n\\begin{theorem}[Mass-action semantics for CTMCs]\\label{def:masCTMC}\nLet $\\cT:=\\{(\\kappa_j,R_j)\\}_{j=1}^N$ denote a (finite) set of pairs of \\emph{base rates} $\\kappa_{j}\\in \\bR_{>0}$ and rules $R_j\\in \\LinAc{\\bfC}$ (for $j=1,\\dotsc,N$) over some category $\\bfC$. Denote by $\\mathsf{Prob}(\\hat{\\bfC})$ the space of (sub-)probability distributions over the state-space $\\hat{\\bfC}$, and choose an \\emph{initial state} $\\ket{\\Psi_0}:=\\sum_X \\psi_X^{(0)}\\ket{X}\\in \\mathsf{Prob}(\\hat{\\bfC})$. Then the data $(\\cT,\\ket{\\Psi_0})$ defines a \\textbf{SqPO-type rule-based CTMC} via the following \\textbf{evolution equation} for the \\textbf{system state} $\\ket{\\Psi(t)}:=\\sum_X \\psi_X(t)\\ket{X}$ at time $t\\geq0$:\n\\begin{equation}\n\\tfrac{d}{dt}\\ket{\\Psi(t)}=H\\ket{\\Psi(t)}\\,,\\quad \\ket{\\Psi(0)}=\\ket{\\Psi_0}\\,,\\quad H:=\\rho(h)-\\jcOp{h}\\,,\\quad h:=\\sum_{j=1}^N \\kappa_j\\delta(R_j)\n\\end{equation}\nHere, we used linearity to extend the definition of the jump-closure operator $\\jcOp{.}$ of~\\eqref{eq:jc} to arbitrary elements of $\\cR_{\\bfC}$, i.e., $\\jcOp{h}:=\\sum_{j=1}^N \\kappa_j\\jcOp{\\delta(R_j)}$.\n\\end{theorem}\n\nHowever, while mass-action semantics is of key importance in the modeling of chemical reaction systems, it is of course not the only conceivable semantics. In particular, as we shall demonstrate in the later part of this paper, for certain applications it will prove useful to utilize alternative adjustments of the ``firing rates'' of individual rules other than the one fixed in mass-action semantics.\n\n\\begin{definition}[Generalized rule-based CTMC semantics]\\label{def:gsCTMC}\nFor a suitable base category $\\bfC$, let $\\cT:=\\{(\\gamma_j, R_j, W_j)\\}_{j=1}^N$ be a set of triples of \\emph{base rates} $\\gamma_j\\in \\bR_{>0}$, \\emph{rules} $R_j\\in \\LinAc{\\bfC}$ and \\emph{(inverse) weight functions} $W_j\\in End(\\hat{\\bfC})_{diag}$ ($j=1,\\dotsc,N$). Here, $End(\\hat{\\bfC})_{diag}$ denotes the space of \\emph{diagonal operators} (with respect to the basis of $\\hat{\\bfC}$). 
Together with a choice of \\emph{initial state} $\\ket{\\Psi_0}\\in \\mathsf{Prob}(\\hat{\\bfC})$, this data defines a \\emph{rule-based SqPO-type CTMC} via the \\emph{evolution equation}\n\\begin{equation}\n\\begin{aligned}\n&\\tfrac{d}{dt}\\ket{\\Psi(t)}=H_{\\vec{W}}\\ket{\\Psi(t)}\\,,\\quad\n\\ket{\\Psi(0)}=\\ket{\\Psi_0}\\,,\\quad\nH_{\\vec{W}}:= \\sum_{j=1}^N \\gamma_j\\left(\n\\rho(\\delta(R)_j)-\\jcOp{\\delta(R_j)}\n\\right)\\frac{1}{W_j^{*}}\\\\\n&\\forall F\\in End(\\hat{\\bfC})_{diag},\\ket{X}\\in \\hat{\\bfC}:\\quad \\frac{1}{F^{*}}\\ket{X}:=\\begin{cases}\n\\ket{X} \\quad&\\text{if }F\\ket{X}=0_{\\bR}\\\\\n\\frac{1}{\\bra{}F\\ket{X}}\\ket{X} &\\text{otherwise.}\n\\end{cases}\n\\end{aligned}\n\\end{equation}\n\\end{definition}\n\n\\begin{example}\nGiven a set of pairs of base rates and rules $\\cT=\\{(\\kappa_j,R_j)\\}_{j=1}^N$ for a rule-based CTMC with mass-action semantics as in Definition~\\ref{def:masCTMC}, one may define from this transition set in particular a \\emph{uniformed} CTMC with infinitesimal generator $H_U$ defined as follows (with $\\rho\\equiv \\rho_{\\bfC}$):\n\\begin{equation}\nH_U:=-Id_{End(\\hat{\\bfC})}+\\sum_{j=1}^N p_j \\cdot\\rho(\\delta(R_j))\\frac{1}{\\jcOp{\\delta(R_j)}^{*}}\\,,\\quad p_j:=\\frac{\\kappa_j}{\\sum_{j=1}^N \\kappa_j}\\,.\n\\end{equation}\nNote in particular that since all base rates $\\kappa_j$ are positive real numbers, the parameters $p_j$ are \\emph{probabilities}. Moreover, applying $H_U$ to an arbitrary basis state $\\ket{X}$, one finds that the overall jump rate (i.e., minus the coefficient of the diagonal part of $H_U$) is precisely equal to $1$, so that the $p_j$ in the non-diagonal part of $H_U$ encodes in fact the probability for the ``firing'' of rule $R_j$, regardless of the numbers of admissible matches of $R_j$ into the given state $\\ket{X}$. Finally, under the assumption that both the mass-action and the uniformed CTMC admit a \\emph{steady state}, we find by construction that\n\\begin{equation}\n\\lim\\limits_{t\\to\\infty}\\tfrac{d}{dt}\\ket{\\Psi(t)}=0\\quad \\Leftrightarrow \\quad\\lim\\limits_{t\\to\\infty}H\\ket{\\Psi(t)}=0\\quad \n\\Leftrightarrow \\quad \\lim\\limits_{t\\to\\infty}\nH_U\\left(\\jcOp{h}\\cdot \\ket{\\Psi(t)}\\right)=0\\,.\n\\end{equation}\nLetting $\\ket{\\Psi_U(t)}$ denote the system state at time $t$ of the ``uniformed'' CTMC with generator $H_U$, and assuming $\\ket{\\Psi(0)}=\\ket{\\Psi_U(0)}$, the above entails in particular that $\\ket{\\Psi_U(t\\to\\infty)}= (\\jcOp{h}^{*})^{-1}\\cdot\\ket{\\Psi(t\\to\\infty)}$, i.e., the steady state of the CTMC generated by $H_U$ and the one of the CTMC generated by $H$ are related by an operator-valued rescaling (via the operator $(\\jcOp{h}^{*})^{-1}$, and thus in general in non-constant fashion). A similar construction will play a key role when studying rule-based discrete-time Markov chains.\n\\end{example}\n\nAnother interesting class of examples is motivated via the ``Potsdam approach'' to probabilistic graph transformation systems as advocated in~\\cite{giese2012}. 
\n\n\\begin{example}[LCM-construction]\\label{ex:LCM}\nGiven a generalized CTMC with infinitesimal generator $H_{\\vec{W}}$ specified as in Definition~\\ref{def:gsCTMC}, construct the least common multiple (LCM) $L$ of the inverse weight functions:\n\\begin{equation}\nL:= LCM(W_1,\\dotsc,W_N)\\,.\n\\end{equation}\nThen we define the \\emph{LCM-variant} of the CTMC as\n\\begin{equation}\nH_{LCM}:=H_{\\vec{W}}\\cdot L\\,.\n\\end{equation}\nBy construction, $H_{LCM}$ does not contain any operator-valued inverse weights (in contrast to the original $H_{\\vec{W}}$). Moreover, for certain choices of (inverse) weight functions $W_1, \\dotsc, W_N$, the LCM-construction results in an infinitesimal generator $H_{LCM}$ in which all contributing rules have the same input motif $I$, thus making contact with the methodology of~\\cite{giese2012}. \n\\end{example}\n\n\\subsection{Rule-based DTMCs}\\label{sec:dtmc}\n\nDespite their numerous applications in many different research fields, to date discrete-time Markov chains (DTMCs) that are based upon notions of probabilistic transformation systems have not been considered in quite a comparable detail as the CTMC constructions. Following~\\cite{Behr2021}, we present here a possible general construction of rule-based DTMC via a rule-algebraic approach:\n\n\\begin{definition}\\label{def:DTMC}\nFor a suitable category $\\bfC$, let $\\cT=\\{(\\gamma_j,R_j,W_j)\\}_{j=1}^N$ be a (finite) set of triples of positive coefficients $\\gamma_j\\in \\bR_{>0}$, transformation rules $R_j\\in \\LinAc{\\bfC}$ and (inverse) weight functions $W_j\\in End(\\hat{\\bfC})_{diag}$ ($j=1,\\dotsc,N$), with the additional constraint that\n\\begin{equation}\n\\sum_{j=1}^{N} \\gamma_j\\jcOp{\\delta(R_j)}\\cdot\\frac{1}{W_j^{*}} = Id_{End(\\hat{\\bfC})}\\,.\n\\end{equation}\nThen together with an initial state $\\ket{\\Phi_0}\\in \\mathsf{Prob}(\\hat{\\bfC})$, this data defines a \\emph{SqPO-type rule-based discrete-time Markov chain (DTMC)}, whose $n$-th state (for non-negative integer values of $n$) is given by\n\\begin{equation}\n\\ket{\\Phi_n}:=D^n \\ket{\\Phi_0}\\,,\\quad D:=\\sum_{j=1}^N\\gamma_j\\rho(\\delta(R_j))\\cdot\\frac{1}{W_j^{*}}\\qquad (\\rho\\equiv\\rho_{\\bfC})\\,.\n\\end{equation}\n\\end{definition}\n\n\\begin{figure}\n\\centering\n\\[\n\\ti{AVM}\n\\]\n\\caption{Specification of the adaptive voter model (AVM) according to~\\cite{Durrett2012} via a probabilistic decision tree (black arrows), and as combined one-step probabilistic transitions ({\\color{qOrange}orange arrows}).}\\label{fig:AVM}\n\\end{figure}\n\n\\begin{example}[Adaptive Voter Model] As a typical example of a social network model, consider the specification of the \\emph{adaptive voter model (AVM)} in the variant according to Durrett et al. 
\\cite{Durrett2012}, which has as its \\emph{input parameters} an \\emph{initial graph state} $\\ket{\\Phi_0}=\\ket{X_0}$ (with $N^{(0)}_{\\bullet}$ and $N^{(0)}_{\\circ}$ vertices of types $\\bullet$ and $\\circ$, respectively, and with $N_E$ undirected edges) and a \\emph{probability} $0\\leq \\alpha\\leq1$, and whose transitions as depicted in Figure~\\ref{fig:AVM} are given via a probabilistic decision procedure in several steps (black arrows): in each round of the AVM, an edge is chosen at random; if the edge links two nodes of different kind, then with probability $\\alpha$ one of the rewiring rules is ``fired'', or with probability $(1-\\alpha)$ one of the adopt rules, respectively; otherwise, i.e., if the chosen edge links two vertices of the same kind, no action is performed. As annotated in {\\color{darkblue}dark blue}, each phase of this probabilistic decision procedure is dressed with a probability (so that all probabilities for a given phase sum to $1$). These probabilities in turn depend upon constant parameters (here, $\\alpha$ and $N_E$), as well as on \\emph{pattern counts} $N_{\\bullet}$, $N_{\\circ}$, $N_{\\bullet\\!-\\!\\bullet}$, $N_{\\bullet\\!-\\!\\circ}$ and $N_{\\circ\\!-\\!\\circ}$ that dynamically depend on the current system state. Note that $N_V:=N_{\\bullet}+N_{\\circ}$ and $N_E=N_{\\bullet\\!-\\!\\bullet}+N_{\\bullet\\!-\\!\\circ}+N_{\\circ\\!-\\!\\circ}$ are \\emph{conserved quantities} in this model, since none of the transitions create or delete vertices, but at most exchange vertex types\\footnote{Strictly speaking, we implement the two vertex types $\\bullet$ and $\\circ$ as attributes in our \\texttt{SimSG} implementation, which in the theoretical setting can be emulated by using self-loops of two different types; evidently, this amounts merely to a slight modification of the type graph plus the enforcement of some structural constraints (compare~\\cite{BK2020}), thus for notational simplicity, we do not make this technical detail explicit in our diagrams.}, and since the transitions manifestly preserve the overall number of edges.\n\nUpon closer inspection, the transition probabilities may be written in a form that permits us to compare the ``firing'' semantics to the mass-action and to the generalized semantics used in rule-based CTMCs.
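As an aside, a single round of this decision procedure can be sketched operationally as follows (a schematic illustration only; the data structures and helper names are ours and are taken neither from~\\cite{Durrett2012} nor from our SimSG implementation, and only one of the two dual adopt\/rewire variants is shown):\n\\begin{verbatim}\nimport random\n\ndef avm_round(edges, opinion, alpha):\n    # edges: list of (v1, v2) pairs, parallel edges permitted\n    # opinion: maps each agent to 0 or 1; alpha: rewiring probability\n    v1, v2 = random.choice(edges)      # select an edge at random\n    if opinion[v1] == opinion[v2]:\n        return                         # concordant edge: no action\n    if random.random() < alpha:\n        # rewire: v1 drops the link to v2 and links to a randomly\n        # chosen third agent v3 sharing the opinion of v1\n        candidates = [v for v in opinion\n                      if opinion[v] == opinion[v1] and v != v1]\n        if candidates:\n            v3 = random.choice(candidates)\n            edges.remove((v1, v2))\n            edges.append((v1, v3))\n    else:\n        opinion[v1] = opinion[v2]      # adopt: v1 takes the opinion of v2\n\\end{verbatim}\nThe state-dependence of the rewiring step (through the size of \\texttt{candidates}) is exactly what the operator-valued weights below make explicit.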
According to~\\eqref{ed:obs}, we may implement the operation of counting patterns via linear operators based upon ``identity rules'', since for arbitrary graph patterns $P$ and graph states $\\ket{X}$,\n\\begin{equation}\nN_P(X) = \\bra{}\\hat{O}_P\\ket{X}\\,,\\quad \\hat{O}_P:=\\rho(\\delta([P\\leftarrow P\\rightarrow P,\\ac{true}]_{\\sim}))\\,.\n\\end{equation}\nSince rule algebra theory is not the main topic of the present paper, we provide below some operator relations without derivation\\footnote{In fact, for the simple case at hand, one may derive the relations heuristically, noting that applying $\\hat{O}_{\\bullet\\!-\\!\\circ}\\hat{O}_{x}$ to some graph state amounts to first counting all vertices of type $x$ (for $x\\in \\{\\bullet,\\circ\\}$), followed by counting all $\\bullet\\!\\!-\\!\\!\\circ$ edges; performing this operation in one step, this amounts to either counting the $x$-type vertices separately (i.e., counting the pattern $\\bullet\\!\\!-\\!\\!\\circ\\; x$), or counting the $x$-type vertices on the same location as counting the $\\bullet\\!\\!-\\!\\!\\circ$ patterns, thus explaining the two contributions in~\\eqref{eq:auxObsAVM}.}, noting that they may be computed utilizing the fact that $\\rho$ is a so-called representation of the SqPO-type rule algebra (cf.\\ \\cite[Thm.~3]{BK2020}):\n\\begin{equation}\\label{eq:auxObsAVM}\n\\hat{O}_{\\bullet\\!-\\!\\circ}\\hat{O}_{x}=\\hat{O}_{\\bullet\\!-\\!\\circ\\;x}+\\hat{O}_{\\bullet\\!-\\!\\circ} \\quad \\Leftrightarrow\\quad \n\\hat{O}_{\\bullet\\!-\\!\\circ\\;x}=\\hat{O}_{\\bullet\\!-\\!\\circ}(\\hat{O}_{x}-1)\\qquad (x\\in \\{\\bullet,\\circ\\})\\,,\n\\end{equation}\nwhich permit us to interpret the AVM model in the following way as a rule-based DTMC: starting by computing the overall one-step transitions and their respective probabilities ({\\color{qOrange}orange arrows} in Figure~\\ref{fig:AVM}), and using the above operator relations, we find the rule-based DTMC generator\n\\begin{subequations}\n\\begin{align}\nD_{AVM}&={\\color{qOrange}\\frac{\\alpha}{2N_E}}\n\\rho\\left(\\delta\\left(\\begin{matrix}\\bullet \\hphantom {-}\\circ \\\\[-0.675em] \\!\\!\\!\\!\\!\\!\\backslash \\\\[-0.675em]\\bullet \\end{matrix}\\;\\leftharpoonup \\;\\begin{matrix}\\bullet \\!\\!-\\!\\!\\circ \\\\[-0.25em]\\bullet \\end{matrix}\\right)\\right){\\color{qOrange}\\frac{1}{(\\hat{O}_{\\bullet}-1)^{*}}}\n+{\\color{qOrange}\\frac{\\alpha}{2N_E}}\\rho\\left(\\delta\\left(\\begin{matrix}\\bullet \\hphantom {-}\\circ \\\\[-0.675em] \\;\\;\\;\/ \\\\[-0.675em]\\circ \\end{matrix}\\;\\leftharpoonup \\;\\begin{matrix}\\bullet \\!\\!-\\!\\!\\circ \\\\[-0.25em]\\circ \\end{matrix}\\right)\\right){\\color{qOrange}\\frac{1}{(\\hat{O}_{\\circ}-1)^{*}}}\\label{eq:D-AVM-rewire}\\\\\n&\\quad+{\\color{qOrange}\\frac{(1-\\alpha)}{2N_E}}\\rho\\left(\\delta\\left(\n\\bullet \\!\\!-\\!\\!\\bullet \\;\\leftharpoonup \\;\\bullet \\!\\!-\\!\\!\\circ\n\\right)\\right)+{\\color{qOrange}\\frac{(1-\\alpha)}{2N_E}}\\rho\\left(\\delta\\left(\n\\circ \\!\\!-\\!\\!\\circ \\;\\leftharpoonup \\;\\bullet \\!\\!-\\!\\!\\circ\n\\right)\\right) \\label{eq:D-AVM-adopt}\\\\\n&\\quad+{\\color{qOrange}\\frac{1}{N_E}}\\rho\\left(\\delta\\left(\n\\bullet \\!\\!-\\!\\!\\bullet \\;\\leftharpoonup \\;\\bullet \\!\\!-\\!\\!\\bullet\n\\right)\\right)+{\\color{qOrange}\\frac{1}{N_E}}\\rho\\left(\\delta\\left(\n\\circ \\!\\!-\\!\\!\\circ \\;\\leftharpoonup \\;\\circ \\!\\!-\\!\\!\\circ\n\\right)\\right) \\label{eq:D-AVM-inactive}\\,.\n\\end{align}\n\\end{subequations}\nInterestingly, the 
particular semantics for the ``firing'' of transitions as encoding in this DTMC generator is neither a variant of mass-action semantics (wherein each rule would fire with a rate dependent on its number of matches, and so that the overall firing rate of all rules would be utilized to normalize all state-dependent firing rates to probabilities as described in Definition~\\ref{def:DTMC}), nor do all transitions have state-independent, i.e., constant rates. Instead, the \\emph{rewiring rules}~\\eqref{eq:D-AVM-rewire} have rates proportional to $N_{\\bullet\\!-\\!\\circ}$; according to~\\eqref{eq:auxObsAVM}, we have that $N_{\\bullet\\!-\\!\\circ}=N_{\\bullet\\!-\\!\\circ\\;x}\/(N_X-1)^{*}$ (for $x\\in \\{\\bullet,\\circ\\}$), whence the firing rates of the rewiring transitions of the AVM model are found to be proportional to a state-dependent fraction of their mass-action rates! In contrast, the \\emph{adoption} transitions~\\eqref{eq:D-AVM-adopt} as well as the \\emph{inactive} transitions~\\eqref{eq:D-AVM-inactive} are found to follow standard mass-action semantics, which might raise the question of whether the overall ``mixed'' semantics chosen for the AVM model indeed reflect the intended intuitions and empirical findings.\n\nFinally, in view of simulation experiments, we would like to advocate the use of dedicated and high-performance Gillespie-style stochastic graph transformation software such as in particular \\texttt{SimSG}~\\cite{DBLP:journals\/jot\/EhmesFS19} (cf.\\ Section~\\ref{sec:simulations}). More concretely, one may take advantage of the rule-algebraic formulation of DTMCs and CTMCs to identify for a given DTMC with generator $D$ a particular CTMC based upon the \\emph{uniform} infinitesimal generator $H_U:=D-Id_{End(\\hat{\\bfC})}$, so that (if both limits exist) the two alternative probabilistic systems have the same limit distribution (i.e., $\\ket{\\Phi_{\\infty}}=\\lim\\limits_{t\\to\\infty}\\ket{\\Psi(t)}$):\n\\begin{equation}\nD\\ket{\\Phi_{\\infty}}=\\ket{\\Phi_{\\infty}}\\quad \\Leftrightarrow\\quad\n(D-Id_{End(\\hat{\\bfC})})\\ket{\\Phi_{\\infty}}=0\\quad\\leftrightarrow\\quad\n\\left(\\tfrac{d}{dt}\\ket{\\Psi(t)}\\right)\\big\\vert_{t\\to\\infty}\n=\\left(H_U\\ket{\\Psi(t)}\\right)\\big\\vert_{t\\to\\infty}=0\n\\end{equation}\nHowever, CTMC simulations for rules that are not following mass-action semantics are non-standard to date, which is why it is also of interest to consider two particular mass-action type alternative CTMC models that are motivated by the original AVM model:\n\\begin{itemize}\n\\item A ``standard'' \\emph{mass-action semantics (MAS)} CTMC, where the transformation rules in $D_{AVM}$ are defined to act in mass-action semantics (with the base rates ${\\color{qOrange}\\kappa_{\\bullet}}, {\\color{qOrange}\\kappa_{\\circ}},{\\color{qOrange}\\alpha_{\\bullet}},{\\color{qOrange}\\alpha_{\\circ}}\\in \\bR_{>0}$ however not fixed in any evident way from the original AVM model interpretation):\n\\begin{subequations}\n\\begin{align}\nH_{MAS}&={\\color{qOrange}\\kappa_{\\bullet}}\\cdot\n\\rho\\left(\\delta\\left(\\begin{matrix}\\bullet \\hphantom {-}\\circ \\\\[-0.675em] \\!\\!\\!\\!\\!\\!\\backslash \\\\[-0.675em]\\bullet \\end{matrix}\\;\\leftharpoonup \\;\\begin{matrix}\\bullet \\!\\!-\\!\\!\\circ \\\\[-0.25em]\\bullet \\end{matrix}\\right)\\right)\n+{\\color{qOrange}\\kappa_{\\circ}}\\cdot\\rho\\left(\\delta\\left(\n\\begin{matrix}\\bullet \\hphantom {-}\\circ \\\\[-0.675em] \\;\\;\\;\/\\\\[-0.675em]\\circ \\end{matrix}\\;\\leftharpoonup 
\\;\\begin{matrix}\\bullet \\!\\!-\\!\\!\\circ \\\\[-0.25em]\\circ \\end{matrix}\\right)\\right)\\label{eq:AVM-H-MAS-rewire}\\\\\n&\\quad+{\\color{qOrange}\\alpha_{\\bullet}}\\cdot\\rho\\left(\\delta\\left(\n\\bullet \\!\\!-\\!\\!\\bullet \\;\\leftharpoonup \\;\\bullet \\!\\!-\\!\\!\\circ\n\\right)\\right)\n+{\\color{qOrange}\\alpha_{\\circ}}\\cdot\\rho\\left(\\delta\\left(\n\\circ \\!\\!-\\!\\!\\circ \\;\\leftharpoonup \\;\\bullet \\!\\!-\\!\\!\\circ\n\\right)\\right) \\label{eq:AVM-H-MAS-adopt}\\\\\n&\\quad-\\hat{O}_{\\bullet\\!-\\!\\circ}({\\color{qOrange}\\kappa_{\\bullet}}(\\hat{O}_{\\bullet}-1)+{\\color{qOrange}\\kappa_{\\circ}}(\\hat{O}_{\\circ}-1)+{\\color{qOrange}\\alpha_{\\bullet}}+{\\color{qOrange}\\alpha_{\\circ}})\\label{eq:AVM-H-MAS-diag}\\,.\n\\end{align}\n\\end{subequations}\nNote that if we let $\\rho(h_{MAS}):=\\eqref{eq:AVM-H-MAS-rewire}+\\eqref{eq:AVM-H-MAS-adopt}$, then the terms in~\\eqref{eq:AVM-H-MAS-diag} are indeed found to be equal to $-\\jcOp{h_{AVM}}$ (utilizing~\\eqref{eq:auxObsAVM} to simplify terms) as required by Theorem~\\ref{def:masCTMC}.\n\\item A \\emph{least-common-multiple (LCM)} variant which is computable from $D_{AVM}$ via the method presented in Example~\\ref{ex:LCM} (with constant and operator-valued contributions to firing rates in {\\color{qOrange}orange}):\n\\begin{subequations}\n\\begin{align}\nH_{LCM}&=\n{\\color{qOrange}\\frac{\\alpha}{2N_E}}\n\\rho\\left(\\delta\\left(\\begin{matrix}\\bullet \\hphantom {-}\\circ \\\\[-0.675em] \\!\\!\\!\\!\\!\\!\\backslash \\\\[-0.675em]\\bullet \\end{matrix}\\;\\leftharpoonup \\;\\begin{matrix}\\bullet \\!\\!-\\!\\!\\circ \\\\[-0.25em]\\bullet \\end{matrix}\\right)\\right){\\color{qOrange}(\\hat{O}_{\\circ}-1)}\n+{\\color{qOrange}\\frac{\\alpha}{2N_E}}\\rho\\left(\\delta\\left(\\begin{matrix}\\bullet \\hphantom {-}\\circ \\\\[-0.675em] \\;\\;\\;\/\\\\[-0.675em]\\circ \\end{matrix}\\;\\leftharpoonup \\;\\begin{matrix}\\bullet \\!\\!-\\!\\!\\circ \\\\[-0.25em]\\circ \\end{matrix}\\right)\\right){\\color{qOrange}(\\hat{O}_{\\bullet}-1)}\\label{eq:H-AVM-LCM-rewire}\\\\\n&\\begin{aligned}\n&\\quad+{\\color{qOrange}\\frac{(1-\\alpha)}{2N_E}}\\rho\\left(\\delta\\left(\n\\bullet \\!\\!-\\!\\!\\bullet \\;\\leftharpoonup \\;\\bullet \\!\\!-\\!\\!\\circ\n\\right)\\right){\\color{qOrange}(\\hat{O}_{\\bullet}-1)(\\hat{O}_{\\circ}-1)}\\\\\n&\\quad+{\\color{qOrange}\\frac{(1-\\alpha)}{2N_E}}\\rho\\left(\\delta\\left(\n\\circ \\!\\!-\\!\\!\\circ \\;\\leftharpoonup \\;\\bullet \\!\\!-\\!\\!\\circ\n\\right)\\right){\\color{qOrange}(\\hat{O}_{\\bullet}-1)(\\hat{O}_{\\circ}-1)}\\end{aligned} \\label{eq:H-AVM-LCM-adopt}\\\\\n&\\quad-{\\color{qOrange}\\frac{1}{N_E}\\hat{O}_{\\bullet\\!-\\!\\circ}{\\color{qOrange}(\\hat{O}_{\\bullet}-1)(\\hat{O}_{\\circ}-1)}} \\label{eq:H-AVM-LCM-diag}\\,.\n\\end{align}\n\\end{subequations}\nExpanding terms in $H_{LCM}$ via utilizing once again the representation property of $\\rho$ (not presented here for brevity; cf.\\ \\cite[Thm.~3]{BK2020} for the details), we may write $H_{LCM}$ in the equivalent form below, which exhibits this CTMC model as a particular instance of a model for which all rules have the same input motif (and thus as a model in the spirit of the ``Potsdam approach'' as in~\\cite{giese2012}):\n\\begin{subequations}\\label{eq:potsdam-rules}\n\\begin{align} \nH_{LCM}&={\\color{qOrange}\\frac{\\alpha}{2N_E}}\\rho\\left(\\delta\\left(\n\\begin{matrix}\n\\bullet \\hphantom {-}\\circ\\\\[-0.65em]\n| \\hphantom{-\\circ}\\\\[-0.65em]\n\\bullet \\hphantom 
{-}\\circ\n\\end{matrix}\\;\\leftharpoonup\\;\n\\begin{matrix}\n\\bullet \\!\\!-\\!\\!\\circ\\\\[-0.65em]\n\\hphantom{-\\circ}\\\\[-0.65em]\n\\bullet \\hphantom {-}\\circ\n\\end{matrix}\n\\right)\\right)\n+{\\color{qOrange}\\frac{\\alpha}{2N_E}}\\rho\\left(\\delta\\left(\n\\begin{matrix}\n\\bullet \\hphantom {-}\\circ\\\\[-0.65em]\n\\hphantom{\\bullet-}|\\\\[-0.65em]\n\\bullet \\hphantom {-}\\circ\n\\end{matrix}\\;\\leftharpoonup\\;\n\\begin{matrix}\n\\bullet \\!\\!-\\!\\!\\circ\\\\[-0.65em]\n\\hphantom{-\\circ}\\\\[-0.65em]\n\\bullet \\hphantom {-}\\circ\n\\end{matrix}\n\\right)\\right)\\label{eq:H-AVM-LCM-alt-rewire}\\\\\n&\\quad+{\\color{qOrange}\\frac{1-\\alpha}{2N_E}}\\rho\\left(\\delta\\left(\n\\begin{matrix}\n\\bullet \\!\\!-\\!\\!\\bullet\\\\[-0.65em]\n\\hphantom{-\\circ}\\\\[-0.65em]\n\\bullet \\hphantom {-}\\circ\n\\end{matrix}\\;\\leftharpoonup\\;\n\\begin{matrix}\n\\bullet \\!\\!-\\!\\!\\circ\\\\[-0.65em]\n\\hphantom{-\\circ}\\\\[-0.65em]\n\\bullet \\hphantom {-}\\circ\n\\end{matrix}\n\\right)\\right)\n+{\\color{qOrange}\\frac{1-\\alpha}{2N_E}}\\rho\\left(\\delta\\left(\n\\begin{matrix}\n\\circ \\!\\!-\\!\\!\\circ\\\\[-0.65em]\n\\hphantom{-\\circ}\\\\[-0.65em]\n\\bullet \\hphantom {-}\\circ\n\\end{matrix}\\;\\leftharpoonup\\;\n\\begin{matrix}\n\\bullet \\!\\!-\\!\\!\\circ\\\\[-0.65em]\n\\hphantom{-\\circ}\\\\[-0.65em]\n\\bullet \\hphantom {-}\\circ\n\\end{matrix}\n\\right)\\right)\\label{eq:H-AVM-LCM-alt-adapt}\\\\\n&\\quad -{\\color{qOrange}\\frac{1}{N_E}\\hat{O}_{\\text{\\tiny{$\\begin{matrix}\n\\bullet \\!\\!-\\!\\!\\circ\\\\[-0.65em]\n\\hphantom{-\\circ}\\\\[-0.65em]\n\\bullet \\hphantom {-}\\circ\n\\end{matrix}$}}}}\\label{eq:H-AVM-LCM-alt-diag}\n\\end{align}\n\\end{subequations}\n\\end{itemize}\n\\end{example}\n\n\n\n\n\n\\section{Simulating the Models}\\label{sec:simulations}\n\n\nIn Section~\\ref{sec:dtmc} we presented several ways of converting a DTMC-based model into a CTMC-based one. The first (M1) is based on introducing weight factors correcting for the deviation of the probabilistic view of the DTMC from the CTMC mass action semantics. This allows us to use a modified stochastic simulation algorithm to recover the behavior of the DTMC. Then we have the mass action stochastic graph transformation system (M2) as introduced in Section~\\ref{sec:voter-models}, in our view the most natural and closest to reality due to its use of continuous time, but in order to relate to the original probabilistic formulation we have to sample rate constants experimentally until the behavior matches that observed in the given model. Finally we consider a model (M3) obtained by mutually extending the rules of the system to the same left-hand side. Then all rules have the same number of matches and hence rates directly reflect probabilities. \n\nIn this section we simulate these models with the CTMC-based SimSG tool~\\cite{DBLP:journals\/jot\/EhmesFS19} to see how, with suitably chosen parameters, their behavior matches that of the original formulation~\\cite{Durrett2012}. 
In particular, we want to answer the following questions.\n\\begin{description}\n\\item[RQ1:] Can we reproduce the results of~\\cite{Durrett2012} by simulating (M1) in SimSG?\n\\item[RQ2:] What rates do we have to use to reproduce the behavior of~\\cite{Durrett2012} in (M2)?\n\\item[RQ3:] Does (M3) with rates reflecting the probabilities of~\\cite{Durrett2012} lead to the same overall behavior?\n\\end{description}\nNote that in (M1) and (M2) rates are derived analytically based on the theory in Section~\\ref{sec:dtmc} while they are determined experimentally for (M3). \n\\begin{figure}[h] \n\t\\centering\n\t\\subfigure[Initial graph (u=0.5)]{\\includegraphics[width=0.31\\textwidth]{graphics\/initialGraph.PNG}}\n\t\\hfill\n\t\\subfigure[Giant component]{\\includegraphics[width=0.31\\textwidth]{graphics\/giantComponentGraph.PNG}}\n\t\\hfill\n\t\\subfigure[Fragmented graph]{\\includegraphics[width=0.31\\textwidth]{graphics\/segmentedGraph.PNG}}\n\t\\caption{Graphs before and after simulation}\\label{fig:instance-graphs}\n\\end{figure}\nAs in~\\cite{Durrett2012} initial graphs are generated randomly based on a fraction $u$ of agents voting 0 (where $u = 0.5$ gives a 50\/50 split between opinions) by deciding for each pair of nodes with probability $v$ if they should be linked. This means that, with 100 voters and $v = 0.04$, we create an Erd{\\\"o}s-R{\\'en}yi graph with 400 links and average degree 8.\nAccording to~\\cite{Durrett2012} we expect to see one of two behaviors, depending on the probability $\\alpha$ of rewiring (vs. $1 - \\alpha$ for adoption) for a given discordant link. With $\\alpha > 0.43$ rewiring causes a fragmentation of the graph into homogeneous connected components; otherwise adoption leads to the dominance of a single opinion (usually the initial majority) in a giant component. \nVisually we indicate opinion 1 in green and opinion 0 in red. Homogeneous components are highlighted in their respective color, whereas heterogeneous components are shown in gray. For example, Figure~\\ref{fig:instance-graphs}(a) shows the initial graph. In the limit, if the rates favor the adopt rules we obtain a graph as in Figure~\\ref{fig:instance-graphs}(b), while Figure~\\ref{fig:instance-graphs}(c) shows a possible result of dominant rewiring where the graph has split split into two components, one for each opinion.\n\\begin{figure}[h] \n\t\\centering\n\t\\subfigure[DTMC Version of the Voter Model]{\\includegraphics[width=0.48\\textwidth]{graphics\/ResultsDiscrete.pdf}}\n\t\\hfill\n\t\\subfigure[CTMC Version of the Voter Model]{\\includegraphics[width=0.48\\textwidth]{graphics\/ResultsMassAction.pdf}}\n\t\\caption{Simulation results}\\label{fig:sim-results}\n\\end{figure}\n\nTo answer RQ1 we use the rules as presented in Section~\\ref{sec:voter-models}, but modify the simulation algorithm by the weight factors derived in Section~\\ref{sec:dtmc}. This produces a simulation matching that of a discrete time Markov chain. While fixing the model size, we perform a nested parameter sweep over $\\alpha$, the static probability of rewiring, and over the fraction $u$ of agents voting $0$.\nThe results are shown in Figure~\\ref{fig:sim-results}(a). On the $y$-axis we plot the fraction of voters with the minority opinion in the final graph, while the $x$-axis shows the \\(\\alpha\\) value for each simulation. Furthermore, each set of symbols in the figure represents a different opinion ratio $u$ in the initial graph. 
As we can see, independently of $u$ there is a clear point after which an increase in $\\alpha$ leads to a separation of voters by opinion, which implies a segmentation of the graph. In turn, if $\\alpha$ is low, the minority opinion disappears. Thus, Figure~\\ref{fig:sim-results}(a) reflects very closely the findings of~\\cite{Durrett2012}, with only a minor deviation in the value of $\\alpha$ separating the two outcomes.\t\n\t\t\nFor RQ2, we execute the same rules with our standard implementation of the simulation algorithm, reflecting mass-action CTMC semantics. While fixing the rate of the adopt rules at 1, we perform a nested parameter sweep over $\\alpha$, in this case denoting the rate of the rewiring rules, and the minority fraction $u$. \nThe results are shown in Figure~\\ref{fig:sim-results}(b), again with the $y$-axis showing the fractions of voters in the minority in each final graph and the $x$-axis tracking the $\\alpha$ values. As before, each set of symbols represents a different initial fraction $u$. In contrast to Figure~\\ref{fig:sim-results}(a), here the threshold of $\\alpha$ at which the final graph becomes segmented is orders of magnitude smaller. This is a result of the CTMC-based simulation algorithm, which obtains the rule application probability by multiplying a rule's match count by its static rate. Intuitively, the set of matches for each of the adopt rules consists of all connected pairs of voters $v1, v2$ of different opinion. Instead, the set of matches of the rewire rules is made up of the Cartesian product of this first set with the set of voters sharing the opinion of $v1$. To find the same balancing point as in the probabilistic model, this ``unfair advantage'' of the rewire rules is compensated for by a very low rate.\n\n\n\\begin{figure}[h] \n\t\\centering\n\t\\includegraphics[width=0.48\\textwidth]{graphics\/ResultsPotsdam.pdf}\n\t\\caption{Simulation results of the alternative CTMC version described by the rules in equation (\\ref{eq:potsdam-rules})}\\label{fig:sim-results2}\n\\end{figure}\n\nFinally, to address RQ3 we perform simulations using adopt and rewire rules extended to a common left-hand side consisting of two connected voters $v1,v2$ of conflicting opinions plus one extra voter each of opinion $0$ and $1$. By the same argument as above this leads to a large increase in the number of matches and hence poses scalability challenges to the simulation. To address this we used a smaller initial graph of 50 voters and 200 groups and executed only 10 simulation runs per parameter configuration rather than 40 as in the other two models. (In each case, configurations range across 3 values for $u$ and 7 values for $\\alpha$, totaling 21 parameter configurations for each model.) The results are shown in Figure~\\ref{fig:sim-results2} using the same layout as before. \nIt is interesting that, despite the high degree of activity in this model due to the very large match count, it displays almost the same behavior as the DTMC version (M1), even down to the $\\alpha$ threshold. \n\nThe limited scalability of this approach is in part due to the global nature of the requirement that all rules share the same left-hand side. In the MDP-based approach of~\\cite{giese2012} this applies only to the subsets of rules defining the same action. One could then consider the four alternatives of rewire and adopt as cases of the same action of resolving a conflict edge, leading to a probabilistic graph transformation system with one rule.
An analysis of the relation of mass-action CTMC with MDP-based semantics, however, is beyond the scope of this paper.\n\nSocial network analysis of the voter model and its variants is limited to a theoretical level, exploring mechanisms resulting in certain emergent phenomena that are observable in real-life networks, but are not quantified using real data. For a deeper quantitative analysis of phenomena such as the spread of opinions or the fragmentation of networks, the parameters of our rule-based models need to be matched to real social network data where such is available. This means, in particular, to determine the rates of the rules, e.g., from observations of pattern frequencies.\n \nSuch approaches are well established in chemical and biological modeling where statistical methods are used to derive rates of reaction rules from concentrations, i.e., relative pattern frequencies describing the ratios of the different types molecules~\\cite{ Klinke2014InSM, Eydgahi2013MSB}. Instead, the derivation of an equational model from the operational rule-based description based on the rule algebra approach described in this paper could allow for an analytical approach where rates are determined by solving a system of equations. The precise conditions under which this is possible are a subject of future research.\n\n\n\\section{Conclusion}\n\n\nIn this paper we analyzed the non-standard semantics behind the formulation of adaptive system models in the literature. \nFor a simple but prototypical example, we focused specifically on the voter models of opinion formation in social networks. We analyzed the non-standard semantics behind these models and established that they can be seen as Discrete-Time Markov Chains (DTMCs). In order to start the study of such models using the concepts, theories and tools of stochastic graph transformations (SGTS), we formalized the semantic relation between the rule-based specification by SGTS of mass-action Continuous-Time Markov Chains (CTMCs) with the DTMC-based semantics, identifying two systematic ways by which an SGTS can be derived from a DTMC-based model while preserving the behavior in the limit. \n\nIn the first derivation, this leads to a generalized notion of SGTS with weight factors correcting for the mass-action component represented by the dependency of the jump rate on the number of matches for each rule. This new type of SGTS is supported by a generalized simulation algorithm supporting the analysis of DTMC-based probabilistic graph transformation systems which, to our knowledge, is original. The second derivation converts the rules of the system by extending them to the same left-hand sides, producing a model resembling Markov-Decision Process (MDP)-based probabilistic graph transformation rules. \n\nApart from the theoretical analysis, we validate both resulting systems through simulations, establishing that they reproduce the expected behavior of the model. We also create a direct model of the same system as an SGTS with standard mass-action CTMC semantics and succeed in determining its parameters experimentally to match the expected behavior. We conclude that we can use standard mass-action stochastic GTS to model the phenomena expressed by the voter models in the literature, providing a starting point for further more elaborate modeling and analyses. 
\n\nIn future work we want to study more complex adaptive networks, including variations on the voter model with groups of more than two members, opinion profiles for a set of topics instead of a single opinion per voter, and including concepts such as influencers (who actively try to link to and persuade others), or zealots (who do not change their opinions). We are also planning to study how to match models to social network data.\nOn the foundational side, we are planning optimizations to the simulation algorithm and a study of the relation between mass-action CTMC and MDP semantics of probabilistic graph transformations. We can also derive and study differential equations (ODEs) from our SGTS using the rule-algebra formalism. This provides a more scalable solution to analyzing their behavior complementing the simulation-based approach. \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDespite the efforts made to understand and constrain the first stars and the process of element formation connected to them, we are after several decades still left with many unanswered questions. How massive were the first stars, and how were the heavy elements we detect in old metal-poor stars created? One way of answering these questions is to study old metal-poor stars. From these stars we can derive accurate stellar abundances, allowing us to trace the progenitor stars that yielded the gas the later stellar generations were made from. It is therefore important to understand if this gas was pure, or mixed with other gases in the interstellar medium.\nMost old dwarf stars of spectral type F and G preserve the original gas composition in their outer atmospheres. This allows us to study pristine gases of elements between lithium and uranium. The lighter elements (Z $< 26$) provide information on the characteristics of the supernova that created them during the explosion. The heavy elements (Z $> 38$) are formed by neutron-capture (n-capture) processes. We have detailed information for some n-capture processes while others remain poorly constrained. By studying abundances of elements that we can tie to the different processes, we can learn about their similarities and differences, which will bring us closer to understanding the formation processes as well as the supernovae (SNe) that host them. \n \n\\section{Data, Sample and Method}\nThe data cover 73 stars in total, two RR lyrae, 29 giants and 42 dwarfs. The high-resolution spectra have been obtained with UVES\/VLT \\cite{dekker} and Hires\/Keck \\cite{vogt94}. The data reduction and stellar parameter determination is described in \\cite{CJHag}.\nTo derive the abundances we used the 1D local thermodynamic equilibrium (1D LTE) spectral synthesis code MOOG \\cite{snedenphd} and MARCS model atmospheres \\cite{Gus08}.\n\n\\section{Tracing the first supernovae with RR Lyrae stars}\nSeveral studies have shown that very evolved stars, such as RR lyrae stars, can be used as chemical tracers \\cite{taut,hansenRR,for} as they preserve the gases of elements heavier than lithium in their surfaces.\nBy comparing the derived stellar abundances to a variety of SN yields we can extract information about the early (first?) SN such as mass, energy and electron fractions ($Y_e$). This may provide insight into the nature of the first stars. We compare to yield predictions from \\cite{lim,koba,tom,Izu} covering a mass range of 13-40$M_{\\odot}$, and an energy range of 1-10 foe, with or without jets\/asymmetric explosions (see Fig. 
\\ref{SN}).\n\\begin{figure}\n \\includegraphics[height=.45\\textheight]{SNy_proceednew_lte.eps}\n \\caption{Stellar abundances derived for the two RR lyrae stars are compared to the SN yield predictions (references are listed in the legend).\\label{SN}}\n\\end{figure}\nThe odd-even effect helps to constrain the SN progenitor mass, the iron-peak elements are connected to the peak temperature of the explosion, while e.g. Sc provides information on the explosion energy and $Y_e$. The comparison in Fig. \\ref{SN} points towards an early generation of SNe, which existed well below [Fe\/H]=$-3.3$, with a progenitor mass of around 40$M_{\\odot}$ and high explosion energy. However, several quantities such as the estimated SN mass will be lowered if non-LTE effects are considered. These SNe could be the objects that created the very heavy elements during their explosion. \n\n\\begin{figure}\n \\includegraphics[height=.28\\textheight]{AgSrSrH_DGfit_astPH.ps}\n\\end{figure}\n\\vspace{-19mm}\n\\begin{figure}\n \\includegraphics[height=.28\\textheight]{AgPdPdH_DGfit_astPH.ps}\n\\end{figure}\n\\vspace{-9mm}\n\\begin{figure}\n \\includegraphics[height=.32\\textheight]{AgEuEuH_DGfit_astPH_new.ps}\n \\caption{Silver compared to Sr, Pd, and Eu in the giant and dwarf stars from our sample.\\label{SrAgEu}}\n\\end{figure}\n\\clearpage\n\n\n\\section{The evolution of heavy elements}\nThe majority of the heavy elements are produced in neutron-rich, very energetic environments.\nThere are two main channels through which the heavy elements can be made: a secondary slow n-capture (s-)process and a primary rapid n-capture (r-)process. Each of these production channels seems to branch into two (weak and main) n-capture processes. Here an observational study is presented attempting to map the features of the weak r-process.\n\n\nThe elements that were targeted for this study are: Sr, Y, Zr, Pd, Ag, Ba, and Eu. Pd and Ag are assumed to be created by the weak r-process, whereas e.g. Sr is created by a weak s-process \\cite{heil} (or charged particle process at low metallicity) and Eu is formed by a main r-process \\cite{arland} at all metallicities. In order to understand the weak r-process better, we compare Ag to Sr and Pd to see how their formation processes differ. If the two elements are formed by the same process, this will be seen as a flat (horizontal) trend in the abundance plots; otherwise, the trend will be a line with a negative slope.\nFigure \\ref{SrAgEu} shows that Pd and Ag share a formation process, which is very different from the weak s-process\/charged particle process and the main r-process. This weak r-process is also very different from the main s-process responsible for creating Ba (cf. \\cite{CJHag}).\nTo understand the characteristics and environment of the weak r-process, the observationally derived abundances are compared to self-consistent faint O-Ne-Mg electron-capture SN 2D models \\cite{wan11} as well as the high-entropy wind (HEW) \\cite{farou} parameter studies. \n\n\n\nThe right-hand plots in Fig. \\ref{model} show the abundances compared to the HEW model predictions, and the left-hand plots illustrate the O-Ne-Mg SN yields. In general, Sr-Ag can in all stars and in both models be produced by a single set of parameters, while Ba and Eu require very different parameters\/features in their formation processes.
The elements lighter than Ag can be correctly predicted either by an entropy $125 \\leq S \\leq 175 k_B\/baryon$ or an $Y_e \\geq 0.25$, while Ba and Eu generally need $S > 225$ and $Y_e < 0.2$. This is also true for stars with pure r-process abundance patterns ([Ba\/Eu]$<-0.74$), which confirms the r-process nature. These results apply to both r-poor and r-rich stars. It is evident from the comparison to the two models that r-poor and r-rich stars cannot be produced at the same site. The faint O-Ne-Mg SN seems to be a promising site for the weak r-process, and the HEW models yield a pattern that fits those of the r-rich stars well. \n\n\n\\section{Summary}\nThe abundances derived under non-LTE, compared to those derived under the LTE assumption, will provide e.g. a different mass and energy of the SN we are trying to trace. Hence it is central to understand the abundances and stellar parameters when these are derived including non-LTE and 3D effects.\n\n\\begin{figure}[!h]\n \\includegraphics[height=.25\\textheight]{AllYeBD+42621_wanfewy_astPH.ps}\n \\includegraphics[height=.24\\textheight]{BD+42621comp_ye045hewless_astPH.eps}\n\\end{figure}\n\\begin{figure}[!h]\n \\includegraphics[height=.25\\textheight]{AllYeHD115444_wanfewy_astPH.ps}\n \\includegraphics[height=.24\\textheight]{HD115444comp_ye045hewless_astPH.eps}\n\\end{figure}\n\\begin{figure}[!h]\n \\includegraphics[height=.25\\textheight]{AllYeHD122563_wanfewy_astPH.ps}\n \\includegraphics[height=.24\\textheight]{HD122563comp_ye045hewless_astPH.eps}\n \\caption{Abundances of all seven elements compared to model yield predictions. Three stars have been selected, one that shows a pure r-process pattern, one with an r-poor, and one with an r-rich abundance pattern. The stars are compared to O-Ne-Mg SN [16] on the left-hand side and to the HEW predictions on the right-hand side.\\label{model}}\n\\end{figure}\n\n\nTo explain Pd and Ag abundances we need a weak r-process, which so far seems to differ strongly from the weak\/main s-process as well as the main r-process. The weak r-process is efficiently yielding Pd and Ag at metallicities below [Fe\/H]$=-3.3$. This weak r-process needs lower neutron densities as well as entropies compared to what the main r-process needs, but higher such values than the s-process requires. We need 3D self-consistent SN models and optimised networks to understand quantities such as entropy and electron fractions before we can constrain this weak r-process. \n\n\n\\begin{theacknowledgments}\n This work was supported by Sonderforschungsbereich SFB 881 'The Milky Way System' (subproject A5) of the German Research Foundation (DFG). CJH is grateful to F. Primas, H. Hartman, K.-L. Kratz, S. Wanajo, B. Leibundgut, K. Farouqi, N. Christlieb, B. Nordstr\\\"om, P. Bonifacio, M. Spite, J. Andersen, and the First Stars IV organizers.\n\\end{theacknowledgments}\n\n\n\n\\bibliographystyle{aipproc} \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDiffraction puts a limit on the the resolution of optical devices. According to the Rayleigh criterion \\cite{rayleigh,goodman}, the ability to resolve two point sources is limited by the wavelength of the light. The Rayleigh or diffraction limit is not an absolute limit and proposals to exceed it have been known for a long time\n\\cite{goodman}. Recently, new proposals to improve resolution beyond the Rayleigh limit have been made based on the use of entangled sources and new measurement techniques. 
Improving the resolving power of optical systems beyond the diffraction limit not only is of interest to the fundamental research, but also holds promise applications in remote sensing and quantum sensors.\n\nClassical imaging can be thought of as a single photon process in the sense that the light detected is composed of photons each of which illuminates the object, consequently, the image can be constructed one photon at time. What we mean by referring to this as classical is that the source of the light may be described by a density matrix with a\npositive P-function \\cite{pos1,pos2}. In this sense the Rayleigh limit may be thought of as a single photon limit. Recall that ideal imaging is a process in which there is a point-to-point mapping of the object to a unique image plane. Diffraction causes each point of the object to be mapped onto a disk, the Airy disk, in the image plane.\n\nOne of the new approaches to improving resolution is based on using non-classical light sources. Quantum ghost imaging\n\\cite{todd,streklov,rubin,imaging,shih,milenaL} is a process that uses two-photon entanglement. The unique features of this process are that entanglement allows only one photon to illuminate the object while the second photon does not. All the photons that illuminates the object are detected in a single (bucket) detector that does not resolve the image. The point detectors that detect the second photon must lie in a specific plane. This plane is called the image plane\nalthough there is no image in that plane; the image is formed in the correlation measurement of entangled photons. The image is constructed one pair at a time. The resolution of this system has recently been discussed \\cite{rubin2008,milena}. Losses in this system affect the counting rate but not the quality of the image.\n\nA second approach using non-classical source is based on entangled photon-number states \\cite{dowling}, e.g., N00N state. When the number of entangled photons exceeds two there are many possible imaging schemes that can be envisioned and so the analysis of these cases is still being carried out. This interferometric approach achieves a sub-wavelength spatial resolution by a factor $N$ and requires an $N$-photon absorption process. Another quantum source used to study imaging is to generate squeezed states \\cite{squeezed}. The image can be reconstructed through the homodyne detection \\cite{homodyne}. However, both of these techniques are severely limited by the loss of photons.\n\nA second class of approaches to improving resolution uses classical light sources. One method uses classical light with measurements based on correlations similar to ghost imaging and the Hanbury-Brown and Twiss experiment \\cite{h-t,texts}. This method has the advantage of being more robust with respect to losses \\cite{thermal,thermal2}. Another approach is to build an interferometric lithography with use of classical coherent state \\cite{coherent,coherent2}, which has similar setup to the case using entangled photon-number states.\n\nIn this paper we will consider improving spatial resolution beyond the Rayleigh diffraction limit using quantum imaging with an entangled photon-number state $|1,N\\rangle$. In our imaging scheme by sending the $N$ degenerate photons to the object while keeping the non-degenerate photon and imaging lens in the laboratory, a factor of $N$ improvement can be achieved in spatial resolution enhancement compared to classical optics. 
The assumptions required for the enhancement by a factor of $N$ are that the $N$ photons sent to the object scatter off the same point and are detected by either an $N$-photon number detector or a bucket detector. This sub-Rayleigh imaging resolution may have important applications, such as improving the sensitivity of classical sensors and remote sensing. We emphasize that it is the quantum nature of the state that offers such sub-wavelength resolving power with high visibility. However, the system is very sensitive to loss. While we give general results, our main concern will be with the case in which the object is far from the source and the detectors and optics are close to the source. A different but related approach to the one discussed here is given in \\cite{giovannetti}.\n\nWe organize the paper as follows. We will discuss our imaging scheme with the entangled photon-number state $|1,2\\rangle$ in some detail in Sec.~II. In previous work \\cite{wen1,wen2} we have shown that imaging occurs in the correlation measurement, as in the ghost imaging case. Here we will show that under certain stringent conditions, the resolution\ncan be improved by a factor of $2$ compared to classical optics. In Sec.~III we generalize the scheme to the\n$|1,N\\rangle$ case and show that resolution improvement by a factor of $N$ can be obtained. In Sec.~IV we discuss other experimental configurations. Finally we draw our conclusions in Sec.~V. In an appendix we discuss the meaning of the approximation that the $N$ photons illuminate the same point on the object.\n\n\\section{Three-Photon Optics}\nWe start with three photons because this is the easiest case in which to investigate the various configurations. Throughout the paper we shall assume that the source of the three photons is a pure state and that the three-photon counting rate for three point detectors is given by\n\\begin{equation}\nR_{cc}=\\frac{1}{T^2}\\int_{0}^{T}dt_1\\int_{0}^{T}dt_2\\int_{0}^{T}dt_3|\\Psi(1,2,3)|^{2},\\label{eq:Coin}\n\\end{equation}\nwhere the three-photon amplitude is determined by the matrix element between the vacuum state and the three-photon state\n$|\\psi\\rangle$\n\\begin{equation}\n\\Psi(1,2,3)=\\langle0|E^{(+)}_1E^{(+)}_2E^{(+)}_3|\\psi\\rangle,\\label{eq:Ampl}\n\\end{equation}\nand\n\\begin{equation}\nE^{(+)}_j(\\vec{\\rho}_j,z_j,t_j)=\\int{d}\\omega_j\\int{d^{2}}\\alpha_jE_jf_j(\\omega_j)e^{-i\\omega_jt_j}g_j(\\vec{\\alpha}_j,\n\\omega_j;\\vec{\\rho}_j,z_j)a(\\vec{\\alpha}_j,\\omega_j),\\label{eq:freefield}\n\\end{equation}\nwhere $E_j=\\sqrt{\\hbar\\omega_{j}\/2 \\epsilon_{0}}$, $\\vec{\\alpha}_j$ is the transverse wave vector, and\n$a(\\vec{\\alpha}_j,\\omega_j)$ is a photon annihilation operator at the output surface of the source,\n\\begin{equation}\n[a(\\vec{\\alpha},\\omega),a^{\\dagger}(\\vec{\\alpha}\\prime,\\omega\\prime)]=\\delta(\\vec{\\alpha}-\\vec{\\alpha}\\prime)\n\\delta(\\omega-\\omega\\prime).\\label{eq:commutation rel}\n\\end{equation}\nThe function $f_{j}(\\omega)$ is a narrow bandwidth filter function which is assumed to be peaked at $\\Omega_j$. The function $g_j$ is the Green's function \\cite{goodman,rubin} that describes the propagation of each mode from the output surface of the source to the $j$th detector at the transverse coordinate $\\vec{\\rho}_j$ and at distance $z_j$ from the\noutput surface of the crystal to the plane of the detector. 
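As a purely schematic numerical illustration of the time average in Eq.~(\\ref{eq:Coin}) (the smooth toy function below is an arbitrary stand-in for $|\\Psi|^{2}$, not the physical amplitude derived later), the triple integral can be approximated by a simple Riemann sum:

\\begin{verbatim}
import numpy as np

# Toy stand-in for |Psi(t1,t2,t3)|^2: an arbitrary smooth, positive function
# of the three detection times, used only to illustrate Eq. (eq:Coin).
T, n = 1.0, 64
t = np.linspace(0.0, T, n)
t1, t2, t3 = np.meshgrid(t, t, t, indexing='ij')
psi_sq = np.exp(-((t1 - t2)**2 + (t2 - t3)**2))

# R_cc = (1/T^2) * triple integral of |Psi|^2 over [0,T]^3 (Riemann sum).
dt = t[1] - t[0]
R_cc = psi_sq.sum() * dt**3 / T**2
print('toy coincidence rate:', R_cc)
\\end{verbatim}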
$\\Psi$ is referred to as the \\textit{three-photon amplitude} (or three-photon wavefunction).\n\nWe start with the case in which the source produces three-photon entangled states with a pair of degenerate photons, that is $\\psi\\rightarrow\\psi_{{1,2}}$\n\\begin{equation}\n|\\psi_{1,2}\\rangle=\\int{d}\\omega_1{d}\\omega_2\\int{d^{2 }}\\alpha_1d^2 \\alpha_2\\delta(2\\omega_1+\\omega_2-\\Omega)\\delta(\n2\\vec{\\alpha}_1+\\vec{\\alpha}_2)a^{\\dagger}(\\vec{\\alpha}_2,\\omega_{2})\\big[a^{\\dagger}(\\vec{\\alpha}_1,\\omega_{1})\\big]^2\n|0\\rangle,\\label{eq:state1}\n\\end{equation}\nwhere $\\Omega$ is a constant, $\\omega_{1,2}$ and $\\vec{\\alpha}_{1,2}$ are the frequencies and transverse wave vectors of the degenerate and non-degenerate photons, respectively. The $\\delta$-functions indicate that the source is assumed to produce three-photon states with perfect phase matching. We assume the paraxial approximation holds and that the temporal and transverse behavior of the waves factor. The frequency correlation determines the three-photon temporal\nproperties. The transverse momentum correlation determines the spatial properties of entangled photons. It is this wave-vector correlation that we are going to concentrate on. As discussed in \\cite{wen1}, several imaging schemes can be implemented with this three-photon source. To demonstrate spatial resolution enhancement beyond the Rayleigh diffraction limit, consider the experimental setup shown in Fig.~\\ref{fig:2photon}. It will be shown that for this configuration the spatial resolving power is improved by a factor of 2, provided the degenerate photons\nilluminate the same point on the object and are detected by a two photon detector.\n\\begin{figure}[tbp]\n\\includegraphics[scale=0.6]{2photon.pdf}\n\\caption{(color online) Schematic of quantum imaging with a three-photon entangled state $|1,2\\rangle$. $d_1$ is the distance from the output surface of the source to the object. $L_1$ is the distance from the object to a 2-photon detector, D$_1$. $d_2$ is the distance from the output surface of the source to the imaging lens with focal length $f$ and $L_2$ is the length from the imaging lens to a single-photon detector D$_2$, which scans coming signal photons in its transverse plane. ``C.C.\" represents the joint-detection measurement.}\\label{fig:2photon}\n\\end{figure}\n\nAs depicted in Fig.~\\ref{fig:2photon}, two degenerate photons with wavelength $\\lambda_1$ are sent to a two-photon detector (D$_1$) after illuminating an object, and the non-degenerate photon with wavelength $\\lambda_2$ propagates to a single-photon detector (D$_2$) after an imaging lens with focal length $f$. 
The three-photon amplitude\n(\\ref{eq:Ampl}) for detectors D$_1$ and D$_2$, located at $(z_1,\\vec{\\rho}_1)$ and $(z_2,\\vec{\\rho}_2)$, now is\n\\begin{equation}\n\\Psi\\rightarrow\\Psi_{1,2}=\\langle0|E^{(+)}_2(\\vec{\\rho}_2,z_2,t_2)\\big[E^{(+)}_1(\\vec{\\rho}_1,z_1,t_1)\\big]^2|\\psi_{1,2}\n\\rangle, \\label{eq:coincidence}\n\\end{equation}\nFollowing the treatments in \\cite{rubin,goodman,wen1}, we evaluate the Green's functions\n$g_1(\\vec{\\alpha}_1,\\omega_2;\\vec{\\rho}_1,z_1)$ and $g_2(\\vec{\\alpha}_2,\\omega_2;\\vec{\\rho}_2,z_2)$ for the experimental setup of Fig.~\\ref{fig:2photon} assuming that the narrow bandwidth filter allows us to make the assumption that $\\omega_{j}=\\Omega_j+\\nu_j$ where $|\\nu_j|\\ll\\Omega_j$ and $2\\Omega_1+\\Omega_2=\\Omega$.\n\nIn the paraxial approximation it is convenient to write\n\\begin{equation}\ng_j(\\vec{\\alpha}_j,\\omega_j;\\vec{\\rho}_j,z_j)=\\frac{\\omega_je^{i\\omega_jz_j \/c}}{i 2\\pi cL_jd_j}\n\\chi_j(\\vec{\\alpha}_{j},\\omega_{j};\\vec{\\rho}_{j},z_{j}),\\label{eq:chi}\n\\end{equation}\nthen\n\\begin{eqnarray}\n\\chi_1(\\vec{\\alpha}_1,\\Omega_1;\\vec{\\rho}_1,z_1)&=&e^{-i\\frac{d_1|\\vec{\\alpha}_1|^2}{2K _1}}\\int{d^{2}}\\rho_o\nA(\\vec{\\rho}_o)e^{i\\frac{K_1|\\vec{\\rho}_o|^2}{2L_1}}e^{-i\\frac{K_1\\vec{\\rho}_1\\cdot\\vec{\\rho}_o}{L_1}}\ne^{i\\vec{\\alpha}_1\\cdot\\vec{\\rho}_o},\\label{eq:g1}\\\\\n\\chi_2(\\vec{\\alpha}_2,\\Omega_2;\\vec{\\rho}_2,z_2)&=&e^{-i\\frac{d_2|\\vec{\\alpha}_2|^2}{2K_2}}\\int{d^{2}}\\rho_l\ne^{i\\frac{K_2|\\vec{\\rho}_l|^2}{2}(\\frac{1}{L_2}-\\frac{1}{f})}e^{i(\\vec{\\alpha}_2-\\frac{K_2}{L_2}\\vec{\\rho}_2)\\cdot\n\\vec{\\rho}_l},\\label{eq:g2}\n\\end{eqnarray}\nwhere we replace $\\omega_j$ by $\\Omega_j$ in $\\chi_j$, $K_j=\\Omega_j \/c=2\\pi\/\\lambda_j$, $z_1=d_1+L_1$, and $z_2=d_2+L_2$, respectively. In Eqs.~(\\ref{eq:g1}) and (\\ref{eq:g2}), $A(\\vec{\\rho}_o)$ is the aperture function of the object, and $\\vec{\\rho}_o$ and $\\vec{\\rho}_l$ are two-dimensional vectors defined, respectively, on the object and the imaging lens planes. With use of Eqs.~(\\ref{eq:freefield}) and (\\ref{eq:state1}), the three-photon amplitude (\\ref{eq:coincidence}) becomes\n\\begin{equation}\n\\Psi_{1,2}=e^{i(2\\Omega_1\\tau_1+\\Omega_2\\tau_2)}\\Phi_{1,2},\\label{Phi12}\n\\end{equation}\nwhere $\\tau_j=t_j-z_j\/c$ and\n\\begin{eqnarray}\n\\Phi_{1,2}=\\int{d}\\nu_1d\\nu_2\\delta(2\\nu_1+\\nu_2)e^{i(2\\nu_1 \\tau_1+\\nu_2\\tau_2)}f_1(\\Omega_1+\\nu_1)^2f_2 (\\Omega_2+\\nu_2)B_{1,2}.\\label{eq:Psi1}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nB_{1,2}&=&B_{0}\\int{d^{2}}\\rho_oA(\\vec{\\rho}_o)e^{i\\frac{K_1|\\vec{\\rho}_o|^2}{2L_1}}e^{-i\\frac{K_1\\vec{\n\\rho}_1\\cdot\\vec{\\rho}_o}{L_1}}\\int{d^{2}}\\rho'_oA(\\vec{\\rho}'_o)e^{i\\frac{K_1|\\vec{\\rho}'_o|^2}{2L_1}}e^{-i\n\\frac{K_1\\vec{\\rho}_1\\cdot\\vec{\\rho}'_o}{L_1}}\\int{d^{2}}\\rho_le^{i\\frac{K_2|\\vec{\\rho}_l|^2}{2}(\\frac{1}{\nL_2}-\\frac{1}{f})}e^{-i\\frac{K_2}{L_2}\\vec{\\rho}_2\\cdot\\vec{\\rho}_l}\\nonumber\\\\\n&&\\times\\int{d^{2 }}\\alpha_1e^{-i|\\vec{\\alpha}_1|^2(\\frac{d_1}{K_1}+\\frac{2d_2}{K_2})}e^{-i\\vec{\\alpha}_1\n\\cdot(2\\vec{\\rho}_l-\\vec{\\rho}_o-\\vec{\\rho}'_o)},\\label{eq:psiapproximation}\n\\end{eqnarray}\nwhere we collect all the slowly varying quantities into the constant $B_{0}$. To proceed the discussion, in the following we will consider two different detection schemes. 
One uses a point two-photon detector for two degenerate photons after the object and the other has a two-photon bucket detector.\n\n\\subsection{Point Two-Photon Detector Scheme}\nIn this detection scheme, a point two-photon detector is necessary to retrieve the information of degenerate photons scattered off the same point in the object. We therefore make the key assumption that the detector D$_1$ is only sensitive to the signals from the same point in the object, i.e., $\\delta(\\vec{\\rho}_o-\\vec{\\rho}'_o)$ [The validity of this assumption is addressed in the Appendix]. With this assumption, Eq.~(\\ref{eq:psiapproximation}) becomes\n\\begin{eqnarray}\nB_{1,2}&=&B_{0}\\int{d^{2}}\\rho_oA^2(\\vec{\\rho}_o)e^{i\\frac{K_1|\\vec{\\rho}_o|^2}{L_1}}e^{-i\\frac{2K_1\\vec{\n\\rho}_1\\cdot\\vec{\\rho}_o}{L_1}}\\int{d^{2}}\\rho_le^{i\\frac{K_2|\\vec{\\rho}_l|^2}{2}(\\frac{1}{L_2}-\\frac{1}{f})}\ne^{-i\\frac{K_2}{L_2}\\vec{\\rho}_2\\cdot\\vec{\\rho}_l}\\nonumber\\\\\n&&\\times \\int{d^{2 }}\\alpha_1e^{-i|\\vec{\\alpha}_1|^2(\\frac{d_1}{K_1}\n+\\frac{2d_1}{K_2})}e^{-2i\\vec{\\alpha}_1\\cdot(\\vec{\\rho}_l-\\vec{\\rho}_o)}.\\label{eq:Psiassumption}\n\\end{eqnarray}\nCompleting the integration on the transverse mode $\\vec{\\alpha}_1$ in Eq.~(\\ref{eq:Psiassumption}) gives\n\\begin{eqnarray}\nB_{1,2}&=& B_{0}\\int{d^{2}}\\rho_oA^2(\\vec{\\rho}_o)e^{iK_1|\\vec{\\rho}_o|^2\n[\\frac{1}{L_1}+\\frac{1}{d_1+(2\\lambda_2\/\\lambda_1)d_2}]}e^{-i\\frac{2K_1\\vec{\\rho}_1\\cdot\\vec{\\rho}_o}{L_1}}\\nonumber\\\\\n&&\\times\\int{d^{2}}\\rho_le^{i\\frac{K_2|\\vec{\\rho}_l|^2}{2}[\\frac{1}{L_2}+\\frac{1}{d_2+(\\lambda_1\/2\\lambda_2)d_1\n}-\\frac{1}{f}]}e^{-iK_2\\vec{\\rho}_l\\cdot[\\frac{\\vec{\\rho}_2}{L_2}+\\frac{\\vec{\\rho}_o}{d_2+(\\lambda_1\/2\n\\lambda_2)d_1}]}.\\label{eq:psiassumption2}\n\\end{eqnarray}\nBy imposing the Gaussian thin-lens imaging condition in Eq.~(\\ref{eq:psiassumption2})\n\\begin{eqnarray}\n\\frac{1}{f}=\\frac{1}{L_2}+\\frac{1}{d_2+(\\lambda_1\/2\\lambda_2)d_1},\\label{eq:GTLE1}\n\\end{eqnarray}\nthe transverse part of the three-photon amplitude reduces to\n\\begin{eqnarray}\nB_{1,2}&=&B_{0}\\int{d^{2}}\\rho_oA^2(\\vec{\\rho}_o)e^{iK_1|\\vec{\\rho}_o|^2[\\frac{1}{L_1}+\\frac{1}{d_1\n+(2\\lambda_2\/\\lambda_1)d_2}]}e^{-i\\frac{2K_1\\vec{\\rho}_1\\cdot\\vec{\\rho}_o}{L_1}}\\mathbf{somb}\\bigg(\\frac{2\\pi{R}}{\n\\lambda_2[d_2+(\\lambda_1\/2\\lambda_2)d_1]}\\bigg|\\vec{\\rho}_o+\\frac{\\vec{\\rho}_2}{m}\\bigg|\\bigg),\\label{eq:psif}\n\\end{eqnarray}\nwhere $R$ is the radius of the imaging lens, $R\/[d_2+(\\lambda_1\/2\\lambda_2)d_1]$ may be thought of as the numerical aperture of the imaging system, and $m=L_2\/[d_2+(\\lambda_1\/2\\lambda_2)d_1]$ is the magnification factor. In Eq.~(\\ref{eq:psif}) the Airy disk is determined, as usual, by $\\mathbf{somb}(x)=2J_1(x)\/x$, where $J_1(x)$ is the first-order Bessel function.\n\nBefore proceeding with the discussion of resolution, let us look at the physics behind Eqs.~(\\ref{eq:GTLE1}) and (\\ref{eq:psif}). Equation (\\ref{eq:GTLE1}) defines the image plane where the ideal the point-to-point mapping of the\nobject plane occurs. The unique point-to-point correlation between the object and the imaging planes is the result\nof the transverse wavenumber correlation and the fact that we have assumed that the degenerate photons illuminate the same object point. Let us make a comparison with the two-photon and three-photon geometrical optics\n\\cite{rubin,wen1,rubin2008}. 
In the Gauss thin lens equation the distance between the imaging lens and the object planes, $d_2+(\\lambda_1\/2\\lambda_2)d_1$, is similar to the form that appears in the non-degenerate two-photon case except for the factor of 2. This factor of 2 comes from the degeneracy of the pair of photons that illuminate the object. As we will show below, this factor of 2 is the source of the improved spatial resolution. Equation (\\ref{eq:psif}) implies that a coherent and inverted image magnified by a factor of $m$ is produced in the plane of $D_{2}$. Of course, there really is no such image and the true image is \\textit{nonlocal}. The point-spread function in Eq.~(\\ref{eq:psif}) is generally determined by both wavelengths of the degenerate and non-degenerate photons.\n\nTo examine the resolution using the Rayleigh criterion, we consider an object consisting of two point scatterers, one located at the origin and the other at the point $\\vec{a}$ in the object plane,\n\\begin{equation}\nA(\\vec{\\rho}_o)^2=A_0^2\\delta(\\vec{\\rho}_o)+A_{\\vec{a}}^2\\delta(\\vec{\\rho}_o-\\vec{a}).\\label{eq:object}\n\\end{equation}\nBy substituting Eq.~(\\ref{eq:object}) into (\\ref{eq:psif}) we obtain\n\\begin{eqnarray}\nB_{1,2}=B_{0}\\bigg({A}^2_0 \\mathbf{somb}\\bigg(\\frac{2\\pi{R}}{\\lambda_2}\\bigg|\\frac{\\vec{\\rho}_2}{L_2}\\bigg|\\bigg)+e^{i\n\\varphi_2}A_{\\vec{a}}^2\\mathbf{somb}\\bigg[\\frac{2\\pi{R}}{\\lambda_2}\\bigg|\\frac{\\vec{\\rho}_2}{L_2}+\\frac{\\vec{a}}{d_2+(\n\\lambda_1\/2\\lambda_2)d_1}\\bigg|\\bigg]\\bigg),\\label{eq:Psirayleigh}\n\\end{eqnarray}\nwhere the phase\n\\begin{equation}\n\\varphi_2=K_1\\bigg[|\\vec{a}|^2\\bigg(\\frac{1}{L_1}+\\frac{1}{d_1+d_2(2\\lambda_2\/\\lambda_1)}\\bigg)-\\frac{\\vec{a}\n\\cdot(\\vec{\\rho}_1+\\vec{\\rho}'_1)}{L_1}\\bigg]\\label{eq:phase2}\n\\end{equation}\nindicates that the image is coherent. For a point $2$-photon detector, we require $\\vec{\\rho}_1=\\vec{\\rho}'_1$ in Eq.~(\\ref{eq:phase2}). As is well known \\cite{goodman}, for coherent imaging the Rayleigh criterion is not the best choice for characterizing the resolution; however, it is indicative of the resolution that can be attained and it is convenient. For a circular aperture, the radius of the Airy disk, $\\xi$, is determined by the point-spread function, which is\n\\begin{equation}\n\\xi=0.61\\frac{\\lambda_2L_2}{R}.\\label{eq:airydisk1}\n\\end{equation}\nNote that the radius of the Airy disk is proportional to the wavelength of the non-degenerate photon. This is the standard result as obtained in classical optics. Using the Rayleigh criterion, the image of the second term in Eq.~(\\ref{eq:Psirayleigh}) is taken to lie on the edge of the Airy disk of the first term, therefore,\n\\begin{equation}\na_{\\mathrm{m}}=0.61\\frac{\\lambda_2}{R}\\bigg(d_2+\\frac{\\lambda_1}{2\\lambda_2}d_1\\bigg).\\label{eq:resolution1}\n\\end{equation}\nWe see from Eq.~(\\ref{eq:resolution1}) that the resolution depends on the wavelengths of the degenerate and the non-degenerate photons. In the case that $d_1\\gg{d}_2$, $d_2+(\\lambda_1\/2\\lambda_2)d_1$ is approximately $(\\lambda_1\/2\\lambda_2)d_1$. 
In this case Eq.~(\\ref{eq:GTLE1}) implies that $L_2\\approx f$ and the radius of the Airy disk approaches to $1.22\\lambda_2f\/R$, and\n\\begin{equation}\na_{\\mathrm{m}}=0.61\\frac{\\lambda_1d_1\/2}{R}.\\label{eq:resolution2}\n\\end{equation}\nEquation~(\\ref{eq:resolution2}) shows a gain in spatial resolution of a factor of 2 compared to classical optics.\nFurthermore, there is no background term which is characteristic of the quantum case.\n\n\\subsection{Bucket Detector Scheme}\nIf the two-photon detector is replaced by a bucket detector and the two degenerate photons are collected by two single-photon detection events, located at $(L_1,\\vec{\\rho}_1)$ and $(L_1,\\vec{\\rho}'_1)$, in the bucket, Eq.~(\\ref{eq:psiapproximation}) becomes\n\\begin{eqnarray}\nB_{1,2}&=&B_{0}\\int{d^{2}}\\rho_oA(\\vec{\\rho}_o)e^{i\\frac{K_1|\\vec{\\rho}_o|^2}{2L_1}}e^{-i\\frac{K_1\\vec{\n\\rho}_1\\cdot\\vec{\\rho}_o}{L_1}}\\int{d^{2}}\\rho'_oA(\\vec{\\rho}'_o)e^{i\\frac{K_1|\\vec{\\rho}'_o|^2}{2L_1}}e^{-i\n\\frac{K_1\\vec{\\rho}'_1\\cdot\\vec{\\rho}'_o}{L_1}}\\int{d^{2}}\\rho_le^{i\\frac{K_2|\\vec{\\rho}_l|^2}{2}(\\frac{1}{\nL_2}-\\frac{1}{f})}e^{i\\frac{K_2}{L_2}\\vec{\\rho}_2\\cdot\\vec{\\rho}_l}\\nonumber\\\\\n&&\\times\\int{d^{2 }}\\alpha_1e^{-i|\\vec{\\alpha}_1|^2(\\frac{d_1}{K_1}+\\frac{2d_2}{K_2})}e^{-i\\vec{\\alpha}_1\n\\cdot(2\\vec{\\rho}_l-\\vec{\\rho}_o-\\vec{\\rho}'_o)}.\\label{eq:bucketdetector}\n\\end{eqnarray}\nUnder the assumption that the two degenerate photons are scattered off the same point in the object, Eq.~(\\ref{eq:bucketdetector}) takes the similar form as Eq.~(\\ref{eq:Psiassumption}), except that the second phase term in the first integrand of (\\ref{eq:Psiassumption}) is replaced by $\\mathrm{exp}\\big[-i\\frac{K_1(\\vec{\\rho}_1+\\vec{\\rho}'_1)\\cdot\\vec{\\rho}_o}{L_1}\\big]$.\nIt is easy to show that the Gaussian thin-lens equation takes the same form as Eq.~(\\ref{eq:GTLE1}). By performing the same analysis as done in Sec.~IIA on the resolving two spatially close point scatters, the three-photon amplitude (\\ref{eq:Psirayleigh}) now is\n\\begin{eqnarray}\nB_{1,2}=B_{0}\\bigg(A_0^2\\mathbf{somb}\\bigg(\\frac{2\\pi{R}}{\\lambda_2}\\bigg|\\frac{\\vec{\\rho}_2}{L_2}\\bigg|\\bigg)+e^{i\n\\varphi_2}A_{\\vec{a}}^2\\mathbf{somb}\\bigg[\\frac{2\\pi{R}}{\\lambda_2}\\bigg|\\frac{\\vec{\\rho}_2}{L_2}+\\frac{\\vec{a}}{d_2+(\n\\lambda_1\/2\\lambda_2)d_1}\\bigg|\\bigg]\\bigg).\\label{eq:Psirayleighbucket}\n\\end{eqnarray}\nSince the bucket detector gives no position information, we must square the amplitude and integrating over the bucket detector,\n\\begin{equation}\nI=\\int{d^{2}}\\rho_{1} \\int{d^{2}}\\rho_{1}' |B_{1,2}|^{2}=s_{b}^2|B_{0}|^{2}\\bigg(|A_0|^4 \\mathbf{somb}^{2}\\bigg(\\frac{2\\pi{R}}{\\lambda_2}\\bigg|\\frac{\\vec{\\rho}_2}{L_2}\\bigg|\\bigg)+\n|A_{\\vec{a}}|^{4} \\mathbf{somb}^2\\bigg[\\frac{2\\pi{R}}{\\lambda_2}\\bigg|\\frac{\\vec{\\rho}_2}{L_2}+\\frac{\\vec{a}}{d_2+(\n\\lambda_1\/2\\lambda_2)d_1}\\bigg|\\bigg]\\bigg)\\label{eq:bucket}\n\\end{equation}\nwhere $s_b$ is the area of the bucket detector. It is easy to see that the spatial resolution improvement is the same as in Sec.~IIA, the difference is that now we get an incoherent image. The advantage is that a two photon bucket detector should be easier to construct than a point two photon detector.\n\n\\section{$N+1$ Photon Optics}\n\\begin{figure}[tbp]\n\\includegraphics[scale=0.6]{Nphoton.pdf}\n\\caption{(color online) Generalization of quantum imaging with $N+1$ entangled photons in state $|1,N\\rangle$. 
For notations please refer to Fig.~\\ref{fig:2photon} except that here D$_1$ is an $N$-photon detector. The image is formed in the coincidence measurement and is not localized at either detector.}\\label{fig:Nphoton}\n\\end{figure}\nIn Sec.~II, we have shown that with the entangled photon-number state $|1,2\\rangle$, the ability to resolve two point sources in the object can be improved by a factor of 2 by sending two degenerate photons to the object while keeping the non-degenerate photon and imaging lens in the laboratory. In this section, we are going to generalize the experimental configuration (Fig.~\\ref{fig:2photon}) with use of the entangled state of $|1,N\\rangle$, as described in Fig.~\\ref{fig:Nphoton}. For simplicity, we first address the case shown in Fig.~\\ref{fig:Nphoton} where the $N$ degenerate photons traverse to the $N$-photon detector, D$_1$, after the object and the non-degenerate photon propagates to the single-photon detector, D$_2$. The assumption required for the enhancement by a factor of $N$ are that the $N$ photons sent to the object scatter off the same point and are detected by the $N$-photon detector, D$_1$.\n\nThe $N+1$ photons are assumed to be in a non-normalized pure state\n\\begin{eqnarray}\n|\\psi_{1,N}\\rangle=\\int{d}\\omega_1{d}\\omega_2\\int{d^{2 }}\\alpha_1d^2\\alpha_2\\delta(N\\omega_1+\\omega_2-\\Omega)\\delta\n(N\\vec{\\alpha}_1+\\vec{\\alpha}_2)a^{\\dagger}_{\\vec{k}_2}\\big(a^{\\dagger}_{\\vec{k}_1}\\big)^N|0\\rangle.\\label{eq:state2}\n\\end{eqnarray}\nAgain the $\\delta$-functions in Eq.~(\\ref{eq:state2}) indicate perfect phase matching. The $N+1$-photon coincidence counting rate is defined as\n\\begin{eqnarray}\nR_{cc}=\\frac{1}{T}\\int^T_0dt_1\\int^T_0dt_2\\cdots\\int^T_0dt_{N+1}|\\Psi_{1,N}(1,2,\\cdots,N+1)|^2,\\label{eq:coincidence2}\n\\end{eqnarray}\nwhere $\\Psi_{1,N}$ is referred to as the \\textit{$N+1$-photon amplitude}. That is\n\\begin{eqnarray}\n\\Psi_{1,N}(1,2,\\cdots,N+1)&=&\\langle0|E^{(+)}_1E^{(+)}_2\\cdots{E}^{(+)}_{N+1}|\\psi_{1,N}\\rangle\\nonumber\\\\\n&=&\\langle0|E^{(+)}_2(\\vec{\\rho}_2,z_2,t_2)[E^{(+)}_1(\\vec{\\rho}_1,z_1,t_1)]^N|\\psi_{1,N}\\rangle.\\label{eq:ampN}\n\\end{eqnarray}\n\nFollowing the procedure done for the $|1,2\\rangle$ case, we calculate the transverse part of the $N+1$-photon amplitude $\\Psi_{1,N}$ (\\ref{eq:ampN}) as\n\\begin{eqnarray}\n\\Psi_{1,N}&=&e^{i(N\\Omega_1\\tau_1+\\Omega_2\\tau_2)}\\Phi_{1,N}(\\tau_1,\\tau_2)B_{1,N}\\nonumber\\\\\nB_{1,N}&=&B_0\\underbrace{\\int{d^{2}}\\rho_oA(\\vec{\\rho}_o)e^{i\\frac{K_1|\\vec{\\rho}_o|^2}{2L_1}}e^{-i\\frac{\nK_1\\vec{\\rho}_1\\cdot\\vec{\\rho}_o}{L_1}}\\cdots\\int{d^{2}}\\rho'_oA(\\vec{\\rho}'_o)e^{i\\frac{K_1|\\vec{\\rho}'_o|\n^2}{2L_1}}e^{-i\\frac{K_1\\vec{\\rho}_1\\cdot\\vec{\\rho}'_o}{L_1}}}_{\\mathrm{N\\;fold}}\\int{d^{2}}\\rho_le^{i\\frac{\nK_2|\\vec{\\rho}_l|^2}{2}(\\frac{1}{L_2}-\\frac{1}{f})}e^{-i\\frac{K_2}{L_2}\\vec{\\rho}_2\\cdot\\vec{\\rho}_l}\\nonumber\\\\\n&&\\times\\int{d^{2}}\\alpha_1e^{-i\\frac{N^2|\\vec{\\alpha}_1|^2}{2}(\\frac{d_1}{NK_1}+\\frac{d_2}{K_2})}e^{-i\\vec{\\alpha}_1\n\\cdot(N\\vec{\\rho}_l-\\underbrace{\\vec{\\rho}_o-\\cdots-\\vec{\\rho}'_o}_{\\mathrm{N}})}.\\label{eq:psiapproximation2}\n\\end{eqnarray}\nHere $\\Phi_{1,N}(\\tau_1,\\tau_2)$ describes the temporal behavior of entangled three photons. 
By applying the same argument that the $N$-photon detector D$_1$ only receives the signals from the same spatial point in the object, Eq.~(\\ref{eq:psiapproximation2}) can be further simplified as\n\\begin{eqnarray}\nB_{1,N}&=&B_0\\int{d^{2}}\\rho_oA^N(\\vec{\\rho}_o)e^{i\\frac{NK_1|\\vec{\\rho}_o|^2}{2L_1}}e^{-i\\frac{N K_1\n\\vec{\\rho}_1\\cdot\\vec{\\rho}_o}{L_1}}\\int{d^{2}}\\rho_le^{i\\frac{K_2|\\vec{\\rho}_l|^2}{2}(\\frac{1}{L_2}-\\frac{1}{\nf})}e^{-i\\frac{K_2}{L_2}\\vec{\\rho}_2\\cdot\\vec{\\rho}_l}\\nonumber\\\\\n&&\\times \\int{d^{2}}\\alpha_1e^{-i\\frac{N^2|\\vec{\\alpha}_1|^2}{2}(\\frac{d_1}{N\nK_1}+\\frac{d_2}{K_2})}e^{-Ni\\vec{\\alpha}_1\\cdot(\\vec{\\rho}_l-\\vec{\\rho}_o)}.\\label{eq:Psiassumption2}\n\\end{eqnarray}\nPerforming the integration on the transverse mode $\\vec{\\alpha}_1$ in Eq.~(\\ref{eq:Psiassumption2}) gives\n\\begin{eqnarray}\nB_{1,N}&=&B_0\\int{d^{2}}\\rho_oA^N(\\vec{\\rho}_o)e^{i\\frac{NK_1|\\vec{\\rho}_o|^2}{2}[\\frac{1}{L_1}+\\frac{1}{d_1+(N\n\\lambda_2\/\\lambda_1)d_2}]}e^{-i\\frac{NK_1\\vec{\\rho}_1\\cdot\\vec{\\rho}_o}{L_1}}\\nonumber\\\\\n&&\\times\\int{d^{2}}\\rho_le^{i\\frac{K_2|\\vec{\\rho}_l|^2}{2}[\\frac{1}{L_2}+\\frac{1}{d_2+(\\lambda_1\/N\\lambda_2)d_1\n}-\\frac{1}{f}]}e^{-iK_2\\vec{\\rho}_l\\cdot[\\frac{\\vec{\\rho}_2}{L_2}+\\frac{\\vec{\\rho}_o}{d_2+(\\lambda_1\/N\\lambda_2)d_1}\n]},\\label{eq:psiassumptionn}\n\\end{eqnarray}\nwhere, again, we have assumed multimode generation in the process. Applying the Gaussian thin-lens imaging condition\n\\begin{eqnarray}\n\\frac{1}{f}=\\frac{1}{L_2}+\\frac{1}{d_2+(\\lambda_1\/N\\lambda_2)d_1},\\label{eq:GTLEn}\n\\end{eqnarray}\nthe transverse part of the $N+1$-photon amplitude (\\ref{eq:psiassumptionn}) between detectors D$_1$ and D$_2$ now\nbecomes\n\\begin{eqnarray}\nB_{1,N}&=&B_0\\int{d^{2}}\\rho_oA^N(\\vec{\\rho}_o)e^{i\\frac{NK_1|\\vec{\\rho}_o|^2}{2}[\\frac{1}{L_1}+\\frac{1}{d_1+(N\\lambda\n_2\/\\lambda_1)d_2}]}e^{-i\\frac{NK_1\\vec{\\rho}_1\\cdot\\vec{\\rho}_o}{L_1}}\\mathbf{somb}\\bigg[\\frac{2\\pi{R}}{\\lambda_2}\\bigg|\n\\frac{\\vec{\\rho}_2}{L_2}+\\frac{\\vec{\\rho}_o}{d_2+(\\lambda_1\/N\\lambda_2)d_1}\\bigg|\\bigg].\n\\label{eq:psin}\n\\end{eqnarray}\nAs expected, Eqs.~(\\ref{eq:GTLEn}) and (\\ref{eq:psin}) have the similar forms as Eqs.~(\\ref{eq:GTLE1}) and (\\ref{eq:psif}) for the $|1,2\\rangle$ case. The unique point-to-point relationship between the object and the imaging planes is enforced by the Gaussian thin-lens equation (\\ref{eq:GTLEn}). The coherent and inverted image is demagnified by a factor of $L_2\/[d_2+d_1(\\lambda_1\/N\\lambda_2)]$. The spatial resolution is determined by the width of the point-spread function in Eq.~(\\ref{eq:psin}). Note that a factor of $N$ appears in the distance between the imaging lens and the object planes, $d_2+d_1(\\lambda_1\/N\\lambda_2)$. We emphasize again that the image is nonlocal and exists in the coincidence events.\n\nTo study the spatial resolution, we again consider the object represented by Eq.~(\\ref{eq:object}). 
Plugging Eq.~(\\ref{eq:object}) into (\\ref{eq:psin}) yields\n\\begin{eqnarray}\nB_{1,N}=B_0\\bigg(A_{0}^N\\mathbf{somb}\\bigg(\\frac{2\\pi{R}}{\\lambda_2}\\bigg|\\frac{\\vec{\\rho}_2}{L_2}\\bigg|\\bigg)+e^{i\n\\varphi_{N}}A_{\\vec{a}}^N\\mathbf{somb}\\bigg[\\frac{2\\pi{R}}{\\lambda_2}\\bigg|\\frac{\\vec{\\rho}_2}{L_2}+\\frac{\\vec{a}}{d_2+\n(\\lambda_1\/N\\lambda_2)d_1}\\bigg|\\bigg]\\bigg).\\label{eq:Psinrayleigh}\n\\end{eqnarray}\nFor $N$ single photon detectors located at $\\vec{\\rho}_{1}^{(1)},\\cdots,\\vec{\\rho}_1^{(N)}$ the phase is given by\n\\begin{equation}\n\\varphi_N=K_1\\bigg[\\frac{N|\\vec{a}|^2}{2}\\bigg(\\frac{1}{L_1}+\\frac{1}{d_1+d_2(N\\lambda_2\/\\lambda_1)}\\bigg)-\\frac{\n\\vec{a}\\cdot(\\overbrace{\\vec{\\rho}_{1}^{(1)}+\\vec{\\rho}^{(2)}_{1}+\\cdots}^{\\mathrm{N}})}{L_1}\\bigg]\\label{eq:phaseN}\n\\end{equation}\nFor a point $N$-photon number detector, we require $\\vec{\\rho}_{1}^{(1)}=\\vec{\\rho}^{(2)}_{1}=\\cdots$ and a coherent imaging is achievable in this case. The first term on the right-hand side in Eq.~(\\ref{eq:Psirayleigh}) gives the radius of the Airy disk, which is the same as the $|1,2\\rangle$ case, see Eq.~(\\ref{eq:airydisk1}). Applying the Rayleigh criterion, the minimum resolvable distance between two points in the transverse plane now is\n\\begin{eqnarray}\na_{\\mathrm{m}}=0.61\\frac{\\lambda_2}{R}\\bigg(d_2+\\frac{\\lambda_1}{N\\lambda_2}d_1\\bigg).\\label{eq:resolutionn}\n\\end{eqnarray}\nFor the case of $N=2$, Eq.~(\\ref{eq:resolutionn}) reduces to Eq.~(\\ref{eq:resolution1}). In the case that $d_1\\gg{d_2}$, this becomes\n\\begin{eqnarray}\na_{\\mathrm{m}}=0.61\\frac{\\lambda_1d_1}{NR}.\\label{eq:resolutionf}\n\\end{eqnarray}\nAs expected, Eq.~(\\ref{eq:resolutionf}) shows a gain in sub-Rayleigh resolution by a factor of $N$ with respect to what one would obtain in classical optics. We therefore conclude that in the proposed imaging protocol, the spatial resolving power can be improved by a factor of $N$ with use of the entangled photon-number state $|1,N\\rangle$. Furthermore, because we are using an entangled state with a specific type of detector, the image has high contrast because of the lack of background noise.\n\nBy following the analysis in Sec.~IIB, we can show that by replacing the $N$-photon detector with an $N$-photon bucket detector, we get an incoherent image but the sub-Rayleigh imaging process is not changed.\n\n\\section{Discussions and other Configurations}\nIn the previous two sections, we have analyzed a novel ghost imaging by sending $N$ degenerate photons to the object while keeping the non-degenerate photon and imaging lens in the lab. We find that if the distance between the object plane and the output surface of the source is much greater than the distance between the imaging lens and the single-photon detector planes, we can gain spatial resolution improvement in the object by a factor of $N$ compared to classical optics. In the cases that we have discussed in this paper, this enhancement beyond the Rayleigh criterion is due to the quantum nature of the entangled photon-number state. The assumptions required for such an enhancement are that the $N$ degenerate photons sent to the object scatter off the same point and are detected by either an $N$-photon number detector or a bucket detector. An $N$-photon bucket detector is much easier to realize than an $N$-photon point detector. 
Such a bucket detector could be an array of single photon point detectors which only sent a signal to the coincidence circuit if exactly $N$ of them fired.\n\n\\begin{figure}[tbp]\n\\includegraphics[scale=0.6]{otherconfigurations.pdf}\n\\caption{(color online) Other schematics of quantum ghost imaging with three entangled photons in state $|1,2\\rangle$. (a) Both the imaging lens and the object are inserted in the non-degenerate photon channel. (b) The imaging lens is placed in the degenerate photon pathway while the object is in the non-degenerate optical pathway.}\n\\label{fig:otherconfigurations}\n\\end{figure}\nBesides the favorable configuration discussed above, one may wonder what happens if we switch the $N$ degenerate photons to detector $D_1$ and the non-degenerate photon to $D_2$ after an imaging lens and an object? Do we gain any spatial resolution improvement? To answer the questions, let us look at the $|1,2\\rangle$ case as illustrated in Fig.~\\ref{fig:otherconfigurations}(a). Following the treatments in Sec.~IIA, after some algebra we find that the transverse part of the three-photon amplitude (\\ref{eq:coincidence}) is\n\\begin{eqnarray}\nB_{1,2}&=&B_0 \\int{d^{2}}\\rho_oA(\\vec{\\rho}_o)e^{i\\frac{K_2|\\vec{\\rho}_o|^2}{2}(\\frac{1}{L_2}+\\frac{1}{d'_2})}\ne^{-i\\frac{K_2\\vec{\\rho}_2\\cdot\\vec{\\rho}_o}{L_2}}\\nonumber\\\\\n&&\\times\\int{d^{2}}\\rho_le^{i\\frac{K_2|\\vec{\\rho}_l|^2}{2}[\\frac{1}{d'_2}+\\frac{1}{d_2+(\\lambda_1\/2\\lambda_2)L_1}-\n\\frac{1}{f}]}e^{-iK_2\\vec{\\rho}_l\\cdot[\\frac{\\vec{\\rho}\n_o}{d'_2}+\\frac{\\vec{\\rho}_1}{d_2+(\\lambda_1\/2\\lambda_2)L_1}]}.\\label{eq:Psi22}\n\\end{eqnarray}\nIn the derivation of Eq.~(\\ref{eq:Psi22}), the Green's functions associated with each beam give\n\\begin{eqnarray}\n\\chi_1(\\vec{\\alpha}_1,\\Omega_1;\\vec{\\rho}_1,L_1)&=&e^{-i\\frac{L_1|\\vec{\\alpha}_1|^2}{2K_1}}e^{i\\vec{\\rho}_1\\cdot\n\\vec{\\alpha}_1},\\label{eq:g12}\\nonumber\\\\\n\\chi_2(\\vec{\\alpha}_2,\\Omega_2;\\vec{\\rho}_2,z_2)&=&e^{-i\\frac{d_2|\\vec{\\alpha}_2|^2}{2K_2}}\\int{d^{2}}\\rho_oA(\n\\vec{\\rho}_o)e^{i\\frac{K_2|\\vec{\\rho}_o|^2}{2}(\\frac{1}{L_2}+\\frac{1}{d'_2})}e^{-i\\frac{K_2\\vec{\\rho}_2\\cdot\n\\vec{\\rho}_o}{L_2}}\\int{d^{2}}\\rho_le^{i\\frac{K_2|\\vec{\\rho}_l|^2}{2}(\\frac{1}{d'_2}-\\frac{1}{f})}e^{i\\vec{\\rho}\n_l\\cdot(\\vec{\\alpha}_2-\\frac{K_2\\vec{\\rho}_o}{d'_2})}.\\nonumber\n\\end{eqnarray}\nApplying the Gaussian thin-lens imaging condition\n\\begin{eqnarray}\n\\frac{1}{d'_2}+\\frac{1}{d_2+(\\lambda_1\/2\\lambda_2)L_1}=\\frac{1}{f},\\label{eq:GTLE2}\n\\end{eqnarray}\nthe transverse spatial part of the three-photon amplitude (\\ref{eq:Psi22}) reduces to\n\\begin{eqnarray}\nB_{1,2}=B_0\\int{d^{2}}\\rho_oA(\\vec{\\rho}_o)e^{i\\frac{K_2|\\vec{\\rho}_o|^2}{2}(\\frac{1}{L_2}+\\frac{1}{d'_2})}\ne^{-i\\frac{K_2\\vec{\\rho}_2\\cdot\\vec{\\rho}_o}{L_2}}\\mathbf{somb}\\bigg(\\frac{2\\pi{R}}{\\lambda_2}\\bigg|\\frac{\\vec{\n\\rho}_o}{d'_2}+\\frac{\\vec{\\rho}_1}{d_2+(\\lambda_1\/2\\lambda_2)L_1}\\bigg|\\bigg).\\label{eq:Psi222}\n\\end{eqnarray}\nFrom this we see that the magnification is $m=[d_2+(\\lambda_1\/2\\lambda_2)L_1]\/d'_2$. Comparing Eqs.~(\\ref{eq:GTLE2}) and (\\ref{eq:Psi222}) with Eqs.~(\\ref{eq:GTLE1}) and (\\ref{eq:psif}), we see that the distances between the object and the thin lens and between the thin lens and the imaging plane are interchanged. 
Since the degenerate photons are measured at the imaging plane in the setup of Fig.~\\ref{fig:otherconfigurations}(a), the requirement of a point $N$-photon detector cannot be relaxed.\n\nComputing the spatial resolution as in Sec. II we have\n\\begin{eqnarray}\nB_{1,2}=B_0\\bigg[A_{0}\\mathbf{somb}\\bigg(\\frac{2\\pi{R}}{\\lambda_2}\\bigg|\\frac{\\vec{\\rho}_1}{d_2+(\\lambda_1\/2\n\\lambda_2)L_1}\\bigg|\\bigg)+e^{i\\varphi'} A_{\\vec{a}}\\mathbf{somb}\\bigg(\\frac{2\\pi{R}}{\\lambda_2}\\bigg|\\frac{\\vec{a}}\n{d'_2}+\\frac{\\vec{\\rho}_1}{d_2+(\\lambda_1\/2\\lambda_2)L_1}\\bigg|\\bigg)\\bigg],\\label{eq:Psi22f}\n\\end{eqnarray}\nwhere $\\varphi'=K_2\\big[\\frac{|\\vec{a}|^2}{2}((\\frac{1}{L_2}+\\frac{1}{d'_2})-\\frac{\\vec{\\rho}_2\\cdot\\vec{a}}{L_2}\\big].$\nThe radius of the Airy disk is\n\\begin{eqnarray}\n\\xi=0.61\\frac{\\lambda_2}{R}\\bigg(\\frac{\\lambda_1}{2\\lambda_2}L_1+d_2\\bigg).\\label{eq:diskrudius2}\n\\end{eqnarray}\nIf $L_1\\gg{d_2}$, $\\xi\\rightarrow\\frac{0.61L_1}{R}(\\frac{\\lambda_1}{2})$, so that the width of the point-spread function shrinks to one half its value compared to the classical cases. Applying the Rayleigh criterion to see the minimum resolvable distance between two point sources in the object. From the second term of Eq.~(\\ref{eq:Psi22f}) the minimum distance turns out to be\n\\begin{eqnarray}\na_{\\mathrm{min}}=0.61\\frac{d'_2\\lambda_2}{R},\\label{eq:arayleigh2}\n\\end{eqnarray}\nwhich only is a function of the wavelength of the non-degenerate photon; therefore, no spatial resolution improvement can be achieved compared to classical optics.\n\nFinally, we consider the configuration shown in Fig.~\\ref{fig:otherconfigurations}(b) which was analyzed in \\cite{wen1} where it was shown that no well-defined images could be obtained.\n\nIt is straightforward to generalize the above two configurations with use of the $|1,N\\rangle$ state. By replacing the source state by the state $|1,N\\rangle$ in Fig.~\\ref{fig:otherconfigurations}(a), it can be shown that the radius of the Airy disk becomes\n\\begin{eqnarray}\n\\xi=0.61\\frac{\\lambda_2}{R}\\bigg(\\frac{\\lambda_1}{N\\lambda_2}L_1+d_2\\bigg).\\label{eq:diskrudiusN}\n\\end{eqnarray}\nIf $L_1\\gg{d_2}$, $\\xi\\rightarrow\\frac{0.61L_1}{R}(\\frac{\\lambda_1}{N})$, so the Airy disk shrinks to one $N$th of its radius compared to classical optics. However, if $L_1\\ll{d_2}$, Eq.~(\\ref{eq:diskrudiusN}) gives the same result as in\nclassical optics. Replacing the source with photon state $|1,N\\rangle$ in Fig.~\\ref{fig:otherconfigurations}(b), the above conclusion is still valid. The analysis has been presented in \\cite{wen2} and we will not repeat here.\n\n\\section{Conclusions}\nIn summary, we have proposed a quantum-imaging scheme to improve the spatial resolution in the object beyond the Rayleigh diffraction limit by using an entangled photon-number state $|1,N\\rangle$. We have shown that by sending the $N$ degenerate photons to the object, keeping the non-degenerate photon and imaging lens in the lab, and using a resolving $N$-photon detector or a bucket detector, a factor of $N$ can be achieved in spatial resolution enhancement using the Rayleigh criterion. The image is nonlocal and the quantum nature of the state leads to the sub-Rayleigh imaging resolution with high contrast. 
We have also shown that by sending the $N$ degenerate photons freely to a point $N$-photon detector while propagating the non-degenerate photon through the imaging lens and the object, the Airy disk in the imaging can be shrunk by a factor of $N$ under certain conditions. However, it may be possible to show that a similar effect can occur using non-entangled sources. In the language of quantum information, the non-degenerate photon may be thought of as an ancilla onto which the information about the object is transferred for measurement. Our imaging protocol may be of importance in many applications such as imaging, sensors, and telescopy.\n\n\\section{Acknowledgement}\nThis work was supported in part by U.S. ARO MURI Grant W911NF-05-1-0197 and by Northrop Grumman Corporation through the Air Force Research Laboratory under contract FA8750-07-C-0201 as part of DARPA's Quantum Sensors Program.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:Introduction}\nIn the era of big data, effective management of extensive feature spaces represents a genuine hurdle for scientists and practitioners. Only some features have significant relevance in real-world data, while the redundancy of the remaining ones actively inflates data-driven models' complexity. The search for always new and increasingly performing dimensionality reduction algorithms is hence justified by four primary needs: (i) reduce data maintenance costs; (ii) reduce the impact of the `curse of dimensionality' on data-driven models; (iii) increase data-driven models' interpretability; (iv) reduce energy costs for model training. Reducing data maintenance costs involves an articulate corpus of engineering applications including, but not limited to, the effective management of storage space \\cite{siddiqa2017big, katal2013big, padgavankar2014big, strohbach2016big}, continuous databases maintenance \\cite{madden2012databases}, design and implementation of cost-effectively data pipelines \\cite{eichelberger2017experiences, wu2016building}. These challenges are particularly evident in a heterogeneous range of application domains such as bioinformatics \\cite{mak2006solution, sen2005gini, antoniadis2003effective}, healthcare \\cite{patorno2013propensity, berisha2021digital, jiwani2021novel}, robotics \\cite{da2019combining, li2016utilizing}, object and speech recognition tasks \\cite{stommel2009binarising, lotfi2014neural, estellers2011multipose}, sentiment analysis \\cite{singh2020sentiment, skowron2016fusing} and finance \\cite{verleysen2005curse, wang2007brownian, gueant2019deep}. These disciplines historically represented the testing ground for data models suffering the `curse of dimensionality' \\cite{bellman1959mathematical}. In the literature, the `curse of dimensionality' refers to the difficulty in effectively modelling high-dimensional data due to the exponential increase in the number of samples required to cover the input space as the number of dimensions increases \\cite{donoho2000high, poggio2017and}. This phenomenon is even riskier in critical applicative domains like medicine \\cite{holzinger2017we, singh2020explainable}, physics \\cite{lai2022explainable}, communication technologies \\cite{guo2020explainable}, and automated trading \\cite{leal2020learning, briola2020deep, briola2021deep, nagy2023asynchronous, zhang2019deeplob}, where, in addition to generalisation power, models' interpretability represents an essential requirement. 
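The exponential character of the 'curse of dimensionality' is easy to make concrete with a toy count of how many cells are needed to cover the input space at a fixed per-axis resolution; the resolution and the dimensions below are arbitrary illustrative choices.

\\begin{verbatim}
# Toy illustration of the 'curse of dimensionality': cells needed to cover the
# unit hypercube [0, 1]^d with 10 bins per axis (both numbers are arbitrary).
bins_per_axis = 10
for d in (1, 2, 5, 10, 50, 100):
    print('d = %3d  ->  %g cells' % (d, float(bins_per_axis) ** d))
\\end{verbatim}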
Thanks to remarkable advancements in hardware technology and neural network training methodologies, in recent years, some of these drawbacks have been partially addressed. This has paved the way for developing a new breed of large-scale models trained on expansive datasets, leading to notable improvements across a range of tasks. However, it is worth noting that these models come at a substantial cost in terms of finances and environmental impact. The financial costs associated with electricity consumption are significant, and the carbon footprint generated by the energy-intensive modern tensor processing hardware represents a growing environmental concern \\cite{strubell2019energy, patterson2021carbon}.\n\nDimensionality reduction addresses all these problems by decreasing the complexity of the feature space while minimising information loss \\cite{miner2012practical}. The field can be further divided into two macro areas: (i) feature extraction and (ii) feature selection. Feature extraction techniques generate new features by transforming the original ones into a space with a different dimensionality and then choosing linear or non-linear combinations of them. They are mainly used in the presence of a limited understanding of raw data. On the other hand, feature selection techniques directly choose a subset of features from the original set while maintaining their physical meaning. Usually, they are a 'block' of a more complex pipeline that also includes a classification (or regression) step. The level of contamination among 'blocks' is more or less evident depending on the nature of the feature selection algorithm. Indeed, feature selection techniques are classified as (i) supervised \\cite{huang2015supervised}; (ii) unsupervised \\cite{solorio2020review}; and (iii) semi-supervised \\cite{sheikhpour2017survey}. In the supervised case, features' relevance is usually assessed via their degree of correlation with class labels or regression targets. These models take advantage of the learning performance of the classification (or regression) algorithm to refine the choice of the meaningful subset of features while maintaining, at the same time, 'block' independence from it. This loop interaction is convenient only in scenarios where retrieving labels and executing classification (or regression) tasks is computationally efficient. On the contrary, unsupervised feature selection is applied in scenarios where retrieving labels is costly. These algorithms select relevant features based on specific data properties (e.g. variance maximisation). Different classes of unsupervised feature selection approaches exist: (i) filters; (ii) wrappers; (iii) embedded methods. In filter-based methods, the feature selection stage is entirely independent of the classification (or regression) algorithm; in wrapper-based methods, the feature selection process takes advantage of classification (or regression) performance to improve its own performance; in embedded methods, the feature selection step is embedded into the classification algorithm so that the two 'blocks' benefit from each other. \\\\\nLastly, semi-supervised feature selection represents a hybrid approach with the highest potential applicability in real-world problems. 
Indeed, labels are often only partially provided, and semi-supervised learning techniques are specifically designed to learn from a reduced number of labelled samples while efficiently handling a large number of unlabelled ones.\n\nIn its general formulation, given a starting set of features $F = \\{f_1, \\dots, f_n\\}$, where $n \\gg 1$ is the cardinality, dimensionality reduction is an \\textit{NP-hard} problem \\cite{guyon2008feature, roffo2015infinite} where the goal is to select the optimal subset of features with cardinality $m \\ll n$ among the ${n \\choose m}$ possible combinations. Due to the exponentially increasing time required to find the globally optimal solution, existing feature selection algorithms employ heuristic rules to find dataset-dependent sub-optimal solutions \\cite{ge2016mctwo}. In this paper, we propose a novel unsupervised, graph-based filter algorithm for feature selection which (i) builds a topologically constrained network representation of raw features' dependency structure and (ii) exploits their relative position in the graph to reduce the input's dimensionality, minimising the information loss. \n\nA network (or graph) represents components of a system as nodes (or vertices) and interactions among them as links (or edges). The number of nodes defines the size of the network, and the number of links establishes the network's sparsity (or, conversely, density). Reversible interactions between components are represented through undirected links, while non-reversible interactions are represented as directed links \\cite{briola2022dependency}. Topologically constrained networks, also known as information filtering networks (IFNs), constitute a family of graphs built by imposing topological constraints (such as being a tree or a planar graph) while optimising global properties (such as the likelihood) \\cite{mantegna1999hierarchical, tumminello2005tool, serin2016learning, marti2021review}. In this paper, starting from the raw set of features $F$, we exploit the power of a specific class of IFNs, namely the Triangulated Maximally Filtered Graph (TMFG), to capture a meaningful and intuitive description of dependency structures among features in an unsupervised manner (i.e. without exploring their relationships with labels or regression targets). Consequently, we study the relative position of elements inside the network to maximise the likelihood of features' relevance while minimising information loss. Based on this construction schema, we name our approach 'Topological Feature Selection' (TFS).\n\nTo prove the effectiveness of the proposed methodology, we test it against what is currently considered the state-of-the-art counterpart in unsupervised, graph-based filter feature selection approaches, i.e. Infinite Feature Selection ($\\textrm{Inf-FS}_U$) \\cite{roffo2015infinite}. The two feature selection algorithms are tested on $16$ benchmark datasets from different application domains. The proposed training\/test pipeline and the statistical validation one are designed to handle dataset imbalance and evaluate results based on fair performance metrics. The results are clear-cut. In most cases, TFS outperforms or equalises its alternative, redefining or matching state-of-the-art performances. 
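To give a feeling for why an exhaustive search over the ${n \\choose m}$ candidate subsets is out of reach and heuristic rules are needed, the following back-of-the-envelope count is instructive; the $(n, m)$ pairs are merely representative orders of magnitude, not actual settings used in this work.

\\begin{verbatim}
from math import comb

# Number of candidate feature subsets of size m out of n features.
# The (n, m) pairs are illustrative orders of magnitude only.
for n, m in [(256, 10), (1024, 50), (5000, 100)]:
    print('n = %5d, m = %4d -> C(n, m) has %d digits'
          % (n, m, len(str(comb(n, m)))))
\\end{verbatim}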
Our contribution to the existing literature is relevant since we propose an extraordinarily flexible, computationally cheap and remarkably intuitive (compared to its alternative) unsupervised, graph-based filter algorithm for feature selection guaranteeing complete control of the dimensionality reduction process by considering only the relative position of features inside well-known graph structures.\n\nThe rest of the paper is organised as follows. In Section \\ref{sec:Data}, we describe the datasets used to prove the effectiveness of the TFS algorithm. In Section \\ref{sec:Information_Filtering_Networks}, we review the theoretical foundations of IFNs and present the TMFG algorithm. In Section \\ref{sec:Topologically_Constrained_Feature_Selection}, we describe in a detailed manner the TFS methodology. In Section \\ref{sec:Experiments}, we describe the experimental protocols, while obtained results are presented in Section \\ref{sec:Results}. Finally, in Section \\ref{sec:Conclusions}, we discuss the meaning of our results and future research lines in this area.\n\n\\section{Data and Methods}\\label{sec:Data_and_Methods}\n\n\\subsection{Data} \\label{sec:Data}\nIn order to prove TFS' effectiveness, we extensively test it on $16$ benchmark datasets (all in a tabular format) belonging to different application domains. For each dataset, Table \\ref{tab:Datasets_Description} reports, respectively, the name, the application domain, the reference paper (or website), the downloading source, the number of features, the number of samples, the number of classes and the split dynamics.\n\n\\begin{table}[H]\n\\centering\n\\caption{Benchmark datasets used to compare feature selection algorithms considered in the current work. The order of appearance is inherited from \\cite{ASUDatasets}.}\n\\label{tab:Datasets_Description}\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{@{}cccccccc@{}}\n\\toprule\n\\textbf{Name} &\n \\textbf{Category} &\n \\textbf{Reference} &\n \\textbf{Source} &\n \\textbf{\\# Features} &\n \\textbf{\\# Samples} &\n \\textbf{Classes} &\n \\textbf{Split Provided} \\\\ \\midrule\nPCMAC & Text Data & \\cite{lang1995newsweeder} & \\cite{ASUDatasets} & 3289 & 1943 & binary & false \\\\\nRELATHE & Text Data & \\cite{lang1995newsweeder} & \\cite{ASUDatasets} & 4322 & 1427 & binary & false \\\\\nCOIL20 & Face Images Data & \\cite{nene1996columbia} & \\cite{ASUDatasets} & 1024 & 1440 & multi-class & false \\\\\nORL &\n Face Images Data &\n \\cite{samaria1994parameterisation} &\n \\cite{ASUDatasets} &\n 1024 &\n 400 &\n multi-class &\n false \\\\\nwarpAR10P & Face Images Data & \\cite{li2018feature} & \\cite{ASUDatasets} & 2400 & 130 & multi-class & false \\\\\nwarpPIE10P & Face Images Data & \\cite{sim2002cmu} & \\cite{ASUDatasets} & 2420 & 210 & multi-class & false \\\\\nYale &\n Face Images Data &\n \\cite{belhumeur1997eigenfaces} &\n \\cite{ASUDatasets} &\n 1024 &\n 165 &\n multi-class &\n false \\\\\nUSPS & Hand Written Images Data & \\cite{hull1994database} & \\cite{GitHubIFS} & 256 & 9298 & multi-class & false \\\\\ncolon & Biological Data & \\cite{alon1999broad} & \\cite{GitHubIFS} & 2000 & 62 & binary & false \\\\\nGLIOMA & Biological Data & \\cite{li2018feature} & \\cite{ASUDatasets} & 4434 & 50 & multi-class & false \\\\\nlung & Biological Data & \\cite{GitHubIFS} & \\cite{GitHubIFS} & 3312 & 203 & multi-class & false \\\\\nlung\\_small & Biological Data & \\cite{GitHubIFS} & \\cite{GitHubIFS} & 325 & 73 & multi-class & false \\\\\nlymphoma & Biological Data & 
\\cite{golub1999molecular} & \\cite{ASUDatasets} & 4026 & 96 & multi-class & false \\\\\nGISETTE & Digits Data & \\cite{guyon2007competitive} & \\cite{Dua:2019} & 5000 & 7000 & binary & true \\\\\nIsolet &\n Spoken Letters Data &\n \\cite{li2018feature} &\n \\cite{ASUDatasets} &\n 617 &\n 1560 &\n multi-class &\n false \\\\\nMADELON & Artificial Data & \\cite{guyon2007competitive} & \\cite{Dua:2019} & 500 & 2600 & binary & true \\\\ \\bottomrule\n\\end{tabular}%\n}\n\\end{table}\n\nWe distinguish among $7$ different application domains (i.e. text data, face images data, hand written images data, biological data, digits data, spoken letters data and artificial data). Categories follow the taxonomy in \\cite{li2018feature}. The average number of features is $2248$. The dataset with the lowest number of features is 'USPS' (i.e. $256$). The dataset with the largest number of features is 'GISETTE' (i.e. $5000$). The average number of samples is $1666$. The dataset with the lowest number of samples is 'GLIOMA' (i.e. $50$), while the one with the largest number of samples is 'USPS' (i.e. $9298$). $5$ of the considered datasets are binary, while $11$ are multi-class. Depending on the source, training\/test split could be provided or not. The two datasets 'GISETTE' and 'MADELON' come with a provided training\/validation\/test split. In both cases, the data source (i.e. \\cite{Dua:2019}) does not provide test labels. Because of this, we use the validation set for testing. For all the other datasets, $70\\%$ of the raw dataset is used as a training set, while $30\\%$ is used as a test set. We use a stratified splitting procedure to ensure that each set contains, for each target class, approximately the same percentage of samples as per in the raw dataset.\n\n\\begin{comment}\n\\begin{figure}[H]\n \\centering\n \\includegraphics[scale=0.25]{images\/Data_Pipeline.png}\n \\caption{The data preprocessing pipeline consists of $4$ different steps: (i) data reading and format unification, (ii) training\/test splitting, (iii) constant features pruning and (iv) metadata production.}\n \\label{fig:Data_Pipeline}\n\\end{figure}\n\nFigure \\ref{fig:Data_Pipeline} reports a pictorial schema of the data pre-processing pipeline. We distinguish 4 different steps: (i) data reading and format unification, (ii) training\/test splitting, (iii) constant features pruning and (iv) metadata production. The first step allows to read data and unify formats coming from different sources. The second step consists of the train\/test splitting and has been presented early in this section. In the third step, non-informative, constant covariates are detected on the training set and permanently removed from training, validation and test set. The fourth step allows to produce useful metadata that highly facilitate data management.\n\\end{comment}\n\nWe design the data pre-processing pipeline as consisting of $3$ different steps: (i) data reading and format unification; (ii) training\/test splitting; (iii) constant features pruning. The first step allows us to read data and unify formats coming from different sources. The second step consists of the train\/test splitting and has been presented early in this section. In the third step, non-informative, constant covariates are detected on the training set and permanently removed from the training, validation and test set. \n\nTable \\ref{tab:Pre_Processed_Data} reports datasets' specifics after the preprocessing step. 
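A minimal sketch of steps (ii) and (iii) is reported below; the toy data, variable names and the use of scikit-learn's stratified splitter are our own illustrative choices and do not correspond to any specific benchmark.

\\begin{verbatim}
import numpy as np
from sklearn.model_selection import train_test_split

# Toy feature matrix X (n_samples x n_features) and class labels y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
X[:, 7] = 1.0                      # a constant, non-informative feature
y = rng.integers(0, 3, size=200)

# Step (ii): stratified 70/30 split, preserving per-class proportions.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Step (iii): constant features are detected on the training set only
# and removed from both the training and the test set.
keep = X_tr.std(axis=0) > 0.0
X_tr, X_te = X_tr[:, keep], X_te[:, keep]
print(X_tr.shape, X_te.shape)      # one feature fewer than the original 50
\\end{verbatim}

Table \\ref{tab:Pre_Processed_Data} collects the outcome of these steps for the actual benchmarks.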
Looking at it, we remark that most considered datasets are not affected by the constant features filtering step. The only $4$ datasets which are reduced in the number of covariates are 'PCMAC' (with a reduction of $0.07\\%$), 'RELATHE' (with a reduction of $0.03\\%$), 'GLIOMA' (with a reduction of $0.03\\%$) and 'GISETTE' (with a reduction of $0.9\\%$). Table \\ref{tab:Pre_Processed_Data} also reports the training and test dataset's labels' distributions. Datasets are generally balanced; the main exceptions are:\n\n\\begin{itemize}\n \\item 'USPS': classes 1 and 2 are over-represented compared to the other classes.\n \\item 'colon': class 1 is under-represented compared to the other class.\n \\item 'GLIOMA': class 2 is under-represented compared to other classes.\n \\item 'lung': class 1 is over-represented compared to the other classes.\n \\item 'lung\\_small': classes 1, 2, 3 and 5 are under-represented compared to the other classes.\n \\item 'lymphoma': class 1 is over-represented compared to the other classes.\n\\end{itemize}\n\n\\begin{table}[H]\n\\caption{Datasets' specifics after the preprocessing step. For each benchmark dataset, we report the number of features, the number of samples and the labels' distribution for training and test data. The labels' distribution entry consists of a tuple for each class. The first element of the tuple represents the class itself, while the second represents the number of samples with that label.}\n\\label{tab:Pre_Processed_Data}\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{|c|ccc|ccc|}\n\\hline\n\\textbf{Dataset} &\n \\multicolumn{3}{c|}{\\textbf{Training}} &\n \\multicolumn{3}{c|}{\\textbf{Test}} \\\\ \\hline\n &\n \\multicolumn{1}{c|}{\\textbf{\\# Features}} &\n \\multicolumn{1}{c|}{\\textbf{\\# Samples}} &\n \\textbf{Labels' Distribution} &\n \\multicolumn{1}{c|}{\\textbf{\\# Features}} &\n \\multicolumn{1}{c|}{\\textbf{\\# Samples}} &\n \\textbf{Labels' Distribution} \\\\ \\hline\nPCMAC &\n \\multicolumn{1}{c|}{3287} &\n \\multicolumn{1}{c|}{1360} &\n (1, 687), (2, 673) &\n \\multicolumn{1}{c|}{3287} &\n \\multicolumn{1}{c|}{583} &\n (1, 295), (2, 288) \\\\ \\hline\nRELATHE &\n \\multicolumn{1}{c|}{4321} &\n \\multicolumn{1}{c|}{998} &\n (1, 545), (2, 453) &\n \\multicolumn{1}{c|}{4321} &\n \\multicolumn{1}{c|}{429} &\n (1, 234), (2, 195) \\\\ \\hline\nCOIL20 &\n \\multicolumn{1}{c|}{1024} &\n \\multicolumn{1}{c|}{1008} &\n \\begin{tabular}[c]{@{}c@{}}(1, 50), (2, 51), (3, 50), (4, 50), \\\\ (5, 50), (6, 50), (7, 51), (8, 50), \\\\ (9, 51), (10, 50), (11, 51), (12, 50), \\\\ (13, 50), (14, 51), (15, 50), (16, 50), \\\\ (17, 50), (18, 51), (19, 51), (20, 51)\\end{tabular} &\n \\multicolumn{1}{c|}{1024} &\n \\multicolumn{1}{c|}{432} &\n \\begin{tabular}[c]{@{}c@{}}(1, 22), (2, 21), (3, 22), (4, 22), \\\\ (5, 22), (6, 22), (7, 21), (8, 22), \\\\ (9, 21), (10, 22), (11, 21), (12, 22), \\\\ (13, 22), (14, 21), (15, 22), (16, 22), \\\\ (17, 22), (18, 21), (19, 21), (20, 21)\\end{tabular} \\\\ \\hline\nORL &\n \\multicolumn{1}{c|}{1024} &\n \\multicolumn{1}{c|}{280} &\n \\begin{tabular}[c]{@{}c@{}}(1, 7), (2, 7), (3, 7), (4, 7), (5, 7), \\\\ (6, 7), (7, 7), (8, 7), (9, 7), (10, 7),\\\\ (11, 7), (12, 7), (13, 7), (14, 7), (15, 7), \\\\ (16, 7), (17, 7), (18, 7), (19, 7), (20, 7), \\\\ (21, 7), (22, 7), (23, 7), (24, 7), (25, 7), \\\\ (26, 7), (27, 7), (28, 7), (29, 7), (30, 7), \\\\ (31, 7), (32, 7), (33, 7), (34, 7), (35, 7), \\\\ (36, 7), (37, 7), (38, 7), (39, 7), (40, 7)\\end{tabular} &\n \\multicolumn{1}{c|}{1024} &\n 
\\multicolumn{1}{c|}{120} &\n \\begin{tabular}[c]{@{}c@{}}(1, 3), (2, 3), (3, 3), (4, 3), (5, 3),\\\\ (6, 3), (7, 3), (8, 3), (9, 3), (10, 3), \\\\ (11, 3), (12, 3), (13, 3), (14, 3), (15, 3),\\\\ (16, 3), (17, 3), (18, 3), (19, 3), (20, 3), \\\\ (21, 3), (22, 3), (23, 3), (24, 3), (25, 3), \\\\ (26, 3), (27, 3), (28, 3), (29, 3), (30, 3), \\\\ (31, 3), (32, 3), (33, 3), (34, 3), (35, 3), \\\\ (36, 3), (37, 3), (38, 3), (39, 3), (40, 3)\\end{tabular} \\\\ \\hline\nwarpAR10P &\n \\multicolumn{1}{c|}{2400} &\n \\multicolumn{1}{c|}{91} &\n \\begin{tabular}[c]{@{}c@{}}(1, 9), (2, 9), (3, 10), (4, 9), (5, 9), \\\\ (6, 9), (7, 9), (8, 9), (9, 9), (10, 9)\\end{tabular} &\n \\multicolumn{1}{c|}{2400} &\n \\multicolumn{1}{c|}{39} &\n \\begin{tabular}[c]{@{}c@{}}(1, 4), (2, 4), (3, 3), (4, 4), (5, 4), \\\\ (6, 4), (7, 4), (8, 4), (9, 4), (10, 4)\\end{tabular} \\\\ \\hline\nwarpPIE10P &\n \\multicolumn{1}{c|}{2420} &\n \\multicolumn{1}{c|}{147} &\n \\begin{tabular}[c]{@{}c@{}}(1, 14), (2, 15), (3, 15), (4, 14), (5, 15),\\\\ (6, 14), (7, 15), (8, 15), (9, 15), (10, 15)\\end{tabular} &\n \\multicolumn{1}{c|}{2420} &\n \\multicolumn{1}{c|}{63} &\n \\begin{tabular}[c]{@{}c@{}}(1, 7), (2, 6), (3, 6), (4, 7), (5, 6), \\\\ (6, 7), (7, 6), (8, 6), (9, 6), (10, 6)\\end{tabular} \\\\ \\hline\nYale &\n \\multicolumn{1}{c|}{1024} &\n \\multicolumn{1}{c|}{115} &\n \\begin{tabular}[c]{@{}c@{}}(1, 7), (2, 8), (3, 8), (4, 7), (5, 8), \\\\ (6, 7), (7, 8), (8, 8), (9, 8), (10, 8), \\\\ (11, 8), (12, 7), (13, 7), (14, 8), (15, 8)\\end{tabular} &\n \\multicolumn{1}{c|}{1024} &\n \\multicolumn{1}{c|}{50} &\n \\begin{tabular}[c]{@{}c@{}}(1, 4), (2, 3), (3, 3), (4, 4), (5, 3), \\\\ (6, 4), (7, 3), (8, 3), (9, 3), (10, 3), \\\\ (11, 3), (12, 4), (13, 4), (14, 3), (15, 3)\\end{tabular} \\\\ \\hline\nUSPS &\n \\multicolumn{1}{c|}{256} &\n \\multicolumn{1}{c|}{6508} &\n \\begin{tabular}[c]{@{}c@{}}(1, 1087), (2, 888), (3, 650), (4, 577), (5, 596), \\\\ (6, 501), (7, 584), (8, 554), (9, 496), (10, 575)\\end{tabular} &\n \\multicolumn{1}{c|}{256} &\n \\multicolumn{1}{c|}{2790} &\n \\begin{tabular}[c]{@{}c@{}}(1, 466), (2, 381), (3, 279), (4, 247), (5, 256), \\\\ (6, 215), (7, 250), (8, 238), (9, 212), (10, 246)\\end{tabular} \\\\ \\hline\ncolon &\n \\multicolumn{1}{c|}{2000} &\n \\multicolumn{1}{c|}{43} &\n (-1, 28), (1, 15) &\n \\multicolumn{1}{c|}{2000} &\n \\multicolumn{1}{c|}{19} &\n (-1, 12), (1, 7) \\\\ \\hline\nGLIOMA &\n \\multicolumn{1}{c|}{4433} &\n \\multicolumn{1}{c|}{35} &\n (1, 10), (2, 5), (3, 10), (4, 10) &\n \\multicolumn{1}{c|}{4433} &\n \\multicolumn{1}{c|}{15} &\n (1, 4), (2, 2), (3, 4), (4, 5) \\\\ \\hline\nlung &\n \\multicolumn{1}{c|}{3312} &\n \\multicolumn{1}{c|}{142} &\n (1, 97), (2, 12), (3, 15), (4, 14), (5, 4) &\n \\multicolumn{1}{c|}{3312} &\n \\multicolumn{1}{c|}{61} &\n (1, 42), (2, 5), (3, 6), (4, 6), (5, 2) \\\\ \\hline\nlung\\_small &\n \\multicolumn{1}{c|}{325} &\n \\multicolumn{1}{c|}{51} &\n \\begin{tabular}[c]{@{}c@{}}(1, 4), (2, 3), (3, 4), (4, 11), \\\\ (5, 5), (6, 9), (7, 15)\\end{tabular} &\n \\multicolumn{1}{c|}{325} &\n \\multicolumn{1}{c|}{22} &\n \\begin{tabular}[c]{@{}c@{}}(1, 2), (2, 2), (3, 1), (4, 5), \\\\ (5, 2), (6, 4), (7, 6)\\end{tabular} \\\\ \\hline\nlymphoma &\n \\multicolumn{1}{c|}{4026} &\n \\multicolumn{1}{c|}{67} &\n \\begin{tabular}[c]{@{}c@{}}(1, 32), (2, 7), (3, 6), (4, 8), (5, 4), \\\\ (6, 4), (7, 3), (8, 1), (9, 2)\\end{tabular} &\n \\multicolumn{1}{c|}{4026} &\n \\multicolumn{1}{c|}{29} &\n \\begin{tabular}[c]{@{}c@{}}(1, 14), (2, 3), (3, 3), (4, 3), (5, 2), 
\\\\ (6, 2), (7, 1), (8, 1)\\end{tabular} \\\\ \\hline\nGISETTE &\n \\multicolumn{1}{c|}{4955} &\n \\multicolumn{1}{c|}{6000} &\n (-1.0, 3000), (1.0, 3000) &\n \\multicolumn{1}{c|}{4955} &\n \\multicolumn{1}{c|}{1000} &\n (-1.0, 500), (1.0, 500) \\\\ \\hline\nIsolet &\n \\multicolumn{1}{c|}{617} &\n \\multicolumn{1}{c|}{1092} &\n \\begin{tabular}[c]{@{}c@{}}(1, 42), (2, 42), (3, 42), (4, 42), (5, 42), \\\\ (6, 42), (7, 42), (8, 42), (9, 42), (10, 42), \\\\ (11, 42), (12, 42), (13, 42), (14, 42), (15, 42), \\\\ (16, 42), (17, 42), (18, 42), (19, 42), (20, 42), \\\\ (21, 42), (22, 42), (23, 42), (24, 42), (25, 42), (26, 42)\\end{tabular} &\n \\multicolumn{1}{c|}{617} &\n \\multicolumn{1}{c|}{468} &\n \\begin{tabular}[c]{@{}c@{}}(1, 18), (2, 18), (3, 18), (4, 18), (5, 18), \\\\ (6, 18), (7, 18), (8, 18), (9, 18), (10, 18), \\\\ (11, 18), (12, 18), (13, 18), (14, 18), (15, 18), \\\\ (16, 18), (17, 18), (18, 18), (19, 18), (20, 18), \\\\ (21, 18), (22, 18), (23, 18), (24, 18), (25, 18), (26, 18)\\end{tabular} \\\\ \\hline\nMADELON &\n \\multicolumn{1}{c|}{500} &\n \\multicolumn{1}{c|}{2000} &\n (-1.0, 1000), (1.0, 1000) &\n \\multicolumn{1}{c|}{500} &\n \\multicolumn{1}{c|}{600} &\n (-1.0, 300), (1.0, 300) \\\\ \\hline\n\\end{tabular}%\n}\n\\end{table}\n\nThanks to the stratification technique discussed earlier in this Section, the same conclusions on labels balancing can be applied both to the training and the test datasets.\n\n\\subsection{Information Filtering Networks} \\label{sec:Information_Filtering_Networks}\nInformation Filtering Networks (IFNs) \\cite{mantegna1999hierarchical, aste2005complex, barfuss2016parsimonious, massara2017network, tumminello2005tool} are an effective tool to represent and model dependency structures among variables characterising complex systems. They have been extensively applied to a vast range of systems, including finance \\cite{procacci2022portfolio, briola2022anatomy, wang2022dynamic, seabrook2022quantifying, wang2022sparsification}, psychology \\cite{christensen2018network, christensen2018networktoolbox}, medicine \\cite{hutter2019multi, danoff2021genetic} and biology \\cite{song2008correlation, song2012hierarchical}. Sometimes IFNs are also referred to as Correlation Networks (CNs). Such an association is, however, inaccurate. Indeed, the two methodologies slightly differ, with CNs being normally obtained by applying a threshold that retains only the largest correlations among variables of the system, while IFNs being instead constructed imposing topological constraints (e.g. being a tree or a planar graph) and optimising specific global properties (e.g. the likelihood) \\cite{aste2022topological}. Both methodologies end in the determination of a sparse adjacency matrix, \\textbf{\\textit{A}}, representing relations among variables in the system with the fundamental difference that the former approach generates a disconnected graph, while the latter guarantees the connectedness. Based on the nature of the relationships to be modelled (i.e. linear, non-linear), one can choose different metrics to build the adjacency matrix \\textbf{\\textit{A}}. In most cases, \\textbf{\\textit{A}} is built on an arbitrary similarity matrix $\\hat{\\textbf{C}}$, which often corresponds to a correlation matrix. From a network science perspective, $\\hat{\\textbf{C}}$ can be considered as a fully connected graph where each variable of the system is represented as a node, and each pair of variables is joined by a weighted and undirected edge representing their similarity. 
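\n\nAs a minimal illustration of this starting point, the similarity matrix $\\hat{\\textbf{C}}$ can be obtained in a few lines of Python. The sketch below (where \\texttt{X} is assumed to be a samples-by-features array and the absolute Pearson correlation is used as one possible choice of similarity) reads the result as the adjacency matrix of a complete weighted graph:\n\n\\begin{verbatim}\nimport numpy as np\nimport networkx as nx\n\n# X: (n_samples, n_features) data matrix, one column per feature.\nX = np.random.rand(100, 12)\n\n# Pairwise Pearson correlations between features.\nC_hat = np.abs(np.corrcoef(X, rowvar=False))\nnp.fill_diagonal(C_hat, 0.0)  # drop self-similarities\n\n# Fully connected, weighted, undirected graph over the features.\nG_full = nx.from_numpy_array(C_hat)\n\\end{verbatim}\n\n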
Historically, the main IFNs were the Minimum Spanning Tree (MST) \\cite{papadimitriou1998combinatorial, mantegna1999hierarchical} and the Planar Maximally Filtered Graph (PMFG) \\cite{tumminello2005tool}. MSTs are a class of networks connecting all the vertices without forming cycles (i.e. closed paths of at least three nodes) while retaining the network's representation as simple as possible (i.e. representing only relevant relations among variables characterising the system under analysis) \\cite{briola2022dependency}. The MST is typically built with a greedy procedure (e.g. Kruskal's algorithm) that sorts all edges' weights (i.e. similarities) in descending order and iteratively adds the largest-weight edge that does not introduce a cycle. The resulting network has $n-1$ edges and retains only the most significant connections, ensuring, at the same time, that the connectedness property is fulfilled \\cite{christensen2018network}. For a complete pedagogical exposition of the determination of the MST, the interested reader is referred to the works by \\cite{mantegna1999hierarchical, briola2022dependency}. Despite being a powerful method to capture meaningful relationships in network structures describing complex systems, the MST presents some aspects that can be unsatisfactory. Paradoxically, the main limit is its tree structure (i.e. it cannot contain cycles), which does not allow it to represent direct relationships among more than two variables showing strong similarity. The introduction of the Planar Maximally Filtered Graph (PMFG) \\cite{tumminello2005tool} overcomes such a shortcoming. Similarly to the MST, the PMFG algorithm sorts edge weights in descending order, incrementally adding the largest ones while imposing planarity \\cite{christensen2018network}. A graph is planar if it can be embedded in a sphere without edges crossing. Thanks to this, the same powerful filtering properties of the MST are maintained, and, at the same time, extra links, cycles and cliques (i.e. complete subgraphs) are added in a controlled manner. The resulting network has $3n-6$ edges and is composed of three- and four-node cliques. A nested hierarchy emerges from these cliques \\cite{song2011nested}: dimensionality is reduced in a deterministic manner while local information and the global hierarchical structure of the original network are retained. For a complete pedagogical exposition of the determination of the PMFG and a detailed analysis of its properties, the interested reader is referred to the works by \\cite{aste2005complex, tumminello2005tool, briola2022dependency}. The PMFG represents a substantial step forward compared to the MST. However, it still presents two limits: (i) it is computationally costly and (ii) it is a non-chordal graph. A graph is said to be chordal if all cycles made of four or more vertices have a chord, reducing the cycle to a set of triangles. A chord is defined as an edge that is not part of the cycle but connects two vertices of the cycle itself. The advantage of chordal graphs is that they fulfill the independence assumptions of Markov (i.e., bidirectional or undirected relations) and Bayesian (i.e., directional relations) networks \\cite{koller2009probabilistic, christensen2018network}.
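\n\nBoth planarity and chordality can be verified directly on a candidate adjacency matrix. A minimal NetworkX sketch (assuming \\texttt{A} is a NumPy 0\/1 adjacency matrix, such as the one returned by the TMFG construction sketched below) is the following; it is an illustrative check, not part of the original pipeline:\n\n\\begin{verbatim}\nimport numpy as np\nimport networkx as nx\n\ndef check_ifn_properties(A):\n    # A: sparse 0\/1 adjacency matrix of a filtered network.\n    G = nx.from_numpy_array(np.asarray(A))\n    is_planar, _ = nx.check_planarity(G)   # PMFG and TMFG should be planar\n    return {'planar': is_planar,\n            'chordal': nx.is_chordal(G),   # TMFG should also be chordal\n            'edges': G.number_of_edges()}  # expected 3n - 6 for PMFG\/TMFG\n\\end{verbatim}\n\n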
The Triangulated Maximally Filtered Graph (TMFG) \\cite{massara2017network} has been explicitly designed to be a chordal graph while retaining the strengths of PMFG.\n\n\\begin{algorithm}\n \\caption{TMFG built on the similarity matrix $\\hat{\\textbf{C}}$ to maximise the likelihood of features' relevance.}\\label{alg:TMFG_Algo}\n \\textbf{Input} Similarity matrix $\\hat{\\textbf{C}}$ $\\in \\mathbb{R}^{n, n}$ from a set of observations $\\{x_{1, 1}, \\dots, x_{s, 1}\\}, \\{x_{1, 2}, \\dots, x_{s, 2}\\} \\dots \\{x_{1, n}, \\dots, x_{s, n}\\}$.\\\\\n \\textbf{Output} Sparse adjacency matrix \\textbf{\\textit{A}} describing the TMFG. \n\n \\vspace{0.05cm}\n \n \\begin{algorithmic}[1]\n\n \\Function{MaximumGain}{$\\hat{\\textbf{C}}$, $\\mathcal{V}$, $t$}\\funclabel{alg:MaximumGain}\n \\State Initialize a vector of zeros $g \\in \\mathbb{R}^{1 \\times n}$;\n \\For{$j \\in 0, \\dots n$}\n \\For{$v \\notin \\mathcal{V}$}\n \\State $\\hat{\\textbf{C}}_{v,j} = 0$\n \\EndFor\n \n \\State $g = g \\oplus \\hat{\\textbf{C}}_{v,j}$\n \\EndFor\n \n \\State \\Return $\\max{\\{g\\}}$.\n \\EndFunction\n\n \\Statex\n\n \\State Initialize four empty sets: $\\mathcal{C}$ (cliques), $\\mathcal{T}$ (triangles), $\\mathcal{S}$ (separators) and $\\mathcal{V}$ (vertices);\n \\State Initialize an adjacency matrix $\\textbf{\\textit{A}} \\in \\mathbb{R}^{n, n}$ with all zeros;\n \\State $\\mathcal{C}_1 \\leftarrow$ tetrahedron, $\\{v_1, v_2, v_3, v_4\\}$, obtained choosing the $4$ entries of $\\hat{\\textbf{C}}$ maximising the similarity among features;\n \\State $\\mathcal{T} \\leftarrow$ the four triangular faces in $\\mathcal{C}_1: \\{v_1, v_2, v_3\\}, \\{v_1, v_2, v_4\\}, \\{v_1, v_3, v_4\\}, \\{v_2, v_3, v_4\\}$;\n \\State $\\mathcal{V} \\leftarrow$ the remaining $n-4$ vertices not in $\\mathcal{C}_1$;\n\n \\While{$\\mathcal{V}$ is not empty}\n \\State Find the combination of $\\{v_a, v_b, v_c\\} \\in \\mathcal{T}$ (i.e. $t$) and $v_d \\in \\mathcal{V}$ which maximises \\Call{MaximumGain}{$\\hat{\\textbf{C}}$, $\\mathcal{V}$, $t$}; \n \\State \\parbox[t]{\\dimexpr\\linewidth-\\algorithmicindent}{%\n \/* $\\{v_a, v_b, v_c, v_d\\}$ is a new 4-clique $\\mathcal{C}$, $\\{v_a, v_b, v_c\\}$ becomes a separator $\\mathcal{S}$, three new triangular faces, $\\{v_a, v_b, v_d\\}$, $\\{v_a, v_c, v_d\\}$ and $\\{v_b, v_c, v_d\\}$ are created *\/.\n }\n \\State Remove $v_d$ from $\\mathcal{V}$;\n \\State Remove $\\{v_a, v_b, v_c\\}$ from $\\mathcal{T}$;\n \\State Add $\\{v_a, v_b, v_d\\}$, $\\{v_a, v_c, v_d\\}$ and $\\{v_b, v_c, v_d\\}$ to $\\mathcal{T}$;\n \\EndWhile\n\n \\State For each pair of nodes $i, j$ in $\\mathcal{C}$, set $\\textbf{\\textit{A}}_{i,j} = 1$;\n \\State \\Return $\\textbf{\\textit{A}}$.\n \\end{algorithmic}\n\\end{algorithm}\n\nThe building process of the TMFG (see Algorithm \\ref{alg:TMFG_Algo}) is based on a simple topological move that preserves planarity: it adds one node to the centre of a three-node clique by using a score function that maximises the sum of the weights of the three edges connecting the existing vertices. This addition transforms three-node cliques (i.e. triangles) into four-node cliques (i.e. tetrahedrons) characterised by a chord that is not part of the cycle but connects two nodes of the cycle, forming two triangles and generating a chordal network \\cite{christensen2018network}. Also in this case, the resulting network has $3n-6$ edges and is composed of three- and four-node cliques.
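\n\nFor readers who prefer code to pseudocode, the following compact Python sketch mirrors this greedy construction at a high level (the choice of the seed tetrahedron and all names are illustrative simplifications, not the reference implementation):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef tmfg_adjacency(C_hat):\n    # Greedy TMFG sketch: start from a seed tetrahedron and repeatedly\n    # insert the vertex with the largest total similarity to one of the\n    # current triangular faces (the topological move described above).\n    n = C_hat.shape[0]\n    W = np.array(C_hat, dtype=float)\n    np.fill_diagonal(W, 0.0)\n    seed = list(np.argsort(-W.sum(axis=1))[:4])   # simplified seed choice\n    A = np.zeros((n, n), dtype=int)\n    for i in seed:\n        for j in seed:\n            if i != j:\n                A[i, j] = 1\n    faces = [tuple(sorted(f)) for f in\n             [(seed[0], seed[1], seed[2]), (seed[0], seed[1], seed[3]),\n              (seed[0], seed[2], seed[3]), (seed[1], seed[2], seed[3])]]\n    remaining = [v for v in range(n) if v not in seed]\n    while remaining:\n        gains = np.array([[W[v, list(f)].sum() for f in faces]\n                          for v in remaining])\n        vi, fi = np.unravel_index(np.argmax(gains), gains.shape)\n        v, f = remaining.pop(vi), faces.pop(fi)\n        for u in f:\n            A[v, u] = A[u, v] = 1\n        faces += [tuple(sorted((f[0], f[1], v))),\n                  tuple(sorted((f[0], f[2], v))),\n                  tuple(sorted((f[1], f[2], v)))]\n    return A\n\\end{verbatim}\n\nOn the resulting adjacency matrix, the planarity and chordality checks sketched earlier should both pass, and the number of edges should equal $3n-6$.\n\n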
The TMFG has two main advantages compared to the PMFG: (i) it can be used to generate sparse probabilistic models as a form of topological regularization \\cite{aste2022topological} and (ii) it is computationally efficient. On the other hand, the two main limitations of chordal networks are that (i) they may add unnecessary edges to satisfy the property of chordality and (ii) their building cost can vary based on the chosen optimization function. For a complete pedagogical exposition of the determination of the TMFG and a detailed analysis of its properties, the interested reader is referred to the works by \\cite{massara2017network, briola2022dependency}.\n\n\\subsection{Topological Feature Selection} \n\\label{sec:Topologically_Constrained_Feature_Selection}\nThe Topological Feature Selection (TFS) algorithm is a graph-based filter method that performs feature selection in an unsupervised manner. Given a set of features $F = \\{f_1, \\dots, f_n\\}$, where $n \\gg 1$ is the cardinality, we build the adjacency matrix \\textbf{\\textit{A}} of the corresponding TMFG based on one of the following three metrics: (i) the Pearson's estimator of the correlation coefficient, (ii) the Spearman's rank correlation coefficient and (iii) the Energy coefficient (i.e. the weighted combination of two pairwise measures described later in this Section). Depending on the metric's formulation, it is possible to capture different kinds of interactions among covariates (e.g. linear or non-linear interactions).\n\nThe Pearson's estimator of the correlation coefficient for two covariates $f_i$ and $f_j$ is defined as:\n\n\\begin{equation}\n r_{f_i, f_j} = \\frac{1}{S}\\frac{\\sum_{s=1}^S ({f_{i,s}} - \\hat{\\mu}_{f_i}) ({f_{j,s}} - \\hat{\\mu}_{f_j})}{\\hat{\\sigma}_{f_i}\\hat{\\sigma}_{f_j}}\n\\end{equation}\n\nwhere $S$ is the sample size, $f_{i,s}$ and $f_{j, s}$ are two sample points indexed with $s$, $\\hat{\\mu}$ is the sample mean and $\\hat{\\sigma}$ is the sample standard deviation. By definition, $r_{f_i, f_j}$ takes values between $-1$ (meaning that the two features are perfectly, linearly anti-correlated) and $+1$ (meaning that the two features are perfectly, linearly correlated). When $r_{f_i, f_j} = 0$, the two covariates are said to be uncorrelated. The Pearson's estimator of the correlation coefficient heavily depends on the distribution of the underlying data and may be influenced by outliers. In addition to this, it only captures linear dependence among variables, restricting its applicability to real-world problems where non-linear interactions are often relevant \\cite{shirokikh2013computational}. \n\nThe Spearman's rank correlation coefficient is based on the concept of `variables ranking'. Ranking a variable means mapping its realizations to an integer number that describes their positions in an ordered set. 
Considering a variable with cardinality $|s|$, this means assigning 1 to the realization with the highest value and $|s|$ to the realization with the lowest value.\n\nThe Spearman's rank correlation coefficient for two covariates $f_i$ and $f_j$ is defined as the Pearson's correlation between the ranks of the variables:\n\n\\begin{equation}\n r_{s_{f_i, f_j}} = \\frac{1}{S}\\frac{\\sum_{s=1}^S (R_{f_{i, s}} - \\hat{\\mu}_{R_{f_i}})(R_{f_{j, s}} - \\hat{\\mu}_{R_{f_j}})}{\\hat{\\sigma}_{R_{f_i}}\\hat{\\sigma}_{R_{f_j}}}\n\\end{equation}\n\nwhere $S$ is the sample size, $R_{f_{i, s}}$ and $R_{f_{j, s}}$ are the ranks of the two sample points indexed with $s$, $\\hat{\\mu}$ is the sample mean for $R_{f_i}$ and $R_{f_j}$ and $\\hat{\\sigma}$ is the sample standard deviation for $R_{f_i}$ and $R_{f_j}$. If there are no repeated data samples, a perfect Spearman's rank correlation (i.e. $r_{s_{f_i, f_j}} = 1$ or $r_{s_{f_i, f_j}} = -1$) occurs when each of the features is a perfect monotone function of the other. Spearman's rank correlation is distribution-free and allows one to capture monotonic, but not necessarily linear, relationships among variables \\cite{shirokikh2013computational}. \n\nThe Energy coefficient is a metric introduced by \\cite{roffo2015infinite} and, in this paper, it is used as the primary benchmark for comparison between our method and the current state-of-the-art. It is a weighted combination of two different pairwise measures defined as follows:\n\n\\begin{equation} \\label{eq:Energy_Coefficient}\n \\phi_{f_i, f_j} = \\alpha E_{f_i,f_j} + (1-\\alpha) \\rho_{f_i,f_j}\n\\end{equation}\n\nwhere $E_{f_i,f_j} = \\max(\\hat{\\sigma}_{f_i}, \\hat{\\sigma}_{f_j})$, with $\\hat{\\sigma}$ being the sample standard deviation computed on features $f_i$ and $f_j$ normalized to the range $[0,1]$, $\\rho_{f_i,f_j} = 1 - |r_{s_{f_i, f_j}}|$ and $\\alpha \\in [0, 1]$ is a weighting factor. $\\phi_{f_i, f_j} \\in [0, 1]$ compares the distributions of the two features $f_i$ and $f_j$, taking into account both their maximal dispersion (i.e. standard deviation) and their level of uncorrelation. Computing $\\phi_{f_i, f_j}$ for all the features in $F$ in a pairwise manner, one can define an $n \\times n$ symmetric matrix completely characterized by $n(n-1)\/2$ coefficients. For simplicity, we refer to this matrix as $\\hat{\\textbf{C}}$ too.\n\nOnce one of the above mentioned metrics is chosen and $\\hat{\\textbf{C}}$ is computed, TFS applies the standard TMFG algorithm defined in Section \\ref{sec:Information_Filtering_Networks} on the corresponding fully connected graph, creating a sparse chordal network which is able to retain useful relationships among features while pruning the weakest ones. The last step toward the selection of the most relevant features is the choice of the right nodes inside the TMFG. In this sense, multiple approaches of increasing complexity can be formulated. In this paper, which is a foundational one, we study the relative position of the nodes in the network by computing their degree centrality. Degree centrality is the simplest and least computationally intensive measure of centrality. Typically, the other centrality measures are strongly correlated with it \\cite{valente2008correlated, lee2006correlations}.\n\nGiven the sparse adjacency matrix \\textbf{\\textit{A}} representing the TMFG, the degree centrality of a node $v$ is denoted as $\\deg(v)$ and represents the number of neighbours (i.e. 
how many edges a node has) of $v$:\n\n\\begin{equation}\n \\deg(v) = \\sum_{w=1}^n A_{f_v,f_w}\n\\end{equation}\n\nwhere $n$ is the cardinality of $F$ and $f_v$ and $f_w$ are two features $\\in F$. Despite its simplicity, degree centrality can be very illuminating and can be considered a crude measure of whether a node is influential or not in the TMFG. Once $\\deg(v)$ is obtained $\\forall v \\in \\textrm{TMFG}$, we rank these values in descending order and take the top $k$ most central nodes, where $k$ is the cardinality of the feature subset we want to retain. \n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[scale=0.24]{images\/TMFG_FS_Pipeline.png}\n \\caption{The TFS algorithm's working schema consists of three steps: (i) construction of a fully connected graph based on a similarity metric (e.g. $r_{f_i, f_j}$ or $r_{s_{f_i, f_j}}$ or $\\phi_{f_i, f_j}$), (ii) TMFG-based filtering of the weakest relationships between nodes and (iii) selection of relevant features based on a centrality measure (e.g. degree centrality).}\n \\label{fig:TMFG_FS_Pipeline}\n\\end{figure}\n\n\\subsection{Benchmark method: Infinite Feature Selection ($\\textrm{Inf-FS}_U$)} \nTo prove the effectiveness of the proposed methodology, we test the TFS algorithm against one of the state-of-the-art counterparts in unsupervised, graph-based filter feature selection techniques, i.e. Infinite Feature Selection ($\\textrm{Inf-FS}_U$) \\footnote{The Python implementation of the $\\textrm{Inf-FS}_U$ algorithm used in this paper can be reached at \\url{https:\/\/github.com\/fullyz\/Infinite-Feature-Selection}.} \\cite{roffo2015infinite}. $\\textrm{Inf-FS}_U$ represents features as nodes of a graph and relationships among them as weighted edges. Weights are computed as per Equation \\ref{eq:Energy_Coefficient}. Each path of a given length over the network is seen as a potential set of relevant features. Therefore, varying paths and letting them tend to an infinite number permits the investigation of the importance of each possible subset of features. Based on this, assigning a score to each feature and ranking the features in descending order allows us to perform feature selection effectively. It is worth noting that $\\textrm{Inf-FS}_U$ has a computational complexity equal to $\\approx \\mathcal{O}(n^3)$. In contrast, TFS has a computational complexity equal to $\\approx \\mathcal{O}(n^2)$, with $n$ being the number of features.\n\n\\section{Experiments} \\label{sec:Experiments}\nFigure \\ref{fig:Train_Test_Pipeline} reports a pictorial representation of the training\/validation\/test pipeline adopted to evaluate and compare the performances of the TFS and $\\textrm{Inf-FS}_U$ algorithms. \n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=0.25]{images\/Train_Test_Pipeline.png}\n \\caption{Pictorial representation of the training\/validation\/test pipeline. We distinguish $3$ main stages: (i) $\\textrm{Inf-FS}_U$- or TFS-based feature selection; (ii) features' standardisation (see Appendix \\ref{Appendix_B}) and classifier's training; (iii) model's evaluation on out-of-sample data (i.e. validation or test set). Hyper-parameter optimisation is performed using a parallel grid search approach and the raw dataset's label distribution is kept intact by adopting a stratified \\textit{k}-fold cross validation approach.}\n \\label{fig:Train_Test_Pipeline}\n\\end{figure}\n\nModel hyper-parameters (see Table \\ref{tab:Hyperparameters_Table}) are optimised by adopting a parallel grid search approach.
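\n\nFor concreteness, the search spaces described in the next paragraphs (and summarised in Table \\ref{tab:Hyperparameters_Table}) can be written as plain Python dictionaries and fed to any grid-search routine; the key names are illustrative only:\n\n\\begin{verbatim}\nimport numpy as np\n\n# 0.1, 0.2, ..., 1.0\nalphas = [round(a, 1) for a in np.arange(0.1, 1.01, 0.1)]\n\ninf_fs_grid = {'alpha': alphas,  # weighting factor of the Energy coefficient\n               'theta': alphas}  # regularisation factor\n\ntfs_grid = {'metric': ['pearson', 'spearman', 'energy'],\n            'square': [True, False],   # never applied to the Energy metric\n            'alpha': alphas}           # only used with the Energy metric\n\\end{verbatim}\n\n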
For $\\textrm{Inf-FS}_U$, the first hyper-parameter to be optimised is $\\alpha$, which can take values between $0$ and $1$. To tune this parameter, we use a range of equally spaced realisations between $0.1$ and $1.0$, all at a distance of $0.1$. The second hyper-parameter to be optimised is $\\theta$, a regularisation factor which, in the original paper \\cite{roffo2015infinite}, has a fixed value equal to $0.9$. Here, instead, we tune this parameter in the same way as $\\alpha$. In the case of TFS, the first hyper-parameter to be optimised is the metric to be used in the building process of the initial fully connected graph (see Figure \\ref{fig:TMFG_FS_Pipeline}). As reported in Section \\ref{sec:Topologically_Constrained_Feature_Selection}, we test three different metrics: (i) the Pearson's estimator of the correlation coefficient, (ii) the Spearman's rank correlation coefficient and (iii) the Energy coefficient. The second hyper-parameter to be tuned is a Boolean flag which determines whether the coefficients mentioned above are used in squared form. It is worth mentioning that, if the Energy metric is chosen, the corresponding $\\hat{\\textbf{C}}$ is never squared ($\\hat{\\textbf{C}}$ already contains only positive values). The last hyper-parameter to be optimised is $\\alpha$, and it should be considered only when the Energy coefficient is chosen as a metric. The meaning of this hyper-parameter is the same as that of its homologue in the $\\textrm{Inf-FS}_U$ model. All the models are finally evaluated on feature subsets with cardinalities $\\in \\{10, 50, 100, 150, 200\\}$.\n\n\\begin{table}[H]\n\\centering\n\\caption{Model-dependent hyper-parameter search spaces. In the case of the TFS algorithm, the $\\dagger$ symbol indicates that, if the Energy metric is chosen, the corresponding $\\hat{\\textbf{C}}$ is never squared ($\\hat{\\textbf{C}}$ already contains only positive values). The $\\ddagger$ symbol, on the other hand, indicates that the $\\alpha$ parameter should be considered only when the Energy coefficient is chosen as a metric.}\n\\label{tab:Hyperparameters_Table}\n\\begin{tabular}{@{}cc@{}}\n\\toprule\n\\textbf{Model} & \\textbf{Hyper-parameters} \\\\ \\midrule\n$\\textrm{Inf-FS}_U$ & \\begin{tabular}[c]{@{}c@{}}$\\alpha$: {[}0.1: 0.1: 1.0{]}\\\\ $\\theta$: {[}0.1: 0.1: 1.0{]}\\end{tabular} \\\\ \\midrule\nTFS & \\begin{tabular}[c]{@{}c@{}}metric: {[}Pearson, Spearman, Energy{]}\\\\ square$^\\dagger$: {[}True, False{]}\\\\ $\\alpha^\\ddagger$: {[}0.1: 0.1: 1.0{]}\\end{tabular} \\\\ \\bottomrule\n\\end{tabular}\n\\end{table}\n\nFor each hyper-parameter combination, a stratified $k$-fold cross-validation with $k=3$ is performed on the training set. The value of $k$ is chosen to take into account the labels' distributions reported in Table \\ref{tab:Datasets_Description}. Results' reproducibility and a fair comparison between models are guaranteed by fixing the random seed for each step of the training\/validation\/test pipeline. \n\nThe meaningfulness of each subset of features chosen by the two algorithms is evaluated based on the classification performance achieved by three classification algorithms: (i) Linear Support Vector Classifier (LinearSVM); (ii) \\textit{k}-Nearest Neighbors Classifier (KNN); (iii) Decision Tree Classifier (Decision Tree).
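\n\nBefore describing the three classifiers in detail, the following minimal scikit-learn sketch shows how one candidate feature subset can be scored under this protocol (function and variable names are illustrative; scikit-learn's \\texttt{balanced\\_accuracy\\_score}, i.e. the macro-averaged recall, is used here as a stand-in for the BA score defined later in this Section):\n\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.svm import LinearSVC\nfrom sklearn.metrics import balanced_accuracy_score\n\ndef score_subset(X, y, selected, seed=0):\n    # 3-fold stratified CV of a classifier trained on the selected features.\n    skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=seed)\n    scores = []\n    for tr, va in skf.split(X, y):\n        scaler = StandardScaler().fit(X[tr][:, selected])\n        clf = LinearSVC().fit(scaler.transform(X[tr][:, selected]), y[tr])\n        y_hat = clf.predict(scaler.transform(X[va][:, selected]))\n        scores.append(balanced_accuracy_score(y[va], y_hat))\n    return float(np.mean(scores))\n\\end{verbatim}\n\n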
LinearSVM is a sparse, kernel-based method designed to convert problems that are not linearly separable in the original low-dimensional space into linearly separable problems in a higher-dimensional space, thereby achieving classification \\cite{han2022data}. KNN is a lazy learning algorithm, which classifies test instances by evaluating their distance from the nearest \\textit{k} training samples stored in an \\textit{n}-dimensional space (where \\textit{n} is the number of the dataset's covariates) \\cite{han2022data}. Finally, Decision Tree is a flowchart-like tree structure computed on training instances which classifies test samples by tracing a path from the root to a leaf node holding the class prediction \\cite{han2022data}. The inherently different nature of the three classifiers prevents the results from being biased towards either of the two feature selection approaches. More details about the chosen classifiers and their implementations are reported in Appendix \\ref{Appendix_A}.\n\nResults obtained in the current paper are evaluated using three different performance metrics: (i) the Balanced Accuracy score (BA) \\cite{mosley2013balanced, kelleher2020fundamentals}; (ii) the F1 score (F1); (iii) the Matthews Correlation Coefficient (MCC) \\cite{baldi2000assessing, gorodkin2004comparing, jurman2012comparison}. We use the BA score for the hyper-parameter optimisation process and as the reference metric to present results in Section \\ref{sec:Results}. The BA score for the multi-class case is defined as:\n\n\\begin{equation} \\label{eq:BA_multi}\n BA = \\frac{1}{|Z|} \\sum_{z \\in Z} \\frac{1}{2} \\left( \\frac{\\textrm{TP}_z}{\\textrm{TP}_z+\\textrm{FN}_z} + \\frac{\\textrm{TN}_z}{\\textrm{TN}_z+\\textrm{FP}_z} \\right).\n\\end{equation}\n\n$\\textrm{TP}$ is the number of outcomes where the model correctly classifies a sample as belonging to a positive class (or detects an event of interest), when in fact it does belong to that class (or the event is present). $\\textrm{TN}$ is the number of outcomes where the model correctly classifies a sample as belonging to a negative class (or fails to detect an event of interest), when in fact it does not belong to that class (or the event is not present). $\\textrm{FP}$ is the number of outcomes where the model incorrectly classifies a sample as belonging to a positive class (or detects an event of interest), when in fact it does not belong to that class (or the event is not present). $\\textrm{FN}$ is the number of outcomes where the model incorrectly classifies a sample as belonging to a negative class (or fails to detect an event of interest), when in fact it belongs to a positive class (or the event of interest is present). $|Z|$ indicates the cardinality of the set of different classes.\n\nGiven Equation \\ref{eq:BA_multi}, it is easy for the interested reader to reconstruct the formulation for the binary case. General formulations for the F1 score and the MCC are reported in Appendix \\ref{Appendix_C} together with an extended version of the results described later in this Section.
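\n\nAs a complement to Equation \\ref{eq:BA_multi}, a minimal NumPy sketch of its one-vs-rest evaluation is reported below (function and variable names are illustrative; library implementations such as scikit-learn's macro-averaged recall follow a closely related but not identical convention):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef balanced_accuracy(y_true, y_pred, classes):\n    # For each class z: sensitivity TP\/(TP+FN) and specificity TN\/(TN+FP),\n    # averaged over the two and then over the classes.\n    per_class = []\n    for z in classes:\n        tp = np.sum((y_pred == z) & (y_true == z))\n        fn = np.sum((y_pred != z) & (y_true == z))\n        tn = np.sum((y_pred != z) & (y_true != z))\n        fp = np.sum((y_pred == z) & (y_true != z))\n        per_class.append(0.5 * (tp \/ (tp + fn) + tn \/ (tn + fp)))\n    return float(np.mean(per_class))  # assumes every class occurs in y_true\n\\end{verbatim}\n\n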
For each model and for each subset cardinality, the hyper-parameter configuration which maximises the BA score while minimising the number of parameters is applied to the test datasets. In order to assess the robustness of the results, we test whether the results achieved by the two feature selection approaches are statistically different. To this end, we use an improved version of the classic $5\\times2$ cv paired \\textit{t}-test\\footnote{The Python implementation of the original $5\\times2$ cv paired \\textit{t} test can be reached at \\url{http:\/\/rasbt.github.io\/mlxtend\/user_guide\/evaluate\/paired_ttest_5x2c}.} \\cite{dietterich1998approximate}. The test is constructed as follows. Given two classifiers $A$ and $B$ and a dataset $D$, $D$ is first randomly split into two balanced subsets $D_1$, $D_2$ (one for training and one for test). Both $A$ and $B$ are then estimated on $D_1$ and evaluated on $D_2$, obtaining the performance measures $a_1, b_1$. The roles of the datasets are then switched by estimating $A$ and $B$ on $D_2$ and evaluating on $D_1$, which results in the further performance measures $a_2, b_2$. The random division of $D$ is performed a total of $5$ times, obtaining the matched performance evaluations $\\{a_1, b_1\\}, \\{a_2, b_2\\} \\dots, \\{a_{10}, b_{10}\\}$. The test statistic \\textit{t} is then computed as follows:\n\n\\begin{equation}\n t = \\frac{\\sqrt{10} \\bar d}{\\hat{\\sigma}},\n\\end{equation}\n\nwhere $d_h=a_h-b_h$, for $h=1,\\dots, 10$, is the difference between the matched performance metrics of the two classifiers, $\\bar d = \\frac{1}{10}\\sum_{h=1}^{10} d_h$ and $\\hat{\\sigma}^2 = \\frac{1}{10} \\sum_{h=1}^{10}(d_h - \\bar d)^2$. \\textit{t} follows a \\textit{t}-distribution with $9$ degrees of freedom and the null hypothesis is that \\textit{the two classifiers $A$ and $B$ are not statistically different in their performances}. Starting from this basic formulation of the $5\\times2$ cv paired \\textit{t}-test, we simply increased the number of iterations, making it a $15\\times2$ cv paired \\textit{t}-test, in order to increase the statistical robustness of the achieved results. Also in this case, results' reproducibility is guaranteed through a strict control of random seeds.\n\n\\section{Results} \\label{sec:Results}\n\nFor both $\\textrm{Inf-FS}_U$ and TFS, for each one of the three considered classifiers and for each feature subset cardinality $\\in \\{10, 50, 100, 150, 200\\}$, the results obtained by running the hyper-parameter optimisation pipeline described in Section \\ref{sec:Experiments} are reported in Appendix \\ref{Appendix_B}.\n\nTables \\ref{tab:local_bests_LinearSVM}, \\ref{tab:local_bests_KNN} and \\ref{tab:local_bests_Decision_Tree} report out-of-sample Balanced Accuracy scores obtained using subset cardinality-dependent optimal hyper-parameter configurations for the LinearSVM, KNN and Decision Tree classifiers, respectively. For each dataset, we highlight in bold the best achieved result. If one classifier performs equally across multiple subsets' cardinalities, the winning configuration is the one which minimises the subset's cardinality itself. If one classifier performs equally under the two feature selection schema, the winning feature selection approach is the one which minimises the computational complexity (i.e. TFS). \n\nTo compare $\\textrm{Inf-FS}_U$ and TFS, we consider three different measures: (i) the number of times a classifier achieves optimal results under each feature selection schema; (ii) the cross-datasets average balanced accuracy score; (iii) the cross-datasets average maximum drawdown ratio (i.e. the difference between the highest and the lowest achieved result).\n\n\\begin{table}[H]\n\\centering\n\\caption{Subset size-dependent, out-of-sample Balanced Accuracy scores using a LinearSVM classifier. 
For each dataset, we boldly highlight the combination between feature selection schema and classifier producing the best out-of-sample result. For each subset size, we report, in the last row, the number of times a feature selection approach outperforms the other across datasets.}\n\\label{tab:local_bests_LinearSVM}\n\\scalebox{0.85}{\n\\begin{tabular}{c|cccccccccc}\n\\hline\n &\n \\multicolumn{10}{c}{\\textbf{LinearSVM}} \\\\ \\hline\n &\n \\multicolumn{2}{c|}{\\textbf{10}} &\n \\multicolumn{2}{c|}{\\textbf{50}} &\n \\multicolumn{2}{c|}{\\textbf{100}} &\n \\multicolumn{2}{c|}{\\textbf{150}} &\n \\multicolumn{2}{c}{\\textbf{200}} \\\\ \\cline{2-11} \n &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\multicolumn{1}{c|}{\\textbf{TFS}} &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\multicolumn{1}{c|}{\\textbf{TFS}} &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\multicolumn{1}{c|}{\\textbf{TFS}} &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\multicolumn{1}{c|}{\\textbf{TFS}} &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\textbf{TFS} \\\\ \\hline\nPCMAC &\n 0.52 &\n \\multicolumn{1}{c|}{0.50} &\n 0.57 &\n \\multicolumn{1}{c|}{0.67} &\n 0.59 &\n \\multicolumn{1}{c|}{0.70} &\n 0.61 &\n \\multicolumn{1}{c|}{\\textbf{0.71}} &\n 0.62 &\n 0.69 \\\\\nRELATHE &\n 0.47 &\n \\multicolumn{1}{c|}{0.49} &\n 0.43 &\n \\multicolumn{1}{c|}{\\textbf{0.53}} &\n 0.51 &\n \\multicolumn{1}{c|}{0.53} &\n 0.44 &\n \\multicolumn{1}{c|}{0.49} &\n 0.53 &\n 0.53 \\\\\nCOIL20 &\n 0.52 &\n \\multicolumn{1}{c|}{0.63} &\n 0.77 &\n \\multicolumn{1}{c|}{0.90} &\n 0.84 &\n \\multicolumn{1}{c|}{0.92} &\n 0.90 &\n \\multicolumn{1}{c|}{0.94} &\n 0.94 &\n \\textbf{0.96} \\\\\nORL &\n 0.40 &\n \\multicolumn{1}{c|}{0.44} &\n 0.63 &\n \\multicolumn{1}{c|}{0.88} &\n 0.72 &\n \\multicolumn{1}{c|}{0.89} &\n 0.86 &\n \\multicolumn{1}{c|}{0.93} &\n 0.84 &\n \\textbf{0.94} \\\\\nwarpAR10P &\n 0.33 &\n \\multicolumn{1}{c|}{0.44} &\n 0.56 &\n \\multicolumn{1}{c|}{0.78} &\n 0.72 &\n \\multicolumn{1}{c|}{0.85} &\n 0.70 &\n \\multicolumn{1}{c|}{\\textbf{0.95}} &\n 0.75 &\n 0.85 \\\\\nwarpPIE10P &\n 0.85 &\n \\multicolumn{1}{c|}{0.89} &\n 0.95 &\n \\multicolumn{1}{c|}{\\textbf{1.00}} &\n 0.98 &\n \\multicolumn{1}{c|}{1.00} &\n 1.00 &\n \\multicolumn{1}{c|}{1.00} &\n 1.00 &\n 1.00 \\\\\nYale &\n 0.14 &\n \\multicolumn{1}{c|}{0.33} &\n 0.25 &\n \\multicolumn{1}{c|}{0.50} &\n 0.39 &\n \\multicolumn{1}{c|}{0.67} &\n 0.37 &\n \\multicolumn{1}{c|}{0.69} &\n 0.53 &\n \\textbf{0.70} \\\\\nUSPS &\n 0.72 &\n \\multicolumn{1}{c|}{0.65} &\n 0.90 &\n \\multicolumn{1}{c|}{0.90} &\n 0.91 &\n \\multicolumn{1}{c|}{0.92} &\n 0.92 &\n \\multicolumn{1}{c|}{\\textbf{0.93}} &\n 0.92 &\n 0.93 \\\\\ncolon &\n 0.70 &\n \\multicolumn{1}{c|}{0.69} &\n 0.69 &\n \\multicolumn{1}{c|}{0.66} &\n \\textbf{0.92} &\n \\multicolumn{1}{c|}{0.82} &\n 0.85 &\n \\multicolumn{1}{c|}{0.74} &\n 0.85 &\n 0.88 \\\\\nGLIOMA &\n \\textbf{0.61} &\n \\multicolumn{1}{c|}{0.25} &\n 0.30 &\n \\multicolumn{1}{c|}{0.30} &\n 0.30 &\n \\multicolumn{1}{c|}{0.38} &\n 0.60 &\n \\multicolumn{1}{c|}{0.41} &\n 0.59 &\n 0.25 \\\\\nlung &\n 0.39 &\n \\multicolumn{1}{c|}{0.47} &\n 0.67 &\n \\multicolumn{1}{c|}{0.89} &\n 0.81 &\n \\multicolumn{1}{c|}{\\textbf{0.95}} &\n 0.71 &\n \\multicolumn{1}{c|}{0.87} &\n 0.90 &\n 0.81 \\\\\nlung\\_small &\n 0.49 &\n \\multicolumn{1}{c|}{0.57} &\n 0.76 &\n \\multicolumn{1}{c|}{0.79} &\n 0.82 &\n \\multicolumn{1}{c|}{0.68} &\n 0.79 &\n \\multicolumn{1}{c|}{0.75} &\n 0.82 &\n \\textbf{0.93} \\\\\nlymphoma &\n 0.22 &\n \\multicolumn{1}{c|}{0.50} &\n 0.58 &\n \\multicolumn{1}{c|}{0.96} &\n 0.78 &\n 
\\multicolumn{1}{c|}{0.87} &\n 0.90 &\n \\multicolumn{1}{c|}{0.82} &\n 0.81 &\n \\textbf{0.98} \\\\\nGISETTE &\n 0.50 &\n \\multicolumn{1}{c|}{0.49} &\n 0.48 &\n \\multicolumn{1}{c|}{0.47} &\n 0.51 &\n \\multicolumn{1}{c|}{\\textbf{0.52}} &\n 0.47 &\n \\multicolumn{1}{c|}{0.50} &\n 0.49 &\n 0.50 \\\\\nIsolet &\n 0.32 &\n \\multicolumn{1}{c|}{0.51} &\n 0.74 &\n \\multicolumn{1}{c|}{0.78} &\n 0.81 &\n \\multicolumn{1}{c|}{0.82} &\n 0.88 &\n \\multicolumn{1}{c|}{0.83} &\n 0.89 &\n \\textbf{0.89} \\\\\nMADELON &\n 0.59 &\n \\multicolumn{1}{c|}{\\textbf{0.59}} &\n 0.58 &\n \\multicolumn{1}{c|}{0.56} &\n 0.55 &\n \\multicolumn{1}{c|}{0.57} &\n 0.54 &\n \\multicolumn{1}{c|}{0.57} &\n 0.57 &\n 0.57 \\\\ \\hline\\hline\n \\# bests&\n 5 &\n \\multicolumn{1}{c|}{11} &\n 3 &\n \\multicolumn{1}{c|}{13} &\n 2 &\n \\multicolumn{1}{c|}{14} &\n 5 &\n \\multicolumn{1}{c|}{11} &\n 2 &\n 14 \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\n\nLooking at Table \\ref{tab:local_bests_LinearSVM}, we notice that TFS combined with LinearSVM classifier produces higher Balanced Accuracy scores in $14$ out of $16$ datasets (i.e. $87.5\\%$ of cases), while $\\textrm{Inf-FS}_U$ combined with the same classifier has a higher Balanced Accuracy scores only in $2$ out of $16$ datasets (i.e. $12.5\\%$ of cases). Considering only the scenarios where TFS is the winning feature selection schema, we notice that in $1$ case, the optimal cardinality is equal to $10$, in $2$ cases, the optimal cardinality is equal to $50$ and $100$, in three cases the optimal cardinality is equal to $150$, and in $6$ cases the optimal cardinality is equal to $200$. When the $\\textrm{Inf-FS}_U$ is used as feature selection algorithm and LinearSVM as classifier, the cross-datasets average Balanced Accuracy score grows from a value equal to $0.49$ at cardinality $10$ to a value of $0.75$ at cardinality $200$ with an increase of $26\\%$. When the TFS is used as feature selection algorithm and LinearSVM as classifier, cross-datasets average Balanced Accuracy score grows from a value equal to $0.52$ at cardinality $10$ to a value of $0.78$ at cardinality $200$ with an increase of $26\\%$. Finally, we notice that when the $\\textrm{Inf-FS}_U$ is used as feature selection algorithm and LinearSVM as classifier, the average maximum drawdown ratio is equal to $31\\%$. Using TFS as feature selection algorithm, instead, the average maximum drawdown ratio is equal to $28\\%$.\n\n\\begin{table}[H]\n\\centering\n\\caption{Subset size-dependent, out-of-sample balanced accuracy scores using a KNN classifier. For each dataset, we boldly highlight the combination between feature selection schema and classifier producing the best out-of-sample result. 
For each subset size, we report, in the last row, the number of times a feature selection approach outperforms the other across datasets.}\n\\label{tab:local_bests_KNN}\n\\scalebox{0.85}{\n\\begin{tabular}{c|cccccccccc}\n\\hline\n &\n \\multicolumn{10}{c}{\\textbf{KNN}} \\\\ \\hline\n &\n \\multicolumn{2}{c|}{\\textbf{10}} &\n \\multicolumn{2}{c|}{\\textbf{50}} &\n \\multicolumn{2}{c|}{\\textbf{100}} &\n \\multicolumn{2}{c|}{\\textbf{150}} &\n \\multicolumn{2}{c}{\\textbf{200}} \\\\ \\cline{2-11} \n &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\multicolumn{1}{c|}{\\textbf{TFS}} &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\multicolumn{1}{c|}{\\textbf{TFS}} &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\multicolumn{1}{c|}{\\textbf{TFS}} &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\multicolumn{1}{c|}{\\textbf{TFS}} &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\textbf{TFS} \\\\ \\hline\nPCMAC &\n 0.52 &\n \\multicolumn{1}{c|}{0.53} &\n 0.57 &\n \\multicolumn{1}{c|}{0.61} &\n 0.61 &\n \\multicolumn{1}{c|}{0.62} &\n 0.59 &\n \\multicolumn{1}{c|}{\\textbf{0.63}} &\n 0.60 &\n 0.62 \\\\\nRELATHE &\n 0.46 &\n \\multicolumn{1}{c|}{0.46} &\n 0.50 &\n \\multicolumn{1}{c|}{\\textbf{0.57}} &\n 0.48 &\n \\multicolumn{1}{c|}{0.49} &\n 0.46 &\n \\multicolumn{1}{c|}{0.45} &\n 0.48 &\n 0.48 \\\\\nCOIL20 &\n 0.70 &\n \\multicolumn{1}{c|}{0.82} &\n 0.86 &\n \\multicolumn{1}{c|}{0.93} &\n 0.93 &\n \\multicolumn{1}{c|}{0.93} &\n 0.96 &\n \\multicolumn{1}{c|}{0.94} &\n \\textbf{0.97} &\n 0.93 \\\\\nORL &\n 0.38 &\n \\multicolumn{1}{c|}{0.52} &\n 0.52 &\n \\multicolumn{1}{c|}{\\textbf{0.77}} &\n 0.62 &\n \\multicolumn{1}{c|}{0.70} &\n 0.73 &\n \\multicolumn{1}{c|}{0.71} &\n 0.72 &\n 0.77 \\\\\nwarpAR10P &\n 0.36 &\n \\multicolumn{1}{c|}{0.30} &\n 0.36 &\n \\multicolumn{1}{c|}{\\textbf{0.51}} &\n 0.43 &\n \\multicolumn{1}{c|}{0.46} &\n 0.32 &\n \\multicolumn{1}{c|}{0.38} &\n 0.42 &\n 0.48 \\\\\nwarpPIE10P &\n 0.83 &\n \\multicolumn{1}{c|}{0.72} &\n 0.86 &\n \\multicolumn{1}{c|}{0.91} &\n 0.92 &\n \\multicolumn{1}{c|}{\\textbf{0.97}} &\n 0.89 &\n \\multicolumn{1}{c|}{0.92} &\n 0.89 &\n 0.95 \\\\\nYale &\n 0.14 &\n \\multicolumn{1}{c|}{0.42} &\n 0.28 &\n \\multicolumn{1}{c|}{0.41} &\n 0.26 &\n \\multicolumn{1}{c|}{0.42} &\n 0.43 &\n \\multicolumn{1}{c|}{0.38} &\n 0.35 &\n \\textbf{0.49} \\\\\nUSPS &\n 0.78 &\n \\multicolumn{1}{c|}{0.77} &\n 0.94 &\n \\multicolumn{1}{c|}{0.94} &\n \\textbf{0.96} &\n \\multicolumn{1}{c|}{0.95} &\n 0.96 &\n \\multicolumn{1}{c|}{0.95} &\n 0.95 &\n 0.95 \\\\\ncolon &\n 0.77 &\n \\multicolumn{1}{c|}{0.82} &\n 0.89 &\n \\multicolumn{1}{c|}{0.77} &\n 0.89 &\n \\multicolumn{1}{c|}{0.85} &\n \\textbf{1.00} &\n \\multicolumn{1}{c|}{0.70} &\n 0.85 &\n 0.77 \\\\\nGLIOMA &\n 0.24 &\n \\multicolumn{1}{c|}{0.10} &\n 0.40 &\n \\multicolumn{1}{c|}{0.40} &\n 0.42 &\n \\multicolumn{1}{c|}{\\textbf{0.62}} &\n 0.52 &\n \\multicolumn{1}{c|}{0.62} &\n 0.52 &\n 0.62 \\\\\nlung &\n 0.33 &\n \\multicolumn{1}{c|}{0.51} &\n 0.65 &\n \\multicolumn{1}{c|}{\\textbf{0.79}} &\n 0.71 &\n \\multicolumn{1}{c|}{0.65} &\n 0.72 &\n \\multicolumn{1}{c|}{0.68} &\n 0.78 &\n 0.79 \\\\\nlung\\_small &\n 0.57 &\n \\multicolumn{1}{c|}{0.61} &\n 0.80 &\n \\multicolumn{1}{c|}{0.87} &\n 0.82 &\n \\multicolumn{1}{c|}{0.90} &\n \\textbf{0.93} &\n \\multicolumn{1}{c|}{0.90} &\n 0.90 &\n 0.76 \\\\\nlymphoma &\n 0.44 &\n \\multicolumn{1}{c|}{0.50} &\n 0.60 &\n \\multicolumn{1}{c|}{0.74} &\n 0.69 &\n \\multicolumn{1}{c|}{0.69} &\n \\textbf{0.76} &\n \\multicolumn{1}{c|}{0.75} &\n 0.69 &\n 0.74 \\\\\nGISETTE &\n 0.49 &\n \\multicolumn{1}{c|}{0.51} &\n 0.52 &\n 
\\multicolumn{1}{c|}{\\textbf{0.54}} &\n 0.50 &\n \\multicolumn{1}{c|}{0.51} &\n 0.50 &\n \\multicolumn{1}{c|}{0.53} &\n 0.50 &\n 0.49 \\\\\nIsolet &\n 0.32 &\n \\multicolumn{1}{c|}{0.49} &\n 0.72 &\n \\multicolumn{1}{c|}{0.73} &\n 0.78 &\n \\multicolumn{1}{c|}{0.78} &\n \\textbf{0.83} &\n \\multicolumn{1}{c|}{0.81} &\n 0.82 &\n 0.83 \\\\\nMADELON &\n 0.61 &\n \\multicolumn{1}{c|}{\\textbf{0.78}} &\n 0.58 &\n \\multicolumn{1}{c|}{0.74} &\n 0.64 &\n \\multicolumn{1}{c|}{0.66} &\n 0.62 &\n \\multicolumn{1}{c|}{0.64} &\n 0.57 &\n 0.63 \\\\ \\hline\\hline\n \\# bests &\n 4 &\n \\multicolumn{1}{c|}{12} &\n 1 &\n \\multicolumn{1}{c|}{15} &\n 3 &\n \\multicolumn{1}{c|}{13} &\n 10 &\n \\multicolumn{1}{c|}{6} &\n 4 &\n 12 \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\n\nLooking at Table \\ref{tab:local_bests_KNN}, we notice that TFS combined with KNN classifier produces higher Balanced Accuracy scores in $10$ out of $16$ datasets (i.e. $62.5\\%$ of cases), while $\\textrm{Inf-FS}_U$ combined with the same classifier has higher Balanced Accuracy scores in $6$ out of $16$ datasets (i.e. $37.5\\%$ of cases). Considering only the scenarios where TFS is the winning feature selection schema, we notice that in $1$ case, the optimal cardinality is equal to $10$, $150$ and $200$, in $5$ cases, the optimal cardinality is equal to $50$, in two cases the optimal cardinality is equal to $100$. When the $\\textrm{Inf-FS}_U$ is used as feature selection algorithm and KNN as classifier, the cross-datasets average Balanced Accuracy score grows from a value equal to $0.46$ at cardinality $10$ to a value of $0.69$ at cardinality $200$ with an increase of $23\\%$. When TFS is used as feature selection algorithm and KNN as classifier, the cross-datasets average Balanced Accuracy score grows from a value equal to $0.55$ at cardinality $10$ to a value of $0.71$ at cardinality $200$ with an increase of $16\\%$. Finally, we notice that when $\\textrm{Inf-FS}_U$ is used as feature selection algorithm and KNN as classifier, the average maximum drawdown ratio is equal to $23\\%$. Using TFS as feature selection algorithm, instead, the average maximum drawdown ratio is equal to $21\\%$.\n\n\\begin{table}[H]\n\\centering\n\\caption{Subset size-dependent, out-of-sample balanced accuracy scores using a Decision Tree classifier. For each dataset, we boldly highlight the combination between feature selection schema and classifier producing the best out-of-sample result. 
For each subset size, we report, in the last row, the number of times a feature selection approach outperforms the other across datasets.}\n\\label{tab:local_bests_Decision_Tree}\n\\scalebox{0.85}{\n\\begin{tabular}{c|cccccccccc}\n\\hline\n &\n \\multicolumn{10}{c}{\\textbf{Decision Tree}} \\\\ \\hline\n &\n \\multicolumn{2}{c|}{\\textbf{10}} &\n \\multicolumn{2}{c|}{\\textbf{50}} &\n \\multicolumn{2}{c|}{\\textbf{100}} &\n \\multicolumn{2}{c|}{\\textbf{150}} &\n \\multicolumn{2}{c}{\\textbf{200}} \\\\ \\cline{2-11} \n &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\multicolumn{1}{c|}{\\textbf{TFS}} &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\multicolumn{1}{c|}{\\textbf{TFS}} &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\multicolumn{1}{c|}{\\textbf{TFS}} &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\multicolumn{1}{c|}{\\textbf{TFS}} &\n \\textbf{$\\textrm{Inf-FS}_U$} &\n \\textbf{TFS} \\\\ \\hline\nPCMAC &\n 0.53 &\n \\multicolumn{1}{c|}{0.50} &\n 0.56 &\n \\multicolumn{1}{c|}{0.69} &\n 0.58 &\n \\multicolumn{1}{c|}{0.71} &\n 0.57 &\n \\multicolumn{1}{c|}{0.68} &\n 0.60 &\n \\textbf{0.73} \\\\\nRELATHE &\n 0.49 &\n \\multicolumn{1}{c|}{0.50} &\n 0.51 &\n \\multicolumn{1}{c|}{\\textbf{0.51}} &\n 0.49 &\n \\multicolumn{1}{c|}{0.42} &\n 0.48 &\n \\multicolumn{1}{c|}{0.51} &\n 0.48 &\n 0.51 \\\\\nCOIL20 &\n 0.68 &\n \\multicolumn{1}{c|}{0.81} &\n 0.83 &\n \\multicolumn{1}{c|}{0.89} &\n 0.85 &\n \\multicolumn{1}{c|}{\\textbf{0.90}} &\n 0.89 &\n \\multicolumn{1}{c|}{0.90} &\n 0.90 &\n 0.90 \\\\\nORL &\n 0.36 &\n \\multicolumn{1}{c|}{0.39} &\n 0.42 &\n \\multicolumn{1}{c|}{0.48} &\n 0.49 &\n \\multicolumn{1}{c|}{0.54} &\n 0.59 &\n \\multicolumn{1}{c|}{0.61} &\n 0.49 &\n \\textbf{0.62} \\\\\nwarpAR10P &\n 0.37 &\n \\multicolumn{1}{c|}{0.33} &\n 0.46 &\n \\multicolumn{1}{c|}{0.59} &\n 0.55 &\n \\multicolumn{1}{c|}{0.59} &\n 0.41 &\n \\multicolumn{1}{c|}{0.64} &\n 0.68 &\n \\textbf{0.80} \\\\\nwarpPIE10P &\n 0.74 &\n \\multicolumn{1}{c|}{0.74} &\n 0.80 &\n \\multicolumn{1}{c|}{0.73} &\n 0.77 &\n \\multicolumn{1}{c|}{0.85} &\n 0.76 &\n \\multicolumn{1}{c|}{\\textbf{0.87}} &\n 0.76 &\n 0.81 \\\\\nYale &\n 0.17 &\n \\multicolumn{1}{c|}{0.31} &\n 0.26 &\n \\multicolumn{1}{c|}{0.34} &\n 0.39 &\n \\multicolumn{1}{c|}{0.42} &\n 0.50 &\n \\multicolumn{1}{c|}{0.43} &\n 0.43 &\n \\textbf{0.52} \\\\\nUSPS &\n 0.73 &\n \\multicolumn{1}{c|}{0.72} &\n 0.84 &\n \\multicolumn{1}{c|}{0.85} &\n 0.85 &\n \\multicolumn{1}{c|}{0.86} &\n 0.86 &\n \\multicolumn{1}{c|}{0.86} &\n \\textbf{0.88} &\n 0.87 \\\\\ncolon &\n 0.61 &\n \\multicolumn{1}{c|}{0.64} &\n 0.82 &\n \\multicolumn{1}{c|}{0.74} &\n 0.76 &\n \\multicolumn{1}{c|}{\\textbf{0.92}} &\n 0.89 &\n \\multicolumn{1}{c|}{0.83} &\n 0.85 &\n 0.83 \\\\\nGLIOMA &\n 0.34 &\n \\multicolumn{1}{c|}{\\textbf{0.61}} &\n 0.36 &\n \\multicolumn{1}{c|}{0.31} &\n 0.35 &\n \\multicolumn{1}{c|}{0.44} &\n 0.38 &\n \\multicolumn{1}{c|}{0.31} &\n 0.31 &\n 0.44 \\\\\nlung &\n 0.44 &\n \\multicolumn{1}{c|}{0.70} &\n 0.75 &\n \\multicolumn{1}{c|}{0.71} &\n 0.87 &\n \\multicolumn{1}{c|}{0.70} &\n \\textbf{0.90} &\n \\multicolumn{1}{c|}{0.73} &\n 0.71 &\n 0.79 \\\\\nlung\\_small &\n 0.46 &\n \\multicolumn{1}{c|}{0.42} &\n 0.58 &\n \\multicolumn{1}{c|}{\\textbf{0.63}} &\n 0.47 &\n \\multicolumn{1}{c|}{0.57} &\n 0.52 &\n \\multicolumn{1}{c|}{0.63} &\n 0.52 &\n 0.49 \\\\\nlymphoma &\n 0.20 &\n \\multicolumn{1}{c|}{\\textbf{0.69}} &\n 0.45 &\n \\multicolumn{1}{c|}{0.55} &\n 0.45 &\n \\multicolumn{1}{c|}{0.44} &\n 0.63 &\n \\multicolumn{1}{c|}{0.60} &\n 0.51 &\n 0.62 \\\\\nGISETTE &\n 0.52 &\n 
\\multicolumn{1}{c|}{0.50} &\n 0.44 &\n \\multicolumn{1}{c|}{\\textbf{0.52}} &\n 0.48 &\n \\multicolumn{1}{c|}{0.47} &\n 0.50 &\n \\multicolumn{1}{c|}{0.49} &\n 0.49 &\n 0.48 \\\\\nIsolet &\n 0.27 &\n \\multicolumn{1}{c|}{0.43} &\n 0.69 &\n \\multicolumn{1}{c|}{0.67} &\n 0.73 &\n \\multicolumn{1}{c|}{0.71} &\n 0.74 &\n \\multicolumn{1}{c|}{0.73} &\n \\textbf{0.78} &\n 0.73 \\\\\nMADELON &\n 0.58 &\n \\multicolumn{1}{c|}{0.66} &\n 0.70 &\n \\multicolumn{1}{c|}{\\textbf{0.81}} &\n 0.78 &\n \\multicolumn{1}{c|}{0.79} &\n 0.75 &\n \\multicolumn{1}{c|}{0.77} &\n 0.74 &\n 0.77 \\\\ \\hline\\hline\n \\# bests &\n 6 &\n \\multicolumn{1}{c|}{10} &\n 5 &\n \\multicolumn{1}{c|}{11} &\n 5 &\n \\multicolumn{1}{c|}{11} &\n 7 &\n \\multicolumn{1}{c|}{9} &\n 5 &\n 11 \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\n\nLooking at Table \\ref{tab:local_bests_Decision_Tree}, we notice that TFS combined with Decision Tree classifier produces higher Balanced Accuracy scores in $13$ out of $16$ datasets (i.e. $81.25\\%$ of cases), while $\\textrm{Inf-FS}_U$ combined with the same classifier produces higher Balanced Accuracy scores in $3$ out of $16$ datasets (i.e. $18.75\\%$ of cases). Considering only the scenarios where TFS is the winning feature selection schema, we notice that in $2$ cases, the optimal cardinality is equal to $10$ and $100$, in $4$ cases, the optimal cardinality is equal to $50$ and $200$, in only one case the optimal cardinality is equal to $150$. When the $\\textrm{Inf-FS}_U$ is used as feature selection algorithm and Decision Tree as classifier, the cross-datasets average Balanced Accuracy score grows from a value equal to $0.47$ at cardinality $10$, to a value of $0.63$ at cardinality $200$ with an increase of $16\\%$. When TFS is used as feature selection algorithm and Decision Tree as classifier, the cross-datasets average Balanced Accuracy score grows from a value equals to $0.56$ at cardinality $10$, to a value of $0.68$ at cardinality $200$ with an increase of $12\\%$. Finally, we notice that, when $\\textrm{Inf-FS}_U$ is used as feature selection algorithm and Decision Tree as classifier, the average maximum drawdown ratio is equal to $22\\%$. Using TFS as feature selection algorithm, instead, the average maximum drawdown ratio is equal to $20\\%$.\n\nConsidering the cross-datasets average Balanced Accuracy scores discussed earlier in this Section, we conclude that, independently from the chosen classifier, TFS always allows to select more informative features, guaranteeing higher classification performances. Both the cross-datasets average Balanced Accuracy score percentage increase and the cross-datasets average maximum drawdown ratio are lower when TFS is chosen as feature selection schema, further certifying an higher stability and an ability to choose higher quality features.\n\n\\begin{table}[h]\n\\centering\n\\caption{Out-of-sample Balanced Accuracy scores obtained by LinearSVM, KNN and Decision Tree classifier on the raw datasets (i.e. the datasets containing all the original features). We boldly highlight the entries where a TFS improves the classifier's performance. We do not highlight the entries where a classifier performs better on the raw dataset (i.e. where feature selection algorithms $\\textrm{Inf-FS}_U$ and TFS are not effective). The $^*$ symbol highlights the scenarios where the optimal feature selection schema is $\\textrm{Inf-FS}_U$ but TFS, in combination with the classifier, still outperforms the classifier on the raw dataset. 
The $^\\dagger$ symbol highlights the scenarios where the optimal feature selection schema is $\\textrm{Inf-FS}_U$ while TFS, in combination with the classifier, cannot outperform the classifier on the raw dataset.}\n\\label{tab:Full_dataset_results}\n\\scalebox{0.85}{%\n\\begin{tabular}{c|c|c|c}\n\\hline\n & \\textbf{LinearSVM} & \\textbf{KNN} & \\textbf{Decision Tree} \\\\ \\hline\nPCMAC & 0.83 & 0.71 & 0.90 \\\\\nRELATHE & 0.84 & 0.77 & 0.85 \\\\\nCOIL20 & 0.97 & \\;0.96$^\\dagger$ & \\textbf{0.87} \\\\\nORL & 0.97 & 0.82 & 0.68 \\\\\nwarpAR10P & 1.00 & 0.53 & \\textbf{0.66} \\\\\nwarpPIE10P & \\textbf{1.00} & \\textbf{0.84} & \\textbf{0.84} \\\\\nYale & 0.82 & \\textbf{0.46} & \\textbf{0.44} \\\\\nUSPS & \\textbf{0.93} & \\;0.95$^*$ & \\;0.87$^*$ \\\\\ncolon & \\;0.80$^*$ & \\;0.73$^*$ & \\textbf{0.80} \\\\\nGLIOMA & \\;0.56$^\\dagger$ & 0.78 & \\textbf{0.44} \\\\\nlung & \\textbf{0.93} & \\textbf{0.76} & \\;0.66$^*$ \\\\\nlung\\_small & \\textbf{0.79} & \\;0.76$^*$ & \\textbf{0.63} \\\\\nlymphoma & \\textbf{0.93} & \\;0.69$^*$ & \\textbf{0.61} \\\\\nGISETTE & 0.98 & 0.96 & 0.92 \\\\\nIsolet & 0.93 & 0.84 & 0.79 \\\\\nMADELON & \\textbf{0.58} & \\textbf{0.51} & \\textbf{0.74} \\\\ \\hline\n\\end{tabular}%\n}\n\\end{table}\n\nThe power of TFS can be further investigated by comparing the results obtained by applying the three classification algorithms on the raw datasets (i.e. the datasets containing all the original features) against the best ones obtained by applying the same classifiers combined with the novel feature selection technique presented in this paper. Results are reported in Table \\ref{tab:Full_dataset_results}. When LinearSVM is chosen as a classifier, feature selection turns out to be beneficial on $9$ out of $16$ datasets (i.e. $56.25\\%$ of cases). In $7$ cases, TFS is the optimal feature selection approach; in the case of the \"colon\" dataset, even if $\\textrm{Inf-FS}_U$ is the optimal feature selection approach, the results achieved using TFS are still better than the ones obtained on the raw dataset; in the case of the \"GLIOMA\" dataset, $\\textrm{Inf-FS}_U$ is the optimal feature selection approach and the results achieved using TFS are lower than the ones obtained on the raw dataset. When KNN is chosen as a classifier, feature selection is beneficial on $9$ out of $16$ datasets (i.e. $56.25\\%$ of cases). In $4$ cases, TFS is the optimal feature selection approach; in the case of the \"USPS\", \"colon\", \"lung\\_small\" and \"lymphoma\" datasets, even if $\\textrm{Inf-FS}_U$ is the optimal feature selection approach, the results achieved using TFS are still better than the ones obtained on the raw dataset; in the case of the \"COIL20\" dataset, $\\textrm{Inf-FS}_U$ is the optimal feature selection approach and the results achieved using TFS are lower than the ones obtained on the raw dataset. Finally, when Decision Tree is chosen as a classifier, feature selection is beneficial on $11$ out of $16$ datasets (i.e. $68.75\\%$ of cases). In $9$ cases, TFS is the optimal feature selection approach; in the case of the \"USPS\" and \"colon\" datasets, even if $\\textrm{Inf-FS}_U$ is the optimal feature selection approach, the results achieved using TFS are still better than the ones obtained on the raw dataset. \\\\\nLooking at the results reported above, we notice that the application domains where TFS is more compelling are the ones where the tabular format is the natural data format (i.e. biological data and artificial data). This finding is not unexpected. 
Application domains such as text, face images and spoken letters data would require a more complex data pre-processing pipeline (e.g. encoding) and specific deep learning-based classification algorithms (e.g. convolutional and recurrent neural networks). A deeper analysis of this aspect is left for the upcoming TFS-centered research.\n\n\\begin{table}[H]\n\\centering\n\\caption{Comparison between $\\textrm{Inf-FS}_U$' and TFS' \\textit{p}-values obtained performing a $15 \\times 2$ cv paired \\textit{t}-test. \\textit{p}-values $> 0.1$ are reported in their numerical form, \\textit{p}-values $\\leq 0.1$ and $> 0.05$ are marked as $^{*}$, \\textit{p}-values $\\leq 0.05$ and $> 0.01$ are marked as $^{**}$, \\textit{p}-values $\\leq 0.01$ and $> 0.001$ are marked as $^{***}$ and \\textit{p}-values $\\leq 0.001$ are marked as $^{****}$. $(\\vee)$ and $(\\wedge)$ symbols indicate that, when the two feature selection schemes combined with the same classifier, produce statistically robust different results, TFS performs, respectively, better or worse than $\\textrm{Inf-FS}_U$ according to results reported in Tables \\ref{tab:local_bests_LinearSVM}, \\ref{tab:local_bests_KNN} and \\ref{tab:local_bests_Decision_Tree}.}\n\\label{tab:StatTests}\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{c|ccccc|ccccc|ccccc}\n\\hline\n & \\multicolumn{5}{c|}{\\textbf{LinearSVM}} & \\multicolumn{5}{c|}{\\textbf{KNN}} & \\multicolumn{5}{c}{\\textbf{DT}} \\\\ \\cline{2-16} \n &\n \\textbf{10} &\n \\textbf{50} &\n \\textbf{100} &\n \\textbf{150} &\n \\textbf{200} &\n \\textbf{10} &\n \\textbf{50} &\n \\textbf{100} &\n \\textbf{150} &\n \\textbf{200} &\n \\textbf{10} &\n \\textbf{50} &\n \\textbf{100} &\n \\textbf{150} &\n \\textbf{200} \\\\ \\hline\nPCMAC & *$^{(\\wedge)}$ & 0.50 & 0.79 & 0.80 & 0.49 & 0.19 & 0.37 & 0.90 & 0.81 & 0.94 & *$^{(\\wedge)}$ & 0.69 & 0.93 & 0.75 & 0.73 \\\\\nRELATHE & 0.87 & **$^{(\\vee)}$ & ****$^{(\\vee)}$ & ***$^{(\\vee)}$ & **$^{(\\vee)}$ & 0.61 & 0.17 & ****$^{(\\vee)}$ & *$^{(\\wedge)}$ & 0.14 & 0.90 & ***$^{(\\vee)}$ & 0.58 & *$^{(\\vee)}$ & 0.47 \\\\\nCOIL20 & ***$^{(\\vee)}$ & ****$^{(\\vee)}$ & ****$^{(\\vee)}$ & ***$^{(\\vee)}$ & **$^{(\\vee)}$ & **$^{(\\vee)}$ & **$^{(\\vee)}$ & **$^{(\\vee)}$ & 0.39 & 0.40 & *$^{(\\vee)}$ & **$^{(\\vee)}$ & ***$^{(\\vee)}$ & ***$^{(\\vee)}$ & **$^{(\\vee)}$ \\\\\nORL & 0.14 & ***$^{(\\vee)}$ & 0.22 & **$^{(\\vee)}$ & ***$^{(\\vee)}$ & **$^{(\\vee)}$ & ***$^{(\\vee)}$ & 0.72 & 0.44 & 0.51 & 0.88 & 0.49 & 0.67 & **$^{(\\vee)}$ & 0.93 \\\\\nwarpAR10P & 0.23 & ***$^{(\\vee)}$ & 0.38 & 0.23 & 0.20 & **$^{(\\wedge)}$ & **$^{(\\vee)}$ & **$^{(\\vee)}$ & 0.48 & 0.69 & 0.77 & ***$^{(\\vee)}$ & 0.32 & 0.16 & 0.39 \\\\\nwarpPIE10P & 0.98 & 0.52 & 0.67 & 0.39 & 1.00 & 0.50 & 0.93 & 0.17 & 1.00 & 0.51 & 0.41 & 1.00 & 0.65 & 0.39 & 0.68 \\\\\nYale & ***$^{(\\vee)}$ & **$^{(\\vee)}$ & 0.17 & 0.19 & 0.64 & 0.81 & **$^{(\\vee)}$ & 0.28 & 0.78 & 0.10 & 0.60 & **$^{(\\vee)}$ & 0.90 & 0.45 & 0.23 \\\\\nUSPS & 0.89 & 0.38 & 0.29 & ***$^{(\\vee)}$ & *$^{(\\vee)}$ & 0.69 & 0.19 & ***$^{(\\wedge)}$ & *$^{(\\wedge)}$ & 0.26 & *$^{(\\wedge)}$ & 0.74 & 0.71 & 0.62 & 0.95 \\\\\ncolon & 0.15 & 0.42 & 0.19 & 0.37 & 0.53 & 0.92 & 0.48 & 0.63 & 1.00 & 0.18 & 0.89 & 0.35 & 0.15 & 0.59 & 0.50 \\\\\nGLIOMA & 0.54 & 0.63 & 0.45 & *$^{(\\wedge)}$ & 0.74 & 0.68 & 0.73 & 0.25 & 0.65 & 0.91 & **$^{(\\vee)}$ & 0.52 & *$^{(\\vee)}$ & *$^{(\\wedge)}$ & *$^{(\\vee)}$ \\\\\nlung & ***$^{(\\vee)}$ & 0.12 & ***$^{(\\vee)}$ & **$^{(\\vee)}$ & 0.19 & ***$^{(\\vee)}$ & 0.46 & 0.61 & 0.84 & 
0.58 & ***$^{(\\vee)}$ & 0.11 & 0.28 & 0.47 & 0.86 \\\\\nlung\\_small & 0.72 & 0.83 & **$^{(\\wedge)}$ & 0.54 & 0.32 & 0.38 & 0.77 & 0.32 & 0.82 & 0.72 & 0.73 & 0.99 & 0.53 & *$^{(\\vee)}$ & 0.63 \\\\\nlymphoma & 0.56 & 0.26 & 0.37 & 0.74 & 0.92 & 1.00 & 0.69 & 0.41 & 0.25 & 0.86 & 0.35 & 0.58 & 0.32 & 0.45 & 0.85 \\\\\nGISETTE & ****$^{(\\wedge)}$ & ***$^{(\\wedge)}$ & ***$^{(\\vee)}$ & ***$^{(\\vee)}$ & **$^{(\\vee)}$ & ****$^{(\\vee)}$ & ****$^{(\\vee)}$ & ****$^{(\\vee)}$ & ****$^{(\\vee)}$ & ****$^{(\\wedge)}$ & ****$^{(\\wedge)}$ & ***$^{(\\vee)}$ & ***$^{(\\wedge)}$ & **$^{(\\wedge)}$ & *$^{(\\vee)}$ \\\\\nIsolet & ****$^{(\\vee)}$ & ***$^{(\\vee)}$ & 0.15 & 0.15 & **$^{(\\vee)}$ & ****$^{(\\vee)}$ & 1.00 & 0.28 & 0.38 & 0.88 & **$^{(\\vee)}$ & 0.81 & 0.61 & *$^{(\\wedge)}$ & 0.39 \\\\\nMADELON & *$^{(\\vee)}$ & 0.75 & 0.93 & 0.19 & 0.15 & 0.91 & 0.69 & 0.22 & 0.65 & 0.49 & 0.32 & 0.65 & 0.57 & 0.69 & 0.59 \\\\ \\hline\n\\end{tabular}%\n}\n\\end{table}\n\nThe statistical significance of results discussed earlier in this Section is assessed in Table \\ref{tab:StatTests}. Here we report the \\textit{p}-values obtained performing a $15 \\times 2$cv \\textit{t}-test as described in Section \\ref{sec:Experiments}. Specifically, \\textit{p}-values $> 0.1$ are reported in their numerical form, \\textit{p}-values $\\leq 0.1$ and $> 0.05$ are marked as $^{*}$, \\textit{p}-values $\\leq 0.05$ and $> 0.01$ are marked as $^{**}$, \\textit{p}-values $\\leq 0.01$ and $> 0.001$ are marked as $^{***}$ and \\textit{p}-values $\\leq 0.001$ are marked as $^{****}$. Looking at the entries of Tables \\ref{tab:local_bests_LinearSVM}, \\ref{tab:local_bests_KNN}, \\ref{tab:local_bests_Decision_Tree} where TFS defines a new state-of-the-art, we notice that (i) when LinearSVM is used as classifier, TFS is statistically different from $\\textrm{Inf-FS}_U$ in $8$ out of $14$ cases ($57\\%$); (ii) when KNN is used as classifier, TFS is statistically different from $\\textrm{Inf-FS}_U$ in $3$ out of $10$ cases ($30\\%$); (iii) when Decision Tree is used as classifier, TFS is statistically different from $\\textrm{Inf-FS}_U$ in $4$ out of $13$ cases ($31\\%$). There is only one dataset (i.e. 'GISETTE') where TFS is statistically different from $\\textrm{Inf-FS}_U$ independently from the classifier: in all the other cases, results are dependent on the choice of the classifier.\n\n\\section{Conclusions} \\label{sec:Conclusions}\nIn this work, we combine the power of state-of-the-art IFNs, and instruments from network science in order to develop a novel unsupervised, graph-based filter method for feature selection. Features are represented as nodes in a TMFG, and their relevance is assessed by studying their relative position inside the network. Exploiting topological properties of the network used to represent meaningful interactions among features, we propose a physics-informed feature selection model that is highly flexible, computationally cheap, fully explainable and remarkably intuitive. To prove the effectiveness of the proposed methodology, we test it against the state-of-the-art counterpart (i.e. $\\textrm{Inf-FS}_U$) on $16$ benchmark datasets belonging to different applicative domains. Employing a Linear Support Vector classifier, a k-Nearest Neighbors classifier and a Decision Tree classifier, we show how our algorithm achieves top performances on most benchmark datasets, redefining the current state-of-the-art on a significant number of them. 
The proposed methodology demonstrates effectiveness in conditions where the amount of training data varies widely. Compared to its main alternative, TFS has a lower computational complexity and provides a much more intuitive overview of the feature selection process. Thanks to the possibility of studying the relative position of nodes in the network in many different ways (i.e. choosing different centrality measures or defining new ones), TFS is highly versatile and fully adaptable to input data. This research work is also relevant since it underlines some criticalities in the way the effectiveness of feature selection methods is evaluated and proposes a rigorous pipeline to compare models and assess the statistical significance of the achieved results. It is worth noting that the current work is a foundational one. It presents three aspects that are unsatisfactory and that we plan to address in future work: (i) the need to explicitly specify the cardinality of the subset of relevant features is limiting and requires a priori knowledge of the applicative domain or, at least, an extended search for the optimal realization of this hyper-parameter; (ii) the usage of classic correlation measures in the TMFG's building process prevents the handling of problems with mixed types of features (continuous-categorical, categorical-categorical); (iii) the TMFG is non-differentiable and this prevents a direct integration with advanced Deep Learning-based architectures. More generally, this study points to many future directions, spanning from the development of data-centred measures to assess feature relevance to the possibility of replicating the potential of this method through automated learning techniques. The first steps have been taken in the latter research direction by introducing a new and potentially groundbreaking type of neural networks called Homological Neural Networks. \n\n\\section*{Acknowledgments} \\label{sec:Aknowledgments}\nThe author, T.A., acknowledges the financial support from ESRC (ES\/K002309\/1), EPSRC (EP\/P031730\/1) and EC (H2020-ICT-2018-2 825215). Both the authors acknowledge Agne Scalchi, Silvia Bartolucci and Paolo Barucca for the fruitful discussions on foundational topics related to this work.\n\n\\bibliographystyle{unsrt} \n