diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzfqb" "b/data_all_eng_slimpj/shuffled/split2/finalzfqb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzfqb" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nIn sexual reproduction, as opposed to asexual reproduction, the genomes of the \ntwo parents are mixed, and crossover occurs within the diploid genome of each \nparent. This way of reproduction has advantages as well as disadvantages \ncompared with asexual cloning of haploid genomes. An advantage is that bad \nrecessive mutations do not affect health if they are present in only one of \nthe two haplotypes (= sets of genetic information). A disadvantage\nis the reduced number of births if only the females produce offspring while the \nmales consume as much food and space as the females. Moreover, crossover of \ntwo different genomes may produce a mixture which is fitter than each of the two \nparents, but also one which is less fit, as seen these \ndays in the DaimlerChrysler car company (outbreeding depression). For small \npopulations, the probability is higher that the two parents carry the same bad \nrecessive mutation, which therefore diminishes the health of the individual \n(inbreeding depression). \n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[angle=-90,scale=0.5]{inbreeding1.ps}\n\\end{center}\n\\caption{Average over 100 simulations with $G=10$ groups each, $\\Delta = 100$.\n}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[angle=-90,scale=0.5]{inbreeding2.ps}\n\\end{center}\n\\caption{Average over 10 simulations with a large (top) and 100 with a small\n(bottom) population, versus number $G$ of groups; $r=1, \\; \\Delta = 1000$ and \n100, respectively. For the larger population $G$ is divided by 10 so that \ndata for the same number of individuals per group have the same horizontal\ncoordinate. 
We see nice scaling.\n}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[angle=-90,scale=0.30]{inbreeding3a.ps}\n\\includegraphics[angle=-90,scale=0.30]{inbreeding3b.ps}\n\\end{center}\n\\caption{Time dependence of outbreeding advantage (part a) and outbreeding\ndepression (part b).\n}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[angle=-90,scale=0.5]{inbreeding4.ps}\n\\end{center}\n\\caption{Average over 100 simulations with a small population, high minimum\nage of reproduction, and $G=1$ and 50. For $G=1$ there is always complete \nmixing. Note the double-logarithmic scales, also in Figs.~5 and 6.\n}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[angle=-90,scale=0.5]{inbreeding5.ps}\n\\end{center}\n\\caption{Average over 100 simulations with a small population, high minimum\nage of reproduction and various\nlengths $L$ of the bit-strings, using a birth rate $B = 128\/L$. \n}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[angle=-90,scale=0.5]{inbreeding6.ps}\n\\end{center}\n\\caption{Dependence on population size for $N_{\\max} = 10^3 \\dots 10^7$, \naveraged over 1000 samples (smallest population) down to one sample (largest). \n$L=64, \\; B = 2$.\n}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[angle=-90,scale=0.5]{inbreeding11.ps}\n\\end{center}\n\\caption{\nRelation between normalized population size and the crossover\nrate. 
The population size was divided by the population size evolving with crossover\nrate $r = 1$.\n}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[angle=-90,scale=0.5]{inbreeding12.ps}\n\\end{center}\n\\caption{\nDistribution of defective genes in the genomes of populations\nevolving under different crossover frequencies.\n}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[angle=-90,scale=0.5]{inbreeding13.ps}\n\\end{center}\n\\caption{\nRelation between crossover frequency and average frequency of\ndefective genes in the sections of genomes expressed before the\nreproduction age $R$. Note that fractions equal to 0 mean that the populations with \nthe given crossover frequency died out.\n}\n\\end{figure}\n\n\\section{Standard Model}\n\nWe try to simulate these effects in the standard sexual Penna ageing model,\ndeviating from published programs \\cite{books} as follows: The Verhulst factor, \na death probability $N\/N_{\\max}$ at population $N$ with carrying capacity\n$N_{\\max} = 10^3, \\ 10^4, \\,10^5$,\ndue to limited food and space, was applied to births only and not to adults;\nthe initial age distribution was taken as random when needed;\nthe birth rate was reduced from 4 to 1, the lethal threshold of active \nmutations from 3 to 1 (that means a single active mutation kills), and mostly \nonly $10^4$ instead of $2 \\times 10^4$ time steps were made. (One time step or\niteration is one Monte Carlo step for each individual.) Furthermore,\nthe whole population was, for most of the simulated time, separated into $G$ \ndifferent groups such that females look for male partners only within their\nown group, with a separate Verhulst factor applying to each group. For the\nlast $\\Delta \\ll 10^4$ time steps this separation into groups was dissolved: \nthen females could select any male, and only one overall Verhulst factor \napplied to the whole population. 
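The stabilizing role of the per-group Verhulst factor can be seen in a rough mean-field sketch (our simplification; it ignores ageing, mutations and the genetic dynamics): with per-capita birth rate $B$, newborn survival probability $1 - N/N_{\max}$ and an effective per-capita death rate $d$, a stationary group population $N^*$ satisfies

```latex
B \, N^* \left( 1 - \frac{N^*}{N_{\max}} \right) = d \, N^*
\qquad \Longrightarrow \qquad
N^* = N_{\max} \left( 1 - \frac{d}{B} \right),
```

so each group settles near a fixed fraction of its own carrying capacity rather than dying out, which is why separate per-group Verhulst factors counteract the Eve effect during the separation phase.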
Finally, the crossover\nprocess within each parent before each birth was not always performed, but only\nwith a crossover probability $r$. \n\nIf there were no inbreeding depression, then during the first, longer part\nof the simulation the total number $N_1$ of individuals would be independent of \nthe number $G$ of groups into which it is divided. And if there were then no \nadvantages or disadvantages of outbreeding, the population $N_2$ during the \nsecond, shorter part, $10^4 - \\Delta < t < 10^4$, would be the same as the\npreceding population $N_1$ during the last section, $10^4 - 2 \\Delta < t < \n10^4 - \\Delta$, of the longer first part. We will present data showing that \nthis is not the case. Similar simulations for $G=2$ groups were published\nlong ago \\cite{LSC}. \n \nA difficulty in such simulations is the Eve effect: After a time \nproportional to the population size, everybody has the same female (Eve) and \nthe same male (Adam) as ancestor, with all other offspring having received less \nfit genomes due to random mutations and thus having died out. If we divided\nthe whole population into many groups without further changes, the Eve \neffect would let all groups but one die out and thus destroy the separation.\nTherefore, for the first long period of separation we used separate Verhulst \nfactors for each group, stabilizing its population, while for the second, shorter\npart of mixing we used mostly $\\Delta = N_{\\max}\/100$. \n\nFigure 1 shows the dependence on the crossover probability of the \npopulations $N_1$ before and $N_2$ after mixing. We see that the mixing always\nincreases the population, which means one has no outbreeding depression but\nan outbreeding advantage. Figure 2 confirms this advantage but also \nshows the inbreeding depression: The larger the number $G$ of groups (and \nthus the smaller the group size), the smaller are the two populations\n$N_1$ and $N_2$. 
(The difference between $N_1$ and $N_2$ fluctuates less than\nthese numbers themselves since $N_2$ is strongly correlated with $N_1$.)\nAlso, for the larger population in Fig.~2, the number of groups can be larger\nbefore the population becomes extinct. Figure 3 shows the time dependence \nof the outbreeding effect with mixing between groups allowed after 9900 (part a)\nand 9000 (part b) time steps. Figure 3a shows summed populations from \n100 simulations with a small population ($G = 10$) and 10 simulations of a large\npopulation ($G = 100$), versus time after mixing started; $r = 1$ in both cases.\nFor much larger populations of 5 million and still $G = 10$, no such effect of \nmixing is seen. Part b shows, for the high\nreproduction age $R$ of the following figures, one example of the outbreeding\ndepression (bottleneck \\cite{malarz}) followed by a recovery with oscillations \nof period $R$ after mixing was allowed from time 9001 on; $N_{\\max}=10^5$.\n\n \nWe also checked for the influence of $r$ in the case when the minimum age of \nreproduction $R$ is 5\/8 of the length $L$ of the bit-strings, i.e. larger than \nthe value of 8 used before, and when $L$ is different from the 32 used in Figs.\n1 to 3. In these simulations we also assumed all mutations to be\nrecessive, in contrast to the 6 out of 32 dominant bit positions for Figs.\n1 to 3. Figure 4 shows, for $L = 32$ and a birth rate $B = 4$, \na minimum of the population at intermediate $r$ for one group, and for 50 \ngroups a monotonic behaviour but with outbreeding depression at small $r$ and\noutbreeding advantage at large $r$. This population minimum is seen for\n$L = 64$ and 32 but not for 16, Fig.~5. Figure 6 shows the dependence on \npopulation size. (Our data before and after mixing are averaged over $\\Delta = \n100$ or 1000 iterations. 
When outbreeding depression occurs it may happen that \nthe population later recovers: Fig.~3b.)\n\n\\section{Interpretation}\n\nTo study the inbreeding and outbreeding depressions in detail we have\nanalyzed the results of simulations of single populations of different sizes\nunder different regimes of intragenomic recombination (crossover rate $r$).\nParameters for these simulations have been slightly changed to get clearer \nresults: $L= 128,\\; R = 80,\\; N_{\\max} = 1000$ to 20 000, crossover probability \n$r = 0$ to 1, $B = 1$, time of simulations = $5 \\times 10^5$ iterations.\nIn Fig.~7 the relation between the size of the population and the crossover\nprobability is shown for three different environment capacities.\n\nPopulations in the smallest environment ($N_{\\max}=1000$) survive with $r = 0$, \nbut their sizes decrease with increasing $r$ and they become extinct for $r$ between\n0.12 and 0.4. Under larger crossover rates populations survive and their\nsizes are larger than those obtained for $r = 0$ (see the plots in Fig.~7, where\nthe sizes of populations were normalized by the size of the population under \n$r=1$). Larger populations ($N_{\\max}= 10 000$) become extinct in a very\nnarrow range of crossover rates close to 0.12, and populations with $N_{\\max}= \n20 000$ become extinct at slightly lower crossover rates. Nevertheless, all\npopulations have larger sizes when the crossover rate is of the order of 1\nper gamete production (the highest tested).\n\nThis nonlinear relation between the size of the population and the crossover rate can\nbe explained on the basis of the genetic structure of individual genomes in\nthe simulated populations. In Fig.~8 we show the frequency of\ndefective genes in the genetic pool of populations for $N_{\\max} = 10 000$ under\ncrossover rates 0, 0.1 and 1. The frequency of defective genes expressed\nbefore the minimum reproduction age ($R = 80$) in populations without crossover is\n0.5. 
Since $T = 1$, if the distribution of defects were random, the\nprobability that any individual survives until the reproduction age $R$ would be\n$0.75^R$ (negligibly small for $R > 30$). Thus, to survive, individuals \nhave to compose their\ngenomes of two complementing bit-strings (haplotypes). For more efficient\nreproduction the number of different haplotypes should be restricted, and in\nfact there are only two different complementing haplotypes in the whole\npopulation, as was shown in \\cite{waga}. In such populations, the\nprobability of forming an offspring surviving until the reproduction age is\n0.5. Note that recombination at any point inside the part of the genome\nexpressed before the reproduction age $R$ produces a gamete which is not\ncomplementary to any other gamete produced without recombination or with\nrecombination at another point. Thus, crossovers in such populations are\ndeleterious for the offspring.\nAt the other extreme, with crossover probability 1, populations are under\npurifying selection. The fraction of defective genes in the population is\nkept low (about 0.1, compared with 0.5 without recombination), to enable\nthe offspring to survive until their reproduction period. The critical\ncrossover frequency close to 0.12 is connected with a sharp transition between\nthese two strategies of genomic evolution: complementarity and purifying\nselection. In Fig.~9 the frequency of defective genes expressed before the\nreproduction age is plotted. For lower crossover rates the fractions of\ndefective genes are kept at the level of 0.5; for higher crossover rates they\nare close to 0.1. Close to the critical frequency of crossover, defective\ngenes located at both ends of the region of genomes expressed before the\nreproduction age are forced to obey the purifying selection, which eliminates\nsome defects (Fig. 
8).\n\nIn the case of small populations, the probability of meeting two closely related\npartners (high inbreeding coefficient) is high, and as a consequence there\nis a higher probability of two defective alleles meeting at the same locus in the\nzygote, which causes a phenotypic defect and eliminates the offspring from\nthe population. In such conditions the strategy of composing the genome of\ntwo complementing haplotypes is more effective. Nevertheless, this strategy\nis not the best if effective populations are very large, with a low inbreeding\ncoefficient, when the probability of meeting two identical haplotypes is\nnegligible. Thus, comparing very large populations with very small ones we\ncan observe the inbreeding depression. On the other hand, in\nsmall populations this strategy leads to the emergence of a very limited number of different\nhaplotypes in the populations (in the extreme, only two). These haplotypes are\ncharacterized by a specific sequence of defective alleles. Independent\nsimulations generate haplotypes with different sequences of defective\nalleles. Mixing two or more populations evolving independently decreases the\nprobability of two complementing haplotypes meeting in one zygote; this\nresults in outbreeding depression (seen in Figs.~3b and 4).\n\n\\section{Conclusion}\nWe varied the parameters of the sexual Penna ageing model, in particular by \nseparating the population into reproductively isolated groups and\/or having\nlonger bit-strings and a high minimum age of reproduction. We could observe\nand interpret inbreeding depression, outbreeding depression, and outbreeding \nadvantage, through the counterplay of purifying selection and of haplotype\ncomplementarity. Purifying selection tries to keep as few mutations as possible in \nthe bit-strings, like haplotype 00000000 for $L=8$, while haplotypes 01100101 \nand 10011010 are complementary. 
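The survival arithmetic behind the two strategies can be checked numerically; the following is our illustrative mean-field sketch, not the full Penna simulation:

```python
# Mean-field check of the survival argument: with lethal threshold T = 1,
# a zygote dies if any locus expressed before the reproduction age R
# carries a (recessive) defect on both haplotypes.
import random

R = 80  # number of loci expressed before the reproduction age

# Random haplotypes: each locus is defective with probability 0.5 on each
# haplotype, so a homozygous defect occurs with probability 0.25 per locus.
p_survive_random = 0.75 ** R  # negligibly small for R > 30

# Two complementing haplotypes: no locus is defective on both copies.
h1 = [random.randint(0, 1) for _ in range(R)]
h2 = [1 - bit for bit in h1]  # bitwise complement of h1
assert all(not (a and b) for a, b in zip(h1, h2))

# If the population carries only these two haplotype types, an offspring
# draws one haplotype from each parent; of the four equally likely
# combinations, only the two mixed ones are complementary (the homozygous
# pairings die, assuming each haplotype has at least one defect in the
# expressed region).
combos = [(x, y) for x in ("h1", "h2") for y in ("h1", "h2")]
p_survive_complementary = sum(x != y for x, y in combos) / len(combos)
```

This reproduces the probabilities quoted above: $0.75^{80} \approx 10^{-10}$ for random genomes versus 0.5 for a population built from two complementing haplotypes.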
In both cases, deleterious effects from\nmutations are minimised.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\subsection{Motivation}\n\\label{subsec:Motivation}\nAlthough infrequent, peak load is a cause of increased capital and operating expenses for power networks. The reason for this is twofold: the higher need for grid reinforcements and the use of more expensive fossil fuel-based generators to satisfy the peak loads (for a short duration). \nThus, to manage the extra costs due to peak demand, network operators include a peak demand tariff (commensurate with the peak load) in the electricity bills of commercial customers. This motivates large-scale commercial customers such as universities and manufacturers to manage their demand better and invest in assets such as solar photovoltaic (PV) panels and stationary batteries that have the potential to shift their demand and reduce their electricity bills. A secondary effect of this is the reduction in CO\\(_2\\) emissions because of the lower usage of fossil fuel generators, which has the potential to combat climate change and aid the process of decarbonization of our power networks.\n\nHowever, optimal scheduling of (schedulable) loads and batteries (to minimize electricity costs) requires predictions of inflexible load (baseload), solar PV generation and the spot price of electricity. The first two variables depend mainly on weather conditions. Also, increasing renewable generation in the generation mix has led to increasing price volatility in the Australian national electricity market (NEM), which also creates a correlation between electricity prices and the weather \\cite{downey2022untangling}. This makes designing such algorithms challenging, as most commercial activities are planned over the mid to long term and hence require reliable predictions. 
Additionally, the scheduling problems are generally mixed-integer linear programs (MILPs) which are NP-hard and may be intractable depending on the formulation and problem size. \n\nSurrounding this premise, the IEEE Computational Intelligence Society (IEEE-CIS) partnered with Monash University (Victoria, Australia) to conduct a competition seeking technical solutions to manage Monash's micro-grid containing rooftop solar panels and stationary batteries~\\cite{IEEECIS}. The main challenge was to develop an optimal scheduling algorithm for Monash's lecture theatres and operation of batteries to minimize their electricity bill, considering their baseload, solar generation and NEM electricity spot prices. To this end, the contestants were provided with actual time series data of the building loads without any lecture program (baseload) and solar generation from the micro-grid. So the contestants were expected to predict the baseload and solar generation by taking into account real-world weather data (also provided) for one month in the future. Following this, the contestants had to use actual electricity spot prices for the same duration along with these predictions for the optimal scheduling algorithm. \n\n\n\n\\subsection{Related work} \n\\label{subsec:Related Work}\nWe developed separate algorithms for the solar and baseload forecasts based on practical insights and the given data. Many approaches are available in the literature for both data types, a relatively mature and crowded research field. Therefore, the methods we have developed for this problem borrow themes from this mature literature. \n\nFor solar predictions, the basic theme used is similar to the ``clear sky'' models for forecasting solar irradiance and thereby estimating PV generation \\cite{Palani2017}. The general idea here is to create a baseline model for PV generation, assuming the sky is clear, meaning there is no cloud coverage or temperature variance. 
This baseline is then modified based on actual or expected weather conditions to estimate the actual PV generation. The literature around this idea is based on physical models of irradiance calculations, where equations are used to develop the baseline for a given geographical location \\cite{Palani2017}. Newer methods use data-driven techniques such as time series forecasting and machine learning models~\\cite{long2014analysis}. The data-driven methods are gaining popularity because of their robustness, speed and geographic adaptability. \n\nIntuitively, this method gives reliable solar forecasts because of solar generation's seasonal and diurnal nature. However, the main drawback is the lack of data specifically related to ``clear sky'' days; consequently, we use the most commonly occurring day's generation as the baseline, which can be discovered by our previous work~\\cite{yuan2021irmac}.\n\nFor the baseload forecasting, the theme used is the application of ensemble methods, as the current state of the art identifies these methods as the most accurate when compared to stand-alone methods \\cite{raviv2015}. We mainly use a combination of random forest (RF), gradient boost (GB), autoregressive integrated moving average (ARIMA) and support vector machine (SVM) models, which are all well-studied and standard methods for time series forecasting \\cite{HONG2016914}. Specifically, we apply different forecasting methods to different sub-series after disaggregating the load profiles using Seasonal and Trend decomposition using Loess (STL), due to their cyclic patterns \\cite{Theodosiou2011forecasting}.\n\nThe optimal scheduling of distributed energy resources (DER) and controllable\/uncontrollable loads for micro-grids comes under the class of problems commonly referred to as energy management problems. 
However, these classes of problems tend to be non-convex and non-linear in nature because of the constraints associated with the scheduled devices, e.g., scheduling uninterruptible loads such as lectures that cannot be stopped once started. As reviewed by the authors in~\\cite{EMP_Review}, the majority of these problems are modeled via classical optimization approaches such as MILP~\\cite{OptimalMILP} and mixed-integer non-linear programs (MINLPs)~\\cite{MINLP_Eg}. Other approaches include dynamic programming~\\cite{DP_Eg}, rule-based optimization and meta-heuristic algorithms~\\cite{Meta_Eg}. \n\nSince these problems tend to be non-convex and non-linear, the major challenge associated with solving them is tractability and (in the case of meta-heuristic methods) convergence to a global optimum. The mathematical literature surrounding MILPs offers a variety of techniques to simplify these problems before solving them, and the availability of solvers such as Gurobi\\textsuperscript{\\textregistered} for MILPs makes them a very attractive option for solving these problems. Also, unlike other models where the algorithms (may) converge to a locally optimal solution, modern MILP solvers can obtain globally optimal solutions for this class of problems.\n\n\n\\subsection{Contributions}\n\\label{subsec:Contributions}\nBased on the related work studied, this paper offers the following contributions to the body of knowledge: \n\\begin{itemize}\n \\item A solar forecasting algorithm using training data from a refined motif (RM) discovery technique and an over-parameterized 1D-convolutional neural network (1D-CNN) implemented via residual networks (ResNet). To overcome the lack of ``clear sky'' data, we incorporated the work done previously by Rui Yuan et al.~\\cite{yuan2021irmac} to identify RMs in the given solar generation dataset. 
An RM is the most repetitive pattern within a given time series, which can be extracted along with the exogenous variables (e.g., weather information) associated with this pattern. Using this as a baseline, we estimated solar generation by training an over-parameterized 1D-CNN. Some studies have shown that over-parameterization of CNNs can lead to better performance at the expense of longer training time \\cite{Balaji2021, Bubeck2021,Power2021}. Therefore, a ResNet was implemented to develop a deeper NN but with faster computation time.\n \n \\item An optimal micro-grid scheduling algorithm solved using real-world data for a university application. To the best of our knowledge, there has not yet been a study to co-optimize lecture schedules (and associated resources) and battery operation (with PV panels). Therefore, in this paper, we have developed and tested our algorithm using real-world data and practical case instances provided by Monash University. Given the large problem size (one-month schedule) and the presence of a quadratic term in the objective function for the cost of peak demand, we proposed a two-sub-problem approach to formulate a tractable problem. The first sub-problem was used to limit the peak demand throughout the month, eliminating the quadratic term from the objective. Then, the peak demand was used to solve the second sub-problem to minimize the total electricity costs.\n\\end{itemize}\n\n\nThe rest of the paper is structured as follows. Section \\ref{sec:Background} introduces the data, information and problem requirements. Section \\ref{sec:Methodology} proposes the forecasting and optimization methodologies. Section \\ref{sec:Results and Discussion} presents the numerical results of the scheduling algorithm based on time series forecasting. 
Finally, we conclude the paper in Section \\ref{sec:Conclusion}.\n\n\n\n\\section{Background Information}\n\\label{sec:Background}\nThis section describes the competition requirements and data provided by the organizers.\n\n\n\\subsection{Problem statement}\n\\label{subsec:Problem Statement}\nThis section describes the scheduling constraints and objective function provided by the competition organizers and has been adapted from~\\cite{IEEECIS}. We were required to develop prediction algorithms for the baseload of six buildings at Monash University and the solar generation of PV panels connected to them. After this, we had to use these predictions to optimally schedule lecture activities and battery operation while minimizing the electricity costs (i.e., electrical energy consumption cost plus peak demand charge). We were allowed to consider the electricity prices as known parameters to simplify the problem and focus on predicting solar generation and baseload. \n\nFor each lecture activity, we are provided with the number of small or large rooms needed, the electrical power consumed per room and the duration of the activity (in steps of 15 minutes). We are also provided with a list of precedence activities, i.e., activities that must be performed at least one day before the activity in question. For the batteries, we are provided with the maximum energy rating, i.e., the maximum state of charge (SOC), the peak charge\/discharge power and the charge\/discharge efficiency. \n\nThe scheduling must start from the first Monday of the month, and each activity must: 1) take place within office hours (9:00--17:00), and 2) happen once every week, recurring on the same day and time every week. Also, the number of rooms (associated with each activity) allocated at a given time interval must not exceed the total number of rooms in the six buildings. The batteries \ncan operate in one of three states, charging, discharging or idle, as determined by the scheduling algorithm. 
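The three-state battery rule can be written as a single SOC update; the following is a sketch, and the exact placement of the charge/discharge efficiencies is our assumption rather than part of the competition specification:

```python
# One SOC step for a battery that is either charging, discharging or idle.
# Whenever it is not idle, it runs at its peak charge/discharge power.
DT = 0.25  # length of one time step in hours (15 minutes)

def soc_step(soc, state, e_cap, p_charge, p_discharge, eta_c):
    """Return the battery SOC (kWh) after one 15-minute interval."""
    if state == "charge":
        # Losses on the way in: only eta_c * p_charge * DT is stored.
        soc = min(e_cap, soc + eta_c * p_charge * DT)
    elif state == "discharge":
        # The SOC drops by the full drawn energy; the discharge
        # efficiency would apply to the power reaching the building.
        soc = max(0.0, soc - p_discharge * DT)
    return soc  # "idle" leaves the SOC unchanged

soc = soc_step(0.0, "charge", e_cap=100.0, p_charge=50.0,
               p_discharge=50.0, eta_c=0.9)
# One charging step at 50 kW stores 0.9 * 50 * 0.25 = 11.25 kWh.
```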
However, whenever a battery is operated, its power must equal the peak charge\/discharge capacity.\n\n\\subsection{Dataset information}\n\\label{subsec:Data set}\nThe organizers provided us with datasets for the electricity consumption of the six buildings and their connected solar panels~\\cite{IEEECIS}. About two years' worth of data was provided with a granularity of 15 minutes. However, the datasets contained many missing points, especially the demand profiles, with consecutive weeks of unrecorded data, as shown in \\cite{gitlabRay}.\nInformation regarding the COVID-19 lockdown, how it influenced the forecast, and earlier timelines was also provided to aid in incorporating the effect of the COVID-19 pandemic on baseload in our models. ERA5 climate data was also provided as exogenous variables for the time series forecasting via a partnership with OikoLabs~\\cite{IEEECIS}. \nAlso, we were allowed to use electricity price information from the Australian NEM~\\cite{NEM_Data} as a parameter for the objective function calculation. 
The input parameters used for solar and building forecasting are summarized in Table~\\ref{tab: inputs-sensitivity}.\n\\begin{table}[!h]\n \\centering\n \\caption{Input data used for forecasting}\n \\label{tab: inputs-sensitivity}\n \\resizebox{0.5\\textwidth}{!}{\n \\begin{tabular}{|l|*{2}{c|}}\n \\hline\n \\rowcolor{AdOrange} \\backslashbox{Weather inputs (15mins)}{Forecast outputs (15mins)}\n &\\makecell{Building \\\\(15mins)}&\\makecell{Solar\\\\ (15mins)}\\\\\\hline\n \\rowcolor{AdOrangeLight}Temperature ($^\\circ$C) & \\checkmark & \\checkmark \\\\\\hline\n \\rowcolor{AdCream}Dewpoint temperature ($^\\circ$C) &\\checkmark &\\\\\\hline\n \\rowcolor{AdOrangeLight}Wind speed (m\/s) & &\\checkmark \\\\\\hline\n \\rowcolor{AdCream}Relative humidity (0-1) & \\checkmark & \\checkmark\\\\\\hline\n \\rowcolor{AdOrangeLight}Surface solar radiation (W\/$m^2$) &\\checkmark &\\checkmark\\\\\\hline\n \\rowcolor{AdCream}Total cloud cover (0-1) & &\\checkmark\\\\\\hline\n \\rowcolor{AdOrangeLight}Occupancy (0-1) & \\checkmark&\\\\\\hline\n \\rowcolor{AdCream}Annual harmonics (0-1) & &\\checkmark\\\\\\hline\n \\end{tabular}\n }\n\\end{table}\n\\subsection{Instance information}\n\\label{subsec:Instance}\n\nTo test the algorithms each team developed, the competition organisers provided ten problem instances. Each instance provided information regarding the number of activities to be scheduled and the specifics of each activity; duration of the activity in time steps, power consumed per room for each time interval, number of large\/small rooms required and a precedence list. The precedence list contains the activities that must happen one day before the specified task. Depending on the number of activities in each instance, they were divided into two sub-categories; large (200 activities) and small (50 activities) instances. 
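For illustration, one way to hold an activity record from such an instance is shown below; the field names are ours and do not reflect the competition's file format:

```python
# Minimal container for one schedulable activity (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Activity:
    duration: int                  # length in 15-minute steps
    power_kw: float                # power consumed per room, kW
    n_large: int                   # large rooms required
    n_small: int                   # small rooms required
    precedences: list = field(default_factory=list)  # must run >= 1 day earlier

lecture = Activity(duration=8, power_kw=5.0, n_large=1, n_small=2,
                   precedences=[3, 7])
# Power drawn while this activity is active: 5 kW in each of 3 rooms.
total_kw = lecture.power_kw * (lecture.n_large + lecture.n_small)
```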
We report simulation results for one large and one small instance in Section~\\ref{sec:Results and Discussion}.\n\n\n\\section{Methodology}\n\\label{sec:Methodology}\nThis section describes the methodology developed to solve the problem described in Section~\\ref{subsec:Problem Statement}. Mainly, we outline our data cleaning strategies, forecasting methodology and scheduling algorithms used in the competition.\n\n\n\\subsection{Solar generation forecast}\n\\label{subsec:Solar generation forecast}\n\n\\begin{figure}[!ht]\n\\centerline{\\includegraphics[clip, trim=3.3cm 4.3cm 2.75cm 3.95cm,width=0.95\\linewidth]{figs\/solar_forecast_2.pdf}}\n\\caption{Block diagram of the proposed solar generation prediction model}\n\\label{fig:solar}\n\\end{figure}\n\nThe solar generation forecast methodology is depicted in Fig.~\\ref{fig:solar}. The solar data was mainly clean, with no outliers and a few missing points, which were removed during the cleaning process. Since solar time series data is repetitive in nature, we used the method described in~\\cite{yuan2021irmac} to extract the RM (the most repetitive pattern) and its associated weather conditions. The intuitive idea is to train a neural network (NN) to reshape this RM according to the difference between the weather conditions of the RM and those of the actual data. \n\n\\subsection{Baseload forecast}\n\\label{subsec:baseload forecast}\n\n\\begin{figure}[!ht]\n\\centerline{\\includegraphics[clip, trim=4.5cm 4.2cm 0.4cm 1.5cm,width=0.95\\linewidth]{figs\/baseload_forecast_1.pdf}}\n\\caption{Block diagram of the proposed baseload prediction model}\n\\label{fig:base_load}\n\\end{figure}\n\nThe block diagram of the methodology used for baseload forecasting is shown in Fig.~\\ref{fig:base_load}, which demonstrates the use of different ensemble forecasting methods for different buildings. We applied the same data cleaning procedure as in the PV generation forecast. 
However, since there were more missing points in the demand profiles, which lasted for many consecutive weeks, we used the average value from the previous week for the long-period missing data. The choice of ensemble methods was made using the Python sklearn package's voting regressor function \\cite{SklearnVoting}. For this dataset, this resulted in a combination of the RF, SVM, ARIMA and GB methods for the best accuracy. In terms of the length of training data, due to the highly dynamic nature of the demand profiles, which was exacerbated during the COVID-19 period, we used only two months of historical data for training.\n\nDue to the seasonal\/cyclic patterns demonstrated in buildings 1, 2 and 5, as shown in \\cite{gitlabRay}, we utilized STL decomposition to capture these properties by disaggregating the load profiles into three components, namely trend, seasonality and residual. We then trained each profile with different methods depending on their performance on the historical data. In the end, the predictions from these three sub-series were combined to obtain the final building demand forecast. For buildings 0 and 4, with no clear cyclic patterns, we observed that either RF or GB (without STL decomposition) was sufficient to capture their repetitive nature. 
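The long-gap filling step described above can be sketched as follows; for simplicity, this variant copies the same 15-minute slot from one week earlier rather than a weekly average:

```python
# Fill long runs of missing demand readings from the previous week.
WEEK = 7 * 24 * 4  # 672 fifteen-minute intervals per week

def fill_from_previous_week(series):
    """Replace None entries with the value one week earlier, if known."""
    filled = list(series)
    for t, value in enumerate(filled):
        if value is None and t >= WEEK and filled[t - WEEK] is not None:
            filled[t] = filled[t - WEEK]
    return filled

demand = [10.0] * WEEK + [None, None, None]
# The three missing readings inherit last week's 10.0 kW values.
```

Because `filled` is updated in place, gaps longer than one week also propagate forward from the last recorded week.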
Lastly, for building 3, historical data showed that the load only took on a finite set of discrete values; thus, we fixed the predictions to the median.\n\n\n\\subsection{Optimal scheduling algorithm}\n\\label{subsec:Optimal scheduling}\n\n\\begin{figure}[!ht]\n\\centerline{\\includegraphics[clip, trim=3.6cm 9.3cm 1.2cm 3.5cm,width=0.95\\linewidth]{figs\/scheduling_alg_1.pdf}}\n\\caption{Block diagram of the proposed optimal scheduling algorithm}\n\\label{fig:scheduling_alg}\n\\end{figure}\n\nWe developed a MIP-based optimal scheduling problem to represent all the scheduling constraints, where the objective function was provided by the organizers as follows: \n\\begin{flalign}\n\\label{eq:QP objective}\nO(P_{t}) = \\frac{0.25}{1000}\\sum_{t \\in \\mathcal{T}}\\, P_{t} \\, \\lambda^{\\text{aemo}}_{t} + 0.005 \\, \\left(\\max_{t \\in \\mathcal{T}} P_{t}\\right)^2 &\n\\end{flalign}\n\nHere \\(P_{t}\\) and \\(\\lambda^{\\text{aemo}}_{t}\\) are the net demand and the AEMO electricity spot price at time \\(t\\) over the time horizon \\(\\mathcal{T}\\), respectively. The problem is therefore an MIQP, which is NP-hard and hence most likely intractable at this size (\\(|\\mathcal{T}| = 2880\\)).\nTo improve tractability, we reformulated the MIQP problem into two MILP sub-problems, as illustrated in Fig.~\\ref{fig:scheduling_alg}. The first sub-problem estimates an upper bound on the maximum demand (\\(P^{\\text{max}}_1 = \\max_{t \\in \\mathcal{T}} P_{t}\\)), which is used to remove the quadratic term from the objective function of the second sub-problem. \n\nThe scheduling is developed for a time step \\(\\Delta t\\), with \\(P^{\\text{base}}_t\\) and \\(P^{\\text{solar}}_t\\) being the baseload and solar generation forecasts, respectively. 
For a set of batteries \\(b \\in \\mathcal{B}\\) over the time horizon \\(t \\in \\mathcal{T}\\), we define \\(E^{\\text{bat}}_{b, t}\\) as the battery SOC in kWh, \\(P^{\\text{bat}}_{b, t}\\) as the battery power in kW, \\(E^{\\text{cap}}_b\\) as the battery capacity in kWh, \\(B^{\\text{c}}_{b}\/B^{\\text{d}}_{b}\\) as the battery charge\/discharge power in kW, \\(u^{\\text{c}}_{b, t}\/u^{\\text{d}}_{b, t}\\) as the binary battery charge\/discharge status, and \\(\\eta^{\\text{c}}_{b,t}\/\\eta^{\\text{d}}_{b,t}\\) as the charging\/discharging efficiencies. For a set of activities \\(a \\in \\mathcal{A}\\) over the same time horizon, we define \\(A^{\\text{kW}}_a\\) as the power required per room for the activity in kW, \\(R^{\\text{large}}_{a}\/R^{\\text{small}}_{a}\\) as the numbers of large\/small rooms needed for the activity, \\(P^{\\text{sched}}_{t}\\) as the total scheduled power in kW, and \\(u^{\\text{start}}_{a,t}\/u^{\\text{active}}_{a,t}\\) as binary variables that track the activity start time and active time, respectively. \\(H^{\\text{large}}\/H^{\\text{small}}\\) denote the numbers of large\/small rooms available, \\(\\mathcal{D}_{a}\\) is the duration set for activity \\(a\\), and \\(\\mathcal{P}_{a}\\) is the set of activities that must precede activity \\(a\\). Additionally, we define \\(\\mathcal{T}_{n}\\) as the set of time intervals in which activities must not be active, \\(\\mathcal{T}_f\\) as the set of time intervals of the first week, and \\(L^{\\text{day}}_{t}\\) as the day of the time interval \\(t\\). 
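As a sanity check on the battery model defined above, the SOC recursion and the charge\/discharge exclusivity rule can be simulated directly; the parameter values below are illustrative, not the competition's:

```python
DT = 0.25                          # hours per 15-minute interval
E_CAP, B_C, B_D = 10.0, 4.0, 4.0   # capacity (kWh), charge/discharge power (kW)

def soc_trajectory(u_c, u_d, e0=E_CAP):
    """Apply E_t = E_{t-1} + (u_c*B_C - u_d*B_D)*DT, enforcing the SOC
    limits and the rule that a battery cannot charge and discharge in
    the same interval."""
    e, traj = e0, []
    for c, d in zip(u_c, u_d):
        assert c + d <= 1, "simultaneous charge and discharge"
        e += (c * B_C - d * B_D) * DT
        assert 0.0 <= e <= E_CAP, "SOC limit violated"
        traj.append(e)
    return traj
```

Starting from a full battery, discharging for two intervals and then recharging for two returns the SOC to capacity.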
We present the equations used for the two sub-problems in the below sections.\n\n\\subsubsection{Sub-problem 1}\n\\label{subsubsec:Sub-problem 1}\n\\begin{subequations}\n\\begin{flalign}\n\\label{eqref:Objective 1}\n\\min_{\\psi_1} P^{\\text{max}}_{1} &\n\\end{flalign}\n\\begin{flalign}\n&\\textbf{Subject to,} \\nonumber&\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Max power constraint}\n0 \\leq P_{t}\\leq P^{\\text{max}}_{1} \\quad \\forall \\, t \\in \\mathcal{T}& \n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:SOC limit}\n0 \\leq E^{\\text{bat}}_{b, t} \\leq E^{\\text{cap}}_{b} \\quad \\forall \\, t \\in \\mathcal{T}, \\, b \\in \\mathcal{B}&\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Charge-Discharge}\nu^{\\text{c}}_{b, t} + u^{\\text{d}}_{b, t} \\leq 1 \\quad \\forall \\, t \\in \\mathcal{T}, \\, b \\in \\mathcal{B}&\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Battery power}\nP^{\\text{bat}}_{b, t} = u^{\\text{c}}_{b, t}\\,\\frac{B^{\\text{c}}_{b}}{\\sqrt{\\eta_{b}^{\\text{c}}}} - u^{\\text{d}}_{b, t}\\,B^{\\text{d}}_{b}\\,\\sqrt{\\eta_{b}^{\\text{d}}} \\quad \\forall \\, t \\in \\mathcal{T}, \\, b \\in \\mathcal{B} &\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Battery capacity}\nE^{\\text{bat}}_{b, -1} = E^{\\text{cap}}_{b} \\quad \\forall \\,b \\in \\mathcal{B}&\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:SOC value}\nE^{\\text{bat}}_{b, t} = E^{\\text{bat}}_{b, t-1} + \\Bigg(u^{\\text{c}}_{b, t}\\, B^{\\text{c}}_{b} - u^{\\text{d}}_{b, t}\\, B^{\\text{d}}_{b} \\Bigg) \\Delta t \\quad \\forall \\, t \\in \\mathcal{T}, \\, b \\in \\mathcal{B}& \n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Scheduled power}\nP^{\\text{sched}}_{t} = \\sum_{a \\in \\mathcal{A}} u^{\\text{active}}_{a,t} \\, A^{kW}_a\\, \\Bigg(R^{\\text{small}}_{a} + R^{\\text{large}}_{a}\\Bigg) \\quad \\forall \\, t \\in \\mathcal{T}&\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Net export}\nP_{t} = P^{\\text{base}}_{t} - P^{\\text{solar}}_t + 
P^{\\text{sched}}_{t}+ \\sum_{b \\in \\mathcal{B}}P^{\\text{bat}}_b \\quad \\forall \\, t \\in \\mathcal{T} &\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Room requirement}\n\\sum_{a \\in \\mathcal{A}} u^{\\text{active}}_{a, t}\\, R^{\\text{large}}_{a} \\leq H^{\\text{large}}; \\sum_{a \\in \\mathcal{A}} u^{\\text{active}}_{a, t}\\, R^{\\text{small}}_{a} \\leq H^{\\text{small}} \\; \\forall t \\in \\mathcal{T}&\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Non active time}\nu^{\\text{active}}_{a, t} = 0 ; u^{\\text{start}}_{a, t} = 0 \\quad \\forall \\, t \\in \\mathcal{T}_{n}, \\, a \\in \\mathcal{A}& \n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Happen once a week}\n\\sum_{t \\in \\mathcal{T}_f} u^{\\text{start}}_{a, t} = 1 \\quad \\forall \\, a \\in \\mathcal{A} &\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Duration constraint A}\nu^{\\text{start}}_{a, t} \\leq u^{\\text{active}}_{a, t + k} \\: \\forall \\, & t \\in \\{0, 1,\\dots, \\left|\\mathcal{T}\\right| - \\left|\\mathcal{D}_{a}\\right| - 1\\}, \\, k \\in \\mathcal{D}_{a}, a \\in \\mathcal{A}&\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Duration constraint B}\n\\sum_{t \\in \\mathcal{T}_f} u^{\\text{active}}_{a, t} = \\left| \\mathcal{D}_{a} \\right| \\quad \\forall \\, a \\in \\mathcal{A}&\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Precedence constraints}\n\\sum_{t \\in \\mathcal{T}_f} \\left(u^{\\text{active}}_{a, t}\\, L^{\\text{day}}_{t} - u^{\\text{active}}_{k, t}\\, L^{\\text{day}}_{t}\\right) \\geq 1 \\; \\; \\forall \\, k \\in \\mathcal{P}_{a}, a \\in \\mathcal{A}&\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Start at same time every week}\nu^{\\text{start}}_{a, t} = u^{\\text{start}}_{a, t - \\alpha} \\quad \\forall \\, t \\in \\mathcal{T}_{f}^{c}, \\, a \\in \\mathcal{A} &\n\\end{flalign}\n\\end{subequations}\n\n\\noindent where \\(\\psi_1 = \\{P, P^{\\text{bat}}, P_1^{\\text{max}}, E^{\\text{bat}},u^{\\text{c}}, u^{\\text{d}}, u^{\\text{active}}, u^{\\text{start}} 
\\}\\). Equation~\\eqref{eqref:Max power constraint} determines the peak demand of the schedule and prevents energy export to the grid. Equations~\\eqref{eqref:SOC limit}--\\eqref{eqref:SOC value} represent the battery-related constraints. Equations~\\eqref{eqref:Scheduled power}--\\eqref{eqref:Net export} calculate the net energy imported from the grid, and equation~\\eqref{eqref:Room requirement} ensures the number of rooms engaged by activities stays within the limit. Equation~\\eqref{eqref:Non active time} ensures activities do not start outside office hours or before the first Monday of the month, equation~\\eqref{eqref:Happen once a week} ensures each activity happens once a week, equations~\\eqref{eqref:Duration constraint A}--\\eqref{eqref:Duration constraint B} ensure activities remain active for the necessary duration after starting, equation~\\eqref{eqref:Precedence constraints} ensures that all prerequisite (precedence) activities take place at least a day earlier, and equation~\\eqref{eqref:Start at same time every week} ensures activities recur at the same time every week. The objective is to minimize the peak demand while satisfying all the scheduling constraints. \n\nSince this problem is only used to obtain an upper bound \\(P^{\\text{max}}_1\\) on the maximum demand, it is not necessary to solve it to optimality. 
We use the property that the solution of the LP relaxation of a minimization MILP is a lower bound on the MILP solution (\\(y^{*}_{\\text{MILP}} \\geq y^{*}_{\\text{LP}}\\))~\\cite{Vielma2015}: we solve only the LP relaxation and use its solution in sub-problem 2, thereby improving the tractability of our method.\n\n\n\\subsubsection{Sub-problem 2}\n\\label{subsubsec:Sub-problem 2}\n\\begin{subequations}\n\\begin{flalign}\n\\min_{\\psi_2} \\frac{0.25}{1000}\\sum_{t \\in \\mathcal{T}}\\, P_{t} \\, \\lambda^{\\text{aemo}}_{t} &\n\\end{flalign}\n\\begin{flalign}\n&\\textbf{Subject to, Equations~}\\eqref{eqref:SOC limit}--\\eqref{eqref:Precedence constraints}\\nonumber&\n\\end{flalign}\n\\begin{flalign}\n\\label{eqref:Load limit}\n0 \\leq P_{t} \\leq \\beta^{\\text{mul}} \\,P^{\\text{max}^{*}}_{1} \\quad \\forall \\, t \\in \\mathcal{T} &\n\\end{flalign}\n\\end{subequations}\n\n\\noindent where \\(\\psi_2 = \\left(\\psi_1 \\setminus \\{P^{\\text{max}}_{1}\\} \\right)\\). Using equation~\\eqref{eqref:Load limit}, we ensure that the peak demand does not exceed \\(\\beta^{\\text{mul}} P^{\\text{max}^{*}}_1\\), limiting the demand charge. Intuitively, \\(\\beta^{\\text{mul}} \\geq 1\\) is a multiplier used to change the tightness of the MILP, creating a trade-off between computation time and electricity cost; higher values of \\(\\beta^{\\text{mul}}\\) decrease the computation speed but increase the electricity cost (the objective of the problem), and vice versa. \n\n\\begin{figure*}[t]\n \\centering\n \\subfigure[Difference in the RM and predicted generation weather conditions for Nov \\(4^{\\text{th}}\\) 2020]\n {\n \\label{subfig:RM differences}\n \\includegraphics[width=0.29\\textwidth]{figs\/RM_Exogenous.pdf} \n } \n %\n \\subfigure[RM vs. 
predicted solar generation for Nov \\(4^{\\text{th}}\\) 2020]\n {\n \\label{subfig:RM output}\n \\includegraphics[width=0.29\\textwidth]{figs\/RM_Output.pdf} \n } \n %\n \\subfigure[Scheduled power, predicted net load and NEM spot price for Nov \\(2^{\\text{nd}}\\) 2020]\n {\n \\label{subfig:Optimization output}\n \\includegraphics[width=0.33\\textwidth]{figs\/Competition_Results.pdf}\n }\n \\vspace{-0.4em}\n \\caption{Prediction and optimization results}\n \\label{fig:Figure ref}\n\\end{figure*}\n\n\\section{Results and Discussion}\n\\label{sec:Results and Discussion}\n\nThe simulations were run for November 2020 as per the competition stipulation, with \\(\\Delta t = 15 \\text{ min}\\), \\(|\\mathcal{T}|=2880\\) and other instance information as specified in Section~\\ref{subsec:Instance}. For the scheduling, two \\(\\beta^{\\text{mul}}\\) values were chosen: 1.10 for small instances and 1.15 for large instances. These values were obtained via trial and error. The prediction and optimization algorithms were run in Python 3.8.8 using the Gurobi\\textsuperscript{\\textregistered} solver. 
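The two-stage decomposition can be made concrete on a toy instance small enough to solve by brute force rather than with a MILP solver; all numbers below are invented, and the constant \(0.25\/1000\) energy-conversion factor of the objective is omitted:

```python
# Toy: one activity (2 kW for 2 consecutive slots) scheduled on a 4-slot day.
base = [1.0, 3.0, 2.0, 1.0]         # forecast net base load per slot (kW)
prices = [40.0, 120.0, 80.0, 30.0]  # spot price per slot ($/MWh)
ACT_KW, ACT_LEN = 2.0, 2

def schedules():
    """Yield the net load profile for every feasible activity start time."""
    for start in range(len(base) - ACT_LEN + 1):
        load = list(base)
        for t in range(start, start + ACT_LEN):
            load[t] += ACT_KW
        yield load

# Sub-problem 1: minimise the peak demand over all feasible schedules.
p_max = min(max(load) for load in schedules())

# Sub-problem 2: minimise spot-price cost, with the peak capped at beta * p_max.
BETA = 1.10
cost = min(sum(l * p for l, p in zip(load, prices))
           for load in schedules() if max(load) <= BETA * p_max)
```

Here the peak cap forces the activity into the cheapest feasible slots; in the real sub-problems the same roles are played by the MILP objectives and the capped-demand constraint.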
\n\n\\begin{table}[t]\n  \\caption{Baseload and PV generation prediction NRMSE}\n  \\centering\n  \\begin{tabular}{ccc|ccc}\n    \\toprule\n    \\rowcolor{AdOrange} \\textbf{Building \\#} & \\textbf{Proposed} &\n    \\textbf{BI} & \\textbf{Solar \\#} & \\textbf{Proposed} & \\textbf{BI} \\\\\n    \\midrule\n    0 & \\textbf{0.43} & 0.43 & 0 & \\textbf{0.72} & 0.81 \\\\\n    \\rowcolor{AdOrangeLight} 1 & \\textbf{0.31} & 0.34 & 1 & \\textbf{0.77} & 0.77 \\\\\n    2 & \\textbf{0.23} & 0.24 & 2 & \\textbf{0.74} & 0.79 \\\\\n    \\rowcolor{AdOrangeLight} 3 & \\textbf{0.32} & 0.32 & 3 & \\textbf{0.74} & 0.75 \\\\\n    4 & \\textbf{0.40} & 0.40 & 4 & \\textbf{0.78} & 0.79 \\\\\n    \\rowcolor{AdOrangeLight} 5 & \\textbf{0.13} & 0.14 & 5 & \\textbf{0.72} & 0.75 \\\\\n    \\bottomrule\n  \\end{tabular}\n  \\label{tab:mae}\n\\end{table}\n\nSince each of the twelve datasets has a different scale, we use the normalised root mean square error (NRMSE), which facilitates comparison among time series with varying magnitudes \\cite{das2018evaluation}. The metric is calculated as follows:\n\\begin{flalign}\n\\label{eq:NRMSE}\n& \\text{NRMSE} = \\sqrt{\\frac{1}{|\\mathcal{T}|}\\sum_{t \\in \\mathcal{T}} \\left(\\hat{y}_t - y_t\\right)^2} \\times \\frac{1}{\\bar{y}}\n\\end{flalign}\nwhere $\\hat{y}_t$ is the model prediction at time $t$, $y_t$ is the actual value and $\\bar{y}$ is the mean of the actual data.\n\nTo demonstrate the effectiveness of our methods, we compare the proposed techniques with the best individual (BI) prediction models mentioned in section \\ref{subsec:Related Work}. Please note that the BI models are picked via out-of-sample validation on the actual datasets using the same inputs and training length as the proposed methods. Table~\\ref{tab:mae} displays the NRMSE on the actual November 2020 data for the proposed and BI models. It can be seen that the proposed models achieve lower errors on almost all of the twelve profiles, except for the demand of buildings 0 and 4. 
As mentioned in section \\ref{subsec:baseload forecast}, these two profiles are predicted using either RF or GB due to their non-cyclic behavior. Therefore, they have the highest errors among all the demand profiles. For the PV generation time series, the proposed method also achieves better prediction accuracy with quite consistent NRMSE, owing to the high dependence of solar PV generation on weather data.\n\nFigs.~\\ref{subfig:RM differences} and \\ref{subfig:RM output} illustrate the intuition behind the RM method for PV generation; the former shows the difference between the RM \nand the predicted-day weather conditions for November 4\\textsuperscript{th}, 2020, while the latter shows the RM and the predicted generation profiles for the same day. We can see that where the difference in temperature is positive, the predicted generation is lower than the RM, and vice versa. The ResNet is then trained with the changes in weather conditions to learn how to reshape the RM to predict solar generation.\n\nFig.~\\ref{subfig:Optimization output} shows the power scheduled for the small and large instances, the predicted load and the NEM spot price profiles for November \\(2^{\\text{nd}}\\) 2020. It can be seen that during lower-price periods (intervals 24--72), most of the load is scheduled, and the scheduled load is clipped to the value obtained from sub-problem 1. During high-price periods (intervals 72--88), the scheduled load is lower than the predicted baseload due to battery operation. This ensures that the electricity cost is minimized. 
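For reference, the NRMSE metric (the RMSE normalised by the mean of the actual series) is straightforward to compute:

```python
from math import sqrt

def nrmse(y_hat, y):
    """Root mean square error divided by the mean of the actual data."""
    n = len(y)
    rmse = sqrt(sum((p - a) ** 2 for p, a in zip(y_hat, y)) / n)
    return rmse / (sum(y) / n)
```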
\n\n\\begin{table}[b]\n  \\caption{Electricity cost from different forecasting results}\n  \\centering\n  \\begin{tabular}{c|ccc}\n    \\toprule\n    \\rowcolor{AdOrange} \\textbf{Forecast methods} &\n    \\textbf{Base case} & \\textbf{BI} & \\textbf{Proposed} \\\\\n    \\midrule\n    \\rowcolor{AdOrangeLight} \\textbf{Cost (\\$AUD)} & 589356 & 466652 & \\textbf{357210} \\\\\n    \\bottomrule\n  \\end{tabular}\n  \\label{tab:opti_cost}\n\\end{table}\n\nThe organizers provided a base schedule and electricity cost for the problem based on no predictions and an \\(\\epsilon\\)-greedy algorithm \\cite{IEEECIS}. Table~\\ref{tab:opti_cost} compares the electricity cost obtained from three different forecast methods, where the base case is the cost provided by the organizers, while the BI and proposed costs are optimized using their respective forecasts. Our methodology reduced the cost by roughly 23\\% and 40\\% compared to the BI and base cases, respectively. We were able to solve small instances (158400 binaries) in an average time of 447 seconds per instance and large instances (590400 binaries) in 1550 seconds per instance. Even though the problem size grew by a factor of 3.73, the solution time increased almost linearly with problem size rather than exponentially, as is typically the case for MIPs. This further demonstrates the tractability and scalability of our algorithm. \n\n\n\\section{Conclusion}\n\\label{sec:Conclusion}\n\nIn this paper, we developed an optimal scheduling solution for a real-world electricity cost minimization problem covering six buildings at Monash University in Melbourne, Australia. A two-step solution was presented, consisting of a forecasting step and an optimization step. Our solution with energy forecasts reduced the total cost of electricity by roughly 40\\%. 
We would like to highlight that we placed fifth among the 50 participants in the competition, and were within 4\\% of the best solution with respect to the objective value, a gap attributable to their better forecasts.\nFor future work, the forecasting of baseload and solar generation can be improved by using\nmodern machine learning techniques, such as transformers and info-GANs.\nFor the optimization, a major improvement would be devising an algorithm to select the optimal \\(\\beta^{\\text{mul}}\\), considering the trade-off between the tightness (complexity) of the problem and cost minimization. \n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n Cooperative communication via relaying is an effective measure to\n ensure reliability, provide high throughput and extend network\n coverage. It has been intensively studied in LTE-Advanced \\cite{4G-relay} and will continue to play an important role in\n future fifth generation wireless networks. In the conventional\n two-hop one-way relay channel (OWRC), the communication takes two orthogonal phases and suffers a loss of spectral efficiency because of the\n inherent half-duplex (HD) constraint at the relay. Two-way relaying using the principle of physical layer network coding has been proposed to\n recover this loss in the OWRC by allowing the two sources to exchange\n information more efficiently \\cite{KRI}\\cite{Zhang-2Phase}\\cite{Qiang-Li-2009}. In the first phase of the two-way relay channel (TWRC), both sources\n transmit signals to the relay. In the second phase, the relay does not separate the signals\n but rather broadcasts the processed mixed signal to both sources.\n Each source can subtract its own message from the received signal\n and then decode the information from the other source. 
The benefit of the TWRC is that,\n using the same two communication phases as the OWRC, bi-directional communication is achieved.\n\n However, note that the relay in the TWRC still operates in the HD mode, so two communication phases\n are needed. Motivated by this observation, and thanks to emerging full-duplex (FD) techniques,\n we aim to study the potential of the FD operation\n in the TWRC to enable simultaneous information\n exchange between the two sources, i.e., only one communication phase is required.\n In the proposed scheme, all nodes, including the two sources\nand the relay, work in the FD mode so they can transmit and receive\nsignals simultaneously \\cite{BLISS}\\cite{BLISS1}. However, the major\nchallenge in the FD operation is that the power of the\nself-interference (SI) from the relay output could be over 100 dB\nhigher than that of the signal received from distant sources and\nwell exceeds the dynamic range of the analog-to-digital converter\n\\cite{RII3,RII2,DAY}. {\\bl Therefore it is important that the SI is\nsufficiently mitigated. SI cancellation can be broadly\ncategorized into passive cancellation and active cancellation\n\\cite{Sabharwal-inband}. Passive suppression isolates the\ntransmit and receive antennas using techniques such as directional\nantennas, absorptive shielding and cross-polarization\n\\cite{Sabharwal-passive}. Active suppression exploits a node's\nknowledge of its own transmit signal to cancel the SI, and\ntypically includes analog cancellation, digital cancellation and\nspatial cancellation. Experimental results reported in\n\\cite{Duarte-Exp} show that the SI can be cancelled well enough to make FD wireless\ncommunication feasible in many cases. } In the recent work of\n\\cite{Stanford-FD} and \\cite{Stanford-FD-MIMO}, promising\nresults show that the SI can be suppressed to the noise level in\nboth single-antenna and multi-antenna cases. 
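The idea behind spatial-domain SI suppression can be sketched numerically: when the relay has more receive antennas than SI streams, projecting the received vector onto the orthogonal complement of the SI channel's column space removes the SI exactly. This toy numpy example (not the relay design developed in this paper; dimensions are arbitrary) illustrates the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
M_R, M_T = 4, 2    # receive/transmit antennas at the relay (illustrative)

# Residual SI channel and one desired source-to-relay channel.
H_RR = rng.normal(size=(M_R, M_T)) + 1j * rng.normal(size=(M_R, M_T))
h_AR = rng.normal(size=(M_R, 1)) + 1j * rng.normal(size=(M_R, 1))

# Projector onto the orthogonal complement of H_RR's column space:
# Pi_perp = I - H (H^H H)^{-1} H^H.
Pi = H_RR @ np.linalg.inv(H_RR.conj().T @ H_RR) @ H_RR.conj().T
Pi_perp = np.eye(M_R) - Pi

x_R = rng.normal(size=(M_T, 1))        # relay's own transmit signal
si_after = Pi_perp @ (H_RR @ x_R)      # SI after spatial filtering: ~0
desired_after = Pi_perp @ h_AR         # desired channel survives, attenuated
```

The price of this simple receive-side nulling is the loss of the signal dimensions occupied by the SI, which motivates the joint transmit\/receive designs reviewed next.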
Below we provide a\nreview of the application of FD relaying to both the OWRC and the\nTWRC.\n\n\n\\subsection{Literature on FD relaying}\n\n\\subsubsection{OWRC}\nThe FD operation has attracted considerable {\\bl research interest} for\nrelay-assisted cooperative communication. It is shown in \\cite{RII3}\nthat FD relaying is feasible even in the presence of the SI and\ncan offer higher capacity than the HD mode. Multiple-input\nmultiple-output (MIMO) techniques provide an effective means of\nsuppressing the SI in the spatial domain\n\\cite{RII1}-\\cite{MIMO-relay-HD}. The authors of \\cite{RII1} analyze\na wide range of SI mitigation measures when the relay has multiple\nantennas, including natural isolation, time-domain cancellation and\nspatial-domain suppression. { FD relay selection is studied in\n\\cite{KRIKIDIS} for amplify-and-forward (AF) cooperative\nnetworks.} With multiple transmit or receive antennas at the\nfull-duplex relay, precoding at the transmitter and decoding at the\nreceiver can be jointly optimized to mitigate the SI effects. The\njoint precoding and decoding design for FD relaying is studied\nin \\cite{RII1}, where both zero-forcing (ZF) solutions and\nminimum mean square error (MMSE) solutions are discussed. When\nonly the relay has multiple antennas, a joint design of ZF precoding\nand decoding vectors is proposed in \\cite{CHOI_EL} to null out the\nSI at the relay. However, this design does not take into account the\nend-to-end (e2e) performance. A gradient projection method is\nproposed in \\cite{Park-null-12} to optimize the precoding and\ndecoding vectors considering the e2e performance. 
When all terminals\nhave multiple antennas, the e2e performance is optimized in\n\\cite{MIMO-relay-HD}, where closed-form solutions for the\nprecoding\/decoding vector design as well as a diversity analysis are\nprovided.\n\n\\subsubsection{TWRC}\n\n In early work on the TWRC, the FD operation was often employed to\n investigate the capacity region from the information-theoretic viewpoint, without considering the effects of the SI \\cite{capacity-twrc}\\cite{capacity-twrc-half-bit}.\n Only recently has the SI been taken into account in the FD TWRC.\n In \\cite{FD-two-phase}, the FD operation is introduced at the relay, but two one-way relaying phases are required to achieve the two-way communication and avoid\n interference.\n A physical layer network coding FD TWRC is proposed in \\cite{FD-practical}, where the bit error\nrate is derived. An optimal relay selection scheme is proposed and\nanalyzed in \\cite{Song-FD-TW-Selection} for the FD TWRC using the AF\nprotocol. Transmit power optimization among the source nodes and the\nrelay node is studied in \\cite{FD-TW-SISO}, again using the AF\nprotocol. However, all existing works are restricted to the single\ntransmit\/receive antenna case at the relay; thus the potential of\nusing multiple antennas to suppress the SI and improve the e2e\nperformance of the TWRC has not been fully explored.\n\n\\subsection{Our work and contribution}\n In this work, we study the potential of the MIMO FD operation\n in the TWRC, where the relay has multiple transmit\/receive antennas\n and employs the AF protocol and the principle of physical layer network coding. 
The two sources each have a single transmit\/receive antenna.\n We jointly optimize the relay beamforming matrix and the power\n allocation at the sources to maximize the e2e performance.\n Specifically, we study two problems: one is to find the achievable rate\n region and the other is to maximize the sum rate of the FD TWRC.\n Our contributions are summarized as follows:\n \\begin{itemize}\n \\item We derive the signal model for the MIMO FD TWRC and\n propose to use a ZF constraint at the relay to greatly simplify the model\n and the problem formulations.\n \\item We find the rate region by maximizing one source's rate\n subject to a constraint on the other source's minimum rate. We propose an\n iterative algorithm together with a 1-D search to find a locally\n optimal solution. At each iteration, we give analytical solutions for the transmit beamforming vector and the power control.\n \\item We tackle the sum rate maximization problem by employing a similar iterative algorithm.\n At each iteration, we propose to use the DC (difference of convex functions) approach to optimize the transmit beamforming vector, and we solve the power allocation analytically.\n \\item We conduct extensive simulations to compare the proposed FD\n scheme with three benchmark schemes and clearly show its\n advantages in enlarging the rate region and improving the sum rate.\n \\end{itemize}\n\n\n\n\nThe rest of the paper is organized as follows. In Section II, we\npresent the system model, the explicit signal model and the problem\nformulations. In Section III, we deal with the problem of finding\nthe achievable rate region; in Section IV, we address the problem of\nmaximizing the sum rate. Three benchmark schemes are introduced in\nSection V. Simulation results are presented in Section VI. 
Section\nVII concludes this paper and gives future directions.\n\n\n\n\n \\noindent \\emph{Notation}: Lowercase and\nuppercase boldface letters (e.g., ${\\bf x}$ and ${\\bf X}$) indicate column\nvectors and matrices, respectively. ${\\bf X}\\in \\mathbb{C}^{M\\times N}$\ndenotes a complex matrix ${\\bf X}$ of dimension $M\\times N$.\n ${\\bf I}$ is the identity matrix.\n We use $(\\cdot)^\\dagger$ to denote the conjugate\ntranspose, $\\textsf{trace}(\\cdot)$ is the trace operation, and $\\|\\cdot\\|$ is\nthe Frobenius norm. $|\\cdot|$ represents the absolute value of a\nscalar. ${\\bf X}\\succeq {\\bf 0}$ denotes that the Hermitian matrix ${\\bf X}$\nis positive semidefinite. The expectation operator is denoted by\n$\\mathcal{E}(\\cdot)$. Define $\\Pi_{\\bf X} =\n{\\bf X}({\\bf X}^\\dag{\\bf X})^{-1}{\\bf X}^\\dag$ as the orthogonal projection onto the\ncolumn space of ${\\bf X}$, and $\\Pi_{\\bf X}^\\bot = {\\bf I} - \\Pi_{\\bf X}$ as the\northogonal projection onto the orthogonal complement of the column\nspace of ${\\bf X}$.\n\n\n\n\n\n\\section{System model, signal model and problem statement}\n\\subsection{System model}\\label{GENERAL_system_model}\nConsider a three-node MIMO relay network consisting of two sources\n${\\tt A}$ and ${\\tt B}$ who want to exchange information with the aid of a\nMIMO relay ${\\tt R}$, as depicted in Fig. 1. {\\bl There is no direct\nlink between the two sources because of deep fading or heavy\nshadowing, so their communication must rely on ${\\tt R}$.} All nodes\nwork in the FD mode. To enable the FD operation, each source is\nequipped with two groups of RF chains and corresponding antennas,\ni.e., one for transmitting and one for receiving signals\\footnote{It\nis also possible to realize the FD operation using a single antenna,\nsee \\cite{Stanford-FD}.}. We assume that each source has one\ntransmit antenna and one receive antenna. 
We use $M_{T}$ and $M_{R}$\nto denote the number of transmit and receive antennas at ${\\tt R}$,\nrespectively. We assume $M_T>1$ or $M_R>1$ to help suppress the\nresidual SI at ${\\tt R}$ in the spatial domain.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{MIMO_FD_Relay}\n \\caption{Full-duplex TWRC with two sources and a MIMO\nrelay. The dashed line denotes the residual self-interference.}\n\\label{FIGG1}\n\\end{figure}\n We use ${\\bf h}_{XR}\\in \\mathbb{C}^{M_R\\times 1}$ and\n${\\bf h}_{RX}\\in\\mathbb{C}^{M_T\\times 1}$ to denote the directional\nchannel vectors from the source node ${\\tt X}$'s (${\\tt X}\\in\n\\{{\\tt A},{\\tt B}\\}$) transmit antenna to ${\\tt R}$'s receive antenna(s), and\nfrom ${\\tt R}$'s transmit antenna(s) to ${\\tt X}$'s receive antenna,\nrespectively. In general, channel reciprocity does not hold, e.g.,\n${\\bf h}_{XR}\\ne {\\bf h}_{RX}$, due to the different transmit and receive\nantennas used. In addition, $h_{AA}, h_{BB}$ and\n${\\bf H}_{RR}\\in\\mathbb{C}^{M_R\\times M_T}$ denote the residual SI\nchannels after the SI cancellation scheme is applied\n at the corresponding nodes \\cite{KRIKIDIS,RII3}. {\\bl The statistics of\nthe residual SI channel are not yet well understood \\cite{SI-stat}.\nThe experiment-based study in \\cite{Duarte-thesis} has\ndemonstrated that the amount of SI suppression achieved by a\ncombined analog\/digital cancellation technique is a complicated\nfunction of several system parameters. Therefore, for simplicity, in\nthis paper we model each element of the residual SI channel as a\nGaussian distributed random variable} with zero mean and variance\n$\\sigma^2_X, {\\tt X}\\in \\{{\\tt A},{\\tt B},{\\tt R}\\}$. All channel links between two\nnodes are subject to independent flat fading. 
We assume that the\nchannel coefficients between different nodes remain constant within\na normalized time slot, but change independently from one slot to\nanother according to a complex Gaussian distribution with zero mean\nand unit variance. The global channel state information (CSI) is\navailable at the relay, where the optimization will be performed. As\nwill be seen in Fig. \\ref{fig:sumrate:csi}, this is critical for the\nrelay to adapt its beamforming matrix and for the two sources to\nadjust their transmit power. The noise at node ${\\tt X}$'s (${\\tt X}\\in\n\\{{\\tt A},{\\tt B}, {\\tt R}\\}$) receive antenna(s) is denoted by $n_X$ (${\\bf n}_X$)\nand modeled as complex additive white Gaussian noise with zero mean\nand unit variance.\n\nSince each source has only a single transmit and receive antenna, single\ndata streams $S_A$ and $S_B$ are transmitted from ${\\tt A}$ and\n${\\tt B}$, with average powers $p_A$ and $p_B$, respectively. To keep\nthe complexity low, ${\\tt R}$ employs linear processing, i.e., the AF\nprotocol with an amplification matrix ${\\bf W}$, to process the\nreceived signal. Node ${\\tt X}$ has a maximum transmit power\nconstraint $P_X$. We will jointly optimize the transmit power of the\ntwo source nodes, $p_A$ and $p_B$, together with the amplification\nmatrix at the relay to maximize the e2e system performance.\n\n\n {\\bbl The overhead due to CSI acquisition at each node is analyzed as follows. The received CSI\n${\\bf h}_{AR}$ and ${\\bf h}_{BR}$ at ${\\tt R}$ can be estimated by ${\\tt A}$ and\n${\\tt B}$ each sending one pilot symbol separately. The SI channel\n${\\bf H}_{RR}$ can be estimated at ${\\tt R}$ by itself sending an\n$M_T$-symbol pilot sequence. The SI channels $h_{AA}$ and $h_{BB}$\n are estimated similarly and then sent back to\n${\\tt R}$. 
Regarding the transmit CSI ${\\bf h}_{RA}$ and ${\\bf h}_{RB}$, ${\\tt R}$\nfirst broadcasts an $M_T$-symbol pilot sequence (which can be used\nfor the estimation of ${\\bf H}_{RR}$ simultaneously), then ${\\tt A}$ and\n${\\tt B}$ feed back their estimates to ${\\tt R}$. In addition, after ${\\tt R}$\nperforms the optimization and obtains ${\\bf W}$, $p_A$ and $p_B$, it\ntransmits ${\\bf h}_{RA}^\\dag{\\bf W} {\\bf h}_{AR}$ and ${\\bf h}_{RB}^\\dag{\\bf W}\n{\\bf h}_{BR}$ to ${\\tt A}$ and ${\\tt B}$, respectively, such that they can\ncancel their previously sent symbols. Finally, ${\\tt R}$ informs ${\\tt A}$\nand ${\\tt B}$ of their transmit powers $p_A$ and $p_B$, respectively.}\n\n\n\n\\subsection{Signal model}\n We assume that the processing delay at ${\\tt R}$ is given by {\\bbl a\n $\\tau$-symbol duration}, which refers to the processing time\nrequired to implement the FD operation \\cite{RII1}.\n$\\tau$ typically takes integer values. The delay is short enough\ncompared to a time slot, which contains a large number of data symbols,\nso its effect on the achievable rate is negligible. At the\ntime ({\\bbl symbol}) instance $n$, the received signal ${\\bf r}[n]$ and\nthe transmit signal ${\\bf x}_R[n]$ at ${\\tt R}$ can be written as\n \\begin{equation}\\label{eqn:rn}\n {\\bf r}[n] = {\\bf h}_{AR}S_A[n] + {\\bf h}_{BR}S_B[n] + {\\bf H}_{RR} {\\bf x}_R[n] + {\\bf n}_R[n],\n \\end{equation} \\vspace{-3mm} and\n \\begin{equation}\\label{eqn:xn}\n {\\bf x}_R[n] = {\\bf W}{\\bf r}[n-\\tau],\n \\end{equation}\nrespectively. 
Using (\\ref{eqn:rn}) and (\\ref{eqn:xn}) recursively,\nthe overall relay output can be rewritten as \\small\n \\begin{eqnarray}} \\newcommand{\\eea}{\\end{eqnarray}\\label{eqn:xn2}\n {\\bf x}_R[n] &=& {\\bf W} {\\bf h}_{AR}S_A[n-\\tau] + {\\bf W}{\\bf h}_{BR}S_B[n-\\tau] \\notag\\\\ &&+ {\\bf W}{\\bf H}_{RR} {\\bf x}_R[n-\\tau] + {\\bf W}{\\bf n}_R[n-\\tau]\\notag\\\\\n &=& {\\bf W}\\sum_{j=0}^{\\infty}\\left({\\bf H}_{RR}{\\bf W}\\right)^j \\Big({\\bf h}_{AR}S_A[n-j\\tau-\\tau] \\notag\\\\ && + {\\bf h}_{BR}S_B[n-j\\tau-\\tau]+ {\\bf n}_R[n-j\\tau-\\tau] \\Big),\n \\eea \\normalsize\n {\\bl where $j$ denotes the index of the delayed symbols}.\n\n Its covariance matrix is given by\\small\n \\begin{eqnarray}} \\newcommand{\\eea}{\\end{eqnarray}\n &&{\\mathcal{E}}[{\\bf x}_R{\\bf x}_R^\\dag] = P_A {\\bf W} \\sum_{j=0}^{\\infty}\\left({\\bf H}_{RR}{\\bf W}\\right)^j {\\bf h}_{AR}{\\bf h}_{AR}^\\dag\n \\left( \\left({\\bf H}_{RR}{\\bf W}\\right)^j\\right)^\\dag{\\bf W}^\\dag \\notag\\\\\n &&+P_B {\\bf W} \\sum_{j=0}^{\\infty}\\left({\\bf H}_{RR}{\\bf W}\\right)^j {\\bf h}_{BR}{\\bf h}_{BR}^\\dag\n \\left( \\left({\\bf H}_{RR}{\\bf W}\\right)^j\\right)^\\dag{\\bf W}^\\dag\\notag\\\\\n && + {\\bf W} \\sum_{j=0}^{\\infty} ({\\bf H}_{RR}{\\bf W}\\qW^\\dag {\\bf H}_{RR}^\\dag )^j {\\bf W}^\\dag\\notag\\\\\n &&= P_A {\\bf W} \\sum_{j=0}^{\\infty}\\left({\\bf H}_{RR}{\\bf W}\\right)^j {\\bf h}_{AR}{\\bf h}_{AR}^\\dag\n \\left( \\left({\\bf H}_{RR}{\\bf W}\\right)^j\\right)^\\dag{\\bf W}^\\dag \\notag\\\\\n &&+ P_B {\\bf W} \\sum_{j=0}^{\\infty}\\left({\\bf H}_{RR}{\\bf W}\\right)^j {\\bf h}_{BR}{\\bf h}_{BR}^\\dag\n \\left( \\left({\\bf H}_{RR}{\\bf W}\\right)^j\\right)^\\dag{\\bf W}^\\dag\\notag\\\\\n &&+{\\bf W}({\\bf I}-{\\bf H}_{RR}{\\bf W}\\qW^\\dag{\\bf H}_{RR}^\\dag)^{-1}{\\bf W}^\\dag.\n\\eea\\normalsize\n\n Note that ${\\tt R}$'s transmit signal covariance $\\mathcal{E}[{\\bf x}_R{\\bf x}_R^\\dag]$, and in turn the transmit power and the SI power, are complicated 
functions of ${\\bf W}$, which makes the optimization problems difficult.\n To simplify the signal model and make the optimization problems more tractable, we add the ZF constraint such that the optimization of ${\\bf W}$\n nulls out the residual SI from the relay output to relay\ninput. To realize this, it is easy to check from (\\ref{eqn:xn2})\nthat the following condition is sufficient, \\begin{equation}} \\newcommand{\\ee}{\\end{equation}\n {\\bf W}{\\bf H}_{RR}{\\bf W}={\\bf 0}.\n \\ee\n Consequently, (\\ref{eqn:xn2}) becomes\n {\\small\\begin{eqnarray}} \\newcommand{\\eea}{\\end{eqnarray}\\label{eqn:xn3}\n {\\bf x}_R[n] = {\\bf W} \\left({\\bf h}_{AR}S_A[n-\\tau] + {\\bf h}_{BR}S_B[n-\\tau] +\n {\\bf n}_R[n-\\tau]\\right),\n \\eea}\n with the covariance matrix\n {\\small\n \\begin{eqnarray}} \\newcommand{\\eea}{\\end{eqnarray}\n {\\mathcal{E}}[{\\bf x}_R{\\bf x}_R^\\dag] = p_A {\\bf W}{\\bf h}_{AR}{\\bf h}_{AR}^\\dag{\\bf W}^\\dag + p_B {\\bf W}{\\bf h}_{BR}{\\bf h}_{BR}^\\dag{\\bf W}^\\dag + {\\bf W}\\qW^\\dag.\n\\eea}\n The relay output power is\n \\begin{align}\n p_R &= \\textsf{trace}( {\\mathcal{E}}[{\\bf x}_R{\\bf x}_R^\\dag])\\notag\\\\\n &= p_A\\|{\\bf W} {\\bf h}_{AR}\\|^2 + p_B\\|{\\bf W} {\\bf h}_{BR}\\|^2 + \\textsf{trace}({\\bf W}\\qW^\\dag).\n \\end{align}\n\nThe received signal at the source ${\\tt A}$ can be written as\n \\begin{eqnarray}} \\newcommand{\\eea}{\\end{eqnarray}\n r_A[n] &=& {\\bf h}_{RA}^\\dag{\\bf x}_R[n] + h_{AA}S_A[n] + n_A[n]\\notag\\\\\n &=& {\\bf h}_{RA}^\\dag{\\bf W} {\\bf h}_{AR}S_A[n-\\tau] +{\\bf h}_{RA}^\\dag{\\bf W} {\\bf h}_{BR}S_B[n-\\tau] \\notag\\\\&& + {\\bf h}_{RA}^\\dag{\\bf W}{\\bf n}_R[n] + h_{AA}S_A[n] + n_A[n].\n \\eea\nAfter cancelling its own transmitted signal $S_A[n-\\tau]$, it\nbecomes\\footnote{Note that different from $S_A[n-\\tau]$, $S_A[n]$\ncan not be completely cancelled due to the simultaneous\ntransmission, which is the main challenge of the FD radio.}\n \\begin{eqnarray}} 
\\newcommand{\\eea}{\\end{eqnarray}\n r_A[n] &=& {\\bf h}_{RA}^\\dag{\\bf W} {\\bf h}_{BR}S_B[n-\\tau] + {\\bf h}_{RA}^\\dag{\\bf W}{\\bf n}_R[n]\\notag\\\\\n && + h_{AA}S_A[n] + n_A[n].\n \\eea\n The received signal-to-interference-plus-noise ratio (SINR) at the source ${\\tt A}$, denoted as $\\gamma_A$, is expressed as\n \\begin{eqnarray}} \\newcommand{\\eea}{\\end{eqnarray}\\label{eqn:hi1}\n \\gamma_A &=& \\frac{p_B|{\\bf h}_{RA}^\\dag{\\bf W} {\\bf h}_{BR}|^2}{ \\|{\\bf h}_{RA}^\\dag{\\bf W}\\|^2\n + p_A|h_{AA}|^2+ 1}.\n \\eea\n Similarly, the received SINR $\\gamma_B$ at the source ${\\tt B}$ can be\n written as\n \\begin{eqnarray}} \\newcommand{\\eea}{\\end{eqnarray}\\label{eqn:hi2}\n \\gamma_B &=& \\frac{p_A|{\\bf h}_{RB}^\\dag{\\bf W} {\\bf h}_{AR}|^2}{ \\|{\\bf h}_{RB}^\\dag{\\bf W}\\|^2\n + p_B|h_{BB}|^2+ 1}.\n \\eea\n The achievable rates are then given by $R_A = \\log_2(1+\\gamma_A)$ and $R_B =\n \\log_2(1+\\gamma_B)$, respectively.\n\n\\subsection{Problems Statement}\nThe conventional physical layer analog network coding scheme\nrequires two phases for ${\\tt A}$ and ${\\tt B}$ to exchange information\n\\cite{Zhang-2Phase}. Thanks to the FD operation, the proposed\nscheme reduces the whole communication to only one phase thus\nsubstantially increases the spectrum efficiency. However, the FD\noperation also brings the SI to each node so they may not always\nuse their maximum power because higher transmit power also increases\nthe level of the residual SI, therefore each node needs to carefully\nchoose its transmit power.\n\n\nWe are interested in two e2e objectives subject to each source's\npower constraints by optimizing the relay beamforming and power\nallocation at each source. The first one is to find the achievable\nrate region $(R_A, R_B)$. 
This can be achieved by maximizing source
${\tt A}$'s rate while varying the constraint on source ${\tt B}$'s
minimum rate (or vice versa), i.e.,
 solving the rate maximization problem $\mathbb{P}_1$ below:
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}\label{eqn:prob1}
 \mathbb{P}_1: \max_{{\bf W}, p_A, p_B, p_R}{R_A} ~~ \mbox{s.t.}~~ R_B\ge r_B, p_X\le P_X,
 X\in\{A,B,R\},\notag
 \eea
 where $r_B$ is the constraint on source ${\tt B}$'s rate. By enumerating
 $r_B$, we can find the boundary of the achievable rate region.

The second problem is to maximize the sum rate of the two sources.
Mathematically, this problem is formulated as $\mathbb{P}_2$ below:
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}\label{eqn:prob2}
 \mathbb{P}_2: \max_{{\bf W}, p_A, p_B, p_R}{R_A+R_B} ~~ \mbox{s.t.}~~ p_X\le P_X,
 X\in\{A,B,R\}.\notag
 \eea
 The next two sections will be devoted to solving $\mathbb{P}_1$ and
 $\mathbb{P}_2$, respectively.



\section{Finding the achievable rate region}
In this section, we aim to optimize the relay beamforming matrix
${\bf W}$ and the sources' transmit power $(p_A, p_B)$ to find the
achievable rate region. This can be achieved by solving
$\mathbb{P}_1$.
Using the monotonicity between the SINR and the\nrate, it can be expanded as {\\small\n \\begin{eqnarray}} \\newcommand{\\eea}{\\end{eqnarray}\\label{eqn:fd2}\n && \\max_{{\\bf W}, p_A, p_B} ~~ \\frac{p_B|{\\bf h}_{RA}^\\dag{\\bf W} {\\bf h}_{BR}|^2}{ \\|{\\bf h}_{RA}^\\dag{\\bf W}\\|^2\n + p_A|h_{AA}|^2+ 1}\\\\\n && \\mbox{s.t.} ~~ \\frac{p_A|{\\bf h}_{RB}^\\dag{\\bf W} {\\bf h}_{AR}|^2}{ \\|{\\bf h}_{RB}^\\dag{\\bf W}\\|^2\n + p_B|h_{BB}|^2+ 1} \\ge \\Gamma_B,\\label{cons1}\\\\\n && p_A\\|{\\bf W} {\\bf h}_{AR}\\|^2 + p_B\\|{\\bf W} {\\bf h}_{BR}\\|^2 +\\textsf{trace}({\\bf W}\\qW^\\dag) \\le P_R,\\label{cons2}\\\\\n && {\\bf W}{\\bf H}_{RR}{\\bf W}={\\bf 0}, \\notag\\\\\n && 0\\le p_A\\le P_A, 0\\le p_B\\le P_B,\\notag\n \\eea}\n {\\bl where $\\Gamma_B\\triangleq 2^{r_B}-1$ is the equivalent SINR constraint for the source ${\\tt B}$.}\n Observe that all terms in \\eqref{eqn:fd2} are quadratic in ${\\bf W}$\n except the ZF constraint ${\\bf W}{\\bf H}_{RR}{\\bf W}={\\bf 0}$, which is\n difficult to handle. Considering the fact that each source only transmits a single data\n stream and the network coding principle encourages mixing rather\n than separating the data streams from different sources, we\n decompose ${\\bf W}$ as ${\\bf W} = {\\bf w}_t{\\bf w}_r^\\dag$, where\n${\\bf w}_r$ is the receive beamforming vector and ${\\bf w}_t$ is the\ntransmit beamforming vector at ${\\tt R}$. Then the ZF condition is\nsimplified to $({\\bf w}_r^\\dag{\\bf H}_{RR}{\\bf w}_t) {\\bf W} ={\\bf 0}$ or\nequivalently ${\\bf w}_r^\\dag{\\bf H}_{RR}{\\bf w}_t=0$ because in general ${\\bf W}\\ne\n{\\bf 0}$. Without loss of optimality, we further assume\n$\\|{\\bf w}_r\\|=1$. 
As a result, the problem \eqref{eqn:fd2} can be
simplified to
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}\label{eqn:fd711}
 \max_{{\bf w}_r, {\bf w}_t, p_A, p_B} && \frac{p_B|{\bf h}_{RA}^\dag {\bf w}_t|^2|{\bf w}_r^\dag {\bf h}_{BR}|^2}{ |{\bf h}_{RA}^\dag {\bf w}_t|^2 + p_A|h_{AA}|^2 + 1} \\
 \mbox{s.t.} && \frac{p_A|{\bf h}_{RB}^\dag {\bf w}_t|^2 |{\bf w}_r^\dag {\bf h}_{AR}|^2}{ |{\bf h}_{RB}^\dag {\bf w}_t|^2 + p_B|h_{BB}|^2+
 1} \ge \Gamma_B,\notag\\
 && p_A\|{\bf w}_t\|^2 |{\bf w}_r^\dag{\bf h}_{AR}|^2 + p_B\|{\bf w}_t\|^2 |{\bf w}_r^\dag {\bf h}_{BR}|^2 \notag\\ &&+ \|{\bf w}_t\|^2 \le P_R,\notag\\
 && {\bf w}_r^\dag{\bf H}_{RR}{\bf w}_t=0, \notag\\
 && \|{\bf w}_r\|=1,\notag\\
 && 0\le p_A\le P_A, 0\le p_B\le P_B.\notag
 \eea
 Note that in order to guarantee the feasibility of the ZF
 constraint ${\bf w}_r^\dag{\bf H}_{RR}{\bf w}_t=0$, ${\tt R}$ only needs
 at least two transmit antennas or at least two receive antennas, but not necessarily both.


 The problem \eqref{eqn:fd711} is still quite complicated as the
 variables ${\bf w}_t$, ${\bf w}_r$ and $(p_A, p_B)$ are coupled. Our idea
 to tackle this difficulty is to use an alternating optimization
 approach, i.e., at each iteration, we optimize one variable while
 keeping the others fixed, together with a 1-D search to find ${\bf w}_r$. Details are given
 below.
\subsection{Parameterization of the receive beamforming vector ${\bf w}_r$}
 Observe that ${\bf w}_r$ is mainly involved in $|{\bf w}_r^\dag
{\bf h}_{BR}|^2$ and
 $|{\bf w}_r^\dag {\bf h}_{AR}|^2$, so it has to balance the signals received
 from the
 two sources.
According to the result in \cite{Jorswieck2008f},
 ${\bf w}_r$ can be parameterized by $0\le \alpha\le 1$ as below:
\begin{equation}} \newcommand{\ee}{\end{equation}\label{eqn:wr} {\bf w}_r =
\sqrt{\alpha}\frac{\Pi_{{\bf h}_{BR}}{\bf h}_{AR}}{\|\Pi_{{\bf h}_{BR}}{\bf h}_{AR}\|} +
 \sqrt{1-\alpha}\frac{\Pi^\bot_{{\bf h}_{BR}}{\bf h}_{AR}}{\|\Pi^\bot_{{\bf h}_{BR}}{\bf h}_{AR}\|}.
 \ee
 {\bl We remark that \eqref{eqn:wr} is not a complete characterization of ${\bf w}_r$ because ${\bf w}_r$ is also involved in the ZF constraint
 ${\bf w}_r^\dag{\bf H}_{RR}{\bf w}_t=0$, but this parameterization makes the
 problem more tractable.}


 Given $\alpha$, we can optimize ${\bf w}_t$ and $(p_A,p_B)$ as will be introduced below. Then we perform a 1-D search to find the
 optimal $\alpha^*$. We will focus on how to separately optimize
 ${\bf w}_t$ and $(p_A, p_B)$ in the following
 two subsections. For the optimization of each variable, we will derive analytical
 solutions without resorting to iterative methods or numerical algorithms.




\subsection{Optimization of the transmit beamforming vector ${\bf w}_t$}
 We first look into the optimization of ${\bf w}_t$ assuming ${\bf w}_r$ and $(p_A,
 p_B)$ are fixed.
Based on the problem \eqref{eqn:fd711}, we get the
 following formulation:
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}\label{eqn:fd71123}
 \max_{{\bf w}_t} && \frac{p_B|{\bf h}_{RA}^\dag {\bf w}_t|^2|{\bf w}_r^\dag {\bf h}_{BR}|^2}{ |{\bf h}_{RA}^\dag {\bf w}_t|^2 + p_A|h_{AA}|^2 + 1} \\
 \mbox{s.t.} && \frac{p_A|{\bf h}_{RB}^\dag {\bf w}_t|^2 |{\bf w}_r^\dag {\bf h}_{AR}|^2}{ |{\bf h}_{RB}^\dag {\bf w}_t|^2 + p_B|h_{BB}|^2+
 1} \ge\Gamma_B,\notag\\
 && p_A\|{\bf w}_t\|^2 |{\bf w}_r^\dag{\bf h}_{AR}|^2 + p_B\|{\bf w}_t\|^2 |{\bf w}_r^\dag {\bf h}_{BR}|^2 \notag\\ &&
 + \|{\bf w}_t\|^2 \le P_R,\notag\\
 && {\bf w}_r^\dag{\bf H}_{RR}{\bf w}_t=0. \notag
 \eea
 By separating the variable ${\bf w}_t$ and using monotonicity, \eqref{eqn:fd71123} is
 simplified to
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}\label{eqn:fd712}
 \max_{{\bf w}_t} && |{\bf h}_{RA}^\dag {\bf w}_t|^2 \\
 \mbox{s.t.}
 && |{\bf h}_{RB}^\dag {\bf w}_t|^2 \ge\frac{\Gamma_B\left(p_B|h_{BB}|^2+1\right)}{p_A|{\bf w}_r^\dag {\bf h}_{AR}|^2
 -\Gamma_B}\triangleq\bar\Gamma_B,
 \notag\\
 && \|{\bf w}_t\|^2 \le \frac{P_R}{p_A|{\bf w}_r^\dag{\bf h}_{AR}|^2 + p_B |{\bf w}_r^\dag {\bf h}_{BR}|^2 +1}\triangleq\bar P,\notag\\
 && {\bf w}_r^\dag{\bf H}_{RR}{\bf w}_t=0. \notag
 \eea
 The problem \eqref{eqn:fd712} is a nonconvex quadratically
 constrained quadratic program in ${\bf w}_t$.
 By defining ${\bf W}_t = {\bf w}_t
 {\bf w}_t^\dag$ and using semidefinite programming relaxation, \eqref{eqn:fd712} becomes a convex problem in ${\bf W}_t$, from which
 the optimal ${\bf w}_t$ can be recovered via rank-one matrix decomposition. Interested readers are referred to \cite{SDPR} for details.
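It is easy to sanity-check numerically that the rearrangement leading to $\bar\Gamma_B$ and $\bar P$ in \eqref{eqn:fd712} preserves the constraints of \eqref{eqn:fd71123}. In the sketch below, the scalars standing in for the $(p_A,p_B)$ powers, the $|{\bf w}_r^\dag{\bf h}|^2$ gains and the SI gain are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
M_t = 3
cvec = lambda n: (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

h_RB, w_t = cvec(M_t), cvec(M_t)
wr_hAR2, wr_hBR2 = 0.8, 0.6      # made-up |w_r^dag h_AR|^2, |w_r^dag h_BR|^2
p_A, p_B, P_R, hBB2 = 1.0, 1.0, 4.0, 0.2
Gamma_B = 0.5
assert p_A * wr_hAR2 > Gamma_B   # otherwise the SINR constraint is infeasible

X = abs(h_RB.conj() @ w_t) ** 2                       # |h_RB^dag w_t|^2
sinr_B = p_A * X * wr_hAR2 / (X + p_B * hBB2 + 1)
bar_Gamma = Gamma_B * (p_B * hBB2 + 1) / (p_A * wr_hAR2 - Gamma_B)
eq_sinr = (sinr_B >= Gamma_B) == (X >= bar_Gamma)     # same feasibility verdict

bar_P = P_R / (p_A * wr_hAR2 + p_B * wr_hBR2 + 1)
relay_pow = (p_A * wr_hAR2 + p_B * wr_hBR2 + 1) * np.linalg.norm(w_t) ** 2
eq_pow = (relay_pow <= P_R) == (np.linalg.norm(w_t) ** 2 <= bar_P)
print(eq_sinr and eq_pow)
```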
However, the special structure of the problem
 \eqref{eqn:fd712} allows us to derive the analytical solution in the following steps.
\begin{enumerate}
 \item If $p_A|{\bf w}_r^\dag {\bf h}_{AR}|^2 \le \Gamma_B$ (so that $\bar\Gamma_B<0$ or is undefined), the
 problem is infeasible; otherwise continue.
 \item Define a basis of the null space of the row vector ${\bf w}_r^\dag{\bf H}_{RR}$
 as $\mathbf{N}_t\in \mathbb{C}^{M_t \times (M_t-1)}$, i.e., ${\bf w}_r^\dag{\bf H}_{RR}\mathbf{N}_t={\bf 0}$. Introduce a new variable ${\bf v}\in \mathbb{C}^{(M_t-1)\times 1}$ and express ${\bf w}_t = \mathbf{N}_t{\bf v}$; then
 we can remove the ZF constraint in \eqref{eqn:fd712} and obtain
 the following equivalent problem:
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}\label{eqn:fd712e}
 \max_{{\bf v}} && |{\bf h}_{RA}^\dag \mathbf{N}_t{\bf v}|^2 \\
 \mbox{s.t.}
 && |{\bf h}_{RB}^\dag \mathbf{N}_t{\bf v}|^2 \ge \bar\Gamma_B
 \label{eqn:constr1} \\
 && \|{\bf v}\|^2 \le \bar P, \notag
 \eea
 where we have used the property that $\mathbf{N}_t^\dag
 \mathbf{N}_t={\bf I}$. If $\bar P\|{\bf h}_{RB}^\dag \mathbf{N}_t\|^2< \bar\Gamma_B$, the problem
 is infeasible; otherwise continue.

 \item In this step we aim to find the closed-form solution for \eqref{eqn:fd712e}. We first solve it without the constraint
 \eqref{eqn:constr1}.
 It can be seen that the power constraint should always be
 satisfied with equality, and the optimal solution is given by
 ${\bf v}^* = \sqrt{\bar P}\frac{\mathbf{N}_t^\dag {\bf h}_{RA}}{\| \mathbf{N}_t^\dag
 {\bf h}_{RA}\|}$.
If it also satisfies the constraint \eqref{eqn:constr1},
 then it is the optimal solution; otherwise continue.
 \item In this step, we know that both constraints in \eqref{eqn:fd712e} should be active, so we reach the
 problem below:
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}\label{eqn:fd713e}
 \max_{{\bf v}} && |{\bf h}_{RA}^\dag \mathbf{N}_t{\bf v}|^2 \\
 \mbox{s.t.}
 && |{\bf h}_{RB}^\dag \mathbf{N}_t{\bf v}|^2 =\bar\Gamma_B
 \label{eqn:constr2}\notag\\
 && \|{\bf v}\|^2 =\bar P. \notag
 \eea
\end{enumerate}
 If we define ${\bf d}_2 \triangleq\frac{\mathbf{N}_t^\dag{\bf h}_{RA}}{\|\mathbf{N}_t^\dag{\bf h}_{RA}\|}, {\bf d}_1\triangleq
 \frac{\mathbf{N}_t^\dag{\bf h}_{RB}}{\|\mathbf{N}_t^\dag{\bf h}_{RB}\|}$, $\phi\in (-\pi,\pi]$ as the argument of
 ${\bf d}_2^\dag{\bf d}_1$, $r\triangleq|{\bf d}_2^\dag{\bf d}_1|$, $q\triangleq \frac{\bar\Gamma_B}{\bar P\|\mathbf{N}_t^\dag{\bf h}_{RB}\|^2}$ and ${\bf z}\triangleq\frac{{\bf v}}{\|{\bf v}\|}$, then we
 have the following formulation:
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}\label{eqn:lem5}
 \max_{{\bf z}}&& {\bf z}^\dag {\bf d}_2{\bf d}_2^\dag{\bf z}, \\
 \mathrm{s.t.}&& {\bf z}^\dag {\bf d}_1{\bf d}_1^\dag{\bf z} = q, \quad \|{\bf z}\|=1.\notag
 \eea

 The optimal solution ${\bf z}^*$ follows from Lemma 2 in
\cite{Li-11} and is given below: \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}{\bf z}^*=
 \left( r\sqrt{\frac{1-q}{1-r^2}}-\sqrt{q}\right) e^{j(\pi-\phi)}
 {\bf d}_1+ \sqrt{\frac{1-q}{1-r^2}}{\bf d}_2.
\eea
 Once we obtain ${\bf z}^*$, the optimal transmit beamforming vector is
 given by
 \begin{equation}} \newcommand{\ee}{\end{equation} {\bf w}_t^* = \mathbf{N}_t {\bf v}^* = \sqrt{\bar P}\mathbf{N}_t{\bf z}^*.
\ee

After obtaining the optimal ${\bf w}_t^*$ using the above procedure, we
can move on to find the optimal power allocation at the sources.


\subsection{Optimization of the source power $(p_A, p_B)$}
 Because of the FD operation at the sources and the fact that each source has a single transmit and receive antenna, the sources cannot suppress the residual SI in the spatial domain
 and therefore cannot always use their full power. In contrast, ${\tt R}$ has at least two transmit or receive antennas, so it can completely eliminate
 the SI and transmit using full power $P_R$. Here we aim to find the optimal power allocation $(p_A, p_B)$ at ${\tt A}$ and ${\tt B}$ assuming both ${\bf w}_t$ and ${\bf w}_r$ are
 fixed.

 For convenience, define $C_{At}\triangleq |{\bf h}_{RA}^\dag {\bf w}_t|^2,
C_{rB}\triangleq |{\bf w}_r^\dag {\bf h}_{BR}|^2, C_{Bt}\triangleq
|{\bf h}_{RB}^\dag {\bf w}_t|^2, C_{rA}\triangleq |{\bf w}_r^\dag {\bf h}_{AR}|^2
$; then \eqref{eqn:fd711} becomes{\small
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}\label{eqn:fd77}
 \max_{p_A, p_B} && \frac{p_B C_{At}C_{rB}}{ C_{At} + p_A|h_{AA}|^2+ 1} \\
 \mbox{s.t.} && \frac{p_A C_{Bt} C_{rA}}{ C_{Bt} + p_B|h_{BB}|^2+
 1} \ge \Gamma_B\label{eqn:linear:SINR}\\
 && p_A\|{\bf w}_t\|^2 C_{rA} + p_B\|{\bf w}_t\|^2 C_{rB} + \|{\bf w}_t\|^2\le P_R,\label{eqn:linear:power}\\
 && 0\le p_A\le P_A, 0\le p_B\le P_B.\notag
 \eea}
The problem \eqref{eqn:fd77} is a linear-fractional programming
problem and can be converted to a linear program
\cite[p. 151]{Boyd}. Again, thanks to its special structure, we can
derive its analytical solution below, step by step.
\begin{enumerate}
 \item First we check whether the constraint \eqref{eqn:linear:SINR} is feasible.
If $P_A C_{Bt} C_{rA} \le \Gamma_B$, then the problem is
 infeasible; otherwise, continue.

 \item Next we solve \eqref{eqn:fd77} by ignoring the constraint
 \eqref{eqn:linear:power}. It is easy to check that at the
 optimum, at least one source should use its maximum power.
 The power allocation depends on two cases:
 \begin{enumerate}
 \item If $ P_A \ge \frac{\Gamma_B( C_{Bt} + P_B|h_{BB}|^2+
 1)}{C_{Bt} C_{rA} } $, then $p_B=P_B$, $p_A = \frac{\Gamma_B( C_{Bt} + P_B|h_{BB}|^2+
 1)}{C_{Bt} C_{rA} }$; otherwise,
 \item $p_A=P_A, p_B = \min\left(P_B, \frac{\frac{p_A C_{Bt} C_{rA}}{\Gamma_B} - 1 - C_{Bt}
 }{|h_{BB}|^2}\right)$.
 \end{enumerate}

 \item We check whether the solution obtained above satisfies the constraint \eqref{eqn:linear:power}. If it does, then it is the optimal solution. Otherwise
 the constraint \eqref{eqn:linear:power} should be met with
 equality.

 \item The optimal power allocation is determined by the equation
 set below,
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}
 \left\{ \begin{array}{l}
 p_A C_{Bt} C_{rA} =\Gamma_B( C_{Bt} + p_B|h_{BB}|^2+
 1), \\
 p_A\|{\bf w}_t\|^2 C_{rA} + p_B\|{\bf w}_t\|^2 C_{rB} + \|{\bf w}_t\|^2
 =P_R
 \end{array}\right.
 \eea
 and the solution is given by
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}
 \left\{ \begin{array}{l}
 p_A= \frac{\Gamma_B|h_{BB}|^2}{C_{Bt} C_{rA}} p_B + \frac{\Gamma_B(C_{Bt}+1)}{C_{Bt}
 C_{rA}},\\
 p_B = \frac{\frac{P_R}{\|{\bf w}_t\|^2} -1 - \frac{\Gamma_B C_{rA}(C_{Bt}+1)
 }{C_{Bt}C_{rA}}}{\frac{\Gamma_B C_{rA}|h_{BB}|^2
 }{C_{Bt}C_{rA}} + C_{rB} }.
 \end{array}\right.
 \eea
\end{enumerate}

\subsection{The overall algorithm}
 Given an $\alpha$ (equivalently ${\bf w}_r$), we can iteratively optimize ${\bf w}_t$ and $(p_A,
 p_B)$ as above until convergence.
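The control flow of this alternating procedure can be sketched schematically. In the toy below, the inner updates are stand-ins for the closed-form ${\bf w}_t$ and $(p_A,p_B)$ solutions derived above: a simple bi-concave surrogate objective with coordinate-wise maximizers is used so that only the structure, i.e., an outer 1-D search over $\alpha$ combined with monotone alternating ascent, is illustrated.

```python
import numpy as np

def objective(x, y, alpha):        # toy stand-in for the rate objective
    return -(x - 1 - alpha) ** 2 - (y - alpha) ** 2 - x * y

def update_x(y, alpha):            # placeholder for the closed-form w_t step
    return 1 + alpha - 0.5 * y     # argmax_x of the surrogate above

def update_y(x, alpha):            # placeholder for the (p_A, p_B) step
    return alpha - 0.5 * x         # argmax_y of the surrogate above

best_alpha, best_val = None, -np.inf
for alpha in np.linspace(0.0, 1.0, 21):   # outer 1-D search over alpha
    x = y = 0.0
    prev = -np.inf
    for _ in range(100):                  # inner alternating optimization
        x = update_x(y, alpha)
        y = update_y(x, alpha)
        val = objective(x, y, alpha)
        assert val >= prev - 1e-12        # ascent is monotone
        if val - prev < 1e-12:
            break
        prev = val
    if prev > best_val:
        best_alpha, best_val = alpha, prev
print(best_alpha is not None)
```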
The value of the objective function increases monotonically over the iterations and thus converges to a local
 optimum. We can then conduct a 1-D search over $0\le \alpha\le 1$ to find the
 optimal $\alpha^*$ or ${\bf w}_r$. By enumerating source ${\tt B}$'s rate requirement $r_B$, we can numerically
find the boundary of the achievable rate region.

 {\bl Regarding the complexity, we remark that at each iteration the solutions of ${\bf w}_t$ and $(p_A,
 p_B)$ are given in simple closed form, so the associated complexity is low.}


\section{Maximizing the Sum Rate}
 In this section, we aim to maximize the sum rate of the proposed FD TWRC, i.e., to solve $\mathbb{P}_2$, which is rewritten
 below:
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}\label{prob:sum:rate:max}
 \max_{{\bf w}_t, {\bf w}_r, p_A, p_B} && \log_2\left(1+ \frac{p_B C_{rB} |{\bf h}_{RA}^\dag {\bf w}_t|^2}{ |{\bf h}_{RA}^\dag {\bf w}_t|^2 + p_A|h_{AA}|^2 +
 1}\right)\\ &&
 +\log_2\left(1+ \frac{p_A C_{rA}|{\bf h}_{RB}^\dag {\bf w}_t|^2 }{ |{\bf h}_{RB}^\dag {\bf w}_t|^2 + p_B|h_{BB}|^2+
 1} \right)\notag \\
 \mbox{s.t.} && \|{\bf w}_t\|^2 \le \frac{P_R}{ p_A C_{rA} + p_BC_{rB} + 1},\notag\\
 && {\bf w}_r^\dag{\bf H}_{RR}{\bf w}_t=0. \notag
 \eea
We will use the same characterization \eqref{eqn:wr} to find the optimal ${\bf w}_r$ via a 1-D
 search. We then concentrate on alternately optimizing the transmit beamforming vector ${\bf w}_t$ and the power
 allocation $(p_A, p_B)$.
 \subsection{Optimization of the transmit beamforming vector ${\bf w}_t$}
 We first study how to optimize ${\bf w}_t$ given ${\bf w}_r$ and $(p_A, p_B)$.
 For convenience, we define a positive semidefinite matrix ${\bf W}_t={\bf w}_t{\bf w}_t^\dag$.
Then the problem \eqref{prob:sum:rate:max} becomes
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}\label{prob:sum:rate:max2}
 \max_{ {\bf W}_t\succeq {\bf 0} } && F({\bf W}_t) \\
 \mbox{s.t.} &&\textsf{trace}({\bf W}_t) \le \frac{P_R}{ p_A C_{rA} + p_BC_{rB} + 1},\notag\\
 && \textsf{trace}({\bf W}_t {\bf H}_{RR}^\dag{\bf w}_r{\bf w}_r^\dag{\bf H}_{RR})=0,
 \notag\\
 && \mbox{rank}({\bf W}_t)=1, \notag
 \eea
 where $F({\bf W}_t) \triangleq \log_2\left(1+ \frac{p_B C_{rB} \textsf{trace}({\bf W}_t{\bf h}_{RA}{\bf h}_{RA}^\dag)}{ \textsf{trace}({\bf W}_t{\bf h}_{RA}{\bf h}_{RA}^\dag) + p_A|h_{AA}|^2 + 1}\right)
 +\log_2\left(1+ \frac{p_A C_{rA} \textsf{trace}({\bf W}_t{\bf h}_{RB}{\bf h}_{RB}^\dag) }{ \textsf{trace}({\bf W}_t{\bf h}_{RB}{\bf h}_{RB}^\dag) + p_B|h_{BB}|^2+
 1} \right).$ Clearly $F({\bf W}_t)$ is not a concave function, so (\ref{prob:sum:rate:max2}) is a challenging optimization problem.
 To tackle it, we propose to
use DC programming \cite{An} to find a locally optimal point.
To
this end, we express $F({\bf W}_t)$ as a difference of two concave
functions $f({\bf W}_t)$ and $g({\bf W}_t)$, i.e., { \begin{align}
 & F({\bf W}_t)\notag\\
 &= \log_2\left( (p_B C_{rB}+1) \textsf{trace}({\bf W}_t{\bf h}_{RA}{\bf h}_{RA}^\dag) +
p_A|h_{AA}|^2 + 1
 \right) \notag\\
 & - \log_2\left(\textsf{trace}({\bf W}_t{\bf h}_{RA}{\bf h}_{RA}^\dag) + p_A|h_{AA}|^2 + 1\right)\notag\\
 & + \log_2\left( (p_A C_{rA}+1) \textsf{trace}({\bf W}_t{\bf h}_{RB}{\bf h}_{RB}^\dag) + p_B|h_{BB}|^2 + 1
 \right)\notag\\
 & - \log_2\left(\textsf{trace}({\bf W}_t{\bf h}_{RB}{\bf h}_{RB}^\dag) + p_B|h_{BB}|^2 +
 1\right)\notag\\
&\triangleq f({\bf W}_t) - g({\bf W}_t) \end{align}}
 where {\small \begin{align} f({\bf W}_t)
&\triangleq \log_2\left( (p_B C_{rB}+1)
\textsf{trace}({\bf W}_t{\bf h}_{RA}{\bf h}_{RA}^\dag) + p_A|h_{AA}|^2 + 1
 \right) \notag\\
 & +\log_2\left( (p_A C_{rA}+1) \textsf{trace}({\bf W}_t{\bf h}_{RB}{\bf h}_{RB}^\dag) + p_B|h_{BB}|^2 + 1
 \right),\notag\\
g({\bf W}_t) &\triangleq \log_2\left(\textsf{trace}({\bf W}_t{\bf h}_{RA}{\bf h}_{RA}^\dag) + p_A|h_{AA}|^2 + 1\right)\notag\\
& +\log_2\left(\textsf{trace}({\bf W}_t{\bf h}_{RB}{\bf h}_{RB}^\dag) + p_B|h_{BB}|^2 +
 1\right).\notag \end{align}}
 Both $f({\bf W}_t)$ and $g({\bf W}_t)$ are concave functions of ${\bf W}_t$, since each is a sum of logarithms of affine functions. The main idea is to replace $g({\bf W}_t)$ by a linear
 function, namely its tangent, which upper-bounds $g$ and hence yields a concave lower bound of $F$.
The linearization (first-order approximation) of $g({\bf W}_t)$ around the point ${\bf W}_{t,k}$
 is given by{\small
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray} &&g_L({\bf W}_t;{\bf W}_{t,k}) =
 \frac{1}{\ln(2)}\frac{\textsf{trace}\left(({\bf W}_{t}-{\bf W}_{t,k}){\bf h}_{RA}{\bf h}_{RA}^\dag\right)}{\textsf{trace}({\bf W}_{t,k}{\bf h}_{RA}{\bf h}_{RA}^\dag) + p_A|h_{AA}|^2 +
 1} \notag\\
 &&+ \frac{1}{\ln(2)}\frac{\textsf{trace}\left(({\bf W}_{t}-{\bf W}_{t,k}){\bf h}_{RB}{\bf h}_{RB}^\dag\right)}{\textsf{trace}({\bf W}_{t,k}{\bf h}_{RB}{\bf h}_{RB}^\dag) + p_B|h_{BB}|^2 +
 1} \notag\\
 &&+\log_2\left(\textsf{trace}({\bf W}_{t,k}{\bf h}_{RA}{\bf h}_{RA}^\dag) + p_A|h_{AA}|^2 +
 1\right)\notag\\
 &&+\log_2\left(\textsf{trace}({\bf W}_{t,k}{\bf h}_{RB}{\bf h}_{RB}^\dag) + p_B|h_{BB}|^2 +
 1\right).
\eea} DC programming is then applied to sequentially solve the
following convex problem, \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}
{\bf W}_{t,k+1} &= &\arg \max_{{\bf W}_t}\ f({\bf W}_t) - g_L({\bf W}_t; {\bf W}_{t,k}) \label{DCP}\\
&&\mbox{s.t.}\quad \textsf{trace}({\bf W}_t) = \frac{P_R}{ p_A C_{rA} + p_BC_{rB} + 1},\notag\\
 && \textsf{trace}({\bf W}_t {\bf H}_{RR}^\dag{\bf w}_r{\bf w}_r^\dag{\bf H}_{RR})=0
 \notag.
\eea To summarize, the problem \eqref{prob:sum:rate:max2} can be
solved by i) choosing an initial point ${\bf W}_t$; and ii) for $k=0, 1,
\cdots$, solving (\ref{DCP}) until the termination condition is met.
Notice that in \eqref{DCP} we have ignored the rank-1 constraint on
${\bf W}_t$. This constraint is guaranteed to be satisfied by the
results in \cite[Theorem 2]{Huang-11} when $M_t>2$, therefore the
decomposition of ${\bf W}_t$ leads to the optimal solution ${\bf w}_t^*$ for
\eqref{prob:sum:rate:max}.
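The linearize-and-maximize iteration behind \eqref{DCP} can be illustrated on a one-dimensional toy: maximize $F(w)=f(w)-g(w)$ by replacing $g$ with its tangent $g_L$ at the current iterate and maximizing the surrogate in closed form. The functions $f$ and $g$ below are illustrative stand-ins (not the paper's $F$), chosen so that the inner maximization is analytic; the monotone improvement of the true objective along the iterates is then checked numerically.

```python
import numpy as np

# Toy DC iteration: f and g are illustrative stand-ins with an interior
# optimum, and the tangent of g at w_k makes the surrogate maximization
# solvable in closed form.
f = lambda w: np.log(1 + 3 * w) - 0.5 * w
g = lambda w: np.log(1 + w)
F = lambda w: f(w) - g(w)

w = 0.0
vals = [F(w)]
for _ in range(30):
    # stationarity of f(w) - g'(w_k) * w:  3/(1+3w) = 0.5 + 1/(1+w_k)
    s = 0.5 + 1.0 / (1.0 + w)
    w = max(0.0, (3.0 / s - 1.0) / 3.0)
    vals.append(F(w))

# each DC step does not decrease the true objective
assert all(b >= a - 1e-12 for a, b in zip(vals, vals[1:]))
```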
When $M_t=2$, the ZF constraint in the
problem \eqref{prob:sum:rate:max} determines the direction of
${\bf w}_t$, i.e., ${\bf w}_t=\sqrt{p_t}\mathbf{N}_t$ where $p_t$ is the
transmit power and $\mathbf{N}_t\in \mathbb{C}^{2\times 1}$
spans the null space of ${\bf w}_r^\dag{\bf H}_{RR}$. Therefore the
optimization of ${\bf w}_t$ reduces to optimizing the scalar variable
$p_t$, which can be found by checking the stationary points of the
objective function in \eqref{prob:sum:rate:max} and the boundary
point, without using DC programming. The same applies to the
special case of $M_t=1$.


\subsection{Optimization of source power $(p_A, p_B)$}
With ${\bf w}_t$ and ${\bf w}_r$ fixed, the sum rate maximization problem
\eqref{prob:sum:rate:max} with respect to the power allocation can be written as
 \begin{eqnarray}} \newcommand{\eea}{\end{eqnarray}\label{eqn:fd777}
 \max_{p_A, p_B} && \log_2 \left(1+ \frac{p_B C_{At}C_{rB}}{ C_{At} + p_A|h_{AA}|^2+ 1}\right)\notag\\
 &&+\log_2 \left( 1+\frac{p_A C_{Bt} C_{rA}}{ C_{Bt} + p_B|h_{BB}|^2+
 1} \right) \\
 \mbox{s.t.} && p_A C_{rA} + p_B C_{rB} + 1 \le \frac{P_R}{\|{\bf w}_t\|^2},\label{eqn:constr3}\\
 && 0\le p_A\le P_A, 0\le p_B\le P_B\notag.
 \eea
 Note that when the relay power constraint \eqref{eqn:constr3} is not tight, the problem is
 the same as the conventional power allocation between two interference links to maximize the sum
 rate, and the optimal power solution is known to be binary \cite{binary-power},
 i.e., the optimal power allocation $ (p_A^*, p_B^*)$ should satisfy
 \begin{equation}} \newcommand{\ee}{\end{equation}
 (p_A^*, p_B^*)\in\{(0, P_B), (P_A,0), (P_A, P_B)\}.
 \ee

 Next we focus on the case in which the constraint \eqref{eqn:constr3} is
 active, i.e., $p_A C_{rA} + p_B C_{rB} + 1
 =\frac{P_R}{\|{\bf w}_t\|^2}$.
We then have
 \begin{equation}} \newcommand{\ee}{\end{equation}
 p_A =
\\frac{\\frac{P_R}{\\|{\\bf w}_t\\|^2-1}-p_B C_{rB}}{C_{rA} }.\n \\ee\n Because $0\\le p_A\\le P_A$, we can obtain the feasible range $[p_B^{\\min}, p_B^{\\max}]$ for $p_B$:\n \\begin{eqnarray}} \\newcommand{\\eea}{\\end{eqnarray}\n p_B^{\\min} &=& \\max\\left(0, \\frac{ {P_R}{\\|{\\bf w}_t\\|^2}-1 - C_{rA} P_A }{ C_{rB}}\n \\right),\\notag \\\\\n p_B^{\\max} &=& \\min \\left(P_B, \\frac{{P_R}{\\|{\\bf w}_t\\|^2}-1}{ C_{rB}}\\right).\n \\eea\n\n The objective function of \\eqref{eqn:fd777} then becomes a function of $p_B$ only, i.e.,\n \\begin{eqnarray}} \\newcommand{\\eea}{\\end{eqnarray}\n y(p_B) &=& \\log_2 (C_{At} + p_A|h_{AA}|^2+ 1+ p_B C_{At}C_{rB})\\notag\\\\&& -\\log_2( C_{At} + p_A|h_{AA}|^2+ 1)\\notag\\\\\n &&+\\log_2 ( C_{Bt} + p_B|h_{BB}|^2+ 1+ p_A C_{Bt} C_{rA})\\notag\\\\&& -\\log_2( C_{Bt} + p_B|h_{BB}|^2+\n 1),\n \\eea\n and \\eqref{eqn:fd777} reduces to a one-variable optimization, i.e.,\n \\begin{equation}} \\newcommand{\\ee}{\\end{equation}\n \\max_{p_B} ~~ y(p_B)~~~~ \\mbox{s.t.}~~ p_B^{\\min}\\le p_B\\le p_B^{\\max}.\n \\ee\n\n Setting $\\frac{\\partial y(p_B)}{\\partial p_B}=0$ leads to\n {\\small \\begin{eqnarray}} \\newcommand{\\eea}{\\end{eqnarray}\n&& \\frac{ C_{At}C_{rB} -\\frac{ C_{rB}}{C_{rA} } |h_{AA}|^2 }{ C_{At} + \\frac{\\frac{P_R}{\\|{\\bf w}_t\\|^2-1} |h_{AA}|^2}{C_{rA}}+ 1\n+ p_B \\left( C_{At}C_{rB} -\\frac{ C_{rB}}{C_{rA} } |h_{AA}|^2\\right) } \\\\\n&& + \\frac{\\frac{ C_{rB}}{C_{rA} } |h_{AA}|^2}{ C_{At} +\n\\frac{\\frac{P_R}{\\|{\\bf w}_t\\|^2-1}}{C_{rA} } |h_{AA}|^2 +1 -\np_B\\frac{ C_{rB}}{C_{rA} } |h_{AA}|^2 }\\notag\\\\\n&& +\\frac{ |h_{BB}|^2- \\frac{ C_{rB}}{C_{rA} } C_{Bt}\n C_{rA} }{ C_{Bt} + \\frac{\\frac{P_R}{\\|{\\bf w}_t\\|^2-1}}{C_{rA} } C_{Bt} C_{rA}+ 1+ p_B\\left( |h_{BB}|^2- \\frac{ C_{rB}}{C_{rA} } C_{Bt}\n C_{rA}\\right)}\\notag\\\\\n&& - \\frac{|h_{BB}|^2}{ C_{Bt} + p_B|h_{BB}|^2+1}\n =0.\\notag\n \\eea}\nThis in turn becomes a cubic (3-rd order) equation and all roots\ncan be found analytically. 
Suppose the set of all positive roots
within $(p_B^{\min}, p_B^{\max})$ is denoted as $\Psi$, which may
contain up to three elements. In order to find the optimal $p_B^*$,
we need to compare the objective values of all elements in the set
$\Psi\cup\{p_B^{\min}, p_B^{\max}\}$ and choose the one that results
in the maximum objective value.


{\bl Regarding the complexity, we remark that the overall algorithm to
maximize the sum rate is dominated by the optimization of ${\bf w}_t$,
and more specifically, by solving the problem \eqref{DCP}. Since
\eqref{DCP} is a semidefinite programming (SDP) problem with one
matrix variable ${\bf W}_t\in \mathbb{C}^{M_t\times M_t}$ and two constraints,
the worst-case complexity to solve it is
$\mathcal{O}(M_t^{4.5}\log(\frac{1}{\epsilon}))$, where $\epsilon$ is
the desired solution accuracy \cite{SDPR}.}


\section{Benchmark schemes}
In this section, we introduce three benchmark schemes against which the
proposed FD scheme can be compared. The first one is
the conventional two-phase HD TWRC using analog network coding,
which is known to outperform the three-phase and four-phase HD
schemes in throughput \cite{practical_PLNC}; the second
one is a two-phase one-way FD scheme in which, during each phase, the relay
works in the FD mode \cite{FD-two-phase}; the last one ignores the
residual SI channel at the relay and thus provides a performance upper
bound that is useful for evaluating the proposed algorithms.

\subsection{Two-phase HD relaying using analog network coding}
 HD analog network coding is introduced in \cite{Zhang-2Phase}; it takes
 two phases to complete the information exchange between ${\tt A}$ and
 ${\tt B}$. In the first phase, both sources transmit to ${\tt R}$, and in
 the second phase, the relay multiplies the received signal by a
 beamforming matrix ${\bf W}$ and then broadcasts it to ${\tt A}$ and
 ${\tt B}$.
Because there is no SI, every node can use its full power,\n so only ${\\bf W}$ needs to be optimized. The achievable rate pair\n and the relay power consumption are given by\n \\begin{eqnarray}\n R_A &=& \\frac{1}{2}\\log_2\\left(1+ \\frac{P_B|{\\bf h}_{RA}^\\dag{\\bf W} {\\bf h}_{BR}|^2}{ \\|{\\bf h}_{RA}^\\dag{\\bf W}\\|^2\n +1}\\right),\\notag\\\\\n R_B &=& \\frac{1}{2}\\log_2\\left(1+ \\frac{P_A|{\\bf h}_{RB}^\\dag{\\bf W} {\\bf h}_{AR}|^2}{\n\\|{\\bf h}_{RB}^\\dag{\\bf W}\\|^2 + 1}\\right),\\notag\\\\\n p_R &=& \\|{\\bf W} {\\bf h}_{AR}\\|^2 P_A + \\|{\\bf W} {\\bf h}_{BR}\\|^2 P_B\n +\\textsf{trace}({\\bf W}{\\bf W}^\\dag),\\notag\n \\end{eqnarray}\n where the factor of $\\frac{1}{2}$ is due to the two\n transmission phases used.\n Given the above rates and the power expression, problems\n $\\mathbb{P}_1$ and $\\mathbb{P}_2$ can be solved to find the\n achievable rate region and the maximum sum rate, respectively.\n Compared with this scheme, the proposed FD relaying reduces the\n number of communication phases to one and thus has the potential to\n improve the throughput.\n\n\\subsection{Two-phase one-way FD}\n Another scheme that we will compare with is FD one-way relaying,\n in which ${\\tt R}$ works in the FD mode while the two sources work in the\n HD mode. In this way, both sources can transmit with the maximum power. 
We use the same notation as for the proposed scheme, and the relay beamforming matrix is ${\\bf W}={\\bf w}_t{\\bf w}_r^\\dag$.\n\n The achievable rate, relay power constraint, and zero residual SI constraint for source ${\\tt A}$ (direction: ${\\tt B} \\rightarrow{\\tt A}$) are, respectively,\n \\begin{eqnarray}\n &&R_A =\\log_2\\left(1+ \\frac{P_B|{\\bf h}_{RA}^\\dag {\\bf w}_t|^2|{\\bf w}_r^\\dag {\\bf h}_{BR}|^2}{ |{\\bf h}_{RA}^\\dag {\\bf w}_t|^2 + 1} \\right), \\\\\n &&p_B\\|{\\bf w}_t\\|^2 |{\\bf w}_r^\\dag {\\bf h}_{BR}|^2 +\n\\|{\\bf w}_t\\|^2\\|{\\bf w}_r\\|^2\\le P_R, \\\\\n &&{\\bf w}_r^\\dag{\\bf H}_{RR}{\\bf w}_t=0.\\label{eqn:ZF}\n \\end{eqnarray}\n\n\n In our previous work \\cite{MIMO-relay-HD}, we derived the closed-form expressions below for $R_A$,\n depending on how the ZF constraint is realized:\n \\begin{enumerate}\n\\item\n Receive ZF. In this case, we assume ${\\bf w}_t={\\bf h}_{RA}$ and choose\n ${\\bf w}_r$ to achieve \\eqref{eqn:ZF}. We showed that the achievable\ne2e received signal-to-noise ratio (SNR) can be expressed as\n \\begin{eqnarray}\\label{eqn:SNR:R}\n \\gamma_{RZF}\n =\\frac{P_B \\|{\\bf D}{\\bf h}_{BR}\\|^2 P_R \\|{\\bf h}_{RA}\\|^2}\n {P_B \\|{\\bf D}{\\bf h}_{BR}\\|^2 + P_R\\|{\\bf h}_{RA}\\|^2 + 1},\n \\end{eqnarray}\n where ${\\bf D}\\triangleq \\Pi^\\bot_{{\\bf H}_{RR}{\\bf h}_{RA}}$.\n\n\n\\item Transmit ZF. In this case, we assume ${\\bf w}_r={\\bf h}_{BR}$ and choose\n ${\\bf w}_t$ to achieve \\eqref{eqn:ZF}. We then reach the following\n achievable SNR:\n\\begin{eqnarray}\\label{eqn:SNR:t}\n \\gamma_{TZF} = \\frac{ P_B \\|{\\bf h}_{BR}\\|^2 P_R\\|{\\bf B}{\\bf h}_{RA}\\|^2}{\n {P_B\\|{\\bf h}_{BR}\\|^2 + {P_R}\\|{\\bf B}{\\bf h}_{RA}\\|^2+ 1 }},\n\\end{eqnarray} where ${\\bf B}\\triangleq \\Pi_{{\\bf H}_{RR}^\\dag{\\bf h}_{BR}}^\\bot$.\n\\end{enumerate}\n $R_A$ is then determined by $R_A=\\log_2(1+\\max(\\gamma_{RZF},\n \\gamma_{TZF}))$. 
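A minimal numerical sketch of the two ZF options, assuming unit-variance Rayleigh channels, unit noise power, and illustrative transmit SNRs (all values hypothetical). The projectors ${\bf D}$ and ${\bf B}$ null the residual SI term by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_perp(x):
    """Projector onto the orthogonal complement of vector x."""
    x = x.reshape(-1, 1)
    return np.eye(len(x)) - (x @ x.conj().T) / (x.conj().T @ x).real

# illustrative channels: M antennas, unit-variance Rayleigh fading
M = 3
h_RA = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
h_BR = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
H_RR = (rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))) / np.sqrt(2)
P_B = P_R = 10.0  # transmit SNRs (linear scale)

# receive ZF: D projects the B->R channel away from H_RR h_RA
D = proj_perp(H_RR @ h_RA)
g_rzf = (P_B * np.linalg.norm(D @ h_BR)**2 * P_R * np.linalg.norm(h_RA)**2) / (
         P_B * np.linalg.norm(D @ h_BR)**2 + P_R * np.linalg.norm(h_RA)**2 + 1)

# transmit ZF: B projects the R->A channel away from H_RR^dag h_BR
B = proj_perp(H_RR.conj().T @ h_BR)
g_tzf = (P_B * np.linalg.norm(h_BR)**2 * P_R * np.linalg.norm(B @ h_RA)**2) / (
         P_B * np.linalg.norm(h_BR)**2 + P_R * np.linalg.norm(B @ h_RA)**2 + 1)

R_A = np.log2(1 + max(g_rzf, g_tzf))  # pick the better ZF realization
```

Since the residual SI lies in the nulled subspace, either projector makes the ZF constraint hold exactly for the corresponding beamformer choice.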
A similar achievable rate $R_B$ can be derived for the source\n ${\\tt B}$.\n\n Note that $R_A$ and $R_B$ cannot be achieved simultaneously, as each requires\n that the corresponding source occupy the whole transmission time.\n The boundary of the rate region can be obtained by using a time-sharing parameter $t\\in[0,1]$, i.e., $(t R_A, (1-t)R_B)$.\n\n\\subsection{FD-Upper Bound}\n This scheme is the same as the proposed FD scheme except that we\n assume there is no SI at ${\\tt R}$, but we still consider the SI at the two sources, i.e., ${\\bf H}_{RR}={\\bf 0}, |h_{AA}|>0, |h_{BB}|>0$.\n In this case the ZF constraint in \\eqref{eqn:fd2} is not necessary.\n We remark that this scheme uses the unrealistic assumption\n of ${\\bf H}_{RR}={\\bf 0}$, so it is not a practical scheme but provides a\nuseful upper bound to evaluate the performance of the proposed\nalgorithms.\n\nIn the simulation results, we will label the above three benchmark\nschemes as ``Two-phase HD'', ``Two-phase FD'' and ``Proposed\none-phase FD upper bound'', respectively.\n\n\n\n\\section{Numerical Results}\\label{sec:simu}\nIn this section we provide numerical results to illustrate the\nachievable rate region and the sum rate performance of the proposed\nFD two-way relaying scheme. We compare it with the above-mentioned\nthree benchmark schemes. The simulation set-up follows the system\nmodel in Section II. Unless otherwise specified, we assume that\nthere are $M_T=M_R=3$ antennas at ${\\tt R}$, the average residual SI\nchannel gain is $\\sigma^2_A=\\sigma^2_B=\\sigma^2_R=-20$ dB, {\\bl and\nthe per-node transmit SNR for both sources and the relay is\n$P_A=P_B=P_R=10$ dB, which are the power constraints in problem\nformulations $\\mathbb{P}_1$ and $\\mathbb{P}_2$}. 
The results are\nobtained using 100 independent channel realizations.\n\n\n\\input figures.tex\n\n\n\\subsection{Achievable rate region}\nFirst we illustrate the achievable {\\bl average} rate region for the\ntwo sources ${\\tt A}$ and ${\\tt B}$ in Fig. \\ref{fig:rate:region}. {\\bl It\nis seen that the two-phase FD scheme already greatly enlarges the\nachievable rate region of the conventional HD scheme. The proposed\none-phase FD scheme achieves a significantly larger rate region than\nthe two-phase FD and the conventional HD schemes.} We observe that\nthere is still a noticeable gap between the proposed scheme and its\nupper bound because the proposed solution is suboptimal.\n\n\n\n\\subsection{Sum rate performance}\nWe then investigate the effect of the source transmit SNRs $P_A$ and\n$P_B$ ($P_A=P_B$) on the sum rate shown in Fig.\n\\ref{fig:sumrate:R1020dB} when { $P_R=10$ dB (solid curves) and\n$P_R=20$ dB (dashed curves), respectively}. {\\bl We first consider\nthe case $P_R=10$ dB.} As expected, the sum rate improves as the\nsource transmit SNR increases, and the proposed one-phase FD scheme\nclearly outperforms the two benchmark schemes. When the source SNR\nis above 15 dB, the sum rate of the proposed FD scheme saturates.\nThis is because the high transmit power results in high residual SI,\nso increasing the power budget does not necessarily improve the\nperformance. To illustrate the performance improvement in sum rate,\nwe show the sum rate gain over the conventional two-phase HD scheme\nin Fig. \\ref{fig:sumrate:gain:R1020dB}. It is observed that when\nthe source SNR is 10 dB, the proposed one-phase FD scheme and the\ntwo-phase FD scheme achieve sum rate gains of 1.56 and 1.22,\nrespectively. Even the performance upper bound cannot achieve double\nthe rate because of the residual SI at both sources. 
The rate gain in\ngeneral decreases as the source transmit SNR increases, again because\nof the residual SI; the sources therefore need to adjust\ntheir transmit power carefully.\n\n\nThe same trend is observed when $P_R=20$ dB. In this case, a sum\nrate gain of nearly 1.7 is recorded when the source transmit SNR is\n10 dB. Another interesting observation is that the performance of\nthe proposed scheme is very close to the upper bound. { This is\nbecause when the relay power is high, the e2e performance is limited\nby the link from the source to the relay, rather than from the relay to\nthe other source, so the residual SI at the relay has little\neffect on the sum rate.}\n\nThe impact of the relay transmit SNR on the sum rate is shown in\nFig. \\ref{fig:sumrate:RPower}. It is seen that the proposed one-phase\nFD scheme achieves a lower sum rate than the two-phase FD scheme at\nlow relay transmit SNRs, and then outperforms the\nlatter when the relay transmit SNR is above 5 dB. The performance gain is\nremarkable when the relay transmit SNR is high. This is because,\nunlike the two sources, the relay can null out the residual SI\nusing multiple antennas, so it can always use the maximum\navailable power to improve the sum rate.\n\nThe effect of the residual SI channel gain at the sources is\nexamined in Fig. \\ref{fig:sumrate:SI}. Naturally, the sum rate\ndecreases as the residual SI channel becomes stronger or the SI is\nnot adequately suppressed. When the residual SI channel gain is\nabove -5 dB, the two-phase FD scheme outperforms the one-phase FD\nscheme, while both still achieve a higher sum rate than the\nconventional two-phase HD scheme even when the SI channel gain is\nas high as 5 dB.\n\n\n Next we show the sum rate and rate gain results when the number of antennas ($M_R=M_T$) at the\nrelay varies from 2 to 6 in Fig. \\ref{fig:sumrate:antennas} and\n\\ref{fig:sumrate:gain:antennas}, respectively. 
The sum rate steadily\nincreases as more antennas are placed at the relay, due to the array\ngain. It is observed that the rate gain remains about 1.55 when the\nnumber of antennas is greater than 2.\n\n\n\\subsection{Asymmetric channel gain}\n The above results are mainly for a symmetric case, i.e., both sources have\n similar power constraints and channel {\\bl strengths}. Here we consider an\n asymmetric case where the average channel gain between ${\\tt R}$ and\n ${\\tt B}$ is -10 dB. We plot the rate region in Fig. \\ref{fig:sumrate:region:asymmetric}\nusing the same system parameters as those in Fig. 2 except that the\ngain of the channel vectors ${\\bf h}_{BR}$ and\n ${\\bf h}_{RB}$ is 10 dB weaker. {\\bl The results show that both sources'\n rates are reduced, while the source ${\\tt B}$ suffers more rate loss.} This is because one\n source's channels to and from the relay also affect the\n performance of the other source. The sum rate comparison is given\n in Fig. \\ref{fig:sumrate:asymmetric}, and it is observed that the performance of the two-phase {\\bl FD} scheme is very close to that of the proposed FD scheme over the whole SNR region. This can be explained\n by the fact that the e2e performance is restricted by the channel quality between ${\\tt R}$\n and ${\\tt B}$, so the gain due to the simultaneous transmission of the two sources is limited.\n\n{\n\\subsection{Impact of the local channel state information}\n Finally we consider the case where only the receive CSI at each node is available but the transmit CSI is unknown. Because of the lack of the transmit\n CSI, the two sources use full power\n $P_A$ and $P_B$; ${\\bf w}_t$ at the relay is chosen arbitrarily to satisfy the ZF constraint and the relay power\n constraint. The sum rate performance is shown in Fig.\n \\ref{fig:sumrate:csi}. 
It is seen that the proposed FD scheme still achieves a significant performance gain over HD relaying at low to medium transmit SNRs, although all rates\n are much lower than in the case with global CSI in Fig. 3. Another\n notable difference is that at high transmit SNRs, the performance of the proposed FD scheme degrades quickly.\n This is because the two sources need to adjust their transmit power\n rather than using full power. This highlights the importance of\n global CSI for adapting the transmit power and adjusting the relay beamforming.\n}\n\n\n \\section{Conclusion and Future Work}\n We have investigated the application of the FD operation to the MIMO TWRC, which requires only one phase for the two sources to exchange\n information. We studied the two problems of finding the achievable\n rate region and maximizing the sum rate by optimizing the relay\n beamforming matrix and the power allocation at the sources.\n Iterative algorithms were proposed, together with a 1-D search, to\n find locally optimal solutions. At each iteration, either an\n analytical solution or a convex formulation was derived. We\n have conducted extensive simulations to illustrate the effects of\n different system parameters. The results show that the FD\n operation has great potential to achieve much higher data rates than the conventional HD TWRC.\n\n\n{\\bl Regarding future directions, better suboptimal and\noptimal solutions are worth studying.} There are a couple of reasons\nwhy the proposed algorithm is suboptimal, such as the additional ZF\nconstraint, {\\bl the incomplete characterization of the receive\nbeamforming vector}, and the alternating optimization algorithms.\nAnother direction is to study the use of multiple transmit\/receive\nantennas at the two sources. If a single data stream is transmitted,\nthe residual SI can be removed using the ZF criterion at the sources\nas well. This actually simplifies the optimization, as the two\nsources can use the maximum power. 
However, multiple antennas can\nsupport multiple and variable number of data streams, and when the\nproblem is coupled with the SI suppression, it will be much more\nchallenging. Thirdly, in this paper, we focus on the benefit of the\nFD {\\bl in terms of} spectrum efficiency. In \\cite{practical_PLNC},\nit is shown that for the HD case, three-phase transmission schemes\noffers a better compromise between the sum rate and the bit error\nrate than the two-phase scheme, especially in the asymmetric case.\nIt is worth investigating whether such an trade-off also exists for\nthe FD scenario. \\vspace{-2mm}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRecent observations of Type IA Supernovae \\cite{SN1,SN2} as well as\nconcordance with other observations (including the microwave\nbackground and galaxy power spectra) indicate that the universe is\naccelerating. Possible explanations for such an acceleration include\na variety of dark energy models in which the universe is vacuum\ndominated, such as a cosmological constant. In 1986, we explored the\npossibility of a time-dependent vacuum \\cite{fafm} characterized by an\nequation of state $w$. Quintessence models\n\\cite{ratpeeb,frieman,wett,stein,caldwell} utilize a rolling scalar\nfield to achieve time dependence of the vacuum.\n\nAs an alternative approach to explain the acceleration, we proposed\nmodifications to the Friedmann equation in lieu of having any vacuum\nenergy at all \\cite{freeselewis} (hereafter Paper I). In our\nCardassian model, \\footnote{The name Cardassian refers to a humanoid\n race in Star Trek whose goal is accelerated expansion of their evil\n empire (aka George W). 
This race looks foreign to us and yet is made\n entirely of matter.} the universe is flat and accelerating, and yet\nconsists only of matter and radiation.\nThe usual Friedmann equation governing the expansion of the universe\n\\begin{equation}\n\\label{eq:usual}\nH^2 = \\left({\\dot a \\over a}\\right)^2 = {8 \\pi G \\over 3} \\rho \n\\end{equation}\nis modified to become\n\\begin{equation}\n\\label{eq:general}\nH^2 = \\left({\\dot a \\over a}\\right)^2 = g(\\rho) ,\n\\end{equation}\nwhere $\\rho$ contains only matter and radiation (no vacuum), $H=\\dot\na\/a$ is the Hubble constant (as a function of time), $G=1\/m_{pl}^2$ is\nNewton's gravitational constant, and $a$ is the scale factor of the\nuniverse. We note here that the geometry is flat, as required by\nmeasurements of the cosmic background radiation \\cite{boom,dasi}, so that\nthere are no curvature terms in the equation. There is no vacuum term\nin the equation. The model does not address the cosmological constant\n($\\Lambda$) problem; we simply set $\\Lambda=0$.\n\nWe take $g(\\rho)$ to be a function of $\\rho$ that returns simply to $8\n\\pi G \\rho\/3$ at early epochs, but that takes a different form that\ndrives an accelerated expansion in the recent past of the universe at\n$z<{\\cal O}(1)$.\n\nI begin by describing the phenomenology of Cardassian models, and then\nturn to the motivation for modified Friedmann equations of this form.\nThe new term required may arise, e.g., as a consequence of our\nobservable universe living as a 3-dimensional brane in a higher\ndimensional universe. 
A second possible interpretation of Cardassian\nexpansion has been developed \\cite{gondolo}, in which we treat the modified\nFriedmann equations as describing a fluid whose energy density has\nnew contributions with negative pressure (possibly due to dark matter\nwith self-interactions).\n\n\\section{Power Law Cardassian Model}\n\nThe simplest version of Cardassian expansion invokes the addition of a\nnew power law term to the right hand side of the Friedmann equation:\n\\begin{equation}\n\\label{eq:new}\nH^2 ={8\\pi G\n\\over 3 } \\rho + B \\rho^n \n\\end{equation}\nwhere $n$ is a number with\n\\begin{equation}\nn<2\/3 .\n\\end{equation}\nThe new term is initially negligible, and only comes to dominate at\nredshift $z \\sim {\\cal O}(1)$. Once it dominates, it causes the\nuniverse to accelerate.\n\nWe take the usual energy conservation:\n\\begin{equation}\n\\label{eq:energy}\n\\dot \\rho + 3H (\\rho + p) = 0 ,\n\\end{equation}\nwhich gives the evolution of matter:\n\\begin{equation}\n\\rho_M = \\rho_{M,0}(a\/a_0)^{-3} .\n\\end{equation}\nHere the subscript $0$ refers\nto today. \nEqs.(\\ref{eq:general}) and (\\ref{eq:energy})\ncontain the complete information of the expansion history.\n\nThe new term in Eq.(\\ref{eq:new}) (the second term on the right hand side)\nis initially negligible; hence ordinary early universe cosmology,\nincluding nucleosynthesis, results. The new term only comes to\ndominate recently, at the redshift $z_{car} \\sim {\\cal O}(1)$ indicated by\nthe supernovae observations. Once the second term dominates, it\ncauses the universe to accelerate. When the new term is so large that\nthe ordinary first term can be neglected, we find\n\\begin{equation}\nR \\propto t^{2 \\over 3n} ,\n\\end{equation}\nso that the expansion is superluminal (accelerated) for $n<2\/3$. As\nexamples, for $n=2\/3$ we have $R \\sim t$; for $n=1\/3$ we have $R \\sim\nt^2$; and for $n=1\/6$ we have $R \\sim t^4$. 
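The late-time growth law $R \propto t^{2/3n}$ can be checked by integrating the Cardassian-dominated Friedmann equation directly; a minimal sketch in units where $B=\rho_{M,0}=a_0=1$ (unit choices arbitrary):

```python
import math

# Integrate the late-time, Cardassian-dominated Friedmann equation
#   (da/dt)/a = sqrt(B rho^n),  rho = rho0 (a/a0)^(-3),
# in units where B = rho0 = a0 = 1, and check a(t) ~ t^(2/(3n)).
n = 1.0 / 3.0
a, t, dt = 1.0, 0.0, 1e-3
history = []
while t < 100.0:
    H = math.sqrt(a ** (-3.0 * n))    # sqrt(B rho^n) with rho = a^-3
    a += a * H * dt                   # da = a H dt (explicit Euler)
    t += dt
    history.append((t, a))

# log-log slope between t = 50 and t = 100 approximates the growth exponent
(t1, a1) = min(history, key=lambda p: abs(p[0] - 50.0))
(t2, a2) = history[-1]
slope = math.log(a2 / a1) / math.log(t2 / t1)   # expect ~ 2/(3n) = 2
```

For $n=1/3$ the exact solution is $a(t) = (1+t/2)^2$, so the fitted slope approaches 2 from below as $t$ grows.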
\nFor $n<1\/3$ the acceleration is increasing (the cosmic jerk).\n\nThere are two free parameters in the power law model, $B$ and $n$ (or\nequivalently, $z_{car}$ and $n$). We choose one of these parameters\nto make the second term kick in at the right time to explain the\nobservations. As yet we have no explanation of the coincidence\nproblem; i.e., we have no explanation for the timing of $z_{car}$.\nSuch an explanation may arise in the context of extra dimensions.\n\nObservations of the cosmic background radiation show that the geometry\nof the universe is flat with $\\Omega_0=1$. How can we reconcile this\nwith a theory made entirely of matter (and radiation), given\nobservations of the matter density that indicate $\\Omega_M =\n0.3$? In the Cardassian model we need to revisit the question of what\nvalue of energy density today, $\\rho_0$, corresponds to a flat\ngeometry. We will show that the energy density required to close the\nuniverse is much smaller than in a standard cosmology, so that matter\ncan be sufficient to provide a flat geometry.\n\nFrom evaluating Eq.(\\ref{eq:new}) today, we have\n\\begin{equation}\n\\label{hubbletoday}\nH_0^2 = A \\rho_0 + B \\rho_0^n ,\n\\end{equation}\nwhere $A \\equiv 8 \\pi G\/3$. The energy density $\\rho_0$ that satisfies Eq.(\\ref{hubbletoday}) is,\nby definition, the critical density. The usual value of the critical\ndensity is found by solving the equation with only the first term\n(with $B=0$), so that\n\\begin{equation}\n\\rho_{c} = {3 H_0^2\n\\over 8 \\pi G} = 1.88 \\times 10^{-29} h_0^2 {\\rm gm\/cm^3} ,\n\\end{equation}\nwhere $h_0$ is the Hubble constant today in units of 100 km\/s\/Mpc.\nHowever, in the presence of the second term, we can solve\nEq.(\\ref{hubbletoday}) to find that the critical density $\\rho_c$ has\nbeen modified from its usual value to a different number $\\tilde\n\\rho_c$. 
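Concretely, in normalized units where $A = H_0^2 = 1$ (so the old critical density is $\rho_c = 1$), the modified critical density is the root of $\rho + B\rho^n = 1$. The toy bisection below, with $B$ chosen to place the root at $0.3$, illustrates the point; all parameter values are hypothetical:

```python
# Solve A*rho + B*rho^n = H0^2 for the modified critical density, in
# units where A = H0^2 = 1 so the old critical density is rho_c = 1.
# B is chosen so that the root lands at 0.3 rho_c (illustrative values).
n = 0.2
B = (1.0 - 0.3) / 0.3 ** n           # forces rho_tilde = 0.3

def f(rho):                          # monotone increasing in rho
    return rho + B * rho ** n - 1.0

lo, hi = 1e-9, 1.0                   # f(lo) < 0 < f(hi) brackets the root
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid
rho_tilde = 0.5 * (lo + hi)          # ~0.3: matter alone closes the universe
```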
We use our second free parameter to fix this number to give\n\\begin{equation}\n\\rho_{M,0} = \\tilde \\rho_c = 0.3 \\rho_c ,\n\\end{equation}\nso that $\\Omega_M^{obs} = \\rho_{M,0}\/\\rho_c = 0.3$.\n\nHence the universe can be flat, matter dominated, and accelerating,\nwith a matter density that is 0.3 of the old critical density.\nMatter can provide the entire closure density of the Universe.\n\nAn equivalent formulation of power law Cardassian is\n\\begin{equation}\n\\label{eq:new2}\nH^2 = A \\rho [1 + ({\\rho \\over \\rho_{car}})^{n-1}] .\n\\end{equation}\nThe first term inside the bracket dominates initially, but the second\nterm takes over once the energy density has dropped to the value\n$\\rho_{car}$. Here, $\\rho_{car}$ is the energy density at which the\ntwo terms are equal: the ordinary energy density term on the right\nhand side of the FRW equation is equal in magnitude to the new term.\nFor reasonable parameters, $\\rho_{car} \\sim 2.7 \\tilde \\rho_c \\sim\n10^{-29}$gm\/cm$^3$. Since the modifications are important only for $\\rho\n< \\rho_{car}$, solar system physics is completely unaffected.\n\n\\subsection{Observational tests of power law Cardassian}\n\nThe power law Cardassian model with an\nadditional term $\\rho^n$ satisfies many observational constraints: the\nuniverse is somewhat older, the first Doppler peak in the microwave\nbackground is slightly shifted, early structure formation ($z>1$) is\nunaffected, but structure will stop growing sooner. In addition, the\nmodifications to the Poisson equation will affect cluster abundances and\nthe ISW effect in the CMB.\n\n\\subsection{Comparing to Quintessence}\n\nWe note that, with regard to observational tests, one can make a\ncorrespondence between the $\\rho^n$ Cardassian and Quintessence models\nfor constant $n$; we stress, however, that the two models are entirely\ndifferent. 
Quintessence requires a dark energy component with a specific\nequation of state ($p = w\\rho$), whereas the only ingredients in the\nCardassian model are ordinary matter ($p = 0$) and radiation ($p =\n\\rho\/3$). However, as far as any observation that involves only $R(t)$, or\nequivalently $H(z)$, is concerned, the two models predict the same effects on the\nobservation. Regarding such observations, we can make the following\nidentifications between the Cardassian and quintessence models: $n\n\\Rightarrow w+1$, $F\\Rightarrow \\Omega_m$, and $1-F \\Rightarrow\n\\Omega_Q$, where $w$ is the quintessence equation of state parameter,\n$\\Omega_m= \\rho_m\/\\rho_{c}$ is the ratio of matter density to the\n(old) critical density in the standard FRW cosmology appropriate to\nquintessence, $\\Omega_Q= \\rho_Q\/\\rho_{c}$ is the ratio of\nquintessence energy density to the (old) critical density, and \n$F=\\tilde{\\rho_c}\/\\rho_c$. In this way, the Cardassian model with\n$\\rho^n$ can make contact with quintessence with regard to observational\ntests. We note that Generalized Cardassian models can be distinguished\nfrom generic quintessence models with upcoming precision cosmological\nexperiments.\n\n\\subsection{Best Fit of Parameters to Current Data}\n\nWe can find the best fit of the Cardassian parameters $n$ and\n$z_{car}$ to current CMB and Supernova data. The current best fit is\nobtained for $w \\leq -0.78$, or, equivalently, $n \\leq 0.22$\n\\cite{wmap}, and $0.33 \\leq z_{car} \\leq 0.48$. As an example, for\n$n= 0.2$ (equivalently, $w=-0.8$), we find that $z_{car} = 0.42$.\nThen the position of the first Doppler peak is shifted by a few\npercent. The age of the universe is 13 Gyr \\cite{savage}. There are\nsome indications that $w \\leq -1$ may give a good fit \\cite{mmot};\nthis case of phantom cosmology will be discussed below.\n\n\\section{Generalized Cardassian}\n\nMore generally, we consider other forms of $g(\\rho)$ in Eq.(\\ref{eq:general}), as\ndiscussed in \\cite{conf}. 
For example, a simple generalization of\nEq.(\\ref{eq:new}) is Modified Polytropic Cardassian:\n\\begin{equation}\n\\label{eq:MPC}\nH^2 = {8 \\pi G \\over 3} \\rho [1+ ({\\rho\/\\rho_{car}})^{q(n-1)}]^{1\/q} ,\n\\end{equation}\nwith $q>0$ and $n<2\/3$. The right hand side returns to the ordinary\nFriedmann equation at early times, but becomes $\\rho^n$ at late times,\njust as in Eq.(\\ref{eq:new2}). Other examples of Generalized\nCardassian have been discussed in \\cite{conf,gondolo}.\n\n\\subsection{Observational Tests of Generalized Cardassian Models}\n\n\\subsubsection{Current Supernova Data}\n\nFigure 1 shows comparisons by Wang et al. \\cite{wang} of MP Cardassian\nmodels with current supernova data. Clearly the models fit the\nexisting data. The figure also shows predictions out to $z=2$ for\nfuture supernova data. Future supernova data will thus allow one to\ndifferentiate between Cardassian models, quintessence models, and\ncosmological constant models.\n\n\\begin{figure}[ht]\n\\includegraphics[width=6in,height=6in]{SNnow.eps}\n\\caption{\\label{Figure 1}\n Comparison of MP Cardassian models (see Eq.(\\ref{eq:MPC})) with\n current Supernova data, as well as predictions out to $z=2$ for\n future data. For comparison, dotted lines indicate $(\\Omega_m,\n \\Omega_{\\Lambda})$ as labelled on the right hand side of the plot.\n We see that MP Cardassian models with $n=0.2$ and $q$ ranging from\n 1-3 match the current data, and can be differentiated from generic\n cosmological constant or quintessence models in future data. }\n\\end{figure}\n\n\\subsubsection{Additional Observational Tests}\n\nA number of additional observational tests can be used to test the\nmodels. We plan to use the code CMBFAST to further constrain\nCardassian parameters in light of Cosmic Background Radiation data\n(WMAP). In particular, the Integrated Sachs-Wolfe effect may\ndifferentiate Cardassian from generic quintessence models. A second\napproach is number count tests such as DEEP2. 
The abundance of galaxy\nhaloes of fixed rotational speed depends on the comoving volume\nelement. Third, the Alcock-Paczynski test compares the angular size\nof a spherical object at redshift $z$ to its redshift extent $\\Delta\nz$. Depending on the cosmology, the spherical object may look\nsquooshed in redshift space. A proposed trick \\cite{crottshui} is to\nuse the correlation function of Lyman-alpha clouds as spherical\nobjects.\n\n\\section{Motivation for Cardassian Cosmology}\n\nWe present two possible origins for Cardassian models.\nThe original idea arose from consideration of braneworld scenarios,\nin which our observable universe is a three dimensional membrane\nembedded in extra dimensions. Recently, as a second interpretation\nof Cardassian cosmology, we have investigated\nan alternative four-dimensional fluid description. Here,\nwe take the ordinary Einstein equations, but the energy density\nhas additional terms (possibly due to dark matter with self-interactions\ncharacterized by negative pressure).\n\n\\subsection{Braneworld Scenarios}\n\nIn braneworld scenarios, our observable universe is a three\ndimensional membrane embedded in extra dimensions. For simplicity, we\nwork with one extra dimension. In this five-dimensional\nworld, we take the metric to be\n\\begin{equation}\n\\label{eq:metric}\nds^2 = -n^2(\\tau,u)d\\tau^2 + a^2(\\tau,u)d\\vec{x}^2 + b^2(\\tau,u)du^2 ,\n\\end{equation}\nwhere $u$ is the coordinate in the direction of the fifth dimension,\n$n$ is the lapse function, $a$ is the usual scale factor of our observable universe, and $b$ is\nthe scale factor in the fifth dimension. We found \\cite{cf}\nthat one does not generically obtain the usual Friedmann equation\non the observable brane. In fact, by a suitable choice of the\nbulk energy momentum tensor, one may obtain a Friedmann equation\non our brane of the form $H^2 \\sim \\rho^n$ for any $n$. More\ngenerally, one can obtain other modifications. 
\\cite{bin} showed\nthat for an anti-de Sitter bulk one obtains a quadratic correction\n$H^2 \\sim \\rho^2$, but more generally one can obtain any power $\\rho^n$,\nas shown in \\cite{cf}. \n\nThe origin of these modifications to the Friedmann equations is as\nfollows. We consider the five dimensional Einstein equations,\n\\begin{equation}\n\\label{eq:5dEinstein}\n\\tilde{G}_{AB} = 8\\pi G_{(5)} \\tilde{T}_{AB} ,\n\\end{equation}\nwhere the 5D Newton's constant is related to the 5D Planck mass as\n$ 8\\pi G_{(5)} = M_{(5)}^{-3}$.\nThe 5D Einstein tensor on the left hand side is given by\n\\begin{eqnarray}\n{\\tilde G}_{00} &=& 3\\left\\{ \\frac{\\da}{a} \\left( \\frac{\\da}{a}+ \\frac{\\db}{b} \\right) - \\frac{n^2}{b^2} \n\\left(\\frac{\\ppa}{a} + \\frac{\\pa}{a} \\left( \\frac{\\pa}{a} - \\frac{\\pb}{b} \\right) \\right) \\right\\}, \n\\label{ein00} \\\\\n {\\tilde G}_{ij} &=& \n\\frac{a^2}{b^2} \\delta_{ij}\\left\\{\\frac{\\pa}{a}\n\\left(\\frac{\\pa}{a}+2\\frac{\\pn}{n}\\right)-\\frac{\\pb}{b}\\left(\\frac{\\pn}{n}+2\\frac{\\pa}{a}\\right)\n+2\\frac{\\ppa}{a}+\\frac{\\ppn}{n}\\right\\} \n\\nonumber \\\\\n& &+\\frac{a^2}{n^2} \\delta_{ij} \\left\\{ \\frac{\\da}{a} \\left(-\\frac{\\da}{a}+2\\frac{\\dn}{n}\\right)-2\\frac{\\dda}{a}\n+ \\frac{\\db}{b} \\left(-2\\frac{\\da}{a} + \\frac{\\dn}{n} \\right) - \\frac{\\ddb}{b} \\right\\},\n\\label{einij} \\\\\n{\\tilde G}_{05} &=& 3\\left(\\frac{\\pn}{n} \\frac{\\da}{a} + \\frac{\\pa}{a} \\frac{\\db}{b} - \\frac{\\dot{a}^{\\prime}}{a}\n \\right),\n\\label{ein05} \\\\\n{\\tilde G}_{55} &=& 3\\left\\{ \\frac{\\pa}{a} \\left(\\frac{\\pa}{a}+\\frac{\\pn}{n} \\right) - \\frac{b^2}{n^2} \n\\left(\\frac{\\da}{a} \\left(\\frac{\\da}{a}-\\frac{\\dn}{n} \\right) + \\frac{\\dda}{a}\\right) \\right\\}.\n\\label{ein55} \n\\end{eqnarray} \nIn the above expressions, a prime stands for a derivative with respect to\n $u$, and a \ndot for a derivative with respect to $\\tau$. 
\n\nThese 5D Einstein equations are supplemented by boundary conditions\n(known as Israel conditions) due to the existence of our 3-brane.\nThese boundary conditions are exactly analogous to those of a charged\nplate in electromagnetism. There, the change in the perpendicular\ncomponent of the electric field as one crosses a plate with charge per\narea $\\sigma$ is given by $\\Delta(E_{\\rm perp}) = 4 \\pi \\sigma$. There\nare two analogous boundary conditions here; one is \n\\begin{equation}\n\\label{eq:israel}\n\\Delta({da \\over\n du}) \\propto \\rho_{brane};\n\\end{equation}\ni.e., the change in the derivative of the scale factor as one crosses\nour 3-brane is given by the energy density on the brane.\n\nThe combination of the 5D Einstein equations with these boundary\nconditions leads to the modified Friedmann equations on our observable\nthree dimensional universe. Depending on what is in the bulk\n(off the brane), one can obtain a variety of possibilities, such\nas $H^2 \\sim \\rho^n$ \\cite{cf}. Though we were at first concerned\nthat such modifications would be bad for cosmology, in fact\nthey can explain the observed acceleration of the universe today.\n\nCurrently, one of our pressing goals is to find a simple\nfive-dimensional energy-momentum tensor that produces the Cardassian\ncosmology. While we succeeded in constructing an ugly toy model,\nit obviously does not represent our universe. Unfortunately,\ngoing from four to five dimensions is not unique, and is difficult.\n\n\\section{Motivation for Cardassian Cosmology: 2) Fluid Description}\n\nAnother pressing goal is to find testable predictions of Cardassian\ncosmology. We want to be certain that ordinary astrophysical\nresults are not affected in an adverse way; e.g., we want\nenergy-momentum to be conserved. In the interest of addressing these\ntypes of questions, we developed a four dimensional fluid description\nof Cardassian cosmology \\cite{gondolo}. 
This may be an effective\npicture of higher dimensional physics, or may be completely\nindependent of it. This fluid description may serve as a second\npossible motivation for Cardassian cosmology.\n\nHere we use the ordinary Friedmann equation, $H^2 = 8 \\pi G \\rho\/3$,\nand the ordinary four dimensional Einstein equation,\n\\begin{equation}\nG_{\\mu\\nu} = 8 \\pi G T_{\\mu\\nu} .\n\\end{equation}\nWe take the energy density to be the sum of two terms:\n\\begin{equation}\n\\rho_{tot} = \\rho_M + \\rho_k ,\n\\end{equation}\nwhere $\\rho_k$ is the Cardassian contribution.\nAccompanying the two terms in the energy density are two\npressure terms:\n\\begin{equation}\np_{tot} = p_M + p_k ,\n\\end{equation}\nwhere the thermodynamics of an adiabatically expanding universe tells us that\n\\begin{equation}\np_k = \\rho_M \\left({\\partial \\rho_k \\over \\partial \\rho_M} \\right)_S\n- \\rho_k .\n\\end{equation}\nThese total energy density and pressure terms are what enter\ninto the four dimensional energy-momentum tensor:\n\\begin{equation}\nT_{\\mu\\nu} = {\\rm diag}(\\rho_{tot},p_{tot},p_{tot},p_{tot}).\n\\end{equation}\n\nEnergy conservation is then automatically guaranteed:\n\\begin{equation}\nT^{\\mu\\nu}{}_{;\\nu} = 0 .\n\\end{equation}\nThis equation implies a modified continuity equation\nand a modified Euler's equation. We also require particle number\nconservation. Poisson's equation is also modified and becomes,\nin the Newtonian limit,\n\\begin{equation}\n\\nabla^2 \\phi = 4 \\pi G (\\rho_{tot} + 3 p_{tot}) .\n\\end{equation}\n\nAs an example, we can show the fluid description of the \npower law Cardassian model. We have\n\\begin{equation}\n\\rho_k = b \\rho_M^n \n\\end{equation}\nand\n\\begin{equation}\np_k = - (1-n) \\rho_k ,\n\\end{equation}\nwhich is a negative pressure for $n<2\/3$. 
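The relation $p_k = -(1-n)\rho_k$ follows directly from the adiabatic pressure formula above applied to $\rho_k = b\rho_M^n$; a quick finite-difference check, in arbitrary units with illustrative parameter values:

```python
# Check p_k = rho_M * d(rho_k)/d(rho_M) - rho_k = -(1 - n) rho_k
# for the power-law contribution rho_k = b * rho_M^n (arbitrary units).
n, b = 0.2, 1.3
rho_M, eps = 2.0, 1e-6

rho_k = lambda r: b * r ** n
drho_k = (rho_k(rho_M + eps) - rho_k(rho_M - eps)) / (2 * eps)  # central difference
p_k = rho_M * drho_k - rho_k(rho_M)

expected = -(1.0 - n) * rho_k(rho_M)
```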
The Poisson equation\nbecomes\n\\begin{equation}\n\\nabla^2 \\phi = - 4 \\pi G \\left[\\rho_M - (2-3n) b \\rho_M^n \\right].\n\\end{equation}\nOne can show that this simplified model runs into trouble\non galactic scales.\nThere is a new force ${d\\vec{v} \\over dt}|_{\\rm new}\n= - {\\vec{\\nabla} p_k \\over \\rho_M}$ that\ndestroys flat rotation curves. The fluid power law\nCardassian must be thought of as an effective model which applies\nonly on large scales. Hence, we proposed another possible\nmodel, the Modified Polytropic Cardassian model described earlier.\nDue to the existence of the additional parameter $q$, which\nis important on small scales, rotation curves are unaffected.\n\n\\subsection{Phantom Cardassian}\n\nSome Cardassian models can satisfy the dominant energy condition $w =\np_{tot}\/\\rho_{tot} > -1$ even with a dark energy component $w_k =\np_k\/\\rho_k < -1$, since both the ordinary and Cardassian components of\nthe energy density are made of the same matter and radiation. For\nexample, the MP Cardassian model with $n=0.2$ and $q=2$ has $w_k < -1$.\nThere is some evidence that $w_k < -1$ matches the data \\cite{mmot}.\n\n\\subsection{Speculation: Self Interacting Dark Matter?}\n\nHere we speculate on an origin for the new Cardassian term\nin the total energy density in the fluid model. The dark matter\nmay be subject to a new, long-range confining force (fifth force)\n\\begin{equation}\nF(r) \\propto r^{\\alpha -1},\\,\\,\\, \\alpha>0 .\n\\end{equation}\nThis may be analogous to quark confinement, which exhibits negative\npressure. 
\n\nOur basic point of view is that Cardassian cosmology gives\nan attractive phenomenology, in that it is very efficient:\nmatter can provide all the observed behavior of the universe.\nTherefore it is worthwhile to examine all the avenues we\ncan think of as to the origin of these modified Friedmann terms\n(or alternatively of the fluid model).\n\n\\section{Discussion}\n\nWe have presented $H^2 = g(\\rho)$ as a modification to the Friedmann\nequation in order to suggest an explanation of the recent acceleration\nof the universe. In the Cardassian model, the universe can be flat\nand yet matter dominated. We have found that the new Cardassian\nmodifications can dominate the expansion of the universe after\n$z_{\\rm car} = \\mathcal{O}(1)$ and can drive an acceleration. We have\nfound that matter alone can be responsible for this behavior. The\ncurrent value of the energy density of the universe is then smaller\nthan in the standard model and yet is at the critical value for a flat\ngeometry.\nSuch a modified Friedmann equation may result from the existence of\nextra dimensions. Further work is required to find a simple\nfundamental theory responsible for Eq.~(\\ref{eq:new}). A second\npossible motivation for Cardassian cosmology, namely a fluid\ndescription, has been developed \\cite{gondolo}, which allows us to\ncompute everything astrophysical. A comparison with SN observations,\nboth current and upcoming, was made and shown in Figure 1.\nIn future work, we plan to complete\na study of density perturbations in order to allow us to make\npredictions for observations, e.g., of cluster abundance or the ISW\neffect, so that we may separate this model from others in upcoming\nobservations.\n\n\\section*{Acknowledgments}\n\nThis paper reflects work with collaborators Matt Lewis, Paolo Gondolo,\nYun Wang, Josh Frieman, Chris Savage, and Nori Sugiyama. We thank Ted\nBaltz for reminding us to consider the effects of the mass of the tau\nneutrino. 
I acknowledge support from the Department of Energy via the\nUniversity of Michigan and the Kavli Institute for Theoretical Physics\nat Santa Barbara.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn several recent papers (Lai \\& Tsang 2009; Tsang \\& Lai 2009c; Fu \\&\nLai 2011; Horak \\& Lai 2013), we have presented a detailed study of the\nlinear instability of non-axisymmetric inertial-acoustic modes [also\ncalled p-modes; see Kato (2001) and Wagoner (2008) for reviews] trapped \nin the innermost region of black-hole (BH) accretion discs. This global instability\narises because of wave absorption at the corotation resonance (where\nthe wave pattern rotation frequency matches the background disc\nrotation rate) and requires that the disc vortensity has a positive\ngradient at the corotation radius (see Narayan et al.~1987, Tsang \\& Lai 2008 \nand references therein). The disc vortensity (vorticity divided by\nsurface density) is given by\n\\be\n\\zeta={\\kappa^2\\over 2\\Omega\\Sigma},\n\\label{eq:zeta}\\ee \nwhere $\\Omega(r)$ is the disc rotation frequency, $\\kappa(r)$ is the \nradial epicyclic frequency and $\\Sigma(r)$ is the surface density\n\\footnote{Equation (\\ref{eq:zeta}) applies to barotropic discs in \nNewtonian (or pseudo-Newtonian) theory. See Tsang \\& Lai (2009c)\nfor non-barotropic discs and Horak \\& Lai (2013) for the full\ngeneral relativistic expression.}. \nGeneral relativistic (GR) effects\nplay an important role in the instability: For a Newtonian disc, with\n$\\Omega=\\kappa\\propto r^{-3\/2}$ and a relatively flat $\\Sigma(r)$\nprofile, we have $d\\zeta\/dr<0$, so the corotational wave absorption\nleads to mode damping. By contrast, since $\\kappa$ is non-monotonic near a\nBH (e.g., for a Schwarzschild BH, $\\kappa$ reaches a maximum at\n$r=8GM\/c^2$ and goes to zero at $r_{\\rm ISCO}=6GM\/c^2$), the\nvortensity is also non-monotonic. 
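This non-monotonicity is simple to verify numerically. The sketch below is our own illustration (in units $G=M=c=1$, not part of the original analysis): the Schwarzschild epicyclic frequency, $\kappa^2 = (1-6/r)/r^3$, vanishes at the ISCO ($r=6$) and peaks at $r=8$.

```python
# Illustration (units G = M = c = 1): the Schwarzschild radial epicyclic
# frequency kappa^2 = (1 - 6/r)/r^3 vanishes at r_ISCO = 6 and peaks at
# r = 8, which is what makes the vortensity profile non-monotonic.
import numpy as np

r = np.linspace(6.0, 20.0, 100001)
kappa2 = (1.0 - 6.0 / r) / r**3

r_peak = r[np.argmax(kappa2)]
assert abs(kappa2[0]) < 1e-12       # kappa vanishes at the ISCO (r = 6)
assert abs(r_peak - 8.0) < 1e-3     # kappa reaches its maximum at r = 8
```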
Thus, p-modes with frequencies such\nthat $d\\zeta\/dr>0$ at the corotation resonance are overstable. Our\ncalculations based on several disc models and the Paczynski-Wiita \npseudo-Newtonian potential (Lai \\& Tsang 2009, Tsang \\& Lai 2009c)\nand full GR (Horak \\& Lai 2013) showed that the lowest-order\np-modes with $m=2,3,4,\\cdots$ have the largest growth rates, with the\nmode frequencies $\\omega \\simeq \\beta m\\Omega_{\\rm ISCO}$ (thus giving\ncommensurate frequency ratios $2:3:4,\\cdots$), where the dimensionless\nconstant $\\beta\\lo 1$ depends weakly on the disc properties.\nThese overstable p-modes could potentially explain the \nHigh-frequency Quasi-Periodic Oscillations (HFQPOs) observed in BH\nX-ray binaries (e.g., Remillard \\& McClintock 2006; Belloni et al.~2012).\n\nThe effects of magnetic fields on the oscillation modes of BH\naccretion discs have been investigated by Fu \\& Lai (2009, 2011, 2012)\nand Yu \\& Lai (2013). Fu \\& Lai (2009) showed that the basic wave\nproperties (e.g., propagation diagram) of p-modes are not strongly\naffected by disc magnetic fields, and it is likely that these p-modes\nare robust in the presence of disc turbulence (see Arras, Blaes \\&\nTurner 2006; Reynolds \\& Miller 2009). By contrast, other diskoseismic\nmodes with vertical structure (such as g-modes and c-modes) may be\neasily ``destroyed'' by the magnetic field (Fu \\& Lai 2009) or\nsuffer damping due to corotation resonance (Kato 2003; \nLi et al.~2003; Tsang \\& Lai 2009a). Although a modest\ntoroidal disc magnetic field tends to reduce the growth rate of the\np-mode (Fu \\& Lai 2011), a large-scale poloidal field can enhance the instability \n(Yu \\& Lai 2013; see Tagger \\& Pellat 1999; Tagger \\&\nVarniere 2006). The p-modes are also influenced by the magnetosphere\nthat may exist inside the disc inner edge (Fu \\& Lai 2012).\n\nSo far our published works are based on linear analysis. 
While these\nare useful for identifying the key physics and issues, the nonlinear\nevolution and saturation of the mode growth can only be studied by\nnumerical simulations. It is known that fluid perturbations near the\ncorotation resonance are particularly prone to become nonlinear (e.g.,\nBalmforth \\& Korycansky 2001; Ogilvie \\& Lubow 2003). \nMoreover, real accretion discs are more complex than any\nsemi-analytic models considered in our previous works. Numerical MHD\nsimulations (including GRMHD) are playing an increasingly important\nrole in unraveling the nature of BH accretion flows (e.g., De Villiers\n\\& Hawley 2003; Machida \\& Matsumoto 2003; Fragile et al.~2007; Noble\net al.~2009, 2011; Reynolds \\& Miller 2009; Beckwith et al.~2008, 2009;\nMoscibrodzka et al.~2009; Penna et al.~2010; Kulkarni et\nal.~2011; Hawley et al.~2011; O'Neill et al.~2011; Dolence et al.~2012;\nMcKinney et al.~2012; Henisey et al.~2012). Despite much\nprogress, global GRMHD simulations still lag far behind observations,\nand so far they have not revealed clear signs of HFQPOs that are directly \ncomparable with the observations of BH X-ray binaries\n\\footnote{Henisey et al.~(2009, 2012) found evidence of\nexcitation of wave modes in simulations of tilted BH accretion disks.\nHydrodynamic simulations using $\\alpha$-viscosity (Chan 2009; O'Neill et al.~2009)\nshowed wave generation in the inner disk region by viscous instability\n(Kato 2001). The MHD simulations by O'Neill et al.~(2011) revealed\npossible LFQPOs due to disk dynamo cycles. \nDolence et al.~(2012) reported transient QPOs in the numerical models\nof radiatively inefficient flows for Sgr A$^\\star$. McKinney et\nal.~(2012) found QPO signatures associated with the interface between\nthe disc inflow and the bulging jet magnetosphere (see Fu \\& Lai\n2012). 
Note that the observed HFQPOs are much weaker than LFQPOs,\nand therefore much more difficult to obtain by brute-force simulations.}.\nIf the corotation instability and its magnetic counterparts studied in\nour recent papers play a role in HFQPOs, the length-scale involved\nwould be small and a proper treatment of flow boundary conditions is\nimportant. It is necessary to carry out ``controlled'' numerical\nexperiments to capture and evaluate these subtle effects.\n\nIn this paper, we use two-dimensional hydrodynamic simulations to\ninvestigate the nonlinear evolution of the corotational instability of\np-modes. Our 2D model has obvious limitations. For example, it does not\ninclude disc magnetic fields and turbulence. However, we emphasize that\nsince the p-modes we are studying are 2D density waves with no\nvertical structure, their basic radial ``shapes'' and real frequencies\nmay be qualitatively unaffected by the turbulence (see Arras et\nal. 2006; Reynolds \\& Miller 2009; Fu \\& Lai 2009). Indeed, several\nlocal simulations have indicated that density waves can propagate in\nthe presence of MRI turbulence (Gardiner \\& Stone 2005; Fromang et\nal. 2007; Heinemann \\& Papaloizou 2009). Our goal here is to\ninvestigate the saturation of overstable p-modes and their\nnonlinear behaviours.\n\n\n\\section{Numerical Setup}\n\nOur accretion disc is assumed to be inviscid and geometrically thin so that the\nhydrodynamical equations can be reduced to two dimensions with vertically\nintegrated quantities. We adopt an isothermal equation of state \nthroughout this study, i.e., $P=c_s^2\\Sigma$, where $P$ is the vertically integrated\npressure, $\\Sigma$ is the surface density and $c_s$ is the constant\nsound speed. Self-gravity and magnetic fields are neglected.\n\nWe use the Paczynski-Wiita pseudo-Newtonian potential (Paczynski \\&\nWiita 1980) to mimic the GR effect:\n\\be\n\\Phi=-\\frac{GM}{r-r_{\\rm S}},\n\\ee\nwhere $r_{\\rm S}=2GM\/c^2$ is the Schwarzschild radius. 
\nThe corresponding Keplerian rotation frequency and radial epicyclic frequency are\n\\ba\n&&\\Omega_{\\rm K}=\\sqrt{\\frac{GM}{r}}\\frac{1}{r-r_{\\rm S}},\\\\\n&&\\kappa=\\Omega_{\\rm K}\\sqrt{\\frac{r-3r_{\\rm S}}{r-r_{\\rm S}}}.\n\\label{eq:grkappa}\n\\ea\nIn our computation, we adopt units such that \nthe inner disc radius (at the Innermost Stable Circular Orbit or ISCO)\nis at $r=1.0$ and the Keplerian frequency at the ISCO is\n$\\Omega_{\\rm ISCO}=1$. In these units, $r_{\\rm S}=1\/3$, and\n\\be\n\\Omega_{\\rm K}=\\frac{2}{3}\\frac{1}{r-r_{\\rm S}}\\frac{1}{\\sqrt{r}}.\n\\ee\nOur computation domain extends from $r=1.0$ to $r=4.0$ in the radial\ndirection and from $\\phi=0$ to $\\phi=2\\pi$ in the azimuthal\ndirection. We also use the Keplerian orbital period\n($T=2\\pi\/\\Omega_{\\rm K}=2\\pi$) at $r=1$ as the unit for time.\nThe equilibrium state of the disc is axisymmetric. The surface density\nprofile has a simple power-law form\n\\be\n\\Sigma_0=r^{-1},\n\\ee\nwhich leads to a positive vortensity gradient in the inner disc region.\nThe equilibrium rotation frequency of the disc is given by \n\\be\n\\Omega_0(r)=\\sqrt{\\frac{4\/9}{r(r-1\/3)^2}-\\frac{c_s^2}{r^2}}.\n\\ee\nThroughout our simulations, we adopt $c_s=0.1$ so that\n$\\Omega_0 \\simeq \\Omega_{\\rm K}$.\n\n\\begin{figure*}\n\\begin{center}\n$\n\\begin{array}{ccc}\n\\includegraphics[width=0.33\\textwidth]{m2vr1.ps} &\n\\includegraphics[width=0.33\\textwidth]{m3vr1.ps} &\n\\includegraphics[width=0.33\\textwidth]{mrvr1.ps}\n\\end{array}\n$\n\\caption{Evolution of the radial velocity amplitude $|u_r|_{\\rm max}$ \n(evaluated at $r=1.1$) for three runs with initial azimuthal mode number $m=2$\n(left panel), $m=3$ (middle panel) and \nwith random perturbations (right panel). 
\nThe dashed lines are the fits for the\nexponential growth stage (between $\\sim 10$ and $\\sim 30$ orbits) of\nthe mode amplitude.}\n\\label{fig:growth}\n\\end{center}\n\\end{figure*}\n\n\n\\begin{figure*}\n\\begin{center}\n$\n\\begin{array}{cc}\n\\includegraphics[width=0.45\\textwidth]{m2svr.ps} & \n\\includegraphics[width=0.45\\textwidth]{m3svr.ps} \\\\\n\\includegraphics[width=0.45\\textwidth]{m2sdvphi.ps} &\n\\includegraphics[width=0.45\\textwidth]{m3sdvphi.ps} \n\\end{array}\n$\n\\caption{Comparison of the radial profiles of velocity perturbations\n from non-linear simulation and linear mode calculation. The top and\n bottom panels show the radial and azimuthal velocity perturbations,\n respectively. The left and right panels are for cases with\n azimuthal mode number $m=2$ and $m=3$, respectively. In each panel,\n the dashed line is taken from the real part of the complex\n wavefunction obtained in linear mode calculation, while the solid line is from\n the non-linear simulation during the exponential growth stage (at\n $T=20$ orbits), with the quantities evaluated at $\\phi=0.4\\pi$.\n Note that the normalization factor is given in the $y$-axis label\n for the nonlinear simulation results. }\n\\label{fig:vrvphi}\n\\end{center}\n\\end{figure*}\n\nWe solve the Euler equations that govern the dynamics of the disc flow\nwith the PLUTO code \\footnote{publicly available at\n http:\/\/plutocode.ph.unito.it\/} (Mignone et al. 2007), which is a\nGodunov-type code with multiphysics and multialgorithm modules. For\nthis study, we choose a Runge-Kutta scheme (for time integration) and\npiecewise linear reconstruction (for space integration) to achieve\nsecond order accuracy, and the Roe solver for the solution of the Riemann\nproblem. The grid resolution we adopt is $(N_r\\times\nN_{\\phi})=(1024\\times 2048)$ so that each grid cell is nearly\nsquare. Our runs typically last for 100 orbits (Keplerian orbits at\nthe inner disc boundary). 
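The disc model above can be sanity-checked in a few lines. The sketch below is our own illustration (not the PLUTO setup itself): it verifies that in the adopted units $\Omega_{\rm K}(1)=1$ and $\kappa(1)=0$, and that with $\Sigma_0=r^{-1}$ the vortensity $\zeta=\kappa^2/(2\Omega\Sigma)$ (approximating $\Omega\simeq\Omega_{\rm K}$) indeed rises outward in the inner disc, the condition for the corotational instability.

```python
# Sketch of the disc model in code units (r_ISCO = 1, Omega_K(1) = 1,
# r_S = 1/3); an illustration of the setup, not the simulation input.
import numpy as np

r_S = 1.0 / 3.0

def Omega_K(r):
    return (2.0 / 3.0) / ((r - r_S) * np.sqrt(r))

def kappa(r):
    return Omega_K(r) * np.sqrt((r - 3.0 * r_S) / (r - r_S))

def vortensity(r):
    Sigma0 = 1.0 / r                  # Sigma_0 = r^{-1}
    # approximate Omega by Omega_K (valid for c_s = 0.1 << 1)
    return kappa(r)**2 / (2.0 * Omega_K(r) * Sigma0)

assert abs(Omega_K(1.0) - 1.0) < 1e-12   # Omega_ISCO = 1 in these units
assert kappa(1.0) < 1e-7                 # epicyclic frequency -> 0 at ISCO

r = np.linspace(1.0, 4.0, 4001)
zeta = vortensity(r)
# positive vortensity gradient in the inner disc region
assert np.all(np.diff(zeta[r < 2.0]) > 0)
```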
To compare with the linear mode calculations\n(Lai \\& Tsang 2009), the inner disk boundary is set to be reflective\nwith zero radial velocity. At the outer disc boundary, we adopt the\nnon-reflective boundary condition. This is realized by employing the\nwave damping method (de Val-Borro et al. 2006) to reduce wave reflection. \nThis implementation mimics the outgoing radiative boundary condition \nused in the linear analysis.\n\n\n\\section{Results}\n\nWe carried out simulations with two different types of initial\nconditions for the surface density perturbation. In the first, we\nchoose $\\delta \\Sigma(r, \\phi)$ to be randomly distributed in $r$ and\n$\\phi$. In the second, we impose $\\delta \\Sigma(r, \\phi)\\propto\n\\cos(m\\phi)$ that is randomly distributed in $r$, so that the perturbation\nhas an azimuthal number $m$.\nIn all cases, the initial surface density perturbation has a small amplitude \n($|\\delta\\Sigma\/\\Sigma_{0}| \\leq 10^{-4}$).\n\n\\begin{table*}\n\\caption{Comparison of results from linear and nonlinear studies of overstable disc p-modes}\n\\centering\n\\begin{threeparttable}\n\\begin{tabular}{ c c c c c c c c }\n\\noalign{\\hrule height 1pt}\n$m$\\tnote{a} & $\\omega_{r}$\\tnote{b} &$\\omega_i$\\tnote{c} & $\\omega_{r}\/m\\Omega$\\tnote{d} & $\\omega_{r1}$\\tnote{e} & $|\\omega_r-\\omega_{r1}|\/\\omega_r$\\tnote{f} & $\\omega_{r2}$\\tnote{g} & $|\\omega_{r2}-\\omega_{r1}|\/\\omega_{r2}$\\tnote{h} \\\\\n\\hline\n$2$ & $1.4066$ & $0.0632$ & $0.7033$ & $1.3998$ & $0.5\\%$ & $1.4296$ & $2.1\\%$ \\\\\n$3$ & $2.1942$ & $0.0733$ & $0.7314$ & $2.1997$ & $0.3\\%$ & $2.2445$ & $2.0\\%$ \\\\\n$4$ & $3.0051$ & $0.0763$ & $0.7512$ & $2.9996$ & $0.2\\%$ & $3.0594$ & $2.0\\%$ \\\\\n$5$ & $3.8294$ & $0.0751$ & $0.7659$ & $3.7995$ & $0.8\\%$ & $3.8886$ & $2.3\\%$ \\\\\n$6$ & $4.6621$ & $0.0714$ & $0.7770$ & $4.6494$ & $0.3\\%$ & $4.7749$ & $2.6\\%$\\\\\n$7$ & $5.5007$ & $0.0664$ & $0.7858$ & $5.4992$ & $0.03\\%$ & $5.6756$ & $3.1\\%$ \\\\\n$8$ & 
$6.3436$ & $0.0607$ & $0.7930$ & $6.3492$ & $0.09\\%$ & $6.3189$ & $0.5\\%$\\\\\n\\noalign{\\hrule height 1.2pt}\n\\end{tabular}\n\\begin{tablenotes}\n\\item[a] Azimuthal mode number\n\\item[b] Mode frequency from the linear calculation (in units of Keplerian\n orbital frequency at the inner disc boundary; same for $\\omega_i$,\n $\\omega_{r1}$ and $\\omega_{r2}$)\n\\item[c] Mode growth rate from the linear calculation \n\\item[d] Ratio of wave pattern speed to the Keplerian orbital frequency at the inner disc boundary\n\\item[e] Mode frequency during the exponential growth stage of nonlinear simulation \n(peak frequency of the power density spectrum between $\\sim 10$ orbits and $\\sim 30$ orbits)\n\\item[f] Difference between $\\omega_r$ (linear result) and $\\omega_{r1}$ (nonlinear result)\n\\item[g] Mode frequency during the saturation stage of nonlinear simulation \n(peak frequency of the power density spectrum between $\\sim 30$ orbits and $\\sim 100$ orbits)\n\\item[h] Difference between $\\omega_{r1}$ and $\\omega_{r2}$\n\\end{tablenotes}\n\\end{threeparttable}\n\\label{tab:tab1}\n\\end{table*}\n\nFig.~\\ref{fig:growth} shows the evolution of the radial velocity\namplitude near the inner disc radius for runs with initial azimuthal\nmode number $m=2$, $m=3$ and random initial perturbation,\nrespectively. This velocity amplitude is obtained by searching for the\nmaximum $|u_r|$ at a given $r$ by varying $\\phi$ for each given time\npoint. We chose $r=1.1$ because this is where the largest radial\nvelocity perturbation is located (see the upper panels of\nFig.~\\ref{fig:vrvphi}). We see that in all cases there are three\nstages in the amplitude evolution. The first stage occupies roughly\nthe first $10$ orbits, during which the initial perturbation starts to\naffect the flow and presumably excites many modes\/oscillations in the\ndisc. 
In the second stage (from $\\sim 10$ orbits to $\\sim\n30$ orbits), the fastest growing mode becomes dominant and undergoes \nexponential growth with its amplitude increasing by about four orders\nof magnitude. In the last stage (beyond $\\sim 30$ orbits), the\nperturbation growth saturates and its amplitude remains at\napproximately the same level. A fit to the exponential growth stage\ngives the growth rate of the fastest growing perturbation, which is\nthe slope of the fitted straight line. For the $m=2$ run, we find \n$0.17\/\\log_{10}(e)\/2\\pi\\simeq 0.0637$ (in units of the orbital\nfrequency at $r=1$) as the growth rate, which is quite consistent with\nthe result from our linear eigenmode calculation $\\omega_{i}\\simeq\n0.0632$ (the imaginary part of the eigenfrequency; Lai \\& Tsang 2009;\nFu \\& Lai 2011; see Table~\\ref{tab:tab1}). For the $m=3$ run,\nour simulation gives $0.074$ as the mode growth rate, close to\n$\\omega_{i}=0.0733$ from the linear calculation. In\nFig.~\\ref{fig:vrvphi}, we plot the radial profile of the velocity\nperturbation at a given time during the exponential growth stage\nof the simulation, and we compare it with the wavefunctions obtained\nfrom the linear mode calculation. Note that in each panel of\nFig.~\\ref{fig:vrvphi} we have normalized the different sets of data so\nthat they have the same scale. The normalization factor has also been\nincluded in the figure labels for easy recovery of the absolute value\nin the simulation. We can see that the wavefunctions obtained from the two studies \nagree quite well. The only obvious differences are near the outer disc boundary,\nwhich can be attributed to the fact that the outer boundary conditions\nemployed in the numerical simulation and linear mode calculation are not\nexactly the same. 
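The growth-rate measurement above can be illustrated with synthetic data. The sketch below is our own reconstruction (not the actual analysis script): a straight-line fit to $\log_{10}$ of the amplitude versus time in orbits, with the slope converted to a growth rate via $\omega_i = {\rm slope}/\log_{10}(e)/2\pi$, recovers a known input rate.

```python
# Illustration of the growth-rate fit (synthetic data, not the paper's
# script): fit log10|u_r| vs. time in orbits during the exponential
# stage, then convert the slope to omega_i in units of the orbital
# frequency at r = 1.
import numpy as np

omega_i_true = 0.0632                     # e.g. the linear m = 2 growth rate
t_orbits = np.linspace(10.0, 30.0, 200)   # exponential-growth stage
amp = 1e-6 * np.exp(omega_i_true * 2.0 * np.pi * t_orbits)

slope, _ = np.polyfit(t_orbits, np.log10(amp), 1)
omega_i_fit = slope / np.log10(np.e) / (2.0 * np.pi)
assert abs(omega_i_fit - omega_i_true) < 1e-8
```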
Nevertheless, this agreement and the agreement in \nthe mode growth rate in Fig.~\\ref{fig:growth} confirm that our non-linear simulations \nindeed capture the same unstable disc p-modes as those revealed in linear\nperturbation analysis.\n\n\n\n\\begin{figure*}\n\\begin{center}\n$\n\\begin{array}{cc}\n\\includegraphics[width=0.4\\textwidth]{m3fft2.ps} &\n\\includegraphics[width=0.4\\textwidth]{mrfft2.ps} \\\\\n\\includegraphics[width=0.4\\textwidth]{m3fft3.ps} &\n\\includegraphics[width=0.4\\textwidth]{mrfft3.ps} \n\\end{array}\n$\n\\caption{Power density spectra of the radial velocity perturbations\nnear the disk inner boundary ($r=1.1$). Each panel shows the normalized FFT\n magnitude as a function of frequency. The left and right\n columns are for runs with initial $m=3$ and with random perturbation,\n respectively. In the top and bottom panels, the Fourier transforms are\n sampled for time periods of [10, 30] orbits and [30,\n 100] orbits, respectively.}\n\\label{fig:fft}\n\\end{center}\n\\end{figure*}\n\n\n\\begin{figure*}\n\\begin{center}\n$\n\\begin{array}{cc}\n\\includegraphics[width=0.4\\textwidth]{m2cvr.ps} & \n\\includegraphics[width=0.4\\textwidth]{m3cvr.ps} \\\\\n\\includegraphics[width=0.4\\textwidth]{m2cdvphi.ps} &\n\\includegraphics[width=0.4\\textwidth]{m3cdvphi.ps} \n\\end{array}\n$\n\\caption{Evolution of the radial profiles of velocity perturbations from\n the non-linear simulations. The top and bottom panels show the radial\n and azimuthal components of velocity perturbation, respectively. \n The left and right panels are for cases with azimuthal mode number\n $m=2$ and $m=3$, respectively. In each panel, the data are taken\n from points with fixed $\\phi=0.4\\pi$. 
Different line types represent\n different times during the simulation.}\n\\label{fig:cvrvphi}\n\\end{center}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{center}\n$\n\\begin{array}{ccc}\n\\includegraphics[width=0.33\\textwidth]{m2vr_turn0020.ps} &\n\\includegraphics[width=0.33\\textwidth]{m3vr_turn0020.ps} &\n\\includegraphics[width=0.33\\textwidth]{mrvr_turn0020.ps} \\\\\n\\includegraphics[width=0.33\\textwidth]{m2vr_turn0030.ps} &\n\\includegraphics[width=0.33\\textwidth]{m3vr_turn0030.ps} &\n\\includegraphics[width=0.33\\textwidth]{mrvr_turn0030.ps} \\\\\n\\includegraphics[width=0.33\\textwidth]{m2vr_turn0050.ps} &\n\\includegraphics[width=0.33\\textwidth]{m3vr_turn0050.ps} &\n\\includegraphics[width=0.33\\textwidth]{mrvr_turn0050.ps} \n\\end{array}\n$\n\\caption{Evolution of the radial velocity for runs with initial $m=2$\n (left), $m=3$ (middle) and random (right) perturbations,\n respectively. From top to bottom, the times are $T=20$, $30$ and $50$ orbits, \n respectively. Note that the color scale varies from panel to panel.}\n\\label{fig:vrcolor}\n\\end{center}\n\\end{figure*}\n\n\nTo explore the time variability of the flow, we carry out a\nFourier transform of the radial velocity $u_r(r, \\phi, t)$ at fixed\n$r$ and $\\phi$ during different evolutionary stages. In\nFig.~\\ref{fig:fft} we show some examples of the resulting power\ndensity spectra (normalized to the maximum value of unity). Different\nrows correspond to different total sampling times and different\ncolumns represent runs with different initial surface density\nperturbations. In the left column, the disc has an initial\nperturbation with azimuthal mode number $m=3$. A mixture of various\nmodes\/oscillations is excited in the flow. After about $10$ orbits of\nevolution, one of them (the fastest growing mode) grows so much in\noscillation amplitude that it dominates over the other\nmodes. This corresponds to the primary spike in the top-left\npanel. 
The other spikes in the same panel are harmonics of this\nprimary spike (see the labels of the dashed vertical\nlines). Table~\\ref{tab:tab1} shows that the frequency\\footnote{All the\n frequencies in this paper are angular frequencies unless otherwise\n noted.} of this fastest growing mode ($\\omega_{r1}$) differs\nfrom the frequency obtained in the linear mode calculation by only\n$0.3\\%$, which again demonstrates the consistency of these two studies. \nAfter the perturbation saturates (bottom-left panel), we see\nthat the basic structure of the power density spectrum does not change\nmuch except that these spikes are not as ``clean'' as in the linear regime;\nthis is probably due to the interaction of different modes. Compared with the upper\npanel, the location of the primary spike ($\\omega_{r2}$ in\nTable~\\ref{tab:tab1}) is increased by $2.0\\%$. In\nTable~\\ref{tab:tab1} we also include the results from both the linear mode\ncalculation and the numerical simulation for modes with other mode numbers\n$m$. The comparison illustrates two main points: First, the\nfrequencies of the fastest growing modes during the exponential\ngrowth stage of the numerical simulations are exceptionally close to the\nlinear calculation results (differing by less than $1\\%$); second, the\nfrequencies of the fastest growing modes during the saturation stage\nare only slightly higher (except for the $m=8$ mode, which shows a lower\nfrequency) than those during the exponential growth stage. These results indicate\nthat the mode frequencies obtained in the linear mode calculation are\nfairly robust and can be reliably applied in the interpretation of HFQPOs.\n\nIn the right column of Fig.~\\ref{fig:fft}, the simulation starts with\na random initial perturbation which excites modes with various\n$m$'s. During the exponential growth stage, six modes stand out. By\nexamining the locations of the corresponding spikes, we know that these\nare the $m=3$, $4$, $5$, $6$, $7$, $8$ modes. 
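The procedure of reading a mode frequency off the power density spectrum can be sketched with a synthetic signal. The toy reconstruction below is our own illustration (not the actual analysis pipeline): sampling a sinusoid of known angular frequency plus noise, taking an FFT, and locating the dominant peak recovers the input frequency to within one frequency bin.

```python
# Toy reconstruction of the power-density-spectrum analysis (synthetic
# signal, not the paper's pipeline): recover a mode's angular frequency
# from the dominant FFT peak.
import numpy as np

omega_r = 2.1942                 # e.g. the m = 3 linear mode frequency
T = 20 * 2.0 * np.pi             # 20 inner-disc orbits of data
N = 8192
t = np.linspace(0.0, T, N, endpoint=False)
u_r = np.sin(omega_r * t) + 0.1 * np.random.default_rng(0).normal(size=N)

power = np.abs(np.fft.rfft(u_r))**2
freq = np.fft.rfftfreq(N, d=t[1] - t[0])          # cycles per time unit
omega_peak = 2.0 * np.pi * freq[np.argmax(power[1:]) + 1]  # skip DC bin

assert abs(omega_peak - omega_r) < 2.0 * np.pi / T  # within one frequency bin
```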
Although the\n$m=3$ mode seems to be the most prominent one in this figure, we note\nthat this is because for this particular run the initial random\nperturbation happens to contain more $m=3$ wave components. If we were\nto start the run with a different initial random perturbation, then\nthe relative strengths of those peaks would also be different (not\nnecessarily with $m=3$ being dominant).\n\nFig.~\\ref{fig:cvrvphi} shows the comparison of the velocity perturbations \nduring the linear stage ($T=20$ orbits), at the end of the linear\nstage ($T=30$ orbits) and during the saturation stage ($T=50$ orbits)\nfor simulations with different initial perturbations. At $T=20$\norbits, the oscillation mainly comes from the single fastest growing\nmode and has a smooth radial profile. At $T=30$ orbits, the perturbation\nstarts to saturate, and the oscillation now consists of many different modes,\nand its radial profile exhibits sharp variations at several locations.\nThese sharp features remain after the saturation ($T=50$ orbits).\n\nTo see this evolution from a different perspective, in\nFig.~\\ref{fig:vrcolor} we show the color contours of radial velocity\nfor runs with different initial perturbations (different columns) at\ndifferent times (different rows). We can clearly see that spiral waves\ngradually develop due to the instability. As the system evolves, sharp\nfeatures in velocities emerge. At the end (the bottom \nrow) the spiral arms become more irregular, which is related\nto the emergence and interaction of multiple modes after saturation.\n\nThe sharp velocity variations shown in\nFigs.~\\ref{fig:cvrvphi}-\\ref{fig:vrcolor} suggest shock-like features.\nNote that since our simulations adopt an isothermal equation of state\n(with $P\/\\Sigma=$constant), there is no entropy generation and shock\nformation in the strict sense. The radial velocity jump (see\nFig.~\\ref{fig:cvrvphi}) is comparable to but always smaller than the\nsound speed. 
Nevertheless, these sharp features imply that\nwave steepening plays an important role in the mode saturation. \n\n\\section{Conclusions}\n\nWe have carried out high-resolution, two-dimensional hydrodynamical\nsimulations of overstable inertial-acoustic modes (p-modes) in BH\naccretion discs with various initial conditions. The evolution of\ndisc p-modes exhibits two stages. During the first (linear) stage, the\noscillation amplitude grows exponentially. In the cases with a\nspecific azimuthal mode number ($m$), the mode frequency, growth rate\nand wavefunctions agree well with those obtained in our previous\nlinear mode analysis. In the cases with random initial perturbations,\nthe disc power-density spectrum exhibits several prominent\nfrequencies, consistent with those of the fastest growing linear modes\n(with various $m$'s). These comparisons with the linear theory\nconfirm the physics of the corotational instability that drives disc\np-modes presented in our previous studies (Lai \\& Tsang 2009; Tsang \\&\nLai 2009c; Fu \\& Lai 2011, 2012). In the second stage, the mode growth\nsaturates and the disc oscillation amplitude remains at roughly a\nconstant level. In general, we find that the primary disc oscillation\nfrequency (in the cases with specific initial $m$) is larger than the\nlinear mode frequency by less than $4\\%$, indicating the robustness of\nthe disc oscillation frequency in the non-linear regime. 
Based on the\nsharp, shock-like features of the fluid velocity profiles, we suggest that\nthe nonlinear saturation of disc oscillations is caused by wave\nsteepening and mode-mode interactions.\n\nAs noted in Section 1, our 2D hydrodynamical simulations presented in\nthis paper do not capture various complexities (e.g., magnetic fields,\nturbulence, radiation) associated with real BH accretion discs.\nNevertheless, they demonstrate that under appropriate conditions, disc\np-modes can grow to nonlinear amplitudes with well-defined frequencies\nthat are similar to the linear mode frequencies. A number of issues\nmust be addressed before we can apply our theory to the interpretation\nof HFQPOs. First, magnetic fields may play an important role in the\ndisc oscillations. Indeed, we have shown in previous linear\ncalculations (Fu \\& Lai 2011, 2012) that the growth of p-modes can be\nsignificantly affected by disc toroidal magnetic fields. A strong,\nlarge-scale poloidal field can also change the linear mode frequency\n(Yu \\& Lai 2013). Whether or not these remain true in the non-linear regime is\ncurrently unclear. Second, understanding the nature of the inner disc boundary\nis crucial. Our calculations rely on the assumption that the inner disc edge\nis reflective to incoming spiral waves. In the standard disc model with a zero-torque\ninner boundary condition, the radial inflow velocity is not negligible near the ISCO, \nand the flow goes through a transonic point. While the steep density and velocity\ngradients at the ISCO give rise to partial wave reflection (Lai \\& Tsang 2009),\nsuch radial inflow can lead to significant mode damping such that the net growth rates of\np-modes become negative. This may explain the absence of HFQPOs in the thermal\nstate of BH X-ray binaries. 
However, it is possible that the innermost region of \nBH accretion discs accumulates significant magnetic flux and forms a magnetosphere.\nThe disc-magnetosphere boundary will be highly reflective, leading to the \ngrowth of disc oscillations (Fu \\& Lai 2012; see also Tsang \\& Lai 2009b).\nFinally, the effect of MRI-driven disc turbulence on the p-modes\nrequires further understanding. In particular, turbulent viscosity \nmay lead to mode growth or damping, depending on the magnitude and the \ndensity-dependence of the viscosity (R. Miranda \\& D. Lai 2013, in prep).\n\n\n\\section*{Acknowledgements}\nThis work has been supported in part by the NSF grants AST-1008245,\nAST-1211061 and the NASA grant NNX12AF85G. WF also acknowledges the\nsupport from the Laboratory Directed Research and Development Program\nat LANL.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}