diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcuja" "b/data_all_eng_slimpj/shuffled/split2/finalzzcuja" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcuja" @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\n\nConceptual metaphor is an ubiquitous cognitive mechanism that allows us to structure and reason about abstract concepts by relating them to experiential domains \\citep{lakoff2008metaphors, feldman2008molecule}. In language, metaphors allow human communication and reasoning about abstract ideas using concrete notions learned from sensorimotor, emotional, and other embodied experience~\\citep{thibodeau2011}: ``a plan is \\textit{solid}''; ``the economy \\textit{is stumbling}''; ``I \\textit{see} what you mean''.\n\nTo illustrate the role of metaphor in abstract reasoning, consider the following metaphorical statement: ``the economy is \\emph{stumbling}''. According to conceptual metaphor theory (CMT), we understand this statement through mental simulation, by connecting the abstract concept of economy to the imagined movement of a stumbling person. We use the same mental imagery to infer that the economy, just like a stumbling person, is ``unstable'' and ``might fall''. \nAs another example, in the metaphorical statement ``a proposal is \\emph{solid}'', bringing to mind solid objects and their properties suggests that the proposal, just like a physical object, was ``well-built'' and ``will not easily break''. \nHowever, not all properties generalize: unlike a physical object, a proposal, which is an abstract entity, cannot be ``thrown'' or ``bent''; \nunlike a stumbling person, the economy does not ``wear shoes''.\n\nLarge Language Models (LLMs) have achieved remarkable results on a variety of tasks. However, in contrast to humans, LLMs do not have access to commonsense and embodied experiences of the world~\\citep{bender-koller-2020-climbing}. Although the data LLMs are trained on includes up to trillions of text tokens, it is unclear how much of this data allows them to capture human commonsense reasoning \\citep{gordon2013reporting, becker-etal-2021-reconstructing}. Conceptual metaphor theory suggests that embodied and implicit knowledge is required for the ability to utilize metaphors in commonsense reasoning about concrete and abstract ideas.\n\nWe propose a novel dataset, MiQA (Metaphorical Inference Questions and Answers), to assess the ability of a model to reason with conventional metaphors.\nThe benchmark draws on the CMT research \\citep{grady1997foundations} to construct a representative set of primary metaphors, which are contrastively paired with literal statements. \nOur task requires a model to make a correct inference in simple situations without specifying whether the contextual register is literal or metaphorical, leveraging research on metaphor processing \\citep{rai2020}, commonsense reasoning \\citep{davis2015}, and natural language inference \\citep{dagan2006, bowman-etal-2015-large}.\n\nOur benchmark combines the previously isolated areas of metaphor detection and commonsense inference. 
Although there is considerable research on both of these areas separately, it is unclear whether such capabilities scale compositionally: LLMs could handle two separate tasks well, but not their combination~\\citep{Keysers2019}.\n\nOur contributions are the following:\n\\begin{itemize}[nolistsep]\n \\setlength\\itemsep{0.25em}\n \\item We propose MiQA, a benchmark for commonsense inference with conventional metaphors;\n \\item We show a large discrepancy between the performance of small and large models in a binary-choice MiQA task, from chance to human-level accuracy;\n \\item We use a generative MiQA task to corroborate the performance of the largest model in an open-ended setting, showing that although human-level performance is approached, careful multiple-shot prompting is required. \n\\end{itemize}\n\n\n\\input{examples_table}\n\n\n\\section{Related Work}\n\nMetaphor has received renewed attention in natural language processing, but most tasks have been restricted to detection (e.g. \\citealp{shared-metaphor-task-2018, shared-metaphor-task-2020, choi-etal-2021-melbert}) on large annotated corpora \\citep{steen2010method, beigman-klebanov-etal-2016-semantic}. Human-level performance has not been reached by LLMs, but the progress is promising and an active area of research. However,\nthese tasks may excessively rely on context-dependent word meanings \\citep{neidlein-etal-2020-analysis} and do not measure the ability to reason with metaphor. \n\n\nMetaphor paraphrasing is another active area of research. BIG-bench \\citep{big-bench}, a collaborative multi-task benchmark intended to test a variety of capabilities of LLMs, includes four tasks related to metaphor.\nWhile these tasks contain novel metaphors, they still do not assess the ability to employ metaphoric knowledge in reasoning.\n\nRecently, \\citet{chakrabarty-etal-2022-rocket} built a dataset for multiple-choice story continuations involving similes and idioms extracted from books. \nSubsequently, \\citet{fig-qa-paper} proposed a metaphor interpretation task that requires models to choose the correct out of two interpretations of a simile. \nIn contrast, our task combines metaphor interpretation with commonsense inference, uses a more systematic data source, and has an additional adversarial character, as it requires the selection between two semantically-close items instead of items with opposite meanings.\n\n\n\n\n\\section{Dataset}\n\\label{dataset}\n\n\\subsection{Motivation}\n\nMost existing studies of metaphor have primarily started from corpus-based methods, using frequency and other corpus-based metrics to detect or classify metaphors. This process leads to a primary focus on corpus distributions and makes it hard to compare studies across different corpora. Furthermore, it ignores the central tenet of metaphor theory that foundational metaphors are grounded in non-linguistic and experiential domains which may be assumed as a background and thus underrepresented in corpora. \n\nTo address this, we chose to use a foundational ontology of primary conceptual metaphors~\\citep{grady1997foundations} based on CMT~\\citep{lakoff2008metaphors, feldman2008molecule}. Our choice of primary metaphors has multiple desirable properties. Primary metaphors are a good starting point for the investigation of more complex, compositional metaphor. The metaphors in our dataset are developmentally early in child experience with primary scenes and language. 
Moreover, the chosen metaphors are embodied, in that the source domain is often observable and sensory-motor (size, warmth, height) while the targets are less observable and often subjective or abstract (importance, affection, quantity). The metaphors we chose form a basis set of mappings that can create, through composition, more complex mappings, such as the Event Structure Metaphor, that maps movement and manipulation to actions. Our approach ensures that the distributions of metaphor categories in the task is balanced and hence reflective of the capability of large models to use primary metaphors as building blocks of reasoning.\n\n\\subsection{Construction}\n\nWe constructed a novel dataset consisting of $150$ items. \nThe items were manually created by the authors based on the work of \\citet{grady1997foundations}, which lists $100$ primary metaphors that are conventional, developmentally early, and form a basis set for composing complex mappings.\n\nEach item in the dataset is a tuple consisting of four sentences in English: a literal premise ($L_p$), a premise containing a conventional metaphor ($M_p$), an implication of the literal premise ($L_c$), and an implication of the metaphorical premise ($M_c$).\n\nThe tuples are paired so that a mistaken literal interpretation of $M_p$ can falsely suggest that $L_c$ is implied. For example, a wrong inference would be:\n``\\textit{I see what you mean}'' implies that ``\\textit{I am using my eyes}''.\nThe false implication $M_p \\to L_c$ thus serves as an adversarial element that probes whether the model correctly registers the metaphorical context.\n\nFor each primary metaphor proposed by \\citet{grady1997foundations}, we manually created $1$-$2$ pairs of items in the form described above, where $M_p$ is an example of the metaphor, while $L_p$ relates to the source domain of the metaphor only. \n\nTo create the final benchmark for LLMs, we used these items to generate two types of adversarial questions. \nThe first type (``\\textit{implies}''-questions) requests the model to select the most likely inference given a metaphorical statement. Answering correctly requires the model to not be tricked by a possible literal interpretation of the metaphorical premise.\nThe second type (``\\textit{implied-by}''-questions) requests the model to select the most likely premise that a literal statement is implied by. Answering correctly requires the model to not be tricked by a possible metaphorical interpretation of the literal conclusion.\nSee Table~\\ref{miq-questions} for examples.\n\n\nWe combined these items to obtain a benchmark consisting of $300$ questions, of which half are ``\\textit{implies}''-questions and half are ``\\textit{implied-by}''-questions. This pairing of tasks ensures that the model does not achieve a better score if biased towards assigning a higher likelihood to either literal or metaphorical continuations of a statement. \n\n\n\\section{Human Evaluation}\n\nWe estimated the human performance on the binary-choice task using the responses of $15$ human adult volunteers with English as first or second language.\nThe participants were told that the aim of the research was to gather a set of commonsense responses and compare them to LLMs responses. 
No additional information about the task was given.\n\n\n\\section{Large Language Models Evaluation}\n\n\n\\input{palm_generative_examples}\n\nWe evaluated the performance of two pre-trained LLMs: \nPaLM with 8B, 62B, 540B parameters \\citep{palm-paper} and GPT-3 Ada, Babbage, Curie and DaVinci \\citep{gpt3-paper}. The parameter counts of the GPT-3 models are not publicly available, but have been estimated at 350M, 1.3B, 6.7B, and 175B respectively \\footnote{\\href{ https:\/\/blog.eleuther.ai\/gpt3-model-sizes}{https:\/\/blog.eleuther.ai\/gpt3-model-sizes}}.\n\nThe main purpose of this study is to assess the capabilities of LLMs on the MiQA benchmark. For comparison, we also verify the capabilities of pre-trained fine-tuned smaller language models on our benchmark. We follow~\\citet{fig-qa-paper} in using encoder-only models trained on the natural language inference datasets SNLI~\\citep{bowman-etal-2015-large} and MNLI~\\citep{williams-etal-2018-broad} for zero-shot evaluation.\nWe opt for this approach because MiQA is designed as a small dataset suitable as a benchmark and not for fine-tuning.\nWe test the state-of-the-art encoder-only model DeBERTaV3~\\citep{he2021debertav3} in sizes small, medium and large. These models have 44M, 86M and 304M parameters respectively, and their weights are available online\\footnote{\\href{https:\/\/huggingface.co\/models?search=cross-encoder\/nli-deberta-v3}{https:\/\/huggingface.co\/models?search=cross-encoder\/nli-deberta-v3}}.\nThe models take a premise-implication pair and produce a probability distribution over three classes: ``entailment'', ``contradiction'', and ``undetermined''. We report the results for the best-performing score, in this case $1 - P(\\emph{contradiction})$, which outranks $P(\\emph{entailment})$.\n\n\n\\subsection{Binary-Choice Tasks}\n\nWe first assessed the models by prompting them with the question types illustrated in Table~\\ref{miq-questions} in $0$, $1$ and $5$-shot settings. Few-shot prompts were obtained by prefixing with randomly selected questions followed by their correct answers. To score the results, we obtained the log likelihood of each of the two choices as candidate continuations to the given prompt question. A response was scored as correct if the log likelihood of the correct choice was larger than that of the incorrect choice.\n\nPrompts can greatly influence LLM predictions \\citep{lu-etal-2022-fantastically, cao-etal-2021-knowledgeable}. As expected, we observed variability with changing prompts. To mitigate this, we tried multiple prompts, as detailed in Appendix~\\ref{appendix}. For each model, we selected the prompt that performed best with $0$-shot, and subsequently used this prompt to obtain and report its results in few-shot settings. Additionally, we used two baseline prompts (an empty prompt with no choices, and a prompt containing an unrelated question), which can indicate if the models simply learn to select either metaphorical or literal statements independently of the prompt in few-shot settings.\n \n\\subsection{Generative Task}\n\nIn addition to the binary-choice tasks, we also tested the largest model, PaLM-540b, in a generative setting. We prompted this model with questions of the form: ``$M_p$. Could that imply that $L_c$?''. This capitalises on the adversarial false implication $M_p \\to L_c$ described in section~\\ref{dataset}. \n\nWe obtained completions to the $150$ questions of this form generated from the MiQA dataset. 
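For illustration, the sketch below shows how a MiQA item could be turned into the adversarial generative prompt above and how a binary-choice question can be scored from log likelihoods; the field names, the exact prompt wording and the log_likelihood helper are illustrative placeholders rather than our implementation (the prompt wording used for the binary-choice tasks is listed in Appendix~\ref{appendix}).
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class MiqaItem:
    l_p: str  # literal premise
    m_p: str  # metaphorical premise, e.g. "I see what you mean"
    l_c: str  # implication of the literal premise, e.g. "I am using my eyes"
    m_c: str  # implication of the metaphorical premise

def generative_prompt(item: MiqaItem) -> str:
    # Adversarial open-ended question probing the false implication M_p -> L_c.
    return f"{item.m_p}. Could that imply that {item.l_c}?"

def score_binary_choice(log_likelihood, prompt: str,
                        correct: str, incorrect: str) -> bool:
    # A response counts as correct when the model assigns a higher log
    # likelihood to the correct choice as a continuation of the prompt.
    return (log_likelihood(prompt + " " + correct)
            > log_likelihood(prompt + " " + incorrect))
\end{verbatim}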
Answers were manually and independently scored by every author. Every author scored at least two thirds of all responses and the scores were averaged. Scoring consisted of labelling the first paragraph of an answer as ``correct'', ``wrong'' or ``ambiguous''. To compute the accuracy over the generative task, ``correct'' responses were scored as $1$ and ``ambiguous'' responses were scored as $0.5$. Agreement between raters was medium, with intraclass correlation \\citep{shrout1979intraclass} $ICC(2, k)$ at $0.56$. Examples of scored answers are shown in Table~\\ref{palm-generative}. \n\nAs before, we evaluated the model in $0$, $1$ and $5$-shot settings. From the $0$-shot setting, we selected $32$ answers produced by the model that all authors independently scored as ``correct'', and the same number of answers of the form $M_p \\to M_c$ and $L_p \\to L_c$. We randomly selected $1$ or $5$ of these answers and their corresponding questions to prefix each prompt question in $1$- and $5$-shot settings.\n\n\n\\section{Results} \n\n\\input{table_narrow}\n\nThe full results are shown in Table~\\ref{miq-results}.\n\nFirstly, for the binary-choice tasks, there was a considerable gap between small and large LLMs. While the smaller models performed at or close to chance level, the largest models achieved very good performance even with $0$ shots, and approached human-level performance with few shots. We note that the ``\\textit{implied-by}'' task was overall more difficult than the ``\\textit{implies}'' task for both humans and LLMs. \n\nSecondly, the chance-level performance on the baseline prompts suggests that the increase in performance in few-shot settings was not due to the model learning to select either metaphorical or literal statements independently of the prompt. \nOn the other hand, the strong performance of the DeBERTaV3 models suggests a high level of transfer from the NLI datasets to MiQA, although there is a still a considerable gap to human performance.\n\nFinally, the generative results on PaLM-540b estimated the model performance in an open-ended setting. Similarly to the binary task, the model performed considerably better with $5$ shots compared to $0$ shots, approaching human performance. However, the gap between human and model performance for the generative task was greater compared with the gap for the binary-choice task.\n\nOverall, the results demonstrate that LLMs can correctly select between the metaphorical and literal contextual registers to perform inferences with conventional metaphors, but there is still a considerable gap between human and LLM performance in $0$-shot settings.\n\n\n\n\\section{Limitations}\n\n\nOur work used foundational metaphors from CMT to test basic metaphoric reasoning in LLMs.\nWe will expand this benchmark using additional and more complex sources of conceptual metaphor (e.g. \\citealp{metanet}). Future work will assess LLMs on novel non-conventional mappings.\n\nAlthough we mitigated for prompt sensitivity by using multiple prompts, the result interpretation should allow for small accuracy variations. \nFurther, in the binary-choice tasks we compare the LLM results with a human baseline, but we do not provide a baseline for the generative task. This is near perfect for humans, but a more systematic baseline can be created to quantify the exact headroom on this task. 
\nFinally, while the task holistically measures the performance of LLMs on a complex task, it is difficult to disentangle the component effects \n(metaphor detection, reasoning, response generation) in the overall accuracy. \n\n\\section{Conclusion}\n\nWe have proposed a novel compositional benchmark based on conceptual metaphor theory to assess the capacity of LLMs to make inferences with metaphors. Successful performance on this task requires metaphor detection and commonsense inference. Using a metaphor theory-based approach allows us to systematically explore capabilities and limitations of LLMs. This is the first in a planned series of increasingly complex metaphor inference datasets.\n\n\nThree main findings emerged from our proposed task. Firstly, there is a vast difference between the performance of small and large LLMs, with the former performing at chance level and the latter approaching human level in few-shot settings. This observation is informative in the context of previous results showing that some, but not all, tasks show a qualitative performance jump with model size and scale: for example, this is the case for reasoning about goal-step relationships between events and ordering events, but not for navigation and mathematical induction tasks~\\citep{palm-paper}. This result invites more research into the question of how and whether the performance of smaller models can be improved. Secondly, this performance reflects a true ability of LLMs to reason with conventional metaphor, and not simply to detect it. Whether this ability extends to novel metaphor is ongoing work. Finally, the performance of large LLMs approaches that of humans in binary-choice and generative tasks, but careful multiple-shot prompting is required.\n\n\n\\newpage\n\n\\section*{Acknowledgements}\n\nWe thank Fernando Pereira, Yasemin Altun, \\mbox{William} Cohen and Tiago Pimentel, as well as our anonymous reviewers, for their valuable feedback.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n\n\nSemantic segmentation is an important task in computer vision with a high potential for practical applications, in particular for autonomous vehicles. The 2D segmentation maps predicted by deep convolutional neural networks (DCNNs) contribute to an improved understanding of the scenes.\n\n\n\n\nA large body of the recent literature on DCNNs for semantic segmentation focuses on improving predictive performance and run-time through advanced~\\cite{chen2017deeplab,yu2017dilated} or lighter~\\cite{yu2018bisenet, zhao2018icnet, li2019dfanet, li2019partial} architectures, better use of multiple resolutions~\\cite{zhao2017pyramid, sun2019high} and novel loss functions~\\cite{lin2017focal, sudre2017generalised, berman2018lovasz, zhu2019improving}. However, for real-world deployments, other requirements, e.g., reliability, robustness, must be equally satisfied to avoid any failures. To meet them, a few major challenges have yet to be fully solved. First, DCNNs have been shown to be overconfident~\\cite{guo2017calibration} even when their predictions are wrong~\\cite{nguyen2015deep, hein2019relu}. In addition, DCNNs struggle to learn when few training samples are available, or when data and annotations are noisy.
In particular, high capacity DCNNs can find ``shortcuts'' that allow them to exploit spurious correlations in the data (e.g., background information~\\cite{rosenfeld2018elephant, srivasta2020human}, textures and salient patterns~\\cite{geirhos2018imagenettrained}) towards minimizing the training error at the cost of generalization. Such DCNNs have been shown to be biased, e.g. contextual bias~\\cite{xiao2021noise} or texture bias~\\cite{geirhos2018imagenettrained}. This type of problem could be addressed by larger and higher quality datasets~\\cite{lambert2020mseg}, yet the entire complexity of the world cannot be encompassed in a training dataset with a limited size. Alternative solutions leverage uncertainty estimation for detecting such failures~\\cite{gal2016dropout, lakshminarayanan2017simple, franchi2019tradi}. However the most effective ones are computationally inefficient as they rely on ensembles or multiple forward passes~\\cite{ovadia2019can, gustafsson2020evaluating, ashukha2020pitfalls}. \n\n\n\n\n\n\nIn this paper, we aim to increase the \\Correction{robustness} of semantic segmentation models. For the scope of this work, we define \\Correction{\\emph{robust} as follows}: \\emph{a model is \\Correction{robust} if its predictions are accurate and well calibrated when facing \ntypical conditions from the training distribution, but also under distribution shift (epistemic and aleatoric uncertainty) and for unknown object classes which are not seen during training (epistemic uncertainty).} \\Correction{Our definition extends the scope of \\emph{robustness} beyond invariance to different perturbations of the input, e.g., \nadversarial attacks~\\cite{szegedy2013intriguing, eykholt2017robust}, image corruptions~\\cite{pezzementi2018putting, hendrycks2019benchmarking}, change of style~\\cite{geirhos2018imagenettrained}, where only prediction accuracy is used as a proxy for robustness. Here, a robust model must not be only accurate, but also well calibrated such that unknown objects or strong input perturbations are designated low confidence scores and easily identified as unreliable and discarded. Although difficult to attain, we argue that both accuracy and calibration are essential for deployment in real world conditions where the data distribution is not identical to the training distribution.\\footnote{\\Correction{The two metrics are often at odds with each other: a classifier can be accurate but non-calibrated (usually overcofident~\\cite{nguyen2015deep, guo2017calibration, hein2019relu}) and conversely it can be inaccurate yet calibrated, if its predictions are always low-confident.}}} To improve the \\Correction{robustness} of a DCNN,\ngiven \nthe noisy nature of the data, and to address the problem of limited numbers of training labeled images, we propose a technique that combines the teacher-student framework~\\cite{mean_teacher_NIPS_2017} with a novel data augmentation strategy (Fig.~\\ref{fig:unsupervised_training}). Our augmentation method, named \\emph{Superpixel-mix}, exchanges superpixels between training images to generate more training samples that preserve object boundaries and disentangle objects' parts from their frequent contexts. \nTo the best of our knowledge, this is among the first investigations on the use of these techniques for reducing DCNN bias and uncertainty. 
\n\n\n\\noindent \\textbf{Contributions: } In summary, our contributions are as follows: \\textbf{(i)} Superpixel-mix, a new type of data augmentation for creating new unlabeled images to increase DCNNs' accuracy and \\Correction{robustness}.\n\\textbf{(ii)} A theoretical grounding on why \nmixing augmentation combined with the teacher-student framework can improve \\Correction{robustness}. The theory is confirmed by a set of experiments. \n\\textbf{(iii)} A new dataset for quantifying contextual bias of DCNNs.\\footnote{The dataset will be made publicly available after the anonymity period}\n\n\n\n\\vspace{-3mm}\n\\section{Related Work}\n\n\\Correction{\\noindent\\textbf{Robust Deep Learning.} Robustness of DCNNs has been studied under different perspectives in the past few years, e.g., robustness to adversarial attacks~\\cite{papernot2016practical, eykholt2017robust}}. We focus here rather on the \\Correction{robustness} of the perception functions,\nand less on security aspects. Geirhos \\emph{et al}\\bmvaOneDot~\\cite{geirhos2018imagenettrained} observe that classification models trained on ImageNet are biased towards textures and blind to shapes. They mitigate this by augmenting the training set with stylized images~\\cite{gatys2015neural}, yet this can be detrimental for semantic segmentation as object boundaries are distorted. Shetty \\emph{et al}\\bmvaOneDot~\\cite{car_sidewalk_dependency_cvpr_2019} counter contextual bias \nwith a data augmentation strategy that removes random objects from images. Some other works focus on evaluating the robustness under different image perturbations, e.g., blur~\\cite{vasiljevic2016examining}, brightness~\\cite{pei2017deepxplore}. A more systematic study of robustness of classification DCNNs to image perturbations over varying levels of corruption is proposed in \\cite{hendrycks2019benchmarking}. This idea is extended to autonomous driving datasets~\\cite{michaelis2019benchmarking}, where robustness of object detection methods is evaluated. New datasets with challenging weather conditions, e.g., rain~\\cite{hu2019depth}, fog~\\cite{sakaridis2018semantic}, low light~\\cite{sakaridis2019lowlight} are created to assess and improve robustness of visual perception models. Most approaches simply evaluate evolution of accuracy under such distribution shifts, but ignore other metrics, e.g., calibration that is essential for \\Correction{robustness} (calibrated predictions facilitate thresholding for low-confidence predictions and detection of distribution shift). We evaluate our proposed method on multiple shifted datasets~\\cite{hendrycks2019benchmarking, michaelis2019benchmarking, sakaridis2018semantic, hu2019depth} and show \\Correction{robustness} improvements, beyond accuracy. \n\n\n\\noindent\\textbf{DCNN Uncertainty Estimation.} Uncertainty estimation, i.e., knowing when a model does not ``know'' the answer, is a crucial functionality for \\Correction{robust} DCNNs. Most DCNN approaches for uncertainty estimation are inspired from Bayesian Neural Networks~\\cite{neal1995bayesian, mackay1992bayesian}. Deep Ensembles (DE)~\\cite{lakshminarayanan2017simple} train multiple instances of a DCNN with different random initializations, while MC-Dropout~\\cite{gal2016dropout} mimics an ensemble through multiple forward passes \nwith active Dropout~\\cite{srivastava2014dropout} layers. 
In-between them, some methods generate ensembles with lower training cost (by analyzing weight trajectories during optimization~\\cite{maddox2019simple, franchi2019tradi}) or with lower forward cost (by generating ensembles from lower dimensional weights~\\cite{wen2020batchensemble, franchi2020encoding} or multiple network heads~\\cite{lee2015m}). Other works prioritize computational efficiency to compute uncertainties from a single forward pass~\\cite{malinin2018predictive, sensoy2018evidential, postels2019sampling, brosse2020last, van2020uncertainty, joo2020being}, but become specialized to a single type of uncertainty~\\cite{hora1996aleatory, kendall2017uncertainty, malinin2018predictive}, e.g., \\Correction{Out-Of-Distribution (OOD)}. DE methods are top-performers across benchmarks~\\cite{ovadia2019can, gustafsson2020evaluating}. \nYet, their computational costs make them unfeasible for complex vision tasks, e.g., semantic segmentation. With Superpixel-mix we aim to attain most properties of ensembles, e.g., predictive uncertainty and calibration, in a cost-effective manner. \n\n\n\n\n\n\n\\noindent\\textbf{Augmentation by mixing samples.} Initially seen as a mere heuristic to address over-fitting, data augmentation is now an essential part of recent supervised~\\cite{cutmix_ICCV_2019, cutout_arXiv_2017, zhang2017mixup}, semi-supervised~\\cite{cutmix_SSL_seg_BMVC_2020, sohn2020fixmatch,mixmatch_nips_2019} and self-supervised learning methods~\\cite{gidaris2020learning, chen2020simple}. Mixing techniques, among the most powerful augmentation strategies, generate new ``virtual'' samples (and labels) from pairs of training samples. Mixup\\cite{zhang2017mixup} interpolates two images, while Manifold Mixup~\\cite{verma2019manifold} interpolates \nhidden activations. CutMix~\\cite{cutmix_ICCV_2019,cutmix_SSL_seg_BMVC_2020} replace a random square inside an image with a patch from another image. Classmix~\\cite{classmix_arXiv_2020} cuts and mixes object classes. Puzzle-Mix ~\\cite{kim2020puzzle} and Co-Mixup ~\\cite{kim2021co} mix salient areas. For semantic segmentation, mixing by square blocks \n\\Correction{is agnostic to} object boundaries and is likely to increase contextual bias as objects \\Correction{or object parts can be recognized via their context, i.e., learning shortcuts~\\cite{geirhos2020shortcut}.}\nSuperpixel-mix mitigates this by mixing within object boundaries. \n\n\n\n\n\n\n\n\\vspace{-3mm}\n\\section{Proposed method}\n\n\\vspace{-2mm}\n\\subsection{Overview}\\label{Overview}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.7\\linewidth]{images\/unsupervised_training2.jpg}\n\\vspace{-5pt}\n\\caption{\\small Consistency training with a student-teacher framework using unlabeled images (Task 2). To mix two unlabeled images, superpixels are randomly sampled to create a mixing mask. This mask is used to merge the two images and their pseudo-label outputs from the teacher network. A cross-entropy loss is applied to the student network to encourage consistency between the mixed pseudo-labels and the student network labels.}\n\\label{fig:unsupervised_training}\n\\vspace{-10pt}\n\\end{figure*}\n\nIn this paper, we propose a novel superpixel-mix method to generate new training data and \nleverage the results of this data augmentation technique in an existing teacher-student framework~\\cite{mean_teacher_NIPS_2017}. 
%\nThe combination of our mixing technique and the teacher-student framework forms an optimization component that serves as a consistency constraint in our DCNN training system for semantic segmentation. We conduct experiments for evaluating our trained DCNNs on both types of uncertainties: epistemic and aleatoric. At the end, we also assess our proposed approach in semi-supervised learning.\n\nIn general, our training process for all of our experiments comprises two steps that are optimized simultaneously: \\textbf{(1)} supervised learning where we train the DCNNs using images with ground-truth labels, and \\textbf{(2)} using teacher-student optimization with superpixel-mix data augmentation on images as a consistency constraint that does not use ground-truth labels. In the uncertainty experiments such as \\Correction{OOD} in \n\\S~\\ref{OOD}, DCNNs' bias studying in \n\\S~\\ref{ablation_bias}, and aleatoric uncertainty in \n\\S~\\ref{aleatoric}, we use datasets that contain ground-truth labels for all images. We first train the DCNNs in fully supervised learning fashion as in step 1. We then remove all those labels and use only the images to optimize for the consistency constraint as in step 2. In the semi-supervised learning experiment in \n\\S~\\ref{semi-supervised}, we use the training dataset that consists of two parts: labelled images and unlabelled images. We train the DCNN using the labelled data for step 1 and the unlabelled data for step 2. %\n\n\\textbf{Step 1 - supervised learning with labelled images:} We use a standard pixel-wise cross-entropy loss denoted by $\\mathcal{L}_{\\text{sup}}$ and apply \nit to all the labelled images. \nIn addition, we use a weak data augmentation (WDA) that consists of \\textit{horizontal flipping} and\/or \\textit{random cropping}.\n\n\\textbf{Step 2 - consistency constraint with unlabelled images:} We apply two transformations on an unlabelled image: one is WDA and the other is a strong data augmentation (SDA). Consistency training encourages predictions of the DCNNs to be consistent in the results of the two transformations. For SDA, we merge WDA and a superpixel mixing technique (see \n\\S~\\ref{watershedmix}). The consistency loss for optimizing this constraint is denoted as $\\mathcal{L}_{\\text{cons}}$.\n\nFor \nevery experiment, we optimize the loss in the overall framework as %\nthe \\textbf{joint loss} $\\mathcal{L} = \\mathcal{L}_{\\text{sup}}+ \\lambda \\cdot \\mathcal{L}_{\\text{cons}}$, where $\\lambda$ is a weighting hyper-parameter and is set to 1.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\vspace{-2mm}\n\\subsection{Teacher-Student Framework}\\label{sec:teacherstudent}\n\n\nTo learn from unlabeled images, we follow the teacher-student framework established in~\\cite{mean_teacher_NIPS_2017}, where the teacher network produces pseudo-labels learned from the labeled data and the student network is encouraged to be consistent with the teacher. Consistency is encouraged via \ncross-entropy loss between the two outputs. In our case, we encourage consistency between the mixed output labels of a teacher network \ncorresponding to the two unlabelled inputs and the output label of a student network \npredicted from the input that resulted by mixing the two unlabelled images. \nWe explain this approach in details in the following paragraph.%\n\nLet $g_{\\phi}$ represent the teacher network with weights $\\phi$ and $f_{\\theta}$ be the student network with weights $\\theta$. 
For two unlabeled images $x^1_u$ and $x^2_u$, we can use the teacher network to generate pseudo-labels $y^1_u$ and $y^2_u$:\n$y^1_u = g_{\\phi}(x^1_u)$ and $y^2_u = g_{\\phi}(x^2_u)$. Assume now we are given some mixing function $\\texttt{mix}$ with a mixing parameter $m$. Without any assumption on the mixing itself, we denote a mixed output label $y_m$, where $y_m = \\texttt{mix}(y^1_u,y^2_u,m)$. For the same mixing parameter $m$, we can also mix the inputs $x^1_u$ and $x^2_u$, \\emph{i.e.}\\@\\xspace $x_m = \\texttt{mix}(x^1_u,x^2_u,m)$.\nApplying $x_m$ %\nto the student network, we expect $f_{\\theta}(x_m)$ to be the same as the mixed output $y_m$. This {is} enforced by minimizing the consistency loss $ \\mathcal{L}_{\\text{cons}} = \\text{CE}\\left(y_m, %\n f_{\\theta}\\left(\\texttt{mix}(x^1_u, x^2_u, m)\\right)\\right)$,\nwhere $\\text{CE}$ is the pixel-wise cross-entropy and $y_m$ is the mixed pseudo-labels from the teacher. \n\n\nDuring training, both the teacher and student networks evolve together. Similarly to~\\cite{mean_teacher_NIPS_2017}, we update the teacher network weights $\\phi$ after each iteration with an \\Correction{Exponential} Moving Average \\Correction{(EMA)}, \\emph{i.e.}\\@\\xspace $\\phi = \\alpha \\phi + (1 - \\alpha)\\theta$ where $\\alpha=0.99$ is a momentum-like parameter.%\n\n\\vspace{-2mm}\n\\subsection{Superpixel-mix for semantic segmentation} \\label{watershedmix}\n\nTo mix two unlabeled images, we use masks generated by sampling superpixels. Superpixels are local clusters of visually similar pixels. Therefore, a group of pixels belonging to the same superpixel are likely to be in the same object or the same part of the object.\nThere are several superpixel variants, including SEEDS~\\cite{van2012seeds}, SLIC~\\cite{slic} or Watershed superpixels~\\cite{watershed_superpixel_ICIP_2014}. We opt to use Watershed superpixels as their boundaries retain more salient object edges~\\cite{machairas2015waterpixels}. We refer the reader to the Supplementary Material for details on how we use the watershed transformation to produce superpixels.\n\n\nGiven an unlabeled image $x^1_u$, we apply the watershed superpixel algorithm, which results in a set of $n$ superpixels $\\mathcal{S}=\\{S_1, S_2,..., S_n\\}$. A mixing mask $m$ (which is a binary mask)\\footnote{For simplicity, we overload the notation of the mixing parameter $m$ simply as the mixing mask itself.} is created from a sampled subset of superpixels $S$: $m = \\cup_{j\\in \\sigma(k,n)} S_j$ %\nwhere $\\sigma(k,n)$ is a subset of size $k$ of the $n$ indices, and $k$ is the number of superpixels we want to keep. %\n\nThe mixing mask $m$ defines the pixels in $x^1_u$ which will be replaced by pixels from the unlabeled image $x^2_u$ to form the mixed input $x_m$, \\emph{i.e.}\\@\\xspace $\n x_m = \\texttt{mix}(x^1_u, x^2_u, m) = (1-m) \\odot x^1_u + m \\odot x^2_u\n$, where $\\odot$ is a pixel-wise multiplication. Superpixels are uniformly sampled given a fixed proportion of selected superpixels. \nContrary to existing regularization techniques such as Cutout~\\cite{cutout_arXiv_2017} or Cutmix~\\cite{cutmix_SSL_seg_BMVC_2020}, our superpixel mixing strategy enforces each set of selected pixels in the unlabeled image $x^1_u$ %\nto belong to the same object. However, the algorithm is allowed to select a set of superpixel clusters from different objects as illustrated in Figure \\ref{fig:unsupervised_training}. %\n\nWe use 200 superpixels per image for all the evaluated datasets. 
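To make the procedure concrete, the following PyTorch-style sketch summarises one consistency step with Superpixel-mix. It assumes a precomputed integer superpixel label map for $x^1_u$, uses hard pseudo-labels, and omits the supervised branch, batching and weak augmentation; names such as sample_mask and consistency_step are illustrative rather than our exact training code.
\begin{verbatim}
import torch
import torch.nn.functional as F

def sample_mask(superpixels, proportion=0.5):
    # Binary mixing mask from a random subset of superpixel ids.
    # `superpixels` is an integer label map of shape (H, W).
    ids = superpixels.unique()
    k = max(1, int(proportion * ids.numel()))
    chosen = ids[torch.randperm(ids.numel())[:k]]
    return torch.isin(superpixels, chosen).float()   # 1 where x2 replaces x1

def consistency_step(student, teacher, x1, x2, sp1, proportion=0.5):
    # x1, x2: (1, 3, H, W) unlabeled images; sp1: (H, W) superpixels of x1.
    with torch.no_grad():                      # teacher pseudo-labels
        y1 = teacher(x1).argmax(dim=1)         # (1, H, W)
        y2 = teacher(x2).argmax(dim=1)
    m = sample_mask(sp1, proportion)           # (H, W) mixing mask
    x_m = (1 - m) * x1 + m * x2                # mix inputs with the mask m
    y_m = torch.where(m.bool(), y2, y1)        # mix pseudo-labels with the same m
    return F.cross_entropy(student(x_m), y_m)  # consistency loss L_cons

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    # Teacher weights track the student with an exponential moving average.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(alpha).add_(ps, alpha=1 - alpha)
\end{verbatim}
Note that the mask is built from the superpixels of $x^1_u$ only, so every copied region respects object boundaries in at least one of the two images.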
Studies on the number and proportions of superpixels used in mixing are shown in Section~\\ref{Ablations_sp} and the Supplement. %\n\n\\vspace{-2mm}\n\\subsection{From empirical risk to teacher-student mixup}\n\n\nIn this section, we show that the training loss of the teacher-student framework in combination with superpixel-mix data augmentation is bounded by the accuracy of the teacher network and the quality of the data augmentation.\nLet $\\mathcal{D} =\\{(x_i,y_i)\\}_i \\sim \\mathcal{P}$ be the labelled dataset which follows the joint distribution $\\mathcal{P}$ and $l$ be a loss %\nbetween the target $y$ and the prediction $f_{\\theta}(x)$ of the DCNN $f_{\\theta}$. Typically, in deep learning, the objective is to learn $\\theta$ that minimizes the expected risk defined by: \n$\\mathbf{R}_{\\mathcal{P}}(f_{\\theta}) =\\int l(f_{\\theta}(x),y)d\\mathcal{P}(x,y)$.\nAs we do not have access to the distribution $\\mathcal{P}$, we optimize the loss function that is formed by the empirical risk on $\\mathcal{D}$: %\n\\begin{equation}\n \\hat{\\mathbf{R}}_{\\mathcal{P}_{\\delta}}(f_{\\theta}) =\\frac{1}{n}\\sum_{i=1}^n l(f_{\\theta}(x_i),y_i) = \\int l(f_{\\theta}(x),y)d\\mathcal{P}_{\\delta}(x,y),\n\\end{equation}\nwhere the summation is converted back to the integral based on $\\mathcal{P}_{\\delta}(x,y) = \\frac{1}{n} \\sum_{i=1}^{n}\\delta(x=x_i,y=y_i)$, as shown by~\\cite{zhang2017mixup}. %\n\nTherefore, we optimize the parameters of the DCNN using the empirical risk. However, the representation of the discretized data is likely to be sparse; Zhang \\emph{et al}\\bmvaOneDot~\\cite{zhang2017mixup} therefore proposed to work with $\\mathcal{D}_{\\text{mix}} =\\{(x_{m,i},y_{m,i})\\}_i \\sim \\mathcal{P}^{\\text{mix}}_{X,Y}$ where $x_{m,i}$ and $y_{m,i}$ are the data of $\\mathcal{D}$ to which a mixing procedure has been applied. The hypothesis in~\\cite{zhang2017mixup} is that the mixing procedure helps to better approximate the dataset distribution. Let $ \\mathcal{P}^{\\text{mix}}_{\\delta}$ denote the discrete distribution of this augmented %\ndataset. The risk of fitting the teacher prediction on $\\mathcal{P}^{\\text{mix}}_{\\delta}$ can then be defined as:\n$\n\\hat{\\mathbf{R}}_{\\mathcal{P}^{\\text{mix}}_{\\delta}}(f_{\\theta},g_{\\phi}) =\\int l(f_{\\theta}(x),g_{\\phi}(x))d\\mathcal{P}^{\\text{mix}}_{\\delta}(x,y).\n$ The training loss for the overall framework is then:\n$\n \\mathcal{L}(\\theta)= \\hat{\\mathbf{R}}_{\\mathcal{P}_{\\delta}}(f_{\\theta})+ \\hat{\\mathbf{R}}_{\\mathcal{P}^{\\text{mix}}_{\\delta}}(f_{\\theta},g_{\\phi}).\n$ As the loss $l$ is a norm that satisfies the triangle inequality, and assuming it is bounded by a constant $M$, we can prove that the training loss $\\mathcal{L}(\\theta)$ is bounded by the following:\n\\begin{equation}\n \\mathcal{L}(\\theta)\\leq 2\\mathbf{R}_{\\mathcal{P}}(f_{\\theta})+ M(\\| \\mathcal{P}^{\\text{mix}}_{\\delta} -\\mathcal{P}\\|_1 + \\| \\mathcal{P}_{\\delta} -\\mathcal{P}\\|_1) +\\hat{\\mathbf{R}}_{\\mathcal{P}^{\\text{mix}}_{\\delta}}(g_{\\phi}),\n\\end{equation}\n\n\n\\noindent where the four terms are linked to the true error, the mixing distribution error, the approximation error and the teacher error, respectively. \nThis implies that the quality of the DCNN is bounded by the accuracy of the teacher. It is also bounded by how much the mixing strategy can sample the true distribution of the dataset. Finally, the distribution of the training data with respect to the true data distribution also plays an important role.
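In brief, writing $M$ for a bound on the loss $l$, the two risk terms of $\mathcal{L}(\theta)$ can be controlled separately:
\begin{align*}
\hat{\mathbf{R}}_{\mathcal{P}_{\delta}}(f_{\theta}) &\leq \mathbf{R}_{\mathcal{P}}(f_{\theta}) + M\| \mathcal{P}_{\delta} -\mathcal{P}\|_1,\\
\hat{\mathbf{R}}_{\mathcal{P}^{\text{mix}}_{\delta}}(f_{\theta},g_{\phi}) &\leq \int l(f_{\theta}(x),y)d\mathcal{P}^{\text{mix}}_{\delta}(x,y) + \hat{\mathbf{R}}_{\mathcal{P}^{\text{mix}}_{\delta}}(g_{\phi}) \leq \mathbf{R}_{\mathcal{P}}(f_{\theta}) + M\| \mathcal{P}^{\text{mix}}_{\delta} -\mathcal{P}\|_1 + \hat{\mathbf{R}}_{\mathcal{P}^{\text{mix}}_{\delta}}(g_{\phi}),
\end{align*}
where the first line compares the empirical and true distributions, and the second line first applies the triangle inequality $l(f_{\theta}(x),g_{\phi}(x))\leq l(f_{\theta}(x),y)+l(g_{\phi}(x),y)$ and then again compares distributions; summing the two lines yields the bound above. This sketch is only meant to make the structure of the bound explicit.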
%\nThis finding can be applied to all teacher-student frameworks, such as those used in SSL, self-supervised training, and domain adaptation.\nThe detailed proof and analysis are given in the Supplementary Material. %\n\n\\Correction{$\\| \\mathcal{P}^{\\text{mix}}_{\\delta} -\\mathcal{P}\\|_1$ and $\\| \\mathcal{P}_{\\delta} -\\mathcal{P}\\|_1$\nreflect the capacity of the two distributions $\\mathcal{P}_{\\delta}$ and $\\mathcal{P}^{\\text{mix}}_{\\delta}$ to approximate the true dataset distribution. This bound allows us to control the variation of the risk. To reduce the risk, we can increase the number of training data or improve the quality of the data augmentation and so reduce $\\| \\mathcal{P}^{\\text{mix}}_{\\delta} -\\mathcal{P}\\|_1 $. This motivates our research on data augmentation strategies to approximate the true distribution in a data-efficient way. We can also improve the quality of the teacher using EMA training that stabilizes the training loss. } %\n\n\\vspace{-3mm}\n\\subsection{Uncertainty and Deep Learning}\n\n\nConsider a joint distribution $\\mathcal{P}$ over input $x$ and labels $y$ over a set of labels $\\mathcal{Y}$. When a DCNN performs inference, it predicts $f_{\\theta}(x) = \\mathcal{P}(y|x,\\theta)$, where $\\theta$ is optimized to minimize the loss over the training set $\\mathcal{D}$. This likelihood typically suffers from two kinds of uncertainty\\AB{~\\cite{hora1996aleatory, kendall2017uncertainty}}. The first is aleatoric uncertainty, linked to the unpredictability of the data acquisition process. During inference, instead of working with $x$, we may have access to $x+n$, %\nwhere $n$ represents noise on the input data. The second is epistemic uncertainty, linked to the lack of knowledge of the model, \\AB{i.e.,} the weights $\\theta$ of the network. In addition, the epistemic uncertainty can be subdivided into two sub-types: one linked to \\Correction{OOD} data \\AB{~\\cite{malinin2018predictive}} and the other linked to the network's bias. \\Correction{Epistemic uncertainty models the uncertainty associated with limited sizes of training datasets. Most works focus only on the ability to detect OOD. In this paper, we conduct experiments for all types of uncertainties: aleatoric (i.e., testing models on noisy data) and two sub-types of epistemic (i.e., OOD detection and models' bias).}%\n \n \n\n \n \n \n\\vspace{-3mm}\n\\section{Experiments}\n\nWe \n\\AB{conduct} experiments on five datasets. First, %\nwe study \n\\AB{network robustness} under epistemic uncertainty on StreetHazards \\cite{hendrycks2019anomalyseg}. \n\\AB{The test set contains some object classes that are not available in the training set.}\nThe goal is to detect these out-of-distribution (OOD) classes. We also evaluate the performance of the DCNNs on a contextually unbiased dataset.\nFurthermore, we investigate the networks' \\Correction{robustness} under aleatoric uncertainty. \n\\AB{To this end, we train a DCNN on Cityscapes~\\cite{Cityscapes_CVPR_2016}} and evaluate the performance on Rainy \\cite{hu2019depth} and Foggy Cityscapes~\\cite{sakaridis2018semantic}. %\nFinally, we evaluate \\AB{on the} semi-supervised learning \\AB{task} on Cityscapes~\\cite{Cityscapes_CVPR_2016} and Pascal~\\cite{pascal-voc-2012}.
We implement the experiments using PyTorch (see Supplementary Material).\n\n\\vspace{-3mm}\n\\subsection{Evaluation criteria}\n\nThe first criterion we use is the mIoU \\cite{jaccard1912distribution}, \n\\AB{reflecting the predictive performance of segmentation models.}\n\\AB{Second, similarly to \\cite{lakshminarayanan2017simple} we use} the negative log-likelihood (NLL), \\AB{a proper scoring rule~\\cite{gneiting2007strictly},} which depends on the aleatoric uncertainty \\Correction{and can assess the degree of overfitting~\\cite{guo2017calibration}}. In addition, we use the expected calibration error (ECE)~\\cite{guo2017calibration}, which measures \n\\AB{how} the confidence score predicted by a DCNN is related to its accuracy. Finally, we use the AUPR, AUC, and FPR-95-TPR defined in \\cite{hendrycks2016baseline}, which evaluate the ability of a DCNN to detect OOD data. With multiple metrics, we can get a clearer picture of the performance of the DCNNs with regard to accuracy, calibration error, failure rate, and \\AB{OOD detection}. Even though it is difficult to achieve top performance on all metrics, \n\\AB{we argue that it is more pragmatic and convincing}\nto evaluate on multiple metrics~\\cite{ovadia2019can, fort2019deep} than \\AB{optimizing for a single metric, potentially at the expense of many others.}\n\\Correction{For example, a DCNN with a low accuracy and a low confidence score is well calibrated. Therefore, evaluating a DCNN on a single metric such as ECE or mIoU alone is not enough. We aim to have a good compromise between accuracy and calibration.}\n\n \\vspace{-3mm}\n\\subsection{Epistemic uncertainty: Out-Of-Distribution (OOD) Detection }\\label{OOD}\n\n\\Correction{One cause of the epistemic uncertainty in deep learning is the limited training data that does not cover all possible object classes. The evaluation of this type of epistemic uncertainty is often linked to OOD detection. Therefore, this experiment is designed to evaluate the epistemic uncertainty using StreetHazards~\\cite{hendrycks2019anomalyseg}.}\nStreetHazards is a large-scale dataset that consists of different sets of synthetic images of street scenes. This dataset is composed of $5,125$ images for training and $1,500$ test images.\nThe training dataset contains pixel-wise annotations for $13$ classes. The test dataset comprises $13$ training classes and $250$ OOD classes, unseen in the training set, making it possible to test the robustness of the algorithm when facing a diversity of possible scenarios.\nFor this experiment, we use DeepLabv3+~\\cite{chen2018encoder} with the experimental protocol from~\\cite{hendrycks2019anomalyseg} and a ResNet50 encoder~\\cite{he2016deep}. We compare our algorithm to Deep Ensembles \\cite{lakshminarayanan2017simple}, BatchEnsemble \\cite{wen2020batchensemble}, LP-BNN \\cite{franchi2020encoding}, TRADI \\cite{franchi2019tradi}, and MIMO \\cite{havasi2020training}, which \n\\AB{achieve} state-of-the-art results on epistemic uncertainty. We also compare our \\AB{model with} MCP, the baseline DCNN where the maximum class probability is used as a confidence score, and with the Cutmix~\\cite{cutmix_SSL_seg_BMVC_2020} strategy.\nThe results in Table~\\ref{table:outofditribution} show that our method is the only one to achieve the best results in three out of five measures. The runners-up, Deep Ensembles and LP-BNN, achieve the best results on one measure only.
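For reference, the OOD detection scores reported in Table~\ref{table:outofditribution} can be computed from per-pixel confidence maps as in the sketch below, where, as in the MCP baseline, the confidence is the maximum softmax probability; this helper is an illustration and the exact evaluation protocol follows~\cite{hendrycks2019anomalyseg}.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def ood_metrics(probs, ood_mask):
    # probs: (N, C, H, W) softmax outputs; ood_mask: (N, H, W) booleans,
    # True where a pixel belongs to an unseen (OOD) class.
    uncertainty = 1.0 - probs.max(axis=1)           # low MCP => likely OOD
    y_true = ood_mask.reshape(-1).astype(int)
    y_score = uncertainty.reshape(-1)
    auroc = roc_auc_score(y_true, y_score)
    aupr = average_precision_score(y_true, y_score)
    fpr, tpr, _ = roc_curve(y_true, y_score)
    fpr95 = float(fpr[np.searchsorted(tpr, 0.95)])  # FPR at 95% TPR
    return auroc, aupr, fpr95
\end{verbatim}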
Moreover, our method achieves the best results faster than Deep Ensemble and LP-BNN, we only need one inference pass compared to 4 inference passes for the others.\n\n\\begin{table*}[!t]\n\\renewcommand{\\figurename}{Table}\n \\begin{center}\n \\scalebox{0.60}\n {\n \\begin{tabular}{c l c c c c c c }\n \\toprule\n Dataset & OOD method & mIoU $\\uparrow$ & AUC $\\uparrow$ & AUPR $\\uparrow$ & FPR-95-TPR $\\downarrow$ & ECE $\\downarrow$ & \\# Forward passes $\\downarrow$ \\\\ \n \\midrule\n\\multirow{6}{*}{\\shortstack[c]{\\textbf{StreetHazards} \\\\ DeepLabv3+ \\\\ ResNet50}} &\nBaseline (MCP)~\\cite{hendrycks2016baseline} & 53.90\\% & 0.8660& 0.0691 & 0.3574 & 0.0652 &1 \\\\ \n & TRADI \\cite{franchi2019tradi} & 52.46\\%\t& 0.8739 & 0.0693 & 0.3826 &0.0633 &4 \\\\ \n & Cutmix ~\\cite{cutmix_SSL_seg_BMVC_2020} & 56.06\\% & 0.8764 & 0.0770 &0.3236 &0.0592 &1 \\\\ \n & MIMO \\cite{havasi2020training} & 55.44\\% & 0.8738 & 0.0690 & 0.3266 & 0.0557 &4 \\\\ \n & BatchEnsemble \\cite{wen2020batchensemble} &56.16\\% & 0.8817 & 0.0759 & 0.3285 & 0.0609 &4 \\\\ \n & LP-BNN \\cite{franchi2020encoding} & 54.50\\% & 0.8833 & 0.0718 & 0.3261 & \\textbf{0.0520} &4 \\\\ \n & Deep Ensembles \\cite{lakshminarayanan2017simple}& 55.59\\% & 0.8794 & \\textbf{0.0832} & 0.3029 & 0.0533 &4 \\\\ \n & Superpixel-mix (ours) & \\textbf{56.39}\\% & \\textbf{0.8891} & 0.0778 & \\textbf{0.2962} &0.0567 &1 \\\\ \n\n\\bottomrule\n \\end{tabular}\n } %\n \\end{center}\n \\vspace{-1mm}\n \\caption{\\small \\textbf{Comparative results on the OOD task for semantic segmentation.} \\small Results are averaged over three seeds.}\\label{table:outofditribution}\n \\vspace{-3mm}\n \\end{table*}\n\n\n \\vspace{-3mm}\n\\subsection{Epistemic uncertainty : Unbiased experiment} \\label{ablation_bias}\n\\vspace{-1mm}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.5\\linewidth]{images\/bias_results3.jpg}\n\\vspace{-3mm}\n\\caption{\\small A qualitative example of the network bias study. When the road and pavements are replaced with sand texture, the baseline supervised network makes wrong segmentations. There is still pavement segmentation due to the association with people. Superpixel-mix produces better results without the wrong pavement segmentation, also provides a clean segmentation for the sand texture.}\n\\label{fig:bias_study}\n\\vspace{-5mm}\n\\end{figure*}\n\n\\Correction{The second aspect of epistemic uncertainty \nis related to network biased caused limited samples and diversity of scenarios, e.g., certain objects always co-occur in the training data. \nHere, we evaluate the epistemic uncertainty under the lens of model bias.\n}\nIn urban datasets such as Cityscapes, \n\\Correction{road and car pixels appear most of the time together,} raising the question of co-occurrence dependency between objects. If two objects are dependent, then the network will likely fail when the car object is encountered in different context other than roads. Shetty \\emph{et al}\\bmvaOneDot~\\cite{car_sidewalk_dependency_cvpr_2019} study object dependency and suggest using GANs to remove one object by in-painting and training the network for the new contexts. Their results show that a network that is less biased towards the co-occurrence dependency yields better accuracy for segmentation.\n\nTo measure the bias of our superpixel-mix method, we create a new dataset dubbed \n Out-of-Context Cityscapes (OC-Cityscapes), by replacing roads in the validation data of Cityscapes with various textures such as water, sand, grass, etc. 
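Concretely, each OC-Cityscapes image can be obtained by pasting a texture over the road pixels, e.g., using the ground-truth annotation to locate them. The sketch below illustrates the idea; the texture files and the road train id are assumptions of the example rather than a specification of our exact pipeline.
\begin{verbatim}
import numpy as np
from PIL import Image

ROAD_TRAIN_ID = 0  # Cityscapes "road" train id (assumed label convention)

def replace_road(image_path, label_path, texture_path):
    # Paste a texture (water, sand, grass, ...) over all road pixels of a
    # Cityscapes validation image, keeping every other pixel untouched.
    image = np.array(Image.open(image_path).convert("RGB"))
    labels = np.array(Image.open(label_path))       # per-pixel train ids
    texture = Image.open(texture_path).convert("RGB")
    texture = np.array(texture.resize((image.shape[1], image.shape[0])))
    out = image.copy()
    mask = labels == ROAD_TRAIN_ID
    out[mask] = texture[mask]                        # swap context, keep objects
    return Image.fromarray(out)
\end{verbatim}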
Example images are shown in the Supplementary Material. Studies in~\\cite{geirhos2018imagenettrained} show that DCNNs are biased towards texture. By replacing different textures for roads, we test the trained networks on these new context images and evaluate the bias level for each network.\n\n\n\nIn Table~\\ref{table:bias_study}, we show the performances of the fully supervised network (baseline) and our network trained using the superpixel-mix method for the Cityscapes validation set and our generated dataset. The results are mIoU, ECE, and NLL scores averaged over 3 runs for classes that are not road. On our experimental dataset, \nthe baseline's performance drops by 21.97\\% while \nSuperpixel-mix drops by 19.83\\% for the mIoU metric. The results also show that \nSuperpixel-mix \nproduces a less biased model for \nco-occurring objects with the best \nscores on mIoU and NLL measures. %\nSuperpixel-mix changes the image context while preserving object shapes, effectively regularizing the model to address shortcut learning, i.e., overfitting on the co-occurrence of objects and their typical contexts. \nFor a visual example, see Figure~\\ref{fig:bias_study}.\n\n\n\\begin{table}[!t]\n\\renewcommand{\\figurename}{Table}\n\\begin{center}\n \\scalebox{0.50}\n {\n\\begin{tabular}{ l c c c c c c }\n\\toprule\nEvaluation data & Cityscapes & OC-Cityscapes &Cityscapes & OC-Cityscapes & Cityscapes & OC-Cityscapes \\\\\n & mIoU & mIoU & NLL & NLL & ECE & ECE \\\\\n\\midrule\nBaseline (MCP)~\\cite{hendrycks2016baseline} & 76.51\\% & 54.54\\% & -0.9456 & -0.7565 & 0.1303 & 0.2162 \\\\\n Cutmix ~\\cite{cutmix_SSL_seg_BMVC_2020} & 78.37\\% & 54.78\\% & -0.955&\t-0.7435 & 0.1365 & 0.2587\\\\\n MIMO \\cite{havasi2020training} & 77.13\\% & 55.87\\% & -0.9516 & -0.7431 & 0.1398 & 0.2359\\\\\nDeep Ensembles \\cite{lakshminarayanan2017simple} & 77.48\\% & 57.09\\% & -0.9469 & -0.7613 & \\textbf{0.1274} & \\textbf{0.1968}\\\\\nSuperpixel-mix (ours) & \\textbf{78.99\\%} & \\textbf{59.16\\%} & \\textbf{-0.9563} & \\textbf{-0.7768} & 0.1348 & 0.2244\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\vspace{-1mm}\n\\caption{\\small \\textbf{Comparative results for network biases on OC-Cityscapes.} The results are segmentation mIoU, NLL and ECE, for classes that are not road. The baseline is the result from supervised training.}\n\\label{table:bias_study}\n\\vspace{-2mm}\n\\end{table}\n\n\n\\vspace{-3mm}\n\\subsection{Aleatoric uncertainty experiments}\\label{aleatoric}\n\n\n\\Correction{Aleatoric uncertainty is associated with unpredictability of the data acquisition process that causes various noises in the data.\nIn the following experiments we evaluate the aleatoric uncertainty of DCNNs trained on normal images (e.g., normal weather images) when facing test images with various types of noise (e.g., rainy or foggy environments).}\nIn semantic segmentation, the DCNN must be \\Correction{robust} to aleatoric uncertainty. To check that, we use the rainy \\cite{hu2019depth} and foggy Cityscapes \\cite{sakaridis2018semantic} datasets, which are built by adding rain of fog to the Cityscapes validation \\Correction{images}. The goal \nis to evaluate the performance of DNNs to resist these perturbations. 
\n\\Correction{We further generate an additional Cityscapes variant with images modified with different perturbations and intensities to mimic a distribution shift~\\cite{hendrycks2019benchmarking}.} \n\\Correction{We apply the following} perturbations: Gaussian noise, shot noise, impulse noise, defocus blur, frosted, glass blur, motion blur, zoom blur, snow, frost, fog, brightness, contrast, elastic, pixelate, JPEG. For more information, please refer to \\cite{hendrycks2019benchmarking}. We call this dataset Cityscapes-C.\n\n\n\nTo measure the \\Correction{robustness} under aleatoric uncertainty, we \ncompute ECE, mIoU and NLL scores averaged over 3 runs. Table \\ref{table:Aleatoric} shows results close to the state of the art. \nDE reaches good results, yet this approach needs to train several DCNNs, so it is more time-consuming for training and inference. In the Supplementary Material, we \nreport mIoU scores of different approaches for the different levels of noise. Overall, our \nexperiments indicate that Superpixel-mix tends to be \n\\Correction{robust} to high level of noise. \n\n\n\n\n\n\n\\begin{table}[t]\n\\begin{center}\n \\scalebox{0.50}\n {\n\\begin{tabular}{ l c c c c c c c c c c c c }\n\\toprule\n\\multirow{2}{*}{Evaluation data} & \\multicolumn{3}{c}{Cityscapes} & \\multicolumn{3}{c}{Rainy Cityscapes} & \\multicolumn{3}{c}{Foggy Cityscapes} & \\multicolumn{3}{c}{Cityscapes-C} \\\\\n & mIoU $\\uparrow$ & ECE $\\downarrow$ & NLL $\\downarrow$ & mIoU $\\uparrow$ & ECE $\\downarrow$ & NLL $\\downarrow$ & mIoU $\\uparrow$ & ECE $\\downarrow$ & NLL $\\downarrow$ & mIoU $\\uparrow$ & ECE $\\downarrow$ & NLL $\\downarrow$ \\\\\n\\midrule\nBaseline (MCP)~\\cite{hendrycks2016baseline} & 76.51\\% & 0.1303 &-0.9456 & 58.98\\% & 0.1395 & -0.8123 & 69.89\\% & 0.1493 &-0.9001 & 40.85\\% & 0.2242 &-0.7389 \\\\\n Cutmix ~\\cite{cutmix_SSL_seg_BMVC_2020} & 78.37\\% & 0.1365 & -0.9550& 61.86\\% & \t0.1559& -0.8200 & 73.57\\% & 0.1484 &-0.9289 & 39.16\\% \t& 0.3064 &\t-0.6865\\\\\nMIMO \\cite{havasi2020training} & 77.13\\% & 0.1398 &-0.9516 & 59.27\\% & 0.1436 & -0.8135 &70.24\\% & 0.1425 &-0.9014& 40.73\\% & 0.2350 &-0.7313\\\\\nBatchEnsemble \\cite{wen2020batchensemble} & 77.99\\% & 0.1129 &-0.9472 & 60.29\\% & 0.1436 &-0.7820& 72.19\\% & 0.1425 &-0.9132& 40.93\\% & 0.2270 &-0.7082\\\\\nLP-BNN \\cite{franchi2020encoding} & 77.39\\% & \\textbf{0.1105} &-0.9464 & 60.71\\% & 0.1338 &-0.7891& 72.39\\% & \\textbf{0.1358} &-0.9131& \\textbf{43.47}\\% & 0.2085 &-0.7282\\\\\nDeep Ensembles \\cite{lakshminarayanan2017simple} & 77.48\\% & 0.1274 &-0.9469 & 59.52\\% &\\textbf{0.1078} &-0.8205& 71.43\\% & 0.1407 &-0.9070 & 43.40\\% & \\textbf{0.1912}&-0.7509 \\\\\nSuperpixel-mix (ours) & \\textbf{78.99}\\% & 0.1348 & \\textbf{-0.9563} & \\textbf{61.87}\\% & \t0.1583& \t\\textbf{-0.8207}& \\textbf{74.39}\\% & 0.1411 &-\\textbf{0.9266}& 42.58\\% & \t0.2338 &\t\\textbf{-0.7513}\\\\\n\n\\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\caption{\\small Aleatoric uncertainty study on Cityscapes-C, Foggy Cityscapes~\\cite{hu2019depth} and Rainy Cityscapes~\\cite{sakaridis2018semantic}. }\n\\label{table:Aleatoric}\n\\vspace{-4mm}\n\\end{table}\n\n\n\\Correction{We note that our method does not achieve the top ECE scores across the various perturbations and weather conditions in these experiments. However, none of the considered strong baselines based on ensembles outperforms the others consistently either due to the difficulty and diversity of the considered test sets. 
Superpixel-mix achieves competitive ECE scores and is a top performer on mIoU and NLL.}\n\n\\vspace{-3mm}\n\\subsection{Semi-Supervised Learning experiments}\\label{semi-supervised}\nTo evaluate the \\Correction{robustness} of our method to missing annotations, we tested our approach on a semi-supervised learning task on two datasets: Cityscapes~\\cite{Cityscapes_CVPR_2016} and Pascal VOC 2012~\\cite{pascal-voc-2012}. \nWe follow the common practice for this task from prior works~\\cite{adversarial_bmvc_2018,cutmix_SSL_seg_BMVC_2020} and use a DeepLab-V2~\\cite{chen2017deeplab} model with a ResNet101~\\cite{he2016deep} encoder pre-trained on ImageNet~\\cite{ImageNet_cvpr_2009} and MS-COCO~\\cite{mscoco_eccv_2014}.\nThe weights of both the teacher and student models are initialized in the same manner.\nWe evaluate our method and compare it with existing methods using three sets of labeled data: 1\/30 (100 images), 1\/8 (372 images) and 1\/4 (744 images). Our results are reported as the average mIoU over 12 runs (4 runs on each of the 3 official splits), %\nas well as the standard deviation. The results are shown in Table \\ref{table:Cityscapes}. We present results on the Pascal VOC dataset in the Supplementary Material.\n\n\n\n\\begin{table*}[!t]\n\\renewcommand{\\figurename}{Table}\n\\begin{center}\n\\scalebox{0.55}\n {\n\\begin{tabular}{l l l l}\n\\toprule\n\\textbf{Labeled samples} & \\textbf{1\/30 (100)} & \\textbf{1\/8 (372)} & \\textbf{1\/4 (744)} \\\\\n\\midrule\nAdversarial~\\cite{adversarial_bmvc_2018} & - & 58.80\\% & 62.30\\% \\\\\ns4GAN~\\cite{s4GAN_pami_2019} & - & 59.30\\% & 61.90\\% \\\\\nCutout~\\cite{cutout_arXiv_2017} & 47.21\\% $\\pm$ 1.74 & 57.72\\% $\\pm$ 0.83 & 61.96\\% $\\pm$ 0.99 \\\\\nCutmix~\\cite{cutmix_SSL_seg_BMVC_2020} & 51.20\\% $\\pm$ 2.29 & 60.34\\% $\\pm$ 1.24 & 63.87\\% $\\pm$ 0.71 \\\\\nClassmix~\\cite{classmix_arXiv_2020} & 54.07\\% $\\pm$ 1.61 & 61.35\\% $\\pm$ 0.62 & 63.63\\% $\\pm$ 0.33 \\\\\n\\midrule\nBaseline(*) & 43.84\\% $\\pm$ 0.71 & 54.84\\% $\\pm$ 1.14 & 60.08\\% $\\pm$ 0.62\\\\\nSuperpixel-mix (ours) & \\textbf{54.11\\% $\\pm$ 2.88 ($\\uparrow$ 7.27\\%)} & \\textbf{63.44\\% $\\pm$ 0.88 ($\\uparrow$ 8.60\\%)} & \\textbf{65.82\\% $\\pm$ 1.78 ($\\uparrow$ 5.74\\%)}\\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\caption{\\small \n\\textbf{Evaluation in the semi-supervised learning regime on Cityscapes. We report mIoU scores as mean $\\pm$ standard deviation computed over 12 runs.}\nThe $(\\uparrow)$ shows the improvement of our method over the baseline. (*) The baselines are from~\\cite{classmix_arXiv_2020} as we use a similar base procedure.}\n\\label{table:Cityscapes}\n\\end{table*}\n\n\n\\vspace{-3mm}\n\\subsection{Ablations and analysis}\\label{Ablations_sp}\n\\Correction{We perform various ablations to understand the \ninfluence of different hyper-parameters and choices\nin our algorithm. First, we vary the number of superpixels extracted per image from 20 to 1,000. The results (in Table 5, Supplement) show that our method achieves the highest mIoU on Cityscapes when the number of superpixels per image is between 100 and 200. Secondly, we study the proportion of superpixels chosen for mixing over the total number of superpixels per image. The proportion ranges from 0.1 to 0.9. We find that the results vary little across all the proportion values (Table 6, Supplement). The best mIoU is obtained at a proportion value of 0.6. 
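For concreteness, the following minimal sketch illustrates how a mixing mask can be built from watershed superpixels and applied to a pair of images; the names, default values and library calls are illustrative assumptions rather than our exact implementation, and the same boolean mask can then be applied to the corresponding labels or pseudo-labels.
\\begin{verbatim}
# Illustrative sketch of building a superpixel mixing mask.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import sobel
from skimage.segmentation import watershed

def superpixel_mix(img_a, img_b, n_superpixels=150, proportion=0.6, rng=None):
    # img_a, img_b: (H, W, 3) arrays of the same shape.
    rng = rng or np.random.default_rng()
    # Watershed superpixels computed on the gradient image of img_a.
    segments = watershed(sobel(rgb2gray(img_a)),
                         markers=n_superpixels, compactness=0.001)
    labels = np.unique(segments)
    keep = rng.choice(labels, size=int(proportion * len(labels)),
                      replace=False)
    mask = np.isin(segments, keep)          # True where img_a is kept
    return np.where(mask[..., None], img_a, img_b), mask
\\end{verbatim}
The defaults above are simply chosen to mirror the settings that work best in these ablations (on the order of 100 to 200 superpixels per image and a mixing proportion of 0.6).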
Finally, we study the influence of different superpixel techniques to generate mixing masks: Watershed~\\cite{watershed_superpixel_ICIP_2014}, SLIC~\\cite{slic}, and Felzenszwalb~\\cite{felzenswalb2004efficient}. The results in Table~\\ref{table:Abblation000} show that on Cityscapes segmentation the performance of Superpixel-mix is relatively stable across superpixel methods, with Watershed yielding the best mIoU score.}\n\n\\begin{table}[t]\n\\begin{center}\n \\scalebox{0.55}\n {\n\\begin{tabular}{l| c c c }\n\\toprule\nsuperpixels algorithm & Watershed ~\\cite{watershed_superpixel_ICIP_2014} & SLIC~\\cite{slic} & Felzenszwalb \\cite{felzenswalb2004efficient} \\\\\n\n mIoU & 78.99 \\%& 78.89\\% & 77.99\\% \\\\\n\n\\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\caption{\\small \\textbf{Ablation study results over influence of different superpixel techniques. We report mIoU scores for semantic segmentation on Cityscapes.}}\n\\label{table:Abblation000}\n\\vspace{-5mm}\n\\end{table}\n\n\\vspace{-3mm}\n\\section{Conclusions}\n\nSuperpixel-mix data augmentation is a promising new training technique for semantic segmentation. This strategy for creating diverse data, combined with a teacher-student framework, leads to better accuracy and to more \\Correction{robust} DCNNs.\n We are the first, to the best of our knowledge, to successfully apply the watershed algorithm in data augmentation. What sets our data augmentation technique apart from existing methods is the ability to preserve the global structure of images and the shapes of objects while creating image perturbations. We conduct various experiments with different types of uncertainty. The results show that our strategy achieves state-of-the-art \\Correction{robustness} scores for epistemic uncertainty. For aleatoric uncertainty, our approach produces state-of-the-art results in Foggy Cityscapes and Rainy Cityscapes. In addition, our method needs just one forward pass.\n\nPrevious research in machine learning has established that creating more training data using data augmentation may improve the accuracy of the trained DCNNs substantially. 
Our work not only confirms that, but also provides evidence that some data augmentation methods, such as our Superpixel-mix, help to improve the \\Correction{robustness} of DCNNs by reducing both epistemic and aleatoric uncertainty.\n\n\\section*{Acknowledgments}\n\\Correction{This work was performed using HPC resources\nfrom GENCI-IDRIS (Grant 2020-AD011011970) and (Grant 2021-AD011011970R1).}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\chapter{\\@title}}\n \\renewcommand\\chapter{\\if@openright\\cleardoublepage\\else\\clearpage\\fi\n \\thispagestyle{empty}%\n \\global\\@topnum\\z@\n \\@afterindentfalse\n \\secdef\\@chapter\\@schapter}\n \\def\\@makechapterhead#1{%\n \\vspace*{50\\p@}%\n {\\parindent \\z@ \\raggedleft \\normalfont\n \\ifnum \\c@secnumdepth >\\m@ne\n \\if@mainmatter\n \n \\par\\nobreak\n \\vskip 20\\p@\n \\fi\n \\fi\n \\interlinepenalty\\@M\n \\Huge \\bfseries #1\\par\\nobreak\n \\vskip.25in\n \\large\\bfseries\\@author\\par\\nobreak\n \\vskip 40\\p@}\n \\ifx\\@abstract\\@empty\\else{\\small\\@abstract\\par\\vskip20\\p@}\\fi\n }\n\n\n\n \\let\\tdf\\textbf\n \\let\\df\\bf\n\n\n\\DeclareRobustCommand\\em\n {\\@nomath\\em \\ifdim \\fontdimen\\@ne\\font >\\z@\n \\upshape \\else \\slshape \\fi}\n\\let\\tem\\emph\n\n\n\\def\\@begintheorem#1#2{\\sl \\trivlist \\item[\\hskip \\labelsep{\\bf #1\\ #2}]}\n\\def\\@opargbegintheorem#1#2#3{\\sl \\trivlist\n \\item[\\hskip \\labelsep{\\bf #1\\ #2\\ (#3)}]}\n\n\n \\newcommand{\\sect}[1]{\\S\\ref{sect:#1}} \n \n \\newcommand{\\baresect}[1]{\\ref{sect:#1}} \n \n \\newcommand{\\eq}[1]{(\\ref{eq:#1})}\n \n \\newcommand{\\foot}[1]{footnote \\ref{foot:#1}}\n \n \\newcommand{\\fig}[1]{Fig.~\\ref{fig:#1}}\n\n \\newcommand{\\sectlabel}[1]{\\label{sect:#1}}\n \\newcommand{\\eqlabel}[1]{\\label{eq:#1}}\n \\newcommand{\\figlabel}[1]{\\label{fig:#1}}\n\n\n \\renewcommand{\\theequation} {\\mbox{\\arabic{equation}}}\n \\renewcommand{\\thesection} {\\mbox{\\arabic{section}}}\n \\renewcommand{\\thefigure} {\\mbox{\\arabic{figure}}}\n \\renewcommand{\\thetable} {\\mbox{\\arabic{table}}}\n\n\n\n \\let\\smallsection\\subsubsection \\setcounter{secnumdepth}{2}\n \\newcommand{\\parhead}[1]{\\tsl{#1}.\\quad}\n \n\n\n \\def\\@arabic#1{\\number #1}\n\n\n\n\\long\\def\\@makecaption#1#2{\n \\vskip\\abovecaptionskip\n \\sbox\\@tempboxa{{\\small {\\bf #1}: #2}}%\n \\ifdim\\wd\\@tempboxa>\\hsize\n {\\small {\\bf #1}: #2\\par}\n \\else\n \\global\\@minipagefalse\n \\hbox to\\hsize{\\hfil\\box\\@tempboxa\\hfil}\n \\fi\n \\vskip \\belowcaptionskip}\n\n\\def\\figstrut#1{\\hbox to\\linewidth{\\vrule height#1\\hfill}}\n\n\\newcommand{\\Fig}[4][!htb]\n\\begin{figure}[#1]\n \\centering\\leavevmode#3%\n \\caption{#4}\n \\figlabel{#2}\n\\end{figure} }\n \\let\\figonecol\\Fig\n\n \\newcommand{\\Figwide}[4][!t]\n \\begin{figure*}[#1]\n \\centering\\leavevmode#3%\n \\caption{#4}\n \\figlabel{#2}\n \\end{figure*} }\n \\let\\figtwocol\\Figwide\n\n \\newcommand{\\Figpage}[3]\n \\begin{figure*}[p]\n \\centering\\leavevmode#2%\n \\caption{#3}\n \\figlabel{#1}\n \\end{figure*} }\n \\let\\figpage\\Figpage\n\n\\renewenvironment{thebibliography}[1]\n {\\section*{\\bibname\n \\@mkboth{\\MakeUppercase\\bibname}{\\MakeUppercase\\bibname}}%\n \\list{\\@biblabel{\\@arabic\\c@enumiv}}%\n {\\settowidth\\labelwidth{\\@biblabel{#1}}%\n \\leftmargin\\labelwidth\n \\advance\\leftmargin\\labelsep\n \\@openbib@code\n \\usecounter{enumiv}%\n \\let\\p@enumiv\\@empty\n \\renewcommand\\theenumiv{\\@arabic\\c@enumiv}}%\n \\sloppy\n \\clubpenalty4000\n \\@clubpenalty \\clubpenalty\n 
\\widowpenalty4000%\n \\sfcode`\\.\\@m}\n {\\def\\@noitemerr\n {\\@latex@warning{Empty `thebibliography' environment}}%\n \\endlist}\n\\makeatother\n\n\\bibliographystyle{ICCS}\n\n\\endinput\n\n \\def\\thispagestyle{empty}\\chapter{\\@title}{%\n \\thispagestyle{empty}\n \\raggedleft\n \\noindent\\@author\\par\n \\medskip\n \\noindent\\hrule\n \\smallskip\n \\noindent{\\Huge\\sf\\@title\\par}\n \\vfil\\vfil\\vfil\\vfil\n {\\narrower\\narrower\\noindent\\@abstract\\par}\\newpage}\n\n\\section{Introduction}\n\nThroughout history we have used concepts from our current technology as metaphors to describe our world. Examples of this are the description of the body as a factory during the Industrial Age, and the description of the brain as a computer during the Information Age. These metaphors are useful because they extend the knowledge acquired by the scientific and technological developments to other areas, illuminating them from a novel perspective. For example, it is common to extend the particle metaphor used in physics to other domains, such as crowd dynamics \\cite{HelbingVicsek1999}. Even when people are not particles and have very complicated behaviour, for the purposes of crowd dynamics they can be effectively described as particles, with the benefit that there is an established mathematical framework suitable for this description. Another example can be seen with cybernetics \\cite{Ashby1956,HeylighenJoslyn2001}, where the system metaphor is used: everything is seen as a system with inputs, outputs, and a control\nthat regulates the internal variables of the system under the influence of perturbations from its environment. Yet another example can be seen with the computational metaphor \\cite{Wolfram2002}, where the universe can be modelled with simple discrete computational machines, such as cellular automata or Turing machines.\n\nHaving in mind that we are using metaphors, this paper proposes to extend the concept of information to describe the world: from elementary particles to galaxies, with everything in between, particularly life and cognition. There is no suggestion on the nature of reality as information \\cite{Wheeler1990}. This work only explores the advantages of \\emph{describing} the world as information. In other words, there are no ontological claims, only epistemological.\n\nIn the next section, the motivation of the paper is presented, followed by a section describing the notion of information to be used throughout the paper. In Section \\ref{s:laws}, eight tentative laws of information are put forward. These are applied to the notions of life (Section \\ref{s:life}) and cognition (Section \\ref{s:cog}). The paper closes presenting future work and conclusions.\n\n\n\\section{Why Information?}\n\nThere is a great interest in the relationship between energy, matter, and information \\cite{Kauffman2000,Umpleby2004,MorowitzSmith2006}. One of the main reasons for this arises because this relationship plays a central role in the definition of life: Hopfield \\cite{Hopfield1994} suggests that the difference between biological and physical systems is given by the meaningful information content of the former ones. Not that information is not present in physical systems, but---as Roederer puts it---information is \\emph{passive} in physics and \\emph{active} in biology \\cite{Roederer2005}. However, it becomes complicated to describe how this information came to be in terms of the physical laws of matter and energy. 
In this paper the inverse approach is proposed: let us describe matter and energy in terms of information. If atoms, molecules and cells are described as information, there is no need of a \\emph{qualitative} shift (from non-living to living matter) while describing the origin and evolution of life: this is translated into a \\emph{quantitative} shift (from less complex to more complex information). \n\nThere is a similar problem when we study the origin and evolution of cognition \\cite{Gershenson2004}: it is not easy to describe cognitive systems in terms of matter and energy. The drawback with the physics-based approach to the studies of life and cognition is that it requires a new category, that in the best situations can be referred to as ``emergent\". Emergence is a useful concept, but it this case it is not explanatory. Moreover, it stealthily introduces a dualist view of the world: if we cannot relate properly matter and energy with life and cognition, we are forced to see these as separate categories. Once this breach is made, there is no clear way of studying or understanding how systems with life and cognition evolved from those without it. If we see matter and energy as particular, simple cases of information, the dualist trap is avoided by following a continuum in the evolution of the universe. Physical laws are suitable for describing phenomena at the physical scale. The tentative laws of information presented below aim at being suitable for describing phenomena \\emph{at any scale}. Certainly, there are other approaches to describe phenomena at multiple scales, such as general systems theory and dynamical systems theory. These approaches are not exclusive, since one can use several of them, including information, to describe different aspects of the same phenomena.\n\nAnother benefit of using information as a basic descriptor for our world is that the concept is well studied and formal methods have already been developed \\cite{CoverThomas2006,ProkopenkoEtAl2007}, as well as its philosophical implications have been discussed \\cite{Floridi2003}. Thus, there is no need to develop a new formalism, since information theory is well established. I borrow this formalism and interpret it in a new way.\n\nFinally, information can be used to describe other formalisms: not only particles and waves, but also systems, networks, agents, automata, and computers can be seen as information. In other words, it can contain other descriptions of the world, potentially exploiting their own formalisms. Information is an \\emph{inclusive} formalism.\n\n\\section{What Is Information?}\n\nExtending the notion of Umwelt \\cite{vonUexkull1957}, the following notion of information can be given: \n\n\\begin{notion}\n\\label{notion:Info}\nInformation is anything that an agent can sense, perceive, or observe.\n\\end{notion}\n\nThis notion is in accordance with Shannon's \\cite{Shannon1948}, where information is seen as a just-so arrangement, a defined structure, as opposed to randomness \\cite{Cohen2000,Cohen2006}, and it can be measured in bits.\nThis notion can be applied to everything that surrounds us, including matter and energy, since we can perceive it---because it has a defined structure---and we are agents, according to the following notion:\n\n\\begin{notion}\n\\label{notion:Agent}\nAn agent is a description of an entity that \\emph{acts} on its environment \\cite[p. 
39]{GershensonDCSOS}.\n\\end{notion}\n\nNoticing that agents (and their environments) are also information (as they can be perceived by other agents, especially us, who are the ones who \\emph{describe} them as agents), an agent can be a human, a cell, a molecule, a computer program, a society, an electron, a city, a market, an institution, an atom, or a star. Each of these can be described (by us) as \\emph{acting} in their environment, simply because they \\emph{interact} with it.\nHowever, not all information is an agent, e.g. temperature, color, velocity, hunger, profit.\n\n\\begin{notion}\n\\label{notion:Environment}\nThe environment of an agent consists of all the information \\emph{interacting} with it.\n\\end{notion}\n\n\nInformation will be relative to the agent perceiving it\\footnote{Shannon's information \\cite{Shannon1948} deals only with the technical aspect of the transmission of information and not with its \\emph{meaning}, i.e. it neglects the semantic aspect of communication.}. Information can exist in theory ``out there\", independently of an agent, but for practical purposes, it can be only spoken about once an agent---not necessarily a human---perceives \/ interacts with it. The \\emph{meaning} of the information will be given by the \\emph{use} the agent perceiving it makes of it \\cite{Wittgenstein1999}, i.e. how the agent responds to it \\cite{AtlanCohen1998}. Thus, Notion \\ref{notion:Info} is a \\emph{pragmatic} one. Note that perceived information is different from the meaning that an agent gives to it. Meaning is an \\emph{active} product of the \\emph{interaction} between information and the agent perceiving it \\cite{Cohen2006,Neuman:2008}.\n\nLike this, an electron can be seen as an agent, which perceives other electrons as information. The same description can be used for molecules, cells, and animals. We can distinguish:\n\n\\begin{description}\n\\item[First order information] is that which is perceived directly by an agent. For example, the information received by a molecule about another molecule\n\\item[Second order information] is that which is perceived by an agent about information perceived by another agent. For example, the information perceived by a human observer about a molecule receiving information about another molecule.\n\\end{description}\n\nMost of the scientific descriptions about the world are second order information, as we perceive how agents perceive and produce information. The present approach also introduces naturally the role of the observer in science, since everything is ``observing\" the (limited, first order) information it interacts with from its own perspective. Humans would be second-level observers, observing the information observed by information. Everything we can speak about is observed, and all agents are observers.\n\nInformation is not necessarily conserved, i.e. it can be created, destroyed, or transformed. These can take place only through interaction. \\emph{Computation} can be seen as the \\emph{change} in information, be it creation, destruction, or transformation.\nMatter and energy can be seen as particular types of information that cannot be created or destroyed, only transformed, along with the well-known properties that characterize them.\n\nThe amount of information required to describe a process, system, object, or agent determines its \\emph{complexity} \\cite{ProkopenkoEtAl2007}. 
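The amount of information itself can be quantified: recalling from Notion \\ref{notion:Info} that information can be measured in bits, a purely illustrative example of the borrowed formalism (standard information theory, with nothing new assumed) is Shannon's measure for a distribution $\\{p_i\\}$ over the alternatives an agent can perceive,
\\[
H = -\\sum_i p_i \\log_2 p_i ,
\\]
so that a fair binary alternative ($p_1=p_2=1\/2$) carries one bit, while an almost fixed structure ($p_1=0.99$, $p_2=0.01$) carries only about $0.08$ bits.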
According to our current knowledge, during the evolution of our universe there has been a shift from simple information towards more complex information \\cite{Adami2002} (the information of an atom is less complex than that of a molecule, than that of a cell, than that of a multicellular organism, etc.). This ``arrow of complexity\"\\cite{Bedau1998} in evolution can guide us to explore general laws of information.\n\n\\section{Tentative Laws of Information}\n\\label{s:laws}\n\nSeeing the world as information allows us to describe general laws that can be applied to everything we can perceive. Extending Darwin's theory \\cite{Darwin1998}, the present framework can be used to reframe ``universal Darwinism\" \\cite{Dennet1995}, which explores the idea of evolution beyond biological systems.\nIn this work, the laws that describe the general behaviour of information as it evolves are introduced. These laws are only \\emph{tentative}, in the sense that they are only presented with arguments in favour of them, but they still need to be thoroughly tested.\n\n\\subsection{Law of Information Transformation}\n\nSince information is relative to the agents perceiving it, \\emph{information will potentially be \\emph{transformed} as different agents perceive it}. Another way of stating this law is the following: \\emph{information will potentially be transformed by \\emph{interacting} with other information}.\nThis law is a generalization of the Darwinian principle of random variation, and ensures \\emph{novelty} of information in the world. Even when there might be static information, different agents can perceive it differently and interact with it, potentially transforming it.\nThrough evolution, the transformation of information generates a \\emph{variety} or \\emph{diversity} that can be used by agents for novel purposes.\n\nSince information is not a conserved quantity, it can increase (created), decrease (destroyed), or be maintained as it is transformed.\n\nAs an example, RNA polymerase (RNAP) can make errors while copying DNA onto RNA strands. This slight random variation can lead to changes in the proteins for which the RNA strands serve as templates. \\emph{Some} of these changes will lead to novel proteins that might improve or worsen the function of the original proteins.\n\nThe transformation of information can be classified as follows:\n\n\\begin{description}\n\\item[Dynamic.] Information changes itself. This could be considered as ``objective, internal\" change.\n\\item[Static.] The agent perceiving the information changes, but the information itself does not change. There is a dynamic change but in the agent. This could be considered as ``subjective, internal\" change.\n\\item [Active.] An agent changes information in its environment. This could be considered as an ``objective, external\" change.\n\\item[Stigmergic.] An agent makes an active change of information, which changes the perception of that information by another agent. This could be considered as ``subjective, external\" or ``intersubjective\" change.\n\\end{description}\n\n\n\\subsection{Law of Information Propagation}\n\n\\emph{Information \\emph{propagates} as fast as possible}. Certainly, only some information manages to propagate. In other words, we can assume that different information has a different ``ability\" to propagate, also depending on its environment. The ``fitter\" information, i.e. that which manages to persist and propagate faster and more effectively, will prevail over other information. 
This law generalizes the Darwinian principle of natural selection, the maximum entropy production principle \\cite{MartyushevSeleznev2006} (entropy can also be described as information), and Kauffman's tentative fourth law of thermodynamics\\footnote{``The workspace of the biosphere expands, on average, as fast as it can in this coconstructing biosphere\" \\cite[p. 209]{Kauffman2000}}. It is interesting that this law contains the second law of thermodynamics, as atoms interact, propagating information homogeneously. It also describes living organisms, where genetic information is propagated across generations. And it also describes cultural evolution, where information is propagated among individuals. Life is ``far from thermodynamic equilibrium\" because it constrains \\cite{Kauffman2000} the (more simple) information propagation at the thermodynamic scale, i.e. the increase of entropy, exploiting structures to propagate (or maintain) the (more complex) information at the biological scale.\n\nIn relation with the law of information transformation, as information requires agents to perceive it, information will be potentially transformed. This source of novelty will allow for the ``blind\" exploration of better ways of propagating information, according to the agents perceiving it and their environments.\n\nExtending the previous example, if errors in transcription made by RNAP are beneficial for its propagation (which entails the propagation of the cell producing RNAP), cells with such novel proteins will have better chances of survival than their ``cousins\" without transcription errors.\n\nThe propagation of information can be classified as follows:\n\n\\begin{description}\n\\item[Autonomous.] Information propagates by itself. Strictly speaking, this is not possible, since at least some information is determined by the environment. However, if more information is produced by itself than by its environment, we can call this autonomous propagation (See Section \\ref{s:life}). \n\\item[Symbiotic. ] Different information cooperates, helping to propagate each other.\n\\item[Parasitic. ] Information exploits other information for its own propagation. \n\\item[Altruistic. ] Information promotes the propagation of other information at the cost of its own propagation.\n\\end{description}\n\n\n\\subsection{Law of Requisite Complexity}\n\nTaking into account the law of information transformation, transformed information can increase, decrease, or maintain its previous complexity, i.e. amount \\cite{ProkopenkoEtAl2007}. However, \\emph{more complex information will require more complex agents to perceive, act on, and propagate it}. This law generalizes the cybernetic law of requisite variety \\cite{Ashby1956}. Note that simple agents can perceive and interact with \\emph{part} of complex information, but they cannot (by themselves) propagate it. An agent cannot perceive (and thus contain) information more complex than itself. For simple agents, information that is complex for us will be simple as well. As stated above, different agents can perceive the same information in different ways, giving it different meanings.\n\nThe so called ``arrow of complexity\" in evolution \\cite{Bedau1998} can be explained with this law. If we start with simple information, its transformation will produce by simple drift \\cite{McShea1996,Miconi:2008} increases in the complexity of information, without any goal or purpose. 
This occurs simply because there is an open niche for information to become more complex as it varies. But this also promotes agents to become more complex to exploit novel (complex) information and propagate it. Evolution does not need to favour complexity in any way: information just propagates to every possible niche as fast as possible, and it seems that there is often an ``adjacent possible\" \\cite{Kauffman2000} niche of greater complexity.\n\n\nFor example, it can be said that a protein (as an agent) perceives some information via its binding sites, as it recognizes molecules that ``fit\" a site. More complex molecules will certainly need more complex binding sites. Whether complex molecules are better or worse is a different matter: some will be better, some will be worse. But for those which are better, the complexity of the proteins must match the complexity of the molecules perceived. If the binding site perceives only a part of the molecule, then this might be confused with other molecules which share the perceived part. Following the law of information transformation, there will be a variety of complexities of information. The law of requisite complexity just states that the increase in complexity of information is determined by the ability of agents to perceive, act on, and propagate more complex information.\n\nSince more complex information will be able to produce more variety, the \\emph{speed} of the complexity increase will escalate together with the complexity of the information.\n\n\n\\subsection{Law of Information Criticality}\n\n\\emph{Transforming and propagating information will tend to a critical \\emph{balance} between its stability and its variability}. Propagating information maintains itself as much as possible, but transforming information varies it as much as possible. This struggle leads to a critical balance analogous to the ``edge of chaos\" \\cite{Langton1990,Kauffman1993}, self-organized criticality \\cite{BTW1987,Adami1995}, and the ``complexity from noise\" principle \\cite{Atlan1974}. The homeostasis of living systems can also be seen as the self-regulation of information criticality. \n\nThis law can generalize Kauffman's four candidate laws for the coconstruction of a biosphere \\cite[Ch. 8]{Kauffman2000}. Their relationship with this framework demands further discussion, which is out of the scope of this paper.\n\nA well known example can be seen with cellular automata \\cite{Langton1990} and random Boolean networks \\cite{Kauffman1993,Gershenson2004c,Gershenson:2010}: stable (ordered) dynamics limit considerably or do not allow change of states so information cannot propagate, while variable (chaotic) dynamics change the states too much, losing information.\nFollowing the law of information propagation, information will tend to a critical state between stability and variability to maximize its propagation: if it is too stable, it will not propagate, and if it is too variable, it will be transformed. In other words, ``critical\" information will be able to propagate better than stable or variable one, i.e. as fast as possible (cf. law of information propagation).\n\n\n\\subsection{Law of Information Organization}\n\n\\emph{Information produces constraints that regulate information production}. These constraints can be seen as \\emph{organization} \\cite{Kauffman2000}. In other words, evolving information will be organized (by transformation and propagation) to regulate information production. 
According to the law of information criticality, this organization will lie at a critical area between stability and variability. And following the law of information propagation, the organization of information will enable it to propagate as fast as possible.\n\nThis law can also be seen as information having a certain \\emph{control} over its environment, since the organization of information will help it withstand perturbations. It has been shown \\cite{KlyubinEtAl2004,ProkopenkoEtAl2006,KlyubinEtAl2007} that using this idea as a fitness function can lead to the evolution of robust and adaptive agents, namely maximizing the mutual information between sensors and environment.\n\nA clear example of information producing its own organization can be seen with living systems, which are discussed in Section \\ref{s:life}.\n\n\n\n\n\\subsection{Law of Information Self-organization}\n\n\\emph{Information tends to its preferred, most probable state}. This is actually a tautology, since observers determine probabilities after observing tendencies of information dynamics. Still, this tautology can be useful to describe and understand phenomena. This law lies at the heart of probability theory and dynamical systems theory \\cite{Ashby1962}. The dynamics of a system tend to a subset of its state space, i.e. attractors, depending on its history. This simple fact reduces the possibility space of information, i.e. a system will tend towards a small subset of all possible states. If we describe attractors as ``organized\", then we can describe the dynamics of information in terms of self-organization \\cite{GershensonHeylighen2003a}.\n\nPattern formation can be described as information self-organizing, and related to the law of information propagation. Information will self-organize in ``fit\" patterns that are the most probable (defined \\emph{a posteriori}).\n\nUnderstanding different ways in which self-organization is achieved by transforming information can help us understand better natural phenomena \\cite{Gershenson:2010a} and design artificial systems \\cite{GershensonDCSOS}. For example, random Boolean networks can be said to self-organize towards their attractors \\cite{Gershenson:2010}.\n\n\\subsection{Law of Information Potentiality}\n\n\\emph{An agent can give different potential meanings to information}. This implies that the same information can have different meanings. Moreover, meaning---while being information---can be independent of the information carrying it, i.e. depend only on the agent observing it. Thus, different information can have the same potential meaning. The precise meaning of information will be given by an agent observing it within a specific context.\n\nThe potentiality of information allows the effective communication between agents. Different information has to be able to acquire the same meaning (homonymy), while the same information has to be able to acquire different meanings (polysemy) \\cite{Neuman:2008}. The relationship between the laws of information and communication is clear, but beyond the scope of this paper.\n\nThe law of information potentiality is related to a passive information transformation, i.e. a change in the agent observing information.\n\nIn spite of information potentiality, not all meanings will be suitable for all information. In other words, pure subjectivism cannot dictate meanings of information. By the law of information propagation, some meanings will be more suitable than others and will propagate. 
The suitability of meanings will be determined by their use and context \\cite{Wittgenstein1999}. However, there is always a certain freedom to subjectively transform information. \n\nFor example, a photon can be observed as a particle, as a wave, or as a particle-wave. The suitability of each given meaning is determined by the context in which the photon is described\/observed.\n\n\\subsection{Law of Information Perception}\n\n\\emph{The meaning of information is \\emph{unique} for an agent perceiving it in unique, always changing open contexts}. If meaning of information is determined by the use an agent makes of it, which is embedded in an open environment, we can go to such a level of detail that the meaning will be unique. Certainly, agents make generalizations and abstractions of perceptions in order to be able to respond to novel information. Still, the precise situation and context will never be repeated. This makes perceived information unique. The implication of this is that the response to any given information might be ``unexpected\", i.e. novelty can arise. Moreover, the meaning of information can be to a certain extent \\emph{arbitrary}. This is related with the law of information transformation, as the uniqueness of meaning allows the same information perceived differently by the same or different agents to be statically transformed. \n\nThis law is a generalization of the first law of human perception: ``whatever is perceived can be perceived only from a uniquely situated place in the overall structure of points of view\" \\cite[p. xxiv]{Holquist:1990} (cited in \\cite[p. 250]{Neuman:2008}). We can describe agents perceiving information as filtering it. An advantage of humans and other agents is that we can \\emph{choose} which filter to use to perceive. The suggestion is not that ``unpleasant\" information should be solipsistically ignored, but that information can be potentially actively transformed.\n\nFor example, T lymphocytes in an immune system can perceive foreign agents and attack them. Even when the response will be similar for similar foreign agents, each perception will be unique, a situation that always leaves space for novelty.\n\n\\subsubsection{Scales of perception}\n\nDifferent information is perceived at different scales of observation \\cite{BarYam2004}. As the scale tends to zero, then the information tends to infinite. For lower scales, more information and details are perceived. The uniqueness of information perception dominates at these very low (spatial and temporal) scales. However, as generalizations are made, information is ``compressed\", i.e. only relevant aspects of information are perceived\\footnote{The relevance is determined by the context, i.e. different aspects will be relevant for different contexts.}. At higher scales, more abstractions and generalizations are made, i.e. less information is perceived. When the scale tends to infinite, the information tends to zero. In other words, no information is needed to describe all of the universe, because all the information is already there. This most abstract understanding of the world is in line with the ``highest view\" of Vajrayana Buddhism \\cite{Nydahl:2008}. Implications at this level of description cannot be right or wrong, because there is no context. Everything is contained, but no information is needed to describe it, since it is already there. This ``maximum\" understanding is also described as vacuity, which leads to bliss \\cite[p. 
42]{Nydahl:2008}.\n\n\nFollowing the law of information criticality, agents will tend to a balance where the perceived information is minimal but maximally predictive \\cite{Shalizi2001} (at a particular scale): few information is cheaper, but more information in general entails a more precise predictability. The law of requisite complexity applies at particular scales, since a change of scale will imply a change of complexity of information \\cite{BarYam2004}.\n\n\n\\section{On the Notion of Life}\n\\label{s:life}\n\nThere is no agreed notion of life, which reflects the difficulty of defining the concept. Still, many researchers have put forward properties that characterize important aspects of life. \\emph{Autopoiesis} is perhaps the most salient one, which notes that living systems are self-producing \\cite{VarelaEtAl1974,McMullin2004}. Still, it has been argued that autopoiesis is a necessary but not sufficient property for life \\cite{RuizMoreno2004}. The relevance of autonomy \\cite{Barandarian2004,MorenoRuiz2006,KrakauerZanotto2007} and individuality \\cite{Michod2000,KrakauerZanotto2007} for life have also been highlighted . \n\nThese approaches are not unproblematic, since no living system is completely autonomous. This follows from the fact that all living systems are open. For example, we have some degree of autonomy, but we are still dependent on food, water, oxygen, sunlight, bacteria living in our gut, etc. This does not mean that we should abandon the notion of autonomy in life. However, we need to abandon the sharp distinction between life and non-life \\cite{Bedau1998,KrakauerZanotto2007}, as different degrees of autonomy escalate \\emph{gradually}, from the systems we considered as non-living to the ones we consider as living. In other words, life has to be a fuzzy concept.\n\nUnder the present framework, living and non-living systems are information. Rather than a yes\/no definition, we can speak about a ``\\emph{life ratio}\":\n\n\\begin{notion}\n\\label{notion:Life}\nThe ratio of \\emph{living information} is the information produced by itself over the information produced by its environment.\n\\end{notion}\n\nBeing more specific---since all systems also receive information---a system with a high life ratio produces more (first order) information about itself than the one it receives from its environment. Following the law of information organization, this also implies that living information produces more of its own constraints (organization) to regulate itself than the ones produced by its environment, and thus it has a greater autonomy. All information will have constraints from other (environmental) information, but we can measure (as second-order information) the proportion of internal over external constraints to obtain the life ratio. If this is greater than one, then the information regulates by itself more than the proportion that is regulated by external information. In the opposite case, the life ratio would be less than one.\n\nFollowing the law of information propagation, evolution will tend to information with higher life ratios, simply because this can propagate better, as it has more ``control\" and autonomy over its environment. When information depends more on its environment for its propagation, it has a higher probability of being transformed as it interacts with its environment.\n\nNote that the life ratio depends on spatial and temporal scales at which information is perceived. 
For example, for some microorganisms observed at a scale of years, the life ratio would be less than one, but if observed at a scale of seconds, the life ratio would be greater than one.\n\nCertainly, some artificial systems would be considered as living under this notion. However, we can make a distinction between living systems embodied in or composed of biological cells \\cite{DeDuve2003}, i.e. life as we know it, and the rest, i.e. life as it could be. The latter ones are precisely those explored by artificial life.\n \n\\section{On the Notion of Cognition}\n\\label{s:cog}\n\nCognition is certainly related to life \\cite{Stewart1995}. The term has taken different meanings in different contexts, but all of them can be generalized into a common notion \\cite{Gershenson2004}. Cognition comes from the Latin \\emph{cognoscere}, which means ``get to know\". Like this,\n\n\\begin{notion}\n\\label{notion:Cognition}\nA system is cognitive if it \\emph{knows} something \\cite[p.135]{Gershenson2004}.\n\\end{notion}\n\nFrom Notion \\ref{notion:Agent}, all agents are cognitive, since they ``\\emph{know}\" how to act on their environment, giving (first order) \\emph{meaning} to their environmental information. Thus, there is no boundary between non-cognitive and cognitive systems. Throughout evolution, however, there has been a \\emph{gradual} increase in the complexity of cognition \\cite{Gershenson2004}. This is because all agents can be described as possessing some form of cognition, i.e. ``knowledge\" about the (first-order) information they perceive\\footnote{One could argue that, since agency (and thus cognition) is already assumed in all agents, this approach is not explanatory. But I am not trying to explain the ``origins\" of agency, since I assume it to be there from the start. I believe that we can only study the evolution and complexification of agency and cognition, not their ``origins\".}.\n\nFollowing the law of requisite complexity, evolution leads to more complex agents, able to cope with the complexity of their environment. This is precisely what triggers the (second-order) increase in the complexity of cognition we observe.\n\n\n\nCertainly, there are different types of cognition\\footnote{For example, human, animal, plant, bacterial, immune, biological, adaptive, systemic, and artificial \\cite{Gershenson2004}.}. We can say that a rock ``knows\" about gravity because it perceives its information, which has an effect on it, but it cannot \\emph{react} to this information. Throughout evolution, information capable of maintaining its integrity has prevailed over that which was not. \\emph{Robust} information is that which can resist perturbations and maintain its integrity. The ability to react to perturbations in order to maintain information makes information \\emph{adaptive}, increasing its probability of maintenance. When this reaction is made before the perturbation occurs, the information is \\emph{anticipative}\\footnote{For a more detailed treatment of robustness, adaptation, and anticipation, see \\cite{GershensonDCSOS}}. As information becomes more complex (even if only by information transformation), the mechanisms for maintaining this information also become more complex, as stated by the law of requisite complexity. This has gradually led to the advanced cognition that animals and machines possess.\n\n\n\n\n\n\\section{Future Work}\n\nThe ideas presented here still need to be explored and elaborated further. One way of doing this would be with a simulation-based method. 
Being inspired by $\\epsilon$-machines \\cite{Shalizi2001,GoernerupCrutchfield2008}, one could start with ``simple\" agents that are able to perceive and produce information, but cannot control their own production. These would be let to evolve, measuring if complexity increases as they evolve. The hypothesis is that complexity would increase (under which conditions still remains to be seen), to a point where ``$\\epsilon$-agents\" will be able to produce themselves depending more on their own information than that of the environment. This would be similar to the evolution in Tierra \\cite{Ray1991} or Avida \\cite{AdamiBrown1994} systems, only that self-replication would not be inbuilt. The tentative laws of information presented in Section \\ref{s:laws} would be better defined if such a system was studied.\n\nOne important aspect that remains to be studied is the representation of thermodynamics in terms of information. This is because the ability to perform thermodynamic work is a characteristic property of biological systems \\cite{Kauffman2000}. This work can be used to generate the organization necessary to sustain life (cf. law of information organization). It is difficult to describe life in terms of thermodynamics, since it entails new characteristic properties not present in thermodynamic systems. But if we see the latter ones as information, it will be easier to describe how life---also described as information---evolves from them, as information propagates itself at different scales.\n\nA potential application of this framework would be in economy, considering capital, goods, \nand resources as information (a non-conserved quantity) \\cite{FarmerZamani2006}. A similar benefit (of non-conservation) could be given in game theory: if the payoff of games is given in terms of information (not necessarily conserved), non-zero sum games could be easier to grasp than if the payoff is given in material (conserved) goods.\n\nIt becomes clear that information (object), the agent perceiving it (subject) and the meaning-making or transformation of information (action) are deeply interrelated. They are part of the same totality, since one cannot exist without the others. This is also in line with Buddhist philosophy. The implications of an informational description of the world for philosophy have also to be addressed, since some schools have focussed on partial aspects of the object-subject-action trichotomy. Another potential application of the laws of information would be in ethics, where value can be described accordingly to the present framework.\n\n\n\\section{Conclusions}\n\nThis paper introduced general ideas that require further development, extension and grounding in particular disciplines. Still, a first step is always necessary, and hopefully feedback from the community will guide the following steps of this line of research.\n\nDifferent metaphors for describing the world can be seen as different languages: they can refer to the same objects without changing them. And each can be more suitable for a particular context. For example, English has several advantages for fast learning, German for philosophy, Spanish for narrative, and Russian for poetry. In other words, there is no ``best\" language outside a particular context. In a similar way, I am not suggesting that describing the world as information is more suitable than physics to describe physical phenomena, or better than chemistry to describe chemical phenomena. 
It would be redundant to describe particles as information if we are studying only particles. The suggested approach is meant only for the cases when the physical approach is not sufficient, i.e. across scales, constituting an alternative worth exploring to describe evolution.\n\nIt seems easier to describe matter and energy in terms of information than vice versa. Moreover, information could be used as a common language across scientific disciplines \\cite{vonBaeyer2004}.\n\n\n\\subsection*{Acknowledgements}\n\nI should like to thank Irun Cohen, Inman Harvey, Francis Heylighen, David Krakauer, Antonio del R\\'{i}o, Marko Rodriguez, David Rosenblueth, Stanley Salthe, Mikhail Prokopenko, Cl\\'{e}ment Vidal, and H\\'{e}ctor Zenil for their useful comments and suggestions. \n\n\\footnotesize{\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\chapter{The Formation of the First Luminous Primordial Structures}\n{Emanuele Ripamonti \\\\\nKapteyn Astronomical Institute, University of Groningen \\\\\nThe Netherlands \\\\~\\\\\nTom Abel \\\\\nKavli Institute for\nAstroparticle Physics and Cosmology, Stanford University \\\\\nU.S.A.}\n\n\n\\section{Introduction}\n\nThe scientific belief that the universe evolves in time is one of the legacies of the theory of the Big Bang. \n\nThe concept that the universe has an history started to attract the interest of cosmologists soon after the first formulation of the theory: already Gamow (1948; 1949) investigated how and when galaxies could have been formed in the context of the expanding Universe.\n\nHowever, the specific topic of the formation (and of the fate) of the {\\it first} objects dates to two decades later, when no objects with metallicities as low as those predicted by primordial nucleosynthesis ($Z\\lsim10^{-10}\\sim10^{-8}Z_\\odot$) were found. Such concerns were addressed in two seminal papers by Peebles \\& Dicke (1968; hereafter PD68) and by\nDoroshkevich, Zel'Dovich \\& Novikov (1967; hereafter DZN67)\\footnote{This paper is in Russian and we base our comments on indirect knowledge ({\\frenchspacing\\it e.g. } from the Novikov \\& Zel'Dovich 1967 review).}, introducing the idea that some objects could have formed before the stars we presently observe. \n\\begin{enumerate}\n\\item{Both PD68 and DZN67 suggest a mass of $\\sim10^5M_{\\odot}$ for the first\ngeneration of bound systems, based on the considerations on the cosmological\nJeans length\\index{Jeans length} (Gamow 1948; Peebles 1965) and the possible shape of the power spectrum.}\n\\item{They point out the role of thermal instabilities in the formation of the\nproto-galactic bound object, and of the cooling\\index{cooling} of the gas\ninside it; in particular, PD68 introduces H$_2$~ cooling\\index{HH cooling} and\nchemistry\\index{HH chemistry} in the calculations about the contraction of the gas.}\n\\item{Even if they do not specifically address the occurrence of fragmentation\\index{fragmentation}, these papers make two very different assumptions: PD68 assumes that the gas will fragment into ``normal'' stars to form globular clusters, while DZN67 assumes that fragmentation {\\it does not}\\ occur, and that a single ``super-star'' forms.}\n\\item{Finally, some feedback\\index{feedback} effects as considered ({\\frenchspacing\\it e.g. 
} Peebles \\& Dicke considered the effects of supernovae).}\n\\end{enumerate}\n\nToday most of the research focuses on the issues when fragmentation\\index{fragmentation} may occur, what objects are formed and how they influence subsequent structure formation. \n\n\nIn these notes we will leave the discussion of feedback to lecture notes by\nFerrara \\& Salvaterra and by Madau \\& Haardt in this same book and focus only on the aspects of the formation of the first objects.\n\n\n\n\n\n\n\n\n\n\n\n\nThe advent of cosmological numerical hydrodynamics in particular allow a fresh new look at these questions. Hence, these notes will touch on aspects of theoretical cosmology to chemistry\\index{chemistry}, computer science, hydrodynamics and atomic physics. \nFor further reading and more references on the subject we refer the reader to other\nrelevant reviews such as Barkana \\& Loeb 2001, and more recently Ciardi \\& Ferrara 2004,\nGlover 2004 and Bromm \\& Larson 2004.\n\nIn this notes, we try to give a brief introduction to only the most relevant\naspects. We will start with a brief overview of the relevant cosmological\nconcepts in section 2, followed by a discussion of the properties of\nprimordial material (with particular emphasis to its cooling and its\nchemistry\\index{chemistry}) in section 3. We will then review the technique\nand the results of numerical simulations\\index{numerical Simulation} in sections 4 and 5: the former will deal with detailed 3D simulations of the formation of gaseous clouds which are likely to transform into luminous objects, while the latter will examine results (mostly from 1D codes) about the modalities of such transformation. Finally, in section 6 we will critically discuss the results of the previous sections, examining their consequences and comparing them to our present knowledge of the universe.\n\n\n\n\\section{Physical cosmology}\n\nIn the following of this notes we will adopt the modern physical\ndescription of the universe (dating back to at least 1920), based upon\nthe ``cosmological principle'', which affirms that on cosmological\nscales the distributions of matter and energy should be homogeneous\nand isotropic, whose metric is the Robertson-Walker metric. Although\nderived from mainly philosophical arguments, the\ncosmological principle is also supported by observations such as the\nisotropy of the CMB ({\\it e.g.} Wu {\\it et al.~} 1999).\n\nWe will also make some additional, general assumptions which are\nquite common in present-day cosmology, and which are believed to be\nthe best explanations for a series of observations. That is:\n\\begin{enumerate}\n\\item{The cosmological structures we observe at present (galaxies,\nclusters of galaxies etc.) formed because of gravitational instability\nof pre-existing, much shallower fluctuations;}\n\\item{Most of the matter in the universe is in the form of {\\it ``Cold\nDark Matter''}\\index{dark matter}, that is, of some kind of elementary particle (or\nparticles) that has not been discovered at present. Cold Dark Matter\\index{dark matter}\nparticles are assumed to have a negligibly small cross section for\nelectromagnetic interactions ({\\frenchspacing\\it i.e. } to be {\\it dark}), and to move at\nnon-relativistic speeds ({\\frenchspacing\\it i.e. 
} to be{\\it cold}).}\n\\end{enumerate}\n\n\n\n\n\\subsection{Fluctuations in the Early Universe}\n\n\\subsubsection{Inflation\\index{inflation}}\nInflation is a mechanism which was first proposed by Guth (1981) and (in\na different way) by Starobinsky (1979, 1980), and has since been\nincluded in a number of different ``inflationary theories'' (see Guth\n2004 for a review upon which we base this paragraph).\n\nThe basic assumption of inflationary models is the existence of states\nwith negative pressure; a typical explanation is that some unidentified\nkind of scalar field (commonly referred to as {\\it inflaton}\\index{inflation})\ntemporarily keeps part of the universe in a ``false vacuum'' state, in\nwhich the energy density must be approximately constant (that is, it can\nnot ``diluted'' by expansion) at some value $\\rho_f c^2$, implying a\nnegative pressure $p_f=-\\rho_f c^2$. Inserting these two expression in\nthe first Friedmann cosmological equation\n\\begin{equation}\n\\ddot{a}(t) = -{{4\\pi}\\over3}G\\left({\\rho+3{p\\over{c^2}}}\\right) a(t)\n\\end{equation}\n(where $a(t)$ is the scale factor at cosmic time $t$) it is easy to\nderive that the considered region expands exponentially: $a(t)\\propto\ne^{t\/t_f}$ with $t_f=(8\\pi G\\rho_f\/3)^{-1\/2}$; the epoch in which the\nuniverse undergoes such exponential expansion is called {\\it inflation}.\n\nIf $\\rho_f$ is taken to be at about the grand unified theory scale, we\nget $t_f\\sim10^{-38}\\;{\\rm s}$, corresponding to an Hubble length of\n$ct_f \\sim 10^{-28}\\;{\\rm cm}$; if the inflationary phase is long enough\n(a lower limit for this is about 65 e-folding times, corresponding to an\nexpansion by a factor $\\sim10^{28}$), it smooths the metric, so that the\nexpanding region approaches a de Sitter flat space, regardless of\ninitial conditions. Furthermore, when inflation ends, the energy stored\nin the inflaton\\index{inflation} field is finally released, thermalizes and leads to the\nhot mixture of particles assumed by the standard big bang picture.\n\n\\begin{figure}[t]\n\\epsfig{file=geometry.ps,width=11truecm}\n\\caption{Effect of universe geometry on the observed angular size of\nfluctuations in the CMBR\\index{Cosmic Microwave Background}. If the universe is closed (left panel) ``hot\nspots'' appear larger than actual size; if the universe is flat\n(middle panel), ``hot spots'' appear with their actual size; if the\nuniverse is open, ``hot spots'' appear smaller than their actual\nsize.}\n\\label{geometry}\n\\end{figure}\n\nInflation helps explaining several fundamental observations, which were\njust assumed as initial conditions in non inflationary models:\n\\begin{enumerate}\n\\item{The Hubble expansion: repulsive gravity associated with false\nvacuum is exactly the kind of force needed to set up a motion pattern in\nwhich every two particles are moving apart with a velocity proportional\nto their distance.}\n\\item{The homogeneity and isotropy: in ``classical'' cosmology the\nremarkable uniformity of the Cosmic Microwave Background\\index{Cosmic Microwave Background} Radiation (CMBR)\ncannot be ``explained'', because different\nregions of the present CMBR sky never were causally connected. 
Instead,\nin inflationary models the whole CMBR sky was causally connected {\\it before}\ninflation\\index{inflation} started, and uniformity can be established at that time.}\n\\item{The flatness problem: a flat Friedman-Robertson-Walker model\nuniverse ({\\it i.e.}\\ with $\\Omega(t)\\equiv\\rho_{\\rm tot}(t)\/\\rho_{\\rm\ncrit}(t)=1$, where $\\rho_{\\rm tot}(t)$ is the cosmological mean density,\nincluding the ``dark energy'' term, and $\\rho_{\\rm crit}(t)=3H(t)^2\/8\\pi\nG$, $H(t)\\equiv \\dot{a}(t)\/a(t)$ being the Hubble parameter) always\nremains flat, but if at an early time $\\Omega$ was just slightly\ndifferent from $1$, the difference should have grown as fast as\n$(\\Omega-1) \\propto t^{2\/3}$. All the observational results ({\\it e.g.}\nthe Bennet {\\it et al.~} 2003 WMAP result $\\Omega_0=1.02 \\pm 0.02$) show that at\npresent $\\Omega$ is quite close to 1, implying that at the Planck time\n($t\\sim10^{-43}\\;{\\rm s}$) the closeness was really amazing. Inflation\nremoves the need for this incredibly accurate fine tuning, since during\nan inflationary phase $\\Omega$ is driven towards 1 as $(\\Omega-1)\\propto\ne^{-2Ht}$.}\n\\item{The absence of magnetic mono-poles: grand unified theories,\ncombined with classical cosmology, predict that the universe should be\nlargely dominated by magnetic mono-poles, which instead have never been\nobserved. Inflation\\index{inflation} provides an explanation, since it can dilute the\nmono-pole density to completely negligible levels.}\n\\item{the {\\it anisotropy} properties of the CMBR\\index{Cosmic Microwave Background} radiation: inflation\nprovides a prediction for the power spectrum of fluctuations, which\nshould be generated by quantum fluctuations and nearly scale-invariant,\na prediction in close agreement with the WMAP results (see the next\nsubsection).}\n\\end{enumerate}\n \n\n\nA peculiar property of inflation\\index{inflation} is that most inflationary models\npredict that inflation does not stop everywhere at the same time, but\njust in localized ``patches'' in a succession which continues eternally;\nsince each ``patch'' (such as the one we would be living in) can be\nconsidered a whole universe, it can be said that inflation produces an\ninfinite number of universes.\n\n\n\n\\subsubsection{Primordial fluctuation evolution - Dark Matter\\index{dark matter}}\nInflation\\index{inflation} predicts the statistical properties of the density\nperturbation field, defined as\n\\begin{equation}\n\\delta({\\bf x}) \\equiv {{\\rho({\\bf r}) - \\bar\\rho}\\over{\\bar\\rho}}\n\\end{equation}\nwhere ${\\bf r}$ is the proper coordinate, ${\\bf x}={\\bf r}\/a$ is the\ncomoving coordinate, and $\\bar\\rho$ is the mean matter density.\n\nIn fact, if we look at the Fourier components in terms of the comoving\nwave-vectors ${\\bf k}$ \n\\begin{equation}\n\\delta({\\bf k}) = \\int{\\delta({\\bf x})\\ e^{-i{\\bf k}{\\bf x}}\\ d^3{\\bf x}}\n\\end{equation}\nthe inflationary prediction is that the perturbation field is a\nGaussian random field, that the various ${\\bf k}$ modes are independent,\nand that the power spectrum $P(k)$ (where $k\\equiv|{\\bf k}|$) is close\nto scale-invariant, {\\it i.e.}, it is given by a power law\n\\begin{equation}\nP(k) \\equiv <|\\delta_{\\bf k}|^2> \\propto k^{n_s},\\qquad{\\rm with}\\ n_s\\simeq 1.\n\\end{equation}\n\n\\begin{figure}[t]\n\\epsfig{file=powerspectrum.ps,width=11truecm}\n\\caption{Schematic shape of the DM power spectrum after accounting\nfor the effects of free streaming\\index{fluctuations!free 
streaming scale} and of the processing that takes\nplace for fluctuations entering the horizon before $t_{\\rm eq}$.}\n\\label{powerspectrum}\n\\end{figure}\n\n\nThis prediction applies to the {\\it primordial} power spectrum; in order\nto make comparisons with observations, we need to include effects which\nsubsequently modified its shape:\n\\begin{enumerate}\n\\item{Free streaming: the dark matter\\index{dark matter} particles are in motion, and since\nthey are believed to interact only through gravity, they freely\npropagate from overdense to underdense regions, wiping out the density\nperturbations. However, at any given cosmic time $t$, this will affect\nonly wavelengths smaller than the {\\it free streaming}\\index{fluctuations!free streaming scale} length, {\\frenchspacing\\it i.e. }\\ the\nproper distance $l_{\\rm FS}(t)$ which a dark matter particle can have\ntravelled in time $t$. It can be shown (see {\\it e.g.}\\ Padmanabhan 1993,\n\\S4.6) that this scale depends primarily on the epoch $t_{\\rm nr}$ when\nthe dark matter\\index{dark matter} particles become non-relativistic: before that epoch\nthey move at the speed of light, and can cover a proper distance $l_{\\rm\nFS}(t_{\\rm nr})\\sim 2ct_{\\rm nr}$, corresponding to a comoving distance\n$\\lambda_{\\rm FS} = (a_0\/a_{\\rm nr})2ct_{\\rm nr}$; after becoming\nnon-relativistic, particle motions become much slower than cosmological\nexpansion, and the evolution after $t_{\\rm nr}$ only increases\n$\\lambda_{\\rm FS}$ by a relatively small factor. In turn, $t_{\\rm nr}$\ndepends primarily on the mass $m_{\\rm DM}$ of dark matter particles, so\nthat\n\\begin{equation}\n\\lambda_{\\rm FS}\\sim 5\\times10^{-3}\\;(\\Omega_{\\rm DM} h^2)^{1\/3}\n \\left({{m_{\\rm DM}}\\over{\\rm 1\\;GeV}}\\right)^{-4\/3}\\ {\\rm pc}\n\\end{equation}\ncorresponding to a mass scale\n\\begin{equation}\nM_{\\rm FS}\\sim 6\\times 10^{-15}\\;(\\Omega_{\\rm DM} h^2)\n \\left({{m_{\\rm DM}}\\over{\\rm 1\\;GeV}}\\right)^{-4}\\ M_{\\odot}.\n\\end{equation}\nSince the most favoured candidates for the DM particles are Weakly\nInteracting Massive Particles (WIMPs) with mass between $0.1$ and\n$100\\;{\\rm GeV}$, we probably have $M_{\\rm FS}\\lsim10^{-8} M_{\\odot}$. Some\nsuper-symmetric theories ({\\frenchspacing\\it e.g. }\\ Schwartz {\\it et al.~} 2001, and more recently\nGreen, Hofmann \\& Schwarz 2004) instead point towards $M_{\\rm FS}\\sim\n10^{-6}M_{\\odot}$, and Diemand, Moore \\& Stadel (2005) used this result in\norder to argue that the first structure which formed in the early\nUniverse were Earth-mass dark-matter\\index{dark matter} haloes (but see also Zhao {\\it et al.~}\n2005 for criticism about this result);}\n\\item{Growth rate changes: in general, perturbations grow\nbecause of gravity; however, the details of the process change with\ntime, leaving an imprint on the final processed spectrum. The time\n$t_{\\rm eq}$ when the matter density becomes larger than the radiation\ndensity is particularly important: before $t_{\\rm eq}$, perturbations\nwith $\\lambda$ larger than the Hubble radius $c\/H(t)$ grow as\n$\\delta\\propto a^2$, while the growth of smaller perturbations is almost\ncompletely suppressed; after $t_{\\rm eq}$ both large and small\nfluctuations grow as $\\delta\\propto a$. 
Because of these differences,\nthe size of the Hubble radius at $t_{\\rm eq}$, $\\lambda_{\\rm eq}\\simeq\n13\\;(\\Omega h^2)^{-1} {\\rm Mpc}$ (in terms of mass, $M_{\\rm eq}\\simeq\n3.2\\times10^{14} (\\Omega h^2)^{-2}\\;M_{\\odot}$) separates two different\nspectral regimes. In the wavelength range $\\lambda_{\\rm FS}\\leq\n\\lambda\\leq \\lambda_{\\rm eq}$ the growth of fluctuations pauses between\nthe time they enter the Hubble radius and $t_{\\rm eq}$. As a result\nthe slope of the processed spectrum is changed, and $P(k)\\propto k^{n_s-4}\\simeq\nk^{-3}$. Instead, at scales $\\lambda>\\lambda_{\\rm eq}$ all the\nfluctuations keep growing at the same rate at all times, and the shape\nof the power spectrum remains similar to the primordial one, $P(k)\\propto\nk^{n_s}\\simeq k^1$.}\n\\end{enumerate}\n\nWMAP (Bennett {\\it et al.~} 2003) measured the spectral index,\nobtaining $n_s=0.99\\pm0.04$, and did not detect deviations from\ngaussianity, both results in agreement with inflationary\npredictions.\n\nThis kind of spectrum, in which fluctuations are typically larger on\nsmall scales, leads naturally to hierarchical structure formation, since\nsmall-scale fluctuations are the first to become non-linear\\index{fluctuations!non-linear evolution} ({\\frenchspacing\\it i.e. } to\nreach $\\delta\\sim1$), collapse and form some kind of astronomical\nobject. It is also worth remarking that the very first objects, coming\nfrom the highest peaks of $\\delta({\\bf x})$, are typically located where\nmodes $\\delta({\\rm k})$ of different wavelength make some kind of\n``constructive interference'': the very first objects are likely to be\non top of larger ones, and they are likely to be clustered together,\nrather than uniformly distributed. For this reason, it is also very\nlikely that the halos where these objects formed have long since been\nincorporated inside larger objects, such as the bulges of $M_*$ galaxies\nor the cD galaxy at the centre of galaxy clusters (see {\\it e.g.}\\ White\n\\& Springel 1999).\n\n\\begin{figure}[t]\n\\epsfig{file=wmap_map.ps,angle=0,width=10truecm,clip=}\n\\caption{Map of the temperature anisotropies in the CMB\\index{Cosmic Microwave Background} as obtained by\ncombining the five bands of the WMAP satellite in order to minimise\nforeground contamination (from Bennett {\\it et al.~} 2003).}\n\\label{wmap_map}\n\\end{figure}\n\n\\begin{figure}[t]\n\\epsfig{file=wmap_spectrum.ps,width=10truecm,clip=}\n\\caption{Angular CMB\\index{Cosmic Microwave Background} power spectrum of temperature (top panel) and\ntemperature-polarization (bottom panel) as obtained by the WMAP\nsatellite (from Bennett {\\it et al.~} 2003). The line shows the best-fit with\na $\\Lambda$CDM model, and grey areas represent cosmic variance.}\n\\label{wmap_angspec}\n\\end{figure}\n\n\n\\subsubsection{Fluctuation evolution - Baryons}\n\nBefore the equivalence epoch $t_{\\rm eq}$ the baryons evolve in the same\nway as dark matter\\index{dark matter}. Instead, in the matter dominated era they behave\ndifferently: we mentioned that all the dark matter fluctuations which\nwere not erased by free streaming\\index{fluctuations!free streaming scale} grow as $\\delta\\propto a \\propto\n(1+z)^{-1}$, but this does not apply to baryons. In fact, baryons\ndecouple from radiation only at $t_{\\rm dec}$, significantly later than\n$t_{\\rm eq}$ (we remind that $1+z_{\\rm eq}\\sim 10^4$, while $1+z_{\\rm dec}\n\\simeq10^3$). 
The persistence of the coupling with radiation prevents\nthe growth of baryonic fluctuations on all scales; even worse, on\nrelatively small scales all the fluctuations in the baryonic component\nare erased through a mechanism similar to free streaming. Such effect\ntakes place below the so-called {\\it Silk scale}\\index{fluctuations!silk scale} (Silk 1968), which is\ngiven by the average distance that the photons (and the baryons coupled\nwith them) can diffuse before $t=t_{\\rm dec}$; this translates into a\ncomoving distance\n\\begin{equation}\n \\lambda_{\\rm S} \\simeq 3.5 \\left({\\Omega\\over{\\Omega_{\\rm\n b}}}\\right)^{1\/2} (\\Omega h^2)^{-3\/4}\\ {\\rm Mpc}\n\\end{equation}\n(where $\\Omega_{\\rm b}$ is the baryonic contribution to $\\Omega$)\nand encloses a mass\n\\begin{equation}\n M_{\\rm S} \\simeq 6.2\\times10^{12} \\left({\\Omega\\over{\\Omega_{\\rm\n b}}}\\right)^{3\/2} (\\Omega h^2)^{-5\/4}\\ M_{\\odot}.\n\\end{equation}\n\nThis result was a major problem for cosmology before the existence of\ndark matter\\index{dark matter} started to be assumed: it implies that in a purely baryonic\nuniverse there should be no structures on a scale smaller than that of\ngalaxy clusters (if $\\Omega=\\Omega_{\\rm b}\\simeq 0.1$). Furthermore,\neven fluctuations which were not erased can grow only by a factor\n$1+z_{\\rm dec}$ between decoupling and present time, and this is not\nenough to take a typical CMBR\\index{Cosmic Microwave Background} fluctuation (of amplitude\n$\\delta\\sim10^{-5}$) into the non-linear\\index{fluctuations!non-linear evolution} regime. The introduction of\nCold Dark matter solved this problem, since after recombination\\index{Hydrogen!recombination} the\nbaryons are finally free to fall inside dark matter\\index{dark matter} potential wells,\nwhose growth was unaffected by the photon drag. \n\nIt can be found that after decoupling from radiation, the baryonic\nfluctuations quickly ``reach'' the levels of dark matter fluctuations,\nevolving as\n\\begin{equation}\n\\delta_{\\rm b} = \\delta_{\\rm DM}\\left({1-{{1+z}\\over\n {1+z_{\\rm dec}}}}\\right),\n\\end{equation}\nso that the existing dark matter\\index{dark matter} potential wells ``regenerate'' baryonic\nfluctuations, including the ones below the Silk scale\\index{fluctuations!silk scale}.\n\nThis result is correct as long as pressure does not play a role, that is\nfor objects with a mass larger than the cosmological Jeans mass\\index{Jeans mass} $M_{\\rm\nJ}\\propto T^{3\/2}{\\rho}^{-1\/2}$. Such mass behaves differently at high\nand low redshift. Before a redshift $z_{\\rm Compton}\\simeq 130$ we have\nthat the temperature of the baryons is still coupled to that of the CMB\\index{Cosmic Microwave Background}\nbecause of Compton scattering of radiation on the residual free\nelectrons; for this reason, $T_{\\rm b}(z)\\simeq T_{\\rm CMB}(z)\\propto (1+z)$,\nand as $\\rho(z)\\propto (1+z)^3$ the value of $M_{\\rm J}$ is constant:\n\\begin{equation}\nM_{\\rm J}(z)\\simeq 1.35\\times 10^5 \\left({{\\Omega_{\\rm m} h^2}\\over\n {0.15}}\\right)^{-1\/2}\\;M_{\\odot} \\qquad\n ({\\rm for}\\ z_{\\rm dec}\\lower.5ex\\hbox{\\gtsima} z\\lower.5ex\\hbox{\\gtsima} z_{\\rm Compton})\n\\end{equation}\nwhere $\\Omega_{\\rm m}=\\Omega_{\\rm b}+\\Omega_{\\rm DM}$ is the total\nmatter density (baryons plus dark matter\\index{dark matter}).\nAt lower redshifts the baryon temperature is no longer locked to that of\nthe CMB\\index{Cosmic Microwave Background} and drops adiabatically as $T_{\\rm b}\\propto (1+z)^2$. 
At such\nredshifts the Jeans mass\\index{Jeans mass} evolves as\n\\begin{eqnarray}\nM_{\\rm J}(z)\\simeq 5.7\\times 10^3\n \\left({{\\Omega_{\\rm m} h^2}\\over {0.15}}\\right)^{-1\/2} \\;\n \\left({{\\Omega_{\\rm b}h^2}\\over{0.022}}\\right)^{-3\/5} \\;\n \\left({{1+z}\\over{10}}\\right)^{3\/2}\\;M_{\\odot} \\qquad\n \\\\({\\rm for}\\ z\\lower.5ex\\hbox{\\ltsima} z_{\\rm Compton}).\\nonumber \n\\end{eqnarray}\n\n\\begin{figure}[t]\n\\epsfig{file=thermal_hist.ps,width=11truecm,clip=}\n\\caption{Schematic evolution of the temperature of the Intergalactic Medium\n(IGM) as a function of redshift: after recombination and before\nthermal decoupling (at $z_{\\rm Compton}$), $T_{\\rm IGM}$ is locked to\n$T_{\\rm CMBR}$ and evolves as $(1+z)$. After thermal decoupling\n$T_{\\rm IGM}$ is no more locked to the radiation temperature, and\ndrops adiabatically as $(1+z)^2$, until the first non-linear objects\ncollapse, and emit light which is able to re-ionize the universe,\nraising $T_{\\rm IGM}$ above $\\sim 10^4\\;{\\rm K}$.}\n\\label{thermal_hist}\n\\end{figure}\n\n\n\n\\subsection{From fluctuations to cosmological structures}\n\\subsubsection{Non-linear evolution\\index{fluctuations!non-linear evolution}}\nWhen the density contrast $\\delta$ becomes comparable to unity, the\nevolution becomes non-linear, and the Fourier modes no more evolve\nindependently. The easiest way to study this phase is through the ``real\nspace'' $\\delta({\\bf x})$ (rather than its Fourier components\n$\\delta({\\bf k})$), considering the idealized case of the collapse of a\nspherically symmetric overdensity, and in particular the collapse of a\n{\\it top-hat} fluctuation. It is well known that, through some simple\nfurther assumption (such as that ``spherical shells do not cross''), the\ntime evolution of the radius $R$ of a top-hat perturbation (see {\\frenchspacing\\it e.g. }\nPadmabhan 1993, \\S 8.2 for the treatment of a slightly more general\ncase) can be written down as\n\\begin{equation}\nR(t)={R_i\\over2}{{1+\\delta_i}\\over{\\delta_i-(\\Omega_i^{-1}-1)}}[1-\\cos\\theta(t)]\n\\label{tophatradius}\n\\end{equation}\nwhere $R_i=R(t_i)$, $\\delta_i=\\delta(t_i)$ and $\\Omega_i=\\Omega(t_i)$\nare the ``initial conditions'' for the fluctuation evolution, and\n$\\theta$ is defined by the equation\n\\begin{equation}\n[\\theta(t)-\\sin\\theta(t)]{{1+\\delta_i}\\over\n{2H_i\\Omega_i^{1\/2}[\\delta_i-(\\Omega_i^{-1}-1)]^{3\/2}}} \\simeq t\n\\end{equation}\nwhere again $H_i$ is the Hubble parameter at $t_i$, and the last\napproximate equality is valid only as long as $\\delta_i\\ll1$ (that is, a\nsufficiently early $t_i$ must be chosen). The fluctuation radius $R$\nreaches a maximum $R_{\\rm ta}$ at the so-called {\\it turn-around} epoch\n(when $\\theta=\\pi$) when the overdense region finally detaches itself\nfrom the Hubble flow and starts contracting. However, while\neq. (\\ref{tophatradius}) suggests an infinite contraction to $R=0$ (when\n$\\theta=2\\pi$), the violent relaxation process (Lynden-Bell 1967)\nprevents this from happening, leading to a configuration in virial\nequilibrium at $R_{\\rm vir}\\simeq R_{\\rm ta}\/2$.\n\nHere, we summarise some well known, useful findings of this model.\n\nFirst of all, combining the evolution of the background density\nevolution and of eq. 
(\\ref{tophatradius}) it is possible to\nestimate the density contrast evolution\n\\begin{equation}\n\\delta(t)={9\\over2}{{(\\theta-\\sin\\theta)^2}\\over{(1-\\cos\\theta)^3}}-1\n\\end{equation}\nwhich leads to some noteworthy observation, such as that the density\ncontrast at turn-around is $\\delta_{\\rm ta}=(9\\pi^2\/16)-1\\simeq4.6$,\nwhich at virialization becomes $\\delta_{\\rm vir}=\\Delta_{\\rm c}$,\nwhere it can be usually assumed that $\\Delta_{\\rm c}\\simeq18\\pi^2$\nbut sometimes higher order approximations are necessary, such as the\none in Bryan \\& Norman 1998,\n\\begin{equation}\n\\Delta_{\\rm c}=18\\pi^2 + 82(1-\\Omega_m^z) -39(1-\\Omega_m^z)^2\n\\end{equation}\nwith\n\\begin{equation}\n\\Omega_m^z={{\\Omega_{\\rm m} (1+z)^3}\\over\n{\\Omega_{\\rm m} (1+z)^3+\\Omega_\\Lambda+\\Omega_k(1+z)^2}}\n\\end{equation}\nwhere $\\Omega_\\Lambda$ is the dark energy density, and $\\Omega_k =\n1-\\Omega_{\\rm m}-\\Omega_\\Lambda$ is the curvature.\n\nFrom this, it is possible to estimate the virial radius\n\\begin{equation}\nR_{\\rm vir} \\simeq 0.784 \\left({M\\over{10^8 h^{-1} M_{\\odot}}}\\right)^{1\/3}\n\\left({{\\Omega_{\\rm m}\\over\\Omega_m^z}{\\Delta_c\\over{18\\pi^2}}}\\right)^{-1\/3}\n\\left({{1+z}\\over{10}}\\right)^{-1} h^{-1}\\;{\\rm kpc},\n\\end{equation}\nthe circular velocity for such an halo\n\\begin{equation}\nV_{\\rm circ} \\simeq 23.4 \\left({M\\over{10^8 h^{-1} M_{\\odot}}}\\right)^{1\/3}\n\\left({{\\Omega_{\\rm m}\\over\\Omega_m^z}{\\Delta_c\\over{18\\pi^2}}}\\right)^{1\/6}\n\\left({{1+z}\\over{10}}\\right)^{1\/2}\\;{\\rm km\\,s^{-1}},\n\\end{equation}\nand the virial temperature\n\\begin{equation}\nT_{\\rm vir} \\simeq 19800 \\left({M\\over{10^8 h^{-1} M_{\\odot}}}\\right)^{2\/3}\n\\left({{\\Omega_{\\rm m}\\over\\Omega_m^z}{\\Delta_c\\over{18\\pi^2}}}\\right)^{1\/3}\n\\left({{1+z}\\over{10}}\\right) \\left({\\mu\\over{0.6}}\\right) \\;{\\rm K}.\n\\end{equation}\n\n\\subsubsection{The Press-Schechter formalism\\index{Press-Schechter formalism}} \n\\begin{figure}[t]\n\\epsfig{file=press_schechter_mass.ps,width=10truecm,clip=}\n\\caption{Characteristic mass of $1\\sigma$ (bottom solid curve),\n$2\\sigma$ (middle solid curve) and $3\\sigma$ (top solid curve)\ncollapsing halos as a function of redshift. These were obtained from\nthe Eisenstein \\& Hu (1999) power spectrum, assuming an\n$\\Omega_\\Lambda=0.7$, $\\Omega_{\\rm m}=0.3$ cosmology. The dashed curves show\nthe minimum mass\\index{minimum mass} which is required for the baryons to be able to cool\nand collapse (see next section) in case of pure atomic cooling (upper\ncurve) and of molecular cooling\\index{cooling} (lower curve).\n(from Barkana \\& Loeb 2001).}\n\\label{press_schechter_mass}\n\\end{figure}\n\n\\begin{figure}[t]\n\\epsfig{file=press_schechter_abundance.ps,width=10truecm,clip=}\n\\caption{Halo mass functions at several redshifts (from bottom left to\n bottom right, $z=30$, $z=20$, $z=10$, $z=5$ and $z=0$,\n respectively). The assumed power spectrum and cosmology are the same as in fig. \\ref{press_schechter_mass}. (from Barkana \\& Loeb 2001).}\n\\label{press_schechter_abundance}\n\\end{figure}\n\n\nThe simple top-hat model is at the core of the so-called Press-Schechter\nformalism\\index{Press-Schechter formalism} (Press \\& Schechter 1974, but see also the contribution by\nSommerville in this same book), which predicts the density of\nvirialized halos of a given mass at a given redshift. 
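Before looking at halo abundances, it may be useful to see the spherical-collapse scalings quoted above in action. The following Python fragment is only a rough sketch (the flat $\Omega_{\rm m}=0.3$, $\Omega_\Lambda=0.7$ cosmology and the example halo are assumed, illustrative values):
\begin{verbatim}
import numpy as np

# Assumed cosmological parameters (illustrative values only)
Omega_m, Omega_L = 0.3, 0.7
Omega_k = 1.0 - Omega_m - Omega_L

def Omega_m_z(z):
    """Matter density parameter at redshift z (equation above)."""
    x = Omega_m * (1.0 + z)**3
    return x / (x + Omega_L + Omega_k * (1.0 + z)**2)

def Delta_c(z):
    """Bryan & Norman (1998) fit for the virial overdensity."""
    d = 1.0 - Omega_m_z(z)
    return 18.0 * np.pi**2 + 82.0 * d - 39.0 * d**2

def virial_quantities(M, z, mu=0.6):
    """R_vir [h^-1 kpc], V_circ [km/s] and T_vir [K] for a halo of mass
    M [h^-1 Msun], following the three formulae quoted above."""
    m8 = M / 1.0e8
    f = (Omega_m / Omega_m_z(z)) * Delta_c(z) / (18.0 * np.pi**2)
    zz = (1.0 + z) / 10.0
    R_vir = 0.784 * m8**(1.0 / 3.0) * f**(-1.0 / 3.0) / zz
    V_circ = 23.4 * m8**(1.0 / 3.0) * f**(1.0 / 6.0) * zz**0.5
    T_vir = 19800.0 * m8**(2.0 / 3.0) * f**(1.0 / 3.0) * zz * (mu / 0.6)
    return R_vir, V_circ, T_vir

# Example: a 1e6 h^-1 Msun mini-halo virializing at z = 20
print(virial_quantities(1.0e6, 20.0))
\end{verbatim}
For such a halo the virial temperature comes out of order $10^3\;{\rm K}$, i.e. in the regime where molecular cooling (see next section) is essential.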
The Press-Schechter model assumes that the distribution of the smoothed density field $\delta_M$ (where $M$ is the mass scale of the smoothing) at a certain redshift $z_0$ is Gaussian with a variance $\sigma_M$, so that the probability of having $\delta_M$ larger than a given $\delta_{\rm crit}$ is
\begin{equation}
P(\delta_M>\delta_{\rm crit}) = \int_{\delta_{\rm crit}}^\infty {
{1\over{(2\pi)^{1/2}\sigma_M}} e^{-{x^2\over{2\sigma_M^2}}} dx};
\end{equation}
a common choice is $z_0=0$, requiring $\delta_M$ to be estimated through a purely linear evolution\index{fluctuations!linear evolution} of the primordial power spectrum.

The Press-Schechter\index{Press-Schechter formalism} model then chooses a $\delta_{\rm crit}=\delta_{\rm crit}(z)$ (but it is also possible to assume a constant $\delta_{\rm crit}$ and make $\sigma_M$ a function of redshift; see {\frenchspacing\it e.g. }\ Viana \& Liddle 1996) and assumes that this probability (multiplied by a factor of 2; see Bond {\it et al.~} 1991 for an explanation of this extra factor) also gives the fraction of mass which at redshift $z$ is inside virialized halos of mass $M$ or larger. This can be differentiated over $M$ in order to get the mass distribution at each given redshift
\begin{equation}
{{dn}\over{dM}}={2\over{(2\pi)^{1/2}}} {\rho_m\over M}
{{d \ln(1/\sigma_M)}\over{dM}} {{\delta_{\rm crit}(z)}\over\sigma_M}
e^{-{{\delta_{\rm crit}(z)^2}\over{2\sigma_M^2}}}.
\end{equation}

In this way, the abundance of halos is completely determined through the two functions $\delta_{\rm crit}(z)$ and $\sigma_M$. The first one is commonly written as $\delta_{\rm crit}(z)=\delta_0/D(z)$, where $D(z)$ is the growth factor ($D(z)\simeq(1+z)^{-1}$ for Einstein-de Sitter models; see Peebles 1993 for a more general expression), coming from cosmology; instead $\delta_0$ is usually taken to be $1.686$, since the top-hat model predicts that an object virializes at the time when linear theory estimates its overdensity at $\delta=1.686$. The variance $\sigma_M$, on the other hand, depends on the power spectrum; for example, figures \ref{press_schechter_mass} and \ref{press_schechter_abundance} are based on the Eisenstein \& Hu (1999) results\footnote{The authors of this paper also provide some very useful codes for dealing with the power spectrum and the Press-Schechter formalism\index{Press-Schechter formalism} at the web page http://background.uchicago.edu/$\sim$whu/transfer/transferpage.html.}.

\section{Primordial gas properties}

\subsection{Cooling}
The typical densities reached by the gas after virialization (of the order of $n_{\rm B}\equiv \rho_{\rm B}/m_{\rm H} \sim 0.01\, \Omega_{\rm b} [(1+z_{\rm vir})/10]^3\; {\rm cm^{-3}}$) are far too low for the gas to condense and form an object like a star. The only way to proceed further in the collapse and in the formation of luminous objects is to remove the gas thermal energy through radiative cooling\index{cooling}.

For this reason, cooling processes are important in determining where, when and how the first luminous objects will form.

In Fig. \ref{metalfree_cooling} it is possible to see that the cooling\index{cooling!metal-free gas} of primordial ({\frenchspacing\it i.e. 
} metal-free) {\it atomic} gas at temperatures below $\sim10^4\;{\rm K}$ is dramatically inefficient, because in that temperature range the gap between the fundamental and the lowest excited levels ($\simeq 10.2\;{\rm eV}$ for H atoms) is so much larger than the thermal energy $\sim k_{\rm B}T\lower.5ex\hbox{\ltsima} 1\;{\rm eV}$ that very few atoms get collisionally excited, and very few photons are emitted through the corresponding de-excitations.

This is important because in all hierarchical scenarios the first objects to virialize are the smallest ones, and such halos have the lowest virial temperatures. If the primordial gas were completely atomic, the first luminous objects would probably form relatively late ($z\lsim10-15$), in moderately massive halos ($M\sim10^8\;M_{\odot}$) with $T_{\rm vir}\lower.5ex\hbox{\gtsima} 10^4\;{\rm K}$.

However, it is also possible to see from Fig. \ref{metalfree_cooling} that the presence of molecules in small amounts ($f_{\rm H_2}\equiv 2n_{\rm H_2}/(n_{\rm H}+2n_{\rm H_2})\lower.5ex\hbox{\gtsima} 5\times10^{-4}$; the dashed curve in Fig. \ref{metalfree_cooling} was obtained assuming $f_{\rm H_2}=10^{-3}$) can dramatically affect the cooling\index{cooling} properties of primordial gas at low temperatures, making low-mass halos virializing at high redshift ($z\lower.5ex\hbox{\gtsima} 20$) the most likely sites for the formation of the first luminous objects.

\begin{figure}[t]
\epsfig{file=metalfree_cooling.ps,width=10truecm,clip=}
\caption{Cooling rate per atomic mass unit of\index{cooling!metal-free gas} metal-free gas, as a function of temperature. The solid line assumes the gas to be completely atomic (the two peaks correspond to H and He excitations, while the high temperature tail is dominated by free-free processes) and drops to about zero below $T\sim10^4\;{\rm K}$; the dashed line shows the contribution of a small ($f_{\rm H_2} = 10^{-3}$) fraction of molecular hydrogen\index{Hydrogen}, which contributes extra cooling in the range $100\;{\rm K}\lower.5ex\hbox{\ltsima} T \lower.5ex\hbox{\ltsima} 10^4\;{\rm K}$ (from Barkana \& Loeb 2001).}
\label{metalfree_cooling}
\end{figure}

\subsection{Molecular cooling\index{cooling}}

In the current scenario for the formation of primordial objects, the most relevant molecule is H$_2$. The only reason for this is the high abundance of H$_2$~\index{HH} when compared with all other molecules. In fact, the radiating properties of an H$_2$~ molecule are very poor: because of the absence of a dipole moment, radiation is emitted only through weak quadrupole transitions. In addition, the energy difference between the H$_2$~ ground state and the lowest H$_2$~ excited roto-vibrational levels is relatively large ($\Delta E_{01}/k_{\rm B}\lower.5ex\hbox{\gtsima} 200\;{\rm K}$ between the fundamental and the first excited level; however, this transition is forbidden by quadrupole selection rules, and the lowest energy gap for a quadrupole transition is $\Delta E_{02}/k_{\rm B}\simeq 510\;{\rm K}$), further reducing the cooling efficiency at low temperatures.

Apart from H$_2$~, the most relevant molecular coolants are HD and LiH. 
In the following, we will briefly list the cooling\index{cooling} rates (mainly taken from Galli \& Palla 1998, hereafter GP98; see also Hollenbach \& McKee 1979, Lepp \& Shull 1983, 1984, Martin {\it et al.~} 1996, Le Bourlot {\it et al.~} 1999 and Flower {\it et al.~} 2000) of H$_2$~ and of the other two possibly relevant species\footnote{In Fig. \ref{species_cooling} the cooling rate for H$_2^+$ is shown, too. However, while it is still marginally possible that HD or LiH cooling\index{HD cooling}\index{LiH cooling} can play some kind of role in the primordial universe, this is much more unlikely for H$_2^+$, mainly because its under-abundance with respect to H$_2$~ is always much larger than the difference in the cooling rates; for this reason we choose to omit a detailed discussion of H$_2^+$ cooling.}\index{cooling}.

We note that Bromm {\it et al.~} 1999 included HD cooling\index{HD cooling} in some of their simulations but found that it never accounted for more than $\sim10\%$ of H$_2$~ cooling; however, they did not completely rule out the possibility that a $n_{\rm HD}/n_{\rm H_2}$ ratio substantially larger than the equilibrium value (close to $n_{\rm D}/n_{\rm H}\sim 10^{-5}$) could change this conclusion. Also, in present-day simulations of primordial star formation the gas temperature never goes below a few hundred degrees: if it did, even a tiny amount of LiH\index{LiH cooling} could be enough to dominate the cooling\index{cooling} rate.

\begin{figure}[t]
\epsfig{file=H2cool.eps,width=10truecm,clip=}
\caption{Cooling rate per H$_2$~ molecule as computed by different authors (Lepp \& Shull 1983, Martin {\it et al.~} 1996, GP98) as a function of temperature and for different densities; note that for $n\lower.5ex\hbox{\gtsima} 10^4\;{\rm cm^{-3}}$ the cooling rate is almost independent of density.}
\label{H2_cool}
\end{figure}

\subsubsection{H$_2$~ cooling rate\index{cooling}}
The H$_2$~ cooling rate {\it per molecule} $\Lambda_{\rm H_2}(\rho,T)$ can be conveniently expressed in the form:
\begin{equation}
\Lambda_{\rm H_2}(\rho,T) = {{\Lambda_{\rm H_2,LTE}(T)}\over
{1+{{\Lambda_{\rm H_2,LTE}(T)}\over{n_{\rm H}\Lambda_{\rm H_2,\rho\rightarrow0}(T)}}}}
\end{equation}
where $\Lambda_{\rm H_2,LTE}(T)$ and $n_{\rm H}\Lambda_{\rm H_2,\rho\rightarrow0}$ are the high and low density limits of the cooling rate (which apply at $n \lower.5ex\hbox{\gtsima} 10^4\; {\rm cm^{-3}}$ and at $n \lower.5ex\hbox{\ltsima} 10^2\; {\rm cm^{-3}}$, respectively).

The high density (or LTE, from the Local Thermal Equilibrium assumption which holds in these conditions) limit of the cooling rate per H$_2$~ molecule is given by Hollenbach \& McKee (1979):
\begin{eqnarray}
\Lambda_{\rm H_2,LTE}(T)={{9.5\times10^{-22}T_3^{3.76}} \over
 {1+0.12\,T_3^{2.1}}} e^{-\left({0.13}\over{T_3}\right)^3} +
 3\times10^{-24} e^{-{{0.51}\over{T_3}}} + \nonumber \\
 + 6.7\times10^{-19} e^{-{{5.86}\over{T_3}}} +
 1.6\times10^{-18} e^{-{{11.7}\over{T_3}}}\;
 {\rm erg\,s^{-1}}
\end{eqnarray}
where $T_3 \equiv T/(1000\;{\rm K})$. 
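This fit, together with the interpolation formula between the two density regimes, translates almost literally into code. The following Python fragment is only an illustrative sketch; the low-density limit (whose fit is given next) is passed in as a user-supplied function:
\begin{verbatim}
import numpy as np

def lambda_H2_LTE(T):
    """High-density (LTE) limit of the H2 cooling rate per molecule
    [erg s^-1]; Hollenbach & McKee (1979) fit quoted above, T in K."""
    T3 = T / 1.0e3
    rot = (9.5e-22 * T3**3.76 / (1.0 + 0.12 * T3**2.1)
           * np.exp(-(0.13 / T3)**3) + 3.0e-24 * np.exp(-0.51 / T3))
    vib = 6.7e-19 * np.exp(-5.86 / T3) + 1.6e-18 * np.exp(-11.7 / T3)
    return rot + vib

def lambda_H2(n_H, T, lambda_low_density):
    """Cooling rate per H2 molecule [erg s^-1], interpolated between the
    LTE and low-density limits as in the equation above; n_H is in cm^-3
    and lambda_low_density(T) must return erg s^-1 cm^3."""
    lte = lambda_H2_LTE(T)
    return lte / (1.0 + lte / (n_H * lambda_low_density(T)))
\end{verbatim}
With the GP98 low-density fit given below plugged in for the last argument, this should reproduce the qualitative behaviour of Fig. \ref{H2_cool}.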
In the Hollenbach \& McKee fit, the first row accounts for rotational cooling\index{cooling}, while the second row accounts for the first two vibrational terms.

For the low density limit, GP98 found that in the relevant temperature range ($10\;{\rm K}\leq T \leq 10^4\;{\rm K}$) the cooling rate $\Lambda_{\rm H_2,\rho\rightarrow0}$ is independent of density, and is well approximated by
\begin{eqnarray}
\log \Lambda_{\rm H_2,\rho\rightarrow0}(T) \simeq 
 -103.0 + 97.59 \log T - 48.05 (\log T)^2 + \nonumber \\
 10.80 (\log T)^3 - 0.9032 (\log T)^4
\end{eqnarray}
where $T$ and $\Lambda_{\rm H_2,\rho\rightarrow0}$ are expressed in ${\rm K}$ and ${\rm erg\,s^{-1}\,cm^3}$, respectively.

Note that even though neither $\Lambda_{\rm H_2,LTE}$ nor $\Lambda_{\rm H_2,\rho\rightarrow0}$ depends on density, $\Lambda_{\rm H_2}(\rho,T)$ is independent of $\rho$ only in the high density limit.

\subsubsection{HD\index{HD cooling} and LiH\index{LiH cooling} cooling rates}
The cooling rates of HD and of LiH are more complicated (see Flower {\it et al.~} 2000 for HD and Bougleux \& Galli 1997 for LiH), but in the low density limit (and in the temperature range $10\;{\rm K}\leq T \leq 1000\;{\rm K}$) it is possible to use the relatively simple expressions given by GP98.

For HD, the low density limit of the cooling rate per molecule, $\Lambda_{\rm HD,\rho\rightarrow0}$, is:
\begin{equation}
\Lambda_{\rm HD,\rho\rightarrow0}(T) \simeq 
2\gamma_{10}E_{10}e^{-{{E_{10}}\over{k_{\rm B}T}}} +
(5/3)\gamma_{21}E_{21}e^{-{{E_{21}}\over{k_{\rm B}T}}}
\end{equation}
where $E_{10}$ and $E_{21}$ are the energy gaps between HD levels 1 and 0 and levels 2 and 1, respectively; they are usually expressed as $E_{10}=k_{\rm B} T_{10}$ and $E_{21}=k_{\rm B} T_{21}$, with $T_{10}\simeq128\;{\rm K}$ and $T_{21}\simeq255\;{\rm K}$. The coefficients $\gamma_{10}$ and $\gamma_{21}$ are the approximate collisional de-excitation rates for the 1-0 and 2-1 transitions, and are given by
\begin{eqnarray}
\gamma_{10} \simeq &
4.4\times10^{-12}+3.6\times10^{-13}
 T^{0.77}\\
\gamma_{21} \simeq &
4.1\times10^{-12}+2.1\times10^{-13}
 T^{0.92}
\end{eqnarray}
where we use the numerical value of $T$ (in Kelvin) and the rates are expressed in ${\rm cm^3\,s^{-1}}$.

For LiH, instead, the low density limit of the cooling\index{cooling} rate per molecule, $\Lambda_{\rm LiH,\rho\rightarrow0}$, can be fitted by:
\begin{eqnarray}
\log_{10}(\Lambda_{\rm LiH,\rho\rightarrow0}) = &
c_0 + c_1\log_{10} T + c_2(\log_{10} T)^2 + \nonumber\\
& c_3(\log_{10} T)^3 + c_4(\log_{10} T)^4
\end{eqnarray}
where $c_0=-31.47,\ c_1=8.817,\ c_2=-4.144,\ c_3=0.8292$ and $c_4=-0.04996$, assuming that $T$ is expressed in Kelvin, and that $\Lambda_{\rm LiH,\rho\rightarrow0}$ is expressed in ${\rm erg\,cm^3\,s^{-1}}$.

\begin{figure}[t]
\epsfig{file=species_cooling.ps,width=10truecm,clip=}
\caption{Comparison of the low density ($n\lower.5ex\hbox{\ltsima} 10\;{\rm cm^{-3}}$) cooling rates per molecule of several molecular species, in particular H$_2$~, HD and LiH (from GP98). Note that at $T\sim100\;{\rm K}$ both HD and LiH molecules are more than $10^3$ times more efficient coolants than H$_2$~ molecules, but this difference is believed to be compensated by the much higher H$_2$~ abundance (see {\it e.g.} Bromm {\it et al.~} 2002). 
The plot also shows the cooling due to H-H$_2^+$ and $e^-$-H$_2^+$ collisions, but these contributions are never important because of the very low H$_2^+$ abundance.}
\label{species_cooling}
\end{figure}

\subsubsection{Cooling at high densities: Collision Induced Emission\index{cooling!collision induced emission}}

\begin{figure}[t]
\epsfig{file=cie_cooling.eps,width=9truecm,angle=270}
\caption{CIE cooling for different collisions and different molecular fractions (0.999, 0.5, 0.001). Top panels show the cooling rates per unit mass: the thick solid line is the total CIE cooling, while thin lines show the various components: H$_2$~-H$_2$~ (solid), H$_2$~-He (short dashed), H$_2$~-H (dot dashed) and H-He (long dashed); the dotted line shows the results of the approximate formula given in the text. In the bottom panels the ratios of the various components to H$_2$~-H$_2$~ CIE are shown. All quantities were calculated assuming $n=10^{14}\;{\rm cm^{-3}}$ and $X=0.75$ (from Ripamonti \& Abel 2004).}
\label{cie_cooling}
\end{figure}

During the formation of a protostar an important role is played by the so-called Collision-Induced Emission (CIE; very often known as Collision-Induced Absorption, or CIA), a process which requires very high densities ($n=\rho/m_{\rm H}\gsim10^{13}-10^{14}\;{\rm cm^{-3}}$) to become important (see Lenzuni {\it et al.~} 1991, Frommhold 1993, Ripamonti \& Abel 2004). Collision-Induced Emission occurs when two H$_2$~ molecules (or H$_2$~ and H, or H$_2$~ and He, or even H and He) collide: during the collision, the interacting pair briefly acts as a ``super-molecule'' with a non-zero electric dipole, and a much higher probability of emitting (CIE) or absorbing (CIA) a photon than an unperturbed H$_2$~ molecule (whose dipole moment vanishes). Because of the very short duration of the collisions ($\lsim10^{-12}\;{\rm s}$), this mechanism can compete with ``normal'' emission only in high density environments. Furthermore, because of the short duration of the interactions, collision-induced lines become very broad and merge into a continuum: the H$_2$~ CIE spectrum only shows broad peaks corresponding to vibrational bands, rather than a large number of narrow roto-vibrational lines. This is also important because self-absorption is much less relevant than for line emission, and in simulations of primordial proto-stars CIE\index{cooling!collision induced emission} cooling can be treated with the optically thin approximation up to $n\sim10^{16}\;{\rm cm^{-3}}$ (for H$_2$~ lines, the optically thin approximation breaks down at about $n\sim10^{10}\;{\rm cm^{-3}}$). 
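Anticipating the simple fitting formula quoted just below, the total CIE cooling rate of Ripamonti \& Abel (2004) can be evaluated with a few lines of Python; this is only a sketch, and the example density and temperature are illustrative values:
\begin{verbatim}
def cie_cooling_rate(rho, T, X=0.75, f_H2=1.0):
    """Approximate total CIE cooling rate per unit gas mass
    [erg g^-1 s^-1]: 0.072 rho T^4 X f_H2, with rho in g cm^-3 and T in K.
    The fit is quoted as valid for 400 K < T < 7000 K and f_H2 > 0.5."""
    if not (400.0 <= T <= 7000.0 and f_H2 >= 0.5):
        print("warning: outside the nominal validity range of the fit")
    return 0.072 * rho * T**4 * X * f_H2

m_H = 1.673e-24                            # proton mass in g
# Example: fully molecular gas with n ~ 1e14 cm^-3 at T = 2000 K
print(cie_cooling_rate(1.0e14 * m_H, 2000.0))
\end{verbatim}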
In figure \ref{cie_cooling} we show the CIE cooling rate for a gas with $n=10^{14}\;{\rm cm^{-3}}$, as a function of temperature and for different chemical compositions, at temperatures between $400$ and $7000\;{\rm K}$.
For H$_2$~ abundances $f_{\rm H_2}\equiv 2n_{\rm H_2}/(n_{\rm H^+}+n_{\rm H}+2n_{\rm H_2}) \lower.5ex\hbox{\gtsima} 0.5$ the total CIE\index{cooling!collision induced emission} cooling rate can be approximated by the simple expression (see Ripamonti \& Abel 2004)
\begin{equation}
L_{\rm CIE}(\rho,T,X,f_{\rm H_2}) \simeq 0.072 \rho T^4 X f_{\rm H_2}
\;{\rm erg\,g^{-1}\,s^{-1}}
\end{equation}
where the density is in $\rm g\,cm^{-3}$, the temperature is in K, and $X\simeq0.75$ is the hydrogen\index{Hydrogen} fraction (by mass).

\subsection{Chemistry\index{Chemistry}}

\begin{table}[b]
\caption{Reaction rates for some of the most important reactions in primordial gas, plus the main reactions involved in the formation of HD. In the formulae, $T_\gamma$ is the temperature of the radiation field (for our purposes, the temperature of the CMB\index{Cosmic Microwave Background} radiation) and temperatures need to be expressed in Kelvin; $j(\nu_{\rm LW})$ is the radiation flux (in ${\rm erg\,s^{-1}\,cm^{-2}}$) at the central frequency $\nu_{\rm LW}$ of the Lyman-Werner bands, $h\nu_{\rm LW}=12.87\;{\rm eV}$. The rates come from compilations given by Tegmark {\it et al.~} 1997 (reactions 1-7), Palla {\it et al.~} 1983 (reactions 8-13), Abel {\it et al.~} 1997 (reaction 14) and Bromm {\it et al.~} 2002 (reactions 15-19).}
\begin{tabular}{cccl}
\hline\hline
Reaction & & & Rate\\
\hline
${\rm H}^+ + e^-$ & $\rightarrow$ & ${\rm H} + h\nu$ &
$k_1 \simeq 1.88\times10^{-10} T^{-0.644} \;{\rm cm^3\,s^{-1}}$\\

${\rm H} + e^-$ & $\rightarrow$ & ${\rm H}^- + h\nu$ &
$k_2 \simeq 1.83\times10^{-18} T^{0.88} \;{\rm cm^3\,s^{-1}}$\\

${\rm H}^- + {\rm H}$ & $\rightarrow$ & ${\rm H}_2 + e^- $ &
$k_3 \simeq 1.3\times10^{-9} \;{\rm cm^3\,s^{-1}}$\\

${\rm H}^- + h\nu$ & $\rightarrow$ & ${\rm H} + e^-$ &
$k_4 \simeq 0.114\, T_\gamma^{2.13} e^{-8650/T_\gamma} \;{\rm s^{-1}}$\\

${\rm H}^+ + {\rm H}$ & $\rightarrow$ & ${\rm H}_2^+ + h\nu$ &
$k_5 \simeq 1.85\times10^{-23} T^{1.8} \;{\rm cm^3\,s^{-1}}$\\

${\rm H}_2^+ + {\rm H}$ & $\rightarrow$ & ${\rm H}_2 + {\rm H}^+$ &
$k_6 \simeq 6.4\times10^{-10} \;{\rm cm^3\,s^{-1}}$\\

${\rm H}_2^+ + h\nu$ & $\rightarrow$ & ${\rm H}^+ + {\rm H}$ &
$k_7 \simeq 6.36\times10^5 e^{-71600/T_\gamma} \;{\rm s^{-1}}$\\

${\rm H} + {\rm H} + {\rm H}$ & $\rightarrow$ & ${\rm H}_2 + {\rm H}$ &
$k_8 \simeq 5.5\times10^{-29} T^{-1} \;{\rm cm^6\,s^{-1}}$\\

${\rm H}_2 + {\rm H}$ & $\rightarrow$ & ${\rm H} + {\rm H} + {\rm H}$ &
$k_9 \simeq 6.5\times10^{-7} T^{-1/2} e^{-52000/T}\times$\\
& & & $\qquad\qquad\times(1-e^{-6000/T}) \;{\rm cm^3\,s^{-1}}$\\

${\rm H} + {\rm H} + {\rm H}_2$ & $\rightarrow$ & ${\rm H}_2 + {\rm H}_2$ &
$k_{10} \simeq k_8/8$\\

${\rm H}_2 + {\rm H}_2$ & $\rightarrow$ & ${\rm H} + {\rm H} + {\rm H}_2$ &
$k_{11} \simeq k_9/8$\\

${\rm H} + e^-$ & $\rightarrow$ & ${\rm H}^+ + e^- + e^-$ &
$k_{12} \simeq 5.8\times10^{-11} T^{1/2} e^{-158000/T} \;{\rm cm^3\,s^{-1}}$\\

${\rm H} + {\rm H}$ & $\rightarrow$ & ${\rm H}^+ + e^- + {\rm H}$ &
$k_{13} \simeq 
1.7\times10^{-4}\, k_{12}$\\

${\rm H}_2 + h\nu$ & $\rightarrow$ & ${\rm H} + {\rm H}$ &
$k_{14} \simeq 1.1\times10^8 j(\nu_{\rm LW}) \; {\rm s^{-1}}$\\

${\rm D}^+ + e^-$ & $\rightarrow$ & ${\rm D} + h\nu$ &
$k_{15} \simeq 8.4\times10^{-11} T^{-0.5} \times$\\
& & & $\quad\times({T\over{10^3}})^{-0.2}[1+({T\over{10^6}})^{0.7}]^{-1} \; {\rm cm^3\,s^{-1}}$\\

${\rm D} + {\rm H}^+$ & $\rightarrow$ & ${\rm D}^+ + {\rm H}$ &
$k_{16} \simeq 3.7\times10^{-10} T^{0.28} e^{-43/T} \; {\rm cm^3\,s^{-1}}$\\

${\rm D}^+ + {\rm H}$ & $\rightarrow$ & ${\rm D} + {\rm H}^+$ &
$k_{17} \simeq 3.7\times10^{-10} T^{0.28} \; {\rm cm^3\,s^{-1}}$\\

${\rm D}^+ + {\rm H}_2$ & $\rightarrow$ & ${\rm H}^+ + {\rm HD}$ &
$k_{18} \simeq 2.1\times10^{-9} \; {\rm cm^3\,s^{-1}}$\\

${\rm HD} + {\rm H}^+$ & $\rightarrow$ & ${\rm H}_2 + {\rm D}$ &
$k_{19} \simeq 1.0\times10^{-9} e^{-464/T} \; {\rm cm^3\,s^{-1}}$\\
\hline\hline
\end{tabular}
\label{reaction_rates}
\end{table}

Since molecules play such an important role in the formation of the first luminous objects, it is important to include a proper treatment of their abundance evolution. However, a full treatment should take into account $\sim 20$ different species and $\sim 100$ different reactions. For instance, GP98 give a chemical network of 87 reactions, which includes $e^-$, H, H$^+$, H$^-$, D, D$^+$, He, He$^+$, He$^{++}$, Li, Li$^+$, Li$^-$, H$_2$~\index{HH}, H$_2^+$, HD, HD$^+$, HeH$^+$, LiH, LiH$^+$, H$_3^+$ and H$_2$D$^+$; even their {\it minimal model} is too complicated to be described here, so we will just describe the most basic processes involved in H$_2$~\index{HH} formation. However, we note that the papers by Abel {\it et al.~} (1997)\footnote{The collisional rate coefficients given in the Abel {\it et al.~} (1997) paper can be readily obtained through a FORTRAN code available on the web page http://www.astro.psu.edu/users/tabel/PGas/LCA-CM.html ; also note that on Tom Abel's web site (http://www.tomabel.com/) it is possible to find a wealth of useful information about the primordial universe in general.}\ and by GP98 provide a much more accurate description of primordial chemistry\index{Chemistry}.

\subsubsection{Atomic Hydrogen\index{Hydrogen} and free electrons}
Apart from molecule formation (see below), the main reactions involving Hydrogen (and Helium, to which we can apply all the arguments below) are ionizations\index{Hydrogen!Ionization} and recombinations\index{Hydrogen!Recombination}.

Ionizations can be produced both by radiation (${\rm H} + h\nu \rightarrow {\rm H}^+ + e^-$) and by collisions, mainly with free electrons (${\rm H} + e^- \rightarrow {\rm H}^+ + 2e^-$) but also with other H atoms (${\rm H} + {\rm H} \rightarrow {\rm H}^+ + e^- + {\rm H}$). Photoionizations dominate over collisions as long as UV photons with energy above the H ionization\index{Hydrogen!Ionization} threshold are present (see {\frenchspacing\it e.g. 
} Osterbrock 1989), but this is not always the case before the formation of the first luminous objects, when only the CMB\index{Cosmic Microwave Background} radiation is present; even after some sources of radiation have appeared, it is likely that their influence will be mainly local, at least until reionization. However, in the primordial universe the electron temperature is low, and collisional ionizations are relatively rare.

Recombinations\index{Hydrogen!Recombination} (${\rm H}^+ + e^- \rightarrow {\rm H} + h\nu$) have much higher specific rates, since they do not have a high energy threshold. They probably dominate the evolution of free electrons, which can be relatively abundant as residuals from the recombination era ({\frenchspacing\it e.g. } Peebles 1993), and are important for H$_2$~\index{HH} formation (see below).

Simple approximations for the reaction rates of collisional ionizations and recombinations are given in Table \ref{reaction_rates}.

\subsubsection{H$_2$~\index{HH} formation and disruption}
At present, H$_2$~ is commonly believed to form mainly through reactions taking place on the surface of dust grains (but see {\frenchspacing\it e.g. } Cazaux \& Spaans 2004). Such a mechanism cannot work in the primordial universe, when the metal (and dust) abundance was negligible. The first two mechanisms leading to the formation of H$_2$~ in a primordial environment were described by McDowell (1961) and by Saslaw \& Zipoy (1967); soon after the publication of the latter paper, H$_2$~\index{HH} started to be included in theories of structure formation (such as the PD68 paper about globular cluster formation through H$_2$~\index{HH cooling} cooling). Both these mechanisms consist of two stages, involving either $e^-$ or H$^+$ as catalysts. 
The first (and usually most important) goes through the reactions
\begin{eqnarray}
{\rm H} + e^- & \rightarrow & {\rm H}^- + h\nu\\
{\rm H}^- + {\rm H} & \rightarrow & {\rm H}_2 + e^-
\end{eqnarray}
while the second proceeds as
\begin{eqnarray}
{\rm H}^+ + {\rm H} & \rightarrow & {\rm H}_2^+ + h\nu\\
{\rm H}_2^+ + {\rm H} & \rightarrow & {\rm H}_2 + {\rm H}^+.
\end{eqnarray}
In both cases, H$_2$~\index{HH} production can fail at the intermediate stage if a photodissociation occurs (${\rm H}^- + h\nu \rightarrow {\rm H} + e^-$, or ${\rm H}_2^+ + h\nu \rightarrow {\rm H}^+ + {\rm H}$). The rates of all these reactions are listed in Table \ref{reaction_rates}.

By combining these reaction rates, it is possible (see Tegmark {\it et al.~} 1997; hereafter T97) to obtain an approximate evolution for the ionization fraction $x \equiv n_{\rm H^+} / n$ and the H$_2$~ fraction $f_{\rm H_2} \equiv 2n_{\rm H_2}/n$ (here $n$ is the total density of protons, $n \simeq n_{\rm H} + n_{\rm H^+} + 2n_{\rm H_2}$):
\begin{eqnarray}
&& x(t) \simeq {x_0\over{1+x_0 n k_1 t}}\\
&& f_{\rm H_2}(t) \simeq f_0 + 2{k_m\over k_1} \ln(1 + x_0 n k_1 t)
\qquad{\rm with}\\
&& k_m = {{k_2 k_3}\over{k_3+k_4/[n(1-x)]}} + 
 {{k_5 k_6}\over{k_6+k_7/[n(1-x)]}}
\end{eqnarray}
where the various $k_i$ are the reaction rates given in Table \ref{reaction_rates}, and $x_0$ and $f_0$ are the initial fractions of ionized atoms and of H$_2$~\index{HH} molecules.

Another mechanism for H$_2$~ formation, which becomes important at (cosmologically) very high densities $n_{\rm H}\gsim10^8\;{\rm cm^{-3}}$, is provided by the so-called 3-body reactions described by Palla {\it et al.~} (1983). While the previous mechanisms are limited by the small abundance of free electrons (or of H$^+$), reactions such as
\begin{equation}
{\rm H} + {\rm H} + {\rm H} \rightarrow {\rm H}_2 + {\rm H}
\end{equation}
can rapidly convert all the hydrogen\index{Hydrogen} into molecular form, provided the density is high enough. For this reason, they are likely to play an important role during the formation of a primordial protostar.

Finally, H$_2$~\index{HH} can be dissociated through collisional processes such as reaction 9 in Table \ref{reaction_rates}, but probably the most important process is its photo-dissociation by photons in the Lyman-Werner bands (11.26-13.6 eV; but Abel {\it et al.~} 1997 found that the most important range is between 12.24 and 13.51 eV). These photons are below the H ionization\index{Hydrogen!Ionization} threshold, therefore they can diffuse to large distances from their source. So, any primordial source emitting a relevant number of UV photons ({\frenchspacing\it e.g. } a $\sim100M_{\odot}$ star) is likely to have a major feedback\index{feedback} effect, since it can strongly reduce the amount of H$_2$~\index{HH} in the halo where it formed (and also in neighbouring halos), inhibiting further star formation.

\subsubsection{Approximate predictions}

\begin{figure}[t]
\epsfig{file=tegmark_minh2.ps,width=9truecm,angle=270}
\caption{Comparison of the H$_2$~\index{HH} fraction needed for a halo to collapse with the H$_2$~ fraction which can be formed inside the halo in a Hubble time (from T97). This is shown as a function of halo virial temperature and for three different virialization redshifts ($z=100$: solid; $z=50$: short dashes; $z=25$: long dashes). 
The three dots mark the minimum H$_2$~\index{HH} abundance which is needed for collapse at the three considered redshifts, and it can be seen that they all lie at $f_{\rm H_2}\sim 5\times10^{-4}$.}
\label{tegmark_minh2}
\end{figure}

\begin{figure}[t]
\epsfig{file=tegmark_minmass.ps,width=9truecm}
\caption{Evolution with virialization redshift of the minimum mass\index{minimum mass} which is able to cool, collapse and possibly form stars. Only the halos whose ($z_{\rm vir},M$) fall outside the filled region are able to cool fast enough. The filled region is made of a red part, where CMB\index{Cosmic Microwave Background} radiation prevents the cooling, and a yellow part where the cooling is slow because of a dearth of H$_2$~ molecules. The two parallel dashed lines correspond to virial temperatures of $10^4\;{\rm K}$ (the highest one) and $10^3\;{\rm K}$ (the lowest one), while the almost vertical line in the middle of the plot corresponds to $3\sigma$ peaks in a SCDM ($\Omega_{\rm m}=1,\ \Omega_\Lambda=0$) cosmology.}
\label{tegmark_minmass}
\end{figure}

The above information can be used for making approximate predictions about the properties of the halos hosting the first luminous objects. Predictions of this kind were pioneered by Couchman \& Rees (1986) and especially by T97, whose example was followed and improved by several authors (see below).

The basic idea is to have a simplified model for the evolution of the H$_2$~\index{HH} fraction and the cooling rate inside spherical top-hat fluctuations, in order to check whether after virialization the baryons are able to cool (and keep collapsing), or whether they just ``settle down'' at about the virial radius. Such approximate models are fast to compute, and it is easy to explore the gas behaviour over a large range of virialization redshifts and fluctuation masses (or, equivalently, virial temperatures). In this way, for each virialization redshift it is possible to obtain the minimum molecular fraction which is needed for the collapse to proceed, and the minimum size of halos where such an abundance is achieved. Interestingly, the results show that the molecular fraction threshold separating collapsing and non-collapsing objects has an almost redshift-independent value of $f_{\rm H_2}\sim5\times10^{-4}$ (see fig. \ref{tegmark_minh2}). Instead, the minimum halo mass actually evolves with redshift (see fig. \ref{tegmark_minmass}).

Predictions about the ability of the baryons inside each kind of halo to keep collapsing after virialization can then be combined with Press-Schechter\index{Press-Schechter Formalism} predictions about the actual abundances of halos. For instance, in fig. \ref{tegmark_minmass}\ the solid black (almost vertical) line shows where the masses of $3\sigma$ fluctuations lie, as a function of redshift. 
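On the chemical side, the key numbers are easy to reproduce: the T97 approximations quoted in the previous section, together with the rates of Table \ref{reaction_rates}, give the molecular fraction reached after a given time. The following Python fragment is only a rough sketch: the post-virialization temperature, density, residual ionization and time-scale are assumed, illustrative values, and $k_m$ is frozen at its initial value:
\begin{verbatim}
import numpy as np

# Rates from Table 1 (T = gas temperature, Tg = CMB temperature, both in K)
def k1(T):      return 1.88e-10 * T**(-0.644)      # H+ + e- -> H + photon
def k2(T):      return 1.83e-18 * T**0.88          # H + e-  -> H- + photon
def k3(T):      return 1.3e-9                      # H- + H  -> H2 + e-
def k4(Tg):     return 0.114 * Tg**2.13 * np.exp(-8650.0 / Tg)
def k5(T):      return 1.85e-23 * T**1.8           # H+ + H  -> H2+ + photon
def k6(T):      return 6.4e-10                     # H2+ + H -> H2 + H+
def k7(Tg):     return 6.36e5 * np.exp(-71600.0 / Tg)

def f_H2_T97(t, n, T, Tg, x0=3.0e-4, f0=1.0e-6):
    """T97 approximate H2 fraction after a time t [s]; n in cm^-3."""
    km = (k2(T) * k3(T) / (k3(T) + k4(Tg) / (n * (1.0 - x0)))
          + k5(T) * k6(T) / (k6(T) + k7(Tg) / (n * (1.0 - x0))))
    return f0 + 2.0 * (km / k1(T)) * np.log(1.0 + x0 * n * k1(T) * t)

# Illustrative halo virializing at z ~ 20: T ~ 1000 K, n ~ 1 cm^-3,
# evolved for roughly a Hubble time (~ 5e15 s)
print(f_H2_T97(5.0e15, n=1.0, T=1000.0, Tg=2.726 * 21.0))
\end{verbatim}
For these particular (assumed) numbers, the result lies within a factor of a few of the $f_{\rm H_2}\sim5\times10^{-4}$ threshold quoted above.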
If we then neglect the rare fluctuations at more than $3\sigma$ from the average, fig. \ref{tegmark_minmass} tells us that the first luminous objects can start forming only at $z\lsim30$, in objects with a total mass $\lower.5ex\hbox{\gtsima} 2\times 10^6\;M_{\odot}$.

This result is subject to a number of uncertainties, both about the ``details'' of the model and about the processes it neglects; here are some of the more interesting developments:
\begin{enumerate}
\item{Abel {\it et al.~} (1998) found that the minimum mass\index{minimum mass} is strongly affected by the uncertainties in the adopted H$_2$~\index{HH cooling} cooling function, with differences that could reach a factor $\sim 10$ in the minimum mass\index{minimum mass} estimate;}
\item{Fuller \& Couchman (2000) used numerical simulations\index{Numerical Simulation} of single, high-$\sigma$ density peaks in order to improve the spherical collapse approximation;}
\item{Machacek {\it et al.~} (2001) and Kitayama {\it et al.~} (2001) investigated the influence of background radiation;}
\item{Yoshida {\it et al.~} (2003) used larger-scale numerical simulations\index{Numerical Simulation} and found that the merging history of a halo could also play a role, since frequent mergers heat the gas and prevent or delay the collapse of $\sim 30\%$ of the halos.}
\end{enumerate}

As a result of the improved modelling (and of a different set of cosmological parameters, as $\Lambda$CDM has replaced SCDM), in the most recent papers the value of the minimum halo mass for the formation of the first luminous objects is somewhat reduced, to the range 0.5--1 $\times 10^6\; M_{\odot}$, with a weak dependence on redshift and a stronger dependence on other parameters (background radiation, merging history etc.).

\section{Numerical cosmological hydrodynamics}

Numerical simulations\index{Numerical Simulation} are an important tool for a large range of astrophysical problems. This is especially true for the study of primordial stars, given the absence of direct observational data about these stars, and the relatively scarce indirect evidence we have.

\begin{figure}[p]
\epsfig{file=halo_3d.eps,width=10.5truecm}
\caption{Overview of the evolution leading to the formation of a primordial star. The top row shows the gas density, centered at the pre-galactic object within which the star is formed. The four projections are labelled with their redshifts. Pre-galactic objects form from very small density fluctuations and continuously merge to form larger objects. The middle and bottom rows show thin slices through the gas density and temperature at the final simulation stage. The four pairs of slices are labelled with the scale on which they were taken, starting from 6 (proper) kpc (the size of the simulated volume) and zooming in down to 0.06 pc (12,000 AU). In the left panels, the larger scale structures of filaments and sheets are seen. At their intersections, a pre-galactic object of $\sim 10^6\;M_{\odot}$ is formed. The temperature slice (second panel, bottom row) shows how the gas shock-heats as it falls into the pre-galactic object. After passing the accretion shock, the material forms hydrogen\index{Hydrogen} molecules and starts to cool. The cooling material accumulates at the centre of the object and forms the high-redshift analog to a molecular cloud (third panel from the right), which is dense and cold ($T\sim 200\;{\rm K}$). 
Deep within the molecular\ncloud, a core of $\\sim 100\\;M_{\\odot}$, a few hundred K warmer, is formed\n(right panel) within which a $\\sim 1\\;M_{\\odot}$ fully molecular object is\nformed (yellow region in the right panel of the middle row). (from\nABN02).}\n\\label{halo_3d}\n\\end{figure}\n\n\n\\subsection{Adaptive refinement codes (ENZO)}\nThe two problems which immediately emerge when setting up a simulation\nof primordial star formation are the dynamical range\nand the required accuracy in solving the hydrodynamical equations.\n\nWhen studying the formation of objects in a cosmological context, we\nneed both to simulate a large enough volume (with a box size of at least 100\ncomoving kpc), and to resolve objects of the size of a star\n($\\sim 10^{11}\\;{\\rm cm}$ in the case of the Sun), about 11 orders of\nmagnitude smaller.\n\nThis huge difference in the relevant scales of the problem is obviously\na problem. It can be attenuated by the use of Smoothed Particle\nHydrodynamics\\index{Numerical Simulations!Smoothed Particle Hydrodynamics} (SPH; see {\\frenchspacing\\it e.g. } Monaghan 1992), whose Lagrangian nature has\nsome benign effects, as the simulated particles are likely to\nconcentrate ({\\frenchspacing\\it i.e. }, provide resolution) in the regions where the maximum\nresolution is needed. However, even if this kind of method can actually\nbe employed (see Bromm {\\it et al.~} 1999, 2002), it has at least two important\ndrawbacks. First of all, the positive effects mentioned above cannot\nbridge in a completely satisfactory way the extreme dynamical range we\njust mentioned, since the mass resolution is normally fixed once and for\nall at the beginning of the simulation. Second, SPH\\index{Numerical Simulations!Smoothed Particle Hydrodynamics} is known to have\npoor shock resolution properties, which casts doubts on the results when\nthe hydrodynamics becomes important.\n\nThe best presently available solution for satisfying both requirements\nis the combination of an Eulerian approach (which is good for the\nhydrodynamical part) and an adaptive technique (which can extend the\ndynamical range). This is known as Adaptive Mesh Refinement\\index{Numerical Simulations!Adaptive Mesh Refinement} (AMR; see\n{\\frenchspacing\\it e.g. } Berger \\& Colella 1989; Norman 2004), and basically consists in\nhaving the simulation volume represented by a hierarchy of nested {\\it\ngrids} ({\\it meshes}) which are created according to resolution needs.\n\nIn particular, the simulations we are going to describe in the following\nparagraphs were made with the code ENZO\\footnote{The ENZO code can be\nretrieved at the web site http:\/\/cosmos.ucsd.edu\/enzo\/} (see O'Shea\n{\\it et al.~} 2004 and references therein for a full description). Briefly, ENZO\nincludes the treatment of gravitational physics through N-body\ntechniques, and the treatment of hydrodynamics through the piecewise\nparabolic method (PPM) of Woodward \\& Colella 1984 (as modified by Bryan\n{\\it et al.~} 1995 in order to adapt to cosmological simulations). ENZO can\noptionally include the treatment of gas cooling (in particular H$_2$~\\index{HH\ncooling} cooling, as described in the preceeding sections), of primordial\nnon-equilibrium chemistry\\index{Chemistry} (see the preceeding sections) and of UV\nbackground models ({\\frenchspacing\\it e.g. 
} the ones by Haardt \\& Madau 1996) and heuristic\nprescriptions (see Cen \\& Ostriker 1992) for star formation in\nlarger-scale simulations.\n\n\n\\subsection{Formation of the first star}\n\n\\begin{figure}[p]\n\\epsfig{file=profiles3d.eps,width=11.5truecm}\n\\caption{Radial mass-weighted averages of physical quantities at seven\ndifferent simulation times. (A) Particle number density in ${\\rm\ncm^{-3}}$ as a function of radius; the bottom line corresponds to\n$z=19$, and moving upwards the ``steps'' from one line to the next are\nof $9\\times10^6$ yr, $3\\times10^5$ yr, $3\\times10^4$ yr, 3000 yr, 1500\nyr, and 200 yr, respectively; the uppermost line shows the simulation\nfinal state, at $z=18.181164$. The two lines between 0.01 and 200 pc\ngive the DM mass density (in ${\\rm GeV\\,cm^{-3}}$) at $z=19$ and the\nfinal time, respectively. (B) Enclosed gas mass. (C) Mass fractions of\n$H$ and H$_2$~\\index{HH}. (D) Temperature. (E) Radial velocity of the baryons; the\nbottom line in (E) shows the negative value of the local speed of sound\nat the final time. In all panels the same output times correspond to the\nsame line styles. (from ABN02).}\n\\label{profiles3d}\n\\end{figure}\n\nThe use of AMR\\index{Numerical Simulations!Adaptive Mesh Refinement} codes for cosmological simulations of the formation of\nthe first objects in the universe was pioneered by Abel {\\it et al.~} (2000) and\nfurther refined by Abel {\\it et al.~} (2002; hereafter ABN02), where the dynamic\nrange covered by the simulations was larger than 5 orders of magnitude\nin scale length, {\\frenchspacing\\it i.e. } the wide range between an almost cosmological\n($\\lower.5ex\\hbox{\\gtsima} 100$ comoving kpc, {\\frenchspacing\\it i.e. } $\\lower.5ex\\hbox{\\gtsima} 5$ proper kpc) and an almost\nstellar scale ($\\lower.5ex\\hbox{\\ltsima} 1000\\;{\\rm AU} \\sim 10^{-2}\\;{\\rm pc}$). In the\nwhole process, the AMR\\index{Numerical Simulations!Adaptive Mesh Refinement} code kept introducing new (finer resolution)\nmeshes whenever the density exceeded some thresholds, or the Jeans\nlength\\index{Jeans Length} was resolved by less than 64 grid cells.\n\nThe simulations were started at a redshift\n$z=100$ from cosmologically consistent initial conditions\\footnote{\nSuch conditions were taken from an SCDM model with $\\Omega_\\Lambda=0$,\n$\\Omega_{\\rm m}=1$, $\\Omega_{\\rm b}=0.06$, $H_0=50\\;{\\rm\nkm\/s\\,Mpc^{-1}}$ which is quite different from the ``concordance''\n$\\Lambda$CDM model. However, the final results are believed to be only\nmarginally affected by differences in these cosmological\nparameters.}, and the code also followed the non-equilibrium chemistry\\index{Chemistry} of the\nprimordial gas, and included an optically thin treatment of radiative\nlosses from atomic and molecular lines, and from Compton cooling.\n\nThe main limitation of these simulations was the assumption that the\ncooling proceeds in the optically thin limit; such assumption breaks\ndown when the optical depth inside H$_2$~\\index{HH} lines reaches unity\n(corresponding to a Jeans length\\index{Jeans Length} $\\sim10^3\\;{\\rm AU}\\sim 0.01\\;{\\rm\npc}$). 
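(As a rough consistency check, with fiducial numbers of our own rather than values taken from the simulations: for molecular gas at $T\\sim10^3\\;{\\rm K}$ and $n\\sim10^{10}\\;{\\rm cm^{-3}}$, assuming $\\mu\\simeq1.2$, the Jeans length\\index{Jeans Length} is\n\\begin{equation}\n\\lambda_{\\rm J} \\simeq c_s \\left({\\pi\\over{G\\rho}}\\right)^{1\/2} \\sim 10^{16}\\;{\\rm cm} \\sim 10^3\\;{\\rm AU},\n\\end{equation}\nin line with the scale quoted above.)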
However, the simulations were halted only when the optical\ndepth at line centres became larger than 10 (Jeans length\\index{Jeans Length} of about\n$\\sim 10\\;{\\rm AU}$, or $\\sim 10^{-4}\\;{\\rm pc}$), since it was unclear\nwhether Doppler shifts could delay the transition to the optically\nthick regime.\n\n\\subsubsection{Summary of the evolution: radial profiles}\nIn Figures \\ref{halo_3d} and \\ref{profiles3d} we show the evolution of\ngas properties both in pictures and in plots of spherically averaged\nquantities, as presented in Abel {\\it et al.~} (2002). From these figures,\nand in particular from the local minima in the infall velocity\n(Fig. \\ref{profiles3d}e) it is possible to identify four\ncharacteristic mass scales:\n\\begin{enumerate}\n\\item{The mass scale of the pre-galactic halo, of $\\sim\n7\\times10^5M_{\\odot}$ in total, consistent with the approximate\npredictions discussed in the previous sections}\n\\item{The mass scale of a ``primordial molecular cloud'', $\\sim\n4000M_{\\odot}$: the molecular fraction in this region is actually very low\n($\\lsim10^{-3}$), but it is enough to reduce the gas temperature from\nthe virial value ($\\gsim10^3\\;{\\rm K}$) to $\\sim 200\\;{\\rm K}$}\n\\item{The mass scale of a ``fragment'', $\\sim 100M_{\\odot}$, which is\ndetermined by the change in the H$_2$~ cooling\\index{HH cooling} properties at a density\n$n\\sim10^4\\;{\\rm cm^{-3}}$ (for a complete discussion, see Bromm {\\it et al.~}\n1999, 2002), when Local Thermal Equilibrium is reached and the cooling\nrate dependence on density flattens to $\\Lambda\\propto n$ (from\n$\\Lambda\\propto n^2$); this is also the first mass scale where the gas\nmass exceeds the Bonnor-Ebert mass (Ebert 1955, Bonnor 1956) $M_{\\rm\nBE}(T,n,\\mu)\\simeq 61M_{\\odot} T^{3\/2} n^{-1\/2} \\mu^{-2}$ (with $T$ in\nKelvin and $n$ in ${\\rm cm^{-3}}$), indicating an unstable collapse}\n\\item{The mass scale of the ``molecular core'', $\\sim 1M_{\\odot}$, is\ndetermined by the onset of 3-body reactions at densities in the range\n$n\\sim10^8-10^{10}\\;{\\rm cm^{-3}}$, which leads to the complete\nconversion of hydrogen\\index{Hydrogen} into molecular form; at this stage, the infall\nflow becomes supersonic (which is the precondition for the appearance\nof an accretion shock and of a central hydrostatic flow); the increase\nin the H$_2$~\\index{HH} abundance due to the formation of this molecular core also\nleads to the transition to the optically thick regime.}\n\\end{enumerate}\n\n\\subsubsection{Angular momentum}\n\n\\begin{figure}[t]\n\\epsfig{file=angmom3d.eps,width=11.5truecm}\n\\caption{Radial mass weighted averages of angular momentum\\index{Angular Momentum}-related\nquantities at different times (the same as in\nfig. \\ref{profiles3d}). (A) specific angular momentum\\index{Angular Momentum} $L$. (B)\nRotational speed in units of Keplerian velocity $v_{\\rm\nKep}\\equiv(GM_r\/r)^{1\/2}$. (C) Rotational speed ($L\/r$). (D)\nRotational speed in units of the sound speed $c_s$ (from ABN02).}\n\\label{angmom3d}\n\\end{figure}\n\nIn Fig. \\ref{angmom3d} we show the evolution of the radial distribution\nof average specific angular momentum\\index{Angular Momentum} (and related quantities). It is\nremarkable that in the ABN02 simulations rotational support does not halt\nthe collapse at any stage, even if this could be a natural expectation.\n\nThere are two reasons for this (apparently odd) fact:\n\\begin{enumerate}\n\\item{As can be seen in panel A of Fig. 
\\ref{angmom3d}, the collapse\nstarts with the central gas having much less specific angular momentum\\index{Angular Momentum}\nthan the average (this is typical of halos produced by gravitational\ncollapse; see {\\frenchspacing\\it e.g. } Quinn \\& Zurek 1988). That is, the gas in the central\nregions starts with little angular momentum\\index{Angular Momentum} to lose in the first place.}\n\\item{Second, some form of angular momentum\\index{Angular Momentum} transport is clearly active,\nas is demonstrated by the decrease in the central specific angular\nmomentum. Turbulence is likely to be the explanation: at any radius,\nthere will be both low and high angular momentum\\index{Angular Momentum} material, and\nredistribution will happen because of pressure forces or shock waves:\nlower angular momentum\\index{Angular Momentum} material will selectively sink inwards,\ndisplacing higher angular momentum\\index{Angular Momentum} gas. It is notable that this kind of\ntransport will be suppressed in situations in which the collapse occurs\non the dynamical time scale, rather than the longer cooling time scale:\nthis is the likely reason why this kind of mechanism has not been\nobserved in simulations of present day star formation (see {\\frenchspacing\\it e.g. }. Burkert \\&\nBodenheimer 1993).}\n\\end{enumerate}\n\n\n\\subsection{SPH\\index{Numerical Simulations!Smoothed Particle Hydrodynamics} results}\n\n\nBromm {\\it et al.~} (1999, 2002) have performed simulations of primordial\nstar formation using an SPH\\index{Numerical Simulations!Smoothed Particle Hydrodynamics} code which included essentially the same\nphysical processes as the simulations we just described. Apart from the\nnumerical technique, the main differences were that they included\ndeuterium in some of their models, and that their initial conditions\nwere not fully cosmological ({\\frenchspacing\\it e.g. }, since they choose to study isolated\nhalos, the angular momenta are assigned as initial conditions, rather\nthan generated by tidal interactions with neighbouring halo).\n\nIn Fig. \\ref{sphresults3d} and \\ref{sphresultsgas} we show the results\nof one of their simulations, which give results in essential agreement\nwith those we have previously discussed.\n\nAn interesting extra-result of this kind of simulation is that the\nauthors are able to assess the mass evolution of the various gaseous\nclumps in the hypothesis that feedback\\index{feedback} is unimportant (which could be the\ncase if the clumps directly form intermediate mass \\index{Black Hole}black holes without\nemitting much radiation, or if fragmentation\\index{fragmentation} to a quite small stellar\nmass scale happens). They find that a significant fraction ($\\sim 0.5$)\nof the halo gas should end up inside one of the gas clumps, although it\nis not clear at all whether this gas will form stars (or some other kind\nof object). Furthermore, they find that clumps are likely to increase\ntheir mass on a timescale of about $10^7\\;{\\rm yr}$ (roughly\ncorresponding to the initial time scale of the simulated halo), both\nbecause of gas accretion and of mergers, and they could easily reach\nmasses $\\lower.5ex\\hbox{\\gtsima} 10^4\\;M_{\\odot}$. 
Obviously, this result is heavily\ndependent on the rather unrealistic assumption of no feedback\\index{feedback}.\n\n\\begin{figure}[t]\n\\epsfig{file=sphresults3d.eps,width=11.5truecm}\n\\caption{Typical result of the SPH\\index{Numerical Simulations!Smoothed Particle Hydrodynamics} simulations by Bromm {\\it et al.~} 2002. The\nfigure shows the morphology of a simulated $2\\times10^6\\;M_{\\odot}$ halo\nvirializing at $z\\sim30$ just after the formation of the first clump of\nmass $1400\\;M_{\\odot}$ (which is likely to produce stars). The two top row\npanels show the distribution of the dark matter\\index{Dark Matter} (which is undergoing\nviolent relaxation). The two bottom panels show the distribution of gas,\nwhich has developed a lumpy morphology and settled at the centre of the\npotential well.}\n\\label{sphresults3d}\n\\end{figure}\n\n\\begin{figure}[t]\n\\epsfig{file=sphresultsgas.eps,width=11.5truecm}\n\\caption{Gas properties in the SPH\\index{Numerical Simulations!Smoothed Particle Hydrodynamics} simulations by Bromm {\\it et al.~} 2002. The\nfigure shows the properties of the particles shown in\nFig. \\ref{sphresults3d}. Panel (a) shows the free electron abundance\n$x_e$ as a fraction of the H number density. Panel (b) shows the H$_2$~\\index{HH}\nabundance $f_{\\rm H_2}$ (note that even the highest density particles\nhave $n_{\\rm H}\\lower.5ex\\hbox{\\ltsima} 10^8\\;{\\rm cm^{-3}}$, so 3-body reactions are\nunimportant and $f_{\\rm H_2}\\lsim10^{-3}$). Panel (c) shows the gas\ntemperature (low density gas gets to high temperatures because H$_2$~\ncooling\\index{HH cooling}\nis inefficient). Panel (d) shows the value of the Jeans mass\\index{Jeans mass} which can\nbe obtained by using each particle temperature and density. All the\npanels have the hydrogen\\index{Hydrogen} number density $n_{\\rm H}$ on the X axis.}\n\\label{sphresultsgas}\n\\end{figure}\n\n\n\n\\section{Protostar formation and accretion}\n\n\\begin{figure}[p]\n\\epsfig{file=protostellar_collapse.ps,width=11truecm}\n\\caption{Proto-stellar collapse\\index{protostellar collapse}, formation of a\nhydrostatic core\\index{protostellar collapse!hydrostatic core} and\nstart of the proto-stellar accretion\\index{protostellar collapse!accretion} phase as can be found with 1-D\nsimulations. The four panels show the profiles of density (top left; a),\ninfall velocity (top right; b), temperature (bottom left; c) and H$_2$~\\index{HH}\nabundance (bottom right; d) as a function of enclosed mass at 10\ndifferent evolutionary stages (0=initial conditions; 9=final stage of\nthe computation). The most relevant phases are the rapid formation of\nH$_2$~ (2-3) and the formation of a shock on the surface of the hydrostatic\ncore (5-6-7), followed by the onset of accretion. Also note the almost\nperfect power-law behaviour of the density profile before core\nformation. (from Ripamonti {\\it et al.~} 2002).}\n\\label{protostellarcollapse}\n\\end{figure}\n\n\\begin{figure}[t]\n\\epsfig{file=kelvinhelmoltz.eps,width=11truecm}\n\\caption{Comparison of the accretion\\index{protostellar collapse!accretion} and the Kelvin-Helmoltz time scales\nfor a primordial protostar, as a function of protostellar mass. 
The\nKelvin-Helmoltz contraction time (obtained using the ZAMS luminosity as\ngiven by Schaerer 2002) is shown as the solid black line, while\nthe dashed line and the solid line with circles show the time which is\nneeded to accrete each mass of gas; they are based on the results of\nABN02 and differ slightly in the way in which they were obtained. The\ndotted lines mark three constant accretion rates of $10^{-2}$ (bottom line),\n$5\\times10^{-3}$ (middle line) and $10^{-3}\\;{\\rm M_{\\odot}\/yr}$ (top line).}\n\\label{kelvinhelmoltz}\n\\end{figure}\n\n\\begin{figure}[t]\n\\epsfig{file=omukaipalla_accretion.eps,width=11truecm}\n\\caption{Evolution of the protostellar radius as a function of\nprotostellar mass in the models of Omukai \\& Palla (2003). The models\ndiffer in the assumed accretion rates. The dotted curves show the\nresults of models where $\\dot{M}_{\\rm core}$ is assumed to be constant\nat values of $1.1\\times 10^{-3}\\;{\\rm M_{\\odot}\/yr}$ (dotted curve starting\nat $R_{\\rm core}\\simeq 30\\;R_\\odot$), $2.2\\times 10^{-3}\\;{\\rm\nM_{\\odot}\/yr}$ (starting at $R_{\\rm core}\\simeq 40\\;R_\\odot$), $4.4\\times\n10^{-3}\\;{\\rm M_{\\odot}\/yr}$ (``fiducial'' model; starting at $R_{\\rm\ncore}\\simeq 50\\;R_\\odot$) and $8.8\\times 10^{-3}\\;{\\rm M_{\\odot}\/yr}$\n(starting at $R_{\\rm core}\\simeq 65\\;R_\\odot$). The solid thick curve\nshows the evolution of a model in which the accretion rate was obtained\nfrom the extrapolation of the ABN02 data.}\n\\label{omukaipalla_accretion}\n\\end{figure}\n\nFull three-dimensional simulations\\index{Numerical Simulations!3D} (such as the ones of ABN02)\nare not able to reach the stage when a star is really formed. They are\nusually stopped at densities ($n\\sim10^{10}-10^{\\rm 11}\\;{\\rm cm^{-3}}$)\nwhich are much lower than typical stellar densities ($\\rho\\sim1\\; {\\rm\ng\\,cm^{-3}}$, $n\\sim10^{24}\\; {\\rm cm^{-3}}$). In fact, at low\ndensities the gas is optically thin at all frequencies, and it is not\nnecessary to include radiative transfer in order to estimate the gas\ncooling. Instead, at densities $n\\lower.5ex\\hbox{\\gtsima} 10^{10}\\; {\\rm cm^{-3}}$ some of\nthe dominant H$_2$~\\index{HH} roto-vibrational lines become optically thick and\nrequire the treatment of radiative transfer.\n\nFor this kind of problem, this is a prohibitive computational burden,\nand at present the actual formation of a protostar can not be fully\ninvestigated through self-consistent 3-D simulations\\index{Numerical\nSimulations!3D}. In order to\nproceed further, it is necessary to introduce some kind of\nsimplification in the problem.\n\n\n\\subsection{Analytical results}\nHistorically, there were several studies based on analytical arguments\n({\\frenchspacing\\it i.e. } stability analysis\\index{fragmentation!Stability Analysis}) or single-zone models, leading to different\nconclusions about the properties of the final object.\n\nAn early example\nis the paper by Yoneyama (1972), in which it was argued that\nfragmentation\\index{fragmentation!Opacity Limit} takes place until the opacity limit is reached. 
However,\nYoneyama looked at the opacity limit of the entire ``cloud'' (roughly\ncorresponding to one of the $10^5-10^6\\;M_{\\odot}$ mini-halos we consider at\npresent; originally, this was the mass scale proposed by PD68)\nrather than that of the putative fragments, and arrived at the conclusion that\nfragmentation stopped for masses $\\lower.5ex\\hbox{\\ltsima} 60M_{\\odot}$.\n\nMore recently, Palla {\\it et al.~} 1983 pointed to the increase of the cooling rate at\n$n\\lower.5ex\\hbox{\\gtsima} 10^8\\; {\\rm cm^{-3}}$ (due to fast H$_2$~\\index{HH} formation through\n3-body reactions) as a possible trigger for instability, leading to\nfragmentation\\index{fragmentation} on mass scales of $\\sim 0.1\\; M_{\\odot}$; such an instability\nhas actually been observed in the simulations of Abel {\\it et al.~} (2003),\nbut it does not lead to fragmentation because its growth is too slow\n(see also Sabano \\& Yoshii 1977, Silk 1983, Omukai \\& Yoshii 2003 and\nRipamonti \\& Abel 2004 for analytical fragmentation\\index{fragmentation} criteria and their\napplication to this case).\n\n\\subsection{Mono-dimensional models}\n\n\\subsubsection{The formation of the protostellar core}\nA less simplified approach relies on 1D studies\\index{Numerical Simulations!1D}, such as those of Omukai\n\\& Nishi (1998) and Ripamonti {\\it et al.~} (2002): in such studies the heavy\nprice of assuming that the collapsing object always remains spherical\n(which makes it impossible to investigate the effects of angular momentum\\index{Angular Momentum},\nand prevents fragmentation\\index{fragmentation}) is compensated by the ability to properly\ntreat all the other physical processes which are believed to be\nimportant: first of all, the detailed radiative transfer of radiation\nemitted in the numerous H$_2$~\\index{HH} resonance lines, then the transition to\ncontinuum cooling (at first from molecular Collision-Induced Emission,\nand later from atomic processes) and finally the non-ideal equation of\nstate which becomes necessary at high densities, when the gas becomes\nincreasingly hot and largely ionized.\n\nSuch studies find that the collapse initially proceeds in a\nself-similar\\index{protostellar collapse!self-similar phase} fashion (in good agreement with the solution found by Larson\n1969 and Penston 1969); at the start of this phase (stages 1 and 2 in\nfig. \\ref{protostellarcollapse}, to be compared with the profiles at\ncorresponding densities in fig. \\ref{profiles3d}) their results can be\ncompared with those of 3-D simulations\\index{Numerical Simulations!3D}, and the agreement is good\ndespite the difference in the assumed initial conditions: this is likely\ndue to the self-similar properties. At later stages the comparison is\nnot as good, but this is presumably due to the differences in the\ncooling rate (the 3D models at $n\\gsim10^{10}\\;{\\rm cm^{-3}}$ only include\nthe optically thin cooling rate).\n\nIt is noticeable that the transition to optically thick cooling does not\nstop the self-similar phase, contrary to theoretical expectations (see\n{\\frenchspacing\\it e.g. } Rees 1976). 
The reason is that the reduction in the cooling rate is\nsmooth, and is further delayed by the onset of CIE\\index{cooling!collision induced emission} cooling and by the\nstart of H$_2$~\\index{HH} dissociation, which acts as a heat sink (each H$_2$~\ndissociation absorbs $4.48\\;{\\rm eV}$ of thermal energy) and provides an\nalternative ``disposal'' mechanism for the excess thermal energy.\n\nThe self-similar phase\\index{protostellar collapse!self-similar phase} proceeds until the central density is $n\\lower.5ex\\hbox{\\gtsima}\n10^{21}\\;{\\rm cm^{-3}}$ (corresponding to $\\sim10^{-3}\\;{\\rm g\\, cm^{-3}}$), when a small\n($M_{\\rm core,0}\\sim 3\\times 10^{-3}\\; M_{\\odot}$) hydrostatic\ncore\\index{protostellar collapse!hydrostatic core}\neventually forms. Such a core starts accreting the surrounding\ngas at a very high rate (${\\dot{M}}_{\\rm core} \\sim 0.01-0.1\\; M_{\\odot}\\,\n{\\rm yr^{-1}}$), comparable to the theoretical predictions of Stahler\n{\\it et al.~} 1980,\n\\begin{equation}\n\\dot{M}_{\\rm core} \\sim c_s^3\/G \\simeq\n4 \\times 10^{-3} \\left({T\\over{1000\\;{\\rm K}}}\\right)^{3\/2}\\;\n\\left({\\mu\\over{1.27}}\\right)^{-3\/2}\\; M_{\\odot}\\,{\\rm yr^{-1}}\n\\end{equation}\nwhere $c_s=[k_{\\rm B}T\/(\\mu m_{\\rm H})]^{1\/2}$ is the isothermal sound\nspeed of a gas with mean molecular weight $\\mu$.\n\nAlthough the Omukai \\& Nishi (1998) and the Ripamonti {\\it et al.~}\n(2002) studies predict that such a high accretion rate will decline\n(approximately as $\\dot{M}_{\\rm core}\\propto t^{-0.3}$), it is clear\nthat, if the accretion proceeds unimpeded, it could lead to the\nformation of a star with a mass comparable to the whole proto-stellar\ncloud where the proto-star is born, that is, about $100M_{\\odot}$; such a\nprocess would take a quite short time of about $10^4-10^5\\;{\\rm\nyr}$.\n\nThis scenario could be deeply affected by feedback\\index{feedback} effects: even\nwithout considering the radiation coming from the interior of the\nproto-star, the energy released by the shocked material accreting onto\nthe surface can reach very high values, comparable with the Eddington\nluminosity, and is likely to affect the accretion flow.\n\nFigure \\ref{kelvinhelmoltz} compares the accretion\ntime scale (which can be extrapolated from the data at the end of the\nABN02 simulations) to the Kelvin-Helmoltz timescale,\n\\begin{equation}\nt_{\\rm KH} = {{GM^2}\\over{RL}}\n\\end{equation}\nwhere $L$ is the luminosity of the protostar. 
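(As a hedged order-of-magnitude aside, using fiducial numbers of our own rather than the actual ABN02 data: for a roughly constant accretion rate, the time needed to accrete a mass $M$ is simply\n\\begin{equation}\nt_{\\rm acc} \\simeq {M\\over{\\dot{M}_{\\rm core}}} \\simeq\n2.5\\times10^4 \\left({M\\over{100\\;M_{\\odot}}}\\right)\n\\left({\\dot{M}_{\\rm core}\\over{4\\times10^{-3}\\;M_{\\odot}\\,{\\rm yr^{-1}}}}\\right)^{-1}\\;{\\rm yr};\n\\end{equation}\nrelations of this kind lie behind the constant-rate dotted lines of Fig. \\ref{kelvinhelmoltz}, and the value is consistent with the $10^4-10^5\\;{\\rm yr}$ quoted above.)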
This plot tells us that\nthe accretion timescale is so fast that the stellar interior has very\nlittle room for a ``re-adjustment'' (which could possibly stop the\naccretion) before reaching a mass of $\\sim10-100M_{\\odot}$.\n\nHowever, it can be argued that such a readjustment is not necessary,\nsince in the first stages most of the protostellar luminosity comes from\nthe accretion process itself: \n\\begin{eqnarray}\nL_{\\rm acc} & \\simeq & {{GM_{\\rm core}\\dot{M}_{\\rm core}}\\over R_{\\rm core}}\n\\nonumber \\\\\n& \\simeq &\n8.5\\times10^{37}\n\\left({M_{\\rm core}\\over M_{\\odot}}\\right)\n\\left({\\dot{M}_{\\rm core}\\over{0.01\\; M_{\\odot}\/{\\rm yr}}}\\right)\n\\left({R_{\\rm core}\\over{10^{12}\\;{\\rm cm}}}\\right)^{-1}\n\\;{\\rm erg\\,s^{-1}}\n\\end{eqnarray}\nwhere we have inserted realistic values for $M_{\\rm core}$, $\\dot{M}_{\\rm\ncore}$ and $R_{\\rm core}$.\nThis luminosity is quite close to the Eddington luminosity\n($L_{\\rm Edd}\\simeq 1.3\\times10^{38}\\,(M\/M_{\\odot})\\;{\\rm erg\\,s^{-1}}$), and\nradiation pressure could have a major effect.\n\nRipamonti {\\it et al.~} (2002) found that this luminosity is not\nable to stop the accretion, but they do not properly trace the internal\nstructure of the core, so their results cannot be trusted except in the\nvery initial stages of accretion (say, when $M_{\\rm core}\\lsim0.1M_{\\odot}$).\n\nA better-suited approach was used in studies by Stahler {\\it et al.~} (1986)\nand, more recently, by Omukai \\& Palla (2001, 2003), who assumed that\nthe accretion can be described as a sequence of steady-state accretion\nflows onto a growing core. The core is modelled hydrostatically, as a\nnormal stellar interior (including the treatment of nuclear burning),\nwhile the accreting envelope is modelled through the steady state\nassumption, in conjunction with the condition that outside the\n``photosphere'' (the region where the optical depth for a photon to\nescape the cloud is $\\lower.5ex\\hbox{\\gtsima} 1$) the accreting material is in\nfree-fall. As shown in Fig. \\ref{omukaipalla_accretion}, Omukai \\& Palla\n(2003) find that even if the feedback\\index{feedback} luminosity deeply affects the\nstructure of the accreting envelope, it is never able to stop the\naccretion before the proto-stellar mass reaches at least $60-100\\;M_{\\odot}$;\nafter that, the final mass of the protostar depends on the\nassumed mass accretion rate $\\dot{M}_{\\rm core}$: for $\\dot{M}_{\\rm\ncore}\\lower.5ex\\hbox{\\ltsima} 4\\times 10^{-3}\\;M_{\\odot}\\,{\\rm yr^{-1}}$, the accretion can\nproceed unimpeded until the gas is completely depleted (or the star\nexplodes as a supernova); otherwise, the radiation pressure is finally\nable to reverse the accretion; with high accretion rates this happens\nsooner, and the final stellar mass is relatively low, while for\naccretion rates only slightly above the critical value of $\\sim 4\\times\n10^{-3}\\;M_{\\odot}\\,{\\rm yr^{-1}}$ the stellar mass can reach about\n$300\\;M_{\\odot}$.\n\nIf such predictions are correct, the primordial Initial Mass Function\\index{Initial Mass Function}\nis likely to be much different from the present one, reflecting mainly\nthe mass spectrum of the gas fragments from which the stars originate,\nand a mass of $\\sim 100\\;M_{\\odot}$ could be typical for a primordial\nstar. However, we note that these important results could change with\nbetter modeling. 
For example, deep modifications of the envelope\nstructure are quite possible if a frequency-dependent opacity (rather than the\nmean opacity used in the cited studies) is included.\n\n\n\n\\section{Discussion}\nThe previous sections show that, although not certain at all (because of\nthe large uncertainty about feedback\\index{feedback} effects), numerical simulations\\index{numerical simulation} tend\nto favour the hypothesis that primordial stars had a larger typical mass\nthan present-day stars.\n\nIf this is true, it could indeed solve some observational puzzles, such\nas why we have never observed a single zero-metallicity star (answer:\nthey have exploded as supernovae and\/or transformed into compact objects\nat a very high redshift), and perhaps help explain the relatively high\nmetallicities ($Z\\gsim10^{-4}Z_\\odot$ even in low column density\nsystems) measured in the Lyman $\\alpha$ forest (see {\\frenchspacing\\it e.g. } Cowie \\&\nSongaila 1998), or the high ($\\sim$ solar) metallicities we observe in\nthe spectra of some quasars already at redshift 6 (Fan {\\it et al.~} 2000, 2001,\n2003, 2004; Walter {\\it et al.~} 2003).\n\nHowever, a top-heavy primordial IMF also runs into a series of\nproblems, which we will discuss in the remainder of this section.\n\n\n\\subsection{UV Radiation feedback\\index{feedback}}\n\n\\begin{figure}[t]\n\\epsfig{file=sawtooth.eps,width=11truecm}\n\\caption{Average flux (in units of ${\\rm erg\\, cm^{-2}\\, s^{-1}\\,\nsr^{-1}\\, Hz^{-1}}$) at an observation redshift $z_{\\rm obs}=25$ coming\nfrom sources turning on at $z=35$.\nThe top panel shows the effects of absorption by neutral hydrogen\\index{Hydrogen} and\nhelium, strongly reducing the flux between $\\sim 13.6\\;{\\rm eV}$ and\n$\\sim 1\\;{\\rm keV}$ (the solid and dashed lines show the absorbed and\nunabsorbed flux, respectively).\nThe bottom panel shows the same quantities in a much smaller energy\nrange, in order to illustrate the sawtooth modulation due to line\nabsorption below 13.6 eV. (from Haiman, Rees \\& Loeb 1997).}\n\\label{sawtooth}\n\\end{figure}\n\n\\begin{figure}[t]\n\\epsfig{file=primordialhii.eps,width=11truecm}\n\\caption{Dynamical evolution of the HII region produced by a\n$200\\,M_{\\odot}$ star. Panels refer to ionization fraction profiles (top\nleft), temperature distributions (top right), densities (bottom left)\nand velocities (bottom right). The dashed line in the density panel is\nfor the Str\\\"omgren density (the density required to form a Str\\\"omgren\nsphere and therefore initially bind the ionization front) at a given\nradius. Profiles are given at times (from top to bottom in the density\npanel; from left to right in the others) 0 (density panel), 63 kyr (all\npanels), 82 kyr (ionization and temperature panels), 95 kyr (ionization\npanel), 126.9 kyr (all panels), 317 kyr (all panels), 1.1 Myr\n(temperature, density and velocity panels) and 2.2 Myr (all\npanels), which is the approximate main sequence lifetime of a\n$200\\;M_{\\odot}$ star. (From Whalen, Abel \\& Norman 2004).}\n\\label{primordialhii}\n\\end{figure}\n\nFirst of all, if a moderately massive star (even $M\\lower.5ex\\hbox{\\gtsima} 20-30M_{\\odot}$ is\nlikely to be enough) actually forms in the centre of a halo in the way\nshown by ABN02, it will emit copious amounts of UV radiation, which will\nproduce important feedback\\index{feedback} effects. 
Indeed, the scarcity of metals in\nstellar atmospheres (see Schaerer 2002, and also Tumlinson \\& Shull\n2000, Bromm, Kudritzki \\& Loeb 2001) results in UV fluxes which are\nsignificantly larger than for stars of the same mass but with\n``normal'' metallicity. Furthermore, this same scarcity of metals is\nlikely to result in a negligibly small density of dust particles,\nfurther enhancing the UV fluxes at large distances from the sources when\ncompared to the present-day situation.\n\nMassive primordial objects are also likely to end up as \\index{Black Hole}black holes\n(either directly or after having formed a star\\footnote{Here we think of\na star as an object where quasi-hydrostatic equilibrium is reached\nbecause of stable nuclear burning; according to this definition, a black\nhole can be formed without passing through a truly stellar phase,\neven if it is very likely to emit significant amounts of radiation in\nthe process.}), which could be the seeds of present-day super-massive\n\\index{Black Hole}black holes (see {\\frenchspacing\\it e.g. } Volonteri, Haardt \\& Madau 2003). If accretion can\ntake place onto these \\index{Black Hole}black holes (also known as {\\it mini-quasars}),\nthey are likely to emit an important amount of radiation in the UV and\nthe X-rays (this could also be important in explaining the WMAP result\nabout the Thomson scattering optical depth; see {\\frenchspacing\\it e.g. } Madau {\\it et al.~} 2003).\n\nUV photons can have a series of different effects. First of all,\nwe already mentioned in the section about chemistry\\index{chemistry} that Lyman-Werner\nphotons ($11.26\\;{\\rm eV}\\leq h\\nu \\leq 13.6\\;{\\rm eV}$) can dissociate\nH$_2$~\\index{HH} molecules, preventing further star formation. Such photons are\neffective even at large distances from their sources, and even in a\nneutral universe, since their energy is below the H ionization\\index{Hydrogen!ionization}\nthreshold; the only obstacles they find on their way are some of the\nLyman transitions of H, which can scatter these photons or remove them\nfrom the band; this results in a ``sawtooth'' modulation (see\nfig. \\ref{sawtooth}) of the spectrum.\nHaiman, Rees \\& Loeb (1997) argue that this negative feedback\\index{feedback} could\nconceivably bring primordial star formation to an early stop; however,\nother authors found that this is not\nnecessarily the case (Ciardi {\\it et al.~} 2000; Omukai 2001; Glover \\& Brand\n2001), or that the feedback\\index{feedback} effect on the\nH$_2$~\\index{HH} abundance can be moderated by the effects of X-ray photons coming\nfrom mini-quasars (Haiman, Abel \\& Rees 2000; Glover \\& Brand 2003).\n \nA second obvious effect of UV photons is to ionize the interstellar\nand the intergalactic medium; it is unclear whether these kinds of\nobjects are important in the reionization history of the universe, but\nthey will definitely form HII regions in their immediate\nneighbourhoods. In fig. \\ref{primordialhii} we show the results of\nWhalen, Abel \\& Norman 2004 about the evolution of an HII region\nproduced by a $200M_{\\odot}$ star inside a halo with the same density\nprofile as found in the ABN02 paper: in the beginning the ionization\nfront is D-type (that is, it is ``trapped'' because of the density) and\nexpands because a shock forms in front of it; later (after about 70 kyr\nfrom the start of the UV emission) the ionization front reaches regions\nof lower densities and becomes R-type (radiation driven), expanding much\nfaster than the shock. 
However, the shock keeps moving at speeds of the\norder of $30\\;{\\rm km\\,s^{-1}}$ even after the source of ionizing\nphotons is turned off. Such a speed is much larger than the rotational\nvelocities of the mini-halos where primordial star formation is supposed\nto be taking place ($\\lower.5ex\\hbox{\\ltsima} 10\\;{\\rm km\\,s^{-1}}$), so these shocks are\nlikely to expel a very large fraction of the original gas content of the\nmini-halo. UV emission could\neven lead to the {\\it photo-evaporation} of neighbouring\nmini-halos, similar to what Barkana \\& Loeb 1999 (see also Shapiro,\nIliev \\& Raga 2004) found in the slightly different context of\ncosmological reionization.\n\nSince the dynamical timescale of a mini-halo is $\\lower.5ex\\hbox{\\gtsima} 10^7\\;{\\rm yr}$\nand is longer than the timescale for this kind of phenomenon (definitely\nshorter than the $\\sim1-10$ Myr main sequence lifespan of massive stars,\nand very likely to be of the order of $\\sim 10^4-10^5\\;{\\rm yr}$), the\nstar formation in one mini-halo is likely to stop almost immediately\nafter the first (massive) object forms. This means that, unless two or\nmore stars form exactly at the same time (within $\\sim1\\%$ of the\nmini-halo dynamical time), each mini-halo will form {\\it exactly one}\nmassive star.\n\n\\subsection{Supernovae\\index{supernovae} feedback and metallicities}\n\n\\begin{figure}[p]\n\\epsfig{file=starfate.eps,width=11truecm}\n\\caption{The ``final mass function'' of non-rotating metal-free stars as\na function of their initial mass, as given by Heger \\& Woosley\n2002. The thick black curve gives the final mass of the collapsed\nremnant, and the thick gray curve the mass of the star at the start\nof the event that produces that remnant (mass loss, SN etc.); for\nzero-metal stars this happens to be mostly coincident with the dotted\nline, corresponding to no mass loss. Below initial masses of $5-10\\;M_{\\odot}$\nwhite dwarfs are formed; above that, initial masses of up to\n$\\sim25\\;M_{\\odot}$ lead to neutron stars. At larger masses, \\index{Black Hole}black holes can\nform in different ways (through fall-back of SN ejecta or\ndirectly). Pair instability starts to appear above $\\sim100\\;M_{\\odot}$, and\nbecomes very important at $\\sim 140\\;M_{\\odot}$: stars with initial masses\nin the $140-260\\;M_{\\odot}$ range are believed to be completely disrupted,\nleaving no remnant; instead, above this range pair instability is\nbelieved to lead to a complete collapse of the star into a \\index{Black Hole}black hole.}\n\\label{starfate}\n\\end{figure}\n\nAfter producing plenty of UV photons during their life, primordial\nmassive stars are likely to explode as supernovae\\index{supernovae}. 
This must be true for\nsome fraction of them, otherwise it would be impossible to pollute the\nIGM with metals, as mass loss from zero-metal stars is believed to be\nunimportant (this applies to stars with $M\\lsim500\\;M_{\\odot}$; see Baraffe,\nHeger \\& Woosley 2001, and also Heger {\\it et al.~} 2002); however, it is clearly\npossible that some fraction of primordial stars directly evolve into black\nholes.\n\nIn figure \\ref{starfate} we show the results about the fate of\nzero-metallicity stars, as obtained by Heger \\& Woosley 2002 (but see\nalso Heger {\\it et al.~} 2003), indirectly confirming this picture and\nsuggesting that pair-instability supernovae\\index{supernovae} could play a major role if\nthe primordial IMF is really skewed towards masses $\\lower.5ex\\hbox{\\gtsima}\n100\\;M_{\\odot}$.\n\nThis has a series of consequences. First of all, supernova\\index{supernovae} explosions are\namong the very few phenomena directly involving primordial stars which\nwe can realistically hope to observe. Wise \\& Abel (2004; see also Marri\n\\& Ferrara 1998 and Marri, Ferrara \\& Pozzetti 2000 for the effects of\ngravitational lensing) investigated the expected number of such\nsupernovae by means of a semi-analytic model combining the Press-Schechter\nformalism\\index{Press-Schechter formalism}, an evolving minimum mass\\index{minimum mass} for star-forming halos and negative\nfeedbacks, finding the rates shown in Fig. \\ref{primordialsn}; if such\nobjects are pair-instability supernovae with masses $\\gsim175\\;M_{\\odot}$,\nthey should be detectable by future missions such as JWST\\footnote{James\nWebb Space Telescope; more information at http:\/\/www.jwst.nasa.gov}; if\nsome of them are associated with Gamma Ray Bursts, the recently launched\n{\\it Swift}\\footnote{More information at http:\/\/swift.gsfc.nasa.gov}\nsatellite has a slim chance of observing them at redshifts\n$\\lsim30$ (see Gou {\\it et al.~} 2004).\n\nSupernova\\index{supernovae} explosions obviously have hydrodynamical effects on the gas of\nthe mini-halo where they presumably formed, especially in the case of\nthe particularly violent pair-instability supernovae (see {\\frenchspacing\\it e.g. } Ober, El\nElid \\& Fricke 1983), and they are likely to expel it even if it was not\nremoved by the effects of UV radiation (see {\\frenchspacing\\it e.g. } MacLow \\& Ferrara 1999\nand Ferrara \\& Tolstoy 2000), further reducing the probability of having\nmore than one star per mini-halo. The similarity with UV effects extends\nto the influence on neighbouring halos, which can be wiped out by SN\nexplosions (Sigward, Ferrara \\& Scannapieco 2004).\n\nFinally, supernovae provide a mechanism for spreading metals into the\nIGM, as discussed {\\frenchspacing\\it e.g. } by Madau, Ferrara \\& Rees 2001. In turn, these\nmetals will modify the cooling properties of the gas, and lead to star\nformation with a normal (Salpeter-like) IMF.\nIf this is true, and if pair-instability supernovae\\index{supernovae} dominate\nthe yields from primordial stars, we should be able to distinguish the\npeculiar pair-supernova abundance pattern (described in Heger \\& Woosley\n2001) as a ``signature'' of primordial origin.\n\nSearches for low-metallicity stars have a long history (see {\\frenchspacing\\it e.g. } Beers\n1999, 2000), and their inability to find stars with metallicities\n$Z\\leq10^{-4}Z_\\odot$ led to speculation that this metallicity marked\nthe transition to a normal IMF (see {\\frenchspacing\\it e.g. 
} Bromm {\\it et al.~} 2001, Schneider\n{\\it et al.~} 2002). However, Christlieb {\\it et al.~} 2002 finally reported the\ndiscovery of an extremely metal poor star (HE0107-5240) with\n$[Fe\/H]=-5.3$\\footnote{$[Fe\/H] = \\log_{10}(N_{Fe}\/N_H) -\n\\log_{10}(N_{Fe}\/N_H)_\\odot$.}, a level compatible with a ``second\ngeneration'' star ({\\frenchspacing\\it i.e. }, a star formed from material enriched by very few\nsupernovae; for comparison, Wise \\& Abel 2004 find that primordial SNe\ncould enrich the IGM to $[Fe\/H]\\sim-4.1$, although this is probably an upper\nlimit). The abundance patterns in this star are quite strange (for\nexample, the carbon abundance is slightly less than 1\/10 solar), but a\nmuch better fit is provided by supernova\\index{supernovae} yields of moderately massive\nstars ($15-40\\;M_{\\odot}$) rather than from yields predicted for\npair-instability supernovae coming from very massive stars. However, at\nthe moment no model can satisfactorily fit all the observed abundances\n(see Bessel, Christlieb \\& Gustafsson 2004).\n\nEven if the primordial nature of this star must still be established, it\nrepresents a cautionary tale about the currently favoured predictions of\na large number of massive or very massive primordial stars, and a\nreminder that better theoretical modeling is still needed, with\nparticular regard to feedback effects during the stellar accretion phase.\n\n\\begin{figure}[t]\n\\epsfig{file=primordialsn.eps,width=11truecm}\n\\caption{Primordial supernova\\index{supernovae} properties as reported by Wise \\& Abel\n2004. The right panels show the differential rate of primordial\nsupernovae per unit redshift (top) and the cumulative rate (bottom);\nboth are per year of observation and per square degree. The left panels\nshow the critical halo mass (in $M_{\\odot}$) for primordial star formation\n(top), the comoving number density (in ${\\rm Mpc^{-3}}$) of halos above\nthe critical mass (middle) and the predicted specific intensity of the\nsoft UV background in the Lyman-Werner band (in ${\\rm erg\\; s^{-1}\\;\ncm^{-2}\\; Hz^{-1}\\; sr^{-1}}$, bottom). The three lines in each panel\nrefer to fixed primordial stellar masses of $100M_{\\odot}$ (solid),\n$200M_{\\odot}$ (dotted) and $500M_{\\odot}$ (dashed).}\n\\label{primordialsn}\n\\end{figure}\n\n\\section*{Acknowledgements}\nWe are grateful to the SIGRAV School for providing the opportunity to\nwrite this lecture notes. We thank M. Mapelli and M. Colpi\nfor assistance and useful comments about the manuscript. This work was\nstarted when both authors were at the Astronomy Department of\nPennsylvania State University. E.R. gratefully acknowledges support from\nNSF CAREER award AST-0239709 from the U.S. National Science Foundation,\nand from the Netherlands Organization for Scientific Research (NWO)\nunder project number 436016.\n\n\n\\References\n\n\\begin{thereferences}\n\\item Abel T, Anninos P, Zhang Y and Norman M 1997 {\\it New Astr.} {\\bf\n2} 181\n\\item Abel T, Anninos P, Norman M L and Zhang Y 1998 {\\it ApJ} {\\bf 508} 518--529\n\\item Abel T, Bryan G L and Norman M L 2000 {\\it ApJ} {\\bf 540} 39\n\\item Abel T, Bryan G L and Norman M L 2002 {\\it Science} {\\bf 295}\n 93--98 [ABN02]\n\\item Baraffe I, Heger A and Woosley S E 2001 {\\it ApJ} {\\bf 550} 890\n\\item Barkana R and Loeb A 1999 {\\it ApJ} {\\bf 523} 54\n\\item Barkana R and Loeb A 2001 {\\it Physics Reports} {\\bf 349}\n125-238\n\\item Beers T C 1999 in {\\it The Third Stromlo Symposium: The Galactic\n Halo}, eds. 
Gibson B K Axelrod T S \\& Putman M E, {\\it Astron. Soc.\n Pacif. Conf. Ser.} {\\bf 165} 202\n\\item Beers T C 1999 {\\it The First Stars}, proceedings\nof the MPA\/ESO Workshop held at Garching, Germany, 4-6 August 1999,\neds. A. Weiss, T. Abel \\& V. Hill (Springer), p. 3\n\\item Bennett C L {\\it et al.~} 2003 {\\it ApJS} {\\bf 148} 1\n\\item Berger M J and Colella P 1989 {\\it J. Comp. Phys.} {\\bf 82} 64\n\\item Bessel M S, Christlieb N and Gustafsson B 2004 {\\it ApJL} {\\bf 612} 61\n\\item Bogleux E and Galli D 1997 {\\it MNRAS} {\\bf 288} 638\n\\item Bond J R, Cole S, Efstathiou G and Kaiser N 1991 {\\it ApJ} {\\bf 379} 440\n\\item Bonnor W B 1956 {\\it MNRAS} {\\bf 116} 351\n\\item Bromm V, Coppi P and Larson R B 1999 {\\it ApJL} {\\bf 527} L5\n\\item Bromm V, Coppi P and Larson R B 2002 {\\it ApJ} {\\bf 564} 23\n\\item Bromm V, Ferrara A, Coppi P and Larson R B 2001 {\\it MNRAS}\n {\\bf 328} 969\n\\item Bromm V and Larson R B 2004 {\\it ARAA} {\\bf 42} 79\n\\item Bromm V, Kudritzki R P and Loeb A 2001 {\\it ApJ} {\\bf 552} 464\n\\item Bryan G L and Norman M 1998 {\\it ApJ} {\\bf 495} 80\n\\item Bryan G L, Norman M L, Stone J M, Cen R and Ostriker J P 1995 {\\it\nComp. Phys. Comm.} {\\bf 89} 149\n\\item Burkert A and Bodenheimer P 1993 {\\it MNRAS} {\\bf 264} 798\n\\item Cazaux S and Spaans M 2004 {\\it ApJ} {\\bf 611} 40\n\\item Cen R and Ostriker P 1992 {\\it ApJL} {\\bf 399} L113\n\\item Christlieb N, Bessel M S, Beers T C, Gustafsson B, Korn A, Barklem P S,\n Karlsson T, Mizuno-Wiedner M and Rossi S 2002 {\\it Nature} {\\bf 419} 904\n\\item Ciardi B and Ferrara A 2004 {\\it Space Science Reviews} accepted\n [astro-ph\/0409018]\n\\item Ciardi B, Ferrara Governato F and Jenkins A {\\it MNRAS} {\\bf 314} 611 \n\\item Coles P and Lucchin F 1995 {\\it Cosmology. The origin and\nevolution of cosmic structure} (Chichester, England: John Wiley \\& Sons)\n\\item Couchman H M and Rees M J 1986 {\\it MNRAS} {\\bf 221} 53\n\\item Cowie L L and Songaila A 1998 {\\it Nature} {\\bf 394} 44\n\\item Diemand J, Moore B and Stadel J 2005 {\\it Nature} {\\bf 433} 389 [astro-ph\/0501589]\n\\item Doroshkevich A G, Zel'Dovich Y B and Novikov I D 1967\n {\\it Astronomicheskii Zhurnal} {\\bf 44} 295 [DZN67]\n\\item Ebert R. 1955 {\\it Z. Astrophys.} {\\bf 37} 217\n\\item Fan X {\\it et al.~} 2000 {\\it AJ} {\\bf 120} 1167\n\\item Fan X {\\it et al.~} 2001 {\\it AJ} {\\bf 122} 2833\n\\item Fan X {\\it et al.~} 2003 {\\it AJ} {\\bf 125} 2151\n\\item Fan X {\\it et al.~} 2004 {\\it AJ} {\\bf 128} 1649\n\\item Ferrara A and Salvaterra R 2005 {\\bf THIS BOOK}\n [see also astro-ph\/0406554]\n\\item Ferrara A and Tolstoy E 2000 {\\it MNRAS} {\\bf 313} 291\n\\item Flower D R, Le Bourlot J, Pineau de For\\^ets G and Roueff E 2000\n {\\it MNRAS} {\\bf 314} 753--758\n\\item Frommhold L 1993 {\\it Collision-induced absorption in gases}\n(Cambridge, UK: Cambridge University Press)\n\\item Fuller T M and Couchman H M P 2000 {\\it ApJ} {\\bf 544} 6--20\n\\item Galli D and Palla F 1998 {\\it A\\&A} {\\bf 335} 403--420 [GP98]\n\\item Gamow G 1948 {\\it Physical Review} {\\bf 74} 505\n\\item Gamow G 1949 {\\it Rev. Mod. 
Phys.} {\\bf 21}, 367\n\\item Glover S C O 2004 {\\it Space Science Reviews} accepted [astro-ph\/0409737]\n\\item Glover S C O and Brand P W J L 2001 {\\it MNRAS} {\\bf 321} 385\n\\item Glover S C O and Brand P W J L 2003 {\\it MNRAS} {\\bf 340} 210\n\\item Gou L, Meszaros P, Abel T and Zhang B 2004 {\\it Apj} {\\bf 604} 508\n\\item Green A M, Hofmann S and Schwarz D J 2004 {\\it MNRAS} {\\bf 353} 23\n\\item Guth A H 2004 to be published in {\\it Carnegie Observatories\n Astrophysics Series, Vol. 2: Measuring and Modeling the Universe},\n ed. W L Freedman (Cambridge: Cambridge University Press),\n [astro-ph\/0404546]\n\\item Guth A H 1981 {\\it Phys. Rev. D} {\\bf 23} 247\n\\item Haardt F and Madau P 1996 {\\it ApJ} {\\bf 461} 20\n\\item Haiman Z, Abel T and Rees M J 2000 {\\it ApJ} {\\bf 534} 11\n\\item Haiman Z, Rees M J and Loeb A 1997 {\\it ApJ} {\\bf 476} 458\n\\item Heger A, Fryer C L, Woosley S E, Langer N and Hartmann D H 2003 {\\it\n ApJ} {\\bf 591} 288\n\\item Heger A and Woosley S E 2002 {\\it ApJ} {\\bf 567} 532\n\\item Heger A, Woosley S, Baraffe I and Abel T 2002 in {\\it Lighthouses in\n the Universe; The Most Luminous Celestial Objects and Their Use for\n Cosmology}, proceedings of the MPA\/ESO Conference held at Garching\n (Germany), eds. M. Gilfanov, R. A. Siunyaev and E. Curazov (Berlin:\n Springer), 369 [astro-ph\/0112059]\n\\item Hollenbach D and McKee C F 1979 {\\it ApJS} {\\bf 41} 555--592\n\\item Kitayama T, Susa H, Umemura M and Ikeuchi S 2001 {\\it MNRAS} {\\bf 326} 1353\n\\item Larson R B 1969 {\\it MNRAS} {\\bf 145} 271\n\\item Le Bourlot J, Pineau de For\\^ets G and Flower D R 1999\n {\\it MNRAS} {\\bf 305} 802\n\\item Lepp S and Shull J M 1983 {\\it ApJ} {\\bf 270} 578--582\n\\item Lepp S and Shull J M 1984 {\\it ApJ} {\\bf 280} 465--469\n\\item Lenzuni P, Chernoff D F and Salpeter E E 1991 {\\it ApJS} {\\bf 76} 759\n\\item Lynden-Bell D 1967 {\\it MNRAS} {\\bf 136} 101\n\\item Mac Low M M and Ferrara A 1999 {\\it ApJ} {\\bf 513} 142\n\\item Machacek M E, Bryan G L and Abel T 2001 {\\it ApJ} {\\bf 548}\n 509--521\n\\item Madau P and Haardt F 2005 {\\bf THIS BOOK}\n\\item Madau P, Ferrara A and Rees M J 2001 {\\it ApJ} {\\bf 555} 92\n\\item Madau P, Rees M J, Volonteri M, Haardt F and Oh S P {\\it ApJ}\n {\\bf 604} 484\n\\item Marri S and Ferrara A 1998 {\\it ApJ} {\\bf 509} 43\n\\item Marri S, Ferrara A and Pozzetti L 2000 {\\it MNRAS} {\\bf 317} 265\n\\item Martin P G, Schwarz D H and Mandy M E 1996 {\\it ApJ} {\\bf 461}\n 265--281\n\\item McDowell M R C 1961 {\\it Observatory} {\\bf 81} 240\n\\item Monaghan J J 1992 {\\it ARAA} {\\bf 30} 543\n\\item Norman M L 2004 to be published in {\\it Springer Lecture Notes in\nComputational Science and Engineering: Adaptive Mesh Refinement -\nTheory and Applications}, eds. T Plewa T Linde and G Weirs [astro-ph\/0402230]\n\\item Novikov I D and Zel'Dovich Y B 1967 {\\it ARAA} {\\bf 5} 627\n\\item O'Shea B W, Bryan G, Bordner J, Norman M L, Abel T, Harkness R and\nKritsuk A 2004 to be published in {\\it Springer Lecture Notes in\nComputational Science and Engineering: Adaptive Mesh Refinement -\nTheory and Applications}, eds. 
T Plewa T Linde and G Weirs [astro-ph\/0403044]\n\\item Ober W W, El Elid M F and Fricke K J 1983 {\\it A\\&A} {\\bf 119} 61\n\\item Omukai K 2001 {\\it ApJ} {\\bf 546} 635\n\\item Omukai K and Nishi R 1998 {\\it ApJ} {\\bf 508} 141--150\n\\item Omukai K and Palla F 2001 {\\it ApJL} {\\bf 561} L55--L58\n\\item Omukai K and Palla F 2003 {\\it ApJ} {\\bf 589} 677--687\n\\item Omukai K and Yoshii Y 2003 {\\it ApJ} {\\bf 599} 746\n\\item Osterbrock D E 1989 {\\it Astrophysics of Gaseous Nebulae and\nActive Galactic Nuclei} (Mill Valley, California: University Science Books)\n\\item Padmanabhan T 1993 {\\it Structure Formation in the Universe}\n (Cambridge: Cambridge University Press)\n\\item Palla F, Salpeter E E and Stahler S W 1983 {\\it ApJ} {\\bf 271}\n 632\n\\item Peebles P J E 1965 {\\it ApJ} {\\bf 142} 1317\n\\item Peebles P J E 1993 {\\it Principles of Physical Cosmology} (Princeton: Princeton University Press)\n\\item Peebles P J E and Dicke R H 1968 {\\it ApJ} {\\bf 154} 891 [PD68]\n\\item Penston M V 1969 {\\it MNRAS} {\\bf 144} 425\n\\item Press W H and Schechter P 1974 {\\it ApJ} {\\bf 187} 425\n\\item Quinn P J and Zurek W H 1988 {\\it ApJ} {\\bf 331} 1\n\\item Rees M J 1976 {\\it MNRAS} {\\bf 174} 483\n\\item Ripamonti E and Abel T 2004 {\\it MNRAS} {\\bf 348} 1019--1034\n\\item Ripamonti E, Haardt F, Ferrara A and Colpi M 2002 {\\it MNRAS}\n {\\bf 334} 401--418\n\\item Sabano Y and Yoshii Y 1977 {\\it PASJ} {\\bf 29} 207\n\\item Saslaw W C and Zipoy D 1967 {\\it Nature} {\\bf 216} 976\n\\item Schaerer D 2002 {\\it A\\&A} {\\bf 382} 28\n\\item Schneider R, Ferrara A, Natarajan P and Omukai K 2002 {\\it ApJ}\n {\\bf 571} 30\n\\item Shapiro P R, Iliev I T and Raga A C 2004 {\\it MNRAS} {\\bf 348} 753\n\\item Sigward F, Ferrara A and Scannapieco E 2004 {\\it MNRAS}, accepted\n [astro-ph\/0411187]\n\\item Silk J 1968 {\\it ApJ} {\\bf 151} 459--472\n\\item Silk J 1983 {\\it MNRAS} {\\bf 205} 705\n\\item Somerville R 2005 {\\bf THIS BOOK}\n\\item Stahler S W, Shu F H and Taam R E 1980 {\\it ApJ} {\\bf 241} 637\n\\item Stahler S, Palla F and Salpeter E E 1986 {\\it ApJ} {\\bf 302} 590\n\\item Starobinsky A A 1979 {\\it Pis'ma Zh. Eksp. Teor. Fiz.} {\\bf 30}\n 719 [JETP Lett. 30,682]\n\\item Starobinsky A A 1980 {\\it Phys. Lett.} {\\bf 91B} 99\n\\item Tegmark M, Silk J, Rees M J, Blanchard A, Abel T and Palla F 1997\n {\\it ApJ} {\\bf 474} 1--12 [T97]\n\\item Tumlinson J and Shull J M 2000 {\\it ApJL} {\\bf 528} 65\n\\item Uehara H and Inutsuka S 2000 {\\it ApJL} {\\bf 531} L91--L94 (HD)\n\\item Viana P T P and Liddle A R 1996 {\\it MNRAS} {\\bf 281} 323\n\\item Volonteri M, Haardt F and Madau P 2003 {\\it ApJ} {\\bf 582} 559\n\\item Walter F, Bertoldi F, Carilli C, Cox P, Lo K Y, Neri R, Fan X, Omont A,\n Strauss MA and Menten K M 2003 {\\it Nature} {\\bf 424} 406\n\\item Whalen D, Abel T and Norman M L 2004 {\\it ApJ} {\\bf 610} 14\n\\item White S M and Springel V 1999 {\\it The First Stars}, proceedings\nof the MPA\/ESO Workshop held at Garching, Germany, 4-6 August 1999,\neds. A. Weiss, T. Abel \\& V. Hill (Springer), p. 327 \n\\item Wise J H and Abel T 2004 {\\it in preparation}, [astro-ph\/0411558]\n\\item Woodward P R and Colella 1984 {\\it J. Comp. Phys.} {\\bf 54} 174\n\\item Wu K K S, Lahav O and Rees M J 1999 {\\it Nature} {\\bf 397} 225--30\n\\item Yoneyama T 1972 {\\it Publ. of the Astr. Soc. 
of Japan} {\\bf 24} 87\n\\item Yoshida N, Abel T, Hernquist L and Sugiyama N 2003 {\\it ApJ}\n {\\bf 592} 645--663\n\\item Zhao H S, Taylor J, Silk J and Hooper D 2005, astro-ph\/0502049\n\\end{thereferences}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDeviations from the purely poissonian shot noise (the so-called\n``full'' shot noise) in mesoscopic\ndevices and resonant tunneling structures have\nbeen the subject of growing interest in the last decade.\n\\cite{lesovik89,li90,yurkkoch90,buettiker90,buettiker92,chenting92,davihyld92,brown92,iannashot97,liu95,ciambrone95,lombardi97}\nThe main reason is that noise is a very sensitive probe of\nelectron-electron interaction, \\cite{landauer96} both due\nto the Pauli principle and to Coulomb force, and provides \ninformation not obtainable from DC and AC characterization; furthermore, \nnoise depends strongly on the details of device structure, so that\nthe capability of modeling it in nanoscale devices implies\nand requires a deep understanding of the collective transport mechanisms \nof electrons.\n\nAlmost all published theoretical and experimental studies \nhave focussed on the suppression of shot noise due to negative \ncorrelation between\ncurrent pulses caused by single electrons traversing the device.\nSuch correlation may be introduced by Pauli exclusion, which limits\nthe density of electrons in phase space, and\/or by Coulomb repulsion,\ndepending\non the details of the structure and on the dominant transport\nmechanism,\\cite{chenting92,davihyld92,brown92,iannashot97}\nand make the pulse distribution sub-poissonian, leading to suppressed\nshot noise.\n\nIn particular, for the case of resonant tunneling structures,\nseveral theoretical and experimental studies \nhave appeared in the literature,\n\\cite{li90,yurkkoch90,buettiker90,buettiker92,chenting92,davihyld92,brown92,iannashot97,liu95,ciambrone95,lombardi97}\nassessing\nthat the power spectral density\nof the noise current $S$ in such devices may be suppressed down to\nhalf the ``full'' shot noise value $S_{\\rm full} = 2 q I$ , i.e., that \nassociated to a purely poissonian process.\n\nIn this Letter, we propose a theoretical model and show experimental \nevidence of the opposite behavior, that is of enhanced shot noise \nwith respect to $S_{\\rm full}$, which is to be expected in\nresonant tunneling structures biased in the negative \ndifferential resistance region of the $I$-$V$ characteristic.\n\nWe shall show that in such condition Coulomb interaction\nand the shape of the \ndensity of states in the well \nintroduce positive correlation\nbetween consecutive current\npulses, leading to a super-poissonian pulse distribution, \nwhich implies a super-poissonian shot noise.\n\nFirst, we shall show an intuitive physical picture of the phenomenon,\nthen we shall express it in terms of a model for transport and noise in \ngeneric resonant tunneling structures presented elsewhere \n\\cite{iannashot97,ianna_unified95}. 
\nFurthermore, we shall show the experimental results,\nexhibiting a noise power spectral density almost 6.6 times greater than \n$S_{\\rm full}$, and compare it with the results\nprovided by a numerical implementation of our model.\n\nAs is well known, the typical I-V characteristic of a resonant tunneling diode \nis due to the shape of the density of states in the well,\nwhich consists of a series of narrow peaks in correspondence with the \nlongitudinal allowed energies in the well: for the\nGaAs\/Al$_{0.36}$Ga$_{0.64}$As material system considered here, \nthere is a single narrow peak.\nIn the negative differential resistance region of the \nI-V characteristic, the peak of the density of states is below the conduction\nband edge of the cathode: with increasing voltage, the density of states\nis moved downward, so that fewer states are available for\ntunneling from\nthe cathode, and the current decreases.\n\n\\begin{figure}\n\\epsfxsize = 8.cm\n\\epsffile{sketch.eps}\n\\vspace{0.2cm}\n\\caption{Enhanced shot noise is obtained because\nan electron tunneling into the well (a) from the \ncathode raises the potential energy of the well by an amount \n$q\/(C_1+C_2)$ so that more states are available for tunneling from the\ncathode (b)}\n\\end{figure}\n\nThe microscopic mechanism which allows for enhanced shot noise is the\nfollowing (see Fig. 1): an electron tunneling into the well from the \ncathode raises the potential energy of the well by an amount \n$q\/(C_1+C_2)$, where $q$\nis the electron charge, $C_1$ and $C_2$ the capacitances\nbetween the well region and either contacts;\nas a consequence, \nthe density of states in the well is shifted upwards \nby the same amount, with\nthe result that more states are available for successive tunneling\nevents from the\ncathode, and the probability per unit time that electrons\nenter the well increases. That means that electrons entering \nthe well are positively\ncorrelated, so that enhanced shot noise is to be expected.\n\nFor a more analytical derivation we can consider the structure \nas consisting of three regions \n$\\Omega_l$, $\\Omega_w$, and $\\Omega_r$, i.e.,\nthe left reservoir, the well region, and the right reservoir,\nrespectively, that are only weakly coupled through the two tunneling \nbarriers 1 and 2, as sketched in Fig. 
1(a).\nIn addition, we suppose that electron transport is well described\nin terms of sequential tunneling (which is reasonable, \nexcept for the case of temperatures in the millikelvin range): an electron in\n$\\Omega_l$ traverses barrier 1, loses\nphase coherence and relaxes to a quasi-equilibrium energy distribution\nin the well region $\\Omega_w$, then traverses barrier 2 and \nleaves through $\\Omega_r$.\n\nSince confinement is realized only in one direction (that of MBE growth),\na state in $\\Omega_s$ ($s=l,r,w$) is characterized by its \nlongitudinal energy $E$, its transverse wave vector ${\\bf k_T}$, \nand its spin $\\sigma$, and\ntunneling can be treated as a transition between\nlevels in different regions \\cite{bardeen61} in which $E$, ${\\bf k_T}$ \nand $\\sigma$ are conserved.\n\nFollowing Davies {\\em et al.} \\cite{davihyld92}, we introduce\n``generation'' and ``recombination'' rates through both\nbarriers: \\cite{iannashot97} the generation rate $g_1$ is the transition\nrate from $\\Omega_l$ to $\\Omega_w$, i.e., the sum of the\nprobabilities per unit time of having a transition from $\\Omega_l$\nto $\\Omega_w$ given by the Fermi ``golden rule'' over all pairs of\noccupied states in $\\Omega_l$ and\nempty states in $\\Omega_w$. Analogously, we define $r_1$, \nthe recombination rate through barrier 1 (from\n$\\Omega_w$ to $\\Omega_l$), and $g_2$ and $r_2$, the generation and recombination\nrates through barrier 2.\n\nSince negative differential resistance is obtained at\nhigh bias, when the electron flux is unidirectional, $r_1$ and\n$g_2$ can be neglected, while $g_1$ and $r_2$ are:\n\\begin{eqnarray}\ng_1 & = & 2 \\frac{2 \\pi}{\\hbar} \\int dE \\;|M_{1lw}(E)|^2 \\rho_l(E)\n\t\t\\rho_w(E) \\times \\nonumber \\\\\n\t& & \t\\int d{\\bf k_T} \\rho_T({\\bf k_T}) \n\t\tf_l(E,{\\bf k_T}) (1 - f_w(E,{\\bf k_T})) \n,\\label{g1} \\\\\nr_2 & = & 2 \\frac{2 \\pi}{\\hbar} \\int dE \\;|M_{2rw}(E)|^2 \\rho_r(E)\n\t\t\\rho_w(E) \\times \\nonumber \\\\\n\t& & \\int d{\\bf k_T} \\rho_T({\\bf k_T}) \n\t\tf_w(E,{\\bf k_T}) (1 - f_r(E,{\\bf k_T}))\n,\\label{r2}\n\\end{eqnarray}\nwhere $\\rho_s$ and $f_s$ ($s=l,w,r$) are the longitudinal\ndensity of states and the equilibrium occupation factor\nin $\\Omega_s$ (dependent on the quasi-Fermi level $E_{fs}$), \nrespectively, and \n$\\rho_T$ is the density of transverse states;\n$M_{1lw}(E)$ is the matrix \nelement for a transition through barrier 1 between \nstates of longitudinal energy\n$E$: it is obtained in Ref. \\cite{ianna_unified95} as\n$|M_{1lw}(E)|^2 = \\hbar^2 \\nu_l(E) \\nu_w(E) T_1(E)$, where $\\nu_s$ ($s=l,w,r$)\nis the so-called attempt frequency in $\\Omega_s$ and $T_1$ is the \ntunneling probability\nof barrier 1; $M_{2rw}(E)$ is analogously defined.\n\nAs is well known, in the negative differential resistance\nregion of the $I$-$V$ characteristic, the peak of $\\rho_w$ is below the conduction \nband edge of the left electrode.\nIn this way, as the voltage is increased, the number of allowed \nstates for a transition from\n$\\Omega_l$ to $\\Omega_w$ is reduced, hence the current decreases.\nSince all electrons relax to lower energy\nstates once they are in the well, it is reasonable to\nassume that $\\Omega_w$-states\nwith longitudinal energies above the conduction band edge\nof the left electrode $E_{\\rm cbl}$ are empty, i.e.,\ncorrespond to a zero\noccupation factor $f_w$ (analogously, $f_r = 0$). \nIn addition, if we neglect size\neffects in the cathode, we have $2 \\pi \\hbar \\rho_l \\nu_l = 1$ \nfor $E > E_{\\rm cbl}$. 
Therefore we can rewrite $g_1$ and $r_2$ as\n\\begin{eqnarray}\ng_1 & = & 2 \\int_{E_{\\rm cbl}}^\\infty dE\n \\, \\nu_w(E) \\, \\rho_w(E) T_1(E) F_l(E), \n\\label{g1a} \\\\\nr_2 & = & 2 \\int_{E_{\\rm cbw}}^\\infty dE\n \\, \\nu_w(E) \\, \\rho_w(E) T_2(E) F_w(E)\n,\\label{r2a}\n\\end{eqnarray}\nwhere $F_s(E)$ is the occupation factor of $\\Omega_s$ ($s=l,w,r$)\nintegrated over the transverse wave vectors,\n$F_s(E) \\equiv \\int d{\\bf k_T} \\, \\rho_T({\\bf k_T}) \\, f_s(E, {\\bf k_T})$, \nand $E_{\\rm cbw}$ is the conduction band edge in the well.\n\nLet us point out that $g_1$ and $r_2$ depend on the number of\nelectrons $N$ in the well region both through the potential energy\nprofile, which is affected by the charge in $\\Omega_w$ through\nthe Poisson equation, and through the term $F_w$ in (\\ref{r2a}),\nwhich depends on $N$ through the quasi-Fermi level $E_{fw}$.\nIt is worth noticing that in our case Pauli exclusion has no\neffect, since practically all possible final states are unoccupied.\nFollowing these considerations, $g_1$ and $r_2$ can be\nobtained as functions of $N$, at a given bias voltage $V$.\n\nThe steady-state value $\\tilde{N}$ of $N$ satisfies charge\nconservation in the well, i.e., $g_1(\\tilde{N}) = r_2(\\tilde{N})$,\nand the steady-state current is $I =\nq g_1(\\tilde{N}) = q r_2(\\tilde{N})$.\n\nFollowing Ref. \\onlinecite{iannashot97}, it is worth\nexpanding $g_1(N)$ and $r_2(N)$ around $\\tilde{N}$ and\ndefining the following characteristic times:\n\\begin{equation}\n\\frac{1}{\\tau_{g}} \\equiv - \\left. \\frac{dg_1}{dN}\n \\right|_{N = \\tilde{N}} \n,\\hspace{1cm}\n\\frac{1}{\\tau_{r}} \\equiv \\left. \\frac{dr_2}{dN} \n \\right|_{N = \\tilde{N}} \\label{taus}\n.\\end{equation}\n\nOur parameter of choice for studying deviations from full\nshot noise is the so-called Fano factor $\\gamma$, the ratio of the \npower spectral density of the current noise $S(\\omega)$ to the full shot noise value\n$2 q I$. From \\onlinecite{iannashot97} we have, in this case,\nfor $\\omega \\tau_g \\tau_r \\ll \\tau_g + \\tau_r$,\n\\begin{equation}\n\\gamma = \\frac{S(\\omega)}{2 q I} =\n 1 - \\frac{2 \\tau_g \\tau_r}{(\\tau_g + \\tau_r)^2}\n.\\label{gamma}\n\\end{equation}\n \nFrom the definition (\\ref{taus}), $\\tau_{g}$ is positive in\nthe first region of the $I$-$V$ characteristic, where the Pauli principle and\nthe Coulomb interaction make $g_1$ decrease with increasing $N$.\nOn the other hand, in the negative differential resistance\nregion, the term which varies the most with increasing $N$ is the\nlongitudinal density of states, which shifts upwards by $q\/(C_1+C_2)$\nper electron: since the peak is just below $E_{\\rm cbl}$, a\nslight shift of the peak appreciably increases the integrand\nin (\\ref{g1a}), yielding a negative $\\tau_{g}$. \nNote that, while from (\\ref{gamma}) we see\nthat noise could also diverge if $\\tau_{g} = - \\tau_{r}$, \nthis cannot physically happen, because the large\ndeviation of $N$ with respect to $\\tilde{N}$ would make\nthe linearization of $g_1$ and $r_2$ invalid.\n\nWe now focus on a particular structure, on which we have\nperformed noise measurements and numerical simulations\nfollowing the theory just described. 
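\n\nBefore describing this structure, it is useful to gauge the size of the effect\npredicted by (\\ref{gamma}) with a simple numerical illustration (the values below are\npurely illustrative and are not derived from the device parameters): if, in the\nnegative differential resistance region, $\\tau_g = -2 \\tau_r$, then\n$\\gamma = 1 - \\frac{2\\,(-2\\tau_r)\\,\\tau_r}{(-2\\tau_r + \\tau_r)^2} = 1 + 4 = 5$,\ni.e., the shot noise is strongly enhanced above the full value, whereas equal and\npositive characteristic times, $\\tau_g = \\tau_r$, yield the maximum suppression\n$\\gamma = 1\/2$.\n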
\nThe structure we consider has been fabricated at\nthe TASC-INFM laboratory in Trie\\-ste\nand has the following layer structure:\na Si-doped ($N_d = 1.4 \\times 10^{18}$ cm$^{-3}$)\n500~nm-thick GaAs buffer layer, an undoped 20~nm-thick\nGaAs spacer layer to prevent silicon diffusion into the\nbarrier, an undoped 12.4~nm-thick AlGaAs first barrier,\nan undoped 6.2~nm-thick GaAs quantum well, an undoped\n14.1~nm-thick AlGaAs barrier, a 10~nm \nGaAs spacer layer and a Si-doped 500~nm-thick\ncap layer. The aluminum mole fraction\nin both barriers is $0.36$ and the diameter of the mesa defining\nthe single device is about $50$~$\\mu$m.\n\nThe barriers in our samples are\nthicker than in most similar resonant-tunneling diodes, \nin order to reduce the\ncurrent and, consequently, increase the differential resistance,\nso as to\nobtain the best possible noise match with the measurement amplifiers\n(available ultra-low-noise amplifiers offer a good performance, with a very\nsmall noise figure, for a range of resistance values between a few\nkiloohms and several megaohms).\n\nWe have applied a measurement technique purposely developed for low-level\ncurrent noise measurements, based on the careful evaluation of the\ntransimpedance between the device under test and the output of the\namplifier. \\cite{macupell91}\nThis procedure allows us to measure noise levels\nthat are up to 3~dB below those of the available amplifiers with a maximum\nerror around 10\\%.\n\nOur usual approach \\cite{macupell91} also includes the\nsubtraction of the noise due to the amplifier and other\nspurious sources, which is evaluated using a substitution\nimpedance, equivalent to that of the device under test with known\nnoise behaviour.\nFor the measurements in the negative differential resistance region,\ninstead, we have evaluated an upper limit for the noise\ncontribution from the amplifier in these particular operating\nconditions, since it is difficult to synthesize an appropriate\nsubstitution impedance. From experimental and theoretical\nconsiderations, we have verified that this limit is always below\n3\\% of the noise level from the device under test, so that\ncorrections are not necessary.\nIn Fig. 2 the measured current and the Fano factor $\\gamma$\nat the temperature of liquid nitrogen (77~K) are\nplotted as a function of the applied voltage (the thicker barrier\nis on the anode side).\n\n\\begin{figure}\n\\epsfxsize 8.25cm\n\\epsffile{speri.eps}\n\\vspace{0.2cm}\n\\caption{Experimental current (solid line) and Fano factor $\\gamma$ \n(squares) as a function\nof the applied voltage, at the temperature of 77~K. \nThe maximum value of $\\gamma$ is 6.6, \nwhile the minimum is close to 0.5}\n\\end{figure}\n\nIt can be noticed that, as the voltage increases, the Fano factor\nfirst decreases down to about 0.5 (which corresponds to the maximum\ntheoretical suppression \\cite{iannashot97}), is exactly one at the voltage\ncorresponding to the current peak, then increases\nagain and reaches a peak of 6.6 at the voltage corresponding\nto the lowest modulus of the\nnegative differential resistance, while, for higher voltages, \nit rapidly approaches one.\n\nIn Fig. 
3 we show numerical results for the same structure at 77~K,\nbased on the theory discussed above and obtained by considering a \nrelaxation length of 15~nm.\\cite{ianna_unified95}\nAs can be seen, there is an\nalmost quantitative agreement between theory and experiment\n(the peak experimental current is 45~nA, which corresponds\nto a current density of 23~A\/m$^2$): we ascribe most of the difference\nto the tolerances on the nominal device parameters.\nAll the relevant features of the Fano factor as a function\nof the applied voltage are reproduced, and can be easily \nexplained in terms of our model.\n\n\\begin{figure}\n\\epsfxsize 8.25cm\n\\epsffile{calc.eps}\n\\vspace{0.2cm}\n\\caption{Calculated current density (solid line) and Fano factor $\\gamma$\n(squares) as a function of the applied voltage for the considered structure}\n\\end{figure}\n\nThe fact that $\\gamma$ is maximum at the voltage corresponding\nto the minimum modulus of the negative differential resistance $r_d$ of the\ndevice is readily justified once we recognize that $r_d$ is\npractically proportional to $\\tau_g + \\tau_r$ \\cite{iannadopo}.\nIn fact, from Fig. 4 it is clear that $\\tau_r$ varies much more smoothly\nthan $\\tau_g$ with the applied voltage, so that,\nsince $\\tau_g$ and $\\tau_r$ have opposite signs, \nthe modulus of $r_d$ is minimum\nwhen $\\tau_g\/\\tau_r$ approaches $-1$. At this point, we simply notice,\nfrom (\\ref{gamma}), that $\\gamma$ gets larger as $\\tau_g\/\\tau_r$ \napproaches $-1$, too (and would eventually diverge for $\\tau_g = - \\tau_r$).\n\nFurthermore, from Figs. 3 and 4, and according to \\cite{iannadopo},\nwe can notice that $r_d$ and $\\tau_g$ tend to infinity at about\nthe same voltage, i.e., the one corresponding to the current peak.\nTherefore, according to (\\ref{gamma}), $\\gamma$ is 1 at the \ncurrent peak bias, as can be verified from both experiments \nand calculations (Figs. 2 and 3, respectively).\n\n\\begin{figure}\n\\epsfxsize 8.25cm\n\\epsffile{taus.eps}\n\\vspace{0.2cm}\n\\caption{Calculated $1\/\\tau_g$ (solid) and $1\/\\tau_r$ (dashed) \nas a function of the\napplied voltage}\n\\end{figure}\n\nIn conclusion, we have demonstrated experimentally that\nCoulomb interaction, enhanced by the shape of the density\nof states in the well, can lead to a \ndramatic increase of shot noise in\nresonant tunneling diodes biased in the negative differential\nresistance region of the $I$-$V$ characteristic. We have provided a model \nwhich leads to good numerical agreement with the\nexperimental data, taking into account all the\nrelevant physics involved in the phenomenon.\n\n\nThis work has been supported by the Ministry for the University\nand Scientific and Technological Research of Italy, \nand by the Italian National Research Council (CNR).\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}