diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzziltv" "b/data_all_eng_slimpj/shuffled/split2/finalzziltv" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzziltv" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\n\n\n\n\nSummarisation aims to condense a given piece of information into a short and succinct summary that best covers its semantics with the least redundancy. This helps users quickly browse and understand long content by focusing on the most important ideas \\cite{IMani2001automatic}. Summarisation on a single modality, such as video summarisation \\cite{MMa2020videosparse, LYuan2020unsupervisedvideo}, which aims to summarise a video into keyframes, and text summarisation \\cite{RMihalcea2004Textrank, ASee2017Get, YLiu2019BERTExt, PLaban2020SummaryLoop}, which aims to summarise a document into a few sentences, has been actively studied for decades. \n\n\nVideo summarisation aims to summarise a video into keyframes \\cite{DBLP:journals\/tcsvt\/LuoPC09, JWang2019stacked, LYuan2020unsupervisedvideo,JWang2020QueryTwiceVS} that provide a compact yet informative representation of a video. The majority of existing methods focus on modelling the temporal dependency and spatio structure among frames \\cite{EApostolidis2021vssurvey}. To address information overload issues, extreme video summarisation has been proposed as a sub-task of video summarisation \\cite{HGu2018Thumbnails, JRen2020BestFrame, EApostolidis2021RLThumbnail}, which aims to summarise a video into a cover frame. It involves high source compression and allows users to quickly discern the essence of a video and decide whether it is worth watching or not. \n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=.47\\textwidth]{tldw_intro.png}\n\\caption{Illustration of our newly proposed task XMSMO.}\n\\label{fig:xmsmo}\n\\end{figure}\n\nText summarisation aims to condense a given document into a short and succinct summary that best covers the document's semantics. \nThe majority of existing methods are either extractive or abstractive. Extractive methods \\cite{SNarayan2018Ranking,XZhang2019HIBERT,YLiu2019BERTExt,zhong2020extractive} select salient sentences from a document to form its summary. Abstractive methods \\cite{ASee2017Get, RPaulus2018DeepReinforced, JZhang2020pegasus, PLaban2020SummaryLoop} involve natural language generation to generate a summary for a given document. \nTo further condense the text and address information overload issues, extreme text summarisation has been proposed as a sub-task of text summarisation. Extreme text summarisation \\cite{SNarayan2018XSum, YLu2020multixscience,ICachola2020tldr, SSotudeh2021tldr9} aims to summarise a document into a one-sentence summary. It helps users quickly browse through the main information of a document.\n\n\n\nWhile single-modal summarisation has been investigated for decades, with the rapid growth of multimedia data, there is an emerging interest on Multimodal Summarisation with Multimodal Output (MSMO) \\cite{JZhu2018msmo,JZhu2020multimodal,MLi2020vmsmo}. MSMO aims to summarise a pair of a video or a set of images and a document into a visual-textual summary, since image and text could complement each other to help users to better obtain a more informative and visual understanding of events. However, most of the existing MSMO methods are designed for short visual inputs, such as short videos and multiple images, without considering the summary length. 
Given the increasing pace of producing multimedia data and the subsequent challenge in keeping up with the explosive growth of such rich content, these existing methods may be sub-optimal to address the imminent issue of information overload of multimedia data. \n\nIn this paper, we introduce a new task, \\textit{eXtreme Multimodal Summarisation with Multimodal Output (XMSMO)}, for the scenario TLDW, which stands for \\textit{Too Long; Didn't Watch}. \nAs shown in Figure \\ref{fig:xmsmo}, XMSMO aims to summarise a pair of a video and its corresponding document into a multimodal summary with an extremely short length. That is, an extreme multimodal summary consists of one cover frame as the visual summary and one sentence as the textual summary. \nTo solve this new task, we propose a novel unsupervised Hierarchical Optimal Transport Network (HOT-Net) architecture including three components: the hierarchical multimodal encoders, the hierarchical multimodal (fusion-based) decoders and the optimal transport solvers. \n\nSpecifically, the hierarchical visual encoder formulates the representations of a video from three levels including frame-level, scene-level and video-level; the hierarchical textual encoder formulates the representations of a document from three levels as well: word-level, sentence-level and document-level. \nThen, the hierarchical decoder formulates the cross-modal representations in a local-global manner and evaluates candidate cover frames and candidate words, which are used to form a visual summary and a compressive textual summary, respectively. Note that a compressive textual summary offers a balance between the conciseness issue of extractive summarisation and the factual hallucination issue of abstractive summarisation.\nFinally, our optimal transport-based unsupervised training strategy is devised to mimic human judgment on the quality of an extreme multimodal summary in terms of the visual and textual coverage. The coverage is measured by a Wasserstein distance with an optimal transport plan measuring the distance between the semantic distributions of the summary and the original content. In addition, textual fluency and cross-modal similarity are further considered, which can be important to obtain a high-quality multimodal summary.\n\n\nAdditionally, to facilitate the study on this new task XMSMO and evaluate our proposed HOT-Net, we built the first dataset of its kind, namely XMSMO-News, by harvesting 4,891 video-document pairs as input and cover frame-title pairs as multimodal summary output from the British Broadcasting Corporation (BBC) News YouTube channel from 2013 to 2021.\n\nIn summary, the key contributions of this paper are:\n\\begin{itemize}\n \\item We introduce a new task, eXtreme Multimodal Summarisation with Multimodal Output (XMSMO), referred to as TLDW, which stands for \\textit{Too Long; Didn't Watch}. It aims to summarise a video-document pair into an extreme multimodal summary (i.e., one cover frame as the visual summary and one sentence as the textual summary).\n \\item We propose a novel unsupervised Hierarchical Optimal Transport Network (HOT-Net). The hierarchical encoding and decoding are conducted across both the visual and textual modalities, and optimal transport solvers are introduced to guide the summaries to maximise their semantic coverage. \n \\item We construct a new large-scale dataset XMSMO-News for the research community to facilitate research in this new direction. 
Experimental results on this dataset demonstrate that our method outperforms other baselines in terms of ROUGE and IoU metrics.\n \n\\end{itemize}\n\n\n\\section{Related Work}\n\n\nIn this section, we first review existing deep learning-based extreme unimodal summarisation methods in two categories, video-based and text-based, since they are closely related to our study. We also review existing multimodal summarisation with multimodal output methods which share similar input and output modalities with our study. \n\n\n\\begin{figure*}[h!]\n\\centering\n\\includegraphics[width=.99\\textwidth]{TLDW_architecture.png}\n\\caption{Illustration of the unsupervised Hierarchical Optimal Transport Network (HOT-Net) for XMSMO. HOT-Net consists of three components: hierarchical multimodal encoders, hierarchical multimodal fusion decoders, and optimal transport solvers.\n}\n\\label{fig:architecturechart}\n\\end{figure*}\n\n\n\n\n\\subsection{Extreme Video Summarisation} \n\nExtreme video summarisation methods can be conceptualized as a frame ranking task, which scores the frames in a video as the output. A deep learning method based on a CNN-based autoencoder architecture was first proposed~\\cite{HGu2018Thumbnails}, which was trained by a reconstruction loss considering the representativeness and aesthetic quality of the selected frames. \nThe scoring was improved by~\\citet{JRen2020BestFrame} by considering the quality of faces, and it utilised a Siamese architecture, which was optimized by a piece-wise ranking loss using pairs of frames. \\citet{EApostolidis2021RLThumbnail} proposed a generative adversarial network which introduced a reinforcement learning scheme by rewarding the representativeness and aesthetic quality. Note that most of these methods encode a video as a sequence of frames directly, whilst the semantic hierarchical structure of a video has not been adequately explored.\n\n\\subsection{Extreme Text Summarisation} \n\n\n\nThe extreme text summarisation task was first explored by \\citet{SNarayan2018XSum} who formulated a sequence-to-sequence learning problem, where the input was a source document and the output was an extreme summary. A supervised encoder-decoder framework was studied and a topic model was incorporated as an additional input to involve the document-level semantic information and guide the summary to be consistent with the document theme.\n\\citet{ICachola2020tldr} introduced multitask learning and incorporated the title generation as a scaffold task to improve the learning ability regarding the salient information. These methods relied on integrating the knowledge from pre-trained embedding models to generate abstractive summaries. As a result, these generative models are highly prone to external hallucination and it is possible to generate contents unfaithful to the original document, which was shown by \\citet{JMaynez2020Faithfulness}. \n\n\n\n\n\\subsection{Multimodal Summarisation with Multimodal Output}\n\nMultimodal summarisation with multimodal output task was first studied by \\citet{JZhu2018msmo}, which took a document and an image set as the input. A supervised attention based encoder-decoder framework was devised. For encoding, a textual encoder and a visual encoder formulate the document and visual representations, respectively. For decoding, a textual decoder generates a textual summary, and a visual decoder selects the most representative image as a visual summary. 
Additionally, a multimodal attention layer was incorporated to fuse the textual and visual context information. To \nalleviate the modality-bias issue, multitask learning was applied to jointly consider the two MSMO subtasks: summary generation and text-image relation recognition \\cite{JZhu2020multimodal}. A hierarchical intra- and inter-modality correlation between the image and text inputs was studied to enhance the multimodal context representation \\cite{LZhang2021hierarchical}. \\citet{MLi2020vmsmo} extended visual inputs to short videos, and introduced self-attention to improve the multimodal context representation. Nonetheless, most of these methods encode the video and document inputs directly without considering their semantic hierarchical structure. Moreover, these existing methods have been mainly studied in a supervised manner. To the best of our knowledge, our work is the first unsupervised method for MSMO. \n\n\n\n\n\n\\section{Methodology}\n\nAs shown in Figure \\ref{fig:architecturechart}, our proposed eXtreme Multimodal Summarisation method, namely the unsupervised Hierarchical Optimal Transport Network (HOT-Net), consists of three components: the hierarchical multimodal encoders, the hierarchical multimodal (fusion-based) decoders and the optimal transport solvers. \nSpecifically, the hierarchical visual encoder formulates frame-level, scene-level and video-level representations of a video \\(\\mathbf{V}\\). The hierarchical textual encoder formulates word-level, sentence-level and document-level representations of a document \\(\\mathbf{D}\\). Then, the hierarchical visual decoder selects an optimal frame \\(\\mathbf{f^{*}}\\) as an extreme visual summary, and the hierarchical textual decoder produces an extreme textual summary \\(\\mathbf{s}^{*}\\) based on the cross-modal guidance. Finally, the optimal transport solvers conduct unsupervised learning to optimise the encoders and the decoders in pursuit of the best semantic coverage of the obtained summaries. \n\n\n\\subsection{Hierarchical Multimodal Encoders}\n\n\\subsubsection{Visual Encoder}\n\nGiven an input video $\\mathbf{V}$, it can be represented as a sequence of $T$ frames $\\{\\mathbf{x}_{i}^{\\text{frame}}|i=1,...,T\\}$. By grouping the consecutive frames with similar semantics, the video can be segmented into a sequence of $T'$ scenes $\\{\\mathbf{x}_{j}^{\\text{scene}}|j=1,...,T'\\}$, where $\\mathbf{x}_{j}^{\\text{scene}}$ consists of the video frames from the $i_{j_0}$-th to the $i_{j_1}$-th frame, with $i_{j_0}$ and $i_{j_1}$ denoting the start and end frame indices of the $j$-th scene in the video. The hierarchical visual encoder learns the scene-level and video-level representations based on $\\mathbf{x}_{i}^{\\text{frame}}$ and $\\mathbf{x}_{j}^{\\text{scene}}$, respectively. \n\nTo characterize a video frame $\\mathbf{x}_{i}^{\\text{frame}}$, a pre-trained neural network can be introduced. The CLIP model \\cite{ARadford2021CLIP} is adopted in this study since it is the state-of-the-art multi-modal embedding model. For the sake of convenience, we use the symbol $\\mathbf{x}_{i}^{\\text{frame}}$ to represent this pre-trained feature of the $i$-th frame. To further model the scene-level features, a pooling method is introduced, which is denoted as a function $g^{\\text{scene}}$. 
In detail, for the $j$-th scene, its representation $\\mathbf{x}_{j}^{\\text{scene}}$ can be obtained by observing its associated frame-level features $\\mathbf{x}_{i}^{\\text{frame}}$, $i=i_{j_0},..., i_{j_1}$ as:\n\\begin{equation}\n\\mathbf{x}_{j}^{\\text{scene}} = g^{\\text{scene}}(\\{\\mathbf{x}_{i_{j_0}}^{\\text{frame}},...,\\mathbf{x}_{i_{j_1}}^{\\text{frame}}\\}). \n\\end{equation}\nParticularly, a generalized pooling operator (GPO) \\cite{JChen2021GPO} is adopted as the pooling method in this study, since it is shown to be an effective and efficient pooling strategy for different features. \nWith the scene-level features, a pooled global (i.e., video-level) representation can be derived as:\n\\begin{equation}\n\\mathbf{x}^{\\text{video}} = g^{\\text{video}}(\\{\\mathbf{x}_{1}^{\\text{scene}},...,\\mathbf{x}_{T'}^{\\text{scene}}\\}),\n\\end{equation}\nwhere $g^{\\text{video}}$ is a video-level pooling function based on GPO. \n\n\n\\subsubsection{Textual Encoder}\n\nA document $\\mathbf{D}$ can be viewed as a sequence consisting of $U$ words as $\\{\\mathbf{x}_{m}^{\\text{word}}|m=1,...,U\\}$ or a sequence of $U'$ sentences $\\{\\mathbf{x}_{n}^{\\text{sentence}}|n=1,...,U'\\}$. The $n$-th sentence consists of consecutive words in $\\mathbf{D}$ from the $m_{n_0}$-th to the $m_{n_1}$-th word. Similar to the visual encoder, a hierarchical textual encoder is introduced to learn the sentence-level and the document-level representations.\n\nA pre-trained CLIP model is introduced to formulate the word-level features, which are denoted as $\\mathbf{x}_{m}^{\\text{word}}$ for the $m$-th word. Next, a pooling mechanism $g^{\\text{sentence}}$ is adopted to formulate the sentence-level features. In detail, the $n$-th sentence-level features can be computed as:\n\\begin{equation}\n\\mathbf{x}_{n}^{\\text{sentence}} = g^{\\text{sentence}}(\\{\\mathbf{x}_{m_{n_0}}^{\\text{word}},...,\\mathbf{x}_{m_{n_1}}^{\\text{word}}\\}). \n\\end{equation}\nFinally, the global representation of the document $\\mathbf{D}$ can be derived based on the sentence-level features:\n\\begin{equation}\n\\mathbf{x}^{\\text{document}} = g^{\\text{document}}(\\{\\mathbf{x}_{1}^{\\text{sentence}},...,\\mathbf{x}_{U'}^{\\text{sentence}}\\}),\n\\end{equation}\nwhere $g^{\\text{document}}$ is a document-level pooling function based on GPO. \n\n\n\n\n\\subsection{Hierarchical Multimodal Fusion}\n\nTo attend and fuse the representations from the visual and textual modalities, we adopt a graph-based attention mechanism~\\cite{PVelivckovic2018GAT}. This formulation makes it easy to extend the attention layer to future additional modalities, such as an audio modality. Each modality feature can be treated as a vertex feature of a graph. The relationships between modalities are formulated by graph convolution to attend over the other modality, which then updates the representations of each modality. In particular, a graph fusion strategy introduces hierarchical observations at a local level, between the scene and sentence levels, and at a global level, between the video and document levels. \n\n\n\nFor local multimodal fusion, the representations of the scenes \\(\\mathbf{x}^{\\text{scene}}=\\{\\mathbf{x}_{1}^{\\text{scene}},...,\\mathbf{x}_{T'}^{\\text{scene}}\\}\\) and sentences \\(\\mathbf{x}^{\\text{sentence}}=\\{\\mathbf{x}_{1}^{\\text{sentence}},...,\\mathbf{x}_{U'}^{\\text{sentence}}\\}\\) are fed into graph fusion modules $f^\\text{scene}_{\\text{local}}$ and $f^\\text{sentence}_{\\text{local}}$. 
The resulting representations, which can be viewed as an information exchange between modalities, are fed into an average pooling operator $g^{\\text{avg}}$ to obtain the local multimodal context representations $\\dot{\\mathbf{x}}_{j}^{\\text{scene}}$ and $\\dot{\\mathbf{x}}_{n}^{\\text{sentence}}$:\n\\begin{equation}\n\\begin{aligned}\n\\dot{\\mathbf{x}}_{j}^{\\text{scene}} = & g^{\\text{avg}}([f^{\\text{scene}}_{\\text{local}}(\\mathbf{x}_{j}^{\\text{scene}}; \\mathbf{x}_{1}^{\\text{sentence}} ) ,..., \\\\ & f^{\\text{scene}}_{\\text{local}}(\\mathbf{x}_{j}^{\\text{scene}};\\mathbf{x}_{U'}^{\\text{sentence}} ) ]),\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n\\dot{\\mathbf{x}}_{n}^{\\text{sentence}} = & g^{\\text{avg}}([f^{\\text{sentence}}_{\\text{local}}(\\mathbf{x}_{n}^{\\text{sentence}}; \\mathbf{x}_{1}^{\\text{scene}} ) ,..., \\\\ & f^{\\text{sentence}}_{\\text{local}}(\\mathbf{x}_{n}^{\\text{sentence}};\\mathbf{x}_{T'}^{\\text{scene}} ) ]).\n\\end{aligned}\n\\end{equation}\nFor global multimodal fusion, the global representations of the document \\(\\mathbf{x}^{\\text{document}}\\) and video \\(\\mathbf{x}^{\\text{video}}\\) are fed into a graph fusion module $f_{\\text{global}}$:\n\\begin{equation}\n\\begin{aligned}\n\\dot{\\mathbf{x}} = g^{\\text{avg}}(f_{\\text{global}}(\\left[\\mathbf{x}^{\\text{video}} ; \\mathbf{x}^{\\text{document}} \\right])).\n\\end{aligned}\n\\end{equation}\n\n\\subsection{Hierarchical Multimodal Decoders}\n\n\\subsubsection{Visual Decoder}\nOur visual decoder consists of three stages: 1) scene-guided frame decoding, 2) video-guided frame decoding, and 3) cross-modality-guided frame decoding. It aims to evaluate the probability of a particular frame being a cover frame. \n\nTo produce a scene-aware decoding outcome of evaluating each frame, a scene-guided visual decoder $h^{\\text{scene}}$ derives a latent decoding\n$\\mathbf{y}_{j}^{\\text{scene}}$ for frames from $i_{j_0}$ to $i_{j_1}$, $j=1,...,T'$, as follows: \n\\begin{equation}\n\\begin{aligned}\n\\mathbf{y}_{j}^{\\text{scene}} & = \\{\\mathbf{y}_{i_{j_0}}^{\\text{scene-frame}},...,\\mathbf{y}_{i_{j_1}}^{\\text{scene-frame}}\\} \\\\ & = h^{\\text{scene}}(\\{\\mathbf{x}_{i_{j_0}}^{\\text{frame}},...,\\mathbf{x}_{i_{j_1}}^{\\text{frame}}\\}|\\dot{\\mathbf{x}}_{j}^{\\text{scene}}),\n\\end{aligned}\n\\end{equation}\nwhere $h^{\\text{scene}}$ is a bi-directional GRU \\cite{DBahdanau2015gru} and $\\dot{\\mathbf{x}}_{j}^{\\text{scene}}$ is a multimodal scene guidance, which can be viewed as prior knowledge.\nNext, \nto produce a video-guided frame decoding outcome over all $T$ frames, we have:\n\\begin{equation}\n\\begin{aligned}\n\\mathbf{y}^{\\text{video}} & = \\{\\mathbf{y}_{1}^{\\text{video-frame}},...,\\mathbf{y}_{T}^{\\text{video-frame}}\\} \\\\ & = h^{\\text{video}}(\\{\\mathbf{x}_{1}^{\\text{frame}},...,\\mathbf{x}_{T}^{\\text{frame}}\\}|\\mathbf{x}^{\\text{video}}),\n\\end{aligned}\n\\end{equation}\nwhere $h^{\\text{video}}$ is a bi-directional GRU and $\\mathbf{x}^{\\text{video}}$ is a unimodal video guidance serving as prior knowledge. 
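As a rough illustration of such guidance-conditioned decoding (our own sketch with assumed feature dimensions, not the exact implementation of $h^{\\text{scene}}$ or $h^{\\text{video}}$), one stage can be written in PyTorch, with the guidance vector initialising the hidden state of a bi-directional GRU whose outputs are later scored frame-wise:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass GuidedBiGRU(nn.Module):\n    # One guidance-conditioned Bi-GRU decoding stage (illustrative only).\n    def __init__(self, feat_dim=512, hidden=512):\n        super().__init__()\n        self.init_h = nn.Linear(feat_dim, 2 * hidden)  # one init state per direction\n        self.gru = nn.GRU(feat_dim, hidden, bidirectional=True, batch_first=True)\n\n    def forward(self, frames, guidance):\n        # frames: (batch, T, feat_dim); guidance: (batch, feat_dim)\n        h0 = self.init_h(guidance).view(-1, 2, self.gru.hidden_size)\n        h0 = h0.transpose(0, 1).contiguous()           # (2, batch, hidden)\n        out, _ = self.gru(frames, h0)                  # (batch, T, 2 * hidden)\n        return out\n\nframes = torch.randn(1, 120, 512)    # 120 candidate frames of one video\nguidance = torch.randn(1, 512)       # e.g. a scene- or video-level context vector\nscores = nn.Linear(1024, 1)(GuidedBiGRU()(frames, guidance)).softmax(dim=1)\n\\end{verbatim}\n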
\nFinally, to produce a global multimodal context-aware decoding, we adopt a Bi-GRU decoder $\\dot{h}^{\\text{video}}$ with the guidance of the cross-modal embedding $\\dot{\\mathbf{x}}$:\n\\begin{equation}\n\\begin{aligned}\n\\dot{\\mathbf{y}}^{\\text{video}} & = \\{\\dot{\\mathbf{y}}_{1}^{\\text{video-frame}},...,\\dot{\\mathbf{y}}_{T}^{\\text{video-frame}}\\} \\\\ & = \\dot{h}^{\\text{video}}(\\mathbf{y}_{1}^{\\text{video-frame}},...,\\mathbf{y}_{T}^{\\text{video-frame}}|\\dot{\\mathbf{x}}).\n\\end{aligned}\n\\end{equation}\nTo this end, the optimal frame \\(\\mathbf{f^{*}}\\) is obtained with a frame-wise linear layer activated with a softmax function:\n\\begin{equation}\n\\begin{aligned}\n\\mathbf{f^{*}} = \\textup{argmax}_i(\\textup{Linear}(\\dot{\\mathbf{y}}^{\\text{video}})).\n\\end{aligned}\n\\end{equation}\n\n\n\\subsubsection{Textual Decoder}\nSimilar to the visual decoder, the textual decoder also consists of three stages: 1) sentence-guided word decoding, 2) document-guided word decoding, and 3) cross-modality-guided word decoding. It aims to evaluate the probability of a word being selected in a compressive summary. \n\nTo produce a sentence-aware decoding outcome, a sentence decoder $h^{\\text{sentence}}$ derives a latent decoding\n$\\mathbf{y}_{n}^{\\text{sentence}}$ for words from $m_{n_0}$ to $m_{n_1}$, $n=1,...,U'$, where $m_{n_0}$ and $m_{n_1}$ indicate the start and end word indices of the $n$-th sentence in the document, as follows: \n\n\\begin{equation}\n\\begin{aligned}\n\\mathbf{y}_{n}^{\\text{sentence}} & = \\{\\mathbf{y}_{m_{n_0}}^{\\text{sentence-word}},...,\\mathbf{y}_{m_{n_1}}^{\\text{sentence-word}}\\} \\\\ & = h^{\\text{sentence}}(\\{\\mathbf{x}_{m_{n_0}}^{\\text{word}},...,\\mathbf{x}_{m_{n_1}}^{\\text{word}}\\}|\\dot{\\mathbf{x}}_{n}^{\\text{sentence}}),\n\\end{aligned}\n\\end{equation}\nwhere $h^{\\text{sentence}}$ is a bi-directional GRU and $\\dot{\\mathbf{x}}_{n}^{\\text{sentence}}$ is used as prior knowledge for the multimodal sentence guidance.\nThen, to produce a document-level textual decoding over all $U$ words, we have:\n\\begin{equation}\n\\begin{aligned}\n\\mathbf{y}^{\\text{document}} & = \\{\\mathbf{y}_{1}^{\\text{document-word}},...,\\mathbf{y}_{U}^{\\text{document-word}}\\} \\\\ & = h^{\\text{document}}(\\{\\mathbf{x}_{1}^{\\text{word}},...,\\mathbf{x}_{U}^{\\text{word}}\\}|\\mathbf{x}^{\\text{document}}),\n\\end{aligned}\n\\end{equation}\nwhere $h^{\\text{document}}$ is a bi-directional GRU and $\\mathbf{x}^{\\text{document}}$ is a unimodal document guidance. \nFinally, to produce a global cross-modal context-aware decoding for each word, a Bi-GRU decoder $\\dot{h}^{\\text{document}}$ is adopted with the guidance of the global multimodal embedding $\\dot{\\mathbf{x}}$:\n\\begin{equation}\n\\begin{aligned}\n\\dot{\\mathbf{y}}^{\\text{document}} & = \\{\\dot{\\mathbf{y}}_{1}^{\\text{document-word}},...,\\dot{\\mathbf{y}}_{U}^{\\text{document-word}}\\} \\\\ & = \\dot{h}^{\\text{document}}(\\mathbf{y}_{1}^{\\text{document-word}},...,\\mathbf{y}_{U}^{\\text{document-word}}|\\dot{\\mathbf{x}}).\n\\end{aligned}\n\\end{equation}\nAs a result, the optimal compressive summary \\(\\mathbf{s}^{*}\\) with length \\(k\\) is obtained by:\n\\begin{equation}\n\\begin{aligned}\n\\mathbf{s}^{*} = \\textup{topk}(\\textup{Linear}(\\dot{\\mathbf{y}}^{\\text{document}})).\n\\end{aligned}\n\\end{equation}\nNote that the selected $k$ words are ranked in line with their scores obtained from the linear layer with a softmax activation. 
Thus, the sentence $\\mathbf{s}^*$ can be constructed with these words and their orders. \n\n\\subsection{Optimal Transport-Guided Semantic Coverage}\n\nOur method is trained without reference summaries by mimicking human judgment on the quality of a multimodal summary, which minimises a quartet loss of visual coverage, textual coverage, textual fluency, and cross-modal similarity.\n\n\n\\subsubsection{Document Coverage}\n\nIntuitively, a high-quality summary is supposed to be close to the original document regarding their semantic distributions. We measure the Wasserstein distance \\cite{MKusner2015WMD} $L_{\\text{document}}$ between the document $\\mathbf{D}$ and the selected sentence $\\mathbf{s}^{*}$. It is the minimal cost required to transport the semantics from $\\mathbf{s}^{*}$ to $\\mathbf{D}$, measuring the semantic coverage of $\\mathbf{s}^{*}$ on $\\mathbf{D}$.\n\nGiven a dictionary, the number of occurrences of the $\\alpha$-th token (i.e., a word in the dictionary) in $\\mathbf{D}$ can be counted as $P_{\\mathbf{D}}(\\alpha)$. As a result, the semantic distribution $\\text{TF}_{\\mathbf{D}}$ of the document $\\mathbf{D}$ can be defined with the normalized term frequency of each token. In detail, for the $\\alpha$-th element of $\\text{TF}_{\\mathbf{D}}$, we have: \n\\begin{equation}\n\\label{eqt:TFD}\n\\text{TF}_{\\mathbf{D}}(\\alpha)=\\frac{P_{\\mathbf{D}}(\\alpha)}{\\sum_{\\alpha'} P_{\\mathbf{D}}(\\alpha')}.\n\\end{equation} \nThe semantic distribution $\\text{TF}_{\\mathbf{s}^{*}}$ of the selected sentence $\\mathbf{s}^{*}$ can be derived in a similar manner. The normalized term frequency of the $\\alpha$-th token in $\\mathbf{s}^{*}$ is: \n\\begin{equation}\n\\label{eqt:TFS}\n\\text{TF}_{\\mathbf{s}^{*}}(\\alpha)=\\frac{P_{\\mathbf{s}^{*}}(\\alpha)}{\\sum_{\\alpha'} P_{\\mathbf{s}^{*}}(\\alpha')}.\n\\end{equation}\nNote that $\\text{TF}_{\\mathbf{D}}$ and $\\text{TF}_{\\mathbf{s}^{*}}$ have an equal total mass of \\(1\\) and can be completely transported from one to the other mathematically. \n\nA transportation cost matrix $\\mathbf{C} = (c_{\\alpha\\alpha'})$ is introduced to measure the semantic similarity between the tokens. Given a pre-trained tokeniser and token embedding model, define $\\mathbf{u}_\\alpha$ to represent the feature embedding of the $\\alpha$-th token. The transport cost $c_{\\alpha\\alpha'}$ from the $\\alpha$-th token to the $\\alpha'$-th one is computed based on the cosine similarity:\n\\begin{equation} \\label{eqt:costfunction}\nc_{\\alpha\\alpha'} = 1- \\frac{\\langle\\mathbf{u}_{\\alpha}, \\mathbf{u}_{\\alpha'}\\rangle}{\\left \\| \\mathbf{u}_{\\alpha} \\right \\| _{2}\\left \\| \\mathbf{u}_{\\alpha'} \\right \\|_{2} }.\n\\end{equation}\nNote that the token representations $\\mathbf{u}_{\\alpha}$ are obtained in the same way as the word representations $\\mathbf{x}_{(\\cdot)}^{\\text{word}}$, i.e., with a pre-trained model. 
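\n\nTo make this computation concrete, the following self-contained sketch (our own illustration with a toy four-token dictionary and random embeddings, not the actual training code) builds the two normalised term-frequency histograms and the cosine-distance cost matrix, and solves the transport problem introduced below as a small linear program with SciPy:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linprog\n\ndef document_coverage(tf_doc, tf_sum, emb):\n    # Cosine-distance cost between token embeddings (the matrix C above).\n    u = emb \/ np.linalg.norm(emb, axis=1, keepdims=True)\n    C = 1.0 - u @ u.T\n    n = len(tf_doc)\n    # Marginal constraints: rows of the plan sum to TF_D, columns to TF_s*.\n    A_eq = np.zeros((2 * n, n * n))\n    for a in range(n):\n        A_eq[a, a * n:(a + 1) * n] = 1.0\n        A_eq[n + a, a::n] = 1.0\n    b_eq = np.concatenate([tf_doc, tf_sum])\n    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,\n                  bounds=(0, None), method='highs')\n    return res.fun  # the Wasserstein distance L_document\n\nrng = np.random.default_rng(0)\nemb = rng.normal(size=(4, 8))              # toy embeddings of 4 tokens\ntf_doc = np.array([0.4, 0.3, 0.2, 0.1])    # TF of the document D\ntf_sum = np.array([0.0, 0.5, 0.5, 0.0])    # TF of the summary sentence s*\nprint(document_coverage(tf_doc, tf_sum, emb))\n\\end{verbatim}\nIn practice, a dedicated optimal transport solver can be used instead of a generic linear program, but this small example follows the formulation directly.\n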
\n\nThen, an optimal transport plan matrix $\\mathbf{T}^{*}(\\mathbf{D},\\mathbf{s}^*)=(t^{*}_{\\alpha\\alpha'}(\\mathbf{D},\\mathbf{s}^*))$ in pursuit of minimizing the transportation cost can be obtained by solving the following optimization problem: \n\\begin{equation} \\label{eqt:OT}\n\\begin{aligned}\n\\mathbf{T}^{*}(\\mathbf{D},\\mathbf{s}^*) = \\underset{\\mathbf{T}(\\mathbf{D},\\mathbf{s}^*)}{\\text{argmin}} \\sum_{\\alpha,\\alpha'} t_{\\alpha\\alpha'}(\\mathbf{D},\\mathbf{s}^*)c_{\\alpha\\alpha'}, \\\\\n\\text{s.t.} \\; \\sum_{\\alpha'}t_{\\alpha\\alpha'}(\\mathbf{D},\\mathbf{s}^*) = \\text{TF}_{\\mathbf{D}}(\\alpha), \\\\\n\\sum_{\\alpha}t_{\\alpha\\alpha'}(\\mathbf{D},\\mathbf{s}^*) =\\text{TF}_{\\mathbf{s}^{*}}(\\alpha'), \\\\\nt_{\\alpha\\alpha'}(\\mathbf{D},\\mathbf{s}^*)\\geq 0, \n\\forall \\alpha, \\alpha'.\n\\end{aligned}\n\\end{equation} \nTo this end, the Wasserstein distance can be defined as:\n\\begin{equation}\n\\begin{aligned}\n L_{\\text{document}} = \\sum_{\\alpha,\\alpha'} t^{*}_{\\alpha\\alpha'}(\\mathbf{D},\\mathbf{s}^*)c_{\\alpha\\alpha'},\n\\end{aligned}\n\\end{equation} \nwhich is associated with the optimal transport plan. By minimizing $L_{\\text{document}}$, a high-quality summary sentence is expected to be obtained. \n\n\n\n\n\n\n\\subsubsection{Video Coverage}\n\nIn parallel, a good cover frame is supposed to be close to the original video regarding their perceptual similarity. We measure the loss of visual coverage by computing the Wasserstein distance $L_{\\text{video}}$ between the corresponding colour signatures of the mean of video frames in $\\mathbf{V}$ and the cover frame $\\mathbf{f^*}$. It can be viewed as the minimal cost required to transport the semantics from $\\mathbf{f^*}$ to $\\mathbf{V}$. \n\nBy denoting $\\bar{\\mathbf{f}}$ as the mean of the video frames in $\\mathbf{V}$,\nwe define $\\bar{\\mathbf{r}}$ and $\\mathbf{r}^{*}$ as the colour signatures of $\\bar{\\mathbf{f}}$ and $\\mathbf{f^*}$, respectively. In detail, we have:\n\\begin{equation} \\label{eqt:coloursig}\n\\begin{aligned}\n\\bar{\\mathbf{r}} &= \\{(\\bar{\\mu}_1, \\bar{\\tau}_1), ..., (\\bar{\\mu}_{\\bar{m}}, \\bar{\\tau}_{\\bar{m}})\\}\\;, \\\\\n\\mathbf{r}^* &= \\{(\\mu^*_1, \\tau^*_1), ..., (\\mu^*_{m^*}, \\tau^*_{m^*})\\}\\;,\n\\end{aligned}\n\\end{equation} \nwhere $\\bar{\\mu}_\\beta$ and $\\mu^*_{\\beta'}$ are the points in the colour space, and $\\bar{\\tau}_\\beta$ and $\\tau^*_{\\beta'}$ are the corresponding weights of the points. \n\n\n\nAn optimal transport plan matrix $\\mathbf{T}^{*}(\\mathbf{V},\\mathbf{f}^*)=(t^{*}_{\\beta\\beta'}(\\mathbf{V},\\mathbf{f}^*))\\in\\mathbf{R}^{\\bar{m}\\times m^*}$ in pursuit of minimizing the transportation cost between $\\bar{\\mathbf{r}}$ and $\\mathbf{r}^{*}$ can be obtained by solving the following optimization problem: \n\\begin{equation} \\label{eqt:OTvis}\n\\begin{aligned}\n\\mathbf{T}^{*}(\\mathbf{V},\\mathbf{f}^*) = \\underset{\\mathbf{T}(\\mathbf{V},\\mathbf{f}^*)}{\\text{argmin}} \\sum_{\\beta, \\beta'} t_{\\beta\\beta'}(\\mathbf{V},\\mathbf{f}^*)\\left \\| \\bar{\\mu}_\\beta - \\mu_{\\beta'}^* \\right \\|\\;, \\\\\n\\text{s.t.} \\; \\sum_{\\beta'}t_{\\beta\\beta'}(\\mathbf{V},\\mathbf{f}^*) = \\bar{\\tau}_\\beta, \n\\sum_{\\beta}t_{\\beta\\beta'}(\\mathbf{V},\\mathbf{f}^*) = \\tau^*_{\\beta'}\\;, \\\\\nt_{\\beta\\beta'}(\\mathbf{V},\\mathbf{f}^*)\\geq 0, \\forall \\beta, \\beta'\\;,\n\\end{aligned}\n\\end{equation} \nwhere $\\mathbf{T}(\\mathbf{V},\\mathbf{f}^*)$ is a transport plan. 
Then, a Wasserstein distance between the two colour signatures can be derived as:\n\\begin{equation}\n\\begin{aligned}\n L_{\\text{video}} = \\sum_{\\beta,\\beta'} t^{*}_{\\beta\\beta'}(\\mathbf{V},\\mathbf{f}^*)\\left \\| \\bar{\\mu}_\\beta - \\mu_{\\beta'}^* \\right \\|,\n\\end{aligned}\n\\end{equation} \nwhich is associated with the optimal transport plan. By minimizing $L_{\\text{video}}$, a high-quality summary frame is expected to be the cover frame. \n\n\n\n\\subsection{Textual Fluency and Cross-modal Consistency}\n\nInspired by \\citet{PLaban2020SummaryLoop}, we adopt a pre-trained language model $P_{LM}$ to measure \nthe fluency of the textual summary \n\\(L_{\\text{fluency}}\\). The loss can be defined as: \n\\begin{equation}\nL_{\\text{fluency}}= P_{LM}(\\mathbf{s}^*),\n\\end{equation} \nwhere \\(P_{LM}\\) computes the probability of $\\mathbf{s}^*$ being a sentence. \n\n\nSemantic consistency should also exist between the cover frame and the one-sentence summary.\nTo formulate this, we measure the cross-modal similarity between the two embeddings of the cover frame $\\mathbf{f}^*$ and the one-sentence summary $\\mathbf{s}^{*}$. The loss can be defined based on a cosine similarity:\n\\begin{equation}\n\\begin{aligned}\n L_{\\text{cross-modal}} = 1 - \\cos(\\mathbf{f}^*, \\mathbf{s}^*).\n\\end{aligned}\n\\end{equation}\n\nIn summary, four losses have been obtained to measure the summarisation quality: $ L_{\\text{document}}$, $L_{\\text{video}}$, $L_{\\text{fluency}}$ and $L_{\\text{cross-modal}}$. To this end, a loss function to optimize the proposed architecture can be formulated as follows:\n\\begin{equation}\n\\begin{aligned}\nL = \\lambda_{\\text{d}}L_{\\text{document}} + \\lambda_{\\text{v}}L_{\\text{video}} \n+ \\lambda_{\\text{f}}L_{\\text{fluency}} \n+\\lambda_{\\text{c}}L_{\\text{cross-modal}},\n\\end{aligned}\n\\end{equation}\nwhere $\\lambda_{\\text{d}}$, $\\lambda_{\\text{v}}$, $\\lambda_{\\text{f}}$ and $\\lambda_{\\text{c}}$ are the hyper-parameters controlling the weights of each loss term. \n\n\n\n\n\n\\section{Experimental Results and Discussions}\n\n\n\\subsection{Dataset}\nTo the best of our knowledge, there is no existing large-scale dataset for XMSMO. Hence, we collected the first large-scale dataset of its kind, XMSMO-News, from the British Broadcasting Corporation (BBC) News YouTube channel \\footnote{https:\/\/www.youtube.com\/c\/BBCNews}. We used the Pytube library to collect 4,891 quartets of video, document, cover frame, and one-sentence summary from the year 2013 to 2021. We used the video description as the document and the video title as the one-sentence summary, as these visual and textual summaries were professionally created by the BBC. 
\\footnote{We removed the trailing promotional text from the video title and video description.} We then split the quartets randomly into the train, validation, and test sets at a ratio 90:5:5.\n\nTable \\ref{tab:dataset} shows the statistics and the comparison of XMSMO-News with other benchmarks on multimodal summarisation with multimodal output.\nThe major differences are regarding the input and output lengths: XMSMO-News has an average duration of 345.5 seconds, whereas \\cite{MLi2020vmsmo} has 60 seconds only.\n\n\\begin{table}[htbp]\n\\tiny\n \\centering\n \\resizebox{.475\\textwidth}{!}\n {\n \\begin{threeparttable}\n \\begin{tabular}{p{2.5cm}|p{2.5cm}<{\\centering}p{2.8cm}<{\\centering}p{2.9cm}<{\\centering}}\n \n \\textbf{Dataset} & \\textbf{XMSMO-News} & \\textbf{VMSMO} & \\textbf{MSMO} \\\\\n \\hline\\hline\n \\text{\\#Train\/Val\/Test} & 4382\/252\/257 & 180000\/2460\/2460 & 293965\/10355\/10262 \\\\\n \n \\text{Language} & English & Chinese & English \\\\\n \n \\text{Visual Input} & Video & Video & Multi-images \\\\\n \n \\text{Textual Input} & Document & Document & Document \\\\\n \n \\text{Visual Output} & Cover frame & Cover frame & One image \\\\\n \n \\text{Textual Output} & One-sentence & Arbitrary length & Multi-sentence \\\\\n \n \n \\text{Frames\/Video} & 8827.4 & 1500.0 & 6.6 \\\\\n \n \\text{Video Duration(s)} & 345.5 & 60.0 & - \\\\\n \n \\text{Tokens\/Document} & 101.7 & 96.8 & 723.0\\\\\n \n \\text{Tokens\/Summary} & 12.4 & 11.2 & 70.0 \\\\\n \n \\text{Annotation} & Full & Partial\\tnote{1} & Partial\\tnote{2} \\\\\n \n \\end{tabular}%\n \\begin{tablenotes}\n \\item[1,2] 1) Not all ground-truth data is available; 2) No visual ground-truth on training and validation splits.\n \\end{tablenotes}\n \\end{threeparttable}\n }\n \\caption{Comparison of XMSMO-News with existing MSMO benchmark datasets. }\n \\label{tab:dataset}%\n\\end{table}%\n\n\n\\subsection{Implementation Details}\n\nWe used the PyTorch library for the implementation of our method. We set the hidden size of GPO and GRU to 512. \nFor the pre-trained CLIP model and the pre-trained token embedding model BERT (base version) used for computing the loss of textual coverage, we obtained them from HuggingFace \\footnote{\\label{huggingface}https:\/\/huggingface.co}. To detect the scenes of a video, we utilised the PySceneDetect library \\footnote{http:\/\/scenedetect.com\/en\/latest\/}. \nFor video preprocessing, we extracted one of every 360 frames to obtain 120 frames as candidate frames. All frames were resized to 640x360. \nWe trained HOT-Net using AdamW \\cite{ILoshchilov2018AdamW} with a learning rate of 0.01 and a batch size of 3 for about 72 hours. All experiments were run on a GeForce GTX 1080Ti GPU card.\n\n\n\n\\subsection{Baselines}\nTo evaluate our proposed method HOT-Net, we compared it with the following state-of-the-art baseline methods, including PEGASUS \\cite{JZhang2020pegasus} - the state-of-the-art method of text summarisation, CA-SUM \\cite{EApostolidis2022CASUM} - the state-of-the-art method of video summarisation, zero-shot CLIP \\cite{ARadford2021CLIP} - the state-of-the-art multi-modal embedding model with a linear classification layer to perform multimodal summarisation. 
The baseline models PEGASUS and CLIP were obtained from HuggingFace\n; CA-SUM was obtained from the author's Github \\footnote{https:\/\/github.com\/e-apostolidis\/CA-SUM}; VMSMO was obtained from the author's Github \\footnote{https:\/\/github.com\/iriscxy\/VMSMO} with modifications on the latest libraries' update and bug fixing. \n\n\n\n\n\n\n\n\\begin{table*}[h]\n\\footnotesize\n\\centering\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{l|ccc|cc|c}\n\\textbf{Method} & \\multicolumn{3}{c|}{\\textbf{Textual Evaluation}}\n& \\multicolumn{2}{c|}{\\textbf{Visual Evaluation}} & \\textbf{Overall Evaluation} \\\\\n {} & ROUGE-1 & ROUGE-2 & ROUGE-L & Frame Accuracy & IoU & {}\\\\ \\hline\\hline\nPEGASUS \\cite{JZhang2020pegasus} & 4.36 & \\textbf{0.12} & \\ul{4.00} & - & - & - \\\\\nCA-SUM \\cite{EApostolidis2022CASUM} & - & - & - & \\ul{0.57} & \\ul{0.69} & - \\\\\nVMSMO \\cite{MLi2020vmsmo} & \\textit{Divergence} & \\textit{Divergence} & \\textit{Divergence}\n& 0.57 & 0.69 & 0.49\n\\\\\nCLIP \\cite{ARadford2021CLIP} & 4.14 & \\ul{0.08} & 3.80 & 0.54 & 0.63 & 0.89 \\\\\\hline\\hline\nHOT-Net (Ours) visual only & - & - & - & \\textbf{0.60} & 0.68 & - \\\\\nHOT-Net (Ours) textual only & 3.85 & 0.05 & 3.60 & - & - & - \\\\\nHOT-Net (Ours) w\/o multimodal fusion & 3.99 & 0.05 & 3.73 & 0.56 & \\textbf{0.70} & 0.93 \\\\\nHOT-Net (Ours) w\/o local-level multimodal fusion & 4.45 & 0.06 & 4.16 & 0.59 & \\textbf{0.70} & 0.98 \\\\\nHOT-Net (Ours) w\/o global-level multimodal fusion \\hspace{5cm} & 3.65 & 0.06 & 3.45 & 0.58 & 0.68 & 0.88 \\\\ \nHOT-Net (Ours) w\/o fluency loss & 4.58 & 0.06 & 4.28 & 0.57 & 0.68 & 0.98 \\\\\nHOT-Net (Ours) w\/o cross-modal loss & 4.58 & 0.06 & 4.28 & 0.57 & 0.68 & 0.98 \\\\\n\\hline\\hline\nHOT-Net (Ours) & \\textbf{4.64} & 0.07 & \\textbf{4.33} & \\ul{0.57} & 0.68 & \\textbf{0.99} \\\\\n\n\\end{tabular}\n}\n\\caption{Comparisons between our HOT-N and the state-of-the-art summarisation methods on XMSMO-News.}\n\\label{tab:evaluation}\n\\end{table*}\n\n\n\\subsection{Quantitative Analysis} \nFor the quantitative evaluation of a textual summary, the commonly used ROUGE metric \\cite{CLin2004Rouge} for text summarisation is adopted. For the visual summary, the commonly used Intersection over Union (IoU) \\cite{ASharghi2017Query} and frame accuracy \\cite{SMessaoud2021DeepQAMVS} metrics for video summarisation are adopted. \n\nThe ROUGE metric evaluates the content consistency between a generated summary and a reference summary. In detail, the ROUGE-n F-scores calculates the number of overlapping n-grams between a generated summary and a reference summary. The ROUGE-L F-score considers the longest common subsequence between a generated summary and a reference summary. \nIoU metric evaluates the high-level semantic information consistency by counting the number of overlap concepts between the ground-truth cover frame and the generated one. 
The frame accuracy metric compares lower-level visual features: the ground-truth cover frame and the generated cover frame are considered to match when their pixel-level Euclidean distance is smaller than a predefined threshold.\nTo evaluate the overall performance on both modalities, we compute the overall evaluation as $0.5 \\times \\frac{\\text{IoU}}{\\text{Best IoU}} + 0.5 \\times \\frac{\\text{ROUGE-L}}{\\text{Best ROUGE-L}} $, where the best IoU and the best ROUGE-L are the best scores among all the evaluated methods.\n\n\n\nThe experimental results of HOT-Net on XMSMO-News are shown in Table \\ref{tab:evaluation} \nincluding ROUGE-1, ROUGE-2, and ROUGE-L F-scores, and IoU. Our method outperforms the baseline models in terms of ROUGE-1 and ROUGE-L, which demonstrates the quality of the generated extreme textual summary, and achieves promising results in terms of frame accuracy and IoU, which demonstrates the quality of the generated extreme visual summary. HOT-Net underperforms in terms of ROUGE-2, which may be due to the trade-off between informativeness and fluency. PEGASUS was trained on massive text corpora, which may help improve the fluency of natural language generation. This trade-off is further discussed in the Qualitative Analysis section.\n\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=.47\\textwidth]{dataset_sample.png}\n\\caption{Example summary generated by baseline methods and HOT-Net on XMSMO-News. It is about a US congressman who made an unusual appearance and flipped upside down.\n}\n\\label{fig:sample}\n\\end{figure}\n\n\\subsubsection{Ablation Study}\n\nTo study the effect of the proposed mechanisms, we compare a number of different settings of HOT-Net, and the results can be found in Table \\ref{tab:evaluation}. We first observe that multimodal learning improves the modelling compared to the visual-only or textual-only methods. Our fusion strategy is also important to obtain high-quality textual summaries. The hierarchical mechanism does not have much impact on the results of the visual summary, which may be because the overall model architecture has achieved its best possible potential in terms of producing a visual summary. Additionally, the fluency loss and cross-modal loss improve the textual summary as well. \n\n\n\n\n\\subsection{Qualitative Analysis}\n\nFigure \\ref{fig:sample} compares the summaries produced by HOT-Net and the baseline methods, and the reference summary of a sample in the XMSMO-News dataset. \nThe example demonstrates that our proposed HOT-Net method produces a factually correct and reasonably fluent extreme textual summary that captures the essence of the document even without supervision. In comparison, as highlighted in red colour, PEGASUS produces a fluent but unfaithful summary with information that does not occur in the original document. \nMost of the methods agree on the choice of the cover frame, whilst ours and CA-SUM are closer to the ground-truth.\n\n\n\\section{Conclusion}\n\nIn this paper, we have introduced a new task - eXtreme Multimodal Summarisation with Multimodal Output (XMSMO), which aims to summarise a video-document pair into an extreme multimodal summary, consisting of one cover frame as the visual summary and one sentence as the textual summary. We present a novel {\\it unsupervised} deep learning architecture, which consists of three components: hierarchical multimodal encoders, hierarchical multimodal fusion decoders, and optimal transport solvers. 
\nIn addition, we construct a new large-scale dataset XMSMO-News to facilitate research in this new direction. Experimental results demonstrate the effectiveness of our method. \nIn the future, we will explore the metric space to measure the optimal transport plan in a more efficient and effective manner. Moreover, we will explore improved ways to learn and identity the information that humans would consider to be \\textit{important}, such as a frame containing the face of a key character.\n\n\\label{cha:conclusion}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{goico:sec1}\nOptical photometric monitoring of gravitationally lensed quasars (GLQs) brings plenty of \nastrophysical information\\cite{schn06}. For example, time delays between correlated brightness\nvariations of their multiple images are used to estimate the current expansion rate of the \nUniverse (the so-called Hubble constant $H_0$), provided lensing mass distributions can be \nconstrained by observational data\\cite{jack15,treu16}. Throughout this paper, $H_0$ is expressed \nin standard units of km s$^{-1}$ Mpc$^{-1}$, so units only are explicitly given in tables and \nfigures. \n\nVery recently, the H0LiCOW collaboration performed a joint analysis of six GLQs with measured \ntime delays\\cite{wong20}. For a flat $\\Lambda$CDM standard cosmology, they obtained $H_0$ = \n73.3$^{+1.7}_{-1.8}$, in good agreement with $H_0$ = 74.03 $\\pm$ 1.42 from SNe data by the SH0ES \ncollaboration\\cite{ries19}, but in apparent tension with Cosmic Microwave Background (CMB) \ndata\\cite{plan20}. {\\it Planck\\\/} observations of the CMB suggested a narrow 1$\\sigma$ interval \nranging from 66.9 to 67.9, which is clearly inconsistent with H0LiCOW\/SH0ES results. It is also \nworth noting that Freedman {\\it et al.\\\/}\\cite{free19} obtained an intermediate value of $H_0$ = 69.8 \n$\\pm$ 1.9.\n\nThe big question is whether the tension between early and late-Universe probes is due to \nsystematic errors or has a physical origin. Possible systematic errors in some methods may fix \nthis issue, avoiding hasty rejection of the standard cosmological model. Thus, we use two doubly\nimaged quasars of the Gravitational LENses and DArk MAtter (GLENDAMA) sample\\cite{gilm18} to \ndiscuss the influence of observational constraints, and hypotheses and priors on the mass model \nin the estimation of the Hubble constant in a standard cosmology. \\Sref{goico:sec2} briefly \npresents the GLENDAMA project and the framework of time-delay cosmography through double \nquasars, while \\sref{goico:sec3} and \\sref{goico:sec4} include preliminary results for the GLQs \nSBS 0909+532 and SDSS J1339+1310, respectively. A discussion of results and future prospects \nappear in \\sref{goico:sec5}. \n\n\\section{GLENDAMA project and $H_0$ from doubles}\\label{goico:sec2}\nThe GLENDAMA project\\footnote{\\url{https:\/\/gravlens.unican.es\/}.} is aimed to accurately study a \nsample of ten GLQs in the Northern Hemisphere over a period of about 25 years, basically \ncovering the first quarter of this century\\cite{gilm18}. The sample includes seven double \nquasars with two images (A and B) each, and three quads having four images (A, B, C and D) each.\n\\Fref{goico:fig1} shows the distribution on the sky of the selected, optically bright GLQs. 
The \nGran Telescopio CANARIAS (GTC) is being used to obtain deep spectroscopy of the lens systems, \nwhile the optical variability of quasar images is traced from light curves mainly based on \nobservations with the Liverpool Telescope (LT). These optical light curves are allowing us to \nmeasure the time delay $\\Delta t_{\\rm AB}$ in doubles, and three independent delays $\\Delta \nt_{\\rm AB}$, $\\Delta t_{\\rm AC}$ and $\\Delta t_{\\rm AD}$ in quads (see results in \n\\tref{goico:tbl1}). Current GLENDAMA delays have been estimated to a typical accuracy of about \n5\\%, with only two exceptions: the relative error in the delay of QSO 0957+561 is well below \n5\\%, and the three delays of the quad HE 1413+117 have larger relative uncertainties.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{goico_fig1.png}\n\\end{center}\n\\caption{GLENDAMA GLQs in the Northern Hemisphere. Triangles and circles represent \nquadruply and doubly imaged quasars, respectively. Larger symbols mean brighter quasars.}\n\\label{goico:fig1}\n\\end{figure}\n\n\\begin{table}\n\\tbl{GLENDAMA time delays from light curves in the SDSS $r$ band.}\n{\\begin{tabular}{@{}lcccl@{}}\n\\toprule\nGLQ & $\\Delta t_{\\rm AB}$ & $\\Delta t_{\\rm AC}$ & $\\Delta t_{\\rm AD}$ & Reference \\\\\n& (days) & (days) & (days) & \\\\\n\\colrule\nPS J0147+4630 & pm & pm & pm & --- \\\\ \nSBS 0909+532 & 50$^{+2}_{-4}$ & --- & --- & Ref.~\\citenum{hain13} \\\\\nFBQ 0951+2635 & pm & --- & --- & --- \\\\\nQSO 0957+561 & 420.6 $\\pm$ 1.9 & --- & --- & Ref.~\\citenum{shal12} \\\\\nSDSS J1339+1310 & 48 $\\pm$ 2 & --- & --- & Ref.~\\citenum{shal21} \\\\\nHE 1413+117 & 17 $\\pm$ 3 & 20 $\\pm$ 4 & 23 $\\pm$ 4 & Ref.~\\citenum{goic10} \\\\\nSDSS J1442+4055 & 25.0 $\\pm$ 1.5 & --- & --- & Ref.~\\citenum{shal19} \\\\\nSDSS J1515+1511 & 211 $\\pm$ 5 & --- & --- & Ref.~\\citenum{shal17} \\\\ \n\\botrule\n\\end{tabular}\n}\n\\begin{tabnote}\npm = preliminary measure.\\\\\n\\end{tabnote}\n\\label{goico:tbl1}\n\\end{table}\n\nThe time delay between the two images of a double quasar can be expressed in terms of the \nso-called time-delay distance $D_{\\Delta t}$, the speed of light $c$ and a dimensionless factor \n$\\Delta \\Phi_{\\rm AB}$, so that\\cite{treu16} $\\Delta t_{\\rm AB} = (D_{\\Delta t}\/c) \\Delta \n\\Phi_{\\rm AB}$. Here, $D_{\\Delta t}$ depends on the source (quasar) and deflector (main lens \ngalaxy) redshifts, as well as cosmological parameters. Measuring the redshifts, and assuming a \nflat $\\Lambda$CDM cosmological model with $\\Omega_{\\rm M}$ = 0.3 (matter density) and \n$\\Omega_{\\Lambda}$ = 0.7 (dark energy density), $D_{\\Delta t}\/c$ is given as a known constant \ndivided by $H_0$\\footnote{The time-delay distance does not appreciably change when matter and \ndark energy densities are slightly different to 0.3 and 0.7, respectively.}. Additionally, \n$\\Delta \\Phi_{\\rm AB}$ depends on the position of both images and the source, and the lens \npotential at the image positions\\cite{jack15,treu16}. Hence, the lensing mass distribution \ndetermines the value of the multiplicative factor $\\Delta \\Phi_{\\rm AB}$.\n\nWe used a lens model to describe the primary lensing mass in SBS 0909+532 and SDSS J1339+1310. \nFor each of these two double quasars, our lens model consisted of an elliptical surface mass \ndensity to account for the main lens galaxy G and an external shear $\\gamma$ due to the \ngravitational action of other galaxies around the lens system. 
The surface mass density of G was \nmodeled as a singular power-law distribution since a composite model (treating baryons and dark \nmatter individually) leads to similar results\\cite{suyu14,mill20}. In this preliminary study, \ninstead of using high-resolution imaging to put constraints on the power-law index of G, we \nfocused on an isothermal distribution, i.e., a singular isothermal ellipsoid (SIE). Such a \ndistribution is consistent with stellar and gas motions in the Milky Way, as well as \nobservations of many spiral and elliptical galaxies. We also did not use the stellar kinematics \nof G\\cite{para09}. \n\nWe considered constraints on the time delay, the relative astrometry and the flux ratio between \nimages, along with some observationally-motivated priors on SIE+$\\gamma$ lens model parameters. \nThese constraints\/priors and the LENSMODEL software\\cite{keet01} allowed us to simultaneously \nfit lens model parameters, position and flux of the source quasar, and $H_0^{\\rm model}$ with \ndof = 0, where \"dof\" means degrees of freedom. In addition to the mass that is explicitly \nincluded in the lens model (main deflector plus external shear), we must take the mass along the \nline of sight to G into account. This additional effect can be approximated as an external \nconvergence in the lens plane $\\kappa_{\\rm ext}$, which may be positive or negative depending on\nthe mass distribution along the sightline. The true time-delay distance $D_{\\Delta t}^{\\rm \ntrue}$ relates to that derived from the lens model and measured delay $D_{\\Delta t}^{\\rm model}$ \nby $D_{\\Delta t}^{\\rm true} = D_{\\Delta t}^{\\rm model}\/(1 - \\kappa_{\\rm ext})$ (e.g., see Eq. \n(4) of Ref.~\\citenum{wong20}), which leads to $H_0^{\\rm true} = H_0^{\\rm model}(1 - \\kappa_{\\rm \next})$. Therefore, when accounting for an external convergence, the Hubble constant \ndecreases\/increases by a factor $1 - \\kappa_{\\rm ext}$. The next two sections deal with \nestimates of $H_0^{\\rm model}$ from observations of SBS 0909+532 and SDSS J1339+1310. \n \n\\section{SBS 0909+532}\\label{goico:sec3}\nSBS 0909+532 is a doubly imaged quasar in which the background source (quasar) and the \nforeground early-type lens galaxy (main deflector) have redshifts $z_{\\rm s}$ = 1.377 and \n$z_{\\rm d}$ = 0.830, respectively\\cite{koch97,osco97,lubi00}. Our first set of observational \nconstraints consisted of the SBS 0909+532 time delay in \\tref{goico:tbl1} taking a symmetric \nuncertainty (50 $\\pm$ 3 days)\\footnote{Although 49 $\\pm$ 3 days is fully consistent with the\nmeasurement in \\tref{goico:tbl1}, we initially preferred to keep its central value and \ndivide the error bar into two identical halves.}, the relative astrometry of B and G (their \npositions with respect to A at the origin of coordinates) and the flux of B in units such that \nthe flux of A is equal to one. These last astro-photometric constraints were taken from the \n$HST$ near-IR data in Table 3 of Ref.~\\citenum{leha00}. We also considered priors on the \nellipticity $e$ and external shear of the SIE+$\\gamma$ lens model described in \n\\sref{goico:sec2}: $e \\leq$ 0.5 (see Table 3 of Ref.~\\citenum{slus12}) and $\\gamma \\leq$ 0.1 \n(see Table 4 of Ref.~\\citenum{leha00}).\n\nAlthough the data fit yielded a best solution for $H_0^{\\rm model}$ of 68.4 (see \n\\tref{goico:tbl2}), unfortunately, the $HST$ relative astrometry of Leh\\'ar {\\it et al.\\\/}\\cite{leha00} \nis not as good as would be desirable. 
For instance, the relative position of the faint lens \ngalaxy G was determined with a large uncertainty of about 100 mas (1 mas = \n0.$^{\\prime\\prime}$001). The insufficiently accurate astrometric measures were responsible for a \nbroad valley in the $\\chi^2$ curve (see the black solid line in \\fref{goico:fig2}), so the \n1$\\sigma$ confidence interval for $H_0^{\\rm model}$ included values below 55 and above 80. If we \nwere able to improve the Leh\\'ar {\\it et al.\\\/}'s astrometry, e.g., reducing errors in relative \npositions of B and G in factors 3 and 10, respectively, the best solution of $H_0^{\\rm model}$ \nwould be practically the same, but its uncertainty would be dramatically decreased to about \n10\\%. Using \"achievable\" uncertainties of 1 mas in B and 10 mas in G, we obtained the black \ndashed-dotted line in \\fref{goico:fig2} and $H_0^{\\rm model}$ = 68.5 $\\pm$ 7.5.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{goico_fig2.png}\n\\end{center}\n\\caption{Estimation of $H_0^{\\rm model}$ from the SBS 0909+532 time delay and the \nastro-photometric constraints of Leh\\'ar {\\it et al.\\\/}\\cite{leha00}. We also used \nobservationally-motivated priors on the ellipticity of the lens galaxy and the external shear. \nWe show the $\\chi^2$ curve (black solid line) along with its 1$\\sigma$ and 2$\\sigma$ maximum \nthresholds (horizontal dashed lines). The black dashed-dotted line corresponds to an \"improved\" \nastrometry (see main text), with blue, green and red dashed-dotted lines describing some \ncontributions to the total $\\chi^2$.}\n\\label{goico:fig2}\n\\end{figure}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{goico_fig3.png}\n\\end{center}\n\\caption{Estimation of $H_0^{\\rm model}$ from the SBS 0909+532 time delay and the \nastro-photometric constraints of Sluse {\\it et al.\\\/}\\cite{slus12}. The solid lines are related to \npriors on the shape of the lens galaxy (ellipticity and position angle; the black solid line \nrepresents the total $\\chi^2$), while the black dot and vertical arrow indicate the best \nsolution when using priors on the ellipticity and the external shear.}\n\\label{goico:fig3}\n\\end{figure}\n\nIn addition to new observations of the lens system, a reanalysis of the available $HST$ frames \nof SBS 0909+532 might produce a better astrometry for the system, and thus provide an accurate \nmeasure of the Hubble constant. This is a promising task that we and other astronomers are \nexploring. Sluse {\\it et al.\\\/}\\cite{slus12} have reanalysed the available $HST$ near-IR frames, \nobtaining a formally improved astrometry and even details on the structure of G. The error in \nthe relative position of G was only 3 mas; about 30$-$40 times smaller than the uncertainty \nderived by Leh\\'ar {\\it et al.\\\/} We also considered these new constraints to measure $H_0^{\\rm model}$. \nUsing the SBS 0909+532 time delay with symmetric error (see above) and the astro-photometric \nsolutions in Table 4 of Sluse {\\it et al.\\\/}, along with the ellipticity and position angle of G in \nTable 3 of Sluse {\\it et al.\\\/} (priors on the SIE+$\\gamma$ lens model), we found $H_0^{\\rm model}$ = \n38.2 $\\pm$ 3.3 (see \\tref{goico:tbl2} and the black solid line in \\fref{goico:fig3}). Even the \n2$\\sigma$ confidence interval only includes values below 45. 
Although a moderate increase in the \nbest solution of $H_0^{\\rm model}$ is found when taking the previous priors $e \\leq$ 0.5 and \n$\\gamma \\leq$ 0.1 (see the black dot and vertical arrow in \\fref{goico:fig3}), the Sluse {\\it et al.\\\/}'s \nrelative astrometry leads to best solutions below 50. Hence, either such astrometry is \nbiased or the near-IR fluxes of the quasar images (optical emission) are strongly affected by \nmicrolensing in the lens galaxy\\cite{medi05}. \n\n\\begin{sidewaystable}\n\\tbl{Results for $H_0^{\\rm model}$ using a SIE+$\\gamma$ lens model (see main text).}\n{\\begin{tabular}{@{}cccccccccccccc@{}}\n\\toprule\\\\[-6pt]\n & \\multicolumn{6}{c}{Observational constraints} &\n & \\multicolumn{3}{c}{Priors on model parameters} &\n & \\multicolumn{2}{c}{$H_0^{\\rm model}$}\\\\[3pt]\n\\cline{2-7}\\cline{9-11}\\cline{13-14}\\\\[-6pt]\nGLQ & $\\Delta t_{\\rm AB}$$^{\\text a}$ & $\\Delta x_{\\rm AB}$$^{\\text b}$ & $\\Delta y_{\\rm AB}$$^{\\text b}$ \n& $\\Delta x_{\\rm AG}$$^{\\text b}$ & $\\Delta y_{\\rm AG}$$^{\\text b}$ & $F_{\\rm B}\/F_{\\rm A}$$^{\\text c}$ &\n& $e$$^{\\text d}$ & $\\theta_e$$^{\\text d}$ & $\\gamma$$^{\\text e}$ & \n& best$^{\\text f}$ & 1$\\sigma^{\\text f}$\\\\[3.5pt]\n\\hline\\\\[-6pt]\nSBS 0909+532 & 50 $\\pm$ 3 & $-$0.987 $\\pm$ 0.003 & $-$0.498 $\\pm$ 0.003 & $-$0.415 $\\pm$ 0.100 & \n$-$0.004 $\\pm$ 0.100 & 0.89 $\\pm$ 0.10 & & $\\leq$ 0.5 & --- & $\\leq$ 0.1 & & 68.4 & ---\\\\[3.5pt]\n & & $-$0.987 $\\pm$ 0.001 & $-$0.498 $\\pm$ 0.001 & $-$0.415 $\\pm$ 0.010 & \n$-$0.004 $\\pm$ 0.010 & & & & & & & 68.3 & 68.5 $\\pm$ 7.5$^{\\text g}$\\\\[3.5pt] \n & & $-$0.9868 $\\pm$ 0.0006 & $-$0.4973 $\\pm$ 0.0006 & $-$0.464 $\\pm$ 0.003 & \n$-$0.055 $\\pm$ 0.003 & 0.88 $\\pm$ 0.10 & & 0.11 $\\pm$ 0.08 & $-$48.1 $\\pm$ 16.9 & --- & & 38.1 & 38.2 $\\pm$ 3.3$^{\\text h}$\\\\[3.5pt]\n & & & & & \n & & & $\\leq$ 0.5 & --- & $\\leq$ 0.1 & & 47.5 & ---\\\\[3.5pt]\nSDSS J1339+1310 & 47.0 $\\pm$ 5.5 & +1.419 $\\pm$ 0.001 & +0.939 $\\pm$ 0.001 & +0.981 $\\pm$ 0.010 & \n+0.485 $\\pm$ 0.010 & 0.175 $\\pm$ 0.015 & & 0.18 $\\pm$ 0.05 & 32 $\\pm$ 10 & ---- & & 69.1 & 69$^{+10}_{-8}$$^{\\text i}$\\\\[3.5pt]\n & 48 $\\pm$ 2 & & & & \n& & & & & & & 67.6 & 67.8 $\\pm$ 4.4\\\\[3pt]\n\\Hline\n\\end{tabular}}\n\\begin{tabnote}\n\\\\\n$^{\\text a}$ Time delay between both images in days. Some errors have been made symmetric.\\\\\n$^{\\text b}$ Relative positions of B and G with respect to A at the origin of coordinates. Here, $\\Delta x$ \nand $\\Delta y$ are given in arc seconds, and their positive directions are defined by west and north, \nrespectively. For SBS 0909+532, some errors have been conveniently approximated.\\\\\n$^{\\text c}$ Flux ratio. For SBS 0909+532, errors are enlarged to 10\\% to account for moderate microlensing \neffects.\\\\\n$^{\\text d}$ Ellipticity and position angle of the SIE. The position angle ($\\theta_e$) is measured east of \nnorth.\\\\\n$^{\\text e}$ External shear strength.\\\\\n$^{\\text f}$ Best solution and 1$\\sigma$ confidence interval for $H_0^{\\rm model}$. We use standard units of \nkm s$^{-1}$ Mpc$^{-1}$.\\\\\n$^{\\text g}$ Plausible but not real measurement. 
Astrometric errors have been reduced to \"achievable\" values\n(see next row).\\\\\n$^{\\text h}$ Real measurement, but based on a biased astrometry or an inappropriate (strongly affected by \nmicrolensing) flux ratio.\\\\\n$^{\\text i}$ Measurement relying on an old, inaccurate time delay.\\\\\n\\end{tabnote}\n\\label{goico:tbl2}\n\\end{sidewaystable}\n\n\\section{SDSS J1339+1310}\\label{goico:sec4}\nThe gravitational lens system SDSS J1339+1310 was discovered by Inada {\\it et al.\\\/}\\cite{inad09}. It \nconsists of two quasar images (A and B) at $z_{\\rm s}$ = 2.231 and an early-type galaxy G at \n$z_{\\rm d}$ = 0.607 acting as main deflector\\cite{goic16}. The first set of observational \nconstraints included the relative astrometry of B and G in the last column of Table 1 of \nRef.~\\citenum{shal14}, the macrolens magnification ratio from narrow-line\/line-core flux ratios \nand a standard extinction law (based on emission lines in GTC spectra)\\cite{goic16}, and an old \ntime delay from LT light curves\\cite{goic16}. We note that the first time delay we used (47.0 \n$\\pm$ 5.5 days) is less accurate than the updated delay in \\tref{goico:tbl1}. Additionally, we \nhave taken the ellipticity and position angle of G in the last column of Table 1 of Shalyapin \n{\\it et al.\\\/}\\cite{shal14} as priors on the SIE+$\\gamma$ lens model. The data fit led to a 1$\\sigma$ \nconfidence interval $H_0^{\\rm model}$ = 69$^{+10}_{-8}$ (accuracy of $\\sim$13\\%; see \n\\tref{goico:tbl2} and the black line in \\fref{goico:fig4}). The observational constraint on the \ntime delay is the primary contribution to the $\\chi^2$ curve (see the red line in \n\\fref{goico:fig4}), while other constraints\/priors (e.g., the position of G; see the green line \nin \\fref{goico:fig4}) play a secondary role.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{goico_fig4.png}\n\\end{center}\n\\caption{Estimation of $H_0^{\\rm model}$ from an old time delay of SDSS J1339+1310 with an \naccuracy of $\\sim$12\\% and constraints\/priors from results in Refs.~\\citenum{goic16} and \n\\citenum{shal14}. The black line represents the total $\\chi^2$, while the blue, green and red \nlines describe three different contributions to the total curve. The 1$\\sigma$ and 2$\\sigma$ \nmaximum thresholds are also depicted (horizontal dashed lines).}\n\\label{goico:fig4}\n\\end{figure}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{goico_fig5.png}\n\\end{center}\n\\caption{Estimation of $H_0^{\\rm model}$ from the updated time delay of SDSS J1339+1310 with an \naccuracy of $\\sim$4\\% and constraints\/priors from results in Refs.~\\citenum{goic16} and \n\\citenum{shal14}. (a) SIE+$\\gamma$ lens model. (b) DV+$\\gamma$ lens model. To obtain \nthe $\\chi^2$ curve in the bottom panel, we have assumed that light traces the mass of the main lens \ngalaxy (see main text).}\n\\label{goico:fig5}\n\\end{figure} \n\nResults in \\fref{goico:fig4} suggest that a tighter constraint on the time delay would produce a \nmore accurate determination of the Hubble constant. Therefore, in a second approach, we used the \nupdated time delay with a 4\\% error that appears in \\tref{goico:tbl1} to estimate \n$H_0^{\\rm model}$ more accurately. The new $\\chi^2$ curve in \\fref{goico:fig5}(a) indicates that \n$H_0^{\\rm model}$ = 67.8 $\\pm$ 4.4 (see also \\tref{goico:tbl2}). 
This is quite a robust measurement of \n$H_0^{\\rm model}$ because its relative error is small (only 6.5\\%), and the priors on $e$ and \n$\\theta_e$ do not play a relevant role (see the blue dashed line in \\fref{goico:fig5}(a)). In \naddition, the macrolens magnification ratio is not affected by microlensing\/extinction effects, \nand only one major issue must be addressed: the hypothesis of an isothermal mass distribution \nfor G. Using 58 gravitational lens systems from the SLACS Survey, Koopmans {\\it et al.\\\/}\\cite{koop09} \nconcluded that massive early-type galaxies have close to isothermal total density profiles, with \na scatter between their logarithmic density slopes below 10\\%. For a particular lens galaxy, a \nsmall deviation from the isothermal power-law index is plausible, and this potential deviation \ncan be taken into account by increasing the error in $H_0^{\\rm model}$ (see a more complete \ndiscussion in \\sref{goico:sec5}). \n\nIt is easy to demonstrate the need for dark matter (e.g., a power-law mass distribution) and the \nfact that a model in which light traces mass produces biased results. To this end, we again \nconsidered the updated time delay in \\tref{goico:tbl1} and added a new prior, i.e., we worked \nwith three priors instead of two. Assuming that light traces the mass of G, i.e., a de Vaucouleurs (DV) \nmass distribution instead of a singular isothermal one, in a self-consistent way, the optical \nstructure of G in the last column of Table 1 of Ref.~\\citenum{shal14} (effective radius, \nellipticity and position angle) was used to describe the structure of its mass. This scheme led \nto a biased $H_0^{\\rm model}$ value of about 100 (97.7 $\\pm$ 6.4; see \\fref{goico:fig5}(b)). \nEven the 2$\\sigma$ lower limit is above 85. \n\n\\section{Discussion and future prospects}\\label{goico:sec5}\nUsing two double quasars of the GLENDAMA sample (see \\fref{goico:fig1}), we focused on the role \nthat some observational constraints and hypotheses\/priors on the mass model play in estimating \n$H_0^{\\rm model}$ in a standard cosmology. The main lens galaxies in SBS 0909+532 and SDSS \nJ1339+1310 were modelled with a singular isothermal ellipsoid, in agreement with observations in \nthe Milky Way and SLACS Survey results for massive early-type galaxies acting as gravitational \nlenses\\cite{koop09}. Adding the external shear $\\gamma$ that is caused by galaxies around a lens \nsystem, we initially considered a SIE+$\\gamma$ lens (mass) model. \n\nFor SBS 0909+532, there are two different astrometric solutions based on the same $HST$ near-IR \ndata. While Leh\\'ar {\\it et al.\\\/}'s solution\\cite{leha00} led to a best value of $H_0^{\\rm model}$ \nequal to 68.4 and a broad 1$\\sigma$ interval for this parameter, Sluse {\\it et al.\\\/}'s \nsolution\\cite{slus12} provided an 8.6\\% measurement of $H_0^{\\rm model}$ around a central value \nof 38.2 (we derived biased results when making different choices of priors). Assuming that the time \ndelay and flux ratio between quasar images that we used are right, the latter astrometry would be \nbiased. However, the observed near-IR fluxes correspond to optical emission from the quasar \naccretion disk, so they could be strongly affected by microlenses (stars) in the main lens \ngalaxy\\cite{medi05}. 
Hence, an accurate and reliable astrometric solution along with a detailed \nanalysis of the macrolens magnification ratio (flux ratio free from extinction and microlensing \neffects) is required before robustly measuring $H_0^{\\rm model}$ for a SIE+$\\gamma$ scenario. \n \nResults for SDSS J1339+1310 are very encouraging because its current astrometry, updated time \ndelay and macrolens magnification ratio from GTC spectra allowed us to accurately measure \n$H_0^{\\rm model}$ (67.8 $\\pm$ 4.4), with priors on the ellipticity and position angle of the SIE \nnot playing a relevant role. It is also noteworthy that our 1$\\sigma$ interval is not in tension \nwith other recent estimates from GLQs\\cite{wong20} and the CMB\\cite{plan20} (see also \nRef.~\\citenum{free19}), and the central value practically coincides with the upper limit from the \n{\\it Planck\\\/} collaboration. Accounting for a potential microlensing effect on the time \ndelay\\cite{tiek18} ($\\sim$1 day) would only modify $H_0^{\\rm model}$ by $\\sim$2\\%. Additionally, \nthe use of a main galaxy mass model with an unrealistic density profile may have a significant \nimpact on $H_0^{\\rm model}$ and be responsible for an error of about 10\\%\\cite{schn13,koch20,\nstac21}. At present, we do not know details about the mass density profile of the main deflector \nin SDSS J1339+1310, and thus we should adopt an uncertainty in $H_0^{\\rm model}$ greater than \nthat obtained with a SIE. Very recently, assuming that the deflectors of the H0LiCOW GLQs and \nthe SLACS lenses share the same mass density properties, Birrer {\\it et al.\\\/}\\cite{birr20} have \nobtained $H_0$ = 67.4$^{+4.1}_{-3.2}$. This new GLQ-based result is in excellent agreement with \nours and the CMB-based estimation of $H_0$, notably reduces the tension between early- and \nlate-Universe probes, and illustrates the importance of assumptions on mass distributions. \n\nFuture time-domain observations of large collections of GLQs will lead to robust constraints on \n$H_0$, and the matter and dark energy components of the Universe\\cite{treu16}. The GLENDAMA \nproject includes the first initiative to robotically monitor a small sample of 10 GLQs for about \n20 years\\cite{gilm18}. This project and the associated robotic monitoring with the LT will end \nin 2025, after providing accurate time delays for several GLQs and discussing their cosmological \nimplications. In the next few years, other ongoing monitoring projects will also measure accurate \ndelays for small\/medium samples of GLQs (see the paper by Geoff Chih-Fan Chen in these \nproceedings), which will contribute to a rich database of tens of measured delays. Despite this \noptimistic perspective on time-domain results, some issues must be fixed before shedding light \non unbiased values of cosmological parameters from such a delay database. Deep spectroscopy, \nhigh-resolution imaging and other complementary observations will be required. For example, \nunaccounted mass along GLQ sightlines may produce overestimated\/underestimated values of $H_0$ \n(see the end of \\sref{goico:sec2}), so accurate $H_0$ estimates cannot ignore external \nconvergences. Here, although we ignored the external convergence for SDSS J1339+1310, the \nunaccounted mass is expected to translate into a few percent relative uncertainty in \n$H_0$\\cite{birr20,rusu17}, noticeably less than that related to the mass density profile of G \n(see above). 
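Intervals of the kind quoted throughout this contribution can be read directly off a tabulated $\\chi^2$ curve \nusing the standard $\\Delta\\chi^2$ prescription for a single parameter of interest: the 1$\\sigma$ (2$\\sigma$) \nrange of $H_0^{\\rm model}$ collects the values whose total $\\chi^2$ lies within 1 (4) of the minimum of the \ncurve (cf. the horizontal dashed thresholds in \\fref{goico:fig2} and \\fref{goico:fig4}). As a purely \nillustrative sketch (this is not the modelling code behind our results, and the parabolic toy curve is only \na stand-in for the real, generally asymmetric curves), the following Python fragment extracts such an \ninterval from a gridded curve.\n\\begin{verbatim}\nimport numpy as np\n\n# Toy stand-in for a tabulated total chi-square curve over a grid of H0 values.\n# In practice chi2(H0) would combine the time-delay, astrometric and flux-ratio terms.\nh0 = np.linspace(30.0, 110.0, 1601)            # km s^-1 Mpc^-1\nchi2 = 2.0 + ((h0 - 68.5) / 7.5) ** 2          # illustrative curve with minimum at 68.5\n\ni_min = int(np.argmin(chi2))\nbest = h0[i_min]\nin_1sigma = h0[chi2 <= chi2[i_min] + 1.0]      # Delta chi2 = 1 threshold\nin_2sigma = h0[chi2 <= chi2[i_min] + 4.0]      # Delta chi2 = 4 threshold\nprint(best, in_1sigma.min(), in_1sigma.max())\n\\end{verbatim}\nWhen the tabulated curve is asymmetric, the same thresholding naturally yields asymmetric error bars, as in \nthe $H_0^{\\rm model}$ = 69$^{+10}_{-8}$ interval reported in \\sref{goico:sec4}.\n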
\n \n\\section*{Acknowledgments}\nWe thank the organizers of Sixteenth Marcel Grossmann Meeting for planning a very interesting \nevent and allowing us to give a talk in the parallel session \"Cosmography with Gravitational \nLensing\". We also thank the chairs of such parallel session for creating a pleasant environment. \nWe acknowledge Claudio Grillo, Mimoza Hafizi and Sherry Suyu for helpful comments that have \nsignificantly contributed to prepare the final text of this contribution. Among other things, \nthe Gravitational LENses and DArk MAtter (GLENDAMA) project aims to construct accurate optical \nlight curves of SBS 0909+532 and SDSS J1339+1310, and measure robust time delays for both \nsystems. Although these optical variability studies mainly rely on observations with the \nLiverpool Telescope (LT), we are particularly grateful to our collaborators working in several \ninstitutions, who provide us with complementary data from the Maidanak Astronomical Observatory \nand the US Naval Observatory, and participate actively in the project development. The LT is \noperated on the island of La Palma by the Liverpool John Moores University (with financial \nsupport from the UK Science and Technology Facilities Council), in the Spanish Observatorio del \nRoque de los Muchachos of the Instituto de Astrof\\'isica de Canarias. We thank the staff of the \ntelescope for a kind interaction before, during and after the observations. This research has \nbeen supported by the grant AYA2017-89815-P funded by MCIN\/AEI\/10.13039\/501100011033 and by \n\"ERDF A way of making Europe\", and the grant PID2020-118990GB-I00 funded by \nMCIN\/AEI\/10.13039\/501100011033. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Inherent Fairness Trade-Offs in Resource Allocation}\n\\label{sec: method}\n\n\nIn this section we describe our theoretical framework, first defining the problems we are concerned with, and then outlining both general and illustrative results on inherent group fairness trade-offs in the allocation of scarce resources.\n\n\\subsection{Setting}\nWe consider $K$ services, with maximum capacities $c_{k}$ for $k\\in \\{1, ..., K\\}$, and $N$ individuals $i=1, ..., N$.\\footnote{We follow the convention of denoting vectors in bold type and random variables with capital letters.} We can thus describe individuals by their utility vector $\\mathbf{u}=(u_{1}, ..., u_{K})$ over each program $k$ and their sensitive attribute $s\\in\\mathcal{S}$. $\\mathcal{S}$ describes the set of groups for which we want to study the fairness of service allocation. For ease of exposition, we assume that group characteristics are binary and $\\mathcal{S}=\\{0, 1\\}$; however, our results readily extend to more complex definitions of groups, and the empirical section will show that our results hold for intersectional groups. We denote by $N_{s}$ the number of individuals with sensitive attribute $s=0, 1$.\n\nFor each individual $\\mathbf{u}$, we denote by $u^{\\min}$ the utility derived from receiving the least beneficial program: $u^{\\min}=\\min\\{u_{k}|k=1, .., K\\}$. We denote by $u^{\\max}$ the utility derived from receiving the most beneficial program: $u^{\\max}=\\max\\{u_{k}|k=1, .., K\\}$. Best and worst programs might vary among individuals. $u^{\\min}$ could potentially characterize a ``do nothing option'', i.e. the individuals' utility without the intervention. 
We assume that $\\mathbf{u}$ is drawn from a joint distribution $G_{s}(u)$ over $\\mathbb{R}^{K}$ that depends on the value $s$ of the sensitive attribute. We denote the random utility vector $U$.\n\nAn allocation policy $\\mathbf{a}:\\mathbb{R}^{K}\\rightarrow \\{0, 1\\}^{K}$ assigns each individual with utility $\\mathbf{u}$ to a program $k$ if and only if $a_{k}(\\mathbf{u})=1$. We assume that individuals are assigned to only one program: $\\sum_{k=1}^{K}a_{k}(\\mathbf{u})=1$. We denote by $\\mathbf{a}.\\mathbf{u}$ the inner product between the policy assignment and the individual utility: $\\mathbf{a}.\\mathbf{u} = \\sum_{k=1}^{K}a_{k}(\\mathbf{u})u_{k}$. Given $N$ individuals $i$ with utility $\\mathbf{u}_{i}$, the allocation is feasible if and only if for all programs $k$, $\\sum_{i=1}^{N}a_{k}(\\mathbf{u}_{i})\\leq c_{k}$ (the maximum capacity for the $k$-th service). \n\n\\subsection{Fairness, Baselines, and Normalization}\nIn this section, we consider four notions of fairness to compare the average realized utility between groups: \\improvement, regret\\xspace, \\equitability, and gain\\xspace. The definitions differ along two dimensions (1) how they normalize individual utility (additive or multiplicative), and (2) which baselines they compare individual realized utility to (worst case or best case).\n\nThe \\improvement and gain\\xspace metrics use as a baseline the minimal or worst utility that an individual can expect from any service they receive. To be fair, the definitions say that the average increase in utility relative to the least beneficial intervention should be equal across groups. They differ in how they normalize realized utility relative to the baseline; \\improvement uses an additive normalization, while gain\\xspace uses a multiplicative normalization.\n\n\\begin{dfn}\n\\textbf{\\Improvement fairness.}\nAn allocation policy $\\mathbf{a}$ satisfies fair \\improvement if and only if\n\\begin{equation}\n E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i, s=0}\\mathbf{a}.(\\mathbf{u}_{i} - u^{\\min}_{i})\\right]= E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i, s=1}\\mathbf{a}.(\\mathbf{u}_{i} - u^{\\min}_{i})\\right],\n\\end{equation}\nwhere the expectation is taken over samples of size $N_{s}$ for the group with sensitive attribute $s=0,1$.\n\\end{dfn}\n\n\\begin{dfn}\n\\textbf{Gain\\xspace fairness.}\nAn allocation policy $a$ satisfies fair gain\\xspace if and only if\n\\begin{equation}\n E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i, s=0}\\mathbf{a}.\\frac{\\mathbf{u}_{i}}{ u^{\\min}_{i}}\\right]= E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i, s=1}\\mathbf{a}.\\frac{\\mathbf{u}_{i}}{ u^{\\min}_{i}}\\right].\n\\end{equation}\n\\end{dfn}\n\nWe denote by $\\Delta I(\\mathbf{a})$ the difference in \\improvement between groups:\n\\begin{equation}\n \\Delta I(\\mathbf{a}) = E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i, s=1}\\mathbf{a}.(\\mathbf{u}_{i} - u^{\\min}_{i})\\right] - E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i, s=0}\\mathbf{a}.(\\mathbf{u}_{i} - u^{\\min}_{i})\\right].\n\\end{equation}\nIf $\\Delta I(\\mathbf{a})$ is positive, the policy $\\mathbf{a}$ favors group $1$; if $\\Delta I(\\mathbf{a})$ is negative, the policy favors group $0$. 
We define similarly differences in gain\\xspace as $\\Delta G(\\mathbf{a})$.\n\nRegret\\xspace fairness and \\equitability benchmark the realized utility in comparison to the best outcome individuals can hope for from any service (as such they are related to the classical definition of \\emph{equitability} in fair division, albeit with differences in normalization). Both fairness notions are satisfied when the average loss of utility compared to receiving the most beneficial program is equalized across groups. \n\n\\begin{dfn}\n\\textbf{Regret\\xspace fairness.}\nAn allocation policy $a$ satisfies regret\\xspace fairness if and only if\n\\begin{equation}\n E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i: s=0}\\mathbf{a}.(u^{\\max}_{i}- \\mathbf{u}_{i})\\right]= E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i: s=1}\\mathbf{a}.(u^{\\max}_{i}- \\mathbf{u}_{i})\\right],\n\\end{equation}\n\\end{dfn}\n\n\\begin{dfn}\n\\textbf{\\Equitability.}\nAn allocation policy $a$ satisfies \\equitability if and only if\n\\begin{equation}\n E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i: s=0}\\mathbf{a}.\\frac{\\mathbf{u}_{i}}{ u^{\\max}_{i}}\\right]= E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i: s=1}\\mathbf{a}.\\frac{\\mathbf{u}_{i}}{ u^{\\max}_{i}}\\right].\n\\end{equation}\n\\end{dfn}\n\nLike differences in \\improvement or in gain\\xspace, we denote differences in \\equitability and regret\\xspace as $\\Delta S(\\mathbf{a})$ and $\\Delta R(\\mathbf{a})$, respectively. Note that $\\Delta R(\\mathbf{a})\\geq 0$ means that the policy $\\mathbf{a}$ favors group $S=0$ over group $S=1$ for regret fairness.\n\nAll four definitions represent reasonable and desirable properties of a fair allocation. However, the following results show that a decision-maker faces trade-offs when choosing which fairness notion to target. Not only might the notions not be satisfied simultaneously, it is possible to generate explicitly contradictory conclusions across the relatively similar fairness metrics regarding which group is under-served. \n\n\\subsection{\\Improvement versus Regret\\xspace}\\label{subsec:imp_vs_regret}\n\nOur first result shows that \\improvement and regret\\xspace fairness cannot be satisfied simultaneously, unless we impose strong restrictions on how groups differ. Consider two random variables $U^{\\max}$ and $U^{\\min}$ defined on individual most and least beneficial utility. The maximum individual utility gain that can be delivered by a service is then a random variable $\\Delta U = U^{\\max}-U^{\\min}$. We show that heterogeneity in $\\Delta U$ across groups generates an inherent trade-off between improvement and regret fairness. \n\n\\begin{thm}\n\\label{thm: trade-off}\nIf an allocation policy $\\mathbf{a}$ satisfies both \\improvement and regret\\xspace fairness then the average maximum utility gain $\\Delta U$ must be equal across groups: $E[\\Delta U|S=0] = E[\\Delta U|S=1]$. Moreover, $\\Delta I(\\mathbf{a}) + \\Delta R(\\mathbf{a})=E[\\Delta U|S=1] - E[\\Delta U|S=0]$. 
\n\\end{thm}\n\n\\begin{proof}\nThe proof is based on the following identities:\n\\begin{equation}\n \\begin{split}\n \\Delta I(\\mathbf{a})& = E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i: s=1}\\mathbf{a}(\\mathbf{u}).(\\mathbf{u}_{i}-u^{\\max}_{i} + \\Delta u_{i})\\right] - E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i: s=0}\\mathbf{a}(\\mathbf{u}).(\\mathbf{u}_{i}-u^{\\max}_{i} + \\Delta u_{i})\\right] \\\\\n & =E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i: s=1}\\sum_{k=1}^{K}\\mathbf{a}_{k}(\\mathbf{u})\\Delta u_{i}\\right] - E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i: s=0}\\sum_{k=1}^{K}\\mathbf{a}_{k}(\\mathbf{u})\\Delta u_{i}\\right] - \\Delta R(\\mathbf{a})\\\\\n & =E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i: s=1}\\Delta u_{i}\\right] - E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i: s=0}\\Delta u_{i}\\right] - \\Delta R(\\mathbf{a}),\n \\end{split}\n\\end{equation}\nwhere the last equality comes from the fact that $\\sum_{k=1}^{K}a_{k}(u)=1$ for all $u$. Therefore, if $\\Delta I(\\mathbf{a})=0$ and $\\Delta R(\\mathbf{a})=0$, then $E[\\Delta U|S=0] = E[\\Delta U|S=1]$.\n\\end{proof}\n\nThe result in Theorem \\ref{thm: trade-off} implies that regardless of the allocation policy, for both \\improvement and regret\\xspace fairness to hold it is necessary that groups would gain on average similarly if they were always allocated their most beneficial intervention. Thus, a trade-off exists when defining what a fair assignment should look like: for example, a policy satisfying improvement fairness would always violate regret fairness unless $E[\\Delta U|S=0] = E[\\Delta U|S=1]$. Since $\\Delta I(\\mathbf{a}) + \\Delta R(\\mathbf{a})=E[\\Delta U|S=1] - E[\\Delta U|S=0]$, the closer a policy is to satisfying improvement fairness, the worse its regret fairness, and vice-versa. \nA follow up question is whether \\improvement and regret\\xspace fairness tell a different story about the fairness of an allocation policy $a$. The next result shows that whenever $E[\\Delta U|S=0]$ and $E[\\Delta U|S=1]$ differ, unless all policies favor one group, there exists a policy that favors one group for \\improvement fairness and favors the other one for regret\\xspace fairness. \n\n\\begin{thm}\n\\label{cor: 2}\nSuppose that $E[\\Delta U|S=1] > E[\\Delta U | S=0]$. Suppose that there exists a policy that favors group $S=0$ for \\improvement fairness and another policy that favors group $S=1$ for \\improvement fairness. Then, there exists a policy $\\mathbf{a^{*}}$ such that $\\Delta I(\\mathbf{a^{*}}) > 0 \\mbox{ and } \\Delta R(\\mathbf{a^{*}}) > 0\n$. That is, there exists a policy that favors $S=1$ with respect to \\improvement fairness (larger is better), but favors $S=0$ with respect to regret\\xspace fairness (lower is better).\n\\end{thm}\n\nThe proof of Theorem \\ref{cor: 2} relies on the fact that the set of differences in \\improvement\/regret\\xspace is a continuous interval:\n\\begin{lem}\n\\label{lem: 1}\nSuppose that there exist two allocation policies $\\mathbf{a}$ and $\\mathbf{a}^{'}$ with differences in \\improvement $\\delta$ and $\\delta^{'}> \\delta$. Then, for any $\\delta^{*}\\in[\\delta, \\delta^{'}]$, there exists an allocation policy $\\mathbf{a}^{*}$ with difference in \\improvement equal to $\\delta^{*}$. A similar result holds for difference in regret\\xspace. \\end{lem}\n\n\\begin{proof}\nWe show the result for differences in \\improvement. The proof can be readily extended to differences in regret\\xspace. 
We choose $\\lambda = \\frac{\\delta^{'} - \\delta^{*}}{\\delta^{'} - \\delta}\\in [0, 1]$. We define an allocation policy $\\mathbf{a}^{\\lambda}$ as follows:\n\\begin{itemize}\n \\item Partition randomly the individuals into two populations $G_{\\lambda}$ and $G_{1-\\lambda}$ of size $\\lambda N$ and $(1-\\lambda )N$, respectively.\n \\item For each program $k$, assign $\\lambda c_{k}$ of them to the population $G_{\\lambda}$; and $(1-\\lambda)c_{k}$ of them to the population $G_{1 -\\lambda}$. \n \\item Apply the allocation policy $\\mathbf{a}$ to the population $G_{\\lambda}$ and $\\mathbf{a}^{'}$ to the population $G_{1-\\lambda}$. \n\\end{itemize}\nBy construction the policy $a_{\\lambda}$ satisfies the resource constraints. Moreover,\n\\begin{equation}\n \\Delta I(\\mathbf{a}_{\\lambda}) = \\Delta I(\\mathbf{a}) P(G_{\\lambda}) + \\Delta I(\\mathbf{a}^{'}) (1 - P(G_{1-\\lambda}) =\\delta \\lambda + \\delta^{'}(1-\\lambda) = \\delta^{*},\n\\end{equation}\nwhere the last equality comes from our choice for the value of $\\lambda$. \n\\end{proof}\n\n\\begin{proof}\nTheorem \\ref{cor: 2}. We choose $\\epsilon = \\frac{E[\\Delta U|S=1] - E[\\Delta U | S=0]}{2}$. $\\epsilon >0$ by assumption. Using the assumption of Theorem \\ref{cor: 2}, there exist $\\mathbf{a}$ and $\\mathbf{a}^{'}$ such that $\\Delta I(\\mathbf{a}) < 0$ and $\\Delta I(\\mathbf{a}^{'}) > 0$. We apply Lemma \\ref{lem: 1} with $\\delta =\\Delta I(\\mathbf{a})< 0 $, $\\delta^{'}=\\Delta I(\\mathbf{a}^{'}) > 0$ and $\\delta^{*}=\\min\\{\\epsilon, \\delta^{'} \/ 2\\}$: there exists a policy $\\mathbf{a}^{*}$ such that $\\Delta I(\\mathbf{a}^{*}) = \\delta^{*} > 0$. Moreover, $\\Delta R(\\mathbf{a}^{*}) = E[\\Delta U|S=1] - E[\\Delta U | S=0] - \\Delta I(\\mathbf{a}^{*}) \\geq \\epsilon > 0$.\n\\end{proof}\n\nThus, regret\\xspace fairness and \\improvement fairness cannot hold simultaneously unless populations are homogeneous in terms of their best response to the allocation (Theorem \\ref{thm: trade-off}). Moreover, assessing which group is favored by a given policy can lead to contradictory results depending on whether we measure the fairness properties of the policy in terms of differences in \\improvement or regret. The result in Theorem \\ref{cor: 2} illustrates that decision-makers cannot expect that \\improvement and regret\\xspace notions tell a similar story about whether an allocation policy under-serves a given group. Results Theorem \\ref{thm: trade-off} and Theorem \\ref{cor: 2} are general, since they hold for any set of capacities $c_{1}$, ..., $c_{K}$ and for distributions of utilities provided that $E[\\Delta U|S=1] > E[\\Delta U | S=0]$. Both illustrate the central role of the difference between $E[\\Delta U|S=0]$ and $E[\\Delta U|S=1]$ in driving wedges between \\improvement and regret\\xspace fairness. Additionally, Theorem \\ref{cor: 2} is not very restrictive in its assumptions, since it only requires that no group is under-served regardless of the policy. \n\n\\subsection{\\Equitability versus Gain\\xspace}\nIn this section, we show that the fairness trade-offs between \\improvement and regret\\xspace exist also with multiplicative notion of fairness, gain\\xspace and \\equitability. Unlike trade-offs between improvement and regret where our results are general, in the case of \\equitability versus gain\\xspace, we derive results in a stylized framework and leave it to future work to extend our results to more general settings. 
Nevertheless, this section captures the essence of the problem in the multiplicative setting. We denote for each individual by $r=u^{\\min}\/u^{\\max}$ the ratio between the lowest and highest utility obtained from the intervention. This serves as a multiplicative counterpart of $\\Delta u$. We consider the following framework (SF1):\n\\begin{itemize}\n \\item There are two types of individuals: type $A$ with high value $\\overline{r}$ for the ratio $r$; type $B$ with a low value $\\underline{r}< \\overline{r}$ for $r$. \n \\item Conditional on $r$, the distribution of utility is similar across programs and types.\n\\end{itemize}\nIn this stylized framework, assigning to an individual their most beneficial program delivers either a large increase $\\overline{r}$ over $u^{\\min}$ (type A) or a lower one $\\underline{r}$ (type B). We characterize the heterogeneity across groups by differences in the distribution of type A and B within each group. We denote by $\\pi_{0}$ the proportion of type B individuals for group $S=0$; and, $\\pi_{1}$ the proportion of type $B$ for group $S=1$.\n\n\\begin{thm}\n\\label{thm: equi}\nIn the stylized framework (SF1):\n\\begin{itemize}\n\\item A policy satisfies both \\equitability and gain\\xspace fairness if and only if $\\pi_{0}=\\pi_{1}$. \n\\item If $\\pi_{0}\\neq \\pi_{1}$, a policy $a$ that achieves gain\\xspace (\\equitability) fairness, favors, according to \\equitability (gain\\xspace) fairness whichever group has the lowest proportion of type $A$ individuals.\n\\end{itemize}\n\\end{thm}\n\n\\begin{proof}\nLet $\\underline{\\alpha}$ denote $E\\left[\\frac{\\mathbf{a}(u).\\mathbf{u}}{u^{\\min}}|r=\\underline{r}\\right]$ and $\\overline{\\alpha}$ denote $E\\left[\\frac{\\mathbf{a}(u).\\mathbf{u}}{u^{\\min}}|r=\\overline{r}\\right]$. Then, we write (for any policy) differences in gain\\xspace as \n\\begin{equation}\n\\label{eq: if}\n \\Delta G(\\mathbf{a}) = \\left\\{\\pi_{1}\\underline{\\alpha} + (1 - \\pi_{1}) \\overline{\\alpha}\\right\\} - \\left\\{\\pi_{0}\\underline{\\alpha} + (1 - \\pi_{0}) \\overline{\\alpha}\\right\\} = (\\pi_{0} - \\pi_{1})(\\overline{\\alpha} - \\underline{\\alpha})\n\\end{equation}\nand differences in \\equitability as \n\\begin{equation}\n \\Delta S(\\mathbf{a}) =\\left\\{\\pi_{1}\\underline{\\alpha}\\underline{r} + (1 - \\pi_{1}) \\overline{\\alpha}\\overline{r}\\right\\} - \\left\\{\\pi_{0}\\underline{\\alpha}\\underline{r} + (1 - \\pi_{0}) \\overline{\\alpha}\\overline{r}\\right\\} = (\\pi_{0} - \\pi_{1})(\\overline{\\alpha}\\overline{r} - \\underline{\\alpha}\\underline{r}).\n\\end{equation}\nTherefore, gain\\xspace and \\equitability fairness are equivalent to \n$(\\pi_{0} - \\pi_{1})(\\overline{\\alpha} - \\underline{\\alpha})=0$ and $(\\pi_{0} - \\pi_{1})(\\overline{\\alpha}\\overline{r} - \\underline{\\alpha}\\underline{r})=0$. Hence, if $\\pi_{0}\\neq \\pi_{1}$, $\\overline{\\alpha}=\\underline{\\alpha}$ and $\\overline{\\alpha}\\;\\overline{r} = \\underline{\\alpha}\\;\\underline{r}$, which is not possible since $\\underline{r} \\neq \\overline{r}$. \n\nTo show the second part of Theorem \\ref{thm: equi}, we use the fact that gain\\xspace fairness implies that $\\overline{\\alpha}=\\underline{\\alpha}$ (equation \\eqref{eq: if}) and that the difference in \\equitability between group $S=1$ and $S=0$ can be then written $\\Delta S(\\mathbf{a})=(\\pi_{0} - \\pi_{1})(\\overline{r}-\\underline{r})\\overline{\\alpha}$, which have the same sign as $\\pi_{0} - \\pi_{1}$ since $\\overline{r}>\\underline{r}$. 
Therefore, if $\\pi_{0} > \\pi_{1}$, the policy favors group $S=1$ with respect to \\equitability fairness; otherwise, it favors group $S=0$.\n\\end{proof}\n\nTheorem \\ref{thm: equi} states that \\equitability and gain\\xspace can be satisfied simultaneously if and only if populations have similar fractions of type $A$ individuals. It is similar in spirit to the results above, showing that unless populations meet stringent requirements of similarity in utility distributions between groups (in this case instantiated by the fractions of the two types in each population), the versions of fairness characterized by comparing with the min versus the max cannot be simultaneously satisfied. \n\n\\subsection{Multiplicative versus Additive Normalization}\n\n\\Improvement and gain\\xspace fairness aim at capturing a similar fairness concept: groups receive on average the same increase in utility relative to assigning the least beneficial service. Both fairness metrics differ only by whether the normalization relative to the lowest utility that an individual can derive from the overall intervention is additive or multiplicative. In this section, we show that even the choice of normalization generates inherent fairness trade-offs. \n\nWe consider the following stylized framework (SF2):\n\\begin{itemize}\n \\item There are two types of individuals: type $C$ for which $u^{\\min}$ takes a low value $\\underline{u}$; and type $D$ for which $u^{\\min}$ takes a larger value $\\overline{u} > \\underline{u}$. \n \\item Conditional on $u^{\\min}$, the distribution of utility is similar across programs and types.\n\\end{itemize}\nAlthough stylized, both assumptions allow us to characterize the heterogeneity across groups by differences in their distribution over $u^{\\min}$. Let $p_{s}$ denote the fraction of type $C$ for group $S=s$. Differences in $p_{s}$ across groups imply differences in the distribution of utility $P(U|S)$ within each group, even if the conditional distribution $P(U|U^{\\min})$ is similar across types.\n\n\\begin{thm}\n\\label{thm: 2}\nIn the stylized framework (SF2) with types $C$ and $D$, a policy $\\mathbf{a}$ satisfies both \\improvement fairness and gain\\xspace fairness for group $S=0$ and $S=1$ if and only if one of the following conditions holds:\n\\begin{itemize}\n\\item $p_{0}=p_{1}$;\n\\item the policy $\\mathbf{a}$ assigns the least beneficial program to everyone (i.e. $\\mathbf{a}.\\mathbf{u}=u^{\\min}$). \n\\end{itemize}\n\\end{thm}\n\n\\begin{proof}\nLet $\\underline{\\beta}$ denote $E[\\mathbf{a}.\\mathbf{u}|U^{min}=\\underline{u}]$ and $\\overline{\\beta}$ denote $E[\\mathbf{a}.\\mathbf{u}|U^{min}=\\overline{u}]$. 
Then, we write differences in \\improvement as \n\\begin{equation}\n \\begin{split}\n \\Delta I(\\mathbf{a})& =\\left\\{p_{1}\\underline{\\beta}+(1-p_{1})\\overline{\\beta}-p_{1}\\underline{u}-(1-p_{1})\\overline{u}\\right\\}-\\left\\{p_{0}\\underline{\\beta} + (1 - p_{0})\\overline{\\beta}-p_{0}\\underline{u}-(1-p_{0})\\overline{u}\\right\\} \\\\\n & =(p_{1}-p_{0})(\\underline{\\beta}-\\overline{\\beta} + \\underline{u} - \\underline{u})\n \\end{split}\n\\end{equation}\nand differences in gain\\xspace as \n\\begin{equation}\n\\Delta G(\\mathbf{a}) = \\left\\{p_{1}\\frac{\\underline{\\beta}}{\\underline{u}} + (1 - p_{1}) \\frac{\\overline{\\beta}}{\\overline{u}}\\right\\} - \\left\\{p_{0}\\frac{\\underline{\\beta}}{\\underline{u}} + (1 - p_{0}) \\frac{\\overline{\\beta}}{\\overline{u}}\\right\\}=\n \\left(p_{1}-p_{0}\\right)\\left(\\frac{\\underline{\\beta}}{\\underline{u}}-\\frac{\\overline{\\beta}}{\\underline{u}}\\right)\n\\end{equation}\nTherefore, \\improvement fairness and gain\\xspace are equivalent to\n$(p_{0}-p_{1})(\\underline{\\beta}-\\overline{\\beta} + \\underline{u} - \\underline{u}) = 0$ and $\\left(p_{0}-p_{1}\\right)\\left(\\frac{\\underline{\\beta}}{\\underline{u}}-\\frac{\\overline{\\beta}}{\\underline{u}}\\right) = 0\n$. If $p_{0}\\neq p_{1}$, \\improvement and gain\\xspace fairness imply $\\underline{\\beta}=\\frac{\\underline{u}}{\\overline{u}}\\overline{\\beta}$ and $\\overline{\\beta}= \\overline{u}$. The latter equality leads to $\\mathbf{a}.\\mathbf{u}=\\overline{u}$ if $u^{\\min}=\\overline{u}$ and the former equality leads to $\\mathbf{a}.\\mathbf{u}=\\underline{u}$ if $u^{\\min}=\\underline{u}$.\n\\end{proof}\nTheorem \\ref{thm: 2} demonstrates a simple, yet general, setting where \\improvement fairness and gain\\xspace fairness cannot be obtained simultaneously unless either the distribution of utilities are the same across groups ($p_{0}=p_{1}$) or the policy does not create any utility improvement relative to $U^{\\min}$. \n\n\n\n\n\\section{Conclusion}\nHow do we judge whether an approach to allocation of scarce societal resources is fair for different sociodemographic groups of public concern? The problem lies at the intersection of recent work in fair machine learning and a long history of work from economics, social choice, and algorithmic game theory on fair division. It also brings into question concerns of local justice~\\cite{elster1992local}, which studies how individuals are prioritized in the allocation of scarce resources by local institutions. The key point we make in this paper is that \\emph{baselines matter when we measure outcomes for different groups}. The exact same allocation may favor one group over another when assessed against the baseline intervention of doing nothing, but the group it favors could invert when measured against the baseline of giving each group the best intervention it could get in a scenario with no resource constraints. The social objective being optimized also can drive fairness results -- for example, utilitarian allocations will typically favor groups with higher variance in utilities across different types of services, even if the means are the same. \n\nOur results are more than theoretical. We show that the pattern arises in homeless service delivery, where outcomes vary by and within sociodemographic groups. For instance, returns to homelessness vary by service allocation more for households without children compared to families with children. 
Naive policy applications that fail to consider baseline variation may negatively impact some groups. Aiming to reduce overall homelessness, for example, by prioritizing households without children for intensive services disproportionately excludes households with children from receiving their best service, whereas an alternative policy that matches households with children to their best service fails to reduce overall homelessness. The data illustrate similar fairness tradeoffs across intersecting sociodemographic groups, including disability status, gender, age, and race. Failing to consider carefully the underlying distributions and metrics for success threatens counterproductive policy initiatives. Current national advocacy to reduce veteran and chronic homelessness to zero ask communities to shift resources in ways that may undermine other goals~\\cite{builtforzero}. Moreover, federal and local policies simultaneously strive for system efficiency and equity, which prove antithetical in many contexts~\\cite{fowler2019solving}. Our findings raise serious questions for institutions when designing homeless policies and social policy more generally. \n\n\n\\section{Simulations With Utilitarian and Random Allocations}\\label{sec:exp}\n\nThus far, we have not needed to define an allocation policy explicitly, since we were focused on existence results. In this section, we consider two natural allocation policies -- utilitarian (maximizing the sum of utilities of all agents) and random. Both must respect capacity constraints. We simulate a simple environment with two groups and three services. In one setting, members of the two groups have different mean utilities from receiving the three services, while the variances are the same. In the second, members of the two groups have the same mean utilities from receiving the three services, but different variances. We are interested in understanding (1) how the different fairness measures behave in these two settings; (2) the role played by utilitarian objectives in the assignment problem.\n\nIn our setting, there are three ($k=1, 2, 3$) services with fixed capacities ($c_{1}=c_{2}=c_{3}=1000$) and $3000$ applicants divided into two groups of equal size: \\emph{group 0} and \\emph{group 1}. We sample individual utilities for service $k$ from a normal distribution with mean $\\mu_{sk}$ and standard deviation $\\sigma_{sk}$, where $s=0$ for \\emph{group 0} and $s=1$ for \\emph{group 1}. \n\n\n\\subsection{Groups with Different Means}\n\\begin{figure}\n\\subfloat[Distribution of $\\Delta U$ ]{\\includegraphics[width=.295\\textwidth]{Figures\/Simulation\/Case1_distribution.pdf}}\n\\subfloat[\\Improvement vs. Regret\\xspace]{\\includegraphics[width=.36\\textwidth]{Figures\/Simulation\/Case1_regret_improvement.pdf}\\label{fig:imp_reg_case1}}\n\\hspace{0.05em}\n\\subfloat[Gain\\xspace vs. \\Equitability]{\\includegraphics[width=.305\\textwidth]{Figures\/Simulation\/Case1_gain_shortfall.pdf}}\n\\caption{Simulation results when groups have different mean of utilities. Panel~(a) shows the distribution of the maximum utility gains $\\Delta U = U^{\\max}-U^{\\min}$ for \\emph{group 0} (blue), and \\emph{group 1} (orange). Panel~(b) shows the differences in improvement and regret, Panel~(c) shows the differences in gain and shortfall. 
Error bars show the 95\\% confidence interval of each fairness metric over 100 instantiations of the random allocation.}\n\\label{fig:diff_mean_same_std}\n\\end{figure} \n\nIn this set of simulations, we study the behavior of fairness measures when individual utilities are sampled from group-dependent distributions. The groups have different sample means $\\mu$ but the same variances $\\sigma^{2}$. For\n \\emph{group 0}, the means of the three services are $\\mu_{01}=0.2, \\mu_{02}=0.3, \\text{and}, \\mu_{03}=0.4$. For \\emph{group 1}, the means are $\\mu_{11}=0.4, \\mu_{12}=0.5, \\text{and}, \\mu_{13}=0.63$\nThe variances of the three services for both groups are equal, $\\sigma_{01}^{2} = \\sigma_{11}^{2}= \\num{1e-4}$\n, $\\sigma_{02}^{2} = \\sigma_{12}^{2}=\\num{4e-4}$, and, $\\sigma_{03}^{2} = \\sigma_{13}^{2}=\\num{9e-4}$. Individuals in \\emph{group 1} have on average higher utilities for all services. \n\nAs pointed out in section~\\ref{subsec:imp_vs_regret}, we observe in Figure \\ref{fig:diff_mean_same_std} that the difference in $\\Delta U$ leads to a trade-off between the \\improvement and regret\\xspace fairness metrics. \nFigure \\ref{fig:diff_mean_same_std} shows that even for a random assignment, different metrics lead to conflicting fairness assessment. \nThe \\improvement fairness metric favors the group with higher mean $\\Delta U$ (\\emph{group 1}), and regret\\xspace favors the groups with lower mean $\\Delta U$ (\\emph{group 0}). To complicate fairness assessment further, switching from additive to multiplicative normalization reverses which group is favored.\n\nMoreover,\nthe utilitarian allocation appears to favor \\emph{group 1} according to \\improvement, regret\\xspace and gain\\xspace, but favors \\emph{group 0} in terms of \\equitability. These results confirm in a simulated environment that utility normalization has profound implications on how we assess the fairness of an allocation.\n\n\n\\subsection{Groups with Equal Means and Different Variances}\n\n\\begin{figure}\n\n\\subfloat[Distribution of $\\Delta U$ ]{\\includegraphics[width=.3\\textwidth]{Figures\/Simulation\/Case2_distribution.pdf}}\n\\subfloat[\\Improvement vs. Regret\\xspace (Utilitarian)]{\\includegraphics[width=.329\\textwidth]{Figures\/Simulation\/Case2_regret_improvement.pdf}\\label{fig:imp_reg_case2}}\n\\subfloat[ Gain\\xspace vs. \\Equitability (Utilitarian)]{\\includegraphics[width=.33\\textwidth]{Figures\/Simulation\/Case2_gain_shortfall.pdf}}\\\\\n \\caption{Simulation results when groups have the same mean utilities for the services, but different variances. Panel~(a) shows the distribution of the maximum utility gains $\\Delta U = U^{\\max}-U^{\\min}$ for \\emph{group 0} (blue), and \\emph{group 1} (orange). Panel~(b) shows the differences in improvement and regret, Panel~(c) shows the differences in gain and shortfall. \n Group 1 is favored strongly by all the fairness measures when allocations are utilitarian.}\n \\label{fig:same_mean_diff_std}\n\\end{figure} \n\nIn our second set of simulations, we study the effects of groups having similar means but different variances, a situation that is commonly discussed, for instance in the context of gender differences in student performance~\\cite{baye2016gender}. In this case, we hypothesize that the higher variance group is likely to be favored by utilitarian allocations. For both groups, the means for the three services are equal, $\\mu_{01}=\\mu_{11} = 0.4$, $\\mu_{02}=\\mu_{12} = 0.5$, and $\\mu_{03}=\\mu_{13} = 0.6$. 
\nFor \\emph{group 0}, the variances for the three interventions are set to $\\sigma_{01}^{2} = \\num{9e-5}$, $\\sigma_{02}^{2} = \\num{2e-3}$, $\\sigma_{03}^{2} = \\num{1e-2}$, while for \\emph{group 1},\nthe variances for the three interventions are set to\n$\\sigma_{11}^{2} = \\num{9e-3}$, $\\sigma_{12}^{2} = \\num{2e-2}$, \n$\\sigma_{13}^{2} = \\num{3e-2}$. Thus, \\emph{group 0} has lower variance. \n\nOur results in Figure \\ref{fig:same_mean_diff_std} show that, as hypothesized, the group with larger variance (group 1) is indeed favored according to all fairness metrics. When maximizing the sum of utilities, it is optimal to assign their best services to individuals with utilities in the tail of the distribution. We find that a larger fraction (65\\%) of individuals in \\emph{group 1} than in \\emph{group 0} (46\\%) receive the service that maximizes their utility.\n\nWe leave it for future research to investigate further the role of variance on the fairness properties of a utilitarian allocation.\n\n\\section{Introduction}\\label{sec:intro}\n\nMany social interventions that allocate resources to individuals are challenging because individuals have heterogeneous utilities. \nThus, the design and analysis of allocation policies for social interventions in terms of efficiency and fairness is critical \\cite{roth2015gets}, as seen in many domains including child protection (e.g. \\cite{chouldechova2018case}), healthcare (e.g. \\cite{yadav2016using}), and homeless services (e.g \\cite{kube2019fair,brown2018reliability}). \nA particular concern for the use of machine learning posits that the tools systematically disfavor some sociodemographic or intersectional groups (see \\cite{chouldechova2018frontiers} for a review). For example, a growing body of work has documented racial disparities in credit lending, recidivism risk assessment \\cite{ProPublica2016}, education \\cite{gardner2019evaluating}, healthcare \\cite{pfohl2019creating}, and policing \\cite{ensign2018runaway}. In this paper, we explore how to measure these potential disparities in the context of allocating resources given a limited budget. The literature on fair resource allocation has typically come from the areas of fair division and cooperative game theory. In that literature, one typically thinks of individuals as having preferences, and tries to define measures of fairness and allocation mechanisms that demonstrate these properties with respect to individual preferences. Recent notions of group fairness coming from the fair division line of literature strengthen the requirements for individual fairness \\cite{conitzer2019group} and are thus too strong for situations of scarce resource allocation, where allocations by definition must be unfavorable to some individuals.\n\nSo how should one measure fairness across groups in the allocation of scarce societal resources, where decisions often are made on the basis of multiple criteria? To ground our considerations in a specific case, consider homelessness service provision, where federal policy makes serving the most vulnerable an explicit goal, and at the same time, the effectiveness of services is measured by returns to homelessness among those served \\cite{systemperformance}. Such examples motivate us to consider how different notions of what role social services should play lead to different conclusions about the fairness of potential allocations across demographic groups. 
\n\nFor example, we could analyze how much better off members of a group are compared with how well they would have done under some minimal baseline allocation, or we could look at how much worse off members of a group are than they would have been under the allocations that serve them the best.\nFairness could then be defined as equitable performance of groups according to these measures, and indeed, the existing literature on fair allocation of both divisible and indivisible resources has looked at measures along both of these directions: \\improvement (or gain\\xspace) and regret\\xspace (or \\equitability). \nAlthough both are reasonable definitions of a fair allocation, we consider two important factors that arise in many real-world problems. First, instead of the problem simply focusing on a set of identical resources that need to be allocated amongst agents, there is often a whole set of different interventions, each with capacity constraints (for example, different types of homelessness resources or different cities that refugees can be matched to). Second, individuals may respond heterogeneously to the different interventions (for example, homeless individuals with disabilities may benefit disproportionately from intensive housing supports, or refugees may assimilate and find jobs more easily in places where there is already a substantial population from their place of origin). \n\nWe show that when there is a multiplicity of possible services, and groups are heterogeneous in the distributions of utilities they receive from different services, it becomes impossible to simultaneously satisfy \\improvement and regret\\xspace oriented definitions of group fairness. Even more dramatically, \nan allocation policy that appears to favor one group according to \\improvement fairness can favor another group according to regret\\xspace fairness. The results yield insights into inherent trade-offs that policymakers face when attempting to achieve a fairness objective. How we measure improvement or regret also matters when assessing the fairness of an allocation policy. For example, we could measure improvement by the ratio of realized utility over baseline utility (a multiplicative measure), or by the difference between realized utility and baseline utility (an additive measure). Depending on the application, it is not always clear which of these additive or multiplicative normalizations makes more sense. We establish, in a stylized framework, that fairness in terms of additive normalization and fairness in terms of multiplicative normalization cannot hold simultaneously except when the distribution of individual responses to different allocations is similar across demographic groups. \n\nThese trade-offs are not theoretical corner-cases and have substantive implications for social policy. We use administrative data from a regional homeless system to explore the fairness of a capacitated assignment of community-based services that address housing needs. Services include transitional housing, rapid rehousing, and emergency shelter -- three programs that vary in intensity and availability. We measure the utility of a service to a household as the probability estimated in prior work by \\cite{kube2019fair} that the household would make a successful exit from homelessness given the delivery of that service. 
We first document significant differences in utility distributions across different groups (e.g., disabled versus not disabled households, families with children versus households without children, single females with versus without children). We then confirm our theoretical results that the differences in utility distributions across groups generate trade-offs when assessing the fairness of an allocation. For example, we consider the original allocation as recorded in the administrative data and we find that improvement and regret disagree on whether the policy favors households with or without children, as well as other groups. \n\nIn addition to contributing to our understanding of how the definition and measurement of fairness is affected by heterogeneity in how members of different groups may respond to interventions, these findings can inform practice in homeless and social services that allocate scarce resources across diverse populations. Policies frequently attempt to maximize public welfare by targeting available supports towards heterogeneous groups based on competing notions of fairness (e.g., vulnerability, efficiency, equality). Understanding the fairness trade-offs and measurement sensitivity allows for more intentional policy-making and better evaluation.\n\n\\section{Fairness Trade-offs in Homeless Service Delivery}\n\nOur theoretical analysis suggests that heterogeneity in service responses across groups drives fairness metrics in opposite directions. In this section, we investigate whether the fairness tradeoffs emerge in the capacitated assignment of homeless services across several sub-populations. We hypothesize that if sociodemographic group differences exist in the utilities received from allocations (and in particular, between the differences in the best versus worst allocations), then we should see tradeoffs between \\improvement versus regret\\xspace fairness, \\equitability versus gain\\xspace, and \\improvement versus gain\\xspace. We provide evidence for both the antecedent (\\emph{heterogeneity in responses across groups}) and the consequent (\\emph{inherent fairness trade-offs between groups}).\n\n\n\\subsection{Background}\nHomelessness represents a socioeconomic and public health challenge for many communities in the United States. Approximately $1.5$ million people experience homelessness for at least one night every year \\cite{henry2020ahar, fisher2018homelessness}. Homelessness has short- and longer-term implications on health, employment, and crime \\cite{fowler2019solving, khadduri2010costs, cohen2020housing}. Guided by federal policies, communities offer an array of services for households lacking stable and permanent living accommodations. We study three main homeless services: Transitional Housing (TH); Rapid Rehousing (RRH) and Emergency Shelter (ES). Transitional Housing provides accommodation for up to 24 months with comprehensive case management to address barriers toward stable housing, such as substance abuse and issues related to behavioral health. Rapid Rehousing offers access to rental units for six months without intensive case management. Emergency Shelter provides a bed to sleep at night for no more than one or two months. 
On a daily basis, caseworkers assign homeless households seeking assistance to an available service, reserving the most intensive TH for those with greater needs.\n\n\\subsection{Data}\n\\label{sec:data}\nOur main dataset is based on estimated probabilities of households re-entering homelessness services within two years after initial receipt of services. This data, collected by \\cite{kube2019fair} is publicly available.\\footnote{\\url{https:\/\/github.com\/amandakube\/Allocating-Homelessness-Interventions---Counterfactual-Predictions}} The estimates are based on applying a machine learning model (BART \\cite{hill2011bayesian}) to administrative records that tracked service provision in a metropolitan area from 2007 through 2014. Service providers collected demographic and household characteristics upon entry into the system, and data capture the intervention assigned and whether households subsequently requested additional assistance \\cite{kube2019fair}. The model estimates counterfactual probabilities $p_{ik}$ of a household $i$ to re-enter the homeless system within 2 years given the assignment of a specific service $k$, where $k\\in \\{TH, RRH, ES\\}$. The original data also tracks responses to homelessness prevention -- time-limited monetary assistance that differs from the other three interventions that allocate actual bed space. Given that the constraints on homelessness prevention are different, we focus here only on households that needed actual bed space (and were therefore not eligible to receive prevention services). Therefore, our final data contains $3,375$ households and they received either TH, RRH, or ES. \n\nWe compute the utility of service $k$ to individual $i$ as $u_{ik}=1-p_{ik}$. \nWe obtained from Kube et al. additional sociodemographic characteristics\nfor each household, including race, gender, age, disability status,\npresence of spouse and\/or children, and household size. \n\nWe define a series of sociodemographic groups and intersectional identities expected to exhibit substantial heterogeneity in responses to homeless services. First, households with disabilities are considered more vulnerable, and prior research shows that more vulnerable households do best with more intensive services ~\\cite{aubry2020,munthe-kaas2018}. Therefore, we expect households with disabilities to benefit more from TH and less from ES than the rest of the population. Second, families with children under the age of 18 experience homelessness due to socioeconomic reasons rather than disability and vulnerability, and thus, we anticipate families will respond better to rapid rehousing than more intensive TH~\\cite{cunningham2015rapid, fertig2008homelessness, rog2007characteristics}. Third, we examine the intersection between gender and family status, assuming that single female households without children do better in TH compared with single female-headed families with children, who are more likely to benefit from RRH. Fourth, we look within households headed by youth aged 18 to 24 years to compare disability status (versus no disability) and family status (children versus no children), hypothesizing that those with disabilities benefit more from TH and families with children from RRH \\cite{morton2020}. Lastly, given the over-representation in homelessness of minorities and especially Black households, we test how race affects homeless service utilities \\cite{henry2020ahar}. 
Prior research suggests the causes of homelessness vary for White people, who are more likely to experience disabilities, versus Black people, who experience greater housing discrimination and marginalization~\\cite{jones2016does}. Moreover, race intersects with gender (males vs females) and family status (with children versus without children) in ways that could drive variation in homeless service outcomes. \n\n\\subsection{Heterogeneity across Demographic Groups}\nIn this section, we document heterogeneity in the distributions of utility across various sociodemographic groups. For each household, we compute the difference $\\Delta U$ between its best and worst utility. \n\nFigure \\ref{fig:delta_u_het} shows heterogeneity in response to homeless services across \nhouseholds with and without reported disabilities, and with and without children. The distribution of $\\Delta U$ for households with a disability skews to the right (panel a)); assigning the best service to a disabled client has a larger impact in terms of the probability to re-enter the homeless system than assigning a client without a disability to their most beneficial service. The difference in the means of the distributions is statistically significant with a t-statistic of 8.5 and an infinitesimally small p-value. This finding aligns with prior research that shows vulnerable households do best with more intensive services \\cite{aubry2020,munthe-kaas2018}. The distribution of $\\Delta U$ for families without children skews strongly to the right compared with households with children (panel b)). The mean of $\\Delta U$ for households without children is 0.07, while it is only 0.04 for households with children. The difference is statistically significant with a t-statistic of 29.0 and an infinitesimally small p-value. This result illustrates how families with children differ in their responses to housing assistance compared to households without children. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{.\/Figures\/delta_u_faact_het_main.png}\n \\caption{Distribution of the maximum utility gain $\\Delta U$ that individuals can derive from the homeless system across various demographic groups. We obtain the probability density function of $\\Delta U = U^{max}-U^{min}$ via Gaussian kernel density estimation with a bandwidth of $0.2$. Differences in probability density functions between households with and without disability (Panel a)) and with and without children (Panel b)) illustrate heterogeneous responses to housing assistance.}\n \\label{fig:delta_u_het}\n\\end{figure}\n\nIn Figure \\ref{fig:delta_u_hom}, we look at intersectional sociodemographic groups. We find in panel c) that the impact of different homeless services for a single female depends strongly on whether there are children in the household. Similarly, youth with and without disability respond differently to homeless services (panel d)). For both intersections, the difference in means is statistically significant with a t-statistic equal to 25.7 for single female versus single mother and to 5.1 for youth with a disability versus youth without a disability. \n\nFigure \\ref{fig:delta_u_inter} explores differential responses to housing assistance by race and shows substantial differences in the distribution of $\\Delta U$ between Black and White males (Panel g)). \nBlack homeless populations may on average benefit more from more intensive homeless services.
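The group comparisons in this subsection all follow the same recipe: compute each household's maximum utility gain $\\Delta U$ and compare its distribution across groups. The sketch below illustrates the computation (Python with NumPy and SciPy); the placeholder utilities and group indicator are randomly generated for illustration only, and the exact test and kernel settings used for the figures may differ.

\\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
U = rng.uniform(0.7, 1.0, size=(500, 3))   # placeholder utilities for TH, RRH, ES
group = rng.random(500) < 0.5              # placeholder indicator, e.g. has children

delta_u = U.max(axis=1) - U.min(axis=1)    # maximum utility gain per household

# two-sample comparison of mean gains across the two groups
t_stat, p_value = stats.ttest_ind(delta_u[group], delta_u[~group], equal_var=False)

# density of the gains within each group, as plotted in the figures
kde_in_group = stats.gaussian_kde(delta_u[group], bw_method=0.2)
kde_out_group = stats.gaussian_kde(delta_u[~group], bw_method=0.2)
\\end{verbatim}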
Prior research \\cite{jones2016does} suggests that social discrimination and socio-economic disadvantage could increase the risk for homelessness among populations with a perceived Black background and that housing assistance could mitigate some of these vulnerabilities. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{.\/Figures\/delta_u_faact_het_inter.png}\n \\caption{Same as Figure \\ref{fig:delta_u_het} but for intersectional groups: single females with and without children (Panel c)); youth under 25 with and without disability (Panel d)); and youth under 25 with and without children (Panel e)).}\n \\label{fig:delta_u_hom}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{.\/Figures\/delta_u_faact_het_race.png}\n \\caption{Same as Figure \\ref{fig:delta_u_het} but for groups defined by perceived racial background.}\n \\label{fig:delta_u_inter}\n\\end{figure}\n\n\nResults from Figures \\ref{fig:delta_u_het}, \\ref{fig:delta_u_hom} and \\ref{fig:delta_u_inter} \nsuggest that heterogeneity in utility pervades sociodemographic groups. Table \\ref{tab: dist_serv} explains some of this heterogeneity by identifying which of the three services (TH, RRH and ES) benefits the most households within each group. \nFor the homeless population studied in this paper, TH is the most preferred service for $68\\%$ of the population, followed by RRH ($27\\%$) and ES ($5\\%$). We find that this preference for more intensive care is exacerbated for households with disability ($73\\%$ prefer TH), which is in line with prior findings that most vulnerable populations benefit from more integrated care. The preferences of households with a disability toward TH contrast with the preferences of families with children toward RRH: $67\\%$ of households with children benefit the most from RRH, while TH is the best service for only $16\\%$ of families. This observation holds true for all intersectional groups that include children and could explain differences between males and females, \nsince females are more likely to live with children than males. On the other hand, regardless of gender, the most beneficial program is more likely to be TH for the Black homeless population: TH is the most beneficial service for $46\\%$ of Black females but only for $34\\%$ of White females; and for $95\\%$ of Black males but only for $80\\%$ of White males.\n\n\\begin{table}[]\n\\centering\n\\caption{Distribution of services that deliver to each household the highest utility across demographic groups. This shows the fraction of households in each demographic group for which ES, TH or RRH leads to the lowest probability to re-enter the homeless system.
}\n\\begin{tabular}{llllllll}\n \\toprule\n & TH & RRH & ES & & TH & RRH & ES \\\\\n \\midrule \nAll & 0.68 & 0.27 & 0.05 \\\\\n\\midrule\nWith disability & 0.73 & 0.23 & 0.03 & Without disability & 0.66 & 0.28 & 0.06\\\\\n\\midrule \nWithout children & 0.85 & 0.14 & 0.01 & With children & 0.16 & 0.67 & 0.17 \\\\\n\\midrule \nSingle female with children & 0.15 & 0.7 & 0.15 & Single female without children & 0.7 & 0.3 & 0.01 \\\\\n\\midrule \nLess than 25 with disability & 0.62 & 0.37 & 0.01 & Less than 25 without disability & 0.47 & 0.49 & 0.05 \\\\\n\\midrule\nLess than 25 without children & 0.83 & 0.17 & 0.0 & Less than 25 with children & 0.19 & 0.73 & 0.08 \\\\\n\\midrule\nFemale - Black & 0.46 & 0.46 & 0.08 & Female - White & 0.34 & 0.62 & 0.04 \\\\\nMale - Black &0.95 & 0.02 & 0.03 & Male - White & 0.8 & 0.14 & 0.06 \\\\\n\\bottomrule\n\\end{tabular}\n\\label{tab: dist_serv}\n\\end{table}\n\n\n\\subsection{Fairness Trade-Offs in the Observed Allocation of Homeless Services}\nOur theory suggests that heterogeneity in the distribution of the maximum gain $\\Delta U$ for housing assistance would drive fairness metrics in opposite directions: (i) there exist assignments of homeless services with conflicting fairness assessment depending on choosing \\improvement, regret\\xspace, gain\\xspace or \\equitability as the fairness metric (Theorem~\\ref{cor: 2}); (ii) assignments that satisfy improvement fairness could violate regret fairness and vice-versa (Theorem~\\ref{thm: trade-off}). \nSince we observe substantial heterogeneity among the sociodemographic and intersectional groups presented in section 5.3, we know by Theorem~\\ref{cor: 2} that ambiguous fairness assessments can arise for some policies. However, Theorem \\ref{cor: 2} is not constructive and does not tell whether such policies are realistic in the context of homeless services delivery. Here we test whether the observed assignment as reported in the administrative records is subject to contradictory fairness assessments depending on the choice of the fairness metric. \n\nFigure~\\ref{fig:trade-off-obs} (Panel a)) plots the difference in improvement $\\Delta I$ and the negative of difference in regret $-\\Delta R$, so that positive values indicate that the policy favors group $S=1$, while negative values mean the policy favors group $S=0$. According to the \\improvement metric, the observed assignment favors households without children, while according to regret\\xspace, it favors households with children: $\\Delta I$ is equal to $-0.013$, while $-\\Delta R$ is equal to $0.016$. A similar ambiguity emerges for households with and without disability. Moreover, choosing \\improvement over regret\\xspace flips the conclusion on whether the observed assignment is unfair to Black males relative to White males: Black males derive higher utility gains according to \\improvement ($\\Delta I = -0.02$) but lower utility gains according to regret ($-\\Delta R = 0.009$). \nThe results provide empirical evidence that policies that lead to contradictory fairness assessment in Theorem \\ref{cor: 2} are not just theoretical oddities, but do occur in real world applications. Although we do not prove a counterpart of \nTheorem~\\ref{cor: 2} for \\equitability versus gain\\xspace, we find empirically that similar trade-offs do, in fact, occur (Figure \\ref{fig:trade-off-obs}, Panel b)). 
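For reference, the group-level quantities plotted in Figure~\\ref{fig:trade-off-obs} can be computed directly from the counterfactual utilities and the recorded assignment, along the following lines (a minimal sketch in Python with pandas; the column names and values are placeholders rather than the actual variables in the administrative data). With this sign convention, positive values of both quantities indicate that the observed allocation favours group 1, and the disagreements discussed above are exactly the cases where the two signs differ.

\\begin{verbatim}
import pandas as pd

# One row per household: counterfactual utilities, the service actually
# received, and a binary group indicator (placeholder values only).
df = pd.DataFrame({
    'u_TH':  [0.93, 0.88, 0.95, 0.91],
    'u_RRH': [0.90, 0.92, 0.89, 0.94],
    'u_ES':  [0.85, 0.86, 0.84, 0.88],
    'assigned': ['TH', 'RRH', 'ES', 'RRH'],
    'group':    [1, 0, 1, 0],          # e.g. 1 = household with children
})

util_cols = ['u_TH', 'u_RRH', 'u_ES']
df['u_real'] = df.apply(lambda r: r['u_' + r['assigned']], axis=1)  # realized utility
df['u_min'] = df[util_cols].min(axis=1)
df['u_max'] = df[util_cols].max(axis=1)

improvement = (df['u_real'] - df['u_min']).groupby(df['group']).mean()
regret      = (df['u_max'] - df['u_real']).groupby(df['group']).mean()

delta_I       = improvement[1] - improvement[0]   # > 0 favours group 1
minus_delta_R = -(regret[1] - regret[0])          # > 0 favours group 1
\\end{verbatim}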
Moreover, in Figure~\\ref{fig:trade-off-obs}, we find one pairwise comparison, youth with a disability versus youth without a disability, for which the observed policy satisfies \\improvement fairness. This instance of \\improvement fairness allows us to test whether Theorem~\\ref{thm: trade-off} holds here. We find that the policy does not satisfy regret\\xspace fairness, which is consistent with the heterogeneity in $\\Delta U$ found in section 5.3 between youth with a disability versus youth without a disability. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{.\/Figures\/trade_offs_faact_main_inter_race.png}\n \\caption{Fairness trade-off in the observed assignment of homeless services. This compares which demographic group is favored by the assignment depending on the fairness metric. Trade-offs occur when \\improvement favors one group and regret\\xspace the other one (left panel) or when \\equitability favors one group and gain\\xspace the other one (right panel).}\n \\label{fig:trade-off-obs}\n\\end{figure}\n\\section{Related Work}\\label{sec:literature}\n\n\\subsection{Group Fairness}\nPrior research has led to many definitions of fairness to compare algorithmic outcomes across demographic groups. Popular definitions include statistical parity~\\cite{dwork2012fairness}, equalized odds and opportunity~\\cite{hardt2016equality}. However, these definitions only apply to binary settings and implicitly assume that the utility of an individual is equal to one when the algorithm's outcome is one and equal to zero otherwise. Few papers consider more general definitions of utilities~\\cite{heidari2019moral}. In this paper, we argue as in~\\cite{hossain2020designing} that in many societal applications of machine learning, utilities are heterogeneous across individuals and that this heterogeneity could be systematic across demographic groups. \n\nThe fair division literature offers a framework to compare utilities across individuals. Envy-freeness, proportionality or equitability~\\cite{caragiannis2019unreasonable} are common utility-based definitions of a fair allocation of goods. The literature strengthens these notions of fairness by extending envy-freeness to arbitrary segments of the population~\\cite{conitzer2019group, bartholdi1992hard}. In this paper, we focus on notions of group equitability that vary by their normalization, but leave it for future research to explore the role of normalization on group envy-freeness. \n\nA standard assumption in the fair division literature is that utilities, although heterogeneous, are unit-normalized~\\cite{aziz2020justifications}. The rationale for unit-normalization is that it allows one to make more reasonable interpersonal comparisons of utility by converting all utilities to a common scale. Unit-normalization implies that the maximum utility gain is equal to one for all individuals~\\cite{aziz2020justifications}. Our notions of \\equitability or regret\\xspace rely on a similar assumption, which is reasonable in many settings (e.g., voting~\\cite{bouveret2016characterizing}). However, we argue that other reasonable choices of normalization are possible and more relevant in different types of allocation problems. For example, in the case of homeless services delivery, a policymaker would want to account for the fact that families with children have on average more to gain from rapid rehousing programs \\cite{rog2007characteristics}.
In this case, our measures of \\improvement and gain\\xspace, which normalize by comparison with the worst utility that an individual can expect from an allocation, are also reasonable notions of fairness. This paper relates closely to the work of \\cite{hossain2020designing}, who introduce utility-based notions of group fairness for classification problems. However, they assume away the need to normalize utilities to a similar scale or support. One of our contributions is to show that different normalization approaches can lead to conflicting assessments of the fairness of an allocation policy.\n\n\\subsection{Impossibility Results}\nThe binary outcome setting admits some fundamental impossibility results \\cite{kleinberg2016inherent,chouldechova2017fair}. Except under very restrictive conditions, it is impossible for a classifier to simultaneously equalize false positive rates and false negative rates across groups and also guarantee that predictions are calibrated within each group. \\cite{kleinberg2016inherent} show that the impossibility emerges whenever demographic groups differ systematically in the distribution of features used by the classifier as inputs. In this paper, we demonstrate new impossibility results in the case of utility-based notions of fairness. As in \\cite{kleinberg2016inherent}, we obtain a paradox where fairness guarantees that seem to share the same objective -- that the allocation of resources will be as effective for all demographic groups -- are nonetheless incompatible. \n\nOur results on the incompatibility of different fairness principles are also reminiscent of Arrow's impossibility theorem~\\cite{arrow1950difficulty}. In the presence of heterogeneous preferences, there is no way to aggregate individual preferences into a social welfare function that would satisfy unanimity, non-dictatorship and informational parsimony. The theory of fair allocation \\cite{foley1966resource,varian1973equity} that selects a subset of policies on the basis of their fairness and efficiency obtains possibility results by relaxing informational parsimony \\cite{fleurbaey2005informational}. However, in this paper, we show that we cannot avoid negative results when notions of fairness based on different normalizations have to hold simultaneously.\n\n\\subsection{Algorithmic Allocation of Societal Resources}\nThere has been\nrecent interest in the specific setting where scarce resources that are collective or societal are algorithmically allocated by a centralized institution to individual members of society (see \\cite{das2022local} for a recent review). \nThe design of algorithmic approaches has typically focused on increasing the efficiency of social interventions, including kidney exchange \\cite{li2019incorporating,roth2005pairwise}, housing assistance \\cite{kube2019fair,manlove2006popular}, HIV awareness campaigns \\cite{yadav2016using} and refugee resettlement \\cite{delacretaz2019matching}. In this paper, we investigate how to assess the fairness of resulting allocations. Empirically, we find evidence of our impossibility results in the context of capacitated one-sided matching, which involves a set of services with fixed capacities, a set of agents with heterogeneous preference orderings (see e.g.
~\\cite{manlove2006popular} for an application to the house allocation problem) and a social worker that assigns a service to each agent.\n\n\n\n\n\\section{Inherent Fairness Trade-Offs in Resource Allocation}\n\\label{sec: method}\n\n\nIn this section we describe our theoretical framework, first defining the problems we are concerned with, and then outlining both general and illustrative results on inherent group fairness trade-offs in the allocation of scarce resources.\n\n\\subsection{Setting}\nWe consider $K$ services, with maximum capacities $c_{k}$ for $k\\in \\{1, ..., K\\}$, and $N$ individuals $i=1, ..., N$.\\footnote{We follow the convention of denoting vectors in bold type and random variables with capital letters.} We can thus describe individuals by their utility vector $\\mathbf{u}=(u_{1}, ..., u_{K})$ over each program $k$ and their sensitive attribute $s\\in\\mathcal{S}$. $\\mathcal{S}$ describes the set of groups for which we want to study the fairness of service allocation. For ease of exposition, we assume that group characteristics are binary and $\\mathcal{S}=\\{0, 1\\}$; however, our results readily extend to more complex definitions of groups, and the empirical section will show that our results hold for intersectional groups. We denote by $N_{s}$ the number of individuals with sensitive attribute $s=0, 1$.\n\nFor each individual $\\mathbf{u}$, we denote by $u^{\\min}$ the utility derived from receiving the least beneficial program: $u^{\\min}=\\min\\{u_{k}|k=1, .., K\\}$. We denote by $u^{\\max}$ the utility derived from receiving the most beneficial program: $u^{\\max}=\\max\\{u_{k}|k=1, .., K\\}$. Best and worst programs might vary among individuals. $u^{\\min}$ could potentially characterize a ``do nothing option'', i.e. the individuals' utility without the intervention. We assume that $\\mathbf{u}$ is drawn from a joint distribution $G_{s}(u)$ over $\\mathbb{R}^{K}$ that depends on the value $s$ of the sensitive attribute. We denote the random utility vector $U$.\n\nAn allocation policy $\\mathbf{a}:\\mathbb{R}^{K}\\rightarrow \\{0, 1\\}^{K}$ assigns each individual with utility $\\mathbf{u}$ to a program $k$ if and only if $a_{k}(\\mathbf{u})=1$. We assume that individuals are assigned to only one program: $\\sum_{k=1}^{K}a_{k}(\\mathbf{u})=1$. We denote by $\\mathbf{a}.\\mathbf{u}$ the inner product between the policy assignment and the individual utility: $\\mathbf{a}.\\mathbf{u} = \\sum_{k=1}^{K}a_{k}(\\mathbf{u})u_{k}$. Given $N$ individuals $i$ with utility $\\mathbf{u}_{i}$, the allocation is feasible if and only if for all programs $k$, $\\sum_{i=1}^{N}a_{k}(\\mathbf{u}_{i})\\leq c_{k}$ (the maximum capacity for the $k$-th service). \n\n\\subsection{Fairness, Baselines, and Normalization}\nIn this section, we consider four notions of fairness to compare the average realized utility between groups: \\improvement, regret\\xspace, \\equitability, and gain\\xspace. The definitions differ along two dimensions (1) how they normalize individual utility (additive or multiplicative), and (2) which baselines they compare individual realized utility to (worst case or best case).\n\nThe \\improvement and gain\\xspace metrics use as a baseline the minimal or worst utility that an individual can expect from any service they receive. To be fair, the definitions say that the average increase in utility relative to the least beneficial intervention should be equal across groups. 
They differ in how they normalize realized utility relative to the baseline; \\improvement uses an additive normalization, while gain\\xspace uses a multiplicative normalization.\n\n\\begin{dfn}\n\\textbf{\\Improvement fairness.}\nAn allocation policy $\\mathbf{a}$ satisfies fair \\improvement if and only if\n\\begin{equation}\n E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i, s=0}\\mathbf{a}.(\\mathbf{u}_{i} - u^{\\min}_{i})\\right]= E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i, s=1}\\mathbf{a}.(\\mathbf{u}_{i} - u^{\\min}_{i})\\right],\n\\end{equation}\nwhere the expectation is taken over samples of size $N_{s}$ for the group with sensitive attribute $s=0,1$.\n\\end{dfn}\n\n\\begin{dfn}\n\\textbf{Gain\\xspace fairness.}\nAn allocation policy $a$ satisfies fair gain\\xspace if and only if\n\\begin{equation}\n E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i, s=0}\\mathbf{a}.\\frac{\\mathbf{u}_{i}}{ u^{\\min}_{i}}\\right]= E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i, s=1}\\mathbf{a}.\\frac{\\mathbf{u}_{i}}{ u^{\\min}_{i}}\\right].\n\\end{equation}\n\\end{dfn}\n\nWe denote by $\\Delta I(\\mathbf{a})$ the difference in \\improvement between groups:\n\\begin{equation}\n \\Delta I(\\mathbf{a}) = E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i, s=1}\\mathbf{a}.(\\mathbf{u}_{i} - u^{\\min}_{i})\\right] - E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i, s=0}\\mathbf{a}.(\\mathbf{u}_{i} - u^{\\min}_{i})\\right].\n\\end{equation}\nIf $\\Delta I(\\mathbf{a})$ is positive, the policy $\\mathbf{a}$ favors group $1$; if $\\Delta I(\\mathbf{a})$ is negative, the policy favors group $0$. We define similarly differences in gain\\xspace as $\\Delta G(\\mathbf{a})$.\n\nRegret\\xspace fairness and \\equitability benchmark the realized utility in comparison to the best outcome individuals can hope for from any service (as such they are related to the classical definition of \\emph{equitability} in fair division, albeit with differences in normalization). Both fairness notions are satisfied when the average loss of utility compared to receiving the most beneficial program is equalized across groups. \n\n\\begin{dfn}\n\\textbf{Regret\\xspace fairness.}\nAn allocation policy $a$ satisfies regret\\xspace fairness if and only if\n\\begin{equation}\n E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i: s=0}\\mathbf{a}.(u^{\\max}_{i}- \\mathbf{u}_{i})\\right]= E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i: s=1}\\mathbf{a}.(u^{\\max}_{i}- \\mathbf{u}_{i})\\right],\n\\end{equation}\n\\end{dfn}\n\n\\begin{dfn}\n\\textbf{\\Equitability.}\nAn allocation policy $a$ satisfies \\equitability if and only if\n\\begin{equation}\n E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i: s=0}\\mathbf{a}.\\frac{\\mathbf{u}_{i}}{ u^{\\max}_{i}}\\right]= E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i: s=1}\\mathbf{a}.\\frac{\\mathbf{u}_{i}}{ u^{\\max}_{i}}\\right].\n\\end{equation}\n\\end{dfn}\n\nLike differences in \\improvement or in gain\\xspace, we denote differences in \\equitability and regret\\xspace as $\\Delta S(\\mathbf{a})$ and $\\Delta R(\\mathbf{a})$, respectively. Note that $\\Delta R(\\mathbf{a})\\geq 0$ means that the policy $\\mathbf{a}$ favors group $S=0$ over group $S=1$ for regret fairness.\n\nAll four definitions represent reasonable and desirable properties of a fair allocation. However, the following results show that a decision-maker faces trade-offs when choosing which fairness notion to target. 
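For concreteness, all four quantities can be computed for any feasible assignment along the following lines (a minimal sketch in Python with NumPy; \\texttt{U} denotes the $N\\times K$ utility matrix, \\texttt{A} a 0-1 assignment matrix with one service per individual, and \\texttt{s} the binary group indicator -- all names are illustrative, not part of any released code). With this convention, positive improvement and gain gaps favour group $1$, while a positive regret gap favours group $0$, since lower regret is better.

\\begin{verbatim}
import numpy as np

def fairness_gaps(U, A, s):
    # U: (N, K) utilities u_ik; A: (N, K) 0-1 assignment with one 1 per row;
    # s: (N,) array of 0-1 group labels.  Returns the group-1-minus-group-0
    # differences in improvement, regret, gain and equitability.
    realized = (A * U).sum(axis=1)            # a.u for each individual
    u_min = U.min(axis=1)
    u_max = U.max(axis=1)
    g1, g0 = (s == 1), (s == 0)

    def gap(x):                               # mean over group 1 minus mean over group 0
        return x[g1].mean() - x[g0].mean()

    delta_I = gap(realized - u_min)           # improvement: additive, worst-case baseline
    delta_R = gap(u_max - realized)           # regret: additive, best-case baseline
    delta_G = gap(realized / u_min)           # gain: multiplicative, worst-case baseline
    delta_S = gap(realized / u_max)           # equitability: multiplicative, best-case baseline
    return delta_I, delta_R, delta_G, delta_S
\\end{verbatim}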
Not only might the notions not be satisfied simultaneously, it is possible to generate explicitly contradictory conclusions across the relatively similar fairness metrics regarding which group is under-served. \n\n\\subsection{\\Improvement versus Regret\\xspace}\\label{subsec:imp_vs_regret}\n\nOur first result shows that \\improvement and regret\\xspace fairness cannot be satisfied simultaneously, unless we impose strong restrictions on how groups differ. Consider two random variables $U^{\\max}$ and $U^{\\min}$ defined on individual most and least beneficial utility. The maximum individual utility gain that can be delivered by a service is then a random variable $\\Delta U = U^{\\max}-U^{\\min}$. We show that heterogeneity in $\\Delta U$ across groups generates an inherent trade-off between improvement and regret fairness. \n\n\\begin{thm}\n\\label{thm: trade-off}\nIf an allocation policy $\\mathbf{a}$ satisfies both \\improvement and regret\\xspace fairness then the average maximum utility gain $\\Delta U$ must be equal across groups: $E[\\Delta U|S=0] = E[\\Delta U|S=1]$. Moreover, $\\Delta I(\\mathbf{a}) + \\Delta R(\\mathbf{a})=E[\\Delta U|S=1] - E[\\Delta U|S=0]$. \n\\end{thm}\n\n\\begin{proof}\nThe proof is based on the following identities:\n\\begin{equation}\n \\begin{split}\n \\Delta I(\\mathbf{a})& = E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i: s=1}\\mathbf{a}(\\mathbf{u}).(\\mathbf{u}_{i}-u^{\\max}_{i} + \\Delta u_{i})\\right] - E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i: s=0}\\mathbf{a}(\\mathbf{u}).(\\mathbf{u}_{i}-u^{\\max}_{i} + \\Delta u_{i})\\right] \\\\\n & =E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i: s=1}\\sum_{k=1}^{K}\\mathbf{a}_{k}(\\mathbf{u})\\Delta u_{i}\\right] - E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i: s=0}\\sum_{k=1}^{K}\\mathbf{a}_{k}(\\mathbf{u})\\Delta u_{i}\\right] - \\Delta R(\\mathbf{a})\\\\\n & =E\\left[\\frac{1}{N_{1}}\\displaystyle\\sum_{i: s=1}\\Delta u_{i}\\right] - E\\left[\\frac{1}{N_{0}}\\displaystyle\\sum_{i: s=0}\\Delta u_{i}\\right] - \\Delta R(\\mathbf{a}),\n \\end{split}\n\\end{equation}\nwhere the last equality comes from the fact that $\\sum_{k=1}^{K}a_{k}(u)=1$ for all $u$. Therefore, if $\\Delta I(\\mathbf{a})=0$ and $\\Delta R(\\mathbf{a})=0$, then $E[\\Delta U|S=0] = E[\\Delta U|S=1]$.\n\\end{proof}\n\nThe result in Theorem \\ref{thm: trade-off} implies that regardless of the allocation policy, for both \\improvement and regret\\xspace fairness to hold it is necessary that groups would gain on average similarly if they were always allocated their most beneficial intervention. Thus, a trade-off exists when defining what a fair assignment should look like: for example, a policy satisfying improvement fairness would always violate regret fairness unless $E[\\Delta U|S=0] = E[\\Delta U|S=1]$. Since $\\Delta I(\\mathbf{a}) + \\Delta R(\\mathbf{a})=E[\\Delta U|S=1] - E[\\Delta U|S=0]$, the closer a policy is to satisfying improvement fairness, the worse its regret fairness, and vice-versa. \nA follow up question is whether \\improvement and regret\\xspace fairness tell a different story about the fairness of an allocation policy $a$. The next result shows that whenever $E[\\Delta U|S=0]$ and $E[\\Delta U|S=1]$ differ, unless all policies favor one group, there exists a policy that favors one group for \\improvement fairness and favors the other one for regret\\xspace fairness. \n\n\\begin{thm}\n\\label{cor: 2}\nSuppose that $E[\\Delta U|S=1] > E[\\Delta U | S=0]$. 
Suppose that there exists a policy that favors group $S=0$ for \\improvement fairness and another policy that favors group $S=1$ for \\improvement fairness. Then, there exists a policy $\\mathbf{a^{*}}$ such that $\\Delta I(\\mathbf{a^{*}}) > 0 \\mbox{ and } \\Delta R(\\mathbf{a^{*}}) > 0\n$. That is, there exists a policy that favors $S=1$ with respect to \\improvement fairness (larger is better), but favors $S=0$ with respect to regret\\xspace fairness (lower is better).\n\\end{thm}\n\nThe proof of Theorem \\ref{cor: 2} relies on the fact that the set of differences in \\improvement\/regret\\xspace is a continuous interval:\n\\begin{lem}\n\\label{lem: 1}\nSuppose that there exist two allocation policies $\\mathbf{a}$ and $\\mathbf{a}^{'}$ with differences in \\improvement $\\delta$ and $\\delta^{'}> \\delta$. Then, for any $\\delta^{*}\\in[\\delta, \\delta^{'}]$, there exists an allocation policy $\\mathbf{a}^{*}$ with difference in \\improvement equal to $\\delta^{*}$. A similar result holds for differences in regret\\xspace. \\end{lem}\n\n\\begin{proof}\nWe show the result for differences in \\improvement. The proof can be readily extended to differences in regret\\xspace. We choose $\\lambda = \\frac{\\delta^{'} - \\delta^{*}}{\\delta^{'} - \\delta}\\in [0, 1]$. We define an allocation policy $\\mathbf{a}^{\\lambda}$ as follows:\n\\begin{itemize}\n \\item Partition randomly the individuals into two populations $G_{\\lambda}$ and $G_{1-\\lambda}$ of size $\\lambda N$ and $(1-\\lambda )N$, respectively.\n \\item For each program $k$, assign $\\lambda c_{k}$ of them to the population $G_{\\lambda}$; and $(1-\\lambda)c_{k}$ of them to the population $G_{1 -\\lambda}$. \n \\item Apply the allocation policy $\\mathbf{a}$ to the population $G_{\\lambda}$ and $\\mathbf{a}^{'}$ to the population $G_{1-\\lambda}$. \n\\end{itemize}\nBy construction, the policy $\\mathbf{a}^{\\lambda}$ satisfies the resource constraints. Moreover,\n\\begin{equation}\n \\Delta I(\\mathbf{a}^{\\lambda}) = \\Delta I(\\mathbf{a}) P(G_{\\lambda}) + \\Delta I(\\mathbf{a}^{'}) \\left(1 - P(G_{\\lambda})\\right) =\\delta \\lambda + \\delta^{'}(1-\\lambda) = \\delta^{*},\n\\end{equation}\nwhere the last equality comes from our choice for the value of $\\lambda$. \n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{cor: 2}]\nWe choose $\\epsilon = \\frac{E[\\Delta U|S=1] - E[\\Delta U | S=0]}{2}$. $\\epsilon >0$ by assumption. Using the assumption of Theorem \\ref{cor: 2}, there exist $\\mathbf{a}$ and $\\mathbf{a}^{'}$ such that $\\Delta I(\\mathbf{a}) < 0$ and $\\Delta I(\\mathbf{a}^{'}) > 0$. We apply Lemma \\ref{lem: 1} with $\\delta =\\Delta I(\\mathbf{a})< 0 $, $\\delta^{'}=\\Delta I(\\mathbf{a}^{'}) > 0$ and $\\delta^{*}=\\min\\{\\epsilon, \\delta^{'} \/ 2\\}$: there exists a policy $\\mathbf{a}^{*}$ such that $\\Delta I(\\mathbf{a}^{*}) = \\delta^{*} > 0$. Moreover, $\\Delta R(\\mathbf{a}^{*}) = E[\\Delta U|S=1] - E[\\Delta U | S=0] - \\Delta I(\\mathbf{a}^{*}) \\geq \\epsilon > 0$.\n\\end{proof}\n\nThus, regret\\xspace fairness and \\improvement fairness cannot hold simultaneously unless populations are homogeneous in terms of their best response to the allocation (Theorem \\ref{thm: trade-off}). Moreover, assessing which group is favored by a given policy can lead to contradictory results depending on whether we measure the fairness properties of the policy in terms of differences in \\improvement or regret.
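The randomised interpolation used in the proof of Lemma~\\ref{lem: 1} is constructive and can be sketched directly (an illustrative sketch only; \\texttt{policy\\_a} and \\texttt{policy\\_b} stand for any two feasible allocation rules that return 0-1 assignment matrices given a sub-population and scaled capacities, and are not defined here). By construction the mixture respects the capacity constraints, and its improvement gap is, up to sampling noise, the $\\lambda$-weighted combination of the two original gaps, which is exactly what the lemma exploits.

\\begin{verbatim}
import numpy as np

def mix_policies(policy_a, policy_b, U, capacities, lam, seed=0):
    # Apply policy_a to a random fraction lam of the individuals (with a lam
    # share of each capacity) and policy_b to the remainder, as in the
    # interpolation argument of Lemma 1.  capacities: NumPy array of c_k.
    rng = np.random.default_rng(seed)
    n = U.shape[0]
    in_a = rng.random(n) < lam                       # random partition of the population
    cap_a = np.floor(lam * capacities).astype(int)   # split capacities between the parts
    cap_b = capacities - cap_a

    A = np.zeros_like(U)
    A[in_a] = policy_a(U[in_a], cap_a)               # feasible assignment on the first part
    A[~in_a] = policy_b(U[~in_a], cap_b)             # feasible assignment on the second part
    return A
\\end{verbatim}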
The result in Theorem \\ref{cor: 2} illustrates that decision-makers cannot expect that \\improvement and regret\\xspace notions tell a similar story about whether an allocation policy under-serves a given group. The results in Theorem \\ref{thm: trade-off} and Theorem \\ref{cor: 2} are general, since they hold for any set of capacities $c_{1}$, ..., $c_{K}$ and for any distributions of utilities such that $E[\\Delta U|S=1] > E[\\Delta U | S=0]$. Both illustrate the central role of the difference between $E[\\Delta U|S=0]$ and $E[\\Delta U|S=1]$ in driving a wedge between \\improvement and regret\\xspace fairness. Additionally, Theorem \\ref{cor: 2} is not very restrictive in its assumptions, since it only requires that neither group is favored, in terms of \\improvement, under every possible policy. \n\n\\subsection{\\Equitability versus Gain\\xspace}\nIn this section, we show that the fairness trade-offs between \\improvement and regret\\xspace also exist with the multiplicative notions of fairness, gain\\xspace and \\equitability. Unlike the trade-offs between improvement and regret, where our results are general, in the case of \\equitability versus gain\\xspace, we derive results in a stylized framework and leave it to future work to extend our results to more general settings. Nevertheless, this section captures the essence of the problem in the multiplicative setting. We denote for each individual by $r=u^{\\min}\/u^{\\max}$ the ratio between the lowest and highest utility obtained from the intervention. This serves as a multiplicative counterpart of $\\Delta u$. We consider the following framework (SF1):\n\\begin{itemize}\n \\item There are two types of individuals: type $A$ with high value $\\overline{r}$ for the ratio $r$; type $B$ with a low value $\\underline{r}< \\overline{r}$ for $r$. \n \\item Conditional on $r$, the distribution of utility is similar across programs and types.\n\\end{itemize}\nIn this stylized framework, assigning an individual their most beneficial program multiplies $u^{\\min}$ by either $1\/\\overline{r}$ (a small relative gain, type A) or $1\/\\underline{r}$ (a larger one, type B). We characterize the heterogeneity across groups by differences in the distribution of types A and B within each group. We denote by $\\pi_{0}$ the proportion of type B individuals for group $S=0$ and by $\\pi_{1}$ the proportion of type $B$ for group $S=1$.\n\n\\begin{thm}\n\\label{thm: equi}\nIn the stylized framework (SF1):\n\\begin{itemize}\n\\item A policy satisfies both \\equitability and gain\\xspace fairness if and only if $\\pi_{0}=\\pi_{1}$. \n\\item If $\\pi_{0}\\neq \\pi_{1}$, a policy $\\mathbf{a}$ that achieves gain\\xspace (\\equitability) fairness favors, according to \\equitability (gain\\xspace) fairness, whichever group has the highest (lowest) proportion of type $A$ individuals.\n\\end{itemize}\n\\end{thm}\n\n\\begin{proof}\nLet $\\underline{\\alpha}$ denote $E\\left[\\frac{\\mathbf{a}(u).\\mathbf{u}}{u^{\\min}}|r=\\underline{r}\\right]$ and $\\overline{\\alpha}$ denote $E\\left[\\frac{\\mathbf{a}(u).\\mathbf{u}}{u^{\\min}}|r=\\overline{r}\\right]$.
Then, we write (for any policy) differences in gain\\xspace as \n\\begin{equation}\n\\label{eq: if}\n \\Delta G(\\mathbf{a}) = \\left\\{\\pi_{1}\\underline{\\alpha} + (1 - \\pi_{1}) \\overline{\\alpha}\\right\\} - \\left\\{\\pi_{0}\\underline{\\alpha} + (1 - \\pi_{0}) \\overline{\\alpha}\\right\\} = (\\pi_{0} - \\pi_{1})(\\overline{\\alpha} - \\underline{\\alpha})\n\\end{equation}\nand differences in \\equitability as \n\\begin{equation}\n \\Delta S(\\mathbf{a}) =\\left\\{\\pi_{1}\\underline{\\alpha}\\underline{r} + (1 - \\pi_{1}) \\overline{\\alpha}\\overline{r}\\right\\} - \\left\\{\\pi_{0}\\underline{\\alpha}\\underline{r} + (1 - \\pi_{0}) \\overline{\\alpha}\\overline{r}\\right\\} = (\\pi_{0} - \\pi_{1})(\\overline{\\alpha}\\overline{r} - \\underline{\\alpha}\\underline{r}).\n\\end{equation}\nTherefore, gain\\xspace and \\equitability fairness are equivalent to \n$(\\pi_{0} - \\pi_{1})(\\overline{\\alpha} - \\underline{\\alpha})=0$ and $(\\pi_{0} - \\pi_{1})(\\overline{\\alpha}\\overline{r} - \\underline{\\alpha}\\underline{r})=0$. Hence, if $\\pi_{0}\\neq \\pi_{1}$, $\\overline{\\alpha}=\\underline{\\alpha}$ and $\\overline{\\alpha}\\;\\overline{r} = \\underline{\\alpha}\\;\\underline{r}$, which is not possible since $\\underline{r} \\neq \\overline{r}$ and $\\overline{\\alpha}>0$. \n\nTo show the second part of Theorem \\ref{thm: equi}, we use the fact that gain\\xspace fairness implies that $\\overline{\\alpha}=\\underline{\\alpha}$ (equation \\eqref{eq: if}) and that the difference in \\equitability between group $S=1$ and $S=0$ can then be written $\\Delta S(\\mathbf{a})=(\\pi_{0} - \\pi_{1})(\\overline{r}-\\underline{r})\\overline{\\alpha}$, which has the same sign as $\\pi_{0} - \\pi_{1}$ since $\\overline{r}>\\underline{r}$. Therefore, if $\\pi_{0} > \\pi_{1}$, the policy favors group $S=1$ with respect to \\equitability fairness; otherwise, it favors group $S=0$.\n\\end{proof}\n\nTheorem \\ref{thm: equi} states that \\equitability and gain\\xspace can be satisfied simultaneously if and only if the two groups have the same fraction of type $A$ individuals. It is similar in spirit to the results above, showing that unless populations meet stringent requirements of similarity in utility distributions between groups (in this case instantiated by the fractions of the two types in each population), the versions of fairness characterized by comparing with the min versus the max cannot be simultaneously satisfied. \n\n\\subsection{Multiplicative versus Additive Normalization}\n\n\\Improvement and gain\\xspace fairness aim at capturing a similar fairness concept: groups receive on average the same increase in utility relative to assigning the least beneficial service. The two fairness metrics differ only in whether the normalization relative to the lowest utility that an individual can derive from the overall intervention is additive or multiplicative. In this section, we show that even the choice of normalization generates inherent fairness trade-offs. \n\nWe consider the following stylized framework (SF2):\n\\begin{itemize}\n \\item There are two types of individuals: type $C$ for which $u^{\\min}$ takes a low value $\\underline{u}$; and type $D$ for which $u^{\\min}$ takes a larger value $\\overline{u} > \\underline{u}$. \n \\item Conditional on $u^{\\min}$, the distribution of utility is similar across programs and types.\n\\end{itemize}\nAlthough stylized, both assumptions allow us to characterize the heterogeneity across groups by differences in their distribution over $u^{\\min}$.
Let $p_{s}$ denote the fraction of type $C$ for group $S=s$. Differences in $p_{s}$ across groups imply differences in the distribution of utility $P(U|S)$ within each group, even if the conditional distribution $P(U|U^{\\min})$ is similar across types.\n\n\\begin{thm}\n\\label{thm: 2}\nIn the stylized framework (SF2) with types $C$ and $D$, a policy $\\mathbf{a}$ satisfies both \\improvement fairness and gain\\xspace fairness for groups $S=0$ and $S=1$ if and only if one of the following conditions holds:\n\\begin{itemize}\n\\item $p_{0}=p_{1}$;\n\\item the policy $\\mathbf{a}$ assigns the least beneficial program to everyone (i.e. $\\mathbf{a}.\\mathbf{u}=u^{\\min}$). \n\\end{itemize}\n\\end{thm}\n\n\\begin{proof}\nLet $\\underline{\\beta}$ denote $E[\\mathbf{a}.\\mathbf{u}|U^{\\min}=\\underline{u}]$ and $\\overline{\\beta}$ denote $E[\\mathbf{a}.\\mathbf{u}|U^{\\min}=\\overline{u}]$. Then, we write differences in \\improvement as \n\\begin{equation}\n \\begin{split}\n \\Delta I(\\mathbf{a})& =\\left\\{p_{1}\\underline{\\beta}+(1-p_{1})\\overline{\\beta}-p_{1}\\underline{u}-(1-p_{1})\\overline{u}\\right\\}-\\left\\{p_{0}\\underline{\\beta} + (1 - p_{0})\\overline{\\beta}-p_{0}\\underline{u}-(1-p_{0})\\overline{u}\\right\\} \\\\\n & =(p_{1}-p_{0})(\\underline{\\beta}-\\overline{\\beta} + \\overline{u} - \\underline{u})\n \\end{split}\n\\end{equation}\nand differences in gain\\xspace as \n\\begin{equation}\n\\Delta G(\\mathbf{a}) = \\left\\{p_{1}\\frac{\\underline{\\beta}}{\\underline{u}} + (1 - p_{1}) \\frac{\\overline{\\beta}}{\\overline{u}}\\right\\} - \\left\\{p_{0}\\frac{\\underline{\\beta}}{\\underline{u}} + (1 - p_{0}) \\frac{\\overline{\\beta}}{\\overline{u}}\\right\\}=\n \\left(p_{1}-p_{0}\\right)\\left(\\frac{\\underline{\\beta}}{\\underline{u}}-\\frac{\\overline{\\beta}}{\\overline{u}}\\right).\n\\end{equation}\nTherefore, \\improvement fairness and gain\\xspace are equivalent to\n$(p_{0}-p_{1})(\\underline{\\beta}-\\overline{\\beta} + \\overline{u} - \\underline{u}) = 0$ and $\\left(p_{0}-p_{1}\\right)\\left(\\frac{\\underline{\\beta}}{\\underline{u}}-\\frac{\\overline{\\beta}}{\\overline{u}}\\right) = 0\n$. If $p_{0}\\neq p_{1}$, \\improvement and gain\\xspace fairness imply $\\underline{\\beta}=\\frac{\\underline{u}}{\\overline{u}}\\overline{\\beta}$ and $\\overline{\\beta}= \\overline{u}$. The latter equality leads to $\\mathbf{a}.\\mathbf{u}=\\overline{u}$ if $u^{\\min}=\\overline{u}$ and the former equality leads to $\\mathbf{a}.\\mathbf{u}=\\underline{u}$ if $u^{\\min}=\\underline{u}$.\n\\end{proof}\nTheorem \\ref{thm: 2} demonstrates a simple, yet general, setting where \\improvement fairness and gain\\xspace fairness cannot be obtained simultaneously unless either the distributions of utilities are the same across groups ($p_{0}=p_{1}$) or the policy does not create any utility improvement relative to $U^{\\min}$. \n\n\n\n\n\\section{Conclusion}\nHow do we judge whether an approach to allocation of scarce societal resources is fair for different sociodemographic groups of public concern? The problem lies at the intersection of recent work in fair machine learning and a long history of work from economics, social choice, and algorithmic game theory on fair division. It also raises questions of local justice~\\cite{elster1992local}, which studies how individuals are prioritized in the allocation of scarce resources by local institutions. The key point we make in this paper is that \\emph{baselines matter when we measure outcomes for different groups}.
The exact same allocation may favor one group over another when assessed against the baseline intervention of doing nothing, but the favored group can flip when measured against the baseline of giving each group the best intervention it could get in a scenario with no resource constraints. The social objective being optimized can also drive fairness results -- for example, utilitarian allocations will typically favor groups with higher variance in utilities across different types of services, even if the means are the same. \n\nOur results are more than theoretical. We show that the pattern arises in homeless service delivery, where outcomes vary by and within sociodemographic groups. For instance, returns to homelessness vary more with the allocated service for households without children than for families with children. Naive policy applications that fail to consider baseline variation may negatively impact some groups. Aiming to reduce overall homelessness, for example, by prioritizing households without children for intensive services disproportionately excludes households with children from receiving their best service, whereas an alternative policy that matches households with children to their best service fails to reduce overall homelessness. The data illustrate similar fairness trade-offs across intersecting sociodemographic groups, including disability status, gender, age, and race. Failing to consider carefully the underlying distributions and metrics for success risks counterproductive policy initiatives. Current national advocacy to reduce veteran and chronic homelessness to zero asks communities to shift resources in ways that may undermine other goals~\\cite{builtforzero}. Moreover, federal and local policies simultaneously strive for system efficiency and equity, which prove antithetical in many contexts~\\cite{fowler2019solving}. Our findings raise serious questions for institutions when designing homeless policies and social policy more generally. \n\n\n\\section{Simulations With Utilitarian and Random Allocations}\\label{sec:exp}\n\nThus far, we have not needed to define an allocation policy explicitly, since we were focused on existence results. In this section, we consider two natural allocation policies -- utilitarian (maximizing the sum of utilities of all agents) and random. Both must respect capacity constraints. We simulate a simple environment with two groups and three services. In one setting, members of the two groups have different mean utilities from receiving the three services, while the variances are the same. In the second, members of the two groups have the same mean utilities from receiving the three services, but different variances. We are interested in understanding (1) how the different fairness measures behave in these two settings; and (2) the role played by utilitarian objectives in the assignment problem.\n\nIn our setting, there are three ($k=1, 2, 3$) services with fixed capacities ($c_{1}=c_{2}=c_{3}=1000$) and $3000$ applicants divided into two groups of equal size: \\emph{group 0} and \\emph{group 1}. We sample individual utilities for service $k$ from a normal distribution with mean $\\mu_{sk}$ and standard deviation $\\sigma_{sk}$, where $s=0$ for \\emph{group 0} and $s=1$ for \\emph{group 1}. \n\n\n\\subsection{Groups with Different Means}\n\\begin{figure}\n\\subfloat[Distribution of $\\Delta U$ ]{\\includegraphics[width=.295\\textwidth]{Figures\/Simulation\/Case1_distribution.pdf}}\n\\subfloat[\\Improvement vs.
Regret\\xspace]{\\includegraphics[width=.36\\textwidth]{Figures\/Simulation\/Case1_regret_improvement.pdf}\\label{fig:imp_reg_case1}}\n\\hspace{0.05em}\n\\subfloat[Gain\\xspace vs. \\Equitability]{\\includegraphics[width=.305\\textwidth]{Figures\/Simulation\/Case1_gain_shortfall.pdf}}\n\\caption{Simulation results when groups have different mean utilities. Panel~(a) shows the distribution of the maximum utility gains $\\Delta U = U^{\\max}-U^{\\min}$ for \\emph{group 0} (blue), and \\emph{group 1} (orange). Panel~(b) shows the differences in improvement and regret, and Panel~(c) shows the differences in gain and equitability. Error bars show the 95\\% confidence interval of each fairness metric over 100 instantiations of the random allocation.}\n\\label{fig:diff_mean_same_std}\n\\end{figure} \n\nIn this set of simulations, we study the behavior of fairness measures when individual utilities are sampled from group-dependent distributions. The groups have different sample means $\\mu$ but the same variances $\\sigma^{2}$. For \\emph{group 0}, the means of the three services are $\\mu_{01}=0.2$, $\\mu_{02}=0.3$, and $\\mu_{03}=0.4$. For \\emph{group 1}, the means are $\\mu_{11}=0.4$, $\\mu_{12}=0.5$, and $\\mu_{13}=0.63$.\nThe variances of the three services are equal for both groups: $\\sigma_{01}^{2} = \\sigma_{11}^{2}= \\num{1e-4}$, $\\sigma_{02}^{2} = \\sigma_{12}^{2}=\\num{4e-4}$, and $\\sigma_{03}^{2} = \\sigma_{13}^{2}=\\num{9e-4}$. Individuals in \\emph{group 1} have on average higher utilities for all services. \n\nAs pointed out in section~\\ref{subsec:imp_vs_regret}, we observe in Figure \\ref{fig:diff_mean_same_std} that the difference in $\\Delta U$ leads to a trade-off between the \\improvement and regret\\xspace fairness metrics. \nFigure \\ref{fig:diff_mean_same_std} shows that even for a random assignment, different metrics lead to conflicting fairness assessments. \nThe \\improvement fairness metric favors the group with higher mean $\\Delta U$ (\\emph{group 1}), and regret\\xspace favors the group with lower mean $\\Delta U$ (\\emph{group 0}). To complicate fairness assessment further, switching from additive to multiplicative normalization reverses which group is favored.\n\nMoreover,\nthe utilitarian allocation appears to favor \\emph{group 1} according to \\improvement, regret\\xspace and gain\\xspace, but favors \\emph{group 0} in terms of \\equitability. These results confirm in a simulated environment that utility normalization has profound implications on how we assess the fairness of an allocation.\n\n\n\\subsection{Groups with Equal Means and Different Variances}\n\n\\begin{figure}\n\n\\subfloat[Distribution of $\\Delta U$ ]{\\includegraphics[width=.3\\textwidth]{Figures\/Simulation\/Case2_distribution.pdf}}\n\\subfloat[\\Improvement vs. Regret\\xspace (Utilitarian)]{\\includegraphics[width=.329\\textwidth]{Figures\/Simulation\/Case2_regret_improvement.pdf}\\label{fig:imp_reg_case2}}\n\\subfloat[ Gain\\xspace vs. \\Equitability (Utilitarian)]{\\includegraphics[width=.33\\textwidth]{Figures\/Simulation\/Case2_gain_shortfall.pdf}}\\\\\n \\caption{Simulation results when groups have the same mean utilities for the services, but different variances. Panel~(a) shows the distribution of the maximum utility gains $\\Delta U = U^{\\max}-U^{\\min}$ for \\emph{group 0} (blue), and \\emph{group 1} (orange). Panel~(b) shows the differences in improvement and regret, and Panel~(c) shows the differences in gain and equitability.
\n Group 1 is favored strongly by all the fairness measures when allocations are utilitarian.}\n \\label{fig:same_mean_diff_std}\n\\end{figure} \n\nIn our second set of simulations, we study the effects of groups having similar means but different variances, a situation that is commonly discussed, for instance in the context of gender differences in student performance~\\cite{baye2016gender}. In this case, we hypothesize that the higher variance group is likely to be favored by utilitarian allocations. For both groups, the means for the three services are equal, $\\mu_{01}=\\mu_{11} = 0.4$, $\\mu_{02}=\\mu_{12} = 0.5$, and $\\mu_{03}=\\mu_{13} = 0.6$. \nFor \\emph{group 0}, the variances for the three interventions are set to $\\sigma_{01}^{2} = \\num{9e-5}$, $\\sigma_{02}^{2} = \\num{2e-3}$, $\\sigma_{03}^{2} = \\num{1e-2}$, while for \\emph{group 1},\nthe variances for the three interventions are set to\n$\\sigma_{11}^{2} = \\num{9e-3}$, $\\sigma_{12}^{2} = \\num{2e-2}$, \n$\\sigma_{13}^{2} = \\num{3e-2}$. Thus, \\emph{group 0} has lower variance. \n\nOur results in Figure \\ref{fig:same_mean_diff_std} show that, as hypothesized, the group with larger variance (group 1) is indeed favored according to all fairness metrics. When maximizing the sum of utilities, it is optimal to assign their best services to individuals with utilities in the tail of the distribution. We find that a larger fraction (65\\%) of individuals in \\emph{group 1} than in \\emph{group 0} (46\\%) receive the service that maximizes their utility.\n\nWe leave it for future research to investigate further the role of variance on the fairness properties of a utilitarian allocation.\n\n\\section{Introduction}\\label{sec:intro}\n\nMany social interventions that allocate resources to individuals are challenging because individuals have heterogeneous utilities. \nThus, the design and analysis of allocation policies for social interventions in terms of efficiency and fairness is critical \\cite{roth2015gets}, as seen in many domains including child protection (e.g. \\cite{chouldechova2018case}), healthcare (e.g. \\cite{yadav2016using}), and homeless services (e.g \\cite{kube2019fair,brown2018reliability}). \nA particular concern for the use of machine learning posits that the tools systematically disfavor some sociodemographic or intersectional groups (see \\cite{chouldechova2018frontiers} for a review). For example, a growing body of work has documented racial disparities in credit lending, recidivism risk assessment \\cite{ProPublica2016}, education \\cite{gardner2019evaluating}, healthcare \\cite{pfohl2019creating}, and policing \\cite{ensign2018runaway}. In this paper, we explore how to measure these potential disparities in the context of allocating resources given a limited budget. The literature on fair resource allocation has typically come from the areas of fair division and cooperative game theory. In that literature, one typically thinks of individuals as having preferences, and tries to define measures of fairness and allocation mechanisms that demonstrate these properties with respect to individual preferences. 
Recent notions of group fairness coming from the fair division line of literature strengthen the requirements for individual fairness \\cite{conitzer2019group} and are thus too strong for situations of scarce resource allocation, where allocations by definition must be unfavorable to some individuals.\n\nSo how should one measure fairness across groups in the allocation of scarce societal resources, where decisions are often made on the basis of multiple criteria? To ground our considerations in a specific case, consider homelessness service provision, where federal policy makes serving the most vulnerable an explicit goal, and at the same time, the effectiveness of services is measured by returns to homelessness among those served \\cite{systemperformance}. Such examples motivate us to consider how different notions of what role social services should play lead to different conclusions about the fairness of potential allocations across demographic groups. \n\nFor example, we could analyze how much better off members of a group are compared with how well they would have done under some minimal baseline allocation, or we could look at how much worse off members of a group are than they would have been under the allocations that serve them the best.\nFairness could then be defined as equitable performance of groups according to these measures, and indeed, the existing literature on fair allocation of both divisible and indivisible resources has looked at measures along both of these directions, of \\improvement (or gain\\xspace) and regret\\xspace (or \\equitability). \nAlthough both are reasonable definitions of a fair allocation, we consider two important factors that arise in many real-world problems. First, instead of the problem simply focusing on a set of identical resources that need to be allocated amongst agents, there is often a whole set of different interventions, each with capacity constraints (for example, different types of homelessness resources or different cities that refugees can be matched to). Second, individuals may respond heterogeneously to the different interventions (for example, homeless individuals with disabilities may benefit disproportionately from intensive housing supports, or refugees may assimilate and find jobs more easily in places where there is already a substantial population from their place of origin). \n\nWe show that when there is a multiplicity of possible services, and groups are heterogeneous in the distributions of utilities they receive from different services, it becomes impossible to simultaneously satisfy \\improvement- and regret\\xspace-oriented definitions of group fairness. Even more dramatically, \nan allocation policy that appears to favor one group according to \\improvement fairness can favor the other group according to regret\\xspace fairness. The results yield insights into inherent trade-offs that policymakers face when attempting to achieve a fairness objective. How we measure improvement or regret also matters when assessing the fairness of an allocation policy. For example, we could measure improvement by the ratio of realized utility over baseline utility (a multiplicative measure), or by the difference between realized utility and baseline utility (an additive measure). Depending on the application, it is not always clear which of these additive or multiplicative normalizations makes more sense.
We establish, in a stylized framework, that fairness in terms of additive normalization and fairness in terms of multiplicative normalization cannot hold simultaneously except when the distribution of individual responses to different allocations is similar across demographic groups. \n\nThese trade-offs are not theoretical corner-cases and have substantive implications for social policy. We use administrative data from a regional homeless system to explore the fairness of a capacitated assignment of community-based services that address housing needs. Services include transitional housing, rapid rehousing, and emergency shelter; three programs that vary in intensity and availability. We measure the utility of a service to a household as the probability estimated in prior work by \\cite{kube2019fair} that the household would make a successful exit from homelessness given the delivery of that service. We first document significant differences in utility distributions across different groups (e.g., disabled versus not disabled households, families with children versus households without children, single females with versus without children). We then confirm our theoretical results that the differences in utility distributions across groups generate trade-offs when assessing the fairness of an allocation. For example, we consider the original allocation as recorded in the administrative data and we find that improvement and regret disagree on whether the policy favors households with or without children, as well as other groups. \n\nIn addition to contributing to our understanding of how the definition and measurement of fairness is affected by heterogeneity in how members of different groups may respond to interventions, these findings can inform practice in homeless and social services that allocate scarce resources across diverse populations. Policies frequently attempt to maximize public welfare by targeting available supports towards heterogeneous groups based on competing notions of fairness (e.g., vulnerability, efficiency, equality). Understanding the fairness trade-offs and measurement sensitivity allows for more intentional policy-making and better evaluation.\n\n\\section{Fairness Trade-offs in Homeless Service Delivery}\n\nOur theoretical analysis suggests that heterogeneity in service responses across groups drives fairness metrics in opposite directions. In this section, we investigate whether the fairness tradeoffs emerge in the capacitated assignment of homeless services across several sub-populations. We hypothesize that if sociodemographic group differences exist in the utilities received from allocations (and in particular, between the differences in the best versus worst allocations), then we should see tradeoffs between \\improvement versus regret\\xspace fairness, \\equitability versus gain\\xspace, and \\improvement versus gain\\xspace. We provide evidence for both the antecedent (\\emph{heterogeneity in responses across groups}) and the consequent (\\emph{inherent fairness trade-offs between groups}).\n\n\n\\subsection{Background}\nHomelessness represents a socioeconomic and public health challenge for many communities in the United States. Approximately $1.5$ million people experience homelessness for at least one night every year \\cite{henry2020ahar, fisher2018homelessness}. Homelessness has short- and longer-term implications on health, employment, and crime \\cite{fowler2019solving, khadduri2010costs, cohen2020housing}. 
Guided by federal policies, communities offer an array of services for households lacking stable and permanent living accommodations. We study three main homeless services: Transitional Housing (TH); Rapid Rehousing (RRH) and Emergency Shelter (ES). Transitional Housing provides accommodation for up to 24 months with comprehensive case management to address barriers toward stable housing, such as substance abuse and issues related to behavioral health. Rapid Rehousing offers access to rental units for six months without intensive case management. Emergency Shelter provides a bed to sleep at night for no more than one or two months. On a daily basis, caseworkers assign homeless households seeking assistance to an available service, reserving the most intensive TH for those with greater needs.\n\n\\subsection{Data}\n\\label{sec:data}\nOur main dataset is based on estimated probabilities of households re-entering homelessness services within two years after initial receipt of services. This data, collected by \\cite{kube2019fair} is publicly available.\\footnote{\\url{https:\/\/github.com\/amandakube\/Allocating-Homelessness-Interventions---Counterfactual-Predictions}} The estimates are based on applying a machine learning model (BART \\cite{hill2011bayesian}) to administrative records that tracked service provision in a metropolitan area from 2007 through 2014. Service providers collected demographic and household characteristics upon entry into the system, and data capture the intervention assigned and whether households subsequently requested additional assistance \\cite{kube2019fair}. The model estimates counterfactual probabilities $p_{ik}$ of a household $i$ to re-enter the homeless system within 2 years given the assignment of a specific service $k$, where $k\\in \\{TH, RRH, ES\\}$. The original data also tracks responses to homelessness prevention -- time-limited monetary assistance that differs from the other three interventions that allocate actual bed space. Given that the constraints on homelessness prevention are different, we focus here only on households that needed actual bed space (and were therefore not eligible to receive prevention services). Therefore, our final data contains $3,375$ households and they received either TH, RRH, or ES. \n\nWe compute the utility of service $k$ to individual $i$ as $u_{ik}=1-p_{ik}$. \nWe obtained from Kube et al. additional sociodemographic characteristics\nfor each household, including race, gender, age, disability status,\npresence of spouse and\/or children, and household size. \n\nWe define a series of sociodemographic groups and intersectional identities expected to exhibit substantial heterogeneity in responses to homeless services. First, households with disabilities are considered more vulnerable, and prior research shows that more vulnerable households do best with more intensive services ~\\cite{aubry2020,munthe-kaas2018}. Therefore, we expect households with disabilities to benefit more from TH and less from ES than the rest of the population. Second, families with children under the age of 18 experience homelessness due to socioeconomic reasons rather than disability and vulnerability, and thus, we anticipate families will respond better to rapid rehousing than more intensive TH~\\cite{cunningham2015rapid, fertig2008homelessness, rog2007characteristics}. 
Third, we examine the intersection between gender and family status, assuming that single female households without children do better in TH compared with single female-headed families with children, who are more likely to benefit from RRH. Fourth, we look within households headed by youth aged 18 to 24 years to compare disability status (versus no disability) and family status (children versus no children), hypothesizing that those with disabilities benefit more from TH and families with children from RRH \\cite{morton2020}. Lastly, given the over-representation in homelessness of minorities and especially Black households, we test how race affects homeless service utilities \\cite{henry2020ahar}. Prior research suggests the causes of homelessness vary for White people, who are more likely to experience disabilities, versus Black people, who experience greater housing discrimination and marginalization ~\\cite{jones2016does}. Moreover, race intersects with gender (males vs females) and family status (with children versus without children) in ways that could drive variation in homeless service outcomes. \n\n\\subsection{Heterogeneity across Demographic Groups}\nIn this section, we document heterogeneity in the distributions of utility across various sociodemographic groups. For each household, we compute the difference $\\Delta U$ between its best and worst utility. \n\nFigure \\ref{fig:delta_u_het} shows heterogeneity in response to homeless services across \nhouseholds with and without reported disabilities, and with and without children. The distribution of $\\Delta U$ for households with a disability skews to the right (panel a)); assigning the best service to a disabled client has a larger impact in terms of the probability to re-enter the homeless system than assigning a client without a disability to their most beneficial service. The difference in the means of the distributions is statistically significant, with a t-statistic of 8.5 and an infinitesimally small p-value. This finding aligns with prior research that shows vulnerable households do best with more intensive services \\cite{aubry2020,munthe-kaas2018}. The distribution of $\\Delta U$ for families without children skews strongly to the right compared with households with children (panel b)). The mean of $\\Delta U$ for households without children is 0.07, while it is only 0.04 for households with children. The difference is statistically significant, with a t-statistic of 29.0 and an infinitesimally small p-value. This result illustrates how families with children differ in their responses to housing assistance compared to homeless individuals. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{.\/Figures\/delta_u_faact_het_main.png}\n \\caption{Distribution of the maximum utility gain $\\Delta U$ that individuals can derive from the homeless system across various demographic groups. We obtain the probability density function of $\\Delta U = U^{max}-U^{min}$ via Gaussian kernel density estimation with a bandwidth of $0.2$. Differences in probability density functions between households with and without disability (Panel a)) and with and without children (Panel b)) illustrate heterogeneous responses to housing assistance.}\n \\label{fig:delta_u_het}\n\\end{figure}\n\nIn Figure \\ref{fig:delta_u_hom}, we look at intersectional sociodemographic groups. We find in panel c) that the impact of different homeless services for a single female depends strongly on whether there are children in the household. 
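The quantities behind these comparisons are straightforward to reproduce; a minimal sketch is given below, assuming a data table with one estimated-utility column per service and boolean group indicator columns (all column names are hypothetical, and the use of Welch's t-test and of this particular kernel bandwidth are illustrative choices rather than a description of our exact pipeline).
\\begin{verbatim}
import numpy as np
import pandas as pd
from scipy import stats

# df is assumed to hold one row per household, with one utility column per
# service and a boolean group column (names below are illustrative only).
UTIL_COLS = ["u_TH", "u_RRH", "u_ES"]

def delta_u(df):
    """Maximum utility gain: best minus worst service for each household."""
    return df[UTIL_COLS].max(axis=1) - df[UTIL_COLS].min(axis=1)

def compare_groups(df, flag):
    """Group means of Delta U and a Welch two-sample t-test on their difference."""
    d1 = delta_u(df[df[flag]])
    d0 = delta_u(df[~df[flag]])
    t, p = stats.ttest_ind(d1, d0, equal_var=False)
    return d1.mean(), d0.mean(), t, p

def kde(values, grid=np.linspace(0.0, 0.5, 200), bw=0.2):
    """Gaussian kernel density estimate of the Delta U distribution."""
    return stats.gaussian_kde(values, bw_method=bw)(grid)
\\end{verbatim}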
Similarly, youth with and without disability respond differently to homeless services (panel d)). For both intersections, the difference in means is statistically significant with a t-statistic equal to 25.7 for single female versus single mother and to 5.1 for youth with a disability versus youth without a disability. \n\nFigure \\ref{fig:delta_u_inter} explores differential responses to housing assistance by race and shows substantial differences in the distribution of $\\Delta U$ between Black and White males (Panel g)). \nBlack homeless populations may on average benefit more from more intensive homeless services. Prior research \\cite{jones2016does} suggests that social discrimination and socio-economic disadvantage could increase the risk for homelessness among populations with perceived Black background and that housing assistance could mitigate some of these vulnerabilities. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{.\/Figures\/delta_u_faact_het_inter.png}\n \\caption{Same as Figure \\ref{fig:delta_u_het} but for intersection groups single female with and without children (Panel c)); youth under 25 with and without disability (Panel d)); and, youth under 25 with and without children (Panel e)). }\n \\label{fig:delta_u_hom}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.95\\textwidth]{.\/Figures\/delta_u_faact_het_race.png}\n \\caption{Same as Figure \\ref{fig:delta_u_het} but for groups defined by perceived racial background.}\n \\label{fig:delta_u_inter}\n\\end{figure}\n\n\nResults from Figures \\ref{fig:delta_u_het}, \\ref{fig:delta_u_hom} and \\ref{fig:delta_u_inter} \nsuggest that heterogeneity in utility pervades sociodemographic groups. Table \\ref{tab: dist_serv} explains some of this heterogeneity by identifying which of the three services (TH, RRH and ES) benefits the most households within each group. \nFor the homeless population studied in this paper, TH is the most preferred service for $68\\%$ of the population, followed by RRH ($27\\%$) and ES ($5\\%$). We find that this preference for more intensive care is exacerbated for households with disability ($73\\%$ prefer TH), which is in line with prior findings that most vulnerable populations benefit from more integrated care. The preferences of households with a disability toward TH contrasts with the preferences of families with children toward RRH: $67\\%$ of households with children benefit the most from RRH, while TH is the best service for only $16\\%$ of families. This observation holds true for all intersectional groups that include children and could explain differences between males and females, \nsince females are more likely to live with children than males. On the other hand, regardless of gender, the most beneficial program is more likely to be TH for the Black homeless population: TH is the most beneficial service for $46\\%$ of Black females but only for $34\\%$ of White females; and, for $95\\%$ of Black males but only for $80\\%$ of White males.\n\n\\begin{table}[]\n\\centering\n\\caption{Distribution of services that deliver to each household the highest utility across demographic groups. This shows the fraction of households in each demographic group for which ES, TH or RRH leads to the lowest probability to re-enter the homeless system. 
}\n\\begin{tabular}{llllllll}\n \\toprule\n & TH & RRH & ES & & TH & RRH & ES \\\\\n \\midrule \nAll & 0.68 & 0.27 & 0.05 \\\\\n\\midrule\nWith disability & 0.73 & 0.23 & 0.03 & Without disability & 0.66 & 0.28 & 0.06\\\\\n\\midrule \nWithout children & 0.85 & 0.14 & 0.01 & With children & 0.16 & 0.67 & 0.17 \\\\\n\\midrule \nSingle female with children & 0.15 & 0.7 & 0.15 & Single female without children & 0.7 & 0.3 & 0.01 \\\\\n\\midrule \nLess than 25 with disability & 0.62 & 0.37 & 0.01 & Less than 25 without disability & 0.47 & 0.49 & 0.05 \\\\\n\\midrule\nLess than 25 without children & 0.83 & 0.17 & 0.0 & Less than 25 with children & 0.19 & 0.73 & 0.08 \\\\\n\\midrule\nFemale - Black & 0.46 & 0.46 & 0.08 & Female - White & 0.34 & 0.62 & 0.04 \\\\\nMale - Black &0.95 & 0.02 & 0.03 & Male - White & 0.8 & 0.14 & 0.06 \\\\\n\\bottomrule\n\\end{tabular}\n\\label{tab: dist_serv}\n\\end{table}\n\n\n\\subsection{Fairness Trade-Offs in the Observed Allocation of Homeless Services}\nOur theory suggests that heterogeneity in the distribution of the maximum gain $\\Delta U$ for housing assistance would drive fairness metrics in opposite directions: (i) there exist assignments of homeless services with conflicting fairness assessment depending on choosing \\improvement, regret\\xspace, gain\\xspace or \\equitability as the fairness metric (Theorem~\\ref{cor: 2}); (ii) assignments that satisfy improvement fairness could violate regret fairness and vice-versa (Theorem~\\ref{thm: trade-off}). \nSince we observe substantial heterogeneity among the sociodemographic and intersectional groups presented in section 5.3, we know by Theorem~\\ref{cor: 2} that ambiguous fairness assessments can arise for some policies. However, Theorem \\ref{cor: 2} is not constructive and does not tell whether such policies are realistic in the context of homeless services delivery. Here we test whether the observed assignment as reported in the administrative records is subject to contradictory fairness assessments depending on the choice of the fairness metric. \n\nFigure~\\ref{fig:trade-off-obs} (Panel a)) plots the difference in improvement $\\Delta I$ and the negative of difference in regret $-\\Delta R$, so that positive values indicate that the policy favors group $S=1$, while negative values mean the policy favors group $S=0$. According to the \\improvement metric, the observed assignment favors households without children, while according to regret\\xspace, it favors households with children: $\\Delta I$ is equal to $-0.013$, while $-\\Delta R$ is equal to $0.016$. A similar ambiguity emerges for households with and without disability. Moreover, choosing \\improvement over regret\\xspace flips the conclusion on whether the observed assignment is unfair to Black males relative to White males: Black males derive higher utility gains according to \\improvement ($\\Delta I = -0.02$) but lower utility gains according to regret ($-\\Delta R = 0.009$). \nThe results provide empirical evidence that policies that lead to contradictory fairness assessment in Theorem \\ref{cor: 2} are not just theoretical oddities, but do occur in real world applications. Although we do not prove a counterpart of \nTheorem~\\ref{cor: 2} for \\equitability versus gain\\xspace, we find empirically that similar trade-offs do, in fact, occur (Figure \\ref{fig:trade-off-obs}, Panel b)). 
Moreover, in Figure~\\ref{fig:trade-off-obs}, we find one pairwise comparison, youth with a disability versus youth without a disability, for which the observed policy satisfies \\improvement fairness. This instance of \\improvement fairness allows us to test whether Theorem~\\ref{thm: trade-off} holds here. We find that the policy does not satisfy regret\\xspace fairness, which is consistent with the heterogeneity in $\\Delta U$ found in section 5.3 between youth with a disability and youth without a disability. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{.\/Figures\/trade_offs_faact_main_inter_race.png}\n \\caption{Fairness trade-off in the observed assignment of homeless services. This compares which demographic group is favored by the assignment depending on the fairness metric. Trade-offs occur when \\improvement favors one group and regret\\xspace the other one (left panel) or when \\equitability favors one group and gain\\xspace the other one (right panel).}\n \\label{fig:trade-off-obs}\n\\end{figure}\n\\section{Related Work}\\label{sec:literature}\n\n\\subsection{Group Fairness}\nPrior research has led to many definitions of fairness to compare algorithmic outcomes across demographic groups. Popular definitions include statistical parity~\\cite{dwork2012fairness} and equalized odds and opportunity~\\cite{hardt2016equality}. However, these definitions only apply to binary settings and implicitly assume that the utility of an individual is equal to one when the algorithm's outcome is one and equal to zero otherwise. Few papers consider more general definitions of utilities~\\cite{heidari2019moral}. In this paper, we argue as in~\\cite{hossain2020designing} that in many societal applications of machine learning, utilities are heterogeneous across individuals and that this heterogeneity could be systematic across demographic groups. \n\nThe fair division literature offers a framework to compare utilities across individuals. Envy-freeness, proportionality or equitability~\\cite{caragiannis2019unreasonable} are common utility-based definitions of a fair allocation of goods. The literature strengthens these notions of fairness by requiring envy-freeness toward arbitrary segments of the population~\\cite{conitzer2019group, bartholdi1992hard}. In this paper, we focus on notions of group equitability that vary by their normalization, but leave it for future research to explore the role of normalization on group envy-freeness. \n\nA standard assumption in the fair division literature is that utilities, although heterogeneous, are unit-normalized~\\cite{aziz2020justifications}. The rationale for unit-normalization is that it allows one to make more reasonable interpersonal comparisons of utility by converting all utilities to a common scale. Unit-normalization implies that the maximum utility gain is equal to one for all individuals~\\cite{aziz2020justifications}. Our notions of \\equitability or regret\\xspace rely on a similar assumption, which is reasonable in many settings (e.g., voting~\\cite{bouveret2016characterizing}). However, we argue that other reasonable choices of normalization are possible and more relevant in different types of allocation problems. For example, in the case of homeless service delivery, a policymaker would want to account for the fact that families with children have on average more to gain from rapid rehousing programs \\cite{rog2007characteristics}. 
In this case, our measures of \\improvement and gain\\xspace, which normalize by comparison with the worst utility that an individual can expect from an allocation, are also reasonable notions of fairness. This paper relates closely to the work of \\cite{hossain2020designing}, who introduce utility-based notions of group fairness for classification problems. However, they assume away the need to normalize utilities to a similar scale or support. One of our contributions is to show that different normalization approaches can lead to conflicting assessments of the fairness of an allocation policy.\n\n\\subsection{Impossibility Results}\nThe binary outcome setting admits some fundamental impossibility results \\cite{kleinberg2016inherent,chouldechova2017fair}. Except under very restrictive conditions, it is impossible for a classifier to simultaneously equalize false positive rates and false negative rates across groups and also guarantee that predictions are calibrated within each group. \\cite{kleinberg2016inherent} show that the impossibility emerges whenever demographic groups differ systematically in the distribution of features used by the classifier as inputs. In this paper, we demonstrate new impossibility results in the case of utility-based notions of fairness. As in \\cite{kleinberg2016inherent}, we obtain a paradox where fairness guarantees that seem to share the same objective -- that the allocation of resources will be as effective for all demographic groups -- are nonetheless incompatible. \n\nOur results on the incompatibility of different fairness principles are also reminiscent of Arrow's impossibility theorem~\\cite{arrow1950difficulty}. In the presence of heterogeneous preferences, there is no way to aggregate individual preferences into a social welfare function that would satisfy unanimity, non-dictatorship and informational parsimony. The theory of fair allocation \\cite{foley1966resource,varian1973equity} that selects a subset of policies on the basis of their fairness and efficiency obtains possibility results by relaxing informational parsimony \\cite{fleurbaey2005informational}. However, in this paper, we show that we cannot avoid negative results when notions of fairness based on different normalizations have to hold simultaneously.\n\n\\subsection{Algorithmic Allocation of Societal Resources}\nThere has been\nrecent interest in the specific setting where scarce resources that are collective or societal are algorithmically allocated by a centralized institution to individual members of society (see \\cite{das2022local} for a recent review). \nThe design of algorithmic approaches has typically focused on increasing the efficiency of social interventions, including kidney exchange \\cite{li2019incorporating,roth2005pairwise}, housing assistance \\cite{kube2019fair,manlove2006popular}, HIV awareness campaigns \\cite{yadav2016using} and refugee resettlement \\cite{delacretaz2019matching}. In this paper, we investigate how to assess the fairness of resulting allocations. Empirically, we find evidence of our impossibility results in the context of capacitated one-sided matching, which involves a set of services with fixed capacities, a set of agents with heterogeneous preference orderings (see e.g. 
~\\cite{manlove2006popular} for an application to the house allocation problem) and a social worker that assigns a service to each agent.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nUnderstanding the structure of hadrons in terms of their partonic\nconstituents (quarks and gluons) is one of the preeminent tasks\nof modern hadron physics.\nOver the years,\nexperiments like deep inelastic scattering (DIS) led to \na reasonably accurate knowledge of parton distribution functions (PDFs), which\ndescribe the structure of the proton in terms of the fraction of its\nlarge longitudinal momentum carried by a quark.\nThe generalization of this picture from one to three dimensions\n(including transverse spatial coordinates)\nis a major ongoing effort, to\nwhich significant resources of JLab and CERN are\ndedicated, and which is a major science case\nfor the future electron-ion collider (EIC) \\cite{Accardi:2012qut}.\n\nSuch 3D hadron structure can be encoded in the generalized parton \ndistributions (GPDs) \\cite{Mueller:1998fv,Radyushkin:1996nd,Ji:1996nm,Burkardt:2000za},\nwhich are measurable in hard exclusive scattering processes, the most\nstudied of which is deeply virtual Compton scattering (DVCS)\nof a photon with large virtuality $Q^2$ off a proton,\n$\\gamma^{*} p \\to \\gamma p$.\nThe present phenomenological status (see e.g. \\cite{dHose:2016mda,Kumericki:2016ehc})\ndoes not yet allow a reliable determination of most GPDs. We are at the intermediate\nstage where one aims for related functions --- Compton form factors (CFFs), which\n(at leading order in $1\/Q^2$)\nfactorize into a convolution of GPDs and the known\nperturbatively calculable coefficient functions.\nCFFs thus also describe distributions of partons, albeit indirectly, \nwhile at the same time being more accessible experimentally.\nThis is completely analogous to the history of DIS studies, where the extraction\nof form factors preceded the determination of PDFs.\n\nSince DVCS probes (both the initial virtual and final real photon) couple\nto charge, not flavor, \nto determine the distributions of particular quark\nflavors it is necessary either to use other processes with flavored\nprobes (e. g. 
with meson instead of photon in the final state),\nor to combine DVCS measurements with different targets,\nlike protons and neutrons.\nThe latter method, involving processes with fewer hadronic states, is less\nprone to the influence of low-energy systematic uncertainties, and\nwill be utilized in this study.\n\nThe data on proton DVCS is relatively rich, so\nit is the recent complementary \\emph{neutron} DVCS measurement by \nJLab's Hall A collaboration \\cite{Benali:2020vma}\nthat made the\npresent study possible, and enabled us to separate the $u$ and\n$d$ quark contributions to the leading CFF.\nThe Hall A collaboration itself also tried to \nseparate $u$ and $d$ quark flavors in \\cite{Benali:2020vma}, \nusing the technique of\nfitting separately in each kinematic bin, but their\nresults are somewhat inconclusive having large uncertainties.\nHere, we reduce significantly the uncertainties of\nthe extracted CFFs by (1) performing global fits and (2) using dispersion\nrelations (DR), thus imposing additional constraints on CFFs.\nTo keep our main results model-independent, we parametrize the form factors using\nneural networks.\n\nWe first perform both model and neural net fits to most of the JLab\n6 \\GeV{} proton-only DVCS data, \ndemonstrating how adding DR constraints\nto the neural net procedure significantly increases our ability to extract CFFs. \nWe end up with an extraction of six\nout of the total eight real and imaginary parts of leading twist-2 CFFs,\nincluding the CFF $\\mathcal{E}$, which is a major research target related\nto the nucleon spin structure \\cite{Ji:1996nm}.\nThen, we make both model and neural net fits to JLab's combined proton and \n\\emph{neutron} DVCS data. This enables a clear\nseparation of $u$ and $d$ quark contributions to the leading CFF\n$\\mathcal{H}$.\n\n\n\\section{Methods and Data}\n\n\nTo connect the sought structure functions to the experimental observables,\nwe use the formulae from \\cite{Belitsky:2001ns,Belitsky:2010jw}, giving \nthe four-fold differential cross-section \n$$\\frac{d^{4}\\sigma_{\\lambda,\\Lambda}}{dx_{\\mathrm{B}} dQ^2 d|t| d\\phi}$$\nfor the leptoproduction of a real photon by scattering a lepton\nof helicity $\\lambda\/2$ off a nucleon target with longitudinal spin\n$\\Lambda\/2$. \nDVCS is a part of the leptoproduction amplitude and is expressed\nin terms of four complex-valued twist-2 CFFs \n$\\mathcal{H}(\\xi=x_{\\mathrm{B}}\/(2-x_{\\mathrm{B}}), t, Q^2)$,\n$\\mathcal{E}(\\xi, t, Q^2)$, $\\widetilde{\\mathcal{H}}(\\xi, t, Q^2)$, and $\\widetilde{\\mathcal{E}}(\\xi, t, Q^2)$.\nThe kinematical variables are squared momentum transfers from the\nlepton, $Q^2$, and to the nucleon, $t$, Bjorken $x_{\\mathrm{B}}$, and the angle $\\phi$ between the lepton and photon\nscattering planes. The dependence of CFFs on $Q^2$ is perturbatively\ncalculable in QCD and will be suppressed in what follows.\n\n\nAn important constraint on CFFs is\nprovided by dispersion relations \\cite{Teryaev:2005uj}, relating\ntheir real and imaginary parts. E. g., for CFF $\\mathcal{H}$ we have\n\\begin{multline}\n\\Re{\\mathcal{H}}(\\xi,t) = \\Delta(t) \\\\\n+ \\frac{1}{\\pi} {\\rm P.V.}\n\\int_{0}^{1} {\\rm d}x \\left(\\frac{1}{\\xi-x} - \\frac{1}{\\xi+x}\\right)\\Im{\\mathcal{H}} (x,t) \\;,\n\\label{eq:DR}\n\\end{multline}\nwhere $\\Delta(t)$ is a subtraction, constant in $\\xi$, which is up to an opposite sign\nthe same for $\\mathcal{H}$ and $\\mathcal{E}$, and is zero for $\\widetilde{\\mathcal{H}}$ and $\\widetilde{\\mathcal{E}}$. 
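As an illustration of how (\\ref{eq:DR}) can be used in practice, the principal value integral may be evaluated numerically along the following lines. This is only a sketch: the imaginary-part model and the subtraction function passed as arguments are placeholders for whatever parametrization is adopted, the grid size is arbitrary, and the pole at $x=\\xi$ is tamed by the standard subtraction of the integrand value at the pole.
\\begin{verbatim}
import numpy as np

def re_cff(xi, t, im_cff, subtraction, n=2000):
    """Real part of a CFF from its imaginary part via the dispersion
    relation; im_cff(x, t) and subtraction(t) are placeholder callables."""
    x = (np.arange(n) + 0.5) / n        # midpoint grid on (0, 1)
    im_x = im_cff(x, t)
    im_xi = im_cff(xi, t)
    # Principal-value piece: subtract the value at the pole so that the
    # remaining integrand is regular, then add back its analytic integral
    # PV int_0^1 dx/(xi - x) = log(xi/(1 - xi)).
    pv = np.mean((im_x - im_xi) / (xi - x)) + im_xi * np.log(xi / (1.0 - xi))
    # Non-singular 1/(xi + x) term of the dispersion kernel.
    reg = np.mean(im_x / (xi + x))
    return subtraction(t) + (pv - reg) / np.pi
\\end{verbatim}
With the imaginary part given by a neural network or by a model ansatz, an evaluation of this kind is what has to be repeated whenever a DR-constrained fit needs the real part.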
\nThis makes it possible\nto independently model only the imaginary parts of four CFFs and one subtraction constant.\nIt is a known feature of any statistical inference that, with given data, \na more constrained model will generally lead to smaller uncertainties of the results \n--- a property usually called \\emph{the bias-variance tradeoff}.\nSo we expect that, besides easier modeling, the DR constraint will result\nin more precise CFFs.\nThe application of this constraint to neural\nnetwork models is the important technical novelty of the fitting procedure presented here.\n\n\\subsection{Model fit}\n\\label{sec:model}\nAlthough the main results of this study are obtained using neural networks, for comparison\nwe also perform a standard least-squares model fit to the same data.\nWe use the ``\\texttt{KM}'' model parametrization from \\cite{Kumericki:2009uq}, which is of a hybrid type: Flavor-symmetric sea quark $H_q$ and gluon $H_G$ GPDs are modeled in the conformal-moment space,\n evolved in $Q^2$ using leading order (LO) QCD evolution, convoluted with LO coefficient functions,\n and added together to give the total sea CFF $\\mathcal{H}^{\\rm sea}(\\xi, t, Q^2)$.\nOn the other hand, valence quark GPDs, like $H^{\\rm val}(x,\\eta,t)$, are modeled as functions\n of momentum fractions, and only on the $\\eta=x$ line, \n where $x$ and $\\eta$ are the average and transferred momentum of the struck parton.\n E.g.,\n \\begin{multline}\n H^{\\rm val}_{q}(x, x, t) = \n \\frac{n_{q}r_{q}}{1+x}\n \\left(\\frac{2x}{1+x}\\right)^{-\\alpha_{v}(t)}\n \\left(\\frac{1-x}{1+x}\\right)^{b_{q}} \\\\\n \\times \\frac{1}{\n 1-\\dfrac{1-x}{1+x}\\dfrac{t}{M_{q}^2}} \\;, \\quad q = u, d,\n \\label{eq:KMval}\n \\end{multline}\n where the Regge trajectory $\\alpha_{v}(t) = 0.43 + 0.85\\,t\/\\GeV^2$ is used, and\n where the known normalizations $n_q$ of the corresponding PDFs\n are factored out, so that $r_q$ parametrizes ``skewedness''.\n Evolution in $Q^2$ is neglected for valence CFFs and \n their imaginary part is given by the LO relation\n \\begin{equation}\n \\Im\\mathcal{H}^{\\rm val}(\\xi) = \\pi\\sum_{q=u,d} e_{q}^2 \\big[H^{\\rm val}_q(\\xi,\\xi)-\n H^{\\rm val}_q(-\\xi, \\xi) \\big] \\,,\n \\end{equation}\n with $e_{q}$ being the quark charge and where\n dependence on $t$ is suppressed. The subtraction constant\n is modeled separately,\n \\begin{equation}\n \\Delta(t) = \\frac{C}{\n \\left(1-\\frac{t}{M_{C}^2}\\right)^2}\\,,\n \\label{eq:subtraction}\n \\end{equation}\n and real parts of CFFs are then obtained using DR\n (\\ref{eq:DR}). $r_q$, $b_q$, $M_q$, $C$ and $M_C$ are parameters of the model.\nFor further details of the parametrization, and for other CFFs, \nsee \\cite{Kumericki:2009uq, Kumericki:2015lhb}.\nIn these references only proton DVCS data, for which the contributions of separate\nflavors are not visible, were analysed, so the final model was\ndesigned using simply $H^{\\rm val}_u = 2 H^{\\rm val}_d$.\nParameters of the model were then fitted to global proton DVCS data resulting in, most recently, the\nmodel \\texttt{KM15} \\cite{Kumericki:2015lhb}.\n\nIn this work we first make a refit of this same model, adding also the 2017 Hall A\ndata to the dataset, while keeping the same sea parton parameters from the \\texttt{KM15} fit\n(which was fitted also to H1, ZEUS and HERMES data). 
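For orientation, the valence ansatz (\\ref{eq:KMval}) and the dipole subtraction (\\ref{eq:subtraction}) translate into a few lines of code; the sketch below uses placeholder parameter values and argument names rather than fitted ones.
\\begin{verbatim}
import numpy as np

ALPHA0, ALPHAP = 0.43, 0.85   # Regge trajectory alpha_v(t) = 0.43 + 0.85 t

def h_val(x, t, n_q, r_q, b_q, M_q):
    """Valence GPD ansatz on the eta = x line; n_q, r_q, b_q and M_q are
    the model parameters (their actual values come from a fit)."""
    alpha = ALPHA0 + ALPHAP * t
    y = 2.0 * x / (1.0 + x)
    return (n_q * r_q / (1.0 + x) * y**(-alpha)
            * ((1.0 - x) / (1.0 + x))**b_q
            / (1.0 - (1.0 - x) / (1.0 + x) * t / M_q**2))

def subtraction(t, C, M_C):
    """Dipole form of the subtraction constant."""
    return C / (1.0 - t / M_C**2)**2
\\end{verbatim}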
The resulting updated fit,\nnamed \\texttt{KM20}, is\nthe only model presented in this paper which is truly global in the sense\nthat it successfully describes also the low-$x$ H1, ZEUS and HERMES DVCS data.\nThen, focusing on flavor separation, we make a fit \nusing the same flavor-symmetric sea $\\mathcal{H}^{\\rm sea}$,\nbut parametrizing separately $H^{\\rm val}_{u}$ and $H^{\\rm val}_{d}$ in\n(\\ref{eq:KMval}), i.e., $r_u \\neq r_d$, $b_u \\neq b_d$, and $M_u \\neq M_d$,\nand similarly for other GPDs.\nThis model is fitted to both proton and neutron DVCS data, where isospin\nsymmetry is assumed, i. e., we take that \n$H_{d, \\mathrm{neutron}} = H_{u, \\mathrm{proton}} \\equiv H_{u}$, etc.\nSince neutron datapoints are few and coming only from JLab, this flavor-separated fit\nis performed only to JLab data because only in this kinematics there\nis hope to tell flavors apart. The resulting flavor-separated fit is named \\texttt{fKM20}.\n\n\\subsection{Neural networks fit}\n\n\nFor the neural network approach, \nwe use the method originally developed by two of us in \\cite{Kumericki:2011rz}, and\ninspired by a similar procedure for PDF fitting \\cite{Forte:2002fg}.\nCFFs are parametrized as neural networks, with values at input\nrepresenting kinematical variables $x_{\\mathrm{B}}$ and $t$, and values at\noutput representing imaginary or real parts of CFFs.\nHere we make significant improvements by adding the possibility of\nDR constraints, where outputs represent only imaginary parts,\nand one network output represents the subtraction constant $\\Delta(t)$ from\n(\\ref{eq:DR}), see Fig.~\\ref{fig:architecture}.\nThe iterative analysis proceeds in several steps. The network output is \nused as input for the DR and the result in turn as input for the cross section\nformulae. From comparison with experiment we then obtain the required\ncorrection, which is back-propagated to the neural network.\nThe network parameters are finally adjusted in a standard cross-validated\nlearning procedure.\nFor this, we modified the publicly available \\textsc{PyBrain} software\nlibrary \\cite{pybrainpaper}.\nThis DR-constrained neural net fitting procedure was already\napplied by one of us recently to the specific study of \npressure in the proton \\cite{Kumericki:2019ddg},\nbut is applied here for the first time in a more general context.\n\n\nTo propagate experimental uncertainties, we used the standard\nmethod of fitting to several replicas of datasets \\cite{Forte:2002fg}, generated\nby Gaussian distributions corresponding to uncertainties of the measured data.\nTo determine the needed number of replicas to generate and, consequently, the number of neural\nnets to train, we made preliminary studies with reduced datasets, where we\ncompared the results obtained with 10 replicas with those obtained with 80 replicas and we found\nthat the variation of results is less than 5\\%, which we consider acceptable. 
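Schematically, the replica step amounts to the following, where the fitting routine passed in stands for the full cross-validated training of one network (the function names and random-number interface are illustrative).
\\begin{verbatim}
import numpy as np

def make_replica(values, errors, rng):
    """One pseudo-dataset obtained by Gaussian smearing of the data points."""
    return rng.normal(values, errors)

def replica_fit(values, errors, fit, n_replicas=20, seed=0):
    """Train one network per replica; the spread of the resulting CFFs over
    replicas propagates the experimental uncertainties."""
    rng = np.random.default_rng(seed)
    return [fit(make_replica(values, errors, rng)) for _ in range(n_replicas)]
\\end{verbatim}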
Since training\nof neural nets with DR constraints is quite slow, due to the\nevaluation of a numerical Cauchy principal value integral (\\ref{eq:DR}) in each training step,\nwe opted to generate our results with 20 replicas for each presented model.\nTraining of each net required about 1 day on a single thread CPU of a 2.4 GHz Intel Xeon processor.\n\nSimilar preliminary analyses demonstrated that we do not need many neurons to successfully describe\nthe data, most likely due to the CFF functions being quite well-behaved in this kinematics.\nWe thus believe that there is no necessity for the \\emph{deep learning} with\nlarge amounts of neurons in many layers, which is an extremely powerful method, for\nmuch more complex machine learning tasks. Actual numbers of neurons in our nets\nare given in the caption of Fig.~\\ref{fig:architecture}.\n\nPreliminary fits using all of the 8 real and imaginary parts of leading\ntwist-2 CFFs have shown that $\\Re{\\widetilde{\\mathcal{H}}}$ and $\\Re{\\widetilde{\\mathcal{E}}}$ are consistent with zero and\nhave negligible influence on the goodness of fit, i. e., they cannot\nbe extracted from the present data.\nThis is consistent with findings of \\cite{Moutarde:2019tqa}, which used an even larger dataset.\nThus, to simplify the model and further reduce the variance, we\nremoved these two CFFs and performed all neural network fits presented below using just\nthe remaining six CFFs.\n\n\\begin{figure}[t]\n \\centering{\\includegraphics[width=0.8\\linewidth]{architecture}}\n \\caption{Architecture of neural nets when DR constraints are used. \n The main net parametrizes imaginary parts of CFFs, \n while the simpler subsidiary net parametrizes\n the subtraction constant $\\Delta(t)$. Real parts are then obtained using DR \n (\\protect\\ref{eq:DR}).\n Observables are calculated using total complex CFFs, and, finally, \n differences with respect to measured observables\n are back-propagated for weight adjustment of the network neurons. There are also standard ``bias'' nodes which are\n for clarity not drawn. Architectures (number of neurons per layer, starting from the input layer) of\n our main nets are $[2\\!\\to\\!13\\!\\to\\!6]$ (for model \\texttt{NN20}), $[2\\!\\to\\!13\\!\\to\\!4]$ (\\texttt{NNDR20}),\n \n and $[2\\!\\to\\!11\\!\\to\\!17\\!\\to\\!8]$ (\\texttt{fNNDR20}), \n while subtraction constant nets are $[1\\!\\to\\!3\\!\\to\\!1]$ (unflavored), $[1\\!\\to\\!5\\!\\to\\!1]$ ($u$-quark), and\n $[1\\!\\to\\!4\\!\\to\\!1]$ ($d$-quark).\n}\n\t\\label{fig:architecture}\n\\end{figure}\n\nFirst, to assess the influence of DR, we made fits to the proton-only DVCS data using two parametrizations\n\\begin{enumerate}\n \\item using neural network parametrization of four imaginary parts of CFFs \n and of $\\Re{\\mathcal{H}}$ and $\\Re{\\mathcal{E}}$ (i.e. 
without imposing DR constraints) \n --- this gives us the model \\texttt{NN20}.\n \\item using neural network parametrization of four imaginary parts of CFFs, \n and of the subtraction constant, while\n $\\Re{\\mathcal{H}}$ and $\\Re{\\mathcal{E}}$ are then given by DR (\\ref{eq:DR}) \n --- this gives us the model \\texttt{NNDR20}.\n\\end{enumerate}\n\nAfter we convinced ourselves that the DR-constrained neural net parametrization works in\nthe proton-only case, we made a\nseparate parametrization for two light quark flavours (essentially doubling everything)\nand fitted to the combined proton and neutron DVCS data --- this gave us the model \\texttt{fNNDR20}.\n\n\\subsection{Experimental data used}\n\n\\begin{table} \n\\renewcommand{\\arraystretch}{1.4}\n\\caption{\\label{tab:chis}\nValues of $\\chi^2\/n_\\mathrm{pts}$ for presented models and for\neach set of DVCS measurements with fixed proton or neutron\ntarget used in this study ($\\phi$-space).\nFirst row specifies the number of real independent CFFs plus the number of\nsubtraction constants.\nSecond row gives total value for all datapoints in actually performed fit\n(which was just to leading harmonics of Fourier-transformed data --- $n$-space).}\n\\centering\n\\setlength{\\tabcolsep}{2pt}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{ccccccc}\n\\hline\\noalign{\\smallskip}\n Observable & $n_\\mathrm{pts}$ & \\texttt{KM20} & \\texttt{NN20} & \\texttt{NNDR20} \n & \\texttt{fKM20} & \\texttt{fNNDR20} \\\\\n\\noalign{\\smallskip}\\hline\n\\# CFFs + $\\Delta$s & & 3+1 & 6 & 4+1 & 5+2 & 8+2 \\\\\n\\hline\nTotal (harmonics) & 277 & 1.3 & 1.6 & 1.7 & 1.7 & 1.8 \\\\\n\\hline\nCLAS \\cite{Pisano:2015iqa} $A_{\\rm LU}$ & 162 & 0.9 & 1.0 & 1.1 & 1.2 & 1.3 \\\\\nCLAS \\cite{Pisano:2015iqa} $A_{\\rm UL}$ & 160 & 1.5 & 1.7 & 1.8 & 1.8 & 2.0 \\\\\nCLAS \\cite{Pisano:2015iqa} $A_{\\rm LL}$ & 166 & 1.3 & 3.9 & 0.8 & 1.1 & 1.6 \\\\\nCLAS \\cite{Jo:2015ema} $d\\sigma$ & 1014 & 1.1 & 1.0 & 1.2 & 1.2 & 1.1 \\\\\nCLAS \\cite{Jo:2015ema} $\\Delta\\sigma$ & 1012 & 0.9 & 0.9 & 1.0 & 0.9 & 1.1 \\\\\n\\hline\nHall A \\cite{Defurne:2015kxq} $d\\sigma$ & 240 & 1.2 & 1.9 & 1.7 & 0.9 & 1.3 \\\\\nHall A \\cite{Defurne:2015kxq} $\\Delta\\sigma$ & 358 & 0.7 & 0.8 & 0.8 & 0.7 & 0.7 \\\\\nHall A \\cite{Defurne:2017paw} $d\\sigma$ & 450 & 1.5 & 1.6 & 1.7 & 1.9 & 2.0 \\\\\nHall A \\cite{Defurne:2017paw} $\\Delta\\sigma$ & 360 & 1.6 & 2.2 & 2.2 & 1.9 & 1.7 \\\\\nHall A \\cite{Benali:2020vma} $d\\sigma_{n}$ & 96 & & & & 1.2 & 0.9 \\\\\n\\hline\nTotal ($\\phi$-space) & 4018 & 1.1 & 1.3 & 1.3 & 1.2 & 1.3 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\renewcommand{\\arraystretch}{1.}\n\\end{table}\n\nFor the neural network fits we used the JLab DVCS data\nlisted in Table \\ref{tab:chis}.\nWe excluded the lower-$x$ HERA data because we wanted to be safe from any $Q^2$\nevolution effects since QCD evolution is not yet implemented in our neural\nnetwork framework. 
Also, in order to demonstrate flavor separation,\nit made sense to restrict ourselves to the particular kinematic region where\nthe neutron DVCS measurement was performed such that there is some balance between the proton and\nneutron data.\n\nThe data contains measurements of the unpolarized cross-section $d\\sigma$, various\nbeam and target asymmetries defined via\n\\begin{equation}\n d\\sigma_{\\lambda,\\Lambda}=d\\sigma(1+\\lambda A_{LU}\n +\\Lambda A_{UL} + \\lambda\\Lambda A_{LL}) \\;,\n \\label{eq:defA}\n\\end{equation}\nas well as the helicity-dependent cross-section $\\Delta\\sigma \\equiv d\\sigma A_{LU}$.\nFurthermore, since leading-twist formulae \\cite{Belitsky:2001ns,Belitsky:2010jw}\ndescribe observables as\ntruncated Fourier series in $\\phi$, with only one or two terms, we made a Fourier transform\nof the data, and fitted only to these first harmonics. \nThis makes the fitting procedure much more efficient. \nWe propagated the experimental uncertainties using the Monte-Carlo method, and\nchecked that, indeed, no harmonics beyond the second\none are visible in the data with any statistical significance.\n\n\n\\section{Results and conclusion}\n\n\\begin{figure}[t]\n \\centering{\\includegraphics[width=\\linewidth]{nfDR}}\n \\caption{Extraction of CFFs (at $Q^2=4\\,\\GeV^2$ and $t=-0.2\\,\\GeV^2$) by three fits to JLab proton DVCS data.\n \\texttt{KM20} is model described in Sect.~\\ref{sec:model},\n \\texttt{NN20} is standard neural network parametrization, while \\texttt{NNDR20} additionally\nincludes DR constraints. $\\Im{\\mathcal{E}}$ and $\\Im{\\widetilde{\\mathcal{E}}}$ are zero in \\texttt{KM20} model\nby construction.}\n\t\\label{fig:nfDR}\n\\end{figure}\n\nThe quality of the fit for each model is displayed in Table~\\ref{tab:chis}.\nJudging this quality by the $\\chi^2$ values for fits of the above-mentioned \nFourier harmonics of the data (second row of Table~\\ref{tab:chis}) is problematic,\nbecause propagation of experimental uncertainties \nto subleading harmonics is impaired by unknown correlations, see discussion in\nSect. 3.1 of \\cite{Kumericki:2016ehc}. We consider the values of $\\chi^2$\nfor the published experimental $\\phi$-dependent data as a better measure of the actual fit quality. \nThese are displayed in other rows of Table~\\ref{tab:chis}.\nSome particular datasets are imperfectly described, \nbut total values of\n$\\chi^2\/n_{\\rm pts}$ of 1.1--1.3 look reasonable and give us confidence that the\nresulting CFFs are realistic.\nNote that the significantly different number of independent CFFs in our models (see \nthe first row of Table~\\ref{tab:chis}) leads to a similar quality of fits.\nOne concludes that there are some correlations among CFFs. Some are intrinsic,\nlike the consequence of DR, while some will be broken with more data, on more\nobservables.\n\n\\begin{figure}[t]\n \\centering{\\includegraphics[width=\\linewidth]{pnHallA}}\n \\caption{Model fit \\texttt{fKM20} (black solid line) and neural network\n fit \\texttt{fNNDR20} (hatched green band) in comparison to Hall A DVCS data on \n proton (red circles) and neutron (blue squares) cross-sections (upper two\n panels) and first cosine Fourier harmonics of cross-sections (lower two panels), \n for $x_B = 0.36$, $Q^2 = 1.75\\,\\GeV^2$, and two beam energies,\n $E=4.45\\,\\GeV$ (left) and $E=5.55\\,\\GeV$ (right). 
\n \n }\n\t\\label{fig:pnHallA}\n\\end{figure}\n\nOn Fig.~\\ref{fig:nfDR} we display CFFs for models \\texttt{KM20},\n\\texttt{NN20} and \\texttt{NNDR20} obtained from fits to proton-only data. We observe\nthe power of DR constraints, which lead to reduced uncertainties of the \\texttt{NNDR20}\nmodel in comparison to \\texttt{NN20},\nmost notably for $\\Im{\\mathcal{H}}$, $\\Re{\\mathcal{H}}$, and $\\Im{\\widetilde{\\mathcal{E}}}$.\nMean values are also shifted for real parts of unpolarized CFFs $\\mathcal{H}$ and\n$\\mathcal{E}$, where DR constraints can even change the sign of the extracted CFFs\\footnote{Interestingly,\n the popular VGG \\cite{Vanderhaeghen:1999xj} and GK \\cite{Goloskokov:2007nt} \n models have negative $\\Re{\\mathcal{E}}$,\nwhile the fit in \\cite{Moutarde:2018kwr} gives a positive $\\Re{\\mathcal{E}}$ in this region.}. For $\\Re{\\mathcal{H}}$,\nDR induce a strong $\\xi$ dependence and a clear extraction of this CFF. It is this\nparticular effect of DR that made recent attempts at determination of quark\npressure distribution in the proton from the DVCS data possible \\cite{Burkert:2018bqq,Kumericki:2019ddg}.\nThe DR-constrained neural net fit \\texttt{NNDR20} is, as is to be expected, \nin somewhat better agreement with the model fit \\texttt{KM20} which is also DR constrained.\nThe green bands in the second row of Fig.~\\ref{fig:nfDR} constitute the first unbiased extraction \nof the important CFF $\\mathcal{E}$ in this kinematic region.\n\nThe CFFs in the \\texttt{NNDR20} model are in broad agreement with\nthe results of the (also DR-constrained) \nmodel fit of Ref. \\cite{Moutarde:2018kwr}. One notable exception is the opposite\nsign of $\\Im{\\mathcal{E}}$. As this CFF has the largest uncertainty of the six displayed, \none can hope that with more data the discrepancy will fade.\nComparing with CFFs extracted by the recent global neural network fit of Ref. \\cite{Moutarde:2019tqa},\nresults agree within specified uncertainties, with the largest tension now being observed for $\\Re{\\mathcal{E}}$.\nOne notes that $\\Re{\\mathcal{E}}$ of \\cite{Moutarde:2019tqa} agrees much better with our\nfit \\texttt{NN20}, which is to be expected since one is now comparing results of more\nsimilar procedures: both are completely unbiased fits, with neither using DR\nconstraints.\n\n\\begin{figure}[t]\n \\centering{\\includegraphics[width=\\linewidth]{fCFFH}}\n \\caption{CFF $\\mathcal{H}$ extracted (at $Q^2=4\\, \\GeV^2$ and $x_{\\rm B} = 0.36$) from neural network fit to proton-only DVCS data (left). Separation of $u$ (red band) and $d$ (blue band) quark CFF $\\mathcal{H}$ resulting from neural network fit to proton and neutron JLab DVCS data (right). Red solid\n (unflavored), black solid ($u$) and\n dashed ($d$) lines correspond to analogous least-squares model fit to the same data.\n }\n\t\\label{fig:fCFFH}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering{\\includegraphics[width=\\linewidth]{fCFFE}}\n \\caption{Same as Fig.~\\protect\\ref{fig:fCFFH} but for CFF $\\mathcal{E}$. Separation of $u$ and $d$ \n quark CFF $\\mathcal{E}$ is not possible with present data (right). $\\Im{\\mathcal{E}}$ is zero\n in \\texttt{KM} models by construction.\n}\n\t\\label{fig:fCFFE}\n\\end{figure}\n\nTurning now to the simultaneous fit to proton and neutron data, besides in Table~\\ref{tab:chis},\nthe quality of the fit can also be seen in Fig. 
\\ref{fig:pnHallA},\nwhere the model fit \\texttt{fKM20} and the DR-constrained neural net fit \\texttt{fNNDR20}\nare confronted with Hall A data.\nThe resulting $\\Im{\\mathcal{H}}$ and $\\Re{\\mathcal{H}}$ CFFs, separately for up and down quarks,\nare displayed in the right two panels of Fig.~\\ref{fig:fCFFH}, demonstrating\nhow the inclusion of neutron DVCS data enables a clear flavor separation\nfor this CFF. (For other CFFs, there is no visible separation, see \nexample of $\\mathcal{E}$ in Fig.~\\ref{fig:fCFFE}.)\n\nThe separated up and down quark CFFs have much larger uncertainties than\ntheir sum, shown in the left panels of Fig.~\\ref{fig:fCFFH}, and although\nthere are some hints of different $t$-slopes, at this\nlevel we are not yet able to address the question of possible different\nspatial distributions of up and down quarks in the nucleon.\n\nTo conclude, we have used JLab DVCS data to make both a model-dependent and an unbiased\nneural net extraction of six Compton form factors, where constraints by dispersion relations\nproved valuable. Furthermore, in the case of the dominant CFF $\\mathcal{H}$,\nwe have successfully separated the contributions of up and down quarks. \nThis constitutes another\nstep towards a full three-dimensional picture of the nucleon structure.\n\n\n\n\\subsection*{Code availability}\nIn the interest of open and reproducible research, the computer code used in\nthe production of numerical results and plots for this paper is made available at \n\\url{https:\/\/github.com\/openhep\/neutron20}.\n\n\n\\subsection*{Acknowledgments}\nThis publication is supported by the\nCroatian Science Foundation project IP-2019-04-9709, by\nDeutsche Forschungsgemeinschaft (DFG) through the\nResearch Unit FOR 2926, ``Next Generation pQCD for Hadron Structure: Preparing\nfor the EIC'', project number 40824754,\nby QuantiXLie Centre of Excellence through grant KK.01.1.1.01.0004, and the \nEU Horizon 2020 research and innovation programme, STRONG-2020 project, \nunder grant agreement No 824093.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecent discoveries of extremely metal-poor stars \\citep{Christlieb2002,Frebel2005,Norris2007,Keller2014} raised the question from which mechanism and under which conditions they were formed.\nThese stars present very low iron abundances [Fe\/H]~$<$~-4 but an anomalous richness in carbon and are considered crucial to provide a physical insight on the possible environments where the first stars were formed.\nA distinct class of carbon-enhanced metal-poor (CEMP) stars exists and different mechanisms for their formation have been proposed \\citep{Umeda2003}.\n\nStars like SMSS J031300.36-670839.3 are thought to be formed from a low-energy ($\\sim 10^{51}$ erg) type II supernova of a primordial massive star ranging between 40-60 M$_\\odot$ \\citep{Keller2014}. Violent pair-instability supernovae (PISNe) are in fact likely to disrupt the hosting halo inhibiting the formation of a second generation of stars \\citep{Greif2007,Cooke2014,Seifried2014}. Low-explosion energy black-hole forming events \\citep{Umeda2003} need to be invoked to explain the abundance pattern observed in CEMP stars \\citep{Keller2014}. During such events metals heavier than iron are trapped inside the black hole because of the larger degrees of fallback, whilst lighter elements which reside in the outer region are dispersed during the explosion. 
This scenario has been recently investigated by \\citet{Cooke2014} through detailed nucleosynthesis calculations. They also suggested that the seeds of CEMP stars should have been relatively low-mass halos of a few 10$^6$ M$_\\odot$. \n\nSimplified theoretical models \\citep{Frebel2007,Safranek2010,Ji2014} suggested possible conditions to form low-mass CEMP stars that can be observed today by evaluating the amount of metals needed to induce cooling and subsequent fragmentation.\n\\citet{Salvadori2007,Salvadori2010} explored the metallicity distribution function to probe the stellar population history of the Milky Way, providing a critical metallicity of Z$_\\mathrm{crit}$\/Z$_\\odot$ = 10$^{-4}$.\n Nevertheless, while the Population III star formation mode was the main focus of many hydrodynamical simulations, indicating typical masses in the range 10-300 M$_\\odot$ \\citep[][and references therein]{Hirano2014}, the transition between the Pop III and Pop II star formation modes is still an ongoing research topic which needs to be accurately explored\\footnote{A window of possible low-mass primordial star formation via fragmentation or HD cooling has also been explored by several authors \\citep{ClarkGlover2011,Greif2012,Stacy2012,Prieto2013,Bovino2014mergers}.}.\n\nResults from smoothed particle hydrodynamics (SPH) calculations have been reported by \\citet{Bromm2001}, who propose a critical metallicity Z$_\\mathrm{crit}$\/Z$_\\odot$ = 10$^{-3.5}$ which regulates the transition between the high- and low-mass star formation modes. This value was argued to be unlikely by \\citet{Jappsen2009}, who indicate that fragmentation processes mostly depend on the initial conditions. \nThey explored conditions under which cooling is mainly regulated by fine-structure transitions of oxygen and carbon by employing an idealized Navarro-Frenk-White (NFW) density profile and a low-metallicity network for a minihalo of 2$\\times$10$^6$ M$_\\odot$.\n\nRecently, \\citet{Safranek2014} carried out a series of calculations starting from realistic cosmological initial conditions for an atomic cooling halo ($\\sim$10$^7$ M$_\\odot$) irradiated by a fixed UV flux (J$_{21}$ = 100). They included a comprehensive model for non-equilibrium chemistry and metal line cooling and followed the evolution of the halo by introducing sink particles. \nThe predicted metallicity threshold given by \\citet{Bromm2001} has been confirmed and the influence of metal line cooling on the final masses has been discussed. In particular they found that a metallicity of Z\/Z$_\\odot$~=~10$^{-3}$ is needed to reach the cosmic background temperature and to induce fragmentation, forming a stellar cluster of $\\sim$ 1000 M$_\\odot$. \n\nIn this letter we explore the chemo-dynamical conditions under which a metal-poor minihalo is able to cool and undergo gravitational collapse, providing a possible site of CEMP star formation. \nWe start our calculations from cosmological initial conditions, including a comprehensive model for non-equilibrium metal cooling and photochemical processes, and vary both the metallicity of the gas and the UV radiation strength. \n\nIn the following sections, we provide the details of our simulations, present the main results, and summarize our conclusions.\n\\section{Simulations setup}\nTo follow the collapse of a typical minihalo from cosmological initial conditions we employ the cosmological hydrodynamics code \\verb|ENZO|, version 2.3 \\citep{Enzo2014}. \\verb|ENZO| is based on an adaptive mesh refinement (AMR) method. 
It includes the split 3rd-order piece-wise parabolic (PPM) method for solving the hydrodynamical equations, while the dark matter component is modeled using the particle-mesh technique. Self-gravity is calculated via a multi-grid Poisson solver.\nWe first run a dark-matter only low-resolution simulation to study the evolution of the halo for a box of 300 kpc\/h with a top grid resolution of 128$^3$ cells similarly to \\citet{Bovino2013MNRAS,Bovino2014} and \\citet{Latif2013}.\n The parameters for creating the initial conditions and the distribution of baryonic and dark matter components are taken from the WMAP seven year data \\citep{Jarosik2011}. We select the most massive mini-halo of $\\sim$1.4$\\times10^5 $ M$_\\odot$ and re-run our simulation with the box centered onto the selected mini-halo adding two additional nested grids for an effective resolution of 512$^3$.\nFrom $z=99$ to $z=22$ we evolve the halo to reach a temperature of $\\sim$10$^3$~K. We then inject metals which are initialized based on the metallicity. We allow 20 levels of refinement and 32 cells per Jeans length. \nOur refinement strategy is based on over-density, Jeans length, and particle mass and is applied during the course of the simulations to ensure that all physical processes are well resolved and the Truelove criterion \\citep{Truelove1997,Federrath11} is fulfilled. \n\n\\begin{figure}[!h]\n\\includegraphics[scale=.40]{fig1.eps}\n\\caption{Radial profiles of the averaged density (top left), temperature (top right), neutral and ionized carbon (middle panels), radial infall velocity (bottom left), and accretion rate (bottom right). \nEach curve represents a different metallicity run as sketched in the legend. The horizontal blue line represents the CMB temperature.}\\label{figure1}\n\\end{figure}\n\n\\subsection{Chemistry}\nThe evolution of the gas enriched by metals and irradiated by UV background is regulated by the complex interplay between cooling, kinetics, and hydrodynamics. \nA standard approach to consider metal cooling in hydrodynamical simulations is to employ equilibrium \\textsc{cloudy} tables making use of a general metallicity field \\citep[e.g.][]{Smith2008,Smith2009}, or introducing equilibrium approximations for some of the species which evolve faster \\citep{GloverJappsen2007,Jappsen2007ApJ}. Recently, \\citet{Peters2014} explored metal cooling employing a parametrized equation\nof state. \nTo our knowledge a complete non-equilibrium approach including the coupling between kinetics and cooling for low-metallicity environments has never been implemented in \\textsc{enzo} and only a few simulations have been performed with other codes \\citep[see][]{GloverJappsen2007,Safranek2014}. In this study we employ the publicly available chemistry package \\textsc{krome}\\footnote{\\url{www.kromepackage.org}} \\citep{Grassi2014} to evolve 16 species: H, H$^+$, H$^-$, H$_2$, H$_2^+$, He, He$^+$, He$^{2+}$, C, C$^+$, Si, Si$^+$, Si$^{2+}$, O, O$^+$, and e$^-$, for a total of 44 reactions. We include photoionization and photoheating for C and Si, which are easily ionized for fluxes below the Lyman limit, with ionization thresholds of 8.15 eV and 11.26 eV, respectively. We integrate the cross sections provided by \\citet{Verner1996} as described by \\citet{Grassi2014} considering an optically thin gas. 
Photodissociation of H$_2$, H$_2^+$, H$^-$ photo-detachment, collisional dissociation by \\citet{Martin1996}, and the self-shielding function from \\citet{Wolcott2011sfh} are also employed \\citep[see also][]{Latif2014}. We consider here a T$_*$ = 10$^4$ K soft spectrum. The following cooling\/heating processes are included: H$_2$ cooling, atomic line cooling, chemical heating\/cooling, and metal line cooling. The latter is evaluated on the fly solving the linear system for the individual metal excitation levels along with the rate equations integration, as discussed in \\citet{GloverJappsen2007,Maio2007} and \\citet{Grassi2014}. For additional details on the thermal processes implemented in \\textsc{krome} we refer to \\citet{Grassi2014}. \nTo prevent the temperature from dropping below the CMB floor and since induced (de)-excitations are not included, we define the following effective cooling rate:\n\\begin{equation}\n\t\\mathrm{\\Lambda_{eff} = \\Lambda_{metal} (T_{gas}) - \\Lambda_{metal} (T_{CMB})}\n\\end{equation} \nas also reported by \\citet{Jappsen2009a}.\nWe do not include deuterium chemistry as well as HD cooling which is easily photodissociated by a weak radiation background \\citep{Wolcott2011MNRAS}. We note that all the chemical species are consistently advected to ensure conservation.\nThe network used here is publicly available with the package \\textsc{krome} (react\\_lowmetal). The reactions for metals are taken from \\citet{GloverJappsen2007}, numbers 30-47, 56 and 58, while the primordial reactions are the standard reactions employed in \\textsc{krome} \\citep[see][table C1]{Grassi2014} with the only difference in the three-body formation rate that we take from the latest available data \\citep{Forrey2013}.\n\nThe initialization of the individual metal species X is based on the following definition:\n\\begin{equation}\n\t\\log_{10}(n_\\mathrm{X}\/n_\\mathrm{H}) = \\mathrm{Z\/Z_\\odot}+\\log_{10}(n_\\mathrm{X}\/n_\\mathrm{H})_\\odot\n\\end{equation}\nwith Z\/Z$_\\odot$ being the logarithm of the metallicity expressed in terms of solar metallicity and $n_\\mathrm{H}$ the total hydrogen nuclei. \nThe solar abundances $(n_\\mathrm{X}\/n_\\mathrm{H})_\\odot$ are taken from \\citet{Asplund2009}.\nOnly in one case we directly use the observed abundances for the CEMP star SMSS J031300.36-670839.3 \\citep{Keller2014}.\n\n\\begin{figure*}\n\\includegraphics[scale=.55]{fig4.eps}\n\\caption{Density, temperature, and CII projections along the y-axis at a scale of 1 pc, for three different metallicities as sketched in the plots.}\\label{figure2}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[scale=.8]{fig2.eps}\n\\caption{Radial profiles of the density and the most important chemical species fractions: electrons (top right), H$_2$ and H$^-$ (middle top panels), CI and CII (middle bottom panels), and SiI and SiII (bottom panels). Different values of the flux strength in terms of J$_{21}$ and metallicities (Z\/Z$_\\odot$) are reported according to the legend.}\\label{figure3}\n\\end{figure*}\n\n\\section{Results}\nWe first explore the effect of varying the metallicity of the gas once it reaches $z=22$ and a temperature of $\\sim$ 1000 K, assuming the absence of UV background radiation (J$_{21}$ = 0).\nThree different values of the metallicity, Z\/Z$_\\odot$ = -4, Z\/Z$_\\odot$ = -3, and Z\/Z$_\\odot$ = -2 are chosen. 
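For reference, the abundance initialization and the CMB-limited cooling described above amount to the following short sketch; the solar abundances quoted are indicative values from \\citet{Asplund2009}, and the metal cooling function passed in is a placeholder for the level-population based rate evaluated by \\textsc{krome}.
\\begin{verbatim}
import numpy as np

# Indicative solar abundances log10(n_X / n_H) (Asplund et al. 2009).
LOG_SOLAR = {"C": -3.57, "O": -3.31, "Si": -4.49}

def metal_number_densities(n_H, logZ):
    """Initial metal number densities for a hydrogen-nuclei density n_H and
    metallicity logZ = log10(Z / Z_sun)."""
    return {X: n_H * 10.0**(logZ + logA) for X, logA in LOG_SOLAR.items()}

def effective_cooling(lambda_metal, T_gas, T_cmb):
    """CMB-limited metal cooling: the net rate vanishes as the gas
    temperature approaches the CMB temperature."""
    return lambda_metal(T_gas) - lambda_metal(T_cmb)
\\end{verbatim}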
\\begin{figure*}\n\\includegraphics[scale=.55]{fig4.eps}\n\\caption{Density, temperature, and CII projections along the y-axis at a scale of 1 pc, for three different metallicities as indicated in the plots.}\\label{figure2}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[scale=.8]{fig2.eps}\n\\caption{Radial profiles of the density and of the most important chemical species fractions: electrons (top right), H$_2$ and H$^-$ (middle top panels), CI and CII (middle bottom panels), and SiI and SiII (bottom panels). Different values of the flux strength in terms of J$_{21}$ and of the metallicity (Z\/Z$_\\odot$) are reported according to the legend.}\\label{figure3}\n\\end{figure*}\n\n\\section{Results}\nWe first explore the effect of varying the metallicity of the gas once it reaches $z=22$ and a temperature of $\\sim$ 1000 K, assuming the absence of UV background radiation (J$_{21}$ = 0).\nThree different values of the metallicity, Z\/Z$_\\odot$ = -4, Z\/Z$_\\odot$ = -3, and Z\/Z$_\\odot$ = -2, are chosen. In addition, the abundances from the recently discovered CEMP star SMSS J031300.36-670839.3 \\citep{Keller2014} are also investigated to probe the differences in the dynamical evolution of the halo.\nFigure \\ref{figure1} shows the relevant dynamical and chemical quantities. With increasing metallicity, the cooling efficiency is clearly enhanced, and the gas is able to reach the CMB floor temperature for Z\/Z$_\\odot$~=~-2.\nThe thermal evolution for Z\/Z$_\\odot$~=~-4 is mainly dominated by H$_2$ cooling and is not influenced by the small amount of metals. This result is in agreement with the critical metallicity of Z\/Z$_\\odot$ = -3.5 proposed by \\citet{Bromm2001} and with the recent results reported by \\citet{Safranek2014} for atomic cooling halos. \n\nOnce we employ the abundance pattern of SMSS J031300.36-670839.3, the evolution of the halo is very similar to the case with Z\/Z$_\\odot$ = -2. Indeed, the main coolant is neutral carbon (CI), which regulates the whole evolution. Assuming a high metallicity is thus equivalent to considering a carbon-enhanced halo. \nIn figure \\ref{figure1} we also report the evolution of neutral and ionized carbon (CI and CII), from which it is clear how these chemical species affect the thermal evolution. \nIn our simulations we assume as initial conditions that the metal species are ionized, with the exception of oxygen because of its high ionization potential. The fast recombination of these species is clearly visible in the figure around a radius of 0.01 pc, where CII declines (middle right panel) while neutral carbon (CI) increases (middle left panel). \n\nAs the initial conditions are rescaled by the metallicity, the evolution of CI and CII behaves essentially in the same way but is shifted toward higher values when we increase Z\/Z$_\\odot$.\nSince we do not include CO, H$_2$O, and OH, the metal abundances reach a plateau, as the atomic species are not depleted into molecules. Based on previous results, we expect that these molecules would simply provide an alternative cooling channel at high densities \\citep[e.g.][]{Omukai2005}.\nOxygen and silicon species show a similar evolution. \nChanging the metallicity does not alter the behavior of dynamical quantities such as the radial infall velocity and the accretion rate, which show similar features across the runs.\n\nIn figure \\ref{figure2}, the averaged projections of density, temperature, and CII fraction for different metallicities are shown. A more compact central structure and some fragmentation are already visible for metallicities higher than Z\/Z$_\\odot$ = -4. The temperature is clearly lower for Z\/Z$_\\odot$ = -2, while Z\/Z$_\\odot$ = -4 presents a temperature of about 200 K in the central core, comparable to the primordial case. The ionized carbon fraction decreases in the core, where the temperatures are lower and the recombination reactions are faster.\n\n\\subsection{The effect of far-UV radiation}\nIn a second series of runs we explore the effect of varying the intensity of the radiation flux, expressed in terms of J$_{21}$ (in units of 10$^{-21}$ erg s$^{-1}$ cm$^{-2}$ sr$^{-1}$ Hz$^{-1}$), which is kept below the Lyman limit ($h\\nu < $ 13.6 eV). We fix the metallicity to Z\/Z$_\\odot$ = -3. The aim of these runs is to better understand under which conditions the halo is still able to collapse and reach the CMB temperature.\nIn figure \\ref{figure3} the most relevant chemical species are shown as a function of radius for different values of J$_{21}$. 
The following important features are noted:\n\\begin{itemize}\n\t\\item The H$^-$ and H$_2$ fractions are strongly suppressed by the presence of a stronger UV flux for radii greater than 0.1 pc. H$_2$ starts to form once H$^-$ becomes abundant and formation dominates over destruction.\n\t\\item The CII abundance is almost constant. As we start from ionized metals, and given the low ionization potential of CI, recombination is only able to form a small amount of CI, while CII is continuously replenished by the radiation. This differs from the case with J$_{21}$ = 0, where recombination is the main reaction path for CII, which is then quickly destroyed.\n\t\\item SiII and SiI show a behavior similar to that of CII and CI. Oxygen is not reported, as the flux is not able to ionize it (ionization potential of 13.61 eV) and its evolution is not affected by the changes in J$_{21}$.\n\\end{itemize}\n\nA very similar thermal evolution is obtained for J$_{21}$ = 1, 10, and 100 at a metallicity of Z\/Z$_\\odot$ = -3, despite the very different flux values. \nAn increase in metallicity thus cancels the effect of a very high flux (J$_{21} = 100$), allowing the halo to collapse similarly to the case without any radiation, but with enhanced cooling at higher densities. This is a clear confirmation that even in the absence of H$_2$ the halo is able to collapse due to metal enrichment. \nThis is even clearer from the data reported in table \\ref{table1}, where the virial temperature, mass, and redshift for every run have been evaluated following \\citet{Barkana2001}. Runs A and E present similar collapse parameters and a similar evolution. In the presence of a higher flux at metallicity Z\/Z$_\\odot = -3$, the halo reaches a higher virial temperature before collapse because H$_2$ cooling is suppressed at low densities. The gas must then reach a high enough density and temperature for metal cooling to become efficient. This effect delays the collapse to $z_\\mathrm{vir}$~=~15 and leads to a higher virial temperature T$_\\mathrm{vir}\\sim 4700$~K, allowing the metals to bring the gas temperature to even lower values due to the lower CMB temperature ($\\sim$44 K instead of $\\sim$60 K). \nRuns B, C, and D have similar properties.\nWe can therefore conclude that the main net effect of the presence of radiation is to delay the collapse and to activate a possible channel for the formation of lower-mass stars. On the other hand, an increase in metallicity removes any possible effect of the radiation. This finding is in agreement both with the results reported by \\citet{Jappsen2009} and with the findings of \\citet{Bromm2001} and \\citet{Safranek2014}.\nIndeed, the critical metallicity represents a threshold for the cooling mode only in the absence of a UV flux, while it becomes a genuine threshold for inducing possible fragmentation in the presence of a background.\nRecent studies \\citep[e.g.][]{Mark2014} show, however, that such a radiation field will generally be present, therefore effectively requiring a critical metallicity for collapse.\n\nFinally, the overall behavior of the radial infall velocities and accretion rates reported in figure \\ref{figure4} is again similar for all the runs; in general, a stronger flux and higher metallicities produce higher infall velocities. \n
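The CMB temperatures quoted in table \\ref{table1} follow directly from T$_\\mathrm{cmb}$ = 2.73(1+$z$), and the virial temperatures can be cross-checked, to order of magnitude, with a simple top-hat estimate. The Python sketch below implements such a rough estimate; it is not the \\citet{Barkana2001} fitting formula used to compile the table, and the cosmological parameters, overdensity, and mean molecular weight are illustrative assumptions, so the resulting values agree with the table only at the order-of-magnitude level.\n\\begin{verbatim}\nimport numpy as np\n\nG, K_B, M_H, MSUN = 6.674e-8, 1.381e-16, 1.673e-24, 1.989e33  # cgs\nH0 = 70.0e5 \/ 3.086e24                # 70 km\/s\/Mpc in s^-1 (assumed)\nOMEGA_M, MU, DELTA = 0.27, 0.6, 200.0  # illustrative choices\n\ndef t_vir(M_sun, z):\n    # top-hat virial temperature of a halo with mean density DELTA\n    # times the background matter density at redshift z\n    rho_mean = OMEGA_M * 3.0 * H0**2 \/ (8.0 * np.pi * G) * (1.0 + z)**3\n    M = M_sun * MSUN\n    r_vir = (3.0 * M \/ (4.0 * np.pi * DELTA * rho_mean))**(1.0 \/ 3.0)\n    return MU * M_H * G * M \/ (2.0 * K_B * r_vir)\n\ndef t_cmb(z):\n    return 2.73 * (1.0 + z)\n\nprint(t_vir(1.4e5, 21), t_cmb(21))   # a few hundred K, ~60 K\nprint(t_vir(2.0e6, 15), t_cmb(15))   # ~1e3 K, ~44 K\n\\end{verbatim}\n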
\\begin{deluxetable}{lllllll}\n\\tabletypesize{\\scriptsize}\n\t\\tablecaption{Simulation details: flux strength (J$_{21}$), metallicity (Z\/Z$_\\odot$), virial mass (in M$_\\odot$), collapse redshift, and virial temperature of the halo. In the last column the CMB temperature T$_\\mathrm{cmb}$ = 2.73(1+$z$) is also reported.}\n\t\\tablewidth{0pt}\n\t\\tablehead{\n\t\\colhead{Run} & \\colhead{J$_{21}$} & \\colhead{Z\/Z$_\\odot$} & \\colhead{ Mass (M$_\\odot$)} & \\colhead{ $z_{\\mathrm{vir}}$} & \\colhead{T$_{\\mathrm{vir}}$ (K)} & \\colhead{T$_\\mathrm{cmb}$ (K)} \n\t}\t\n\t\\startdata\t\t\t\n\t\t\tA & 0 & \t-3 & 1.4$\\times$10$^5$ \t& 21\t& 1100 & $\\sim$60\\\\\n\t\t\tB & 1 & \t -3 & 2.0$\\times$10$^6$\t& 15\t& 4700 &$\\sim$44\\\\\n\t\t\tC & 10 & -3 & 2.0$\\times$10$^6$\t& 15\t& 4700 &$\\sim$44\\\\\n\t\t\tD & 100& -3 & 2.0$\\times$10$^6$\t& 15\t& 4700 &$\\sim$44\\\\\t\t\t\n\t\t E & 100& -2 & 1.4$\\times$10$^5$& 21 & 1100 &$\\sim$60 \\\\\n\t\t \\enddata \\label{table1}\n\\end{deluxetable}\n\n\\begin{figure}[!h]\n\\includegraphics[scale=.40]{fig3.eps}\n\\caption{Radial profiles of the averaged density (top left), temperature (top right), accretion rate (bottom left), and infall velocity (bottom right), for the same parameters as in figure \\ref{figure3}. The horizontal blue line represents the CMB temperature.}\\label{figure4}\n\\end{figure}\n\\section{Conclusions}\nIn this letter, we focused on the chemo-dynamical conditions necessary to allow a minihalo enriched by metals to undergo collapse. \nWe included an accurate treatment of the microphysics employing the chemistry package \\textsc{krome}. A complete non-equilibrium low-metallicity network has been evolved, including 16 species and 44 reactions, photochemistry, and non-equilibrium metal line cooling for the individual species, together with the standard primordial thermal processes. This is the first time that such a network, together with an accurate treatment of metal line cooling, is employed in the 3D hydrodynamical code \\textsc{enzo}. All the species have been advected to guarantee mass conservation. We consider both solar-type abundance patterns and pollution from type II supernovae, as recently reported by \\citet{Keller2014} and \\citet{Cooke2014}. Metals are injected around redshift $z = 22$, once the halo reaches a temperature of about 1000~K and a mass of 1.4$\\times$10$^5$ M$_\\odot$, simulating a mini-halo polluted by metals dispersed by the death of a nearby star. This metal-enrichment strategy is of course only an approximation, as self-consistent simulations of metal enrichment and subsequent collapse exceed current computational capabilities \\citep[see for example][]{Greif2010}. \n\nDifferent metallicities have been explored, ranging from Z\/Z$_\\odot$ = -4 to Z\/Z$_\\odot$ = -2, also including the abundance pattern provided by recent observational data \\citep{Keller2014}.\nTo better assess the physical conditions for the collapse, we included a UV radiation flux of different strengths, assuming a T$_*$ = 10$^4$ K soft spectrum and including H$_2$ photodissociation, H$^-$ photo-detachment, and self-shielding following \\citet{Wolcott2011sfh}. In the presence of a UV flux the halo grows to 2$\\times$10$^6$ M$_\\odot$ before starting to collapse at redshift 15. This allows the gas to reach lower temperatures ($\\sim$40 K) compared to the case without radiation, where collapse sets in earlier, when the CMB temperature is higher. It is then likely to produce lower-mass second-generation stars.\nAs expected, for J$_{21}$ = 0, a change in the metallicity provides a switch from H$_2$-dominated cooling to metal-line cooling for metallicities above Z\/Z$_\\odot$ = -3. 
It should be noted that the expected average intensity of the far-UV radiation has recently been calculated to be above J$_{21}$ = 1, even at higher redshifts \\citep{Mark2014}. \n\nA critical metallicity in the sense of \\citet{Bromm2001} has been found to lie between Z\/Z$_\\odot$ = -4 and Z\/Z$_\\odot$ = -3: even in the absence of H$_2$ the halo is able to cool and collapse, while above this critical metallicity the temperature floor is set by T$_\\mathrm{cmb}$. While neutral carbon is the main coolant for J$_{21}$ = 0, in the presence of radiation the metals are kept ionized and CII becomes more important. The differences between the different flux strengths are not very pronounced. In the case of a strong flux, J$_{21}$~=~100, and a higher metallicity, Z\/Z$_\\odot$~=~-2, the thermal evolution of the gas is very similar to the case with J$_{21}$~=~0, because the cooling from fine-structure metal transitions is very strong and the halo collapses at an earlier redshift without the need to grow in mass and reach higher densities. \n\nWe therefore conclude that with a significant amount of metals, in the presence or absence of radiation, collapse will occur. The metals will cool the gas to the temperature of the CMB, possibly induce fragmentation, and therefore determine the mass scale of the resulting stars.\nIt is thus evident that carbon-induced cooling is central during high-redshift structure formation.\n\n\\acknowledgments\nS.B. and D.R.G.S. acknowledge funding through the DFG priority program `The Physics of the Interstellar Medium' (project SCHL 1964\/1-1). D.R.G.S. and M.L. acknowledge funding via the SFB 963\/1 on \"Astrophysical Flow Instabilities and Turbulence\" (project A12). The plots in this paper have been obtained using the $\\mathrm{YT}$ tool \\citep{Turk2011a}. The simulations have been performed on the HLRN cluster under project nip00035.\n