diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzznmrj" "b/data_all_eng_slimpj/shuffled/split2/finalzznmrj" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzznmrj" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nQuantifier words---such as {\\em each} or {\\em most} or {\\em more than three}---have been extensively studied, both in logic and in linguistics \\citep{westerstaahl1989quantifiers,peters2006quantifiers}, going all the way back to \\citet{frege1879begriffsschrift}. In this paper, we examine the extent to which they present a challenge to modern NLU systems. Our analysis is motivated by three observations:\n\n{\\bf Quantifier words are abstract} Unlike nouns, verbs and adjectives, quantifier words do not have referents out in the world. Rather, quantifier words specify relationships between sets of entities, events and properties. To provide intuitions about the semantics of quantifier words, and to be able to refer to quantifiers in a language-independent way, we rely on the notion of generalized quantifiers \\cite{mostowski1957generalization}, as described in \\S2. \n\n{\\bf Quantifier words vary across languages} Quantifier word inventories differ across languages. Often what is considered rough translation equivalents also differ in syntax, fine-grained semantics or pragmatics. \\newcite{10.3389\/fpsyg.2019.00957} show, e.g., that perceptions of the numerical bounds of existential quantifiers differ across speakers of English, French, Slovenian, and German. Other papers showing discrepancies between quantifier systems include comparisons of Salish to English \\cite{matthewson2001quantification}, Adyghe to English \\cite{Nikolaeva2012}, or of Dutch, Hebrew and Bengali \\cite{10.2307\/25001107}. The cross-linguistic differences in how generalized quantifiers are expressed motivates a cross-lingual error analysis, since quantifiers may contribute more to error when processing some languages rather than others. \n\n\\input{tables\/examples.tex}\n\\input{tables\/categorization_set.tex}\n{\\bf Quantifier words are important} Quantifier words are extremely important for tasks that require inference, including natural language inference, question answering, fact-checking, etc. Datasets have, for example, been developed for numerical reasoning in English \\cite{dua-etal-2019-drop}. Several researchers have identified quantifier words as important sources of errors for natural language processing systems \\cite{joshi-etal-2020-taxinli}; see Table~\\ref{tab:examples} for examples of such errors. Unfortunately, most efforts have concentrated on subsets of quantifier words and on English. \n\n{\\bf Contributions} We analyze how quantifiers are represented in NLU benchmarks, and how their occurrence at test time contributes to errors by neural language models (LMs). We derive a linguistically motivated 11-way categorization set for generalized quantifiers and look into their distribution in three steps: (a) monolingual NLI; (b) cross-lingual NLI; (c) cross-lingual question answering. We also propose GQNLI\\footnote{\\url{https:\/\/github.com\/ruixiangcui\/GQNLI}}, an adversarial generalized quantifier NLI challenge dataset. Our work shows that (i) generalized quantifiers are pervasive and cause overall performance drops in NLU benchmarks; (ii) the contribution of quantifier words to system error varies across languages; and (iii) generalized quantifiers are particularly difficult for LMs in interaction with negation and subsumption. 
\n\n\\section{Background}\n\nGeneralized quantifiers (GQs) are developed upon first-order predicate logic, denoting relations between sets \\citep{mostowski1957generalization}. Given a universe \\textit{E}, a quantifier \\textit{Q} would be treated as a mapping \\(Q_{E}\\) from the Cartesian product of powersets \\(\\mathcal{P}(E) \\times \\mathcal{P}(E)\\) to the set \\{\\textit{false,true}\\} or, as a binary relation on subsets of \\textit{E} \\citep{dvorak2015}. GQs are generalizations of the\n\\(\\forall\\)\n\\(\\exists\\) quantifiers from first-order predicate logic \\citep{mostowski1957generalization, perlindstrom1966first,montague1973proper,Bach1995QuantificationIN,Keenan2012HandbookOQ}. A generalized quantifier is, abstractly, a relation between sets. Generalized quantifier theory, while developed by logicians, is used by formal linguists to analyze the meaning of quantifier words in combination with referential expressions \n\\citep{barwise1981generalized,higginbotham1981questions}.\n\nMost human languages contain ways of expressing generalized quantifiers, and their semantics exhibit striking similarities across languages \\cite{matthewson2004methodology, 2008universals,SteinertThrelkeld2019LearnabilityAS}. At the same time, generalized quantifiers can be instantiated very differently across languages due to pragmatic considerations \\citep{grice1989studies} or cognitive economy\nand cost-benefit optimisation in the exchange of information \\citep{levinson2000presumptive,e23101335,10.1162\/ling_a_00461}. Quantifier words also exhibit syntactic differences, e.g., with some languages having specialized words\nto express quantity, while others rely on metaphorical usage of common nouns \\citep{katsos2012acquisition}. In English, {\\em most} is a determiner, but Spanish and French express the same concept through common nouns, {\\em la mayor\\'{i}a} and {\\em la majorit\\'{e}}. The relative stability of the core semantics of quantifiers makes a cross-linguistic comparison possible, but the syntactic and pragmatic variation associated with the expression of generalized quantifiers poses a challenge for multilingual NLU. \nWe consult quantifier taxonomy studies \\citep{Keenan1997GeneralizedQI, peters2006quantifiers, szymanik-thorne-2015-semantic, Szymanik2016} and derive a categorization set for quantifier analysis in NLU benchmarks. In Table~\\ref{tab:categorization_set}, we list the 11-way quantifier categorization set and their logical denotation based on set theory. \n\nWhile other foci of formal linguistics have attracted the attention of NLP researchers---including coreference \\citep{ws-2019-models, crac-2020-models}, negation \\citep{hossain-etal-2020-analysis,hartmann-etal-2021-multilingual}, \nand consistency \\citep{li-etal-2019-logic,ribeiro-etal-2019-red,asai-hajishirzi-2020-logic,10.1162\/tacl_a_00450}---there has been little work on generalized quantifiers as a source of error in NLU, let alone in multilingual NLU. It remains an open problem whether LMs represent the semantics of quantifiers words adequately, or if they provide a basis for resolving scopal ambiguities.\\footnote{Note that generalized quantifiers are not always {\\em explicit} in discourse. The sentence {\\em inadequate sleep causes obesity} should be interpreted as {\\em Most of those who do not sleep adequately, gain weight} \\citep{zadeh1983computational}. 
Such implicit quantifiers related to pragmatic variation are important for language understanding, but will be ignored in this work.}\n\n\\section{NLU Benchmarks}\n\nWe conduct an error analysis focusing on the role of generalized quantifiers in\ntwo\nNLU tasks, Natural Language Inference (NLI) and Question Answering (QA), which generally\nrequire understanding of quantifiers. For each type of task, both monolingual and cross-lingual evaluations are conducted. We focus on generalized quantifiers in the {\em hypotheses} in NLI examples---and on generalized quantifiers in the {\em question} fields in question answering. \n\\input{tables\/task_stats_nli.tex}\nTo this end, we identify quantifiers by the lemma and the universal dependency relation \\citep{nivre-etal-2020-universal} of a quantifier after preprocessing the sentences using \\textit{Stanza} \\citep{qi-etal-2020-stanza}. Take the sentence ``The Yiddish culture has survived for more than a thousand years.''; we annotate it as ``The\/\\textit{det} Yiddish\/\\textit{amod} culture\/\\textit{nsubj} have\/\\textit{aux} survive\/\\textit{root} for\/\\textit{case} more\/\\textit{advmod} than\/\\textit{fixed} a\/\\textit{det} thousand\/\\textit{nummod} year\/\\textit{obl} .\/\\textit{punct}''. By matching the regex pattern of the quantifier ``more than k'', in this case \\textit{``((more|great)\\symbol{92}\/advmod than\\symbol{92}\/(fixed|case)|at\\symbol{92}\/case least\\symbol{92}\/nmod) .+\\symbol{92}\/nummod .+\\symbol{92}\/(nsubj|obj|obl)''}, we approximate the surface form of the type ``more than k''. Through matching quantifier patterns, we are able to find entries in which quantifiers are instantiated. See Appendix~\\ref{app: Regular Expressions} for the list of regex patterns we write to identify GQs. In Table~\\ref{tab:task_stats_nli} and Table~\\ref{tab:task_stats_qa}, we present the statistics of the quantifier distributions in NLI and QA tasks, respectively. As can be seen, quantifiers are indeed widespread in NLU tasks, accounting for roughly 10\\% in NLI tasks and 5\\% in QA tasks.\nWe will further discuss the statistics and experiments in the following section.\n\n\\section{Quantifiers in English NLI Benchmarks}\n\\label{sec:Quantifiers in Four English NLI Tasks}\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=\\columnwidth]{fig\/frequency.png}\n \\caption{Relative distribution of quantifiers in NLI and QA tasks ranked by semantic complexity. The bars show the relative frequency of each quantifier and the lines indicate the cumulative frequency for a task. }\n \\label{fig:frequency}\n\\end{figure}\n\\input{tables\/nli_result.tex}\n\nNLI is commonly framed as a three-way classification task with labels \\textit{entailment}, \\textit{contradiction} and \\textit{neutral} \\cite{bowman-etal-2015-large}. While SOTA models exhibit low error rates on NLI benchmarks, it is unclear when they succeed or fail in their underlying reasoning. We are interested in whether generalized quantifiers challenge modern NLI models. In our error analysis, we initially focus on three English NLI datasets, MultiNLI \\citep[MNLI;][]{williams-etal-2018-broad}, SNLI \\citep{bowman-etal-2015-large} and ANLI \\citep{nie-etal-2020-adversarial} as testbeds. \n\nTable~\\ref{tab:task_stats_nli} presents statistics of quantifier distribution in these datasets, where we observe that, across datasets, about 10\\% of all hypotheses contain quantifier words, indicating the pervasiveness of quantification.
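\nAs an illustration of the detection step described above, a minimal sketch (assuming the \\textit{stanza} and \\textit{re} Python packages; the pattern shown is only illustrative and is not the exact pattern listed in Appendix~\\ref{app: Regular Expressions}) annotates a sentence with lemma\/dependency pairs and tests it against a ``more than k'' pattern:\n\\begin{verbatim}
import re
import stanza

# stanza.download('en')  # one-off model download
nlp = stanza.Pipeline(lang='en', processors='tokenize,pos,lemma,depparse',
                      verbose=False)

def annotate(sentence):
    # build a "lemma/deprel" token string, e.g. "more/advmod than/fixed ..."
    words = nlp(sentence).sentences[0].words
    return " ".join(f"{w.lemma}/{w.deprel}" for w in words)

# illustrative surface pattern for the "more than k" type
MORE_THAN_K = re.compile("(more|great)/advmod than/(fixed|case) .*[^ ]+/nummod")

sent = "The Yiddish culture has survived for more than a thousand years."
print(bool(MORE_THAN_K.search(annotate(sent))))   # True
\\end{verbatim}\n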
We also plot the frequency of quantifiers in NLI in Figure~\\ref{fig:frequency} and find that the quantifier word distribution follows Zipf's law \\citep{Zipf1949HumanBA}. Note that the top three most common quantifiers account for more than 90\\% of all quantifier occurrences. \n\n\\paragraph{Experiments and Results}\nIn order to investigate whether NLU systems can solve quantifiers in NLI, we experiment with two pretrained LMs: BERT\\footnote{\\texttt{wwm\\_cased\\_L-24\\_H-1024\\_A-16}} \\citep{devlin-etal-2019-bert} and RoBERTa\\footnote{\\texttt{roberta-large}} \\citep{Liu2019RoBERTaAR}. We use the codebase by \\citet{nie-etal-2020-adversarial}. The training data combines SNLI, MNLI, FEVER-NLI \\citep{nie2019combining} and ANLI. \n\nIn Table~\\ref{tab:nli_result}, we report the test set performance on SNLI and ANLI, and the dev set performance on MNLI \\textit{matched} and \\textit{mismatched} sections. We can observe that SOTA models suffer from performance drops across almost all quantification phenomena in every task. When it comes to performance over all quantifiers, the improvement from BERT to RoBERTa (2.2\\%) is less prominent than that over full datasets (2.9\\%), suggesting RoBERTa is particularly challenged. \n\nTaking a closer look at error by category, proportional quantifiers seem harder to solve than Aristotelian\/counting quantifiers. Except for {\em k\\%}, all proportional quantifiers---{\em p\/k}, {\em most}, and {\em few}---are about 10\\% lower than the five counting quantifiers (except {\em less than k}) with BERT; and about 5\\% lower with RoBERTa. RoBERTa is not generally superior to BERT; e.g., for {\em k\\%}, BERT outperforms it by 22\\%. We show a pairwise analysis of how GQs affect performance when they appear in both the premises and hypotheses in the Appendix \\ref{app:pairwise}. Generally, our results attest to the difficulty of resolving GQs in NLI benchmarks.\n\n\n\\section{Quantifiers in Cross-lingual NLU Benchmarks}\nQuantifiers are acquired in similar orders across languages \\cite{Katsos2016CrosslinguisticPI}, although languages express quantifiers in different ways. For example, there are eight different universal quantifiers with different levels of distributivity in Malagasy \\citep{Matthewson2008QuantificationAC}. This poses challenges to training multilingual LMs and transfer learning. We are interested in whether quantifiers are universally and evenly challenging for all languages. \n\n\\input{tables\/xnli_result.tex}\n\n\\paragraph{Quantifiers in Cross-lingual NLI}\nWe choose XNLI \\citep{conneau-etal-2018-xnli}, a manual translation of the development and test set of MNLI into 15 languages, for this multilingual error analysis. We should clarify that for XNLI, the authors annotate entailment labels for the English data only and apply them to the other languages. We do not assume label changes due to translation in this study, but it is worth investigating in the future. We choose five languages belonging to different language families, namely Arabic, Chinese, German, Spanish and Vietnamese, as targets. \nThe last column in Table~\\ref{tab:task_stats_nli} shows the numbers of quantifiers in XNLI. The proportion of entries containing quantifiers is again roughly 10\\%. Note that the universal quantifier is the most common quantifier in XNLI. \n\nWe fine-tune mBERT\\footnote{\\texttt{multi\\_cased\\_L-12\\_H-768\\_A-12}} \\citep{devlin-etal-2019-bert} and XLM\\footnote{\\texttt{xlm-mlm-100-1280}} \\citep{Lample2019CrosslingualLM} on the MNLI training set and evaluate them on XNLI.
We report the results in Table~\\ref{tab:xnli_result}. We find that performance varies across languages. For Chinese and Vietnamese, we see significant drops in performance for examples with GQs, whereas for Arabic and German, we see improvements. The results {\\em per} quantifier are more homogeneous, however. \n\nSimilar to our results for English, we can see that the lowest accuracies in XNLI are with proportional quantifiers, such as {\\em most} and {\\em few}. But the gap in non-English languages is wider for these two categories, especially for Chinese, the difference reaches 30\\%. Other hard quantifiers include {\\em all}, $>k$, $1$, respectively. We looked at six different system widths, $W= 128, 256, 512, 1024, 2048, 4096$. Because FK clusters are intrinsically bond clusters, we needed to use a trick to turn them into site clusters. We created a lattice twice as dense as the original and marked every site at the center of a bond and every site where two bonds meet as cluster sites. The FK cluster widths used were $W\/2 = 64, 128, 256, 512, 1024, 2048$. \n\n\nTo have proper FK clusters we require equilibration in the SW algorithm. We numerically determined that the equilibration time for $Q=2, 3, 4$ is of the order of $W$ spin updates by looking at the relaxation of the average energy per spin and the average largest cluster size. For $Q = 2,3$ and small $W$ we ran a separate simulation to equilibrium for each spanning cluster which was added to our ensemble. For $Q=2$ and $3$ and $W=2048$ and $4096$ and for all of the $Q=4$ clusters, the equilibration time was too large to proceed in this way. In these cases we equilibrated the system once and recorded an ensemble of spanning clusters as the simulation proceeded. We conservatively estimate the correlation time as $50$ spin updates for all $W$ and $Q$. This means we recorded a spanning cluster every $50$ spin updates. \n\nFor each system size we grew a number of clusters. For all $Q$ our ensemble was $2000, 2000, 1000, 1000, 400, 100$ clusters for $W = 128, 256, 512, 1024, 2048, 4096$, respectively.\n\n\\section{Measuring small probabilities with random walkers}\n\n\n\\subsection{Previous Methods}\nSmall probabilities in the harmonic measure correspond to very unlikely paths. As the simulation proceeds we can think of the event of a random walker landing where the measure is very small as a rare event. Thus computing small probabilities is a similar task to finding the rate of a rare chemical reaction \\cite{vanKampen01}, a rare extinction of a disease \\cite{Andersson00} or a population \\cite{Bartlett60}, or the failure of a queuing system via queue over-flow \\cite{Medhi03}.\n\n Accelerated numerical methods for these problems often involve biased event sampling. The sampling can frequently be cast as a random walk, either through state space or in our case, physical space. For example, one could ask what is the probability that a random walker starting half-way up a hill will successfully climb up to the top before sliding down to the bottom. If the hill is steep, it could be impossible to directly sample the probability to climb the hill. One could place barriers uniformly on the hill, which when crossed by the random walker, will split the random-walker into two walkers, each with equal weight which add up to the original weight of the walker. This will aid sampling of the events higher up on the hill. This method is called ``splitting\" and effectively performs importance sampling \\cite{Hammersley65}. 
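\nA toy illustration of splitting (a hedged sketch of the generic idea, not of the methods introduced below): consider a discrete, downward-biased random walk on the levels $0,\\ldots,M$ of a hill and estimate the small probability of reaching the top from the middle before falling to the bottom; each time a walker first crosses a new barrier on its way up, it is split into two copies of half the weight.\n\\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def climb_probability(p_up=0.3, top=30, start=15, n_walkers=20000, barrier=2):
    # splitting estimate of P(reach `top` before 0 | start at `start`)
    total = 0.0
    for _ in range(n_walkers):
        # each entry: (position, weight, highest barrier crossed so far)
        stack = [(start, 1.0 / n_walkers, start)]
        while stack:
            pos, w, best = stack.pop()
            while 0 < pos < top:
                pos += 1 if rng.random() < p_up else -1
                if pos >= best + barrier:       # first crossing of a new barrier
                    best, w = pos, 0.5 * w      # split into two half-weight copies
                    stack.append((pos, w, best))
            if pos == top:
                total += w                      # surviving weight adds to the estimate
    return total
\\end{verbatim}\nFor this one-dimensional example the (unbiased) estimate can be checked against the exact gambler's-ruin value $(1-(q\/p)^{k})\/(1-(q\/p)^{M})$, with $p$ the probability of an upward step, $q=1-p$ and $k$ the starting level.\n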
One significant drawback of splitting is that if the barriers are too densely or sparsely spaced, the number of random walkers will tend to diverge or extinguish, respectively. \n\nThe methods we detail in this paper are related to the splitting method, but differ in that our methods do not have the possibility of diverging or extinguishing. Another popular method called ``milestoning\" \\cite{Farajian04}, does not have a divergence problem, but does require the system studied to be in equilibrium and the location of the barrier to be known \\textit{a priori}, whereas our method works for equilibrium and non-equilibrium systems and the barriers are placed `on the fly.'\n\n\n\\subsection{Signposts}\nWe have developed several accelerated methods for the harmonic measure problem. The motivation, as we have stated, is that it is usually impossible to send in enough random walkers to directly obtain the harmonic measure: the clusters will frequently have regions with probabilities of being hit that are smaller than $10^{-100}$. It would require of order $10^{100}$ random walkers to sample this region; such a computation is clearly impossible. \n\nWe now review the first method we developed, the signpost method \\cite{Adams08}. The signpost method consists of two steps which are applied iteratively; see Figure \\ref{fig:Signpost}. In the first (probe) step we release $N$ diffusing random walkers far from the cluster to determine which regions are rarely visited in straightforward sampling. Next, we block off all poorly sampled regions with signposts (absorbing lines). In the second (measurement) step, $N$ more walkers are released far from the cluster and either absorb on the cluster (or the accessible perimeter) or onto the signposts. The walkers sent in this step have their weight permanently added to the harmonic measure of the perimeter sites where they landed. In the next probe step, the walkers are released from the points on the signposts where the walkers in the previous measurement landed. The new walkers have a weight of $p\/N$, where $p$ is the fraction of random walkers that absorb onto signpost lines in the previous step, to conserve probability. The probe step again helps determine which regions are still poorly sampled, which are subsequently blocked off. Next, another measurement step is performed. This process is repeated until all regions are explored by the random walkers. This algorithm can be applied to on- and off-lattice clusters. \n\n\\begin{figure}[t]\n\\includegraphics[width=0.48\\textwidth]{Etching_Visualization_e_fig3}\n\\caption{\\label{fig:Etching} (Color online) The etching method. Walkers are released from the current level sites. The next level of soft sites absorb walkers; they are then relabeled as current level sites. Future sites are all sites which will eventually become current level sites. The first round of random walkers are launched from the row above the cluster, (a). The weight of all of the walkers released is $0.2\/N$, where $N$ is the number of walkers released per current level site. $20\\%$ of the walker weight is deposited onto the top row of perimeter sites and the next level soft site, which will release $N$ walkers in the next step. (b). One more perimeter site is accessible to the random walkers and $5$\\% of the weight is deposited on the site in the next level. (c) Three sites in the next level each absorb $1$\\% of the walker weight. 
(d) Due to the reduced weight of the walkers released in the next step, small probabilities are measured on the newly exposed perimeter sites.}\n\\end{figure}\n\nWe should note a few things about this method. First, one must determine the entire perimeter of the cluster at the beginning of the computation in order to figure out how to block poorly sampled regions. Also, one needs to choose a rate to reduce the threshold for calling areas ``poorly sampled\" in each iteration. In \\cite{Adams08}, we moved the threshold down by a power of $10$ each iteration, whereas in \\cite{Adams09}, we reduced it as a function of how many walkers hit the signpost in the previous iteration. When more walkers hit the signposts we moved them even deeper. The second method gave more consistent walker saturation, which should lead to a slower compounding of error. It is important to note that the signpost algorithm is only practical for two-dimensional problems. For higher dimensions, one would need to define signpost \\emph{surfaces} to block poorly sampled regions. This could be very complex for a complicated cluster. \n\n\\subsection{Etching}\nWe now describe the method we use here, which we call ``etching.\" Consider the hull of FK clusters grown on a triangular lattice with periodic boundary conditions. We want to find the harmonic measure of the top perimeter from above. To do this, we start by marking all sites that are exterior to the cluster from above as \\emph{soft} sites; the soft sites are absorbing like the cluster (or accessible perimeter) sites. The highest row is limited to one level above the highest point on the perimeter. \n\nWe next relabel every site on that highest row as a \\emph{current level} site; these are not absorbing. We release $N$ random walkers, each with weight $1 \/ (N W)$, from each current level site. The walkers released from these sites are allowed to walk until they deposit their weight onto a soft site or a perimeter site. If they move one level further away from the cluster, they are immediately moved back onto the current level sites using a Green's function which must be determined in advance. However, this is rather simple since it is the Green's function to return to a plane from one site above the plane. This Green's function is used for the entire simulation and limits how far a walker can backtrack to at most one level above the cluster. After all random walkers are released, the labels on each current level site are removed and every soft site hit in the previous step is labeled as a current level site. From each current level site $i$ we release $N$ random walkers with weight $p_i$, where $p_i$ is the amount of probability deposited on the site in the previous step divided by $N$. This process is repeated until there are no more soft sites. See Figure \\ref{fig:Etching}. \n\nEtching can be thought of as the limit of the signpost method with the signposts spaced one site apart. However, etching has several benefits over the signpost method. First, the entire perimeter of the cluster does not need to be mapped out before we start. Both algorithms have the same time complexity, $O(W^3)$ for the complete perimeter of Ising clusters, and both methods have similar memory requirements. In contrast to the signpost method, the etching method can be easily generalized to higher-dimensional lattice problems and networks. We have successfully used etching to obtain the harmonic measure of three-dimensional percolation clusters \\cite{Adams09b}.
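\nAs a concrete illustration, the following simplified sketch implements the etching bookkeeping on a small square-lattice grid (an assumption made for brevity; the clusters studied in this paper live on a triangular lattice). It also replaces the exact plane-return Green's function described above by a hard reflection one level above each launch site, so it conveys the structure of the method rather than its precise backtracking rule.\n\\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def etch(grid, n_walkers=1000):
    # grid: 0 = empty site, 1 = absorbing cluster/perimeter site; periodic in x
    H, W = grid.shape
    measure = np.zeros(grid.shape)
    # current level: empty sites of the top row, total weight 1 split evenly
    current = {(0, x): 1.0 / W for x in range(W) if grid[0, x] == 0}
    soft = {(y, x) for y in range(H) for x in range(W) if grid[y, x] == 0}
    soft.difference_update(current)
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    while current:
        hits = {}                            # weight deposited on the next level
        for (y0, x0), p in current.items():
            ceiling = max(y0 - 1, 0)         # crude stand-in for the Green's function step
            for _ in range(n_walkers):
                y, x = y0, x0
                while True:
                    dy, dx = steps[rng.integers(4)]
                    y, x = max(y + dy, ceiling), (x + dx) % W
                    if grid[y, x] == 1:                      # absorbed on the perimeter
                        measure[y, x] += p / n_walkers
                        break
                    if (y, x) in soft:                       # reached a deeper soft site
                        hits[(y, x)] = hits.get((y, x), 0.0) + p / n_walkers
                        break
        soft.difference_update(hits)
        current = hits                        # etch one level deeper and repeat
    return measure

# toy "cluster": a flat absorbing floor with a one-site bump
grid = np.zeros((6, 8), dtype=int)
grid[5, :] = 1
grid[4, 3] = 1
print(etch(grid, n_walkers=2000))
\\end{verbatim}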
\n\n\\begin{figure}[t]\n\\includegraphics[width=0.45\\textwidth]{Log_Z_vs_Log_L_fig4}\n\\caption{\\label{fig:Zq} An example of the fit of $\\log_{10} Z(L,q)$ versus $\\log_{10} L$ to a straight line for $q = 2.0379$. The behavior is similar for all $q$ values that we have examined. The slope of the line is $\\tau(q) \\equiv (q-1)D(q)$.}\n\\end{figure}\n \n\\subsection{Green's functions}\n\nWe have also developed a rare-event method which may be significantly more efficient than etching and signposting for some problems. Thus far we have applied this method only to simple test problems. This method manipulates probabilities directly and does not allow backtracking of probability. To do this, we calculate the Green's function $G(i,j;k,l)$, i.e., the probability to move to any of the sites $i,j$ in the next level from a given site $k,l$ in the current level. \n\nTo illustrate our algorithm, consider finding the probability distribution in a channel with absorbing walls on a square lattice. The initial condition is that the probability is uniformly distributed among the sites in the first row of the channel and the zeroth row is a reflecting boundary. All sites that initially have probability are denoted by $C$. The previous level sites, absorbing sites, and next level sites accessible to the current level sites are denoted by $B$, $A$, and $N$, respectively. (Initially, the previous level is the reflecting boundary.) In each iteration, the goal is to move all of the probability from each current level site to all the next level and absorbing sites. \n\nWe find the Green's function by iteration on an index $s$. The process begins for some current level site, $k, l$; $(k,l) \\in C$. Initially, probability only resides at $k,l$ so that for $s=0$, $G^{(0)}(i,j;k,l) = \\delta_{i,k} \\delta_{j,l}$. In each iteration, the probability is moved to each of the current level site's neighbors:\n\\begin{equation}\n\\label{eq:update}\nG^{(s+1)}(i,j;k,l) = \\sum_{(m,n)} W(i,j;m,n) G^{(s)}(m,n;k,l), \n\\end{equation}\nusing the jump probability, \n\\begin{eqnarray}\nW(i,j;m,n) &=& \\frac{1}{4} ( \\delta_{i,m+1}\\delta_{j,n} + \\delta_{i,m-1}\\delta_{j,n} + \\delta_{i,m}\\delta_{j,n+1} \\nonumber \\\\\n &+& \\delta_{i,m}\\delta_{j,n-1} ) \\quad (m,n) \\in C \\nonumber \\\\\n &=& G_B(i,j;m,n) \\quad (m,n) \\in B \\nonumber \\\\\n &=& \\delta_{i,m}\\delta_{j,n} \\quad (m,n) \\in A \\cup N\n\\end{eqnarray}\nHere $G_B(i,j;m,n)$ is the Green's function for the previous level (see below), and the last line represents the probability staying at absorbing and next level sites. $G_B(i,j;m,n)$ takes into account all the processes that would correspond to random walkers backtracking beyond the previous level. To start the process, the reflecting boundary has $G_B(i,j;0,n) = \\delta_{i,1}\\delta_{j,n}$.\n\n\\begin{figure*}\n\\includegraphics[width=0.85\\textwidth]{D_q_external_panel_fig5}\n\\caption{\\label{fig:D_q_Q1234} (Color online) The $D(q)$ spectrum for the accessible perimeters of $Q=1,2,3,4$ clusters, in (a), (b), (c), and (d), respectively. The solid lines are the theory of \\cite{Duplantier00} and the symbols are the results of our simulations for several system widths. The vertical dotted lines mark $q_{min}$ for the theoretical spectra for infinite systems.}\n\\end{figure*}\n\nFor large $s$, virtually all of the probability will be on absorbing sites and next level sites.
In any finite amount of time, some slight probability will remain in the current level, so after some stopping criteria is met, the probabilities recorded on the absorbing and next level sites must be normalized. When this has been achieved, we have the Green's function from a given site in the current level, $k,l$, to any site in the next level, $i,j$:\n\\begin{equation}\n\\label{eq:GreenNext}\nG_B(i,j;k,l) = \\lim_{s \\rightarrow \\infty } G^{(s)}(i,j;k,l).\n\\end{equation}\nIn the next step, this $G_B$ will be used as a jump probability. \n\nThis process is repeated for all current level sites so that Green's functions from those sites to the next level sites and absorbing sites are calculated. With these Green's functions, it is easy to determine where the probability from the first level will end up. If the probability in the starting level is $P(k,l)$, then the probability in the next level is:\n\\begin{equation}\n\\label{eq:ProbAbsorbing}\nP(i,j) = \\sum_{ (k,l) \\in C } G_B(i,j;k,l) P(k,l)\n\\end{equation}\nNote that $(i,j)$ can be absorbing sites as well as next level sites.\n\nThe next step is to relabel all current level sites as previous level sites, relabel all next level sites as current level sites, and mark all sites that are accessible to the new current level sites (which are not previous or absorbing sites) as next level sites. Then the process is repeated.\n \n\n\n\n\n\nThe end result of this process is that all of the original probability is at absorbing sites, as it would be using signposting or etching. Although this example contained only sites that were completely absorbing or non-absorbing, the Green's function method can easily be generalized to partial absorption problems. \n\nThe Green's function method is somewhat more complex to program than the etching method and the simplest implementation involves setting up the Green's function look-ups in sparse arrays. This leads to a memory complexity which grows like $W^{2d}$, where $d$ is the dimension of the space. The memory complexity would significantly reduce its usefulness, as it would take at least one terabyte to store a two-dimensional cluster with a length scale of $1000$ lattice sites. However, it is possible to store the Green's function lookup in an associative array; this reduces the memory complexity to $W^{d-1 + D}$, where $D$ is the fractal dimension of the perimeter. For the external perimeter of two-dimensional percolation clusters the memory complexity grows like $W^{7\/3}$, which is quite close to the memory complexity for etching, $W^2$. For a cluster with a length scale of $1000$ sites, the minimum required memory would be about ten megabytes for the Green's function method. \n\n\n\n\n\\begin{figure}\n\\includegraphics[width=0.48\\textwidth]{D_q_external_Qone_fine_fig6}\n\\caption{\\label{fig:D_q_Q1_fine} (Color online) The $D(q)$ spectrum for the accessible perimeters of $Q=1$ clusters for small $q$. As the system size increases the simulated values increase, presumably to approach infinity for $q< -1\/24$.}\n\\end{figure}\n\n\\begin{figure}\n\n\\includegraphics[width=0.4\\textwidth]{D_q_Qone_complete_fig7}\n\\caption{\\label{fig:D_q_Q1_complete} (Color online) The $D(q)$ spectrum for $Q=1$ for the complete perimeter. There is no theoretical prediction for this quantity. However, for $q$ substantially bigger than 0 we expect this result to be very similar to the result for the accessible perimeter since large probabilities will dominate the sum in Eq. (\\ref{eq:Zq}). 
The line labeled ``theory\" is for the accessible perimeter.}\n\\end{figure}\n\n\\section{Results}\nWe used etching to find the harmonic measure of $Q$-state Potts model clusters. We analyze the measure by producing $D(q)$ spectra and histograms of the probability distributions. To obtain $D(q)$, we start by sectioning individual clusters into boxes of length $L$ as described above. Because we are using a triangular lattice, it is convenient to use a parallelogram aligned with the lattice as a box. After completely tiling the cluster with boxes, we define the probability within a box $p_{i,L}$ as the sum of the measure of perimeter sites within the box. We then calculate $Z(L,q)$ using Eq.~(\\ref{eq:Zq}). $D(q)$ is related to $Z(L,q)$ by $(q-1)D(q) = m$, where $m$ is the slope of $\\log Z(L,q)$ versus $\\log L$. \n\nWe found that for a given $Q$ and $q$, all system sizes have similar local slope behavior over a range of $L$; see Figure \\ref{fig:Zq}. In order to average over the ensemble we average $\\log Z$. However, if we use the slopes for each individual member of the ensemble and average them we get virtually identical results. \n\nThe spectra of generalized dimensions for the external hulls of $Q=1-4$ are given in Figure \\ref{fig:D_q_Q1234}. In all cases the results are close to the theoretical predictions \\cite{Duplantier00}. The theoretical predictions include a divergence of $D(q)$ for $q < q_{min}$ for an infinite system, see below. Our simulation results increase rapidly with $W$ for this regime, as expected; see Figure \\ref{fig:D_q_Q1_fine}. \n\nFor completeness, we include the spectrum of generalized dimensions for the complete perimeter for the case $Q=1$; see Figure~\\ref{fig:D_q_Q1_complete}. There is no theoretical prediction for this quantity. For positive $q$ the results are close to those of the accessible perimeter shown in Figure \\ref{fig:D_q_Q1234}. This is because, for positive $q$, large probabilities contribute most of the weight in $Z(q)$. Near $q=0$ the two spectra differ because there are significantly more sites with small measure for the complete hulls.\n\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{histogram_demo_external_Qone_fig8}\n\\caption{\\label{fig:rawhist} (Color online) The histogram of the frequency of occurrence of the values of $p$ for the accessible perimeter for $Q=1$. The points for various values of $W$ are superimposed.}\n\\end{figure}\n\nWe also considered the distribution of the values of $p$ directly, by making histograms of its frequency for all $Q$ and $W$. The histograms turn out to be power laws with negative powers near $-1$; for an example see Figure \\ref{fig:rawhist}. Since the histogram is very accurately a power-law in $p$, it is useful to plot the local slope of the histogram, which is shown in Figure \\ref{fig:ProbDistQ1234} for the accessible perimeters for $Q=1-4$. We also show the local slope for the complete perimeter of $Q=1$; see Figure~\\ref{fig:ProbDistQ1complete}. The slope is calculated over about 10 orders of magnitude in $p$ for the accessible perimeter, and more than one order of magnitude for the complete perimeter. \n\nThe significance of the slope is that it gives information about the non-scaling aspects of the distribution, and, in particular, the value of $q_{min}$ mentioned above. If we call the slope of the histogram $-\\phi$ (so that $\\phi$ is a positive number) we see that the partition function of Eq. 
(\\ref{eq:Zq}) formally diverges if $q< \\phi-1$, or, said another way, we expect $D(q)$ to be undefined for $q< q_{min} = -1+\\phi$. This means that the partition function is dominated by a few instances of very small probabilities which does not scale as power-law in $R\/L$. \nThe values for the limit of the spectrum agree well with the predictions of Duplantier \\cite{Duplantier99, Duplantier00}; see Figure \\ref{fig:ProbDistQ1234}. Note that the slopes are very nearly constant over about 40 orders of magnitude in $p$.\n\nThe slopes for the complete perimeter of percolation clusters are also constant over many orders of magnitude; see Figure \\ref{fig:ProbDistQ1complete}. In this case we find that $\\phi$ is very close to 1, and the limit of the spectrum is at $q_{min}=0$. There is no theory for this case, and no explanation for this intriguing result.\n\n\\begin{figure*}\n\\includegraphics[width=0.9\\textwidth]{histogram_external_panel_fig9}\n\\caption{\\label{fig:ProbDistQ1234} (Color online) The local slope of the histogram of the frequency of occurrence of the values of $p$ for the accessible perimeter for $Q=1,2,3,4$ in (a), (b), (c), and (d), respectively. Also shown (solid lines) are the theoretical predictions of the local slope from \\cite{Duplantier00}. Note that in (b) the smallest probabilities recorded were not from the largest system size, but were from $W=1024$. This can be understood by the fact that ten times as many clusters were generated for $W=1024$. That is, among the many samples at $W=1024$, a few abnormally deep clusters were recorded which happened to have the smallest probabilities.}\n\\end{figure*}\n\n\n\n\n\n\\section{Error estimate}\nSince etching involves sampling the probability, there will be errors due to the finite number of random walkers released at each step. \nFor the results in this paper, we released $10^3$ random-walkers per current level site for all system widths and $Q$ values. \n\nWe can estimate the sampling errors as follows: we considered one percolation cluster with $W=2048$ and made 10 independent computations of the $p_i$. The variance of the probability over this sample at a given point on the cluster, $\\delta p_i$, is a measure of the reliability of the measurement. In our case we found that some points have a rather large percentage error, though always less than a factor of 3, but the average over all the points, $\\langle \\delta p_i\/p_i\\rangle$, was 23\\%. Note that the very small probabilities well inside the cluster have very small errors. There is no build-up of the error as we etch toward the interior, as might have been expected. \n \n If it is necessary to reduce the error further, more random walkers can be used. However, we believe that the ensemble averaging that we did means that the generalized dimensions are much more accurate than the individual probabilities. Our evidence for the last statement is the good quality of the fit in Figure \\ref{fig:Zq}, and the closeness of the results in Figure \\ref{fig:D_q_Q1234} to theory. Note also that $D(0)$ is close to the known fractal dimensions of the exterior perimeters.\n\n\n\\section{Conclusions}\nIn this paper, we presented the etching method, a new accelerated technique for computing the harmonic measure. We are able to measure probabilities as small as $10^{-4600}$. We showed how this method relates to other methods. 
We used etching to obtain the harmonic measure for the accessible perimeter of FK clusters for the $Q$-state Potts model for $Q=1-4$, for a range of system sizes. We compared this data to theoretical predictions \\cite{Duplantier99,Duplantier00}. These theories were produced for a continuum model which, in principle, might not apply to the scaling limit of the $Q$-state Potts model on a lattice. In fact, we found good agreement between our numerical results and the theoretical predictions for every comparison we made, including the $D(q)$ spectra and the slopes of the power-law probability distributions. \n\n\\begin{figure}[t]\n\\includegraphics[width=0.45\\textwidth]{Histogram_Complete_Qone_fig10}\n\\caption{\\label{fig:ProbDistQ1complete} (Color online) The local slope of the histogram of the frequency of occurrence of the values of $p$ for the complete perimeter for $Q=1$.}\n\\end{figure}\n\n\nFor the complete perimeter of percolation clusters, we found the slope to be almost exactly $-1$ for about 4000 orders of magnitude. This suggests that the smallest $q$ for which $D(q)$ is defined is $q=0$. This means that there are many instances of small probabilities on the complete perimeter of percolation clusters which tend to zero faster than any power law in $R\/L$. \n\nEtching, signposting, and the Green's function method are three tools which can find very small probabilities. The advantage of signposting is that it is natural to use in off-lattice systems, and, in fact, we have applied it to off-lattice DLA \\cite{Adams09}. Etching is simple to program and should be easy to use in higher-dimensional on-lattice systems. Lastly, the Green's function method is likely to be the most efficient of the algorithms for on-lattice and network systems, but it is more difficult to implement and requires more memory than etching. The etching and Green's function methods (but not signposting) can be used in problems which involve absorption probabilities less than unity.\n\n\n\\section{Acknowledgements}\n\nThis work was supported in part by National Science Foundation grant DMS-0553487.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\\subsection*{Background} Statistical shape analysis is now regarded across the board as an important area of applied mathematics as it has been and still is the source of a wealth of theoretical works as well as applications to domains like computational anatomy, computer vision or robotics. Broadly speaking, one of its central aims is to provide quantitative\/computational tools to analyze the variability of geometric structures in order to perform different tasks such as shape comparison or classification. \n\nThere are several specific difficulties in tackling such problems in the case of datasets involving geometric shapes. A fundamental one is the issue of defining and computing metrics on shape spaces. A now quite standard approach which was pioneered by Grenander in \\cite{Grenander1993} is to compare shapes through distances based on deformation groups equipped with right-invariant metrics together with a left group action defined on the set of shapes. In this framework, the induced distance is typically obtained by solving a registration problem, i.e., by finding an optimal deformation mapping one object onto the other. It is thus ultimately determined by the deformation group and its metric for which many models have been proposed.
In this paper, we will focus on the Large Deformation Diffeomorphic Metric Mapping (LDDMM) of \\cite{Beg2005} in which diffeomorphic transformations are generated as flows of time-dependent velocity fields.\n\nDespite the versatility of such models, another common difficulty in shape analysis is the multiple forms or modalities that shapes may take. Looking only at applications in the field of computational anatomy, while early works mostly considered shapes given by medical images \\cite{Rueckert1999,Beg2005} or manually extracted landmarks \\cite{Joshi2000}, the variety of geometric structures at hand has considerably increased since then, whether shapes are images acquired through multiple modalities (MRI, CT...) \\cite{Avants2008}, vector or tensor fields as in Diffusion Tensor Images \\cite{Cao2005}, fields of orientation distribution functions \\cite{Du2012} or delineated objects like point clouds, curves \\cite{Glaunes2006}, surfaces \\cite{Glaunes2008}, fiber bundles \\cite{Durrleman4}... \n\nThe intent of this paper is to make a modest step toward one possible generalized setting that could encompass a rich class of shapes including many of the previous cases within a common representation and eventually lead to a common LDDMM matching framework. Our starting point is the set of works on curve and surface registration based on geometric distributions like measures, currents or varifolds \\cite{Glaunes2004,Glaunes2008,Charon2}. In the recent article \\cite{Charon2017} for instance, an oriented curve\/surface is interpreted as a directional distribution (known as oriented varifold) of its oriented tangent\/normal vectors, which results in simple fidelity terms used in combination with LDDMM to formulate and solve inexact matching problems. Yet all those works so far have restricted the role of distributions' representations to intermediates for the computation of guiding terms in registration algorithms; the underlying deformation model and registration problem remain defined over point sets with meshes.\n\nThe stance we take here is to instead introduce group actions and formulate the diffeomorphic matching problem directly in spaces of geometric distributions. In this particular work, we will restrict the analysis to objects in 2D and 3D and focus on the simpler subspace of discrete distributions, i.e., those that write as finite sums of Dirac varifold masses: Figure \\ref{fig:discr_shape_var} gives a few examples of objects naturally represented in this form. We shall consider different models of group actions and derive the corresponding optimal control problems, optimality conditions (Section \\ref{sec:optimal_control}) and registration algorithms (Section \\ref{sec:matching_algo}). This provides, on the one hand, an alternative (and theoretically equivalent) numerical framework to \\cite{Charon2017} for curve and surface matching using currents, oriented or unoriented varifolds. But the main contribution of our proposed model is that it extends LDDMM registration to the more general class of objects representable by discrete varifolds. In Section \\ref{sec:results}, we will show several examples of synthetic data besides curves or surfaces that can be treated as such, including cases like multi-directional objects or contrast-invariant images. \n\n\\subsection*{Related works.} A few past works share some close connections with the present paper. For instance, \\cite{Cao2005} develops an approach for registration of vector fields also within the LDDMM setting.
The discrete distributions we consider here are however distinct from vector fields as they should rather be interpreted as unlabelled particles at some locations in space with orientation vectors attached (and with a possibly varying number of orientation vectors at a single position) as opposed to a field of vectors defined on a fixed grid. In particular, our approach will be naturally framed in the Lagrangian setting as opposed to the Eulerian formulation of \\cite{Cao2005}. The geodesic equations for the pushforward group action that are derived in Section \\ref{sec:optimal_control} can also be related to the framework of \\cite{Sommer2013} where deformations between images are estimated by matching higher-order information like the Jacobian of the diffeomorphism at given points using higher-order similarity measures with a specific form. These are defined through labelled sets of control points though and first need to be extracted from the images, which is again different and arguably less flexible than the method we introduce here. \n\n\n\\section{Shapes and discrete varifolds}\n\\label{sec:discrete_varifolds}\nThe idea of representing shapes as distributions goes back to numerous works in the field of geometric measure theory. Those concepts have later been of great interest in the construction of simple and numerically tractable metrics between curves or surfaces for registration problems: the works of \\cite{Glaunes2006,Glaunes2008,Durrleman4,Charon2} are a few examples. The framework of oriented varifolds recently exploited in \\cite{Charon2017} was shown to encompass all those notions into a general representation and provide a wide range of metrics on the spaces of embedded curves or surfaces. We give a brief summary of the latter work below. \n\nIn the rest of the paper, we will call an oriented varifold or, to abbreviate, a \\textit{varifold} in $\\mathbb{R}^n$ (we shall here consider the cases $n=2$ or $n=3$) a distribution on the product $\\mathbb{R}^n \\times \\mathbb{S}^{n-1}$. In other words, a varifold $\\mu$ is by definition a linear form over a certain space $W$ of smooth functions on $\\mathbb{R}^n \\times \\mathbb{S}^{n-1}$, whose evaluation we shall write as $\\mu(\\omega)$ for any test function $\\omega \\in W$. In all that follows, we shall restrict our focus to 'discrete' shapes and varifolds, leaving aside the analysis of the corresponding continuous models. By discrete varifold, we mean specifically that $\\mu$ writes as a finite combination of Dirac masses $\\mu=\\sum_{i=1}^{P} r_i \\delta_{(x_i,d_i)}$ with $r_i>0$, $(x_i,d_i) \\in \\mathbb{R}^n \\times \\mathbb{S}^{n-1}$ for all $i$, in which case $\\mu(\\omega) = \\sum_{i=1}^{P} r_i \\omega(x_i,d_i)$ for all $\\omega$. Such a $\\mu$ can be thought of as a set of unit direction vectors $d_i$ located at positions $x_i$ with weights (or masses) equal to the $r_i$'s. We assume by convention that the $(x_i,d_i)$ are distinct, but not necessarily that all the positions $x_i$ are: in other words, in our model, there can be more than a single direction vector attached to each position. In the rest of the paper, we will denote by $\\mathcal{D}$ the set of all discrete varifolds. Note that in this representation and unlike the cases of landmarks and vector fields, the particles are unlabelled, i.e., the varifold $\\mu$ is invariant to any permutation of the $(x_i,d_i)$.
One particular subset of interest that we shall denote $\\mathring{\\mathcal{D}} \\subset \\mathcal{D}$ is the space of discrete varifolds with distinct positions $x_i$ (or equivalently, the discrete varifolds that carry a single direction vector per point position). \n\n\\begin{figure}\n \\centering\n \\begin{tabular}{ccc}\n \\includegraphics[width=4cm]{curve_var_discrete.jpg} & & \\includegraphics[width=4.5cm]{surface_var_discrete.png} \\\\\n (a) & & (b) \\\\\n \\includegraphics[width=4cm,height=4cm]{subject_Pasteur5.png} & & \\includegraphics[width=4.5cm]{HARDI_peaks.png} \\\\\n (c) & & (d)\n \\end{tabular}\n \\caption{Some examples of data representable by discrete varifolds: (a) Piecewise linear curve. (b) Triangulated surface. (c) A set of cells' mitosis directions measured inside a mouse embryonic heart membrane (cf. \\cite{Ragni2017}). (d) Peak diffusion directions extracted from a slice of High Angular Resolution Diffusion Imaging phantom data; note the presence of multiple directions at certain locations corresponding to fiber crossing.}\n \\label{fig:discr_shape_var}\n\\end{figure}\n\nThe relationship between shapes and varifolds relies on the fact that discrete shapes, namely curve or surface meshes, can be naturally approximated by varifolds of the previous form. As explained in more detail in the aforementioned references, this is done by associating to any cell of the discrete mesh (i.e., a segment for curves or a triangular face for surfaces) the weighted Dirac $r_i \\delta_{(x_i,d_i)}$ as illustrated in Figure \\ref{fig:discr_shape_var}. In that expression, $x_i$ denotes the coordinates of the center of the cell, $r_i$ its total length or area, and $d_i$ a unit tangent (for curves) or normal (for surfaces) vector representing the direction of its tangent space. This results in a mapping $S \\mapsto \\mu_{S}$ that associates to any discrete shape $S$ the discrete varifold $\\mu_S = \\sum_{i=1}^{F} r_i \\delta_{(x_i,d_i)} \\in \\mathring{\\mathcal{D}}$ obtained as the sum over all faces $i=1,\\ldots,F$ of the corresponding Diracs. \n\nThe main interest of such a representation is that it gives a convenient setting for the definition of shape similarities that are easy to compute without the need for pointwise correspondences. Assuming, which is quite natural in our context, that $W$ is a Hilbert space and that all Diracs $\\delta_{(x,d)}$ for $(x,d) \\in \\mathbb{R}^n \\times \\mathbb{S}^{n-1}$ belong to the dual, $W$ must then be chosen as a Reproducing Kernel Hilbert Space (RKHS) associated to a smooth positive definite kernel on $\\mathbb{R}^n \\times \\mathbb{S}^{n-1}$. In particular, we will follow the construction proposed in \\cite{Charon2017} and consider separable kernels of the form $k(x,d,x',d')=\\rho(|x-x'|^2) \\gamma(\\langle d , d' \\rangle)$ where $\\rho$ and $\\gamma$ define positive definite kernel functions respectively on the particle positions and on the angles between their orientation vectors. The reproducing kernel metric on $W$ then gives a dual metric on varifolds that explicitly writes, for $\\mu=\\sum_{i=1}^{P} r_i \\delta_{(x_i,d_i)}$:\n\\begin{equation}\n\\label{eq:metric_var}\n \\|\\mu\\|_{W^*}^2 = \\sum_{i,j} r_i r_j \\rho(|x_i-x_j|^2) \\gamma(\\langle d_i, d_j \\rangle)\n\\end{equation}\n\nSuch metrics on $W^*$ are determined by the choice of the positive definite functions $\\rho$ and $\\gamma$ and provide a global measure of proximity between two discrete varifolds.
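\nTo make \\eqref{eq:metric_var} concrete, the following minimal sketch (in \\textit{numpy}, with an illustrative Gaussian choice for $\\rho$ and linear choice for $\\gamma$, not necessarily the kernels used in our experiments) evaluates $\\|\\mu-\\mu'\\|_{W^*}^2$ for two discrete varifolds given as arrays of positions, unit directions and weights:\n\\begin{verbatim}
import numpy as np

def varifold_inner(X1, D1, R1, X2, D2, R2, sigma=0.5):
    # <mu, mu'>_{W*} with k(x,d,x',d') = exp(-|x-x'|^2 / sigma^2) <d, d'>
    sq_dist = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.sum(R1[:, None] * R2[None, :]
                  * np.exp(-sq_dist / sigma ** 2) * (D1 @ D2.T))

def varifold_sq_dist(X1, D1, R1, X2, D2, R2, sigma=0.5):
    # ||mu - mu'||^2 = <mu,mu> - 2 <mu,mu'> + <mu',mu'>
    return (varifold_inner(X1, D1, R1, X1, D1, R1, sigma)
            - 2.0 * varifold_inner(X1, D1, R1, X2, D2, R2, sigma)
            + varifold_inner(X2, D2, R2, X2, D2, R2, sigma))
\\end{verbatim}\nReplacing the orientation factor \\texttt{D1 @ D2.T} by its elementwise square would give an instance of the orientation-invariant (unoriented) case discussed further below.\n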
One important advantage for applications to e.g. registration is that the computation of a distance $\\|\\mu-\\mu'\\|_{W^*}^2$ between two distributions does not require finding correspondences between their masses but instead reduces numerically to a quadratic number of kernel evaluations. The gradients of the metric with respect to the $x_i$'s and $d_i$'s are also very easy to obtain by direct differentiation of \\eqref{eq:metric_var}. Finally, we note that the expression in \\eqref{eq:metric_var} is also invariant to the action of the group of rigid motions. Namely, for any rotation matrix $R$, translation vector $h$ and the group action $(R,h)\\cdot \\mu \\doteq \\sum_{i=1}^{P} r_i \\delta_{(Rx_i+h,Rd_i)}$, one has $\\|(R,h)\\cdot \\mu \\|_{W^*} = \\|\\mu\\|_{W^*}$. \n\nIn all generality however, \\eqref{eq:metric_var} may only yield a pseudo-metric on the set of discrete varifolds $\\mathcal{D}$ since the inclusion mapping $\\mathcal{D} \\rightarrow W^*$ is not necessarily injective. A necessary and sufficient condition is: \n\\begin{prop}\n\\label{prop:var_metric1}\n The metric $\\|\\cdot\\|_{W^*}$ on $W^*$ induces a metric on $\\mathcal{D}$ if and only if $k$ is a strictly positive definite kernel on $\\mathbb{R}^n \\times \\mathbb{S}^{n-1}$. \n\\end{prop}\nThe proof follows immediately from the definition of a strictly positive definite kernel. This condition holds in particular if both kernels defined by $\\rho$ and $\\gamma$ are strictly positive definite. In the case of $\\mathring{\\mathcal{D}}$, one can provide different sufficient conditions which are often more convenient to satisfy in practice. These involve a density property on kernels called $C_0$-universality, cf. \\cite{Carmeli2010}. A kernel on $\\mathbb{R}^n$ is said to be $C_0$-universal if the associated RKHS is dense in $C_0(\\mathbb{R}^n,\\mathbb{R})$. Then one has the following:\n\\begin{prop}\n\\label{prop:var_metric2}\n If the kernel defined by $\\rho$ is $C_0$-universal, $\\gamma(1)>0$ and $\\gamma(u) < \\gamma(1)$ for all $u \\in [-1,1)$, then $\\|\\cdot\\|_{W^*}$ induces a metric on $\\mathring{\\mathcal{D}}$. \n\\end{prop}\n\\begin{proof}\nLet $W_{pos}$ and $W_{or}$ be the RKHS associated to $\\rho$ and $\\gamma$. By contradiction, suppose that $\\mu, \\mu' \\in \\mathring{\\mathcal{D}}$ with $\\|\\mu-\\mu'\\|_{W^*}=0$ and $\\mu \\neq \\mu'$ in $\\mathring{\\mathcal{D}}$. We can write $\\mu,\\mu'$ in the following form:\n\\begin{align*}\n \\mu = \\sum_{i=1}^N r_i \\delta_{(z_i,d_i)}, \\ \\ \\mu' = \\sum_{i=1}^N r_i' \\delta_{(z_i,d_i')},\n\\end{align*}\nwhere $\\{z_i\\}$, with $z_i$ all distinct, is the union of point positions from both distributions and $\\max\\limits_{1 \\leq i \\leq N} \\{r_i,r_i' \\}>0$, $\\min\\limits_{1 \\leq i \\leq N} \\{r_i,r_i' \\} \\geq 0$. Since $\\mu$ and $\\mu'$ are distinct in $\\mathring{\\mathcal{D}}$, there is some $i_0$ such that $(d_{i_0},r_{i_0}) \\neq (d_{i_0}',r_{i_0}')$. Without loss of generality, we may assume $r_{i_0} \\geq r_{i_0}'$. Let $g(\\cdot) = \\gamma(\\langle d_{i_0}, \\cdot \\rangle) \\in W_{or}$ and choose $f \\in C_0(\\mathbb{R}^n,\\mathbb{R})$ satisfying $f(z_{i_0}) =1$ and $f(z_i)=0$ for all $i \\neq i_0$. Since the kernel defined by $\\rho$ is $C_0$-universal, there exists $\\{f_n\\} \\subset W_{pos}$ such that $f_n \\rightarrow f$ uniformly.
As $f_n \\otimes g \\in W$, we have that\n\\begin{equation*}\n 0 = (\\mu-\\mu'|f_n \\otimes g) = \\sum_{i=1}^N f_n(z_i) (r_i g(d_i) -r_i' g(d_i'))\n\\end{equation*}\nTaking the limit $n \\rightarrow +\\infty$, this gives:\n\\begin{equation}\n\\label{eq:proof_prop2}\n 0 = f(z_{i_0}) (r_{i_0} g(d_{i_0}) -r_{i_0}' g(d_{i_0}')) = \\underbrace{r_{i_0} \\gamma(1) - r_{i_0}' \\gamma(\\langle d_{i_0}, d_{i_0}'\\rangle)}_{A} .\n\\end{equation}\nSince $(d_{i_0},r_{i_0}) \\neq (d_{i_0}',r_{i_0}'), \\ r_{i_0} \\geq r_{i_0}' \\textrm{ and } r_{i_0}>0$, we have either $d_{i_0}\\neq d_{i_0}'$ and then $A \\geq r_{i_0}(\\gamma(1) -\\gamma(\\langle d_{i_0}, d_{i_0}'\\rangle))>0$ or $d_{i_0} = d_{i_0}'$ and $r_{i_0}> r_{i_0}'$ in which case $A = (r_{i_0} - r_{i_0}') \\gamma(1)>0$. In either case the right hand side of \\eqref{eq:proof_prop2} is positive, which is a contradiction.\n\\end{proof}\nNote that the $C_0$-universality assumption still implies that the kernel defined by $\\rho$ is strictly positive definite. However, the assumptions on $\\gamma$ are typically less restrictive than in Proposition \\ref{prop:var_metric1}. \n\nA last subclass of varifold metrics that shall be of interest in this paper is the case of orientation-invariant kernels, which amounts to choosing an even function $\\gamma$ in the kernel definition. This, indeed, leads to a space $W^*$ and metric $\\|\\cdot\\|_{W^*}$ for which Diracs $\\delta_{(x,d)}$ and $\\delta_{(x,-d)}$ are equal in $W^*$ for any $(x,d) \\in \\mathbb{R}^n \\times \\mathbb{S}^{n-1}$. In other words, elements of $\\mathcal{D}$ can be equivalently viewed as \\textit{unoriented} varifolds, i.e., distributions on the product of $\\mathbb{R}^n$ and the projective space of $\\mathbb{R}^n$, similarly to the framework of \\cite{Charon2}. In that particular situation, one obtains an induced distance under the conditions stated in the following proposition, whose proof is a straightforward adaptation of that of Proposition \\ref{prop:var_metric2}. \n\\begin{prop}\n\\label{prop:var_metric3}\n If the kernel defined by $\\rho$ is $C_0$-universal, $\\gamma$ is an even function with $\\gamma(1)>0$ and $\\gamma(u) < \\gamma(1)$ for all $u \\in (-1,1)$, then $\\|\\cdot\\|_{W^*}$ induces a metric on the space $\\mathring{\\mathcal{D}}$ modulo the orientation. \n\\end{prop}\n\nIn Section \\ref{sec:results} below, we will discuss more thoroughly and illustrate the effects of those kernel properties on the solutions to registration problems for different cases of discrete distributions. \n\n\n\n\\section{Optimal diffeomorphic mapping of varifolds}\n\\label{sec:optimal_control}\nIt is essential to point out that the notion of varifold presented above encompasses curves and surfaces but is also more general, as it allows the modeling of more complex geometric structures like objects carrying multiple orientation vectors at a given position. In contrast with most previous works on diffeomorphic registration that only involve varifolds as an intermediary representation to compute fidelity terms between shapes, the purpose of this paper is to derive a deformation model and registration framework on the space $\\mathcal{D}$ itself. \n\n\\subsection{Group action}\n\\label{ssec:group_action}\nA first key element is to express the way that deformations 'act' on discrete varifolds. Considering a smooth diffeomorphism $\\phi \\in \\text{Diff}(\\mathbb{R}^n)$, we first intend to express how $\\phi$ should transport a Dirac $\\delta_{(x,d)}$.
There is however not a canonical way to define it as the nature of the underlying data affects the deformation model itself. An important distinction to be made is on the interpretation of direction vectors $d$, whether they correspond for instance to a unit tangent direction to a curve or a surface in which case $d$ is transported by the Jacobian of $\\phi$ as $D_x \\phi(d)\/|D_x \\phi(d)|$ or rather to a normal direction which instead requires a transport model involving the inverse of the transposed Jacobian i.e $(D_x \\phi)^{-T}(d)\/|(D_x \\phi)^{-T}(d)|$ (see \\cite{Younes} chap. 10 for more thorough discussion). To keep notations more compact, we will write $D\\phi \\cdot d$ for a given generic action of $D\\phi$ on $\\mathbb{R}^{n}$ on either tangent or normal vector and $\\overline{D\\phi \\cdot d}$ for the corresponding normalized vector in $\\mathbb{S}^{n-1}$. That being said, we will also consider two distinct models for the action:\n\\begin{itemize}\n \\item[$\\bullet$] $\\phi_{*} \\delta_{(x,d)} \\doteq \\delta_{(\\phi(x),\\overline{D\\phi\\cdot d})}$ (\\textit{normalized action}): this corresponds to transporting the Dirac mass at the new position $\\phi(x)$ and transforming the orientation vector as $\\overline{D\\phi \\cdot d}$. \n \n \\item[$\\bullet$] $\\phi_{\\#} \\delta_{(x,d)} \\doteq |D\\phi\\cdot d| \\delta_{(\\phi(x),\\overline{D\\phi\\cdot d})}$ (\\textit{pushforward action}): the position and orientation vector are transported as previously but with a reweighting factor equal to the norm of $D\\phi\\cdot d$.\n\\end{itemize}\nIt is then straightforward to extend both of these definitions by linearity to any discrete varifold in $\\mathcal{D}$. In both cases, we obtain a group action of diffeomorphisms on the set of discrete varifolds. However, these actions are clearly not equivalent. The normalized action operates as a pure transport of mass and rotation of the direction vector whereas the pushforward model adds a weight change corresponding to the Jacobian of $\\phi$ along the direction $d$. This is a necessary term in the situation where $\\mu=\\mu_{S}$ is representing a discrete oriented curve or surface. Indeed, one can check, up to discretization errors, that under the pushforward model, we have $\\phi_{\\#} \\mu_{S} = \\mu_{\\phi(S)}$; in other words the action is compatible with the usual deformation of a shape. In the result section below, we will show examples of matching based on those different group action models. \n\nAlthough we will be focusing on special subgroups of diffeomorphisms in the next section, it will be insightful to study a little more closely the orbits of discrete varifolds under the normalized and pushforward actions of the full group $\\text{Diff}(\\mathbb{R}^n)$ (or similarly the equivalence classes $\\mathcal{D}\/\\text{Diff}(\\mathbb{R}^n)$). Let $\\mu \\in \\mathcal{D}$ which we can write as $\\mu = \\sum_{i=1}^{N} \\sum_{j=1}^{n_i} r_{i,j} \\delta_{(x_{i},d_{i,j})}$ where the $x_i$ are here assumed to be distinct positions and for each $i=1,\\ldots,N$, the $(d_{i,j})_{j=1,\\ldots,n_i}$ are distinct in $\\mathbb{S}^{n-1}$. While it is well-known that $\\text{Diff}(\\mathbb{R}^n)$ acts transitively on the set of point clouds of $N$ points in $\\mathbb{R}^n$ (as $n\\geq2$), this may no longer hold when one or several direction vectors are attached to each point position. 
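Before analyzing these orbits, we note that both actions are elementary to implement once the Jacobian of $\\phi$ is available. The following Python sketch is only an illustration (with hypothetical function names, and using the tangent-vector convention $D\\phi\\cdot d = D_x\\phi(d)$; for normal vectors one would use the inverse transposed Jacobian instead, as discussed above); it is not the implementation described later in the paper. \n\\begin{verbatim}\nimport numpy as np\n\n# a discrete varifold is stored as positions x (P,n), unit directions d (P,n)\n# and weights r (P,); phi maps R^n to R^n, jac_phi(x) returns its Jacobian at x\ndef normalized_action(x, d, r, phi, jac_phi):\n    # phi_* mu: move the positions, rotate the directions, keep the weights\n    y = np.array([phi(xi) for xi in x])\n    Dd = np.array([jac_phi(xi).dot(di) for xi, di in zip(x, d)])\n    return y, Dd \/ np.linalg.norm(Dd, axis=1, keepdims=True), r.copy()\n\ndef pushforward_action(x, d, r, phi, jac_phi):\n    # phi_# mu: same transport, but the weights are rescaled by |D phi . d|\n    y = np.array([phi(xi) for xi in x])\n    Dd = np.array([jac_phi(xi).dot(di) for xi, di in zip(x, d)])\n    nrm = np.linalg.norm(Dd, axis=1)\n    return y, Dd \/ nrm[:, None], r * nrm\n\n# toy check with a linear map: the direction is unchanged, the weight doubles\nA = np.array([[2.0, 0.0], [0.0, 1.0]])\nx = np.array([[0.0, 0.0]]); d = np.array([[1.0, 0.0]]); r = np.array([1.0])\nprint(pushforward_action(x, d, r, lambda p: A.dot(p), lambda p: A))\n\\end{verbatim}\n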
\n\nIn the case of the normalized action, we have $\\phi_{*} \\mu = \\sum_{i=1}^{N} \\sum_{j=1}^{n_i} r_{i,j} \\delta_{(\\phi(x_i),\\overline{D\\phi\\cdot d_{i,j}})}$. We see that the orbit of $\\mu$ is then given by: \n\\begin{equation*}\n \\text{Diff}_{*}\\mu = \\left \\{\\sum_{i=1}^{N} \\sum_{j=1}^{n_i} r_{i,j} \\delta_{(y_i,u_{i,j})} \\ s.t \\ y_i \\neq y_j \\ \\text{for} \\ i \\neq j, \\ \\exists A_1,\\ldots,A_{N} \\in \\text{GL}(\\mathbb{R}^n), \\ u_{i,j}= \\frac{A_i d_{i,j}}{|A_i d_{i,j}|} \\right \\} \n\\end{equation*}\nThis is essentially the set of all discrete varifolds with any set of $N$ distinct positions and for each $i$, a set of $n_i$ directions obtained by a linear transformation of the $\\{d_{i,j}\\}_{j=1,\\ldots,n_i}$ with weights $r_{i,j}$ unchanged. In particular, this imposes some constraints on the set of 'attainable' direction vectors: clearly, if the number of direction vectors at a given position exceeds the dimension i.e $n_i \\geq n$, this system of vectors cannot be mapped in general to any other system of $n_i$ vectors on the sphere by a single linear map. If we assume that the system of vectors at each position $x_i$ forms a frame, i.e that for all $i$, $n_i \\leq n$ and the direction vectors $d_{i,j}$ for $j=1,\\ldots,n_i$ are independent, then we see that the orbit of $\\mu$ is given by the set of all discrete varifolds of the form $\\sum_{i=1}^{N} \\sum_{j=1}^{n_i} r_{i,j} \\delta_{(y_i,u_{i,j})}$ with distinct $y_i$'s and $(u_{i,j})$ in $\\mathbb{S}^{n-1}$ such that the $(u_{i,j})_{j=1,\\ldots,n_i}$ are independent for all $i$. In the special case of $n_i=1$ for all $i$, that is $\\mu \\in \\mathring{\\mathcal{D}}$, the orbits are then entirely determined by the set of weights $r_i$ which gives the identification of $\\mathring{\\mathcal{D}}\/\\text{Diff}(\\mathbb{R}^n)$ with ordered finite sets of positive numbers. \n\nWith the pushforward action, we have $\\phi_{\\#} \\mu = \\sum_{i=1}^{N} \\sum_{j=1}^{n_i} |D\\phi\\cdot d_{i,j}| r_{i,j} \\delta_{(\\phi(x_i),\\overline{D\\phi\\cdot d_{i,j}})}$ and the orbit writes:\n\\begin{align*}\n \\text{Diff}_{\\#}\\mu = \\Big\\{ \\nu \\in \\mathcal{D} \\ \\ &s.t \\ \\ \\exists (y_i) \\in (\\mathbb{R}^n)^N, \\ y_i \\neq y_j \\ \\text{for} \\ i \\neq j, \\exists A_1,\\ldots,A_{N} \\in \\text{GL}(\\mathbb{R}^n), \\\\ \n &\\nu=\\sum_{i=1}^{N} \\sum_{j=1}^{n_i} |A_i d_{i,j}| r_{i,j} \\delta_{(y_i,u_{i,j})} \\ \\ \\text{with} \\ u_{i,j}= \\frac{A_i d_{i,j}}{|A_i d_{i,j}|} \\Big\\} \n\\end{align*}\nIn the general situation, there is again no simple characterization of the orbit. With the additional assumptions that $n_i \\leq n$ and the $(d_{i,j})_{j=1,\\ldots,n_i}$ are independent vectors for each $i$, the orbit of $\\mu$ is the set of all discrete varifolds of the form $\\sum_{i=1}^{N} \\sum_{j=1}^{n_i} s_{i,j} \\delta_{(y_i,u_{i,j})}$ with any choice of distinct points $y_i$, direction vectors $(d_{i,j})$ in $\\mathbb{S}^{n-1}$ such that the $(d_{i,j})_{j}$ are independent and weights $s_{i,j} > 0$. In particular, the action of $\\text{Diff}(\\mathbb{R}^n)$ in the pushforward model is transitive on all subsets of $\\mathring{\\mathcal{D}}$ with fixed $N$, which implies that the equivalence classes of $\\mathring{\\mathcal{D}}\/\\text{Diff}(\\mathbb{R}^n)$ in that case are only determined by the number of Diracs in the discrete varifold, as we would expect. 
\n\nThe previous discussion thus shows that for both models and unlike the more standard cases of landmarks or discrete vector fields, the action of diffeomorphisms on discrete varifolds is in general not transitive. It is therefore necessary to formulate the registration problems in their inexact form by introducing fidelity terms like the kernel metrics introduced in Section \\ref{sec:discrete_varifolds}. \n\n\\subsection{Optimal control problem}\n\\label{ssec:optimal_control}\nWith the definitions and notations of the previous sections, we can now introduce the mathematical formulation of the diffeomorphic registration of discrete varifolds. As mentioned in the introduction, we will rely on the LDDMM model for generating diffeomorphisms although other transformation spaces and models could be taken as well. In short, we consider a space of time dependent velocity fields $v \\in L^2([0,1],V)$ such that for all $t\\in [0,1]$, $v_t$ belongs to a certain RKHS $V$ of vector fields on $\\mathbb{R}^n$. We will write $K: \\ \\mathbb{R}^n \\times \\mathbb{R}^n \\rightarrow \\mathbb{R}^n$ the vector-valued reproducing kernel of $V$. From $v$, one obtains the flow mapping $\\phi^v_t$ at each time $t$ as the integral of the differential equation $\\partial_t \\phi^v_t = v_t \\circ \\phi^v_t$ with $\\phi_0 = \\text{Id}$. We then define our deformation group as the set of all flow maps $\\phi_1^v$ for all velocity fields $v \\in L^2([0,1],V)$. With the adequate assumptions on the kernel of $V$, this is a subgroup of the group of diffeomorphisms of $\\mathbb{R}^n$ and it is naturally equipped with the metric given by $\\int_{0}^{1} \\|v_t\\|_V^2 dt$, cf \\cite{Younes} for a detailed exposition of the LDDMM framework. \n\nNow, let's consider two discrete varifolds $\\mu_0 = \\sum_{i=1}^{P} r_{i,0} \\delta_{(x_{i,0},d_{i,0})}$ (template) and $\\tilde{\\mu} = \\sum_{j=1}^{Q} \\tilde{r}_{j} \\delta_{(\\tilde{x}_{j},\\tilde{d}_{j})}$ (target). We formulate the inexact matching problem between $\\mu_0$ and $\\tilde{\\mu}$ as follows:\n\\begin{equation}\n\\label{eq:matching_var}\n\\text{argmin}_{v \\in L^2([0,1],V)} \\left\\{ E(v) = \\int_{0}^{1} \\|v_t\\|_V^2 dt + \\lambda \\|\\mu(1) - \\tilde{\\mu} \\|_{W^*}^2 \\right\\}\n\\end{equation}\nsubject to either $\\mu(t) \\doteq (\\phi_t^v)_{*} \\mu_0$ in the normalized action scenario or $\\mu(t) \\doteq (\\phi_t^v)_{\\#} \\mu_0$ for the pushforward model, and $\\lambda$ being a weight parameter between the regularization and fidelity terms in the energy. This is easily interpreted as an optimal control problem in which the state variable is the transported varifold $\\mu(t)$, the control is the velocity field $v$ and the cost functional is the sum of the standard LDDMM regularization term on the deformation and a discrepancy term between $\\mu(1)$ and the target given by a varifold kernel metric as in \\eqref{eq:metric_var}. Those optimal control problems are well-posed in the following sense:\n\\begin{prop}\nIf $V$ is continuously embedded in the space $C^2_0(\\mathbb{R}^n,\\mathbb{R}^n)$, or equivalently if $K$ is of class $C^2$ with all derivatives up to order 2 vanishing at infinity, then there exists a global minimum to the problem \\eqref{eq:matching_var}. \n\\end{prop}\n\\begin{proof}\nThe result follows from an argument similar to that of the existence of minimizers in usual LDDMM registration problems. 
If $(v^{n})$ is a minimizing sequence in $L^2([0,1],V)$ then thanks the first term of $E$, we may assume that $(v^n)$ is bounded in $L^2([0,1],V)$ and therefore that, up to extracting a subsequence, $v^n \\rightharpoonup v^*$ weakly in $L^2([0,1],V)$. It then follows from the results of \\cite{Younes} (Chapter 8.2) that the sequence of diffeomorphisms $(\\phi_1^{v^n})$ and their first-order differentials $(d\\phi_1^{v^n})$ converge uniformly on every compact respectively to $\\phi_{1}^{v^*}$ and $d\\phi_{1}^{v^*}$. In particular, for all $i=1,\\ldots,P$, $\\phi_{1}^{v^n}(x_i) \\rightarrow \\phi_{1}^{v^*}(x_i)$ and $d\\phi_{1}^{v^n}(x_i) \\rightarrow d\\phi_{1}^{v^*}(x_i)$. Then, from the expressions of the group actions and the metric \\eqref{eq:metric_var}, we obtain that either $\\|(\\phi_1^{v^n})_{*} \\mu_0 - \\tilde{\\mu} \\|_{W^*}^2 \\xrightarrow[n\\rightarrow \\infty]{} \\|(\\phi_1^{v^*})_{*} \\mu_0 - \\tilde{\\mu} \\|_{W^*}^2$ or $\\|(\\phi_1^{v^n})_{\\#} \\mu_0 - \\tilde{\\mu} \\|_{W^*}^2 \\xrightarrow[n\\rightarrow \\infty]{} \\|(\\phi_1^{v^*})_{\\#} \\mu_0 - \\tilde{\\mu} \\|_{W^*}^2$. Finally, using the weak lower semicontinuity of the norm in $L^2([0,1],V)$, it gives in both cases:\n\\begin{equation*}\n E(v^*) \\leq \\lim \\inf_{n \\rightarrow \\infty} E(v^n)\n\\end{equation*}\nand consequently $v^*$ is a global minimizer of $E$. \n\\end{proof}\n\n\n\\subsection{Hamiltonian dynamics}\n\\label{ssec:hamiltonian_dynamics}\nBy fixing the final time condition $\\mu(1)$ and minimizing $\\int_{0}^{1} \\|v_t\\|_V^2 dt$ with those boundary constraints, the resulting path $t \\mapsto \\mu(t)$ corresponds to a \\textit{geodesic} in $\\mathcal{D}$ for the metric induced by the metric on the deformation group. We can further characterize those geodesics as solutions of a Hamiltonian system. For that purpose, we follow the general setting developed in \\cite{arguillere14:_shape} for similar optimal control problems. \n\nIn our situation, we can describe the state $\\mu(t)$ as a set of $P$ particles each given by the triplet $(x_i(t),d_i(t),r_i(t)) \\in \\mathbb{R}^n \\times \\mathbb{S}^{n-1} \\times \\mathbb{R}_{+}^{*}$ representing its position, orientation vector and weight. From \\ref{ssec:group_action}, we have that $x_i(t) = \\phi_t^v(x_{i,0})$, $d_i(t) = \\overline{D\\phi^v_t\\cdot d_{i,0}}$ and $r_i(t) = r_{i,0}$ for the normalized action and $r_i(t)=|D\\phi^v_t\\cdot d_{i,0}| r_{i,0}$ in the pushforward case. Differentiating with respect to $t$, the state evolution may be alternatively described by the set of ODEs \n$$\n\\left \\{\n\\begin{array}[h]{l}\n\\dot{x}_i (t)= v_t(x_i(t)) \\\\\n\\dot{d}_i (t) = P_{d_i(t)^{\\bot}}(D v_t\\cdot d_{i}(t)) \\\\\n\\dot{r}_i (t) = \\left\\{ \n\\begin{array}[h]{l}\n0 \\ \\ \\text{(normalized)} \\\\\n\\langle d_i(t), D v_t\\cdot d_{i}(t) \\rangle r_i(t) \\ \\ \\text{(pushforward)}\n\\end{array}\n\\right.\n\\end{array}\n \\right.\n$$\n \n\\noindent where $P_{d_i(t)^\\bot}$ denotes the orthogonal projection on the subspace orthogonal to $d_i(t)$, $D v_t \\cdot d_i(t)$ corresponds to the infinitesimal variation of the action of $D\\phi$ on vectors of $\\mathbb{R}^n$ introduced in \\ref{ssec:group_action}: it is given specifically by $D v_t \\cdot d_i(t) = D_{x_i(t)} v_t (d_i(t))$ in the tangent case and $D v_t \\cdot d_i(t) = -(D_{x_i(t)} v_t)^{T}(d_i(t))$ in the normal case. Note that other choices of transformation of the weights could be treated quite similarly by modifying accordingly the last equation in the previous system. 
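For the reader's convenience, here is a quick derivation of these equations in the tangent case (the normal case is identical with $D_{x}v_t$ replaced by $-(D_{x}v_t)^T$). Writing $w_i(t) = D_{x_{i,0}}\\phi^v_t(d_{i,0})$, differentiation of the flow equation gives $\\dot{w}_i(t) = D_{x_i(t)} v_t(w_i(t))$ and thus, since $d_i(t) = w_i(t)\/|w_i(t)|$,\n\\begin{align*}\n\\dot{d}_i(t) &= \\frac{\\dot{w}_i(t)}{|w_i(t)|} - \\frac{\\langle w_i(t),\\dot{w}_i(t)\\rangle}{|w_i(t)|^3}\\, w_i(t) \\\\\n &= D_{x_i(t)}v_t(d_i(t)) - \\langle d_i(t), D_{x_i(t)}v_t(d_i(t))\\rangle\\, d_i(t) = P_{d_i(t)^{\\bot}}(D v_t\\cdot d_i(t)).\n\\end{align*}\nIn the pushforward case, $r_i(t) = |w_i(t)|\\, r_{i,0}$ and $\\frac{d}{dt}|w_i(t)| = \\langle d_i(t),\\dot{w}_i(t)\\rangle = |w_i(t)|\\,\\langle d_i(t), D v_t\\cdot d_i(t)\\rangle$, which gives the last equation. \n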
In what follows, we detail the derivations of the optimality equations in the case of tangent direction vectors for both normalized and pushforward group action models, the situation of normal vectors being easily tackled in similar fashion. \n\n\\subsubsection{Normalized action}\nIn the case of normalized action, $\\{r_i(t)\\}_{i=1}^P$ are time independent as previous discussed. So we can choose the state variable of the optimal control problem to be $q:= \\{(x_i,d_i), \\ i =1,\\cdots,P\\} \\in \\mathbb{R}^{2dP}$ with the infinitesimal action\n\\begin{align*}\n\\xi_q v = \\left\\{\\left( v(x_i),P_{d_i^{\\perp}}(D_{x_i}v(d_i))\\right),\\ i=1,\\cdots,P\\right\\}\n\\end{align*} \nand introduce the Hamiltonian\n\n\\begin{align*}\nH(p,q,v) &= (p|\\xi_qv) -\\frac{1}{2}\\|v\\|^2_V \\\\\n &=\\sum_{i=1}^N \\langle p_i^{(1)}, v(x_i) \\rangle + \\langle P_{d_i^{\\perp}}(p_i^{(2)}), D_{x_i}v(d_i) \\rangle -\\frac{1}{2}\\|v\\|^2_V,\n\\end{align*}\nwhere \n\\begin{align*}\np= \\left\\{\\left(p_i^{(1)},p_i^{(2)}\\right), \\ i=1,\\cdots,P \\right\\} \\in \\mathbb{R}^{2dP}\n\\end{align*}\nis the adjoint variable of state $q$. We call $p_i^{(1)}$ the \\textit{spatial momentum} and $p_i^{(2)}$ the \\textit{directional momentum}. From Pontryagin's maximum principle, the Hamiltonian dynamics is given by the forward system of equations\n\\begin{align}\n\\left\\{\\begin{array}{ll}\n\\dot{x}_i(t) &= v_t(x_i(t)) \\\\\n\\dot{d}_i(t) &= P_{d_i(t)^{\\perp}}(D_{x_i(t)}v_t(d_i(t))) \\\\\n\\dot{p}_i^{(1)}(t) &= -(D_{x_i(t)}v_t)^T p_i^{(1)}(t) \\\\\n &-(D_{x_i(t)}^{(2)}v_t(\\cdot,d_i(t)))^T\\left( P_{d_i(t)^{\\perp}}(p_i^{(2)}(t))\\right) \\\\\n\\dot{p}_i^{(2)}(t) &= -(D_{x_i(t)} v_t)^T P_{d_i(t)^{\\perp}}(p_i^{(2)}(t)) \\\\\n &+ \\langle d_i(t),p_i^{(2)}(t) \\rangle D_{x_i(t)}v_t(d_i(t)) \\\\\n &+ \\langle d_i(t),D_{x_i(t)}v_t(d_i(t)) \\rangle p_i^{(2)}(t)\n\\end{array}\\right.\n\\label{eq:forward_normalized}\n\\end{align}\nand optimal vector fields $v$ satisfy\n\\begin{align*}\n\\langle v_t, h \\rangle_V &= \\left( p(t),\\xi_{q(t)}h \\right) \\\\\n&= \\sum_{i=1}^P \\langle p_i^{(1)},h(x_i)\\rangle + \\langle P_{d_i(t)^{\\perp}}(p_i^{(2)}(t)),D_{x_i(t)}h(d_i(t))\\rangle, \n\\end{align*}\nfor any $h \\in V$ and $t \\in [0,1]$. The reproducing property and reproducing property for the derivatives in a vector RKHS give \\cite{Sommer2013} that $\\forall x \\in \\mathbb{R}^n, \\ z \\in \\mathbb{R}^n, v \\in V$ and multi-index $\\alpha$,\n\\begin{align*}\n&\\langle K(x,\\cdot)z,v \\rangle_V = (z \\otimes \\delta_x |v) \\\\\n&\\langle D_1^{\\alpha} K(x,\\cdot),v \\rangle_V = z^T D^{\\alpha} v(x).\n\\end{align*}\nWith the above properties, we obtain the following expression of $v$\n\\begin{align}\n\\label{eq:v_normalized}\nv_t(\\cdot) &= \\sum_{k=1}^P K(x_k(t),\\cdot)p_k^{(1)}(t) \\nonumber \\\\\n&+ D_1 K(x_k(t),\\cdot)\\left(d_k(t),P_{d_k(t)^{\\perp}}(p_k^{(2)}(t))\\right).\n\\end{align}\nwhere we use the shortcut notation $D_1 K(x,\\cdot)(u_1,u_2)$ for the vector $D_1 (K(x,\\cdot) u_2)(u_1)$. In Figure \\ref{fig:normalized}, we show an example of geodesic and resulting deformation for a single Dirac varifold, which is obtained as the solution of \\eqref{eq:forward_normalized} with the initial momenta shown in the figure. It illustrates the combined effects of the spatial momentum which displaces the position of the Dirac and of the directional momentum that generates a local rotation of the direction vector. 
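As a concrete illustration of \\eqref{eq:v_normalized}, the field is easily evaluated once a kernel is fixed. The sketch below (an illustration only, not the actual implementation) assumes a Gaussian scalar kernel $K(x,y) = h(|x-y|^2) I_n$ with $h(s)=e^{-s\/\\sigma^2}$, for which $D_1 K(x,\\cdot)(u_1,u_2) = 2\\dot{h}(|x-\\cdot|^2)\\langle x-\\cdot, u_1\\rangle u_2$. \n\\begin{verbatim}\nimport numpy as np\n\ndef h(s, sigma):\n    return np.exp(-s \/ sigma**2)\n\ndef h_dot(s, sigma):\n    return -np.exp(-s \/ sigma**2) \/ sigma**2\n\n# v_t(y) = sum_k h(|x_k-y|^2) p1_k + 2 h'(|x_k-y|^2) <x_k-y,d_k> P_{d_k^perp}(p2_k)\ndef velocity(y, x, d, p1, p2, sigma):\n    v = np.zeros_like(y, dtype=float)\n    for xk, dk, p1k, p2k in zip(x, d, p1, p2):\n        s = np.sum((xk - y)**2)\n        proj = p2k - np.dot(dk, p2k) * dk   # orthogonal projection on d_k^perp\n        v += h(s, sigma) * p1k + 2.0 * h_dot(s, sigma) * np.dot(xk - y, dk) * proj\n    return v\n\\end{verbatim}\n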
\n\n\n\\begin{figure}\n \\centering\n \\begin{tabular}{ccc}\n \\includegraphics[trim = 15mm 15mm 15mm 15mm ,clip,width=4cm]{nrotation1.png} \n &\\includegraphics[trim = 15mm 15mm 15mm 15mm ,clip,width=4cm]{nrotation31.png} \n &\\includegraphics[trim = 15mm 15mm 15mm 15mm ,clip,width=4cm]{nrotation61.png}\\\\\n $t=0$ & $t=1\/2$& $t=1$ \\\\\n \\end{tabular}\n \\caption{Example of geodesic for a single Dirac in the normalized action case.}\n \\label{fig:normalized}\n\\end{figure}\n\n\n\\subsubsection{Pushforward action}\nAs in the previous section, we set the state variable $q := \\{(x_i,d_i,r_i), \\ i=1,\\cdots,P\\} \\in (\\mathbb{R}^n \\times \\mathbb{S}^{n-1} \\times \\mathbb{R}_{+}^{*} )^P$, the infinitesimal action\n\\begin{align*}\n\\xi_q v = \\left\\{ \\left(v(x_i),P_{d_i^{\\perp}} (d_{x_i}v(d_i)), r_i \\langle d_i, d_{x_i}v(d_i) \\rangle \\right), \\ i=1,\\cdots,P \\right\\} \n\\end{align*}\nand the Hamiltonian\n\\begin{align}\\label{fullHam}\nH(p,q,v) = \\sum_{i=1}^P \\left\\langle p_i^{(1)}, v(x_i) \\right\\rangle + \\left\\langle P_{d_i^{\\perp}}(p_i^{(2)}), d_{x_i}v(d_i) \\right\\rangle + p_i^{(3)} r_i \\left\\langle d_i, d_{x_i}v(d_i) \\right\\rangle - \\frac{1}{2} \\|v\\|_V^2,\n\\end{align}\nwhere $p = \\left\\{ \\left( p_i^{(1)}, p_i^{(2)}, p_i^{(3)} \\right), \\ i=1,\\cdots,P \\right\\} \\in \\mathbb{R}^{(2d+1)P}$. Applying again Pontryagin's maximum principle, we obtain the forward system\n\\begin{align} \\label{fullforward}\n\\left\\{\n\\begin{array}{ll}\n\\dot x_i &= v_t(x_i) \\\\\n\\dot d_i &= P_{d_i^{\\perp}} (d_{x_i} v(d_i)) \\\\\n\\dot r_i &= r_i \\left\\langle d_i, d_{x_i} v_t(d_i) \\right\\rangle\\\\\n\\dot p_i^{(1)} &= -(d_{x_i(t)}v_t)^T p_i^{(1)}-(d_{x_i(t)}^{(2)}v(\\cdot,d_i))^T (P_{d_i^{\\perp}}(p_i^{(2)}) ) - p_i^{(3)} r_i d_{x_i}v(\\cdot,d_i)^T d_i\\\\\n\\dot p_i^{(2)} &= -d_{x_i(t)}v_t^T(p_i^{(2)}) + \\left( \\left\\langle d_i,p_i^{(2)}\\right\\rangle - r_i p_i^{(3)} \\right) \\left[ d_{x_i}v_t + d_{x_i}v^T_t \\right] (d_i) \\\\\n &+ \\left\\langle d_i, d_{x_i}v_t(d_i) \\right\\rangle p_i^{(2)} \\\\\n\\dot p_i^{(3)} &= - p_i^{(3)} \\left\\langle d_i, d_{x_i}v_t(d_i) \\right\\rangle\n\\end{array} \\right. \n\\end{align}\nwith optimal vector field of the form\n\\begin{align} \\label{fullvf}\nv_t(x) = \\sum_{k=1}^P K(x_k,x) p_k^{(1)} + D_1K(x_k,x)\\left(d_k,P_{d_k^{\\perp}}(p_k^{(2)})+p_k^{(3)}r_k d_k \\right) .\n\\end{align}\nFrom the forward equations \\eqref{fullforward}, we see that $\\frac{d}{dt} \\langle d_i(t), d_i(t) \\rangle =0$ and $\\frac{d}{dt} r_i(t) p_i^{(3)}(t) =0$, hence $\\|d_i(t)\\|$ and $ r_i(t) p_i^{(3)}(t)$ are constant along geodesic paths. Similarly to the normalized action case, we can use use those conservation properties to reduce the number of state and dual variables as follows. \n\nLet the new state variable be $q = \\{(x_i,u_i), \\ i=1,\\cdots,P\\}$ and the Hamiltonian\n\\begin{align} \\label{reduced_ham}\nH(p,q,v) = \\sum_{i=1}^P \\langle p_i^{(1)}, v(x_i) \\rangle + \\langle p_i^{(2)}, d_{x_i}v(u_i)\\rangle - \\frac{1}{2} \\| v\\|_V^2\n\\end{align}\nThe forward equations and optimal vector field $v$ derived from this Hamiltonian are\n\\begin{align} \\label{reduced_foward}\n\\left\\{\n\\begin{array}{l}\n\\dot x_i(t) = v_t(x_i(t)) \\\\\n\\dot d_i(t) = d_{x_i(t)}v_t(u_i(t)) \\\\\n\\dot p_i^{(1)}(t) = -(d_{x_i(t)}v_t)^T p_i^{(1)}-(d_{x_i(t)}^{(2)}v(\\cdot,u_i))^T p_i^{(2)} \\\\\n\\dot p_i^{(2)}(t) = -(d_{x_i(t)}v_t)^T p_i^{(2)}(t)\n\\end{array} \\right. 
\n\\end{align}\nand \n\\begin{align} \\label{reduced_vf}\nv_t(x) = \\sum_{k=1}^P K(x_k,x)p_k^{(1)}(t) + D_1 K(x_k,x)(u_k(t),p_k^{(2)}(t)). \n\\end{align}\nThen this new system is rigorously equivalent to the original one in the following sense:\n\\begin{prop} \\label{reduced_prop}\nAny solution of \\eqref{reduced_foward} + \\eqref{reduced_vf} is such that $\\left( x_i(t),\\overline{u_i(t)},|u_i(t)| \\right)$ is a solution of \\eqref{fullforward} + \\eqref{fullvf}. Conversely, any solution $\\left(x_i(t),d_i(t),r_i(t) \\right)$ of \\eqref{fullforward} + \\eqref{fullvf} with initial conditions satisfying $\\left\\langle p_i^{(2)}(0), u_i(0) \\right\\rangle = r_i(0) p_i^{(3)}(0)$ gives the solution $\\left(x_i(t),r_i(t)d_i(t)\\right)$ to \\eqref{reduced_foward} + \\eqref{reduced_vf}.\n\\end{prop}\n\n\\begin{figure}\n \\centering\n \\begin{tabular}{ccc}\n \\includegraphics[trim = 12mm 12mm 12mm 12mm ,clip,width=4cm]{pushshrink1.png} \n &\\includegraphics[trim = 12mm 12mm 12mm 12mm ,clip,width=4cm]{pushshrink31.png} \n &\\includegraphics[trim = 12mm 12mm 12mm 12mm ,clip,width=4cm]{pushshrink61.png}\\\\\n $t=0$ & $t=1\/2$& $t=1$ \\\\\n \\includegraphics[trim = 12mm 12mm 12mm 12mm ,clip,width=4cm]{pushstretch1.png} \n &\\includegraphics[trim = 12mm 12mm 12mm 12mm ,clip,width=4cm]{pushstretch31.png} \n &\\includegraphics[trim = 12mm 12mm 12mm 12mm ,clip,width=4cm]{pushstretch61.png}\\\\\n $t=0$ & $t=1\/2$& $t=1$ \n \\end{tabular}\n \\caption{Examples of geodesics in the pushforward action case.}\n \\label{fig:pushforward}\n\\end{figure}\n\nThe proof is given in Appendix. Note that these equations can be also obtained in a more particular case as the geodesic equations on the tangent bundle of the space of landmarks, as derived for instance in \\cite{Arguillere2015} (Section 3.5). In what follows, we will thus replace the system \\eqref{fullforward} by \\eqref{reduced_foward}. \n\n\\begin{remark}\nWe point out that there are other conserved quantities in the previous system. In particular, it's easy to see that $\\left\\langle p_i^{(2)}, d_i \\right\\rangle$ is constant along geodesics since\n\\begin{align*}\n\\frac{d}{dt} \\left\\langle p_i^{(2)}, d_i \\right\\rangle = - \\left\\langle (d_{x_i}v_t)^T p_i^{(2)}, d_i \\right\\rangle + \\left\\langle p_i^{(i)}, d_{x_i} v_t(d_i) \\right\\rangle =0.\n\\end{align*}\n\\end{remark}\nFigure \\ref{fig:pushforward} shows two geodesic trajectories of a single Dirac varifold for different initial momenta. In particular, we can again observe the effect of the directional momentum $p^{(2)}$ on the dynamics and resulting deformations. In addition to similar rotation effects as in the normalized action case, local contraction or expansion can be generated as well, depending precisely on the angle $\\langle p_i^{(2)}, d_i \\rangle$.\n\n\\section{Registration algorithm and implementation}\n\\label{sec:matching_algo}\nWe now turn to the issue of numerically solving the optimization problem \\eqref{eq:matching_var}. We will follow the commonly used method for such problems called \\textit{geodesic shooting} (cf \\cite{Vialard2012b}). Indeed, from the developments of Section \\ref{ssec:hamiltonian_dynamics}, we see that optimizing \\eqref{eq:matching_var} with respect to vector fields $v$ can be done equivalently by restricting to geodesics and thus by optimizing over the initial momenta variables $p^{(1)}_0$ and $p^{(2)}_0$ that completely parametrize those geodesics through the Hamiltonian equations. 
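Before going into the details, the overall procedure can be summarized by the following schematic Python sketch (an illustration only). The callables rhs, H_r and fidelity stand for the Hamiltonian equations, the reduced Hamiltonian and the term $g(q(1))$ made explicit in the next subsections; moreover, purely for illustration, the gradient with respect to the initial momenta is approximated here by naive finite differences of the energy, whereas the algorithm described below relies on the adjoint Hamiltonian system. \n\\begin{verbatim}\nimport numpy as np\n\ndef rk4(rhs, z0, n_steps=20):\n    # classical Runge-Kutta 4 integration of dz\/dt = rhs(t, z) on [0, 1]\n    z, dt = z0.copy(), 1.0 \/ n_steps\n    for k in range(n_steps):\n        t = k * dt\n        k1 = rhs(t, z)\n        k2 = rhs(t + dt \/ 2, z + dt \/ 2 * k1)\n        k3 = rhs(t + dt \/ 2, z + dt \/ 2 * k2)\n        k4 = rhs(t + dt, z + dt * k3)\n        z = z + dt \/ 6 * (k1 + 2 * k2 + 2 * k3 + k4)\n    return z\n\ndef energy(p0, q0, rhs, H_r, fidelity, lam):\n    q1 = rk4(rhs, np.concatenate([q0, p0]))[:q0.size]   # forward shooting\n    return H_r(p0, q0) + lam * fidelity(q1)\n\ndef shoot(p0, q0, rhs, H_r, fidelity, lam, step=1e-2, iters=200, eps=1e-6):\n    p = p0.copy()\n    for _ in range(iters):\n        E0 = energy(p, q0, rhs, H_r, fidelity, lam)\n        grad = np.zeros_like(p)\n        for i in range(p.size):   # crude forward differences on the momenta\n            dp = np.zeros_like(p); dp[i] = eps\n            grad[i] = (energy(p + dp, q0, rhs, H_r, fidelity, lam) - E0) \/ eps\n        p = p - step * grad\n    return p\n\\end{verbatim}\n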
\n\n\\subsection{Computation of $E$}\nLet a template and target discrete varifold be given as in Section \\ref{ssec:optimal_control}. As mentioned above, we can rewrite the energy $E$ as a function of the initial momenta that we will denote $p^{(1)}_0=(p^{(1)}_i(0))$ and $p^{(2)}_0=(p^{(2)}_i(0))$: \n\\begin{equation}\n\\label{eq:energy_discrete}\n E(p^{(1)}_0,p^{(2)}_0) = H_r(p_0,q_0) + \\lambda \\underbrace{\\|\\mu(1) - \\tilde{\\mu} \\|_{W^*}^2}_{:=g(q(1))}\n\\end{equation}\nwhere $q_0$ is the initial state, $\\mu(1)$ is the varifold corresponding to the final time state $q(1)$ with $g(q(1))$ the resulting fidelity term between $\\mu(1)$ and the target varifold, and $H_r$ is the \\textit{reduced Hamiltonian} $H_r(p,q)=H(p,q,v)$ for the optimal $v$ given by \\eqref{eq:v_normalized} or \\eqref{reduced_vf} (note that $H_r(p(t),q(t))$ is conserved along solutions of the Hamiltonian systems thus giving the above expression of the energy). \n\nThe expression of $H_r$ as well as the resulting reduced Hamiltonian equations can be obtained in all generality by plugging the expression of $v$ in the equations of Section \\ref{ssec:optimal_control}. In our implementation, we actually restrict to the more particular case of radial scalar kernels for the vector fields in $V$, i.e we assume that $K(x,y)=h(|x-y|^2) I_n$. Then the reduced Hamiltonian for the normalized case becomes:\n\\begin{equation}\n H_r(p,q) = \\frac{1}{2} \\left\\langle \\overline{K}_q p, p \\right\\rangle,\n\\end{equation}\nwhere $\\overline{K}_q$ is a symmetric positive definite matrix which is defined as follows. Let\n\n\\begin{align*}\n&H =(H)_{ik}= (h_{ki}) \\\\\n&\\overline{A} = (\\overline{A})_{ik} = 2 \\dot h_{ki} \\langle x_k-x_i,d_k \\rangle \\\\\n&\\overline{B} = (\\overline{B})_{ik} = - \\left[ 4\\ddot h_{ik} \\langle x_k-x_i,d_i \\rangle \\langle x_k-x_i, d_k \\rangle + 2 \\dot h_{ki} \\langle d_i, d_k \\rangle \\right]\n\\end{align*}\nwith $h_{ki}$ being a shortcut for $h(|x_i-x_j|^2)$ and \n\n\\begin{align*}\nP_{d^{\\perp}}(\\cdot) =\\left( \\begin{array}{ccc}\nI - d_1 \\cdot d_1^T & &\\textrm{\\huge{0}} \\\\\n & \\ddots & \\\\\n\\textrm{\\huge{0}} & & I - d_N \\cdot d_N^T \n\\end{array}\\right)\n\\end{align*}\nThen we define\n\\begin{align*}\n\\overline{K}_q := \\left( \\begin{array}{cc}\nI_{Pd} &0 \\\\\n0 & P_{d^{\\perp}} \\\\\n\\end{array}\n \\right)^T\n\\left( \\begin{array}{cc}\nH \\otimes I_{Pd} & \\overline{A} \\otimes I_{Pd}\\\\\n(\\overline{A} \\otimes I_{Pd})^T & \\overline{B} \\otimes I_{Pd} \n\\end{array}\n \\right)\n\\left( \\begin{array}{cc}\nI_{Pd} & 0 \\\\\n0 & P_{d^{\\perp}} \\\\\n\\end{array}\n \\right) \n\\end{align*}\n where $\\otimes$ denotes the Kronecker product. For the pushforward action case, we define $H$, $A$ and $B$ as in normalized action case with $d_i$ and $d_k$ replaced by $u_i$ and $u_k$, then\n \\begin{equation}\n H_r(p,q) = \\frac{1}{2} \\left\\langle K_q p, p \\right\\rangle,\n\\end{equation}\nwhere \n\n\\begin{align*}\nK_q := \\left( \\begin{array}{cc}\nH \\otimes I_{Pd} & A \\otimes I_{Pd}\\\\\n(A \\otimes I_{Pd})^T & B \\otimes I_{Pd} \n\\end{array}\n \\right). \n\\end{align*}\n This gives us explicitly the first term of the energy in \\eqref{eq:energy_discrete}.\n\nNow, the time evolution of $q$ and $p$ can be also rewritten equivalently in reduced Hamiltonian form, which expressions are given in full for radial scalar kernels in the Appendix. 
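The assembly of $\\overline{K}_q$ is also direct to code up. The following sketch (again an illustration only) evaluates it for the Gaussian profile $h(s)=e^{-s\/\\sigma^2}$, reading each Kronecker factor above as the $d$-dimensional identity attached to one particle. \n\\begin{verbatim}\nimport numpy as np\n\ndef reduced_kernel_matrix(x, d, sigma):\n    P, n = x.shape\n    diff = x[:, None, :] - x[None, :, :]          # diff[i, k] = x_i - x_k\n    s = np.sum(diff**2, axis=2)\n    h = np.exp(-s \/ sigma**2)\n    h1 = -h \/ sigma**2                            # h'\n    h2 = h \/ sigma**4                             # h''\n    xkxi_dk = np.einsum('ikj,kj->ik', -diff, d)   # <x_k - x_i, d_k>\n    xkxi_di = np.einsum('ikj,ij->ik', -diff, d)   # <x_k - x_i, d_i>\n    H = h\n    A = 2 * h1 * xkxi_dk\n    B = -(4 * h2 * xkxi_di * xkxi_dk + 2 * h1 * d.dot(d.T))\n    In = np.eye(n)\n    M = np.block([[np.kron(H, In), np.kron(A, In)],\n                  [np.kron(A, In).T, np.kron(B, In)]])\n    Pperp = np.zeros((P * n, P * n))\n    for i, di in enumerate(d):                    # block diagonal of I - d_i d_i^T\n        Pperp[i*n:(i+1)*n, i*n:(i+1)*n] = In - np.outer(di, di)\n    S = np.block([[np.eye(P * n), np.zeros((P * n, P * n))],\n                  [np.zeros((P * n, P * n)), Pperp]])\n    return S.T.dot(M).dot(S)\n\n# H_r(p, q) = 0.5 * p.dot(reduced_kernel_matrix(x, d, sigma).dot(p))\n\\end{verbatim}\n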
We numerically integrate those differential systems using an RK4 scheme, which we experienced to be better-adapted to these systems than the simpler Euler midpoint integrator used in \\cite{Charon2017}. Then, given initial momenta $p^{(1)}_0$ and $p^{(2)}_0$, integrating those equations forward in time produces the final state $q(1)$ and its corresponding varifold $\\mu(1)$. It is then straightforward to evaluate the second term in \\eqref{eq:energy_discrete} through the expression of the varifold norm \\eqref{eq:metric_var}; in the pushforward case one only needs to apply the additional intermediate operation of converting state $q(1) = (x_i(1),u_i(1))$ into $(x_i(1),\\overline{u_i}(1),|u_i|(1))$. We will discuss different choices of kernels for the varifold metric in the result section. \n\n\\subsection{Computation of the gradient of $E$}\nThe second element we need is the gradient of the energy with respect to the momenta. The first term being directly a function of $p_0$, it can be differentiated easily and gives the following gradient: \n\\begin{align*}\n \\nabla_{p_0} H_r(p_0,q_0) &= \\overline{K}_q p_0 \\ \\ \\text{(normalized)} \\\\\n \\nabla_{p_0} H_r(p_0,q_0) &= K_q p_0 \\ \\ \\text{(pushforward)}\n\\end{align*}\n\nThe fidelity term $g(q(1))$ in \\eqref{eq:energy_discrete}, however, is a function of the final state $q(1)$ which is in turn a function of the momenta through the Hamiltonian system of equations. The computation of the gradient is therefore more involved due to the complicated dependency of $q(1)$ in $p_0$. The standard approach for optimal control problems of this form (cf for example \\cite{Vialard2012b} or \\cite{arguillere14:_shape}) is to introduce the \\textit{adjoint Hamiltonian system}: \n$$ \\overset{\\bold{\\ldotp}}{Z}(t) = d(\\partial_p H_r , \\partial_q H_r)^{T} Z(t) $$ \nwith $Z = (\\tilde{q}, \\tilde{p})^{T}$ the vector of the adjoint variables. Then, as detailed in \\cite{arguillere14:_shape}, the gradient of $g(q(1))$ with respect to $p_0$ is given by $\\tilde{p}(0)$ where $(\\tilde{q}(t), \\tilde{p}(t))^{T}$ is the solution of the adjoint system integrated backward in time with $\\tilde{q}(1) = \\nabla g(q(1))$ and $\\tilde{p}(1) = 0$. \n\nFor the particular Hamiltonian equations considered here, the adjoint system is tedious to derive and to implement. We simply avoid that by approximating the differentials appearing in the adjoint system by finite difference of the forward Hamiltonian equations, following the suggestion of \\cite{arguillere14:_shape} (Section 4.1) which we refer to for details. Note that another possibility would be to take advantage of automatic differentiation methods, as used recently for some LDDMM registration problems by the authors of \\cite{Kuhnel2017}. \n\nLastly, the end time condition $\\nabla g(q(1))$ in the previous adjoint system is computed by direct differentiation of the varifold norm \\eqref{eq:metric_var} with respect to the final state variables. This is actually more direct than in previous works like \\cite{Charon2,Charon2017} where the gradients are computed with respect to the positions of vertices of the underlying mesh. Here, we have specifically, for the normalized model: \n\\begin{align*}\n \\partial_{x_i} g(q(1)) &= 2 \\sum_{j=1}^{P} 2 r_i r_j \\rho'(|x_i(1)-x_j(1)|^2) \\gamma(\\langle d_i(1), d_j(1) \\rangle) .(x_i(1)-x_j(1)) \\ \\ -2\\ldots \\\\\n \\partial_{d_i} g(q(1)) &= 2 \\sum_{j=1}^{P} r_i r_j \\rho(|x_i(1)-x_j(1)|^2) \\gamma'(\\langle d_i(1), d_j(1) \\rangle). 
d_j(1) \\ \\ -2\\ldots\n\\end{align*}\nwhere the $\\ldots$ denote a similar term for the differential of the cross inner product $\\langle \\mu(1),\\tilde{\\mu} \\rangle_{W^*}$. In the pushforward case with state variables $(x_i(1),u_i(1))$, we first compute $d_i(1)=u_i(1)\/|u_i(1)|$ and $r_i(1)=|u_i(1)|$ and obtain $\\partial_{x_i} g(q(1))$ with the same expression as above while $\\partial_{u_i} g(q(1))$ is given by a simple chain rule. \n\nFinally, with the above notations, the gradient of $E$ writes:\n\\begin{equation}\n \\label{eq:gradient_E}\n \\nabla_{p_0} E = \\overline{K}_{q} p_0 + \\lambda \\tilde{p}(0)\n\\end{equation}\nrespectively $\\nabla_{p_0} E = K_{q} p_0 + \\lambda \\tilde{p}(0)$ in the pushforward case.\n\n\\subsection{Gradient descent algorithm}\nThe solution to the minimization of \\eqref{eq:energy_discrete} is then computed by gradient descent on $p_0 = \\left( p^{(1)}_{0,i},p^{(2)}_{0,i} \\right)_{i=1,\\ldots,P}$. Note that this is a non-convex optimization problem. Until convergence, each iteration consists of the following steps:\n\\vskip2ex\n\\noindent (1) Given the current estimate of $p_0$, integrate the Hamiltonian equations forward in time to obtain $q(1)$.\n\\vskip1ex\n\\noindent (2) Compute the gradient $\\nabla g(q(1))$.\n\\vskip1ex\n\\noindent (3) Integrate the adjoint Hamiltonian system backward in time to obtain $\\nabla_{p_0} E$. \n\\vskip1ex\n\\noindent (4) Update $p_0$: we use two separate update steps for the spatial and directional momentum which are selected, at each iteration, using a rough space search approach leading to the lowest value of $E$. \n\n\\section{Results}\n\\label{sec:results}\nWe now present a few results of registration using the previous algorithm on simple and synthetic examples. Our implementation equally supports objects in 2D or 3D, we will however focus on examples in $\\mathbb{R}^2$ here simply to allow for an easier visualization and interpretation of the results. \n\n\\subsection{Curve registration}\n\\begin{figure}\n \\centering\n \\begin{tabular}{cccc}\n \\includegraphics[trim = 20mm 20mm 20mm 20mm ,clip,width=3.3cm]{matching_bottle_t1.png} \n &\\includegraphics[trim = 20mm 20mm 20mm 20mm ,clip,width=3.3cm]{matching_bottle_t10.png} \n &\\includegraphics[trim = 20mm 20mm 20mm 20mm ,clip,width=3.3cm]{matching_bottle_t19.png}\n &\\includegraphics[trim = 20mm 20mm 20mm 20mm ,clip,width=3.3cm]{matching_bottle_final.png} \\\\\n $t=0$ & $t=1\/3$ & $t=2\/3$ & $t=1$ \\\\\n \\includegraphics[trim = 20mm 20mm 20mm 20mm ,clip,width=3.3cm]{matching_bottle_var_t1.png} \n &\\includegraphics[trim = 20mm 20mm 20mm 20mm ,clip,width=3.3cm]{matching_bottle_var_t10.png} \n &\\includegraphics[trim = 20mm 20mm 20mm 20mm ,clip,width=3.3cm]{matching_bottle_var_t19.png}\n &\\includegraphics[trim = 20mm 20mm 20mm 20mm ,clip,width=3.3cm]{matching_bottle_var_final.png} \n \\end{tabular}\n \\includegraphics[width=5cm]{matching_bottle_energies.png}\n \\caption{Curve registration using point-mesh LDDMM (1st row) and our proposed discrete varifold LDDMM (2nd row). On the last row is shown the evolution of the total energy across the iterations for both algorithms.}\n \\label{fig:bottle}\n\\end{figure}\nWe begin with a toy example of standard curve matching to compare the result and performance of our discrete varifold LDDMM registration algorithm with the state-of-the-art LDDMM approach for curves such as the implementations of \\cite{Glaunes2008,Charon2017}. 
The former methods share a very similar formulation to \\eqref{eq:matching_var} and also make use of varifold metrics as fidelity terms, the essential difference being that the state of the optimal control problem is there the set of vertices of the deformed template curve which is only converted to a varifold for the evaluation of the fidelity term at each iteration. But the dynamics of geodesics still correspond to usual point set deformation under the LDDMM model. \n\nWe consider here the pushforward model for the action of diffeomorphisms on discrete varifolds that we have seen is compatible with the action of diffeomorphisms on curves. In this case, the two formulation and optimization problems for curve registration are theoretically equivalent up to discretization precision. We verify it with the example of Figure \\ref{fig:bottle} for which both algorithms are applied with the same deformation kernel, varifold metric and optimization scheme. Note that in our approach, template and target curves are first (and only once at the beginning) converted to their discrete varifold representations as explained in Section \\ref{sec:discrete_varifolds}. \n\nAs we can see, the resulting geodesics and deformations are consistent between the two methods. This is also corroborated by the very similar values of the energy at convergence. Interestingly however, although each iteration in our model is arguably more expansive numerically compared to standard curve-LDDMM due to the increased complexity of the Hamiltonian equations, the algorithm converges in a significantly lesser number of iterations. Whether this observation generalizes to other examples or other optimization methods will obviously require more careful examination in future work.\n\n\\subsection{Registration of directional sets}\nWe now turn to examples that are more specific to the framework of discrete varifolds. \n\n\\subsubsection*{Choice of the varifold metric} \nFirst, we examine more closely the effect of the metric $\\|\\cdot\\|_{W^*}$ on the registration of discrete varifolds. The framework we propose can indeed support many choices for the kernel functions $\\rho$ and $\\gamma$ that define fidelity metrics $\\|\\cdot\\|_{W^*}$ with possibly very different properties. This has been already analyzed quite extensively in \\cite{Charon2017} but only in the situation where varifolds associated to a curve or a surface. We consider here the same examples of kernels and briefly discuss what are the specific effects to expect when matching more general varifolds in $\\mathcal{D}$ which may involve several orientation vectors at a given position.\n\n\\begin{figure}\n \\centering\n \\begin{tabular}{ccccc}\n \\rotatebox{90}{\\phantom{aaaaa} Binet}\n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_binet_t1.png} \n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_binet_t11.png} \n &\\includegraphics[trim = 8mm 8mm 8mm 8mm,clip,width=3cm]{matching_2diracs_binet_t21.png}\n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_binet_final.png} \\\\ \n \\rotatebox{90}{\\phantom{aa} Unor. 
Gaussian}\n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_unorgauss_t1.png} \n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_unorgauss_t11.png} \n &\\includegraphics[trim = 8mm 8mm 8mm 8mm,clip,width=3cm]{matching_2diracs_unorgauss_t21.png}\n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_unorgauss_final.png} \\\\\n \\rotatebox{90}{Or. Gaussian\/Linear}\n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_orgauss_t1.png} \n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_orgauss_t11.png} \n &\\includegraphics[trim = 8mm 8mm 8mm 8mm,clip,width=3cm]{matching_2diracs_orgauss_t21.png}\n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_orgauss_final.png} \\\\\n &$t=0$ & $t=1\/3$ & $t=2\/3$ & $t=1$\n \\end{tabular}\n \\caption{Matching of pairs of Dirac varifolds (template is in blue and target in red) under the normalized action with different choices of kernels: Binet on the first row, unoriented Gaussian ($\\sigma_s =1$) on the second and oriented Gaussian ($\\sigma_s =2$) on the last one. The linear kernel leads to the same result as the former in that particular case.}\n \\label{fig:2Diracs}\n\\end{figure}\n\nThe results of Propositions \\ref{prop:var_metric2} and \\ref{prop:var_metric3} hold under the assumption that the kernel defined by $\\rho$ is a $C_0$-universal kernel on $\\mathbb{R}^n$, which restricts the possible choices to a few known classes (cf \\cite{Carmeli2010} for a thorough analysis). Here, we will focus on the class of Gaussian kernels given by $\\rho(|x-x'|^2) = e^{-\\frac{|x-x'|^2}{\\sigma^2}}$ with a width parameter $\\sigma>0$ that essentially provides a notion of spatial scale sensitivity to the metric, and which must be adapted to the intrinsic sizes of shapes in each example. \n\nIn combination with $\\rho$, as in \\cite{Charon2017}, we introduce the following four kernels on $\\mathbb{S}^{n-1}$:\n\\begin{itemize}\n\\item[$\\bullet$] $\\gamma(\\langle d,d'\\rangle) = \\langle d,d'\\rangle$ (linear kernel): this choice is related to the particular subclass of currents \\cite{Glaunes2008}. In that case, the resulting $\\|\\cdot\\|_{W^*}$ is clearly only a pseudo-metric on $\\mathcal{D}$ since the linearity implies that in $W^*$: $\\delta_{(x,-d)}=-\\delta_{(x,d)}$ and for any $d_1\\neq -d_2$, $\\delta_{(x,d_1)} + \\delta_{(x,d_2)} = |d_1 + d_2| \\delta_{\\left(x,\\frac{d_1+d_2}{|d_1+d_2|}\\right)}$. However, we still obtain a metric on the subspace $\\mathring{\\mathcal{D}}$ thanks to Proposition \\ref{prop:var_metric2}. \n\n\\item[$\\bullet$] $\\gamma(\\langle d,d'\\rangle) = \\langle d,d'\\rangle^2$ (Binet kernel): $\\gamma$ being an even function, as discussed in Section \\ref{sec:discrete_varifolds}, the resulting metric on $W^*$ is invariant to the orientation of direction vectors. According to Proposition \\ref{prop:var_metric3}, we then have a distance on $\\mathring{\\mathcal{D}}$ modulo the orientation. Note however that with this particular choice, one does not obtain a metric (but only a pseudo-metric) on $\\mathcal{D}$ modulo the orientation, as we will illustrate in the examples below. 
\n\n\\item[$\\bullet$] $\\gamma(\\langle d,d'\\rangle) = e^{-\\frac{2}{\\sigma_s^2}(1-\\langle d,d'\\rangle^2)}$ (unoriented Gaussian kernel): this is another example of orientation-invariant kernel considered in \\cite{Charon2} corresponding to a particular construction of Gaussian kernels on the projective space. In contrast with Binet kernel, it does induce a metric on $\\mathcal{D}$ modulo orientation.\n\n\\item[$\\bullet$] $\\gamma(\\langle d,d'\\rangle) = e^{-\\frac{2}{\\sigma_s^2}(1-\\langle d,d'\\rangle)}$ (oriented Gaussian kernel): this kernel is the restriction of the standard Gaussian kernel on $\\mathbb{R}^n$ to the sphere $\\mathbb{S}^{n-1}$. As such, it can be shown to be $C_0$-universal on $\\mathbb{S}^{n-1}$ and thus, from Proposition \\ref{prop:var_metric1}, lead to a metric on the entire space $\\mathcal{D}$.\n\\end{itemize}\n\n\\begin{figure}\n \\centering\n \\begin{tabular}{ccccc}\n \\rotatebox{90}{\\phantom{aaaa} Linear}\n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_bis_linear_t1.png} \n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_bis_linear_t11.png} \n &\\includegraphics[trim = 8mm 8mm 8mm 8mm,clip,width=3cm]{matching_2diracs_bis_linear_t21.png}\n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_bis_linear_final.png} \\\\\n \\rotatebox{90}{\\phantom{aa} Or. Gaussian}\n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_bis_orgauss_t1.png} \n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_bis_orgauss_t11.png} \n &\\includegraphics[trim = 8mm 8mm 8mm 8mm,clip,width=3cm]{matching_2diracs_bis_orgauss_t21.png}\n &\\includegraphics[trim = 8mm 8mm 8mm 8mm ,clip,width=3cm]{matching_2diracs_bis_orgauss_final.png} \\\\\n &$t=0$ & $t=1\/3$ & $t=2\/3$ & $t=1$\n \\end{tabular}\n \\caption{Registration of pairs of Dirac varifolds with the pushforward model for both the linear and oriented Gaussian kernel.}\n \\label{fig:2Diracs_bis}\n\\end{figure}\n\nWe illustrate the aforementioned properties on a very simple registration example between pairs of Dirac varifolds located at the same position $x$ i.e $\\delta_{(x,d_1)} + \\delta_{(x,d_2)}$ and $\\delta_{(x,d_1')} + \\delta_{(x,d_2')}$. In Figure \\ref{fig:2Diracs}, the template and target pairs of Diracs are matched based on the normalized action model. The estimated matching and deformations clearly differ with the choice of kernel but each of these result is in fact perfectly consistent with the different invariances of those kernels. Indeed the two Diracs are exactly matched to the target using the oriented Gaussian kernel since $\\|\\cdot\\|_{W^*}$ is in that case a metric on the entire space $\\mathcal{D}$. They are however matched to the opposite vectors with the unoriented Gaussian kernel which is indeed insensitive to orientation. In the case of Binet kernel, in addition to orientation-invariance, there exists other pairs of Diracs which are distinct in $\\mathcal{D}$ but coincide in $W^*$. For example, it can be easily verified that all discrete varifolds of the form $\\delta_{(x,d_1)} + \\delta_{(x,d_2)}$ with orthogonal vectors $d_1$ and $d_2$ are equal in $W^*$, which is reflected by the result in Figure \\ref{fig:2Diracs}. \n\nWe emphasize the difference of behavior between linear and oriented Gaussian kernels with the example of Figure \\ref{fig:2Diracs_bis} associated this time to the pushforward action model. 
The result shown in the first row is a consequence of the fact that fidelity terms derived from the linear kernel only constrains the sums $d_1 + d_2$ and $d_1' + d_2'$ to match. \n\n\n\n\\subsubsection*{Multi-directional varifold matching}\nFinally, Figure \\ref{fig:cat} shows an example of matching on more general discrete varifolds that involve varying number of directions at different spatial locations. This is computed with the normalized action using an oriented Gaussian kernel for the fidelity term. Although purely synthetic, it illustrates the potentialities of the proposed approach to register data with complex directional patterns. \n\n\\begin{figure}\n \\centering\n \\begin{tabular}{cccc}\n \\includegraphics[trim = 15mm 15mm 15mm 15mm ,clip,width=3.2cm]{matching_cat_t_1.png} \n &\\includegraphics[trim = 15mm 15mm 15mm 15mm ,clip,width=3.2cm]{matching_cat_t_7.png} \n &\\includegraphics[trim = 15mm 15mm 15mm 15mm ,clip,width=3.2cm]{matching_cat_t_15.png}\n &\\includegraphics[trim = 15mm 15mm 15mm 15mm ,clip,width=3.2cm]{matching_cat_final.png} \\\\\n $t=0$ & $t=1\/3$ & $t=2\/3$ & $t=1$ \n \\end{tabular}\n \\caption{Registration of multi-directional sets. The lengths of vectors correspond to the weights of the Dirac varifolds.}\n \\label{fig:cat}\n\\end{figure}\n\n\\subsection{Contrast-invariant image registration}\nA last possible application worth mentioning is the registration of images with varying contrast. Indeed, an image $I$ modulo all contrast changes is equivalently represented by its unit gradient vector field $\\frac{\\nabla I}{|\\nabla I|}$. Note that this may in fact be only defined at isolated pixels in the image, specifically the ones where the gradient is non vanishing. Within the setting of this work, it is thus natural to associate to $I$ the discrete varifold \n\\begin{equation*}\n \\mu_{I} = \\sum_{\\nabla I(x_i) \\neq 0} \\delta_{\\left(x_i,\\frac{\\nabla I}{|\\nabla I|}(x_i)\\right)} \\in \\mathcal{D}\n\\end{equation*}\n \n\n\\begin{figure}\n \\centering\n \\begin{tabular}{cc}\n \\includegraphics[trim = 15mm 15mm 15mm 10mm ,clip,width=4cm]{phan_t_0.png} \n &\\includegraphics[trim = 15mm 15mm 15mm 10mm ,clip,width=4cm]{phan_t_1.png} \\\\\n $t=0$ & $t=1\/2$ \\\\\n \\includegraphics[trim = 15mm 15mm 15mm 10mm ,clip,width=4cm]{phan_t_2.png}\n &\\includegraphics[trim = 15mm 15mm 15mm 10mm ,clip,width=4cm]{phan_target.png} \\\\\n $t=1$ & $\\textrm{target}$ \n \\end{tabular}\n \n \\begin{tabular}{c}\n \\includegraphics[trim = 15mm 15mm 15mm 10mm ,clip,width=6cm]{phan_match_vf.png} \\\\\n t=1\n \\end{tabular}\n \\caption{Registration of images modulo contrast changes. The matching is computed between the discrete varifolds associated to both images with the normalized action model and unoriented Gaussian fidelity term. The estimated deformation can be then applied to the template image.}\n \\label{fig:phantom}\n\\end{figure}\n\nIt is straightforward that $\\mu_{I}$ is invariant to increasing contrast changes. It also becomes invariant to decreasing ones by quotienting out the orientation of the unit gradient vectors, which in our framework is simply done by selecting an orientation-invariant kernel $\\gamma(\\langle d ,d' \\rangle)$ to define $\\|\\cdot\\|_{W^*}$. In Figure \\ref{fig:phantom}, this approach is used to map two oppositely contrasted synthetic phantom brain images. We show both the alignment of the discrete varifolds as well as the full deformation applied to the image itself. 
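For concreteness, the conversion from an image to its discrete varifold representation amounts to a few lines; a minimal sketch (not the actual code, with a hypothetical tolerance parameter deciding which gradients are treated as non-vanishing) is: \n\\begin{verbatim}\nimport numpy as np\n\ndef image_to_varifold(I, tol=1e-8):\n    gy, gx = np.gradient(I.astype(float))        # finite-difference gradients\n    g = np.stack([gx, gy], axis=-1)\n    nrm = np.linalg.norm(g, axis=-1)\n    mask = nrm > tol\n    ys, xs = np.nonzero(mask)\n    positions = np.stack([xs, ys], axis=-1).astype(float)   # pixel positions x_i\n    directions = g[mask] \/ nrm[mask][:, None]               # unit gradients d_i\n    weights = np.ones(len(positions))\n    return positions, directions, weights\n\\end{verbatim}\n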
Note that these images have no noise and a simple structure with relatively low number of non-vanishing gradients. There will be clearly the need for more validation to be done in the future in order to evaluate the practicality and robustness of this method for real multi-modal medical images.\n\n\\section{Conclusion and future work} \nWe have proposed, in this paper, a framework for large deformation inexact registration between discrete varifolds. It relies on the LDDMM setting for diffeomorphisms and include different models of group action on the space of varifolds. In each case, we derived the corresponding optimal control problems and the associated geodesic equations in Hamiltonian form. By combining those with the use of kernel-based fidelity metrics on varifolds, we proposed a geodesic shooting algorithm to numerically tackle the optimization problems. We finally illustrated the versatility and properties of this approach through examples of various natures which go beyond the classical cases of curves or surfaces. \n\nSeveral improvements or extensions of this work could be considered for future work. From a theoretical standpoint, it would be for instance important to derive a more general 'continuous' varifold matching model i.e with more general distributions than Dirac sums. Besides, higher dimensional varifolds could be possibly introduced within our model, although this would involve dealing with direction elements in Grassmann manifolds as in \\cite{Charon2} instead of the simpler $\\mathbb{S}^{n-1}$. Lastly, future work will also include adapting the existing fast GPU implementations for LDDMM to the new dynamical systems appearing here, with the objective of making the whole approach more scalable to real data applications. \n\n\\subsection*{Acknowledgements}\nThe authors would like to thank Prof. Sarang Joshi for many enriching discussions that initiated parts of this work.\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec01}\n\nIn our investigation, we are supposing the radio loops to be evolved\nsupernova remnants (SNRs). Supernova remnants represent shells of\napproximately spherical shape which are spreading, and depending of\nthe interstellar matter density, they change their shape. We can see\nonly parts of these spherical contours which represent radio spurs,\ne.g. areas of more intense emission in the sky, spur-like and of\nhuge dimensions. Radio loops consist of more spurs lying\napproximately in the same small circle of the celestial sphere.\nTheir material probably expands inside of bubbles of low density.\nBubbles are made by former SNR explosions or by strong stellar winds\n(\\citet{salt83,mcke77} and references therein). The radio emission\nfrom SNRs is generally understood to be synchrotron emission from\nthe relativistic electrons moving in the magnetic field.\n\nA star in the constellation of Cygnus exploded and its remnant is\nCygnus loop. It is classified as a middle-aged SNR of type S,\nlocated below (but near the plane of) the Galactic equator, and less\nthan 1 kpc away from us. It is listed in Green's catalogue of SNRs\nas G74.0-8.5 (\\citet{gree04,gree06,gree09}). As shown in\n\\citet{gree84}, this remnant has been decelerated considerably by\nits interaction with the surrounding interstellar medium.\n\n\\citet{leah97} presented high resolution 1420 MHz total intensity\nand polarization maps of the Cygnus loop and derived rotation\nmeasures. 
\\citet{asch99} compared the radio and X-ray emission from supernova remnants. The X-ray emission traces the hot gas behind the shock front, while the radio emission comes from the relativistic electron population emitting synchrotron radiation in the ambient magnetic field.\nThey found significant differences in the distribution of X-ray and radio brightness in the Cygnus loop. Moreover, significant temperature variations, seen along the rim and in the bright filaments interior to the rim, indicated that the Cygnus loop was caused by a supernova exploding in a cavity.\n\n\\citet{uyan02} suggested that the Cygnus loop may consist of two overlapping remnants. Substantial differences between the northern and southern parts, as well as in their emission characteristics, have been observed (\\citet{uyan04,patn02}). Some authors proposed that the Cygnus loop consists of two likely interacting SNRs: G74.3-8.4 and G72.9-9.0 (\\citet{uyan02,leah02}). Based on 4800 MHz observations, \\citet{sun06} explained that the polarization maps (i.e. the difference in the polarization characteristics between the northern and southern parts) support previous ideas that the Cygnus loop may consist of two SNRs.\nIn addition, several compact radio sources are located within the boundary of the remnant. The main characteristics of the Cygnus loop in different spectral bands are: (a) radio: a shell, brightest to the north-east, with a fainter breakout region to the south and with spectral variations; (b) optical: a large filamentary loop, brightest to the north-east, not well defined to the south and west; (c) X-ray: a shell in soft X-rays (see e.g. \\citet{gree06}).\n\nSome radio maps which include this remnant are the following: the map by \\citet{kund72} at 4940 MHz, \\citet{keen73} at 2695 MHz, \\citet{uyan04} at 2675 MHz, \\citet{leah97} at 1420 MHz, \\citet{dick80} at 610 MHz and \\citet{gree84} at 408 MHz.\nWe found observations of the continuum radio emission at 2720 (\\citet{reif87}), 1420 (\\citet{reic86}), 820 (\\citet{berk72}), 408 (\\citet{hasl82}) and 34.5 MHz (\\citet{dwar90}) in electronic form and used them in this paper.\n\nIn our previous paper (\\citet{bork09a}) we calculated only the temperatures and brightnesses, at three frequencies. In this paper we expand the scope of our investigation to brightness temperatures and surface brightnesses at five frequencies, as well as the spectrum, $T-T$ graphs, the radio spectral indices, estimates of the environment density and of the initial energy of the supernova explosion, and the flux density spectrum.\n\nIn our calculations, we used brightness temperatures over the whole area of the loop, so the mean temperature that we estimated refers to the northern and southern parts together. In this research the average brightness temperatures and surface brightnesses of the Cygnus radio loop are calculated at the five frequencies: 2720, 1420, 820, 408 and 34.5 MHz. We then examine how these results agree with previous results (\\citet{roge99,reic03,uyan04}) and with current theories of SNR evolution. 
These theories predict\nthat loops are non-thermal sources which are spreading inside of the\nhot and low density bubbles made by former supernova explosions or\nby strong stellar winds (see \\citet{salt83,mcke77} and references\ntherein).\n\nOur aim is also to apply method for determination of the brightness\ntemperature given in article \\citet{bork07} which is developed for\nlarge radio loops, and to show that it is rather efficient in the\ncase of much smaller radio loops, e.g. Cygnus loop, as well as to\ncheck how the results obtained using this method are getting along\nwith the method of $T-T$ graphics and results obtained with other\nmethods. Our method is quite simple because we are using brightness\ntemperature isolines to define borders of the Cygnus loop at the\nwide range of frequencies ($\\nu_{max} \/ \\nu_{min}$ = 2720 MHz \/ 34.5\nMHz $\\approx$ 79). Other authors are using different squared or\nrectangular areas to determine area of the loop and calculate\nspectral indices, brightness temperature and the flux of the loop\n(\\citet{uyan04}, page 917 and \\citet{leah98}, page 786). Also, we\ncalculated flux from Cygnus loop and compare our results with\nresults of other authors in different frequency ranges.\n\n\n\\section{Analysis}\n\\label{sec02}\n\n\\subsection*{Data}\n\\label{sec02a}\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig1a.eps} \\\\\n\\vspace*{0.2cm}\n\\hspace*{0.4cm}\n\\includegraphics[width=0.34\\textwidth]{fig1b.eps} \\\\\n\\vspace*{0.8cm}\n\\includegraphics[width=0.50\\textwidth]{fig1c.eps}\n\\caption{\\emph{Top}: the 2720 MHz map of a region in Cygnus, in new\nGalactic coordinates ($l$, $b$), showing contours of brightness\ntemperature. This radio loop has position: $l$ = [76.5$^\\circ$,\n71.5$^\\circ$]; $b$ = [-10.5$^\\circ$, -7$^\\circ$]. The HPBW (Half\nPower Beam Width) for this frequency is 0$^\\circ$.35. Eleven\ncontours plotted represent the temperatures $T_\\mathrm{min}$ and\n$T_\\mathrm{max}$ from Table \\ref{tab01} and nine contours in\nbetween. The contours are plotted every 0.265 K, starting from the\nlowest temperature of 0.395 K up to 0.66 K. The corresponding\ntemperature scale is given (in K). \\emph{Bottom}: the 2720 MHz area\nmap of Cygnus.}\n\\label{fig01}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig2a.eps} \\\\\n\\vspace*{0.2cm}\n\\hspace*{0.4cm}\n\\includegraphics[width=0.34\\textwidth]{fig2b.eps} \\\\\n\\vspace*{0.8cm}\n\\includegraphics[width=0.50\\textwidth]{fig2c.eps}\n\\caption{The same as Fig. 1, but for 1420 MHz. The HPBW for this\nfrequency is 0$^\\circ$.59. The contours are plotted every 0.07 K,\nstarting from the lowest temperature of 4.2 K up to 4.9 K.}\n \\label{fig02}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig3a.eps} \\\\\n\\vspace*{0.2cm}\n\\hspace*{0.4cm}\n\\includegraphics[width=0.34\\textwidth]{fig3b.eps} \\\\\n\\vspace*{0.8cm}\n\\includegraphics[width=0.50\\textwidth]{fig3c.eps}\n\\caption{The same as Fig. 1, but for 820 MHz. The HPBW is\n1$^\\circ$.2. The contours are plotted every 0.39 K, starting from\nthe lowest temperature of 10.1 K up to 14 K.}\n\\label{fig03}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig4a.eps} \\\\\n\\vspace*{0.2cm} \\hspace*{0.4cm}\n\\includegraphics[width=0.34\\textwidth]{fig4b.eps} \\\\\n\\vspace*{0.8cm}\n\\includegraphics[width=0.50\\textwidth]{fig4c.eps}\n\\caption{The same as Fig. 
1, but for 408 MHz. The HPBW is\n0$^\\circ$.85. The contours are plotted every 3.1 K, starting from\nthe lowest temperature of 41 K up to 72 K.}\n\\label{fig04}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig5a.eps} \\\\\n\\vspace*{0.2cm} \\hspace*{0.4cm}\n\\includegraphics[width=0.34\\textwidth]{fig5b.eps} \\\\\n\\vspace*{0.8cm}\n\\includegraphics[width=0.50\\textwidth]{fig5c.eps}\n\\caption{The same as Fig. 1, but for 34.5 MHz. The HPBW is\n0$^\\circ$.7. The contours are plotted every 3550 K, starting from\nthe lowest temperature of 34500 K up to 70000 K.}\n\\label{fig05}\n\\end{figure}\n\n\n\\begin{figure*}[ht!]\n\\centering\n\\includegraphics[height=0.9\\textheight]{fig6.eps}\n\\caption{Temperature profiles at 2720 MHz for Galactic longitude\nfrom 76$^\\circ$.5 to 71$^\\circ$.5 and for the following values of\nGalactic latitude: $b$ = -7$^\\circ$.5 \\emph{(top left)}, $b$ =\n-8$^\\circ$ \\emph{(top right)}, $b$ = -8$^\\circ$.5 \\emph{(middle\nleft)}, $b$ = -9$^\\circ$ \\emph{(middle right)} and $b$ = -10$^\\circ$\n\\emph{(bottom)}. Temperatures are given in K, and Galactic\nlongitudes in degrees. Notice that the bottom figure shows the\ntemperature profile at latitude which lies outside (but close enough\nto) the loop and from that profile we can see what is the highest\ntemperature of the background at $b$ = -10$^\\circ$, at 2720 MHz (not\nof the loop with background).}\n\\label{fig06}\n\\end{figure*}\n\n\nWe used the following radio-continuum surveys as the basic source of\ndata in this paper: at 2720 MHz (\\citet{reif87}), 1420 MHz\n(\\citet{reic86}), 820 MHz (\\citet{berk72}), 408 MHz (\\citet{hasl82})\nand 34.5 MHz (\\citet{dwar90}). These surveys are available in\nelectronic form, as the \"Flexible Image Transport System\" (FITS)\ndata format, at the site:\n\\url{http:\/\/www.mpifr-bonn.mpg.de\/survey.html}. This online Survey\nSampler of the \"Max-Planck-Institut f\\\"{u}r Radioastronomie\" (MPIfR)\nnear Bonn, Germany, allows users to pick a region of the sky and\nobtain images and data at different frequencies. The 2720-MHz\nStockert survey has the angular resolution of 0$^\\circ$.35, 1420-MHz\nStockert survey (\\citet{reic86}) 0$^\\circ$.59, the 820-MHz Dwingeloo\nsurvey (\\citet{berk72}) 1$^\\circ$.2, the 408-MHz all-sky survey\n(\\citet{hasl82}) 0$^\\circ$.85 and the 34.5-MHz Gauribidanur survey\n(\\citet{dwar90}) 0$^\\circ$.7. The corresponding observations are\ngiven at the following rates (measured data) for both $l$ and $b$:\n$\\frac{1^\\circ}{8}$ at 2720 MHz, $\\frac{1^\\circ}{4}$ at 1420 MHz,\n$\\frac{1^\\circ}{2}$ at 820 MHz, $\\frac{1^\\circ}{3}$ at 408 MHz and\n$\\frac{1^\\circ}{5}$ at 34.5 MHz. The effective sensitivities are\nabout 5 mK T$_b$ (T$_b$ is for an average brightness temperature),\n50 mK, 0.20 K, 1.0 K and about 700 K, respectively.\n\nFITS data format stores the multidimensional arrays and\n2-dimensional tables containing rows and columns of scientific data\nsets. We extracted observed brightness temperatures from this data\nformat into ASCII data files, and afterwards using our programs in C\nand FORTRAN, we obtained results presented in this paper.\n\n\n\n\\subsection*{Method}\n\\label{sec02b}\n\nAs there is great influence of background radiation over the Cygnus\nloop area, it is very difficult to determine its area precisely. 
The\narea of this loop is enclosed by brightness temperature contours.\nThe maps of a region in Cygnus, in new Galactic coordinates ($l$,\n$b$), with contours of the brightness temperature $T_\mathrm{b}$,\nare plotted in Figs. \ref{fig01}--\ref{fig05}. In these figures,\namong all the brightness temperature contours, the most important\nare the outer and the inner contour, which represent the loop\nborders. The outer one (which corresponds to the minimum temperature\nof the loop) separates the loop from the background, while the inner one\n(corresponding to the maximum temperature) separates the loop from a\nsuperposed source. The interval of Galactic longitude of this loop\nis $l$ = [76.5$^\circ$, 71.5$^\circ$], and of latitude $b$ =\n[-10.5$^\circ$, -7$^\circ$]. We used the same method of calculation\nas given in \citet{bork07} for Galactic radio loops I--VI,\n\citet{bork08} for Loops V and VI and \citet{bork09b} for the\nMonoceros loop. Our aim is to apply the method, which we developed for\nthe main Galactic loops I--VI and described in \citet{bork07} and\n\citet{bork08}, to smaller remnants and to show that our method of\ncalculation is applicable to almost all remnants. To support this,\nwe also give the area maps of Cygnus at the five\nfrequencies (Figs. \ref{fig01}--\ref{fig05}) and some examples of\nthe temperature profiles at 2720 MHz (Fig. \ref{fig06}). In Fig.\n\ref{fig06} it should be noticed that the bottom panel shows only\nthe background radiation (not including the loop): for $b$ =\n-10$^\circ$ the brightness temperature is not higher than 0.38 K.\nThis is consistent with the temperature intervals given in the first row of\nTable \ref{tab01}, where the minimum temperature of the loop\nwith background at 2720 MHz is set to 0.395 K.\n\n\clearpage\n\nThe mean temperatures and surface brightnesses of this radio loop\nare computed using data taken from the radio-continuum surveys at 2720,\n1420, 820, 408 and 34.5 MHz. We have subtracted the background\nradiation in order to derive the mean brightness temperature of the\nSNR alone. First, the temperature of the loop plus background was\ndetermined. Every bin in a survey gives some value of the\ntemperature $T_{sum}$. We take all bins within the loop border and\nobtain the average temperature $\langle T_{sum} \rangle$ of the loop plus\nbackground. Then, the background alone near the loop was estimated:\nwe determine the temperature of the background near the outer border of\nthe loop (the average value of all data near the outer border of the\nloop), and that is the background temperature $T_{back}$. Finally,\nthe difference of these values is calculated to obtain the\naverage temperature of the loop at each frequency: the average\ntemperature of the loop is $\langle T_{sum} \rangle - T_{back}$. The areas over\nwhich an average brightness temperature is determined at each of the\nfive frequencies are taken to be as similar as possible within the\nlimits of measurement accuracy. $T_\mathrm{min}$ in Table\n\ref{tab01} denotes the lower temperature limit between the background\nand the loop, and $T_\mathrm{max}$ denotes the upper temperature limit\nof the loop. We therefore used all measured values between $T_\mathrm{min}$\nand $T_\mathrm{max}$, inside the corresponding regions of $l$ and\n$b$, to calculate the brightness temperature of the loop including the\nbackground. 
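\n\nA schematic Python fragment of this procedure is given below. It is only a sketch: \texttt{region} stands for the extracted temperature map at one frequency, and the selection of the background bins is simplified with respect to the actual calculation, which uses only the data near the outer border of the loop.\n\n\begin{verbatim}\nimport numpy as np\n\n# Schematic background subtraction: 'region' is the 2-D array of brightness\n# temperatures T_sum in the (l, b) window; t_min and t_max are the temperature\n# limits from Table 1 that define the outer and inner loop contours.\ndef mean_loop_temperature(region, t_min, t_max):\n    loop = (region >= t_min) & (region <= t_max)   # bins between the two contours\n    t_loop_plus_bg = region[loop].mean()           # <T_sum>: loop plus background\n\n    # background estimate: here simply the bins below T_min in the same window\n    t_back = region[region < t_min].mean()\n\n    return t_loop_plus_bg - t_back                 # <T_sum> - T_back\n\end{verbatim}\n\n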
The mean brightness temperature for the loop is found by\nsubtracting the mean value of background brightness temperature from\nthe mean value of the brightness temperature over the area of the\nloop.\n\nAfter deriving the mean brightness temperatures\n$T_{\\mathrm{b},\\nu}$, we have converted these values into surface\nbrightness $\\Sigma_{\\nu}$ by:\n\n\\begin{equation}\n\\Sigma_\\nu = \\left( {2k\\nu ^2 \/c^2 } \\right) T_{\\mathrm{b},\\nu},\n\\label{equ01}\n\\end{equation}\n\n\\noindent where $k$ is Boltzmann constant and $c$ the speed of\nlight. Results are given in Table \\ref{tab01}.\n\n\n\\begin{table*}[ht!]\n\\centering\n\\caption{Temperatures and brightnesses of Cygnus radio\nloop at 2720, 1420, 820, 408 and 34.5 MH\\lowercase{z}}\n\\begin{tabular}{c|c|c|c}\n\\hline\nFrequency & Temperature limits & Temperature & Brightness \\\\\n(MHz) & $T_\\mathrm{min}$, $T_\\mathrm{max}$ (K) & (K) & (10$^{-22}$ W\/(m$^2$ Hz Sr)) \\\\\n\\hline\n2720 & 0.395, 0.66 & 0.160 $\\pm$ 0.005 & 3.64 $\\pm $ 0.12 \\\\\n1420 & 4.2, 4.9 & 0.49 $\\pm$ 0.05 & 3.04 $\\pm $ 0.30 \\\\\n820 & 10.1, 14.0 & 2.30 $\\pm$ 0.20 & 4.75 $\\pm$ 0.40 \\\\\n408 & 41, 72 & 15.3 $\\pm$ 1.0 & 7.83 $\\pm$ 0.50 \\\\\n34.5 & 34500, 70000 & 13960 $\\pm$ 700 & 50.97 $\\pm$ 2.56 \\\\\n\\hline\n\\end{tabular}\n\\label{tab01}\n\\end{table*}\n\n\n\\section{Results}\n\\label{sec03}\n\nThe radio continuum maps are used for determining the area of the\nCygnus loop and for deriving brightness temperatures over it. At\neach of the five frequencies, the areas are determined to be as\nsimilar as possible within the limits of measurement accuracy. There\nare still some differences between these areas and we think that the\nmajor causes of differing borders between the five frequencies are\nsmall random and systematic errors in the data. The surface\nbrightnesses of SNRs must be above the sensitivity limit of the\nobservations and must be clearly distinguishable from the Galactic\nbackground emission (\\citet{gree91}). As it is very difficult to\nresolve the fainter parts of the loop from the background, they are\nnot taken into account. For evaluation brightness temperatures over\nthe area of the loop we had to take into account background\nradiation (see \\citet{webs74}). Borders enclosing the spurs are\ndefined to separate the spur and its background. For the method of\ncalculation see also \\citet{bork07}, \\citet{bork08} and\n\\citet{bork09b}. As mentioned in these papers, if the value of\n$T_\\mathrm{min}$ is changed by a small amount, the brightness\ncontours become significantly different. If $T_\\mathrm{min}$ is too\nsmall, the area of the spur becomes confused with the background and\nit becomes obvious that the border has been incorrectly chosen.\n\nThe results are given in Tables \\ref{tab01}, \\ref{tab02} and\n\\ref{tab03}. $T_\\mathrm{min}$, given in the second column of Table\n\\ref{tab01}, is the lower temperature limit, while $T_\\mathrm{max}$\nis the upper temperature of the loop and it is also upper\ntemperature limit (because there are no other superposed sources).\nThese temperature limits enable us to distinguish the loop from\nbackground. Then we derived the surface brightnesses using equation\n(\\ref{equ01}) for each frequency.\n\nThe values for brightnesses in $\\mathrm{10^{-22}\\, W\/(m^2\\, Hz\\,\nSr)}$ can be compared with results for flux densities in Jy. 
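\n\nAs a quick consistency check of equation (\ref{equ01}), the brightness column of Table \ref{tab01} can be reproduced directly from the temperature column; the short sketch below only assumes the SI values of $k$ and $c$:\n\n\begin{verbatim}\nimport numpy as np\n\nk_B = 1.380649e-23      # Boltzmann constant [J\/K]\nc   = 2.99792458e8      # speed of light [m\/s]\n\nnu  = np.array([2720e6, 1420e6, 820e6, 408e6, 34.5e6])   # Hz\nT_b = np.array([0.160, 0.49, 2.30, 15.3, 13960.0])       # K, Table 1\n\nsigma = 2.0 * k_B * nu**2 \/ c**2 * T_b                   # W m^-2 Hz^-1 sr^-1\nprint(sigma \/ 1e-22)     # ~ [3.64, 3.04, 4.75, 7.83, 51.0], cf. Table 1\n\end{verbatim}\n\n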
The\nflux densities can be transformed into brightnesses or vice versa,\ntaking into account that the frequencies have to be the same.\nKnowing the loop size $\\Omega$, the flux densities $S_\\nu$ given in\nJy can be transformed to brightnesses given in $\\mathrm{10^{-22}\\,\nW\/(m^2\\, Hz\\, Sr)}$ by:\n\n\\begin{equation}\n\\Sigma_\\nu = S_\\nu \\times 10^{-26} \/ \\Omega.\n\\label{equ02}\n\\end{equation}\n\nBy use of the spectral indices, the brightnesses can be reduced to\n1000 MHz according to relation:\n\n\\begin{equation}\n\\Sigma_{1000} = \\Sigma_\\nu (1000 \/ \\nu)^{(2 - \\beta)},\n\\label{equ03}\n\\end{equation}\n\n\\noindent where the temperature spectral index $\\beta = \\alpha + 2$,\nand $\\alpha$ is the spectral index defined by $S_\\nu\\propto\n\\nu^{-\\alpha}$.\n\n\n\\subsection*{Spectrum}\n\\label{sec03a}\n\nThe spectrum was generated using mean temperatures at five different\nfrequencies. Best-fit straight line spectrum enables calculation of\nspectral index as negative value of the line's direction\ncoefficient. In Figure \\ref{fig07} we give spectrum for the three\nmiddle frequencies: 1420, 820 and 408 MHz because their spectrum is\nvery well fitted with the straight line (see Figure \\ref{fig07}).\nFrequencies 2720 and 34.5 MHz, lies on very high and on very low\nends of the spectrum, as presented in Fig. \\ref{fig08}. If for\ncalculation of spectral index we take only three frequencies 1420,\n820 and 408 MHz, the result would be $\\beta_3$ = 2.76 $\\pm$ 0.03.\nAll five frequencies 2720, 1420, 820, 408 and 34.5 MHz, also from\nlinear fit, give $\\beta_5$ = 2.66 $\\pm$ 0.09. It is steeper spectral\nindex in comparison to average value for Galactic SNRs $\\beta = 2.5$\n(in Green's catalogue (\\citet{gree09}) the adopted value for\nspectral index is not given because of its variability). Cygnus loop\nis relatively old SNR. The shock wave has to be weak for evolved\nSNRs. The steeper spectral indices of SNRs should be expected for\nthe smaller shock wave velocities. It is result obtained from the\ndiffuse particle acceleration theory (\\citet{bell78a,bell78b}).\nAdditionally, Cygnus loop probably expands in low density\nenvironment. It looks like large Galactic radio loops - evolved SNRs\nwith the steep spectral indices, immersed in the low density\nenvironment (see \\citet{bork07,bork08}, and references therein).\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig7.eps}\n\\caption{Cygnus loop spectrum: temperature versus frequency, for\nthree measurements -- at 408, 820 and 1420 MHz.}\n\\label{fig07}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig8.eps}\n\\caption{Cygnus loop spectrum: temperature versus\nfrequency, for five measurements -- at 34.5, 408, 820, 1420 and 2720\nMHz.}\n\\label{fig08}\n\\end{figure}\n\n\nObtained values $\\beta_3$ = 2.76 $\\pm$ 0.03 (from three\nfrequencies), $\\beta_5$ = 2.66 $\\pm$ 0.09 (from all five\nfrequencies), are greater than 2.2 and confirm non-thermal origin of\nCygnus loop emission. From Fig. \\ref{fig08} it can be noticed that\nlinear fit is quite satisfactory. The value for the brightness\ntemperature spectral index of the Cygnus loop is rather steep. 
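\n\nThe fits behind $\beta_3$ and $\beta_5$ amount to straight-line fits in the log--log plane; a minimal sketch using the mean temperatures of Table \ref{tab01} (error estimation omitted) is:\n\n\begin{verbatim}\nimport numpy as np\n\nnu  = np.array([2720., 1420., 820., 408., 34.5])     # MHz\nT_b = np.array([0.160, 0.49, 2.30, 15.3, 13960.0])   # K, Table 1\n\n# T_b ~ nu^(-beta), so beta is minus the slope of the straight line\n# fitted to the spectrum in the log-log plane.\nbeta_5 = -np.polyfit(np.log10(nu), np.log10(T_b), 1)[0]\nbeta_3 = -np.polyfit(np.log10(nu[1:4]), np.log10(T_b[1:4]), 1)[0]\nprint(beta_3, beta_5)    # ~ 2.76 and ~ 2.66: a rather steep spectrum\n\end{verbatim}\n\n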
This\nis at the high end of the spectral index distribution for SNRs as\nsuggested in \\citet{clar76}.\n\n\n\\begin{figure*}[ht!]\n\\centering\n\\includegraphics[height=0.93\\textheight]{fig9.eps}\n\\caption{The data retabulated to $0^\\circ.5 \\times 0^\\circ.5$\nresolution, for the following frequencies: 2720 MHz \\emph{(top\nleft)}, 1420 MHz \\emph{(top right)}, 820 MHz \\emph{(middle left)},\n408 MHz \\emph{(middle right)} and 34.5 MHz \\emph{(bottom)}. The\nHPBWs for these frequencies are 0$^\\circ$.35, 0$^\\circ$.59,\n1$^\\circ$.2, 0$^\\circ$.85 and 0$^\\circ$.7, respectively. The gray\nscales of temperatures are given also.}\n\\label{fig09}\n\\end{figure*}\n\n\n\\subsection*{$T-T$ plot}\n\\label{sec03b}\n\n\n\\begin{table*}\n\\centering\n\\caption{Spectral indices for Cygnus loop from $T-T$\nplots, between 2720, 1420, 820, 408 and 34.5 MH\\lowercase{z}}\n\\begin{tabular}{c|c|c|c|c|c}\n\\hline\nFrequency (MHz) & 2720 & 1420 & 820 & 408 & 34.5 \\\\\n\\hline\n2720 & \/ & 1.52 $\\pm$ 0.35 & 1.96 $\\pm$ 0.53 & 2.35 $\\pm$ 0.26 & 2.57 $\\pm$ 0.23 \\\\\n1420 & 2.22 $\\pm$ 0.76 & \/ & 2.27 $\\pm$ 0.64 & 2.60 $\\pm$ 0.28 & 2.67 $\\pm$ 0.25 \\\\\n820 & 2.64 $\\pm$ 0.61 & 3.10 $\\pm$ 0.64 & \/ & 2.82 $\\pm$ 0.31 & 2.82 $\\pm$ 0.10 \\\\\n408 & 2.67 $\\pm$ 0.26 & 2.79 $\\pm$ 0.28 & 3.10 $\\pm$ 0.31 & \/ & 2.78 $\\pm$ 0.15 \\\\\n34.5 & 2.86 $\\pm$ 0.23 & 2.95 $\\pm$ 0.25 & 2.92 $\\pm$ 0.10 & 2.94 $\\pm$ 0.15 & \/ \\\\\n\\hline\n\\end{tabular}\n\\label{tab02}\n\\end{table*}\n\n\nThe measured data have different resolutions for different\nfrequencies (see \\S 2.1), and therefore in order to obtain $T-T$\nplots the data were retabulated so the higher resolution maps are\nconvolved to the resolution of the lowest resolution map. In that\nway we convolved data at 2720, 1420, 408 and 34.5 MHz to $0^\\circ.5\n\\times 0^\\circ.5$ resolution, which is the sampling rate of the 820\nMHz survey. These retabulated data are presented in Figure\n\\ref{fig09} for the following frequencies: 2720 MHz (top left), 1420\nMHz (top right), 820 MHz (middle left), 408 MHz (middle right) and\n34.5 MHz (bottom). Then, for each frequency pair we used only the\ncommon points (with the same $(l,b)$) which belong to the loop area\nat both frequencies. In that way we reduced loop area to the same\narea for different frequencies. The obtained $T-T$ plots for five\npairs of frequencies enabled calculating the spectral indices. We\ncalculated two $\\beta$ values for each of these pairs: between\n2720--1420, 2720--820, 2720--408, 2720--34.5, 1420--820, 1420--408,\n1420--34.5, 820--408, 820--34.5 and 408--34.5 MHz and presented it\nin Table \\ref{tab02}. For each of the ten frequency pairs, by\ninterchanging the dependent and independent variables we have\nobtained two $\\beta$ values for each pair and the mean value of\nthese fit results is adopted as the radio spectral index, as\nsuggested in \\citet{uyan04}. Regarding only three frequencies\n(because their spectrum lies on straight line, see Figure\n\\ref{fig07}) 1420, 820 and 408 MHz, the average value of spectral\nindex from $T-T$ is $<\\beta_{TT}>_3$ = 2.78 $\\pm$ 0.41. Taking into\naccount all five frequencies, we get $<\\beta_{TT}>_5$ = 2.63 $\\pm$\n0.30. It can be noticed that this value agrees well with the\ncorresponding value obtained from spectrum, as expected (see\n\\citet{uyan04}).\n\\newline\n\\newline\n\nThen we calculated mean value of spectral index: regarding spectrum\nand $T-T$ graphs. 
Between 1420, 820 and 408 MHz we obtained\n$<\\beta>_3$ = 2.77 $\\pm$ 0.22, and between 2720, 1420, 820, 408 and\n34.5 MHz we obtained $<\\beta>_5$ = 2.64 $\\pm$ 0.20.\n\n\n\\section{Discussion}\n\\label{sec04}\n\nSpectral index variations with position of more spatial features\nwithin the Cygnus loop can be found in paper \\citet{leah99}. There\nwere studied radio spectral indices by $T-T$ plot method between\n2695, 1420 and 408 MHz and found that the bright radio filaments all\nshow negative curvature (steeper at higher frequency), and regions\ndominated by diffuse emission show positive curvature (flatter at\nhigher frequency).\n\nIt can be noticed that mean value of spectral index for Cygnus loop\nis little higher then the corresponding value obtained from articles\n(see \\citet{uyan04} or \\citet{leah98}). Reason for this is different\nareas used for data of Cygnus loop in these papers. In both papers\nthey used square areas (see page 917 from \\citet{uyan04} and page\n786 from \\citet{leah98}). In these square areas they take into\naccount parts of Cygnus loop and parts of background radiation near\nCygnus loop (these parts we do not take into account). We take in\naccount only higher intensity regions from these areas (see Figures\n\\ref{fig01}--\\ref{fig05}).\n\n\n\\begin{table}[hb!]\n\\centering\n\\caption{Brightness of Cygnus radio loop reduced to 1000\nMH\\lowercase{z}, using spectral index derived in this paper:\n$\\beta_5$ = 2.66 $\\pm$ 0.09 (from the spectrum of all five\nfrequencies 2720, 1420, 820, 408 and 34.5 MH\\lowercase{z})}\n\\begin{tabular}{c|c}\n\\hline\nFrequency & Brightness at 1000 MHz \\\\\n(MHz) & (10$^{-22}$ W\/(m$^2$ Hz Sr)) \\\\\n\\hline\n2720 & 7.05 $\\pm$ 0.86 \\\\\n1420 & 3.84 $\\pm$ 0.52 \\\\\n820 & 4.17 $\\pm$ 0.29 \\\\\n408 & 4.33 $\\pm$ 0.07 \\\\\n34.5 & 5.52 $\\pm$ 1.40 \\\\\n\\hline\n\\end{tabular}\n\\label{tab03}\n\\end{table}\n\n\nIn order to compare our values for brightnesses with results for\nfluxes given in other papers, we transform their calculated fluxes\ninto brightnesses at 1000 MHz. Knowing the loop size $\\Omega$ we\nreduce the flux densities given in Jy to brightnesses given in\n$\\mathrm{10^{-22}\\, W\/(m^2\\, Hz\\, Sr)}$ by equation (\\ref{equ02})\nand then by use of the spectral indices, the brightnesses are\nextrapolated to 1000 MHz according to relation (\\ref{equ03}).\n\nIn our previous paper (\\citet{bork09a}) we transformed the flux\ndensities for Cygnus Loop given in \\citet{roge99} at 22 MHz ($S_\\nu$\n= 1378 Jy) and \\citet{reic03} at 863 MHz ($S_\\nu$ = 184 Jy) into\n$\\Sigma_{1000}$ using loop size $\\Omega = 240' \\times 170'$ from\n\\citet{reic03} and spectral index $\\beta = 2.49$ from\n\\citet{trus02}. We obtained that values from (\\citet{bork09a}) agree\nwith previous data. But now in this paper, we use our derived value\nfor spectral index $\\beta_5 = 2.66 \\pm 0.09$, and obtain even better\nagreement. From flux densities given in mentioned papers, we\ncalculated the following values for radiation intensities:\n$\\Sigma_{1000}$ = $3.22 \\times \\mathrm{10^{-22}\\, W\/(m^2\\, Hz\\,\nSr)}$ from flux given in \\citet{roge99} and $\\Sigma_{1000}$ = $4.84\n\\times \\mathrm{10^{-22}\\, W\/(m^2\\, Hz\\, Sr)}$ from flux given in\n\\citet{reic03}.\n\nUsing our calculated $\\beta_5$ and applying relation (\\ref{equ03})\nto our calculated values of $\\Sigma_\\nu$ at 2720, 1420, 820, 408 and\n34.5 MHz, for $\\Sigma_{1000}$ we obtain values given in Table\n\\ref{tab03}. 
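\n\nFor reference, the reduction to 1000 MHz in Table \ref{tab03} can be reproduced with a few lines (a sketch using relation (\ref{equ03}), the brightnesses from Table \ref{tab01} and $\beta_5$ = 2.66):\n\n\begin{verbatim}\nimport numpy as np\n\nnu     = np.array([2720., 1420., 820., 408., 34.5])   # MHz\nsigma  = np.array([3.64, 3.04, 4.75, 7.83, 50.97])    # 10^-22 W\/(m^2 Hz sr), Table 1\nbeta_5 = 2.66\n\nsigma_1000 = sigma * (1000.0 \/ nu)**(2.0 - beta_5)    # relation (3)\nprint(sigma_1000)    # ~ [7.05, 3.84, 4.17, 4.33, 5.52], cf. Table 3\n\end{verbatim}\n\n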
Absolute errors for brightnesses at 1000 MHz are\ncalculated in this way: $\\Delta \\Sigma_{1000} = \\left(\n{\\Sigma_{1000} \/ \\Sigma_{\\nu 1}} \\right) \\left( {\\Delta \\Sigma_{\\nu\n1} + \\Sigma_{\\nu 1} \\cdot \\ln \\left( {\\nu_1 \/ 1000} \\right) \\cdot\n\\Delta \\beta} \\right)$, where $\\nu_1$ takes values 2720, 1420, 820,\n408 and 34.5 MHz.\n\n\n\\begin{table}[ht!]\n\\centering \\caption{The Cygnus loop areas $\\Omega$ and its angular\nradii $\\theta$ at the five frequencies}\n\\begin{tabular}{c|c|c}\n\\hline\nFrequency (MHz) & $\\Omega$ ($(^\\circ)^2$) & $\\theta$ ($^\\circ$) \\\\\n\\hline\n2720 & 7.52 & 1.547 \\\\\n1420 & 8.64 & 1.658 \\\\\n820 & 9.46 & 1.735 \\\\\n408 & 9.88 & 1.773 \\\\\n34.5 & 8.44 & 1.639 \\\\\n\\hline\n\\end{tabular}\n\\label{tab04}\n\\end{table}\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig10.eps}\n\\caption{The surface brightness to diameter diagram from Berezhko \\&\nV\\\"{o}lk (2004), with value for Cygnus loop added. Three different\ndensities for the interstellar matter (ISM) ($N_\\mathrm{H}$ = 3, 0.3\nand 0.003 cm$^{-3}$) are presented, plus three values for the (SN)\nexplosion energy ($E_\\mathrm{SN}$ = 0.25, 1 and 3 $\\times 10^{51}$ erg).}\n\\label{fig10}\n\\end{figure}\n\n\nInformation about variations of brightness and spectrum over radio\nimage is presented by \\citet{leah98} and by \\citet{uyan02}. The\naverage values reduce the information imprinted in variations of\nbrightness and spectrum over the radio image but give us other\ninteresting information, like explosion energy, surface brightness\nand distance to the Cygnus radio loop.\n\nWith regard to the loop's borders in Figs. \\ref{fig01}--\\ref{fig05},\nwe derived loop area at each frequency, as well as its corresponding\nangular radii (see Table \\ref{tab04}). An angular radius is obtained\nin this way: our derived loop area, when approximated with the\ncircle of the same area, gives possibility of determining angular\nradius as $\\theta$ = $\\sqrt{\\Omega \/ \\pi}$. These areas can be\ncompared to the areas calculated by other authors, e.g.\n9$(^\\circ)^2$.80 at 1420 MHz (\\citet{leah02}) and 11$(^\\circ)^2$.33\nat 863 MHz (\\citet{reic03}). When their areas are recalculated into\nangular radii in the way we described, the result are: 1$^\\circ$.77\nat 1420 MHz (\\citet{leah02}) and 1$^\\circ$.90 at 863 MHz\n(\\citet{reic03}). It can be noticed that we obtained somewhat\nsmaller values, but it has to be taken into account that other\nauthors estimated only the rectangular map size while we estimated\nthe loop size (inside its contour borders) exactly.\n\nThe relation between surface brightness ($\\Sigma$) and diameter\n($D$) for supernova remnants (SNRs) - so-called the $\\Sigma-$D\nrelation - is appropriate for description of the radio brightness\nevolution of these sources. The empirical relations should be used\nwith caution because of the limited usefulness due to selection\neffects (see \\citet{gree09}, \\citet{uros10} and references therein).\nThe updated theoretical relations were derived by \\citet{duri86},\nbased on the Bell's (\\cite{bell78a,bell78b}) diffuse shock particle\nacceleration theory, and by Berezhko and V\\\"{o}lk (\\citet{bere04})\nbased on the non-linear kinetic theory of the diffuse shock\nacceleration mechanism. The $\\Sigma-D$ diagram at 1 GHz with the\ntheoretically derived evolutionary tracks taken from \\citet{bere04}\nwith our value for the Cygnus loop superposed, is shown in Fig.\n\\ref{fig10}. 
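\n\nThe geometrical quantities entering this diagram are easy to verify numerically; the short check below uses the areas from Table \ref{tab04} and the distance quoted in the next paragraph:\n\n\begin{verbatim}\nimport numpy as np\n\nomega = np.array([7.52, 8.64, 9.46, 9.88, 8.44])   # loop areas in deg^2, Table 4\ntheta = np.sqrt(omega \/ np.pi)                     # angular radii in deg\ntheta_bar = theta.mean()                           # ~ 1.67 deg\n\nr_kpc = 0.44                                       # distance from Green's catalogue\nD_pc = 2.0 * r_kpc * np.sin(np.radians(theta_bar)) * 1000.0\nprint(theta, theta_bar, D_pc)                      # D ~ 25.7 pc\n\end{verbatim}\n\n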
In order to superpose\nthe position of the Cygnus loop, we used the following results of our calculations:\nthe mean value of the brightnesses at 1 GHz and the diameter, calculated\nas $D = 2 r \sin \bar \theta$, where $\bar \theta$ = 1$^\circ$.67 is the\nmean angular radius over all five frequencies. The distance $r$\n= 0.44 kpc is taken from Green's catalogue of SNRs (\citet{gree09}).\nWe thus added the following point to the diagram: ($D$, $\Sigma$) = (25.7 pc,\n4.98 $\times$ 10$^{-22}$ W\/(m$^2$ Hz Sr)). From its position in this\ndiagram it can be concluded that the Cygnus loop evolves in a low\ndensity environment and that the initial energy of the supernova (SN)\nexplosion was relatively low (see Fig. \ref{fig10}).\n\n\n\subsection*{Flux density spectrum}\n\label{sec04a}\n\nOn the basis of the brightness values given in the fourth column\nof Table \ref{tab01} and our calculated values of the loop size\n$\Omega$ (Table \ref{tab04}), we derived the flux densities in Jy. The\ncalculated flux densities are given in Table \ref{tab05}. As the\nCygnus loop is a well studied SNR, it is possible to perform a\nmulti-frequency spectral study. In Fig. \ref{fig11} we present a\nsummary of results obtained from other papers and from our\nstudy. Our results are labeled with asterisks. In this figure,\nwe thus have seventeen values of the flux density $S_{\nu}$ of the\nCygnus loop: the values derived in several papers (given in Table 3\nof \citet{uyan04}) and the five values from our paper.\n\nIt can be seen from Fig. \ref{fig11} that our added flux values fit\nvery well among the other fluxes, which supports the correctness of our method\nand indicates that we determined the loop area and its brightness well.\n\n\n\begin{table}[ht!]\n\centering \caption{Flux densities (J\lowercase{y}) that we\ncalculated at the five frequencies}\n\begin{tabular}{c|c}\n\hline\nFrequency (MHz) & Flux density (Jy) \\\\\n\hline\n2720 & 83.37 $\pm$ 2.60 \\\\\n1420 & 80.05 $\pm$ 8.13 \\\\\n820 & 139.35 $\pm$ 12.10 \\\\\n408 & 235.44 $\pm$ 15.36 \\\\\n34.5 & 1309.04 $\pm$ 65.65 \\\\\n\hline\n\end{tabular}\n\label{tab05}\n\end{table}\n\n\n\begin{figure}[ht!]\n\centering\n\includegraphics[width=0.45\textwidth]{fig11.eps}\n\caption{Spectrum of the Cygnus Loop: flux versus frequency,\nobtained from all available flux density values (given in Table 3 of Uyaniker et al. (2004))\nand from the values calculated in this paper. Our values are identified by asterisks.}\n\label{fig11}\n\end{figure}\n\n\n\n\section{Conclusions}\n\label{sec06}\n\nThe main result is the method for the determination of the brightness temperature\ngiven in \citet{bork07}, which was developed for large radio\nloops; we show that this method also works very well for a much smaller\nloop, e.g. the Cygnus loop. We check the method by applying it to the\nCygnus radio loop over a wide range of frequencies. It is in good\nagreement with the method of $T-T$ graphs and with the results obtained by\nother methods. This method is quite simple because we use\nbrightness temperature isolines to define the border of the Cygnus loop.\nOther authors use different square or rectangular areas to\ndetermine the area of the loop and to calculate the spectral indices,\nbrightness temperature and flux density of the loop\n(\citet{uyan04}, page 917 and \citet{leah98}, page 786). 
Also, we\ncalculated the flux of the Cygnus loop and compared our results with the results\nof other authors in different frequency ranges. We show that\nour results are in good agreement with theirs.\n\nWe estimated the temperatures and brightnesses of the Cygnus loop SNR\non the basis of observations of the continuum radio emission at the\nfrequencies 2720, 1420, 820, 408 and 34.5 MHz (this paper and\n\citet{bork09a}). The sensitivities of the brightness temperatures\nare: 5 mK for 2720 MHz, 50 mK for 1420 MHz, 0.2 K for 820 MHz, 1.0 K\nfor 408 MHz and about 700 K $T_\mathrm{b}$ for 34.5 MHz. At the\nfrequency of 2720 MHz the measurements are the most precise (with\nthe smallest relative errors), so the positions of the brightness\ntemperature contours of the loop are the most realistic at this\nfrequency.\n\nThe borders at the five frequencies are somewhat different for this\nSNR, probably due to small random and systematic errors in the\ncalibrated data. Also, we estimate uncertainties of about\n($2 \times \Delta T$) 10 mK for 2720 MHz, 100 mK for 1420 MHz, 0.4 K\nfor 820 MHz, 2.0 K for 408 MHz and about 1400 K $T_\mathrm{b}$ for\n34.5 MHz in the borders due to measurement errors, and there is a\nsmall difference in the absorption of radio emission in the\ninterstellar medium at different wavelengths (\citet{pach70}).\n\nWe determined the average brightness temperature of the Cygnus radio\nloop region after subtraction of the background level. The obtained\nvalues (when all reduced to 1000 MHz for comparison) are in good\nagreement with earlier results. We present the radio continuum\nspectrum of the Cygnus loop using the average brightness temperatures at\nthe five frequencies. As can be seen from Figs. \ref{fig07}\nand \ref{fig08}, the linear fit provides a reliable spectral index.\nWe also present the $T-T$ plots, which enable the calculation of the spectral\nindex as well.\n\nFurthermore, from our results it can be concluded that the Cygnus loop evolves in\na low density environment and that the initial energy of the SN explosion\nwas relatively low. This follows from superposing the position\nof this loop onto the theoretical $\Sigma-D$ diagram from \citet{bere04},\nwhich was derived from the non-linear kinetic theory of the diffuse shock\nacceleration mechanism.\n\nWe showed that the method for defining a loop border and for determining\nthe values of temperature and brightness, which we developed for the\nmain Galactic loops I-VI, could be applicable to all SNRs.\n\n\n\begin{acknowledgments}\nThe authors are grateful to the referee whose suggestions\nsubstantially improved the paper. This research is supported by the\nMinistry of Science of the Republic of Serbia through project No.\n176005.\n\end{acknowledgments}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\n\n\n\n\n\n\section*{Ethics Statement}\n\nAll authors of this work have read the ICLR code of ethics and commit to adhering to it. There is no ethical concern in this work.\n\n\section*{Reproducibility Statement}\n\nThe code for reproducing our results in the experiment section can be found at \url{https:\/\/github.com\/Crazy-Jack\/Cl-InfoNCE}.\n\n\section*{Acknowledgements}\nThe authors would like to thank the anonymous reviewers for helpful comments and suggestions. 
This work is partially supported by the National Science Foundation IIS1763562, IARPA\nD17PC00340, ONR Grant N000141812861, Facebook PhD Fellowship, BMW, National Science Foundation awards 1722822 and 1750439, and National Institutes of Health awards R01MH125740, R01MH096951 and U01MH116925. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors, and no official endorsement should be inferred.\n\n\n\n\n\\section{Method}\n\\label{sec:method}\n\\vspace{-1mm}\n\nWe present a two-stage approach to leverage the structural information from the auxiliary information for weakly-supervised representation learning. The first step (Section~\\ref{subsec:cluster_construct}) clusters data according to auxiliary information, which we consider discrete attributes as the auxiliary information\\footnote{Our approach generalizes to different types of auxiliary information. To help with clarity of the explanations, the paper focuses primarily on discrete attributes, but more details about other auxiliary information can be found in Appendix.}.\nThe second step (Section~\\ref{subsec:cl-infonce}) presents our clustering InfoNCE (Cl-InfoNCE) objective, a contrastive-learning-based approach, to leverage the constructed clusters. We discuss the mathematical intuitions of our approach and include an information-theoretical characterization of the goodness of our learned representations. We also show that Cl-InfoNCE can specialize to recent self-supervised and supervised contrastive approaches. \nFor notations, we use the upper case (e.g., $X$) letter to denote the random variable and the lower case (e.g., $x$) to denote the outcome from the random variable. \n\n\\vspace{-1mm}\n\\subsection{Cluster Construction for Discrete Attributes}\n\\label{subsec:cluster_construct}\n\n\n\\input{fig_tex\/cluster_construction}\n\n\\vspace{-1mm}\nWe consider discrete attributes as the auxiliary information. An example of such auxiliary information is binary indicators of attributes, such as ``short\/long hair'', ``with\/without sunglasses'' or ``short\/long sleeves'', for human photos. We construct the clusters such that data within each cluster will have the same values for a set of attributes. In our running example, selecting hair and sunglasses as the set of attributes, the human photos with ``long hair'' and ``with sunglasses'' will form a cluster. Then, how we determine the set of attributes? First, we rank each attribute according to its entropy in the dataset. Note that if an attribute has high entropy, it means this attribute is distributed diversely. Then, we select the attributes with top-$k$ highest entropy, where $k$ is a hyper-parameter. The reason for this selection process is to make sure the selected attributes are informative. See Figure~\\ref{fig:cluster_construction} for illustration.\n\n\\vspace{-1mm}\n\\subsection{Clustering InfoNCE (Cl-InfoNCE) Objective}\n\\label{subsec:cl-infonce}\n\\vspace{-1mm}\n\nThis section presents how we integrate the clustering information of data into the representation learning process. Recently, the contrastive approaches~\\citep{chen2020simple,caron2020unsupervised} have attracted lots of attention for self-supervised and supervised representation learning. The goal is to learn similar representations for correlated data and dissimilar representations for uncorrelated data. 
To be more specific, the self-supervised setting (e.g., the InfoNCE objective~\\citep{oord2018representation}) regards different views of the same data as correlated and distinct data as uncorrelated; the supervised setting (e.g., the supervised contrastive objective~\\citep{khosla2020supervised}) regards the data with the same downstream label as correlated and the data with distinct labels as uncorrelated. Inspired by these methods, when performing weakly-supervised representation learning, we present to learn similar representations for data within the same cluster assignment, and vice versa. To this end, we extend from the self-supervised InfoNCE objective and introduce the clustering InfoNCE (Cl-InfoNCE) objective that takes the data clustering information into account.\nWith the alphabets $X$ and $Y$ denoting the representations from augmented data:\n\\begin{equation*}\n\\resizebox{\\hsize}{!}{$X={\\rm Feature\\_Encoder\\Big(Augmentation\\_1\\big(Data\\_1\\big)\\Big)}\\,\\,{\\rm and}\\,\\,Y={\\rm Feature\\_Encoder\\Big(Augmentation\\_2\\big(Data\\_2\\big)\\Big)}$}\n\\end{equation*}\nand the alphabet $Z$ denoting the constructed clusters, we formulate Cl-InfoNCE as\n\\vspace{2mm}\n\\begin{definition}[Clustering-based InfoNCE (Cl-InfoNCE)]\n\\vspace{-1mm}\n\\begin{equation}\n{\\rm Cl-InfoNCE}:=\\underset{f}{\\rm sup}\\,\\,\\mathbb{E}_{(x_i, y_i)\\sim {\\mathbb{E}_{z\\sim P_Z}\\big[P_{X|z}P_{Y|z}\\big]}^{\\otimes n}}\\Big[ \\,\\frac{1}{n}\\sum_{i=1}^n {\\rm log}\\,\\frac{e^{f(x_i, y_i)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x_i, y_j)}}\\Big].\n\\label{eq:cl_infonce}\n\\end{equation}\n\\end{definition}\n\\vspace{-1mm}\n$f(x,y)$ is any function that returns a scalar from the input $(x,y)$. As suggested by prior work~\\citep{chen2020simple,he2020momentum}, we choose $f(x,y) = {\\rm cosine}\\big(g(x),g(y)\\big) \/ \\tau$ to be the cosine similarity between non-linear projected $g(x)$ and $g(y)$. $g(\\cdot)$ is a neural network (also known as the projection head~\\citep{chen2020simple,he2020momentum}) and $\\tau$ is the temperature hyper-parameter. $\\{(x_i, y_i)\\}_{i=1}^n$ are $n$ independent copies of $(x,y)\\sim \\mathbb{E}_{z\\sim P_Z}\\big[P_{X|z}P_{Y|z}\\big]$, where it first samples a cluster $z \\sim P_Z$ and then samples $(x,y)$ pair with $x \\sim P_{X|z}$ and $y \\sim P_{Y|z}$. Furthermore, we call $(x_i,y_i)$ as the positively-paired data ($x_i$ and $y_i$ have the same cluster assignment) and $(x_i,y_j)$ ($i\\neq j$) as the negatively-paired data ($x_i$ and $y_j$ have independent cluster assignment). Note that, in practice, the expectation in~\\eqref{eq:cl_infonce} is replaced by the empirical mean of a batch of samples.\n\n\\vspace{-1mm}\n\\paragraph{Mathematical Intuitions.} Our objective is learning the representations $X$ and $Y$ (by updating the parameters in the $\\rm Feature\\_Encoder$) to maximize Cl-InfoNCE. At a colloquial level, the maximization pulls towards the representations of the augmented data within the same cluster and push away the representations of the augmented data from different clusters. 
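\n\nTo make the objective concrete, we provide a minimal PyTorch-style sketch of a batched version of Cl-InfoNCE together with the attribute-based cluster construction of Section~\ref{subsec:cluster_construct}. It is only a sketch of Eq.~(\ref{eq:cl_infonce}): every sample in the batch whose cluster assignment matches that of the anchor is treated as a positive, the function returns the negative of the objective (i.e., a loss to be minimized), and details such as the projection head $g(\cdot)$, the data augmentations and large-batch optimizations are omitted; the function and variable names are illustrative rather than taken from the released code.\n\n\begin{verbatim}\nimport math\nimport torch\nimport torch.nn.functional as F\n\n# Clusters from binary attributes (Sec. 3.1): keep the k attributes with the\n# highest entropy and use the joint value of the kept attributes as cluster id.\ndef attribute_clusters(attrs, k):\n    # attrs: [N, m] 0\/1 matrix of discrete attributes for the whole dataset\n    p = attrs.float().mean(0).clamp(1e-6, 1 - 1e-6)\n    ent = -(p * p.log() + (1 - p) * (1 - p).log())\n    top = ent.topk(k).indices\n    weights = 2 ** torch.arange(k)\n    return (attrs[:, top].long() * weights).sum(1)\n\n# Cl-InfoNCE for one batch: x, y are [n, d] projected features of the two\n# augmented views, z is the [n] vector of cluster assignments.\ndef cl_infonce_loss(x, y, z, tau=0.1):\n    n = x.size(0)\n    x = F.normalize(x, dim=1)\n    y = F.normalize(y, dim=1)\n    logits = x @ y.t() \/ tau                            # f(x_i, y_j): cosine \/ tau\n\n    pos = (z.unsqueeze(0) == z.unsqueeze(1)).float()    # 1 iff same cluster\n\n    # log of the denominator: log (1\/n) sum_j exp(f(x_i, y_j))\n    log_denom = torch.logsumexp(logits, dim=1, keepdim=True) - math.log(n)\n\n    log_ratio = logits - log_denom                      # log-ratio for every pair\n    loss = -(pos * log_ratio).sum(1) \/ pos.sum(1).clamp(min=1)\n    return loss.mean()\n\end{verbatim}\n\n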
At a information-theoretical level, we present the following:\n\\vspace{1.5mm}\n\\begin{theorem}[informal, Cl-InfoNCE maximization learns to include the clustering information]\n\\vspace{-1.5mm}\n\\begin{equation}\n\\begin{split}\n & {\\rm Cl-InfoNCE} \\leq D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big) \\leq H(Z) \\\\\n{\\rm and}\\,\\,& {\\rm the\\,\\,equality\\,\\,holds\\,\\,only\\,\\,when \\,\\,} H(Z|X) = H(Z|Y) = 0,\n\\end{split}\n\\label{eq:max_resulting_repre}\n\\end{equation}\n\\vspace{-5mm}\n\\label{theo:max_resulting_repre}\n\\end{theorem}\nwhere $H(Z)$ is the entropy of $Z$ and $H(Z|X)$ (or $H(Z|Y)$) are the conditional entropy of $Z$ given $X$ (or $Y$). Please find detailed derivations and proofs in Appendix. \n\nThe theorem suggests that Cl-InfoNCE has an upper bound $D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big)$, which measures the distribution divergence between the product of clustering-conditional marginal distributions (i.e., $\\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big]$) and the product of marginal distributions (i.e., $P_{X}P_{Y}$). We give an intuition for $D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big)$: if $D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big)$ is high, then we can easily tell whether $(x,y)$ have the same cluster assignment or not. The theorem also suggests that maximizing Cl-InfoNCE results in the representations $X$ and $Y$ including the clustering information $Z$ ($\\because H(Z|X) = H(Z|Y) = 0$). \n\n\\vspace{-2mm}\n\\paragraph{Goodness of the Learned Representations.} \nIn Theorem~\\ref{theo:max_resulting_repre}, we show that maximizing Cl-InfoNCE learns the representations ($X$ and $Y$) to include the clustering ($Z$) information. Therefore, to characterize how good is the learned representations by maximizing Cl-InfoNCE or to perform cross validation, we can instead study the relations between $Z$ and the downstream labels (denoting by $T$). In particular, we can use information-theoretical metrics such as the mutual information $I(Z;T)$ and the conditional entropy $H(Z|T)$ to characterize the goodness of the learned representations. $I(Z;T)$ measures how relevant the clusters and the labels, and $H(Z|T)$ measures how much redundant information in the clusters that are irrelevant to the labels. \nFor instance, we can expect good downstream performance for our auxiliary-information-infused representations when having high mutual information and low conditional entropy between the auxiliary-information-determined clusters and the labels. It is worth noting that, when $Z$ and $T$ are both discrete variables, computing $I(Z;T)$ and $H(Z|T)$ would be much easier than computing $I(X;T)$ and $H(X|T)$. \n\n\\vspace{-2mm}\n\\paragraph{Generalization of Recent Self-supervised and Supervised Contrastive Approaches.} Cl-InfoNCE (\\eqref{eq:cl_infonce}) serves as an objective that generalizes to different levels of supervision according to how we construct the clusters ($Z$). When $Z=$ instance id (i.e., each cluster only contains one instance), $\\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big]$ specializes to $P_{XY}$ and Cl-InfoNCE specializes to the InfoNCE objective~\\citep{oord2018representation}, which aims to learn similar representations for augmented variants of the same data and dissimilar representations for different data. 
InfoNCE is the most popular used self-supervised contrastive learning objective~\\citep{chen2020simple,he2020momentum,tsai2021multiview}. When $Z=$ downstream labels, Cl-InfoNCE specializes to the objective described in {\\em Supervised Contrastive Learning}~\\citep{khosla2020supervised}, which aims to learn similar representations for data that are from the same downstream labels and vice versa. In our paper, the clusters $Z$ are determined by the auxiliary information, and we aim to learn similar representations for data sharing the same auxiliary information and vice versa. This process can be understood as weakly supervised contrastive learning. \nTo conclude, Cl-InfoNCE is a clustering-based contrastive learning objective. \nBy differing its cluster construction, Cl-InfoNCE interpolates among unsupervised, weakly supervised, and supervised representation learning.\n\n\\iffalse\n\\vspace{-3mm}\n\\paragraph{Comparison to Learning to Predict the Clusters Assignments.} An alternative way to leverage the data clustering information is learning to predict the cluster assignment ($Z$) from the representations ($X$ and $Y$).\nThis approach requires building an additional classifier between the representations and the cluster, and it will be inefficient to optimize this classifier when having a large number of clusters. The reason is that the number of the classifier's parameters is proportional to the number of clusters. As a comparison, Cl-InfoNCE contains no additional classifier, and hence it is computationally more efficient and can scale up with a large number of clusters. Last, the most used objective for learning to predict the clusters is the cross-entropy loss. And evidences~\\citep{khosla2020supervised} show that, compared to the cross-entropy loss, the contrastive objective (e.g., our presented Cl-InfoNCE) is more robust to natural corruptions of data and stable to hyper-parameters and optimizers settings.\n\n\\vspace{-3mm}\n\\paragraph{Comparison to Contrastive Multi-view Coding with Auxiliary Information.} A direct way to leverage auxiliary information is treating auxiliary information as another view of data then adopting the contrastive multi-view coding (CMC) objective for representation learning. On the contrary, our method indirectly leverages auxiliary information. To be more precise, our method performs additional clustering construction according to the auxiliary information and then considers the Cl-InfoNCE objective to learn with the constructed clusters. We argue that modeling the clustering information can be easier than modeling the raw auxiliary information, especially when the auxiliary information is complex or high-dimensional.\n\\fi\n\\section{Introduction}\n\\label{sec:intro}\n\nSelf-supervised learning (SSL) designs learning objectives that use data's self-information but not labels. As a result, SSL empowers us to leverage a large amount of unlabeled data to learn good representations, and its applications span computer vision~\\citep{chen2020simple,he2020momentum}, natural language processing~\\citep{peters2018deep,devlin2018bert} and speech processing~\\citep{schneider2019wav2vec,baevski2020wav2vec}. More than leveraging only data's self-information, this paper is interested in a weakly-supervised setting by assuming access to additional sources as auxiliary information for data, such as the hashtags as auxiliary attributes information for Instagram images. The auxiliary information can provide valuable but often noisy information. 
Hence, it raises a research challenge of how we can effectively leveraging useful information from auxiliary information.\n\n\n\n\n\n\n\n\n\n\n\nWe argue that a form of the valuable information provided by the auxiliary information is its implied data clustering information. For example, we can expect an Instagram image to be semantically more similar to the image with the same hashtags than those with different hashtags. Hence, our first step is constructing auxiliary-information-determined clusters. Specifically, we build data clusters such that the data from the same cluster have similar auxiliary information, such as having the same data auxiliary attributes. Then, our second step is to minimize the intra-cluster difference of the representations. Particularly, we present a contrastive approach - the clustering InfoNCE (Cl-InfoNCE) objective to learn similar representations for augmented variants of data within the same cluster and dissimilar representations for data from different clusters. To conclude, the presented two-stage approach leverages the structural information from the auxiliary information, then integrating the structural information into a contrastive representation learning process. See Figure~\\ref{fig:illus} for an overview of our approach.\n\nWe provide the following analysis and observations to better understand our approach. First, we characterize the goodness of the Cl-InfoNCE-learned representations via the statistical relationships between the constructed clusters and the downstream labels. A resulting implication is that we can expect better downstream performance for our weakly-supervised representations when having i) higher mutual information between the labels and the auxiliary-information-determined clusters and ii) lower conditional entropy of the clusters given the labels. Second, Cl-InfoNCE generalizes recent contrastive learning objectives by changing the way to construct the clusters. In particular, when each cluster contains only one data point, Cl-InfoNCE becomes a conventional self-supervised contrastive objective (e.g., the InfoNCE objective~\\citep{oord2018representation}). When the clusters are built using directly the labels, Cl-InfoNCE \nbecomes a supervised contrastive objective (e.g., the objective considered by~\\citet{khosla2020supervised}). These generalizations imply that our approach (auxiliary-information-determined clusters + Cl-InfoNCE) interpolates between conventional self-supervised and supervised representation learning.\n\n\\input{fig_tex\/illus}\n\n\nWe conduct experiments on learning visual representations using UT-zappos50K~\\citep{yu2014fine}, CUB-200-2011~\\citep{wah2011caltech}, Wider Attribute~\\citep{li2016human} and ImageNet-100~\\citep{russakovsky2015imagenet} datasets.\nFor the first set of experiments, we shall see how much improvement can the auxiliary information bring to us. We consider the {\\em derivative} auxiliary information, which means the auxiliary information comes from the datasets: the discrete attributes from UT-zappos50K, CUB-200-2011, and Wider Attribute. We show that the auxiliary-information-infused weakly-supervised representations, compared to conventional self-supervised representation, have a much better performance on downstream tasks. 
We consider two baselines that also leverage auxiliary information: i) predicting the auxiliary-information-induced clusters with cross-entropy loss and ii) adopting the contrastive multi-view coding (CMC)~\\citep{tian2020contrastive} method when treating auxiliary information as another view of data. Our approach consistently outperforms the cross-entropy method and performs better than the CMC method in most cases. \nFor the second set of experiments, we focus on the analysis of Cl-InfoNCE to study how well it works with unsupervised constructed clusters (K-means clusters). We find it achieves better performance comparing to the clustering-based self-supervised learning approaches, such as the Prototypical Contrastive Learning (PCL)~\\citep{li2020prototypical} method. The result suggests that the K-means method + Cl-InfoNCE can be a strong baseline for the conventional self-supervised learning setting. \n\n\n\n\\section{Comparison with others related in literature}\n\n\\subsection{Experiments: Comparison with Weakly supervised contrastive learning\\citep{zheng2021weakly} }\n\nWe would like to point out that a concurrent work \\cite{zheng2021weakly} presented a similar idea on weakly-supervised contrastive learning in ICCV 2021. We would like to point out the reason it is a concurrent work with ours. \\cite{zheng2021weakly} is made publicly available on 10\/05\/2021, which is the same day as the the paper submission deadline for ICLR'22. To be more precise, ICCV publicly released this paper on 10\/05\/2021, and the paper's arxiv version and code are available on 10\/10\/2021. The zero time overlap suggests that our two works are independent and concurrent.\n\n\\paragraph{Similarity and Difference} We acknowledge that the two works share the similar idea of utilizing weak labels of data in contrastive learning. \\cite{zheng2021weakly} motivates by preventing class collision during instance-wise contrastive learning (random data that belongs to the same category will possibly get falsely pushed away in instance-wise contrastive learning), and ours motivates by exploring the structural information of data within contrastive learning, followed by providing information-theoretic analysis to explain how different structural information can affect the learned representations. Task-wise \\cite{zheng2021weakly} focuses on unsupervised (no access to data labels) and semi-supervised (access to a few data labels) representation learning, and ours focuses on weakly supervised (access to side information such as data attributes) and unsupervised representation learning. For the common unsupervised representation learning part, \\cite{zheng2021weakly} presents to generate weak labels using connected components labeling process, and ours generates weak labels using K-means clustering.\n\n\\paragraph{Empirical Results} We observed that the performance on ImageNet-100 reported in [1] looks better than ours (79.77 \\cite{zheng2021weakly} v.s. ours 77.9 Figure 5). However, the experimental settings differ a lot. First, the \\textbf{datasets} are different despite the same name: \\cite{zheng2021weakly} considers ImageNet-100 by selecting the first 100 class of the ILSVRC 2012 challenge, and we select a different set of 100 classes (details shown in the Appendix E). Second, the \\textbf{batch size} is different: \\cite{zheng2021weakly} considers 2048 , and ours considers 128. 
Third, the \\textbf{projection heads in architecture} are different: \\cite{zheng2021weakly} uses 2 projection heads (each with 4096 hidden units) with two objectives, one is for InfoNCE and the other is for the proposed Weakly Supervised Contrastive Learning loss; whereas ours uses one projection head with 2048 hidden units for Cl-InfoNCE objective only. Although our main experiments have demonstrated that Cl-InfoNCE alone can achieve competitive performance, we acknowledge that adding InfoNCE objective with an additional linear projection head would further improve the learned representation. \n\nTo fairly compare our Cl-InfoNCE loss with their proposed Weakly Supervised Contrastive objective, we add an additional head trained with InfoNCE along with our Cl-InfoNCE objective. Experiments are conducted on our version of ImageNet100 with the controlled set up: same network architecture of resnet50, same batch size of 384, same training epochs of 200, same projection head (2048-2048-128), the same optimizer and linear evaluation protocols, etc. Our Kmeans cluster number K is chosen to be 2500 via a grid search from $\\{100, 1000, 2500, 5000, 10,000\\}$. The results are shown below Table~\\ref{tab:iccv-compare}. \n\n\\input{tbl_tex\/iccv-compare}\n\nFrom the results, we can see that the two methods' performances are similar. Our work and theirs [1] are done independently and concurrently, and both works allow a broader understanding of weakly supervised contrastive learning. \n\n\n\\subsection{Experiments: Comparison with IDFD~\\citep{tao2021clustering} }\nIDFD~\\citep{tao2021clustering} presents to learn representations that are clustering friendly (from a spectral clustering viewpoint) during the instance discrimination (ID) contrastive learning process. Although it includes both ideas of clustering and contrastive learning, IDFD~\\citep{tao2021clustering} differs from our paper fundementally because they does not utilize the constructed clusters as weak labels to train contrastive objective. However, IDFD~\\citep{tao2021clustering} can still be considered as a self-supervised representation learning method, hence we perform experiments to compare our unsupervised setting (Cl-InfoNCE + Kmeans method) with their proposed IDFD on CIFAR10 Dataset~\\citep{krizhevsky2009learning}. To provide a fair comparison with IDFD~\\citep{tao2021clustering}, we stick to the training paradigm of IDFD where they replaces Resnet-50 with Resnet-18. The batch size of 128 is used following their report. Since IDFD~\\citep{tao2021clustering} was focusing on clustering quality and didn't report the linear evaluation protocol, we use the released code of IDFD~\\citep{tao2021clustering} to re-train the model meanwhile using both the cluster accuracy and the linear evaluation protocal as evaluation metrics. We train both methods for 1000 epochs for a fair comparison. The results are presented in Table~\\ref{tb:iclr-comparison}. \n\n\n\\input{tbl_tex\/iclr-comparison}\n\n\n\nNote that \\citep{tao2021clustering} proposed 2 methods (IDFD and IDFO), we choose the compare with IDFD because \\textbf{(i)} IDFO is very unstable, \\textbf{(ii)} IDFD\/IDFO perform at-par for the best performance based on Figure2 in \\citep{tao2021clustering} and \\textbf{(iii)} \\citep{tao2021clustering} only officially releases code for IDFD. 
We can observe that our method exceeds IDFD on in terms of top-1 classification accuracy during linear evaluation and also improve the raw clustering accuracy score, indicating integrating weak labels from unsupervised clustering with contrastive objectives would help both representation learning and the unsupervised clustering task. \n\n\n\n\n\n\n\n\n\n\n\n\n}\n\n\n\\section{Data's Hierarchy Information as Auxiliary Information}\n\\label{sec:hierarchy}\n\n\\input{fig_tex\/wordnet_wrap}\nIn the main text, we select the discrete attributes as the auxiliary information of data, then presenting data cluster construction according to the discrete attributes. We combine the constructed clusters and the presented Cl-InfoNCE objective together for learning weakly-supervised representations. In this section, we study an alternative type of the auxiliary information - data labels' hierarchy information, more specifically, the WordNet hierarchy~\\citep{miller1995wordnet}, illustrated in the right figure. In the example, we present the WordNet hierarchy of the label ``Henslow's Sparrow'', where only the WordNet hierarchy would be seen during training but not the label. \n\n\\subsection{Cluster Construction for WordNet Hierarchy}\n\\label{subsec:cluster_hier}\nHow do we construct the data clusters according to the WordNet hierarchy? In the above example, ``vertebrate'' and ``bird'' can be seen as the coarse labels of data. We then construct the clusters such that data within each cluster will have the same coarse label. Now, we explain how we determine which coarse labels for the data. First, we represent the WordNet hierarchy into a tree structure (each children node has only one parent node). \nThen, we choose the coarse labels to be the nodes in the level $l$ in the WordNet tree hierarchy (the root node is level $1$). $l$ is a hyper-parameter. We illustrate the process in the below figure.\n\\includegraphics[width=1.0\\textwidth]{fig\/wordnet_cluster_illus.pdf}\n\n\\subsection{Experiments: Data-Hierarchy-Determined Clusters + Cl-InfoNCE}\n\\label{subsec:hierarchy_exp}\n\nThe experimental setup and the comparing baselines are similar to Section 4.3 in the main text, but now we consider the WordNet~\\citep{miller1995wordnet} hierarchy as the auxiliary information. As discussed in prior subsection, we construct the clusters $Z$ such that the data within a cluster have the same parent node in the level $l$ in the data's WordNet tree hierarchy. $l$ is the hyper-parameter\\footnote{Note that we do not compare with the CMC method for fair comparisons with other method. The reason is that the CMC method will leverage the entire tree hierarchy, instead of a certain level in the tree hierarchy.}. \n\n\\vspace{-2mm}\n\\paragraph{Results.} Figure~\\ref{fig:hierarchy} presents our results. First, we look at the leftmost plot, and we have several similar observations when having the data attributes as the auxiliary information. One of them is that our approach consistently outperforms the auxiliary-information-determined clusters + cross-entropy loss. Another of them is that the weakly supervised representations better close the gap with the supervised representations. Second, as discussed in prior subsection, the WordNet data hierarchy clusters can be regarded as the coarse labels of the data. 
Hence, as the hierarchy level $l$ increases, we observe improved performance (see the leftmost plot) and increasing mutual information $I(Z;T)$ between the clusters $Z$ and the labels $T$ (see the middle plot). Note that $H(Z|T)$ remains zero (see the rightmost plot) since the coarse labels (the intermediate nodes) can be determined by the downstream labels (the leaf nodes) under the tree hierarchy structure. Third, we discuss the conventional self-supervised setting with the special case when $Z=$ instance ID. $Z$ as the instance ID has the highest $I(Z;T)$ (see the middle plot) but also the highest $H(Z|T)$ (see the rightmost plot), and we observe that the conventional self-supervised representations perform the worst (see the leftmost plot). We conclude that, when using clustering-based representation learning approaches, we shall not rely purely on the mutual information between the data clusters and the downstream labels to determine the goodness of the learned representations; we shall also take the redundant information in the clusters into account.\n \n\n\\input{fig_tex\/hierarchy}\n\n\n\n\\section{Theoretical Analysis}\n\nIn this section, we provide a theoretical analysis of the presented Cl-InfoNCE objective. We recall the definition of Cl-InfoNCE and our presented theorem:\n\n\\begin{definition}[Clustering-based InfoNCE (Cl-InfoNCE), restating Definition 3.1 in the main text] \n\\begin{equation*}\n{\\rm Cl-InfoNCE}:=\\underset{f}{\\rm sup}\\,\\,\\mathbb{E}_{(x_i, y_i)\\sim {\\mathbb{E}_{z\\sim P_Z}\\big[P_{X|z}P_{Y|z}\\big]}^{\\otimes n}}\\Big[\\,\\frac{1}{n}\\sum_{i=1}^n {\\rm log}\\,\\frac{e^{f(x_i, y_i)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x_i, y_j)}}\\Big],\n\\end{equation*}\n\\label{defn:cl-infonce-2}\n\\end{definition}\n\\begin{theorem}[informal, Cl-InfoNCE maximization learns to include the clustering information, restating Theorem 3.2 in the main text]\n\\begin{equation*}\n\\begin{split}\n & {\\rm Cl-InfoNCE} \\leq D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big) \\leq H(Z) \\\\\n{\\rm and}\\,\\,& {\\rm the\\,\\,equality\\,\\,holds\\,\\,only\\,\\,when \\,\\,} H(Z|X) = H(Z|Y) = 0.\n\\end{split}\n\\end{equation*}\n\\label{theo:max_resulting_repre_2}\n\\end{theorem}\n\nOur goal is to prove Theorem~\\ref{theo:max_resulting_repre_2}. For a better presentation flow, we split the proof into three parts:\n\\begin{itemize}\n \\item Proving ${\\rm Cl-InfoNCE} \\leq D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big)$ in Section~\\ref{subsec:proof_a}\n \\item Proving $D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big) \\leq H(Z)$ in Section~\\ref{subsec:proof_b}\n \\item Proving ${\\rm Cl-InfoNCE} {\\rm \\,\\,maximizes\\,\\,at\\,\\,} H(Z) {\\rm \\,\\,when \\,\\,} H(Z|X) = H(Z|Y) = 0$ in Section~\\ref{subsec:proof_c}\n\\end{itemize}\n\n\\subsection{Part I - Proving ${\\rm Cl-InfoNCE} \\leq D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big)$}\n\\label{subsec:proof_a}\nThe proof requires the following lemma. \n\n\\begin{lemma}[Theorem 1 by~\\citet{song2020multi}] Let $\\mathcal{X}$ and $\\mathcal{Y}$ be the sample spaces for $X$ and $Y$, $f$ be any function: $(\\mathcal{X} \\times \\mathcal{Y}) \\rightarrow \\mathbb{R}$, and $\\mathcal{P}$ and $\\mathcal{Q}$ be the probability measures on $\\mathcal{X} \\times \\mathcal{Y}$. 
Then,\n$$\n\\underset{f}{\\rm sup}\\,\\,\\mathbb{E}_{(x, y_1)\\sim \\mathcal{P}, (x, y_{2:n})\\sim \\mathcal{Q}^{\\otimes (n-1)}}\\Big[ {\\rm log}\\,\\frac{e^{f(x, y_1)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x, y_j)}}\\Big] \\leq D_{\\rm KL} \\Big( \\mathcal{P} \\,\\|\\, \\mathcal{Q} \\Big).\n$$\n\\label{lemm:infonce_like}\n\\end{lemma}\n\n\n\n\\iffalse\n\\begin{lemma}[\\citet{nguyen2010estimating}] Let $\\mathcal{X}$ and $\\mathcal{Y}$ be the sample spaces for $X$ and $Y$, $f$ be any function: $(\\mathcal{X} \\times \\mathcal{Y}) \\rightarrow \\mathbb{R}$, and $\\mathcal{P}$ and $\\mathcal{Q}$ be the probability measures on $\\mathcal{X} \\times \\mathcal{Y}$. Then,\n$$\nD_{\\rm KL} \\Big( \\mathcal{P} \\,\\|\\, \\mathcal{Q} \\Big) = \\underset{f}{\\rm sup} \\,\\mathbb{E}_{(x, y)\\sim\\mathcal{P}} [f(x,y)] - \\mathbb{E}_{(x, y)\\sim\\mathcal{Q}} [e^{f(x,y)}] + 1.\n$$\n\\begin{proof}\nThe second-order functional derivative of the objective is $-e^{f(x,y)}\\cdot d\\mathcal{Q}$, which is always negative. The negative second-order functional derivative implies the objective has a supreme value. \nThen, take the first-order functional derivative and set it to zero:\n\\begin{equation*}\nd \\mathcal{P} - e^{f(x,y)}\\cdot d \\mathcal{Q} = 0.\n\\end{equation*}\nWe then get optimal $f^*(x,y) = {\\rm log}\\,\\frac{d\\mathcal{P}}{d\\mathcal{Q}}$. Plug in $f^*(x,y)$ into the objective, we obtain\n\\begin{equation*}\n\\mathbb{E}_{\\mathcal{P}} [f^*(x,y)] - \\mathbb{E}_{\\mathcal{Q}} [e^{f^*(x,y)}] + 1 = \\mathbb{E}_{\\mathcal{P}} [{\\rm log}\\,\\frac{d\\mathcal{P}}{d\\mathcal{Q}}] = D_{\\rm KL} \\Big( \\mathcal{P} \\,\\|\\, \\mathcal{Q} \\Big).\n\\end{equation*}\n\\end{proof}\n\\label{lemm:kl}\n\\end{lemma}\n\n\\begin{lemma} \n$\n\\underset{f}{\\rm sup}\\,\\,\\mathbb{E}_{(x, y_1)\\sim \\mathcal{P}, (x, y_{2:n})\\sim \\mathcal{Q}^{\\otimes (n-1)}}\\Big[ {\\rm log}\\,\\frac{e^{f(x, y_1)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x, y_j)}}\\Big] \\leq D_{\\rm KL} \\Big( \\mathcal{P} \\,\\|\\, \\mathcal{Q} \\Big).\n$\n\\begin{proof}\nFrom Lemma~\\ref{lemm:kl}, $\\forall f$, we have\n\\begin{equation*}\n\\small\n\\begin{split}\n D_{\\rm KL} \\Big( \\mathcal{P} \\,\\|\\, \\mathcal{Q} \\Big) & = \\mathbb{E}_{(x, y_{2:n})\\sim \\mathcal{Q}^{\\otimes (n-1)}}\\Bigg[D_{\\rm KL} \\Big( \\mathcal{P} \\,\\|\\, \\mathcal{Q} \\Big) \\Bigg]\n \\\\\n & \\geq \\,\\mathbb{E}_{(x, y_{2:n})\\sim \\mathcal{Q}^{\\otimes (n-1)}}\\Bigg[ \\mathbb{E}_{(x, y_1)\\sim \\mathcal{P}} \\Big[ {\\rm log}\\,\\frac{e^{f(x, y_1)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x, y_j)}} \\Big] - \\mathbb{E}_{(x, y_1)\\sim \\mathcal{Q}} \\Big[ \\frac{e^{f(x, y_1)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x, y_j)}} \\Big] + 1 \\Bigg] \\\\\n & = \\mathbb{E}_{(x, y_{2:n})\\sim \\mathcal{Q}^{\\otimes (n-1)}}\\Bigg[ \\mathbb{E}_{(x, y_1)\\sim \\mathcal{P}} \\Big[ {\\rm log}\\,\\frac{e^{f(x, y_1)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x, y_j)}} \\Big] - 1 + 1 \\Bigg] \\\\\n & = \\mathbb{E}_{(x, y_1)\\sim \\mathcal{P}, (x, y_{2:n})\\sim \\mathcal{Q}^{\\otimes (n-1)}}\\Big[ {\\rm log}\\,\\frac{e^{f(x, y_1)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x, y_j)}}\\Big].\n\\end{split}\n\\end{equation*}\nThe first line comes from the fact that $D_{\\rm KL} \\Big( \\mathcal{P} \\,\\|\\, \\mathcal{Q} \\Big)$ is a constant. The second line comes from Lemma~\\ref{lemm:kl}. 
The third line comes from the fact that $(x, y_1)$ and $(x, y_{2:n})$ are interchangeable when they are all sampled from $\\mathcal{Q}$.\n\nTo conclude, since the inequality works for all $f$, and hence the supreme is also less than $D_{\\rm KL} \\Big( \\mathcal{P} \\,\\|\\, \\mathcal{Q} \\Big)$.\n\\end{proof}\n\\label{lemm:infonce_like}\n\\end{lemma}\n\nNote that Lemma~\\ref{lemm:infonce_like} does not require $n \\rightarrow \\infty$, which is a much more practical setting compared to the analysis made only when $n\\rightarrow \\infty$. And a remark is that the equality holds in Lemma~\\ref{lemm:infonce_like} when $n\\rightarrow \\infty$. \n\\fi\n\nNow, we are ready to prove the following lemma: \n\\begin{lemma}[Proof Part I]\n$\n {\\rm Cl-InfoNCE}:=\\underset{f}{\\rm sup}\\,\\,\\mathbb{E}_{(x_i, y_i)\\sim {\\mathbb{E}_{z\\sim P_Z}\\big[P_{X|z}P_{Y|z}\\big]}^{\\otimes n}}\\Big[\\,\\frac{1}{n}\\sum_{i=1}^n {\\rm log}\\,\\frac{e^{f(x_i, y_i)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x_i, y_j)}}\\Big] \\leq D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big).\n$\n\\begin{proof}\nBy defining $\\mathcal{P} = \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big]$ and $\\mathcal{Q} = P_XP_Y$, we have \n$$\n\\mathbb{E}_{(x, y_1)\\sim \\mathcal{P}, (x, y_{2:n})\\sim \\mathcal{Q}^{\\otimes (n-1)}}\\Big[ {\\rm log}\\,\\frac{e^{f(x, y_1)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x, y_j)}}\\Big] = \\mathbb{E}_{(x_i, y_i)\\sim {\\mathbb{E}_{z\\sim P_Z}\\big[P_{X|z}P_{Y|z}\\big]}^{\\otimes n}}\\Big[\\,\\frac{1}{n}\\sum_{i=1}^n {\\rm log}\\,\\frac{e^{f(x_i, y_i)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x_i, y_j)}}\\Big].\n$$\nPlug in this result into Lemma~\\ref{lemm:infonce_like} and we conclude the proof.\n\\end{proof}\n\\label{lemm:part_a}\n\\end{lemma}\n\n\\subsection{Part II - Proving $D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big) \\leq H(Z)$}\n\\label{subsec:proof_b}\n\nThe proof requires the following lemma:\n\\begin{lemma}\n$\nD_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big) \\leq {\\rm min}\\,\\Big\\{{\\rm MI}(Z;X), {\\rm MI}(Z;Y)\\Big\\}.\n$\n\n\\begin{proof}\n\\begin{equation*}\n\\begin{split}\n & {\\rm MI}(Z;X) - D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big) \\\\\n = & \\int_z p(z) \\int_x p(x|z) \\,\\,{\\rm log}\\,\\,\\frac{p(x|z)}{p(x)} {\\rm d}x {\\rm d}z - \\int_z p(z) \\int_x p(x|z) \\int_y p(y|z) \\,\\,{\\rm log}\\,\\,\\frac{\\int_{z'}p(z')p(x|z')p(y|z'){\\rm d}z'}{p(x)p(y)} {\\rm d}x {\\rm d}y {\\rm d}z \\\\\n = & \\int_z p(z) \\int_x p(x|z) \\,\\,{\\rm log}\\,\\,\\frac{p(x|z)}{p(x)} {\\rm d}x {\\rm d}z - \\int_z p(z) \\int_x p(x|z) \\int_y p(y|z) \\,\\,{\\rm log}\\,\\,\\frac{\\int_{z'}p(z'|y)p(x|z'){\\rm d}z'}{p(x)} {\\rm d}x {\\rm d}y {\\rm d}z \\\\\n = & \\int_z p(z) \\int_x p(x|z) \\int_y p(y|z) \\,\\,{\\rm log}\\,\\,\\frac{p(x|z)}{\\int_{z'}p(z'|y)p(x|z'){\\rm d}z'}{\\rm d}x {\\rm d}y {\\rm d}z \\\\\n = & - \\int_z p(z) \\int_x p(x|z) \\int_y p(y|z) \\,\\,{\\rm log}\\,\\,\\frac{\\int_{z'}p(z'|y)p(x|z'){\\rm d}z'}{p(x|z)}{\\rm d}x {\\rm d}y {\\rm d}z \\\\\n \\geq & - \\int_z p(z) \\int_x p(x|z) \\int_y p(y|z) \\,\\,\\Bigg(\\frac{\\int_{z'}p(z'|y)p(x|z'){\\rm d}z'}{p(x|z)} - 1\\Bigg){\\rm d}x {\\rm d}y {\\rm d}z \\,\\,\\Big(\\because {\\rm log}\\,t \\leq t-1 \\Big) \\\\\n = &\\,\\, 0.\n\\end{split}\n\\end{equation*}\nHence, ${\\rm MI}(Z;X) \\geq D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big)$. 
Likewise, ${\\rm MI}(Z;Y) \\geq D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big)$. We complete the proof by combining the two results.\n\\end{proof}\n\\label{lemm:less_than_mi}\n\\end{lemma}\n\nNow, we are ready to prove the following lemma:\n\\begin{lemma}[Proof Part II]\n$D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big) \\leq H(Z).$\n\\begin{proof}\nCombining Lemma~\\ref{lemm:less_than_mi} and the fact that ${\\rm min}\\,\\Big\\{{\\rm MI}(Z;X), {\\rm MI}(Z;Y)\\Big\\} \\leq H(Z)$, we complete the proof. Note that we consider $Z$ to be the clustering assignment, which is discrete rather than continuous; the inequality holds for discrete $Z$, but may not hold for continuous $Z$.\n\\end{proof}\n\\label{lemm:part_b}\n\\end{lemma}\n\n\\subsection{Part III - Proving ${\\rm Cl-InfoNCE} {\\rm \\,\\,maximizes\\,\\,at\\,\\,} H(Z) {\\rm \\,\\,when \\,\\,} H(Z|X) = H(Z|Y) = 0$}\n\\label{subsec:proof_c}\n\nWe directly provide the following lemma:\n\\begin{lemma}[Proof Part III]\n${\\rm Cl-InfoNCE} {\\rm \\,\\,maximizes\\,\\,at\\,\\,} H(Z) {\\rm \\,\\,when \\,\\,} H(Z|X) = H(Z|Y) = 0.$\n\\begin{proof}\nWhen $H(Z|Y) = 0$, $p(Z|Y=y)$ is Dirac. The objective satisfies\n\\begin{equation*}\n\\begin{split}\n & D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big) \\\\\n = & \\int_z p(z) \\int_x p(x|z) \\int_y p(y|z) \\,\\,{\\rm log}\\,\\,\\frac{\\int_{z'}p(z')p(x|z')p(y|z'){\\rm d}z'}{p(x)p(y)} {\\rm d}x {\\rm d}y {\\rm d}z \\\\\n = & \\int_z p(z) \\int_x p(x|z) \\int_y p(y|z) \\,\\,{\\rm log}\\,\\,\\frac{\\int_{z'}p(z'|y)p(x|z'){\\rm d}z'}{p(x)} {\\rm d}x {\\rm d}y {\\rm d}z \\\\\n = & \\int_z p(z) \\int_x p(x|z) \\int_y p(y|z) \\,\\,{\\rm log}\\,\\,\\frac{p(x|z)}{p(x)} {\\rm d}x {\\rm d}y {\\rm d}z = {\\rm MI}\\Big( Z;X\\Big).\n\\end{split}\n\\end{equation*}\nThe second-last equality comes from the fact that, when $p(Z|Y=y)$ is Dirac, $p(z'|y) = 1 \\,\\,\\forall z' = z$ and $p(z'|y) = 0 \\,\\,\\forall z' \\neq z$. Combining this with the fact that ${\\rm MI}\\Big( Z;X\\Big) = H(Z)$ when $H(Z|X)=0$, we know \n$D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big) = H(Z) $ when $H(Z|X)=H(Z|Y)=0$.\n\nFurthermore, by Lemma~\\ref{lemm:part_a} and Lemma~\\ref{lemm:part_b}, we complete the proof.\n\\end{proof}\n\\label{lemm:part_c}\n\\end{lemma}\n\n\\subsection{Bringing Everything Together}\n\nWe bring Lemmas~\\ref{lemm:part_a},~\\ref{lemm:part_b}, and~\\ref{lemm:part_c} together and complete the proof of Theorem~\\ref{theo:max_resulting_repre_2}.\n\n\n\n\\section{Algorithms}\nIn this section, we provide algorithms for our experiments. We consider two sets of experiments. The first one is K-means clusters + Cl-InfoNCE (see Section 4.4 in the main text), where the clusters involved in Cl-InfoNCE are iteratively obtained via K-means clustering on top of data representations. 
The second one is auxiliary-information-determined clusters + Cl-InfoNCE (see Section 4.3 in the main text and Section~\\ref{subsec:hierarchy_exp}), where the clusters involved in Cl-InfoNCE are pre-determined according to data attributes (see Section 4.3 in the main text) or data hierarchy information (see Section~\\ref{subsec:hierarchy_exp}).\n\n\\paragraph{K-means clusters + Cl-InfoNCE}\nWe present here the algorithm for K-means clusters + Cl-InfoNCE. At each iteration of the algorithm, we perform the K-means clustering algorithm on top of the data representations to obtain cluster assignments. The cluster assignments are then used in our Cl-InfoNCE objective.\n\n\\begin{algorithm}[H]\n\\SetAlgoLined\n\\KwResult{Pretrained Encoder $f_{\\theta}(\\cdot)$}\n $f_{\\theta}(\\cdot)\\leftarrow \\text{Base Encoder Network}$\\;\n Aug $(\\cdot)\\leftarrow$ Obtaining Two Variants of Augmented Data via Augmentation Functions\\;\nEmbedding $\\leftarrow$ Gathering data representations by passing data through $f_{\\theta}(\\cdot)$\\;\n Clusters $\\leftarrow$\\textbf{K-means-clustering}(Embedding)\\;\n \\For {epoch in 1,2,...,N}{\n \\For{batch in 1,2,...,M}{\n data1, data2 $\\leftarrow$ Aug(data\\_batch)\\;\n feature1, feature2 $\\leftarrow$ $f_{\\theta}$(data1), $f_{\\theta}$(data2)\\;\n $L_{\\text{Cl-infoNCE}}\\leftarrow$ Cl-InfoNCE(feature1, feature2, Clusters)\\;\n $f_{\\theta} \\leftarrow f_{\\theta} - lr * \\frac{\\partial}{\\partial \\theta}L_{\\text{Cl-infoNCE}}$\\;\n }\n Embedding $\\leftarrow$ Gathering data representations by passing data through $f_{\\theta}(\\cdot)$\\;\n Clusters $\\leftarrow$\\textbf{K-means-clustering}(Embedding)\\;\n }\n \\caption{K-means Clusters + Cl-InfoNCE}\n\\end{algorithm}\n\n\\paragraph{Auxiliary-information-determined clusters + Cl-InfoNCE}\nWe present the algorithm that combines auxiliary-information-determined clusters with Cl-InfoNCE. We select data attributes or data hierarchy information as the auxiliary information; the corresponding cluster construction steps are presented in Section 3.1 in the main text for discrete attributes and in Section~\\ref{subsec:cluster_hier} for data hierarchy information.\n\n\\begin{algorithm}[H]\n\\SetAlgoLined\n\\KwResult{Pretrained Encoder $f_{\\theta}(\\cdot)$}\n $f_{\\theta}(\\cdot)\\leftarrow \\text{Base Encoder Network}$\\;\n Aug $(\\cdot)\\leftarrow$ Obtaining Two Variants of Augmented Data via Augmentation Functions\\;\n Clusters $\\leftarrow$Pre-determining Data Clusters from \\textbf{Auxiliary Information}\\;\n \\For {epoch in 1,2,...,N}{\n \\For{batch in 1,2,...,M}{\n data1, data2 $\\leftarrow$ Aug(data\\_batch)\\;\n feature1, feature2 $\\leftarrow$ $f_{\\theta}$(data1), $f_{\\theta}$(data2)\\;\n $L_{\\text{Cl-infoNCE}}\\leftarrow$ Cl-InfoNCE(feature1, feature2, Clusters)\\;\n $f_{\\theta} \\leftarrow f_{\\theta} - lr * \\frac{\\partial}{\\partial \\theta}L_{\\text{Cl-infoNCE}}$\\;\n }\n }\n \\caption{Pre-Determined Clusters + Cl-InfoNCE}\n\\end{algorithm}\n\n\n\\section{Experimental Details}\n\nThe following content describes our experimental settings in detail. For reference, our code is available at \\url{https:\/\/github.com\/Crazy-Jack\/Cl-InfoNCE\/README.md}.\n\n\\subsection{UT-Zappos50K}\nThe following section describes the experiments we performed on the UT-Zappos50K dataset in Section 4 in the main text.\n\\paragraph{Accessibility}\nThe dataset is credited to \\citet{yu2014fine} and is available at the link: \\url{http:\/\/vision.cs.utexas.edu\/projects\/finegrained\/utzap50k}. 
The dataset is for non-commercial use only.\n\n\\paragraph{Data Processing}\nThe dataset contains images of shoes from Zappos.com. We rescale the images to $32\\times 32$. The official dataset has 4 coarse categories and 21 sub-categories. We utilize the 21 sub-categories for all our classification tasks. The dataset comes with 7 attributes as auxiliary information. We binarize the 7 discrete attributes into 126 binary attributes. We rank the binarized attributes based on their entropy and use the top-$k$ binary attributes to form clusters. Note that different $k$ result in different data clusters (see Figure 4 (a) in the main text).\n\n\\textit{Training and Test Split}: We randomly split the images into training and validation sets with a $7:3$ ratio, resulting in $35,017$ training images and $15,008$ validation images. \n\n\\paragraph{Network Design}\nWe use the ResNet-50 architecture as the backbone for the encoder. To compensate for the $32\\times 32$ image size, we change the first 7x7 2D convolution to a 3x3 2D convolution and remove the first max-pooling layer of the standard ResNet-50 (see code for details). This allows finer-grained information processing. On top of the modified ResNet-50 encoder, we include a 2048-2048-128 Multi-Layer Perceptron (MLP) as the projection head \\Big(i.e., $g(\\cdot)$ in $f(\\cdot, \\cdot)$ in equation (1) in the main text\\Big) for Cl-InfoNCE. During evaluation, we discard the projection head and train a linear layer on top of the encoder's output. For both K-means clusters + Cl-InfoNCE and auxiliary-information-determined clusters + Cl-InfoNCE, we adopt the same network architecture, including the same encoder, the same MLP projection head, and the same linear evaluation protocol. In the K-means + Cl-InfoNCE setting, the number of K-means clusters is $1,000$. K-means clustering is performed every epoch during training; we find that performing K-means every epoch benefits performance. For a fair comparison, we use the same network architecture and cluster number for PCL.\n\n\\paragraph{Optimization}\nWe choose SGD with momentum of $0.95$ as the optimizer, with a weight decay of $0.0001$ to prevent over-fitting. To allow stable training, we employ a linear warm-up and cosine decay scheduler for the learning rate. For the experiments shown in Figure 4 (a) in the main text, the learning rate is set to $0.17$ and the temperature in Cl-InfoNCE is $0.07$. For the experiments shown in Figure 5 in the main text, the learning rate is set to $0.1$ and the temperature in Cl-InfoNCE is $0.1$. \n\n\\paragraph{Computational Resource}\nWe conduct experiments on machines with 4 NVIDIA Tesla P100 GPUs. It takes about 16 hours to run 1,000 epochs of training with batch size 128 for both the auxiliary-information-aided and the unsupervised Cl-InfoNCE.\n\n\\subsection{Wider Attribute}\nThe following section describes the experiments we performed on the Wider Attribute dataset in Section 4 in the main text.\n\\paragraph{Accessibility}\nThe dataset is credited to \\citet{li2016human} and can be downloaded from the link: \\url{http:\/\/mmlab.ie.cuhk.edu.hk\/projects\/WIDERAttribute.html}. The dataset is for public and non-commercial usage.\n\n\\paragraph{Data Processing}\nThe dataset contains $13,789$ images with multiple semantic bounding boxes attached to each image. Each bounding box is annotated with $14$ binary attributes, and different bounding boxes in an image may have different attributes. Here, we perform the OR operation over the attributes of the bounding boxes in an image. 
Hence, each image is linked to $14$ binary attributes. We rank the 14 attributes by their entropy and use the top-$k$ of them for the experiments in Figure 4 (b) in the main text. We consider a classification task consisting of $30$ scene categories. \n\n\\textit{Training and Test Split}: The dataset comes with its own training, validation, and test split. Due to the small amount of data, we combine the original training and validation sets as our training set and use the original test set as our validation set. The resulting training set contains $6,871$ images and the validation set contains $6,918$ images.\n\n\\paragraph{Computational Resource}\nTo speed up computation, on the Wider Attribute dataset we use a batch size of $40$, resulting in about 16 hours of computation on a single NVIDIA Tesla P100 GPU for $1,000$ epochs of training. \n\n\\paragraph{Network Design and Optimization}\nWe use the ResNet-50 architecture as the encoder for the Wider Attribute dataset. We choose a 2048-2048-128 MLP as the projection head \\Big(i.e., $g(\\cdot)$ in $f(\\cdot, \\cdot)$ in equation (1) in the main text\\Big) for Cl-InfoNCE. The MLP projection head is discarded during the linear evaluation protocol. In particular, during the linear evaluation protocol, the encoder is frozen and a linear layer on top of the encoder is fine-tuned with the downstream labels. For K-means + Cl-InfoNCE and auxiliary information + Cl-InfoNCE, we consider the same architectures for the encoder, the MLP head, and the linear evaluation classifier. For K-means + Cl-InfoNCE, we consider $1,000$ K-means clusters. For a fair comparison, the same network architecture and cluster number are used for the experiments with PCL.\n\nFor optimization, we use SGD with momentum of $0.95$. Additionally, a weight decay of $0.0001$ is adopted to prevent over-fitting. We use a learning rate of $0.1$ and a temperature of $0.1$ in Cl-InfoNCE for all experiments. A linear warm-up followed by a cosine decay is used for the learning rate scheduling, providing a more stable learning process. \n\n\\subsection{CUB-200-2011}\nThe following section describes the experiments we performed on the CUB-200-2011 dataset in Section 4 in the main text.\n\\paragraph{Accessibility}\nCUB-200-2011 is created by \\citet{wah2011caltech} and is a fine-grained dataset of bird species. It can be downloaded from the link: \\url{http:\/\/www.vision.caltech.edu\/visipedia\/CUB-200-2011.html}. The usage is restricted to non-commercial research and educational purposes. \n\n\\paragraph{Data Processing}\nThe original dataset contains $200$ bird categories over $11,788$ images, with $312$ binary attributes attached to each image. We utilize those attributes and rank them based on their entropy, excluding the last $112$ of them (resulting in $200$ attributes), because including these $112$ attributes does not change the number of constructed clusters. In Figure 4 (c), we use the top-$k$ of those attributes to construct the clusters used in Cl-InfoNCE. The images are rescaled to $224\\times 224$.\n\n\\textit{Train Test Split}:\nWe follow the original train-validation split, resulting in $5,994$ training images and $5,794$ validation images. \n\n\\paragraph{Computational Resource}\nIt takes about 8 hours to train for 1,000 epochs with batch size 128 on 4 NVIDIA Tesla P100 GPUs. \n\n\\paragraph{Network Design and Optimization}\nWe choose ResNet-50 as the encoder for CUB-200-2011. 
After extracting features from the encoder, a 2048-2048-128 MLP projection head \\Big(i.e., $g(\\cdot)$ in $f(\\cdot, \\cdot)$ in equation (1) in the main text\\Big) is used for Cl-InfoNCE. During the linear evaluation protocol, the MLP projection head is removed and the features extracted from the pre-trained encoder are fed into a linear classifier layer. The linear classifier layer is fine-tuned with the downstream labels. The network architectures remain the same for both the K-means clusters + Cl-InfoNCE and the auxiliary-information-determined clusters + Cl-InfoNCE settings. In the K-means clusters + Cl-InfoNCE setting, we consider $1,000$ K-means clusters. For a fair comparison, the same network architecture and cluster number are used for the experiments with PCL.\n\nSGD with momentum of $0.95$ is used for optimization. We select a linear warm-up followed by a cosine decay learning rate scheduler. The peak learning rate is chosen to be $0.1$ and the temperature is set to $0.1$ for both the K-means + Cl-InfoNCE and the auxiliary information + Cl-InfoNCE settings. \n\n\n\\subsection{ImageNet-100}\nThe following section describes the experiments we performed on the ImageNet-100 dataset in Section 4 in the main text.\n\\paragraph{Accessibility}\nThis dataset is a subset of the ImageNet-1K dataset, which comes from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012-2017 \\citep{russakovsky2015imagenet}. ILSVRC is for non-commercial research and educational purposes, and we refer to the ImageNet official site for more information: \\url{https:\/\/www.image-net.org\/download.php}.\n\n\n\\paragraph{Data Processing}\nIn Section 4 in the main text and Section~\\ref{sec:hierarchy}, we select $100$ classes from ImageNet-1K to conduct experiments (the selected categories can be found in \\url{https:\/\/github.com\/Crazy-Jack\/Cl-InfoNCE\/data_processing\/imagenet100\/selected_100_classes.txt}). We also conduct a slight pre-processing (pruning a small number of edges in the WordNet graph) on the WordNet hierarchy structure to ensure it admits a tree structure. Specifically, each of the selected categories and their ancestors has only one path to the root. We refer to the pruning procedure in \\url{https:\/\/github.com\/Crazy-Jack\/Cl-InfoNCE\/data_processing\/imagenet100\/hierarchy_processing\/imagenet_hierarchy.py} (lines 222 to 251).\n\nWe cluster data according to their common ancestor in the pruned tree structure and determine the level $l$ of each cluster by the number of steps needed to traverse from the root to that node in the pruned tree. Therefore, the larger $l$ is, the closer the common ancestor is to the real class labels, and hence the more accurate the formed clusters are. In particular, the real class labels are at level $14$. \n\n\\textit{Training and Test Split}: \nPlease refer to the following files for the training and validation split.\n\\begin{itemize}\n \\item training: \\url{https:\/\/github.com\/Crazy-Jack\/Cl-InfoNCE\/data_processing\/imagenet100\/hier\/meta_data_train.csv}\n \\item validation: \\url{https:\/\/github.com\/Crazy-Jack\/Cl-InfoNCE\/data_processing\/imagenet100\/hier\/meta_data_val.csv}\n\\end{itemize}\nThe training split contains $128,783$ images and the test split contains $5,000$ images. The images are rescaled to size $224\\times 224$.\n\n\\paragraph{Computational Resource}\nIt takes about $48$ hours to train for $200$ epochs with batch size $128$ using $4$ NVIDIA Tesla P100 GPUs. All the experiments on ImageNet-100 are trained with the same batch size and number of epochs. 
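To make the level-$l$ cluster assignment above concrete, the following is a minimal Python sketch that maps each class to its level-$l$ ancestor given the class's pruned root-to-leaf WordNet path; the function name and the toy paths are illustrative assumptions rather than part of our released pre-processing code.
\\begin{verbatim}
# Minimal sketch: assign WordNet-hierarchy clusters at a given tree level.
# Assumes `paths` maps each class name to its root-to-leaf path (a list of
# node names) in the pruned WordNet tree; all names here are illustrative.

def hierarchy_clusters(paths, level):
    """Return {class_name: cluster_id} using the node at `level` (root = level 1)."""
    clusters = {}
    for cls, path in paths.items():
        # If the path is shorter than `level`, fall back to the leaf itself.
        clusters[cls] = path[min(level, len(path)) - 1]
    return clusters

if __name__ == "__main__":
    toy_paths = {
        "henslow_sparrow": ["entity", "animal", "vertebrate", "bird", "sparrow"],
        "ostrich":         ["entity", "animal", "vertebrate", "bird", "ostrich"],
        "tabby_cat":       ["entity", "animal", "vertebrate", "mammal", "cat"],
    }
    # Level 4 groups the three classes into the clusters "bird", "bird", "mammal".
    print(hierarchy_clusters(toy_paths, level=4))
\\end{verbatim}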
\n\n\\paragraph{Network Design and Optimization Hyper-parameters}\nWe use the conventional ResNet-50 as the backbone for the encoder. A 2048-2048-128 MLP projection head and an $\\ell_2$ normalization layer are used after the encoder during training and discarded in the linear evaluation protocol. We maintain the same architecture for K-means + Cl-InfoNCE and auxiliary-information-aided Cl-InfoNCE. For K-means + Cl-InfoNCE, we choose 2,500 as the cluster number. For a fair comparison, the same network architecture and cluster number are used for the experiments with PCL. The optimizer is SGD with $0.95$ momentum. For K-means + Cl-InfoNCE used in Figure 5 in the main text, we use a learning rate of $0.03$ and a temperature of $0.2$. We use a learning rate of $0.1$ and a temperature of $0.1$ for auxiliary information + Cl-InfoNCE in Figure~\\ref{fig:hierarchy}. A linear warm-up and cosine decay schedule is used for the learning rate. To stabilize training and reduce overfitting, we adopt a weight decay of $0.0001$ for the encoder network. \n\n\n\n\\section{Comparisons with Swapping Clustering Assignments between Views}\nIn this section, we provide additional comparisons between K-means clusters + Cl-InfoNCE and Swapping Clustering Assignments between Views (SwAV)~\\citep{caron2020unsupervised}. The experiment is performed on the ImageNet-100 dataset. SwAV is a recent clustering-based self-supervised approach. In particular, SwAV adopts the Sinkhorn algorithm~\\citep{cuturi2013sinkhorn} to determine the cluster assignments for a batch of data samples, and it also ensures that augmented views of a sample receive the same cluster assignment. We present the results in Table~\\ref{tab:swav}, where we see that SwAV performs similarly to the Prototypical Contrastive Learning method~\\citep{li2020prototypical} and worse than our method (i.e., K-means + Cl-InfoNCE).\n\n\n\\begin{table}[h]\n\\centering\n\\scalebox{0.8}{\n\\begin{tabular}{cc}\n\\toprule\nMethod & Top-1 Accuracy (\\%) \\\\ \\midrule \\midrule \\multicolumn{2}{c}{\\it Non-clustering-based Self-supervised Approaches} \\\\ \\midrule \\midrule\nSimCLR~\\citep{chen2020simple} & 58.2$\\pm$1.7 \\\\ [1mm]\nMoCo~\\citep{he2020momentum} & 59.4$\\pm$1.6 \\\\ \\midrule \\midrule \\multicolumn{2}{c}{\\it Clustering-based Self-supervised Approaches (\\# of clusters = $2.5$K)} \\\\ \\midrule \\midrule\nSwAV~\\citep{caron2020unsupervised} & 68.5$\\pm$1.0 \\\\ [1mm]\nPCL~\\citep{li2020prototypical} & 68.9$\\pm$0.7 \\\\ [1mm]\nK-means + Cl-InfoNCE (ours) & \\textbf{77.9$\\pm$0.7} \\\\ \\bottomrule\n\\end{tabular}\n}\n\\vspace{1mm}\n\\caption{Additional comparison with SwAV~\\citep{caron2020unsupervised}, showing its similar performance to PCL on the ImageNet-100 dataset.}\n\\label{tab:swav}\n\\end{table}\n\n\\section{Preliminary Results on ImageNet-1K with Cl-InfoNCE}\nSo far, we have performed experiments on the ImageNet-100 dataset, which is a subset of the ImageNet-1K dataset \\citep{russakovsky2015imagenet}. Here, we report preliminary results on the full ImageNet-1K dataset. We use a batch size of $1,024$ for all the methods and consider $100$ training epochs. We present comparisons among Supervised Contrastive Learning~\\citep{khosla2020supervised}, our method (i.e., WordNet-hierarchy-information-determined clusters + Cl-InfoNCE), and SimCLR~\\citep{chen2020simple}. We select the level-$12$ nodes in the WordNet tree hierarchy as our hierarchy-determined clusters for Cl-InfoNCE. We report the results in Table~\\ref{tab:imagenet-1K}. 
We find that our method (i.e., hierarchy-determined clusters + Cl-InfoNCE) performs in between the supervised representations and conventional self-supervised representations. \n\n\n\\input{tbl_tex\/imgnet1k}\n\n\n\\section{Synthetically Constructed Clusters in Section 4.2 in the Main Text}\n\n\nIn Section 4.2 in the main text, on the UT-Zappos50K dataset, we synthesize clusters $Z$ for various $I(Z;T)$ and $H(Z|T)$ with $T$ being the downstream labels. There are $86$ configurations of $Z$ in total. Note that the configuration process has no access to data's auxiliary information and among the $86$ configurations we consider the special cases for the supervised \\big($Z=T$\\big) and the unsupervised setting \\big($Z=$ instance ID\\big). In specific, when $Z=T$, $I(Z;T)$ reaches its maximum at $H(T)$ and $H(Z|T)$ reaches its minimum at $0$; when $Z=$ instance ID, both $I(Z;T)$ \\big(to be $H(T)$\\big) and $H(Z|T)$ \\big(to be $H(\\text{instance ID})$\\big) reaches their maximum. The code for generating these $86$ configurations can be found in lines 177-299 in \\url{https:\/\/github.com\/Crazy-Jack\/Cl-InfoNCE\/data_processing\/UT-zappos50K\/synthetic\/generate.py}.\n\n\\iffalse\nby the procedures described below to varying $Z$ artificially with different $I(Z;T)$ and $H(Z|T)$, resulting in 86 different configurations of $Z$ and then evaluate their resulted representation quality following the common linear evaluation protocol. The linear evaluation results is shown in Figure 3. \n\n\\paragraph{Synthetizing Protocal}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=\\linewidth]{fig\/ICLR_fig.pdf}\n \\caption{Synthesising protocol demo. $x_i$ denotes each data instance, $t_i$ denotes downstream class labels and $z_i^{(level)}$ denotes clusters in the first setting where we control $I(Z;T)$ to its maximumn $H(T)$. And $S_i^{(level)}$ are superclasses of the downstream task labels $t_i$ and represents the $Z$ in the second setting. }\n \\label{fig:synthetic-protocol}\n\\end{figure}\n\n\n\nTo intepolate between fully supervised and unsupervised setting, we form synthesised $Z$ to illustrate the relationship between $Z$ and $T$. Figure~\\ref{fig:synthetic-protocol} shows a toy example of how we construct the $Z$. Suppose $\\{x_1, x_2, ..., x_n\\}$ denotes every instance and $\\{t_1, t_2, ..., t_k\\}$ denotes different downstream class labels, then the key of constructing covariates are performing certain level of grouping. We consider three settings: \n\\paragraph{(1)} Control $I(Z;T)$ to its maximum. $I(Z;T)$ reach its maximum $H(T)$ when $P(T|Z)=1$, which means that instances assigned to a specific $Z$ have the same downstream task labels. Following this intuition, we form $z_i^{(level)}$ as subclasses of the downstream classes. We vary $level$ to transit from SimCLR to SupCon. \n\n\\paragraph{(2)} Control $H(Z|T)$ to its minimum 0. $H(Z|T)$ reach its minimum 0 if $P(Z|T)=1$, which suggests we can create $Z$ as superclasses of $T$. As shown in Figure~\\ref{fig:synthetic-protocol}, we create $S_i^{(level)}$ as different $Z$ and vary $level$ to get different $Z$ with various $I(Z;T)$ value. \n\n\\paragraph{(3)} For each $level$ in setting \\textbf{(1)}, we keep the subclasses of certain set of downstream classs $T_f=\\{t_i, t_j, t_m, ...\\}$ and permutate other $z_j^{(level)}$ assignment. Take $level=2$ for example, if we fix the subclasses of $T_f=\\{t_1\\}$, instance assigned to $\\{z_1^{(2)}, z_2^{(2)}, z_3^{(2)}, z_4^{(2)}\\}$ is untouched (i.e. 
the cluster assignment of $\\{x_1, x_2, ..., x_8\\}$ remain the same) and we randomly permutate instance assigned to $\\{z_5^{(2)}, z_6^{(2)}, ..., z_{16}^{(2)}\\}$ (for example $x_9$ may be reassigned to $z_6^{(2)}$ or $z_{13}^{(2)}$ instead of $z_5^{(2)}$). As we increase $|T_f|$, we get $Z$ with various $I(Z;T)$ and $H(Z|T)$ that spans the most of the information plane. This construction of $Z$ leads us to efficiently illustrate how the relationship between $Z$ and $T$ reflects the learnt representation's quality (As shown in Figure 3).\n\n\\fi\n\n\\end{appendices}\n\\section{Conclusion and Discussions}\n\\label{sec:conclu}\n\\vspace{-2mm}\n\nIn this paper, we introduce the clustering InfoNCE (Cl-InfoNCE) objective, which leverages the data clustering information implied by {\\color{black}auxiliary information or the data itself }for learning weakly-supervised representations. {\\color{black} Compared to conventional self-supervised learning approaches, our method effectively brings the performance closer to that of supervised representations, thereby improving pre-training quality when only limited information is at hand. In terms of limitations, our approach requires clustering based on auxiliary information or the data itself. This process can sometimes pose additional computational cost. In addition, clustering on auxiliary information or data also loses some precision. Tackling these problems is a direction for our future research.}\n\n\n\\section{Related Work}\n\\label{sec:rela}\n\n\\vspace{-1mm}\n\\paragraph{Self-supervised Learning.} Self-supervised learning (SSL) defines a pretext task as a pre-training step and uses the pre-trained features for a wide range of downstream tasks, such as object detection and segmentation in computer vision~\\citep{chen2020simple,he2020momentum}, question answering and language understanding in natural language processing~\\citep{peters2018deep,devlin2018bert}, and automatic speech recognition in speech processing~\\citep{schneider2019wav2vec,baevski2020wav2vec}. In this paper, we focus on discussing two types of pretext tasks: clustering approaches~\\citep{caron2018deep,caron2020unsupervised} and contrastive approaches~\\citep{chen2020simple,he2020momentum}.\n\nThe clustering approaches jointly learn the network parameters and the cluster assignments of the resulting features. For example, the cluster assignments can be obtained through unsupervised clustering methods such as k-means~\\citep{caron2018deep} or optimal transport algorithms such as the Sinkhorn algorithm~\\citep{caron2020unsupervised}. It is worth noting that the clustering approaches enforce consistency between the cluster assignments of different augmentations of the same data. The contrastive approaches learn similar representations for augmented variants of the same data point and dissimilar representations for different data points. Examples of contrastive approaches include the InfoNCE objective~\\citep{oord2018representation,chen2020simple,he2020momentum}, Wasserstein Predictive Coding~\\citep{ozair2019wasserstein}, and Relative Predictive Coding~\\citep{tsai2021self}. Both the clustering and the contrastive approaches aim to learn representations that are invariant to data augmentations. \n\nThere is another line of work combining clustering and contrastive approaches, such as HUBERT~\\citep{hsu2020hubert}, Prototypical Contrastive Learning~\\citep{li2020prototypical} and Wav2Vec~\\citep{schneider2019wav2vec,baevski2020wav2vec}. They first construct (unsupervised) clusters from the data. 
Then, they perform a contrastive approach to learn similar representations for the data within the same cluster. Our approach relates to these works with two differences: 1) we construct the clusters from the auxiliary information; and 2) we present Cl-InfoNCE as a new contrastive approach and characterize the goodness of the resulting representations. Recent works like IDFD~\\citep{tao2021clustering} aim to achieve better unsupervised clustering by using representations from contrastive learning.\nHowever, \\citet{tao2021clustering} differs from our work in that they do not directly incorporate auxiliary information into the contrastive objective.\n\n\\vspace{-2mm}\n\\paragraph{Weakly-supervised Learning with Auxiliary Information.} \nOur study relates to work on {\\color{black}prediction using auxiliary information, by treating the auxiliary information as weak labels~\\citep{sun2017revisiting,mahajan2018exploring,wen2018disjoint,radford2021learning,tan2019learning}}. The weak labels can be hashtags of Instagram images~\\citep{mahajan2018exploring}, metadata such as the identity and nationality of a person~\\citep{wen2018disjoint}, or corresponding textual descriptions of images~\\citep{radford2021learning}. Compared to normal labels, the weak labels are noisy but require much less human annotation work. Surprisingly, it has been shown that networks learned with weakly supervised pre-training tasks can generalize well to various downstream tasks, including object detection and segmentation, cross-modality matching, and action recognition~\\citep{mahajan2018exploring,radford2021learning}. The main difference between these works and ours is that our approach does not consider a prediction objective but a contrastive learning objective (i.e., the Cl-InfoNCE objective). {\\color{black}An independent and concurrent work~\\citep{zheng2021weakly} also incorporates weak labels into the contrastive learning objective. \nHowever, our method differs from \\citet{zheng2021weakly} in the way we construct the weak labels. We perform clustering on the annotated attributes or unsupervised k-means clustering to obtain weak labels, whereas they employ a connected-components labeling process. Task-wise, \\citep{zheng2021weakly} focuses on unsupervised (no access to data labels) and semi-supervised (access to a few data labels) representation learning, and ours focuses on weakly-supervised (access to side information such as data attributes) and unsupervised representation learning. For the common unsupervised representation learning part, we include a comparison with their method in the Appendix.}\n\n\n\nAnother way to learn from auxiliary information is multi-view contrastive coding (CMC)~\\citep{tian2020contrastive}, where auxiliary information is treated as another view of the data. Specifically, CMC learns representations that capture the joint information between the data and the accompanying auxiliary information. The main difference between CMC and our approach is that CMC leverages auxiliary information directly, whereas Cl-InfoNCE leverages it indirectly (i.e., our approach pre-processes the auxiliary information by clustering it).\n\n\\subsection{Technical Background}\nContrastive learning~\\citep{bachman2019learning,chen2020simple,he2020momentum} has shown its effectiveness in unsupervised representation learning, where its goal is to learn similar representations for positively-paired (correlated) instances and dissimilar representations for negatively-paired (uncorrelated) instances. 
For example, \\citet{bachman2019learning,chen2020simple,he2020momentum} treat the positively-paired instances as distorted variations of the same image (e.g., the same image with different augmentations) and the negatively-paired instances as two distinct images. For another example, \\citet{radford2021learning,tsai2021multiview} treat the positively-paired instances as the cross-modality pair of an instance (e.g., an image and its captions) and the negatively-paired instances as pairs consisting of a random image and the captions of another random image. Probabilistically, we refer to the positively-paired representations as representations sampled from the joint distribution (i.e., $(x,y)\\sim P_{X,Y}$) and to the negatively-paired representations as representations sampled from the product of marginal distributions (i.e., $(x,y)\\sim P_{X}P_{Y}$). Hence, contrastive learning can also be interpreted as maximizing a lower bound of the divergence between $P_{X,Y}$ and $P_{X}P_{Y}$~\\citep{oord2018representation,chen2020simple,bachman2019learning,ozair2019wasserstein,tsai2021self}, and among these objectives the most popular is the following InfoNCE objective~\\citep{oord2018representation}: \n\\begin{proposition}[InfoNCE~\\citep{oord2018representation}]\n\\begin{equation}\n{\\rm InfoNCE}:=\\underset{f}{\\rm sup}\\,\\,\\mathbb{E}_{(x_i, y_i)\\sim {P_{X,Y}}^{\\otimes n}}\\Big[ {\\rm log}\\,\\frac{e^{f(x_i, y_i)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x_i, y_j)}}\\Big]\\leq D_{\\rm KL}\\,\\big(P_{X,Y} \\,\\|\\,P_{X}P_Y \\big) = {\\rm MI}\\,(X,Y),\n\\label{eq:infonce}\n\\end{equation}\n\\end{proposition}\nwhere $\\{(x_i, y_i)\\}_{i=1}^n$ are $n$ independent copies of $(x,y)\\sim P_{X,Y}$, $f(x,y)$ is any function that returns a scalar from the input $(x,y)$, and ${\\rm MI}\\,(X,Y)$ is the mutual information (MI) between $X$ and $Y$. Practically, prior work~\\citep{chen2020simple,he2020momentum} suggests $f(x,y) = {\\rm cosine}(x,y) \/ \\tau$, where ${\\rm cosine}(x,y)$ is the cosine similarity and $\\tau$ is the temperature hyper-parameter. Equation~\\eqref{eq:infonce} shows that InfoNCE is a lower bound of ${\\rm MI}\\,(X,Y)$, and theoretical works~\\citep{arora2019theoretical,tsai2021multiview,tosh2021contrastive} show that ${\\rm MI}\\,(X,Y)$ maximization can lead to representations that perform well on downstream tasks. To conclude, as the most popular contrastive approach, InfoNCE can learn good representations without access to downstream supervision.\n\n\\subsection{Clustering-based Contrastive Learning}\n\nIn addition to learning without access to downstream supervision, contrastive learning also manifests its effectiveness in the supervised setting~\\citep{khosla2020supervised}. In particular, \\citet{khosla2020supervised} present {\\em Supervised Contrastive Learning}, which re-defines the positively-paired samples as samples from the same category and the negatively-paired samples as samples belonging to different categories. 
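For concreteness, the following is a minimal PyTorch sketch of the batch estimator inside equation~\\eqref{eq:infonce} with the commonly used critic $f(x,y)={\\rm cosine}(x,y)\/\\tau$; it is an illustration rather than the exact training code used in our experiments, the assumed inputs are two $(n, d)$ batches of paired representations, and redefining which pairs count as positives (as in Supervised Contrastive Learning above) changes only the pairing, not this estimator form.
\\begin{verbatim}
import torch
import torch.nn.functional as F

def info_nce(x, y, temperature=0.1):
    """Batch estimate of the InfoNCE objective; row i of x and row i of y
    form a positive pair, while (x_i, y_j) with j != i act as negatives."""
    x = F.normalize(x, dim=-1)
    y = F.normalize(y, dim=-1)
    logits = x @ y.t() / temperature      # f(x_i, y_j) = cosine(x_i, y_j) / tau
    # log [ e^{f(x_i, y_i)} / ((1/n) * sum_j e^{f(x_i, y_j)}) ]
    log_ratio = (logits.diag() - torch.logsumexp(logits, dim=1)
                 + torch.log(torch.tensor(float(len(x)))))
    return log_ratio.mean()               # maximized during training
\\end{verbatim}
In practice one minimizes the negative of this quantity, which recovers the familiar cross-entropy (NT-Xent) form up to the additive constant $\\log n$.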
Observing that contrastive learning works in both the unsupervised and the supervised settings, a research question arises naturally: \n\n\\begin{center}\n{\\em Can we design a unified contrastive learning objective that generalizes its applications from unsupervised to weakly-supervised and supervised representation learning?}\n\\end{center}\n\nOur paper aims to answer this question by presenting clustering-based InfoNCE (Cl-InfoNCE), which 1) first constructs the clusters under an unsupervised, weakly-supervised, or supervised setup and 2) then defines the positively-paired samples as samples from the same cluster and the negatively-paired samples as samples from different clusters. At a colloquial level, Cl-InfoNCE pulls together the representations of the data within the same cluster (formed in an unsupervised, weakly-supervised, or supervised manner) and pushes away the representations of the data from different clusters. With $Z\/z$ denoting the constructed clusters, the proposed Cl-InfoNCE is formulated as:\n\\begin{proposition}[Clustering-based InfoNCE (Cl-InfoNCE)]\n\\begin{equation}\n{\\rm Cl-InfoNCE}:=\\underset{f}{\\rm sup}\\,\\,\\mathbb{E}_{(x_i, y_i)\\sim {\\color{blue} \\mathbb{E}_{z\\sim P_Z}\\big[P_{X|z}P_{Y|z}\\big]}^{\\otimes n}}\\Big[ {\\rm log}\\,\\frac{e^{f(x_i, y_i)}}{\\frac{1}{n}\\sum_{j=1}^n e^{f(x_i, y_j)}}\\Big],\n\\label{eq:cl_infonce}\n\\end{equation}\n\\end{proposition}\nwhere the positively-paired representations $(x_i, y_i)$ are sampled from ${\\color{blue} \\mathbb{E}_{z\\sim P_Z}\\big[P_{X|z}P_{Y|z}\\big]}$\\footnote{$(x, y) \\sim \\mathbb{E}_{z\\sim P_Z}\\big[P_{X|z}P_{Y|z}\\big]$ includes two steps: first, $z \\sim P_Z$; then, $x \\sim P_{X|z}$ and $y \\sim P_{Y|z}$.} and the negatively-paired representations $(x_i, y_j)$ are sampled from $P_XP_Y$ ($i\\neq j$)\\footnote{$\\int_X \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] dx = P_Y$ and $\\int_Y \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] dy = P_X$.}. We note that equation~\\eqref{eq:cl_infonce} generalizes to different levels of supervision: 1) when $Z=$ instance ID (i.e., each cluster contains only one instance), $\\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big]$ specializes to $P_{XY}$ and Cl-InfoNCE specializes to InfoNCE for unsupervised representation learning; 2) when $Z=$ downstream labels, Cl-InfoNCE specializes to the objective described in {\\em Supervised Contrastive Learning}~\\citep{khosla2020supervised} for supervised representation learning; and 3) when $Z$ is constructed from weak supervision signals, such as the data's attributes or the hierarchical information of the data, Cl-InfoNCE can be used for weakly-supervised representation learning. 
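To illustrate how equation~\\eqref{eq:cl_infonce} is typically realized on a mini-batch, below is a minimal PyTorch sketch in which every cross-view pair coming from the same cluster is treated as a positive; the batch construction in our actual experiments may differ, and the assumed inputs are two augmented views of the same mini-batch together with their cluster assignments.
\\begin{verbatim}
import torch
import torch.nn.functional as F

def cl_info_nce(x, y, cluster_ids, temperature=0.1):
    """Batch sketch of Cl-InfoNCE: (x_i, y_j) is a positive pair whenever
    cluster_ids[i] == cluster_ids[j], otherwise it acts as a negative."""
    x = F.normalize(x, dim=-1)
    y = F.normalize(y, dim=-1)
    logits = x @ y.t() / temperature                               # f(x_i, y_j)
    pos_mask = (cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)).float()
    log_norm = torch.logsumexp(logits, dim=1, keepdim=True)        # log sum_j e^{f}
    log_ratio = logits - log_norm + torch.log(torch.tensor(float(len(x))))
    # Average the log-ratio over each anchor's same-cluster positives, then over anchors.
    return ((log_ratio * pos_mask).sum(dim=1) / pos_mask.sum(dim=1)).mean()
\\end{verbatim}
Setting cluster_ids to torch.arange(len(x)) makes every cluster a single instance and reduces this sketch to the InfoNCE estimator above, while passing the downstream labels as cluster_ids yields the supervised variant.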
Now, we provide the following theoretical statements to better understand Cl-InfoNCE:\n\\begin{theorem}[Asymptotic analysis of Cl-InfoNCE]\n\\begin{equation}\n{\\rm Cl-InfoNCE} \\leq {\\rm Cl-InfoNCE}_{\\,n \\rightarrow \\infty} = D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big) \\leq H(Z),\n\\label{eq:asymp_cl_infonce}\n\\end{equation}\n\\label{theo:asymp_cl_infonce}\n\\end{theorem}\n\\begin{theorem}[Cl-InfoNCE maximization learns to include the clustering information]\n\\begin{equation}\n{\\rm Cl-InfoNCE}_{\\,n \\rightarrow \\infty} {\\rm \\,\\,maximizes\\,\\,at\\,\\,} H(Z) {\\rm \\,\\,when \\,\\,} H(Z|X) = H(Z|Y) = 0,\n\\label{eq:max_resulting_repre}\n\\end{equation}\n\\label{theo:max_resulting_repre}\n\\end{theorem}\nwhere $H(Z)$ is the entropy of $Z$ and $H(Z|X)$\/ $H(Z|Y)$ are the conditional entropies of $Z$ given $X$\/$Y$. We leave the detailed derivations and proofs to the Appendix. Theorem~\\ref{theo:asymp_cl_infonce} suggests that Cl-InfoNCE has an upper bound $D_{\\rm KL}\\,\\Big( \\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big] \\,\\|\\, P_{X}P_{Y} \\Big)$, which measures the distribution divergence between the product of the cluster-conditional marginal distributions (i.e., $\\mathbb{E}_{P_Z}\\big[P_{X|Z}P_{Y|Z}\\big]$) and the product of the marginal distributions (i.e., $P_{X}P_{Y}$). Theorem~\\ref{theo:max_resulting_repre} suggests that maximizing Cl-InfoNCE results in the representations $X$ and $Y$ including the clustering information $Z$ ($\\because H(Z|X) = H(Z|Y) = 0$). This result is not surprising: since Cl-InfoNCE pulls together the representations of the data belonging to the same cluster and pushes away the representations of the data belonging to different clusters, the representations learned by maximizing Cl-InfoNCE should be able to distinguish among distinct clusters. \n\nTo conclude, Cl-InfoNCE is a clustering-based contrastive learning objective. By varying its cluster construction, Cl-InfoNCE generalizes from unsupervised and weakly-supervised to supervised representation learning. In this paper, we focus on the application of Cl-InfoNCE in the weakly-supervised setup, where we examine different weak supervision sources for forming the clusters.\n\\section{Experiments}\n\\vspace{-1mm}\nWe give an overview of our experimental section. Section~\\ref{subsec:datasets} discusses the datasets. We consider discrete attribute information as the auxiliary information of the data. Next, in Section~\\ref{subsec:method}, we explain the methodology used in the experiments. Section~\\ref{subsec:exp_with_auxi_attr} presents the first set of experiments, under a weakly-supervised setting, to demonstrate the effectiveness of our approach and the benefits of taking the auxiliary information into account. \nLast, to study the effect of Cl-InfoNCE alone, Section~\\ref{subsec:exp_without_auxi} presents the second set of experiments under an unsupervised setting. {\\color{black} We also conduct comparison experiments with another independent and concurrent weakly supervised contrastive learning work~\\citep{zheng2021weakly} in the Appendix.} \n\n\n\n\\vspace{-1mm}\n\\subsection{Datasets}\n\\label{subsec:datasets}\n\\vspace{-1mm}\nWe consider the following datasets. {\\bf UT-Zappos50K}~\\citep{yu2014fine}: It contains $50,025$ shoe images along with $7$ discrete attributes as auxiliary information. 
Each attribute follows a binomial distribution, and we convert each attribute into a set of Bernoulli attributes, resulting in a total of $126$ binary attributes. There are $21$ shoe categories.\n{\\bf Wider Attribute}~\\citep{li2016human}: It contains $13,789$ images, and there are several bounding boxes in an image. The attributes are annotated per bounding box. We perform the OR operation on the attributes from different bounding boxes in an image, resulting in $14$ binary attributes per image as the auxiliary information. There are $30$ scene categories. \n{\\bf CUB-200-2011}~\\citep{wah2011caltech}: It contains $11,788$ bird images with $200$ binary attributes as the auxiliary information. There are $200$ bird species. For the second set of experiments, we further consider the {\\bf ImageNet-100}~\\citep{russakovsky2015imagenet} dataset. It is a subset of the ImageNet-1k object recognition dataset~\\citep{russakovsky2015imagenet}, where we select $100$ categories out of $1,000$, resulting in around $0.12$ million images. \n\n\n\\vspace{-1mm}\n\\subsection{Methodology}\n\\label{subsec:method}\n\\vspace{-1mm}\nFollowing~\\citet{chen2020simple}, we conduct experiments by pre-training visual representations and then evaluating the learned representations using the linear evaluation protocol. In other words, after the pre-training stage, we fix the pre-trained feature encoder and then categorize test images by the linear classification results. We select ResNet-50~\\citep{he2016deep} as our feature encoder across all settings. Note that our goal is to learn representations (i.e., $X$ and $Y$) that maximize the Cl-InfoNCE objective (equation~\\eqref{eq:cl_infonce}). Within Cl-InfoNCE, the positively-paired representations $(x, y^+) \\sim \\mathbb{E}_{z\\sim P_Z}\\big[P_{X|z}P_{Y|z}\\big]$ are the representations of augmented images from the same cluster $z\\sim P_Z$, and the negatively-paired representations $(x, y^-) \\sim P_XP_Y$ are the representations of two arbitrary images. We leave the network designs, the optimizer choices, and more details of the datasets to the Appendix.\n\\input{fig_tex\/synthetic_wrap}\nBefore delving into the experiments, we would like to recall that, in Section~\\ref{subsec:cl-infonce}, we discussed using the mutual information $I(Z;T)$ and the conditional entropy $H(Z|T)$ between the clusters ($Z$) and the labels ($T$) to characterize the goodness of Cl-InfoNCE's learned representations. To verify this concept, on UT-Zappos50K we synthetically construct clusters with various $I(Z;T)$ and $H(Z|T)$ and then apply Cl-InfoNCE. We present the results in the figure on the right.\nOur empirical results are in accordance with the statement that clusters with higher $I(Z;T)$ and lower $H(Z|T)$ lead to higher downstream performance. In later experiments, we will also discuss these two information-theoretical metrics.\n\n\\vspace{-1mm}\n\\subsection{Experiment I: Auxiliary-Information-Determined Clusters + Cl-InfoNCE}\n\\label{subsec:exp_with_auxi_attr}\n\\vspace{-1mm}\n\nWe would like to understand how well Cl-InfoNCE can be combined with auxiliary information. For this purpose, we select the data's discrete attributes as the auxiliary information, construct the clusters ($Z$) using the discrete attributes (see Section~\\ref{subsec:cluster_construct} and Figure~\\ref{fig:cluster_construction}), and then adopt the attributes-determined clusters for Cl-InfoNCE. 
Recall our construction of data-attributes-determined clusters: we select the attributes with the top-$k$ highest entropy and then construct the clusters such that the data within a cluster have the same values over the selected attributes, where $k$ is a hyper-parameter. Note that our method considers a weakly supervised setting, since the data attributes can be seen as the data's weak supervision. \n\nWe dissect the experiments into three parts. First, we would like to study the effect of the hyper-parameter $k$ and select its optimal value. Note that different choices of $k$ result in different constructed clusters $Z$. Our study is based on the information-theoretical metrics (i.e., $I(Z;T)$ and $H(Z|T)$ between the constructed clusters ($Z$) and the labels ($T$)) and their relation to the downstream performance of the learned representations. Second, we perform comparisons between different levels of supervision. In particular, we include comparisons with the supervised ($Z=$ downstream labels $T$) and the conventional self-supervised ($Z=$ instance ID) settings of our method. As shown in Section~\\ref{subsec:cl-infonce}, the supervised setting is equivalent to the Supervised Contrastive Learning objective~\\citep{khosla2020supervised}, and the conventional self-supervised setting is equivalent to SimCLR~\\citep{chen2020simple}. Third, we include baselines that leverage the auxiliary information: i) learning to predict the cluster assignments using a cross-entropy loss and ii) treating the auxiliary information as another view of the data in contrastive multi-view coding (CMC)~\\citep{tian2020contrastive}.\n\n\n\\input{fig_tex\/attributes}\n\n\n\\vspace{-1mm}\n\\paragraph{Part I - Effect of the hyper-parameter $k$.}\nTo better understand the effect of the hyper-parameter $k$ for constructing the attributes-determined clusters, we study the information-theoretical metrics between $Z$ and $T$ and report them in Figure~\\ref{fig:attributes}. Note that, to ensure the same scales for $I(Z;T)$ and $H(Z|T)$ across different datasets, we normalize $I(Z;T)$ and $H(Z|T)$ using $$I(Z;T) \\leftarrow \\frac{I(Z;T) - {\\rm min}_ZI(Z;T)}{ {\\rm max}_ZI(Z;T)- {\\rm min}_ZI(Z;T)} \\quad {\\rm and}\\quad H(Z|T) \\leftarrow \\frac{H(Z|T) - {\\rm min}_ZH(Z|T)}{ {\\rm max}_ZH(Z|T)- {\\rm min}_ZH(Z|T)}.$$ \n\nAs $k$ increases, the mutual information $I(Z;T)$ increases, but the conditional entropy $H(Z|T)$ also increases. Hence, although considering more attributes leads to clusters that are more correlated with the downstream labels, the clusters may also contain more downstream-irrelevant information. This is in accord with our second observation that, as $k$ increases, the downstream performance first increases and then decreases. Therefore, we only need a partial set of the most informative attributes (those with high entropy) to determine the clusters. Next, we observe that the best-performing clusters occur at the intersection of the $I(Z;T)$ and negative $H(Z|T)$ curves. This observation helps us study the trade-off between $I(Z;T)$ and $H(Z|T)$ and suggests an empirical way to select the optimal $k$ that achieves the best performance. It is worth noting that the above process of determining the optimal $k$ does not require directly evaluating the learned representations. 
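As a concrete illustration of the attributes-determined cluster construction used throughout this part, the following NumPy sketch selects the top-$k$ binary attributes by entropy and groups the samples that share the same values on those attributes into clusters; the array names are illustrative, and this is a simplified sketch rather than our released pre-processing code.
\\begin{verbatim}
import numpy as np

def attribute_clusters(attrs, k):
    """attrs: (n_samples, n_attrs) binary matrix of weak-supervision attributes.
    Returns an integer cluster id per sample."""
    p = attrs.mean(axis=0).clip(1e-8, 1 - 1e-8)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # per-attribute entropy
    top_k = np.argsort(-entropy)[:k]                       # k most informative attributes
    keys = [tuple(row) for row in attrs[:, top_k]]         # attribute pattern per sample
    ids = {key: i for i, key in enumerate(dict.fromkeys(keys))}
    return np.array([ids[key] for key in keys])
\\end{verbatim}
Each distinct pattern over the selected attributes becomes one cluster $Z$, and increasing $k$ refines the clusters, which matches the behaviour of $I(Z;T)$ and $H(Z|T)$ discussed above.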
\input{tbl_tex/attributes-supervision}


\vspace{-1mm}
\paragraph{Part II - Interpolation between Different Supervision Levels.}
In Section~\ref{subsec:cl-infonce}, we discussed that, by altering the design of the clusters, our approach specializes to the conventional self-supervised contrastive method SimCLR~\citep{chen2020simple} and to the supervised contrastive method SupCon~\citep{khosla2020supervised}. In particular, our approach specializes to SimCLR when each cluster consists of the augmented variants of a single instance, and to SupCon when each cluster consists of the instances sharing the same downstream label. Hence, we can interpolate between different supervision levels of our approach and study how auxiliary information helps improve representation learning.

We present the results for the different cluster constructions combined with Cl-InfoNCE in Table~\ref{tbl:attributes-supervision}. We use the top-1 accuracy on Wider Attribute for the discussion. The performance increases as the clusters move from instance IDs ($40.2$) to attributes-determined clusters ($45.5$) to downstream labels ($49.9$). This result suggests that Cl-InfoNCE can use auxiliary information to narrow the gap to supervised representations.


\vspace{-1mm}
\paragraph{Part III - Comparisons with Baselines that Leverage Auxiliary Information.} In the previous part, we saw that Cl-InfoNCE can leverage auxiliary information to achieve performance closer to that of supervised representations than of self-supervised representations. Nonetheless, two questions remain: 1) are there other ways to leverage auxiliary information besides our method (attributes-determined clusters + Cl-InfoNCE), and 2) are weakly supervised methods (which leverage auxiliary information) always better than self-supervised methods? To answer these questions, Table~\ref{tbl:attributes-baselines} compares weakly supervised representation learning baselines that leverage auxiliary information (attributes-determined clusters + cross-entropy loss, and Contrastive Multi-view Coding (CMC)~\citep{tian2020contrastive} treating the auxiliary information as another view of the data) with self-supervised baselines (SimCLR~\citep{chen2020simple} and MoCo~\citep{he2020momentum}).


First, we find that using auxiliary information does not always guarantee better performance than not using it. For instance, for top-1 accuracy on the Wider Attribute dataset, predicting the attributes-determined clusters with the cross-entropy loss ($39.4$) or treating the auxiliary information as another view of the data with CMC ($34.1$) performs worse than SimCLR ($40.2$), which does not use the auxiliary information. This suggests that, although auxiliary information can be useful, how it is leveraged is even more crucial.

\iffalse
Third, we observe that the predictive method always performs worse than the contrastive method under the weakly supervised setting. For example, on UT-Zappos50K, although predicting the labels with the cross-entropy loss ($89.2$) performs on par with SupCon ($89.0$), predicting attributes-determined clusters with the cross-entropy loss ($82.7$) performs worse than attributes-determined clusters + Cl-InfoNCE ($84.6$).
This result implies that the contrastive method (e.g., Cl-InfoNCE) can generally be applied across various supervision levels.
\fi

Second, we observe that our method consistently outperforms the baseline that combines attributes-determined clusters with a cross-entropy loss. For instance, on UT-Zappos50K, our method achieves $84.6$ top-1 accuracy while the baseline achieves $82.7$. Note that both our method and this baseline construct clusters from the auxiliary information; the difference is that our method adopts the contrastive Cl-InfoNCE objective, whereas the baseline trains an additional classifier between the representations and the clusters with a cross-entropy loss. Our observation is in line with prior work~\citep{khosla2020supervised}, which shows that, compared to the cross-entropy loss, a contrastive objective (e.g., our Cl-InfoNCE) is more robust to natural data corruptions, more stable to hyper-parameter and optimizer settings, and achieves better performance.

Last, we compare our method with CMC. Although our method performs better on the UT-Zappos50K ($84.6$ versus $83.7$) and Wider Attribute ($45.5$ versus $34.1$) datasets, CMC achieves significantly better results on the CUB-200-2011 dataset ($32.7$ versus $20.6$). To explain these differences, we recall that 1) CMC leverages the auxiliary information directly, while our method leverages it indirectly (we use the structural information implied by the auxiliary information); and 2) the auxiliary information in UT-Zappos50K and Wider Attribute is relatively uninformative (fewer than 20 discrete attributes), whereas the auxiliary information in CUB-200-2011 is much richer (312 discrete attributes). We argue that, since CMC leverages the auxiliary information directly, it should perform better when the auxiliary information is more informative, whereas Cl-InfoNCE performs better when the auxiliary information is less informative.

\input{tbl_tex/attributes-baselines-comparisons}


\vspace{-1mm}
\subsection{Experiment II: K-means Clusters + Cl-InfoNCE}
\label{subsec:exp_without_auxi}
\vspace{-1mm}
So far, we have seen how auxiliary-information-determined clusters can be combined with Cl-InfoNCE to learn good weakly supervised representations. Now, we would like to show that Cl-InfoNCE can also learn good self-supervised representations without auxiliary information. To this end, we construct unsupervised clusters (e.g., k-means clusters on top of the learned representations) for Cl-InfoNCE.
Similar to the EM algorithm, we iteratively perform k-means clustering to determine clusters of the current representations and then apply Cl-InfoNCE with these k-means clusters to update the representations; a schematic sketch of this alternating procedure is given below. We select Prototypical Contrastive Learning (PCL)~\citep{li2020prototypical} as the clustering-based self-supervised baseline. In particular, PCL maximizes the data log-likelihood under the assumption that the data are generated from isotropic Gaussians; its MLE objective is connected by the authors to contrastive approaches~\citep{chen2020simple,he2020momentum}, and its clusters are determined via MAP estimation.
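The sketch below illustrates the alternating procedure described above; it is not the exact training recipe. The loader format, the round/epoch scheduling, and the reuse of the \texttt{cl\_infonce\_loss} sketch from Section~\ref{subsec:exp_with_auxi_attr} are our assumptions, and details such as the number of clusters or the use of a momentum encoder may differ in practice.

\begin{verbatim}
import numpy as np
import torch
from sklearn.cluster import KMeans

def train_kmeans_cl_infonce(encoder, loader, optimizer, num_clusters,
                            rounds=10, epochs_per_round=20, device="cuda"):
    # Alternate between k-means cluster assignment (E-like step) and
    # Cl-InfoNCE encoder updates on the fixed clusters (M-like step).
    # `loader` is assumed to yield (view1, view2, dataset_index) per batch.
    for _ in range(rounds):
        # E-like step: cluster the current representations of all images.
        encoder.eval()
        feats, idxs = [], []
        with torch.no_grad():
            for x1, _, idx in loader:
                feats.append(encoder(x1.to(device)).cpu())
                idxs.append(idx)
        feats, idxs = torch.cat(feats).numpy(), torch.cat(idxs).numpy()
        assignments = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(feats)
        cluster_of = np.empty(len(loader.dataset), dtype=np.int64)
        cluster_of[idxs] = assignments                  # dataset index -> cluster id

        # M-like step: refine the encoder with Cl-InfoNCE on the fixed clusters.
        encoder.train()
        for _ in range(epochs_per_round):
            for x1, x2, idx in loader:
                z = torch.as_tensor(cluster_of[idx.numpy()], device=device)
                reps = encoder(torch.cat([x1, x2]).to(device))
                loss = cl_infonce_loss(reps, torch.cat([z, z]))
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
\end{verbatim}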
For completeness, we also include non-clustering-based self-supervised approaches, namely SimCLR~\citep{chen2020simple} and MoCo~\citep{he2020momentum}. Note that this set of experiments considers the conventional self-supervised setting, in which we can leverage neither labels nor auxiliary information.

\input{fig_tex/unsupervised}
\vspace{-3mm}
\paragraph{Results.} We first look at the left table in Figure~\ref{fig:unsupervised}. We observe that, except on ImageNet-100, there is no obvious performance difference between the non-clustering-based baselines (i.e., SimCLR and MoCo) and the clustering-based baseline (i.e., PCL). Since ImageNet-100 is more complex than the other three datasets, we argue that, in self-supervised learning, discovering latent structures in data (via unsupervised clustering) may benefit larger datasets the most. Additionally, among all the approaches, our method reaches the best performance, which suggests that it is at least as competitive as conventional self-supervised approaches.

Next, we look at the right plot in Figure~\ref{fig:unsupervised}. We study the mutual information $I(Z;T)$ and the conditional entropy $H(Z|T)$ between the clusters $Z$ constructed without supervision and the downstream labels $T$. For our method and PCL, we plot the two information-theoretic metrics against the training epoch. We find that, as the number of training epochs increases, both methods construct unsupervised clusters that are more relevant to the downstream labels (higher $I(Z;T)$) and contain less downstream-irrelevant information (lower $H(Z|T)$). This result suggests that clustering-based self-supervised approaches discover latent structures that become increasingly useful for the downstream tasks. It is worth noting that our method consistently has higher $I(Z;T)$ and lower $H(Z|T)$ compared to PCL. A minimal sketch of how these two metrics can be computed from cluster assignments and labels is given below.
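The following is a minimal sketch of how the two metrics can be estimated from discrete cluster assignments and labels, together with the min-max normalization used in Part I of Section~\ref{subsec:exp_with_auxi_attr}. The function names are ours, and the estimates are the plug-in empirical quantities (in nats).

\begin{verbatim}
import numpy as np
from sklearn.metrics import mutual_info_score

def cluster_label_metrics(cluster_ids, labels):
    # Empirical I(Z;T) and H(Z|T) between cluster assignments Z and labels T.
    i_zt = mutual_info_score(labels, cluster_ids)
    _, counts = np.unique(cluster_ids, return_counts=True)
    p = counts / counts.sum()
    h_z = -(p * np.log(p)).sum()
    return i_zt, h_z - i_zt              # H(Z|T) = H(Z) - I(Z;T)

def minmax_normalize(values):
    # Min-max normalization across candidate clusterings (cf. Part I).
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())
\end{verbatim}

These quantities can be tracked over candidate values of $k$ or over training epochs, as in Figure~\ref{fig:attributes} and Figure~\ref{fig:unsupervised}.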