diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzznvtj" "b/data_all_eng_slimpj/shuffled/split2/finalzznvtj" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzznvtj" @@ -0,0 +1,5 @@ +{"text":"\\section*{Musical Organization: Strings \\& Schemes}\nIn the context of expressive communication systems like natural language or tonal music, it might help to clarify what the term `organization' really means, at least as it is intended here. In the simplest sense, musical organization refers to the relationships between events on the musical surface, be they notes, chords, motives, phrases, or any other coherent `units' of that organization. Following the classes of mental structure described by \\textcite{Mandler1979}, we might represent the connections between these events as unordered (or \\textit{coordinate}) relations, such as the members of a major triad, temporally ordered (or \\textit{proordinate}) relations, such as the progression from dominant to tonic at the end of a phrase (i.e., V--I), or hierarchical (or \\textit{superordinate}\/\\textit{subordinate}) relations, such as the prolongation of a given harmony through other (subordinate) harmonies (e.g., I--V$^4_3$--I$^6$). Bearing these relational types in mind, tonal harmony is thus an \\textit{emergent} organizational system in that superordinate structures emerge out of the coordinate and proordinate relations among events on the surface. The real challenge here, then, is to model these relational types using strings.\n\nAmong computer scientists, a \\textit{string} is a finite set of discrete symbols---a database of nucleic acid sequences, a dictionary of English words, or for the purposes of this study, a corpus of Haydn string quartets. In the first two cases, the mapping between the individual character or word in a printed text and its symbolic representation in a computer database is essentially one-to-one. Music encoding is considerably more complex. Notes, chords, phrases, and the like are characterized by a number of different features, so digital encodings of individual events must concurrently represent multiple properties of the musical surface. To that end, many symbolic formats encode standard music notation as a series of discrete event sequences (i.e., strings) in an $m \\times n$ matrix, where $m$ denotes the number of events in the symbolic representation (e.g., notes as notated in a score), and $n$ refers to the number of encoded features or attributes (e.g., pitch, onset time, rhythmic duration, etc.).\n\nTo model the coordinate relations (i.e., vertical sonorities) associated with tonal harmony using unidimensional strings, corpus studies typically limit the investigation to a particular chord typology from music theory (e.g., Roman numerals, figured bass nomenclature, or pop chord symbols), and then identify chord events using either human annotators \\autocite{Burgoyne2012,Declercq:2011,Tymoczko:2011}, or rule-based computational classifiers \\autocite{Temperley:1999,Rowe:2001}. Yet unfortunately, existing typologies depend on a host of assumptions about the sorts of simultaneous relations the researcher should privilege (e.g., triads and seventh chords), and may also require additional information about the underlying tonal context, which again must be inferred either during transcription \\autocite{Margulis2008}, or using some automatic (key-finding) method. 
\\textcite{White2015} distinguishes this `top-down' approach from the `bottom-up', data-driven methods that build composite representations of chord events from simpler representations of note events \\autocite{Cambouropoulos2015,Conklin2002,Quinn2010,Quinn2011, Sapp2007}.\n\nWith a representation scheme in place, researchers then divide the corpus into contiguous sequences of $n$ events (called \\textit{n}-grams) to model the proordinate relations between harmonies. The resulting \\textit{n}-gram distributions serve as input for tasks associated with pattern discovery \\autocite{Conklin2002}, classification \\autocite{Conklin2013}, automatic harmonic analysis \\autocite{Taube1999}, and prediction \\autocite{Sears2018}. And yet since much of the world's music is hierarchically organized such that certain events are more central (or prominent) than others \\autocite{Bharucha1983}, non-contiguous events often serve as focal points in the sequence \\autocite{Gjerdingen2014}. For this reason, corpus studies employing string-based methods often suffer from the \\textit{contiguity fallacy}---the assumption that note or chord events on the musical surface depend only on their immediate neighbors \\autocite{Sears2017}.\n\nBy way of example, consider the closing measures of the main theme from the final movement of Haydn's string quartet Op. 50, No. 2, shown in Example \\ref{ex:haydn_ex}a. The passage culminates in a perfect authentic cadence that features a conventional harmonic progression and a falling upper-voice melody. In the music theory classroom, students are taught to reduce this musical surface to a succession of chord symbols, such as the Roman numeral annotations shown below. Yet despite the ubiquity of these harmonies throughout the history of Western tonal music, existing string-based methods generally fail to retrieve this sequence of chords due to the presence of intervening embellishing tones (shown in gray), a limitation one study has called the \\textit{interpolation problem} \\autocite{Collins2014}.\n\n\\begin{example}[t!]\n\t\\centering\n\t\\input{Fig1.pdf_tex}\n\t\\caption{(a) Haydn, String Quartet in C major, Op. 50\/2, iv, mm. 48--50. Embellishing tones are shown with gray noteheads, and Roman numeral annotations appear below. (b) Expansion. Downbeat chord onsets are annotated with the chromatic scale-degree combination (csdc) scheme for illustrative purposes.}\n\t\\label{ex:haydn_ex}\n\\end{example}\n\nThus, the following sections consider whether string-based computational methods can discover (1) the most recurrent harmonies on the musical surface (i.e., coordinate relations); (2) the syntactic progressions that characterize a given idiom (i.e., proordinate relations); and (3) the recursive hierarchy by which certain harmonies are more central (or prominent) than others (i.e., superordinate\/subordinate relations). To that end, I have developed a representation scheme that loosely approximates Roman numeral symbols using a corpus of 50 expositions from Haydn string quartets \\autocite{Sears2016}. In addition to the symbolic encodings, the corpus includes accompanying text files with manual annotations for the key, mode, modulations, and pivot boundaries in each movement. Thus, I will sidestep the key-finding problem, which has already received considerable attention elsewhere (e.g., \\citeauthor{Temperley2008} \\citeyear{Temperley2008}). 
What interests me here, and consequently provides the impetus for the following pages, are the methods we use to discover the syntactic or recursive structures described in many theories of harmony. Hence, the corpus will serve as a toy dataset, with the hope that we might apply these methods to larger datasets in future work.\n\n\\section*{Coordinate Relations: Representation \\& Recurrence}\n\nCorpus studies in music research often treat the \\textit{note} event as the unit of analysis, examining features like chromatic pitch \\autocite{Pearce2004}, melodic interval \\autocite{Vos1989}, and chromatic scale degree \\autocite{Margulis2008}. Using computational methods to identify \\textit{composite} events like triads and seventh chords in complex polyphonic textures is considerably more complex, since the number of distinct $n$-note combinations associated with any of the above-mentioned features is enormous. Thus, many music analysis software frameworks derive chord progressions from symbolic corpora by first performing a \\textit{full expansion} of the symbolic encoding \\autocite{Conklin2002}, which duplicates overlapping note events at every unique onset time.\\footnote{In the Humdrum toolkit, this technique is called \\textit{ditto} \\autocite{Huron1993}, while \\textit{Music21} calls it \\textit{chordifying} \\autocite{Cuthbert2010}.} Shown in Example \\ref{ex:haydn_ex}b, expansion produces 14 distinct onset times. This partitioning method is admittedly too fine-grained to resemble the Roman numeral analysis in Example \\ref{ex:haydn_ex}a, but provides a useful starting point for the reduction methods that follow.\n\nTo relate the chord event at each onset to an underlying tonic, some studies use the opening key signature, with the researcher determining the mode from the score, resulting in chord distributions that often fail to control for modulations or changes in modality \\autocite{Margulis2008}. Key-finding algorithms have also become more common in recent decades, allowing researchers to automatically identify the key of a passage with high degrees of accuracy ($>90\\%$) \\autocite{Albrecht2013}. Nevertheless, the lack of available annotated corpora indicating modulations and changes of mode makes testing these algorithms quite difficult. Since in this case the corpus includes annotations for the key, mode, modulations, and pivot boundaries in each movement, we can simply map each note event to a chromatic scale degree (or \\texttt{csd}) modulo 12. This scheme consists of twelve distinct symbols numbered from 0 to 11, where 0 denotes the tonic, 7 the dominant, and so on. Absent instrumental parts for each distinct onset receive an undefined symbol $\\perp$.\n\nWe may now represent each chord onset as a chromatic scale-degree combination (or \\texttt{csdc}) to examine the recurrence of sonorities on the musical surface. Each onset contains between one and four note events, so the initial vocabulary consists of $13^4$ (or $28,561$) possibilities. To reduce the vocabulary of possible chord types, \\textcite{Quinn2010} excluded voice doublings and allowed permutations between the upper parts, so we can adopt that approach here. Thus, the major triads $\\langle0, 4, 4, 7\\rangle$ and $\\langle0, 7, 4, 0\\rangle$ would reduce to $\\langle0, 4, 7, \\perp\\rangle$. 
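For concreteness, this reduction is easy to express in a few lines of Python. In the sketch below, the function name and the use of None for the undefined symbol $\\perp$ are merely illustrative conventions; the snippet assumes that the note events at each onset have already been mapped to chromatic scale degrees (mod 12), with the bass listed first.
\\begin{verbatim}
BOTTOM = None  # stands in for the undefined symbol (an absent part)

def csdc(onset, size=4):
    """Reduce the scale degrees sounding at one onset (bass first,
    absent parts as None) to a chromatic scale-degree combination:
    doublings removed, upper parts sorted, padded with BOTTOM."""
    sounding = [p for p in onset if p is not None]
    if not sounding:
        return (BOTTOM,) * size
    bass = sounding[0]
    upper = sorted(set(sounding[1:]) - {bass})
    combo = [bass] + upper
    return tuple(combo + [BOTTOM] * (size - len(combo)))

assert csdc([0, 4, 4, 7]) == (0, 4, 7, None)     # doubled third
assert csdc([0, 7, 4, 0]) == (0, 4, 7, None)     # permuted upper parts
assert csdc([7, 0, 4, None]) == (7, 0, 4, None)  # bass position preserved
\\end{verbatim}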
These exclusion criteria decrease the size of the potential vocabulary to 2784 distinct types, though in this corpus the vocabulary of \\texttt{csdc} consisted of just 688 types.\\footnote{Ideally, we would reduce the vocabulary to less than, say, 100 symbols, but given the number of combinatorial possibilities for three- and four-note chords, I will instead introduce a novel reduction method in the final section of this chapter.}\n\nIn total, 38\\% of the $19,570$ onsets in the corpus consisted of fewer than three distinct chromatic scale degrees (e.g., $<$$0,4,\\perp,\\perp$$>$), so I have omitted those onsets in order to examine the most common chord types. The multi-level pie plot in Figure \\ref{fig:csdc_pie} presents the onsets consisting of at least three chromatic scale degrees from major-mode passages in the corpus, with the proportions weighted by the rhythmic duration of each onset (see \\textcite{Sears2016} for further details). The inner pie plot represents the diatonic harmonies with Roman numeral notation, with upper and lower case Roman numerals denoting major and minor triads, respectively. The outer concentric circle represents each inversion (root position, first, second, and third), with the inversions appearing in clockwise order for each harmony beginning in root position.\n\nIn major-mode passages, tonic harmony appeared most frequently, followed by dominant harmony, the predominant harmonies IV and ii, and finally vii, vi, and iii. In the outer concentric circle, root position chords predominated for harmonies like I, IV, V, and vi, but unsurprisingly, first inversion chords appeared more frequently for ii, iii, and vii. What is more, the 49 \\texttt{csdc} types representing diatonic harmony---triads and seventh chords for every diatonic harmony in every inversion---accounted for approximately 62\\% of the three- and four-note combinations in major-mode passages of the corpus.\n\nTogether, these findings suggest that (1) the most central sonorities in most theories of harmony are also the most frequent in this corpus, and (2) like their natural language counterparts, the chromatic scale-degree combinations follow a power-law distribution between frequency and rank, with the most frequent (top-ranked) types---the diatonic harmonies of the tonal system---accounting for the vast majority of the three- and four-note combinations in the corpus. Of course, these claims are by no means new. Frequency distributions of both words and chords often display power-law (or \\textit{Zipfian}) distributions \\autocite{Zipf1935, Rohrmeier2008, Sears2017}. What would be new is to provide evidence that the statistical regularities characterizing a tonal corpus also reflect the \\textit{order} in which its constituent harmonies occur. To that end, the next section introduces string-based methods for the identification and ranking of recurrent temporal patterns. \n\n\\begin{figure}[t!]\n\t\\centering\n\t\\input{Fig2.pdf_tex}\n\t\\caption{Multi-level pie plot of the diatonic chromatic scale degree combinations consisting of at least three chromatic scale degrees from the major mode (chord onsets located within the boundaries of a pivot were excluded). The inner pie and outer ring represent diatonic harmony (triads and seventh chords) and inversion (root position, first, second, third), respectively. Inversions appear in clockwise order for each harmony beginning in root position (labels only provided for dominant harmony). Roman numerals iii and IV did not appear in third inversion. 
$N = 11,253$.}\n\t\\label{fig:csdc_pie}\n\\end{figure}\n\n\\section*{Proordinate Relations: Syntax \\& Skip-grams}\n\nIn corpus linguistics, researchers often discover recurrent multi-word expressions (sometimes called \\textit{collocations}) by dividing the corpus into sub-sequences of $n$ events (or $n$-grams), and then determining the number of instances (or \\textit{tokens}) associated with each unique $n$-gram \\textit{type}. \\textit{N}-grams consisting of one, two, or three events are often called \\textit{unigrams}, \\textit{bigrams}, and \\textit{trigrams}, respectively, while longer \\textit{n}-grams are typically represented by the value of \\textit{n}. The previous discussion represented the corpus using unigrams, for example, but to discover the most conventional (i.e., syntactic) harmonic progressions using the current representation scheme, we need only increase the value of \\textit{n}.\n\nEach piece $m$ consists of a contiguous sequence of combinations, so let $k$ represent the length of the sequence in each movement, and let $C$ denote the total number of movements in the corpus. The number of contiguous \\textit{n}-gram tokens in the corpus is\n\\begin{equation}\\label{eq:1}\n\\displaystyle\\sum_{m=1}^{C} k_m-n+1\n\\end{equation}\n\nTable \\ref{tab:top10} presents the top ten contiguous bigram types ranked by count. The combinations in each type are represented with the \\texttt{csdc} scheme, but for most types I also include Roman numeral annotations. In short, nine of the top ten types repeat tonic or dominant harmony, or scale degrees $\\hat{1}$ or $\\hat{5}$, with the top-ranked type, I--I, featuring 440 tokens. The seventh-ranked type, V$^7$--I, is in fact the only non-repeating bigram to crack the top ten. Thus, the musical surface contains considerable repetition, thereby obscuring the kinds of patterns we might hope to study (e.g., harmonic progressions containing more than one distinct harmony). 
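The counts reported in the table are simple to reproduce. A minimal sketch is given below; it assumes each movement has already been reduced to a temporally ordered list of \\texttt{csdc} tuples, and the function name is mine.
\\begin{verbatim}
from collections import Counter

def contiguous_ngrams(corpus, n=2):
    """Count contiguous n-gram types, where `corpus` is a list of
    movements and each movement is a list of csdc tuples in temporal
    order.  A movement of length k contributes k - n + 1 tokens."""
    counts = Counter()
    for movement in corpus:
        for i in range(len(movement) - n + 1):
            counts[tuple(movement[i:i + n])] += 1
    return counts

# bigrams = contiguous_ngrams(corpus, n=2)
# bigrams.most_common(10)  # cf. the left half of the bigram table
\\end{verbatim}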
Perhaps worse, recall that a significant portion of the chromatic scale degree combinations in the corpus feature fewer than three distinct chromatic scale degrees.\n\n\\begin{table}[t!]\n\t\\centering\n\t\\caption{Top ten contiguous bigram types ranked by count.}\n\t\\begin{threeparttable}\n\t\t\\renewcommand{\\TPTminimum}{\\columnwidth}\n\t\t\\makebox[\\textwidth]{\n\t\t\t\\begin{tabular}{S[table-format=2.0]ccllccrcllcc}\n\t\t\t\t\\toprule\n\t\t\t\t& & \\multicolumn{5}{c}{Without Exclusion Criteria} & & \\multicolumn{5}{c}{With Exclusion Criteria} \\\\\n\t\t\t\t\\cmidrule{3-7}\\cmidrule{9-13}\n\t\t\t\t\\multicolumn{1}{c}{Rank} & & \\textit{N} & \\multicolumn{2}{c}{\\texttt{csdc} (mod 12)} & \\multicolumn{2}{c}{RN} & & \\textit{N} & \\multicolumn{2}{c}{\\texttt{csdc} (mod 12)} & \\multicolumn{2}{c}{RN} \\\\\n\t\t\t\t\\midrule\n\t\t\t\t1 & & 440 & $0,4,7,\\perp$ & $0,4,7,\\perp$ & I & I & & 147 & $7,2,5,11$ & $0,4,\\perp,\\perp$ & V$^7$ & I \\\\\n\t\t\t\t2 & & 212 & $7,0,4,\\perp$ & $7,0,4,\\perp$ & I$^6_4$ & I$^6_4$ & & 59 & $7,0,4,\\perp$ & $7,2,11,\\perp$ & I$^6_4$ & V \\\\\n\t\t\t\t3 & & 212 & $0,4,\\perp,\\perp$ & $0,4,\\perp,\\perp$ & I & I & & 56 & $11,2,5,7$ & $0,4,7,\\perp$ & V$^6_5$ & I \\\\\n\t\t\t\t4 & & 182 & $4,0,7,\\perp$ & $4,0,7,\\perp$ & I$^6$ & I$^6$ & & 52 & $0,2,5,11$ & $0,4,\\perp,\\perp$ & (vii) & I \\\\\n\t\t\t\t5 & & 154 & $0,4,\\perp,\\perp$ & $0,4,7,\\perp$ & I & I & & 42 & $7,0,4,\\perp$ & $7,2,5,11$ & I$^6_4$ & V$^7$ \\\\\n\t\t\t\t6 & & 153 & $7,2,11,\\perp$ & $7,2,11,\\perp$ & V & V & & 39 & $7,2,5,\\perp$ & $7,0,4,\\perp$ & V$^7$ & I$^6_4$ \\\\\n\t\t\t\t7 & & 147 & $7,2,5,11$ & $0,4,\\perp,\\perp$ & V$^7$ & I & & 31 & $7,0,4,\\perp$ & $7,2,5,\\perp$ & I$^6_4$ & V$^7$ \\\\\n\t\t\t\t8 & & 139 & $7,2,5,11$ & $7,2,5,11$ & V$^7$ & V$^7$ & & 30 & $5,2,7,11$ & $4,0,7,\\perp$ & V$^4_2$ & I$^6$ \\\\\n\t\t\t\t9 & & 137 & $7,\\perp,\\perp,\\perp$ & $7,\\perp,\\perp,\\perp$ & & & & 28 & $5,2,9,\\perp$ & $7,0,4,\\perp$ & ii$^6$ & I$^6_4$ \\\\\n\t\t\t\t10 & & 105 & $0,\\perp,\\perp,\\perp$ & $0,\\perp,\\perp,\\perp$ & & & & 27 & $5,0,9,\\perp$ & $4,0,7,\\perp$ & IV & I$^6$ \\\\\n\t\t\t\t\\bottomrule\n\t\t\t\\end{tabular\n\t\t}\n\t\t\\begin{tablenotes}[flushleft]\n\t\t\t\\small\n\t\t\t\\item \\textit{Note.} Exclusion criteria: (1) either chord contains only one distinct chromatic scale degree (\\textit{monophony}); (2) neither chord contains at least three distinct chromatic scale degrees (\\textit{polyphony}); (3) chords share the same chromatic scale degrees regardless of inversion (\\textit{identity}); (4) chords share the same chromatic scale degree in the bass and subsets or supersets of chromatic scale degrees in the upper parts (\\textit{similarity}). Parentheses suggest a change of harmony from one chord to the other, but with a pedal in the bass.\n\t\t\\end{tablenotes}\n\t\\end{threeparttable\n\t\\label{tab:top10}\n\\end{table}\n\nCorpus linguists typically solve problems like this by removing (or \\textit{filtering}) bigram types reflecting parts of speech (or syntactic categories) ``that are rarely associated with interesting linguistic expressions'' \\autocite[31]{Manning1999}. For instance, researchers often exclude types containing articles like `the' or `a' in natural language corpora to ensure that adjective-noun and noun-noun expressions will receive higher ranks in the distribution. 
For our purposes, one could easily extend this sort of thinking to harmonic corpora by excluding \\textit{n}-grams according to the temporal periodicity or proximity of their constituent members. \\textcite{Symons2012}, for example, discovered recurrent contrapuntal patterns in a corpus of two-voice solfeggi by sampling events at regular temporal intervals. Similarly, I increased the ranking of conventional cadential progressions like ii$^6$-I$^6_4$-V$^7$-I by privileging patterns with temporally proximal members \\autocite{Sears2016}.\\footnote{In this instance, I$^6_4$ refers to a double suspension above the cadential dominant, which is more commonly notated as V$^6_4$, as is the case in Example \\ref{ex:haydn_ex}. Unfortunately, this dominant embellishment may only be determined from the immediate harmonic context (e.g., V$^{6-5}_{4-3}$ vs. I$^6_4$-I$^6$), so the present encoding scheme cannot distinguish six-four inversions of the tonic from six-four embellishments of the dominant. Thus, I have retained the I$^6_4$ annotation for instances of $<$7,0,4,$\\perp$$>$ in the analyses that follow.}\n\nThere are, of course, a number of reasons to exclude patterns, but for the present study, I will exclude bigram types if they fail to represent what \\textcite[231]{Meyer2000b} referred to as ``forceful harmonic progressions\": progressions featuring a genuine harmonic (i.e., pitch) change between primarily tertian sonorities.\\footnote{\\textcite[231]{Meyer2000b} explains, ``... the perception and cognition of patterns (and hence the formation of schemata) are dependent on clear differentiation between successive stimuli. More specifically, forceful harmonic progression depends in part on the amount of pitch change between successive triads.''} To that end, I have excluded bigram types if (1) either chord contains only one distinct chromatic scale degree (\\textit{monophony}, e.g., $<$0,$\\perp$,$\\perp$,$\\perp$$>$); (2) neither chord contains at least three distinct chromatic scale degrees (\\textit{polyphony}, e.g., $<$0,$\\perp$,$\\perp$,$\\perp$$>$$\\rightarrow$$<$0,2,$\\perp$,$\\perp$$>$); (3) chords share the same chromatic scale degrees regardless of inversion (\\textit{identity}, e.g., $<$0,4,7$\\perp$$>$$\\rightarrow$$<$4,0,7,$\\perp$$>$); and (4) chords share the same chromatic scale degree in the bass and subsets or supersets of chromatic scale degrees in the upper parts (\\textit{similarity}, e.g., $<$7,5,11,$\\perp$$>$$\\rightarrow$$<$7,2,5,11$>$). The first two criteria ensure that the filtered bigram types will feature tertian sonorities in some way, while the latter two criteria emphasize the importance of pitch change from one sonority to the next.\n\nShown in the right-most columns of Table \\ref{tab:top10}, 34\\% of the 5378 bigram types in the corpus met these exclusion criteria. With exclusion, progressions deemed `cadential' in most theories of harmony rose to the top of the table, with five of the top ten progressions featuring a six-four embellishment of the dominant. Along with these progressions, the table also includes typical tonic-prolongational progressions like V$^6_5$--I, V$^4_2$--I$^6$, and IV--I$^6$. The appeal of filtering in this way is thus that potentially meaningful progressions emerge out of distributional statistics. Nevertheless, by only including the counts for contiguous bigram types, we necessarily omit syntactic progressions with intervening embellishing tones from the final count. 
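Expressed in code, the four criteria might look something like the following sketch, which again assumes \\texttt{csdc} tuples with the bass listed first; the helper names are mine, and a production implementation would need to handle edge cases glossed over here.
\\begin{verbatim}
def degrees(chord):
    """Distinct chromatic scale degrees in a csdc tuple (ignoring None)."""
    return {p for p in chord if p is not None}

def excluded(c1, c2):
    """True if the bigram (c1, c2) fails the 'forceful progression' test."""
    d1, d2 = degrees(c1), degrees(c2)
    if len(d1) < 2 or len(d2) < 2:        # (1) monophony
        return True
    if len(d1) < 3 and len(d2) < 3:       # (2) polyphony
        return True
    if d1 == d2:                          # (3) identity
        return True
    u1, u2 = degrees(c1[1:]), degrees(c2[1:])
    if c1[0] == c2[0] and (u1 <= u2 or u2 <= u1):  # (4) similarity
        return True
    return False
\\end{verbatim}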
In a previous study, for example, I found that the progression I$^6$--ii$^6$--V$^7$--I \\textit{never} appears contiguously in this corpus despite the apparent ubiquity of the pattern in the classical style \\autocite{Sears2017}. Of the thousands of chord onsets examined here, it therefore seems unreasonable to assume that the conventional progressions in the right-most columns of Table \\ref{tab:top10} should feature so few tokens. In this case, the commitment to contiguous \\textit{n}-grams---the standard method in musical corpus research---has effectively tied our hands.\n\nTo discover associations lying beneath (or beyond) the musical surface, we might simply relax the contiguity assumption to ensure potentially relevant bigram types appear in the distribution. Shown in Figure \\ref{fig:non_contiguous}, the top plot depicts the contiguous and non-contiguous bigram tokens for a 5-event sequence with solid and dashed arcs, respectively. According to Equation \\eqref{eq:1}, the number of contiguous tokens in a 5-event sequence is $k-n+1$, or four tokens. If we also include all possible non-contiguous relations, the number of tokens is given by the combination equation:\n\\begin{equation}\n{k \\choose n} = \\frac{k!}{n!(k-n)!} = \\frac{k(k-1)(k-2) \\ldots (k-n+1)}{n!}\n\\end{equation}\n\nThe notation ${k \\choose n}$ denotes the number of possible combinations of $n$ events from a sequence of $k$ events. By including the non-contiguous associations, the number of tokens for a 5-event sequence increases to 10. As \\textit{n} and \\textit{k} increase, the number of patterns can very quickly become unwieldy: a 20-event sequence, for example, contains 190 possible tokens. To overcome the combinatoric complexity of counting tokens in this way, researchers in natural language processing have limited the investigation to what I have called \\textit{fixed-skip} \\textit{n}-grams \\autocite{Sears2017,Guthrie2006}, which only include \\textit{n}-gram tokens if their constituent members occur within a fixed number of skips $t$. Shown in the bottom plot in Figure \\ref{fig:non_contiguous}, $ac$ and $bd$ constitute one-skip tokens (i.e., $t=1$), while $ad$ and $be$ constitute two-skip tokens. Thus, up to 10 tokens appear in a 5-event sequence when $t=3$.\n\nBy relaxing the restrictions on the possible associations between events in a sequence, the skip-gram method has increased our chances of including potentially meaningful \\textit{n}-grams in the final distribution. It does not ensure that the most frequent types will feature genuine harmonic progressions, however. To be sure, the skip-gram method aggregates the counts for bigram types whose members occur \\textit{within} a certain number of skips, and so is just as susceptible to the most repetitive patterns on the surface. As a result, repeating bigram types like I--I tend to retain their approximate ranking regardless of the permitted number of skips.\n\nTo resolve this issue, we could again exclude patterns that do not represent a genuine harmonic change between adjacent members, but corpus linguists also frequently employ alternative ranking functions whose goal is to discover recurrent multi-word expressions using other relevance criteria. In this case, the skip-gram method provides counts for each \\textit{n}-gram type at a number of possible skips. 
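A minimal implementation of fixed-skip bigrams, keeping the counts separate for each value of $t$ so that they can either be aggregated or modeled individually, might read as follows (the names are illustrative).
\\begin{verbatim}
from collections import Counter

def skip_bigrams(movement, max_skips=10):
    """Count bigram tokens at each number of skips t = 0..max_skips.
    counts[0] reproduces the contiguous bigrams; larger values of t
    admit progressively more distal, non-contiguous pairs."""
    counts = {t: Counter() for t in range(max_skips + 1)}
    for i, first in enumerate(movement):
        for t in range(max_skips + 1):
            j = i + t + 1
            if j >= len(movement):
                break
            counts[t][(first, movement[j])] += 1
    return counts

# Summing counts[0] through counts[t] gives the aggregated counts for
# tokens whose members occur within t skips of one another.
\\end{verbatim}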
If the harmonies in conventional harmonic progressions tend to appear in strong metric positions and feature intervening embellishing tones, we might instead rank patterns using a statistic that characterizes the \\textit{depth} at which conventional harmonic progressions tend to emerge.\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\def\\svgwidth{.5\\textwidth}\n\t\\input{Fig3.pdf_tex}\n\t\\caption{Top: A five-event sequence, with arcs denoting all contiguous (solid) and non-contiguous (dashed) bigram tokens. Bottom: All tokens, with $t$ indicating the number of skips between events.}\n\t\\label{fig:non_contiguous}\n\\end{figure}\n\nFigure \\ref{fig:poly_plot} presents scatter plots of the counts measured from zero to ten skips for the cadential six-four progression, I$^6_4$--V$^7$, and the top-ranked type in the corpus, I--I. Recall that I--I features 440 tokens on the surface (see Table \\ref{tab:top10}). As the number of skips $t$ between the members of this type increase, the associated counts decrease. In other words, repetitive patterns like I--I appear far more prevalently at (or near) the surface. This result seems unsurprising---presumably over enough skips, \\textit{all} bigram types become less frequent. Yet since conventional harmonic progressions often appear in strong metric positions and feature intervening embellishing tones, we might assume that the counts for these patterns should actually \\textit{increase} with $t$ up to a certain point, and then decrease for more distal relations. This is in fact exactly what we find for the I$^6_4$--V$^7$ progression in Figure \\ref{fig:poly_plot}, which features its highest count when $t=3$, but decreasing counts when $t>3$.\n\nSo how might we privilege patterns like I$^6_4$--V$^7$ in the final ranking? If conventional harmonic progressions tend to appear beneath the surface, the counts across skips should increase as $t$ increases. We could then model this assumption by fitting a first-order polynomial (or \\textit{linear}) trend to the distribution of counts for each $n$-gram type. The best-fit line modeled by the equation $y_i = \\beta_1x_i +\\beta_0$ minimizes the error between the predicted count at each skip and its actual count in Cartesian space (called \\textit{linear regression}), where the leading coefficient $\\beta_1$ characterizes the shape of the trend (i.e., the slope of the line). Thus, positive estimates of $\\beta_1$ would indicate an increasing trend, whereas negative estimates would indicate a decreasing trend.\n\nIn this model, ranking bigram types using $\\beta_1$ privileges patterns whose counts increase as $t$ increases. As a result, conventional progressions like I--V$^7$ and V--I appear in the top ten, but so do syntactic retrogressions like ii$^6$--I (table not shown). In this case, permitting such large skips ensures that the final distribution will features types whose members skip over syntactically meaningful harmonies \\textit{within} the progression. Thus, we could revise the linear model by assuming that the counts should increase as $t$ initially increases (e.g., up to $t=3$), and then \\textit{decrease} for larger skips. A second-order polynomial trend---which would produce a U-shape if the leading coefficient $\\beta_2$ is positive, and an inverted U-shape if $\\beta_2$ is negative---would peak near the center of the distribution (i.e., at around $t=5$), so I have ranked each bigram type using the leading coefficient $\\beta_3$ of a third-order polynomial trend. 
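This ranking function amounts to an ordinary least-squares polynomial fit. The sketch below fits raw counts; the $\\beta_3$ values reported in the next table were presumably estimated on counts measured from zero to ten skips under the corpus-specific normalization described above, so the snippet is meant only to illustrate the procedure.
\\begin{verbatim}
import numpy as np

def rank_by_beta3(counts_by_skip, max_skips=10):
    """Fit a third-order polynomial to each bigram type's counts from
    zero to max_skips skips and rank by the leading coefficient."""
    skips = list(range(max_skips + 1))
    x = np.array(skips, dtype=float)
    types = set().union(*(counts_by_skip[t].keys() for t in skips))
    scores = {}
    for bigram in types:
        y = np.array([counts_by_skip[t][bigram] for t in skips], float)
        scores[bigram] = np.polyfit(x, y, deg=3)[0]  # leading coefficient
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
\\end{verbatim}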
Patterns that increase from zero to approximately three skips, and then decrease for larger numbers of skips, will produce the positive trend found for the I$^6_4$--V$^7$ progression in Figure \\ref{fig:poly_plot}. Conversely, patterns that decrease exponentially from zero to ten skips will produce the negative trend found for the I--I progression. In point of fact, these two patterns are the highest- and lowest-ranked types in the distribution.\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\input{Fig4.pdf_tex}\n\t\\caption{Scatter plots of counts at each skip for the highest- and lowest-ranked bigram types: $<$7,0,4,$\\perp$$>$ $\\rightarrow$ $<$7,2,5,11$>$ (left) and $<$0,4,7,$\\perp$$>$ $\\rightarrow$ $<$0,4,7,$\\perp$$>$ (right). Counts in both plots are fit with a third-order polynomial trend (solid line).}\n\t\\label{fig:poly_plot}\n\\end{figure}\n\nTable \\ref{tab:top10poly} presents the top ten bigram types ranked by $\\beta_3$. Even without exclusion criteria, the top ten types include several conventional harmonic progressions, such as V$^7$--I, a pre-dominant-to-I$^6_4$ progression, and a tonic-prolongational progression from I to IV$^6_4$. With exclusion criteria, a few other interesting patterns emerge, such as the progression from I$^6$ to IV, or the tonic pedal supporting a progression from vii to I. Perhaps more importantly, few of the patterns discovered using this method appear frequently on the surface. The sixth-ranked V$^7$--I progression, for example, includes just three tokens on the surface. Even across the top one hundred patterns in the distribution, the median count is just four tokens. Thus, it seems that patterns appearing just beneath the surface---i.e., whose counts peak at around $t=3$---feature many of the conventional (or syntactic) progressions described in most theories of harmony.\n\nYet despite the success of this ranking function to privilege patterns representing a genuine harmonic change of some sort, a persistent problem remains: namely, four of the top ten progressions in Table \\ref{tab:top10poly} are variants of the same V$^7$--I progression. This finding suggests that the vocabulary, at 688 symbols, is simply too large to serve as a suitable proxy for the chord vocabularies in theories of harmony. Discovering the most recurrent, syntactic progressions---or reducing the musical surface to a sequence of its most central (or salient) harmonies---requires a novel vocabulary reduction method, a problem I turn to in the next section.\n\n\\section*{Superordinate\/Subordinate Relations: Recursion \\& Reduction}\n\nThe appeal of the scheme selected for this study is that the most common chromatic scale degree combinations will have analogues in most theories of harmony. Frankly, this is not surprising given that the scheme relies on human annotations about the keys and modes for each movement. Nevertheless, the \\texttt{csdc} representation is more promiscuous than traditional definitions of `chord' would embrace. Whereas theorists tend to assign chordal status only to those vertical sonorities featuring stacked intervals of a third, the \\texttt{csdc} scheme makes no distinctions between chord tones and non-chord tones, consonant and dissonant intervals, or diatonic and chromatic scale degrees \\autocite{Quinn2010}. 
Hence, the vocabulary of \\texttt{csdc} types is enormous.\n\n\\begin{table}[t!]\n\t\\centering\n\t\\caption{Top ten bigram types ranked by leading coefficient of third-order polynomial trend.}\n\t\\begin{threeparttable}\n\t\t\\renewcommand{\\TPTminimum}{\\columnwidth}\n\t\t\\makebox[\\textwidth]{\n\t\t\t\\begin{tabular}{S[table-format=2.0]ccllccrcllcc}\n\t\t\t\t\\toprule\n\t\t\t\t& & \\multicolumn{5}{c}{Without Exclusion Criteria} & & \\multicolumn{5}{c}{With Exclusion Criteria} \\\\\n\t\t\t\t\\cmidrule{3-7}\\cmidrule{9-13}\n\t\t\t\t\\multicolumn{1}{c}{Rank} & & \\textit{$\\beta_3$} & \\multicolumn{2}{c}{\\texttt{csdc} (mod 12)} & \\multicolumn{2}{c}{RN} & & \\textit{$\\beta_3$} & \\multicolumn{2}{c}{\\texttt{csdc} (mod 12)} & \\multicolumn{2}{c}{RN} \\\\\n\t\t\t\t\\midrule\n\t\t\t\t1 & & .429 & $7,0,4,\\perp$ & $7,2,5,11$ & I$^6_4$ & V$^7$ & & .429 & $7,0,4,\\perp$ & $7,2,5,11$ & I$^6_4$ & V$^7$ \\\\\n\t\t\t\t2 & & .240 & $5,\\perp,\\perp,\\perp$ & $0,\\perp,\\perp,\\perp$ & & & & .181 & $0,4,7,\\perp$ & $0,5,9,\\perp$ & I & IV$^6_4$ \\\\\n\t\t\t\t3 & & .220 & $7,\\perp,\\perp,\\perp$ & $2,\\perp,\\perp,\\perp$ & & & & .171 & $7,5,11,\\perp$ & $0,4,\\perp,\\perp$ & V$^7$ & I \\\\\n\t\t\t\t4 & & .181 & $0,4,7,\\perp$ & $0,5,9,\\perp$ & I & IV$^6_4$ & & .166 & $5,9,\\perp,\\perp$ & $7,0,4,\\perp$ & PrD & I$^6_4$ \\\\\n\t\t\t\t5 & & .173 & $2,\\perp,\\perp,\\perp$ & $7,\\perp,\\perp,\\perp$ & & & & .162 & $0,4\\perp,\\perp$ & $0,5,9,\\perp$ & I & IV$^6_4$ \\\\\n\t\t\t\t6 & & .171 & $7,5,11,\\perp$ & $0,4,\\perp,\\perp$ & V$^7$ & I & & .155 & $7,2,5,11$ & $0,4,\\perp,\\perp$ & V$^7$ & I \\\\\n\t\t\t\t7 & & .166 & $5,9,\\perp,\\perp$ & $7,0,4,\\perp$ & PrD & I$^6_4$ & & .144 & $7,2,5,11$ & $0,4,7,\\perp$ & V$^7$ & I \\\\\n\t\t\t\t8 & & .162 & $0,4\\perp,\\perp$ & $0,5,9,\\perp$ & I & IV$^6_4$ & & .141 & $4,0,7,\\perp$ & $5,0,9,\\perp$ & I$^6$ & IV \\\\\n\t\t\t\t9 & & .161 & $0,4,7,\\perp$& $0,\\perp,\\perp,\\perp$ & I & & & .135 & $7,2,5,\\perp$ & $0,4,\\perp,\\perp$ & V$^7$ & I \\\\\n\t\t\t\t10 & & .155 & $7,2,5,11$ & $0,4,\\perp,\\perp$ & V$^7$ & I & & .122 & $0,2,5,11$ & $0,4,7,\\perp$ & (vii) & I \\\\\n\t\t\t\t\\bottomrule\n\t\t\t\\end{tabular\n\t\t}\n\t\t\\begin{tablenotes}[flushleft]\n\t\t\t\\small\n\t\t\t\\item \\textit{Note.} A third-order polynomial trend fit to the counts from zero to ten skips for each bigram type (i.e., $y_i = \\beta_3x^3_i + \\beta_2x^2_i + \\beta_1x_i +\\beta_0$). PrD denotes predominant function. See Table \\ref{tab:top10} for exclusion criteria.\n\t\t\\end{tablenotes}\n\t\\end{threeparttable\n\t\\label{tab:top10poly}\n\\end{table}\n\nHow, then, do we solve the harmonic reduction problem when the relations between note events explode in combinatorial complexity for complex polyphonic textures? To demonstrate that these sorts of organizational systems emerge out of distributional statistics, let us consider again the two ranking functions from the previous section: (1) \\textit{count}, which assumes that the most relevant or meaningful types are the most frequent; and (2) a \\textit{polynomial trend}, which assumes that the most relevant types tend to appear just beneath the surface (e.g., at $t=3$). The former is a simple statistic that represents the sum of the instances for each type, whereas the latter exploits domain-specific knowledge about the corpus under investigation. There are limitations to both ranking functions, however. 
One could argue, for example, that since the top-ranked types in Table \\ref{tab:top10poly} are by no means the most frequent, the assumption that they are somehow relevant (or conventional) is unwarranted. Similarly, corpus linguists have argued that count is not a sufficient indicator for a strong attraction between words \\autocite[5]{Evert2008}, since two highly frequent words are also likely to co-occur quite often just by chance. V$^7$ and I appear frequently in the corpus (see Figure \\ref{fig:csdc_pie}), for example, so it is possible that their appearance in Table \\ref{tab:top10} simply reflects the joint probability of their co-occurrence.\n\nTo resolve this issue, corpus linguists have developed a large family of \\textit{association} (or \\textit{attraction}) \\textit{measures} that quantify the statistical association between co-occurring words \\autocite{Evert2008}. The majority of these measures use contingency tables. Table \\ref{tab:contingency} presents the contingency table for the most common bigram type associated with the V$^7$--I progression: $<$7,2,5,11$>$ $\\rightarrow$ $<$0,4,$\\perp$,$\\perp>$. The counts reflect tokens with up to five skips between bigram members. The table has four cells for tokens containing both chord$_1$ and chord$_2$ (a), tokens containing chord$_1$ but not chord$_2$ (i.e., any other chord) (b), tokens containing chord$_2$ but not chord$_1$ (c), and tokens containing neither chord (d). The \\textit{marginal frequencies}, so called because they appear at the margins of the table, represent the sum of each row and column. Thus, the co-occurence frequency for V$^7$--I is 581.\n\nAgain, there are dozens of available association measures \\autocite{Pecina2009}, but \\textit{Fisher's exact test} is perhaps the most appropriate (or mathematically rigorous) significance test for the analysis of contingency tables \\autocite{Agresti2002}. The mathematical formalism need not concern us here, but in short, Fisher's exact test computes the total probability of all possible outcomes that are similar to or more extreme than the observed contingency table. The resulting probability (or \\textit{p-value}) will be large if the two chords of a given bigram are statistically independent, but very small if the two chords are unlikely to co-occur at the estimated frequency just by chance. In this case, the \\textit{p}-value is vanishingly small ($p < .0001$), suggesting that V$^7$ and I are statistically dependent.\n\nEssentially, association measures produce empirical statements about the \\textit{statistical attraction} between chords. They do not measure the potential asymmetry of this association, however. This is to say that in many cases chord$_1$ could be more (or less) predictive of chord$_2$ than the other way around, so \\textcite{Gries2013} and \\textcite{Nelson2014} have suggested alternative association measures based on the predictive asymmetry between the members of each bigram. According to \\textcite{Firth1957}, association measures should quantify the statistical influence an event exerts on its neighborhood, where some events exert more influence than others. Given such a measure, we could reduce the chord vocabulary by privileging chords that exert the greatest `attractional force' on their neighbors.\n\nAsymmetric (or \\textit{directional}) association measures typically compute the conditional probabilities between the members of each bigram \\autocite{Michelbacher2007}. 
In Table \\ref{tab:contingency}, for example, the probability that I follows V$^7$ can be computed from the frequencies in the first row. In this case, $P(\\text{I}|\\text{V}^7)$ is $\\frac{a}{a+b}$, with $a$ representing all of the instances in which I follows V$^7$, and $b$ representing all of the instances in which some other harmony follows V$^7$. Thus, $\\frac{581}{3873}=.15$, which tells us that I follows V$^7$ roughly 15\\% of the time. This estimate does not represent the probability that V$^7$ \\textit{precedes} I, however. To compute this statistic, we can use the frequencies in the first column of Table \\ref{tab:contingency}. Here, $P(\\text{V}^7|\\text{I})$ is $\\frac{a}{a+c}$, or $\\frac{581}{6216}=.09$. Thus, for this particular variant of the V$^7$--I progression, V$^7$ is a better predictor of I than the other way around. Or put another way, I exerts the greater attractional force.\n\n\\begin{table}[t!]\n\t\\centering\n\t\\caption{Contingency table for the bigram $<$7,2,5,11$>$ $\\rightarrow$ $<$0,4,$\\perp$,$\\perp>$ (V$^7$--I). Counts reflect tokens with up to five skips between chord events.}\n\t\\setlength{\\tabcolsep}{16pt}\n\t\\def\\arraystretch{1.5}\n\t\\begin{tabular}{r|S[table-format=4.0] S[table-format=6.0]r|S[table-format=6.0]}\n\t\t& {I} & {Not I} & & Totals \\\\\n\t\t\\midrule\n\t\tV$^7$ & 581 \\enspace {(a)} & 3292 \\enspace {(b)} & & 3873 \\\\\n\t\tNot V$^7$ & 5635 \\enspace {(c)} & 106862 \\enspace {(d)} & & 112497 \\\\\n\t\t\\midrule\n\t\tTotals & 6216 & 110154 & & 116370 \\\\\n\t\\end{tabular\n\t\\label{tab:contingency\n\\end{table\n\nI have formalized this statistical inference in the following way:\n\\begin{equation}\n\\textsc{asym} = P(\\text{chord}_2|\\text{chord}_1) - P(\\text{chord}_1|\\text{chord}_2) = \\frac{a}{a+b} - \\frac{a}{a+c}\n\\end{equation}\nIn this equation, \\textsc{asym} is simply the arithmetic difference between the two conditional probabilities. The estimates of \\textsc{asym} fall in a range between $-1$ and 1, where positive values indicate that chord$_2$ is the attractor, negative values indicate that chord$_1$ is the attractor, and 0 indicates bidirectionality, since both harmonies exert equivalent attractional force. For the bigram in Table \\ref{tab:contingency}, the positive directional asymmetry of .06 tells us that I exerts more influence on V$^7$, and so serves as the statistical attractor within the bigram.\n\nIt bears mentioning here that directional asymmetries differ from the \\textit{temporal} asymmetries described in theories of harmony. Whereas temporal asymmetries refer to the conventionality or syntactic plausibility of harmonies according to their temporal order (e.g., ii$^6$--V$^7$ compared to V$^7$--ii$^6$), directional asymmetries attempt to capture the attractional force between two harmonies in a \\textit{specified} temporal relationship. Thus, V$^7$--ii$^6$ may be much less likely---and thus, less syntactic---than ii$^6$--V$^7$, but our notion of directional asymmetry simply indicates which of the two harmonies is the stronger attractor in both progressions.\n\nTo reduce the vocabulary of chord types, we could simply calculate the number of bigram types in which each unigram type is the attractor. Shown in Table \\ref{tab:asym_ranks}, \\textit{N}$_{\\text{attractor}}$ indicates that $<$$0,4,7,\\perp$$>$ serves as the attractor in 604 distinct bigram types in the corpus. Unsurprisingly, \\%$_{\\text{attractor}}$ also tells us that this variant of I is the attractor for every bigram in which it appears. 
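The quantities just described are easily verified. The sketch below recomputes the Fisher test and the directional asymmetry for the V$^7$--I contingency table above; \\texttt{scipy} is assumed to be available, and the variable names follow the cell labels in that table.
\\begin{verbatim}
from scipy.stats import fisher_exact

# Cell counts for <7,2,5,11> -> <0,4,_,_> (V7--I), tokens with up to
# five skips between members.
a, b, c, d = 581, 3292, 5635, 106862

odds_ratio, p_value = fisher_exact([[a, b], [c, d]])  # p << .0001

p2_given_1 = a / (a + b)          # P(I | V7)  = 581/3873  ~ .15
p1_given_2 = a / (a + c)          # P(V7 | I)  = 581/6216  ~ .09
asym = p2_given_1 - p1_given_2    # ~ .06, so I is the attractor
\\end{verbatim}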
Finally, the table also presents an alternative asymmetric measure based on the sum of the asymmetries for each bigram type in which the indicated unigram type is a member. In this case, I exerts the greatest attractional force of any of unigram type in the corpus $\\sum_{\\textsc{asym}}=56.41$, with other harmonies like V, V$^7$, I$^6$, and ii$^6$ also making the top ten. Vocabulary reduction methods could then use a table like this one to create an \\textit{$n$-best list}, which uses a specified threshold $n$ to determine the members (and non-members) of the vocabulary \\autocite{Evert2008}. We would then assimilate harmonies appearing below this threshold into those appearing above using a kind of incremental clustering method.\n\nA discussion of clustering methods for directional asymmetry data deserves its own study, but for the sake of illustration, I have presented one possible method in Example \\ref{ex:reduction}. In this case, reducing the musical surface to a sequence of its most central harmonies is a specific case of the more general vocabulary reduction problem considered thus far. Starting with the attractional force rankings represented in Table \\ref{tab:asym_ranks}, a simple harmonic reduction algorithm could reduce a sequence of harmonies---and thus, the size of the overall vocabulary---by linking the chord exerting the least attractional force in the sequence, denoted by $c_i$, to the left or right chord neighbor exerting the greater attractional force ($c_{i-1}$ or $c_{i+1}$). The algorithm would then remove $c_i$ from the sequence and repeat the process until all of the chords have been linked.\n\nShown in Example \\ref{ex:reduction}, the harmonic reduction algorithm just described could be used to produce a tree diagram not unlike those found in Lerdahl and Jackendoff's (1983) approach. In this case, the chord in onset three, $<$$9,0,7,\\perp$$>$, features the least attractional force of the chords in the sequence, so the algorithm linked onset three to the stronger attractor, which in this case was the right neighbor, $<$$9,0,5,\\perp$$>$ (i.e., IV$^6$). The algorithm then removed onset three from the sequence and started the process again, at each step linking the combination of scale degrees exerting the least attractional force to the stronger adjacent attractor, and then removing that combination from the sequence. Branches reaching above the horizontal dashed line produce the reduction shown in the bottom system. 
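One possible rendering of this procedure in code is given below. The sketch assumes a dictionary mapping each \\texttt{csdc} type to an attractional-force score (e.g., the summed asymmetries in the table that follows); it is a greedy, strictly local reduction, not the only way the linking step could be implemented.
\\begin{verbatim}
def reduce_sequence(chords, force):
    """Greedy harmonic reduction: repeatedly link the chord with the
    least attractional force to its stronger immediate neighbour and
    remove it, until a single chord remains.  Returns the list of
    (pruned_chord, attractor) links, from which a tree can be drawn."""
    seq = list(chords)
    links = []
    while len(seq) > 1:
        i = min(range(len(seq)), key=lambda k: force.get(seq[k], 0.0))
        neighbours = []
        if i > 0:
            neighbours.append(seq[i - 1])
        if i < len(seq) - 1:
            neighbours.append(seq[i + 1])
        attractor = max(neighbours, key=lambda n: force.get(n, 0.0))
        links.append((seq[i], attractor))
        del seq[i]
    return links
\\end{verbatim}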
Thus, for this passage the algorithm pruned all of the chords containing embellishing tones, resulting in the sequence of chords corresponding to the Roman numeral annotations provided below.\n\n\\begin{table}[t!]\n\t\\centering\n\t\\caption{Top ten unigram types, ranked according to the number of bigram types in which each unigram type is the attractor.}\n\t\\begin{threeparttable}\n\t\t\\renewcommand{\\TPTminimum}{\\columnwidth}\n\t\t\\makebox[\\textwidth]{\n\t\t\t\\begin{tabular}{S[table-format=2.0]S[table-format=3.0]S[table-format=3.1]clc}\n\t\t\t\t\\toprule\n\t\t\t\t\\multicolumn{1}{c}{Rank} & \\textit{N}$_{\\text{attractor}}$ & \\%$_{\\text{attractor}}$ & $\\sum_{\\textsc{asym}}$ & \\multicolumn{1}{l}{\\texttt{csdc} (mod 12)} & RN \\\\\n\t\t\t\t\\midrule\n\t\t\t\t1 & 604 & 100 & 56.41 & $0,4,7,\\perp$ & I \\\\\n\t\t\t\t2 & 560 & 99.6 & 38.33 & $0,4,\\perp,\\perp$ & I \\\\\n\t\t\t\t3 & 555 & 99.3 & 31.35 & $7,\\perp,\\perp,\\perp$ & \\\\\n\t\t\t\t4 & 509 & 98.1 & 36.38 & $7,2,11,\\perp$ & V \\\\\n\t\t\t\t5 & 508 & 98.5 & 29.10 & $7,2,5,11$ & V$^7$ \\\\\n\t\t\t\t6 & 508 & 97.7 & 23.89 & $0,\\perp,\\perp,\\perp$ & \\\\\n\t\t\t\t7 & 497 & 98.8 & 35.34 & $4,0,7,\\perp$ & I$^6$ \\\\\n\t\t\t\t8 & 440 & 96.9 & 32.27 & $7,0,4,\\perp$ & I$^6_4$ \\\\\n\t\t\t\t9 & 425 & 95.6 & 27.36 & $5,2,9,\\perp$ & ii$^6$ \\\\\n\t\t\t\t10 & 412 & 96.3 & 13.98 & $2,\\perp,\\perp,\\perp$ & \\\\\n\t\t\t\t\\bottomrule\n\t\t\t\\end{tabular\n\t\t}\n\t\t\\begin{tablenotes}[flushleft]\n\t\t\t\\small\n\t\t\t\\item \\textit{Note.} \\textit{N}$_{\\text{attractor}}$ denotes the number of bigram types in which each unigram type is the attractor, and \\%$_{\\text{attractor}}$ indicates the percentage of bigram types in which each unigram type appears as the attractor.\n\t\t\\end{tablenotes}\n\t\\end{threeparttable\n\t\\label{tab:asym_ranks\n\\end{table\n\nPresumably the pronounced directional asymmetries in the distribution of counts for each bigram type allow this algorithm to distinguish genuine harmonies from chords containing embellishing tones. Nevertheless, a toy example like this one tends to paper over the cracks of what is in fact a very difficult problem. Note, for example, that the highest branches of the tree only partly reflect how an analyst might parse this particular passage. The three harmonies at the very top of the tree seem reasonable enough (I--V$^7$--I), but this algorithm identified the next most important harmony as I$^6$ rather than IV, producing the progression, I--I$^6$--V$^7$--I. If Meyer's (\\citeyear{Meyer2000b}) preference for genuine harmonic change is reasonable, then IV would be the more fitting member in the final progression even though I$^6$ obtained the higher rank in Table \\ref{tab:asym_ranks}.\n\nWe could perhaps solve this problem by applying the algorithm not just to the passage in question, but to the entire corpus of movements simultaneously. At each step, the algorithm would assimilate variants of harmonies like $<$$9,0,7,\\perp$$>$ into more stable attractors like $<$$9,0,5,\\perp$$>$, and then adjust the ranks in Table \\ref{tab:asym_ranks} before starting the process again. In this way, the attractional force for each harmony in the distribution would change from one hierarchical level to the next. 
One could imagine that by incrementally assimilating variants into their more stable attractors, the resulting vocabulary might better reflect the functional categories described in theories of harmony (e.g., tonic, predominant, and dominant), and so produce greater attractional force estimates for harmonies like IV relative to those like I$^6$ at higher levels of the hierarchy.\n\n\\begin{example}[t!]\n\t\\centering\n\t\\input{Fig5.pdf_tex}\n\t\\caption{Tree diagram produced by the harmonic reduction algorithm for Op. 50\/2, iv, mm. 48--50. Branches reaching above the horizontal dashed line produce the reduction shown in the bottom system.}\n\t\\label{ex:reduction}\n\\end{example}\n\nDespite these limitations, the important point here is that when the vocabulary is larger than, say, 5 to 10 symbols, the admittedly simple algorithm just described produces surface-to-mid-level parsings not unlike those found in a Roman numeral analysis. Thus, it seems reasonable to suggest that the statistical associations between events near the surface reflect many of the organizational principles captured by most theories of harmony.\n\n\\section*{Conclusions}\n\nThis chapter adapted string-based methods from corpus linguistics to examine the coordinate, proordinate, and superordinate\/subordinate relations characterizing tonal harmony. In doing so, I have assumed that three of the organizational principles associated with natural languages---recurrence, syntax, and recursion---might also appear in tonal music. To that end, I began by examining the distribution of chromatic scale degree combinations across a corpus of Haydn string quartets. Unsurprisingly, the diatonic harmonies of the tonal system accounted for the vast majority of the combinations in the corpus. To model progressions of these harmonies over time, I then employed skip-grams, which include sub-sequences in an \\textit{n}-gram distribution if their constituent members occur within a certain number of skips. After applying filtering measures and ranking functions of various types, the most relevant (or meaningful) harmonic progressions emerged at the top of the \\textit{n}-gram distribution. Finally, to reduce the musical surface in Example \\ref{ex:haydn_ex}a to a sequence of its most central harmonies, I presented a simple harmonic reduction algorithm (and tree diagram) based on an asymmetric probabilistic measure of attractional force. In this case, the algorithm pruned all of the chords containing embellishing tones.\n\nTo examine the potential of string-based methods in corpus studies of music, I made certain simplifying assumptions about the principles mentioned above. Perhaps the most obvious of these was to restrict the purview of coordinate relations to temporally coincident scale degrees. This restriction seems reasonable for homorhythmic textures, but much less so for string quartets, piano sonatas, and the like, which often feature accompanimental textures that prolong harmonies over time (e.g., an Alberti bass pattern). This problem was at least partly resolved by the harmonic reduction algorithm, which assimilates variants into nearby attractors (e.g., I$^6$ into I), but note that it cannot replace variants of a given harmonic category---say, for example, $<$$0,4,\\perp,\\perp$$>$---if their more central (or prototypical) attractors---$<$$0,4,7,\\perp$$>$---do not appear nearby. 
Thus, the current algorithm would sometimes fail to produce convincing harmonic reductions for passages featuring complex polyphonic textures.\\footnote{For an innovative string-based solution to this problem, see \\textcite{White2013b}.}\n\nNevertheless, because these methods are ambivalent about the organizational systems they model, one need only revise the simplifying assumptions above to suit the needs of the research program. Indeed, since these methods proceed from the bottom up, we could just as easily use skip-grams or attraction measures to study melody, rhythm, or meter. Similarly, corpus studies in cognitive linguistics or systematic musicology might argue that the syntactic properties of natural language or tonal music should reflect limitations of human auditory processing, so it seems reasonable to impose similar restrictions on the sorts of contiguous and non-contiguous relations the skip-gram method should model \\autocite{Sears2017}. This claim seems especially relevant if we assume that the recursive hierarchy described throughout this chapter is nonuniform and discontinuous \\autocite{Meyer1973}, in that the statistics operating at relatively surface levels of musical organization---the syntax of harmonic progressions (e.g., I--ii$^6$--V--I)---might differ from those operating at deeper levels---long-range key relationships (e.g., I--III--V--I).\n\nThere are, indeed, many possible solutions for the computational problems described here. My goal was not to offer definitive results, but to demonstrate that the most recent methods developed in corpus linguistics and natural language processing have much to offer for corpus studies of music. Indeed, if \\textcite{Patel2008} is right that language and music share certain fundamental design features, then skip-grams, contingency tables, and association measures represent invaluable tools for the study of tonal harmony.\n\n\\pagebreak\n\\printbibliography\n\\end{document}\n\n\\section{Introduction}\n\\label{Section: Introduction}\nFor object recognition to be done correctly, a model must preserve the hierarchical relationships between object parts and their corresponding wholes. It is not sufficient that all the pieces of an object are present in an image; they must also be oriented correctly with respect to one another. Convolutional Neural Networks (CNNs) are limited in their ability to model the spatial hierarchy between objects. Even if sub-sampling operations (e.g., max-pooling) are removed, the representation of data in CNNs does not take into account the relationships between object parts and wholes.\n\nCapsule Networks \\cite{Sabour_2017}, \\cite{Hinton_2018} learn such hierarchies by grouping feature map channels together to form a vector of features (i.e., a capsule) that captures the instantiation parameters of objects and by learning transformation matrices that encode viewpoint \\textit{invariant} information. These networks generalize to novel viewpoints by incorporating the viewpoint changes directly into the activities of the capsules. The capsules can represent various properties of an object ranging from its size and shape to more subtle features such as texture and orientation.
Since their introduction, CapsNets have produced state-of-the-art accuracies on datasets such as MNIST \\cite{Zhao_2019} and smallNORB using a network with far fewer parameters compared with their CNN counterparts \\cite{Sabour_2017}.\n\nAt their core, CapsNets \\cite{Sabour_2017} make use of a dynamic routing algorithm to ensure that the output of a lower-level capsule is routed to the most appropriate higher-level capsule. This is done by multiplying the lower-level capsules with learned viewpoint invariant transformation matrices to produce prediction vectors. These matrices make sense of the spatial relationships between object features. The scalar product between the prediction vectors and each of the higher-level capsules governs the agreement between a part and a whole. Large values imply that a part has a high likelihood of belonging to a whole and vice versa. The combination of the scalar products and the transformation matrices ultimately decides which whole is most suited for each part. This ``routing-by-agreement'' is a more effective way of sending spatial information between layers in a network than the routing implemented by max-pooling, since the former maintains the exact spatial relationships between neurons in each layer of the network, regardless of the network depth.\n\nAlthough CapsNets have shown promising results, there are some key limitations that make them unsuitable for real world use. One such issue is inference using CapsNets is significantly slower compared with CNNs. This is primarily due to the dynamic routing procedure which requires several iterations to produce the output vectors of a capsule layer. This limitation prevents deeper CapsNet architectures from being used in practice.\n\nWe present a method for speeding up inference for CapsNets, with potential applications for training. This is accomplished by making use of the accumulated information in the dynamically calculated routing coefficients computed using the training dataset. Analyses of intra-class routing coefficients show that they form a unique signature for each class. Intuitively, this is because parts of the same object should generally be similar and, more importantly, distinct from the parts of a different object. In practice, the network is trained to produce prediction vectors (using the lower-level capsules and learned transformation matrices) that closely correlate with the higher-level capsule associated with its own class. This observation allows creation of a set of \\textit{master} routing coefficients using the individual routing coefficients associated with each training example. At inference time, the master routing coefficients are used instead of dynamically computing the routing coefficients for each input to the network. Our method for fast routing at inference effectively replaces the \\boldmath{$r$} iterations in the routing procedure with a single matrix-multiply operation, allowing the network to be parallelized at run-time. On the MNIST dataset and its variants, fast inference decreases the test time accuracies by less than $0.5 \\%$ and by approximately $5 \\%$ for CIFAR10.\n\nSection \\ref{Section: Capsule Network Architecture} describes the three-layer network architecture that was used. Section \\ref{Section: Comparison Between Dynamic and Fast Routing Procedures} compares the differences in the routing procedure at inference between the dynamic routing algorithm and the fast routing procedure. 
Section \\ref{Section: Analyses of Dynamically Calculated Routing Coefficients} analyses the dynamic routing coefficients from the MNIST and CIFAR10 training datasets. In Section \\ref{Section: Creation of Master Routing Coefficients}, we detail the procedure for creating a set of master routing coefficients and compare the master set of routing coefficients with the dynamically calculated ones in Section \\ref{Section: Analyses of Master Routing Coefficients}. Section \\ref{Section: Results} compares the test accuracies between the dynamic and fast routing procedures for the five datasets. Discussions on the use and applicability of master routing coefficients are given in Section \\ref{Section: Discussion}. The general procedure for creating master routing coefficients is detailed in Appendix \\ref{App. General Approach for Creating Master Routing Coefficients}.\n\n\\section{Capsule Network Architecture}\n\\label{Section: Capsule Network Architecture}\nThe CapsNet architecture shown in Fig. \\ref{Fig: CapsNet Architecture} follows the three layer network from \\cite{Sabour_2017}. For the MNIST \\cite{MNIST}, Background MNIST (bMNIST) and Rotated MNIST (rMNIST) \\cite{R_and_BG_MNIST}, and Fashion MNIST (fMNIST) \\cite{F_MNIST} datasets, the input to the network is a $28 \\times 28$ grayscale image that is operated on by a convolutional layer to produce a $20 \\times 20 \\times 256$ feature map tensor (for CIFAR10, the $32 \\times 32 \\times 3$ image is randomly cropped so that its spatial dimensions are $24 \\times 24$). The second convolutional layer outputs a $6 \\times 6 \\times 256$ feature map tensor. Each group of $8$ neurons ($4$ for CIFAR10) in this feature map tensor is then grouped channel-wise and forms a single lower-level capsule, \\boldmath{$i$}, for a total of $6 \\times 6 \\times (256 \\div 8) = 1152$ lower-level capsules ($1024$ for CIFAR10).\n\nThe lower-level capsules are fed to the routing layer, where the dynamic routing procedure converts these capsules to the $10 \\times 16$ DigitCaps matrix, where $10$ is the number of higher-level capsules (also the number of object classes) and $16$ is the dimensionality of the capsules. Here, we use Max-Min normalization \\cite{Zhao_2019} as opposed to Softmax to convert the raw logits into routing coefficients. Details on the dynamic routing procedure are given in Section \\ref{Section: Comparison Between Dynamic and Fast Routing Procedures}.\n\nEach row in the DigitCaps matrix represents the instantiation parameters of a single object class and the length of the row vector represents the probability of the existence of that class. During training, non-ground-truth rows in the DigitCaps matrix are set to zero and the matrix is passed to a reconstruction network composed of two fully-connected layers of dimensions $512$ and $1024$ with ReLU activations, and a final fully-connected layer of dimension $28 \\times 28 \\times 1 = 784$ ($24 \\times 24 \\times 3 = 1728$ for CIFAR10) with a sigmoid activation. During inference, the reconstruction network is not used. Instead, the row in the DigitCaps matrix with the largest L2-norm is taken as the predicted object class for the input.\n\nOur implementation uses TensorFlow \\cite{TensorFlow} with training conducted using the Adam optimizer \\cite{Adam_Optimizer} with TensorFlow's default parameters and an exponentially decaying learning rate. Unless otherwise noted, the same network hyperparameters in \\cite{Zhao_2019} were used here for training as well. 
Original code is adapted from \\cite{Sabour_Code}.\n\n\\begin{figure}[htp]\n\\centering\n{\\includegraphics[width = 3.5 in]{CapsNet_Architecture}}\n\\caption{(Left) Three-layer CapsNet architecture adapted from Sabour et al. \\cite{Sabour_2017}. The PrimaryCaps layer consists of $6 \\times 6 \\times 32 = 1152$ $8$-D vector capsules for the MNIST dataset and its variants ($1024$ $4$-D vector capsules for CIFAR10). The routing procedure produces the $10 \\times 16$ DigitCaps layer, which is used to calculate the margin loss. The DigitCaps output is also passed to the reconstruction network where the non-ground-truth rows are masked with zeros. The network takes the masked $10 \\times 16$ DigitCaps matrix as input and learns to reproduce the original image. Margin and reconstruction loss functions follow those from \\cite{Sabour_2017}.}\n\\label{Fig: CapsNet Architecture}\n\\end{figure}\n\n\\section{Comparison Between Dynamic and Fast Routing Procedures}\n\\label{Section: Comparison Between Dynamic and Fast Routing Procedures}\nThe dynamic routing procedure for CapsNets using Max-Min normalization is given by Max-Min Routing Procedure below. This algorithm is used during normal training and inference. The prediction vectors to the routing layer, \\boldmath{$\\hat{u}_{j|i}$}, are created by multiplying each of the lower-level capsules, \\boldmath{$u_i$}, in PrimaryCaps by their respective transformation weight matrix, \\boldmath$W_{ij}$. The higher-level capsules \\boldmath{$s_j$} are computed as a sum over all the prediction vectors, weighted by the routing coefficients, \\boldmath{$c_{ij}$}. The routing coefficients are initialized with a value of $1.0$ and can be viewed as independent probabilities representing the likelihood of a lower-level capsule being assigned to a higher-level capsule \\cite{Zhao_2019}. For a given input to the routing layer, its routing coefficient matrix has shape $N_i \\times N_j$, where $N_i$ and $N_j$ are the number of lower and higher-level capsules, respectively. The higher-level capsules, \\boldmath{$s_j$}, are then squashed using a non-linear function so that the vector length of the resulting capsule, \\boldmath{$v_j$}, is between $0$ and $1$. These operations are shown in Eq. \\ref{Eqs. u_hat, s_J, and v_J}.\n\n\\begin{equation}\n\t\\label{Eqs. u_hat, s_J, and v_J}\n\t\\hat{u}_{j|i} = W_{ij}u_i, \\quad s_j = \\sum_{i} {c_{ij}\\hat{u}_{j|i}}, \\quad v_j = \\frac{||s_j||^2}{1 + ||s_j||^2}\\frac{s_j}{||s_j||}\n\\end{equation}\n\nDuring the procedure, the update to the routing coefficients, \\boldmath{$b_{ij}$}, is computed as the dot product between the prediction vectors and the current state of the higher-level capsules, \\boldmath{$v_j$}. The update to the routing coefficients is then normalized via Max-Min normalization over the object classes as given by Eq. \\ref{Eq: Max-Min Normalization}, where $p$\/$q$ are the lower\/upper bounds of the normalization. 
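To make the shapes concrete, the operations in Eq.~\\ref{Eqs. u_hat, s_J, and v_J} can be sketched in a few lines of NumPy; the array names are illustrative, with the prediction vectors stored as an $N_i \\times N_j \\times d$ tensor.
\\begin{verbatim}
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # v = (|s|^2 / (1 + |s|^2)) * (s / |s|)
    norm2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def higher_level_capsules(u_hat, c):
    # u_hat: (N_i, N_j, d) prediction vectors; c: (N_i, N_j) routing coefficients
    s = np.einsum('ij,ijd->jd', c, u_hat)  # s_j = sum_i c_ij * u_hat_{j|i}
    return squash(s)                       # v_j, with length between 0 and 1
\\end{verbatim}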
For the first iteration, the routing coefficients are initialized to $1.0$ outside of the main routing for-loop.\n\n\\begin{equation}\n\t\\label{Eq: Max-Min Normalization}\n\tc_{ij} = p + \\frac{b_{ij} - min(b_{ij})}{max(b_{ij}) - min(b_{ij})} * (q-p)\n\\end{equation}\n\n\\begin{table}[h!]\n \\begin{tabular}{l}\n \\hline\n \\textbf{Max-Min Routing Procedure} \\\\\n \\hline\n 1: Input to Routing Procedure: ({$\\bm{\\hat{u}_{j|i}}$}, $r$, $l$) \\\\\n 2: \\quad for all capsules $i$ in layer $l$ and capsule $j$ in layer ($l$ + 1): $c_{ij}$ $\\leftarrow$ 1.0 \\\\\n 3: \\quad \\textbf{for} $r$ iterations: \\\\\n 4: \\quad \\quad for all capsule $j$ in layer ($l$ + 1): $\\bm{s_j}$ $\\leftarrow$ $\\sum_{i} c_{ij} \\bm{\\hat{u}_{j|i}}$ \\\\\n 5: \\quad \\quad for all capsule $j$ in layer ($l$ + 1): $\\bm{v_j}$ $\\leftarrow$ Squash($\\bm{s_j}$) \\\\\n 6: \\quad \\quad for all capsule $i$ in layer $l$ and capsule $j$ in layer ($l$ + 1): $b_{ij} \\leftarrow b_{ij} + \\bm{\\hat{u}_{j|i}} \\cdot$ $\\bm{v_j}$ \\\\\n 7: \\quad \\quad for all capsule $i$ in layer $l$: $\\bm{c_i}$ $\\leftarrow$ Max-Min ($b_{ij}$) $\\Longrightarrow$ $\\mathrm{Given~in~Eq.}$ ~\\ref{Eq: Max-Min Normalization} \\\\\n \\quad \\quad \\textbf{return} $\\bm{v_j}$ \\\\\n \\hline\n \\end{tabular}\n \\label{Procedure: Max-Min Routing}\n\\end{table}\n\nFor fast inference, the routing coefficients no longer need to be dynamically calculated for each new input. Instead, the prediction vectors are simply multiplied with the precomputed master routing coefficient tensor ($C_{ij}$), summed, and squashed to form the higher-level capsules. Classification using the parent-level capsules, \\boldmath{$v_j$}, is made in the usual way afterwards. Details on the master routing coefficients are given in the following sections.\n\n\\begin{table}[h!]\n \\begin{tabular}{l}\n \\hline\n \\textbf{Fast Routing Procedure for Inference} \\\\\n \\hline\n 1: Input to Fast Routing Procedure: ($C_{ij}$, \\boldmath$\\hat{u}_{j|i}$, $l$) \\\\\n 2: \\quad for all capsule $j$ in layer ($l$ + 1): $s_j$ $\\leftarrow$ $\\sum_{i} C_{ij}\\boldmath\\hat{u}_{j|i}$ \\\\\n 3: \\quad for all capsule $j$ in layer ($l$ + 1): $v_j$ $\\leftarrow$ Squash($s_j$) \\\\\n \\quad \\quad \\textbf{return} $v_j$ \\\\\n \\hline\n \\end{tabular}\n \\label{Procedure: Fast Routing for Inference}\n\\end{table}\n\n\\section{Analyses of Dynamically Calculated Routing Coefficients}\n\\label{Section: Analyses of Dynamically Calculated Routing Coefficients}\nFor each input to the routing layer, an $N_i \\times N_j$ routing coefficient matrix is initialized with a value of $1.0$ and then iteratively updated in the routing procedure. By design, the $i^{th}$ column of routing coefficients is responsible for linking the lower and higher-level capsules for the $i^{th}$ object class. Intuitively, one would expect intra-class objects to have similarly activated routing coefficients compared with those from inter-class objects since the routing coefficients are simply the agreement between prediction vectors and higher-level capsules. This can be shown quantitatively by computing the correlations between the dynamically calculated routing coefficients for the different object classes.\n\nWe choose to compute the correlations between only the ground-truth (GT) columns in the routing coefficient matrices rather than between entire routing coefficient matrices, since the GT columns are used for the correct classification of each image. 
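Concretely, the heatmaps reported below can be obtained with a short NumPy routine once the per-image routing coefficient matrices have been saved after the last routing iteration; the array names and shapes (here, the MNIST configuration) are illustrative.
\\begin{verbatim}
import numpy as np

def gt_column_correlations(routing_mats, labels):
    # routing_mats: (num_images, 1152, 10); labels: (num_images,) ground-truth classes
    gt_cols = np.stack([routing_mats[n, :, labels[n]]
                        for n in range(len(labels))])  # (num_images, 1152)
    return np.corrcoef(gt_cols)                        # image-by-image correlation heatmap

def mean_class_correlations(corr, labels, num_classes=10):
    # average the pairwise correlations over all image pairs of classes (a, b)
    M = np.zeros((num_classes, num_classes))
    for a in range(num_classes):
        for b in range(num_classes):
            M[a, b] = corr[np.ix_(labels == a, labels == b)].mean()
    return M
\\end{verbatim}
The second helper yields the class-averaged heatmaps discussed further below.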
In other words, for a network that has been properly trained, the routing coefficients in the GT columns are the ones that provide the unique signatures for each object class. The routing coefficients in the other columns generally do not evolve to form unique signatures in the same manner as the GT routing coefficients. This can be observed in the tuning curves associated with the higher-level capsules (c.f. Section \\ref{Section: Analyses of Master Routing Coefficients}).\n\n\\begin{figure}[h]\n\\centering\n{\\includegraphics[width = 4.5 in]{GT_Routing_Coefficient_CH}}\n\\caption{(a) Heatmap showing the correlations between the GT columns for the first $100$ images in the MNIST training set. The heatmap is created by calculating the correlation coefficient between GT columns in the $1152 \\times 10$ routing coefficient matrix associated with each image. (b) Heatmap showing the correlations between the GT columns for the first $100$ images in the CIFAR10 training set. The heatmap is created by calculating the correlation coefficient between GT columns in the $1024 \\times 10$ routing coefficient matrix associated with each image. Only half of the computed values are shown for each heatmap since the correlations are symmetric.}\n\\label{Fig: GT Routing Coefficient CH}\n\\end{figure}\n\nFigure \\ref{Fig: GT Routing Coefficient CH} (a) shows the correlation heatmap between the GT columns of the routing coefficient matrices for the first $100$ images in the MNIST training dataset. For each image, the GT column in its routing coefficient matrix refers to the column that corresponds to its object class. For example, an image of the digit $5$, the GT column in that image's routing coefficient matrix is the sixth column (due to zero-indexing). Likewise, Fig. \\ref{Fig: GT Routing Coefficient CH} (b) shows the correlation heatmap for the first $100$ images in the CIFAR10 training dataset calculated in the same way. For MNIST, high correlation is observed between routing coefficients for intra-class objects. For a more complicated dataset such as CIFAR10, the distinction between inter-class routing coefficients is not as clear since the same part can often be associated with more than one object class in the dataset. For example, trucks and automobiles will have often multiple similar parts (e.g., wheels, headlights, body frame, etc.) and airplanes and ships will often appear against the same color background.\n\nFigures \\ref{Fig: Mean Class Routing Coefficient CH} (a) and (b) show the correlation heatmaps between each of the object classes for the MNIST and CIFAR10 datasets, respectively, averaged over \\textit{all} of their training images. For example, element $(0, 0)$ in the heatmaps is the mean correlation between the GT columns of all objects in class $0$ for the dataset and element $(0, 1)$ is the mean correlation between the GT columns of all objects in classes $0$ and $1$, and so on. The same trends exist when the correlation is computed across all training images as those in Fig. \\ref{Fig: GT Routing Coefficient CH} for the case of $100$ images for each dataset. In particular, it is interesting to note the second highest correlations for each object class. For the MNIST dataset, the second highest correlation with the digit $0$ is the digit $6$ and the digit $9$ has high correlations with the digits $4$ and $7$. These digit classes are often most similar to one another for the MNIST dataset (c.f. \\cite{Zhao_2019} for examples). 
For CIFAR10, high inter-class correlations exist between airplanes (class $0$) and ships (class $8$) and automobiles (class $1$) and trucks (class $9$). These classes often present the most challenging examples for classification. For MNIST, intra-class correlations are significantly higher compared with inter-class correlations and thus, the network is able to properly distinguish between the digit classes. For CIFAR10, intra-class correlations are lower and this leads to difficulties in classifying new images.\n\n\\begin{figure}[h]\n\\centering\n{\\includegraphics[width = 4.5 in]{Mean_Class_Routing_Coefficient_CH}}\n\\caption{(a) Heatmap showing the average class correlations for \\textit{all} images in the MNIST training set. For example, the value in element $(0, 0)$ is the mean correlation between the GT columns in the routing coefficient matrix for all objects in class $0$ and the value in element $(0, 1)$ is the mean correlation between the GT columns in the routing coefficient matrix for all objects in class $0$ and $1$, etc. (b) Heatmap showing the average class correlations for \\textit{all} images in the CIFAR10 training set. Only half of the computed values are shown for each heatmap since the correlations are symmetric.}\n\\label{Fig: Mean Class Routing Coefficient CH}\n\\end{figure}\n\n\\section{Creation of Master Routing Coefficients}\n\\label{Section: Creation of Master Routing Coefficients}\nSince the intra-class routing coefficients from the training dataset form a unique signature for each object class, they can be used to create a set of master routing coefficients that can generalize well to new examples. There are several ways in which a set of master routing coefficients can be created. We detail the procedure shown in Fig. \\ref{Fig: Creation of Master Coefficients}, which was used to generate the master routing coefficients used for fast inference. A general approach for creating master routing coefficients is given in Appendix \\ref{App. General Approach for Creating Master Routing Coefficients}.\n\nDuring training, the routing coefficient matrix is initialized to $1.0$ for each input to the routing layer and then iteratively updated to reflect the agreement between the input prediction vectors to the routing layer and the final output vectors of the routing layer. At the start of training, the routing coefficients can change for the same input since the prediction vectors are being updated by the (trainable) network weights, \\boldmath$W_{ij}$. However, once training has converged, the routing coefficients naturally converge since they are computed in a bootstrap manner; i.e., the update to the routing coefficients are calculated using just the prediction vectors and an initial set of routing coefficients. After training is completed, the routing coefficient matrix associated with each training image can be extracted.\n\n\\begin{figure}[h]\n\\centering\n{\\includegraphics[width = 4.5 in]{Creation_of_Master_Coefficients}}\n\\caption{Procedure for creating a single master routing coefficient matrix for fast inference. After a network has been properly trained, the routing coefficient matrix associated with each training image is extracted and used to form a single master routing coefficient matrix via three steps: 1) summation, 2) normalization, and 3) dimension reduction. 
Details are given in Section \\ref{Section: Creation of Master Routing Coefficients}.}\n\\label{Fig: Creation of Master Coefficients}\n\\end{figure}\n\nFor training images, fast inference can be conducted on the \\textit{training} dataset by simply using the individual routing coefficient matrix associated with each image. For new images, however, there is no apriori method of determining which routing coefficient matrix to use for each image since the class label is unknown (if such a method existed, then the classification problem is essentially solved without the need for the network to even make the prediction). As a result, fast inference on new images must rely on the use of a \\textit{single} routing coefficient matrix. This matrix has the same shape as the individual routing coefficient matrices and each column is assigned to a known object class.\n\nTo create the master routing coefficient matrix, we accumulate the information contained in the individual routing coefficient matrices from the training dataset. This process involves three main steps: 1) summation, 2) normalization, and 3) dimension reduction. First, we train the CapsNet to convergence in the usual manner, run inference on the training images, and save the routing coefficients associated with the training images at the last step of the routing procedure (for MNIST, this results in $60,000$ $1152 \\times 10$ matrices). Then, we initialize $N$ matrices as containers to hold the accumulated class-specific routing coefficients for each of the $N$ object classes. For each training example, its individual routing coefficient matrix is summed in the appropriate container matrix for its class. After all training images have been processed, the set of $N$ container matrices holds the sum of all routing coefficients at the last routing iteration, one for each class.\n\nEach container matrix is then normalized by their respective class frequency, followed by a row-wise Max-Min normalization. At this point, each container matrix can be viewed as a master routing coefficient matrix for that class. However, the network expects a single routing coefficient matrix and, without additional information, there is no straightforward method to point to the correct container matrix when the network is presented with a new example.\n\nThus, in order to present a single routing coefficient matrix to the network at inference, the set of $N$ container matrices must be reduced to a single matrix. This is done by transferring only the GT column from each of the $N$ container matrices to its corresponding column in the master routing coefficient matrix. In other words, only the first column from the first container matrix (which holds the accumulated routing coefficients for the first object class) is transferred to the first column of the master routing coefficient matrix, and so on for the columns in the other container matrices. The end result of the dimension reduction is a single routing coefficient matrix that can be used for new examples during inference.\n\n\\section{Analyses of Master Routing Coefficients}\n\\label{Section: Analyses of Master Routing Coefficients}\nIn order for the master routing coefficient matrix to generalize well to new images, each column of routing coefficients in the matrix should only correlate highly with its own object class. 
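Written out, the three-step construction of the previous section and the single weighted sum that replaces the $r$ routing iterations at inference reduce to the following sketch (reusing the \\texttt{squash} helper from the earlier sketch; the Max-Min bounds $p$, $q$ and the variable names are illustrative).
\\begin{verbatim}
import numpy as np

def build_master_coefficients(routing_mats, labels, num_classes=10, p=0.0, q=1.0):
    n_i = routing_mats.shape[1]
    containers = np.zeros((num_classes, n_i, num_classes))
    counts = np.zeros(num_classes)
    for R, y in zip(routing_mats, labels):      # 1) summation into the class container
        containers[y] += R
        counts[y] += 1
    containers /= counts[:, None, None]         # 2) class-frequency normalization ...
    lo = containers.min(axis=2, keepdims=True)  #    ... followed by row-wise Max-Min
    hi = containers.max(axis=2, keepdims=True)
    containers = p + (containers - lo) / (hi - lo + 1e-12) * (q - p)
    master = np.stack([containers[k][:, k]      # 3) dimension reduction: keep GT columns
                       for k in range(num_classes)], axis=1)
    return master                               # shape (n_i, num_classes)

def fast_inference(u_hat, master):
    # one weighted sum instead of r routing iterations, followed by the squash
    return squash(np.einsum('ij,ijd->jd', master, u_hat))
\\end{verbatim}
Whether this single matrix generalizes to unseen images hinges on each of its columns acting as a signature for its own class.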
In other words, the first column of routing coefficients in the master routing coefficient matrix should correlate highly with routing coefficients in the first columns of the individual routing coefficient matrices for images associated with the first object class. The second column of routing coefficients in the master routing coefficient matrix should correlate highly with routing coefficients in the second columns of the individual routing coefficient matrices for images associated with the second object class, and so on. If the first column in the master routing coefficient matrix correlated highly with the fifth column in an individual routing coefficient matrix associated with an image from the fifth object class, this would imply that the first column of master routing coefficients does \\textit{not} form an unique signature for the first object class---it can likely classify an image that belongs to the first object class as that of the fifth object class (and vice versa).\n\nTo quantify the degree to which each column of the master routing coefficient matrix is representative of its object class, we compute the correlation between each column in the master routing coefficient matrix and the GT columns in each individual routing coefficient matrix. This is shown in Figs. \\ref{Fig: Mean CH Between GT Master and GT Individual Routing Coefficients} (a) and (b) for the MNIST and CIFAR10 datasets, respectively. These correlations are between a single column of routing coefficients from the \\textit{master} routing coefficient matrix and the GT column of routing coefficients from the \\textit{individual} routing coefficient matrix associated with each training image for the dataset. Thus, the correlations are not symmetric (i.e., the correlation for element $(0, 1)$ in the heatmap is not the same as the correlation for element $(1, 0)$).\n\nFrom Figs. \\ref{Fig: Mean CH Between GT Master and GT Individual Routing Coefficients} (a) and (b), we see that each column of routing coefficients in the master routing coefficient matrix has higher intra-class correlations than inter-class correlations with the individual GT routing coefficients from each image in the training dataset. This is effectively why the master routing coefficients are able to generalize well to new images during inference---they are able to uniquely route the lower-level capsules to the correct higher-level capsules for new images by using the accumulated information from the training data. This relationship between master and individual routing coefficients is stronger for simpler datasets such as MNIST and its variants than for a more complicated dataset such as CIFAR10. For CIFAR10, intra-class correlations are still higher compared with inter-class correlations; however, as mentioned above, inter-class correlations can also be high for objects belonging to similar classes.\n\n\\begin{figure}[h]\n\\centering\n{\\includegraphics[width = 4.5 in]{Mean_CH_Between_GT_Master_and_GT_Individual_Routing_Coefficients}}\n\\caption{(a) Correlation heatmap between the columns in the master routing coefficients matrix and the GT columns from the individual routing coefficient matrices associated with the MNIST training images. This shows that creating the master routing coefficients as outlined in Section \\ref{Section: Creation of Master Routing Coefficients} results in a set of routing coefficients that correlates highly within its own class. 
(b) Correlation heatmap between the columns in the master routing coefficients matrix and the GT columns from the individual routing coefficient matrices associated with the CIFAR10 training images. For CIFAR10, high correlations can also exist between \\textit{inter}-class routing coefficients. For example, the correlation between the routing coefficients in column $1$ (associated with the object class ``car'') of the master routing coefficient matrix and GT columns $9$ (associated with the object class ``truck'') from individual routing coefficient matrices is almost as high as the \\textit{intra}-class correlations for classes $1$ and $9$, individually.}\n\\label{Fig: Mean CH Between GT Master and GT Individual Routing Coefficients}\n\\end{figure}\n\nThe effectiveness of the master routing coefficients can also be examined by looking at the output capsules, \\boldmath{$v_j$}, from the DigitCaps layer. Figures \\ref{Fig: DigitCaps Routing Examples} (a) and (b) compare the digit class probabilities for the same set of test images from the MNIST dataset between the dynamic and fast inference routing procedures. Likewise, Figs. \\ref{Fig: DigitCaps Routing Examples} (c) and (d) compare the probabilities for the same set of test images from the CIFAR10 dataset. For MNIST, the master routing coefficients produce similarly peaked values for the digit class probabilities compared with the use of dynamically calculated routing coefficients---each column in the master routing coefficient matrix is a unique signature for that digit class. On the other hand, the digit class probability comparison for CIFAR10 is noticeably different. Although the master routing coefficients are able to correctly classify three out of the five test examples shown, the classification is not particularly robust compared with the use of dynamically calculated routing coefficients (the dynamically calculated routing coefficients correctly classify four out of the five test image examples shown, and have lower probabilities for the non-ground-truth classes).\n\n\\begin{figure}[h]\n\\centering\n{\\includegraphics[width = 5.5 in]{DigitCaps_Routing_Examples}}\n\\caption{Output class probabilities for the same set of test images from the MNIST and CIFAR10 datasets. The class probabilities are obtained from networks that use dynamically calculated routing coefficients ((a) and (c)) and the master routing coefficients ((b) and (d)). For MNIST, the master routing coefficients produce nearly identical class probabilities compared with the use of dynamically calculated routing coefficients. For CIFAR10, the use of master routing coefficients correctly classifies three out of the five test image examples shown, and there is strong inter-class competition.}\n\\label{Fig: DigitCaps Routing Examples}\n\\end{figure}\n\nThe digit class probabilities can also be class-averaged over the test dataset as shown in Fig. \\ref{Fig: Mean Vector Length Per Class}. These ``tuning curves'' show how well the network is able to distinguish between the different object classes in the dataset. For MNIST, the tuning curves resulting from the fast inference procedure are similar to those from the dynamic routing procedure, suggesting that the master routing coefficients provide an accurate reflection of the dynamically calculated routing coefficients.
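The tuning curves in Fig. \\ref{Fig: Mean Vector Length Per Class} are simply class-averaged capsule lengths; a brief sketch of the computation is given below, with illustrative variable names.
\\begin{verbatim}
import numpy as np

def tuning_curves(digit_caps, labels, num_classes=10):
    # digit_caps: (num_images, 10, 16) DigitCaps outputs over the test set
    lengths = np.linalg.norm(digit_caps, axis=-1)       # (num_images, 10) class probabilities
    return np.stack([lengths[labels == k].mean(axis=0)  # average over images of class k
                     for k in range(num_classes)])      # (10, 10) matrix of tuning curves
\\end{verbatim}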
The tuning curves for CIFAR10 from the dynamic routing procedure show that, for certain object classes, the discriminability is not as robust compared with MNIST---multiple peaks exist for each object class. For fast inference, each tuning curve is still peaked at the correct object class; however, they are also highly peaked around the other classes as well.\n\n\\begin{figure}[h]\n\\centering\n{\\includegraphics[width = 5.5 in]{Mean_Vector_Length_Per_Class}}\n\\caption{Class-averaged output probabilities for the MNIST and CIFAR10 test datasets obtained with the use of dynamically calculated routing coefficients ((a) and (c)) and the master routing coefficients ((b) and (d)). The master routing coefficients for the MNIST dataset produce tuning curves similar to those obtained using dynamically calculated routing coefficients. For CIFAR10, the tuning curve for each object class obtained using dynamically calculated routing coefficients exhibit multiple peaks around non-ground-truth classes for each object class. In addition, multiple high-valued peaks are observed for the tuning curves resulting from the use of master routing coefficients.}\n\\label{Fig: Mean Vector Length Per Class}\n\\end{figure}\n\n\\section{Results}\n\\label{Section: Results}\nTable \\ref{Table: Routing Method Accuracies} provides the test accuracies for five datasets for the two different routing methods at inference. For fast routing, the master routing coefficients are created using the steps detailed in Section \\ref{Section: Creation of Master Routing Coefficients} and the procedure is given by Fast Routing Procedure for Inference in Section \\ref{Section: Comparison Between Dynamic and Fast Routing Procedures}. With Max-Min normalization, fast routing decreases the test accuracy by approximately $0.5 \\%$ for the MNIST dataset and its variants and by $5 \\%$ for CIFAR10.\n\nThe master routing coefficients can also be created from a network trained using Softmax as the normalization in the routing layer as opposed to Max-Min. In this approach, the routing procedure is exactly the same as in \\cite{Sabour_2017} and the master routing coefficients are created in the same manner as that detailed in Section \\ref{Section: Creation of Master Routing Coefficients} except the Softmax function is used to normalize the rows in the container matrices instead of Max-Min. For networks trained with Softmax in the routing layer, the majority of the routing coefficients are grouped around their initial value of $0.1$ after three routing iterations \\cite{Zhao_2019}. Since the container matrices for each class are created via summation, class frequency averaging, and normalizing via Softmax, the resulting master routing coefficient matrix contains values close to $0.1$ (for a dataset with $10$ classes). This amounts to a uniform distribution for the master routing coefficients and results in a significantly reduced test time performance across all five datasets.\n\n\\begin{table}[h]\n \\centering\n \\caption{Mean of the maximum test accuracies and their standard deviations on five datasets for the different routing methods at inference. Five training sessions were conducted for each dataset. The conditions under which the datasets were trained are the same as \\cite{Zhao_2019}. 
The master routing coefficients used for fast inference is created using the procedure outlined in Section \\ref{Section: Creation of Master Routing Coefficients}.}\n \\begin{tabular}{cccccc}\n \\hline\n \\textbf{Routing Method} & \\textbf{MNIST {[}\\%{]}} & \\textbf{bMNIST {[}\\%{]}} & \\textbf{fMNIST {[}\\%{]}} & \\textbf{rMNIST {[}\\%{]}} & \\textbf{CIFAR10 {[}\\%{]}} \\\\\n \\hline\n Dynamic, Max-Min & 99.55 $\\pm$ 0.02 & 93.09 $\\pm$ 0.04 & 92.07 $\\pm$ 0.12 & 95.42 $\\pm$ 0.03 & 75.92 $\\pm$ 0.27 \\\\\n Fast, Max-Min & 99.43 $\\pm$ 0.08 & 92.93 $\\pm$ 0.10 & 91.52 $\\pm$ 0.20 & 95.04 $\\pm$ 0.04 & 70.33 $\\pm$ 0.36 \\\\\n Dynamic, Softmax & 99.28 $\\pm$ 0.07 & 89.08 $\\pm$ 0.21 & 90.52 $\\pm$ 0.16 & 93.72 $\\pm$ 0.09 & 73.65 $\\pm$ 0.10 \\\\\n Fast, Softmax & 98.92 $\\pm$ 0.30 & 84.34 $\\pm$ 4.37 & 80.16 $\\pm$ 7.35 & 84.13 $\\pm$ 4.28 & 47.11 $\\pm$ 8.70 \\\\\n \\hline\n \\end{tabular}\n \\label{Table: Routing Method Accuracies}\n\\end{table}\n\n\\section{Discussion}\n\\label{Section: Discussion}\nIn CapsNets, routing coefficients form the link between capsules in adjacent network layers. Throughout the dynamic routing procedure, the updates to the routing coefficients come from the agreement between the prediction vectors and the higher-level capsules calculated using the prediction vectors. Since the prediction vectors are computed by a convolutional layer and are learned by the network, they capture the lower-level features for that object class. Using this information, we create a set of master routing coefficients from the training data that generalize well to test images.\n\nIf the network is properly trained (i.e., it is able to adequately distinguish between each object class) then the prediction vectors are unique and, as a result, the routing coefficients have high intra-class correlations. This is the case for MNIST and its variants. For a more complex dataset such as CIFAR10, the network does not learn sufficiently different prediction vectors for each object class during training. This is evident by comparing the tuning curves in Fig. \\ref{Fig: Mean Vector Length Per Class} between MNIST and CIFAR10 for the case of \\textit{dynamic} routing. As a result, the master routing coefficients created for CIFAR10 do not perform as well.\n\nA better process for creating a master routing coefficient matrix is also possible. In this paper, the approach taken to create the master routing coefficients uses \\textit{all} of the training data. For networks that can sufficiently distinguish between each object class, using all of the data makes sense. For CIFAR10, not all individual routing coefficient matrices are equally useful since some have high inter-class correlations. Therefore, a method of selecting the routing coefficients that have high intra-class and low inter-class correlations can be helpful. This can be done in several ways. For example, clustering analyses can be used to group routing coefficients that have high intra-class properties and remove training examples that result in ``outlier'' routing coefficients for each class. A similarity measure (e.g., correlation, dot product, etc.) can also be used to exclude outliers. Filtering methods will be taken up in future work.\n\nGiven that a single set of routing coefficients can be used for inference, a question to ask is whether or not it can also be useful for training. At least two approaches can be implemented for training using the master routing coefficients. 
First, a CapsNet can be retrained using the single master routing coefficient matrix to see if the network can learn to recognize the object classes better when it is able to use the cumulative knowledge (contained in the master routing coefficients) from all training data at once. Second, training can be expedited by initially training a CapsNet using dynamic routing on a carefully selected subset (or all) of the training data for a few epochs. Afterwards, the master routing coefficient matrix can be created from this training session and used to retrain the network (using all of the training data) to convergence. Both of these approaches will be taken up in future work.\n\n\\section{Summary}\n\\label{Section: Summary}\nCapsule Networks have the ability to learn part-whole relationships for objects and can potentially generalize to novel viewpoints better than conventional CNNs. In contrast to CNNs, they can maintain the exact spatial relationships between the different components of an object by discarding sub-sampling layers in the network. State-of-the-art performance has already been demonstrated for the MNIST dataset by \\cite{Zhao_2019}. However, CapsNets are slow to train and run in real-time due to the dynamic routing algorithm. In addition, state-of-the-art performance on more complicated datasets still present some challenges, possibly due to the prohibitively high cost of constructing deeper CapsNet models. In this work, we focused on methods to improve the speed of CapsNets for inference while still maintaining the accuracy obtained using the dynamic routing algorithm. To this end, we have implemented a method that allows for fast inference while keeping the test accuracy comparable to the dynamic implementation.\n\n\\medskip\n\n\\small\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe concerns about the potential health risks caused by the X-ray radiation of computed tomography (CT) have led to the concept of ALARA (As Low As Reasonably Achievable)~\\cite{international19921990},~\\cite{slovis2003children},~\\cite{brenner2007hall},~\\cite{SARMA2012750}. \nThe concept aims to regulate the delivery of excessive X-ray radiation to the patients. \nAlthough low-dose CT (LDCT) imaging decreases the risks, it yields images of deteriorated quality because of the increased noise background and the pertinent appearance of artifacts that could affect the diagnostic decisions~\\cite{yu2009radiation}.\n\nFor mitigating the excessive noise in LDCT, different methods were proposed. One of the approaches is to apply iterative reconstruction techniques. These methods aim to improve the quality of CT slices through optimization of an objective function that incorporates a system model, a statistical noise model, and prior information in the image domain. Although these iterative techniques gave an improvement in the quality of the reconstructed CT slices, they are not efficient because they require multiple forward and back-projection operations.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=2.5in]{figures\/schemes\/n2ntd_scheme.pdf}\n\\caption{The scheme of the proposed self-supervised Noise2NoiseTD-ANM approach. $y$ denotes noisy pixels, $\\Omega_y$ is neighborhood of noisy pixels, $\\lambda$ and $\\sigma_e^2$ are estimated noise parameters. 
Green arrows correspond to the training process and purple arrows correspond to inference.}\n\\label{fig:approach}\n\\end{figure}\n\nThe other approach is to employ image processing: either traditional or deep-learning-based. Because these methods were adapted from denoising images in the natural domain, they are not overly dependent on the CT scanner and its acquisition geometry. Compared to iterative reconstruction techniques, they can separately denoise raw CT projections or already reconstructed CT slices. These methods can serve as an auxiliary tool for noise reduction before or after reconstruction. Thus, such methods, especially deep-learning-based ones, are of high research interest.\nTraditional methods are not very adaptive to image content and often contain many parameters that require careful case-by-case tuning. Besides, they provide inferior quality compared with deep-learning-based methods. \nSpecifically, supervised deep learning methods (Noise2Clean), which assume model training on a large dataset of paired noisy-clean images, show the best performance~\\cite{3DCT},~\\cite{RED},~\\cite{ResCT}. However, the requirement for paired noisy-clean images cannot always be met. \nIn a clinical setting, for example, in a radiology suite, the acquisition of such paired data can take at least twice as long as a regular CT exam and would increase exposure to the X-ray dose. Furthermore, the obtained images may not coincide exactly due to patient breathing and movement.\nGenerating the noisy data artificially is an option for alleviating the lack of image pairs, but adding noise to images, especially medical ones, can bias the predictions of a convolutional neural network (CNN)~\\cite{green2018learning} unless a very accurate noise simulation is used, which requires deep insight into the actual CT system~\\cite{zabic2013low} and is not always possible.\n\nBecause of these problems with data availability, different unsupervised and self-supervised methods for image denoising were proposed. However, unsupervised methods, which are mostly based on generative adversarial networks~\\cite{park2019unpaired},~\\cite{wolterink2017generative},~\\cite{kang2019cycle}, still require a set of clean images. Besides, this set, as well as the set of noisy images, should be well representative of the diversity of cases found in reality. Thus, these methods imply the need to irradiate some patients with rather high doses of CT X-rays. \nSelf-supervised methods, which do not require clean images at all, appear to be more suitable for denoising medical images than supervised and unsupervised methods. \n\nSome of the previously proposed self-supervised CT denoising methods~\\cite{n2n_ct2020},~\\cite{wu2019consensus},~\\cite{hendriksen2020noise2inverse},~\\cite{yuan2020half2half} require additional data generation that complicates the denoising process. The other methods~\\cite{xu2021deformed2self},~\\cite{choi2021self},~\\cite{unal2021self} do not have this requirement. However, the previous self-supervised CT denoising methods do not take advantage of all the available information, such as the CT noise properties, and some of them do not consider the similarity of adjacent projections.
A more detailed description of related work on self-supervised denoising is given in Section~\\ref{sec:rel_work}.\nIn this paper, we propose a new self-supervised approach for denoising that takes these properties into account.\n\nUnlike most previously proposed methods, which denoise already reconstructed CT slices, we propose to work in the projection domain. Although radiologists work with CT slices, image processing methods can be applied directly to projections. Working with CT projections may be more advantageous than processing CT slices. Firstly, CT reconstruction is an ill-posed inverse problem. There are many different reconstruction methods, each with many parameters that influence the quality of the reconstructed CT slices. A denoising method developed for one set of parameters can be unsuitable for other sets, leading to the need for a stack of methods designed for each setting. For the projection data, there is no such problem. Secondly, after reconstruction, CT slices may contain artefacts, e.g., streaks, caused by the noise in the projections, and these artefacts are more difficult to remove than the initial noise. \nThirdly, deep learning methods require large amounts of data for training. This is one more benefit of working in the projection domain, as the projection data of one patient contain more images than the reconstructed image data of the same patient. Because medical images are difficult to obtain, this is a crucial property. \nFinally, CT projections share almost the same content, and the noise is spatially uncorrelated on the projections, which can be useful for denoising, especially in the absence of clean images. Moreover, the pixel-wise independence of the noise on the projections is one of the main assumptions required for self-supervised denoising methods to be applicable. In the image domain, this assumption is violated because of the reconstruction process. \n\nIn this paper, we propose the self-supervised Noise2NoiseTD-ANM (ANM stands for ``adaptable noise model'') approach for denoising low-dose CT projections, which is an extension of the method we proposed earlier in~\\cite{zainulina2021n2ntd}. Like our previous method, it uses the information contained in sequences of images, whereas most other self-supervised denoising methods denoise each image separately, which can yield sub-optimal denoising quality. In addition, the developed approach takes into account the theoretical distribution of the CT noise. This helps the denoising models generalize better to different noise levels, even relatively high ones.\nThe scheme of the proposed approach is illustrated in Fig.~\\ref{fig:approach}.\n\n\n\\subsection{Contributions}\nThe contribution of this paper is the following:\n\\begin{enumerate}\n \n \n \\item Novelty. The approach incorporates the actual CT noise model. The noise model improves the capability of our denoising framework to generalize to previously unseen noise levels.\n \\item Flexibility. Our approach simplifies model adaptation to different CT scan settings because it is designed to pre-estimate the parameters of the noise model and to optimize the denoising and noise models separately. \n \\item Validation. We compared the proposed approach with the Noise2Clean approach, the Half2Half technique~\\cite{yuan2020half2half}, and one of the best self-supervised methods, introduced in the paper ``High-Quality Self-Supervised Deep Image Denoising'' \\cite{laine2019hqss}, adapted to CT projection denoising.
We used both simulated data (three different noise levels) and a dataset with the real noise. Our method outperformed state-of-the-art algorithms.\n\\end{enumerate}\n\n\n\\section{Related work}\n\\label{sec:rel_work}\nThe first attempt to train a neural network using only noisy images was made by J. Lehtinen et al. Their work~\\cite{Noise2Noise} gave rise to self-supervised denoising methods. Their method consists in training a neural network to predict one noisy image from another. The approach is based on the properties of the loss functions, which allow the model to converge in the limit to the same state as in the case of supervised training. For the applicability of the approach, the noise have to be zero-mean and noise components of different pixels have to be independent to each other.\nFor medical data, this approach does not seem appropriate because of the requirement of paired noisy images. However, different methods were introduced that extend the Noise2Noise approach and adopt it to denoise CT images. Some of them give solutions on how to generate an acceptable dataset for model training. \n\nIn~\\cite{n2n_ct2020} the Noise2Noise approach was applied for denoising X-ray projections and CT images and compared to the Noise2Clean. The Noise2Noise approach has shown rather good results compared to the Noise2Clean. However, they used an artificially generated dataset for the experiments. \n\nIn the works~\\cite{wu2019consensus} and~\\cite{hendriksen2020noise2inverse} the ``data splitting'' methods were proposed for generation of a dataset for the Noise2Noise approach. These methods consist in reconstruction of paired CT slices from non-intersecting sets of CT projections. Although these methods showed rather good results, they may introduce reconstruction artifacts that can violate the noise assumptions. The other possible negative outcome of the ``data splitting'' is the drop in resolution of reconstructed images. These methods allow denoising only already reconstructed slices. \n\n\\cite{yuan2020half2half} introduced a Half2Half method consisting in the obtainment of two half-dose projections from the given projection using the theoretical distribution of count-domain data for Noise2Noise training. The method gave acceptable results in the experiments carried out by the authors and provided a reasonable approach for the dataset generation. \nHowever, the Half2Half approach needs additional knowledge about acquisition parameters for noise simulation that is not always possible to obtain. \n\nThus, the Noise2Noise-based methods have the main bottleneck consisting in accurate data generation satisfying all assumptions of the Noise2Noise. This bottleneck makes the models less flexible. \n\nThe drawback with the dataset generation is overcome by self-supervised methods based on the usage of blind spots. These methods assume model training using only original noisy images as input and target. Nevertheless, the application of the blind-spot-methods for denoising of low-dose CT projections is little researched. Some of the methods use pixel masking during training~\\cite{krull2019noise2void},~\\cite{xie2020noise2same}, which makes them computationally inefficient, and other use special neural network architectures~\\cite{laine2019hqss},~\\cite{batson2019noise2self},~\\cite{lee2020noise2kernel} that restricts their flexibility. 
The main weakness of most blind-spot-based methods is that they exclude information about the noisy pixel to be denoised that makes them prone to loss of small details and blurring and cause worse performance compared to the Noise2Clean and Noise2Noise approaches. One of the most efficient solutions for this problem \nwas introduced in~\\cite{laine2019hqss}, where the authors proposed to include information about the excluded noisy pixel during inference by Bayes rule.\nThis approach was adopted by us for denoising of low-dose CT projections in~\\cite{zainulina2021n2ntd}. Also, in~\\cite{zainulina2021n2ntd} we proposed a new method Noise2NoiseTD, which uses information from adjacent projections that can help prevent the model from over-smoothing and losing edges.\n\nRecent works~\\cite{xu2021deformed2self},~\\cite{choi2021self},~\\cite{unal2021self} also attempt to perform self-supervised denoising of CT data, however they do not take the true noise model into account.\n\nIn the paper, we propose the extension of our previous approach. \nWe will compare it to the approach from~\\cite{laine2019hqss} adopted for denoising of CT projections, which will be referred to as Noise2Void-4R ($4$R stands for $4$ rotations), and to the Half2Half approach~\\cite{yuan2020half2half}.\n\n\n\\section{Methods}\nIn this section, we present the description of our approach. Firstly, we give an overview of the CT noise properties. Then, we describe how we use the adjacency in the approach and how we incorporate it in a neural network. After that, we describe the train-inference schemes of the model depending on the CT noise distribution. We introduce a noise model for self-supervised training that incorporates the CT noise properties. The description of the usage of the adjacency and the noise model completes the description of the Noise2NoiseTD-ANM approach. Additionally, in the section, we present the neural network architecture used as the backbone for the Noise2NoiseTD-ANM, Noise2Void-4R, Half2Half, and Noise2Clean approaches for comparison. There will be given details about modifications of the neural network corresponding to the applied approaches.\n\n\\subsection{CT noise properties}\n\\label{sev:CT_noise}\nCT projections are the line integrals of the linear attenuation coefficients of the body. They are obtained after normalization applied to the data measured by detectors. \n\nAccording to~\\cite{buzug2011computed}, the measured data can be described by Lambert-Beer's law:\n\\begin{equation}\n I = I_0\\exp(-p),\n\\end{equation}\nwhere $I$ is the number of detected photons, $I_0$ is the incident number of photons, $p$ is the line integral of linear attenuation coefficients, i.e., the projection.\nLet $\\exp(-p)$ be the transmission data $T$. Then, $I=I_0T$.\n\nIn practice, the actual value of $I$ is not available: it is corrupted by noise. The detected number of photons $I$ is a random variable that can be described by Poisson distribution ($\\mathcal{P}$)~\\cite{buzug2011computed}. Besides, the electronic noise inherent in the CT scanner contributes to the overall noise level. The electronic noise can be modeled by Gaussian distribution with parameters $\\mu_e$ and $\\sigma_e^2$, the mean and variance of the electronic noise, $\\mathcal{N}\\left(\\mu_e, \\sigma_e^2\\right)$~\\cite{la2006penalized}. In CT systems, $\\mu_e$ is usually calibrated to be $0$. 
Thus, the detected number of photons obey the following mixed Poisson-Gaussian distribution:\n\\begin{equation}\n I = \\mathcal{P}\\left(I_0T_{hd}\\right) + \\mathcal{N}\\left(0, \\sigma_e^2\\right).\n\\end{equation}\n$T_{hd}=\\exp(-p_{hd})$ and $p_{hd}$ are the transmission and projection data, respectively, which do not contain noise ($hd$ corresponds to high-dose data). \n\nThen, for the noisy projection $\\hat{p}$, the transmission data $\\hat{T}$ can be modeled by the distribution:\n\\begin{equation}\n \\hat{T} = \\exp(-\\hat{p}) = \\frac{I}{I_0} = \\frac{1}{I_0}\\left(\\mathcal{P}\\left(I_0T_{hd}\\right) + \\mathcal{N}\\left(0, \\sigma_e^2\\right)\\right).\n\\end{equation}\n\nApproximating Poisson noise as signal-dependent Gaussian noise and defining high-dose transmission data as $x=T_{hd}=\\exp(-p_{hd})$, we can express the transmission data for the noisy projection using Gaussian distribution:\n\\begin{equation} \\label{eq:distr}\n \\hat{T} = \\exp(-\\hat{p}) = \\mathcal{N}\\left(\\mu_x, \\sigma_x^2+\\frac{\\mu_x}{I_0}+\\frac{\\sigma_e^2}{I_0^2}\\right).\n\\end{equation}\n\nWhereas $\\mu_x$ and $\\sigma_x^2$ characterize clean data, $I_0$ and $\\sigma_e^2$ are the noise parameters depending on the properties of a CT scanner. The commonly employed in CT scanners bowtie filtering modulates an X-ray beam as a function of the angle to balance the photon flux on a detector array~\\cite{liu2014dynamic}. It reduces the radiation at the periphery of the field of view resulting in a bell shape distribution of incident flux levels. Besides, recent CT scanners use a dose modulation technique,\nwhich regulates the tube current for each projection angle depending on the properties of organs to not expose to excessive radiation. Because the flux level is proportional to the tube current, dose modulation influences its distribution. Thus, the incident flux level and, hence, the noise variance depends on the position of the detector column and the tube current.\nAs for the variance of electronic noise $\\sigma_e^2$, it is the characteristic of the detector and does not depend on the X-ray beam. \n\n\\subsection{Using adjacent projections in a time-distributed denoising model}\nAs in our previous work~\\cite{zainulina2021n2ntd}, we propose to restore the projection depending on its noisy adjacent projections. However, the previously proposed model was not completely blind causing its over-fitting to the noise at some moment during training and requiring early stopping. In this work we exclude the projection to be denoised from the consideration of the denoising model.\nThe problem can be considered as the prediction of the middle frame in the sequence depending on its past and future frames. Because the neighboring projections share almost the same content, while the noise is independently distributed on each projection, the prediction will be mostly dependent on the structural features of the projections, excluding the noise features. It will allow recovering noise-free middle frame. Thus, we restore a denoised version of a projection $p_i$ from projections $p_{i\\pm 1},\\dots,p_{i\\pm k}$. The choice of the number of the adjacent frames $k$ depends on how much the content on two consequent frames differs and the computational and the memory capacity of the device. 
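For a given $k$, the inputs to the denoising model can be assembled as two stacks of neighbouring projections from which the projection to be denoised is excluded; the sketch below is illustrative and ignores boundary handling at the ends of the projection sequence.
\\begin{verbatim}
import numpy as np

def neighbor_stacks(projections, i, k):
    # projections: (num_projections, H, W); the central projection p_i is excluded
    past = np.stack([projections[i - j] for j in range(k, 0, -1)])    # p_{i-k}, ..., p_{i-1}
    future = np.stack([projections[i + j] for j in range(1, k + 1)])  # p_{i+1}, ..., p_{i+k}
    return past, future  # each of shape (k, H, W), fed to the denoising model
\\end{verbatim}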
In this study $k=3$ is chosen.\n\nIn order to predict the frame from the sequence of past and future frames, we propose using the convolutional long short-term memory (ConvLSTM) units because LSTM was shown to be effective in the time-series tasks. ConvLSTM units included in the model are based on the ConvLSTM layers introduced in~\\cite{shi2015convlstm}. Compared to~\\cite{shi2015convlstm}, we do not use the past cell status for gates calculation, reducing the number of model parameters without really affecting prediction quality. Also, according to~\\cite{bias-free}, all the convolutions are made bias-free. In this study, the ConvLSTM unit consisted of one ConvLSTM cell.\n\nThis unit processes features extracted by some CNN for each frame independently. \nIt makes a prediction in one direction, i.e. the middle projection $p_i$ is predicted from the combination of the features extracted from the sequence $p_{i-k},\\dots,p_{i-1}$ and from the sequence $p_{i+1},\\dots,p_{i+k}$. The features extracted from both sequences are combined using attention~\\cite{SE} along the time axis and concatenation along the channel axis. Then, these features are combined by a fusing CNN to obtain the denoised middle projection.\n\n\\subsection{Train-inference scheme}\n\nAs in~\\cite{zainulina2021n2ntd}, we use the train-inference scheme proposed in~\\cite{laine2019hqss} that assumes model training and inference depending on the Gaussian approximation of the data distribution.\nAccording to the description of the CT data distribution, the noisy and clean transmission data can be modeled by Gaussian distribution with rather explainable parameters. Therefore, the training of the models for the self-supervised approaches is carried out on the transmission data.\n\nLet $y=\\left(y_1,\\dots,y_N\\right)$ denote the pixels of noisy transmission data. Each noisy pixel has a neighborhood $\\Omega_{y_i}$ that consists of noisy pixels surrounding the pixel $y_i$ on its adjacent projections. Let $\\Omega_y=\\left(\\Omega_{y_1},\\dots,\\Omega_{y_N}\\right)$ be the neighborhoods of the noisy pixels.\nWe want to restore the pixels of clean data $x=\\left(x_1,\\dots,x_n\\right)$.\nThe parameters of the Gaussian distribution of clean data $p(x|\\Omega_y)$ are predicted by neural networks. Because only the noisy data $\\mathcal{D}=\\left\\{y_i, \\Omega_{y_i}\\right\\}_{i=1}^N$ is available, then the networks are trained using a loss function that maximizes the log-likelihood of the distribution of the noisy data $p(y|\\Omega_y)$:\n\\begin{equation}\n \\mathcal{L}=-\\sum_{i=1}^{n}\\log{p(y_i|\\Omega_{y_i})}.\n\\end{equation}\nAfter substituting the distribution from~\\ref{eq:distr}, we get the following loss function:\n\\begin{equation} \n\\label{eq:loss}\n\\begin{split}\n \\mathcal{L}=\\sum_{i=1}^{n}\\left(\\frac{(y_i-\\mu_{x_i})^2}{2\\sigma_{y_i}^2} + \\frac{1}{2}\\log\\sigma_{y_i}^2\\right),\\\\\n \\sigma_{y_i}^2 = \\sigma_{x_i}^2+\\sigma_{n_i}^2,\\; \n \\sigma_{n_i}^2 = \\frac{\\mu_{x_i}}{\\lambda}+\\frac{\\sigma_e^2}{\\lambda^2}.\n\\end{split}\n\\end{equation}\nThe $\\lambda$ denotes the parameter approximating the actual incident flux level $I_0$. The $\\sigma_e^2$ denotes approximation of the electronic noise variance.\n\nWith this loss function, the model is trained to predict estimations of the parameters of the clean data distribution $\\mu_x$ and $\\sigma_x^2$. 
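The loss of Eq.~\\ref{eq:loss} translates directly into code. In the sketch below (written with TensorFlow purely for illustration), the network outputs $\\mu_x$ and $\\log\\sigma_x^2$ for every pixel, $\\lambda$ and $\\sigma_e^2$ are the noise parameters defined above, and the function and variable names are illustrative.
\\begin{verbatim}
import tensorflow as tf

def self_supervised_nll(y, mu_x, log_sigma_x2, lam, sigma_e2):
    # sigma_n^2 = mu_x / lambda + sigma_e^2 / lambda^2   (see the loss equation above)
    sigma_n2 = mu_x / lam + sigma_e2 / tf.square(lam)
    sigma_y2 = tf.exp(log_sigma_x2) + sigma_n2
    # negative log-likelihood of the noisy transmission data
    nll = tf.square(y - mu_x) / (2.0 * sigma_y2) + 0.5 * tf.math.log(sigma_y2)
    return tf.reduce_mean(nll)
\\end{verbatim}
Predicting $\\log\\sigma_x^2$ rather than $\\sigma_x^2$ is only a convenient way to keep the variance positive; any other positivity constraint would serve equally well.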
Then we obtain the prediction of the clean data, incorporating the information carried by the noisy pixels $y$ themselves, by Bayes' rule:\n\\begin{equation}\n p(x|y, \\Omega_y) \\sim p(y|x)p(x|\\Omega_y).\n\\end{equation}\nGiven that $p(y|x)=\\mathcal{N}_y(x, \\sigma_n^2)=\\mathcal{N}_x(y, \\sigma_n^2)$ (the equation uses the symmetry of the Gaussian distribution to swap $x$ and $y$) and $p(x|\\Omega_y)=\\mathcal{N}(\\mu_x,\\sigma_x^2)$ are normal distributions, the distribution $p(x|y, \\Omega_y)$ is a scaled normal with the mean~\\cite{bromiley2003products}:\n\\begin{equation} \\label{eq:prediction}\n \\mathbb{E}_x[p(x|y, \\Omega_y)] = \\frac{y\\sigma_x^2+\\mu_x\\sigma_n^2}{\\sigma_x^2+\\sigma_n^2}.\n\\end{equation}\nThis mean coincides with the posterior mean estimate of the clean data, which is used for prediction.\n\n\n\\subsection{Noise model}\n We propose to represent the noise model by a neural network with the parameters $\\lambda$ and $\\sigma_e^2$, which approximate the incident flux level and the electronic noise variance, respectively. The parameters of the noise model can be pre-estimated independently, based on the CT scanner properties described earlier, and then optimized together with the main denoising model.\n \nAs for the parameter $\\lambda$, if bowtie filtering is applied, the $\\lambda$ parameter should differ across detector columns, i.e., it should depend on the column position of the pixel to be denoised. Otherwise, some parts of the projection would be over-smoothed, and others would stay noisy because of the under- or overestimation of the Poisson parameter of the noise. \nBecause modern CT scanners use dose modulation, the $\\lambda$ parameter should also depend on the tube current used. Thus, $\\lambda=\\lambda(i, mA)$, where $i$ is the detector column number and $mA$ is the tube current.\nAs for the variance of electronic noise $\\sigma_e^2$, we assume that $\\sigma_e^2$ is the same for all pixels. It probably differs between detector cells, but we assume that this difference is relatively small: the possible error is negligible compared to the cost of increasing the number of parameters by the number of detector cells.\n\nSummarizing all the above properties, we present a noise model that accounts for the parameters of the acquisition process. \nFirst, we make the model able to predict different values for different detector columns by introducing an embedding layer that maps the column number of a pixel to a value characterizing the number of incident photons as shaped by the bowtie filter. \nThen, this value is normalized by the tube current. The normalization transforms the obtained values into the distribution of incident photons for the specific current and potential.\nThe normalization consists of two steps. The first step is to map the tube current to slope and bias coefficients. The mapping is performed by linear layers with $2$ output channels and a ReLU activation between them. The second step is to multiply the value by the slope coefficient and add the bias coefficient. This type of normalization was chosen according to the theoretical dependence between incident flux levels at different tube currents~\\cite{zeng2015simulation}. \nIn this way, the proposed noise model predicts an approximation of the incident flux level $I_0$ from only the column of the pixel and the acquisition parameters. 
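\n\nA minimal PyTorch-style sketch of such a noise model is given below; the module and layer names, the hidden width of the mapping layers, and the log-parameterization used to keep $\\sigma_e^2$ positive are illustrative assumptions rather than the exact implementation.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass NoiseModel(nn.Module):\n    # Predicts lambda(i, mA) per detector column plus a global sigma_e^2.\n    def __init__(self, n_columns, hidden=16):\n        super().__init__()\n        # one value per detector column, capturing the bowtie-filter shape\n        self.column_flux = nn.Embedding(n_columns, 1)\n        # map the tube current to a slope and a bias coefficient\n        self.current_map = nn.Sequential(\n            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 2))\n        # single learnable parameter for the electronic noise variance\n        self.log_var_e = nn.Parameter(torch.zeros(1))\n\n    def forward(self, columns, current):\n        base = self.column_flux(columns).squeeze(-1)      # per-column flux\n        slope, bias = self.current_map(current).unbind(-1)\n        lam = slope * base + bias                         # lambda(i, mA)\n        return lam, self.log_var_e.exp()\n\\end{verbatim}\nThe returned $\\lambda$ and $\\sigma_e^2$ are then plugged into the noise variance $\\sigma_n^2 = \\frac{\\mu_x}{\\lambda}+\\frac{\\sigma_e^2}{\\lambda^2}$ used in the loss~(\\ref{eq:loss}).\n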
Thus, there is no need to know the photon distribution for the test data.\nAs for the electronic noise variance $\\sigma_e^2$, it is represented in the model as a single parameter of shape $1\\times 1$.\nThe scheme of the proposed noise model is presented in Fig.~\\ref{fig:arch_noise}.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=2.5in]{figures\/schemes\/Noise_net.pdf}\n\\caption{The architecture of the noise model for self-supervised model training.}\n\\label{fig:arch_noise}\n\\end{figure}\n\n\nIf the train and test projections were obtained using different CT scanner settings, for example, if the bowtie filter is removed or changed, the noise model can be tuned for the new settings and retrained on the new dataset in a self-supervised mode, while the main denoising model can be frozen during training to speed up the process. If some noise parameters ($I_0$ or $\\sigma_e^2$) are known, they can be used instead of the trained ones. \n\n\n\\subsection{Comparison approaches}\nWe benchmark our approach, Noise2NoiseTD-ANM, against the Noise2Clean, Half2Half and Noise2Void-4R approaches. The Noise2Clean model is trained using pairs of noisy and clean projections. The Half2Half approach creates pairs of noisy projections from the original projections and trains the model in a supervised manner using one noisy projection as input and the other as the target. The Noise2Void-4R approach is an adaptation of the approach from~\\cite{laine2019hqss} for training on CT projections using the proposed noise model.\nAll approaches use the same neural network architecture as a backbone, with some modifications required by each approach.\nWe use a relatively simple neural network that shows good denoising results, DnCNN (Denoising Convolutional Neural Network)~\\cite{zhang2017dncnn}. In our previous work~\\cite{zainulina2021n2ntd}, our model was based on the U-Net architecture~\\cite{ronneberger2015u}, but in this study the U-Net architecture did not show significant improvements over DnCNN.\n\nFor the Half2Half approach, we initially created the half-dose pairs as described in the original paper~\\cite{yuan2020half2half}. Then, we trained the model on the transmission data using the MSE loss function.\nUnlike the self-supervised approaches, the Noise2Clean model is trained using the MSE loss function on the projection data rather than the transmission data. Also, we used residual learning, i.e., we trained the model to predict the noise, because this was shown to be effective for supervised denoising~\\cite{zhang2017dncnn}. Because the noise model applied in the self-supervised approaches automatically accounts for the noise level depending on the tube current, we also include an analog of a noise level map in the input of the Noise2Clean model~\\cite{zhang2018ffdnet}. \nAs was shown in~\\cite{zainulina2021n2ntd}, the use of adjacent projections improves the results of supervised training but does not give any improvement for the Noise2Void-4R approach. Therefore, we used adjacent projections as additional channels for the Noise2Clean approach and did not use them for the Noise2Void-4R approach.
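\n\nFor reference, a compact sketch of a DnCNN-style backbone with bias-free convolutions is given below; the depth and channel width are illustrative assumptions, and batch normalization is omitted for brevity.\n\\begin{verbatim}\nimport torch.nn as nn\n\ndef dncnn(in_ch=1, out_ch=1, width=64, depth=17):\n    # Plain DnCNN-style stack of bias-free 3x3 convolutions. Depending on\n    # the approach, the network predicts either the noise (residual\n    # learning) or the parameters of the clean-data distribution.\n    layers = [nn.Conv2d(in_ch, width, 3, padding=1, bias=False),\n              nn.ReLU(inplace=True)]\n    for _ in range(depth - 2):\n        layers += [nn.Conv2d(width, width, 3, padding=1, bias=False),\n                   nn.ReLU(inplace=True)]\n    layers.append(nn.Conv2d(width, out_ch, 3, padding=1, bias=False))\n    return nn.Sequential(*layers)\n\\end{verbatim}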
\n\nThe model architectures for each approach are depicted in Fig.~\\ref{fig:arch_dncnn}.\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{figures\/schemes\/DnCNN_net.pdf}\n\\caption{Model architectures of the compared approaches.}\n\\label{fig:arch_dncnn}\n\\end{figure*}\n\n\\section{Experiments and results}\n\\subsection{Data preparation}\nThe study used the publicly available dataset of CT projections, Low Dose CT Image and Projection data (LDCT-and-Projection-data)~\\cite{LDCT}. The experiments were carried out on abdomen projections obtained with a fixed tube voltage of $100$ kVp. During acquisition of these projections, dose modulation and a bowtie filter were used. In order to evaluate the performance of the projection-domain denoising approaches in the image domain, the reconstruction was performed with the open TIGRE toolbox~\\cite{TIGRE}. The dataset also provides information about the photon distribution ($I_0$) for each CT projection, which will be used for the noise model estimation.\n\nFor each patient, CT projection data are provided for both full and simulated lower dose levels. The provided low dose level is $25\\%$ of the routine dose. To test the approaches on the more severe cases of $10\\%$ and $5\\%$ of the routine dose, new low-dose data were simulated according to the algorithm from~\\cite{zeng2015simulation}. \nBecause the simulated data may differ from real data, and the testing results may therefore differ as well, we also compared the approaches on data with real noise. This was possible because the dose differs between projections due to the properties of organs and patient-specific requirements; therefore, full-dose projections with high noise levels can be found. These projections can be considered real noisy projections. \n\n\\subsubsection{Data selection}\nFor the experiments, we selected projections of several patients. The selection was based on the noise levels of the projections. Because the noise levels of projections correlate with the tube currents used (higher noise levels correspond to lower tube currents), the projections were chosen depending on the distributions of the tube currents. The noise levels of the selected projections were verified using the algorithm proposed in~\\cite{liu2014noise_level}, because patient anatomies can also have an impact on noise levels. \nThe distribution of noise levels correlated with the distribution of the tube currents.\n\n\nFor the train dataset, we selected projections of one patient whose low-dose projections have a wide range of estimated noise levels.\nWe randomly picked $21800$ projections from them so that at least $100$ projections are adjacent, in order to be able to use the connections between neighboring frames. We denote the train dataset projections as TM (the train set, middle noise levels). Paired full-low-dose projections were used to train the Noise2Clean model, and low-dose projections were used to train the self-supervised models. For the test datasets, we chose projections of two patients with low and high noise levels of low-dose projections and picked $6000$ consecutive projections with the highest noise levels. We denoted these sets as SL and SH, respectively, where S means that the low-dose projections were simulated and the second letter indicates the noise level: low or high. 
Their low-dose projections serve as noisy inputs to the denoising approaches, and their full-dose projections serve as reference images. We then looked for full-dose projections whose noise levels are comparable to the noise levels of the low-dose projections of other patients. From them, we chose $10000$ consecutive projections with the highest noise level to form the test dataset with real data, denoted as RH (real noise, high level).\nFig.~\\ref{fig:hist_mAs} presents the distributions of the tube currents (mA) for the patients whose projections were selected for the experiments. \n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=2.5in]{figures\/plots\/mA_low.pdf}\n\\caption{Distribution of the tube currents (mA) used to acquire projections for different patients. The figure illustrates the noise levels of the train and test sets used in the experiments, because noise levels correlate with the tube currents used (higher tube currents correspond to lower noise levels).}\n\\label{fig:hist_mAs}\n\\end{figure}\n\n\\subsubsection{Low-dose data simulation}\nUsing the algorithm from~\\cite{zeng2015simulation}, we simulated low-dose projections with doses equal to $5\\%$ and $10\\%$ of the routine dose to test the denoising approaches on projections with higher noise levels. We created $5\\%$- and $10\\%$-low-dose projections from the full-dose projections of the SL and SH sets. We also simulated $5\\%$ TM low-dose projections to test the approaches when trained on data with a high noise level and to assess their adaptability. \n\n\n\\subsection{Learning settings}\n\\subsubsection{Pre-estimation of the $\\lambda$ parameter of the noise model}\nThe noise model parameters $\\lambda$ (the embedding and mapping layers) were pre-estimated using the photon distributions and the information about the tube currents provided for each projection in the LDCT dataset.\nWe took a subset of the TM dataset and extracted information about photon distributions and the corresponding values of tube currents. We trained the embedding layer and the mapping layers to predict a photon distribution from its tube current. The training was performed using the MSE loss function and the Adam optimizer with a learning rate of $10^{-2}$ for $1000$ epochs. The optimized parameters were tested on full-dose and low-dose photon distributions and tube currents of the validation subset of the TM dataset. The RMSRE (root mean square relative error)~\\cite{zeng2015simulation} is $0.16\\pm0.11\\%$ for the full-dose dataset and $0.53\\pm0.40\\%$ for the low-dose dataset. Thus, the error does not exceed $1\\%$ and can be considered insignificant.\nFig.~\\ref{fig:pred_ph_dist} shows the predicted and original photon distributions for different tube currents.\nThe output values of the pre-estimated embedding layer form a bell-shaped curve, which coincides with the theoretical shape of the photon distribution after the bowtie filter.\nThis shape is then stretched to fit the photon distribution corresponding to a particular tube current value.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=2.5in]{figures\/plots\/pred_ph_dist.pdf}\n\\caption{Comparison of the predicted and actual photon distributions (Noise Equivalent Quanta) for different tube currents (mA). The solid lines show the predicted values, and the dashed lines show the true values. 
The figure shows that the noise estimation module successfully estimates the Poisson parameter of the noise.}\n\\label{fig:pred_ph_dist}\n\\end{figure}\n\nExperiments with joint training of the noise model and the main denoising model showed that this pre-estimation makes the models converge faster and give better results. Moreover, freezing the pre-estimated embedding and mapping layers during joint training also leads to slightly better results. Because of that,\nthe pre-estimated and frozen embedding and mapping layers were used for all denoising models that depend on this noise model.\n\n\\subsubsection{Model training}\nAll models were trained in PyTorch using the Adam optimizer with default parameters and a learning rate of $10^{-4}$. The minibatches used for training the supervised and self-supervised models consisted of $64$ and $16$ randomly cropped $64\\times 64$ patches of frames, respectively. The Noise2Clean model was trained using the MSE loss on the projection data. The Half2Half model was trained using the MSE loss on the transmission data. The Noise2Void-4R and Noise2NoiseTD-ANM models were trained using the loss function~(\\ref{eq:loss}) on the transmission data. For all models, the training continued until the training curve reached a plateau. \n\n\\subsection{Results on the simulated data}\n\\subsubsection{Models trained on the $25\\%$-dose data}\n\nWe tested the models trained on the $25\\%$-dose TM dataset on the SL and SH datasets. The $25\\%$-dose projections provided by the LDCT dataset and the $10\\%$ and $5\\%$ low-dose projections simulated by us were denoised and compared to the corresponding full-dose projections.\nThe quantitative comparison in the projection domain is presented in Table~\\ref{tab:sim}. In addition to the commonly used measures for evaluating denoised images, SSIM and PSNR, we used GMSD (gradient magnitude similarity deviation)~\\cite{xue2013gmsd}. It captures image over-blurring unintentionally introduced by denoising, which can be overlooked by PSNR or SSIM.\n\nThe Noise2Clean approach shows the best results for the $25\\%$ low-dose projections. However, the difference between the Noise2Clean and the proposed Noise2NoiseTD-ANM approach is minor, especially in the image domain, although the Noise2NoiseTD-ANM method uses less information than the supervised approach.\nFor lower fractions of the routine dose, the Noise2Clean approach gives worse results. It shows the lowest quality for the $5\\%$ low-dose projections. Although the Noise2Clean approach used the noise level map as an additional input channel, it failed to generalize to lower dose levels the way the self-supervised approaches did. \n\nAs for the self-supervised approaches, the difference between them is small, although it becomes more pronounced for lower doses. 
Although the proposed approach does not show a great advantage over other self-supervised methods in the projection domain, it gives the best results in the image domain for all doses and all image quality assessment methods.\n\n\\begin{table*}[!t]\n\\caption{Quantitative comparison of the denoising approaches, simulated data}\n\\label{tab:sim}\n\\centering\n\\begin{tabular}{c| c| c c c | c c c}\\hline\\hline\n & & \\multicolumn{3}{|c|}{projection domain} & \\multicolumn{3}{c}{image domain}\\\\ \\cline{3-8}\nDataset & Approach & SSIM$\\uparrow$ & PSNR$\\uparrow$ & GMSD$\\downarrow$ & SSIM$\\uparrow$ & PSNR$\\uparrow$ & GMSD$\\downarrow$\\\\\n\\hline\\hline\n\\multicolumn{8}{c}{$25\\%$ of the routine dose} \\\\\\hline\n& Low dose &\t$0.905 \\pm 0.036$ & \t$39.2 \\pm 2.0$ & \t$0.013 \\pm 0.006$ \n& $0.957\\pm0.022$ & $43.9\\pm3.0$ & $0.003\\pm0.002$ \\\\\n& Noise2Clean & $\\mathbf{0.975 \\pm 0.010}$ & \t$\\mathbf{45.6 \\pm 1.6}$ & \t$\\mathbf{0.004 \\pm 0.002}$ \n& $\\mathbf{0.987\\pm0.005}$ & $\\mathbf{48.7\\pm2.0}$ & $\\mathbf{0.002\\pm0.001}$ \\\\ \nSL & Half2Half & $0.973 \\pm 0.010$\t& $44.9 \\pm 1.5$\t& $0.006 \\pm 0.002$ \n& $0.985\\pm0.005$ & $47.3\\pm1.6$ & $0.006\\pm0.002$ \\\\\n& Noise2Void-4R &\t$0.973 \\pm 0.010$ & \t$44.9 \\pm 1.7$ & \t$0.006 \\pm 0.002$\t\n& $0.986\\pm0.005$ & $47.8\\pm1.9$ & $0.004\\pm0.002$ \\\\\n& Ours &\t$0.972 \\pm 0.010$ & \t$44.4 \\pm 1.3$ & \t$0.006 \\pm 0.002$\n& $\\mathbf{0.987\\pm0.005}$ & $48.6\\pm2.1$ & $\\mathbf{0.002\\pm0.001}$ \\\\\n \\hline\n& Low dose &\t$0.873 \\pm 0.023$ & \t$37.1 \\pm 1.3$ & \t$0.016 \\pm 0.005$\n& $0.832\\pm0.030$ & $35.6\\pm1.3$ & $0.015\\pm0.004$ \\\\\n& Noise2Clean &\t$\\mathbf{0.965 \\pm 0.007}$ & \t$\\mathbf{43.7 \\pm 1.1}$ & \t$\\mathbf{0.006 \\pm 0.002}$\n& $\\mathbf{0.944\\pm0.012}$ & $\\mathbf{41.0\\pm1.4}$ & $\\mathbf{0.009\\pm0.003}$ \\\\\nSH & Half2Half & $0.960 \\pm 0.009$\t& $42.8 \\pm 1.2$\t& $0.010 \\pm 0.003$ \n& $0.935\\pm0.014$ & $39.4\\pm1.8$ & $0.021\\pm0.007$ \\\\\n& Noise2Void-4R &\t$0.960 \\pm 0.009$ & \t$42.6 \\pm 1.3$ & \t$0.010 \\pm 0.004$ \n& $0.938\\pm0.013$ & $39.8\\pm1.7$ & $0.016\\pm0.005$ \\\\\n& Ours &\t$0.961 \\pm 0.008$ & \t$42.3 \\pm 1.0$ & \t$0.009 \\pm 0.004$ \n& $0.941\\pm0.011$ & $40.5\\pm1.4$ & $\\mathbf{0.009\\pm0.003}$ \\\\\n \\hline\\hline\n\\multicolumn{8}{c}{$10\\%$ of the routine dose} \\\\\\hline\n& Low dose &\t$0.787 \\pm 0.075$ & \t$33.9 \\pm 2.8$ & \t$0.044 \\pm 0.025$ \n& $0.845\\pm0.103$ & $37.3\\pm5.0$ & $0.024\\pm0.024$ \\\\\n& Noise2Clean &\t$0.958 \\pm 0.029$ & \t$41.6 \\pm 4.3$ & \t$0.023 \\pm 0.023$ \n& $0.939\\pm0.057$ & $42.5\\pm5.7$ & $0.016\\pm0.017$ \\\\\nSL & Half2Half & $0.957 \\pm 0.018$\t& $42.7 \\pm 1.9$\t& $0.011 \\pm 0.004$ \n& $0.979\\pm0.008$ & $45.8\\pm1.8$ & $0.008\\pm0.002$ \\\\\n& Noise2Void-4R &\t$0.961 \\pm 0.015$ & \t$\\mathbf{43.1 \\pm 1.6}$ & \t$0.010 \\pm 0.003$ \n& $0.980\\pm0.007$ & $46.1\\pm1.9$ & $0.007\\pm0.002$ \\\\\n& Ours &\t$\\mathbf{0.963 \\pm 0.010}$ & \t$42.8 \\pm 1.0$ & \t$\\mathbf{0.007 \\pm 0.002}$ \n& $\\mathbf{0.981\\pm0.006}$ & $\\mathbf{46.7\\pm1.8}$ & $\\mathbf{0.004\\pm0.002}$ \\\\\n\\hline\n& Low dose &\t$0.860 \\pm 0.026$ & \t$36.0 \\pm 1.9$ & \t$0.023 \\pm 0.012$\n& $0.813\\pm0.037$ & $34.6\\pm1.9$ & $0.021\\pm0.013$ \\\\\n& Noise2Clean &\t$\\mathbf{0.963 \\pm 0.012}$ & \t$42.5 \\pm 3.2$ & \t$0.012 \\pm 0.015$ \n& $0.938\\pm0.020$ & $40.2\\pm2.3$ & $0.013\\pm0.009$ \\\\\nSH & Half2Half & $0.962 \\pm 0.008$\t& $43.0\\pm 1.2$\t& $0.010 \\pm 0.003$ \n& $0.938\\pm0.013$ & $39.5\\pm1.8$ & $0.022\\pm0.007$ \\\\\n& 
Noise2Void-4R &\t$0.962 \\pm 0.008$ & \t$\\mathbf{43.0 \\pm 1.1}$ & \t$0.011 \\pm 0.003$ \n& $0.935\\pm0.015$ & $39.2\\pm1.9$ & $0.024\\pm0.008$ \\\\\n& Ours &\t$0.961 \\pm 0.009$ & \t$41.4 \\pm 0.8$ & \t$\\mathbf{0.009 \\pm 0.003}$ \n& $\\mathbf{0.942\\pm0.013}$ & $\\mathbf{40.4\\pm1.6}$ & $\\mathbf{0.012\\pm0.004}$ \\\\\n \\hline\\hline\n\\multicolumn{8}{c}{$5\\%$ of the routine dose} \\\\\\hline\n& Low dose &\t$0.632 \\pm 0.106$ & \t$28.2 \\pm 3.3$ & \t$0.108 \\pm 0.050$\n& $0.565\\pm0.262$ & $28.1\\pm7.2$ & $0.113\\pm0.081$ \\\\\n& Noise2Clean &\t$0.863 \\pm 0.103$ & \t$31.6 \\pm 5.9$ & \t$0.099 \\pm 0.059$ \n& $0.707\\pm0.228$ & $32.5\\pm7.9$ & $0.086\\pm0.062$ \\\\\nSL & Half2Half & $0.918 \\pm 0.039$\t& $39.2 \\pm 2.4$\t& $0.025 \\pm 0.012$\n& $0.961\\pm0.019$ & $43.5\\pm2.4$ & $0.010\\pm0.003$ \\\\\n& Noise2Void-4R &\t$0.934 \\pm 0.027$ & \t$40.5 \\pm 1.9$ & \t$0.017 \\pm 0.004$ \n& $0.968\\pm0.014$ & $44.0\\pm2.2$ & $0.011\\pm0.004$ \\\\\n& Ours &\t$\\mathbf{0.963 \\pm 0.012}$ & \t$\\mathbf{41.9 \\pm 1.1}$ & \t$\\mathbf{0.011 \\pm 0.003}$ \n& $\\mathbf{0.980\\pm0.007}$ & $\\mathbf{45.9\\pm1.8}$ & $\\mathbf{0.007\\pm0.003}$ \\\\\n \\hline\n& Low dose &\t$0.742 \\pm 0.055$ & \t$31.1 \\pm 2.9$ & \t$0.058 \\pm 0.028$ \n& $0.569\\pm0.118$ & $27.5\\pm3.5$ & $0.087\\pm0.046$ \\\\\n& Noise2Clean &\t$0.935 \\pm 0.034$ & \t$36.3 \\pm 5.9$ & \t$0.048 \\pm 0.038$\n& $0.779\\pm0.123$ & $32.3\\pm4.5$ & $0.060\\pm0.037$ \\\\\nSH & Half2Half & $0.944 \\pm 0.016$\t& $40.8 \\pm 1.7$\t& $0.017 \\pm 0.006$\n& $0.916\\pm0.020$ & $38.1\\pm1.8$ & $0.027\\pm0.008$ \\\\\n& Noise2Void-4R &\t$\\mathbf{0.956 \\pm 0.009}$ & \t$\\mathbf{41.9 \\pm 1.4}$ & \t$0.016 \\pm 0.006$ \n& $0.924\\pm0.017$ & $38.1\\pm2.1$ & $0.033\\pm0.011$ \\\\\n& Ours &\t$\\mathbf{0.956 \\pm 0.009}$ & \t$40.7 \\pm 0.8$ & \t$\\mathbf{0.012 \\pm 0.004}$ \n& $\\mathbf{0.930\\pm0.015}$ & $\\mathbf{39.2\\pm1.7}$ & $\\mathbf{0.019\\pm0.006}$ \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{table*}\n\n\\begin{figure*}[!t]\n\\centering\n\\subfloat[from the SL dataset]{\\includegraphics[width=0.5\\linewidth]{figures\/results\/L033_prj.jpg}%\n}\n\\hfil\n\\subfloat[from the SH dataset]{\\includegraphics[width=0.5\\linewidth]{figures\/results\/L134_prj.jpg}%\n}\n\\caption{The example parts of CT projections of different doses ($25\\%$, $10\\%$, $5\\%$) denoised by the approaches. N2NTD-ANM stands for Noise2NoiseTD-ANM. The arrow points to the parts that the Noise2Clean model failed to restore due to high noise levels.}\n\\label{fig:sim_prj}\n\\end{figure*}\n\nThe approaches were also evaluated qualitatively. Fig.~\\ref{fig:sim_prj} shows the example parts of the denoised low-dose CT projections. The CT slices reconstructed from these projections are presented in Fig.~\\ref{fig:sim_img}. It can be found from the figures that although the Noise2Clean approach preserves structural details rather good, being trained on the data with lower noise levels, it fails to restore pixels corrupted by more severe noise. This led to streaks and noise on the reconstructed CT slices.\nAt the same time, the $25\\%$ low-dose projections denoised by the Noise2NoiseTD-ANM and the reconstructed from them CT slices have the quality comparable to the CT projections and the reconstructed slices of the Noise2Clean approach.\nThe figures also confirm that the Noise2NoiseTD-ANM approach preserves edges and small details better than the Noise2Void-4R and Half2Half models. 
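\n\nFor reference, a small sketch of the GMSD measure used in the comparison above is given below, following the definition in~\\cite{xue2013gmsd}; the Prewitt kernels are standard, the stabilizing constant corresponds to images scaled to $[0,1]$, and the average-pooling pre-step of the original method is omitted for brevity.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import convolve\n\ndef gmsd(ref, dist, c=0.0026):\n    # Gradient Magnitude Similarity Deviation between a reference and a\n    # distorted image; lower values indicate higher similarity.\n    hx = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float) \/ 3.0\n    hy = hx.T\n    def grad_mag(img):\n        return np.sqrt(convolve(img, hx) ** 2 + convolve(img, hy) ** 2)\n    m_ref, m_dist = grad_mag(ref), grad_mag(dist)\n    gms = (2 * m_ref * m_dist + c) \/ (m_ref ** 2 + m_dist ** 2 + c)\n    return gms.std()\n\\end{verbatim}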
\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/results\/L033_img.jpg}\n\\caption{The comparison of CT slices reconstructed from the denoised $25\\%$, $10\\%$, $5\\%$ low-dose projections (from the SL dataset). N2V-4R and N2NTD-ANM stand for Noise2Void-4R and Noise2NoiseTD-ANM, respectively.}\n\\label{fig:sim_img}\n\\end{figure*}\n\n\n\n\\subsubsection{Cross-test}\nTo test the capability of each approach to make a neural network generalize to different noise levels, we performed cross-testing. In addition to the models trained on the $25\\%$-dose TM dataset, we trained the models of each approach on the $5\\%$-dose TM dataset using the same training settings. The trained models were tested on five sets of projections ($6000$ projections in each set), including the SL and SH datasets. Three other datasets were selected such that their noise levels were between those of the SL and SH datasets. Each testing dataset was made at $25\\%$, $10\\%$, and $5\\%$ of the routine dose. The results of the cross-testing are presented in Fig.~\\ref{fig:cross_test}.\n\n\\begin{figure*}[!t]\n\\centering\n\\subfloat{\\includegraphics[width=0.45\\linewidth]{figures\/plots\/Cross_test_prj_dom_SSIM_5_sets_h2h.pdf}%\n}\n\\hfil\n\\subfloat{\\includegraphics[width=0.45\\linewidth]{figures\/plots\/Cross_test_prj_dom_GMSD_5_sets_h2h.pdf}%\n}\n\\caption{The quantitative comparison of the denoising models trained on $25\\%$ and $5\\%$ low-dose projections using box plots in the projection domain.}\n\\label{fig:cross_test}\n\\end{figure*}\n\nThe box plots show that the difference between the Noise2Clean models trained on the datasets made at $25\\%$ and $5\\%$ doses is significant. The difference is smaller for the Half2Half models but still more pronounced than for the Noise2Void-4R and Noise2NoiseTD-ANM models. Thus, the Noise2NoiseTD-ANM and Noise2Void-4R models generalize better to various noise levels. They show approximately the same tendency in the differences between models trained at dose levels of $25\\%$ and $5\\%$. Both methods use the proposed noise model, which improves their generalization capability.\nThe advantage of the noise model in making the denoising adaptable to various noise levels is further confirmed below by comparing training with the noise model against training with the simple MSE loss function on the projection data.\n\nIn the comparison of the self-supervised models, the Noise2NoiseTD-ANM model shows slightly better results; this is more pronounced in the GMSD box plot and may become more visible in the image domain. Although the difference between the approaches is not prominent for $25\\%$ of the routine dose, it increases as the dose decreases and becomes clear for $5\\%$ of the routine dose.\nThe advantage of the Noise2NoiseTD-ANM model over the Noise2Void-4R model is that it leverages information from adjacent projections, which helps to preserve edges and some small details. Noise2Void-4R restores a pixel from the neighborhood of this pixel within the single image to be denoised. If this neighborhood contains a large fraction of corrupted pixels, the pixel is less likely to be restored correctly. At the same time, the use of adjacent projections by the Noise2NoiseTD-ANM approach decreases this error.\n\n\\subsection{Results on the real data}\nWe tested the Noise2Clean and Noise2NoiseTD-ANM approaches on the RH dataset. 
\nBecause the proposed method assumes training without high-quality reference images, we also trained the Noise2NoiseTD-ANM model directly on the RH dataset with the same learning settings. We denoted this model as Noise2NoiseTD-ANM$^*$.\nThe comparison was performed only qualitatively, because there are no high-quality reference images that would allow full-reference quality measures, and no reference-free measures have been proven to be representative. \n\nThe comparison is presented in Fig.~\\ref{fig:real_prj}. The figure demonstrates that the Noise2NoiseTD-ANM model removed the noise slightly worse than the Noise2Clean model, but it preserved some structural details better. The Noise2NoiseTD-ANM$^*$ model trained on the data with real noise suppressed the noise better than the Noise2NoiseTD-ANM model while maintaining the same level of detail preservation.\nThe parts of the CT slices reconstructed from the denoised projections are shown in Fig.~\\ref{fig:real_img}. The orange arrow indicates a small area in the lung where a light band appeared for the Noise2Clean model as a result of over-smoothing; this band does not exist in the original noisy image and did not appear for the Noise2NoiseTD-ANM models.\n\nThe Noise2NoiseTD-ANM$^*$ model demonstrated higher visual quality. Thus, training the model directly on data with real noise can be advantageous, and applying the Noise2NoiseTD-ANM approach to real data can result in more accurate denoising than applying the Noise2Clean model trained on simulated data.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/results\/L006_prj.jpg}\n\\caption{The comparison of the parts of the denoised CT projections with real noise. The arrow shows that the Noise2NoiseTD-ANM models better preserve structures than the Noise2Clean model.}\n\\label{fig:real_prj}\n\\end{figure}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/results\/L006_img.jpg}\n\\caption{The comparison of the example parts of CT slices reconstructed from the denoised projections that had real noise. The arrow shows that the Noise2Clean model almost connected structures that were originally disconnected, unlike the Noise2NoiseTD-ANM models, due to stronger smoothing.}\n\\label{fig:real_img}\n\\end{figure}\n\n\n\\subsection{Experiments with the noise model}\nThe cross-testing experiment showed that the noise model helps the self-supervised Noise2Void-4R and Noise2NoiseTD-ANM models perform better on data with unseen noise levels.\nTo verify that the better adaptation of the self-supervised models to different noise levels is indeed due to the noise model, we tested the noise model against the simple MSE loss function. We trained the Noise2NoiseTD-ANM model (without the ReLU activation at the end) using the MSE loss function on the projection data with the previously defined learning settings. The model trained with the MSE loss function was compared, in the projection domain, to the Noise2NoiseTD-ANM model trained with the noise model on the five test datasets used for cross-testing.\nThe results (Fig.~\\ref{fig:mse_vs_noise}) confirm that the proposed noise model enhances the generalization capability of the denoising model. 
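\n\nFor clarity, the baseline in this ablation simply replaces the likelihood-based objective~(\\ref{eq:loss}) with a plain pixel-wise error on projection data; a minimal sketch (with illustrative tensor names) is:\n\\begin{verbatim}\nimport torch\n\ndef mse_projection_loss(pred_projection, noisy_projection):\n    # Plain MSE baseline: the same backbone is trained on projection data\n    # without the noise model (cf. the loss applied to transmission data).\n    return torch.mean((pred_projection - noisy_projection) ** 2)\n\\end{verbatim}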
\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/plots\/mse_vs_noise.pdf}\n\\caption{The quantitative comparison of the noise model and the MSE-loss function in the projection domain.}\n\\label{fig:mse_vs_noise}\n\\end{figure}\n\nAdditionally, we tested whether taking the CT scanner properties into account gives improvements. We compared the proposed noise model ($\\lambda=\\lambda(i, mA)$) with a model that assumes the Gaussian approximation of the mixed Poisson-Gaussian noise distribution with constant parameters $\\lambda$ ($\\lambda=const$) and $\\sigma_e^2$ of the noise variance. These parameters can hardly be pre-estimated in this case. \nFor this comparison, we trained the Noise2NoiseTD-ANM model in the transmission domain with the $\\lambda$ parameter of the noise variance depending neither on the pixel position nor on the tube current ($\\lambda=const$). \nThe results of the comparison in the projection domain are presented in Table~\\ref{tab:noise}. Fig.~\\ref{fig:noise_prj} gives a visual comparison of example denoised projection parts and the resulting CT slices.\nThe results show that the varying parameter $\\lambda$ of the noise variance allows the models to remove noise better while preserving edges and fine details. Moreover, if the parameter $\\lambda$ were independent of the tube current, the models would not easily adapt to different noise levels: the parameter would need to be retrained, or the results would worsen.\n\n\\begin{table}[!t]\n\\caption{The quantitative evaluation of the noise model with the Poisson parameter $\\lambda$ dependent on the detector column position and tube current ($\\lambda=\\lambda(i, mA)$) and the constant Poisson parameter $\\lambda$ ($\\lambda=const$) in the projection domain.}\n\\label{tab:noise}\n\\centering\n\\begin{tabular}{c| c c c}\\hline\\hline\nNoise model & SSIM$\\uparrow$ & PSNR$\\uparrow$ & GMSD$\\downarrow$ \\\\\n\\hline\\hline\n\\multicolumn{4}{c}{SL dataset}\\\\\\hline\n$\\lambda=const$ &\t$0.967 \\pm 0.011$ & \t$44.0 \\pm 1.3$ & \t$0.008 \\pm 0.002$ \\\\ \n$\\lambda=\\lambda(i, mA)$ &\t$0.972 \\pm 0.010$ & \t$44.4 \\pm 1.3$ & \t$0.006 \\pm 0.002$ \\\\\n\\hline\\hline \n\\multicolumn{4}{c}{SH dataset}\\\\\\hline\n$\\lambda=const$ &\t$0.956 \\pm 0.009$ & \t$42.5 \\pm 1.2$ & \t$0.010 \\pm 0.004$ \\\\ \n$\\lambda=\\lambda(i, mA)$ &\t$0.961 \\pm 0.008$ & \t$42.3 \\pm 1.0$ & \t$0.009 \\pm 0.004$ \\\\ \\hline\n\n\\hline\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[!t]\n\\centering\n\\subfloat[parts of CT projections]{\\includegraphics[width=\\linewidth]{figures\/results\/L033_nm_prj_wo_pl.jpg}}\n\\vfil\n\\subfloat[parts of CT slices]{\\includegraphics[width=0.95\\linewidth]{figures\/results\/L033_nm_img.jpg}\n}\n\\caption{The example parts of CT projections from the SL dataset denoised by the Noise2NoiseTD-ANM with different noise models and the CT slices reconstructed from them.}\n\\label{fig:noise_prj}\n\\end{figure}\n\n\n\\section{Discussion and conclusions}\nIn this study, we considered self-supervised deep-learning methods for denoising low-dose CT projections. Although supervised methods have been shown to perform better than self-supervised approaches for the denoising task, they require a representative dataset of paired clean-noisy images, which is difficult to obtain in the clinical setting. \nAs for synthetic approaches (e.g., using GANs \\cite{prokopenko2019unpaired}), it is not always possible to obtain the complete information about the CT machine necessary for an accurate simulation. 
Moreover, a closer match between the distributions of the training and testing data is known to lead to better results, making training on real data preferable.\nTherefore, self-supervised approaches for denoising low-dose CT images are of particular value. Early self-supervised approaches mostly built upon the Noise2Noise model, with the required dataset generation complicating the denoising task and making the solution less adaptable.\n\nHerein, we proposed a new self-supervised approach, Noise2NoiseTD-ANM, that uses only the original noisy projections. This method is based on restoring a given CT projection from its adjacent projections, as in the problem of frame prediction from a frame sequence, and on modeling the data distributions for model training and inference. \nFor this, we included ConvLSTM units in the network and developed a noise model that takes into account the theoretical, physics-based noise distribution of the CT projections. This noise model incorporates the effect of bowtie filtering and adapts to the dose modulation.\nIt should be noted that the train-inference scheme was taken from~\\cite{laine2019hqss} and adapted for denoising CT projections using the developed noise model. \n\nUsing the same backbone neural network architecture, we compared our approach with the Noise2Clean, the Half2Half, and the state-of-the-art blind-spot network-based approach~\\cite{laine2019hqss}, using the adapted train-inference scheme. We tested the models on simulated test data with both high and low noise levels. The results showed that when training and testing on data with approximately the same noise levels, the Noise2Clean approach gives slightly better results. However, the Noise2Clean model, even when given noise level maps, fails to properly denoise projections with different noise levels, whereas our physics-based self-supervised approach with a realistic noise model is more successful in generalizing across noise levels. These experiments also emphasize the advantage of the self-supervised technique over the supervised training regime. \nThe noise model, capable of estimating the noise properties separately, also allows easy adaptation to various denoising scenarios. For example, if the noise distribution of the test data differs from the noise model of the trained model, the noise model can be easily tuned; then the main denoising model and the noise model can be retrained together on the new data, or only the parameters of the noise model can be optimized.\n\nIn the comparison of the self-supervised approaches, our approach marginally outperformed the others. While the approach from~\\cite{laine2019hqss} relies only on the neighborhood of a pixel to be denoised, our approach looks at the adjacent projections, which contain almost the same content but different noise realizations. Therefore, our approach is less prone to losing object details, even in cases of severe noise. \n\nIn addition, we compared the approaches on test projections with real noise. Even when trained on the simulated data, the proposed approach turned out to be better than the supervised approach. Furthermore, our approach can be trained directly on the data to be denoised. As expected, training the Noise2NoiseTD-ANM model on the test data with real noise gave improvements. However, the comparison was performed only qualitatively. 
Similarly to other medical imaging modalities, the quantitative comparison could not be performed because of the lack of representative no-reference image quality assessment (IQA) methods \\cite{Kastryulin}. The no-reference IQA methods as well as full-reference IQA methods, appropriate for the assessment of CT projections and CT images, are yet to be developed.\n\nThus, the proposed self-supervised approach Noise2NoiseTD-ANM outperformed the other considered approaches in terms of adaptability and the quality of the denoised images. We believe this approach is promising given it can readily denoise LDCT images in a clinical setting. Further development of the self-supervised approach could decrease the dose of X-ray radiation even more, especially if combined with other low-photon image recovery methods \\cite{pronina2020microscopy}. However, the proposed model still requires additional testing using more projections with real noise in a controlled manner. Lastly, the opinion of the radiologists about the projections denoised by our approach should also be analyzed.\n\n\\section*{Compliance with Ethical Standards}\nThis research study was conducted retrospectively using human subject data made available in open access by The Cancer Imaging Archive (TCIA). The usage of this dataset has been approved by the Philips internal committee for biomedical experiments.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section*{Acknowledgments}\nWe thank Dr. Frank Bergner, Dr. Thomas Koehler (Philips GmbH Innovative Technologies) and Dr. Kevin M.Brown (Philips Healthcare) for supporting this research and providing feedback for this article.\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn \\cite{AG2} we explored the question what symmetric pairs are\nGelfand pairs. We introduced the notion of regular symmetric pair\nand conjectured that all symmetric pairs are regular. This\nconjecture would imply that many symmetric pairs are Gelfand\npairs, including all connected symmetric pair over ${\\mathbb C}$.\n\nIn this paper we show that the pairs $$(GL(V),O(V)), \\,\n(GL(V),U(V)), \\, (U(V),O(V)), \\, (O(V \\oplus W),O(V) \\times O(W)),\n\\, (U(V \\oplus W),U(V) \\times U(W))$$ are regular where $V$ and\n$W$ are quadratic or hermitian spaces over arbitrary local field\nof characteristic zero. We deduce from this that the pairs\n$(GL_n({\\mathbb C}),O_n({\\mathbb C}))$ and $(O_{n+m}({\\mathbb C}),O_n({\\mathbb C}) \\times O_m({\\mathbb C}))$\nare Gelfand pairs.\n\nIn general, if we would know that all symmetric pairs are regular,\nthen in order to show that a given symmetric pair $(G,H)$ is a\nGelfand pair it would be enough to check the following condition\nthat we called \"goodness\":\\\\\n(*) Every closed $H$-double coset in $G$ is invariant with respect\nto $\\sigma$. Here, $\\sigma$ is the anti-involution defined by\n$\\sigma(g):= \\theta(g^{-1})$ and $\\theta$ is an involution (i.e.\nautomorphism of order 2) of $G$ such that $H = G^{\\theta}$.\n\nThis condition always holds for connected symmetric pairs over\n${\\mathbb C}$.\n\nMeanwhile, before the conjecture is proven, in order to show that\na given symmetric pair is a Gelfand pair one has to verify that\nthe pair is good, to prove that it is regular and also to compute\nits \"descendants\" and show that they are also regular. 
The\n\"descendants\" are certain symmetric pairs related to centralizers\nof semisimple elements.\n\n\nIn this paper we develop further the tools from \\cite{AG2} \nfor proving regularity of symmetric pairs. We also\nintroduce a systematic way to compute descendants of classical\nsymmetric pairs.\n\nBased on that we show that all the descendants of the above\nsymmetric pairs are regular.\n\n\\subsection{Structure of the paper} $ $\\\\\nIn section \\ref{PrelNot} we introduce the notions that we discuss\nin this paper. In subsection \\ref{GelPairs} we discuss the notion\nof Gelfand pair and review a classical technique for proving\nGelfand property due to Gelfand and Kazhdan. In subsection\n\\ref{SymPar} we review the results of \\cite{AG2}, introduce the\nnotions of symmetric pair, descendants of a symmetric pair, good\nsymmetric pair and regular symmetric pair mentioned above and\ndiscuss their relations to Gelfand property.\n\nIn section \\ref{MainRes} we formulate the main results of the\npaper. We also explain how they follow from the rest of the paper.\n\nIn section \\ref{GradRepDef} we introduce terminology that enables\nus to prove regularity for symmetric pairs in question.\n\nIn section \\ref{SecReg} we prove regularity for symmetric pairs in\nquestion.\n\nIn section \\ref{CompDes} we compute the descendants of those\nsymmetric pairs.\n\n\n\\subsection{Acknowledgements}\nWe are grateful to \\textbf{Herve Jacquet} for a suggestion to\nconsider the pair $(U_{2n}, U_n \\times U_n)$ which inspired this\npaper. We also thank \\textbf{Joseph Bernstein}, \\textbf{Erez\nLapid}, \\textbf{Eitan Sayag} and \\textbf{Lei Zhang} for fruitful\ndiscussions and \\textbf{Gerard Schiffmann} for useful remarks.\n\nBoth authors were partially supported by a BSF grant, a GIF grant, and an ISF Center\nof excellency grant. A.A was also supported by ISF grant No. 583\/09 and \nD.G. by NSF grant DMS-0635607. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.\n\n\n\\section{Preliminaries and notations} \\label{PrelNot}\n\\setcounter{lemma}{0}\n\\begin{itemize}\n\\item Throughout the paper we fix an arbitrary local field $F$ of characteristic zero.\n\\item All the algebraic varieties and algebraic\ngroups that we will consider will be defined over $F$.\n\\item For a group $G$ acting on a set $X$ and an element $x \\in X$\nwe denote by $G_x$ the stabilizer of $x$.\n\\item By a reductive group we mean an algebraic reductive group.\n\\end{itemize}\n\nIn this paper we will refer to distributions on algebraic\nvarieties over archimedean and non-archimedean fields. In the\nnon-archimedean case we mean the notion of distributions on\n$l$-spaces from \\cite{BZ}, that is linear functionals on the space\nof locally constant compactly supported functions. In the\narchimedean case one can consider the usual notion of\ndistributions, that is continuous functionals on the space of\nsmooth compactly supported functions, or the notion of Schwartz\ndistributions (see e.g. \\cite{AG1}). It does not matter here which\nnotion to choose since in the cases of consideration of this\npaper, if there are no nonzero equivariant Schwartz distributions\nthen there are no nonzero equivariant distributions at all (see\nTheorem 4.0.2 in \\cite{AG2}).\n\n\n\\begin{notation}\nLet $E$ be an extension of $F$. Let $G$ be an algebraic group\ndefined over $F$. 
We denote by $G_{E\/F}$ the canonical algebraic\ngroup defined over $F$ such that $G_{E\/F}(F)=G(E)$.\n\\end{notation}\n\n\n\\subsection{Gelfand pairs} \\label{GelPairs}\n$ $\\\\\nIn this section we recall a technique due to Gelfand and Kazhdan\n(\\cite{GK}) which allows to deduce statements in representation\ntheory from statements on invariant distributions. For more\ndetailed description see \\cite{AGS}, section 2.\n\n\\begin{definition}\nLet $G$ be a reductive group. By an \\textbf{admissible\nrepresentation of} $G$ we mean an admissible representation of\n$G(F)$ if $F$ is non-archimedean (see \\cite{BZ}) and admissible\nsmooth {Fr\\'{e}chet \\,} representation of $G(F)$ if $F$ is archimedean.\n\\end{definition}\n\nWe now introduce three notions of Gelfand pair.\n\n\\begin{definition}\\label{GPs}\nLet $H \\subset G$ be a pair of reductive groups.\n\\begin{itemize}\n\\item We say that $(G,H)$ satisfy {\\bf GP1} if for any irreducible\nadmissible representation $(\\pi,E)$ of $G$ we have\n$$\\dim Hom_{H(F)}(E,\\mathbb{C}) \\leq 1$$\n\n\n\\item We say that $(G,H)$ satisfy {\\bf GP2} if for any irreducible\nadmissible representation $(\\pi,E)$ of $G$ we have\n$$\\dim Hom_{H(F)}(E,\\mathbb{C}) \\cdot \\dim Hom_{H}(\\widetilde{E},\\mathbb{C})\\leq\n1$$\n\n\\item We say that $(G,H)$ satisfy {\\bf GP3} if for any irreducible\n{\\bf unitary} representation $(\\pi,\\mathcal{H})$ of $G(F)$ on a\nHilbert space $\\mathcal{H}$ we have\n$$\\dim Hom_{H(F)}(\\mathcal{H}^{\\infty},\\mathbb{C}) \\leq 1.$$\n\\end{itemize}\n\n\\end{definition}\nProperty GP1 was established by Gelfand and Kazhdan in certain\n$p$-adic cases (see \\cite{GK}). Property GP2 was introduced in\n\\cite{Gross} in the $p$-adic setting. Property GP3 was studied\nextensively by various authors under the name {\\bf generalized\nGelfand pair} both in the real and $p$-adic settings (see e.g.\n\\cite{vD,Bos-vD}).\n\nWe have the following straightforward proposition.\n\n\\begin{proposition}\n$GP1 \\Rightarrow GP2 \\Rightarrow GP3.$\n\\end{proposition}\n\nWe will use the following theorem from \\cite{AGS} which is a\nversion of a classical theorem of Gelfand and Kazhdan.\n\n\\begin{theorem}\\label{DistCrit}\nLet $H \\subset G$ be reductive groups and let $\\tau$ be an\ninvolutive anti-automorphism of $G$ and assume that $\\tau(H)=H$.\nSuppose $\\tau(\\xi)=\\xi$ for all bi $H(F)$-invariant distributions\n$\\xi$ on $G(F)$. Then $(G,H)$ satisfies GP2.\n\\end{theorem}\n\nIn the cases we consider in this paper GP2 is equivalent to GP1 by\nthe following proposition.\n\n\\begin{proposition} \\label{GP2GP1}\n$ $\\\\\n(i) Let $V$ be a quadratic space (i.e. a linear space with a\nnon-degenerate quadratic form) and let $H \\subset GL(V)$ be any\ntranspose invariant subgroup.\n Then $GP1$ is equivalent to $GP2$ for the pair\n$(\\mathrm{GL}(V),H)$.\\\\\n(ii) Let $V$ be a quadratic space and let $H \\subset O(V)$ be any\nsubgroup. Then $GP1$ is equivalent to $GP2$ for the pair\n$(O(V),H)$.\n\\end{proposition}\nIt follows from the following 2 propositions.\n\n\\begin{proposition} \\label{GKCor}\nLet $H \\subset G$ be reductive groups and let $\\tau$ be an\nanti-automorphism of $G$ such that\\\\\n(i) $\\tau^2\\in Ad(G(F))$\\\\\n(ii) $\\tau$ preserves any closed conjugacy class in $G(F)$\\\\\n(iii) $\\tau(H)=H$.\\\\\nThen $GP1$ is equivalent to $GP2$ for the pair $(G,H)$.\n\\end{proposition}\n\nFor proof see \\cite{AG2}, Corollary 8.2.3.\n\n\\begin{proposition} $ $\\\\\n(i) Let $V$ be a quadratic space and let $g \\in GL(V)$. 
Then $g$\nis conjugate to $g^{t}$.\\\\\n(ii) Let $V$ be a quadratic space and let $g \\in O(V)$. Then $g$\nis conjugate to $g^{-1}$ inside $O(V)$\n\\end{proposition}\nPart (i) is well known. For the proof of (ii) see \\cite{MVW},\nProposition I.2 in chapter 4.\n\n\\subsection{Symmetric pairs} \\label{SymPar}\n$ $\\\\\nIn this subsection we review some tools developed in \\cite{AG2}\nthat enable to prove that a symmetric pair is a Gelfand pair. The\nmain results discussed in this subsection are Theorem\n\\ref{LinDes}, Theorem \\ref{GoodHerRegGK} and Proposition\n\\ref{SpecCrit}.\n\n\\begin{definition}\nA \\textbf{symmetric pair} is a triple $(G,H,\\theta)$ where $H\n\\subset G$ are reductive groups, and $\\theta$ is an involution of\n$G$ such that $H = G^{\\theta}$. We call a symmetric pair\n\\textbf{connected} if $G\/H$ is connected.\n\nFor a symmetric pair $(G,H,\\theta)$ we define an antiinvolution\n$\\sigma :G \\to G$ by $\\sigma(g):=\\theta(g^{-1})$, denote ${\\mathfrak{g}}:=Lie\nG$, ${\\mathfrak{h}} := LieH$, $\\g^{\\sigma}:=\\{a \\in {\\mathfrak{g}} | \\theta(a)=-a\\}$. Note that\n$H$ acts on $\\g^{\\sigma}$ by the adjoint action. Denote also\n$G^{\\sigma}:=\\{g \\in G| \\sigma(g)=g\\}$ and define a\n\\textbf{symmetrization map} $s:G \\to G^{\\sigma}$ by $s(g):=g\n\\sigma(g)$.\n\nIn case when the involution is obvious we will omit it.\n\\end{definition}\n\n\\begin{remark}\nLet $(G,H,\\theta)$ be a symmetric pair. Then ${\\mathfrak{g}}$ has a ${\\mathbb Z}\/2{\\mathbb Z}$\ngrading given by $\\theta$.\n\\end{remark}\n\n\n\\begin{definition}\nLet $(G_1,H_1,\\theta_1)$ and $(G_2,H_2,\\theta_2)$ be symmetric\npairs. We define their \\textbf{product} to be the symmetric pair\n$(G_1 \\times G_2,H_1 \\times H_2,\\theta_1 \\times \\theta_2)$.\n\\end{definition}\n\n\\begin{definition}\nWe call a symmetric pair $(G,H,\\theta)$ \\textbf{good} if for any\nclosed $H(F) \\times H(F)$ orbit $O \\subset G(F)$, we have\n$\\sigma(O)=O$.\n\\end{definition}\n\n\\begin{proposition} \\label{GoodCrit}\nEvery connected symmetric pair over ${\\mathbb C}$ is good.\n\\end{proposition}\nFor proof see e.g. \\cite{AG2}, Corollary 7.1.7.\n\n\\begin{definition}\nWe say that a symmetric pair $(G,H,\\theta)$ is a \\textbf{GK pair}\nif any $H(F) \\times H(F)$ - invariant distribution on $G(F)$ is\n$\\sigma$ - invariant.\n\\end{definition}\n\n\\begin{remark}\nTheorem \\ref{DistCrit} implies that any GK pair satisfies GP2.\n\\end{remark}\n\\subsubsection{Descendants of symmetric pairs}\n\\begin{proposition} \\label{PropDescend}\nLet $(G,H,\\theta)$ be a symmetric pair. Let $g \\in G(F)$ such\nthat $HgH$ is closed. Let $x=s(g)$. Then $x$ is a semisimple\nelement of $G$.\n\\end{proposition}\nFor proof see e.g. 
\\cite{AG2}, Proposition 7.2.1.\n\\begin{definition}\nIn the notations of the previous proposition we will say that the\npair $(G_x,H_x,\\theta|_{G_x})$ is a \\textbf{descendant} of\n$(G,H,\\theta)$.\n\\end{definition}\n\n\n\\subsubsection{Tame symmetric pairs}\n\\begin{definition}\nLet $\\pi$ be an action of a reductive group $G$ on a smooth affine\nvariety $X$.\nWe say that an algebraic automorphism $\\tau$ of $X$ is \\textbf{$G$-admissible} if \\\\\n(i) $\\pi(G(F))$ is of index at most 2 in the group of\nautomorphisms of $X$\ngenerated by $\\pi(G(F))$ and $\\tau$.\\\\\n(ii) For any closed $G(F)$ orbit $O \\subset X(F)$, we have\n$\\tau(O)=O$.\n\\end{definition}\n\n\\begin{definition}\nWe call an action of a reductive group $G$ on a smooth affine\nvariety $X$ \\textbf{tame} if for any $G$-admissible $\\tau : X \\to\nX$, every $G(F)$-invariant distribution on $X(F)$ is\n$\\tau$-invariant.\n\nWe call a symmetric pair $(G,H,\\theta)$ \\textbf{tame} if the\naction of $H\\times H$ on $G$ is tame.\n\\end{definition}\n\n\\begin{remark}\nEvidently, any good tame symmetric pair is a GK pair.\n\\end{remark}\n\n\\begin{notation}\nLet $V$ be an algebraic finite dimensional representation over $F$\nof a reductive group $G$. Denote $Q(V):=V\/V^G$. Since $G$ is\nreductive, there is a canonical embedding $Q(V) \\hookrightarrow\nV$.\n\\end{notation}\n\n\\begin{notation}\nLet $(G,H,\\theta)$ be a symmetric pair. We denote by\n$\\mathcal{N}_{G,H}$ the subset of all the nilpotent elements in\n$Q(\\g^{\\sigma})$. Denote $R_{G,H}:=Q(\\g^{\\sigma}) - \\mathcal{N}_{G,H}$.\n\\end{notation}\nNote that our notion of $R_{G,H}$ coincides with the notion\n$R(\\g^{\\sigma})$ used in \\cite{AG2}, \nNotation 2.3.10. This follows from\nLemma 7.1.11 in \\cite{AG2}.\n\n\\begin{definition}\nWe call a symmetric pair $(G,H,\\theta)$ \\textbf{weakly linearly\ntame} if for any $H$-admissible transformation $\\tau$ of $\\g^{\\sigma}$\nsuch that every $H(F)$-invariant distribution on $R_{G,H}$ is also\n$\\tau$-invariant,\nwe have\\\\\n(*) every $H(F)$-invariant distribution on $Q(\\g^{\\sigma})$ is also\n$\\tau$-invariant.\n\\end{definition}\n\n\\begin{theorem} \\label{LinDes}\nLet $(G,H,\\theta)$ be a symmetric pair. Suppose that all its\ndescendants (including itself) are weakly linearly tame. Then\n$(G,H,\\theta)$ is tame.\n\\end{theorem}\nFor proof see Theorem 7.3.3 in \\cite{AG2}.\n\nNow we would like to formulate a criterion for being weakly linearly\ntame. For it we\nwill need the following lemma and notation.\n\n\\begin{lemma}\nLet $(G,H,\\theta)$ be a symmetric pair. Then any nilpotent element\n$x \\in \\g^{\\sigma}$ can be extended to an $sl_2$ triple $(x,d(x),x_-)$\nsuch that $d(x) \\in {\\mathfrak{h}}$ and $x_- \\in \\g^{\\sigma}$.\n\\end{lemma}\nFor proof see e.g. \\cite{AG2}, Lemma 7.1.11.\n\n\\begin{notation}\nWe will use the notation $d(x)$ from the last lemma in the future.\nIt is not uniquely defined but whenever we will use this notation\nnothing will depend on its choice.\n\\end{notation}\n\n\\begin{proposition} \\label{SpecCrit}\nLet $(G,H,\\theta)$ be a symmetric pair. Suppose that for any\nnilpotent $x \\in \\g^{\\sigma}$ we have\n$$\\operatorname{Tr}(ad(d(x))|_{{\\mathfrak{h}}_x}) < \\dim Q(\\g^{\\sigma}) .$$\nThen the pair $(G,H,\\theta)$ is weakly linearly tame.\n\\end{proposition}\nThis proposition follows from \\cite{AG2} (Propositions 7.3.7 and\n7.3.5).\n\n\\subsubsection{Regular symmetric pairs}\n\\begin{definition}\nLet $(G,H,\\theta)$ be a symmetric pair. 
We call an element $g \\in\nG(F)$ \\textbf{admissible} if\\\\\n(i) $Ad(g)$ commutes with $\\theta$ (or, equivalently, $s(g)\\in Z(G)$) and \\\\\n(ii) $Ad(g)|_{{\\mathfrak{g}}^{\\sigma}}$ is $H$-admissible.\n\\end{definition}\n\n\\begin{definition}\nWe call a symmetric pair $(G,H,\\theta)$ \\textbf{regular} if for\nany admissible $g \\in G(F)$ such that every $H(F)$-invariant\ndistribution on $R_{G,H}$ is also $Ad(g)$-invariant,\nwe have\\\\\n(*) every $H(F)$-invariant distribution on $Q(\\g^{\\sigma})$ is also\n$Ad(g)$-invariant.\n\\end{definition}\n\nThe following two propositions are evident.\n\\begin{proposition} \\label{TrivReg}\nLet $(G,H,\\theta)$ be symmetric pair. Suppose that any $g \\in\nG(F)$ satisfying $\\sigma(g)g \\in Z(G(F))$ lies in $Z(G(F))H(F)$.\nThen $(G,H,\\theta)$ is regular. In particular if the normalizer of\n$H(F)$ lies inside $Z(G(F))H(F)$ then $(G,H,\\theta)$ is regular.\n\\end{proposition}\n\n\\begin{proposition} $ $\\\\\n(i) Any weakly linearly tame pair is regular. \\\\\n(ii) A product of regular pairs is regular (see \\cite{AG2},\nProposition 7.4.4).\n\\end{proposition}\n\nIn section \\ref{GradRepDef} we will introduce terminology that will\nhelp to verify the condition of Proposition \\ref{SpecCrit}.\n\nThe importance of the notion of regular pair is demonstrated by\nthe following theorem.\n\n\\begin{theorem} \\label{GoodHerRegGK}\nLet $(G,H,\\theta)$ be a good symmetric pair such that all its\ndescendants (including itself) are regular. Then it is a GK pair.\n\\end{theorem}\nFor proof see \\cite{AG2}, Theorem 7.4.5.\n\\section{Main Results} \\label{MainRes}\nHere we formulate the main results of the paper and explain how\nthey follow from the rest of the paper.\n\n\\setcounter{lemma}{0}\n\n\\begin{definition}\nA quadratic space is a linear space with a fixed non-degenerate\nquadratic form.\n\nLet $F'$ be an extension of $F$ and $V$ be a quadratic space over\nit. We denote by $O(V)$ the canonical algebraic group such that\nits $F$-points form the group of orthogonal transformations of\n$V$.\n\\end{definition}\n\n\n\\begin{definition}\nLet $D$ be a field with an involution $\\tau$. A hermitian space\nover $(D,\\tau)$ is a linear space over $D$ with a fixed\nnon-degenerate hermitian form.\n\nSuppose that $D$ is an extension of $F$ and $F \\subset D^{\\tau}$.\nLet $V$ be a hermitian space over $(D,\\tau)$. We denote by $U(V)$\nthe canonical algebraic group such that its $F$-points form the\ngroup of unitary transformations of $V$.\n\\end{definition}\n\n\\begin{definition}\nLet $G$ be a reductive group and $\\varepsilon \\in G$ be an element of\norder 2. We denote by $(G,G_{\\varepsilon})$ the symmetric pair defined by\nthe involution $x \\mapsto \\varepsilon x \\varepsilon$.\n\\end{definition}\n\nThe following lemma is straightforward.\n\\begin{lemma}\nLet $V$ be a quadratic space.\\\\\n(i) Let $\\varepsilon \\in GL(V)$ be an element of order 2. Then\n$GL(V)_{\\varepsilon} \\cong GL(V_1) \\times GL(V_2)$ for some decomposition\n$V=V_1 \\oplus V_2$.\\\\\n(ii) Let $\\varepsilon \\in O(V)$ be an element of order 2. Then\n$O(V)_{\\varepsilon} \\cong O(V_1) \\times O(V_2)$ for some orthogonal\ndecomposition $V=V_1 \\oplus V_2$.\\\\\n(iii) Let $V$ be a hermitian space.\\\\\n Let $\\varepsilon \\in U(V)$ be an element of order 2. Then\n$U(V)_{\\varepsilon} \\cong U(V_1) \\times U(V_2)$ for some orthogonal\ndecomposition $V=V_1 \\oplus V_2$.\n\\end{lemma}\n\n\\begin{theorem}\nLet $V$ be a quadratic space over $F$. 
Then all the descendants of\nthe pair $(O(V),O(V)_{\\varepsilon})$ are regular.\n\\end{theorem}\n\\begin{proof}\nBy Theorem \\ref{CompDesO_OtO} below, the descendants of the pair\n$(O(V),O(V)_{\\varepsilon})$ are products of pairs of the types\\\\\n(i) $(GL(W),O(W))$ for some quadratic space $W$ over some field\n$F'$ that extends $F$\\\\\n(ii) $(U(W_E) , O(W))$ for some quadratic space $W$ over some\nfield $F'$ that extends $F$, and some quadratic extension $E$ of\n$F'$. Here, $W_E:=W \\otimes _{F'}E$ is the extension of scalars\nwith the corresponding hermitian structure.\\\\\n(iii) $(O(W),O(W)_{\\varepsilon})$ for some quadratic space $W$ over $F$.\n\nThe pair (i) is regular by Theorem \\ref{GL_O} below. The pair (ii)\nis regular by subsection \\ref{U_reg} below. The pair (iii) is\nregular by subsection \\ref{O_OtO} below.\n\\end{proof}\n\\begin{corollary}\nSuppose that $F={\\mathbb C}$ and Let $V$ be a quadratic space over it. Then\nthe pair $(O(V),O(V)_{\\varepsilon})$ satisfies GP1.\n\\end{corollary}\n\\begin{proof}\nThis pair is good by Proposition \\ref{GoodCrit} and all its\ndescendants are regular. Hence by Theorem \\ref{GoodHerRegGK} it is\na GK pair. Therefore by Theorem \\ref{DistCrit} it satisfies GP2.\nNow, by Proposition \\ref{GP2GP1}, it satisfies GP1.\n\\end{proof}\n\\begin{theorem}\nLet $D\/F$ be a quadratic extension and $\\tau \\in Gal(D\/F)$ be the\nnon-trivial element. Let $V$ be a hermitian space over $(D,\\tau)$.\nThen all the descendants of the pair $(U(V),U(V)_{\\varepsilon})$ are\nregular.\n\\end{theorem}\n\\begin{proof}\nBy theorem \\ref{CompDesU_UtU} below, the descendants of the pair\n$(U(V),U(V)_{\\varepsilon})$ are products of pairs of the types\\\\\n(a) $(G \\times G, \\Delta G)$ for some reductive group $G$.\\\\\n(b) $(GL(W),U(W))$ for some hermitian space $W$ over some\nextension $(D',\\tau')$ of $(D,\\tau)$\\\\\n(c) $(G_{E\/F},G)$ for some reductive group $G$ and some quadratic\nextension $E\/F$.\\\\\n(d) $(GL(W),GL(W)_{\\varepsilon})$ where $W$ is a linear space over $D$\nand $\\varepsilon \\in GL(W)$ is an element of order $\\leq 2$.\\\\\n(e) $(U(W),U(W)_{\\varepsilon})$ where $W$ is a hermitian space over\n$(D,\\tau)$.\n\nThe pairs (a) and (c) are regular by Theorem \\ref{2RegPairs}\nbelow. The pairs (b) and (e) are regular by subsection \\ref{U_reg}\nbelow. The pair (d) is regular by Theorem \\ref{RJR} below.\n\\end{proof}\n\\begin{theorem} \\label{Main_GL_O}\nLet $V$ be a quadratic space over $F$. Then all the descendants of\nthe pair $(GL(V),O(V))$ are weakly linearly tame. In particular,\nthis pair is tame.\n\\end{theorem}\n\\begin{proof}\nBy Theorem \\ref{CompDesGL_O} below, the descendants of the pair\n$(GL(V),O(V))$ are products of pairs of the type $(GL(W),O(W))$\nfor some quadratic space $W$ over some field $F'$ that extends\n$F$. By Theorem \\ref{GL_O} below, these pairs are weakly linearly\ntame. Now, the pair $(GL(V),O(V))$ is tame by Theorem\n\\ref{LinDes}.\n\\end{proof}\n\\begin{corollary}\nSuppose that $F={\\mathbb C}$ and Let $V$ be a quadratic space over it. Then\nthe pair $(GL(V),O(V))$ is GP1.\n\\end{corollary}\n\n\\begin{theorem}\nLet $D\/F$ be a quadratic extension and $\\tau \\in Gal(D\/F)$ be the\nnon-trivial element. Let $V$ be a hermitian space over $(D,\\tau)$.\nThen all the descendants of the pair $(GL(V),U(V))$ are weakly\nlinearly tame. 
In particular, this pair is tame.\n\\end{theorem}\n\\begin{proof}\nBy Theorem \\ref{CompDesGL_U} below, all the descendants of the\npair\n$(GL(V),U(V))$ are products of pairs of the types\\\\\n(i) $(GL(W) \\times GL(W), \\Delta GL(W))$ for some linear space $W$\nover some field $D'$ that extends $D$\\\\\n(ii) $(GL(W),U(W))$ for some hermitian space $W$ over some\n$(D',\\tau')$ that extends $(D,\\tau)$.\n\nThe pair (i) is weakly linearly tame by Theorem \\ref{2RegPairs}\nbelow and the pair (ii) is weakly linearly tame by subsection\n\\ref{U_reg} below. Now, the pair $(GL(V),U(V))$ is tame by Theorem\n\\ref{LinDes}.\n\\end{proof}\n\\begin{theorem}\nLet $V$ be a quadratic space over $F$. Let $D\/F$ be a quadratic\nextension and $\\tau \\in Gal(D\/F)$ be the non-trivial element. Let\n$V_D:=V \\otimes_F D$ be its extension of scalars with the\ncorresponding hermitian structure. Then all the descendants of the\npair $(U(V_D),O(V))$ are weakly linearly tame. In particular, this\npair is tame.\n\\end{theorem}\n\\begin{proof}\n\nBy Theorem \\ref{CompDesU_O} below, all the descendants of the pair\n$(U(V_D),O(V))$ are products\nof pairs of the types\\\\\n(i) $(GL(W),O(W))$ for some quadratic space $W$ over some field\n$F'$ that extends $F$.\\\\\n(ii) $(U(W_{D'}),O(W))$ for some extension $(D',\\tau')$ of\n$(D,\\tau)$ and some quadratic space $W$ over $D'^{\\tau'}$.\n\nThe pair (i) is weakly linearly tame by Theorem \\ref{GL_O} below\nand the pair (ii) is weakly linearly tame by subsection\n\\ref{U_reg} below. Now, the pair $(U(V_D),O(V))$ is tame by Theorem\n\\ref{LinDes}.\n\\end{proof}\n\n\\section{${\\mathbb Z}\/2{\\mathbb Z}$ graded representations of $sl_2$ and their\ndefects} \\label{GradRepDef}\n\nIn this section we will introduce terminology that will help to\nverify the condition of Proposition \\ref{SpecCrit}.\n\n\\subsection{Graded representations of $sl_2$}\n\n\\begin{definition}\nWe fix a standard basis $e,h,f$ of $sl_2(F)$. We fix a grading on\n$sl_2(F)$ given by $h \\in sl_2(F)_0$ and $e,f \\in sl_2(F)_1$. A\n\\textbf{graded representation of $sl_2$} is a representation of\n$sl_2$ on a graded vector space $V=V_0 \\oplus V_1$ such that\n$sl_2(F)_i(V_j) \\subset V_{i+j}$ where $i,j \\in {\\mathbb Z}\/2{\\mathbb Z}$.\n\\end{definition}\n\nThe following lemma is standard.\n\\begin{lemma}$ $\\\\\n(i) Every graded representation of $sl_2$ which is\nirreducible as a graded representation is irreducible just as a representation.\\\\\n(ii) Every irreducible representation $V$ of $sl_2$ admits exactly\ntwo gradings. In one, the highest weight vector lies in $V_0$ and in\nthe other in $V_1$.\n\\end{lemma}\n\n\\begin{definition}\nWe denote by $V_{\\lambda}^{w}$ the irreducible graded\nrepresentation of $sl_2$ with highest weight $\\lambda$ and highest\nweight vector of parity $p$ where $w = (-1)^p$.\n\\end{definition}\n\nThe following lemma is straightforward.\n\n\\begin{lemma} \\label{GradedTensors}\n\\setcounter{equation}{0}\n\\begin{align}\n& (V_{\\lambda}^w)^* = V_{\\lambda}^{w(-1)^{\\lambda}}\\\\\n& V_{\\lambda_1}^{w_1} \\otimes V_{\\lambda_2}^{w_2} =\n\\bigoplus_{i=0}^{\\min(\\lambda_1, \\lambda_2)}\nV_{\\lambda_1+\\lambda_2 - 2i}^{w_1w_2(-1)^i}\\\\\n& \\Lambda^2(V_{\\lambda}^w) = \\bigoplus_{i=0}^{\\lfloor\n\\frac{\\lambda-1}{2} \\rfloor} V_{2\\lambda-4i-2}^{-1}.\n\\end{align}\n\\end{lemma}\n\n\\subsection{Defects}\n\\begin{definition}\nLet $\\pi$ be a graded representation of $sl_2$. 
We define the\n\\textbf{defect} of $\\pi$ to be\n$$def(\\pi)=\\operatorname{Tr}(h|_{(\\pi^e)_0})-\\dim(\\pi_1)$$\n\\end{definition}\nThe following lemma is straightforward.\n\n\\begin{lemma} \\label{Defects}\n\n\\begin{align}\n&def(\\pi \\oplus \\tau)=def(\\pi) + def(\\tau)\\\\\n&def(V_{\\lambda}^w) =\\frac{1}{2}(\\lambda w + w( \\frac{1\n+(-1)^{\\lambda}}{2}) -1)= \\frac{1}{2} \\left\\{%\n\\begin{array}{ll}\n \\lambda w + w -1 & \\lambda \\text{ is even} \\\\\n \\lambda w -1 & \\lambda \\text{ is odd}\n\\end{array}%\n\\right.\n\\end{align}\n\n\\end{lemma}\n\\begin{definition}\nLet ${\\mathfrak{g}}$ be a $({\\mathbb Z}\/2{\\mathbb Z})$ graded Lie algebra. We say that ${\\mathfrak{g}}$\n\\textbf{is of negative defect} if for any graded homomorphism\n$\\pi: sl_2 \\to {\\mathfrak{g}}$, the defect of ${\\mathfrak{g}}$ with respect to the adjoint\naction of $sl_2$ is negative. \n\nWe say that ${\\mathfrak{g}}$\n\\textbf{is of negative normalized defect} if the semi-simple part of ${\\mathfrak{g}}$ (i.e., the quotient of ${\\mathfrak{g}}$ by its center) \nis of negative defect.\n\\end{definition}\n\\begin{remark}\nClearly, ${\\mathfrak{g}}$\nis of negative normalized defect if and only if for any graded homomorphism\n$\\pi: sl_2 \\to {\\mathfrak{g}}$, the defect of ${\\mathfrak{g}}$ with respect to the adjoint\naction of $sl_2$ is less than the dimension of the odd part of the center of ${\\mathfrak{g}}$.\n\\end{remark}\n\\begin{definition}\nWe say that a symmetric pair $(G,H,\\theta)$ \\textbf{is of negative \nnormalized defect} if the Lie algebra ${\\mathfrak{g}}$ with the grading defined by\n$\\theta$ is of negative \nnormalized defect.\n\\end{definition}\n\n\\begin{lemma}\nLet $(G,H,\\theta)$ be a symmetric pair. Assume that ${\\mathfrak{g}}$ is semi-simple. Then\n$Q(\\g^{\\sigma})=\\g^{\\sigma}.$\n\\end{lemma}\n\\begin{proof}$ $\\\\\nAssume the contrary: there exists $0 \\neq x \\in \\g^{\\sigma}$ such that\n$Hx=x$. Then $\\dim (CN_{Hx,x}^{\\g^{\\sigma}}) = \\dim \\g^{\\sigma}$, hence\n$CN_{Hx,x}^{\\g^{\\sigma}} = \\g^{\\sigma}$. On the other hand, $CN_{Hx,x}^{\\g^{\\sigma}} \\cong [{\\mathfrak{h}}, x]^\\bot=(\\g^{\\sigma})^x$ (here $(\\cdot)^\\bot$ means the orthogonal complement w.r.t. the Killing form). Therefore $\\g^{\\sigma}=(\\g^{\\sigma})^x$ and hence $x$\nlies in the center of ${\\mathfrak{g}}$, which is impossible.\n\\end{proof}\n\nProposition \\ref{SpecCrit} can now be rewritten in the following\nform.\n\\begin{theorem}\nA symmetric pair of negative \nnormalized defect is weakly linearly tame.\n\\end{theorem}\n\nEvidently, a product of pairs of negative normalized defect is again of\nnegative normalized defect.\n\nThe following lemma is straightforward.\n\\begin{lemma} \\label{NegDefAlg}\nLet $(G,H,\\theta)$ be a symmetric pair. Let $F'$ be any field\nextending $F$. Let $(G_{F'},H_{F'},\\theta)$ be the extension of\n$(G,H,\\theta)$ to $F'$. Suppose that it is of negative normalized defect (as\na pair over $F'$). Then $(G,H,\\theta)$ and\n$(G_{F'\/F},H_{F'\/F},\\theta)$ are of negative normalized defect (as pairs over\n$F$).\n\\end{lemma}\n\nIn \\cite{AG2} we proved the following (easy) proposition (see\n\\cite{AG2}, Lemma 7.6.6).\n\\begin{proposition} \\label{pi_pibarAG2}\nLet $\\pi$ be a representation of $sl_2$. Then\n$\\operatorname{Tr}(h|_{(\\pi^e)})<\\dim(\\pi)$.\n\\end{proposition}\nWe would like to reformulate it in terms of defect. For this we\nwill need the following notation.\n\\begin{notation} $ $\\\\\n(i) Let $\\pi$ be a representation of $sl_2$. 
We denote by ${\\overline{\\pi}}$\nthe representation of $sl_2$ on the same space defined by\n${\\overline{\\pi}}(e):=-\\pi(e)$, ${\\overline{\\pi}}(f):=-\\pi(f)$ and ${\\overline{\\pi}}(h):=\\pi(h)$. \\\\\n(ii) We define a grading on $\\pi \\oplus {\\overline{\\pi}}$ by the involution $s(v\n\\oplus w):= w \\oplus v$.\n\\end{notation}\nProposition \\ref{pi_pibarAG2} can be reformulated in the following\nway.\n\\begin{proposition} \\label{pi_pibar}\nLet $\\pi$ be a representation of $sl_2$. Then $def(\\pi \\oplus\n{\\overline{\\pi}})<0$.\n\\end{proposition}\nIn \\cite{AG2} we also deduced from this proposition the following\ntheorem (see \\cite{AG2}, 7.6.2).\n\\begin{theorem} \\label{2RegPairs}\nFor any reductive group $G$, the pairs $(G \\times G,\\Delta G)$ and\n$(G_{E\/F},G)$ are of negative normalized defect and hence weakly linearly\ntame. Here $\\Delta G$ is the diagonal in $G \\times G$.\n\\end{theorem}\nIn \n\\cite[\\S\\S 2.7]{AG2} we\nproved the following theorem.\n\\begin{theorem} \\label{RJR}\nThe pair $(GL(V \\oplus V),GL(V) \\times GL(V))$ is of negative\nnormalized defect and hence regular.\n\\end{theorem}\nNote that in the case $\\dim V \\neq \\dim W$ the pair $(GL(V \\oplus\nW),GL(V) \\times GL(W))$ is obviously regular by Proposition\n\\ref{TrivReg}.\n\n\n\\section{Proof of regularity and tameness} \\label{SecReg}\n\n\\subsection{The pair $(GL(V),O(V))$} $ $\\\\\\\\\nIn this subsection we prove that the pair $(GL(V),O(V))$ is weakly\nlinearly tame. \nFor $\\dim V \\leq 1$ it is obvious.\nHence it is enough to prove the following\ntheorem.\n\n\\begin{theorem} \\label{GL_O}\nLet $V$ be a quadratic space \nof dimension at least $2$. Then the pair $(GL(V),O(V))$ has\nnegative normalized defect.\n\\end{theorem}\nWe will need the following notation.\n\n\\begin{notation}\nLet $\\pi$ be a representation of $sl_2$. We define a grading on $\\pi\n\\otimes {\\overline{\\pi}}$ by the involution $s(v \\otimes w):= - w \\otimes v$.\n\\end{notation}\n\nTheorem \\ref{GL_O} immediately follows from the following one.\n\n\\begin{theorem}\nLet $\\pi$ be a representation of $sl_2$ of dimension at least 2. Then $def(\\pi \\otimes {\\overline{\\pi}})<-1.$\n\\end{theorem}\n\nThis theorem in turn follows from the following lemma.\n\n\\begin{lemma}\nLet $V_{\\lambda}$ and $V_{\\mu}$ be irreducible\nrepresentations of $sl_2$. Then\\\\\n(i) $def(V_{\\lambda} \\otimes \\overline{V_{\\lambda}}) = -(\\lambda +\n1)(\\frac{\\lambda}{2}+1)$.\\\\\n(ii) $def(V_{\\lambda} \\otimes \\overline{V_{\\mu}} \\oplus V_{\\mu}\n\\otimes \\overline{V_{\\lambda}})<0$.\n\\end{lemma}\n\\begin{proof} $ $\\\\\n(i) Follows from the fact that $V_{\\lambda} \\otimes\n\\overline{V_{\\lambda}} = \\bigoplus_{i=0}^{\\lambda} V_{2\\lambda -\n2i}^{-1}$ and from Lemma \\ref{Defects}.\\\\\n(ii) Follows from Proposition \\ref{pi_pibar}.\n\\end{proof}\n\n\\subsection{The pair $(O(V_1 \\oplus V_2), O(V_1) \\times O(V_2))$} \\label{O_OtO}\n $ $\\\\\\\\\nIn this subsection we prove that the pair $(O(V_1 \\oplus V_2), O(V_1)\n\\times O(V_2))$ is regular. For that it is enough to prove the\nfollowing theorem.\n\n\\begin{theorem}\nLet $V_1$ and $V_2$ be quadratic spaces. Assume $\\dim V_1 = \\dim\nV_2$. Then the pair $(O(V_1 \\oplus V_2), O(V_1) \\times O(V_2))$\nhas negative normalized defect.\n\\end{theorem}\n\nThis theorem immediately follows from the following one.\n\n\\begin{theorem} \\label{Lambda2neg}\nLet $\\pi$ be a (non-zero) graded representation of $sl_2$ such\nthat $\\dim \\pi_0 = \\dim \\pi_1$ and $\\pi \\simeq \\pi^*$. 
Then\n$\\Lambda^2(\\pi)$ has negative defect.\n\\end{theorem}\n\nFor this theorem we will need the following lemma.\n\n\\begin{lemma}\nLet $V_{\\lambda_1}^{w_1}$ and $V_{\\lambda_2}^{w_2}$ be irreducible\ngraded\nrepresentations of $sl_2$. Then\\\\\n(i) $def(V_{\\lambda_1}^{w_1} \\otimes V_{\\lambda_2}^{w_2}) =$\n$$= -\\frac{1}{2} \\left\\{%\n\\begin{array}{lll}\n \\min(\\lambda_1,\\lambda_2)+1 - \\frac{w_1w_2}{2}(\\lambda_1+\\lambda_2+1+(-1)^{\\min(\\lambda_1,\\lambda_2)}(|\\lambda_1-\\lambda_2|-1)), & \\lambda_1 \\neq \\lambda_2 & \\mod 2; \\\\\n \\min(\\lambda_1,\\lambda_2)+1 - w_1w_2(\\max(\\lambda_1,\\lambda_2)+1), & \\lambda_1 \\equiv \\lambda_2 \\equiv 0 & \\mod 2; \\\\\n \\min(\\lambda_1,\\lambda_2)+1 - w_1w_2(\\min(\\lambda_1,\\lambda_2)+1), & \\lambda_1 \\equiv \\lambda_2 \\equiv 1 & \\mod 2;\n\\end{array}%\n\\right.$$\n(ii) $def(\\Lambda^2(V_{\\lambda_1}^{w_1}))= -\\frac{\\lambda^2}{4} -\n\\frac{\\lambda}{2} - \\frac{1 + (-1)^{\\lambda+1}}{8}$\n\\end{lemma}\n\\begin{proof}\nThis lemma follows by straightforward computations from Lemmas\n\\ref{GradedTensors} and \\ref{Defects}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{Lambda2neg}]\nSince $\\pi \\simeq \\pi^*$, $\\pi$ can be decomposed to a direct sum\nof irreducible graded representations in the following way\n$$ \\pi = (\\bigoplus_{i=1}^l V_{\\lambda_i}^{1}) \\oplus\n(\\bigoplus_{j=1}^m V_{\\mu_j}^{-1}) \\oplus (\\bigoplus_{k=1}^n\nV_{\\nu_k}^{1} \\oplus V_{\\nu_k}^{-1}).$$\n\nHere, all $\\lambda_i$ and $\\mu_j$ are even and $\\nu_k$ are odd.\nSince $\\dim \\pi_0 = \\dim \\pi_1$, $l=m$.\n\nBy the last lemma, $def(V_{\\lambda_i}^1 \\otimes (V_{\\nu_k}^{1}\n\\oplus V_{\\nu_k}^{-1})) = -(\\min(\\lambda_1,\\lambda_2)+1)<0$.\nSimilarly, $def(V_{\\mu_j}^{-1} \\otimes (V_{\\nu_k}^{1} \\oplus\nV_{\\nu_k}^{-1}))<0$. Also, $def((V_{\\nu_{k_1}}^{1} \\oplus\nV_{\\nu_{k_1}}^{-1}) \\otimes (V_{\\nu_{k_2}}^{1} \\oplus\nV_{\\nu_{k_2}}^{-1}))<0$ and $def(\\Lambda^2(V_{\\lambda}^w)) \\leq 0$\nfor all $\\lambda$ and $w$.\n\nHence if $l=0$ we are done. Otherwise we can assume $n=0$. Now,\n\n\\begin{multline*}\ndef(\\Lambda^2(\\pi)) \\leq \\sum_{1 \\leq i}[d]\\ar@{->}[dl]\\ar@{->}[ddr]\\ar@{->}[ddrr] & &\n\\\\\n \\framebox{\\parbox{64pt}{$(GL(V),U(V))$}}\\ar@{->}[d] & \\framebox{\\parbox{80pt}{$\\,\\,\\,\\, (GL(V),GL(V)_{\\varepsilon})$}}\\ar@{->}[d]\\ar@{->}[dl] & &\\\\\n\\framebox{\\parbox{115pt}{$(GL(V)\\times GL(V), \\Delta GL(V))$}} &\n\\framebox{\\parbox{80pt}{$(GL(V)_{E\/F},GL(V))$}} &\n\\framebox{\\parbox{67pt}{$(U(V)_{E\/F}, U(V))$}} &\n\\framebox{\\parbox{92pt} {$(U(V) \\times U(V), \\Delta U(V))$}}}\n$ $\\\\\n\\text{Here }V\\text{ is a linear or hermitian space over some\nfinite field extension of }F \\text{ and }E \\text{ is} \\text{some\nquadratic extension of } F. }}$$\n\\framebox{\\parbox{475pt}{\n$$ \\xymatrix{ \\framebox{\\parbox{63pt}{$(O(V),O(V)_{\\varepsilon})$}}\\ar@{->}[d]\\\\\n \\framebox{\\parbox{63pt}{$(U(V_E),O(V))$}}\\ar@{->}[d]\\\\\n\\framebox{\\parbox{63pt}{$(GL(V), O(V))$}}}$$\n$ $\\\\\n\\text{Here }V\\text{ is a quadratic space over some finite field\nextension of }F \\text{ and }E \\text{ is} \\text{some quadratic\nextension of } F. 
}}}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nBimanual gestures are central to everyday life, and constitute a fundamental ground for the study of basic principles of human behaviour.\nTraditionally, the study of bimanual gestures in humans focus on very simple motions involving fingers and hands, and including coordination, symmetry, in-phase and anti-phase behaviours.\nThese studies are aimed at understanding the dynamics associated with bimanual movements and target aspects of motor control, as exemplified by the Haken-Kelso-Bunz (HKB) model for self-organisation in human movement pattern formation \\cite{Kelso1984}\\cite{Hakenetal1985}.\n\nIn this paper, we are interested in determining how the study of bimanual gestures can lead to automated systems for their detection and classification in unconstrained, everyday environments.\nIn the context of assistive systems for fragile populations, including elderly, people with disabilities and other people with special needs, the need arises to provide caregivers, medical staff or simply relatives with a tool able to assess the ability of assisted people to perform bimanual gestures in their \\textit{natural} environment. \nSuch approach is in line with the \\textit{ageing in place} paradigm, a recent healthcare position which acknowledges and focuses on the role that a person's surroundings (the home, the neighbourhood) play for his well-being in older ages \\cite{Wiles11}. \nA familiar environment brings about a sense of security, independence and autonomy which has a positive impact on routines and activities, and ultimately on quality of life.\n\nThere is a big gap between clinical studies involving the coordination of finger movements and the recognition of such general-purpose bimanual gestures as \\textit{opening and closing curtains}, \\textit{sweeping the floor}, or \\textit{filling a cup with water}. \nHowever, the first step to take is to determine how current understanding of bimanual movements and their representation in the brain can lead to better detection and classification systems in real-world environments.\nThree factors must be considered when designing such a system.\n\n\\textit{Factor 1. It is debated whether bimanual gestures are controlled in intrinsic or extrinsic coordinates, or rather multiple coordination strategies co-occur}. \n\nBimanual movements tend to motion symmetry and stabilisation \\cite{Kelso1984}.\nThis has been typically explained using co-activation of homologous muscles in neuronal motor structures, due to bilateral cross-talk, suggesting that bimanual coordination is mostly done using intrinsic (i.e., proprioceptive) coordinates.\nMechsner \\textit{et al} suggest that, instead, such a coordination is due to spatial, perceptual symmetry only, i.e., using extrinsic (i.e., exteroceptive) visual coordinates \\cite{Mechsneretal2001}. 
\nIf this were true, there would be no need to map visual representations to motor representations (and \\textit{viceversa}), and voluntary movements could be organised on the basis of perceptual goals.\nThe role of different coordinates and their interplay for bimanual coordination mechanisms has been studied by Sakurada and colleagues \\cite{Sakuradaetal2015}.\nStarting from studies relating temporal and spatial couplings in bilateral motions, including the adaptation exhibited by two hands having to perform motions of different speed (i.e., the fastest becoming slightly slower and viceversa) \\cite{Heueretal1998}, and the fact that the movement of a non-dominant hand is likely to be assimilated by the dominant one \\cite{Byblowetal2000}, they demonstrate a relatively stronger contribution of intrinsic components in bimanual coordination, although both components are flexibly regulated according to the specific bimanual task.\nFurthermore, they argue that the central nervous system regulates bilateral coordination at different levels, as hypothesised by Swinnen and Wenderoth \\cite{SwinnenandWenderoth2004}.\nThe importance of both intrinsic and extrinsic coordinates seems to be confirmed by recent studies in interpersonal coordination \\cite{Kodamaetal2015}.\nIt is suggested that coordinated motion is informed by a full perception-action coupling, including visual and haptic sensorimotor loops, which propagates to the neuromuscular system.\n\nWe derive two requirements for our analysis:\n\\begin{itemize}\n\\item[$R_1$)] we must consider models \\textit{agnostic} with respect to an explicit coordination at the motor level between the two hands\/arms;\n\\item[$R_2$)] classification techniques must be robust to variation in speed, both for the bimanual gesture as a whole and for the single hand\/arm. \n\\end{itemize} \n\n\\textit{Factor 2. 
Ageing affects the way we move, and therefore coordination in bimanual gestures varies over time}.\n\nAccording to the \\textit{dedifferentiation} paradigm, ageing is now considered a parallel and distributed process occurring at various levels in the human's body.\nThe dedifferentiation can be defined as ``the process by which structures, mechanisms of behaviour that were specialised for a given function lose their specialisation and become simplified, less distinct or common to different functions'' \\cite{BaltesandLindenberger1997}.\nAs a consequence, ageing affects not only individual body \\textit{subsystems} (i.e., the muscular system or the brain), but also their interactions.\nIt is argued by Sleimen-Malkoun and colleagues that such a process can lead to common and intertwined causes for \\textit{cognitive ageing}, i.e., a general slowing down of information processing, including the information related to procedural memory and -- therefore -- movement and coordination \\cite{SleimenMalkounetal2014}.\nIt is posited that the ageing brain undergoes anatomical and physiological changes, for the reorganising activation patterns between neural circuits.\nAs far as motor task complexity is concerned, a generalised increased activation of brain areas is even more evident, which reflects a greater involvement of processes related to executive control.\n\nAlso in this case, we derive an important requirement:\n\\begin{itemize}\n\\item[$R_3$)] we must consider models which can be adapted over time and which follow the evolution of the musculoskeletal system, at least implicitly, thus requiring the use of forms of machine learning techniques.\n\\end{itemize} \n\n\\textit{Factor 3. Different mental representations of sensorimotor loops and action, involving discrete and continuous organisation principles, are under debate}.\n\nBeside the models aimed at representing bimanual gestures assuming a motor control framework, much work has been carried out in the past few years to devise building blocks for mental action representation \\cite{Kelso1984}\\cite{Hakenetal1985}\\cite{Mechsneretal2001}\\cite{SwinnenandWenderoth2004}\\cite{Sakuradaetal2015}\\cite{Kodamaetal2015}.\nAssuming a goal-directed cognitive perspective, it has been shown how movements can be represented as a serial and functional combination of goal-related body postures, or goal postures (i.e., key frames), as well as their transitional states.\nFurthermore, it has been posited that movements can be expressed as incremental changes between goal postures, which reduces the amount of effortful attention needed for their execution \\cite{Rosenbaumetal2007}.\nOn these premises, Basic Action Concepts (BACs) have been proposed as building blocks of mental action representations.\nBACs represent chucked body postures related to common functions to realise goal-directed actions.\nSchack and colleagues posit that complex (including bimanual) actions are mentally represented as a combination of executed action and intended or observed effects \\cite{Schacketal2014}.\nFurthermore, they argue that the map linking motion and perceptual effects is bi-directional and stored hierarchically in long-term memory, in structures resembling \\textit{dendograms} \\cite{Schack2012}.\nThis is a specific case of what has been defined by Bernstein as the \\textit{degrees of freedom problem} \\cite{Bernstein1967}.\nThe problem is related to how the various parts of the motor system can become harnessed so as to generate coordinated behaviour when needed.\nAs 
Bernstein theorised, a key role is played by muscular-articular links (i.e., synergies) in constraining how many degrees of freedom lead to dexterous behaviour.\nHarrison and Stergiou argue that dexterity and motion robustness are enabled by multi-functional (degenerate) body parts able to assume context-dependent roles.\nAs a consequence, task-specific human-environment interactions can flexibly generate adaptable motor solutions.\n\nWe derive two requirements:\n\\begin{itemize}\n\\item[$R_4$)] although motion models are intrinsically continuous, we need to derive a discrete representation able to provide action labels which, in principle, can lead to more complex organisations;\n\\item[$R_5$)] models must capture dexterity in everyday environments and robustness to different executions of the same gesture, which leads to models obtained by human demonstration.\n\\end{itemize}\n\nOn the basis of these requirements, we propose a bimanual wearable system able to detect and classify bimanual gestures using the inertial information provided by two wrist-mounted smartwatches.\nThe system builds up and significantly extends previous work \\cite{Bruno12}, and adheres to the \\textit{wearable sensing} paradigm, which envisions the use of sensors located on a person's body, either with wearable devices such as smartwatches, or with purposely engineered articles of clothing \\cite{Lara13}, to determine a number of important parameters, in our case motion.\nSince sensors are physically carried around, the monitoring activity can virtually occur in any place and it is usually focused on the detection of movements and gestures.\n\nThe contribution is four-fold: (i) an analysis of two procedures for modelling bimanual gestures, respectively explicitly and implicitly taking the correlation between the two hands\/arms into account; (ii) an analysis of two procedures for the classification of run-time data, respectively relying on the probability measure and the Mahalanobis distance to compute the similarity between run-time data and previously stored models of bimanual gestures; (iii) a performance assessment of the developed techniques with the standard statistical metrics of accuracy, precision and recall over the collected dataset, as well as under real-life conditions; (iv) a dataset of $60$ recordings of five bimanual gestures, performed by ten volunteers, to support reproducible research.\n\nThe paper is structured as follows.\nSection \\ref{sec:related_work} describes the theoretical background of the proposed modelling and recognition procedures, as well as related work, in view of the requirements outlined above.\nSection \\ref{sec:system_architecture} provides a thorough description of the system's architecture and insights on the five bimanual gestures considered for the analysis; the performance of such system are presented and discussed in Section \\ref{sec:experimental_evaluation}.\nConclusions follow.\n\n\\section{Related Work}\n\\label{sec:related_work}\n\nWearable systems for the automatic recognition of human gestures and full-body movements typically rely on inertial information.\nAccelerometers prove to be the most informative sensor for the task \\cite{Lester05}.\nTo comply with end users' constraints related to the impact of the monitoring system on their appearance and freedom of motion, most solutions adopt a single sensing device, either located at the waist \\cite{Mathie04} or, as it is becoming more and more common, at the wrist \\cite{Dietrich14}.\n\nDue to the similarities in 
the input data and in the operating conditions, most systems adopt a similar architecture \\cite{Lara13}, sketched in Figure \\ref{fig:system_architecture}.\nThe architecture identifies two stages, namely a training phase (on the left hand side in the Figure) and a testing phase (on the right hand side).\nThe \\textit{training} phase, typically executed off-line without strict computational constraints, is devoted to the creation of a compact representation of a gesture\/movement of interest, on the basis of an informative set of examples.\nThis also complies with requirements $R_3$ and $R_5$ above.\nThe \\textit{testing} phase, which may be subject to real-time and computational constraints, is responsible for the analysis of an input data stream to detect the gesture, among the modelled ones, which more closely matches it, if any, and label it accordingly.\nPlease note that the word ``testing'' is used here with respect to the data stream to analyse, with no reference to the stage of development of the monitoring system.\nSpecifically, we denote with the term ``validation'' the development stage in which we assess the performance of the system, and with the term ``deployment'' the stage in which the system is actually adopted by end users.\nDuring validation, the testing phase executes an off-line analysis of labelled gesture recordings, while during deployment the testing phase executes an on-line analysis of a continuous data stream, in unsupervised conditions.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=11cm]{generic_architecture}\n\\caption{The typical architecture of wearable sensing systems for the recognition of human gestures. The left hand side lists the tasks of the training phase, while the right hand side lists the tasks executed during the testing phase.}\n\\label{fig:system_architecture}\n\\end{figure}\n\nDuring the training phase (see Figure \\ref{fig:system_architecture} on the left hand side), it is first necessary to acquire and build the training set of measured attributes for the gestures of interest.\nTwo approaches are possible.\nThe \\textit{specialised} approach envisions the creation of a training set exclusively composed of gesture recordings performed by the person to be monitored during the deployment stage.\nThis approach maximises the recognition accuracy at the expenses of a long setup for each new installation.\nHowever, it enforces requirements $R_3$ and $R_5$, in that it allows someone to periodically retrain the system if necessary.\nConversely, the \\textit{generalised} approach envisions the creation of a training set composed of a large number of gesture recordings, performed by a number of volunteers (not necessarily including the person to monitor).\nUsing gestures provided by different individuals maximises the likelihood that the model is able to capture a much varied dexterity, as posited by $R_5$. 
\nThis approach, albeit more prone to errors, greatly reduces the setup costs and is to be preferred in the case of Ambient Assisted Living applications, in which the perceived system's ease of use is crucial for its success \\cite{Bruno12, Bulling14}.\n\nOnce the training set is available, it is typically filtered for noise reduction and\/or formatted for later processing (\\textit{data pre-processing} stage).\nThen, the purpose of the \\textit{feature extraction} procedure is to determine relevant information (in the form of features) from raw signals.\nFeatures are expected to (i) enhance the differences between gestures while being invariant when extracted from data patterns corresponding to the same gesture, (ii) lead to a compact representation of the gesture and (iii) require limited computational time and resources for the extraction and processing, since these operations are subject to real-time constraints during deployment.\nLiterature discriminates between \\textit{statistical} features, extracted using methods such as the Fourier and the Wavelet transform on the basis of quantitative characteristics of the data, and \\textit{structural} features, aiming at enhancing the interrelationship among data \\cite{Lara13}.\nMost human activity recognition systems based on inertial measurements make use of statistical features, usually defined in the time- or frequency-domain \\cite{Krassnig10}.\nAlternative feature extraction methods include the use of Principal Component Analysis \\cite{Mashita12} and autoregressive models \\cite{Lee11}.\nWith respect to time-domain features exclusively, which minimise the computational load introduced by the feature extraction procedure, gravity and body acceleration are among the most commonly adopted \\cite{Karantonis06, Krassnig10, Bruno12}.\nDiscriminating between gravity and body acceleration is a non-trivial operation, for the two features overlap both in the time and frequency domains.\nTwo approaches are typically adopted to separate them.\nThe former exploits additional sensors, such as gyroscopes \\cite{Chul09} or magnetometers \\cite{Bonnet07}, to compute the orientation or attitude of another body part, usually the torso.\nThe latter exploits the known features of gravity and uses either low-pass filters to isolate the gravitational component \\cite{Karantonis06, Bruno12}, or high-pass filters to isolate the body acceleration components \\cite{Sharma08}.\n\nOnce gravity and body acceleration components are isolated, the need arises to model them as features.\nSix different 1-dimensional sampled signals are available, i.e., the three $g_x, g_y, g_z$ gravity and the three $b_x, b_y, b_z$ body acceleration components along the $x$, $y$ and $z$ axes.\nAgain, two possibilities are discussed in the Literature.\nThe first is to assume the signals to be pairwise uncorrelated, which yields six separate 2-dimensional features, each feature being composed of timing information and the corresponding signal value on a given axis, i.e., $(t, g_i)$ and $(t, b_i)$, where $i \\in \\{x, y, z\\}$.\nThe second is to assume the $x$, $y$ and $z$ components of gravity and body acceleration to be correlated, which yields two separate 4-dimensional features, i.e., $(t, g_x, g_y, g_z)$, $(t, b_x, b_y, b_z)$, each feature being composed of timing information and the corresponding signal values on all axes.\nThe explicit use of correlation among tri-axial acceleration data has been proved to lead to better results in terms of classification rate and computational 
time \\cite{Cho08, Krassnig10, Bruno12}.\nIt is noteworthy that, in case of bimanual gestures, it is possible to explicitly model the correlation among inertial data originating from the two hands\/arms, or to consider them as separate signals.\nIn this way, we can comply with requirements $R_1$ and (in part) $R_2$ above. \n\nFinally, the \\textit{modelling} procedure is devoted to the creation of a compact and informative representation of the considered gestures in terms of available sensory data.\nTwo classes of approaches have been traditionally pursued.\nIn \\textit{logic-based} solutions each gesture to monitor and recognise is encoded through sound and well-defined rules, which are based on ranges of admissible values for a set of relevant parameters.\nRecognition is carried out by analysing run-time sensory values to progressively converge towards the encoded gesture more closely matching run-time data.\nDecision trees, which allow for a fast and simple classification procedure, are the most adopted solution in logic-based approaches \\cite{Lee02, Mathie04, Karantonis06, Krassnig10}.\n\\textit{Probability-based} solutions assume instead each gesture to be represented by a model encoding relevant moments of the training set, and to be identified using non ambiguous labels.\nIn this case, recognition is typically performed by comparing run-time sensory data with the stored models through probabilistic distance measures.\nCommonly adopted techniques include Neural Networks \\cite{Krassnig10}, Hidden Markov Models \\cite{Minnen05, OlguinOlguin06, Choudhury08} and Gaussian Mixture Models \\cite{Bruno12}.\nIn our work, we exploit probability-based models to comply with requirement $R_4$.\n\nDuring the testing phase (see Figure \\ref{fig:system_architecture} on the right hand side), analogously to what happens in the training phase, a number of steps are sequentially executed.\nThe feature extraction step executes the same algorithms of the training phase. Once the testing stream has been processed (typically focusing on a time window), it is possible to evaluate its features against the previously trained models (\\textit{recognition}), generating a predicted label.\nIn the testing phase, we exploit specific distance metrics relating the stored models with the run-time data stream. 
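To make the structure of this testing phase concrete, the following minimal Python sketch (an illustration under simplifying assumptions, not the implementation adopted by any of the cited systems) slides a fixed-length window over the two synchronised wrist streams and labels each window with the best-matching model, if any. The callables \\texttt{feature\\_fn} and \\texttt{score\\_fn} are placeholders for the feature extraction procedure and for the model-to-window similarity measure, which are detailed in the following sections.

\\begin{verbatim}
def classify_stream(stream_l, stream_r, models, window_len,
                    feature_fn, score_fn, threshold):
    """Sliding-window recognition loop (illustrative sketch).

    stream_l, stream_r: synchronised left/right wrist acceleration streams
    models:      dict mapping a gesture label to its trained model
    feature_fn:  extracts the features of a window (e.g., gravity and
                 body acceleration components)
    score_fn:    similarity between a model and the window features,
                 treated here as a distance (lower means more similar)
    threshold:   maximum distance for a window to be labelled
    """
    labels = []
    for start in range(len(stream_l) - window_len + 1):
        window = (stream_l[start:start + window_len],
                  stream_r[start:start + window_len])
        feats = feature_fn(*window)
        scores = {name: score_fn(model, feats)
                  for name, model in models.items()}
        best = min(scores, key=scores.get)
        labels.append(best if scores[best] < threshold else None)
    return labels
\\end{verbatim}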
\nIn this way, we can account for requirement $R_2$.\n\nMost wearable sensing systems based on a single sensing point (e.g., the right wrist) focus on the recognition of gestures which are either one-handed, e.g., \\textit{bringing a cigarette to the lips to smoke} \\cite{Dietrich14}, or, albeit involving both hands, such as \\textit{cutting meat with fork and knife} \\cite{Bruno13} or even the full-body, e.g., \\textit{climbing stairs} \\cite{Bruno12}, correspond to a unique and generalised pattern at the considered sensing point.\nThe presented work relaxes this assumption, by evaluating a wearable sensing system based on two sensing points (the left and right wrists) which allows for the modelling and recognition of \\textit{generic} bimanual gestures.\n\n\\section{System Architecture}\n\\label{sec:system_architecture}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=10.8cm]{modelling_architecture}\n\\caption{System architecture (training phase).}\n\\label{fig:modelling_architecture}\n\\end{figure}\nFigure \\ref{fig:modelling_architecture} shows a schematic view of the training phase of the proposed system, while Figure \\ref{fig:testing_architecture} details the operations performed during the testing phase.\nThe blocks devoted to \\textit{data synchronisation} and \\textit{data pre-processing}, as well as the \\textit{feature extraction} block, are the same in the two phases.\nWe consider two approaches for the modelling stage: (i) \\textit{explicit} modelling of the correlation of the two hands ($2\\times7D$ approach, see Figure \\ref{fig:modelling_architecture} on the left hand side), presupposing the stress on intrinsic coordinates in motor control studies, and (ii) \\textit{implicit} modelling of the correlation ($4\\times4D$ approach, see Figure \\ref{fig:modelling_architecture} on the right hand side), which assumes the importance of extrinsic coordinates.\nWe also consider two approaches for the comparison of testing data with the available models, respectively based on the probability measure and the Mahalanobis distance.\nThe former takes into account the likelihood of a model \\textit{as a whole}, whereas the second weights more the contribution of body degeneracy in robustness and dexterity.\n\n\\subsection{Pre-processing \\& Feature extraction}\n\nThe proposed system relies on the inertial information collected by two tri-axial accelerometers, respectively located at a person's left and right wrists.\nTo properly manage the two data streams, they should be synchronised, and the \\textit{data synchronisation} procedure heavily depends on the choices made related to hardware solutions.\nAs it will be described in Section \\ref{sec:experimental_evaluation}, in the case of the devices we adopt, the synchronisation is guaranteed by the manufacturer.\n\nAll the trials of a gesture in the training set (i.e., all the couples of left- and right-wrist data streams associated with a single execution of the gesture) are initially synchronised with each other manually, so that the starting moment of the gesture is the same across all recordings, and trimmed to have equal length.\nThen, the \\textit{data pre-processing} stage filters each acceleration stream with a median filter of size $3$ to reduce noise.\n\nLet us assume that we have $M$ different bimanual gestures.\nFor each gesture $m$ to learn, where $m=1, \\ldots, M$, let us assume that the training set includes $S_m$ trials and $s$ is one of them. 
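Purely as an illustration of the pre-processing and feature extraction steps (the median filter above and the gravity/body acceleration separation specified below), the following Python sketch relies on SciPy's filter design utilities; the function names are illustrative and the exact filter realisation may differ from the one adopted in this work.

\\begin{verbatim}
import numpy as np
from scipy.signal import medfilt, cheby1, sosfiltfilt

FS = 40.0  # sampling frequency of the wrist-worn accelerometers (Hz)

def denoise(acc):
    """Median filter of size 3, applied independently to each axis of an
    (N, 3) tri-axial acceleration stream."""
    return np.column_stack([medfilt(acc[:, i], kernel_size=3)
                            for i in range(3)])

def split_gravity_body(acc, fs=FS):
    """Separate gravity and body acceleration with a low-pass Chebyshev I
    filter (5th order, 0.25 Hz cut-off, 0.001 dB passband ripple).
    Returns (gravity, body), both of shape (N, 3)."""
    sos = cheby1(N=5, rp=0.001, Wn=0.25 / (fs / 2.0),
                 btype='low', output='sos')
    gravity = sosfiltfilt(sos, acc, axis=0)
    return gravity, acc - gravity

def trial_features(t, acc_l, acc_r):
    """Build the four time-stamped features (gravity and body acceleration,
    for the left and right wrist) used by the implicit correlation
    approach; t is the vector of timestamps of the trial."""
    g_l, b_l = split_gravity_body(denoise(acc_l))
    g_r, b_r = split_gravity_body(denoise(acc_r))
    stamp = lambda x: np.column_stack([t, x])
    return stamp(g_l), stamp(b_l), stamp(g_r), stamp(b_r)
\\end{verbatim}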
After the pre-processing stage, all the $S_m$ trials are synchronised and truncated as to be composed of the same number $K_m$ of observations.\nA trial is defined as:\n\\begin{equation}\ns = \\{g_{l,k}, b_{l,k}, g_{r,k}, b_{r,k}\\} \\quad k = 1, \\ldots, K_m\n\\label{eq:4Dtrial}\n\\end{equation}\nwith:\n\\begin{equation}\n\\begin{split}\ng_{l,k}&=(t_k, g_{l,x,k}, g_{l,y,k}, g_{l,z,k}), \\\\\nb_{l,k}&=(t_k, b_{l,x,k}, b_{l,y,k}, b_{l,z,k}), \\\\\ng_{r,k}&=(t_k, g_{r,x,k}, g_{r,y,k}, g_{r,z,k}), \\\\\nb_{r,k}&=(t_k, b_{r,x,k}, b_{r,y,k}, b_{r,z,k}), \\\\\n\\end{split}\n\\label{eq:4Dfeatures}\n\\end{equation}\nwhere $l$ and $r$ denote, respectively, the acceleration streams provided by the sensing device on the left and on the right wrist, $g_{l,k}$ includes the time and $x, y$ and $z$ components of the gravity on the left acceleration stream, $b_{l,k}$ includes the time and $x, y$ and $z$ components of the body acceleration on the left acceleration stream, $g_{r,k}$ includes the time and $x, y$ and $z$ components of the gravity on the right acceleration stream and $b_{r,k}$ includes the time and $x, y$ and $z$ components of the body acceleration on the right acceleration stream.\nThe \\textit{feature extraction} stage separates the $l$ and $r$ tri-axial acceleration streams provided by the sensing devices into their gravity and body acceleration components \\cite{Bruno12}, by applying a low pass Chebyshev I $5^{\\circ}$ order filter ($F_{cut}=0.25 {Hz}, A_{pass}= 0.001 {dB}, A_{stop}= -100 {dB},F_{stop}= 2 {Hz}$).\n\n\\subsection{Modelling}\n\nThe goal of the \\textit{modelling} stage is to combine the $S_m$ trials in the training set to obtain a generalised version, i.e., a \\textit{model}, of gesture $m$, defined in terms of the two features of gravity and body acceleration.\nTwo approaches are possible.\n\nIn the \\textit{explicit} correlation modelling ($2\\times7D$ approach, see Figure \\ref{fig:modelling_architecture} on the left hand side), we merge the left and right components of each trial $s$ to create $7$-dimensional features, i.e.:\n\\begin{equation}\ns = \\{G_{k}, B_{k}\\} \\quad k = 1, \\ldots, K_m\n\\label{eq:7Dtrial}\n\\end{equation}\nwith:\n\\begin{equation}\n\\begin{split}\nG_{k} &= (t_k, g_{l,x,k}, g_{l,y,k}, g_{l,z,k}, g_{r,x,k}, g_{r,y,k}, g_{r,z,k}), \\\\\nB_{k} &= (t_k, b_{l,x,k}, b_{l,y,k}, b_{l,z,k}, b_{r,x,k}, b_{r,y,k}, b_{r,z,k}), \\\\\n\\end{split}\n\\label{eq:7Dfeatures}\n\\end{equation}\nand the model of gesture $m$ is then defined in terms of $G$ and $B$.\n\nConversely, in the \\textit{implicit} correlation modelling ($4\\times4D$ approach, see Figure \\ref{fig:modelling_architecture} on the right hand side), we keep the left and right hand streams independent, thus considering the four features defined in \\eqref{eq:4Dfeatures}.\nThe model of gesture $m$ is then defined in terms of $g_l, b_l, g_r$ and $b_r$.\n\nThe first approach corresponds to assuming that the motion of the two hands is \\textit{fully constrained} by the performed gesture, while the latter approach leaves to later stages the responsibility of correlating the two data streams.\nAt the same time, in the first case we assume the contribution of the left and right hands\/arms as correlated, whereas in the second case we do not pose such an assumption.\nAlbeit introducing an additional step, the possibility of tuning the correlation of the two data streams opens interesting scenarios for the recognition stage.\nConsider the gesture of rotating a tap's handle with the left hand, which can occur 
in a number of situations (for example, when filling a glass with tap water, or when washing a dish): by varying the importance given to this hand we can either have a more flexible system, which is able to recognise many situations in light of the common traits in the left hand stream, or a more specialised one, which is focused on one situation only and is able to filter out all the others in light of the differences in the right hand stream.\n\nThe modelling procedure in itself is the same for both approaches and, in particular, it relies on Gaussian Mixture Modelling (GMM) and Gaussian Mixture Regression (GMR) for the retrieval of the \\textit{expected} curve and covariance matrix of each considered feature, on the basis of a given training set. The procedure has been first introduced in the field of Human-Robot Interaction \\citep{Calinon10} and later used for the purposes of Human Activity Recognition with a wrist-placed inertial device for a single arm \\cite{Bruno12}. We point to the references for its thorough description.\n\nWe denote with $f$ the generic feature of interest, i.e., $f$ can either correspond to gravity $G$ or body acceleration $B$ in the $2\\times7D$ approach, or to one among $g_l, b_l, g_r$ and $b_r$ in the $4\\times4D$ approach.\nWe assume the following definitions.\n\\begin{itemize}\n\\item $f_{s,k}\\in\\mathbb{R}^n$ is the data point $k$ of feature $f$ of trial $s$, defined as:\n\\begin{equation}\nf_{s,k} = \\{f_{t,k}, f_{a,s,k}\\},\n\\label{eq:xiks}\n\\end{equation}\nwhere $f_{t,k}$ stores the time information and $f_{a,s,k}$ includes the acceleration components.\nThe dimension $n$ depends on the modelling approach.\n\\item $F^f\\in\\mathbb{R}^{n\\times O_m}$, with $O_m = S_mK_m$, is the ordered set of data points generating the feature curve $f$ for all the $S_m$ trials, defined as:\n\\begin{equation}\nF^f = \\{f_1, \\ldots, f_k, \\ldots, f_{O_m}\\},\n\\label{eq:Xixi}\n\\end{equation}\nwhere\n\\begin{equation}\nf_k = \\{f_{t,k}, f_{a,k}\\}\n\\label{eq:xik}\n\\end{equation}\nis a generic data point taken from the whole training set, i.e., by hiding the information about the trial $s$ to which it belongs.\n\\end{itemize}\n\nThe purpose of GMM+GMR is to build the expected version of all features of gesture $m$, i.e.:\n\\begin{equation}\n\\hat{F}^{f} = \\{\\hat{f}_1, \\ldots, \\hat{f}_k, \\ldots, \\hat{f}_{K_m}\\}, \n\\label{eq:hatXifeat}\n\\end{equation}\nwith:\n\\begin{equation}\n\\hat{f}_k = \\{f_{t,k},\\hat{f}_{a,k},\\hat{\\Sigma}_{a,k}\\}, \n\\label{eq:hatxir}\n\\end{equation}\nwhere $\\hat{f}_{a,k}$ is the conditional expectation of $f_{a,k}$ given $f_{t,k}$ and $\\hat{\\Sigma}_{a,k}$ is the conditional covariance of $f_{a,k}$ given $f_{t,k}$.\nThe model $\\hat{F}^m$ of gesture $m$ is then defined as the set of the feature models $\\hat{F}^{f}$.\nPlease note that the number of data points in the expected curve may not be necessarily the same as the number $K_m$ of data points in the trials of the training set.\nThe equality is imposed here for the clarity of the description.\n\n\\begin{figure}\n\\centering\n\\subfigure[Left hand]\n{\\includegraphics[width=12.1cm]{WO_left4x4HD}}\n\\subfigure[Right hand]\n{\\includegraphics[width=12.1cm]{WO_right4x4HD}}\n\\caption{Model of the gesture \\textit{open a wardrobe} in the implicit correlation approach ($4\\times4D$), which is defined in terms of: (i) left hand gravity component (a)-left, (ii) left hand body acceleration component (a)-right, (iii) right hand gravity component (b)-left, (iv) right hand body 
acceleration component (b)-right.}\n\\label{fig:WO_4times4D}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\subfigure[$l_x$, $l_y$, $l_z$ axes]\n{\\includegraphics[width=10.4cm]{GRAPH_OpenCloseCurtains_2x7_1}}\n\\subfigure[$r_x$, $r_y$, $r_z$ axes]\n{\\includegraphics[width=10.4cm]{GRAPH_OpenCloseCurtains_2x7_2}}\n\\caption{Model of the gesture \\textit{open and close curtains} in the explicit correlation approach ($2\\times7D$), which is defined in terms of: (i) gravity component (a,b)-left, (ii) body acceleration component (a,b)-right.}\n\\label{fig:OCC_2times7D}\n\\end{figure}\n\nFigure \\ref{fig:WO_4times4D} shows the $2D$ projections of the model of the gesture \\textit{open a wardrobe}, computed from the full dataset of $60$ recordings with the $4\\times4D$ approach.\nThe four modelled features are $g_l$ (a)-left, $b_l$ (a)-right, $g_r$ (b)-left, $b_r$ (b)-right.\nFigure \\ref{fig:OCC_2times7D} shows the $2D$ projections of the model of the gesture \\textit{open and close curtains}, computed from the full dataset of $60$ recordings with the $2\\times7D$ approach.\nThe two modelled features are $G$ (a,b)-left and $B$ (a,b)-right.\nIn both cases, the solid red line represents the projection of the expected curve $\\hat{f}_a$ on one time-acceleration space, while the pink area surrounding it represents the conditional covariance $\\hat{\\Sigma}_a$.\n\nThe modelling procedure requires in input the number of Gaussian functions to use, which varies both with the gestures and with the features.\nThe \\textit{modelling parameters estimation} stage implements a procedure based on the k-means clustering algorithm and the silhouette clustering quality metric \\cite{Rousseeuw87} for the estimation of the number of Gaussian functions to use and their initialisation \\cite{Bruno12}.\nOther choices are equally legitimate.\nWe again point to the references for the details of the procedure.\n\n\\subsection{Comparison}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=11.5cm]{testing_architecture}\n\\caption{System architecture (testing phase).}\n\\label{fig:testing_architecture}\n\\end{figure}\nAs it is shown in Figure \\ref{fig:testing_architecture}, the testing phase executes the same procedures for data synchronisation, data pre-processing (noise reduction) and feature extraction of the training phase.\nThen, in accordance with the chosen modelling approach, the features are either expressed in the form of \\eqref{eq:4Dfeatures} or \\eqref{eq:7Dfeatures}.\n\nThe recognition procedure is composed of a comparison stage, devoted to ranking the similarity between the testing data and each previously learned model, and a classification stage, responsible for the final labelling of the testing data on the basis of comparison results. 
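Before detailing the two comparison procedures, the GMM+GMR step of the previous subsection can be illustrated with the minimal Python sketch below. It assumes scikit-learn's \\texttt{GaussianMixture} and takes the number of components as given (in this work it is estimated with k-means and the silhouette metric); it is a simplified rendition of GMR rather than the actual implementation, and variable names are illustrative.

\\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def learn_feature_model(points, n_components, query_times):
    """GMM+GMR sketch for a single feature.

    points:       (O_m, n) array stacking all trials of the training set;
                  the first column is time, the others are accelerations.
    query_times:  time instants at which the expected curve is evaluated.
    Returns the expected accelerations and conditional covariances.
    """
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type='full').fit(points)
    means, covs, priors = gmm.means_, gmm.covariances_, gmm.weights_

    exp_curve, exp_sigma = [], []
    for t in query_times:
        h, mu_c, sigma_c = [], [], []
        for c in range(n_components):
            mu_t, mu_a = means[c, 0], means[c, 1:]
            s_tt, s_ta = covs[c, 0, 0], covs[c, 0, 1:]
            s_at, s_aa = covs[c, 1:, 0], covs[c, 1:, 1:]
            # conditional Gaussian of the acceleration given the time instant
            mu_c.append(mu_a + s_at / s_tt * (t - mu_t))
            sigma_c.append(s_aa - np.outer(s_at, s_ta) / s_tt)
            # responsibility of component c at this time instant
            h.append(priors[c] * np.exp(-0.5 * (t - mu_t) ** 2 / s_tt)
                     / np.sqrt(2.0 * np.pi * s_tt))
        h = np.array(h) / np.sum(h)
        exp_curve.append(sum(w * m for w, m in zip(h, mu_c)))
        # simple GMR covariance: weighted sum of conditional covariances
        # (richer formulations also account for the spread of the means)
        exp_sigma.append(sum(w * s for w, s in zip(h, sigma_c)))
    return np.array(exp_curve), np.array(exp_sigma)
\\end{verbatim}

The expected curve and covariance returned for each time instant play, for every feature, the role of $\\hat{f}_{a,k}$ and $\\hat{\\Sigma}_{a,k}$ in the comparison procedures detailed next.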
\n\nWe propose two \\textit{comparison} procedures.\nThe \\textit{distance}-based comparison computes the Mahalanobis distance between the testing features and each model features, while the \\textit{probability}-based comparison computes the likelihood of the occurrence of the testing features given each model features.\nIn both cases, the \\textit{classification} stage identifies the model with most prominent distance\/probability, if any, and labels the testing data accordingly.\nBoth comparison techniques are applied for the two considered modelling approaches, thus yielding four combinations of training and testing procedures.\nThe comparison of their performance is the topic of the experimental evaluation presented in Section \\ref{sec:experimental_evaluation}.\n\nLet us consider a moving horizon window of length $K_w$ on the testing streams and let us denote with $F^w$ the set of features $F^{w,f}$ extracted from the window in accordance with the chosen modelling approach.\n\nIn the distance-based approach, on the basis of the $M$ available models, we compute $M$ distances $d(\\hat{F}^m,F^w)$.\nThe Mahalanobis distance is a probabilistic distance measure used to compute the similarity between sets of random variables whose means and variances are known \\cite{Mahalanobis36}.\nThe Mahalanobis distance $d$ between a generic element $\\hat{f}_k \\in \\hat{F}^{f}$ defined by \\eqref{eq:hatxir} and a generic element $f_{k} \\in F^{w,f}$ defined in accordance with \\eqref{eq:xik}, is computed as:\n\\begin{equation}\nd(\\hat{f}_k,f_{k}) = \\sqrt{(\\hat{f}_{a,k}-f_{a,k})^T (\\hat{\\Sigma}_{a,k})^{-1}(\\hat{f}_{a,k}-f_{a,k})}.\n\\label{eq:dxiMcj}\n\\end{equation}\nThe accumulated distance between $\\hat{F}^f$ and $F^{w,f}$ is computed as:\n\\begin{equation}\nd(\\hat{F}^{f},F^{w,f}) = {\\sum_{k=1}^{K_m}}d(\\hat{f}_k,f_{k}),\n\\label{eq:dphixy}\n\\end{equation}\nthat is, by integrating the distance between each element in the run-time stream and its corresponding element in the model.\nThe overall distance $d(\\hat{F}^m,F^w)$ is computed as a weighted sum of all feature distances.\nIn our experiments we choose the weights to be equal for all features.\nThis approach weights more the precision associated with gesture production, and emphasises dexterity and robustness of bimanual motions.\n\nIn the probability-based approach, the probability of the window feature $F^{w,f}$ to be an occurrence of the model $\\hat{F}^{f}$ is computed as:\n\\begin{equation}\n\\begin{split}\np(\\hat{F}^{f},F^{w,f})&=\\mathcal{N}(f_k,\\hat{f}_{a,k},\\hat{\\Sigma}_{a,k}) \\quad\\quad\\quad \\forall k \\in 1...K_m\\\\\n&=\\frac{1}{\\sqrt{2\\pi^n|\\hat{\\Sigma}_{a,k}|}}e^{-\\frac{1}{2}(f_{a,k}-\\hat{f}_{a,k})^T(\\hat{\\Sigma}_{a,k})^{-1}(f_{a,k}-\\hat{f}_{a,k})}.\n\\end{split}\n\\label{eq:probability}\n\\end{equation}\n\nThe overall probability $p(\\hat{F}^m,F^w)$ is computed as a weighted sum of all feature probabilities, i.e., a \\textit{mixture}.\nWe again choose the weights to be equal for all features.\nWhen we use probabilities, we consider the gesture as a whole, and therefore we account for small variations in the gesture execution speed.\n\n\\section{Experimental Evaluation}\n\\label{sec:experimental_evaluation}\n\n\\subsection{Experimental Setup}\n\nIn all the experiments, we adopt two smartwatches LG G watch R W110 as sensing devices (Android Wear 1.0, CPU Quad-Core 1.2GHz Qualcomm Snapdragon 400, 4GB\/512MB RAM) equipped with a tri-axial accelerometer.\nThe sampling frequency is 40Hz.\nThe smartwatches 
automatically sync on startup with the smartphone they are paired with.\nBy pairing the two smartwatches with the same smartphone, we ensure that they are synchronised with each other with a precision satisfying the requirements of our application.\n\n\\begin{figure}\n\\centering\n{\\includegraphics[width=12cm]{curtains}}\n\\caption{Open and close curtains.}\n\\label{fig:OCC}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n{\\includegraphics[width=12cm]{sweeping}}\n\\caption{Sweep the floor.}\n\\label{fig:Swp}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n{\\includegraphics[width=12cm]{filling_cup}}\n\\caption{Fill a cup with tap water.}\n\\label{fig:FCoT}\n\\end{figure}\n\nWe consider five bimanual gestures:\n\\begin{itemize}\n\\item \\textit{Open and close curtains} (OCC).\nExtend and retract lateral-sliding curtains by pulling the connecting cords with an alternated up-and-down movement of the hands.\nKeep pulling until the curtain is fully closed or opened (see Figure \\ref{fig:OCC}).\n\\item \\textit{Sweep the floor} (SWP).\nPull a conventional broom from left to right, to sweep the floor.\nThree strokes are required (see Figure \\ref{fig:Swp}).\n\\item \\textit{Fill a cup with tap water} (FCOT).\nWith the right hand, take a cup from the sink and hold it below the tap, while rotating the tap's handle with the left hand to fill the cup with water (see Figure \\ref{fig:FCoT}).\n\\item \\textit{Take a bottle from the fridge} (RFF).\nWith the right hand, open the door of a small fridge, then, with the left one, take a bottle from it.\nLastly, close the fridge door with the right hand (see Figure \\ref{fig:RfF}).\n\\item \\textit{Open a wardrobe} (WO).\nOpen a two-doors small wardrobe moving the two hands concurrently (see Figure \\ref{fig:WO}).\n\\end{itemize}\n\n\\begin{figure}\n\\centering\n{\\includegraphics[width=12cm]{removing_from_fridge}}\n\\caption{Take a bottle from the fridge.}\n\\label{fig:RfF}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n{\\includegraphics[width=12cm]{wardrobe_opening}}\n\\caption{Open a wardrobe.}\n\\label{fig:WO}\n\\end{figure}\n\nIntuitively, the gestures \\textit{open and close curtains}, \\textit{sweep the floor} and \\textit{open a wardrobe} fully constrain the movement of the two hands, while the gestures \\textit{fill a cup with tap water} and \\textit{take a bottle from the fridge} allow for more freedom in their execution, as far as synchronisation between the arms is concerned.\nMoreover, the gestures \\textit{sweep the floor} and \\textit{open a wardrobe} imply that the two hands are moved concurrently, while the gestures \\textit{open and close curtains}, \\textit{fill a cup with tap water} and \\textit{take a bottle from the fridge} mostly require the two hands to be moved in sequence.\nLastly, the gestures \\textit{open and close curtains} and \\textit{sweep the floor} are recursive, i.e., composed of a number of repetitions of simpler movements, while the gestures \\textit{open a wardrobe}, \\textit{fill a cup with tap water} and \\textit{take a bottle from the fridge} are non-recursive.\n\nFigures \\ref{fig:OCC}-\\ref{fig:WO} show pictures of an execution of the gesture on the left hand side and the acceleration measured at the two wrists during the execution of the same gesture on the right hand side.\nThe impact of the aforementioned gesture characteristics (constrained\/not constrained, concurrent\/sequential, recursive\/not recursive) on the accelerations measured at the wrist, and therefore on the considered modelling and comparison 
procedures, is not known a priori: determining it is one of the goals of the experiments we conducted.

For each gesture, we collected a dataset of $60$ recordings from $10$ volunteers aged between $22$ and $30$ years.
The volunteers, wearing the smartwatches, were asked to autonomously start and stop the recordings and, once an experimenter had described the gesture, to perform it in a natural way.
All repetitions were supervised.
In addition to this dataset, we asked a number of volunteers to make some recordings in real-life conditions.
They were asked to clean a room and, amidst the other activities, to perform the five bimanual gestures of interest.
Volunteers could freely choose the timing and sequence of the gestures, and their choices were annotated by an experimenter.

\subsection{Performance Analysis}

We tested the four combinations of modelling and comparison procedures in terms of the standard statistical measures of accuracy, precision and recall, using k-fold cross-validation on the collected dataset.
For all gestures, we split the dataset into $6$ groups of $10$ recordings each and iteratively used $5$ groups as the training set and the remaining group for validation.

\begin{figure}
\centering
{\includegraphics[width=11cm]{confusion_4D_prob_crop.eps}}
\caption{Results of k-fold cross-validation for the $4\times4D$ modelling approach and probability-based comparison. The bottom row reports the \textit{recall} measures, while the rightmost column reports the \textit{precision} measures. The purple cell reports the overall \textit{accuracy} of the system.}
\label{fig:validation_4Dprob}
\end{figure}

The results of the k-fold cross-validation on the four combinations of modelling and comparison procedures are shown in Figures \ref{fig:validation_4Dprob}-\ref{fig:validation_7Ddist}.
In all figures, the first five rows/columns refer to the gestures \textit{open and close curtains} (OCC), \textit{sweep the floor} (SWP), \textit{fill a cup with tap water} (FCOT), \textit{take a bottle from the fridge} (RFF) and \textit{open a wardrobe} (WO).
The yellow row/column collectively represents all the gestures which are not among the modelled ones (N.A.).
The columns denote the true labels of all validation recordings (i.e., the gestures actually performed during each of them), while the rows denote the labels assigned by the recognition system.
Since, collectively, the validation dataset is composed of $60$ recordings per modelled gesture, a perfect recognition system would show $60$ recordings in each of the first five green cells (i.e., with the predicted label matching the true label) and none in the red cells (meaning that the recording of a gesture has been classified as an occurrence of another gesture) or in the yellow cells (meaning that the recording of a gesture has been classified as an occurrence of an unknown gesture).
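
As a minimal illustration of this protocol (not the actual implementation), the sketch below builds such a confusion matrix from a $6$-fold split with $10$ recordings of each gesture per fold; \texttt{fit\_models} and \texttt{classify} are hypothetical placeholders for whichever of the four modelling and comparison combinations is under test.

\begin{verbatim}
import numpy as np

GESTURES = ["OCC", "SWP", "FCOT", "RFF", "WO"]
LABELS = GESTURES + ["N.A."]   # last row/column collects unknown gestures

def kfold_confusion_matrix(recordings, true_labels, fit_models, classify,
                           n_folds=6):
    """Rows: predicted labels, columns: true labels (layout of the figures)."""
    confusion = np.zeros((len(LABELS), len(LABELS)), dtype=int)
    # Assign recordings to folds gesture by gesture, so that every fold
    # contains 10 of the 60 recordings of each gesture.
    folds = np.empty(len(recordings), dtype=int)
    for g in GESTURES:
        idx = [i for i, lab in enumerate(true_labels) if lab == g]
        folds[idx] = np.arange(len(idx)) % n_folds
    for k in range(n_folds):
        train = [i for i in range(len(recordings)) if folds[i] != k]
        val = [i for i in range(len(recordings)) if folds[i] == k]
        models = fit_models([recordings[i] for i in train],
                            [true_labels[i] for i in train])
        for i in val:
            predicted = classify(models, recordings[i])  # may return "N.A."
            confusion[LABELS.index(predicted),
                      LABELS.index(true_labels[i])] += 1
    return confusion
\end{verbatim}
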
\begin{figure}
\centering
{\includegraphics[width=11cm]{confusion_4D_dist_crop.eps}}
\caption{Results of k-fold cross-validation for the $4\times4D$ modelling approach and distance-based comparison. The bottom row reports the \textit{recall} measures, while the rightmost column reports the \textit{precision} measures. The purple cell reports the overall \textit{accuracy} of the system.}
\label{fig:validation_4Ddist}
\end{figure}

Column $1$ of Figure \ref{fig:validation_4Dprob} reports that, out of $60$ validation recordings actually referring to the gesture \textit{open and close curtains}, $59$ were correctly labelled as occurrences of that gesture and one was labelled as an occurrence of the gesture \textit{take a bottle from the fridge}.
This analysis allows for computing the \textit{recall} performance of the system, which corresponds to the ratio between the number of recordings of gesture $m$ correctly labelled as occurrences of gesture $m$ and the overall number of recordings of gesture $m$.
The recall values for all gestures are listed in the bottom row of the confusion matrix.

\begin{figure}
\centering
{\includegraphics[width=11cm]{confusion_7D_prob_crop.eps}}
\caption{Results of k-fold cross-validation for the $2\times7D$ modelling approach and probability-based comparison. The bottom row reports the \textit{recall} measures, while the rightmost column reports the \textit{precision} measures. The purple cell reports the overall \textit{accuracy} of the system.}
\label{fig:validation_7Dprob}
\end{figure}

A similar analysis of the rows of the confusion matrix allows for assessing the \textit{precision} performance of the system.
As an example, row $2$ of Figure \ref{fig:validation_4Dprob} reports that, out of $59$ recordings labelled as occurrences of the gesture \textit{sweep the floor}, $58$ were true recordings of that gesture, while one was a recording of the gesture \textit{fill a cup with tap water}.
The precision metric measures the ratio between the number of recordings of gesture $m$ correctly labelled as occurrences of gesture $m$ and the overall number of recordings labelled as occurrences of gesture $m$.
The precision values for all gestures are listed in the rightmost column of the confusion matrix.
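
In code, both measures, together with the overall accuracy discussed below, can be read directly off such a confusion matrix; the snippet is a schematic illustration assuming the layout described above, with predicted labels on the rows, true labels on the columns, and the N.A. class in the last row and column.

\begin{verbatim}
import numpy as np

def summarise(confusion, n_gestures=5):
    """Per-gesture recall and precision, plus overall accuracy."""
    correct = np.diag(confusion)[:n_gestures]
    recall = correct / confusion[:, :n_gestures].sum(axis=0)     # per true gesture
    precision = correct / confusion[:n_gestures, :].sum(axis=1)  # per assigned label
    accuracy = np.trace(confusion) / confusion.sum()             # correct labels overall
    return recall, precision, accuracy
\end{verbatim}
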
\begin{figure}
\centering
{\includegraphics[width=11cm]{confusion_7D_dist_crop.eps}}
\caption{Results of k-fold cross-validation for the $2\times7D$ modelling approach and distance-based comparison. The bottom row reports the \textit{recall} measures, while the rightmost column reports the \textit{precision} measures. The purple cell reports the overall \textit{accuracy} of the system.}
\label{fig:validation_7Ddist}
\end{figure}

Lastly, the aggregated analysis of the number of correct labels over the total number of recordings, i.e., the \textit{accuracy} performance of the system, is reported in the purple cell at the bottom-right corner.
As an example, the recognition system adopting the implicit correlation modelling approach ($4\times4D$) and probability-based comparison, whose confusion matrix is shown in Figure \ref{fig:validation_4Dprob}, has an overall accuracy of $82\%$.

\subsection{Real-life Conditions}

\begin{table}
\centering
\caption{Experiment scripts for the real-life tests.}
\label{tab:real-life}
\begin{tabular}{@{ }c@{ } @{ }c@{ }}
\hline
Scenario \#1 & Scenario \#2 \\
\hline
\multirow{2}{*}{SWP} & WO \\
 & RFF \\
\multirow{2}{*}{SWP} & FCOT \\
 & OCC \\
\hline
\end{tabular}
\end{table}

As an additional test for assessing the system's performance, we asked two volunteers to wear the smartwatches on their wrists while carrying out cleaning chores of their choice in a room.
The first volunteer was instructed to perform the gesture \textit{sweep the floor} twice, amidst the other activities, while the second volunteer was instructed to perform each of the other gestures once, as summarised in Table \ref{tab:real-life}.
Both tests were supervised by an experimenter, who annotated the time at which each gesture of interest was performed.

\begin{figure}
\centering
\includegraphics[width=12cm]{t1_measurements_}
\caption{Acceleration streams registered during the execution of real-life scenario 1. Red circles denote the two executions of the gesture \textit{sweep the floor}.}
\label{fig:scenario1}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=12cm]{t2_measurements_}
\caption{Acceleration streams registered during the execution of real-life scenario 2. Red circles denote, from left to right, the execution of the gestures \textit{open a wardrobe}, \textit{take a bottle from the fridge}, \textit{fill a cup with tap water} and \textit{open and close curtains}.}
\label{fig:scenario2}
\end{figure}

Figure \ref{fig:scenario1} shows the accelerations registered at the wrists of the first volunteer during the execution of real-life scenario 1, and Figure \ref{fig:scenario2} shows those registered at the wrists of the second volunteer during the execution of real-life scenario 2.
The red circles denote all occurrences of the modelled bimanual gestures.
For these tests, we used the whole dataset of $60$ recordings per gesture described above as the training set for building the models.
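
The per-model curves reported in the following figures can be thought of as the result of sliding a window over the continuous acceleration streams and scoring the content of each window against every gesture model. The sketch below is purely illustrative; \texttt{score} is a hypothetical placeholder for either the probability-based or the distance-based comparison, and \texttt{threshold} stands for the rejection criterion that maps unconvincing windows to the N.A. label.

\begin{verbatim}
def score_stream(stream, models, window_len, step, score, threshold):
    """Score a continuous two-wrist acceleration stream against every model.

    `stream` is a (T, d) array; `models` maps gesture names to trained models;
    `score` stands for the probability- or distance-based comparison."""
    curves = {name: [] for name in models}
    labels = []
    for start in range(0, len(stream) - window_len + 1, step):
        window = stream[start:start + window_len]
        values = {name: score(model, window) for name, model in models.items()}
        for name, value in values.items():
            curves[name].append(value)
        best = max(values, key=values.get)
        labels.append(best if values[best] >= threshold else "N.A.")
    return curves, labels
\end{verbatim}
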
\begin{figure}
\centering
\includegraphics[width=12cm]{4x4_POSS_PROB_1}
\caption{Output of the system for the recording of scenario 1 shown in Figure \ref{fig:scenario1}. The left-hand graph refers to the $4\times4D$ modelling approach and probability-based comparison, while the right-hand graph refers to the $4\times4D$ modelling approach and distance-based comparison.}
\label{fig:validation_scenario1_4D}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=12cm]{2x7_POSS_PROB_1}
\caption{Output of the system for the recording of scenario 1 shown in Figure \ref{fig:scenario1}. The left-hand graph refers to the $2\times7D$ modelling approach and probability-based comparison, while the right-hand graph refers to the $2\times7D$ modelling approach and distance-based comparison.}
\label{fig:validation_scenario1_7D}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=12cm]{4x4_POSS_PROB_2}
\caption{Output of the system for the recording of scenario 2 shown in Figure \ref{fig:scenario2}. The left-hand graph refers to the $4\times4D$ modelling approach and probability-based comparison, while the right-hand graph refers to the $4\times4D$ modelling approach and distance-based comparison.}
\label{fig:validation_scenario2_4D}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=12cm]{2x7_POSS_PROB_2}
\caption{Output of the system for the recording of scenario 2 shown in Figure \ref{fig:scenario2}. The left-hand graph refers to the $2\times7D$ modelling approach and probability-based comparison, while the right-hand graph refers to the $2\times7D$ modelling approach and distance-based comparison.}
\label{fig:validation_scenario2_7D}
\end{figure}

Figures \ref{fig:validation_scenario1_4D}-\ref{fig:validation_scenario2_7D} show the output of the recognition system for the first scenario (Figures \ref{fig:validation_scenario1_4D} and \ref{fig:validation_scenario1_7D}) and the second scenario (Figures \ref{fig:validation_scenario2_4D} and \ref{fig:validation_scenario2_7D}), for the four combinations of modelling and comparison procedures.
In all figures, the x-axis denotes time and the y-axis denotes the probability or inverse distance value of the models.
The \textit{TP} box marks the true positive recognitions, i.e., all the occurrences of a modelled gesture which are correctly recognised.
The \textit{FP} box marks the false positive recognitions, i.e., all the occurrences of gestures which have been assigned the wrong label.

\subsection{Discussion}

As Figures \ref{fig:validation_4Dprob}-\ref{fig:validation_7Ddist} show, the combination allowing for the best recognition performance relies on the \textit{implicit} correlation modelling approach and the \textit{probability-based} comparison: this system achieves an overall accuracy of $82\%$ (see Figure \ref{fig:validation_4Dprob}) and retains good recognition performance, especially in terms of precision, for all modelled gestures (the gesture with the worst recall is \textit{open a wardrobe}, with $48.3\%$, while the gesture with the worst precision is \textit{fill a cup with tap water}, with $70.9\%$).
Interestingly enough, the combination resulting in the worst recognition performance ($38.3\%$ overall accuracy) relies on the explicit correlation modelling approach and the same probability-based comparison (see Figure \ref{fig:validation_7Dprob}), thus confirming that there is a tight relationship between the modelling and comparison procedures.
The same modelling approach, with the distance-based comparison procedure, performs significantly better ($69.7\%$ overall accuracy, see Figure \ref{fig:validation_7Ddist}).

For all four considered combinations, the gestures \textit{open and close curtains} and \textit{sweep the floor} consistently achieve the highest precision and recall values, which seems to suggest that recursive motions (i.e., those composed of the repeated execution of simple movements), regardless of whether they involve the concurrent or sequential motion of the two arms, are easier to model and recognise.
In other words,
repeated gestures produce more stable patterns as far as the detection and classification system is concerned.
Conversely, the gestures \textit{open a wardrobe} and \textit{take a bottle from the fridge} consistently achieve the lowest precision and recall values, with the latter performing especially poorly with the implicit correlation modelling approach.
These results, albeit preliminary, seem to confirm our intuition that the explicit modelling approach is much more sensitive to small differences between run-time streams and the corresponding model than the implicit correlation modelling approach.

In accordance with the k-fold cross-validation analysis, in the real-life tests the combinations achieving the best performance are also the implicit correlation modelling approach with probability-based comparison and the explicit correlation modelling approach with distance-based comparison, which, on the whole, successfully recognise three and four of the six gesture executions of interest, respectively.

Real-life tests highlight a difference in the pattern of the recognition labels between the probability-based and distance-based comparison procedures.
As Figures \ref{fig:validation_scenario1_4D}-\ref{fig:validation_scenario2_7D} show, in the second case the labels follow a smoother pattern, which reduces the number of false positive recognitions.
The adoption of reasoning techniques that analyse label patterns to increase the recognition accuracy has proved effective in the case of a single stream \cite{Bruno14b} and may also lead to significant improvements in the case of bimanual gestures.

What do these results have to say about the way we currently understand bimanual gestures in humans?
Not surprisingly, the best results we achieve involve implicit correlation with probability-based comparison.
With implicit correlation we do not assume the motion of the two hands/arms to be explicitly correlated, whilst probability-based comparison is more robust with respect to small variations in the phases of the two hands/arms.
If we had assumed the two limbs to be explicitly correlated in motor space, i.e., through intrinsic coordinates, we would have constrained the phases of the two hands/arms to be perfectly tuned.
This is not what happens in practice, as exemplified by the experiments of Heuer and colleagues \cite{Heueretal1998} and Byblow \textit{et al.} \cite{Byblowetal2000}.
Since, in real-world gesture execution, perfect synchronisation seldom happens, a better result is obtained -- on average -- when we allow for the maximum flexibility in gesture execution.
Probability-based measures enforce such flexibility even more, because they do not constrain the distance metric to be tied to a specific time instant of the gesture execution.

Finally, it is noteworthy that the use of Gaussian mixtures allows for determining which parts of the motion are more relevant, since those are characterised by lower covariance amplitudes.
As a consequence, the covariance matrix associated with each element in the model ensures that the corresponding sample is given a proper weight.
This allows the recognition phase to consider the differing importance of the various phases of the gesture, therefore taking motion dexterity and robustness into account.
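
To make this distinction concrete, the sketch below contrasts the two comparison styles under simplifying assumptions: each gesture model is approximated by a Gaussian mixture fitted over time-stamped samples (e.g., with scikit-learn), the probability-based score is the average log-likelihood of the observed window, and the distance-based score is a negative mean Euclidean distance to a time-aligned reference trajectory (such as the GMR-regressed mean). The function names are illustrative and do not correspond to the actual implementation.

\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gesture_model(samples, n_components=8):
    """Illustrative stand-in for the GMM step: fit a Gaussian mixture over
    [time, acceleration features] rows pooled from the training recordings."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="full").fit(samples)

def probability_score(model, window):
    """Probability-based comparison: average log-likelihood of the observed
    samples; no sample is tied to one specific instant of the model."""
    return model.score(window)

def distance_score(reference, window):
    """Distance-based comparison: negative mean Euclidean distance between the
    observed trajectory and a time-aligned reference trajectory (e.g., the
    GMR-regressed mean); each sample is compared with a specific instant."""
    length = min(len(reference), len(window))
    return -np.mean(np.linalg.norm(np.asarray(reference)[:length]
                                   - np.asarray(window)[:length], axis=1))
\end{verbatim}
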

The combination of the two methods for modelling and recognition allows us to support Bernstein's intuition about motion constraints \cite{Bernstein1967}, i.e., (i) variations in degrees of freedom affecting motion performance are constrained (in our case, by low covariance values in the intra-arm correlation of inertial data); (ii) variations in degrees of freedom that \textit{do not affect} task performance can be unconstrained (again, by larger covariance values in the intra-arm correlation); (iii) co-variations between gesture-relevant degrees of freedom not impacting on performance are permitted (by considering implicit correlations between the two hands/arms).

\section{Conclusions}
\label{sec:conclusions}

This paper discusses design choices involved in the detection and classification of bimanual gestures in unconstrained environments.
The assumption we make is the availability of inertial data originating from two distinct sensing points, conveniently located at the wrists, given the availability of COTS sensors such as smartwatches.
Our models are grounded in Gaussian Mixture Modelling (GMM) and Gaussian Mixture Regression (GMR), which we use as the basic modelling procedure.
Starting from these premises, we compare different modelling techniques (i.e., explicit and implicit correlations between the two hands/arms) and classification techniques (i.e., based on distance- and probability-related considerations), which are inspired by the literature about the representation of bimanual gestures in the brain.
Our architecture allows for different combinations of modelling and classification techniques.
Furthermore, it can be extended as a framework to support reproducible research.

Experiments show results related to $5$ generic bimanual activities, which have been selected on the basis of three main parameters: whether or not the two hands are constrained by a physical tool, whether or not a specific sequence of single-hand gestures is required, and whether or not the gesture is recursive.
The best results are obtained when considering an implicit coordination between the two hands/arms (i.e., the two motions are modelled separately) and using a probability-based comparison for classification (i.e., the specific timing characteristics of the trajectories are considered only to a limited extent).
This seems to confirm a few insights from the literature related to the motor control of bimanual gestures, and opens up a number of interesting research questions for future work.
\section*{References}