{
"paper_id": "S13-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:42:06.329421Z"
},
"title": "MayoClinicNLP-CORE: Semantic representations for textual similarity",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mayo Clinic Rochester",
"location": {
"postCode": "55905",
"region": "MN"
}
},
"email": "[email protected]"
},
{
"first": "Dongqing",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Delaware Newark",
"location": {
"postCode": "19716",
"region": "DE"
}
},
"email": "[email protected]"
},
{
"first": "Ben",
"middle": [],
"last": "Carterette",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Delaware Newark",
"location": {
"postCode": "19716",
"region": "DE"
}
},
"email": "[email protected]"
},
{
"first": "Hongfang",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mayo Clinic Rochester",
"location": {
"postCode": "55905",
"region": "MN"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The Semantic Textual Similarity (STS) task examines semantic similarity at a sentencelevel. We explored three representations of semantics (implicit or explicit): named entities, semantic vectors, and structured vectorial semantics. From a DKPro baseline, we also performed feature selection and used sourcespecific linear regression models to combine our features. Our systems placed 5th, 6th, and 8th among 90 submitted systems.",
"pdf_parse": {
"paper_id": "S13-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "The Semantic Textual Similarity (STS) task examines semantic similarity at a sentencelevel. We explored three representations of semantics (implicit or explicit): named entities, semantic vectors, and structured vectorial semantics. From a DKPro baseline, we also performed feature selection and used sourcespecific linear regression models to combine our features. Our systems placed 5th, 6th, and 8th among 90 submitted systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Semantic Textual Similarity (STS) task (Agirre et al., 2012; Agirre et al., 2013) examines semantic similarity at a sentence-level. While much work has compared the semantics of terms, concepts, or documents, this space has been relatively unexplored. The 2013 STS task provided sentence pairs and a 0-5 human rating of their similarity, with training data from 5 sources and test data from 4 sources.",
"cite_spans": [
{
"start": 43,
"end": 64,
"text": "(Agirre et al., 2012;",
"ref_id": "BIBREF0"
},
{
"start": 65,
"end": 85,
"text": "Agirre et al., 2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We sought to explore and evaluate the usefulness of several semantic representations that have had recent significance in research or practice. First, information extraction (IE) methods often implicitly consider named entities as ad hoc semantic representations, for example, in the clinical domain. Therefore, we sought to evaluate similarity based on named entity-based features. Second, in many applications, an effective means of incorporating distributional semantics is Random Indexing (RI). Thus we consider three different representations possible within Random Indexing (Kanerva et al., 2000; Sahlgren, 2005) . Finally, because compositional distributional semantics is an important research topic (Mitchell and Lapata, 2008; Erk and Pad\u00f3, 2008) , we sought to evaluate a principled composition strategy: structured vectorial semantics (Wu and Schuler, 2011) .",
"cite_spans": [
{
"start": 580,
"end": 602,
"text": "(Kanerva et al., 2000;",
"ref_id": "BIBREF8"
},
{
"start": 603,
"end": 618,
"text": "Sahlgren, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 708,
"end": 735,
"text": "(Mitchell and Lapata, 2008;",
"ref_id": "BIBREF11"
},
{
"start": 736,
"end": 755,
"text": "Erk and Pad\u00f3, 2008)",
"ref_id": "BIBREF5"
},
{
"start": 846,
"end": 868,
"text": "(Wu and Schuler, 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper proceeds as follows. Section 2 overviews our similarity metrics, and Section 3 overviews the systems that were defined on these metrics. Competition results and additional analyses are in Section 4. We end with discussion on the results in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Because we expect semantic similarity to be multilayered, we expect that we will need many similarity measures to approximate human similarity judgments. Rather than reinvent the wheel, we have chosen to introduce features that complement existing successful feature sets. We utilized 17 features from DKPro Similarity and 21 features from TakeLab, i.e., the two top-performing systems in the 2012 STS task, as a solid baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity measures",
"sec_num": "2"
},
{
"text": "These are summarized in Table 1 . We introduce 3 categories of new similarity metrics, 9 metrics in all.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Similarity measures",
"sec_num": "2"
},
{
"text": "Named entity recognition provides a common approximation of semantic content for the information extraction perspective. We define three simple similarity metrics based on named entities. First, we computed the named entity overlap (exact string matches) between the two sentences, where NE k was the set of named entities found in sentence S k . This is the harmonic mean of how closely S1 Table 1 : Full feature pool in MayoClinicNLP systems. The proposed MayoClinicNLP metrics are meant to complement DKPro (B\u00e4r et al., 2012) and TakeLab (\u0160ari\u0107 et al., 2012) ",
"cite_spans": [
{
"start": 510,
"end": 528,
"text": "(B\u00e4r et al., 2012)",
"ref_id": "BIBREF3"
},
{
"start": 533,
"end": 561,
"text": "TakeLab (\u0160ari\u0107 et al., 2012)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 391,
"end": 398,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Named entity measures",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "sim neo (S1, S2) = 2 \u22c5 NE 1 \u2229 NE 2 NE 1 + NE 2",
"eq_num": "(1)"
}
],
"section": "Named entity measures",
"sec_num": "2.1"
},
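{
"text": "As a concrete illustration of Eq. (1), a minimal Python sketch follows; the function name and set-valued inputs are our own, not the paper's implementation:\n\ndef sim_neo(ne1, ne2):\n    # Eq. (1): harmonic mean of how closely S1 matches S2 (overlap / |NE1|)\n    # and how closely S2 matches S1 (overlap / |NE2|).\n    if not ne1 or not ne2:\n        return 0.0\n    overlap = len(ne1 & ne2)\n    return 2.0 * overlap / (len(ne1) + len(ne2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named entity measures",
"sec_num": "2.1"
},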
{
"text": "Additionally, we relax the constraint of requiring exact string matches between the two sentences by using the longest common subsequence (Allison and Dix, 1986) and greedy string tiling (Wise, 1996) algorithms. These metrics give similarities between two strings, rather than two sets of strings as we have with NE 1 and NE 2 . Thus, we follow previous work in greedily aligning these named entities (Lavie and Denkowski, 2009; \u0160ari\u0107 et al., 2012) into pairs. Namely, we compare each pair (ne i,1 , ne j,2 ) of named entity strings in NE 1 and NE 2 . The highest-scoring pair is entered into a set of pairs, P . Then, the next highest pair is added to P if neither named entity is already in P , and discarded otherwise; this continues until there are no more named entities in either NE 1 or NE 2 .",
"cite_spans": [
{
"start": 138,
"end": 161,
"text": "(Allison and Dix, 1986)",
"ref_id": "BIBREF2"
},
{
"start": 401,
"end": 428,
"text": "(Lavie and Denkowski, 2009;",
"ref_id": "BIBREF10"
},
{
"start": 429,
"end": 448,
"text": "\u0160ari\u0107 et al., 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Named entity measures",
"sec_num": "2.1"
},
{
"text": "We then define two named entity aligning measures that use the longest common subsequence (LCS) and greedy string tiling (GST) fuzzy string matching algorithms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named entity measures",
"sec_num": "2.1"
},
{
"text": "sim nea (S1, S2) = (ne 1 ,ne 2 )\u2208P f (ne 1 , ne 2 ) max NE 1 , NE 2 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named entity measures",
"sec_num": "2.1"
},
{
"text": "where f (\u22c5) is either the LCS or GST algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named entity measures",
"sec_num": "2.1"
},
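{
"text": "A hedged sketch of the greedy alignment and Eq. (2); f stands in for an LCS or GST string similarity, and all names are illustrative rather than the UKP/TakeLab implementations:\n\nfrom itertools import product\n\ndef greedy_align(ne1, ne2, f):\n    # Pair entity strings by descending f-score; each entity is used at most once.\n    scored = sorted(((f(a, b), a, b) for a, b in product(ne1, ne2)), reverse=True)\n    used1, used2, scores = set(), set(), []\n    for score, a, b in scored:\n        if a not in used1 and b not in used2:\n            scores.append(score)\n            used1.add(a)\n            used2.add(b)\n    return scores\n\ndef sim_nea(ne1, ne2, f):\n    # Eq. (2): summed aligned-pair scores, normalized by the larger entity set.\n    if not ne1 or not ne2:\n        return 0.0\n    return sum(greedy_align(ne1, ne2, f)) / max(len(ne1), len(ne2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named entity measures",
"sec_num": "2.1"
},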
{
"text": "In our experiments, we performed named entity recognition with the Stanford NER tool using the standard English model (Finkel et al., 2005) . Also, we used UKP's existing implementation of LCS and GST (\u0160ari\u0107 et al., 2012) for the latter two measures.",
"cite_spans": [
{
"start": 118,
"end": 139,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Named entity measures",
"sec_num": "2.1"
},
{
"text": "Random indexing (Kanerva et al., 2000; Sahlgren, 2005) is another distributional semantics framework for representing terms as vectors. Similar to LSA (Deerwester et al., 1990) , an index is created that represents each term as a semantic vector. But in random indexing, each term is represented by an elemental vector e t with a small number of randomly-generated non-zero components. The intuition for this means of dimensionality reduction is that these randomly-generated elemental vectors are like quasi-orthogonal bases in a traditional geometric semantic space, rather than, e.g., 300 fully orthogonal dimensions from singular value decomposition (Landauer and Dumais, 1997) . For a standard model with random indexing, a contextual term vector c t,std is the the sum of the elemental vectors corresponding to tokens in the document. All contexts for a particular term are summed and normalized to produce a final term vector v t,std .",
"cite_spans": [
{
"start": 16,
"end": 38,
"text": "(Kanerva et al., 2000;",
"ref_id": "BIBREF8"
},
{
"start": 39,
"end": 54,
"text": "Sahlgren, 2005)",
"ref_id": "BIBREF12"
},
{
"start": 151,
"end": 176,
"text": "(Deerwester et al., 1990)",
"ref_id": "BIBREF4"
},
{
"start": 654,
"end": 681,
"text": "(Landauer and Dumais, 1997)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Random indexing measures",
"sec_num": "2.2"
},
{
"text": "Other notions of context can be incorporated into this model. Local co-occurrence context can be accounted for in a basic sliding-window model by considering words within some window radius r (instead of a whole document). Each instance of the term t will have a contextual vector c t,win = e t\u2212r + \u22ef + e t\u22121 + e t+1 + \u22ef + e t+r ; context vectors for each instance (in a large corpus) would again be added and normalized to create the overall vector v t,win . A directional model doubles the dimensionality of the vector and considers left-and right-context separately (half the indices for left-context, half for rightcontext), using a permutation to achieve one of the two contexts. A permutated positional model uses a position-specific permutation function to encode the relative word positions (rather than just left-or rightcontext) separately. Again, v t would be summed and normalized over all instances of c t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random indexing measures",
"sec_num": "2.2"
},
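{
"text": "A minimal sketch of the sliding-window contextual vector c_{t,win}; elemental is assumed to be a pre-built map from term to its sparse random elemental vector (hypothetical names, not the actual package API):\n\nimport numpy as np\n\ndef window_context(tokens, i, elemental, r=6):\n    # c_{t,win}: sum of the elemental vectors of the tokens within radius r\n    # of position i, excluding the target token itself.\n    dim = len(next(iter(elemental.values())))\n    ctx = np.zeros(dim)\n    for j in range(max(0, i - r), min(len(tokens), i + r + 1)):\n        if j != i:\n            ctx += elemental[tokens[j]]\n    return ctx",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random indexing measures",
"sec_num": "2.2"
},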
{
"text": "Sentence vectors from any of these 4 Random Indexing-based models (standard, windowed, directional, positional) are just the sum of the vectors for each term v S = \u2211 t\u2208S v t . We define 4 separate similarity metrics for STS as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random indexing measures",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "sim RI (S1, S2) = cos(v S1 , v S2 )",
"eq_num": "(3)"
}
],
"section": "Random indexing measures",
"sec_num": "2.2"
},
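{
"text": "Given trained term vectors from any of the four models, Eq. (3) reduces to a few lines. This is a sketch assuming term_vectors maps terms to numpy arrays; out-of-vocabulary terms are simply skipped:\n\nimport numpy as np\n\ndef sim_ri(tokens1, tokens2, term_vectors, dim=200):\n    # v_S = sum of the term vectors for the tokens in S.\n    def sent_vec(tokens):\n        return sum((term_vectors[t] for t in tokens if t in term_vectors), np.zeros(dim))\n    v1, v2 = sent_vec(tokens1), sent_vec(tokens2)\n    norm = np.linalg.norm(v1) * np.linalg.norm(v2)\n    return float(v1 @ v2 / norm) if norm else 0.0  # Eq. (3): cosine similarity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Random indexing measures",
"sec_num": "2.2"
},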
{
"text": "We used the semantic vectors package (Widdows and Ferraro, 2008; Widdows and Cohen, 2010) in the default configuration for the standard model. For the windowed, directional, and positional models, we used a 6-word window radius with 200 dimensions and a seed length of 5. All models were trained on the raw text of the Penn Treebank Wall Street Journal corpus and a 100,075-article subset of Wikipedia.",
"cite_spans": [
{
"start": 37,
"end": 64,
"text": "(Widdows and Ferraro, 2008;",
"ref_id": "BIBREF15"
},
{
"start": 65,
"end": 89,
"text": "Widdows and Cohen, 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Random indexing measures",
"sec_num": "2.2"
},
{
"text": "Structured vectorial semantics (SVS) composes distributional semantic representations in syntactic context (Wu and Schuler, 2011) . Similarity metrics defined with SVS inherently explore the qualities of a fully interactive syntax-semantics interface. While previous work evaluated the syntactic contributions of this model, the STS task allows us to evaluate the phrase-level semantic validity of the model. We summarize SVS here as bottom-up vector composition and parsing, then continue on to define the associated similarity metrics.",
"cite_spans": [
{
"start": 107,
"end": 129,
"text": "(Wu and Schuler, 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic vectorial semantics measures",
"sec_num": "2.3"
},
{
"text": "Each token in a sentence is modeled generatively as a vector e \u03b3 of latent referents i \u03b3 in syntactic context c \u03b3 ; each element in the vector is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic vectorial semantics measures",
"sec_num": "2.3"
},
{
"text": "e \u03b3 [i \u03b3 ] = P(x \u03b3 lci \u03b3 ), for preterm \u03b3 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic vectorial semantics measures",
"sec_num": "2.3"
},
{
"text": "where l \u03b3 is a constant for preterminals. We write SVS vector composition between two word (or phrase) vectors in linear algebra form, 1 assuming that we are composing the semantics of two children e \u03b1 and e \u03b2 in a binary syntactic tree into their parent e \u03b3 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic vectorial semantics measures",
"sec_num": "2.3"
},
{
"text": "e \u03b3 = M \u2299 (L \u03b3\u00d7\u03b1 \u22c5 e \u03b1 ) \u2299 (L \u03b3\u00d7\u03b2 \u22c5 e \u03b2 ) \u22c5 1 (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic vectorial semantics measures",
"sec_num": "2.3"
},
{
"text": "M is a diagonal matrix that encapsulates probabilistic syntactic information; the L matrices are linear transformations that capture how semantically relevant child vectors are to the resulting vector (e.g., L \u03b3\u00d7\u03b1 defines the the relevance of e \u03b1 to e \u03b3 ). These matrices are defined such that the resulting e \u03b3 is a semantic vector of consistent P(x \u03b3 lci \u03b3 ) probabilities. Further detail is in our previous work (Wu, 2010; Wu and Schuler, 2011) .",
"cite_spans": [
{
"start": 415,
"end": 425,
"text": "(Wu, 2010;",
"ref_id": "BIBREF18"
},
{
"start": 426,
"end": 447,
"text": "Wu and Schuler, 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic vectorial semantics measures",
"sec_num": "2.3"
},
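{
"text": "In vector form the diagonal matrices of Eq. (5) collapse to elementwise products, so composition is a one-liner. A sketch under the assumption that m is the diagonal of M and L_ga, L_gb are the two relevance transformations supplied by the parser:\n\nimport numpy as np\n\ndef svs_compose(e_alpha, e_beta, L_ga, L_gb, m):\n    # Eq. (5): with M = diag(m), the pointwise products of diagonal matrices\n    # become elementwise vector products once collapsed by the ones vector.\n    return m * (L_ga @ e_alpha) * (L_gb @ e_beta)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured vectorial semantics measures",
"sec_num": "2.3"
},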
{
"text": "Similarity metrics can be defined in the SVS space by comparing the distributions of the composed e \u03b3 vectors -i.e., our similarity metric is a comparison of the vector semantics at different phrasal nodes. We define two measures, one corresponding to the top node c \u25b3 (e.g., with a syntactic constituent c \u25b3 = 'S'), and one corresponding to the left and right largest child nodes (e.g.,, c \u2220 = 'NP' and c \u2220 = 'VP' for a canonical subject-verb-object sentence in English). sim svs-top (S1, S2) = cos(e \u25b3(S1) , e \u25b3(S2) ) (6) sim svs-phr (S1, S2) = max( avgsim(e \u2220(S1) , e \u2220(S2) ; e \u2220(S1) , e \u2220(S2) ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic vectorial semantics measures",
"sec_num": "2.3"
},
{
"text": "where avgsim() is the harmonic mean of the cosine similarities between the two pairs of arguments. Top-level similarity comparisons in (6) amounts to comparing the semantics of a whole sentence. The phrasal similarity function sim svs-phr (S1, S2) in (7) thus seeks to semantically align the two largest subtrees, and weight them. Compared to sim svs-top , the phrasal similarity function sim svs-phr (S1, S2) assumes there might be some information captured in the child nodes that could be lost in the final composition to the top node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic vectorial semantics measures",
"sec_num": "2.3"
},
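{
"text": "A sketch of Eqs. (6)-(7) over the composed phrase vectors; the helper names and the straight/crossed pairing follow our reading of the alignment described above:\n\nimport numpy as np\n\ndef cos_sim(u, v):\n    norm = np.linalg.norm(u) * np.linalg.norm(v)\n    return float(u @ v / norm) if norm else 0.0\n\ndef avgsim(a1, a2, b1, b2):\n    # Harmonic mean of the two pairwise cosine similarities.\n    s1, s2 = cos_sim(a1, a2), cos_sim(b1, b2)\n    return 2 * s1 * s2 / (s1 + s2) if s1 + s2 else 0.0\n\ndef sim_svs_phr(left1, right1, left2, right2):\n    # Eq. (7): the better of the straight and crossed alignments of the two\n    # largest child-node vectors of S1 and S2.\n    return max(avgsim(left1, left2, right1, right2),\n               avgsim(left1, right2, right1, left2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured vectorial semantics measures",
"sec_num": "2.3"
},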
{
"text": "In our experiments, we used the parser described in Wu and Schuler (2011) with 1,000 headwords and 10 relational clusters, trained on the Wall Street Journal treebank.",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "Wu and Schuler (2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic vectorial semantics measures",
"sec_num": "2.3"
},
{
"text": "The similarity metrics of Section 2 were calculated for each of the sentence pairs in the training set, and later the test set. In combining these metrics, we extended a DKPro Similarity baseline (3.1) with feature selection (3.2) and source-specific models and classification (3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature combination framework",
"sec_num": "3"
},
{
"text": "For our baseline (MayoClinicNLPr1wtCDT), we used the UIMA-based DKPro Similarity system from STS 2012 (B\u00e4r et al., 2012) . Aside from the large number of sound similarity measures, this provided linear regression through the WEKA package (Hall et al., 2009) to combine all of the disparate similarity metrics into a single one, and some preprocessing. Regression weights were determined on the whole training set for each source.",
"cite_spans": [
{
"start": 102,
"end": 120,
"text": "(B\u00e4r et al., 2012)",
"ref_id": "BIBREF3"
},
{
"start": 238,
"end": 257,
"text": "(Hall et al., 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linear regression via DKPro Similarity",
"sec_num": "3.1"
},
{
"text": "Not every feature was included in the final linear regression models. To determine the best of the 47 (DKPro-17, TakeLab-21, MayoClinicNLP-9) features, we performed a full forward-search on the space of similarity measures. In forward-search, we perform 10-fold cross-validation on the training set for each measure, and pick the best one; in the next round, that best metric is retained, and the remaining metrics are considered for addition. Rounds continue until all the features are exhausted, though a stopping-point is noted when performance no longer increases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature selection",
"sec_num": "3.2"
},
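{
"text": "A sketch of the forward search. The paper ran it over WEKA linear regression scored by cross-validation; here we assume a scikit-learn analogue with X holding the 47 similarity features (the default R^2 scorer stands in for the task's Pearson correlation):\n\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import cross_val_score\n\ndef forward_search(X, y, cv=10):\n    # Greedy forward selection: each round keeps the feature whose addition\n    # maximizes the mean cross-validated score of a linear regression.\n    selected, remaining, history = [], list(range(X.shape[1])), []\n    while remaining:\n        def cv_score(j):\n            return cross_val_score(LinearRegression(), X[:, selected + [j]], y, cv=cv).mean()\n        best = max(remaining, key=cv_score)\n        history.append(cv_score(best))\n        selected.append(best)\n        remaining.remove(best)\n    return selected, history  # the plateau in history marks the stopping point",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature selection",
"sec_num": "3.2"
},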
{
"text": "There were 5 sources of data in the training set: paraphrase sentence pairs (MSRpar), sentence pairs from video descriptions (MSRvid), MT evaluation sentence pairs (MTnews and MTeuroparl) and gloss pairs (OnWN). In our submitted runs, we trained a separate, feature-selected model based on crossvalidation for each of these data sources. In training data on cross-validation tests, training domainspecific models outperformed training a single conglomerate model. In the test data, there were 4 sources, with 2 appearing in training data (OnWN, SMT) and 2 that were novel (FrameNet/Wordnet sense definitions (FNWN), European news headlines (headlines)). We examined two different strategies for applying the 5-source trained models on these 4 test sets. Both of these strategies rely on a multiclass random forest classifier, which we trained on the 47 similarity metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subdomain source models and classification",
"sec_num": "3.3"
},
{
"text": "First, for each sentence pair, we considered the final similarity score to be a weighted combination of the similarity score from each of the 5 sourcespecific similarity models. The combination weights were determined by utilizing the classifier's confidence scores. Second, the final similarity was chosen as the single source-specific similarity score corresponding to the classifier's output class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subdomain source models and classification",
"sec_num": "3.3"
},
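{
"text": "The two combination strategies, sketched with scikit-learn-style model objects (the paper used WEKA regressors and a random forest classifier; these wrappers are assumptions, with source_models listed in the classifier's class order):\n\nimport numpy as np\n\ndef weighted_score(x, source_models, source_clf):\n    # Run 1 strategy: weight each source-specific model's prediction by the\n    # classifier's confidence that the sentence pair comes from that source.\n    weights = source_clf.predict_proba([x])[0]\n    preds = np.array([m.predict([x])[0] for m in source_models])\n    return float(weights @ preds)\n\ndef best_match_score(x, source_models, source_clf):\n    # Run 2 strategy: take only the prediction of the best-matching source model.\n    label = source_clf.predict([x])[0]\n    best = list(source_clf.classes_).index(label)\n    return float(source_models[best].predict([x])[0])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subdomain source models and classification",
"sec_num": "3.3"
},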
{
"text": "The MayoClinicNLP team submitted three systems to the STS-Core task. We also include here a posthoc run that was considered as a possible submission.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "r1wtCDT This run used the 47 metrics from DKPro, TakeLab, and MayoClinicNLP as a feature pool for feature selection. Sourcespecific similarity metrics were combined with classifier-confidence-score weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "r2CDT Same feature pool as run 1. Best-match (as determined by classifier) source-specific similarity metric was used rather than a weighted combination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "r3wtCD TakeLab features were removed from the feature pool (before feature selection). Same source combination as run 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "r4ALL Post-hoc run using all 47 metrics, but training a single linear regression model rather than source-specific models. Table 2 shows the top 10 runs of 90 submitted in the STS-Core task are shown, with our three systems placing 5th, 6th, and 8th. Additionally, we can see that run 4 would have placed 4th. Notice that there are significant source-specific differences between the runs. For example, while run 4 is better overall, runs 1-3 outperform it on all but the headlines and FNWN datasets, i.e., the test datasets that were not present in the training data. Thus, it is clear that the source-specific models are beneficial when the training data is in-domain, but a combined model is more beneficial when no such training data is available. Due to the source-specific variability among the runs, it is important to know whether the forwardsearch feature selection performed as expected. For source specific models (runs 1 and 3) and a combined model (run 4), Figure 1 shows the 10-fold cross-validation scores on the training set as the next feature is added to the model. As we would expect, there is an initial growth region where the first features truly complement one another and improve performance significantly. A plateau is reached for each of the models, and some (e.g., SMTnews) even decay if too many noisy features are added.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 970,
"end": 978,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "The feature selection curves are as expected. Because the plateau regions are large, feature selection could be cut off at about 10 features, with gains in efficiency and perhaps little effect on accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature selection analysis",
"sec_num": "4.2"
},
{
"text": "The resulting selected features for some of the trained models are shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 83,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature selection analysis",
"sec_num": "4.2"
},
{
"text": "We determined whether including MayoClinicNLP features was any benefit over a feature-selected DKPro baseline. Table 4 analyzes this question by adding each of our measures in turn to a baseline feature-selected DKPro (dkselected). Note that this baseline was extremely effective; it would have ranked 4th in the STS competition, outperforming our run 4. Thus, metrics that improve this baseline must truly be complementary metrics. Here, we see that only the phrasal SVS measure is able to improve performance overall, largely by its contributions to the most difficult categories, FNWN and SMT. In fact, that system (dkselected + SVSePhrSimilari-tyMeasure) represents the best-performing run of any that was produced in our framework. Also, we see some source-specific behavior. None of our introduced measures are able to improve the headlines similarities. However, random indexing improves OnWN scores, several strategies improve the FNWN metric, and sim svs-phr is the only viable performance improvement on the SMT corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Contribution of MayoClinicNLP metrics",
"sec_num": "4.3"
},
{
"text": "Mayo Clinic's submissions to Semantic Textual Similarity 2013 performed well, placing 5th, 6th, and 8th among 90 submitted systems. We introduced similarity metrics that used different means to do compositional distributional semantics along with some named entity-based measures, finding some improvement especially for phrasal similar-ity from structured vectorial semantics. Throughout, we utilized forward-search feature selection, which enhanced the performance of the models. We also used source-based linear regression models and considered unseen sources as mixtures of existing sources; we found that in-domain data is necessary for smaller, source-based models to outperform larger, conglomerate models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We define the operator \u2299 as point-by-point multiplication of two diagonal matrices and 1 as a column vector of ones, collapsing a diagonal matrix onto a column vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Thanks to the developers of the UKP DKPro system and the TakeLab system for making their code available. Also, thanks to James Masanz for initial implementations of some similarity measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2012 task 6: A pilot on semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "385--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main confer- ence and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Eval- uation, pages 385-393. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "*sem 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2013,
"venue": "*SEM 2013: The Second Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez- Agirre, and Weiwei Guo. 2013. *sem 2013 shared task: Semantic textual similarity, including a pilot on typed-similarity. In *SEM 2013: The Second Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A bit-string longest-common-subsequence algorithm",
"authors": [
{
"first": "Lloyd",
"middle": [],
"last": "Allison",
"suffix": ""
},
{
"first": "Trevor",
"middle": [
"I"
],
"last": "Dix",
"suffix": ""
}
],
"year": 1986,
"venue": "Information Processing Letters",
"volume": "23",
"issue": "5",
"pages": "305--310",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lloyd Allison and Trevor I Dix. 1986. A bit-string longest-common-subsequence algorithm. Information Processing Letters, 23(5):305-310.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Ukp: Computing semantic textual similarity by combining multiple content similarity measures",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "B\u00e4r",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "435--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel B\u00e4r, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. Ukp: Computing semantic textual sim- ilarity by combining multiple content similarity mea- sures. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth Interna- tional Workshop on Semantic Evaluation, pages 435- 440. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Deerwester",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "George",
"middle": [
"W"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society for Information Science",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Deerwester, Susan Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391- 407.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A structured vector space model for word meaning in context",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk and Sebastian Pad\u00f3. 2008. A structured vec- tor space model for word meaning in context. In Pro- ceedings of EMNLP 2008.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Incorporating non-local information into information extraction systems by gibbs sampling",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meeting on Associ- ation for Computational Linguistics, pages 363-370. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The weka data mining software: an update",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "SIGKDD Explor. Newsl",
"volume": "11",
"issue": "1",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: an update. SIGKDD Explor. Newsl., 11(1):10-18, November.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Random indexing of text samples for latent semantic analysis",
"authors": [
{
"first": "Pentti",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Kristofersson",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Holst",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 22nd annual conference of the cognitive science society",
"volume": "1036",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pentti Kanerva, Jan Kristofersson, and Anders Holst. 2000. Random indexing of text samples for latent se- mantic analysis. In Proceedings of the 22nd annual conference of the cognitive science society, volume 1036. Citeseer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge",
"authors": [
{
"first": "K",
"middle": [],
"last": "Landauer",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological Review",
"volume": "104",
"issue": "",
"pages": "211--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Landauer and S.T. Dumais. 1997. A Solution to Plato's Problem: The Latent Semantic Analysis The- ory of Acquisition, Induction, and Representation of Knowledge. Psychological Review, 104:211-240.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The meteor metric for automatic evaluation of machine translation. Machine translation",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Denkowski",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "23",
"issue": "",
"pages": "105--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Lavie and Michael J Denkowski. 2009. The meteor metric for automatic evaluation of machine translation. Machine translation, 23(2-3):105-115.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Vector-based models of semantic composition",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236-244, Columbus, OH.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An introduction to random indexing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sahlgren",
"suffix": ""
}
],
"year": 2005,
"venue": "Methods and Applications of Semantic Indexing Workshop at the 7th International Conference on Terminology and Knowledge Engineering, TKE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Sahlgren. 2005. An introduction to random index- ing. In Methods and Applications of Semantic Index- ing Workshop at the 7th International Conference on Terminology and Knowledge Engineering, TKE, vol- ume 5.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Takelab: Systems for measuring semantic text similarity",
"authors": [
{
"first": "Frane",
"middle": [],
"last": "\u0160ari\u0107",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160najder",
"suffix": ""
},
{
"first": "Bojana Dalbelo",
"middle": [],
"last": "Ba\u0161i\u0107",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "441--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frane\u0160ari\u0107, Goran Glava\u0161, Mladen Karan, Jan\u0160najder, and Bojana Dalbelo Ba\u0161i\u0107. 2012. Takelab: Sys- tems for measuring semantic text similarity. In Pro- ceedings of the Sixth International Workshop on Se- mantic Evaluation (SemEval 2012), pages 441-448, Montr\u00e9al, Canada, 7-8 June. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The semantic vectors package: New algorithms and public tools for distributional semantics",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE Fourth International Conference on",
"volume": "",
"issue": "",
"pages": "9--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Widdows and Trevor Cohen. 2010. The seman- tic vectors package: New algorithms and public tools for distributional semantics. In Semantic Computing (ICSC), 2010 IEEE Fourth International Conference on, pages 9-15. IEEE.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semantic vectors: a scalable open source package and online technology management application",
"authors": [
{
"first": "D",
"middle": [],
"last": "Widdows",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ferraro",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth International Language Resources and Evaluation (LREC'08)",
"volume": "",
"issue": "",
"pages": "1183--1190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Widdows and K. Ferraro. 2008. Semantic vec- tors: a scalable open source package and online tech- nology management application. Proceedings of the Sixth International Language Resources and Evalua- tion (LREC'08), pages 1183-1190.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Yap3: Improved detection of similarities in computer program and other texts",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Wise",
"suffix": ""
}
],
"year": 1996,
"venue": "In ACM SIGCSE Bulletin",
"volume": "28",
"issue": "",
"pages": "130--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael J Wise. 1996. Yap3: Improved detection of sim- ilarities in computer program and other texts. In ACM SIGCSE Bulletin, volume 28, pages 130-134. ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Structured composition of semantic vectors",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Schuler",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Wu and William Schuler. 2011. Structured com- position of semantic vectors. In Proceedings of the In- ternational Conference on Computational Semantics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Vectorial Representations of Meaning for a Computational Model of Language Comprehension",
"authors": [
{
"first": "Stephen Tze-Inn",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Tze-Inn Wu. 2010. Vectorial Representations of Meaning for a Computational Model of Language Comprehension. Ph.D. thesis, Department of Com- puter Science and Engineering, University of Min- nesota.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Performance curve of feature selection for r1wtCDT, r2CDT, and r4ALL",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"html": null,
"content": "<table><tr><td>DKPro metrics (17)</td><td>TakeLab metrics (21)</td><td>Custom MayoClinicNLP metrics (9)</td></tr><tr><td colspan=\"2\">n-grams/WordNGramContainmentMeasure 1 stopword-filtered t ngram/UnigramOverlap</td><td/></tr><tr><td colspan=\"2\">n-grams/WordNGramContainmentMeasure 2 stopword-filtered t ngram/BigramOverlap</td><td/></tr><tr><td colspan=\"2\">n-grams/WordNGramJaccardMeasure 1 t ngram/TrigramOverlap</td><td/></tr><tr><td colspan=\"2\">n-grams/WordNGramJaccardMeasure 2 stopword-filtered t ngram/ContentUnigramOverlap</td><td/></tr><tr><td colspan=\"2\">n-grams/WordNGramJaccardMeasure 3 t ngram/ContentBigramOverlap</td><td/></tr><tr><td colspan=\"2\">n-grams/WordNGramJaccardMeasure 4 t ngram/ContentTrigramOverlap</td><td/></tr><tr><td>n-grams/WordNGramJaccardMeasure 4 stopword-filtered</td><td/><td/></tr><tr><td colspan=\"2\">t words/WeightedWordOverlap</td><td>custom/StanfordNerMeasure overlap.txt</td></tr><tr><td colspan=\"2\">t words/GreedyLemmaAligningOverlap</td><td>custom/StanfordNerMeasure aligngst.txt</td></tr><tr><td colspan=\"2\">t words/WordNetAugmentedWordOverlap</td><td>custom/StanfordNerMeasure alignlcs.txt</td></tr><tr><td colspan=\"2\">esa/ESA Wiktionary t vec/LSAWordSimilarity NYT</td><td>custom/SVSePhrSimilarityMeasure.txt</td></tr><tr><td colspan=\"2\">esa/ESA WordNet t vec/LSAWordSimilarity weighted NYT</td><td>custom/SVSeTopSimilarityMeasure.txt</td></tr><tr><td colspan=\"2\">t vec/LSAWordSimilarity weighted Wiki</td><td>custom/SemanticVectorsSimilarityMeasure d200 wr0.txt</td></tr><tr><td/><td/><td>custom/SemanticVectorsSimilarityMeasure d200 wr6b.txt</td></tr><tr><td/><td/><td>custom/SemanticVectorsSimilarityMeasure d200 wr6d.txt</td></tr><tr><td/><td/><td>custom/SemanticVectorsSimilarityMeasure d200 wr6p.txt</td></tr><tr><td colspan=\"2\">n-grams/CharacterNGramMeasure 2 t other/RelativeLengthDifference</td><td/></tr><tr><td colspan=\"2\">n-grams/CharacterNGramMeasure 3 t other/RelativeInfoContentDifference</td><td/></tr><tr><td colspan=\"2\">n-grams/CharacterNGramMeasure 4 t other/NumbersSize</td><td/></tr><tr><td colspan=\"2\">string/GreedyStringTiling 3 t other/NumbersOverlap</td><td/></tr><tr><td colspan=\"2\">string/LongestCommonSubsequenceComparator t other/NumbersSubset</td><td/></tr><tr><td colspan=\"2\">string/LongestCommonSubsequenceNormComparator t other/SentenceSize</td><td/></tr><tr><td colspan=\"2\">string/LongestCommonSubstringComparator t other/CaseMatches</td><td/></tr><tr><td colspan=\"2\">t other/StocksSize</td><td/></tr><tr><td colspan=\"2\">t other/StocksOverlap</td><td/></tr><tr><td>matches S2, and how closely S2 matches S1:</td><td/><td/></tr></table>",
"type_str": "table",
"text": "metrics.",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>TEAM NAME</td><td colspan=\"5\">headlines rank OnWN rank FNWN rank SMT rank mean rank</td></tr><tr><td>UMBC EBIQUITY-ParingWords</td><td>0.7642</td><td>0.7529</td><td>0.5818</td><td>0.3804</td><td>0.6181 1</td></tr><tr><td>UMBC EBIQUITY-galactus</td><td>0.7428</td><td>0.7053</td><td>0.5444</td><td>0.3705</td><td>0.5927 2</td></tr><tr><td>deft-baseline</td><td>0.6532</td><td>0.8431</td><td>0.5083</td><td>0.3265</td><td>0.5795 3</td></tr><tr><td>MayoClinicNLP-r4ALL</td><td>0.7275</td><td>0.7618</td><td>0.4359</td><td>0.3048</td><td>0.5707</td></tr><tr><td>UMBC EBIQUITY-saiyan</td><td>0.7838</td><td>0.5593</td><td>0.5815</td><td>0.3563</td><td>0.5683 4</td></tr><tr><td>MayoClinicNLP-r3wtCD</td><td>0.6440 43</td><td>0.8295 2</td><td>0.3202 47</td><td>0.3561 17</td><td>0.5671 5</td></tr><tr><td>MayoClinicNLP-r1wtCDT</td><td>0.6584 33</td><td>0.7775 4</td><td>0.3735 26</td><td>0.3605 13</td><td>0.5649 6</td></tr><tr><td>CLaC-RUN2</td><td>0.6921</td><td>0.7366</td><td>0.3793</td><td>0.3375</td><td>0.5587 7</td></tr><tr><td>MayoClinicNLP-r2CDT</td><td>0.6827 23</td><td>0.6612 20</td><td>0.396 17</td><td>0.3946 5</td><td>0.5572 8</td></tr><tr><td>NTNU-RUN1</td><td>0.7279</td><td>0.5952</td><td>0.3215</td><td>0.4015</td><td>0.5519 9</td></tr><tr><td>CLaC-RUN1</td><td>0.6774</td><td>0.7667</td><td>0.3793</td><td>0.3068</td><td>0.5511 10</td></tr></table>",
"type_str": "table",
"text": "Performance comparison.",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>headlines OnWN FNWN</td><td>SMT</td><td>mean</td></tr></table>",
"type_str": "table",
"text": "Adding customized features one at a time into optimized DKPro feature set. Models are trained across all sources.",
"num": null
}
}
}
}