{
"paper_id": "N07-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:47:38.071983Z"
},
"title": "Automating Creation of Hierarchical Faceted Metadata Structures",
"authors": [
{
"first": "Emilia",
"middle": [],
"last": "Stoica",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Megan",
"middle": [],
"last": "Richardson",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe Castanet, an algorithm for automatically generating hierarchical faceted metadata from textual descriptions of items, to be incorporated into browsing and navigation interfaces for large information collections. From an existing lexical database (such as WordNet), Castanet carves out a structure that reflects the contents of the target information collection; moderate manual modifications improve the outcome. The algorithm is simple yet effective: a study conducted with 34 information architects finds that Castanet achieves higher quality results than other automated category creation algorithms, and 85% of the study participants said they would like to use the system for their work.",
"pdf_parse": {
"paper_id": "N07-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe Castanet, an algorithm for automatically generating hierarchical faceted metadata from textual descriptions of items, to be incorporated into browsing and navigation interfaces for large information collections. From an existing lexical database (such as WordNet), Castanet carves out a structure that reflects the contents of the target information collection; moderate manual modifications improve the outcome. The algorithm is simple yet effective: a study conducted with 34 information architects finds that Castanet achieves higher quality results than other automated category creation algorithms, and 85% of the study participants said they would like to use the system for their work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "It is becoming widely accepted that the standard search interface, consisting of a query box and a list of retrieved items, is inadequate for navigation and exploration in large information collections such as online catalogs, digital libraries, and museum image collections. Instead, user interfaces which organize and group retrieval results have been shown to be helpful for and preferred by users over the straight results-list model when engaged in exploratory tasks (Yee et al., 2003; Pratt et al., 1999; Kaki, 2005) . In particular, a representation known as hierarchical faceted metadata is gaining great traction within the information architecture and enterprise search communities (Yee et al., 2003; Weinberger, 2005) .",
"cite_spans": [
{
"start": 472,
"end": 490,
"text": "(Yee et al., 2003;",
"ref_id": "BIBREF19"
},
{
"start": 491,
"end": 510,
"text": "Pratt et al., 1999;",
"ref_id": "BIBREF12"
},
{
"start": 511,
"end": 522,
"text": "Kaki, 2005)",
"ref_id": "BIBREF7"
},
{
"start": 692,
"end": 710,
"text": "(Yee et al., 2003;",
"ref_id": "BIBREF19"
},
{
"start": 711,
"end": 728,
"text": "Weinberger, 2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A considerable impediment to the wider adoption of collection navigation via metadata in general, and hierarchical faceted metadata in particular, is the need to create the metadata hierarchies and assign the appropriate category labels to the information items. Usually, metadata category structures are manually created by information architects (Rosenfeld and Morville, 2002) . While manually created metadata is considered of high quality, it is costly in terms of time and effort to produce, which makes it difficult to scale and keep up with the vast amounts of new content being produced.",
"cite_spans": [
{
"start": 348,
"end": 378,
"text": "(Rosenfeld and Morville, 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe Castanet, an algorithm that makes considerable progress in automating faceted metadata creation. Castanet creates domain-specific overlays on top of a large general-purpose lexical database, producing surprisingly good results in a matter of minutes for a wide range of subject matter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the next section we elaborate on the notion of hierarchical faceted metadata and show how it can be used in interfaces for navigation of information collections. Section 3 describes other algorithms for inducing category structure from textual descriptions. Section 4 describes the Castanet algorithm, Section 5 describes the results of an evaluation with information architects, and Section 6 draws conclusions and discusses future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A hierarchical faceted metadata system (HFC) creates a set of category hierarchies, each of which corresponds to a different facet (dimension or type). The main application of hierarchical faceted metadata is in user interfaces for browsing and navigating collections of like items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Faceted Metadata",
"sec_num": "2"
},
{
"text": "In the case of a recipe collection, for example, facets may consist of dish type (salad, appetizer), ingredients such as fruits (apricot, apple), vegetables (broccoli, cabbage), meat (beef, fish), preparation method (fry, bake, etc.), calorie count, and so on. Decomposing the description into independent categories allows users to move through large information spaces in a flexible manner. The category metadata guides the user toward possible choices, and organizes the results of keyword searches, allowing users to both refine and expand the current query, while maintaining a consistent representation of the collection's structure. This use of metadata should be integrated with free-text search, allowing the user to follow links, then add search terms, then follow more links, without interrupting the interaction flow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Faceted Metadata",
"sec_num": "2"
},
{
"text": "Usability studies have shown that, when incorporated into a properly-designed user interface, hierarchical faceted metadata provides a flexible, intuitive way to explore a large collection of items that enhances feelings of discovery without inducing a feeling of being lost (Yee et al., 2003) .",
"cite_spans": [
{
"start": 275,
"end": 293,
"text": "(Yee et al., 2003)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Faceted Metadata",
"sec_num": "2"
},
{
"text": "Note that the HFC representation is intermediate in complexity between that of a monolithic hierarchy and a full-blown ontology. HFC does not capture relations and inferences that are essential for some applications. For example, faceted metadata can express that an image contains a hat and a man and a tree, and perhaps a wearing activity, but does not indicate who is wearing what. This relative simplicity of representation suggests that automatically inferring facet hierarchies may be easier than the full ontology inference problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Faceted Metadata",
"sec_num": "2"
},
{
"text": "There is a large literature on document classification and automated text categorization (Sebastiani, 2002) . However, that work assumes that the categories of interest are already known, and tries to assign documents to categories. In contrast, in this paper we focus on the problem of determining the categories of interest.",
"cite_spans": [
{
"start": 89,
"end": 107,
"text": "(Sebastiani, 2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Another thread of work is on finding synonymous terms and word associations, as well as automatic acquisition of IS-A (or genus-head) relations from dictionary definitions and free text (Hearst, 1992; Caraballo, 1999) . That work focuses on finding the right position for a word within a lexicon, rather than building up comprehensible and coherent faceted hierarchies.",
"cite_spans": [
{
"start": 186,
"end": 200,
"text": "(Hearst, 1992;",
"ref_id": "BIBREF5"
},
{
"start": 201,
"end": 217,
"text": "Caraballo, 1999)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "A major class of solutions for creating subject hierarchies uses data clustering. The Scatter/Gather system (Cutting et al., 1992) uses a greedy global agglomerative clustering algorithm where an initial set of clusters is recursively re-clustered until only documents remain. Hofmann (1999) proposes the probabilistic latent semantic analysis algorithm (pLSA), a probabilistic version of clustering that uses latent semantic analysis for grouping words and annealed EM for model fitting.",
"cite_spans": [
{
"start": 108,
"end": 130,
"text": "(Cutting et al., 1992)",
"ref_id": "BIBREF3"
},
{
"start": 277,
"end": 291,
"text": "Hofmann (1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "The greatest advantage of clustering is that it is fully automatable and can be easily applied to any text collection. Clustering can also reveal interesting and potentially unexpected or new trends in a group of documents. The disadvantages of clustering include their lack of predictability, their conflation of many dimensions simultaneously, the difficulty of labeling the groups, and the counter-intuitiveness of cluster sub-hierarchies (Pratt et al., 1999) . Blei et al. (2003) developed the LDA (Latent Dirichlet Allocation) method, a generative probabilistic model of discrete data, which creates a hierarchical probabilistic model of documents. It attempts to analyze a text corpus and extract the topics that combined to form its doc-uments. The output of the algorithm was evaluated in terms of perplexity reduction but not in terms of understandability of the topics produced. Sanderson and Croft (1999) propose a method called subsumption for building a hierarchy for a set of documents retrieved for a query. For two terms x and y, x is said to subsume y if the following conditions hold:",
"cite_spans": [
{
"start": 442,
"end": 462,
"text": "(Pratt et al., 1999)",
"ref_id": "BIBREF12"
},
{
"start": 465,
"end": 483,
"text": "Blei et al. (2003)",
"ref_id": "BIBREF1"
},
{
"start": 889,
"end": 915,
"text": "Sanderson and Croft (1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "\u00a1 \u00a3 \u00a2 \u00a5 \u00a4 \u00a7 \u00a6\u00a8 \u00a9 \u00a1 \u00a3 \u00a2 \u00a5 ! \u00a6\u00a4 \" \u00a9 # % $",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": ". In other words, x subsumes y and is a parent of y, if the documents which contain y, are a subset of the documents which contain x. To evaluate the algorithm the authors asked 8 participants to look at parent-child pairs and state whether or not they were \"interesting\". Participants found 67% to be interesting as compared to 51% for randomly chosen pairs of words. Of those interesting pairs, 72% were found to display a \"type-of\" relationship. Nevill-Manning et.al (1999) , Anick et.al (1999) and Vossen (2001) build hierarchies based on substring inclusion. For example, the category full text indexing and retrieval is the child of indexing and retrieval which in turn is the child of index. While these string inclusion approaches expose some structure of the dataset, they can only create subcategories which are substrings of the parent category, which is very restrictive.",
"cite_spans": [
{
"start": 449,
"end": 476,
"text": "Nevill-Manning et.al (1999)",
"ref_id": null
},
{
"start": 479,
"end": 497,
"text": "Anick et.al (1999)",
"ref_id": null
},
{
"start": 502,
"end": 515,
"text": "Vossen (2001)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
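{
"text": "As a concrete illustration, the subsumption test can be phrased directly over document-term sets. The following minimal Python sketch is ours, not from any of the cited papers; the set-of-terms document representation, the helper name, and the toy data are illustrative:

from itertools import permutations

def subsumption_pairs(docs):
    # Map each term to the set of document ids that contain it.
    vocab = set().union(*docs)
    occurs = {t: {i for i, d in enumerate(docs) if t in d} for t in vocab}
    pairs = []
    for x, y in permutations(sorted(vocab), 2):
        # x subsumes y: every document with y also has x, but not the reverse.
        if occurs[y] <= occurs[x] and occurs[x] != occurs[y]:
            pairs.append((x, y))
    return pairs

docs = [{'pasta', 'penne'}, {'pasta', 'lasagna'}, {'pasta'}]
print(subsumption_pairs(docs))  # [('pasta', 'lasagna'), ('pasta', 'penne')]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},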
{
"text": "Another class of solutions make use of existing lexical hierarchies to build category hierarchies, as we do in this paper. For example, Navigli and Velardi (2003) use WordNet (Fellbaum, 1998) to build a complex ontology consisting of a wide range of relation types (demonstrated on a travel agent domain), as opposed to a set of human-readable hierarchical facets. They develop a complex algorithm for choosing among WordNet senses; it requires building a rich semantic network using Word-Net glosses, meronyms, holonyms, and other lexical relations, and using the semantically annotated SemCor collection. The semantic nets are intersected and the correct sense is chosen based on a score assigned to each intersection. Mihalcea and Moldovan (2001) describe a sophisticated method for simplifying WordNet in general, rather than tailoring it to a specific collection.",
"cite_spans": [
{
"start": 136,
"end": 162,
"text": "Navigli and Velardi (2003)",
"ref_id": "BIBREF10"
},
{
"start": 175,
"end": 191,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 721,
"end": 749,
"text": "Mihalcea and Moldovan (2001)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "The main idea behind the Castanet algorithm 1 is to carve out a structure from the hypernym (IS-A) relations within the WordNet (Fellbaum, 1998) lexical database. The primary unit of representation in WordNet is the synset, which is a set of words that are considered synonyms for a particular concept. Each synset is linked to other synsets via several types of lexical and semantic relations; we only use hypernymy (IS-A relations) in this algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "4"
},
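{
"text": "For readers who want to inspect these relations directly, WordNet's hypernym paths can be enumerated with NLTK. A small sketch (assumes the nltk package is installed and the wordnet corpus downloaded; the exact path printed depends on the WordNet release, which today is newer than the version used in the paper):

from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

# Print every hypernym (IS-A) path from a WordNet root down to 'sundae'.
synset = wn.synsets('sundae', pos=wn.NOUN)[0]
for path in synset.hypernym_paths():
    print(' -> '.join(s.name() for s in path))
# e.g. entity.n.01 -> ... -> dessert.n.01 -> frozen_dessert.n.01 -> sundae.n.01",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},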
{
"text": "The Castanet algorithm assumes that there is text associated with each item in the collection, or at least with a representative subset of the items. The textual descriptions are used both to build the facet hierarchies and to assign items (documents, images, citations, etc.) to the facets. The text does not need to be particularly coherent for the algorithm to work; we have applied it to fragmented image annotations and short journal titles, but if the text is impoverished, the information items will not be labeled as thoroughly as desirable and additional manual annotation may be needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm Overview",
"sec_num": "4.1"
},
{
"text": "The algorithm has five major steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm Overview",
"sec_num": "4.1"
},
{
"text": "1. Select target terms from textual descriptions of information items. 2. Build the Core Tree:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm Overview",
"sec_num": "4.1"
},
{
"text": "For each term, if the term is unambiguous (see below), add its synset's IS-A path to the Core Tree. Increment the counts for each node in the synset's path with the number of documents in which the target term appears.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm Overview",
"sec_num": "4.1"
},
{
"text": "3. Augment the Core Tree with the remaining terms' paths:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm Overview",
"sec_num": "4.1"
},
{
"text": "For each candidate IS-A path for the ambiguous term, choose the path for which there is the most document representation in the Core Tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm Overview",
"sec_num": "4.1"
},
{
"text": "4. Compress the augmented tree. 5. Remove top-level categories, yielding a set of facet hierarchies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm Overview",
"sec_num": "4.1"
},
{
"text": "We describe each step in more detail below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm Overview",
"sec_num": "4.1"
},
{
"text": "Castanet selects only a subset of terms, called target terms, that are intended to best reflect the topics in the documents. Similarly to Sanderson and Croft (1999) , we use the term distribution -defined as the number of item descriptions containing the term -as the selection criterion. The algorithm retains those terms that have a distribution larger than a threshold and eliminates terms on a stop list. One and two-word consecutive noun phrases are eligible to be considered as terms. Terms that can be adjectives or verbs as well as nouns are optionally deleted.",
"cite_spans": [
{
"start": 138,
"end": 164,
"text": "Sanderson and Croft (1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Select Target Terms",
"sec_num": "4.2"
},
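{
"text": "A minimal sketch of this selection step (our illustration: the regex tokenizer, toy stop list, and threshold are simplifications, and the paper's restriction to consecutive noun phrases would additionally require a part-of-speech tagger):

import re
from collections import Counter

STOP = {'the', 'a', 'of', 'and', 'with', 'in'}  # toy stop list

def select_target_terms(descriptions, min_docs=2):
    dist = Counter()
    for text in descriptions:
        words = [w for w in re.findall(r'[a-z]+', text.lower()) if w not in STOP]
        # Bigrams over the filtered words; a simplification of noun phrases.
        bigrams = [' '.join(p) for p in zip(words, words[1:])]
        dist.update(set(words) | set(bigrams))  # count documents, not tokens
    return {t: n for t, n in dist.items() if n >= min_docs}

docs = ['apricot salad with dates', 'baked apricot tart', 'fried fish']
print(select_target_terms(docs))  # {'apricot': 2}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Select Target Terms",
"sec_num": null
},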
{
"text": "The Core Tree acts as the \"backbone\" for the final category structure. It is built by using paths derived from unambiguous terms, with the goal of biasing the final structure towards the appropriate senses of words. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Build the Core Tree",
"sec_num": "4.3"
},
{
"text": "A term is considered unambiguous if it meets at least one of two conditions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguate using Wordnet Domains",
"sec_num": "4.3.1"
},
{
"text": "(1) The term has only one sense within WordNet, or (2) (Optional) The term matches one of the pre-selected WordNet domains (see below).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguate using Wordnet Domains",
"sec_num": "4.3.1"
},
{
"text": "From our experiments, about half of the eligible terms have only one sense within WordNet. For the rest of terms, we disambiguate between multiple senses as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguate using Wordnet Domains",
"sec_num": "4.3.1"
},
{
"text": "WordNet provides a cross-categorization mechanism known as domains, whereby some synsets are assigned general category labels. However, only a small subset of the nouns in WordNet have domains assigned to them. For example, for a medicine collection, we found that only 4% of the terms have domains medicine or biology associated with them. For this reason, we use an additional resource called Wordnet Domains (Magnini, 2000) , which assigns domains to WordNet synsets. In this resource, every noun synset in WordNet has been semiautomatically annotated with one of about 200 Dewey Decimal Classification labels. Examples include history, literature, plastic arts, zoology, etc.",
"cite_spans": [
{
"start": 411,
"end": 426,
"text": "(Magnini, 2000)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguate using Wordnet Domains",
"sec_num": "4.3.1"
},
{
"text": "In Castanet, Wordnet Domains are used as follows. First, the system counts how many times each domain is represented by target terms, building a list of the most well-represented domains for the collection. Then, in a manual intervention step, the information architect selects the subset of the well-represented domains which are meaningful for the collection in question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguate using Wordnet Domains",
"sec_num": "4.3.1"
},
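{
"text": "A sketch of the domain-voting step follows. WordNet Domains is distributed separately from WordNet, so a hypothetical domain_of mapping from synset names to domain labels stands in for it here, and the entries and counts are purely illustrative:

from collections import Counter

# Stand-in for the WordNet Domains annotation (illustrative entries only).
domain_of = {'lancet.n.01': 'architecture', 'lancet.n.02': 'surgery',
             'brain.n.01': 'anatomy'}

def domain_votes(term_synsets):
    votes = Counter()
    for synsets in term_synsets.values():
        for name in synsets:
            if name in domain_of:
                votes[domain_of[name]] += 1
    # The information architect then picks the meaningful subset by hand.
    return votes.most_common()

print(domain_votes({'lancet': ['lancet.n.01', 'lancet.n.02'],
                    'brain': ['brain.n.01']}))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguate using Wordnet Domains",
"sec_num": null
},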
{
"text": "For example, for a collection of biomedical journal titles, Surgery should be selected as a domain, whereas for an art history image collection, Architecture might be chosen. When processing the word lancet, the choice of domain distinguishes between the hyponym path entity In some cases, more than one domain may be relevant for a given term and for a given collection. For example, the term brain is annotated with two domains, Anatomy and Psychology, which are both relevant domains for a biomedical journal collection. Currently for these cases the algorithm breaks the tie by choosing the sense with the lowest WordNet sense number (corresponding to the most common sense), which in this case selects the Anatomy sense. However, we see this forced choice as a limitation, and in future work we plan to explore how to allow a term to have more than one occurrence in the metadata hierarchies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disambiguate using Wordnet Domains",
"sec_num": "4.3.1"
},
{
"text": "To build the Core Tree, the algorithm marches down the list of unambiguous terms and for each term looks up its synset and its hypernym path in WordNet. (If a term does not have representation in WordNet, then it is not included in the category structure.) To add a path to the Core Tree, its path is merged with those paths that have already been placed in the tree. Figure 1(a-b) shows the hypernym paths for the synsets corresponding to the terms sundae and ambrosia. Note that they have several hypernym path nodes in common: (entity), (substance, matter), (food, nutrient), (nutriment), (course), (dessert, sweet, afters) . Those shared paths are merged by the algorithm; the results, along with the paths for parfait and sherbert are shown in Figure 1(c) .",
"cite_spans": [
{
"start": 602,
"end": 626,
"text": "(dessert, sweet, afters)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 368,
"end": 381,
"text": "Figure 1(a-b)",
"ref_id": "FIGREF0"
},
{
"start": 749,
"end": 760,
"text": "Figure 1(c)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Add Paths to Core Tree",
"sec_num": "4.3.2"
},
{
"text": "In addition to augmenting the nodes in the tree, adding in a new term increases a count associated with each node on its path; this count corresponds to how many documents the term occurs in. Thus the more common a term, the more weight it places on the path it falls within.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Add Paths to Core Tree",
"sec_num": "4.3.2"
},
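{
"text": "A minimal sketch of this merge, using a nested-dict tree (an illustrative layout, not the paper's data structure); each term's document frequency is added to every node on its hypernym path:

def add_path(tree, path, doc_count):
    node = tree
    for synset_name in path:
        child = node.setdefault(synset_name, {'count': 0, 'children': {}})
        child['count'] += doc_count  # weight the whole path by document hits
        node = child['children']

core = {}
add_path(core, ['entity', 'food', 'dessert', 'sundae'], 12)
add_path(core, ['entity', 'food', 'dessert', 'ambrosia'], 3)
print(core['entity']['count'])  # 15: the shared prefix was merged
dessert = core['entity']['children']['food']['children']['dessert']
print(sorted(dessert['children']))  # ['ambrosia', 'sundae']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Add Paths to Core Tree",
"sec_num": null
},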
{
"text": "The Core Tree contains only a subset of terms in the collection (those that have only one path or whose sense can be selected with WordNet Domains). The next step is to add in the paths for the remaining target terms which are ambiguous according to WordNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augment the Core Tree / Disambiguate Terms",
"sec_num": "4.4"
},
{
"text": "The Core Tree is built with a bias towards paths that are most likely to be appropriate for the collection as a whole. When confronted with a term that has multiple possible IS-A paths corresponding to multiple senses, the system favors the more common path over other alternatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augment the Core Tree / Disambiguate Terms",
"sec_num": "4.4"
},
{
"text": "Assume that we want to add the term date to the Core Tree for a collection of recipes, and that currently there are two paths corresponding to two of its senses in the Core Tree (see Figure 3) . To decide which of the two paths to merge date into, the algorithm looks at the number of items assigned to the deepest node that is held in common between the existing Core Tree and each candidate path for the ambiguous term. The path for the calendar day sense has fewer than 20 documents assigned to it (corresponding to terms like Valentine's Day), whereas the path for the edible fruit sense has more than 700 documents assigned. Thus date is added to the fruit sense path. (The counts for the ambiguous terms' document hits are not incorporated into the new tree.) Also, to eliminate unlikely senses, each candidate sense's hypernym path is required to share at least \u00a2 \u00a1 of its nodes with nodes already in the Core Tree, where the user sets (usually between 40 and 60%). Thus the romantic appointment sense of date would not be considered as most of its hypernym path is not in the Core Tree. If no path passes the threshold, then the first sense's hypernym path (according to WordNet's sense ordering) is placed in the tree.",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 192,
"text": "Figure 3)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Augment the Core Tree / Disambiguate Terms",
"sec_num": "4.4"
},
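{
"text": "A sketch of this selection rule, assuming node_docs maps each Core Tree node to its document count (an illustrative representation; the toy paths and counts echo the date example above):

def choose_path(candidate_paths, node_docs, t=0.5):
    best, best_docs = None, -1
    for path in candidate_paths:
        shared = [n for n in path if n in node_docs]
        if len(shared) / len(path) < t:
            continue  # unlikely sense: too little overlap with the Core Tree
        docs = node_docs[shared[-1]]  # deepest node shared with the Core Tree
        if docs > best_docs:
            best, best_docs = path, docs
    return best  # None means: fall back to WordNet's first sense

node_docs = {'entity': 720, 'food': 715, 'fruit': 700,
             'abstraction': 25, 'calendar_day': 18}
paths = [['entity', 'food', 'fruit', 'date'],
         ['abstraction', 'calendar_day', 'date'],
         ['entity', 'social_event', 'appointment', 'date']]
print(choose_path(paths, node_docs))  # ['entity', 'food', 'fruit', 'date']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augment the Core Tree / Disambiguate Terms",
"sec_num": null
},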
{
"text": "The tree that is obtained in the previous step usually is very deep, which is undesirable from a user interface perspective. Castanet uses two rules for compressing the tree:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compress the Tree",
"sec_num": "4.5"
},
{
"text": "1. Starting from the leaves, recursively eliminate a parent that has fewer than k children, unless the parent is the root or has an item count larger than 0.1 \u00a3 (maximum term distribution). 2. Eliminate a child whose name appears within the parent's name, unless the child contains a WordNet domain name. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compress the Tree",
"sec_num": "4.5"
},
{
"text": ", which means eliminate parents that have fewer than two children.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a1 \u00a3 \u00a2",
"sec_num": null
},
{
"text": "Starting from the leaves, by applying Rule 2, nodes (ice cream sundae), (sherbet, sorbet), (course), (nutriment), (food, nutrient), (substance, matter) and (entity) are eliminated since they have only one child. Figure 2(a) shows the resulting tree. Next, by applying Rule 3, the node frozen dessert is eliminated, since it contains the word dessert which also appears in the name of its parent. The final tree is presented in Figure 2(b) . Note that this is a rather aggressive compression strategy, and the algorithm can be adjusted to allow more hierarchy to be retained.",
"cite_spans": [],
"ref_spans": [
{
"start": 212,
"end": 223,
"text": "Figure 2(a)",
"ref_id": null
},
{
"start": 427,
"end": 438,
"text": "Figure 2(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u00a1 \u00a3 \u00a2",
"sec_num": null
},
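{
"text": "A sketch of both rules over a nested tree of named nodes (our illustration: the traversal order is simplified relative to the paper's leaves-first description, and only the k = 2 case of Rule 1 is handled):

def compress(node, max_count=100, is_root=True):
    kids = [compress(c, max_count, False) for c in node['children']]
    # Rule 2: absorb a child whose name repeats a word of its parent's name,
    # e.g. 'frozen dessert' beneath 'dessert'; the grandchildren move up.
    merged = []
    for c in kids:
        if node['name'] in c['name']:
            merged.extend(c['children'])
        else:
            merged.append(c)
    node['children'] = merged
    # Rule 1 (k = 2): splice out a non-root, single-child parent whose item
    # count is at most 0.1 * (maximum term distribution).
    if not is_root and len(merged) == 1 and node['count'] <= 0.1 * max_count:
        return merged[0]
    return node

tree = {'name': 'dessert', 'count': 40, 'children': [
    {'name': 'frozen dessert', 'count': 30, 'children': [
        {'name': 'sundae', 'count': 12, 'children': []},
        {'name': 'sherbet', 'count': 9, 'children': []}]}]}
print([c['name'] for c in compress(tree)['children']])  # ['sundae', 'sherbet']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compress the Tree",
"sec_num": null
},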
{
"text": "The final step is to create a set of facet sub-hierarchies. The goal is to create a moderate set of facets, each of which has moderate depth and breadth at each level, in order to enhance the navigability of the categories. Pruning the top levels can be automated, but a manual editing pass over the outcome will produce the best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prune Top Level Categories / Create Facets",
"sec_num": "4.6"
},
{
"text": "To eliminate the top levels in an automated fashion, for each of the nine tree roots in the WordNet noun database, manually cut the top \u00a4 levels (where \u00a4 \u00a6 \u00a5 for the recipes collection). Then, for each of the resulting trees, recursively test if its root has more than \u00a7 \u00a9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prune Top Level Categories / Create Facets",
"sec_num": "4.6"
},
{
"text": "children. If it does, then the tree is considered a facet; otherwise, the current root is deleted and the algorithm tests to see if each new root has \u00a7 children. Those subtrees that do not meet the criterion are omitted from the final set of facets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prune Top Level Categories / Create Facets",
"sec_num": "4.6"
},
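{
"text": "A sketch of the recursive facet test over the same kind of tree (the child threshold n and the toy tree are illustrative):

def extract_facets(roots, n=2):
    facets, queue = [], list(roots)
    while queue:
        node = queue.pop()
        if len(node['children']) > n:
            facets.append(node)             # informative enough: keep as facet
        else:
            queue.extend(node['children'])  # delete this root, test children
    return facets

herb = {'name': 'herb', 'count': 9, 'children': [
    {'name': 'basil', 'count': 3, 'children': []},
    {'name': 'mint', 'count': 2, 'children': []},
    {'name': 'oregano', 'count': 4, 'children': []}]}
top = {'name': 'plant part', 'count': 9, 'children': [herb]}
print([f['name'] for f in extract_facets([top])])  # ['herb']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prune Top Level Categories / Create Facets",
"sec_num": null
},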
{
"text": "Consider the tree in Figure 4(a) . In this case, the categories of interest are (flavorer) and (kitchen utensil) along with their children. However, to reach any of these categories, the user has to descend six levels, each of which has very little information. Figure 4(b) shows the resulting facets, which (subjectively) are at an informative level of description for an information architecture. (In this illustration, \u00a4 \u00a3 \u00a2 .) Often the internal nodes of WordNet paths do not have the most felicitous names, e.g., edible fruit instead of fruit. Although we did not edit these names for the usability study, it is advisable to do so.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 32,
"text": "Figure 4(a)",
"ref_id": "FIGREF3"
},
{
"start": 262,
"end": 273,
"text": "Figure 4(b)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Prune Top Level Categories / Create Facets",
"sec_num": "4.6"
},
{
"text": "The intended users of the Castanet algorithm are information architects and others who need to build structures for information collections. A successful algorithm must be perceived by information architects as making their job easier. If the proposed category system appears to require a lot of work to modify, then IAs are likely to reject it. Thus, to evaluate Castanet's output, we recruited information architects and asked them to compare it to one other state-of-the-art approach as well as a baseline. The participants were asked to assess the qualities of each category system and to express how likely they would be to use each in their work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "The study compared the output of four algorithms: (a) Baseline (frequent words and two-word phrases), (b) Castanet, (c) LDA (Blei et al., 2003) 2 and (d) Subsumption (Sanderson and Croft, 1999) . The algorithms were applied to a dataset of $ recipes from Southwestcooking.com. Participants were recruited via email and were required to have experience building information architectures and to be at least familiar with recipe websites (to show their interest in the domain).",
"cite_spans": [
{
"start": 124,
"end": 143,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF1"
},
{
"start": 166,
"end": 193,
"text": "(Sanderson and Croft, 1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Study Design",
"sec_num": "5.1"
},
{
"text": "Currently there are no standard tools used by information architects for building category systems from free text. Based on our own experience, we assumed a strong baseline would be a list of the most frequent words and two-word phrases (stopwords removed); the study results confirmed this assumption. The challenge for an automated system is to be preferred to the baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Study Design",
"sec_num": "5.1"
},
{
"text": "The study design was within-participants, where each participant evaluated Castanet, a Baseline approach, and either Subsumption (N=16) or LDA (N=18). 3 Order of showing Castanet and the alternative algorithm was counterbalanced across participants in each condition.",
"cite_spans": [
{
"start": 151,
"end": 152,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Study Design",
"sec_num": "5.1"
},
{
"text": "Because the algorithms produce a large number of hierarchical categories, the output was shown to the Cas. Bas. LDA Cas. Bas. Sub. Def . Yes 4 2 0 2 2 0 Yes 10 10 0 13 11 6 No 2 2 2 1 3 2 Def. No 2 4 16 0 0 8 Table 1 : Responses to the question \"Would you be likely to use this algorithm in your work?\" comparing Castanet to the Baseline and LDA (N=18), and comparing Castanet to the Baseline and Subsumption (N=16).",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 245,
"text": ". Yes 4 2 0 2 2 0 Yes 10 10 0 13 11 6 No 2 2 2 1 3 2 Def. No 2 4 16 0 0 8 Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Study Design",
"sec_num": "5.1"
},
{
"text": "Cas. 34 Table 2 : Average responses to questions about the quality of the category systems. N shown in parentheses. Assessed on a four point scale where higher is better.",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 15,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Study Design",
"sec_num": "5.1"
},
{
"text": "participants using the open source Flamenco collection browser 4 (see Figure 5) . Clicking on a link shows subcategories as well as items that have been assigned that category. For example, clicking on the Penne subcategory beneath Pasta in the Castanet condition shows 5 recipes that contain the word penne as well as the other categories that have been assigned to these recipes. Since LDA does not create names for its output groups, they were assigned the generic names Category 1, 2, etc. Assignment of categories to items was done on a strict word-match basis; participants were not asked to assess the item assignment aspect of the interface. At the start of the study, participants answered questions about their experience designing information architectures. They were then asked to look at a partial list of recipes and think briefly about what their goals would be in building a website for navigating the collection.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 79,
"text": "Figure 5)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Study Design",
"sec_num": "5.1"
},
{
"text": "Next they viewed an ordered list of frequent terms drawn automatically from the collection (Baseline condition). After this, they viewed the output of one of the two target category systems. For each algorithm, participants were asked questions about the top-level categories, such as Would you add any categories? (possible responses: (a) No, None, (b) Yes, one or two, (c) Yes, a few, and (d) Yes, many). They were then asked to examine two specific top level categories in depth (e.g., For category Bread, would you remove any subcategories?). At the end of each assessment, they were asked to comment on general aspects of the category system as a whole (discussed below). After having seen both category systems, participants were asked to state how likely they would be to use the algorithm (e.g., Would you use Oak? Would you 4 Available at flamenco.berkeley.edu use Birch? Would you use the frequent words list?) Answer types were (a) No, definitely not, (b) Probably not, (c) Yes, I might want to use this system in some cases, and (d) Yes, I would definitely use this system. Table 1 shows the responses to the final question about how likely the participants are to use the results of each algorithm for their work. Both Castanet and the Baseline fare well, with Castanet doing somewhat better. 85% of the Castanet evaluators said yes or definitely yes to using it, compared to 74% for the Baseline. Only one participant said \"no\" to Castanet but \"yes\" to the Baseline, suggesting that both kinds of information are useful for information architects.",
"cite_spans": [],
"ref_spans": [
{
"start": 1086,
"end": 1093,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Study Design",
"sec_num": "5.1"
},
{
"text": "The comparison algorithms did poorly. Subsumption received 38% answering \"yes\" or \"definitely yes\" to the question about likelihood of use. LDA was rejected by all participants. A t-test (after converting responses to a 1-4 scale) shows that Castanet obtains significantly better scores than LDA (\u00a4 = 7.88 2.75) and Subsumption (\u00a4 = 4.50 2.75), for \u00a1 = 0.005. The differences between Castanet and the Baseline are not significant. Table 2 shows the average responses to the questions (i) Overall, these are categories meaningful; (ii) Overall, these categories describe the collection in a systematic way; (iii) These categories capture the important concepts.) They were scored as 1= Strongly disagree, 2 = Disagree Somewhat, 3 = Agree Somewhat, and 4 = Strongly agree. Castanet's score was about 35% higher than Subsumption's, and about 50% higher than LDA's.",
"cite_spans": [],
"ref_spans": [
{
"start": 431,
"end": 438,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Participants were asked to scrutinize the top-level categories and assess whether they would add categories, remove some, merge or rename some. The ratings were again converted to a four point scale (no changes = 4, change one or two = 3, change a few = 2, change many = 1). Table 3 shows the results. Castanet scores as well as or better than the others on all measures except Rename; Subsumption scores slightly higher on this measure, and does well on Split as well, but very poorly on Remove, reflecting the fact that it produces well-named categories at the top level, but too many at too fine a granularity.",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 282,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Participants were also asked to examine two subcategories in detail. Table 4 shows results averaged across the two subcategories for number of categories to add, remove, promote, move, and how well the subcategories matched their expectations. Castanet performs especially well on this last measure (2.5 versus 1.5 and 1.7). Participants generally did not suggest moves or promotions.",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 76,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Thus on all measures, we see Castanet outperforming the other state-of-the-art algorithms. Note that we did not explicitly evaluate the \"facetedness\" of the category systems, as we thought this would be too difficult for the participants to do. We feel the questions about the coher- 2.5 1.5 1.7 Table 4 : Assessing second-level categories.",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 303,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "We have presented an algorithm called Castanet that creates hierarchical faceted metadata using WordNet and Wordnet Domains. A questionnaire revealed that 85% information architects thought it was likely to be useful, compared to 0% for LDA and 38% for Subsumption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Although not discussed here, we have successfully applied the algorithm to other domains including biomedical journal titles and art history image descriptions, and to another lexical hierarchy, MeSH. 5 Although quite useful \"out of the box,\" the algorithm could benefit by several improvements and additions. The processing of the terms should recognize spelling variations (such as aging vs. ageing) and morphological variations. Verbs and adjectives are often quite important for a collection (e.g., stir-fry for cooking) and should be included, but with caution. Some terms should be allowed to occur with more than one sense if this is required by the dataset (and some in more than one facet even with the same sense, as seen in the brain example). Currently if a term is in a document it is assumed to use the sense assigned in the facet hierarchies; this is often incorrect, and so terms should be disambiguated within the text before automatic category assignment is done. And finally, WordNet is not exhaustive and some mechanism is needed to improve coverage for unknown terms. ",
"cite_spans": [
{
"start": 201,
"end": 202,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "A simpler, un-evaluated version of this algorithm was presented previously in a short paper(Stoica and Hearst, 2004).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Using code by Blei from www.cs.princeton.edu/\u02dcblei/lda-c/ 3 Pilot studies found that participants became very frustrated when asked to compare LDA against Subsumption, since neither tested well, so we dropped this condition. We did not consider asking any participant to evaluate all three systems, to avoid fatigue. To avoid biasing participants towards any approach, the target algorithms were given the neutral names of Pine, Birch, and Oak. Castanet was run without Domains for a fairer comparison. Top level pruning was done automatically as described, but with a few manual adjustments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "MEdical Subject Headings, http://www.nlm.nih.gov/mesh/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgements Thanks to Lou Rosenfeld and Rashmi Sinha for their help finding participants, and to all the participants themselves. This work was funded in part by NSF DBI-0317510 and in part by the Summer Undergraduate Program in Engineering Research at Berkeley (SUPERB).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The paraphrase search assistant:terminological feedback for iterative information seeking",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anick",
"suffix": ""
},
{
"first": "Susesh",
"middle": [],
"last": "Tipirneni",
"suffix": ""
}
],
"year": 1999,
"venue": "Procs. of SIGIR'99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anick and Susesh Tipirneni. 1999. The paraphrase search assistant:terminological feedback for iterative infor- mation seeking. In Procs. of SIGIR'99.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic construction of a hypernym-labeled noun hierarchy from text",
"authors": [
{
"first": "Sharon",
"middle": [
"A"
],
"last": "Caraballo",
"suffix": ""
}
],
"year": 1999,
"venue": "ACL '99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon A. Caraballo. 1999. Automatic construction of a hypernym-labeled noun hierarchy from text. In ACL '99.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Scatter/gather: A cluster-based approach to browsing large document collections",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Cutting",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Karger",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "John",
"middle": [
"W"
],
"last": "Tukey",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. of SIGIR'92",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Cutting, David Karger D., Jan Pedersen, and John W. Tukey. 1992. Scatter/gather: A cluster-based approach to browsing large document collections. In Proc. of SIGIR'92.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "WordNet: An Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. of COLING '92",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proc. of COLING '92.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The cluster-abstraction model: Unsupervised learning of topic hierarchies from text data",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 1999,
"venue": "Procs. of IJCAI'99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Hofmann. 1999. The cluster-abstraction model: Un- supervised learning of topic hierarchies from text data. In Procs. of IJCAI'99, Stolckholm, July.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Findex: Search result categories help users when document ranking fails",
"authors": [
{
"first": "Mika",
"middle": [],
"last": "Kaki",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of CHI '05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mika Kaki. 2005. Findex: Search result categories help users when document ranking fails. In Proc. of CHI '05.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Integrating subject field codes into WordNet",
"authors": [
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2000,
"venue": "Procs. of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernardo Magnini. 2000. Integrating subject field codes into WordNet. In Procs. of LREC 2000, Athens, Greece.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Ez.wordnet: Principles for automatic generation of a coarse grained wordnet",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Dan",
"middle": [
"I"
],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2001,
"venue": "Procs. of FLAIRS Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Dan I. Moldovan. 2001. Ez.wordnet: Prin- ciples for automatic generation of a coarse grained wordnet. In Procs. of FLAIRS Conference 2001, May.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Ontology learning and its application to automated terminology translation. Intelligent Systems",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Velardi",
"suffix": ""
},
{
"first": "Aldo",
"middle": [],
"last": "Gangemi",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "18",
"issue": "",
"pages": "22--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli, Paola Velardi, and Aldo Gangemi. 2003. On- tology learning and its application to automated terminology translation. Intelligent Systems, 18(1):22-31.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Lexically generated subject hierarchies for browsing large collections",
"authors": [
{
"first": "Craig",
"middle": [],
"last": "Nevill-Manning",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Witten",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Paynter",
"suffix": ""
}
],
"year": 1999,
"venue": "Inter. J. on Digital Libraries",
"volume": "2",
"issue": "2+3",
"pages": "111--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Craig Nevill-Manning, I. Witten, and G. Paynter. 1999. Lexi- cally generated subject hierarchies for browsing large collec- tions. Inter. J. on Digital Libraries, 2(2+3):111-123.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A knowledge-based approach to organizing retrieved documents",
"authors": [
{
"first": "Wanda",
"middle": [],
"last": "Pratt",
"suffix": ""
},
{
"first": "Marti",
"middle": [],
"last": "Hearst",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Fagan",
"suffix": ""
}
],
"year": 1999,
"venue": "Procs. of AAAI 99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wanda Pratt, Marti Hearst, and Larry Fagan. 1999. A knowledge-based approach to organizing retrieved docu- ments. In Procs. of AAAI 99, Orlando, FL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Information Architecture for the World Wide Web: Designing Large-scale Web Sites",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Morville",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis Rosenfeld and Peter Morville. 2002. Information Archi- tecture for the World Wide Web: Designing Large-scale Web Sites. O'Reilly & Associates, Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deriving concept hierarchies from text",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Sanderson",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 1999,
"venue": "Procs. of SIGIR '99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Sanderson and Bruce Croft. 1999. Deriving concept hi- erarchies from text. In Procs. of SIGIR '99.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Machine learning in automated text categorization",
"authors": [
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Computing Surveys",
"volume": "34",
"issue": "1",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabrizio Sebastiani. 2002. Machine learning in automated text categorization. ACM Computing Surveys, 34(1):1-47.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Nearly-automated metadata hierarchy creation",
"authors": [
{
"first": "Emilia",
"middle": [],
"last": "Stoica",
"suffix": ""
},
{
"first": "Marti",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emilia Stoica and Marti Hearst. 2004. Nearly-automated meta- data hierarchy creation. In Proc. of HLT-NAACL 2004.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Extending, trimming and fussing wordnet for technical documents",
"authors": [
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 2001,
"venue": "NAACL 2001 Workshop and Other Lexical Resources",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piek Vossen. 2001. Extending, trimming and fussing word- net for technical documents. In NAACL 2001 Workshop and Other Lexical Resources, East Stroudsburg, PA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Taxonomies and tags: From trees to piles of leaves",
"authors": [
{
"first": "Dave",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dave Weinberger. 2005. Taxonomies and tags: From trees to piles of leaves. In Release 1.0, Feb.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Faceted metadata for image search and browsing",
"authors": [
{
"first": "Ka-Ping",
"middle": [],
"last": "Yee",
"suffix": ""
},
{
"first": "Kirsten",
"middle": [],
"last": "Swearingen",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Marti",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 2003,
"venue": "Procs. of CHI '03",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ka-Ping Yee, Kirsten Swearingen, Kevin Li, and Marti Hearst. 2003. Faceted metadata for image search and browsing. In Procs. of CHI '03, Fort Lauderdale, FL, April.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Merging hypernym paths.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Figure 2: Compressing the tree.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Two path choices for an ambiguous term.",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Eliminating top levels. For example, consider the tree in Figure 1(c) and assume that",
"uris": null
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"text": "Partial view of categories obtained by (a) Castanet, (b) LDA and (c) Subsumption on the Recipes collection, displayed in the Flamenco interface.",
"uris": null
},
"TABREF2": {
"text": "Assessing top-level categories.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">Cas. (34). LDA (18) Sub. (16)</td></tr><tr><td>Add</td><td>2.8</td><td>2.8</td><td>2.4</td></tr><tr><td>Remove</td><td>3.4</td><td>2.2</td><td>2.5</td></tr><tr><td>Promote</td><td>3.7</td><td>3.4</td><td>3.8</td></tr><tr><td>Move</td><td>3.8</td><td>3.3</td><td>3.6</td></tr><tr><td>Matched Exp.</td><td/><td/><td/></tr></table>"
}
}
}
}