{
"paper_id": "C04-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:20:08.816405Z"
},
"title": "Generalizing Dimensionality in Combinatory Categorial Grammar",
"authors": [
{
"first": "Geert-Jan",
"middle": [
"M"
],
"last": "Kruijff",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Computational Linguistics Saarland University",
"location": {
"settlement": "Saarbr\u00fccken",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh Edinburgh",
"location": {
"country": "Scotland"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We extend Combinatory Categorial Grammar (CCG) with a generalized notion of multidimensional sign, inspired by the types of representations found in constraint-based frameworks like HPSG or LFG. The generalized sign allows multiple levels to share information, but only in a resource-bounded way through a very restricted indexation mechanism. This improves representational perspicuity without increasing parsing complexity, in contrast to full-blown unification used in HPSG and LFG. Well-formedness of a linguistic expressions remains entirely determined by the CCG derivation. We show how the multidimensionality and perspicuity of the generalized signs lead to a simplification of previous CCG accounts of how word order and prosody can realize information structure.",
"pdf_parse": {
"paper_id": "C04-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "We extend Combinatory Categorial Grammar (CCG) with a generalized notion of multidimensional sign, inspired by the types of representations found in constraint-based frameworks like HPSG or LFG. The generalized sign allows multiple levels to share information, but only in a resource-bounded way through a very restricted indexation mechanism. This improves representational perspicuity without increasing parsing complexity, in contrast to full-blown unification used in HPSG and LFG. Well-formedness of a linguistic expressions remains entirely determined by the CCG derivation. We show how the multidimensionality and perspicuity of the generalized signs lead to a simplification of previous CCG accounts of how word order and prosody can realize information structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The information conveyed by linguistic utterances is diverse, detailed, and complex. To properly analyze what is communicated by an utterance, this information must be encoded and interpreted at many levels. The literature contains various proposals for dealing with many of these levels in the description of natural language grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since information flows between different levels of analysis, it is common for linguistic formalisms to bundle them together and provide some means for communication between them. Categorial grammars, for example, normally employ a Saussurian sign that relates a surface string with its syntactic category and the meaning it expresses. Syntactic analysis is entirely driven by the categories, and when information from other levels is used to affect the derivational possibilities, it is typically loaded as extra information on the categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Head-driven Phrase Structure Grammar (HPSG) (Pollard and Sag, 1993) and Lexical Functional Grammar (LFG) (Kaplan and Bresnan, 1982) also use complex signs. However, these signs are monolithic structures which permit information to be freely shared across all dimensions: any given dimension can place restrictions on another. For example, variables resolved during the construction of the logical form can block a syntactic analysis. This provides a clean, unified formal system for dealing with the different levels, but it also can adversely affect the complexity of parsing grammars written in these frameworks (Maxwell and Kaplan, 1993) .",
"cite_spans": [
{
"start": 44,
"end": 67,
"text": "(Pollard and Sag, 1993)",
"ref_id": "BIBREF11"
},
{
"start": 72,
"end": 104,
"text": "Lexical Functional Grammar (LFG)",
"ref_id": null
},
{
"start": 105,
"end": 131,
"text": "(Kaplan and Bresnan, 1982)",
"ref_id": "BIBREF5"
},
{
"start": 614,
"end": 640,
"text": "(Maxwell and Kaplan, 1993)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We thus find two competing perspectives on communication between levels in a sign. In this paper, we propose a generalization of linguistic signs for Combinatory Categorial Grammar (CCG) (Steedman, 2000b) . This generalization enables different levels of linguistic information to be represented but limits their interaction in a resource-bounded manner, following White (2004) . This provides a clean separation of the levels and allows them to be designed and utilized in a more modular fashion. Most importantly, it allows us to retain the parsing complexity of CCG while gaining the representational advantages of the HPSG and LFG paradigms.",
"cite_spans": [
{
"start": 187,
"end": 204,
"text": "(Steedman, 2000b)",
"ref_id": "BIBREF13"
},
{
"start": 365,
"end": 377,
"text": "White (2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To illustrate the approach, we use it to model various aspects of the realization of information structure, an inherent aspect of the (linguistic) meaning of an utterance. Speakers use information structure to present some parts of that meaning as depending on the preceding discourse context and others as affecting the context by adding new content. Languages may realize information structure using different, often interacting means, such as word order, prosody, (marked) syntactic constructions, or morphological marking (Vallduv\u00ed and Engdahl, 1996; . The literature presents various proposals for how information structure can be captured in categorial grammar (Steedman, 2000a; Hoffman, 1995; Kruijff, 2001) . Here, we model the essential aspects of these accounts in a more perspicuous manner by using our generalized signs.",
"cite_spans": [
{
"start": 526,
"end": 554,
"text": "(Vallduv\u00ed and Engdahl, 1996;",
"ref_id": "BIBREF14"
},
{
"start": 667,
"end": 684,
"text": "(Steedman, 2000a;",
"ref_id": "BIBREF12"
},
{
"start": 685,
"end": 699,
"text": "Hoffman, 1995;",
"ref_id": "BIBREF4"
},
{
"start": 700,
"end": 714,
"text": "Kruijff, 2001)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main outcomes of the proposal are threefold: (1) CCG gains a more flexible and general kind of sign; (2) these signs contain multiple levels that interact in a modular fashion and are built via CCG derivations without increasing parsing com-plexity; and (3) we use these signs to simplify previous CCG's accounts of the effects of word order and prosody on information structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we give an overview of syntactic combination and semantic construction in CCG. We use CCG's multi-modal extension (Baldridge and Kruijff, 2003) , which enriches the inventory of slash types. This formalization renders constraints on rules unnecessary and supports a universal set of rules for all grammars.",
"cite_spans": [
{
"start": 131,
"end": 160,
"text": "(Baldridge and Kruijff, 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar",
"sec_num": "2"
},
{
"text": "Nearly all syntactic behavior in CCG is encoded in categories. They may be atoms, like np, or functions which specify the direction in which they seek their arguments, like (s\\np)/np. The latter is the category for English transitive verbs; it first seeks its object to its right and then its subject to its left.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "Categories combine through a small set of universal combinatory rules. The simplest are application rules which allow a function category to consume its argument either on its right (>) or on its left (<):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "(>) X/ Y Y \u21d2 X (<) Y X\\ Y \u21d2 X",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "Four further rules allow functions to compose with other functions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "(>B) X/ Y Y/ Z \u21d2 X/ Z (<B) Y\\ Z X\\ Y \u21d2 X\\ Z (>B \u00d7 ) X/ \u00d7 Y Y\\ \u00d7 Z \u21d2 X\\ \u00d7 Z (<B \u00d7 ) Y/ \u00d7 Z X\\ \u00d7 Y \u21d2 X/ \u00d7 Z",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "The modalities , and \u00d7 on the slashes enforce different kinds of combinatorial potential on categories. For a category to serve as input to a rule, it must contain a slash which is compatible with that specified by the rule. The modalities work as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "is the most restricted modality, allowing combination only by the application rules (> and <). allows combination with the application rules and the order-preserving composition rules (>B and <B). \u00d7 allows limited permutation via the crossed composition rules (>B \u00d7 and <B \u00d7 ) as well as the application rules. Additionally, a permissive modality \u2022 allows combination by all rules in the system. However, we suppress the \u2022 modality on slashes to avoid clutter. An undecorated slash may thus combine by all rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "There are two further rules of type-raising that turn an argument category into a function over functions that seek that argument:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "(>T) X \u21d2 Y/ i (Y\\ i X) (<T) X \u21d2 Y\\ i (Y/ i X)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "The variable modality i on the output categories constrains both slashes to have the same modality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "These rules support the following incremental derivation for Marcel proved completeness:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "(1) Marcel proved completeness np (s\\np)/np np >T s/(s\\np) >B s/np > s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "This derivation does not display the effect of using modalities in CCG; see Baldridge (2002) and Baldridge and Kruijff (2003) for detailed linguistic justification for this modalized formulation of CCG.",
"cite_spans": [
{
"start": 76,
"end": 92,
"text": "Baldridge (2002)",
"ref_id": "BIBREF2"
},
{
"start": 97,
"end": 125,
"text": "Baldridge and Kruijff (2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Categories and combination",
"sec_num": "2.1"
},
{
"text": "Many different kinds of semantic representations and ways of building them with CCG exist. We use Hybrid Logic Dependency Semantics (HLDS) (Kruijff, 2001), a framework that utilizes hybrid logic (Blackburn, 2000) to realize a dependencybased perspective on meaning. Hybrid logic provides a language for representing relational structures that overcomes standard modal logic's inability to directly reference states in a model. This is achieved via nominals, a kind of basic formula which explicitly names states. Like propositions, nominals are first-class citizens of the object language, so formulas can be formed using propositions, nominals, standard boolean operators, and the satisfaction operator \"@\". A formula @ i (p \u2227 F (j \u2227 q)) indicates that the formulas p and F (j \u2227 q) hold at the state named by i and that the state j is reachable via the modal relation F.",
"cite_spans": [
{
"start": 195,
"end": 212,
"text": "(Blackburn, 2000)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Logic Dependency Semantics",
"sec_num": "2.2"
},
{
"text": "In HLDS, hybrid logic is used as a language for describing semantic interpretations as follows. Each semantic head is associated with a nominal that identifies its discourse referent and heads are connected to their dependents via dependency relations, which are modeled as modal relations. As an example, the sentence Marcel proved completeness receives the representation in (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Logic Dependency Semantics",
"sec_num": "2.2"
},
{
"text": "(2) @ e (prove \u2227 TENSE past",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Logic Dependency Semantics",
"sec_num": "2.2"
},
{
"text": "\u2227 ACT (m \u2227 Marcel) \u2227 PAT (c \u2227 comp.))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Logic Dependency Semantics",
"sec_num": "2.2"
},
{
"text": "In this example, e is a nominal that labels the predications and relations for the head prove, and m and c label those for Marcel and completeness, respectively. The relations ACT and PAT represent the dependency roles Actor and Patient, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Logic Dependency Semantics",
"sec_num": "2.2"
},
{
"text": "By using the @ operator, hierarchical terms such as (2) can be flattened to an equivalent conjunction of fixed-size elementary predications (EPs):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Logic Dependency Semantics",
"sec_num": "2.2"
},
{
"text": "(3) @ e prove \u2227 @ e TENSE past \u2227 @ e ACT m \u2227 @ e PAT c \u2227 @ m Marcel \u2227 @ c comp. Baldridge and Kruijff (2002) show how HLDS representations can be built via CCG derivations. White (2004) improves HLDS construction by operating on flattened representations such as (3) and using a simple semantic index feature in the syntax. We adopt this latter approach, described below.",
"cite_spans": [
{
"start": 80,
"end": 108,
"text": "Baldridge and Kruijff (2002)",
"ref_id": "BIBREF0"
},
{
"start": 173,
"end": 185,
"text": "White (2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Logic Dependency Semantics",
"sec_num": "2.2"
},
{
"text": "EPs are paired with syntactic categories in the lexicon as shown in (4)-(6) below. Each atomic category has an index feature, shown as a subscript, which makes a nominal available for capturing syntactically induced dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Construction",
"sec_num": "2.3"
},
{
"text": "(4) prove (s e \\np x )/np y : @ e prove \u2227 @ e TENSE past \u2227 @ e ACT x \u2227 @ e PAT y Applications of the combinatory rules co-index the appropriate nominals via unification on the categories. EPs are then conjoined to form the resulting interpretation. For example, in derivation (1), (5) type-raises and composes with (4) to yield (7). The index x is syntactically unified with m, and this resolution is reflected in the new conjoined logical form. (7) can then apply to (6) to yield (8), which has the same conjunction of predications as (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Construction",
"sec_num": "2.3"
},
{
"text": "(7) Marcel proved s e /np y : @ e prove \u2227 @ e TENSE past \u2227 @ e ACT m \u2227 @ e PAT y \u2227 @ m Marcel (8) Marcel proved completeness s e : @ e prove \u2227 @ e TENSE past \u2227 @ e ACT m \u2227 @ e PAT c \u2227 @ m Marcel \u2227 @ c completeness Since the EPs are always conjoined by the combinatory rules, semantic construction is guaranteed to be monotonic. No semantic information can be dropped during the course of a derivation. This provides a clean way of establishing semantic dependencies as informed by the syntactic derivation. In the next section, we extend this paradigm for use with any number of representational levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Construction",
"sec_num": "2.3"
},
{
"text": "To support a more modular and perspicuous encoding of multiple levels of analysis, we generalize the notion of sign commonly used in CCG. The approach is inspired on the one hand by earlier work by Steedman (2000a) and Hoffman (1995) , and on the other by the signs found in constraint-based approaches to grammar. The principle idea is to extend White's (2004) approach to semantic construction (see \u00a72.3). There, categories and the meaning they help express are connected through coindexation. Here, we allow for information in any (finite) number of levels to be related in this way.",
"cite_spans": [
{
"start": 198,
"end": 214,
"text": "Steedman (2000a)",
"ref_id": "BIBREF12"
},
{
"start": 219,
"end": 233,
"text": "Hoffman (1995)",
"ref_id": "BIBREF4"
},
{
"start": 347,
"end": 361,
"text": "White's (2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized dimensionality",
"sec_num": "3"
},
{
"text": "A sign is an n-tuple of terms that represent information at n distinct dimensions. Each dimension represents a level of linguistic information such as prosody, meaning, or syntactic category. As a representation, we assume that we have for each dimension a language that defines well-formed representations, and a set of operations which can create new representations from a set of given representations. 1 For example, we have by definition a dimension for syntactic categories. The language for this dimension is defined by the rules for category construction: given a set of atomic categories A, C is a category iff (i) C \u2208 A or (ii) C is of the form A\\ m B or A/ m B with A, B categories and m \u2208 { , \u00d7, \u2022}. The set of combinatory rules defines the possible operations on categories.",
"cite_spans": [
{
"start": 406,
"end": 407,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized dimensionality",
"sec_num": "3"
},
{
"text": "This syntactic category dimension drives the grammatical analysis, thus guiding the composition of signs. When two categories are combined via a rule, the appropriate indices are unified. It is through this unification of indices that information can be passed between signs. At a given dimension, the co-indexed information coming from the two signs we combine must be unifiable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized dimensionality",
"sec_num": "3"
},
{
"text": "With these signs, dimensions interact in a more limited way than in HPSG or LFG. Constraints (resolved through unification) may only be applied if they are invoked through co-indexation on categories. This provides a bound on the number of indices and the number of unifications to be made. As such, full recursion and complex unification as in attribute-value matrices with re-entrancy is avoided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized dimensionality",
"sec_num": "3"
},
{
"text": "The approach incorporates various ideas from constraint-based approaches, but remains based on a derivational perspective on grammatical analysis and derivational control, unlike e.g Categorial Unification Grammar. Furthermore, the ability for dimensions to interact through shared indices brings several advantages: (1) \"parallel derivations\" (Hoffman, 1995) are unnecessary; (2) non-isomorphic, functional structures across different dimensions can be employed; and (3) there is no longer a need to load all the necessary information into syntactic categories (as with Kruijff (2001)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized dimensionality",
"sec_num": "3"
},
{
"text": "In this section, we illustrate our approach on several examples involving information structure. We use signs that include the following dimensions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Examples",
"sec_num": "4"
},
{
"text": "Prosody: sequences of tunes from the inventory of (Pierrehumbert and Hirschberg, 1990) , composition through concatenation Syntactic category: well-formed categories, combinatory rules (see \u00a72)",
"cite_spans": [
{
"start": 50,
"end": 86,
"text": "(Pierrehumbert and Hirschberg, 1990)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonemic representation: word sequences, composition of sequences is through concatenation",
"sec_num": null
},
{
"text": "Information structure: hybrid logic formulas of the form @ d [in]r, with r a discourse referent that has informativity in (theme \u03b8, or rheme \u03c1) relative to the current point in the discourse d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonemic representation: word sequences, composition of sequences is through concatenation",
"sec_num": null
},
{
"text": "Predicate-argument structure: hybrid logic formulas of the form as discussed in \u00a72.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonemic representation: word sequences, composition of sequences is through concatenation",
"sec_num": null
},
{
"text": "Example (9) illustrates a sign with these dimensions. The word-form Marcel bears an H* accent, and acts as a type-raised category that seeks a verb missing its subject. The H* accent indicates that the discourse referent m introduces new information at the current point in the discourse d: i.e. the meaning @ m marcel should end up as part of the rheme (\u03c1) of the utterance,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonemic representation: word sequences, composition of sequences is through concatenation",
"sec_num": null
},
{
"text": "@ d [\u03c1]m. (9) Marcel H* s h /(s h \\np m ) @ d [\u03c1]m @ m marcel",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonemic representation: word sequences, composition of sequences is through concatenation",
"sec_num": null
},
{
"text": "If a sign does not specify any information at a particular dimension, this is indicated by (or an empty line if no confusion can arise).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonemic representation: word sequences, composition of sequences is through concatenation",
"sec_num": null
},
{
"text": "We start with a simple example of topicalization in English. In topicalized constructions, a thematic object is fronted before the subject. Given the question Did Marcel prove soundness and completeness?, (10) is a possible response using topicalization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topicalization",
"sec_num": "4.1"
},
{
"text": "(10) Completeness, Marcel proved, and soundness, he conjectured.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topicalization",
"sec_num": "4.1"
},
{
"text": "We can capture the syntactic and information structure effects of such sentences by assigning the following kind of sign to (topicalized) noun phrases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topicalization",
"sec_num": "4.1"
},
{
"text": "(11) completeness s i /(s i /np c ) @ d [\u03b8]c @ c completeness",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topicalization",
"sec_num": "4.1"
},
{
"text": "This category enables the derivation in Figure 1 . The type-raised subject composes with the verb, and the result is consumed by the topicalizing category. The information structure specification stated in the sign in (11) is passed through to the final sign.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Topicalization",
"sec_num": "4.1"
},
{
"text": "The topicalization of the object in (10) only indicates the informativity of the discourse referent realized by the object. It does not yield any indications about the informativity of other constituents; hence the informativity for the predicate and the Actor is left unspecified. In English, the informativity of these discourse referents can be indicated directly with the use of prosody, to which we now turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topicalization",
"sec_num": "4.1"
},
{
"text": "Steedman (2000a) presents a detailed, CCG-based account of how prosody is used in English as a means to realize information structure. In the model, pitch accents and boundary tones have an effect on both the syntactic category of the expression they mark, and the meaning of that expression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prosody & information structure",
"sec_num": "4.2"
},
{
"text": "Steedman distinguishes pitch accents as markers of either the theme (\u03b8) or of the rheme (\u03c1): L+H* and L*+H are \u03b8-markers; H*, L*, H*+L and H+L* are \u03c1-markers. Since pitch accents mark individual words, not (necessarily) larger phrases, Steedman uses the \u03b8/\u03c1-marking to spread informativity over the domain and the range of function categories. Identical markings on different parts of a function category not only act as features, but also as occurrences of a singular variable. The value of the marking on the domain can thus get passed down (\"projected\") to markings on categories in the range.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prosody & information structure",
"sec_num": "4.2"
},
{
"text": "Constituents bearing no tune have an \u03b7-marking, which can be unified with either \u03b7, \u03b8 or \u03c1. Phrases with such markings are \"incomplete\" until they combine with a boundary tone. Boundary tones have the effect of mapping phrasal tones into intonational phrase boundaries. To make these boundaries explicit and enforce such \"complete\" prosodic phrases to only combine with other complete prosodic phrases, Steedman introduces two further types of marking -\u03b9 and \u03c6 -on categories. The \u03c6 markings only unify with other \u03c6 or \u03b9 markings on categories, not with \u03b7, \u03b8 or \u03c1. These markings are only introduced to provide derivational control and are not reflected in the underlying meaning (which only reflects \u03b7, \u03b8 or \u03c1). Figure 2 recasts the above as an abstract specification of which different types of prosodic constituents can, or cannot, be combined. Figure 1 : Derivation for topicalization. system can be implemented using just one feature pros which takes the values ip for intermediate phrases, cp for complete phrases, and up for unmarked phrases. We write s pros=ip , or simply s ip if no confusion can arise. Figure 2 . If a constituent is marked with either a \u03b8-or \u03c1-tune, the atomic result category of the (possibly complex) category is marked with ip. Prosodically unmarked constituents are marked as up. The lexical entries in (12) illustrates this idea. 3",
"cite_spans": [],
"ref_spans": [
{
"start": 713,
"end": 721,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 848,
"end": 856,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1113,
"end": 1121,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Prosody & information structure",
"sec_num": "4.2"
},
{
"text": "s i /(s i /np c ) s j /(s j \\np m ) (s p \\np x )/np y @ d [\u03b8]c @ c completeness @ m Marcel @ p prove \u2227 @ p ACT x \u2227 @ p PAT y >B s p /np y @ p prove \u2227 @ p ACT m \u2227 @ p PAT y \u2227 @ m Marcel > s p @ d [\u03b8]c @ p prove \u2227 @ p ACT m \u2227 @ p PAT c \u2227 @ m Marcel \u2227 @ c completeness",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prosody & information structure",
"sec_num": "4.2"
},
{
"text": "(12) MARCEL proved COMPLETENESS H* L+H* s ip /(s up \\np) (s up \\np)/np s ip $\\(s up $/np)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prosody & information structure",
"sec_num": "4.2"
},
{
"text": "This can proceed in two ways. Either the marked MARCEL and the unmarked proved combine to produce an intermediate phrase (13), or proved and the marked COMPLETENESS combine (14). For the remainder of this paper, we will suppress up marking and write s up simply as s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prosody & information structure",
"sec_num": "4.2"
},
{
"text": "Examples 13and 14show that prosodically marked and unmarked phrases can combine. However, both of these partial derivations produce categories that cannot be combined further. For example, in 14 To capture the bottom-half of Figure 2 , the boundary tones L and LH% need categories which create complete phrases out of those for MARCEL and proved COMPLETENESS, and thereafter allow them to combine. Figure 3 shows the appropriate categories and complete analysis. We noted earlier that downstepped phrasal tunes form an exception to the rule that intermediate phrases cannot combine. To enable this, we not only should mark the result category with ip (tune), but also any leftward argument(s) should have ip (downstep). Thus, the effect of (lexically) combining a downstep tune with an unmarked category is specified by the following template: add marking x ip $\\y ip to an unmarked category of the form x$\\y. The derivation in Figure 5 illustrates this idea on example (64) from (Steedman, 2000a) .",
"cite_spans": [
{
"start": 980,
"end": 997,
"text": "(Steedman, 2000a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 225,
"end": 233,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 398,
"end": 406,
"text": "Figure 3",
"ref_id": null
},
{
"start": 928,
"end": 936,
"text": "Figure 5",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Prosody & information structure",
"sec_num": "4.2"
},
{
"text": "To relate prosody to information structure, we extend the strategy used for constructing logical forms described in \u00a72.3, in which a simple index feature Figure 4 : Information structure for derivation for (67)-(68) from (Steedman, 2000a) on atomic categories makes a nominal (discourse referent) available. We represent information structure as a formula @ d [i]r at a dimension separate from the syntactic category. The nominal r stands for the discourse referent, which has informativity i with respect to the current point in the discourse d . Following Steedman, we distinguish two levels of informativity, namely \u03b8 (theme) and \u03c1 (rheme).",
"cite_spans": [
{
"start": 221,
"end": 238,
"text": "(Steedman, 2000a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 154,
"end": 162,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Prosody & information structure",
"sec_num": "4.2"
},
{
"text": "\\np x )/np y s cp $\\ s ip $ s ip \\(s/np c ) (s cp \\s cp $)\\ (s ip \\s$) @ d [\u03b8]p @ d [\u03c1]c @ m Marcel @ p prove \u2227 @ p ACT x \u2227 @ p PAT y @ c completeness >T < s ip /(s ip \\np) s cp \\(s cp /np c ) @ d [\u03c1]c @ m Marcel @ c completeness >B s ip /np @ d [\u03b8]p @ p prove \u2227 @ p ACT m \u2227 @ p PAT y \u2227 @ m Marcel < s cp /np y @ d [\u03b8]p @ p prove \u2227 @ p ACT m \u2227 @ p PAT y \u2227 @ m Marcel < s cp @ d [\u03b8]p \u2227 @ d [\u03c1]c @ p prove \u2227 @ p ACT m \u2227 @ p PAT c \u2227 @ m Marcel \u2227 @ c completeness",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prosody & information structure",
"sec_num": "4.2"
},
{
"text": "We start with a minimal assignment of informativity: a theme-tune on a constituent sets the informativity of the discourse referent r realized by the constituent to \u03b8 and a rheme-tune sets it to \u03c1. This is a minimal assignment in the sense that we do not project informativity; instead, we only set informativity for those discourse referents whose realization shows explicit clues as to their information status. The derivation in Figure 4 illustrates this idea and shows the construction of both logical form and information structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 432,
"end": 440,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Prosody & information structure",
"sec_num": "4.2"
},
{
"text": "Indices can also impose constraints on the informativity of arguments. For example, in the downstep example (Figure 5 ), the discourse referents corresponding to ANNA and SAYS are both part of the theme. We specify this on the category of the constituent that has received the downstepped tune: the referent of the subject of SAYS (indexed x) must be in the theme along with the referent s for SAYS. This is satisfied in the derivation: a unifies with x, and we can unify the statements about a's informativity coming from ANNA (@ d [\u03b8]a) and SAYS (@ d [\u03b8]x, with x replaced by a in the >B step).",
"cite_spans": [
{
"start": 539,
"end": 542,
"text": "[\u03b8]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 108,
"end": 117,
"text": "(Figure 5",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Prosody & information structure",
"sec_num": "4.2"
},
{
"text": "In this paper, we generalize the traditional Saussurian sign in CCG with an n-dimensional linguistic sign. The dimensions in the generalized linguistic sign can be related through indexation. Indexation places constraints on signs by requiring that co-indexed material is unifiable, on a per-dimension basis. Consequently, we do not need to overload the syntactic category with information from different dimensions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "The resulting sign structure resembles the signs found in constraint-based grammar formalisms. There is, however, an important difference. Information at various dimensions can be related through co-indexation, but dimensions cannot be directly ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "[Derivation rows spilled from Figure 5: successive >B steps derive s ip /np with informativity @ d [\u03b8]s \u2227 @ d [\u03b8]a \u2227 @ d [\u03b8](pron) \u2227 @ d [i]p.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "referenced. (Figure 5: Information structure for derivation for (64) from (Steedman, 2000a).) As analysis remains driven only by inference over categories, only those constraints triggered by indexation on the categories are imposed. We do not allow for re-entrancy.",
"cite_spans": [
{
"start": 61,
"end": 78,
"text": "(Steedman, 2000a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "One can conceive of a scenario in which the various levels contribute to determining the well-formedness of an expression. For example, we may wish to evaluate the current information structure against a discourse model, and reject the analysis if we find it is unsatisfiable. If such a move is made, then the overall complexity is bounded by the complexity of the dimension for which satisfiability is most difficult to determine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "In the context of this paper we assume operations are multiplicative. Also, note that dimensions may differ in what languages and operations they use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "There is one exception we should note: two intermediate phrases can combine if the second one has a downstepped accent. We deal with this exception at the end of the section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Coupling CCG and Hybrid Logic Dependency Semantics",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Geert-Jan",
"middle": [],
"last": "Kruijff",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of 40th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "319--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Baldridge and Geert-Jan Kruijff. 2002. Coupling CCG and Hybrid Logic Dependency Semantics. In Proc. of 40th Annual Meeting of the ACL, pages 319- 326, Philadelphia, Pennsylvania.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multi-Modal Combinatory Categorial Grammar",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Geert-Jan",
"middle": [],
"last": "Kruijff",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of 10th Annual Meeting of the EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Baldridge and Geert-Jan Kruijff. 2003. Multi- Modal Combinatory Categorial Grammar. In Proc. of 10th Annual Meeting of the EACL, Budapest.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Lexically Specified Derivational Control in Combinatory Categorial Grammar",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Baldridge. 2002. Lexically Specified Derivational Control in Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Representation, reasoning, and relational structures: a hybrid logic manifesto",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Blackburn",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of the Interest Group in Pure Logic",
"volume": "8",
"issue": "3",
"pages": "339--365",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Blackburn. 2000. Representation, reasoning, and relational structures: a hybrid logic manifesto. Journal of the Interest Group in Pure Logic, 8(3):339- 365.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Integrating \"free\" word order syntax and information structure",
"authors": [
{
"first": "Beryl",
"middle": [],
"last": "Hoffman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. of 7th Annual Meeting of the EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beryl Hoffman. 1995. Integrating \"free\" word order syntax and information structure. In Proc. of 7th An- nual Meeting of the EACL, Dublin.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Lexicalfunctional grammar: A formal system for grammatical representation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ronald",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bresnan",
"suffix": ""
}
],
"year": 1982,
"venue": "The Mental Representation of Grammatical Relations",
"volume": "",
"issue": "",
"pages": "173--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald M. Kaplan and Joan Bresnan. 1982. Lexical- functional grammar: A formal system for grammat- ical representation. In The Mental Representation of Grammatical Relations, pages 173-281. The MIT Press, Cambridge Massachusetts.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Categorial-Modal Logical Architecture of Informativity: Dependency Grammar Logic & Information Structure",
"authors": [
{
"first": "Jan",
"middle": [
"M"
],
"last": "Geert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kruijff",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geert-Jan M. Kruijff. 2001. A Categorial-Modal Logi- cal Architecture of Informativity: Dependency Gram- mar Logic & Information Structure. Ph.D. thesis, Charles University, Prague, Czech Republic.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Formulating a category of informativity",
"authors": [
{
"first": "Geert-Jan",
"middle": [
"M"
],
"last": "Kruijff",
"suffix": ""
}
],
"year": 2002,
"venue": "Hilde Hasselgard, Stig Johansson, Bergljot Behrens, and Cathrine Fabricius-Hansen, editors, Information Structure in a Cross-Linguistic Perspective",
"volume": "",
"issue": "",
"pages": "129--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geert-Jan M. Kruijff. 2002. Formulating a category of informativity. In Hilde Hasselgard, Stig Johansson, Bergljot Behrens, and Cathrine Fabricius-Hansen, ed- itors, Information Structure in a Cross-Linguistic Per- spective, pages 129-146. Rodopi, Amsterdam.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Binding across boundaries",
"authors": [
{
"first": "Geert-Jan",
"middle": [
"M"
],
"last": "Kruijff",
"suffix": ""
}
],
"year": 2003,
"venue": "Resource Sensitivity, Binding, and Anaphora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geert-Jan M. Kruijff. 2003. Binding across boundaries. In Geert-Jan M. Kruijff and Richard T. Oehrle, editors, Resource Sensitivity, Binding, and Anaphora. Kluwer Academic Publishers, Dordrecht.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The interface between phrasal and functional constraints",
"authors": [
{
"first": "John",
"middle": [
"T.",
"III"
],
"last": "Maxwell",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"M"
],
"last": "Kaplan",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "4",
"pages": "571--590",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John T. III Maxwell and Ronald M. Kaplan. 1993. The interface between phrasal and functional constraints. Computational Linguistics, 19(4):571-590.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The meaning of intonational contours in the interpretation of discourse",
"authors": [
{
"first": "Janet",
"middle": [],
"last": "Pierrehumbert",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 1990,
"venue": "Intentions in Communication",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janet Pierrehumbert and Julia Hirschberg. 1990. The meaning of intonational contours in the interpretation of discourse. In J. Morgan P. Cohen and M. Pollack, editors, Intentions in Communication. The MIT Press, Cambridge Massachusetts.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Head-Driven Phrase Structure Grammar",
"authors": [
{
"first": "Carl",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "Ivan",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl Pollard and Ivan A. Sag. 1993. Head-Driven Phrase Structure Grammar. University of Chicago Press, Chicago IL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Information structure and the syntax-phonology interface",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "Linguistic Inquiry",
"volume": "31",
"issue": "4",
"pages": "649--689",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2000a. Information structure and the syntax-phonology interface. Linguistic Inquiry, 31(4):649-689.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Syntactic Process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2000b. The Syntactic Process. The MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The linguistic realization of information packaging",
"authors": [
{
"first": "Enric",
"middle": [],
"last": "Vallduv\u00ed",
"suffix": ""
},
{
"first": "Elisabet",
"middle": [],
"last": "Engdahl",
"suffix": ""
}
],
"year": 1996,
"venue": "Linguistics",
"volume": "34",
"issue": "",
"pages": "459--519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Enric Vallduv\u00ed and Elisabet Engdahl. 1996. The linguis- tic realization of information packaging. Linguistics, 34:459-519.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Efficient realization of coordinate structures in Combinatory Categorial Grammar",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2004,
"venue": "Research on Language and Computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael White. 2004. Efficient realization of coordinate structures in Combinatory Categorial Grammar. Re- search on Language and Computation. To appear.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Marcel np m : @ m Marcel (6) completeness np c : @ c completeness",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Abstract specification of derivational control in prosody. First consider the top half of Figure 2.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": ", s ip /(s\\np) cannot combine with s ip \\np to yield a larger intermediate phrase. This properly captures the top half of Figure 2. To obtain a complete analysis for (12), boundary tones are needed to complete the intermediate phrases. For example, consider (15) (based on example (70) in Steedman (2000a))",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "The $'s in the category for COMPLETENESS are standard CCG schematizations: s$ indicates all functions into s, such as s\\np and (s\\np)/np. See Steedman (2000b) for details.",
"num": null,
"type_str": "table",
"content": "<table><tr><td>(13) MARCEL H* s ip /(s (14) MARCEL proved COMPLETENESS L+H* H* s ip /(s up \\np) (s up \\np)/np s ip $\\(s up $/np) proved COMPLETENESS L+H*</td></tr></table>",
"html": null
},
"TABREF1": {
"text": "ip /(s\\np) (s cp /s cp $)\\ (s ip /s$) (s\\np)/np s ip $\\(s$/np) s cp $\\ s ip $",
"num": null,
"type_str": "table",
"content": "<table><tr><td>MARCEL</td><td colspan=\"2\">proved</td><td colspan=\"2\">COMPLETENESS</td></tr><tr><td>H*</td><td>L</td><td/><td>L+H*</td><td>LH%</td></tr><tr><td colspan=\"2\">s &lt; s cp /(s cp \\np)</td><td/><td>s ip \\np</td><td>&lt;</td></tr><tr><td/><td/><td/><td colspan=\"2\">s cp \\np</td><td>&lt;</td></tr><tr><td/><td>s cp</td><td/><td/><td>&gt;</td></tr><tr><td colspan=\"6\">Figure 3: Derivation including tunes and boundary tones; (70) from (Steedman, 2000a)</td></tr><tr><td>Marcel</td><td>PROVED L+H*</td><td>LH%</td><td colspan=\"2\">COMPLETENESS H*</td><td>LL%</td></tr><tr><td>np</td><td>(s ip:p</td><td/><td/><td/></tr></table>",
"html": null
},
"TABREF2": {
"text": "(s ip:s \\np ip:x )/s y s/(s\\np) (s p \\np)/np @ d [\u03b8]a @ d [\u03b8]s \u2227 @ d [\u03b8]x @ d [\u03b8](pron) @ d [i]p >T s ip /(s ip \\np ip ) @ d [\u03b8]a",
"num": null,
"type_str": "table",
"content": "<table><tr><td>ANNA</td><td>SAYS</td><td>he</td><td>proved</td><td>COMPLETENESS</td></tr><tr><td>L+H*</td><td>!L+H*</td><td/><td>LH%</td><td/></tr><tr><td>np ip:a</td><td/><td/><td/><td/></tr></table>",
"html": null
}
}
}
}