|
{ |
|
"paper_id": "W19-0124", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T06:26:39.179302Z" |
|
}, |
|
"title": "Abstract Meaning Representation for Human-Robot Dialogue", |
|
"authors": [ |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Army Research Laboratory", |
|
"institution": "U.S", |
|
"location": { |
|
"postCode": "20783", |
|
"settlement": "Adelphi", |
|
"region": "MD" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Donatelli", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Georgetown University", |
|
"location": { |
|
"postCode": "20057", |
|
"settlement": "Washington D.C" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jessica", |
|
"middle": [], |
|
"last": "Ervin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Rochester", |
|
"location": { |
|
"postCode": "14627", |
|
"settlement": "Rochester", |
|
"region": "NY" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Clare", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Voss", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Army Research Laboratory", |
|
"institution": "U.S", |
|
"location": { |
|
"postCode": "20783", |
|
"settlement": "Adelphi", |
|
"region": "MD" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this research, we begin to tackle the challenge of natural language understanding (NLU) in the context of the development of a robot dialogue system. We explore the adequacy of Abstract Meaning Representation (AMR) as a conduit for NLU. First, we consider the feasibility of using existing AMR parsers for automatically creating meaning representations for robot-directed transcribed speech data. We evaluate the quality of output of two parsers on this data against a manually annotated gold-standard data set. Second, we evaluate the semantic coverage and distinctions made in AMR overall: how well does it capture the meaning and distinctions needed in our collaborative human-robot dialogue domain? We find that AMR has gaps that align with linguistic information critical for effective human-robot collaboration in search and navigation tasks, and we present task-specific modifications to AMR to address the deficiencies.", |
|
"pdf_parse": { |
|
"paper_id": "W19-0124", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this research, we begin to tackle the challenge of natural language understanding (NLU) in the context of the development of a robot dialogue system. We explore the adequacy of Abstract Meaning Representation (AMR) as a conduit for NLU. First, we consider the feasibility of using existing AMR parsers for automatically creating meaning representations for robot-directed transcribed speech data. We evaluate the quality of output of two parsers on this data against a manually annotated gold-standard data set. Second, we evaluate the semantic coverage and distinctions made in AMR overall: how well does it capture the meaning and distinctions needed in our collaborative human-robot dialogue domain? We find that AMR has gaps that align with linguistic information critical for effective human-robot collaboration in search and navigation tasks, and we present task-specific modifications to AMR to address the deficiencies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "A central challenge in human-agent collaboration is that robots (or their virtual counterparts) do not have sufficient linguistic or world knowledge to communicate in a timely and effective manner with their human collaborators . We address this challenge in ongoing research directed at analyzing robotdirected communication in collaborative humanagent exploration tasks, with the ultimate goal of enabling robots to adapt to domain-specific language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we choose to adopt an intermediate semantic representation and select Abstract Meaning Representation (AMR) (Banarescu et al., 2013) in particular for three reasons: (i) the semantic representation framework abstracts away from surface variation, therefore the robot will only be trained to process and execute the actions corresponding to semantic elements of the representation (ii) there are a variety of fairly robust AMR parsers we can employ for this work, enabling us to forego manual annotation of substantial portions of our data and facilitating efficient automatic parsing in a future end-to-end system; and (iii) the structured representation facilitates the interpretation of novel instructions and grounding instructions with respect to the robot's current physical surroundings and set of executable actions. The latter motivation is especially important given that our human-robot dialogue is physically situated. This stands in contrast to many other dialogue systems, such as taskoriented chat bots, which do not require establishing and acting upon a shared understanding of the physical environment and often do not require any intermediate semantic representation (see \u00a76 for further comparison to related work).", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 147, |
|
"text": "(Banarescu et al., 2013)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our paper is structured as follows: First, we present background both on the corpus of humanrobot dialogue we are leveraging ( \u00a72), and on AMR ( \u00a73). \u00a74 discusses the implementation and results of two AMR parsers on the human-robot dialogue data. \u00a75 assesses the semantic coverage of AMR for the human-robot dialogue data in particular. We then discuss related work that informs the current research in \u00a76. Finally, \u00a77 concludes and presents ideas for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We aim to support NLU within the broader context of ongoing research to develop a human-robot dialogue system (Marge et al., 2016a) to be used onboard a remotely located agent collaborating with humans in search and navigation tasks (e.g., disaster relief). In developing this dialogue system, we are making use of portions of the corpus of humanrobot dialogue data collected under this effort . 1 This corpus was collected via a phased 'Wizard-of-Oz' (WoZ) methodology, in which human experimenters perform the dialogue and navigation capabilities of the robot during experimental trials, unbeknownst to participants interacting with the 'robot' (Marge et al., 2016b) . Specifically, a na\u00efve participant (unaware of the wizards) is tasked with instructing a robot to navigate through a remote, unfamiliar house-like environment, and asked to find and count objects such as shoes and shovels. In reality, the participant is not speaking directly to a robot, but to an unseen Dialogue Manager (DM) Wizard who listens to the participant's spoken instructions and responds with text messages in a chat window or passes a simplified text version of the instructions to a Robot Navigator (RN) Wizard, who joysticks the robot to complete the instructions. Given that the DM acts as an intermediary passing communications between the participant and the RN, the dialogue takes place across multiple conversational floors. The flow of dialogue from participant to DM, DM to RN and subsequent feedback to the participant can be seen in table 1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 131, |
|
"text": "(Marge et al., 2016a)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 397, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 647, |
|
"end": 668, |
|
"text": "(Marge et al., 2016b)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background: Human-Robot Dialogue Corpus", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The corpus comprises 20 participants and about 20 hours of audio, with 3,573 participant utterances (continuous speech) totaling 18,336 words, as well as 13,550 words from DM-Wizard text messages. The corpus includes speech transcriptions from participants as well as the speech of the RN-Wizard. These transcriptions are time-aligned with the DM-Wizard text messages passed either to the participant or to the RN-Wizard.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background: Human-Robot Dialogue Corpus", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The corpus also includes a dialogue annotation scheme specific to multi-floor dialogue that identifies initiator intent and signals relations between individual utterances pertaining to that intent .The design of the existing annotation scheme allows for the characterization of distinct information states by way of sets of participants, participant roles, turn-taking and floor-holding, and other factors (Traum and Larsson, 2003) . Transaction units (TU) identify utterances from multiple participants and floors into units according to the realization of an initiator's intent, such that all utterances involved in an ex-change surrounding the successful execution of a command are grouped and annotated for the relations they hold to one another. Relation types (Rel) signal how utterances within the same TU relate to one another in the context of the ultimate goal of the TU (e.g. \"ack-done\" in table 1, shortened from \"acknowledge-done,\" signals that an utterance acknowledges completion of a previous utterance; for full details on Rel types, see ). Antecedents (Ant) specify which utterance is related to which. An example of a TU may be seen in table 1. It is notable that the existing annotation scheme highlights dialogue structure and does not provide a markup of the semantic content of participant instructions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 407, |
|
"end": 432, |
|
"text": "(Traum and Larsson, 2003)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background: Human-Robot Dialogue Corpus", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The Abstract Meaning Representation (AMR) project (Banarescu et al., 2013) has created a manually annotated \"semantics bank\" of text drawn from a variety of genres. The AMR project annotations are completed on a sentence-by-sentence basis, where each sentence is represented by a rooted directed acyclic graph (DAG). For ease of creation and manipulation, annotators work with the PENMAN representation of the same information (Penman Natural Language Group, 1989 ). For example: In neo-Davidsonian fashion (Davidson, 1969; Parsons, 1990) , AMR introduces variables (or graph nodes) for entities, events, properties, and states.", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 74, |
|
"text": "(Banarescu et al., 2013)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 463, |
|
"text": "(Penman Natural Language Group, 1989", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 523, |
|
"text": "(Davidson, 1969;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 524, |
|
"end": 538, |
|
"text": "Parsons, 1990)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background: Abstract Meaning Representation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(w / want-01 :ARG0 (d / dog) :ARG1 (p / pet-01 :ARG0 (g / girl) :ARG1 d))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background: Abstract Meaning Representation", |
|
"sec_num": "3" |
|
}, |
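{

"text": "As an illustrative aside not in the original paper: the PENMAN string above can be inspected programmatically with the open-source penman Python library (pip install penman), which is an assumption of this sketch rather than the project's tooling. The triple view makes explicit that the variable d (the dog) participates in both the wanting and the petting events.\n\nimport penman  # open-source PENMAN codec (assumed installed)\n\n# The example AMR from the text: roughly, \"the dog wants the girl to pet it\"\namr = \"(w / want-01 :ARG0 (d / dog) :ARG1 (p / pet-01 :ARG0 (g / girl) :ARG1 d))\"\ng = penman.decode(amr)\n\n# Each triple is (source, role, target); note that d appears as a target twice.\nfor source, role, target in g.triples:\n    print(source, role, target)\n\nprint(penman.encode(g))  # re-serialize as indented PENMAN",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Background: Abstract Meaning Representation",

"sec_num": "3"

},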
|
{ |
|
"text": "Leaves are labeled with concepts, so that (d / dog) refers to an instance (d) of the concept dog. Relations link entities, so that (w / walk-01 :location (p/ park)) means the walking event (w) has the relation location to the entity, park (p). When an entity plays multiple roles in a sentence (e.g., (d / dog) above), AMR employs re-entrancy in graph notation (nodes with multiple parents) or variable re-use in PENMAN notation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background: Abstract Meaning Representation", |
|
"sec_num": "3" |
|
}, |
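{

"text": "A further sketch of ours (same penman assumption as above): re-entrancy can be detected mechanically by counting how often a variable appears as the target of a non-instance edge, since a variable with more than one parent plays multiple roles. The AMR below (\"the dog wants to walk\") is our own illustrative example.\n\nimport penman\nfrom collections import Counter\n\n# d is re-used: the dog is both the wanter and the walker.\namr = \"(w / want-01 :ARG0 (d / dog) :ARG1 (w2 / walk-01 :ARG0 d))\"\ng = penman.decode(amr)\n\n# Count parents: targets of non-instance edges that are themselves variables.\nparents = Counter(t for _, role, t in g.triples\n                  if role != \":instance\" and t in g.variables())\nprint([v for v, n in parents.items() if n > 1])  # ['d']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Background: Abstract Meaning Representation",

"sec_num": "3"

},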
|
{ |
|
"text": "AMR concepts are either English words (boy), PropBank (Palmer et al., 2005) Table 1 : Example of a Transaction Unit (TU) in the existing corpus dialogue annotation, which contains an instruction initiated by the participant, its translation to a simplified form (DM to RN), and the execution of the instruction and acknowledgement of such by the RN. TU, Ant(ecedent), and Rel(ation type) are indicated in the right columns. sets (want-01), or special keywords indicating generic entity types: date-entity, world-region, distance-quantity, etc. In addition to the PropBank lexicon of rolesets, which associate argument numbers (ARG 0-6) with predicate-specific semantic roles (e.g., ARG0=wanter in ex. 1), AMR uses approximately 100 relations of its own (e.g., :time, :age, :quantity, :destination, etc.). The representation captures who is doing what to whom like other semantic role labeling (SRL) schemes (e.g., PropBank (Palmer et al., 2005 ), FrameNet (Baker et al., 1998 Fillmore et al., 2003) , VerbNet (Kipper et al., 2008) ), but also represents other aspects of meaning outside of semantic role information, such as fine-grained quantity and unit information and parthood relations. Also distinguishing it from other SRL schemes, a goal of AMR is to capture core facets of meaning while abstracting away from idiosyncratic syntactic structures; thus, for example, She adjusted the machine and She made an adjustment to the machine share the same AMR. AMR has been widely used to support NLU, generation, and summarization (Liu et al., 2015; Pourdamghani et al., 2016) , machine translation, question answering (Mitra and Baral, 2016) , information extraction (Pan et al., 2015) , and biomedical text mining (Garg et al., 2016; Rao et al., 2017; Wang et al., 2017) 4 Evaluating AMR parsers on Human-Robot Dialogue Data", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 75, |
|
"text": "(Palmer et al., 2005)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 923, |
|
"end": 943, |
|
"text": "(Palmer et al., 2005", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 944, |
|
"end": 975, |
|
"text": "), FrameNet (Baker et al., 1998", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 976, |
|
"end": 998, |
|
"text": "Fillmore et al., 2003)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1009, |
|
"end": 1030, |
|
"text": "(Kipper et al., 2008)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1531, |
|
"end": 1549, |
|
"text": "(Liu et al., 2015;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1550, |
|
"end": 1576, |
|
"text": "Pourdamghani et al., 2016)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1619, |
|
"end": 1642, |
|
"text": "(Mitra and Baral, 2016)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1668, |
|
"end": 1686, |
|
"text": "(Pan et al., 2015)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1716, |
|
"end": 1735, |
|
"text": "(Garg et al., 2016;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1736, |
|
"end": 1753, |
|
"text": "Rao et al., 2017;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 1754, |
|
"end": 1772, |
|
"text": "Wang et al., 2017)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 83, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Background: Abstract Meaning Representation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To serve as a conduit for NLU in a dialogue system, the ideal semantic representation would have robust parsers, allowing the representation to be implemented efficiently on a large scale. There have been a variety of parsers developed for AMR; two parsers using very different approaches are explored in the sections to follow.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background: Abstract Meaning Representation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To automatically create AMRs for the humanrobot diaogue data, we used two off-the-shelf AMR parsers, JAMR 2 (Flanigan et al., 2014) and CAMR 3 (Wang et al., 2015) . JAMR was one of the first AMR parsers and uses a two-part algorithm to first identify concepts and then to build the maximum spanning connected subgraph of those concepts, adding in the relations. CAMR, in contrast, starts by obtaining the dependency tree (in this case, using the Charniak parser 4 and Stanford CoreNLP toolkit (Manning et al., 2014) ) and then uses their algorithm to apply a series of transformations to the dependency tree, ultimately transforming it into an AMR graph. One strength of CAMR is that the dependency parser is independent of the AMR creation, so a dependency parser that is trained on a larger data set, and therefore more accurate, can be used. Both JAMR and CAMR have algorithms that have learned probabilities from training data in order to execute their algorithms on novel sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 131, |
|
"text": "(Flanigan et al., 2014)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 162, |
|
"text": "(Wang et al., 2015)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 493, |
|
"end": 515, |
|
"text": "(Manning et al., 2014)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parsers", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In order to evaluate parser performance on our data set, we hand-annotated a subset of the participant's speech in the human-robot dialogue corpus to create a gold standard data set. We focus on only participant language because it is the natural language that the robot will ultimately need to process and act on. This selected subset comprises 10% of participant utterances from one phase of the corpus including 10 subjects. The resulting sample is 137 sentences, equally distributed across the 10 participants, who tend to have unique speech patterns. Three expert annotators familiar with both the human-robot dialogue data and AMR independently annotated this sample, ob-taining inter-annotator agreement (IAA) scores of .82, .82, and .91 using the Smatch metric. 5 After independent annotations, we collaboratively created the gold standard set. Notable choices made during this process include the treatment of \"can you\" utterances, re-entry of the subject in commands using motion verbs with independent arguments for the mover and thingmoved, and handling of disfluencies; each is described here.", |
|
"cite_spans": [ |
|
{ |
|
"start": 770, |
|
"end": 771, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gold Standard Data Set", |
|
"sec_num": "4.2" |
|
}, |
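{

"text": "As an aside not in the original paper, Smatch comparisons like the IAA figures above can be reproduced programmatically. A rough sketch using the smatch package's Python module (pip install smatch); the names get_amr_match and compute_f are taken from the smatch.py source and should be treated as assumptions about that library's internal API rather than a stable interface.\n\nimport smatch  # assumption: exposes get_amr_match and compute_f as in smatch.py\n\ngold = \"(p / picture-01 :mode imperative :polite + :ARG0 (y / you))\"\ntest = \"(p / possible-01 :ARG1 (p2 / picture-01 :ARG0 (y / you)))\"\n\n# Matched, test, and gold triple counts for one single-line AMR pair.\nbest, test_total, gold_total = smatch.get_amr_match(test, gold)\nprecision, recall, f_score = smatch.compute_f(best, test_total, gold_total)\nprint(round(precision, 2), round(recall, 2), round(f_score, 2))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Gold Standard Data Set",

"sec_num": "4.2"

},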
|
{ |
|
"text": "In \"can you\" utterances, there is an ambiguity as to whether it is a genuine question of ability, or a polite request. This difference determines whether the sentence gets annotated with possible-01 (used in AMR to convey both possibility and ability), or just as a command (figure 2). It also determines whether the robot should respond with a statement of ability, or perform the action requested. To resolve this ambiguity, we referred back to the full transcripts of the data, and inferred based on context. In our sample, only one of these utterances (\"can you go that way\") was deemed to be a genuine questions of ability, while the remaining 13 (e.g. \"can you take a picture,\" \"can you turn to your right\") were treated as commands. Those that were commands were annotated with :polite +, in order to preserve what we believe to be the speaker's intention in using the modal \"can.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gold Standard Data Set", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "(p / possible-01 :ARG1 (p2 / picture-01 :ARG0 (y / you)) :polarity (a / amr-unknown)) (p / picture-01 :mode imperative :polite + :ARG0 (y / you))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gold Standard Data Set", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Figure 2: Two different AMR parses for the utterance \"can you take a picture,\" convey two distinct interpretations of the utterance: the top can be read as \"Is it possible for you to take a picture?,\" the bottom as a command for a picturing event.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gold Standard Data Set", |
|
"sec_num": "4.2" |
|
}, |
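{

"text": "To make the convention concrete, here is a toy heuristic of our own (not the paper's annotation procedure) that maps a \"can you VERB\" utterance onto either the ability reading or the polite-command reading of figure 2. The ability-question set is resolved from dialogue context as described above, and the -01 sense suffix is a naive placeholder.\n\nimport re\n\nABILITY_QUESTIONS = {\"can you go that way\"}  # the one context-resolved case in our sample\n\ndef can_you_to_amr(utterance: str) -> str:\n    \"\"\"Toy mapping for 'can you VERB ...' utterances (sketch only).\"\"\"\n    m = re.match(r\"can you (\\\\w+)\", utterance.lower())\n    if not m:\n        raise ValueError(\"not a 'can you' utterance\")\n    verb = m.group(1)\n    if utterance.lower().strip() in ABILITY_QUESTIONS:\n        return (f\"(p / possible-01 :ARG1 (v / {verb}-01 :ARG0 (y / you)) \"\n                \":polarity (a / amr-unknown))\")\n    return f\"(v / {verb}-01 :mode imperative :polite + :ARG0 (y / you))\"\n\nprint(can_you_to_amr(\"can you turn to your right\"))\n# (v / turn-01 :mode imperative :polite + :ARG0 (y / you))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Gold Standard Data Set",

"sec_num": "4.2"

},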
|
{ |
|
"text": "With commands like \"move\" or \"turn,\" it is implied that the robot is the agent impelling motion and the thing being moved. Therefore, we used re-entry in those AMRs to infer the implied \"you\" as both the :ARG0, mover, and the :ARG1, thing-moved (figure 3). This is consistent with AMR's goal of capturing the meaning of an utterance, independent of syntax-all arguments that 5 IAA Smatch scores on AMRs are generally between .7 and .8, depending on the complexity of the data (AMR development group communication, 2014). can be confidently inferred should be included in the AMR, even if implicit. 6 (m / move-01 :mode imperative :ARG0 (y / you) :ARG1 y :direction (f / forward) :extent (d / distance-quantity :quant 3 :unit (f2 / foot)))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gold Standard Data Set", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Figure 3: Gold standard AMR for \"move forward 3 feet,\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gold Standard Data Set", |
|
"sec_num": "4.2" |
|
}, |
|
|
{ |
|
"text": "Although the LDC AMR corpus 7 does not include speech data, AMR does offer guidance for disfluencies-dropping the disfluent portion of an utterance in favor of representing only the speaker's repair utterance. 8 We followed this general AMR practice and dropped disfluent speech for the surprisingly infrequent cases of disfluency in our gold standard sample.", |
|
"cite_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 211, |
|
"text": "8", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gold Standard Data Set", |
|
"sec_num": "4.2" |
|
}, |
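{

"text": "As an informal illustration of ours (not the annotation tooling), the \"keep the repair\" convention can be approximated with a single rule that discards everything before the last repair cue; real disfluency handling is more subtle, and the cues here are assumptions.\n\nimport re\n\ndef keep_repair(utterance: str) -> str:\n    \"\"\"Drop the disfluent portion, keeping the speaker's final self-repair\n    (toy rule: repairs are cued by a trailing dash or an explicit 'no,').\"\"\"\n    parts = re.split(r\"-\\\\s+|\\\\bno,\\\\s*\", utterance)\n    return parts[-1].strip()\n\nprint(keep_repair(\"go to the- no, go to the door\"))  # 'go to the door'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Gold Standard Data Set",

"sec_num": "4.2"

},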
|
{ |
|
"text": "Having created a gold standard sample of our data, we ran both JAMR and CAMR on the same sample and obtained the Smatch scores when compared to the gold standard. As seen in Table 2 , CAMR performs better on both precision and recall, thus obtaining the higher F-score. However, compared to their self-reported F-scores (0.58 for JAMR and 0.63 for CAMR) on other corpora, both under-perform on the human-robot dialogue data. Of the errors present in the parser output, many come from improper handling of light verb construction, imperatives, inferred arguments, and requests phrased as \"can you\" questions. \"Take a picture\" is an example of a frequent light verb construction, in which the verb (\"take\") is not semantically the main predicating element of the sentence. The correct parse is shown in Figure 4 , 6 See \"implicit roles\" in AMR guidelines:", |
|
"cite_spans": [ |
|
{ |
|
"start": 812, |
|
"end": 813, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 181, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 801, |
|
"end": 809, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results & Error Analysis", |
|
"sec_num": "4.3" |
|
|
}, |
|
{ |
|
"text": "Another problematic omission on the parsers' part is a lack of inferred arguments. As discussed earlier, commands like \"move forward 3 feet\" have an implied \"you\" in both the :ARG0 and :ARG1 positions. However, the parsers don't include this variable, instead only including concepts that are explicitly mentioned in the sentence (figure 5). A fourth error made by the parsers was on \"can you\" requests. These were consistently handled by both parsers as questions of ability, annotated using possible-01, even when (as was usually the case) the utterances were intended to be polite commands (parser output is shown in the top parse of figure 2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results & Error Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The poor performance of both parsers on the human-robot dialogue data is unsurprising given the significant differences between it and the data the parsers were trained on. For both parsers, training data came from the LDC AMR corpus, made up entirely of written text, mostly newswire. 9 In contrast, the human-robot dialogue data is transcribed from spoken utterances, taken from dialogue that is instructional and goaloriented. Thus, running the parsers on this data set shows that the differences in domain have significant effects on the parsers, and give rise to the systematic errors described above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Improvements to the parser output could be obtained even by just adding a few simple heuristics, due to the formulaic nature of our data. Of 137 sample sentences, 25 were \"take a picture,\" so introducing a heuristic specific to that sentence would be a simple way to make several corrections. To obtain broader improvements, however, it's clear that it will be necessary to retrain the parsers on in-domain data. Given that retraining requires a corpus of hand-annotated data, this gives us an opportunity to examine the current features of AMR in relation to our collaborative human-robot dialogue domain, and to explore possible additions to the annotation scheme to ensure that all elements of meaning essential to our domain have coverage. The findings of this analysis are described in the sections to follow.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4.4" |
|
}, |
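{

"text": "To illustrate what such heuristics might look like (a sketch of ours, not an implemented component), the two most frequent fixes from the error analysis can be written as string-level corrections: rewriting the light-verb parse of \"take a picture\" wholesale and splicing :mode imperative into the top node of bare commands.\n\nimport re\n\ndef postprocess(utterance: str, parser_amr: str) -> str:\n    \"\"\"Hypothetical post-hoc corrections to parser output (sketch only).\"\"\"\n    text = utterance.lower().strip()\n    # 25 of the 137 sample sentences were \"take a picture\": replace the\n    # light-verb parse with the gold picture-01 analysis.\n    if text in {\"take a picture\", \"take a photo\"}:\n        return \"(p / picture-01 :mode imperative :ARG0 (y / you))\"\n    # Neither parser ever emitted :mode imperative; splice it in after the\n    # top concept when the utterance starts with a bare motion verb.\n    if text.split()[0] in {\"move\", \"turn\", \"go\", \"stop\"}:\n        return re.sub(r\"^\\\\((\\\\w+ / [\\\\w-]+)\", r\"(\\\\1 :mode imperative\",\n                      parser_amr, count=1)\n    return parser_amr\n\nprint(postprocess(\"move forward 3 feet\", \"(m / move-01 :direction (f / forward))\"))\n# (m / move-01 :mode imperative :direction (f / forward))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Discussion",

"sec_num": "4.4"

},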
|
{ |
|
"text": "We assess the adequacy of AMR for its use in an NLU component of a human-robot dialogue system both on theoretical grounds and in light of the results and error analysis presented in \u00a74.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Semantic Coverage & Distinctions of AMR", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To our knowledge, our research is the first to employ AMR to capture the semantics of spoken language; existing corpora are otherwise textbased. 10 Here, we discuss the characteristics of our data relevant to semantic representation and highlight specific challenges and areas of interest that we hope to address with AMR ( \u00a75.1); explore how to leverage AMR for these purposes by identifying ( \u00a75.2) and remedying ( \u00a75.3) gaps in existing AMR; and conclude with discussion ( \u00a75.4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Semantic Coverage & Distinctions of AMR", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our goal in introducing AMR is to bridge the gap between what is annotated currently as dialogue 9 https://catalog.ldc.upenn.edu/ LDC2017T10", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Challenges of the Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "10 Data released at https://amr.isi.edu/ download/amr-bank-struct-v1.6.txt (Little Prince) and LDC corpus (footnote 5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Challenges of the Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "structure , and the semantic content of utterances that comprise such dialogue (not included in the current scheme). This goal follows from the understanding that dialogue acts are composed of two primary components: (i) semantic content, identifying the entities, events, and propositions relevant to the dialogue; and (ii) communicative function, identifying the ways an addressee may use semantic content to update the information state (Bunt et al., 2012) . The existing dialogue structure annotation scheme of distinguishes two primary levels of pragmatic meaning important to dialogue that our research aims to maintain. The first, intentional structure (Grosz and Sidner, 1986) , is equivalent to a TU 11 : all utterances that explicate and address an initiator's intent. The second, interactional structure, captures how the information state of participants in the dialogue is updated as the TU is constructed . These two levels of meaning stand apart from the basic compositional meaning of their associated utterances.We seek to represent these pragmatic levels of meaning and link them to their respective semantic forms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 440, |
|
"end": 459, |
|
"text": "(Bunt et al., 2012)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 660, |
|
"end": 684, |
|
"text": "(Grosz and Sidner, 1986)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Challenges of the Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We also seek to represent the temporal, aspectual, and veridical/modal nature of the robot's actions. Human instructions must be interpreted for actions: the robot may respond to such instruction by asserting whether or not such an instruction is possible given the robot's capabilities and the surrounding physical environment, and the robot may also communicate whether it is in the process of completing or has completed the desired instruction. Such information is implicitly linked to the intentional and interactional structure of the dialogue. For example, the act of giving a command implies that an event (if it occurs) will happen in the future; the act of asserting that an event has occurred signals that the event is past.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Challenges of the Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The representation of space and specific parameters that contribute to the robot's understanding of how human language maps on to its physical environment is also of interest to our work here. 12 As its capabilities are presented to the participant, the 'robot' in this research is capa-ble of performing low-level actions with specific endpoints or goals: for example, \"move five feet forward,\" \"face the doorway on your left,\" and \"get closer to that orange object.\" This robot cannot successfully perform instructions that have no clear endpoint:\"move forward\" and \"turn\" will trigger clarification requests for further specification. In our planned system, however, we would ultimately like to give the robot an instruction such as \"Robot, explore this space and tell me if anyone has been here recently.\" If the robot can learn to decompose an instruction such as explore into smaller actions, as well as how to identify signs of previous inhabitants, such instructions may become feasible.", |
|
"cite_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 195, |
|
"text": "12", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Challenges of the Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Finally, much of the semantic content of the data in our experiment must be situated within the dialogue context to be properly interpreted. A command of \"Do that again\" is ambiguous in terms of which action it refers back to. Similarly, a negative command such as \"No, go to the doorway on the left\" negates specific information contained in a previous command. Implicit in natural language input, as well, are subtle event-event temporal relations, such as \"Move forward and take a picture\" (sequential actions) and \"Turn around and take a picture every 45 degrees\" (simultaneous actions).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Challenges of the Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We focus on the three elements of meaning crucial to human-robot dialogue and currently lacking in AMR: (i) speaker intended meaning (as differing from the compositional meaning of the speaker's utterances); 13 (ii) tense and aspect (and vericity, by association); and (iii) spatial parameters necessary for the robot to successfully execute instructions within its physical environment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gaps in AMR", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The goal of the AMR project is to represent meaning, but whether such meaning is purely semantic or also captures a speaker's intended meaning is not specified. Here, we attempt to strike a balance between capturing speaker intended meaning and overspecifying utterances. We do this to stay faithful to existing experimental dialogue annotation practices, and to enable the robot to generalize the connections between such intention and underlying semantic content.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gaps in AMR", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "To illustrate the need for adding tense and aspect information to existing AMRs, compare the following utterances: (i) \"move forward five feet\" (uttered by the human participant); (ii) \"moving forward five feet...\" (sent via text by the robot to signal initiation of instruction); and (iii) \"I moved forward five feet\" (sent by the robot upon completion of the action). Although distinctions between these three utterances are critical for our domain, AMR represents these three utterances the same way: 14 Spatial parameters of actions are represented in the current AMR through the use of predicatespecific ARG numbers, as outlined in the Prop-Bank rolesets, or with the use of AMR relations, such as :path and :destination. Whether or not a relation or argument number is used, and which argument number, is specific to a predicate and therefore inconsistent across motion relations. We aim to make the representation of these parameters more consistent and enrich them with information about which are required, which are optional, and which might have assumed, default interpretations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gaps in AMR", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We leverage the existing dialogue annotations to extract intentional and interactional meaning and, where relevant, map these annotations to proposed refinements described below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Refinements", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Speech Acts. We add a higher level of pragmatic meaning to the propositional content represented in the AMR through frames that correspond to speech acts (Austin, 1975; Searle, 1969) . We use the model of vocatives in existing AMR as our guide. 15 We also make use of the existing dialogue structure annotation scheme for our corpus that identifies dialogue types similar to speech acts. Using this scheme as a starting point, we create 36 unique speech acts and corresponding AMR frames: this consists of six general dialogue types (command, assert, request, question, evaluate, express) with 5 to 12 subtypes each (e.g., move, send-image). An example of a speech act template for command:move may be seen in figure 7. 16 Notably, by adding a layer of speaker-intended meaning to the content of the proposition itself, we are able to capture participant roles within the speech act (:ARG0 and :ARG1 of command-02). Future work will be able to reference these roles and model how participant relations evolve over the course of the discourse (Allwood et al., 2000) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 168, |
|
"text": "(Austin, 1975;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 182, |
|
"text": "Searle, 1969)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 245, |
|
"end": 247, |
|
"text": "15", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 720, |
|
"end": 722, |
|
"text": "16", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1042, |
|
"end": 1064, |
|
"text": "(Allwood et al., 2000)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Refinements", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "(c / command-02 :ARG0-commander :ARG1-impelled agent :ARG2 (g / go-02 :completable + :ARG0-goer :ARG1-extent :ARG3-start point :ARG4-end point :path :direction :time (a / after :op1 (n / now)))) Tense and Aspect. We also adopt the annotation scheme proposed by Donatelli et al. (2018) for augmenting AMR with tense and aspect. This scheme identifies the temporal nature of an event relative to the dialogue act in which it is expressed as past, present, or future. The aspectual nature of the event can be specified as atelic (:ongoing -/+), telic and hypothetical (:ongoing -, :completable +), telic and in progress (:ongoing +, :complete -), or telic and complete (:ongoing -, :complete +). A telic and hypothetical event representation can be seen in figure 7. This tense/aspect annotation scheme is specific to AMR and coarse-grained in nature, but through its use of existing AMR relations (:time, before, after, now, :op), it can be adapted to finergrained temporal relations in future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 284, |
|
"text": "Donatelli et al. (2018)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Refinements", |
|
"sec_num": "5.3" |
|
}, |
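{

"text": "To show how the template above might be instantiated, here is a toy filler of ours (the command-02/go-02 frame follows figure 7, but the function and its slot names are illustrative, not the project's implementation). It produces a filled command:move AMR for \"move forward 3 feet,\" re-using the robot variable r for the goer.\n\ndef fill_command_move(quant=None, unit=None, direction=None, end_point=None):\n    \"\"\"Toy filler for the command:move template of figure 7 (sketch only).\"\"\"\n    go = [\":completable +\", \":ARG0 r\"]  # re-entrancy: the robot is the goer\n    if quant is not None:\n        go.append(f\":ARG1 (d / distance-quantity :quant {quant} :unit (u / {unit}))\")\n    if direction:\n        go.append(f\":direction (f / {direction})\")\n    if end_point:\n        go.append(f\":ARG4 (e / {end_point})\")\n    go.append(\":time (a / after :op1 (n / now))\")\n    return (\"(c / command-02 :ARG0 (p / participant) :ARG1 (r / robot) \"\n            f\":ARG2 (g / go-02 {' '.join(go)}))\")\n\nprint(fill_command_move(quant=3, unit=\"foot\", direction=\"forward\"))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Proposed Refinements",

"sec_num": "5.3"

},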
|
{ |
|
"text": "Spatial Parameters. As seen in figure 7, all arguments of go-02 as well as additional relations are made use of in the template. These positions correspond to parameters of the action, needed so the robot may carry out the action successfully. Having a template for each domain-relevant speech act will allow us to specify required and optional parameters for robot operationalization. In figure 7, this is either :ARG1 or :ARG4, given that the robot currently needs an endpoint for each action; if both arguments are empty, this ought to trigger a request for further information by the robot. Note also that the same command:move AMR (including go-02) would be implemented for all realizations of commands for movement (e.g., move, go, drive), whereas these would receive distinct AMRs, headed by each individual predicate, under current practices.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Refinements", |
|
"sec_num": "5.3" |
|
}, |
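{

"text": "A small sketch of ours for the endpoint check just described, again assuming the penman library: a filled command:move is executable only if the embedded go-02 carries an extent (:ARG1) or an end point (:ARG4); otherwise the robot should ask for clarification.\n\nimport penman\n\ndef needs_clarification(amr_str: str) -> bool:\n    \"\"\"True if no executable endpoint is present on the go-02 node (sketch).\"\"\"\n    g = penman.decode(amr_str)\n    go_nodes = {v for v, role, c in g.triples if role == \":instance\" and c == \"go-02\"}\n    filled = {role for v, role, _ in g.triples if v in go_nodes}\n    return not ({\":ARG1\", \":ARG4\"} & filled)\n\n# \"move forward\" with no distance or destination -> request more information\nprint(needs_clarification(\n    \"(c / command-02 :ARG2 (g / go-02 :ARG0 (r / robot) :direction (f / forward)))\"))  # True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Proposed Refinements",

"sec_num": "5.3"

},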
|
{ |
|
"text": "The refinements we present to the existing AMR aim to mimic a conservative learning process: we seek to provide just enough pragmatic meaning to assist robot understanding of natural language, but we do not provide specification to the point that there will be over-fitting in the form of one-to-one mappings of semantic content and pragmatic effect. As research continues and robot capabilities expand, we expect to augment the robot's linguistic knowledge based on general patterns in the current annotation scheme.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "6 Related work", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "There is a long-standing tradition of research in semantic representation within NLP, AI, as well as theoretical linguistics and philosophy (see Schubert (2015) for an overview). In this body of research, there are a variety of options that could be used within dialogue systems for NLU. However, for many of these representations, there are no existing automatic parsers, limiting their feasibility for larger-scale implementation. A notable exception is combinatory categorical grammar (CCG) (Steedman and Baldridge, 2009) ; CCG parsers have already been incorporated in some current dialogue systems (Chai et al., 2014) . Although promising, CCG parses closely mirror the input language, so systems making use of CCG parses still face the challenge of a great deal of linguistic variability that can be associated with a single intent. Again, in abstracting away from surface variation, AMR may offer more regular, consis-tent parses in comparison to CCG. Universal Conceptual Cognitive Annotation (UCCA) (Abend and Rappoport, 2013) , which also abstracts away from syntactic idiosyncrasies, and its corresponding parser (Hershcovich et al., 2017 ) merits future investigation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 494, |
|
"end": 524, |
|
"text": "(Steedman and Baldridge, 2009)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 603, |
|
"end": 622, |
|
"text": "(Chai et al., 2014)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1008, |
|
"end": 1035, |
|
"text": "(Abend and Rappoport, 2013)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1124, |
|
"end": 1149, |
|
"text": "(Hershcovich et al., 2017", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Representation", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Task-oriented spoken dialogue systems have been an active area of research since the early 1990s. Broadly, the architecture of such systems includes (i) automatic speech recognition (ASR) to recognize an utterance, (ii) an NLU component to identify the user's intent, and (iii) a dialogue manager to interact with the user and achieve the intended task (Bangalore et al., 2006) . The meaning representation within such systems has, in the past, been predefined frames for particular subtasks (e.g., flight inquiry), with slots to be filled (e.g., destination city) (Issar and Ward, 1993) . In such approaches, the meaning representation was crafted for a specific application, making generalizability to new domains difficult if not impossible. Current approaches still model NLU as a combination of intent and dialogue act classification and slot tagging, but many have begun to incorporate recurrent neural networks (RNNs) and some multi-task learning for both NLU and dialogue state tracking , the latter of which allows the system to take advantage of information from the discourse context to achieve improved NLU. Substantial challenges to these systems include working in domains with intents that have a large number of possible values for each slot and accommodation of out-of-vocabulary slot values (i.e. operating in a domain with a great deal of linguistic variability).", |
|
"cite_spans": [ |
|
{ |
|
"start": 353, |
|
"end": 377, |
|
"text": "(Bangalore et al., 2006)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 565, |
|
"end": 587, |
|
"text": "(Issar and Ward, 1993)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NLU in Dialogue Systems", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Thus, a primary challenge today and in the past is representing the meaning of an utterance in a form that can exploit the constraints of a particular domain but also remain portable across domains and robust despite linguistic variability. We see AMR as promising because the parsers are domain-independent (and can be retrained), and the representation itself is flexible enough for the addition of some domain-specific constraints. Furthermore, since AMR abstracts away from syntactic variability to represent only core elements of meaning, some of the variability in the input language can be \"tamed,\" to give systems more systematic input. With the proposed addition of speech acts to AMR described in \u00a75.3, the augmented AMRs also facilitate dialogue state tracking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NLU in Dialogue Systems", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Although human-robot dialogue systems often leverage a similar architecture to that of the spoken dialogue systems described above, humanrobot dialogue introduces the challenge of physically situated dialogue and the necessity for symbol and action grounding, which generally incorporate computer vision. Few systems are tackling all of these challenges at this point (but see . A description of the preliminary human-robot dialogue system developed under the umbrella of this project, and where this research might fit into that system, is described in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NLU in Dialogue Systems", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Overall, we find results to be mixed on the feasibility of AMR for NLU within a human-robot dialogue system. On one hand, AMR is attractive given that there are a variety of relatively robust parsers available for AMR, making implementation on a larger scale feasible. However, our evaluation of two parsers on the humanrobot dialogue data demonstrates that retraining on domain-relevant data is necessary, and this will require a certain amount of manual annotation. Furthermore, our assessment of the distinctions made in AMR reveal gaps that must be addressed for effective use in collaborative humanrobot search and navigation. Nonetheless, these AMR refinements are tractable and may also be valuable to the broader community.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions & Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Thus, we have several paths forward in ongoing and future work. First, we plan to use heuristics and manual corrections to CAMR parser output to create a larger in-domain training set following existing AMR guidelines. We plan to combine this training set with other AMRs from various humanagent dialogue data sets being annotated in parallel with this work. In addition to expanding the training set for dialogue, this will allow us to explore the extent to which our findings, with respect to AMR gaps, may also apply to other human-agent dialogue domains.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions & Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Second, we will consider how to implement AMR into the existing, preliminary dialogue system called 'Scout Bot,' which has been developed as part of our larger research project . For NLU, Scout Bot makes use of the NPCEditor (Leuski and Traum, 2011) , a statistical classifier that learns a mapping from inputs to outputs. Currently, the NPCEditor currently relies on string divergence measures to associate an instruction with either a text version to be sent forward to the RN-Wizard or a clarification question to be returned to the participant. However, some of the challenging cases we analyzed in \u00a75.1 suggest that an intermediate semantic representation will be needed within the NLU phase. Specifically, because the instructions must be grounded within physical surroundings and with respect to an executable set of robot actions, a semantic representation provides the structure needed to interpret novel instructions as well as ground instructions in novel physical contexts. Error analysis has demonstrated that the current Scout Bot system, by simply learning an association between an input string and a particular set of executed actions, cannot generalize to unseen, novel input instructions (e.g, \"Turn left 100 degrees,\" as opposed to a more typical number of degrees like 90) and fails to interpret instructions with respect to the current physical surroundings (e.g., the destination of \"Move to the door on the left\" will be interpreted differently depending where the robot is facing). The structure of the semantic representation provided by AMR will allow the system to interpret 100 degrees as a novel extent of turning, and allow destination slots like \"door on the left\" to be grounded to a location in the current physical context with the help of the robot's sensors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 249, |
|
"text": "(Leuski and Traum, 2011)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions & Future Work", |
|
"sec_num": "7" |
|
}, |
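{

"text": "As a sketch of ours (assuming the penman library; the AMR below is illustrative) of why the structured representation generalizes where string matching does not: the turn direction and a novel extent like 100 degrees fall directly out of the graph's triples, with no dedicated training example.\n\nimport penman\n\ndef to_robot_command(amr_str: str) -> dict:\n    \"\"\"Pull the predicate, direction, and numeric extent out of an AMR (sketch).\"\"\"\n    g = penman.decode(amr_str)\n    concepts = {v: c for v, r, c in g.triples if r == \":instance\"}\n    cmd = {\"action\": concepts[g.top], \"direction\": None, \"quant\": None, \"unit\": None}\n    for _, role, tgt in g.triples:\n        if role == \":direction\":\n            cmd[\"direction\"] = concepts.get(tgt, tgt)\n        elif role == \":quant\":\n            cmd[\"quant\"] = float(tgt)\n        elif role == \":unit\":\n            cmd[\"unit\"] = concepts.get(tgt, tgt)\n    return cmd\n\nprint(to_robot_command(\n    \"(t / turn-01 :mode imperative :ARG1 (y / you) :direction (l / left) \"\n    \":extent (a / angle-quantity :quant 100 :unit (d / degree)))\"))\n# {'action': 'turn-01', 'direction': 'left', 'quant': 100.0, 'unit': 'degree'}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Conclusions & Future Work",

"sec_num": "7"

},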
|
{ |
|
"text": "Thus, in future iterations of the dialogue system incorporating AMR, we will retrain or reformulate the NPCEditor to take the automatic AMR parses as input and output the in-domain AMR templates described in \u00a75.3. The Dialogue Manager will act upon these templates with either a response/question to the participant or pass the domain-specific AMR along to be mapped to the behavior specification of the robot for execution. Specific steps on this research trajectory will include (i) development of graph to graph transformations to map parser output to the domainrefined AMRs and (ii) an assessment of how well the domain-refined AMRs map to a specific robot planning and behavior specification, which will facilitate determining what other refinements may be necessary to effectively bridge from natural language instructions to robot execution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions & Future Work", |
|
"sec_num": "7" |
|
}, |
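{

"text": "Finally, a sketch of ours (penman assumed again) of the flavor of graph-to-graph transformation in step (i): re-head any movement predicate as go-02 and wrap the result in the command-02 speech-act frame. A real transformation would also unify the addressee variable with the robot; this toy version leaves the body untouched, and the predicate list is illustrative.\n\nimport penman\n\nMOVE_PREDICATES = {\"move-01\", \"go-01\", \"go-02\", \"drive-01\"}  # illustrative\n\ndef to_command_move(parser_amr: str) -> str:\n    \"\"\"Wrap a movement parse in the command:move speech act (sketch only).\"\"\"\n    g = penman.decode(parser_amr)\n    concepts = {v: c for v, r, c in g.triples if r == \":instance\"}\n    if concepts.get(g.top) not in MOVE_PREDICATES:\n        return parser_amr  # not a movement instruction; leave untouched\n    body = penman.encode(g, indent=None).replace(concepts[g.top], \"go-02\", 1)\n    return f\"(c / command-02 :ARG0 (p2 / participant) :ARG1 (r / robot) :ARG2 {body})\"\n\nprint(to_command_move(\"(m / move-01 :ARG0 (y / you) :direction (f / forward))\"))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Conclusions & Future Work",

"sec_num": "7"

},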
|
|
{ |
|
"text": "This corpus is still being collected and a public release is in preparation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/jflanigan/jamr 3 https://github.com/c-amr/camr 4 https://github.com/BLLIP/bllip-parser", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.isi.edu/\u02dculf/amr/lib/ amr-dict.html#disfluency", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Transaction Unit is described in \u00a72.12 Though this mapping is outside the scope of our work, we see AMR as contributing substantially richer semantic forms to the NL planning research in robotics of others, such as(Howard et al., 2014) that entails such grounding, the process of assigning physical meanings to NL expressions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Speaker vs. compositional meaning is arguably annotated inconsistently in the current AMR corpus; see https://www.isi.edu/\u02dculf/amr/lib/ amr-dict.html#pragmatics for some specific guidelines, but note that the released annotations seem to differ from this guidance in places.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Utterance (i) would additionally annotate (m / move) with :mode imperative to signal a command. As noted in \u00a74, parsers rarely capture this information.15 https://www.isi.edu/\u02dculf/amr/lib/ amr-dict.html#vocative", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Annotation of tense and aspect, and the need for extra relations will be explained in continuation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Universal Conceptual Cognitive Annotation (UCCA)", |
|
"authors": [ |
|
{ |
|
"first": "Omri", |
|
"middle": [], |
|
"last": "Abend", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Rappoport", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "228--238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Omri Abend and Ari Rappoport. 2013. Universal Con- ceptual Cognitive Annotation (UCCA). In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), volume 1, pages 228-238.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Cooperation, dialogue and ethics", |
|
"authors": [ |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Allwood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Traum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristiina", |
|
"middle": [], |
|
"last": "Jokinen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "International Journal of Human-Computer Studies", |
|
"volume": "53", |
|
"issue": "6", |
|
"pages": "871--914", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jens Allwood, David Traum, and Kristiina Jokinen. 2000. Cooperation, dialogue and ethics. In- ternational Journal of Human-Computer Studies, 53(6):871-914.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "How to do things with words", |
|
"authors": [ |
|
{ |
|
"first": "Austin", |
|
"middle": [], |
|
"last": "John Langshaw", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "", |
|
"volume": "88", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Langshaw Austin. 1975. How to do things with words, volume 88. Oxford University Press.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The Berkeley FrameNet project", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Collin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Baker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Charles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John B", |
|
"middle": [], |
|
"last": "Fillmore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lowe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proc. of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "86--90", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The Berkeley FrameNet project. In Proc. of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics-Volume 1, pages 86-90. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Abstract Meaning Representation for sembanking", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Banarescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shu", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Madalina", |
|
"middle": [], |
|
"last": "Georgescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kira", |
|
"middle": [], |
|
"last": "Griffitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "178--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proceedings of the 7th Linguis- tic Annotation Workshop and Interoperability with Discourse, pages 178-186.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Introduction to the special issue on spoken language understanding in conversational systems", |
|
"authors": [ |
|
{ |
|
"first": "Srinivas", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dilek", |
|
"middle": [], |
|
"last": "Hakkani-T\u00fcr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gokhan", |
|
"middle": [], |
|
"last": "Tur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Speech Communication", |
|
"volume": "3", |
|
"issue": "48", |
|
"pages": "233--238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Srinivas Bangalore, Dilek Hakkani-T\u00fcr, and Gokhan Tur. 2006. Introduction to the special issue on spo- ken language understanding in conversational sys- tems. Speech Communication, 3(48):233-238.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Human-robot dialogue and collaboration in search and navigation", |
|
"authors": [ |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Lukin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashley", |
|
"middle": [], |
|
"last": "Foots", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cassidy", |
|
"middle": [], |
|
"last": "Henry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Marge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kimberly", |
|
"middle": [], |
|
"last": "Pollard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Artstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Traum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clare", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Voss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Annotation, Recognition and Evaluation of Actions (AREA) Workshop at LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Claire Bonial, Stephanie Lukin, Ashley Foots, Cas- sidy Henry, Matthew Marge, Kimberly Pollard, Ron Artstein, David Traum, and Clare R. Voss. 2018. Human-robot dialogue and collaboration in search and navigation. In Proceedings of the Annota- tion, Recognition and Evaluation of Actions (AREA) Workshop at LREC, 2018.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Iso 24617-2: A semantically-based standard for dialogue annotation", |
|
"authors": [ |
|
{ |
|
"first": "Harry", |
|
"middle": [], |
|
"last": "Bunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Alexandersson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jae-Woong", |
|
"middle": [], |
|
"last": "Choe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [ |
|
"Chengyu" |
|
], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koiti", |
|
"middle": [], |
|
"last": "Hasida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Volha", |
|
"middle": [], |
|
"last": "Petukhova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Popescu-Belis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Traum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "430--437", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harry Bunt, Jan Alexandersson, Jae-Woong Choe, Alex Chengyu Fang, Koiti Hasida, Volha Petukhova, Andrei Popescu-Belis, and David R Traum. 2012. Iso 24617-2: A semantically-based standard for di- alogue annotation. In LREC, pages 430-437. Cite- seer.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Collaborative Language Grounding Toward Situated Human-Robot Dialogue", |
|
"authors": [ |
|
{ |
|
"first": "Joyce", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Chai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Changsong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lanbo", |
|
"middle": [], |
|
"last": "She", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "AI Magazine", |
|
"volume": "37", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joyce Y. Chai, Rui Fang, Changsong Liu, and Lanbo She. 2017. Collaborative Language Grounding To- ward Situated Human-Robot Dialogue. AI Maga- zine, 37(4):32.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Collaborative effort towards common ground in situated human-robot dialogue", |
|
"authors": [ |
|
{ |
|
"first": "Joyce", |
|
"middle": ["Y"], |
|
"last": "Chai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lanbo", |
|
"middle": [], |
|
"last": "She", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Spencer", |
|
"middle": [], |
|
"last": "Ottarson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cody", |
|
"middle": [], |
|
"last": "Littley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Changsong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Hanson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joyce Y Chai, Lanbo She, Rui Fang, Spencer Ottarson, Cody Littley, Changsong Liu, and Kenneth Hanson. 2014. Collaborative effort towards common ground in situated human-robot dialogue. In Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction, pages 33-40. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "End-to-end memory networks with knowledge carryover for multiturn spoken language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Yun-Nung", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dilek", |
|
"middle": [], |
|
"last": "Hakkani-T\u00fcr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00f6khan", |
|
"middle": [], |
|
"last": "T\u00fcr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "INTER-SPEECH", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3245--3249", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yun-Nung Chen, Dilek Hakkani-T\u00fcr, G\u00f6khan T\u00fcr, Jianfeng Gao, and Li Deng. 2016. End-to-end mem- ory networks with knowledge carryover for multi- turn spoken language understanding. In INTER- SPEECH, pages 3245-3249.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The individuation of events", |
|
"authors": [ |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Davidson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1969, |
|
"venue": "Essays in honor of Carl G. Hempel", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "216--234", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Donald Davidson. 1969. The individuation of events. In Essays in honor of Carl G. Hempel, pages 216- 234. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Annotation of tense and aspect semantics for sentential AMR", |
|
"authors": [ |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Donatelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Regan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucia Donatelli, Michael Regan, William Croft, and Nathan Schneider. 2018. Annotation of tense and aspect semantics for sentential AMR. In Proceed- ings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Background to", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": ["J"], |
|
"last": "Fillmore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": ["R"], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miriam", |
|
"middle": ["RL"], |
|
"last": "Petruck", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles J Fillmore, Christopher R Johnson, and Miriam RL Petruck. 2003. Background to", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A discriminative graph-based parser for the Abstract Meaning Representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Flanigan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Thomson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah A", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1426--1436", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A Smith. 2014. A discrim- inative graph-based parser for the Abstract Meaning Representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), volume 1, pages 1426-1436.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Extracting biomolecular interactions using semantic parsing of biomedical text", |
|
"authors": [ |
|
{ |
|
"first": "Sahil", |
|
"middle": [], |
|
"last": "Garg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aram", |
|
"middle": [], |
|
"last": "Galstyan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proc. of AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sahil Garg, Aram Galstyan, Ulf Hermjakob, and Daniel Marcu. 2016. Extracting biomolecular inter- actions using semantic parsing of biomedical text. In Proc. of AAAI, Phoenix, Arizona, USA.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Attention, intentions, and the structure of discourse", |
|
"authors": [ |
|
{ |
|
"first": "Barbara", |
|
"middle": ["J"], |
|
"last": "Grosz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Candace", |
|
"middle": ["L"], |
|
"last": "Sidner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Computational linguistics", |
|
"volume": "12", |
|
"issue": "3", |
|
"pages": "175--204", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbara J Grosz and Candace L Sidner. 1986. Atten- tion, intentions, and the structure of discourse. Com- putational linguistics, 12(3):175-204.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Multi-domain joint semantic frame parsing using bi-directional rnn-lstm", |
|
"authors": [ |
|
{ |
|
"first": "Dilek", |
|
"middle": [], |
|
"last": "Hakkani-T\u00fcr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00f6khan", |
|
"middle": [], |
|
"last": "T\u00fcr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Asli", |
|
"middle": [], |
|
"last": "Celikyilmaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yun-Nung", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ye-Yi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Interspeech", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "715--719", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dilek Hakkani-T\u00fcr, G\u00f6khan T\u00fcr, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and Ye- Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Inter- speech, pages 715-719.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A transition-based directed acyclic graph parser for UCCA", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Hershcovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omri", |
|
"middle": [], |
|
"last": "Abend", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Rappoport", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1704.00552" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017. A transition-based directed acyclic graph parser for UCCA. arXiv preprint arXiv:1704.00552.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A natural language planner interface for mobile manipulators", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Howard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Tellex", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Roy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the IEEE International Conference on Robotics and Automation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Howard, Stephanie Tellex, and Nicholas Roy. 2014. A natural language planner interface for mo- bile manipulators. In Proceedings of the IEEE In- ternational Conference on Robotics and Automation (ICRA 2014).", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Cmlps robust spoken language understanding system", |
|
"authors": [ |
|
{ |
|
"first": "Sunil", |
|
"middle": [], |
|
"last": "Issar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wayne", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Third European Conference on Speech Communication and Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sunil Issar and Wayne Ward. 1993. Cmlps robust spo- ken language understanding system. In Third Eu- ropean Conference on Speech Communication and Technology.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A large-scale classification of English verbs. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Karin", |
|
"middle": [], |
|
"last": "Kipper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neville", |
|
"middle": [], |
|
"last": "Ryant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "42", |
|
"issue": "", |
|
"pages": "21--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karin Kipper, Anna Korhonen, Neville Ryant, and Martha Palmer. 2008. A large-scale classification of English verbs. Language Resources and Evaluation, 42(1):21-40.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Npceditor: Creating virtual human dialogue using information retrieval techniques", |
|
"authors": [ |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Leuski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Traum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "42--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anton Leuski and David Traum. 2011. Npceditor: Cre- ating virtual human dialogue using information re- trieval techniques. Ai Magazine, 32(2):42-56.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Toward abstractive summarization using semantic representations", |
|
"authors": [ |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Flanigan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Thomson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Norman", |
|
"middle": [], |
|
"last": "Sadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah A", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proc. of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A Smith. 2015. Toward abstrac- tive summarization using semantic representations. In Proc. of NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Scoutbot: A dialogue system for collaborative navigation", |
|
"authors": [ |
|
{ |
|
"first": "Stephanie", |
|
"middle": ["M"], |
|
"last": "Lukin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Gervits", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cory", |
|
"middle": ["J"], |
|
"last": "Hayes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Leuski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pooja", |
|
"middle": [], |
|
"last": "Moolchandani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": ["G"], |
|
"last": "Rogers", |
|
"suffix": "III" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Sanchez Amaro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Marge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clare", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Voss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Traum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1807.08074" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephanie M Lukin, Felix Gervits, Cory J Hayes, An- ton Leuski, Pooja Moolchandani, John G Rogers III, Carlos Sanchez Amaro, Matthew Marge, Clare R Voss, and David Traum. 2018. Scoutbot: A dialogue system for collaborative navigation. arXiv preprint arXiv:1807.08074.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "The Stanford CoreNLP natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd annual meeting of the association for computational lin- guistics: system demonstrations, pages 55-60.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Applying the Wizard-of-Oz Technique to Multimodal Human-Robot Dialogue", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Marge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brendan", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Cassidy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"William" |
|
], |
|
"last": "Evans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clare", |
|
"middle": [], |
|
"last": "Voss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proc. of RO-MAN", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Marge, Claire Bonial, Brendan Byrne, Taylor Cassidy, A. William Evans, Susan G. Hill, and Clare Voss. 2016a. Applying the Wizard-of-Oz Technique to Multimodal Human-Robot Dialogue. In Proc. of RO-MAN.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Assessing Agreement in Human-Robot Dialogue Strategies: A Tale of Two Wizards", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Marge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kimberly", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Pollard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Artstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brendan", |
|
"middle": [], |
|
"last": "Byrne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": ["G"], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clare", |
|
"middle": [], |
|
"last": "Voss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Traum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "International Conference on Intelligent Virtual Agents", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "484--488", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Marge, Claire Bonial, Kimberly A Pollard, Ron Artstein, Brendan Byrne, Susan G Hill, Clare Voss, and David Traum. 2016b. Assessing Agree- ment in Human-Robot Dialogue Strategies: A Tale of Two Wizards. In International Conference on In- telligent Virtual Agents, pages 484-488. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Addressing a question answering challenge by combining statistical methods with inductive rule learning and reasoning", |
|
"authors": [ |
|
{ |
|
"first": "Arindam", |
|
"middle": [], |
|
"last": "Mitra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chitta", |
|
"middle": [], |
|
"last": "Baral", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proc. of AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2779--2785", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arindam Mitra and Chitta Baral. 2016. Addressing a question answering challenge by combining statisti- cal methods with inductive rule learning and reason- ing. In Proc. of AAAI, pages 2779-2785.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "The proposition bank: An annotated corpus of semantic roles", |
|
"authors": [ |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Kingsbury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Computational Linguistics", |
|
"volume": "31", |
|
"issue": "1", |
|
"pages": "71--106", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated cor- pus of semantic roles. Computational Linguistics, 31(1):71-106.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Unsupervised entity linking with Abstract Meaning Representation", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoman", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Cassidy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji", |
|
"middle": [], |
|
"last": "Heng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proc. of HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1130--1139", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoman Pan, Taylor Cassidy, Ulf Hermjakob, Heng Ji, and Kevin Knight. 2015. Unsupervised entity link- ing with Abstract Meaning Representation. In Proc. of HLT-NAACL, pages 1130-1139.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Events in the Semantics of English", |
|
"authors": [ |
|
{ |
|
"first": "Terence", |
|
"middle": [], |
|
"last": "Parsons", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Terence Parsons. 1990. Events in the Semantics of En- glish, volume 5. MIT Press, Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Penman Natural Language Group", |
|
"authors": [], |
|
"year": 1989, |
|
"venue": "Information Sciences Institute", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Penman Natural Language Group. 1989. The Penman user guide. Technical report, Information Sciences Institute.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Generating English from Abstract Meaning Representations", |
|
"authors": [ |
|
{ |
|
"first": "Nima", |
|
"middle": [], |
|
"last": "Pourdamghani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Hermjakob", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proc. of INLG", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "21--25", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nima Pourdamghani, Kevin Knight, and Ulf Herm- jakob. 2016. Generating English from Abstract Meaning Representations. In Proc. of INLG, pages 21-25.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Biomedical event extraction using Abstract Meaning Representation", |
|
"authors": [ |
|
{ |
|
"first": "Sudha", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proc. of BioNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "126--135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sudha Rao, Daniel Marcu, Kevin Knight, and Hal Daum\u00e9 III. 2017. Biomedical event extraction us- ing Abstract Meaning Representation. In Proc. of BioNLP, pages 126-135, Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Semantic representation", |
|
"authors": [ |
|
{ |
|
"first": "Lenhart", |
|
"middle": ["K"], |
|
"last": "Schubert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4132--4139", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lenhart K Schubert. 2015. Semantic representation. In AAAI, pages 4132-4139.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Speech acts: An essay in the philosophy of language", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Rogers Searle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1969, |
|
"venue": "", |
|
"volume": "626", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Rogers Searle. 1969. Speech acts: An essay in the philosophy of language, volume 626. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Interactive Learning of Grounded Verb Semantics towards Human-Robot Communication", |
|
"authors": [ |
|
{ |
|
"first": "Lanbo", |
|
"middle": [], |
|
"last": "She", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joyce", |
|
"middle": [], |
|
"last": "Chai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1634--1644", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lanbo She and Joyce Chai. 2017. Interactive Learning of Grounded Verb Semantics towards Human-Robot Communication. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1634- 1644, Vancouver, Canada. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Combinatory categorial grammar. nontransformational syntax: A guide to current models", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Baldridge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "13--67", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Steedman and Jason Baldridge. 2009. Combina- tory categorial grammar. nontransformational syn- tax: A guide to current models. Blackwell, Oxford, 9:13-67.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Dialogue Structure Annotation for Multi-Floor Interaction", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Traum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cassidy", |
|
"middle": [], |
|
"last": "Henry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Lukin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Artstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Gervits", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kimberly", |
|
"middle": [], |
|
"last": "Pollard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Bonial", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Su", |
|
"middle": [], |
|
"last": "Lei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clare", |
|
"middle": [], |
|
"last": "Voss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Marge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cory", |
|
"middle": [], |
|
"last": "Hayes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Traum, Cassidy Henry, Stephanie Lukin, Ron Artstein, Felix Gervits, Kimberly Pollard, Claire Bonial, Su Lei, Clare Voss, Matthew Marge, Cory Hayes, and Susan Hill. 2018. Dialogue Structure Annotation for Multi-Floor Interaction. In Proceed- ings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "The information state approach to dialogue management", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": ["R"], |
|
"last": "Traum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Staffan", |
|
"middle": [], |
|
"last": "Larsson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Current and new directions in discourse and dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "325--353", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David R Traum and Staffan Larsson. 2003. The in- formation state approach to dialogue management. In Current and new directions in discourse and dia- logue, pages 325-353. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "A transition-based algorithm for AMR parsing", |
|
"authors": [ |
|
{ |
|
"first": "Chuan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "366--375", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015. A transition-based algorithm for AMR pars- ing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 366-375.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Dependency and AMR embeddings for drug-drug interaction extraction from biomedical literature", |
|
"authors": [ |
|
{ |
|
"first": "Yanshan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sijia", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Majid", |
|
"middle": [], |
|
"last": "Rastegar-Mojarad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liwei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Feichen", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongfang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proc. of ACM-BCB", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "36--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yanshan Wang, Sijia Liu, Majid Rastegar-Mojarad, Li- wei Wang, Feichen Shen, Fei Liu, and Hongfang Liu. 2017. Dependency and AMR embeddings for drug-drug interaction extraction from biomedical lit- erature. In Proc. of ACM-BCB, pages 36-43, New York, NY, USA.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "PENMAN notation of The dog wants the girl to pet him.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Gold standard AMR for \"take a picture\" (top), followed by parser output.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"text": "CAMR output for \"move forward 3 feet.\" The distance is mistaken for the :ARG1 thing-moved and the implied \"you\"/robot is omitted. Compare with figure 3.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"text": "f / foot)) :direction (f / forward))", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF5": { |
|
"text": "Without tense and aspect representation, current AMR conflates commands to move forward and assertions of ongoing and/or completed motion.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF6": { |
|
"text": "Speech act template for command:move. Arguments and relations in italics are filled in from context and the utterance.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Parser performance on human-robot dialogue data.", |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |