|
{ |
|
"paper_id": "W05-0102", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:45:35.320437Z" |
|
}, |
|
"title": "Teaching Dialogue to Interdisciplinary Teams through Toolkits", |
|
"authors": [ |
|
{ |
|
"first": "Justine", |
|
"middle": [], |
|
"last": "Cassell", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Behavior Northwestern University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Stone", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present some lessons we have learned from using software infrastructure to support coursework in natural language dialogue and embodied conversational agents. We have a new appreciation for the differences between coursework and research infrastructure-supporting teaching may be harder, because students require a broader spectrum of implementation, a faster learning curve and the ability to explore mistaken ideas as well as promising ones. We outline the collaborative discussion and effort we think is required to create better teaching infrastructure in the future.", |
|
"pdf_parse": { |
|
"paper_id": "W05-0102", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present some lessons we have learned from using software infrastructure to support coursework in natural language dialogue and embodied conversational agents. We have a new appreciation for the differences between coursework and research infrastructure-supporting teaching may be harder, because students require a broader spectrum of implementation, a faster learning curve and the ability to explore mistaken ideas as well as promising ones. We outline the collaborative discussion and effort we think is required to create better teaching infrastructure in the future.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Hands-on interaction with dialogue systems is a necessary component of a course on computational linguistics and natural language technology. And yet, it is clearly impracticable to have students in a quarterlong or semester-long course build a dialogue system from scratch. For this reason, instructors of these courses have experimented with various options to allow students to view the code of a working dialogue system, tweak code, or build their own application using a dialogue system toolkit. Some popular options include the NLTK (Loper and Bird, 2002) , CSLU (Cole, 1999) , Trindi (Larsson and Traum, 2000) and Regulus (Rayner et al., 2003) toolkits. However, each of these options has turned out to have disadvantages. Some of the toolkits require too much knowledge of linguistics for the average computer science student, and vice-versa, others require too much programming for the average linguist. What is needed is an extensible dialogue toolkit that allows easy application building for beginning students, and more sophisticated access to, and tweakability of, the models of discourse for advanced students.", |
|
"cite_spans": [ |
|
{ |
|
"start": 539, |
|
"end": 561, |
|
"text": "(Loper and Bird, 2002)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 581, |
|
"text": "(Cole, 1999)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 616, |
|
"text": "(Larsson and Traum, 2000)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 629, |
|
"end": 650, |
|
"text": "(Rayner et al., 2003)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In addition, as computational linguists become increasingly interested in the role of non-verbal behavior in discourse and dialogue, more of us would like to give our students exposure to models of the interaction between language and nonverbal behaviors such as eye gaze, head nods and hand gestures. However, the available dialogue system toolkits either have no graphical body or if they do have (part of) a body-as in the case of the CSLU toolkit-the toolkit does not allow the implementation of alternative models of body-language interaction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We feel, therefore, that there is a need for a toolkit that allows the beginning graduate studentwho may have some computer science or some linguistics background, but not both-to implement a working embodied dialogue system, as a way to experiment with models of discourse, dialogue, collaborative conversation and the interaction between verbal and nonverbal behavior in conversation. We believe the community as a whole must be engaged in the design, implementation and fielding of this kind of educational software. In this paper, we survey the experience that has led us to these conclusions and frame the broader discussion we hope the TNLP workshop will help to further.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our perspective in this paper draws on more than fifteen course offerings at the graduate level in discourse and dialogue over the years. Justine Cassell These courses are similar in perspective. All address an extremely diverse and interdisciplinary audience of students from computer science, linguistics, cognitive science, information science, communication, and education. The typical student is a first or second-year PhD student with a serious interest in doing a dissertation on human-computer communication or in enriching their dissertation research with results from the theory or practice of discourse and dialogue. All are project courses, but no programming is required; projects may involve evaluation of existing implementations or the prospective design of new implementations based on ongoing empirical research. Nevertheless, the courses retain the dual goals that students should not only understand discourse and the theory of pragmatics, but should also understand how the theory is implemented, either well enough to talk intelligently about the implementation or, if they are computer scientists, to actually carry it out.", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 153, |
|
"text": "Cassell", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Courses", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As befits our dual goals, our courses all involve a mix of instruction in human-human dialogue and human-computer dialogue. For example, Cassell begins her course with a homework where students collect, transcribe and analyze their own recordings of face-to-face conversation. Students are asked to discuss what constitutes a sufficient record of discourse, and to speculate on what the most challenging processing issues would be to allow a computer to replace one of the participants. Computer scientists definitely have difficulty with this aspect of the course-only fair, since they are at the advantage when it comes to implementation. But computer scientists see the value in the exercise: even if they do not believe that interfaces should be designed to act like people, they still recognize that well-designed interactive systems must be ready to handle the kinds of behaviors people actually carry out. And hands-on experience convinces them that behavior in human conversation is both rich and surprising. The computer scientists agree-after turning in impoverished and uninformed \"analyses\" of their discourse for a brutal critique-that they will never look at conversation the same way again.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Courses", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our experience suggests that we should be trying to give students outside computer science the same kind of eye-opening hands-on experience with technology. For example, we have found that linguists are just as challenged and excited by the discipline of technology as computer scientists are by the discipline of empirical observations. Linguists in our classes typically report that successful engagement with technology \"exposes a lot of details that were missing from my theoretical understanding that I never would have considered without working through the code\". Nothing is better at bringing out the assumptions you bring to an analysis of human-human conversation than the thought experiment of replacing one of the participants by something that has to struggle consciously to understand it-a space alien, perhaps, or, more realistically, an AI system. We are frustrated that no succinct assignment, comparable to our transcription homework, yet exists that can reliably deliver this insight to students outside computer science.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Courses", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our courses are not typical NLP classes. Our treatment of parsing is marginal, and for the most part we ignore the mainstays of statistical language processing courses: the low-level technology such as finite-state methods; the specific language processing challenges for machine learning methods; and \"applied\" subproblems like named entity extraction, or phrase chunking. Our focus is almost exclusively on high-level and interactional issues, such as the structure of discourse and dialogue, information structure, intentions, turn-taking, collaboration, reference and clarification. Context is central, and under that umbrella we explicitly discuss both the perceptual environment in which conversation takes place and the non-verbal actions that contribute to the management of conversation and participants' real-world collaborations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framing the Problem", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our unusual focus means that we can not readily take advantage of software toolkits such as NLTK (Loper and Bird, 2002) or Regulus (Rayner et al., 2003) . These toolkits are great at helping students implement and visualize the fundamentals of natural language processing-lexicon, morphology, syntax. They make it easy to experiment with machine learning or with specific models for a small scale, short course assignment in a specific NLP module. You can think of this as a \"horizontal\" approach, allowing students to systematically develop a comprehensive approach to a single processing task. But what we need is a \"vertical\" approach, which allows students to follow a specific choice about the representation of communicative behaviors or communicative functions all the way through an end-to-end dialogue system. We have not succeeded in conceptualizing how a carefully modularized toolkit would support this kind of student experience.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 119, |
|
"text": "(Loper and Bird, 2002)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 131, |
|
"end": 152, |
|
"text": "(Rayner et al., 2003)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framing the Problem", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Still, we have not met with success with alternative approaches, either. As we describe in Section 3.1, our own research systems may allow the kinds of experiments we want students to carry out. But they demand too much expertise of students for a one-semester course. In fact, as we describe in Section 3.2, even broad research systems that come with specific support for students to carry out a range of tasks may not enable the specific directions that really turn students on to the challenge of discourse and dialogue. However, our experience with implementing dedicated modules for teaching, as described in Section 3.3, is that the lack of synergy with ongoing research can result in impoverished tools that fail to engage students. We don't have the tools we want-but our experience argues that we think the tools we really want will be developed only through a collaborative effort shared across multiple sites and broadly engaged with a range of research issues as well as with pedagogical challenges.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framing the Problem", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Cassell has experimented with the use of her research platforms REA (Cassell et al., 1999) and BEAT (Cassell et al., 2001) for course projects in discourse and dialogue. REA is an embodied conversational agent that interacts with a user in a real estate agent domain. It includes an end-to-end dialogue architecture; it supports speech input, stereo vision input, conversational process including presence and turn-taking, content planning, the contextsensitive generation of communicative action and the animated realization of multimodal communicative actions. BEAT (the behavior expression animation toolkit), on the other hand, is a module that fits into animation systems. It marks up text to describe appropriate synchronized nonverbal behaviors and speech to realize on a humanoid talking character.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 90, |
|
"text": "(Cassell et al., 1999)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 100, |
|
"end": 122, |
|
"text": "(Cassell et al., 2001)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difficulties with REA and BEAT", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In teaching dialogue at MIT, Cassell invited students to adapt her existing REA and BEAT system to explore aspects of the theory and practice of discourse and dialogue. This led to a range of interesting projects. For example, students were able to explore hypothetical differences among charactersfrom virtual \"Italians\" with profuse gesture, to virtual children whose marked use of a large gesture space contrasted with typical adults, to characters who showed new and interesting behavior such as the repeated foot-tap of frustrated condescension. However, we think we can serve students much better. Many of these projects were accomplished only with substantial help from the instructor and TAs, who were already extremely familiar with the overall system. Students did not have time to learn how to make these changes entirely on their own.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difficulties with REA and BEAT", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The foot-tapping agent is a good example of this. To add foot-tapping is a paradigmatic \"vertical\" modification. It requires adding suitable context to the discourse state to represent uncooperative user behavior; it requires extending the process for generating communicative actions to detect this new state and schedule an appropriate behavioral response; and then it requires extending the animation platform to be able to show this behavior. BEAT makes the second step easy-as it should be-even for linguistics students. To handle the first and third steps, you would hope that an interdisciplinary team containing a communication student and a computer sci-ence student would be able to bring the expertise to design the new dialogue state and the new animated behavior. But that wasn't exactly true. In order to add the behavior to REA, students needed not only background in the relevant technology-like what a computer scientist would learn in a general human animation class. To add the behavior, students also needed to know how this technology was realized in our particular research platform. This proved too much for one semester.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difficulties with REA and BEAT", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We think this is a general problem with new research systems. For example, we think many of the same issues would arise in asking students to build a dialogue system on top of the Trindi toolkit in a one semester course.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difficulties with REA and BEAT", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In Fall 2004, Cassell experimented with using the CSLU dialogue toolkit (Cole, 1999) as a resource for class projects. This is a broad toolkit to support research and teaching in spoken language technology. A particular strength of the toolkit is its support for the design of finite-state dialogue models. Even students outside computer science appreciated the toolkit's drag-and-drop interface for scripting dialogue flow. For example, with this interface, you can add a repair sequence to a dialogue flow in one easy step. However, the indirection the toolkit places between students and the actual constructs of dialogue theory can by quite challenging. For example, the finite-state architecture of the CSLU toolkit allows students to look at floor management and at dialogue initiative only indirectly: specific transition networks encode specific strategies for taking turns or managing problem solving by scheduling specific communicative functions and behaviors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 84, |
|
"text": "(Cole, 1999)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difficulties with the CSLU toolkit", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The way we see it, the CSLU toolkit is more heavily geared towards the rapid construction of particular kinds of research prototypes than we would like in a teaching toolkit. Its dialogue models provide an instructive perspective on actions in discourse, one that nicely complements the perspective of DAMSL (Core and Allen, 1997) in seeing utterances as the combined realization of a specific, constrained range of communicative functions. But we would like to be able to explore a range of other metaphors for organizing the information in dialogue. We would like students to be able to realize models of face-to-face dialogue (Cassell et al., 2000) , the informationstate approach to domain-independent practical dialogue (Larsson and Traum, 2000) , or approaches that emphasize the grounding of conversation in the specifics of a particular ongoing collaboration (Rich et al., 2001) . The integration of a talking head into the CSLU toolkit epitomizes these limitations with the platform. The toolkit allows for the automatic realization of text with an animated spoken delivery, but does not expose the model to programmers, making it impossible for programmers adapt or control the behavior of the face and head.", |
|
"cite_spans": [ |
|
{ |
|
"start": 629, |
|
"end": 651, |
|
"text": "(Cassell et al., 2000)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 725, |
|
"end": 750, |
|
"text": "(Larsson and Traum, 2000)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 867, |
|
"end": 886, |
|
"text": "(Rich et al., 2001)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difficulties with the CSLU toolkit", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We think this is a general problem with platforms that are primarily designed to streamline a particular research methodology. For example, we think many of the same issues would arise in asking students to build a multimodal behavior realization system on top of a general-purpose speech synthesis platform like Festival (Black and Taylor, 1997) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 346, |
|
"text": "(Black and Taylor, 1997)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difficulties with the CSLU toolkit", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "At this point, the right solution might seem to be to devise resources explicitly for teaching. In fact, Stone advocated more or less this at the 2002 TNLP workshop (2002) . There, Stone motivated the potential role for a simple lexicalized formalism for natural language syntax, semantics and pragmatics in a broad NLP class whose emphasis is to introduce topics of current research.", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 171, |
|
"text": "(2002)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difficulties with TAGLET", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The system, TAGLET, is a context-free treerewriting formalism, defined by the usual complementation operation and the simplest imaginable modification operation. This formalism may in fact be a good way to present computational linguistics to technically-minded cognitive science studentsthose rare students who come with interest and experience in the science of language as well as a solid ability to program. By implementing a strong competence TAGLET parser and generator students simultaneously get experience with central computer science ideas-data structures, unification, recursion and abstraction-and develop an effective starting point for their own subsequent projects.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difficulties with TAGLET", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "However, in retrospect, TAGLET does not serve to introduce students outside computer science to the distinctive insights that come from a computational approach to language use. For one thing, to reach a broad audience, it is a mistake to focus on repre-sentations that programmers can easily build at the expense of representations that other students can easily understand. These other students need visualization; they need to be able to see what the system computes and how it computes it. Moreover, these other students can tolerate substantial complexity in the underlying algorithms if the system can be understood clearly and mechanistically in abstract terms. You wouldn't ask a computer scientist to implement a parser for full tree-adjoining grammar but that doesn't change the fact that it's still a perfectly natural, and comprehensible, algorithmic abstraction for characterizing linguistic structure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difficulties with TAGLET", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Another set of representations and algorithms might avoid some of these problems. But a new approach could not avoid another problem that we think applies generally to platforms that are designed exclusively for teaching: there is no synergy with ongoing research efforts. Rich resources are so crucial to any computational treatment of dialogue: annotated corpora, wide-coverage grammars, planrecognizers, context models, and the rest. We can't afford to start from scratch. We have found this concretely in our work. What got linguists involved in the computational exploration of dialogue semantics at Rutgers was not the special teaching resources Stone created. It was hooking students up with the systems that were being actively developed in ongoing research (DeVault et al., 2005) . These research efforts made it practical to provide students with the visualizations, task and context models, and interactive architecture they needed to explore substantive issues in dialogue semantics. Whatever we do will have to closely connect teaching and our ongoing research.", |
|
"cite_spans": [ |
|
{ |
|
"start": 766, |
|
"end": 788, |
|
"text": "(DeVault et al., 2005)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Difficulties with TAGLET", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Our experience teaching dialogue to interdisciplinary teams through toolkits has been humbling. We have a new appreciation for the differences between coursework and research infrastructuresupporting teaching may be harder, because students require a broader spectrum of implementation, a faster learning curve and the ability to explore mistaken ideas as well as promising ones. But we increasingly think the community can and should come together to foster more broadly useful resources for teaching.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Looking ahead", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We have reframed our ongoing activities so that we can find new synergies between research and teaching. For example, we are currently working to expand the repertoire of animated action in our freely-available talking head RUTH (DeCarlo et al., 2004) . In our next release, we expect to make different kinds of resources available than in the initial release. Originally, we distributed only the model we created. The next version will again provide that model, along with a broader and more useful inventory of facial expressions for it, but we also want the new RUTH to be more easily extensible than the last one. To do that, we have ported our model to a general-purpose animation environment (Alias Research's Maya) and created software tools that can output edited models into the collection of files that RUTH needs to run. This helps achieve our objective of quickly-learned extensibility. We expect that students with a background in human animation will bring experience with Maya to a dialogue course. (Anyway, learning Maya is much more general than learning RUTH!) Computer science students will thus find it easier to assist a team of communication and linguistics students in adding new expressions to an animated character.", |
|
"cite_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 251, |
|
"text": "RUTH (DeCarlo et al., 2004)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Looking ahead", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Creating such resources to span a general system for face-to-face dialogue would be an enormous undertaking. It could happen only with broad input from those who teach discourse and dialogue, as we do, through a mix of theory and practice. We hope the TNLP workshop will spark this kind of process. We close with the questions we'd like to consider further. What kinds of classes on dialogue and discourse pragmatics are currently being offered? What kinds of audiences do others reach, what goals do they bring, and what do they teach them? What are the scientific and technological principles that others would use toolkits to teach and illustrate? In short, what would your dialogue toolkit make possible? And how can we work together to realize both our visions?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Looking ahead", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The catchy title is the inspiration of Deb Roy at MIT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Thanks to Doug DeCarlo, NSF HLC 0308121.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": "5" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Festival speech synthesis system", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Black", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Human Communication Research Center", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Black and Paul Taylor. 1997. Festi- val speech synthesis system. Technical Report HCRC/TR-83, Human Communication Research Cen- ter. http://www.cstr.ed.ac.uk/projects/festival/.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Embodiment in conversational characters: Rea", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Cassell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Bickmore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Billinghurst", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Campbell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Vilhj\u00e1lmsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "CHI 99", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "520--527", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Cassell, T. Bickmore, M. Billinghurst, L. Campbell, K. Chang, H. Vilhj\u00e1lmsson, and H. Yan. 1999. Em- bodiment in conversational characters: Rea. In CHI 99, pages 520-527.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Human conversation as a system framework", |
|
"authors": [ |
|
{ |
|
"first": "Justine", |
|
"middle": [], |
|
"last": "Cassell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Bickmore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lee", |
|
"middle": [], |
|
"last": "Campbell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannes", |
|
"middle": [], |
|
"last": "Vilhjalmsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Embodied Conversational Agents", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "29--63", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Justine Cassell, Tim Bickmore, Lee Campbell, Hannes Vilhjalmsson, and Hao Yan. 2000. Human conver- sation as a system framework. In J. Cassell, J. Sul- livan, S. Prevost, and E. Churchill, editors, Embod- ied Conversational Agents, pages 29-63. MIT Press, Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BEAT: the behavioral expression animation toolkit", |
|
"authors": [ |
|
{

"first": "Justine",

"middle": [],

"last": "Cassell",

"suffix": ""

},

{

"first": "Hannes",

"middle": [],

"last": "Vilhj\u00e1lmsson",

"suffix": ""

},

{

"first": "Tim",

"middle": [],

"last": "Bickmore",

"suffix": ""

}
|
], |
|
"year": 2001, |
|
"venue": "SIGGRAPH", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "477--486", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Justine Cassell, Hannes Vilhj\u00e1lmsson, and Tim Bick- more. 2001. BEAT: the behavioral expression ani- mation toolkit. In SIGGRAPH, pages 477-486.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Tools for research and education in speech science", |
|
"authors": [ |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Cole", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the International Conference of Phonetic Sciences", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ron Cole. 1999. Tools for research and ed- ucation in speech science. In Proceedings of the International Conference of Phonetic Sciences. http://cslu.cse.ogi.edu/toolkit/.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Coding dialogs with the DAMSL annotation scheme", |
|
"authors": [ |
|
{

"first": "Mark",

"middle": [

"G"

],

"last": "Core",

"suffix": ""

},

{

"first": "James",

"middle": [

"F"

],

"last": "Allen",

"suffix": ""

}
|
], |
|
"year": 1997, |
|
"venue": "Working Notes of AAAI Fall Symposium on Communicative Action in Humans and Machines", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark G. Core and James F. Allen. 1997. Cod- ing dialogs with the DAMSL annotation scheme. In Working Notes of AAAI Fall Symposium on Communicative Action in Humans and Machines. http://www.cs.rochester.edu/research/cisd/resources/damsl/.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Specifying and animating facial signals for discourse in embodied conversational agents", |
|
"authors": [ |
|
{ |
|
"first": "Douglas", |
|
"middle": [], |
|
"last": "Decarlo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Corey", |
|
"middle": [], |
|
"last": "Revilla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Stone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Venditti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Journal of Visualization and Computer Animation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Douglas DeCarlo, Corey Revilla, Matthew Stone, and Jennifer Venditti. 2004. Specifying and animating fa- cial signals for discourse in embodied conversational agents. Journal of Visualization and Computer Ani- mation. http://www.cs.rutgers.edu/\u02dcvillage/ruth/.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "An information-state approach to collaborative reference", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Devault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anubha", |
|
"middle": [], |
|
"last": "Kothari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Kariaeva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iris", |
|
"middle": [], |
|
"last": "Oved", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Stone", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "ACL Proceedings Companion Volume", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David DeVault, Anubha Kothari, Natalia Kariaeva, Iris Oved, and Matthew Stone. 2005. An information-state approach to collaborative ref- erence. In ACL Proceedings Companion Vol- ume (interactive poster and demonstration track).", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Information state and dialogue management in the TRINDI dialogue move engine toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Staffan", |
|
"middle": [], |
|
"last": "Larsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Traum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Natural Language Engineering", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "323--340", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Staffan Larsson and David Traum. 2000. In- formation state and dialogue management in the TRINDI dialogue move engine toolkit. Natural Language Engineering, 6:323-340.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "NLTK: the natural language toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Loper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Loper and Steven Bird. 2002. NLTK: the natu- ral language toolkit. In Proceedings of the ACL Work- shop on Effective Tools and Methodologies for Teach- ing Natural Language Processing and Computational Linguistics. http://nltk.sourceforge.net.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "An open source environment for compiling typed unification grammars into speech recognisers", |
|
"authors": [ |
|
{ |
|
"first": "Manny", |
|
"middle": [], |
|
"last": "Rayner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beth", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Hockey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Dowding", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 10th Conference of the European Chapter of the Association for Computation Linguistics (interactive poster and demo track", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manny Rayner, Beth Ann Hockey, and John Dowd- ing. 2003. An open source environment for com- piling typed unification grammars into speech recog- nisers. In Proceedings of the 10th Conference of the European Chapter of the Association for Computa- tion Linguistics (interactive poster and demo track). http://sourceforge.net/projects/regulus.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "COL-LAGEN: applying collaborative discourse theory to human-computer interaction", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Rich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Sidner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Lesh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "22", |
|
"issue": "", |
|
"pages": "15--25", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Rich, C. L. Sidner, and N. Lesh. 2001. COL- LAGEN: applying collaborative discourse theory to human-computer interaction. AI Magazine, 22:15-25.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Lexicalized grammar 101", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Stone", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ACL Workshop on Effective Tools and Methodologies for Teaching NLP and CL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "76--83", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Stone. 2002. Lexicalized grammar 101. In ACL Workshop on Effective Tools and Method- ologies for Teaching NLP and CL, pages 76-83.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": {} |
|
} |
|
} |