{
"paper_id": "2005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:49:36.875204Z"
},
"title": "Parameters for Quantifying the Interaction with Spoken Dialogue Telephone Services",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "M\u00f6ller",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ruhr-Universit\u00e4t Bochum",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "When humans interact with spoken dialogue systems, parameters can be logged which quantify the flow of the interaction, the behavior of the user and the system, and the performance of individual system modules during the interaction. Although such parameters are not directly linked to the quality perceived by the user, they provide useful information for system development, optimization, and maintenance. This paper presents a collection of such parameters which are now considered to be recommended by the International Telecommunication Union (ITU-T) for evaluating telephone-based spoken dialogue services. As an initial evaluation, a case study is described which shows that the parameters correlate only weakly with subjective judgments, but that they still may be used for predicting quality with PARADISE-style regression models.",
"pdf_parse": {
"paper_id": "2005",
"_pdf_hash": "",
"abstract": [
{
"text": "When humans interact with spoken dialogue systems, parameters can be logged which quantify the flow of the interaction, the behavior of the user and the system, and the performance of individual system modules during the interaction. Although such parameters are not directly linked to the quality perceived by the user, they provide useful information for system development, optimization, and maintenance. This paper presents a collection of such parameters which are now considered to be recommended by the International Telecommunication Union (ITU-T) for evaluating telephone-based spoken dialogue services. As an initial evaluation, a case study is described which shows that the parameters correlate only weakly with subjective judgments, but that they still may be used for predicting quality with PARADISE-style regression models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Speech technology devices, such as automatic speech recognition (ASR), speaker verification, speech synthesis, or spoken dialogue systems (SDSs), are increasingly used in wireline and mobile telephone networks to provide automatic voice-enabled services. In contrast to simple interactive voice response (IVR) systems with DTMF input, spoken dialogue systems offer the full range of speech interaction capabilities, including the recognition of user speech, the assignment of meaning to the recognized words, the decision on how to continue the dialogue, the formulation of a linguistic response, and the generation of spoken output to the user. In this way, a more-or-less \"natural\" spoken interaction between user and system is enabled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, the Telecommunication Standardization Sector of the International Telecommunication Union (ITU-T) set up a new Recommendation describing subjective evaluation methods for telephone services based on spoken dialogue systems (ITU-T Rec. P.851, 2003) . This Recommendation describes methods for conducting subjective evaluation experiments in order to determine quality from a user's point-of-view. For enabling system developers to get rough estimates of quality during the development phase, these methods are foreseen to be complemented by a set of so-called interaction parameters. Such parameters help to quantify the flow of the interaction, the behavior of the user and the system, and the performance of the speech technology devices involved in the interaction. They address system performance from a system developer's and service operator's point-of-view, and thus provide complementary information to subjective evaluation data.",
"cite_spans": [
{
"start": 233,
"end": 257,
"text": "(ITU-T Rec. P.851, 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The present paper provides an overview of interaction parameters which have been used for evaluating SDSs in the past 15 years, based on theoretical work which is described in M\u00f6ller (2005) . Section 2 presents a brief characterization of the parameters, with respect to the interaction aspect they address and the measurement method which is required to determine the parameter. The parameters are categorized and listed in Section 3. Section 4 presents an initial evaluation of the set of parameters, showing their correlation to subjective quality judgments and their contribution for predicting quality, using PARADISE-style regression models. Section 5 summarizes the main findings and identifies future work to obtain a reduced set of parameters to be recommended by the ITU-T.",
"cite_spans": [
{
"start": 176,
"end": 189,
"text": "M\u00f6ller (2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Interaction parameters can be extracted when real or test users interact with the telephone service under consideration. The extraction is performed on the basis of log files, be it instrumentally or with the help of a transcribing and annotating expert. Parameters which relate to the surface form of the utterances exchanged between user and system, like the duration of the interaction or the number of turns, can usually be measured fully instrumentally. On the other hand, human transcription and annotation is necessary when not only the surface form (speech signals) is addressed, but also the contents and meaning of system or user utterances (e.g. to determine a word or concept accuracy). Both (instrumental and expert-based) ways of collecting interaction parameters should be combined in order to obtain as much information as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Interaction Parameters",
"sec_num": "2"
},
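As a concrete illustration (not part of the paper), the sketch below shows one way the two measurement methods could be combined in a single log record: the instrumentally logged fields are captured by the system itself, while the transcription and annotation fields are filled in later by an expert. The record layout and all field names are assumptions for this example.

```python
# Hypothetical log record combining instrumental and expert-based measurement.
# All field names are illustrative assumptions, not from the ITU-T parameter set.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LoggedTurn:
    speaker: str                          # "system" or "user" (instrumental)
    start_ms: int                         # turn onset in [ms] (instrumental)
    end_ms: int                           # turn offset in [ms] (instrumental)
    asr_hypothesis: str = ""              # recognizer output (instrumental)
    transcription: Optional[str] = None   # reference text (transcribing expert)
    labels: dict = field(default_factory=dict)  # e.g. {"correction": True} (annotating expert)
```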
{
"text": "Because interaction parameters are based on data which has been collected in an interaction between user and system, they are influenced by the characteristics of the system, of the user, and of the interaction between both. These influences cannot be separated, because the user's behavior is strongly influenced by that of the system (e.g. the questions asked by the system), and viceversa (e.g. the vocabulary and speaking style of the user influences the system's recognition and understanding accuracy). Consequently, interaction parameters strongly reflect the characteristics of the user group they have been collected with.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Interaction Parameters",
"sec_num": "2"
},
{
"text": "Interaction parameters are either determined in a laboratory test setting under controlled conditions, or in a field test. In the latter case, it may not be possible to extract all parameters, because not all necessary information can be gathered. For example, if the success of a task-oriented interaction (e.g. collection of a train timetable) is to be determined, then it is necessary to know about the exact aims of the user. Such information can only be collected in a laboratory setting, e.g. in the way it is described in ITU-T Rec. P.851 (2003) . In case that the fully integrated system is not yet available, it is possible to collect parameters from a so-called \"Wizard-of-Oz\" simulation, where a human experimenter replaces missing parts of the system under test. The characteristics of such a simulation have to be taken into account when interpreting the obtained parameter values.",
"cite_spans": [
{
"start": 540,
"end": 552,
"text": "P.851 (2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Interaction Parameters",
"sec_num": "2"
},
{
"text": "Interaction parameters can be calculated on a word level, on a sentence or utterance level, or on the level of a full interaction or dialogue. In case of word or utterance level parameters, average values are often calculated for each dialogue. The parameters collected with a specific group of users may be analyzed with respect to the impact of the system (version), the user group, and the experimental setting (scenarios, test environment, etc.), using standard statistical methods. A characterization of these influences can be found in M\u00f6ller (2005) .",
"cite_spans": [
{
"start": 542,
"end": 555,
"text": "M\u00f6ller (2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of Interaction Parameters",
"sec_num": "2"
},
{
"text": "Based on a broad literature survey, parameters were identified which have been used in different assessment and evaluation experiments during the past 15 years. The respective literature includes Billi et al. (1996) , Boros et al. (1996) , Carletta (1996) , Cookson (1988) , Danieli and Gerbino (1995) , Fraser (1997) , Gerbino et al. (1993) , Glass et al. (2000) , Goodine et al. (1992) , Hirschman and Pao (1993) , Kamm et al. (1998) , Polifroni et al. (1992) , Price et al. (1992) , San-Segundo et al. (2001) , Simpson and Fraser (1993) , Skowronek (2002) , Strik et al. (2000 Strik et al. ( , 2001 , van Leeuwen and Steeneken (1997) , Walker et al. (1997 Walker et al. ( , 1998 , Zue et al. (2000) .",
"cite_spans": [
{
"start": 196,
"end": 215,
"text": "Billi et al. (1996)",
"ref_id": "BIBREF1"
},
{
"start": 218,
"end": 237,
"text": "Boros et al. (1996)",
"ref_id": "BIBREF2"
},
{
"start": 240,
"end": 255,
"text": "Carletta (1996)",
"ref_id": "BIBREF3"
},
{
"start": 258,
"end": 272,
"text": "Cookson (1988)",
"ref_id": "BIBREF5"
},
{
"start": 275,
"end": 301,
"text": "Danieli and Gerbino (1995)",
"ref_id": "BIBREF6"
},
{
"start": 304,
"end": 317,
"text": "Fraser (1997)",
"ref_id": "BIBREF7"
},
{
"start": 320,
"end": 341,
"text": "Gerbino et al. (1993)",
"ref_id": "BIBREF9"
},
{
"start": 344,
"end": 363,
"text": "Glass et al. (2000)",
"ref_id": "BIBREF10"
},
{
"start": 366,
"end": 387,
"text": "Goodine et al. (1992)",
"ref_id": "BIBREF11"
},
{
"start": 390,
"end": 414,
"text": "Hirschman and Pao (1993)",
"ref_id": "BIBREF13"
},
{
"start": 417,
"end": 435,
"text": "Kamm et al. (1998)",
"ref_id": "BIBREF17"
},
{
"start": 438,
"end": 461,
"text": "Polifroni et al. (1992)",
"ref_id": "BIBREF21"
},
{
"start": 464,
"end": 483,
"text": "Price et al. (1992)",
"ref_id": "BIBREF22"
},
{
"start": 486,
"end": 511,
"text": "San-Segundo et al. (2001)",
"ref_id": "BIBREF23"
},
{
"start": 514,
"end": 539,
"text": "Simpson and Fraser (1993)",
"ref_id": "BIBREF24"
},
{
"start": 542,
"end": 558,
"text": "Skowronek (2002)",
"ref_id": "BIBREF25"
},
{
"start": 561,
"end": 579,
"text": "Strik et al. (2000",
"ref_id": "BIBREF27"
},
{
"start": 580,
"end": 601,
"text": "Strik et al. ( , 2001",
"ref_id": "BIBREF26"
},
{
"start": 608,
"end": 636,
"text": "Leeuwen and Steeneken (1997)",
"ref_id": "BIBREF28"
},
{
"start": 639,
"end": 658,
"text": "Walker et al. (1997",
"ref_id": "BIBREF30"
},
{
"start": 659,
"end": 681,
"text": "Walker et al. ( , 1998",
"ref_id": "BIBREF29"
},
{
"start": 684,
"end": 701,
"text": "Zue et al. (2000)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Review of Interaction Parameters",
"sec_num": "3"
},
{
"text": "The parameters can broadly be classified as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of Interaction Parameters",
"sec_num": "3"
},
{
"text": "\u2022 Dialogue-and communication-related parameters \u2022 Meta-communication-related parameters \u2022 Cooperativity-related parameters \u2022 Task-related parameters \u2022 Speech-input-related parameters",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of Interaction Parameters",
"sec_num": "3"
},
{
"text": "These categories will be briefly discussed in the following sections. The respective parameters are listed in the Appendix, together with a definition, the interaction level addressed by the parameter (word, utterance or dialogue), as well as the measurement method (instrumental or expert annotation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of Interaction Parameters",
"sec_num": "3"
},
{
"text": "Parameters which refer to the overall dialogue and to the communication of information give a very rough indication of how the interaction takes place. They do not specify the communicative function of each individual utterance in detail. These parameters are listed in Table 2 of the Appendix, and include duration-related parameters (overall dialogue duration, duration of system and user turns, system and user response delay), and word-and turn-related parameters (average number of system and user turns, average number of words per system and per user turn, number of system and user questions).",
"cite_spans": [],
"ref_spans": [
{
"start": 270,
"end": 277,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dialogue-and Communication-Related Parameters",
"sec_num": "3.1"
},
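A minimal sketch of how these duration- and turn-related parameters could be computed from a timestamped log; the Turn structure and its field names are assumptions for this example, and turns are assumed to be in temporal order.

```python
# Sketch: duration- and turn-related parameters (Table 2) from a timestamped log.
# The Turn record is a hypothetical structure; turns are assumed time-ordered.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Turn:
    speaker: str    # "system" or "user"
    start_ms: int   # turn onset, in [ms]
    end_ms: int     # turn offset, in [ms]
    n_words: int    # number of words in the turn

def dialogue_parameters(turns: list[Turn]) -> dict:
    sys_t = [t for t in turns if t.speaker == "system"]
    usr_t = [t for t in turns if t.speaker == "user"]
    # Response delays: gap between the end of one turn and the start of the next.
    srd = [b.start_ms - a.end_ms for a, b in zip(turns, turns[1:])
           if a.speaker == "user" and b.speaker == "system"]
    urd = [b.start_ms - a.end_ms for a, b in zip(turns, turns[1:])
           if a.speaker == "system" and b.speaker == "user"]
    return {
        "DD": turns[-1].end_ms - turns[0].start_ms,          # dialogue duration [ms]
        "STD": mean(t.end_ms - t.start_ms for t in sys_t),   # system turn duration [ms]
        "UTD": mean(t.end_ms - t.start_ms for t in usr_t),   # user turn duration [ms]
        "SRD": mean(srd) if srd else None,                   # system response delay [ms]
        "URD": mean(urd) if urd else None,                   # user response delay [ms]
        "# turns": len(turns),
        "WPST": mean(t.n_words for t in sys_t),              # words per system turn
        "WPUT": mean(t.n_words for t in usr_t),              # words per user turn
    }
```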
{
"text": "Two parameters which have been proposed by Glass et al. (2000) are worth noting: The query density gives an indication of how efficiently a user can provide new information to a system, and the concept efficiency describes how efficiently the system can absorb this information from the user. These parameters also refer to the system's language understanding capability, but they have been included in this section because they result from the system's interaction capabilities as a whole, and not purely from the language understanding capabilities.",
"cite_spans": [
{
"start": 43,
"end": 62,
"text": "Glass et al. (2000)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue-and Communication-Related Parameters",
"sec_num": "3.1"
},
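The two parameters can be written down compactly. The sketch below assumes the per-dialogue counts n_u (unique concepts correctly understood), n_q (total user queries) and n_c (total concepts uttered) have already been obtained by annotation, following the formulas given in the Appendix.

```python
# Query density (QD) and concept efficiency (CE) after Glass et al. (2000).
# Input: per-dialogue counts, assumed to come from annotated logs.
def query_density(dialogues: list[dict]) -> float:
    # n_u: unique concepts correctly "understood"; n_q: total user queries.
    return sum(d["n_u"] / d["n_q"] for d in dialogues) / len(dialogues)

def concept_efficiency(dialogues: list[dict]) -> float:
    # n_c: total number of concepts uttered in the dialogue.
    return sum(d["n_u"] / d["n_c"] for d in dialogues) / len(dialogues)
```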
{
"text": "All parameters in this category are of global character and refer to the dialogue as a whole, although they are partly calculated on an utterance level. Global parameters are sometimes problematic, because the individual differences in cognitive skill may be large in relation to the system-originated differences, and because subjects might learn strategies for task solution which have a significant impact on global parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue-and Communication-Related Parameters",
"sec_num": "3.1"
},
{
"text": "Meta-communication, i.e. the communication about communication, is particularly important for the spoken interaction with systems which have limited recogni-tion, understanding and reasoning capabilities. In this case, correction and clarification utterances or even subdialogues are needed to recover from misunderstandings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meta-Communication-Related Parameters",
"sec_num": "3.2"
},
{
"text": "The parameters belonging to this group quantify the number of system and user utterances which are part of meta-communication. Most of the parameters are calculated as the absolute number of utterances in a dialogue which relate to a specific interaction problem, and are then averaged over a set of dialogues. They include the number of help requests from the user, of time-out prompts from the system, of user utterances rejected by the system in the case that no semantic content could be extracted (ASR rejections), of diagnostic system error messages, of barge-in attempts from the user, and of user attempts to cancel a previous action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meta-Communication-Related Parameters",
"sec_num": "3.2"
},
{
"text": "The ability of the system (and of the user) to recover from interaction problems can be described in two ways: Either explicitly by the correction rate, i.e. the percentage of all (system or user) turns which are primarily concerned with rectifying an interaction problem, or implicitly with the implicit recovery parameter, which quantifies the capacity of the system to regain utterances which have partially failed to be recognized or understood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meta-Communication-Related Parameters",
"sec_num": "3.2"
},
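A sketch of the two recovery measures, assuming the expert annotation marks each turn that is primarily concerned with rectifying a problem, and judges for each partially parsed (PA:PA) utterance whether the system's response was appropriate:

```python
# Correction rate and implicit recovery (IR) from assumed expert labels.
def correction_rate(is_correction: list[bool]) -> float:
    # One flag per (system or user) turn: True if the turn primarily
    # rectifies an interaction problem.
    return sum(is_correction) / len(is_correction)

def implicit_recovery(appropriate_after_partial: list[bool]) -> float:
    # One flag per PA:PA utterance: True if the system response was
    # judged "appropriate" despite the partial recognition failure.
    # IR = (# utterances with appropriate system answer) / PA:PA
    return sum(appropriate_after_partial) / len(appropriate_after_partial)
```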
{
"text": "In contrast to the global measures, most metacommunication-related parameters describe the function of system and user utterances in the communication process. Thus, most parameters have to be determined with the help of an annotating expert. The parameters are listed in Table 3 of the Appendix.",
"cite_spans": [],
"ref_spans": [
{
"start": 272,
"end": 279,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Meta-Communication-Related Parameters",
"sec_num": "3.2"
},
{
"text": "Cooperativity has been identified as a key aspect for a successful interaction with a spoken dialogue system (Bernsen et al., 1998) . Unfortunately, it is difficult to quantify whether a system behaves cooperatively or not. Several of the dialogue-and meta-communicationrelated parameters somehow relate to system cooperativity, but they do not attempt to quantify this aspect.",
"cite_spans": [
{
"start": 109,
"end": 131,
"text": "(Bernsen et al., 1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cooperativity-Related Parameters",
"sec_num": "3.3"
},
{
"text": "Direct measures of cooperativity are the contextual appropriateness parameters introduced by Simpson and Fraser (1993) . Each system utterance has to be judged by a number of experts as to whether it violates one or more of Grice's maxims for cooperativity, see Grice (1975) . These principles have been stated more precisely by Bernsen et al. (1998) with respect to spoken dialogue systems.",
"cite_spans": [
{
"start": 93,
"end": 118,
"text": "Simpson and Fraser (1993)",
"ref_id": "BIBREF24"
},
{
"start": 262,
"end": 274,
"text": "Grice (1975)",
"ref_id": "BIBREF12"
},
{
"start": 329,
"end": 350,
"text": "Bernsen et al. (1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cooperativity-Related Parameters",
"sec_num": "3.3"
},
{
"text": "The utterances are classified into the categories of appropriate (not violating Grice's maxims), inappropriate (violating one or more maxim), appropriate/inappropriate (the experts cannot reach agreement in their classification), incomprehensible (the content of the utterance cannot be discerned in the dialogue context), or total failure (no linguistic response from the system). It has to be noted that the classification is not always straightforward, and that interpretation principles may be necessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cooperativity-Related Parameters",
"sec_num": "3.3"
},
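Operationally, the expert labels could simply be tallied into per-category proportions; the label strings in this sketch are assumptions.

```python
# Tally of contextual appropriateness labels (Section 3.3); the label
# strings are illustrative assumptions.
from collections import Counter

CATEGORIES = ("CA:AP", "CA:IA", "CA:AP/IA", "CA:IC", "CA:TF")

def contextual_appropriateness(labels: list[str]) -> dict:
    # labels: one category per system utterance, as assigned by the experts.
    counts = Counter(labels)
    return {cat: counts[cat] / len(labels) for cat in CATEGORIES}
```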
{
"text": "Current state-of-the-art telephone services enable task-orientated interactions between system and user, and task success is a key issue for the usefulness of a service. Task success may best be determined in a laboratory situation where explicit tasks are given to the test subjects, see M\u00f6ller (2005) . However, realistic measures of task success have to take into account potential deviations from the scenario by the user, either because he/she did not pay attention to the instructions given in the scenario, because of his/her inattentiveness to the system utterances, or because the task was unresolvable and had to be modified in the course of the dialogue.",
"cite_spans": [
{
"start": 289,
"end": 302,
"text": "M\u00f6ller (2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Related Parameters",
"sec_num": "3.4"
},
{
"text": "Modification of the experimental task is considered in most definitions of task success which are reported in the literature. Success may be reached by simply providing the right answer to the constraints set in the instructions, by constraint relaxation from the system or from the user (or both), or by spotting that no solution exists for the defined task. Task failure may be tentatively attributed to the system's or to the user's behavior, the latter however being influenced by the one of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Related Parameters",
"sec_num": "3.4"
},
{
"text": "A different approach to determine task success is the \u03ba coefficient. It assumes a speech-understanding approach which is based on attributes (concepts, slots) for which allowed values have to be assigned in the course of the dialogue, resulting in attribute-value-pairs (AVPs). A set of all available attributes together with the values assigned by the task (a so-called attributevalue matrix, AVM) completely describes a task which can be carried out with the help of the system. In order to determine the \u03ba coefficient, a confusion matrix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Related Parameters",
"sec_num": "3.4"
},
{
"text": "M(i,j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Related Parameters",
"sec_num": "3.4"
},
{
"text": "is set up for the attributes in the key (scenario definition) and in the reported solution (log file of the dialogue). Then, the agreement between key and solution P(A) and the chance agreement P(E) can be calculated from this matrix, see Table 5 . M(i,j) can be calculated for individual dialogues, or for a set of dialogues which belong to a specific system or system configuration.",
"cite_spans": [],
"ref_spans": [
{
"start": 239,
"end": 246,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Task-Related Parameters",
"sec_num": "3.4"
},
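A sketch of the computation, following the PARADISE definitions (P(A) as the proportion of agreeing AVPs on the diagonal of M, and P(E) = \sum_i (t_i/T)^2 with t_i the column sums, as reconstructed in the Appendix); the matrix orientation is an assumption of this example.

```python
# Kappa from the AVP confusion matrix M(i,j) between key and solution.
import numpy as np

def kappa(M: np.ndarray) -> float:
    T = M.sum()                        # total number of AVP labels
    p_a = np.trace(M) / T              # agreement between key and solution, P(A)
    t = M.sum(axis=0)                  # t_i: sum of frequencies in column i
    p_e = float(((t / T) ** 2).sum())  # chance agreement, P(E)
    return (p_a - p_e) / (1 - p_e)     # kappa = (P(A) - P(E)) / (1 - P(E))
```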
{
"text": "The \u03ba coefficient relies on the availability of a simple task coding scheme, namely in terms of an AVM. However, some tasks cannot be characterized as easily.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Related Parameters",
"sec_num": "3.4"
},
{
"text": "In that case, more elaborated approaches to task success are needed, approaches which usually depend on the type of task under consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Related Parameters",
"sec_num": "3.4"
},
{
"text": "The speech input capability of a spoken dialogue system is determined by its capability to recognize words and utterances, and to extract the meaning from the recognized string. The speech recognition task can be categorized into isolated word recognition, keyword spotting, or continuous speech recognition. Speech understanding is often performed on the basis of attributevalue pairs, see the previous section. The parameters described in the following paragraph address both speech recognition and speech understanding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech-Input-Related Parameters",
"sec_num": "3.5"
},
{
"text": "Continuous speech recognizers generally provide a word string hypothesis which has to be aligned with a reference transcription produced by an annotating expert. On the basis of the alignment, the number of correctly determined words c w , of substitutions s w , of insertions i w , and of deletions d w is counted. These counts can be related to the total number of words in the reference n w , resulting in two alternative measures of recognition performance, the word error rate WER and the word accuracy WA, see Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 516,
"end": 523,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Speech-Input-Related Parameters",
"sec_num": "3.5"
},
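In code, the two measures are direct ratios of the alignment counts; a sketch, assuming the counts come from an aligner such as sclite:

```python
# WER and WA from alignment counts (Table 6).
def wer(n_w: int, s_w: int, d_w: int, i_w: int) -> float:
    # Substituted, deleted and inserted words relative to the n_w
    # reference words; WER can exceed 1 if insertions are frequent.
    return (s_w + d_w + i_w) / n_w

def wa(n_w: int, s_w: int, d_w: int, i_w: int) -> float:
    return 1.0 - wer(n_w, s_w, d_w, i_w)   # word accuracy, WA = 1 - WER
```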
{
"text": "Complementary performance measures can be defined on the sentence level, in terms of a sentence accuracy, SA, or a sentence error rate, SER, see Table 6 . In general, SA is lower than WA, because a single misrecognised word in a sentence impacts the SA parameter. It may however become higher than the word accuracy, especially when many single-word sentences are correctly recognized. The fact that SER and SA penalize a whole utterance when a single misrecognised word occurs has been pointed out by Strik et al. (2000 Strik et al. ( , 2001 ; the problem can be circumvented with the parameters NES and WES, see Table 6 . When utterances are not separated into sentences, all sentence-related metrics can also be calculated on an utterance instead of a sentence level.",
"cite_spans": [
{
"start": 502,
"end": 520,
"text": "Strik et al. (2000",
"ref_id": "BIBREF27"
},
{
"start": 521,
"end": 542,
"text": "Strik et al. ( , 2001",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 145,
"end": 152,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 614,
"end": 621,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Speech-Input-Related Parameters",
"sec_num": "3.5"
},
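The sentence-level measure needs only the per-sentence error counts; a sketch:

```python
# Sentence accuracy: a sentence counts as correct only if it contains
# no misrecognized word (hence SA is usually lower than WA).
def sentence_accuracy(errors_per_sentence: list[int]) -> float:
    correct = sum(1 for e in errors_per_sentence if e == 0)
    return correct / len(errors_per_sentence)   # SER = 1 - SA
```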
{
"text": "Isolated word recognizers provide an output hypothesis for each input word or utterance. Input and output words can be directly compared, and similar performance measures as in the continuous recognition case can be defined, omitting the insertions. Instead of the insertions, the number of \"false alarms\" in a time period can be counted, see van Leeuwen and Steeneken (1997) . WA and WER can also be determined for keywords only, when the recognizer operates in a keywordspotting mode.",
"cite_spans": [
{
"start": 347,
"end": 375,
"text": "Leeuwen and Steeneken (1997)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech-Input-Related Parameters",
"sec_num": "3.5"
},
{
"text": "For speech understanding assessment, two common approaches have to be distinguished. The first one is based on the classification of system answers to user questions into categories of correctly answered, partially correctly answered, incorrectly answered, or failed answers. The individual answer categories can be combined into measures which have been used in the US DARPA program, see Table 6 . The second way is to classify the system's parsing capabilities, either in terms of correctly parsed utterances, or of correctly identified AVPs. On the basis of the identified AVPs, global measures such as the concept accuracy, CA, the concept error rate, CER, or the understanding accuracy, UA, can be calculated. All parameters are listed in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 389,
"end": 396,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 744,
"end": 751,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Speech-Input-Related Parameters",
"sec_num": "3.5"
},
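A sketch of the AVP-based measures, assuming concept accuracy is counted analogously to word accuracy (substitutions, insertions and deletions over AVPs) and UA is the share of fully correctly parsed (PA:CO) user turns:

```python
# Concept accuracy (CA) and understanding accuracy (UA) from AVP counts.
def concept_accuracy(n_avp: int, s_c: int, i_c: int, d_c: int) -> float:
    # Counted over attribute-value pairs, analogous to WA; CER = 1 - CA.
    return 1.0 - (s_c + i_c + d_c) / n_avp

def understanding_accuracy(parse_labels: list[str]) -> float:
    # parse_labels: "PA:CO", "PA:PA" or "PA:IC" per user turn.
    # UA = (# PA:CO utterances) / (# user turns)
    return parse_labels.count("PA:CO") / len(parse_labels)
```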
{
"text": "When separating the quality of an SDS-based service into quality aspects, in the way which is indicated in ITU-T Rec. P.851 (2003, Section 5.3) , it can be observed that several aspects of quality are not addressed by interaction parameters. No parameters directly relate to usability, user satisfaction, acceptability, or speech output quality. So far, only very few approaches have been made which address the quality of speech output (be it concatenated or synthesized) in a parametric way. Instrumental measures related to speech intelligibility are defined e.g. in IEC Standard 60268-16 (1998), but they have not been designed for a telephone environment. Concatenation cost measures have been proposed which can be calculated from the input text and the speech database of a concatenative synthesis system (Chu and Peng, 2001) . Although they sometimes show high correlations to mean opinion scores obtained in subjective experiments, such measures are very specific to the speech synthesizer and its concatenation corpus.",
"cite_spans": [
{
"start": 118,
"end": 143,
"text": "P.851 (2003, Section 5.3)",
"ref_id": null
},
{
"start": 812,
"end": 832,
"text": "(Chu and Peng, 2001)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Further Parameters",
"sec_num": "3.6"
},
{
"text": "Although interaction parameters as the ones defined in Section 3 are important for system design, optimization and maintenance, they are not directly linked to the quality which is perceived by the human user. Consequently, the collection of interaction parameters should be complemented by a collection of user judgments, as it is described in ITU-T Rec. P.851 (2003) .",
"cite_spans": [
{
"start": 356,
"end": 368,
"text": "P.851 (2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Evaluation of Interaction Parameters",
"sec_num": "4"
},
{
"text": "In order to determine the relationship between subjective user judgments and interaction parameters, a limited case study has been carried out in the frame of the EC-funded IST project INSPIRE (INfotainment management with SPeech Interaction via REmote microphones and telephone interfaces). In this project, a prototype of a spoken dialogue system for controlling domestic devices (lamps, blinds, video recorder, answering machine, etc.) has been set up. The prototype has been evaluated in a controlled laboratory experiment at IKA. Because the speech recognizer was not available when the experiment was carried out, it had to be replaced by a human transcriber, making this a partly Wizard-of-Oz-based experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Evaluation of Interaction Parameters",
"sec_num": "4"
},
{
"text": "During this experiment, 24 test users interacted with the system in a realistic home environment, following three scenario-guided interactions, each comprising several tasks. After each interaction, users were asked to fill in a questionnaire with 37 statements which has been designed following the methodology of ITU-T Rec. P.851 (2003) . In parallel, the interactions have been logged, transcribed and annotated using a specifically-designed annotation interface (Skowronek, 2002; M\u00f6ller, 2005) . From the annotation, 64 parameters could be extracted for each interaction which are mainly identical to the ones listed in Section 3. Thus, a set of user judgments on quality and interaction parameters is available for the initial evaluation, reflecting the same set of interactions with a prototypical system. Details on the experiment are described in M\u00f6ller et al. (2005) .",
"cite_spans": [
{
"start": 326,
"end": 338,
"text": "P.851 (2003)",
"ref_id": null
},
{
"start": 466,
"end": 483,
"text": "(Skowronek, 2002;",
"ref_id": "BIBREF25"
},
{
"start": 484,
"end": 497,
"text": "M\u00f6ller, 2005)",
"ref_id": "BIBREF18"
},
{
"start": 855,
"end": 875,
"text": "M\u00f6ller et al. (2005)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Evaluation of Interaction Parameters",
"sec_num": "4"
},
{
"text": "From this database, correlations between interaction parameters and subjective judgments have been calculated. Because several interaction parameters and user judgments do not follow a Gaussian distribution, Spearman rank-order correlations \u03c1 have been chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation between Interaction Parameters and User Judgments",
"sec_num": "4.1"
},
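For illustration, such a correlation can be computed with scipy; the two arrays below are placeholder values, not data from the study.

```python
# Spearman rank-order correlation between a user judgment and an
# interaction parameter; the numbers are placeholders.
import numpy as np
from scipy.stats import spearmanr

judgments = np.array([4.0, 3.5, 2.0, 4.5, 3.0])       # hypothetical ratings
parameter = np.array([0.97, 0.92, 0.80, 0.99, 0.88])  # e.g. WA per dialogue

rho, p_value = spearmanr(judgments, parameter)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```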
{
"text": "The results were disappointing on a first view: The highest coefficients were around 0.6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation between Interaction Parameters and User Judgments",
"sec_num": "4.1"
},
{
"text": "Interestingly, quality-related information seems to be captured mostly in the speech-recognition-and speech-understanding-related parameters. This is astonishing, because the (simulated) recognition accuracy of the INSPIRE system was nearly perfect (mean WA = 97.2%). The recognition-related parameters were shown to have correlations of up to 0.6 with interaction control, up to 0.52 with interaction pleasantness, up to 0.47 with the difficulty of operation, up to 0.43 with system helpfulness, up to 0.42 with dialogue smoothness, and up to 0.40 with error recovery. The correlation between speech-recognition-and speech-understandingrelated parameters is only moderate, justifying measuring both types of parameters to obtain a maximum of information. Perceived system understanding correlates only moderately with the measured understanding accuracy, UA (\u03c1 = 0.41).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation between Interaction Parameters and User Judgments",
"sec_num": "4.1"
},
{
"text": "With respect to efficiency, humans do not seem to be adequate measurement instruments either. The correlation between the perceived length of a dialogue and DD (communication efficiency) is very low, as well as the correlation between annotated and perceived task success (task efficiency).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation between Interaction Parameters and User Judgments",
"sec_num": "4.1"
},
{
"text": "The subjective judgment on overall quality seems to be mainly dominated by the characteristics of the system turns (STD: \u03c1 = 0.40), by the understanding accuracy (UA: \u03c1 = 0.39; UCT: \u03c1 = 0.36), and by the recognition accuracy (\u03c1 between 0.39 and 0.42). Still, this correlation is not high enough to be able to predict overall system quality on the basis of individual interaction parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation between Interaction Parameters and User Judgments",
"sec_num": "4.1"
},
{
"text": "More sophisticated models have been developed to predict system usability and acceptability from a combination of parameters. The most popular approach is the PARADISE framework developed by Walker et al. (1997 Walker et al. ( , 1998 . The model aims at predicting \"user satisfaction\", which is calculated as an arithmetic mean over several user judgments on different quality aspects, as a linear combination of several interaction parameters. In its original version, Walker et al. used 8-9 interaction parameters as an input to the model, including a subjective judgment on task success. The weighting coefficients of the linear prediction function are determined with the help of a multivariate linear regression analysis, using a database of user judgments and interaction parameters which have been collected under controlled (laboratory) conditions.",
"cite_spans": [
{
"start": 191,
"end": 210,
"text": "Walker et al. (1997",
"ref_id": "BIBREF30"
},
{
"start": 211,
"end": 233,
"text": "Walker et al. ( , 1998",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality Prediction Models",
"sec_num": "4.2"
},
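A minimal sketch of such a PARADISE-style fit: interaction parameters are z-score normalized and regressed onto a satisfaction target by least squares, without a constant term, as in the regression reported below. The data values and the choice of three predictors are placeholders.

```python
# PARADISE-style linear prediction of user satisfaction (sketch).
import numpy as np

X = np.array([[0.97, 12, 1.0],       # rows: dialogues; columns (assumed):
              [0.92, 18, 0.8],       # WA, # turns, task success judgment
              [0.80, 25, 0.5],
              [0.99, 10, 1.0]])
y = np.array([4.2, 3.1, 1.8, 4.5])   # mean user judgment per dialogue

# z-score normalization of predictors and target, then a linear fit
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()
w, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
print("weights:", w)   # relative contribution of each parameter
```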
{
"text": "From the INSPIRE database, several PARADISEstyle models have been calculated, using different user judgments as the prediction target (judgment on \"overall quality\", \"user satisfaction\", or the arithmetic mean over all 37 judgments), and several sets of interaction parameters as the input variables (full set of 64 parameters or restricted set of 5 parameters similar to Walker et al., 1997) . In particular, two types of parameters have been used for describing task success: Either an expertderived weighted task success index TSe (which is calculated from the TS labels of Table 2 , assigning a value of one for each sub-task which has been successfully achieved by the user, and a value of zero for all failures), or a user judgment of task success TSu (as it was the case in the experiments reported in Walker et al., 1997 and . The regression algorithm used a stepwise (forward-backward) inclusion of parameters (for 64 parameters) or a forced inclusion of all parameters (for 5 parameters only), did not include a constant term, and replaced missing values by their respective means. The results are shown in Table 1 . Indicated is the amount of variance in the subjective judgments which can be covered by the respective model (R 2 corr ) and the number of input parameters selected by the regression algorithm. For the large set of input parameters, R 2 corr reaches 0.46 in the best case, which is comparable to the prediction accuracy reported by Walker et al. (1997 Walker et al. ( , 1998 . However, when using only the restricted set of parameters as an input to the regression analysis, the prediction accuracy is much lower. The user-derived judgment of task success leads in all cases to better prediction results; it is particularly important when only few input parameters are available. All in all, the prediction accuracy does not depend on the number of input parameters, but on their informative value.",
"cite_spans": [
{
"start": 372,
"end": 392,
"text": "Walker et al., 1997)",
"ref_id": "BIBREF30"
},
{
"start": 809,
"end": 832,
"text": "Walker et al., 1997 and",
"ref_id": "BIBREF30"
},
{
"start": 1459,
"end": 1478,
"text": "Walker et al. (1997",
"ref_id": "BIBREF30"
},
{
"start": 1479,
"end": 1501,
"text": "Walker et al. ( , 1998",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 577,
"end": 584,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 1117,
"end": 1124,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Quality Prediction Models",
"sec_num": "4.2"
},
{
"text": "An overview has been presented of interaction parameters quantifying the interaction between a user and a spoken dialogue system. Such parameters can be used in the design, implementation, optimization and operation phase of SDS-based services. They provide important information to the system developer, but no direct measures of quality, as it would be perceived by the user of the respective service.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "The set of parameters has been evaluated in a pilot experiment carried out with an SDS for controlling domestic devices. The results show that the correlation between individual interaction parameters and subjective user judgments is indeed relatively low; highest correlations were in the area of 0.6, and for overall quality not higher than 0.42. Nevertheless, a combination of parameters can be used to predict overall quality or user satisfaction, based on a linear regression model defined by the PARADISE framework. Such models may capture about 45% of the variance in the subjective data, provided that the right -informative -parameters are selected as an input to the model. Still, this value is too low to replace subjective quality judgments by interaction parameters when the quality of SDS-based services is to be measured.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "The collected set of interaction parameters is considered by the ITU-T for a supplement to its P-Series Recommendations, to be approved in late 2005 (ITU-T Del. Contr. D.030, 2005) . However, further empirical validation is necessary in order to restrict the full set of available parameters to the ones which are relevant for quality. Such a restricted set of interaction parameters will form the basis for a new Recommendation P.PST which will be developed by ITU-T SG12 in the next 1-2 years. Contributions in this respect are invited by the ITU-T, see the roadmap on http://www.itu.int/ITU-T/studygroups/com12/q12roadmap/index.html. Fraser (1997) . dial. instr. STD system turn duration Average duration of a system turn, from the system starting speaking to the system stopping speaking, in [ms] . A turn is an utterance, i.e. a stretch of speech spoken by one party in the dialogue. (Fraser, 1997) utter. instr.",
"cite_spans": [
{
"start": 149,
"end": 180,
"text": "(ITU-T Del. Contr. D.030, 2005)",
"ref_id": null
},
{
"start": 637,
"end": 650,
"text": "Fraser (1997)",
"ref_id": "BIBREF7"
},
{
"start": 796,
"end": 800,
"text": "[ms]",
"ref_id": null
},
{
"start": 889,
"end": 903,
"text": "(Fraser, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Average duration of a user turn, from the user starting speaking to the user stopping speaking, in [ms] . (Fraser, 1997) utter. instr.",
"cite_spans": [
{
"start": 99,
"end": 103,
"text": "[ms]",
"ref_id": null
},
{
"start": 106,
"end": 120,
"text": "(Fraser, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "UTD user turn duration",
"sec_num": null
},
{
"text": "Average delay of a system response, from the user stopping speaking to the system starting speaking, in [ms] . (Fraser, 1997) utter.",
"cite_spans": [
{
"start": 104,
"end": 108,
"text": "[ms]",
"ref_id": null
},
{
"start": 111,
"end": 125,
"text": "(Fraser, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SRD system response delay",
"sec_num": null
},
{
"text": "instr.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SRD system response delay",
"sec_num": null
},
{
"text": "Average delay of a user response, from the system stopping speaking to the user starting speaking, in [ms] . (Fraser, 1997) utter. instr.",
"cite_spans": [
{
"start": 102,
"end": 106,
"text": "[ms]",
"ref_id": null
},
{
"start": 109,
"end": 123,
"text": "(Fraser, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "URD user response delay",
"sec_num": null
},
{
"text": "Overall number of turns uttered in a dialogue. (Walker et al., 1998) dial. instr./ expert. # system turns number of system turns Overall number of system turns uttered in a dialogue. (Walker et al., 1998) dial.",
"cite_spans": [
{
"start": 47,
"end": 68,
"text": "(Walker et al., 1998)",
"ref_id": "BIBREF29"
},
{
"start": 183,
"end": 204,
"text": "(Walker et al., 1998)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "# turns number of turns",
"sec_num": null
},
{
"text": "instr./ expert. # user turns number of user turns Overall number of user turns uttered in a dialogue. (Walker et al., 1998) dial. instr./ expert. WPST words per system turn Average number of words per system turn in a dialogue. (Cookson, 1988) utter. instr./ expert. WPUT words per user turn Average number of words per user turn in a dialogue. (Cookson, 1988) utter. instr./ expert. # system questions number of system questions Overall number of questions from the system per dialogue.",
"cite_spans": [
{
"start": 102,
"end": 123,
"text": "(Walker et al., 1998)",
"ref_id": "BIBREF29"
},
{
"start": 228,
"end": 243,
"text": "(Cookson, 1988)",
"ref_id": "BIBREF5"
},
{
"start": 345,
"end": 360,
"text": "(Cookson, 1988)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "# turns number of turns",
"sec_num": null
},
{
"text": "dial. expert.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# turns number of turns",
"sec_num": null
},
{
"text": "Overall number of questions from the user per dialogue. (Goodine et al., 1992; Polifroni et al., 1992) dial. expert.",
"cite_spans": [
{
"start": 56,
"end": 78,
"text": "(Goodine et al., 1992;",
"ref_id": "BIBREF11"
},
{
"start": 79,
"end": 102,
"text": "Polifroni et al., 1992)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "# user questions number of user questions",
"sec_num": null
},
{
"text": "Average number of new concepts (slots, see Section 3.4) introduced per user query. Being n d the number of dialogues, n q (i) the total number of user queries in the i th dialogue, and n u (i) the number of unique concepts correctly \"understood\" by the system in the i th dialogue, then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "QD query density",
"sec_num": null
},
{
"text": "\u2211 = = d n i q u d i n i n n QD 1 ) ( ) ( 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "QD query density",
"sec_num": null
},
{
"text": "A concept is not counted to n u (i) if the system already understood it in one of the previous utterances. (Glass et al., 2000) set of dial.",
"cite_spans": [
{
"start": 107,
"end": 127,
"text": "(Glass et al., 2000)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "QD query density",
"sec_num": null
},
{
"text": "concept efficiency Average number of turns which are necessary for each concept to be \"understood\" by the system. Being n d the number of dialogues, n u (i) the number of unique concepts correctly \"understood\" by the system in the i th dialogue, and n c (i) the total number of concepts in the i th dialogue, then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CE",
"sec_num": null
},
{
"text": "\u2211 = = d N i c u d i n i n n CE 1 ) ( ) ( 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CE",
"sec_num": null
},
{
"text": "A concept is counted whenever it was uttered by the user and was not already understood by the system. (Glass et al., 2000) set of dial.",
"cite_spans": [
{
"start": 103,
"end": 123,
"text": "(Glass et al., 2000)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CE",
"sec_num": null
},
{
"text": "expert. Overall number of user help requests in a dialogue. A user help request is labeled by the annotation expert if the user explicitly asks for help. This request may be formulated as a question (e.g. \"What are the available options?\") or as a statement (\"Give me the available options!\"). (Walker et al., 1998) utter. expert.",
"cite_spans": [
{
"start": 294,
"end": 315,
"text": "(Walker et al., 1998)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CE",
"sec_num": null
},
{
"text": "# system help number of diagnostic system help messages",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CE",
"sec_num": null
},
{
"text": "Overall number of help messages generated by the system in a dialogue. A help message is a system utterance which informs the user about available options at a certain point in the dialogue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CE",
"sec_num": null
},
{
"text": "utter. instr./ expert.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CE",
"sec_num": null
},
{
"text": "# time-out number of time-out prompts Overall number of time-out prompts, due to no response from the user, in a dialogue. (Walker et al., 1998) utter. instr.",
"cite_spans": [
{
"start": 123,
"end": 144,
"text": "(Walker et al., 1998)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CE",
"sec_num": null
},
{
"text": "Overall number of ASR rejections in a dialogue. An ASR rejection is defined as a system prompt indicating that the system was unable to \"hear\" or to \"understand\" the user, i.e. that the system was unable to extract any meaning from a user utterance. (Walker et al., 1998) utter. instr.",
"cite_spans": [
{
"start": 250,
"end": 271,
"text": "(Walker et al., 1998)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "# ASR rejection number of ASR rejections",
"sec_num": null
},
{
"text": "Overall number of diagnostic error messages from the system in a dialogue. A diagnostic error message is defined as a system utterance in which the system indicates that it is unable to perform a certain task or to provide a certain information. (Price et al., 1992) utter. instr./ expert.",
"cite_spans": [
{
"start": 246,
"end": 266,
"text": "(Price et al., 1992)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "# system error number of diagnostic system error messages",
"sec_num": null
},
{
"text": "# barge-in number of user barge-in attempts Overall number of user barge-in attempts in a dialogue. A user barge-in attempt is counted when the user intentionally addresses the system while the system is still speaking. In this definition, user utterances which are not intended to influence the course of the dialogue (laughing, expressions of anger or politeness) are not counted as barge-ins. (Walker et al., 1998) utter.",
"cite_spans": [
{
"start": 396,
"end": 417,
"text": "(Walker et al., 1998)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "# system error number of diagnostic system error messages",
"sec_num": null
},
{
"text": "expert.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# system error number of diagnostic system error messages",
"sec_num": null
},
{
"text": "Overall number of user cancel attempts in a dialogue. A user turn is classified as a cancel attempt if the user tries to restart the dialogue from the beginning, or if he/she explicitly wants to step one or several levels backwards in the dialogue hierarchy. (Kamm et al., 1998; San-Segundo et al., 2001) utter. expert.",
"cite_spans": [
{
"start": 259,
"end": 278,
"text": "(Kamm et al., 1998;",
"ref_id": "BIBREF17"
},
{
"start": 279,
"end": 304,
"text": "San-Segundo et al., 2001)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "# cancel number of user cancel attempts",
"sec_num": null
},
{
"text": "SCT, SCR number of system correction turns, system correction rate Overall number (SCT) or percentage (SCR) of all system turns in a dialogue which are primarily concerned with rectifying a \"trouble\", thus not contributing new propositional content and interrupting the dialogue flow. A \"trouble\" may be caused by speech recognition or understanding errors, or by illogical, contradictory, or undefined user utterances. In case that the user does not give an answer to a system question, the corresponding system answer is labeled as a system correction turn, except when the user asks for an information or action which is not supported by the current system functionality. (Simpson and Fraser, 1993; Gerbino et al., 1993) utter.",
"cite_spans": [
{
"start": 675,
"end": 701,
"text": "(Simpson and Fraser, 1993;",
"ref_id": "BIBREF24"
},
{
"start": 702,
"end": 723,
"text": "Gerbino et al., 1993)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "# cancel number of user cancel attempts",
"sec_num": null
},
{
"text": "expert.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# cancel number of user cancel attempts",
"sec_num": null
},
{
"text": "UCT, UCR number of user correction turns, user correction rate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "# cancel number of user cancel attempts",
"sec_num": null
},
{
"text": "Overall number (UCT) or percentage (UCR) of all user turns in a dialogue which are primarily concerned with rectifying a \"trouble\", thus not contributing new propositional content and interrupting the dialogue flow (see SCT, SCR). (Simpson and Fraser, 1993; Gerbino et al., 1993) utter. expert.",
"cite_spans": [
{
"start": 231,
"end": 257,
"text": "(Simpson and Fraser, 1993;",
"ref_id": "BIBREF24"
},
{
"start": 258,
"end": 279,
"text": "Gerbino et al., 1993)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "# cancel number of user cancel attempts",
"sec_num": null
},
{
"text": "implicit recovery Capacity of the system to recover from user utterances for which the speech recognition or understanding process partly failed. Determined by labeling the partially parsed utterances (see definition of PA:PA in Section 3.5) as to whether the system response was \"appropriate\" or not:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "PA PA IR :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "answer system e appropriat with utterances # = For the definition of \"appropriateness\" see Grice (1975) and Bernsen et al. (1998) . (Danieli and Gerbino, 1995) utter.",
"cite_spans": [
{
"start": 91,
"end": 103,
"text": "Grice (1975)",
"ref_id": "BIBREF12"
},
{
"start": 108,
"end": 129,
"text": "Bernsen et al. (1998)",
"ref_id": "BIBREF0"
},
{
"start": 132,
"end": 159,
"text": "(Danieli and Gerbino, 1995)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "expert. Overall number or percentage of system utterances which are judged to be appropriate in their immediate dialogue context. Determined by labeling utterances according to whether they violate one or more of Grice's maxims for cooperativity: CA:AP: Appropriate, not violating Grice's maxims, not unexpectedly conspicuous or marked in some way. CA:IA: Inappropriate, violating one or more of Grice's maxims. CA:TF: Total failure, no linguistic response. CA:IC: Incomprehensible, content cannot be discerned by the annotation expert. For more details see Simpson and Fraser (1993) and Gerbino et al. (1993) ; the classification is similar to the one adopted in Hirschman and Pao (1993) . utter.",
"cite_spans": [
{
"start": 558,
"end": 583,
"text": "Simpson and Fraser (1993)",
"ref_id": "BIBREF24"
},
{
"start": 588,
"end": 609,
"text": "Gerbino et al. (1993)",
"ref_id": "BIBREF9"
},
{
"start": 664,
"end": 688,
"text": "Hirschman and Pao (1993)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "expert. Meas. meth. TS task success Label of task success according to whether the user has reached his/her goal by the end of a dialogue, provided that this goal could be reached with the help of the system. The labels indicate whether the goal was reached or not, and the assumed source of problems: S: Succeeded (task for which solutions exist) SCs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "Succeeded with constraint relaxation by the system SCu:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "Succeeded with constraint relaxation by the user SCsCu: Succeeded with constraint relaxation both from the system and from the user SN:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "Succeeded in spotting that no solution exists Fs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "Failed because of the system's behavior, due to system adequacies Fu:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "Failed because of the user's behavior, due to noncooperative user behavior See also Fraser (1997) , Simpson and Fraser (1993) and Danieli and Gerbino (1995) . ",
"cite_spans": [
{
"start": 84,
"end": 97,
"text": "Fraser (1997)",
"ref_id": "BIBREF7"
},
{
"start": 100,
"end": 125,
"text": "Simpson and Fraser (1993)",
"ref_id": "BIBREF24"
},
{
"start": 130,
"end": 156,
"text": "Danieli and Gerbino (1995)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "\u2211 = = n i i T t E P 1 2 ) ( ) (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": ". (Carletta, 1996; Walker et al., 1997) dial. or set of dial.",
"cite_spans": [
{
"start": 2,
"end": 18,
"text": "(Carletta, 1996;",
"ref_id": "BIBREF3"
},
{
"start": 19,
"end": 39,
"text": "Walker et al., 1997)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "expert. Percentage of words which have been correctly recognized, based on the orthographic form of the hypothesized and the (transcribed) reference utterance, and an alignment carried out with the help of the \"sclite\" algorithm, see NIST (2001) . Designating n w the overall number of words from all user utterances of a dialogue, and s w , d w and i w the number of substituted, deleted and inserted words, respectively, then the word error rate and word accuracy can be determined as follows: See Simpson and Fraser (1993) ; details on how these parameters can be calculated in case of isolated word recognition are given in van Leeuwen and Steeneken (1997) . (Simpson and Fraser, 1993) utter.",
"cite_spans": [
{
"start": 234,
"end": 245,
"text": "NIST (2001)",
"ref_id": null
},
{
"start": 500,
"end": 525,
"text": "Simpson and Fraser (1993)",
"ref_id": "BIBREF24"
},
{
"start": 632,
"end": 660,
"text": "Leeuwen and Steeneken (1997)",
"ref_id": "BIBREF28"
},
{
"start": 663,
"end": 689,
"text": "(Simpson and Fraser, 1993)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "instr./ expert. (Strik et al., 2001) utter.",
"cite_spans": [
{
"start": 16,
"end": 36,
"text": "(Strik et al., 2001)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IR",
"sec_num": null
},
{
"text": "instr./ expert. not at all (AN:FA) answered by the system, per dialogue, see Polifroni et al. (1992) , Goodine et al. (1992) and Hirschman and Pao (1993 (Polifroni et al., 1992; Goodine et al., 1992; Skowronek, 2002) utter.",
"cite_spans": [
{
"start": 77,
"end": 100,
"text": "Polifroni et al. (1992)",
"ref_id": "BIBREF21"
},
{
"start": 103,
"end": 124,
"text": "Goodine et al. (1992)",
"ref_id": "BIBREF11"
},
{
"start": 129,
"end": 152,
"text": "Hirschman and Pao (1993",
"ref_id": "BIBREF13"
},
{
"start": 153,
"end": 177,
"text": "(Polifroni et al., 1992;",
"ref_id": "BIBREF21"
},
{
"start": 178,
"end": 199,
"text": "Goodine et al., 1992;",
"ref_id": "BIBREF11"
},
{
"start": 200,
"end": 216,
"text": "Skowronek, 2002)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NES",
"sec_num": null
},
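{
"text": "A minimal sketch of turning per-question answer annotations into the AN percentages above; the six example labels are hypothetical:\n\nfrom collections import Counter\n\nAN_CATEGORIES = ('AN:CO', 'AN:PA', 'AN:IC', 'AN:FA')\n\ndef answer_percentages(labels):\n    # Percentage of questions answered correctly (AN:CO), partially correctly\n    # (AN:PA), incorrectly (AN:IC) or not at all (AN:FA) in one dialogue.\n    counts = Counter(labels)\n    return {cat: 100.0 * counts[cat] / len(labels) for cat in AN_CATEGORIES}\n\n# Hypothetical annotation of six user questions:\nprint(answer_percentages(['AN:CO', 'AN:CO', 'AN:PA', 'AN:FA', 'AN:CO', 'AN:IC']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NES",
"sec_num": null
},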
{
"text": "expert.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WES",
"sec_num": null
},
{
"text": "PA:CO, PA:PA, PA:IC, %PA:CO, %PA:PA, %PA:IC number of correctly/ partially correctly/ incorrectly parsed user utterances Evaluation of the number of concepts (attribute-value pairs, AVPs) in an utterance which have been extracted by the system: PA:CO: All concepts of a user utterance have been correctly understood by the system. PA:PA: Not all but at least one concept of a user utterance has been correctly understood by the system. PA:IC: No concept of a user utterance has been correctly understood by the system. Expressed as the overall number or percentage of user utterances in a dialogue which have been parsed correctly/ partially correctly/ incorrectly. (Danieli and Gerbino, 1995) utter.",
"cite_spans": [
{
"start": 666,
"end": 693,
"text": "(Danieli and Gerbino, 1995)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WES",
"sec_num": null
},
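{
"text": "The three-way parsing classification above can be expressed directly over sets of attribute-value pairs; a minimal sketch in which the reference and extracted AVP sets are hypothetical:\n\ndef classify_parse(reference_avps, extracted_avps):\n    # PA:CO if all reference AVPs were correctly understood, PA:PA if at\n    # least one but not all, PA:IC if none.\n    correct = reference_avps & extracted_avps\n    if correct == reference_avps:\n        return 'PA:CO'\n    if correct:\n        return 'PA:PA'\n    return 'PA:IC'\n\nref = {('date', 'tomorrow'), ('dest', 'Bochum')}\nprint(classify_parse(ref, {('date', 'tomorrow'), ('dest', 'Bochum')}))  # PA:CO\nprint(classify_parse(ref, {('date', 'tomorrow')}))                      # PA:PA\nprint(classify_parse(ref, {('dest', 'Berlin')}))                        # PA:IC",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WES",
"sec_num": null
},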
{
"text": "expert. (Gerbino et al., 1993; Simpson and Fraser, 1993; Boros et al., 1996 ; Billi et al., 1996) utter.",
"cite_spans": [
{
"start": 8,
"end": 30,
"text": "(Gerbino et al., 1993;",
"ref_id": "BIBREF9"
},
{
"start": 31,
"end": 56,
"text": "Simpson and Fraser, 1993;",
"ref_id": "BIBREF24"
},
{
"start": 57,
"end": 77,
"text": "Boros et al., 1996 ;",
"ref_id": "BIBREF2"
},
{
"start": 78,
"end": 97,
"text": "Billi et al., 1996)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WES",
"sec_num": null
},
{
"text": "expert.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CA, CER",
"sec_num": null
},
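{
"text": "Concept accuracy and concept error rate, as defined in the corresponding figure caption, again reduce to counting; a minimal sketch with hypothetical AVP counts:\n\ndef concept_error_rate(n_avp, s_avp, i_avp, d_avp):\n    # CER = (s_AVP + i_AVP + d_AVP) / n_AVP, n_AVP being the total number of AVPs.\n    return (s_avp + i_avp + d_avp) / n_avp\n\ndef concept_accuracy(n_avp, s_avp, i_avp, d_avp):\n    # CA = (n_AVP - s_AVP - i_AVP - d_AVP) / n_AVP = 1 - CER.\n    return 1.0 - concept_error_rate(n_avp, s_avp, i_avp, d_avp)\n\n# Hypothetical dialogue: 40 reference AVPs, 2 substituted, 1 inserted, 1 deleted.\nprint(concept_accuracy(40, 2, 1, 1))  # 0.9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CA, CER",
"sec_num": null
},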
{
"text": "Percentage of user utterances in which all semantic units (AVPs) have been correctly extracted: turns user CO PA UA # : = (Zue et al., 2000) utter.",
"cite_spans": [
{
"start": 122,
"end": 140,
"text": "(Zue et al., 2000)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "UA understanding accuracy",
"sec_num": null
},
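{
"text": "Understanding accuracy follows directly from the PA labels: UA is the share of user turns labeled PA:CO. A minimal sketch with hypothetical per-turn labels:\n\ndef understanding_accuracy(pa_labels):\n    # UA = #PA:CO / #user turns, per dialogue.\n    return pa_labels.count('PA:CO') / len(pa_labels)\n\n# Hypothetical per-turn parse labels for one dialogue:\nprint(understanding_accuracy(['PA:CO', 'PA:PA', 'PA:CO', 'PA:IC']))  # 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UA understanding accuracy",
"sec_num": null
},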
{
"text": "expert.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UA understanding accuracy",
"sec_num": null
}
],
"back_matter": [
{
"text": "The present work has been performed at IKA, Ruhr-University Bochum, in the context of the EC-funded IST-project INSPIRE (IST-2001-32746), see http://www.knowledge-speech.gr/inspire-project. Partners of INSPIRE were: Knowledge S.A., Patras, and WCL, University of Patras, both Greece; IKA, Ruhr-University Bochum, and ABS Jena, both Germany; TNO Human Factors, Soesterberg and Philips Electronics Nederland B.V., Eindhoven, both The Netherlands; and EPFL, Lausanne, Switzerland. The author would like to thank Rosa Pegam for acting as a Wizard and for reviewing the manuscript; Noha El Mehelmi and J\u00f6rn Opretzka for annotating the dialogues; as well as all other INSPIRE partners for their support in the experiments and for fruitful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Designing interactive speech systems: From first ideas to user testing",
"authors": [
{
"first": "N",
"middle": [
"O"
],
"last": "Bernsen",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Dybkjaer",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Dybkjaer",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernsen, N.O., Dybkjaer, H., Dybkjaer, L. (1998). De- signing interactive speech systems: From first ideas to user testing. Springer, Berlin.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Field trial evaluations of two different information inquiry systems",
"authors": [
{
"first": "R",
"middle": [],
"last": "Billi",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Castagneri",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Danieli",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. 3 rd IEEE Workshop on Interactive Voice Technology for Telecommunications Applications (IVTTA'96)",
"volume": "",
"issue": "",
"pages": "129--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Billi, R., Castagneri, G., Danieli, M. (1996). Field trial evaluations of two different information inquiry sys- tems. In: Proc. 3 rd IEEE Workshop on Interactive Voice Technology for Telecommunications Applica- tions (IVTTA'96), Basking Ridge NJ, 129-134.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Towards understanding spontaneous speech: Word accuracy vs. concept accuracy",
"authors": [
{
"first": "M",
"middle": [],
"last": "Boros",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Eckert",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Gallwitz",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Gorz",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hanrieder",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Niemann",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. 4 th Int. Conf. on Spoken Language Processing (ICSLP'96)",
"volume": "2",
"issue": "",
"pages": "1009--1012",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boros, M., Eckert, W., Gallwitz, F., Gorz, G., Hanrieder, G., Niemann, H. (1996). Towards under- standing spontaneous speech: Word accuracy vs. concept accuracy. In: Proc. 4 th Int. Conf. on Spoken Language Processing (ICSLP'96), IEEE, Piscataway NJ, 2, 1009-1012.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Assessing agreement of classification tasks: The kappa statistics",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carletta",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "2",
"pages": "249--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carletta, J. (1996). Assessing agreement of classifica- tion tasks: The kappa statistics, Computational Lin- guistics, 22(2), 249-254.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An objective measure for estimating MOS of synthesized speech",
"authors": [
{
"first": "M",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. 7 th Europ. Conf. on Speech Communication and Technology",
"volume": "3",
"issue": "",
"pages": "2087--2090",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chu, M., Peng, H. (2001). An objective measure for estimating MOS of synthesized speech. In: Proc. 7 th Europ. Conf. on Speech Communication and Tech- nology (Eurospeech 2001 -Scandinavia), Aalborg, 3, 2087-2090.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Final evaluation of VODIS -Voice operated data inquiry system",
"authors": [
{
"first": "S",
"middle": [],
"last": "Cookson",
"suffix": ""
}
],
"year": 1988,
"venue": "Proc. of Speech'88, 7 th FASE Symposium",
"volume": "4",
"issue": "",
"pages": "1311--1320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cookson, S. (1988). Final evaluation of VODIS -Voice operated data inquiry system. In: Proc. of Speech'88, 7 th FASE Symposium, Edinburgh, 4, 1311-1320.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Metrics for evaluating dialogue strategies in a spoken language system",
"authors": [
{
"first": "M",
"middle": [],
"last": "Danieli",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gerbino",
"suffix": ""
}
],
"year": 1995,
"venue": "Empirical Methods in Discourse Interpretation and Generation. Papers from the 1995 AAAI Symposium",
"volume": "",
"issue": "",
"pages": "34--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danieli, M., Gerbino, E. (1995). Metrics for evaluating dialogue strategies in a spoken language system. In: Empirical Methods in Discourse Interpretation and Generation. Papers from the 1995 AAAI Sympo- sium, US-Stanford CA, AAAI Press, Menlo Park CA, 34-39.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Assessment of interactive systems",
"authors": [
{
"first": "N",
"middle": [
"D"
],
"last": "Fraser",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gibbon",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 1997,
"venue": "Handbook on Standards and Resources for Spoken Language Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fraser, N. (1997). Assessment of interactive systems. In: Handbook on Standards and Resources for Spoken Language Systems (D. Gibbon, R. Moore and R.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mouton de Gruyter",
"authors": [
{
"first": "",
"middle": [],
"last": "Winski",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "564--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Winski, eds.), Mouton de Gruyter, Berlin, 564-615.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Test and evaluation of a spoken dialogue system",
"authors": [
{
"first": "E",
"middle": [],
"last": "Gerbino",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Baggia",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ciaramella",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Rullent",
"suffix": ""
}
],
"year": 1993,
"venue": "Proc. Int. Conf. Acoustics Speech and Signal Processing (ICASSP'93)",
"volume": "2",
"issue": "",
"pages": "135--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerbino, E., Baggia, P., Ciaramella, A., Rullent, C. (1993). Test and evaluation of a spoken dialogue sys- tem. In: Proc. Int. Conf. Acoustics Speech and Signal Processing (ICASSP'93), 2, 135-138.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Data collection and performance evaluation of spoken dialogue systems: The MIT experience",
"authors": [
{
"first": "J",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Polifroni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Zue",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. 6 th Int. Conf. on Spoken Language Processing",
"volume": "4",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glass, J., Polifroni, J., Seneff, S., Zue, V. (2000). Data collection and performance evaluation of spoken dia- logue systems: The MIT experience. In: Proc. 6 th Int. Conf. on Spoken Language Processing (ICSLP 2000), Beijing, 4, 1-4.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Evaluating interactive spoken language systems",
"authors": [
{
"first": "D",
"middle": [],
"last": "Goodine",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Polifroni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Zue",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. 2 nd Int. Conf. on Spoken Language Processing (ICSLP'92)",
"volume": "",
"issue": "",
"pages": "201--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goodine, D., Hirschman, L., Polifroni, J., Seneff, S., Zue, V. (1992). Evaluating interactive spoken lan- guage systems. In: Proc. 2 nd Int. Conf. on Spoken Language Processing (ICSLP'92), Banff, 1, 201-204.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Logic and conversation",
"authors": [
{
"first": "H",
"middle": [
"P"
],
"last": "Grice",
"suffix": ""
}
],
"year": 1975,
"venue": "Syntax and Semantics",
"volume": "3",
"issue": "",
"pages": "41--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grice, H.P. (1975). Logic and conversation. In: Syntax and Semantics, Vol. 3: Speech Acts (P. Cole and J.L. Morgan, eds.), Academic Press, New York NY, 41- 58.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The cost of errors in a spoken language system",
"authors": [
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Pao",
"suffix": ""
}
],
"year": 1993,
"venue": "Proc. 3 rd Europ. Conf. on Speech Communication and Technology (Eurospeech'93)",
"volume": "",
"issue": "",
"pages": "1419--1422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hirschman, L., Pao, C. (1993). The cost of errors in a spoken language system. In: Proc. 3 rd Europ. Conf. on Speech Communication and Technology (Eu- rospeech'93), Berlin, 2, 1419-1422.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Sound system equipment -Part 16: Objective rating of speech intelligibility by speech transmission index",
"authors": [],
"year": 1998,
"venue": "IEC Standard",
"volume": "",
"issue": "",
"pages": "60268--60284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IEC Standard 60268-16 (1998). Sound system equip- ment -Part 16: Objective rating of speech intelligibil- ity by speech transmission index. International Electrotechnical Commission, Geneva.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Proposal for Parameters Describing the Performance of Speech Technology Devices",
"authors": [],
"year": 2005,
"venue": "Federal Republic of Germany (Author: S. M\u00f6ller), ITU-T SG12 Meeting",
"volume": "",
"issue": "",
"pages": "18--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ITU-T Delayed Contribution D.030 (2005). Proposal for Parameters Describing the Performance of Speech Technology Devices, Federal Republic of Germany (Author: S. M\u00f6ller), ITU-T SG12 Meeting, 18-27 January 2005, CH-Geneva.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Subjective quality evaluation of telephone services based on spoken dialogue systems. International Telecomm",
"authors": [
{
"first": "",
"middle": [],
"last": "Itu-T Rec",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ITU-T Rec. P.851 (2003). Subjective quality evaluation of telephone services based on spoken dialogue sys- tems. International Telecomm. Union, Geneva.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "From novice to expert: The effect of tutorials on user expertise with spoken dialogue systems",
"authors": [
{
"first": "C",
"middle": [
"A"
],
"last": "Kamm",
"suffix": ""
},
{
"first": "D",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. 5 th Int. Conf. on Spoken Language Processing (ICSLP'98)",
"volume": "",
"issue": "",
"pages": "1211--1214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kamm, C.A., Litman, D.J., Walker, M.A. (1998). From novice to expert: The effect of tutorials on user ex- pertise with spoken dialogue systems. In: Proc. 5 th Int. Conf. on Spoken Language Processing (ICSLP'98), Sydney, 4, 1211-1214.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Quality of telephone-based spoken dialogue systems",
"authors": [
{
"first": "S",
"middle": [],
"last": "M\u00f6ller",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u00f6ller, S. (2005). Quality of telephone-based spoken dialogue systems. Springer, New York NY.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Evaluating spoken dialogue systems according to defacto standards: A case study",
"authors": [
{
"first": "S",
"middle": [],
"last": "M\u00f6ller",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Smeele",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Boland",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Krebber",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u00f6ller, S., Smeele, P., Boland, H., Krebber, J. (2005). Evaluating spoken dialogue systems according to de- facto standards: A case study, submitted to Computer Speech and Language.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Speech recognition scoring toolkit",
"authors": [],
"year": 2001,
"venue": "NIST Speech Recognition Scoring Toolkit",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NIST Speech Recognition Scoring Toolkit (2001). Speech recognition scoring toolkit. National Institute of Standards and technology, http://www.nist.gov/speech/tools, Gaithersburg MD.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Experiments in evaluating interactive spoken language systems",
"authors": [
{
"first": "J",
"middle": [],
"last": "Polifroni",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Zue",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "28--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Polifroni, J., Hirschman, L., Seneff, S., Zue, V. (1992). Experiments in evaluating interactive spoken lan- guage systems. In: Proc. DARPA Speech and Natural Language Workshop, Harriman CA, 28-33.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Subject-based evaluation measures for interactive spoken language systems",
"authors": [
{
"first": "P",
"middle": [
"J"
],
"last": "Price",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Wade",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "34--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Price, P.J., Hirschman, L., Shriberg, E., Wade, E. (1992). Subject-based evaluation measures for inter- active spoken language systems. In: Proc. DARPA Speech and Natural Language Workshop, Harriman CA, 34-39.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Methodology for dialogue design in telephone-based spoken dialogue systems: A Spanish train information system",
"authors": [
{
"first": "R",
"middle": [],
"last": "San-Segundo",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Montero",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Col\u00e1s",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Guti\u00e9rrez",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Ramos",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Pardo",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. 7 th Europ. Conf. on Speech Communication and Technology",
"volume": "3",
"issue": "",
"pages": "2165--2168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "San-Segundo, R., Montero, J.M., Col\u00e1s, J., Guti\u00e9rrez, J., Ramos, J.M., Pardo, J.M. (2001). Methodology for dialogue design in telephone-based spoken dialogue systems: A Spanish train information system. In: Proc. 7 th Europ. Conf. on Speech Communication and Technology (Eurospeech 2001 -Scandinavia), Aalborg, 3, 2165-2168.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Black box and glass box evaluation of the SUNDIAL system",
"authors": [
{
"first": "A",
"middle": [],
"last": "Simpson",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Fraser",
"suffix": ""
}
],
"year": 1993,
"venue": "Proc. 3 rd Europ. Conf. on Speech Communication and Technology (Eurospeech'93)",
"volume": "",
"issue": "",
"pages": "1423--1426",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simpson, A., Fraser, N.M. (1993). Black box and glass box evaluation of the SUNDIAL system. In: Proc. 3 rd Europ. Conf. on Speech Communication and Tech- nology (Eurospeech'93), Berlin, 2, 1423-1426.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Entwicklung von Modellierungsans\u00e4tzen zur Vorhersage der Dienstequalit\u00e4t bei der Interaktion mit einem nat\u00fcrlichsprachlichen Dialogsystem. Diploma thesis (unpublished)",
"authors": [
{
"first": "J",
"middle": [],
"last": "Skowronek",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Skowronek, J. (2002). Entwicklung von Modellierungsans\u00e4tzen zur Vorhersage der Dienstequalit\u00e4t bei der Interaktion mit einem nat\u00fcrlichsprachlichen Dialogsystem. Diploma thesis (unpublished), Institut f\u00fcr Kommunikationsakustik, Ruhr-Universit\u00e4t, Bochum.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Comparing the performance of two CSRs: How to determine the significance level of the differences",
"authors": [
{
"first": "H",
"middle": [],
"last": "Strik",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cucchiarini",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Kessens",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. 7 th Europ. Conf. on Speech Communication and Technology",
"volume": "3",
"issue": "",
"pages": "2091--2094",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strik, H., Cucchiarini, C., Kessens, J.M. (2001). Com- paring the performance of two CSRs: How to deter- mine the significance level of the differences. In: Proc. 7 th Europ. Conf. on Speech Communication and Technology (Eurospeech 2001 -Scandinavia), Aalborg, 3, 2091-2094.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Comparing the recognition performance of CSRs: In search of an adequate metric and statistical significance test",
"authors": [
{
"first": "H",
"middle": [],
"last": "Strik",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cucchiarini",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Kessens",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. 6 th Int. Conf. on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "740--743",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strik, H., Cucchiarini, C., Kessens, J.M. (2000). Com- paring the recognition performance of CSRs: In search of an adequate metric and statistical signifi- cance test. In: Proc. 6 th Int. Conf. on Spoken Lan- guage Processing (ICSLP 2000), Beijing, 4, 740-743.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Assessment of recognition systems",
"authors": [
{
"first": "D",
"middle": [],
"last": "Van Leeuwen",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Steeneken",
"suffix": ""
}
],
"year": 1997,
"venue": "Handbook on Standards and Resources for Spoken Language Systems",
"volume": "",
"issue": "",
"pages": "381--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "van Leeuwen, D., Steeneken, H. (1997). Assessment of recognition systems. In: Handbook on Standards and Resources for Spoken Language Systems (D. Gib- bon, R. Moore and R. Winski, eds.), Mouton de Gruyter, Berlin, 381-407.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Evaluating spoken dialogue agents with PARADISE: Two case studies",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
},
{
"first": "D",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
},
{
"first": "C",
"middle": [
"A"
],
"last": "Kamm",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Abella",
"suffix": ""
}
],
"year": 1998,
"venue": "Computer Speech and Language",
"volume": "12",
"issue": "3",
"pages": "317--347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Walker, M.A., Litman, D.J., Kamm, C.A., Abella, A. (1998). Evaluating spoken dialogue agents with PARADISE: Two case studies, Computer Speech and Language, 12(3), 317-347.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "PARADISE: A framework for evaluating spoken dialogue agents",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
},
{
"first": "D",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
},
{
"first": "C",
"middle": [
"A"
],
"last": "Kamm",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Abella",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of the 35 th Ann. Meeting of the Assoc. for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "271--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Walker, M.A., Litman, D.J., Kamm, C.A., Abella, A. (1997). PARADISE: A framework for evaluating spo- ken dialogue agents. In: Proc. of the 35 th Ann. Meet- ing of the Assoc. for Computational Linguistics, Madrid, 271-280.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "JUPITER: A telephone-based conversational interface for weather information",
"authors": [
{
"first": "V",
"middle": [],
"last": "Zue",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
},
{
"first": "J",
"middle": [
"R"
],
"last": "Glass",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Polifroni",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Pao",
"suffix": ""
},
{
"first": "T",
"middle": [
"J"
],
"last": "Hazen",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hetherington",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Trans. Speech and Audio Processing",
"volume": "8",
"issue": "1",
"pages": "85--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zue, V., Seneff, S., Glass, J.R., Polifroni, J., Pao, C., Hazen, T.J., Hetherington, L. (2000). JUPITER: A telephone-based conversational interface for weather information. IEEE Trans. Speech and Audio Process- ing, 8(1), 85-96.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "completion according to the kappa statistics. Determined on the basis of the correctness of the result AVM reached at the end of a dialogue with respect to the scenario (key) AVM. A confusion matrix M(i,j) is set up for the attributes in the result and in the key, with T the number of counts in M, and t i the sum of counts in column i of M. A) the proportion of times that the AVM of the actual dialogue and the key agree, E) can be estimated from the proportion of times that they are expected to agree by chance,",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "sentences which have been correctly identified. Denoting n s the total number of sentences, and s s , i s and d s the number of substituted, inserted and deleted sentences, respectively, then:",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "number of errors per sentence Average number of recognition errors in a sentence. Being s w (k), i w (k) and d w (k) the number of substituted, inserted and deleted words in sentence k,",
"num": null,
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"text": "word error per sentence Related to NES, but normalized to the number of words in sentence k, w(k):",
"num": null,
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"text": "Percentage of correctly understood semantic units, per dialogue. Concepts are defined as attribute-value pairs (AVPs), with n AVP the total number of AVPs, and s AVP , i AVP and d AVP the number of substituted, inserted and deleted AVPs. The concept accuracy and the concept error rate can then be determined as follows:",
"num": null,
"uris": null
},
"TABREF0": {
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"3\">Input parameters Target variable</td><td colspan=\"2\">Prediction result</td></tr><tr><td># par.</td><td>Task</td><td/><td>R 2 corr</td><td># par.</td></tr><tr><td/><td>success</td><td/><td/><td/></tr><tr><td>64</td><td>TSe</td><td>Overall quality</td><td>0.247</td><td>2</td></tr><tr><td>64</td><td>TSe</td><td>User satisfaction</td><td>0.409</td><td>4</td></tr><tr><td>64</td><td>TSe</td><td>Mean of all judgm.</td><td>0.420</td><td>4</td></tr><tr><td>64</td><td>TSu</td><td>Overall quality</td><td>0.409</td><td>3</td></tr><tr><td>64</td><td>TSu</td><td>User satisfaction</td><td>0.409</td><td>4</td></tr><tr><td>64</td><td>TSu</td><td>Mean of all judgm.</td><td>0.459</td><td>3</td></tr><tr><td>5</td><td>TSe</td><td>Overall quality</td><td>0.091</td><td>5</td></tr><tr><td>5</td><td>TSe</td><td>User satisfaction</td><td>0.022</td><td>5</td></tr><tr><td>5</td><td>TSe</td><td>Mean of all judgm.</td><td>0.133</td><td>5</td></tr><tr><td>5</td><td>TSu</td><td>Overall quality</td><td>0.310</td><td>5</td></tr><tr><td>5</td><td>TSu</td><td>User satisfaction</td><td>0.086</td><td>5</td></tr><tr><td>5</td><td>TSu</td><td>Mean of all judgm.</td><td>0.305</td><td>5</td></tr></table>",
"type_str": "table",
"text": "Regression models."
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table><tr><td>Abbr.</td><td>Name</td><td>Definition</td></tr></table>",
"type_str": "table",
"text": "Dialogue-and communication-related interaction parameters."
},
"TABREF2": {
"num": null,
"html": null,
"content": "<table><tr><td>Abbr.</td><td>Name</td><td>Definition</td></tr></table>",
"type_str": "table",
"text": "Meta-communication-related interaction parameters."
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table><tr><td>Abbr.</td><td>Name</td><td>Definition</td></tr></table>",
"type_str": "table",
"text": "Cooperativity-related interaction parameters."
},
"TABREF4": {
"num": null,
"html": null,
"content": "<table><tr><td>Abbr.</td><td>Name</td><td>Definition</td></tr></table>",
"type_str": "table",
"text": "Task-related interaction parameters."
},
"TABREF5": {
"num": null,
"html": null,
"content": "<table><tr><td>Abbr.</td><td>Name</td><td>Definition</td></tr></table>",
"type_str": "table",
"text": "Speech-input-related interaction parameters."
},
"TABREF6": {
"num": null,
"html": null,
"content": "<table><tr><td>Abbr.</td><td>Name</td><td>Definition</td><td/><td/><td/><td/><td/><td/><td/><td/><td>Int.</td><td>Meas.</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>level</td><td>meth.</td></tr><tr><td>DARPA s ,</td><td>DARPA</td><td>score,</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>DARPA me</td><td>DARPA</td><td>modified</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>error</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td colspan=\"3\">DARPA s</td><td colspan=\"2\">=</td><td colspan=\"4\">questions IC AN CO user AN # : : \u2212</td></tr><tr><td/><td/><td>DARPA me</td><td>=</td><td>AN</td><td>:</td><td colspan=\"2\">FA</td><td># +</td><td>questions IC AN user : ( 2 + \u22c5</td><td>AN</td><td>:</td><td>PA</td><td>)</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>utter.</td><td>expert.</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>).</td></tr></table>",
"type_str": "table",
"text": "Measures according to the DARPA speech understanding initiative, modified bySkowronek (2002) to account for partially correct answers:"
}
}
}
}