|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:27:17.475320Z" |
|
}, |
|
"title": "Beyond Adjacency Pairs: Hierarchical Clustering of Long Sequences for Human-Machine Dialogues", |
|
"authors": [ |
|
{ |
|
"first": "Maitreyee", |
|
"middle": [], |
|
"last": "Tewari", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Ume\u00e5 University Ume\u00e5", |
|
"location": { |
|
"country": "Sweden" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This work proposes a framework to predict sequences in dialogues, using turn based syntactic features and dialogue control functions. Syntactic features were extracted using dependency parsing, while dialogue control functions were manually labelled. These features were transformed using tf-idf and word embedding; feature selection was done using Principal Component Analysis (PCA). We ran experiments on six combinations of features to predict sequences with Hierarchical Agglomerative Clustering. An analysis of the clustering results indicate that using word-embeddings and syntactic features, significantly improved the results.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This work proposes a framework to predict sequences in dialogues, using turn based syntactic features and dialogue control functions. Syntactic features were extracted using dependency parsing, while dialogue control functions were manually labelled. These features were transformed using tf-idf and word embedding; feature selection was done using Principal Component Analysis (PCA). We ran experiments on six combinations of features to predict sequences with Hierarchical Agglomerative Clustering. An analysis of the clustering results indicate that using word-embeddings and syntactic features, significantly improved the results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Dialogues between humans is not a solitary activity of words, rather the involved participants have certain desires/goals that they want to achieve. In order to do that, they co-create understanding, by aligning aspects of their believes/knowledge to achieve their goals and reach a consensus using dialogue control functions. Dialogues between humans and machines can be facilitated by dialogue management systems (DMS). A basic DMS operates by coordinating natural language understanding (NLU), natural language generation (NLG) and a dialogue manager (DM). A DM employs either learned or hand-crafted strategies to the output from the NLU and sends its decisions to NLG that carries forward the interaction with the human participant.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A DM's flexibility can be partially attributed to the incoming knowledge from the NLU. By DM's flexibility we mean to have functions for anaphora resolution, co-referencing, keeping track of topic shifts and being able to return to previous topics (McTear et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 269, |
|
"text": "(McTear et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The motivation behind this work is to explore sequences (Nicholas et al., 2016) in dialogues that can improve the NLU's knowledge. A well explored dialogue sequencing method (Palomar and Patricio, 2000; Boyer et al., 2009) with-in conversation analysis (CA) are studied as adjacency pairs (Schegloff and Sacks, 1973 ) such as (Question-Answer, Request-Accept, Greeting-Greeting etc.) , where the first one in the pair is called first pair part (F P P base ) and the second one is called second pair part (SP P base ). For exploring long sequences, CA provides a relevant framework of sequence expansion (Stivers, 2012) allowing the prior mentioned base parts to be expanded with preceding parts (F P P pre , SP P pre ), insertion parts (F P P insert , SP P insert ) or succeeding parts by (F P P post , SP P post ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 79, |
|
"text": "(Nicholas et al., 2016)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 202, |
|
"text": "(Palomar and Patricio, 2000;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 203, |
|
"end": 222, |
|
"text": "Boyer et al., 2009)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 315, |
|
"text": "(Schegloff and Sacks, 1973", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 360, |
|
"end": 383, |
|
"text": "Greeting-Greeting etc.)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 603, |
|
"end": 618, |
|
"text": "(Stivers, 2012)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This work proposes to use sequence expansion to analyse how much long sequences can be predicted by the machine learning models in order to build the knowledge for NLU. As an initial step, this work uses above mentioned sequence expansion labels to study the dendrograms and sequences of nodes longer than adjacency pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paper is structured as follows: Section 2 presents a summary of related literature and provides the necessary background. The Methodology and the clustering model is presented in Section 3, and Section 4 presents the results of our proposed model. Section 5 concludes this article.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Structuring in dialogues have been explored by many researches utilising different sequencing theories: discourse representation theory (Kamp et al., 2011) , conversation analysis (CA) (Sidnell and Stivers, 2012) , and Rhetorical sequence theory (Hou et al., 2020) to name a few. Detailing these theories is beyond the scope of this work, but we will briefly explain some of their use-cases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 155, |
|
"text": "(Kamp et al., 2011)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 212, |
|
"text": "(Sidnell and Stivers, 2012)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 264, |
|
"text": "(Hou et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literature and Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For instance, (Stent, 2000) used rhetorical sequence theory for sequencing task-driven dialogues and report several issues such as, deciding a minimal unit for annotation, overlap between subjectmatter and presentational relations. In (Asher and Lascarides, 2003) , the authors presented a novel theory called Segmented Discourse Interpretation Theory (SDRT), combining the knowledge from dynamic semantics, common sense reasoning, and speech act theory. The authors claimed SDRT to be the most formally mature and linguistically grounded theory.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 27, |
|
"text": "(Stent, 2000)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 263, |
|
"text": "(Asher and Lascarides, 2003)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literature and Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "While, the above mentioned works focused more on strengthening the theoretical foundations for dialogue sequencing, the authors (Boyer et al., 2009) identified themselves with solving practical matters of extracting sequences. Their corpus of humanhuman tutorial dialogues were manually annotated with dialogue acts and trained on a hidden Markov model (HMM) on adjacency pairs. More recently, the authors in (Nicholas et al., 2016) presented a multi-party corpus annotated with discourse sequence relations following SDRT mentioned earlier. Authors in (Gupta et al., 2018) proposed a hierarchical annotation scheme for query systems such as travel booking, in order to determine intents from complex nested queries compared to a single intent for each slot. In (Shi et al., 2019) , the authors used a variational recurrent neural network (VRNN) and variational inference for dialogue sequence in taskoriented dialogues (finding restaurant and getting weather report).", |
|
"cite_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 148, |
|
"text": "(Boyer et al., 2009)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 409, |
|
"end": 432, |
|
"text": "(Nicholas et al., 2016)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 553, |
|
"end": 573, |
|
"text": "(Gupta et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 762, |
|
"end": 780, |
|
"text": "(Shi et al., 2019)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literature and Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The proposed work here is closely in line with (Zacharie et al., 2018; Duran and Battle, 2018; Tewari and Bensch, 2018) , where in prior work the authors proposed a two step methodology of extracting two dimensional patterns in dialogues, followed by clustering. Their dialogues are manually annotated with emotion, gaze and dialogue act. In the latter work, the authors demonstrated the significance of dialogue sequencing for building domain agnostic dialogue models using CA. They explored sequence expansion and developed an annotation tool to annotate dialogues with subsequences based on CA and dialogue control functions. In the final work the authors used syntactic, communicative and CA based features and formalised them by extending the cooperating distributed grammar system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 70, |
|
"text": "(Zacharie et al., 2018;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 71, |
|
"end": 94, |
|
"text": "Duran and Battle, 2018;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 119, |
|
"text": "Tewari and Bensch, 2018)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literature and Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The biggest difference of this work from the above mentioned prior works is in the definition of the task, i.e, the dialogue corpus. All the prior work has utilised either publicly available corpora based on query systems, while this work aimed to gather as diverse genres of task-driven query/reservation (booking laundry, ordering food), collaboration (cooking, taking medications, going to the flower shop) dialogues and chit-chat dialogues. The other difference is in the annotation approach and the training input, where, we neither use only manually labelled or the entire utterance as the input. Instead, we combine manually labelled and automatically extracted features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literature and Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The next section briefly provides some background on adjacency pairs and CA based sequence expansion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literature and Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Adjacency pairs (Schegloff and Sacks, 1973) can be defined as utterances produced by two different participants and are adjacently placed. Instances of typically used adjacency pairs are greeting greeting, request accept/reject, offer accept/reject, question answer etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequences in Dialogues", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "However, adjacency pairs allow one-shot conversations (McTear et al., 2016) , where the human asks a question or queries a system and the system responds. Moving towards long and complex interactions which may include (pronoun resolution, topic management, etc) would leave adjacency pairs insufficient for the purpose. In the example below, we explain our scenario, labelled with dialogue control functions (Bunt, 1999) Research has been done already with regard to anaphora resolution using adjacency pairs (Palomar and Patricio, 2000) , we propose to use sequence expansion for the problem a) above and for b) the annotation scheme proposed by Bunt et al. (Bunt et al., 2019) . Next we explain the concept of sequence expansion (SE) to understand what do we mean by longer sequences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 75, |
|
"text": "(McTear et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 408, |
|
"end": 420, |
|
"text": "(Bunt, 1999)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 509, |
|
"end": 537, |
|
"text": "(Palomar and Patricio, 2000)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 659, |
|
"end": 678, |
|
"text": "(Bunt et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequences in Dialogues", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Sequence Expansion (SE) (Stivers, 2012) constitutes labels that can precede, be inserted, or followed by the base adjacency pairs (introduced in Section 1). The above mentioned example can be translated with SE labels as in Table 1 , and instead of knowledge from just a pair of turns, the machine can extract from multiple turns. Following such schemes allows machines, to have a longer window/slot for information. The other benefit is, it can optimise its knowledge and strategy, For example, if a machine observes that an SP P insert is present in its slot, and its the machine's turn then it can switch the topic back to the base topic introduced at F P P base if it hasn't been fulfilled by an SP P base , etc.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 231, |
|
"text": "Table 1", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sequences in Dialogues", |
|
"sec_num": "2.1" |
|
}, |
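To make the label set concrete, the following is a minimal sketch of how the CA sequence expansion labels used in this work could be represented programmatically; the enum and the example turn are illustrative assumptions, not part of any released code.

```python
from enum import Enum

class SequenceExpansion(Enum):
    """CA sequence expansion labels attached to turns around a base adjacency pair."""
    FPP_BASE = "FPP_base"      # first pair part of the base adjacency pair
    SPP_BASE = "SPP_base"      # second pair part of the base adjacency pair
    FPP_PRE = "FPP_pre"        # pre-expansion parts preceding the base pair
    SPP_PRE = "SPP_pre"
    FPP_INSERT = "FPP_insert"  # insert-expansion parts between FPP_base and SPP_base
    SPP_INSERT = "SPP_insert"
    FPP_POST = "FPP_post"      # post-expansion parts following the base pair
    SPP_POST = "SPP_post"

# Example: a turn labelled as an insert expansion while the base question is still open.
turn = ("Inform, didn't find glasses", SequenceExpansion.SPP_INSERT)
print(turn[1].value)  # -> "SPP_insert"
```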
|
{ |
|
"text": "To this end, SE labels are used to analyse the results of the clustering and to compare the amount of knowledge captured and the comprehensiveness they provide compared to adjacency pairs. The next section provides some details on the methodology employed by this work to predict distinctive clusters representing longer sequences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequences in Dialogues", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The method employed by this work to predict long sequences uses feature engineering and unsupervised clustering method on n\u2212grams of syntactic features and dialogue control functions. The next sections provide details on the features used and the components of the model. Overall, our framework consists of following stages represented in Figure 1: 1. Preparation of the corpus-consists of determining which genres should be considered, then merging of the samples from different sources was done, then the corpus was preprocessed by performing data cleaning, missing imputation, and assignment of uniqueidentifier. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 339, |
|
"end": 348, |
|
"text": "Figure 1:", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We conduct experiments on a collection of 78 dialogues of which 41 were synthetically created dialogues between an older adult H and a robot R. We used the scenario that R is situated in H's home to assist in daily tasks such as: meal reminders, playing board games, taking care of hazardous items etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The synthetic dialogues were combined with 9 dialogues from Dialog Bank 1 which already came with gold standard labels of ISO 24617 \u2212 2 scheme (Bunt et al., 2017) and 28 dialogues from dialogue breakdown detection challenge (DBDC3) (Higashinaka et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 162, |
|
"text": "(Bunt et al., 2017)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 258, |
|
"text": "(Higashinaka et al., 2017)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The synthetic dialogues and DBDC3 dialogues were hand labelled by the author with dialogue control functions following the ISO 24617 \u2212 2 annotation scheme. Since, this work is aimed towards extracting generic sequences hence, we combined different domains (taks-driven and chit-chat) and participant types (human-human, human-machine). Figure 1 : The workflow to obtain dialogue patterns for sequencing dialogues to build natural flows in DMS.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 344, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We use dependency parsing for extracting syntactic features of types, uni\u2212gram and tri\u2212gram with dependency relationship. Dependency parsing generates syntactic sequences between lexical elements i.e(words), which are linked by binary asymmetrical relation called dependencies. Figure 2 illustrates a dependency parsing graph with syntactic sequence. This work uses Spacy dependency parser proposed in (Honnibal and Johnson, 2015) . Based on a manual analysis of dependency graphs on randomly selected samples from the corpus, we decided to use POS tags as uni\u2212gram syntactic features: pronouns, proper nouns, direct object, indirect objects, coordinating conjunction, and interjection. For tri-gram syntactic features (subject-object-verb) tuples and dependency graphs of (auxiliary verb) and its right two neighbours were used. Utterances in dialogues, have one to many relationship with functions to either provide or require information from an addressee and such functions are referred as dialogue control functions (Bunt et al., 2019) . For instance in an utterance 'Hi John, Please get ready for some exercise' can be segmented into 'Hi John' with dialogue control function (greeting) and 'Please get ready for some exercise' with dialogue control function (request) and each of these segments are referred as functional segments. List of dialogue control functions used in this work are provided in Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 402, |
|
"end": 430, |
|
"text": "(Honnibal and Johnson, 2015)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1021, |
|
"end": 1040, |
|
"text": "(Bunt et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 284, |
|
"text": "Figure", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1407, |
|
"end": 1414, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Features and Dialogue Control Functions", |
|
"sec_num": "3.2" |
|
}, |
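A minimal sketch of how the uni-gram and tri-gram syntactic features described above could be extracted with spaCy; the model name and the exact dependency labels for direct/indirect objects are assumptions, since the paper only states that the spaCy parser is used.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed model; the paper does not name one

def unigram_features(text):
    """Uni-gram features: pronouns, proper nouns, direct/indirect objects,
    coordinating conjunctions and interjections."""
    doc = nlp(text)
    keep_pos = {"PRON", "PROPN", "CCONJ", "INTJ"}
    keep_dep = {"dobj", "iobj", "dative"}  # assumed labels for direct/indirect objects
    return [t.text for t in doc if t.pos_ in keep_pos or t.dep_ in keep_dep]

def trigram_features(text):
    """Tri-gram features: (subject, object, verb) tuples and
    (auxiliary verb, right neighbour 1, right neighbour 2) tuples."""
    doc = nlp(text)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subj = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            obj = [c.text for c in token.children if c.dep_ in ("dobj", "obj")]
            if subj and obj:
                triples.append((subj[0], obj[0], token.text))
        if token.pos_ == "AUX" and token.i + 2 < len(doc):
            triples.append((token.text, doc[token.i + 1].text, doc[token.i + 2].text))
    return triples

print(unigram_features("Hi John, please get ready for some exercise"))
print(trigram_features("He booked the laundry room yesterday"))
```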
|
{ |
|
"text": "Data transformation is an essential step for all machine learning algorithms and here we use two different transformation techniques for the two features used in this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Transformation and Reduction", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Term Frequency Inverse Document Frequency tf-idf (Church and Gale, 1999) determines the relative frequency of terms in a document compared to the inverse proportion of that term over the collection of documents. Dialogue control functions are of categorical type and hence were trans-Communicative Functions Dialogue Control Functions", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 72, |
|
"text": "(Church and Gale, 1999)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Transformation and Reduction", |
|
"sec_num": "3.3" |
|
}, |
|
{

"text": "Table 2: Communicative functions and the dialogue control functions used in this work. 1. General Functions: Proposition, Set, Choice, Check Question, Inform, Agree, Disagree, Correction, Answer, Confirm, Dis-confirm, Promise, Offer, Address, Accept, Decline (Request, Suggest), Request, Instruct, Offer, Address, Accept, Decline (Offer). 2. Feedback Functions: Auto-Positive, Allo-Positive, Auto-Negative, Allo-Negative, Feedback Elicitation. 3. Turn/Time Management: Accept-Turn, Grab-Turn, Assign-Turn, Keep-Turn, Release-Turn, Take-Turn, Stalling, Pausing. 5. Own/Partner Communication Management: Completion, Correct Misspeaking, Self-Error, Retraction, Self-correction. 6. Discourse Structuring: Interaction Structuring, Opening. 7. Social Obligation Management: Initial, Return (Greeting, Self-introduction, Goodbye), Apology, Thanking, Accept (Apology, Thanking).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data Transformation and Reduction",

"sec_num": null

},
|
{ |
|
"text": "Initial, Return (Greeting, Self-introduction, Goodbye), Apology, Thanking, Accept (Apology, Thanking). formed using tf-idf technique. Intuitively, it determines how significant a term is for a given document. Consider the corpus as a document collection D, with a term (dialogue control function) t, and document (a dialogue) d \u2208 D, tf-idf can be calculated as (Ramos, 2003) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 361, |
|
"end": 374, |
|
"text": "(Ramos, 2003)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6.Discourse Structuring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "t d = f t,d \u00d7 log(|D|/f t,D ) Where, f t,d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6.Discourse Structuring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "is the frequency of (dialogue control function) t in the given dialogue d, |D| is the size of the corpus, and f t,D is the number of dialogues in which the dialogue control function t appears in the corpus D.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6.Discourse Structuring", |
|
"sec_num": null |
|
}, |
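A small sketch of the tf-idf weighting above, computed directly from the formula over dialogue-level bags of dialogue control functions; the toy labels below are invented for illustration only.

```python
import math
from collections import Counter

# Toy corpus: each dialogue is the list of its (manually labelled) dialogue control functions.
dialogues = [
    ["Greeting", "Question", "Answer", "Thanking"],
    ["Request", "Accept", "Question", "Answer"],
    ["Greeting", "Inform", "Inform", "Goodbye"],
]

def tfidf(term, dialogue, corpus):
    """t_d = f_{t,d} * log(|D| / f_{t,D}): raw count times inverse dialogue frequency."""
    f_td = Counter(dialogue)[term]              # frequency of the function in this dialogue
    f_tD = sum(1 for d in corpus if term in d)  # number of dialogues containing the function
    return f_td * math.log(len(corpus) / f_tD) if f_tD else 0.0

print(tfidf("Greeting", dialogues[0], dialogues))  # weight of "Greeting" in the first dialogue
```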
|
{ |
|
"text": "Word-embedding (Mikolov et al., 2013) transform words to vectors in a higher dimensional space to derive linear syntactic and/or semantic relationships between them. dc t is the dialogue control function and sf t = [w 1,t , w 2,t ...w n,t ] are the n\u2212gram syntactic features for each segment, where w is a single syntactic feature. The concatenation of these two features F = [dc t , sf t ] is the variable. dc t and sf t were averaged for each segment of an utterance resulting intos f t ,dc t and transformed using pre-trained GloVe (Pennington et al., 2014) embedding with 300 features provid-ingF = [dc t ,s f t ], which is then given to PCA for feature selection, explained next.", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 37, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 535, |
|
"end": 560, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6.Discourse Structuring", |
|
"sec_num": null |
|
}, |
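A sketch of the segment-level embedding step, assuming pre-trained 300-dimensional GloVe vectors have been loaded into a dictionary; the file path and the handling of out-of-vocabulary tokens are assumptions not specified in the paper.

```python
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a plain-text file into {word: np.ndarray}."""
    vectors = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.asarray(values, dtype=np.float32)
    return vectors

def segment_vector(dcf_label, syntactic_feats, glove, dim=300):
    """Average the vectors of the dialogue control function (dc_t) and of the syntactic
    features (sf_t), then concatenate them into F_bar = [dc_bar, sf_bar]."""
    def avg(tokens):
        vecs = [glove[t.lower()] for t in tokens if t.lower() in glove]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim, dtype=np.float32)
    dc_bar = avg(dcf_label.split())          # e.g. "Feedback Elicitation" -> two tokens
    sf_bar = avg(syntactic_feats)            # e.g. ["glasses", "bedside", "find"]
    return np.concatenate([dc_bar, sf_bar])  # 600-dimensional segment representation

# glove = load_glove("glove.6B.300d.txt")   # hypothetical local file
# x = segment_vector("Request", ["laundry", "book", "room"], glove)
```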
|
{ |
|
"text": "Principal Component Analysis (PCA) reduces higher dimensional feature space to lower dimension, by selecting the features with highest variance (Shlens, 2014) . PCA receives the above trans-formed features: tf-idf t d and word-embeddingF . The linear transformations can be represented as a matrix computation:", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 158, |
|
"text": "(Shlens, 2014)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6.Discourse Structuring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "P 1 t d = T and P 2F = F new .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6.Discourse Structuring", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Where, the input to the HAC model are the rows of P 1 for only dialogue control functions and P 2 for combination of the features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6.Discourse Structuring", |
|
"sec_num": null |
|
}, |
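A minimal sketch of the feature-selection step with scikit-learn's PCA, applied to a matrix whose rows are the transformed segment representations; the number of retained components is an assumption, as the paper does not report it.

```python
import numpy as np
from sklearn.decomposition import PCA

# X: one row per functional segment, columns are tf-idf or embedding dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 600))          # placeholder for the real feature matrix

pca = PCA(n_components=0.95)             # keep components explaining 95% of the variance (assumed)
X_reduced = pca.fit_transform(X)         # rows of X_reduced are fed to the HAC model

print(X_reduced.shape, pca.explained_variance_ratio_[:5])
```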
|
{ |
|
"text": "HAC is an unsupervised machine learning method (Murtagh and Contreras, 2012) , that partitions the corpora into n singleton nodes and keeps merging mutually close pair of nodes until one final node is generated.", |
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 76, |
|
"text": "(Murtagh and Contreras, 2012)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Agglomerative Clustering (HAC)", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Let S 0 be the initial set of data points, at each step n i is the new node formed by merging a i and b i with a given distance \u03b4 i . It runs for N 1 turns, resulting into a final state of only one node with all N initial nodes. Next we briefly describe the steps a HAC algorithm follows: (i) Generation of priority queue with nearest neighbours and minimal distances. (ii) Find the closest pair of nodes based on computed values for nearest neighbours and minimal distance, and append them to a list L to generate the dendrogram. (iii) Ensure the minimal distance between two nearest neighbours holds true till the end, and updates the minimum distance at every time step of the merging.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Agglomerative Clustering (HAC)", |
|
"sec_num": "3.4" |
|
}, |
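The merging procedure described above corresponds to standard agglomerative clustering; a short sketch with SciPy, which also produces the dendrograms analysed later, is shown below on random placeholder input.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))            # placeholder for the PCA-reduced feature rows

Z = linkage(X, method="ward", metric="euclidean")  # N-1 merges: [idx_a, idx_b, distance, size]
labels = fcluster(Z, t=5, criterion="maxclust")    # cut the tree into 5 clusters
tree = dendrogram(Z, no_plot=True)                 # dendrogram structure for manual inspection

print(Z[:3])        # first three merge steps
print(labels[:10])
```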
|
{ |
|
"text": "We experimented with four different HAC models and compared it for Euclidean and M anhattan distance measures for finding out the minimum distance between two feature combinations in order to merge them into clusters. To generate dendrograms we used W ard linkage and complete linkage. The four HAC models we experimented for different feature combinations: (i) Pre-defined number of clusters n = 6, distance metric: Euclidean distance, merging of clusters: Ward. (ii) Pre defined number of clusters n = 5, distance metric: Euclidean distance, merging of clusters: Ward. (iii) Pre defined number of clusters n = 5, distance metric: Manhattan distance, merging of clusters: complete. (iv) Pre defined number of clusters n = 3, distance metric: Euclidean distance, merging of clusters: Ward.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Definition", |
|
"sec_num": "3.5" |
|
}, |
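A sketch of how the four model configurations listed above could be set up with scikit-learn, shown for illustration only; the paper does not state which library was used.

```python
from sklearn.cluster import AgglomerativeClustering

# "metric" is called "affinity" in scikit-learn versions before 1.2.
hac_models = {
    "m1": AgglomerativeClustering(n_clusters=6, metric="euclidean", linkage="ward"),
    "m2": AgglomerativeClustering(n_clusters=5, metric="euclidean", linkage="ward"),
    "m3": AgglomerativeClustering(n_clusters=5, metric="manhattan", linkage="complete"),
    "m4": AgglomerativeClustering(n_clusters=3, metric="euclidean", linkage="ward"),
}

# labels = {name: model.fit_predict(X_reduced) for name, model in hac_models.items()}
```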
|
{ |
|
"text": "We ran the HAC models on tuple of features for each segment of an utterance. Following tuple of features were selected for running the experiment:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Definition", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "(i) only dialogue control functions (DCF). (ii) dialogue control functions and syntactic feature (tri\u2212grams-subject-object-verb) as (DCF,SS1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Definition", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "(iii) dialogue control functions and syntactic feature (tri\u2212grams-auxiliary verb, right neighbour1, right neighbour2) as (DCF,SS2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Definition", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "(iv) dialogue control functions and uni\u2212gram syntactic features (Nouns, Direct object, Indirect object, Interjection and Coordinating Conjunction) (DCF,ST).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Definition", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "(v) dialogue control functions and tri\u2212gram syntactic features (auxiliary right neighbour1 Right neighbour2 and subject object verb) as (DCF,SS1,SS2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Definition", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "(vi) dialogue control function and syntactic features, uni\u2212grams and tri\u2212grams as (DCF,ST,SS1,SS2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model Definition", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "To evaluate the HAC model on different combination of features, we compute the silhouette coefficient, Calinski Harabasz index and Davies Bouldin score, these metrics illustrate if the model generated well defined clusters. In Statistics Cophnet, measures how well the dendro-gram preserves the pair wise distances of original data points (Sara\u00e7li et al., 2013) . We use Cophnet to measure the cor-relation between original and the predicted data points.", |
|
"cite_spans": [ |
|
{ |
|
"start": 339, |
|
"end": 361, |
|
"text": "(Sara\u00e7li et al., 2013)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
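A sketch of computing the internal validation metrics and the cophenetic correlation mentioned above with scikit-learn and SciPy; the data here is a random placeholder, whereas in the paper these scores are computed per feature combination.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))           # placeholder for one feature combination

labels = AgglomerativeClustering(n_clusters=5, linkage="ward").fit_predict(X)

print("silhouette        :", silhouette_score(X, labels))
print("calinski-harabasz :", calinski_harabasz_score(X, labels))
print("davies-bouldin    :", davies_bouldin_score(X, labels))

# Cophenetic correlation: how well dendrogram distances preserve the original pairwise distances.
coph_corr, _ = cophenet(linkage(X, method="ward"), pdist(X))
print("cophenetic corr.  :", coph_corr)
```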
|
{ |
|
"text": "The evaluation of the HAC model is illustrated in the Table 3 . The overall performance of the HAC model is good on specific combination of features DCF, SS1, SS2 and DCF, ST as highlighted in bold with high Calinski Harabasz Index, Silhouette score, and Davies Bouldin score. The Cophnet score is high for half of the combination of features i.e, DCF, SS1, DCF, ST, and DCF, SS1, SS2. We can see that the performance of the HAC model on only DCF is also high, however it is not a relevant result for us because it doesn't convey any information about the sequence.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 61, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In order to identify the sequence expansions we manually analysed random sample of dendrograms, for all the six combination of features mentioned above with 200 nodes. We provide here five such examples of the analysed dendrograms, which are manually labelled with sequence expansion labels, in-order to see if such labelling can help to capture and build more knowledge. Table 4 provides two examples extracted from one of the generated dendrograms, for (dialogue control functions and uni\u2212gram syntactic feature). In the first sample, Instruct node with the syntactic feature mill was adjacent to Question node with the syntactic feature picket, other adjacent nodes without any syntactic feature was a positive feedback and an answer. Indicating that this example could possibly be a part of a navigation instruction, while the other dialogue seems to be a part of a chitchat dialogue. For each example, each subsequent line represents the closest node while browsing the dendrogram from top to bottom if its vertically drawn. As it can be seen in these examples, the model doesn't predict the nodes to be in perfect pairs, hence highlighting that using adjacency pairs will be insufficient in extracting knowledge that is not distributed with-in pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 372, |
|
"end": 379, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Empirical Analysis of HAC Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Three examples from the analysed dendrograms are presented in Table 5 representing the combination of features (dialogue control function and tri\u2212gram syntactic feature). Also, for this case we manually labelled them with SE labels. The example number 1 seems to be about finding glasses, while the others indicate towards them being a part of a dialogue on machines and stealing of the jobs. Here, it can be found in Example 2 third line that there is no F P P base for the SP P base , indicating that the parts for the same pair (base, pre, post, insert) can sometimes be very far away or possibly the model places them far because of the dissimilarities between them. The analysis also showed that among the syntactic features, uni\u2212grams were present dominantly around 84% of the times, while tri\u2212grams of subject-object-verb tuples constituted 50% of the segments and auxiliary verbs were 20% of the segments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 69, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Empirical Analysis of HAC Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "This work explored combination of features (syntactic features and dialogue control functions) in order to find sequences in dialogues, such that we can build NLU functions for capturing information distributed over turns longer than two for DMs to possibly conduct flexible dialogues. Dependency parsing was used for extracting syntactic features (uni\u2212grams and tri\u2212grams) and dialogue control functions were labelled manually using ISO 24617 \u2212 2 scheme. The feature transformation was done using tf-idf (when using only dialogue control function training the model), and GloVe embedding were used for combination of features (dialogue control functions and syntactic features), for both the cases feature selection was done with P CA. The selected features were modelled with hierarchical agglomerative clustering, the results validated our assumption that capturing longer sequences using syntactic features can provide knowledge that adjacency pairs would fall short in.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion, Discussion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "This work being at a preliminary stage doesn't provide any concrete solution yet for building flexible dialogue strategies and rich knowledge sources, however it can be seen as more of a proof-ofconcept for using syntactic features and sequence expansion labels for dialogue sequencing. The benefit of using syntactic features is that they can be extracted automatically from the raw data and stateof-the-art methods are robust enough. This work explored tuples of syntactic features, instead trees or graphs must be explored. Syntactic features provides flexibility to a machine, in the sense that it can select and prioritise to accomplish a topic (objects, nouns, etc) depending on the goals and/or the domain it is employed for. For pronoun resolution, relationship between prior mentioned proper noun/s and incoming pronouns can be established", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion, Discussion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{

"text": "Table 5: Dendrogram node sequences (dialogue control function, syntactic content) manually labelled with sequence expansion labels. Example 1: Inform, can't see anything ($FPP_{pre}$); Question, do you remember ($FPP_{base}$); Inform, on the bedside ($FPP_{insert}$); Inform, didn't find glasses ($SPP_{insert}$). Example 2: Turn keep, don't you see ($FPP_{insert}$); Confirm, they do not ($SPP_{insert}$); Accept, machines steal jobs ($SPP_{base}$); Inform, a set people ($FPP_{pre}$). Example 3: Retract, it does not, steals jobs ($SPP_{base}$); Inform, machines ($FPP_{pre}$); Question, work that does ($FPP_{base}$).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Empirical Analysis of HAC Model",

"sec_num": null

},
|
{ |
|
"text": "Retract, it does not, steals jobs SP P base Inform, machines F P P pre Question, work that does F P P base using extraction of uni-gram syntactic features delimited by SE labels. For managing multiple dialogue control functions, coordinating conjunctions and interjections can be used for identifying response generation. This work also comes with its limitations, where the first is related to the corpus, which could be biased due to a large number of samples being synthetically prepared by the author. Another limitation is the size of the corpus. The author is currently working on both of these limitations and in the future we have planned to combine different genres of dialogues from publicly available sources. Another limitation of this work is that it doesn't use any dialogue features such as intents, semantics, context, etc. Other limitations include selection and model of the syntactic features, where some of the features such as auxiliary verbs should be dropped because of their low frequency, it could be also a bias from the corpus that was used. A common assumption that dialogues are about subjects objects and verbs could not be held by this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S.No Feature Combination", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Whether dialogues are task-driven or open ended or chit-chat-one commonality is that they all are directed towards activities fulfilling human needs (both tangible or intangible) More abstract models such as BDI models (Rao et al., 1995) and/or Activity theory (Leontiev, 1978) should be considered and be complemented with syntactic and pragmatic features mentioned here.", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 237, |
|
"text": "(Rao et al., 1995)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 277, |
|
"text": "(Leontiev, 1978)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "S.No Feature Combination", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "I would like to thank Associate Prof. Suna Bensch at the Department of Computing Science, Ume\u00e5 University, Ume\u00e5, Sweden for her intellectual con-tribution towards ideation and refining of the research work done in this article.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie grant agreement No 721619 for the SOCRATES project.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://dialogbank.uvt.nl/annotated dialogues/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Motivating Rhetorical Relations, chapter 1", |
|
"authors": [ |
|
{ |
|
"first": "Nicholas-Michael", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicholas-Michael Asher and Alex Lascarides. 2003. Motivating Rhetorical Relations, chapter 1. Cam- bridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Modeling dialogue structure with adjacency pair analysis and hidden markov models", |
|
"authors": [ |
|
{ |
|
"first": "Kristy", |
|
"middle": [], |
|
"last": "Boyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Phillips", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Young", |
|
"middle": [], |
|
"last": "Ha Eun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Wallis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mladen", |
|
"middle": [], |
|
"last": "Vouk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Lester", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristy Boyer, Robert Phillips, Young Ha Eun, Michael Wallis, Mladen Vouk, and James Lester. 2009. Mod- eling dialogue structure with adjacency pair analysis and hidden markov models. In Proceedings of Hu- man Language Technologies: The 2009 Annual Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics, Companion Volume: Short Papers, pages 49-52. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Dynamic interpretation and dialogue theory. The structure of multimodal dialogue", |
|
"authors": [ |
|
{ |
|
"first": "Harry", |
|
"middle": [], |
|
"last": "Bunt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harry Bunt. 1999. Dynamic interpretation and dia- logue theory. The structure of multimodal dialogue, 2:1-8.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The dialogbank: dialogues with interoperable annotations. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Harry", |
|
"middle": [], |
|
"last": "Bunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Volha", |
|
"middle": [], |
|
"last": "Petukhova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Malchanau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "53", |
|
"issue": "", |
|
"pages": "213--249", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harry Bunt, Volha Petukhova, Andrei Malchanau, Alex Chengyu Fang, and Kars Wijnhoven. 2019. The dialogbank: dialogues with interoperable an- notations. Language Resources and Evaluation, 53:213-249.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Dialogue Act Annotation with the ISO 24617-2 Standard", |
|
"authors": [ |
|
{ |
|
"first": "Harry", |
|
"middle": [], |
|
"last": "Bunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Volha", |
|
"middle": [], |
|
"last": "Petukhova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Traum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "109--135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harry Bunt, Volha Petukhova, David Traum, and Jan Alexandersson. 2017. Dialogue Act Annotation with the ISO 24617-2 Standard, pages 109-135. Springer International Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Inverse document frequency (idf): A measure of deviations from poisson", |
|
"authors": [ |
|
{ |
|
"first": "Kenneth", |
|
"middle": [], |
|
"last": "Church", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Gale", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Natural language processing using very large corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "283--295", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenneth Church and William Gale. 1999. Inverse docu- ment frequency (idf): A measure of deviations from poisson. In Natural language processing using very large corpora, pages 283-295. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Conversation analysis structured dialogue for multi-domain dialogue management", |
|
"authors": [ |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Duran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Battle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "DEXAHAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--4", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathan Duran and Steve Battle. 2018. Conversation analysis structured dialogue for multi-domain dia- logue management. DEXAHAI, pages 1-4.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Semantic parsing for task oriented dialog using hierarchical representations", |
|
"authors": [ |
|
{ |
|
"first": "Sonal", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rushin", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mrinal", |
|
"middle": [], |
|
"last": "Mohit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Ku- mar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representa- tions. In Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), page 6, Bel- gium. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Overview of dialogue breakdown detection challenge 3. Proceedings of Dialogue System Technology Challenge", |
|
"authors": [ |
|
{ |
|
"first": "Ryuichiro", |
|
"middle": [], |
|
"last": "Higashinaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Funakoshi", |
|
"middle": [], |
|
"last": "Kotaro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Inab", |
|
"middle": [], |
|
"last": "Michimasa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tsunomori", |
|
"middle": [], |
|
"last": "Yuiko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takahashi", |
|
"middle": [], |
|
"last": "Tetsuro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaji", |
|
"middle": [], |
|
"last": "Nobuhiro", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryuichiro Higashinaka, Funakoshi Kotaro, Inab Michi- masa, Tsunomori Yuiko, Takahashi Tetsuro, and Kaji Nobuhiro. 2017. Overview of dialogue break- down detection challenge 3. Proceedings of Dia- logue System Technology Challenge, page 14.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "An improved non-monotonic transition system for dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Honnibal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1373--1378", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Honnibal and Mark Johnson. 2015. An im- proved non-monotonic transition system for depen- dency parsing. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing, pages 1373-1378, Lisbon, Portugal. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Rhetorical structure theory: A comprehensive review of theory, parsing methods and applications", |
|
"authors": [ |
|
{ |
|
"first": "Shengluan", |
|
"middle": [], |
|
"last": "Hou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuhan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chaoqun", |
|
"middle": [], |
|
"last": "Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Expert Systems with Applications", |
|
"volume": "157", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shengluan Hou, Shuhan Zhang, and Chaoqun Fei. 2020. Rhetorical structure theory: A comprehensive review of theory, parsing methods and applications. Expert Systems with Applications, 157:113421.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Discourse Representation Theory", |
|
"authors": [ |
|
{ |
|
"first": "Hans", |
|
"middle": [], |
|
"last": "Kamp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Van Genabith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Uwe", |
|
"middle": [], |
|
"last": "Reyle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "125--394", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hans Kamp, Josef Van Genabith, and Uwe Reyle. 2011. Discourse Representation Theory, pages 125-394. Springer Netherlands, Dordrecht.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Activity, consciousness, and personality", |
|
"authors": [ |
|
{ |
|
"first": "Aleksei", |
|
"middle": [], |
|
"last": "Nikolaevich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leontiev", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1978, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aleksei Nikolaevich Leontiev. 1978. Activity, con- sciousness, and personality. Prentice-Hall, Moscow, Russia.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Towards a technology of conversation", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Mctear", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zoriada", |
|
"middle": [], |
|
"last": "Callejas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Davis", |
|
"middle": [], |
|
"last": "Griol", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "The Conversational Interface, chapter 3", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael McTear, Zoriada Callejas, and Davis Griol. 2016. Towards a technology of conversation. In The Conversational Interface, chapter 3, pages 25- 45. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111-3119, USA. Curran Associates Inc.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Algorithms for hierarchical clustering: an overview", |
|
"authors": [ |
|
{ |
|
"first": "Fionn", |
|
"middle": [], |
|
"last": "Murtagh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro", |
|
"middle": [], |
|
"last": "Contreras", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery", |
|
"volume": "2", |
|
"issue": "1", |
|
"pages": "86--97", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fionn Murtagh and Pedro Contreras. 2012. Algorithms for hierarchical clustering: an overview. Wiley Inter- disciplinary Reviews: Data Mining and Knowledge Discovery, 2(1):86-97.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Discourse Structure and Dialogue Acts in Multiparty Dialogue: the STAC Corpus", |
|
"authors": [ |
|
{ |
|
"first": "Asher", |
|
"middle": [], |
|
"last": "Nicholas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hunter", |
|
"middle": [], |
|
"last": "Julie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morey", |
|
"middle": [], |
|
"last": "Mathieu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benamara", |
|
"middle": [], |
|
"last": "Farah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Afantenos", |
|
"middle": [], |
|
"last": "Stergos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "10th International Conference on Language Resources and Evaluation (LREC 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2721--2727", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Asher Nicholas, Hunter Julie, Morey Mathieu, Bena- mara Farah, and Afantenos Stergos. 2016. Dis- course Structure and Dialogue Acts in Multiparty Dialogue: the STAC Corpus. In 10th International Conference on Language Resources and Evaluation (LREC 2016), pages 2721-2727, Portoroz, Slovenia.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Anaphora resolution through dialogue adjacency pairs and topics", |
|
"authors": [ |
|
{ |
|
"first": "Manuel", |
|
"middle": [], |
|
"last": "Palomar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mart\u00ednez-Barco", |
|
"middle": [], |
|
"last": "Patricio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Natural Language Processing -NLP 2000", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "196--203", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manuel Palomar and Mart\u00ednez-Barco Patricio. 2000. Anaphora resolution through dialogue adjacency pairs and topics. In Natural Language Processing -NLP 2000, pages 196-203, Berlin, Heidelberg. Springer Berlin Heidelberg.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Using tf-idf to determine word relevance in document queries", |
|
"authors": [ |
|
{ |
|
"first": "Juan", |
|
"middle": [], |
|
"last": "Ramos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "242", |
|
"issue": "", |
|
"pages": "133--142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juan Ramos. 2003. Using tf-idf to determine word rele- vance in document queries. volume 242, pages 133- 142.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Bdi agents: from theory to practice", |
|
"authors": [ |
|
{ |

"first": "Anand", |

"middle": [ |

"S" |

], |

"last": "Rao", |

"suffix": "" |

}, |

{ |

"first": "Michael", |

"middle": [ |

"P" |

], |

"last": "Georgeff", |

"suffix": "" |

} |
|
], |
|
"year": 1995, |
|
"venue": "ICMAS", |
|
"volume": "95", |
|
"issue": "", |
|
"pages": "312--319", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anand S Rao, Michael P Georgeff, et al. 1995. Bdi agents: from theory to practice. In ICMAS, vol- ume 95, pages 312-319, USA. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Comparison of hierarchical cluster analysis methods by cophenetic correlation", |
|
"authors": [ |
|
{ |
|
"first": "Sinan", |
|
"middle": [], |
|
"last": "Sara\u00e7li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nurhan", |
|
"middle": [], |
|
"last": "Dogan", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "\u0130smet", |

"middle": [], |

"last": "Dogan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Journal of Inequalities and Applications", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sinan Sara\u00e7li, Nurhan Dogan, and\u0130smet Dogan. 2013. Comparison of hierarchical cluster analysis methods by cophenetic correlation. In Journal of Inequalities and Applications, 1, page 203. SpringerOpen.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Opening up closings", |
|
"authors": [ |
|
{ |

"first": "Emanuel", |

"middle": [ |

"A" |

], |

"last": "Schegloff", |

"suffix": "" |

}, |

{ |

"first": "Harvey", |

"middle": [], |

"last": "Sacks", |

"suffix": "" |

} |
|
], |
|
"year": 1973, |
|
"venue": "Semiotica", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "289--327", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emanuel A Schegloff and Harvey Sacks. 1973. Open- ing up closings. In Semiotica, volume 8, pages 289- 327. Walter de Gruyter.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Unsupervised dialog structure learning", |
|
"authors": [ |
|
{ |
|
"first": "Weiyan", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tiancheng", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weiyan Shi, Tiancheng Zhao, and Zhou Yu. 2019. Un- supervised dialog structure learning.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A tutorial on principal component analysis", |
|
"authors": [ |
|
{ |
|
"first": "Jonathon", |
|
"middle": [], |
|
"last": "Shlens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1404.1100" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathon Shlens. 2014. A tutorial on principal com- ponent analysis. arXiv preprint arXiv:1404.1100, page 12.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "The Handbook of Conversation Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Jack", |
|
"middle": [], |
|
"last": "Sidnell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanya", |
|
"middle": [], |
|
"last": "Stivers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jack Sidnell and Tanya Stivers. 2012. The Handbook of Conversation Analysis. Wiley-Blackwell, UK.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Rhetorical structure in dialog", |
|
"authors": [ |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Stent", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 2nd International Natural Language Generation Conference (INLG'2000", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "247--252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amanda Stent. 2000. Rhetorical structure in dialog. In In Proceedings of the 2nd International Natural Lan- guage Generation Conference (INLG'2000, pages 247-252.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Sequence organization", |
|
"authors": [ |
|
{ |
|
"first": "Tanya", |
|
"middle": [], |
|
"last": "Stivers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "The Handbook of Conversation Analysis, chapter 10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "191--209", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tanya Stivers. 2012. Sequence organization. In The Handbook of Conversation Analysis, chapter 10, pages 191-209. Wiley-Blackwell, UK.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Natural language communication with social robots for assisted living", |
|
"authors": [ |
|
{ |
|
"first": "Maitreyee", |
|
"middle": [], |
|
"last": "Tewari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suna", |
|
"middle": [], |
|
"last": "Bensch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "IROS Workshop in Robots for Assisted Living", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--4", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maitreyee Tewari and Suna Bensch. 2018. Natural lan- guage communication with social robots for assisted living. In IROS Workshop in Robots for Assisted Liv- ing, pages 1-4, Madrid, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Extraction and clustering of twodimensional dialogue patterns", |
|
"authors": [ |
|
{ |

"first": "Zacharie", |

"middle": [], |

"last": "Ales", |

"suffix": "" |

}, |

{ |

"first": "Alexandre", |

"middle": [], |

"last": "Pauchet", |

"suffix": "" |

}, |

{ |

"first": "Arnaud", |

"middle": [], |

"last": "Knippel", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "International Journal on Artificial Intelligence Tools", |
|
"volume": "27", |
|
"issue": "02", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ales Zacharie, Pauchet Alexandre, and Knippel Ar- naud. 2018. Extraction and clustering of two- dimensional dialogue patterns. International Jour- nal on Artificial Intelligence Tools, 27(02):1850001.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Dependency graph for an utterance, where coloured text in brackets are the POS tags associated to each lexical item (words). The arcs indicate the asymmetric dependency relation (auxiliary, noun subject, direct object and so on) between the head(arc orgins) and dependants(arc pointers)." |
|
}, |
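A minimal illustrative sketch (not the paper's code) of how per-segment syntactic features like those in the dependency graph above could be extracted, assuming the spaCy library and its "en_core_web_sm" English model; the helper name syntactic_features and the example utterance are hypothetical.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def syntactic_features(segment):
    """Return uni-gram and tri-gram syntactic features for one dialogue segment."""
    doc = nlp(segment)
    # uni-grams: one (word, POS tag, dependency relation) triple per token
    unigrams = [(tok.text, tok.pos_, tok.dep_) for tok in doc]
    # tri-grams: a short path through the dependency graph, from each token's
    # relation up to its head's relation and its head's head's relation
    trigrams = [(tok.dep_, tok.head.dep_, tok.head.head.dep_) for tok in doc]
    return unigrams, trigrams

unigrams, trigrams = syntactic_features("you go around the mill on the left side")
print(unigrams[:3])
print(trigrams[:3])
```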
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table><tr><td>4. Feature Transformation: employs a tf-idf</td></tr><tr><td>when the feature consists of only dialogue</td></tr><tr><td>control functions, and GloVe embeddings are</td></tr><tr><td>used for different combinations of syntactic</td></tr><tr><td>features and dialogue control functions.</td></tr><tr><td>5. Selection of features: we perform feature se-</td></tr><tr><td>lection using PCA on the transformed features</td></tr><tr><td>received from the previous stage.</td></tr><tr><td>6. Training of the model: the selected features</td></tr><tr><td>are clustered with hierarchical agglomerative</td></tr><tr><td>clustering.</td></tr><tr><td>7. Evaluation was done by computing Calin-</td></tr><tr><td>ski Harabasz index, Silhouette score, Davies</td></tr><tr><td>Bouldin score and Cophenetic Coefficient Cor-</td></tr><tr><td>relation (Cophnet) for the clustering model.</td></tr></table>", |
|
"type_str": "table", |
|
"text": "2. Manual Annotation: transforming utterances to segments and labelling them with dialogue control function.3. Extraction of features: next, a dependency parser was used on the corpus of dialogue segments to extract syntactic features (uni\u2212grams and tri\u2212grams).", |
|
"num": null |
|
}, |
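A minimal sketch of stages 4-7 listed above (feature transformation, PCA-based selection, hierarchical agglomerative clustering, and the four reported cluster-quality measures), assuming scikit-learn and SciPy; the toy segment strings, the tf-idf stand-in for the GloVe-embedding variants, the number of principal components, and the cluster count are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster, cophenet
from scipy.spatial.distance import pdist
from sklearn.metrics import (calinski_harabasz_score,
                             silhouette_score,
                             davies_bouldin_score)

# Hypothetical dialogue segments, each reduced to its DCF label plus a few words.
segments = [
    "question kids",
    "positive_feedback uh huh",
    "instruct mill",
    "check_question picket fence",
    "inform school",
    "stalling uh",
]

# 4. Feature transformation: tf-idf over the label strings
#    (averaged GloVe vectors could replace X for the embedding-based variants).
X = TfidfVectorizer().fit_transform(segments).toarray()

# 5. Feature selection: keep a few principal components (3 is arbitrary here).
X_red = PCA(n_components=3).fit_transform(X)

# 6. Training: hierarchical agglomerative clustering (Ward linkage as an example).
Z = linkage(X_red, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")

# 7. Evaluation: internal cluster-validity indices and the cophenetic correlation.
print("Calinski-Harabasz:", calinski_harabasz_score(X_red, labels))
print("Silhouette:       ", silhouette_score(X_red, labels))
print("Davies-Bouldin:   ", davies_bouldin_score(X_red, labels))
coph_corr, _ = cophenet(Z, pdist(X_red))
print("Cophenetic corr.: ", coph_corr)
```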
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">Segmenting and</td><td/></tr><tr><td>Transcribed</td><td>Cleaning</td><td>Feature</td></tr><tr><td>Dialogues</td><td/><td>Extraction</td></tr><tr><td/><td/><td>Dependency</td></tr><tr><td/><td/><td>Parsing of</td></tr><tr><td/><td/><td>dialogue</td></tr><tr><td/><td/><td>segment</td></tr><tr><td>Annotated Dialogues + Syntactic Features</td><td>Annotating Dialogues with Communicative</td><td>Dialogues with Syntactic Features</td></tr><tr><td>PCA</td><td>Functions</td><td/></tr><tr><td colspan=\"2\">Transformation</td><td/></tr><tr><td>And</td><td/><td/></tr><tr><td>Clustering</td><td/><td/></tr><tr><td>Dialogue</td><td/><td/></tr><tr><td>Structuring</td><td/><td/></tr><tr><td/><td/><td>DMS MODEL</td></tr></table>", |
|
"type_str": "table", |
|
"text": "The first column consists of the information about the turn and the participant, the second column provides one or more utterances with-in each turn, followed by the dialogue control functions (DCF) and sequence expansion (SE)", |
|
"num": null |
|
}, |
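As a sketch of the annotation layout described in the caption above (turn and participant, utterance segments within the turn, dialogue control function, sequence expansion), one possible in-memory representation is shown below; the dataclass names and the concrete labels are hypothetical, not taken from the corpus.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    text: str        # one utterance segment within the turn
    dcf: str         # manually labelled dialogue control function
    expansion: str   # sequence-expansion label, e.g. "FPP-pre" or "SPP-base"

@dataclass
class Turn:
    turn_id: int
    participant: str
    segments: List[Segment]

turn = Turn(
    turn_id=12,
    participant="instruction giver",
    segments=[
        Segment("okay", "positive feedback", "FPP-pre"),
        Segment("do you see the picket fence", "check question", "FPP-base"),
    ],
)
print(turn.participant, [(s.dcf, s.expansion) for s in turn.segments])
```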
|
"TABREF4": { |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Different dialogue control functions corresponding to their respective Communicative functions", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">S.No Feature Combination</td><td>Sequence Expansion</td></tr><tr><td>1.</td><td>Positive feedback, uh huh</td><td>F P P pre</td></tr><tr><td/><td>Instruct, mill</td><td>SP P pre</td></tr><tr><td/><td>Check question, picket, fence</td><td>F P P base</td></tr><tr><td/><td>Positive feedback, answer, picket</td><td>SP P base</td></tr><tr><td/><td>Positive feedback, uh huh</td><td>F P P post</td></tr><tr><td>2.</td><td>Inform, school</td><td>F P P pre</td></tr><tr><td/><td>Question, kids</td><td>F P P base</td></tr><tr><td/><td>Stalling, uh</td><td>F P P insert</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Evaluation of HAC Model on eight combination of features with communication and syntax features.", |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Some clusters from HAC model for the combination of features dialogue control functions and syntactic features", |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Selection of clusters from HAC model indicating sequence expansions for feature combination dialogue control function and tri\u2212gram syntactic features.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |