|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:31:15.214618Z" |
|
}, |
|
"title": "WeaSuL \u03c0 : Weakly Supervised Dialogue Policy Learning: Reward Estimation for Multi-turn Dialogue", |
|
"authors": [ |
|
{ |
|
"first": "Anant", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "An intelligent dialogue system in a multi-turn setting should not only generate the responses which are of good quality, but it should also generate the responses which can lead to long-term success of the dialogue. Although, the current approaches improved the response quality, but they overlook the training signals present in the dialogue data. We can leverage these signals to generate the weakly supervised training data for learning dialog policy and reward estimator, and make the policy take actions (generates responses) which can foresee the future direction for a successful (rewarding) conversation. We simulate the dialogue between an agent and a user (modelled similar to an agent with supervised learning objective) to interact with each other. The agent uses dynamic blocking to generate ranked diverse responses and explorationexploitation to select among the Top-K responses. Each simulated state-action pair is evaluated (works as a weak annotation) with three quality modules: Semantic Relevant, Semantic Coherence and Consistent Flow. Empirical studies with two benchmarks indicate that our model can significantly out-perform the response quality and lead to a successful conversation on both automatic evaluation and human judgment. 1", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "An intelligent dialogue system in a multi-turn setting should not only generate the responses which are of good quality, but it should also generate the responses which can lead to long-term success of the dialogue. Although, the current approaches improved the response quality, but they overlook the training signals present in the dialogue data. We can leverage these signals to generate the weakly supervised training data for learning dialog policy and reward estimator, and make the policy take actions (generates responses) which can foresee the future direction for a successful (rewarding) conversation. We simulate the dialogue between an agent and a user (modelled similar to an agent with supervised learning objective) to interact with each other. The agent uses dynamic blocking to generate ranked diverse responses and explorationexploitation to select among the Top-K responses. Each simulated state-action pair is evaluated (works as a weak annotation) with three quality modules: Semantic Relevant, Semantic Coherence and Consistent Flow. Empirical studies with two benchmarks indicate that our model can significantly out-perform the response quality and lead to a successful conversation on both automatic evaluation and human judgment. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Dialog policy for multi-turn dialogue decides the next best action to take on the environment so as to complete the conversation based on various success criteria. Reinforcement learning can help to learn such a policy where the environment can be users (human or model) and the policy takes action on the environment from which it gets a reward signal (Fatemi et al., 2016; Peng et al., 2017; Chen et al., 2017; Yarats and Lewis, 2018; Lei et al., 2018; He et al., 2018; .", |
|
"cite_spans": [ |
|
{ |
|
"start": 353, |
|
"end": 374, |
|
"text": "(Fatemi et al., 2016;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 375, |
|
"end": 393, |
|
"text": "Peng et al., 2017;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 412, |
|
"text": "Chen et al., 2017;", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 413, |
|
"end": 436, |
|
"text": "Yarats and Lewis, 2018;", |
|
"ref_id": "BIBREF61" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 454, |
|
"text": "Lei et al., 2018;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 455, |
|
"end": 471, |
|
"text": "He et al., 2018;", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Learning a dialogue policy using reinforcement learning can be challenging with humans users, since it requires a large set of samples with a reward to train. Since there are a lot of previous works on neural response generation (Gu et al., 2020; Zhang et al., 2019; we can model the users also, using any of these encoder-decoder architectures. This helps to simulate the conversations between the simulated user and the agent (policy model) replying to each other (Zhao and Eskenazi, 2016; Dhingra et al., 2016; Shah et al., 2018) . Reward signal for policy learning can be as simple as the small constant negative reward at each turn and a large reward at the end (if the goal completes) to encourage shorter conversations (Takanobu et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 246, |
|
"text": "(Gu et al., 2020;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 266, |
|
"text": "Zhang et al., 2019;", |
|
"ref_id": "BIBREF62" |
|
}, |
|
{ |
|
"start": 466, |
|
"end": 491, |
|
"text": "(Zhao and Eskenazi, 2016;", |
|
"ref_id": "BIBREF64" |
|
}, |
|
{ |
|
"start": 492, |
|
"end": 513, |
|
"text": "Dhingra et al., 2016;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 514, |
|
"end": 532, |
|
"text": "Shah et al., 2018)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 726, |
|
"end": 749, |
|
"text": "(Takanobu et al., 2019)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, reward estimation for dialogue is challenging, the small constant negative reward at each turn may lead to ending the conversation prematurely. Instead of handcrafting the reward at the end based on success or failure, it is more useful if we can evaluate reward at every turn to guide the policy to dynamically change actions as per the need for the user and end the conversation naturally. With the growing complexity of the system across different topics, it is required to build a more sophisticated reward function to avoid manual intervention for accounting different factors towards conversation success.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we proposed a novel model for contextual response generation in multi-turn dialogue. The model includes the turn-level reward estimator, which combines the weak supervision signals obtained from three basic modules 1) Semantic Coherence, 2) Consistent Flow, 3) Semantic Relevance. These modules are learned jointly with the response generation model with the counterfactual examples obtained from negative sampling. Leveraging the weak supervision signals obtained from these models, we further update the reward estimator and dialog policy jointly in an alternative way, thus improving each other.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our proposed approach integrates semantic understanding of utterances using encoder-decoder systems with the power of Reinforcement Learning (RL) to optimize long-term success. We test the proposed approach with two benchmarks: Daily-Dialog (Li et al., 2017b) and PersonaChat . Experimental results demonstrate on both datasets indicate that our model can significantly outperform state-of-the-art generation models in terms of both automatic evaluation and human judgment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 259, |
|
"text": "(Li et al., 2017b)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Open-domain dialogue in a multi-turn setting has been widely explored with different encoderdecoder architectures (Gu et al., 2020; Feng et al., 2021; Kottur et al., 2017; Shah et al., 2018; Shang et al., 2015; Vinyals and Le, 2015; Wu et al., 2019; Zhong et al., 2019) . The basic encoder-decoder architectures like Seq-to-Seq models have been widely extended and modified to generate the generic responses, context modelling and grounding by persona/emotion/knowledge (Li et al., 2015; Xing et al., 2017; Zhang et al., 2019 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 131, |
|
"text": "(Gu et al., 2020;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 132, |
|
"end": 150, |
|
"text": "Feng et al., 2021;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 171, |
|
"text": "Kottur et al., 2017;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 190, |
|
"text": "Shah et al., 2018;", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 210, |
|
"text": "Shang et al., 2015;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 232, |
|
"text": "Vinyals and Le, 2015;", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 249, |
|
"text": "Wu et al., 2019;", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 269, |
|
"text": "Zhong et al., 2019)", |
|
"ref_id": "BIBREF66" |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 487, |
|
"text": "(Li et al., 2015;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 506, |
|
"text": "Xing et al., 2017;", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 525, |
|
"text": "Zhang et al., 2019", |
|
"ref_id": "BIBREF62" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The dialogue literature widely applies reinforcement learning, including the recent ones based on deep architectures (Takanobu et al., 2019 (Takanobu et al., , 2020 Takanobu et al., 2020; Gordon-Hall et al., 2020a,b) . But these taskoriented RL dialogue systems often model the dialogue with limited parameters and assumptions specific to the dataset, targeted for that task. The dataset includes hand-built templates with state, action and reward signals designed by humans for each new domain making this setting difficult for extending these to open domain dialogue systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 139, |
|
"text": "(Takanobu et al., 2019", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 140, |
|
"end": 164, |
|
"text": "(Takanobu et al., , 2020", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 165, |
|
"end": 187, |
|
"text": "Takanobu et al., 2020;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 188, |
|
"end": 216, |
|
"text": "Gordon-Hall et al., 2020a,b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our goal in this work is to integrate the stateof-the-art encoder-decoder architectures like in Gu et al. (2020) ; ; Csaky and Recski (2020) and reinforcement learning paradigms to efficiently learn the dialogue policy optimized for long-term success in the multi-turn dialogue scenarios. We are recently inspired by the works in Takanobu et al. (2019) ; to jointly learn the reward function and dialogue policy, and reduce the effort and cost for manual labelling the conversations for building the reward model. Specifically, we leverage the weak supervision inspired from Chang et al. (2021a,b) to generate the labelled dataset to facilitate this joint learning and building reward estimation model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 112, |
|
"text": "Gu et al. (2020)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 352, |
|
"text": "Takanobu et al. (2019)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 597, |
|
"text": "Chang et al. (2021a,b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We represent dialog sessions D = {\u03c4 1 , \u03c4 2 , \u03c4 3 , .......\u03c4 n } where each dialog session \u03c4 represents the trajectory of state-action pairs as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "{s u 0 , a u 0 , s 0 , a 0 , s u 1 , a u 1 , s 1 , a 1 , .....}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The user in our case is a simulator which utters a response a u given the state s u denoted as \u00b5(a u , e u |s u ) where e u denotes the binary signal indicating the end of a dialog session, in that case the response a u is empty. The dialog policy \u03c0 \u03b8 (a|s) decides the action a according to the current state s after the agent interacts with the user simulator \u00b5. At each time, the state given to the either dialog party is updated after recording the action uttered by the other party. The reward estimator f evaluates the quality of response/action uttered by the dialog policy \u03c0. The dialog policy \u03c0 is based on the BERT (Devlin et al., 2019 ) encoder-decoder model and the reward function f is the MLP model parameterized by \u03b8 and \u03c9 respectively. We have modeled the user simulator exactly in the same way as the agent but trained only using supervised learning objective.", |
|
"cite_spans": [ |
|
{ |
|
"start": 625, |
|
"end": 645, |
|
"text": "(Devlin et al., 2019", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the subsequent section, we will introduce the components action, state, policy, quality modules and reward estimator. Further, sections explain the setup we have used for weakly supervised learning and, finally, the experimental results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "An action a is the dialogue utterance generated by the encoder-decoder model as shown in Figure 1 . The model takes as input the context history (state), and outputs the probability distribution over a set of possible actions denoted as \u03c0 \u03b8 (a|s) parameterized by \u03b8. The user simulator generates the action a u , policy generates the action a, and the input state for the agent and the user is s and s u respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 97, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The state is the past conversation history between an agent and a user denoted as, s t = {q 1 , a 1 , q 2 , a 2 , q 3 , a 3 , ....., q t }. The state for an agent and a user are differently denoted as s and s u respectively. Let's say the agent utter- ances are denoted by a's, then state, s = s t and the agent utters a t . Similarly, the user state", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "s u t = {q 1 , a 1 , q 2 , a 2 , q 3 , a 3 , ....., q", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "t , a t } and the user utters q t+1 . Each of the utterances is mapped to a fixed-length sentence vector using SBERT (Reimers and Gurevych, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 145, |
|
"text": "(Reimers and Gurevych, 2019)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State", |
|
"sec_num": "3.2" |
|
}, |
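
{

"text": "To make this concrete, the following is a minimal sketch of mapping an utterance history to fixed-length sentence vectors with SBERT. It assumes the sentence-transformers library and the 'paraphrase-distilroberta-base-v1' checkpoint mentioned in Appendix A; the helper name encode_state is ours for illustration, not the authors' released code.

# Minimal sketch: encode a state s_t = {q_1, a_1, ..., q_t} as a sequence of
# fixed-length SBERT sentence vectors, one per utterance.
from sentence_transformers import SentenceTransformer

sbert = SentenceTransformer('paraphrase-distilroberta-base-v1')

def encode_state(history):
    # history is the list of utterances, most recent last
    return sbert.encode(history)  # ndarray of shape (num_utterances, 768)

state = encode_state(['Hi, how are you?', 'I am good, thanks!', 'What are you up to?'])
print(state.shape)  # (3, 768)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "State",

"sec_num": "3.2"

},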
|
{ |
|
"text": "The dialogue policy takes the form of a BERT based encoder-decoder ( i.e. \u03c0 \u03b8 (a|s) ) (Gu et al., 2020) as shown in Figure 1 . Similar to , we have used the BERT based encoder and transformer decoder, but instead of feeding the utterance at word level, we instead fed the utterance representation (obtained from SBERT) into the encoder. The encoder takes as input the previous context history as s t and output the response a t at the output of the decoder.", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 103, |
|
"text": "(Gu et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 124, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dialogue Policy", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We have modelled the user simulator in exactly the same way as the BERT based encoder-decoder shown in Figure 1 . However, the user simulator is trained only (with supervised learning objective) for utterances in dialog corpus and predicting user response (Gu et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 273, |
|
"text": "(Gu et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 111, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "User Simulator", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We calculate the reward for each state-action pair (see Section. 3.8) and use this signal to train the dialogue policy so that it can avoid reaching bad states so as to reach the successful end of the conversation between a user and an agent. We have leveraged the signals from three basic modules, namely, Semantic Coherence, Consistent Flow and Semantic Relevance (which are jointly learned with the dialogue policy). For each of the three modules, the data for the positive class is obtained from the source corpus while for the negative class it has been generated dynamically during training. We describe each of the three modules in the following sections.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conversation Quality Modules", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "We need to filter out the utterances generated with high confidence by the dialog policy but are semantically irrelevant to the previous context. To quantify such a characteristic, we modeled the general response relevance prediction task which utilizes the sequential relationship of the dialog data fed to the encoder side of BERT encoder-decoder framework. Since, the task of semantic relevance is to match the two sequences of conversation, so instead of matching the context and response, we have measured the relevance of two fragments of dialogue session.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Relevance", |
|
"sec_num": "3.5.1" |
|
}, |
|
{ |
|
"text": "Specifically, given a context c", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Relevance", |
|
"sec_num": "3.5.1" |
|
}, |
|
{ |
|
"text": "= {q 1 , a 1 , q 2 , a 2 , .....q m },", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Relevance", |
|
"sec_num": "3.5.1" |
|
}, |
|
{ |
|
"text": "we randomly split c into two consecutive pieces , we replaced the left or right part with the sampled piece from the corpus. Also, we additionally generate the negative samples by internal shuffling in the left or right part. The whole model is trained like a classifier with corresponding labels y sr \u2208 {0, 1}. Since the individual utterances are fed after obtaining their vector representation, the aggregated representation of two pieces is represented by E sr CLS over which the non-linear transformation is applied, the score for semantic relevance is given by g(c left , c right ), and similar to , it has been trained using the binary cross-entropy loss as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Relevance", |
|
"sec_num": "3.5.1" |
|
}, |
|
{ |
|
"text": "c lef t = {q 1 , a 1 , q 2 , a 2 , ....q t , a t } and c right = {q t+1 , a t+1 , .....q m }. Similar to", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Relevance", |
|
"sec_num": "3.5.1" |
|
}, |
|
{ |
|
"text": "L sr = \u2212y sr log(g(c left , c right )) \u2212 (1 \u2212 y sr ) log(1 \u2212 g(c left , c right )) (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Relevance", |
|
"sec_num": "3.5.1" |
|
}, |
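
{

"text": "A minimal sketch of the negative sampling and the loss in Eq. (1), in PyTorch; the split logic and the scoring head g are illustrative assumptions under the description above, not the authors' released code.

import random
import torch
import torch.nn as nn

def make_relevance_example(context, corpus):
    # Split a session into (c_left, c_right); with probability 0.5 keep the
    # true pair (label 1), otherwise swap the right part with a fragment
    # sampled from another session in the corpus (label 0).
    t = random.randrange(1, len(context))
    c_left, c_right = context[:t], context[t:]
    if random.random() < 0.5:
        return c_left, c_right, 1.0
    other = random.choice(corpus)
    return c_left, other[random.randrange(len(other)):], 0.0

# g(c_left, c_right): non-linear head over the aggregated E^sr_CLS vector
g = nn.Sequential(nn.Linear(768, 128), nn.Tanh(), nn.Linear(128, 1), nn.Sigmoid())
bce = nn.BCELoss()  # Eq. (1)

e_cls = torch.randn(1, 768)   # stand-in for the encoder's E^sr_CLS output
y_sr = torch.tensor([[1.0]])
loss = bce(g(e_cls), y_sr)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Semantic Relevance",

"sec_num": "3.5.1"

},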
|
{ |
|
"text": "The response generated should be rewarded only if it is coherent despite having adequate content. This makes the model to generate the coherent responses while avoiding the incoherent ones. Specifically, given a context c = {q 1 , a 1 , q 2 , a 2 , .....q m }, we randomly select any of the agent response at time t, denoted as a t , and replace it with any random utterance from the corpus. We also generate the incoherent samples by internal shuffling of bi-grams. The incoherent utterance is labelled as y coh t = 0 and coherent samples as y coh t = 1. The semantic coherence model is also trained like a classifier for each of the utterance representations obtained at the output of BERT encoder as shown in Figure 1 . The probability of the t-th utterance being incoherent is given as:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 712, |
|
"end": 720, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic Coherence", |
|
"sec_num": "3.5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(z t = 1|a 1 , .., a t ) = sof tmax(W coh E at +b coh ) = exp(W coh E at + b coh ) m l=1 exp(W coh E a l + b coh )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Semantic Coherence", |
|
"sec_num": "3.5.2" |
|
}, |
|
{ |
|
"text": "and the loss function is given as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Coherence", |
|
"sec_num": "3.5.2" |
|
}, |
|
{ |
|
"text": "L coh = \u2212 m t=1 z t log p(z t = 1|a 1 , a 2 .....a m ) (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Coherence", |
|
"sec_num": "3.5.2" |
|
}, |
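
{

"text": "A minimal sketch of Eqs. (2)-(3) in PyTorch; tensor shapes and the position of the corrupted utterance are illustrative.

import torch
import torch.nn.functional as F

m, hidden = 5, 768
E_a = torch.randn(m, hidden)            # encoder outputs for the m agent utterances
W_coh = torch.nn.Linear(hidden, 1)      # implements W_coh E_a + b_coh

logits = W_coh(E_a).squeeze(-1)         # shape (m,)
p = F.softmax(logits, dim=0)            # Eq. (2): normalized over the m utterances
z = torch.tensor([1., 1., 0., 1., 1.])  # position 2 holds the corrupted utterance
L_coh = -(z * torch.log(p)).sum()       # Eq. (3)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Semantic Coherence",

"sec_num": "3.5.2"

},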
|
{ |
|
"text": "We want the agent to continuously add the information to keep the conversation going in the forward direction. To determine the flowing conversation, we take the cosine similarity between the last two agent utterances denoted as E a i\u22121 and E a i denoted as g(a i\u22121 , a i ), and we measure the similarity with randomly sampled utterance v in place of a i\u22121 given as g(a i\u22121 , v). We would like g(a i\u22121 , a i ) to be larger than g(a i\u22121 , v) by at least a margin \u2206 and define the learning objective as a hing loss function:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Consistent Flow", |
|
"sec_num": "3.5.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L cf = max{0, \u2206 \u2212 g(a i\u22121 , a i ) + g(a i\u22121 , v)}", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Consistent Flow", |
|
"sec_num": "3.5.3" |
|
}, |
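
{

"text": "A minimal sketch of the hinge objective in Eq. (4), in PyTorch, with \u2206 = 0.54 following Appendix A; the batching is illustrative.

import torch
import torch.nn.functional as F

def flow_hinge_loss(e_prev, e_curr, e_rand, delta=0.54):
    # Eq. (4): cosine similarity of consecutive agent utterances should exceed
    # that of a randomly sampled utterance v by at least the margin delta.
    g_pos = F.cosine_similarity(e_prev, e_curr, dim=-1)  # g(a_{i-1}, a_i)
    g_neg = F.cosine_similarity(e_prev, e_rand, dim=-1)  # g(a_{i-1}, v)
    return torch.clamp(delta - g_pos + g_neg, min=0.0).mean()

loss = flow_hinge_loss(torch.randn(4, 768), torch.randn(4, 768), torch.randn(4, 768))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Consistent Flow",

"sec_num": "3.5.3"

},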
|
{ |
|
"text": "To initialize the parameters of agent and reward modules M ={Semantic Relevance, Semantic Coherence, Consistent Flow}, we used the supervised learning objective since all the state-action pairs obtained from the pre-training corpus are the groundtruth and can be used as close approximation for further fine-tuning on other dialog corpus. We used the pre-training corpus P as Gutenberg dialog corpus (Csaky and Recski, 2020) . Since the agent model in our case is based on BERT encoderdecoder parameterized by \u03b8 similar to Gu et al. (2020) , the probability of generating agent's response a is given as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 400, |
|
"end": 424, |
|
"text": "(Csaky and Recski, 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 523, |
|
"end": 539, |
|
"text": "Gu et al. (2020)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Training of Agent and Reward Modules", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p \u03b8 (a|s) = N j=1 p \u03b8 (a j |a <j , s),", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Joint Training of Agent and Reward Modules", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "where a j is the j-th word generated at the output of decoder and s is the whole context history utterances fed to the encoder and N is the maximum sequence length of decoder. The loss function for generating agent response a is given as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Training of Agent and Reward Modules", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "L a = J(\u03b8) = \u2212 N i=1 log p \u03b8 (a j |a <j , s) (6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Training of Agent and Reward Modules", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "The joint loss function is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Training of Agent and Reward Modules", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L f ull = L a + \u03b1 * (L sr + L coh + L cf )", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Joint Training of Agent and Reward Modules", |
|
"sec_num": "3.6" |
|
}, |
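
{

"text": "A minimal sketch of the joint objective in Eq. (7); the component losses stand in for those of Sections 3.5.1-3.5.3 and Eq. (6), and the value of the weight alpha here is purely illustrative (the paper treats it as a hyperparameter).

import torch

# stand-ins for L_a, L_sr, L_coh and L_cf computed by the sketches above
L_a, L_sr, L_coh, L_cf = (torch.rand((), requires_grad=True) for _ in range(4))

alpha = 0.5                                   # illustrative weight
L_full = L_a + alpha * (L_sr + L_coh + L_cf)  # Eq. (7)
L_full.backward()  # one backward pass trains generation and quality modules jointly",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Joint Training of Agent and Reward Modules",

"sec_num": "3.6"

},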
|
{ |
|
"text": "The policy \u03c0 \u03b8 is also parameterized by \u03b8, and the probability of action a is given by \u03c0 \u03b8 (a|s) similar to p \u03b8 (a|s), since the probability distribution is learned only from (s, a) pairs obtained from the corpus with human demonstrations. It is a good approximation to initialize the parameters of policy \u03c0 \u03b8 (a|s) with parameters of p \u03b8 (a|s). Furthermore, we update the policy \u03c0 \u03b8 (Step 13 in the Algorithm. 1) to avoid actions a which do not lead to rewarding conversations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint Training of Agent and Reward Modules", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "We setup simulation between virtual agent and user, and let them take turns talking to each other. The simulation is started with a starter utterance obtained from the dialog samples D H (Step 5 of Algorithm 1) and fed to the agent, it then encodes the utterance and generates the response a, the state s u is then updated with previous history and fed to the user model to obtain the next response a u . The response a u is appended to s u to obtain the updated state s. Similarly, the process is repeated until one of the following conditions occurs after a few number of turns 2 : a) When agent starts to produce dull responses like \"I don't know\" 3 . b) When agent starts to generate repetitive response consecutively 4 c) Or, the conversation achieved the maximum number of turns handled by agent and user models. 5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Simulation between Agent and User", |
|
"sec_num": "3.7" |
|
}, |
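
{

"text": "A sketch of the simulation loop described above; agent_respond, user_respond, is_dull and is_repetitive are illustrative placeholders for the trained models and the rule matchers of footnotes 3-4, not the authors' released code.

MAX_TURNS = 20  # footnote 5

def simulate(starter, agent_respond, user_respond, is_dull, is_repetitive):
    # Roll out one agent-user conversation, stopping on dull or repetitive
    # agent responses, or when the maximum number of turns is reached.
    history = [starter]
    trajectory = []                          # collected (state, action) pairs
    for _ in range(MAX_TURNS):
        a = agent_respond(history)           # agent action given state s
        if is_dull(a) or is_repetitive(history, a):
            break
        trajectory.append((list(history), a))
        history.append(a)
        history.append(user_respond(history))  # user action given state s^u
    return trajectory",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dialogue Simulation between Agent and User",

"sec_num": "3.7"

},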
|
{ |
|
"text": "Learning with weak supervision is widely used with the rise of data-driven neural approaches (Ratner et al., 2020; Mrk\u0161i\u0107 et al., 2017; Chang et al., 2020; Bach et al., 2017; Chang et al., 2021a) . Our approach incorporates a similar line of work by providing noisy text to a pretrained model which incorporates prior knowledge from general-domain text and small in-domain text Chen et al., 2019; Harkous et al., 2020) and use it as a weak annotator similar to Ratner et al. (2020) . The primary challenge with the synthetic data is the noise introduced during the generation process, and the noisy labels tend to bring little to no improvement (Fr\u00e9nay and Verleysen, 2013) . To train on such noisy data, we employ three step training process: a) pre-training b) generate data with weighted categories c) fine-tuning similar to Chang et al. (2021a) ; Dehghani et al. (2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 114, |
|
"text": "(Ratner et al., 2020;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 135, |
|
"text": "Mrk\u0161i\u0107 et al., 2017;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 155, |
|
"text": "Chang et al., 2020;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 156, |
|
"end": 174, |
|
"text": "Bach et al., 2017;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 195, |
|
"text": "Chang et al., 2021a)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 396, |
|
"text": "Chen et al., 2019;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 418, |
|
"text": "Harkous et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 461, |
|
"end": 481, |
|
"text": "Ratner et al. (2020)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 645, |
|
"end": 673, |
|
"text": "(Fr\u00e9nay and Verleysen, 2013)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 828, |
|
"end": 848, |
|
"text": "Chang et al. (2021a)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 851, |
|
"end": 873, |
|
"text": "Dehghani et al. (2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "Step 1: Pre-train Generation and Quality Modules Jointly. This step involves pre-training the agent with quality modules jointly as explained in Section 3.6. Quality modules trained on clean data as well as automatically generated negative samples by random sampling. These modules are further fine-tuned on the sampled dialogues from target dialogue corpus at each training iteration. Similarly, we initialized the user also by supervised training on the pre-training dialogue corpus with fine-tuning on target dialogue corpus. (see steps 2-7 of Algorithm 1). The fine-tuning steps make use of continual learning to avoid catastrophic forgetting (Madotto et al., 2020; Lee, 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 647, |
|
"end": 669, |
|
"text": "(Madotto et al., 2020;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 670, |
|
"end": 680, |
|
"text": "Lee, 2017)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "Step 2: Generates the Weakly Labelled data with Reward categories. After the models are initialized with trained parameters, the dialogue simulation has been started between the agent and the user (see Section. 3.7) to interact with each other and generates the synthetic data with annotated scores with each quality module for every stateaction pair in sampled dialogues. During dialogue simulation, we employ Dynamic Blocking mechanism (Niu et al., 2020) to generate novel words and paraphrased responses. Specifically, we generate Top-7 response at each turn and set the agent to exploration for 60 percent of the times and for the rest of the times it exploits by selecting the response from top two ranked responses. We specifically filter the state-action pairs into three reward categories namely, VeryHigh, High and Low. For the state-action pairs whose scores by each module are greater than or equal to 0.8 are put into the VeryHigh category. Other, state-action pairs whose scores by each module are between 0.6 and 0.8 are put into the High reward category. The rest of all state-action pairs are put into the Low reward category. Additionally, we include state-action pairs sampled from target dialog corpus in Step 1. into the VeryHigh category.", |
|
"cite_spans": [ |
|
{ |
|
"start": 438, |
|
"end": 456, |
|
"text": "(Niu et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
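
{

"text": "A minimal sketch of the bucketing referenced above, under one literal reading of the thresholds (all three module scores must fall in the stated range):

def reward_category(scores):
    # VeryHigh if every module score >= 0.8, High if every score lies in
    # [0.6, 0.8), and Low otherwise.
    if all(s >= 0.8 for s in scores):
        return 'VeryHigh'
    if all(0.6 <= s < 0.8 for s in scores):
        return 'High'
    return 'Low'

assert reward_category([0.9, 0.85, 0.8]) == 'VeryHigh'
assert reward_category([0.7, 0.65, 0.79]) == 'High'
assert reward_category([0.9, 0.5, 0.7]) == 'Low'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Weakly Supervised Learning Algorithm",

"sec_num": "3.8"

},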
|
{ |
|
"text": "Step 3: Update the reward estimator and policy. The reward estimator maximizes the log likelihood state-action pairs of higher rewards than the lower ones. The reward estimator f \u03c9 , parameterized by \u03c9, and let's say H, V and L represents the collection of all state action pairs of High, Very-High and Low reward category respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c9 * = arg max E (s k ,a k )\u223c{H,V } [f \u03c9 (s k , a k )] f \u03c9 (s k , a k ) = log p \u03c9 (s k , a k ) = log e R\u03c9(s k ,a k ) Z \u03c9 R \u03c9 (s k , a k ) = T t=k \u03b3 t\u2212k r \u03c9 (s t , a t ) Z \u03c9 = \u2200(s k ,a k ) e R\u03c9(s k ,a k )", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "where f models state-action pairs of H, V and L category as a Boltzmann distribution (Takanobu et al., 2019) . The cost function for reward estimator in terms of trajectories obtained from respective reward categories is given as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 108, |
|
"text": "(Takanobu et al., 2019)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "J f (\u03c9) = \u22120.5 * KL(p H (s, a) p \u03c9 (s, a)) \u2212 KL(p V (s, a) p \u03c9 (s, a)) + KL(p L (s, a) p \u03c9 (s, a)) (9)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "It minimize the KL-divergence between reward distribution and the state-action pairs of high and very high reward but maximize the distribution from the ones with low category. The gradient yields:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c9 J f = 0.5 * E (s,a)\u223cH [ \u03c9 f \u03c9 (s, a)] +E (s,a)\u223cV [ \u03c9 f \u03c9 (s, a)]\u2212E (s,a)\u223cL [ \u03c9 f \u03c9 (s, a)]", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
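
{

"text": "A minimal sketch of an update following the gradient in Eq. (10), in PyTorch; f_H, f_V and f_L stand in for estimator outputs on batches drawn from H, V and L.

import torch

def estimator_loss(f_H, f_V, f_L):
    # Negative of the objective whose gradient is Eq. (10): raise f_omega on
    # High (weight 0.5) and VeryHigh pairs, lower it on Low pairs.
    return -(0.5 * f_H.mean() + f_V.mean() - f_L.mean())

f_H = torch.randn(8, requires_grad=True)
f_V = torch.randn(8, requires_grad=True)
f_L = torch.randn(8, requires_grad=True)
estimator_loss(f_H, f_V, f_L).backward()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Weakly Supervised Learning Algorithm",

"sec_num": "3.8"

},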
|
{ |
|
"text": "Since, the dialog policy is required to put the actions atleast to that of high category, i.e. maximize the entropy regularized expected reward (E \u03c0 [R] + H(\u03c0)) which is effectively minimizes the KL divergence between the policy distribution and Boltzmann distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "J \u03c0 (\u03b8) = \u2212KL(\u03c0 \u03b8 (a|s) p \u03c9 (s, a)) = E (s,a)\u223c\u03c0 [f \u03c9 (s, a) \u2212 log \u03c0 \u03b8 (a|s)] = E (s,a)\u223c\u03c0 [R \u03c9 (s, a)] \u2212 log Z \u03c9 + H(\u03c0 \u03b8 ) (11)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "where the term log Z \u03c9 is independent to \u03b8, and H(\u2022) denotes the entropy of a model. Using likelihood ratio trick the gradient for policy is given as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "\u03b8 J \u03c0 = E (s,a)\u223c\u03c0 [(f \u03c9 (s, a) \u2212 log \u03c0 \u03b8 (a|s)) \u03b8 log \u03c0 \u03b8 (a|s)]. (12)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "Hence, the reward is r \u03c9 (s, a) = f \u03c9 (s, a) \u2212 log \u03c0 \u03b8 (a|s) for each state-action pair and the loss function re-written as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "J \u03c0 (\u03b8) = E (s,a)\u223c\u03c0 [ T k=t \u03b3 k\u2212t (f \u03c9 (s k , a k ) \u2212 log \u03c0 \u03b8 (a k |s k ))] (13)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
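
{

"text": "A minimal sketch of the policy objective in Eq. (13) with the likelihood-ratio gradient of Eq. (12), in PyTorch; this is a REINFORCE-style estimator with illustrative variable names, and the discount value is an assumption.

import torch

def policy_loss(log_probs, f_values, gamma=0.99):
    # Eq. (13): discounted return of shaped rewards r_k = f_omega - log pi;
    # minimizing -return_t * log pi(a_t|s_t) follows the trick in Eq. (12).
    T = len(log_probs)
    loss = torch.zeros(())
    for t in range(T):
        ret = sum(gamma ** (k - t) * (f_values[k] - log_probs[k].detach())
                  for k in range(t, T))
        loss = loss - ret * log_probs[t]
    return loss

log_probs = [torch.randn((), requires_grad=True) for _ in range(5)]
f_values = [torch.randn(()) for _ in range(5)]
policy_loss(log_probs, f_values).backward()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Weakly Supervised Learning Algorithm",

"sec_num": "3.8"

},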
|
{ |
|
"text": "Like in Takanobu et al. (2019) the reward estimator f \u03c9 includes the shaping term. Formally, we include next state s t+1 also instead of just (s t , a t )", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 30, |
|
"text": "Takanobu et al. (2019)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "f \u03c9 (s t , a t , s t+1 ) = g \u03c9 (s t , a t ) + \u03b3h(s t+1 ) \u2212 h(s t ) (14)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
|
{ |
|
"text": "where h is the MLP network with input as presigmoid scores from each quality modules, and g \u03c9 is also the MLP network with input as the concatenation of E CLS as state vector and SBERT sentence embedding of action a.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Learning Algorithm", |
|
"sec_num": "3.8" |
|
}, |
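
{

"text": "A minimal sketch of the shaped estimator in Eq. (14), in PyTorch; the g_\u03c9 layer sizes (512, 256) follow Appendix A, while the form of h over the three pre-sigmoid module scores and the discount value are our illustrative reading.

import torch
import torch.nn as nn

class ShapedReward(nn.Module):
    # f_omega(s_t, a_t, s_{t+1}) = g_omega(s_t, a_t) + gamma*h(s_{t+1}) - h(s_t)
    def __init__(self, state_dim=768, action_dim=768, gamma=0.99):
        super().__init__()
        # g_omega: MLP over [E_CLS state vector ; SBERT embedding of action a]
        self.g = nn.Sequential(nn.Linear(state_dim + action_dim, 512), nn.ReLU(),
                               nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))
        self.h = nn.Linear(3, 1)  # potential over the three quality-module scores
        self.gamma = gamma

    def forward(self, s, a, q_now, q_next):
        g = self.g(torch.cat([s, a], dim=-1))
        return g + self.gamma * self.h(q_next) - self.h(q_now)

f = ShapedReward()
r = f(torch.randn(1, 768), torch.randn(1, 768), torch.randn(1, 3), torch.randn(1, 3))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Weakly Supervised Learning Algorithm",

"sec_num": "3.8"

},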
|
{ |
|
"text": "We conduct experiments on DailyDialog (Li et al., 2017b) , PersonaChat and used Gutenberg Dialogue Dataset (Csaky and Recski, 2020) as a pre-training corpus. We compare our model performance with baselines on various aspects of response quality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 56, |
|
"text": "(Li et al., 2017b)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 131, |
|
"text": "(Csaky and Recski, 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We considered DailyDialog (Li et al., 2017b) and PersonaChat which are open domain dialog corpus to evaluate our system. Dai-lyDialog contains conversation revolving around Get weak annotation scores for all (s, a) \u2208 D \u03c0 from each of the modules M.", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 44, |
|
"text": "(Li et al., 2017b)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Filtering the (s, a) pairs into {VeryHigh, High and Low} reward categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "10:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Update the reward estimator f by minimizing J f w.r.t \u03c9 ( Eq.10)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "11:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "12:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "11:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Compute reward for each (s, a) \u2208 D \u03c0 as,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "11:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "r = f \u03c9 (s t , a t , s t+1 ) \u2212 log \u03c0(a t |s t )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "11:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "13:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "11:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Update the policy \u03c0 \u03b8 by minimizing J \u03c0 w.r.t \u03b8 (Eq. 13). 14: end for various topics pertaining to daily life, and Per-sonaChat contains conversations between people with their respective persona profiles. These dialogues can be of varying length, we limit the maximum length to 20, that can be fed to the BERT Encoder-Decoder model. Since average length of DailyDialog is 7.9 and that of PersonaChat is 9.4, so most of the dialogues fit easily without truncation from the history. For rest of the dialogues, it can be slided across to include the more recent utterances and remove it from the starting. Since we are mapping the utterances to their corresponding vectors using SBERT, the length of individual utterances truncated automatically and retain only first 512 word pieces in case of longer utterances. For pre-training corpus the vocabulary is limited to 100,000 while the vocabularies for DailyDialog and PersonaChat are 25,000 and 32,768 respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "11:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We select various multi-turn response generation baselines. The baselines which are not included pre-training are (1) HRED 6 : Hierarchical encoder-decoder framework (2) VHRED 7 : an extension of HRED that generates response with latent variables (3) HRAN 8 : Hierarchical attention mechanism based encoder-decoder framework (4) ReCoSa 9 : Hierarchical transformer based model (Zhang et al., 2019) (5) SSN: dialogue generation learning with self-supervision signals extracted from utterance order (Wu et al., 2019 ) (6) Transformer-Auxiliary Tasks: A recent state-of-the are model leaning language generation with joint learning of transformer with auxiliary tasks . The another two baselines from Csaky and Recski (2020) which involve pre-training on the Gutenberg corpus are:", |
|
"cite_spans": [ |
|
{ |
|
"start": 377, |
|
"end": 397, |
|
"text": "(Zhang et al., 2019)", |
|
"ref_id": "BIBREF62" |
|
}, |
|
{ |
|
"start": 497, |
|
"end": 513, |
|
"text": "(Wu et al., 2019", |
|
"ref_id": "BIBREF53" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "(1)Transformer 10 : 50M parameters version and (2) GPT-2 11 : Pre-trained model with version of 117M parameters. The repository 12 contains these two trained models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We evaluate the performance of our model on various aspects of response quality using both automatic and human evaluation. Although, most of the automatic metrics poorly correlate with human evaluation (Liu et al., 2016) , and the recently proposed metrics (Li et al., 2017a; Tao et al., 2018) are harder to evaluate than perplexity and BLEU (Papineni et al., 2002) . Additionally, human evaluation has its inherent limitation of bias, cost and replication difficulty (Tao et al., 2018) . Due to this consensus, some used only automatic metrics (Xing and Fern\u00e1ndez, 2018; Xu et al., 2018b) and some used only human evaluation (Krause et al., 2017; Fang et al., 2018) while some used both (Shen et al., 2018; Xu et al., 2018a; Baheti et al., 2018; Ram et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 220, |
|
"text": "(Liu et al., 2016)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 275, |
|
"text": "(Li et al., 2017a;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 293, |
|
"text": "Tao et al., 2018)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 365, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 486, |
|
"text": "(Tao et al., 2018)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 545, |
|
"end": 571, |
|
"text": "(Xing and Fern\u00e1ndez, 2018;", |
|
"ref_id": "BIBREF57" |
|
}, |
|
{ |
|
"start": 572, |
|
"end": 589, |
|
"text": "Xu et al., 2018b)", |
|
"ref_id": "BIBREF60" |
|
}, |
|
{ |
|
"start": 626, |
|
"end": 647, |
|
"text": "(Krause et al., 2017;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 648, |
|
"end": 666, |
|
"text": "Fang et al., 2018)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 688, |
|
"end": 707, |
|
"text": "(Shen et al., 2018;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 708, |
|
"end": 725, |
|
"text": "Xu et al., 2018a;", |
|
"ref_id": "BIBREF58" |
|
}, |
|
{ |
|
"start": 726, |
|
"end": 746, |
|
"text": "Baheti et al., 2018;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 764, |
|
"text": "Ram et al., 2018)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We mainly used the automatic metrics using the DIALOG-EVAL repository 13 , it contains 17 different metrics, but we measure only a few met-rics to facilitate the comparison with the published baselines results. We specifically follow to measure automatic evaluation and human evaluation. For response content quality we measured BLEU-4 (Papineni et al., 2002) and Perplexity(PPL) (Sutskever et al., 2014) . Like in used embedding metrics average (AVG), extrema (EXT), and greedy (GRE) measuring similarity between response and target embedding. Similar to we also measured the informativeness of responses with distinct-1 and distinct-2 that are calculated as the ratios of distinct unigrams and bigrams.", |
|
"cite_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 359, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 404, |
|
"text": "(Sutskever et al., 2014)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.3" |
|
}, |
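
{

"text": "For reference, a minimal sketch of the distinct-n computation (unique n-grams divided by total n-grams over the generated responses):

def distinct_n(responses, n):
    ngrams, total = set(), 0
    for r in responses:
        toks = r.split()
        grams = [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
        ngrams.update(grams)
        total += len(grams)
    return len(ngrams) / max(total, 1)

print(distinct_n(['i am fine', 'i am good'], 1))  # 4 unique / 6 total = 0.667",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation Metrics",

"sec_num": "4.3"

},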
|
{ |
|
"text": "Since our main objective is not to judge the response quality but to predict the response for longterm success of dialogue. We follow the guidelines as in to explore both single-turn and multi-turn settings. We picked 500 dialogues from the test set and asked 3 native speakers for their judgement. In the first setting, we asked judges to pick the better response among the one generated by our model and a baseline model (Pre-Trained GPT2) based on various criteria like answerability and semantics. In the second setting, in case of multi-turn we used 200 simulated conversations between RL agent and a user model to judge the whole conversation for responses uttered by agent. In a complete end-to-end conversation we asked the judges to decide which of the simulated conversations are of higher quality. To compare against the RL model we employ baseline model to simulate the 200 conversations with the same starter utterance used by RL model. Automatic and Human evaluation are shown in Table. 1 and 2 respectively. Table. 1 reports automatic evaluation metrics on the baseline and the proposed model. Our model outperforms for most of the metrics on both datasets. Since our main idea is to generate the responses for successful conversation in the long run than just evaluating the response quality at each of the turn. This is the main reason of why our model outperforms on both distinct-1 and distinct-2 metrics, in comparison to Transformer-auxiliary task model which also trained jointly with the similar tasks but lacks fine-tuning with the weak supervision signals indicate that an additional training with weakly labelled data improves the generalization performance. Although, we see the perplexity also improves since our model is generating the responses more like humans to optimize the conversation in long run. Similarly, embedding metrics also shown the improvement but little on average since it capturing the sense but due to length mismatch which occurs owing to the fact that our model is generating more novel words with futuristic sense. However, Distinct-{1,2} scores shows improvement because of the large pre-trained vocabulary, it gives the model more flexibility to generate novel words without disturbing the sense of the sentence. We also note the results for our model without weak supervision training, namely, Our Model w/o Weak Supervision, this model just fine-tunes on the DailyDialog (Li et al., 2017b) and Per-sonaChat without generating the weak labelled data. Clearly, the distinct-1 and distinct-2 metrics are lower than the proposed model, because the model tends to generate the repetitive words more frequently. Similarly, the embedding metrics and PPL does not show any improvement over the proposed model except on embedding metric based on Average. However, it performs well on BLEU scores since it learns well to reproduce the responses as in the ground truth but not optimized for a successful conversation in the long run. Table 1 also reports the results of another two baselines which are pre-trained models on Gutenberg Dialogue Corpus (Csaky and Recski, 2020) . These models are fine-tuned on DailyDialog and PersonaChat dataset respectively. These models although improved much on BLEU scores and distinct-1 and distinct-2 scores since it gets the larger vocab and more enhanced training for learning the language structure. But lags in the embedding metrics indicating the response quality is low. 
Table 2 reports the human evaluation results, the objective for which our model training is to generate the response for a successful conversation in the long run for the multi-turn scenario. Clearly, the evaluation results are up to our expectation, since the RL system does not bring a significant boost in single-turn response quality than the case of multi-turn setting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 2428, |
|
"end": 2446, |
|
"text": "(Li et al., 2017b)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 3096, |
|
"end": 3120, |
|
"text": "(Csaky and Recski, 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 994, |
|
"end": 1000, |
|
"text": "Table.", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1023, |
|
"end": 1029, |
|
"text": "Table.", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2980, |
|
"end": 2987, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 3461, |
|
"end": 3468, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We proposed a weak supervision framework for policy and reward estimation for long-term success of the dialogue by simulating the conversation between a virtual agent and user. Empirical studies on two benchmarks proves the effectiveness of our approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Work done prior to joining Amazon", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The number of turns after these rules applied is average number of turns in the corpus 3 Used simple rule matching method with 9 phrases collected from the corpus, instead of having false positives and negatives this works well in practice.4 If by rule two consecutive utterances matched more than 80% it is said to be repetitive.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The maximum number of turn is set as 20.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/hsgodhia/hred 7 https://github.com/julianser/hed-dlg-truncated 8 https://github.com/LynetteXing1991/HRAN 9 https://github.com/zhanghainan/ReCoSa 10 https://github.com/tensorflow/tensor2tensor 11 https://github.com/huggingface/transfer-learning-convai 12 https://github.com/ricsinaruto/gutenberg-dialog 13 https://github.com/ricsinaruto/ dialog-eval", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the three anonymous reviewers for their helpful comments and invaluable suggestions. We also thank the members of [24]7.ai Innovation Labs -Pataparla Raga Ashritha and Rishav Sahay for their work in building Dialogue agents, and especially Satyajit Banerjee for the detailed concepts in Reinforcement Learning. We also thank Satyajit Banerjee and [24]7.ai, India for providing access to necessary resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our implementation uses the open source Huggingface Transformer repository (Wolf et al., 2020) . Specifically, we have used the base version from sentence transformers pre-trained on millions of paraphrase examples, named as 'paraphrasedistilroberta-base-v1'. The encoder-decoder framework is initialized with the base version 'bert-baseuncased'but with configuration of smaller size. The smaller sized model reduces the 'bert-baseuncased'configuration to 6 transformer layers, has a hidden size of 768, and contains 2 attention heads, {L=6, H=768, A=2}. Similar to Gu et al. 2020we sum the position embeddings to the output sentence embeddings of size 768 to indicate the user or agent utterances. Odd ones indicate the user utterances and even ones are that of an agent. The MLP network for semantic relevance and semantic coherence used a hidden dimension of 128. The \u2206 has been set to best value of 0.54 after performing a grid search in the range of {0.4, 0.7} with step size of 0.02. The reward estimator models g \u03c9 using two hidden layers of size 512 and 256 respectively. And, h is modelled using a single hidden layer of size one. In each training iteration the policy and reward estimator are updated with continual learning to avoid catastrophic forgetting mechanism using EWC modified loss, the \u03bb value used as a parameter is set to 0.4. Also, at each training iteration the policy and reward parameters are saved if it reduces the perplexity on the validation set (calculated after running for all the batches of the training dataset) and patience is set to 3 as a stopping criterion before we terminate the training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 94, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Implementation Details", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Learning the structure of generative models without labeled data", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Stephen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Bach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Ratner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "R\u00e9", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "273--282", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen H Bach, Bryan He, Alexander Ratner, and Christopher R\u00e9. 2017. Learning the structure of generative models without labeled data. In Interna- tional Conference on Machine Learning, pages 273- 282. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Generating more interesting responses in neural conversation models with distributional constraints", |
|
"authors": [ |
|
{ |
|
"first": "Ashutosh", |
|
"middle": [], |
|
"last": "Baheti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1809.01215" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashutosh Baheti, Alan Ritter, Jiwei Li, and Bill Dolan. 2018. Generating more interesting responses in neural conversation models with distributional con- straints. arXiv preprint arXiv:1809.01215.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Unsupervised pidgin text generation by pivoting english data and self-training", |
|
"authors": [ |
|
{ |
|
"first": "Ernie", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"Ifeoluwa" |
|
], |
|
"last": "Adelani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyu", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2003.08272" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ernie Chang, David Ifeoluwa Adelani, Xiaoyu Shen, and Vera Demberg. 2020. Unsupervised pidgin text generation by pivoting english data and self-training. arXiv preprint arXiv:2003.08272.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Jointly improving language understanding and generation with quality-weighted weak supervision of automatic labeling", |
|
"authors": [ |
|
{ |
|
"first": "Ernie", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Marin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2102.03551" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ernie Chang, Vera Demberg, and Alex Marin. 2021a. Jointly improving language understand- ing and generation with quality-weighted weak su- pervision of automatic labeling. arXiv preprint arXiv:2102.03551.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Neural data-to-text generation with lm-based text augmentation", |
|
"authors": [ |
|
{ |
|
"first": "Ernie", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyu", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dawei", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2102.03556" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ernie Chang, Xiaoyu Shen, Dawei Zhu, Vera Demberg, and Hui Su. 2021b. Neural data-to-text generation with lm-based text augmentation. arXiv preprint arXiv:2102.03556.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Agent-aware dropout dqn for safe and efficient on-line dialogue policy learning", |
|
"authors": [ |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cheng", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Runzhe", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2454--2464", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lu Chen, Xiang Zhou, Cheng Chang, Runzhe Yang, and Kai Yu. 2017. Agent-aware dropout dqn for safe and efficient on-line dialogue policy learning. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2454-2464.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Few-shot nlg with pre-trained language model", |
|
"authors": [ |
|
{ |
|
"first": "Zhiyu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harini", |
|
"middle": [], |
|
"last": "Eavani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenhu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinyin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"Yang" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.09521" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2019. Few-shot nlg with pre-trained language model. arXiv preprint arXiv:1904.09521.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Jaap Kamps, and Bernhard Sch\u00f6lkopf. 2017. Fidelity-weighted learning", |
|
"authors": [ |
|
{ |
|
"first": "Mostafa", |
|
"middle": [], |
|
"last": "Dehghani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arash", |
|
"middle": [], |
|
"last": "Mehrjou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.02799" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, and Bernhard Sch\u00f6lkopf. 2017. Fidelity-weighted learning. arXiv preprint arXiv:1711.02799.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Towards end-to-end reinforcement learning of dialogue agents for information access", |
|
"authors": [ |
|
{ |
|
"first": "Bhuwan", |
|
"middle": [], |
|
"last": "Dhingra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lihong", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiujun", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yun-Nung", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Faisal", |
|
"middle": [], |
|
"last": "Ahmed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1609.00777" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2016. Towards end-to-end reinforcement learning of dia- logue agents for information access. arXiv preprint arXiv:1609.00777.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Wizard of wikipedia: Knowledge-powered conversational agents", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Dinan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Roller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kurt", |
|
"middle": [], |
|
"last": "Shuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angela", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1811.01241" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Sounding board: A user-centric and content-driven social chatbot", |
|
"authors": [ |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "Sap", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Holtzman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mari", |
|
"middle": [], |
|
"last": "Ostendorf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "96--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Fang, Hao Cheng, Maarten Sap, Elizabeth Clark, Ari Holtzman, Yejin Choi, Noah A. Smith, and Mari Ostendorf. 2018. Sounding board: A user-centric and content-driven social chatbot. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Demonstrations, pages 96-100, New Orleans, Louisiana. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Policy networks with two-stage training for dialogue systems", |
|
"authors": [ |
|
{ |
|
"first": "Mehdi", |
|
"middle": [], |
|
"last": "Fatemi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Layla", |
|
"middle": [ |
|
"El" |
|
], |
|
"last": "Asri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannes", |
|
"middle": [], |
|
"last": "Schulz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaheer", |
|
"middle": [], |
|
"last": "Suleman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.03152" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. 2016. Policy networks with two-stage training for dialogue systems. arXiv preprint arXiv:1606.03152.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Multi-view feature representation for dialogue generation with bidirectional distillation", |
|
"authors": [ |
|
{ |
|
"first": "Shaoxiong", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuancheng", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2102.10780" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shaoxiong Feng, Xuancheng Ren, Kan Li, and Xu Sun. 2021. Multi-view feature representation for di- alogue generation with bidirectional distillation. arXiv preprint arXiv:2102.10780.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Classification in the presence of label noise: a survey", |
|
"authors": [ |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Fr\u00e9nay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Verleysen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "IEEE transactions on neural networks and learning systems", |
|
"volume": "25", |
|
"issue": "", |
|
"pages": "845--869", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beno\u00eet Fr\u00e9nay and Michel Verleysen. 2013. Classifica- tion in the presence of label noise: a survey. IEEE transactions on neural networks and learning sys- tems, 25(5):845-869.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Learning dialog policies from weak demonstrations", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Gordon-Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [ |
|
"John" |
|
], |
|
"last": "Gorinski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shay B", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.11054" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel Gordon-Hall, Philip John Gorinski, and Shay B Cohen. 2020a. Learning dialog policies from weak demonstrations. arXiv preprint arXiv:2004.11054.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Gerasimos Lampouras, and Ignacio Iacobacci. 2020b. Show us the way: Learning to manage dialog from demonstrations", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Gordon-Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [ |
|
"John" |
|
], |
|
"last": "Gorinski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.08114" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel Gordon-Hall, Philip John Gorinski, Gerasimos Lampouras, and Ignacio Iacobacci. 2020b. Show us the way: Learning to manage dialog from demon- strations. arXiv preprint arXiv:2004.08114.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Dialogbert: Discourse-aware response generation via learning to recover and rank utterances", |
|
"authors": [ |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kang Min", |
|
"middle": [], |
|
"last": "Yoo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jung-Woo", |
|
"middle": [], |
|
"last": "Ha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2012.01775" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaodong Gu, Kang Min Yoo, and Jung-Woo Ha. 2020. Dialogbert: Discourse-aware response generation via learning to recover and rank utterances. arXiv preprint arXiv:2012.01775.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Have your text and use it too! end-to-end neural data-to-text generation with semantic fidelity", |
|
"authors": [ |
|
{ |
|
"first": "Hamza", |
|
"middle": [], |
|
"last": "Harkous", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isabel", |
|
"middle": [], |
|
"last": "Groves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Saffari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.06577" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hamza Harkous, Isabel Groves, and Amir Saffari. 2020. Have your text and use it too! end-to-end neural data-to-text generation with semantic fidelity. arXiv preprint arXiv:2004.06577.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Decoupling strategy and generation in negotiation dialogues", |
|
"authors": [ |
|
{ |
|
"first": "He", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anusha", |
|
"middle": [], |
|
"last": "Balakrishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1808.09637" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and gener- ation in negotiation dialogues. arXiv preprint arXiv:1808.09637.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Exploring personalized neural conversational models", |
|
"authors": [ |
|
{ |
|
"first": "Satwik", |
|
"middle": [], |
|
"last": "Kottur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V\u00edtor", |
|
"middle": [], |
|
"last": "Carvalho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IJCAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3728--3734", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Satwik Kottur, Xiaoyu Wang, and V\u00edtor Carvalho. 2017. Exploring personalized neural conversational models. In IJCAI, pages 3728-3734.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Building an open domain socialbot with self-dialogues", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Krause", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Damonte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Dobre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Duma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joachim", |
|
"middle": [], |
|
"last": "Fainberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Federico", |
|
"middle": [], |
|
"last": "Fancellu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emmanuel", |
|
"middle": [], |
|
"last": "Kahembwe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianpeng", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Webber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1709.09816" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Krause, Marco Damonte, Mihai Dobre, Daniel Duma, Joachim Fainberg, Federico Fancellu, Em- manuel Kahembwe, Jianpeng Cheng, and Bonnie Webber. 2017. Edina: Building an open do- main socialbot with self-dialogues. arXiv preprint arXiv:1709.09816.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Toward continual learning for conversational agents", |
|
"authors": [ |
|
{ |
|
"first": "Sungjin", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1712.09943" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sungjin Lee. 2017. Toward continual learn- ing for conversational agents. arXiv preprint arXiv:1712.09943.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures", |
|
"authors": [ |
|
{ |
|
"first": "Wenqiang", |
|
"middle": [], |
|
"last": "Lei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xisen", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min-Yen", |
|
"middle": [], |
|
"last": "Kan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhaochun", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiangnan", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dawei", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1437--1447", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with sin- gle sequence-to-sequence architectures. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1437-1447.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A diversity-promoting objective function for neural conversation models", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1510.03055" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objec- tive function for neural conversation models. arXiv preprint arXiv:1510.03055.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Deep reinforcement learning for dialogue generation", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Monroe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.01541" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016. Deep rein- forcement learning for dialogue generation. arXiv preprint arXiv:1606.01541.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Adversarial learning for neural dialogue generation", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Monroe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianlin", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S\u00e9bastien", |
|
"middle": [], |
|
"last": "Jean", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1701.06547" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li, Will Monroe, Tianlin Shi, S\u00e9bastien Jean, Alan Ritter, and Dan Jurafsky. 2017a. Adversar- ial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Dailydialog: A manually labelled multi-turn dialogue dataset", |
|
"authors": [ |
|
{ |
|
"first": "Yanran", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyu", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenjie", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ziqiang", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuzi", |
|
"middle": [], |
|
"last": "Niu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of The 8th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017b. Dailydialog: A man- ually labelled multi-turn dialogue dataset. In Pro- ceedings of The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017).", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Guided dialog policy learning without adversarial learning in the loop", |
|
"authors": [ |
|
{ |
|
"first": "Ziming", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungjin", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Baolin", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinchao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shahin", |
|
"middle": [], |
|
"last": "Shayandeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.03267" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ziming Li, Sungjin Lee, Baolin Peng, Jinchao Li, Shahin Shayandeh, and Jianfeng Gao. 2020. Guided dialog policy learning without adversarial learning in the loop. arXiv preprint arXiv:2004.03267.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation", |
|
"authors": [ |
|
{ |
|
"first": "Chia-Wei", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Lowe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iulian", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Serban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Noseworthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Charlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joelle", |
|
"middle": [], |
|
"last": "Pineau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1603.08023" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation met- rics for dialogue response generation. arXiv preprint arXiv:1603.08023.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Towards an automatic turing test: Learning to evaluate dialogue responses", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Lowe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Noseworthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iulian", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Serban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Angelard-Gontier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joelle", |
|
"middle": [], |
|
"last": "Pineau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1708.07149" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Lowe, Michael Noseworthy, Iulian V Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. arXiv preprint arXiv:1708.07149.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Continual learning in task-oriented dialogue systems", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Madotto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhaojiang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhenpeng", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Seungwhan", |
|
"middle": [], |
|
"last": "Moon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Crook", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunjoon", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiguang", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2012.15504" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Se- ungwhan Moon, Paul Crook, Bing Liu, Zhou Yu, Eu- njoon Cho, and Zhiguang Wang. 2020. Continual learning in task-oriented dialogue systems. arXiv preprint arXiv:2012.15504.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Neural belief tracker: Data-driven dialogue state tracking", |
|
"authors": [ |
|
{ |
|
"first": "Nikola", |
|
"middle": [], |
|
"last": "Mrk\u0161i\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diarmuid\u00f3", |
|
"middle": [], |
|
"last": "S\u00e9aghdha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tsung-Hsien", |
|
"middle": [], |
|
"last": "Wen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Blaise", |
|
"middle": [], |
|
"last": "Thomson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1777--1788", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikola Mrk\u0161i\u0107, Diarmuid\u00d3 S\u00e9aghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neu- ral belief tracker: Data-driven dialogue state track- ing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1777-1788, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Nitish Shirish Keskar, and Caiming Xiong. 2020. Unsupervised paraphrase generation via dynamic blocking", |
|
"authors": [ |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Niu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Semih", |
|
"middle": [], |
|
"last": "Yavuz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yingbo", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2010.12885" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tong Niu, Semih Yavuz, Yingbo Zhou, Huan Wang, Nitish Shirish Keskar, and Caiming Xiong. 2020. Unsupervised paraphrase generation via dynamic blocking. arXiv preprint arXiv:2010.12885.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Composite task-completion dialogue policy learning via hierarchical deep reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Baolin", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiujun", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lihong", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Asli", |
|
"middle": [], |
|
"last": "Celikyilmaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungjin", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kam-Fai", |
|
"middle": [], |
|
"last": "Wong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1704.03084" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baolin Peng, Xiujun Li, Lihong Li, Jianfeng Gao, Asli Celikyilmaz, Sungjin Lee, and Kam-Fai Wong. 2017. Composite task-completion dialogue policy learning via hierarchical deep reinforcement learn- ing. arXiv preprint arXiv:1704.03084.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Few-shot natural language generation for task-oriented dialog", |
|
"authors": [ |
|
{ |
|
"first": "Baolin", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenguang", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunyuan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiujun", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinchao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2002.12328" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baolin Peng, Chenguang Zhu, Chunyuan Li, Xi- ujun Li, Jinchao Li, Michael Zeng, and Jian- feng Gao. 2020. Few-shot natural language gen- eration for task-oriented dialog. arXiv preprint arXiv:2002.12328.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Conversational ai: The science behind the alexa prize", |
|
"authors": [ |
|
{ |
|
"first": "Ashwin", |
|
"middle": [], |
|
"last": "Ram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rohit", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chandra", |
|
"middle": [], |
|
"last": "Khatri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anu", |
|
"middle": [], |
|
"last": "Venkatesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raefer", |
|
"middle": [], |
|
"last": "Gabriel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Nunn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Behnam", |
|
"middle": [], |
|
"last": "Hedayatnia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Nagar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1801.03604" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, et al. 2018. Conversational ai: The science behind the alexa prize. arXiv preprint arXiv:1801.03604.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Snorkel: Rapid training data creation with weak supervision", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Ratner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Bach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Henry", |
|
"middle": [], |
|
"last": "Ehrenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Fries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sen", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "R\u00e9", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "The VLDB Journal", |
|
"volume": "29", |
|
"issue": "2", |
|
"pages": "709--730", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher R\u00e9. 2020. Snorkel: Rapid training data creation with weak su- pervision. The VLDB Journal, 29(2):709-730.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Sentencebert: Sentence embeddings using siamese bertnetworks", |
|
"authors": [ |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Reimers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.10084" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. arXiv preprint arXiv:1908.10084.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Building end-to-end dialogue systems using generative hierarchical neural network models", |
|
"authors": [ |
|
{ |
|
"first": "Iulian", |
|
"middle": [], |
|
"last": "Serban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sordoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joelle", |
|
"middle": [], |
|
"last": "Pineau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iulian Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hier- archical neural network models. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 30.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "A hierarchical latent variable encoder-decoder model for generating dialogues", |
|
"authors": [ |
|
{ |
|
"first": "Iulian", |
|
"middle": [], |
|
"last": "Vlad Serban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sordoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Lowe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Charlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joelle", |
|
"middle": [], |
|
"last": "Pineau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3295--3301", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, page 3295-3301. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Bootstrapping a neural conversational agent with dialogue self-play, crowdsourcing and on-line reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Pararth", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dilek", |
|
"middle": [], |
|
"last": "Hakkani-Tur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gokhan", |
|
"middle": [], |
|
"last": "Tur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "41--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pararth Shah, Dilek Hakkani-Tur, Bing Liu, and Gokhan Tur. 2018. Bootstrapping a neural conversa- tional agent with dialogue self-play, crowdsourcing and on-line reinforcement learning. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), pages 41-51.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Neural responding machine for short-text conversation", |
|
"authors": [ |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Shang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengdong", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1503.02364" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neu- ral responding machine for short-text conversation. arXiv preprint arXiv:1503.02364.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Nexus network: Connecting the preceding and the following in dialogue generation", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoyu", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenjie", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dietrich", |
|
"middle": [], |
|
"last": "Klakow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4316--4327", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoyu Shen, Hui Su, Wenjie Li, and Dietrich Klakow. 2018. Nexus network: Connecting the preceding and the following in dialogue generation. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 4316- 4327.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Discriminative deep dyna-q: Robust planning for dialogue policy learning", |
|
"authors": [ |
|
{ |
|
"first": "Shang-Yu", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiujun", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingjing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yun-Nung", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1808.09442" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shang-Yu Su, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Yun-Nung Chen. 2018. Discriminative deep dyna-q: Robust planning for dialogue policy learn- ing. arXiv preprint arXiv:1808.09442.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1409.3215" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. arXiv preprint arXiv:1409.3215.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Multi-agent task-oriented dialog policy learning with role-aware reward decomposition", |
|
"authors": [ |
|
{ |
|
"first": "Ryuichi", |
|
"middle": [], |
|
"last": "Takanobu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Runze", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minlie", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.03809" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryuichi Takanobu, Runze Liang, and Minlie Huang. 2020. Multi-agent task-oriented dialog policy learn- ing with role-aware reward decomposition. arXiv preprint arXiv:2004.03809.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Guided dialog policy learning: Reward estimation for multi-domain task-oriented dialog", |
|
"authors": [ |
|
{ |
|
"first": "Ryuichi", |
|
"middle": [], |
|
"last": "Takanobu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanlin", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minlie", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.10719" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryuichi Takanobu, Hanlin Zhu, and Minlie Huang. 2019. Guided dialog policy learning: Reward esti- mation for multi-domain task-oriented dialog. arXiv preprint arXiv:1908.10719.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Ruber: An unsupervised method for automatic evaluation of open-domain dialog systems", |
|
"authors": [ |
|
{ |
|
"first": "Chongyang", |
|
"middle": [], |
|
"last": "Tao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lili", |
|
"middle": [], |
|
"last": "Mou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongyan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. Ruber: An unsupervised method for au- tomatic evaluation of open-domain dialog systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "A neural conversational model", |
|
"authors": [ |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1506.05869" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. arXiv preprint arXiv:1506.05869.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Self-supervised dialogue learning", |
|
"authors": [ |
|
{ |
|
"first": "Jiawei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"Yang" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.00448" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiawei Wu, Xin Wang, and William Yang Wang. 2019. Self-supervised dialogue learning. arXiv preprint arXiv:1907.00448.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Learning matching models with weak supervision for response selection in retrieval-based chatbots", |
|
"authors": [ |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhoujun", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1805.02333" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu Wu, Wei Wu, Zhoujun Li, and Ming Zhou. 2018. Learning matching models with weak supervision for response selection in retrieval-based chatbots. arXiv preprint arXiv:1805.02333.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "Topic aware neural response generation", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yalou", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Ying", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 31.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "Hierarchical recurrent attention network for response generation", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yalou", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen Xing, Yu Wu, Wei Wu, Yalou Huang, and Ming Zhou. 2018. Hierarchical recurrent attention net- work for response generation. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 32.", |
|
"links": null |
|
}, |
|
"BIBREF57": { |
|
"ref_id": "b57", |
|
"title": "Automatic evaluation of neural personality-based chatbots", |
|
"authors": [ |
|
{ |
|
"first": "Yujie", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raquel", |
|
"middle": [], |
|
"last": "Fern\u00e1ndez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.00472" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yujie Xing and Raquel Fern\u00e1ndez. 2018. Auto- matic evaluation of neural personality-based chat- bots. arXiv preprint arXiv:1810.00472.", |
|
"links": null |
|
}, |
|
"BIBREF58": { |
|
"ref_id": "b58", |
|
"title": "Towards explainable and controllable open domain dialogue generation with dialogue acts", |
|
"authors": [ |
|
{ |
|
"first": "Can", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1807.07255" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Can Xu, Wei Wu, and Yu Wu. 2018a. Towards ex- plainable and controllable open domain dialogue generation with dialogue acts. arXiv preprint arXiv:1807.07255.", |
|
"links": null |
|
}, |
|
"BIBREF59": { |
|
"ref_id": "b59", |
|
"title": "Learning an effective context-response matching model with self-supervised tasks for retrieval-based dialogues", |
|
"authors": [ |
|
{ |
|
"first": "Ruijian", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chongyang", |
|
"middle": [], |
|
"last": "Tao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daxin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xueliang", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dongyan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruijian Xu, Chongyang Tao, Daxin Jiang, Xueliang Zhao, Dongyan Zhao, and Rui Yan. 2020. Learning an effective context-response matching model with self-supervised tasks for retrieval-based dialogues.", |
|
"links": null |
|
}, |
|
"BIBREF60": { |
|
"ref_id": "b60", |
|
"title": "Better conversations by modeling, filtering, and optimizing for coherence and diversity", |
|
"authors": [ |
|
{ |
|
"first": "Xinnuo", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Du\u0161ek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Konstas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Verena", |
|
"middle": [], |
|
"last": "Rieser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1809.06873" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xinnuo Xu, Ond\u0159ej Du\u0161ek, Ioannis Konstas, and Ver- ena Rieser. 2018b. Better conversations by model- ing, filtering, and optimizing for coherence and di- versity. arXiv preprint arXiv:1809.06873.", |
|
"links": null |
|
}, |
|
"BIBREF61": { |
|
"ref_id": "b61", |
|
"title": "Hierarchical text generation and planning for strategic dialogue", |
|
"authors": [ |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Yarats", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5591--5599", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Denis Yarats and Mike Lewis. 2018. Hierarchical text generation and planning for strategic dialogue. In In- ternational Conference on Machine Learning, pages 5591-5599. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF62": { |
|
"ref_id": "b62", |
|
"title": "Recosa: Detecting the relevant contexts with self-attention for multi-turn dialogue generation", |
|
"authors": [ |
|
{ |
|
"first": "Hainan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanyan", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiafeng", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xueqi", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.05339" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hainan Zhang, Yanyan Lan, Liang Pang, Jiafeng Guo, and Xueqi Cheng. 2019. Recosa: Detecting the rel- evant contexts with self-attention for multi-turn dia- logue generation. arXiv preprint arXiv:1907.05339.", |
|
"links": null |
|
}, |
|
"BIBREF63": { |
|
"ref_id": "b63", |
|
"title": "Personalizing dialogue agents: I have a dog", |
|
"authors": [ |
|
{ |
|
"first": "Saizheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Dinan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jack", |
|
"middle": [], |
|
"last": "Urbanek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Szlam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1801.07243" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243.", |
|
"links": null |
|
}, |
|
"BIBREF64": { |
|
"ref_id": "b64", |
|
"title": "Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Tiancheng", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxine", |
|
"middle": [], |
|
"last": "Eskenazi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.02560" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tiancheng Zhao and Maxine Eskenazi. 2016. To- wards end-to-end learning for dialog state tracking and management using deep reinforcement learning. arXiv preprint arXiv:1606.02560.", |
|
"links": null |
|
}, |
|
"BIBREF65": { |
|
"ref_id": "b65", |
|
"title": "Learning a simple and effective model for multi-turn response generation with auxiliary tasks", |
|
"authors": [ |
|
{ |
|
"first": "Yufan", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Can", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.01972" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yufan Zhao, Can Xu, and Wei Wu. 2020. Learning a simple and effective model for multi-turn response generation with auxiliary tasks. arXiv preprint arXiv:2004.01972.", |
|
"links": null |
|
}, |
|
"BIBREF66": { |
|
"ref_id": "b66", |
|
"title": "An affect-rich neural conversational model with biased attention and weighted cross-entropy loss", |
|
"authors": [ |
|
{ |
|
"first": "Peixiang", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunyan", |
|
"middle": [], |
|
"last": "Miao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "7492--7500", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peixiang Zhong, Di Wang, and Chunyan Miao. 2019. An affect-rich neural conversational model with bi- ased attention and weighted cross-entropy loss. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 33, pages 7492-7500.", |
|
"links": null |
|
}, |
|
"BIBREF67": { |
|
"ref_id": "b67", |
|
"title": "Emotional chatting machine: Emotional conversation generation with internal and external memory", |
|
"authors": [ |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minlie", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianyang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyan", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting ma- chine: Emotional conversation generation with in- ternal and external memory. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 32.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "BERT based Encoder-Decoder with Semantic Coherence and Relevance. Similarly, Consistent Flow loss is also calculated using encoder.", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Dialogue Policy LearningRequire: Pre-Training corpus P , Dialogue Corpus D.1: Modules M = {Semantic Relevance, SemanticCoherence, Consistent Flow} 2: Do Agent training on P as in Section 3.6 jointly with modules M 3: User \u00b5 supervised training on P. 4: for each training iteration do5:Sample dialogues D H from D randomly.6:Fine-tune user simulator \u00b5 on D H .7:Fine-tune Agent and M on D H jointly.8:Collect dialog samples D \u03c0 by executing the dialog policy \u03c0 and interacting with \u00b5, a u \u223c \u00b5(\u2022|s u ), a \u223c \u03c0(\u2022|s) where s and s u is updated each time after getting response from user and agent respectively.9:", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Automatic metrics comparison with baselines. Results in bold indicate the best performing model on the corresponding metrics.", |
|
"html": null, |
|
"content": "<table><tr><td/><td>DailyDialog</td><td/></tr><tr><td>Setting</td><td colspan=\"3\">RL-Win RL-Lose Tie</td></tr><tr><td colspan=\"2\">Single-Turn general quality 0.41</td><td>0.28</td><td>0.31</td></tr><tr><td colspan=\"2\">Single-Turn ease to answer 0.55</td><td>0.12</td><td>0.33</td></tr><tr><td colspan=\"2\">Multi-turn general quality 0.76</td><td>0.13</td><td>0.11</td></tr><tr><td/><td>PersonaChat</td><td/></tr><tr><td>Setting</td><td colspan=\"3\">RL-Win RL-Lose Tie</td></tr><tr><td colspan=\"2\">Single turn general quality 0.36</td><td>0.22</td><td>0.42</td></tr><tr><td colspan=\"2\">Single-Turn ease to answer 0.51</td><td>0.14</td><td>0.35</td></tr><tr><td colspan=\"2\">Multi-turn general quality 0.71</td><td>0.17</td><td>0.12</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Human Evaluation Results. Ratios are calculated after taking majority vote among the decisions made by three judges.", |
|
"html": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |