|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:35:34.563610Z" |
|
}, |
|
"title": "Let's Chat: Understanding User Expectations in Socialbot Interactions", |
|
"authors": [ |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Soper", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Erin", |
|
"middle": [], |
|
"last": "Pacquetet", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Sougata", |
|
"middle": [], |
|
"last": "Saha", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Souvik", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Rohini", |
|
"middle": [], |
|
"last": "Srihari", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper analyzes data from the 2021 Amazon Alexa Prize Socialbot Grand Challenge 4, in order to better understand the differences between human-computer interactions (HCI) in a socialbot setting and conventional humanto-human interactions. We find that because socialbots are a new genre of HCI, we are still negotiating norms to guide interactions in this setting. We present several notable patterns in user behavior toward socialbots, which have important implications for guiding future work in the development of conversational agents.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper analyzes data from the 2021 Amazon Alexa Prize Socialbot Grand Challenge 4, in order to better understand the differences between human-computer interactions (HCI) in a socialbot setting and conventional humanto-human interactions. We find that because socialbots are a new genre of HCI, we are still negotiating norms to guide interactions in this setting. We present several notable patterns in user behavior toward socialbots, which have important implications for guiding future work in the development of conversational agents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In recent years, it has become increasingly common for humans to interact with computers through natural language, either through speech (e.g. voice assistants) or through text (e.g. customer service chatbots). Most of these interactions have a specific functional goal; users may ask a bot to perform tasks such as giving the weather forecast, setting a timer, or making a dinner reservation. It is less common for users to engage in purely social conversations with a bot -chit-chat remains a primarily human mode of language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we explore data collected during the Alexa Prize Socialbot Grand Challenge 4 1 (Ram et al., 2018; , where teams designed chatbots to have social 'chit-chat' conversations with humans, with the goal of mimicking human interactions. Users conversed orally with socialbots via an Alexa-enabled device. We analyze this data in order to better understand user behavior: how do the human-bot interactions differ in nature from typical human conversation? What are users' expectations of a socialbot, and how can we develop socialbots which better meet these expectations? The human-centered analysis of socialbot interactions presented here aims to inform future research in developing natural and engaging conversational agents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 112, |
|
"text": "(Ram et al., 2018;", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Of course, the quality of the bot's responses plays an important part in how the user interacts with it; if the bot's responses aren't human-like, users won't treat it like a human. In this paper, our primary goal is not to evaluate the quality of this particular socialbot, but rather to get a sense of what users want from socialbots in general. Once we understand user expectations, we can design socialbots which better satisfy these expectations. The rest of this paper is organized as follows: in \u00a72 we summarize previous work studying conversation, in both human-to-human and HCI settings. Next, we analyze new Alexa Prize data: in \u00a73 we describe ways in which users treated the bot the same as a human, and in \u00a74 we highlight ways that users behave differently with the bot than they would with a human. We discuss the implications of our analysis in \u00a75, and finally conclude in \u00a76.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There is a long tradition of literature studying the social and linguistic rules of human discourse. H.P. Grice, in particular, formalized many of the underlying assumptions that we make when conversing with humans. His cooperative principle holds that speakers must work together to negotiate the terms of a conversation (Grice, 1989) . He further breaks this principle down into four maxims of conversation (quantity, quality, rela-tion, and manner) which specify the assumptions required for cooperative conversations. Other work has also highlighted the importance of established scripts for different scenarios (Hymes, 1972; Tomkins, 1987) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 335, |
|
"text": "(Grice, 1989)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 616, |
|
"end": 629, |
|
"text": "(Hymes, 1972;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 630, |
|
"end": 644, |
|
"text": "Tomkins, 1987)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The history of research on HCI is shorter but vibrant. Early work questioned how we should conceptualize AI, and made predictions about how more human-like computers might fit into our lives (Mori, 1970; Winograd et al., 1986) . As conversational agents became more widespread, these predictions have been put to the test, with two major patterns surfacing:", |
|
"cite_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 203, |
|
"text": "(Mori, 1970;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 226, |
|
"text": "Winograd et al., 1986)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The first pattern is that humans tend to treat computers as if they were humans. The Computers as Social Actors (CASA) paradigm (Nass and Moon, 2000) holds that people will \"mindlessly\" apply existing social scripts to interactions with computers. In an early study of HCI, Nass and Moon (2000) showed that people demonstrated politeness and applied gender stereotypes to computers, even though they were aware that such behavior didn't make sense in the context. Posard and Rinderknecht (2015) show that participants in a trust game behaved the same toward their partner no matter whether they believed the partner to be human or computer. Such results support the idea that humans tend to apply existing social scripts to computers, even when they are aware that the scripts may not make sense for the situation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 149, |
|
"text": "(Nass and Moon, 2000)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 294, |
|
"text": "Nass and Moon (2000)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 464, |
|
"end": 494, |
|
"text": "Posard and Rinderknecht (2015)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Assuming that user expectations of computers are identical to their expectations of humans may be overly simplistic, however; in other studies of HCI, a different pattern emerges. Mori (1970) posited that increased humanness will increase a computer's likeability up to a certain point, past which it will become 'uncanny' or creepy, a phenomenon which he dubs The Uncanny Valley. The Uncanny Valley of Mind theory holds that people are uncomfortable with computers that seem too human. Gray and Wegner (2012) find that computers perceived to have experience (being able to taste food or feel sad) are unsettling, whereas computers perceived to have agency (being able to retrieve a weather report or make a dinner reservation) are not. Clark et al. (2019) found in a series of interviews that users have different priorities in conversing with computers versus other humans. Shi et al. (2020) found that people were less likely to be persuaded to donate to a charity when they perceived their interlocutor to be a computer. Other recent studies have found similar differences in interactions with virtual assistants (V\u00f6lkel et al., 2021; Porcheron et al., 2018) . All of this evidence suggests that, while people may default to existing social scripts in interacting with computers, they may not be comfortable treating a computer identically to a human.", |
|
"cite_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 191, |
|
"text": "Mori (1970)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 509, |
|
"text": "Gray and Wegner (2012)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 737, |
|
"end": 756, |
|
"text": "Clark et al. (2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 876, |
|
"end": 893, |
|
"text": "Shi et al. (2020)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1117, |
|
"end": 1138, |
|
"text": "(V\u00f6lkel et al., 2021;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1139, |
|
"end": 1162, |
|
"text": "Porcheron et al., 2018)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this paper, we extend the existing literature on HCI to a new genre by analyzing user interactions with socialbots. We find evidence that users \"mindlessly\" apply social rules and scripts in many cases (see \u00a73) as well as evidence that users adapt their behavior when conversing with the social bot (see \u00a74). Overall, we conclude that although socialbots are designed to mimic human interactions, users have fundamentally different goals in socialbot conversations than in typical human conversation, but that the norms of socialbot interactions are still being actively negotiated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We analyze a subset of the live conversations collected by one of the finalists of Alexa Prize 2021 (Konr\u00e1d et al., 2021; Chi et al., 2021; Saha et al., 2021; Walker et al., 2021; Finch et al., 2021) . The dataset comprises 8,650 unique and unconstrained conversations conducted between June and October 2021 with English-speaking users in the US. With a total of 346,554 turns and an average of 44 turns per conversation, the dataset is almost twice the size of the existing human chat corpus Con-vAI (Logacheva et al., 2018) . Further, with a ratio of 1.1 conversations per user, the corpus significantly exceeds the number of unique users, compared to similar previous studies (V\u00f6lkel et al., 2021; Porcheron et al., 2018; V\u00f6lkel et al., 2020) . The dataset also contains user ratings measuring conversation quality on a Likert scale from 1 to 5, making it possible to analyze the impact of diverse conversational features on overall user experience. Fig. 1 depicts a sample conversation, along with some of the features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 121, |
|
"text": "(Konr\u00e1d et al., 2021;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 139, |
|
"text": "Chi et al., 2021;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 140, |
|
"end": 158, |
|
"text": "Saha et al., 2021;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 179, |
|
"text": "Walker et al., 2021;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 180, |
|
"end": 199, |
|
"text": "Finch et al., 2021)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 502, |
|
"end": 526, |
|
"text": "(Logacheva et al., 2018)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 680, |
|
"end": 701, |
|
"text": "(V\u00f6lkel et al., 2021;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 702, |
|
"end": 725, |
|
"text": "Porcheron et al., 2018;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 726, |
|
"end": 746, |
|
"text": "V\u00f6lkel et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 954, |
|
"end": 960, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
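
{

"text": "The corpus-level figures above (turns per conversation, conversations per user, and the rating distribution) reduce to simple aggregation over the transcripts. The following minimal Python sketch illustrates the arithmetic; the record layout and sample values are invented for illustration and are not the competition data or tooling.\n\nfrom statistics import mean\n\n# Invented records; each conversation has a user id, a list of turns, and an optional 1-5 rating.\nconversations = [\n    {\"user_id\": \"u1\", \"turns\": [\"hi\", \"let's chat\", \"bye\"], \"rating\": 4},\n    {\"user_id\": \"u1\", \"turns\": [\"hello\", \"tell me a joke\"], \"rating\": 5},\n    {\"user_id\": \"u2\", \"turns\": [\"hi\"], \"rating\": None},\n]\n\ntotal_turns = sum(len(c[\"turns\"]) for c in conversations)\navg_turns = total_turns / len(conversations)  # the paper reports an average of 44\nconv_per_user = len(conversations) / len({c[\"user_id\"] for c in conversations})  # reported ratio: 1.1\nrated = [c[\"rating\"] for c in conversations if c[\"rating\"] is not None]\nprint(total_turns, round(avg_turns, 1), round(conv_per_user, 2), round(mean(rated), 2))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset",

"sec_num": "3"

},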
|
{ |
|
"text": "4 How do users treat the socialbot like a human?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Conversation can serve two broad purposes: social and functional. Social conversations aim to build a rapport between the interlocutors, whereas functional conversations aim to achieve some practical goal. Clark et al. (2019) found that this dichotomy was important in explaining differences between human-to-human and human-computer conversation; their participants found social conversation less relevant when interacting with computers. We manually identified salient phrases for each conversation type (social vs. functional) from a subset of the conversations, and found that 65% of user queries are social in nature; these queries include seeking opinions, preferences, and personal anecdotes (see Appendix Fig. 2) . This shows that, contrary to the findings of Clark et al. (2019) , socialbot users actually engage in social conversation more than purely functional conversation. This suggests that the preference for functional conversation reported in Clark et al. (2019) is situational in nature, rather than a general preference in human-computer interactions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 225, |
|
"text": "Clark et al. (2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 768, |
|
"end": 787, |
|
"text": "Clark et al. (2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 961, |
|
"end": 980, |
|
"text": "Clark et al. (2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 704, |
|
"end": 720, |
|
"text": "Appendix Fig. 2)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
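
{

"text": "As a rough illustration of the salient-phrase approach described above, the sketch below labels a user query as social or functional by matching cue phrases; the phrase lists here are invented stand-ins, much smaller than the lists manually curated from the conversations.\n\n# Hypothetical cue phrases, not the curated lists used in the analysis.\nSOCIAL_CUES = (\"what do you think\", \"do you like\", \"your favorite\", \"how do you feel\")\nFUNCTIONAL_CUES = (\"can you\", \"tell me\", \"what is\", \"who directed\", \"set a timer\")\n\ndef query_type(utterance: str) -> str:\n    text = utterance.lower()\n    if any(cue in text for cue in SOCIAL_CUES):\n        return \"social\"\n    if any(cue in text for cue in FUNCTIONAL_CUES):\n        return \"functional\"\n    return \"other\"\n\nqueries = [\"What do you think about jazz?\", \"Who directed Jurassic Park?\"]\nlabels = [query_type(q) for q in queries]\nprint(labels, labels.count(\"social\") / len(labels))  # 0.5 here; the paper reports ~65% social overall",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset",

"sec_num": "3"

},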
|
{ |
|
"text": "Another way that user behavior towards the socialbot mimics human conversation is the use of indirectness. Around 21% of the socialbot's Yes/No queries result in a user response which does not include yes or no. In these cases, the bot must infer the connection between the question and the user's answer as in (1), where the user's answer implies no.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(1) BOT: \"Whenever I have a craving, I order food online from my favorite restaurant. Do you?\" USER: \"I do drive through.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Making the necessary inferences to understand and appropriately respond to such indirect responses is quite difficult for conversational agents, but users assume that the bot can follow their implicatures as easily as a human would. This evidence seems to support the CASA theory, showing that humans mindlessly apply human expectations to the bot.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
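
{

"text": "The 21% figure above can be approximated with a purely lexical check: flag the user turn that follows a bot yes/no question when it contains neither \"yes\" nor \"no\". A minimal sketch under that assumption (the word set is illustrative; the analysis code itself is not reproduced here):\n\nimport re\n\nYES_NO = {\"yes\", \"no\"}  # the stated criterion; variants such as \"yeah\" or \"nope\" could be added\n\ndef lacks_yes_no(answer: str) -> bool:\n    \"\"\"True if a reply to a yes/no question contains neither 'yes' nor 'no' as a word.\"\"\"\n    tokens = re.findall(r\"[a-z']+\", answer.lower())\n    return YES_NO.isdisjoint(tokens)\n\nreplies = [\"I do drive through.\", \"Yes, all the time.\", \"No thanks.\"]\nprint([lacks_yes_no(r) for r in replies])  # [True, False, False]; ~21% of such replies in the corpus lack yes/no",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset",

"sec_num": "3"

},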
|
{ |
|
"text": "While a surprisingly high proportion of user queries are social in nature, that leaves 35% of queries that are functional in nature, including requests for the bot to perform a task (Can you sing please?), or provide information (Who directed Jurassic Park?). While not as frequent as social queries in our data, functional queries are still much more common than would be expected in human conversation. Functional queries generally lead to higher ratings on average than social queries (see Appendix Fig. 2 for a detailed breakdown). This suggests users' preference for functional interactions with computers. This could also be explained by the bot performing better in a functional mode than social, or by preconceived user expectations from interactions with other bots. However, although this socialbot will answer factual questions, it does not act as a smart assistant and will reject requests to perform Alexaassistant commands.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 502, |
|
"end": 508, |
|
"text": "Fig. 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "How do users treat a bot differently from a human?", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Another clear difference between socialbot and human conversations is the violation of traditional Gricean maxims. As is customary in the US, the socialbot begins by asking the user how they are doing. In human conversation, this question is almost invariably followed by some form of \"I'm fine. And you?\" Such phatic conversational openings serve to establish a rapport between speakers. By contrast, in the socialbot data we find that in 9.3% of cases, the user disregards this greeting and starts a new topic, as in (2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How do users treat a bot differently from a human?", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(2) BOT: \"Hi. How's your day going so far?\" USER: \"Do you want me to tell a joke?\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How do users treat a bot differently from a human?", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We find this type of abrupt shift also happens beyond the initial \"How are you?\" exchange. Users don't feel obligated to obey the Gricean maxim of relevance by responding directly to queries, as they would in human conversation, because the bot is programmed to respond to any queries and try to continue the conversation. Using high-precision keyword-based mappings to detect topics from entities, and subsequently incorporating logic to identify switches in a conversation, we observe abrupt topic changes in 4% of the user turns, such as (3):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How do users treat a bot differently from a human?", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(3) BOT: \"Ok. So, i wanted to know, what's your favorite ice cream flavor?\" USER: \"Let's talk about aliens.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How do users treat a bot differently from a human?", |
|
"sec_num": "5" |
|
}, |
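
{

"text": "A minimal sketch of the keyword-based topic detection and switch counting described above, with a hypothetical keyword-to-topic lexicon (the team's actual mappings and switch logic are not reproduced here):\n\n# Hypothetical lexicon; the real keyword-to-topic mappings are far larger.\nTOPIC_KEYWORDS = {\n    \"food\": {\"ice cream\", \"restaurant\", \"flavor\", \"pizza\"},\n    \"space\": {\"aliens\", \"mars\", \"rocket\"},\n    \"movies\": {\"movie\", \"film\", \"director\"},\n}\n\ndef detect_topic(utterance):\n    text = utterance.lower()\n    for topic, keywords in TOPIC_KEYWORDS.items():\n        if any(kw in text for kw in keywords):\n            return topic\n    return None\n\ndef count_abrupt_switches(turns):\n    \"\"\"Count turns whose detected topic differs from the most recent detected topic.\"\"\"\n    switches, previous = 0, None\n    for turn in turns:\n        topic = detect_topic(turn)\n        if topic is not None and previous is not None and topic != previous:\n            switches += 1\n        previous = topic or previous\n    return switches\n\nprint(count_abrupt_switches([\"what's your favorite ice cream flavor?\", \"let's talk about aliens.\"]))  # 1: food -> space",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "How do users treat a bot differently from a human?",

"sec_num": "5"

},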
|
{ |
|
"text": "In comparison, topic changes in human-tohuman conversations generally occur in specific environments and in characterizable ways, and are rarely abrupt (Maynard, 1980) . Another Gricean maxim that appears not to apply in socialbot scenarios is the maxim of quantity, which requires responses to be appropriate in length. In interactions with the socialbot, however, user responses tend to be much briefer than one would expect in a human conversation, as in (4).", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 167, |
|
"text": "(Maynard, 1980)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How do users treat a bot differently from a human?", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(4) BOT: \"What do you think of the current state of the economy?\" USER: \"Hit bad.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How do users treat a bot differently from a human?", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Overall the median utterance length for users is 3 words, much shorter than the bot's median utterance length of 21. In fact, almost 97.5% of user utterances are less than 14 words (see Appendix, Fig. 3 ). Such short responses are unusual in human conversations. This pattern might be due to the fact that users believe that the bot will be more likely to understand if they keep their responses short. Another possible explanation is that users feel it's the bot's job to drive the conversation forward, and thus take a more passive role.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 202, |
|
"text": "Fig. 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "How do users treat a bot differently from a human?", |
|
"sec_num": "5" |
|
}, |
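
{

"text": "The length statistics above amount to counting whitespace-separated tokens per utterance; a minimal sketch with invented utterances (not corpus data):\n\nfrom statistics import median\n\nuser_utterances = [\"hit bad\", \"yes\", \"let's talk about aliens\", \"i do drive through\"]\n\nlengths = [len(u.split()) for u in user_utterances]  # whitespace word counts\nshort_share = sum(n < 14 for n in lengths) / len(lengths)\nprint(median(lengths), short_share)  # 3.0 and 1.0 here; the paper reports a median of 3 words and ~97.5% under 14",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "How do users treat a bot differently from a human?",

"sec_num": "5"

},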
|
{ |
|
"text": "The above examples make clear that many conventional conversational scripts don't apply to socialbot interactions. We find that many users employ bot-specific scripts, reverting to virtual assistant commands during conversations. Example (5) demonstrates a frequent phenomenon in the data: when a user feels the bot hasn't understood them, they invoke the standard prompts which they are accustomed to using when invoking the virtual assistant, by using the \"Alexa\" command to get the bot's attention and reset the prompts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How do users treat a bot differently from a human?", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(5) USER: \"Are you okay?\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How do users treat a bot differently from a human?", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "BOT: \"I am sorry I could not hear you well. Please repeat what you said.\" USER: \"Alexa, are you okay?\" 38.9% of conversations include at least one invocation of the \"Alexa\" command. In these cases, instead of applying scripts from human conversations, users apply scripts they've learned from interacting with their virtual assistant. This tends to happen in cases where an unnatural or unsatisfactory response from the bot reminds the user that they are not chatting with a real human.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "How do users treat a bot differently from a human?", |
|
"sec_num": "5" |
|
}, |
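
{

"text": "The 38.9% figure reduces to checking each conversation for at least one user turn that begins with the wake word; a minimal sketch over invented transcripts:\n\n# Invented transcripts; each conversation is a list of user turns.\nconversations = [\n    [\"are you okay?\", \"alexa, are you okay?\"],\n    [\"let's talk about music\", \"i like jazz\"],\n]\n\ndef invokes_wake_word(turns):\n    return any(t.lower().lstrip().startswith(\"alexa\") for t in turns)\n\nshare = sum(invokes_wake_word(c) for c in conversations) / len(conversations)\nprint(share)  # 0.5 here; 38.9% of conversations in the corpus",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "How do users treat a bot differently from a human?",

"sec_num": "5"

},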
|
{ |
|
"text": "One major difference between socialbot and typical human conversations is the perceived relationship between user and bot. In the user-socialbot relationship there is more of a power imbalance than in a human conversation; users are in control. They can stop, redirect, or reboot the bot, and choose conversation topics. The bot is designed to be cooperative, arguably more than a human when it comes to abrupt topic changes or overly brief responses. Where such responses might signal hostility (or at least disinterest) to a human interlocutor, users may consider such social implications irrelevant for a socialbot conversation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Although all users are generally aware that they are speaking to a computer, some users are more willing to pretend. In the Alexa Prize, users were already users of the Alexa virtual assistant, and spoke to the socialbot on their Alexa-enabled devices. The socialbot uses the same voice as the virtual assistant, so the familiarity of the Alexa voice may foster a sense of the relationship between users and the socialbot, and allow some users to forget that they are interacting with a computer. Other users, however, will still be wary of human-like behaviors from the bot, as in (6). (6) BOT: \"I ate some pampered chef chicken salad tea sandwiches today, and it was amazing! Have you ever heard of it?\" USER: \"No, Alexa. How can you eat something? You're a computer.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The Uncanny Valley is a clear obstacle to truly natural socialbot conversations, even if thresholds vary among users. Obviously, presenting a socialbot to a user as if it were really a human would pose ethical issues, so users' awareness of the conversation's artificiality is a necessary limitation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The increasing quality and cultural salience of socialbots have led to significant advances in conversational AI. This paper analyzed conversations between an Alexa Prize socialbot and its users to better understand what users expect from socialbot interactions. We find that, because socialbots present a novel genre of conversation, users aren't always sure how to behave. Often, users react by applying human conversational norms to the socialbot; in other cases, they draw on the virtual assistant scripts acquired from using their Alexaenabled devices. Based on our above analysis of user behavior, we feel that the goal of a socialbot shouldn't be to strictly mimic human conversation. Humans may be unpleasant, have diverging opinions, or push back on certain topics. On the other hand, socialbots are designed to provide an enjoyable and entertaining experience for the user. Socialbot developers should embrace the unique aspects of the scenario, rather than attempting to conform to conventional conversational norms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We see two potential sources for the advancement of socialbot systems moving forward: first, developers should design bots to fulfill user expectations, acknowledging that these will be slightly different from human conversation norms. Second, as socialbots become more commonplace, the emergence of socialbot-specific scripts will give users a clearer guide for those interactions. Like a real conversation, the future of socialbots must involve negotiating terms: developers must adapt socialbots to user expectations, and users will in turn adjust their expectations as they become more familiar with socialbots as a mode of interaction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Figure 2: Analysis of different types of user queries. The primary Y-axis depicts the average rating associated with a query type across all conversations. The secondary Y-axis denotes the percentage of encountering each query. The primary Y-axis depicts the average rating associated with each length category across all conversations. The secondary Y-axis denotes the percentage of each length category for the bot and the user. Note that the percentage of short responses generated by the bot is very low.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendix", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Avanika Narayan, and Ashwin Paranjape. 2021. Neural, neural everywhere: Controlled generation meets scaffolded", |
|
"authors": [ |
|
{ |
|
"first": "Ethan", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Chi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chetanya", |
|
"middle": [], |
|
"last": "Rastogi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Iyabor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hari", |
|
"middle": [], |
|
"last": "Sowrirajan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ethan A. Chi, Chetanya Rastogi, Alexander Iyabor, Hari Sowrirajan, Avanika Narayan, and Ashwin Paranjape. 2021. Neural, neural everywhere: Con- trolled generation meets scaffolded, structured dia- logue.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "What makes a good conversation? challenges in designing truly conversational agents", |
|
"authors": [ |
|
{ |
|
"first": "Leigh", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadia", |
|
"middle": [], |
|
"last": "Pantidi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orla", |
|
"middle": [], |
|
"last": "Cooney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Doyle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diego", |
|
"middle": [], |
|
"last": "Garaialde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Edwards", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brendan", |
|
"middle": [], |
|
"last": "Spillane", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emer", |
|
"middle": [], |
|
"last": "Gilmartin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Murad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cosmin", |
|
"middle": [], |
|
"last": "Munteanu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leigh Clark, Nadia Pantidi, Orla Cooney, Philip Doyle, Diego Garaialde, Justin Edwards, Brendan Spillane, Emer Gilmartin, Christine Murad, Cosmin Munteanu, et al. 2019. What makes a good conver- sation? challenges in designing truly conversational agents. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1- 12.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "An approach to inference-driven dialogue management within a social chatbot", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Sarah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Finch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniil", |
|
"middle": [], |
|
"last": "Finch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Huryn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyuan", |
|
"middle": [], |
|
"last": "Hutsell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Han", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinho D", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2111.00570" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarah E Finch, James D Finch, Daniil Huryn, William Hutsell, Xiaoyuan Huang, Han He, and Jinho D Choi. 2021. An approach to inference-driven dia- logue management within a social chatbot. arXiv preprint arXiv:2111.00570.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Feeling robots and human zombies: Mind perception and the uncanny valley", |
|
"authors": [ |
|
{ |
|
"first": "Kurt", |
|
"middle": [], |
|
"last": "Gray", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Daniel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wegner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Cognition", |
|
"volume": "125", |
|
"issue": "1", |
|
"pages": "125--130", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kurt Gray and Daniel M Wegner. 2012. Feeling robots and human zombies: Mind perception and the un- canny valley. Cognition, 125(1):125-130.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Studies in the Way of Words", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Grice", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. P. Grice. 1989. Studies in the Way of Words. Cam- bridge: Harvard University Press.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Toward ethnographies of communication: The analysis of communicative events. Language and social context", |
|
"authors": [ |
|
{ |
|
"first": "Dell", |
|
"middle": [], |
|
"last": "Hymes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "21--44", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dell Hymes. 1972. Toward ethnographies of com- munication: The analysis of communicative events. Language and social context, pages 21-44.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Alquist 4.0: Towards social intelligence using generative models and dialogue personalization", |
|
"authors": [ |
|
{ |
|
"first": "Jakub", |
|
"middle": [], |
|
"last": "Konr\u00e1d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Pichl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Marek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Lorenc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Van Duy Ta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lenka", |
|
"middle": [], |
|
"last": "Kobza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "H\u00fdlov\u00e1", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jakub Konr\u00e1d, Jan Pichl, Petr Marek, Petr Lorenc, Van Duy Ta, Ond\u0159ej Kobza, Lenka H\u00fdlov\u00e1, and Jan \u0160ediv\u00fd. 2021. Alquist 4.0: Towards social intelli- gence using generative models and dialogue person- alization.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Convai dataset of topic-oriented human-to-chatbot dialogues", |
|
"authors": [ |
|
{ |
|
"first": "Varvara", |
|
"middle": [], |
|
"last": "Logacheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Burtsev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valentin", |
|
"middle": [], |
|
"last": "Malykh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vadim", |
|
"middle": [], |
|
"last": "Polulyakh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aleksandr", |
|
"middle": [], |
|
"last": "Seliverstov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "The NIPS'17 Competition: Building Intelligent Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "47--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Varvara Logacheva, Mikhail Burtsev, Valentin Malykh, Vadim Polulyakh, and Aleksandr Seliverstov. 2018. Convai dataset of topic-oriented human-to-chatbot dialogues. In The NIPS'17 Competition: Building Intelligent Systems, pages 47-57. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Placement of topic changes in conversation", |
|
"authors": [ |
|
{ |
|
"first": "Douglas", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Maynard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Douglas W. Maynard. 1980. Placement of topic changes in conversation.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The uncanny valley. Energy", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mori", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1970, |
|
"venue": "", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "33--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Mori. 1970. The uncanny valley. Energy, 7(4):33- 35.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Machines and mindlessness: Social responses to computers", |
|
"authors": [ |
|
{ |
|
"first": "Clifford", |
|
"middle": [], |
|
"last": "Nass", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Youngme", |
|
"middle": [], |
|
"last": "Moon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Journal of social issues", |
|
"volume": "56", |
|
"issue": "1", |
|
"pages": "81--103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clifford Nass and Youngme Moon. 2000. Machines and mindlessness: Social responses to computers. Journal of social issues, 56(1):81-103.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Voice interfaces in everyday life", |
|
"authors": [ |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Porcheron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Fischer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Reeves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Sharples", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin Porcheron, Joel E. Fischer, Stuart Reeves, and Sarah Sharples. 2018. Voice interfaces in everyday life. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, page 1-12, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Do people like working with computers more than human beings?", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Marek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Posard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gordon Rinderknecht", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Computers in Human Behavior", |
|
"volume": "51", |
|
"issue": "", |
|
"pages": "232--238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marek N Posard and R Gordon Rinderknecht. 2015. Do people like working with computers more than human beings? Computers in Human Behavior, 51:232-238.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Sk Jayadevan, Gene Hwang, and Art Pettigrue", |
|
"authors": [ |
|
{ |
|
"first": "Ashwin", |
|
"middle": [], |
|
"last": "Ram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rohit", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chandra", |
|
"middle": [], |
|
"last": "Khatri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anu", |
|
"middle": [], |
|
"last": "Venkatesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raefer", |
|
"middle": [], |
|
"last": "Gabriel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Nunn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Behnam", |
|
"middle": [], |
|
"last": "Hedayatnia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Nagar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "King", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, Eric King, Kate Bland, Amanda Wartick, Yi Pan, Han Song, Sk Jayadevan, Gene Hwang, and Art Pet- tigrue. 2018. Conversational ai: The science behind the alexa prize.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Proto: A neural cocktail for generating appealing conversations", |
|
"authors": [ |
|
{ |
|
"first": "Sougata", |
|
"middle": [], |
|
"last": "Saha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Souvik", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Soper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erin", |
|
"middle": [], |
|
"last": "Pacquetet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rohini", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Srihari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sougata Saha, Souvik Das, Elizabeth Soper, Erin Pac- quetet, and Rohini K. Srihari. 2021. Proto: A neural cocktail for generating appealing conversations.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Effects of persuasive dialogues: testing bot identities and inquiry strategies", |
|
"authors": [ |
|
{ |
|
"first": "Weiyan", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuewei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoo", |
|
"middle": [], |
|
"last": "Jung Oh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingwen", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Saurav", |
|
"middle": [], |
|
"last": "Sahay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--13", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weiyan Shi, Xuewei Wang, Yoo Jung Oh, Jingwen Zhang, Saurav Sahay, and Zhou Yu. 2020. Effects of persuasive dialogues: testing bot identities and inquiry strategies. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Sys- tems, pages 1-13.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Script theory. the emergence of personality", |
|
"authors": [ |
|
{ |
|
"first": "Silvan", |
|
"middle": [], |
|
"last": "Tomkins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silvan Tomkins. 1987. Script theory. the emergence of personality. eds. joel arnoff, ai rabin, and robert a. zucker.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Eliciting and analysing users' envisioned dialogues with perfect voice assistants", |
|
"authors": [ |
|
{ |
|
"first": "Sarah", |
|
"middle": [ |
|
"Theres" |
|
], |
|
"last": "V\u00f6lkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Buschek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Malin", |
|
"middle": [], |
|
"last": "Eiband", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heinrich", |
|
"middle": [], |
|
"last": "Hussmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI '21", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarah Theres V\u00f6lkel, Daniel Buschek, Malin Eiband, Benjamin R. Cowan, and Heinrich Hussmann. 2021. Eliciting and analysing users' envisioned dialogues with perfect voice assistants. In Proceedings of the 2021 CHI Conference on Human Factors in Com- puting Systems, CHI '21, New York, NY, USA. As- sociation for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Developing a Personality Model for Speech-Based Conversational Agents Using the Psycholexical Approach, page 1-14. Association for Computing Machinery", |
|
"authors": [ |
|
{ |
|
"first": "Sarah", |
|
"middle": [ |
|
"Theres" |
|
], |
|
"last": "V\u00f6lkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramona", |
|
"middle": [], |
|
"last": "Sch\u00f6del", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Buschek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clemens", |
|
"middle": [], |
|
"last": "Stachl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Verena", |
|
"middle": [], |
|
"last": "Winterhalter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "B\u00fchner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heinrich", |
|
"middle": [], |
|
"last": "Hussmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarah Theres V\u00f6lkel, Ramona Sch\u00f6del, Daniel Buschek, Clemens Stachl, Verena Winterhalter, Markus B\u00fchner, and Heinrich Hussmann. 2020. Developing a Personality Model for Speech-Based Conversational Agents Using the Psycholexical Ap- proach, page 1-14. Association for Computing Ma- chinery, New York, NY, USA.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Athena 2.0: Contextualized dialogue management for an Alexa Prize SocialBot", |
|
"authors": [ |
|
{ |
|
"first": "Marilyn", |
|
"middle": [], |
|
"last": "Walker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vrindavan", |
|
"middle": [], |
|
"last": "Harrison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juraj", |
|
"middle": [], |
|
"last": "Juraska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lena", |
|
"middle": [], |
|
"last": "Reed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Bowden", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen", |
|
"middle": [], |
|
"last": "Cui", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "124--133", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marilyn Walker, Vrindavan Harrison, Juraj Juraska, Lena Reed, Kevin Bowden, Wen Cui, Omkar Patil, and Adwait Ratnaparkhi. 2021. Athena 2.0: Con- textualized dialogue management for an Alexa Prize SocialBot. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Pro- cessing: System Demonstrations, pages 124-133, Online and Punta Cana, Dominican Republic. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Understanding computers and cognition: A new foundation for design", |
|
"authors": [ |
|
{ |
|
"first": "Terry", |
|
"middle": [], |
|
"last": "Winograd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Flores", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando F", |
|
"middle": [], |
|
"last": "Flores", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Terry Winograd, Fernando Flores, and Fernando F Flo- res. 1986. Understanding computers and cognition: A new foundation for design. Intellect Books.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Sample conversation with annotated features.", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |