{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:29:01.711390Z"
},
"title": "A simple repair mechanism can alleviate computational demands of pragmatic reasoning: simulations and complexity analysis",
"authors": [
{
"first": "Jacqueline",
"middle": [],
"last": "Van Arkel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {
"settlement": "Groningen",
"country": "the Netherlands"
}
},
"email": "[email protected]"
},
{
"first": "Marieke",
"middle": [],
"last": "Woensdregt",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Radboud University",
"location": {
"settlement": "Nijmegen",
"country": "the Netherlands"
}
},
"email": "[email protected]"
},
{
"first": "Mark",
"middle": [],
"last": "Dingemanse",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Radboud University",
"location": {
"settlement": "Nijmegen",
"country": "the Netherlands"
}
},
"email": "[email protected]"
},
{
"first": "Mark",
"middle": [],
"last": "Blokpoel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Radboud University",
"location": {
"settlement": "Nijmegen",
"country": "the Netherlands"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "How can people communicate successfully while keeping resource costs low in the face of ambiguity? We present a principled theoretical analysis comparing two strategies for disambiguation in communication: (i) pragmatic reasoning, where communicators reason about each other, and (ii) other-initiated repair, where communicators signal and resolve trouble interactively. Using agent-based simulations and computational complexity analyses, we compare the efficiency of these strategies in terms of communicative success, computation cost and interaction cost. We show that agents with a simple repair mechanism can increase efficiency, compared to pragmatic agents, by reducing their computational burden at the cost of longer interactions. We also find that efficiency is highly contingent on the mechanism, highlighting the importance of explicit formalisation and computational rigour.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "How can people communicate successfully while keeping resource costs low in the face of ambiguity? We present a principled theoretical analysis comparing two strategies for disambiguation in communication: (i) pragmatic reasoning, where communicators reason about each other, and (ii) other-initiated repair, where communicators signal and resolve trouble interactively. Using agent-based simulations and computational complexity analyses, we compare the efficiency of these strategies in terms of communicative success, computation cost and interaction cost. We show that agents with a simple repair mechanism can increase efficiency, compared to pragmatic agents, by reducing their computational burden at the cost of longer interactions. We also find that efficiency is highly contingent on the mechanism, highlighting the importance of explicit formalisation and computational rigour.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural languages are rife with ambiguity (Wasow et al., 2005 ), yet people seem to communicate efficiently regardless. How can people communicate successfully in the face of ambiguity while keeping resource costs low? There seem to be at least three strategies communicators have at their disposal. First, contextual information can be used to disambiguate the speaker's intended meaning (Piantadosi et al., 2012; Sperber and Wilson, 1986; MacDonald et al., 1994) , though context-sensitive computations are notorious in computational cognitive science for the astronomical demands they make on computation time (Fodor, 2000; Haselager, 1997; van Rooij et al., 2011) . Second, pragmatic reasoning allows taking into account the speaker's goal (e.g. 'being informative') (Grice, 1975; Sperber and Wilson, 1986; Goodman and Frank, 2016) , but this alone is not always enough to fully disambiguate meaning (Schegloff, 1992) . Finally, communicators can leverage the interaction itself by explicitly requesting clarification (e.g. by asking 'Huh?' or 'Who?') in a process known as other-initiated repair (Schegloff et al., 1977; Purver et al., 2018) . This provides a possible way for communicators to reduce their computational burden through interaction, potentially increasing communicative efficiency (Dingemanse, 2020) .",
"cite_spans": [
{
"start": 42,
"end": 61,
"text": "(Wasow et al., 2005",
"ref_id": "BIBREF36"
},
{
"start": 389,
"end": 414,
"text": "(Piantadosi et al., 2012;",
"ref_id": "BIBREF23"
},
{
"start": 415,
"end": 440,
"text": "Sperber and Wilson, 1986;",
"ref_id": "BIBREF30"
},
{
"start": 441,
"end": 464,
"text": "MacDonald et al., 1994)",
"ref_id": "BIBREF20"
},
{
"start": 613,
"end": 626,
"text": "(Fodor, 2000;",
"ref_id": "BIBREF10"
},
{
"start": 627,
"end": 643,
"text": "Haselager, 1997;",
"ref_id": "BIBREF18"
},
{
"start": 644,
"end": 667,
"text": "van Rooij et al., 2011)",
"ref_id": "BIBREF35"
},
{
"start": 771,
"end": 784,
"text": "(Grice, 1975;",
"ref_id": "BIBREF17"
},
{
"start": 785,
"end": 810,
"text": "Sperber and Wilson, 1986;",
"ref_id": "BIBREF30"
},
{
"start": 811,
"end": 835,
"text": "Goodman and Frank, 2016)",
"ref_id": "BIBREF16"
},
{
"start": 904,
"end": 921,
"text": "(Schegloff, 1992)",
"ref_id": "BIBREF27"
},
{
"start": 1101,
"end": 1125,
"text": "(Schegloff et al., 1977;",
"ref_id": "BIBREF28"
},
{
"start": 1126,
"end": 1146,
"text": "Purver et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 1302,
"end": 1320,
"text": "(Dingemanse, 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To investigate the computational plausibility of this potential gain in communicative efficiency we present a theoretical analysis of other-initiated repair and pragmatic reasoning. Following Gibson et al. (2019) , we define efficient communication as communication in which participants reach mutual understanding while requiring minimal effort in terms of resource costs (deconstructed here as the sum of computational and interactional cost). We compare a novel agent-based model of otherinitiated repair with one of pragmatic reasoning (Goodman and Frank, 2016) for both their communicative success and use of computational and interactional resources. Simulations are used to evaluate the models' success and interactional resource costs while a computational complexity analysis is used to determine the computational resource demands (van Rooij, 2008; van Rooij et al., 2019) .",
"cite_spans": [
{
"start": 192,
"end": 212,
"text": "Gibson et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 841,
"end": 858,
"text": "(van Rooij, 2008;",
"ref_id": null
},
{
"start": 859,
"end": 882,
"text": "van Rooij et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The results show that, on roughly equal terms of communicative success, agents with a simple repair mechanism can reduce their computational burden compared to pragmatic agents, at the cost of longer interactions. While this shows that an efficiency-increasing trade-off is in principle possible, the question remains whether the computational advantage scales to more complex forms of other-initiated repair. The work we present here makes two contributions: 1) a proof of concept that a simple form of repair can help communicators outsource computational demands in interaction, and 2) a framework for the careful theoretical anal-ysis of the interplay of cognitive and interactional resources in human communication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A computational model of pragmatic reasoning in communication that is widely used and has been shown to fit empirical data of human communicative behaviour well, is the rational speech act (RSA) model (Frank and Goodman, 2012; Goodman and Frank, 2016) . This model formalises communication as rational behaviour in which a speaker chooses an utterance by maximising its utility, where utility is defined as the probability that the listener will correctly infer the speaker's communicative intention 1 . This means that the speaker reasons about a listener when choosing an utterance. Likewise, the listener in the RSA model reasons about a speaker by inverting this model of rational utterance production: inferring what the speaker's most likely communicative intention is given the utterance produced (using Bayesian inference). Thus, both RSA production and RSA interpretation consist of a chain of recursive social reasoning, eventually bottoming out in a literal (i.e. zero-order) speaker or listener, which is where the interaction is grounded in semantic meaning. We take this model as our basis to implement pragmatic reasoning for disambiguation in communication.",
"cite_spans": [
{
"start": 201,
"end": 226,
"text": "(Frank and Goodman, 2012;",
"ref_id": "BIBREF11"
},
{
"start": 227,
"end": 251,
"text": "Goodman and Frank, 2016)",
"ref_id": "BIBREF16"
},
{
"start": 500,
"end": 501,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "As mentioned above, another mechanism that human communicators use to reach mutual understanding is repair (Schegloff et al., 1977; Clark and Schaefer, 1987) . Cross-linguistic work on informal face-to-face conversation has shown that repair is frequent (on average once every 1.4 minutes) and that it is highly similar in form and function across unrelated languages (Dingemanse et al., 2015) . Attested repair initiations fall into three basic types, which differ in the grasp they display of the trouble source: (i) open request (e.g. 'Huh?'), (ii) restricted request (e.g. 'Who?') and (iii) restricted offer (e.g. 'At the market?'). These types are used according to similar principles across languages, with participants requesting clarification when necessary and reusing material when possible, resulting in repair sequences that appear to minimise the joint effort of speaker and listener (Dingemanse et al., 2015; Clark and Wilkes-Gibbs, 1986) .",
"cite_spans": [
{
"start": 107,
"end": 131,
"text": "(Schegloff et al., 1977;",
"ref_id": "BIBREF28"
},
{
"start": 132,
"end": 157,
"text": "Clark and Schaefer, 1987)",
"ref_id": "BIBREF5"
},
{
"start": 368,
"end": 393,
"text": "(Dingemanse et al., 2015)",
"ref_id": "BIBREF8"
},
{
"start": 897,
"end": 922,
"text": "(Dingemanse et al., 2015;",
"ref_id": "BIBREF8"
},
{
"start": 923,
"end": 952,
"text": "Clark and Wilkes-Gibbs, 1986)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Interactive repair is a universal and frequently used mechanism for resolving trouble in communication. Here we hypothesise that it provides an affordance that inference based on context or pragmatic reasoning does not: it allows at least part of the computational burden of making inferences to be offloaded onto interaction, in effect distributing the process of reaching mutual understanding over multiple interactional turns (Dingemanse, 2020) . This can be seen as a form of cognitive offloading (Risko and Gilbert, 2016) , with turns at talk constituting material symbols that can augment cognitive processes (Clark, 2006) . In this paper we combine agent-based simulations with a computational complexity analysis to investigate the relative resource demands of pragmatic reasoning and interactive repair. We aim to find out whether other-initiated repair can increase communicative efficiency by relieving communicators of the computational demands of pragmatic reasoning, without that causing a decrease in communicative success.",
"cite_spans": [
{
"start": 429,
"end": 447,
"text": "(Dingemanse, 2020)",
"ref_id": "BIBREF7"
},
{
"start": 501,
"end": 526,
"text": "(Risko and Gilbert, 2016)",
"ref_id": "BIBREF25"
},
{
"start": 615,
"end": 628,
"text": "(Clark, 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "We use agent-based simulations to compare the communicative efficiency (in terms of both success and resource costs) of other-initiated repair (OIR) and pragmatic reasoning. As reviewed above, people use both strategies for disambiguation in natural conversation. Here, however, we separate them in order to create a baseline comparison between the two. We design two separate models: (i) an interactional model, in which agents have the ability to use repair, but do not use pragmatic reasoning, and (ii) a pragmatic model, in which agents use pragmatic reasoning, but do not have the ability to use repair. Both models of communication start from a lexicon consisting of binary signal-referent mappings (see Table 1 for an example). Depending on the model of communication (interactional or pragmatic), speakers and listeners use this lexicon in different ways in order to arrive at signal productions and interpretations.",
"cite_spans": [],
"ref_spans": [
{
"start": 710,
"end": 717,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computational models 2",
"sec_num": "3.1"
},
{
"text": "In the interactional model, agents are literal communicators who do not use pragmatic reasoning but can initiate repair. The main innovation we present here is a model of other-initiated repair Table 1 : Example of a simple lexicon. s denotes a signal, and r a referent. This lexicon has an ambiguity level of 0.5: every signal is associated with half of the referents.",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 201,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Interactional model",
"sec_num": "3.1.1"
},
{
"text": "r 1 r 2 r 3 r 4 s 1 0 1 1 0 s 2 1 0 1 0 s 3 1 1 0 0 s 4 1 0 0 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional model",
"sec_num": "3.1.1"
},
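{
"text": "To make the lexicon representation concrete, here is a minimal Python sketch (our illustration, not code from the paper) that encodes the Table 1 lexicon as a binary matrix and checks its ambiguity level, using the definition of lexicon ambiguity as mean signal ambiguity:\n\nimport numpy as np\n\n# Rows are signals s1..s4, columns are referents r1..r4 (Table 1).\nlexicon = np.array([\n    [0, 1, 1, 0],\n    [1, 0, 1, 0],\n    [1, 1, 0, 0],\n    [1, 0, 0, 1],\n])\n\n# Signal ambiguity: the fraction of referents each signal is associated with.\nsignal_ambiguity = lexicon.sum(axis=1) / lexicon.shape[1]\n# Lexicon ambiguity: mean signal ambiguity (0.5 for this lexicon).\nprint(signal_ambiguity.mean())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional model",
"sec_num": "3.1.1"
},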
{
"text": "governed by the listener's level of certainty about the speaker's intended referent. Our model consists of three parts. First, after each signal production by the speaker, we measure the listener's uncertainty as the conditional entropy of the probability distribution over referents given the signal (MacKay, 2003) . Second, we define an entropy threshold parameter which simulates the amount of uncertainty that the listener is willing to tolerate: when a listener's uncertainty falls above this threshold (i.e. uncertainty is too high), they initiate repair using an open request (which one can think of as saying 'Huh?' or 'What did you say?') (for a related use of entropy as a trigger for repair, see de Ruiter and Cummins, 2012) . Finally, we provide a simple mechanism for solving the ambiguity problem indicated by the listener: the speaker can send another signal associated with the intended referent, and the listener then performs a conjunction operation to determine what referents are in the intersection of the current signal and the previous signal(s), thereby (potentially) reducing referential uncertainty. When the conditional entropy of the listener's probability distribution over referents given the signal(s) received falls below the entropy threshold (i.e. when uncertainty is low enough), an interpretation is reached by choosing the referent that has maximum posterior probability.",
"cite_spans": [
{
"start": 301,
"end": 315,
"text": "(MacKay, 2003)",
"ref_id": "BIBREF21"
},
{
"start": 710,
"end": 735,
"text": "Ruiter and Cummins, 2012)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional model",
"sec_num": "3.1.1"
},
{
"text": "For example, imagine a speaker with an intention to communicate referent 2 who has just uttered signal 3 based on the lexicon in Table 1 . After the listener has initiated repair, the speaker utters signal 1, which leads to the association vector of [0, 1, 0, 0] after conjunction, and now the listener can be certain referent 2 is the speaker's intended referent. Below we give a computational-level description of production and interpretation in this interactional model.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Interactional model",
"sec_num": "3.1.1"
},
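{
"text": "The conjunction step can be made concrete with a short Python sketch (our illustration under the model's assumptions, not the authors' code), reproducing the worked example above: after s3 and then s1, only referent 2 remains.\n\nimport numpy as np\n\nlexicon = np.array([\n    [0, 1, 1, 0],  # s1\n    [1, 0, 1, 0],  # s2\n    [1, 1, 0, 0],  # s3\n    [1, 0, 0, 1],  # s4\n])\n\n# The listener hears s3: candidate referents are r1 and r2.\ncandidates = lexicon[2].copy()\n# After an open request ('Huh?'), the speaker sends s1;\n# the listener conjoins the two association vectors.\ncandidates &= lexicon[0]\nprint(candidates)  # [0 1 0 0] -> only r2 remains",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional model",
"sec_num": "3.1.1"
},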
{
"text": "Input: A set of signals S, a set of referents R, a lex-icon L : S \u00d7 R \u2192 B mapping signal-referent pairs to a Boolean value. We write L(s) to denote the list of values for all referents given signal s. A dialogue history D r which is a set of signals produced earlier in a conversation {s, . . . }. The dialogue history D r is relative to the intended referent r by the speaker. An order of pragmatic inference n = 0. And finally an intended referent r \u2208 R. Output: The signal s that maximizes the probability Pr",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRODUCTION",
"sec_num": null
},
{
"text": "S S 0 (s | r, L Dr ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRODUCTION",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRODUCTION",
"sec_num": null
},
{
"text": "L Dr (s, r) = L(s, r) s \u2208Dr L(s , r)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRODUCTION",
"sec_num": null
},
{
"text": "For interactional production, the following equations are relevant:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRODUCTION",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr S S 0 (s | r, L Dr ) = \u03b4 S (s|r, L Dr ) (1) \u03b4 S (s|r, L Dr ) = L Dr (s, r) s \u2208S L Dr (s , r)",
"eq_num": "(2)"
}
],
"section": "PRODUCTION",
"sec_num": null
},
{
"text": "Equation 1 shows the probability of a signal s given the intended referent r and the lexicon updated according to the dialogue history L Dr . For an interactional speaker (who uses literal production), this probability is given by Equation 2, which normalises the lexicon over signals, given the intended referent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRODUCTION",
"sec_num": null
},
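{
"text": "The following Python sketch (our illustration, not the authors' code) transcribes Equations 1 and 2, with the dialogue-history update L_Dr implemented as an elementwise product over previously sent signals. With the Table 1 lexicon, history [s3] and intended referent r2, it selects s1, matching the worked example in Section 3.1.1.\n\nimport numpy as np\n\ndef literal_production(lexicon, history, r):\n    # L_Dr: conjoin the lexicon with every signal sent earlier for r.\n    l_dr = lexicon.astype(float)\n    for s_prev in history:\n        l_dr = l_dr * lexicon[s_prev, :]\n    col = l_dr[:, r]              # association of each signal with r\n    probs = col / col.sum()       # Equation 2: normalise over signals\n    return int(np.argmax(probs))  # Equation 1: most probable signal (ties: lowest index)\n\nlexicon = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0], [1, 0, 0, 1]])\nprint(literal_production(lexicon, [2], 1))  # 0, i.e. signal s1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRODUCTION",
"sec_num": null
},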
{
"text": "Input: L, L(s), and D r as defined for the production model above. An order of pragmatic inference n = 0. An entropy threshold H t determining whether the entropy H is too high or sufficiently low. And finally an observed signal s \u2208 S. Output: L Dr (s, r) as defined for the production model above. Let Pr L L 0 (r | s, L Dr ) provide the posterior distribution over referents given s and L Dr , and let H(R|s, L Dr ) be the conditional entropy (i.e. uncertainty) of that distribution. The output is of one of two types: a repair signal, or an inferred referent given the signal and dialogue history:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTERPRETATION",
"sec_num": null
},
{
"text": "\uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 repair signal if H(R|s, L Dr ) > H t arg max r\u2208R Pr L L 0 (r | s, L Dr ) if H(R|s, L Dr ) \u2264 H t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTERPRETATION",
"sec_num": null
},
{
"text": "For interactional interpretation, the following equations are relevant:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTERPRETATION",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr L L 0 (r | s, L Dr ) = \u03b4 L (r|s, L Dr ) (3) \u03b4 L (r|s, L Dr ) = L Dr (s, r) r \u2208R L Dr (s, r ) (4) H(R|s, L Dr ) = r\u2208R Pr(r | s, L Dr )\u00d7 log 2 1 Pr(r | s, L Dr )",
"eq_num": "(5)"
}
],
"section": "INTERPRETATION",
"sec_num": null
},
{
"text": "Equation 3 shows the probability of a referent r given the received signal s and the lexicon updated according to the dialogue history L Dr . For an interactional listener (who uses literal interpretation), this probability is given by Equation 4, which normalises the lexicon over referents given the received signal. Finally, the conditional entropy of the probability distribution over referents given the signal and the lexicon updated according to the dialogue history is shown in Equation 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTERPRETATION",
"sec_num": null
},
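{
"text": "A matching sketch of the interpretation side (again our illustration): Equation 4 normalises the updated lexicon over referents, Equation 5 computes the conditional entropy, and the entropy threshold decides between repair and interpretation. Note that for the Table 1 lexicon and signal s3 the entropy is exactly 1.0 bit, so whether repair is initiated depends on the value of H_t.\n\nimport numpy as np\n\ndef literal_interpretation(lexicon, history, s, h_t=1.0):\n    # L_Dr: conjoin the lexicon with the earlier signals in the dialogue history.\n    l_dr = lexicon.astype(float)\n    for s_prev in history:\n        l_dr = l_dr * lexicon[s_prev, :]\n    row = l_dr[s, :]\n    probs = row / row.sum()        # Equation 4: normalise over referents\n    nz = probs[probs > 0]\n    h = -(nz * np.log2(nz)).sum()  # Equation 5: conditional entropy in bits\n    if h > h_t:\n        return 'repair'            # uncertainty too high: open request ('Huh?')\n    return int(np.argmax(probs))   # otherwise: max-posterior referent",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTERPRETATION",
"sec_num": null
},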
{
"text": "The pragmatic model is based on the RSA framework (Frank and Goodman, 2012; Goodman and Frank, 2016) . This framework models pragmatic reasoning as a chain of social recursion, in which the speaker reasons about the listener when choosing a signal, and the listener reasons about the speaker when interpreting a signal. Figure 1 shows the chain of reasoning used in the current model. In order to not stack the deck against the pragmatic agents in terms of computational burden, we further distinguish between two subtypes of pragmatic agents: 'frugally pragmatic' and 'fully pragmatic'. A frugally pragmatic listener starts out at a low level of social recursion (order n = 1), and only 'levels up' to a higher order of pragmatic reasoning (n + 1) when too uncertain about the speaker's intended referent. Thus, they decide how to proceed based on their own uncertainty, somewhat analogously to how the interactional listener decides whether to initiate repair. In contrast, a fully pragmatic listener starts at the maximum order of pragmatic reasoning straight away (here we cap pragmatic reasoning at order 2, as previous simulation work has shown that orders higher than 2 yield diminishing returns in terms of communicative success; Blokpoel et al., 2020) . As this paper focuses on disambiguation by the listener, we keep the speaker model that these two subtypes of pragmatic listener interact with constant: a 'fully pragmatic' speaker who starts at the maximum order of pragmatic reasoning straight away. Below we give a computational-level description of production and interpretation in this pragmatic model.",
"cite_spans": [
{
"start": 50,
"end": 75,
"text": "(Frank and Goodman, 2012;",
"ref_id": "BIBREF11"
},
{
"start": 76,
"end": 100,
"text": "Goodman and Frank, 2016)",
"ref_id": "BIBREF16"
},
{
"start": 1238,
"end": 1260,
"text": "Blokpoel et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 320,
"end": 328,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Pragmatic model",
"sec_num": "3.1.2"
},
{
"text": "Input: L and L(s) as defined above (see Production in Interactional Model; Section 3.1.1). An order of pragmatic inference n = 2, and an intended referent r \u2208 R. Output: The signal s that maximizes the probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRODUCTION",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Pr S Sn (s | r, L). Pr S Sn (s | r, L) = Pr S Ln (r | s, L) s \u2208S Pr S Ln (r | s , L) (6) Pr S Ln (r | s, L) = Pr S S n\u22121 (s | r, L) r \u2208R Pr S S n\u22121 (s | r , L)",
"eq_num": "(7)"
}
],
"section": "PRODUCTION",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\Pr^{S}_{S_0}(s \\mid r, L) = \\delta_S(s \\mid r, L) \\quad (8) \\qquad \\delta_S(s \\mid r, L) = \\frac{L(s, r)}{\\sum_{s' \\in S} L(s', r)} \\quad (9)",
"eq_num": "(9)"
}
],
"section": "PRODUCTION",
"sec_num": null
},
{
"text": "For pragmatic production, the speaker reasons about the listener (Equation 6), who in turn reasons about the speaker being one order of pragmatic reasoning below (Equation 7). Finally, this bottoms out to reasoning about a literal (zero-order) speaker (Equation 8), where the normalised lexicon comes into play (Equation 9).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRODUCTION",
"sec_num": null
},
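{
"text": "Equations 6-9 amount to alternating column and row normalisations of the lexicon. A compact recursive Python sketch (our own; the authors' implementation may differ):\n\nimport numpy as np\n\ndef pragmatic_speaker(n, lexicon):\n    # Returns a |S| x |R| matrix with entry [s, r] = Pr_Sn(s | r).\n    if n == 0:\n        mat = lexicon.astype(float)                   # Equations 8-9\n    else:\n        prev = pragmatic_speaker(n - 1, lexicon)      # Pr_S(n-1)(s | r)\n        mat = prev / prev.sum(axis=1, keepdims=True)  # Equation 7: Pr_Ln(r | s)\n    return mat / mat.sum(axis=0, keepdims=True)       # Equations 6 and 9: normalise over signals\n\ndef produce(lexicon, r, n=2):\n    # The signal that maximises Pr_Sn(s | r) for the intended referent r.\n    return int(np.argmax(pragmatic_speaker(n, lexicon)[:, r]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRODUCTION",
"sec_num": null
},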
{
"text": "Input: L and L(s) as defined above (see Production in Interactional Model; Section 3.1.1). An order of pragmatic inference n with a maximum at n max = 2. An entropy threshold H t determining whether the entropy H is too high or sufficiently low. And finally an observed signal s \u2208 S. Output: Let Pr L Ln (r | s, L) be the posterior distribution over referents given s and L, and let H(R|s, L) be the conditional entropy (i.e. uncertainty) of that distribution. The output is an inferred referent r given the signal, if needed by moving a level up on the order of pragmatic reasoning:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTERPRETATION",
"sec_num": null
},
{
"text": "\uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 RSA INTERPRETATION(n + 1) if H(R|s, L)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTERPRETATION",
"sec_num": null
},
{
"text": "> H t , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTERPRETATION",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "n < n max arg max r\u2208R Pr L Ln (r | s, L) if H(R|s, L) \u2264 H t , or n = n max Pr L Ln (r | s, L) = Pr L Sn (s | r, L) r \u2208R Pr L Sn (s | r , L) (10) Pr L Sn (s | r, L) = Pr L L n\u22121 (r | s, L) s \u2208S Pr L L n\u22121 (r | s , L) (11) Pr L L 0 (r | s, L) = \u03b4 L (r|s, L) (12) \u03b4 L (r|s, L) = L(s, r) r \u2208R L(s, r ) (13) H(R|s, L) = r\u2208R Pr(r | s, L)\u00d7 log 2 1 Pr(r | s, L)",
"eq_num": "(14)"
}
],
"section": "INTERPRETATION",
"sec_num": null
},
{
"text": "For pragmatic interpretation, the listener reasons about the speaker (Equation 10), who in turn reasons about the listener being one order of pragmatic reasoning below (Equation 11). This bottoms out to reasoning about a literal (zero-order) listener (Equation 12), where the normalised lexicon comes into play (Equation 13). Finally, the conditional entropy of the probability distribution over referents given the signal is shown in Equation 14.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTERPRETATION",
"sec_num": null
},
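{
"text": "The frugally pragmatic listener can be sketched analogously (our illustration): interpret at order n, and level up to n + 1 whenever the conditional entropy (Equation 14) exceeds the threshold and n < n_max.\n\nimport numpy as np\n\ndef pragmatic_listener(n, lexicon):\n    # Returns a |S| x |R| matrix with entry [s, r] = Pr_Ln(r | s).\n    if n == 0:\n        mat = lexicon.astype(float)                   # Equations 12-13\n    else:\n        prev = pragmatic_listener(n - 1, lexicon)     # Pr_L(n-1)(r | s)\n        mat = prev / prev.sum(axis=0, keepdims=True)  # Equation 11: Pr_Sn(s | r)\n    return mat / mat.sum(axis=1, keepdims=True)       # Equations 10 and 13: normalise over referents\n\ndef frugal_interpret(lexicon, s, h_t=1.0, n=1, n_max=2):\n    probs = pragmatic_listener(n, lexicon)[s, :]\n    nz = probs[probs > 0]\n    h = -(nz * np.log2(nz)).sum()                     # Equation 14\n    if h > h_t and n < n_max:\n        return frugal_interpret(lexicon, s, h_t, n + 1, n_max)  # level up\n    return int(np.argmax(probs))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTERPRETATION",
"sec_num": null
},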
{
"text": "Computational-level models such as those above have very specific computational resource demands. These demands can be analysed using mathematical proof techniques from computational complexity theory (Garey and Johnson, 1979) . A model's resource demands (also referred to as computational complexity) are defined by the worst-case running time of the fastest possible algorithm that computes the specified input-output mapping. Worst-case complexity is most appropriate assuming that all instances from the model's input domain may possibly occur. 3 The computational complexity of a model can be proven by reduction or by proposing an algorithm, and is given in terms of the input size of the model (e.g., the size of the lexicon).",
"cite_spans": [
{
"start": 201,
"end": 226,
"text": "(Garey and Johnson, 1979)",
"ref_id": "BIBREF12"
},
{
"start": 550,
"end": 551,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity theory",
"sec_num": "3.2"
},
{
"text": "In the first method (reduction), one constructs a mathematical relationship, i.e., a polynomial-time reduction, between the model of interest (say M I ) and a model who's complexity is known (say M K ). A reduction proves that either M I is a special case of M K or the other way around. 4 Depending on the complexity of M K , the reduction may inform us about the complexity of M I . If M I reduces to M K and M K is easy, then M I must be easy too, because we can use the 'fast' algorithm that exists for M K to compute M I . If M K reduces to M I and M K is hard, then M I must be hard too, otherwise if M I would be easy, we could compute M K easily too. A reduction is denoted as A \u2264 B, where A reduces to B.",
"cite_spans": [
{
"start": 288,
"end": 289,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity theory",
"sec_num": "3.2"
},
{
"text": "M I is easy \u21d0\u21d2 M I \u2264 M K and M K is easy M I is hard \u21d0\u21d2 M I \u2265 M K and M K is hard",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity theory",
"sec_num": "3.2"
},
{
"text": "Polynomial time reductions can be used to prove that models are easy or hard. Easy models belong to the complexity class P and for these models there exist polynomial-time (or faster) algorithms. Hard models belong to class NP-hard; these models are as hard as all other models in NP and require exponential time or worse, assuming that P = NP. See Table 2 for example resource requirements.",
"cite_spans": [],
"ref_spans": [
{
"start": 349,
"end": 356,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Complexity theory",
"sec_num": "3.2"
},
{
"text": "In the second method (proposing an algorithm), one creates an algorithm that computes the model exactly and then analyses the algorithm's complexity profile. Unless one can prove the algorithm is the fastest, this method gives an upperbound on the model's computational complexity. This method affords comparison between models of similar complexity class. This is the method we use to deter-mine the complexity of the interactional and pragmatic models, because they are both polynomialtime computable. We illustrate this method using matrix row normalization. Given a definition of basic computation step (e.g., multiplication), input size (e.g., max(|rows|, |columns|)) and an algorithm (see Algorithm 1), one expresses the number of required computation steps. Here, n 2 computations steps are required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity theory",
"sec_num": "3.2"
},
{
"text": "Algorithm 1: Matrix row normalization taking 2kl = n 2 steps, where n = max(k, l). Using the second method, we derived upper bounds on the computational complexity of each model (see Appendix B for the full proofs). Table 3 shows the computational complexity for the different agent types.",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 223,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Complexity theory",
"sec_num": "3.2"
},
{
"text": "Data: M is a k \u00d7 l matrix 1 for i \u2190 1 to k do 2 for j \u2190 1 to l do 3 Si \u2190 Si + Mij ; // k x l steps 4 end 5 end 6 for i \u2190 1 to k do 7 for j \u2190 1 to l do 8 Mij \u2190 Mij/Si ; // k x l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity theory",
"sec_num": "3.2"
},
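{
"text": "For concreteness, here is a direct Python transcription of Algorithm 1 (our sketch) that also counts basic steps; for a k x l matrix it performs 2kl steps, on the order of n^2 with n = max(k, l). It assumes no row sums to zero.\n\ndef normalize_rows(m):\n    # Two passes over the k x l matrix: sum each row, then divide.\n    k, l = len(m), len(m[0])\n    steps = 0\n    row_sums = [0.0] * k\n    for i in range(k):\n        for j in range(l):\n            row_sums[i] += m[i][j]  # k x l addition steps\n            steps += 1\n    for i in range(k):\n        for j in range(l):\n            m[i][j] /= row_sums[i]  # k x l division steps\n            steps += 1\n    return m, steps",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity theory",
"sec_num": "3.2"
},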
{
"text": "Fully pragmatic (number of signals and referents, respectively), and t denotes the number of turns. Frugally pragmatic agents may end up in one of two scenarios: either (1) they are sufficiently certain about their 1 st -order inference or (2) they will make an additional 2 nd -order inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional Frugally pragmatic",
"sec_num": null
},
{
"text": "2m(t \u2212 1) + 2mt + 2m 1: 16m 2 + 4m 2: 20m 2 + 4m 20m 2 + 2m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional Frugally pragmatic",
"sec_num": null
},
{
"text": "For the purposes of this paper, we assume that there is no disparity between the agent types within a given speaker-listener pair, meaning that interactional speakers always converse with an interactional listener, and pragmatic speakers always converse with a pragmatic listener. This provides a clear-cut contrast to compare the effect of OIR versus pragmatic reasoning on efficiency in communication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation details",
"sec_num": "3.3"
},
{
"text": "We ran simulations to see which agent type performs best at communicating efficiently (which we break down into communicative success and resource costs). These simulations consist of a set of interactions between two agents. An interaction starts with the speaker being assigned a randomly chosen intended referent, and ends when the listener reaches an interpretation based on the signal(s) sent by the speaker. If the agents are of the interactional type, they can use multiple turns; if the agents are pragmatic, the speaker can only send one signal. We cap the number of turns at 2 \u00d7 |S| \u2212 1, to make sure agents do not get stuck in an infinite loop of other-initiated repair. In addition to interacting agents being of the same type, we also assume that there is no asymmetry between interacting agents: they always share the same lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation details",
"sec_num": "3.3"
},
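{
"text": "Putting the pieces together, one episode of the interactional model can be sketched as follows (our paraphrase of the simulation loop described above; literal_production and literal_interpretation are the hypothetical helpers sketched in Section 3.1.1, and the turn cap of 2 x |S| - 1 follows the description above):\n\ndef interact(lexicon, r, h_t=1.0):\n    # The speaker signals; the listener either interprets or initiates repair.\n    max_turns = 2 * lexicon.shape[0] - 1  # cap to avoid infinite repair loops\n    history, turns, outcome = [], 0, 'repair'\n    while outcome == 'repair' and turns < max_turns:\n        s = literal_production(lexicon, history, r)  # speaker turn\n        turns += 1\n        outcome = literal_interpretation(lexicon, history, s, h_t)\n        history.append(s)\n        if outcome == 'repair':\n            turns += 1                               # the open request is a turn too\n    if outcome == 'repair':                          # cap reached: forced interpretation\n        outcome = literal_interpretation(lexicon, history[:-1], s, float('inf'))\n    return outcome == r, turns                       # success flag, interactional cost",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation details",
"sec_num": "3.3"
},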
{
"text": "In the simulations described below, we looked at three different lexicon sizes (|S| \u00d7 |R|=6x4, 15x10, and 30x20) in order to investigate how the efficiency of the different strategies scales with lexicon size. We kept the ambiguity of the lexicons constant at a moderate level of 0.5 (given that we are interested in disambiguation), and the entropy threshold constant at H t = 1.0 bits (which corresponds approximately to a probability distribution where most of the probability mass is distributed equally over two referents).Following Blokpoel et al. (2020), we define lexicon ambiguity as mean signal ambiguity, and signal ambiguity as the relative number of referents a signal is associated with. Appendix A shows additional simulation results that explore the effects of varying the ambiguity level and entropy threshold parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation details",
"sec_num": "3.3"
},
{
"text": "For each combination of parameter settings, we randomly generate 1,000 lexicons of the corresponding size and ambiguity level, and have the corresponding pair of agents interact for 2 \u00d7 |R| times (about randomly selected referential intentions). We constrain the set of possible lexicons such that (i) each referent has at least one signal associated with it, and (ii) each signal has an equal level of ambiguity. The latter constraint is to avoid potential effects of skewed ambiguity (e.g. when half of the signals refer to all referents and the other signals to none, in the case of a mean ambiguity of 0.5) (Blokpoel et al., 2020) .",
"cite_spans": [
{
"start": 611,
"end": 634,
"text": "(Blokpoel et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation details",
"sec_num": "3.3"
},
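{
"text": "One way to implement these lexicon constraints is rejection sampling (a sketch of our own; the authors' exact generation procedure may differ):\n\nimport numpy as np\n\ndef random_lexicon(n_signals, n_referents, ambiguity, rng):\n    # Constraint (ii): every signal is associated with the same number of\n    # referents, so all signals have equal ambiguity.\n    per_signal = round(ambiguity * n_referents)\n    while True:\n        lex = np.zeros((n_signals, n_referents), dtype=int)\n        for s in range(n_signals):\n            cols = rng.choice(n_referents, size=per_signal, replace=False)\n            lex[s, cols] = 1\n        if lex.sum(axis=0).min() > 0:  # constraint (i): every referent covered\n            return lex\n\nlex = random_lexicon(6, 4, 0.5, np.random.default_rng(0))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulation details",
"sec_num": "3.3"
},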
{
"text": "For each simulation, we measured (i) the communicative success, (ii) the interactional cost, and (iii) the computational cost. We define communicative success as 1.0 if the listener's interpretation matches the speaker's intended referent, and 0.0 otherwise. We define interactional cost as the number of turns (i.e. the total number of signals and repair initiators that are sent back and forth between speaker and listener). Computational resource requirements are based on the complexity upper bound derived for each model (see Table 3 and Appendix B). Figure 2a shows the mean communicative success for the different agent types and lexicon sizes. The frugally pragmatic listeners were always sufficiently certain about the intended referent of the speaker when using a lexicon of size 6x4, resulting in the agents staying with their first order inference for that lexicon size. For the lexicon sizes of 15x10 and 30x20, the frugally pragmatic listeners were always too uncertain about the speaker's intended referent, and therefore always went up to order n = 2. 5 As Figure 2a shows, the pragmatic agents have an advantage in terms of communicative success for the smallest lexicon size (6x4), while for bigger lexicon sizes (15x10 and 30x20) the interactional agents have an advantage. This can be accounted for by the fact that the interactional agents do not use OIR for a lexicon with only 4 referents and an ambiguity level of 0.5 (see Figure 2b) , as they are already certain enough 6 , and therefore choose ran- 5 This model behaviour depends on the entropy threshold (lower values mean agents tolerate less uncertainty), ambiguity level (more ambiguous lexicons lead to more uncertainty), and lexicon size (larger lexicons result in more dispersed probability distributions, which causes higher uncertainty). See Appendix A for results with different parameter settings.",
"cite_spans": [
{
"start": 1525,
"end": 1526,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 531,
"end": 538,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 556,
"end": 565,
"text": "Figure 2a",
"ref_id": null
},
{
"start": 1073,
"end": 1082,
"text": "Figure 2a",
"ref_id": null
},
{
"start": 1447,
"end": 1457,
"text": "Figure 2b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Measures: Communicative success and resource costs",
"sec_num": "3.4"
},
{
"text": "6 Recall that the entropy threshold of 1.0 bits corresponds approximately to an equal distribution of probability mass over two referents. domly between two referents straight away (resulting in \u223c50% communicative success). For bigger lexicons, however, they do use OIR, which explains the increased communicative success: through multiple turns they can reduce referential uncertainty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Interactional agents perform approximately equally well with the bigger lexicons of 15x10 and 30x20, while the pragmatic agents show a steady decline in communicative success for bigger lexicon sizes. This decline can be explained by bigger lexicons resulting in more dispersed probability distributions, which causes less certainty for both speakers and listeners when choosing their productions and interpretations. This is more of a problem for pragmatic agents as they cannot do anything other than go one level up in pragmatic reasoning, while interactional agents can take as many turns as needed to reduce referential uncertainty (for as far as their lexicon allows). For pragmatic agents, we see no difference in communicative success between the Frugally Pragmatic and Fully Pragmatic strategies. This is as expected since they have access to the same pragmatic reasoning mechanisms and differ only in the successive deployment of orders of reasoning. Figure 2b shows the distribution of the number of turns for the interactional agents. Here, a clear effect of lexicon size is visible: the bigger the lexicon, the more turns are used. This is unsurprising given that larger lexicons (given a constant ambiguity level) contain more referent associations per signal. Therefore, a larger lexicon causes more uncertainty, which results in more turns. (Note that we allowed agents to take more turns for bigger lexicons: we set a cap at 2 \u00d7 |S| \u2212 1 turns.) For the smallest lexicon size of 6x4, only one turn (i.e. one speaker production) is needed for the listener to be certain enough to end the interaction, meaning that listeners do not make use of OIR for this lexicon size. Most interactional sequences take less than 10 turns in total regardless of lexicon size. This means that interactional listeners need on average less than 5 repair attempts to reach a sufficiently certain interpretation. Figure 2c shows the computation cost (as means of the computational complexity) by agent type and lexicon size. For the interactional agents, the average number of turns per lexicon size (6x4: 1.0, 15x10: 3.0, and 30x20: 4.7 turns) is entered into the computation cost, since the worst case is defined by an artificial limit on interaction length. As mentioned above, the frugally pragmatic agents always went up to order n = 2 for lexicon sizes 15x10 and 30x20, resulting in almost the same computation cost as for the fully pragmatic agents (see also Table 3). Only for a lexicon of size 6x4 the frugally pragmatic agents were certain enough to stay with their first-order inference, ending up with a slightly lower computation cost than the fully pragmatic agents.",
"cite_spans": [],
"ref_spans": [
{
"start": 961,
"end": 970,
"text": "Figure 2b",
"ref_id": null
},
{
"start": 1907,
"end": 1916,
"text": "Figure 2c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "There is a substantial difference in computation cost between the interactional and pragmatic agent types. Especially for larger lexicons the computation cost is considerably lower for interactional than for pragmatic agents. Compared to this difference, the degree to which computation cost is reduced for Frugally Pragmatic compared to Fully Pragmatic agents is a lot smaller. The effect of lexicon size is smaller for the interactional compared to the pragmatic agents, as the computation cost increases linearly with lexicon size for interactional agents, while it increases quadratically with lexicon size for pragmatic agents (see Table 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 637,
"end": 644,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Can communicators reduce their computational burden through interaction? We showed using a theoretical analysis that the use of other-initiated repair can be more efficient than pragmatic reasoning in communication, by reducing the computational demands of pragmatic reasoning through interaction. The chief computational advantage of repair in our model derives from the fact that it trades recursive pragmatic inferences (which scale quadratically with lexicon size) for computationally simpler conjunctions (which scale linearly). This advantage seems to scale to bigger lexicon sizes as well, with the communicative success of the interactional agents not being affected by lexicon size, whereas pragmatic agents' communicative success decreases. This supports the hypothesis that communicating agents can leverage interactive repair to reduce their computational burden, essentially outsourcing individual computation to interaction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "A number of design choices may affect the generalisability of these results. First, we have modelled only a simple form of interactive repair, albeit one corresponding to a widely used repair format (the open request). Other forms of repair may have different computational complexity profiles. For instance, restricted offers hold up a candidate understanding for confirmation, and their formulation likely requires some degree of pragmatic reasoning, adding to the computational complexity (Schl\u00f6der and Fern\u00e1ndez, 2015). Also, dealing with some forms of repair may involve belief revision (Wilkes-Gibbs and Clark, 1992) , which requires context-sensitive abductive inferences known to be computationally intractable (Abdelbar and Hedetniemi, 1998; Bylander et al., 1991; Thagard and Verbeurgt, 1998) . In sum, other-initiated repair is not a monolithic phenomenon, and the analytical tools we supply here can be used to systematically investigate the computational tractability of a range of possible interactional strategies (see e.g. Ginzburg and Fern\u00e1ndez, 2010; van Rooij et al., 2011) .",
"cite_spans": [
{
"start": 592,
"end": 622,
"text": "(Wilkes-Gibbs and Clark, 1992)",
"ref_id": "BIBREF37"
},
{
"start": 719,
"end": 750,
"text": "(Abdelbar and Hedetniemi, 1998;",
"ref_id": "BIBREF0"
},
{
"start": 751,
"end": 773,
"text": "Bylander et al., 1991;",
"ref_id": "BIBREF3"
},
{
"start": 774,
"end": 802,
"text": "Thagard and Verbeurgt, 1998)",
"ref_id": "BIBREF31"
},
{
"start": 1039,
"end": 1068,
"text": "Ginzburg and Fern\u00e1ndez, 2010;",
"ref_id": "BIBREF14"
},
{
"start": 1069,
"end": 1092,
"text": "van Rooij et al., 2011)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Another limitation is that the conditions under which the agents communicate are unrealistic in that all agent pairs share the exact same lexicon. Any potential misunderstanding thus stems solely from ambiguity, and not from one agent associating a given signal with a slightly different set of referents than their interlocutor. Relaxing this assumption is likely to cause problems for the simple repair strategy presented here, because it is based on conjunction. If interactional agents would base their (literal) productions and interpretations on conjunctions of more asymmetrical lexicons, divergences between intended referent and interpretation would soon arise, in which case we predict a decrease in communicative success, and therefore in efficiency. Pragmatic agents, on the other hand, have been shown to be able to leverage a moderate level of ambiguity in their lexicons to overcome asymmetry (Blokpoel et al., 2020) .",
"cite_spans": [
{
"start": 908,
"end": 931,
"text": "(Blokpoel et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We now consider two possible extensions to the current modelling work. Note, however, that these both come with additional computational demands and require careful theoretical re-analysis to investigate where efficiency trade-offs may play a role.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "First, a hybrid model of pragmatic inference and other-initiated repair might combine the best of both worlds. The question then is if agents can achieve communicative success while keeping resource demands low by having their choice of strategy depend on an assessment of the situation. For example, in a speaker role, such a hybrid agent could 'level up' to a higher order of pragmatic reasoning in response to a repair initiator. Such hybrid agents will of course need a meta-cognitive capacity to decide which strategy to use (for instance by reasoning about the level of asymmetry between themselves and their interlocutor). This meta-level reasoning would bring additional computational resource demands that would affect the agents' efficiency. While a hybrid strategy may be able to preserve some of the efficiency trade-offs we have documented here, it is an open question whether they would not be dwarfed by the added computational cost of meta-cognition (see e.g. Otworowska et al., 2018) .",
"cite_spans": [
{
"start": 976,
"end": 1000,
"text": "Otworowska et al., 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Second, agents may revise their beliefs about the way their interlocutors use signals on the basis of conversation history, in order to overcome asymmetry (Hawkins et al., 2017) . This form of updating might be able to explain why people are successful communicators while spending minimal interactional resources, but it comes at a computational cost too. Consider that agents would have to entertain the possibility that their interlocutor has any in principally possible lexicon, and from those infer the ones that are most likely given their conversation history. There exist, however, exponentially many possible lexicons (viz. 2 n for a lexicon of binary mappings, where n is the lexicon size, Blokpoel et al., 2020) 7 .",
"cite_spans": [
{
"start": 155,
"end": 177,
"text": "(Hawkins et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Which of these (or other) models best explains the relation between interactive repair and pragmatic reasoning is an empirical question. Here we have shown that formal models informed by research on human interaction (Albert and Ruiter, 2018) can bring us closer to an understanding of the cognitive and communicative capacities of interacting people. The question of communicative efficiency is inherently one of computational plausibility. This question is best addressed through careful theoretical analysis as we have shown here. Further modelling can be used to refine our computational understanding of the phenomenon prior to empirical testing (cf. van Rooij and Baggio, 2020).",
"cite_spans": [
{
"start": 217,
"end": 242,
"text": "(Albert and Ruiter, 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Using theoretical analysis, we showed that a simple form of other-initiated repair can ease the computational burden of pragmatic reasoning and thereby contribute to communicative efficiency. Our models make several simplifying assumptions, so scaling them to other interactional strategies will increase computational demands and perhaps alter the division of labour. Besides offering a proof of concept of how repair can ease the computational demands of communication, our methods pave the way for principled theory-driven analyses of how people balance cognitive and interactional resources in human interaction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This appendix shows the simulation results for all different parameter settings that were run, by way of a robustness check. The parameters that were manipulated are (i) the ambiguity level, (ii) the entropy threshold (i.e. the level of uncertainty that the listener is willing to tolerate) and (iii) the lexicon size. Figure 3 shows the mean communicative success for the different parameter settings for which simulations were run. For lexicon size 6x4, no data is shown for the agents of type Frugally Pragmatic 1 for the entropy thresholds of 0.8 and 1.0 bits combined with an ambiguity level of 0.8, as in these conditions all frugally pragmatic listeners levelled up to a higher order of pragmatic reasoning (for which the data can be found under Frugally Pragmatic Agents 2). For agents of type Frugally Pragmatic 2 with lexicon size 6x4, no data is available for any of the entropy thresholds combined with an ambiguity level of 0.2, and for the entropy thresholds of 1.0 and 1.5 bits combined with an ambiguity level of 0.5, because the frugally pragmatic listeners never levelled up to second-order reasoning in these conditions. For the Frugally Pragmatic Agents 1 the same happened for the larger lexicons of 15x10 and 30x20 with an ambiguity level of either 0.5 or 0.8 (and for the combination of an ambiguity level of 0.2 with an entropy threshold of either 0.8 or 1.0 bits for the lexicon size of 30x20), as all agents went an order up here as well (i.e., data for these parameter settings is shown under Frugally Pragmatic Agents 2). For the Frugally Pragmatic Agents 2, there is no data for the lexicon size of 15x10, an ambiguity level of 0.2 and entropy thresholds of 1.0 and 1.5 bits, as no agents went up from order 1 to order 2 in these conditions. Finally, the fully pragmatic agents did not have the possibility to move an order up, therefore no entropy threshold was set.",
"cite_spans": [],
"ref_spans": [
{
"start": 319,
"end": 327,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "First of all, the expected effect of ambiguity level is visible: the higher the ambiguity level, the lower the communicative success. This holds for almost all conditions, except for the Frugally Pragmatic Agents 1 with a lexicon size of 6x4 and an entropy threshold of 1.5 bits. Here we can see a slight improvement in communicative success when the ambiguity goes up from 0.5 to 0.8, which can be explained by the fact that for a high ambiguity level, these agents decide to go up to order 2 of pragmatic reasoning most of the time, and only stay with order 1 when they are sufficiently certain about the speaker's intended referent. Another exception when it comes to the effect of ambiguity level on the communicative success can be detected for the interactional agents with a lexicon size of 15x10, for the entropy thresholds of 1.0 and 1.5 bits and an ambiguity level of 0.2 and 0.5: here, the interactional agents perform better with an ambiguity level of 0.5 than 0.2. This is due to the fact that an ambiguity level of 0.2 for a lexicon size of 15x10",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "Figure 3: The mean communicative success for the different parameter settings: different agent types, entropy thresholds, ambiguity levels and lexicon sizes. For the frugally pragmatic agents 1, a lexicon size of 6x4, entropy thresholds of 1.0 and 0.8 bits, and an ambiguity level of 0.8 there is no data available as all agents went up on the order of pragmatic reasoning, for which the data is represented below at the frugally pragmatic agents 2 for a lexicon size of 6x4 (because the agents went up from an order of 1 to 2). Notice though that for a lexicon size of 6x4 no data is shown for an ambiguity level of 0.2 or an ambiguity level of 0.5 combined with an entropy threshold of either 1.0 or 1.5 bits, as no agents decided to go up from an order of 1 to 2. Again, for the frugally pragmatic agents 1 for the larger lexicons of 15x10 and 30x20 with an ambiguity level of either 0.5 or 0.8 (and for the combination of an ambiguity level of 0.2 and an entropy threshold of either 0.8 or 1.0 bits for the lexicon size of 30x20), all agents went an order up as well, explaining why no data is shown here. For the frugally pragmatic agents 2, there is no data for the lexicon size of 15x10, an ambiguity level of 0.2 and entropy thresholds of 1.0 and 1.5 bits, as no agents went up from an order of 1 to 2. Finally, the fully pragmatic agents did not have the possibility to move an order up, therefore no entropy threshold was set. The white outlines indicate the simulation results reported in the main body of the paper. means that every signal refers to 2 referents. Therefore, agents do not use OIR for this ambiguity level as they have already reached the entropy threshold from the start; when the entropy threshold is set to 1.0 bits (or higher), agents are satisfied with having their set of possible interpretations narrowed down to two approximately equiprobable candidates. With a higher ambiguity level the agents do need to use OIR for these entropy thresholds, therefore they can reach an entropy level under the entropy threshold and only have one referent left to choose from in some cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "Secondly, the entropy threshold is used to manipulate how much uncertainty a listener allows for in a conversation; we ran simulations with three different entropy thresholds: 0.8, 1.0 and 1.5. With an Figure 4 : The mean number of turns for the interactional agents for the different parameter settings: different entropy thresholds, ambiguity levels and lexicon sizes. The white outlines indicate the simulation results reported in the main body of the paper. entropy threshold of 0.8 bits, listeners are quite certain about which referent to choose, as one referent has a higher probability than the others. An entropy threshold of 1.0 bits means that the listener still has to choose between two more or less equally probable referents given a signal. Finally, with an entropy threshold of 1.5 bits, listeners have to choose between three more or less equally probable referents given a signal. 8 For the fully pragmatic agents the entropy threshold does not play a role, as these agents start at the maximum order of pragmatic reasoning (n = 2) from the beginning, regardless of their level of (un)certainty. When looking at the results in Figure 3 , a clear effect of entropy threshold is not detectable. Overall, we can spot a small effect of the lowest entropy threshold of 0.8 bits leading to a higher communicative success, but this effect is not consistent across conditions and not very visible between the entropy thresholds of 1.0 and 1.5 bits.",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 210,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1145,
"end": 1153,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
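{
"text": "To make these threshold values concrete, here is a quick sanity check (our addition, not part of the original simulations): the entropy of a uniform choice among k equiprobable referents is log2(k), so a threshold of 1.0 bits corresponds to tolerating a two-way choice, while log2(3) \u2248 1.585 bits, so a threshold of 1.5 bits admits a (slightly skewed) three-way choice.\n\nimport math\n\n# entropy of a uniform distribution over k referents is log2(k)\nfor k in (2, 3):\n    print(k, math.log2(k))  # 2 -> 1.0 bits, 3 -> ~1.585 bits",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},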
{
"text": "Finally, an effect of lexicon size can be seen as well: for bigger lexicons the communicative success tends to be lower than for smaller ones. As discussed in the main body of the paper, this is due to bigger lexicons resulting in more dispersed probability distributions over signals and referents (for speakers and listeners respectively). Furthermore, we can observe that frugally pragmatic listeners go an order up in pragmatic reasoning (thereby entering the Frugally Pragmatic 2 scenario) when the ambiguity level is higher and when the lexicon size is larger, which happens more often for the agents who tolerate less uncertainty (i.e. have a lower entropy threshold). This is in line with our expectations, as bigger lexicons with higher ambiguity levels cause more dispersed probabilities over the referents given a signal. A listener who is uncertain about the speaker's intended referent is more likely to go up on the order of reasoning, and this effect will be stronger if the listener has a lower entropy threshold. Figure 4 shows the mean number of turns for the interactional agents for the different ambiguity levels and entropy thresholds. These parameters have a clear effect on the number of turns. The higher the ambiguity level, the more turns are used to be certain enough about the speaker's intended referent. Next, the lower the entropy threshold, the more turns are needed to be certain enough (as a lower entropy threshold means that the agent tolerates less uncertainty). And finally, regarding the lexicon size: the bigger the lexicon, the more turns are needed to be certain enough, as bigger lexicons lead to more dispersed probability distributions over the referents given the signal(s).",
"cite_spans": [],
"ref_spans": [
{
"start": 1030,
"end": 1038,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "As mentioned above, for lower entropy thresholds, agents want to eliminate more uncertainty (i.e. gain a lower conditional entropy), which they try to achieve by taking more turns. However, we can observe in Figure 4 that there is not a (big) difference in the number of turns that the agents take between an entropy threshold of 1.0 and 1.5 bits, which means that after some turns agents are equally certain for both entropy thresholds (probably both fall under 1.0, regardless of the threshold). Next, Figure 5 shows the computational complexity for the interactional agents for the different lexicon sizes and numbers of turns. Here, we can see that the computational complexity goes up for bigger lexicons and more turns, as expected.",
"cite_spans": [],
"ref_spans": [
{
"start": 208,
"end": 216,
"text": "Figure 4",
"ref_id": null
},
{
"start": 504,
"end": 512,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "Finally, Figure 6 shows the difference in conditional entropy over turns for different lexicon sizes for the interactional agents (as the pragmatic agents only perform one turn), meaning that the entropy difference of turn 2 for instance is given by:",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 17,
"text": "Figure 6",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "\u2206 H = H(t 2 ) \u2212 H(t 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "Here we can see that for the smaller lexicon sizes, the entropy difference between turns is smaller. Moreover, we can observe that around 10 turns the entropy does not differ much anymore when taking more turns, meaning that taking more than 10 turns in total (i.e. 5 per agent; including 5 counts of OIR) is not very effective when it comes to the listener's certainty about their interpretation. This is in line with the result discussed in the body of the paper that the interactional agents, regardless of lexicon size, take less than 5 turns most of the time. We distinguish three agent types:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "1. Interactional agents: Use other-initiated repair and conjunction to get to mutual understanding 2. Frugally pragmatic agents: Speaker is always order 2, but listener starts at order 1, and only levels up to order 2 when uncertainty is too high 3. Fully pragmatic agents: Both speaker and listener are order 2. Listener's strategy doesn't depend on their uncertainty",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "Together, these three agent types make use of three different operations of which we can analyse the computational complexity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "1. Conjunction (only used by interactional agents)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "2. Entropy (only used by listeners of the interactional and frugally pragmatic agent types)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "3. Inference (used by all agent types)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "Below, we will analyse the computational complexity of the relevant operations for the three different agent types listed above in turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Additional Results",
"sec_num": null
},
{
"text": "Data: L is a lexicon matrix with |S| rows and |R| columns. s t\u22121 is the latest signal in the dialogue history D r . Note that we assume that at each turn t, L contains the outcome of the conjunction operation that was performed at the previous turn t \u2212 1 (if such a previous turn exists). If t = 1, L is identical to the listener's lexicon L. Result: An updated lexicon L on which the conjunction operation has been performed given s t=1 (the first signal in the interaction) and s t\u22121 (the latest signal in the interaction).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Conjunction operation for interactional listener",
"sec_num": null
},
{
"text": "Note that the only values that are updated in L are the cells in the row corresponding to s t=1 (i.e. the signal that was received in the very first turn of the interaction).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Conjunction operation for interactional listener",
"sec_num": null
},
{
"text": "1 for i \u2190 1 to |R| do 2 L s t=1 ,i = L s t=1 ,i * L s t\u22121 ,i ; 3 end",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Conjunction operation for interactional listener",
"sec_num": null
},
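{
"text": "A minimal Python rendering of algorithm 3 (our sketch; it assumes the lexicon is a NumPy array with signals as rows and referents as columns, as in the Data description, and that signals are given as row indices):\n\nimport numpy as np\n\ndef conjunction_listener(L_prime, s_first, s_last):\n    # Algorithm 3: elementwise product of the row of the first signal\n    # with the row of the latest signal; exactly |R| multiplications.\n    L_prime = L_prime.copy()\n    L_prime[s_first, :] = L_prime[s_first, :] * L_prime[s_last, :]\n    return L_prime",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Conjunction operation for interactional listener",
"sec_num": null
},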
{
"text": "above) is given by equation 15.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Conjunction operation for interactional listener",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H(R|s, L Dr ) = r\u2208R P r(r|s, L Dr )log 2 1 P r(r|s, L Dr )",
"eq_num": "(15)"
}
],
"section": "Algorithm 3: Conjunction operation for interactional listener",
"sec_num": null
},
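{
"text": "To illustrate equation 15, a small sketch (ours; it assumes Pr(r|s, L^{D_r}) is obtained by normalising the row of the first signal, and uses the convention 0 \u00b7 log2(1/0) = 0):\n\nimport numpy as np\n\ndef conditional_entropy(L_prime, s_first):\n    row = L_prime[s_first, :].astype(float)\n    p = row / row.sum()  # Pr(r | s, L^{D_r}), assuming the row is not all zeros\n    nz = p > 0           # convention: 0 * log2(1/0) = 0\n    return float((p[nz] * np.log2(1.0 / p[nz])).sum())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Conjunction operation for interactional listener",
"sec_num": null
},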
{
"text": "Thus, the listener has to first do a multiplication operation for each r \u2208 |R|, and then sum each of the |R| resulting values together. This means that the computational complexity of the entropy calculation is 2|R|. Because the entropy calculation happens at every single turn, the overall computational complexity of the entropy operation for a given interaction is 2|R|t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Conjunction operation for interactional listener",
"sec_num": null
},
{
"text": "Once the entropy falls below the entropy threshold, or once the cap on the number of turns has been reached, the listener will decide to move to the inference step to actually interpret the signal(s). In order to do that, the listener has to go along the row corresponding to the first signal that was sent s t=1 and select the referent that has the highest value. To do this, the listener has to make |R| comparisons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Conjunction operation for interactional listener",
"sec_num": null
},
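{
"text": "The inference step itself is then just an argmax over the relevant row; in Python (our sketch, same array conventions as above):\n\nimport numpy as np\n\ndef infer_referent(L_prime, s_first):\n    # |R| comparisons along the row of the first signal\n    return int(np.argmax(L_prime[s_first, :]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Conjunction operation for interactional listener",
"sec_num": null
},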
{
"text": "Taken together, this means that the computational complexity for the interactional listener strat-egy as a whole (per interaction) is |R|(t \u2212 1) + 2|R|t + |R|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Conjunction operation for interactional listener",
"sec_num": null
},
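{
"text": "As a worked check of this formula (our numbers, purely illustrative):\n\ndef interactional_listener_cost(R, t):\n    # |R|(t-1) conjunction + 2|R|t entropy + |R| inference steps\n    return R * (t - 1) + 2 * R * t + R\n\n# e.g. |R| = 4 (the 6x4 lexicon) and t = 3 turns:\n# 4*2 + 2*4*3 + 4 = 36 basic steps",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3: Conjunction operation for interactional listener",
"sec_num": null
},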
{
"text": "We can consider the RSA operation of updating the matrix of production/reception probabilities separately from the inference step of choosing an actual utterance or interpretation. The computational complexity of that RSA step by itself (for speakers and listeners alike) is (2 + 4n)|S||R|, where n is the order of pragmatic reasoning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2.2 Pragmatic agents",
"sec_num": null
},
{
"text": "The idea behind this is that a pragmatic agent has to normalise 2n times (first along the rows and then along the columns for production, or first along the columns and then along the rows for interpretation; and that n times). Each normalisation step itself takes 2|S||R| (taking the sum over rows or columns takes |S||R| steps, then dividing each cell by the relevant sum also takes |S||R| steps). Taken together, this makes 2n \u2022 2|S||R| = (4n)|S||R| steps. However, we haven't yet incorporated the first normalisation step which turns the lexicon of binary mappings into a level-0 speaker (in the case of pragmatic production) or a level-0 listener (in the case of pragmatic interpretation). As explained above, this initial normalisation operation consists of 2|S||R| steps, so if we add it in, we get: (2 + 4n)|S||R|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2.2 Pragmatic agents",
"sec_num": null
},
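{
"text": "A sketch of this counting argument in code (ours; we assume signals as rows and referents as columns, so the level-0 listener normalises over rows and each subsequent speaker/listener step is one column or row normalisation, each costing 2|S||R| steps):\n\nimport numpy as np\n\ndef rsa_listener_matrix(lexicon, n):\n    # initial normalisation: binary lexicon -> level-0 listener (2|S||R| steps);\n    # assumes every row and column has at least one non-zero entry\n    M = lexicon / lexicon.sum(axis=1, keepdims=True)\n    for _ in range(n):\n        M = M / M.sum(axis=0, keepdims=True)  # speaker step (2|S||R| steps)\n        M = M / M.sum(axis=1, keepdims=True)  # listener step (2|S||R| steps)\n    return M  # total: (2 + 4n)|S||R| steps",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2.2 Pragmatic agents",
"sec_num": null
},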
{
"text": "If we combine this with the inference step (which, as we saw above, is |S| for speakers, and |R| for listeners), we get (2 + 4n)|S||R| + max(|S|, |R|). Which is a generic computational complexity analysis for pragmatic agents in general. But we can make this more specific by considering speakers and listeners separately, as we do below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2.2 Pragmatic agents",
"sec_num": null
},
{
"text": "Frugally pragmatic speaker The frugally pragmatic speaker strategy is exactly the same as the fully pragmatic speaker strategy; see the corresponding complexity analysis below under 'Fully pragmatic speaker'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2.2 Pragmatic agents",
"sec_num": null
},
{
"text": "Frugally pragmatic listener For the frugally pragmatic listener, the computational complexity of this strategy depends on whether the listener decides to level up to order 2 or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2.2 Pragmatic agents",
"sec_num": null
},
{
"text": "\u2022 Scenario 1: In this scenario, the listener doesn't level up, which means that n = 1. This yields: (2 + 4n)|S||R| = (2 + (4 * 1))|S||R| = 6|S||R| for the RSA operation. This is then combined with the entropy calculation, which, as shown above, takes 2|R| ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2.2 Pragmatic agents",
"sec_num": null
},
{
"text": "And finally, the listener has to do an inference step to come to an actual interpretation. As shown above, this takes |R| steps. Taken together, this means that the computational complexity for the frugally pragmatic listener who doesn't level up is 6|S||R| + 2|R| + |R|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},
{
"text": "\u2022 Scenario 2: In this scenario, the listener does level up, which means that n = 2. This yields: (2 + 4n)|S||R| = (2 + (4 * 2))|S||R| = 10|S||R| for the RSA operation. (Note that this subsumes the initial RSA operation at order n = 1; we assume that the listener can hold on to the outcome of that first n = 1 operation to use it as the basis for their subsequent n = 2 inference.) This is then combined with the entropy calculation, which, as shown above, takes 2|R| steps. And finally, the listener has to do an inference step to come to an actual interpretation. As shown above, this takes |R| steps. Taken together, this means that the computational complexity for the frugally pragmatic listener who does level up is 10|S||R|+2|R|+ |R|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},
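{
"text": "Putting the two scenarios side by side (a worked example with our own illustrative numbers):\n\ndef frugal_listener_cost(S, R, leveled_up):\n    n = 2 if leveled_up else 1\n    return (2 + 4 * n) * S * R + 2 * R + R\n\n# 6x4 lexicon: staying at order 1 -> 6*6*4 + 3*4 = 156 steps\n#              levelling up to 2 -> 10*6*4 + 3*4 = 252 steps",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},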
{
"text": "Fully pragmatic speaker The frugally pragmatic speaker and fully pragmatic speaker strategies are exactly the same: they both start at order n = 2, and don't do anything other than regular RSA production. The RSA operation part for order n = 2 is (2 + 4n)|S||R| = (2 + (4 * 2)|S||R| = 10|S||R|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},
{
"text": "As shown above, the inference step for production takes |S| steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},
{
"text": "Taken together, this means that the computational complexity for the frugally or fully pragmatic speaker is 10|S||R| + |S|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},
{
"text": "Fully pragmatic listener The fully pragmatic listener starts at order n = 2 straight away, and doesn't do any entropy calculation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},
{
"text": "Taken together, this means that the computational complexity for the fully pragmatic listener is 10|S||R| + |R|. Table 4 shows a comparison of the computational complexity of each strategy.",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},
{
"text": "We can make the computational complexity of the speaker and listener roles more comparable by subsuming |S| and |R| under one combined variable m = max(|S|, |R|), which simply takes on whichever is the highest value out of |S| and |R|. (Given the parameter settings used in the simulations, this will always be |S|, given that |S| was fixed at 1.5 \u00d7 |R|.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2.3 Comparison across agent types",
"sec_num": null
},
{
"text": "Division of labour between speaker and listener From Table 5 we can read off how the division of labour between speaker and listener differs between the different strategies. Only in the fully pragmatic agent types do speaker and listener do an exactly equal amount of work. In the interactional strategy,",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "B.2.3 Comparison across agent types",
"sec_num": null
},
{
"text": "Fully pragmatic 2m(t \u2212 1) + 2mt + 2 1:16m 2 + 4m 2:20m 2 + 4m 20m 2 + 2m Table 6 : Computational complexity comparison across agent types (with speaker and listener strategy summed together).",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},
{
"text": "the listener always does a bit more work than the speaker (because the listener thinks about how uncertain they are about their inference; the speaker in contrast only has to react when they get a repair request). In the frugally pragmatic strategy, the listener does less work than the speaker when they can stay at order n = 1 (scenario 1), but more work than the speaker when they have to level up to order n = 2 because their initial inference was too uncertain (scenario 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},
{
"text": "Comparison across agent types (collapsing across speaker and listener role) Ultimately however, we are interested in comparing across agent types. In order to do this, we can sum the complexity of the speaker and listener within each agent type together, as shown in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 274,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},
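{
"text": "The formulas in Table 6 are straightforward to evaluate; a sketch (ours) using the simulation lexicon sizes:\n\ndef cost_interactional(m, t):\n    return 2 * m * (t - 1) + 2 * m * t + 2 * m\n\ndef cost_frugal(m, leveled_up=False):\n    return (20 if leveled_up else 16) * m ** 2 + 4 * m\n\ndef cost_fully_pragmatic(m):\n    return 20 * m ** 2 + 2 * m\n\n# m = 30 (the 30x20 lexicon): cost_fully_pragmatic(30) = 18060 steps,\n# while an interactional pair taking t = 5 turns needs\n# cost_interactional(30, 5) = 600 steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},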
{
"text": "As described in Section 4, we used the formulas in Table 6 in combination with the mean number of turns derived from the simulations for each separate strategy to yield the computational cost results shown in Figure 2c .",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 6",
"ref_id": null
},
{
"start": 209,
"end": 218,
"text": "Figure 2c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Interactional",
"sec_num": null
},
{
"text": "We use the conventional 'speaker' and 'listener', though we are aware that natural languages are produced and perceived in diverse modalities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The implementation code and simulation data are available at: https://osf.io/fxphv/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "If one finds this assumption to generic, one can propose a restricted special case model. Such a model may have a different computational complexity. Parameterized complexity analysis(Downey and Fellows, 1999;van Rooij et al., 2019) is a sophisticated approach for investigating various special case models.4 A polynomial-time reduction from A to B does not strictly prove a special case relationship. Formally it proves that at polynomial cost any input of A can be transformed into an equivalent input for B such that the output of B is consistent with the output of A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For 30 signals and 20 referents there exist 2 30\u00d720 = 1152921504606846976 possible alternatives lexicons to consider. Even when agents can consider a million alternatives per second, it would take them about 3.6 years to update each time they hear their interlocutor speak.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
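{
"text": "Our arithmetic check for this footnote (an addition, assuming one million candidate lexicons evaluated per second):\n\nimport math\n\nn_lexicons = 2 ** (30 * 20)  # 2^600 possible binary lexicons\nprint(math.log10(n_lexicons))  # ~180.6, i.e. about 4.2e180 lexicons\nyears = n_lexicons / 1e6 / (60 * 60 * 24 * 365)\nprint(years)  # ~1.3e167 years",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},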
{
"text": "We can make these generalisations based on the definition of conditional entropy, but note that a given conditional entropy value can in principle correspond to a number of different probability distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is funded by the Netherlands Organisation for Scientific Research (NWO): MB is funded by Gravitation grant 024.001.006 of the Language in Interaction consortium, and MD and MW are supported by Vidi grant Elementary particles of conversation (016.Vidi.185.205). We would like to thank the reviewers for their valuable comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Let's start with distinguishing between |S| (number of signals) and |R| (number of referents) in our computational complexity analysis. We can later simplify by subsuming these two variables under a single variable m, which simply takes on whichever value is the maximum out of |S| and |R|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Computational complexity analysis",
"sec_num": null
},
{
"text": "Interactional speaker The interactional speaker has to do one conjunction step and one inference step per turn.The conjunction step updates the lexicon L (which exists only within a particular interaction and is 'reset' to the speaker's original lexicon at the start of each new interaction) by multiplying each value in the column corresponding to r intended in L with the corresponding value in the signal row corresponding to the signal that was last sent s t\u22121 . This operation is specified below in algorithm 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2.1 Interactional agents",
"sec_num": null
},
{
"text": "Data: L is a lexicon matrix with |S| rows and |R| columns. r intended is the speaker's intended referent. s t\u22121 is the latest signal in the dialogue history D r . Note that we assume that at each turn t, L contains the outcome of the conjunction operation that was performed at the previous turn t \u2212 1 (if such a previous turn exists). If t = 1, L is identical to the speaker's lexicon L. Result: An updated lexicon L on which the conjunction operation has been performed given r intended and s t\u22121 . Note that the only values that are updated in L are the cells in the column corresponding to r intended .The computational complexity of step 2 in algorithm 2 is |S| (i.e. the multiplication operation has to be done exactly once for each cell in the column corresponding to r intended ).Algorithm 2 has to be performed exactly once for each turn t > 1 after the first turn. Therefore, this conjunction operation has to be performed exactly t \u2212 1 times in a given interaction. Thus, the overall computational complexity of conjunction for the interactional speaker is |S|(t \u2212 1).To determine which signal to send next, the speaker has to go along the column of their intended referent, and select the signal that has the highest value. Given that every agent type has to do this inference step, let's assume that lookup is free. In that case the speaker has to make |S| comparisons to check which signal has the highest value.Taken together, this means that the computational complexity for the interactional speaker strategy as a whole (per interaction) is |S|(t \u2212 1) + |S|.Interactional listener In addition to the conjunction and inference steps, the interactional listener has to do an entropy step in between, to decide whether to move on to the inference step (if entropy is low), or whether to respond with a repair initiator (if entropy is high).Let's again go through the steps in order: The conjunction step updates the lexicon L (which exists only within a particular interaction and is 'reset' to the listener's original lexicon at the start of each new interaction) by multiplying the signal row corresponding to the first signal that was received in the interaction s t=1 with the signal row corresponding to the signal that was last received s t\u22121 . This operation is specified below in algorithm 3.The computational complexity of step 2 in algorithm 3 is |R| (i.e. the multiplication operation has to be done exactly once for each cell in the row corresponding to signal s t=1 ).Just like for the interactional speaker, algorithm 3 has to be performed exactly once for each turn t > 1 after the first turn. Therefore, this conjunction operation has to be performed exactly t \u2212 1 times in a given interaction. Thus, the overall computational complexity of conjunction for the interactional listener is |R|(t \u2212 1).At each turn of the interaction (including the very first turn), the listener does an entropy step to check how certain they are about their inference over possible intended referents. The entropy of the probability distribution over referents given the received signal s and the lexicon updated according to the dialogue history L Dr (what is called L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 2: Conjunction operation for interactional speaker",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Approximating MAPS for belief networks is NPhard and other theorems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Aashraf",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"M"
],
"last": "Abdelbar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hedetniemi",
"suffix": ""
}
],
"year": 1998,
"venue": "Artificial Intelligence",
"volume": "102",
"issue": "1",
"pages": "21--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aashraf M. Abdelbar and Sandra M. Hedetniemi. 1998. Approximating MAPS for belief networks is NP- hard and other theorems. Artificial Intelligence, 102(1):21-38.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving Human Interaction Research through Ecological Grounding",
"authors": [
{
"first": "Saul",
"middle": [],
"last": "Albert",
"suffix": ""
},
{
"first": "Jan-Peter De",
"middle": [],
"last": "Ruiter",
"suffix": ""
}
],
"year": 2018,
"venue": "Collabra: Psychology",
"volume": "4",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1525/collabra.132"
]
},
"num": null,
"urls": [],
"raw_text": "Saul Albert and Jan-Peter de Ruiter. 2018. Improv- ing Human Interaction Research through Ecological Grounding. Collabra: Psychology, 4(1).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Pragmatic communicators can overcome asymmetry by exploiting ambiguity",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Blokpoel",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dingemanse",
"suffix": ""
},
{
"first": "Marieke",
"middle": [],
"last": "Woensdregt",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kachergis",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "B\u00f6gels",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Toni",
"suffix": ""
},
{
"first": "Iris",
"middle": [],
"last": "Van Rooij",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.31219/osf.io/q56xs"
]
},
"num": null,
"urls": [],
"raw_text": "Mark Blokpoel, Mark Dingemanse, Marieke Woens- dregt, George Kachergis, Sara B\u00f6gels, Ivan Toni, and Iris van Rooij. 2020. Pragmatic communicators can overcome asymmetry by exploiting ambiguity. Preprint, Open Science Framework.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The computational complexity of abduction",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Bylander",
"suffix": ""
},
{
"first": "Dean",
"middle": [],
"last": "Allemang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "John R Josephson",
"middle": [],
"last": "Tanner",
"suffix": ""
}
],
"year": 1991,
"venue": "Artificial Intelligence",
"volume": "49",
"issue": "1-3",
"pages": "25--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Bylander, Dean Allemang, Michael C Tanner, and John R Josephson. 1991. The computational complexity of abduction. Artificial Intelligence, 49(1-3):25-60.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Material Symbols",
"authors": [
{
"first": "Andy",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "19",
"issue": "",
"pages": "291--307",
"other_ids": {
"DOI": [
"10.1080/09515080600689872"
]
},
"num": null,
"urls": [],
"raw_text": "Andy Clark. 2006. Material Symbols. Philosophical Psychology, 19(3):291-307.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Collaborating on contributions to conversations. Language and Cognitive Processes",
"authors": [
{
"first": "H",
"middle": [],
"last": "Herbert",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schaefer",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "2",
"issue": "",
"pages": "19--41",
"other_ids": {
"DOI": [
"10.1080/01690968708406350"
]
},
"num": null,
"urls": [],
"raw_text": "Herbert H. Clark and Edward Schaefer. 1987. Collabo- rating on contributions to conversations. Language and Cognitive Processes, 2(1):19-41.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Referring as a collaborative process",
"authors": [
{
"first": "H",
"middle": [],
"last": "Herbert",
"suffix": ""
},
{
"first": "Deanna",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wilkes-Gibbs",
"suffix": ""
}
],
"year": 1986,
"venue": "Cognition",
"volume": "22",
"issue": "1",
"pages": "1--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert H. Clark and Deanna Wilkes-Gibbs. 1986. Referring as a collaborative process. Cognition, 22(1):1-39.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Resource-rationality beyond individual minds: the case of interactive language use",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Dingemanse",
"suffix": ""
}
],
"year": 2020,
"venue": "Behavioral and Brain Sciences",
"volume": "43",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1017/S0140525X19001638"
]
},
"num": null,
"urls": [],
"raw_text": "Mark Dingemanse. 2020. Resource-rationality beyond individual minds: the case of interactive language use. Behavioral and Brain Sciences, 43:e9.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Universal principles in the repair of communication problems",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Dingemanse",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Se\u00e1n",
"suffix": ""
},
{
"first": "Julija",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Baranova",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Simeon",
"middle": [],
"last": "Drew",
"suffix": ""
},
{
"first": "Rosa",
"middle": [
"S"
],
"last": "Floyd",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gisladottir",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Kobin",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"C"
],
"last": "Kendrick",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Levinson",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Manrique",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Rossi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Enfield",
"suffix": ""
}
],
"year": 2015,
"venue": "PLoS ONE",
"volume": "10",
"issue": "9",
"pages": "1--15",
"other_ids": {
"DOI": [
"10.1371/journal.pone.0136100"
]
},
"num": null,
"urls": [],
"raw_text": "Mark Dingemanse, Se\u00e1n G. Roberts, Julija Baranova, Joe Blythe, Paul Drew, Simeon Floyd, Rosa S. Gis- ladottir, Kobin H. Kendrick, Stephen C. Levinson, Elizabeth Manrique, Giovanni Rossi, and Nick En- field, J. 2015. Universal principles in the repair of communication problems. PLoS ONE, 10(9):1-15.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Parameterized complexity",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Fellows",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Downey and Mike Fellows. 1999. Parameter- ized complexity. Springer, Berlin.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology",
"authors": [
{
"first": "A",
"middle": [],
"last": "Jerry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fodor",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry A. Fodor. 2000. The Mind Doesn't Work That Way: The Scope and Limits of Computational Psy- chology. MIT press, Cambridge, MA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Predicting pragmatic reasoning in language games",
"authors": [
{
"first": "C",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"D"
],
"last": "Frank",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 2012,
"venue": "Science",
"volume": "336",
"issue": "6084",
"pages": "",
"other_ids": {
"DOI": [
"10.1126/science.1218633"
]
},
"num": null,
"urls": [],
"raw_text": "Michael C. Frank and Noah D. Goodman. 2012. Pre- dicting pragmatic reasoning in language games. Sci- ence, 336(6084):998.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Computers and intractability: A guide to the theory of NP-completeness",
"authors": [
{
"first": "R",
"middle": [],
"last": "Micheal",
"suffix": ""
},
{
"first": "David",
"middle": [
"S"
],
"last": "Garey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micheal R Garey and David S. Johnson. 1979. Com- puters and intractability: A guide to the theory of NP-completeness. W. H. Freeman, San Francisco, CA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "How Efficiency Shapes Human Language",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"T"
],
"last": "Piandadosi",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Dautriche",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Mahowald",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Bergen",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2019,
"venue": "Trends in Cognitive Sciences",
"volume": "23",
"issue": "5",
"pages": "389--407",
"other_ids": {
"DOI": [
"10.1016/j.tics.2019.02.003"
]
},
"num": null,
"urls": [],
"raw_text": "Edward Gibson, Richard Futrell, Steven T. Piandadosi, Isabelle Dautriche, Kyle Mahowald, Leon Bergen, and Roger Levy. 2019. How Efficiency Shapes Human Language. Trends in Cognitive Sciences, 23(5):389-407.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The handbook of computational linguistics and natural language processing, Blackwell handbooks in linguistics",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Ginzburg",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Ginzburg and Raquel Fern\u00e1ndez. 2010. Com- putational models of dialogue. In Alexander Clark, Chris Fox, and Shalom Lappin, editors, The hand- book of computational linguistics and natural lan- guage processing, Blackwell handbooks in linguis- tics. Wiley-Blackwell, Chichester, West Sussex ;",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Pragmatic language interpretation as probabilistic inference",
"authors": [
{
"first": "D",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"C"
],
"last": "Goodman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2016,
"venue": "Trends in Cognitive Sciences",
"volume": "20",
"issue": "11",
"pages": "818--829",
"other_ids": {
"DOI": [
"10.1016/j.tics.2016.08.005"
]
},
"num": null,
"urls": [],
"raw_text": "Noah D. Goodman and Michael C. Frank. 2016. Prag- matic language interpretation as probabilistic infer- ence. Trends in Cognitive Sciences, 20(11):818- 829.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Logic and Conversation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Herbert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Grice",
"suffix": ""
}
],
"year": 1975,
"venue": "Studies in the Way of Words",
"volume": "",
"issue": "",
"pages": "305--315",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert P. Grice. 1975. Logic and Conversation. In Herbert P. Grice, editor, Studies in the Way of Words, pages 305-315. Harvard University Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Cognitive Science and Folk Psychology: The Right Frame of Mind",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Haselager",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pim F. Haselager. 1997. Cognitive Science and Folk Psychology: The Right Frame of Mind. Sage, Lon- don.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Convention-formation in iterated reference games",
"authors": [
{
"first": "X",
"middle": [
"D"
],
"last": "Robert",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"C"
],
"last": "Hawkins",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"D"
],
"last": "Frank",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 39th Annual Meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert X. D. Hawkins, Michael C. Frank, and Noah D. Goodman. 2017. Convention-formation in iterated reference games. Proceedings of the 39th Annual Meeting of the Cognitive Science Society.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Lexical nature of syntactic ambiguity resolution",
"authors": [
{
"first": "Maryellen",
"middle": [
"C"
],
"last": "Macdonald",
"suffix": ""
},
{
"first": "Neal",
"middle": [
"J"
],
"last": "Pearlmutter",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"S"
],
"last": "Seidenberg",
"suffix": ""
}
],
"year": 1994,
"venue": "Psychological Review",
"volume": "101",
"issue": "4",
"pages": "676--703",
"other_ids": {
"DOI": [
"10.1037//0033-295X.101.4.676"
]
},
"num": null,
"urls": [],
"raw_text": "Maryellen C. MacDonald, Neal J. Pearlmutter, and Mark S. Seidenberg. 1994. Lexical nature of syn- tactic ambiguity resolution. Psychological Review, 101(4):676-703.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Information Theory, Inference and Learning Algorithms",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mackay",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David J. C. MacKay. 2003. Information Theory, Infer- ence and Learning Algorithms. Cambridge Univer- sity Press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Demons of ecological rationality",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Otworowska",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Blokpoel",
"suffix": ""
},
{
"first": "Marieke",
"middle": [],
"last": "Sweers",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Wareham",
"suffix": ""
},
{
"first": "Iris",
"middle": [],
"last": "Van Rooij",
"suffix": ""
}
],
"year": 2018,
"venue": "Cognitive Science",
"volume": "42",
"issue": "3",
"pages": "1057--1066",
"other_ids": {
"DOI": [
"10.1111/cogs.12530"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Otworowska, Mark Blokpoel, Marieke Sweers, Todd Wareham, and Iris van Rooij. 2018. Demons of ecological rationality. Cognitive Science, 42(3):1057-1066.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The communicative function of ambiguity in language",
"authors": [
{
"first": "T",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "Harry",
"middle": [],
"last": "Piantadosi",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Tily",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gibson",
"suffix": ""
}
],
"year": 2012,
"venue": "Cognition",
"volume": "122",
"issue": "3",
"pages": "280--291",
"other_ids": {
"DOI": [
"10.1016/j.cognition.2011.10.004"
]
},
"num": null,
"urls": [],
"raw_text": "Steven T. Piantadosi, Harry Tily, and Edward Gibson. 2012. The communicative function of ambiguity in language. Cognition, 122(3):280-291.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Computational Models of Miscommunication Phenomena",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Purver",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Hough",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Howes",
"suffix": ""
}
],
"year": 2018,
"venue": "Topics in Cognitive Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1111/tops.12324"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Purver, Julian Hough, and Christine Howes. 2018. Computational Models of Miscommunication Phenomena. Topics in Cognitive Science.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Cognitive Offloading",
"authors": [
{
"first": "Evan",
"middle": [
"F"
],
"last": "Risko",
"suffix": ""
},
{
"first": "Sam",
"middle": [
"J"
],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2016,
"venue": "Trends in Cognitive Sciences",
"volume": "20",
"issue": "9",
"pages": "676--688",
"other_ids": {
"DOI": [
"10.1016/j.tics.2016.07.002"
]
},
"num": null,
"urls": [],
"raw_text": "Evan F. Risko and Sam J. Gilbert. 2016. Cognitive Of- floading. Trends in Cognitive Sciences, 20(9):676- 688.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A model of intentional communication: AIRBUS (Asymmetric Intention Recognition with Bayesian Updating of Signals)",
"authors": [
{
"first": "Jan-Peter",
"middle": [],
"last": "De Ruiter",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Cummins",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of SemDial 2012",
"volume": "",
"issue": "",
"pages": "149--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan-Peter de Ruiter and Chris Cummins. 2012. A model of intentional communication: AIRBUS (Asymmetric Intention Recognition with Bayesian Updating of Signals). Proceedings of SemDial 2012, pages 149-50.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Repair After Next Turn: The Last Structurally Provided Defense of Intersubjectivity in Conversation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Emanuel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schegloff",
"suffix": ""
}
],
"year": 1992,
"venue": "American Journal of Sociology",
"volume": "97",
"issue": "5",
"pages": "1295--1345",
"other_ids": {
"DOI": [
"10.1086/229903"
]
},
"num": null,
"urls": [],
"raw_text": "Emanuel A. Schegloff. 1992. Repair After Next Turn: The Last Structurally Provided Defense of Intersub- jectivity in Conversation. American Journal of Soci- ology, 97(5):1295-1345.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The Preference for Self-Correction in the Organization of Repair in Conversation",
"authors": [
{
"first": "Emanuel",
"middle": [
"A"
],
"last": "Schegloff",
"suffix": ""
},
{
"first": "Gail",
"middle": [],
"last": "Jefferson",
"suffix": ""
},
{
"first": "Harvey",
"middle": [],
"last": "Sacks",
"suffix": ""
}
],
"year": 1977,
"venue": "Language",
"volume": "53",
"issue": "2",
"pages": "361--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emanuel A. Schegloff, Gail Jefferson, and Harvey Sacks. 1977. The Preference for Self-Correction in the Organization of Repair in Conversation. Lan- guage, 53(2):361-382. ArticleType: primary article / Full publication date: Jun., 1977 / Copyright c 1977 Linguistic Society of America.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Clarifying Intentions in Dialogue: A Corpus Study",
"authors": [
{
"first": "J",
"middle": [],
"last": "Julian",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Schl\u00f6der",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 11th International Conference on Computational Semantics (IWCS-2015)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julian J. Schl\u00f6der and Raquel Fern\u00e1ndez. 2015. Clar- ifying Intentions in Dialogue: A Corpus Study. In Proceedings of the 11th International Conference on Computational Semantics (IWCS-2015), London.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Relevance: Communication and Cognition",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Deirdre",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Sperber and Deirdre Wilson. 1986. Relevance: Communication and Cognition, first edition. Black- well Publishing.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Coherence as constraint satisfaction",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Thagard",
"suffix": ""
},
{
"first": "Karsten",
"middle": [],
"last": "Verbeurgt",
"suffix": ""
}
],
"year": 1998,
"venue": "Cognitive Science",
"volume": "22",
"issue": "1",
"pages": "1--24",
"other_ids": {
"DOI": [
"10.1016/S0364-0213(99)80033-0"
]
},
"num": null,
"urls": [],
"raw_text": "Paul Thagard and Karsten Verbeurgt. 1998. Coher- ence as constraint satisfaction. Cognitive Science, 22(1):1-24.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Theory before the test: How to build high-verisimilitude explanatory theories in psychological science",
"authors": [
{
"first": "Giosu\u00e8",
"middle": [],
"last": "Iris Van Rooij",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baggio",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.31234/osf.io/7qbpr"
]
},
"num": null,
"urls": [],
"raw_text": "Iris van Rooij and Giosu\u00e8 Baggio. 2020. Theory before the test: How to build high-verisimilitude explana- tory theories in psychological science. Preprint, PsyArXiv.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Cognition and Intractability: A Guide to Classical and Parameterized Complexity Analysis",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Iris Van Rooij",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Blokpoel",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Kwisthout",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wareham",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1017/9781107358331"
]
},
"num": null,
"urls": [],
"raw_text": "Iris van Rooij, Mark Blokpoel, Johan Kwisthout, and Todd Wareham. 2019. Cognition and Intractability: A Guide to Classical and Parameterized Complexity Analysis. Cambridge University Press.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "tentional Communication: Computationally Easy or Difficult? Frontiers in Human Neuroscience",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Iris Van Rooij",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Kwisthout",
"suffix": ""
},
{
"first": "Jakub",
"middle": [],
"last": "Blokpoel",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Szymanik",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Wareham",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Toni",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3389/fnhum.2011.00052"
]
},
"num": null,
"urls": [],
"raw_text": "Iris van Rooij, Johan Kwisthout, Mark Blokpoel, Jakub Szymanik, Todd Wareham, and Ivan Toni. 2011. In- tentional Communication: Computationally Easy or Difficult? Frontiers in Human Neuroscience, 5.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The puzzle of ambiguity",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wasow",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Perfors",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Beaver",
"suffix": ""
}
],
"year": 2005,
"venue": "Morphology and The Web of Grammar: Essays in Memory of Steven G. Lapointe",
"volume": "",
"issue": "",
"pages": "265--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wasow, Andrew Perfors, and David Beaver. 2005. The puzzle of ambiguity. In O Orgun and P Sells, editors, Morphology and The Web of Gram- mar: Essays in Memory of Steven G. Lapointe, pages 265-282. CSLI Publications.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Coordinating beliefs in conversation",
"authors": [
{
"first": "Deanna",
"middle": [],
"last": "Wilkes",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Gibbs",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Herbert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 1992,
"venue": "Journal of Memory and Language",
"volume": "31",
"issue": "2",
"pages": "183--194",
"other_ids": {
"DOI": [
"10.1016/0749-596X(92)90010-U"
]
},
"num": null,
"urls": [],
"raw_text": "Deanna Wilkes-Gibbs and Herbert H Clark. 1992. Co- ordinating beliefs in conversation. Journal of Mem- ory and Language, 31(2):183-194.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Pragmatic reasoning model for listener and speaker. Arrow direction represents a 'reasons about' relationship, illustrating the recursive reasoning being done by the agents. Agents reason about increasingly lower levels, eventually bottoming out in a literal listener or speaker respectively."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "(a) Communicative success by agent type and lexicon size (horizontal lines indicate chance level, error bars 95% CIs). (b) Number of turns by lexicon size (interactional agents only); turns >1 increment by 2 since repair sequences are paired turns. (c) Computational complexity (in basic computation steps) by agent type and lexicon size. For interactional agents with a 6 \u00d7 4 lexicon no data is visible as the computation cost is very small (48) relative to the range of the y-axis."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The computational complexity for different numbers of turns for different lexicon sizes, for the interactional agents."
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The difference in entropy between the number of turns for different lexicon sizes for the interactional agents.B Computational Complexity Analysis of Interactional, Frugally Pragmatic and Fully Pragmatic Communication Strategies B.1 Agent types and computational operations"
},
"TABREF1": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Illustration of time required to compute models of varying complexity with input size n.",
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Computational complexity comparison across agent types. m denotes the maximum of |S| and |R|",
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Computational complexity comparison across agent types (with speaker and listener strategy summed together).",
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table/>",
"num": null,
"text": "Computational complexity comparison across agent types (with speaker and listener strategy summed together).",
"type_str": "table"
}
}
}
}