|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:08:32.561655Z" |
|
}, |
|
"title": "Discovering the Compositional Structure of Vector Representations with Role Learning Networks", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Soulos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"Thomas" |
|
], |
|
"last": "Mccoy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "New York University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Smolensky", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "How can neural networks perform so well on compositional tasks even though they lack explicit compositional representations? We use a novel analysis technique called ROLE to show that recurrent neural networks perform well on such tasks by converging to solutions which implicitly represent symbolic structure. This method uncovers a symbolic structure which, when properly embedded in vector space, closely approximates the encodings of a standard seq2seq network trained to perform the compositional SCAN task. We verify the causal importance of the discovered symbolic structure by showing that, when we systematically manipulate hidden embeddings based on this symbolic structure, the model's output is changed in the way predicted by our analysis. Goal: Interpret neural network encodings jump and run twice JUMP RUN RUN RNN Decoder RNN Encoder Encoding jump and run left twice JUMP LTURN RUN LTURN RUN Method: Approximate the encodings of a neural network with a more interpretable compositional model (\u00a74) Step 1: Assign structural roles to words using a learned role assigner. Step 2: Combine word and role vectors using a closed-form equation with learned parameters.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "How can neural networks perform so well on compositional tasks even though they lack explicit compositional representations? We use a novel analysis technique called ROLE to show that recurrent neural networks perform well on such tasks by converging to solutions which implicitly represent symbolic structure. This method uncovers a symbolic structure which, when properly embedded in vector space, closely approximates the encodings of a standard seq2seq network trained to perform the compositional SCAN task. We verify the causal importance of the discovered symbolic structure by showing that, when we systematically manipulate hidden embeddings based on this symbolic structure, the model's output is changed in the way predicted by our analysis. Goal: Interpret neural network encodings jump and run twice JUMP RUN RUN RNN Decoder RNN Encoder Encoding jump and run left twice JUMP LTURN RUN LTURN RUN Method: Approximate the encodings of a neural network with a more interpretable compositional model (\u00a74) Step 1: Assign structural roles to words using a learned role assigner. Step 2: Combine word and role vectors using a closed-form equation with learned parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Traditional models of cognition, and language in particular, have relied heavily on symbol structures and symbol manipulation. However, in the current era, deep learning research has shown that Neural Networks (NNs) can display remarkable degrees of generalization on tasks traditionally viewed as depending on symbolic structure (Wu et al., 2016; McCoy et al., 2019a) , albeit with some important limits to their generalization (Lake and . Given that standard NNs have no obvious mechanisms for representing symbolic structures, parsing inputs into such structures, nor applying compositional symbol-manipulating rules to them, this success raises the question that we address in this paper: How do NNs achieve such strong performance on compositional tasks?", |
|
"cite_spans": [ |
|
{ |
|
"start": 330, |
|
"end": 347, |
|
"text": "(Wu et al., 2016;", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 348, |
|
"end": 368, |
|
"text": "McCoy et al., 2019a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Could it be that NNs do learn symbolic representations-covertly embedded as vectors in their state spaces? McCoy et al. (2019a) showed that when trained on highly compositional tasks,", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 127, |
|
"text": "McCoy et al. (2019a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The RNN encodings can be manipulated in symbolic ways to alter the output. standard NNs learned representations that are functionally equivalent to compositional vector embeddings of symbolic structures (Sec. 3). Processing in these NNs assigns structural representations to inputs and generates outputs that are governed by compositional rules stated over those representations. We refer to the networks we will analyze as target NNs, because we will propose a new type of NN (in Sec. 4)-the Role Learner (ROLE)which is used to analyze the target network. In contrast with the analysis model of McCoy et al. (2019a) , which relies on a hand-specified hypothesis about the structure underlying the learned representations of the target NN, ROLE automatically learns a symbolic structure that best approximates the internal representation of the target network. This yields two advantages. First, ROLE achieves success at analyzing networks for which the underlying structure is unclear. We show this in Sec. 5, where ROLE successfully uncovers the symbolic structures learned by a seq2seq RNN trained on the SCAN synthetic semantic parsing task (Lake and . Second, removing the need for hand-specified structural hypotheses reduces the burden on the analyst, who only needs to provide input sequences and their target NN encodings. Discovering symbolic structure within a model enables us to perform precise alterations to the internal representations in order to produce desired alterations in the output (Sec. 5.3). Then, in Sec. 6, we turn briefly to partially-compositional tasks in NLP.", |
|
"cite_spans": [ |
|
{ |
|
"start": 596, |
|
"end": 616, |
|
"text": "McCoy et al. (2019a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The novel contributions of this research are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 ROLE, a NN module that learns to assign symbolic structures to input sequences (Sec. 4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Demonstration that RNNs converge to compositional solutions on the synthetic SCAN task (Sec. 5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 A precise closed-form expression for the distributed encoding learned by an RNN trained on SCAN, exhibiting its latent symbolic structure (Sec. 5.2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Demonstration of the causal relevance of this symbolic structure by using the equation for its vector encoding to control RNN output through precise alteration of the RNN's internal encoding (Sec. 5.3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Additional evidence showing that sentence embedding models do not capture compositional structure (Sec. 6).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 Background Related work", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Certain cognitive tasks consist in computing a function \u03d5 that is governed by strict rules: e.g., if \u03d5 is the function mapping a mathematical expression to its value (e.g., mapping '19 \u2212 2 * 7' to 5), then \u03d5 obeys the rule that \u03d5(x + y) = sum(\u03d5(x), \u03d5(y)) for any expressions x and y. This rule is compositional: the output of a structure (here, x + y) is a function of the outputs of the structure's constituents (here, x and y). The rule can be stated with full generality once the input is assigned a symbolic structure giving its decomposition into constituents. For a fully-compositional task, completely determined by compositional rules, a system that can assign appropriate symbolic structures to inputs and apply appropriate compositional rules to these structures will display full systematic generalization: it will correctly process arbitrary novel combinations of familiar constituents. This is a core capability of symbolic AI systems. Other tasks, including most natural language tasks such as machine translation, are only partially characterizable by compositional rules: natural language is only partially compositional in nature. For example, if \u03d5 is the function that assigns meanings to English adjectives, it generally obeys the rule that \u03d5(in-+ x) = not \u03d5(x), (e.g., \u03d5(inoffensive) = not \u03d5(offensive)), yet there are exceptions: \u03d5(inflammable) = \u03d5(flammable). On these \"partially-compositional\" tasks, this strategy of compositional analysis has demonstrated considerable, but limited, generalization capabilities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Compositionality", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Many past works in the rich body of literature about analyzing NNs focus on compositional structure (Hupkes et al., 2020 Hewitt and Manning, 2019; Li et al., 2019) and systematicity (Lake and Goodwin et al., 2020) . Two of the most popular analysis techniques are the behavioral and probing approaches. In the behavioral approach, a model is evaluated on a set of examples carefully chosen to require competence in particular linguistic phenomena (Marvin and Linzen, 2018; Wang et al., 2018; Dasgupta et al., 2019; Poliak et al., 2018; Linzen et al., 2016; McCoy et al., 2019b; Warstadt et al., 2020) . This technique can illuminate behavioral shortcomings but says little about how the internal representations are struc-tured, treating the model as a black box.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 120, |
|
"text": "(Hupkes et al., 2020", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 121, |
|
"end": 146, |
|
"text": "Hewitt and Manning, 2019;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 147, |
|
"end": 163, |
|
"text": "Li et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 192, |
|
"end": 213, |
|
"text": "Goodwin et al., 2020)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 459, |
|
"end": 472, |
|
"text": "Linzen, 2018;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 491, |
|
"text": "Wang et al., 2018;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 492, |
|
"end": 514, |
|
"text": "Dasgupta et al., 2019;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 515, |
|
"end": 535, |
|
"text": "Poliak et al., 2018;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 536, |
|
"end": 556, |
|
"text": "Linzen et al., 2016;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 557, |
|
"end": 577, |
|
"text": "McCoy et al., 2019b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 578, |
|
"end": 600, |
|
"text": "Warstadt et al., 2020)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of NNs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In the probing approach, an auxiliary classifier is trained to classify the model's internal representations based on some linguistically-relevant distinction (Adi et al., 2017; Giulianelli et al., 2018; Conneau and Kiela, 2018; Blevins et al., 2018; Peters et al., 2018; Tenney et al., 2019) . In contrast with the behavioral approach, the probing approach tests whether some particular information is present in the model's encodings, but it says little about whether this information is actually used by the model. Indeed, in some cases models fail despite having the necessary information to succeed in their representations, showing that the ability of a classifier to extract that information does not mean that the model is using it (Voita and Titov, 2020; Ravichander et al., 2020; Vanmassenhove et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 177, |
|
"text": "(Adi et al., 2017;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 203, |
|
"text": "Giulianelli et al., 2018;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 228, |
|
"text": "Conneau and Kiela, 2018;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 250, |
|
"text": "Blevins et al., 2018;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 271, |
|
"text": "Peters et al., 2018;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 292, |
|
"text": "Tenney et al., 2019)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 740, |
|
"end": 763, |
|
"text": "(Voita and Titov, 2020;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 764, |
|
"end": 789, |
|
"text": "Ravichander et al., 2020;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 790, |
|
"end": 817, |
|
"text": "Vanmassenhove et al., 2017)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of NNs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We build on McCoy et al. (2019a), which introduced the analysis task DISCOVER (DISsecting COmpositionality in VEctor Representations): take a NN and, to the extent possible, find an explicitly-compositional approximation to its internal distributed representations. DISCOVER allows us to bridge the gap between representation and behavior: It reveals not only what information is encoded in the representation, but also reveals this information in a way that we can manipulate to show that the information is causally implicated in the model's behavior (Section 5.3). Moreover, it provides a much more comprehensive window into the representation than the probing approach does; while probing extracts particular types of information from a representation (e.g., \"does this representation distinguish between active and passive sentences?\"), DISCOVER exhaustively decomposes the model's representational space. In this regard, DISCOVER is most closely related to the approaches of Andreas (2019), Chrupa\u0142a and Alishahi (2019) , and Abnar et al. (2019) , who also propose methods for discovering a complete symbolic characterization of a set of vector representations, and Omlin and Giles (1996) and Weiss et al. (2018) , which also seek to extract more interpretable symbolic models that approximate neural network behavior. Like Andreas (2019) and Chrupa\u0142a and Alishahi (2019) , we seek to find the structure encoded in neural networks, rather than seeking structure directly from the data as is the goal in grammar induction work such as Shen et al. (2019) and Bowman et al. (2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 997, |
|
"end": 1025, |
|
"text": "Chrupa\u0142a and Alishahi (2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1032, |
|
"end": 1051, |
|
"text": "Abnar et al. (2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1182, |
|
"end": 1194, |
|
"text": "Giles (1996)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1199, |
|
"end": 1218, |
|
"text": "Weiss et al. (2018)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 1330, |
|
"end": 1344, |
|
"text": "Andreas (2019)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1349, |
|
"end": 1377, |
|
"text": "Chrupa\u0142a and Alishahi (2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1540, |
|
"end": 1558, |
|
"text": "Shen et al. (2019)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1563, |
|
"end": 1583, |
|
"text": "Bowman et al. (2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of NNs", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "McCoy et al. showed that, in GRU (Cho et al., 2014) encoder-decoder networks performing simple, fully-compositional string manipulations, the medial encoding (between encoder and decoder) could be extremely well approximated, up to an affine transformation, by Tensor Product Representations (TPRs) (Smolensky, 1990) , which are explicitly-compositional vector embeddings of symbolic structures. To represent a string of symbols as a TPR, the symbols in the string 337 might be parsed into three constituents {3 : pos1, 7 : pos3, 3 : pos2}, where posn is the role of n th position from the left edge of the string; other role schemes are also possible, such as roles denoting right-to-left position: {3 : third-to-last, 3 : second-to-last, 7 : last}. The embedding of a constituent 7 : pos3 is e(7 : pos3) = e F (7) \u2297 e R (pos3), where \u2297 is the tensor product (outer product), e R , e F are respectively a vector embedding of the roles and a vector embedding of the fillers of those roles: the digits. The embedding of the whole string is the sum of the embeddings of its constituents. In general, for a symbol structure S with roles {r k } that are respectively filled by the symbols {f k }, e TPR (S) = k e F (f k ) \u2297 e R (r k ). The DISCOVER task including the TPR equations is depicted in Figure 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 51, |
|
"text": "(Cho et al., 2014)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 316, |
|
"text": "(Smolensky, 1990)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1293, |
|
"end": 1301, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "NN embedding of symbol structures", |
|
"sec_num": "3" |
|
}, |
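
{

"text": "As an illustration of the TPR construction just described, the string '337' with left-to-right positional roles can be embedded as in the following minimal numpy sketch (the embeddings here are arbitrary random stand-ins, not learned values):\n\nimport numpy as np\n\n# Minimal TPR sketch for the string '337' with left-to-right roles.\nrng = np.random.default_rng(0)\nd_f, d_r = 4, 3                  # filler and role embedding sizes (arbitrary)\ne_F = {'3': rng.normal(size=d_f), '7': rng.normal(size=d_f)}   # filler embeddings\ne_R = {f'pos{i}': rng.normal(size=d_r) for i in (1, 2, 3)}     # role embeddings\n\n# e_TPR(S) = sum over constituents of e_F(f_k) outer-product e_R(r_k)\nconstituents = [('3', 'pos1'), ('3', 'pos2'), ('7', 'pos3')]\ntpr = sum(np.outer(e_F[f], e_R[r]) for f, r in constituents)\nprint(tpr.shape)   # (4, 3): a single d_f x d_r matrix encoding the whole string",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "NN embedding of symbol structures",

"sec_num": "3"

},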
|
{ |
|
"text": "At a high level, these role embeddings serve a similar purpose as positional embeddings in a Transformer (Vaswani et al., 2017) , in that they are vector embeddings of a token's position in a sequence. The roles discussed above-and the positional embeddings used in Transformers-illustrate role schemes based on sequential position; nonsequential role schemes such as positions in a tree are also possible. McCoy et al. (2019a) showed that,for a given seq2seq architecture learning a given string-mapping task, there exists a highly accurate TPR approximation of the medial encoding, given an appropriate pre-defined role scheme. The main technical contribution of the present paper is the Role Learner (ROLE) model, an RNN that learns its own role scheme to optimize the fit of a TPR approximation to a given set of internal representations in a pre-trained target NN. This makes the DISCOVER framework more general by removing the need for human-generated hypothe- Figure 2 : The DISCOVER task and functions. At the top is the target network and question we pose: is the internal embedding a TPR? The middle row is the TPE which follows the provided equation. We train the TPE to minimize the MSE between\u00ca and E. In the bottom row, we evaluate our model by passing the approxima-tions\u00ca through the decoder and checking the substitution accuracy -the proportion of examples for which the approximated encoding\u00ca yields the correct output when provided to the decoder . ses about the role schemes the network might be implementing. Learned role schemes, we will see in Sec. 5.1, can enable good TPR approximation of networks for which human-generated role schemes fail.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 127, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 407, |
|
"end": 427, |
|
"text": "McCoy et al. (2019a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 967, |
|
"end": 975, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "NN embedding of symbol structures", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "ROLE 1 produces a vector-space embedding of an input string of T symbols S = s 1 s 2 . . . s T by producing a TPR T(S) and then passing it through an affine transformation. ROLE is trained to approximate a pre-trained target string-encoder E. Given a set of N training strings {S (1) , . . . , S (N ) }, ROLE minimizes the total mean-squared error (MSE) between its output W T(S (i) ) + b and E(S (i) ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Role Learner (ROLE) Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "ROLE is an extension of the Tensor-Product Encoder (TPE) introduced in McCoy et al. (2019a) (as the \"Tensor Product Decomposition Network\") and depicted in Figure 3 . Crucially, ROLE is not given role labels for the input symbols, but learns to compute them. More precisely, it learns a dictionary of n R d R -dimensional role-embedding vectors, R \u2208 R d R \u00d7n R , and, for each input symbol s t , computes a soft-attention vector a t over these role vectors: the role vector assigned to s t is then the attention-weighted linear combination 1 Code available at https://github.com/ psoulos/role-decomposition. The fillers (yellow circles) and roles (blue circles) are first vectorized with an embedding layer. These two vector embeddings are combined by an outer product to produce the green matrix representing the TPR of the constituent. All of the constituents are summed together to produce the TPR of the sequence, and then a linear transformation is applied to resize the TPR to the target encoder's dimensionality. ROLE replaces the role embedding layer and directly produces the blue role vector. of role vectors, r t = R a t . ROLE simultaneously learns a dictionary of n F d F -dimensional symbolembedding filler vectors F \u2208 R d F \u00d7n F , the \u03c6 th column of which is f \u03c6 , the embedding of symbol type \u03c6; \u03c6 \u2208 1, . . . , n F where n F is the size of the vocabulary of symbol types. The TPR generated by ROLE is thus T(S) = T t=1 f \u03c4 (st) \u2297 r t , where \u03c4 (s t ) is symbol s t 's type. Finally, ROLE learns an affine transformation to map this TPR into R d , where d is the dimension of the representations of the encoder E.", |
|
"cite_spans": [ |
|
{ |
|
"start": 540, |
|
"end": 541, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 164, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Role Learner (ROLE) Model", |
|
"sec_num": "4" |
|
}, |
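
{

"text": "The computation just described can be summarized in a minimal numpy sketch (illustrative only, not the released implementation; the soft role-attention vectors a_t are taken as given here, and the module that produces them is described next; all names and dimensions are arbitrary):\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn_F, d_F = 10, 8      # vocabulary size and filler dimension (arbitrary)\nn_R, d_R = 6, 5       # number of roles and role dimension (arbitrary)\nd = 12                # dimension of the target encoder E\n\nF = rng.normal(size=(d_F, n_F))       # filler embeddings, one column per symbol type\nR = rng.normal(size=(d_R, n_R))       # role embeddings, one column per role\nW = rng.normal(size=(d, d_F * d_R))   # affine map from flattened TPR to target space\nb = rng.normal(size=d)\n\ndef role_forward(symbol_ids, attentions):\n    # symbol_ids: length-T list of symbol type indices tau(s_t)\n    # attentions: T x n_R matrix of soft role-attention weights a_t (rows sum to 1)\n    T_S = np.zeros((d_F, d_R))\n    for sym, a_t in zip(symbol_ids, attentions):\n        r_t = R @ a_t                    # r_t = R a_t, attention-weighted role vector\n        T_S += np.outer(F[:, sym], r_t)  # f_tau(s_t) outer-product r_t\n    return W @ T_S.ravel() + b           # affine map into the target encoder's space\n\na = rng.dirichlet(np.ones(n_R), size=3)  # soft attention for a 3-symbol input\nprint(role_forward([2, 0, 5], a).shape)  # (12,): a vector in the target encoder's space",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Role Learner (ROLE) Model",

"sec_num": "4"

},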
|
{ |
|
"text": "ROLE uses an LSTM (Hochreiter and Schmidhuber, 1997) to compute the role-assigning attentionvectors a t from its learned embedding F of the input symbols s t : at each t, the hidden state of the LSTM passes through a linear layer and then a softmax to produce a t (depicted in Figure 4) . Let the t th LSTM hidden state be q t \u2208 R H ; let the output-layer weight-matrix have rows k \u03c1 \u2208 R H and let the columns of R be", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 52, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 277, |
|
"end": 286, |
|
"text": "Figure 4)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Role Learner (ROLE) Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "v \u03c1 \u2208 R d R , with \u03c1 = 1, . . . , n R . Then r t = R a t = n R \u03c1=1 v \u03c1 softmax(k \u03c1 q t ):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Role Learner (ROLE) Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "the result of query-key attention (e.g., Vaswani et al., 2017) with query q t to a fixed external memory containing key-value pairs", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 62, |
|
"text": "Vaswani et al., 2017)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Role Learner (ROLE) Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "{(k \u03c1 , v \u03c1 )} n R", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Role Learner (ROLE) Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u03c1=1 . Since a TPR for a discrete symbol structure deploys a discrete set of roles specifying discrete structural positions, ideally a single role would be Figure 4 : The role learning module. The role attention vector a t is encouraged to be one-hot through regularization; if a t were one-hot, the produced role embedding r t would correspond directly to one of the roles defined in the role matrix R. The LSTM can be unidirectional or bidirectional. selected for each s t : a t would be one-hot. ROLE training therefore deploys regularization to bias learning towards one-hot a t vectors (based on the regularization proposed in Palangi et al. 2017, developed for the same purpose). See Appendix A.2 for the precise regularization terms that we used.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 163, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Role Learner (ROLE) Model", |
|
"sec_num": "4" |
|
}, |
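
{

"text": "The attention step above can also be written compactly as query-key attention over the learned role memory; a minimal numpy sketch (hypothetical dimensions, with the LSTM hidden state q_t supplied directly rather than computed):\n\nimport numpy as np\n\ndef softmax(x):\n    z = np.exp(x - x.max())\n    return z / z.sum()\n\nrng = np.random.default_rng(1)\nH, n_R, d_R = 16, 6, 5\nK = rng.normal(size=(n_R, H))     # keys: rows k_rho of the output-layer weight matrix\nV = rng.normal(size=(n_R, d_R))   # values: role embeddings v_rho (columns of R)\n\nq_t = rng.normal(size=H)          # LSTM hidden state at step t (assumed given)\na_t = softmax(K @ q_t)            # soft role attention; regularized toward one-hot\nr_t = V.T @ a_t                   # r_t = sum over rho of [a_t]_rho v_rho\nprint(a_t.round(2), r_t.shape)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Role Learner (ROLE) Model",

"sec_num": "4"

},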
|
{ |
|
"text": "It is essential to note that, while we impose this regularization on ROLE, there is no explicit bias favoring discrete compositional representations in the target encoder E: any such structure that ROLE finds hidden in the representations learned by E must result from biases implicit in the vanilla RNNarchitecture of E when applied to its target task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Role Learner (ROLE) Model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Returning to our central question from Sec. 1, how can neural networks without explicit compositional structure perform well on fully-compositional tasks? Our hypothesis is that, though these models have no constraint forcing them to be compositional, they still have the ability to implicitly learn compositional structure. To test this hypothesis, we apply ROLE to a standard RNN-based seq2seq model (Sutskever et al., 2014) trained on a fully compositional task. Because the RNN has no constraint forcing it to use TPRs, we do not know a priori whether there exists any solution that ROLE could learn; thus, if ROLE does learn anything it will be a significant empirical finding about how these RNNs operate.", |
|
"cite_spans": [ |
|
{ |
|
"start": 402, |
|
"end": 426, |
|
"text": "(Sutskever et al., 2014)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The SCAN task", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We consider the SCAN task (Lake and Baroni, 2018), which was designed to test compositional generalization and systematicity. SCAN is a synthetic semantic parsing task: an input sequence describing an action plan, e.g., jump opposite left, is mapped to a sequence of primitive actions, e.g., TL TL JUMP (see Sec. 5.3 for a complex example). We use TL to abbreviate TURN LEFT, sometimes written LTURN; similarly, we use TR for TURN RIGHT. The SCAN mapping is defined by a complete set of compositional rules (Lake and Baroni, 2018, Supplementary Fig. 7 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 531, |
|
"end": 551, |
|
"text": "Supplementary Fig. 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The SCAN task", |
|
"sec_num": "5" |
|
}, |
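
{

"text": "For concreteness, the following minimal Python sketch interprets SCAN commands of the kind used in this paper; it is reconstructed from the examples given here (e.g., jump opposite left maps to TL TL JUMP), not taken from Lake and Baroni's released grammar, and performs no input validation:\n\n# Primitive interpretations, following the examples in the text (TL = TURN LEFT).\nACT = {'jump': ['JUMP'], 'walk': ['WALK'], 'run': ['RUN'], 'look': ['LOOK'], 'turn': []}\nTURN = {'left': ['TL'], 'right': ['TR']}\n\ndef phrase(words):\n    verb, rest = ACT[words[0]], words[1:]\n    if not rest:\n        return verb\n    if rest[0] == 'opposite':\n        return 2 * TURN[rest[1]] + verb\n    if rest[0] == 'around':\n        return 4 * (TURN[rest[1]] + verb)\n    return TURN[rest[0]] + verb\n\ndef subcommand(words):\n    reps = {'twice': 2, 'thrice': 3}.get(words[-1], 1)\n    core = words[:-1] if words[-1] in ('twice', 'thrice') else words\n    return reps * phrase(core)\n\ndef interpret(command):\n    words = command.split()\n    for conj in ('and', 'after'):\n        if conj in words:\n            i = words.index(conj)\n            left, right = subcommand(words[:i]), subcommand(words[i + 1:])\n            return left + right if conj == 'and' else right + left\n    return subcommand(words)\n\nprint(interpret('jump opposite left'))   # ['TL', 'TL', 'JUMP']\nprint(interpret('jump around left after walk thrice'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The SCAN task",

"sec_num": "5"

},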
|
{ |
|
"text": "For our target SCAN encoder E, we trained a standard GRU with one hidden layer of dimension 100 for 100,000 steps (batch-size 1) with a dropout of 0.1 on the simple train-test split (hyperparameters determined by a limited search; see Appendix A.3). E achieves 98.47% (full-string) accuracy on the test set. Thus E provides what we want: a standard RNN achieving near-perfect accuracy on a non-trivial fully compositional task. After training, we extract the final hidden embedding from the encoder for each example in the training and test sets. These are the encodings we attempt to approximate as explicitly compositional TPRs. We provide ROLE with 50 roles to use as it wants (hyperparameters described in Appendix A.4). We evaluate the substitution accuracy that this learned role scheme provides in three ways. The continuous method tests ROLE in the same way as it was trained, with input symbol s t assigned role vector r t = R a t . The continuous method does not produce a discrete set of role vectors because the linear layer that generates a t allows for continuously-valued weights. The remaining two methods test the efficacy of a truly discrete set of role vectors. First, in the snapped method, a t is replaced at evaluation time by the one-hot vector m t singling out role m t = arg max(a t ): r t = R m t . This method serves the goal of enforcing the discreteness of roles, but it is expected to decrease performance because it tests ROLE in a different way than it was trained. Our final evaluation method, the discrete method, uses discrete roles without having such a train/test discrepancy by using a two-stage process. In the first stage, the snapped method is used to output one-hot vector roles m t for every symbol in the dataset. In the second stage, we train a TPE which does not learn roles but rather uses the one-hot vector m t as input during training. In this case, ROLE acts as an automatic data labeler, assigning a role to every input word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The compositional structure of SCAN encoder representations", |
|
"sec_num": "5.1" |
|
}, |
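
{

"text": "The difference between the continuous and snapped evaluations amounts to whether the soft attention vector a_t or its one-hot argmax is used when building the role vector; a minimal numpy sketch with hypothetical values:\n\nimport numpy as np\n\nrng = np.random.default_rng(2)\nn_R, d_R = 6, 5\nR = rng.normal(size=(d_R, n_R))            # learned role embeddings (stand-ins here)\na_t = np.array([0.02, 0.90, 0.03, 0.02, 0.02, 0.01])   # soft attention for one symbol\n\nr_continuous = R @ a_t                     # continuous: attention-weighted role vector\nm_t = np.eye(n_R)[np.argmax(a_t)]          # snapped: one-hot vector at argmax(a_t)\nr_snapped = R @ m_t                        # equivalently R[:, argmax(a_t)]\nprint(np.allclose(r_snapped, R[:, 1]))     # True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The compositional structure of SCAN encoder representations",

"sec_num": "5.1"

},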
|
{

"text": "Table 1: Mean substitution accuracy for learned (ROLE) and hand-defined role schemes on SCAN across three random initializations. ROLE Continuous: 94.83%; ROLE Snapped: 81.71% \u00b1 7.28; ROLE Discrete: 92.44%; LTR: 6.68%; RTL: 6.96%; Bi: 10.72%; Tree: 4.31%; Wickel: 44.00%; BOW: 4.52%. Standard deviation was below 1% for all schemes except for snapped. Substitution accuracy is measured by feeding ROLE's approximation to the target decoder. (Sec. 5.1)",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 73, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Continuous", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For comparison, we also train TPEs using a variety of discrete hand-crafted role schemes: left-toright (LTR), right-to-left (RTL), bidirectional (Bi), tree position, neighbor-based Wickelrole (Wickel), and bag-of-words (BOW) (descriptions of these role schemes are in Appendix A.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Continuous", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The mean substitution accuracy from these different methods is shown in Table 1 . All of the predefined role schemes provide poor approximations, none surpassing 44.00% accuracy. The role scheme learned by ROLE does significantly better than any of the predefined role schemes: when tested with the basic, continuous role-attention method, the accuracy is 94.83%.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 79, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Continuous", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The success of ROLE tells us two things. First, it shows that the target model's compositional behavior relies on compositional internal representations: it was by no means guaranteed to be the case that ROLE would be successful here, so the fact that it is successful tells us that the encoder has learned compositional representations. Second, it adds further validation to the efficacy of ROLE, because it shows that it can be a useful analysis tool in cases of significantly greater complexity than the simple string manipulation tasks studied in McCoy et al. (2019a) . In fact, it allows us to write in closed form the embedding e(S) of an input S = s 1 . . . s T that is learned by the SCAN encoder, to an excellent degree of approximation (as measured by substitution accuracy):", |
|
"cite_spans": [ |
|
{ |
|
"start": 551, |
|
"end": 571, |
|
"text": "McCoy et al. (2019a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Continuous", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "e(S) = W T t=1 f \u03c4 (st) \u2297 r \u03c1(st) + b, where \u03c4 (s t ) is symbol s t 's type, \u03c1(s t )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Continuous", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "is the role assigned to s t by the algorithm discussed next, and the matri-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Continuous", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "ces W, F = [f 1 . . . f n F ], and R = [r 1 . . . r n R ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Continuous", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "and bias vector b are learned by ROLE. Note that this expression is bilinear, even though the GRU encoder that generates it includes nonlinearities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Continuous", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "By analyzing the roles assigned by ROLE to the sequences in the SCAN training set, we created a symbolic algorithm for predicting which role will be assigned to each filler. This section covers the primary factors of the algorithm, while the entire algorithm is described in Appendix A.5 and discussed at additional length in Appendix A.6. Though the algorithm was created based only on sequences in the SCAN training set, it is equally successful at predicting which roles will be assigned to test sequences, exactly matching ROLE's predicted roles for 98.7% of sequences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpreting the learned role scheme", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The algorithm illuminates how the filler-role scheme encodes information relevant to the task. First, one of the initial facts that the decoder must determine is whether the sequence is a single command, a pair of subcommands connected by and, or a pair of subcommands connected by after; such a determination is crucial for knowing the basic structure of the output (how many actions to perform and in what order). We have found that role 30 is used for, and only for, the filler and, while role 17 is used in and only in sequences containing after (usually with after as the filler bound to role 17). Thus, the decoder can use these roles to tell which basic structure is in play: if role 30 is present, it is an and sequence; if role 17 is present, it is an after sequence; otherwise it is a single command.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpreting the learned role scheme", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Once the decoder has established the basic syntactic structure of the output, it must then fill in the particular actions. This can be accomplished using the remaining roles, which mainly encode absolute position within a subcommand. For example, the last word of a subcommand before after (e.g., jump left after walk twice) is always assigned role 8, while the last word of a subcommand after after (e.g., jump left after walk twice) is always assigned role 46. Therefore, once the decoder knows (based on the presence of role 17) that it is dealing with an after sequence, it can check for the fillers bound to roles 8 and 46 to begin to figure out what the two subcommands surrounding after look like. The identity of the last word in a subcommand is informative because that is where a cardinality (i.e., twice or thrice) appears if there is one. Thus, by checking what filler is at the end of a subcommand, the model can determine whether there is a cardinality present and, if so, which one. ROLE itself does not provide an interpretation for the symbolic structure it generates, but we have shown that this structure can be successfully interpreted by humans. By contrast, it is very difficult to interpret the continuous neuron values of RNN representations; even the rare successful cases of doing so, such as Lakretz et al. (2019) and Mu and Andreas (2020), only interpret a few isolated units, while we were able to exhaustively explain the entire symbolic structure discovered by ROLE.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1319, |
|
"end": 1340, |
|
"text": "Lakretz et al. (2019)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Interpreting the learned role scheme", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The substitution-accuracy results above show that if the entire learned representation is replaced by ROLE's approximation, the output remains correct. But do the individual word embeddings in this TPR have the appropriate causal consequences when processed by the decoder? To address this causal question (Pearl, 2000), we actively intervene on the constituent structure of the internal representations by replacing one constituent with another syntactically equivalent one, 2 and see whether this produces the expected change in the output of the decoder. We take the encoding generated by the RNN encoder E for an input such as jump opposite left, subtract the vector embedding of the opposite constituent, add the embedding of the around constituent, and see whether this causes the output to change from the correct output for jump opposite left (TL TL JUMP) to the correct output for jump around left (TL JUMP TL JUMP TL JUMP TL JUMP). The roles in these constituents are determined by the algorithm of Appendix A.5. If changing a word leads other roles in the sequence to change (according to the algorithm), we update the encoding with those new roles as well. Such surgery can be viewed as a more general extension of the analogy approach used by Mikolov et al. (2013) for analysis of word embeddings. An example of applying a sequence of five such constituent surgeries to a sequence is shown in Figure 5 (left). Even long sequences of such replacements produce the expected change in the decoder's output with high accuracy ( Figure 5 , 2 We extract syntactic categories from the SCAN grammar (Lake and Baroni, 2018, Supplementary Fig. 6 ) by saying that two words belong to the same category if every occurrence of one could be grammatically replaced by the other. We do not replace occurrences of and and after since the presence of either of these words causes substantial changes in the roles assigned within the sequence (Appendix A.5). right), indicating that the compositional structure discovered by ROLE does play a central causal role in the model's behavior.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1548, |
|
"end": 1549, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1406, |
|
"end": 1414, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1537, |
|
"end": 1545, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1628, |
|
"end": 1648, |
|
"text": "Supplementary Fig. 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Precision constituent-surgery on internal representations produces desired outputs", |
|
"sec_num": "5.3" |
|
}, |
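
{

"text": "A minimal numpy sketch of one such surgery step (illustrative only; all learned quantities are random stand-ins, role_of_word is a hypothetical role index, and we assume the simplified case where the replaced word keeps the same role):\n\nimport numpy as np\n\nrng = np.random.default_rng(3)\nd_F, d_R, d = 8, 5, 12\nW = rng.normal(size=(d, d_F * d_R))                             # stand-in for ROLE's learned W\nf = {w: rng.normal(size=d_F) for w in ('opposite', 'around')}   # stand-in filler vectors\nr = [rng.normal(size=d_R) for _ in range(50)]                   # stand-in role vectors\n\ndef constituent(word, role):\n    # contribution of one filler:role constituent under the affine map W\n    return W @ np.outer(f[word], r[role]).ravel()\n\nenc = rng.normal(size=d)          # stand-in for E('jump opposite left')\nrole_of_word = 22                 # hypothetical role assigned to 'opposite'\naltered = (enc\n           - constituent('opposite', role_of_word)\n           + constituent('around', role_of_word))\n# 'altered' would then be passed to the target decoder in place of enc.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Precision constituent-surgery on internal representations produces desired outputs",

"sec_num": "5.3"

},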
|
{ |
|
"text": "The previous sections explored fully-compositional tasks where there is a strong signal for compositionality. In this section, we explore whether the representations of NNs trained on tasks that are only partially-compositional also capture compositional structure. Partially-compositional tasks are especially challenging to model because a fullycompositional model may enforce compositionality too strictly to handle the non-compositional aspects of the task, while a model without a compositional bias may not learn any sort of compositionality from the weak cues in the training set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Partially-compositional NLP tasks", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We test four sentence encoding models for compositionality: InferSent (Conneau et al., 2017) , Skip-thought (Kiros et al., 2015) , Stanford Sentiment Model (SST) (Socher et al., 2013) , and SPINN (Bowman et al., 2016) . For each of these models, we extract the encodings for the SNLI premise sentences (Bowman et al., 2015) . We use the extracted embeddings to train ROLE with 50 roles available (additional training information provided in Appendix A.8).", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 92, |
|
"text": "(Conneau et al., 2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 108, |
|
"end": 128, |
|
"text": "(Kiros et al., 2015)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 183, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 217, |
|
"text": "SPINN (Bowman et al., 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 323, |
|
"text": "(Bowman et al., 2015)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Partially-compositional NLP tasks", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As a baseline, we also train TPEs that use predefined role schemes (hyperparameters described in Appendix A.7). For all of the sentence embedding models except Skip-thought, ROLE with continuous attention provides the lowest mean squared error at approximating the encoding ( Table 2) . The BOW (bag-of-words) role scheme represents a TPE that uses a degenerate 'compositional' structure which assigns the same role to every filler; for each of the sentence embedding models tested except for SST, performance is within the same order of magnitude as structure-free BOW. Parikh et al. 2016found that a bag-of-words model scores extremely well on Natural Language Inference despite having no knowledge of word order, showing that structure is not necessary to perform well on the sorts of tasks commonly used to train sentence encoders. Although not definitive, the ROLE results provide no evidence that these models' sentence embeddings possess compositional structure.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 276, |
|
"end": 284, |
|
"text": "Table 2)", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Partially-compositional NLP tasks", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In future work, it would be interesting to perform a similar analysis on Transformer architectures (Vaswani et al., 2017 4.15e-4 5.76e-4 8.21e-4 9.70e-4 9.16e-4 7.78e-4 4.34e-4 Skip-thought 9.30e-5 9.32e-5 9.85e-5 9.91e-5 1.78e-3 3.95e-4 9.64e-5 8.87e-5 SST 5.58e-3 6.72e-3 6.48e-3 8.35e-3 9.29e-3 8.55e-3 5.99e-3 9. et al., 2020) and few-shot learning of compositional tasks (Brown et al., 2020), both of which suggest that they learn substantial degrees of compositional structure; thus, ROLE may be more likely to discover meaningful structure in Transformers than in the sentence-embedding models in Table 2 . Further work has found impressive degrees of syntactic structure in Transformer encodings (Hewitt and Manning, 2019) , suggesting that there may well be compositional structure for ROLE to pick up on. The main difficulty in applying ROLE to Transformers-and the reason we did not include Transformers in our study-is that the sentence representation used by a Transformer is typically viewed as a variable-sized collection of vectors, whereas ROLE requires single-vector representations; this discrepancy must be overcome if ROLE is to be applied to Transformers. One past work (Jawahar et al., 2019) has applied ROLE's precursor (the TPDN of McCoy et al. (2019a)) to Transformer representations by choosing the [CLS] token of BERT (Devlin et al., 2019) as the single-vector sentence encoding to decompose. Jawahar et al. found that these encodings were approximated better by human-specified treeposition roles than by other human-specified candidates (e.g., left-to-right and right-to-left roles). By removing the constraint of requiring human-designed role schemes, ROLE may be able to discover other role schemes that approximate BERT's encodings even more closely.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 120, |
|
"text": "(Vaswani et al., 2017", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 704, |
|
"end": 730, |
|
"text": "(Hewitt and Manning, 2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1192, |
|
"end": 1214, |
|
"text": "(Jawahar et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1346, |
|
"end": 1367, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 604, |
|
"end": 611, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Partially-compositional NLP tasks", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have introduced ROLE, a neural network that learns to approximate the representations of an existing target neural network E using an explicit symbolic structure. ROLE successfully discovers symbolic structure in a standard RNN trained on the fully-compositional SCAN semantic parsing task, even though the RNN has no such structure explicitly present in its architecture. This yields a closed-form equation for the RNN's encoding of any input string. When applied to sentence embedding models trained on partially-compositional tasks, ROLE performs better than hand-specified hypothesized structures but still provides little evidence that the sentence encodings represent compositional structure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "While this work has shown that NNs can converge to TPRs to solve compositional tasks, it is still unknown how the weights in the NN actually convert the raw input into a TPR. To investigate this process, in future work we plan to apply our technique to representations of partial sequences. For instance, when the complete input is jump right twice, the target RNN must first represent jump right as a well-formed TPR at the point when only those two words have been encountered. The representation then needs to be updated when the next word, twice, is encountered. By studying the nature of that update, we can gain insight into how the target model builds up a TPR from the input elements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Uncovering the latent symbolic structure of NN representations learned for fully-compositional tasks is a significant step towards explaining how NNs achieve the level of compositional generalization that they do. In addition, by illuminating shortcomings in the representations learned for standard tasks that are not fully-compositional, ROLE can help suggest types of inductive bias for improving models' generalization with standard, partiallycompositional datasets. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "A.1 Designed role schemes", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendix", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use six hand-specified role schemes as a baseline to compare the learned role schemes against. Examples of each role scheme are shown in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 148, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Appendix", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. Left-to-right (LTR): Each filler's role is its index in the sequence, counting from left to right.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendix", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2. Right-to-left (RTL): Each filler's role is its index in the sequence, counting from right to left.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendix", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3. Bidirectional (Bi): Each filler's role is a pair of indices, where the first index counts from left to right, and the second index counts from right to left.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendix", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "4. Tree: Each filler's role is given by its position in a tree. This depends on a tree parsing algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendix", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "5. Wickelroles (Wickel): Each filler's role is a 2-tuple containing the filler before it and the filler after it. (Wickelgren, 1969) 6. Bag-of-words (BOW): Each filler is assigned the same role. The position and context of the filler is ignored.", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 132, |
|
"text": "(Wickelgren, 1969)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendix", |
|
"sec_num": null |
|
}, |
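
{

"text": "A minimal Python sketch of how several of these hand-specified role schemes assign roles to a token sequence (our own illustration following the definitions above; the tree scheme is omitted because it depends on a parsing algorithm):\n\ndef ltr(seq):\n    return list(range(len(seq)))                 # left-to-right index\n\ndef rtl(seq):\n    return list(range(len(seq) - 1, -1, -1))     # right-to-left index\n\ndef bidirectional(seq):\n    return list(zip(ltr(seq), rtl(seq)))         # (left index, right index) pairs\n\ndef wickelroles(seq):\n    padded = ['#'] + list(seq) + ['#']           # '#' marks the sequence edge\n    return [f'{padded[i - 1]}_{padded[i + 1]}' for i in range(1, len(padded) - 1)]\n\ndef bow(seq):\n    return ['r0'] * len(seq)                     # every filler gets the same role\n\nexample = ['3', '1', '1', '6']\nfor scheme in (ltr, rtl, bidirectional, wickelroles, bow):\n    print(scheme.__name__, scheme(example))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A Appendix",

"sec_num": null

},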
|
{ |
|
"text": "Letting A = {a t } T t=1 , the regularization term applied during ROLE training is R = \u03bb(R 1 + R 2 + R 3 ), where \u03bb is a regularization hyperparameter and:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 ROLE regularization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "R 1 (A) = T t=1 n R \u03c1=1 [a t ] \u03c1 (1 \u2212 [a t ] \u03c1 ); R 2 (A) = \u2212 T t=1 n R \u03c1=1 [a t ] 2 \u03c1 ; R 3 (A) = n R \u03c1=1 ([s A ] \u03c1 (1 \u2212 [s A ] \u03c1 )) 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 ROLE regularization", |
|
"sec_num": null |
|
}, |
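
{

"text": "A minimal numpy sketch of this regularizer (our reading of the equations above; the function name and arguments are illustrative, with the default \u03bb = .02 taken from Appendix A.4):\n\nimport numpy as np\n\ndef role_regularizer(A, lam=0.02):\n    # A: T x n_R matrix whose rows are the attention vectors a_t (each row sums to 1)\n    R1 = np.sum(A * (1.0 - A))              # 0 exactly when every a_t is one-hot\n    R2 = -np.sum(A ** 2)                    # minimized (at -T) when every a_t is one-hot\n    s_A = A.sum(axis=0)                     # total attention assigned to each role\n    R3 = np.sum((s_A * (1.0 - s_A)) ** 2)   # pushes each role's total usage toward 0 or 1\n    return lam * (R1 + R2 + R3)\n\nA = np.array([[0.9, 0.1, 0.0],\n              [0.1, 0.8, 0.1]])\nprint(role_regularizer(A))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A.2 ROLE regularization",

"sec_num": null

},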
|
{ |
|
"text": "Since each a t results from a softmax, its elements are positive and sum to 1. Thus the factors in R 1 (A) are all non-negative, so R 1 assumes its minimal value of 0 when each a t has binary elements; since these elements must sum to 1, such an a t must be one-hot. R 2 (A) is also minimized when each a t is one-hot because when a vector's L 1 norm is 1, its L 2 norm is maximized when it is one-hot. Although each of these terms individually favor one-hot vectors, empirically we find that using both terms helps the training process. In a discrete symbolic structure, each position can hold at most one symbol, and the final term R 3 in ROLE's regularizer R is designed to encourage this. In the vector s A = T t=1 a t , the \u03c1 th element is the total attention weight, over all symbols in the string, assigned to the \u03c1 th role: in the discrete case, this must be 0 (if no symbol is assigned this role) or 1 (if a single symbol is assigned this role). Thus R 3 is minimized when all elements of s are 0 or 1 (R 3 is similar to R 1 , but with squared terms since we are no longer assured each element is at most 1). It is important to normalize each role embedding in the role matrix R so that small attention weights have correspondingly small impacts on the weighted-sum role embedding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 ROLE regularization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To train the standard RNN on SCAN, we ran a limited hyperparameter search similar to the procedure in Lake and Baroni (2018). Since our goal was to produce a single embedding that captured the entire input sequence, we fixed the architecture as a GRU with a single hidden layer. We did not train models with attention, to investigate whether a standard RNN could capture compositionality in its single bottleneck encoding. The remaining hyperparameters were hidden dimension and dropout. We ran a search over the hidden dimension sizes of 50, 100, 200, and 400 as well as dropout with a value of 0, .1, and .5 applied to the word embeddings and recurrent layer. Each network was trained with the Adam optimizer (Kingma and Ba, 2015 ) and a learning rate of .001 for 100,000 steps with a batch-size of 1. The best performing network had a hidden dimension or 100 and dropout of .1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 711, |
|
"end": 731, |
|
"text": "(Kingma and Ba, 2015", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.3 RNN trained on SCAN", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the ROLE models trained to approximate the GRU encoder trained on SCAN, we used a filler dimension of 100, and a role dimension of 50 with 50 roles available. For training, we used the Adam (Kingma and Ba, 2015) optimizer with a learning rate of .001, batch size 32, and an early stopping patience of 10. The role assignment module used a bidirectional 2-layer LSTM (Hochreiter and Schmidhuber, 1997) . We performed a hyperparameter search over the regularization coefficient \u03bb 3 1 1 6 5 2 3 1 9 7 Left-to-right 0 1 2 3 0 1 2 3 4 5 Right-to-left 3 2 1 0 5 4 3 2 1 0 Bidirectional (0, 3) (1, 2) (2, 1) (3, 0) (0, 5) (1, 4) (2, 3) (3, 2) (4, 1) (5, 0) Wickelroles # 1 3 1 1 6 1 # # 2 5 3 2 1 3 9 1 using the values in the set [.1, .02, .01] . The best performing value was .02, and we used this model in our analysis. The algorithm below characterizes our post-hoc interpretation of which roles the Role Learner will assign to elements of the input to the SCAN model. This algorithm was created by hand based on an analysis of the Role Learner's outputs for the elements of the SCAN training set. The algorithm works equally well on examples in the training set and the test set; on both datasets, it exactly matches the roles chosen by the Role Learner for 98.7% of sequences (20,642 out of 20,910).", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 215, |
|
"text": "(Kingma and Ba, 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 404, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 768, |
|
"end": 782, |
|
"text": "[.1, .02, .01]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 480, |
|
"end": 622, |
|
"text": "\u03bb 3 1 1 6 5 2 3 1 9 7 Left-to-right 0 1 2 3 0 1 2 3 4 5 Right-to-left 3 2 1 0 5 4 3 2 1 0 Bidirectional (0, 3)", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 698, |
|
"end": 739, |
|
"text": "# 1 3 1 1 6 1 # # 2 5 3 2 1 3 9 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A.4 ROLE trained on SCAN", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "The input sequences have three basic types that are relevant to determining the role assignment: sequences that contain and (e.g., jump around left and walk thrice), sequences that contain after (e.g., jump around left after walk thrice), and sequences without and or after (e.g., turn opposite right thrice). Within commands containing and or after, it is convenient to break the command down into the command before the connecting word and the command after it; for example, in the command jump around left after walk thrice, these two components would be jump around left and walk thrice.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.5 A role-assignment algorithm implicitly learned by the SCAN seq2seq encoder", |
|
"sec_num": null |
|
}, |
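For concreteness, a small helper like the following (purely illustrative, not from the paper's code) performs the split into the three basic input types described above.

```python
# Illustrative split of a SCAN command into the three cases discussed above:
# a command containing "and", one containing "after", or a single command.
def split_command(command: str):
    words = command.split()
    for connective in ("and", "after"):
        if connective in words:
            i = words.index(connective)
            return connective, words[:i], words[i + 1:]
    return None, words, None

# split_command("jump around left after walk thrice")
#   -> ("after", ["jump", "around", "left"], ["walk", "thrice"])
```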
|
{ |
|
"text": "\u2022 Sequence with and:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.5 A role-assignment algorithm implicitly learned by the SCAN seq2seq encoder", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "-Elements of the command before and: -Action word directly before a cardinality: 4 -Action word before, but not directly before, a cardinality: 34 thrice directly after an action word: 2 twice directly after an action word: 2 opposite in a sequence ending with twice: 8 opposite in a sequence ending with thrice: 34 around in a sequence ending with a cardinality: 22 -Direction word directly before a cardinality: 2 -Action word in a sequence without a cardinality: 46 opposite in a sequence without a cardinality: 2 -Direction after opposite in a sequence without a cardinality: 26 around in a sequence without a cardinality: 3 -Direction after around in a sequence without a cardinality: 22 -Direction directly after an action in a sequence without a cardinality: 22", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.5 A role-assignment algorithm implicitly learned by the SCAN seq2seq encoder", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "To show how this works with an example, consider the input jump around left after walk thrice. The command before after is jump around left. left, as the last word, is given role 8. around, as the secondto-last word, gets role 36. jump, as a first word that is not also the last or second-to-last word gets role 11. The command after after is walk thrice. thrice, as the last word, gets role 46. walk, as the secondto-last word, gets role 4. Finally, after gets role 17 because no other elements have been assigned role 17 yet. These predicted outputs match those given by the Role Learner.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.5 A role-assignment algorithm implicitly learned by the SCAN seq2seq encoder", |
|
"sec_num": null |
|
}, |
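The following toy function (written for this worked example only, not the full 47-conditional algorithm) reproduces the role labels assigned above for commands of the form "X after Y".

```python
# Toy reproduction of the role assignments in the worked example above.
# It only covers "<command> after <command>" inputs of this shape; the full
# algorithm distinguishes many more cases.
def toy_roles_for_after(command: str):
    words = command.split()
    i = words.index("after")
    before, after = words[:i], words[i + 1:]
    roles = {"after": 17}                      # "after" itself takes role 17
    roles[before[-1]] = 8                      # last word of the first command
    roles[before[-2]] = 36                     # second-to-last word
    if before[0] not in (before[-1], before[-2]):
        roles[before[0]] = 11                  # first word, when distinct
    roles[after[-1]] = 46                      # last word of the second command
    roles[after[-2]] = 4                       # second-to-last word
    return roles

# toy_roles_for_after("jump around left after walk thrice")
#   -> {'after': 17, 'left': 8, 'around': 36, 'jump': 11, 'thrice': 46, 'walk': 4}
```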
|
{ |
|
"text": "We offer several observations about this algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.6 Discussion of the algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. This algorithm may seem convoluted, but a few observations can illuminate how the roles assigned by such an algorithm support success on the SCAN task. First, a sequence will contain role 30 if and only if it contains and, and it will contain role 17 if and only if it contains after. Thus, by implicitly checking for the presence of these two roles (regardless of the fillers bound to them), the decoder can tell whether the output involves one or two basic commands, where the presence of and or after leads to two basic commands and the absence of both leads to one basic command. Moreover, if there are two basic commands, whether it is role 17 or role 30 that is present can tell the decoder whether the input order of these commands also corresponds to their output order (when it is and in play, i.e., role 30), or if the input order is reversed (when it is after in play, i.e., role 17).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.6 Discussion of the algorithm", |
|
"sec_num": null |
|
}, |
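Schematically, the structural check described here amounts to a test for which of the two connective roles is present; the snippet below illustrates that logic only, not the decoder's actual computation.

```python
# Illustration of the structural cue described above: role 30 signals "and"
# (two sub-commands, same order), role 17 signals "after" (two sub-commands,
# reversed order), and the absence of both signals a single command.
def plan_decoding(roles_present):
    if 30 in roles_present:
        return "two commands, input order preserved"
    if 17 in roles_present:
        return "two commands, input order reversed"
    return "single command"

# plan_decoding({17, 8, 36, 11, 46, 4}) -> "two commands, input order reversed"
```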
|
{ |
|
"text": "With these basic structural facts established, the decoder can begin to decode the specific commands. For example, if the input is a sequence with after, it can begin with the command after after, which it can decode by checking which fillers are bound to the relevant roles for that type of command.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.6 Discussion of the algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It may seem odd that so many of the roles are based on position (e.g., \"first word\" and \"second-to-last word\"), rather than more functionally-relevant categories such as \"direction word.\" However, this approach may actually be more efficient: Each command consists of a single mandatory element (namely, an action word such as walk or jump) followed by several optional modifiers (namely, rotation words, direction words, and cardinalities). Because most of the word categories are optional, it might be inefficient to check for the presence of, e.g., a cardinality, since many sequences will not have one. By contrast, every sequence will have a last word, and checking the identity of the last word provides much functionally-relevant information: if that word is not a cardinality, then the decoder knows that there is no cardinality present in the command (because if there were, it would be the last word); and if it is a cardinality, then that is important to know, because the presence of twice or thrice can dramatically affect the shape of the output sequence. In this light, it is unsurprising that the SCAN encoder has implicitly learned several different roles that essentially mean the last element of a particular subcommand.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.6 Discussion of the algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2. The algorithm does not constitute a simple, transparent role scheme. But its job is to describe the representations that the original network produces, and we have no a priori expectation about how complex that process may be. The role-assignment algorithm implicitly learned by ROLE is interpretable locally (each line is readily expressible in simple English), but not intuitively transparent globally. We see this as a positive result, in two respects.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.6 Discussion of the algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "First, it shows why ROLE is crucial: no human-generated role scheme would provide a good approximation to this algorithm. Such an algorithm can only be identified because ROLE is able to use gradient descent to find role schemes far more complex than any we would hypothesize intuitively. This enables us to analyze networks far more complex than we could analyze previously, being necessarily limited to hand-designed role schemes based on human intuitions about how to perform the task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.6 Discussion of the algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Second, when future work illuminates the computation in the original SCAN GRU seq2seq decoder, the baroqueness of the roleassignment algorithm that ROLE has shown to be implicit in the seq2seq encoder can potentially explain certain limitations in the original model, which is known to suffer from severe failures of systematic generalization outside the training distribution (Lake and Baroni, 2018). It is reasonable to hypothesize that systematic generalization requires that the encoder learn an implicit role scheme that is relatively simple and highly compositional. Future proposals for improving the systematic generalization of models on SCAN can be examined using ROLE to test the hypothesis that greater systematicity requires greater compositional simplicity in the role scheme implicitly learned by the encoder.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.6 Discussion of the algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3. While the role-assignment algorithm of A.8.1 may not be simple, from a certain perspective, it is quite surprising that it is not far more complex. Although ROLE is provided 50 roles to learn to deploy as it likes, it only chooses to use 16 of them (only 16 are ever selected as the arg max(a t ); see Sec. 6.1). Furthermore, the SCAN grammar generates 20,910 input sequences, containing a total of 151,688 words (an average of 7.25 words per input). This means that, if one were to generate a series of conditional statements to determine which role is assigned to each word in every context, this could in theory require up to 151,688 conditionals (e.g., \"if the filler is 'jump' in the context 'walk thrice after opposite left', then assign role 17\"). However, our algorithm involves just 47 conditionals. This reduction helps explain how the model performs so well on the test set: If it used many more of the 151,688 possible conditional rules, it would completely overfit the training examples in a way that would be unlikely to generalize. The 47-conditional algorithm we found is more likely to generalize by abstracting over many details of the context. 4. Were it not for ROLE's ability to characterize the representations generated by the original encoder in terms of implicit roles, providing an equally complete and accurate interpretation of those representations would necessarily require identifying the conditions determining the activation level of each of the 100 neurons hosting those representations. It seems to us grossly overly optimistic to estimate that each neuron's activation level in the representation of a given input could be characterized by a property of the input statable in, say, two lines of roughly 20 words/symbols; yet even then, the algorithm would require 200 lines, whereas the algorithm in A.8.1 requires 47 lines of that scale. Thus, by even such a crude estimate of the degree of complexity expected for an algorithm describing the representations in terms of neuron activities, the algorithm we find, stated over roles, is 4 times simpler.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.6 Discussion of the algorithm", |
|
"sec_num": null |
|
}, |
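The 16-of-50 count mentioned above comes from collecting, over the whole dataset, the roles that ever win the argmax of a_t; a sketch of that bookkeeping is below, with attention_per_sequence as a placeholder for the Role Learner's a_t outputs.

```python
# Sketch of the role-usage count: take the argmax of the role attention a_t
# for every word of every sequence and collect the distinct winning indices.
# `attention_per_sequence` is a placeholder: one (seq_len, n_roles) tensor
# of a_t rows per SCAN command.
import torch

def count_used_roles(attention_per_sequence):
    used = set()
    for a in attention_per_sequence:
        used.update(torch.argmax(a, dim=-1).tolist())
    return len(used), used        # the analysis above reports 16 distinct roles in use
```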
|
{ |
|
"text": "For each sentence embedding model, we trained three randomly initialized TPEs for each role scheme and selected the best performing one as measured by the lowest MSE. For each TPE, we used the original filler embedding from the sentence embedding model. This filler dimensionality is 25 for SST, 300 for SPINN and InferSent, and 620 for Skipthought. We applied a linear transformation to the pre-trained filler embedding where the input size is the dimensionality of the pre-trained embedding and the output size is also the dimensionality of the pre-trained embedding. This linearly transformed embedding is used as the filler vector in the filler-role binding in the TPE. For each TPE, we use a role dimension of 50. Training was done with a batch size of 32 using the Adam optimizer with a learning rate of .001. To generate tree roles from the English sentences, we used the constituency parser released in version 3.9.1 of Stanford CoreNLP (Klein and Manning, 2003) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 945, |
|
"end": 970, |
|
"text": "(Klein and Manning, 2003)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.7 TPEs trained on sentence embedding models", |
|
"sec_num": null |
|
}, |
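A minimal sketch of the TPE configuration just described follows, assuming (as an illustration) a learned square filler map, a 50-dimensional role embedding indexed by the hand-specified role scheme, and an outer-product binding; any projection back to the original sentence-embedding size is omitted.

```python
# Minimal sketch of a TPE as configured above: pre-trained fillers are passed
# through a square linear map and bound to 50-dimensional role embeddings by
# an outer product; trained with MSE against the original sentence embedding
# (Adam, lr 1e-3, batch size 32).  A projection back to the embedding's
# dimensionality, if used, is omitted here.
import torch
import torch.nn as nn

class TPE(nn.Module):
    def __init__(self, filler_dim, n_roles, role_dim=50):
        super().__init__()
        self.filler_map = nn.Linear(filler_dim, filler_dim)   # square transform
        self.role_embed = nn.Embedding(n_roles, role_dim)

    def forward(self, fillers, role_ids):
        # fillers: (batch, seq, filler_dim) pre-trained word embeddings
        # role_ids: (batch, seq) indices from a hand-specified role scheme
        f = self.filler_map(fillers)
        r = self.role_embed(role_ids)
        return torch.einsum('bsf,bsr->bfr', f, r).flatten(1)

# filler_dim is 25 for SST, 300 for SPINN and InferSent, and 620 for Skipthought.
```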
|
{ |
|
"text": "For each sentence embedding model, we trained three randomly initialized ROLE models and selected the best performing one as measured by the lowest MSE. We used the original filler embedding from the sentence embedding model (25 for SST, 300 for SPINN and InferSent, and 620 for Skipthought). We applied a linear transformation to the pre-trained filler embedding where the input size is the dimensionality of the pre-trained embedding and the output size is also the dimensionality of the pre-trained embedding. This linearly transformed embedding is used as the filler vector in the filler-role binding in the TPE. We also applied a similar linear transformation to the pretrained filler embedding before input to the role learner LSTM. For each ROLE model, we provide up to 50 roles with a role dimension of 50. Training was done with a batch size of 32 using the ADAM optimizer with a learning rate of .001. We performed a hyperparameter search over the regularization coefficient \u03bb using the values in the set {1, 0.1, 0.01, 0.001, 0.0001}. For SST, SPINN, In-ferSent and SST, respectively, the best performing network used \u03bb = 0.001, 0.01, 0.001, 0.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.8 ROLE trained on sentence embedding models", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1746891, and work partially supported by NSF grant BCS1344269. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.For helpful comments we are grateful to the members of the Johns Hopkins Neurosymbolic Computation group and the Microsoft Research AI Deep Learning Group. Any errors remain our own.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Blackbox meets blackbox: Representational similarity & stability analysis of neural language models and brains", |
|
"authors": [ |
|
{ |
|
"first": "Samira", |
|
"middle": [], |
|
"last": "Abnar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Beinborn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rochelle", |
|
"middle": [], |
|
"last": "Choenni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Willem", |
|
"middle": [], |
|
"last": "Zuidema", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "191--203", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4820" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samira Abnar, Lisa Beinborn, Rochelle Choenni, and Willem Zuidema. 2019. Blackbox meets blackbox: Representational similarity & stability analysis of neural language models and brains. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 191-203, Florence, Italy. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks", |
|
"authors": [ |
|
{ |
|
"first": "Yossi", |
|
"middle": [], |
|
"last": "Adi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Einat", |
|
"middle": [], |
|
"last": "Kermany", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ofer", |
|
"middle": [], |
|
"last": "Lavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained anal- ysis of sentence embeddings using auxiliary predic- tion tasks. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Measuring compositionality in representation learning", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Andreas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Andreas. 2019. Measuring compositionality in representation learning. In International Confer- ence on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks", |
|
"authors": [ |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds", |
|
"middle": [], |
|
"last": "M\u00e0rquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hassan", |
|
"middle": [], |
|
"last": "Sajjad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadir", |
|
"middle": [], |
|
"last": "Durrani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fahim", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonatan Belinkov, Llu\u00eds M\u00e0rquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017. Evaluating layers of representation in neural ma- chine translation on part-of-speech and semantic tagging tasks. In Proceedings of the Eighth In- ternational Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1-10, Taipei, Taiwan. Asian Federation of Natural Lan- guage Processing.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Deep RNNs encode soft hierarchical syntax", |
|
"authors": [ |
|
{ |
|
"first": "Terra", |
|
"middle": [], |
|
"last": "Blevins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "14--19", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-2003" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Terra Blevins, Omer Levy, and Luke Zettlemoyer. 2018. Deep RNNs encode soft hierarchical syntax. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 14-19, Melbourne, Australia. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A large annotated corpus for learning natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Samuel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabor", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Angeli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Potts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "632--642", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A fast unified model for parsing and sentence understanding", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Samuel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhinav", |
|
"middle": [], |
|
"last": "Gauthier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raghav", |
|
"middle": [], |
|
"last": "Rastogi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1466--1477", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel R. Bowman, Jon Gauthier, Abhinav Ras- togi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1466-1477. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Language models are few-shot learners", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Tom B Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Mann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melanie", |
|
"middle": [], |
|
"last": "Ryder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jared", |
|
"middle": [], |
|
"last": "Subbiah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prafulla", |
|
"middle": [], |
|
"last": "Kaplan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Dhariwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Neelakantan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Girish", |
|
"middle": [], |
|
"last": "Shyam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Sastry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Askell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.14165" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merri\u00ebnboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1724--1734", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1179" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734, Doha, Qatar. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Correlating neural and symbolic representations of language", |
|
"authors": [ |
|
{ |
|
"first": "Grzegorz", |
|
"middle": [], |
|
"last": "Chrupa\u0142a", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Afra", |
|
"middle": [], |
|
"last": "Alishahi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2952--2962", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1283" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Grzegorz Chrupa\u0142a and Afra Alishahi. 2019. Corre- lating neural and symbolic representations of lan- guage. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 2952-2962, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "SentEval: An evaluation toolkit for universal sentence representations", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representa- tions. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Supervised learning of universal sentence representations from natural language inference data", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "670--680", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "German", |
|
"middle": [], |
|
"last": "Kruszewski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lo\u00efc", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2126--2136", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1198" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lam- ple, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Aus- tralia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Analyzing machinelearned representations: A natural language case study", |
|
"authors": [ |
|
{ |
|
"first": "Ishita", |
|
"middle": [], |
|
"last": "Dasgupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Demi", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Samuel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah D", |
|
"middle": [], |
|
"last": "Gershman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.05885" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ishita Dasgupta, Demi Guo, Samuel J Gershman, and Noah D Goodman. 2019. Analyzing machine- learned representations: A natural language case study. arXiv preprint arXiv:1909.05885.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information", |
|
"authors": [ |
|
{ |
|
"first": "Mario", |
|
"middle": [], |
|
"last": "Giulianelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jack", |
|
"middle": [], |
|
"last": "Harding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Mohnert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dieuwke", |
|
"middle": [], |
|
"last": "Hupkes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Willem", |
|
"middle": [], |
|
"last": "Zuidema", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "240--248", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-5426" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Un- der the hood: Using diagnostic classifiers to in- vestigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and In- terpreting Neural Networks for NLP, pages 240-248, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Probing linguistic systematicity", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Goodwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koustuv", |
|
"middle": [], |
|
"last": "Sinha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "O'donnell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1958--1969", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.177" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Goodwin, Koustuv Sinha, and Timothy J. O'Donnell. 2020. Probing linguistic systematicity. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1958-1969, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A structural probe for finding syntax in word representations", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Hewitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4129--4138", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word represen- tations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4129-4138.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/neco.1997.9.8.1735" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A systematic assessment of syntactic generalization in neural language models", |
|
"authors": [ |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Gauthier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ethan", |
|
"middle": [], |
|
"last": "Wilcox", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger P", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger P Levy. 2020. A systematic assessment of syntactic generalization in neural language mod- els. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seat- tle, Washington. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Compositionality decomposed: How do neural networks generalise", |
|
"authors": [ |
|
{ |
|
"first": "Dieuwke", |
|
"middle": [], |
|
"last": "Hupkes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Verna", |
|
"middle": [], |
|
"last": "Dankers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathijs", |
|
"middle": [], |
|
"last": "Mul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elia", |
|
"middle": [], |
|
"last": "Bruni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "67", |
|
"issue": "", |
|
"pages": "757--795", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. 2020. Compositionality decomposed: How do neural networks generalise? Journal of Ar- tificial Intelligence Research, 67:757-795.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure", |
|
"authors": [ |
|
{ |
|
"first": "Dieuwke", |
|
"middle": [], |
|
"last": "Hupkes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Veldhoen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Willem", |
|
"middle": [], |
|
"last": "Zuidema", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "61", |
|
"issue": "", |
|
"pages": "907--926", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and 'diagnostic classifiers' re- veal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "What does BERT learn about the structure of language", |
|
"authors": [ |
|
{ |
|
"first": "Ganesh", |
|
"middle": [], |
|
"last": "Jawahar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Djam\u00e9", |
|
"middle": [], |
|
"last": "Seddah", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3651--3657", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1356" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 3651-3657, Florence, Italy. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "International Conference for Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference for Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Skip-thought vectors", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Kiros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yukun", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Ruslan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raquel", |
|
"middle": [], |
|
"last": "Zemel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Urtasun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanja", |
|
"middle": [], |
|
"last": "Torralba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Fidler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3294--3302", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems, pages 3294-3302.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Accurate unlexicalized parsing", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "423--430", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Klein and Christopher D Manning. 2003. Accu- rate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computa- tional Linguistics-Volume 1, pages 423-430. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Brenden", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Lake", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brenden M. Lake and Marco Baroni. 2018. General- ization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In International Conference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The emergence of number and syntax units in LSTM language models", |
|
"authors": [ |
|
{ |
|
"first": "Yair", |
|
"middle": [], |
|
"last": "Lakretz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "German", |
|
"middle": [], |
|
"last": "Kruszewski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Theo", |
|
"middle": [], |
|
"last": "Desbordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dieuwke", |
|
"middle": [], |
|
"last": "Hupkes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stanislas", |
|
"middle": [], |
|
"last": "Dehaene", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "11--20", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yair Lakretz, German Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and Marco Ba- roni. 2019. The emergence of number and syn- tax units in LSTM language models. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 11-20, Minneapolis, Minnesota. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Compositional generalization for primitive substitutions", |
|
"authors": [ |
|
{ |
|
"first": "Yuanpeng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianyu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Hestness", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4293--4302", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1438" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuanpeng Li, Liang Zhao, Jianyu Wang, and Joel Hes- tness. 2019. Compositional generalization for prim- itive substitutions. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 4293-4302, Hong Kong, China. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emmanuel", |
|
"middle": [], |
|
"last": "Dupoux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"https://www.mitpressjournals.org/doi/pdfplus/10.1162/tacl_a_00115" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the ACL.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Probing the probing paradigm", |
|
"authors": [ |
|
{ |
|
"first": "Abhilasha", |
|
"middle": [], |
|
"last": "Ravichander", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Does probing accuracy entail task relevance? arXiv preprint", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.00719" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abhilasha Ravichander, Yonatan Belinkov, and Eduard Hovy. 2020. Probing the probing paradigm: Does probing accuracy entail task relevance? arXiv preprint arXiv:2005.00719.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Ordered neurons: Integrating tree structures into recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Yikang", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shawn", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sordoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered neurons: Integrat- ing tree structures into recurrent neural networks. In International Conference on Learning Representa- tions.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Tensor product variable binding and the representation of symbolic structures in connectionist systems", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Smolensky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Artif. Intell", |
|
"volume": "46", |
|
"issue": "1-2", |
|
"pages": "159--216", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/0004-3702(90)90007-M" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Smolensky. 1990. Tensor product variable bind- ing and the representation of symbolic structures in connectionist systems. Artif. Intell., 46(1-2):159- 216.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1631--1642", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Quoc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Sys- tems, pages 3104-3112.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "What do you learn from context? probing for sentence structure in contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Tenney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Berlin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Poliak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"Thomas" |
|
], |
|
"last": "Mccoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Najoung", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextu- alized word representations. In International Con- ference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Investigating 'aspect' in NMT and SMT: Translating the English simple past and present perfect", |
|
"authors": [ |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Vanmassenhove", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinhua", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Way", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Computational Linguistics in the Netherlands Journal", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "109--128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eva Vanmassenhove, Jinhua Du, and Andy Way. 2017. Investigating 'aspect' in NMT and SMT: Translating the English simple past and present perfect. Com- putational Linguistics in the Netherlands Journal, 7:109-128.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Informationtheoretic probing with minimum description length", |
|
"authors": [ |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Voita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2003.12298" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elena Voita and Ivan Titov. 2020. Information- theoretic probing with minimum description length. arXiv preprint arXiv:2003.12298.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanpreet", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "353--355", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-5446" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "BLiMP: A benchmark of linguistic minimal pairs for english", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Warstadt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alicia", |
|
"middle": [], |
|
"last": "Parrish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haokun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anhad", |
|
"middle": [], |
|
"last": "Mohananey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sheng-Fu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel R", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Society for Computation in Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo- hananey, Wei Peng, Sheng-Fu Wang, and Samuel R Bowman. 2020. BLiMP: A benchmark of linguistic minimal pairs for english. Proceedings of the Soci- ety for Computation in Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Extracting automata from recurrent neural networks using queries and counterexamples", |
|
"authors": [ |
|
{ |
|
"first": "Gail", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eran", |
|
"middle": [], |
|
"last": "Yahav", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5244--5253", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. Ex- tracting automata from recurrent neural networks us- ing queries and counterexamples. In International Conference on Machine Learning, pages 5244- 5253.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Context-sensitive coding, associative memory, and serial order in (speech) behavior", |
|
"authors": [ |
|
{ |
|
"first": "Wayne", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Wickelgren", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1969, |
|
"venue": "Psychological Review", |
|
"volume": "76", |
|
"issue": "1", |
|
"pages": "1--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wayne A. Wickelgren. 1969. Context-sensitive coding, associative memory, and serial order in (speech) be- havior. Psychological Review, 76(1):1-15.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Macherey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Klingner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Apurva", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaobing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshikiyo", |
|
"middle": [], |
|
"last": "Kato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taku", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideto", |
|
"middle": [], |
|
"last": "Kazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Stevens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Kurian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nishant", |
|
"middle": [], |
|
"last": "Patil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1609.08144" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin John- son, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rud- nick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural ma- chine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "Df [twice] \u2297 Dr[36]) + W(Df [thrice] \u2297 Dr[36]) JUMP RUN RUN RUN RNN DecoderFigure 1: Summary of our approach.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "The Tensor Product Encoder architecture.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"text": ": 11 left : 36 twice : 8 after : 43 jump : 10 opposite : 17 right : 4 thrice : 46 \u2192 TR TR JUMP TR TR JUMP TR TR JUMP TL RUN TL RUN \u2212 run : 11 + look : 11 \u2192 TR TR JUMP TR TR JUMP TR TR JUMP TL LOOK TL LOOK \u2212 jump : 10 + walk : 10 \u2192 TR TR WALK TR TR WALK TR TR WALK TL LOOK TL LOOK \u2212 left : 36 + right : 36 \u2192 TR TR WALK TR TR WALK TR TR WALK TR LOOK TR LOOK \u2212 twice : 8 + thrice : 8 \u2192 TR TR WALK TR TR WALK TR TR WALK TR LOOK TR LOOK TR LOOK \u2212 opposite : 17 + around : 17 \u2192 TR WALK TR WALK TR WALK TR WALK TR WALK TR WALK TR WALK TR WALK TR WALK TR WALK TR WALK TR WALK TR LOOK TR LOOK TR LOOKFigure 5: Left: Example of successive constituent surgeries. The roles assigned to the input symbols are indicated in the first line (e.g., run was assigned role 11). Altered output symbols are in blue. The model produces the correct outputs for all cases shown here. Right: Mean constituent-surgery accuracy across three runs. Standard deviation is below 1% for each number of substitutions. (Sec. 5.3)", |
|
"num": null, |
|
"content": "<table><tr><td/><td>Continuous Snapped Discrete</td><td>LTR</td><td>RTL</td><td>Bi</td><td>Tree</td><td>BOW</td></tr><tr><td>InferSent</td><td>4.05e-4</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">). Such models have</td></tr><tr><td/><td/><td colspan=\"5\">displayed impressive syntactic generalization (Hu</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.", |
|
"num": null, |
|
"content": "<table><tr><td/><td>Conference on Empirical Methods in Natural Lan-</td></tr><tr><td/><td>guage Processing, pages 67-81, Brussels, Belgium.</td></tr><tr><td/><td>Association for Computational Linguistics.</td></tr><tr><td colspan=\"2\">R. Thomas McCoy, Tal Linzen, Ewan Dunbar, and</td></tr><tr><td colspan=\"2\">Paul Smolensky. 2019a. RNNs implicitly imple-</td></tr><tr><td colspan=\"2\">ment tensor-product representations. In Interna-</td></tr><tr><td colspan=\"2\">tional Conference on Learning Representations.</td></tr><tr><td colspan=\"2\">R. Thomas McCoy, Ellie Pavlick, and Tal Linzen.</td></tr><tr><td colspan=\"2\">2019b. Right for the wrong reasons: Diagnosing</td></tr><tr><td colspan=\"2\">syntactic heuristics in natural language inference.</td></tr><tr><td colspan=\"2\">In Proceedings of the 57th Annual Meeting of the</td></tr><tr><td colspan=\"2\">Association for Computational Linguistics, pages</td></tr><tr><td colspan=\"2\">3428-3448, Florence, Italy. Association for Compu-</td></tr><tr><td>tational Linguistics.</td><td/></tr><tr><td colspan=\"2\">Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig.</td></tr><tr><td colspan=\"2\">2013. Linguistic regularities in continuous space</td></tr><tr><td colspan=\"2\">word representations. In Proceedings of the 2013</td></tr><tr><td colspan=\"2\">Conference of the North American Chapter of the</td></tr><tr><td colspan=\"2\">Association for Computational Linguistics: Human</td></tr><tr><td colspan=\"2\">Language Technologies, pages 746-751, Atlanta,</td></tr><tr><td colspan=\"2\">Georgia. Association for Computational Linguistics.</td></tr><tr><td colspan=\"2\">Jesse Mu and Jacob Andreas. 2020. Compositional ex-</td></tr><tr><td colspan=\"2\">planations of neurons. In Advances in Neural Infor-</td></tr><tr><td colspan=\"2\">mation Processing Systems 33.</td></tr><tr><td colspan=\"2\">Christian W Omlin and C Lee Giles. 1996. Extrac-</td></tr><tr><td colspan=\"2\">tion of rules from discrete-time recurrent neural net-</td></tr><tr><td colspan=\"2\">works. Neural networks, 9(1):41-52.</td></tr><tr><td colspan=\"2\">Hamid Palangi, Paul Smolensky, Xiaodong He,</td></tr><tr><td>and Li Deng. 2017.</td><td>Question-answering with</td></tr><tr><td colspan=\"2\">grammatically-interpretable representations. In Pro-</td></tr><tr><td colspan=\"2\">ceedings of the Association for the Advancement of</td></tr><tr><td>Artificial Intelligence.</td><td/></tr><tr><td colspan=\"2\">Ankur Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and</td></tr><tr><td colspan=\"2\">Jakob Uszkoreit. 2016. A decomposable attention</td></tr><tr><td colspan=\"2\">model for natural language inference. In Proceed-</td></tr><tr><td colspan=\"2\">ings of the 2016 Conference on Empirical Methods</td></tr><tr><td colspan=\"2\">in Natural Language Processing, pages 2249-2255,</td></tr><tr><td colspan=\"2\">Austin, Texas. Association for Computational Lin-</td></tr><tr><td>guistics.</td><td/></tr><tr><td colspan=\"2\">Judea Pearl. 2000. Causality. MIT Press, Cambridge,</td></tr><tr><td>MA.</td><td/></tr><tr><td colspan=\"2\">Matthew Peters, Mark Neumann, Luke Zettlemoyer,</td></tr><tr><td colspan=\"2\">and Wen-tau Yih. 2018. Dissecting contextual</td></tr><tr><td colspan=\"2\">word embeddings: Architecture and representation.</td></tr><tr><td colspan=\"2\">In Proceedings of the 2018 Conference on Em-</td></tr><tr><td colspan=\"2\">pirical Methods in Natural Language Processing,</td></tr><tr><td colspan=\"2\">pages 1499-1509, Brussels, Belgium. 
Association</td></tr><tr><td colspan=\"2\">for Computational Linguistics.</td></tr><tr><td colspan=\"2\">Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Ed-</td></tr><tr><td colspan=\"2\">ward Hu, Ellie Pavlick, Aaron Steven White, and</td></tr><tr><td colspan=\"2\">Benjamin Van Durme. 2018. Collecting diverse nat-</td></tr><tr><td colspan=\"2\">ural language inference problems for sentence rep-</td></tr><tr><td colspan=\"2\">resentation evaluation. In Proceedings of the 2018</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": "The assigned roles for two sequences, 3116 and 523197.Table reproduced from McCoy et al. (2019a).", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"text": "17 if no other word has role 17 or if the command after after ends with around left; 43 otherwise\u2022 Sequence without and or after:", |
|
"num": null, |
|
"content": "<table><tr><td>-after:</td><td/></tr><tr><td/><td>* Direction word after opposite but not before thrice: 4</td></tr><tr><td/><td>* around: 22 * Direction word after around: 2 * Direction word between an action word and twice or thrice: 2</td></tr><tr><td/><td>-Elements of the command before and:</td></tr><tr><td/><td>* First word: 11 * Last word (if not also the first word): 36</td></tr><tr><td/><td>* Second-to-last word (if not also the first word): 3</td></tr><tr><td/><td>* Second of four words: 24 -and: 30</td></tr><tr><td/><td>\u2022 Sequence with after:</td></tr><tr><td/><td>-Elements of the command before after:</td></tr><tr><td/><td>* Last word: 8 * Second-to-last word: 36 * First word (if not the last or second-to-last word): 11</td></tr><tr><td/><td>* Second word (if not the last or second-to-last word): 3</td></tr><tr><td/><td>-Elements of the command after after:</td></tr><tr><td/><td>* Last word: 46 * Second-to-last word: 4 * First word if the command ends with around right: 4</td></tr><tr><td>Last word: 28</td><td>* First word if the command ends with thrice and contains a rotation: 10</td></tr><tr><td>* First word (if not also last word): 46 * opposite if the command ends with thrice: 22</td><td>* First word if the command does not end with around right and does not contain both thrice and a rotation: 17</td></tr><tr><td>* Direction word between opposite and thrice: 2</td><td>* Second word if the command ends with thrice: 17</td></tr><tr><td>* opposite if the command does not end with thrice: 2</td><td>* Second word if the command does not end with thrice: 10</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |