|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:28:45.728361Z" |
|
}, |
|
"title": "Neural Proof Nets", |
|
"authors": [ |
|
{ |
|
"first": "Konstantinos", |
|
"middle": [], |
|
"last": "Kogkalidis", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "CNRS", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Moortgat", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "CNRS", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Moot", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "CNRS", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Linear logic and the linear \u03bb-calculus have a long standing tradition in the study of natural language form and meaning. Among the proof calculi of linear logic, proof nets are of particular interest, offering an attractive geometric representation of derivations that is unburdened by the bureaucratic complications of conventional prooftheoretic formats. Building on recent advances in set-theoretic learning, we propose a neural variant of proof nets based on Sinkhorn networks, which allows us to translate parsing as the problem of extracting syntactic primitives and permuting them into alignment. Our methodology induces a batch-efficient, end-to-end differentiable architecture that actualizes a formally grounded yet highly efficient neuro-symbolic parser. We test our approach on AEthel, a dataset of typelogical derivations for written Dutch, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear \u03bbcalculus with an accuracy of as high as 70%.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Linear logic and the linear \u03bb-calculus have a long standing tradition in the study of natural language form and meaning. Among the proof calculi of linear logic, proof nets are of particular interest, offering an attractive geometric representation of derivations that is unburdened by the bureaucratic complications of conventional prooftheoretic formats. Building on recent advances in set-theoretic learning, we propose a neural variant of proof nets based on Sinkhorn networks, which allows us to translate parsing as the problem of extracting syntactic primitives and permuting them into alignment. Our methodology induces a batch-efficient, end-to-end differentiable architecture that actualizes a formally grounded yet highly efficient neuro-symbolic parser. We test our approach on AEthel, a dataset of typelogical derivations for written Dutch, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear \u03bbcalculus with an accuracy of as high as 70%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "There is a broad consensus among grammar formalisms that the composition of form and meaning in natural language is a resource-sensitive process, with the words making up a phrase contributing exactly once to the resulting whole. The sentence \"the Mad Hatter offered\" is ill-formed because of a lack of grammatical material, \"offer\" being a ditransitive verb; \"the Cheshire Cat grinned Alice a cup of tea\" on the other hand is ill-formed because of an excess of material, which the intransitive verb \"grin\" cannot accommodate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Given the resource-sensitive nature of language, it comes as no surprise that Linear Logic (Girard, 1987) , in particular its intuitionistic version ILL, plays a central role in current logic-based grammar formalisms. Abstract Categorial Grammars and Lambda Grammars (de Groote, 2001; Muskens, 2001) use ILL \"as-is\" to characterize an abstract level of grammatical structure from which surface form and semantic interpretation are obtained by means of compositional translations. Modern typelogical grammars in the tradition of the Lambek Calculus (Lambek, 1958) , e.g. Multimodal TLG (Moortgat, 1996) , Displacement Calculus (Morrill, 2014) , Hybrid TLG (Kubota and Levine, 2020), refine the type language to account for syntactic aspects of word order and constituency; ILL here is the target logic for semantic interpretation, reached by a homomorphism relating types and derivations of the syntactic calculus to their semantic counterparts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 105, |
|
"text": "(Girard, 1987)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 284, |
|
"text": "Grammars and Lambda Grammars (de Groote, 2001;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 299, |
|
"text": "Muskens, 2001)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 562, |
|
"text": "(Lambek, 1958)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 585, |
|
"end": 601, |
|
"text": "(Moortgat, 1996)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 626, |
|
"end": 641, |
|
"text": "(Morrill, 2014)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A common feature of the aforementioned formalisms is their adoption of the parsing-asdeduction method: determining whether a phrase is syntactically well-formed is seen as the outcome of a process of logical deduction. This logical deduction automatically gives rise to a program for meaning composition, thanks to the remarkable correspondence between logical proof and computation known as the Curry-Howard isomorphism (S\u00f8rensen and Urzyczyn, 2006) , a natural manifestation of the syntax-semantics interface. The Curry-Howard \u03bb-terms associated with derivations are neutral with respect to the particular semantic theory one wants to adopt, accommodating both the truth-conditional view of formal semantics and the vector-based distributional view (Muskens and Sadrzadeh, 2018) , among others.", |
|
"cite_spans": [ |
|
{ |
|
"start": 421, |
|
"end": 450, |
|
"text": "(S\u00f8rensen and Urzyczyn, 2006)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 751, |
|
"end": 780, |
|
"text": "(Muskens and Sadrzadeh, 2018)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Despite their formal appeal, grammars based on variants of linear logic have fallen out of favour within the NLP community, owing to a scarcity of large-scale datasets, but also due to difficulties in aligning them with the established highperformance neural toolkit. Seeking to bridge the gap between formal theory and applied practice, we focus on the proof nets of linear logic, a lean graphical calculus that does away with the bureau-cratic symbol-manipulation overhead characteristic of conventional prooftheoretic presentations ( \u00a72). Integrating proof nets with recent advances in neural processing, we propose a novel approach to linear logic proof search that eliminates issues commonly associated with higher-order types and hypothetical reasoning, while greatly reducing the computational costs of structure manipulation, backtracking and iterative processing that burden standard parsing techniques ( \u00a73).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our proposed methodology relies on two key components. The first is an encoder/decoder-based supertagger that converts raw text sentences into linear logic judgements by dynamically constructing contextual type assignments, one primitive symbol at a time. The second is a bi-modal encoder that contextualizes the generated judgement in conjunction with the input sentence. The contextualized representations are fed into a Sinkhorn layer, tasked with finding the valid permutation that brings primitive symbol occurrences into alignment. The architecture induced is trained on labeled data, and assumes the role of a formally grounded yet highly accurate parser, which transforms raw text sentences into linear logic proofs and computational terms of the simply typed linear \u03bb-calculus, further decorated with dependency annotations that allow reconstruction of the underlying dependency graph ( \u00a74).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We briefly summarize the logical background we are assuming, starting with ILL , the implicationonly fragment of ILL, then moving on to the dependency-enhanced version ILL ,3,2 which we employ in our experimental setup.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Formulas (or types) of ILL are inductively defined according to the grammar below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ILL", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "T ::= A | T 1 T 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ILL", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Formula A is taken from a finite set of atomic formulas A \u2282 T ; a complex formula T 1 T 2 is the type signature of a transformation that applies on T 1 \u2208 T and produces T 2 \u2208 T , consuming the argument in the process. This view of formulas as non-renewable resources makes ILL the logic of linear functions. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ILL", |
|
"sec_num": "2.1" |
|
}, |
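To make the type grammar above concrete, here is a minimal Python sketch (not from the paper) of ILL\u22b8 types as a recursive data structure, together with the order function described in footnote 3; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:
    """An atomic type, e.g. NP, S, PRON."""
    name: str

@dataclass(frozen=True)
class Arrow:
    """A linear implication T1 -o T2: consume a T1, produce a T2."""
    argument: 'Type'
    result: 'Type'

Type = Union[Atom, Arrow]

def order(t: Type) -> int:
    # O(A) = 0 for atoms; O(T1 -o T2) = max(O(T1) + 1, O(T2))  (cf. footnote 3)
    if isinstance(t, Atom):
        return 0
    return max(order(t.argument) + 1, order(t.result))

# NP -o NP -o S (a transitive verb, read right-associatively) is of order 1,
# while the higher-order type (NP -o S) -o S is of order 2.
NP, S = Atom('NP'), Atom('S')
assert order(Arrow(NP, Arrow(NP, S))) == 1
assert order(Arrow(Arrow(NP, S), S)) == 2
```

Frozen dataclasses give structural equality for free, which is convenient when matching argument and result types later on.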
|
{ |
|
"text": "1 We refer to Wadler (1993) for a gentle introduction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 27, |
|
"text": "Wadler (1993)", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ILL", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We can present the inference rules of ILL together with the associated linear \u03bb-terms in Natural Deduction format. Judgements are sequents of the form x 1 : T 1 , . . . , x n : T n M : C. The antecedent left of the turnstile is a typing environment (or context), a sequence of variables x i , each given a type declaration T i . These variables serve as the parameters of a program M of type C that corresponds to the proof of the sequent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ILL", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Proofs are built from axioms x : T x : T with the aid of two rules of inference:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ILL", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u0393 M : T 1 T 2 \u2206 N : T 1 \u0393, \u2206 (M N) : T 2 E (1) \u0393, x : T 1 M : T 2 \u0393 \u03bbx.M : T 1 T 2 I", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "ILL", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(1) is the elimination of the implication and models function application; it proposes that if from some context \u0393 one can derive a program M of type T 1 T 2 , and from context \u2206 one can derive a program N of type T 1 , then from the multiset union \u0393, \u2206 one can derive a term (M N) of type T 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ILL", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(2) is the introduction of the implication and models function abstraction; it proposes that if from a context \u0393 together with a type declaration x : T 1 one can derive a program term M of type T 2 , then from \u0393 alone one can derive the abstraction \u03bbx.M, denoting a linear function of type T 1 T 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ILL", |
|
"sec_num": "2.1" |
|
}, |
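As a worked illustration of rules (1) and (2), the following self-contained Python sketch type-checks application and abstraction under a linear discipline, consuming each context entry exactly once; it is an assumption-laden toy, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Dict, Union

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Arrow:              # T1 -o T2
    argument: 'Type'
    result: 'Type'

Type = Union[Atom, Arrow]

@dataclass(frozen=True)
class Var:                # x
    name: str

@dataclass(frozen=True)
class App:                # (M N) -- implication elimination, rule (1)
    fun: 'Term'
    arg: 'Term'

@dataclass(frozen=True)
class Abs:                # \x.M  -- implication introduction, rule (2)
    var: str
    var_type: Type
    body: 'Term'

Term = Union[Var, App, Abs]

def infer(ctx: Dict[str, Type], term: Term) -> Type:
    """Infer the type of `term`, consuming every context entry exactly once."""
    if isinstance(term, Var):
        return ctx.pop(term.name)          # linear: the resource is used up
    if isinstance(term, App):
        fun_type = infer(ctx, term.fun)
        arg_type = infer(ctx, term.arg)
        assert isinstance(fun_type, Arrow) and fun_type.argument == arg_type
        return fun_type.result
    if isinstance(term, Abs):
        ctx[term.var] = term.var_type      # hypothesise x : T1 ...
        result = infer(ctx, term.body)
        assert term.var not in ctx         # ... and check it was consumed
        return Arrow(term.var_type, result)
    raise TypeError(term)

# \x.(grinned x) gets type NP -o S from the context {grinned : NP -o S},
# and the context is fully consumed in the process.
NP, S = Atom('NP'), Atom('S')
context = {'grinned': Arrow(NP, S)}
assert infer(context, Abs('x', NP, App(Var('grinned'), Var('x')))) == Arrow(NP, S)
assert not context
```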
|
{ |
|
"text": "To obtain a grammar based on ILL , we consider the logic in combination with a lexicon, assigning one or more type formulas to the words of the language. In this setting, the proof of a sequent x 1 : T 1 , . . . , x n : T n M : C constitutes an algorithm to compute a meaning M of type C, given by substituting parameters x i with lexical meanings w i . In the type lexicon, atomic types are used to denote syntactically autonomous, stand-alone units (words and phrases); e.g. NP for noun-phrase, S for sentence, etc. Function types are assigned to incomplete expressions, e.g. NP S for an intransitive verb consuming a noun-phrase to produce a sentence, NP NP S for a transitive verb, etc. 2 Higher-order types, i.e. types of order greater than 1, denote functions that apply to functions; these give the grammar access to hypothetical reasoning, in virtue of the implication introduction rule. 3 Combined with parametric polymorphism, Figure 1 : Example derivation and Curry-Howard \u03bb-term for the phrase De strategie die ze volgen is eeuwenoud (\"The strategy that they follow is ancient\") from AEthel sample dpc-ind-001645-nl-sen.p.12.s.1_1, showcasing how hypothetical reasoning enables the derivation of an object-relative clause (note how the instantiation of variable x of type PRON followed by its subsequent abstraction creates an argument for the higher-order function assigned to \"die\"). Judgement premises and rule names have been omitted for brevity's sake.", |
|
"cite_spans": [ |
|
{ |
|
"start": 896, |
|
"end": 897, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 937, |
|
"end": 945, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ILL", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "higher-order types eschew the need for phantom syntactic nodes, enabling straightforward derivations for apparent non-linear phenomena involving long-range dependencies, elliptical conjunctions, wh-movement and the like.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ILL", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "For our experimental setup, we will be utilizing the AEthel dataset, a Dutch corpus of typelogical derivations (Kogkalidis et al., 2020) . Noncommutative categorial grammars in the tradition of Lambek (1958) attempt to directly capture syntactic fine-structure by making a distinction between left-and right-directed variants of the implication. In order to deal with the relatively free word order of Dutch and contrary to the former, AEthel's type system sticks to the directionally non-committed for function types, but compensates with two strategies for introducing syntactic discrimination. First, the atomic type inventory distinguishes between major clausal types S sub , S v1 , S main , based on the positioning of their verbal head (clause final, clause initial, verb second, respectively). Secondly, function types are enhanced with dependency information, expressed via a family of unary modalities 3 d , 2 m , with dependency labels d, m drawn from disjoint sets of complement vs adjunct markers. The new constructors produce types 3 d A B, used to denote the head of a phrase B that selects for a complement A and assigns it the dependency role d, and types 2 m (A B), used to denote adjuncts, i.e. nonhead functions that project the dependency role m upon application. Following dependency grammar tradition, determiners and modifiers are treated as non-head functions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 136, |
|
"text": "(Kogkalidis et al., 2020)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 207, |
|
"text": "Lambek (1958)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ILL ,3,2", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The type enhancement induces a dependency marking on the derived \u03bb-term, reflecting the introduction/elimination of the 3, 2 constructors; each dependency domain has a unique head, together with its complements and possible adjuncts, denoted by superscripts and subscripts, respectively. Figure 1 provides an example derivation and the corresponding \u03bb-term.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 288, |
|
"end": 296, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ILL ,3,2", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "A shallow dependency graph can be trivially reconstructed by traversal of the decorated \u03bb-term, recursively establishing labeled edges along the path from a phrasal head to the head of each of its dependants while skipping abstractions; see ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ILL ,3,2", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Despite their clear computational interpretation (Girard et al., 1988; Troelstra and Schwichtenberg, 2000; S\u00f8rensen and Urzyczyn, 2006) , proofs in natural deduction format are arduous to obtain; reasoning with hypotheticals necessitates a mixture of forward and backward chaining search strategies. The sequent calculus presentation, on the other hand, permits exhaustive proof search via pure backward chaining, but does so at the cost of spurious ambiguity. Moreover, both the above assume a tree-like proof structure, which hinders their parallel processing and impairs compatibility with neural methods. As an alternative, we turn Figure 1 , with modal markings in place of implication arrows. Atomic types at the fringe of the formula decomposition trees are marked with superscript indices denoting their position for ease of identification. During decoding, the proof frame is flattened as the linear sequence:", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 70, |
|
"text": "(Girard et al., 1988;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 71, |
|
"end": 106, |
|
"text": "Troelstra and Schwichtenberg, 2000;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 107, |
|
"end": 135, |
|
"text": "S\u00f8rensen and Urzyczyn, 2006)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 636, |
|
"end": 644, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Proof Nets", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "[SOS], 2 det , N, NP, [SEP], N, [SEP], 3 body , 3 su , PRON, Ssub, 2 mod , NP, NP, [SEP], PRON, [SEP], 3 obj , . . .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Nets", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "our attention towards proof nets (Girard, 1987) , a graphical representation of linear logic proofs that captures hypothetical reasoning in a purely geometric manner. Proof nets may be seen as a parallelized version of the sequent calculus or a multi-conclusion version of natural deduction and combine the best of both words, allowing for flexible and easily parallelized proof search while maintaining the 1-to-1 correspondence with the terms of the linear \u03bb-calculus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 47, |
|
"text": "(Girard, 1987)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Nets", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "To define ILL proof nets, we first need the auxiliary notion of polarity. We assign positive polarity to resources we have, negative polarity to resources we seek. Logically, a formula with negative polarity appears in conclusion position (right of the turnstile), whereas formulas with positive polarity appear in premise position (left of the turnstile). Given a formula and its polarity, the polarity of its subformulas is computed as follows: for a positive formula T 1 T 2 , T 1 is negative and T 2 is positive, whereas for a negative formula T 1 T 2 , T 1 is positive and T 2 is negative.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Nets", |
|
"sec_num": "2.3" |
|
}, |
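The polarity computation described above can be written down directly; the sketch below (illustrative, not from the paper) lists the polarized atomic occurrences of a sequent's formulas in reading order.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Arrow:              # T1 -o T2
    argument: 'Type'
    result: 'Type'

Type = Union[Atom, Arrow]

def polarized_atoms(t: Type, positive: bool) -> List[Tuple[str, bool]]:
    """Atomic occurrences of t with their polarity: under T1 -o T2 the
    argument T1 flips polarity while the result T2 keeps it."""
    if isinstance(t, Atom):
        return [(t.name, positive)]
    return polarized_atoms(t.argument, not positive) + polarized_atoms(t.result, positive)

# In the sequent NP, NP -o S |- S, antecedent formulas are positive (resources
# we have) and the goal formula is negative (the resource we seek).
NP, S = Atom('NP'), Atom('S')
atoms = [a for premise in [NP, Arrow(NP, S)] for a in polarized_atoms(premise, True)]
atoms += polarized_atoms(S, False)
print(atoms)   # [('NP', True), ('NP', False), ('S', True), ('S', False)]
```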
|
{ |
|
"text": "With respect to proof search, proof nets present a simple but general setup as follows. (1) Begin by writing down the formula decomposition tree for all formulas in a sequent P 1 , . . . P n C, keeping track of polarity information; the result is called a proof frame.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Nets", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(2) Find a perfect matching between the positive and negative atomic formulas; the result is called a proof structure. (3) Finally, verify that the proof structure satisfies the correctness condition; if so, the result is a proof net.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Nets", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Formula decomposition is fully deterministic, with the decomposition rules shown in Figure 2 . There are two logical links, denoting positive and negative occurrences of an implication (corresponding to the elimination and introduction rules of natural deduction, respectively). A third rule, called the axiom link, connects two equal formulas of opposite polarity.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 92, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Proof Nets", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "To transform a proof frame into a proof structure, we first need to check the count invariance property, which requires an equal count of positive and negative occurrences for every atomic type, and then connect atoms of opposite polarity. In principle, we can connect any positive atom to any negative atom when both are of the same type; the combinatorics of proof search lies, therefore, in the axiom connections (the number of possible proof structures scales factorial to the number of atoms). Not all proof structures are, however, proof nets. Validating the correctness of a proof net can be done in linear time (Guerrini, 1999; Murawski and Ong, 2000) ; a common approach is to attempt a traversal of the proof net, ensuring that all nodes are visited (connectedness) and no loops exist (acyclicity) (Danos and Regnier, 1989) . There is an apparent tension here between finding just a matching of atomic formulas (which is trivial once we satisfy the count invariance) and finding the correct matching, which produces not only a proof net, but also the preferred semantic reading of the sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 619, |
|
"end": 635, |
|
"text": "(Guerrini, 1999;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 636, |
|
"end": 659, |
|
"text": "Murawski and Ong, 2000)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 808, |
|
"end": 833, |
|
"text": "(Danos and Regnier, 1989)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Nets", |
|
"sec_num": "2.3" |
|
}, |
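A small sketch of the count invariance check and of the factorially growing space of candidate matchings; the function names are assumptions.

```python
from collections import Counter
from itertools import permutations

def count_invariant(polarized):
    """polarized: list of (atom, is_positive) pairs.  Every atom must occur
    equally often with positive and with negative polarity."""
    positives = Counter(a for a, p in polarized if p)
    negatives = Counter(a for a, p in polarized if not p)
    return positives == negatives

def candidate_matchings(polarized):
    """Per atomic type, every way of pairing its negative with its positive
    occurrences; the number of candidates grows factorially with the counts."""
    out = {}
    for atom in {a for a, _ in polarized}:
        pos = [i for i, (a, p) in enumerate(polarized) if a == atom and p]
        neg = [i for i, (a, p) in enumerate(polarized) if a == atom and not p]
        out[atom] = [list(zip(neg, perm)) for perm in permutations(pos)]
    return out

frame = [('NP', True), ('NP', False), ('S', True), ('S', False)]
assert count_invariant(frame)
print(candidate_matchings(frame))   # a single candidate matching per atom here
```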
|
{ |
|
"text": "Deciding the provability of a linear logic sequent is an NP-complete problem (Lincoln, 1995) , even in the simplest case where formulas are restricted to order 1 (Kanovich, 1994) . Figure 3 shows the proof net equivalent to the derivation of Figure 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 92, |
|
"text": "(Lincoln, 1995)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 178, |
|
"text": "(Kanovich, 1994)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 189, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 250, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Proof Nets", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "To sidestep the complexity inherent in the combinatorics of linear logic proof search, we investigate proof net construction from a neural perspective. First, we will need to convert a sentence into a proof frame, i.e. the decomposition of a logical judgement of the form P 1 , . . . P n C, with P i the type of word i and C the goal type to be derived. Having obtained a correct proof frame, the problem boils down to establishing axiom links between the set of positive and negative atoms and verifying their validity according to the correctness criteria. We address each of these steps via a functionally independent neural module, and define Neural Proof Nets as their composition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Proof Nets", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Obtaining proof frames is a special case of supertagging, a common problem in NLP literature (Bangalore and Joshi, 1999) . Conventional practice treats supertagging as a discriminative sequence labeling problem, with a neural model contextualizing the tokens of an input sentence before passing them through a linear projection in order to convert them to class weights (Xu et al., 2015; Vaswani et al., 2016) . Here, instead, we adopt the generative paradigm (Kogkalidis et al., 2019 ; Bhargava and Penn, 2020), whereby each type is itself perceived as a sequence of primitive symbols.", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 120, |
|
"text": "(Bangalore and Joshi, 1999)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 387, |
|
"text": "(Xu et al., 2015;", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 409, |
|
"text": "Vaswani et al., 2016)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 484, |
|
"text": "(Kogkalidis et al., 2019", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Frames", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Concretely, we perform a depth-first-left-first traversal of formula trees to convert types to prefix (Polish) notation. This converts a type to a linear sequence of symbols s \u2208 V, where V =A \u222a D, the union of atomic types and dependency-decorated modal markings. 4 Proof frames can then be represented by joining individual type representations, separated with an extra-logical token [SEP] denoting type breaks and prefixed with a special token [SOS] to denote the sequence start (see the caption of Figure 3 for an example). The resulting sequence becomes the goal of a decoding process conditional on the input sentence, as implemented by a sequence-to-sequence model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 265, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 501, |
|
"end": 509, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Proof Frames", |
|
"sec_num": "3.1" |
|
}, |
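A rough sketch of the prefix (Polish) serialization of a proof frame; the concrete token spellings ('-o[su]', '[SOS]', '[SEP]') are illustrative and do not reproduce the paper's exact output vocabulary.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Arrow:                    # T1 -o T2, optionally carrying a dependency decoration
    argument: 'Type'
    result: 'Type'
    dep: str = ''               # e.g. 'su', 'obj', 'mod' (merged with the arrow, cf. footnote 4)

Type = Union[Atom, Arrow]

def polish(t: Type) -> List[str]:
    """Depth-first, left-first (prefix) serialization of one type."""
    if isinstance(t, Atom):
        return [t.name]
    head = f'-o[{t.dep}]' if t.dep else '-o'
    return [head] + polish(t.argument) + polish(t.result)

def serialize_frame(word_types: List[Type]) -> List[str]:
    """Join the word types of a sentence into a single proof-frame sequence."""
    sequence = ['[SOS]']
    for t in word_types:
        sequence += polish(t) + ['[SEP]']
    return sequence

NP, S = Atom('NP'), Atom('S')
print(serialize_frame([Arrow(NP, S, dep='su'), NP]))
# ['[SOS]', '-o[su]', 'NP', 'S', '[SEP]', 'NP', '[SEP]']
```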
|
{ |
|
"text": "Treating supertagging as auto-regressive decoding enables the prediction of any valid type in the grammar, improving generalization and eliminating the need for a strictly defined type lexicon. Further, the decoder's comprehension of the type construction process can yield drastic improvements for beam search, allowing distinct branching paths within individual types. Most importantly, it grants access to the atomic sub-formulas of a sequent, i.e. the primitive entities to be paired within a proof net -a quality that will come into play when considering the axiom linking process later on.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Frames", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The conversion of a proof frame into a proof structure requires establishing a correct bijection between positive and negative atoms, i.e. linking each positive occurrence of an atom with a single unique negative occurrence of the same atom.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We begin by first noting that each atomic formula occurrence within a proof frame can be assigned an identifying index according to its position in the sequence (refer to the example of Figure 3 ). For each distinct atomic type, we can then create a table with rows enumerating negative and columns enumerating positive occurrences of that type, ordered by their indexes. We mark cells indexing linked occurrences and leave the rest empty; tables for our running example can be seen in Figure 5 . The resulting tables correspond to a permutation matrix \u03a0 A for each atomic type A, i.e. a set of matrices that are square, binary and doubly-stochastic, encoding the permutation over the chain (i.e. ordered set) of negative elements that aligns them with the chain of matching positive elements. This key insight allows us to reframe automated proof search as learning the latent space that dictates the permutations between disjoint and non-contiguous sub-sequences of the primitive symbols constituting a decoded proof frame.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 194, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 494, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Proof Structures", |
|
"sec_num": "3.2" |
|
}, |
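The per-type tables can be materialized as permutation matrices in a few lines; the sketch below (with assumed positions and links, not the running example of the paper) builds the matrix from a mapping of negative to positive occurrences.

```python
import numpy as np

def permutation_matrix(neg_positions, pos_positions, links):
    """Rows enumerate the negative occurrences of one atomic type, columns the
    positive ones (both in sequence order); a 1 marks a linked pair.  `links`
    maps a negative position in the proof frame to its linked positive position."""
    n = len(neg_positions)
    assert n == len(pos_positions)              # count invariance
    pi = np.zeros((n, n), dtype=int)
    for row, neg in enumerate(neg_positions):
        pi[row, pos_positions.index(links[neg])] = 1
    # a permutation matrix is binary and doubly stochastic
    assert (pi.sum(axis=0) == 1).all() and (pi.sum(axis=1) == 1).all()
    return pi

# three NP occurrences of each polarity, linked "crosswise"
print(permutation_matrix([4, 9, 17], [2, 11, 20], {4: 11, 9: 2, 17: 20}))
```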
|
{ |
|
"text": "Permutation matrices are discrete mathematical objects that are not directly attainable by neural models. Their continuous relaxations are, however, valid outputs, approximated by means of the Sinkhorn operator (Sinkhorn, 1964) . In essence, the operator and its underlying theorem state that the iterative normalization (alternating between rows and columns) of a square matrix with positive entries yields, in the limit, a doubly-stochastic matrix, the entries of which are almost binary. Figure 5 : An alternative view of the axiom links of Figure 3 , with tables \u03a0 N , \u03a0 ADJ , \u03a0 Smain , \u03a0 Ssub , \u03a0 PRON , \u03a0 NP depicting the linked indices and corresponding permutations for each atomic type in the sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 227, |
|
"text": "(Sinkhorn, 1964)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 491, |
|
"end": 499, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 544, |
|
"end": 552, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Proof Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "non-linear activation function that applies on matrices, pushing them towards binarity and bistochasticity, analogous to a 2-dimensional softmax that preserves assignment (Mena et al., 2018) . Moving to the logarithmic space eliminates the positive entry constraint and facilitates numeric stability through the log-sum-exp trick. In that setting, the Sinkhorn-normalization of a real-valued square matrix X is defined as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 190, |
|
"text": "(Mena et al., 2018)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Sinkhorn(X) = lim \u03c4 \u2192\u221e exp (Sinkhorn \u03c4 (X))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where the induction is given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Sinkhorn 0 (X) = X Sinkhorn \u03c4 (X) = T r T r Sinkhorn (\u03c4 \u22121) (X)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "with T r the row normalization in the log-space:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "T r (X) i,j = X i,j \u2212 log N \u22121 r=0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Structures", |
|
"sec_num": "3.2" |
|
}, |
|
|
{ |
|
"text": "Bearing the above in mind, our goal reduces to assembling a matrix for each atomic type in a proof frame, with entries containing the unnormalized agreement scores of pairs in the cartesian product of positive and negative occurrences of that type. Given contextualized representations for each primitive symbol within a proof frame, scores can be simply computed as the inter-representation dot-product attention. Assuming, for instance, I +", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proof Structures", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "A the vectors indexing the positions of all a positive and negative occurrences of type A in a proof frame sequence, we can arrange the matrices P A , N A \u2208 R a\u00d7d containing their respective contextualized d-dimensional representations (recall that the count invariance property asserts equal shapes). The dot-product attention matrix containing their element-wise agreements will then be given asS A = P A N A \u2208 R a\u00d7a . Applying the Sinkhorn operator, we obtain S A = Sinkhorn(S A ), which, in our setup, will be modeled as a continuous approximation of the underlying permutation matrix \u03a0 A .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A and I \u2212", |
|
"sec_num": null |
|
}, |
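A minimal PyTorch sketch of the log-space Sinkhorn normalization defined above, applied to the dot-product scores of one atomic type; the iteration count, dimensions and the name log_sinkhorn are assumptions, and torch.logsumexp stands in for the log-sum-exp trick mentioned in the text.

```python
import torch

def log_sinkhorn(x: torch.Tensor, iters: int = 5) -> torch.Tensor:
    """Alternate row and column normalization of x in log-space; exponentiating
    the result approaches a doubly stochastic, near-binary matrix."""
    for _ in range(iters):
        x = x - torch.logsumexp(x, dim=-1, keepdim=True)   # rows
        x = x - torch.logsumexp(x, dim=-2, keepdim=True)   # columns
    return x

# contextualized representations of the a positive / negative occurrences of one atom
a, d = 4, 32
P_A = torch.randn(a, d)          # positive occurrences
N_A = torch.randn(a, d)          # negative occurrences
scores = P_A @ N_A.T             # unnormalized agreement, shape (a, a)
S_A = torch.exp(log_sinkhorn(scores, iters=50))
print(S_A.sum(dim=0), S_A.sum(dim=1))   # rows and columns now sum to roughly 1
print(S_A.argmax(dim=-1))               # a hard matching can be read off per row
```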
|
{ |
|
"text": "Encoder-Decoder We first encode sentences using BERTje (de Vries et al., 2019), a pretrained BERT-Base model (Devlin et al., 2019) localized for Dutch. We then decode into proof frame sequences using a Transformer-like decoder (Vaswani et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 130, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 249, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Symbol Embeddings In order to best utilize the small, structure-rich vocabulary of the decoder, we opt for lower-dimensional, positiondependent symbol embeddings. We follow insights from Wang et al. (2020) and embed decoder symbols as continuous functions in the complex space, associating each output symbol s \u2208 V with a magnitude embedding r s \u2208 R 128 and a frequency embedding \u03c9 s \u2208 R 128 . A symbol s occurring in position p in the proof frame is then assigned a vector v s,p = r s e j\u03c9sp \u2208 C 128 . We project to the decoder's vector space by concatenating the real and imaginary parts, obtaining the final representation as v s,p = conc( (\u1e7d s,p ), (\u1e7d s,p )) \u2208 R 256 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 205, |
|
"text": "Wang et al. (2020)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3.3" |
|
}, |
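A small PyTorch sketch of the position-dependent complex embedding described above; the shapes follow the text (128 complex, 256 real dimensions), everything else is an illustrative assumption.

```python
import torch

def symbol_embedding(r: torch.Tensor, omega: torch.Tensor, position: int) -> torch.Tensor:
    """r, omega: magnitude and frequency embeddings of one symbol, each of shape (128,).
    Returns conc(Re(v), Im(v)) of shape (256,), where v = r * exp(j * omega * position)."""
    phase = omega * position
    v = r * torch.exp(torch.complex(torch.zeros_like(phase), phase))   # element of C^128
    return torch.cat([v.real, v.imag])                                 # element of R^256

r_s, omega_s = torch.randn(128), torch.randn(128)
print(symbol_embedding(r_s, omega_s, position=7).shape)                # torch.Size([256])
```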
|
{ |
|
"text": "Tying the embedding parameters with those of the pre-softmax transformation reduces the network's memory footprint and improves representation quality (Press and Wolf, 2017) . In duality to the input embeddings, we treat output embeddings as functionals parametric to positions. To classify a token occurring in position p, we first compute a matrix V p consisting of the local embeddings of all vocabulary symbols, V p = v :,p \u2208 R ||V||\u00d7256 . The transpose of that matrix acts then as a linear map from the decoder's representation to class weights, from which a probability distribution is obtained by application of the softmax function.", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 173, |
|
"text": "(Press and Wolf, 2017)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3.3" |
|
}, |
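Continuing the previous sketch, the same magnitude and frequency tables can serve as the position-dependent output classifier by transposing the local embedding matrix V_p; vocabulary size and dimensions below are illustrative assumptions.

```python
import torch

vocab_size, half_dim = 40, 128

# shared parameters: one magnitude and one frequency embedding per vocabulary symbol
R = torch.randn(vocab_size, half_dim)
Omega = torch.randn(vocab_size, half_dim)

def local_embedding_table(p: int) -> torch.Tensor:
    """V_p: the embeddings of every vocabulary symbol at position p, shape (|V|, 256)."""
    phase = Omega * p
    v = R * torch.exp(torch.complex(torch.zeros_like(phase), phase))
    return torch.cat([v.real, v.imag], dim=-1)

def classify(decoder_state: torch.Tensor, p: int) -> torch.Tensor:
    """Class weights via the transpose of V_p, followed by a softmax."""
    logits = decoder_state @ local_embedding_table(p).T
    return torch.softmax(logits, dim=-1)

probabilities = classify(torch.randn(2 * half_dim), p=3)
print(probabilities.shape, float(probabilities.sum()))    # torch.Size([40]) 1.0
```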
|
{ |
|
"text": "Proof Frame Contextualization Proof frames may generally give rise to more than one distinct proof, with only a portion of those being linguistically plausible. Frames eligible to more than one potential semantic reading can be disambiguated by accounting for statistical preferences, as exhibited by lexical cues. Consequently, we need our contextualization scheme to incorporate the sentential representation in its processing flow. To that end, we employ another Transformer decoder, now modified to operate with no causal mask, thus allowing all decoded symbols to freely attend over one another regardless of their relative position. This effectively converts it into a bi-modal encoder which operates on two input sequences of different length and dimensionality, namely the BERT output and the sequence of proof frame symbols, and constructs contextualized representations of the latter as informed by the former.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "3.3" |
|
}, |
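One possible way to realize the bi-modal encoder with stock PyTorch modules is sketched below: a TransformerDecoder applied without a causal target mask, cross-attending over a (projected) sentence encoding. The bridging projection and all dimensions are assumptions rather than the paper's exact wiring.

```python
import torch
import torch.nn as nn

sentence_dim, frame_dim = 768, 256   # BERT output vs. proof-frame symbol dimension

# project the sentence encoding into the frame dimension so a stock decoder layer
# can cross-attend over it (one possible bridge between the two modalities)
bridge = nn.Linear(sentence_dim, frame_dim)
layer = nn.TransformerDecoderLayer(d_model=frame_dim, nhead=8, batch_first=True)
bimodal_encoder = nn.TransformerDecoder(layer, num_layers=2)

sentence = torch.randn(1, 12, sentence_dim)   # contextualized word pieces
frame = torch.randn(1, 30, frame_dim)         # embedded proof-frame symbols

# no tgt_mask: every frame symbol may attend to every other one, regardless of position
contextualized_frame = bimodal_encoder(tgt=frame, memory=bridge(sentence))
print(contextualized_frame.shape)             # torch.Size([1, 30, 256])
```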
|
{ |
|
"text": "We index the contextualized proof frame to obtain a pair of matrices for each distinct atomic type in a sentence, easing the complexity of the problem by preemptively dismissing the possibility of linking unequal types; this also alleviates performance issues noted when permuting sets of high cardinality (Mena et al., 2018) . Post contextualization, positive and negative items are projected to a lower dimensionality via a pair of feed-forward neural functions, applied token-wise. Normalizing the dot-product attention weights between the above with Sinkhorn yields our final output.", |
|
"cite_spans": [ |
|
{ |
|
"start": 306, |
|
"end": 325, |
|
"text": "(Mena et al., 2018)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Axiom Linking", |
|
"sec_num": null |
|
}, |
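Putting the pieces together, a sketch of the axiom-linking head: gather the contextualized representations of the positive and negative occurrences of one atomic type, project them with small feed-forward maps, score them by dot product and Sinkhorn-normalize (reusing a log-space Sinkhorn like the one sketched earlier). Module names and sizes are assumptions.

```python
import torch
import torch.nn as nn

def log_sinkhorn(x: torch.Tensor, iters: int = 5) -> torch.Tensor:
    for _ in range(iters):
        x = x - torch.logsumexp(x, dim=-1, keepdim=True)
        x = x - torch.logsumexp(x, dim=-2, keepdim=True)
    return x

frame_dim, link_dim = 256, 64
project_pos = nn.Sequential(nn.Linear(frame_dim, link_dim), nn.GELU())
project_neg = nn.Sequential(nn.Linear(frame_dim, link_dim), nn.GELU())

def link_scores(frame_repr: torch.Tensor, pos_idx, neg_idx) -> torch.Tensor:
    """frame_repr: (seq_len, frame_dim) contextualized proof-frame symbols.
    pos_idx / neg_idx: positions of the positive / negative occurrences of one
    atomic type.  Returns a soft permutation over their possible pairings."""
    P = project_pos(frame_repr[pos_idx])      # (a, link_dim)
    N = project_neg(frame_repr[neg_idx])      # (a, link_dim)
    return torch.exp(log_sinkhorn(P @ N.T))

frame_repr = torch.randn(40, frame_dim)
soft_links = link_scores(frame_repr, pos_idx=[3, 11, 25], neg_idx=[6, 14, 30])
print(soft_links)                             # rows and columns sum to roughly 1
```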
|
{ |
|
"text": "We train, validate and test our architecture on the corresponding subsets of the AEthel dataset, filtering out samples the proof frames of which exceed 100 primitive symbols. Implementation details and hyper-parameter tables, an illustration of the full architecture, dataset statistics and example parses are provided in Appendix A. 5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We train our architecture end-to-end, including all BERT parameters apart from the embedding layer, using AdamW (Loshchilov and Hutter, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 141, |
|
"text": "(Loshchilov and Hutter, 2018)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In order to jointly learn representations that accommodate both the proof-frame and the proofstructure outputs, we back-propagate a loss signal derived as the addition of two loss functions. The first is the Kullback-Leibler divergence between the predicted proof frame symbols and the labelsmoothed ground-truth distribution (M\u00fcller et al., 2019) . The second is the negative log-likelihood between the Sinkhorn-activated dot-product weights and the corresponding binary-valued permutation matrices.", |
|
"cite_spans": [ |
|
{ |
|
"start": 326, |
|
"end": 347, |
|
"text": "(M\u00fcller et al., 2019)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "4.1" |
|
}, |
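A sketch of the two-part training loss described above: label-smoothed KL divergence for the proof-frame symbols plus a negative log-likelihood over the Sinkhorn-activated link strengths; the smoothing value, reductions and epsilon are assumptions.

```python
import torch
import torch.nn.functional as F

def frame_loss(log_probs: torch.Tensor, targets: torch.Tensor, num_classes: int,
               smoothing: float = 0.1) -> torch.Tensor:
    """KL divergence between predicted symbol distributions (log-probabilities of
    shape (n, num_classes)) and the label-smoothed ground-truth distribution."""
    smoothed = torch.full_like(log_probs, smoothing / (num_classes - 1))
    smoothed.scatter_(1, targets.unsqueeze(1), 1.0 - smoothing)
    return F.kl_div(log_probs, smoothed, reduction='batchmean')

def link_loss(soft_links: torch.Tensor, true_links: torch.Tensor,
              eps: float = 1e-8) -> torch.Tensor:
    """Negative log-likelihood of the true axiom links under the Sinkhorn-activated
    link strengths; both arguments are (a, a) with rows indexing negative atoms."""
    return -(true_links * torch.log(soft_links + eps)).sum(dim=-1).mean()

log_probs = F.log_softmax(torch.randn(7, 40), dim=-1)   # 7 decoded symbols, 40 classes
targets = torch.randint(0, 40, (7,))
soft_links = torch.softmax(torch.randn(3, 3), dim=-1)   # stand-in for a Sinkhorn output
true_links = torch.eye(3)
print(float(frame_loss(log_probs, targets, 40) + link_loss(soft_links, true_links)))
```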
|
{ |
|
"text": "Throughout training, we validate by measuring the per-symbol and per-sentence typing accuracy of the greedily decoded proof frame, as well as the linking accuracy under the assumption of an errorfree decoding. We perform model selection on the basis of the above metrics and reach convergence after approximately 300 epochs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We test model performance using beam search. For each input sentence, we consider the \u03b2 best decode paths, with a path's score being the sum of its symbols' log probabilities, counting all symbols up to the last expected [SEP] token. Neural decoding is followed by a series of filtering steps. We first parse the decoded symbol sequences, discarding beams containing subsequences that do not meet the inductive constructors of the type grammar. The atomic formulas of the passing proof frames are polarized according to the process of \u00a72.3. Frames failing to satisfy the count invariance property are also discarded. The remaining ones constitute potential candidates for a proof structure; their primitive symbols are contextualized by the bimodal encoder, and are then used to compute soft axiom link strengths between atomic formulas of matching types. Discretization of the output yields a graph encoding a proof structure; we follow the net traversal algorithm of Lamarche (2008) to check whether it is a valid proof net, and, if so, produce the \u03bb-term in the process (de Groote and Retor\u00e9, 1996) . Terms generated this way contain no redundant abstractions, being in \u03b2-normal \u03b7-long form. Table 1 presents a breakdown of model performance at different beam widths. To evaluate model performance, we use the first valid beam of each sample, defaulting to the highest scoring beam if none is available. On the token level, we report supertagging accuracy, i.e. the percentage of types correctly assigned. We further measure the percentage of samples satisfying each of the following sentential metrics: 1) invariance property, a condition necessary for being eligible to a proof structure, 2) frame correctness, i.e. whether the decoded frame is identical to the target frame, meaning all types assigned are the correct ones, 3) untyped term accuracy, i.e. whether, regardless of the proof frame, the untyped \u03bb-term coincides with the true one, and 4) typed term accuracy, meaning that both the proof frame and the untyped term are correct.", |
|
"cite_spans": [ |
|
{ |
|
"start": 969, |
|
"end": 984, |
|
"text": "Lamarche (2008)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1073, |
|
"end": 1101, |
|
"text": "(de Groote and Retor\u00e9, 1996)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1195, |
|
"end": 1202, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Testing", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Numeric comparisons against other works in the literature is neither our prime goal nor an easy task; the dataset utilized is fairly recent, the novelty of our methods renders them non-trivial to adapt to other settings, and ILL-friendly categorial grammars are not particularly common in experimental setups. As a sanity check, however, and in order to obtain some meaningful baselines, we employ the Alpino parser (Bouma et al., 2001 ). Alpino is a hybrid parser based on a sophisticated handwritten grammar and a maximum entropy disambiguation model; despite its age and the domain difference, Alpino is competitive to the state-of-theart in UD parsing, remaining within a 2% margin to the last reported benchmark (Bouma and van Noord, 2017; Che et al., 2018) . We pair Alpino with the extraction algorithm used to convert its output into ILL ,3,2 derivations (Kogkalidis et al., 2020) ; together, the two faithfully replicate the data generating process our system has been trained on, modulo the manual correction phase of van Noord et al. (2013). We query Alpino for the globally optimal parse of each sample in the test set (enforcing no time constraints), perform the conversion and log the results in Table 1. Our model achieves remarkable performance even in the greedy setting, especially considering the rigidity of our metrics. Untyped term accuracy conveys the percentage of sentences for which the function-argument structure has been perfectly captured. Typed term accuracy is even stricter; the added requirement of a correct proof frame practically translates to no erroneous assignments of part-of-speech and syntactic phrase tags or dependency labels. Keeping in mind that dependency information are already incorporated in the proof frame, obtaining the correct proof structure fully subsumes dependency parsing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 416, |
|
"end": 435, |
|
"text": "(Bouma et al., 2001", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 717, |
|
"end": 744, |
|
"text": "(Bouma and van Noord, 2017;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 745, |
|
"end": 762, |
|
"text": "Che et al., 2018)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 863, |
|
"end": 888, |
|
"text": "(Kogkalidis et al., 2020)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1210, |
|
"end": 1218, |
|
"text": "Table 1.", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The filtering criteria of the previous paragraph yield significant benefits when combined with beam search, allowing us to circumvent logically unsound analyses regardless of their sequence scores. It is worth noting that our metrics place the model's bottleneck at the supertagging rather than the permutation component. Term accuracy closely follows along (and actually surpasses, in the untyped case) frame accuracy. This is further evidenced when providing the ground truth types as input to the parser, in which case term accuracy reaches as high as 85.4%, indicative of the high expressive power of Sinkhorn on top of the the bi-modal encoder's contextualization. On the negative side, the strong reliance on correct type assignments means that a single mislabeled word can heavily skew the parse outcome, but also hints at increasing returns from improvements in the decoding architecture.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Our work bears semblances to other neural methodologies related to syntactic/semantic parsing. Sequence-to-sequence models have been successfully employed in the past to decode directly into flattened representations of parse trees (Wiseman and Rush, 2016; Buys and Blunsom, 2017; Li et al., 2018) . In dependency parsing literature, head selection involves building word representations that act as classifying functions over other words (Zhang et al., 2017) , similar to our dotproduct weighting between atoms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 257, |
|
"end": 280, |
|
"text": "Buys and Blunsom, 2017;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 297, |
|
"text": "Li et al., 2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 439, |
|
"end": 459, |
|
"text": "(Zhang et al., 2017)", |
|
"ref_id": "BIBREF56" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Akin to graph-based parsers (Ji et al., 2019; Zhang et al., 2019) , our model generates parse structures in the form of graphs. In our case, how-ever, graph nodes correspond to syntactic primitives (atomic types & dependencies) rather than words, while the discovery of the graph structure is subject to hard constraints imposed by the decoder's output.", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 45, |
|
"text": "(Ji et al., 2019;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 46, |
|
"end": 65, |
|
"text": "Zhang et al., 2019)", |
|
"ref_id": "BIBREF55" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Transcription to formal expressions (logical forms, \u03bb-terms, database queries and executable program instructions) has also been a prominent theme in NLP literature, using statistical methods (Zettlemoyer and Collins, 2012) or structurallyconstrained decoders (Dong and Lapata, 2016; Xiao et al., 2016; Cheng et al., 2019) . Unlike prior approaches, the decoding we employ here is unhindered by explicit structure; instead, parsing is handled in parallel across the entire sequence by the Sinkhorn operator, which biases the output towards structural correctness while requiring neither backtracking nor iterative processing. More importantly, the \u03bb-terms we generate are not in themselves the product of a neural decoding process, but rather a corollary of the isomorphic relation between ILL proofs and linear \u03bb-calculus programs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 283, |
|
"text": "(Dong and Lapata, 2016;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 302, |
|
"text": "Xiao et al., 2016;", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 303, |
|
"end": 322, |
|
"text": "Cheng et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In machine learning literature, Sinkhorn-based networks have been gaining popularity as a means of learning latent permutations of visual or synthetic data (Mena et al., 2018) or imposing permutation invariance for set-theoretic learning (Grover et al., 2019) , with so far limited adoption in the linguistic setting (Tay et al., 2020; Swanson et al., 2020) . In contrast to prior applications of Sinkhorn as a final classification layer, we use it over chain element representations that have been mutually contextualized, rather than set elements vectorized in isolation. Our benchmarks, combined with the assignment-preserving property of the operator, hint towards potential benefits from adopting it in a similar fashion across other parsing tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 175, |
|
"text": "(Mena et al., 2018)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 259, |
|
"text": "(Grover et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 335, |
|
"text": "(Tay et al., 2020;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 357, |
|
"text": "Swanson et al., 2020)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We have introduced neural proof nets, a data-driven perspective on the proof nets of ILL , and successfully employed them on the demanding task of transcribing raw text to proofs and computational terms of the linear \u03bb-calculus. The terms construed constitute type-safe abstract program skeletons that are free to interpret within arbitrary domains, fulfilling the role of a practical intermediary between text and meaning. Used as-is, they can find direct application in logic-driven models of natural language inference (Abzianidze, 2016).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our architecture marks a departure from other parsing approaches, owing to the novel use of the Sinkhorn operator, which renders it both fully parallel and backtrack-free, but also logically grounded. It is general enough to apply to a variety of grammar formalisms inheriting from linear logic; if augmented with Gumbel sampling (Mena et al., 2018) , it can further a provide a probabilistic means to account for derivational ambiguity. Viewed as a means of exposing deep tecto-grammatic structure, it paves the way for graph-theoretic approaches at syntax-aware sentential meaning representations. Output activation LayerNorm ", |
|
"cite_spans": [ |
|
{ |
|
"start": 330, |
|
"end": 349, |
|
"text": "(Mena et al., 2018)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We train with an adaptive learning rate following Vaswani et al. (2017) , such that the learning rate at optimization step i is given as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 71, |
|
"text": "Vaswani et al. (2017)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 Optimization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "768 \u22120.5 \u2022 min i \u22120.5 , i \u2022 warmup_steps \u22121.5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 Optimization", |
|
"sec_num": null |
|
}, |
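The schedule as a tiny function; the warm-up value below is an assumption (the actual setting lives in Table 3, which is not reproduced here).

```python
def learning_rate(step: int, model_dim: int = 768, warmup_steps: int = 4000) -> float:
    """lr(i) = model_dim^-0.5 * min(i^-0.5, i * warmup_steps^-1.5)."""
    step = max(step, 1)   # avoid 0 ** -0.5 at the very first step
    return model_dim ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# rises linearly during warm-up, then decays with the inverse square root of the step
print(learning_rate(100), learning_rate(4000), learning_rate(40000))
```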
|
{ |
|
"text": "For BERT parameters, learning rate is scaled by 0.1. We freeze the oversized word embedding layer to reduce training costs and avoid overfitting. Optimization hyper-parameters are presented in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 200, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A.2 Optimization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We provide strict teacher guidance when learning axiom links, whereby the network is provided with the original proof frame symbol sequence instead of the predicted one. To speed up computation, positive and negative indexes are arranged perlength rather than type for each batch; this allows us to process symbol transformations, dot-product attentions and Sinkhorn activations in parallel for many types across many sentences. During training, we set the number of Sinkhorn iterations to 5; lower values are more difficult to reach convergence with, hurting performance, whereas higher values can easily lead to vanishing gradients, impeding learning (Grover et al., 2019) . A.3 Data Figure 7 presents cumulative distributions of dataset statistics. The kept portion of the dataset corresponds to roughly 97% of the original, enumerating 55 683 training, 6 971 validation and 6 957 test samples. Table 4 summarizes the model's performance in terms of untyped term accuracy over the test set in the greedy setting, binned according to input sentence lengths. Figure 6 : Schematic diagram of the full network architecture. The supertagger (orange, left) iteratively generates a proof frame by attending over the currently available part of it plus the full input sentence. The axiom linker (green, right) contextualizes the complete proof frame by attending over it as well as the sentence. Representations of atomic formulas are gathered and transformed according to their polarity, and their Sinkhorn-activated dotproduct attention is computed. Discretization of the result yields a permutation matrix denoting axiom links for each unique atomic type in the proof frame. The final output is a proof structure, i.e. the pair of a proof frame and its axiom links. In het wiskundige taalgebruik is er meestal een scheiding aan te brengen tussen de echte wiskundige taal en de taal waarmee we over die wiskundige taal of over het wiskundige bedrijf spreken. \"In mathematical discourse, there is usually a distinction to be made between the real mathematical language and the language with which we speak about the mathematical language or about the mathematical practice.\" -Probeer zinnen steeds zo te stellen dat ze alleen op de door de schrijver bedoelde wijze zijn terug te lezen. \"Try to always formulate sentences in such a way that they can only be read in the manner intended by the author.\" -In het Nederlands kunnen vele zinnen wat volgorde betreft omgegooid worden. \"In Dutch, many sentences can be restructured as far as order is concerned.\" In het Nederlands kunnen vaak twee zinnen tot \u00e8\u00e8n kortere worden samengetrokken. \"In Dutch, two sentences can often be merged into a shorter one.\" Populaire taal is vaak minder beveiligd tegen dubbelzinnigheid dan nette taal, en het mengsel van beide talen is n\u00f2g gevaarlijker. \"Informal language is often less protected against ambiguity than formal language, and the mixture of both languages is even more dangerous.\" Table 5 : Greedy parses of the opening sentences of the first seven paragraphs of de Bruijn (1979) , in the form of type-and dependency-annotated \u03bb expressions. Two of them (3", |
|
"cite_spans": [ |
|
{ |
|
"start": 653, |
|
"end": 674, |
|
"text": "(Grover et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 3056, |
|
"end": 3069, |
|
"text": "Bruijn (1979)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 686, |
|
"end": 694, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 898, |
|
"end": 905, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 1060, |
|
"end": 1068, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2971, |
|
"end": 2978, |
|
"text": "Table 5", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A.2 Optimization", |
|
"sec_num": null |
|
}, |
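{

"text": "A minimal sketch of the Sinkhorn-activated matching step described above, assuming PyTorch: the five normalization iterations follow the text, while the tensor names, the temperature and the argmax discretization are illustrative choices rather than the authors' exact implementation.\nimport torch\n\ndef sinkhorn(log_alpha, n_iters=5):\n    # Alternately normalize rows and columns in log space, pushing the square\n    # score matrix towards a doubly stochastic (soft permutation) matrix.\n    for _ in range(n_iters):\n        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=-1, keepdim=True)\n        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=-2, keepdim=True)\n    return log_alpha.exp()\n\ndef link_axioms(positive, negative, temperature=1.0, n_iters=5):\n    # positive, negative: (n, d) representations of the n positive and the\n    # n negative occurrences of a single atomic type within a proof frame.\n    scores = positive @ negative.t() / temperature  # dot-product attention\n    soft_perm = sinkhorn(scores, n_iters)           # approximate permutation\n    # Discretize: link each positive occurrence to its best-scoring negative one.\n    return soft_perm.argmax(dim=-1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A.2 Optimization",

"sec_num": null

},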
|
{ |
|
"text": "Read as right-associative. 3 O(A), the order of an atomic type, equals zero; for function types O(T1 T2) = max(O(T1) + 1, O(T2)).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Dependency decorations occur only within the scope of an implication, so the two are merged into a single symbol for reasons of length economy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The implementing code can be found at github.com/ konstantinosKokos/neural-proof-nets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank the anonymous reviewers for their detailed feedback, which helped improve the presentation of the paper. Konstantinos and Michael are supported by the Dutch Research Council (NWO) under the scope of the project \"A composition calculus for vector-based semantic modelling with a localization for Dutch\" (360-89-070).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": " Table 2 presents model hyper-parameters, as selected by greedy grid search. An illustration of the model can be seen in Figure 6 . ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1, |
|
"end": 8, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 121, |
|
"end": 129, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A.1 Model", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Natural solution to FraCaS entailment problems", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lasha Abzianidze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "64--74", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/S16-2007" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lasha Abzianidze. 2016. Natural solution to FraCaS entailment problems. In Proceedings of the Fifth Joint Conference on Lexical and Computational Se- mantics, pages 64-74, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Supertagging: An approach to almost parsing", |
|
"authors": [ |
|
{ |
|
"first": "Srinivas", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Aravind", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Computational linguistics", |
|
"volume": "25", |
|
"issue": "2", |
|
"pages": "237--265", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Srinivas Bangalore and Aravind K Joshi. 1999. Su- pertagging: An approach to almost parsing. Com- putational linguistics, 25(2):237-265.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Supertagging with CCG primitives", |
|
"authors": [ |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Bhargava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerald", |
|
"middle": [], |
|
"last": "Penn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 5th Workshop on Representation Learning for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "194--204", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aditya Bhargava and Gerald Penn. 2020. Supertag- ging with CCG primitives. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 194-204, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Increasing return on annotation investment: The automatic construction of a Universal Dependency treebank for Dutch", |
|
"authors": [ |
|
{ |
|
"first": "Gosse", |
|
"middle": [], |
|
"last": "Bouma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gertjan Van Noord", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--26", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gosse Bouma and Gertjan van Noord. 2017. Increas- ing return on annotation investment: The automatic construction of a Universal Dependency treebank for Dutch. In Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017), pages 19-26, Gothenburg, Sweden. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Alpino: Wide-coverage computational analysis of dutch", |
|
"authors": [ |
|
{ |
|
"first": "Gosse", |
|
"middle": [], |
|
"last": "Bouma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Gertjan Van Noord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Malouf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computational linguistics in the Netherlands", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gosse Bouma, Gertjan van Noord, and Robert Malouf. 2001. Alpino: Wide-coverage computational anal- ysis of dutch. In Computational linguistics in the Netherlands 2000, pages 45-59. Brill Rodopi.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Wiskundigen, let op uw Nederlands", |
|
"authors": [ |
|
{ |
|
"first": "Nicolaas", |
|
"middle": [], |
|
"last": "Govert De Bruijn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1979, |
|
"venue": "", |
|
"volume": "55", |
|
"issue": "", |
|
"pages": "429--435", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicolaas Govert de Bruijn. 1979. Wiskundigen, let op uw Nederlands. Euclides, 55(juni/juli):429-435.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Robust incremental neural semantic graph parsing", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Buys", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1215--1226", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Buys and Phil Blunsom. 2017. Robust incremen- tal neural semantic graph parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1215-1226.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation", |
|
"authors": [ |
|
{ |
|
"first": "Wanxiang", |
|
"middle": [], |
|
"last": "Che", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yijia", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuxuan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--64", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K18-2005" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 55-64, Brussels, Belgium. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Learning an executable neural semantic parser", |
|
"authors": [ |
|
{ |
|
"first": "Jianpeng", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siva", |
|
"middle": [], |
|
"last": "Reddy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vijay", |
|
"middle": [], |
|
"last": "Saraswat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Computational Linguistics", |
|
"volume": "45", |
|
"issue": "1", |
|
"pages": "59--94", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. 2019. Learning an executable neu- ral semantic parser. Computational Linguistics, 45(1):59-94.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The structure of multiplicatives", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Danos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Regnier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Archive for Mathematical Logic", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "181--203", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Danos and Laurent Regnier. 1989. The struc- ture of multiplicatives. Archive for Mathematical Logic, 28:181-203.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Language to logical form with neural attention", |
|
"authors": [ |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "33--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li Dong and Mirella Lapata. 2016. Language to logi- cal form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 33-43.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Linear logic. Theoretical computer science", |
|
"authors": [ |
|
{ |
|
"first": "Jean-Yves", |
|
"middle": [], |
|
"last": "Girard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "", |
|
"volume": "50", |
|
"issue": "", |
|
"pages": "1--101", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean-Yves Girard. 1987. Linear logic. Theoretical computer science, 50(1):1-101.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Proofs and Types. Cambridge Tracts in Theoretical Computer Science 7", |
|
"authors": [ |
|
{ |
|
"first": "Jean-Yves", |
|
"middle": [], |
|
"last": "Girard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Lafont", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean-Yves Girard, Yves Lafont, and P. Taylor. 1988. Proofs and Types. Cambridge Tracts in Theoretical Computer Science 7. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Towards abstract categorial grammars", |
|
"authors": [ |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "De", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Groote", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "252--259", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philippe de Groote. 2001. Towards abstract categorial grammars. In Proceedings of the 39th Annual Meet- ing of the Association for Computational Linguistics, pages 252-259.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "On the semantic readings of proof-nets", |
|
"authors": [ |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "De", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Groote", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Retor\u00e9", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings Formal grammar", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--70", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philippe de Groote and Christian Retor\u00e9. 1996. On the semantic readings of proof-nets. In Proceedings For- mal grammar, pages 57-70, Prague, Czech Repub- lic. FoLLI.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Stochastic optimization of sorting networks via continuous relaxations", |
|
"authors": [ |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Grover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Zweig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefano", |
|
"middle": [], |
|
"last": "Ermon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aditya Grover, Eric Wang, Aaron Zweig, and Stefano Ermon. 2019. Stochastic optimization of sorting net- works via continuous relaxations. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Correctness of multiplicative proof nets is linear", |
|
"authors": [ |
|
{ |
|
"first": "Stefano", |
|
"middle": [], |
|
"last": "Guerrini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Fourteenth Annual IEEE Symposium on Logic in Computer Science", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "454--263", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefano Guerrini. 1999. Correctness of multiplicative proof nets is linear. In Fourteenth Annual IEEE Sym- posium on Logic in Computer Science, pages 454- 263. IEEE Computer Science Society.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Bridging nonlinearities and stochastic regularizers with gaussian error linear units", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Hendrycks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaus- sian error linear units.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Graph-based dependency parsing with graph neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuanbin", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Man", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2475--2485", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Ji, Yuanbin Wu, and Man Lan. 2019. Graph-based dependency parsing with graph neural networks. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2475- 2485.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The complexity of horn fragments of linear logic", |
|
"authors": [ |
|
{ |
|
"first": "Max", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Kanovich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Annals of Pure and Applied Logic", |
|
"volume": "69", |
|
"issue": "2-3", |
|
"pages": "195--241", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Max I. Kanovich. 1994. The complexity of horn frag- ments of linear logic. Annals of Pure and Applied Logic, 69(2-3):195-241.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Constructive type-logical supertagging with self-attention networks", |
|
"authors": [ |
|
{ |
|
"first": "Konstantinos", |
|
"middle": [], |
|
"last": "Kogkalidis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Moortgat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tejaswini", |
|
"middle": [], |
|
"last": "Deoskar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 4th Workshop on Representation Learning for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "113--123", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Konstantinos Kogkalidis, Michael Moortgat, and Te- jaswini Deoskar. 2019. Constructive type-logical su- pertagging with self-attention networks. In Proceed- ings of the 4th Workshop on Representation Learn- ing for NLP (RepL4NLP-2019), pages 113-123.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "AEthel: Automatically extracted typelogical derivations for dutch", |
|
"authors": [ |
|
{ |
|
"first": "Konstantinos", |
|
"middle": [], |
|
"last": "Kogkalidis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Moortgat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Moot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5259--5268", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Konstantinos Kogkalidis, Michael Moortgat, and Richard Moot. 2020. AEthel: Automatically ex- tracted typelogical derivations for dutch. In Pro- ceedings of The 12th Language Resources and Eval- uation Conference, pages 5259-5268, Marseille, France. European Language Resources Association.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Type-Logical Syntax", |
|
"authors": [ |
|
{ |
|
"first": "Ysuke", |
|
"middle": [], |
|
"last": "Kubota", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Levine", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ysuke Kubota and Robert Levine. 2020. Type-Logical Syntax. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Proof nets for intuitionistic linear logic: Essential nets", |
|
"authors": [ |
|
{ |
|
"first": "Fran\u00e7ois", |
|
"middle": [], |
|
"last": "Lamarche", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fran\u00e7ois Lamarche. 2008. Proof nets for intuitionistic linear logic: Essential nets. Research report, INRIA Nancy.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "The mathematics of sentence structure", |
|
"authors": [ |
|
{ |
|
"first": "Joachim", |
|
"middle": [], |
|
"last": "Lambek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1958, |
|
"venue": "The American Mathematical Monthly", |
|
"volume": "65", |
|
"issue": "3", |
|
"pages": "154--170", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joachim Lambek. 1958. The mathematics of sentence structure. The American Mathematical Monthly, 65(3):154-170.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Seq2seq dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Zuchao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiaxun", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shexia", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hai", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3203--3214", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018. Seq2seq dependency parsing. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3203-3214.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Deciding provability of linear logic formulas", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Lincoln", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Advances in Linear Logic", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "109--122", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Patrick Lincoln. 1995. Deciding provability of linear logic formulas. In Jean-Yves Girard, Yves Lafont, and Laurent Regnier, editors, Advances in Linear Logic, pages 109-122. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Discourse representation structure parsing", |
|
"authors": [ |
|
{ |
|
"first": "Jiangming", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Shay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "429--439", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiangming Liu, Shay B Cohen, and Mirella Lapata. 2018. Discourse representation structure parsing. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 429-439.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Fixing weight decay regularization in adam", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Loshchilov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Hutter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in adam.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Learning latent permutations with Gumbel-Sinkhorn networks", |
|
"authors": [ |
|
{ |
|
"first": "Gonzalo", |
|
"middle": [], |
|
"last": "Mena", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Belanger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Linderman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jasper", |
|
"middle": [], |
|
"last": "Snoek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gonzalo Mena, David Belanger, Scott Linderman, and Jasper Snoek. 2018. Learning latent permutations with Gumbel-Sinkhorn networks. In International Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Multimodal linguistic inference", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Moortgat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Journal of Logic, Language and Information", |
|
"volume": "5", |
|
"issue": "3/4", |
|
"pages": "349--385", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Moortgat. 1996. Multimodal linguistic infer- ence. Journal of Logic, Language and Information, 5(3/4):349-385.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "A categorial type logic", |
|
"authors": [ |
|
{ |
|
"first": "Glyn", |
|
"middle": [], |
|
"last": "Morrill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Categories and Types in Logic, Language, and Physics -Essays Dedicated to Jim Lambek on the Occasion of His 90th Birthday", |
|
"volume": "8222", |
|
"issue": "", |
|
"pages": "331--352", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Glyn Morrill. 2014. A categorial type logic. In Cate- gories and Types in Logic, Language, and Physics - Essays Dedicated to Jim Lambek on the Occasion of His 90th Birthday, volume 8222 of Lecture Notes in Computer Science, pages 331-352. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "When does label smoothing help?", |
|
"authors": [ |
|
{ |
|
"first": "Rafael", |
|
"middle": [], |
|
"last": "M\u00fcller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Kornblith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4696--4705", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rafael M\u00fcller, Simon Kornblith, and Geoffrey E Hin- ton. 2019. When does label smoothing help? In Advances in Neural Information Processing Systems, pages 4696-4705.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Dominator trees and fast verification of proof nets", |
|
"authors": [ |
|
{ |
|
"first": "Andrzej", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Murawski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C.-H. Luke", |
|
"middle": [], |
|
"last": "Ong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Logic in Computer Science", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "181--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrzej S. Murawski and C.-H. Luke Ong. 2000. Dom- inator trees and fast verification of proof nets. In Logic in Computer Science, pages 181-191.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Lambda grammars and the syntax-semantics interface", |
|
"authors": [ |
|
{ |
|
"first": "Reinhard", |
|
"middle": [], |
|
"last": "Muskens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 13th Amsterdam Colloquium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "150--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reinhard Muskens. 2001. Lambda grammars and the syntax-semantics interface. In Proceedings of the 13th Amsterdam Colloquium, pages 150-155.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Static and dynamic vector semantics for lambda calculus models of natural language", |
|
"authors": [ |
|
{ |
|
"first": "Reinhard", |
|
"middle": [], |
|
"last": "Muskens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehrnoosh", |
|
"middle": [], |
|
"last": "Sadrzadeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Journal of Language Modelling", |
|
"volume": "6", |
|
"issue": "2", |
|
"pages": "319--351", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reinhard Muskens and Mehrnoosh Sadrzadeh. 2018. Static and dynamic vector semantics for lambda cal- culus models of natural language. Journal of Lan- guage Modelling, 6(2):319-351.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Jelmer van der Linde, Ineke Schuurman, Erik Tjong Kim Sang, and Vincent Vandeghinste", |
|
"authors": [ |
|
{ |
|
"first": "Gosse", |
|
"middle": [], |
|
"last": "Gertjan Van Noord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bouma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Frank Van Eynde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "De Kok", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Essential speech and language technology for Dutch", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "147--164", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gertjan van Noord, Gosse Bouma, Frank van Eynde, Daniel de Kok, Jelmer van der Linde, Ineke Schuur- man, Erik Tjong Kim Sang, and Vincent Vandeghin- ste. 2013. Large scale syntactic annotation of writ- ten dutch: Lassy. In Essential speech and language technology for Dutch, pages 147-164. Springer, Berlin, Heidelberg.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Using the output embedding to improve language models", |
|
"authors": [ |
|
{ |
|
"first": "Ofir", |
|
"middle": [], |
|
"last": "Press", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lior", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "157--163", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ofir Press and Lior Wolf. 2017. Using the output em- bedding to improve language models. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157-163.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Resource Logics: Prooftheoretical Investigations", |
|
"authors": [ |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Roorda", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dirk Roorda. 1991. Resource Logics: Proof- theoretical Investigations. Ph.D. thesis, Universiteit van Amsterdam.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "A relationship between arbitrary positive matrices and doubly stochastic matrices. The annals of mathematical statistics", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Sinkhorn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1964, |
|
"venue": "", |
|
"volume": "35", |
|
"issue": "", |
|
"pages": "876--879", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Sinkhorn. 1964. A relationship between arbitrary positive matrices and doubly stochastic matrices. The annals of mathematical statistics, 35(2):876-879.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Lectures on the Curry-Howard isomorphism", |
|
"authors": [ |
|
{ |
|
"first": "Morten", |
|
"middle": [], |
|
"last": "Heine S\u00f8rensen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pawel", |
|
"middle": [], |
|
"last": "Urzyczyn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Morten Heine S\u00f8rensen and Pawel Urzyczyn. 2006. Lectures on the Curry-Howard isomorphism. Else- vier.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Rationalizing text matching: Learning sparse alignments via optimal transport", |
|
"authors": [ |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Swanson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lili", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Lei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.13111" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyle Swanson, Lili Yu, and Tao Lei. 2020. Ra- tionalizing text matching: Learning sparse align- ments via optimal transport. arXiv preprint arXiv:2005.13111.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Sparse sinkhorn attention", |
|
"authors": [ |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Tay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dara", |
|
"middle": [], |
|
"last": "Bahri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liu", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Metzler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Da-Cheng", |
|
"middle": [], |
|
"last": "Juan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2002.11296v1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yi Tay, Dara Bahri, Liu Yang, Donald Metzler, and Da- Cheng Juan. 2020. Sparse sinkhorn attention. arXiv preprint arXiv:2002.11296v1.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Basic Proof Theory", |
|
"authors": [ |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Sjerp Troelstra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Schwichtenberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "43", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anne Sjerp Troelstra and Helmut Schwichtenberg. 2000. Basic Proof Theory, 2 edition, volume 43 of Cambridge Tracts in Theoretical Computer Science. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Supertagging with lstms", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Bisk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Musa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "232--237", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Yonatan Bisk, Kenji Sagae, and Ryan Musa. 2016. Supertagging with lstms. In Proceed- ings of the 2016 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 232-237.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "BERTje: A Dutch BERT model", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Wietse De Vries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arianna", |
|
"middle": [], |
|
"last": "Van Cranenburgh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommaso", |
|
"middle": [], |
|
"last": "Bisazza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Caselli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Malvina", |
|
"middle": [], |
|
"last": "Gertjan Van Noord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Nissim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1912.09582v1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. BERTje: A Dutch BERT model. arXiv preprint arXiv:1912.09582v1.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "A taste of linear logic", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Wadler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "International Symposium on Mathematical Foundations of Computer Science", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "185--210", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip Wadler. 1993. A taste of linear logic. In Interna- tional Symposium on Mathematical Foundations of Computer Science, pages 185-210. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Encoding word order in complex embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Benyou", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghao", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christina", |
|
"middle": [], |
|
"last": "Lioma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiuchi", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [ |
|
"Grue" |
|
], |
|
"last": "Simonsen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benyou Wang, Donghao Zhao, Christina Lioma, Qi- uchi Li, Peng Zhang, and Jakob Grue Simonsen. 2020. Encoding word order in complex embeddings. In International Conference on Learning Represen- tations.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Sequence-to-sequence learning as beam-search optimization", |
|
"authors": [ |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Wiseman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander M", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1296--1306", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sam Wiseman and Alexander M Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In Proceedings of the 2016 Confer- ence on Empirical Methods in Natural Language Processing, pages 1296-1306.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Sequence-based structured prediction for semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Chunyang", |
|
"middle": [], |
|
"last": "Xiao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Dymetman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Gardent", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1341--1350", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for se- mantic parsing. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1341- 1350.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Ccg supertagging with a recurrent neural network", |
|
"authors": [ |
|
{ |
|
"first": "Wenduan", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "250--255", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenduan Xu, Michael Auli, and Stephen Clark. 2015. Ccg supertagging with a recurrent neural network. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 2: Short Papers), pages 250-255.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Luke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1207.1420v1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luke S Zettlemoyer and Michael Collins. 2012. Learn- ing to map sentences to logical form: Structured classification with probabilistic categorial grammars. arXiv preprint arXiv:1207.1420v1.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "AMR parsing as sequence-tograph transduction", |
|
"authors": [ |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xutai", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "80--94", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1009" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019. AMR parsing as sequence-to- graph transduction. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 80-94, Florence, Italy. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "Dependency parsing as head selection", |
|
"authors": [ |
|
{ |
|
"first": "Xingxing", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianpeng", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "665--676", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency parsing as head selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 1, Long Papers, pages 665-676.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Shallow graph for the term of Figure 1.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Links for linear logic proof nets. Left/right: positive/negative implication. Center: axiom link. Proof net corresponding to the natural deduction derivation of", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "log2-transformed cumulative distributions of symbol and word lengths, counts of atomic formulas, matrices and matrix sizes from the portion of the dataset trained on.De voorafgaande stukjes over Wiskundige Omgangstaal hadden het vooral over het samenspel tussen woorden en formules.\"The preceding articles on the Mathematical Vernacular mainly focused on the interplay between words and formules.\"stukjes :: N))) suIn het wiskundig Nederlands worden vaak dezelfde fouten gemaakt als in het gewone Nederlands. \"The same mistakes are often made in mathematical Dutch as in common Dutch.\" (worden ::", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Model hyper-parameters", |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table/>" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table><tr><td>presents input-output</td></tr></table>" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Test set model performance broken down by sentence length.", |
|
"content": "<table><tr><td>Input Sentence</td></tr><tr><td>Proof Frame</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |