|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:11:17.819795Z" |
|
}, |
|
"title": "UDon2: a library for manipulating Universal Dependencies trees", |
|
"authors": [ |
|
{ |
|
"first": "Dmytro", |
|
"middle": [], |
|
"last": "Kalpakchi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "KTH Royal Institute of Technology Stockholm", |
|
"location": { |
|
"country": "Sweden" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Boye", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "KTH Royal Institute of Technology Stockholm", |
|
"location": { |
|
"country": "Sweden" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "UDon2 is an open-source library for manipulating dependency trees represented in the CoNLL-U format. The library is compatible with the Universal Dependencies. UDon2 is aimed at developers of downstream Natural Language Processing applications that require manipulating dependency trees on the sentence level (to complement other available tools geared towards working with treebanks). 1 import udon2 2 nodes = udon2.ConllReader.read_file(\"example.conll\") 3 sing = [obj for node in nodes for obj in node.select_by(\"deprel\", \"obj\") 4 if obj.has(\"feats\", \"Number\", \"Sing\")] UDon2 is an open-source library written in C++ with Python bindings, combining the speed of C++ and the flexibility and ease-of-use of Python. UDon2 is hosted on Github (the source code is available at https://github.com/udon2/udon2), and everyone is welcome to contribute. 2 Example use cases UDon2 operates on dependency trees for individual sentences. Preparing a raw text for downstream applications requires segmenting it into sentences and then parsing every sentence to get its dependency tree stored in CoNLL-U format. The result of reading a CoNLL-U file is an instance of the Node class This work is licensed under a Creative Commons Attribution 4.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "UDon2 is an open-source library for manipulating dependency trees represented in the CoNLL-U format. The library is compatible with the Universal Dependencies. UDon2 is aimed at developers of downstream Natural Language Processing applications that require manipulating dependency trees on the sentence level (to complement other available tools geared towards working with treebanks). 1 import udon2 2 nodes = udon2.ConllReader.read_file(\"example.conll\") 3 sing = [obj for node in nodes for obj in node.select_by(\"deprel\", \"obj\") 4 if obj.has(\"feats\", \"Number\", \"Sing\")] UDon2 is an open-source library written in C++ with Python bindings, combining the speed of C++ and the flexibility and ease-of-use of Python. UDon2 is hosted on Github (the source code is available at https://github.com/udon2/udon2), and everyone is welcome to contribute. 2 Example use cases UDon2 operates on dependency trees for individual sentences. Preparing a raw text for downstream applications requires segmenting it into sentences and then parsing every sentence to get its dependency tree stored in CoNLL-U format. The result of reading a CoNLL-U file is an instance of the Node class This work is licensed under a Creative Commons Attribution 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Universal Dependencies (UD) is a framework unifying ways of annotating grammar for different human languages (Nivre et al., 2020) . To date, the UD community has produced more than 150 treebanks in 90 languages and a number of UD-compatible tools for processing data. Most of the available tools focus on working with treebanks, e.g. annotating textual data, validating existing treebanks or making simple edits. However, many downstream Natural Language Processing (NLP) applications require researchers to manipulate individual dependency trees. For instance, finding all subordinate clauses in the sentence might help in performing text simplification, finding all objects connected to a verb in the passive form might be useful for creating a list of candidate referents for co-reference resolution, and being able to remove certain subtrees might assist in generating reading-comprehension questions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 129, |
|
"text": "(Nivre et al., 2020)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Some of those tasks are easy to achieve with some simple scripting, but such ad-hoc solutions become difficult to maintain over time. Furthermore, they tend to lack speed and hinder large-scale experimentation, since they are typically written in high-level programming languages in presence of time pressure. To aid the community in solving these tasks, we present UDon2 -a library for manipulating UD dependency trees optimized for querying. UDon2 has a user-friendly API allowing to perform routine tasks with only a couple of lines of code. For instance, finding all nominal objects in singular requires only a code snippet below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "representing a 'root' pseudonode of the dependency tree. The dependent of the 'root' pseudonode will be later referred to as a root word. In the section below, we present possible use cases along with the manipulations available for a generic Node instance n, and exemplify using the dependency tree in Figure 1 with its root word study being denoted as r. Figure 1 : A dependency tree for the sentence \"You should study these topics or you will fail the exam\", obtained using the ewt-model of Stanza package (Qi et al., 2020) and visualized using UDon2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 509, |
|
"end": 526, |
|
"text": "(Qi et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 303, |
|
"end": 311, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 365, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Each node n has a number of accessors and mutators for its word index, universal part-of-speech (POS) tag, language-specific POS tag, lemma, form, dependency relation with its head node, universal morphological features (FEATS) or any other annotation (MISC). Each accessor can be called as n.<prop> substituting <prop> for id, upos, xpos, lemma, form, deprel, feats and misc respectively. The last two will be referred to as key-value properties. Each mutator can be called as n.<prop> = val with the same values of <prop>. The parent node of n can be accessed by calling n.parent, and the children of n can be accessed by calling n.children. While mutator for a parent is available (by calling n.parent = n1), no direct mutator for children is. Instead, calling n.add child or n.remove child is required to modify the list of children.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Accessing basic properties", |
|
"sec_num": "2.1" |
|
}, |
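The interface above can be illustrated with a minimal pure-Python sketch. This is a hypothetical stand-in, not the actual UDon2 implementation (which is C++ with Python bindings); only the attribute and method names mirror the paper's description.

```python
# Minimal sketch of the node interface: plain attributes act as accessors
# and mutators, and the child list is modified via add_child/remove_child.
class Node:
    def __init__(self, id, form, lemma, upos, xpos="", deprel="", feats=None, misc=None):
        self.id, self.form, self.lemma = id, form, lemma
        self.upos, self.xpos, self.deprel = upos, xpos, deprel
        self.feats = feats or {}   # key-value property
        self.misc = misc or {}     # key-value property
        self.parent = None
        self._children = []

    @property
    def children(self):
        # read-only view; mutate through add_child/remove_child instead
        return tuple(self._children)

    def add_child(self, node):
        node.parent = self
        self._children.append(node)

    def remove_child(self, node):
        self._children.remove(node)
        node.parent = None

# accessors and mutators work as n.<prop> and n.<prop> = val
root = Node(0, "<root>", "", "")
study = Node(3, "study", "study", "VERB")
root.add_child(study)
study.deprel = "root"
```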
|
{ |
|
"text": "Let T n denote a subtree rooted at n. Calling n.get subtree text() will return a textual representation of T n . For instance, calling r.get subtree text() will return the whole sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Accessing basic properties", |
|
"sec_num": "2.1" |
|
}, |
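The semantics of get_subtree_text() can be sketched in a few lines of pure Python on a toy dict-based tree (an illustration of the described behaviour, not UDon2's actual code): collect the words of the subtree and join them in sentence order, i.e. by word index.

```python
# Gather (id, form) pairs of a subtree, then join forms sorted by word index.
def subtree_words(node):
    words = [(node["id"], node["form"])]
    for child in node["children"]:
        words.extend(subtree_words(child))
    return words

def get_subtree_text(node):
    return " ".join(form for _, form in sorted(subtree_words(node)))

# toy fragment "study these topics": "topics" depends on "study",
# and "these" depends on "topics"
these = {"id": 4, "form": "these", "children": []}
topics = {"id": 5, "form": "topics", "children": [these]}
study = {"id": 3, "form": "study", "children": [topics]}
```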
|
{ |
|
"text": "Comments and enhanced dependency relations are currently not supported, since those are typically not provided by existing dependency parsers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Accessing basic properties", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Multiword tokens are supported and can be accessed by calling n.multi word. If n belongs to any multiword token, an instance of udon2.MultiWordNode will be returned, otherwise the accessor will return None. Mutators for multi-word nodes are currently not available. Getting a textual representation of a subtree (by calling n.get subtree text()) accounts for the multiword nodes. Empty nodes 1 are currently ignored while reading CoNLL-U files.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiword and empty nodes", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Querying a dependency tree for a specific type of node is useful, for instance, for finding all relative clauses of a sentence, or finding the subject of a sentence. UDon2 allows issuing a variety of queries for selecting the nodes in T n :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Querying", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 having a property with a specified value, by calling n.select by(<prop>, <val>). Here, <prop> could be substituted for the same values as in the previous section, except key-value properties. <val> should be substituted for the desired value of the respective property. For instance, r.select by(\"upos\", \"VERB\") will return a list of Nodes corresponding to the verbs study and fail;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Querying", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 having specified key-value properties in the universal feature format 2 <key-val-str>, by calling n.select having(<prop>, <key-val-str>), where <prop> is one of feats or misc. For instance, the nodes for words You and you will be returned after calling r.select having(\"feats\", \"Case=Nom|Person=2|PronType=Prs\"));", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Querying", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 being direct children of n and having a specified non key-value property, by calling n.get by(<prop>, <val>).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Querying", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 having a specified chain of dependency relations, by calling n.select by deprel chain or n.get by deprel chain (if the requirement of being a direct child is added). For instance, r.select by deprel chain(\"obj.det\") will return a list of Nodes corresponding to the determiners these and the, whereas r.get by deprel chain(\"obj.det\") will return only the Node corresponding to the determiner these;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Querying", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 being identical to another node n', by calling n.select identical(n');", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Querying", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "\u2022 being identical to another node n' except for properties props, by calling n.select identical except(n', props) with props being a comma-separated string of property names (later referred to as a prop-string), e.g. pos, rel;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Querying", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "A number of simpler indicator queries to check whether a specified property is present are also available and described in our online documentation 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Querying", |
|
"sec_num": "2.3" |
|
}, |
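The querying semantics above can be sketched on a toy dict-based tree. The functions below only illustrate the described behaviour of select_by, select_having and get_by_deprel_chain; they are not the UDon2 implementations.

```python
# Toy re-implementation of three query types over a dict-based tree.
def descendants(node):
    for child in node["children"]:
        yield child
        yield from descendants(child)

def select_by(node, prop, val):
    # all nodes in the subtree whose (non-key-value) property equals val
    return [n for n in descendants(node) if n.get(prop) == val]

def select_having(node, prop, key_val_str):
    # nodes whose key-value property contains every pair of "K1=V1|K2=V2"
    pairs = dict(kv.split("=") for kv in key_val_str.split("|"))
    return [n for n in descendants(node)
            if all(n.get(prop, {}).get(k) == v for k, v in pairs.items())]

def get_by_deprel_chain(node, chain):
    # follow a chain of dependency relations through direct children only
    nodes = [node]
    for rel in chain.split("."):
        nodes = [c for n in nodes for c in n["children"] if c["deprel"] == rel]
    return nodes

these = {"deprel": "det", "upos": "DET", "feats": {}, "children": []}
topics = {"deprel": "obj", "upos": "NOUN",
          "feats": {"Number": "Plur"}, "children": [these]}
r = {"deprel": "root", "upos": "VERB", "feats": {}, "children": [topics]}
```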
|
{ |
|
"text": "Suppose we want, as a step in text simplification, to split all coordinate clauses in a sentence into separate sentences. This requires identifying the nodes corresponding to the roots of coordinate clauses, by using the querying functionality from the previous section. Each clause should then be converted to a separate dependency tree, and all coordinate conjunctions should be removed. UDon2 makes this possible via its n.prune(<rel>) and n.make root() functions, where rel corresponds to the chain of dependency relations pointing at the node to be pruned. To exemplify the pruning operation, r.prune(\"conj\") will result in a subtree corresponding to the sentence \"You should study these topics\". r.make root() function will create a root pseudonode and assign it to be a parent of r.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pruning", |
|
"sec_num": "2.4" |
|
}, |
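The prune and make_root operations can be sketched on the same kind of toy dict-based tree (an illustration of the described semantics, not the UDon2 implementation):

```python
# prune(<rel>): detach every child subtree reached via the deprel chain.
def prune(node, rel_chain):
    rels = rel_chain.split(".")
    def walk(n, i):
        if i == len(rels) - 1:
            n["children"] = [c for c in n["children"] if c["deprel"] != rels[i]]
        else:
            for c in n["children"]:
                if c["deprel"] == rels[i]:
                    walk(c, i + 1)
    walk(node, 0)

def make_root(n):
    # create a 'root' pseudonode and make it the parent of n
    return {"deprel": None, "form": "<root>", "children": [n]}

fail = {"deprel": "conj", "form": "fail", "children": []}
study = {"deprel": "root", "form": "study", "children": [fail]}
prune(study, "conj")        # drops the coordinate clause headed by "fail"
new_root = make_root(fail)  # "fail" becomes the root word of its own tree
```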
|
{ |
|
"text": "If the same tree is going to be used multiple times, destructive pruning might not be a viable option. In order to avoid copying trees, which might be a time-intensive (currently not implemented) operation, UDon2 allows ignoring individual nodes or subtrees by calling n.ignore(<label>) (n.ignore subtree(<label>)), which assigns an ignore label label to n (all nodes in a subtree induced by n). All ignored nodes (no matter the label) will be excluded for all the queries presented in the previous section and during calling n.get subtree text().", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pruning", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Reverting to the original state, possible by calling n.reset(<label>) (n.reset subtree(<label>)), will unignore only nodes with a matching ignore label. The <label> argument defaults to 0 for all mentioned methods. If all nodes should be reset (no matter the label), n.hard reset() or n.hard reset subtree() should be used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pruning", |
|
"sec_num": "2.4" |
|
}, |
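The ignore-label mechanics can be sketched as each node carrying a set of ignore labels that traversals skip over. This is an illustration only, not the UDon2 implementation; visible_forms is a hypothetical helper standing in for the queries and get_subtree_text.

```python
# Each node carries a set of ignore labels; traversal skips labeled nodes.
class N:
    def __init__(self, form):
        self.form = form
        self.children = []
        self._ignore_labels = set()

    def ignore(self, label=0):
        self._ignore_labels.add(label)

    def ignore_subtree(self, label=0):
        self.ignore(label)
        for c in self.children:
            c.ignore_subtree(label)

    def reset(self, label=0):
        self._ignore_labels.discard(label)  # unignores only a matching label

    def hard_reset(self):
        self._ignore_labels.clear()         # unignores no matter the label

    def visible_forms(self):
        out = [] if self._ignore_labels else [self.form]
        for c in self.children:
            out.extend(c.visible_forms())
        return out

study = N("study")
topics = N("topics")
study.children.append(topics)
topics.ignore(1)
after_ignore = study.visible_forms()
topics.reset(0)  # non-matching label: the node stays ignored
after_wrong_reset = study.visible_forms()
topics.hard_reset()
after_hard_reset = study.visible_forms()
```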
|
{ |
|
"text": "UDon2 is capable of visualizing the dependency tree and storing it as an SVG file. An example of such visualization is shown in Figure 1 and the code for visualizing a tree with a root node is presented below.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 136, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Visualization", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "It is non-trivial to represent dependency trees as features to use in machine learning contexts. One option was proposed by Moschitti (2006) in the form of convolution partial tree kernels that can be used with Support Vector Machines (Cortes and Vapnik, 1995) . In a nutshell, a partial tree kernel calculates the number of common tree structures (not only full subtrees) between two trees. Unfortunately, tree kernels cannot handle trees with labeled edges, which is why Moschitti (2006) applied kernels to dependency tree containing only lexicals. An alternative solution, proposed by Croce et al. (2011) and implemented in UDon2, is to re-format dependency trees to include the edge labels as separate nodes. Three possible formats were proposed, depending on the order of inclusion:", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 140, |
|
"text": "Moschitti (2006)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 260, |
|
"text": "(Cortes and Vapnik, 1995)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 489, |
|
"text": "Moschitti (2006)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 607, |
|
"text": "Croce et al. (2011)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformations and convolution tree kernels", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "\u2022 POS-tag Centered Tree (PCT) -each grammatical relation is added as the father of the POS-tag and a lexical as a child (transformation is possible by calling udon2.transform.to pct(node));", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformations and convolution tree kernels", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "\u2022 Grammatical Relation Centered Tree (GRCT) -each POS-tag is a child of a grammatical relation and a father of a lexical (transformation is possible by calling udon2.transform.to grct(node));", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformations and convolution tree kernels", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "\u2022 Lexical Centered Tree (LCT) -both a POS-tag and a grammatical relation are children of a lexical (transformation is possible by calling udon2.transform.to lct()).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformations and convolution tree kernels", |
|
"sec_num": "2.6" |
|
}, |
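The GRCT re-formatting, for instance, can be sketched in a few lines of pure Python: each edge label becomes its own node, with the POS tag as its child and the lexical below the POS tag. Trees are represented here as (label, children) tuples; this toy to_grct is a stand-in for udon2.transform.to_grct, not its actual code.

```python
# Re-format a dependency tree (dicts with deprel/upos/form/children)
# into a GRCT: deprel -> [POS -> [lexical], transformed children...].
def to_grct(node):
    pos_branch = (node["upos"], [(node["form"], [])])
    return (node["deprel"], [pos_branch] + [to_grct(c) for c in node["children"]])

these = {"deprel": "det", "upos": "DET", "form": "these", "children": []}
topics = {"deprel": "obj", "upos": "NOUN", "form": "topics", "children": [these]}
grct = to_grct(topics)
```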
|
{ |
|
"text": "In UDon2, a partial tree kernel can be calculated in any of the aforementioned formats by substituting a string tree format with any of PCT, GRCT or LCT in the code snippet below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformations and convolution tree kernels", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "1 from udon2.kernels import ConvPartialTreeKernel 2 # ptk_lambda and ptk_mu are decay factors as defined by Moschitti (2006) 3 kernel = ConvPartialTreeKernel(tree_format, ptk_lambda, ptk_mu) 4 # prints a number of common tree fragments between trees rooted at root1 and root2 5 print(kernel(root1, root2)) # root1 and root2 are udon2.Node instances", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 124, |
|
"text": "Moschitti (2006)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformations and convolution tree kernels", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "Currently available UD processing tools for Python are geared towards working with treebanks and making batch manipulations and edits. UDPipe (Straka and Strakov\u00e1, 2017 ) is a library written in C++ with bindings to other programming languages. UDPipe provides a trainable pipeline which performs sentence segmentation, tokenization, POS-tagging, lemmatization and dependency parsing. The library provides no built-in support for manipulations on dependency trees. A similar functionality is also provided by the Stanza package (Qi et al., 2020) . DepEdit (Peng and Zeldes, 2018 ) is a configurable tool for manipulating dependency trees in the CoNLL-U format. The manipulations are specified in the configuration file using regular expressions for selecting nodes of interest, and a custom syntax for specifying relations between the nodes, and actions to perform on the matched nodes. The tool is geared towards performing batch operations and thus operations like querying to get a list of matching nodes for performing further manipulations, getting a text of the subtree induced by the node or implementing convolution tree kernels are impossible to achieve, to the best of our knowledge.", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 168, |
|
"text": "(Straka and Strakov\u00e1, 2017", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 528, |
|
"end": 545, |
|
"text": "(Qi et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 556, |
|
"end": 578, |
|
"text": "(Peng and Zeldes, 2018", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Udapi (Popel et al., 2017) is one such framework providing the ability to parse dependency trees, visualize them, convert between different representation formats (CoNLL-U, SDParse and VISL-cg), applying batch queries and edits to treebanks, and validate the format and contents of treebanks. Udapi is available as a command line tool, and has APIs for Java, Python and Perl. One of the reviewers has brought to our attention that Udapi is capable of performing directly (or gives a possibility to implement) the same transformations as UDon2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 26, |
|
"text": "(Popel et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Two smaller packages, pyconll 4 and conllu 5 , provide an interface to the CoNLL-U annotation scheme without the possibility of visualization, but with a possibility to reimplement the same transformations as in UDon2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In order to compare the last three mentioned packages, we provide the benchmark results for UDon2 and Udapi in Table 1 on the same CoNLL-U file 6 as in (Popel et al., 2017) ran on the same machine having Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz on Ubuntu (x86 64) and Windows 10 (win32). sentences, 800k words). Memory is in MiB and all other benchmarks provide average time in seconds after 30 runs on the computer with Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz. Load refers to loading from CoNLL-U file, Save -to storing to the CoNLL-U file, Read -getting a form and a lemma for every node of every tree, Write -changing a deprel for every node of every tree, Text -computing a textual representation of a subtree induced by every root node of every tree, Relchain -finding nodes at the end of a relchain for every tree. The values with star indicate experiments with a standard deviation of more than 1 second.", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 172, |
|
"text": "(Popel et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 118, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Most of the current UD-compatible tools are focused on treebank developers, whereas UDon2 aims at helping researchers explore the use of dependency trees for downstream applications, and hence is optimized mostly for querying and interacting with CoNLL-U files. To the best of our knowledge, UDon2 is the first package providing the possibility to both perform manipulations on dependency trees, perform advanced transformations (such as GRCT, PCT or LCT), and compute convolution tree kernels. UDon2 provides a superior performance on the majority of the benchmarks, except for Read and Write. The reason is that these two benchmarks require using Python's for-loops for C++ objects, requiring a lot of type conversions between Python and C++. UDon2 tries to avoid this by offering various query methods for common tasks, where looping is done in C++ as well (e.g. Text, Relchain, Load and Save benchmarks), which brings evident performance gains. Optimizing UDon2 for working better with Python's loops is an ongoing work and contributions are welcome. We hope that UDon2 is going to aid researchers in experimenting with dependency trees, and that it will be expanded with the help of the UD community.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and conclusions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "https://universaldependencies.org/format.html#words-tokens-and-empty-nodes", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Providing support for other image formats is an ongoing work.2 https://universaldependencies.org/u/overview/morphology.html#features 3 https://udon2.github.io", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://pyconll.github.io/ 5 https://github.com/EmilStenstrom/conllu/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/UniversalDependencies/UD_Czech-PDT/raw/r1.2/cs-ud-train-l. conllu", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by Vinnova (Sweden's Innovation Agency) within project 2019-02997. We are also sincerely grateful to both reviewers for the incredibly useful comments (especially Reviewer 1 for the most thorough review we have ever seen). We would also like to thank Martin Popel for helpful discussions on the matter of benchmarking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Support-vector networks", |
|
"authors": [ |
|
{ |
|
"first": "Corinna", |
|
"middle": [], |
|
"last": "Cortes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Vapnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Machine learning", |
|
"volume": "20", |
|
"issue": "3", |
|
"pages": "273--297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine learning, 20(3):273-297.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Structured lexical similarity via convolution kernels on dependency trees", |
|
"authors": [ |
|
{ |
|
"first": "Danilo", |
|
"middle": [], |
|
"last": "Croce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Basili", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1034--1046", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Structured lexical similarity via convolution ker- nels on dependency trees. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1034-1046.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Efficient convolution kernels for dependency and constituent syntactic trees", |
|
"authors": [ |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "European Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "318--329", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alessandro Moschitti. 2006. Efficient convolution kernels for dependency and constituent syntactic trees. In European Conference on Machine Learning, pages 318-329. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Universal dependencies v2: An evergrowing multilingual treebank collection", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.10643" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Haji\u010d, Christopher D Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal dependencies v2: An evergrowing multilingual treebank collection. arXiv preprint arXiv:2004.10643.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "All roads lead to ud: Converting stanford and penn parses to english universal dependencies with multilayer annotations", |
|
"authors": [ |
|
{ |
|
"first": "Siyao", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Zeldes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "167--177", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siyao Peng and Amir Zeldes. 2018. All roads lead to ud: Converting stanford and penn parses to english universal dependencies with multilayer annotations. In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), pages 167-177.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Udapi: Universal api for universal dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Popel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zden\u011bk\u017eabokrtsk\u1ef3", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Vojtek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "96--101", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin Popel, Zden\u011bk\u017dabokrtsk\u1ef3, and Martin Vojtek. 2017. Udapi: Universal api for universal dependencies. In Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017), pages 96-101.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Stanza: A python natural language processing toolkit for many human languages", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuhao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuhui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Bolton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2003.07082" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. arXiv preprint arXiv:2003.07082.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe", |
|
"authors": [ |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Straka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jana", |
|
"middle": [], |
|
"last": "Strakov\u00e1", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "88--99", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Milan Straka and Jana Strakov\u00e1. 2017. Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88-99, Vancouver, Canada, August. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": {} |
|
} |
|
} |