|
{ |
|
"paper_id": "W17-0216", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:28:59.614775Z" |
|
}, |
|
"title": "SWEGRAM -A Web-Based Tool for Automatic Annotation and Analysis of Swedish Texts", |
|
"authors": [ |
|
{ |
|
"first": "Jesper", |
|
"middle": [], |
|
"last": "N\u00e4sman", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Uppsala University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Be\u00e1ta", |
|
"middle": [], |
|
"last": "Megyesi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Uppsala University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Palm\u00e9r", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Uppsala University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present SWEGRAM, a web-based tool for the automatic linguistic annotation and quantitative analysis of Swedish text, enabling researchers in the humanities and social sciences to annotate their own text and produce statistics on linguistic and other text-related features on the basis of this annotation. The tool allows users to upload one or several documents, which are automatically fed into a pipeline of tools for tokenization and sentence segmentation, spell checking, part-of-speech tagging and morpho-syntactic analysis as well as dependency parsing for syntactic annotation of sentences. The analyzer provides statistics on the number of tokens, words and sentences, the number of parts of speech (PoS), readability measures, the average length of various units, and frequency lists of tokens, lemmas, PoS, and spelling errors. SWEGRAM allows users to create their own corpus or compare texts on various linguistic levels.", |
|
"pdf_parse": { |
|
"paper_id": "W17-0216", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present SWEGRAM, a web-based tool for the automatic linguistic annotation and quantitative analysis of Swedish text, enabling researchers in the humanities and social sciences to annotate their own text and produce statistics on linguistic and other text-related features on the basis of this annotation. The tool allows users to upload one or several documents, which are automatically fed into a pipeline of tools for tokenization and sentence segmentation, spell checking, part-of-speech tagging and morpho-syntactic analysis as well as dependency parsing for syntactic annotation of sentences. The analyzer provides statistics on the number of tokens, words and sentences, the number of parts of speech (PoS), readability measures, the average length of various units, and frequency lists of tokens, lemmas, PoS, and spelling errors. SWEGRAM allows users to create their own corpus or compare texts on various linguistic levels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Although researchers in natural language processing have focused for decades on the development of tools for the automatic linguistic analysis of languages and state-of-the-art systems for linguistic analysis have achieved a high degree of accuracy today, these tools are still not widely used by scholars in the humanities and social sciences. The main reason is that many of the tools require programming skills to prepare and process texts. Furthermore, these tools are not linked in a straightforward way to allow the annotation and analysis on different linguistic levels that could be used easily in data-driven text research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we present SWEGRAM, a webbased tool for the automatic linguistic annotation and quantitative analysis of Swedish text, which allows researchers in the humanities and social sciences to annotate their own text or create their own corpus and produce statistics on linguistic and other text-related features based on the annotation. SWEGRAM requires no previous knowledge of text processing or any computer skills, and is available online for anyone to use. 1 We start with a brief overview of some important infrastructural tools for processing language data. In Section 3 we give an introduction to SWE-GRAM along with our goals and considerations in developing the web-based tool. Following this introductory section, we present the components used for the linguistic annotation on several levels, and the format of the data representation. We then give an overview of quantitative linguistic analysis, providing statistics on various linguistic features for text analysis. In Section 4 we describe a linguistic study of student essays to illustrate how SWEGRAM can be used by scholars in the humanities. Finally, in Section 5, we conclude the paper and identify some future challenges.", |
|
"cite_spans": [ |
|
{ |
|
"start": 470, |
|
"end": 471, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To make language technology applications available and useful to scholars of all disciplines, in particular researchers in the humanities and social sciences has attracted great attention in the language technology community in the past years. One aim is to create language resources and tools that are readily available for automatic linguistic analysis and can help in quantitative text analysis. Important resources are corpora and lexicons of various kinds. Basic tools usually include a tokenizer for the automatic segmentation of tokens and sentences, a lemmatizer for finding the base form of words, a part-of-speech (PoS) tagger to annotate the words with their PoS and morpholog-ical features, and a syntactic parser to annotate the syntactic structure of the sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Creating infrastructure for language analyis is not new and several projects have been focusing on developing on-line services for collection, annotation and/or analysis of language data with joint effort from the LT community. One of the important projects is the European Research Infrastructure for Language Resources and Technology CLARIN 2 with nodes in various countries, such as the Swedish SWE-CLARIN 3 . During the past years, we have seen a noticable increase in web-services allowing storage, annotation and/or analysis of data for various languages. Such example include LAP: The CLARINO Language Analysis Portal that was developed to allow large-scale processing service for many European languages (Kouylekov et al., 2014; ; WebLicht, a web-based tool for semi-annotation and visualization of language data (Hinrichs et al., 2010; CLARIN-D/SfS-Uni. T\u00fcbingen, 2012) ; The Australian project Alveo: Above and Beyond Speech, Language and Music infrastructure, a virtual lab for human communication science, for easy access to language resources that can be shared with workflow tools for data processing (Estival and Cassidy, 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 712, |
|
"end": 736, |
|
"text": "(Kouylekov et al., 2014;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 821, |
|
"end": 844, |
|
"text": "(Hinrichs et al., 2010;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 845, |
|
"end": 878, |
|
"text": "CLARIN-D/SfS-Uni. T\u00fcbingen, 2012)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1115, |
|
"end": 1142, |
|
"text": "(Estival and Cassidy, 2016)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Many language technology tools are readily available as off-the-shelf packages and achieve a high degree of accuracy, including the analysis of Swedish text. A pipeline in which standard annotation tools can be run on-line was recently established through SPARV (Borin et al., 2016) at the Swedish Language Bank (Spr\u00e5kbanken), 4 for the linguistic analysis of uploaded text, including tokenization, lemmatization, word senses, compound analysis, named-entity recognition, PoS and syntactic analysis using dependency structures. Users can access the annotation directly online, or download the results as an XML document. The goal is to provide linguistic annotation and allow further analysis using Spr\u00e5kbanken's own corpus search tool, Korp (Borin et al., 2012) . 5 Many tools are available for various types of text analysis. These include search programs for analyzing specific resources or corpora. Examples include Xaira, 6 an open source software pack-age that supports indexing and analysis of corpora using XML, which was originally developed for the British National Corpus; the BNCWeb (Hoffmann et al., 2008) , a web-based interface for the British National Corpus; or Korp (Borin et al., 2012) , for searches of Swedish corpora. Other popular tools are concordance programs, such as AntConc, 7 Webcorp 8 and ProtAnt (Anthony and Baker, 2015) , which also displays other text related features such as frequencies, collocations and keywords. WordSmith Tools (Scott, 2016) is also commonly used for text analysis, allowing the creation of word lists with frequencies, concordance lists, clusters, collocations and keywords.", |
|
"cite_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 282, |
|
"text": "(Borin et al., 2016)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 742, |
|
"end": 762, |
|
"text": "(Borin et al., 2012)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 765, |
|
"end": 766, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 920, |
|
"end": 926, |
|
"text": "Xaira,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 927, |
|
"end": 928, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1095, |
|
"end": 1118, |
|
"text": "(Hoffmann et al., 2008)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1184, |
|
"end": 1204, |
|
"text": "(Borin et al., 2012)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1327, |
|
"end": 1352, |
|
"text": "(Anthony and Baker, 2015)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1467, |
|
"end": 1480, |
|
"text": "(Scott, 2016)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Next, we describe SWEGRAM, a publicly available on-line tool for corpus creation, annotation and data-driven analysis of Swedish text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The main goal of SWEGRAM is to provide a simple web-based tool that allows linguistic annotation and quantitative analysis of Swedish text without any expert knowledge in natural language processing. SWEGRAM consists of two separate web-based applications: annotation and analysis. In the web-based interface, users can upload one or several text files of their choice and receive the annotated text(s), which can be sent for further text analysis, as specified by the user.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SWEGRAM", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The annotation includes tokenization and sentence segmentation, normalization in terms of spelling correction, PoS tagging including morphological features, and dependency parsing to represent the syntactic structure of the sentence. The annotation tool can be used to annotate individual texts or create a large collection of annotated texts, a corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SWEGRAM", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Once the data set is uploaded and annotated, the analyzer provides information about the number of tokens, words, and sentences; the distribution of PoS and morphological features; various readability measures; average length of different units (such as words, tokens, sentences); frequency lists and spelling errors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SWEGRAM", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In developing SWEGRAM, we wanted to create a tool with open source components that was freely accessible, where users can upload any text without it being saved by the system. Another important goal was to build a modular system in which the components involved can be easily changed as better models are developed, while individual components can be built on one another with a simple representation format that is easy to understand.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SWEGRAM", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The pipeline handling linguistic annotation is written mainly in Python, and the user interface was developed using regular HTML, CSS and JavaScript. The backend of the web interface was developed using the Django web framework. Next, we will describe the components included for annotation and analysis in more detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SWEGRAM", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In order to automatically process and annotate texts, we use state-of-the-art natural language processing tools trained on Swedish standard texts with a documented high degree of performance. The annotation pipeline is illustrated in Figure 1 . When a file is uploaded, the document is preprocessed by converting the file into a plain text format. The text is segmented into sentences and tokens by a tokenizer and misspelled tokens are corrected for spelling errors by a normalizer. The corrected text is run through a PoS tagger and lemmatizer to get the base form of the words and their correct PoS and morphological annotation given the context. Finally, the sentences are syntactically analyzed by a parsing module using dependency analysis. The following subsections contain descriptions of each of these modules.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 234, |
|
"end": 242, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Automatic Annotation", |
|
"sec_num": "3.1" |
|
}, |
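
{

"text": "The pipeline can be viewed as a chain of steps in which each step consumes the output of the previous one. The following minimal sketch, written in Python like the rest of SWEGRAM, is purely illustrative and does not reproduce the actual SWEGRAM code: the tokenizer is a naive whitespace and punctuation splitter, and the later steps are identity placeholders standing in for the tools described in the subsections below.\n\ndef tokenize(text):\n    # naive segmentation: split on whitespace and end a sentence at '.', '!' or '?'\n    sentences, current = [], []\n    for tok in text.split():\n        current.append(tok)\n        if tok[-1] in '.!?':\n            sentences.append(current)\n            current = []\n    if current:\n        sentences.append(current)\n    return sentences\n\ndef annotate(text):\n    # each step is a separate tool in the real pipeline; identity placeholders are used here\n    sentences = tokenize(text)\n    normalized = sentences   # spelling correction and compound merging\n    tagged = normalized      # PoS tagging, morphology and lemmatization\n    parsed = tagged          # dependency parsing\n    return parsed\n\nprint(annotate('Det regnar idag. Vi stannar inne.'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Automatic Annotation",

"sec_num": "3.1"

},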
|
{ |
|
"text": "In many cases, SWEGRAM does not require any preprocessing of documents. Users can upload documents in formats such as DOC, DOCX and RTF and the document is automatically converted into a plain text file encoded in UTF-8, which is what the annotation pipeline requires as input. The text is converted using unoconv, 9 which can handle any format that LibreOffice is able to import.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.1.1" |
|
}, |
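
{

"text": "A conversion step of this kind can be scripted in a few lines. The sketch below is an illustration under stated assumptions rather than SWEGRAM's own code: it assumes that unoconv and LibreOffice are installed and that unoconv's 'txt' export filter is available, in which case the converted file is written next to the input file with a .txt extension.\n\nimport pathlib\nimport subprocess\n\ndef to_plain_text(path):\n    # convert an uploaded document (DOC, DOCX, RTF, ...) into plain text with unoconv\n    subprocess.run(['unoconv', '-f', 'txt', path], check=True)\n    # read the resulting UTF-8 file, which is what the annotation pipeline expects as input\n    txt_path = pathlib.Path(path).with_suffix('.txt')\n    return txt_path.read_text(encoding='utf-8')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Preprocessing",

"sec_num": "3.1.1"

},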
|
{ |
|
"text": "Tokenization is used to separate the words from punctuation marks and segment the sentences. Two tokenizers were considered for SWEGRAM: the tokenizer written in Java and used in the PoS tagger Stagger (\u00d6stling, 2013) and the Svannotate tokenizer, originally developed for the Swedish Treebank (Nivre et al., 2008) . A comparison was made between these tokenizers, and only a few differences were found, since both tokenizers achieved similar results. However, while Svannotate is an independent, rule-based tokenizer written in Python, Stagger's tokenizer is built into the PoS tagger. We chose to include Svannotate for modularity and consistency in the pipeline since it is written in Python, like the rest of SWEGRAM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 294, |
|
"end": 314, |
|
"text": "(Nivre et al., 2008)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tokenization", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "In evaluating Svannotate to tokenize student writings (Megyesi et al., 2016) , errors that occurred were due in part to the inconsistent use of punctuation marks -for example, when a sentence does not always end with an appropriate punctuation mark, either because abbreviations are not always spelled correctly or a new sentence does not always begin with a capital letter.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 76, |
|
"text": "(Megyesi et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tokenization", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "Since the annotation pipeline is modular, users have the option of tokenizing a text, manually correcting it and then using the corrected version for the remaining steps.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tokenization", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "After tokenization and sentence segmentation, normalization is carried out in the form of spelling correction, including correction of erroneously split compounds. Since there is no open source, state-of-the-art normalizer that is readily available for Swedish, we used a modified version of Hist-Norm (Pettersson et al., 2013) for spelling correction. HistNorm was originally developed to transform words in historical texts that had substantial variation in possible spellings of their modern variant using either Levenshtein-based normalization or normalization based on statistical machine translation (SMT). When used on historical data, HistNorm achieves accuracy of 92.9% on Swedish text, based on SMT. For texts written by students, however, we found that the Levenshteinbased normalization gave better results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 327, |
|
"text": "(Pettersson et al., 2013)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization", |
|
"sec_num": "3.1.3" |
|
}, |
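
{

"text": "As a rough illustration of what Levenshtein-based normalization involves (this is a sketch, not the HistNorm implementation), a misspelled token can be replaced by the word in a reference lexicon with the smallest edit distance to it; the three-word lexicon below is a hypothetical example.\n\ndef levenshtein(a, b):\n    # classic dynamic-programming edit distance\n    prev = list(range(len(b) + 1))\n    for i, ca in enumerate(a, 1):\n        curr = [i]\n        for j, cb in enumerate(b, 1):\n            curr.append(min(prev[j] + 1,                  # deletion\n                            curr[j - 1] + 1,              # insertion\n                            prev[j - 1] + (ca != cb)))    # substitution or match\n        prev = curr\n    return prev[-1]\n\ndef normalize_token(token, lexicon):\n    # keep the token if it is already a known word, otherwise pick the closest lexicon entry\n    if token in lexicon:\n        return token\n    return min(lexicon, key=lambda w: levenshtein(token, w))\n\nprint(normalize_token('vekan', {'veckan', 'vecka', 'veta'}))  # prints veckan",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Normalization",

"sec_num": "3.1.3"

},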
|
{ |
|
"text": "One type of spelling error that occurs frequently in Swedish is erroneously split compounds, that is, compounds that are split into two or more words instead of written as one word. If we consider the Swedish compound kycklinglever (chicken liver), erroneously splitting the words would form the two words kyckling (chicken) and lever (is alive). This significantly alters the meaning of the phrase and will affect the final output of the annotation, making the statistical analysis less accurate. Addressing these errors can lead to an improved annotation performance. This problem is addressed using a rule-based system as described by (\u00d6hrman, 1998) . Because of the PoS tags rules for identifying split compounds for each token, PoS tagging has to be performed prior to correcting compounds. The text is then tagged again using the corrected compounds. We will return to how these types of corrections are represented while still keeping the original tokens in Section 3.1.6.", |
|
"cite_spans": [ |
|
{ |
|
"start": 638, |
|
"end": 652, |
|
"text": "(\u00d6hrman, 1998)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization", |
|
"sec_num": "3.1.3" |
|
}, |
|
{ |
|
"text": "Further analysis and improvement are needed to adapt this normalization tool to texts written in less standard Swedish for a higher degree of accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Normalization", |
|
"sec_num": "3.1.3" |
|
}, |
|
{ |
|
"text": "For the PoS and morphological annotation of the normalized texts, we use two types of annotation. One is based on the universal PoS tagset, 10 which consists of 17 main PoS categories: adjective, adposition, adverb, auxiliary, coordinating conjunction, determiner, interjection, noun, numeral, particle, pronoun, proper noun, punctuation, subordinating conjunction, symbol, verb and others with their morphological features. The other tagset used is the Stockholm-Ume\u00e5 Corpus tagset (Gustafson-Capkov\u00e1 and Hartmann, 2006) , which contains 23 main PoS categories.", |
|
"cite_spans": [ |
|
{ |
|
"start": 483, |
|
"end": 521, |
|
"text": "(Gustafson-Capkov\u00e1 and Hartmann, 2006)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpho-Syntactic Annotation", |
|
"sec_num": "3.1.4" |
|
}, |
|
{ |
|
"text": "We compared two commonly used PoS taggers for Swedish, HunPos (Hal\u00e1csy et al., 2007) and Stagger (\u00d6stling, 2013), and evaluated their performance on our test data. Both taggers used models trained on what is normally used as a standard corpus for Swedish, the Stockholm Ume\u00e5 Corpus (Gustafson-Capkov\u00e1 and Hartmann, 2006) . The accuracy of these taggers when trained and evaluated on SUC 2.0 is very similar, 95.9% for HunPos (Megyesi, 2008) and 96.6% for Stagger (\u00d6stling, 2013). Testing these taggers on the Uppsala Corpus of Student Writings (Megyesi et al., 2016) using SUC models, Stagger performed slightly better. Another advantage of Stagger is that it can also perform lemmatization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 84, |
|
"text": "(Hal\u00e1csy et al., 2007)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 282, |
|
"end": 320, |
|
"text": "(Gustafson-Capkov\u00e1 and Hartmann, 2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 425, |
|
"end": 440, |
|
"text": "(Megyesi, 2008)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 544, |
|
"end": 566, |
|
"text": "(Megyesi et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpho-Syntactic Annotation", |
|
"sec_num": "3.1.4" |
|
}, |
|
{ |
|
"text": "However, we ultimately decided to use a reimplementation of Stagger, the tagger called Efficient Sequence Labeler (efselab), 11 as the default tagger. This, like Stagger, uses an averaged perceptron learning algorithm, but Efselab has the ad-vantage that it performs PoS tagging significantly faster (about one million tokens a second) while achieving similar performance results as Stagger.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Morpho-Syntactic Annotation", |
|
"sec_num": "3.1.4" |
|
}, |
|
{ |
|
"text": "The final step in the annotation pipeline is the syntactic annotation in terms of dependency structure. We apply universal dependencies (UD) (Nivre et al., 2016) to mark syntactic structures and relations where one word is the head of the sentence, attached to a ROOT, and all other words are dependent on another word in a sentence. Dependency relations are marked between content words while function words are direct dependents of the most closely related content word. Punctuation marks are attached to the head of the clause or phrase to which they belong. The UD taxonomy distinguishes between core arguments such as subjects, direct and indirect objects, clausal complements, and other non-core or nominal dependents. For a detailed description of the dependency structure and annotation, we refer readers to the UD website. 12 To annotate the sentences with UD, we use MaltParser 1.8.1 (Nivre et al., 2006) , along with a model trained on the Swedish data with Universal Dependencies (UD). Since parser input needs to be in the form of the universal tagset, the tags need to be converted. This conversion is carried out using a script that comes with efselab, which converts SUC to UD.", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 161, |
|
"text": "(Nivre et al., 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 832, |
|
"end": 834, |
|
"text": "12", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 894, |
|
"end": 914, |
|
"text": "(Nivre et al., 2006)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Annotation", |
|
"sec_num": "3.1.5" |
|
}, |
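
{

"text": "MaltParser is distributed as a Java command-line tool, so the parsing step essentially amounts to invoking it with a pretrained model on a file of tagged sentences. The following call is only a sketch of such an invocation: the jar and file names and the configuration name swedish_ud are placeholders, since the exact model used by SWEGRAM is not specified here.\n\nimport subprocess\n\ndef parse_file(tagged_input, parsed_output, model='swedish_ud'):\n    # run MaltParser in parsing mode (-m parse) with the pretrained model selected by -c\n    subprocess.run(\n        ['java', '-jar', 'maltparser-1.8.1.jar',\n         '-c', model, '-i', tagged_input, '-o', parsed_output, '-m', 'parse'],\n        check=True)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Syntactic Annotation",

"sec_num": "3.1.5"

},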
|
{ |
|
"text": "Since UD was developed in our field of natural language processing only recently, it has not been used widely by scholars outside our community. In the near future, we will experiment with various types of syntactic representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Annotation", |
|
"sec_num": "3.1.5" |
|
}, |
|
{ |
|
"text": "In order to make it easy for scholars in the humanities to interpret the annotated texts, we chose the CoNLL-U tab-separated format 13 instead of an XML-based representation. Sentences consist of one or more lines of words where each line represents a single word/token with a series of 11 fields with separate tabs for various annotation types. Table 1 describes the fields that represent the analysis of each token. New sentences are preceded by a blank line, which marks sentence boundaries. Comment lines starting with hash (#) are also allowed and may be used for metadata information (such as sentence numbering) for the sentence following immediately. All annotations are encoded in plain text files in UTF-8.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 346, |
|
"end": 353, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Format", |
|
"sec_num": "3.1.6" |
|
}, |
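
{

"text": "Because the format is plain tab-separated text, it can be read with a few lines of standard Python. The sketch below is illustrative only; it assumes, as described above, that token lines carry tab-separated fields, that lines starting with # are comments and that a blank line marks a sentence boundary.\n\ndef read_sentences(path):\n    # yield one sentence at a time as a list of token field lists\n    sentence = []\n    with open(path, encoding='utf-8') as infile:\n        for line in infile:\n            line = line.rstrip('\\n')\n            if line.startswith('#'):\n                continue                    # comment/metadata line\n            if not line:\n                if sentence:\n                    yield sentence          # blank line ends the sentence\n                sentence = []\n            else:\n                sentence.append(line.split('\\t'))\n    if sentence:\n        yield sentence",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Format",

"sec_num": "3.1.6"

},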
|
{ |
|
"text": "In Table 2 an example is provided of an annotated text in the CoNLL-U format. In this example, the original text contains a spelling mistake, vekan, corrected as veckan in the column NORM, where the corrected form is analyzed. The example sentence also contains an erroneously split compound -Syd Korea which should be written as one word, Sydkorea. The corrected word is given the index numbers of the two original words, in this case 4-5, where the corrected version is analyzed linguistically while the original forms are left as they are without any further analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Format", |
|
"sec_num": "3.1.6" |
|
}, |
|
{ |
|
"text": "Text containing metadata has been an important factor in the development of SWEGRAM. Metadata containing information about the text such as the author's age, gender, geographic area or type of text can be parsed and used during analysis, allowing users to filter their texts based on the metadata provided, and produce statistics on the features of the particular text(s). The metadata should be represented in the format <feature1, feature2 ... featureN>. Development is currently under way to allow metadata of any type (defined by the user) to be used in annotation and analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Format", |
|
"sec_num": "3.1.6" |
|
}, |
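
{

"text": "Under the assumption that a metadata line literally has the comma-separated form <feature1, feature2 ... featureN> described above, it can be turned into a list of feature values as in the following illustrative snippet, which is not SWEGRAM's own parser; the example feature values are hypothetical.\n\ndef parse_metadata(line):\n    # strip the surrounding angle brackets and split the comma-separated feature values\n    inner = line.strip().lstrip('<').rstrip('>')\n    return [feature.strip() for feature in inner.split(',')]\n\nprint(parse_metadata('<girl, grade 9, expository>'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Format",

"sec_num": "3.1.6"

},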
|
{ |
|
"text": "The web-based annotation tool is illustrated in Figure 2 . Users can upload one or several texts and annotate them. Modularity has been an important factor in developing the annotation tool. Any module can be deactivated, which enables users to exclude some part of the annotation if they wish and use their own annotation instead. For example, users can upload a text that is already tokenized in order to annotate it with PoS and syntactic features. After tokenization, normalization can also be carried out in the form of spell checking and correction of erroneously split compound words, or a text that is already corrected can be uploaded. Similarly, users could correct the PoS annotation given by the tool and run the syntactic analyzer on the corrected PoS tagged data. Users are thus free to decide which particular tools are needed, and the subsequent linguistic annotation is based on corrected, normalized forms, which could help improve the performance of subsequent steps since corrected data are used.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 56, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Web-based Annotation Tool", |
|
"sec_num": "3.1.7" |
|
}, |
|
{ |
|
"text": "Each module may include several algorithms and models depending on the corpus data the models were trained on. We include the most frequently used models with the highest accuracy on standard Swedish, which were evaluated and published previously.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Web-based Annotation Tool", |
|
"sec_num": "3.1.7" |
|
}, |
|
{ |
|
"text": "Moreover, the pipeline is built in such a way that new, better analyzers can be plugged into the system. It is also possible to select different models for the PoS tagger and the syntactic parser, but currently only one model is provided for each, both based on Stockholm-Ume\u00e5 Corpus (SUC) 3.0 (Gustafson-Capkov\u00e1 and Hartmann, 2006) and previously evaluated with a documented high degree of accuracy. However, one restriction in choosing syntactic annotation (the parser and parser model) is that only the PoS model that the parser was trained on may be run during the PoS tagging module to get consistent annotation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 294, |
|
"end": 332, |
|
"text": "(Gustafson-Capkov\u00e1 and Hartmann, 2006)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Web-based Annotation Tool", |
|
"sec_num": "3.1.7" |
|
}, |
|
{ |
|
"text": "Another important factor was that the format should be readable and easy to understand so that users can manually examine the data annotated. The results are made available to users in the form of a downloadable plain text file encoded in UTF-8 or shown directly on the web-page. In contrast to formats like SGML or XML, the CoNLL-U format, which is tab-separated with one token per line and has various linguistic fields represented in various columns, is well suited for our purposes. The format with fields separated by tabs allows users to import their file in Excel or another tool of their choice to carry out further quantitative analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Web-based Annotation Tool", |
|
"sec_num": "3.1.7" |
|
}, |
|
{ |
|
"text": "Since the corpus format allows several types of annotation by including additional columns, users can easily choose between them based on their desires or choose to have all annotations available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Web-based Annotation Tool", |
|
"sec_num": "3.1.7" |
|
}, |
|
{ |
|
"text": "Users can upload one or several annotated texts for further quantitative analysis. Statistics are calculated and shown on several levels: for all texts, and if the text file is divided into several subtexts, for each of these. Figure 3 illustrates the start page of the quantitative analysis where information is given about the number of uploaded texts, words, tokens and sentences. The following features can be extracted automatically: number of tokens, words, sentences, texts and PoS; readability measures; average length of words, tokens, sentences, paragraphs and texts; frequency lists of tokens, lemmas and PoS; and spelling errors.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 235, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Automatic Quantitative Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The statistical calculations are divided into three sections: general statistics, frequencies and spelling errors. General statistics provide users with the option of including statistics for all PoS or for specific ones, readability metrics in terms of LIX, OVIX and the nominal ratio, and frequencies of word length above, below or at a specific threshold value.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Quantitative Analysis", |
|
"sec_num": "3.2" |
|
}, |
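
{

"text": "For reference, the readability measures mentioned above have standard definitions in the Swedish readability literature, sketched in plain Python below. This is an illustration rather than SWEGRAM's code, and the SUC tag groups used in the nominal ratio (nouns, prepositions and participles over pronouns, adverbs and verbs) are a common choice that may differ in detail from the actual implementation.\n\nimport math\n\ndef lix(words, n_sentences):\n    # LIX: average sentence length plus the percentage of words longer than six characters\n    long_words = sum(1 for w in words if len(w) > 6)\n    return len(words) / n_sentences + 100.0 * long_words / len(words)\n\ndef ovix(words):\n    # OVIX word variation index, computed from the number of tokens and unique types\n    tokens, types = len(words), len(set(words))\n    return math.log(tokens) / math.log(2 - math.log(types) / math.log(tokens))\n\ndef nominal_ratio(pos_tags):\n    # nominal ratio: (NN + PP + PC) / (PN + AB + VB) over SUC part-of-speech tags\n    nominal = sum(pos_tags.count(tag) for tag in ('NN', 'PP', 'PC'))\n    verbal = sum(pos_tags.count(tag) for tag in ('PN', 'AB', 'VB'))\n    return nominal / verbal",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Automatic Quantitative Analysis",

"sec_num": "3.2"

},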
|
{ |
|
"text": "The frequencies section can provide users with frequency lists for all texts and for individual texts. These can be based on lemmas or tokens, with or without delimiters. In addition, the frequency lists can be sorted based on frequencies or words (lemmas or tokens) in alphabetical order. The frequency lists can also be limited to specific parts of speech.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Quantitative Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The spelling errors section provides a list of spelling errors sorted by frequency, for all uploaded texts and for individual texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Quantitative Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In addition, users can generate statistics by filtering the texts using metadata information. In order to do so, the uploaded texts have to be marked up with metadata as described in Section 3.1.6. Given each field, the texts can be filtered based on the properties of the metadata. Examples of analyses are provided in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Quantitative Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Users can also specify whether the output should be delivered as a downloadable file separated by tabs, which can be imported into other programs such as Excel for further analysis, or shown directly in the browser.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Quantitative Analysis", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Separately from the statistics, users can also view the uploaded texts in their entirety and perform different types of searches in the annotated text. This includes searching for words, lemmas and PoS tags that either start with, end with, contain or exactly match a user-defined search query. The results are then printed and sorted according to what texts they appear in.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Quantitative Analysis", |
|
"sec_num": "3.2" |
|
}, |
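
{

"text": "The four match modes (starts with, ends with, contains, exact match) correspond directly to standard string operations, as in the following illustrative filter over annotated tokens; the column indices are assumed to follow the tab-separated representation described in Section 3.1.6, with the token form in the second column.\n\ndef matches(value, query, mode):\n    # mode is one of 'starts', 'ends', 'contains' or 'exact'\n    if mode == 'starts':\n        return value.startswith(query)\n    if mode == 'ends':\n        return value.endswith(query)\n    if mode == 'contains':\n        return query in value\n    return value == query\n\ndef search(sentences, query, mode='exact', column=1):\n    # sentences: lists of token field lists; column selects e.g. the form, lemma or PoS field\n    for sentence in sentences:\n        for token in sentence:\n            if matches(token[column], query, mode):\n                yield token",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Automatic Quantitative Analysis",

"sec_num": "3.2"

},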
|
{ |
|
"text": "In this section we will demonstrate some of the possibilities of using SWEGRAM to analyze student writing as part of the national test carried out by school children in Sweden. We concentrate on two texts which are interesting to compare because they have some features in common but also differ in terms of the age of the writers, with the difference being three school years. Without making use of SWEGRAMs capacity to analyze extensive data, we simply want to demonstrate some features included in the tool and what they can show about the characteristics of the two texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Study", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Essay D245, from the last year of compulsory school, and essay C381, from the final year of upper secondary school, both represent the expository genre. Both essays have also been used as examples, benchmarks, of essays receiving the highest grade in the guide for assessing national tests. Therefore these two essays are both considered to be good writing for their respective school year. However, there is a three-year difference in age between the students, and the writing assignments given in the tests are different. Text D245 discusses a general subject, love. The introduction of the essay, translated into English, is: Would you risk sacrificing your life for love, would you risk turning your entire existence upside down? The question is not easy to answer. Text C381 is an expository essay on the fairytale Sleeping Beauty, which makes the subject more specific. The introduction to this essay is translated into English as: Folk tales -anonymous stories that have been passed down from one generation to the next no matter where humans have lived /.../ Why is this old fairytale still widely read in society today?.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Study", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We compare the documents in terms of different features that can give information relevant to text quality and writing development, such as lexical variation and specification, word frequencies, nominal ratio and distribution of parts-of speech.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Study", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Looking at the lexical variation, the two texts are about the same length; D245 has 790 words and C381 has 713 words. But the average word length of the text from upper secondary school is higher than that of the text from compulsory school, as shown in Table 3 These results indicate that C381 may be more specified and lexically varied than D245, since longer words correlate with specification and variation in a text (Hultman and Westman, 1977; Vagle, 2005) . Lexical variation in a text can also be measured by Ovix, a word variation index (Hultman and Westman, 1977) . This measure shows the same tendencies: more variation in the text from upper secondary school.", |
|
"cite_spans": [ |
|
{ |
|
"start": 421, |
|
"end": 448, |
|
"text": "(Hultman and Westman, 1977;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 449, |
|
"end": 461, |
|
"text": "Vagle, 2005)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 545, |
|
"end": 572, |
|
"text": "(Hultman and Westman, 1977)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 261, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "User Study", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The lexicon can further be studied using SWE-GRAMs word frequency lists. In the list of nouns we look for long, compound nouns, since this is considered one feature of Swedish academic language. We find a number of these long words, several with more than 12 letters, in C 381. In D 245 there are a few compound nouns but none as long as this, which makes the lexicon of this text less specified and dense.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Study", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Nominal ratio is used to measure the nominality of the text. A high nominal ratio indicates high information load, whereas a low nominal ratio indicates a less dense text (Magnusson and Johansson Kokkinakis, 2011) . Texts with high nominality are often conceived as having more of a written style, whereas lower values tend to give the text a more colloquial character. The difference in the nominal ratio for the two texts is substantial, 0.55 in D245 and as high as 1.35 in C381, as shown in Table 3 . As a result, the essay from upper secondary school is considerably more nominal, has a higher information load and presumably has more of a written style than the essay from compulsory school. The surprisingly high value of the nominal ratio in C381 could partly be explained by the fact that there are several references to other works in this text, and these include long nominal phrases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 213, |
|
"text": "(Magnusson and Johansson Kokkinakis, 2011)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 494, |
|
"end": 501, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "User Study", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "C381 VB (17.38%) NN (19.82%) NN (12.33%) VB (13.34%) PN (11.66%) PP (11.27%) AB (10.87%) AB (7.12%) PP (8.30%) JJ (6.99%) A look at the parts of speech used most frequently shows that D245 is rich in verbs and pronouns, parts of speech that characterize a colloquial style; see Table 4 . C381, on the other hand, has high proportions of nouns and prepositions, which are important words in forming nominal phrases. Table 3 shows that there is a difference in the average sentence length in the two essays: 20.27 words in D245 and 25.73 in C381. Since longer sentences may contain more clauses than shorter ones, this result indicates that the syntax of the essay from upper secondary school may be more complex that in D 245. The hypothesis can be controlled by a frequency list of conjunctions and subjunctions, words that connect clauses. In D245 there are six different conjunctions and three different subjunctions, a total of nine connectives of this kind. In C381 there are eight different conjunctions and four subjunctions, a total of twelve different words. So the variation in connectives is more important in C381. The distribution of parts of speech also shows that conjunctions and subjunctions occur more frequently in C381 (KN + SN 7.12 %) than in D245 (KN + SN 5.54 %), which supports the hypothesis.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 285, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 415, |
|
"end": 422, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "D245", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In summary, the analysis shows considerable differences between the two essays, as regards the lexicon, distribution of PoS and syntax. However, the result should not be interpreted in relation to the writing competence or writing development shown in the student texts. The purpose is to show the potential of analyses made with SWEGRAM without using the appropriate amount of data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "D245", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We presented a web-based interface for the automatic annotation and quantitative analysis of Swedish. The web-based tool enables users to upload a file, which is then automatically fed into a pipeline of tools for tokenization and sentence segmentation, spell checking, PoS tagging and morpho-syntactic analysis as well as dependency parsing for syntactic annotation of sentences. Users can then send the annotated file for further quantitative analysis of the linguistically annotated data. The analyzer provides statistics about the number of tokens, words, sentences, number of PoS, readability measures, average length of various units (such as words, tokens and sentences), frequency lists of tokens, lemmas and PoS, and spelling errors. Statistics can be also extracted based on metadata, given that metadata are defined by the user.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The tool can be easily used for the analysis of a single text, for the comparison of several texts, or for the creation of an entire corpus of the user's choice by uploading a number of text documents. The tool has been used succesfully in the creation of the Uppsala Corpus of Student Writings (Megyesi et al., 2016) . Since SWEGRAM will be used to create corpora, the possibility of customizing the content and format of metadata is something that could be beneficial to users and will be implemented in the near future.", |
|
"cite_spans": [ |
|
{ |
|
"start": 295, |
|
"end": 317, |
|
"text": "(Megyesi et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The tools are readily available and can be used by anyone who is interested in the linguistic annotation of Swedish text. As better models for standard Swedish are presented, our intention is to include them in the interface along with the old models to allow comparative studies. Our priority for further improvement is the normalization tool since there is no readily available open source tool for automatic spelling and grammar correction of Swedish. In addition, we would like to implement a visualization tool of the linguistic analysis, especially syntax, which will also facilitate syntactic searches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "http://stp.lingfil.uu.se/swegram/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.clarin.eu/ 3 https://sweclarin.se/eng/about 4 https://spraakbanken.gu.se/sparv/ 5 https://spraakbanken.gu.se/korp/ 6 http://projects.oucs.ox.ac.uk/xaira/Doc/refman.xml", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.laurenceanthony.net/software.html 8 http://www.webcorp.org.uk/live/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/dagwieers/unoconv", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://universaldependencies.org/u/pos/ 11 https://github.com/robertostling/efselab", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://universaldependencies.org/ 13 http://universaldependencies.org/format.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This project was supported by SWE-CLARIN, a Swedish consortium in Common Language Resources and Technology Infrastructure (CLARIN) financed by the Swedish Research Council for the period 2014-2018.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "ProtAnt: A tool for analysing the prototypicality of texts", |
|
"authors": [ |
|
{ |
|
"first": "Laurence", |
|
"middle": [], |
|
"last": "Anthony", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Baker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "International Journal of Corpus Linguistics", |
|
"volume": "20", |
|
"issue": "3", |
|
"pages": "273--292", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laurence Anthony and Paul Baker. 2015. ProtAnt: A tool for analysing the prototypicality of texts. Inter- national Journal of Corpus Linguistics, 20(3):273- 292.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Korp -the corpus infrastructure of Spr\u00e5kbanken", |
|
"authors": [ |
|
{ |
|
"first": "Lars", |
|
"middle": [], |
|
"last": "Borin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Forsberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Roxendal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation, LREC 2012", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lars Borin, Markus Forsberg, and Johan Roxen- dal. 2012. Korp -the corpus infrastructure of Spr\u00e5kbanken. In Proceedings of the 8th Interna- tional Conference on Language Resources and Eval- uation, LREC 2012, page 474478.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Sparv: Spr\u00e5kbanken's corpus annotation pipeline infrastructure", |
|
"authors": [ |
|
{ |
|
"first": "Lars", |
|
"middle": [], |
|
"last": "Borin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Forsberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Hammarstedt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Ros\u00e9n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Schumacher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Sch\u00e4fer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "SLTC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lars Borin, Markus Forsberg, Martin Hammarstedt, Dan Ros\u00e9n, Anne Schumacher, and Roland Sch\u00e4fer. 2016. Sparv: Spr\u00e5kbanken's corpus annotation pipeline infrastructure. In SLTC 2016.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "We-bLicht: Web-Based Linguistic Chaining Tool. Online. Date Accessed", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Clarin-D/Sfs-Uni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "T\u00fcbingen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "CLARIN-D/SfS-Uni. T\u00fcbingen. 2012. We- bLicht: Web-Based Linguistic Chaining Tool. On- line. Date Accessed: 28 Mar 2017. URL https://weblicht.sfs.uni-tuebingen.de/ .", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Alveo: Above and beyond speech, language and music, a virtual lab for human communication science. Online. Date Accessed", |
|
"authors": [ |
|
{ |
|
"first": "Dominique", |
|
"middle": [], |
|
"last": "Estival", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Cassidy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dominique Estival and Steve Cassidy. 2016. Alveo: Above and beyond speech, language and music, a virtual lab for human communication science. Online. Date Accessed: 28 Mar 2017. URL http://alveo.edu.au/about/.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Hunpos: An open source trigram tagger", |
|
"authors": [ |
|
{ |
|
"first": "P\u00e9ter", |
|
"middle": [], |
|
"last": "Hal\u00e1csy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e1s", |
|
"middle": [], |
|
"last": "Kornai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Csaba", |
|
"middle": [], |
|
"last": "Oravecz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "209--212", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P\u00e9ter Hal\u00e1csy, Andr\u00e1s Kornai, and Csaba Oravecz. 2007. Hunpos: An open source trigram tagger. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07, pages 209-212, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Weblicht: Web-based LRT services for German", |
|
"authors": [ |
|
{ |
|
"first": "Erhard", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Hinrichs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Hinrichs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Zastrow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the ACL 2010 System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erhard W. Hinrichs, Marie Hinrichs, and Thomas Zas- trow. 2010. Weblicht: Web-based LRT services for German. In Proceedings of the ACL 2010 System Demonstrations, pages 25-29.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Corpus Linguistics with BNCweb -A Practical Guide", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Hoffmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Evert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ylva Berglund", |
|
"middle": [], |
|
"last": "Prytz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Hoffmann, Stefan Evert, Nicholas Smith, David Lee, and Ylva Berglund Prytz. 2008. Cor- pus Linguistics with BNCweb -A Practical Guide. Frankfurt am Main: Peter Lang.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Gymnasistsvenska", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Tor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Margareta", |
|
"middle": [], |
|
"last": "Hultman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ";", |
|
"middle": [], |
|
"last": "Westman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lund", |
|
"middle": [], |
|
"last": "Liberl\u00e4romedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tor G. Hultman and Margareta Westman. 1977. Gym- nasistsvenska. LiberL\u00e4romedel, Lund.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "LAP: The language analysis portal. Online. Date Accessed", |
|
"authors": [ |
|
{ |
|
"first": "Milen", |
|
"middle": [], |
|
"last": "Kouylekov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emanuele", |
|
"middle": [], |
|
"last": "Lapponi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Oepen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Velldal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikolay Aleksandrov", |
|
"middle": [], |
|
"last": "Vazov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Milen Kouylekov, Emanuele Lapponi, Stephan Oepen, Erik Velldal, and Nikolay Aleksandrov Vazov. 2014. LAP: The language analysis portal. Online. Date Accessed: 28 Mar 2017. URL http://www.mn.uio.no/ifi/english/research/projects/- clarino/.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Off-road laf: Encoding and processing annotations in nlp workflows", |
|
"authors": [ |
|
{ |
|
"first": "Emanuele", |
|
"middle": [], |
|
"last": "Lapponi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Velldal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Oepen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rune Lain", |
|
"middle": [], |
|
"last": "Knudsen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emanuele Lapponi, Erik Velldal, Stephan Oepen, and Rune Lain Knudsen. 2014. Off-road laf: Encoding and processing annotations in nlp workflows. In 9th edition of the Language Resources and Evaluation Conference (LREC).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Computer-Based Quantitative Methods Applied to First and Second Language Student Writing", |
|
"authors": [ |
|
{ |
|
"first": "Ulrika", |
|
"middle": [], |
|
"last": "Magnusson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sofie Johansson", |
|
"middle": [], |
|
"last": "Kokkinakis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "105--124", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ulrika Magnusson and Sofie Johansson Kokkinakis. 2011. Computer-Based Quantitative Methods Ap- plied to First and Second Language Student Writ- ing. In Inger K\u00e4llstr\u00f6m and Inger Lindberg, editors, Young Urban Swedish. Variation and change in mul- tilingual settings, pages 105-124. G\u00f6teborgsstudier i nordisk spr\u00e5kvetenskap 14. University of Gothen- burg.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The Uppsala corpus of student writings: Corpus creation, annotation, and analysis", |
|
"authors": [ |
|
{ |
|
"first": "Be\u00e1ta", |
|
"middle": [], |
|
"last": "Megyesi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jesper", |
|
"middle": [], |
|
"last": "N\u00e4sman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Palm\u00e9r", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3192--3199", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Be\u00e1ta Megyesi, Jesper N\u00e4sman, and Anne Palm\u00e9r. 2016. The Uppsala corpus of student writings: Cor- pus creation, annotation, and analysis. In Nico- letta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Tenth International Con- ference on Language Resources and Evaluation (LREC 2016), pages 3192-3199, Paris, France. Eu- ropean Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "The Open Source Tagger Hun-PoS for Swedish", |
|
"authors": [ |
|
{ |
|
"first": "Be\u00e1ta", |
|
"middle": [], |
|
"last": "Megyesi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Be\u00e1ta Megyesi. 2008. The Open Source Tagger Hun- PoS for Swedish. Uppsala University: Department of Linguistics and Philology.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Maltparser", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Nilsson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation, LREC '06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2216--2219", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Maltparser. In Proceedings of the 5th International Conference on Language Resources and Evaluation, LREC '06, pages 2216-2219.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Cultivating a Swedish treebank", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Be\u00e1ta", |
|
"middle": [], |
|
"last": "Megyesi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sofia", |
|
"middle": [], |
|
"last": "Gustafson-Capkov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Salomonsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bengt", |
|
"middle": [], |
|
"last": "Dahlqvist", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Resourceful Language Technology: A Festschrift in Honor of Anna S\u00e5gvall Hein", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "111--120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Be\u00e1ta Megyesi, Sofia Gustafson- Capkov\u00e1, Filip Salomonsson, and Bengt Dahlqvist. 2008. Cultivating a Swedish treebank. In Joakim Nivre, Mats Dahll\u00f6f, and Be\u00e1ta Megyesi, editors, Resourceful Language Technology: A Festschrift in Honor of Anna S\u00e5gvall Hein, pages 111-120.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University in Prague.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Felaktigt s\u00e4rskrivna sammans\u00e4ttningar", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lena\u00f6hrman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lena\u00d6hrman, 1998. Felaktigt s\u00e4rskrivna sam- mans\u00e4ttningar. Stockholm University, Department of Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Stagger: An open-source part of speech tagger for Swedish", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Robert\u00f6stling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Northern European Journal of Language Technology", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "1--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert\u00d6stling. 2013. Stagger: An open-source part of speech tagger for Swedish. Northern European Journal of Language Technology, 3:1-18.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Normalisation of historical text using contextsensitive weighted Levenshtein distance and compound splitting", |
|
"authors": [ |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Pettersson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Be\u00e1ta", |
|
"middle": [], |
|
"last": "Megyesi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 19th Nordic Conference of Computational Linguistics, NODAL-IDA '13", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eva Pettersson, Be\u00e1ta Megyesi, and Joakim Nivre. 2013. Normalisation of historical text using context- sensitive weighted Levenshtein distance and com- pound splitting. In Proceedings of the 19th Nordic Conference of Computational Linguistics, NODAL- IDA '13.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "WordSmith Tools Version 7. Stroud: Lexical Analysis Software", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Scott", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Scott, 2016. WordSmith Tools Version 7. Stroud: Lexical Analysis Software.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Siegfred Evensen, Frydis Hertzberg, and Wenche. Vagle, editors, Ungdommers skrivekompetanse, Bind 2. Norskexamen som tekst", |
|
"authors": [ |
|
{ |
|
"first": "Wenche", |
|
"middle": [], |
|
"last": "Vagle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Kjell Lars Berge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenche Vagle. 2005. Tekstlengde + ordlengdesnitt = kvalitet? Hva kvantitative kriterier forteller om avgangselevenas skriveprestasjoner. In Kjell Lars Berge, Siegfred Evensen, Frydis Hertzberg, and Wenche. Vagle, editors, Ungdommers skrivekom- petanse, Bind 2. Norskexamen som tekst. Univer- sitetsforlaget.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "Screenshot of the web-based annotation interface.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Screenshot of the web-based annotation interface.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "Automatic quantitative analysis.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"text": "FEATURE Description TEXT IDParagraph-sentence index, integer starting at 1 for each new paragraph and sentence TOKEN ID Token index, integer starting at 1 for each new sentence; may be a range for tokens with multiple words FORM Word form or punctuation symbol NORM Corrected/normalized token (e.g. in case of spelling error) LEMMA Lemma or stem of word form UPOS Part-of-speech tag based on universal part-of-speech tag XPOS Part-of-speech tag based on the Stockholm-Ume\u00e5 Corpus; underscore if not available", |
|
"content": "<table><tr><td>XFEATS UFEATS HEAD DEPREL DEPS MISC</td><td>List of morphological features for XPOS; underscore if not available List of morphological features for UPOS; underscore if not available Head of the current token, which is either a value of ID or zero (0) Dependency relation to the HEAD (root iff HEAD = 0) based on the Swedish Treebank annotation List of secondary dependencies (head-deprel pairs) Any other annotation</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"text": "Annotation representation format for each token and field.", |
|
"content": "<table><tr><td>TEXT ID ID FORM NORM LEMMA UPOS XPOS XFEATS 2.4 1 Jag Jag jag PRON PN UTR |SIN |DEF |SUB 2.4 2 var var vara VERB VB PRT |AKT 2.4 3 i i i ADP PP 2.4 4-5 Sydkorea Sydkorea PROPN PM NOM 2.4 4 Syd Syd 2.4 5 Korea Korea 2.4 6 f\u00f6rra f\u00f6rra f\u00f6rra ADJ JJ POS |UTR/NEU |SIN |DEF |NOM Case=Nom|Definite=Def|Degree=Pos|Number=Sing 7 UFEATS HEAD DEPREL DEPS MISC Case=Nom|Definite=Def|Gender=Com|Number=Sing 0 root I Mood=Ind|Tense=Past|VerbForm=Fin|Voice=Act 1 acl was 4-5 case in Case=Nom 2 nmod South Korea South Korea det last 2.4 7 vekan veckan vecka NOUN NN UTR |SIN |DEF |NOM Case=Nom|Definite=Def|Gender=Com|Number=Sing 4-5 nmod week 2.4 8 . . . PUNCT MAD 1 punct .</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "Example of the extended CoNLL-U shared task format for the sentence Jag var i Syd Korea f\u00f6rra vekan (I was in South Korea last week). It contains one misspelled word, veckan, and one erroneously split compound, Syd Korea -South Korea, which should be a single compound word in Swedish. Note that the MISC column here is used to provide English translations for this table.", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "Some measures from SWEGRAM.", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "The five most frequently occurring parts of speech.", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |