|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:10:10.673986Z" |
|
}, |
|
"title": "Shallow parsing of Portuguese texts annotated under Universal Dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Guilherme", |
|
"middle": [], |
|
"last": "Martiniano De Oliveira", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "FFCLRP University of S\u00e3o Paulo", |
|
"location": { |
|
"settlement": "Ribeir\u00e3o Preto", |
|
"region": "SP", |
|
"country": "Brazil" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Paulo", |
|
"middle": [ |
|
"Berlanga" |
|
], |
|
"last": "Neto", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "FFCLRP University of S\u00e3o Paulo", |
|
"location": { |
|
"settlement": "Ribeir\u00e3o Preto", |
|
"region": "SP", |
|
"country": "Brazil" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Evandro", |
|
"middle": [ |
|
"Eduardo" |
|
], |
|
"last": "Seron Ruiz", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "FFCLRP University of S\u00e3o Paulo", |
|
"location": { |
|
"settlement": "Ribeir\u00e3o Preto", |
|
"region": "SP", |
|
"country": "Brazil" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Shallow parsing is an intermediate step to many natural language processing tasks, such as information retrieval, question answering, and information extraction. An alternative to full-sentence parsing consists of segmentation and identifying phrases in sentences. Building such a parser for the Portuguese language is challenging considering the proposed formalism for grammar annotation, the Universal Dependency (UD). This paper addresses preliminary studies to overcome these barriers by annotating noun phrases tagged in UD.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Shallow parsing is an intermediate step to many natural language processing tasks, such as information retrieval, question answering, and information extraction. An alternative to full-sentence parsing consists of segmentation and identifying phrases in sentences. Building such a parser for the Portuguese language is challenging considering the proposed formalism for grammar annotation, the Universal Dependency (UD). This paper addresses preliminary studies to overcome these barriers by annotating noun phrases tagged in UD.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Assigning a complete syntactic structure to sentences based on grammar and a search strategy is the goal of full parsing. However, not all-natural language processing (NLP) applications require a complete syntactic analysis [9] . For many NLP tasks, such as named entity recognition [3] , sentiment analysis [15] and information retrieval [6] , recovering only a limited amount of syntactic information has proved to be a valuable technology for written and spoken language domains. This chunking strategy is generally known as partial parsing or shallow parsing. Shallow parsing can also serve as a baseline for full parsing [2] since it provides a foundation for other levels of analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 227, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 286, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 312, |
|
"text": "[15]", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 342, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 626, |
|
"end": 629, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This work focuses on extracting non-overlapping noun-phrase (NP) chunks, as proposed initially by Abney [2] , including nouns and proper nouns, among other classes of words that add more meaning to these two. Shallow parsers have already been developed for the constituency tree format [5] . Here we address the challenge of developing a parser to work under Brazilian-Portuguese texts annotated with the Universal Dependencies (UD) format, which is currently used in many NLP tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 107, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 289, |
|
"text": "[5]", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose constructing a model for recognizing noun phrases in input sentences through a neural network (NN) trained model, as proposed by S\u00f8gaard and Goldberg [16] . Some NN architectures are explicitly designed for long-term dependency learning, as written texts are. More specifically, our proposed shallow parser model processes text in three stages: 1) A learning corpus is built from partial parsed sentences. These sentences are extracted from the constituency version of Bosque corpus (version 8); 2) Sentences from this learning corpora are augmented with UD labels from the UD Portuguese-Bosque version 2.2. This revised UD treebank retains the additional tags for NP. Finally; 3) A neural network-based classification model is built from the learning corpus and applied to the original test subset from the UD Portuguese-Bosque, here called text corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 165, |
|
"text": "[16]", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Following, we briefly introduce the main related work to shallow parsing. In Section 3, we present the data and methods used. In Section 4, we report a summary of the experiments. Some considerations about the experiment's results are detailed in Section 5. Finally, in Section 6, we present some concluding remarks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The idea of text chunking was proposed in the seminal work of Steven Abney [2] , where he shows the correspondence of prosodic patterns to segments of constituency grammar trees. Following this intuition, Ramshaw and Marcus [14] developed the first known method for chunking sentences similarly to traditional grammar, creating templates and rules that described chunk formation. This method is known as Transformation-Based Learning (TBL).", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 78, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 228, |
|
"text": "[14]", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Alonso et al. [3] , Brants [6] and a team led by Hammerton [9] , among others, have also developed and applied shallow parsing to sentences annotated in the constituency tree format.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 17, |
|
"text": "[3]", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 27, |
|
"end": 30, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 59, |
|
"end": 62, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For the Portuguese language, we highlight the work of Barreto and his colleagues [4] with the TagShare project that embraces linguistic resources and tools for the shallow processing of Portuguese. These resources also include a 1M token corpus that has been accurately hand-annotated. Noun phrase chunking for English, Portuguese, and Hindi was proposed by Milidi\u00fa, Santos, and Duarte [12] . They applied Entropy Guided Transformation Learning (ETL), a machine learning strategy that combines decision trees and the classical TBL method. For the Portuguese, their proposed methodology achieved a precision of 92.62%, recall of 93.05%, and an F-measure of 92.84%.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 84, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 390, |
|
"text": "[12]", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Machine learned-based system was also used for a shallow parsing similar task called clause identification (CI). The Milidi\u00fa team extended their previous experiments to work likewise with CI [8] . They stated that CI is a phrase-chunklike (PCL) task. PCL consists of splitting a sentence into clauses. A clause is defined as a word sequence containing a subject and a predicate. Clause identification is a special kind of shallow parsing. They proposed an Entropy Guided Transformation Learning system that achieved an F-measure of 73.9%.", |
|
"cite_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 194, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Chunking received much attention, mostly when syntactic parsing was predominantly guided by constituency parsing, as it is the case for all previous works. With the UD grammar annotation surge, new methods need to be cre-ated. To our knowledge, Oph\u00e9lie Lacroix [11] was the first to show that UD annotated texts can also leverage the information provided by the constituency annotation. She grouped tokens to form NP chunks and used neural networks to train and test her method. She showed that it is possible to extract NP-chunks (noun phrases) from Universal Dependencies annotated texts with accuracy similar to traditional chunks operated under constituency trees. Her NP-chunking method achieved F-measure=89.9% when applied to dependency trees.", |
|
"cite_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 265, |
|
"text": "[11]", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our project aims to deduce NP-chunks from automatically UD annotated texts using a deep neural network (NN) approach. To lead our way to a feasible NN model for NP-chunking, we based our project on the work of S\u00f8gaard and Goldberg [16] . They showed that it is possible to utilize a multi-task learning architecture (MTL) with deep bi-directional recurrent neural networks (RNNs) to make syntactic chunking more precise, achieving an F-score=94.1%. They conclude that deep neural networks are a powerful tool for syntactic analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 235, |
|
"text": "[16]", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Using an NN-trained model, we aim to recognize and extract non-overlapping noun phrase (NP) chunks. As requested by a supervised learning approach, two corpora are needed: a) A learning corpora, and; b) A test corpora. The learning corpora used is composed of sentences from the Bosque corpus. Version 8.0 of the Bosque corpus 1 provides syntactic annotations of noun phrase chunks, under the 'NP' category, like other types of phrase chunks. As a constituency parsed corpus, no UD labels were provided for this version of the Bosque. UD labels were acquired from the UD Portuguese-Bosque version 2.2 2 . This UD treebank retained the original NP tags. The test corpora is composed of the test subset labeled sentences from the UD Portuguese-Bosque.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A classification engine (detailed in the next subsection, 3.2) is fed with the test corpora sentences. Each extracted sentence is analyzed accordingly to the knowledge acquired from the learning corpora. The following subsection describes the classification engine.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We define the noun phrase detection task as a sequence labeling problem. Given an input sentence composed of a sequence of tokens, w 1 , . . . , w n , the goal is the prediction of an output sequence y 1 , . . . , y n , y i \u2208 {1, . . . , |L|}, where L is a determined set of labels and y i is the respective label for w i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3.2" |
|
}, |
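
{

"text": "As a minimal illustration of this formulation (the helper below is a hypothetical sketch, not part of our pipeline), the label set L can be taken as the IOB tags {B-NP, I-NP, O}, so that a chunk annotation becomes one label per token:\n\n```python\n# Hypothetical helper: encode NP chunk spans as an IOB label sequence,\n# one label y_i per token w_i, with L = {\"B-NP\", \"I-NP\", \"O\"}.\ndef encode_iob(tokens, chunks):\n    labels = [\"O\"] * len(tokens)\n    for start, end in chunks:  # each chunk is a token span [start, end)\n        labels[start] = \"B-NP\"\n        for i in range(start + 1, end):\n            labels[i] = \"I-NP\"\n    return labels\n\ntokens = [\"um\", \"ex-libris\", \"de\", \"a\", \"noite\", \"algarvia\"]\nprint(encode_iob(tokens, [(0, 2), (3, 6)]))\n# -> ['B-NP', 'I-NP', 'O', 'B-NP', 'I-NP', 'I-NP']\n```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Method",

"sec_num": "3.2"

},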
|
{ |
|
"text": "We adopted an MTL architecture based on deep bi-directional recurrent neural networks (Bi-LSTM). The MTL can be understood as a layer-sharing method that helps models deal with different tasks simultaneously. Therefore, such intermediary representations allow different tasks to benefit from each other, stimulating the standard practical knowledge learning process. Considering the proposed sequence labeling model, we may, for example, experiment with part-of-speech (POS) tagging and syntactic chunking predictions for the same input sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Long Short-Term Memory (LSTM) [10] is a particular flavor of recurrent neural networks (RNN) widely applied in NLP tasks that enables long-term dependency learning. It may also be considered an instance that primarily aims to eliminate the vanishing gradient problem observed in the 'vanilla' RNN [7] since the latter cannot correctly handle long sequences of tokens [16] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 34, |
|
"text": "[10]", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 300, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 367, |
|
"end": 371, |
|
"text": "[16]", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Explained in a simple way, the LSTM architecture, consider RNNs as a blackbox abstraction. One may view LSTMs as an instance of a RNN interface. RNN may be seen as a function R \u0398 (w 1:n ) mapping a sequence of n input vectors w 1:n , w i \u2208 R in , to output vector h 1:n , h i \u2208 R out . Applying R \u0398 (w 1:n ) to all prefixes w 1:i , 1 \u2264 i \u2264 n of w 1:n , result in n output vectors h 1:n , where h 1:i is a summary of w 1:i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Layers of RNN are called deep RNN. A k\u2212layer RNN are a set of k RNN functions (RNN 1 , RNN 2 , . . . , RNN k ) feeding each other. A bidirectional RNN is composed of two RNNs, RNN F and RNN R , one that reads the sequence in one order, e.g., forward, and the other reading it in reverse.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 96, |
|
"text": "(RNN 1 , RNN 2 , .", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3.2" |
|
}, |
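
{

"text": "The following toy sketch (scalar states and hypothetical weights, not our actual model) illustrates both ideas: an RNN function whose i-th output summarizes the prefix w_1:i, and a bidirectional composition that pairs forward and backward summaries for each token:\n\n```python\nimport math\n\n# Toy Elman-style RNN: each output h_i summarizes the prefix w_1:i.\ndef simple_rnn(inputs, w_in=0.5, w_rec=0.8):\n    h, out = 0.0, []\n    for x in inputs:\n        h = math.tanh(w_in * x + w_rec * h)  # new state from input and previous state\n        out.append(h)\n    return out\n\n# Bidirectional RNN: RNN_F reads left-to-right, RNN_R right-to-left;\n# token i is represented by the pair of the two summaries.\ndef bi_rnn(inputs):\n    fwd = simple_rnn(inputs)\n    bwd = simple_rnn(inputs[::-1])[::-1]  # run reversed, then re-align\n    return list(zip(fwd, bwd))\n\nstates = bi_rnn([1.0, -1.0, 0.5])\nprint(len(states))  # one (forward, backward) pair per token -> 3\n```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Method",

"sec_num": "3.2"

},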
|
{ |
|
"text": "We employed an architecture-based Bi-LSTM following S\u00f8gaard and Goldberg reference work [16] . They show that this architecture can explore contextual information to process long sequences. Our proposed model comprises an embedding layer that feeds two hidden layers (forward and backward), composed of 300 units. The model was trained using back-propagation and Stochastic Gradient Descent (SGD), employing batch sizes of 64 with a learning rate of 0.01. The training process lasted ten epochs. All the hyper-parameters were defined empirically. The Bi-LSTM implementation was accomplished with the nlp-architecture 3 Python module [1].", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 92, |
|
"text": "[16]", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "3.2" |
|
}, |
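
{

"text": "As an illustration of this training regime only (mini-batch SGD with a batch size of 64, a learning rate of 0.01, and ten epochs), the hypothetical loop below trains a toy one-feature logistic classifier; our actual model is the Bi-LSTM implemented with nlp-architect:\n\n```python\nimport math, random\n\ndef train(data, lr=0.01, batch_size=64, epochs=10):\n    w, b = 0.0, 0.0\n    for _ in range(epochs):\n        random.shuffle(data)\n        for i in range(0, len(data), batch_size):\n            batch = data[i:i + batch_size]\n            gw = gb = 0.0\n            for x, y in batch:  # accumulate the gradient of the log-loss\n                p = 1.0 / (1.0 + math.exp(-(w * x + b)))\n                gw += (p - y) * x\n                gb += p - y\n            w -= lr * gw / len(batch)  # SGD step, averaged over the batch\n            b -= lr * gb / len(batch)\n    return w, b\n\nrandom.seed(0)\ndata = [(x / 10.0, 1 if x > 0 else 0) for x in range(-50, 50)]\nw, b = train(data)\nprint(w > 0)  # the learned weight separates the two classes -> True\n```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Method",

"sec_num": "3.2"

},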
|
{ |
|
"text": "We recall that although the Bosque corpus version 8 is composed of 18,804 sentences, only part of this corpus, 9,364 sentences, were annotated under UD, assembling the UD Portuguese-Bosque. Further, these 9,364 sentences are divided into three subsets: learning-train (8, 328) , dev (560) and test 476. Since not all the sentences have NP and some processing errors, such as bugs reading the XML file, only 8,585 sentences were used, corresponding to 91,6% of the 9,364. Table 1 below depicts the number of sentences used for both corpora, the learning (7,605) and the test (444) corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 268, |
|
"end": 271, |
|
"text": "(8,", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 272, |
|
"end": 276, |
|
"text": "328)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Based on the syntactic annotations provided by the Bosque corpus (v.8), we acknowledge noun phrase chunks searching for tokens inside the noun phrase ('NP' category) also considering the alongside adjectives ('adpj' category). Figure 1(a) illustrates such annotations in the Bosque SimTreeML format. After that, we annotate each token from every sentence with the respective labels from the Universal Dependencies (UD) annotation format. In parallel, NP chunks were labeled with the IOB (Inside-Outside-Beginning) format [14] . Figure 1(b) illustrates the final annotated example.", |
|
"cite_spans": [ |
|
{ |
|
"start": 521, |
|
"end": 525, |
|
"text": "[14]", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 238, |
|
"text": "Figure 1(a)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 528, |
|
"end": 539, |
|
"text": "Figure 1(b)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Following the work of Lacroix [11] , we aim to detect minimal, non-recursive noun phrases. For example, in the sentence \"O 7 e Meio\u00e9 um ex-libris da noite algarvia.\", we consider the following constituents: \"O 7 e Meio\", \"um ex-libris\" and \"a noite algarvia.\". Thus, we do not consider a single long noun phrase for \"um ex-libris da noite algarvia.\", but the aforesaid minimal version instead.", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 34, |
|
"text": "[11]", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "4" |
|
}, |
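
{

"text": "The chunk extraction step can be sketched as follows (a hypothetical helper, not our implementation): given a token sequence labeled in the IOB format, the minimal NP chunks are recovered by grouping each B-NP tag with the I-NP tags that follow it:\n\n```python\ndef iob_to_chunks(tokens, labels):\n    chunks, current = [], []\n    for tok, lab in zip(tokens, labels):\n        if lab == \"B-NP\":  # a new chunk starts\n            if current:\n                chunks.append(\" \".join(current))\n            current = [tok]\n        elif lab == \"I-NP\" and current:  # continue the open chunk\n            current.append(tok)\n        else:  # \"O\" closes any open chunk\n            if current:\n                chunks.append(\" \".join(current))\n            current = []\n    if current:\n        chunks.append(\" \".join(current))\n    return chunks\n\ntokens = [\"O\", \"7\", \"e\", \"Meio\", \"\u00e9\", \"um\", \"ex-libris\"]\nlabels = [\"B-NP\", \"I-NP\", \"I-NP\", \"I-NP\", \"O\", \"B-NP\", \"I-NP\"]\nprint(iob_to_chunks(tokens, labels))\n# -> ['O 7 e Meio', 'um ex-libris']\n```",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment",

"sec_num": "4"

},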
|
{ |
|
"text": "We assembled the Bosque data division in train-development-test subsets according to the work of Rademaker et al. [13] . See Table 1 . Later, we trained the model with the previously mentioned method in Section 3.2. Running the test against the full reserved test set, we obtained an F-measure of 85.1%. See Table 2 . We may also see in Figure 2 an example of a prediction outputted by the trained model that correctly identifies the noun phrases present in the input sentence provided, based on the IOB pattern. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 118, |
|
"text": "[13]", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 132, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 315, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 345, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "A rudimentary qualitative analysis of the outputs reveals that the model could detect the desired minimal noun phrase chunks performing slightly better on sentences with simple syntax. Even so, many of the longest and most complex sentences were also labeled correctly. Quantitatively, an F-measure of 85.1% is not a state-of-the-art achievement. Although this work is not comparable with the work of Lacroix [11] that achieved an F-measure of 89.9%, we considered our result an encouraging preliminary one. The Bi-LSTM classifier was used with its default parameters, suggesting that an optimized gradient boosting approach like XGBoost would provide more gratifying results. The obtained F-score establishes our approach as a feasible method for Portuguese text chunking.", |
|
"cite_spans": [ |
|
{ |
|
"start": 409, |
|
"end": 413, |
|
"text": "[11]", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Considerations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Although a comprehensively qualitative manual inspection of the errors shall be the subject of a prospective study, a casual manual search for minimal NP reveals some inconsistencies in the original POS tagging. Below we highlight the expression \"(P)presidente da (R)rep\u00fablica\", which should not be tagged as a minimal NP. One may see a possible disagreement between human annotators in the following expressions. . . .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Considerations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the previous examples the word \"Presidente\", with a capital 'P' is tagged as Proper Noun while \"presidente\" is tagged as Noun. Respectively \"Rep\u00fablica\" appears with two distinctive tags, Proper Noun and Noun. Originally, the expression in the first sentence, \"Presidente da Rep\u00fablica\" was tagged as a NP, while for the second and third sentences, \"presidente\" and \"Rep\u00fablica\" were tagged individually as NP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Considerations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "These last divergent examples encourage an extensive investigation, even if insufficient to justify the modest F-measure obtained. We note that the learning step might be impaired on comparable divergences due to the relatively small training dataset for the enormous variety of similar expressions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Considerations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We inferred that the method proposed has much potential for chunking detection that takes advantage of the characteristics presented in the UD pattern. We also believe that expanding learning corpora annotated under UD will foster more encouraging results. An accuracy over 95% and new methods to extract other types of chunks (prepositional, adverbial, and adjective) are some future works we are pursuing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://www.linguateca.pt/Floresta/corpus.html#download 2 https://github.com/UniversalDependencies/UD Portuguese-Bosque", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://intellabs.github.io/nlp-architect/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was carried out at the Center for Artificial Intelligence (C4AI-USP), with support by the S\u00e3o Paulo Research Foundation (FAPESP grant #2019/07665-4) and by the IBM Corporation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "7" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "NLP Architect, by Intel AI Laboratories", |
|
"authors": [], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.5281/zenodo.1477518" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "NLP Architect, by Intel AI Laboratories (Nov 2018), https://doi.org/10.5281/zenodo.1477518", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Principle-Based Parsing: Computation and Psycholinguistics, chap. Parsing by Chunks", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Abney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "257--278", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-94-011-3474-310" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abney, S.P.: Principle-Based Parsing: Computation and Psycholinguistics, chap. Parsing by Chunks, pp. 257-278. Springer Netherlands, Dordrecht (1992). https://doi.org/10.1007/978-94-011-3474-3 10", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "On the Use of Parsing for Named Entity Recognition", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Alonso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "G\u00f3mez-Rodr\u00edguez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Vilares", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Applied Sciences", |
|
"volume": "11", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alonso, M.A., G\u00f3mez-Rodr\u00edguez, C., Vilares, J.: On the Use of Parsing for Named Entity Recognition. Applied Sciences 11(3), 1090 (2021)", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Open resources and tools for the shallow processing of Portuguese: the TagShare project", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Barreto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Branco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Ferreira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mendes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Bacelar Do Nascimento", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Nunes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Silva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the V International Conference on Language Resources and Evaluation -LREC2006. European Language Resources Association", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barreto, F., Branco, A., Ferreira, E., Mendes, A., Bacelar do Nascimento, M.F., Nunes, F., Silva, J.R.: Open resources and tools for the shallow processing of Portuguese: the TagShare project. In: Proceedings of the V International Confer- ence on Language Resources and Evaluation -LREC2006. European Language Resources Association (2006)", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The Parsing System \"Palavras\": Automatic Grammatical Analysis of Portuguese in a Constraint Grammar Framework", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Bick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bick, E.: The Parsing System \"Palavras\": Automatic Grammatical Analysis of Portuguese in a Constraint Grammar Framework. Aarhus University Press (2000), https://books.google.com.br/books?id=ISUgDvPg7hcC", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Natural Language Processing in Information Retrieval. CLIN -Computational Linguistics in the Netherlands", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Brants", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "111", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brants, T.: Natural Language Processing in Information Retrieval. CLIN -Com- putational Linguistics in the Netherlands 111 (2003)", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Finding structure in time", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Elman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Cognitive Science", |
|
"volume": "14", |
|
"issue": "2", |
|
"pages": "179--211", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elman, J.L.: Finding structure in time. Cognitive Science 14(2), 179-211 (1990)", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Machine Learning Approach to Portuguese Clause Identification", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Fernandes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Santos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Milidi\u00fa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"A S" |
|
], |
|
"last": "Pardo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Branco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Klautau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Vieira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "De Lima", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Computational Processing of the Portuguese Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--64", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fernandes, E.R., dos Santos, C.N., Milidi\u00fa, R.L.: A Machine Learning Approach to Portuguese Clause Identification. In: Pardo, T.A.S., Branco, A., Klautau, A., Vieira, R., de Lima, V.L.S. (eds.) Computational Processing of the Portuguese Language. pp. 55-64. Springer Berlin Heidelberg, Berlin, Heidelberg (2010)", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Introduction to Special Issue on Machine Learning Approaches to Shallow Parsing", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hammerton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Armstrong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "2", |
|
"issue": "4", |
|
"pages": "551--558", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hammerton, J., Osborne, M., Armstrong, S., Daelemans, W.: Intro- duction to Special Issue on Machine Learning Approaches to Shallow Parsing. Journal of Machine Learning Research 2(4), 551-558 (2002).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Long Short-Term Memory", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Comput", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/neco.1997.9.8.1735" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hochreiter, S., Schmidhuber, J.: Long Short-Term Memory. Neural Comput. 9(8), 1735-1780 (Nov 1997). https://doi.org/10.1162/neco.1997.9.8.1735", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Investigating NP-chunking with Universal Dependencies for English", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Lacroix", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Second Workshop on Universal Dependencies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "85--90", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-6010" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lacroix, O.: Investigating NP-chunking with Universal Dependencies for English. In: Proceedings of the Second Workshop on Universal Dependencies (UDW 2018). pp. 85-90. Association for Computational Linguistics, Brussels, Belgium (Nov 2018). https://doi.org/10.18653/v1/W18-6010, https://aclanthology.org/W18-6010", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Phrase chunking using entropy guided transformation learning", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Milidi\u00fa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Santos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Duarte", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL-08: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "647--655", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Milidi\u00fa, R.L., dos Santos, C., Duarte, J.C.: Phrase chunking using entropy guided transformation learning. In: Proceedings of ACL-08: HLT. pp. 647-655 (2008)", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Universal Dependencies for Portuguese", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Rademaker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Chalub", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Real", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Freitas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Bick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "De Paiva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Fourth International Conference on Dependency Linguistics (Depling)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "197--206", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rademaker, A., Chalub, F., Real, L., Freitas, C., Bick, E., de Paiva, V.: Universal Dependencies for Portuguese. In: Proceedings of the Fourth International Conference on Dependency Linguistics (Depling). pp. 197-206. Pisa, Italy (September 2017), http://aclweb.org/anthology/W17-6523", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Text Chunking Using Transformation-Based Learning", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "157--176", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramshaw, L.A., Marcus, M.P.: Text Chunking Using Transformation-Based Learning, pp. 157-176. Springer Netherlands, Dordrecht (1999).", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Shallow parsing pipeline - Hindi-English code-mixed social media text", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Motlani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Shrivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mamidi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Sharma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1340--1345", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1159" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sharma, A., Gupta, S., Motlani, R., Bansal, P., Shrivastava, M., Mamidi, R., Sharma, D.M.: Shallow parsing pipeline - Hindi-English code-mixed social media text. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp. 1340-1345. Association for Computational Linguistics, San Diego, California (Jun 2016). https://doi.org/10.18653/v1/N16-1159, https://aclanthology.org/N16-1159", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Deep multi-task learning with low level tasks supervised at lower layers", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "231--235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S\u00f8gaard, A., Goldberg, Y.: Deep multi-task learning with low level tasks supervised at lower layers. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). pp. 231-235. Association for Computational Linguistics, Berlin, Germany (Aug 2016).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Noun phrase prediction produced by the proposed model.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "1", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">Bosque v.8 UD Portuguese-Bosque</td></tr><tr><td>subset</td><td>(constituency)</td><td>(original)</td></tr><tr><td>learning-train</td><td>7,605</td><td>8,328</td></tr><tr><td>dev</td><td>536</td><td>560</td></tr><tr><td>test</td><td>444</td><td>476</td></tr><tr><td>Total</td><td>8,585</td><td>9,364</td></tr></table>", |
|
"text": "Number of sentences for the used corpora.", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"3\">Precision Recall F-measure</td></tr><tr><td>84.8</td><td>85.3</td><td>85.1</td></tr></table>", |
|
"text": "Evaluation metrics for the Bi-LSTM network model in %.", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "1. . . . o governador do Rio e o Presidente[PROPN] da Rep\u00fablica[PROPN] chamaram o Ex\u00e9rcito. 2. . . . o presidente[NOUN] da Rep\u00fablica[NOUN] abriu uma fresta . . . 3. No caso de impedimento de o presidente[NOUN] da Rep\u00fablica[PROPN]", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |