{
"paper_id": "U07-1016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:08:46.760887Z"
},
"title": "Parsing Internal Noun Phrase Structure with Collins' Models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vadas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney NSW 2006",
"location": {
"country": "Australia"
}
},
"email": "[email protected]"
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney NSW 2006",
"location": {
"country": "Australia"
}
},
"email": "james\[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Collins' widely-used parsing models treat noun phrases (NPs) in a different manner to other constituents. We investigate these differences, using the recently released internal NP bracketing data (Vadas and Curran, 2007a). Altering the structure of the Treebank, as this data does, has a number of consequences, as parsers built using Collins' models assume that their training and test data will have structure similar to the Penn Treebank's. Our results demonstrate that it is difficult for Collins' models to adapt to this new NP structure, and that parsers using these models make mistakes as a result. This emphasises how important treebank structure itself is, and the large amount of influence it can have.",
"pdf_parse": {
"paper_id": "U07-1016",
"_pdf_hash": "",
"abstract": [
{
"text": "Collins' widely-used parsing models treat noun phrases (NPs) in a different manner to other constituents. We investigate these differences, using the recently released internal NP bracketing data (Vadas and Curran, 2007a). Altering the structure of the Treebank, as this data does, has a number of consequences, as parsers built using Collins' models assume that their training and test data will have structure similar to the Penn Treebank's. Our results demonstrate that it is difficult for Collins' models to adapt to this new NP structure, and that parsers using these models make mistakes as a result. This emphasises how important treebank structure itself is, and the large amount of influence it can have.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Collins' parsing models (Collins, 1999) are widely used in natural language processing (NLP), as they are robust and accurate for recovering syntactic structure. These models can be trained on a wide variety of domains and languages, such as biological text, Chinese and Arabic. However, Collins' models were originally built for the Penn Treebank (Marcus et al., 1993) , and as such, are predisposed to parsing not only similar text, but also similar structure.",
"cite_spans": [
{
"start": 24,
"end": 39,
"text": "(Collins, 1999)",
"ref_id": "BIBREF2"
},
{
"start": 348,
"end": 369,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper deals with the effects of assuming such a structure, after the Treebank has been altered. We focus on noun phrases (NPs) in particular, for two reasons. Firstly, there are a number of intricacies as part of Collins' model in this area. Indeed, a Collins-style parser uses a different model for generating NPs, compared to all other structures. Secondly, we can make use of our previous work in annotating internal NP structure (Vadas and Curran, 2007a) , which gives us a ready source of Penn Treebank data with an altered structure.",
"cite_spans": [
{
"start": 438,
"end": 463,
"text": "(Vadas and Curran, 2007a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using this extended corpus, we make a number of alterations to the model itself, in an attempt to improve the parser's performance. We also examine the errors made, focusing on the altered data in particular, and suggest ways that performance can be improved in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Next, we look at what effect the NP bracketing structure has. The existing Penn Treebank uses a generally flat structure (especially in NPs), but when internal NP brackets are introduced this is no longer the case. We determine what is the best representation to use for these new annotations, and also examine why that is.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, we experiment with the task of identifying apposition within NPs, using manually annotated data as a gold standard. Although we find that this is a relatively easy task, the parser's performance is very low. The reasons for why Collins' models are unable to recover this structure are then explored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The work described here raises questions about how researchers create NLP models, which may be for any task and not just parsing. Implicitly assuming that data will always retain the same structure can, as the results described here show, cause many problems for researchers in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present here a brief introduction to Collins' (1999) models, focusing in particular on how they generate NPs. All of the models use a Probablistic Context Free Grammar (PCFG), with the Penn Treebank training data used to define the grammar, and to estimate the probabilities of the grammar rules being used. The CKY algorithm is then used to find the optimal tree for a sentence.",
"cite_spans": [
{
"start": 40,
"end": 55,
"text": "Collins' (1999)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
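{
"text": "As a rough illustration of this estimation step, the following sketch (our own simplification, not code from the paper or the parser; the data structures are hypothetical) reads productions off a treebank and assigns each rule its relative frequency. Collins' models go further by lexicalising each rule and decomposing its generation, as described next.\n\nfrom collections import Counter, defaultdict\n\ndef estimate_rule_probs(productions):\n    # productions: iterable of (parent, tuple_of_child_labels) pairs,\n    # one per internal node of every training tree\n    productions = list(productions)\n    rule_counts = Counter(productions)\n    parent_counts = Counter(parent for parent, _ in productions)\n    probs = defaultdict(dict)\n    for (parent, children), count in rule_counts.items():\n        # P(children | parent) by relative frequency\n        probs[parent][children] = count / parent_counts[parent]\n    return probs\n\nprobs = estimate_rule_probs([\n    ('NP', ('DT', 'NN')),\n    ('NP', ('DT', 'NN')),\n    ('NP', ('NNP', 'NNP')),\n])\n# probs['NP'][('DT', 'NN')] == 2/3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},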
{
"text": "The grammar is also lexicalised, i.e. the rules are conditioned on the token and the POS tag of the head of the constituent being generated. Estimating probabilities over the grammar rule, the head and the head's POS is problematic because of sparse data problems, and so generation probabilities are broken down into smaller steps. Thus, instead of calculating the probability of the entire rule, we note that each rule can be framed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "\u00a2 \u00a3 \u00a5 \u00a7 \u00a9 \u00a3 \u00a7 # \u00a3 # \u00a7 ( \u00a3 \u00a5 \u00a7 1 # \u00a3 4 # \u00a7 1 7 \u00a3 4 7 \u00a7 (1) where ( is the head child, \u00a3 \u00a7 # \u00a3 # \u00a7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "are its left modifiers and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "1 # \u00a3 4 # \u00a7 1 7 \u00a3 4 7 \u00a7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "are its right modifiers. If we make independence assumptions between the modifiers, then using the chain rule yields the following equations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00a2 H \u00a3 ( P \u00a2 R 4 U V W Y \u00a5 \u00a7 (2) a b c # d d d \u00a2 g \u00a3 b \u00a3 b \u00a7 P \u00a2 R 4 U V W Y \u00a5 Y ( \u00a7 (3) a b c # d d d 7 \u00a2 u \u00a3 1 b \u00a3 4 b \u00a7 P \u00a2 R 4 U V W Y \u00a5 Y ( \u00a7",
"eq_num": "(4)"
}
],
"section": "Background",
"sec_num": "2"
},
{
"text": "So the head is generated first, then the left and right modifiers (conditioned only on the head 1 ) afterwards. Now this differs for base-NPs, which use a slightly different model. Instead of conditioning on the head, the current modifier is dependant on the previous modifier, resulting in a sort of bigram model. Formally, equations 3 and 4 above are changed as shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a b c # d d d \u00a2 g \u00a3 b \u00a3 b \u00a7 P \u00a2 R 4 U V W Y b x # \u00a3 b x # \u00a7 \u00a7 (5) a b c # d d d 7 \u00a2 u \u00a3 1 b \u00a3 4 b \u00a7 P \u00a2 R 4 U V W Y 1 b x # \u00a3 4 b x # \u00a7 \u00a7",
"eq_num": "(6)"
}
],
"section": "Background",
"sec_num": "2"
},
{
"text": "1 There are also distance measures, the subcategorisation frame, etc, but they are not relevant for this discussion as they do not affect NPs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "There are a few reasons given by Collins for this. Most relevant for this work, is that because the Penn Treebank does not fully bracket NPs, the head is unreliable. When generating crude in the NP crude oil prices, we would want to condition on oil, the true head. However, prices is the head that would be found. Using the special NP submodel thus results in the correct behaviour. As Bikel (2004) notes, the model is not conditioning on the previous modifier instead of the head, the model is treating the previous modifier as the head.",
"cite_spans": [
{
"start": 387,
"end": 399,
"text": "Bikel (2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
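{
"text": "The difference between the two modifier models can be made concrete with a small sketch (our own schematic simplification; cond_prob stands in for the parser's smoothed probability estimates, which are not reproduced here). Under the main model each modifier is conditioned on the true head, as in equations 3 and 4; under the base-NP submodel it is conditioned on the previously generated modifier, as in equations 5 and 6.\n\ndef score_modifiers(parent, head, modifiers, cond_prob, base_np=False):\n    # modifiers: the sister constituents generated to one side of the head\n    prob = 1.0\n    previous = head\n    for mod in modifiers:\n        if base_np:\n            # base-NP submodel: condition on the previous modifier,\n            # effectively treating it as the head\n            prob *= cond_prob(mod, parent, previous)\n        else:\n            # main model: condition on the true head\n            prob *= cond_prob(mod, parent, head)\n        previous = mod\n    return prob",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},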
{
"text": "The separate NP submodel also allows the parser to learn NP boundaries effectively, i.e. it is rare for words to precede the in an NP; and creates a distinct X-bar level, which Collins notes is helpful for the parser's performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In order to implement the separate base-NP submodel, a preprocessing step is taken wherein NP brackets that do not dominate any other nonpossessive NP nodes are relabelled as NPB. For consistency, an extra NP bracket is inserted around NPB nodes not already dominated by an NP. These NPB nodes are reverted before evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
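{
"text": "A sketch of this preprocessing step, on a toy (label, children) tree representation, is given below. It is our own illustrative reimplementation rather than Bikel's code, and it omits the possessive exception for brevity.\n\ndef dominates_np(node):\n    # true if any descendant of node is labelled NP\n    label, children = node\n    return any(\n        isinstance(child, tuple) and (child[0] == 'NP' or dominates_np(child))\n        for child in children\n    )\n\ndef insert_npb(node, parent_label=None):\n    # relabel base NPs as NPB, and add an NP level over any NPB\n    # whose parent is not already an NP\n    if not isinstance(node, tuple):\n        return node  # leaf token\n    label, children = node\n    new_children = [insert_npb(child, label) for child in children]\n    if label == 'NP' and not dominates_np(node):\n        base = ('NPB', new_children)\n        return base if parent_label == 'NP' else ('NP', [base])\n    return (label, new_children)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},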
{
"text": "In our previous work (Vadas and Curran, 2007a) , we annotated the full structure of NPs in the Penn Treebank. This means that the true head can be identified, which may remove the need to condition on the previous modifier. We will experiment with this in Section 3.2.",
"cite_spans": [
{
"start": 21,
"end": 46,
"text": "(Vadas and Curran, 2007a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Other researchers have also looked at the effect of treebank structure upon parser performance. Collins himself notes that binary trees would be a poor choice, as the parser loses some context sensitivity, and the distance measures become ineffective. He advocates one level of bracketing structure per Xbar level. Goodman (1997) explicitly converts trees to a binary branching format as a preprocessing step, in order to avoid these problems. Johnson (1998) finds that the performance of simple PCFGs can be improved through tree transformations, while Klein and Manning (2001) observe that some simple tree transforms can increase parser speed. The variation shown in these approaches, all to the same task, highlights the difficulty in identifying optimal tree stucture. The issue of treebank structure extends to other languages as well, and implies further difficulties when comparing between languages. K\u00fcbler (2005) investigates two German treebanks with different annotation schemes, and finds that certain properties, such as having unary nodes and flatter clauses, increase performance. Rehbein and van Genabith (2007) suggest that the treebank structure also affects evaluation methods.",
"cite_spans": [
{
"start": 315,
"end": 329,
"text": "Goodman (1997)",
"ref_id": "BIBREF3"
},
{
"start": 444,
"end": 458,
"text": "Johnson (1998)",
"ref_id": "BIBREF4"
},
{
"start": 554,
"end": 578,
"text": "Klein and Manning (2001)",
"ref_id": "BIBREF5"
},
{
"start": 909,
"end": 922,
"text": "K\u00fcbler (2005)",
"ref_id": "BIBREF6"
},
{
"start": 1097,
"end": 1128,
"text": "Rehbein and van Genabith (2007)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Treebank Structure",
"sec_num": "2.1"
},
{
"text": "We begin by analysing the performance of Collins' model, using the Vadas and Curran (2007a) data. This additional level of bracketing in the Penn Treebank consists of NML and JJP brackets to mark internal NP structure, as shown below:",
"cite_spans": [
{
"start": 67,
"end": 91,
"text": "Vadas and Curran (2007a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Internal NP Brackets",
"sec_num": "3"
},
{
"text": "(NP (NML (NN crude) (NN oil) ) (NNS prices) ) (NP (NN world) (NN oil) (NNS prices) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Internal NP Brackets",
"sec_num": "3"
},
{
"text": "In the first example, a NML bracket has been inserted around crude oil to indicate that the NP is leftbranching. In the second example, we do not explicitly add a bracket around oil prices, but the NP should now be interpreted implicitly as rightbranching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Internal NP Brackets",
"sec_num": "3"
},
{
"text": "The experiments are carried out using the Bikel (2004) implementation of the Collins (1999) parser. We use Sections 02-21 for training, and report labelled bracket precision, recall and F-scores for all sentences in Section 00.",
"cite_spans": [
{
"start": 42,
"end": 54,
"text": "Bikel (2004)",
"ref_id": "BIBREF0"
},
{
"start": 77,
"end": 91,
"text": "Collins (1999)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Internal NP Brackets",
"sec_num": "3"
},
{
"text": "Firstly, we compare the parser's performance on the original Penn Treebank to when it is retrained with the new NML and JJP bracketed version. Table 1 shows that the new brackets make parsing marginally more difficult overall (by about 0.5% in F-score). However, if we evaluate only the original structure, by excluding the new NML and JJP brackets, then we find that the F-score has dropped by only a negligible amount. This means that the drop in overall performance results from low accuracy on the new NML and JJP brackets. Bikel's parser does not come inbuilt with an expectation of NML or JJP nodes in the treebank, and so it is not surprising that these new labels cause problems. For example, head-finding for these constituents is undefined. Also, as we described previously, NPs are treated differently in Collins' model, and so changing their structure could have unexpected consequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Internal NP Brackets",
"sec_num": "3"
},
{
"text": "In an attempt to remove any complications introduced by the new labels, we relabelled NML and JJP brackets as NP and ADJP, and then retrained again. These are the labels that would be given if internal NP structure was originally bracketed with the rest of the Penn Treebank. This relabelleling means that the model does not have to discriminate between two different types of noun and adjective structure, and for this reason, we might expect to see an increase in performance. This approach is also easy to implement, and eliminates the need for any change to the parser itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Internal NP Brackets",
"sec_num": "3"
},
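{
"text": "The relabelling itself is a simple tree map, sketched below on a toy (label, children) tree representation (the helper names are our own, not those of the script used for the experiments).\n\nRELABEL = {'NML': 'NP', 'JJP': 'ADJP'}\n\ndef relabel(node):\n    if not isinstance(node, tuple):\n        return node  # leaf token\n    label, children = node\n    return (RELABEL.get(label, label), [relabel(child) for child in children])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Internal NP Brackets",
"sec_num": "3"
},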
{
"text": "The results in Table 2 show that this is not the case, as the overall F-score has dropped another 0.5%. The NML and JJP brackets cannot be evaluated directly in this experiment, but we can compare against the corpus without relabelling, and count correct bracketings whenever a test NP matches a gold NML. The same is done for ADJP and JJP brackets. This results in a precision of 100%, because whenever a NML or JJP node is seen, it has already been matched against the gold-standard. Also, some incorrect NP or ADJP nodes are in fact false NML or JJP nodes, but this difference cannot be recovered.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Internal NP Brackets",
"sec_num": "3"
},
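{
"text": "A sketch of this matching evaluation is shown below (our own reading of the procedure, with hypothetical data structures): the parser output contains only NP and ADJP labels, the gold standard retains NML and JJP, and a gold bracket counts as recovered when a parser bracket with the corresponding coarse label spans the same tokens. Precision is 100% by construction, since a bracket is only counted as NML or JJP after such a match.\n\nGOLD_TO_TEST = {'NML': 'NP', 'JJP': 'ADJP'}\n\ndef nml_jjp_recall(gold_brackets, test_brackets):\n    # brackets are sets of (label, start, end) tuples over token spans\n    gold = [b for b in gold_brackets if b[0] in GOLD_TO_TEST]\n    matched = sum(\n        1 for (label, start, end) in gold\n        if (GOLD_TO_TEST[label], start, end) in test_brackets\n    )\n    return matched / len(gold) if gold else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Internal NP Brackets",
"sec_num": "3"
},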
{
"text": "We carried out a visual inspection of the errors that were made in this experiment, but which hadn't been made when the NP and NML labels were distinct. It was noticable that many of these errors occurred when a company name or other entity needed to be bracketed, such as W.R. Grace in the example NP below: We conclude that the model was not able to generalise a rule that multiple tokens with the NNP POS tag should be bracketed. Even though NML brackets often follow this rule, NPs do not. As a result, the distinction between the labels should be retained, and we must change the parser itself to deal with the new labels properly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Internal NP Brackets",
"sec_num": "3"
},
{
"text": "The first and simplest change we made was to create head-finding rules for NML and JJP constituents. In the previous experiments, these nodes would be covered by the catch-all rule, which chooses the leftmost child as the head. This is incorrect in most NMLs, where the head is usually the rightmost child. We define NML and JJP rules in the parser data file, copying those used for NPs and ADJPs respectively. We also add to the rules for NPs, so that child NML and JJP nodes can be recursively examined, in the same way that NPs and ADJPs are. This change is not needed for other labels, as NMLs and JJPs only exist under NPs. We retrained and ran the parser again with this change, and achieve the results in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 712,
"end": 719,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Head-finding Rules",
"sec_num": "3.1"
},
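{
"text": "The spirit of this change can be seen in the simplified, hypothetical head-rule table below (the real rules live in the parser's data file and are considerably more detailed): NML copies the NP rule, JJP copies the ADJP rule, and NML and JJP are added to the labels that an NP rule may select as its head child.\n\nHEAD_RULES = {\n    'NP':   ('rightmost', {'NN', 'NNS', 'NNP', 'NNPS', 'NX', 'NP', 'NML'}),\n    'NML':  ('rightmost', {'NN', 'NNS', 'NNP', 'NNPS', 'NX', 'NP', 'NML'}),\n    'ADJP': ('rightmost', {'JJ', 'JJR', 'JJS', 'ADJP', 'JJP'}),\n    'JJP':  ('rightmost', {'JJ', 'JJR', 'JJS', 'ADJP', 'JJP'}),\n}\n\ndef find_head(label, child_labels):\n    direction, candidates = HEAD_RULES.get(label, ('leftmost', set()))\n    indices = range(len(child_labels))\n    if direction == 'rightmost':\n        indices = reversed(indices)\n    for i in indices:\n        if child_labels[i] in candidates:\n            return i\n    return 0  # catch-all rule: leftmost child",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head-finding Rules",
"sec_num": "3.1"
},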
{
"text": "Once again, we are surprised to find that the Fscore has been reduced, though only by 0.06% overall in this case. This drop comes chiefly from the NML and JJP brackets, whose performance has dropped by about 2%. As before, we scanned the errors in search of an explanation; however, there was no readily apparent pattern. It appears that conditioning on the incorrect head is simply helpful when parsing some sentences, and instances where the correct head gives a better result are less frequent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Head-finding Rules",
"sec_num": "3.1"
},
{
"text": "The next alteration to the parser is to turn off the base-NP submodel. Collins (1999, page 179) explains that this separate model is used because the Penn Treebank does not fully annotate internal NP structure, something that we have now done. Hopefully, with these new brackets in place, we can remove the NP submodel and perhaps even improve performance in doing so. Figure 1 : An unlikely structure",
"cite_spans": [
{
"start": 71,
"end": 95,
"text": "Collins (1999, page 179)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 369,
"end": 377,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Base-NP Submodel",
"sec_num": "3.2"
},
{
"text": "We experimented with three different approaches to turning off the base-NP model. All three techniques involved editing the parser code:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Base-NP Submodel",
"sec_num": "3.2"
},
{
"text": "1. Changing the R U \u00a2 \u00a3 \u00a7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Base-NP Submodel",
"sec_num": "3.2"
},
{
"text": "method to always return false. This means that the main model, i.e. equations 3 and 4 are always used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Base-NP Submodel",
"sec_num": "3.2"
},
{
"text": "NPB nodes. This alteration will have the same effect as the one above, and will also remove the distinction between NP and NPB nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Removing the preprocessing step that creates",
"sec_num": "2."
},
{
"text": "3. Changing the \u00a2 \u00a3 \u00a7 method to return true for NMLs. This will affect which NPs are turned into NPBs during the preprocessing step, as NPs that dominate NMLs will no longer be basal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Removing the preprocessing step that creates",
"sec_num": "2."
},
{
"text": "The third change does not turn the base-NP model off as such, but it does affect where it functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Removing the preprocessing step that creates",
"sec_num": "2."
},
{
"text": "The results are in Table 4 , and in all cases the overall F-score has decreased. In the 1st change, to",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Removing the preprocessing step that creates",
"sec_num": "2."
},
{
"text": "R U \u00a2 \u00a3 \u00a7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Removing the preprocessing step that creates",
"sec_num": "2."
},
{
"text": ", performance on NML and JJP brackets has actually increased by 3.78% F-score, although the original structure is almost 10% worse. The 2nd change, to the preprocessing step, results in a much smaller loss to the original structure, but also not as big an increase on the internal NP brackets. The 3rd change, to \u00a2 \u00a3 \u00a7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Removing the preprocessing step that creates",
"sec_num": "2."
},
{
"text": ", is most notable for the large drop in performance on the internal NP structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Removing the preprocessing step that creates",
"sec_num": "2."
},
{
"text": "There are a few reasons for these results, which demonstrate the necessity of the base-NP submodel. Collins (1999, 8.2.2) Table 5 : Error analysis structures such as that in Figure 1 , which never occur in the Treebank, are given too high a probability. The parser needs to know where NPs will not recurse anymore (when they are basal), so that it can generate the correct flat structure. Furthermore, the 3rd change effectively treats NP and NML nodes as equivalent, and we have already seen that this is not true.",
"cite_spans": [
{
"start": 100,
"end": 121,
"text": "Collins (1999, 8.2.2)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 5",
"ref_id": null
},
{
"start": 174,
"end": 182,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Removing the preprocessing step that creates",
"sec_num": "2."
},
{
"text": "So far, all our changes have had negative results. We need to look at the errors being made by the parser, so that any problems that appear can be solved. Accordingly, we categorised every NML and JJP error through manual inspection. The results of this analysis are shown in Table 5 , together with examples of the errors being made. Only relevant brackets and labels are shown in the examples, while the final column describes whether or not the particular bracketing shown is correct.",
"cite_spans": [],
"ref_spans": [
{
"start": 276,
"end": 283,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "3.3"
},
{
"text": "The most common bracketing error results in a modifier being attached to the wrong head. In the example, because there is no bracket around lung cancer, there is a dependency between lung and deaths, instead of lung and cancer. We can further divide these errors into general NML and JJP cases, and instances where the error occurs inside a company name or in a person's title.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "3.3"
},
{
"text": "These errors occur because the ngrams that need to be bracketed simply do not exist in the training data. Looking for each of the 142 unique ngrams that were not bracketed, we find that 93 of them do not occur in Sections 02-21 at all. A further 17 of the ngrams do occur, but not as constituents, which would make reaching the correct decision even more difficult for the parser. In order to fix these problems, an outside source of information must be consulted, as the lexical information is currently not available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "3.3"
},
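{
"text": "The lookup behind these counts is straightforward; a sketch (with hypothetical data structures, not the script we actually used) is given below.\n\ndef ngram_status(ngram, train_sentences, train_constituents):\n    # train_sentences: token lists for Sections 02-21\n    # train_constituents: set of token tuples that occur as bracketed constituents\n    n = len(ngram)\n    seen = any(\n        tuple(sentence[i:i + n]) == ngram\n        for sentence in train_sentences\n        for i in range(len(sentence) - n + 1)\n    )\n    if not seen:\n        return 'unseen'  # 93 of the 142 missed n-grams\n    if ngram in train_constituents:\n        return 'seen as constituent'\n    return 'seen, never a constituent'  # a further 17 n-grams",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "3.3"
},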
{
"text": "The next largest source of errors is mislabelling the bracket itself. In particular, distinguishing between using NP and NML labels, as well as ADJP and JJP, accounts for 75 of the 92 errors. This is not surprising, as we noted during the preparation of the corpus that the labels of some NPs were inconsistant (Vadas and Curran, 2007a) . The previous relabelling experiment suggests that we should not evaluate the pairs of labels equally, meaning that the best way to fix these errors would be to change the training data itself. This would require alterations to the original Penn Treebank brackets, which is not feasible here.",
"cite_spans": [
{
"start": 311,
"end": 336,
"text": "(Vadas and Curran, 2007a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "3.3"
},
{
"text": "Conjunctions are another significant source of errors, and are quite a difficult problem. This is because coordinating multi-token constituents requires brackets around each of the constituents, as well as a further bracket around the entire conjunction. Getting just a single decision wrong can mean that a number of these brackets are in error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "3.3"
},
{
"text": "Another notable category of errors arises from possessive NPs, which always have a bracket placed around the possessor in our annotation scheme. The parser is not very good at replicating this pattern, perhaps because these constituents would usually not be bracketed if it weren't for the possessive. In particular, NML nodes beginning with a determiner are rare, only occurring when a possessive follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "3.3"
},
{
"text": "The parser also has difficulty in replicating the constituents around speech marks and brackets. We suspect that this is due to the fact that Collins' model does not generate punctuation as it does other constituents. There is also less need for speech marks and brackets to be correct, as the standard evaluation does not find an error when they are placed in the wrong constituent. The justification for this is that during the annotation process, they were given the lowest priority, and are thus inconsistant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "3.3"
},
{
"text": "There are a number of NML and JJP brackets in the parser's output that are clearly incorrect, either because they define right-branching structure (which is not bracketed explicitly) or because they dominate only a single token. Single token NMLs exist only in conjunctions, but unfortunately the parser is too liberal with this rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "3.3"
},
{
"text": "The final major group of errors are structural; that is, the entire parse for the sentence is malformed, as in the example where figures is actually a noun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "3.3"
},
{
"text": "From this analysis, we can say that the modifier attachment problem is the best to pursue. Not only is it the largest cause of errors, but there is an obvious way to reduce the problem: find and make use of more data. This data does not need to be annotated, as we demonstrated in previous NP bracketing experiments (Vadas and Curran, 2007b) , which attained a positive result. However, incorporating the data into Collins' model is still difficult. Our previous work only implemented a post-processor that ignored the parser's output. There is still room for improvement in this area.",
"cite_spans": [
{
"start": 316,
"end": 341,
"text": "(Vadas and Curran, 2007b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "3.3"
},
{
"text": "We have now seen how a Collins-style parser performs on internal NP structure, but a question remains about the structure itself: is it optimal for the parser? It may be argued that a better representation is to explicitly bracket right-branching structure. For example, in the NP the New York Stock Exchange, if there was a bracket around New York Stock Exchange, then it would be useful training for when the parser comes across New York Stock Exchange composite trading (which it does quite often). The parser should learn to add a bracket in Table 6 : Explicit right-branching structure both cases. Explicit right-branching brackets could also remove some of the clearly wrong bracket errors in Table 5 . The current bracketing guidelines do not mark right-branching constituents, they are simply assumed implicitly to be there. We can automatically add them in however, and then examine what difference this change makes. We find, in Table 6 , that overall performance drops by 1.51% F-score.",
"cite_spans": [],
"ref_spans": [
{
"start": 546,
"end": 553,
"text": "Table 6",
"ref_id": null
},
{
"start": 699,
"end": 706,
"text": "Table 5",
"ref_id": null
},
{
"start": 939,
"end": 946,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bracket Structure",
"sec_num": "4"
},
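{
"text": "One possible realisation of this conversion is sketched below (a toy transform; the label used for the inserted brackets and the exact nesting are assumptions on our part): a flat premodifier-plus-head sequence is nested so that each right-branching attachment receives an explicit bracket.\n\ndef make_right_branching(tokens, label='NML'):\n    # e.g. ['the', 'New', 'York', 'Stock', 'Exchange'] becomes\n    # ['the', ('NML', ['New', ('NML', ['York', ('NML', ['Stock', 'Exchange'])])])]\n    if len(tokens) <= 2:\n        return list(tokens)\n    return [tokens[0], (label, make_right_branching(tokens[1:], label))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bracket Structure",
"sec_num": "4"
},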
{
"text": "This was surprising result, as there are a number of easily recoverable brackets that are introduced by making right-branching structure explicit. For example, a POS tag sequence of DT NN NN is always right-branching. This explains the more than 10% increase in F-score when evaluating internal NP brackets only. As Rehbein and van Genabith (2007) found, increasing the number of non-terminal nodes has caused an increase in performance, though we may question, as they do, whether performance has truly increased, or whether the figure is simply inflated by the evaluation method.",
"cite_spans": [
{
"start": 316,
"end": 347,
"text": "Rehbein and van Genabith (2007)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bracket Structure",
"sec_num": "4"
},
{
"text": "On the other hand, the increased number of brackets has had a deleterious effect on the original brackets. This result suggests that it is better to leave right-branching structure implicit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bracket Structure",
"sec_num": "4"
},
{
"text": "The results in this paper so far, have been attained using noun modifier data. This final set of experiments however, focuses upon a different kind of internal NP structure: appositions. We thus show that the effects of treebank structure are important for a wider range of constructions, and demonstrate once again the difficulty experienced by a model that must adapt to altered data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appositions",
"sec_num": "5"
},
{
"text": "Appositions are a very common linguistic construction in English. They have been used in areas such as Information Extraction (Sudo et al., 2003) and Question Answering systems (Moldovan et al., 2003) , however there is little work on automatically identifying them. Researchers have typically used simple patterns for this task, although the accuracy of this method has not been determined. We will compare how well Bikel's parser performs.",
"cite_spans": [
{
"start": 126,
"end": 145,
"text": "(Sudo et al., 2003)",
"ref_id": "BIBREF11"
},
{
"start": 177,
"end": 200,
"text": "(Moldovan et al., 2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Appositions",
"sec_num": "5"
},
{
"text": "As in our previous experiments, we use the Penn Treebank, as it contains numerous appositions, such as the (slightly edited) example shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appositions",
"sec_num": "5"
},
{
"text": "(NP-SBJ (NP (NNP Darrell) (NNP Phillips) ) (, ,) (NP (NN vice) (NN president) ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appositions",
"sec_num": "5"
},
{
"text": "Appositional structure is, of course, not annotated in the Penn Treebank, and so we manually added gold-standard apposition brackets to the corpus. This was actually done during the annotation process for the Vadas and Curran (2007a) data. For example, we add a new bracket labelled APP to the previous noun phrase: We should note that we only look at nonrestrictive apposition, i.e. where NPs are separated by punctuation (usually a comma, but also a colon or dash). Cases of close apposition, as in the vice president Darrel Phillips have been shown to have a different interpretation (Lee, 1952) and also present more difficult cases for annotation.",
"cite_spans": [
{
"start": 209,
"end": 233,
"text": "Vadas and Curran (2007a)",
"ref_id": "BIBREF12"
},
{
"start": 587,
"end": 598,
"text": "(Lee, 1952)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Appositions",
"sec_num": "5"
},
{
"text": "There are 9,082 APP constituents in the corpus, out of the 60,959 NPs (14.90%) that were manually inspected during the annotation process. We measured inter-annotator agreement by comparing against a second annotator on Section 23. This resulted in an F-score of 90.41%, with precision of 84.93% and recall of 96.65%. The precision was notably low because the second annotator inserted APP nodes into NPs such as the one shown below: and while these cases are appositive, the first annotator did not insert an APP node when a conjunction was present. Table 7 : Parser results with appositions",
"cite_spans": [],
"ref_spans": [
{
"start": 551,
"end": 558,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appositions",
"sec_num": "5"
},
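{
"text": "For reference, the F-score quoted here is the balanced harmonic mean of precision and recall, so the inter-annotator figures combine as F_1 = \\frac{2PR}{P+R} = \\frac{2 \\times 84.93 \\times 96.65}{84.93 + 96.65} \\approx 90.41.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appositions",
"sec_num": "5"
},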
{
"text": "Our first experiment uses a pattern matching technique, simply identifying appositions by looking for a pair of NPs separated by a comma or colon. This rule was then expanded to include other similar constituent labels: NML, UCP, ADJ, NNP, and APP itself, after noticing that errors were occurring in these cases. Evaluating this approach, using the entire Penn Treebank as a test set, we achieved an F-score of 95.76%, with precision 95.53% and recall 95.99%. This result is very good for such a simplistic approach, and could be improved further by adding some additional patterns, such as when an adverb appears between the comma and the second NP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5.1"
},
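{
"text": "A sketch of this pattern matcher is shown below (our own reading of the rule as stated; the exact implementation behind these numbers may differ): within an NP, two NP-like siblings separated by a comma or colon are flagged as an apposition.\n\nNP_LIKE = {'NP', 'NML', 'UCP', 'ADJP', 'NNP', 'APP'}\n\ndef find_appositions(children):\n    # children: the ordered daughter labels of an NP node\n    hits = []\n    for i in range(len(children) - 2):\n        first, punct, second = children[i], children[i + 1], children[i + 2]\n        if first in NP_LIKE and punct in {',', ':'} and second in NP_LIKE:\n            hits.append((i, i + 2))\n    return hits\n\n# find_appositions(['NP', ',', 'NP']) -> [(0, 2)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5.1"
},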
{
"text": "Having set this very high baseline, we once again used Bikel's (2004) parser in an attempt to find an improvement. This experiment includes the NML and JJP brackets in the data, and the parser is in its original state, without any of the alterations we made earlier. The results are shown in Table 7 .",
"cite_spans": [
{
"start": 55,
"end": 69,
"text": "Bikel's (2004)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 292,
"end": 299,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5.1"
},
{
"text": "The parser's performance on APP brackets is almost 30% F-score below the pattern matching approach, and it has also dropped 1.21% counting only the non-APP constituents. The reason for this very low performance arises from the way that Collins' models treats punctuation, i.e. all tokens with a POS tag of . or :. The Collins (1997) model does not generate punctuation at all, and later models still do not treat punctuation the same way as other tokens. Instead, punctuation is generated as a boolean flag on the following constituent. As a result, the parser cannot learn that a rule such as APP \u00a9 NP , NP should have high probability, because this rule is never part of the grammar. For this reason, the parser is unable to replicate the performance of a simple rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5.1"
},
{
"text": "The results of this paper emphasise the strong relationship between a statistical model and the structure of the data it uses. We have demonstrated this by changing two different constructions in the Penn Treebank: noun modifiers, and appositions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The annotation structure of the Vadas and Curran (2007a) data has also been validated by these results, which is actually a complement for the BioMedical guidelines (Warner et al., 2004) , on which ours were based. It is not neccessary to explicitly bracket rightbranching constituents, and furthermore, it is harmful to do so. In addition, separating the NML and NP labels is advantageous, although our results suggest that performance would increase if the original Treebank annotations were made more consistent with our internal NP data.",
"cite_spans": [
{
"start": 32,
"end": 56,
"text": "Vadas and Curran (2007a)",
"ref_id": "BIBREF12"
},
{
"start": 165,
"end": 186,
"text": "(Warner et al., 2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our results also demonstrate the neccessity of having a base-NP submodel as part of Collins' models. This specialised case, while seeming unneccesary with our new internal NP brackets, is still required to attain a high level of performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Instead, the error analysis in Section 3.3 shows us the true reason why the parser's performance on internal NP brackets is low. NML and JJP brackets are difficult because they require specific lexical information, i.e. the exact words must be in the training data. This is because POS tags, which are very important when making most parsing decisions, are uninformative here. For example, they do not help at all when trying to determine whether a sequence of 3 NNs is left or right-branching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our previous work in NP bracketing (Vadas and Curran, 2007b) is positive evidence that this is the correct direction, although incorporating such a submodel back into Collins' model in place of the existing NP submodel, would be a further improvement. It also remains to be seen whether the results observed here would apply to other parsing models.",
"cite_spans": [
{
"start": 35,
"end": 60,
"text": "(Vadas and Curran, 2007b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This work has demonstrated the large effect that data structure can have on a standard NLP tool. It is important that any system, not just parsers, ensure that they perform adequately when faced with changing data. Otherwise, assumptions made today will cause problems for researchers in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "We would like to thank Matthew Honnibal for his annotation of the apposition data; and the anonymous reviewers for their helpful feedback. This work has been supported by the Australian Research Council under Discovery Projects DP0453131 and DP0665973.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On the Parameter Space of Generative Lexicalized Statistical Parsing Models",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Bikel",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Bikel. 2004. On the Parameter Space of Generative Lexi- calized Statistical Parsing Models. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Three generative, lexicalised models for statistical parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meet- ing of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics, pages 16-23.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Nat- ural Language Parsing. Ph.D. thesis, University of Pennsyl- vania.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Probabilistic feature grammars",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua Goodman. 1997. Probabilistic feature grammars. In Proceedings of the International Workshop on Parsing Tech- nologies.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "PCFG models of linguistic tree representations",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "4",
"pages": "613--632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 1998. PCFG models of linguistic tree represen- tations. Computational Linguistics, 24(4):613-632.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Parsing with treebank grammars: empirical bounds, theoretical models, and the structure of the Penn Treebank",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "338--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2001. Parsing with treebank grammars: empirical bounds, theoretical models, and the structure of the Penn Treebank. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, pages 338-345. Toulouse, France.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "How do treebank annotation schemes influence parsing results? Or how not to compare apples and oranges",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of RANLP 2005. Borovets, Bulgaria",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra K\u00fcbler. 2005. How do treebank annotation schemes in- fluence parsing results? Or how not to compare apples and oranges. In Proceedings of RANLP 2005. Borovets, Bul- garia.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Close apposition: An unresolved pattern",
"authors": [
{
"first": "Donald",
"middle": [
"W"
],
"last": "Lee",
"suffix": ""
}
],
"year": 1952,
"venue": "American Speech",
"volume": "27",
"issue": "4",
"pages": "268--275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald W. Lee. 1952. Close apposition: An unresolved pattern. American Speech, 27(4):268-275.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Marcus, Beatrice Santorini, and Mary Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "COGEX: A logic prover for question answering",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Sanda",
"middle": [],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Maiorano",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "166--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Moldovan, Christine Clark, Sanda Harabagiu, and Steve Maiorano. 2003. COGEX: A logic prover for question an- swering. In Proceedings of HLT-NAACL 2003, pages 166- 172.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Treebank annotation schemes and parser evaluation for German",
"authors": [
{
"first": "Ines",
"middle": [],
"last": "Rehbein",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "630--639",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ines Rehbein and Josef van Genabith. 2007. Treebank anno- tation schemes and parser evaluation for German. In Pro- ceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 630-639.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An improved extraction pattern representation model for automatic IE pattern acquisition",
"authors": [
{
"first": "Kiyoshi",
"middle": [],
"last": "Sudo",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association of Computational Linguistics (ACL-03)",
"volume": "",
"issue": "",
"pages": "224--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiyoshi Sudo, Satoshi Sekine, and Ralph Grishman. 2003. An improved extraction pattern representation model for auto- matic IE pattern acquisition. In Proceedings of the 41st An- nual Meeting of the Association of Computational Linguis- tics (ACL-03), pages 224-231.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adding noun phrase structure to the Penn Treebank",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vadas",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL-07)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Vadas and James R. Curran. 2007a. Adding noun phrase structure to the Penn Treebank. In Proceedings of the 45th Annual Meeting of the Association for Computational Lin- guistics (ACL-07).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Large-scale supervised models for noun phrase bracketing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vadas",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Vadas and James R. Curran. 2007b. Large-scale super- vised models for noun phrase bracketing. In Proceedings of the 10th Conference of the Pacific Association for Computa- tional Linguistics (PACLING-2007). Melbourne, Australia.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Addendum to the Penn Treebank II style bracketing guidelines: BioMedical Treebank annotation",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Warner",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Brisson",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Mott",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Warner, Ann Bies, Christine Brisson, and Justin Mott. 2004. Addendum to the Penn Treebank II style bracketing guidelines: BioMedical Treebank annotation. Technical re- port.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "NN vice) (NN president) )))",
"type_str": "figure"
},
"TABREF1": {
"content": "<table/>",
"num": null,
"html": null,
"text": "Parser performance",
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td>Overall</td><td>88.09 87.77</td><td>87.93</td></tr><tr><td>Original structure</td><td>87.92 88.68</td><td>88.30</td></tr></table>",
"num": null,
"html": null,
"text": "PREC. RECALL F-SCORE NML and JJP brackets only 100.00 53.54 69.74",
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"num": null,
"html": null,
"text": "Parser performance with relabelled brackets",
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"num": null,
"html": null,
"text": "Performance with correct head-finding",
"type_str": "table"
},
"TABREF6": {
"content": "<table><tr><td>Overall</td><td>72.11</td><td>87.71</td><td>79.14</td></tr><tr><td>1 Original structure</td><td>72.09</td><td>88.19</td><td>79.33</td></tr><tr><td colspan=\"2\">NML /JJP brackets only 72.93</td><td>69.58</td><td>71.22</td></tr><tr><td>Overall</td><td>87.37</td><td>87.17</td><td>87.27</td></tr><tr><td>2 Original structure</td><td>87.75</td><td>87.65</td><td>87.70</td></tr><tr><td colspan=\"2\">NML /JJP brackets only 72.36</td><td>69.27</td><td>70.78</td></tr><tr><td>Overall</td><td>86.83</td><td>86.46</td><td>86.64</td></tr><tr><td>3 Original structure</td><td>86.90</td><td>88.66</td><td>87.77</td></tr><tr><td colspan=\"2\">NML /JJP brackets only 48.61</td><td>3.65</td><td>6.78</td></tr></table>",
"num": null,
"html": null,
"text": "PREC. RECALL F-SCORE",
"type_str": "table"
},
"TABREF7": {
"content": "<table><tr><td/><td>NP</td></tr><tr><td>NP</td><td>PP</td></tr><tr><td>NP</td><td>PP</td></tr></table>",
"num": null,
"html": null,
"text": "Performance with the base-NP model off",
"type_str": "table"
},
"TABREF8": {
"content": "<table><tr><td>ERROR</td><td>#</td><td>%</td><td>FP</td><td>FN</td><td>EXAMPLE</td><td/></tr><tr><td>Modifier attachment</td><td>213</td><td>38.04</td><td colspan=\"2\">56 157</td><td/><td/></tr><tr><td>NML</td><td>122</td><td>21.79</td><td colspan=\"2\">21 101</td><td>lung cancer deaths</td><td>Y</td></tr><tr><td>Internal Entity Structure</td><td>43</td><td>7.68</td><td>24</td><td>19</td><td>(Circulation Credit) Plan</td><td>Y</td></tr><tr><td>Appositive Title</td><td>29</td><td>5.18</td><td>6</td><td>23</td><td>(Republican Rep.) Jim Courter</td><td>Z</td></tr><tr><td>JJP</td><td>10</td><td>1.79</td><td>4</td><td>6</td><td>(More common) chrysotile fibers</td><td>Z</td></tr><tr><td>Company/name postmodifiers</td><td>9</td><td>1.61</td><td>1</td><td>8</td><td>(Kawasaki Heavy Industries) Ltd.</td><td/></tr><tr><td>Mislabelling</td><td>92</td><td>16.43</td><td>30</td><td>62</td><td>(ADJP more influential) role</td><td>Z</td></tr><tr><td>Conjunctions</td><td>92</td><td>16.43</td><td>38</td><td>54</td><td>(cotton and acetate) fibers</td><td>Z</td></tr><tr><td>Company names</td><td>10</td><td>1.79</td><td>0</td><td>10</td><td>(F.H. Faulding) &amp; (Co.)</td><td>Z</td></tr><tr><td>Possessives</td><td>61</td><td>10.89</td><td>0</td><td>61</td><td>(South Korea) 's</td><td>Z</td></tr><tr><td>Speech marks and brackets</td><td>35</td><td>6.25</td><td>0</td><td>35</td><td>(\" closed-end \")</td><td>Z</td></tr><tr><td>Clearly wrong bracketing</td><td>45</td><td>8.04</td><td>45</td><td>0</td><td/><td/></tr><tr><td>Right-branching</td><td>27</td><td>4.82</td><td>27</td><td>0</td><td>(NP (NML Kelli Green))</td><td>Y</td></tr><tr><td>Unary</td><td>13</td><td>2.32</td><td>13</td><td>0</td><td>a (NML cash) transaction</td><td>Y</td></tr><tr><td>Conjunction</td><td>5</td><td>0.89</td><td>5</td><td>0</td><td>(NP a (NML savings and loan))</td><td>Y</td></tr><tr><td>Structural</td><td>8</td><td>1.43</td><td>3</td><td>5 (NP</td><td>[ [ [ construction spending) (VP (VBZ figures)</td><td>[ [ [ Y</td></tr><tr><td>Other</td><td>14</td><td>2.50</td><td>8</td><td>6</td><td/><td/></tr><tr><td>Total</td><td colspan=\"4\">560 100.00 180 380</td><td/><td/></tr></table>",
"num": null,
"html": null,
"text": "explains why the distinction between NP and NPB nodes is needed: otherwise,",
"type_str": "table"
}
}
}
}