|
{ |
|
"paper_id": "N12-1045", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:05:02.070584Z" |
|
}, |
|
"title": "A Hierarchical Dirichlet Process Model for Joint Part-of-Speech and Morphology Induction", |
|
"authors": [ |
|
{ |
|
"first": "Kairit", |
|
"middle": [], |
|
"last": "Sirts", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tallinn University of Technology", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Tanel", |
|
"middle": [], |
|
"last": "Alum\u00e4e", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Technology", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we present a fully unsupervised nonparametric Bayesian model that jointly induces POS tags and morphological segmentations. The model is essentially an infinite HMM that infers the number of states from data. Incorporating segmentation into the same model provides the morphological features to the system and eliminates the need to find them during preprocessing step. We show that learning both tasks jointly actually leads to better results than learning either task with gold standard data from the other task provided. The evaluation on multilingual data shows that the model produces state-of-the-art results on POS induction.", |
|
"pdf_parse": { |
|
"paper_id": "N12-1045", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we present a fully unsupervised nonparametric Bayesian model that jointly induces POS tags and morphological segmentations. The model is essentially an infinite HMM that infers the number of states from data. Incorporating segmentation into the same model provides the morphological features to the system and eliminates the need to find them during preprocessing step. We show that learning both tasks jointly actually leads to better results than learning either task with gold standard data from the other task provided. The evaluation on multilingual data shows that the model produces state-of-the-art results on POS induction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Nonparametric Bayesian modeling has recently become very popular in natural language processing (NLP), mostly because of its ability to provide priors that are especially suitable for tasks in NLP (Teh, 2006) . Using nonparametric priors enables to treat the size of the model as a random variable with its value to be induced during inference which makes its use very appealing in models that need to decide upon the number of states.", |
|
"cite_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 208, |
|
"text": "(Teh, 2006)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The task of unsupervised parts-of-speech (POS) tagging has been under research in numerous papers, for overview see (Christodoulopoulos et al., 2010) . Most of the POS induction models use the structure of hidden Markov model (HMM) (Rabiner, 1989) that requires the knowledge about the number of hidden states (corresponding to the number of tags) in advance. According to our considerations, supplying this information is not desirable for two opposing reasons: 1) it injects into the system a piece of knowledge which in a truly unsupervised setting would be unavailable; and 2) the number of POS tags used is somewhat arbitrary anyway because there is no common consensus of what should be the true number of tags in each language and therefore it seems unreasonable to constrain the model with such a number instead of learning it from the data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 149, |
|
"text": "(Christodoulopoulos et al., 2010)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 247, |
|
"text": "(Rabiner, 1989)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Unsupervised morphology learning is another popular task that has been extensively studied by many authors. Here we are interested in learning concatenative morphology of words, meaning the substrings of the word corresponding to morphemes that, when concatenated, will give the lexical representation of the word type. For the rest of the paper we will refer to this task as (morphological) segmentation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Several unsupervised POS induction systems make use of morphological features (Blunsom and Cohn, 2011; Lee et al., 2010; Berg-Kirkpatrick et al., 2010; Clark, 2003; Christodoulopoulos et al., 2011) and this approach has been empirically proved to be helpful (Christodoulopoulos et al., 2010) . In a similar fashion one could think that knowing POS tags could be useful for learning morphological segmentations and in this paper we will study this hypothesis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 102, |
|
"text": "(Blunsom and Cohn, 2011;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 103, |
|
"end": 120, |
|
"text": "Lee et al., 2010;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 121, |
|
"end": 151, |
|
"text": "Berg-Kirkpatrick et al., 2010;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 164, |
|
"text": "Clark, 2003;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 165, |
|
"end": 197, |
|
"text": "Christodoulopoulos et al., 2011)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 291, |
|
"text": "(Christodoulopoulos et al., 2010)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we will build a model that combines POS induction and morphological segmentation into one learning problem. We will show that the unsupervised learning of both of these tasks in the same model will lead to better results than learning both tasks separately with the gold standard data of the other task provided. We will also demonstrate that our model produces state-of-the-art results on POS tagging. As opposed to the compared methods, our model also induces the number of tags from data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the following, section 2 gives the overview of the Dirichlet Processes, section 3 describes the model setup followed by the description of inference procedures in section 4, experimental results are presented in section 5, section 6 summarizes the previous work and last section concludes the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Let H be a distribution called base measure. Dirichlet process (DP) (Ferguson, 1973 ) is a probability distribution over distributions whose support is the subset of the support of H:", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 83, |
|
"text": "(Ferguson, 1973", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background 2.1 Dirichlet Process", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "G \u223c DP (\u03b1, H),", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Background 2.1 Dirichlet Process", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where \u03b1 is the concentration parameter that controls the number of values instantiated by G. DP has no analytic form and therefore other representations must be developed for sampling. In the next section we describe Chinese Restaurant Process that enables to obtain samples from DP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background 2.1 Dirichlet Process", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Chinese Restaurant Process (CRP) (Aldous, 1985) enables to calculate the marginal probabilities of the elements conditioned on the values given to all previously seen items and integrating over possible DP prior values.", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 47, |
|
"text": "(Aldous, 1985)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Chinese Restaurant Process", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Imagine an infinitely big Chinese restaurant with infinitely many tables with each table having capacity for infinitely many customers. In the beginning the restaurant is empty. Then customers, corresponding to data points, start entering one after another. The first customer chooses an empty table to sit at. Next customers choose a new table with probability proportional to the concentration parameter \u03b1 or sit into one of the already occupied tables with probability proportional to the number of customers already sitting there. Whenever a customer chooses an empty table, he will also pick a dish from H to be served on that table. The predictive probability distribution over dishes for the i-th customer is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Chinese Restaurant Process", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "P (x i = \u03c6 k |x \u2212i , \u03b1, H) = n \u03c6 k + \u03b1 i \u2212 1 + \u03b1 p H (\u03c6 k ), (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Chinese Restaurant Process", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "where x \u2212i is the seating arrangement of customers excluding the i-th customer and n \u03c6 k is the number of customers eating dish \u03c6 k and p H (\u2022) is the probability according to H.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Chinese Restaurant Process", |
|
"sec_num": "2.2" |
|
}, |
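
{

"text": "To make the CRP predictive distribution of equation (2) concrete, the following minimal Python sketch (our own illustration, not code from the paper; the function names and the finite base distribution standing in for H are assumptions) samples a sequence of dishes from a CRP:\n\nimport random\n\ndef crp_sample(n_customers, alpha, base_probs, seed=0):\n    # base_probs: dict mapping each dish phi_k to its base probability p_H(phi_k).\n    # Tables are not tracked explicitly; equation (2) only needs dish counts.\n    rng = random.Random(seed)\n    counts, dishes = {}, []\n    for i in range(1, n_customers + 1):\n        # weight of dish phi_k for the i-th customer:\n        # (n_{phi_k} + alpha * p_H(phi_k)) / (i - 1 + alpha)\n        weights = {d: counts.get(d, 0) + alpha * p for d, p in base_probs.items()}\n        r = rng.uniform(0, sum(weights.values()))  # total mass equals i - 1 + alpha\n        chosen = None\n        for d, w in weights.items():\n            r -= w\n            if r <= 0:\n                chosen = d\n                break\n        if chosen is None:  # guard against floating-point leftovers\n            chosen = d\n        counts[chosen] = counts.get(chosen, 0) + 1\n        dishes.append(chosen)\n    return dishes, counts\n\n# The rich-get-richer effect emerges even over a uniform base measure.\nprint(crp_sample(100, 1.0, {'a': 0.25, 'b': 0.25, 'c': 0.25, 'd': 0.25})[1])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Chinese Restaurant Process",

"sec_num": "2.2"

},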
|
{ |
|
"text": "The notion of hierarchical Dirichlet Process (HDP) can be derived by letting the base measure itself to be a draw from a DP:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Dirichlet Process", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "G 0 |\u03b1 0 , H \u223c DP (\u03b1 0 , H) (3) G j |\u03b1, G 0 \u223c DP (\u03b1, G 0 ) j = 1 \u2022 \u2022 \u2022 J", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Hierarchical Dirichlet Process", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Under HDP, CRP becomes Chinese Restaurant Franchise with several restaurants sharing the same franchise-wide menu G 0 . When a customer sits at an empty table in one of the G j -th restaurants, the event of a new customer entering the restaurant G 0 will be triggered. Analogously, when a table becomes empty in one of the G j -th restaurants, it causes one of the customers leaving from restaurant G 0 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical Dirichlet Process", |
|
"sec_num": "2.3" |
|
}, |
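
{

"text": "A minimal sketch of the Chinese Restaurant Franchise (our own illustration; class and variable names are assumptions): each group-level restaurant backs off to a shared parent restaurant G_0, so opening a new table in a group sends a new customer into G_0, as described above (table removal is not modelled here):\n\nimport random\n\nclass Restaurant:\n    # One CRP: customers sit at tables and every table serves a single dish.\n    def __init__(self, alpha, parent=None, base_probs=None, rng=None):\n        self.alpha, self.parent, self.base_probs = alpha, parent, base_probs\n        self.tables = []  # list of [dish, customer_count]\n        self.rng = rng or random.Random(0)\n\n    def base_draw(self):\n        # New tables get their dish from the parent restaurant (HDP case)\n        # or directly from a finite base distribution standing in for H.\n        if self.parent is not None:\n            return self.parent.seat()\n        dishes, probs = zip(*self.base_probs.items())\n        return self.rng.choices(dishes, probs)[0]\n\n    def seat(self):\n        r = self.rng.uniform(0, sum(c for _, c in self.tables) + self.alpha)\n        for table in self.tables:\n            r -= table[1]\n            if r <= 0:\n                table[1] += 1\n                return table[0]\n        dish = self.base_draw()  # open a new table\n        self.tables.append([dish, 1])\n        return dish\n\nrng = random.Random(42)\ng0 = Restaurant(1.0, base_probs={'NOUN': 0.5, 'VERB': 0.5}, rng=rng)\ngroups = [Restaurant(1.0, parent=g0, rng=rng) for _ in range(3)]\nprint([g.seat() for g in groups for _ in range(5)])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hierarchical Dirichlet Process",

"sec_num": "2.3"

},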
|
{ |
|
"text": "We consider the problem of unsupervised learning of POS tags and morphological segmentations in a joint model. Similarly to some recent successful attempts (Lee et al., 2010; Christodoulopoulos et al., 2011; Blunsom and Cohn, 2011) , our model is typebased, arranging word types into hard clusters. Unlike many recent POS tagging models, our model does not assume any prior information about the number of POS tags. We will define the model as a generative sequence model using the HMM structure. Graphical depiction of the model is given in Figure 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 174, |
|
"text": "(Lee et al., 2010;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 207, |
|
"text": "Christodoulopoulos et al., 2011;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 231, |
|
"text": "Blunsom and Cohn, 2011)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 542, |
|
"end": 550, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We assume the presence of a fixed length vocabulary W . The process starts with generating the lexicon that stores for each word type its POS tag and morphological segmentation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative story", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Draw a unigram tag distribution from the respective DP; \u2022 Draw a segment distribution from the respective DP; \u2022 For each tag, draw a tag-specific segment distribution from HDP with the segment distribution as base measure; \u2022 For each word type, draw a tag from the unigram tag distribution; \u2022 For each word type, draw a segmentation from the respective tag-specific segment distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative story", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Next we proceed to generate the HMM parameters:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative story", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 For each tag, draw a bigram distribution from HDP with the unigram tag distribution as base measure; \u2022 For each tag bigram, draw a trigram distribution from HDP with the respective bigram distribution as base measure; \u2022 For each tag, draw a Dirichlet concentration parameter from Gamma distribution and an emission distribution from the symmetric Dirichlet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative story", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Finally the standard HMM procedure for generating the data sequence follows. At each time step:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative story", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Generate the next tag conditioned on the last two tags from the respective trigram HDP; \u2022 Generate the word from the respective emission distribution conditioned on the tag just drawn; \u2022 Generate the segmentation of the word deterministically by looking it up from the lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative story", |
|
"sec_num": "3.1" |
|
}, |
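
{

"text": "To illustrate the generative story end to end, the following sketch (our own simplification, not the authors' code) replaces the DPs and HDPs with finite Dirichlet draws over a truncated tag set and a toy segment inventory; the emission draw is also simplified to a uniform choice over the lexicon:\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nK_TAGS, ALPHA = 20, 1.0\nSEGMENTS = ['walk', 'talk', 'jump', 'ed', 'ing', 's', 'er']  # toy segment inventory\n\ndef dp_like(base):\n    # Finite Dirichlet draw centred on 'base'; stands in for DP(ALPHA, base).\n    return rng.dirichlet(ALPHA * np.asarray(base) + 1e-9)\n\n# Lexicon generation.\nuni_tags = dp_like(np.full(K_TAGS, 1.0 / K_TAGS))                # unigram tag distribution\nseg_base = dp_like(np.full(len(SEGMENTS), 1.0 / len(SEGMENTS)))  # shared segment distribution\ntag_segs = [dp_like(seg_base) for _ in range(K_TAGS)]            # tag-specific segment distributions\nlexicon = {}\nfor w in range(10):                                              # ten word types\n    tag = rng.choice(K_TAGS, p=uni_tags)\n    n_seg = 1 + rng.poisson(0.7)                                 # toy segment count\n    segs = [SEGMENTS[rng.choice(len(SEGMENTS), p=tag_segs[tag])] for _ in range(n_seg)]\n    lexicon[w] = (tag, segs)\n\n# Transition hierarchy (unigram -> bigram -> trigram) in the same truncated style.\nbigram = [dp_like(uni_tags) for _ in range(K_TAGS)]\ntrigram = [[dp_like(bigram[j]) for _ in range(K_TAGS)] for j in range(K_TAGS)]\n\n# Standard HMM data generation; the segmentation is looked up from the lexicon.\nt1, t2, sequence = 0, 0, []\nfor _ in range(5):\n    t = rng.choice(K_TAGS, p=trigram[t1][t2])\n    word = int(rng.choice(list(lexicon)))     # simplified stand-in for the emission draw\n    sequence.append((int(t), word, lexicon[word][1]))\n    t1, t2 = t2, t\nprint(sequence)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generative story",

"sec_num": "3.1"

},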
|
{ |
|
"text": "The trigram transition hierarchy is a HDP:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model setup", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "G U \u223c DP (\u03b1 U , H) (5) G B j \u223c DP (\u03b1 B , G U ) j = 1 \u2022 \u2022 \u2022 \u221e (6) G T jk \u223c DP (\u03b1 T , G B j ) j, k = 1 \u2022 \u2022 \u2022 \u221e,", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Model setup", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where G U , G B and G T denote the unigram, bigram and trigram context DP-s respectively, \u03b1-s are the respective concentration parameters coupled for DPs of the same hierarchy level. Emission parameters are drawn from multinomials with symmetric Dirichlet priors:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model setup", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "w 1 w 2 w 3 s 1 s 2 s 3 t 1 t 2 t 3 G jk G j G U E j j Gj G S S ... ... ... B j =1... T H TS k=1... j =1...", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model setup", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "E j |\u03b2 j , H \u223c M ult(\u03b8)Dir(\u03b2 j )d\u03b8 j = 1 \u2022 \u2022 \u2022 \u221e,", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Model setup", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where each emission distribution has its own Dirichlet concentration parameter \u03b2 j drawn from H.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model setup", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Morphological segments are modelled with another HDP where the groups are formed on the basis of tags:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model setup", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "G S \u223c DP (\u03b1 S , S) (9) G T S j \u223c DP (\u03b1 T S , G S ) j = 1 \u2022 \u2022 \u2022 \u221e,", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Model setup", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where G T S j are the tag-specific segment DP-s and G S is their common base distribution with S as base measure over all possible strings. S consists of two components: a geometric distribution over the segment lengths and collapsed Dirichlet-multinomial over character unigrams.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model setup", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We implemented Gibbs sampler to draw new values for tags and Metropolis-Hastings sampler for resampling segmentations. We use a type-based col-lapsed sampler that draws the tagging and segmentation values for all tokens of a word type in one step and integrates out the random DP measures by using the CRP representation. The whole procedure alternates between three sampling steps:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Sampling new tag value for each word type;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Resampling the segmentation for each type;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 Sampling new values for all parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The tags will be sampled from the posterior:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (T|W, S, w, \u0398),", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where W is the set of words in the vocabulary, T and S are tags and segmentations assigned to each word type, w is the actual word sequence, and \u0398 denotes the set of all parameters relevant for tag sampling. For brevity, we will omit \u0398 notation in the formulas below. For a single word type, this posterior can be factored as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (T i = t|T \u2212i , S, W, w) \u223c P (S i |T i = t, T \u2212i , S \u2212i )\u00d7 P (W i |T i = t, T \u2212i , W \u2212i )\u00d7 P (w|T i = t, T \u2212i , W),", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where \u2212i in the subscript denotes the observations with the i-th word type excluded. The first term is the segmentation likelihood and can be computed according to the CRP formula:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (S i |T i = t, T \u2212i , S \u2212i ) = |W i | j=1 s\u2208S i n \u2212S i ts n \u2212S i t\u2022 + \u03b1 + \u03b1(m \u2212S i s + \u03b2P 0 (s)) (n \u2212S i t\u2022 + \u03b1)(m \u2212S i \u2022 + \u03b2) ,", |
|
"eq_num": "(13)" |
|
} |
|
], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where the outer product is over the word type count, n ts and m s denote the number of customers \"eating\" the segment s under tag t and the number of tables \"serving\" the segment s across all restaurants respectively, dot represents the marginal counts and \u03b1 and \u03b2 are the concentration parameters of the respective DP-s. \u2212S i in upper index means that the segments belonging to the segmentation of the i-th word type and not calculated into likelihood term yet have been excluded.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
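
{

"text": "As an illustration, the bracketed term of equation (13), i.e. the probability of a single segment s under tag t in the two-level CRP, can be computed from the counts directly (a sketch with our own names; the toy base distribution p0 is an assumption):\n\ndef segment_prob(s, t, n, n_tot, m, m_tot, alpha, beta, p0):\n    # n[t][s]  : customers eating segment s in the restaurant of tag t   (n_ts)\n    # n_tot[t] : all customers in the restaurant of tag t                (n_t.)\n    # m[s]     : tables serving segment s across all restaurants         (m_s)\n    # m_tot    : all tables across all restaurants                       (m_.)\n    # p0(s)    : base probability of the string s under S\n    n_ts = n.get(t, {}).get(s, 0)\n    return (n_ts / (n_tot.get(t, 0) + alpha)\n            + alpha * (m.get(s, 0) + beta * p0(s))\n              / ((n_tot.get(t, 0) + alpha) * (m_tot + beta)))\n\n# Toy usage with hypothetical counts; the geometric-times-uniform p0 mirrors\n# the base measure S described in section 3.2.\np0 = lambda s: (0.5 ** len(s)) * (1.0 / 26) ** len(s)\nn = {'T1': {'walk': 3, 'ed': 5}}\nprint(segment_prob('ed', 'T1', n, {'T1': 8}, {'walk': 2, 'ed': 3}, 5, 1.0, 1.0, p0))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tag sampling",

"sec_num": "4.1"

},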
|
{ |
|
"text": "The word type likelihood is calculated according to the collapsed Dirichlet-multinomial likelihood formula:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "P (W i |T i = t, T \u2212i , W \u2212i , w) = |W i |\u22121 j=0 n tW i + j + \u03b1 n t\u2022 + j + \u03b1N (14)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where n tW i is the number of times the word W i has been tagged with tag t so far, n t\u2022 is the number of total word tokens tagged with the tag t and N is the total number of words in the vocabulary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
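
{

"text": "Equation (14) translates directly into a product over the tokens of the word type; a minimal sketch (our own naming) follows:\n\ndef word_type_likelihood(count_w, n_tw, n_t, alpha, vocab_size):\n    # Collapsed symmetric Dirichlet-multinomial emission likelihood, eq. (14):\n    # count_w    : number of tokens of word type W_i in the corpus\n    # n_tw       : tokens of W_i currently tagged with t (excluding this type)\n    # n_t        : all tokens tagged with t (excluding this type)\n    # vocab_size : N, the number of word types in the vocabulary\n    p = 1.0\n    for j in range(count_w):\n        p *= (n_tw + j + alpha) / (n_t + j + alpha * vocab_size)\n    return p\n\nprint(word_type_likelihood(count_w=4, n_tw=0, n_t=120, alpha=0.5, vocab_size=10000))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tag sampling",

"sec_num": "4.1"

},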
|
{ |
|
"text": "The last factor is the word sequence likelihood and covers the transition probabilities. Relevant trigrams are those three containing the current word, and in all contexts where the word token appears in:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "P (w|T i = t, T \u2212i , W) \u223c c\u2208C W i P (t|t(c \u22122 ), t(c \u22121 ))\u2022 P (t(c +1 )|t(c \u22121 ), t)\u2022 P (t(c +2 )|t, t(c +1 )) (15)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where C W i denotes all the contexts where the word type W i appears in, t(c) are the tags assigned to the context words. All these terms can be calculated with CRP formulas.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tag sampling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We sample the whole segmentation of a word type as a block with forward-filtering backward-sampling scheme as described in (Mochihashi et al., 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 148, |
|
"text": "(Mochihashi et al., 2009)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation sampling", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As we cannot sample from the exact marginal conditional distribution due to the dependencies between segments induced by the CRP, we use the Metropolis-Hastings sampler that draws a new proposal with forward-filtering backwardsampling scheme and accepts it with probability min(1,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation sampling", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "P (Sprop) P (S old ) )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation sampling", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": ", where S prop is the proposed segmentation and S old is the current segmentation of a word type. The acceptance rate during experiments varied between 94-98%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation sampling", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "For each word type, we build a forward filtering table where we maintain the forward variables \u03b1[t][k] that present the probabilities of the last k characters of a t-character string constituting a segment. Define:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation sampling", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b1[0][0] = 1 (16) \u03b1[t][0] = 0, t > 0", |
|
"eq_num": "(17)" |
|
} |
|
], |
|
"section": "Segmentation sampling", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Then the forward variables can be computed recursively by using dynamic programming algorithm: Sampling starts from the end of the word because it is known for certain that the word end coincides with the end of a segment. We sample the beginning position k of the last segment from the forward variables \u03b1[t][k], where t is the length of the word. Then we set t = t \u2212 k and continue to sample the start of the previous to the last segment. This process continues until t = 0. The segment probabilities, conditioned on the tag currently assigned to the word type, will be calculated according to the segmentation likelihood formula (13).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Segmentation sampling", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b1[t][k] = p(c t t\u2212k ) t\u2212k j=0 \u03b1[t \u2212 k][j], t = 1 \u2022 \u2022 \u2022 L,", |
|
"eq_num": "(18)" |
|
} |
|
], |
|
"section": "Segmentation sampling", |
|
"sec_num": "4.2" |
|
}, |
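
{

"text": "A compact sketch of the forward-filtering backward-sampling step and the Metropolis-Hastings test (our own simplification: a fixed segment scoring function stands in for the conditional CRP probabilities of equation (13); in the real sampler those probabilities change as segments are added, which is why the Metropolis-Hastings correction is needed):\n\nimport random\n\ndef ffbs_segment(word, seg_prob, rng):\n    # Forward pass, eqs. (16)-(18): a[t][k] is the probability that the last k\n    # characters of the t-character prefix form a segment.\n    L = len(word)\n    a = [[0.0] * (L + 1) for _ in range(L + 1)]\n    a[0][0] = 1.0\n    for t in range(1, L + 1):\n        for k in range(1, t + 1):\n            a[t][k] = seg_prob(word[t - k:t]) * sum(a[t - k])\n    # Backward pass: sample segment boundaries starting from the word end.\n    segments, t = [], L\n    while t > 0:\n        r = rng.uniform(0, sum(a[t][1:t + 1]))\n        for k in range(1, t + 1):\n            r -= a[t][k]\n            if r <= 0:\n                break\n        segments.append(word[t - k:t])\n        t -= k\n    return list(reversed(segments))\n\ndef accept(new_prob, old_prob, rng):\n    # Metropolis-Hastings test: accept with probability min(1, P(new)/P(old)).\n    return old_prob == 0.0 or rng.random() < min(1.0, new_prob / old_prob)\n\nrng = random.Random(1)\nseg_prob = lambda s: 0.3 ** len(s)   # toy stand-in: shorter segments are cheaper\nprint(ffbs_segment('walking', seg_prob, rng))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Segmentation sampling",

"sec_num": "4.2"

},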
|
{ |
|
"text": "All DP and Dirichlet concentration parameters are given vague Gamma(10, 0.1) priors and new values are sampled by using the auxiliary variable sampling scheme described in (Escobar and West, 1995) and the extended version for HDP-s described in . The segment length control parameter is given uniform Beta prior and its new values are sampled from the posterior which is also a Beta distribution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 196, |
|
"text": "(Escobar and West, 1995)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperparameter sampling", |
|
"sec_num": "4.3" |
|
}, |
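
{

"text": "For concreteness, a sketch of the auxiliary variable scheme of (Escobar and West, 1995) for a single DP concentration parameter with a Gamma(a, b) shape/rate prior, given k instantiated components among n observations (our own code, not the authors'; the HDP extension is not shown):\n\nimport math, random\n\ndef resample_concentration(alpha, k, n, a=10.0, b=0.1, rng=random):\n    eta = rng.betavariate(alpha + 1.0, n)                 # auxiliary variable\n    odds = (a + k - 1.0) / (n * (b - math.log(eta)))      # pi / (1 - pi)\n    pi = odds / (1.0 + odds)\n    shape = a + k if rng.random() < pi else a + k - 1.0   # mixture of two Gammas\n    return rng.gammavariate(shape, 1.0 / (b - math.log(eta)))\n\nprint(resample_concentration(alpha=1.0, k=14, n=17000))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hyperparameter sampling",

"sec_num": "4.3"

},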
|
{ |
|
"text": "We test the POS induction part of the model on all languages in the Multext-East corpora (Erjavec, 2010) as well as on the free corpora from CONLL-X Shared Task 1 for Dutch, Danish, Swedish and Portuguese. The evaluation of morphological segmentations is based on the Morpho Challenge gold segmented wordlists for English, Finnish and Turkish 2 . We gathered the sentences from Europarl corpus 3 for English and Finnish, and use the Turkish text data from the Morpho Challenge 2009 4 . Estonian gold standard segmentations have been obtained from the Estonian morphologically annotated corpus 5 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We report three accuracy measures for tagging: greedy one-to-one mapping (1-1) (Haghighi and Klein, 2006) , many-to-one mapping (m-1) and Vmeasure (V-m) (Rosenberg and Hirschberg, 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 105, |
|
"text": "(Haghighi and Klein, 2006)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 153, |
|
"end": 185, |
|
"text": "(Rosenberg and Hirschberg, 2007)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5.1" |
|
}, |
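
{

"text": "For clarity, a small sketch of the many-to-one mapping accuracy and a greedy one-to-one variant (our own helper code; the gold tags and induced cluster ids are toy data):\n\nfrom collections import Counter\n\ndef many_to_one(gold, pred):\n    # Each induced cluster is mapped to its most frequent gold tag.\n    pairs = Counter(zip(pred, gold))\n    best = {}\n    for (c, g), n in pairs.items():\n        if n > best.get(c, (None, -1))[1]:\n            best[c] = (g, n)\n    return sum(n for (c, g), n in pairs.items() if best[c][0] == g) / len(gold)\n\ndef greedy_one_to_one(gold, pred):\n    # Clusters and gold tags are matched greedily, each used at most once.\n    pairs = Counter(zip(pred, gold))\n    used_c, used_g, correct = set(), set(), 0\n    for (c, g), n in pairs.most_common():\n        if c not in used_c and g not in used_g:\n            used_c.add(c); used_g.add(g); correct += n\n    return correct / len(gold)\n\ngold = ['N', 'V', 'N', 'D', 'N', 'V']\npred = [1, 2, 1, 1, 3, 2]\nprint(many_to_one(gold, pred), greedy_one_to_one(gold, pred))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation",

"sec_num": "5.1"

},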
|
{ |
|
"text": "Segmentation is evaluated on the basis of standard F-score which is the harmonic mean of precision and recall.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For each experiment, we made five runs with random initializations and report the results of the median. The sampler was run 200 iterations for burnin, after which we collected 5 samples, letting the sampler to run for another 200 iterations between each two sample. We start with 15 segmenting iterations during each Gibbs iteration to enable the segmentation sampler to burnin to the current tagging state, and gradually reduce this number to one. Segmentation likelihood term for tagging is calculated on the basis of the last segment only because this setting gave the best results in preliminary experiments and it also makes the whole computation less expensive.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The first set of experiments was conducted to test the model tagging accuracy on different languages mentioned above. The results obtained were in general slightly lower than the current state-of-the-art and the number of tags learned was generally bigger than the number of gold standard tags. We observed that different components making up the corpus logarithmic probability have different magnitudes. In particular, we found that the emission probability component in log-scale is roughly four times smaller than the transition probability. This observation motivated introducing the likelihood scaling heuristic into the model to scale the emission probability up. We tried a couple of different scaling factors on Multext-East English corpus and then set its value to 4 for all languages for the rest of the experiments. This improved the tagging results consistently across all languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "POS induction results are given in Table 1 . When comparing these results with the recently published results on the same corpora (Christodoulopoulos et al., 2011; Blunsom and Cohn, 2011; Lee et al., 2010) we can see that our results compare favorably with the state-of-the-art, resulting with the best published results in many occasions. The number of tag clusters learned by the model corresponds surprisingly well to the number of true coarse-grained gold standard tags across all languages. There are two things to note here: 1) the tag distributions learned are influenced by the likelihood scaling heuristic and more experiments are needed in order to fully understand the characteristics and influence of this heuristic; 2) as the model is learning the coarse-grained tagset consistently in all languages, it might as well be that the POS tags are not as dependent on the morphology as we assumed, especially in inflectional languages with many derivational and inflectional suffixes, because otherwise the model should have learned a more fine-grained tagset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 163, |
|
"text": "(Christodoulopoulos et al., 2011;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 187, |
|
"text": "Blunsom and Cohn, 2011;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 188, |
|
"end": 205, |
|
"text": "Lee et al., 2010)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 42, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Segmentation results are presented in Table 2 . For each language, we report the lexicon-based precision, recall and F-measure, the number of word types in the corpus and and number of word types with gold segmentation available. The reported standard deviations show that the segmentations obtained are stable across different runs which is probably due to the blocked sampler. We give the segmentation results both with and without likelihood scaling heuristic and denote that while the emission likelihood scaling improves the tagging accuracy, it actually degrades the segmentation results.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 45, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "It can also be seen that in general precision score is better but for Estonian recall is higher. This can be explained by the characteristics of the evaluation data sets. For English, Finnish and Turkish we use the Morpho Challenge wordlists where the gold standard segmentations are fine-grained, separating both inflectional and derivational morphemes. Especially derivational morphemes are hard to learn with pure data-driven methods with no knowledge about semantics and thus it can result in undersegmentation. On the other hand, Estonian corpus separates only inflectional morphemes which thus leads to higher recall. Some difference can also come from the fact that the sets of gold-segmented word types for other languages are much smaller than in Esto- nian and thus it would be interesting to see whether and how the results would change if the evaluation could be done on all word types in the corpus for other languages as well. In general, undersegmentation is more acceptable than oversegmentation, especially when the aim is to use the resulting segmentations in some NLP application. Next, we studied the convergence characteristics of our model. For these experiments we made five runs with random initializations on Estonian corpus and let the sampler run up to 1100 iterations. Samples were taken after each ten iterations. Figure 2 shows the log-likelihood of the samples plotted against iteration number. Dark lines show the averages over five runs and gray lines in the background are the likelihoods of real samples showing also the variance. We first calculated the full likelihood of the samples (the solid line) that showed a quick improvement during the first few iterations and then stabilized by continuing with only slow improvements over time. We then divided the full likelihood into two factors in order to see the contribution of both tagging and segmentation parts separately. The results are quite surprising. It turned out that the random tagging initializations are very good in terms of probability and as a matter of fact much better than the data can support and thus the tagging likelihood drops quite significantly after the first iteration and then continues with very slow improvements. The matters are totally different with segmentations where the initial random segmentations result in a low likelihood that improves heavily Table 1 : Tagging results for different languages. For each language we report median one-to-one (1-1), many-to-one (m-1) and V-measure (V-m) together with standard deviation from five runs where median is taken over V-measure.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1343, |
|
"end": 1351, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 2373, |
|
"end": 2380, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Types is the number of word types in each corpus, True is the number of gold tags and Induced reports the median number of tags induced by the model together with standard deviation. Best Pub. lists the best published results so far (also 1-1, m-1 and V-m) in (Christodoulopoulos et al., 2011) * , (Blunsom and Cohn, 2011) and (Lee et al., 2010) \u2020 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 293, |
|
"text": "(Christodoulopoulos et al., 2011)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 322, |
|
"text": "(Blunsom and Cohn, 2011)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Types Segmented Estonian without LLS 43.5 (0.8) 59.4 (0.6) 50.3 (0.7) 16820 16820 with LLS 42.8 (1.1) 54.6 (0.7) 48.0 (0.9) English without LLS 69.0 (1.3) 37.3 (1.5) 48.5 (1.1) 20628 399 with LLS 59.8 (1.8) 29.0 (1.0) 39.1 (1.3) Finnish without LLS 56.2 (2.5) 29.5 (1.7) 38.7 (2.0) 25364 292 with LLS 56.0 (1.1) 28.0 (0.6) 37.4 (0.7) Turkish without LLS 65.4 (1.8) 44.8 (1.8) 53.2 (1.7) 18459 293 with LLS 68.9 (0.8) 39.2 (1.0) 50.0 (0.6) Table 2 : Segmentation results on different languages. Results are calculated based on word types. For each language we report precision, recall and F1 measure, number of word types in the corpus and number of word types with gold standard segmentation available. For each language we report the segmentation result without and with emission likelihood scaling (without LLS and with LLS respectively).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 439, |
|
"end": 446, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Precision Recall F1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "with the first few iterations and then stabilizes but still continues to improve over time. The explanation for this kind of model behaviour needs further studies and we leave it for future work. Figure 3 plots the V-measure against the tagging factor of the log-likelihood for all samples. It can be seen that the lower V-measure values are more spread out in terms of likelihood. These points correspond to the early samples of the runs. The samples taken later during the runs are on the right in the figure and the positive correlation between the V-measure and likelihood values can be seen.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 204, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Precision Recall F1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Next we studied whether the morphological seg- 40.5 (1.5) 53.4 (1.0) 37.5 (1.3) Learned seg 47.6 (0.4) 64.5 (1.9) 45.6 (1.4) Precision Recall F1 Fixed tag 36.7 (0.3) 56.4 (0.2) 44.5 (0.3) Learned tag 42.8 (1.1) 54.6 (0.7) 48.0 (0.9) Morfessor 51.29 52.59 51.94 mentations and POS tags help each other in the learning process. For that we conducted two semisupervised experiments on Estonian corpus. First we provided gold standard segmentations to the model and let it only learn the tags. Then, we gave the model gold standard POS tags and only learned the segmentations. The results are given in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 598, |
|
"end": 605, |
|
"text": "Table 3", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Precision Recall F1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also added the results from joint unusupervised learning for easier comparison. Unfortunately we cannot repeat this experiment on other languages to see whether the results are stable across different languages because to our knowledge there is no other free corpus with both gold standard POS tags and morphological segmentations available. From the results it can be seen that the unsupervised learning results for both tagging and segmentation are better than the results obtained from semisupervised learning. This is surprising because one would assume that providing gold standard data would lead to better results. On the other hand, these results are encouraging, showing that learning two dependent tasks in a joint model by unsupervised manner can be as good or even better than learning the same tasks separately and providing the gold standard data as features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Precision Recall F1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, we learned the morphological segmentations with the state-of-the-art morphology induction system Morfessor baseline 6 (Creutz and Lagus, 2005) and report the best results in the last row of Table 3 . Apparently, our joint model cannot beat Morfessor in morphological segmentation and when 6 http://www.cis.hut.fi/projects/morpho/ using the emission likelihood scaling that influences the tagging results favorably, the segmentation results get even worse. Altough the semisupervised experiments showed that there are dependencies between tags and segmentations, the conducted experiments do not reveal of how to use these dependencies for helping the POS tags to learn better morphological segmentations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 151, |
|
"text": "(Creutz and Lagus, 2005)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 206, |
|
"text": "Table 3", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Precision Recall F1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We will review some of the recent works related to Bayesian POS induction and morphological segmentation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "One of the first Bayesian POS taggers is described in (Goldwater and Griffiths, 2007) . The model presented is a classical HMM with multinomial transition and emission distributions with Dirichlet priors. Inference is done using a collapsed Gibbs sampler and concentration parameter values are learned during inference. The model is token-based, allowing different words of the same type in different locations to have a different tag. This model can actually be classified as semi-supervised as it assumes the presence of a tagging dictionary that contains the list of possible POS tags for each word typean assumption that is clearly not realistic in an unsupervised setting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 85, |
|
"text": "(Goldwater and Griffiths, 2007)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Models presented in (Christodoulopoulos et al., 2011) and (Lee et al., 2010) are also built on Dirichlet-multinomials and, rather than defining a sequence model, present a clustering model based on features. Both report good results on type basis and use (among others) also morphological features, with (Lee et al., 2010) making use of fixed length suffixes and (Christodoulopoulos et al., 2011) using the suffixes obtained from an unsupervised morphology induction system. Nonparametric Bayesian POS induction has been studied in (Blunsom and Cohn, 2011) and (Gael et al., 2009) . The model in (Blunsom and Cohn, 2011) uses Pitman-Yor Process (PYP) prior but the model itself is finite in the sense that the size of the tagset is fixed. Their model also captures morphological regularities by modeling the generation of words with character n-grams. The model in (Gael et al., 2009) uses infinite state space with Dirichlet Process prior. The model structure is classical HMM consisting only of transitions and emissions and containing no morphological features. Inference is done by using beam sampler introduced in (Gael et al., 2008) which enables parallelized implementation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 53, |
|
"text": "(Christodoulopoulos et al., 2011)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 58, |
|
"end": 76, |
|
"text": "(Lee et al., 2010)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 396, |
|
"text": "(Christodoulopoulos et al., 2011)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 556, |
|
"text": "(Blunsom and Cohn, 2011)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 561, |
|
"end": 580, |
|
"text": "(Gael et al., 2009)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 620, |
|
"text": "(Blunsom and Cohn, 2011)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 865, |
|
"end": 884, |
|
"text": "(Gael et al., 2009)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1119, |
|
"end": 1138, |
|
"text": "(Gael et al., 2008)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "One close model for morphology stems from Bayesian word segmentation (Goldwater et al., 2009) where the task is to induce word borders from transcribed sentences. Our segmentation model is in principle the same as the unigram word segmentation model and the main difference is that we are using blocked sampler while (Goldwater et al., 2009) uses point-wise Gibbs sampler by drawing the presence or absence of the word border between every two characters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 93, |
|
"text": "(Goldwater et al., 2009)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 341, |
|
"text": "(Goldwater et al., 2009)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In the morphology is learned in the adaptor grammar framework by using a PYP adaptor. PYP adaptor caches the numbers of observed derivation trees and forces the distribution over all possible trees to take the shape of power law. In the PYP (and also DP) case the adaptor grammar can be interpreted as PYP (or DP) model with regular PCFG distribution as base measure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The model proposed in ) makes several assumptions that we do not: 1) segmentations have a fixed structure of stem and suffix; and 2) there is a fixed number of inflectional classes. Inference is performed with Gibbs sampler by sampling for each word its stem, suffix and inflectional class.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In this paper we presented a joint unsupervised model for learning POS tags and morphological segmentations with hierarchical Dirichlet Process model. Our model induces the number of POS clusters from data and does not contain any hand-tuned parameters. We tested the model on many languages and showed that by introcing a likelihood scaling heuristic it produces state-of-the-art POS induction results. We believe that the tagging results could further be improved by adding additional features concerning punctuation, capitalization etc. which are heavily used in the other state-of-the-art POS induction systems but these features were intentionally left out in the current model for enabling to test the concept of joint modelling of two dependent tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We found some evidence that the tasks of POS induction and morphological segmentation are dependent by conducting semisupervised experiments where we gave the model gold standard tags and segmentations in turn and let it learn only segmentations or tags respectively and found that the results in fully unsupervised setting are better. Despite of that, the model failed to learn as good segmentations as the state-of-the-art morphological segmentation model Morfessor. One way to improve the segmentation results could be to use segment bigrams instead of unigrams.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The model can serve as a basis for several further extensions. For example, one possibility would be to expand it into multilingual setting in a fashion of (Naseem et al., 2009) , or it could be extended to add the joint learning of morphological paradigms of the words given their tags and segmentations in a manner described by (Dreyer and Eisner, 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 177, |
|
"text": "(Naseem et al., 2009)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 355, |
|
"text": "(Dreyer and Eisner, 2011)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "http://ilk.uvt.nl/conll/free_data.html 2 http://research.ics.tkk.fi/events/ morphochallenge2010/datasets.shtml 3 http://www.statmt.org/europarl/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://research.ics.tkk.fi/events/ morphochallenge2009/datasets.shtml 5 http://www.cl.ut.ee/korpused/ morfkorpus/index.php?lang=eng", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank the anonymous reviewers who helped to improve the quality of this paper. This research was supported by the Estonian Ministry of Education and Research target-financed research theme no. 0140007s12, and by European Social Funds Doctoral Studies and Internationalisation Programme DoRa.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "In\u00c9cole d'\u00e9t\u00e9 de Probabilit\u00e9s de Saint-Flour, XIII-1983", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Aldous", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--198", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Aldous. 1985. Exchangeability and related topics. In\u00c9cole d'\u00e9t\u00e9 de Probabilit\u00e9s de Saint-Flour, XIII- 1983, pages 1-198. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Painless unsupervised learning with features", |
|
"authors": [ |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Berg-Kirkpatrick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Bouchard-C\u00f4t\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Denero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "582--590", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taylor Berg-Kirkpatrick, Alexandre Bouchard-C\u00f4t\u00e9, John DeNero, and Dan Klein. 2010. Painless unsu- pervised learning with features. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Com- putational Linguistics, pages 582-590.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A hierarchical Pitman-Yor process HMM for unsupervised Part of Speech induction", |
|
"authors": [ |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "865--874", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phil Blunsom and Trevor Cohn. 2011. A hierarchical Pitman-Yor process HMM for unsupervised Part of Speech induction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguis- tics: Human Language Technologies -Volume 1, pages 865-874.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Two decades of unsupervised POS induction: How far have we come?", |
|
"authors": [ |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Christodoulopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "575--584", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christos Christodoulopoulos, Sharon Goldwater, and Mark Steedman. 2010. Two decades of unsupervised POS induction: How far have we come? In Proceed- ings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 575-584.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A Bayesian mixture model for PoS induction using multiple features", |
|
"authors": [ |
|
{ |
|
"first": "Christos", |
|
"middle": [], |
|
"last": "Christodoulopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharo", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "638--647", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christos Christodoulopoulos, Sharo Goldwater, and Mark Steedman. 2011. A Bayesian mixture model for PoS induction using multiple features. In Proceed- ings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 638-647, Edin- burgh, Scotland, UK.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Combining distributional and morphological information for Part of Speech induction", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Tenth Conference on European Chapter", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "59--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Clark. 2003. Combining distributional and morphological information for Part of Speech induc- tion. In Proceedings of the Tenth Conference on Eu- ropean Chapter of the Association for Computational Linguistics -Volume 1, pages 59-66.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Inducing the morphological lexicon of a natural language from unannotated text", |
|
"authors": [ |
|
{ |
|
"first": "Mathias", |
|
"middle": [], |
|
"last": "Creutz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krista", |
|
"middle": [], |
|
"last": "Lagus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "106--113", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mathias Creutz and Krista Lagus. 2005. Inducing the morphological lexicon of a natural language from unannotated text. In In Proceedings of the Inter- national and Interdisciplinary Conference on Adap- tive Knowledge Representation and Reasoning, pages 106-113.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Discovering morphological paradigms from plain text using a Dirichlet Process mixture model", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dreyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "616--627", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Dreyer and Jason Eisner. 2011. Discover- ing morphological paradigms from plain text using a Dirichlet Process mixture model. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 616-627.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "MULTEXT-East version 4: Multilingual morphosyntactic specifications, lexicons and corpora", |
|
"authors": [ |
|
{ |
|
"first": "Toma", |
|
"middle": [ |
|
"Erjavec" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Toma Erjavec. 2010. MULTEXT-East version 4: Mul- tilingual morphosyntactic specifications, lexicons and corpora. In Proceedings of the Seventh International Conference on Language Resources and Evaluation.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bayesian density estimation and inference using mixtures", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Escobar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "West", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Journal of the American Statistical Association", |
|
"volume": "", |
|
"issue": "430", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael D. Escobar and Mike West. 1995. Bayesian density estimation and inference using mixtures. Jour- nal of the American Statistical Association, 90(430).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A Bayesian analysis of some nonparametric problems", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Ferguson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "The Annals of Statistics", |
|
"volume": "1", |
|
"issue": "2", |
|
"pages": "209--230", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas S. Ferguson. 1973. A Bayesian analysis of some nonparametric problems. The Annals of Statis- tics, 1(2):209-230.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Beam sampling for the infinite Hidden Markov Model", |
|
"authors": [ |
|
{ |
|
"first": "Jurgen", |
|
"middle": [], |
|
"last": "Van Gael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunus", |
|
"middle": [], |
|
"last": "Saatci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yee", |
|
"middle": [ |
|
"Whye" |
|
], |
|
"last": "Teh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zoubin", |
|
"middle": [], |
|
"last": "Ghahramani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 25th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1088--1095", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jurgen Van Gael, Yunus Saatci, Yee Whye Teh, and Zoubin Ghahramani. 2008. Beam sampling for the infinite Hidden Markov Model. In Proceedings of the 25th International Conference on Machine Learning, pages 1088-1095.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "The infinite HMM for unsupervised PoS tagging", |
|
"authors": [ |
|
{ |
|
"first": "Jurgen", |
|
"middle": [], |
|
"last": "Van Gael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zoubin", |
|
"middle": [], |
|
"last": "Ghahramani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "678--687", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jurgen Van Gael, Andreas Vlachos, and Zoubin Ghahra- mani. 2009. The infinite HMM for unsupervised PoS tagging. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 -Volume 2, pages 678-687.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A fully Bayesian approach to unsupervised Part-of-Speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "744--751", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sharon Goldwater and Tom Griffiths. 2007. A fully Bayesian approach to unsupervised Part-of-Speech tagging. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 744-751, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Interpolating between types and tokens by estimating power-law generators", |
|
"authors": [ |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "18", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark John- son. 2006. Interpolating between types and tokens by estimating power-law generators. In Advances in Neu- ral Information Processing Systems 18, Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A Bayesian framework for word segmentation: Exploring the effects of context", |
|
"authors": [ |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Cognition", |
|
"volume": "112", |
|
"issue": "", |
|
"pages": "21--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark John- son. 2009. A Bayesian framework for word segmen- tation: Exploring the effects of context. Cognition, 112:21-54.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Prototype-driven learning for sequence models", |
|
"authors": [ |
|
{ |
|
"first": "Aria", |
|
"middle": [], |
|
"last": "Haghighi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "320--327", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 320-327, New York City, USA.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Adaptor grammars: A framework for specifying compositional nonparametric Bayesian models", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Advances in Neural Information Processing Systems 19", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "641--648", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Johnson, Thomas L. Griffiths, and Sharon Goldwa- ter. 2006. Adaptor grammars: A framework for speci- fying compositional nonparametric Bayesian models. In Advances in Neural Information Processing Sys- tems 19, pages 641-648.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Simple type-level unsupervised POS tagging", |
|
"authors": [ |
|
{ |
|
"first": "Aria", |
|
"middle": [], |
|
"last": "Yoong Keok Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Haghighi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "853--861", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoong Keok Lee, Aria Haghighi, and Regina Barzilay. 2010. Simple type-level unsupervised POS tagging. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 853- 861.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Bayesian unsupervised word segmentation with nested Pitman-Yor language modeling", |
|
"authors": [ |
|
{ |
|
"first": "Daichi", |
|
"middle": [], |
|
"last": "Mochihashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takeshi", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naonori", |
|
"middle": [], |
|
"last": "Ueda", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "100--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested Pitman-Yor language modeling. In Proceed- ings of the Joint Conference of the 47th Annual Meet- ing of the ACL and the 4th International Joint Confer- ence on Natural Language Processing of the AFNLP: Volume 1 -Volume 1, pages 100-108.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Multilingual part-of-speech tagging: Two unsupervised approaches", |
|
"authors": [ |
|
{ |
|
"first": "Tahira", |
|
"middle": [], |
|
"last": "Naseem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Snyder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "36", |
|
"issue": "", |
|
"pages": "1--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tahira Naseem, Benjamin Snyder, Jacob Eisenstein, and Regina Barzilay. 2009. Multilingual part-of-speech tagging: Two unsupervised approaches. Journal of Ar- tificial Intelligence Research, 36:1-45.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A tutorial on Hidden Markov Models and selected applications in speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Lawrence", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Rabiner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the IEEE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "257--286", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lawrence R. Rabiner. 1989. A tutorial on Hidden Markov Models and selected applications in speech recognition. In Proceedings of the IEEE, pages 257- 286.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Vmeasure: A conditional entropy-based external cluster evaluation measure", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Rosenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hirschberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "410--420", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Rosenberg and Julia Hirschberg. 2007. V- measure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning (EMNLP-CoNLL), pages 410-420, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Hierarchical Dirichlet processes", |
|
"authors": [ |
|
{ |
|
"first": "Yee Whye", |
|
"middle": [], |
|
"last": "Teh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Beal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Journal of the American Statistical Association", |
|
"volume": "101", |
|
"issue": "476", |
|
"pages": "1566--1581", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yee Whye Teh, Michel I. Jordan, Matthew J. Beal, and David M. Blei. 2006. Hierarchical Dirichlet pro- cesses. Journal of the American Statistical Associa- tion, 101(476):1566-1581.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A hierarchical Bayesian language model based on Pitman-Yor processes", |
|
"authors": [ |
|
{ |
|
"first": "Yee Whye", |
|
"middle": [], |
|
"last": "Teh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "985--992", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yee Whye Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Associa- tion for Computational Linguistics, pages 985-992.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Plate diagram representation of the model. t is, w i -s and s i -s denote the tags, words and segmentations respectively. G-s are various DP-s in the model, E j -s and \u03b2 j -s are the tag-specific emission distributions and their respective Dirichlet prior parameters. H is Gamma base distribution. S is the base distribution over segments. Coupled DP concetrations parameters have been omitted for clarity.", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": "where c n m denotes the characters c m \u2022 \u2022 \u2022 c n of a string c and L is the length of the word.", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"text": "Log-likelihood of samples plotted against iterations. Dark lines show the average over five runs, grey lines in the back show the real samples.", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"text": "Tagging part of log-likelihood plotted against V", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td>: Tagging and segmentation results on Estonian</td></tr><tr><td>Multext-East corpus (Learned seg and Learned tag) com-</td></tr><tr><td>pared to the semisupervised setting where segmentations</td></tr><tr><td>are fixed to gold standard (Fixed seg) and tags are fixed</td></tr><tr><td>to gold standard (Fixed tag). Finally the segmentatation</td></tr><tr><td>results from Morfessor system for comparison are pre-</td></tr><tr><td>sented.</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |