{
"paper_id": "Q15-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:07:43.013120Z"
},
"title": "Domain Adaptation for Syntactic and Semantic Dependency Parsing Using Deep Belief Networks",
"authors": [
{
"first": "Haitong",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Tao",
"middle": [],
"last": "Zhuang",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "Chinese Academy of Sciences",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In current systems for syntactic and semantic dependency parsing, people usually define a very high-dimensional feature space to achieve good performance. But these systems often suffer severe performance drops on outof-domain test data due to the diversity of features of different domains. This paper focuses on how to relieve this domain adaptation problem with the help of unlabeled target domain data. We propose a deep learning method to adapt both syntactic and semantic parsers. With additional unlabeled target domain data, our method can learn a latent feature representation (LFR) that is beneficial to both domains. Experiments on English data in the CoNLL 2009 shared task show that our method largely reduced the performance drop on out-of-domain test data. Moreover, we get a Macro F1 score that is 2.32 points higher than the best system in the CoNLL 2009 shared task in out-of-domain tests.",
"pdf_parse": {
"paper_id": "Q15-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "In current systems for syntactic and semantic dependency parsing, people usually define a very high-dimensional feature space to achieve good performance. But these systems often suffer severe performance drops on outof-domain test data due to the diversity of features of different domains. This paper focuses on how to relieve this domain adaptation problem with the help of unlabeled target domain data. We propose a deep learning method to adapt both syntactic and semantic parsers. With additional unlabeled target domain data, our method can learn a latent feature representation (LFR) that is beneficial to both domains. Experiments on English data in the CoNLL 2009 shared task show that our method largely reduced the performance drop on out-of-domain test data. Moreover, we get a Macro F1 score that is 2.32 points higher than the best system in the CoNLL 2009 shared task in out-of-domain tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Both syntactic and semantic dependency parsing are the standard tasks in the NLP community. The stateof-the-art model performs well if the test data comes from the domain of the training data. But if the test data comes from a different domain, the performance drops severely. The results of the shared tasks of CoNLL 2008 (Surdeanu et al., 2008 Haji\u010d et al., 2009) also substantiates the argument. To relieve the domain adaptation, in this paper, we propose a deep learning method for both syntactic and semantic parsers. We focus on the situation that, besides source domain training data and target domain test data, we also have some unlabeled target domain data.",
"cite_spans": [
{
"start": 312,
"end": 322,
"text": "CoNLL 2008",
"ref_id": null
},
{
"start": 323,
"end": 345,
"text": "(Surdeanu et al., 2008",
"ref_id": "BIBREF27"
},
{
"start": 346,
"end": 365,
"text": "Haji\u010d et al., 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many syntactic and semantic parsers are developed using a supervised learning paradigm, where each data sample is represented as a vector of features, usually a high-dimensional feature. The performance degradation on target domain test data is mainly caused by the diversity of features of different domains, i.e., many features in target domain test data are never seen in source domain training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work have shown that using word clusters to replace the sparse lexicalized features (Koo et al., 2008; Turian et al., 2010) , helps relieve the performance degradation on the target domain. But for syntactic and semantic parsing, people also use a lot of syntactic features, i.e., features extracted from syntactic trees. For example, the relation path between a predicate and an argument is a syntactic feature used in semantic dependency parsing (Johansson and Nugues, 2008) . Figure 1 shows an example of this relation path feature. Obviously, syntactic features like this are also very sparse and usually specific to each domain. The method of clustering fails in generalizing these kinds of features. Our method, however, is very different from clustering specific features and substituting these features using their clusters. Instead, we attack the domain adaption problem by learning a latent feature representation (LFR) for different domains, which is similar to Titov (2011) . Formally, we propose a Deep Belief Network (DBN) model to represent a data sample using a vector of latent features. This latent feature vector is inferred by our DBN model based on the data sample's original feature vector. Our DBN model is trained unsupervisedly on original feature vectors of data in both domains: training data from the source domain, and unlabeled data from the target domain. So our DBN model can produce a common feature representation for data from both domains. A common feature representation can make two domains more similar and thus is very helpful for domain adaptation (Blitzer, 2006) . Discriminative models using our latent features adapt better to the target domain than models using original features. Discriminative models in syntactic and semantic parsers usually use millions of features. Applying a typical DBN to learn a sensible LFR on that many original features is computationally too expensive and impractical (Raina et al., 2009) . Therefore, we constrain the DBN by splitting the original features into groups. In this way, we largely reduce the computational cost and make LFR learning practical. We carried out experiments on the English data of the CoNLL 2009 shared task. We use a basic pipelined system and compare the effectiveness of the two feature representations: original feature representation and our LFR. Using the original features, the performance drop on out-of-domain test data is 10.58 points in Macro F1 score. In contrast, using the LFR, the performance drop is only 4.97 points. And we have achieved a Macro F1 score of 80.83% on the out-of-domain test data. As far as we know, this is the best result on this data set to date.",
"cite_spans": [
{
"start": 93,
"end": 111,
"text": "(Koo et al., 2008;",
"ref_id": "BIBREF18"
},
{
"start": 112,
"end": 132,
"text": "Turian et al., 2010)",
"ref_id": "BIBREF29"
},
{
"start": 457,
"end": 485,
"text": "(Johansson and Nugues, 2008)",
"ref_id": "BIBREF17"
},
{
"start": 982,
"end": 994,
"text": "Titov (2011)",
"ref_id": "BIBREF28"
},
{
"start": 1598,
"end": 1613,
"text": "(Blitzer, 2006)",
"ref_id": "BIBREF1"
},
{
"start": 1952,
"end": 1972,
"text": "(Raina et al., 2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 488,
"end": 496,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dependency parsing and semantic role labeling are two standard tasks in the NLP community. There have been many works on the two tasks (McDonald et al., 2005; Gildea and Jurafsky, 2002; Yang and Zong, 2014; Zhuang and Zong, 2010a; Zhuang and Zong, 2010b, etc) . Among them, researches on domain adaptation for dependency parsing and SRL are directly related to our work. Dredze et al., (2007) show that domain adaptation is hard for dependency parsing based on results in the CoNLL 2007 shared task . Chen et al., (2008) adapted a syntactic dependency parser by learning reliable information on shorter dependencies in unlabeled target domain data. But they do not consider the task of semantic dependency parsing. Huang et al., (2010) used an HMM-based latent variable language model to adapt a SRL system. Their method is tailored for a chunking-based SRL system and can hardly be applied to our dependency based task. Weston et al., (2008) used deep neural networks to improve an SRL system. But their tests are on in-domain data.",
"cite_spans": [
{
"start": 135,
"end": 158,
"text": "(McDonald et al., 2005;",
"ref_id": "BIBREF20"
},
{
"start": 159,
"end": 185,
"text": "Gildea and Jurafsky, 2002;",
"ref_id": "BIBREF9"
},
{
"start": 186,
"end": 206,
"text": "Yang and Zong, 2014;",
"ref_id": "BIBREF32"
},
{
"start": 207,
"end": 230,
"text": "Zhuang and Zong, 2010a;",
"ref_id": "BIBREF35"
},
{
"start": 231,
"end": 259,
"text": "Zhuang and Zong, 2010b, etc)",
"ref_id": null
},
{
"start": 371,
"end": 392,
"text": "Dredze et al., (2007)",
"ref_id": "BIBREF7"
},
{
"start": 501,
"end": 520,
"text": "Chen et al., (2008)",
"ref_id": "BIBREF4"
},
{
"start": 715,
"end": 735,
"text": "Huang et al., (2010)",
"ref_id": null
},
{
"start": 921,
"end": 942,
"text": "Weston et al., (2008)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "On methodology, the work in Glorot et al., (2011) and Titov (2011) is closely related to ours. They also focus on learning LFRs for domain adaptation. However, their work deals with domain adaptation for sentiment classification, which uses much fewer features and training samples. So they do not need to worry about computational cost as much as we do. Titov (2011) used a graphical model that has only one layer of hidden variables. On contrast, we need to use a model with two layers of hidden variables and split the first hidden layer to reduce computational cost. The model of Titov (2011) also embodies a specific classifier. But our model is independent of the classifier to be used. Glorot et al., (2011) used a model called Stacked Denoising Auto-Encoders, which also contains multiple hidden layers. However, they do not exploit the hierarchical structure of their model to reduce computational cost. By splitting, our model contains much less parameters than theirs. In fact, the models in Glorot et al., (2011) and Titov (2011) cannot be applied to our task simply because of the high computational cost.",
"cite_spans": [
{
"start": 28,
"end": 49,
"text": "Glorot et al., (2011)",
"ref_id": "BIBREF8"
},
{
"start": 54,
"end": 66,
"text": "Titov (2011)",
"ref_id": "BIBREF28"
},
{
"start": 693,
"end": 714,
"text": "Glorot et al., (2011)",
"ref_id": "BIBREF8"
},
{
"start": 1003,
"end": 1024,
"text": "Glorot et al., (2011)",
"ref_id": "BIBREF8"
},
{
"start": 1029,
"end": 1041,
"text": "Titov (2011)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In discriminative models, each data sample is represented as a vector of features. Our DBN model maps this original feature vector to a vector of latent features. And we use this latent feature vector to represent the sample, i.e., we replace the whole original feature vector by the latent feature vector. In this section, we introduce how our DBN model represent a data sample as a vector of latent features. Before introducing our DBN model, we first review a simpler model called Restricted Boltzman Machines (RBM) . When training a DBN model, RBM is used as a basic unit in a DBN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our DBN Model for LFR",
"sec_num": "3"
},
{
"text": "An RBM is an undirected graphical model with a layer of visible variables",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Boltzmann Machines",
"sec_num": "3.1"
},
{
"text": "v = (v 1 , ..., v m )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Boltzmann Machines",
"sec_num": "3.1"
},
{
"text": ", and a layer of hidden variables h = (h 1 , ..., h n ). These variables are binary. The parameters of an RBM are \u03b8 = (W, a, b) where W = (W ij ) m\u00d7n is a matrix with W ij being the weight for the edge between v i and h j , and a = (a 1 , ..., a m ), b = (b 1 , ..., b n ) are bias vectors for v and h respectively. The probabilistic model of an RBM is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Boltzmann Machines",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(v, h|\u03b8) = 1 Z(\u03b8) exp(\u2212E(v, h))",
"eq_num": "(1)"
}
],
"section": "Restricted Boltzmann Machines",
"sec_num": "3.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Boltzmann Machines",
"sec_num": "3.1"
},
{
"text": "E(v, h) = \u2212 m i=1 a i v i \u2212 n j=1 b j h j \u2212 m i=1 n j=1 v i w ij h j Z(\u03b8) = v,h exp(\u2212E(v, h))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Boltzmann Machines",
"sec_num": "3.1"
},
{
"text": "Because the connections in an RBM are only between visible and hidden variables, the conditional distribution over a hidden or a visible variable is quite simple:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Boltzmann Machines",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(h j = 1|v) = \u03c3(b j + m i=1 v i w ij ) (2) p(v i = 1|h) = \u03c3(a i + n j=1 h i w ij )",
"eq_num": "(3)"
}
],
"section": "Restricted Boltzmann Machines",
"sec_num": "3.1"
},
{
"text": "where \u03c3(x) = 1/(1 + exp(\u2212x)) is the logistic sigmoid function. An RBM can be efficiently trained on a sequence of visible vectors using the Contrastive Divergence method (Hinton, 2002) .",
"cite_spans": [
{
"start": 170,
"end": 184,
"text": "(Hinton, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Boltzmann Machines",
"sec_num": "3.1"
},
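{
"text": "These conditional distributions are all that Contrastive Divergence needs. As a concrete illustration (a minimal NumPy sketch of our own, not the authors' code; all names are ours), Equations 2 and 3 together with a single CD-1 parameter update for one sample can be written as:\n\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef cd1_update(v0, W, a, b, lr=0.3):\n    # up-pass: p(h = 1 | v0) as in Equation 2, then sample binary hidden states\n    ph0 = sigmoid(b + v0 @ W)\n    h0 = (np.random.rand(*ph0.shape) < ph0).astype(float)\n    # down-pass: p(v = 1 | h0) as in Equation 3, then up again for the reconstruction\n    pv1 = sigmoid(a + h0 @ W.T)\n    ph1 = sigmoid(b + pv1 @ W)\n    # CD-1 approximation of the gradient and parameter update\n    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))\n    a += lr * (v0 - pv1)\n    b += lr * (ph0 - ph1)\n    return W, a, b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Boltzmann Machines",
"sec_num": "3.1"
},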
{
"text": "In our syntactic and semantic parsing task, all features are binary. So each data sample (an shift action in syntactic parsing or an argument candidate in semantic parsing) is represented as a binary feature vector. By treating a sample's feature vector as visible variable vector in an RBM, and taking hidden variables as latent features, we could get the LFR of this sample using the RBM. However, for our syntactic and semantic parsing tasks, training such an RBM is computationally impractical due to the following considerations. Let m, n denote respectively the number of visible and hidden variables in the RBM. Then there are O(mn) parameters in this RBM. If we train the RBM on d samples, then the time complexity for Contrastive Divergence training is O(mnd). For syntactic or semantic parsing, there are over 1 million unique binary features, and millions of training samples. That means both m and d are in an order of 10 6 . With m and n of that order, n should not be chosen too small to get a sensible LFR (Hinton, 2010) . Our experience indicates that n should be at least in an order of 10 3 . Now we see why the O(mnd) complexity is formidable for our task.",
"cite_spans": [
{
"start": 1021,
"end": 1035,
"text": "(Hinton, 2010)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Large Scale",
"sec_num": "3.2"
},
{
"text": "A DBN is a probabilistic generative model that is composed of multiple layers of stochastic, latent variables . The motivation of using a DBN is two-fold. First, previous research has shown that a deep network can capture high-level correlations between visible variables better than an RBM (Bengio, 2009) . Second, as shown in the preceding subsection, the large scale of our task poses ... a great challenge for learning an LFR. By manipulating the hierarchical structure of a DBN, we can significantly reduce the number of parameters in the DBN model. This largely reduces the computational cost for training the DBN. Without this technique, it is impractical to learn a DBN model with that many parameters on large training sets.",
"cite_spans": [
{
"start": 291,
"end": 305,
"text": "(Bengio, 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our DBN Model",
"sec_num": "3.3"
},
{
"text": "As shown in Fig.3 , our DBN model contains 2 layers of hidden variables: h 1 , h 2 , and a visible vector v. The visible vector corresponds to a sample's original feature vector. The second-layer hidden variable vector h 2 are used as the LFR of this sample.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 17,
"text": "Fig.3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Our DBN Model",
"sec_num": "3.3"
},
{
"text": "Suppose there are m, n 1 , n 2 variables in v, h 1 , h 2 respectively. To reduce the number of parameters in the DBN, we split its first layer (h 1 \u2212 v) into k groups, as we will explain in the following subsection. We confine the connections in this layer to variables within the same group. So there are only mn 1 /k parameters in the first layer. Without splitting, the number of parameters would be mn 1 . Therefore, learning that many parameters requires too much computation. By splitting, we reduce the number of parameters by a factor of k. If we choose k big enough, learning is feasible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our DBN Model",
"sec_num": "3.3"
},
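{
"text": "As a concrete illustration using the feature counts reported later in Section 5.1.2 (for syntactic parsing, m = 748,598 original features and n_1 = 7,486 first-layer hidden variables), a fully connected first layer would have m \u00d7 n_1 \u2248 5.6 \u00d7 10^9 weights, whereas splitting it into k = 200 groups leaves only about mn_1/k \u2248 2.8 \u00d7 10^7, a reduction by the factor k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our DBN Model",
"sec_num": "3.3"
},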
{
"text": "The second layer (h 2 \u2212 h 1 ) is fully connected, so that the variables in the second layer can capture the relations between variables in different groups in the first layer. There are n 1 n 2 parameters in the second layers. Because n 1 and n 2 are relatively small, learning the parameters in the second layer is also feasible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our DBN Model",
"sec_num": "3.3"
},
{
"text": "In summary, by splitting the first layer into groups, we have largely reduced the number of pa-rameters in our DBN model. This makes learning our DBN model practical for our task. In our task, visible variables corresponds to original binary features and the second layer hidden variables are used as the LFR of these original features. One deficiency of splitting is that the relationships between original features in different groups can not be captured by hidden variables in the first layer. However, this deficiency is compensated by using the second layer to capture relationships between all variables in the first layer. In this way, the second layer still captures the relationships between all original features indirectly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our DBN Model",
"sec_num": "3.3"
},
{
"text": "When we split the first layer into k groups, every group, except the last one, contains m/k visible variables and n 1 /k hidden variables. The last group contains the remaining visible and hidden variables. But how to split the visible variables, i.e., the original features, into these groups? Of course there are many ways to split the original features. But it is difficult to find a good principle to split. So we tried two splitting strategies in this paper. The first strategy is very simple. We arrange all features as the order they appeared in the training data . Suppose each group contains r original features. We just put the first r unique features of training data into the first group, the following r unique features into the second group, and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Splitting Features into Groups",
"sec_num": "3.3.1"
},
{
"text": "The second strategy is more sophisticated. All features can be divided into three categories: the common features, the source-specific features and the target-specific features. Its main idea is to make each group contain the three categories of features evenly, which we think makes the distribution of features close to the 'true' distribution over domains. Let F s and F t denote the sets of features that appeared on source and target domain data respectively. We collect F s and F t from our training data. The features in F s and F t are are ordered the same as the order they appeared in training data. And let F s\u2229t = F s \u2229 F t (the common features), F s\\t = F s \\F t (the source-specific features), F t\\s = F t \\F s (the target-specific features). So, to evenly distribute features in F s\u2229t , F s\\t and F t\\s to each group, each group should consist of |F s\u2229t |/k, |F s\\t |/k and |F t\\s |/k features from F s\u2229t , F s\\t and F t\\s respec-tively. Therefore, we put the first |F s\u2229t |/k features from F s\u2229t , the first |F s\\t |/k features from F s\\t and the first |F t\\s |/k features from F t\\s into the first group. Similarly, we put the second |F s\u2229t |/k features from F s\u2229t , the second |F s\\t |/k features from F s\\t and the second |F t\\s |/k features from F t\\s into the second group. The intuition of this strategy is to let features in F s\u2229t act as pivot features that link features in F s\\t and F t\\s in each group. In this way, the first hidden layer might capture better relationships between features from source and target domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Splitting Features into Groups",
"sec_num": "3.3.1"
},
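{
"text": "A minimal sketch of the second splitting strategy (our own illustration; feature ids are assumed to be integers already ordered by their first appearance in the training data):\n\ndef split_into_groups(common, src_only, tgt_only, k):\n    # common, src_only, tgt_only: feature ids of the common, source-specific\n    # and target-specific categories, each ordered by first appearance.\n    # Returns k groups, each with roughly an equal share of every category.\n    groups = [[] for _ in range(k)]\n    for feats in (common, src_only, tgt_only):\n        size = (len(feats) + k - 1) // k  # about |feats|/k features per group\n        for g in range(k):\n            groups[g].extend(feats[g * size:(g + 1) * size])\n    return groups",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Splitting Features into Groups",
"sec_num": "3.3.1"
},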
{
"text": "Given a sample represented as a vector of original features, our DBN model will represent it as a vector of latent features. The sample's original feature vector corresponds to the visible vector v in our DBN model in Figure 3 . Our DBN model uses the second-layer hidden variable vector h 2 to represent this sample. Therefore, we must infer the value of hidden variables in the second-layer given the visible vector. This inference can be done using the methods in . Given the visible vector, the values of the hidden variables in every layer can be efficiently inferred in a single, bottomup pass.",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 226,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "LFR of a Sample",
"sec_num": "3.3.2"
},
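{
"text": "A minimal sketch of this bottom-up pass (our own illustration; it assumes the group-wise first-layer parameters W1[g], b1[g] and the fully connected second-layer parameters W2, b2 have already been trained):\n\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef infer_lfr(v, groups, W1, b1, W2, b2):\n    # v: binary original feature vector; groups: one index array per group;\n    # W1[g], b1[g]: weights and hidden biases of the g-th first-layer RBM;\n    # W2, b2: weights and hidden biases of the second-layer RBM.\n    h1 = np.concatenate([sigmoid(b1[g] + v[groups[g]] @ W1[g])\n                         for g in range(len(groups))])\n    h2 = sigmoid(b2 + h1 @ W2)\n    return h2  # used as the latent feature representation of the sample",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LFR of a Sample",
"sec_num": "3.3.2"
},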
{
"text": "Inference in a DBN is simple and fast. Nonetheless, training a DBN is more complicated. A DBN can be trained in two stages: greedy layer-wise pretraining and fine tuning .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Our DBN Model",
"sec_num": "3.4"
},
{
"text": "In this stage, the DBN is treated as a stack of RBMs as shown in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Greedy Layer-wise Pretraining",
"sec_num": "3.4.1"
},
{
"text": "The second layer is treated as a single RBM. The first layer is treated as k parallel RBMs with each group being one RBM. These k RBMs are parallel because their visible variable vectors constitute a partition of the original feature vector. In this stage, we train these constituent RBMs in a bottomup layer-wise manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Greedy Layer-wise Pretraining",
"sec_num": "3.4.1"
},
{
"text": "To learn parameters in the first layer, we only need to learn the parameters of each RBM in the first layer. With the original feature vector v given, these k RBMs can be trained using the Contrastive Divergence method (Hinton, 2002) . After the first layer is ... trained, we will fix the parameters in the first layer and start to train the second layer.",
"cite_spans": [
{
"start": 219,
"end": 233,
"text": "(Hinton, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Greedy Layer-wise Pretraining",
"sec_num": "3.4.1"
},
{
"text": "For the RBM of the second layer, its visible variables are the hidden variables in the first layer. Given an original feature vector v, we first infer the activation probabilities for the hidden variables in the first layer using equation 2. And we use these activation probabilities as values for visible variables in the second layer RBM. Then we train the second layer RBM using contrastive divergence algorithm. Note that the activation probabilities are not binary values. But this is only a trick for training because using probabilities generally produces better models . This trick does not change our assumption that each variable is binary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Greedy Layer-wise Pretraining",
"sec_num": "3.4.1"
},
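{
"text": "Putting the two steps together, a schematic sketch of the layer-wise procedure (our own simplified illustration, not the authors' code; cd1_update stands for a Contrastive Divergence trainer such as the one sketched in Section 3.1, and mini-batching is omitted):\n\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef pretrain_dbn(data, groups, n1_per_group, n2, cd1_update, epochs=30):\n    # data: original binary feature vectors (source training data plus\n    # unlabeled target domain data); groups: one index array per group.\n    layer1 = []\n    for idx in groups:  # stage 1: one small RBM per group, on its slice of v\n        W = 0.01 * np.random.randn(len(idx), n1_per_group)\n        a, b = np.zeros(len(idx)), np.zeros(n1_per_group)\n        for _ in range(epochs):\n            for v in data:\n                W, a, b = cd1_update(v[idx], W, a, b)\n        layer1.append((idx, W, b))\n    # stage 2: feed layer-1 activation probabilities (not binary samples)\n    # to the fully connected second-layer RBM and train it with CD as well\n    H = [np.concatenate([sigmoid(b + v[idx] @ W) for idx, W, b in layer1])\n         for v in data]\n    W2 = 0.01 * np.random.randn(len(H[0]), n2)\n    a2, b2 = np.zeros(len(H[0])), np.zeros(n2)\n    for _ in range(epochs):\n        for h in H:\n            W2, a2, b2 = cd1_update(h, W2, a2, b2)\n    return layer1, (W2, b2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Greedy Layer-wise Pretraining",
"sec_num": "3.4.1"
},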
{
"text": "The greedy layer-wise pretraining initializes the parameters of our DBN to sensible values. But these values are not optimal and the parameters need to be fine tuned. For fine tuning, we unroll the DBN to form an autoencoder as in Hinton and Salakhutdinov (2006) , which is shown in Figure 5 .",
"cite_spans": [
{
"start": 231,
"end": 262,
"text": "Hinton and Salakhutdinov (2006)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 283,
"end": 291,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Fine Tuning",
"sec_num": "3.4.2"
},
{
"text": "In this autoencoder, the stochastic activities of binary hidden variables are replaced by its activation probabilities. So the autoencoder is in essence a feed-forward neural network. We tune the parameters of our DBN model on this autoencoder using backpropagation algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine Tuning",
"sec_num": "3.4.2"
},
{
"text": "In this section, we introduce how to use our DBN model to adapt a basic syntactic and semantic de- pendency parsing system to target domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation with Our DBN Model",
"sec_num": "4"
},
{
"text": "We build a typical pipelined system, which first analyze syntactic dependencies, and then analyze semantic dependencies. This basic system only serves as a platform for experimenting with different feature representations. So we just briefly introduce our basic system in this subsection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Basic Pipelined System",
"sec_num": "4.1"
},
{
"text": "For syntactic dependency parsing, we use a deterministic shift-reduce method as in Nivre et al., (2006) . It has four basic actions: left-arc, right-arc, shift, and reduce. A classifier is used to determine an action at each step. To decide the label for each dependency link, we extend the left/right-arc actions to their corresponding multi-label actions, leading to 31 left-arc and 66 right-arc actions. Altogether a 99class problem is yielded for parsing action classification. We add arcs to the dependency graph in an arc eager manner as in . We also projectivize the non-projective sequences in training data using the transformation from Nivre and Nilsson (2005) . A maximum entropy classifier is used to make decisions at each step. The features utilized are the same as those in Zhao et al., (2008) .",
"cite_spans": [
{
"start": 83,
"end": 103,
"text": "Nivre et al., (2006)",
"ref_id": "BIBREF23"
},
{
"start": 646,
"end": 670,
"text": "Nivre and Nilsson (2005)",
"ref_id": "BIBREF24"
},
{
"start": 789,
"end": 808,
"text": "Zhao et al., (2008)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependency Parsing",
"sec_num": "4.1.1"
},
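{
"text": "For concreteness, a minimal sketch of the four basic unlabeled arc-eager transitions (our own simplification; the real parser additionally chooses among the 97 labeled arc actions and scores all 99 actions with the maximum entropy classifier):\n\ndef apply_action(action, stack, buffer, arcs):\n    # stack and buffer hold word indices; arcs is a set of (head, dependent) pairs\n    if action == 'LEFT-ARC':      # stack top becomes a dependent of the buffer front\n        arcs.add((buffer[0], stack.pop()))\n    elif action == 'RIGHT-ARC':   # buffer front becomes a dependent of the stack top\n        arcs.add((stack[-1], buffer[0]))\n        stack.append(buffer.pop(0))\n    elif action == 'SHIFT':\n        stack.append(buffer.pop(0))\n    elif action == 'REDUCE':\n        stack.pop()\n    return stack, buffer, arcs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Dependency Parsing",
"sec_num": "4.1.1"
},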
{
"text": "Our semantic dependency parser is similar to the one in Che et al., (2009) . We first train a predicate sense classifier on training data, using the same features as in Che et al., (2009) . Again, a maximum en-tropy classifier is employed. Given a predicate, we need to decide its semantic dependency relation with each word in the sentence. To reduce the number of argument candidates, we adopt the pruning strategy in Zhao et al., (2009) , which is adapted from the strategy in Xue and Palmer (2004) . In the semantic role classification stage, we use a maximum entropy classifier to predict the probabilities of a candidate to be each semantic role. We train two different classifiers for verb and noun predicates using the same features as in Che et al., (2009) . We use a simple method for post processing. If there are duplicate arguments for ARG0\u223cARG5, we preserve the one with the highest classification probability and remove its duplicates.",
"cite_spans": [
{
"start": 56,
"end": 74,
"text": "Che et al., (2009)",
"ref_id": "BIBREF3"
},
{
"start": 169,
"end": 187,
"text": "Che et al., (2009)",
"ref_id": "BIBREF3"
},
{
"start": 420,
"end": 439,
"text": "Zhao et al., (2009)",
"ref_id": "BIBREF33"
},
{
"start": 480,
"end": 501,
"text": "Xue and Palmer (2004)",
"ref_id": "BIBREF31"
},
{
"start": 747,
"end": 765,
"text": "Che et al., (2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Dependency Parsing",
"sec_num": "4.1.2"
},
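{
"text": "A minimal sketch of this post-processing rule (our own illustration; candidates is assumed to be a list of (word, role, probability) triples output by the role classifier for one predicate):\n\ndef remove_duplicate_core_args(candidates):\n    # keep, for each core role ARG0-ARG5, only the highest-probability\n    # candidate; all other roles are left untouched\n    core = {'ARG0', 'ARG1', 'ARG2', 'ARG3', 'ARG4', 'ARG5'}\n    best = {}\n    kept = []\n    for word, role, prob in candidates:\n        if role not in core:\n            kept.append((word, role, prob))\n        elif role not in best or prob > best[role][2]:\n            best[role] = (word, role, prob)\n    return kept + list(best.values())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Dependency Parsing",
"sec_num": "4.1.2"
},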
{
"text": "In our basic pipeline system, both the syntactic and semantic dependency parsers are built using discriminative models. We train a syntactic parsing model and a semantic parsing model using the original feature representation. We will refer to this syntactic parsing model as OriSynModel, and the semantic parsing model as OriSemModel. However, these two models do not adapt well to the target domain. So we use the LFR of our DBN model to train new syntactic and semantic parsing models. We will refer to the new syntactic parsing model as LatSyn-Model, and the new semantic parsing model as Lat-SemModel. Details of using our DBN model are as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting the Basic System to Target Domain",
"sec_num": "4.2"
},
{
"text": "The input data for training our DBN model are the original feature vectors on training and unlabeled data. Therefore, to train our DBN model, we first need to extract the original features for syntactic parsing on these data. Features on training data can be directly extracted using golden-standard annotations. On unlabeled data, however, some features cannot be directly extracted. This is because our syntactic parser uses history-based features which depend on previous actions taken when parsing a sentence. Therefore, features on unlabeled data can only be extracted after the data are parsed. To solve this problem, we first parse the unlabeled data using the already trained OriSynModel. In this way, we can obtain the features on the unlabeled data. Because of the poor performance of the OriSynModel on the target domain, the extracted features on unlabeled data contains some noise. However, experiments show that our DBN model can still learn a good LFR despite the noise in the extracted features. Using the LFR, we can train the syntactic parsing model LatSynModel. Then by applying the LFR on test and unlabeled data, we can parse the data using LatSynModel. Experiments in later sections show that the LatSynModel adapts much better to the target domain than the OriSynModel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting the Syntactic Parser",
"sec_num": "4.2.1"
},
{
"text": "The situation here is similar to the adaptation of the syntactic parser. Features on training data can be directly extracted. To extract features on unlabeled data, we need to have syntactic dependency trees on this data. So we use our LatSynModel to parse the unlabeled data first. And we automatically identify predicates on unlabeled data using a classifier as in Che et al., (2008) . Then we extract the original features for semantic parsing on unlabeled data. By feeding original features extracted on these data to our DBN model, we learn the LFR for semantic dependency parsing. Using the LFR, we can train the semantic parsing model LatSemModel.",
"cite_spans": [
{
"start": 367,
"end": 385,
"text": "Che et al., (2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting the Semantic Parser",
"sec_num": "4.2.2"
},
{
"text": "We use the English data in the CoNLL 2009 shared task for experiments. The training data and in-domain test data are from the WSJ corpus, whereas the out-of-domain test data is from the Brown corpus. We also use unlabeled data consisting of the following sections of the Brown corpus: K, L, M, N, P. The test data are excerpts from fictions. The unlabeled data are also excerpts from fictions or stories, which are similar to the test data. Although the unlabeled data is actually annotated in Release 3 of the Penn Treebank, we do not use any information contained in the annotation, only using the raw texts. The training, test and unlabeled data contains 39279, 425, and 16407 sentences respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Data",
"sec_num": "5.1.1"
},
{
"text": "For the syntactic parsing task, there are 748,598 original features in total. We use 7,486 hidden variables in the first layer and 3,743 hidden variables in the second layer. For semantic parsing, there are 1,074,786 original features. We use 10,748 hidden variables in the first layer and 5,374 hidden variables in the second layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings of Our DBN Model",
"sec_num": "5.1.2"
},
{
"text": "In our DBN models, we need to determine the number of groups k. Because larger k means less computational cost, k should not be set too small. We empirically set k as follows: according to our experience, each group should contain about 5000 original features. We have about 10 6 original features in our tasks. So we estimate k \u2248 10 6 /5000 = 200. And we set k to be 200 in the DBN models for both syntactic and semantic parsing. As for splitting strategy, we use the more sophisticated one in subsection 3.3.1 because it should generate better results than the simple one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings of Our DBN Model",
"sec_num": "5.1.2"
},
{
"text": "In greedy pretraining of the DBN, the contrastive divergence algorithm is configured as follows: the training data is divided to mini-batches, each containing 100 samples. The weights are updated with a learning rate of 0.3, momentum of 0.9, weight decay of 0.0001. Each layer is trained for 30 passes (epochs) over the entire training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Details of DBN Training",
"sec_num": "5.1.3"
},
{
"text": "In fine-tuning, the backpropagation algorithm is configured as follows: The training data is divided to mini-batches, each containing 50 samples. The weights are updated with a learning rate of 0.1, momentum of 0.9, weight decay of 0.0001. The finetuning is repeated for 50 epochs over the entire training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Details of DBN Training",
"sec_num": "5.1.3"
},
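{
"text": "Collected in one place, the training configuration described above (the values are those given in this subsection; the dictionaries and their key names are only an illustrative way of writing them down):\n\nPRETRAIN_CONFIG = {\n    'batch_size': 100,     # samples per mini-batch (Contrastive Divergence)\n    'learning_rate': 0.3,\n    'momentum': 0.9,\n    'weight_decay': 0.0001,\n    'epochs': 30,           # passes over the training data per layer\n}\n\nFINETUNE_CONFIG = {\n    'batch_size': 50,       # samples per mini-batch (backpropagation)\n    'learning_rate': 0.1,\n    'momentum': 0.9,\n    'weight_decay': 0.0001,\n    'epochs': 50,\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Details of DBN Training",
"sec_num": "5.1.3"
},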
{
"text": "We use the fast computing technique in Raina et al., (2009) to learn the LFRs. Moreover, in greedy pretraining, we train RBMs in the first layer in parallel.",
"cite_spans": [
{
"start": 39,
"end": 59,
"text": "Raina et al., (2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Details of DBN Training",
"sec_num": "5.1.3"
},
{
"text": "We use the official evaluation measures of the CoNLL 2009 shared task, which consist of three different scores: (i) syntactic dependencies are scored using the labeled attachment score, (ii) semantic dependencies are evaluated using a labeled F1 score, and (iii) the overall task is scored with a macro av- erage of the two previous scores. The three scores above are represented by LAS, Sem F1, and Macro F1 respectively in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
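{
"text": "For example, a system with an LAS of 86.0 and a Sem F1 of 80.0 would receive a Macro F1 of (86.0 + 80.0) / 2 = 83.0 (purely illustrative numbers).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},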
{
"text": "Our basic system uses the OriSynModel for syntactic parsing, and the OriSemModel for semantic parsing. Our adapted system uses the LatSynModel for syntactic parsing, and the LatSemModel for semantic parsing. The results of these two systems are shown in Table 1 , in which our basic and adapted systems are denoted as Ori and Lat respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 254,
"end": 261,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Un-adapted System",
"sec_num": "5.2.1"
},
{
"text": "From the results in Table 1 , we can see that Lat performs slightly worse than Ori on in-domain WSJ test data. But on the out-of-domain Brown test data, Lat performs much better than Ori, with 5 points improvement in Macro F1 score. This shows the effectiveness of our method for domain adaptation tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Un-adapted System",
"sec_num": "5.2.1"
},
{
"text": "As described in subsection 5.1.2, we have empirically set the number of groups k to be 200 and chosen the more sophisticated splitting strategy. In this subsection, we experiment with different splitting configurations to see their effects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Splitting Configurations",
"sec_num": "5.2.2"
},
{
"text": "Under each splitting configuration, we learn the LFRs using our the DBN models. Using the LFRs, we test the our adapted systems on both in-domain and out-of-domain data. Therefore we get many test results, each corresponding to a splitting configuration. The in-domain and out-of-domain test results are reported in Table 2 and Table 3 respectively. In these two tables, 's1' and 's2' represents the simple and the more sophisticated splitting strategies in subsection 3.3.1 respectively. 'k' represents the number of groups in our DBN models. For both syntactic and semantic parsing, we use the same k in their DBN models. The 'Time' column reports the training time of our DBN models for both syntactic and semantic parsing. The unit of the 'Time' column is the hour. Please note that we only need to train our DBN models once. And we report the training time in Table 2 . For easy viewing, we repeat those training times in Table 3 . But this does not mean we need to train new DBN models for outof-domain test.",
"cite_spans": [],
"ref_spans": [
{
"start": 316,
"end": 335,
"text": "Table 2 and Table 3",
"ref_id": "TABREF2"
},
{
"start": 865,
"end": 872,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 927,
"end": 934,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Different Splitting Configurations",
"sec_num": "5.2.2"
},
{
"text": "From Tables 2 and 3 we get the following observations:",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 19,
"text": "Tables 2 and 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Different Splitting Configurations",
"sec_num": "5.2.2"
},
{
"text": "First, although the more sophisticated splitting strategy 's2' generate slightly better result than the simple strategy 's1', the difference is not significant. This means that the hierarchical structure of our DBN model can robustly capture the relationships between features. Even with the simple splitting strategy 's1', we still get quite good results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Splitting Configurations",
"sec_num": "5.2.2"
},
{
"text": "Second, the 'Time' column in Table 2 shows that different splitting strategies with the same k value has the same training time. This is reasonable because training time only depends on the number of parameters in our DBN model. And different splitting strategies do not affect the number of parameters in our DBN model. Third, the number of groups k affects both the training time and the final results. When k increases, the training time reduces but the results degrade. As k gets larger, the time reduction gets less obvious, but the degradation of results gets more obvious. When k = 100, 200, 300, there is not much difference between the results. This shows that the results of our DBN model is not sensitive to the values of k within a range of 100 around our initial estimation 200. But when k is further away from our estimation, e.g. k = 400, the results get significantly worse.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Different Splitting Configurations",
"sec_num": "5.2.2"
},
{
"text": "Please note that the results in Tables 2 and 3 are not used to tune the parameter k or to choose a splitting strategy in our DBN model. As mentioned in subsection 5.1.2, we have chosen k = 200 and the more sophisticated splitting strategy beforehand. In this paper, we always use the results with k = 200 and the 's2' strategy as our main results, even though the results with k = 100 are better.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 46,
"text": "Tables 2 and 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Different Splitting Configurations",
"sec_num": "5.2.2"
},
{
"text": "An interesting question for our method is how much unlabeled target domain data should be used. To empirically answer this question, we learn several LFRs by gradually adding more unlabeled data to train our DBN model. We compared the performance of these LFRs as shown in Figure 6 . From Figure 6 , we can see that by adding more unlabeled target domain data, our system adapts better to the target domain with only small degradation of result on source domain. However, with more unlabeled data used, the improvement on target domain result gradually gets smaller.",
"cite_spans": [],
"ref_spans": [
{
"start": 273,
"end": 281,
"text": "Figure 6",
"ref_id": "FIGREF7"
},
{
"start": 289,
"end": 297,
"text": "Figure 6",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "The Size of Unlabeled Target Domain Data",
"sec_num": "5.3"
},
{
"text": "In this subsection, we compare our method with several systems. These are described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "Daume07. Daum\u00e9 III (2007) proposed a simple and effective adaptation method by augmenting feature vector. Its main idea is to augment the feature vector. They took each feature in the original problem and made three versions of it: a general version, a source-specific version and a target-specific version. Thus, the augmented source data contains only general and source-specific versions; the augmented target data contains general and target-specific versions. In the baseline system, we adopt the same technique for dependency and semantic parsing.",
"cite_spans": [
{
"start": 9,
"end": 25,
"text": "Daum\u00e9 III (2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "Chen. The participation system of Zhao et al., (2009) , reached the best result in the out-of-domain test of the CoNLL 2009 shared task.",
"cite_spans": [
{
"start": 34,
"end": 53,
"text": "Zhao et al., (2009)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "In Daum\u00e9 III and and Marcu (2006) , they presented and discussed several 'obvious' ways to attack the domain adaptation problem without developing new algorithms. Following their idea, we construct similar systems.",
"cite_spans": [
{
"start": 3,
"end": 33,
"text": "Daum\u00e9 III and and Marcu (2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "OnlySrc. The system is trained on only the data of the source domain (News).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "OnlyTgt. The system is trained on only the data of the target domain (Fiction).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "All. The system is trained on all data of the source domain and the target domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "It is worth noting that training the systems of Daume07, OnlyTgt and All need the labeled data of the target domain. We utilize OnlySrc to parse the unlabeled data of the target domain to generate the labeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "ALl comparison results are shown in Table 4 , in which the 'Diff' column is the difference of scores on in-domain and out-of-domain test data.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 43,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "First, we compare OnlySrc, OnlyTgt and All. We can see that OnlyTgt performs very poor both in the source domain and in the target domain. It is not hard to understand that OnlyTgt performs poor in the source domain because of the adaptation problem. OnlyTgt also performs poor in the target domain. We think the main reason is that OnlyTgt is trained on the auto parsed data in which there are many parsing errors. But we note that All performs better than both OnlySrc and OnlyTgt on the target domain test, although its training data contains some auto parsed data. Therefore, the data of the target domain, labeled or unlabeled, are potential in alleviating the adaptation problem of different domains. But All just puts the auto parsed data of the target domain into the training set. Thus, its improvement on the test data of the target domain is limited. In fact, how to use the data of the target domain, especially the unlabeled data, in the adaptation problem is still an open and hot topic in NLP and machine learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "Second, we compare Daume07, All and our method. In Daume07, they reported improvement on the target domain test. But one point to note is that the target domain data used in their experiments is labeled while in our case there is only unlabeled data. We can see Daume07 have comparable performance with All in which there is not any adaptation strategy besides adding more data of the target domain. We think the main reason is that there are many parsing errors in the data of the target domain. But our method performs much better than Daume07 and All even though some faulty data are also utilized in our system. This suggests that our method successfully learns new robust representations for different domains, even when there are some noisy data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "Third, we compare Chen with our method. Chen reached the best result in the out-of-domain test of the CoNLL 2009 shared task. The results in Table 4 show that Chen's system performs better than ours on in-domain test data, especially on LAS score. Chen's system uses a sophisticated graph-based syntactic dependency parser. Graph-based parsers use substantially more features, e.g. more than 1.3 \u00d7 10 7 features are used in McDonald et al., (2005) . Learning an LFR for that many features would take months of time using our DBN model. So at present we only use a transition-based parser. The better performance of Chen's system mainly comes from their sophisticated syntactic parsing method.",
"cite_spans": [
{
"start": 424,
"end": 447,
"text": "McDonald et al., (2005)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "To reduce the sparsity of features, Chen's system uses word cluster features as in Koo et al., (2008) . On out-of-domain tests, however, our system still performs much better than Chen's, especially on semantic parsing. To our knowledge, on out-of-domain tests on this data set, our system has obtained the best performance to date. More importantly, the performance difference between indomain and out-of-domain tests is much smaller in our system. This shows that our system adapts much better to the target domain.",
"cite_spans": [
{
"start": 83,
"end": 101,
"text": "Koo et al., (2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "5.4"
},
{
"text": "In this paper, we propose a DBN model to learn LFRs for syntactic and semantic parsers. These LFRs are common representations of original features in both source and target domains. Syntactic and semantic parsers using the LFRs adapt to target domain much better than the same parsers using original feature representation. Our model provides a unified method that adapts both syntactic and semantic dependency parsers to a new domain. In the future, we hope to further scale up our method to adapt parsing models using substantially more features, such as graph-based syntactic dependency parsing models. We will also search for better splitting strategies for our DBN model. Finally, although our experiments are conducted on syntactic and semantic parsing, it is expected that the proposed ap-proach can be applied to the domain adaptation of other tasks with little adaptation efforts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "The research work has been partially funded by the Natural Science Foundation of China under Grant No.61333018 and supported by the West Light Foundation of Chinese Academy of Sciences under Grant No.LHXZ201301. We thank the three anonymous reviewers and the Action Editor for their helpful comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning Deep Architectures for AI",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2009,
"venue": "Foundations and Trends in Machine Learning",
"volume": "2",
"issue": "",
"pages": "1--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio. 2009. Learning Deep Architectures for AI. In Foundations and Trends in Machine Learning, 2(1):1-127.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Domain Adaptation with sturctural correspondance learning",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of ACL-2006",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blitzer, Ryan McDonald and Fernando Pereira. 2006. Domain Adaptation with sturctural correspon- dance learning. In Proceedings of ACL-2006.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Cascaded Syntactic and Semantic Dependency Parsing System",
"authors": [
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Zhenghua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuxuan",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Yongqiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of CoNLL-2008 shared task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wanxiang Che, Zhenghua Li, Yuxuan Hu, Yongqiang Li, Bing Qin, Ting Liu and Sheng Li. 2008. A Cascaded Syntactic and Semantic Dependency Parsing System. In Proceedings of CoNLL-2008 shared task.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multilingual Dependency-based Syntactic and Semantic Parsing",
"authors": [
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Zhenghua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yongqiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuhang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CoNLL-2009 shared task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wanxiang Che, Zhenghua Li, Yongqiang Li, Yuhang Guo, Bing Qin and Ting Liu. 2009. Multilingual Dependency-based Syntactic and Semantic Parsing. In Proceedings of CoNLL-2009 shared task.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning reliable information for dependency parsing adaptation",
"authors": [
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Youzhengwu",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenliang Chen, YouzhengWu and Hitoshi Isahara. 2008. Learning reliable information for dependency parsing adaptation. In Proceedings of COLING-2008.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Frustratingly Easy Domain Adaptation",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL-2007",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III. 2007. Frustratingly Easy Domain Adap- tation. In Proceedings of ACL-2007.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Domain Adaptation for Statistical Classifer",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "In Journal of Artificial Intelligence Research",
"volume": "26",
"issue": "",
"pages": "101--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2006. Domain Adap- tation for Statistical Classifer. In Journal of Artificial Intelligence Research, 26(2006), 101-126.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Frustratingly Hard Domain Adaptation for Dependency Parsing",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Partha",
"middle": [
"P"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Joao",
"middle": [],
"last": "Graca",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Dredze, John Blitzer, Partha P. Talukdar, Kuzman Ganchev, Joao Graca and Fernando Pereira. 2007. Frustratingly Hard Domain Adaptation for Depen- dency Parsing. In Proceedings of EMNLP-CoNLL- 2007.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Glorot, Antoine Bordes and Yoshua Bengio. 2011. Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach. In Pro- ceedings of International Conference on Machine Learning (ICML) 2011.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic labeling for semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic la- beling for semantic roles. In Computational Linguis- tics, 28(3): 245-288.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Measuring invariances in deep networks",
"authors": [
{
"first": "I",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Saxe",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Advances in Neural Information Processing Systems(NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Goodfellow, Q. Le, A. Saxe and A. Ng. 2009. Mea- suring invariances in deep networks. In Proceedings of Advances in Neural Information Processing Sys- tems(NIPS)2011.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The CoNLL-2009 Shared Task: Syntactic and Semantic Dependencies in Multiple Languages",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"Ant\u00f2nia"
],
"last": "Mart\u00ed",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160t\u011bp\u00e1nek",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Stra\u0148\u00e1k",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CoNLL-2009",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Haji\u010d, Massimiliano Ciaramita, Richard Johans- son, Daisuke Kawahara, Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan\u0160t\u011bp\u00e1nek, Pavel Stra\u0148\u00e1k, Mihai Surdeanu, Nianwen Xue and Yi Zhang. 2009. The CoNLL-2009 Shared Task: Syntactic and Semantic Dependencies in Multiple Languages. In Proceedings of CoNLL-2009.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Single Malt or Blended? A Study in Multilingual Parser Optimization",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Eryi\u01e7it",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Megyesi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Saers",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Hall, J. Nilsson, J. Nivre, G. Eryi\u01e7it, B. Megyesi, M. Nilsson, and M. Saers. 2007. Single Malt or Blended? A Study in Multilingual Parser Optimization. In Pro- ceedings of EMNLP-CoNLL-2007.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Practical Guide to Training Restricted Boltzmann Machines",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2010,
"venue": "Technical report 2010-003",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton. 2010. A Practical Guide to Train- ing Restricted Boltzmann Machines. In Technical re- port 2010-003, Machine Learning Group, University of Toronto.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Training products of experts by minimizing constrastive divergence",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2002,
"venue": "Neural Computation",
"volume": "14",
"issue": "",
"pages": "1711--1800",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton. 2002. Training products of experts by minimizing constrastive divergence. In Neural Com- putation, 14(8): 1711-1800.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A fast learning algorithm for deep belief nets",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Osindero",
"suffix": ""
},
{
"first": "Yee-Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2006,
"venue": "Neural Computation",
"volume": "18",
"issue": "",
"pages": "1527--1554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Simon Osindero and Yee-Whye Teh. 2006. A fast learning algorithm for deep belief nets. In Neural Computation, 18(7): 1527-1554.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Reducing the dimensionality of data with neural networks",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2006,
"venue": "In Science",
"volume": "",
"issue": "5786",
"pages": "504--507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton and R. Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. In Science, 313(5786), 504-507.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dependency-based semantic role labeling of Prop-Bank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Nugues",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Johansson and Pierre Nugues. 2008. Dependency-based semantic role labeling of Prop- Bank. In Proceedings of EMNLP-2008.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Simple Semi-supervised Dependency Parsing",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Xavier Carreras and Michael Collins. 2008. Simple Semi-supervised Dependency Parsing. In Pro- ceedings of ACL-HLT-2008.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Semantic Role Labeling: An Introduction to the Special Issue",
"authors": [
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"C"
],
"last": "Litkowski",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "",
"pages": "145--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Llu\u00eds M\u00e0rquez, Xavier Carreras, Kenneth C.Litkowski and Suzanne Stevenson. 2008. Semantic Role Label- ing: An Introduction to the Special Issue. In Compu- tational Linguistics, 34(2):145-159.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Non-projective dependency parsing using spanning tree algortihms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haj\u02c7c",
"suffix": ""
},
{
"first": "Kiril",
"middle": [],
"last": "Ribarov",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of NAACL-HLT-2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Fernando Pereira, Jan Haj\u02c7c, and Kiril Ribarov. 2005. Non-projective dependency parsing using spanning tree algortihms. In Proceedings of NAACL-HLT-2005.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Shared Task on Dependency Parsing",
"authors": [],
"year": null,
"venue": "Proceedings of CoNLL-2007",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shared Task on Dependency Parsing. In Proceedings of CoNLL-2007.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Labeled Pseudo-Projective Dependency Parsing with Support Vector Machines",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Eryi\u01e7it",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Marinov",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of CoNLL-2006",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nivre, J. Hall, J. Nilsson, G. Eryi\u01e7it and S. Marinov. 2006. Labeled Pseudo-Projective Dependency Parsing with Support Vector Machines. In Proceedings of CoNLL-2006.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Pseudo-projective dependency parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL-2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nivre, and J. Nilsson. 2005. Pseudo-projective depen- dency parsing. In Proceedings of ACL-2005.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Large-scale Deep Unsupervised Learning using Graphics Processors",
"authors": [
{
"first": "Rajat",
"middle": [],
"last": "Raina",
"suffix": ""
},
{
"first": "Anand",
"middle": [],
"last": "Madhavan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajat Raina, Anand Madhavan, and Andrew Y. Ng. 2009. Large-scale Deep Unsupervised Learning us- ing Graphics Processors. In Proceedings of the 26th",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Annual International Conference on Machine Learning(ICML)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "152--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual International Conference on Machine Learn- ing(ICML), pages 152-164.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The CoNLL-2008 Shared Task on Joint Parsing of Syntactic and Semantic Dependencies",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu\u00eds M\u00e0rquez and Joakim Nivre. 2008. The CoNLL- 2008 Shared Task on Joint Parsing of Syntactic and Semantic Dependencies. In Proceedings of CoNLL- 2008.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Domain Adaptation by Constraining Inter-Domain Variability of Latent Feature Representation",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL-2011",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titov. 2011. Domain Adaptation by Constraining Inter-Domain Variability of Latent Feature Represen- tation. In Proceedings of ACL-2011.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Word representations: a simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL-2010",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Turian, Lev Ratinov and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of ACL- 2010.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Deep Learning via Semi-Supervised Embedding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Rattle",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Collobert",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of International Conference on Machine Learning(ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Weston, F. Rattle, and R. Collobert. 2008. Deep Learn- ing via Semi-Supervised Embedding. In Proceed- ings of International Conference on Machine Learn- ing(ICML).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Calibrating features for semantic role labeling",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue and Martha Palmer. 2004. Calibrating fea- tures for semantic role labeling. In Proceedings of EMNLP-2004.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Multi-Predicate Semantic Role Labeling",
"authors": [
{
"first": "Haitong",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP-2014",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitong Yang and Chengqing Zong. 2014. Multi- Predicate Semantic Role Labeling. In Proceedings of EMNLP-2014.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Multilingual Dependency Learning: Exploiting Rich Features for Tagging Syntactic and Semantic Dependencies",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CoNLL-2009 shared task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Wenliang Chen, Chunyu Kit, Guodong Zhou. 2009. Multilingual Dependency Learning: Exploiting Rich Features for Tagging Syntactic and Semantic De- pendencies. In Proceedings of CoNLL-2009 shared task.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Parsing Syntactic and Semantic Dependencies with Two Single-Stage Maximum Entropy Models",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao and Chunyu Kit. 2008. Parsing Syntactic and Semantic Dependencies with Two Single-Stage Max- imum Entropy Models. In Proceedings of CoNLL- 2008.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A Minimum Error Weighting Combination Strategy for Chinese Semantic Role Labeling",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of COLING 2010",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Zhuang and Chengqing Zong. 2010a. A Minimum Error Weighting Combination Strategy for Chinese Se- mantic Role Labeling. In Proceedings of COLING 2010.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Joint Inference for Bilingual Semantic Role Labeling",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP 2010",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Zhuang and Chengqing Zong. 2010b. Joint Inference for Bilingual Semantic Role Labeling. In Proceedings of EMNLP 2010.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A path feature example. The red edges are the path between She and visit and thus the relation path feature between them is SBJ\u2191OPRD\u2193IM\u2193OBJ\u2193",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Graphical representations of an RBM: (a) represents an RBM. (b) is a more compact representation",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Our DBN model. The blue nodes stand for the visible variables (v) and the blank node stands for the hidden variables (h 1 and h 2 ). The symbols are also used in the figures of the following subsectins.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Stack of RBMs in pretraining.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "Unrolling the DBN.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF7": {
"text": "Macro F1 scores on test data with respect to the size of unlabeled target domain data used in DBN training. The horizontal axis is the number of sentences in unlabeled target domain data and the coordinate axis is the Macro F1 Score.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"6\">: Results of different splitting configurations on in-domain WSJ development data</td></tr><tr><td>Str</td><td>k</td><td colspan=\"4\">Time(h) LAS Sem F1 Macro F1</td></tr><tr><td>s1</td><td>100 200 300</td><td>392 261 218</td><td>82.81 82.73 82.44</td><td>78.77 78.49 77.90</td><td>80.82 80.63 80.37</td></tr><tr><td/><td>400</td><td>196</td><td>81.83</td><td>76.72</td><td>79.31</td></tr><tr><td>s2</td><td>100 200 300</td><td>392 261 218</td><td>82.95 82.84 82.63</td><td>79.03 78.75 78.34</td><td>81.03 80.83 80.50</td></tr><tr><td/><td>400</td><td>196</td><td>81.97</td><td>76.98</td><td>79.51</td></tr></table>",
"html": null,
"text": ""
},
"TABREF3": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Results of different splitting configurations on out-of-domain Brown test data"
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Comparison with other methods."
}
}
}
}