Columns: id (string, lengths 7-12), sentence1 (string, lengths 6-1.27k), sentence2 (string, lengths 6-926), label (string, 4 classes)
train_100200
For each Persian word w and each PWN synset t, θ_{w,t} is considered as the probability of selecting PWN synset t for Persian word w. That is: in order to estimate these parameters, we can divide the number of times that a Persian word w occurs with PWN synset t in a Persian tagged corpus by the number of times that the word w appears in that corpus.
table 3: and average number of occurrences with respect to the learning; maximum changes in probabilities become less than a predefined threshold.
neutral
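The estimation described in the record above (dividing co-occurrence counts by word counts) amounts to a maximum-likelihood estimate of θ_{w,t}. A minimal sketch, assuming the tagged corpus is a list of (word, synset) pairs; the corpus format and all names here are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter

def estimate_sense_probs(tagged_corpus):
    """MLE of theta_{w,t} = count(w, t) / count(w) over (word, synset) pairs."""
    pair_counts = Counter(tagged_corpus)                 # count(w, t)
    word_counts = Counter(w for w, _ in tagged_corpus)   # count(w)
    return {(w, t): c / word_counts[w] for (w, t), c in pair_counts.items()}

# Hypothetical toy corpus: the Persian word "sib" tagged with two PWN synsets.
corpus = [("sib", "apple.n.01"), ("sib", "apple.n.01"), ("sib", "eyeball.n.01")]
probs = estimate_sense_probs(corpus)
```

For each word, the estimates sum to 1 over its observed synsets, which is exactly the normalization constraint the Lagrange-multiplier derivation in the next record enforces.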
train_100201
For English, PWN has been used while for Czech, Romanian and Bulgarian WordNets from the BalkaNet project have been used.
that is, we find: In order to maximize Q(Θ, Θ^(t−1)) subject to the constraint shown in formula (2), we introduce the Lagrange multiplier λ, and to find the expression for θ_{w,t} we should solve the following equation: By solving differential equation (7), we obtain the new values of the parameters as follows: in order to calculate the new estimates of the parameters according to formula (8), we must iterate over all possible sense-tagged sequences T_1^n for the Persian word sequence w_1^n.
neutral
train_100202
Such statistical models that rely on topics will not generalize well over text in previously unseen topics or genre.
table 7 tabulates the result.
neutral
train_100203
The first row, N-GRAM, is the baseline.
we conjecture that N-gram sequences are not robust against domain changes, as N-grams are powerful features for topic categorization (e.g., (Türkoǧlu et al., 2007)).
neutral
train_100204
As shown in Table 1, these categories were not completely disjoint.
table 6 tabulates the result.
neutral
train_100205
As before, the mixture of all features actually decreases the performance.
we should like to point out that this extremely high performance of simple features is attainable only when supplied with a sufficiently large amount of data.
neutral
train_100206
It is proved that articles from the selected set are better socially annotated for all tested languages, which results in a good weighting indicator value of 42.73.
usually, motivation was approached by classic social sciences methodologies, which discuss where it resides: in the individual itself, or in the individual while acting.
neutral
train_100207
Also, one of the correlations we noticed was that the more endogamy in terms of inlinks, the fewer interwiki links it had.
it can include writers and geographic places, music and historical objects.
neutral
train_100208
We use the dataset from SemEval-1 Task 4 on Classification of Semantic Relations between Nominals (Girju et al., 2009), which is the most popular dataset for our problem; using it allows for a direct comparison to state-of-the-art systems that were evaluated on it.
the ability to recognize semantic relations in text could potentially help many NLP applications.
neutral
train_100209
Instead, we mine the Web to automatically extract patterns (verbs, prepositions and coordinating conjunctions) expressing the relationship between the relation arguments, as well as hypernyms and co-hyponyms of the arguments, which we use in instance-based classifiers.
relational similarity focuses on the relationship between two pairs of words (or nominals, noun phrases), i.e., it asks how similar the relations A:B and C:D are.
neutral
train_100210
This is probably due to the fact that we employ a more sophisticated system for identifying DNIs.
the omission in (8) is lexically specific: the verb arrive allows the GOAL to be unspecified but the verb reach, also a member of the Arriving frame, does not (9).
neutral
train_100211
One can distinguish two views of the use of morphology as a tool for term (or word) analysis.
note that in this paper, we distinguish morphs, elementary linguistic signs (segments), from morphemes, equivalence classes with identical signifieds and close signifiers (Mel'čuk, 2006).
neutral
train_100212
As mentioned in Section 2.2, the LA model also leaves some of the features underspecified or assigns default values to them.
as stated above, unlike Alpino, the GG does not have a full form lexicon.
neutral
train_100213
Figure 1 shows a sample of verb table 12.
figure 2 shows a sample of the hierarchy.
neutral
train_100214
They share some features with our approach: they use bootstrapping; they only weakly depend on the language; they are domain independent.
this decrease can be significantly limited via human pattern selection, as can be seen from the performance of the CT system. Overall, the automatic approach proposed here, coupled with a lightweight human post-processing step, generates a good quality pattern lexicon for information extraction.
neutral
train_100215
If we compare the number of the learned rules and the learned instances in Table 3, all four experiments do not differ too much from each other.
among the instances with three arguments, there are two argument combinations, where W stands for winners, P for prize names, Y for years and A for areas.
neutral
train_100216
Our data pre-processing on the noisy Wikipedia topic pairs involved using simple regular expressions to filter out most of the irrelevant entities including: characters from other writing systems; temporal and numeric expressions; and punctuation symbols.
a DBN model is formally defined as a pair (B_0, B_→), where B_0 is a Bayesian network over an initial distribution over states, and B_→ is a two-slice Temporal Bayes Net (2-TBN) (Murphy, 2002).
neutral
train_100217
The classic HMMs in particular have already been applied in detecting transliteration pairs from bilingual text.
we used the same parameter for staying in the X or Y state, another same parameter for moving from either X or Y to the end state, and the same parameter from the substitution state to the X or Y state.
neutral
train_100218
This information can be useful for a subsequent phase of normalization.
numerous research works at the crossroads of NLP and data mining, are focusing on the problem of opinion detection and mining.
neutral
train_100219
It takes the user question as an input and returns a query-tuple representing the question in a compact form.
we use JAPE grammars to specify patterns over annotations.
neutral
train_100220
She suggests several simple heuristics for ranking the candidate trees, two of which will be considered here.
the results are very encouraging on several dimensions: 1. overall parsing performance on the test data for both the np-wsjqb and the np-wsjqblq 500 models is very good; 2. adding questions from the NLP-Qt yields a desired increase in performance; 3. almost two-thirds of all questions from the test data yield a completely correct parse.
neutral
train_100221
6 For example, Nachttischlampe 'bedside lamp' has both hypernyms Tischlampe 'table lamp' (hypernym distance is 1, i.e., direct hypernym) and Lampe 'lamp' (hypernym distance is 2, i.e., indirect hypernym).
furthermore, the splitting of compounds having more than two constituents, such as Brennstofflagerungsbehälter ('fuel storage container'), which is split into brennen + Stoff + lagern + Behälter cannot be used in this form for determining immediate constituents, since the constituents are not grouped.
neutral
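The hypernym-distance notion in the record above (distance 1 = direct hypernym, 2 = indirect) can be sketched as a walk up a hypernym chain. The taxonomy here is a toy dictionary mirroring the Nachttischlampe example, not GermaNet's actual API:

```python
def hypernym_distance(word, hypernym, taxonomy):
    """Steps along the hypernym chain from word to hypernym.
    Returns 1 for a direct hypernym, larger values for indirect ones,
    and None if hypernym is not an ancestor. taxonomy maps each word
    to its single direct hypernym (a chain, for simplicity)."""
    dist = 0
    node = word
    while node in taxonomy:
        node = taxonomy[node]
        dist += 1
        if node == hypernym:
            return dist
    return None

# Toy chain from the example: Nachttischlampe -> Tischlampe -> Lampe
tax = {"Nachttischlampe": "Tischlampe", "Tischlampe": "Lampe"}
```

A real wordnet allows multiple hypernyms per node, in which case a breadth-first search over the hypernym DAG replaces this single-chain walk.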
train_100222
pared with the limited database of annotated clauses created by the expert.
is given in figures 3 to 6 with the following parameters: m = 316, N_max = 400 and t_rel = 0.157, designed to produce M = 8 groups after aggregation, because the two evaluation indexes have interesting values.
Table 4: Cross-counts between unsupervised and manual classification (classes identified by the expert).
Cluster | arg descr dial expl inj nar | Total
1       |  48    30   33   34  15 101 |   261
2       |   4     0    2    1   0   0 |     7
3       |   0     0    1    1   0   0 |     2
4       |   0     0    2    0   0   0 |     2
5       |   0     2    2    0   8   0 |    12
6       |   2     0    1    0   3   0 |     6
7       |   5     0    5    4   0   3 |    17
8       |   0     1    0    6   0   2 |     9
Total   |  59    33   46   46  26 106 |   316
neutral
train_100223
Indeed, this text is made of a journal part.
this preliminary work aims at investigating a possible convergence between unsupervised statistical learning on the one hand, and the typological approach developed in particular in the field of French linguistics by Adam (2008) and in language psychology by Bronckart (1997) on the other hand.
neutral
train_100224
The RapidMiner Top Down Clustering operator was chosen because the number of clusters is selected by the operator.
the expected lexicon of CEOs should contain overtly positive language, which is designed to manipulate the public opinion.
neutral
train_100225
The lexicon analysis was the same as for the CEOs i.e.
the clusters that contained 75% of a single category had their labels propagated, and the job titles from the propagated and labelled data were recorded.
neutral
train_100226
The lexicon difference in conjunction with the baseline experiments provides justification for the two-step strategy as it will be possible to identify the role of the quotation maker by his language.
in order to collect the strongest possible quotations, a high-precision rule classifier selected quotations with three or more adjectives from one category.
neutral
train_100227
With the success of the communicative and action-oriented approach in the teaching of a second or foreign language (L2), teachers are encouraged to work on authentic texts in order to bring their students into contact with real linguistic data.
in this study, we defined the 7 following variables to assess the efficiency of n-gram models in the context of readability: • The normalized log-probability of every text (normTLProb), which is in keeping with Kate et al.
neutral
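The normalized log-probability variable (normTLProb) mentioned in the record above can be illustrated with a per-token average log-probability. The add-alpha unigram model below is an assumption chosen for brevity; the study itself uses n-gram models:

```python
import math
from collections import Counter

def norm_log_prob(text_tokens, model_counts, vocab_size, alpha=1.0):
    """Average per-token log-probability of a text under an
    add-alpha-smoothed unigram model: normalizing by length makes
    texts of different sizes comparable."""
    total = sum(model_counts.values())
    logp = 0.0
    for tok in text_tokens:
        p = (model_counts[tok] + alpha) / (total + alpha * vocab_size)
        logp += math.log(p)
    return logp / len(text_tokens)  # length normalization

# Hypothetical training counts and a short text to score.
train = Counter({"the": 4, "cat": 2, "sat": 1})
score = norm_log_prob(["the", "cat"], train, vocab_size=3)
```

Scores closer to zero indicate text the model finds more predictable, which is the intuition behind using such features for readability.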
train_100228
They aim to develop tools capable of assessing the difficulty of texts for a given population through textual features only (such as the number of letters per word, the number of words per sentence, etc.).
four thresholds were selected for each of the two references: a zero threshold where all nominal structures were considered, a second and a fourth one respectively corresponding to a 30% and 50% precision for our extractor, and an intermediate value as the third threshold.
neutral
train_100229
Faced with these results, one might conclude that MWEs are not as good predictors as the simple complex nominal structures (θ = 0).
there is another textual aspect that is likely to be a good predictor of lexical difficulty for L2 readers: collocations and idioms.
neutral
train_100230
The discriminative capability of higher-order models suffers too much from the smoothing, since the number of unknown n-grams increases proportionally to the model order.
we assume that at beginner or intermediate level, this facilitating effect is likely to be counterbalanced by the fact that the MWEs encountered are (1) mostly unknown to readers and (2) even more difficult to elucidate using the context as their meaning can be non-compositional.
neutral
train_100231
These gaps can in general be filled either by short (unwritten) vowels or by long (written) ones.
10: the third singular masculine and feminine and third dual feminine forms have suffixes which begin with (a) added to them, so Fig.
neutral
train_100232
o_m), its total cost, denoted by C_w(P), is: Let .
for each (T,H) pair there might be many proofs.
neutral
train_100233
We have shown that the system outperforms baselines that are restricted to each of these information sources alone; that is, both structural and similarity information are essential.
then, we partition the participant descriptions for each scenario into sets.
neutral
train_100234
This shows that parsing accuracy has a considerable effect on the overall performance of the system.
the difference between full and base is significant at p < 0.001.
neutral
train_100235
Difficult cases, mostly related to metonymies, were solved in consultation.
the next step must demonstrate that this can be done, without requiring the manual selection of scenarios to ask people about.
neutral
train_100236
Our content organization approach first categorizes questions to determine which schema will better convey the expected communicative goal of the answer for a particular question type and should be used for text planning.
in this experiment, BlogSum achieved a better F-Measure for ROUGE-2 and ROUGE-SU4 compared to OList.
neutral
train_100237
In our approach, schemata help to ensure the global coherence of the summary.
predicate: (6) At Carmax, the price is the Attributive * price and when you want a car you go get one.
neutral
train_100238
In our test, we have developed our own sentence extractor based on question similarity, topic similarity, and subjectivity scores.
(5) have to say that Carmax rocks.
neutral
train_100239
The follow-ing two sections describe a new concept, parallel suffix arrays, and how it enables annotations and more powerful search patterns.
although optimizing the implementation's use of RAM is certainly possible, it is quite clear that possibilities to use the described approach in linguistic practice depend on whether servers with sufficiently large RAM are available, and affordable.
neutral
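The suffix-array idea behind the record above (fast pattern lookup in a corpus held in RAM) can be sketched in a few lines. This naive construction materializes every suffix, so it is quadratic in memory, which echoes the RAM concerns raised in the record; production suffix arrays use linear-time, in-place algorithms instead:

```python
import bisect

def build_suffix_array(text):
    """Suffix array: starting offsets of all suffixes of text,
    ordered lexicographically by the suffix they start."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, pattern):
    """All offsets where pattern occurs, via binary search over the
    sorted suffixes (every occurrence is a prefix of some suffix)."""
    suffixes = [text[i:] for i in sa]
    lo = bisect.bisect_left(suffixes, pattern)
    hi = bisect.bisect_right(suffixes, pattern + "\uffff")
    return sorted(sa[lo:hi])

text = "banana"
sa = build_suffix_array(text)
hits = find_occurrences(text, sa, "ana")
```

The "\uffff" sentinel bounds the range of suffixes that start with the pattern; both searches are O(log n) string comparisons once the array is built.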
train_100240
Our initial evaluations showed considerable reduction of time needed to create a taxonomy using the tool comparing to manual taxonomy creation.
the hierarchy approximates a directed acyclic graph (DAG).
neutral
train_100241
(2005) that contains pairwise human similarity judgments for 1,225 text pairs.
for example, the text pairs about tool/implement and cemetery/graveyard were consistently said to be synonymous.
neutral
train_100242
The dataset comprises 21 questions, 21 reference answers and 630 student answers.
further datasets are necessary to guide the development of measures along other dimensions such as structure or style.
neutral
train_100243
We analyze the properties of each dataset by means of annotation studies and a critical view on the performance of common similarity measures.
in authorship classification (Holmes, 1998) only style is important.
neutral
train_100244
Of course, checking all cases of plagiarism manually is an unfeasible task.
the external detection phase will make the system more precise in the task of plagiarism detection thanks to the high precision of the external detection techniques.
neutral
train_100245
This lexicon has been created from a mapping between the TimeML event classes and the SIMPLE/CLIPS entries at the ontological level, and it is composed of 8,721 lemmas (1,068 for adjectives, 4,614 for nouns, 3,390 for verbs).
the SemEval TempEval-1 and TempEval-2 international evaluation exercises (Verhagen et al., 2007; Verhagen et al., 2010) have provided the NLP community with gold standard resources for comparative evaluations of different systems.
neutral
train_100246
The other two best performing models differ from the basic one for the combination of morphological features and presence of semantic features.
the similarity of the results with respect to the recall is not surprising.
neutral
train_100247
It is worth noticing that the context windows differentiating TIPSemIT FPC5 from TIPSemIT basic do not contribute at all to an improvement in classification, while this feature has a positive impact on event recognition.
the semantic information is not always a necessary and sufficient condition for its classification.
neutral
train_100248
Furthermore, we can associate a particular regular expression for each one of the six possible forms of a verb in this tense.
the form "ceartȃ" is however generated by the grammar, as: we cannot say that the grammar models this alternation.
neutral
train_100249
So, its length is not known in advance.
to these dictionaries, we add some entries related to the sport domain.
neutral
train_100250
The comparison was done with only one reference translation, as we work in a realistic scenario with dynamic domain change (see section 1.)
for English-Romanian, SMT systems are presented in (Cristea, 2009) and (Ignat, 2009), with BLEU results of 0.5464 and 0.3208, respectively.
neutral
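Since the record above reports single-reference BLEU scores, a minimal sketch of sentence-level BLEU may help; this is the textbook formulation (clipped n-gram precisions, geometric mean, brevity penalty) without the smoothing that toolkits such as sacreBLEU add, so it is an illustration rather than the cited systems' scorer:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with a single reference: geometric mean of
    clipped n-gram precisions times a brevity penalty. No smoothing,
    so any empty n-gram overlap yields 0.0."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    log_mean = sum(math.log(p) for p in precisions) / max_n
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(log_mean)

score = bleu("the cat sat down".split(), "the cat sat down".split())
```

Corpus-level BLEU, as reported in the record, aggregates the n-gram counts over all sentence pairs before taking the geometric mean rather than averaging sentence scores.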
train_100251
For Romanian-German, our result outperforms the system presented in (Ignat, 2009).
test 3) is identical, the differences in BLEU and TER scores can be explained only through lexical and syntactic variation across test-sets.
neutral
train_100252
A more general formulation does not limit the methods described, though a small set decreases runtime.
the work presented in this paper aims towards an automatic detection of beneficial structures. Senator John Green.
neutral
train_100253
This does not make the domain specific development unnecessary (as the application of feature selection still needs the development of features for a domain), but helps to find templates to improve the results.
to previous work, this paper addresses the question of how to select meaningful skip edges automatically from a set of possibilities.
neutral
train_100254
In this section, we create a key set by human assessment as our gold standard: we mixed up the machine translation, human corrections, and reference for one original sentence, and 5 annotators in Chinese-English, and 3 in Spanish-English, were asked to assess the sentences with scores from 1 to 5, where 1 corresponds to a poor and 5 to a perfect translation.
it is another kind of cross-validation, where the quality of a correction is based on other corrections from the same user.
neutral
train_100255
For governor and dependent, we use the semantic similarity mentioned in section 4.3.2.
we checked the data and found that the overall language model score for translated Spanish is better than reference, which means for Spanish, the fluency is not the big problem.
neutral
train_100256
condition, drug, result) and modifiers (e.g.
there is no such annotated corpus available.
neutral
train_100257
Estimating Probabilities for Action Maps: In obtaining frequency estimates for synonyms, we require these phrases to co-occur with instances where a known verb appears.
note however, that we do not explicitly use any predicates or logical structures; these are implicit in the visual schema.
neutral
train_100258
But clearly more work is needed to be able to approach verbs that are not directly based on motion.
we remove some of the topmost frequent words "the" in this analysis where they appear as part of a phrase.
neutral
train_100259
During association, we remove units that are very frequent in general discourse, assuming these to be non-relevant to this context.
a number of approaches have tried to construct such term-meaning associations from sensorimotor data (Steels and Kaplan, 2002;Gorniak and Roy, 2004;Roy and Pentland, 2002;Oates et al., 2000).
neutral
train_100260
As for the individual features, 'MFV' proves to be the most successful on its own, thus, the changes in the verb list are beneficial.
the stems of decision and decide do not coincide).
neutral
train_100261
As our results indicate, the distance is small between the source and the target domain in the case of light verb constructions since similar results can be achieved on the two domains if domain-specific solutions are employed.
as represented in Table 3, syntax clearly helps in identifying light verb constructions: on average, it adds 2.58% and 2.37% to the F-measure on the source and the target domains, respectively.
neutral
train_100262
"Corolla" is not a common word and thus this was probably one of the only alternative senses.
we believe Google Adwords has already implemented some form of sense disambiguation for frequently-searched key phrases; it seems that frequently-searched negative senses for ads are already filtered out.
neutral
train_100263
12 of the 50 key phrase pairs triggered ads from our campaign and 20 of the 50 had ads closely related to our campaign.
over-specification would simply involve selecting long enough and specific enough key phrases so that all ambiguity is removed.
neutral
train_100264
A special heuristics is implemented in our system via the unification method provided by SProUT, in order to find the equivalent classes of persons and organizations.
avoiding the negative consequences of these factors and evaluating the quality of the TECH-WATCHTOOL system proper remains an open challenge.
neutral
train_100265
We first run experiments using each of the three features TOPIC (T), SOURCE (S), AUTHOR (A) separately and then combined across the various N-GRAM and N-GRAM combinations described earlier.
this feature indicated the author of the document to which the paragraph belongs.
neutral
train_100266
While semantic networks have words in nodes and the arcs are semantic relations, wordnets are definitely more than that; they are: (i) monolingual dictionaries: they contain words with definitions for each of their senses; (ii) multilingual dictionaries: via the Inter-Lingual Index (ILI), access from one language-specific network to all the others is facilitated; thus, it is possible to compare the organization of the lexical material of various languages, to find examples supporting the thesis of semantic specificity of languages, and to introduce the multilingual dimension in various applications relying on wordnets; (iii) thesauri: lexical information is organized in terms of word meanings, not word forms; (iv) lexical ontologies: wordnets contain concept lexicalizations from various domains and the relations between these concept lexicalizations.
the high costs involved in terms of money and time prevent other teams from undertaking a similar enterprise.
neutral
train_100267
There, concatenation of inflectional morphemes for nouns and adjectives is performed without considering the problem of alternations in the root.
after applying the method of validation, we obtain correct words on the basis of language (Figure 1: Cycle of the lexicon completion).
neutral
train_100268
The DB population program produces a file that shows if words were inserted, word codes, and the result for each operation.
not all the derivatives correspond to the norms of human language.
neutral
train_100269
In this work, we adapted each schema to represent an Arabic syntactic phenomenon (the simple one).
in fact, coordination interacts with various other phenomena.
neutral
train_100270
To analyze a simple phrase, we have to unroll the "parse" menu and choose the "parse input" order.
the study on the Arabic language showed that coordination is one of its particular structures.
neutral
train_100271
Still, visualization of coreference information is somewhat rare.
in many cases, coreference is resolved using supervised machine learning methods.
neutral
train_100272
Tokenisation: determine token (words, punctuation symbols) boundaries in sentences.
the corpus contains both external and intrinsic plagiarism cases, that is, cases where plagiarism is to be identified within the actual suspicious document, without referring to a source document.
neutral
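The tokenisation step described in the record above (finding word and punctuation boundaries) can be sketched with a single regular expression; this pattern is a common simplification, not the system's actual tokeniser:

```python
import re

# \w+ grabs maximal runs of word characters; [^\w\s] grabs each
# remaining punctuation symbol as its own token.
TOKEN_RE = re.compile(r"\w+|[^\w\s]", re.UNICODE)

def tokenise(sentence):
    """Split a sentence into word and punctuation tokens."""
    return TOKEN_RE.findall(sentence)

tokens = tokenise("Hello, world!")
```

Real tokenisers add language-specific rules for clitics, abbreviations and multiword units, which a bare regex cannot capture.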
train_100273
For the experiment with lexical generalisation, functional words (stop words) were removed and all remaining (content) words were generalised using their WordNet synsets, that is, groups of synonym words.
for practical reasons, in this paper we selected a subset of the PAN corpus: the first 1,000 suspicious documents, along with all 11,147 source documents.
neutral
train_100274
Reducing this variability should help Web-oriented QA.
for all the possible configurations of parameters, the system provides results for the complete QA chain.
neutral
train_100275
While a summary, the most condensed form of a text, has to give an outline of the text contents that respects the text structure, a title indicates the treated subject in the text without revealing all the content (Wang et al., 2009).
the variation of λ between 0 and 1 determines the value adapted to the corpus.
neutral
train_100276
The electronic dictionary that has been chosen is WordNet (Miller, 1995), as has already been mentioned.
in supervised approaches, there is an adequate number of labelled data to train the model from scratch, on the new domain.
neutral
train_100277
In a sense, both approaches exploit information that can be characterised as "text relatedness" (or "feature relatedness"), as both "is-a" relations and correlation can be viewed as a relatedness measure between features.
the improvement was significantly better for case 3, while case 2 performed worse than the other two methods.
neutral
train_100278
Expanding and duplicating vectors: this case is very similar to the previous one regarding dimensionality: the dimensionality also increases, identical to the previous case.
after our approach expands the feature space with features from the target domain, 2289 posts of the target domain contained at least one feature from the (Shi, Fan and Ren, 2008) are also shown for comparison purposes (evaluated on different data partitioning).
neutral
train_100279
The OCA corpus is composed of Arabic reviews obtained from specialized Arabic web pages related to movies and films.
also, another part of this project was funded by the Agencia Española de Cooperación Internacional para el Desarrollo (MAEC-AECID).
neutral
train_100280
In order to compare the experiment for Arabic and English, we have translated OCA into English using an automatic Machine Translation (MT) tool freely available.
the experiments carried out with the English Version of OCA (EVOCA) show that, although we lost precision in the translation, the results are comparable to other works using English texts.
neutral
train_100281
With respect to closed categories most of the errors were due to the fact that units or symbols had not been defined in the hierarchy.
the TYPE and TEXT are credited independently, regardless of whether one of them is incorrect (Nadeau and Sekine, 2007).
neutral
train_100282
Firstly, the properties associated with units or symbols and boundaries of the numerical entities are tagged.
the system was not able to identify and classify these entities correctly.
neutral
train_100283
The first annotator in both English and Italian languages and the second one in both English and Czech languages.
if the goal is to detect sentiment expressed towards entities, the aggregated sentiment of the articles in which the entity appears need not correspond to opinions expressed towards the entity.
neutral
train_100284
agreement, on the contrary, had clear indicators: 'I agree with', 'have to agree'.
we proposed to build discourse structure using RST and based on empirical analysis, to determine which types of discourse structures are leading to final consensus.
neutral
train_100285
MADA Both sets of experiments show that the amount of morphological and morpho-syntactic information present in the POS tagset has an influence on the difficulty of the POS tagging step, even though the connection is not always a direct one.
for the AMIRA-SPLIT, we follow the procedure by Diab et al.
neutral
train_100286
DEV and TEST (each making up 10% of ATB3V3.1) respectively.
if ASMA is used as a preprocessing system for upstream modules, it is necessary to choose the tagset with regard to the upstream task.
neutral
train_100287
posts) were classified as positive or negative stance with F-score 39%−67% (Somasundaran & Wiebe, 2009); when those posts were enriched with preferences learned from the Web, F-score increased to 53%−75%.
opinionated Sentence: I don't think you will find anyone who this level of amplification is undamaging, but the option is to not hear.
neutral
train_100288
In the case of a noun, the root word is the singular form of the plural noun, e.g., bottles becomes bottle.
the data had an inherently high major class baseline of Accuracy = 70% and F-score = 57%.
neutral
train_100289
Further sub-categorization can be found in (Ghoul, 2011).
statistical and machine learning approaches generally require a large amount of manually annotated training data.
neutral
train_100290
Therefore, the final patterns set (P) is filtered every time a new pattern is added to it.
for example, the short pattern "Dr./NN <Per-sonName >" might successfully match more NEs in the text than the long pattern illustrated in figure 3.
neutral
train_100291
The methodology was applied without any major modifications.
sentence detection was applied to the corpora.
neutral
train_100292
These annotated documents are then accessed via browser-based clients which essentially look like traditional e-book reading environments but with a much richer set of user accessible functionality.
our system has been implemented save for a couple of features, and we are now planning an intrinsic evaluation followed by a deployment to gauge whether student users find it effective.
neutral
train_100293
On the client side, the presentation layer is responsible for (i) keeping track of the user status and the opened documents, (ii) displaying the opened documents (iii) handling user-interactions, and (iv) sending queries to the server.
our system architecture is language-independent; adopting new languages is a fairly easy process as long as the relevant annotation tools and their UIMA interfaces are available.
neutral
train_100294
We now turn to the experiments with enforced agreement based on a dyad of parsers.
miceli Barone and Attardi (2012) perform domain adaptation for dependency parsing using unannotated data.
neutral
train_100295
Notably, the distribution of these labels in the Stanford parses as well as in the parser combination parses is similar to that of the gold standard.
domain adaptation can be divided into two different scenarios: one where a small set of annotated data from the target domain is available, and one where no annotated target data is available.
neutral
train_100296
• We propose an effective method to refine/rank extracted tuples and patterns without human supervision.
the tuple, bought(Google, Youtube) can be extracted with high confidence.
neutral
train_100297
Tables 10 and 11 present results for the both sub-forums.
they used character-based and word-based PPM.
neutral
train_100298
The algorithm outperformed NB and KNN on both the forums.
given that individuals may be reluctant to share personal health information on online forums, they may choose to post anonymously.
neutral
train_100299
RT, ROFL), and emoticons.
the word "Govt" is normalised to government, which is then tagged correctly as NN, instead of NNP.
neutral