id          stringlengths    7 .. 12
sentence1   stringlengths    6 .. 1.27k
sentence2   stringlengths    6 .. 926
label       stringclasses    4 values
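The fields above describe an NLI-style schema: each row pairs an id with two sentences and one of 4 label classes (all rows shown in this preview are labeled "neutral"). Below is a minimal sketch of how such a split could be inspected, assuming the rows have been exported to a JSON Lines file named train.jsonl with exactly these field names; the filename and export format are assumptions, not part of this preview.

import json
from collections import Counter

# Count how often each of the 4 label classes occurs in the exported split.
label_counts = Counter()
with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)  # expected keys: id, sentence1, sentence2, label
        label_counts[row["label"]] += 1

print(label_counts.most_common())  # e.g. [("neutral", ...), ...]

The same loop can also sanity-check the length statistics above, e.g. len(row["sentence1"]) should fall roughly between 6 and 1,270 characters.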
train_98500
Despite those positive outcomes, several issues with multi-task learning for sequence labeling remain open.
actually, by training our model on NER, chunking and POS tagging, we report state-of-the-art (or highly competitive) results on each task, without using external knowledge (such as gazetteers, which have been shown to be important for NER), or hand-picking tasks to combine.
neutral
train_98501
The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research.
this establishes a new state-of-the-art on the test set, outperforming concurrently published work (Xiong et al., 2019) and matching the performance of a BERT model (Devlin et al., 2018) on this task.
neutral
train_98502
Training data: We train the relabeling function g on another synthetically-noised dataset D_drop generated from the manually-labeled data D. To mimic the type distribution of the distantly-labeled examples, we take each example (s, m, t) and randomly drop each type with a fixed rate 0.7 independent of other types to produce a new type set t'.
we show in the next section that this is not sufficient for denoising.
neutral
train_98503
We generate artificial subject-verb agreement errors from large amounts of data.
both neural models obtain higher F0.5 scores than the rule-based baseline, on average and across the board, i.e., +10.6 for LSTM-ESL and +15.7 for LSTM-ESL+Art.
neutral
train_98504
We train these models using EM for 500 iterations or until convergence, and we select the model with the lowest perplexity from among 70 random restarts.
here we note a mixed result: whilst de, sv, and it do benefit from POS information, the other languages do not, obtaining great improvements from MUSE embeddings instead.
neutral
train_98505
This challenge should be taken into consideration in model design.
we also thank USC Plus Lab and UCLA-NLP group for discussion and comments.
neutral
train_98506
Training on increasing amounts of target samples improves the model performances monotonically for each target language and the model leveraging the bilingual data consistently outperforms the monolingual baseline model.
in this section, we present our evaluation for zero-shot learning.
neutral
train_98507
For both SVD-aligned and ADV-aligned, we use the embeddings as provided by the original authors.
among the 5 considered languages, Turkish seemed to benefit the least from cross-lingual learning in all experiments.
neutral
train_98508
eq. 10 serves as a regularization term that encourages the output of R(G) to be distinguishable among classes.
the performances of these works are highly dependent on manually annotated training data, while the annotation process is time-consuming and expensive.
neutral
train_98509
The Rotten Tomatoes and Idebate datasets (Wang and Ling, 2016) use online text as their source, but they are relatively small in scale: 3.7K posts for Rotten Tomatoes compared to 80K posts for TIFU-short, as shown in Table 1.
second, we propose a novel abstractive summarization model named multi-level memory networks (MMN), equipped with multi-level memory to store the information of text from different levels of abstraction.
neutral
train_98510
Different from previous approaches, we propose to alleviate such bias issue by changing the source of summarization dataset.
high scores of the TIFU dataset in both metrics show that it is potentially an excellent benchmark for evaluation of abstractive summarization systems.
neutral
train_98511
We collect data from Reddit, which is a discussion forum platform with a large number of subreddits on diverse topics and interests.
instead of using only the last layer output of CNNs, we exploit the outputs of multiple layers of CNNs to construct S sets of memories.
neutral
train_98512
We combine the best variants from the three approaches into a single system by taking the majority vote from the models.
all content included is accurate, with no irrelevant details or repetitions.
neutral
train_98513
In order to assess the effectiveness of AL for neural text compression we extend the OpenNMT 7 implementations with our interactive framework following Algorithm 1.
+ Coverage-AL: The urgency of the situation in Alaska , Defenders needs your immediate assistance .
neutral
train_98514
It is therefore indispensable to minimize the cost of data annotation.
neural sequence-to-sequence (Seq2Seq) models have shown remarkable success in many areas of natural language processing and specifically in natural language generation tasks, including text compression (Rush et al., 2015;Filippova et al., 2015;Yu et al., 2018;Kamigaito et al., 2018).
neutral
train_98515
Given a question, the system predicts an answer using an extractive summary as the source input.
these summaries should factually adhere to the content of the source text and present the reader with the key points therein.
neutral
train_98516
Abstractive methods can thus introduce new words to the summary that are not present in the source article.
the articles were evenly split across the four competing systems, and each HIT was completed by 5 Turkers.
neutral
train_98517
As shown in Table 2, the linguistic output of SL & CL is closer to the language used by humans: Our agent is able to produce a much richer and less repetitive output than both BL and RL.
overall, the accuracy of the CL and RL models is close.
neutral
train_98518
Language is highly abstract: one dialog can correctly describe many different scenes in the real world, so why should we force a dialog to fit one single example among them?
the full training procedure is specified in Algorithm 1.
neutral
train_98519
(2016) and Arora et al.
we provide these results in Table 4 and observe that each component is indeed important for our model.
neutral
train_98520
Intuitively, we expect our model to have a lower sample complexity since training our model involves learning fewer parameters.
for multimodal personality traits recognition on POM (left side of Table 1), our baseline is able to additionally outperform more complicated memory-based recurrent models such as MFN on several metrics.
neutral
train_98521
The models on the leaderboard are evaluated on a private unseen test set which contains 18 new environments.
while these tasks are driven by different goals, they all require agents that can perceive their surroundings, understand the goal (either presented visually or in language instructions), and act in a virtual environment.
neutral
train_98522
Our speaker model is an enhanced version of the encoder-decoder model of Fried et al.
we introduce our environmental dropout method to mimic the "new" environment E', as described next in Sec.
neutral
train_98523
Inspired by reading strategies, with limited resources and a pretrained transformer, we propose three strategies to improve machine reading comprehension.
we use a sigmoid function instead of softmax at the final layer ( Figure 1) and regard the task as a binary (i.e., correct or incorrect) classification problem over each (document, question, answer option) instance.
neutral
train_98524
For experiments with two datasets, we use Algorithm 2; for experiments with three datasets we find the re-weighting mechanism in Section 4.2 to have a better performance (a detailed comparison will be presented in Section 5.4).
the similar gain indicates that our method is orthogonal to ELMo.
neutral
train_98525
Since the GCN layer retains important structural information and is sensitive to positional data from the syntax tree, we consider it as a position-based approach.
in this paper, we introduced the application of GCN and the attention mechanism to the identification of verbal MWEs, and finally proposed and tested a hybrid approach integrating both models.
neutral
train_98526
Second, the attention-based variants further boosted performance in comparison with their counterparts without attention.
incorrect attention led to a large drop in segmentation accuracy.
neutral
train_98527
Su and Lee (2017) also introduce a pixel-based model that learns character features from font images.
the final embedding of the target word is indirectly affected by the visual information.
neutral
train_98528
The BPE algorithm constructs a subword list from raw data and lattice LSTM introduces subwords into character LSTM representation.
we examine their non-pretrained model performance for fair comparison.
neutral
train_98529
Our work is in line with their work in directly using word information for CWS.
as shown in Table 5, among the ten most improved words, seven words are domain-specific noun entities, including person names, disease names and chemical compound names.
neutral
train_98530
Although the in-word negative sampling method proposed above is expected to prevent our model from incorrectly splitting multi-character words, we still want our model to pay more attention to the segmentation of such words.
• On four datasets in different special domains, our model improves the word F-measure by more than 3.0%, compared with the state-ofthe-art baseline segmenter.
neutral
train_98531
In contrast, we model character-based POS tagging.
to further investigate the robustness of our model, we conduct experiments with different levels of corrupted tokenization in English.
neutral
train_98532
While most pure character-level models cannot ensure consistent labels for each character of a token, our semi-CRF outputs correct segments in most cases (tokenization F1 is 98.69%, see Table 4), and ensures a single label for all characters of a segment.
we calculate joint tokenization and UPOS (universal POS) F1 scores.
neutral
train_98533
Using a dictionary with NN is also popular (Zhang et al., 2018b).
we do not induce a uniform smoothing.
neutral
train_98534
Our unsupervised dynamic speaker model differs from previous work in that we build speaker embeddings as a weighted combination of latent modes with weights computed based on the utterance.
second, we use the learned dynamic speaker embeddings in two representative tasks in dialogs: predicting user topic decisions in socialbot dialogs, and classifying dialog acts in human-human dialogs.
neutral
train_98535
Figure 1 shows a visual comparison between outputs generated by the two models.
to contextualize these results, we compare disfluency removal as a post-processing step after end-to-end speech translation with the original disfluent output.
neutral
train_98536
While SACNN can focus on important segments and gain local features, DepRNN helps to handle long-distance dependency between two entities based on the SDP as well as provide subject and object roles of two entities for the directional relation.
in this paper, we present a new model combining both CNNs and RNNs, exploiting the information from both the raw sequence and the SDP.
neutral
train_98537
We also propose the SACNN, which automatically focuses on the essential segments and gains local features.
the relation classification task is treated as a multi-class classification problem.
neutral
train_98538
In the future work, we will consider to detect events and their sentence-level and documentlevel factuality with a joint framework, and we will also continue to expand the scale of our DLEF corpus.
(Time: November, 2017) (Document-level factuality of the event "reach" is CT-.)
neutral
train_98539
Table 2 indicates that sentence-level factuality usually agrees with document-level factuality in CT+ documents, making them straightforward to be identified.
the results are given in Table 6, which shows that contexts can improve the performance more significantly on the Chinese corpus than on the English corpus.
neutral
train_98540
To our best knowledge, this is the first document-level event factuality corpus.
no previous work annotated a document-level corpus.
neutral
train_98541
4, where Mintz (Mintz et al., 2009), MultiR (Hoffmann et al., 2011) and MIMLRE (Surdeanu et al., 2012) are conventional feature-based methods, and (Lin et al., 2016) and are PCNN-based ones 4 .
table 5 compares the AUC values reported in these two papers and the results of our proposed models.
neutral
train_98542
During the training, the model learns a joint network including F, E and D to minimize the empirical loss Eq (1).
in order to have a reasonable comparison, we report the average ranking score for each method.
neutral
train_98543
In order to transfer the rules into a new policy π_r, the KL divergence between the posteriors of π and π_r should be minimized; this can be formally defined as min KL(P_π(A_t | S_t, θ_π) || P_{π_r}(A_t | S_t, θ_π)) (5). Optimizing the constrained convex problem defined by Eq.
here we use a rule pattern as Fig. 1 shows.
neutral
train_98544
• We apply the PR REINFORCE to the instance selection task for DS dataset to alleviate the wrong label problem in DS.
unbiased methods, such as REINFORCE, could usually take much time to train.
neutral
train_98545
The Metropolis Hastings Walker (MHW) method (Li et al., 2014) scales well in the number of topics, and uses a collapsed inference algorithm, but it operates in the batch setting, so it is not scalable to large corpora.
the SCVB0 algorithm does not leverage sparsity, and hence requires O(K) operations per word token.
neutral
train_98546
(S3) WAE_W+WAE_L+Sim: It is similar to WAE_W+Sim except we also include the average embeddings of attribute labels associated with the instance.
we want the words and their attribute labels to be close to each other in the embedding space and the embeddings of different labels to be far away from each other.
neutral
train_98547
Previous studies have mostly focused on estimating each annotator's overall reliability on the entire annotation task.
when constructing models that learn from noisy labels produced by multiple annotators, it is important to accurately estimate the reliability of annotators.
neutral
train_98548
The function H denotes the cross entropy between two distributions.
the review representation is obtained by averaging word embeddings.
neutral
train_98549
To solve the FDKB task using KBR, one feasible way is to exhaustively calculate the scores of all (r, t) combinations for the given head entity h. Afterwards, the highly-scored facts are returned as results.
semantic matching models such as RESCAL (Nickel et al., 2011), DistMult (Yang et al., 2014), ComplEx (Trouillon et al., 2016), HolE (Nickel et al., 2016) and ANALOGY (Liu et al., 2017) model the score of triples by the semantic similarity.
neutral
train_98550
RE has been widely studied in NLP community for many years.
we aim to address them and further extensions of our model in future works.
neutral
train_98551
Moreover, it is hard for machines to learn the attention weights from a long sequence of input text.
our RbSP model yields an F1-score of 86.3%, outperforming other comparative models except the Multi-Att-CNN model with multi-level attention CNN.
neutral
train_98552
Our idea is based on fundamental notion that the syntactic structure of a sentence consists of binary asymmetrical relations between words (Nivre, 2005).
we propose a novel DNN framework which combines Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and Convolutional Neural Networks (CNN) (LeCun et al., 1989) with a multi-attention layer.
neutral
train_98553
Given questions in natural language (NL), the goal of KBQA is to automatically find answers from the underlying KB, which provides a more natural and intuitive way to access the vast underlying knowledge resources.
for instance, the same question can be expressed in various ways in NL while a KB usually has a canonical lexicon.
neutral
train_98554
Unlike a basic memory network (Weston et al., 2014), its addressing stage is based on the key memory while the reading stage uses the value memory, which gives greater flexibility to encode prior knowledge via functionality separation.
fig. 4 shows the attention heatmap generated for a test question "who did location surrender to in number" (where "location" and "number" are entity types which replace the topic entity mention "France" and the constraint entity mention "ww2", respectively, in the original question).
neutral
train_98555
Following previous work , we also try building entailment-like training data from SQuAD 2.0 (Rajpurkar et al., 2018).
molecular and quantum physics show that the electromagnetic force is the fundamental interaction responsible for contact forces.
neutral
train_98556
Preliminary experiments found that simply transferring the lower-level weights of extractive QA models was ineffective, so we instead consider three methods of constructing entailment-like data from extractive QA data.
in this paper, we gather a large number of additional yes/no questions in order to construct a dedicated yes/no QA dataset.
neutral
train_98557
Classify: Finally we feed [v * ; p * ] into a fully connected layer, and then through a softmax layer to predict the output class.
part of the value of this dataset is that it contains questions that people genuinely want to answer.
neutral
train_98558
The learning rate is set to 0.001.
kVQU+AR: applies the approach introduced in this paper that additionally considers both the key and Value representations in the Query Updating (kVQU).
neutral
train_98559
Then with the hop size increasing, the performance drops.
[2016] achieves the state-of-the-art. (Table 2 caption: Running examples of addressed keys and corresponding relevance probabilities before and after introducing the STOP strategy.)
neutral
train_98560
We perform this additional contextualization only when sentences form contiguous text.
given a premise sentence P i , the entailment function f e computes a single hypothesis-aware representation x i containing information in the premise that is relevant to entailing the answer hypothesis H qa .
neutral
train_98561
Models such as Decomposable Attention (Parikh et al., 2016) and ESIM (Chen et al., 2017), on the other hand, find alignments between the hypothesis and premise words through crossattention.
so we have X. This is similar to a standard attention mechanism, where the attended representation is computed by summing the scaled representations.
neutral
train_98562
To understand why, consider the set of premises in Figure 1, which entail the hypothesis H c .
figure 4a illustrates this behavior.
neutral
train_98563
The goal of this module is to identify sentences in the paragraph that are important for the given hypothesis.
such scaled addition is not possible when the outputs from lower layers are not of the same shape, as in the following case.
neutral
train_98564
Processed sentence after this step is sentence 2 in Figure 3.
since we do not have labeled data, we need to identify mentions in contexts, and assign gender labels to them.
neutral
train_98565
Gendered Language: Gendered language is the use of words and phrases that discriminate the gender of a subject.
we also look at classifier probability distribution for human decisions shown in box and whisker plot in Figure 5, where x-axis is the classifier probability of the mention being female.
neutral
train_98566
o t is the output and e t is the hidden state of the GRU.
so we first try to tackle it based on the sequence-to-sequence model, which is commonly used in machine translation.
neutral
train_98567
This means the Seq2Seq model can better explain the hate symbols when Twitter users intentionally misspell or abbreviate common slur terms.
so we first try to tackle it based on the sequence-to-sequence model, which is commonly used in machine translation.
neutral
train_98568
Distant supervision has been widely used in relation extraction tasks without hand-labeled datasets recently.
evaluations        P@100         P@200         P@300
Automatic@NYT      76.2          73.1          67.4
Manual@NYT         96.0(+19.8)   95.5(+22.4)   91.0(+23.6)
Automatic@A-NYT    93.0          89.5          88.0
Manual@A-NYT       96.0(+3.0)    92.5(+3.0)    90.7(+2.7)
To further demonstrate the effectiveness of our training strategies, we compare Generative Adversarial Training (GAT) with other baselines on the partially labeled dataset A-NYT as shown in Figure 4(b).
neutral
train_98569
To further alleviate the effect of wrong labeling problem, soft-label training algorithm (Liu et al., 2017b), reinforcement learning methods (Feng et al., 2018;Qin et al., 2018b) and additional side information (Vashishth et al., 2018;Wang et al., 2018) have been used.
a former negative instance has a big chance to be credible negative if any of its entities is not mentioned in the description of the other one.
neutral
train_98570
The edges are weighted by the coreference and relation scores, which are trained according to the neural architecture explained in Section 3.1.
our Model: We develop a general information extraction framework (DYGIE) to identify and classify entities, relations, and coreference in a multi-task setup.
neutral
train_98571
3 This experiment envisions a pipeline where the noisy source is first automatically corrected and then translated.
combining this simple method with an automatic grammar correction system, we find that we can recover 1.5 BLEU.
neutral
train_98572
(2017) employ a common encoder to encode the sentences from both the in-domain and out-of-domain data and meanwhile add a discriminator to the encoder to make sure that only domain-invariant information is transferred to the decoder.
in addition to the common encoder, Zeng et al.
neutral
train_98573
In this paper, we present a method to make use of out-of-domain data to help in-domain translation.
our method can achieve a mild improvement on the out-of-domain compared to the baseline system.
neutral
train_98574
For Encoder Context integration, the HAN encoder (Miculicich et al., 2018) is the best for TED and News datasets, however, the results are statistically insignificant with respect to our best model.
the Hierarchical Attention module has four operations: 1.
neutral
train_98575
This amounts to 1,200 sentence pairs in the target side.
* " indicates that the correlation is significantly better than the next-best one.
neutral
train_98576
This makes it possible to evaluate the effectiveness of adversarial attacks or defenses either using goldstandard human evaluation, or approximations that can be calculated without human intervention.
for a word-based translation model M, and given an input sentence w_1, .
neutral
train_98577
The implementation of our method is available at https: //github.com/hassyGo/NLG-RL.
this section discusses our main contribution: how efficient our method is in accelerating reinforcement learning for sentence generation.
neutral
train_98578
For reference, we report the test set results in Table 4.
for the En-Ja (2M) and En-Ja (2M, SW) datasets, we used a single NVIDIA Tesla V100 GPU to speed up our experiments.
neutral
train_98579
Better generation of rare words These BLEU scores suggest that our method for reinforcement learning has the potential to outperform the full softmax baseline.
we then review how reinforcement learning is used, and present a simple and efficient method to accelerate the training.
neutral
train_98580
This paper has presented how to accelerate reinforcement learning for sentence generation tasks by reducing large action spaces.
e(y_t) is the y_t-th row vector in W_p, and the technique has been shown to be effective in machine translation (Hashimoto and Tsuruoka, 2017) and text summarization (Paulus et al., 2018).
neutral
train_98581
The work related to this paper falls into two sub topics, described as follows.
gal and Ghahramani used dropout in DNNs as an approximate Bayesian inference in deep Gaussian processes (Gal and Ghahramani, 2016) to mitigate the problem of representing uncertainty in deep learning without sacrificing the computational complexity.
neutral
train_98582
Dropout-based methods have also been extended to various tasks such as computer vision , autonomous vehicle safety (McAllister et al., 2017) and medical decision making (van der Westhuizen and Lasenby, 2017).
we used review data from the Sports and outdoors category, with 272,630 data samples and rating labels from 1 to 5.
neutral
train_98583
The Dropout operation will be randomly applied to the activations during the training and uncertainty measurement phases, but will not be applied to the evaluation phase.
the shortened intra-class distance and enlarged inter-class distance can reduce the prediction variance and increase the confidence for the accurate predictions.
neutral
train_98584
We experiment with the following state-of-the-art neural text classification methods: 1.
helpfulness might be conflated with other reasons such as humour or sentiment in certain domains.
neutral
train_98585
As a consequence the model has little freedom in discovering and concentrating on some natural label order.
• Sequence Generation Model (SGM) (Yang et al., 2018) which trains the RNN model similar to seq2seq-RNN but uses a new decoder structure that computes a weighted global embedding based on all labels as opposed to just the top one at each timestep.
neutral
train_98586
Between the above two extremes are Vinyals-RNN-max and set-RNN (we have omitted Vinyals-RNN-sample and Vinyals-RNN-maxdirect here as they are similar to Vinyals-RNNmax).
if for each document, RNN finds one good way of ordering relevant labels (such as hierarchically) and allocates most of the probability mass to the sequence in that order, the model still assigns low probabilities to the ground truth label sets and will be penalized heavily.
neutral
train_98587
For both lexicons, we keep 90% of the lexicon in the train set and the remaining 10% in the validation set.
with this motivation, we focus on increasing accuracy on the most frequent words.
neutral
train_98588
Figure 2 and Table 5 categorize the errors of systems trained on 3 types of lexicons (Romanized version of our lexicon is discussed in section 6.3) by using Algorithm 2.
for the remaining 30K words (29,105 words to be exact), we observe that at least one set provides a different transcription.
neutral
train_98589
(2018) utilized additional side information from KBs for improved RE.
to the best of our knowledge, ours is the first principled framework to combine and jointly learn heterogeneous representations from both language and knowledge for the RE task.
neutral
train_98590
All the above models are based on handcrafted features.
p_φ(y|x) is calculated by the relation classifier from the semi-supervised learning framework.
neutral
train_98591
In this paper, we proposed RCEND to fully exploit valid information of the noisy data in distant supervision relation classification.
(table residue, columns Sentence / DS / Gold: s1: Al Gore was waiting to board a commercial flight from Nashville to Miami...; s2: There were also performers who were born in Louisiana, including Lucinda Williams...; s3: Boggs was married, had three young children and lived in Brewster; labels NA, LivedIn) it suffers from the noisy labeling problem due to the irrelevance of aligned text and the incompleteness of the KB, which consists of false positives and false negatives.
neutral
train_98592
FastText optimizes the loss function in Eq. 1, but uses the scoring function s_FT defined in Eq. 2.
when the amount of misspellings is higher, i.e., r ∈ {0.25, 0.375}, MOE improves the results over the baseline for all of the α values.
neutral
train_98593
In fact, it is the only component of the loss function which attempts to learn these relationships.
, w_{i+l}} for some l set as a hyperparameter.
neutral
train_98594
Conversely, CC-DBP and NYT-FB contain dirtier sentences which mean a high probability of incurring wrong labeling.
for instance, T-REX has well-written textual mentions, because the sentences are extracted from Wikipedia.
neutral
train_98595
Formally, given is a new dense representation of w_ij, which also encodes the information of the whole sentence.
siamese neural networks (Bromley et al., 1993) are well suited to this task because they are specifically designed to compute the similarity between two instances.
neutral
train_98596
This kind of neural network has been used in both computer vision (Koch et al., 2015) and natural language processing (Mueller and Thyagarajan, 2016;Neculoiu et al., 2016) in order to map two similar instances close in a feature space.
the works in (Gladkova et al., 2016;Vylomova et al., 2016) explore the use of word vectors to model the semantic relations.
neutral
train_98597
In order to avoid this situation, and to prevent overfitting, we apply l 2 regularization to the noise model.
we further investigate the performance of the proposed approach on instance-dependent label noise by flipping each class's labels with different noise percentages, as shown in Fig.
neutral
train_98598
The results of our analysis imply that early in training, representing part of speech is the natural way to get initial high performance.
3 If the LM begins by maximizing mutual information with input, because the input is identical for the LM and tag models it may lead to these similar initial representations, followed by a decline in similarity as the compression narrows to properties specific to each task.
neutral
train_98599
Note that topic and UDP POS both apply to the same enwikipedia corpus, but PTB POS and SEM use two different unaligned sets from the GMB corpus.
we ask: is our first conceptual shift (to SVCCA) necessary?
neutral