id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses 4 values) |
---|---|---|---|
train_97800 | With respect to the emission and transition score matrices E i,y i and A y i ,y i+1 , we adopt an upper bound between source/target domains, which helps the target domain predictor to be guided by the source domain predictor. | in this paper, we focus on medical NER from EHRs, which is a fundamental task and is widely studied in the research community (Nadeau and Sekine, 2007;Uzuner et al., 2011). | neutral |
train_97801 | In practice, the difficulty of building a universally robust and high-performance medical NER system lies in the variety of medical terminologies and expressions among different departments of specialties and hospitals. | in this paper, we propose a novel NER transfer learning framework, namely label-aware double transfer learning (La-DTL): (i) We leverage bidirectional long-short term memory (Bi-LSTM) network (Graves and Schmidhuber, 2005) to automatically learn the text representations, based on which we perform a label-aware feature representation transfer. | neutral |
train_97802 | Although these works consider both out-of-context noise and overly-specific noise, they rely on handcrafted features which become an impediment to further improvement of the model performance. | that work assumed that weights to control the hierarchical loss would be solicited from domain experts, which is inapplicable for FETC. | neutral |
train_97803 | More precisely, a labeled corpus for entity type classification consists of a set of extracted entity mentions {m i } N i=1 (i.e., token spans representing entities in text), the context (e.g., sentence, paragraph) of each mention {c i } N i=1 , and the candidate type sets {Y i } N i=1 automatically generated for each mention. | we compute a probability distribution over all the K = |Y| types in the target type hierarchy Y. | neutral |
train_97804 | In other words, config 9 (Table 5) is combination of both weighted negative and scaled positive extractions. | #out: count of output instances with cnfpi, Λ, Gq ¥ 0.5. avg: average. | neutral |
train_97805 | We achieved this through SGD training of the diffeomorphism parameters θ, the means µ i of the Gaussian phones, and the parameters of the focalization kernel F. Our data is taken from the Becker-Kristal corpus (Becker-Kristal, 2006), which is a compilation of various phonetic studies and forms the largest multilingual phonetic database. | the essence of modeling is the same in that we explain formant values, rather than discrete IPA symbols. | neutral |
train_97806 | We call models trained on datasets augmented with unlabeled corpus data or random strings DA-U or DA-R, respectively. | we thus conclude that multi-task training (instead of simple data augmentation) is crucial for the use of unlabeled data. | neutral |
train_97807 | Also, we can notice that Wixarika has more unique words than the rest of our studied languages. | the last two questions are crucial: While for many languages it is difficult to obtain the number of annotated examples used in earlier work on (semi-)supervised methods, a limited amount might still be obtainable. | neutral |
train_97808 | In order to make follow-up work on minimal-resource settings for morphological segmentation easily comparable, we provide predefined splits of our datasets 2 . | to the best of our knowledge, we are the first to try such an approach for a morphological segmentation task. | neutral |
train_97809 | (2016) could be seen as the best alternative for translating into MRLs as it works at the character level on the decoder side and it was evaluated in different settings on different languages. | the look-up table includes high-quality affixes trained on the target side of the parallel corpus by which we train the translation model. | neutral |
train_97810 | When the Turkish word 'terbiyesizlik' is generated, the first channel is supposed to predict t, e, r, up to k, one after another. | the second output channel helps us train better affix embeddings. | neutral |
train_97811 | The f0 and E features are processed at the word level: each sequence of frames corresponding to a time-aligned word (and potentially its surrounding context) is convolved with N filters of m sizes (a total of mN filters). | figure 3 demonstrates one case where the pause feature helps in correcting a PP attachment error made by a text-only parser. | neutral |
train_97812 | The NXT corpus provides reconciliation between Treebank and MS-State transcripts in terms of annotating missed/extra/substituted words, but parses were not re-annotated. | in the context of this conversation (the speaker was talking about another person in an informal manner), and everything acts more like filler -e.g. | neutral |
train_97813 | Although we could in principle perform word discovery directly on speech, we leave this for future work, and only explore singletask and reconstruction models. | in the speech experiments, the decoders output the sequences at the grapheme level, so the output embedding size is set to 64. | neutral |
train_97814 | In Table 2, we present results on three small datasets that demonstrate the efficacy of our models. | we didn't observe similar improvements in the text translation experiments. | neutral |
train_97815 | Can the model identify vulnerable sentences, which are more likely to change the OH's view when addressed? | although this work did not look at the interaction between OHs and specific challengers, it provides valuable insight into persuasive arguments. | neutral |
train_97816 | appeal to emotion requires some introspection and determining your own worth to your family etc. | some of these aspects have been used as features to predict debate winners (Wang et al., 2017) and view changes (Tan et al., 2016). | neutral |
train_97817 | The intuition behind our model is that addressing certain parts of the OH's reasoning often has little impact in changing the OH's view, even if the OH realizes the reasoning is flawed. | in addition, 23% of the top pairs in one dimension capture the comment pointing out that the OH may have missed something (e.g., you don't know the struggles ...). | neutral |
train_97818 | As expected, the frequency of changes in view differs across topics ( Figure 1b). | challenger 2's argument is relatively similar to the OH's reasoning, as it attempts to directly correct the OH's reasoning. | neutral |
train_97819 | Similarly, the topological field (Höhle, 1986) identifying the major section of a sentence in relation to the clausal main verb is potentially relevant for a word's focus status. | if available in the original description of the approach, we also report the accuracy obtained without acoustic and prosodic features. | neutral |
train_97820 | We have seen in section 5.2 that surface-based givenness is helpful in predicting focus. | this shows that while focus information is clearly useful in Short Answer Assessment, it needs to be reliable enough to be of actual benefit. | neutral |
train_97821 | In the Shakespeare task, XU12 did observe a higher correlation with PINC (0.41) although the correlation was not with overall system ranking but rather only on the style metric. | meaning Preservation: modeling semantic similarity at a sentence level is a fundamental language processing task, and one that is a wide open field of research. | neutral |
train_97822 | From Figure 4, we can see that our paragraph-level models (the latter three) overall outperform DU-pair baselines across all the subsets. | building Discourse Unit Representations: We aim to build discourse unit (DU) representations that sufficiently leverage cues for discourse relation prediction from paragraph-wide contexts, including the preceding and following discourse units in a paragraph. | neutral |
train_97823 | However, nearly all the previous works assume that a pair of discourse units is independent from its wider paragraph-level contexts and build their discourse relation prediction models based on only two relevant discourse units. | third, implicit discourse relation prediction should benefit from modeling discourse relation continuity and patterns in a paragraph that involve easy-to-identify explicit discourse relations (e.g., "Implicit-Comparison" relation is followed by "Explicit-Comparison" in the above example). | neutral |
train_97824 | We fixed the weights of word embeddings during training. | the multi-way classification setting is more appropriate and natural in evaluating a practical end-to-end discourse parser, and we mainly evaluate our proposed models using the four-way multi-class classification setting. | neutral |
train_97825 | Our results indicate that the proposed utterance splitting applied to the training set greatly improves the neural model's accuracy and ability to generalize. | this can likely be attributed to the model having access to table 6: Automatic metric scores of different models tested on the E2E dataset, both unmodified (s) and augmented (s) through the utterance splitting. | neutral |
train_97826 | Although the E2E dataset contains a large number of samples, each MR is associated on average with 8.65 different reference utterances, effectively offering less than 5K unique MRs in the training set ( Fig. | in order to bring our model to produce more sophisticated utterances, we experimented with filtering the training data to contain only the most natural sounding and structurally complex utterances for each MR. For instance, we prefer having an elegant, singlesentence utterance with an apposition as the reference for an MR, rather than an utterance composed of three simple sentences, two of which begin with "it" (see the examples in Table 5). | neutral |
train_97827 | , ⁄ @ @ 1 , -2) ) ) ) ) ) ) ) ) ) ) ! | regarding the "Listenability" evaluation, workers gave high scores to the Fine-tuned and Pseudo-melody models that are trained using both the melody and lyrics. | neutral |
train_97828 | In the first example, the generator learns to complete the actions of placing the mixture into the a greased casserole and then baking it, which the MLE model misses. | indeed, recent studies have reported cases where commonly used measures do not align well with desired aspects of generation quality (Rennie et al., 2017;Li et al., 2016). | neutral |
train_97829 | Although this was a CWI task, surprisingly only 4.7% of the words in the test data were identified as complex, and all the other words were viewed as simple. | coster and Kauchak (2011) employ a phrase-based Machine Translation system extended to support phrase deletion, and Wubben et al. | neutral |
train_97830 | We collect the list of entity types of each entity in the FB5M through the predicate fb:type/instance. | the post processing step does not take into consideration that some verbs and prepositions do not fit in the sentence structure, or that some words are already existing in the question words (Example 4 table 7). | neutral |
train_97831 | Automatic Metrics for evaluating text generation such as BLEU and METEOR give an measure of how close the generated questions are to the target correct labels. | for text generation from tables, (Lebret et al., 2016) extend positional copy actions to copy values from fields in the given table. | neutral |
train_97832 | 3.3) is another RNN that generates the output question. | we can see that it has affected the naturalness of the question. | neutral |
train_97833 | Research in cognitive science, psychology and other social studies offer a great amount of work on (conscious and unconscious) biases and their effects on a variety of human activities (Kaheman and Tversky, 1972;Tversky and Kaheman, 1974). | our subjectivity lexicons are categorized into the following groups: Argumentation: This lexicon includes markers of argumentative discourse. | neutral |
train_97834 | 3 Using a trained neural network, we first project all published documents into a vector space such that a document tends to be close to its references. | (Caragea et al., 2014a,b;Lopez and Romary, 2010). | neutral |
train_97835 | In this section, we delineate details of the process for collecting questions and answers. | figure 4 shows the count of first word(s) for our questions. | neutral |
train_97836 | The numbers reported for this baseline represent the expected outcome (statistical expectation). | we present MultiRC (Multi-Sentence Reading Comprehension) 1 -a dataset of short paragraphs and multi-sentence questions that can be answered from the content of the paragraph. | neutral |
train_97837 | To detect whether an essay is adversarial, we further augment the system with an adversarial text detection component that simply captures adversarial input based on the difference between the predicted essay and coherence scores. | our local coherence model is inspired by the model of Li and Hovy (2014) which uses a window approach to evaluate coherence. | neutral |
train_97838 | Higgins and Heilman (2014) proposed a framework for evaluating the susceptibility of AES systems to gaming behavior. | the LSTM T&N and LC networks predict an essay and coherence score respectively (as described earlier), but now they both share the word embedding layer. | neutral |
train_97839 | In addition to marking only a subset of the incorrect tokens at inference time, we also train new models for which the training data also only had a subset of incorrect tokens marked. | the system presents an initial translation to the user who can accept a prefix and select among the most likely postfix iteratively. | neutral |
train_97840 | the sentence decoded from the initial system) and the reference sentence. | we consider the task of reformulating either a sentence, i.e. | neutral |
train_97841 | According to Ganesalingam (2008), the sense of mathematical text is conveyed through the interaction of two contexts: the textual context (flowing text) and the mathematical (or symbolic) context (mathematical formulae). | scientific documents, such as those from Physics and Computer science, rely on mathematics to communicate ideas and results. | neutral |
train_97842 | Moreover, DKRL fundamentally uses TransE (Bordes et al., 2013) method for encoding structural information. | in this work, we propose to enhance learning models with world knowledge in the form of Knowledge Graph (KG) fact triples for Natural Language Processing (NLP) tasks. | neutral |
train_97843 | The results are shown in Figure 5. | we report hypernym-specific scores -where the set of ground-truth edges considers just wordNet hypernyms -synonym-specific scores, and combined scores -where all wordNet hypernym and synonym edges are taken as ground truth, and a predicted edge must have the correct start node, end node, and relation type to be correct. | neutral |
train_97844 | We conduct experiments aimed at addressing three primary research questions: (1) How does each taxonomic organization algorithm perform? | our second method selects hypernym edges for the taxonomy by using the Chu-Liu-Edmonds optimum branching algorithm (Chu and Liu, 1965;Edmonds, 1967) to solve the directed analog of the maximum spanning tree problem (DMST). | neutral |
train_97845 | To understand why, we examine the output taxonomies. | how do DAG algorithms compare to tree-constrained ones, and how do transitive algorithms compare to their non-transitive counterparts? | neutral |
train_97846 | As we can see in Figure 2, because of the alignment shift, both tied and fixnorm incorrectly replace the two unknown words (in bold) with But Deutsche instead of Deutsche Telekom. | this coincides with other observations that NMt's translations are often fluent but lack accuracy (Wang et al., 2017b;Wu et al., 2016). | neutral |
train_97847 | It is more reasonable to have similar languages as auxiliary languages. | recent efforts (Artetxe et al., 2017;Conneau et al., 2018) also showed that it is possible to learn the transformation without any seeds, which makes it feasible for our proposed method to be utilized in purely zero parallel resource cases. | neutral |
train_97848 | Effective conversation prediction and recommendation requires an understanding of both user interests and discourse behaviors, such as agreement, disagreement, inquiry, backchanneling, and emotional reactions. | the intuition is that messages of different discourse modes may show different distributions of the three word types. | neutral |
train_97849 | For example, by leveraging both topical content and discourse structure, our model achieves a mean average precision (MAP) of 0.76 on conversations about the U.S. presidential election, compared with 0.70 by McAuley and Leskovec (2013), which only considers topics. | in addition to using 75% of conversation history, we also extract the first 25% and 50% of history as training. | neutral |
train_97850 | While 41% arguments were categorized as abusive, other categories (tu quoque, circumstantial, and guilt by association) were found to be rather ambiguous with very subtle differences. | based on our observations, we summarize several linguistic and argumentative phenomena with examples most likely responsible for ad hominem threads in Table 4. | neutral |
train_97851 | There are two strategies that we could use in deciding whether to align a scene graph node d (whose label space is O ∪ A ∪ R) with a word/phrase w in the sentence: • Word-by-word match (WBW): d ↔ w only when d's label and w match word-for-word. | we thank Peter Anderson, Sebastian Schuster, Ranjay Krishna, Tsung-Yi Lin for comments and help regarding the experiments. | neutral |
train_97852 | 7, selects a following transition, and updates the configuration. | at each step, we use a hinge loss defined as: where Y is the set of possible transitions and Y + is the set of correct transitions at the current step. | neutral |
train_97853 | In the present work, the order of the tasks was inspired by cognitive and linguistic abilities (see § 1). | note that quantification always refers to animals (target set). | neutral |
train_97854 | The model has a core structure, represented by layers 1-5 in the figure, which is shared across tasks and trained with multiple outputs. | the 32-d turquoise vector in Figure 3). | neutral |
train_97855 | We conduct a user study via Amazon Mechanic Turk (AMT) to test humans' performance on the datasets after they are remedied by our automatic procedures. | we examine our automatic procedures for creating decoys on five datasets. | neutral |
train_97856 | (2) Neutrality. | recently, Lin and Parikh (2017) study active learning for Visual QA: i.e., how to select informative image-question pairs (for acquiring annotations) or image-question-answer triplets for machines to "learn" from. | neutral |
train_97857 | On the contrary, a decoy may hardly meet QoU and IoU simultaneously 1 . | we conduct extensive empirical studies to demonstrate the effectiveness of our methods in creating better Visual QA datasets. | neutral |
train_97858 | The decoys need to be plausible to the image. | the remedied datasets and the newly created ones are released and available at http://www.teds. | neutral |
train_97859 | gained a lot of attention recently. | our design goal is that a learning machine needs to understand all the 3 components of an image-question-candidate answers triplet in order to make the right choiceignoring either one or two components will result in drastic degradation in performance. | neutral |
train_97860 | The results suggest that the CNN prediction performance improves when word and context vectors are jointly learned by our attr2vec model. | note that there could be multiple rows in X referring to the same pair of words but associated with different contextual variables. | neutral |
train_97861 | (2016), which learns word representations by adopting a ranking-based loss function. | moreover, note that our attr2vec algorithm, unlike GloVe, can handle generic contextual information. | neutral |
train_97862 | In Section 4, we present the experimental results, and close this paper with some concluding remarks in Section 5. | we considered two topics: general news stories (G) and sport news (SP O). | neutral |
train_97863 | To the best of our knowledge, attr2vec is the first model that incorporates syntactic dependency relations in a co-occurrence counts based model (such as GloVe). | we associate with each variable v ∈ V a bias term b v ∈ R and a latent factor vector f v ∈ R d , where the dimensionality d of the latent factor space is a hyperparameter of the model. | neutral |
train_97864 | All of the Spearman's ρ values are comparable to those found in Levy and Goldberg (2014) and Hamilton et al. | we visualize the time points where a node is zero in Figure 5. | neutral |
train_97865 | In the original dataset, there are 200 words categorized into 17 classes. | the second component incorporates wordspecific information into our model. | neutral |
train_97866 | (2016b) use attested shifts generated by historical linguists. | models were then evaluated on their ability to detect when those words changed. | neutral |
train_97867 | In this section, we evaluate our model's ability to measure the speed at which a word is changing. | if the bins are too small, the synchronic models get trained on insufficient data. | neutral |
train_97868 | To preserve DIH, we propose a novel word embedding method, distributional inclusion vector embedding (DIVE), which fixes the two flaws by performing non-negative factorization (NMF) (Lee and Seung, 2001) where k I is a constant which shifts PMI value like SGNS, Z = |D| |V | is the average word frequency, and |V | is the vocabulary size. | a combination of unsupervised DIVE with the proposed scoring functions produces new state-of-the-art performances on many datasets in the unsupervised regime. | neutral |
train_97869 | This paper presents a corpus and experiments to mine possession relations from text. | the third, fourth and fifth classifiers predict temporal anchors, i.e., classify pairs between which a possession holds-either alienable or controlinto before yes or before no, during yes or during no, and after yes or after no. | neutral |
train_97870 | Labels per temporal anchor with respect to verb x (binary flags for before, during and after) and possession type are presented in Table 3. | our approach extracts possessions intuitive to humans when there is no specific possession cue (e.g., we extract a control possession from The [computer] y at work was slow, [I] x didn't get anything done). | neutral |
train_97871 | Linear mapping, which is a standard component of NNs, has been applied successfully in many tasks. | the dimension m for S and T is a hyperparameter and must be determined prior to training. | neutral |
train_97872 | In view of this tradeoff, our work here further advances unsupervised learning of sentence embeddings. | it relies on a Siamese neural network architecture to predict surrounding sentences, contrasting our simpler unsupervised objective. | neutral |
train_97873 | The above baseline Multi achieves state-of-theart performance for multi-domain sentiment analysis (Liu et al., 2016), yet the domain indicator d i is used solely to select softmax parameters. | the results on dataset 1 are shown in table 2 (the results on Blitzer's dataset exhibit similar results and are omitted due to space constraints). | neutral |
train_97874 | We address this issue by investigating a model that learns domain-specific input representations for multi-domain sentiment analysis. | an end-to-end memory network is further proposed by Sukhbaatar et al. | neutral |
train_97875 | where θ ds is the set of domain-specific parameters including domain descriptors, attention weights and softmax parameters. | a descriptor vector is learned for representing each domain, which is used to map adversarially trained domaingeneral Bi-LSTM input representations into domain-specific representations. | neutral |
train_97876 | One naive baseline solution ignores the domain characteristics when learning f . | the mini-batch size, the size of the vocabulary V , dropout rate, learning rate for AdaGra and λ for adversarial training are set to 50, 10000, 0.4, 0.5 and 0.1, respectively. | neutral |
train_97877 | (2011) They extract rich targetdependent and target-independent lexical and syntactic features for classification. | the transitions generated at each step are discrete, making it difficult to train and propagate errors to update the model parameters. | neutral |
train_97878 | For instance, if embedding of a word 'अ छा|achcha' is unknown we translate it in English as 'good', and use its word embeddings in place of the source word 'अ छा|achcha'. | in the first architecture (A1, Figure 2a), we concatenate extracted features of each word of an instance with the corre- sponding word representations and pass it through a LSTM network followed by dense and output layers. | neutral |
train_97879 | Closest to our work are Yang and Cardie (2013) (Y&C) and Katiyar and Cardie (2016) (K&C). | 7 holders that both models predict incorrectly (hard cases) are less frequently subjects or A0 roles (col. 2, rows 2-3). | neutral |
train_97880 | that overall character and word-level edit distances roughly matched the edit distances between clean and noisy examples in our parallel seed corpus. | reverse noising yields significantly more convincing errors, but the edit distance between synthesized examples is significantly lower than in real data ( Figure 5). | neutral |
train_97881 | The final context dependent token representation h (e) t is the concatenation of the forward and backward pass token representations: To obtain the final context dependent token representation c j at the decoding time step j, we take a weighted average over to- Bahdanau et al. | encoder: We use a bi-directional LSTM (Graves et al., 2005) with attention mechanism as our sentence encoder. | neutral |
train_97882 | To test this, we plot MAP for our best self-training model and various QA baselines as we vary the proportion of labeled training set in Figure 3. | this procedure can be repeated as long as both the two models continue to improve. | neutral |
train_97883 | based algorithm are in the supplementary material. | we found that combining the web-facing model of Talmor et al. | neutral |
train_97884 | Most of them are derived from some important properties associated with each quantity. | we apply the attention mechanism to scan all hidden state sequence of body by the last hidden state of question to pay more attention to those more important (i.e., more similar between the body and the question) words. | neutral |
train_97885 | 4 The userlevel tweets consist of ≈10 million tweets from 5,191 users mapped to their user-level features. | it suggests that the users having intelligence below average are present oriented but they seem to have negative view of it. | neutral |
train_97886 | Major studies on time have been done for event detection (Ihler et al., 2006;Batal et al., 2012;Sakaki et al., 2013) which are mainly of the subjective consent. | for example, the tweet "Hoping to have fun among my friends but wishing I were with you instead" has a future orientation but it is mis-classified into the past orientation. | neutral |
train_97887 | Our contributions are summarised as below: • We introduce the sentiment dimensions in the human temporal orientation to infer the social media users' psycho-demographic attributes on a large-scale. | when we considered the sentiment dimension, we found that it was actually correlated with present orientation with negative sentiment. | neutral |
train_97888 | We examined our proposal using a recently published dataset of production norms (Jouravlev and McRae, 2016) and confirmed when people were explicitly asked to recall thematically related words, their responses were more likely located within the context embedding space in the vicinity of the cues word embedding. | facilitation is seen for word pairs that are purely category coordinates (lawyersurgeon) or purely associates (scalpel-surgeon), and pairs that share both types of relations (nursesurgeon) tend to see an additive processing benefit that reflects the privilege of both similarity and relatedness, an effect generally referred to as the "associative boost" (Chiarello et al., 1990;Lucas, 2000). | neutral |
train_97889 | Third, defining a semantic measure that does not require references avoids the difficulties incurred by their non-uniqueness, and the difficulty in collecting high quality references, as reported by Xu et al. | this places these measures in the 3rd and 4th places in the shared task, where the only two systems that surpassed it are marginally better, with scores of 0.33 and 0.34, and where the next | neutral |
train_97890 | We further compute an average human score: Inter-annotator agreement rates are computed in two ways. | unlike in their model, SAMSA stipulates that not only should multiple events evoked by a verb in the same sentence be avoided in a simplification, but penalizes sentences containing multiple events evoked by a lexical item of any category. | neutral |
train_97891 | We use 300D Glove 6 (Pennington et al., 2014) and 1000D wiki2vec 7 pre-trained vectors to initialize our word and entity vectors. | to this end, we present a method to effectively apply linked entities in sequence-tosequence models, called Entity2topic (E2t). | neutral |
train_97892 | To do this, we employ a CNN-based model to locally disambiguate the entities. | sentence E1.2 is linked correctly to the country "United States", and thus is given a low d value.. | neutral |
train_97893 | To achieve the above goal, we first infer the representations of entity descriptions using relation representation as attention: where r ∈ d w is the representation of the relation mention by averaging all the hidden vectors of BiLSTM, h i is the hidden representation of w i , and W e ∈ d w×2×h is a trained parameter matrix. | the parameters of our model are optimized using the stochastic gradient descent (SGD) algorithm. | neutral |
train_97894 | Concretely, we employ BiLSTM model (Schuster and Paliwal, 1997;Graves and Schmidhuber, 2005) with mutual attention mechanism to learn representations for relation mentions and entity descriptions. | (ii) We propose a mutual attention mechanism which exploits the textual representations of relation and entity to enhance each other (Section 3.2). | neutral |
train_97895 | This section presents our accurate text-enhanced knowledge graph representation learning framework. | several illustrative triples from the test set of FB15K are listed in Table 4. | neutral |
train_97896 | In our research, we propose an automatic phrase abduction mechanism to inject phrasal knowledge during the proof construction process. | in the presence of two or more variables with the same predicate, there might be multiple possible variable unifications. | neutral |
train_97897 | There is one critical reason that the word-toword axiom injection described in Section 3.2 fails to detect phrase-to-phrase correspondences. | then, the negation in G is removed by applying the introduction rule (¬-I) to G. Here, False is the propositional constant denoting the contradiction. | neutral |
train_97898 | We represent logical formulas using graphs, since this is a general formalism that is easy to visualize and analyze. | this work was supported by JSt CRESt Grant Number JPMJCR1301 and AIP Challenge Program, Japan. | neutral |
train_97899 | Moreover, we find that it can accurately identify useful key phrases such as officials declared the video, according to previous reports, believed will come, president in his tweets as supporting pieces of evidence, and proved a hoax, shot down a cnn report, would be skeptical as opposing pieces of evidence. | the rationale behind using the similarity matrix is that in our memory network model, as Figure 3 shows, we seek a transformation of the input claim such that s = M × s in order to obtain the closest facts to the claim. | neutral |
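The rows above are pipe-delimited with four columns matching the table header. A minimal parsing sketch, assuming the `id | sentence1 | sentence2 | label |` layout and that the sentence fields themselves contain no ` | ` separators (the example row text is hypothetical, in the same format as the rows above):

```python
# Minimal parser for the pipe-delimited rows above (a sketch; assumes the
# four-column "id | sentence1 | sentence2 | label |" layout and that the
# sentence fields contain no " | " sequences).

def parse_row(line: str) -> dict:
    # Drop the trailing pipe, then split on the field separator.
    line = line.strip().rstrip("|").strip()
    row_id, sentence1, sentence2, label = (f.strip() for f in line.split(" | "))
    return {
        "id": row_id,
        "sentence1": sentence1,
        "sentence2": sentence2,
        "label": label,
    }

# Hypothetical example row in the same format as the table above.
row = parse_row("train_97800 | A premise sentence. | A hypothesis sentence. | neutral |")
```

This keeps the split logic in one place, so a different delimiter or an extra column only requires touching `parse_row`.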