{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:40:33.245876Z"
},
"title": "NeuralLog: Natural Language Inference with Joint Neural and Logical Reasoning",
"authors": [
{
"first": "Zeming",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Terre Haute",
"location": {
"region": "IN",
"country": "USA"
}
},
"email": ""
},
{
"first": "Qiyue",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Terre Haute",
"location": {
"region": "IN",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Lawrence",
"middle": [
"S"
],
"last": "Moss",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indiana University",
"location": {
"settlement": "Bloomington",
"region": "IN",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Deep learning (DL) based language models achieve high performance on various benchmarks for Natural Language Inference (NLI). And at this time, symbolic approaches to NLI are receiving less attention. Both approaches (symbolic and DL) have their advantages and weaknesses. However, currently, no method combines them in a system to solve the task of NLI. To merge symbolic and deep learning methods, we propose an inference framework called NeuralLog, which utilizes both a monotonicity-based logical inference engine and a neural network language model for phrase alignment. Our framework models the NLI task as a classic search problem and uses the beam search algorithm to search for optimal inference paths. Experiments show that our joint logic and neural inference system improves accuracy on the NLI task and can achieve state-of-art accuracy on the SICK and MED datasets.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Deep learning (DL) based language models achieve high performance on various benchmarks for Natural Language Inference (NLI). And at this time, symbolic approaches to NLI are receiving less attention. Both approaches (symbolic and DL) have their advantages and weaknesses. However, currently, no method combines them in a system to solve the task of NLI. To merge symbolic and deep learning methods, we propose an inference framework called NeuralLog, which utilizes both a monotonicity-based logical inference engine and a neural network language model for phrase alignment. Our framework models the NLI task as a classic search problem and uses the beam search algorithm to search for optimal inference paths. Experiments show that our joint logic and neural inference system improves accuracy on the NLI task and can achieve state-of-art accuracy on the SICK and MED datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Currently, many NLI benchmarks' state-of-the-art systems are exclusively deep learning (DL) based language models (Devlin et al., 2019; Lan et al., 2020; Liu et al., 2020; Yin and Sch\u00fctze, 2017) . These models often contain a large number of parameters, use high-quality pre-trained embeddings, and are trained on large-scale datasets, which enable them to handle diverse and large test data robustly. However, several experiments show that DL models lack generalization ability, adopt fallible syntactic heuristics, and show exploitation of annotation artifacts (Glockner et al., 2018; McCoy et al., 2019; Gururangan et al., 2018) . On the other hand, there are logic-based systems that use symbolic reasoning and semantic formalism to solve NLI (Abzianidze, 2017; Mart\u00ednez-G\u00f3mez et al., 2017; * The first two authors have equal contribution Figure 1 : Analogy between path planning and an entailment inference path from the premise A motorcyclist with a red helmet is riding a blue motorcycle down the road to the hypothesis A motorcyclist is riding a motorbike along a roadway. Yanaka et al., 2018; Hu et al., 2020) . These systems show high precision on complex inferences involving difficult linguistic phenomena and present logical and explainable reasoning processes. However, these systems lack background knowledge and do not handle sentences with syntactic variations well, which makes them poor competitors with state-ofthe-art DL models. Both DL and logic-based systems show a major issue with NLI models: they are too one-dimensional (either purely DL or purely logic), and no method has combined these two ap-proaches together for solving NLI. This paper makes several contributions, as follows: first, we propose a new framework in section 3 for combining logic-based inference with deeplearning-based network inference for better performance on conducting natural language inference. 
We model an NLI task as a path-searching problem between the premises and the hypothesis. We use beam-search to find an optimal path that can transform a premise to a hypothesis through a series of inference steps. This way, different inference modules can be inserted into the system. For example, DL inference modules will handle inferences with diverse syntactic changes and logic inference modules will handle inferences that require complex reasoning. Second, we introduce a new method in section 4.3 to handle syntactic variations in natural language through sequence chunking and DL based paraphrase detection. We evaluate our system in section 6 by conducting experiments on the SICK and MED datasets. Experiments show that joint logical and neural reasoning show state-of-art accuracy and recall on these datasets.",
"cite_spans": [
{
"start": 114,
"end": 135,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 136,
"end": 153,
"text": "Lan et al., 2020;",
"ref_id": "BIBREF14"
},
{
"start": 154,
"end": 171,
"text": "Liu et al., 2020;",
"ref_id": "BIBREF17"
},
{
"start": 172,
"end": 194,
"text": "Yin and Sch\u00fctze, 2017)",
"ref_id": "BIBREF31"
},
{
"start": 563,
"end": 586,
"text": "(Glockner et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 587,
"end": 606,
"text": "McCoy et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 607,
"end": 631,
"text": "Gururangan et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 747,
"end": 765,
"text": "(Abzianidze, 2017;",
"ref_id": "BIBREF0"
},
{
"start": 766,
"end": 794,
"text": "Mart\u00ednez-G\u00f3mez et al., 2017;",
"ref_id": "BIBREF21"
},
{
"start": 795,
"end": 795,
"text": "",
"ref_id": null
},
{
"start": 1082,
"end": 1102,
"text": "Yanaka et al., 2018;",
"ref_id": "BIBREF30"
},
{
"start": 1103,
"end": 1119,
"text": "Hu et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 844,
"end": 852,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Perhaps the closest systems to NeuralLog are Yanaka et al. (2018) , MonaLog (Hu et al., 2020) , and Hy-NLI (Kalouli et al., 2020) . Using Mart\u00ednez-G\u00f3mez et al. (2016) to work with logic representations derived from CCG trees, Yanaka et al. (2018) proposed a framework that can detect phrase correspondences for a sentence pair, using natural deduction on semantic relations and can thus extract various paraphrases automatically. Their experiments show that assessing phrase correspondences helps improve NLI accuracy. Our system uses a similar methodology to solve syntactic variation inferences, where we determine if two phrases are paraphrases. Our method is rather different on this point, since we call on neural language models to detect paraphrases between two sentences. We feel that it would be interesting to compare the systems on a more theoretical level, but we have not done the comparison in this paper.",
"cite_spans": [
{
"start": 45,
"end": 65,
"text": "Yanaka et al. (2018)",
"ref_id": "BIBREF30"
},
{
"start": 76,
"end": 93,
"text": "(Hu et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 107,
"end": 129,
"text": "(Kalouli et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 138,
"end": 166,
"text": "Mart\u00ednez-G\u00f3mez et al. (2016)",
"ref_id": "BIBREF20"
},
{
"start": 226,
"end": 246,
"text": "Yanaka et al. (2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "NeuralLog inherits the use of polarity marking found in MonaLog (Hu et al., 2020) . (However, we use the dependency-based system of Chen and Gao (2021) instead of the CCG-based system of Hu and Moss (2018) .) MonaLog did propose some integration with neural models, using BERT when logic failed to find entailment or contradiction. We are doing something very different, using neural models to detect paraphrases at several levels of \"chunking\". In addition, the exact algorithms found in Sections 3 and 4 are new here. In a sense, our work on alignment in NLI goes back to MacCartney and Manning (2009) where alignment was used to find a chain of edits that changes a premise to a hypothesis, but our work uses much that simply was not available in 2009.",
"cite_spans": [
{
"start": 64,
"end": 81,
"text": "(Hu et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 132,
"end": 151,
"text": "Chen and Gao (2021)",
"ref_id": "BIBREF5"
},
{
"start": 187,
"end": 205,
"text": "Hu and Moss (2018)",
"ref_id": "BIBREF12"
},
{
"start": 574,
"end": 603,
"text": "MacCartney and Manning (2009)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Hy-NLI is a hybrid system that makes inferences using either symbolic or deep learning models based on how linguistically challenging a pair of sentences is. The principle Hy-NLI followed is that deep learning models are better at handling sentences that are linguistically less complex, and symbolic models are better for sentences containing hard linguistic phenomena. Although the system integrates both symbolic and neural methods, its decision process is still separate, in which the symbolic and deep learning sides make decisions without relying on the other side. Differently, our system incorporates logical inferences and neural inferences as part of the decision process, in which the two inference methods rely on each other to make a final decision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The key motivation behind our architecture and inference modules is that the Natural Language Inference task can be modeled as a path planning problem. Path planning is a task for finding an optimal path traveling from a start point to a goal containing a series of actions. To formulate NLI as path planning, we define the premise as the start state and the hypothesis as the goal that needs to be reached. The classical path planning strategy applies expansions from the start state through some search algorithms, such as depth-first-search or Dijkstra search, until an expansion meets the goal. In a grid map, two types of action produce an expansion. The vertical action moves up and down, and the horizontal action moves left and right. Similarly, language inference also contains these two actions. Monotonicity reasoning is a vertical action, where the monotone inference moves up and simplifies a sentence, and the antitone inference moves down and makes a sentence more specific. Syntactic variation and synonym replacement are horizontal actions. They change the form of a sentence while maintaining the original mean- ing. Then, similar to path planning, we can continuously make inferences from the premise using a search algorithm to determine if the premise entails the hypothesis by observing whether one of the inferences can reach the hypothesis. If the hypothesis is reached, we can connect the list of inferences that transform a premise to a hypothesis to be the optimal path in NLI, a valid reasoning chain for entailment. Figure 1 shows an analogy between an optimal path for the classical grid path planning problem and an example of an optimal inference path for NLI. On the top, we have a reasoning process for natural language inference. From the premise, we can first delete the modifier with a red helmet, then delete blue to get a simplified sentence. 
Finally, we can paraphrase down the road to along a roadway in the premise to reach the hypothesis and conclude the entailment relationship between these two sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 1545,
"end": 1553,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "NLI As Path Planning",
"sec_num": "3.1"
},
{
"text": "Our system contains four components: (1) a polarity annotator, (2) three sentence inference modules, (3) a search engine, and (4) a sentence inference controller. Figure 2 shows a diagram of the full system. The system first annotates a sentence with monotonicity information (polarity marks) using Udep2Mono (Chen and Gao, 2021) . The polarity marks include monotone (\u2191), antitone (\u2193), and no monotonicity information (=) polarities. Next, the polarized parse tree is passed to the search engine. A beam search algorithm searches for the optimal inference path from a premise to a hypothesis. The search space is generated from three inference modules: lexical, phrasal, and syntactic variation. Through graph alignment, the sentence inference controller selects a inference module to apply to the premise and produce a set of new premises that potentially form entailment relations with the hypothesis. The system returns Entail if an inference path is found. Otherwise, the controller will determine if the premise and hypothesis form a contradiction by searching for counter example signatures and returns Contradict accordingly. If neither Entail nor Contradict is returned, the system returns Neutral.",
"cite_spans": [
{
"start": 309,
"end": 329,
"text": "(Chen and Gao, 2021)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 163,
"end": 171,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.2"
},
{
"text": "The system first annotates a given premise with monotonicity information using Udep2Mono, a polarity annotator that determines polarization of all constituents from universal dependency trees. The annotator first parses the premise into a binarized universal dependency tree and then conducts polarization by recursively marks polarity on each node . An example can be Every \u2191 healthy \u2193 person \u2193 plays \u2191 sports \u2191 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polarity Annotator",
"sec_num": "3.3"
},
{
"text": "To efficiently search for the optimal inference path from a premise P to a hypothesis H, we use a beam search algorithm which has the advantage of reducing search space by focusing on sentences with higher scores. To increase the search efficiency and accuracy, we add an inference controller that can guide the search direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Engine",
"sec_num": "3.4"
},
{
"text": "Scoring In beam search, a priority queue Q maintains the set of generated sentences. A core operation is the determination of the highest-scoring generated sentence for a given input under a learned scoring model. In our case, the maximum score is equivalent to the minimum distance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Engine",
"sec_num": "3.4"
},
{
"text": "y = arg max s\u2208S score(s, H) y = arg min s\u2208S dist(s, H)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Engine",
"sec_num": "3.4"
},
{
"text": "where H is the hypothesis and S is a set of generated sentences produced by the three (lexical, phrasal, syntactic variation) inference modules. We will present more details about these inference modules in section 4. We formulate the distance function as the Euclidean distance between the sentence embeddings of the premise and hypothesis. To obtain semantically meaningful sentence embeddings efficiently, we use Reimers and Gurevych (2019)'s language model, Sentence-BERT (SBERT), a modification of the BERT model. It uses siamese and triplet neural network structures to derive sentence embeddings which can be easily compared using distance functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Engine",
"sec_num": "3.4"
},
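{
"text": "The scoring-and-search loop above can be sketched as follows. This is an illustration only, not the authors' code: the bag-of-words embed function is a toy stand-in for SBERT embeddings, and all function names are hypothetical.

```python
import heapq
import math
from collections import Counter

def embed(sentence):
    # Stand-in for an SBERT sentence embedding: a sparse
    # bag-of-words count vector over lowercased tokens.
    return Counter(sentence.lower().split())

def dist(s, h):
    # Euclidean distance between two sparse vectors.
    es, eh = embed(s), embed(h)
    keys = set(es) | set(eh)
    return math.sqrt(sum((es[k] - eh[k]) ** 2 for k in keys))

def beam_search(premise, hypothesis, successors, beam_width=3, max_steps=5):
    # Maximizing score(s, H) is equivalent to minimizing dist(s, H):
    # keep the beam_width candidates closest to the hypothesis and
    # stop when one of them reaches the hypothesis exactly.
    beam = [(dist(premise, hypothesis), premise, [premise])]
    for _ in range(max_steps):
        expanded = []
        for _, sent, path in beam:
            for nxt in successors(sent):
                expanded.append((dist(nxt, hypothesis), nxt, path + [nxt]))
        if not expanded:
            break
        beam = heapq.nsmallest(beam_width, expanded, key=lambda t: t[0])
        if beam[0][0] == 0.0:
            return beam[0][2]  # inference path premise -> ... -> hypothesis
    return None
```

In the real system the successor function is supplied by the three inference modules and the embedding comes from SBERT; here even a successor that deletes one word at a time finds the path A tall man runs \u2192 A man runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Engine",
"sec_num": "3.4"
},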
{
"text": "In each iteration, the search algorithm expands the search space by generating a set of potential sentences using three inference modules: (1) lexical inference, (2) phrasal inference, and (3) syntactic variation inference. To guide the search engine to select the most applicable module, we designed a inference controller that can recommend which of the labels the overall algorithm should proceed with. For example, for a premise All animals eat food and a hypothesis All dogs eat food, only a lexical inference of animals to dogs would be needed. Then, the controller will apply the lexical inference to the premise, as we discuss below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Inference Controller",
"sec_num": "3.5"
},
{
"text": "The controller makes its decision based on graphbased representations for the premise and the hy-pothesis. We first build a sentence representation graph from parsed input using Universal Dependencies. Let V = V m \u222a V c be the set of vertices of a sentence representation graph, where V m represents the set of modifiers such as tall in Figure 5 , and V c represents the set of content words (words that are being modified) such as man in Figure 5 . While content words in V c could modify other content words, modifiers in V m are not modified by other vertices. Let E be the set of directed edges in the form",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 345,
"text": "Figure 5",
"ref_id": "FIGREF4"
},
{
"start": 439,
"end": 447,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Sentence Representation Graph",
"sec_num": "3.5.1"
},
{
"text": "v c , v m such that v m \u2208 V m and v c \u2208 V c .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Representation Graph",
"sec_num": "3.5.1"
},
{
"text": "A sentence representation graph is then defined as a tuple G = V, E . Figure 3a shows an example graph.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 79,
"text": "Figure 3a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Representation Graph",
"sec_num": "3.5.1"
},
{
"text": "To observe the differences between two sentences, we rely on graph alignment between two sentence representation graphs. We first align nodes from subjects, verbs and objects, which constitutes what we call a component level. Define G p as the graph for a premise and G h as the graph for a hypothesis. Then, C p and C h are component level nodes from the two graphs. We take the Cartesian product",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Alignment",
"sec_num": "3.5.2"
},
{
"text": "C p \u00d7 C h = {(c p , c h ) : c p \u2208 C p , c h \u2208 C h }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Alignment",
"sec_num": "3.5.2"
},
{
"text": "In the first round, we recursively pair the child nodes of each c p to child nodes of each c h . We compute word similarity between two child nodes c i p and c i h and eliminate pairs with non-maximum similarity. We denote the new aligned pairs as a set A * . At the second round, we iterate through the aligned pairs in A * . If multiple child nodes from the first graph are paired to a child node in the second graph, we only keep the pair with maximum word similarity. In the final round, we perform the same check for each child node in the first graph to ensure that there are no multiple child nodes from the second graph paired to it. Figure 3b shows a brief visualization of the alignment process.",
"cite_spans": [],
"ref_spans": [
{
"start": 642,
"end": 651,
"text": "Figure 3b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graph Alignment",
"sec_num": "3.5.2"
},
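{
"text": "The three-round pruning can be sketched as follows (illustrative only; sim is a placeholder for the word-similarity function, and the helper name is hypothetical):

```python
def align_children(children_p, children_h, sim):
    # Round 1: pair every premise child with its highest-similarity
    # hypothesis child (pairs with non-maximum similarity are dropped).
    best = {}
    for cp in children_p:
        if children_h:
            s, ch = max((sim(cp, ch), ch) for ch in children_h)
            best[cp] = (ch, s)
    # Round 2: if several premise children were paired to the same
    # hypothesis child, keep only the maximum-similarity pair.
    claimed = {}
    for cp, (ch, s) in best.items():
        if ch not in claimed or s > claimed[ch][1]:
            claimed[ch] = (cp, s)
    # Round 3: by construction each side now appears at most once,
    # so the resulting alignment is one-to-one in both directions.
    return {cp: ch for ch, (cp, _) in claimed.items()}
```

The controller applies this per level, starting from subject/verb/object component nodes and recursing into their children.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Alignment",
"sec_num": "3.5.2"
},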
{
"text": "After aligning the premise graph G p with hypothesis graph G h , the controller checks through each node in the two graphs. If a node does not get aligned, the controller considers to delete the node or insert it depending on which graph the node belongs to and recommends phrasal inference. If a node is different from its aligned node, the controller recommends lexical inference. If additional lexical or phrasal inferences are detected under this node, the controller decides that there is a more complex transition under this node and rec- ommends a syntactic variation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "inference Module Recommendation",
"sec_num": "3.5.3"
},
{
"text": "We determine whether the premise and the hypothesis contradict each other inside the controller by searching for potential contradiction transitions from the premise to the hypothesis. For instance, a transition in the scope of the quantifier (a \u2212\u2192 no) from the same subject could be what we call a contradiction signature (possible evidence for a contradiction). With all the signatures, the controller decides if they can form a contradiction as a whole. To avoid situations when multiple signatures together fail to form a complete contradiction, such as double negation, the controller checks through the contradiction signatures to ensure a contradiction. For instance, in the verb pair (not remove, add), the contradiction signature not would cancel the verb negation contradiction signature from remove to add so the pair as a whole would not be seen as a contradiction. Nevertheless, other changes from the premise to the hypothesis may change the meaning of the sentence. Hence, our controller would go through other transitions to make sure the meaning of the sentence does not change when the contradiction sign is valid. For example, in the neutral pair P: A person is eating and H: No tall person is eating, the addition of tall would be detected by our controller. But the aligned word of the component it is applied to, person in P, has been marked downward monotone. So this transition is invalid. This pair would then be classified as neutral. For P2 and H2 in Figure 4 , the controller notices the contradictory quantifier change around the subject man. The subject man in P2 is upward monotone so the deletion of tall is valid. Our controller also detects the meaning transition from signature type example quantifier negation no dogs =\u21d2 some dogs verb negation is eating =\u21d2 is not eating noun negation some people =\u21d2 nobody action contradiction is sleeping =\u21d2 is running direction contradiction The turtle is following the fish =\u21d2",
"cite_spans": [],
"ref_spans": [
{
"start": 1478,
"end": 1486,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Contradiction Detection",
"sec_num": "3.5.4"
},
{
"text": "The fish is following the turtle down the road to inside the building, which affects the sentence's meaning and cancels the previous contradiction signature. The controller thus will not classify P2 and H2 as a pair of contradiction. 4 Inference Generation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contradiction Detection",
"sec_num": "3.5.4"
},
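{
"text": "A toy version of the signature-cancellation check might look like this (the word lists are invented examples for illustration, not the system's actual knowledge):

```python
# Each differing aligned pair is mapped to a signature; an even number
# of negation signatures cancels out (double negation), mirroring the
# (not remove, add) example above.
QUANTIFIER_NEGATIONS = {('a', 'no'), ('no', 'a'), ('some', 'no'), ('no', 'some')}
ACTION_CONTRADICTIONS = {('sleeping', 'running'), ('running', 'sleeping')}

def is_contradiction(transitions):
    # transitions: (premise_word, hypothesis_word) pairs that differ.
    negations = 0
    action = False
    for p, h in transitions:
        if (p, h) in QUANTIFIER_NEGATIONS:
            negations += 1
        elif (p, h) in ACTION_CONTRADICTIONS:
            action = True
        elif 'not' in (p, h):
            negations += 1  # inserting or deleting 'not' flips polarity
    if action:
        return negations % 2 == 0
    return negations % 2 == 1
```

The real controller additionally verifies, as described above, that the remaining transitions preserve the sentence's meaning before committing to Contradict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contradiction Detection",
"sec_num": "3.5.4"
},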
{
"text": "Lexical inference is word replacement based on monotonicity information for key-tokens including nouns, verbs, numbers, and quantifiers. The system uses lexical knowledge bases including Word-Net (Miller, 1995) and ConceptNet (Liu and Singh, 2004) . From the knowledge bases, we extract four word sets: hypernyms, hyponyms, synonyms, and antonyms. Logically, if a word has a monotone polarity (\u2191), it can be replaced by its hypernyms. For example, swim \u2264 move; then swim can be replaced with move. If a word has an antitone polarity (\u2193), it can be replaced by its hyponyms. For example, flower \u2265 rose. Then, flower can be replaced with rose. We filter out irrelevant words from the knowledge bases that do not appear in the hypothesis. Additionally, we handcraft knowledge relations for words like quantifiers and prepositions that do not have sufficient taxonomies from knowledge bases. Some handcrafted relations include: all = every = each \u2264 most \u2264 many \u2264 several \u2264 some = a, up \u22a5 down.",
"cite_spans": [
{
"start": 187,
"end": 210,
"text": "Word-Net (Miller, 1995)",
"ref_id": null
},
{
"start": 226,
"end": 247,
"text": "(Liu and Singh, 2004)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Monotonicity Inference",
"sec_num": "4.1"
},
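{
"text": "A minimal sketch of this replacement rule, with tiny hand-filled dictionaries standing in for the WordNet and ConceptNet lookups (the helper name is hypothetical):

```python
# Illustrative stand-ins for WordNet / ConceptNet taxonomies.
HYPERNYMS = {'swim': ['move'], 'rose': ['flower']}
HYPONYMS = {'flower': ['rose'], 'animal': ['dog']}

def lexical_inferences(tokens, polarities):
    # polarities[i] is 'up' (monotone), 'down' (antitone) or '='.
    # A monotone token may be replaced by its hypernyms, an antitone
    # token by its hyponyms; each replacement yields a new sentence.
    results = []
    for i, (tok, pol) in enumerate(zip(tokens, polarities)):
        if pol == 'up':
            candidates = HYPERNYMS.get(tok, [])
        elif pol == 'down':
            candidates = HYPONYMS.get(tok, [])
        else:
            candidates = []
        for c in candidates:
            results.append(tokens[:i] + [c] + tokens[i + 1:])
    return results
```

For instance, under Every \u2191 animal \u2193 swims \u2191, the antitone animal may be replaced by its hyponym dog, and the monotone verb by a hypernym.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Monotonicity Inference",
"sec_num": "4.1"
},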
{
"text": "Phrasal replacements are for phrase-level monotonicity inference. For example, with a polarized sentence A \u2191 woman \u2191 who \u2191 is \u2191 beautiful \u2191 is \u2191 walking \u2191 in \u2191 the \u2191 rain = , the monotone mark \u2191 on woman allows an upward inference: woman woman who is beautiful, in which the relative clause who is beautiful is deleted. The system follows a set of phrasal monotonicity inference rules. For upward monotonicity inference, modifiers of a word are deleted. For downward monotonicity inference, modifiers are inserted to a word. The algorithm traverses down a polarized UD parse tree, deletes the modifier sub-tree if a node is monotone (\u2191), and inserts a new sub-tree if a node is antitone (\u2193). To insert new modifiers, the algorithm extracts a list of potential modifiers associated to a node from a modifier dictionary. The modifier dictionary is derived from the hypothesis and contains wordmodifier pairs for each dependency relation. Below is an example of a modifier dictionary from There are no beautiful flowers that open at night: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrasal Monotonicity Inference",
"sec_num": "4.2"
},
{
"text": "We categorize linguistic changes between a premise and a hypothesis that cannot be inferred from monotonicity information as syntactic variations. For example, a change from red rose to a rose which is red is a syntactic variation. Many logical systems rely on handcrafted rules and manual transformation to enable the system to use syntactic variations. However, without accurate alignments between the two sentences, these methods are not robust enough, and thus are difficult to scale up for wide-coverage input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Variation Inference",
"sec_num": "4.3"
},
{
"text": "Recent development of pretrained transformerbased language models are showing state-of-art performance on multiple benchmarks for Natural Language Understanding (NLU) including the task for paraphrase detection (Devlin et al., 2019; Lan et al., 2020; Liu et al., 2020) exemplify phrasal knowledge of syntactic variation. We propose a method that incorporates transformer-based language models to robustly handle syntactic variations. Our method first uses a sentence chunker to decompose both the premise and the hypothesis into chunks of phrases and then forms a Cartesian product of chunk pairs. For each pair, we use a transformer model to calculate the likelihood of a pair of chunks being a pair of paraphrases.",
"cite_spans": [
{
"start": 211,
"end": 232,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 233,
"end": 250,
"text": "Lan et al., 2020;",
"ref_id": "BIBREF14"
},
{
"start": 251,
"end": 268,
"text": "Liu et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Variation Inference",
"sec_num": "4.3"
},
{
"text": "To obtain phrase-level chunks from a sentence, we build a sequence chunker to extract chunks from a sentence using its universal dependency information. Instead of splitting a sentence into chunks, our chunker composes word tokens recursively to form meaningful chunks. First, we construct a sentence representation graph of a premise from the controller. Recall that a sentence representation graph is defined as G",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Chunking",
"sec_num": "4.3.1"
},
{
"text": "= V, E , where V = V m \u222a V c is the set of modifiers (V m ) and content words (V c )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Chunking",
"sec_num": "4.3.1"
},
{
"text": ", and E is the set of directed edges. To generate the chunk for a content word in V c , we arrange its modifiers, which are nodes it points to, together with the content word by their word orders in the original sentence to form a word chain. Modifiers that make the chain disconnected are discarded because they are not close enough to be part of the chunk. For instance, the chunk from the verb eats in the sentence A person eats the food carefully would not contain its modifier carefully because they are separated by the object the food. If the sentence is stated as A person carefully eats the food, carefully now is next to eat and it would be included in the chunk of the verb eat. To obtain chunks for a sentence, we iterate through each main component node, which is a node for subject, verb, or object, in the sentence's graph representation and construct verb phrases by combining verbs' chunks with their paired objects' chunks. There are cases when a word modifies other words and gets modified in the same time. They often occur when a chunk serves as a modifier. For example, in The woman in a pink dress is dancing, the phrase in a pink dress modifies woman whereas dress is modified by in, a and pink. Then edges from dress to in, a, pink with the edge from woman to dress can be drawn. Chunks in a pink dress and the woman in a Noun Phrase Variation A man with climbing equipment is hanging A man with equipment used for climbing is from rock which is vertical and white hanging from a white, vertical rock. Here the left graph represents the premise: A tall man is running down the road. The right graph represents the hypothesis A man who is tall is running along a roadway. The blue region represents phrase chunks extracted by the chunker from the graph. An alignment score is calculated for each pair of chunks. The pair tall man, man who is tall is a pair of paraphrases, and thus has a high alignment score (0.98). 
The pair tall man, running along a road way has two unrelated phrases, and thus has a low alignment score(0.03).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Chunking",
"sec_num": "4.3.1"
},
{
"text": "pink dress will be generated for dress and woman respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Chunking",
"sec_num": "4.3.1"
},
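{
"text": "The contiguity rule for chunk construction can be sketched as follows (an illustration with a hypothetical helper, using 0-based token positions):

```python
def build_chunk(tokens, head_index, modifier_indices):
    # Grow the chunk outward from the content word, absorbing only
    # modifiers adjacent to the current span; a modifier separated
    # from the span (e.g. by an intervening object) is discarded.
    keep = {head_index}
    for idx in sorted(modifier_indices, key=lambda i: abs(i - head_index)):
        if idx - 1 in keep or idx + 1 in keep:
            keep.add(idx)
    return ' '.join(tokens[i] for i in sorted(keep))
```

Run on the example above, the modifier carefully is discarded in A person eats the food carefully but kept in A person carefully eats the food.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Chunking",
"sec_num": "4.3.1"
},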
{
"text": "After the chunker outputs a set of chunks from a generated sentence and from the hypothesis, the system selects chunk pairs that are aligned by computing an alignment score for each pair of chunks. Formally, we define C s as the set of chunks from a generated sentence and C h as the set of chunks from the hypothesis. We build the Cartesian product from C s and C h , denoted C s \u00d7 C h . For each chunk pair (c si , c hj ) \u2208 C s \u00d7 C h , we compute an alignment score \u03b1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Phrase Alignment",
"sec_num": "4.3.2"
},
{
"text": "y c si ,c hi = ALBERT.forward( c si , c hi ) \u03b1 c si ,c hi = p(c si | c hj ) \u03b1 c si ,c hi = exp y c si ,c hi 0 2 j=1 exp y c si ,c hi j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Phrase Alignment",
"sec_num": "4.3.2"
},
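{
"text": "The softmax step can be illustrated as below; the two-logit output and its class order (paraphrase probability read off at index 0) are assumptions of this sketch, not details taken from the paper:

```python
import math

def paraphrase_probability(logits):
    # logits: the two-class output of a paraphrase-detection head.
    # Softmax normalizes the logits into probabilities; the index of
    # the paraphrase class is assumed to be 0 here.
    exps = [math.exp(z) for z in logits]
    return exps[0] / sum(exps)

def is_syntactic_variation(logits, threshold=0.85):
    # The system records the chunk pair as a syntactic variation
    # when the alignment score alpha exceeds 0.85.
    return paraphrase_probability(logits) > threshold
```

In the real system the logits come from ALBERT fine-tuned on MRPC; only the softmax and thresholding are shown here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Phrase Alignment",
"sec_num": "4.3.2"
},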
{
"text": "If \u03b1 > 0.85, the system records this pair of phrases as a pair of syntactic variation. To calculate the alignment score, we use an ALBERT (Lan et al., 2020) model for the paraphrase detection task, fine tuned on the Microsoft Research Paraphrase Corpus (Dolan and Brockett, 2005) . We first pass the chunk pair to ALBERT to obtain the logits. Then we apply a softmax function to the logits to get the final probability. A full demonstration of the alignment between chunks is shown in Figure 5 .",
"cite_spans": [
{
"start": 138,
"end": 156,
"text": "(Lan et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 253,
"end": 279,
"text": "(Dolan and Brockett, 2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 485,
"end": 493,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Monolingual Phrase Alignment",
"sec_num": "4.3.2"
},
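The scoring step described above reduces to a softmax over the classifier's two logits, keeping pairs whose paraphrase probability exceeds 0.85. A minimal sketch: the `forward` callable stands in for the fine-tuned ALBERT model, and the index of the paraphrase-class logit is an assumption.

```python
import math

PARAPHRASE = 0  # assumed index of the paraphrase-class logit

def alignment_score(logits: list[float]) -> float:
    """Softmax over the two classifier logits; the paraphrase-class
    probability is used as the alignment score alpha."""
    exps = [math.exp(v) for v in logits]
    return exps[PARAPHRASE] / sum(exps)

def aligned_pairs(sent_chunks, hyp_chunks, forward, threshold=0.85):
    """Score every pair in the Cartesian product C_s x C_h and keep the
    pairs whose alignment score exceeds the threshold (0.85 here)."""
    return [
        (cs, ch)
        for cs in sent_chunks
        for ch in hyp_chunks
        if alignment_score(forward(cs, ch)) > threshold
    ]
```

A stub `forward` returning hand-made logits is enough to exercise the selection logic without loading a model.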
{
"text": "The SICK (Marelli et al., 2014) dataset is an English benchmark that provides an in-depth evaluation of compositional distributional semantic models. It contains 10,000 English sentence pairs exhibiting a variety of lexical, syntactic, and semantic phenomena. Each sentence pair is annotated as Entailment, Contradiction, or Neutral. We use the 4,927 test problems for evaluation.",
"cite_spans": [
{
"start": 9,
"end": 31,
"text": "(Marelli et al., 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The SICK Dataset",
"sec_num": "5.1"
},
{
"text": "The Monotonicity Entailment Dataset (MED) is a challenge dataset designed to examine a model's ability to conduct monotonicity inference (Yanaka et al., 2019a) . There are 5,382 sentence pairs in MED: 1,820 upward inference problems, 3,270 downward inference problems, and 292 problems with no monotonicity information. MED's problems cover a variety of linguistic phenomena, such as lexical knowledge, reverse inference, conjunction and disjunction, conditionals, and negative polarity items. In the parser, we use a neural parsing model pretrained on the UD English GUM corpus (Zeldes, 2017), with a 90.0 LAS score (Zeman et al., 2018) . For Sentence-BERT, we selected the BERT-large model pre-trained on STS-B (Cer et al., 2017) . For ALBERT, we used textattack's ALBERT-base model pretrained on MRPC from transformers. For word alignment in the controller, we use Řehůřek and Sojka (2010)'s Gensim framework to calculate word similarity from pre-trained word embeddings. We evaluated our model on the SICK and MED datasets using the standard NLI evaluation metrics of accuracy, precision, and recall. Additionally, we conducted two ablation tests analyzing the contributions of the monotonicity inference modules and the syntactic variation module.",
"cite_spans": [
{
"start": 138,
"end": 160,
"text": "(Yanaka et al., 2019a)",
"ref_id": "BIBREF28"
},
{
"start": 604,
"end": 618,
"text": "(Zeldes, 2017)",
"ref_id": "BIBREF32"
},
{
"start": 633,
"end": 653,
"text": "(Zeman et al., 2018)",
"ref_id": "BIBREF33"
},
{
"start": 745,
"end": 763,
"text": "(Cer et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The MED Dataset",
"sec_num": "5.2"
},
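The word-similarity lookup in the controller reduces to cosine similarity between pre-trained embedding vectors, which is the quantity Gensim's `KeyedVectors.similarity` returns for a word pair. A minimal sketch with hand-made toy vectors (the real system loads pre-trained embeddings; the vector values below are purely illustrative):

```python
import math

def cosine_similarity(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two embedding vectors, the quantity that
    Gensim's KeyedVectors.similarity computes for a word pair."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d embeddings (illustrative values, not real pre-trained vectors)
emb = {
    "man":     [0.9, 0.1, 0.0],
    "cyclist": [0.8, 0.3, 0.1],
    "banana":  [0.0, 0.1, 0.9],
}
print(cosine_similarity(emb["man"], emb["cyclist"]))  # high (near 1)
print(cosine_similarity(emb["man"], emb["banana"]))   # low
```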
{
"text": "SICK Table 3 shows the experimental results on SICK. We compared our performance to several logic-based systems as well as two deep-learning based models. As the evaluation results show, our model achieves state-of-the-art performance on the SICK dataset. The best logic-based model is Abzianidze (2020) with 84.4 percent accuracy; the best DL-based model is Yin and Schütze (2017) with 87.1 percent accuracy. Our system outperforms both of them with 90.3 percent accuracy.\n\nTable 4: Results comparing our model to state-of-the-art NLI models evaluated on MED. Up, Down, and All stand for the accuracy on upward inference, downward inference, and the overall dataset.\nModel | Up | Down | All\nDeComp (Parikh et al., 2016) | 71.1 | 45.2 | 51.4\nESIM (Chen et al., 2017) | 66.1 | 42.1 | 53.8\nBERT (Devlin et al., 2019) | 82.7 | 22.8 | 44.7\nBERT+ (Yanaka et al., 2019a) | 76.0 | 70.3 | 71.6\nNeuralLog (ours) | 91.4 | 93.9 | 93.4\n\nCompared to Hu et al. (2020) + BERT, which also explores a way of combining logic-based and deep-learning based methods, our system",
"cite_spans": [
{
"start": 490,
"end": 506,
"text": "Hu et al. (2020)",
"ref_id": "BIBREF11"
},
{
"start": 643,
"end": 664,
"text": "(Parikh et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 685,
"end": 704,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 725,
"end": 746,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 768,
"end": 822,
"text": "(Yanaka et al., 2019a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 838,
"end": 845,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "shows higher accuracy, with a 4.92 percentage point increase. In addition, our system's accuracy is 3.8 percentage points higher than that of another hybrid system, Hy-NLI (Kalouli et al., 2020) . This performance shows that our framework for joint logic and neural reasoning achieves state-of-the-art results on inference and outperforms existing systems.",
"cite_spans": [
{
"start": 167,
"end": 189,
"text": "(Kalouli et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "Ablation Test In addition to the standard evaluation on SICK, we conducted two ablation tests. The results are included in Table 3 . First, we removed the syntactic variation module that uses a neural network for alignment (\u2212ALBERT-SV). As the table shows, the accuracy drops 18.9 percentage points. This large drop indicates that the syntactic variation module plays a major part in our overall inference process. The result also supports our hypothesis that deep-learning methods for inference can significantly improve the performance of traditional logic-based systems. Second, when we removed the monotonicity-based inference modules (\u2212Monotonicity), the accuracy shows another large decrease, with a 15.6 percentage point drop. This result demonstrates the important contribution of the logic-based inference modules toward the overall state-of-the-art performance. Compared to the previous ablation test, which removes the neural-network based syntactic variation module, the accuracy does not change much (a difference of only 3.3 percentage points). This similar performance indicates that neural network inference alone cannot achieve state-of-the-art results on the SICK dataset, and that additional guidance and constraints from the logic-based methods are essential parts of our framework. Overall, we believe the results reveal that the logic and neural modules contribute comparably to the final performance and that both are indispensable.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "MED Table 4 shows the experimental results on MED. We compared against multiple deep-learning based baselines. Here, DeComp and ESIM are trained on SNLI, and BERT is fine-tuned on MultiNLI. The BERT+ model is a BERT model fine-tuned on training data that combines the HELP dataset (Yanaka et al., 2019b) , a set of augmented examples for monotonicity reasoning, with the MultiNLI training set. Both models were tested in Yanaka et al. (2019a) . Overall, our system (NeuralLog) outperforms all DL-based baselines in accuracy by a significant margin. Compared to BERT+, our system performs better on both upward (+15.4) and downward (+23.6) inference, and shows significantly higher accuracy overall (+21.8).",
"cite_spans": [
{
"start": 287,
"end": 309,
"text": "(Yanaka et al., 2019b)",
"ref_id": "BIBREF29"
},
{
"start": 421,
"end": 442,
"text": "Yanaka et al. (2019a)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 4,
"end": 11,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "The good performance on MED validates our system's ability to perform accurate and robust monotonicity-based inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "For entailment, a large number of inference errors are due to incorrect dependency parse trees from the parser. For example, P: A black, red, white and pink dress is being worn by a woman, H: A dress, which is black, red, white and pink is being worn by a woman contains long conjunctions that cause the parser to produce two separate trees for the same sentence. Second, a lack of sufficient background knowledge causes the system to fail to make inferences that would be needed to obtain a correct label. For example, P: One man is doing a bicycle trick in midair, H: The cyclist is performing a trick in the air requires the system to know that a man doing a bicycle trick is a cyclist. This kind of knowledge can only be injected into the system either by handcrafting rules or by extracting it from the training data. For contradiction, our analysis reveals inconsistencies in the SICK dataset. We found multiple sentence pairs that have the same syntactic and semantic structures but are labeled differently. For example, P: A man is folding a tortilla, H: A man is unfolding a tortilla has gold label Neutral, while P: A man is playing a guitar, H: A man is not playing a guitar has gold label Contradiction. These two pairs of sentences clearly have similar structures but inconsistent gold labels; both labels would be reasonable depending on whether the two subjects refer to the same entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.3"
},
{
"text": "In this paper, we presented a framework that combines logic-based inference with deep-learning based inference for improved Natural Language Inference performance. The main idea is to use a search engine and an alignment-based controller to dispatch the two inference methods (logic and deep learning) to their areas of expertise. This way, logic-based modules can solve inferences that require logical rules, and deep-learning based modules can solve inferences that contain syntactic variations, which are easier for neural networks. Our system uses a beam search algorithm and three inference modules (lexical, phrasal, and syntactic variation) to find an optimal path that transforms a premise into a hypothesis. It handles syntactic variations in natural sentences by running the neural network on phrase chunks, and it determines contradictions by searching for contradiction signatures (evidence for contradiction). Evaluations on SICK and MED show that our proposed framework for joint logical and neural reasoning can achieve state-of-the-art accuracy on these datasets. Our ablation tests show that neither logic nor neural reasoning alone fully solves Natural Language Inference, but a joint operation between them brings improved performance. For future work, one plan is to extend our system with more logical inference methods, such as those using dynamic semantics (Haruta et al., 2020) , and more neural inference methods, such as those for commonsense reasoning (Levine et al., 2020) . We also plan to implement a learning method that allows the system to learn from mistakes on a training dataset and automatically expand or correct its rules and knowledge bases, similar to Abzianidze (2020)'s work.",
"cite_spans": [
{
"start": 1395,
"end": 1416,
"text": "(Haruta et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 1491,
"end": 1512,
"text": "(Levine et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their insightful comments. We also thank Dr. Michael Wollowski from Rose-hulman Institute of Technology for his helpful feedback on this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "LangPro: Natural language theorem prover",
"authors": [
{
"first": "",
"middle": [],
"last": "Lasha Abzianidze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "115--120",
"other_ids": {
"DOI": [
"10.18653/v1/D17-2020"
]
},
"num": null,
"urls": [],
"raw_text": "Lasha Abzianidze. 2017. LangPro: Natural language theorem prover. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 115- 120, Copenhagen, Denmark. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning as abduction: Trainable natural logic theorem prover for natural language inference",
"authors": [],
"year": null,
"venue": "Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "20--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lasha Abzianidze. 2020. Learning as abduction: Train- able natural logic theorem prover for natural lan- guage inference. In Proceedings of the Ninth Joint Conference on Lexical and Computational Seman- tics, pages 20-31, Barcelona, Spain (Online). Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Representing meaning with a combination of logical and distributional models",
"authors": [
{
"first": "I",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Pengxiang",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics",
"volume": "42",
"issue": "4",
"pages": "763--808",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00266"
]
},
"num": null,
"urls": [],
"raw_text": "I. Beltagy, Stephen Roller, Pengxiang Cheng, Katrin Erk, and Raymond J. Mooney. 2016. Represent- ing meaning with a combination of logical and distributional models. Computational Linguistics, 42(4):763-808.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enhanced LSTM for natural language inference",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1657--1668",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1152"
]
},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1657-1668, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Monotonicity marking from universal dependency trees",
"authors": [
{
"first": "Zeming",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Qiyue",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeming Chen and Qiyue Gao. 2021. Monotonicity marking from universal dependency trees. CoRR, abs/2104.08659.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatically constructing a corpus of sentential paraphrases",
"authors": [
{
"first": "B",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B. Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Breaking NLI systems with sentences that require simple lexical inferences",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Glockner",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "650--655",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2103"
]
},
"num": null,
"urls": [],
"raw_text": "Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that re- quire simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 650-655, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Annotation artifacts in natural language inference data",
"authors": [
{
"first": "Swabha",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "107--112",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2017"
]
},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural lan- guage inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Combining event semantics and degree semantics for natural language inference",
"authors": [
{
"first": "Izumi",
"middle": [],
"last": "Haruta",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Mineshima",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Bekki",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1758--1764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Izumi Haruta, Koji Mineshima, and Daisuke Bekki. 2020. Combining event semantics and degree se- mantics for natural language inference. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 1758-1764, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "MonaLog: a lightweight system for natural language inference based on monotonicity",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Atreyee",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"S"
],
"last": "Moss",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Kuebler",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Society for Computation in Linguistics 2020",
"volume": "",
"issue": "",
"pages": "334--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Hu, Qi Chen, Kyle Richardson, Atreyee Mukher- jee, Lawrence S. Moss, and Sandra Kuebler. 2020. MonaLog: a lightweight system for natural language inference based on monotonicity. In Proceedings of the Society for Computation in Linguistics 2020, pages 334-344, New York, New York. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Polarity computations in flexible categorial grammar",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Moss",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "124--129",
"other_ids": {
"DOI": [
"10.18653/v1/S18-2015"
]
},
"num": null,
"urls": [],
"raw_text": "Hai Hu and Larry Moss. 2018. Polarity computations in flexible categorial grammar. In Proceedings of the Seventh Joint Conference on Lexical and Com- putational Semantics, pages 124-129, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Hy-NLI: a hybrid system for natural language inference",
"authors": [
{
"first": "Aikaterini-Lida",
"middle": [],
"last": "Kalouli",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Crouch",
"suffix": ""
},
{
"first": "Valeria",
"middle": [],
"last": "De Paiva",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5235--5249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aikaterini-Lida Kalouli, Richard Crouch, and Valeria de Paiva. 2020. Hy-NLI: a hybrid system for natural language inference. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 5235-5249, Barcelona, Spain (Online). Inter- national Committee on Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "SenseBERT: Driving some sense into BERT",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Levine",
"suffix": ""
},
{
"first": "Barak",
"middle": [],
"last": "Lenz",
"suffix": ""
},
{
"first": "Or",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Ori",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Padnos",
"suffix": ""
},
{
"first": "Or",
"middle": [],
"last": "Sharir",
"suffix": ""
},
{
"first": "Shai",
"middle": [],
"last": "Shalev-Shwartz",
"suffix": ""
},
{
"first": "Amnon",
"middle": [],
"last": "Shashua",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Shoham",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4656--4667",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.423"
]
},
"num": null,
"urls": [],
"raw_text": "Yoav Levine, Barak Lenz, Or Dagan, Ori Ram, Dan Padnos, Or Sharir, Shai Shalev-Shwartz, Amnon Shashua, and Yoav Shoham. 2020. SenseBERT: Driving some sense into BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4656-4667, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Conceptnet -a practical commonsense reasoning tool-kit",
"authors": [
{
"first": "H",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2004,
"venue": "BT Technology Journal",
"volume": "22",
"issue": "4",
"pages": "211--226",
"other_ids": {
"DOI": [
"10.1023/B:BTTJ.0000047600.45421.6d"
]
},
"num": null,
"urls": [],
"raw_text": "H. Liu and P. Singh. 2004. Conceptnet -a practi- cal commonsense reasoning tool-kit. BT Technology Journal, 22(4):211-226.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. RoBERTa: A robustly optimized BERT pretraining approach.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An extended model of natural logic",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Eighth International Conference on Computational Semantics (IWCS-8)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill MacCartney and Christopher D. Manning. 2009. An extended model of natural logic. In Proceedings of the Eighth International Conference on Computa- tional Semantics (IWCS-8), Tilburg, Netherlands.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A SICK cure for the evaluation of compositional distributional semantic models",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "216--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zampar- elli. 2014. A SICK cure for the evaluation of compo- sitional distributional semantic models. In Proceed- ings of the Ninth International Conference on Lan- guage Resources and Evaluation (LREC'14), pages 216-223, Reykjavik, Iceland. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "ccg2lambda: A compositional semantics system",
"authors": [
{
"first": "Pascual",
"middle": [],
"last": "Mart\u00ednez-G\u00f3mez",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Mineshima",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Bekki",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL 2016 System Demonstrations",
"volume": "",
"issue": "",
"pages": "85--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascual Mart\u00ednez-G\u00f3mez, Koji Mineshima, Yusuke Miyao, and Daisuke Bekki. 2016. ccg2lambda: A compositional semantics system. In Proceedings of ACL 2016 System Demonstrations, pages 85- 90, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "On-demand injection of lexical knowledge for recognising textual entailment",
"authors": [
{
"first": "Pascual",
"middle": [],
"last": "Mart\u00ednez-G\u00f3mez",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Mineshima",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Bekki",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "710--720",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascual Mart\u00ednez-G\u00f3mez, Koji Mineshima, Yusuke Miyao, and Daisuke Bekki. 2017. On-demand injec- tion of lexical knowledge for recognising textual en- tailment. In Proceedings of the 15th Conference of the European Chapter of the Association for Compu- tational Linguistics: Volume 1, Long Papers, pages 710-720, Valencia, Spain. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3428--3448",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1334"
]
},
"num": null,
"urls": [],
"raw_text": "Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Wordnet: A lexical database for english",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Commun. ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {
"DOI": [
"10.1145/219717.219748"
]
},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39-41.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A decomposable attention model for natural language inference",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2249--2255",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1244"
]
},
"num": null,
"urls": [],
"raw_text": "Ankur Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249-2255, Austin, Texas. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Stanza: A python natural language processing toolkit for many human languages",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuhui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "101--108",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-demos.14"
]
},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101- 108, Online. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Radim\u0159eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Cor- pora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45- 50, Valletta, Malta. ELRA. http://is.muni.cz/ publication/884893/en.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Can neural networks understand monotonicity reasoning?",
"authors": [
{
"first": "Hitomi",
"middle": [],
"last": "Yanaka",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Mineshima",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Bekki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Lasha",
"middle": [],
"last": "Abzianidze",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "31--40",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4804"
]
},
"num": null,
"urls": [],
"raw_text": "Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Ken- taro Inui, Satoshi Sekine, Lasha Abzianidze, and Jo- han Bos. 2019a. Can neural networks understand monotonicity reasoning? In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 31-40, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning",
"authors": [
{
"first": "Hitomi",
"middle": [],
"last": "Yanaka",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Mineshima",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Bekki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Lasha",
"middle": [],
"last": "Abzianidze",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)",
"volume": "",
"issue": "",
"pages": "250--255",
"other_ids": {
"DOI": [
"10.18653/v1/S19-1027"
]
},
"num": null,
"urls": [],
"raw_text": "Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Ken- taro Inui, Satoshi Sekine, Lasha Abzianidze, and Jo- han Bos. 2019b. HELP: A dataset for identifying shortcomings of neural models in monotonicity rea- soning. In Proceedings of the Eighth Joint Con- ference on Lexical and Computational Semantics (*SEM 2019), pages 250-255, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Acquisition of phrase correspondences using natural deduction proofs",
"authors": [
{
"first": "Hitomi",
"middle": [],
"last": "Yanaka",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Mineshima",
"suffix": ""
},
{
"first": "Pascual",
"middle": [],
"last": "Mart\u00ednez-G\u00f3mez",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Bekki",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "756--766",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1069"
]
},
"num": null,
"urls": [],
"raw_text": "Hitomi Yanaka, Koji Mineshima, Pascual Mart\u00ednez- G\u00f3mez, and Daisuke Bekki. 2018. Acquisition of phrase correspondences using natural deduction proofs. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long Papers), pages 756-766, New Orleans, Louisiana. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Taskspecific attentive pooling of phrase alignments contributes to sentence matching",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "699--709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin and Hinrich Sch\u00fctze. 2017. Task- specific attentive pooling of phrase alignments con- tributes to sentence matching. In Proceedings of the 15th Conference of the European Chapter of the As- sociation for Computational Linguistics: Volume 1, Long Papers, pages 699-709, Valencia, Spain. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zeldes",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "51",
"issue": "",
"pages": "581--612",
"other_ids": {
"DOI": [
"10.1007/s10579-016-9343-x"
]
},
"num": null,
"urls": [],
"raw_text": "Amir Zeldes. 2017. The GUM corpus: Creating mul- tilayer resources in the classroom. Language Re- sources and Evaluation, 51(3):581-612.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "CoNLL 2018 shared task: Multilingual parsing from raw text to Universal Dependencies",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "1--21",
"other_ids": {
"DOI": [
"10.18653/v1/K18-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Zeman, Jan Haji\u010d, Martin Popel, Martin Pot- thast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Mul- tilingual parsing from raw text to Universal Depen- dencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 1-21, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Overview system diagram of NeuralLog.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Sentence representation graph (b) Graph alignment visualizationFigure 3: (a) A sentence representation graph for A tall man is running down the road. (b) Visualization for the graph alignment. The lines between two words represent their similarity. The orange lines are the pairs with maximum similarities for a blue word. Through bi-directional alignment, we eliminate word pairs with nonmaximum similarity and gets the final alignment pairs.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Example of contradiction signatures. P1 and H1 form a contradiction. P2 and H2 does not form a contradiction because the meaning after the verb running has changed.",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "obl: [head: open, mod: at night] \u2022 amod: [head: flowers, mod: beautiful] \u2022 acl:relcl: [head: flowers, mod: that open at night]",
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"num": null,
"text": "A graph representation of the monolingual phrase alignment process.",
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "",
"html": null
},
"TABREF1": {
"content": "<table><tr><td>Type</td><td>Premise</td><td>Hypothesis</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Verb Phrase VariationTwo men are standing near the water and Two men are standing near the water and are holding fishing poles are holding tools used for fishing",
"html": null
},
"TABREF2": {
"content": "<table><tr><td>man</td><td>root</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Examples of phrasal alignments detected by the syntactic variation module",
"html": null
},
"TABREF4": {
"content": "<table><tr><td>: Performance on the SICK test set</td></tr><tr><td>from Stanford's natural language analysis pack-</td></tr><tr><td>age, Stanza</td></tr></table>",
"type_str": "table",
"num": null,
"text": "",
"html": null
}
}
}
}