{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:47:46.899105Z"
},
"title": "SEMA: Text Simplification Evaluation through Semantic Alignment",
"authors": [
{
"first": "Xuan",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Huizhou",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Kexin",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yiyang",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Text simplification is an important branch of natural language processing. At present, methods used to evaluate the semantic retention of text simplification are mostly based on string matching. We propose the SEMA (text Simplification Evaluation Measure through Semantic Alignment), which is based on semantic alignment. Semantic alignments include complete alignment, partial alignment and hyponymy alignment. Our experiments show that the evaluation results of SEMA have a high consistency with human evaluation for the simplified corpus of Chinese and English news texts.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Text simplification is an important branch of natural language processing. At present, methods used to evaluate the semantic retention of text simplification are mostly based on string matching. We propose the SEMA (text Simplification Evaluation Measure through Semantic Alignment), which is based on semantic alignment. Semantic alignments include complete alignment, partial alignment and hyponymy alignment. Our experiments show that the evaluation results of SEMA have a high consistency with human evaluation for the simplified corpus of Chinese and English news texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text simplification is a rewriting operation that aims to improve the comprehensibility of the text by modifying, deleting, simplifying humanreadable text. It tried to retain the core semantics of original text while improving readability of the text. In natural language processing tasks, long and complex sentences will bring about various problems, for example, the quality of grammatical analysis depends on the length and the grammar difficulty of texts directly, and complex sentences may cause ambiguity during machine translation (Chandrasekar and Srinivas, 1997) . Therefore, text simplification is often used in the preprocessing steps of other NLP tasks. In addition, text simplification is also used to rewrite reading materials for children, second language learners, readers with aphasia and other people with low reading comprehension skills (Carroll J, 1998) . As related researches are in the early stage, the results of text simplification cannot meet the needs of the audience well. One of the difficulties is the lack of reasonable text simplification evaluation indicators. At present, most evaluation methods are conducted * Corresponding author: [email protected] by experts or machine translation evaluation indicators. Therefore, researches on how to analyze the results of text simplification has important application value.",
"cite_spans": [
{
"start": 538,
"end": 571,
"text": "(Chandrasekar and Srinivas, 1997)",
"ref_id": "BIBREF3"
},
{
"start": 857,
"end": 874,
"text": "(Carroll J, 1998)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Text simplification mainly includes vocabulary and semantic structure simplification. The main operation is text segmentation, that is, rewriting a single sentence into one or more simpler sentences while preserving the main semantics (Sulem et al., 2018b) . Text simplification has gradually attracted attention in recent years (Xu et al., 2016; Saggion and Horacio, 2017; Saggion et al., 2012) , it should be evaluated from three aspects: fluency (ie: grammatical correctness), correctness (ie: semantic retention) and simplicity (ie: degree of text simplification). Initially, experts can only evaluated the results through three aspects, and the final score is based on the Likert scale 1 ; Later, someone proposed to use readability indicators to evaluate text simplification, but because the readability indicators are designed for passage-level texts, the application effects at the sentence level are not very prominent (Coster and Kauchak, 2011) . In recent years, the evaluation indicators of machine translation have been increasingly used in the evaluation of text simplification, including BLEU, ROUGE based on N-gram and WER, TER based on edit distance.",
"cite_spans": [
{
"start": 235,
"end": 256,
"text": "(Sulem et al., 2018b)",
"ref_id": "BIBREF15"
},
{
"start": 329,
"end": 346,
"text": "(Xu et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 347,
"end": 373,
"text": "Saggion and Horacio, 2017;",
"ref_id": "BIBREF11"
},
{
"start": 374,
"end": 395,
"text": "Saggion et al., 2012)",
"ref_id": "BIBREF12"
},
{
"start": 928,
"end": 954,
"text": "(Coster and Kauchak, 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In machine translation tasks, BLEU is the most widely used evaluation indicators, which was proposed in 2002. The original purpose is to replace the manual evaluation of translation results. The quality of the machine translation task is mainly evaluated by evaluating the difference between the 1 Likert scale is one of the most commonly used scoring aggregate scales. It was developed by American social psychologist Likert in 1932 on the basis of the original aggregate scale. The scale consists of a set of statements. Each statement has five answers: \"strongly agree\", \"agree\", \"not necessary\", \"disagree\" and \"strongly disagree\", which are recorded as 5, 4, 3, 2, 1, and the final score is the sum of score for each aspect. output generated by model and the reference. It has low computational cost and is highly correlated with human evaluation, so it is widely used. Elior Sulem's experiments show that Since the main operation of text simplification is text segmentation, involving semantic structure splitting, BLEU did not show a high degree of relevance to manual evaluation in terms of grammar and semantic retention of 70 pairs of sentences (Sulem et al., 2018c) . In addition, in terms of simplicity assessment, BLEU shows a negative result which penalized simplified sentences highly.",
"cite_spans": [
{
"start": 1155,
"end": 1176,
"text": "(Sulem et al., 2018c)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "SARI is an evaluation indicator based on reference sentences proposed in 2016 (Xu et al., 2016) . It focuses on the aspect of words added, deleted, and retained, but it cannot evaluate sentences at semantic level. SAMSA is a semantic structure-based evaluation indicator proposed in 2018 (Sulem et al., 2018a) , but it relies too much on string matching in the judgment of semantic consistency, which leads to low semantic retention calculation results for simplified text. Based on the characteristics of these evaluation indicators, this research proposes a text simplification evaluation indicator SEMA based on semantic alignment.",
"cite_spans": [
{
"start": 78,
"end": 95,
"text": "(Xu et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 288,
"end": 309,
"text": "(Sulem et al., 2018a)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contribution of this paper is to propose a semantic retention evaluation indicator of text simplification based on semantic alignment. Semantic alignment includes complete alignment, partial alignment and hyponymy alignment. Different semantic alignment weights are given according to the degree of semantic alignment, so as to reasonably evaluate the semantic retention of text simplification of different rewriting methods. The current traditional syntactic structure cannot directly reflect the semantic difference of the text, for example:\"John took a shower.\" (a) and \"John showered.\" (b) Syntactic analysis will regard them as different structures, but at the semantic level, (a) and (b) are similar. The UCCA (Universal Conceptual Cognitive Annotation) (Abend and Rappoport, 2013) proposed in 2013 avoids this defect. Its scene-based semantic structure annotation method aims to extract the scene graph formed by main relation and participants to represent the main semantic infomation in the text.",
"cite_spans": [
{
"start": 764,
"end": 791,
"text": "(Abend and Rappoport, 2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The scenes of UCCA represent motions, actions or states that persist in time, and are divided into State (S) and Process (P). A State represents a continuous state in time, such as:\"There has been conflict in Syria for the last nine years.\" A Process describes an event that is evolving and unfolding in time, such as: \"The dog runs into the house.\" Each scene contains a main relation, one or more participants (including location information), such as:\"John kicked his ball.\" In this scene, the participants are \"John\" and \"his ball\", the relation is \"Kicked\". The UCCA structure is a directed acyclic graph, and the smallest meaningful unit is on the leaf node (that is, the word in the text). For units that cannot form a scene, the UCCA sets a category Centers (C) to represent the subunits of a non-scene unit, and there may be one or more C in a non-scene unit. Modifiers (including qualifiers) are marked as Elaborator (E). For example, in the non-scene unit \"his ball\", \"his\" is E, and \"ball\" is C. In actual contexts, more complicated situations often occure: one scene may be a participant of another scene. For example, in the sentence \"The report says that the USA can be war criminals\", \"the USA can be war criminals\" is A in the scene where \"says\" is a relation; one scene can also be E in another scene, such as the sentence: \"The day Tom arrived in Beijing was Friday\", the scene \"Tom arrived in Beijing\" is E that modifies \"The day\", and \"The day\" is A of the scene \"The day was Friday\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "UCCA is a semantic annotation method as opposed to syntactic analysis. It is portable between various fields and languages, and is not sensitive to semantic-retain grammatical changes. In addition, it can accommodate more semantic differences. In this research, the TUPA tool is used to obtain the UCCA annotation result (Hershcovich et al., 2017) , it uses the NN classifier and BiLSTM model for training, inputting text and outputting UCCA result.",
"cite_spans": [
{
"start": 321,
"end": 347,
"text": "(Hershcovich et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Measure through Semantic Annotation(SAMSA)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
{
"text": "SAMSA is the first indicator to evaluate the quality of Text Simplification (TS) system at the semantic structure level. It uses UCCA based on the concept of scene to try to reasonably evaluate the text simplification results in terms of semantic rather than syntax (Sulem et al., 2018a) . SAMSA extracts the scene of the input sentence, and after identifying the relation and participants, it does the word comparision calculation with output sentence It believes that the result of a high-quality text simplification should be: each input scene is mapped to the output sentence one by one, the smallest unit of the relation and the participants (see later) can be matched in the output sentence. SAMSA is a non-referenced automatic evaluation method. Elior Sulem's experiments show that SAMSA has a high relevance to human evaluation in terms of semantic retention. SAMSA is explained in detail below.",
"cite_spans": [
{
"start": 266,
"end": 287,
"text": "(Sulem et al., 2018a)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
{
"text": "SAMSA is based on two external tools-UCCA and Word Alignment. UCCA decomposes each input sentence S into a set of scenes{SC 1 , SC 2 , \u2022 \u2022 \u2022 SC n }, each sceneSC i contains one main relationMR i and one or more participantsA i ; Word Alignment aligns the words of the input sentence with one or zero words of the output sentence to form a set A, which can identify synonym substitution (start/begin) and stemming (run/ran). n inp is the number of scenes of input, n out is the number of sentences of output (S 1 , S2 ,.., Snout). Firstly, SAMSA aligns the input scene and the output sentence. There are two cases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
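{
"text": "The following is a minimal Python sketch of the word aligner (a reconstruction under the definitions above, not the authors' implementation), assuming NLTK with its WordNet data is installed:\n\nfrom nltk.corpus import wordnet as wn\nfrom nltk.stem import WordNetLemmatizer\n\n_lem = WordNetLemmatizer()\n\ndef aligned(w1, w2):\n    # Stemming-style match (run/ran): compare verb lemmas.\n    if _lem.lemmatize(w1.lower(), pos='v') == _lem.lemmatize(w2.lower(), pos='v'):\n        return True\n    # Synonym substitution (start/begin): words sharing a WordNet synset.\n    return bool(set(wn.synsets(w1.lower())) & set(wn.synsets(w2.lower())))\n\ndef word_alignment(input_words, output_words):\n    # Build the set A: each input word aligns with one or zero output words.\n    A = {}\n    for wi in input_words:\n        for wo in output_words:\n            if aligned(wi, wo):\n                A[wi] = wo\n                break\n    return A",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},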
{
"text": "1.n inp \u2265 n out : in this case, we compute the maximal Many-to-1 correspondence between Scenes and sentences. To align each input scene with the output sentence, SAMSA gets the number of word matches between each scene and each output sentence according to the word alignment A, and select the sentence with the highest matching degree to align. If n inp = n out , once a sentence is matched to a scene, it cannot be matched to another one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M * (SC i ) = argmax s score (SC i , S)",
"eq_num": "(1)"
}
],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
{
"text": "2.n inp < n out : In this case, a scene will necessarily be split across several sentences. As this is an undesired result, SAMSA assigns this instance a score of zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
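{
"text": "A minimal sketch of this scene-to-sentence alignment (a reconstruction under the definitions above, not SAMSA's released code), where scenes and sentences are plain word lists and A is the word alignment built above:\n\ndef align_scenes(scenes, sentences, A):\n    # Returns a many-to-1 mapping scene index -> sentence index,\n    # or None when n_inp < n_out (SAMSA scores that case 0).\n    n_inp, n_out = len(scenes), len(sentences)\n    if n_inp < n_out:\n        return None\n    mapping, used = {}, set()\n    for i, scene in enumerate(scenes):\n        def matches(j):\n            sent = set(sentences[j])\n            return sum(1 for w in scene if A.get(w) in sent)\n        # With n_inp == n_out, a sentence may be used at most once.\n        candidates = [j for j in range(n_out)\n                      if not (n_inp == n_out and j in used)]\n        best = max(candidates, key=matches)\n        mapping[i] = best\n        used.add(best)\n    return mapping",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},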
{
"text": "For the scenes of input{SC 1 , \u2022 \u2022 \u2022 SC ninp }, the sentences of output{S 1 , \u2022 \u2022 \u2022 , S nout } and their map-ping relationshipM * (SC i ), the calculation formula of SAMSA is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
{
"text": "$$\\mathrm{SAMSA} = \\begin{cases} \\frac{n_{out}}{n_{inp}} \\cdot \\frac{1}{2 n_{inp}} \\sum_{SC_i} \\left[ II_{M^*(SC_i)}(MR_i) + \\frac{1}{k_i} \\sum_{j=1}^{k_i} II_{M^*(SC_i)}\\left(Par_i^{(j)}\\right) \\right], & n_{inp} \\ge n_{out} \\\\ 0, & n_{inp} < n_{out} \\end{cases} \\quad (2)$$ Here $MR_i$ is the smallest unit of the relation in $SC_i$, and $Par_i^{(j)}$ $(j = 1, \\dots, k_i)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
{
"text": "is the smallest unit of participants in SC i . The smallest unit is the child node marked as C in the UCCA graph starting recurrence from P/S and A until the leaf node. If the participant is a scene, its smallest unit is the main relation of the scene. For example, the center of \"the tallest building in the world\" (u1) is \"the tallest building\". The center of the latter is \"building\", which is a leaf node. Therefore, the smallest unit of u1 is \"building\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
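{
"text": "The recursive descent to the smallest unit can be sketched as follows, assuming a simple (hypothetical) dict encoding of a UCCA node with 'label', 'word' and 'children' keys; TUPA's real data structures differ:\n\ndef smallest_unit(node):\n    # A leaf is a word of the text: it is its own smallest unit.\n    if not node.get('children'):\n        return node['word']\n    # Otherwise follow the child marked C (the center) recursively.\n    for child in node['children']:\n        if child['label'] == 'C':\n            return smallest_unit(child)\n    # A participant that is itself a scene: take its main relation (P/S).\n    for child in node['children']:\n        if child['label'] in ('P', 'S'):\n            return smallest_unit(child)\n    return None\n\n# For 'the tallest building in the world', the C-chain ends at the\n# leaf 'building', so 'building' is the smallest unit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},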
{
"text": "II s (u) defines a function with a value between 0 and 1. If there is a word alignment in u and s, the value is 1, otherwise the value is 0. SAMSA sets a penalty factor n out / n inp to penalize the case of n inp > n out . In addition, SAMSA-abl is also set as the calculation indicator for removing the penalty coefficient, and the calculation is shown in formula 3. Elior Sulem's experiment (Sulem et al., 2018a) shows that the evaluation result of the SAMSA-abl indicator (0.54), which removes the penalty coefficient, is better than SAMSA. It indicates that the penalty coefficient will over-punish the situation of n inp > n out , so this research improves the indicator based on SAMSA-abl.",
"cite_spans": [
{
"start": 393,
"end": 414,
"text": "(Sulem et al., 2018a)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
{
"text": "SAM SA = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 1 2ninp SCi II M * (SCi) (M Ri) + 1 ki ki j=1 II M * (SCi) P ar (j) i , ninp \u2265 nout 0, ninp < nout",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
{
"text": "(3) To make the calculation process of SAMSA-abl clearer, we take the input sentence (a) \"About 13 million Syrians had to leave their homes because of danger.\" and the simplified sentence (b) \"About 13 million had to leave their homes.\" as an example. The smallest unit of the main relation of input scene is \"leave\", and the smallest unit of participants is \"About, 13, million\", \"Syrians\" and \"homes\". In all the smallest units, only \"Syrians\" in the simplified sentence fails to match the input sentence. Therefore,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
{
"text": "$II_{M^*(SC_1)}(MR_1)$ is 1, $\\sum_{j} II_{M^*(SC_1)}\\left(Par_1^{(j)}\\right)$ is 1+0+1=2 ($k_1=3$), and the score of (b) is $\\frac{1}{2}\\left(1+\\frac{1}{3}\\times 2\\right)=0.83$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},
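{
"text": "The arithmetic of this example can be reproduced in a few lines (a sketch of the per-scene score under formula 3, not the authors' code):\n\ndef scene_score(mr_matched, par_matched):\n    # mr_matched: 1 or 0 for the main relation;\n    # par_matched: one 1/0 entry per participant smallest unit.\n    return mr_matched + sum(par_matched) / len(par_matched)\n\n# One input scene (n_inp = 1); MR 'leave' matches (1); participants\n# 'About 13 million' (1), 'Syrians' (0), 'homes' (1).\nscore = 1 / (2 * 1) * scene_score(1, [1, 0, 1])\nprint(round(score, 2))  # 0.83",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplification Automatic evaluation",
"sec_num": "2.2"
},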
{
"text": "Through Semantic Alignment(SEMA)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Simplification Evaluation",
"sec_num": "3"
},
{
"text": "SEMA is a further optimization of the SAMSA indicator, including two parts: 1. The basic for-mula SEMA-base (basic formula) is obtained by calculation when n inp < n out is added on the basis of SAMSA-abl; 2. In terms of indicator calculation strategy, semantic alignment is used to replace the string alignment and it mainly includes three semantic alignment methods: full alignment (SEMAbase), partial alignment, and hyponymy alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Simplification Evaluation",
"sec_num": "3"
},
{
"text": "SAMSA believes that when n inp < n out , a scene is broken into multiple sentences, which destroyes the structure of the scene, so the score is 0. However, in the corpus used in this research, there are more texts that meet n inp < n out . For example, in the original sentence \"Central Park Tower has just become the tallest residential building in the world\", the simplified text is divided into four sentences:\" (1)Central Park Tower is a building in New York. (2)There are only apartments in this building.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SEMA-base",
"sec_num": "3.1"
},
{
"text": "(3)There are no offices in this building. (4)Now, it is the tallest building with apartments in the world.\" Although this text divides a scene into multiple sentences, from the perspective of reading comprehension, the simplified sentence is easier to understand and also retains the semantics of original sentence. It is unreasonable to get 0 under the condition of n inp < n out .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SEMA-base",
"sec_num": "3.1"
},
{
"text": "Based on this point, on the basis of SAMSA-abl, the definition of SEMA-base is shown in formula 4, where when n inp < n out , the simplified text can still get a score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SEMA-base",
"sec_num": "3.1"
},
{
"text": "$$\\mathrm{SEMA\\text{-}base} = \\frac{1}{2 n_{inp}} \\sum_{SC_i} \\left[ II_{M^*(SC_i)}(MR_i) + \\frac{1}{k_i} \\sum_{j=1}^{k_i} II_{M^*(SC_i)}\\left(Par_i^{(j)}\\right) \\right] \\quad (4)$$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SEMA-base",
"sec_num": "3.1"
},
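{
"text": "A sketch tying formula 4 together, reusing the scene alignment and the word-level aligned test from Section 2.2 (a reconstruction under stated assumptions, not the authors' code); scenes here carry their precomputed smallest units:\n\ndef sema_base(scenes, mapping, sentences, aligned):\n    # scenes: list of dicts {'mr': str, 'pars': [str, ...]} of smallest units;\n    # mapping: scene index -> sentence index (e.g. from align_scenes);\n    # aligned: word-level test such as the one sketched in Section 2.2.\n    total = 0.0\n    for i, sc in enumerate(scenes):\n        sent = sentences[mapping[i]]\n        hit = lambda u: any(aligned(u, w) for w in sent)\n        total += hit(sc['mr']) + sum(map(hit, sc['pars'])) / len(sc['pars'])\n    # Unlike SAMSA, there is no zero case for n_inp < n_out.\n    return total / (2 * len(scenes))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SEMA-base",
"sec_num": "3.1"
},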
{
"text": "SAMSA relies too heavily on string matching when aligning the words of the input scene with the output sentence, which easily leads to underestimated evaluation results. SEMA changes the matching method on the basis of SEMA-base and emphasizes semantic alignment, including complete alignment, partial alignment and hyponymy alignment. Complete alignment is the original SAMSA string-matching strategy. Partial alignment: SAMSA requires the smallest units of a participant in the scene to match words of the output sentence. When a participant contains multiple smallest units, SAMSA requires all of them to be matched for a score of 1; otherwise the score is 0. For example, for the input sentence \"I like banana, apple and orange.\", the smallest units of the participant are \"banana, apple, orange\". When the output sentence is \"I love apple.\", only \"apple\" is matched, yet the value is 0 under the SAMSA matching method. Obviously, this is unfair to sentences that retain only some of the smallest units. Partial alignment instead computes the matching degree of each individual smallest unit, and SEMA-part is defined as in formula 5 (see the sketch after the formula). On the basis of SEMA-base, the parameter $m_q$ is added to represent the number of smallest units of a participant, and $Par_i^{(j)(q)}$ is the $q$-th smallest unit of the $j$-th participant in the $i$-th scene.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Strategy changes",
"sec_num": "3.2"
},
{
"text": "$$\\mathrm{SEMA\\text{-}part} = \\frac{1}{2 n_{inp}} \\sum_{SC_i} \\left[ II_{M^*(SC_i)}(MR_i) + \\frac{1}{k_i} \\sum_{j=1}^{k_i} \\frac{1}{m_q} \\sum_{q=1}^{m_q} II_{M^*(SC_i)}\\left(Par_i^{(j)(q)}\\right) \\right] \\quad (5)$$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Strategy changes",
"sec_num": "3.2"
},
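{
"text": "The partial-alignment term of formula 5 amounts to the following sketch, where matched is any word-level test, such as the aligned function from Section 2.2:\n\ndef participant_score(units, matched):\n    # SEMA-part: average the indicator over the m_q smallest units\n    # instead of SAMSA's all-or-nothing requirement.\n    return sum(1 for u in units if matched(u)) / len(units)\n\n# 'banana, apple and orange' against the output 'I love apple.':\nout = {'I', 'love', 'apple'}\nprint(participant_score(['banana', 'apple', 'orange'], lambda u: u in out))\n# 0.333..., where SAMSA's strict matching would give 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Strategy changes",
"sec_num": "3.2"
},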
{
"text": "(5) Hyponymy Alignment: In order to summarize the text features of text simplification better and establish a more complete evaluation indicator in terms of semantic evaluation, we observed and disassembled the corpus, compared the manual score with automatic machine score, and found the feature of hyponymy in the corpus. It is a common operation to replace hyponym with hypernym in text simplification. Here, the hyponymy refers to the words with the upper and lower conceptual relationship, and they have a species relationship (Chi, 1989) , such as \"drinks\" is the hypernym of \"beer\", \"fruit\" is the hypernym of \"kiwi\". Generally, the most simplified text has more hyponymy. In this research, based on SEMA-part, we use WordNet's hyponymic relationship network to align the smallest unit of relations and participants which include hyponymy. And it improves the degree alignment between the input scene and the output sentence. Finally, a text simplification evaluation indicator based on semantic alignment SEMA is formed. The calculation formula of SEMA is still shown in formula 5. The difference between SEMA-part and SEMA is only the addition of hyponymy alignment to the semantic alignment. In the end, our experiments proved that SEMA is highly usable in evaluating the semantic retention of Chinese and English text simplification at sentence and passage level. See chapter 4 for more details.",
"cite_spans": [
{
"start": 532,
"end": 543,
"text": "(Chi, 1989)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Strategy changes",
"sec_num": "3.2"
},
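{
"text": "A minimal sketch of the hyponymy test via WordNet (assuming NLTK's WordNet interface; the authors' exact matching procedure may differ):\n\nfrom nltk.corpus import wordnet as wn\n\ndef is_hypernym_of(word_hi, word_lo):\n    # True if some synset of word_hi lies on a hypernym path of word_lo,\n    # e.g. is_hypernym_of('fruit', 'kiwi') -> True.\n    highs = set(wn.synsets(word_hi))\n    return any(highs & set(path)\n               for s in wn.synsets(word_lo)\n               for path in s.hypernym_paths())\n\ndef hyponymy_aligned(u, v):\n    # Simplification usually replaces a hyponym with its hypernym,\n    # but either direction is accepted when aligning smallest units.\n    return is_hypernym_of(u, v) or is_hypernym_of(v, u)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Strategy changes",
"sec_num": "3.2"
},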
{
"text": "Artificial Simplified Corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Experiment Based On",
"sec_num": "4"
},
{
"text": "This research uses simplified Chinese and English news corpus for experiments. The simplified English corpus comes from the English website: News in Levels, which is a free online news website specially designed for English students. Each article is written in three levels, and level 1 is the simplest. Taking level 3 as the benchmark, the semantic retention of level 2 and level 1 is manually judged to be around 70% and 50% respectively. The Chinese news corpus comes from the texts of the Chinese news reading textbook and its corresponding original texts. The texts of the news reading textbook are simplified and adapted for teaching needs. The semantic retention of the adapted text is around 80%. This research collected 200 English passages (three levels), 600 pieces in total, 100 aligned sentences; 100 Chinese aligned sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "4.1"
},
{
"text": "We first perform experiments on SAMSA, SAMSA-abl, SEMA-base, SEMA-part, and SEMA on 100 English sentences. The experimental results are shown in Table 1 The results show that SAMSA does not evaluate the semantic retention of each level of corpus very well; after removing the penalty coefficient, SAMSA-abl significantly improves the scores of the two levels. It proves that the penalty coefficient will over-punish the corpus; when considering the case: n inp < n out ,the scores of level1 and level2 are improved and the degree of improvement of level1 is more obvious, which also matches the corpus characteristics of level1 (more corpus conforms to n inp < n out ); After adding partial alignment and hyponymy alignment, the results of the corpus evaluated by SEMA are closer to the human estimated scores, with level1-score increased to 0.48 and level2-score increased to 0.69. The effect of each optimization strategy on the experimental results is shown in Figure 2 . SAMSA is proposed to evaluate the sentencelevel text simplification system. This research applies it to the passage-level evaluation. Firstly, 35 passages corresponding to 100 alignment sentences are selected for experiment. Based on SEMA-base, the result of level1 is 0.38, and the result of level2 is 0.52.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 152,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 964,
"end": 972,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "English Corpus Experiment",
"sec_num": "4.2"
},
{
"text": "It can be seen from the experimental results that the overall score of the passage level is lower than the sentence level. This is because in the passage-level evaluation, the length of the passage and the sentence number increase, the scene analysis tool TUPA is unstable. Therefore, it is difficult to extract the scene (that is, multiple sentences extract a large scene), such as the sentence \"They based the report on hundreds of interviews and analyses of photos, videos, and satellite images.\" should be divided into one scene, but in the actual results, \"videos, and satellite images\" and \"Put simply\" which is far away are seen as one scene. Since the input scene and the output sentence are aligned according to the maximum number of word matches, and the scene cannot be split clearly, multiple scenes can only be aligned with one sentence. Obviously, it is difficult to find all the semantic information of multiple scenes in one sentence in this case, which directly affects the quality of the indicator evaluation. In order to improve this shortcoming, we splitted the original passages (level3) and then used TUPA for scene analysis. The scene analysis result of each sentence was compared with the simplified whole passage, so the best match can be selected. The final score is averaged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Corpus Experiment",
"sec_num": "4.2"
},
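{
"text": "The splitting strategy just described amounts to the following sketch, where sema_base_score stands in for the per-sentence computation (scene extraction on one original sentence, matched against the whole simplified passage):\n\ndef passage_score(original_sentences, simplified_passage, sema_base_score):\n    # Parse each level-3 sentence separately (keeping TUPA stable),\n    # score it against the full simplified passage, then average.\n    scores = [sema_base_score(sent, simplified_passage)\n              for sent in original_sentences]\n    return sum(scores) / len(scores)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Corpus Experiment",
"sec_num": "4.2"
},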
{
"text": "The experimental results at the passage level are shown in Table 2 . \"Segmentation+SEMA-base\" is an improvement based on SEMA-base. It can be seen that the division of the passage helps TUPA extract the scene and improve the accuracy of the indicator. In the end, the evaluation results of 35 passages of level1 and level2 increased from the initial 0.24 and 0.26 to 0.53 and 0.69 respectively. When we expand the corpus from 35 passages to 200 passages, level1 and level2 scores are 0.51 and 0.68 respectively, which is consistent with the manual evaluation results. As for the passage level, the effect of each indicator optimization strategy on the experimental results is shown in Figure 3 . Among them, the improvement of hyponymy alignment is obvious, and the performance on level 1 is particularly prominent. In the final SEMA evaluation results, the passage level score of level 1 is much higher than the sentence level. The main reason is that when we align the sentences, we filter out some improperly aligned sentences, and all sentences at the passgae level participate in the scoring. The scores of these sentences increase the average score at the passage level. ",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 685,
"end": 693,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "English Corpus Experiment",
"sec_num": "4.2"
},
{
"text": "Compared with English, Chinese is consistent with English in the main output sequence of sentences such as subject, predicate and object. For some subsidiary components, such as attributes that modify the subject and object, there are many differences between Chinese and English. The Chinese corpus comes from the adapted Chinese news reading textbook and its corresponding original text. The adaptation methods include but are not limited to: deletion, replacement, and rewriting. According to manual evaluation, the semantic retention of the adapted corpus is about 80% or more.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Corpus Experiment",
"sec_num": "4.3"
},
{
"text": "This research uses semi-automatic processing in the Chinese corpus experiment. There are no tool to analyse Chinese UCCA structure, when extracting the main information of the input sentence, we use Baidu dependency syntax analysis 2 to extract the core word of the sentence (HED) as the main relation, the first-level child nodes of the core words are participants. For example, in the sentence \"2004\u5e743\u670826\u65e5\u5168\u6cd5\u6c49\u8bed\u6559\u5b66\u7814\u8ba8\u4f1a\u5728 \u5df4\u9ece\u56fd\u9645\u5927\u5b66\u751f\u57ce\u4e3e\u884c\u3002\", the relation is \"\u4e3e \u884c\", the participants are \"\u7814\u8ba8\u4f1a\" and \"\u5728\u5df4\u9ece\". For semantic alignment, we use manual alignment in a non-automated way, and finally conduct experiments based on aligned 100 Chinese sentences. The results are shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 662,
"end": 669,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Chinese Corpus Experiment",
"sec_num": "4.3"
},
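{
"text": "The extraction of the main relation and participants from a dependency parse can be sketched as below; the tuple format is a generic stand-in, not Baidu's actual API:\n\ndef extract_main_info(dep_parse):\n    # dep_parse: list of (index, word, head_index, relation) tuples.\n    # The core word (relation 'HED') is the main relation; its\n    # first-level children are the participants.\n    idx, core, _, _ = next(t for t in dep_parse if t[3] == 'HED')\n    participants = [w for (_, w, head, rel) in dep_parse\n                    if head == idx and rel != 'HED']\n    return core, participants",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Corpus Experiment",
"sec_num": "4.3"
},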
{
"text": "Adapted text SEMA 0.804 Experiments show that in evaluating the semantic retention of the adapted Chinese sentences, SEMA reaches to 0.804, which is consistent with the manual evaluation result. This has great significance for the evaluation of the semantic retention of Chinese text simplification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Corpus Experiment",
"sec_num": "4.3"
},
{
"text": "This research improves the semantic structure based text simplification evaluation measure SAMSA proposed in 2018. There are mainly several aspects: the case of n inp < n out is considered on the basis of SAMSA-abl; semantic alignment is used to replace string matching, mainly based on three semantic alignments method: Full alignment, partial alignment, hyponymy alignment. Finally, a semantic retention evaluation measure about text simplification SEMA based on semantic alignment is formed. We did experiments on English sentencelevel and passage-level. The experimental results show that it is similar to the manual evaluation results, which shows its significance in text simplification evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "When we apply SEMA to Chinese, we summarize the characteristics of Chinese and use dependency syntax analysis to extract the main semantic information in the sentence. Experimental results show that SEMA has high applicability in Chinese corpus, and it is the first semantic retention evaluation indicator based on semantic alignment on Chinese corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In future research, we will continue to use larger corpus to explore SEMA's evaluation methods under different semantic retention thresholds; In addition, the text simplification indicator proposed in this paper only evaluates the semantic retention, other aspects of evaluating texts simplification such as grammaticality and degree of simplification need to be further explored; At the same time, follow-up research should expand the scale of the text corpus and collect multi-subject, multi-genre and multilength texts to test the usability of our indicator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The dependency syntax explains its syntactic structure by analyzing the dependencies of the components in the language unit, claiming that the core verb in the sentence is the central component that dominates other components",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research is supported by Science Foundation of Beijing Language and Culture University (supported by \"the Fundamental Research Funds for the Central Universities\")(19YJ040005); Major Program of National Social Science Foundation of China (18ZDA295); Top-ranking Discipline Team Support Program of Beijing Language and Culture University(JC201902).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Universal conceptual cognitive annotation (ucca)",
"authors": [
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omri Abend and Ari Rappoport. 2013. Universal con- ceptual cognitive annotation (ucca). In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Chinese semantic role tagging based on dependency syntax analysis",
"authors": [
{
"first": "Xiaohong Yuan Guodong Zhou Bukang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongling",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Chinese Information Processing -J Chin Inf Proc",
"volume": "01",
"issue": "",
"pages": "25--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaohong Yuan Guodong Zhou Bukang Wang, Hongling Wang. 2010. Chinese semantic role tag- ging based on dependency syntax analysis. Journal of Chinese Information Processing -J Chin Inf Proc, 01:25-29. (in Chinese).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Practical simplification of English newspaper text to assist aphasic readers",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Canning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tait",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Minnen",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the AAAI-98 Workshop on Integrating Artificial Intelligence and Assistive Technology",
"volume": "",
"issue": "",
"pages": "7--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Canning Y Devlin S Tait J Carroll J, Minnen G. 1998. Practical simplification of English newspaper text to assist aphasic readers. In Proceedings of the AAAI- 98 Workshop on Integrating Artificial Intelligence and Assistive Technology, pages 7-10.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic induction of rules for text simplification",
"authors": [
{
"first": "R",
"middle": [],
"last": "Chandrasekar",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Srinivas",
"suffix": ""
}
],
"year": 1997,
"venue": "Knowledge Based Systems",
"volume": "10",
"issue": "3",
"pages": "183--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Chandrasekar and B. Srinivas. 1997. Automatic in- duction of rules for text simplification. Knowledge Based Systems, 10(3):183-190.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Talking about hyponymy",
"authors": [
{
"first": "Mei",
"middle": [],
"last": "Chi",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "26--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mei Chi. 1989. Talking about hyponymy. HAN YU XUE XI, (01):26-28. (in Chinese).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning to simplify sentences using wikipedia",
"authors": [
{
"first": "Will",
"middle": [],
"last": "Coster",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Kauchak",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Workshop on Monolingual Text-To-Text Generation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Coster and David Kauchak. 2011. Learning to simplify sentences using wikipedia. In Proceedings of the Workshop on Monolingual Text-To-Text Gen- eration.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Meteor universal: Language specific translation evaluation for any target language",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor uni- versal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A transition-based directed acyclic graph parser for ucca",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.00552"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017. A transition-based directed acyclic graph parser for ucca. arXiv preprint arXiv:1704.00552.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text summarization branches out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Wordnet: A lexical database for English",
"authors": [
{
"first": "George",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Miller. 1995. Wordnet: A lexical database for English. Communications of the ACM, 38:39-.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Com- putational Linguistics, pages 311-318.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic text simplification",
"authors": [
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2017,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "10",
"issue": "1",
"pages": "1--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saggion and Horacio. 2017. Automatic text simplifica- tion. Synthesis Lectures on Human Language Tech- nologies, 10(1):1-137.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Text simplification in simplext. making text more accessible",
"authors": [
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "G\u00f3mezmart\u00ednez",
"suffix": ""
},
{
"first": "Esteban",
"middle": [],
"last": "Etayo",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Anula",
"suffix": ""
},
{
"first": "Lorena",
"middle": [],
"last": "Bourg",
"suffix": ""
}
],
"year": 2012,
"venue": "International Conference on Computational Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Horacio Saggion, Elena G\u00f3mezmart\u00ednez, Esteban Etayo, Alberto Anula, and Lorena Bourg. 2012. Text simplification in simplext. making text more ac- cessible. In International Conference on Computa- tional Science.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of association for machine translation in the Americas",
"volume": "200",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine transla- tion in the Americas, volume 200. Cambridge, MA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semantic structural evaluation for text simplification",
"authors": [
{
"first": "Elior",
"middle": [],
"last": "Sulem",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elior Sulem, Omri Abend, and Ari Rappoport. 2018a. Semantic structural evaluation for text simplification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Simple and effective text simplification using semantic and neural methods",
"authors": [
{
"first": "Elior",
"middle": [],
"last": "Sulem",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2018,
"venue": "Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elior Sulem, Omri Abend, and Ari Rappoport. 2018b. Simple and effective text simplification using seman- tic and neural methods. In Meeting of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BLEU is not suitable for the evaluation of text simplification",
"authors": [
{
"first": "Elior",
"middle": [],
"last": "Sulem",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.05995"
]
},
"num": null,
"urls": [],
"raw_text": "Elior Sulem, Omri Abend, and Ari Rappoport. 2018c. BLEU is not suitable for the evaluation of text sim- plification. arXiv preprint arXiv:1810.05995.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Problems in current text simplification research: New data can help",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "283--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification re- search: New data can help. Transactions of the Asso- ciation for Computational Linguistics, 3:283-297.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Optimizing statistical machine translation for text simplification",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Quanze",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Lingus",
"volume": "4",
"issue": "4",
"pages": "401--415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Lingus, 4(4):401-415.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"text": "The result of \"John kicked his ball\" by UCCA",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Sentence-level semantic retention evaluation, the improved experimental results of each optimization strategy",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "Passage-level semantic retention evaluation, the improved experimental results of each optimization strategy",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Sentence-level results of SAMSA and SEMA"
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Passage-level results of SEMA"
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "SEMA evaluation results at the Chinese sentence level"
}
}
}
}