{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:55:50.824823Z"
},
"title": "Personalized Extractive Summarization Using an Ising Machine Towards Real-time Generation of Efficient and Coherent Dialogue Scenarios",
"authors": [
{
"first": "Hiroaki",
"middle": [],
"last": "Takatsu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Waseda University",
"location": {
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Takahiro",
"middle": [],
"last": "Kashikawa",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Koichi",
"middle": [],
"last": "Kimura",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Ryota",
"middle": [],
"last": "Ando",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Yoichi",
"middle": [],
"last": "Matsuyama",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Waseda University",
"location": {
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a personalized dialogue scenario generation system which transmits efficient and coherent information with a real-time extractive summarization method optimized by an Ising machine. The summarization problem is formulated as a quadratic unconstraint binary optimization (QUBO) problem, which extracts sentences that maximize the sum of the degree of user's interest in the sentences of documents with the discourse structure of each document and the total utterance time as constraints. To evaluate the proposed method, we constructed a news article corpus with annotations of the discourse structure, users' profiles, and interests in sentences and topics. The experimental results confirmed that a Digital Annealer, which is a simulated annealing-based Ising machine, can solve our QUBO model in a practical time without violating the constraints using this dataset.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a personalized dialogue scenario generation system which transmits efficient and coherent information with a real-time extractive summarization method optimized by an Ising machine. The summarization problem is formulated as a quadratic unconstraint binary optimization (QUBO) problem, which extracts sentences that maximize the sum of the degree of user's interest in the sentences of documents with the discourse structure of each document and the total utterance time as constraints. To evaluate the proposed method, we constructed a news article corpus with annotations of the discourse structure, users' profiles, and interests in sentences and topics. The experimental results confirmed that a Digital Annealer, which is a simulated annealing-based Ising machine, can solve our QUBO model in a practical time without violating the constraints using this dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As mobile personal assistants and smart speakers become ubiquitous, the demand for dialogue-based media technologies has increased since they allow users to consume a fair amount of information via a dialogue form in daily life situations. Dialoguebased media is more restrictive than textual media. For example, when listening to an ordinary smart speaker, users can not skip unnecessary information or skim only for necessary information. Thus, it is crucial for future dialogue-based media to extract and efficiently transmit information that the users are particularly interested in without excess or deficiencies. In addition, the dialogue scenarios generated based on the extracted information should be coherent to aid in the proper understanding. Generating such efficient and coherent scenarios personalized for each user generally takes more time as the information source size and the number of target users increase. Moreover, the nature of conversational experiences requires personalization in real time. In this paper, we propose a personalized extractive summarization method formulated as a combinatorial optimization problem to generate efficient and coherent dialogue scenarios and demonstrate that an Ising machine can solve the problem at high speeds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a realistic application of the proposed personalized summarization method for a spoken dialogue system, we consider a news delivery task (Takatsu et al., 2018) . This news dialogue system proceeds the dialogue according to a primary plan to explain the summary of the news article and subsidiary plans to transmit supplementary information though question answering. As long as the user is listening passively, the system transmits the content of the primary plan. The personalized primary plan generation problem can be formulated as follows:",
"cite_spans": [
{
"start": 140,
"end": 162,
"text": "(Takatsu et al., 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "From N documents with different topics, sentences that may be of interest to the user are extracted based on the discourse structure of each document. Then the contents are transmitted by voice within T seconds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Specifically, this problem can be formulated as an integer linear programming (ILP) problem, which extracts sentences that maximize the sum of the degree of user's interest in the sentences of documents with the discourse structure of each document and the total utterance time T as constraints. Because this ILP problem is NP-hard, it takes an enormous amount of time to find an optimal solution using the branch-and-cut method (Mitchell, 2002; Padberg and Rinaldi, 1991) as the problem scale becomes large.",
"cite_spans": [
{
"start": 429,
"end": 445,
"text": "(Mitchell, 2002;",
"ref_id": "BIBREF27"
},
{
"start": 446,
"end": 472,
"text": "Padberg and Rinaldi, 1991)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, non-von Neumann computers called Ising machines have been attracting attention as they can solve combinatorial optimization problems and obtain quasi-optimal solutions instantly (Sao et al., 2019) . Ising machines can solve combi-natorial optimization problems represented by an Ising model or a quadratic unconstrained binary optimization (QUBO) model (Lucas, 2014; Glover et al., 2019) . In this paper, we propose a QUBO model that generates an efficient and coherent personalized summary for each user. Additionally, we verify that our QUBO model can be solved by a Digital Annealer (Aramon et al., 2019; Matsubara et al., 2020) , which is a simulated annealing-based Ising machine, in a practical time without violating the constraints using the constructed dataset.",
"cite_spans": [
{
"start": 195,
"end": 213,
"text": "(Sao et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 370,
"end": 383,
"text": "(Lucas, 2014;",
"ref_id": "BIBREF21"
},
{
"start": 384,
"end": 404,
"text": "Glover et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 603,
"end": 624,
"text": "(Aramon et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 625,
"end": 648,
"text": "Matsubara et al., 2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of this paper are three-fold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The ILP and QUBO models for the personalized summary generation are formulated in terms of efficient and coherent information transmission. \u2022 To evaluate the effectiveness of the proposed method, we construct a Japanese news article corpus with annotations of the discourse structure, users' profiles, and interests in sentences and topics. \u2022 Experiments demonstrate that a Digital Annealer, which is a simulated annealing-based Ising machine, can solve our QUBO model in a practical time without violating the constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. Section 2 discusses the related work. Section 3 overviews the annotations of the discourse structure and interest data collection. Section 4 details the proposed method. Section 5 describes the Digital Annealer. Section 6 evaluates the performance of the proposed method. Section 7 provides the conclusions and prospects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Typical datasets for discourse structure analysis are RST Discourse Treebank (Carlson et al., 2001) , Discourse Graphbank (Wolf and Gibson, 2005) , and Penn Discourse Treebank (Prasad et al., 2008) . RST Discourse Treebank is a dataset constructed based on rhetorical structure theory (Mann and Thompson, 1988) . Some studies have annotated discourse relations to Japanese documents. Kaneko and Bekki (2014) annotated the temporal and causal relations for segments obtained by decomposing the sentences of the balanced corpus of contemporary written Japanese (Maekawa et al., 2014) based on segmented discourse representation theory (Asher and Lascarides, 2003) . Kawahara et al. (2014) proposed a method to annotate discourse relations for the first three sentences of web documents in various domains using crowdsourcing. They showed that discourse relations can be annotated in many documents over a short amount of time. Kishimoto et al. (2018) confirmed that making improvements such as adding language tests to the annotation criteria of Kawahara et al. (2014) can improve the annotation quality.",
"cite_spans": [
{
"start": 77,
"end": 99,
"text": "(Carlson et al., 2001)",
"ref_id": "BIBREF5"
},
{
"start": 122,
"end": 145,
"text": "(Wolf and Gibson, 2005)",
"ref_id": "BIBREF36"
},
{
"start": 176,
"end": 197,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF31"
},
{
"start": 285,
"end": 310,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF25"
},
{
"start": 384,
"end": 407,
"text": "Kaneko and Bekki (2014)",
"ref_id": "BIBREF16"
},
{
"start": 559,
"end": 581,
"text": "(Maekawa et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 633,
"end": 661,
"text": "(Asher and Lascarides, 2003)",
"ref_id": "BIBREF3"
},
{
"start": 664,
"end": 686,
"text": "Kawahara et al. (2014)",
"ref_id": "BIBREF17"
},
{
"start": 925,
"end": 948,
"text": "Kishimoto et al. (2018)",
"ref_id": "BIBREF19"
},
{
"start": 1044,
"end": 1066,
"text": "Kawahara et al. (2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 Discourse structure corpus",
"sec_num": "2"
},
{
"text": "When applying discourse structure analysis results to tasks such as document summarization (Hirao et al., 2013; Yoshida et al., 2014; Kikuchi et al., 2014; Hirao et al., 2015) or dialogue (Feng et al., 2019) , a dependency structure, which directly expresses the parent-child relationship between discourse units, is preferable to a phrase structure such as a rhetorical structure tree. Although methods have been proposed to convert a rhetorical structure tree into a discourse dependency tree (Li et al., 2014; Hirao et al., 2013) , the generated trees depends on the conversion algorithm . Yang and Li (2018) proposed a method to manually annotate the dependency structure and discourse relations between elementary discourse units for abstracts of scientific papers, and then constructed SciDTB.",
"cite_spans": [
{
"start": 91,
"end": 111,
"text": "(Hirao et al., 2013;",
"ref_id": "BIBREF13"
},
{
"start": 112,
"end": 133,
"text": "Yoshida et al., 2014;",
"ref_id": "BIBREF41"
},
{
"start": 134,
"end": 155,
"text": "Kikuchi et al., 2014;",
"ref_id": "BIBREF18"
},
{
"start": 156,
"end": 175,
"text": "Hirao et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 188,
"end": 207,
"text": "(Feng et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 495,
"end": 512,
"text": "(Li et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 513,
"end": 532,
"text": "Hirao et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 593,
"end": 611,
"text": "Yang and Li (2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 Discourse structure corpus",
"sec_num": "2"
},
{
"text": "In this study, we construct a dataset suitable to build summarization or dialogue systems that transmit personalized information while considering the coherence based on the discourse structure. Experts annotated the inter-sentence dependencies, discourse relations, and chunks, which are highly cohesive sets of sentences, for Japanese news articles. Users' profiles and interests in the sentences and topics of news articles were collected via crowdsourcing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 Discourse structure corpus",
"sec_num": "2"
},
{
"text": "As people's interests and preferences diversify, the demand for personalized summarization technology has increased (Sappelli et al., 2018) . Summaries are classified as generic or user-focused, based on whether they are specific to a particular user (Mani and Bloedorn, 1998) . Unlike generic summaries generated by extracting important information from the text, user-focused summaries are generated based not only on important information but also on the user's interests and preferences. Most user-focused summarization methods rank sentences using a score calculated considering user's characteristics and subsequently generate a summary by extracting higher-ranked sentences (D\u00edaz and Gerv\u00e1s, 2007; Yan et al., 2011; Hu et al., 2012) . However, such conventional user-focused methods tend to generate incoherent summaries. Generic summarization methods, which consider the discourse structure of documents, have been proposed to maintain coherence (Kikuchi et al., 2014; Hirao et al., 2015; Xu et al., 2020) .",
"cite_spans": [
{
"start": 116,
"end": 139,
"text": "(Sappelli et al., 2018)",
"ref_id": "BIBREF33"
},
{
"start": 251,
"end": 276,
"text": "(Mani and Bloedorn, 1998)",
"ref_id": "BIBREF24"
},
{
"start": 681,
"end": 704,
"text": "(D\u00edaz and Gerv\u00e1s, 2007;",
"ref_id": "BIBREF7"
},
{
"start": 705,
"end": 722,
"text": "Yan et al., 2011;",
"ref_id": "BIBREF39"
},
{
"start": 723,
"end": 739,
"text": "Hu et al., 2012)",
"ref_id": "BIBREF14"
},
{
"start": 954,
"end": 976,
"text": "(Kikuchi et al., 2014;",
"ref_id": "BIBREF18"
},
{
"start": 977,
"end": 996,
"text": "Hirao et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 997,
"end": 1013,
"text": "Xu et al., 2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Personalized summarization",
"sec_num": "2.2"
},
{
"text": "To achieve both personalization and coherence simultaneously, we propose ILP and QUBO models to extract sentences based on the degree of user's interest and generate a personalized summary for each user while maintaining coherence based on the discourse structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Personalized summarization",
"sec_num": "2.2"
},
{
"text": "We constructed a news article corpus with annotations of the discourse structure, users' profiles, and interests in sentences and topics. Experts annotated the inter-sentence dependencies, discourse relations, and chunks for the Japanese news articles. Users' profiles and interests in the sentences and topics of news articles were collected via crowdsourcing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "Two web news clipping experts annotated the dependencies, discourse relations, and chunks for 1,200 Japanese news articles. Each article contained between 15-25 sentences. The articles were divided into six genres: sports, technology, economy, international, society, and local. In each genre, we manually selected 200 articles to minimize topic overlap. The annotation work was performed in the order of dependencies, discourse relations, and chunks. The discourse unit was a sentence, which represents a character string separated by an ideographic full stop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse structure dataset",
"sec_num": "3.1"
},
{
"text": "The conditions in which sentence j can be specified as the parent of sentence i are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency annotation",
"sec_num": "3.1.1"
},
{
"text": "\u2022 In the original text, sentence j appears before sentence i. \u2022 The flow of the story is natural when reading from the root node in order according to the tree structure and reading sentence i after sentence j. \u2022 The information from the root node to sentence j is the minimum information necessary to understand sentence i. \u2022 If it is possible to start reading from sentence i, the parent of sentence i is the root node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency annotation",
"sec_num": "3.1.1"
},
{
"text": "A discourse relation classifies the type of semantic relationship between the child sentence and the parent sentence. We defined the following as discourse relations: Start, Result, Cause, Background, Correspondence, Contrast, Topic Change, Example, Conclusion, and Supplement. An annotation judgment was made while confirming whether both the definition of the discourse relation and the dialogue criterion were met. The dialogue criterion is a judgment based on whether the response is natural according to the discourse relation. For example, the annotators checked whether it was appropriate to present a child sentence as an answer to a question asking the cause, such as \"Why?\" after the parent sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse relation annotation",
"sec_num": "3.1.2"
},
{
"text": "A chunk is a highly cohesive set of sentences. If a parent sentence should be presented with a child sentence, it is regarded as a chunk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chunk annotation",
"sec_num": "3.1.3"
},
{
"text": "A hard chunk occurs when the child sentence provides information essential to understand the content of the parent sentence. Examples include when the parent sentence contains a comment and the child sentence contains the speaker's information or when a procedure is explained over multiple sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chunk annotation",
"sec_num": "3.1.3"
},
{
"text": "A soft chunk occurs when the child sentence is useful to prevent a biased understanding of the content of the parent sentence, although it does not necessarily contain essential information to understand the parent sentence itself. An example is explaining the situation in two countries related to a subject, where the parent sentence contains one explanation and the child sentence contains another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chunk annotation",
"sec_num": "3.1.3"
},
{
"text": "Participants were recruited via crowdsourcing. They were asked to answer a profile questionnaire and an interest questionnaire. We used 1,200 news articles, which were the same as those used in the discourse structure dataset. We collected the questionnaire results of 2,507 participants. Each participant received six articles, one from each genre. The six articles were distributed so that the total number of sentences was as even as possible across participants. Each article was reviewed by at least 11 participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interest dataset",
"sec_num": "3.2"
},
{
"text": "The profile questionnaire collected the following information: gender, age, residential prefecture, occupation type, industry type, hobbies, frequency of checking news (daily, 4-6 days a week, 1-3 days a week, or 0 days a week), typical time of day news is checked (morning, afternoon, early evening, or night), methods to access the news (video, audio, or text), tools used to check the news (TV, newspaper, smartphone, etc.), newspapers, websites, and applications used to check the news (Nihon Keizai Shimbun, LINE NEWS, SNS, etc.), whether a fee was paid to check the news, news genre actively checked (economy, sports, etc.), and the degree of interest in each news genre (not interested at all, not interested, not interested if anything, interested if anything, interested, or very interested).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Profile questionnaire",
"sec_num": "3.2.1"
},
{
"text": "Participants read the text of the news article and indicated their degree of interest in the content of each sentence. Finally, they indicated their degree of interest in the topic of the article. The degree of interest was indicated on six levels: 1, not interested at all; 2, not interested; 3, not interested if anything; 4, interested if anything; 5, interested; or 6, very interested.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interest questionnaire",
"sec_num": "3.2.2"
},
{
"text": "We propose an integer linear programming (ILP) model and a quadratic unconstraint binary optimization (QUBO) model for the personalized summary generation in terms of efficient and coherent information transmission.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "We considered a summarization problem, which extracts sentences that user u may be interested in from the selected N documents and then transmits them by voice within T seconds. The summary must be of interest to the user, coherent, and not redundant. Therefore, we formulated the summarization problem as an integer linear programming problem in which the objective function is defined by the balance between a high degree of interest in the sentences and a low degree of similarity between the sentences with the discourse structure as constraints. This is expressed as max. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integer linear programming model",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k\u2208D u N i<j\u2208S k b u ki b u kj (1 \u2212 r kij ) y kij",
"eq_num": "(1)"
}
],
"section": "Integer linear programming model",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(i) Function that returns the parent ID of s ki D u N IDs of the selected N documents for user u S k Sentence IDs contained in document d k C km Sentence IDs contained in chunk m of d k s.t. \u2200k, i, j : x ki \u2208 {0, 1}, y kij \u2208 {0, 1} k\u2208D u N i\u2208S k t ki x ki \u2264 T (2) \u2200k < l : i\u2208S k x ki \u2212 i\u2208S l x li \u2264 L (3) \u2200k, i : j = f k (i) , x ki \u2264 x kj (4) \u2200k, m, i \u2208 C km : j\u2208C km x kj = |C km | \u00d7 x ki (5) \u2200k, i, j : y kij \u2212 x ki \u2264 0 (6) \u2200k, i, j : y kij \u2212 x kj \u2264 0 (7) \u2200k, i, j : x ki + x kj \u2212 y kij \u2264 1",
"eq_num": "(8)"
}
],
"section": "Integer linear programming model",
"sec_num": "4.1"
},
{
"text": "Table 1 explains each variable. Here, the i-th sentence of the k-th document is expressed as s ki . r kij represents the cosine similarity between the bag-of-words constituting s ki and s kj .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integer linear programming model",
"sec_num": "4.1"
},
{
"text": "Equation 2 is a constraint restricting the utterance time of the summary to T seconds or less. Equation 3 is a constraint restricting the bias of the number of extracting sentences between documents to L sentences or less. Equation 4 is a constraint in which the parent s kj of s ki in the discourse dependency tree must be extracted when s ki is extracted. Equation 5 is a constraint requiring other sentences in the chunk to be extracted when extracting s ki in a chunk. Equations 6-8 are constraints that set y kij = 1 when s ki and s kj are selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integer linear programming model",
"sec_num": "4.1"
},
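A minimal sketch (not the authors' implementation) of how the ILP of Equations 1-8 could be written with PuLP, the library used for the CPU-CBC baseline in Section 6.1. The toy inputs b, t, r, parent, and chunks, the variable names, and the bounds T and L are illustrative assumptions:

    import itertools
    import pulp

    # Toy inputs (assumed): interest scores b^u_{ki}, utterance times t_{ki} [s],
    # pairwise similarities r_{kij}, discourse parents f_k(i), and chunks C_{km}.
    b = {0: [0.9, 0.4, 0.7], 1: [0.2, 0.8, 0.6]}
    t = {0: [6.0, 5.0, 7.0], 1: [4.0, 6.0, 5.0]}
    r = {(k, i, j): 0.1 for k in b for i, j in itertools.combinations(range(len(b[k])), 2)}
    parent = {0: [None, 0, 0], 1: [None, None, 1]}   # None = root
    chunks = {0: [{1, 2}], 1: []}
    T, L = 20.0, 1                                   # time budget [s] and bias limit

    prob = pulp.LpProblem("personalized_summary", pulp.LpMaximize)
    x = {(k, i): pulp.LpVariable("x_%d_%d" % (k, i), cat="Binary")
         for k in b for i in range(len(b[k]))}
    y = {kij: pulp.LpVariable("y_%d_%d_%d" % kij, cat="Binary") for kij in r}

    # Objective (1): pairwise interest weighted by dissimilarity.
    prob += pulp.lpSum(b[k][i] * b[k][j] * (1 - r[k, i, j]) * y[k, i, j] for (k, i, j) in r)
    # (2) total utterance time within the budget T.
    prob += pulp.lpSum(t[k][i] * x[k, i] for (k, i) in x) <= T
    # (3) bias in the number of extracted sentences between documents (k < l).
    for k, l in itertools.combinations(sorted(b), 2):
        prob += (pulp.lpSum(x[k, i] for i in range(len(b[k])))
                 - pulp.lpSum(x[l, i] for i in range(len(b[l])))) <= L
    # (4) a sentence may be selected only if its discourse parent is selected.
    for (k, i) in x:
        if parent[k][i] is not None:
            prob += x[k, i] <= x[k, parent[k][i]]
    # (5) all-or-nothing selection within each chunk.
    for k, chunk_list in chunks.items():
        for C in chunk_list:
            for i in C:
                prob += pulp.lpSum(x[k, j] for j in C) == len(C) * x[k, i]
    # (6)-(8) linearization y_{kij} = x_{ki} * x_{kj}.
    for (k, i, j) in r:
        prob += y[k, i, j] <= x[k, i]
        prob += y[k, i, j] <= x[k, j]
        prob += x[k, i] + x[k, j] - y[k, i, j] <= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print(sorted(key for key, var in x.items() if var.value() == 1))

The y variables only linearize the products x_{ki} x_{kj}, which is why Equations 6-8 appear as three inequalities per sentence pair.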
{
"text": "The maximum bias in the number of extracting sentences between documents L is calculated by the following formulas, which are based on the maximum summary length T , the number of documents N , and the average utterance time of the sentencest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integer linear programming model",
"sec_num": "4.1"
},
{
"text": "L = n \u221a N + 0.5 (9) n = T t \u00d7 N (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integer linear programming model",
"sec_num": "4.1"
},
{
"text": "n represents the expected number of sentences to be extracted from one document. L is the value obtained by dividingn by the square root of the number of documents and rounding the result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integer linear programming model",
"sec_num": "4.1"
},
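As a small numeric illustration of Equations 9 and 10 (all values here are assumed, not taken from the experiments):

    import math

    T_budget = 270.0   # maximum summary length T [s] (assumed)
    N = 3              # number of documents
    t_bar = 7.5        # average utterance time per sentence [s] (assumed)

    n_bar = T_budget / (t_bar * N)              # Eq. (10): expected sentences per document
    L = math.floor(n_bar / math.sqrt(N) + 0.5)  # Eq. (9): divide by sqrt(N) and round
    print(n_bar, L)                             # 12.0 sentences per document, L = 7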
{
"text": "To solve the combinatorial optimization problem with an Ising machine, the problem must be formulated with the Ising model or the quadratic unconstraint binary optimization (QUBO) model (Lucas, 2014; Glover et al., 2019 ",
"cite_spans": [
{
"start": 186,
"end": 199,
"text": "(Lucas, 2014;",
"ref_id": "BIBREF21"
},
{
"start": 200,
"end": 219,
"text": "Glover et al., 2019",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quadratic unconstrained binary optimization model",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H 0 = \u2212 k\u2208D u N i<j\u2208S k b u ki b u kj (1 \u2212 r kij ) x ki x kj (12) H 1 = \uf8eb \uf8ed T \u2212 k\u2208D u N i\u2208S k t ki x ki \u2212 log 2 (T \u22121) n=0 2 n y n \uf8f6 \uf8f7 \uf8f8 2",
"eq_num": "(13)"
}
],
"section": "Quadratic unconstrained binary optimization model",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H 2 = k<l\u2208D u N \uf8eb \uf8ed L \u2212 \uf8eb \uf8ed i\u2208S k x ki \u2212 i\u2208S l x li \uf8f6 \uf8f8 \u2212 log 2 (L\u22121) n=0 2 n z n \uf8f6 \uf8f7 \uf8f8 2",
"eq_num": "(14)"
}
],
"section": "Quadratic unconstrained binary optimization model",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H 3 = x ki \u2212 x ki x kj=f k (i) 2",
"eq_num": "(15)"
}
],
"section": "Quadratic unconstrained binary optimization model",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H 4 = k\u2208D u N m\u2208C k i\u2208C km \uf8eb \uf8ed j\u2208C km x kj \u2212 |C km | \u00d7 x ki 2",
"eq_num": "(16)"
}
],
"section": "Quadratic unconstrained binary optimization model",
"sec_num": "4.2"
},
{
"text": "where y n and z n are slack variables introduced to convert inequality constraints into equality constraints, \u03bb 1 , \u03bb 2 , \u03bb 3 , and \u03bb 4 are the weight coefficients for each constraint. QUBO models are solved by a simulated annealing-based method (Aramon et al., 2019) or parallel tempering (known as replica-exchange Monte Carlo) (Matsubara et al., 2020) . In these methods, multiple solution candidates can be obtained by annealing in parallel with different initial values. However, these methods do not guarantee that constraint violations will not occur. Therefore, the solution candidate with the lowest constraint violation score and the highest score of the objective function is adopted. The constraint violation score E total is calculated as",
"cite_spans": [
{
"start": 246,
"end": 267,
"text": "(Aramon et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 330,
"end": 354,
"text": "(Matsubara et al., 2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quadratic unconstrained binary optimization model",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E total = E dep + E chunk + E bias + E time (17) E bias = max L \u2212 L, 0",
"eq_num": "(18)"
}
],
"section": "Quadratic unconstrained binary optimization model",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E time = max T \u2212 T, 0",
"eq_num": "(19)"
}
],
"section": "Quadratic unconstrained binary optimization model",
"sec_num": "4.2"
},
{
"text": "where E dep is the number of dependency errors, E chunk is the number of chunk errors,L is the maximum bias in the number of extracted sentences between documents,T is the total utterance time of the extracted sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quadratic unconstrained binary optimization model",
"sec_num": "4.2"
},
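To make the penalty construction above concrete, the following sketch (a simplified illustration, not the Digital Annealer's internal representation) shows how an inequality such as Equation 2 is folded into the quadratic term H_1 of Equation 13: binary slack bits turn the inequality into an equality whose squared residual is expanded into QUBO coefficients Q[(p, q)]. The function name and data layout are assumptions:

    import math
    from collections import defaultdict

    def add_time_penalty(Q, times, T, lam):
        """Add lam * (T - sum_i t_i x_i - sum_n 2^n y_n)^2 to the QUBO dict Q.

        `times` maps sentence-variable indices to utterance times t_{ki}; the slack
        bits y_n receive fresh indices after the sentence variables.
        """
        n_slack = int(math.log2(max(T - 1, 1))) + 1
        slack_base = max(times) + 1
        # Residual T + sum_p a_p z_p, with a_p = -t_p for sentences and -2^n for slack bits.
        coeff = {p: -tp for p, tp in times.items()}
        coeff.update({slack_base + n: -(2 ** n) for n in range(n_slack)})
        # (T + sum_p a_p z_p)^2 = T^2 + sum_p (a_p^2 + 2*T*a_p) z_p + 2 sum_{p<q} a_p a_q z_p z_q,
        # using z_p^2 = z_p for binaries; the constant T^2 does not change the argmin.
        for p, a in coeff.items():
            Q[(p, p)] += lam * (a * a + 2 * T * a)
        items = sorted(coeff.items())
        for idx, (p, a) in enumerate(items):
            for q, aq in items[idx + 1:]:
                Q[(p, q)] += lam * 2 * a * aq
        return Q

    Q = defaultdict(float)
    add_time_penalty(Q, times={0: 6.0, 1: 5.0, 2: 7.0}, T=10, lam=2.0)

H_2 to H_4 are built in the same way, and the candidates returned by the parallel runs would then be ranked first by the violation score E_total of Equation 17 and only then by the objective value.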
{
"text": "Quantum computing technologies are categorized into two types: quantum gate computers (Arute et al., 2019; Gyongyosi, 2020) and Ising machines. Quantum gate computers are for universal computing, whereas Ising machines specialize in searching for solutions of combinatorial optimization problems. Ising machines can be subdivided into two categories: quantum annealing machines (Johnson et al., 2011; Bunyk et al., 2014; Maezawa et al., 2019) and simulated annealing machines (Yamaoka et al., 2016; Okuyama et al., 2017; Aramon et al., 2019; Matsubara et al., 2020) . Quantum annealing machines search solutions using quantum bits, which are made of quantum devices such as a superconducting circuit. By contrast, simulated annealing machines use a digital circuit. Digital Annealer (Aramon et al., 2019; Matsubara et al., 2020) is a type of simulated annealing machines with a new digital circuit architecture, which is designed to solve combinatorial optimization problems efficiently. This study uses a Digital Annealing Unit (DAU) of the second generation Digital Annealer. The DAU has an annealing mode (Aramon et al., 2019) or parallel tempering mode (Matsubara et al., 2020) to solve QUBO models. It can handle up to 4,096 binary variables with 64-bit precision or as many as 8,192 binary variables with 16-bit precision. Hereinafter, the DAU in the annealing mode is referred to as DAU-AM, and the DAU in the parallel tempering mode is referred to as DAU-PTM.",
"cite_spans": [
{
"start": 86,
"end": 106,
"text": "(Arute et al., 2019;",
"ref_id": null
},
{
"start": 107,
"end": 123,
"text": "Gyongyosi, 2020)",
"ref_id": "BIBREF10"
},
{
"start": 378,
"end": 400,
"text": "(Johnson et al., 2011;",
"ref_id": "BIBREF15"
},
{
"start": 401,
"end": 420,
"text": "Bunyk et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 421,
"end": 442,
"text": "Maezawa et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 476,
"end": 498,
"text": "(Yamaoka et al., 2016;",
"ref_id": "BIBREF38"
},
{
"start": 499,
"end": 520,
"text": "Okuyama et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 521,
"end": 541,
"text": "Aramon et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 542,
"end": 565,
"text": "Matsubara et al., 2020)",
"ref_id": "BIBREF26"
},
{
"start": 774,
"end": 804,
"text": "Annealer (Aramon et al., 2019;",
"ref_id": null
},
{
"start": 805,
"end": 828,
"text": "Matsubara et al., 2020)",
"ref_id": "BIBREF26"
},
{
"start": 1108,
"end": 1129,
"text": "(Aramon et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 1157,
"end": 1181,
"text": "(Matsubara et al., 2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Digital Annealer",
"sec_num": "5"
},
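The DAU's hardware implementation is not reproduced here, but the algorithm family it accelerates can be illustrated with a plain CPU simulated-annealing sketch over a QUBO dictionary; the geometric temperature schedule, single-bit-flip moves, and the toy problem are assumptions for illustration only:

    import math
    import random

    def qubo_energy(Q, z):
        """Energy of binary assignment z under QUBO coefficients Q[(p, q)]."""
        return sum(c * z[p] * z[q] for (p, q), c in Q.items())

    def simulated_annealing(Q, n_vars, n_iters=10000, t_start=5.0, t_end=0.01, seed=0):
        rng = random.Random(seed)
        z = [rng.randint(0, 1) for _ in range(n_vars)]
        energy = qubo_energy(Q, z)
        best, best_energy = z[:], energy
        for step in range(n_iters):
            temp = t_start * (t_end / t_start) ** (step / max(n_iters - 1, 1))
            p = rng.randrange(n_vars)            # propose a single bit flip
            z[p] ^= 1
            new_energy = qubo_energy(Q, z)
            if new_energy <= energy or rng.random() < math.exp((energy - new_energy) / temp):
                energy = new_energy              # accept the move
                if energy < best_energy:
                    best, best_energy = z[:], energy
            else:
                z[p] ^= 1                        # reject: undo the flip
        return best, best_energy

    # Tiny demo: minimize x0 + x1 - 2*x0*x1 (minima at x0 = x1 with energy 0).
    Q_demo = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): -2.0}
    print(simulated_annealing(Q_demo, n_vars=2, n_iters=2000))

Running such an annealing many times with different seeds corresponds to the multiple runs or replicas mentioned for DAU-AM and DAU-PTM in Section 6.1.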
{
"text": "Using the constructed dataset, we evaluated the performance of the personalized summarization method for dialogue scenario planning. The ILP model was solved by the branch-and-cut method 1 (Mitchell, 2002; Padberg and Rinaldi, 1991) on the CPU. Hereinafter, this method is referred to as CPU-CBC. We used CPU-CBC as a benchmark and compared the performance of CPU-CBC, DAU-AM and DAU-PTM.",
"cite_spans": [
{
"start": 189,
"end": 205,
"text": "(Mitchell, 2002;",
"ref_id": "BIBREF27"
},
{
"start": 206,
"end": 232,
"text": "Padberg and Rinaldi, 1991)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "We used 2,117 participants data which the answer time of the six articles was between 5 and 30 minutes. The performance was evaluated for two cases. The first transmitted N = 6 articles in T = 450 seconds and the second transmitted the top N = 3 articles with high interest in the topic in T = 270 seconds. Sentences in the news articles was synthesized by AITalk 4.1 2 to calculate the duration of speech. The maximum summary length T was calculated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "6.1"
},
{
"text": "T = T \u2212 (N \u2212 1) \u00d7 (q d \u2212 q s ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "6.1"
},
{
"text": "where T denotes the total utterance time of the primary plan, q s denotes the pause between sentences, and q d denotes the pause between documents. Here, q s = 1 second and q d = 3 seconds. The value obtained by adding q s to the playback time of the synthesized audio file was set as t ki .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "6.1"
},
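As a worked instance of the budget computation above, using the stated values for the N = 6 case:

    # T' = T - (N - 1) * (q_d - q_s): the per-sentence pause q_s is already included
    # in each t_ki, so only the extra pause at the (N - 1) document boundaries is subtracted.
    N, T_total = 6, 450        # six articles delivered within 450 seconds
    q_s, q_d = 1, 3            # pauses between sentences and between documents [s]

    T_prime = T_total - (N - 1) * (q_d - q_s)
    print(T_prime)             # 440 seconds available for the summed t_ki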
{
"text": "The PULP_CBC_CMD solver of the PuLP 3 , which is a Python library for linear programming optimization, was used to solve the ILP model. Python and PuLP versions were 3.7.6 and 2.4, respectively. The parameter for the number of threads of PULP_CBC_CMD was set to 30. The execution time of the solving function was measured on the Google Compute Engine 4 with the following settings: OS, Ubuntu 18.04; CPU, Xeon (2.20 GHz, 1 https://projects.coin-or.org/Cbc 2 https://www.ai-j.jp/product/ voiceplus/manual/ 3 https://coin-or.github.io/pulp/ 4 https://cloud.google.com/compute/?hl= en 32 cores); Memory, 64 GB. Figure 1 shows the number of problems for a given number of sentences when N = 3. These are the top three articles with the highest degree of interest in the topic among the six articles distributed to each participant. Figure 2 shows the number of problems for a given number of sentences when N = 6. Since the six articles were distributed so that the total number of sentences was as even as possible across participants, the variation in the number of sentences was small.",
"cite_spans": [],
"ref_spans": [
{
"start": 608,
"end": 616,
"text": "Figure 1",
"ref_id": null
},
{
"start": 828,
"end": 836,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "6.1"
},
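A minimal sketch of how the CPU-CBC baseline can be invoked and timed with PuLP; the stand-in problem is an assumption (in practice it would be the ILP of Section 4.1), and the threads argument mirrors the 30-thread setting described above, assuming a PuLP version such as 2.4 that exposes it:

    import time
    import pulp

    # Tiny stand-in problem; in practice `prob` would be the personalized-summary ILP.
    prob = pulp.LpProblem("demo", pulp.LpMaximize)
    v = pulp.LpVariable("v", cat="Binary")
    prob += v

    solver = pulp.PULP_CBC_CMD(threads=30, msg=False)
    start = time.perf_counter()
    status = prob.solve(solver)
    print(pulp.LpStatus[status], "%.3f s" % (time.perf_counter() - start))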
{
"text": "The DAU must set the number of bits parameter from 1, 024, 2,048, 4,096, or 8,192 , depending on the problem size. In the experimental setting of N = 3, T = 270, the 2,090 QUBO problems were less than 2,048 bits and the 27 QUBO problems were less than 4,096. On the other hand, in the experimental setting of N = 6, T = 450, the 2,017 QUBO problems were less than 4,096 bits and the 100 QUBO problems were less than 8,192. In the latter experiment, these 100 participants data were excluded because the calculation precision of the DAU decreased when the problem size exceeds 4,096 bits.",
"cite_spans": [
{
"start": 54,
"end": 81,
"text": "024, 2,048, 4,096, or 8,192",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "6.1"
},
{
"text": "The number of replicas in DAU-PTM and the number of runs of annealing in DAU-AM were 128. Since the performance of the DAU mainly depended on \u03bb and the number of searches in one annealing (#iteration), these parameters were adjusted to prevent constraint violations. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "6.1"
},
{
"text": "We used EoIT \u03b2 (efficiency of information transmission) (Takatsu et al., 2021) as the evaluation metric. When C is the coverage of sentences annotated as \"very interested,\" \"interested,\" or \"interested if anything,\" and E is the exclusion rate of the sentences annotated as \"not interested at all,\" \"not interested,\" or \"not interested if anything,\" EoIT \u03b2 is defined based on the weighted F-measure (Chinchor, 1992) as",
"cite_spans": [
{
"start": 56,
"end": 78,
"text": "(Takatsu et al., 2021)",
"ref_id": "BIBREF35"
},
{
"start": 400,
"end": 416,
"text": "(Chinchor, 1992)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "6.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "EoIT \u03b2 = 1 + \u03b2 2 \u00d7 C \u00d7 E \u03b2 2 \u00d7 C + E",
"eq_num": "(20)"
}
],
"section": "Evaluation metrics",
"sec_num": "6.2"
},
{
"text": "When \u03b2 = 2, the exclusion rate is twice as important as the coverage. Compared to textual media, which allows readers to read at their own pace, dialogue-based media does not allow users to skip unnecessary information or skim necessary information while listening. Consequently, we assumed that the exclusion rate is more important than the coverage in information transmission by spoken dialogue and set \u03b2 = 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "6.2"
},
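A direct transcription of Equation 20; the zero-division guard and the example values are assumptions:

    def eoit(coverage, exclusion_rate, beta=2.0):
        """EoIT_beta: weighted combination of coverage C and exclusion rate E (Eq. 20)."""
        if coverage == 0.0 and exclusion_rate == 0.0:
            return 0.0
        return ((1 + beta ** 2) * coverage * exclusion_rate) / (beta ** 2 * coverage + exclusion_rate)

    print(eoit(0.6, 0.8))   # example: C = 0.6, E = 0.8 -> 0.75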
{
"text": "Tables 2 and 3 show the results of N = 3, T = 270 and N = 6, T = 450, respectively. The values of the evaluation metrics and processing times represent the average. DAU-AM had a shorter processing time than DAU-PTM for the same number of iterations. However, in terms of the EoIT, DAU-PTM was higher than DAU-AM. The EoIT of DAU-PTM improved as the number of iterations increased, but the processing time also increased. On the other hand, increasing the number of iterations did not improve the EoIT of DAU-AM. Figures 3 and 4 show the distributions of processing time for each number of sentences in the experimental settings of N = 3, T = 270 and N = 6, T = 450, respectively. The number of iterations for DAU-AM and DAU-PTM was 1,000. The processing time of CPU-CBC became longer as the number of sentences increased. Even if the number of sentences was the same, the processing time varied widely. On the other hand, the processing time of DAU-AM and DAU-PTM changed according to the size of the QUBO problems and the number of iterations, regardless of the number of sentences. At N = 6, the size of all QUBO problems was less than 4,096, and the processing time of DAU was almost constant regardless of the number of sentences. At N = 3, the size of the QUBO problems was less than 2,048 when the number of sentences was 59 or less, and less than 4,096 when the number of sentences was 63 or more. On the other hand, when the number of sentences was between 60 and 62, problems with less than 2,048 and problems with less than 4,096 were mixed due to the number of chunks.",
"cite_spans": [],
"ref_spans": [
{
"start": 512,
"end": 527,
"text": "Figures 3 and 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "6.3"
},
{
"text": "Although the DAU is inferior to CPU-CBC in the EoIT, its processing time is considerably shorter than CPU-CBC. In addition, the processing time of the DAU can be estimated in advance based on the size of the QUBO problems and the number of iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "6.3"
},
{
"text": "The DAU does not guarantee that constraint violations will not occur. Although the DAU did not induce constraint violations in the parameter settings shown in the Tables 2 and 3, constraint violations occurred when the number of iterations was too small or the value of \u03bb was inappropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.4"
},
{
"text": "In the case that delivers six articles consisting of 15 to 25 sentences, one DAU can generate personalized summaries on the scale of 100,000 users within 6 hours since about 0.2 seconds per person was necessary to generate a summary. In other words, an application with 100,000 users can prepare personalized summaries of the previous night's news by the next morning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.4"
},
{
"text": "The second generation Digital Annealer can only handle problems up to 8,192 bits. However, a third generation Digital Annealer, which can solve problems on the scale of 100,000 bits, is currently under development (Nakayama et al., 2021) . Spoken dialogue systems capable of extracting and presenting personalized information for each user from a huge amount of data in real time will be developed in the near future.",
"cite_spans": [
{
"start": 214,
"end": 237,
"text": "(Nakayama et al., 2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.4"
},
{
"text": "This paper proposed a quadratic unconstraint binary optimization (QUBO) model for the real-time personalized summary generation in terms of effi-cient and coherent information transmission. The model extracts sentences that maximize the sum of the degree of user's interest in the sentences of documents with the discourse structure of each document and total utterance time as constraints. To evaluate the effectiveness of the proposed method, we constructed a Japanese news article corpus with annotations of the discourse structure, users' profiles, and interests in sentences and topics. Experiments demonstrated that our QUBO model could be solved by a Digital Annealer, which is a simulated annealing-based Ising machine, in a practical time without violating the constraints using the dataset. In the future, we will verify the effect of personalized scenarios on the spoken dialogue system. Figure 6 : Tree depth distribution, which is the maximum number of sentences from the root node to the leaf nodes for each article. Average tree depth per article is 6.5 sentences. ",
"cite_spans": [],
"ref_spans": [
{
"start": 899,
"end": 907,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "This work was supported by Japan Science and Technology Agency (JST) Program for Creating STart-ups from Advanced Research and Technology (START), Grant Number JPMJST1912 \"Commercialization of Socially-Intelligent Conversational AI Media Service.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Figure 5: Annotation example of the discourse structure. [ * ] indicates the sentence position in the original text. Dependency annotation, discourse relation annotation, and chunk annotation criteria are described in 3.1.1, 3.1.2, and 3.1.3, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
},
{
"text": "Start means that the parent of the sentence is the root node. Result means that the child sentence is the result of the parent sentence. Cause means that the child sentence is the cause of the parent sentence and includes event origin, the basis of the author's claim, and the reason. Background means that the parent sentence states facts or events, and the child sentence provides the background or premise. Correspondence means that the child sentence answers the question in the parent sentence. It also includes countermeasures or examinations of problems or issues. Contrast means that the parent sentence and the child sentence have a contrasting relationship. Topic Change means that the topic of the parent sentence changes in the child sentence, which includes a subtopic level change. Example means that the child sentence provides an instance to support the statement in the parent sentence. Conclusion means that the child sentence is a summary or conclusion of the story up to the parent sentence. Supplement means that the child sentence provides details or supplementary information about what is stated in the parent sentence. In a broad sense, although the above discourse relations are also included in supplement, herein, Supplement covers the inter-sentence relations that are not included in the aforementioned discourse relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of the discourse relations",
"sec_num": null
},
{
"text": "A one-month training period was established, and discussions were held until the annotation criteria of the two annotators matched. To validate the inter-rater reliability, the two annotators annotated the same 34 articles after the training period. The Cohen's kappa of dependencies, discourse relations, and chunks were 0.960, 0.943, and 0.895, respectively. To calculate kappa of the discourse relations, the comparison was limited to the inter-sentence dependencies in which the parent sentence matched. To calculate kappa of the chunks, we set the label of the sentence selected as the hard chunk, soft chunk, and other to \"1, 2, and 0,\" respectively. Then we compared the labels between sentences. Given the high inter-rater reliability, we concluded that the two annotators could cover different assignments separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of the discourse annotations",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Physics-inspired optimization for quadratic unconstrained problems using a Digital Annealer",
"authors": [
{
"first": "Gili",
"middle": [],
"last": "Maliheh Aramon",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Toshiyuki",
"middle": [],
"last": "Valiante",
"suffix": ""
},
{
"first": "Hirotaka",
"middle": [],
"last": "Miyazawa",
"suffix": ""
},
{
"first": "Helmut",
"middle": [
"G"
],
"last": "Tamura",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Katzgraber",
"suffix": ""
}
],
"year": 2019,
"venue": "Frontiers in Physics",
"volume": "7",
"issue": "48",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maliheh Aramon, Gili Rosenberg, Elisabetta Valiante, Toshiyuki Miyazawa, Hirotaka Tamura, and Hel- mut G. Katzgraber. 2019. Physics-inspired opti- mization for quadratic unconstrained problems us- ing a Digital Annealer. Frontiers in Physics, 7(48):1-14.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Quantum supremacy using a programmable superconducting processor",
"authors": [
{
"first": "Evan",
"middle": [],
"last": "Isakov",
"suffix": ""
},
{
"first": "Zhang",
"middle": [],
"last": "Jeffrey",
"suffix": ""
},
{
"first": "Dvir",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Kostyantyn",
"middle": [],
"last": "Kafri",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Kechedzhi",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"V"
],
"last": "Kelly",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Klimov",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Knysh",
"suffix": ""
},
{
"first": "Fedor",
"middle": [],
"last": "Korotkov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Kostritsa",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Landhuis",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Lindmark",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Lucero",
"suffix": ""
},
{
"first": "Salvatore",
"middle": [],
"last": "Lyakh",
"suffix": ""
},
{
"first": "Jarrod",
"middle": [
"R"
],
"last": "Mandr\u00e0",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mc-Clean",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Mcewen",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Megrant",
"suffix": ""
},
{
"first": "Kristel",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Masoud",
"middle": [],
"last": "Michielsen",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Mohseni",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Mutus",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Naaman",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Neeley",
"suffix": ""
},
{
"first": "Murphy",
"middle": [
"Yuezhen"
],
"last": "Neill",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "Ostby",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Petukhov",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Platt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Quintana",
"suffix": ""
}
],
"year": 2019,
"venue": "Nature",
"volume": "574",
"issue": "7779",
"pages": "505--510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isakov, Evan Jeffrey, Zhang Jiang, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Paul V. Klimov, Sergey Knysh, Alexander Korotkov, Fedor Kostritsa, David Landhuis, Mike Lindmark, Erik Lucero, Dmitry Lyakh, Salvatore Mandr\u00e0, Jarrod R. Mc- Clean, Matthew McEwen, Anthony Megrant, Xiao Mi, Kristel Michielsen, Masoud Mohseni, Josh Mu- tus, Ofer Naaman, Matthew Neeley, Charles Neill, Murphy Yuezhen Niu, Eric Ostby, Andre Petukhov, John C. Platt, Chris Quintana, Eleanor G. Rieffel, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vadim Smelyanskiy, Kevin J. Sung, Matthew D. Trevithick, Amit Vainsencher, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, Hartmut Neven, and John M. Martinis. 2019. Quantum supremacy using a programmable superconducting processor. Nature, 574(7779):505--510.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Logics of conversation: Studies in natural language processing",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Asher and Alex Lascarides. 2003. Logics of conversation: Studies in natural language process- ing. Cambridge University Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Architectural considerations in the design of a superconducting quantum annealing processor",
"authors": [
{
"first": "Paul",
"middle": [
"I"
],
"last": "Bunyk",
"suffix": ""
},
{
"first": "Emile",
"middle": [
"M"
],
"last": "Hoskinson",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"W"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tolkacheva",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Altomare",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Berkley",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [
"P"
],
"last": "Hilton",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Lanting",
"suffix": ""
},
{
"first": "Anthony",
"middle": [
"J"
],
"last": "Przybysz",
"suffix": ""
},
{
"first": "Jed",
"middle": [],
"last": "Whittaker",
"suffix": ""
}
],
"year": 2014,
"venue": "IEEE Transactions on Applied Superconductivity",
"volume": "24",
"issue": "4",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul I. Bunyk, Emile M. Hoskinson, Mark W. John- son, Elena Tolkacheva, Fabio Altomare, Andrew J. Berkley, Richard Harris, Jeremy P. Hilton, Trevor Lanting, Anthony J. Przybysz, and Jed Whittaker. 2014. Architectural considerations in the design of a superconducting quantum annealing processor. IEEE Transactions on Applied Superconductivity, 24(4):1-10.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Building a discourse-tagged corpus in the framework of rhetorical structure theory",
"authors": [
{
"first": "Lynn",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ellen"
],
"last": "Okurovsky",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2nd SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynn Carlson, Daniel Marcu, and Mary Ellen Okurovsky. 2001. Building a discourse-tagged cor- pus in the framework of rhetorical structure theory. In Proceedings of the 2nd SIGdial Workshop on Dis- course and Dialogue, pages 1-10.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "MUC-4 evaluation metrics",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Chinchor",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 4th conference on Message understanding",
"volume": "",
"issue": "",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nancy Chinchor. 1992. MUC-4 evaluation metrics. In Proceedings of the 4th conference on Message un- derstanding, pages 22-29.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "User-model based personalized summarization. Information Processing and Management",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "D\u00edaz",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Gerv\u00e1s",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "43",
"issue": "",
"pages": "1715--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alberto D\u00edaz and Pablo Gerv\u00e1s. 2007. User-model based personalized summarization. Information Processing and Management, 43(6):1715-1734.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "DOC2DIAL: A framework for dialogue composition grounded in business documents",
"authors": [
{
"first": "Song",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Kshitij",
"middle": [],
"last": "Fadnis",
"suffix": ""
},
{
"first": "Q",
"middle": [
"Vera"
],
"last": "Liao",
"suffix": ""
},
{
"first": "Luis",
"middle": [
"A"
],
"last": "Lastras",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 33rd Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Song Feng, Kshitij Fadnis, Q. Vera Liao, and Luis A. Lastras. 2019. DOC2DIAL: A framework for dia- logue composition grounded in business documents. In Proceedings of the 33rd Conference on Neural In- formation Processing Systems, pages 1-4.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Quantum bridge analytics I: A tutorial on formulating and using QUBO models. 4OR: A",
"authors": [
{
"first": "Fred",
"middle": [],
"last": "Glover",
"suffix": ""
},
{
"first": "Gary",
"middle": [],
"last": "Kochenberger",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Du",
"suffix": ""
}
],
"year": 2019,
"venue": "Quarterly Journal of Operations Research",
"volume": "17",
"issue": "4",
"pages": "335--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fred Glover, Gary Kochenberger, and Yu Du. 2019. Quantum bridge analytics I: A tutorial on formulat- ing and using QUBO models. 4OR: A Quarterly Journal of Operations Research, 17(4):335--371.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised quantum gate control for gate-model quantum computers",
"authors": [
{
"first": "Laszlo",
"middle": [],
"last": "Gyongyosi",
"suffix": ""
}
],
"year": 2020,
"venue": "Scientific Reports",
"volume": "10",
"issue": "1",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laszlo Gyongyosi. 2020. Unsupervised quantum gate control for gate-model quantum computers. Scien- tific Reports, 10(1):1-16.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Empirical comparison of dependency conversions for RST discourse trees",
"authors": [
{
"first": "Katsuhiko",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "128--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katsuhiko Hayashi, Tsutomu Hirao, and Masaaki Na- gata. 2016. Empirical comparison of dependency conversions for RST discourse trees. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 128-136.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Summarizing a document by trimming the discourse tree",
"authors": [
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nishino",
"suffix": ""
},
{
"first": "Yasuhisa",
"middle": [],
"last": "Yoshida",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Norihito",
"middle": [],
"last": "Yasuda",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2015,
"venue": "Speech and Language Processing",
"volume": "23",
"issue": "",
"pages": "2081--2092",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsutomu Hirao, Masaaki Nishino, Yasuhisa Yoshida, Jun Suzuki, Norihito Yasuda, and Masaaki Nagata. 2015. Summarizing a document by trimming the discourse tree. IEEE/ACM Transactions on Au- dio, Speech and Language Processing, 23(11):2081- 2092.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Singledocument summarization as a tree knapsack problem",
"authors": [
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Yasuhisa",
"middle": [],
"last": "Yoshida",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nishino",
"suffix": ""
},
{
"first": "Norihito",
"middle": [],
"last": "Yasuda",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1515--1520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsutomu Hirao, Yasuhisa Yoshida, Masaaki Nishino, Norihito Yasuda, and Masaaki Nagata. 2013. Single- document summarization as a tree knapsack prob- lem. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1515-1520.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Context-enhanced personalized social summarization",
"authors": [
{
"first": "Po",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Donghong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Teng",
"suffix": ""
},
{
"first": "Yujing",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 24th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1223--1238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Po Hu, Donghong Ji, Chong Teng, and Yujing Guo. 2012. Context-enhanced personalized social sum- marization. In Proceedings of the 24th International Conference on Computational Linguistics, pages 1223-1238.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Quantum annealing with manufactured spins",
"authors": [
{
"first": "M",
"middle": [
"W"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "M",
"middle": [
"H S"
],
"last": "Amin",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gildert",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Lanting",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Hamze",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Dickson",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "A",
"middle": [
"J"
],
"last": "Berkley",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bunyk",
"suffix": ""
},
{
"first": "E",
"middle": [
"M"
],
"last": "Chapple",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Enderud",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Hilton",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Ladizinsky",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Ladizinsky",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Perminov",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Rich",
"suffix": ""
},
{
"first": "M",
"middle": [
"C"
],
"last": "Thom",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Tolkacheva",
"suffix": ""
},
{
"first": "C",
"middle": [
"J S"
],
"last": "Truncik",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Uchaikin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2011,
"venue": "Nature",
"volume": "473",
"issue": "7346",
"pages": "194--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. W. Johnson, M. H. S. Amin, S. Gildert, T. Lant- ing, F. Hamze, N. Dickson, R. Harris, A. J. Berkley, J. Johansson, P. Bunyk, E. M. Chapple, C. Enderud, J. P. Hilton, K. Karimi, E. Ladizinsky, N. Ladizinsky, T. Oh, I. Perminov, C. Rich, M. C. Thom, E. Tolka- cheva, C. J. S. Truncik, S. Uchaikin, J. Wang, B. Wil- son, and G. Rose. 2011. Quantum annealing with manufactured spins. Nature, 473(7346):194-198.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Building a Japanese corpus of temporal-causal-discourse structures based on SDRT for extracting causal relations",
"authors": [
{
"first": "Kimi",
"middle": [],
"last": "Kaneko",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Bekki",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the EACL 2014 Workshop on Computational Approaches to Causality in Language",
"volume": "",
"issue": "",
"pages": "33--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimi Kaneko and Daisuke Bekki. 2014. Building a Japanese corpus of temporal-causal-discourse struc- tures based on SDRT for extracting causal relations. In Proceedings of the EACL 2014 Workshop on Com- putational Approaches to Causality in Language, pages 33-39.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Rapid development of a corpus with discourse annotations using two-stage crowdsourcing",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Yuichiro",
"middle": [],
"last": "Machida",
"suffix": ""
},
{
"first": "Tomohide",
"middle": [],
"last": "Shibata",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Hayato",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Sassano",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 25th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "269--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daisuke Kawahara, Yuichiro Machida, Tomohide Shi- bata, Sadao Kurohashi, Hayato Kobayashi, and Man- abu Sassano. 2014. Rapid development of a corpus with discourse annotations using two-stage crowd- sourcing. In Proceedings of the 25th International Conference on Computational Linguistics, pages 269-278.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Single document summarization based on nested tree structure",
"authors": [
{
"first": "Yuta",
"middle": [],
"last": "Kikuchi",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Hiroya",
"middle": [],
"last": "Takamura",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Okumura",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "315--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuta Kikuchi, Tsutomu Hirao, Hiroya Takamura, Man- abu Okumura, and Masaaki Nagata. 2014. Single document summarization based on nested tree struc- ture. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 315-320.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving crowdsourcing-based annotation of Japanese discourse relations",
"authors": [
{
"first": "Yudai",
"middle": [],
"last": "Kishimoto",
"suffix": ""
},
{
"first": "Shinnosuke",
"middle": [],
"last": "Sawada",
"suffix": ""
},
{
"first": "Yugo",
"middle": [],
"last": "Murawaki",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "4044--4048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yudai Kishimoto, Shinnosuke Sawada, Yugo Mu- rawaki, Daisuke Kawahara, and Sadao Kurohashi. 2018. Improving crowdsourcing-based annotation of Japanese discourse relations. In Proceedings of the 11th International Conference on Language Re- sources and Evaluation, pages 4044-4048.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Text-level discourse dependency parsing",
"authors": [
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "25--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sujian Li, Liang Wang, Ziqiang Cao, and Wenjie Li. 2014. Text-level discourse dependency parsing. In Proceedings of the 52nd Annual Meeting of the Asso- ciation for Computational Linguistics, pages 25-35.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Ising formulations of many NP problems",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Lucas",
"suffix": ""
}
],
"year": 2014,
"venue": "Frontiers in Physics",
"volume": "2",
"issue": "5",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Lucas. 2014. Ising formulations of many NP problems. Frontiers in Physics, 2(5):1-15.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Balanced corpus of contemporary written Japanese. Language Resources and Evaluation",
"authors": [
{
"first": "Kikuo",
"middle": [],
"last": "Maekawa",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Yamazaki",
"suffix": ""
},
{
"first": "Toshinobu",
"middle": [],
"last": "Ogiso",
"suffix": ""
},
{
"first": "Takehiko",
"middle": [],
"last": "Maruyama",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Ogura",
"suffix": ""
},
{
"first": "Wakako",
"middle": [],
"last": "Kashino",
"suffix": ""
},
{
"first": "Hanae",
"middle": [],
"last": "Koiso",
"suffix": ""
},
{
"first": "Masaya",
"middle": [],
"last": "Yamaguchi",
"suffix": ""
},
{
"first": "Makiro",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "Yasuharu",
"middle": [],
"last": "Den",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "48",
"issue": "",
"pages": "345--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kikuo Maekawa, Makoto Yamazaki, Toshinobu Ogiso, Takehiko Maruyama, Hideki Ogura, Wakako Kashino, Hanae Koiso, Masaya Yamaguchi, Makiro Tanaka, and Yasuharu Den. 2014. Balanced corpus of contemporary written Japanese. Language Re- sources and Evaluation, 48(2):345-371.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Toward practical-scale quantum annealing machine for prime factoring",
"authors": [
{
"first": "Masaaki",
"middle": [],
"last": "Maezawa",
"suffix": ""
},
{
"first": "Go",
"middle": [],
"last": "Fujii",
"suffix": ""
},
{
"first": "Mutsuo",
"middle": [],
"last": "Hidaka",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Imafuku",
"suffix": ""
},
{
"first": "Katsuya",
"middle": [],
"last": "Kikuchi",
"suffix": ""
},
{
"first": "Hanpei",
"middle": [],
"last": "Koike",
"suffix": ""
},
{
"first": "Kazumasa",
"middle": [],
"last": "Makise",
"suffix": ""
},
{
"first": "Shuichi",
"middle": [],
"last": "Nagasawa",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Nakagawa",
"suffix": ""
},
{
"first": "Masahiro",
"middle": [],
"last": "Ukibe",
"suffix": ""
},
{
"first": "Shiro",
"middle": [],
"last": "Kawabata",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of the Physical Society of Japan",
"volume": "",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masaaki Maezawa, Go Fujii, Mutsuo Hidaka, Kentaro Imafuku, Katsuya Kikuchi, Hanpei Koike, Kazu- masa Makise, Shuichi Nagasawa, Hiroshi Naka- gawa, Masahiro Ukibe, and Shiro Kawabata. 2019. Toward practical-scale quantum annealing machine for prime factoring. Journal of the Physical Society of Japan, 88(6).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Machine learning of generic and user-focused summarization",
"authors": [
{
"first": "Inderjeet",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Bloedorn",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 15th National / 10th Conference on Artificial Intelligence / Innovative Applications of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "820--826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inderjeet Mani and Eric Bloedorn. 1998. Machine learning of generic and user-focused summarization. In Proceedings of the 15th National / 10th Confer- ence on Artificial Intelligence / Innovative Applica- tions of Artificial Intelligence, pages 820-826.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Rethorical structure theory: Toward a functional theory of text organization",
"authors": [
{
"first": "C",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Mann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "Text",
"volume": "8",
"issue": "3",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rethorical structure theory: Toward a functional the- ory of text organization. Text, 8(3):243-281.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Digital Annealer for high-speed solving of combinatorial optimization problems and its applications",
"authors": [
{
"first": "Satoshi",
"middle": [],
"last": "Matsubara",
"suffix": ""
},
{
"first": "Motomu",
"middle": [],
"last": "Takatsu",
"suffix": ""
},
{
"first": "Toshiyuki",
"middle": [],
"last": "Miyazawa",
"suffix": ""
},
{
"first": "Takayuki",
"middle": [],
"last": "Shibasaki",
"suffix": ""
},
{
"first": "Yasuhiro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Takemoto",
"suffix": ""
},
{
"first": "Hirotaka",
"middle": [],
"last": "Tamura",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 25th Asia and South Pacific Design Automation Conference",
"volume": "",
"issue": "",
"pages": "667--672",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satoshi Matsubara, Motomu Takatsu, Toshiyuki Miyazawa, Takayuki Shibasaki, Yasuhiro Watanabe, Kazuya Takemoto, and Hirotaka Tamura. 2020. Dig- ital Annealer for high-speed solving of combinato- rial optimization problems and its applications. In Proceedings of the 25th Asia and South Pacific De- sign Automation Conference, pages 667--672.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Branch-and-cut algorithms for combinatorial optimization problems. Handbook of Applied Optimization",
"authors": [
{
"first": "John",
"middle": [
"E"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "65--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John E. Mitchell. 2002. Branch-and-cut algorithms for combinatorial optimization problems. Handbook of Applied Optimization, pages 65-77.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Noboru Yoneoka, and Toshiyuki Miyazawa. 2021. Description: Third generation Digital Annealer technology",
"authors": [
{
"first": "Hiroshi",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Junpei",
"middle": [],
"last": "Koyama",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroshi Nakayama, Junpei Koyama, Noboru Yoneoka, and Toshiyuki Miyazawa. 2021. Description: Third generation Digital Annealer technology.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "An Ising computer based on simulated quantum annealing by path integral Monte Carlo method",
"authors": [
{
"first": "Takuya",
"middle": [],
"last": "Okuyama",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Masanao",
"middle": [],
"last": "Yamaoka",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 IEEE International Conference on Rebooting Computing",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takuya Okuyama, Masato Hayashi, and Masanao Ya- maoka. 2017. An Ising computer based on sim- ulated quantum annealing by path integral Monte Carlo method. In Proceedings of the 2017 IEEE International Conference on Rebooting Computing, pages 1-6.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A branch-and-cut algorithm for the resolution of largescale symmetric traveling salesman problems",
"authors": [
{
"first": "Manfred",
"middle": [],
"last": "Padberg",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Rinaldi",
"suffix": ""
}
],
"year": 1991,
"venue": "SIAM Review",
"volume": "33",
"issue": "1",
"pages": "60--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manfred Padberg and Giovanni Rinaldi. 1991. A branch-and-cut algorithm for the resolution of large- scale symmetric traveling salesman problems. SIAM Review, 33(1):60-100.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The penn discourse treebank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 6th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "2961--2968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The penn discourse treebank 2.0. In Proceedings of the 6th International Conference on Language Resources and Evaluation, pages 2961- 2968.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Application of Digital Annealer for faster combinatorial optimization",
"authors": [
{
"first": "Masataka",
"middle": [],
"last": "Sao",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Yuuichi",
"middle": [],
"last": "Musha",
"suffix": ""
},
{
"first": "Akihiro",
"middle": [],
"last": "Utsunomiya",
"suffix": ""
}
],
"year": 2019,
"venue": "FUJITSU SCIENTIFIC & TECHNICAL JOURNAL",
"volume": "55",
"issue": "2",
"pages": "45--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masataka Sao, Hiroyuki Watanabe, Yuuichi Musha, and Akihiro Utsunomiya. 2019. Application of Dig- ital Annealer for faster combinatorial optimization. FUJITSU SCIENTIFIC & TECHNICAL JOURNAL, 55(2):45-51.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "SMART journalism: Personalizing, summarizing, and recommending financial economic news. The Algorithmic Personalization and News (APEN18) Workshop at ICWSM",
"authors": [
{
"first": "Maya",
"middle": [],
"last": "Sappelli",
"suffix": ""
},
{
"first": "Dung",
"middle": [
"Manh"
],
"last": "Chu",
"suffix": ""
},
{
"first": "Bahadir",
"middle": [],
"last": "Cambel",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Graus",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Bressers",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "18",
"issue": "",
"pages": "1--3",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maya Sappelli, Dung Manh Chu, Bahadir Cambel, David Graus, and Philippe Bressers. 2018. SMART journalism: Personalizing, summarizing, and recom- mending financial economic news. The Algorithmic Personalization and News (APEN18) Workshop at ICWSM, 18(5):1-3.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A spoken dialogue system for enabling information behavior of various intention levels",
"authors": [
{
"first": "Hiroaki",
"middle": [],
"last": "Takatsu",
"suffix": ""
},
{
"first": "Ishin",
"middle": [],
"last": "Fukuoka",
"suffix": ""
},
{
"first": "Shinya",
"middle": [],
"last": "Fujie",
"suffix": ""
},
{
"first": "Yoshihiko",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Tetsunori",
"middle": [],
"last": "Kobayashi",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of the Japanese Society for Artificial Intelligence",
"volume": "33",
"issue": "1",
"pages": "1--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroaki Takatsu, Ishin Fukuoka, Shinya Fujie, Yoshi- hiko Hayashi, and Tetsunori Kobayashi. 2018. A spoken dialogue system for enabling information be- havior of various intention levels. Journal of the Japanese Society for Artificial Intelligence, 33(1):1- 24.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Personalized extractive summarization for a news dialogue system",
"authors": [
{
"first": "Hiroaki",
"middle": [],
"last": "Takatsu",
"suffix": ""
},
{
"first": "Mayu",
"middle": [],
"last": "Okuda",
"suffix": ""
},
{
"first": "Yoichi",
"middle": [],
"last": "Matsuyama",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Honda",
"suffix": ""
},
{
"first": "Shinya",
"middle": [],
"last": "Fujie",
"suffix": ""
},
{
"first": "Tetsunori",
"middle": [],
"last": "Kobayashi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 8th IEEE Spoken Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "1044--1051",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroaki Takatsu, Mayu Okuda, Yoichi Matsuyama, Hi- roshi Honda, Shinya Fujie, and Tetsunori Kobayashi. 2021. Personalized extractive summarization for a news dialogue system. In Proceedings of the 8th IEEE Spoken Language Technology Workshop, pages 1044-1051.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Representing discourse coherence: A corpus-based study",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Gibson",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "2",
"pages": "249--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian Wolf and Edward Gibson. 2005. Representing discourse coherence: A corpus-based study. Compu- tational Linguistics, 31(2):249-287.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Discourse-aware neural extractive text summarization",
"authors": [
{
"first": "Jiacheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5021--5031",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Discourse-aware neural extractive text sum- marization. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 5021-5031.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A 20k-spin Ising chip to solve combinatorial optimization problems with CMOS annealing",
"authors": [
{
"first": "Masanao",
"middle": [],
"last": "Yamaoka",
"suffix": ""
},
{
"first": "Chihiro",
"middle": [],
"last": "Yoshimura",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Takuya",
"middle": [],
"last": "Okuyama",
"suffix": ""
},
{
"first": "Hidetaka",
"middle": [],
"last": "Aoki",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Mizuno",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Journal of Solid-State Circuits",
"volume": "51",
"issue": "1",
"pages": "303--309",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masanao Yamaoka, Chihiro Yoshimura, Masato Hayashi, Takuya Okuyama, Hidetaka Aoki, and Hi- royuki Mizuno. 2016. A 20k-spin Ising chip to solve combinatorial optimization problems with CMOS annealing. IEEE Journal of Solid-State Circuits, 51(1):303-309.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Summarize what you are interested in: An optimization framework for interactive personalized summarization",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1342--1351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Yan, Jian-Yun Nie, and Xiaoming Li. 2011. Sum- marize what you are interested in: An optimization framework for interactive personalized summariza- tion. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1342-1351.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "SciDTB: Discourse dependency treebank for scientific abstracts",
"authors": [
{
"first": "An",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "444--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "An Yang and Sujian Li. 2018. SciDTB: Discourse de- pendency treebank for scientific abstracts. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics, pages 444-449.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Dependency-based discourse parser for single-document summarization",
"authors": [
{
"first": "Yasuhisa",
"middle": [],
"last": "Yoshida",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1834--1839",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasuhisa Yoshida, Jun Suzuki, Tsutomu Hirao, and Masaaki Nagata. 2014. Dependency-based dis- course parser for single-document summarization. In Proceedings of the 2014 Conference on Empiri- cal Methods in Natural Language Processing, pages 1834-1839.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Example answers of the interest questionnaire",
"authors": [],
"year": null,
"venue": "Table",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Table 4: Example answers of the interest questionnaire. Interest level in the topic is assigned to the title [0].",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Number of problems for a given number of sentences when N = 3 Number of problems for a given number of sentences when N = 6",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Processing time for each number of sentences (N = 3, T = 270)",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Processing time for each number of sentences (N = 6, T = 450)",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Tree width distribution, which is the number of leaf nodes for each article. Average tree width per article is 7.5 sentences.",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "Frequencies of discourse relations in the dataset: Start = 1,221; Result = 400; Cause = 691; Background = 1,343; Correspondence = 851; Contrast = 638; Topic Change = 220; Example = 709; Conclusion = 920; and Supplement = 14,609.",
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"text": "Frequencies of chunks in the dataset. There are 231 hard chunks and 699 soft chunks. Average number of sentences per chunk is 2.15.",
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"uris": null,
"text": "Percentage of each interest level calculated from the questionnaire results, which includes 15,042 articles and 268,509 sentences.",
"num": null,
"type_str": "figure"
},
"FIGREF7": {
"uris": null,
"text": "Average interest level of the sentences for each depth of the discourse dependency tree.",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"content": "<table><tr><td>T</td><td>Maximum summary length (seconds)</td></tr><tr><td>L</td><td>Maximum bias in the number of extracting sentences between documents</td></tr><tr><td>f k</td><td/></tr></table>",
"html": null,
"type_str": "table",
"text": "Variable definitions in the interesting sentence extraction method x ki Whether sentence s ki is selected y kij Whether both s ki and s kj are selected b u ki Degree of user u's interest in s ki r kij Similarity between s ki and s kj t ki Utterance time of s ki (seconds)"
},
"TABREF2": {
"num": null,
"content": "<table><tr><td/><td colspan=\"5\">Parameters of the Digital Annealer</td><td colspan=\"3\">Efficiency of information transmission</td><td>Processing</td></tr><tr><td/><td>\u03bb1</td><td>\u03bb2</td><td>\u03bb3</td><td colspan=\"6\">\u03bb4 #iteration Coverage Exclusion rate EoIT1 EoIT2 time (sec)</td></tr><tr><td>CPU-CBC (30 threads)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0.687</td><td>0.726</td><td>0.672 0.696</td><td>18.9</td></tr><tr><td>DAU-AM</td><td colspan=\"4\">10 2 10 6 10 10 10</td><td>10 3</td><td>0.638</td><td>0.612</td><td>0.584 0.593</td><td>0.0570</td></tr><tr><td>DAU-PTM</td><td colspan=\"4\">10 2 10 5 10 9 10</td><td>10 3 10 4</td><td>0.656 0.669</td><td>0.637 0.661</td><td>0.608 0.618 0.627 0.639</td><td>0.245 1.76</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Information transmission efficiency of the summaries (N = 3, T = 270)"
},
"TABREF3": {
"num": null,
"content": "<table><tr><td/><td colspan=\"5\">Parameters of the Digital Annealer</td><td colspan=\"3\">Efficiency of information transmission</td><td>Processing</td></tr><tr><td/><td>\u03bb1</td><td>\u03bb2</td><td>\u03bb3</td><td colspan=\"6\">\u03bb4 #iteration Coverage Exclusion rate EoIT1 EoIT2 time (sec)</td></tr><tr><td>CPU-CBC (30 threads)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>0.638</td><td>0.667</td><td>0.639 0.651</td><td>102</td></tr><tr><td>DAU-AM</td><td colspan=\"4\">10 2 10 6 10 10 10</td><td>10 3</td><td>0.538</td><td>0.585</td><td>0.552 0.568</td><td>0.199</td></tr><tr><td>DAU-PTM</td><td colspan=\"4\">10 2 10 5 10 9 10</td><td>10 3 4</td><td>0.553 0.570</td><td>0.577 0.591</td><td>0.556 0.565 0.572 0.580</td><td>0.749 6.44</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Information transmission efficiency of the summaries (N = 6, T = 450)"
}
}
}
}